# Data-driven MHD simulation of successive solar plasma eruptions Takafumi Kaneko Institute for Space-Earth Environmental Research, Nagoya University, Furo-cho, Chikusa-ku, Nagoya, Aichi, 464-8601, Japan Sung-Hong Park Institute for Space-Earth Environmental Research, Nagoya University, Furo-cho, Chikusa-ku, Nagoya, Aichi, 464-8601, Japan Kanya Kusano Institute for Space-Earth Environmental Research, Nagoya University, Furo-cho, Chikusa-ku, Nagoya, Aichi, 464-8601, Japan Takafumi Kaneko <EMAIL_ADDRESS> (Received November 12, 2020; Revised January 10, 2021; Accepted ) ###### Abstract Solar flares and plasma eruptions are sudden releases of magnetic energy stored in the plasma atmosphere. To understand the physical mechanisms governing their occurrence, the three-dimensional magnetic fields from the photosphere up to the corona must be studied. The solar photospheric magnetic fields are observable, whereas the coronal magnetic fields cannot be measured. One method for inferring coronal magnetic fields is to perform data-driven simulations, in which time-series observational data of the photospheric magnetic fields are incorporated into the bottom boundary of magnetohydrodynamic (MHD) simulations. We developed a data-driven method in which the temporal evolution of the observed vector magnetic field can be reproduced at the bottom boundary of the simulation by introducing an inverted velocity field. This velocity field is obtained by inversely solving the induction equation and applying an appropriate gauge transformation. Using this method, we performed a data-driven simulation of successive small eruptions observed by the Solar Dynamics Observatory and the Solar Magnetic Activity Research Telescope in November 2017. The simulation well reproduced the converging motion between opposite-polarity magnetic patches and demonstrated the successive formation and eruption of helical flux ropes. 
Sun: flares—Sun: filaments, prominences—Sun: corona—Sun: photosphere journal: ApJ ## 1 Introduction Solar flares are sudden releases of energy from the sun. Flares are often accompanied by plasma eruptions such as prominence eruptions and coronal mass ejections. Flares and plasma eruptions are caused by the release of magnetic energy stored in the plasma atmosphere. Evidence of flares and plasma eruptions has also been found in other sun-like stars (Osten et al., 2005; Pandey & Singh, 2008; Maehara et al., 2012; Notsu et al., 2019; Namekata et al., 2020). The sun is the only star whose photospheric magnetic fields can be observed with high spatio-temporal resolution. From solar observations and theoretical studies based on magnetohydrodynamic (MHD) theory, we can infer the detailed magnetic activities leading to the sudden energy release, which may be common to other sun-like stars. In the current understanding of solar physics, magnetic reconnection and MHD instabilities (Hood & Priest, 1979; Kliem & Török, 2006; Ishiguro & Kusano, 2017) are the essential mechanisms leading to the magnetic energy release. In solar observations, the photospheric magnetic fields change in time via advection by convective flows and via magnetic flux emerging from the deeper convection zone. To reveal the mechanisms of these explosive events and to develop methodologies to predict them, previous studies attempted to evaluate the possibility of magnetic reconnection and the critical conditions of MHD instabilities (Amari et al., 2014; Kusano et al., 2020). For this, information on the three-dimensional magnetic field from the photosphere up to the corona is required. The photospheric magnetic fields can be observed, whereas the coronal magnetic fields cannot be measured directly. 
Previous studies developed numerical methods to extrapolate the three-dimensional coronal magnetic fields from the two-dimensional observational vector magnetic fields in the photosphere, e.g., the nonlinear force-free field (NLFFF) approximation (reviewed by Inoue, 2016). There have also been attempts at data-constrained simulations, in which the NLFFF approximation was used as the initial condition of an MHD simulation (Amari et al., 2014; Muhamad et al., 2017). In these simulations, the photospheric magnetic fields after temporal integration did not always reproduce the observed ones. Another approach is the data-driven simulation, in which time-series photospheric magnetic data are incorporated into the bottom boundary of MHD simulations. The expected advantage of the data-driven methods, compared with the NLFFF or data-constrained models, is that the results are free from the assumption of a force-free field. We can follow a more realistic temporal evolution of the coronal magnetic fields as a response to the temporal change of the observed photospheric magnetic fields. Several data-driven MHD simulations have been performed, and their results agree with some aspects of the observations, e.g., the morphology of the coronal magnetic loops (Cheung & DeRosa, 2012; Cheung et al., 2015; Jiang et al., 2016; Hayashi et al., 2018, 2019; Pomoell et al., 2019; Guo et al., 2019; He et al., 2020). In contrast, a recent comparative study by Toriumi et al. (2020) reported that the numerical solutions obtained from different data-driven simulations using the same time-series magnetic data differed from each other. The data-driven methods must be improved further to resolve these discrepancies. In this study, we focus on the velocity fields at the bottom boundary of the MHD simulation. In several data-driven methods, the velocity fields at the bottom boundary were set to zero, leading to a physical inconsistency between the velocity fields and the electric or magnetic fields in terms of the induction equation. 
A recent study by Hayashi et al. (2019) combined the velocity fields derived from the differential affine velocity estimator for vector magnetograms (DAVE4VM; Schuck, 2006) with their own data-driven method (denoted as the $v$-driven method in their paper). They confirmed that the frozen-in condition between the plasma and the magnetic fields was well established. This is because the DAVE4VM-inferred velocity works as the bottom boundary condition of the equation of motion in the MHD simulation, providing a motion of the plasma coherent with the time evolution of the observed magnetic fields. Another recent study by Guo et al. (2019) reported that the numerical results of data-driven simulations with and without the velocity fields inferred by DAVE4VM were similar in terms of the morphology and propagation path of the erupted flux ropes. They argued that eruption inevitably happens if the initial condition of the MHD simulation is already close to the dynamic eruptive phase. Their conclusion was that the change of the bottom boundary condition had a subtle effect on the onset mechanism, while it would affect the magnetic energy build-up before the eruptive phase. Since the observational targets and many aspects of the numerical techniques differed between Hayashi et al. (2019) and Guo et al. (2019), it is fairly difficult to compare their results. One concern regarding these numerical techniques is that the DAVE4VM-inferred velocity is not always consistent with the inverted electric fields or with the time evolution of the observed magnetic fields used as the bottom boundary condition of the MHD simulations. In the present study, we implemented an inversion technique of the induction equation directly in our simulation code, and we propose a method to derive the velocity fields that reproduce the observed time evolution of the magnetic field as a numerical solution of the MHD equations. 
To confirm the feasibility of the method, we applied it to the successive small eruptive events that occurred in November 2017. The observations of the eruptive events are described in Section 2. The numerical method, including the velocity inversion, is described in Section 3. The numerical results are shown in Section 4. We summarize and discuss the results in Section 5. ## 2 Observation The Solar Dynamics Doppler Imager (SDDI; Ichimoto et al., 2017) installed on the Solar Magnetic Activity Research Telescope (SMART; UeNo et al., 2004) at Hida Observatory of Kyoto University provides full-disk solar images at multiple wavelengths around the H$\mathrm{\alpha}$ 6563 Å line with a 0.25 Å bandpass. The top and bottom panels of Figure 1 show H$\mathrm{\alpha}$ blue wing images at $-$0.5 Å from the line center and H$\mathrm{\alpha}$ line center images, respectively, in six snapshots taken from SMART/SDDI observations on November 4–5, 2017, which demonstrate two successive eruptions. As indicated by the arrow in panel (a2) of Figure 1, the first eruption event started at 23:40 UT on November 4 and appeared as a compact dark feature with a size of $\sim$10″ in H$\mathrm{\alpha}-$0.5 Å. This dark feature (i.e., a so-called H$\mathrm{\alpha}$ upflow event), which is only visible in the H$\mathrm{\alpha}$ blue wing, displays an upward motion and is known to be often associated with magnetic reconnection (Wang et al., 1998; Chae et al., 1998). In the sequence of H$\mathrm{\alpha}-$0.5 Å images, the upflow features were found to increase in size as they erupted in the south-west direction (see panel (a3)). During the eruption, an enhanced brightening in H$\mathrm{\alpha}$ was observed near the magnetic polarity inversion line (PIL), where two opposite-polarity magnetic patches approached each other. 
Approximately 3 h after the first eruption, another eruption event began at 02:50 UT on November 5, exhibiting characteristics similar to those of the first eruption event in terms of the south-west eruption direction and the H$\mathrm{\alpha}$ brightening. In the second eruption, by contrast, we note that an inverse S-shaped structure is clearly seen in the H$\mathrm{\alpha}$ line center (refer to panel (b6)). Figure 1: Two successive eruption events observed in H$\mathrm{\alpha}-$0.5 Å images (top panels) and H$\mathrm{\alpha}$ line center images (bottom panels). The arrows in panels (a2) and (a3) indicate the first event, while those in panels (a5) and (a6) indicate the second event. In each panel, the red and blue contours represent $\pm$50 G of the vertical magnetic field $B_{z}$. The yellow box in panel (b1) represents the region used as the bottom boundary in our MHD simulation. In this study, we used a sequence of photospheric vector magnetograms obtained at 12-min cadence by the Helioseismic and Magnetic Imager (HMI; Schou et al., 2012) onboard the Solar Dynamics Observatory (SDO; Pesnell et al., 2012). The pixel size of the HMI vector magnetograms was $\sim$360 km. Figure 2 shows two co-aligned images of the vertical ($B_{z}$) and horizontal ($B_{x}$ and $B_{y}$) components of the photospheric magnetic field at 22:58 UT on November 4, 2017 (left column) and at 00:58 UT on November 5, 2017 (right column). The field-of-view of the co-aligned magnetic field images is marked by the yellow box in panel (b1) of Figure 1, which contains the magnetic source region that produced the two eruptions. The source region consists of two main opposite-polarity magnetic patches that, in general, showed a converging motion as well as a decrease in the magnetic flux of both polarities over a 5 h interval around the times of the two eruptions. 
Moreover, as shown in panels (d) and (f) of Figure 2, the strengths of the horizontal components $B_{x}$ and $B_{y}$ are found to increase after the first eruption. Figure 2: Snapshots of the time-series magnetic field data observed by SDO/HMI. Panels (a), (c), and (e) show snapshots at 22:58 UT on November 4, 2017 (corresponding to $t=0$ in the simulation). Panels (b), (d), and (f) show snapshots at 00:58 UT on November 5, 2017 (corresponding to $t=120~{}\mathrm{min}$ in the simulation). The field-of-view of these figures is represented by the yellow box in Fig. 1 (b1). ## 3 Numerical Method We numerically solved the zero-beta MHD equations as follows: $\frac{\partial\rho}{\partial t}+\nabla\cdot\left(\rho\mbox{\boldmath$v$}\right)=0,$ (1) $\frac{\partial\left(\rho\mbox{\boldmath$v$}\right)}{\partial t}+\nabla\cdot\left(\rho\mbox{\boldmath$v$}\mbox{\boldmath$v$}+\frac{B^{2}}{8\pi}\mbox{\boldmath$I$}-\frac{\mbox{\boldmath$B$}\mbox{\boldmath$B$}}{4\pi}\right)=0,$ (2) $\frac{\partial\mbox{\boldmath$B$}}{\partial t}=\nabla\times(\mbox{\boldmath$v$}\times\mbox{\boldmath$B$}-\eta\mbox{\boldmath$J$}),$ (3) $\mbox{\boldmath$J$}=\frac{1}{4\pi}\nabla\times\mbox{\boldmath$B$},$ (4) where $t$, $\rho$, $v$, $B$, $J$, $\eta$, and $I$ denote time, mass density, velocity field, magnetic field, current density, resistivity, and the unit tensor, respectively. We used the anomalous resistivity in the following form: $\eta=0,~{}~{}(J<J_{c}),$ (5) $\eta=\eta_{0}(J/J_{c}-1)^{2},~{}~{}(J\geq J_{c}),$ (6) where $J_{c}=10^{-9}~{}\mathrm{G/cm}$, $\eta_{0}=10^{11}~{}\mathrm{cm^{2}/s}$, and we restrict $\eta\leq\eta_{\mathrm{max}}=10^{11}~{}\mathrm{cm^{2}/s}$. We inverted the velocity fields that reproduce the observed photospheric magnetic fields by solving Eq. (3), and implemented them in the bottom boundary layer of the MHD simulation. The inverted velocity fields were computed in the following three steps. 
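The resistivity model of Eqs. (5)-(6) can be sketched in a few lines; the threshold, coefficient, and cap follow the values quoted above, while the function and constant names are purely illustrative:

```python
import numpy as np

# Anomalous resistivity of Eqs. (5)-(6): zero below the threshold
# current density J_c, quadratic above it, capped at eta_max.
J_C = 1.0e-9       # threshold current density [G/cm]
ETA0 = 1.0e11      # resistivity coefficient [cm^2/s]
ETA_MAX = 1.0e11   # upper limit on eta [cm^2/s]

def anomalous_resistivity(j, j_c=J_C, eta0=ETA0, eta_max=ETA_MAX):
    """Return eta(J) elementwise for a scalar or array of current densities."""
    j = np.asarray(j, dtype=float)
    eta = np.where(j < j_c, 0.0, eta0 * (j / j_c - 1.0) ** 2)
    return np.minimum(eta, eta_max)
```

The cap `eta_max` keeps the diffusive time-step restriction bounded wherever the current density strongly exceeds the threshold.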
Step 1: Inversion of the induction equation. We solved an inverse problem of the induction equation, in principle, as $\frac{\mbox{\boldmath$B$}_{\mathrm{obs}}^{n+1}-\mbox{\boldmath$B$}_{\mathrm{obs}}^{n}}{\tau}=-\nabla\times\mbox{\boldmath$E$}^{I},$ (7) where $\mbox{\boldmath$B$}_{\mathrm{obs}}^{n}$, $\tau$, and $\mbox{\boldmath$E$}^{I}$ represent the $n$-th snapshot in the time-series data of the observed magnetic fields, the temporal cadence of the HMI observation, and an inverted electric field, respectively. As pointed out in a previous study (Kusano et al., 2002), we cannot solve this inverse problem completely because Eq. (7) includes the derivative in the $z$-direction (the direction normal to the photosphere), whereas the observational magnetic data have only two-dimensional information in the $xy$-plane (corresponding to the solar surface). Several methods have been proposed to resolve this problem. In this study, we adopt the poloidal-toroidal decomposition method (Fisher et al., 2010) to obtain $\mbox{\boldmath$E$}^{I}$. The advantage of this method is that the vertical derivative of the electric fields can be estimated to some extent. However, the complete solution cannot be obtained even by this method. We carried out the inversion of the electric fields between the simulated magnetic fields and the observed magnetic fields within the observational time cadence: $\frac{\mbox{\boldmath$B$}_{\mathrm{obs}}^{n+1}-\mbox{\boldmath$B$}_{\mathrm{sim}}^{n+m/M}}{(1-m/M)\tau}=-\nabla\times\mbox{\boldmath$E$}^{I},$ (8) where $\mbox{\boldmath$B$}_{\mathrm{sim}}$ denotes the simulated magnetic fields, and $m=1,2,...,M-1$ represents the $m$-th sub-snapshot between the $n$-th and the $(n+1)$-th observational snapshots. We adopted $M=6$ in this study; hence, the inversion was performed every 2 min during the 12-min observational cadence of HMI. 
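The sub-stepped target rate of Eq. (8) amounts to the following computation. This hypothetical helper only evaluates the left-hand side of Eq. (8), i.e., the rate $-\nabla\times\mbox{\boldmath$E$}^{I}$ that the inversion must match; the PTD inversion of Fisher et al. (2010) itself is not reproduced here:

```python
import numpy as np

# Piecewise inversion target of Eq. (8): at sub-step m of M, the
# inverted electric field must carry the current simulated field
# B_sim toward the next observed snapshot B_obs_next over the
# remaining fraction (1 - m/M) of the 12-min HMI cadence tau.
TAU = 12 * 60.0   # HMI cadence [s]
M_SUB = 6         # sub-snapshots per cadence (2-min inversion interval)

def target_dBdt(b_sim, b_obs_next, m, M=M_SUB, tau=TAU):
    """Rate -curl(E^I) that the inversion must match at sub-step m."""
    remaining = (1.0 - m / M) * tau
    return (np.asarray(b_obs_next) - np.asarray(b_sim)) / remaining
```

Because the remaining interval shrinks at each sub-step, any drift of the simulated field away from the observations is pulled back before the next HMI snapshot.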
This piecewise inversion technique improves the fidelity with which the observed magnetic fields are reproduced, compared with inverting the electric fields only once between $B_{\mathrm{obs}}^{n}$ and $B_{\mathrm{obs}}^{n+1}$. Step 2: Gauge transformation. The induction equation is invariant under a gauge transformation of the electric field; we can add the gradient of an arbitrary scalar potential $\phi$ in the following form: $\mbox{\boldmath$E$}=\mbox{\boldmath$E$}^{I}-\nabla\phi,$ (9) where $E$ represents the electric field after the gauge transformation. However, as demonstrated by Pomoell et al. (2019), the results of data-driven simulations are influenced by the choice of gauge. We adopted a gauge transformation that satisfies $\mbox{\boldmath$E$}\cdot\mbox{\boldmath$B$}=0$, using the iterative approach of Fisher et al. (2010). The motivation for using this gauge transformation is as follows: the electric fields defined by $\mbox{\boldmath$E$}=-\mbox{\boldmath$v$}\times\mbox{\boldmath$B$}$ are always perpendicular to the magnetic fields. In contrast, the inverted electric fields $\mbox{\boldmath$E$}^{I}$ before the gauge transformation usually contain a nonzero $\mbox{\boldmath$E$}_{\parallel}$ (the component parallel to $B$). The nonzero $\mbox{\boldmath$E$}_{\parallel}$ can cause a mismatch of the electric fields between the bottom boundary layer and the main simulation domain, because the electric fields in the main simulation domain are computed as $\mbox{\boldmath$E$}=-\mbox{\boldmath$v$}\times\mbox{\boldmath$B$}$. Thus, we regard $\mbox{\boldmath$E$}\cdot\mbox{\boldmath$B$}=0$ as a necessary condition for the boundary electric fields in data-driven MHD simulations. Note that this assumption remains valid even when the resistive term $\eta\mbox{\boldmath$J$}$ is introduced, because the magnitude of $\eta\mbox{\boldmath$J$}$ is constrained to be much smaller than that of $-\mbox{\boldmath$v$}\times\mbox{\boldmath$B$}$ in MHD simulations. 
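As an illustration of Step 2, the condition $\mbox{\boldmath$E$}\cdot\mbox{\boldmath$B$}=0$ after subtracting $\nabla\phi$ can be imposed on a small doubly periodic 2D grid by least squares. This is only a stand-in sketch for the iterative scheme of Fisher et al. (2010), restricted to the horizontal components, and all function names are ours:

```python
import numpy as np

def grad_ops(n, h=1.0):
    """Centered-difference gradient operators on an n x n periodic grid,
    returned as dense matrices (fine for the tiny grids of this sketch)."""
    N = n * n
    Gx = np.zeros((N, N))
    Gy = np.zeros((N, N))
    idx = lambda i, j: i * n + j
    for i in range(n):
        for j in range(n):
            k = idx(i, j)
            Gx[k, idx(i, (j + 1) % n)] += 0.5 / h
            Gx[k, idx(i, (j - 1) % n)] -= 0.5 / h
            Gy[k, idx((i + 1) % n, j)] += 0.5 / h
            Gy[k, idx((i - 1) % n, j)] -= 0.5 / h
    return Gx, Gy

def gauge_transform(Ex, Ey, Bx, By, h=1.0):
    """Subtract grad(phi) from E so that E.B = 0 in the least-squares sense."""
    n = Ex.shape[0]
    Gx, Gy = grad_ops(n, h)
    # Each row of A evaluates (grad phi).B at one cell; b holds E^I.B there.
    A = np.diag(Bx.ravel()) @ Gx + np.diag(By.ravel()) @ Gy
    b = (Ex * Bx + Ey * By).ravel()
    phi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return Ex - (Gx @ phi).reshape(n, n), Ey - (Gy @ phi).reshape(n, n)
```

If $\mbox{\boldmath$E$}^{I}$ happens to be a pure gradient, the residual parallel component vanishes to machine precision; in general the least-squares solve only minimizes it, mirroring the role of the iterative relaxation in the actual method.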
Step 3: Derivation of velocity fields. We compute the velocity fields as follows: $\mbox{\boldmath$v$}^{I}=\frac{\mbox{\boldmath$E$}\times\mbox{\boldmath$B$}}{B^{2}},$ (10) where $\mbox{\boldmath$v$}^{I}$ represents the inverted velocity field. We substituted $\mbox{\boldmath$B$}_{\mathrm{sim}}$ for $B$ in Eq. (10). $\mbox{\boldmath$v$}^{I}$ was updated at every numerical time step. In Step 3, if $E$ contained $\mbox{\boldmath$E$}_{\parallel}$, $\mbox{\boldmath$v$}^{I}$ would lose the information of $\mbox{\boldmath$E$}_{\parallel}$ (because $\mbox{\boldmath$E$}_{\parallel}\times\mbox{\boldmath$B$}=0$). In our procedure, however, $\mbox{\boldmath$E$}_{\parallel}$ has already been eliminated by the gauge transformation in Step 2. The inductive electric fields entering the right-hand side of Eq. (3) are then calculated as follows: $\mbox{\boldmath$v$}^{I}\times\mbox{\boldmath$B$}=\frac{(\mbox{\boldmath$E$}\times\mbox{\boldmath$B$})\times\mbox{\boldmath$B$}}{B^{2}}=-\mbox{\boldmath$E$}+\frac{(\mbox{\boldmath$E$}\cdot\mbox{\boldmath$B$})\mbox{\boldmath$B$}}{B^{2}}=-\mbox{\boldmath$E$}.$ (11) Thus, we expect that the observed photospheric magnetic fields are reproduced as a self-consistent numerical solution of Eq. (3) solely by introducing $\mbox{\boldmath$v$}^{I}$ at the bottom boundary. To reduce the observational noise in areas of weak magnetic field, which can damage the inversion of the electric fields and the gauge transformation, we applied an FFT-based low-pass filter to the original magnetic data. The practical spatial resolution was 8 times lower than the original spatial resolution of HMI. The simulation domain is a rectangular box. Its Cartesian coordinates $(x,y,z)$ span $0<x<89.6~{}\mathrm{Mm}$, $0<y<89.6~{}\mathrm{Mm}$, and $-1.44~{}\mathrm{Mm}<z<73.8~{}\mathrm{Mm}$, respectively, where the $xy$-plane is the horizontal plane parallel to the solar surface, and the $z$-direction represents the height. 
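The drift formula of Eq. (10) and the identity of Eq. (11) are easy to verify numerically for a single vector; the sample field values below are arbitrary:

```python
import numpy as np

# Step 3: inverted velocity v^I = (E x B) / B^2, Eq. (10). When the
# gauge of Step 2 has removed E_parallel, v^I x B recovers -E exactly,
# as in Eq. (11).
def inverted_velocity(E, B):
    """E x B drift velocity, Eq. (10)."""
    return np.cross(E, B) / np.dot(B, B)

B = np.array([0.0, 0.0, 100.0])   # purely vertical field [G]
E = np.array([2.0, -1.0, 0.0])    # E with E.B = 0, as after the gauge step
v = inverted_velocity(E, B)
# v x B equals -E because E is perpendicular to B.
```

If a parallel component were added to `E`, `v` would be unchanged and `np.cross(v, B)` would recover only the perpendicular part, which is exactly why the gauge transformation of Step 2 is needed first.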
We adopted uniform grid spacing in every direction; the grid size was $360~{}\mathrm{km}$, corresponding to the spatial resolution of HMI. Below the $z=0$ plane, we set five grid layers in the $z$-direction, where Eq. (3) was numerically solved by introducing $\mbox{\boldmath$v$}^{I}$. Note that only the horizontal derivatives were calculated in the lowest two grid layers. The $z=-360~{}\mathrm{km}$ plane (one grid below $z=0$) is the height at which the observed magnetic fields were expected to be reproduced. The method of Fisher et al. (2010) derives $\partial_{z}E_{x}$ and $\partial_{z}E_{y}$. Assuming $\partial_{z}E_{z}=0$, we linearly extrapolated the inverted electric fields in the $z$-direction below $z=0$ and computed $\mbox{\boldmath$v$}^{I}$ with the local magnetic fields using Eq. (10). The density was fixed to the initial values below $z=0$. We adopted a free boundary condition at the top boundary and fixed conditions at the side boundaries. Our simulation included only the corona, with a typical density of $10^{9}~{}\mathrm{cm^{-3}}$, which is much smaller than the typical photospheric density of $10^{17}~{}\mathrm{cm^{-3}}$. To suppress the unrealistically fast Alfvén speed, we reduced the magnetic field strength to be 10 times smaller than the original observed values. The same modification was also adopted in Jiang et al. (2016). The initial condition was a potential field computed by the Fourier expansion method (Priest, 2014) from the vertical magnetic field at 22:58 UT on November 4, 2017, observed by HMI. The initial density was given by $\rho=\rho_{0}\exp[-z/H]$, where $\rho_{0}=3.2\times 10^{-15}~{}\mathrm{g~cm^{-3}}$ and $H=3.0\times 10^{4}~{}\mathrm{km}$. The numerical scheme was a four-step Runge-Kutta method (Jameson, 2017) combined with a fourth-order central finite difference method with artificial viscosity (Rempel, 2014). 
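A minimal sketch of the Fourier-expansion potential field used for the initial condition: in a current-free, doubly periodic box, each horizontal Fourier mode of $B_{z}$ decays with height as $\exp(-kz)$, with $k=\sqrt{k_{x}^{2}+k_{y}^{2}}$. The grid sizes and function name here are illustrative, not those of the actual run:

```python
import numpy as np

def potential_bz(bz0, dx, z):
    """Extrapolate the vertical field B_z from z=0 to height z,
    assuming a current-free (potential) field above a periodic boundary."""
    ny, nx = bz0.shape
    kx = 2 * np.pi * np.fft.fftfreq(nx, d=dx)
    ky = 2 * np.pi * np.fft.fftfreq(ny, d=dx)
    k = np.sqrt(kx[None, :] ** 2 + ky[:, None] ** 2)
    # Each Fourier mode decays as exp(-k z); k=0 (mean flux) is unchanged.
    bz_hat = np.fft.fft2(bz0) * np.exp(-k * z)
    return np.real(np.fft.ifft2(bz_hat))
```

The horizontal components follow from the same scalar potential; only the vertical component is shown because that is the observable that seeds the extrapolation.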
## 4 Results Figure 3 shows snapshots of the magnetic fields in the bottom boundary at the height $z=-360~{}\mathrm{km}$, where the observed magnetic fields were expected to be reproduced. Note that the magnetic fields shown in Fig. 3 are the numerically obtained solutions, not merely smoothed observational data. Compared with Fig. 2, we confirmed that the converging motion of the opposite-polarity magnetic patches and the intrusion of the negative patch into the positive patch were well reproduced in our simulation. The small structures were smoothed out by the low-pass filter used for the inversion and by the anomalous resistivity during the temporal integration of the MHD simulation. The structural similarity (SSIM) values (Wang et al., 2004) between the raw observational data and the low-pass filtered observational data at $t=120~{}\mathrm{min}$ were 0.22, 0.12, and 0.58 for the components $B_{x}$, $B_{y}$, and $B_{z}$, respectively, and the SSIM values between the low-pass filtered observational data and the simulated data were 0.82, 0.61, and 0.93 for $B_{x}$, $B_{y}$, and $B_{z}$, respectively. We used the low-pass filtered data for the calculation of $\mbox{\boldmath$v$}^{I}$. We confirmed that the inverted velocities work well, because the given magnetic fields were reproduced with high accuracy. Figure 3: Snapshots of the time evolution of the magnetic fields at the bottom boundary. Panels (a), (c), and (e) show snapshots at $t=0$ (corresponding to 22:58 UT on November 4, 2017, in the observation). Panels (b), (d), and (f) show snapshots at $t=120~{}\mathrm{min}$ (corresponding to 00:58 UT on November 5, 2017, in the observation). As the opposite-polarity patches converged and the negative magnetic patch intruded further into the positive magnetic patch, the formation and eruption of flux ropes via reconnection occurred successively in our simulation. Figure 4 shows the temporal evolution of the three-dimensional magnetic field in the corona. 
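The SSIM metric of Wang et al. (2004) used for the comparisons above has the following form. This sketch evaluates the single-window (global-statistics) version with the standard constants; the full definition averages the same expression over local sliding windows:

```python
import numpy as np

def ssim_global(x, y, data_range=None):
    """Global-statistics SSIM between two images (Wang et al. 2004 form,
    without the sliding window). data_range defaults to the larger
    peak-to-peak range of the two inputs."""
    x = np.asarray(x, float)
    y = np.asarray(y, float)
    L = data_range if data_range is not None else max(np.ptp(x), np.ptp(y))
    c1, c2 = (0.01 * L) ** 2, (0.03 * L) ** 2   # standard stabilizers
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cxy = ((x - mx) * (y - my)).mean()          # covariance
    return ((2 * mx * my + c1) * (2 * cxy + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```

Identical images score 1, and any mismatch in mean level, contrast, or structure pulls the value below 1, which is why the low values of 0.22 and 0.12 for the filtered horizontal components indicate how much small-scale structure the low-pass filter removed.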
Figures 4 (a) and (b) show snapshots taken when the horizontal magnetic fields were shifted from the potential fields to the observed ones at 22:58 UT on November 4, 2017. Figures 4 (c) and (d) show snapshots of the first eruption, and Figures 4 (e) and (f) show snapshots of the second eruption. We succeeded in reproducing the successive eruptions of flux ropes. In both cases, the flux ropes erupted in the south-west direction. We can interpret the erupting filamentary structures in the observation as manifestations of the erupting flux ropes. Compared with the H$\mathrm{\alpha}$ blue wing images in the observation (see Fig. 1 (a3) and (a6)), the direction of the eruptions in the simulation was in agreement with the observational results. Figure 4: Temporal evolution of the coronal magnetic fields. Lines and the colors on the lines represent the magnetic field lines and the vertical velocity, respectively. Blue and red represent upward and downward velocities, respectively. The grayscale on the bottom surface represents the vertical magnetic fields. Figure 5 (a) shows the temporal evolution of the kinetic energy integrated over the simulation domain above the $z=0$ plane. The rapid increases in kinetic energy at $t\sim 100~{}\mathrm{min}$ and $t\sim 200~{}\mathrm{min}$ represent the eruptions. The onset time of the first eruption in the simulation was delayed by $40~{}\mathrm{min}$ relative to the observation, and that of the second eruption was $40~{}\mathrm{min}$ earlier. Figure 5 (b) shows the temporal evolution of the nonpotential magnetic energy, computed as the difference between the total magnetic energy and the potential magnetic energy. The rapid increase of kinetic energy temporally coincided with the reduction of nonpotential magnetic energy. The kinetic energy was approximately $1\times 10^{25}~{}\mathrm{erg}$, and the released magnetic energy was $2-3\times 10^{25}~{}\mathrm{erg}$. 
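Because the field strength was reduced by a factor of 10 in the run (Section 3) and the zero-beta MHD equations are scale-free, simulated energies translate to real values through a factor of $10^{2}=100$; a trivial sketch, with illustrative names:

```python
# Magnetic energy scales as B^2, so energies from the run (with B
# reduced by a factor of 10) are rescaled by 10^2 = 100 to estimate
# the real released energy.
B_REDUCTION = 10.0

def rescale_energy(e_sim_erg):
    """Convert a simulated energy [erg] to the estimated real value."""
    return e_sim_erg * B_REDUCTION ** 2

# e.g. the ~2-3e25 erg released in the run maps to ~2-3e27 erg.
```

This rescaling is what the right-hand axis of Fig. 5 (a) expresses.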
Note that the magnetic energy can also change owing to the energy flux through the bottom and top boundaries. Because we reduced the magnetic field strength in the simulation to 10 times smaller than the original observed values, we can estimate that the actual energy release was of the order of $10^{27}~{}\mathrm{erg}$ for these eruptive events (the right axis of Fig. 5 (a)). This is because the magnetic energy is proportional to $B^{2}$, and we solved the scale-free MHD equations. Figure 5: The solid, dashed, and dash-dotted lines in panel (a) represent the kinetic energy of all velocity components $\int_{z>0}\frac{1}{2}\rho(v_{x}^{2}+v_{y}^{2}+v_{z}^{2})dV$, of the vertical component $\int_{z>0}\frac{1}{2}\rho v_{z}^{2}dV$, and of the horizontal components $\int_{z>0}\frac{1}{2}\rho(v_{x}^{2}+v_{y}^{2})dV$, respectively. The vertical dotted lines indicate the actual onset times of the eruptions in the observation. The right axis in panel (a) represents the estimated released energy in reality. The solid and dashed lines in panel (b) represent the nonpotential magnetic energy and the kinetic energy of all velocity components, respectively. ## 5 Summary and Discussion We developed a numerical methodology that reproduces the temporal evolution of the observed vector magnetic fields at the bottom boundary of an MHD simulation by introducing the velocity fields inverted from the time-series observational magnetic data (the $\mbox{\boldmath$E$}\times\mbox{\boldmath$B$}$-driven method). The inverted velocity fields were computed by the formula of the $\mbox{\boldmath$E$}\times\mbox{\boldmath$B$}$ drift. The gauge transformation of the electric field satisfying $\mbox{\boldmath$E$}\cdot\mbox{\boldmath$B$}=0$ enables this simple formulation. In the previous data-driven simulations, the velocity fields inferred by DAVE4VM were introduced in addition to the inverted electric fields or the observational magnetic fields (Hayashi et al., 2019; Guo et al., 2019). 
DAVE4VM also inversely solves the induction equation, but it assumes the vertical derivative of the horizontal components of the electric fields to be negligible. In practice, the observed time evolution of the magnetic fields is not always reproduced completely even if the induction equation is integrated with the DAVE4VM-inferred velocity fields. Therefore, the previous data-driven simulations used the DAVE4VM-inferred velocity as the bottom boundary condition of the equation of motion, and the observational magnetic fields or the inverted electric fields for the induction equation. In the present simulation, the observed magnetic fields were reproduced solely by introducing the inverted velocity fields. The bottom boundary conditions for the equation of motion and the induction equation are thus both physically and numerically consistent in our method. We applied our method to the successive eruptive events. Our simulation succeeded in reproducing the successive formation and eruption of flux ropes in response to the temporal evolution of the observed magnetic fields. The inversion technique of the electric fields by Fisher et al. (2010), used in Step 1 described in Section 3, has been widely used in previous studies (Pomoell et al., 2019). The computation of the inverted velocity fields in Step 3 is straightforward. Our method is simple yet capable of reproducing magnetic activities in the solar atmosphere. The energy release of flares in solar active regions, the typical field strength of which is several thousand gauss, is in the range of $10^{28}-10^{32}~{}\mathrm{erg}$. The field strength of the magnetic patches in our study was approximately one hundred gauss. The estimate of the released energy of $10^{27}~{}\mathrm{erg}$ from our simulation result is therefore plausible for small eruptive events. Successive flares and eruptions from active regions are often reported. 
The largest flare in Solar Cycle 24, which reached X9.3 in the GOES X-ray classification, was preceded by an X2.2 flare. These flares were triggered by the continuous intrusion of opposite-polarity magnetic fluxes, according to the analyses by Bamba et al. (2020). In our case, although the spatial size and magnetic field strength were much smaller, the continuous convergence of the opposite magnetic fluxes and the subsequent partial deformation of the PIL were likewise the triggers of the successive eruptions. It is worth noting that the triggering mechanism of successive eruptions is similar over a wide range of spatial and temporal scales. Previous theoretical studies (Kusano et al., 2012, 2020) also support the idea that the partial deformation of PILs can trigger eruptions. Kusano et al. (2020) discussed how a small area of reconnection can lead to MHD instability. We speculate that the local converging motion can create small reconnection-favorable (opposite-polarity or reversed-shear type) regions along the PILs in active regions. In contrast, the origin of the continuous converging motion is still unclear. It is also unclear whether the converging motion is concentrated near PILs or is ubiquitous in the photosphere. Ultra-high-resolution observations by the Daniel K. Inouye Solar Telescope may resolve this issue. A comprehensive understanding of the photospheric motion coupled with the magnetic activity in the convection zone is also required (Cheung et al., 2019; Hotta et al., 2019; Toriumi & Hotta, 2019). The onset times of the eruptions in our simulation differed by $40~{}\mathrm{min}$ from the observed ones. A possible reason is that the small-scale structures were smoothed out in our simulation. As mentioned in Section 4, the SSIM values of the magnetic fields after applying the low-pass filter were already low. Smaller structures may have to be retained as much as possible to reproduce the accurate onset times. 
The anomalous resistivity might also affect the results. A parameter survey on the anomalous resistivity must be conducted in future work. We are grateful to the anonymous referee for the constructive and thoughtful comments. This work was supported by MEXT/JSPS KAKENHI grant number JP15H05814, Project for Solar-Terrestrial Environmental Prediction (PSTEP), and JSPS KAKENHI grant number JP20K14519. This work was partially supported by MEXT as “Program for Promoting Researches on the Supercomputer Fugaku” (Toward a unified view of the universe: from large scale structure to planets, Elucidation of solar and planetary dynamics and evolution). Numerical computations were conducted on a Cray XC50 supercomputer at the Center for Computational Astrophysics (CfCA) of the National Astronomical Observatory of Japan. A part of this study was carried out using the computational resource of the Center for Integrated Data Science, Institute for Space-Earth Environmental Research, Nagoya University. HMI is an instrument on the SDO, a mission for NASA’s Living with a Star program. We are grateful to the staff of Hida Observatory for supporting the instrument development and daily observations. ## References * Amari et al. (2014) Amari, T., Canou, A., & Aly, J.-J. 2014, Nature, 514, 465, doi: 10.1038/nature13815 * Bamba et al. (2020) Bamba, Y., Inoue, S., & Imada, S. 2020, ApJ, 894, 29, doi: 10.3847/1538-4357/ab85ca * Chae et al. (1998) Chae, J., Wang, H., Lee, C.-Y., Goode, P. R., & Schühle, U. 1998, ApJ, 504, L123, doi: 10.1086/311583 * Cheung & DeRosa (2012) Cheung, M. C. M., & DeRosa, M. L. 2012, ApJ, 757, 147, doi: 10.1088/0004-637X/757/2/147 * Cheung et al. (2015) Cheung, M. C. M., De Pontieu, B., Tarbell, T. D., et al. 2015, ApJ, 801, 83, doi: 10.1088/0004-637X/801/2/83 * Cheung et al. (2019) Cheung, M. C. M., Rempel, M., Chintzoglou, G., et al. 2019, Nature Astronomy, 3, 160, doi: 10.1038/s41550-018-0629-3 * Fisher et al. (2010) Fisher, G. H., Welsch, B. T., Abbett, W. 
P., & Bercik, D. J. 2010, ApJ, 715, 242, doi: 10.1088/0004-637X/715/1/242 * Guo et al. (2019) Guo, Y., Xu, Y., Ding, M. D., et al. 2019, ApJ, 884, L1, doi: 10.3847/2041-8213/ab4514 * Hayashi et al. (2018) Hayashi, K., Feng, X., Xiong, M., & Jiang, C. 2018, ApJ, 856, 181, doi: 10.3847/1538-4357/aab787 * Hayashi et al. (2019) —. 2019, ApJ, 871, L28, doi: 10.3847/2041-8213/aaffcf * He et al. (2020) He, W., Jiang, C., Zou, P., et al. 2020, ApJ, 892, 9, doi: 10.3847/1538-4357/ab75ab * Hood & Priest (1979) Hood, A. W., & Priest, E. R. 1979, Sol. Phys., 64, 303, doi: 10.1007/BF00151441 * Hotta et al. (2019) Hotta, H., Iijima, H., & Kusano, K. 2019, Science Advances, 5, 2307, doi: 10.1126/sciadv.aau2307 * Ichimoto et al. (2017) Ichimoto, K., Ishii, T. T., Otsuji, K., et al. 2017, Sol. Phys., 292, 63, doi: 10.1007/s11207-017-1082-7 * Inoue (2016) Inoue, S. 2016, Progress in Earth and Planetary Science, 3, 19, doi: 10.1186/s40645-016-0084-7 * Ishiguro & Kusano (2017) Ishiguro, N., & Kusano, K. 2017, ApJ, 843, 101, doi: 10.3847/1538-4357/aa799b * Jameson (2017) Jameson, A. 2017, AIAA Journal, 1487 * Jiang et al. (2016) Jiang, C., Wu, S. T., Yurchyshyn, V., et al. 2016, ApJ, 828, 62, doi: 10.3847/0004-637X/828/1/62 * Kliem & Török (2006) Kliem, B., & Török, T. 2006, Phys. Rev. Lett., 96, 255002, doi: 10.1103/PhysRevLett.96.255002 * Kusano et al. (2012) Kusano, K., Bamba, Y., Yamamoto, T. T., et al. 2012, ApJ, 760, 31, doi: 10.1088/0004-637X/760/1/31 * Kusano et al. (2020) Kusano, K., Iju, T., Bamba, Y., & Inoue, S. 2020, Science, 369, 587, doi: 10.1126/science.aaz2511 * Kusano et al. (2002) Kusano, K., Maeshiro, T., Yokoyama, T., & Sakurai, T. 2002, ApJ, 577, 501, doi: 10.1086/342171 * Maehara et al. (2012) Maehara, H., Shibayama, T., Notsu, S., et al. 2012, Nature, 485, 478, doi: 10.1038/nature11063 * Muhamad et al. (2017) Muhamad, J., Kusano, K., Inoue, S., & Shiota, D. 2017, ApJ, 842, 86, doi: 10.3847/1538-4357/aa750e * Namekata et al. 
(2020) Namekata, K., Maehara, H., Sasaki, R., et al. 2020, PASJ, 72, 68, doi: 10.1093/pasj/psaa051 * Notsu et al. (2019) Notsu, Y., Maehara, H., Honda, S., et al. 2019, ApJ, 876, 58, doi: 10.3847/1538-4357/ab14e6 * Osten et al. (2005) Osten, R. A., Hawley, S. L., Allred, J. C., Johns-Krull, C. M., & Roark, C. 2005, ApJ, 621, 398, doi: 10.1086/427275 * Pandey & Singh (2008) Pandey, J. C., & Singh, K. P. 2008, MNRAS, 387, 1627, doi: 10.1111/j.1365-2966.2008.13342.x * Pesnell et al. (2012) Pesnell, W. D., Thompson, B. J., & Chamberlin, P. C. 2012, Sol. Phys., 275, 3, doi: 10.1007/s11207-011-9841-3 * Pomoell et al. (2019) Pomoell, J., Lumme, E., & Kilpua, E. 2019, Sol. Phys., 294, 41, doi: 10.1007/s11207-019-1430-x * Priest (2014) Priest, E. 2014, Magnetohydrodynamics of the Sun (Cambridge University Press), doi: 10.1017/CBO9781139020732 * Rempel (2014) Rempel, M. 2014, ApJ, 789, 132, doi: 10.1088/0004-637X/789/2/132 * Schou et al. (2012) Schou, J., Scherrer, P. H., Bush, R. I., et al. 2012, Sol. Phys., 275, 229, doi: 10.1007/s11207-011-9842-2 * Schuck (2006) Schuck, P. W. 2006, ApJ, 646, 1358, doi: 10.1086/505015 * Toriumi et al. (2020) Toriumi, S., Takasao, S., Cheung, M. C. M., et al. 2020, ApJ, 890, 103, doi: 10.3847/1538-4357/ab6b1f * Toriumi & Hotta (2019) Toriumi, S., & Hotta, H. 2019, ApJ, 886, L21, doi: 10.3847/2041-8213/ab55e7 * UeNo et al. (2004) UeNo, S., Nagata, S.-i., Kitai, R., Kurokawa, H., & Ichimoto, K. 2004, in Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, Vol. 5492, Ground-based Instrumentation for Astronomy, ed. A. F. M. Moorwood & M. Iye, 958–969, doi: 10.1117/12.550304 * Wang et al. (1998) Wang, H., Johannesson, A., Stage, M., Lee, C., & Zirin, H. 1998, Sol. Phys., 178, 55, doi: 10.1023/A:1004974927114 * Wang et al. (2004) Wang, Z., Bovik, A. C., Sheikh, H. R., & Simoncelli, E. P. 2004, IEEE transactions on image processing, 13, 600
MATHEMATICS, Volume 8, Issue 11, Published 2020 # The geometry of a Randers rotational surface with an arbitrary direction wind Rattanasak Hama and Sorin V. Sabau ## 1 Introduction A Finsler structure on a surface $M$ can be regarded as a smooth 3-manifold $\Sigma\subset TM$ for which the canonical projection $\pi:\Sigma\to M$ is a surjective submersion and which has the property that for each $x\in M$, the $\pi$-fiber $\Sigma_{x}=\pi^{-1}(x)$ is a strictly convex curve enclosing the origin $O_{x}\in T_{x}M$. Here we denote by $TM$ the tangent bundle of $M$. This is actually equivalent to saying that such a geometrical structure $(M,F)$ is a surface $M$ endowed with a Minkowski norm in each tangent space $T_{x}M$ that varies smoothly with the base point $x\in M$ all over the manifold. Obviously $\Sigma$ is the unit sphere bundle $\{(x,y)\in TM:F(x,y)=1\}$, also called the indicatrix bundle. Even though these notions are defined in arbitrary dimension, we restrict ourselves to surfaces hereafter ([BCS]). On the other hand, such a Finsler structure defines a 2-parameter family of oriented paths on $M$, one in every oriented direction through every point. This is a special case of the notion of path geometry. We recall that, roughly speaking, a path geometry on a surface $M$ is a 2-parameter family of curves on $M$ with the property that through each point $x\in M$ and in each tangent direction at $x$ there passes a unique curve in the family. The fundamental example to keep in mind is the family of lines in the Euclidean plane. To be more precise, a path geometry on a surface $M$ is a foliation $\mathcal{P}$ of the projective tangent bundle $\mathbb{P}TM$ by contact curves, each of which is transverse to the fibers of the canonical projection $\pi:\mathbb{P}TM\to M$. 
Observe that even though $\mathbb{P}TM$ is independent of any norm $F$, there is actually a Riemannian isometry between $\mathbb{P}TM$ and $\Sigma$, a fact that allows us to identify them in the Finslerian case ([B]). The 3-manifold $\mathbb{P}TM$ is naturally endowed with a contact structure. Indeed, for a smooth, immersed curve $\gamma:(a,b)\to M$, let us denote by $\hat{\gamma}:(a,b)\to\mathbb{P}TM$ its canonical lift to the projective tangent bundle $\mathbb{P}TM$. The fact that the canonical projection is a submersion implies that, for each line $L\in\mathbb{P}TM$, the linear map $\pi_{*,L}:T_{L}\mathbb{P}TM\to T_{x}M$ is surjective, where $\pi(L)=x\in M$. Therefore $E_{L}:=\pi_{*,L}^{-1}(L)\subset T_{L}\mathbb{P}TM$ is a 2-plane in $T_{L}\mathbb{P}TM$ that defines a contact distribution and therefore a contact structure on $\mathbb{P}TM$. A curve on $\mathbb{P}TM$ is called a contact curve if it is tangent to the contact distribution $E$. In particular, the canonical lift $\hat{\gamma}$ to $\mathbb{P}TM$ of a curve $\gamma$ on $M$ is a contact curve. A local path geometry on $M$ is a foliation $\mathcal{P}$ of an open subset $U\subset\mathbb{P}TM$ by contact curves, each of which is transverse to the fibers of $\pi:\mathbb{P}TM\to M$. If $(M,F)$ is a Finsler surface, then the 3-manifold $\Sigma$ is endowed with a canonical coframe $(\omega^{1},\omega^{2},\omega^{3})$ satisfying the structure equations $\begin{split}d\omega^{1}&=-I\omega^{1}\wedge\omega^{3}+\omega^{2}\wedge\omega^{3}\\ d\omega^{2}&=\omega^{3}\wedge\omega^{1}\\ d\omega^{3}&=K\omega^{1}\wedge\omega^{2}-J\omega^{1}\wedge\omega^{3},\\ \end{split}$ (1.1) where the functions $I,J$ and $K:TM\to\mathbb{R}$ are the Cartan scalar, the Landsberg curvature and the Gauss curvature, respectively. 
The 2-plane field $D:=\langle\hat{e}_{2},\hat{e}_{3}\rangle$ defines a contact structure on $\Sigma$, where we denote by $(\hat{e}_{1},\hat{e}_{2},\hat{e}_{3})$ the dual frame of $(\omega^{1},\omega^{2},\omega^{3})$. Indeed, it can be seen that the 1-form $\eta:=A\omega^{1}$ is a contact form for any function $A\neq 0$ on $\Sigma$. The structure equations (1.1) imply $\eta\wedge d\eta=A^{2}\omega^{1}\wedge\omega^{2}\wedge\omega^{3}\neq 0$. Observe that in the Finslerian case, we actually have two foliations on the 3-manifold $\Sigma$: 1. 1. $\mathcal{P}=\{\omega^{1}=0,\omega^{3}=0\}$ the geodesic foliation of $\Sigma$, i.e. the leaves are curves in $\Sigma$ tangent to the geodesic spray $\hat{e}_{2}$; 2. 2. $\mathcal{Q}=\{\omega^{1}=0,\omega^{2}=0\}$ the indicatrix foliation of $\Sigma$, i.e. the leaves are indicatrix curves in $\Sigma$ tangent to $\hat{e}_{3}$. The pair $(\mathcal{P},\mathcal{Q})$ is sometimes called a generalized path geometry (see [Br]). The (forward) integral length of a regular piecewise $C^{\infty}$-curve $\gamma:[a,b]\to M$ on a Finsler surface $(M,F)$ is given by ${\cal L}_{\gamma}:=\sum_{i=1}^{k}\int_{t_{i-1}}^{t_{i}}F(\gamma(t),\dot{\gamma}(t))dt,$ where $\dot{\gamma}=\frac{d\gamma}{dt}$ is the tangent vector along the curve $\gamma|_{[t_{i-1},t_{i}]}$. A regular piecewise $C^{\infty}$-curve $\gamma$ on a Finsler manifold is called a forward geodesic if $({\mathcal{L}_{\gamma}})^{\prime}(0)=0$ for all piecewise $C^{\infty}$-variations of $\gamma$ that keep its ends fixed. In terms of the Chern connection, a constant speed geodesic is characterized by the condition $D_{\dot{\gamma}}{\dot{\gamma}}=0$. Observe that the canonical lift of a geodesic $\gamma$ to $\mathbb{P}TM$ gives the geodesic foliation $\mathcal{P}$ described above. Using the integral length of a curve, one can define the Finslerian distance between two points on $M$. 
For any two points $p$, $q$ on $M$, let us denote by $\Omega_{p,q}$ the set of all piecewise $C^{\infty}$-curves $\gamma:[a,b]\to M$ such that $\gamma(a)=p$ and $\gamma(b)=q$. Then the map $d:M\times M\to[0,\infty),\qquad d(p,q):=\inf_{\gamma\in\Omega_{p,q}}{\cal L}_{\gamma}$ gives the Finslerian distance on $M$. It can easily be seen that $d$ is in general a quasi-distance, i.e., it has the properties $d(p,q)\geq 0$, with equality if and only if $p=q$, and $d(p,q)\leq d(p,r)+d(r,q)$, with equality if and only if $r$ lies on a minimal geodesic segment joining $p$ to $q$ (triangle inequality). A Finsler manifold $(M,F)$ is called forward geodesically complete if and only if any geodesic $\gamma:[a,b)\to M$ can be extended to a geodesic $\gamma:[a,\infty)\to M$. The equivalence between forward completeness as a metric space and geodesic completeness is given by the Finslerian version of the Hopf-Rinow Theorem (see e.g. [BCS], p. 168). The same is true for backward geodesics. In the Finsler case, unlike the Riemannian one, forward completeness is not equivalent to backward completeness, except when $M$ is compact. Any geodesic $\gamma$ emanating from a point $p$ in a compact Finsler manifold loses its global minimising property at a point $q$ on $\gamma$. Such a point $q$ is called a cut point of $p$ along $\gamma$. The cut locus of a point $p$ is the set of all cut points along geodesics emanating from $p$. Such points often appear as an obstacle when one tries to prove global theorems in differential geometry, while at the same time being vital in analysis, where they appear as sets of singular points. In fact, the cut locus of a point $p$ in a complete Finsler manifold equals the closure of the set of all non-differentiable points of the distance function from the point $p$. 
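The asymmetry of the Finslerian distance can be made concrete numerically. The sketch below assumes a hypothetical flat-plane Randers metric $F(y)=\|y\|+\langle b,y\rangle$ with $\|b\|<1$ (not one of the metrics studied later) and integrates the length ${\cal L}_{\gamma}=\int F(\gamma,\dot{\gamma})\,dt$ of a straight segment traversed in both directions: the forward and backward lengths differ, so $d$ is only a quasi-distance.

```python
import numpy as np

b_vec = np.array([0.3, 0.0])  # drift 1-form beta = <b_vec, .>, with |b_vec| < 1

def randers_F(y):
    """Toy Randers norm F(y) = |y| + <b, y> on the flat plane."""
    return np.linalg.norm(y) + b_vec @ y

def curve_length(points):
    """Finslerian length of a polygonal curve: sum of F over its segments."""
    pts = np.asarray(points, dtype=float)
    return sum(randers_F(pts[i + 1] - pts[i]) for i in range(len(pts) - 1))

p, q = np.array([0.0, 0.0]), np.array([1.0, 0.0])
seg = np.linspace(p, q, 50)          # discretized straight segment p -> q
forward = curve_length(seg)          # 1 + 0.3 = 1.3
backward = curve_length(seg[::-1])   # 1 - 0.3 = 0.7
print(forward, backward)             # the "distance" is not symmetric
```

Both values are nonnegative, as required, but $d(p,q)\neq d(q,p)$ in general; symmetry is recovered only when $\beta=0$.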
The structure of the cut locus plays an important role in optimal control problems in space and quantum dynamics, allowing one to obtain global optimal results in orbital transfer and for Lindblad equations in quantum control. The notion of cut locus was introduced and studied for the first time by H. Poincaré in 1905 for the Riemannian case. In the case of a two-dimensional analytical sphere, S. B. Myers proved in 1935 that the cut locus of a point is a finite tree in both the Riemannian and Finslerian cases. In the case of an analytic Riemannian manifold, M. Buchner showed the triangulability of the cut locus of a point $p$, and determined its local structure in the low-dimensional case, in 1977 and 1978, respectively. The cut locus of a point can have a very complicated structure. For example, H. Gluck and D. Singer constructed a $C^{\infty}$ Riemannian manifold that has a point whose cut locus is not triangulable (see [SST] for an exposition). There are $C^{k}$-Riemannian or Finsler metrics on spheres with a preferential point whose cut locus is a fractal ([IS]). In the present paper we will study the local and global behaviour of the geodesics of a Finsler metric of revolution on topological cylinders. In particular, we will determine the structure of the cut locus on the cylinder for such metrics and compare it with the Riemannian case. We will focus on the Finsler metrics of Randers type obtained as solutions of Zermelo’s navigation problem for the navigation data $(M,h)$, where $h$ is the canonical Riemannian metric on the topological cylinder, $h=dr^{2}+m^{2}(r)d\theta^{2}$, and $W=A(r)\frac{\partial}{\partial r}+B\frac{\partial}{\partial\theta}$ is a vector field on $M$. 
Observe that our wind is more general than a Killing vector field, hence the theory presented here is a generalization of the classical study of geodesics and cut loci for Randers metrics obtained as solutions of Zermelo’s navigation problem with Killing vector fields, studied in [HCS] and [HKS]. Nevertheless, by taking the wind $W$ in this way we obtain a quite general Randers metric on $M$ which is a Finsler metric of revolution and whose geodesics and cut locus can be computed explicitly. Our paper is organized as follows. In Section 2, we recall basics of Finsler geometry, focusing on the Randers metrics that we will actually use in order to obtain explicit information on the behaviour of geodesics and the structure of the cut locus. We introduce an extension of Zermelo’s navigation problem for Killing winds to the more general case $\widetilde{W}=V+W$, where only $W$ is Killing. We show that the geodesics, conjugate locus and cut locus can be determined in this case as well. In Section 3 we describe the theory of general Finsler surfaces of revolution. In the case where this Finsler metric is a Riemannian one, we recover the already known theory of geodesics and cut loci ([C1], [C2]). In Section 4 we consider some examples that illustrate the theory presented so far. In particular, in subsection 4.1 we consider the general wind $W=A(r)\frac{\partial}{\partial r}+B\frac{\partial}{\partial\theta}$, which obviously is not Killing with respect to $h$, where $A=A(r)$ is a bounded function and $B$ is a constant, and determine its geometry. Essentially, we are reducing the geodesic theory of the Finsler metric $\widetilde{F}$, obtained from Zermelo’s navigation problem for $(M,h)$ and $\widetilde{W}$, to the theory of a Riemannian metric $(M,\alpha)$. 
Moreover, in the particular case $\widetilde{W}=A\frac{\partial}{\partial r}+B\frac{\partial}{\partial\theta}$ in Section 4.2, where $A,B$ are constants, the geodesic theory of $\widetilde{F}$ can be directly obtained from the geometry of the Riemannian metric $(M,h)$. A similar study can be done for the case $W=A(r)\frac{\partial}{\partial r}$. We leave a detailed study of these Randers metrics to forthcoming research. ## 2 Finsler metrics. The Randers case Finsler structures are one of the most natural generalizations of Riemannian metrics. Let us recall here that a Finsler structure on a real smooth $n$-dimensional manifold $M$ is a function $F:TM\to[0,\infty)$ which is smooth on $\widetilde{TM}=TM\setminus O$, where $O$ is the zero section, has the homogeneity property $F(x,\lambda y)=\lambda F(x,y)$, for all $\lambda>0$ and all $y\in T_{x}M$, and also has the strong convexity property that the Hessian matrix $g_{ij}=\frac{1}{2}\frac{\partial^{2}F^{2}}{\partial y^{i}\partial y^{j}}(x,y)$ (2.1) is positive definite at any point $(x,y)\in\widetilde{TM}$. ### 2.1 A ubiquitous family of Finsler structures: the Randers metrics Initially introduced in the context of general relativity, Randers metrics are the most ubiquitous family of Finsler structures. A Randers metric on a surface $M$ is obtained by a rigid translation of an ellipse in each tangent plane $T_{x}M$ such that the origin of $T_{x}M$ remains inside it. Figure 1: Randers metrics: a rigid displacement of an ellipse. Formally, on a Riemannian manifold $(M,\alpha)$, a Randers metric is a Finsler structure $(M,F)$ whose fundamental function $F:TM\to[0,\infty)$ can be written as $F(x,y)=\alpha(x,y)+\beta(x,y),$ where $\alpha(x,y)=\sqrt{a_{ij}(x)y^{i}y^{j}}$ and $\beta(x,y)=b_{i}(x)y^{i}$, such that the Riemannian norm of $\beta$ is less than 1, i.e. $b^{2}:=a^{ij}b_{i}b_{j}<1$. 
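The defining properties of a Randers norm (positivity, positive 1-homogeneity, and strong convexity of the Hessian (2.1)) can be checked numerically on a toy example. The sketch below assumes a flat metric $a_{ij}=\delta_{ij}$ and an arbitrarily chosen $b$ with $b^{2}<1$; the finite-difference Hessian is only an approximation of $g_{ij}$.

```python
import numpy as np

# Toy Randers norm on R^2: F(y) = sqrt(y^T a y) + b.y, with a_ij = delta_ij
# and a Euclidean b-norm below 1, so F is a genuine Minkowski norm.
a_mat = np.eye(2)
b = np.array([0.5, 0.2])  # b^2 = 0.29 < 1 (hypothetical choice)

def F(y):
    return np.sqrt(y @ a_mat @ y) + b @ y

def hessian_F2_half(y, h=1e-4):
    """Central finite-difference Hessian of (1/2) F^2, i.e. g_ij of (2.1)."""
    g = np.empty((2, 2))
    f = lambda v: 0.5 * F(v) ** 2
    for i in range(2):
        for j in range(2):
            e_i, e_j = np.eye(2)[i] * h, np.eye(2)[j] * h
            g[i, j] = (f(y + e_i + e_j) - f(y + e_i - e_j)
                       - f(y - e_i + e_j) + f(y - e_i - e_j)) / (4 * h * h)
    return g

y = np.array([0.7, -1.1])
assert F(y) > 0 and F(-y) > 0                    # positivity for y != 0
assert np.isclose(F(2 * y), 2 * F(y))            # positive 1-homogeneity
assert np.all(np.linalg.eigvalsh(hessian_F2_half(y)) > 0)  # strong convexity
```

Note that $F(y)\neq F(-y)$ unless $b=0$: the indicatrix is an ellipse that is off-center with respect to the origin, exactly as in Figure 1.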
It is known that Randers metrics are solutions of Zermelo’s navigation problem [Z], which we recall here. Consider a ship sailing on the open sea in calm waters. If a mild breeze comes up, how should the ship be steered in order to reach a given destination in the shortest time possible? The solution was given by Zermelo in the case where the open sea is a Euclidean space, by [Sh] in the Riemannian case, and studied in detail in [BRS]. Indeed, for a time-independent wind $W\in TM$ on a Riemannian manifold $(M,h)$, the paths minimizing travel-time are exactly the geodesics of the Randers metric $F(x,y)=\alpha(x,y)+\beta(x,y)=\frac{\sqrt{\lambda\|y\|_{h}^{2}+W_{0}^{2}}}{\lambda}-\frac{W_{0}}{\lambda},$ where $W=W^{i}(x)\frac{\partial}{\partial x^{i}}$, $\|y\|_{h}^{2}=h(y,y)$, $\lambda=1-\|W\|_{h}^{2}$, and $W_{0}=h(W,y)$. Requiring $\|W\|_{h}<1$ we obtain a positive definite Finslerian norm. In components, $a_{ij}=\frac{1}{\lambda}h_{ij}+\left(\frac{W_{i}}{\lambda}\right)\left(\frac{W_{j}}{\lambda}\right)$, $b_{i}(x)=-\frac{W_{i}}{\lambda}$, where $W_{i}=h_{ij}W^{j}$ (see [R] for a general discussion). The Randers metric obtained above is called the solution of Zermelo’s navigation problem for the navigation data $(M,h)$ and $W$. ###### Remark 2.1 Obviously, at any $x\in M$, the condition $F(y)=1$ is equivalent to $\|y-W\|_{h}=1$, a fact that assures that, indeed, the indicatrix of $(M,F)$ in $T_{x}M$ differs from the unit sphere of $h$ by a translation along $W(x)$ (see Figure 1). More generally, Zermelo’s navigation problem can be considered where the open sea is a given Finsler manifold (see [Sh]). We have ###### Proposition 2.2 Let $(M,F)$ be a Finsler manifold and $W$ a vector field on $M$ such that $F(-W)<1$. 
Then the solution of Zermelo’s navigation problem with navigation data $F,W$ is the Finsler metric $\widetilde{F}$ obtained by solving the equation $F(y-\widetilde{F}W)=\widetilde{F},\ \text{for any}\ y\in TM.$ (2.2) Indeed, if we consider Zermelo’s navigation problem where the open sea is the Finsler manifold $(M,F)$ and the wind is $W$, by rigid translation of the indicatrix $\Sigma_{F}$ we obtain the closed, smooth, strongly convex indicatrix $\Sigma_{\widetilde{F}}$, where $\widetilde{F}$ is the solution of the equation $F\left(\frac{y}{\widetilde{F}}-W\right)=1,$ which is clearly equivalent to (2.2) due to the positivity of $\widetilde{F}$ and the homogeneity of $F$. To get a genuine Finsler metric $\widetilde{F}$, we need the origin $O_{x}\in T_{x}M$ to belong to the interior of $\Sigma_{\widetilde{F}}=\Sigma_{F}+W$, that is, $F(-W)<1$. ###### Remark 2.3 Consider Zermelo’s navigation problem for $(M,F)$ and wind $W$, where $F$ is a (positive-definite) Finsler metric. If we solve the equation $F\left(\frac{y}{\widetilde{F}}-W\right)=1\Leftrightarrow F(y-\widetilde{F}W)=\widetilde{F}$ for $\widetilde{F}$, we obtain the solution of this Zermelo navigation problem. In order for $\widetilde{F}$ to be a Finsler metric we need to check that: * (i) $\widetilde{F}$ is strongly convex; * (ii) the indicatrix of $\widetilde{F}$ includes the origin $O_{x}\in T_{x}M$. Since the indicatrix of $\widetilde{F}$ is the rigid translation by $W$ of the indicatrix of $F$, and the indicatrix of $F$ is strongly convex, it follows that the indicatrix of $\widetilde{F}$ is also strongly convex. Hence, we need to find the condition for (ii) only. Denote by $B_{F}(1):=\{y\in T_{x}M:F(y)<1\},\quad B_{\widetilde{F}}(1):=\{y\in T_{x}M:\widetilde{F}(y)<1\}$ the unit balls of $F$ and $\widetilde{F}$, respectively. 
Zermelo’s navigation problem shows that $B_{\widetilde{F}}(1)=B_{F}(1)+W.$ Hence $O_{x}\in B_{\widetilde{F}}(1)\Leftrightarrow O_{x}\in B_{F}(1)+W\Leftrightarrow-W\in B_{F}(1)\Leftrightarrow F(-W)<1.$ Hence, the indicatrix of $\widetilde{F}$ includes $O_{x}$ if and only if $F(-W)<1$, where we denote by $O_{x}$ the zero vector. ###### Proposition 2.4 Let $(M,F_{1}=\alpha+\beta)$ be a Randers space and $W=W^{i}(x)\frac{\partial}{\partial x^{i}}$ a vector field on $M$. Then, the solution of Zermelo’s navigation problem with navigation data $(M,F_{1})$ and $W$ is also a Randers metric $F=\widetilde{\alpha}+\widetilde{\beta}$, where $\begin{split}\widetilde{a}_{ij}&=\frac{1}{\eta}\left(a_{ij}-b_{i}b_{j}\right)+\left(\frac{W_{i}-b_{i}[1+\beta(W)]}{\eta}\right)\left(\frac{W_{j}-b_{j}[1+\beta(W)]}{\eta}\right)\\ \widetilde{b}_{i}&=-\frac{W_{i}-b_{i}[1+\beta(W)]}{\eta},\end{split}$ (2.3) where $\eta=[1+\beta(W)]^{2}-\alpha^{2}(W)$ and $W_{i}=a_{ij}W^{j}$. ###### Proof. (Proof of Proposition 2.4) Let us consider the equation $F_{1}\left(\frac{y}{\widetilde{F}}-W\right)=1,$ which is equivalent to $F_{1}(y-\widetilde{F}W)=\widetilde{F}$ due to the positivity of $\widetilde{F}$ and the positive 1-homogeneity of $F_{1}$. If we use $F_{1}=\alpha+\beta$, it follows that $\alpha(y-\widetilde{F}W)=\widetilde{F}-\beta(y-\widetilde{F}W),$ using the linearity of $\beta$, i.e. 
$\beta(y-\widetilde{F}W)=\beta(y)-\widetilde{F}\beta(W)$, where $\beta(y)=b_{i}y^{i}$, $\beta(W)=b_{i}W^{i}$, and squaring this formula, we get the equation $\alpha^{2}(y-\widetilde{F}W)=[\widetilde{F}(1+\beta(W))-\beta(y)]^{2}.$ (2.4) Observe that $\alpha^{2}(y-\widetilde{F}W)=\alpha^{2}(y)-2\widetilde{F}<y,W>_{\alpha}+\widetilde{F}^{2}\alpha^{2}(W)$ (2.5) and $[\widetilde{F}-\beta(y-\widetilde{F}W)]^{2}=[1+\beta(W)]^{2}\widetilde{F}^{2}-2\widetilde{F}\beta(y)[1+\beta(W)]+\beta^{2}(y);$ (2.6) substituting (2.5), (2.6) in (2.4) gives the quadratic equation $\eta\widetilde{F}^{2}+2\widetilde{F}<y,W-B[1+\beta(W)]>_{\alpha}-[\alpha^{2}(y)-\beta^{2}(y)]=0,$ (2.7) where $B=b^{i}\frac{\partial}{\partial x^{i}}=(a^{ij}b_{j})\frac{\partial}{\partial x^{i}}$ and $\eta:=[1+\beta(W)]^{2}-\alpha^{2}(W)$, i.e. $<y,W-B[1+\beta(W)]>_{\alpha}=a_{ij}y^{i}(W^{j}-b^{j}[1+\beta(W)])=<y,W>_{\alpha}-\beta(y)[1+\beta(W)].$ The discriminant of (2.7) is $D^{\prime}=\{<y,W>_{\alpha}-\beta(y)[1+\beta(W)]\}^{2}+\eta[\alpha^{2}(y)-\beta^{2}(y)].$ Let us observe that $F_{1}(-W)<1$ implies $\eta>0$. Indeed, $F_{1}(-W)=\alpha(W)-\beta(W)<1\Leftrightarrow\alpha^{2}(W)<[1+\beta(W)]^{2},$ hence $\eta>0$. Moreover, observe that $D^{\prime}=\{\eta(a_{ij}-b_{i}b_{j})+(W_{i}-b_{i}[1+\beta(W)])(W_{j}-b_{j}[1+\beta(W)])\}y^{i}y^{j}.$ The solution of (2.7) is given by $\widetilde{F}=\frac{\sqrt{<y,W-B[1+\beta(W)]>_{\alpha}^{2}+\eta[\alpha^{2}(y)-\beta^{2}(y)]}}{\eta}-\frac{<y,W-B[1+\beta(W)]>_{\alpha}}{\eta}$ or, equivalently, $\widetilde{F}=\frac{\sqrt{\{\eta(a_{ij}-b_{i}b_{j})+(W_{i}-b_{i}[1+\beta(W)])(W_{j}-b_{j}[1+\beta(W)])\}y^{i}y^{j}}}{\eta}-\frac{\{W_{i}-b_{i}[1+\beta(W)]\}y^{i}}{\eta},$ that is, $\widetilde{F}=\widetilde{\alpha}+\widetilde{\beta}$, where $\widetilde{a}_{ij}$ and $\widetilde{b}_{i}$ are given by (2.3). Observe that $\widetilde{a}_{ij}$ is positive definite. 
Indeed, for any nonzero $v\in TM$, $\eta^{2}\widetilde{\alpha}^{2}(v)=\eta^{2}\widetilde{a}_{ij}v^{i}v^{j}=\eta[\alpha^{2}(v)-\beta^{2}(v)]+<v,W-B[1+\beta(W)]>_{\alpha}^{2}$. On the other hand, since $F_{1}=\alpha+\beta$ is a Randers metric, $F_{1}(X)>0$ for any nonzero tangent vector $X\in TM$, hence for $X=v$ and $X=-v$ we get $\alpha(v)+\beta(v)>0$ and $\alpha(v)-\beta(v)>0$, respectively, hence $\alpha^{2}(v)-\beta^{2}(v)>0$ for any nonzero $v\in TM$. This implies that $\widetilde{a}_{ij}$ is positive definite. ### 2.2 A two-step Zermelo navigation We have discussed in the previous section Zermelo’s navigation when the open sea is a Riemannian manifold and when it is a Finsler manifold, respectively. In order to obtain a more general version of the navigation, we combine these two approaches. We have ###### Theorem 2.5 Let $(M,h)$ be a Riemannian manifold and $V$, $W$ two vector fields on $M$. Let us consider the Zermelo navigation problem on $M$ with the following data: 1. (I) the Riemannian metric $(M,h)$ with wind $V+W$, assuming the condition $\|V+W\|_{h}<1$; 2. (II) the Finsler metric $(M,F_{1})$ with wind $W$, assuming $W$ satisfies the condition $F_{1}(-W)<1$, where $F_{1}=\alpha+\beta$ is the solution of Zermelo’s navigation problem for the navigation data $(M,h)$ with wind $V$, such that $\|V\|_{h}<1$. Then, the above Zermelo navigation problems (I) and (II) have the same solution $F=\widetilde{\alpha}+\widetilde{\beta}$. ###### Proof. (Proof of Theorem 2.5) Let us consider case (I), i.e. the sea is the Riemannian manifold $(M,h)$ with the wind $\widetilde{W}:=V+W$ such that $\|V+W\|_{h}<1$. 
The associated Randers metric through Zermelo’s navigation problem is given by $\widetilde{\alpha}+\widetilde{\beta}$, where $\begin{split}\widetilde{a}_{ij}:=\frac{1}{\Lambda}h_{ij}+\left(\frac{\widetilde{W}_{i}}{\Lambda}\right)\left(\frac{\widetilde{W}_{j}}{\Lambda}\right),\ \widetilde{b}_{i}:=-\frac{\widetilde{W}_{i}}{\Lambda},\end{split}$ (2.8) where $\Lambda=1-\|\widetilde{W}\|^{2}_{h}=1-\|V+W\|^{2}_{h}$ and $\widetilde{W}_{i}=h_{ij}\widetilde{W}^{j}$. Observe that (2.8) is actually equivalent to $\begin{split}\widetilde{a}_{ij}&:=\frac{1}{\Lambda}h_{ij}+\left(\frac{V_{i}^{(h)}+W_{i}^{(h)}}{\Lambda}\right)\left(\frac{V_{j}^{(h)}+W_{j}^{(h)}}{\Lambda}\right),\\ \widetilde{b}_{i}&:=-\frac{W_{i}^{(h)}}{\Lambda}-\frac{V_{i}^{(h)}}{\Lambda},\end{split}$ (2.9) where $V_{i}^{(h)}=h_{ij}V^{j}$ and $W_{i}^{(h)}=h_{ij}W^{j}$. Next, we will consider case (II), which we regard as a two-step Zermelo type navigation: $\underline{\text{Step 1}}$. Consider the Zermelo navigation with data $(M,h)$ and wind $V$, $\|V\|_{h}^{2}<1$, with the solution $F_{1}=\alpha+\beta$, where $\begin{split}a_{ij}=\frac{1}{\lambda}h_{ij}+\left(\frac{V_{i}^{(h)}}{\lambda}\right)\left(\frac{V_{j}^{(h)}}{\lambda}\right),\ b_{i}=-\frac{V_{i}^{(h)}}{\lambda},\end{split}$ where $\lambda=1-\|V\|_{h}^{2}$, $V_{i}^{(h)}=h_{ij}V^{j}$. $\underline{\text{Step 2}}$. 
Consider the Zermelo navigation with data $(M,F_{1}=\alpha+\beta)$ obtained at Step 1, and wind $W$ such that $F_{1}(-W)<1$, with solution $\widetilde{F}=\widehat{\alpha}+\widehat{\beta}$ (see Proposition 2.4), where $\begin{split}\widehat{a}_{ij}&=\frac{1}{\eta}(a_{ij}-b_{i}b_{j})+\left(\frac{W_{i}^{(\alpha)}-b_{i}[1+\beta(W)]}{\eta}\right)\left(\frac{W_{j}^{(\alpha)}-b_{j}[1+\beta(W)]}{\eta}\right),\\ \widehat{b}_{i}&=-\frac{W_{i}^{(\alpha)}-b_{i}[1+\beta(W)]}{\eta}\end{split}$ (2.10) with $\eta=[1+\beta(W)]^{2}-\alpha^{2}(W),\text{ and }W_{i}^{(\alpha)}=a_{ij}W^{j}.$ We will show that $\widetilde{a}_{ij}=\widehat{a}_{ij}$ and $\widetilde{b}_{i}=\widehat{b}_{i}$, respectively, for all indices $i,j\in\{1,\dots,n\}$. It is easy to see that $\Lambda=\lambda-\|W\|_{h}^{2}-2<V,W>_{h}$. Next, by straightforward computation we get $\alpha^{2}(W)=a_{ij}W^{i}W^{j}=\frac{1}{\lambda}\|W\|_{h}^{2}+\left(\frac{h(V,W)}{\lambda}\right)^{2},\ \beta(W)=-\frac{h(V,W)}{\lambda}.$ It follows that $\eta=\left[1-\frac{h(V,W)}{\lambda}\right]^{2}-\frac{1}{\lambda}\|W\|_{h}^{2}-\frac{h^{2}(V,W)}{\lambda^{2}}=1-2\frac{h(V,W)}{\lambda}-\frac{1}{\lambda}\|W\|_{h}^{2},$ hence $\eta=\frac{\Lambda}{\lambda}.$ (2.11) In a similar manner, $\frac{W_{i}^{(\alpha)}-b_{i}[1+\beta(W)]}{\eta}=\frac{1}{\eta}\left[\frac{h_{ij}W^{j}}{\lambda}+\frac{V_{i}^{(h)}}{\lambda}\frac{h(V,W)}{\lambda}+\frac{V_{i}^{(h)}}{\lambda}\left(1-\frac{h(V,W)}{\lambda}\right)\right],$ hence we obtain $\frac{W_{i}^{(\alpha)}-b_{i}(1+\beta(W))}{\eta}=\frac{W_{i}^{(h)}+V_{i}^{(h)}}{\Lambda},$ that is, $\widetilde{b}_{i}=\widehat{b}_{i}$. It can also be seen that $\frac{1}{\eta}(a_{ij}-b_{i}b_{j})=\frac{1}{\Lambda}h_{ij},$ hence $\widetilde{a}_{ij}=\widehat{a}_{ij}$ and the identity of formulas (2.8) and (2.10) is proved. In order to finish the proof we show that the conditions (i) $\|V+W\|^{2}_{h}<1$ and (ii) $\|V\|_{h}^{2}<1$ and $F(-W)<1$ are actually equivalent. 
Geometrically speaking, the two-step Zermelo navigation is the rigid translation of $\Sigma_{h}$ by $V$ followed by the rigid translation of $\Sigma_{F_{1}}$ by $W$. This is obviously equivalent to the rigid translation of $\Sigma_{h}$ by $\widetilde{W}=V+W$. Figure 2: The $h$-indicatrix, $F_{1}$-indicatrix and $\widetilde{F}$-indicatrix. The geometrical meaning of (i) is that the origin $O_{x}\in T_{x}M$ is in the interior of the translated indicatrix $\Sigma_{\widetilde{F}}$ (see Figure 2). On the other hand, the relation in (ii) shows that the origin $O_{x}$ is in the interior of the translations of $\Sigma_{h}$ by $V$ and of $\Sigma_{F_{1}}$ by $W$. This equivalence can also be checked analytically. For initial data $(M,h)$ and $V$, we obtain by Zermelo’s navigation the Randers metric $F=\alpha+\beta$, where $\begin{split}a_{ij}=\frac{1}{\lambda}h_{ij}+\left(\frac{V_{i}}{\lambda}\right)\left(\frac{V_{j}}{\lambda}\right),\ b_{i}=-\frac{V_{i}}{\lambda},\end{split}$ with $V_{i}=h_{ij}V^{j}$ and $\lambda=1-\|V\|^{2}_{h}<1$. Consider another vector field $W$ and compute $\begin{split}F(-W)&=\sqrt{\frac{1}{\lambda}\|W\|^{2}_{h}+\left(\frac{V_{i}W^{i}}{\lambda}\right)^{2}}+\frac{V_{i}W^{i}}{\lambda}\\ &=\frac{1}{\lambda}\left[\sqrt{\lambda\|W\|_{h}^{2}+h^{2}(V,W)}+h(V,W)\right].\end{split}$ Let us assume $F(-W)<1$; then $\sqrt{\lambda\|W\|_{h}^{2}+h^{2}(V,W)}+h(V,W)<\lambda,$ i.e. $\begin{split}&\lambda\|W\|_{h}^{2}+h^{2}(V,W)<[\lambda-h(V,W)]^{2}\\ &\Leftrightarrow\lambda\|W\|_{h}^{2}+\cancel{h^{2}(V,W)}<\lambda^{2}-2\lambda h(V,W)+\cancel{h^{2}(V,W)},\ \lambda>0\\ &\Leftrightarrow\|W\|_{h}^{2}<\lambda-2h(V,W)\Leftrightarrow\|W\|_{h}^{2}+2h(V,W)+\|V\|_{h}^{2}<1\\ &\Leftrightarrow\|W+V\|_{h}^{2}<1.\end{split}$ Conversely, if $\|V+W\|_{h}^{2}<1$, by reversing the computation above we obtain $F(-W)<1$, provided $\lambda-h(V,W)>0$. 
Indeed, observe that $\|V+W\|_{h}^{2}<1$ actually implies $\lambda-h(V,W)>0$, because $\lambda-h(V,W)=1-\|V\|_{h}^{2}-h(V,W)=1-h(V,V+W)$, and the Cauchy–Schwarz inequality gives $h(V,V+W)\leq\|V\|_{h}\|V+W\|_{h}<1$, using $\|V\|_{h}<1$ and $\|V+W\|_{h}<1$. The two-step Zermelo navigation problem discussed above can be generalized to a $k$-step one. ###### Remark 2.6 Let $(M,F_{0})$ be a Finsler space and let $W_{0},W_{1},\dots,W_{k-1}$ be $k$ linearly independent vector fields on $M$. We consider the following $k$-step Zermelo navigation problem. $\underline{\text{Step 0}}$. $F_{1}$ is the solution of $(M,F_{0},W_{0})$ with $F_{0}(-W_{0})<1$, i.e. the solution of $F_{0}\left(\frac{y}{F_{1}}-W_{0}\right)=1$. $\underline{\text{Step 1}}$. $F_{2}$ is the solution of $(M,F_{1},W_{1})$ with $F_{1}(-W_{1})<1$, i.e. the solution of $F_{1}\left(\frac{y}{F_{2}}-W_{1}\right)=1$. $\vdots$ $\underline{\text{Step k-1}}$. $F_{k}$ is the solution of $(M,F_{k-1},W_{k-1})$ with $F_{k-1}(-W_{k-1})<1$, i.e. the solution of $F_{k-1}\left(\frac{y}{F_{k}}-W_{k-1}\right)=1$. Then $F_{k}$ is the Finsler metric obtained as the solution of the Zermelo navigation problem with data $F_{0}$, $\widetilde{W}:=W_{0}+\dots+W_{k-1}$, under the condition $F_{0}(-\widetilde{W})<1$. ### 2.3 Geodesics, conjugate and cut loci ###### Proposition 2.7 Let $(M,h)$ be a Riemannian manifold, $V$ a vector field such that $\|V\|_{h}<1$, and let $F=\alpha+\beta$ be the solution of the Zermelo navigation with data $(M,h)$ and $V$. Then $d\beta=0$ if and only if $V$ satisfies the differential equation $d\gamma=d\log\lambda\wedge\gamma,$ (2.12) where $\gamma=V_{i}(x)dx^{i}$, $V_{i}=h_{ij}V^{j}$, and $\lambda=1-\|V\|_{h}^{2}$. ###### Proof. (Proof of Proposition 2.7) Observe that $b_{i}=-\frac{V_{i}}{\lambda}$ is equivalent to $\lambda\beta=-\gamma$, hence $d\lambda\wedge\beta+\lambda d\beta=-d\gamma$, and using $d\beta=0$ we obtain $d\lambda\wedge\beta=-d\gamma.$ By using $\beta=-\frac{1}{\lambda}\gamma$ we easily get (2.12). 
The converse is easy to show taking into account $\lambda\neq 0$. ###### Remark 2.8 The equation (2.12) can be written in coordinates $\left(\frac{\partial V_{i}}{\partial x^{j}}-\frac{\partial V_{j}}{\partial x^{i}}\right)dx^{i}\wedge dx^{j}=\left(\frac{\partial\log\lambda}{\partial x^{i}}dx^{i}\right)\wedge(V_{j}dx^{j}),$ that is $\frac{\partial V_{i}}{\partial x^{j}}-\frac{\partial V_{j}}{\partial x^{i}}=\frac{\partial\log\lambda}{\partial x^{i}}V_{j}-\frac{\partial\log\lambda}{\partial x^{j}}V_{i}.$ In the 2-dimensional case, we get the first-order PDE $\frac{\partial V_{1}}{\partial x^{2}}-\frac{\partial V_{2}}{\partial x^{1}}=-\frac{1}{\lambda}\left[\frac{\partial h^{ij}}{\partial x^{1}}V_{2}-\frac{\partial h^{ij}}{\partial x^{2}}V_{1}\right]V_{i}V_{j}-\frac{2}{\lambda}V^{i}\left[\frac{\partial V_{i}}{\partial x^{1}}V_{2}-\frac{\partial V_{i}}{\partial x^{2}}V_{1}\right],\ i,j=1,2.$ (2.13) It can easily be seen that in the case of a surface of revolution $h=dr^{2}+m^{2}(r)d\theta^{2}$ the wind $V=A(r)\frac{\partial}{\partial r}$ is a solution of (2.12) and of (2.13). ###### Theorem 2.9 Let $(M,h)$ be a simply connected Riemannian manifold and $V=V^{i}\frac{\partial}{\partial x^{i}}$ a vector field on $M$ such that $\|V\|_{h}<1$, and let $F=\alpha+\beta$ be the Randers metric obtained as the solution of the Zermelo's navigation problem with this data. If $V$ satisfies the differential relation $d\eta=d(\log\lambda)\wedge\eta,$ (2.14) where $\eta=V_{i}(x)dx^{i}$, $V_{i}=h_{ij}V^{j}$, then the following hold. 1. There exists a smooth function $f:M\to\mathbb{R}$ such that $\beta=df$. 2. The Randers metric $F$ is projectively equivalent to $\alpha$, i.e. the geodesics of $(M,F)$ coincide with the geodesics of the Riemannian metric $\alpha$ as non-parametrized curves. 3. 
The Finslerian length of any piecewise $C^{\infty}$ curve $\gamma:[a,b]\to M$ joining the points $p$ and $q$ is given by $\mathcal{L}_{F}(\gamma)=\mathcal{L}_{\alpha}(\gamma)+f(q)-f(p),$ (2.15) where $\mathcal{L}_{\alpha}(\gamma)$ is the Riemannian length of $\gamma$ with respect to $\alpha$. 4. The geodesic $\gamma$ is minimizing with respect to $\alpha$ if and only if it is minimizing with respect to $F$. 5. For any two points $p$ and $q$ we have $d_{F}(p,q)=d_{\alpha}(p,q)+f(q)-f(p),$ (2.16) where $d_{\alpha}(p,q)$ is the Riemannian distance between $p$ and $q$ with respect to $\alpha$. 6. For an $F$-unit speed geodesic $\gamma$, if we put $p:=\gamma(0)$ and $q:=\gamma(t_{0})$, then $q$ is conjugate to $p$ along $\gamma$ with respect to $F$ if and only if $q$ is conjugate to $p$ along $\gamma$ with respect to $\alpha$. 7. The cut locus of $p$ with respect to $F$ coincides with the cut locus of $p$ with respect to $\alpha$. ###### Proof. (Proof of Theorem 2.9) 1. Using Proposition 2.7, it is clear that the differential equation (2.14) is equivalent to $\beta$ being a closed 1-form, i.e. $d\beta=0$. On the other hand, since $M$ is a simply connected manifold, any closed 1-form is exact, hence in this case (2.14) is equivalent to $\beta=df$. 2. Follows immediately from the classical result in Finsler geometry that a Randers metric $\alpha+\beta$ is projectively equivalent to its Riemannian part $\alpha$ if and only if $d\beta=0$ (see for instance [BCS], p.298). 3. The length of the curve $\gamma:[a,b]\to M$, given by $x^{i}=x^{i}(t)$, is computed as $\begin{split}\mathcal{L}_{F}(\gamma)&=\int_{a}^{b}F(\gamma(t),\dot{\gamma}(t))dt=\int_{a}^{b}\alpha(\gamma(t),\dot{\gamma}(t))dt+\int_{a}^{b}\beta(\gamma(t),\dot{\gamma}(t))dt\\\ &=\mathcal{L}_{\alpha}(\gamma)+f(q)-f(p)\\\ \end{split}$ where we use $\int_{a}^{b}\beta(\gamma(t),\dot{\gamma}(t))dt=\int_{a}^{b}df(\gamma(t),\dot{\gamma}(t))dt=f(\gamma(b))-f(\gamma(a))=f(q)-f(p).$ 4. 
It follows from 3. 5. It follows immediately from 2 and 3 (see [SSS] for a detailed discussion on this type of distance). 6. From (2) we know that $\alpha$ and $F=\alpha+\beta$ are projectively equivalent, i.e. their non-parametrized geodesics coincide as point sets. More precisely, if $\gamma:[0,l]\to M$, $\gamma(t)=(x^{i}(t))$ is an $\alpha$-unit speed geodesic, and $\overline{\gamma}:[0,\widetilde{l}]\to M$, $\overline{\gamma}(s)=(x^{i}(s))$ is an $F$-unit speed geodesic, then there exists a parameter change $t=t(s)$, $\frac{dt}{ds}>0$, such that $\overline{\gamma}(s)=\gamma(t(s))$, with the inverse function $s=s(t)$ such that $\gamma(t)=\overline{\gamma}(s(t))$. Observe that if $q=\gamma(a)$ then $q=\overline{\gamma}(\widetilde{a})$, where $t(\widetilde{a})=a$. Let us consider a Jacobi field $Y(t)$ along $\gamma$ such that $\begin{cases}Y(0)=0\vspace{0.2cm}\\\ <Y(t),\frac{d\gamma}{dt}>_{\alpha}=0,\end{cases}$ and construct the geodesic variation $\gamma:[0,a]\times(-\varepsilon,\varepsilon)\to M$, $(t,u)\mapsto\gamma(t,u)$ such that $\begin{cases}\gamma(t,0)=\gamma(t)\vspace{0.2cm}\\\ \frac{\partial\gamma}{\partial u}\Big{|}_{u=0}=Y(t).\end{cases}$ Since the variation vector field $\frac{\partial\gamma}{\partial u}\Big{|}_{u=0}$ is a Jacobi field, the variation can be chosen such that all the curves $\gamma_{u}(t)$ are $\alpha$-geodesics for any $u\in(-\varepsilon,\varepsilon)$. As in the case of the base geodesic, every curve in the variation can be reparametrized as an $F$-geodesic. 
In other words, for each $u\in(-\varepsilon,\varepsilon)$ there exists a parameter change $t=t(s,u)$, $\frac{\partial t}{\partial s}>0$, such that $\overline{\gamma}(s,u)=\gamma(t(s,u),u).$ We will compute the variation vector field of the variation $\overline{\gamma}(s,u)$ as follows $\frac{\partial\overline{\gamma}}{\partial u}(s,u)=\frac{\partial\gamma}{\partial t}\Big{|}_{(t(s,u),u)}\frac{\partial t}{\partial u}(s,u)+\frac{\partial\gamma}{\partial u}\Big{|}_{(t(s,u),u)}.$ If we evaluate this relation for $u=0$ we get $\frac{\partial\overline{\gamma}}{\partial u}(s,0)=\frac{\partial\gamma}{\partial t}\Big{|}_{(t(s,0),0)}\frac{\partial t}{\partial u}(s,0)+\frac{\partial\gamma}{\partial u}\Big{|}_{(t(s,0),0)},$ that is $\overline{Y}(s)=\frac{\partial\gamma}{\partial t}\Big{|}_{(t(s,0),0)}\frac{\partial t}{\partial u}\Big{|}_{(s,0)}+Y\Big{|}_{(t(s,0),0)}\in T_{\overline{\gamma}(s)}M\equiv T_{\gamma(t(s))}M.$ For a point $q=\gamma(a)=\overline{\gamma}(\widetilde{a})$ this formula reads $\begin{split}\overline{Y}(\widetilde{a})&=\frac{\partial\gamma}{\partial t}\Big{|}_{\widetilde{a}}\frac{\partial t}{\partial u}\Big{|}_{(\widetilde{a},0)}+Y(t(\widetilde{a}))\\\ &=\frac{d\gamma}{dt}\Big{|}_{a}\frac{\partial t}{\partial u}\Big{|}_{(\widetilde{a},0)}+Y(a)\in T_{\overline{\gamma}(\widetilde{a})}M\equiv T_{\gamma(a)}M,\end{split}$ (2.17) i.e. the Jacobi field $\overline{Y}(\widetilde{a})$ is a linear combination of the tangent vector $\frac{d\gamma}{dt}(a)$ and $Y(a)$. Let us assume $q=\overline{\gamma}(\widetilde{a})$ is a conjugate point to $p$ along the $F$-geodesic $\overline{\gamma}$, i.e. $\overline{Y}(\widetilde{a})=0$. Since $Y(a)$ is $\alpha$-orthogonal to the tangent vector $\frac{d\gamma}{dt}(a)$, both terms in this linear combination must vanish; in particular $Y(a)=0$, i.e. $q=\gamma(a)$ is conjugate to $p$ along the $\alpha$-geodesic $\gamma$. 
Conversely, if $q=\gamma(a)$ is conjugate to $p$ along the $\alpha$-geodesic $\gamma$ then (2.17) can be written as $Y(a)=\overline{Y}(s(a))-\frac{d\overline{\gamma}}{ds}(s(a))\frac{ds}{dt}\frac{dt}{du}$ and the conclusion follows from the same linear independence argument as above. 7. Observe that $Cut_{\alpha}(p)\neq\emptyset\Leftrightarrow Cut_{F}(p)\neq\emptyset$. Indeed, if $Cut_{\alpha}(p)=\emptyset$ then all $\alpha$-geodesics from $p$ are globally minimizing. Assume there exists $q\in Cut_{F}(p)$; we may take $q$ to be an end point of $Cut_{F}(p)$, i.e. $q$ must be $F$-conjugate to $p$ along the geodesic $\sigma(s)$ from $p$ to $q$. By (6), the corresponding point on the reparametrized geodesic $\sigma(t)$ is $\alpha$-conjugate to $p$, which is a contradiction. The converse argument is identical. Let us assume $Cut_{\alpha}(p)$ and $Cut_{F}(p)$ are not empty sets. If $q\in Cut_{\alpha}(p)$ then we have two cases: * (i) $q$ is an end point of $Cut_{\alpha}(p)$, i.e. it is conjugate to $p$ along a minimizing geodesic $\gamma$ from $p$ to $q$. Therefore $q$ is the closest conjugate point to $p$ along the $F$-geodesic $\overline{\gamma}$ which is the reparametrization of $\gamma$ (see 6). * (ii) $q$ is an interior point of $Cut_{\alpha}(p)$. Since the set of points of $Cut_{\alpha}(p)$ found at the intersection of exactly two minimizing geodesics of the same length is dense in the closed set $Cut_{\alpha}(p)$, it is enough to consider this kind of cut points. In the case $q\in Cut_{\alpha}(p)$ such that there are two $\alpha$-geodesics $\gamma_{1}$, $\gamma_{2}$ of the same length from $p$ to $q=\gamma_{1}(a)=\gamma_{2}(a)$, then from (4) it is clear that the point $q=\overline{\gamma}_{1}(\widetilde{a})=\overline{\gamma}_{2}(\widetilde{a})$ has the same property with respect to $F$. Hence $Cut_{\alpha}(p)\subset Cut_{F}(p)$. The converse inclusion follows from the same argument as above by exchanging the roles of $\alpha$ and $F$. ###### Remark 2.10 See [INS] for a more general case. 
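Property 3 of Theorem 2.9 can also be illustrated numerically: for an exact $\beta=df$, the $F$-length of an arbitrary curve exceeds its $\alpha$-length by exactly $f(q)-f(p)$, independently of the curve chosen. In the Python sketch below all concrete data (the flat metric $\alpha$ on $\mathbb{R}^{2}$, the potential $f$ with $\|df\|_{\alpha}<1$, and the sample curve) are assumptions made for illustration.

```python
import numpy as np

# Randers metric F = alpha + beta with alpha Euclidean and beta = df exact;
# f(x1, x2) = 0.3*sin(x1), so ||df||_alpha <= 0.3 < 1 and F is a Randers metric.
f  = lambda p: 0.3 * np.sin(p[0])
df = lambda p: np.array([0.3 * np.cos(p[0]), 0.0])

def lengths(curve, n=20000):
    """Midpoint-rule F-length and alpha-length of curve(t), t in [0, 1]."""
    t = np.linspace(0.0, 1.0, n + 1)
    pts = np.array([curve(s) for s in t])
    seg = np.diff(pts, axis=0)                     # chord vectors
    mid = 0.5 * (pts[:-1] + pts[1:])               # chord midpoints
    L_alpha = np.linalg.norm(seg, axis=1).sum()    # Riemannian length
    L_beta = sum(df(x) @ d for x, d in zip(mid, seg))
    return L_alpha + L_beta, L_alpha

curve = lambda s: np.array([2.0 * s, np.sin(np.pi * s)])  # arbitrary path p=(0,0) -> q=(2,0)
L_F, L_a = lengths(curve)
p, q = curve(0.0), curve(1.0)
# Theorem 2.9 (3): L_F(gamma) = L_alpha(gamma) + f(q) - f(p)
assert abs(L_F - (L_a + f(q) - f(p))) < 1e-6
```

Replacing `curve` by any other path with the same endpoints leaves the difference $\mathcal{L}_{F}-\mathcal{L}_{\alpha}$ unchanged, which is the content of property 3.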
We recall the following well-known result for later use. ###### Lemma 2.11 ([HS],[MHSS]) Let $F=\alpha+\beta$ be the solution of Zermelo's navigation problem with navigation data $(h,V)$, $\|V\|_{h}<1$. Then the Legendre dual of $F$ is the Hamiltonian function $F^{*}=\alpha^{*}+\beta^{*}$, where ${\alpha^{*}}^{2}=h^{ij}(x)p_{i}p_{j}$ and $\beta^{*}=V^{i}(x)p_{i}$. Here $(x,p)$ are the canonical coordinates of the cotangent bundle $T^{*}M$. Moreover, $g_{ij}(x,y){g^{*}}^{ik}(x,p)=\delta_{j}^{k}$, where $F^{2}(x,y)=g_{ij}(x,y)y^{i}y^{j}$ and ${F^{*}}^{2}(x,p)={g^{*}}^{ij}(x,p)p_{i}p_{j}$. The following result is similar to its Riemannian counterpart and we give it here with proof. We recall that a smooth vector field $X$ on a Finsler manifold $(M,F)$ is called a Killing field if every local one-parameter transformation group $\\{\varphi_{t}\\}$ of $M$ generated by $X$ consists of local isometries. We also have ###### Proposition 2.12 Let $(M,F)$ be a Finsler manifold (of any dimension) with local coordinates $(x^{i},y^{i})\in TM$ and $X=X^{i}(x)\frac{\partial}{\partial x^{i}}$ a vector field on $M$. The following are equivalent: 1. (i) $X$ is a Killing field for $(M,F)$; 2. (ii) $\mathcal{L}_{\widehat{X}}F=0$, where $\widehat{X}:=X^{i}\frac{\partial}{\partial x^{i}}+y^{j}\frac{\partial X^{i}}{\partial x^{j}}\frac{\partial}{\partial y^{i}}$ is the canonical lift of $X$ to $TM$; 3. (iii) $\frac{\partial g_{ij}}{\partial x^{p}}X^{p}+g_{pj}\frac{\partial X^{p}}{\partial x^{i}}+g_{ip}\frac{\partial X^{p}}{\partial x^{j}}+2C_{ijp}\frac{\partial X^{p}}{\partial x^{q}}y^{q}=0;$ 4. (iv) $X_{i|j}+X_{j|i}+2C_{ij}^{p}X_{p|q}y^{q}=0$, where “ $|$ ” is the $h$-covariant derivative with respect to the Chern connection. 
###### Lemma 2.13 With the notation in Lemma 2.11, the vector field $W=W^{i}(x)\frac{\partial}{\partial x^{i}}$ on $M$ is a Killing field with respect to $F$ if and only if $\\{F^{*},W^{*}\\}=0,$ where $W^{*}=W^{i}(x)p_{i}$ and $\\{\cdot,\cdot\\}$ is the Poisson bracket. ###### Proof. (Proof of Lemma 2.13) Recall that $W$ is a Killing field of $(M,F)$ if and only if every local one-parameter transformation group $\\{\varphi_{t}\\}$ of $M$ generated by $W$ consists of local isometries. A straightforward computation shows that $W$ is Killing on $(M,F)$ if and only if $\mathcal{L}_{\widehat{W}}F=0$, where $\widehat{W}=W^{i}\frac{\partial}{\partial x^{i}}+y^{j}\frac{\partial W^{i}}{\partial x^{j}}\frac{\partial}{\partial y^{i}}$ is the canonical lift of $W$ to $TM$. In local coordinates this is equivalent to $\frac{\partial g_{ij}}{\partial x^{p}}W^{p}+g_{pj}\frac{\partial W^{p}}{\partial x^{i}}+g_{ip}\frac{\partial W^{p}}{\partial x^{j}}+2C_{ijp}\frac{\partial W^{p}}{\partial x^{q}}y^{q}=0.$ (2.18) Since the left hand side is 0-homogeneous in the $y$-variable, this relation is actually equivalent to the relation contracted by $y^{i}y^{j}$, i.e. (2.18) is equivalent to $\left(\frac{\partial g_{ij}}{\partial x^{p}}W^{p}+g_{pj}\frac{\partial W^{p}}{\partial x^{i}}+g_{ip}\frac{\partial W^{p}}{\partial x^{j}}\right)y^{i}y^{j}=0,$ where we use $C_{ijk}y^{i}=0$. 
We get the equivalent relation $\frac{\partial g_{ij}}{\partial x^{p}}W^{p}y^{i}y^{j}+2g_{pj}\frac{\partial W^{p}}{\partial x^{i}}y^{i}y^{j}=0.$ (2.19) Observe that $g_{ij}{g^{*}}^{jk}=\delta_{i}^{k}$ is equivalent to $\frac{\partial g_{ij}}{\partial x^{p}}{g^{*}}^{ik}=-g_{ij}\frac{\partial{g^{*}}^{ik}}{\partial x^{p}}$, hence (2.19) reads $\frac{\partial g_{ij}}{\partial x^{p}}W^{p}\left({g^{*}}^{ik}p_{k}\right)\left({g^{*}}^{jl}p_{l}\right)+2g_{pj}\frac{\partial W^{p}}{\partial x^{i}}\left({g^{*}}^{ik}p_{k}\right)\left({g^{*}}^{jl}p_{l}\right)=0$ and from here $-g_{ij}\frac{\partial{g^{*}}^{ik}}{\partial x^{p}}W^{p}p_{k}{g^{*}}^{jl}p_{l}+2g_{pj}\frac{\partial W^{p}}{\partial x^{i}}\left({g^{*}}^{ik}p_{k}\right)\left({g^{*}}^{jl}p_{l}\right)=0.$ We finally obtain $-\frac{\partial{g^{*}}^{ik}}{\partial x^{p}}W^{p}p_{i}p_{k}+2{g^{*}}^{jk}\frac{\partial W^{i}}{\partial x^{j}}p_{i}p_{k}=0.$ (2.20) On the other hand, we compute $\begin{split}\\{{F^{*}}^{2},W^{*}\\}&=\\{{g^{*}}^{ij}p_{i}p_{j},W^{s}p_{s}\\}\\\ &=\frac{\partial({g^{*}}^{ij}p_{i}p_{j})}{\partial p_{k}}\frac{\partial(W^{s}p_{s})}{\partial x^{k}}-\frac{\partial({g^{*}}^{ij}p_{i}p_{j})}{\partial x^{k}}\frac{\partial(W^{s}p_{s})}{\partial p_{k}}\\\ &=\left(\frac{\partial{g^{*}}^{ij}}{\partial p_{k}}p_{i}p_{j}+2{g^{*}}^{ik}p_{i}\right)\frac{\partial W^{s}}{\partial x^{k}}p_{s}-\frac{\partial{g^{*}}^{ij}}{\partial x^{k}}p_{i}p_{j}W^{k}\\\ &=2{g^{*}}^{ik}\frac{\partial W^{s}}{\partial x^{k}}p_{i}p_{s}-\frac{\partial{g^{*}}^{ij}}{\partial x^{k}}W^{k}p_{i}p_{j}\end{split}$ which is exactly (2.20). Here we have used the 0-homogeneity of ${g^{*}}^{ij}(x,p)$ with respect to $p$, which implies $\frac{\partial{g^{*}}^{ij}}{\partial p_{k}}p_{i}p_{j}=0$. We also observe that for any functions $f,\ g:T^{*}M\to\mathbb{R}$ we have $\\{f^{2},g\\}=2f\\{f,g\\}$, so that $\\{{F^{*}}^{2},W^{*}\\}=0$ if and only if $\\{F^{*},W^{*}\\}=0$. 
Therefore, the following are equivalent: * (i) $W$ is a Killing field on $(M,F)$; * (ii) $\mathcal{L}_{\widehat{W}}F=0$; * (iii) formula (2.19); * (iv) formula (2.20); * (v) $\\{{F^{*}}^{2},W^{*}\\}=0$; * (vi) $\\{F^{*},W^{*}\\}=0$; and the lemma is proved. ###### Proposition 2.14 ([FM]) Let $(M,F)$ be a Finsler manifold and $W=W^{i}(x)\frac{\partial}{\partial x^{i}}$ a Killing field on $(M,F)$ with $F(-W)<1$. If we denote by $\widetilde{F}$ the solution of the Zermelo's navigation problem with data $(F,W)$, then the following are true 1. The $\widetilde{F}$-unit speed geodesics $\mathcal{P}(t)$ can be written as $\mathcal{P}(t)=\varphi(t,\rho(t)),$ where $\varphi_{t}$ is the 1-parameter flow of $W$ and $\rho$ is an $F$-unit speed geodesic. 2. For any Jacobi field $J(t)$ along $\rho(t)$ such that $g_{\dot{\rho}(t)}(\dot{\rho}(t),J(t))=0$, the vector field $\widetilde{J}(t):=\varphi_{t*}(J(t))$ is a Jacobi field along $\mathcal{P}$ and $\widetilde{g}_{\dot{\mathcal{P}}(t)}(\dot{\mathcal{P}}(t),\widetilde{J}(t))=0$. 3. For any $x\in M$ and any flag $(y,V)$ with flag pole $y\in T_{x}M$ and transverse edge $V\in T_{x}M$, the flag curvatures $K$ and $\widetilde{K}$ of $F$ and $\widetilde{F}$, respectively, are related by ${K}(x,y,V)=\widetilde{K}(x,y+W,V)$ provided $y+W$ and $V$ are linearly independent. In the 2-dimensional case, since any Finsler surface is of scalar flag curvature, we get ###### Corollary 2.15 In the two-dimensional case, with the notation in Proposition 2.14, the Gauss curvatures $K$ and $\widetilde{K}$ of $F$ and $\widetilde{F}$ are related by $K(x,y)=\widetilde{K}(x,y+W)$, for any $(x,y)\in TM$. ###### Lemma 2.16 Let $(M,F)$ be a (forward) complete Finsler manifold, and let $W$ be a Killing field with respect to $F$. Then $W$ is a complete vector field on $M$, i.e. for any $x\in M$ the flow $\varphi_{x}(t)$ is defined for any $t$. ###### Proof. 
(Proof of Lemma 2.16) Since $W$ is a Killing field, it is clear that its flow $\varphi$ preserves the Finsler metric $F$ and the field $W$. In other words, for any $p\in M$, the curve $\alpha:(a,b)\to M$, $\alpha(t)=\varphi_{p}(t)$ has constant speed. Indeed, it is trivial to see that $\begin{split}\frac{d}{dt}F(\alpha(t),W_{\alpha(t)})&=\frac{\partial F}{\partial x^{i}}\frac{d\alpha^{i}}{dt}+\frac{\partial F}{\partial y^{i}}\frac{\partial W^{i}}{\partial x^{k}}\frac{d\alpha^{k}}{dt}\\\ &=\frac{\partial F}{\partial x^{i}}W^{i}+\frac{\partial F}{\partial y^{i}}\frac{\partial W^{i}}{\partial x^{k}}W^{k}=(\mathcal{L}_{\widehat{W}}F)(\alpha(t),W_{\alpha(t)})=0.\end{split}$ It means that the $F$-length of $\alpha$ is finite; hence, by completeness, $\alpha$ can be extended to the compact domain $[a,b]$, and therefore $\alpha$ is defined on the whole of $\mathbb{R}$. It follows that $W$ is complete. ###### Theorem 2.17 Let $(M,F)$ be a Finsler manifold (not necessarily Randers) and $W=W^{i}(x)\frac{\partial}{\partial x^{i}}$ a Killing field for $F$, with $F(-W)<1$. If $\widetilde{F}$ is the solution of the Zermelo's navigation problem with data $(M,F)$ and the wind $W$, then the following hold: 1. (i) The point $\mathcal{P}(l)$ is $\widetilde{F}$-conjugate to $\mathcal{P}(0)$ along the $\widetilde{F}$-geodesic $\mathcal{P}(t)=\varphi(t,\rho(t))$ if and only if the corresponding point $\rho(l)=\varphi(-l,\mathcal{P}(l))$ is the $F$-conjugate point to $\mathcal{P}(0)=\rho(0)$ along $\rho$. 2. (ii) $(M,F)$ is (forward) complete if and only if $(M,\widetilde{F})$ is (forward) complete. 3. (iii) If $\rho$ is an $F$-global minimizing geodesic from $p=\rho(0)$ to a point $\widehat{q}=\rho(l)$, then $\mathcal{P}(t)=\varphi(t,\rho(t))$ is an $\widetilde{F}$-global minimizing geodesic from $p=\mathcal{P}(0)$ to $q=\mathcal{P}(l)$, where $l=d_{F}(p,\widehat{q})$. 4. (iv) If $\widehat{q}\in Cut_{F}(p)$ is an $F$-cut point of $p$, then $q=\varphi(l,\widehat{q})\in Cut_{\widetilde{F}}(p)$, i.e. 
it is a $\widetilde{F}$-cut point of $p$, where $l=d_{F}(p,\widehat{q})$. ###### Proof. (Proof of Theorem 2.17) 1. (i) Since $\varphi_{t}(\cdot)$ is a diffeomorphism on $M$ (see Lemma 2.16), it is clear that its tangent map $\varphi_{t*}$ is a regular linear mapping (the Jacobian of $\varphi_{t}$ is non-vanishing). Then Proposition 2.14 shows that $\widetilde{J}$ vanishes if and only if $J$ vanishes, and the conclusion follows easily. 2. (ii) Let us denote by $\exp_{p}:T_{p}M\to M$ and $\widetilde{\exp}_{p}:T_{p}M\to M$ the exponential maps of $F$ and $\widetilde{F}$, respectively. Then $\mathcal{P}(t)=\varphi(t,\rho(t))$ implies $\widetilde{\exp}_{p}(ty)=\varphi_{t}\circ\exp_{p}(t[y-W(p)]).$ (2.21) If $(M,F)$ is complete, the Hopf-Rinow theorem for Finsler manifolds implies that for any $p\in M$, the exponential map $\exp_{p}$ is defined on all of $T_{p}M$. Taking into account Lemma 2.16, from (2.21) it follows that $\widetilde{\exp}_{p}$ is defined on all of $T_{p}M$, and again by the Hopf-Rinow theorem we obtain that $\widetilde{F}$ is complete. The converse proof is similar. 3. (iii) Firstly observe that $l=d_{F}(p,\widehat{q})=\mathcal{L}_{\widetilde{F}}(\mathcal{P})$, since $\widehat{q}=\rho(l)=\varphi(-l,\mathcal{P}(l))$ and $q=\mathcal{P}(l)=\varphi(l,\rho(l))=\varphi(l,\widehat{q})$. Figure 3: Riemannian and Finsler geodesics in Zermelo's navigation problem. We will prove this statement by contradiction (see Figure 3). For this, let us assume that, even though $\rho$ is globally minimizing, the flow-corresponding geodesic $\mathcal{P}$ from $p$ to $q$ is not minimizing anymore. In other words, there must exist a shorter minimizing geodesic $\mathcal{P}_{s}:[0,l_{0}]\to M$ from $p$ to $q=\mathcal{P}_{s}(l_{0})$ such that $d_{\widetilde{F}}(p,q)=l_{0}<l$. (We use the subscript $s$ for short). We consider next the $F$-geodesic $\rho_{s}:[0,l_{0}]\to M$ obtained from $\mathcal{P}_{s}$ by flow deviation, i.e. 
$\rho_{s}(t)=\varphi(-t,\mathcal{P}_{s}(t))$, and denote $q_{0}=\rho_{s}(l_{0})=\varphi(-l_{0},\mathcal{P}_{s}(l_{0}))$. Then the triangle inequality in the geodesic triangle $pq_{0}\widehat{q}$ shows that $\mathcal{L}_{F}(\rho)\leq\mathcal{L}_{F}(\rho_{s})+\mathcal{L}_{F}(\xi),$ where we denote by $\xi$ the flow orbit of $W$ through $q$, oriented from $q_{0}$ to $\widehat{q}$. In other words $\dot{\xi}(t)=-W$, and using the hypothesis $F(-W)<1$, it follows $\mathcal{L}_{F}(\xi)=\int_{a}^{b}F(-W)dt<b-a=\mathcal{L}_{F}(\rho)-\mathcal{L}_{F}(\rho_{s}).$ (2.22) Combining the triangle inequality above with (2.22) gives $\mathcal{L}_{F}(\rho)<\mathcal{L}_{F}(\rho)$, a contradiction; hence $\mathcal{P}$ must be globally minimizing. 4. (iv) It follows from (iii) and the definition of the cut locus. ###### Remark 2.18 Observe that statements (iii) and (iv) are not necessary and sufficient conditions. Indeed, from the proof of (iii) it is clear that, in proving that $\rho$ being a global minimizer implies $\mathcal{P}$ being a global minimizer, we have used the condition $F(-W)<1$, which is equivalent to the fact that the $\widetilde{F}$-indicatrix contains the origin of $T_{p}M$, a necessary condition for $\widetilde{F}$ to be positive definite (see Remark 2.3). Likewise, if we want to show that $\mathcal{P}$ being a global minimizer implies $\rho$ being a global minimizer, we need $F(W)<1$, that is, the indicatrix $\Sigma_{F}$ translated by $-W$ must also contain the origin, i.e. the metric $\widetilde{F}_{2}$ defined by $F(y+\widetilde{F}_{2}W)=\widetilde{F}_{2}$, with the indicatrix $\Sigma_{\widetilde{F}_{2}}=\Sigma_{F}-W$, is also a positive definite Finsler metric. In conclusion, if we assume $F(-W)<1$ and $F(W)<1$, then the statements (iii) and (iv) in Theorem 2.17 can be written with “if and only if”. ###### Lemma 2.19 Let $F=\alpha+\beta$ be the solution of Zermelo's navigation problem with navigation data $(h,V)$. 
Then a vector field $W$ on $M$ is Killing with respect to $F=\alpha+\beta$ if $W$ is Killing with respect to $h$ and $[V,W]=0$, where $[\cdot,\cdot]$ is the Lie bracket. ###### Proof. (Proof of Lemma 2.19) The proof is immediate from Lemmas 2.11 and 2.13. Indeed, $W$ is Killing on $(M,F)$ if and only if $\\{F^{*},W^{*}\\}=0$, i.e. $\\{\alpha^{*}+\beta^{*},W^{*}\\}=\\{\alpha^{*},W^{*}\\}+\\{\beta^{*},W^{*}\\}=0$. If $W$ is Killing with respect to $h$, i.e. $\\{\alpha^{*},W^{*}\\}=0$, and $\\{\beta^{*},W^{*}\\}=\\{V^{*},W^{*}\\}=0$, then this condition is satisfied. Let us observe that $\\{V^{*},W^{*}\\}=0$ is actually equivalent to $[V,W]=0$; geometrically, this means that the flows of $V$ and $W$ commute locally. The conclusion follows. Observe that in local coordinates the conditions in Lemma 2.19 read $\begin{cases}W_{i:j}+W_{j:i}=0\vspace{0.2cm}\\\ \sum_{i=1}^{n}\left(\frac{\partial W^{k}}{\partial x^{i}}V^{i}-\frac{\partial V^{k}}{\partial x^{i}}W^{i}\right)=0,\end{cases}$ where “$:$” is the covariant derivative with respect to the Levi-Civita connection of $h$. ###### Theorem 2.20 Let $(M,h)$ be a simply connected Riemannian manifold and $V=V^{i}\frac{\partial}{\partial x^{i}}$, $W=W^{i}\frac{\partial}{\partial x^{i}}$ vector fields on $M$ such that 1. (i) $V$ satisfies the differential relation $d\eta=d(\log\lambda)\wedge\eta,$ (2.23) where $\eta=V_{i}(x)dx^{i}$, $V_{i}=h_{ij}V^{j}$; 2. (ii) $W$ is Killing with respect to $h$ and $\\{V^{*},W^{*}\\}=0$, where $V^{*}=V^{i}p_{i}$ and $W^{*}=W^{i}p_{i}$. Let $F_{1}=\alpha+\beta$ be the solution of the Zermelo's navigation problem with data $(h,V)$ and $\widetilde{F}$ the solution with data $(F_{1},W)$. Then 1. (i) The $\widetilde{F}$-unit speed geodesics $\mathcal{P}(t)$ are given by $\mathcal{P}(t)=\varphi(t,\rho(t)),$ where $\varphi$ is the flow of $W$ and $\rho(t)$ is an $F_{1}$-unit speed geodesic. Equivalently, $\mathcal{P}(t)=\varphi(t,\gamma(s(t))),$ where $\gamma(s)$ is an $\alpha$-unit speed geodesic and $s=s(t)$ is the inverse of the parameter change $t=\int_{0}^{s}F_{1}\left(\gamma(\tau),\frac{d\gamma}{d\tau}\right)d\tau$. 2. 
(ii) The point $\mathcal{P}(l)$ is conjugate to $\mathcal{P}(0)=p$ along the $\widetilde{F}$-geodesic $\mathcal{P}(t)$ if and only if the corresponding point $\widehat{q}=\rho(l)=\varphi(-l,\mathcal{P}(l))$ on the $F_{1}$-geodesic $\rho$ is conjugate to $p$, or equivalently, $\widehat{q}$ is conjugate to $p$ along the $\alpha$-geodesic from $p$ to $\widehat{q}$. 3. (iii) If $\widehat{q}\in Cut_{\alpha}(p)$ then $q=\varphi(l,\widehat{q})\in Cut_{\widetilde{F}}(p)$, where $l=d_{F_{1}}(p,\widehat{q})=d_{\alpha}(p,\widehat{q})+f(\widehat{q})-f(p)$. ###### Proof. (Proof of Theorem 2.20) All statements follow immediately by combining Theorem 2.9 with Theorem 2.17. ###### Remark 2.21 Informally, we may say that the cut locus of $p$ with respect to $\widetilde{F}$ is the $W$-flow deformation of the cut locus of $p$ with respect to $F_{1}$, that is, the $W$-flow deformation of the cut locus of $p$ with respect to $\alpha$, due to Theorem 2.9, 7. ## 3 Surfaces of revolution ### 3.1 Finsler surfaces of revolution Let $(M,F)$ be a (forward) complete oriented Finsler surface, and $W$ a vector field on $M$ whose one-parameter group of transformations $\\{\varphi_{t}:t\in I\\}$ consists of $F$-isometries, i.e. $F(\varphi_{t}(x),\varphi_{t,x}(y))=F(x,y),\quad\text{for all}\ (x,y)\in TM\ \text{and any}\ t\in\mathbb{R}.$ This is equivalent to $d_{F}(\varphi_{t}(q_{1}),\varphi_{t}(q_{2}))=d_{F}(q_{1},q_{2}),$ for any $q_{1},q_{2}\in M$ and any given $t$, where $d_{F}$ is the Finslerian distance on $M$. If $\varphi_{t}$ is not the identity map, then it is known that $W$ must have at most two zeros on $M$. We assume hereafter that $W$ has no zeros; hence from the Poincaré-Hopf theorem it follows that $M$ is a surface homeomorphic to a plane, a cylinder or a torus. Furthermore, we assume that $M$ is the topological cylinder $\mathbb{S}^{1}\times\mathbb{R}$. 
By definition it follows that, at any $x\in M$, $W_{x}$ is tangent to the curve $\varphi_{x}(t)$ at the point $x=\varphi_{x}(0)$. The set of points $Orb_{W}(x):=\\{\varphi_{t}(x):t\in\mathbb{R}\\}$ is called the orbit of $W$ through $x$, or a parallel circle, and it can be seen that the period $\tau(x):=\min\\{t>0:\varphi_{t}(x)=x\\}$ is constant along each orbit. ###### Definition 3.1 A (forward) complete oriented Finsler surface $(M,F)$ homeomorphic to $\mathbb{S}^{1}\times\mathbb{R}$, with a vector field $W$ that has no zero points, is called a Finsler cylinder of revolution, and $\varphi_{t}$ a rotation on $M$. It is clear from our construction above that $W$ is a Killing field on the surface of revolution $(M,F)$. ### 3.2 The Riemannian case The simplest case is when the Finsler norm $F$ is actually a Riemannian one. A Riemannian cylinder of revolution $(M,h)$ is a complete Riemannian manifold $M=\mathbb{S}^{1}\times\mathbb{R}=\\{(r,\theta):r\in\mathbb{R},\ \theta\in[0,2\pi)\\}$ with a warped product metric $h=dr^{2}+m^{2}(r)d\theta^{2}$ (3.1) of the real line $(\mathbb{R},dr^{2})$ and the unit circle $(\mathbb{S}^{1},d\theta^{2})$. Suppose that the warping function $m$ is a positive-valued even function. Recall that the equations of an $h$-unit speed geodesic $\gamma(s):=(r(s),\theta(s))$ of $(M,h)$ are $\begin{cases}\frac{d^{2}r}{ds^{2}}-mm^{\prime}\left(\frac{d\theta}{ds}\right)^{2}=0\vspace{0.2cm}\\\ \frac{d^{2}\theta}{ds^{2}}+2\frac{m^{\prime}}{m}\frac{dr}{ds}\frac{d\theta}{ds}=0\end{cases},$ (3.2) with the unit speed parametrization condition $\left(\frac{dr}{ds}\right)^{2}+m^{2}\left(\frac{d\theta}{ds}\right)^{2}=1.$ (3.3) It follows that every profile curve $\\{\theta=\theta_{0}\\}$, or meridian, is an $h$-geodesic, and that a parallel $\\{r=r_{0}\\}$ is a geodesic if and only if $m^{\prime}(r_{0})=0$, where $\theta_{0}\in[0,2\pi)$ and $r_{0}\in\mathbb{R}$ are constants. 
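The geodesic equations (3.2) and the unit speed condition (3.3) can be integrated numerically; doing so also exhibits the conservation of the quantity $m^{2}(r)\,d\theta/ds$ along $h$-geodesics. In the Python sketch below the warping function $m(r)=e^{-r^{2}}$, the initial data and the step sizes are illustrative assumptions, not part of the text.

```python
import numpy as np

# Cylinder of revolution h = dr^2 + m^2(r) dtheta^2 with assumed warping m(r) = exp(-r^2).
m  = lambda r: np.exp(-r**2)
mp = lambda r: -2.0 * r * np.exp(-r**2)   # m'(r)

def rhs(y):
    """Right-hand side of (3.2) as a first-order system y = (r, theta, dr/ds, dtheta/ds)."""
    r, th, dr, dth = y
    return np.array([dr, dth, m(r) * mp(r) * dth**2, -2.0 * mp(r) / m(r) * dr * dth])

def rk4(y, h, steps):
    """Classical 4th-order Runge-Kutta integration."""
    for _ in range(steps):
        k1 = rhs(y); k2 = rhs(y + h/2*k1); k3 = rhs(y + h/2*k2); k4 = rhs(y + h*k3)
        y = y + h/6*(k1 + 2*k2 + 2*k3 + k4)
    return y

# h-unit initial velocity making angle phi0 with the meridian: dr = cos(phi0), dtheta = sin(phi0)/m
phi0, r0 = 0.7, 0.2
y0 = np.array([r0, 0.0, np.cos(phi0), np.sin(phi0) / m(r0)])
nu0 = m(r0)**2 * y0[3]                    # conserved quantity m^2 dtheta/ds
yT = rk4(y0, 1e-3, 3000)                  # integrate up to s = 3
assert abs(m(yT[0])**2 * yT[3] - nu0) < 1e-6
```

The same integration also preserves the unit speed condition (3.3) up to the integrator's accuracy, which is a useful consistency check on the signs in (3.2).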
It is clear that two meridians do not intersect on $M$ and, for a point $p\in M$, the meridian through $p$ does not contain any cut points of $p$; that is, this meridian is a ray through $p$ and hence $d_{h}(\gamma(0),\gamma(s))=s$, for all $s\geq 0$. We observe that (3.2) implies $\frac{d\theta(s)}{ds}m^{2}(r(s))=\nu,\quad\nu\text{ is constant},$ (3.4) that is, the quantity $\frac{d\theta}{ds}m^{2}$ is conserved along the $h$-geodesics. Figure 4: The angle $\phi$ between $\dot{\gamma}$ and a meridian for a cylinder of revolution. If $\gamma(s)=(r(s),\theta(s))$ is a geodesic on the surface of revolution $(M,h)$, then the angle $\phi(s)$ between $\dot{\gamma}$ and the profile curve passing through the point $\gamma(s)$ satisfies the Clairaut relation $m(r(s))\sin\phi(s)=\nu$. The constant $\nu$ is called the Clairaut constant (see Figure 4). We recall the theorem on the structure of the cut locus of a cylinder of revolution from [C1]. ###### Theorem 3.2 Let $(M,h)$ be a cylinder of revolution whose warping function $m:\mathbb{R}\to\mathbb{R}$ is a positive-valued even function and whose Gaussian curvature $G_{h}(r)=-\frac{m^{\prime\prime}(r)}{m(r)}$ is decreasing along the half meridian. If the Gaussian curvature of $M$ is positive at $r=0$, then the structure of the cut locus $C_{q}$ of a point $q$ with $\theta(q)=0$ in $M$ is given as follows: 1. The cut locus $C_{q}$ is the union of a subarc of the parallel $r=-r(q)$ opposite to $q$ and the meridian opposite to $q$ if $|r(q)|<r_{0}:=\sup\\{r>0|m^{\prime}(r)<0\\}$ and $\varphi(m(r(q)))<\pi$, i.e. $C_{q}=\theta^{-1}(\pi)\cup(r^{-1}(-r(q))\cap\theta^{-1}[\varphi(m(r(q))),2\pi-\varphi(m(r(q)))]).$ 2. The cut locus $C_{q}$ is the meridian $\theta^{-1}(\pi)$ opposite to $q$ if $\varphi(m(r(q)))\geq\pi$ or if $|r(q)|\geq r_{0}$. 
Here the function $\varphi(\nu)$ on $(\inf m,m(0))$ is defined as $\varphi(\nu):=2\int_{0}^{\xi(\nu)}\frac{\nu}{m\sqrt{m^{2}-\nu^{2}}}dr,$ where $\xi(\nu):=\min\\{r>0|m(r)=\nu\\}$. ###### Remark 3.3 1. It is easy to see that if the Gauss curvature $G_{h}<0$ everywhere, then $h$-geodesics cannot have conjugate points. It follows that in this case the $h$-cut locus of a point $p\in M$ is the opposite meridian to the point. 2. See [C2] for a more general class of Riemannian cylinders of revolution whose cut locus can be determined. ## 4 Randers rotational metrics ### 4.1 The navigation with wind $\widetilde{W}=A(r)\frac{\partial}{\partial r}+B\frac{\partial}{\partial\theta}$ Let $h$ be the Riemannian metric (3.1) on the topological cylinder $M=\\{(r,\theta):r\in\mathbb{R},\ \theta\in[0,2\pi)\\}$ such that the Gaussian curvature $G_{h}\neq 0$, i.e. $m(r)$ is not a linear function. We will make this assumption throughout the paper. ###### Proposition 4.1 Let $(M,h)$ be the topological cylinder $\mathbb{R}\times\mathbb{S}^{1}$ with its Riemannian metric $h$ and let $\widetilde{W}=A(r)\frac{\partial}{\partial r}+B\frac{\partial}{\partial\theta}$ be a vector field on $M$, where $A=A(r)$ is a smooth function on $\mathbb{R}$ and $B$ is a constant, such that $A^{2}(r)+B^{2}m^{2}(r)<1$. Then 1. (i) The solution of the Zermelo's navigation problem for $(M,h)$ and wind $\widetilde{W}$ is the Randers metric $\widetilde{F}=\widetilde{\alpha}+\widetilde{\beta}$, where $(\widetilde{a}_{ij})=\frac{1}{\Lambda^{2}}\begin{pmatrix}1-B^{2}m^{2}(r)&BA(r)m^{2}(r)\\\ BA(r)m^{2}(r)&m^{2}(r)(1-A^{2}(r))\end{pmatrix},\ (\widetilde{b}_{i})=\frac{1}{\Lambda}\begin{pmatrix}-A(r)\\\ -Bm^{2}(r)\end{pmatrix},$ (4.1) and $\Lambda:=1-\|\widetilde{W}\|_{h}^{2}=1-A^{2}(r)-B^{2}m^{2}(r)>0$. 2. 
(ii) The solution of Zermelo's navigation problem for the data $(M,h)$ and wind $V=A(r)\frac{\partial}{\partial r}$, $A^{2}(r)<1$, is the Randers metric $F=\alpha+\beta$, where $(a_{ij})=\frac{1}{\lambda^{2}}\begin{pmatrix}1&0\\\ 0&\lambda m^{2}(r)\end{pmatrix},\ (b_{i})=\frac{1}{\lambda}\begin{pmatrix}-A(r)\\\ 0\end{pmatrix},$ (4.2) and $\lambda:=1-\|V\|_{h}^{2}=1-A^{2}(r)>0$. 3. (iii) The solution of Zermelo's navigation problem for $(M,F=\alpha+\beta)$ and wind $W=B\frac{\partial}{\partial\theta}$, $F(-W)<1$, is the Randers metric $\widetilde{F}=\widetilde{\alpha}+\widetilde{\beta}$ given in (4.1). ###### Proof. (Proof of Proposition 4.1) 1. (i) The solution of the Zermelo's navigation problem with $(M,h)$ and $\widetilde{W}=(\widetilde{W}^{1},\widetilde{W}^{2})=(A(r),B)$ is obtained from (2.8) with $\Lambda=1-\|\widetilde{W}\|_{h}^{2}=1-A^{2}(r)-B^{2}m^{2}(r)$. Taking into account that $\widetilde{W}_{i}=h_{ij}\widetilde{W}^{j}$, it follows $(\widetilde{W}_{1},\widetilde{W}_{2})=(A(r),Bm^{2}(r))$, and a straightforward computation leads to (4.1). 2. (ii) Similar to (i), using $(M,h)$ and $V=(V^{1},V^{2})=(A(r),0)$, hence $(V_{1},V_{2})=(A(r),0)$ and $\lambda=1-\|V\|_{h}^{2}=1-A^{2}(r)$. 3. (iii) Follows from Theorem 2.5. We observe that $\Lambda=1-A^{2}(r)-B^{2}m^{2}(r)>0$ is actually equivalent to $A^{2}(r)<1$ and $F(-W)<1$. Indeed, $\begin{split}1-A^{2}(r)-B^{2}m^{2}(r)>0\Rightarrow 1-A^{2}(r)>B^{2}m^{2}(r)\geq 0\Rightarrow A^{2}(r)<1,\end{split}$ and $\begin{split}B^{2}m^{2}(r)<1-A^{2}(r)\Rightarrow\frac{|B|m(r)}{\sqrt{1-A^{2}(r)}}<1\Rightarrow F(-W)<1,\end{split}$ where we use $F(-W)=\sqrt{a_{22}(-B)^{2}}=\frac{|B|m(r)}{\sqrt{1-A^{2}(r)}}$. ###### Remark 4.2 1. Observe that we actually perform a rigid translation of the Riemannian indicatrix $\Sigma_{h}$ by $\widetilde{W}$, which is equivalent to translating $\Sigma_{h}$ by $V$ followed by the translation of $\Sigma_{F}$ by $W$ (see Remark 2.3). 2. 
Observe that the Randers metric given by (4.1) on the cylinder $\mathbb{R}\times\mathbb{S}^{1}$ is rotationally invariant, hence $(M,\widetilde{\alpha}+\widetilde{\beta})$ is a Finslerian surface of revolution. Randers metrics of this type are called Randers rotational metrics. Indeed, let us denote $m_{F}(r):=F(\frac{\partial}{\partial\theta})$. Observe that in the case $A(r)$ is an odd or even function, the function $m_{F}(r)$ is an even function such that $m_{F}(0)>0$. Theorem 2.20 implies ###### Theorem 4.3 Let $(M,h)$ be the topological cylinder $\mathbb{R}\times\mathbb{S}^{1}$ with the Riemannian metric $h=dr^{2}+m^{2}(r)d\theta^{2}$ and $\widetilde{W}=A(r)\frac{\partial}{\partial r}+B\frac{\partial}{\partial\theta}$, $A^{2}(r)+B^{2}m^{2}(r)<1$. If we denote by $\widetilde{F}=\widetilde{\alpha}+\widetilde{\beta}$ the solution of Zermelo’s navigation problem for $(M,h)$ and $\widetilde{W}$, then the following statements are true. 1. (i) The $\widetilde{F}$-unit speed geodesics $\mathcal{P}(t)$ are given by $\mathcal{P}(t)=(r(s(t)),\theta(s(t))+B\cdot s(t)),$ where $\rho(s)=(r(s),\theta(s))$ is an $\alpha$-unit speed geodesic and $t=t(s)$ is the parameter change $t=\int_{0}^{s}F(\rho(\tau),\dot{\rho}(\tau))d\tau$. 2. (ii) The point $q=\mathcal{P}(l)$ is conjugate to $\mathcal{P}(0)=p$ along $\mathcal{P}$ if and only if $\widehat{q}=(r(q),\theta(q)-Bl)$ is conjugate to $p$ with respect to $\alpha$ along the $\alpha$-geodesic from $p$ to $\widehat{q}$. 3. (iii) The point $\widehat{q}\in Cut_{\alpha}(p)$ is an $\alpha$-cut point of $p$ if and only if $q=(r(\widehat{q}),\theta(\widehat{q})+Bl)\in Cut_{\widetilde{F}}(p)$, where $l=d_{\widetilde{F}}(p,q)$. ###### Proof. (Proof of Theorem 4.3) First of all, observe that $V=A(r)\frac{\partial}{\partial r}$ and $W=B\frac{\partial}{\partial\theta}$ satisfy conditions (i), (ii) in the hypothesis of Theorem 2.20. Indeed, since $(M,h)$ is a surface of revolution and $V=(A(r),0)$, it follows that $\eta=A(r)dr$ is a closed form, hence (2.23) is satisfied.
Moreover, $W=B\frac{\partial}{\partial\theta}$ is obviously a Killing field with respect to $h$, and it is trivial to see that $[V,W]=\left[A(r)\frac{\partial}{\partial r},B\frac{\partial}{\partial\theta}\right]=0$. The statements (i)-(iii) follow now from Theorem 2.20 and the fact that the flow of $W=B\frac{\partial}{\partial\theta}$ is just $\varphi_{t}(r,\theta)=(r,\theta+Bt)$ for any $(r,\theta)\in M$, $t\in\mathbb{R}$. In this case, $\beta(W)=0$, hence $F(-W)=F(W)=\alpha(W)<1$, so (iii) is a necessary and sufficient condition. We have reduced the geometry of the Randers type metric $(M,\widetilde{F})$ to the geometry of the Riemannian manifold $(M,\alpha)$, obtained from $(M,h)$ by (4.2). ###### Example 4.4 Let us observe that there are many cylinders $(M,h)$ and winds $\widetilde{W}$ satisfying the conditions in Theorem 4.3. For instance, let us consider the topological cylinder $\mathbb{R}\times\mathbb{S}^{1}$ with the Riemannian metric $h=dr^{2}+m^{2}(r)d\theta^{2}$ defined using the warp function $m(r)=e^{-r^{2}}$. Consider the smooth function $A:\mathbb{R}\to\left(-\frac{1}{\sqrt{2}},\frac{1}{\sqrt{2}}\right)$, $A(r)=\frac{1}{\sqrt{2}}\frac{r}{\sqrt{r^{2}+1}}$ and any constant $B\in\left(-\frac{1}{\sqrt{2}},\frac{1}{\sqrt{2}}\right)$. Then $A^{2}(r)+B^{2}m^{2}(r)<\frac{1}{2}+B^{2}m^{2}(r)\leq\frac{1}{2}+B^{2}<1$. In this case $\widetilde{F}=\widetilde{\alpha}+\widetilde{\beta}$ is given by $\begin{split}(\widetilde{a}_{ij})=\frac{1}{\Lambda^{2}}\begin{pmatrix}1-B^{2}e^{-2r^{2}}&\frac{B}{\sqrt{2}}\frac{re^{-2r^{2}}}{\sqrt{r^{2}+1}}\vspace{0.2cm}\\\ \frac{B}{\sqrt{2}}\frac{re^{-2r^{2}}}{\sqrt{r^{2}+1}}&\frac{1}{2}\frac{(r^{2}+2)e^{-2r^{2}}}{r^{2}+1}\end{pmatrix},\ (\widetilde{b}_{i})=\frac{1}{\Lambda}\begin{pmatrix}-\frac{1}{\sqrt{2}}\frac{r}{\sqrt{r^{2}+1}}\vspace{0.2cm}\\\ -Be^{-2r^{2}}\end{pmatrix},\end{split}$ where $\Lambda=\frac{1}{2}\frac{r^{2}+2}{r^{2}+1}-B^{2}e^{-2r^{2}}$.
Observe that $F=\alpha+\beta$ is given by $\begin{split}(a_{ij})=\frac{1}{\lambda^{2}}\begin{pmatrix}1&0\vspace{0.2cm}\\\ 0&\lambda e^{-2r^{2}}\end{pmatrix},\ (b_{i})=\frac{1}{\lambda}\begin{pmatrix}-\frac{1}{\sqrt{2}}\frac{r}{\sqrt{r^{2}+1}}\vspace{0.2cm}\\\ 0\end{pmatrix},\\\ \end{split}$ where $\lambda=\frac{1}{2}\frac{r^{2}+2}{r^{2}+1}$. Moreover, we have ###### Corollary 4.5 1. (I) With the notations in Theorem 4.3, let us assume that $(M,h)$ has negative curvature everywhere, $G_{h}(r)<0$ for all $r\in\mathbb{R}$. If there exists a smooth function $A:\mathbb{R}\to(-1,1)$ and a constant $B$ such that $A^{2}(r)+B^{2}m^{2}(r)<1$, and if $G_{\alpha}(r)<0$ everywhere, then the $\alpha$-cut locus and the $F=\alpha+\beta$ cut locus of a point $p\in M$ are the opposite meridian to the point $p$. Moreover, the $\widetilde{F}=\widetilde{\alpha}+\widetilde{\beta}$ cut locus of $p=(r_{0},\theta_{0})$ is the opposite meridian deformed by the flow of the vector field $W=B\frac{\partial}{\partial\theta}$, i.e. $(r(t),\theta_{0}+\pi+Bt)$, for all $t\in\mathbb{R}$, where $(r(t),\theta_{0}+\pi)$ is the opposite meridian to $p=(r_{0},\theta_{0})$, $r(0)=r_{0}$. 2. (II) With the notations in Theorem 4.3, let us assume that $(M,h)$ has Gaussian curvature $G_{h}(r)$ decreasing along any half meridian $[0,\infty)$ and $G_{h}(0)\geq 0$. If there exists a smooth function $A:\mathbb{R}\to(-1,1)$ and a constant $B$ such that $A^{2}(r)+B^{2}m^{2}(r)<1$, $G_{\alpha}(r)$ is decreasing along any half meridian and $G_{\alpha}\geq 0$, then the $\alpha$-cut locus and the $F$-cut locus of a point $p=(r_{0},\theta_{0})$ are given in Theorem 3.2. Moreover, the $\widetilde{F}$-cut locus of $p$ is obtained by deforming the cut locus described in Theorem 3.2 by the flow of $W=B\frac{\partial}{\partial\theta}$. ###### Proof. (Proof of Corollary 4.5) It is trivial by combining Proposition 4.1 and Theorem 4.3.
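As a numerical sanity check of the closed forms in Example 4.4, the sketch below evaluates the navigation condition and the formula for $\Lambda$ at a few sample points (the sample points and the value of $B$ are our choices):

```python
import math

def m(r):                 # warp function m(r) = exp(-r^2)
    return math.exp(-r * r)

def A(r):                 # wind component A(r) = r / sqrt(2 (r^2 + 1))
    return r / math.sqrt(2.0 * (r * r + 1.0))

B = 0.5                   # any constant in (-1/sqrt(2), 1/sqrt(2))

for r in (-2.0, -0.3, 0.0, 1.0, 1.7):
    # navigation condition: ||W~||_h^2 = A^2 + B^2 m^2 < 1
    assert A(r) ** 2 + B ** 2 * m(r) ** 2 < 1.0
    # Lambda = 1 - A^2 - B^2 m^2 must match the closed form of Example 4.4
    Lam = 1.0 - A(r) ** 2 - B ** 2 * m(r) ** 2
    closed = 0.5 * (r * r + 2.0) / (r * r + 1.0) - B ** 2 * math.exp(-2.0 * r * r)
    assert abs(Lam - closed) < 1e-12
```

The check only confirms pointwise agreement of the two expressions; it is not a substitute for the symbolic computation in the example.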
###### Remark 4.6 It is not trivial to obtain concrete examples satisfying conditions (I) and (II) in Corollary 4.5 in the case $A\neq$ constant. We conjecture that such examples exist, leaving the concrete construction for forthcoming research. The case $A=$ constant is treated below. ### 4.2 The case $\widetilde{W}=A\frac{\partial}{\partial r}+B\frac{\partial}{\partial\theta}$ Consider the case $\widetilde{W}=(\widetilde{W}^{1},\widetilde{W}^{2})=(A,B)$, where $A$ and $B$ are constants, on the topological cylinder $M=\\{(r,\theta):r\in\mathbb{R},\ \theta\in[0,2\pi)\\}$. Here $m:\mathbb{R}\to[0,\infty)$ is an even bounded function such that $m^{2}<\frac{1-A^{2}}{B^{2}}$, $|A|<1$, $B\neq 0$. Proposition 4.1 and Theorem 4.3 can easily be rewritten for this case by putting $A(r)=A=$ constant. We will not write them again here. Instead, let us give some properties specific to this case. A straightforward computation gives: ###### Proposition 4.7 Let $(M,h)$ be the cylinder $\mathbb{R}\times\mathbb{S}^{1}$ with its Riemannian metric $h$, and let $\widetilde{W}=A\frac{\partial}{\partial r}+B\frac{\partial}{\partial\theta}$, with $A,B$ real constants such that $m^{2}<\frac{1-A^{2}}{B^{2}}$, $|A|<1$, $B\neq 0$. Then, the following statements are true. 1. (I) The Gauss curvatures $G_{h}$ and $G_{\alpha}$ of $(M,h)$ and $(M,\alpha)$, respectively, are proportional, i.e. $G_{\alpha}(r)=\lambda^{2}G_{h}(r),$ where $\alpha$ is the Riemannian metric obtained in the solution of Zermelo’s navigation problem for $(M,h)$ and $V=A\frac{\partial}{\partial r}$. 2. (II) The geodesic flows $S_{h}$ and $S_{\alpha}$ of $(M,h)$ and $(M,\alpha)$, respectively, satisfy $S_{h}=S_{\alpha}+\Delta,$ where $\Delta=-2A^{2}mm^{\prime\prime}(y^{2})^{2}\frac{\partial}{\partial y^{1}}$ is the difference vector field on $TM$ endowed with the canonical coordinates $(r,\theta;y^{1},y^{2})$.
Moreover, we have ###### Theorem 4.8 If $(M,h)$ is a Riemannian metric on the cylinder $M=\mathbb{R}\times\mathbb{S}^{1}$ with bounded warp function $m(r)<\frac{\sqrt{1-A^{2}}}{|B|}$, where $A,B$ are constants, $|A|<1$, $B\neq 0$, and wind $\widetilde{W}=A\frac{\partial}{\partial r}+B\frac{\partial}{\partial\theta}$, then the following statements hold. 1. (I) If $G_{h}(r)<0$ everywhere, then 1. (i) the $\alpha$-cut locus of a point $p$ is the opposite meridian. 2. (ii) the $F$-cut locus of a point $p$ is the opposite meridian, where $F=\alpha+\beta$, $\begin{split}(a_{ij})=\frac{1}{\lambda^{2}}\begin{pmatrix}1&0\\\ 0&\lambda m^{2}(r)\end{pmatrix},\ (b_{i})=\frac{1}{\lambda}\begin{pmatrix}-A\\\ 0\end{pmatrix},\end{split}$ and $\lambda:=1-\|V\|_{h}^{2}=1-A^{2}>0$. 3. (iii) The $\widetilde{F}$-cut locus of a point $p$ is the opposite meridian twisted by the flow action $\varphi_{t}(r,\theta)=(r,\theta+Bt)$. 2. (II) With the notations in Theorem 4.3, let us assume that the Gaussian curvature $G_{h}(r)$ of $(M,h)$ is decreasing along any half meridian $[0,\infty)$ and $G_{h}(0)\geq 0$. Then the cut locus of $\widetilde{F}=\widetilde{\alpha}+\widetilde{\beta}$ is a subarc of the opposite meridian or of the opposite parallel, deformed by the flow of $W=B\frac{\partial}{\partial\theta}$. ###### Example 4.9 There are many examples satisfying this theorem. For instance, the choice $m(r)=e^{-r^{2}}\leq 1$ gives $G_{h}(r)=2-4r^{2}$, which is decreasing on $[0,\infty)$ with $G_{h}(0)=2>0$. Any choice of constants $A,B$ such that $1<\frac{\sqrt{1-A^{2}}}{|B|}$, i.e. $A^{2}+B^{2}<1$, is suitable for $\widetilde{W}$ (for instance $(A,B)=(\frac{1}{2}\sin\omega,\frac{1}{2}\cos\omega)$ for a fixed angle $\omega$). Many other examples are possible. ###### Remark 4.10 A similar study can be done for the case $B=0$. ###### Remark 4.11 The extension to the Randers case of the Riemannian cylinders of revolution studied in [C2], and the study of their cut loci, can be done in a similar manner.
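The curvature computation in Example 4.9 can be checked numerically; the sketch below uses the classical warped-metric formula $G_{h}(r)=-m^{\prime\prime}(r)/m(r)$ for $h=dr^{2}+m^{2}(r)d\theta^{2}$ and a central-difference second derivative (step size and sample points are our choices):

```python
import math

def m(r):
    return math.exp(-r * r)   # the warp function of Example 4.9

def G_h(r, h=1e-4):
    # Gauss curvature of dr^2 + m^2(r) dtheta^2 is -m''(r)/m(r);
    # approximate m'' by central differences
    m2 = (m(r + h) - 2.0 * m(r) + m(r - h)) / (h * h)
    return -m2 / m(r)

# closed form for m(r) = exp(-r^2): G_h(r) = 2 - 4 r^2
for r in (0.0, 0.5, 1.0, 2.0):
    assert abs(G_h(r) - (2.0 - 4.0 * r * r)) < 1e-4
assert G_h(0.0) > 0.0                  # G_h(0) = 2 > 0
assert G_h(1.0) < G_h(0.5) < G_h(0.0)  # decreasing on [0, infinity)
```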
## References * [B] D. Bao, On two curvature-driven problems in Riemann-Finsler geometry, Finsler geometry, Sapporo 2005 - in memory of Makoto Matsumoto, 19–71, Adv. Stud. Pure Math., 48, Math. Soc. Japan, Tokyo, 2007. * [BCS] D. Bao, S. S. Chern, Z. Shen, An Introduction to Riemann-Finsler Geometry, Springer, GTM 200, 2000. * [BRS] D. Bao, C. Robles and Z. Shen, Zermelo navigation on Riemannian manifolds, J. Diff. Geom. 66 (2004), 377–435. * [Br] R. Bryant, Projectively flat Finsler 2-spheres of constant flag curvature, Selecta Mathematica (NS) 3 (1997), no. 2, 161–203. * [C1] P. Chitsakul, The structure theorem for the cut locus of a certain class of cylinders of revolution I. Tokyo J. Math. 37 (2014), no. 2, 473–484. * [C2] P. Chitsakul, The structure theorem for the cut locus of a certain class of cylinders of revolution II. Tokyo J. Math. 38 (2015), no. 1, 239–248. * [FM] P. Foulon, V. S. Matveev, Zermelo deformation of Finsler metrics by Killing vector fields. Electron. Res. Announc. Math. Sci. 25 (2018), 1–7. * [HCS] R. Hama, P. Chitsakul, S. V. Sabau, The geometry of a Randers rotational surface. Publ. Math. Debrecen 87 (2015), no. 3-4, 473–502. * [HKS] R. Hama, J. Kasemsuwan, S. V. Sabau, The cut locus of a Randers rotational 2-sphere of revolution. Publ. Math. Debrecen 93 (2018), no. 3-4, 387–412. * [HS] D. Hrimiuc, H. Shimada, On the $\mathcal{L}$-duality between Lagrange and Hamilton manifolds. Nonlinear World 3 (1996), no. 4, 613–641. * [INS] N. Innami, T. Nagano, K. Shiohama, Geodesics in a Finsler surface with one-parameter group of motions, Publ. Math. Debrecen 89 (2016), no. 1-2, 137–160. * [IS] J. Itoh, S. V. Sabau, Riemannian and Finslerian spheres with fractal cut loci, Diff. Geom. and its Applications 49 (2016), 43–64. * [MHSS] R. Miron, D. Hrimiuc, H. Shimada, S. V. Sabau, The Geometry of Hamilton and Lagrange spaces, Kluwer Acad. Publ., FTP 118, 2001. * [R] C. Robles, Geodesics in Randers spaces of constant curvature, Trans.
AMS 359 (2007), no. 4, 1633–1651. * [Sh] Z. Shen, Finsler metrics with $K=0$ and $S=0$. Canad. J. Math. 55 (2003), no. 1, 112–132. * [SSS] S. V. Sabau, K. Shibuya, H. Shimada, Metric structures associated to Finsler metrics. Publ. Math. Debrecen 84 (2014), no. 1-2, 89–103. * [SST] K. Shiohama, T. Shioya, and M. Tanaka, The Geometry of Total Curvature on Complete Open Surfaces, Cambridge Tracts in Mathematics 159, Cambridge University Press, Cambridge, 2003. * [T] M. Tanaka, On the cut loci of a von Mangoldt’s surface of revolution, J. Math. Soc. Japan 44 (1992), no. 4, 631–641. * [Z] E. Zermelo, Über das Navigationsproblem bei ruhender oder veränderlicher Windverteilung, Z. Angew. Math. Mech. 11 (1931), 114–124.
# Fair Resource Allocation for Demands with Sharp Lower Tail Inequalities Vacharapat Mettanant Email: <EMAIL_ADDRESS>Department of Computer Engineering, Kasetsart University, Sriracha Campus, Chonburi, Thailand. Supported by Faculty of Engineering at Sriracha Graduate Scholarship, Kasetsart University. Jittat Fakcharoenphol Email: <EMAIL_ADDRESS>Department of Computer Engineering, Kasetsart University, Bangkok, Thailand. Supported by the Thailand Research Fund, Grant RSA-6180074. ###### Abstract We consider a fairness problem in resource allocation where multiple groups demand resources from a common source with a fixed total amount. The general model was introduced by Elzayn et al. [FAT*’19]. We follow Donahue and Kleinberg [FAT*’20], who considered the case when the demand distribution is known. We show that for many common demand distributions that satisfy sharp lower tail inequalities, a natural allocation that provides resources proportional to each group’s average demand performs very well. More specifically, this natural allocation is approximately fair and efficient (i.e., it provides near-maximum utilization). We also show that, when a small amount of unfairness is allowed, the Price of Fairness (PoF) in this case is close to 1. ## 1 Introduction Resource allocation has been a central problem in computer science and operations research [8, 10, 14]. Typically, to distribute resources well, there are many requirements to be considered. One of the most fundamental and important requirements is fairness [3, 13, 6]. When fairness is a factor, in a pioneering work, Elzayn et al. [5] proposed a setting where $N$ groups of people would like to obtain shared common resources, with a limited amount $R$. There is an unknown distribution for the number of candidates in each group in need of the resource.
They would like to allocate the resources so that the probability for anyone in any group to access the resource is roughly equal, i.e., access to the distributed resource is fair. In their setting, the distributions are unknown, and they would like to learn how to allocate fairly and efficiently. At each step, their learning algorithm provides an allocation and later receives feedback, for a particular group, on the number of candidates who received the resource. They also showed that when the unknown distributions are Poisson or “single-parameter Lipschitz-continuous distributions”, their learning algorithm, based on MLE, outputs, after a logarithmic number of rounds, an approximately fair allocation with almost maximum utility. As a subroutine to their learning algorithm, they presented an algorithm for computing an optimal approximately fair allocation, assuming that candidate distributions are known. Leaving out the learning aspect of the problem, Donahue and Kleinberg [4] considered the setting where the candidate distributions are already known and focused mostly on the trade-offs between fairness and utilization under different probability distributions and under different allocation versions, e.g., integral and fractional allocations. They showed many interesting results. When the fairness is relaxed to $\alpha$-fair, they gave an upper bound of $1/\alpha$ on the Price of Fairness under fractional allocation. They proved that when the family of distributions contains distributions that can be scaled to one another, e.g., exponential and Weibull distributions, there is no gap in fairness and utilization, i.e., PoF is 1. They also established a bound on the Price of Fairness for Power Law distributions. This paper follows the approach by Donahue and Kleinberg [4]. We consider fractional resource allocation, i.e., we allow allocations where resources are distributed fractionally (or, similarly, probabilistically).
We show that when the candidate distribution $\mathcal{C}_{i}$ for each group satisfies a lower deviation tail bound, the natural way to allocate resources based on each group’s mean provides both fairness and good utilization. More specifically, when the total amount of resource is $R$, the amount of resource allocated to group $i$ is $R\cdot\frac{\mu_{i}}{\sum_{j}\mu_{j}},$ where, for each group $i$, $\mu_{i}$ is the expected number of candidates belonging to the group. We refer to this allocation as the mean-weighted allocation. In contrast to Donahue and Kleinberg’s results [4], which provided many examples of distributions arising from modern applications, such as the Power Law distributions, where the fairness-utilization gap is significant, our work shows that for many classic distributions the natural allocation works just fine. Moreover, our proofs are mostly elementary. We would like to point out that our work is also very closely related to the results presented in Elzayn et al. [5]. On the surface, what we show here seems to be implicit in, or be “part” of, their learning algorithm that outputs approximately fair allocations with almost maximum utilization for Poisson and other distributions. However, we note that for distributions satisfying our assumption we do not need to compute the allocations; we can just use the mean-weighted allocation explicitly. Our fairness and utilization analysis is based on this natural allocation. We believe that, as in the work of Donahue and Kleinberg [4], our work simplifies the analysis and essentially sheds some light on the trade-off between fairness and utilization for this problem. In the next section, we review formal definitions and results of Donahue and Kleinberg [4]. Section 3 demonstrates our intuition on why the mean-weighted allocation works for distributions with mean concentration. We specify the tail assumption in Section 4 and show the fairness and utilization analysis.
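The mean-weighted allocation just defined is a one-liner to implement; a minimal sketch (the function name is ours, not from the paper):

```python
def mean_weighted_allocation(R, means):
    """Allocate R units of resource proportionally to each group's mean demand."""
    Z = sum(means)              # total expected number of candidates
    assert Z > 0                # each group is assumed to have positive mean
    return [R * mu / Z for mu in means]

# three groups with means 10, 30, 60, and R = 50 units of resource
v = mean_weighted_allocation(50.0, [10.0, 30.0, 60.0])
assert v == [5.0, 15.0, 30.0]           # proportional split
assert abs(sum(v) - 50.0) < 1e-9        # all resource is distributed
```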
Section 5 provides examples on many common distributions satisfying the assumption in Section 4. ## 2 Problem definitions and reviews of Donahue and Kleinberg’s results We follow a two-stage probabilistic model of Elzayn et al. [5], and Donahue and Kleinberg [4]. There are $K$ groups. Each group $i$ has a distribution $\mathcal{C}_{i}$ over the number of candidates $C_{i}$ in need of the resource. We assume that $\mathbb{E}_{C_{i}\sim\mathcal{C}_{i}}[C_{i}]>0$ and that all $C_{i}$’s are independent. When the context is clear, we use $\mathbb{E}[C_{i}]$ instead of $\mathbb{E}_{C_{i}\sim\mathcal{C}_{i}}[C_{i}]$ for simplicity. We let $f_{i}$ be the probability density function and $F_{i}$ be the cumulative distribution function for $C_{i}$. We have $R$ units of resource that can be distributed among these $K$ groups. We assume that the resource is discrete; therefore, each unit of resource can be allocated to one and only one candidate. We would like to find an allocation $v_{i}$ of resource for each group $i$ such that $\sum v_{i}=R$ (i.e., we are required to allocate all the resource). When $v_{i}$ units of resource are allocated, we assume that each candidate of group $i$ has the same opportunity to receive the resource. Therefore, the probability of receiving the resource for each candidate is $\min(v_{i}/C_{i},1)$. Let vector $\mathbf{v}=[v_{1},v_{2},\ldots,v_{K}]$. There are two (somewhat) competing goals. The utilization of $\mathbf{v}$ is defined as $U(\mathbf{v},\\{\mathcal{C}_{i}\\}):=\sum_{i=1}^{K}\mathbb{E}_{C_{i}\sim\mathcal{C}_{i}}[\min(C_{i},v_{i})].$ Let $q(v,\mathcal{C})$ be the availability of the resource for a group with distribution $\mathcal{C}$ when $v$ units of resource are allocated, defined as the opportunity of a candidate receiving the resource.
Formally, if $x$ is a member of the group, the availability is $q(v,\mathcal{C}):=\Pr[x\text{ receives the resource }|x\text{ is a candidate}].$ Donahue and Kleinberg [4] showed that $q(v,\mathcal{C})=\frac{\mathbb{E}_{C\sim\mathcal{C}}[\min(C,v)]}{\mathbb{E}_{C\sim\mathcal{C}}[C]}.$ Inspired by _equality of opportunity_ proposed by Hardt et al. [9], we define the fairness of $\mathbf{v}$ to be the maximum difference of the availability, i.e., the fairness of $\mathbf{v}$ is $Q(\mathbf{v},\\{\mathcal{C}_{i}\\}):=\max_{i,j}|q(v_{i},\mathcal{C}_{i})-q(v_{j},\mathcal{C}_{j})|.$ If the fairness of $\mathbf{v}$ is less than or equal to $\alpha$, we say that the allocation $\mathbf{v}$ is _$\alpha$ -fair_. Since there are two objectives, one approach is to guarantee a certain fairness with parameter $\alpha$, i.e., we would like to find an allocation $\mathbf{v}$ (with $\sum_{i}v_{i}=R$) such that $Q(\mathbf{v},\\{\mathcal{C}_{i}\\})\leq\alpha$ that maximizes the utilization $U(\mathbf{v},\\{\mathcal{C}_{i}\\})$. This motivates the notion of Price of Fairness (PoF), defined to be $\mathrm{PoF}(\alpha):=\frac{\max_{\mathbf{v}:\sum_{i}v_{i}=R}U(\mathbf{v},\\{\mathcal{C}_{i}\\})}{\max_{\mathbf{v}:\sum_{i}v_{i}=R}\left(U(\mathbf{v},\\{\mathcal{C}_{i}\\})\ \ \mbox{s.t.}\ \ Q(\mathbf{v},\\{\mathcal{C}_{i}\\})\leq\alpha\ \right)}.$ Donahue and Kleinberg [4] consider two versions of the allocations: one where the allocations $v_{i}$ must be integral and one where $v_{i}$ can be fractional. For integer allocation, they showed that PoF is unbounded. When fractional or probabilistic allocations are allowed, they showed that PoF is bounded by $1/\alpha$. Moreover, they showed, in the next theorem, that PoF is 1 for candidate distributions satisfying a certain scaling condition. ###### Theorem 1 (Theorem 2 from [4]). Consider candidate distributions with $F_{i}(0)=0$ and $f_{i}(v)>0$, for $v\geq 0$.
Suppose the set of candidate distributions $\\{\mathcal{C}_{i}\\}$ has the following property: $F_{i}(v)=F_{j}\left(v\cdot\frac{\mathbb{E}[C_{j}]}{\mathbb{E}[C_{i}]}\right),$ for $v\geq 0$, for all $i,j$. Then, under the fractional allocation of resources, the max-utilization allocation is $0$-fair. ## 3 Illustrative examples To see how availability and utilization change with various allocation levels, it is useful to start with an easy case with a constant candidate distribution. For simplicity, assume that the number of candidates is scaled down to be exactly 1, so that the availability and utilization are equal. See Figure 1 (left). The figure also shows the cumulative distribution function $F$; note that in this case it is a step function that changes from 0 to 1 at the mean $\mu$. As the plot shows, the availability keeps increasing up to the point when the resource is enough for all candidates. Figure 1: Availability for constant demand (left) and for normally-distributed demand (right) Note that this case falls into the case of Theorem 1 by Donahue and Kleinberg, and we know that PoF is 1. However, it serves to introduce our approach, which proves this directly. For this case, we have a simple way to allocate all $R$ units of resource showing that PoF is 1. We allocate $v_{i}=R\cdot\frac{\mu_{i}}{\sum_{j}\mu_{j}}$ units to each group $i$. ###### Lemma 1. The allocation gives $\mathrm{PoF}=1$ for constant candidate distributions. ###### Proof. If $R\geq\sum_{i}\mu_{i}$, we allocate $v_{i}\geq\mu_{i}$ to each group. The utilization will be $\sum_{i}\mu_{i}$, which is maximum. The availability of each group is $\frac{\min(\mu_{i},v_{i})}{\mu_{i}}=\frac{\mu_{i}}{\mu_{i}}=1.$ Since the availabilities of all groups are equal, this allocation is $0$-fair. If $R<\sum_{i}\mu_{i}$, we have $v_{i}<\mu_{i}$. The allocation gives us $R$ utilization, which is also maximum.
The availability of each group is $\frac{\min(\mu_{i},v_{i})}{\mu_{i}}=\frac{v_{i}}{\mu_{i}}=\frac{R}{\sum_{j}\mu_{j}}.$ We can see that the availabilities of all groups are equal as well. Hence the allocation is $0$-fair. In both cases, the allocation is $0$-fair and gives us maximum utilization. Therefore, the PoF is 1. ∎ When dealing with a non-constant demand distribution highly concentrated around its mean, we see a similar picture (see Figure 1 (right), for the case with normally distributed demands). When the level of allocated resource is far from the mean $\mu$, the utilization and availability behave roughly as in the previous case. Things get more interesting around the mean (highlighted in yellow in the figure). If we keep distributing resource proportionally to each group’s mean, we might observe the price of fairness here. Intuitively, if the range is small, we would expect a small penalty. This is what we shall prove in Section 4. Notably, we do not need the distribution to concentrate symmetrically around its mean; we only need the lower tail to be very small. To see that this lower concentration is crucial when using the mean-weighted allocation, we provide another example where the random variable $C$ for the number of candidates is defined to be such that $\Pr[C=0]=(k-1)/k$ and $\Pr[C=k]=1/k$. Note that $\mathbb{E}[C]=1$, but the probability that $C$ is less than its expectation is very large, i.e., $\Pr[C<\mathbb{E}[C]]=1-1/k$. In this case, allocating resource $v\leq k$ to the group only yields availability and utilization of $v/k$. Another example is the exponential distribution considered by Donahue and Kleinberg, who showed that PoF is always 1. In stark contrast, our approach cannot show any good bounds for this case. ## 4 General assumptions and fairness analysis In this section, we provide an analysis of availability, utilization and fairness for classes of candidate distributions satisfying a certain concentration property.
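Before the general analysis, the heavy-lower-tail example from Section 3 can be simulated directly; this sketch (our choice of $k$, seed, and trial count) confirms that the availability collapses to $v/k$:

```python
import random

random.seed(1)
k = 20

def sample_C():
    # Pr[C = 0] = (k-1)/k and Pr[C = k] = 1/k, so E[C] = 1
    return k if random.random() < 1.0 / k else 0

trials = 100_000
v = 5.0  # allocate v <= k units to this single group
avg_util = sum(min(sample_C(), v) for _ in range(trials)) / trials
# E[min(C, v)] = v * Pr[C = k] = v / k, and E[C] = 1,
# so availability = utilization = v / k = 0.25 here
assert abs(avg_util - v / k) < 0.02
```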
We show in Section 5 that many common distributions satisfy this condition, using well-known concentration inequalities (see, e.g., a survey by Boucheron, Lugosi, and Bousquet [1]). We say that a random variable $X$ satisfies an $(\epsilon,\delta)$-lower deviation inequality for $0<\epsilon,\delta<1$ if $\Pr[X\leq(1-\epsilon)\mathbb{E}[X]]\leq\delta.$ We say that a distribution satisfies an $(\epsilon,\delta)$-lower deviation inequality if a random variable drawn from that distribution satisfies the $(\epsilon,\delta)$-lower deviation inequality. Before we continue, we note that the parameter $\epsilon$ is typically a small constant (say 1% or 10%), and $\delta$ is a very small number, usually polynomially small. In what follows, we assume that the distributions $\\{\mathcal{C}_{i}\\}$ satisfy an $(\epsilon,\delta)$-lower deviation inequality. Also, let $\mu_{i}=\mathbb{E}[C_{i}]$. Let $Z=\sum_{i=1}^{K}\mu_{i}$ be the total expected number of candidates over all groups. We will use a mean-weighted allocation based on groups’ means, i.e., we let $v_{i}=R\cdot\frac{\mu_{i}}{Z},$ for $1\leq i\leq K$. We will show that this allocation is very fair (the fairness value is close to 0) and gives almost optimal utilization. We shall use this to prove the bound on the price of fairness (PoF). ### 4.1 Fairness We analyze the fairness in two regions based on the total resources $R$ and $Z$: 1. 1. when $R\leq(1-\epsilon)Z$, and 2. 2. when $R\geq(1-\epsilon)Z$. #### 4.1.1 When $R\leq(1-\epsilon)Z$ In this case, our allocation sets $v_{i}=R\cdot\frac{\mu_{i}}{Z}\leq(1-\epsilon)\mu_{i}.$ We will prove the upper bound and the lower bound on the expected availability in this case. ###### Lemma 2. The allocation ensures $\frac{v_{i}}{\mu_{i}}(1-\delta)\leq q(v_{i},\mathcal{C}_{i})\leq\frac{v_{i}}{\mu_{i}}\leq 1-\epsilon.$ ###### Proof. First consider the upper bound. For each group $i$, let $U_{i}=\min(C_{i},v_{i})$ represent the utilization of the group.
Since $\mathbb{E}[U_{i}]\leq v_{i}$, the availability of group $i$ can be bounded by $q(v_{i},\mathcal{C}_{i})=\frac{\mathbb{E}[U_{i}]}{\mu_{i}}\leq\frac{v_{i}}{\mu_{i}}\leq 1-\epsilon.$ To show the lower bound, recall that $\begin{split}\mathbb{E}[U_{i}]&=\mathbb{E}[U_{i}|C_{i}<v_{i}]\Pr[C_{i}<v_{i}]+\mathbb{E}[U_{i}|C_{i}\geq v_{i}]\Pr[C_{i}\geq v_{i}]\\\ &\geq\mathbb{E}[U_{i}|C_{i}\geq v_{i}]\Pr[C_{i}\geq v_{i}].\end{split}$ Given that the number of candidates $C_{i}\geq v_{i}$, we get $U_{i}=\min(C_{i},v_{i})=v_{i}$. Moreover, since $C_{i}$ satisfies $(\epsilon,\delta)$-lower deviation inequality, we know that $\Pr[C_{i}\geq v_{i}]\geq\Pr[C_{i}\geq(1-\epsilon)\mu_{i}]\geq 1-\delta.$ Using these facts, we get $\mathbb{E}[U_{i}]\geq v_{i}(1-\delta)$ and the availability $q(v_{i},\mathcal{C}_{i})$ of group $i$ can be bounded by $q(v_{i},\mathcal{C}_{i})=\frac{\mathbb{E}[U_{i}]}{\mu_{i}}\geq\frac{v_{i}}{\mu_{i}}(1-\delta),$ as required. ∎ ###### Lemma 3. When $R\leq(1-\epsilon)Z$, the mean-weighted allocation gives $Q(\mathbf{v},\\{\mathcal{C}_{i}\\})\leq(1-\epsilon)\delta=\delta-\epsilon\delta.$ ###### Proof. From Lemma 2, we know that for each group $i$, $\frac{v_{i}}{\mu_{i}}(1-\delta)\leq q(v_{i},\mathcal{C}_{i})\leq\frac{v_{i}}{\mu_{i}}.$ Hence the fairness is bounded by $\begin{split}Q(\mathbf{v},\\{\mathcal{C}_{i}\\})&\leq\max_{i}\frac{v_{i}}{\mu_{i}}-\min_{j}\frac{v_{j}}{\mu_{j}}(1-\delta).\end{split}$ However, by the definition of $v_{i}$, we know that the ratio $v_{i}/\mu_{i}=R/Z$ for all $i$ and does not depend on groups. So, $\begin{split}Q(\mathbf{v},\\{\mathcal{C}_{i}\\})&\leq\frac{R}{Z}-\frac{R}{Z}(1-\delta)\\\ &=\frac{R}{Z}\delta\\\ &\leq(1-\epsilon)\delta.\end{split}$ ∎ #### 4.1.2 When $R\geq(1-\epsilon)Z$ When $R\geq(1-\epsilon)Z$, our allocation will set $v_{i}=R\cdot\frac{\mu_{i}}{Z}\geq(1-\epsilon)\mu_{i}.$ ###### Lemma 4. In this case, $(1-\epsilon)(1-\delta)\leq q(v_{i},\mathcal{C}_{i})\leq 1.$ ###### Proof. Consider the lower bound. 
Since $v_{i}\geq(1-\epsilon)\mu_{i}$, we get that $\mathbb{E}[U_{i}]=\mathbb{E}[\min(C_{i},v_{i})]\geq\mathbb{E}[\min(C_{i},(1-\epsilon)\mu_{i})],$ and the availability of each group $i$ can be bounded by $q(v_{i},C_{i})\geq\frac{\mathbb{E}[\min(C_{i},(1-\epsilon)\mu_{i})]}{\mu_{i}}.$ As in the proof of Lemma 2, recall that $\begin{split}&\mathbb{E}[\min(C_{i},(1-\epsilon)\mu_{i})]\\\ &\geq\mathbb{E}[\min(C_{i},(1-\epsilon)\mu_{i})|C_{i}\geq(1-\epsilon)\mu_{i}]\Pr[C_{i}\geq(1-\epsilon)\mu_{i}]\\\ &=(1-\epsilon)\mu_{i}\Pr[C_{i}\geq(1-\epsilon)\mu_{i}]\\\ &\geq(1-\epsilon)(1-\delta)\mu_{i},\end{split}$ since $C_{i}$ satisfies the $(\epsilon,\delta)$-lower deviation inequality. Therefore, $q(v_{i},C_{i})\geq(1-\epsilon)(1-\delta).$ For the upper bound, note that from $\mathbb{E}[U_{i}]=\mathbb{E}[\min(C_{i},v_{i})]\leq\mathbb{E}[C_{i}]=\mu_{i},$ we know that $q(v_{i},C_{i})\leq 1$. ∎ Therefore, we have the following corollary. ###### Corollary 1. When $R\geq(1-\epsilon)Z$, we have that $Q(\mathbf{v},\\{\mathcal{C}_{i}\\})\leq 1-(1-\epsilon)(1-\delta)=\epsilon+\delta-\epsilon\delta.$ From both cases of $R$, we can conclude as follows. ###### Lemma 5. When the distributions $\\{\mathcal{C}_{i}\\}$ satisfy the ($\epsilon,\delta$)-lower deviation inequality, the mean-weighted allocation gives fairness at most $\epsilon+\delta-\epsilon\delta$. ### 4.2 Utilization This section shows that the mean-weighted allocation also gives a very good utilization bound. Before we start, recall that the maximum expected total utilization is at most $\min(R,Z)$. We first consider the case when $R\leq(1-\epsilon)Z$. ###### Lemma 6. If $R\leq(1-\epsilon)Z$, the utilization is at least $(1-\delta)R$. ###### Proof. Recall that the utilization is defined as $U(\mathbf{v},\\{\mathcal{C}_{i}\\})=\sum_{i=1}^{K}\mathbb{E}[U_{i}].$ From our proof of Lemma 2, we have $\mathbb{E}[U_{i}]\geq(1-\delta)v_{i}$ for each $i$.
Therefore, the utilization is at least $U(\mathbf{v},\\{\mathcal{C}_{i}\\})\geq\sum_{i=1}^{K}(1-\delta)v_{i}=(1-\delta)R.$ ∎ On the other hand, when $R\geq(1-\epsilon)Z$, we show that the utilization is at least $(1-\epsilon-\delta)Z$. ###### Lemma 7. If $R\geq(1-\epsilon)Z$, the utilization is at least $(1-\epsilon-\delta)Z.$ ###### Proof. From the proof of Lemma 4, we have $\mathbb{E}[U_{i}]\geq(1-\epsilon)\mu_{i}\Pr[C_{i}\geq(1-\epsilon)\mu_{i}]\geq(1-\epsilon)(1-\delta)\mu_{i}$ in this case. Therefore, the utilization is $\begin{split}U(\mathbf{v},\\{\mathcal{C}_{i}\\})=\sum_{i=1}^{K}\mathbb{E}[U_{i}]&\geq\sum_{i=1}^{K}(1-\epsilon)(1-\delta)\mu_{i}\\\ &=(1-\epsilon)(1-\delta)Z\\\ &\geq(1-\epsilon-\delta)Z.\end{split}$ ∎ These two lemmas imply the following key lemma. ###### Lemma 8. When the candidate distributions satisfy the $(\epsilon,\delta)$-lower deviation inequality, the utilization for the mean-weighted allocation is at least a $\min(1-\delta,1-\epsilon-\delta)=1-\epsilon-\delta$ fraction of the maximum utilization. ### 4.3 The bound on PoF Assume that $\epsilon+\delta<1$. We use the bounds from Lemma 5 and Lemma 8 to show that when $\epsilon+\delta\leq\alpha<1,$ the Price of Fairness is at most $\frac{1}{1-\epsilon-\delta}\leq\frac{1}{1-\alpha}.$ Moreover, if $\epsilon+\delta\leq 1/2$, we have $\frac{1}{1-\epsilon-\delta}=1+\frac{\epsilon+\delta}{1-\epsilon-\delta}\leq 1+2(\epsilon+\delta)\leq 1+2\alpha.$ Thus, we have the following main theorem. ###### Theorem 2. If the candidate distributions $\\{\mathcal{C}_{i}\\}$ satisfy the $(\epsilon,\delta)$-lower deviation inequality for $\epsilon,\delta$ such that $\epsilon+\delta<1$, the Price of Fairness (PoF) when $\alpha\geq\epsilon+\delta$ is at most $1/(1-\alpha)$. In addition, if $\epsilon+\delta\leq 1/2$, the PoF is at most $1+2\alpha$. ## 5 Results for specific distributions In this section, we show that many common distributions for demand modeling satisfy the $(\epsilon,\delta)$-lower deviation inequality.
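As a preview of the concrete distributions below, the guarantees of Lemma 5 and Lemma 8 can be checked empirically for binomial demands; the sketch below (group parameters, trial count, and seed are our choices, not from the paper) estimates availability and utilization under the mean-weighted allocation by Monte Carlo:

```python
import random

random.seed(2)

def availability_and_util(v, n, p, trials=5_000):
    """Monte Carlo estimate of (q(v, C), E[min(C, v)]) for C ~ Binomial(n, p)."""
    tot = 0.0
    for _ in range(trials):
        c = sum(random.random() < p for _ in range(n))
        tot += min(c, v)
    util = tot / trials
    return util / (n * p), util

# two groups with means 50 and 100, so Z = 150; take R = 120 < Z.
# mean-weighted allocation: v1 = 120 * 50/150 = 40, v2 = 120 * 100/150 = 80
q1, u1 = availability_and_util(40.0, 100, 0.5)
q2, u2 = availability_and_util(80.0, 200, 0.5)
assert abs(q1 - q2) < 0.02        # fairness Q is close to 0 (Lemma 5)
assert u1 + u2 > 0.98 * 120.0     # utilization is close to R (Lemma 8)
```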
We only provide a few examples.

### 5.1 Binomial distribution

Assume that there are $n_{i}$ people in group $i$, and that each person in group $i$ independently becomes a candidate with probability $p_{i}$. The number of candidates in group $i$, $C_{i}$, is then a binomial random variable with parameters $n_{i}$ and $p_{i}$. We have, for an integer $x$ such that $0\leq x\leq n_{i}$, $\Pr[C_{i}=x]=\binom{n_{i}}{x}p_{i}^{x}(1-p_{i})^{n_{i}-x},$ with $\mu_{i}=n_{i}p_{i}$. For this type of random variable, we can apply the Chernoff bound to get that $\Pr[C_{i}\leq(1-\epsilon)\mu_{i}]\leq e^{-\mu_{i}\epsilon^{2}/2}.$ Note that the term $e^{-\mu_{i}\epsilon^{2}/2}$ specifies the parameter $\delta$ and depends on $\mu_{i}$. Thus, if we take $\epsilon,\delta$, and $p_{i}$ to be fixed, we have the following lemma.

###### Lemma 9.

Assume that the candidate distributions are all binomial. For any $\epsilon,\delta$ such that $\epsilon+\delta\leq 1/2$, and for any $p_{i}$, the number of candidates $C_{i}$ satisfies the $(\epsilon,\delta)$-lower deviation inequality when $n_{i}\geq\frac{2}{\epsilon^{2}p_{i}}\ln\frac{1}{\delta}.$

###### Proof.

When $n_{i}\geq\frac{2}{\epsilon^{2}p_{i}}\ln\frac{1}{\delta}$, we have $e^{-\mu_{i}\epsilon^{2}/2}\leq\delta$. This implies that $\Pr[C_{i}\leq(1-\epsilon)\mu_{i}]\leq\delta$, which is exactly the definition of the $(\epsilon,\delta)$-lower deviation inequality. ∎

### 5.2 Normal distribution

The normal (or Gaussian) distribution is a continuous distribution; a normal random variable $C$ with mean $\mu$ and standard deviation $\sigma$ has the probability density function $f$ defined as $f(x)=\frac{1}{\sigma\sqrt{2\pi}}e^{-\frac{1}{2}\left(\frac{x-\mu}{\sigma}\right)^{2}}.$ Normal distributions are catch-all distributions, used in numerous modeling settings when the underlying distribution is unclear or unknown.
In the context of this problem, the number of candidates $C_{i}$ for each group $i$ is a normal random variable with mean $\mu_{i}$ and standard deviation $\sigma_{i}$. Using the standard Gaussian tail bound, we have that $\Pr[C_{i}\leq(1-\epsilon)\mu_{i}]\leq e^{-\frac{\epsilon^{2}\mu_{i}^{2}}{2\sigma_{i}^{2}}}.$ Again, by the same argument as in Lemma 9, a normal random variable $C_{i}$ satisfies the $(\epsilon,\delta)$-lower deviation inequality when $\delta\geq e^{-\frac{\epsilon^{2}\mu_{i}^{2}}{2\sigma_{i}^{2}}}$, which is equivalent to $\mu_{i}\geq\sqrt{\frac{2\sigma_{i}^{2}}{\epsilon^{2}}\ln\frac{1}{\delta}}.$

### 5.3 Poisson distribution

The Poisson distribution is a discrete distribution typically used to model the number of events occurring in a given time period (usually for rare events). A Poisson random variable $C$ with parameter $\lambda$ satisfies $\Pr[C=x]=\frac{\lambda^{x}e^{-\lambda}}{x!},$ for integers $x=0,1,\ldots$. The expectation $\mathbb{E}[C]$ is $\lambda$. It can be viewed as the limit of the binomial distribution (i.e., fixing $\lambda=np$ and taking $n\rightarrow\infty$). To quote Feller [7], examples of observations fitting the Poisson distribution are radioactive disintegrations, flying-bomb hits on London, chromosome interchanges in cells, connections to wrong numbers, and bacteria and blood counts. In the context of this problem, we consider the situation where the number of candidates $C_{i}$ for each group $i$ is a Poisson random variable with parameter $\lambda_{i}$. It is folklore that Poisson random variables satisfy sub-exponential concentration bounds. The following is from Canonne’s note [2]: $\Pr[C_{i}<(1-\epsilon)\lambda_{i}]\leq e^{-\frac{\epsilon^{2}\lambda_{i}}{2}h(-\epsilon)},$ where $h(x)=2\frac{(1+x)\ln(1+x)-x}{x^{2}}$. Thus, when each $\lambda_{i}$ is large enough, i.e., when $\lambda_{i}\geq\frac{2}{\epsilon^{2}h(-\epsilon)}\ln\frac{1}{\delta},$ we obtain the required inequality.
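The three tail bounds quoted above can be verified numerically against the exact lower-tail probabilities of the binomial, Poisson, and normal distributions; the parameter values below are arbitrary illustrations:

```python
import math

def binom_lower_tail(n, p, eps):
    # exact Pr[C <= (1 - eps) * n * p] for C ~ Binomial(n, p)
    k = math.floor((1 - eps) * n * p)
    return sum(math.comb(n, x) * p**x * (1 - p)**(n - x) for x in range(k + 1))

def poisson_lower_tail(lam, eps):
    # exact Pr[C <= (1 - eps) * lam] for C ~ Poisson(lam)
    k = math.floor((1 - eps) * lam)
    return sum(math.exp(-lam) * lam**x / math.factorial(x) for x in range(k + 1))

def normal_lower_tail(mu, sigma, eps):
    # Pr[C <= (1 - eps) * mu] for C ~ Normal(mu, sigma^2), via the error function
    z = -eps * mu / sigma
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

eps = 0.2

# Binomial: n = 500, p = 0.3 (so mu = 150), bound e^{-mu eps^2 / 2}
assert binom_lower_tail(500, 0.3, eps) <= math.exp(-500 * 0.3 * eps**2 / 2)

# Poisson: lambda = 100, bound e^{-(eps^2 lambda / 2) h(-eps)}
h = lambda x: 2 * ((1 + x) * math.log(1 + x) - x) / x**2
assert poisson_lower_tail(100, eps) <= math.exp(-(eps**2 * 100 / 2) * h(-eps))

# Normal: mu = 50, sigma = 10, bound e^{-eps^2 mu^2 / (2 sigma^2)}
assert normal_lower_tail(50, 10, eps) <= math.exp(-(eps * 50)**2 / (2 * 10**2))
```

In each case the exact tail is well below the stated bound, so the corresponding $\delta$ in the $(\epsilon,\delta)$-lower deviation inequality is conservative for these parameters.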
When each $C_{i}$ follows one of these three distributions, we see that if its mean is large enough, $C_{i}$ satisfies the $(\epsilon,\delta)$-lower deviation inequality for any $\epsilon$ and $\delta$. Thus, given $\alpha>0$, we can choose $\epsilon$ and $\delta$ such that $\epsilon+\delta\leq\min(\alpha,1/2)$. Then, combined with Lemma 5, Lemma 8, and Theorem 2, we can conclude as follows.

###### Theorem 3.

Assume that the distribution of each $C_{i}$ is binomial, normal, or Poisson. Given that all the means $\mathbb{E}[C_{i}]$ are large enough, for any $\alpha\in(0,1)$, the mean-weighted allocation is $\alpha$-fair and achieves at least a $(1-\alpha)$ fraction of the maximum utilization. The PoF in this case is at most $1+2\alpha$.

### 5.4 Other examples

Many other experiments result in random variables satisfying the required $(\epsilon,\delta)$-lower deviation inequality, e.g., sub-Gaussian random variables, and random variables to which strong classical tail inequalities apply, such as the Chernoff bound, Hoeffding's bound, Azuma's inequality, and McDiarmid's inequality. One example is the number of empty bins in a balls-and-bins experiment. See classic probability textbooks, e.g., [12, 11], or surveys [1], for more.

## References

* [1] S. Boucheron, G. Lugosi, and O. Bousquet. Concentration Inequalities, pages 208–240. Springer Berlin Heidelberg, Berlin, Heidelberg, 2004.
* [2] C. Canonne. A short note on Poisson tail bounds. Available at http://www.cs.columbia.edu/~ccanonne/files/misc/2017-poissonconcentration.pdf (accessed 2020/10/7), 2017.
* [3] A. Demers, S. Keshav, and S. Shenker. Analysis and simulation of a fair queueing algorithm. ACM SIGCOMM Computer Communication Review, 19(4):1–12, 1989.
* [4] K. Donahue and J. Kleinberg. Fairness and utilization in allocating resources with uncertain demand.
In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, FAT* ’20, pages 658–668, New York, NY, USA, 2020. Association for Computing Machinery.
* [5] H. Elzayn, S. Jabbari, C. Jung, M. J. Kearns, S. Neel, A. Roth, and Z. Schutzman. Fair algorithms for learning in allocation problems. In Proceedings of the Conference on Fairness, Accountability, and Transparency, FAT* 2019, Atlanta, GA, USA, January 29-31, 2019, pages 170–179. ACM, 2019.
* [6] V. Eubanks. Automating inequality: How high-tech tools profile, police, and punish the poor. St. Martin’s Press, 2018.
* [7] W. Feller. An Introduction to Probability Theory and Its Applications, volume 1. Wiley, January 1968.
* [8] O. Gross. A class of discrete-type minimization problems. Technical report, RAND Corp., Santa Monica, CA, 1956.
* [9] M. Hardt, E. Price, and N. Srebro. Equality of opportunity in supervised learning. In Advances in Neural Information Processing Systems, pages 3315–3323, 2016.
* [10] N. Katoh, T. Ibaraki, and H. Mine. A polynomial time algorithm for the resource allocation problem with a convex objective function. Journal of the Operational Research Society, 30(5):449–455, 1979.
* [11] M. Mitzenmacher and E. Upfal. Probability and Computing: Randomized Algorithms and Probabilistic Analysis. Cambridge University Press, 2005.
* [12] R. Motwani and P. Raghavan. Randomized Algorithms. Cambridge University Press, Cambridge, NY, 1995.
* [13] A. D. Procaccia. Cake cutting: not just child’s play. Communications of the ACM, 56(7):78–87, 2013.
* [14] C. Shi, H. Zhang, and C. Qin. A faster algorithm for the resource allocation problem with convex cost functions. Journal of Discrete Algorithms, 34:137–146, 2015.
# Few-Shot Domain Adaptation for Grammatical Error Correction via Meta-Learning

Shengsheng Zhang1,2, Yaping Huang1, Yun Chen3, Liner Yang2, Chencheng Wang4, Erhong Yang2
1Beijing Jiaotong University, Beijing, China
2Beijing Language and Culture University, Beijing, China
3Shanghai University of Finance and Economics, Shanghai, China
4Beijing University of Technology, Beijing, China

###### Abstract

Most existing Grammatical Error Correction (GEC) methods based on sequence-to-sequence models mainly focus on generating more pseudo data to obtain better performance. Little work addresses few-shot GEC domain adaptation. In this paper, we treat different GEC domains as different GEC tasks and propose to extend meta-learning to few-shot GEC domain adaptation without using any pseudo data. We exploit a set of data-rich source domains to learn an initialization of model parameters that facilitates fast adaptation on new resource-poor target domains. We adapt the GEC model to the first language (L1) of the second language learner. To evaluate the proposed method, we use nine L1s as source domains and five L1s as target domains. Experimental results on the L1 GEC domain adaptation dataset demonstrate that the proposed approach outperforms the multi-task transfer learning baseline by 0.50 $F_{0.5}$ score on average and enables us to effectively adapt to a new L1 domain with only 200 parallel sentences.

## 1 Introduction

Grammatical Error Correction (GEC) aims to correct errors in text. For example, “He notice the picture.” can be corrected to “He notices the picture.”. A GEC system takes an incorrect sentence as input and outputs the corresponding correct sentence. With the development of deep learning, GEC has drawn the attention of many researchers during the last few years.
Most existing methods (Chollampatt and Ng, 2018; Junczys-Dowmunt et al., 2018; Zhao et al., 2019) frame GEC as a sequence-to-sequence (seq2seq) task and have obtained high performance on the general domain using a large number of training examples. However, these seq2seq-based models cannot achieve satisfactory performance in special GEC domains due to domain shift and the limited in-domain data. For instance, Nadejde and Tetreault (2019) use a GEC model trained on the general domain to test on specific domains and find that the performance drops dramatically. One way to tackle this issue is transfer learning (Nadejde and Tetreault, 2019), in which a GEC model is pretrained on the high-resource general domain and then fine-tuned on a low-resource target domain. Although leading to empirical improvements in the target domain, this method suffers from model overfitting and catastrophic forgetting when the in-domain data is insufficient (Sharaf et al., 2020).

Figure 1: Difference between multi-task transfer learning and meta-learning. Solid lines denote the learning of initial parameters and dashed lines are the path of fine-tuning. Pink and gray represent the source task and target task, respectively.

In this paper, we frame GEC systems for different domains as different tasks and propose a meta-learning method for few-shot GEC domain adaptation. Specifically, we use the model-agnostic meta-learning algorithm (MAML; Finn et al., 2017) to learn an initialization of model parameters from high-resource domains, which can quickly adapt to a new target domain with a minimal amount of data. Fig. 1 shows the difference between our method and the multi-task transfer learning method of Nadejde and Tetreault (2019). Their method first trains a GEC model on multi-domain data and then fine-tunes it on a target domain. To evaluate the proposed method, we adapt the GEC model to a Chinese as a Second Language (CSL) learner’s first language (L1).
We construct a few-shot GEC domain adaptation dataset by using 4 resource-poor L1s as the test domains and the remaining 10 L1s as the source and valid domains. Our experiments on the constructed dataset show that our method can effectively adapt to a new domain using only 200 parallel sentences and outperforms the multi-task transfer learning method by 0.50 $F_{0.5}$ score on average. To the best of our knowledge, we are the first to apply meta-learning to GEC.

## 2 Method

### 2.1 GEC Domain Adaptation

Given an erroneous sentence $X=\\{x_{1},...,x_{M}\\}$ and a learner’s domain $d$, a Neural Machine Translation (NMT)-based model for domain-aware GEC models the conditional probability of the output sentence $Y=\\{y_{1},...,y_{N}\\}$ with neural networks as follows: $p(Y|X,d;\theta)=\prod_{t=1}^{N}p(y_{t}|y_{1:t-1},x_{1:M},d;\theta),$ (1) where $\theta$ is a set of model parameters. Following Madotto et al. (2019), we first adapt $\theta$ to the learner’s domain $d$ and then model the output sentence conditional on the erroneous input sentence with: $p(Y|X;\theta_{d})=\prod_{t=1}^{N}p(y_{t}|y_{1:t-1},x_{1:M};\theta_{d}),$ (2) where $\theta_{d}$ is the set of domain-aware model parameters. A learner’s domain can be defined by different criteria, such as the L1 and the proficiency level. In this paper, we use the L1 as the criterion and adapt a GEC system to the learner’s L1. Since our method is agnostic to the definition of domains, it can be easily extended to other types of domain-aware GEC systems.

### 2.2 Few-Shot GEC Domain Adaptation via Meta-Learning

We propose to apply model-agnostic meta-learning (MAML; Finn et al., 2017) to few-shot GEC domain adaptation. We use MAML to learn a good initialization of model parameters $\theta^{0}$, which can quickly adapt to new domains using few training examples. We call the proposed meta-learning method for GEC domain adaptation MetaGEC.
We define a set of source tasks $\mathscr{T}=\\{\mathcal{T}_{d_{1}},...,\mathcal{T}_{d_{k}}\\}$, where each task $\mathcal{T}_{d_{i}}$ is a GEC system for a specific domain $d_{i}$ and $k$ is the number of learners’ domains. For each meta-learning episode, we randomly sample a task $\mathcal{T}_{d_{i}}$ from $\mathscr{T}$. Then we independently sample two batches from task $\mathcal{T}_{d_{i}}$’s data: a support batch $D_{d_{i}}^{s}$ and a query batch $D_{d_{i}}^{q}$. We first use $D_{d_{i}}^{s}$ to update the GEC model parameters $\theta$ as follows: $\theta^{\prime}_{d_{i}}=\theta-\alpha\nabla_{\theta}\mathcal{L}_{D_{d_{i}}^{s}}(\theta),$ (3) where $\alpha$ is the learning rate and $\mathcal{L}$ is the cross-entropy loss function: $\mathcal{L}_{D_{d_{i}}^{s}}(\theta)=-\sum_{D_{d_{i}}^{s}}\log p(Y|X,\theta).$ (4) After that, we evaluate the updated parameters $\theta_{d_{i}}^{\prime}$ on $D_{d_{i}}^{q}$ and update the original model parameters $\theta$ with the gradient computed from this evaluation. It is possible to aggregate multiple episodes of source tasks before updating $\theta$. The original model parameters $\theta$ are therefore updated as follows: $\theta=\theta-\beta\sum_{d_{i}}\nabla_{\theta}\mathcal{L}_{D_{d_{i}}^{q}}(\theta_{d_{i}}^{\prime}),$ (5) where $\beta$ is the meta learning rate. The full algorithm is shown in Algorithm 1.
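To make the updates in Eqs. (3) and (5) concrete, the following toy sketch runs a first-order version of the meta-update on scalar regression tasks in place of the actual GEC model. The model, tasks, and hyperparameters are hypothetical illustrations and not the fairseq implementation used in the paper:

```python
import random

# Toy first-order MAML (FOMAML) sketch: each "task" d_i is 1-D linear
# regression y = w_i * x, and the "model" is a single scalar theta trained
# with MSE. Hypothetical toy setup, not the paper's Transformer model.

def grad(theta, batch):
    # gradient of 0.5 * mean((theta * x - y)^2) with respect to theta
    return sum((theta * x - y) * x for x, y in batch) / len(batch)

def sample_batch(w, n=8):
    xs = [random.uniform(-1, 1) for _ in range(n)]
    return [(x, w * x) for x in xs]

def meta_train(task_ws, alpha=0.1, beta=0.05, episodes=2000, seed=0):
    random.seed(seed)
    theta = 0.0
    for _ in range(episodes):
        meta_grad = 0.0
        for w in task_ws:                                   # loop over source tasks
            support, query = sample_batch(w), sample_batch(w)
            theta_i = theta - alpha * grad(theta, support)  # inner update, Eq. (3)
            meta_grad += grad(theta_i, query)               # first-order term of Eq. (5)
        theta -= beta * meta_grad                           # meta update
    return theta

theta0 = meta_train([0.5, 1.0, 1.5])  # three hypothetical source tasks
assert abs(theta0 - 1.0) < 0.15       # initialization lands near the task centroid
```

The learned $\theta^{0}$ sits between the source tasks, so a few inner steps adapt it quickly to any one of them, which is exactly the behavior MAML targets.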
Algorithm 1 Meta learning for few-shot GEC domain adaptation
Require: $\mathscr{T}$: set of source tasks
Require: $\alpha,\beta$: step size hyperparameters
1: Randomly initialize $\theta$
2: while not done do
3:  Sample a batch of tasks $\mathcal{T}_{d_{i}}\sim\mathscr{T}$
4:  for all $\mathcal{T}_{d_{i}}$ do
5:   $(D_{d_{i}}^{s},D_{d_{i}}^{q})\sim D_{d_{i}}$
6:   Evaluate $\nabla_{\theta}\mathcal{L}_{D_{d_{i}}^{s}}(\theta)$ using $D_{d_{i}}^{s}$
7:   Compute adapted parameters with gradient descent: $\theta_{d_{i}}^{\prime}=\theta-\alpha\nabla_{\theta}\mathcal{L}_{D_{d_{i}}^{s}}(\theta)$
8:  end for
9:  Update meta parameters: $\theta=\theta-\beta\sum_{d_{i}}\nabla_{\theta}\mathcal{L}_{D_{d_{i}}^{q}}(\theta_{d_{i}}^{\prime})$
10: end while

The update of the meta parameters involves second-order partial derivatives, which is computationally expensive. In our experiments, we use a first-order approximation to reduce memory consumption, following previous work (Gu et al., 2018).

Corpus | #Sentence | #SrcToken | #TgtToken
---|---|---|---
Lang-8 | 1.09M | 14M | 15M
HSK | 88K | 1.78M | 1.76M

Table 1: Data statistics for the Lang-8 and HSK datasets.

After the meta-training phase, task-specific learning is done on a small number of examples from a new target task $\mathcal{T}_{d}$, in order to obtain a task-specific model $\theta_{d}$.

## 3 Experiments

### 3.1 Settings

Dataset We use two datasets in our experiments: Lang-8 (https://lang-8.com) and HSK (http://hsk.blcu.edu.cn/). Both datasets are written by CSL learners and corrected by native Chinese speakers. We tokenize the datasets with jieba (https://github.com/fxsjy/jieba) and apply Byte Pair Encoding (Sennrich et al., 2016) to limit the vocabulary size (https://github.com/rsennrich/subword-nmt). We first pretrain our model on Lang-8 and then study GEC domain adaptation on the HSK dataset with the pretrained model. Table 1 shows the statistics of both datasets.
HSK consists of examination essays written by CSL learners with fourteen different L1s. First, we choose the four domains with the least data as the test domains: German (De), Russian (Ru), French (Fr) and Mongolian (Mo). Then, we randomly sample one domain from the remaining domains as the valid domain, while the other domains serve as the source domains. Specifically, we use Indonesian (In) as the valid domain, and Korean (Ko), Traditional Chinese (Zh-tw), Japanese (Ja), Singapore English (En-Sg), Malay (Ma), Burmese (Bu), Thai (Th), Vietnamese (Vi) and English (En) as the source domains. For each source domain, we sample 1000 parallel sentences as the in-domain dataset. For the valid domain, we sample 200, 800, and 400 parallel sentences as the in-domain training set, development set, and test set, respectively. For each test domain, we sample 200 parallel sentences as the in-domain training set, and divide the remaining data in HSK into development and test sets at a two-to-one ratio. The valid and test domains are also called target domains. We use ERRANT (https://github.com/chrisjbryant/errant) to extract the gold edits of the grammatical errors in the sentences of each test set.

Target Task | No Fine-tuning | Fine-tuning | MTL+Fine-tuning | MetaGEC
---|---|---|---|---
In | 19.18 | 25.40 | 37.09 | 37.46
De | 24.43 | 30.03 | 37.76 | 39.43
Ru | 21.44 | 33.98 | 40.15 | 39.14
Fr | 29.10 | 35.48 | 43.19 | 43.49
Mo | 29.30 | 36.72 | 48.07 | 49.21
Average | 24.69 | 32.32 | 41.25 | 41.75

Table 2: $F_{0.5}$ score on the test set of the target tasks, where In is the valid task and all other L1s are the test tasks.

GEC System We utilize the Transformer (Vaswani et al., 2017) implemented in fairseq (https://github.com/pytorch/fairseq) as our GEC model. We follow the model configuration transformer_wmt_en_de and set the batch size to 4000 tokens. For pretraining on Lang-8, we follow the training instructions in Ott et al.
(2018) (https://github.com/pytorch/fairseq/blob/v0.9.0/examples/scaling_nmt/README.md). For meta-training, we use the same Adam optimizer, except that we set lr=1e-5 for the outer loop and lr=1e-7 for the inner loop. At test time, we fine-tune the model on the target task’s training set with lr=5e-4. For all models, we translate with beam search using beam_size=12.

Baselines We compare MetaGEC with three baselines: (1) No Fine-tuning (Sharaf et al., 2020): the method that evaluates the pretrained GEC model on the target task’s test set; (2) Fine-tuning (Sharaf et al., 2020): the method that fine-tunes the pretrained GEC model directly on the target task’s training data; (3) MTL+Fine-tuning (Nadejde and Tetreault, 2019): the multi-task transfer learning method discussed in Section 1. It first fine-tunes the pretrained GEC model on all data of the source tasks in a multi-task learning framework, and then fine-tunes the resulting model on the target task’s training data. As the evaluation metric, we use the $F_{0.5}$ score computed by applying the MaxMatch ($M^{2}$) scorer (Dahlmeier and Ng, 2012; https://www.comp.nus.edu.sg/~nlp/conll14st.html). We repeat the baselines and our method three times with different seeds and report the averaged score.

### 3.2 Results

Table 2 shows the evaluation results of MetaGEC and the baselines. Overall, MetaGEC outperforms all the baselines, improving over No Fine-tuning, Fine-tuning and MTL+Fine-tuning by 17.06, 9.43 and 0.50 $F_{0.5}$ on average, respectively. This indicates that MetaGEC has successfully found a good initialization of model parameters for fast domain adaptation. We also observe that for Ru, MetaGEC performs worse than the baseline MTL+Fine-tuning. Among all five target tasks, Ru benefits the most from fine-tuning with in-domain data (No Fine-tuning to Fine-tuning). In contrast, it benefits the least from multi-task learning (Fine-tuning to MTL+Fine-tuning).
We hypothesize that for Ru, fine-tuning with in-domain data is more important than how we choose to utilize the data contained in the source tasks. Since MetaGEC differs from MTL+Fine-tuning in how it utilizes data from the source tasks, this hypothesis also partially explains the degraded performance of MetaGEC on Ru.

Figure 2: Impact of the number of source tasks. We report the averaged $F_{0.5}$ score on the test sets of the five target tasks (In, De, Ru, Fr, Mo).

To study the impact of the number of source tasks, we experiment with different numbers of source tasks and report the averaged $F_{0.5}$ score on the test sets of the five target tasks, as shown in Fig. 2. Note that we only ran these experiments once. We use Ko, Zh-tw, Ja, Ma and Bu as the source tasks when the number of source tasks is 5, and gradually add Th, En-Sg, En and Vi when increasing the number of source tasks from 5 to 9. We observe that including more source tasks in the meta-training phase yields better performance on the target tasks, demonstrating that a better initialization can be learned with more source tasks.

## 4 Related Work

Grammatical Error Correction Traditional GEC approaches fall into two categories: rule-based methods (Heidorn et al., 1982; Bustamante and León, 1996) and statistical machine translation (SMT)-based approaches (Brockett et al., 2006; Junczys-Dowmunt and Grundkiewicz, 2014). Rule-based methods only correct certain types of errors in the text. SMT-based approaches greatly improved the performance of GEC, but they have since been surpassed by deep learning-based methods. Junczys-Dowmunt et al. (2018) cast GEC as a low-resource NMT task. Due to the limited public data, many works (Lichtarge et al., 2019; Kiyono et al., 2019; Wang et al., 2019; Kaneko et al., 2020) focus on generating more pseudo data to improve the performance of neural GEC models.
GEC Domain Adaptation Rozovskaya and Roth (2011) use a Naive Bayes classifier to adapt a model to the L1 of the learner. Chollampatt et al. (2016) first train a neural network joint model on data labeled with the learner’s L1 and then integrate it into an SMT-based GEC system. Nadejde and Tetreault (2019) use transfer learning to adapt a model to different domains.

Meta-Learning Recently, meta-learning (Lake et al., 2015; Andrychowicz et al., 2016; Finn et al., 2017) has attracted considerable attention. Meta-learning aims at achieving fast adaptation on new data. Current meta-learning methods can be classified into two categories: 1) learning strategies and policies (Andrychowicz et al., 2016); 2) learning good initial model parameters (Finn et al., 2017). Many works have applied meta-learning to Natural Language Processing tasks, such as low-resource NMT (Gu et al., 2018), personalizing dialogue agents (Madotto et al., 2019) and few-shot NMT adaptation (Sharaf et al., 2020).

## 5 Conclusion

In this paper, we introduce MetaGEC, a model-agnostic meta-learning approach for few-shot GEC domain adaptation. MetaGEC exploits a set of data-rich source domains to learn an initialization of model parameters that facilitates fast adaptation to a new target domain with a minimal amount of training examples. Experimental results demonstrate the effectiveness of the proposed method. In the future, we will apply different meta-learning methods to the GEC task.

## References

* Andrychowicz et al. (2016) Marcin Andrychowicz, Misha Denil, Sergio Gomez Colmenarejo, Matthew W. Hoffman, David Pfau, Tom Schaul, and Nando de Freitas. 2016. Learning to learn by gradient descent by gradient descent. _CoRR_, abs/1606.04474.
* Brockett et al. (2006) Chris Brockett, William B. Dolan, and Michael Gamon. 2006. Correcting ESL errors using phrasal SMT techniques.
In _Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics_, pages 249–256, Sydney, Australia. Association for Computational Linguistics.
* Bustamante and León (1996) Flora Ramírez Bustamante and Fernando Sánchez León. 1996. Gramcheck: A grammar and style checker. _CoRR_, cmp-lg/9607001.
* Chollampatt et al. (2016) Shamil Chollampatt, Duc Tam Hoang, and Hwee Tou Ng. 2016. Adapting grammatical error correction based on the native language of writers with neural network joint models. In _Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing_, pages 1901–1911, Austin, Texas. Association for Computational Linguistics.
* Chollampatt and Ng (2018) Shamil Chollampatt and Hwee Tou Ng. 2018. A multilayer convolutional encoder-decoder neural network for grammatical error correction. _CoRR_, abs/1801.08831.
* Dahlmeier and Ng (2012) Daniel Dahlmeier and Hwee Tou Ng. 2012. Better evaluation for grammatical error correction. In _Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies_, pages 568–572, Montréal, Canada. Association for Computational Linguistics.
* Finn et al. (2017) Chelsea Finn, Pieter Abbeel, and Sergey Levine. 2017. Model-agnostic meta-learning for fast adaptation of deep networks. _CoRR_, abs/1703.03400.
* Gu et al. (2018) Jiatao Gu, Yong Wang, Yun Chen, Victor O. K. Li, and Kyunghyun Cho. 2018. Meta-learning for low-resource neural machine translation. In _Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing_, pages 3622–3631, Brussels, Belgium. Association for Computational Linguistics.
* Heidorn et al. (1982) G. E. Heidorn, K. Jensen, L. A. Miller, R. J. Byrd, and M. S. Chodorow. 1982. The epistle text-critiquing system. _IBM Systems Journal_, 21(3):305–326.
* Junczys-Dowmunt and Grundkiewicz (2014) Marcin Junczys-Dowmunt and Roman Grundkiewicz. 2014. The AMU system in the CoNLL-2014 shared task: Grammatical error correction by data-intensive and feature-rich statistical machine translation. In _Proceedings of the Eighteenth Conference on Computational Natural Language Learning: Shared Task_, pages 25–33, Baltimore, Maryland. Association for Computational Linguistics.
* Junczys-Dowmunt et al. (2018) Marcin Junczys-Dowmunt, Roman Grundkiewicz, Shubha Guha, and Kenneth Heafield. 2018. Approaching neural grammatical error correction as a low-resource machine translation task. In _Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers)_, pages 595–606, New Orleans, Louisiana. Association for Computational Linguistics.
* Kaneko et al. (2020) Masahiro Kaneko, Masato Mita, Shun Kiyono, Jun Suzuki, and Kentaro Inui. 2020. Encoder-decoder models can benefit from pre-trained masked language models in grammatical error correction.
* Kiyono et al. (2019) Shun Kiyono, Jun Suzuki, Masato Mita, Tomoya Mizumoto, and Kentaro Inui. 2019. An empirical study of incorporating pseudo data into grammatical error correction. In _Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)_, pages 1236–1242, Hong Kong, China. Association for Computational Linguistics.
* Lake et al. (2015) Brenden M. Lake, Ruslan Salakhutdinov, and Joshua B. Tenenbaum. 2015. Human-level concept learning through probabilistic program induction. _Science_, 350(6266):1332–1338.
* Lichtarge et al. (2019) Jared Lichtarge, Chris Alberti, Shankar Kumar, Noam Shazeer, Niki Parmar, and Simon Tong. 2019. Corpora generation for grammatical error correction.
In _Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)_, pages 3291–3301, Minneapolis, Minnesota. Association for Computational Linguistics.
* Madotto et al. (2019) Andrea Madotto, Zhaojiang Lin, Chien-Sheng Wu, and Pascale Fung. 2019. Personalizing dialogue agents via meta-learning. In _Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics_, pages 5454–5459, Florence, Italy. Association for Computational Linguistics.
* Nadejde and Tetreault (2019) Maria Nadejde and Joel Tetreault. 2019. Personalizing grammatical error correction: Adaptation to proficiency level and L1. In _Proceedings of the 5th Workshop on Noisy User-generated Text (W-NUT 2019)_, pages 27–33, Hong Kong, China. Association for Computational Linguistics.
* Ott et al. (2018) Myle Ott, Sergey Edunov, David Grangier, and Michael Auli. 2018. Scaling neural machine translation. In _Proceedings of the Third Conference on Machine Translation: Research Papers_, pages 1–9.
* Rozovskaya and Roth (2011) Alla Rozovskaya and Dan Roth. 2011. Algorithm selection and model adaptation for ESL correction tasks. In _Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies_, pages 924–933, Portland, Oregon, USA. Association for Computational Linguistics.
* Sennrich et al. (2016) Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In _Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)_, pages 1715–1725, Berlin, Germany. Association for Computational Linguistics.
* Sharaf et al. (2020) Amr Sharaf, Hany Hassan, and Hal Daumé. 2020. Meta-learning for few-shot NMT adaptation. _ArXiv_, abs/2004.02745.
* Vaswani et al.
(2017) Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. _CoRR_, abs/1706.03762.
* Wang et al. (2019) Chencheng Wang, Liner Yang, Yun Chen, Yongping Du, and Erhong Yang. 2019. Controllable data synthesis method for grammatical error correction.
* Zhao et al. (2019) Wei Zhao, Liang Wang, Kewei Shen, Ruoyu Jia, and Jingming Liu. 2019. Improving grammatical error correction via pre-training a copy-augmented architecture with unlabeled data. In _Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)_, pages 156–165, Minneapolis, Minnesota. Association for Computational Linguistics.
# Open problems in cross-chain protocols

Thomas Eizinger, Philipp Hoenisch, Lucas Soriano del Pino
CoBloX Pty Ltd, Australia
<EMAIL_ADDRESS>

###### Abstract

Blockchain interoperability is a prominent research field which aims to build bridges between otherwise isolated blockchains. With advances in cryptography, novel protocols are published by academia and applied in different applications and products in the industry. In theory, these innovative protocols provide strong privacy and security guarantees by including formal proofs. However, pure theoretical work often lacks the perspective of real-world applications. In this work, we describe a number of hardly researched problems which developers encounter when building cross-chain products.

###### Keywords:

Blockchain, cross-chain, Bitcoin

## Introduction

The domain of blockchain technology has been a prominent research field for industry and academia ever since Bitcoin was introduced in 2008 [5]. Its central idea is simple: to provide a trustless and censorship-resistant way of transferring asset ownership between parties. Besides Bitcoin, a blockchain ecosystem has evolved over the years with hundreds of different implementations. Most blockchains provide their own coin, which is used to pay transaction fees, pay for smart contract execution, or serve as digital cash. The age-old problem of interoperability, well studied in various other computer systems, is now also an important issue to be tackled in the ever-evolving world of blockchain.
In particular, cross-chain protocols are an active area of research that tries to provide certain guarantees when moving coins between two blockchains. Several protocols have been proposed over the past few years, e.g. hash-based locks, Scriptless Scripts and other more complex signature-based protocols. Most of these proposals (rightfully) focus on the security aspects of the cryptography involved, by including formal proofs or simply showing that the protocols work [3]. However, there is a whole dimension of scarcely studied and completely unsolved problems which lies between theoretical work and practical cross-chain product development. In this work, we describe several of these with the intent of motivating anyone working in this field to collaborate on possible solutions. This list is by no means exhaustive. It instead represents the issues that we found to be most pressing while developing software for cross-chain protocols.

## 1 Wallets

Implementing cryptographic protocols on top of blockchains such as Bitcoin requires precise control over how individual transactions are built. For example, to unlock the funds within an HTLC (Hash Time-Locked Contract) on Bitcoin, one has to correctly construct the witness stack according to the semantics of the initial locking script. In practice, this means having control of the private key within the software that knows about the protocol semantics, in order to produce the correct signatures. This is an issue from a security perspective, as the user has to share their private key with the software. Miniscript [2] represents an effort to generalize how such spending conditions can be expressed. A general way of expressing spending conditions allows existing wallets to support signing of arbitrary UTXOs (Unspent Transaction Outputs), as long as their script follows the miniscript language.
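To illustrate the kind of spending condition involved, an HTLC can be written as a miniscript-style policy: one branch pays the redeemer's key given the SHA-256 pre-image, the other refunds the funder's key after a timeout. The helper below is only an illustrative sketch that builds such a policy string; it is not part of any real miniscript library, and the key and hash names are placeholders:

```python
def htlc_policy(redeem_pk: str, refund_pk: str, payment_hash: str, timeout: int) -> str:
    """Express an HTLC as a miniscript-style policy string.

    Redeem branch: the redeemer's key plus knowledge of the SHA-256
    pre-image. Refund branch: the funder's key once `timeout` blocks
    have passed (a relative timelock, expressed via `older`).
    """
    redeem = f"and(pk({redeem_pk}),sha256({payment_hash}))"
    refund = f"and(pk({refund_pk}),older({timeout}))"
    return f"or({redeem},{refund})"


policy = htlc_policy("key_bob", "key_alice", "H", 144)
# -> or(and(pk(key_bob),sha256(H)),and(pk(key_alice),older(144)))
```

A wallet that understands the policy language could compile such an expression to a concrete script and sign for it without the application ever holding the private key.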
Miniscript enables applications to create complex spending conditions such as HTLCs and have transactions signed by the user's wallet, without the user having to share their private key with the application. Unfortunately, support for miniscript within wallets in the wild is basically non-existent. Additionally, solutions like miniscript only work for spending conditions that are expressed as actual scripts. Modern locking mechanisms such as adaptor signatures cannot be expressed using miniscript.

### 1.1 Hot wallets

Any software that implements cryptographic protocols currently has to roll its own hot wallet tailored to the needs of the protocol. Efforts like bdk (Bitcoin Dev Kit) [4] attempt to make it easier to build such a wallet, although it remains a non-trivial task. In summary, we see two classes of problems here:

1. Lack of general solutions for expressing arbitrary locking mechanisms. These would enable the development of "off-the-shelf" components that can be used to implement the required wallet functionality for blockchain protocols without prior knowledge of specific protocols. Consequently, it would be simpler to develop software implementing blockchain protocols, and the user experience would improve by allowing users to utilize their existing wallets with the software providing the protocol implementation.

2. Lack of reusable components to safely roll your own wallet. Such components would improve the ecosystem by generally accelerating development and allowing developers to focus on their protocol instead of having to implement a wallet from scratch.

### 1.2 Multi-currency wallets

So far, we have looked at the problem space of wallets when implementing protocols on a single blockchain. For cross-chain protocols, the problem space grows linearly with the number of chains supported by the protocol.
Different blockchains may use different elliptic curves and signature schemes, making it hard, if not impossible, to share algorithms or implementations between them.

## 2 Blockchain monitoring and interaction

A software implementing a cross-chain protocol is effectively a state machine that gets advanced by certain events happening on any of the blockchains involved. Examples of such events are:

* Blockchain time exceeds a certain threshold
* A specific UTXO is being spent
* A transaction reaches a certain confirmation target

To react to these events promptly, the software needs to be aware of the latest blockchain state at all times. This is what we call "blockchain monitoring". There are several ways in which an application can get access to the blockchain state:

1. Talking to a self-hosted full node
2. Talking to a shared, hosted full node
3. Talking to a blockchain explorer
4. Talking directly to other full nodes via the p2p (peer-to-peer) protocol

We define the following three desired properties of blockchain monitoring:

* Allow easy and fast on-boarding of the user
* Trustless and privacy-preserving
* Efficient

In the following sections, we will go through the four monitoring options and show that none of them embodies all three requirements.

### 2.1 Self-hosted full node

Running a full node yourself provides by far the most flexibility for protocol software. It allows for a simple poll-based model of querying for the latest blockchain state. Given the self-hosted nature of the full node, such an implementation is reasonably efficient. Additionally, accessing the network through a dedicated full node preserves the user's privacy and does not demand trusting a third party. Unfortunately, setting up full nodes is a time-intensive task due to the initial block download. Furthermore, the resources required to continuously run multiple nodes, one per blockchain, are not to be underestimated.
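The poll-based model available with a self-hosted node can be sketched as a loop that repeatedly queries the node until a transaction reaches its confirmation target, then fires the corresponding state-machine event. This is an illustrative sketch under stated assumptions: `get_confirmations` is a hypothetical stand-in for whichever RPC call the chosen node actually exposes, and the event callback stands in for the protocol's state machine:

```python
import time
from typing import Callable


def watch_confirmations(
    get_confirmations: Callable[[str], int],  # stand-in for a node RPC call
    txid: str,
    target: int,
    on_confirmed: Callable[[str], None],      # advances the protocol state machine
    poll_interval: float = 10.0,
    sleep: Callable[[float], None] = time.sleep,
) -> None:
    """Poll a (self-hosted) node until `txid` has `target` confirmations."""
    while True:
        if get_confirmations(txid) >= target:
            on_confirmed(txid)
            return
        sleep(poll_interval)
```

Against a shared, hosted node or an explorer, the same loop would have to be throttled or replaced by push-style notifications, since each poll consumes paid bandwidth.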
### 2.2 Shared, hosted full node

A shared, hosted full node allows for an easier setup compared to running one yourself. However, privacy might be impacted, and the user has to trust the node operator to provide an accurate view of the blockchain. Finally, a node operator might charge the user based on the bandwidth used, which has implications on how the software interacts with the node, i.e. the software cannot simply poll for state updates every 10 seconds.

### 2.3 Blockchain explorer

Accessing the blockchain via a block explorer enables easy and fast on-boarding of the user because it requires no prior setup. Similar to a shared, hosted full node, privacy might be impacted and trust in the operator of the blockchain explorer is required. Blockchain explorers usually offer a more high-level API (Application Programming Interface) to interact with the blockchain. For example, instead of having to query individual blocks and transactions, explorers usually already index the balances of addresses. Such high-level APIs drastically reduce the required network communication between the software and the explorer, thereby providing a reasonably efficient way of blockchain monitoring.

### 2.4 p2p protocol

Talking directly to other full nodes over the p2p protocol of a blockchain network has several advantages over the previous options. For one, it is as privacy-preserving as running your own full node. Second, depending on the capabilities of the p2p protocol, it can be trustless. For example, BIP157 [6] allows for client-side filtering of blocks. Finally, accessing blockchain state directly via the p2p protocol can also be more efficient than the other solutions due to decreased communication overhead and indirection. Unfortunately, directly talking to other full nodes requires the application to apply all consensus rules itself to make sure it has a correct view of the latest state. This turns out to be a non-trivial undertaking.
For example, for many blockchains there is no clear specification of all consensus rules, making them practically impossible to re-implement.

### 2.5 Summary

For software implementing a cross-chain protocol, monitoring the blockchain is vital. Without access to the latest state, the protocol's state machine cannot advance. As the above sections show, efficient and trustless monitoring is currently only possible by running a full node yourself. That, in turn, greatly hinders the on-boarding process of users and is resource-intensive to operate.

## 3 Fees

Users have to pay transaction fees to get their transaction included in a block in almost any public blockchain system. Whilst useful for preventing spam and other attacks, fees also present a problem in blockchain protocols that involve timelocks and are generally composed of more than a single transaction.

### 3.1 Timelocks

Timelocks are a common component of blockchain protocols. They make it possible to express spending conditions that are based not only on knowledge of secrets like pre-images but also on time. This is useful in allowing parties to abort or bail out of a protocol execution.

### 3.2 Pre-signed transactions

A pre-signed transaction is a transaction that is signed by one or more parties ahead of the time it is meant to be broadcast. Many blockchain protocols pre-sign punishment or refund transactions to provide safety to the parties involved. For example, when setting up an LN (Lightning Network) channel [7], the transaction to close the channel is signed before the transaction that actually opens it. The mining fee paid by a transaction cannot be changed once it has been signed. As such, pre-signing a transaction requires making an estimate of how large the mining fee should be. Depending on the timeline over which the protocol operates, a forecast of the blockchain load can be extremely unreliable.
A varying network load can negatively impact the user if such a pre-signed transaction is no longer confirmed within the block target that was originally anticipated. One solution to this problem is the fee-bumping technique CPFP (child-pays-for-parent) [1], although, similar to other topics mentioned in this work, such a strategy needs to be specifically developed and included in the software.

### 3.3 Summary

Timelocks as well as pre-signed transactions share a common characteristic: both concepts are about transactions that are to be broadcast some time in the future instead of right away. Given the dynamic nature of a blockchain's fee market, it is a non-trivial undertaking to reliably get a transaction confirmed within a certain block target starting from some point in the future. Cross-chain protocols often rely on both concepts, timelocks and pre-signed transactions, to achieve properties like atomicity or fairness. Yet, in our experience, it is usually left up to the software and/or the user to ensure a certain transaction is included in the blockchain. We see a lack of "off-the-shelf" solutions and general algorithms for reliably determining fees of such delayed transactions.

## 4 Testing

Like every piece of software, implementations of cross-chain protocols need to be tested. The ideal test suite combines fast execution speed with a high degree of confidence that the software works as expected. This combination allows for short feedback cycles and therefore greater productivity. We have found that it is currently not possible to develop test suites for the cross-chain protocol space that fulfill both of these criteria. For a transaction to be accepted by the network, a number of conditions must be met.
Some examples are:

* The transaction must not spend an already spent output
* The signatures on the transaction must verify
* The transaction must not create more coins than it consumes
* Any smart contract involved in the transaction must execute without errors

Some of these aspects can be verified in isolation. For example, checking the validity of a signature is reasonably easy. It follows that testing code that produces such a signature is also easy. Verifying that a transaction does not double-spend an output is more complicated. By nature, such a verification depends on the current state of the network. Similarly, evaluating a smart contract also requires access to the current state. Dependence on this state implies a more complicated test setup in which this state is constructed. In the current state of affairs, creating such a state reliably is only possible by spinning up instances of the respective blockchain nodes with a "test" configuration. Unfortunately, this drastically slows down the execution speed due to the start-up time and network communication overhead. We have also found that running such tests in parallel can cause problems if all tests share the same nodes. We experienced sporadic test failures caused by timeouts that do not occur with decreased parallelization. On the other hand, spinning up a node per test to achieve isolation demands even more resources. While testing against actual blockchain nodes provides a high level of confidence, these test suites tend to be slow and often need to be executed sequentially. Attempts to run them in parallel easily lead to instability, which lowers the confidence in the test suite. In a cross-chain setting, the situation is even worse due to the increased number of combinations that need to be tested.

## 5 Economics of atomic swaps

Atomic swaps are a very popular cross-chain protocol. Their name suggests that they represent an atomic swap operation. However, that is not quite true.
Atomic swaps merely present a time window within which both parties are committed to the swap. Outside of this time window, no atomicity is guaranteed. This has consequences for applications built on top of atomic swaps once the implementation details of this atomicity leak into the application layer.

### 5.1 Draining attack

Alice, by convention the party that moves first, can suffer from a draining attack where she sends the first transaction to the network without Bob ever moving forward afterwards. Sending a transaction to the network requires Alice to pay mining fees, yet Bob has no obligation to lock in his part of the swap. Bob's inaction forces Alice to spend her coins via the "refund" path after a certain timeout has been reached, incurring further fee expenses.

### 5.2 American Option

If Bob decides to lock in his part of the swap, Alice is presented with an American Option: she can either take Bob's money, effectively executing the swap, or wait for the timeout, forcing both parties to spend their coins via the "refund" path. Why would Alice make use of this option? Once both parties have locked in their share, the rate is fixed. Alice can now check how the price evolves on other markets and take whichever action is in her favor.

### 5.3 Where is the atomicity?

An atomic swap is atomic as long as both parties are committed to the swap. In that case, it is a safe way to swap assets because no party has to make the full transfer first. The problem here is the cost associated with achieving this atomicity: each party has to pay for a single transaction before they can actually execute the swap. The asymmetry of the protocol leaks up into the application layer as soon as one tries to use atomic swaps to implement, for example, a trading platform. Whoever moves first is exposed to the draining problem. Whoever redeems first holds an American Option. Neither problem is pleasant to deal with as a market maker or service provider.
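Alice's free option boils down to a simple decision rule once both parties have locked their coins: compare the rate fixed at setup against the rate currently available elsewhere and pick whichever path pays more. The sketch below is purely illustrative (it ignores the extra mining fees that the refund path itself incurs, which in practice shift the break-even point):

```python
def exercise_or_refund(locked_rate: float, market_rate: float) -> str:
    """Alice's American Option once both swap transactions are confirmed.

    `locked_rate`: units received per unit given up, fixed at swap setup.
    `market_rate`: the rate currently available on other markets.
    Redeeming executes the swap at the locked rate; waiting for the
    timeout sends both parties down the "refund" path.
    Fees on the refund path are ignored for simplicity.
    """
    # If the locked-in rate is at least as good as the market, execute.
    return "redeem" if locked_rate >= market_rate else "refund"
```

Because the rule never favors Bob, the expected value of this option is a cost that a rational Bob must price into every quote, which is exactly how the protocol detail leaks into the application layer.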
In summary, we want to point out that it is important to consider such scenarios in the design of cross-chain protocols. To be easily and practically applicable, a protocol must not only be cryptographically sound; it must also present a clean abstraction that does not leak into the application layer.

## 6 Summary

This work presents a number of problems that we consider open in the cross-chain protocol space from an industry perspective. In several cases, the problems can be solved ad hoc by, for example, implementing your own wallet with key management, blockchain monitoring, fee management and testing environments. Having to deal with these problems slows down development when the actual goal is to implement a product on top of a cryptographic protocol.

## References

* [1] Bitcoin Optech: Child-pays-for-parent. https://bitcoinops.org/en/topics/cpfp/ (2020), accessed: 2021-01-29
* [2] Blockstream: Miniscript. http://bitcoin.sipa.be/miniscript/ (2020), accessed: 2021-01-25
* [3] COMIT: Connect all the blockchains!!! https://medium.com/coblox/connect-all-the-blockchains-atomic-swap-78b38fff42e (2018), accessed: 2021-01-13
* [4] Filini, A., Casatta, R.: Bitcoin Dev Kit. https://github.com/bitcoindevkit/bdk (2020), accessed: 2021-01-29
* [5] Nakamoto, S.: Bitcoin: A peer-to-peer electronic cash system. https://bitcoin.org/bitcoin.pdf (2008), accessed: 2021-01-13
* [6] Osuntokun, O., Akselrod, A., Posen, J.: Client side block filtering. https://github.com/bitcoin/bips/blob/master/bip-0157.mediawiki (2017), accessed: 2021-01-29
* [7] Poon, J., Dryja, T.: The Bitcoin Lightning Network: Scalable off-chain instant payments (2016)
STAR Collaboration # Cumulants and Correlation Functions of Net-proton, Proton and Antiproton Multiplicity Distributions in Au+Au Collisions at RHIC M. S. Abdallah American University of Cairo, New Cairo 11835, New Cairo, Egypt J. Adam Brookhaven National Laboratory, Upton, New York 11973 L. Adamczyk AGH University of Science and Technology, FPACS, Cracow 30-059, Poland J. R. Adams Ohio State University, Columbus, Ohio 43210 J. K. Adkins University of Kentucky, Lexington, Kentucky 40506-0055 G. Agakishiev Joint Institute for Nuclear Research, Dubna 141 980, Russia I. Aggarwal Panjab University, Chandigarh 160014, India M. M. Aggarwal Panjab University, Chandigarh 160014, India Z. Ahammed Variable Energy Cyclotron Centre, Kolkata 700064, India I. Alekseev Alikhanov Institute for Theoretical and Experimental Physics NRC ”Kurchatov Institute”, Moscow 117218, Russia National Research Nuclear University MEPhI, Moscow 115409, Russia D. M. Anderson Texas A&M University, College Station, Texas 77843 A. Aparin Joint Institute for Nuclear Research, Dubna 141 980, Russia E. C. Aschenauer Brookhaven National Laboratory, Upton, New York 11973 M. U. Ashraf Central China Normal University, Wuhan, Hubei 430079 F. G. Atetalla Kent State University, Kent, Ohio 44242 A. Attri Panjab University, Chandigarh 160014, India G. S. Averichev Joint Institute for Nuclear Research, Dubna 141 980, Russia V. Bairathi Instituto de Alta Investigación, Universidad de Tarapacá, Arica 1000000, Chile W. Baker University of California, Riverside, California 92521 J. G. Ball Cap University of Houston, Houston, Texas 77204 K. Barish University of California, Riverside, California 92521 A. Behera State University of New York, Stony Brook, New York 11794 R. Bellwied University of Houston, Houston, Texas 77204 P. Bhagat University of Jammu, Jammu 180001, India A. Bhasin University of Jammu, Jammu 180001, India J. Bielcik Czech Technical University in Prague, FNSPE, Prague 115 19, Czech Republic J. 
Bielcikova Nuclear Physics Institute of the CAS, Rez 250 68, Czech Republic I. G. Bordyuzhin Alikhanov Institute for Theoretical and Experimental Physics NRC ”Kurchatov Institute”, Moscow 117218, Russia J. D. Brandenburg Brookhaven National Laboratory, Upton, New York 11973 A. V. Brandin National Research Nuclear University MEPhI, Moscow 115409, Russia I. Bunzarov Joint Institute for Nuclear Research, Dubna 141 980, Russia J. Butterworth Rice University, Houston, Texas 77251 X. Z. Cai Shanghai Institute of Applied Physics, Chinese Academy of Sciences, Shanghai 201800 H. Caines Yale University, New Haven, Connecticut 06520 M. Calderón de la Barca Sánchez University of California, Davis, California 95616 D. Cebra University of California, Davis, California 95616 I. Chakaberia Lawrence Berkeley National Laboratory, Berkeley, California 94720 Brookhaven National Laboratory, Upton, New York 11973 P. Chaloupka Czech Technical University in Prague, FNSPE, Prague 115 19, Czech Republic B. K. Chan University of California, Los Angeles, California 90095 F-H. Chang National Cheng Kung University, Tainan 70101 Z. Chang Brookhaven National Laboratory, Upton, New York 11973 N. Chankova-Bunzarova Joint Institute for Nuclear Research, Dubna 141 980, Russia A. Chatterjee Central China Normal University, Wuhan, Hubei 430079 S. Chattopadhyay Variable Energy Cyclotron Centre, Kolkata 700064, India D. Chen University of California, Riverside, California 92521 J. Chen Shandong University, Qingdao, Shandong 266237 J. H. Chen Fudan University, Shanghai, 200433 X. Chen University of Science and Technology of China, Hefei, Anhui 230026 Z. Chen Shandong University, Qingdao, Shandong 266237 J. Cheng Tsinghua University, Beijing 100084 M. Chevalier University of California, Riverside, California 92521 S. Choudhury Fudan University, Shanghai, 200433 W. Christie Brookhaven National Laboratory, Upton, New York 11973 X. Chu Brookhaven National Laboratory, Upton, New York 11973 H. J. 
Crawford University of California, Berkeley, California 94720 M. Csanád ELTE Eötvös Loránd University, Budapest, Hungary H-1117 M. Daugherity Abilene Christian University, Abilene, Texas 79699 T. G. Dedovich Joint Institute for Nuclear Research, Dubna 141 980, Russia I. M. Deppner University of Heidelberg, Heidelberg 69120, Germany A. A. Derevschikov NRC ”Kurchatov Institute”, Institute of High Energy Physics, Protvino 142281, Russia A. Dhamija Panjab University, Chandigarh 160014, India L. Di Carlo Wayne State University, Detroit, Michigan 48201 L. Didenko Brookhaven National Laboratory, Upton, New York 11973 X. Dong Lawrence Berkeley National Laboratory, Berkeley, California 94720 J. L. Drachenberg Abilene Christian University, Abilene, Texas 79699 J. C. Dunlop Brookhaven National Laboratory, Upton, New York 11973 N. Elsey Wayne State University, Detroit, Michigan 48201 J. Engelage University of California, Berkeley, California 94720 G. Eppley Rice University, Houston, Texas 77251 S. Esumi University of Tsukuba, Tsukuba, Ibaraki 305-8571, Japan O. Evdokimov University of Illinois at Chicago, Chicago, Illinois 60607 A. Ewigleben Lehigh University, Bethlehem, Pennsylvania 18015 O. Eyser Brookhaven National Laboratory, Upton, New York 11973 R. Fatemi University of Kentucky, Lexington, Kentucky 40506-0055 F. M. Fawzi American University of Cairo, New Cairo 11835, New Cairo, Egypt S. Fazio Brookhaven National Laboratory, Upton, New York 11973 P. Federic Nuclear Physics Institute of the CAS, Rez 250 68, Czech Republic J. Fedorisin Joint Institute for Nuclear Research, Dubna 141 980, Russia C. J. Feng National Cheng Kung University, Tainan 70101 Y. Feng Purdue University, West Lafayette, Indiana 47907 P. Filip Joint Institute for Nuclear Research, Dubna 141 980, Russia E. Finch Southern Connecticut State University, New Haven, Connecticut 06515 Y. Fisyak Brookhaven National Laboratory, Upton, New York 11973 A. Francisco Yale University, New Haven, Connecticut 06520 C. 
Fu Central China Normal University, Wuhan, Hubei 430079 L. Fulek AGH University of Science and Technology, FPACS, Cracow 30-059, Poland C. A. Gagliardi Texas A&M University, College Station, Texas 77843 T. Galatyuk Technische Universität Darmstadt, Darmstadt 64289, Germany F. Geurts Rice University, Houston, Texas 77251 N. Ghimire Temple University, Philadelphia, Pennsylvania 19122 A. Gibson Valparaiso University, Valparaiso, Indiana 46383 K. Gopal Indian Institute of Science Education and Research (IISER) Tirupati, Tirupati 517507, India X. Gou Shandong University, Qingdao, Shandong 266237 D. Grosnick Valparaiso University, Valparaiso, Indiana 46383 A. Gupta University of Jammu, Jammu 180001, India W. Guryn Brookhaven National Laboratory, Upton, New York 11973 A. I. Hamad Kent State University, Kent, Ohio 44242 A. Hamed American University of Cairo, New Cairo 11835, New Cairo, Egypt Y. Han Rice University, Houston, Texas 77251 S. Harabasz Technische Universität Darmstadt, Darmstadt 64289, Germany M. D. Harasty University of California, Davis, California 95616 J. W. Harris Yale University, New Haven, Connecticut 06520 H. Harrison University of Kentucky, Lexington, Kentucky 40506-0055 S. He Central China Normal University, Wuhan, Hubei 430079 W. He Fudan University, Shanghai, 200433 X. H. He Institute of Modern Physics, Chinese Academy of Sciences, Lanzhou, Gansu 730000 Y. He Shandong University, Qingdao, Shandong 266237 S. Heppelmann University of California, Davis, California 95616 S. Heppelmann Pennsylvania State University, University Park, Pennsylvania 16802 N. Herrmann University of Heidelberg, Heidelberg 69120, Germany E. Hoffman University of Houston, Houston, Texas 77204 L. Holub Czech Technical University in Prague, FNSPE, Prague 115 19, Czech Republic Y. Hu Fudan University, Shanghai, 200433 H. Huang National Cheng Kung University, Tainan 70101 H. Z. Huang University of California, Los Angeles, California 90095 S. L. 
Huang State University of New York, Stony Brook, New York 11794 T. Huang National Cheng Kung University, Tainan 70101 X. Huang Tsinghua University, Beijing 100084 Y. Huang Tsinghua University, Beijing 100084 T. J. Humanic Ohio State University, Columbus, Ohio 43210 D. Isenhower Abilene Christian University, Abilene, Texas 79699 W. W. Jacobs Indiana University, Bloomington, Indiana 47408 C. Jena Indian Institute of Science Education and Research (IISER) Tirupati, Tirupati 517507, India A. Jentsch Brookhaven National Laboratory, Upton, New York 11973 Y. Ji Lawrence Berkeley National Laboratory, Berkeley, California 94720 J. Jia Brookhaven National Laboratory, Upton, New York 11973 State University of New York, Stony Brook, New York 11794 K. Jiang University of Science and Technology of China, Hefei, Anhui 230026 X. Ju University of Science and Technology of China, Hefei, Anhui 230026 E. G. Judd University of California, Berkeley, California 94720 S. Kabana Instituto de Alta Investigación, Universidad de Tarapacá, Arica 1000000, Chile M. L. Kabir University of California, Riverside, California 92521 S. Kagamaster Lehigh University, Bethlehem, Pennsylvania 18015 D. Kalinkin Indiana University, Bloomington, Indiana 47408 Brookhaven National Laboratory, Upton, New York 11973 K. Kang Tsinghua University, Beijing 100084 D. Kapukchyan University of California, Riverside, California 92521 K. Kauder Brookhaven National Laboratory, Upton, New York 11973 H. W. Ke Brookhaven National Laboratory, Upton, New York 11973 D. Keane Kent State University, Kent, Ohio 44242 A. Kechechyan Joint Institute for Nuclear Research, Dubna 141 980, Russia Y. V. Khyzhniak National Research Nuclear University MEPhI, Moscow 115409, Russia D. P. Kikoła Warsaw University of Technology, Warsaw 00-661, Poland C. Kim University of California, Riverside, California 92521 B. Kimelman University of California, Davis, California 95616 D. Kincses ELTE Eötvös Loránd University, Budapest, Hungary H-1117 I. 
Kisel Frankfurt Institute for Advanced Studies FIAS, Frankfurt 60438, Germany A. Kiselev Brookhaven National Laboratory, Upton, New York 11973 A. G. Knospe Lehigh University, Bethlehem, Pennsylvania 18015 L. Kochenda National Research Nuclear University MEPhI, Moscow 115409, Russia L. K. Kosarzewski Czech Technical University in Prague, FNSPE, Prague 115 19, Czech Republic L. Kramarik Czech Technical University in Prague, FNSPE, Prague 115 19, Czech Republic P. Kravtsov National Research Nuclear University MEPhI, Moscow 115409, Russia L. Kumar Panjab University, Chandigarh 160014, India S. Kumar Institute of Modern Physics, Chinese Academy of Sciences, Lanzhou, Gansu 730000 R. Kunnawalkam Elayavalli Yale University, New Haven, Connecticut 06520 J. H. Kwasizur Indiana University, Bloomington, Indiana 47408 R. Lacey State University of New York, Stony Brook, New York 11794 S. Lan Central China Normal University, Wuhan, Hubei 430079 J. M. Landgraf Brookhaven National Laboratory, Upton, New York 11973 J. Lauret Brookhaven National Laboratory, Upton, New York 11973 A. Lebedev Brookhaven National Laboratory, Upton, New York 11973 R. Lednicky Joint Institute for Nuclear Research, Dubna 141 980, Russia J. H. Lee Brookhaven National Laboratory, Upton, New York 11973 Y. H. Leung Lawrence Berkeley National Laboratory, Berkeley, California 94720 C. Li Shandong University, Qingdao, Shandong 266237 C. Li University of Science and Technology of China, Hefei, Anhui 230026 W. Li Rice University, Houston, Texas 77251 X. Li University of Science and Technology of China, Hefei, Anhui 230026 Y. Li Tsinghua University, Beijing 100084 X. Liang University of California, Riverside, California 92521 Y. Liang Kent State University, Kent, Ohio 44242 R. Licenik Nuclear Physics Institute of the CAS, Rez 250 68, Czech Republic T. Lin Texas A&M University, College Station, Texas 77843 Y. Lin Central China Normal University, Wuhan, Hubei 430079 M. A. 
Lisa Ohio State University, Columbus, Ohio 43210 F. Liu Central China Normal University, Wuhan, Hubei 430079 H. Liu Indiana University, Bloomington, Indiana 47408 P. Liu State University of New York, Stony Brook, New York 11794 T. Liu Yale University, New Haven, Connecticut 06520 X. Liu Ohio State University, Columbus, Ohio 43210 Y. Liu Texas A&M University, College Station, Texas 77843 Z. Liu University of Science and Technology of China, Hefei, Anhui 230026 T. Ljubicic Brookhaven National Laboratory, Upton, New York 11973 W. J. Llope Wayne State University, Detroit, Michigan 48201 R. S. Longacre Brookhaven National Laboratory, Upton, New York 11973 E. Loyd University of California, Riverside, California 92521 N. S. Lukow Temple University, Philadelphia, Pennsylvania 19122 X. Luo Central China Normal University, Wuhan, Hubei 430079 L. Ma Fudan University, Shanghai, 200433 R. Ma Brookhaven National Laboratory, Upton, New York 11973 Y. G. Ma Fudan University, Shanghai, 200433 N. Magdy University of Illinois at Chicago, Chicago, Illinois 60607 R. Majka Deceased Yale University, New Haven, Connecticut 06520 D. Mallick National Institute of Science Education and Research, HBNI, Jatni 752050, India S. Margetis Kent State University, Kent, Ohio 44242 C. Markert University of Texas, Austin, Texas 78712 H. S. Matis Lawrence Berkeley National Laboratory, Berkeley, California 94720 J. A. Mazer Rutgers University, Piscataway, New Jersey 08854 N. G. Minaev NRC ”Kurchatov Institute”, Institute of High Energy Physics, Protvino 142281, Russia S. Mioduszewski Texas A&M University, College Station, Texas 77843 B. Mohanty National Institute of Science Education and Research, HBNI, Jatni 752050, India M. M. Mondal State University of New York, Stony Brook, New York 11794 I. Mooney Wayne State University, Detroit, Michigan 48201 D. A. Morozov NRC ”Kurchatov Institute”, Institute of High Energy Physics, Protvino 142281, Russia A. 
Mukherjee ELTE Eötvös Loránd University, Budapest, Hungary H-1117 M. Nagy ELTE Eötvös Loránd University, Budapest, Hungary H-1117 J. D. Nam Temple University, Philadelphia, Pennsylvania 19122 Md. Nasim Indian Institute of Science Education and Research (IISER), Berhampur 760010 , India K. Nayak Central China Normal University, Wuhan, Hubei 430079 D. Neff University of California, Los Angeles, California 90095 J. M. Nelson University of California, Berkeley, California 94720 D. B. Nemes Yale University, New Haven, Connecticut 06520 M. Nie Shandong University, Qingdao, Shandong 266237 G. Nigmatkulov National Research Nuclear University MEPhI, Moscow 115409, Russia T. Niida University of Tsukuba, Tsukuba, Ibaraki 305-8571, Japan R. Nishitani University of Tsukuba, Tsukuba, Ibaraki 305-8571, Japan L. V. Nogach NRC ”Kurchatov Institute”, Institute of High Energy Physics, Protvino 142281, Russia T. Nonaka University of Tsukuba, Tsukuba, Ibaraki 305-8571, Japan A. S. Nunes Brookhaven National Laboratory, Upton, New York 11973 G. Odyniec Lawrence Berkeley National Laboratory, Berkeley, California 94720 A. Ogawa Brookhaven National Laboratory, Upton, New York 11973 S. Oh Lawrence Berkeley National Laboratory, Berkeley, California 94720 V. A. Okorokov National Research Nuclear University MEPhI, Moscow 115409, Russia B. S. Page Brookhaven National Laboratory, Upton, New York 11973 R. Pak Brookhaven National Laboratory, Upton, New York 11973 A. Pandav National Institute of Science Education and Research, HBNI, Jatni 752050, India A. K. Pandey University of Tsukuba, Tsukuba, Ibaraki 305-8571, Japan Y. Panebratsev Joint Institute for Nuclear Research, Dubna 141 980, Russia P. Parfenov National Research Nuclear University MEPhI, Moscow 115409, Russia B. Pawlik Institute of Nuclear Physics PAN, Cracow 31-342, Poland D. Pawlowska Warsaw University of Technology, Warsaw 00-661, Poland H. Pei Central China Normal University, Wuhan, Hubei 430079 C. 
Perkins University of California, Berkeley, California 94720 L. Pinsky University of Houston, Houston, Texas 77204 R. L. Pintér ELTE Eötvös Loránd University, Budapest, Hungary H-1117 J. Pluta Warsaw University of Technology, Warsaw 00-661, Poland B. R. Pokhrel Temple University, Philadelphia, Pennsylvania 19122 G. Ponimatkin Nuclear Physics Institute of the CAS, Rez 250 68, Czech Republic J. Porter Lawrence Berkeley National Laboratory, Berkeley, California 94720 M. Posik Temple University, Philadelphia, Pennsylvania 19122 V. Prozorova Czech Technical University in Prague, FNSPE, Prague 115 19, Czech Republic N. K. Pruthi Panjab University, Chandigarh 160014, India M. Przybycien AGH University of Science and Technology, FPACS, Cracow 30-059, Poland J. Putschke Wayne State University, Detroit, Michigan 48201 H. Qiu Institute of Modern Physics, Chinese Academy of Sciences, Lanzhou, Gansu 730000 A. Quintero Temple University, Philadelphia, Pennsylvania 19122 C. Racz University of California, Riverside, California 92521 S. K. Radhakrishnan Kent State University, Kent, Ohio 44242 N. Raha Wayne State University, Detroit, Michigan 48201 R. L. Ray University of Texas, Austin, Texas 78712 R. Reed Lehigh University, Bethlehem, Pennsylvania 18015 H. G. Ritter Lawrence Berkeley National Laboratory, Berkeley, California 94720 M. Robotkova Nuclear Physics Institute of the CAS, Rez 250 68, Czech Republic O. V. Rogachevskiy Joint Institute for Nuclear Research, Dubna 141 980, Russia J. L. Romero University of California, Davis, California 95616 L. Ruan Brookhaven National Laboratory, Upton, New York 11973 J. Rusnak Nuclear Physics Institute of the CAS, Rez 250 68, Czech Republic N. R. Sahoo Shandong University, Qingdao, Shandong 266237 H. Sako University of Tsukuba, Tsukuba, Ibaraki 305-8571, Japan S. Salur Rutgers University, Piscataway, New Jersey 08854 J. Sandweiss Deceased Yale University, New Haven, Connecticut 06520 S. 
Sato University of Tsukuba, Tsukuba, Ibaraki 305-8571, Japan W. B. Schmidke Brookhaven National Laboratory, Upton, New York 11973 N. Schmitz Max-Planck-Institut für Physik, Munich 80805, Germany B. R. Schweid State University of New York, Stony Brook, New York 11794 F. Seck Technische Universität Darmstadt, Darmstadt 64289, Germany J. Seger Creighton University, Omaha, Nebraska 68178 M. Sergeeva University of California, Los Angeles, California 90095 R. Seto University of California, Riverside, California 92521 P. Seyboth Max-Planck-Institut für Physik, Munich 80805, Germany N. Shah Indian Institute Technology, Patna, Bihar 801106, India E. Shahaliev Joint Institute for Nuclear Research, Dubna 141 980, Russia P. V. Shanmuganathan Brookhaven National Laboratory, Upton, New York 11973 M. Shao University of Science and Technology of China, Hefei, Anhui 230026 T. Shao Shanghai Institute of Applied Physics, Chinese Academy of Sciences, Shanghai 201800 A. I. Sheikh Kent State University, Kent, Ohio 44242 D. Shen Shanghai Institute of Applied Physics, Chinese Academy of Sciences, Shanghai 201800 S. S. Shi Central China Normal University, Wuhan, Hubei 430079 Y. Shi Shandong University, Qingdao, Shandong 266237 Q. Y. Shou Fudan University, Shanghai, 200433 E. P. Sichtermann Lawrence Berkeley National Laboratory, Berkeley, California 94720 R. Sikora AGH University of Science and Technology, FPACS, Cracow 30-059, Poland M. Simko Nuclear Physics Institute of the CAS, Rez 250 68, Czech Republic J. Singh Panjab University, Chandigarh 160014, India S. Singha Institute of Modern Physics, Chinese Academy of Sciences, Lanzhou, Gansu 730000 M. J. Skoby Purdue University, West Lafayette, Indiana 47907 N. Smirnov Yale University, New Haven, Connecticut 06520 Y. Söhngen University of Heidelberg, Heidelberg 69120, Germany W. Solyst Indiana University, Bloomington, Indiana 47408 P. Sorensen Brookhaven National Laboratory, Upton, New York 11973 H. M. 
Spinka Deceased Argonne National Laboratory, Argonne, Illinois 60439 B. Srivastava Purdue University, West Lafayette, Indiana 47907 T. D. S. Stanislaus Valparaiso University, Valparaiso, Indiana 46383 M. Stefaniak Warsaw University of Technology, Warsaw 00-661, Poland D. J. Stewart Yale University, New Haven, Connecticut 06520 M. Strikhanov National Research Nuclear University MEPhI, Moscow 115409, Russia B. Stringfellow Purdue University, West Lafayette, Indiana 47907 A. A. P. Suaide Universidade de São Paulo, São Paulo, Brazil 05314-970 M. Sumbera Nuclear Physics Institute of the CAS, Rez 250 68, Czech Republic B. Summa Pennsylvania State University, University Park, Pennsylvania 16802 X. M. Sun Central China Normal University, Wuhan, Hubei 430079 X. Sun University of Illinois at Chicago, Chicago, Illinois 60607 Y. Sun University of Science and Technology of China, Hefei, Anhui 230026 Y. Sun Huzhou University, Huzhou, Zhejiang 313000 B. Surrow Temple University, Philadelphia, Pennsylvania 19122 D. N. Svirida Alikhanov Institute for Theoretical and Experimental Physics NRC ”Kurchatov Institute”, Moscow 117218, Russia Z. W. Sweger University of California, Davis, California 95616 P. Szymanski Warsaw University of Technology, Warsaw 00-661, Poland A. H. Tang Brookhaven National Laboratory, Upton, New York 11973 Z. Tang University of Science and Technology of China, Hefei, Anhui 230026 A. Taranenko National Research Nuclear University MEPhI, Moscow 115409, Russia T. Tarnowsky Michigan State University, East Lansing, Michigan 48824 J. H. Thomas Lawrence Berkeley National Laboratory, Berkeley, California 94720 A. R. Timmins University of Houston, Houston, Texas 77204 D. Tlusty Creighton University, Omaha, Nebraska 68178 T. Todoroki University of Tsukuba, Tsukuba, Ibaraki 305-8571, Japan M. Tokarev Joint Institute for Nuclear Research, Dubna 141 980, Russia C. A. Tomkiel Lehigh University, Bethlehem, Pennsylvania 18015 S. 
Trentalange University of California, Los Angeles, California 90095 R. E. Tribble Texas A&M University, College Station, Texas 77843 P. Tribedy Brookhaven National Laboratory, Upton, New York 11973 S. K. Tripathy ELTE Eötvös Loránd University, Budapest, Hungary H-1117 T. Truhlar Czech Technical University in Prague, FNSPE, Prague 115 19, Czech Republic B. A. Trzeciak Czech Technical University in Prague, FNSPE, Prague 115 19, Czech Republic O. D. Tsai University of California, Los Angeles, California 90095 Z. Tu Brookhaven National Laboratory, Upton, New York 11973 T. Ullrich Brookhaven National Laboratory, Upton, New York 11973 D. G. Underwood Argonne National Laboratory, Argonne, Illinois 60439 I. Upsal Shandong University, Qingdao, Shandong 266237 Brookhaven National Laboratory, Upton, New York 11973 G. Van Buren Brookhaven National Laboratory, Upton, New York 11973 J. Vanek Nuclear Physics Institute of the CAS, Rez 250 68, Czech Republic A. N. Vasiliev NRC ”Kurchatov Institute”, Institute of High Energy Physics, Protvino 142281, Russia I. Vassiliev Frankfurt Institute for Advanced Studies FIAS, Frankfurt 60438, Germany V. Verkest Wayne State University, Detroit, Michigan 48201 F. Videbæk Brookhaven National Laboratory, Upton, New York 11973 S. Vokal Joint Institute for Nuclear Research, Dubna 141 980, Russia S. A. Voloshin Wayne State University, Detroit, Michigan 48201 F. Wang Purdue University, West Lafayette, Indiana 47907 G. Wang University of California, Los Angeles, California 90095 J. S. Wang Huzhou University, Huzhou, Zhejiang 313000 P. Wang University of Science and Technology of China, Hefei, Anhui 230026 Y. Wang Central China Normal University, Wuhan, Hubei 430079 Y. Wang Tsinghua University, Beijing 100084 Z. Wang Shandong University, Qingdao, Shandong 266237 J. C. Webb Brookhaven National Laboratory, Upton, New York 11973 P. C. Weidenkaff University of Heidelberg, Heidelberg 69120, Germany L. 
Wen University of California, Los Angeles, California 90095 G. D. Westfall Michigan State University, East Lansing, Michigan 48824 H. Wieman Lawrence Berkeley National Laboratory, Berkeley, California 94720 S. W. Wissink Indiana University, Bloomington, Indiana 47408 R. Witt United States Naval Academy, Annapolis, Maryland 21402 J. Wu Institute of Modern Physics, Chinese Academy of Sciences, Lanzhou, Gansu 730000 Y. Wu University of California, Riverside, California 92521 B. Xi Shanghai Institute of Applied Physics, Chinese Academy of Sciences, Shanghai 201800 Z. G. Xiao Tsinghua University, Beijing 100084 G. Xie Lawrence Berkeley National Laboratory, Berkeley, California 94720 W. Xie Purdue University, West Lafayette, Indiana 47907 H. Xu Huzhou University, Huzhou, Zhejiang 313000 N. Xu Lawrence Berkeley National Laboratory, Berkeley, California 94720 Q. H. Xu Shandong University, Qingdao, Shandong 266237 Y. Xu Shandong University, Qingdao, Shandong 266237 Z. Xu Brookhaven National Laboratory, Upton, New York 11973 Z. Xu University of California, Los Angeles, California 90095 C. Yang Shandong University, Qingdao, Shandong 266237 Q. Yang Shandong University, Qingdao, Shandong 266237 S. Yang Rice University, Houston, Texas 77251 Y. Yang National Cheng Kung University, Tainan 70101 Z. Yang Central China Normal University, Wuhan, Hubei 430079 Z. Ye Rice University, Houston, Texas 77251 Z. Ye University of Illinois at Chicago, Chicago, Illinois 60607 L. Yi Shandong University, Qingdao, Shandong 266237 K. Yip Brookhaven National Laboratory, Upton, New York 11973 Y. Yu Shandong University, Qingdao, Shandong 266237 H. Zbroszczyk Warsaw University of Technology, Warsaw 00-661, Poland W. Zha University of Science and Technology of China, Hefei, Anhui 230026 C. Zhang State University of New York, Stony Brook, New York 11794 D. Zhang Central China Normal University, Wuhan, Hubei 430079 S. Zhang University of Illinois at Chicago, Chicago, Illinois 60607 S. 
Zhang Fudan University, Shanghai, 200433 X. P. Zhang Tsinghua University, Beijing 100084 Y. Zhang Institute of Modern Physics, Chinese Academy of Sciences, Lanzhou, Gansu 730000 Y. Zhang University of Science and Technology of China, Hefei, Anhui 230026 Y. Zhang Central China Normal University, Wuhan, Hubei 430079 Z. J. Zhang National Cheng Kung University, Tainan 70101 Z. Zhang Brookhaven National Laboratory, Upton, New York 11973 Z. Zhang University of Illinois at Chicago, Chicago, Illinois 60607 J. Zhao Purdue University, West Lafayette, Indiana 47907 C. Zhou Fudan University, Shanghai, 200433 X. Zhu Tsinghua University, Beijing 100084 Z. Zhu Shandong University, Qingdao, Shandong 266237 M. Zurek Lawrence Berkeley National Laboratory, Berkeley, California 94720 M. Zyzak Frankfurt Institute for Advanced Studies FIAS, Frankfurt 60438, Germany ###### Abstract We report a systematic measurement of cumulants, $C_{n}$, for net-proton, proton and antiproton, and correlation functions, $\kappa_{n}$, for proton and antiproton multiplicity distributions up to the fourth order in Au+Au collisions at $\sqrt{s_{\mathrm{NN}}}$ = 7.7, 11.5, 14.5, 19.6, 27, 39, 54.4, 62.4 and 200 GeV. The $C_{n}$ and $\kappa_{n}$ are presented as a function of collision energy, centrality and kinematic acceptance in rapidity, $y$, and transverse momentum, $p_{T}$. The data were taken during the first phase of the Beam Energy Scan (BES) program (2010 – 2017) at the Relativistic Heavy Ion Collider (RHIC) facility. The measurements are carried out at midrapidity ($|y|<$ 0.5) and transverse momentum 0.4 $<$ $p_{\rm T}$ $<$ 2.0 GeV/$c$, using the STAR detector at RHIC. We observe a non-monotonic energy dependence ($\sqrt{s_{{\rm NN}}}$ = 7.7 – 62.4 GeV) of the net-proton $C_{4}$/$C_{2}$ with a significance of 3.1$\sigma$ for the 0-5% central Au+Au collisions. This is consistent with the expectations of critical fluctuations in a QCD-inspired model.
Thermal and transport model calculations show a monotonic variation with $\sqrt{s_{{\rm NN}}}$. For the multiparticle correlation functions, we observe significant negative values for a two-particle correlation function, $\kappa_{2}$, of protons and antiprotons, which are mainly due to the effects of baryon number conservation. Furthermore, it is found that the four-particle correlation function, $\kappa_{4}$, of protons plays a role in determining the energy dependence of proton $C_{4}/C_{1}$ below 19.6 GeV, which cannot be solely understood by the negative values of $\kappa_{2}$ for protons. ###### pacs: 25.75.Gz,12.38.Mh,21.65.Qr,25.75.-q,25.75.Nq ## I Introduction The main goal of the BES program at the RHIC is to study the QCD phase structure Aggarwal _et al._ (2010a); bes . This is expected to lead to the mapping of the phase diagram for strong interactions in the space of temperature ($T$) versus baryon chemical potential ($\mu_{\rm B}$). Both theoretically and experimentally, several advancements have been made towards this goal. Lattice QCD calculations have established that at high temperatures, there occurs a crossover transition from hadronic matter to a deconfined state of quarks and gluons at $\mu_{\rm B}$ = 0 MeV Aoki _et al._ (2006). Experimental data from RHIC and the Large Hadron Collider (LHC) have provided evidence of this matter with quark and gluon degrees of freedom called the Quark-Gluon Plasma (QGP) Arsene _et al._ (2005); Back _et al._ (2005); Adcox _et al._ (2005); Adams _et al._ (2005). The QGP has been found to hadronize into a gas of hadrons, which undergoes chemical freeze-out (inelastic collisions cease) Adamczyk _et al._ (2017) at a temperature close to the lattice QCD-estimated quark-hadron transition temperature at $\mu_{\rm B}$ = 0 MeV Borsanyi _et al._ (2010); Bazavov _et al._ (2019). 
A suite of interesting results from the BES program indicates a change of the equation of state of QCD matter with collision energy, from partonic-interaction-dominated matter at higher collision energies to a hadronic-interaction regime at lower energies. These include the observations of breakdown in the number of constituent-quark scaling of the elliptic flow at lower $\sqrt{s_{{\rm NN}}}$ Adamczyk _et al._ (2013), non-monotonic variation of the slope of the directed flow for protons and net-protons at midrapidity as a function of $\sqrt{s_{{\rm NN}}}$ Adamczyk _et al._ (2014a), nuclear modification factor changing values from smaller than unity to larger than unity at high $p_{\mathrm{T}}$ as we go to lower $\sqrt{s_{{\rm NN}}}$ Adamczyk _et al._ (2018a), and finite to vanishing values of the three-particle correlations with respect to the event plane Adamczyk _et al._ (2014b) as we go to lower $\sqrt{s_{{\rm NN}}}$. One of the most important studies of the QCD phase structure relates to the first-order phase boundary and the expected existence of the critical point (CP) Fukushima and Hatsuda (2011); Stephanov _et al._ (1999); Stephanov (2004); Fodor and Katz (2004); Gavai and Gupta (2008, 2005); Gupta (2009) at finite baryon chemical potential. This is the end point of a first-order phase boundary between quark-gluon and hadronic phases Ejiri (2008); Bowman and Kapusta (2009). Experimental confirmation of the CP would be a landmark in the exploration of the QCD phase structure. Previous studies of higher-order cumulants of net-proton multiplicity distributions suggest that the possible CP region is unlikely to be below $\mu_{\rm B}$ = 200 MeV Aggarwal _et al._ (2010b), which is consistent with the theoretical findings Fodor and Katz (2004); Gavai and Gupta (2008, 2005); Gupta (2009).
The versatility of the RHIC machine has permitted the colliding energies of ions to be varied below the injection energy of $\sqrt{s_{\mathrm{NN}}}$ = 19.6 GeV Abelev _et al._ (2010), and thereby the RHIC BES program provides the possibility to scan the QCD phase diagram up to $\mu_{\rm B}$ = 420 MeV with the collider mode, and $\mu_{\rm B}$ = 720 MeV with the fixed-target mode bes ; Adam _et al._ (2020a). This, in turn, opens the possibility to find the experimental signatures of a first-order phase transition and the CP Luo and Xu (2017); Bzdak _et al._ (2020). Higher-order cumulants of the distributions of conserved charge, such as net-baryon ($B$), net-charge ($Q$), and net-strangeness ($S$) numbers, are sensitive to the QCD phase transition and CP Asakawa _et al._ (2000); Hatta and Stephanov (2003); Koch _et al._ (2005); Asakawa _et al._ (2009); Gupta _et al._ (2011); Ding _et al._ (2015). The signatures of conserved-charge fluctuations near the QCD critical point have been extensively studied by various model calculations, such as the DSE method Shi _et al._ (2014); Gao and Liu (2016); Fischer (2019), PQM Friman _et al._ (2011), FRG Fu _et al._ (2020), NJL Lu _et al._ (2015); Chen _et al._ (2016); Fan _et al._ (2019), PNJL Fu _et al._ (2008); Li _et al._ (2019) and other effective models Herold _et al._ (2016); Chen _et al._ (2015); Vovchenko _et al._ (2015); Jiang _et al._ (2016); Mukherjee _et al._ (2017); Zhang _et al._ (2017); Schaefer and Wagner (2012). However, these model calculations are based on the assumption of thermal equilibrium with a static and infinite medium. In heavy-ion collisions, finite-size and time effects will put constraints on the significance of the signals Palhares _et al._ (2010); Pan _et al._ (2017). A theoretical calculation suggests the non-equilibrium correlation length $\xi$ $\approx$ 2-3 fm for heavy-ion collisions Berdnikov and Rajagopal (2000).
Dynamical modeling of heavy-ion collisions with the physics of a critical point and non-equilibrium effects is in progress Stephanov and Yin (2018); Rajagopal _et al._ (2020); An _et al._ (2020). The signatures of a phase transition or a CP are detectable if they survive the evolution of the system Stephanov (2010). Due to a stronger dependence on the correlation length ($\xi$) Stephanov (2009); Athanasiou _et al._ (2010), it is proposed to study the higher moments – skewness (${\it{S}}$ = $\left\langle(\delta N)^{3}\right\rangle/\sigma^{3}$) and kurtosis ($\kappa$ = $\left\langle(\delta N)^{4}\right\rangle/\sigma^{4}$ – 3) with $\delta N$ = $N$ – $\langle N\rangle$, or cumulants $C_{n}$ (defined in Sec. II.5) of distributions of conserved quantities. Both the magnitude and the sign of the moments or $C_{n}$ Asakawa _et al._ (2009); Stephanov (2011), which quantify the shape of the multiplicity distributions, are important for understanding the phase transition and CP effects. The aim is to search for signatures of the CP over a broad range of $\mu_{B}$ in the QCD phase diagram Aggarwal _et al._ (2010b). Furthermore, the products of the moments or ratios of $C_{n}$ can be related to susceptibilities associated with the conserved numbers. The product ($\kappa$$\sigma^{2}$), or equivalently, the ratio ($C_{4}$/$C_{2}$) of the net-baryon number distribution is related to the ratio of fourth-order ($\chi^{\mathrm{B}}_{4}$) to second-order ($\chi^{\mathrm{B}}_{2}$) baryon number susceptibilities Ejiri _et al._ (2006); Cheng _et al._ (2009); Stokic _et al._ (2009); Gupta _et al._ (2011); Gavai and Gupta (2011). The ratio, $\chi^{\mathrm{B}}_{4}$/$\chi^{\mathrm{B}}_{2}$, is expected to deviate from unity near the CP. It has different values for the hadronic and partonic phases Gavai and Gupta (2011). 
Similarly, the products $\it{S}$$\sigma$ ($C_{3}$/$C_{2}$) and $\sigma^{2}$/$\langle N\rangle$ ($C_{2}$/$C_{1}$) are related to $\chi^{\mathrm{B}}_{3}$/$\chi^{\mathrm{B}}_{2}$ and $\chi^{\mathrm{B}}_{2}$/$\chi^{\mathrm{B}}_{1}$, respectively. Experimentally, it is not possible to measure the net-baryon distributions; however, theoretical calculations have shown that net-proton multiplicity ($N_{p}-N_{\bar{p}}$ = $\Delta N_{p}$) fluctuations reflect the singularity of the charge and baryon number susceptibility, as expected at the CP Hatta and Stephanov (2003). Refs. Kitazawa and Asakawa (2012); Bzdak and Koch (2012) discuss the effect of using net-proton as the approximation for the net-baryon distributions and the acceptance dependence for the moments of the protons and antiprotons. In an early publication from the STAR experiment on the higher moments of net-proton distributions, the selected kinematics of (anti)protons are $|y|<$ 0.5 and 0.4 $<$ $p_{\rm T}$ $<$ 0.8 GeV/$c$, where only the Time Projection Chamber (TPC) Ackermann _et al._ (2003); Anderson _et al._ (2003) was used for (anti)proton identification. Interesting hints of a non-monotonic variation of $\kappa$$\sigma^{2}$ (or $C_{4}$/$C_{2}$) were observed Adamczyk _et al._ (2014c). In this paper, we report measurements of the energy dependence of $C_{n}$ up to fourth order of the net-proton multiplicity distributions from Au+Au collisions with a larger acceptance of 0.4 $<$ $p_{\rm T}$ $<$ 2.0 GeV/$c$ Adam _et al._ (2020b). This is achieved by adding the information from STAR’s Time-of-Flight (TOF) detector Llope (2012). We present results from Au+Au collisions at 9 different collision energies, $\sqrt{s_{{\rm NN}}}$ = 7.7, 11.5, 14.5, 19.6, 27, 39, 54.4, 62.4 and 200 GeV. The paper is organized as follows.
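The moment and cumulant relations quoted above can be checked numerically. Below is an illustrative sketch (not the analysis code used in this paper) using a Skellam toy model, i.e., a net number formed as the difference of two independent Poisson yields, for which all odd cumulants equal $\mu_p-\mu_{\bar{p}}$ and all even cumulants equal $\mu_p+\mu_{\bar{p}}$, so the baseline $C_4/C_2$ is unity:

```python
import numpy as np

def cumulants(sample):
    """First four cumulants of a multiplicity sample:
    C1 = <N>, C2 = <(dN)^2>, C3 = <(dN)^3>, C4 = <(dN)^4> - 3*C2^2,
    with dN = N - <N>.  These give S = C3/C2^(3/2), kappa = C4/C2^2,
    S*sigma = C3/C2 and kappa*sigma^2 = C4/C2, as quoted in the text."""
    d = np.asarray(sample, dtype=float)
    mean = d.mean()
    d = d - mean
    c2 = np.mean(d**2)
    c3 = np.mean(d**3)
    c4 = np.mean(d**4) - 3.0 * c2**2
    return mean, c2, c3, c4

# Toy event sample: net-proton number as Poisson(8) - Poisson(2),
# so C1 = C3 = 6 and C2 = C4 = 10 in the large-sample limit.
rng = np.random.default_rng(0)
net_p = rng.poisson(8.0, 1_000_000) - rng.poisson(2.0, 1_000_000)
c1, c2, c3, c4 = cumulants(net_p)
print(c4 / c2)  # kappa*sigma^2, close to the Skellam baseline of 1
```

For critical fluctuations, deviations of this ratio from the Skellam-like baseline are exactly what the measurements in this paper quantify.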
In the next section, we discuss the data sets used, event selection criteria, centrality selection procedure, proton identification method, measurement of raw cumulants of the net-proton distributions, corrections for the effects of centrality bin width (CBW) and efficiency, and estimation of statistical and systematic uncertainties on the measurements. In section III, we present the results of cumulants and their ratios for net protons, protons and antiprotons in Au+Au collisions as a function of collision energy ($\sqrt{s_{\mathrm{NN}}}$), centrality, transverse momentum ($p_{T}$) acceptance and rapidity acceptance ($\Delta y$). In addition, we present the extracted various order integrated correlation functions of protons and antiprotons from the measured cumulants. In this section, we also discuss the results from the HRG model and transport model calculations. In section IV, we present the summary. Detailed discussions on the efficiency correction, and the estimation of the statistical uncertainties are presented in Appendices A and B, respectively. ## II Experiment and Data Analysis ### II.1 Data set and event selection The data presented in the paper were obtained using the Time Projection Chamber (TPC) Ackermann _et al._ (2003) and the Time-of-Flight detectors (TOF) Llope (2012) of the Solenoidal Tracker at RHIC (STAR) Ackermann _et al._ (2003). The event-by-event proton ($N_{p}$) and antiproton ($N_{\bar{p}}$) multiplicities are measured for Au+Au minimum-bias events at $\sqrt{s_{\mathrm{NN}}}$ = 7.7, 11.5, 14.5, 19.6, 27, 39, 54.4, 62.4 and 200 GeV for collisions occurring within a certain $Z$-position ($V_{z}$) range of the collision vertex (given in Table 1) from the TPC center along the beam line. 
These data sets were taken with a minimum-bias trigger, which was defined using a coincidence of hits in the zero degree calorimeters (ZDCs) Adler _et al._ (2001), vertex position detectors (VPDs) Llope _et al._ (2004), and/or beam-beam counters (BBCs) Bieser _et al._ (2003). The range of $|V_{z}|$ is chosen to optimize the event statistics and uniformity of the response of the detectors used in the analysis.

Table 1: Total number of events for Au+Au collisions analysed for various collision energies ($\sqrt{s_{\rm NN}}$) obtained after all of the event selection criteria are applied. The $Z$-vertex ($V_{z}$) range, the chemical freeze-out temperature ($T_{\mathrm{ch}}$) and baryon chemical potential ($\mu_{\mathrm{B}}$) for 0-5% Au+Au collisions Adamczyk _et al._ (2017) are also given.

$\sqrt{s_{NN}}$ (GeV) | No. of events (million) | $|V_{z}|$ (cm) | $T_{\mathrm{ch}}$ (MeV) | $\mu_{\mathrm{B}}$ (MeV)
---|---|---|---|---
200 | 238 | 30 | 164.3 | 28
62.4 | 47 | 30 | 160.3 | 70
54.4 | 550 | 30 | 160.0 | 83
39 | 86 | 30 | 156.4 | 103
27 | 30 | 30 | 155.0 | 144
19.6 | 15 | 30 | 153.9 | 188
14.5 | 20 | 30 | 151.6 | 264
11.5 | 6.6 | 30 | 149.4 | 287
7.7 | 3 | 40 | 144.3 | 398

Figure 1: (Color online) Top left panel: The mass squared ($m^{2}$) versus rigidity for charged tracks in Au+Au collisions at $\sqrt{s_{\mathrm{NN}}}$ = 39 GeV. The rigidity is defined as momentum/z, where z is the dimensionless ratio of particle charge to the electron charge magnitude. Bottom left panel: The specific ionization energy loss ($dE/dx$) as a function of rigidity measured in the TPC for the same data set. Also shown as solid lines are the theoretical expectations for each particle species. Right panels: Rapidity ($y$) versus transverse momentum ($p_{\rm T}$). The color reflects the relative yields of protons (top) and antiprotons (bottom) using the TPC PID for Au+Au collisions at $\sqrt{s_{\mathrm{NN}}}$ = 39 GeV. The dashed boxes represent the acceptance used in the current analysis.
Two blobs at large rapidities are contaminated by particles other than (anti)protons. This contamination is rejected in later steps of the analysis.

Table 2: Proton and antiproton track selection criteria at all energies. The $\rm N_{Fit}$ and $\rm N_{HitPoss}$ represent the number of hits used in track fitting and the maximum number of possible hits in the TPC.

$|y|$ | $p_{T}$ (GeV/$c$) | DCA (cm) | $\rm N_{Fit}$ | $\rm N_{Fit}/N_{HitPoss}$ | No. of $dE/dx$ points
---|---|---|---|---|---
$<$ 0.5 | 0.4-2.0 | $<$ 1 | $>$ 20 | $>$ 0.52 | $>$ 5

In order to reject background events which involve interactions with the beam pipe, the transverse radius of the event vertex is required to be within 2 cm (1 cm for 14.5 GeV) of the center of STAR Adamczyk _et al._ (2017). We use two methods to determine the $V_{z}$: one from a fast scintillator-based vertex position detector, and the other from the most probable point of common origin of the tracks, which are reconstructed from the hits measured in the TPC. To remove pile-up events at energies above 27 GeV, we require the $V_{z}$ difference between the two methods to be within 3 cm. Further, a detailed study of the TPC tracks as a function of the TOF matched tracks with valid TOF information is carried out and outlier events are rejected. To ensure the quality of the data, a run-by-run study of several variables, such as the total number of uncorrected charged particles measured in the TPC, average transverse momentum ($\langle p_{\rm T}\rangle$) in an event, mean pseudorapidity ($\eta$) and azimuthal angle ($\phi$) in an event, is carried out. Outlier runs beyond $\pm$ 3$\sigma$, where $\sigma$ corresponds to the standard deviation of run-by-run distributions of a variable, are not included in the current analysis.
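The $\pm$ 3$\sigma$ run-by-run rejection described above can be sketched as follows; the run identifiers and the QA variable are placeholders, not actual run numbers from this data set:

```python
import numpy as np

def outlier_runs(run_qa_means, nsigma=3.0):
    """Flag runs whose run-averaged QA variable (e.g. <pT> per event)
    deviates by more than nsigma standard deviations of the run-by-run
    distribution of that variable."""
    runs = list(run_qa_means)
    vals = np.array([run_qa_means[r] for r in runs], dtype=float)
    mu, sd = vals.mean(), vals.std()
    return [r for r, v in zip(runs, vals) if abs(v - mu) > nsigma * sd]

# 20 well-behaved runs and one run with a badly calibrated <pT>:
qa = {f"run{i}": 0.55 for i in range(20)}
qa["run_bad"] = 5.0
print(outlier_runs(qa))  # -> ['run_bad']
```

In practice this screening is repeated for each QA variable, and a run is dropped if it fails for any of them.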
In addition, the distance of closest approach (DCA) of the charged-particle track from the primary vertex, especially the signed transverse DCA (DCAxy), is studied to remove bad events. (The signed transverse DCA refers to the DCA with respect to the primary vertex in the transverse plane; its sign is the sign of the vector product of the DCA vector and the track momentum.) These classes of bad events are primarily related to unstable beam conditions during the data taking and improper space-charge calibration of the TPC. Table 1 gives the total number of minimum-bias events analyzed for each $\sqrt{s_{\mathrm{NN}}}$ and the corresponding chemical freeze-out temperature ($T_{\mathrm{ch}}$) and baryon chemical potential ($\mu_{\mathrm{B}}$) values for central 0-5% Au+Au collisions. The beam energy values in the BES program are chosen so that the difference in $\mu_{B}$ values is not larger than 100 MeV between adjacent collision energies. Figure 2: (Color online) The uncorrected reference charged particle multiplicity ($N_{\rm{ch}}$) distributions within pseudorapidity $|\eta|<$ 1, excluding protons and antiprotons, in Au+Au collisions at $\sqrt{s_{\mathrm{NN}}}$ = 7.7 - 200 GeV. These distributions are used for centrality determination. The shaded region at each $\sqrt{s_{\mathrm{NN}}}$ corresponds to 0-5% central collisions. The dashed line corresponds to Monte Carlo Glauber model simulations Miller _et al._ (2007). ### II.2 Track selection, particle identification and acceptance The proton and antiproton track selection criteria for all the $\sqrt{s_{\mathrm{NN}}}$ are presented in Table 2. In order to suppress contamination by tracks from secondary vertices, a requirement of less than 1 cm is placed on the DCA between each track and the event vertex. Tracks are required to have at least 20 points used in track fitting out of a maximum of 45 possible hits in the TPC.
To prevent multiple counting of split tracks, more than 52% of the maximum-possible fit points are required. A condition is also placed on the number of points ($>$ 5) used to extract the energy loss ($dE/dx$) values, which is used to identify the (anti)protons from the charged particles detected in the TPC. The results presented here are within kinematics $|y|<$0.5 and 0.4 $<p_{\rm T}<$ 2.0 GeV/$c$. Particle identification (PID) is carried out using the TPC and TOF by measuring the $dE/dx$ and time of flight, respectively. Figure 1 (left top panel) shows a typical plot of the square of the mass ($m^{2}$) associated with a track measured in the TPC as a function of rigidity (defined as momentum/z, where z is the dimensionless ratio of particle charge to the electron charge magnitude) for Au+Au collisions at $\sqrt{s_{\mathrm{NN}}}$ = 39 GeV. The $m^{2}$ is given by: $m^{2}=p^{2}\left(\frac{c^{2}t^{2}}{L^{2}}-1\right),$ (1) where $p$, $t$, $L$, and $c$ are the momentum, time-of-flight of the particle, path length, and speed of light, respectively. Protons and antiprotons can be identified by selecting charged tracks for which 0.6 $<$ $m^{2}$ $<$ 1.2 $\mathrm{GeV}^{2}/c^{4}$. Figure 1 (left bottom panel) shows the $dE/dx$ of measured charged particles plotted as a function of the rigidity. The measured values of $dE/dx$ are compared to the expected theoretical values Bichsel (2006) (shown as solid lines in Fig. 1) to select the proton and antiproton tracks. 
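Equation (1) can be verified with a quick numerical check. The path length of 220 cm and the 1 GeV/$c$ kinematics below are illustrative values, not detector specifics taken from this analysis:

```python
import math

# Mass squared from TOF, following Eq. (1): m^2 = p^2 (c^2 t^2 / L^2 - 1).
# Units: p in GeV/c, t in ns, L in cm; c = 29.9792458 cm/ns.
C_CM_PER_NS = 29.9792458

def mass_squared(p, t, L):
    return p * p * ((C_CM_PER_NS * t / L) ** 2 - 1.0)

def in_proton_m2_window(m2):
    return 0.6 < m2 < 1.2  # GeV^2/c^4 window quoted in the text

# Round trip for a 1 GeV/c proton (m = 0.938 GeV/c^2) over a 220 cm path:
# beta = p / sqrt(p^2 + m^2) and t = L / (beta c).
p, m, L = 1.0, 0.938272, 220.0
beta = p / math.hypot(p, m)
t = L / (beta * C_CM_PER_NS)
m2 = mass_squared(p, t, L)
print(m2)  # recovers m^2 ~ 0.88 GeV^2/c^4, inside the proton window
```

The round trip shows why the $m^{2}$ window of 0.6 to 1.2 $\mathrm{GeV}^{2}/c^{4}$ cleanly brackets the true proton mass squared of about 0.88 $\mathrm{GeV}^{2}/c^{4}$.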
A quantity called $N_{\sigma,p}$ for charged tracks in the TPC is defined as: $N_{\sigma,p}=(1/\sigma_{R})\ln\left(\frac{\langle dE/dx\rangle}{\langle dE/dx\rangle_{p}^{\rm th}}\right),$ (2) where $\langle dE/dx\rangle$ is the truncated mean value of the track energy loss measured in the TPC, $\langle dE/dx\rangle_{p}^{\rm th}$ is the corresponding theoretical value for the proton (or antiproton) in the STAR TPC Bichsel (2006) and $\sigma_{R}$ is the $dE/dx$ resolution, which is momentum-dependent and of the order of 7.5% for the momentum range of this analysis. Assuming that the $N_{\sigma,p}$ distribution in a given momentum range is Gaussian, it should peak at zero for proton tracks, and the values represent the deviation from the theoretical values for proton tracks in terms of standard deviations ($\sigma_{R}$). Momentum-dependent selection criteria are used for TPC tracks to select protons or antiprotons. For 0.4 $<$ $p_{\rm T}$ $<$ 0.8 GeV/$c$ and momentum ($p$) less than 1 GeV/$c$, $|N_{\sigma,p}|<$ 2.0 is chosen, and for 0.8 $<$ $p_{\rm T}$ $<$ 2.0 GeV/$c$ and momentum ($p$) less than 3 GeV/$c$, in addition to $|N_{\sigma,p}|<$ 2.0, the track is required to have 0.6 $<$ $m^{2}$ $<$ 1.2 $\mathrm{GeV}^{2}/c^{4}$ from the TOF. The purity is estimated by examining the $N_{\sigma,p}$ distributions from the TPC in various $p_{\mathrm{T}}$ ranges (within 0.4 to 0.8 GeV/$c$) for contamination from other hadrons within the PID selection criteria. For the higher $p_{\mathrm{T}}$ range, the $m^{2}$ distributions from the TOF are studied after applying the $N_{\sigma,p}$ criteria, and the contamination from other hadrons within the PID selection criteria is estimated. The purity of the proton and antiproton samples is better than 97% for all the $p_{\mathrm{T}}$ ranges and $\sqrt{s_{{\rm NN}}}$ studied.
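The momentum-dependent selection logic described above, combining Eq. (2) with the TOF $m^{2}$ window, can be sketched compactly; the function names are illustrative, not part of the STAR software:

```python
import math

def n_sigma_p(dedx, dedx_th, sigma_r=0.075):
    """Eq. (2): N_sigma,p = (1/sigma_R) * ln(<dE/dx> / <dE/dx>_p^th).
    sigma_r ~ 7.5% is the dE/dx resolution quoted in the text."""
    return math.log(dedx / dedx_th) / sigma_r

def select_proton(pt, p, nsig, m2=None):
    """Momentum-dependent PID cuts from the text: TPC-only at low pT,
    TPC plus a TOF m^2 window at high pT."""
    if 0.4 < pt < 0.8 and p < 1.0:
        return abs(nsig) < 2.0
    if 0.8 < pt < 2.0 and p < 3.0:
        return abs(nsig) < 2.0 and m2 is not None and 0.6 < m2 < 1.2
    return False

# Low-pT track: TPC alone decides; high-pT track: TOF m^2 must also pass.
print(select_proton(0.5, 0.6, 1.0))            # True (TPC only)
print(select_proton(1.0, 1.2, -1.5, m2=0.3))   # False (m^2 outside window)
```

A track with $N_{\sigma,p}$ near zero is close to the theoretical proton $dE/dx$; tracks without valid TOF information simply fail the high-$p_{\rm T}$ branch.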
Figure 1 (right panels) shows the $p_{\rm T}$ versus $y$ for protons and antiprotons selected by the TPC with $|N_{\sigma,p}|<$ 2.0 in Au+Au collisions at $\sqrt{s_{\mathrm{NN}}}$ = 39 GeV. The acceptance is uniform in $y$-$p_{\rm T}$ and is the same for other $\sqrt{s_{\mathrm{NN}}}$ studied here. This is a major advantage of collider-based experiments over fixed-target experiments. The boxes show the acceptance criteria used in this analysis. The addition of the TOF extends the PID capabilities to higher $p_{\rm T}$, thereby allowing for the detection of $\sim$ 80% of the total protons per unit rapidity (or antiprotons per unit rapidity) produced in the collisions at midrapidity. This is a significant improvement compared to the previous analysis reported in Ref. Adamczyk _et al._ (2014c). The uniform and large acceptance at midrapidity in $y$, $p_{\rm T}$ and $\phi$ allows STAR to measure and compare the cumulants in Au+Au collisions at $\sqrt{s_{\mathrm{NN}}}$ = 7.7 to 200 GeV. ### II.3 Centrality selection Centrality selection plays a crucial role in the fluctuation analysis. There are two effects related to the centrality selection which need to be addressed. These are (a) the self-correlation Luo _et al._ (2013); Chatterjee _et al._ (2020) and (b) centrality resolution/fluctuations effects Luo _et al._ (2013); Chatterjee _et al._ (2020); Zhou and Jia (2018); Sugiura _et al._ (2019). One of the main self-correlation effects arises when particles used for the fluctuation analysis are also used for the centrality definition. This can be significantly reduced by removing the particles used in the fluctuation analysis from the centrality definition. Hence, we exclude protons and antiprotons from charged particles for the centrality selection. The centrality resolution effect arises due to the fact that the number of participant nucleons and particle multiplicities fluctuate even if the impact parameter is fixed. 
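The self-correlation-free centrality assignment, i.e., a lookup from the proton-excluded reference multiplicity $N_{\rm{ch}}$ to a centrality class, can be sketched as follows. The 200 GeV thresholds below are read from Table 3 assuming the tabulated values are lower bin edges (an assumption about the table's convention):

```python
import bisect

# Lower Nch edges for Au+Au at 200 GeV (most peripheral first), assumed
# from Table 3 of the text; 70-80% up to 0-5%.
EDGES_200GEV = [16, 34, 67, 120, 196, 301, 440, 618, 725]
LABELS = ["70-80%", "60-70%", "50-60%", "40-50%", "30-40%",
          "20-30%", "10-20%", "5-10%", "0-5%"]

def centrality(nch):
    """Assign a centrality class from the proton-excluded reference
    multiplicity Nch (|eta| < 1), avoiding self-correlation with the
    net-proton observable."""
    i = bisect.bisect_right(EDGES_200GEV, nch) - 1
    return LABELS[i] if i >= 0 else "80-100%"

print(centrality(750))  # -> 0-5%
```

Because protons and antiprotons are excluded from $N_{\rm{ch}}$, the fluctuation observable and the centrality estimator share no particles.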
Through a model simulation it has been shown that the larger the $\eta$ acceptance used for centrality selection, the closer are the values of the cumulants to the actual values Luo _et al._ (2013). This is because the centrality resolution is improved by increasing the number of particles for the centrality definition with wider acceptance. Therefore, to suppress the effect of centrality resolution, one should use the maximum available acceptance of charged particles for centrality selection. In addition, it may be mentioned that the choice of centrality definition also affects the way volume fluctuations (discussed later) contribute to the measurements. These are the driving considerations for the centrality selection for net- proton studies presented in this paper and are discussed below. The basic strategy is to maximize the acceptance window for the centrality determination as allowed by the detectors, and to not use protons and antiprotons for the centrality selection. In addition, the centrality definition method given below is determined after several optimization studies using data and models. These studies were carried out by varying the acceptances in $\eta$ and charged particle types in order to understand the effect of the choice of centrality determination method on the analysis Chatterjee _et al._ (2020). Table 3: The uncorrected number of charged particles other than protons and antiprotons ($N_{\rm{ch}}$) within the pseudorapidity $|\eta|<$ 1.0 used for the centrality selection for various collision centralities expressed in % centrality in Au+Au collisions at $\sqrt{s_{\mathrm{NN}}}$ = 7.7 – 200 GeV. 
% centrality | 200 | 62.4 | 54.4 | 39 | 27 | 19.6 | 14.5 | 11.5 | 7.7
---|---|---|---|---|---|---|---|---|---
0-5 | 725 | 571 | 621 | 522 | 490 | 448 | 393 | 343 | 270
5-10 | 618 | 482 | 516 | 439 | 412 | 376 | 330 | 287 | 225
10-20 | 440 | 338 | 354 | 308 | 289 | 263 | 231 | 199 | 155
20-30 | 301 | 230 | 237 | 209 | 196 | 178 | 157 | 134 | 105
30-40 | 196 | 149 | 151 | 136 | 127 | 116 | 103 | 87 | 68
40-50 | 120 | 91 | 90 | 83 | 78 | 71 | 63 | 53 | 41
50-60 | 67 | 51 | 50 | 47 | 44 | 40 | 36 | 30 | 23
60-70 | 34 | 26 | 24 | 24 | 22 | 20 | 19 | 15 | 11
70-80 | 16 | 12 | 10 | 11 | 10 | 9 | 13 | 7 | 5

Table 4: The average number of participant nucleons ($\langle N_{\rm{part}}\rangle$) for various collision centralities in Au+Au collisions at $\sqrt{s_{\mathrm{NN}}}$ = 7.7 – 200 GeV from a Monte Carlo Glauber model. The numbers in parentheses are systematic uncertainties.

% centrality | 200 | 62.4 | 54.4 | 39 | 27 | 19.6 | 14.5 | 11.5 | 7.7
---|---|---|---|---|---|---|---|---|---
0-5 | 351 (2) | 347 (3) | 346 (2) | 342 (2) | 343 (2) | 338 (2) | 340 (2) | 338 (2) | 337 (2)
5-10 | 299 (4) | 294 (4) | 292 (6) | 294 (6) | 299 (6) | 289 (6) | 289 (6) | 291 (6) | 290 (6)
10-20 | 234 (5) | 230 (5) | 228 (8) | 230 (9) | 234 (9) | 225 (9) | 225 (8) | 226 (8) | 226 (8)
20-30 | 168 (5) | 164 (5) | 161 (10) | 162 (10) | 166 (11) | 158 (10) | 159 (9) | 160 (9) | 160 (10)
30-40 | 117 (5) | 114 (5) | 111 (11) | 111 (11) | 114 (11) | 108 (11) | 109 (11) | 110 (11) | 110 (11)
40-50 | 78 (5) | 76 (5) | 73 (10) | 74 (10) | 75 (10) | 71 (10) | 72 (10) | 73 (10) | 72 (10)
50-60 | 49 (5) | 48 (5) | 45 (9) | 46 (9) | 47 (9) | 44 (9) | 45 (9) | 45 (9) | 45 (9)
60-70 | 29 (4) | 28 (4) | 26 (7) | 26 (7) | 27 (8) | 26 (7) | 26 (7) | 26 (7) | 26 (7)
70-80 | 16 (3) | 15 (2) | 13 (5) | 14 (5) | 14 (6) | 14 (5) | 14 (6) | 14 (6) | 14 (4)

Figure 3: (Color online) Net-proton multiplicity ($\Delta N_{p}$) distributions in
Au+Au collisions at various $\sqrt{s_{\mathrm{NN}}}$ for 0-5%, 30-40% and 70-80% collision centralities at midrapidity. The statistical errors are small and within the symbol size. The distributions are corrected neither for the finite-centrality-width effect nor for the reconstruction efficiencies of protons and antiprotons. In order to suppress the self-correlation, centrality resolution and volume fluctuation effects with the available STAR detectors, a new centrality measure is defined, which differs from those used in other analyses reported by STAR Adamczyk _et al._ (2017). The centrality is determined from the uncorrected charged particle multiplicity within pseudorapidity $|\eta|<$ 1 ($N_{\mathrm{ch}}$) after excluding the protons and antiprotons. Strict particle identification criteria are used to remove the proton and antiproton contributions: charged tracks with $N_{\sigma,p}<-3$ are used, and for tracks with TOF information an additional criterion, $m^{2}<$ 0.4 GeV${}^{2}/c^{4}$, is applied. The resultant distribution of charged particles is corrected for luminosity and $V_{z}$ dependence at each $\sqrt{s_{\mathrm{NN}}}$. The corrected charged particle distribution is then fitted with a Monte Carlo Glauber model Abelev _et al._ (2010); Miller _et al._ (2007) to define the centrality classes in the experiment (the percentage cross section and the associated cuts on the charged-particle multiplicity). In the fitting process, a multiplicity-dependent efficiency has been applied Abelev _et al._ (2010). Figure 2 shows the reference charged particle multiplicity distributions used for centrality determination, after excluding protons and antiprotons, for all of the $\sqrt{s_{{\rm NN}}}$ studied here. The lower boundaries of each centrality class based on $N_{\mathrm{ch}}$ are given in Table 3.
Table 4 gives the average number of participant nucleons ($\langle N_{\rm{part}}\rangle$) for various collision centralities for $\sqrt{s_{\mathrm{NN}}}$ = 7.7 - 200 GeV obtained from a Monte Carlo Glauber model simulation.

### II.4 Uncorrected net-proton multiplicity distributions

Figure 3 shows the event-by-event net-proton multiplicity ($\Delta N_{p}$) distributions from Au+Au collisions at $\sqrt{s_{\mathrm{NN}}}$ = 7.7 – 200 GeV for 0-5%, 30-40% and 70-80% collision centralities. The $\Delta N_{p}$ distribution is obtained by taking the event-by-event difference between the numbers of protons and antiprotons within the $y$-$p_{\rm T}$ acceptance for a given collision centrality and $\sqrt{s_{\mathrm{NN}}}$. The distributions presented in Fig. 3 are not corrected for the efficiency and acceptance effects. In general, the shape of the $\Delta N_{p}$ distributions is broader, more symmetric, and closer to Gaussian for central collisions than for peripheral collisions. The shape of the distributions also changes with $\sqrt{s_{\mathrm{NN}}}$. Cumulants ($C_{n}$) up to the fourth order are obtained from these distributions for each collision centrality and $\sqrt{s_{\mathrm{NN}}}$.

### II.5 Definition of cumulants and integrated correlation functions

In this subsection, we give the definitions of the cumulants used in this paper. Let $N$ represent any entry in the data sample; its deviation from the mean value ($\langle N\rangle$, referred to as the $1^{st}$ moment) is then given by $\delta N=N-\langle N\rangle$.
Any $r^{th}$-order central moment is defined as: $\mu_{r}=\langle(\delta N)^{r}\rangle.$ (3) The cumulants of a given data sample can be written in terms of the moments as: $C_{1}=\langle N\rangle,$ (4) $C_{2}=\langle(\delta N)^{2}\rangle=\mu_{2},$ (5) $C_{3}=\langle(\delta N)^{3}\rangle=\mu_{3},$ (6) $C_{4}=\langle(\delta N)^{4}\rangle-3\langle(\delta N)^{2}\rangle^{2}=\mu_{4}-3\mu_{2}^{2},$ (7) and, in general, $C_{n}=\mu_{n}-\sum_{m=2}^{n-2}\binom{n-1}{m-1}C_{m}\,\mu_{n-m}\quad(n>3).$ (10) The relations between cumulants and various moments are given as: $M=C_{1},~{}\sigma^{2}=C_{2},~{}S=\frac{C_{3}}{(C_{2})^{3/2}},~{}\kappa=\frac{C_{4}}{(C_{2})^{2}},$ (11) where $M$, $\sigma^{2}$, $S$ and $\kappa$ are the mean, variance, skewness and kurtosis, respectively. The ratios $\sigma^{2}/M$, $S\sigma$ and $\kappa\sigma^{2}$ can be expressed in terms of cumulant ratios as: $\sigma^{2}/M=\frac{C_{2}}{C_{1}},~{}S\sigma=\frac{C_{3}}{C_{2}},~{}\kappa\sigma^{2}=\frac{C_{4}}{C_{2}}.$ (12) With the above definitions, we can calculate the various order cumulants (moments) and cumulant ratios (moment products) from the measured event-by-event net-proton, proton and antiproton distributions for each centrality at a given $\sqrt{s_{\mathrm{NN}}}$. For two independent variables $X$ and $Y$, the cumulants of the probability distribution of their sum ($X+Y$) are simply the sums of the cumulants of the individual distributions for $X$ and $Y$, i.e. $C_{n,X+Y}=C_{n,X}+C_{n,Y}$ for the $n^{th}$-order cumulant.
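As a cross-check of Eqs. (3)-(7) and (12), the cumulants can be computed directly from a sample; the following plain-Python sketch (function names are ours, not STAR code) does this:

```python
# Cumulants C1-C4 from central moments, following Eqs. (3)-(7),
# together with the ratios of Eq. (12).

def cumulants(sample):
    n = len(sample)
    mean = sum(sample) / n                     # C1 = <N>
    def mu(r):                                 # r-th central moment, Eq. (3)
        return sum((x - mean) ** r for x in sample) / n
    return (mean,                              # C1, Eq. (4)
            mu(2),                             # C2, Eq. (5)
            mu(3),                             # C3, Eq. (6)
            mu(4) - 3 * mu(2) ** 2)            # C4, Eq. (7)

c1, c2, c3, c4 = cumulants([0, 1, 2, 3, 4])
# mean = 2, mu2 = 2, mu3 = 0, C4 = 6.8 - 3 * 2^2 = -5.2
print(c2 / c1, c3 / c2, c4 / c2)   # sigma^2/M, S*sigma, kappa*sigma^2, Eq. (12)
```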
For a distribution of the difference between $X$ and $Y$, the cumulants are $C_{n,X-Y}=C_{n,X}+(-1)^{n}C_{n,Y}$: the even-order cumulants are the sums of the individual cumulants, while the odd-order cumulants are given by their differences. If protons and antiprotons follow independent Poisson distributions, the various order cumulants of the net-proton, proton and antiproton distributions can be expressed as: $C_{n,p}=C_{1,p},~{}C_{n,\bar{p}}=C_{1,\bar{p}},~{}C_{n,p-\bar{p}}=C_{1,p}+(-1)^{n}C_{1,\bar{p}},$ in which case the net-proton multiplicity distribution obeys the Skellam distribution and the Poisson baseline/expectation values of the net-proton, proton and antiproton cumulant ratios are: $(\sigma^{2}/M)_{p,\bar{p}}=(S\sigma)_{p,\bar{p}}=(\kappa\sigma^{2})_{p,\bar{p}}=1,$ $(\sigma^{2}/M)_{p-\bar{p}}=\frac{1}{(S\sigma)_{p-\bar{p}}}=\frac{C_{1,p}+C_{1,\bar{p}}}{C_{1,p}-C_{1,\bar{p}}},~{}(\kappa\sigma^{2})_{p-\bar{p}}=1,$ where $C_{1,p}$ and $C_{1,\bar{p}}$ are the mean values of the proton and antiproton multiplicities, respectively. On the other hand, it is expected that close to the CP, the three- and four-particle correlations are dominant relative to two-particle correlations Stephanov (2009).
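The Skellam baseline above can be made concrete in a few lines (a sketch; the mean values 18 and 6 are illustrative, not measured yields):

```python
# Skellam baselines for net-proton cumulants, assuming independent
# Poisson-distributed protons and antiprotons with means c1p and c1pb.

def skellam_cumulants(c1p, c1pb, n_max=4):
    # C_{n, p - pbar} = C_{1,p} + (-1)^n * C_{1,pbar}
    return [c1p + (-1) ** n * c1pb for n in range(1, n_max + 1)]

c1, c2, c3, c4 = skellam_cumulants(18.0, 6.0)
print(c2 / c1)   # sigma^2/M = (c1p + c1pb)/(c1p - c1pb) = 2.0
print(c3 / c2)   # S*sigma   = (c1p - c1pb)/(c1p + c1pb) = 0.5
print(c4 / c2)   # kappa*sigma^2 = 1.0, the Poisson expectation
```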
The integrated correlation functions of protons and antiprotons at various orders ($\kappa_{n}$, also known as factorial cumulants) are related to the corresponding proton and antiproton cumulants ($C_{n}$) through the following relations Ling and Stephanov (2016); Bzdak _et al._ (2017a); Kitazawa and Luo (2017): $\begin{split}\kappa_{1}&=C_{1}=\langle N\rangle,\\ \kappa_{2}&=-C_{1}+C_{2},\\ \kappa_{3}&=2C_{1}-3C_{2}+C_{3},\\ \kappa_{4}&=-6C_{1}+11C_{2}-6C_{3}+C_{4},\\ C_{2}&=\kappa_{2}+\kappa_{1},\\ C_{3}&=\kappa_{3}+3\kappa_{2}+\kappa_{1},\\ C_{4}&=\kappa_{4}+6\kappa_{3}+7\kappa_{2}+\kappa_{1},\end{split}$ (13) where $C_{1}$ and $\kappa_{1}$ represent the mean values for protons or antiprotons. The proton and antiproton cumulant ratios $C_{2}/C_{1}$, $C_{3}/C_{2}$ and $C_{4}/C_{2}$ can be expressed in terms of the corresponding normalized correlation functions $\kappa_{n}/\kappa_{1}$ ($n>1$) as: $\frac{C_{2}}{C_{1}}=\frac{\kappa_{2}}{\kappa_{1}}+1,$ (14) $\frac{C_{3}}{C_{2}}=\frac{\kappa_{3}/\kappa_{1}-2}{\kappa_{2}/\kappa_{1}+1}+3,$ (15) $\frac{C_{4}}{C_{2}}=\frac{\kappa_{4}/\kappa_{1}+6\kappa_{3}/\kappa_{1}-6}{\kappa_{2}/\kappa_{1}+1}+7.$ (16) The higher-order integrated correlation functions $\kappa_{n}$ ($n>1$) are equal to zero for Poisson distributions. Thus, $\kappa_{n}$ can be used to quantify the deviations from Poisson distributions in terms of $n$-particle correlations. For simplicity, from here on, we refer to the $\kappa_{n}$ as correlation functions instead of integrated correlation functions. In the following subsections, we discuss corrections that are related to the collision centrality bin width (Sec. II.6) and the detection efficiency (Sec. II.7). This is followed by the estimation of statistical and systematic uncertainties in Secs. II.8 and II.9, respectively.
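The two maps in Eq. (13) are inverses of each other, which can be verified directly (a minimal sketch; function names are ours):

```python
# Conversion between cumulants C_n and factorial cumulants
# (integrated correlation functions) kappa_n, following Eq. (13).

def cumulants_to_kappa(c1, c2, c3, c4):
    return (c1,
            -c1 + c2,
            2 * c1 - 3 * c2 + c3,
            -6 * c1 + 11 * c2 - 6 * c3 + c4)

def kappa_to_cumulants(k1, k2, k3, k4):
    return (k1,
            k2 + k1,
            k3 + 3 * k2 + k1,
            k4 + 6 * k3 + 7 * k2 + k1)

# For a Poisson distribution all C_n equal the mean, so kappa_n = 0 for n > 1:
print(cumulants_to_kappa(5.0, 5.0, 5.0, 5.0))   # (5.0, 0.0, 0.0, 0.0)

# Round trip: the two mappings invert each other.
print(kappa_to_cumulants(*cumulants_to_kappa(4.0, 6.0, 9.0, 20.0)))
```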
### II.6 Centrality bin width correction

The data are presented in this paper as a function of collision centrality classes for 0-5%, 5-10%, 10-20%, 20-30%, 30-40%, 40-50%, 50-60%, 60-70% and 70-80%. The finite size of the centrality bins implies that the average number of protons and antiprotons varies even within a centrality class. This variation has to be accounted for while calculating the cumulants in a broad centrality class. In addition, it is known that calculating cumulants in such broad centrality bins leads to a strong enhancement of the cumulants and cumulant ratios due to initial volume fluctuations Luo _et al._ (2013); He and Luo (2018). Figure 4: (Color online) $C_{n}$ of net-proton distributions in Au+Au collisions at $\sqrt{s_{\mathrm{NN}}}$ = 7.7, 19.6 and 62.4 GeV as a function of $\langle N_{\mathrm{part}}\rangle$. The results are shown for 10%, 5% and 2.5% centrality bins without CBWC and for 9 centrality bins (0-5%, 5-10%, 10-20%,…, 70-80%) with CBWC. The bars are the statistical uncertainties. The Centrality Bin Width Correction (CBWC) is the procedure used to account for this effect when measuring in a wide centrality bin; it is based on weighting the cumulants measured in each multiplicity bin by the number of events in that bin Luo _et al._ (2013); Chatterjee _et al._ (2020); He and Luo (2018). This procedure is mathematically expressed in the equation below: $C_{n}=\frac{\sum_{r}n_{r}C_{n}^{r}}{\sum_{r}n_{r}}=\sum_{r}\omega_{r}C_{n}^{r},$ (17) where $n_{r}$ is the number of events in the $r^{th}$ multiplicity bin used for the centrality determination, and $C_{n}^{r}$ is the $n^{th}$-order cumulant of the particle number distribution in that bin. The corresponding weight for the $r^{th}$ multiplicity bin is $\omega_{r}=n_{r}/\sum_{r}n_{r}$.
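The weighted average of Eq. (17) reduces to a few lines of code; the bin counts and cumulant values below are hypothetical, for illustration only:

```python
# Centrality Bin Width Correction, Eq. (17): the cumulant for a wide
# centrality class is the event-number-weighted average of the cumulants
# computed in each underlying multiplicity bin.

def cbwc(n_events, cumulants_per_bin):
    total = sum(n_events)
    return sum(n * c for n, c in zip(n_events, cumulants_per_bin)) / total

# Hypothetical example: three multiplicity bins inside one centrality class.
n_r = [1000, 3000, 6000]     # events per multiplicity bin
c4_r = [2.0, 3.0, 5.0]       # C4 measured in each bin
print(cbwc(n_r, c4_r))       # (1000*2 + 3000*3 + 6000*5) / 10000 = 4.1
```

Computing a single cumulant over the merged class instead of this weighted average would mix the bin-to-bin variation of the means into the fluctuation signal, which is the volume-fluctuation enhancement discussed above.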
Figure 5: (Color online) $\kappa\sigma^{2}$ as a function of collision energy for Au+Au collisions for 0-5% centrality. The data have been corrected for volume fluctuation effects using the data-driven CBWC approach and a model-dependent volume fluctuation correction method. The bars are the statistical uncertainties. Figure 6: (Color online) Efficiency-uncorrected $C_{n}$ of net-proton, proton and antiproton multiplicity distributions in Au+Au collisions at $\sqrt{s_{\mathrm{NN}}}$ = 7.7 – 200 GeV as a function of $\langle N_{\mathrm{part}}\rangle$. The results are CBWC-corrected. The bars are the statistical uncertainties. Figure 4 shows the $C_{n}$ up to the fourth order as a function of $\langle N_{\rm{part}}\rangle$ for three different collision energies: $\sqrt{s_{\mathrm{NN}}}$ = 7.7, 19.6 and 62.4 GeV. For each $C_{n}$, four different results are shown. One of them is the CBWC result for 9 collision centrality bins, which correspond to 0-5%, 5-10%, 10-20%, 20-30%,…,70-80%. For comparison, cumulants are also calculated for the other three cases, with 10%, 5% and 2.5% centrality bin widths and without CBWC. The higher-order cumulant results with 10% centrality bins are found to show significant deviations from those with 5% and 2.5% centrality bins without CBWC. This means that it is important to correct for the centrality bin width effect, as one normally expects that, irrespective of the centrality bin width, the cumulant values should exhibit the same dependence on $\langle N_{\rm{part}}\rangle$. The results approach the CBWC results as the centrality bins become narrower, and the results with 2.5% centrality bins almost overlap with the CBWC results. This indicates that the CBWC can effectively suppress the effect of volume fluctuations on the cumulants (up to the fourth order) within a finite centrality bin width.
A different approach, the volume fluctuation correction (VFC) method Skokov _et al._ (2013); Braun-Munzinger _et al._ (2017), which assumes independent production of protons, has also been applied at $\sqrt{s_{{\mathrm{NN}}}}$ = 7.7, 19.6 and 62.4 GeV for 0-5% Au+Au central collisions. The correction factors are determined by the Glauber model Braun-Munzinger _et al._ (2017). Figure 5 shows the comparison between the results based on the CBWC and VFC methods. As can be seen from the plot, for the 0-5% central collisions, the results of CBWC and VFC are found to be consistent within statistical uncertainties. However, follow-up UrQMD model studies reported in Ref. Sugiura _et al._ (2019) indicate that the VFC method (as discussed in Ref. Skokov _et al._ (2013)) does not work, as the independent-particle-production assumption of the VFC is expected to be broken. Therefore, we follow the data-driven CBWC method in this paper.

### II.7 Efficiency correction

Figure 6 shows the efficiency-uncorrected $C_{n}$ for proton, antiproton and net-proton multiplicity distributions in Au+Au collisions at $\sqrt{s_{{\rm NN}}}$ = 7.7 – 200 GeV as a function of $\langle N_{\rm{part}}\rangle$. This section discusses two methods of efficiency correction: the binomial-model-based method Kitazawa and Luo (2017); Bzdak and Koch (2012); Luo (2015); Nonaka _et al._ (2017); Luo and Nonaka (2019) and the unfolding method Garg _et al._ (2013a); Esumi _et al._ (2021). The cumulants presented in the subsequent sections are corrected for efficiency and acceptance effects related to proton and antiproton reconstruction, unless specified otherwise. Figure 7: (Color online) Proton and antiproton efficiencies as a function of $\langle N_{\mathrm{part}}\rangle$ in Au+Au collisions for various $\sqrt{s_{\mathrm{NN}}}$. For the lower $p_{\rm T}$ range ($0.4<p_{\rm T}<0.8$ GeV/$c$), only the TPC is used.
For the higher $p_{\rm T}$ range ($0.8<p_{\rm T}<$ 2.0 GeV/$c$), both the TPC and TOF are used.

#### II.7.1 Binomial model method

The binomial-model-based method involves two steps. First, we obtain the efficiency of proton and antiproton reconstruction in the STAR detector; then we correct the cumulants for efficiency and acceptance effects using analytic expressions. The former uses the embedding process and the latter invokes binomial-model assumptions for the detector response function for the efficiencies. More details can be found in Appendix A. The detector acceptance and the efficiency of reconstructing proton and antiproton tracks are determined together by embedding Monte Carlo (MC) tracks, simulated using the GEANT Fine and Nevski (2000) model of the STAR detector response, into real events at the raw data level. One important requirement is the matching of the distributions of reconstructed embedded tracks and real data tracks for quantities reflecting track quality and those used for track selection Adamczyk _et al._ (2017). The ratio of the distribution of reconstructed to embedded Monte Carlo tracks as a function of $p_{T}$ gives the efficiency $\times$ acceptance correction factor ($\varepsilon_{\mathrm{TPC}}(p_{T})$) for the rapidity interval studied. We refer to this factor simply as the efficiency. The current analysis makes use of both the TPC and the TOF detectors. While the TPC identifies low-$p_{T}$ ($0.4<p_{T}<0.8$ GeV/$c$) protons and antiprotons with high purity, the TOF gives better particle identification than the TPC in the higher $p_{T}$ range ($0.8<p_{T}<2.0$ GeV/$c$). However, not all TPC tracks have valid TOF information, due to the limited TOF acceptance and mismatches between TPC tracks and TOF hits. This extra efficiency is called the TOF-matching efficiency ($\varepsilon_{\mathrm{TOF}}(p_{T})$).
The TOF-matching efficiency is particle-species-dependent and can be obtained using a data-driven technique, which is defined as the ratio of the number of (anti)proton tracks detected in the TOF to the total number of (anti)proton tracks in the TPC within the same acceptance Adamczyk _et al._ (2017). Thus, the final average (anti)proton efficiency within a certain $p_{T}$ range can be calculated as: $\langle\varepsilon\rangle=\frac{{\int\limits_{{p_{{T_{1}}}}}^{{p_{{T_{2}}}}}{\varepsilon({p_{T}})f({p_{T}})d{p_{T}}}}}{{\int\limits_{{p_{{T_{1}}}}}^{{p_{{T_{2}}}}}{f({p_{T}})d{p_{T}}}}},$ (18) where the $p_{T}$-dependent efficiency, $\varepsilon({p_{T}})$, is defined as $\varepsilon({p_{T}})={\varepsilon_{\mathrm{TPC}}}({p_{T}})$ for $0.4<p_{T}<0.8$ GeV/$c$ and $\varepsilon({p_{T}})={\varepsilon_{\mathrm{TPC}}}({p_{T}})\times{\varepsilon_{\mathrm{TOF}}}({p_{T}})$ for $0.8<p_{T}<2.0$ GeV/$c$. The function $f({p_{T}})$ is the efficiency- corrected $p_{T}$ spectrum for (anti)protons Adamczyk _et al._ (2017). Figure 7 shows the average efficiency ($\langle\varepsilon\rangle$) for protons and antiprotons at midrapidity ($|y|<$ 0.5) as a function of collision centrality ($\langle N_{\mathrm{part}}\rangle$). For $0.4<p_{\rm T}<$ 0.8 GeV/$c$ the efficiency is only from the TPC and for $0.8<p_{\rm T}<$ 2.0 GeV/$c$ it is the product of efficiencies from the TPC and TOF. In Fig. 7, only statistical uncertainties are presented and a $\pm$ 5% systematic uncertainty associated with determining the efficiency is considered in the analysis. Figure 8: (Color online) Distributions of reconstructed protons (black circles) from embedding simulations in 200 GeV top 2.5%-central Au+Au collisions. Red lines are fits to the binomial distribution, and green dotted lines represent the fit with the beta-binomial distributions using the $\alpha$ that gives the minimum $\chi^{2}/{\rm ndf}$. 
Each panel presents results for a different combination of the number of embedded protons and antiprotons as labeled in the legend. The ratio of the fits to the embedding data is shown for each panel at the bottom.

#### II.7.2 Unfolding method

In this section we discuss the effect of the efficiency correction on the $C_{n}$ measurement if the assumption of a binomial detector efficiency response breaks down for some of the reasons given in Refs. Bzdak _et al._ (2016); Nonaka _et al._ (2018). The technique is based on unfolding of the detector response Garg _et al._ (2013a); Esumi _et al._ (2021). The response function is obtained by MC simulations carried out in the STAR detector environment Fine and Nevski (2000). MC tracks are simulated through GEANT and embedded in the real data, and track reconstruction is performed as in the real experiment. Many effects can lead to a non-binomial detector response in heavy-ion experiments. One of those effects could be track merging due to the extreme environment of high particle multiplicity densities in the detector. Hence, we have performed the embedding simulations using the real data for 0-5% Au+Au collisions at $\sqrt{s_{\rm NN}}=$ 200 GeV. The numbers of embedded tracks, $N_{\rm p}$ and $N_{\rm\bar{p}}$, are varied within $5\leq N_{\rm p(\bar{p})}\leq 40$. Since we are measuring the net-proton multiplicity distributions, protons and antiprotons are embedded simultaneously. We have shown in Ref. Adamczyk _et al._ (2018b) that for the event statistics in the current analysis, the efficiencies for kaon reconstruction follow binomial distributions. Figure 9: (Color online) Unfolded net-proton multiplicity distributions for $\sqrt{s_{\rm NN}}=$ 200 GeV Au+Au collisions where the binomial distribution (black circle), beta-binomial distributions with $\alpha+\sigma$ (green triangle), $\alpha$ (red square) and $\alpha-\sigma$ (blue triangle) are utilised in response matrices.
Ratios of the beta-binomial unfolded distributions to that from binomial response matrices are shown in the bottom panel. Table 5: Net-proton cumulant ratios and their statistical errors for 0-5% central Au+Au collisions at $\sqrt{s_{{\mathrm{NN}}}}$ = 200 GeV, (second column) from the conventional efficiency correction with the binomial detector response, and (third column) from unfolding with the beta-binomial detector response. Systematic errors are also shown for the beta-binomial case. The last column shows the difference between the two results normalized by the total uncertainty, which is equal to the statistical and systematic uncertainties summed in quadrature.

Cumulant ratio | binomial $\pm$ statistical error | beta $\pm$ statistical error $\pm$ systematic error | significance
---|---|---|---
$C_{2}/C_{1}$ | $1.3\pm\mathrm{neg.}$ | $1.20\pm\mathrm{neg.}\pm 0.03$ | $3.1$
$C_{3}/C_{2}$ | $0.13\pm 0.01$ | $0.13\pm 0.01\pm\mathrm{neg.}$ | $4.8\times 10^{-2}$
$C_{4}/C_{2}$ | $1.10\pm 0.21$ | $0.97\pm 0.21\pm 0.08$ | $4.2\times 10^{-1}$
$C_{5}/C_{1}$ | $0.10\pm 0.48$ | $-0.14\pm 0.44\pm 0.11$ | $3.8\times 10^{-1}$
$C_{6}/C_{2}$ | $-0.45\pm 0.24$ | $-0.14\pm 0.20\pm 0.07$ | $1.0$

Figure 8 shows the reconstructed protons from the embedding data (black circles) of Au+Au collisions at $\sqrt{s_{{\rm NN}}}$ = 200 GeV and 0-2.5% collision centrality. Each panel represents a different number of embedded (anti)protons. These distributions are fitted by a binomial distribution (red solid line) at a fixed efficiency $\varepsilon$. The ratios of the fitted function to the embedding data are shown in the lower panels. The fitted $\chi^{2}/$ndf ranges from 5.2 to 17.8, and the tails of the distributions are not well described by the binomial distribution for several combinations of embedded $N_{\rm p}$ and $N_{\rm\bar{p}}$ tracks.
We find that the embedding data are better described by a beta-binomial distribution: $\beta(n;N,a,b)=\int_{0}^{1}d\varepsilon\,\beta(\varepsilon;a,b)\,{\rm B}(n;N,\varepsilon),$ (19) where ${\rm B}(n;N,\varepsilon)$ is the binomial distribution and the beta distribution is given by: $\beta(\varepsilon;a,b)=\varepsilon^{a-1}(1-\varepsilon)^{b-1}/{\rm B}(a,b),$ (20) where ${\rm B}(a,b)$ is the beta function. The beta-binomial distribution can be generated by an urn model. Consider $N_{w}$ white balls and $N_{b}$ black balls in an urn. One draws a ball from the urn; if it is white (black), two white (black) balls are returned to the urn. This procedure is repeated $N$ times, and the resulting distribution of the number $n$ of white balls drawn is the beta-binomial distribution $\beta(n;N,N_{w},N_{b})$. This is equivalent to a parametrization $\beta(n;N,\alpha,\varepsilon)$, with $N_{w}=\alpha N$ and $\varepsilon=N_{w}/(N_{w}+N_{b})$. A smaller $\alpha$ gives a broader distribution than the binomial, while the distribution approaches the binomial distribution for larger values of $\alpha$. The beta-binomial distributions are numerically generated with various values of $\alpha$ and compared to the embedding data to determine the best-fit value of $\alpha$. The green lines in Fig. 8 show the beta-binomial distribution for the value of $\alpha$ that gives the minimum $\chi^{2}/{\rm ndf}$. It is found that $\chi^{2}/{\rm ndf}\approx 1$ for most $(N_{\rm p},N_{\bar{p}})$ combinations. With this additional parameter $\alpha$, the tails of the detector response are better described by a beta-binomial distribution than by a binomial distribution. From the embedding simulations discussed above, $\varepsilon$ and $\alpha$ are parametrized as functions of $N_{\rm p}$ and $N_{\bar{\rm p}}$. Using this parametrization, a 4-dimensional response matrix between generated and reconstructed protons and antiprotons is generated with 1 billion events.
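The integral in Eq. (19) has a standard closed form, which makes the broadening easy to check numerically. The sketch below uses illustrative parameter values ($N$, $\varepsilon$, $\alpha$ are ours, not fit results from the embedding):

```python
# Beta-binomial detector response of Eqs. (19)-(20). The efficiency
# integral has the closed form comb(N, n) * B(n + a, N - n + b) / B(a, b),
# evaluated in log space for numerical stability.
import math

def log_beta(a, b):
    # log of the Euler beta function B(a, b)
    return math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)

def beta_binomial_pmf(n, N, a, b):
    return math.comb(N, n) * math.exp(log_beta(n + a, N - n + b) - log_beta(a, b))

N, eps, alpha = 20, 0.6, 10.0      # illustrative values only
a = alpha * N                      # N_w = alpha * N, as in the urn model
b = a * (1 - eps) / eps            # N_b fixed by eps = N_w / (N_w + N_b)

probs = [beta_binomial_pmf(n, N, a, b) for n in range(N + 1)]
mean = sum(n * p for n, p in enumerate(probs))
var = sum((n - mean) ** 2 * p for n, p in enumerate(probs))
print(round(sum(probs), 6))        # normalized: 1.0
print(round(mean, 6))              # mean preserved at N * eps = 12.0
print(var > N * eps * (1 - eps))   # broader than the binomial: True
```

Sending $\alpha\to\infty$ drives the variance back to the binomial value $N\varepsilon(1-\varepsilon)$, matching the statement that large $\alpha$ recovers the binomial limit.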
The limited statistics in the embedding simulations lead to uncertainties on the $\alpha$ values. Therefore, two more response matrices are generated using $\alpha-\sigma$ and $\alpha+\sigma$, where $\sigma$ is the statistical uncertainty on the $\alpha$ values determined by the embedding simulation. Furthermore, standard response matrices are also generated with the binomial distribution as a reference, using a multiplicity-dependent efficiency. These response matrices are used to correct for the detector effects, and the approach is validated by comparing to the binomial correction method described in the previous section. The consistency of the unfolding method has been checked through a detailed simulation and an analytic study. Figure 9 shows the unfolded net-proton distributions for 200 GeV Au+Au collisions at 0-2.5% centrality. Results from four assumptions on the detector response are shown: one is the binomial detector response, and the other three assume beta-binomial distributions with different non-binomial $\alpha$ values. The ratios of the beta-binomial unfolded distributions to the binomial unfolded distributions are shown in the bottom panel. The unfolded distributions with beta-binomial response matrices are found to be narrower for decreasing values of $\alpha$. Calculations are done for the 0-2.5% and 2.5-5.0% centralities separately and averaged to determine the $C_{n}$ values for the 0-5% centrality. The $C_{n}$ values and their ratios from data obtained using the binomial-model method of efficiency correction and those using the binomial detector response matrix in the unfolding method are consistent. Table 5 summarizes the cumulant ratios and their errors. Results are also obtained from the unfolding method using the beta-binomial response function with non-binomial parameters in the range $\alpha\pm\sigma$. This range in $\alpha$ is used to estimate the systematic uncertainties associated with the unfolding method.
The deviations of these non-binomial efficiency-corrected results with respect to the conventional efficiency correction with the binomial detector response are found to be 3.1 $\sigma$ for $C_{2}/C_{1}$ and less than 1.0 $\sigma$ for $C_{4}/C_{2}$ and for $C_{3}/C_{2}$. Here $\sigma$ is the statistical and systematic uncertainties added in quadrature. These studies have been done for Au+Au collisions at the highest collision energy of $\sqrt{s_{\rm NN}}=$ 200 GeV and the top 5% centrality. This dataset provides the largest charged-particle-density environment for the detectors, where we expect the maximum non-binomial detector effects. Even in this situation, the differences between the two methods of efficiency correction for the higher-order cumulant ratios are at a level of less than one $\sigma$. Thus, we conclude that the non-binomial detector effects on the higher-order cumulant ratios presented in this work are within the uncertainties quoted for all of the BES-I energies. Figure 10: (Color online) Comparison of the statistical uncertainties on $C_{n}$ of net-proton distributions in Au+Au collisions at $\sqrt{s_{{\rm NN}}}$ = 19.6 GeV from the delta theorem and bootstrap methods. The results are presented as a function of $\langle N_{\mathrm{part}}\rangle$. Figure 11: (Color online) Ratios of cumulants ($C_{n}$) of net-proton distributions as a function of $\langle N_{\rm{part}}\rangle$ in Au+Au collisions at $\sqrt{s_{\mathrm{NN}}}$ = 200 GeV, obtained by varying the analysis criteria in terms of track selection, particle identification and efficiency. Since variations with respect to the default selection criteria are used to obtain the systematic uncertainties on the measurements, the errors are shown only for the default case.

### II.8 Statistical uncertainty

The higher-order cumulants are sensitive to the shape of the distribution, and estimating their statistical uncertainty is crucial given the limited available statistics.
It has been shown that among the various methods of obtaining statistical uncertainties on cumulants, the delta theorem method Luo (2012) and the bootstrap method Luo _et al._ (2013); Luo (2015); Pandav _et al._ (2019); boo ; Efron (1979) are the most reliable. Below we briefly discuss the two methods and show that the uncertainty values obtained up to the fourth-order cumulant from both methods are consistent. Table 6: Total systematic uncertainty as well as the absolute uncertainties from individual sources, such as DCA and NhitsFit, for net-proton $C_{n}$ in 0-5% central Au+Au collisions at $\sqrt{s_{\rm{NN}}}=$ 7.7 - 200 GeV. The total systematic uncertainties are obtained by adding the uncertainties from individual sources in quadrature.

$\sqrt{s_{\mathrm{NN}}}$ (GeV) | Cumulant | Total syst. | DCA | NhitsFit | $N_{\sigma,p}$ | $m^{2}$ | Efficiency
---|---|---|---|---|---|---|---
7.7 | $C_{1}$ | 2.42 | 0.85 | 0.78 | 0.99 | 0.028 | 1.88
7.7 | $C_{2}$ | 2.03 | 0.72 | 0.60 | 0.82 | 0.032 | 1.61
7.7 | $C_{3}$ | 1.65 | 0.60 | 0.97 | 0.54 | 0.31 | 1.02
7.7 | $C_{4}$ | 16.20 | 5.56 | 12.54 | 6.40 | 2.68 | 5.11
11.5 | $C_{1}$ | 2.82 | 1.76 | 1.03 | 1.13 | 0.033 | 1.59
11.5 | $C_{2}$ | 2.34 | 1.44 | 0.73 | 0.99 | 0.020 | 1.37
11.5 | $C_{3}$ | 1.36 | 0.64 | 0.20 | 0.85 | 0.035 | 0.82
11.5 | $C_{4}$ | 7.37 | 2.28 | 4.10 | 4.94 | 2.60 | 1.06
14.5 | $C_{1}$ | 1.72 | 0.77 | 0.54 | 0.76 | 0.03 | 1.22
14.5 | $C_{2}$ | 1.60 | 0.69 | 0.49 | 0.74 | 0.021 | 1.13
14.5 | $C_{3}$ | 1.16 | 0.52 | 0.44 | 0.51 | 0.047 | 0.78
14.5 | $C_{4}$ | 8.06 | 2.89 | 3.10 | 5.41 | 0.71 | 4.15
19.6 | $C_{1}$ | 1.46 | 0.60 | 0.62 | 0.56 | 0.045 | 1.03
19.6 | $C_{2}$ | 1.46 | 0.62 | 0.62 | 0.57 | 0.041 | 1.02
19.6 | $C_{3}$ | 0.68 | 0.36 | 0.26 | 0.23 | 0.13 | 0.44
19.6 | $C_{4}$ | 3.65 | 0.86 | 1.99 | 2.58 | 0.59 | 0.89
27 | $C_{1}$ | 1.20 | 0.51 | 0.53 | 0.47 | 0.025 | 0.83
27 | $C_{2}$ | 1.44 | 0.67 | 0.63 | 0.57 | 0.027 | 0.96
27 | $C_{3}$ | 0.62 | 0.33 | 0.27 | 0.23 | 0.035 | 0.39
27 | $C_{4}$ | 3.10 | 1.58 | 1.36 | 1.80 | 0.38 | 1.36
39 | $C_{1}$ | 0.94 | 0.39 | 0.45 | 0.35 | 0.026 | 0.64
39 | $C_{2}$ | 1.48 | 0.67 | 0.67 | 0.59 | 0.033 | 0.97
39 | $C_{3}$ | 0.51 | 0.29 | 0.21 | 0.17 | 0.04 | 0.313
39 | $C_{4}$ | 3.35 | 1.00 | 2.76 | 1.43 | 0.20 | 0.65
54.4 | $C_{1}$ | 0.81 | 0.43 | 0.33 | 0.20 | 0.034 | 0.56
54.4 | $C_{2}$ | 1.57 | 0.88 | 0.65 | 0.39 | 0.064 | 1.06
54.4 | $C_{3}$ | 0.42 | 0.27 | 0.15 | 0.078 | 0.025 | 0.27
54.4 | $C_{4}$ | 2.95 | 1.18 | 1.41 | 1.93 | 1.24 | 0.21
62.4 | $C_{1}$ | 1.04 | 0.45 | 0.49 | 0.35 | 0.044 | 0.71
62.4 | $C_{2}$ | 2.15 | 1.05 | 1.087 | 0.79 | 0.11 | 1.31
62.4 | $C_{3}$ | 0.58 | 0.14 | 0.22 | 0.30 | 0.081 | 0.41
62.4 | $C_{4}$ | 3.99 | 2.40 | 2.30 | 1.38 | 1.21 | 1.23
200 | $C_{1}$ | 0.39 | 0.19 | 0.24 | 0.11 | 0.01 | 0.22
200 | $C_{2}$ | 2.42 | 1.11 | 1.53 | 0.77 | 0.087 | 1.31
200 | $C_{3}$ | 0.39 | 0.24 | 0.18 | 0.19 | 0.074 | 0.14
200 | $C_{4}$ | 4.89 | 2.69 | 3.07 | 1.80 | 1.41 | 1.42

The delta theorem method gives a concise form of the standard error-propagation method. This method of statistical uncertainty estimation uses the Central Limit Theorem (CLT). The variance of a statistic $\phi$ can be calculated as: $V(\phi)=\sum\limits_{i,j=1}^{m}{\left({\frac{{\partial\phi}}{{\partial{X_{i}}}}}\right)}\left({\frac{{\partial\phi}}{{\partial{X_{j}}}}}\right){\rm Cov}({X_{i}},{X_{j}}),$ (21) where ${\rm Cov}(X_{i},X_{j})$ is the covariance between the random variables $X_{i}$ and $X_{j}$. Thus, we need to know the covariance between $X_{i}$ and $X_{j}$ to calculate the statistical errors. If the particle multiplicities follow a Gaussian distribution with width $\sigma$, the statistical uncertainties of the cumulants and cumulant ratios at different orders can be estimated as: $\displaystyle\mathrm{error}(C_{m})\propto\frac{\sigma^{m}}{\sqrt{N}~{}\varepsilon^{\alpha}},~{}\mathrm{error}(C_{n}/C_{2})\propto\frac{\sigma^{n-2}}{\sqrt{N}~{}\varepsilon^{\beta}},$ (22) where $m$ and $n$ are integers with $m\geq 1$ and $n\geq 2$, and $\alpha$ and $\beta$ are real numbers with $\alpha>0$ and $\beta>0$.
The $N$ and $\varepsilon$ denote the number of events and the particle-reconstruction efficiency, respectively. Thus, one finds that the statistical uncertainty depends strongly on the width ($\sigma$) of the distributions. For similar event statistics, because the width of the net-proton distributions increases from peripheral to central collisions, the statistical uncertainties are larger in central collisions than in peripheral collisions. Furthermore, the efficiency correction increases the statistical uncertainties on the cumulants relative to the corresponding uncorrected case. A more detailed discussion can be found in Appendix B.

The bootstrap method finds the statistical uncertainties on the cumulants in a Monte Carlo fashion by forming bootstrap samples. It randomly selects elements with replacement from the original sample to construct bootstrap samples, over which the sampling variance of a given order cumulant is calculated boo ; Efron (1979). Let $X$ be a random sample representing the experimental dataset, and let $\mu_{r}$ be the estimator of a statistic (such as the mean or variance) on which we intend to find the statistical error. Given a parent sample of size $n$, construct $B$ independent bootstrap samples $X^{*}_{1}$, $X^{*}_{2}$, $X^{*}_{3}$, …, $X^{*}_{B}$, each consisting of $n$ data points randomly drawn with replacement from the parent sample. Then evaluate the estimator in each bootstrap sample:

$\mu_{r}^{*}=\mu_{r}(X^{*}_{b})\qquad b=1,2,3,...,B.$ (23)

Then obtain the sampling variance of the estimator as:

$\mathrm{Var}(\mu_{r})=\frac{1}{B-1}\sum_{b=1}^{B}\Big(\mu_{r}(X^{*}_{b})-\bar{\mu}_{r}\Big)^{2},$ (24)

where $\bar{\mu}_{r}=\frac{1}{B}\sum_{b=1}^{B}\mu_{r}(X^{*}_{b})$. The value of $B$ is optimized; in general, the larger $B$ is, the better the estimate of the error.
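The bootstrap recipe above (Eqs. (23)–(24)) and the simplest delta-theorem prediction, $\mathrm{Var}(C_{1})=C_{2}/N$, can be sketched in a few lines. The Skellam-like toy sample and the cumulant estimator below are illustrative stand-ins, not the analysis code:

```python
import numpy as np

def cumulants(sample):
    """C1..C4 of a 1-D sample, computed from its central moments."""
    mean = sample.mean()
    d = sample - mean
    m2, m3, m4 = (d**2).mean(), (d**3).mean(), (d**4).mean()
    return np.array([mean, m2, m3, m4 - 3.0 * m2**2])

rng = np.random.default_rng(42)
# Toy event-by-event net-proton numbers: Poisson(8) minus Poisson(3)
parent = rng.poisson(8.0, 50_000) - rng.poisson(3.0, 50_000)
n = parent.size

# Bootstrap (Eqs. 23-24): B resamples drawn with replacement from the parent
B = 200
boot = np.array([cumulants(rng.choice(parent, size=n, replace=True))
                 for _ in range(B)])
err_boot = boot.std(axis=0, ddof=1)   # ddof=1 gives the 1/(B-1) variance

# Delta theorem, simplest case: Var(C1) = C2/N, so err(C1) = sqrt(C2/N)
err_delta_c1 = np.sqrt(cumulants(parent)[1] / n)
```

The two estimates of the $C_{1}$ uncertainty agree to within the bootstrap's own sampling fluctuation (a few percent for $B=200$), and the higher-order cumulants carry visibly larger uncertainties, as Eq. (22) anticipates.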
Figure 10 shows the statistical uncertainties on various orders of $C_{n}$ obtained using the delta theorem and bootstrap methods for Au+Au collisions at $\sqrt{s_{\mathrm{NN}}}$ = 19.6 GeV. The results are shown as a function of $\langle N_{\mathrm{part}}\rangle$ for each $C_{n}$. The value of $B$ is 200. The statistical uncertainties from the two methods agree well. The delta theorem method is used to obtain the statistical uncertainties on the results discussed below.

Figure 12: (Color online) Collision centrality dependence of proton (open squares), antiproton (open triangles) and net-proton (filled circles) cumulants from (7.7 – 200 GeV) Au+Au collisions at RHIC. The data are from $|y|<0.5$ and $0.4<p_{T}<2.0$ GeV/$c$. Statistical and systematic uncertainties are shown as the narrow black and wide grey bands, respectively. Note that the net-proton and proton $C_{4}$ from 0-5% and 5-10% central Au+Au collisions at 7.7 GeV have been scaled down by a factor of 2, as indicated in the yellow box.

### II.9 Systematic uncertainty

Systematic uncertainties are estimated by varying the following requirements for $p(\bar{p})$ tracks: DCA, track quality (as reflected by the number of fit points used in track reconstruction), and $dE/dx$ and $m^{2}$ for $p(\bar{p})$ identification Adamczyk _et al._ (2014c). A $\pm$ 5% systematic uncertainty associated with determining the efficiency is also considered Adamczyk _et al._ (2017). All of the different sources of systematic uncertainty are added in quadrature to obtain the final systematic uncertainties on the $C_{n}$ and its ratios. Figure 11 shows the variations of the cumulant ratios with the changes in the above selection criteria for the net-proton distributions in Au+Au collisions at $\sqrt{s_{\mathrm{NN}}}$ = 200 GeV. Table 6 gives the systematic uncertainties on the $C_{n}$ of the net-proton distribution for 0-5% central Au+Au collisions at $\sqrt{s_{\mathrm{NN}}}$ = 7.7 - 200 GeV.
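The quadrature combination described above amounts to a one-line computation. As a concrete check, the per-source values below are the net-proton $C_{3}$ entries for 0-5% central 7.7 GeV collisions from Table 6, and the result reproduces the quoted total of 1.65:

```python
import numpy as np

# Absolute systematic uncertainties from the individual sources for
# net-proton C3 in 0-5% central Au+Au collisions at 7.7 GeV (Table 6):
# DCA, NhitsFit, N_sigma_p, m^2, efficiency.
source_unc = np.array([0.60, 0.97, 0.54, 0.31, 1.02])

# Total systematic uncertainty: individual sources added in quadrature
total_syst = np.sqrt(np.sum(source_unc**2))   # ~1.65, matching Table 6
```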
The statistical and systematic uncertainties are presented separately in the figures.

Figure 13: (Color online) Collision centrality dependence of the cumulant ratios of proton, antiproton and net-proton multiplicity distributions for Au+Au collisions at $\sqrt{s_{{\rm NN}}}$ = 7.7, 11.5, 14.5, 19.6, 27, 39, 54.4, 62.4 and 200 GeV. The bars and caps represent the statistical and systematic uncertainties, respectively.

## III Results

In this section we present the efficiency-corrected cumulants and cumulant ratios of net-proton, proton and antiproton multiplicity distributions in Au+Au collisions at $\sqrt{s_{{\rm NN}}}$ = 7.7, 11.5, 14.5, 19.6, 27, 39, 54.4, 62.4 and 200 GeV. The cumulant ratios are related to the ratios of baryon number susceptibilities ($\chi_{\mathrm{B}}$) computed in QCD-motivated models as: $\sigma^{2}/M$ = $\chi^{B}_{\mathrm{2}}/\chi^{B}_{\mathrm{1}}$, $S\sigma$ = $\chi^{B}_{\mathrm{3}}/\chi^{B}_{\mathrm{2}}$ and $\kappa\sigma^{2}$ = $\chi^{B}_{\mathrm{4}}/\chi^{B}_{\mathrm{2}}$ Ejiri _et al._ (2006); Cheng _et al._ (2009); Stokic _et al._ (2009); Gupta _et al._ (2011); Gavai and Gupta (2011). Normalized correlation functions ($\kappa_{n}/\kappa_{1}$, $n>1$) for protons and antiprotons extracted from the measured $C_{n}$ are also presented. The statistical uncertainties on $\kappa_{n}$ are obtained from the uncertainties on $C_{n}$ using the standard error-propagation method. These results will also be compared to corresponding results from a hadron resonance gas (HRG) model Garg _et al._ (2013b) and hadronic-transport-based UrQMD model calculations Xu _et al._ (2016); He and Luo (2017). In the following subsections, the dependence of the cumulants and correlation functions on collision energy, centrality, rapidity and transverse momentum is presented, and the corresponding physics implications are discussed.
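The correspondence between the moment products and the cumulant ratios quoted above ($\sigma^{2}/M=C_{2}/C_{1}$, $S\sigma=C_{3}/C_{2}$, $\kappa\sigma^{2}=C_{4}/C_{2}$) is an algebraic identity and can be verified numerically; the Skellam-like sample below is purely illustrative, not STAR data:

```python
import numpy as np

rng = np.random.default_rng(1)
# Illustrative "net-proton" sample (difference of two Poisson counts)
sample = rng.poisson(6.0, 200_000) - rng.poisson(2.0, 200_000)

mean = sample.mean()
d = sample - mean
m2, m3, m4 = (d**2).mean(), (d**3).mean(), (d**4).mean()

# Cumulants from central moments
C1, C2, C3, C4 = mean, m2, m3, m4 - 3.0 * m2**2

sigma = np.sqrt(m2)          # width
S = m3 / sigma**3            # skewness
kappa = m4 / m2**2 - 3.0     # (excess) kurtosis

# Moment products equal the cumulant ratios
r1 = sigma**2 / mean         # = C2/C1
r2 = S * sigma               # = C3/C2
r3 = kappa * sigma**2        # = C4/C2
```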
### III.1 Centrality dependence

In this subsection, we show the $\langle N_{\mathrm{part}}\rangle$ (representing collision centrality) dependence of the cumulants, cumulant ratios and normalized correlation functions in Au+Au collisions at $\sqrt{s_{{\rm NN}}}$ = 7.7 – 200 GeV. To understand the evolution of the centrality dependence of the cumulants and cumulant ratios, we invoke the central limit theorem and consider the distribution at any given centrality $i$ to be a superposition of several independent source distributions Aggarwal _et al._ (2010b). Assuming the average number of sources for a given centrality is proportional to the corresponding $\langle N_{\mathrm{part}}\rangle$, the $C_{n}$ should depend linearly on $\langle N_{\mathrm{part}}\rangle$, and the ratios $C_{2}/C_{1}$, $C_{3}/C_{2}$ and $C_{4}/C_{2}$ should be constant as a function of $\langle N_{\mathrm{part}}\rangle$.

Figure 14: (Color online) Collision centrality dependence of normalized correlation functions $\kappa_{n}/\kappa_{1}$ ($n=2,3,4$) for proton and antiproton multiplicity distributions in Au+Au collisions at $\sqrt{s_{{\rm NN}}}$= 7.7, 11.5, 14.5, 19.6, 27, 39, 54.4, 62.4 and 200 GeV. The bars and caps represent the statistical and systematic uncertainties, respectively. For clarity, the X-axis values for protons are shifted, and the values of proton and antiproton $\kappa_{4}/\kappa_{1}$ at $\sqrt{s_{{\rm NN}}}$ = 7.7 GeV are scaled down by a factor of 2.

Figure 12 shows the $\langle N_{\mathrm{part}}\rangle$ dependence of $C_{n}$ for net-proton, proton and antiproton distributions in Au+Au collisions at $\sqrt{s_{{\rm NN}}}$ = 7.7 – 200 GeV. Since the cumulants are extensive quantities, the $C_{n}$ for net-proton, proton and antiproton increase with increasing $\langle N_{\mathrm{part}}\rangle$ for all of the $\sqrt{s_{{\rm NN}}}$ studied.
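The independent-source argument above rests on the additivity of cumulants: for a sum of independent sources, every $C_{n}$ is the sum of the per-source cumulants, so $C_{n}$ grows linearly with the number of sources while the ratios stay fixed. A minimal numerical sketch with toy Skellam-like sources (not a physics model):

```python
import numpy as np

def cumulants(sample):
    """C1..C4 of a 1-D sample from its central moments."""
    mean = sample.mean()
    d = sample - mean
    m2, m3, m4 = (d**2).mean(), (d**3).mean(), (d**4).mean()
    return np.array([mean, m2, m3, m4 - 3.0 * m2**2])

rng = np.random.default_rng(7)
n_events = 200_000

def superposed(n_sources):
    """Event-by-event sum of independent, identical toy net-count sources."""
    total = np.zeros(n_events)
    for _ in range(n_sources):
        total += rng.poisson(0.9, n_events) - rng.poisson(0.3, n_events)
    return cumulants(total)

c5, c50 = superposed(5), superposed(50)

scale = c50 / c5            # low-order C_n grow ~10x: linear in sources
ratio5 = c5[1] / c5[0]      # C2/C1 with 5 sources
ratio50 = c50[1] / c50[0]   # essentially unchanged with 50 sources
```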
The different mean values of the proton and antiproton distributions at each energy are determined by the interplay between proton-antiproton pair production and baryon stopping effects. At lower $\sqrt{s_{{\rm NN}}}$, the effects of baryon stopping at midrapidity are more important than at higher $\sqrt{s_{{\rm NN}}}$, and therefore the net-proton $C_{n}$ has dominant contributions from protons. The small mean values for antiprotons at lower $\sqrt{s_{{\rm NN}}}$ are due to their low production rate. At higher $\sqrt{s_{{\rm NN}}}$, the pair-production process dominates the production of protons and antiprotons at midrapidity. The $\bar{p}/p$ ratios for 0-5% central Au+Au collisions at $\sqrt{s_{{\rm NN}}}$ = 200 GeV and 7.7 GeV are 0.769 and 0.007, respectively Abelev _et al._ (2009); Adamczyk _et al._ (2017). Large values of $C_{3}$ and $C_{4}$ also indicate that the net-proton, proton and antiproton distributions are non-Gaussian. To facilitate plotting, the net-proton and proton $C_{4}$ from the 0-5% and 5-10% central Au+Au collisions at $\sqrt{s_{{\rm NN}}}$ = 7.7 GeV are scaled down by a factor of 2.

Figure 13 shows the $\langle N_{\mathrm{part}}\rangle$ dependence of the cumulant ratios $C_{2}$/$C_{1}$, $C_{3}$/$C_{2}$ and $C_{4}$/$C_{2}$ for net-proton, proton and antiproton distributions measured in Au+Au collisions at $\sqrt{s_{{\rm NN}}}$ = 7.7 – 200 GeV. In terms of the moments of the distributions, they correspond to $\sigma^{2}/M$ ($C_{2}$/$C_{1}$), $S\sigma$ ($C_{3}$/$C_{2}$) and $\kappa\sigma^{2}$ ($C_{4}$/$C_{2}$). The volume effects cancel to first order in these cumulant ratios. It is found that both the proton and antiproton cumulant ratios $C_{2}$/$C_{1}$ and $C_{3}$/$C_{2}$ show weak variations with $\langle N_{\mathrm{part}}\rangle$.
Based on the HRG model in the Boltzmann approximation, the baryon-number cumulant ratios can be expressed analytically as $C^{B}_{1}/C^{B}_{2}$ = $C^{B}_{3}/C^{B}_{2}$ = $\mathrm{tanh}(\mu_{B}/T)$ and $C^{B}_{4}/C^{B}_{2}$ = 1, where $\mu_{B}$ and $T$ are the baryon chemical potential and temperature of the system, respectively. The values of net-proton $C_{2}$/$C_{1}$ show a monotonic decrease with increasing $\langle N_{\mathrm{part}}\rangle$, while the values of $C_{3}$/$C_{2}$ show a slight increase with $\langle N_{\mathrm{part}}\rangle$. For a fixed centrality, both net-proton $C_{2}$/$C_{1}$ and $C_{3}$/$C_{2}$ show a strong energy dependence, which can be understood as $C_{3}/C_{2}\propto\mathrm{tanh}(\mu_{B}/T)$ and $C_{2}/C_{1}\propto 1/\mathrm{tanh}(\mu_{B}/T)$. At high $\sqrt{s_{{\rm NN}}}$, the net-proton $C_{3}$/$C_{2}\propto\mathrm{tanh}(\mu_{B}/T)\approx\mu_{B}/T\to 0$ and $C_{2}$/$C_{1}\propto 1/\mathrm{tanh}(\mu_{B}/T)\approx T/\mu_{B}>1$. Since $\mu_{B}/T\gg 1$ at the lower energies, the values of net-proton $C_{2}/C_{1}$ and $C_{3}$/$C_{2}$ approach unity there. Due to the connection between the higher-order net-proton cumulant ratios and the chemical freeze-out $\mu_{B}$ and $T$, these cumulant ratios have been extensively applied to probe the chemical freeze-out conditions and the thermal nature of the medium created in heavy-ion collisions Bazavov _et al._ (2012); Borsanyi _et al._ (2013); Gupta _et al._ (2020). Finally, the net-proton and proton $C_{4}$/$C_{2}$ ratios have a weak $\langle N_{\mathrm{part}}\rangle$ dependence for energies above $\sqrt{s_{{\rm NN}}}$ = 39 GeV. For energies below $\sqrt{s_{{\rm NN}}}$ = 39 GeV, the net-proton and proton $C_{4}$/$C_{2}$ generally show a decreasing trend with increasing $\langle N_{\mathrm{part}}\rangle$, except that, within current uncertainties, weak centrality dependences of $C_{4}/C_{2}$ are observed in Au+Au collisions at $\sqrt{s_{{\rm NN}}}$ = 7.7 and 11.5 GeV.
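The HRG Boltzmann relations quoted above follow from the fact that the net-baryon number is then Skellam-distributed: its odd cumulants equal $a-b$ and its even cumulants equal $a+b$, with baryon and antibaryon means $a\propto e^{\mu_{B}/T}$ and $b\propto e^{-\mu_{B}/T}$. A short check with illustrative (not fitted) values of $T$, $\mu_{B}$ and the fugacity prefactor $z$:

```python
import numpy as np

# HRG in the Boltzmann limit: net-baryon number is Skellam-distributed,
# with baryon/antibaryon means a = z*exp(mu_B/T) and b = z*exp(-mu_B/T).
T, mu_B, z = 0.160, 0.20, 5.0    # GeV; illustrative values, not a fit
a = z * np.exp(mu_B / T)
b = z * np.exp(-mu_B / T)

# Skellam cumulants: odd orders = a - b, even orders = a + b
C1, C2, C3, C4 = a - b, a + b, a - b, a + b

r12 = C1 / C2    # = tanh(mu_B/T)
r32 = C3 / C2    # = tanh(mu_B/T)
r42 = C4 / C2    # = 1, independent of mu_B and T
```

As $\mu_{B}/T\to 0$ (high energy) the ratio $C_{1}/C_{2}$ vanishes, and as $\mu_{B}/T\gg 1$ (low energy) it approaches unity, mirroring the trends discussed in the text.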
Figure 14 shows the variation of the normalized correlation functions $\kappa_{n}/\kappa_{1}$ ($n>1$) with $\langle N_{\rm part}\rangle$ for protons and antiprotons in Au+Au collisions at $\sqrt{s_{{\rm NN}}}$ = 7.7 – 200 GeV. The values of $\kappa_{1}$ are equal to the mean values ($C_{1}$) for protons and antiprotons, and increase linearly with $\langle N_{\mathrm{part}}\rangle$, as shown in Fig. 12. The normalized two-particle correlation functions, $\kappa_{2}/\kappa_{1}$, for protons and antiprotons are found to be negative and increase in magnitude with increasing $\langle N_{\mathrm{part}}\rangle$. The values of proton and antiproton $\kappa_{2}/\kappa_{1}$ become comparable at $\sqrt{s_{{\rm NN}}}$ = 200 GeV but exhibit larger discrepancies at lower energies. This can be understood as the interplay between baryon stopping and pair production of protons and antiprotons as a function of $\sqrt{s_{{\rm NN}}}$. The centrality dependence of the normalized three- and four-particle correlation functions $\kappa_{3}/\kappa_{1}$ and $\kappa_{4}/\kappa_{1}$ of protons and antiprotons does not show significant deviations from zero within uncertainties for all centralities and energies. The significances of the proton $\kappa_{4}/\kappa_{1}$ deviating from zero are 1.04$\sigma$, 0.05$\sigma$, 1.27$\sigma$, 0.90$\sigma$, 0.95$\sigma$, 0.40$\sigma$, 2.91$\sigma$, 1.43$\sigma$ and 0.11$\sigma$ in the 0-5% central Au+Au collisions at $\sqrt{s_{{\rm NN}}}$ = 7.7, 11.5, 14.5, 19.6, 27, 39, 54.4, 62.4 and 200 GeV, respectively. Here $\sigma$ is defined as the statistical and systematic uncertainties added in quadrature.

Figure 15: (Color online) Rapidity acceptance dependence of cumulants of proton, antiproton and net-proton multiplicity distributions in 0-5% central Au+Au collisions at $\sqrt{s_{{\rm NN}}}$ = 7.7, 11.5, 14.5, 19.6, 27, 39, 54.4, 62.4 and 200 GeV. The bars and caps represent statistical and systematic uncertainties, respectively.
For clarity, the X-axis values for protons are shifted and the values of proton, antiproton and net-proton $C_{4}$ at $\sqrt{s_{{\rm NN}}}$ = 7.7 GeV are scaled down by a factor of 2.

As shown in Eqs. (14 – 16), the proton and antiproton cumulant ratios $C_{2}/C_{1}$, $C_{3}/C_{2}$ and $C_{4}/C_{2}$ can be expressed in terms of the corresponding normalized correlation functions $\kappa_{n}/\kappa_{1}$. Therefore, the results shown in Fig. 14 provide important information on how the different orders of multiparticle correlation functions of protons and antiprotons contribute to the cumulant ratios.

Figure 16: (Color online) Rapidity acceptance dependence of normalized correlation functions up to fourth order ($\kappa_{n}/\kappa_{1}$, $n$ = 2, 3, 4) for proton and antiproton multiplicity distributions in 0-5% central Au+Au collisions at $\sqrt{s_{{\rm NN}}}$ = 7.7, 11.5, 14.5, 19.6, 27, 39, 54.4, 62.4 and 200 GeV. The X-axis rapidity cut $y_{max}$ is applied as $|y|<y_{max}$. The bars and caps represent statistical and systematic uncertainties, respectively. For clarity, the X-axis values for protons are shifted and the values of proton and antiproton $\kappa_{4}/\kappa_{1}$ at $\sqrt{s_{{\rm NN}}}$ = 7.7 GeV are scaled down by a factor of 2.

Figure 17: (Color online) Rapidity-acceptance dependence of cumulant ratios of proton, antiproton and net-proton multiplicity distributions in 0-5% central Au+Au collisions at $\sqrt{s_{{\rm NN}}}$ = 7.7, 11.5, 14.5, 19.6, 27, 39, 54.4, 62.4 and 200 GeV. The bars and caps represent statistical and systematic uncertainties, respectively. For clarity, the X-axis values for net-protons and protons are shifted.

### III.2 Acceptance dependence

In this subsection, we discuss the acceptance dependence of the proton, antiproton and net-proton cumulants ($C_{n}$) and cumulant ratios in 0-5% central Au+Au collisions at $\sqrt{s_{{\rm NN}}}$ = 7.7 – 200 GeV. It was pointed out in Refs.
Ling and Stephanov (2016); Bzdak _et al._ (2017a); Bzdak and Koch (2017); Brewer _et al._ (2018) that when the rapidity acceptance ($\Delta y$) is much smaller than the typical correlation length ($\xi$) of the system ($\Delta y\ll\xi$), the cumulants ($C_{n}$) and correlation functions ($\kappa_{n}$) should scale with the $n$-th power of the accepted mean particle multiplicity, $C_{n},\kappa_{n}\propto(\Delta N)^{n}\propto(\Delta y)^{n}$. Meanwhile, in the regime where the rapidity acceptance becomes much larger than $\xi$ ($\Delta y\gg\xi$), the $C_{n}$ and $\kappa_{n}$ scale linearly with the mean multiplicity, or $\Delta y$. Thus, the rapidity acceptance dependence of the higher-order cumulants and correlation functions of proton, antiproton and net-proton distributions is an important observable in the search for a signature of the QCD critical point in heavy-ion collisions. On the other hand, this acceptance dependence of $C_{n}$ and $\kappa_{n}$ could be affected by non-equilibrium effects Mukherjee _et al._ (2016); Wu _et al._ (2019) and by smearing due to diffusion and hadronic re-scattering Ohnishi _et al._ (2016); Sakaida _et al._ (2017); Nahrgang _et al._ (2019); Asakawa _et al._ (2020) in the dynamical expansion of the created fireball.

#### III.2.1 Rapidity dependence

Figure 15 shows the rapidity ($-y_{max}<y<y_{max}$, $\Delta y=2y_{max}$) dependence of the $C_{n}$ for proton, antiproton and net-proton distributions in 0-5% central Au+Au collisions at $\sqrt{s_{{\rm NN}}}$ = 7.7 – 200 GeV. The measurements are made in the $p_{\mathrm{T}}$ range of 0.4 to 2.0 GeV/$c$. The rapidity acceptance is cumulatively increased, and the $C_{n}$ values for proton, antiproton and net-proton increase with increasing rapidity acceptance. For $\sqrt{s_{{\rm NN}}}$ $<$ 27 GeV, the proton and net-proton $C_{n}$ have similar values, an inevitable consequence of the small production rate of antiprotons at lower energies.
Figure 16 shows the variation of the normalized correlation functions $\kappa_{n}/\kappa_{1}$ with rapidity acceptance for protons and antiprotons in 0-5% central Au+Au collisions at $\sqrt{s_{{\rm NN}}}$ = 7.7 – 200 GeV. The $\kappa_{2}/\kappa_{1}$ values for protons and antiprotons are negative and monotonically increase in magnitude as the rapidity acceptance is enlarged up to $y_{max}$=0.5 ($\Delta y$ = 1). For the antiproton, the values of $\kappa_{2}/\kappa_{1}$ show stronger deviations from zero at higher $\sqrt{s_{{\rm NN}}}$. As discussed for Fig. 14, the negative values of the two-particle correlation functions ($\kappa_{2}$) of protons and antiprotons are consistent with the expected effect of baryon number conservation. Within current uncertainties, the acceptance dependence of the $\kappa_{3}/\kappa_{1}$ and $\kappa_{4}/\kappa_{1}$ of protons and antiprotons in Au+Au collisions at the different $\sqrt{s_{{\rm NN}}}$ is not significant.

Figure 18: (Color online) $p_{\mathrm{T}}$-acceptance dependence of cumulants of proton, antiproton and net-proton multiplicity distributions for 0-5% central Au+Au collisions at $\sqrt{s_{{\rm NN}}}$ = 7.7, 11.5, 14.5, 19.6, 27, 39, 54.4, 62.4 and 200 GeV. The bars and caps represent statistical and systematic uncertainties, respectively. For clarity, the X-axis values for net-protons are shifted and the values of proton, antiproton and net-proton $C_{4}$ at $\sqrt{s_{{\rm NN}}}$ = 7.7 GeV are scaled down by a factor of 2.

Figure 19: (Color online) The $p_{\mathrm{T}}$-acceptance dependence of the normalized correlation functions up to fourth order ($\kappa_{n}/\kappa_{1}$, $n$ = 2, 3, 4) for proton and antiproton multiplicity distributions in 0-5% central Au+Au collisions at $\sqrt{s_{{\rm NN}}}$ = 7.7, 11.5, 14.5, 19.6, 27, 39, 54.4, 62.4 and 200 GeV. The bars and caps represent statistical and systematic uncertainties, respectively.
For clarity, the X-axis values for protons are shifted and the values of proton and antiproton $\kappa_{4}/\kappa_{1}$ at $\sqrt{s_{{\rm NN}}}$ = 7.7 GeV are scaled down by a factor of 2.

Figure 20: (Color online) $p_{\mathrm{T}}$-acceptance dependence of cumulant ratios of proton, antiproton and net-proton multiplicity distributions for 0-5% central Au+Au collisions at $\sqrt{s_{{\rm NN}}}$ = 7.7, 11.5, 14.5, 19.6, 27, 39, 54.4, 62.4 and 200 GeV. The bars and caps represent statistical and systematic uncertainties, respectively. For clarity, the X-axis values for net protons are shifted.

Figure 17 shows the rapidity acceptance dependence of the cumulant ratios $C_{2}$/$C_{1}$, $C_{3}$/$C_{2}$ and $C_{4}$/$C_{2}$ for proton, antiproton and net-proton in 0-5% central Au+Au collisions at $\sqrt{s_{{\rm NN}}}$ = 7.7 – 200 GeV. Based on Eqs. (14) to (16), the rapidity acceptance dependence of the cumulant ratios of protons and antiprotons can be understood as the interplay between the different orders of normalized correlation functions ($\kappa_{n}/\kappa_{1}$). The negative values of the two-particle correlation functions ($\kappa_{2}$) for protons and antiprotons lead to a deviation of the corresponding $C_{2}$/$C_{1}$ and $C_{3}$/$C_{2}$ below unity. Due to the low production rate of antiprotons at low energies, the values of $C_{2}$/$C_{1}$ and $C_{3}$/$C_{2}$ for the net-proton distributions approach the corresponding values for protons as the beam energy decreases. The rapidity acceptance dependences of the $C_{2}$/$C_{1}$, $C_{3}$/$C_{2}$ and $C_{4}$/$C_{2}$ values for protons and antiprotons are comparable at $\sqrt{s_{{\rm NN}}}$ = 200 GeV. However, these ratios for protons and antiprotons begin to deviate from each other at lower beam energies. This is mainly due to baryon stopping and the larger fraction of transported protons compared with proton-antiproton pair production at midrapidity.
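The conversion between measured cumulants and correlation functions used in this discussion can be sketched as follows. The relations below are the standard ones between ordinary cumulants and factorial cumulants (the paper's Eqs. (14)–(16), not reproduced in this section, build on them); as a sanity check, a Poisson distribution, for which $C_{n}=\lambda$ at every order, yields vanishing multi-particle correlations:

```python
import numpy as np

def correlation_functions(C):
    """Factorial cumulants (correlation functions) kappa_1..kappa_4 from the
    ordinary cumulants C_1..C_4 (standard Stirling-number relations)."""
    C1, C2, C3, C4 = C
    k1 = C1
    k2 = C2 - C1
    k3 = C3 - 3*C2 + 2*C1
    k4 = C4 - 6*C3 + 11*C2 - 6*C1
    return np.array([k1, k2, k3, k4])

# Poisson check: C_n = lam at every order implies uncorrelated production,
# i.e. all multi-particle correlation functions vanish.
lam = 4.2
kappa = correlation_functions(np.array([lam] * 4))
normalized = kappa[1:] / kappa[0]   # kappa_n / kappa_1 for n = 2, 3, 4
```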
The $C_{4}$/$C_{2}$ values for proton, antiproton and net-proton distributions are consistent within uncertainties for $\sqrt{s_{{\rm NN}}}$ = 39, 54.4, 62.4 and 200 GeV. Significant deviations from unity are observed for proton and net-proton $C_{4}$/$C_{2}$ at $\sqrt{s_{{\rm NN}}}$ = 19.6 and 27 GeV, and the deviation decreases with decreasing $\Delta y$ acceptance, where the effects of baryon number conservation play an important role. For energies below 19.6 GeV, the rapidity acceptance dependence of $C_{4}$/$C_{2}$ for protons, antiprotons and net-protons is not significant within uncertainties.

#### III.2.2 Transverse momentum dependence

Figure 18 shows the $p_{\mathrm{T}}$ acceptance dependence of the $C_{n}$ of proton, antiproton and net-proton distributions at midrapidity ($|y|<$ 0.5) for 0-5% central Au+Au collisions at $\sqrt{s_{{\rm NN}}}$ = 7.7 – 200 GeV. We fix the lower $p_{\mathrm{T}}$ cut at 0.4 GeV/$c$, and the $p_{\mathrm{T}}$ acceptance is then increased by varying the upper limit in steps between 1 and 2 GeV/$c$. The average efficiency values used in the efficiency correction for the various $p_{T}$ acceptances are calculated based on Eq. (18). By extending the upper $p_{\mathrm{T}}$ coverage from 1 GeV/$c$ to 2 GeV/$c$, the mean numbers of protons increase by about 50% and 80% at $\sqrt{s_{{\rm NN}}}$ = 7.7 and 200 GeV, respectively. It is found that the $C_{n}$ values for protons, antiprotons and net protons increase with increasing $p_{\mathrm{T}}$ acceptance, except for a weak $p_{\mathrm{T}}$ acceptance dependence of $C_{4}$ observed at energies below 39 GeV. Figure 19 shows the variation of the normalized correlation functions $\kappa_{n}/\kappa_{1}$ with $p_{\mathrm{T}}$ acceptance for protons and antiprotons at midrapidity ($|y|<$ 0.5) in 0-5% central Au+Au collisions at $\sqrt{s_{{\rm NN}}}$ = 7.7 – 200 GeV.
The $\kappa_{2}/\kappa_{1}$ values for protons and antiprotons are found to be negative and to decrease with increasing $p_{\mathrm{T}}$ acceptance at higher $\sqrt{s_{{\rm NN}}}$. The $\kappa_{2}/\kappa_{1}$ values for antiprotons approach zero as the beam energy decreases, due to the small production rate of antiprotons at low energies. The negative values of $\kappa_{2}/\kappa_{1}$ for protons observed at low energies are dominated by baryon stopping. Figure 20 shows the $p_{\mathrm{T}}$ acceptance dependence of $C_{2}$/$C_{1}$, $C_{3}$/$C_{2}$ and $C_{4}$/$C_{2}$ for proton, antiproton and net-proton distributions in 0-5% central Au+Au collisions at $\sqrt{s_{{\rm NN}}}$ = 7.7 – 200 GeV. In general, most of the ratios show a weak dependence on $p_{T}$ acceptance for all of the $\sqrt{s_{{\rm NN}}}$ studied. The $C_{4}$/$C_{2}$ ratios of proton and net-proton distributions are similar for all $\sqrt{s_{{\rm NN}}}$ below 27 GeV. The $C_{3}$/$C_{2}$ ratios for protons and antiprotons are similar at higher beam energies; however, they differ from each other at the lower $\sqrt{s_{{\rm NN}}}$. From the above differential measurements, it is found that baryon number conservation strongly influences the cumulants and correlation functions in heavy-ion collisions, especially at low energies. It could be the main reason for the negative two-particle correlation functions for protons and antiprotons He and Luo (2017).

Figure 21: (Color online) Left panel: Collision energy dependence of $C^{\rm B}_{\mathrm{2}}$/$C^{\rm B}_{\mathrm{1}}$, $C^{\rm B}_{\mathrm{3}}$/$C^{\rm B}_{\mathrm{2}}$ and $C^{\rm B}_{\mathrm{4}}$/$C^{\rm B}_{\mathrm{2}}$ for various $p_{\mathrm{T}}$ acceptances from the hadron resonance gas model. Right panel: The variation of net-proton and net-baryon $C_{2}/C_{1}$, $C_{3}/C_{2}$ and $C_{4}/C_{2}$ within the experimental acceptance Garg _et al._ (2013b).
Note: this simulation is done within a pseudorapidity window in order to compare baryons of different masses.

### III.3 Cumulants from models

Although our results can be compared to several models Li _et al._ (2018); Lin _et al._ (2017); Almasi _et al._ (2017); Yang _et al._ (2017); Zhou _et al._ (2017); Zhao _et al._ (2017); Xu _et al._ (2016); Vovchenko _et al._ (2018); Albright _et al._ (2015); Fukushima (2015); Netrakanti _et al._ (2016); Morita _et al._ (2015); Samanta and Mohanty (2019), we have chosen two models that contain neither phase-transition nor critical-point physics. They incorporate contrasting physics processes that allow us to understand the following: (a) the effect of measuring net protons instead of net baryons Kitazawa and Asakawa (2012); He _et al._ (2016), (b) the role of resonance decays in net-proton measurements Nahrgang _et al._ (2015); Mishra _et al._ (2016); Bluhm _et al._ (2017); Zhang _et al._ (2020), (c) the effect of the finite $p_{\mathrm{T}}$ acceptance of the measurements Karsch _et al._ (2016); He and Luo (2017), and (d) the effect of net-baryon number conservation Bzdak _et al._ (2013); He _et al._ (2016); Braun-Munzinger _et al._ (2019). Models without a critical point also provide an appropriate baseline for comparison to the data.

#### III.3.1 Hadron resonance gas model

The hadron resonance gas model includes all the relevant degrees of freedom of the hadronic matter and implicitly takes into account the interactions necessary for resonance formation Garg _et al._ (2013b); Karsch and Redlich (2011). Hadrons and resonances with masses up to 3 GeV/$c^{2}$ are included.
In the grand canonical ensemble, the logarithm of the partition function ($Z$) in the HRG model is given as:

$\ln Z(T,V,\mu)=\sum_{B}\ln Z_{i}(T,V,\mu_{i})+\sum_{M}\ln Z_{i}(T,V,\mu_{i})\ ,$ (25)

where:

$\ln Z_{i}(T,V,\mu_{i})=\pm\frac{Vg_{i}}{2\pi^{2}}\int d^{3}{p}\,\ln\big\{1\pm\exp[(\mu_{i}-E)/T]\big\},$ (26)

$T$ is the temperature, $V$ is the volume of the system, $\mu_{i}$ is the chemical potential, $E$ is the energy, and $g_{i}$ is the degeneracy factor of the $i$-th particle. The total chemical potential is $\mu_{i}$ = $B_{i}\mu_{B}$ + $Q_{i}\mu_{Q}$ + $S_{i}\mu_{S}$, where $B_{i}$, $Q_{i}$ and $S_{i}$ are the baryon, electric charge and strangeness numbers of the $i$-th particle, with corresponding chemical potentials $\mu_{B}$, $\mu_{Q}$ and $\mu_{S}$, respectively. The $+$ and $-$ signs in Eq. (26) are for baryons ($B$) and mesons ($M$), respectively. The $n^{th}$-order generalized susceptibility for baryons can be expressed as Karsch and Redlich (2011):

$\chi_{x,\mathrm{baryon}}^{(n)}=\frac{x^{n}}{VT^{3}}\int{d^{3}p}\sum_{k=0}^{\infty}{(-1)^{k}}(k+1)^{n}\exp\bigg\{\frac{-(k+1)E}{T}\bigg\}\exp\bigg\{\frac{(k+1)\mu}{T}\bigg\},$ (27)

and for mesons:

$\chi_{x,\mathrm{meson}}^{(n)}=\frac{x^{n}}{VT^{3}}\int{d^{3}p}\sum_{k=0}^{\infty}(k+1)^{n}\exp\bigg\{\frac{-(k+1)E}{T}\bigg\}\exp\bigg\{\frac{(k+1)\mu}{T}\bigg\}.$ (28)

The factor $x$ represents either $B$, $Q$ or $S$ of the $i$-th particle, depending on whether the computed $\chi_{x}$ represents the baryon, electric charge or strangeness susceptibility.

Figure 22: (Color online) Left panel: UrQMD results on the $p_{\mathrm{T}}$ acceptance dependence of the $C_{2}$/$C_{1}$, $C_{3}$/$C_{2}$ and $C_{4}$/$C_{2}$ ratios as a function of $\sqrt{s_{{\rm NN}}}$ for net baryons.
Right panel: The same ratios within the experimental acceptance for net protons and net baryons. Note: as in Fig. 21, this simulation is done within a pseudorapidity window in order to compare baryons of different masses.

For a particle of mass $m$ with $p_{T}$, $\eta$ and $\phi$, the volume element ($d^{3}p$) and energy ($E$) can be written as $d^{3}p=p_{T}m_{T}\cosh(\eta)\,dp_{T}\,d\eta\,d\phi$ and $E=m_{T}\cosh\eta$, where $m_{T}=\sqrt{p_{T}^{2}+m^{2}}$. The experimental acceptance can be incorporated through the appropriate integration ranges in $\eta$, $p_{T}$ and $\phi$, and through the charge states via the values of $|x|$. The total generalized susceptibility is then the sum of the contributions from baryons and mesons, $\chi^{(n)}_{x}=\sum\chi^{(n)}_{x,\mathrm{baryon}}+\sum\chi^{(n)}_{x,\mathrm{meson}}$.

Figure 23: (Color online) Upper panel: (1) $\sigma^{2}/M$, (2) $S\sigma$ and (3) $\kappa\sigma^{2}$ of net-proton distributions for 0-5% central Au+Au collisions from $\sqrt{s_{\mathrm{NN}}}$ = 7.7 - 62.4 GeV. The error bars on the data points are the statistical and systematic uncertainties added in quadrature. The black solid lines are polynomial fit functions which describe the cumulant ratios well. The legends also specify the chi-squared per degree of freedom for the respective fits. The black dashed lines are the Poisson baselines. Lower panel: Derivative of the fitted polynomial as a function of collision energy. The bar and the gold band on the derivatives represent the statistical and systematic uncertainties, respectively.

Figure 21 shows the variation of $C^{\rm B}_{2}$/$C^{\rm B}_{1}$, $C^{\rm B}_{3}$/$C^{\rm B}_{2}$ and $C^{\rm B}_{4}$/$C^{\rm B}_{2}$ as functions of $\sqrt{s_{{\rm NN}}}$ from the hadron resonance gas model Garg _et al._ (2013b). The results are shown for different $p_{\mathrm{T}}$ acceptances.
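The Boltzmann ($k=0$) term of Eq. (27) can be integrated numerically for a single baryon species; combining the particle ($x=+1$) and antiparticle ($x=-1$) contributions then reproduces the $\tanh(\mu_{B}/T)$ structure of the net-baryon cumulant ratios discussed earlier. The mass, temperature and $\mu_{B}$ below are illustrative choices, and prefactors common to all orders are dropped since they cancel in the ratios:

```python
import numpy as np

# Boltzmann (k = 0) term of Eq. (27) for one baryon species of mass m:
# chi_n ∝ x^n * ∫ p^2 dp exp(-E/T) * exp(x*mu/T), with E = sqrt(p^2 + m^2).
# Order-independent prefactors are dropped: they cancel in the ratios.
def chi(n, mu, x, T=0.160, m=0.938):
    p = np.linspace(0.0, 5.0, 20_001)                 # GeV/c momentum grid
    E = np.sqrt(p**2 + m**2)
    integral = np.sum(p**2 * np.exp(-E / T)) * (p[1] - p[0])
    return x**n * integral * np.exp(x * mu / T)

mu_B = 0.20  # GeV, illustrative
# Net-baryon susceptibility: baryon (x = +1) plus antibaryon (x = -1)
chi_net = lambda n: chi(n, mu_B, +1) + chi(n, mu_B, -1)

r12 = chi_net(1) / chi_net(2)   # tanh(mu_B/T) in this limit
r42 = chi_net(4) / chi_net(2)   # unity, independent of mu_B and T
```

Note that the integral itself drops out of the ratios: only the fugacity factors $e^{\pm\mu_{B}/T}$ and the sign $x^{n}$ survive, which is why the HRG Boltzmann baseline is so simple.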
The differences due to acceptance are very small, and the maximum effect is at the level of 5% at $\sqrt{s_{{\rm NN}}}$ = 7.7 GeV for $C^{\rm B}_{\mathrm{4}}$/$C^{\rm B}_{\mathrm{2}}$. The HRG calculations also show that the net-proton results with resonance decays lie below those for net baryons and above those for net protons without the decay effect. Here also the effect is at the level of 5% at the lowest $\sqrt{s_{{\rm NN}}}$ and smaller at higher energies in the case of $C^{\rm B}_{\mathrm{4}}$/$C^{\rm B}_{\mathrm{2}}$. The corresponding effect on $C^{\rm B}_{\mathrm{3}}$/$C^{\rm B}_{\mathrm{2}}$ and $C^{\rm B}_{\mathrm{2}}$/$C^{\rm B}_{\mathrm{1}}$ is larger at the higher energies: of the order of 17% between net protons without resonance decays and net baryons, and 10% between net protons with resonance decays and net baryons.

#### III.3.2 UrQMD Model

The UrQMD (Ultra relativistic Quantum Molecular Dynamics) model Bass _et al._ (1998); Bleicher _et al._ (1999) is a microscopic transport model in which the phase-space description of the reactions is considered. It treats the propagation of all hadrons as classical trajectories in combination with stochastic binary scattering, color string formation and resonance decays. It incorporates baryon-baryon, meson-baryon and meson-meson interactions. The collision term includes more than 50 baryon species and 45 meson species. The model preserves the conservation of electric charge, baryon number, and strangeness number as expected for QCD matter. It also models the phenomenon of baryon stopping, an essential feature encountered in heavy-ion collisions at lower beam energies. In this model, the space-time evolution of the fireball is studied in terms of excitation and fragmentation of color strings and formation and decay of hadronic resonances.
Since the model does not include the physics of the quark-hadron phase transition or the QCD critical point, the comparison of the data to the results obtained from the UrQMD model will shed light on the contributions from the hadronic phase and its associated processes, baryon number conservation, and the effect of measuring only net protons relative to net baryons. In Fig. 22, the panels on the left present the energy dependence of the $C_{n}$ ratios of net-baryon distributions for various $p_{\mathrm{T}}$ acceptances. It is observed that the larger the $p_{T}$ acceptance, the smaller the cumulant ratios. Furthermore, for the same $p_{T}$ acceptance, the values of the net-baryon $C_{4}/C_{2}$ and $C_{2}/C_{1}$ ratios decrease with decreasing energies. The right panel of Fig. 22 also shows the comparison of the cumulant ratios for net-baryon and net-proton distributions within the experimental acceptance for various $\sqrt{s_{{\rm NN}}}$. The differences between results from different acceptances are larger for UrQMD compared to the HRG model. In UrQMD the difference between net baryons and net protons is larger at the lower beam energies for a fixed $p_{\mathrm{T}}$ and $y$ acceptance. The negative $C_{4}/C_{2}$ values of net-baryon distributions observed at low energies could be mainly due to the effect of baryon number conservation. The effects of resonance weak decay and hadronic re-scattering on proton and net-proton number fluctuations in heavy-ion collisions have also been investigated in Ref. Zhang _et al._ (2020) within the JAM model. It is important to point out that in both the HRG model and UrQMD transport model calculations, a suppression in $C_{4}/C_{2}$ at low collision energy is observed, as evident from the right plots of Fig. 21 and Fig. 22, respectively. In the case of the transport results, the suppression is attributed to the effect of baryon number conservation in strong interactions.
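The suppression from baryon number conservation can be illustrated with a toy model (an illustration, not a model used in the paper): if a fixed total baryon number $B$ is conserved event by event and each baryon independently falls into the acceptance with probability $\alpha$, the accepted count is binomial, and its exact cumulants show how $C_{4}/C_{2}$ drops below the Poisson value of unity (and even turns negative) as the accepted fraction grows.

```python
def binomial_cumulants(B, alpha):
    """Exact cumulants of a Binomial(B, alpha) accepted-baryon count."""
    q = 1.0 - alpha
    c1 = B * alpha
    c2 = B * alpha * q
    c3 = B * alpha * q * (1.0 - 2.0 * alpha)
    c4 = B * alpha * q * (1.0 - 6.0 * alpha * q)
    return c1, c2, c3, c4

# C4/C2 = 1 - 6*alpha*(1 - alpha): independent of B, and below 1 for any alpha > 0
for alpha in (0.1, 0.3, 0.5):
    c1, c2, c3, c4 = binomial_cumulants(100, alpha)
    print(alpha, c4 / c2)
```

The larger the accepted fraction of the conserved baryon number, the stronger the suppression, qualitatively echoing the negative net-baryon $C_{4}/C_{2}$ seen in UrQMD at low energies where a large fraction of all baryons falls in the acceptance.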
However, this interpretation does not apply to the HRG calculation: in the grand canonical ensemble (GCE), event-by-event conservation is absent, although the conservation laws are preserved on average. In addition to conservation laws, quantum statistics and the variation of the temperature and baryon chemical potential could play a role here.

#### III.3.3 Energy dependence

Figure 23 shows the collision-energy dependence of the cumulant ratios (1) $\sigma^{2}/M$, (2) $S\sigma$ and (3) $\kappa\sigma^{2}$ of net-proton distributions for 0-5% central Au+Au collisions from $\sqrt{s_{\mathrm{NN}}}$ = 7.7 - 62.4 GeV. As shown in Fig. 23, a polynomial of order four (five) describes the plotted collision-energy dependence of $\kappa\sigma^{2}$ ($S\sigma$) of net-proton distributions for central Au+Au collisions well, with $\chi^{2}$/ndf = 1.3 (0.72). The local derivative of the fitted polynomial function shown in the lower panel of Fig. 23 changes sign, demonstrating the non-monotonic variation of the measurements with respect to collision energy. The statistical and systematic uncertainties on the derivatives are obtained by randomly varying the data points at each energy within their statistical and systematic uncertainties. Figure 24: (Color online) Collision energy dependence of $C_{2}$/$C_{1}$, $C_{3}$/$C_{2}$ and $C_{4}$/$C_{2}$ for net-proton multiplicity distributions in 0-5% central Au+Au collisions. The experimental net-proton measurements are compared to corresponding values from the UrQMD and HRG models within the experimental acceptances. The bars and caps represent the statistical and systematic uncertainties of the experimental data, respectively. The widths of the bands reflect the statistical uncertainties of the model calculations. Table 7: The right-tail $p$ values of a chi-squared test between experimental data and various models (shown in Fig.
24) for the energy dependence of the net-proton cumulant ratios in 0-5% central Au+Au collisions at two ranges of collision energy: $\sqrt{s_{\rm{NN}}}=$ 7.7 – 27 GeV and 7.7 – 62.4 GeV (the latter shown in parentheses). These $p$ values denote the probability of obtaining discrepancies at least as large as the results actually observed Wasserstein and Lazar (2016). The right-tail $p$ values are calculated via $p=\mathrm{Pr}(\chi^{2}_{n}>\chi^{2})$, where $\chi^{2}_{n}$ obeys the chi-square distribution with $n$ independent energy data points and the $\chi^{2}$ values are obtained in the chi-squared test.

Cumulant Ratios | HRG GCE | HRG CE | HRG GCE+E.V. (R=0.5 fm) | UrQMD
---|---|---|---|---
$C_{2}/C_{1}$ | <0.001 (<0.001) | <0.001 (<0.001) | <0.001 (<0.001) | <0.001 (<0.001)
$C_{3}/C_{2}$ | <0.001 (<0.001) | 0.0754 (<0.001) | <0.001 (<0.001) | <0.001 (<0.001)
$C_{4}/C_{2}$ | 0.00553 (0.00174) | 0.0450 (0.128) | 0.0145 (0.0107) | 0.0221 (0.0577)

The significance of the observed non-monotonic dependence of $\kappa\sigma^{2}$ ($S\sigma$) on collision energy, in the energy range $\sqrt{s_{\mathrm{NN}}}$ = 7.7 - 62.4 GeV, is obtained based on the fourth (fifth) order polynomial fitting procedure. This significance is evaluated by randomly varying the $\kappa\sigma^{2}$ and $S\sigma$ data points within their total Gaussian uncertainties (statistical and systematic uncertainties added in quadrature) at each corresponding energy. This procedure is repeated a million times for $\kappa\sigma^{2}$ and for $S\sigma$. Out of 1 million trials, there are 1143 cases for $\kappa\sigma^{2}$ and 158640 cases for $S\sigma$ where the signs of the derivative at all $\sqrt{s_{\mathrm{NN}}}$ are found to be the same.
Thus, among the 1 million trials performed, the probability that at least one derivative at a given $\sqrt{s_{\mathrm{NN}}}$ has a sign different from the derivatives at the remaining energies is 0.99886 (0.84136), which corresponds to a 3.1$\sigma$ (1.0$\sigma$) effect for $\kappa\sigma^{2}$ ($S\sigma$). In contrast, based on a third-order polynomial fitting procedure ($\chi^{2}$/ndf = 0.32), the cumulant ratio $\sigma^{2}/M$ exhibits a monotonic dependence on collision energy with a significance of 3.4$\sigma$. Thus we find that, as we go to higher orders, the energy dependence of the cumulant ratios changes from a monotonic to a non-monotonic variation with $\sqrt{s_{{\rm NN}}}$. This is consistent with the QCD-based model expectation that the higher the order of the moments, the more sensitive they are to physics processes such as a critical point Stephanov (2009, 2011). Figure 25: (Color online) Collision energy dependence of the scaled (anti)proton cumulants and correlation functions in 0-5% central Au+Au collisions at $\sqrt{s_{{\rm NN}}}$ = 7.7, 11.5, 14.5, 19.6, 27, 39, 54.4, 62.4 and 200 GeV. The error bars and bands represent the statistical and systematic uncertainties, respectively. The results from a UrQMD model calculation are also shown for comparison. Figure 24 shows the collision-energy dependence of the cumulant ratios of net-proton multiplicity distributions for 0-5% central Au+Au collisions. The comparison has been made between experimental measurements and the corresponding results from the HRG and UrQMD models. We observe that both models, which do not include phase transition effects, show monotonic variations of the cumulant ratios with beam energy. However, the experimental measurements of the net-proton $C_{4}$/$C_{2}$ ratio show a non-monotonic variation with $\sqrt{s_{{\rm NN}}}$.
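The conversion from the trial fractions quoted above to Gaussian significances can be reproduced with the standard normal quantile function; the following is a hypothetical check of those numbers (the paper's exact rounding convention is not specified, so small differences in the last digit are expected):

```python
from statistics import NormalDist

def sign_change_significance(n_monotonic, n_trials):
    """One-sided Gaussian significance that the energy dependence is
    non-monotonic: p is the fraction of trials in which the fitted
    polynomial's derivative kept the same sign at every energy."""
    p = n_monotonic / n_trials
    return NormalDist().inv_cdf(1.0 - p)

print(sign_change_significance(1143, 1_000_000))     # close to 3.1 sigma (kappa*sigma^2)
print(sign_change_significance(158_640, 1_000_000))  # close to 1.0 sigma (S*sigma)
```

The full procedure behind the counts (polynomial refits of randomly varied data points) is described in the text; only the final count-to-significance step is sketched here.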
On the other hand, the net-proton $C_{3}$/$C_{2}$ ($C_{2}$/$C_{1}$) in both models and data shows a smoothly decreasing (increasing) trend with increasing $\sqrt{s_{{\rm NN}}}$. Although both models show a smooth energy dependence, the third-order ratios in the middle panel are larger for UrQMD than for (GCE) HRG at collision energies above 14.5 GeV. At lower energies, a suppression relative to the results of GCE HRG is observed. On the other hand, the canonical ensemble (CE) HRG shows a consistent suppression in all three panels. In this approach, baryon number conservation is the main source of the suppression Fu (2017); Braun-Munzinger _et al._ (2020). It is interesting to point out that GCE models incorporating excluded volume effects (GCE E.V.) can also reproduce the suppression. The larger the repulsive volume, the stronger the suppression. Since the repulsive volume reflects the ‘baryon density’, the suppression observed in the GCE E.V. case is due to the local density. For details, see Refs. Fu (2013); Bhattacharyya _et al._ (2014); Samanta and Mohanty (2019). To quantify the level of agreement between the experimental measurements and the model calculations, the widely used $\chi^{2}$ test has been applied for two energy ranges ($\sqrt{s_{{\rm NN}}}$ = 7.7 – 27 and 7.7 – 62.4 GeV). The $\chi^{2}$ value is calculated as $\chi^{2}(R)=\sum_{\sqrt{s_{\mathrm{NN}}}}\frac{\left|{R_{\rm data}-R_{\rm model}}\right|^{2}}{\mathrm{error}^{2}}$, where $R$ denotes the cumulant ratios ($C_{2}/C_{1},~{}~{}C_{3}/C_{2},~{}~{}C_{4}/C_{2}$) and the ‘error’ represents the statistical and systematic uncertainties of the data and the statistical uncertainties of the model added in quadrature. In addition, the obtained $\chi^{2}$ value can be converted to the corresponding right-tail $p$-value, which is the probability of obtaining discrepancies at least as large as the results actually observed Wasserstein and Lazar (2016).
The resulting right-tail $p$-values listed in Table 7 are calculated via $p=\mathrm{Pr}(\chi^{2}_{n}>\chi^{2})$, where $\chi^{2}_{n}$ obeys the chi-square distribution with $n$ independent energy data points and the $\chi^{2}$ values are obtained in the chi-squared test. For the right-tail $p$-value test, $p<0.05$ is the commonly used criterion to reject the null hypothesis and claim a significant deviation between the data and model results. It is found that the $p$-values from the $\chi^{2}$ test are smaller than 0.05 for all of the different variants of the HRG and UrQMD models at $\sqrt{s_{{\rm NN}}}$ = 7.7 – 27 GeV, which means the deviations between data and model results are significant and cannot be explained by statistical fluctuations. However, for the range $\sqrt{s_{{\rm NN}}}$ = 7.7 – 62.4 GeV, the $p$-values of $C_{4}/C_{2}$ for the HRG CE and UrQMD model cases are 0.128 and 0.0577, respectively. Clearly, as far as these tests are concerned, all of the above-mentioned models, showing monotonic energy dependences, do not fit the data in the most relevant energy region, $\sqrt{s_{{\rm NN}}}$ $\leq$ 27 GeV. This result will be further tested with the high-precision data from the second phase of the RHIC beam energy scan program (BES-II). Based on Eq. (13), the cumulants can be expressed in terms of the sum of various-order multiparticle correlation functions. In order to understand the contributions to the cumulants, one can present the different orders of correlation functions separately. Figure 25 shows the energy dependence of the cumulants and correlation functions normalized by the mean numbers of protons and antiprotons in 0-5% central Au+Au collisions. By definition and as shown in Fig. 25, the values of $C_{2}/C_{1}-1$ are equal to $\kappa_{2}/\kappa_{1}$.
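The decomposition of cumulants into integrated correlation functions, commonly identified with factorial cumulants, can be written compactly using signed Stirling numbers of the first kind, $\kappa_{n}=\sum_{k}s_{1}(n,k)\,C_{k}$. A minimal sketch under that assumption (the coefficient table is the standard one, not specific to this paper):

```python
def factorial_cumulants(C):
    """Convert ordinary cumulants C = [C1, C2, C3, C4] of a particle
    count into integrated correlation functions (factorial cumulants),
    kappa_n = sum_k s1(n, k) * C_k, with signed Stirling numbers s1."""
    s1 = {  # signed Stirling numbers of the first kind, s1(n, k), k = 1..n
        1: [1],
        2: [-1, 1],
        3: [2, -3, 1],
        4: [-6, 11, -6, 1],
    }
    return [sum(a * c for a, c in zip(s1[n], C)) for n in (1, 2, 3, 4)]

# A Poisson distribution has all cumulants equal to the mean, so every
# correlation function beyond kappa_1 vanishes (up to rounding):
lam = 4.2
print(factorial_cumulants([lam, lam, lam, lam]))

# The identity noted in the text, C2/C1 - 1 == kappa_2/kappa_1:
C = [5.0, 6.5, 7.0, 9.0]  # illustrative cumulant values
k = factorial_cumulants(C)
print(C[1] / C[0] - 1.0, k[1] / k[0])  # both 0.3
```

Since $\kappa_{2}=C_{2}-C_{1}$, the identity $C_{2}/C_{1}-1=\kappa_{2}/\kappa_{1}$ follows immediately, and the vanishing of higher $\kappa_{n}$ for a Poisson baseline is why nonzero measured correlation functions signal genuine multiparticle correlations.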
It is observed that the normalized second- and third-order cumulants minus unity ($C_{2}/C_{1}-1$, $C_{3}/C_{1}-1$) are negative, with magnitudes that increase (decrease) for protons (antiprotons) with decreasing collision energy. From the right panels in Fig. 25, the third-order normalized correlation functions ($\kappa_{3}/\kappa_{1}$) of protons and antiprotons show a flat energy dependence and are consistent with zero within uncertainties. Therefore, the energy dependence of $C_{3}/C_{1}$ is dominated by the negative two-particle normalized correlation functions ($\kappa_{2}/\kappa_{1}$), which is mainly due to the effects of baryon number conservation. The normalized four-particle correlation functions ($\kappa_{4}/\kappa_{1}$) of antiprotons show a flat energy dependence and are consistent with zero within uncertainties. In panel 5) of Fig. 25, we observe a similar energy dependence trend for the normalized fourth-order cumulants ($C_{4}/C_{1}$) of protons as for the net-proton $C_{4}/C_{2}$ in 0-5% central Au+Au collisions shown in Fig. 24. For $\sqrt{s_{{\rm NN}}}$ $\geq$ 19.6 GeV, the values of proton $C_{4}/C_{1}$ are dominated by the negative two-particle correlation function ($\kappa_{2}$) of protons (see panel 2 in Fig. 25). For $\sqrt{s_{{\rm NN}}}$ $<$ 19.6 GeV, the four-particle correlation function ($\kappa_{4}$) of protons plays a role in determining the energy dependence of proton $C_{4}/C_{1}$, which cannot be understood solely from the suppression effects due to the negative values of $\kappa_{2}$ for protons. As discussed in Refs. Ling and Stephanov (2016); Bzdak _et al._ (2017b), the observed large values of the four-particle correlation function of protons ($\kappa_{4}$) could be attributed to the formation of proton clusters and related to the signature of a critical point or a first-order phase transition.
Therefore, it is necessary to perform precise measurements of the $\kappa_{4}/\kappa_{1}$ of protons below 19.6 GeV with the high-statistics data taken in the second phase of the beam energy scan at RHIC. In addition, we compare the experimental data in Fig. 25 with UrQMD model calculations. The energy dependence of the second- and third-order normalized cumulants and correlation functions can be qualitatively described by the UrQMD model. However, the non-monotonic energy dependence observed in the proton $C_{4}/C_{1}$ cannot be described by the UrQMD model. Furthermore, the three- and four-particle correlation functions ($\kappa_{3}$ and $\kappa_{4}$) for (anti)protons from UrQMD show a flat energy dependence and are consistent with zero. This indicates that the higher-order (anti)proton correlation functions $\kappa_{3}$ and $\kappa_{4}$ are not sensitive to the effect of baryon number conservation within the current acceptance, and can therefore serve as good probes of critical fluctuations in heavy-ion collisions He and Luo (2017); Zhang _et al._ (2020).

## IV Summary and Outlook

In summary, we report a systematic study of the cumulants of the net-proton, proton and antiproton multiplicity distributions from Au+Au collisions at $\sqrt{s_{\mathrm{NN}}}$ = 7.7 - 200 GeV. The data were collected with the STAR experiment in the first phase of the RHIC beam energy scan, acquired over the period 2010 - 2017. The energy, centrality and acceptance dependence of the correlation functions of protons and antiprotons are presented in this paper. Both cumulants and correlation functions up to fourth order at midrapidity ($|y|$$<$ 0.5) within 0.4 $<$ $p_{\mathrm{T}}$ $<$ 2.0 GeV/$c$ in Au+Au collisions are presented to search for the signatures of a critical point and/or a first-order phase transition over a broad region of baryon chemical potential. The protons and antiprotons are identified with greater than 97% purity using the TPC and TOF detectors of STAR.
The centrality selection is based on midrapidity pions and kaons only to avoid self-correlation effects. The maximum-allowed rapidity acceptance around midrapidity has been used for centrality determination to minimize the effect of centrality resolution. The variation of the average number of protons and antiprotons in a given centrality bin has been accounted for by applying a centrality bin-width correction, which also minimizes volume fluctuation effects. The cumulants are corrected for the proton and antiproton reconstruction efficiencies using a binomial response function. Study of the unfolding technique for efficiency correction of cumulants has shown that, even in the 0-5% central Au+Au collisions at $\sqrt{s_{{\rm NN}}}$ = 200 GeV, the case with the highest multiplicity, the results are consistent with the commonly-used binomial approach within current statistical uncertainties. The statistical errors on the cumulants are based on the delta theorem method and are shown to be consistent with those obtained by the bootstrap method. A detailed estimate of the systematic uncertainties is also presented. Results on cumulant ratios from different variants of the HRG and the UrQMD models are presented to understand the effects of experimental acceptance, resonance decay, baryon number conservation, and net-proton versus net-baryon analysis. The cumulant ratios show a centrality and energy dependence, which are neither reproduced by purely hadronic-transport-based UrQMD model calculations, nor by different variants of the hadron resonance gas model. Specifically, the net-proton $C_{4}$/$C_{2}$ ratio for 0-5% central Au+Au collisions shows a non-monotonic variation with $\sqrt{s_{{\rm NN}}}$, with a significance of 3.1$\sigma$. This is consistent with the expectations of critical fluctuations in a QCD-inspired model. A $\chi^{2}$ test has been applied to quantify the level of agreement between experimental data and model calculations. 
The resulting $p$-values suggest that the models fail to explain the 0-5% Au+Au collision data at $\sqrt{s_{{\rm NN}}}$ $\leq$ 27 GeV. The $y$ and $p_{\mathrm{T}}$ acceptance dependence of the cumulants and their ratios provide valuable data for understanding the range of the correlations and their relation to the acceptance of the detector Ling and Stephanov (2016); Brewer _et al._ (2018). Furthermore, the systematic analysis presented here can be used to constrain the freeze-out conditions in high-energy heavy-ion collisions using QCD-based approaches, and to understand the nature of thermalization in such collisions Bazavov _et al._ (2012); Borsanyi _et al._ (2013); Gupta _et al._ (2020). From the analysis of multiparticle correlation functions, one observes significant negative values for $\kappa_{2}$ of protons and antiprotons, which are mainly due to the effects of baryon number conservation in heavy-ion collisions. The values of $\kappa_{3}$ of protons and antiprotons are consistent with zero for all of the collision energies studied. Further, the energy dependence trend of proton $C_{4}$/$C_{1}$ below 19.6 GeV cannot be understood solely from the negative values of $\kappa_{2}$ for protons, and the four-particle correlation function of protons ($\kappa_{4}$) is found to play a role; this needs to be confirmed with the high-statistics data taken in the second phase of the beam energy scan at RHIC (BES-II), which began data-taking in 2018. Upgrades to the STAR detector system have significantly improved the quality of the measurements bes . The primary goal of BES-II is to make high-statistics measurements, with extended kinematic range in rapidity and transverse momentum, for the observables discussed in this paper. This extended kinematic range is brought about by the inner TPC (iTPC) upgrade, which extends the measurement coverage to $|\eta|<$ 1.5, lowers the $p_{\mathrm{T}}$ acceptance down to 100 MeV/$c$, and improves the $dE/dx$ resolution.
Particle identification capability will be extended to $-1.6<\eta<1.0$ with the addition of an endcap TOF (eTOF) detector. The collected event statistics to date, along with the goal for 2021, are listed in Table 8. Table 8: Total number of collected/expected events in BES phase II for various collision energies ($\sqrt{s_{\mathrm{NN}}}$) bes .

$\sqrt{s_{\mathrm{NN}}}$ (GeV) | Year | No. of events (million)
---|---|---
27 | 2018 | 500
19.6 | 2019 | 400
14.5 | 2019 | 300
11.5 | 2020 | 230
9.2 | 2020 | 160
7.7 | 2021 | 100

At the same time, STAR will take data in fixed-target mode to extend $\sqrt{s_{\mathrm{NN}}}$ down to 3 GeV. With these upgrades, and with the benefits of extended kinematic coverage and the use of sensitive observables, the RHIC BES Phase-II program will allow measurements of unprecedented precision for exploring the QCD phase structure within $200<\mu_{B}~{}(\rm{MeV})<720$.

## Acknowledgement

We thank F. Karsch, M. Kitazawa, S. Gupta, D. Mishra, K. Rajagopal, K. Redlich, M. Stephanov, and V. Koch for stimulating discussions related to this work. We thank the RHIC Operations Group and RCF at BNL, the NERSC Center at LBNL, and the Open Science Grid consortium for providing resources and support. This work was supported in part by the Office of Nuclear Physics within the U.S. DOE Office of Science, the U.S.
National Science Foundation, the Ministry of Education and Science of the Russian Federation, National Natural Science Foundation of China, Chinese Academy of Science, the Ministry of Science and Technology of China and the Chinese Ministry of Education, the Higher Education Sprout Project by Ministry of Education at NCKU, the National Research Foundation of Korea, Czech Science Foundation and Ministry of Education, Youth and Sports of the Czech Republic, Hungarian National Research, Development and Innovation Office, New National Excellency Programme of the Hungarian Ministry of Human Capacities, Department of Atomic Energy and Department of Science and Technology of the Government of India, the National Science Centre of Poland, the Ministry of Science, Education and Sports of the Republic of Croatia, RosAtom of Russia and German Bundesministerium für Bildung, Wissenschaft, Forschung and Technologie (BMBF), Helmholtz Association, Ministry of Education, Culture, Sports, Science, and Technology (MEXT) and Japan Society for the Promotion of Science (JSPS).

## Appendix A Efficiency Correction

In order to correct the $C_{n}$ for efficiency effects, one has to invoke a model assumption for the response of the detector. The detector response is assumed to follow a binomial probability distribution function.
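A defining property of the binomial response used throughout this appendix is that measured factorial moments are the true ones scaled by powers of the efficiency, $f_{i}=\varepsilon^{i}F_{i}$. This can be verified by exact enumeration for a fixed true proton count (the numbers below are illustrative, not from the analysis):

```python
from math import comb, factorial

def binomial_pmf(N, eps, n):
    """P(n | N, eps) for a binomial detector response."""
    return comb(N, n) * eps**n * (1.0 - eps)**(N - n)

def measured_factorial_moment(N, eps, i):
    """f_i = <n!/(n-i)!> for the measured count n ~ Binomial(N, eps)."""
    return sum(binomial_pmf(N, eps, n) * factorial(n) / factorial(n - i)
               for n in range(i, N + 1))

N, eps = 20, 0.7  # fixed true proton count and efficiency (illustrative)
for i in (1, 2, 3):
    f_i = measured_factorial_moment(N, eps, i)
    F_i = factorial(N) / factorial(N - i)  # true factorial moment for fixed N
    print(i, f_i, eps**i * F_i)            # the two columns agree
```

This scaling is what makes factorial moments the natural quantities for unfolding the efficiency: dividing each measured factorial moment by $\varepsilon^{i}$ recovers the true one, after which moments and cumulants can be reassembled.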
The probability distribution function of measured proton number $n_{p}$ and antiproton number $n_{\bar{p}}$ can be expressed as Bzdak and Koch (2012); Luo (2015): $\begin{split}p({n_{p}},{n_{\bar{p}}})&=\sum\limits_{{N_{p}}=n_{p}}^{\infty}{\sum\limits_{{N_{\bar{p}}}=n_{\bar{p}}}^{\infty}{P({N_{p}},{N_{\bar{p}}})\times\frac{{{N_{p}}!}}{{{n_{p}}!\left({{N_{p}}-{n_{p}}}\right)!}}{{({\varepsilon_{p}})}^{{n_{p}}}}{{(1-{\varepsilon_{p}})}^{{N_{p}}-{n_{p}}}}}}\\\ &\times\frac{{{N_{\bar{p}}}!}}{{{n_{\bar{p}}}!\left({{N_{\bar{p}}}-{n_{\bar{p}}}}\right)!}}{({\varepsilon_{\bar{p}}})^{{n_{\bar{p}}}}}{(1-{\varepsilon_{\bar{p}}})^{{N_{\bar{p}}}-{n_{\bar{p}}}}}\end{split}$ (29) where the $P({N_{p}},{N_{\bar{p}}})$ is the original joint probability distribution of number of proton ($N_{p}$) and antiproton ($N_{\bar{p}}$), and $\varepsilon_{p}$, $\varepsilon_{\bar{p}}$ are the efficiency of reconstructing the protons and antiprotons, respectively. In order to arrive at an expression for efficiency-corrected cumulants or moments, the bivariate factorial moments are first defined as: $\displaystyle{F_{i,k}(N_{p},N_{\bar{p}})}=\left\langle\frac{{{N_{p}}!}}{{\left({{N_{p}}-i}\right)!}}\frac{{{N_{\bar{p}}}!}}{{\left({{N_{\bar{p}}}-k}\right)!}}\right\rangle=\sum\limits_{{N_{p}}=i}^{\infty}{\sum\limits_{{N_{\bar{p}}}=k}^{\infty}{P({N_{p}},{N_{\bar{p}}})\frac{{{N_{p}}!}}{{\left({{N_{p}}-i}\right)!}}\frac{{{N_{\bar{p}}}!}}{{\left({{N_{\bar{p}}}-k}\right)!}}}}$ (30) $\displaystyle{f_{i,k}(n_{p},n_{\bar{p}})}=\left\langle\frac{{{n_{p}}!}}{{\left({{n_{p}}-i}\right)!}}\frac{{{n_{\bar{p}}}!}}{{\left({{n_{\bar{p}}}-k}\right)!}}\right\rangle=\sum\limits_{{n_{p}}=i}^{\infty}{\sum\limits_{{n_{\bar{p}}}=k}^{\infty}{p({n_{p}},{n_{\bar{p}}})\frac{{{n_{p}}!}}{{\left({{n_{p}}-i}\right)!}}\frac{{{n_{\bar{p}}}!}}{{\left({{n_{\bar{p}}}-k}\right)!}}}}$ (31) The efficiency-corrected factorial moments are then given as: 
${F_{i,k}(N_{p},N_{\bar{p}})}=\frac{{{f_{i,k}(n_{p},n_{\bar{p}})}}}{{{{({\varepsilon_{p}})}^{i}}{{({\varepsilon_{\bar{p}}})}^{k}}}}.$ (32) Then the $n^{th}$ order efficiency-corrected moments of net-proton distributions are related to the efficiency-corrected factorial moments as: $\begin{array}[]{l}{m_{n}}({N_{p}}-{N_{\bar{p}}})=<{({N_{p}}-{N_{\bar{p}}})^{n}}>=\sum\limits_{i=0}^{n}{{{(-1)}^{i}}\left({\begin{array}[]{*{20}{c}}n\\\ i\end{array}}\right)}<N_{p}^{n-i}N_{\bar{p}}^{i}>\\\ =\sum\limits_{i=0}^{n}{{{(-1)}^{i}}\left({\begin{array}[]{*{20}{c}}n\\\ i\end{array}}\right)}\left[{\sum\limits_{{r_{1}}=0}^{n-i}{\sum\limits_{{r_{2}}=0}^{i}{{s_{2}}(n-i,{r_{1}}){s_{2}}(i,{r_{2}}){F_{{r_{1}},{r_{2}}}}({N_{p}},{N_{\bar{p}}})}}}\right]\\\ =\sum\limits_{i=0}^{n}{\sum\limits_{{r_{1}}=0}^{n-i}{\sum\limits_{{r_{2}}=0}^{i}{{{(-1)}^{i}}\left({\begin{array}[]{*{20}{c}}n\\\ i\end{array}}\right){s_{2}}(n-i,{r_{1}}){s_{2}}(i,{r_{2}}){F_{{r_{1}},{r_{2}}}}({N_{p}},{N_{\bar{p}}})}}}\end{array}$ (33) The Stirling numbers of the first ($s_{1}(n,i)$) and second kind ($s_{2}(n,i)$), are defined as: $\displaystyle\frac{{N!}}{{(N-n)!}}=\sum\limits_{i=0}^{n}{{s_{1}}(n,i)}{N^{i}}$ (34) $\displaystyle{N^{n}}=\sum\limits_{i=0}^{n}{{s_{2}}(n,i)}\frac{{N!}}{{(N-i)!}}$ (35) where $N$, $n$ and $i$ are non-negative integer numbers. The efficiency- corrected cumulants of net-proton distributions can be obtained from the efficiency-corrected moments by using the recursion relation: $\begin{split}&{C_{r}}({N_{p}}-{N_{\bar{p}}})={m_{r}}({N_{p}}-{N_{\bar{p}}})\\\ &-\sum\limits_{s=1}^{r-1}{\left(\begin{array}[]{c}r-1\\\ s-1\end{array}\right)}{C_{s}}({N_{p}}-{N_{\bar{p}}}){m_{r-s}}({N_{p}}-{N_{\bar{p}}})\end{split}$ (36) where the $C_{r}$ denotes the $r^{th}$-order cumulants of net-proton distributions. 
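The moment-to-cumulant recursion above can be implemented directly; a small sketch that builds cumulants from the raw moments $m_{r}=\langle(N_{p}-N_{\bar{p}})^{r}\rangle$ of a toy sample (not experimental data):

```python
from math import comb

def cumulants_from_moments(m, order):
    """C_r = m_r - sum_{s=1}^{r-1} binom(r-1, s-1) * C_s * m_{r-s},
    with m[r] the r-th raw moment and m[0] = 1."""
    C = [0.0] * (order + 1)
    for r in range(1, order + 1):
        C[r] = m[r] - sum(comb(r - 1, s - 1) * C[s] * m[r - s]
                          for s in range(1, r))
    return C

# Raw moments of a toy net-proton sample [1, 2, 3, 4]:
data = [1, 2, 3, 4]
m = [sum(x**r for x in data) / len(data) for r in range(5)]
C = cumulants_from_moments(m, 4)
print(C[1:])  # C1 = mean = 2.5, C2 = variance = 1.25, C3 = 0 (symmetric sample)
```

The first cumulants reproduce the familiar mean and variance, and $C_{4}$ agrees with the standard expression $\mu_{4}-3\mu_{2}^{2}$, which is a quick consistency check on the recursion.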
If the protons and antiprotons have the same efficiency, $\varepsilon_{p}=\varepsilon_{\bar{p}}=\varepsilon$, the expressions for the first four efficiency-corrected cumulants can be explicitly written as: $\begin{split}C_{1}^{X-Y}&=\frac{\langle x\rangle-\langle y\rangle}{\varepsilon}\\\ C_{2}^{X-Y}&=\frac{{C_{2}^{x-y}+(\varepsilon-1)(\langle x\rangle+\langle y\rangle)}}{{{\varepsilon^{2}}}}\\\ C_{3}^{X-Y}&=\frac{{C_{3}^{x-y}+3(\varepsilon-1)(C_{2}^{x}-C_{2}^{y})+(\varepsilon-1)(\varepsilon-2)(\langle x\rangle-\langle y\rangle)}}{{{\varepsilon^{3}}}}\\\ C_{4}^{X-Y}&=\frac{{C_{4}^{x-y}-2(\varepsilon-1)C_{3}^{x+y}+8(\varepsilon-1)(C_{3}^{x}+C_{3}^{y})+(5-\varepsilon)(\varepsilon-1)C_{2}^{x+y}}}{{{\varepsilon^{4}}}}\\\ &+\frac{{8(\varepsilon-1)(\varepsilon-2)(C_{2}^{x}+C_{2}^{y})+({\varepsilon^{2}}-6\varepsilon+6)(\varepsilon-1)(\langle x\rangle+\langle y\rangle)}}{{{\varepsilon^{4}}}}\end{split}$ (37) where $(X,Y)$ and $(x,y)$ are the numbers of $(p,\bar{p})$ produced and measured, respectively. The efficiency-corrected cumulants are sensitive to the efficiency and depend on the lower-order measured cumulants. In the current analysis, the proton and antiproton $p_{\mathrm{T}}$ range is from 0.4 to 2 GeV/$c$. This has been made possible by using particle identification information from the TPC in the $p_{\mathrm{T}}$ range 0.4 to 0.8 GeV/$c$ and from the TPC+TOF in the momentum range 0.8 to 2 GeV/$c$. This results in two different efficiencies for proton reconstruction and two different values for antiprotons. Hence the above formulation, which holds for a single efficiency value $\varepsilon=\varepsilon_{p}=\varepsilon_{\bar{p}}$, has to be modified to accommodate four different efficiency values, two each for the protons and antiprotons corresponding to the different $p_{\mathrm{T}}$ ranges.
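The single-efficiency formulas above can be coded and sanity-checked analytically: for independent Poisson proton and antiproton yields, binomial thinning keeps the counts Poisson, so feeding in the exact thinned cumulants must return the true ones. A sketch with illustrative means (all variable names here are ours, not the paper's):

```python
def corrected_c1(eps, mx, my):
    """Efficiency-corrected C1 of the net distribution."""
    return (mx - my) / eps

def corrected_c2(eps, c2xy, mx, my):
    """Efficiency-corrected C2, from measured Var(x - y) and the means."""
    return (c2xy + (eps - 1.0) * (mx + my)) / eps**2

def corrected_c3(eps, c3xy, c2x, c2y, mx, my):
    """Efficiency-corrected C3, from measured third cumulant of x - y,
    the individual second cumulants, and the means."""
    return (c3xy + 3.0 * (eps - 1.0) * (c2x - c2y)
            + (eps - 1.0) * (eps - 2.0) * (mx - my)) / eps**3

# True independent Poisson yields (illustrative): all cumulants equal the mean.
lp, lpbar, eps = 20.0, 5.0, 0.8
mx, my = eps * lp, eps * lpbar   # measured means after binomial thinning
c2xy = eps * lp + eps * lpbar    # Var(x - y) for independent Poisson counts
c3xy = eps * lp - eps * lpbar    # third cumulant of x - y
print(corrected_c1(eps, mx, my))                           # recovers lp - lpbar
print(corrected_c2(eps, c2xy, mx, my))                     # recovers lp + lpbar
print(corrected_c3(eps, c3xy, eps * lp, eps * lpbar, mx, my))  # recovers lp - lpbar
```

The Poisson closure under binomial thinning makes this a clean algebraic test: each corrected cumulant reduces exactly to the corresponding true Skellam cumulant, independent of $\varepsilon$.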
Let $\varepsilon_{{p_{1}}},\varepsilon_{{p_{2}}}$ and $\varepsilon_{{{\bar{p}}_{1}}},\varepsilon_{{{\bar{p}}_{2}}}$ denote the efficiency for protons and antiprotons in the two sub-phase spaces, and denote the corresponding number of protons and antiprotons in the two sub-phase spaces by $N_{p_{1}}$, $N_{p_{2}}$ and $N_{\bar{p}_{1}}$, $N_{\bar{p}_{2}}$, respectively. Using analogous formulations as above, the bivariate factorial moments of protons and antiprotons distributions is given as: $\begin{split}{F_{{r_{1}},{r_{2}}}}({N_{p}},{N_{\bar{p}}})&={F_{{r_{1}},{r_{2}}}}({N_{{p_{1}}}}+{N_{{p_{2}}}},{N_{{{\bar{p}}_{1}}}}+{N_{{{\bar{p}}_{2}}}})=\sum\limits_{{i_{1}}=0}^{{r_{1}}}{\sum\limits_{{i_{2}}=0}^{{r_{2}}}{{s_{1}}({r_{1}},{i_{1}})}}{s_{1}}({r_{2}},{i_{2}})\langle{({N_{{p_{1}}}}+{N_{{p_{2}}}})^{{i_{1}}}}{({N_{{{\bar{p}}_{1}}}}+{N_{{{\bar{p}}_{2}}}})^{{i_{2}}}}\rangle\\\ &=\sum\limits_{{i_{1}}=0}^{{r_{1}}}{\sum\limits_{{i_{2}}=0}^{{r_{2}}}{{s_{1}}({r_{1}},{i_{1}})}}{s_{1}}({r_{2}},{i_{2}})\langle{\sum\limits_{s=0}^{{i_{1}}}{\left({\begin{array}[]{*{20}{c}}{{i_{1}}}\\\ s\end{array}}\right)N_{{p_{1}}}^{{i_{1}}-s}N_{{p_{2}}}^{s}\sum\limits_{t=0}^{{i_{2}}}{\left({\begin{array}[]{*{20}{c}}{{i_{2}}}\\\ t\end{array}}\right)N_{{{\bar{p}}_{1}}}^{{i_{2}}-t}N_{{{\bar{p}}_{2}}}^{t}}}}\rangle\\\ &=\sum\limits_{{i_{1}}=0}^{{r_{1}}}{\sum\limits_{{i_{2}}=0}^{{r_{2}}}{\sum\limits_{s=0}^{{i_{1}}}{\sum\limits_{t=0}^{{i_{2}}}{{s_{1}}({r_{1}},{i_{1}}){s_{1}}({r_{2}},{i_{2}})\left({\begin{array}[]{*{20}{c}}{{i_{1}}}\\\ s\end{array}}\right)\left({\begin{array}[]{*{20}{c}}{{i_{2}}}\\\ t\end{array}}\right)}}}}\langle N_{{p_{1}}}^{{i_{1}}-s}N_{{p_{2}}}^{s}N_{{{\bar{p}}_{1}}}^{{i_{2}}-t}N_{{{\bar{p}}_{2}}}^{t}\rangle\\\ 
&=\sum\limits_{{i_{1}}=0}^{{r_{1}}}{\sum\limits_{{i_{2}}=0}^{{r_{2}}}{\sum\limits_{s=0}^{{i_{1}}}{\sum\limits_{t=0}^{{i_{2}}}{\sum\limits_{u=0}^{{i_{1}}-s}{\sum\limits_{v=0}^{s}{\sum\limits_{j=0}^{{i_{2}}-t}{\sum\limits_{k=0}^{t}{{s_{1}}({r_{1}},{i_{1}}){s_{1}}({r_{2}},{i_{2}})\left({\begin{array}[]{*{20}{c}}{{i_{1}}}\\\ s\end{array}}\right)\left({\begin{array}[]{*{20}{c}}{{i_{2}}}\\\ t\end{array}}\right)}}}}}}}}\\\ &\times{s_{2}}({i_{1}}-s,u){s_{2}}(s,v){s_{2}}({i_{2}}-t,j){s_{2}}(t,k)\times{F_{u,v,j,k}}(N_{{p_{1}}},N_{{p_{2}}},N_{{{\bar{p}}_{1}}},N_{{{\bar{p}}_{2}}})\end{split}$ (38) Similar to Eq. (32) for the multivariate case, the efficiency-corrected multivariate factorial moments of proton and antiproton distributions in the current case are given as: ${F_{u,v,j,k}}(N_{{p_{1}}},N_{{p_{2}}},N_{{{\bar{p}}_{1}}},N_{{{\bar{p}}_{2}}})=\frac{{{f_{u,v,j,k}}(n_{{p_{1}}},n_{{p_{2}}},n_{{{\bar{p}}_{1}}},n_{{{\bar{p}}_{2}}})}}{{{{({\varepsilon_{{p_{1}}}})}^{u}}{{({\varepsilon_{{p_{2}}}})}^{v}}{{({\varepsilon_{{{\bar{p}}_{1}}}})}^{j}}{{({\varepsilon_{{{\bar{p}}_{2}}}})}^{k}}}}$ (39) where ${{f_{u,v,j,k}}(N_{{p_{1}}},N_{{p_{2}}},N_{{{\bar{p}}_{1}}},N_{{{\bar{p}}_{2}}})}$ are the measured multivariate factorial moments of proton and antiproton distributions. By using Eq. (33), (36), (38) and (39), one can obtain the efficiency-corrected moments and cumulants of net-proton distributions for the case where the protons (antiprotons) have different efficiency in two sub- phase spaces. Through simulations as discussed in Refs. Luo (2015); Nonaka _et al._ (2016), it has been shown that this formulation works consistently. Another binomial-model-based efficiency correction method using track-by-track efficiency is discussed in Ref. Luo and Nonaka (2019). ## Appendix B Statistical Uncertainties Estimation According to Eqs. 
(33), (36) and (38), the efficiency-corrected moments are expressed in terms of the factorial moments, which therefore play the role of the random variables $X_{i}$ in Eq. (21). The covariance of the multivariate moments can be written as: ${\rm Cov}({m_{r,s}},{m_{u,v}})=\frac{1}{n}({m_{r+u,s+v}}-{m_{r,s}}{m_{u,v}})$ (40) where $n$ is the number of events, $m_{r,s}=\langle X_{1}^{r}X_{2}^{s}\rangle$ and ${m_{u,v}}=\langle X_{1}^{u}X_{2}^{v}\rangle$ are the multivariate moments, and $X_{1}$ and $X_{2}$ are random variables. In this paper, $X_{1}$ and $X_{2}$ represent the proton and antiproton numbers, respectively. Based on Eq. (40), one can obtain the covariance of the multivariate factorial moments as: $\begin{split}&{\rm Cov}({f_{r,s}},{f_{u,v}})={\rm Cov}\left(\sum\limits_{i=0}^{r}{\sum\limits_{j=0}^{s}{{s_{1}}(r,i){s_{1}}(s,j){m_{i,j}},}}\sum\limits_{k=0}^{u}{\sum\limits_{h=0}^{v}{{s_{1}}(u,k){s_{1}}(v,h){m_{k,h}}}}\right)\\\ &=\sum\limits_{i=0}^{r}{\sum\limits_{j=0}^{s}{\sum\limits_{k=0}^{u}{\sum\limits_{h=0}^{v}{{s_{1}}(r,i){s_{1}}(s,j){s_{1}}(u,k){s_{1}}(v,h)}}\times{\rm Cov}({m_{i,j}},{m_{k,h}})}}\\\ &=\frac{1}{n}\sum\limits_{i=0}^{r}{\sum\limits_{j=0}^{s}{\sum\limits_{k=0}^{u}{\sum\limits_{h=0}^{v}{{s_{1}}(r,i){s_{1}}(s,j){s_{1}}(u,k){s_{1}}(v,h)}}\times}}({m_{i+k,j+h}}-{m_{i,j}}{m_{k,h}})\\\ &=\frac{1}{n}({f_{(r,u),(s,v)}}-{f_{r,s}}{f_{u,v}})\end{split}$ (41) where $f_{(r,u),(s,v)}$ is defined as: $\begin{array}[]{l}{f_{(r,u),(s,v)}}=\left\langle{\frac{{{X_{1}}!}}{{({X_{1}}-r)!}}\frac{{{X_{1}}!}}{{({X_{1}}-u)!}}\frac{{{X_{2}}!}}{{({X_{2}}-s)!}}\frac{{{X_{2}}!}}{{({X_{2}}-v)!}}}\right\rangle\\\ =\sum\limits_{i=0}^{r}{\sum\limits_{j=0}^{s}{\sum\limits_{k=0}^{u}{\sum\limits_{h=0}^{v}{\sum\limits_{\alpha=0}^{i+k}{\sum\limits_{\beta=0}^{j+h}{{s_{1}}(r,i){s_{1}}(s,j){s_{1}}(u,k){s_{1}}(v,h)}}}}}}\\\ \times{s_{2}}(i+k,\alpha){s_{2}}(j+h,\beta){f_{\alpha,\beta}}\end{array}$ (42) The definition of the bivariate factorial moments $f_{r,s}$, $f_{u,v}$ and
$f_{\alpha,\beta}$ can be found in Eq. (31). Equation (41) can be used in the standard error propagation formula, Eq. (21), to obtain the statistical uncertainties of the efficiency-corrected cumulants. Detailed derivations of the analytical formulae for the statistical uncertainties on cumulants and moments can be found in Refs. Luo (2012, 2015). If we set $\varepsilon_{p}=\varepsilon_{\bar{p}}=1$, the variances of the cumulants and cumulant ratios up to the eighth order, expressed in terms of central moments ($\mu_{n}$), are given below; the statistical uncertainties are the square roots of these variances. $\displaystyle\mathrm{Var}(C_{1})$ $\displaystyle=\mu_{2}/n$ $\displaystyle\mathrm{Var}(C_{2})$ $\displaystyle=(-{\mu}_{2}^{2}+{\mu}_{4})/n$ $\displaystyle\mathrm{Var}(C_{3})$ $\displaystyle=(9{\mu}_{2}^{3}-6{\mu}_{2}{\mu}_{4}-{\mu}_{3}^{2}+{\mu}_{6})/n$ $\displaystyle\mathrm{Var}(C_{4})$ $\displaystyle=(-36{\mu}_{2}^{4}+48{\mu}_{2}^{2}{\mu}_{4}+64{\mu}_{2}{\mu}_{3}^{2}-12{\mu}_{2}{\mu}_{6}-8{\mu}_{3}{\mu}_{5}-{\mu}_{4}^{2}+{\mu}_{8})/n$ $\displaystyle\mathrm{Var}(C_{5})$ $\displaystyle=({\mu}_{10}+900{\mu}_{2}^{5}-900{\mu}_{2}^{3}{\mu}_{4}-1000{\mu}_{2}^{2}{\mu}_{3}^{2}+160{\mu}_{2}^{2}{\mu}_{6}+240{\mu}_{2}{\mu}_{3}{\mu}_{5}$ $\displaystyle+125{\mu}_{2}{\mu}_{4}^{2}-20{\mu}_{2}{\mu}_{8}+200{\mu}_{3}^{2}{\mu}_{4}-20{\mu}_{3}{\mu}_{7}-10{\mu}_{4}{\mu}_{6}-{\mu}_{5}^{2})/n$ $\displaystyle\mathrm{Var}(C_{6})$ $\displaystyle=(-30{\mu}_{10}{\mu}_{2}+{\mu}_{12}-8100{\mu}_{2}^{6}+13500{\mu}_{2}^{4}{\mu}_{4}+39600{\mu}_{2}^{3}{\mu}_{3}^{2}-2880{\mu}_{2}^{3}{\mu}_{6}$ $\displaystyle-9720{\mu}_{2}^{2}{\mu}_{3}{\mu}_{5}-3600{\mu}_{2}^{2}{\mu}_{4}^{2}+405{\mu}_{2}^{2}{\mu}_{8}-9600{\mu}_{2}{\mu}_{3}^{2}{\mu}_{4}+840{\mu}_{2}{\mu}_{3}{\mu}_{7}-400{\mu}_{3}^{4}$ $\displaystyle+216{\mu}_{2}{\mu}_{5}^{2}+510{\mu}_{2}{\mu}_{4}{\mu}_{6}+440{\mu}_{3}^{2}{\mu}_{6}+1020{\mu}_{3}{\mu}_{4}{\mu}_{5}-40{\mu}_{3}{\mu}_{9}+225{\mu}_{4}^{3}$
$\displaystyle-30{\mu}_{4}{\mu}_{8}-12{\mu}_{5}{\mu}_{7}-{\mu}_{6}^{2})/n$ $\displaystyle\mathrm{Var}(C_{7})$ $\displaystyle=(861{\mu}_{10}{\mu}_{2}^{2}-70{\mu}_{10}{\mu}_{4}-70{\mu}_{11}{\mu}_{3}-42{\mu}_{12}{\mu}_{2}+{\mu}_{14}+396900{\mu}_{2}^{7}-529200{\mu}_{2}^{5}{\mu}_{4}$ $\displaystyle-1102500{\mu}_{2}^{4}{\mu}_{3}^{2}+79380{\mu}_{2}^{4}{\mu}_{6}+299880{\mu}_{2}^{3}{\mu}_{3}{\mu}_{5}+176400{\mu}_{2}^{3}{\mu}_{4}^{2}-10080{\mu}_{2}^{3}{\mu}_{8}+558600{\mu}_{2}^{2}{\mu}_{3}^{2}{\mu}_{4}$ $\displaystyle-33600{\mu}_{2}^{2}{\mu}_{3}{\mu}_{7}-29400{\mu}_{2}^{2}{\mu}_{4}{\mu}_{6}-10584{\mu}_{2}^{2}{\mu}_{5}^{2}+137200{\mu}_{2}{\mu}_{3}^{4}-43120{\mu}_{2}{\mu}_{3}^{2}{\mu}_{6}$ $\displaystyle-76440{\mu}_{2}{\mu}_{3}{\mu}_{4}{\mu}_{5}+2310{\mu}_{2}{\mu}_{3}{\mu}_{9}-14700{\mu}_{2}{\mu}_{4}^{3}+1890{\mu}_{2}{\mu}_{4}{\mu}_{8}$ $\displaystyle+966{\mu}_{2}{\mu}_{5}{\mu}_{7}+343{\mu}_{2}{\mu}_{6}^{2}-15680{\mu}_{3}^{3}{\mu}_{5}-14700{\mu}_{3}^{2}{\mu}_{4}^{2}+1505{\mu}_{3}^{2}{\mu}_{8}+2590{\mu}_{3}{\mu}_{4}{\mu}_{7}$ $\displaystyle+2254{\mu}_{3}{\mu}_{5}{\mu}_{6}+1715{\mu}_{4}^{2}{\mu}_{6}+1911{\mu}_{4}{\mu}_{5}^{2}-42{\mu}_{5}{\mu}_{9}-14{\mu}_{6}{\mu}_{8}-{\mu}_{7}^{2})/n$ $\displaystyle\mathrm{Var}(C_{8})$ $\displaystyle=(-28560{\mu}_{10}{\mu}_{2}^{3}+5600{\mu}_{10}{\mu}_{2}{\mu}_{4}+4256{\mu}_{10}{\mu}_{3}^{2}-56{\mu}_{10}{\mu}_{6}+5376{\mu}_{11}{\mu}_{2}{\mu}_{3}-112{\mu}_{11}{\mu}_{5}$ $\displaystyle+1624{\mu}_{12}{\mu}_{2}^{2}-140{\mu}_{12}{\mu}_{4}-112{\mu}_{13}{\mu}_{3}-56{\mu}_{14}{\mu}_{2}+{\mu}_{16}-6350400{\mu}_{2}^{8}+12700800{\mu}_{2}^{6}{\mu}_{4}$ $\displaystyle+59270400{\mu}_{2}^{5}{\mu}_{3}^{2}-2399040{\mu}_{2}^{5}{\mu}_{6}-15523200{\mu}_{2}^{4}{\mu}_{3}{\mu}_{5}-6174000{\mu}_{2}^{4}{\mu}_{4}^{2}+322560{\mu}_{2}^{4}{\mu}_{8}$ $\displaystyle-35280000{\mu}_{2}^{3}{\mu}_{3}^{2}{\mu}_{4}+1626240{\mu}_{2}^{3}{\mu}_{3}{\mu}_{7}+1340640{\mu}_{2}^{3}{\mu}_{4}{\mu}_{6}+677376{\mu}_{2}^{3}{\mu}_{5}^{2}-8467200{\mu}_{2}^{2}{\mu}_{3}^{4}$ 
$\displaystyle+2759680{\mu}_{2}^{2}{\mu}_{3}^{2}{\mu}_{6}+5597760{\mu}_{2}^{2}{\mu}_{3}{\mu}_{4}{\mu}_{5}-119840{\mu}_{2}^{2}{\mu}_{3}{\mu}_{9}+882000{\mu}_{2}^{2}{\mu}_{4}^{3}-108360{\mu}_{2}^{2}{\mu}_{4}{\mu}_{8}$ $\displaystyle-77952{\mu}_{2}^{2}{\mu}_{5}{\mu}_{7}-26656{\mu}_{2}^{2}{\mu}_{6}^{2}+2007040{\mu}_{2}{\mu}_{3}^{3}{\mu}_{5}+3684800{\mu}_{2}{\mu}_{3}^{2}{\mu}_{4}^{2}-160160{\mu}_{2}{\mu}_{3}^{2}{\mu}_{8}$ $\displaystyle-322560{\mu}_{2}{\mu}_{3}{\mu}_{4}{\mu}_{7}-257152{\mu}_{2}{\mu}_{3}{\mu}_{5}{\mu}_{6}-172480{\mu}_{2}{\mu}_{4}^{2}{\mu}_{6}-178752{\mu}_{2}{\mu}_{4}{\mu}_{5}^{2}+3808{\mu}_{2}{\mu}_{5}{\mu}_{9}$ $\displaystyle+1680{\mu}_{2}{\mu}_{6}{\mu}_{8}+512{\mu}_{2}{\mu}_{7}^{2}+940800{\mu}_{3}^{4}{\mu}_{4}-71680{\mu}_{3}^{3}{\mu}_{7}-203840{\mu}_{3}^{2}{\mu}_{4}{\mu}_{6}-75264{\mu}_{3}^{2}{\mu}_{5}^{2}$ $\displaystyle-156800{\mu}_{3}{\mu}_{4}^{2}{\mu}_{5}+8960{\mu}_{3}{\mu}_{4}{\mu}_{9}+6496{\mu}_{3}{\mu}_{5}{\mu}_{8}+4480{\mu}_{3}{\mu}_{6}{\mu}_{7}-4900{\mu}_{4}^{4}+5040{\mu}_{4}^{2}{\mu}_{8}$ $\displaystyle+9856{\mu}_{4}{\mu}_{5}{\mu}_{7}+4704{\mu}_{4}{\mu}_{6}^{2}+6272{\mu}_{5}^{2}{\mu}_{6}-16{\mu}_{7}{\mu}_{9}-{\mu}_{8}^{2})/n$ $\displaystyle\mathrm{Var}(\frac{C_{2}}{C_{1}})$ $\displaystyle=(-\frac{{\mu}_{2}^{2}}{\langle N\rangle^{2}}+\frac{{\mu}_{4}}{\langle N\rangle^{2}}-\frac{2{\mu}_{2}{\mu}_{3}}{\langle N\rangle^{3}}+\frac{{\mu}_{2}^{3}}{\langle N\rangle^{4}})/n$ $\displaystyle\mathrm{Var}(\frac{C_{3}}{C_{2}})$ $\displaystyle=(9{\mu}_{2}-\frac{6{\mu}_{4}}{{\mu}_{2}}+\frac{6{\mu}_{3}^{2}}{{\mu}_{2}^{2}}+\frac{{\mu}_{6}}{{\mu}_{2}^{2}}-\frac{2{\mu}_{3}{\mu}_{5}}{{\mu}_{2}^{3}}+\frac{{\mu}_{3}^{2}{\mu}_{4}}{{\mu}_{2}^{4}})/n$ $\displaystyle\mathrm{Var}(\frac{C_{4}}{C_{2}})$ 
$\displaystyle=(-9{\mu}_{2}^{2}+9{\mu}_{4}+\frac{40{\mu}_{3}^{2}}{{\mu}_{2}}-\frac{6{\mu}_{6}}{{\mu}_{2}}-\frac{8{\mu}_{3}{\mu}_{5}}{{\mu}_{2}^{2}}+\frac{6{\mu}_{4}^{2}}{{\mu}_{2}^{2}}+\frac{{\mu}_{8}}{{\mu}_{2}^{2}}+\frac{8{\mu}_{3}^{2}{\mu}_{4}}{{\mu}_{2}^{3}}-\frac{2{\mu}_{4}{\mu}_{6}}{{\mu}_{2}^{3}}+\frac{{\mu}_{4}^{3}}{{\mu}_{2}^{4}})/n$ $\displaystyle\mathrm{Var}(\frac{C_{5}}{C_{1}})$ $\displaystyle=(\frac{{\mu}_{10}}{\langle N\rangle^{2}}+\frac{900{\mu}_{2}^{5}}{\langle N\rangle^{2}}-\frac{900{\mu}_{2}^{3}{\mu}_{4}}{\langle N\rangle^{2}}-\frac{1000{\mu}_{2}^{2}{\mu}_{3}^{2}}{\langle N\rangle^{2}}+\frac{160{\mu}_{2}^{2}{\mu}_{6}}{\langle N\rangle^{2}}+\frac{240{\mu}_{2}{\mu}_{3}{\mu}_{5}}{\langle N\rangle^{2}}+\frac{125{\mu}_{2}{\mu}_{4}^{2}}{\langle N\rangle^{2}}$ $\displaystyle-\frac{20{\mu}_{2}{\mu}_{8}}{\langle N\rangle^{2}}+\frac{200{\mu}_{3}^{2}{\mu}_{4}}{\langle N\rangle^{2}}-\frac{20{\mu}_{3}{\mu}_{7}}{\langle N\rangle^{2}}-\frac{10{\mu}_{4}{\mu}_{6}}{\langle N\rangle^{2}}-\frac{{\mu}_{5}^{2}}{\langle N\rangle^{2}}+\frac{600{\mu}_{2}^{4}{\mu}_{3}}{\langle N\rangle^{3}}-\frac{60{\mu}_{2}^{3}{\mu}_{5}}{\langle N\rangle^{3}}-\frac{300{\mu}_{2}^{2}{\mu}_{3}{\mu}_{4}}{\langle N\rangle^{3}}$ $\displaystyle-\frac{200{\mu}_{2}{\mu}_{3}^{3}}{\langle N\rangle^{3}}+\frac{20{\mu}_{2}{\mu}_{3}{\mu}_{6}}{\langle N\rangle^{3}}+\frac{30{\mu}_{2}{\mu}_{4}{\mu}_{5}}{\langle N\rangle^{3}}+\frac{20{\mu}_{3}^{2}{\mu}_{5}}{\langle N\rangle^{3}}-\frac{2{\mu}_{5}{\mu}_{6}}{\langle N\rangle^{3}}+\frac{100{\mu}_{2}^{3}{\mu}_{3}^{2}}{\langle N\rangle^{4}}-\frac{20{\mu}_{2}^{2}{\mu}_{3}{\mu}_{5}}{\langle N\rangle^{4}}+\frac{{\mu}_{2}{\mu}_{5}^{2}}{\langle N\rangle^{4}})/n$ $\displaystyle\mathrm{Var}(\frac{C_{6}}{C_{2}})$ $\displaystyle=(-\frac{30{\mu}_{10}}{{\mu}_{2}}+\frac{{\mu}_{12}}{{\mu}_{2}^{2}}-3600{\mu}_{2}^{4}+5400{\mu}_{2}^{2}{\mu}_{4}+30000{\mu}_{2}{\mu}_{3}^{2}-1800{\mu}_{2}{\mu}_{6}-8160{\mu}_{3}{\mu}_{5}-225{\mu}_{4}^{2}$ 
$\displaystyle+345{\mu}_{8}-\frac{3900{\mu}_{3}^{2}{\mu}_{4}}{{\mu}_{2}}+\frac{840{\mu}_{3}{\mu}_{7}}{{\mu}_{2}}-\frac{120{\mu}_{4}{\mu}_{6}}{{\mu}_{2}}+\frac{216{\mu}_{5}^{2}}{{\mu}_{2}}+\frac{2300{\mu}_{3}^{4}}{{\mu}_{2}^{2}}-\frac{140{\mu}_{3}^{2}{\mu}_{6}}{{\mu}_{2}^{2}}+\frac{240{\mu}_{3}{\mu}_{4}{\mu}_{5}}{{\mu}_{2}^{2}}$ $\displaystyle-\frac{40{\mu}_{3}{\mu}_{9}}{{\mu}_{2}^{2}}-\frac{12{\mu}_{5}{\mu}_{7}}{{\mu}_{2}^{2}}+\frac{30{\mu}_{6}^{2}}{{\mu}_{2}^{2}}-\frac{520{\mu}_{3}^{3}{\mu}_{5}}{{\mu}_{2}^{3}}+\frac{20{\mu}_{3}^{2}{\mu}_{8}}{{\mu}_{2}^{3}}+\frac{52{\mu}_{3}{\mu}_{5}{\mu}_{6}}{{\mu}_{2}^{3}}-\frac{2{\mu}_{6}{\mu}_{8}}{{\mu}_{2}^{3}}+\frac{100{\mu}_{3}^{4}{\mu}_{4}}{{\mu}_{2}^{4}}$ $\displaystyle-\frac{20{\mu}_{3}^{2}{\mu}_{4}{\mu}_{6}}{{\mu}_{2}^{4}}+\frac{{\mu}_{4}{\mu}_{6}^{2}}{{\mu}_{2}^{4}})/n$ $\displaystyle\mathrm{Var}(\frac{C_{7}}{C_{1}})$ $\displaystyle=(\frac{861{\mu}_{10}{\mu}_{2}^{2}}{\langle N\rangle^{2}}-\frac{70{\mu}_{10}{\mu}_{4}}{\langle N\rangle^{2}}-\frac{70{\mu}_{11}{\mu}_{3}}{\langle N\rangle^{2}}-\frac{42{\mu}_{12}{\mu}_{2}}{\langle N\rangle^{2}}+\frac{{\mu}_{14}}{\langle N\rangle^{2}}+\frac{396900{\mu}_{2}^{7}}{\langle N\rangle^{2}}-\frac{529200{\mu}_{2}^{5}{\mu}_{4}}{\langle N\rangle^{2}}$ $\displaystyle-\frac{1102500{\mu}_{2}^{4}{\mu}_{3}^{2}}{\langle N\rangle^{2}}+\frac{79380{\mu}_{2}^{4}{\mu}_{6}}{\langle N\rangle^{2}}+\frac{299880{\mu}_{2}^{3}{\mu}_{3}{\mu}_{5}}{\langle N\rangle^{2}}+\frac{176400{\mu}_{2}^{3}{\mu}_{4}^{2}}{\langle N\rangle^{2}}-\frac{10080{\mu}_{2}^{3}{\mu}_{8}}{\langle N\rangle^{2}}$ $\displaystyle+\frac{558600{\mu}_{2}^{2}{\mu}_{3}^{2}{\mu}_{4}}{\langle N\rangle^{2}}-\frac{33600{\mu}_{2}^{2}{\mu}_{3}{\mu}_{7}}{\langle N\rangle^{2}}-\frac{29400{\mu}_{2}^{2}{\mu}_{4}{\mu}_{6}}{\langle N\rangle^{2}}-\frac{10584{\mu}_{2}^{2}{\mu}_{5}^{2}}{\langle N\rangle^{2}}+\frac{137200{\mu}_{2}{\mu}_{3}^{4}}{\langle N\rangle^{2}}$ $\displaystyle-\frac{43120{\mu}_{2}{\mu}_{3}^{2}{\mu}_{6}}{\langle 
N\rangle^{2}}-\frac{76440{\mu}_{2}{\mu}_{3}{\mu}_{4}{\mu}_{5}}{\langle N\rangle^{2}}+\frac{2310{\mu}_{2}{\mu}_{3}{\mu}_{9}}{\langle N\rangle^{2}}-\frac{14700{\mu}_{2}{\mu}_{4}^{3}}{\langle N\rangle^{2}}+\frac{1890{\mu}_{2}{\mu}_{4}{\mu}_{8}}{\langle N\rangle^{2}}$ $\displaystyle+\frac{966{\mu}_{2}{\mu}_{5}{\mu}_{7}}{\langle N\rangle^{2}}+\frac{343{\mu}_{2}{\mu}_{6}^{2}}{\langle N\rangle^{2}}-\frac{15680{\mu}_{3}^{3}{\mu}_{5}}{\langle N\rangle^{2}}-\frac{14700{\mu}_{3}^{2}{\mu}_{4}^{2}}{\langle N\rangle^{2}}+\frac{1505{\mu}_{3}^{2}{\mu}_{8}}{\langle N\rangle^{2}}+\frac{2590{\mu}_{3}{\mu}_{4}{\mu}_{7}}{\langle N\rangle^{2}}$ $\displaystyle+\frac{2254{\mu}_{3}{\mu}_{5}{\mu}_{6}}{\langle N\rangle^{2}}+\frac{1715{\mu}_{4}^{2}{\mu}_{6}}{\langle N\rangle^{2}}+\frac{1911{\mu}_{4}{\mu}_{5}^{2}}{\langle N\rangle^{2}}-\frac{42{\mu}_{5}{\mu}_{9}}{\langle N\rangle^{2}}-\frac{14{\mu}_{6}{\mu}_{8}}{\langle N\rangle^{2}}-\frac{{\mu}_{7}^{2}}{\langle N\rangle^{2}}+\frac{264600{\mu}_{2}^{6}{\mu}_{3}}{\langle N\rangle^{3}}$ $\displaystyle-\frac{26460{\mu}_{2}^{5}{\mu}_{5}}{\langle N\rangle^{3}}-\frac{220500{\mu}_{2}^{4}{\mu}_{3}{\mu}_{4}}{\langle N\rangle^{3}}+\frac{1260{\mu}_{2}^{4}{\mu}_{7}}{\langle N\rangle^{3}}-\frac{235200{\mu}_{2}^{3}{\mu}_{3}^{3}}{\langle N\rangle^{3}}+\frac{11760{\mu}_{2}^{3}{\mu}_{3}{\mu}_{6}}{\langle N\rangle^{3}}$ $\displaystyle+\frac{17640{\mu}_{2}^{3}{\mu}_{4}{\mu}_{5}}{\langle N\rangle^{3}}+\frac{47040{\mu}_{2}^{2}{\mu}_{3}^{2}{\mu}_{5}}{\langle N\rangle^{3}}+\frac{44100{\mu}_{2}^{2}{\mu}_{3}{\mu}_{4}^{2}}{\langle N\rangle^{3}}-\frac{420{\mu}_{2}^{2}{\mu}_{3}{\mu}_{8}}{\langle N\rangle^{3}}-\frac{840{\mu}_{2}^{2}{\mu}_{4}{\mu}_{7}}{\langle N\rangle^{3}}$ $\displaystyle-\frac{1176{\mu}_{2}^{2}{\mu}_{5}{\mu}_{6}}{\langle N\rangle^{3}}+\frac{39200{\mu}_{2}{\mu}_{3}^{3}{\mu}_{4}}{\langle N\rangle^{3}}-\frac{1120{\mu}_{2}{\mu}_{3}^{2}{\mu}_{7}}{\langle N\rangle^{3}}-\frac{1960{\mu}_{2}{\mu}_{3}{\mu}_{4}{\mu}_{6}}{\langle 
N\rangle^{3}}-\frac{2352{\mu}_{2}{\mu}_{3}{\mu}_{5}^{2}}{\langle N\rangle^{3}}$ $\displaystyle-\frac{1470{\mu}_{2}{\mu}_{4}^{2}{\mu}_{5}}{\langle N\rangle^{3}}+\frac{42{\mu}_{2}{\mu}_{5}{\mu}_{8}}{\langle N\rangle^{3}}+\frac{56{\mu}_{2}{\mu}_{6}{\mu}_{7}}{\langle N\rangle^{3}}-\frac{3920{\mu}_{3}^{2}{\mu}_{4}{\mu}_{5}}{\langle N\rangle^{3}}-\frac{2450{\mu}_{3}{\mu}_{4}^{3}}{\langle N\rangle^{3}}+\frac{70{\mu}_{3}{\mu}_{4}{\mu}_{8}}{\langle N\rangle^{3}}$ $\displaystyle+\frac{112{\mu}_{3}{\mu}_{5}{\mu}_{7}}{\langle N\rangle^{3}}+\frac{70{\mu}_{4}^{2}{\mu}_{7}}{\langle N\rangle^{3}}-\frac{2{\mu}_{7}{\mu}_{8}}{\langle N\rangle^{3}}+\frac{44100{\mu}_{2}^{5}{\mu}_{3}^{2}}{\langle N\rangle^{4}}-\frac{8820{\mu}_{2}^{4}{\mu}_{3}{\mu}_{5}}{\langle N\rangle^{4}}-\frac{14700{\mu}_{2}^{3}{\mu}_{3}^{2}{\mu}_{4}}{\langle N\rangle^{4}}$ $\displaystyle+\frac{420{\mu}_{2}^{3}{\mu}_{3}{\mu}_{7}}{\langle N\rangle^{4}}+\frac{441{\mu}_{2}^{3}{\mu}_{5}^{2}}{\langle N\rangle^{4}}+\frac{1470{\mu}_{2}^{2}{\mu}_{3}{\mu}_{4}{\mu}_{5}}{\langle N\rangle^{4}}$ $\displaystyle-\frac{42{\mu}_{2}^{2}{\mu}_{5}{\mu}_{7}}{\langle N\rangle^{4}}+\frac{1225{\mu}_{2}{\mu}_{3}^{2}{\mu}_{4}^{2}}{\langle N\rangle^{4}}-\frac{70{\mu}_{2}{\mu}_{3}{\mu}_{4}{\mu}_{7}}{\langle N\rangle^{4}}+\frac{{\mu}_{2}{\mu}_{7}^{2}}{\langle N\rangle^{4}})/n$ $\displaystyle\mathrm{Var}(\frac{C_{8}}{C_{2}})$ $\displaystyle=(-27300{\mu}_{10}{\mu}_{2}+\frac{4760{\mu}_{10}{\mu}_{4}}{{\mu}_{2}}+\frac{3136{\mu}_{10}{\mu}_{3}^{2}}{{\mu}_{2}^{2}}+\frac{112{\mu}_{10}{\mu}_{3}{\mu}_{5}}{{\mu}_{2}^{3}}+\frac{70{\mu}_{10}{\mu}_{4}^{2}}{{\mu}_{2}^{3}}-\frac{2{\mu}_{10}{\mu}_{8}}{{\mu}_{2}^{3}}$ $\displaystyle+\frac{5376{\mu}_{11}{\mu}_{3}}{{\mu}_{2}}-\frac{112{\mu}_{11}{\mu}_{5}}{{\mu}_{2}^{2}}+1624{\mu}_{12}-\frac{140{\mu}_{12}{\mu}_{4}}{{\mu}_{2}^{2}}-\frac{112{\mu}_{13}{\mu}_{3}}{{\mu}_{2}^{2}}-\frac{56{\mu}_{14}}{{\mu}_{2}}+\frac{{\mu}_{16}}{{\mu}_{2}^{2}}$ 
$\displaystyle-3572100{\mu}_{2}^{6}+6747300{\mu}_{2}^{4}{\mu}_{4}+48686400{\mu}_{2}^{3}{\mu}_{3}^{2}-1693440{\mu}_{2}^{3}{\mu}_{6}-13335840{\mu}_{2}^{2}{\mu}_{3}{\mu}_{5}$ $\displaystyle-2425500{\mu}_{2}^{2}{\mu}_{4}^{2}+282240{\mu}_{2}^{2}{\mu}_{8}-25166400{\mu}_{2}{\mu}_{3}^{2}{\mu}_{4}+1545600{\mu}_{2}{\mu}_{3}{\mu}_{7}+664440{\mu}_{2}{\mu}_{4}{\mu}_{6}$ $\displaystyle+606816{\mu}_{2}{\mu}_{5}^{2}-1254400{\mu}_{3}^{4}+1881600{\mu}_{3}^{2}{\mu}_{6}+3974880{\mu}_{3}{\mu}_{4}{\mu}_{5}-119840{\mu}_{3}{\mu}_{9}+102900{\mu}_{4}^{3}$ $\displaystyle-78540{\mu}_{4}{\mu}_{8}-77952{\mu}_{5}{\mu}_{7}-784{\mu}_{6}^{2}-\frac{439040{\mu}_{3}^{3}{\mu}_{5}}{{\mu}_{2}}+\frac{1764000{\mu}_{3}^{2}{\mu}_{4}^{2}}{{\mu}_{2}}-\frac{115360{\mu}_{3}^{2}{\mu}_{8}}{{\mu}_{2}}$ $\displaystyle-\frac{268800{\mu}_{3}{\mu}_{4}{\mu}_{7}}{{\mu}_{2}}-\frac{119168{\mu}_{3}{\mu}_{5}{\mu}_{6}}{{\mu}_{2}}-\frac{31360{\mu}_{4}^{2}{\mu}_{6}}{{\mu}_{2}}-\frac{131712{\mu}_{4}{\mu}_{5}^{2}}{{\mu}_{2}}+\frac{3808{\mu}_{5}{\mu}_{9}}{{\mu}_{2}}$ $\displaystyle-\frac{840{\mu}_{6}{\mu}_{8}}{{\mu}_{2}}+\frac{512{\mu}_{7}^{2}}{{\mu}_{2}}-\frac{62720{\mu}_{3}^{2}{\mu}_{4}{\mu}_{6}}{{\mu}_{2}^{2}}+\frac{159936{\mu}_{3}^{2}{\mu}_{5}^{2}}{{\mu}_{2}^{2}}+\frac{3920{\mu}_{3}{\mu}_{4}^{2}{\mu}_{5}}{{\mu}_{2}^{2}}+\frac{8960{\mu}_{3}{\mu}_{4}{\mu}_{9}}{{\mu}_{2}^{2}}$ $\displaystyle+\frac{224{\mu}_{3}{\mu}_{5}{\mu}_{8}}{{\mu}_{2}^{2}}+\frac{896{\mu}_{3}{\mu}_{6}{\mu}_{7}}{{\mu}_{2}^{2}}+\frac{28175{\mu}_{4}^{4}}{{\mu}_{2}^{2}}+\frac{2100{\mu}_{4}^{2}{\mu}_{8}}{{\mu}_{2}^{2}}+\frac{9856{\mu}_{4}{\mu}_{5}{\mu}_{7}}{{\mu}_{2}^{2}}+\frac{3136{\mu}_{5}^{2}{\mu}_{6}}{{\mu}_{2}^{2}}$ $\displaystyle-\frac{16{\mu}_{7}{\mu}_{9}}{{\mu}_{2}^{2}}+\frac{56{\mu}_{8}^{2}}{{\mu}_{2}^{2}}+\frac{62720{\mu}_{3}^{3}{\mu}_{4}{\mu}_{5}}{{\mu}_{2}^{3}}+\frac{39200{\mu}_{3}^{2}{\mu}_{4}^{3}}{{\mu}_{2}^{3}}-\frac{1120{\mu}_{3}^{2}{\mu}_{4}{\mu}_{8}}{{\mu}_{2}^{3}}-\frac{7168{\mu}_{3}^{2}{\mu}_{5}{\mu}_{7}}{{\mu}_{2}^{3}}$ 
$\displaystyle-\frac{4480{\mu}_{3}{\mu}_{4}^{2}{\mu}_{7}}{{\mu}_{2}^{3}}-\frac{7840{\mu}_{3}{\mu}_{4}{\mu}_{5}{\mu}_{6}}{{\mu}_{2}^{3}}-\frac{6272{\mu}_{3}{\mu}_{5}^{3}}{{\mu}_{2}^{3}}+\frac{128{\mu}_{3}{\mu}_{7}{\mu}_{8}}{{\mu}_{2}^{3}}-\frac{4900{\mu}_{4}^{3}{\mu}_{6}}{{\mu}_{2}^{3}}-\frac{3920{\mu}_{4}^{2}{\mu}_{5}^{2}}{{\mu}_{2}^{3}}$ $\displaystyle+\frac{140{\mu}_{4}{\mu}_{6}{\mu}_{8}}{{\mu}_{2}^{3}}+\frac{112{\mu}_{5}^{2}{\mu}_{8}}{{\mu}_{2}^{3}}+\frac{3136{\mu}_{3}^{2}{\mu}_{4}{\mu}_{5}^{2}}{{\mu}_{2}^{4}}+\frac{3920{\mu}_{3}{\mu}_{4}^{3}{\mu}_{5}}{{\mu}_{2}^{4}}$ $\displaystyle-\frac{112{\mu}_{3}{\mu}_{4}{\mu}_{5}{\mu}_{8}}{{\mu}_{2}^{4}}+\frac{1225{\mu}_{4}^{5}}{{\mu}_{2}^{4}}-\frac{70{\mu}_{4}^{3}{\mu}_{8}}{{\mu}_{2}^{4}}+\frac{{\mu}_{4}{\mu}_{8}^{2}}{{\mu}_{2}^{4}})/n$ ## References * Aggarwal _et al._ (2010a) M. M. Aggarwal _et al._ (STAR Collaboration), (2010a), arXiv:1007.2613 [nucl-ex] . * (2) BES-II White Paper (STAR Note): https://drupal.star.bnl.gov/STAR/starnotes/public/sn0598. * Aoki _et al._ (2006) Y. Aoki, G. Endrodi, Z. Fodor, S. D. Katz, and K. K. Szabo, Nature 443, 675 (2006), arXiv:hep-lat/0611014 [hep-lat] . * Arsene _et al._ (2005) I. Arsene _et al._ (BRAHMS Collaboration), Nucl. Phys. A757, 1 (2005), arXiv:nucl-ex/0410020 . * Back _et al._ (2005) B. Back _et al._ (PHOBOS Collaboration), Nucl. Phys. A757, 28 (2005), arXiv:nucl-ex/0410022 . * Adcox _et al._ (2005) K. Adcox _et al._ (PHENIX Collaboration), Nucl. Phys. A757, 184 (2005), arXiv:nucl-ex/0410003 . * Adams _et al._ (2005) J. Adams _et al._ (STAR Collaboration), Nucl. Phys. A757, 102 (2005), arXiv:nucl-ex/0501009 . * Adamczyk _et al._ (2017) L. Adamczyk _et al._ (STAR Collaboration), Phys. Rev. C96, 044904 (2017), arXiv:1701.07065 [nucl-ex] . * Borsanyi _et al._ (2010) S. Borsanyi, Z. Fodor, C. Hoelbling, S. D. Katz, S. Krieg, C. Ratti, and K. K. Szabo (Wuppertal-Budapest Collaboration), JHEP 09, 073 (2010), arXiv:1005.3508 [hep-lat] . * Bazavov _et al._ (2019) A. 
Bazavov _et al._ (HotQCD Collaboration), Phys. Lett. B795, 15 (2019), arXiv:1812.08235 [hep-lat] . * Adamczyk _et al._ (2013) L. Adamczyk _et al._ (STAR Collaboration), Phys. Rev. Lett. 110, 142301 (2013), arXiv:1301.2347 [nucl-ex] . * Adamczyk _et al._ (2014a) L. Adamczyk _et al._ (STAR Collaboration), Phys. Rev. Lett. 112, 162301 (2014a), arXiv:1401.3043 [nucl-ex] . * Adamczyk _et al._ (2018a) L. Adamczyk _et al._ (STAR Collaboration), Phys. Rev. Lett. 121, 032301 (2018a), arXiv:1707.01988 [nucl-ex] . * Adamczyk _et al._ (2014b) L. Adamczyk _et al._ (STAR Collaboration), Phys. Rev. Lett. 113, 052302 (2014b), arXiv:1404.1433 [nucl-ex] . * Fukushima and Hatsuda (2011) K. Fukushima and T. Hatsuda, Rept. Prog. Phys. 74, 014001 (2011), arXiv:1005.4814 [hep-ph] . * Stephanov _et al._ (1999) M. A. Stephanov, K. Rajagopal, and E. V. Shuryak, Phys. Rev. D60, 114028 (1999), arXiv:hep-ph/9903292 [hep-ph] . * Stephanov (2004) M. A. Stephanov, Prog. Theor. Phys. Suppl. 153, 139 (2004), arXiv:hep-ph/0402115 . * Fodor and Katz (2004) Z. Fodor and S. D. Katz, JHEP 04, 050 (2004), arXiv:hep-lat/0402006 [hep-lat] . * Gavai and Gupta (2008) R. V. Gavai and S. Gupta, Phys. Rev. D78, 114503 (2008), arXiv:0806.2233 [hep-lat] . * Gavai and Gupta (2005) R. V. Gavai and S. Gupta, Phys. Rev. D71, 114014 (2005), arXiv:hep-lat/0412035 [hep-lat] . * Gupta (2009) S. Gupta, PoS CPOD2009, 025 (2009), arXiv:0909.4630 [nucl-ex] . * Ejiri (2008) S. Ejiri, Phys. Rev. D78, 074507 (2008), arXiv:0804.3227 [hep-lat] . * Bowman and Kapusta (2009) E. S. Bowman and J. I. Kapusta, Phys. Rev. C79, 015202 (2009), arXiv:0810.0042 [nucl-th] . * Aggarwal _et al._ (2010b) M. M. Aggarwal _et al._ (STAR Collaboration), Phys. Rev. Lett. 105, 022302 (2010b), arXiv:1004.4959 [nucl-ex] . * Abelev _et al._ (2010) B. I. Abelev _et al._ (STAR Collaboration), Phys. Rev. C81, 024911 (2010), arXiv:0909.4131 [nucl-ex] . * Adam _et al._ (2020a) J. Adam _et al._ (STAR Collaboration), (2020a), arXiv:2007.14005 [nucl-ex] . 
* Luo and Xu (2017) X. Luo and N. Xu, Nucl. Sci. Tech. 28, 112 (2017), arXiv:1701.02105 [nucl-ex] . * Bzdak _et al._ (2020) A. Bzdak, S. Esumi, V. Koch, J. Liao, M. Stephanov, and N. Xu, Phys. Rept. 853, 1 (2020), arXiv:1906.00936 [nucl-th] . * Asakawa _et al._ (2000) M. Asakawa, U. W. Heinz, and B. Muller, Phys. Rev. Lett. 85, 2072 (2000), arXiv:hep-ph/0003169 [hep-ph] . * Hatta and Stephanov (2003) Y. Hatta and M. A. Stephanov, Phys. Rev. Lett. 91, 102003 (2003), [Erratum: Phys. Rev. Lett.91,129901(2003)], arXiv:hep-ph/0302002 [hep-ph] . * Koch _et al._ (2005) V. Koch, A. Majumder, and J. Randrup, Phys. Rev. Lett. 95, 182301 (2005), arXiv:nucl-th/0505052 [nucl-th] . * Asakawa _et al._ (2009) M. Asakawa, S. Ejiri, and M. Kitazawa, Phys. Rev. Lett. 103, 262301 (2009), arXiv:0904.2089 [nucl-th] . * Gupta _et al._ (2011) S. Gupta, X. Luo, B. Mohanty, H. G. Ritter, and N. Xu, Science 332, 1525 (2011), arXiv:1105.3934 [hep-ph] . * Ding _et al._ (2015) H.-T. Ding, F. Karsch, and S. Mukherjee, Int. J. Mod. Phys. E24, 1530007 (2015), arXiv:1504.05274 [hep-lat] . * Shi _et al._ (2014) C. Shi, Y.-L. Wang, Y. Jiang, Z.-F. Cui, and H.-S. Zong, JHEP 07, 014 (2014), arXiv:1403.3797 [hep-ph] . * Gao and Liu (2016) F. Gao and Y.-x. Liu, Phys. Rev. D94, 076009 (2016), arXiv:1607.01675 [hep-ph] . * Fischer (2019) C. S. Fischer, Prog. Part. Nucl. Phys. 105, 1 (2019), arXiv:1810.12938 [hep-ph] . * Friman _et al._ (2011) B. Friman, F. Karsch, K. Redlich, and V. Skokov, Eur. Phys. J. C71, 1694 (2011), arXiv:1103.3511 [hep-ph] . * Fu _et al._ (2020) W.-J. Fu, J. M. Pawlowski, and F. Rennecke, Phys. Rev. D101, 054032 (2020), arXiv:1909.02991 [hep-ph] . * Lu _et al._ (2015) Y. Lu, Y.-L. Du, Z.-F. Cui, and H.-S. Zong, Eur. Phys. J. C75, 495 (2015), arXiv:1508.00651 [hep-ph] . * Chen _et al._ (2016) J.-W. Chen, J. Deng, H. Kohyama, and L. Labun, Phys. Rev. D93, 034037 (2016), arXiv:1509.04968 [hep-ph] . * Fan _et al._ (2019) W. Fan, X. Luo, and H. Zong, Chin. Phys. 
C43, 033103 (2019), arXiv:1702.08674 [hep-ph] . * Fu _et al._ (2008) W.-J. Fu, Z. Zhang, and Y.-x. Liu, Phys. Rev. D77, 014006 (2008), arXiv:0711.0154 [hep-ph] . * Li _et al._ (2019) Z. Li, K. Xu, X. Wang, and M. Huang, Eur. Phys. J. C79, 245 (2019), arXiv:1801.09215 [hep-ph] . * Herold _et al._ (2016) C. Herold, M. Nahrgang, Y. Yan, and C. Kobdaj, Phys. Rev. C93, 021902 (2016), arXiv:1601.04839 [hep-ph] . * Chen _et al._ (2015) J.-W. Chen, J. Deng, and L. Labun, Phys. Rev. D92, 054019 (2015), arXiv:1410.5454 [hep-ph] . * Vovchenko _et al._ (2015) V. Vovchenko, D. V. Anchishkin, M. I. Gorenstein, and R. V. Poberezhnyuk, Phys. Rev. C92, 054901 (2015), arXiv:1506.05763 [nucl-th] . * Jiang _et al._ (2016) L. Jiang, P. Li, and H. Song, Phys. Rev. C94, 024918 (2016), arXiv:1512.06164 [nucl-th] . * Mukherjee _et al._ (2017) A. Mukherjee, J. Steinheimer, and S. Schramm, Phys. Rev. C96, 025205 (2017), arXiv:1611.10144 [nucl-th] . * Zhang _et al._ (2017) H. Zhang, D. Hou, T. Kojo, and B. Qin, Phys. Rev. D96, 114029 (2017), arXiv:1709.05654 [hep-ph] . * Schaefer and Wagner (2012) B. J. Schaefer and M. Wagner, Phys. Rev. D85, 034027 (2012), arXiv:1111.6871 [hep-ph] . * Palhares _et al._ (2010) L. F. Palhares, E. S. Fraga, and T. Kodama, J. Phys. G37, 094031 (2010). * Pan _et al._ (2017) Z. Pan, Z.-F. Cui, C.-H. Chang, and H.-S. Zong, Int. J. Mod. Phys. A32, 1750067 (2017), arXiv:1611.07370 [hep-ph] . * Berdnikov and Rajagopal (2000) B. Berdnikov and K. Rajagopal, Phys. Rev. D61, 105017 (2000), arXiv:hep-ph/9912274 [hep-ph] . * Stephanov and Yin (2018) M. Stephanov and Y. Yin, Phys. Rev. D98, 036006 (2018), arXiv:1712.10305 [nucl-th] . * Rajagopal _et al._ (2020) K. Rajagopal, G. Ridgway, R. Weller, and Y. Yin, Phys. Rev. D102, 094025 (2020), arXiv:1908.08539 [hep-ph] . * An _et al._ (2020) X. An, G. Başar, M. Stephanov, and H.-U. Yee, Phys. Rev. C102, 034901 (2020), arXiv:1912.13456 [hep-th] . * Stephanov (2010) M. A. Stephanov, Phys. Rev. 
D81, 054012 (2010), arXiv:0911.1772 [hep-ph] . * Stephanov (2009) M. A. Stephanov, Phys. Rev. Lett. 102, 032301 (2009), arXiv:0809.3450 [hep-ph] . * Athanasiou _et al._ (2010) C. Athanasiou, K. Rajagopal, and M. Stephanov, Phys. Rev. D82, 074008 (2010), arXiv:1006.4636 [hep-ph] . * Stephanov (2011) M. A. Stephanov, Phys. Rev. Lett. 107, 052301 (2011), arXiv:1104.1627 [hep-ph] . * Ejiri _et al._ (2006) S. Ejiri, F. Karsch, and K. Redlich, Phys. Lett. B633, 275 (2006), arXiv:hep-ph/0509051 [hep-ph] . * Cheng _et al._ (2009) M. Cheng _et al._ , Phys. Rev. D79, 074505 (2009), arXiv:0811.1006 [hep-lat] . * Stokic _et al._ (2009) B. Stokic, B. Friman, and K. Redlich, Phys. Lett. B673, 192 (2009), arXiv:0809.3129 [hep-ph] . * Gavai and Gupta (2011) R. V. Gavai and S. Gupta, Phys. Lett. B696, 459 (2011), arXiv:1001.3796 [hep-lat] . * Kitazawa and Asakawa (2012) M. Kitazawa and M. Asakawa, Phys. Rev. C86, 024904 (2012), [Erratum: Phys. Rev.C86,069902(2012)], arXiv:1205.3292 [nucl-th] . * Bzdak and Koch (2012) A. Bzdak and V. Koch, Phys. Rev. C86, 044904 (2012), arXiv:1206.4286 [nucl-th] . * Ackermann _et al._ (2003) K. H. Ackermann _et al._ (STAR Collaboration), Nucl. Instrum. Meth. A499, 624 (2003). * Anderson _et al._ (2003) M. Anderson _et al._ , Nucl. Instrum. Meth. A499, 659 (2003), arXiv:nucl-ex/0301015 . * Adamczyk _et al._ (2014c) L. Adamczyk _et al._ (STAR Collaboration), Phys. Rev. Lett. 112, 032302 (2014c), arXiv:1309.5681 [nucl-ex] . * Adam _et al._ (2020b) J. Adam _et al._ (STAR Collaboration), (2020b), arXiv:2001.02852 [nucl-ex] . * Llope (2012) W. J. Llope (STAR Collaboration), Nucl. Instrum. Meth. A661, S110 (2012). * Adler _et al._ (2001) C. Adler, A. Denisov, E. Garcia, M. J. Murray, H. Strobele, and S. N. White, Nucl. Instrum. Meth. A470, 488 (2001), arXiv:nucl-ex/0008005 [nucl-ex] . * Llope _et al._ (2004) W. J. Llope _et al._ , Nucl. Instrum. Meth. A522, 252 (2004), arXiv:nucl-ex/0308022 [nucl-ex] . * Bieser _et al._ (2003) F. S. Bieser _et al._ , Nucl. 
Instrum. Meth. A499, 766 (2003). * Miller _et al._ (2007) M. L. Miller, K. Reygers, S. J. Sanders, and P. Steinberg, Ann. Rev. Nucl. Part. Sci. 57, 205 (2007), arXiv:nucl-ex/0701025 [nucl-ex] . * Bichsel (2006) H. Bichsel, Nucl. Instrum. Meth. A562, 154 (2006). * Luo _et al._ (2013) X. Luo, J. Xu, B. Mohanty, and N. Xu, J. Phys. G40, 105104 (2013), arXiv:1302.2332 [nucl-ex] . * Chatterjee _et al._ (2020) A. Chatterjee, Y. Zhang, J. Zeng, N. R. Sahoo, and X. Luo, Phys. Rev. C101, 034902 (2020), arXiv:1910.08004 [nucl-ex] . * Zhou and Jia (2018) M. Zhou and J. Jia, Phys. Rev. C98, 044903 (2018), arXiv:1803.01812 [nucl-th] . * Sugiura _et al._ (2019) T. Sugiura, T. Nonaka, and S. Esumi, Phys. Rev. C100, 044904 (2019), arXiv:1903.02314 [nucl-th] . * Ling and Stephanov (2016) B. Ling and M. A. Stephanov, Phys. Rev. C93, 034915 (2016), arXiv:1512.09125 [nucl-th] . * Bzdak _et al._ (2017a) A. Bzdak, V. Koch, and N. Strodthoff, Phys. Rev. C95, 054906 (2017a), arXiv:1607.07375 [nucl-th] . * Kitazawa and Luo (2017) M. Kitazawa and X. Luo, Phys. Rev. C96, 024910 (2017), arXiv:1704.04909 [nucl-th] . * He and Luo (2018) S. He and X. Luo, Chin. Phys. C42, 104001 (2018), arXiv:1802.02911 [physics.data-an] . * Skokov _et al._ (2013) V. Skokov, B. Friman, and K. Redlich, Phys. Rev. C88, 034911 (2013), arXiv:1205.4756 [hep-ph] . * Braun-Munzinger _et al._ (2017) P. Braun-Munzinger, A. Rustamov, and J. Stachel, Nucl. Phys. A960, 114 (2017), arXiv:1612.00702 [nucl-th] . * Luo (2015) X. Luo, Phys. Rev. C91, 034907 (2015), arXiv:1410.3914 [physics.data-an] . * Nonaka _et al._ (2017) T. Nonaka, M. Kitazawa, and S. Esumi, Phys. Rev. C95, 064912 (2017), arXiv:1702.07106 [physics.data-an] . * Luo and Nonaka (2019) X. Luo and T. Nonaka, Phys. Rev. C99, 044917 (2019), arXiv:1812.10303 [physics.data-an] . * Garg _et al._ (2013a) P. Garg, D. K. Mishra, P. K. Netrakanti, A. K. Mohanty, and B. Mohanty, J. Phys. G40, 055103 (2013a), arXiv:1211.2074 [nucl-ex] . * Esumi _et al._ (2021) S. Esumi, K. 
Nakagawa, and T. Nonaka, Nucl. Instrum. Meth. A987, 164802 (2021), arXiv:2002.11253 [physics.data-an] . * Fine and Nevski (2000) V. Fine and P. Nevski, in _Proceedings CHEP 2000, 143._ (2000). * Bzdak _et al._ (2016) A. Bzdak, R. Holzmann, and V. Koch, Phys. Rev. C94, 064907 (2016), arXiv:1603.09057 [nucl-th] . * Nonaka _et al._ (2018) T. Nonaka, M. Kitazawa, and S. Esumi, Nucl. Instrum. Meth. A906, 10 (2018), arXiv:1805.00279 [physics.data-an] . * Adamczyk _et al._ (2018b) L. Adamczyk _et al._ (STAR Collaboration), Phys. Lett. B785, 551 (2018b), arXiv:1709.00773 [nucl-ex] . * Luo (2012) X. Luo, J. Phys. G39, 025008 (2012), arXiv:1109.0593 [physics.data-an] . * Pandav _et al._ (2019) A. Pandav, D. Mallick, and B. Mohanty, Nucl. Phys. A991, 121608 (2019), arXiv:1809.08892 [nucl-ex] . * Efron (1979a) B. Efron, Ann. Statist. 7, 1 (1979). * Efron (1979b) B. Efron, _Computers and the Theory of Statistics: Thinking the Unthinkable_ (Society for Industrial and Applied Mathematics, 1979). * Garg _et al._ (2013b) P. Garg, D. K. Mishra, P. K. Netrakanti, B. Mohanty, A. K. Mohanty, B. K. Singh, and N. Xu, Phys. Lett. B726, 691 (2013b), arXiv:1304.7133 [nucl-ex] . * Xu _et al._ (2016) J. Xu, S. Yu, F. Liu, and X. Luo, Phys. Rev. C94, 024901 (2016), arXiv:1606.03900 [nucl-ex] . * He and Luo (2017) S. He and X. Luo, Phys. Lett. B774, 623 (2017), arXiv:1704.00423 [nucl-ex] . * Abelev _et al._ (2009) B. I. Abelev _et al._ (STAR Collaboration), Phys. Rev. C79, 034909 (2009), arXiv:0808.2041 [nucl-ex] . * Bazavov _et al._ (2012) A. Bazavov _et al._ , Phys. Rev. Lett. 109, 192302 (2012), arXiv:1208.1220 [hep-lat] . * Borsanyi _et al._ (2013) S. Borsanyi, Z. Fodor, S. D. Katz, S. Krieg, C. Ratti, and K. K. Szabo, Phys. Rev. Lett. 111, 062005 (2013), arXiv:1305.5161 [hep-lat] . * Gupta _et al._ (2020) S. Gupta, D. Mallick, D. K. Mishra, B. Mohanty, and N. Xu, (2020), arXiv:2004.04681 [hep-ph] . * Bzdak and Koch (2017) A. Bzdak and V. Koch, Phys. Rev. 
C96, 054905 (2017), arXiv:1707.02640 [nucl-th] . * Brewer _et al._ (2018) J. Brewer, S. Mukherjee, K. Rajagopal, and Y. Yin, Phys. Rev. C98, 061901 (2018), arXiv:1804.10215 [hep-ph] . * Mukherjee _et al._ (2016) S. Mukherjee, R. Venugopalan, and Y. Yin, Phys. Rev. Lett. 117, 222301 (2016), arXiv:1605.09341 [hep-ph] . * Wu _et al._ (2019) S. Wu, Z. Wu, and H. Song, Phys. Rev. C99, 064902 (2019), arXiv:1811.09466 [nucl-th] . * Ohnishi _et al._ (2016) Y. Ohnishi, M. Kitazawa, and M. Asakawa, Phys. Rev. C94, 044905 (2016), arXiv:1606.03827 [nucl-th] . * Sakaida _et al._ (2017) M. Sakaida, M. Asakawa, H. Fujii, and M. Kitazawa, Phys. Rev. C95, 064905 (2017), arXiv:1703.08008 [nucl-th] . * Nahrgang _et al._ (2019) M. Nahrgang, M. Bluhm, T. Schaefer, and S. A. Bass, Phys. Rev. D99, 116015 (2019), arXiv:1804.05728 [nucl-th] . * Asakawa _et al._ (2020) M. Asakawa, M. Kitazawa, and B. Müller, Phys. Rev. C101, 034913 (2020), arXiv:1912.05840 [nucl-th] . * Li _et al._ (2018) J. Li, H.-j. Xu, and H. Song, Phys. Rev. C97, 014902 (2018), arXiv:1707.09742 [nucl-th] . * Lin _et al._ (2017) Y. Lin, L. Chen, and Z. Li, Phys. Rev. C96, 044906 (2017), arXiv:1707.04375 [hep-ph] . * Almasi _et al._ (2017) G. A. Almasi, B. Friman, and K. Redlich, Phys. Rev. D96, 014027 (2017), arXiv:1703.05947 [hep-ph] . * Yang _et al._ (2017) Z. Yang, X. Luo, and B. Mohanty, Phys. Rev. C95, 014914 (2017), arXiv:1610.07580 [nucl-ex] . * Zhou _et al._ (2017) C. Zhou, J. Xu, X. Luo, and F. Liu, Phys. Rev. C96, 014909 (2017), arXiv:1703.09114 [nucl-ex] . * Zhao _et al._ (2017) A. Zhao, X. Luo, and H. Zong, Eur. Phys. J. C77, 207 (2017), arXiv:1609.01416 [nucl-th] . * Vovchenko _et al._ (2018) V. Vovchenko, L. Jiang, M. I. Gorenstein, and H. Stoecker, Phys. Rev. C98, 024910 (2018), arXiv:1711.07260 [nucl-th] . * Albright _et al._ (2015) M. Albright, J. Kapusta, and C. Young, Phys. Rev. C92, 044904 (2015), arXiv:1506.03408 [nucl-th] . * Fukushima (2015) K. Fukushima, Phys. Rev. 
C91, 044910 (2015), arXiv:1409.0698 [hep-ph] . * Netrakanti _et al._ (2016) P. K. Netrakanti, X. F. Luo, D. K. Mishra, B. Mohanty, A. Mohanty, and N. Xu, Nucl. Phys. A947, 248 (2016), arXiv:1405.4617 [hep-ph] . * Morita _et al._ (2015) K. Morita, B. Friman, and K. Redlich, Phys. Lett. B741, 178 (2015), arXiv:1402.5982 [hep-ph] . * Samanta and Mohanty (2019) S. Samanta and B. Mohanty, (2019), arXiv:1905.09311 [hep-ph] . * He _et al._ (2016) S. He, X. Luo, Y. Nara, S. Esumi, and N. Xu, Phys. Lett. B762, 296 (2016), arXiv:1607.06376 [nucl-ex] . * Nahrgang _et al._ (2015) M. Nahrgang, M. Bluhm, P. Alba, R. Bellwied, and C. Ratti, Eur. Phys. J. C75, 573 (2015), arXiv:1402.1238 [hep-ph] . * Mishra _et al._ (2016) D. K. Mishra, P. Garg, P. K. Netrakanti, and A. K. Mohanty, Phys. Rev. C94, 014905 (2016), arXiv:1607.01875 [hep-ph] . * Bluhm _et al._ (2017) M. Bluhm, M. Nahrgang, S. A. Bass, and T. Schaefer, Eur. Phys. J. C77, 210 (2017), arXiv:1612.03889 [nucl-th] . * Zhang _et al._ (2020) Y. Zhang, S. He, H. Liu, Z. Yang, and X. Luo, Phys. Rev. C101, 034909 (2020), arXiv:1905.01095 [nucl-ex] . * Karsch _et al._ (2016) F. Karsch, K. Morita, and K. Redlich, Phys. Rev. C93, 034907 (2016), arXiv:1508.02614 [hep-ph] . * Bzdak _et al._ (2013) A. Bzdak, V. Koch, and V. Skokov, Phys. Rev. C87, 014901 (2013), arXiv:1203.4529 [hep-ph] . * Braun-Munzinger _et al._ (2019) P. Braun-Munzinger, A. Rustamov, and J. Stachel, (2019), arXiv:1907.03032 [nucl-th] . * Karsch and Redlich (2011) F. Karsch and K. Redlich, Phys. Lett. B695, 136 (2011), arXiv:1007.2581 [hep-ph] . * Bass _et al._ (1998) S. A. Bass _et al._ , Prog. Part. Nucl. Phys. 41, 255 (1998), arXiv:nucl-th/9803035 [nucl-th] . * Bleicher _et al._ (1999) M. Bleicher _et al._ , J. Phys. G25, 1859 (1999), arXiv:hep-ph/9909407 [hep-ph] . * Wasserstein and Lazar (2016) R. L. Wasserstein and N. A. Lazar, American Statistician 70, 129 (2016). * Fu (2017) J.-H. Fu, Phys. Rev. C96, 034905 (2017), arXiv:1610.07138 [nucl-th] . 
* Braun-Munzinger _et al._ (2020) P. Braun-Munzinger, B. Friman, K. Redlich, A. Rustamov, and J. Stachel, (2020), arXiv:2007.02463 [nucl-th] . * Fu (2013) J. Fu, Phys. Lett. B722, 144 (2013). * Bhattacharyya _et al._ (2014) A. Bhattacharyya, S. Das, S. K. Ghosh, R. Ray, and S. Samanta, Phys. Rev. C90, 034909 (2014), arXiv:1310.2793 [hep-ph] . * Bzdak _et al._ (2017b) A. Bzdak, V. Koch, and V. Skokov, Eur. Phys. J. C77, 288 (2017b), arXiv:1612.05128 [nucl-th] . * Nonaka _et al._ (2016) T. Nonaka, T. Sugiura, S. Esumi, H. Masui, and X. Luo, Phys. Rev. C94, 034909 (2016), arXiv:1604.06212 [nucl-th] .
# Distributed Spatial-Keyword kNN Monitoring for Location-aware Pub/Sub

Shohei Tsuruoka Osaka University<EMAIL_ADDRESS>, Daichi Amagata Osaka University<EMAIL_ADDRESS>, Shunya Nishio Osaka University<EMAIL_ADDRESS> and Takahiro Hara Osaka University<EMAIL_ADDRESS>

###### Abstract.

Recent applications employ publish/subscribe (Pub/Sub) systems so that publishers can easily receive the attention of customers and subscribers can monitor useful information generated by publishers. Due to the prevalence of smart devices and social networking services, a large number of objects that contain both spatial and keyword information are generated continuously, and the number of subscribers also continues to increase. This poses a challenge to Pub/Sub systems: they need to continuously extract useful information from massive objects for each subscriber in real time. In this paper, we address the problem of $k$ nearest neighbor monitoring on a spatial-keyword data stream for a large number of subscriptions. To scale well to massive objects and subscriptions, we propose a distributed solution, namely D$k$M-SKS. Given $m$ workers, D$k$M-SKS divides a set of subscriptions into $m$ disjoint subsets based on a cost model so that each worker has almost the same $k$NN-update cost, to maintain load balancing. D$k$M-SKS allows an arbitrary approach to updating the $k$NN of each subscription, so with a suitable in-memory index, D$k$M-SKS can improve update efficiency by pruning irrelevant subscriptions for a given new object. We conduct experiments on real datasets, and the results demonstrate the efficiency and scalability of D$k$M-SKS.

## 1\. Introduction

Due to the recent prevalence of GPS-enabled devices, many applications have been generating objects that contain location information and keywords (choudhury2018batch, ; mahmood2018adaptive, ).
They often provide services that retrieve objects useful to users from the generated ones, based on a location-aware publish/subscribe (Pub/Sub) model (hu2015location, ; li2013location, ; nishio2020lamps, ; wang2015ap, ; wang2016skype, ). In this model, users register queries that specify query locations and keywords as their subscriptions on a Pub/Sub system, and this system delivers appropriate objects generated by publishers (e.g., Points of Interest) to subscriptions based on their query locations and keywords. It is well known that range and $k$ nearest neighbor ($k$NN) queries support location-aware Pub/Sub systems. A range query retrieves all objects existing within a user-specified range from a query point, so it cannot control the result size. This means that users may not obtain any objects or may obtain a huge number of objects, which is not desirable. On the other hand, a $k$NN query alleviates this drawback, since users can obtain a reasonable-sized result. Hence, in this paper, we consider $k$NN queries.

### 1.1. Motivation

In Pub/Sub environments, objects are generated in a streaming fashion, so we have to continuously update the $k$NN objects for each subscription. For example:

Figure 1. An example of $k$NN monitoring in a location-aware Pub/Sub system, where $o_{i}$ and $s_{j}$ respectively denote a spatial-keyword object and a subscription. (a) At time $t$. (b) At time $t+1$.

Example 1. Figure 1 illustrates an example of $k$NN monitoring in a location-aware Pub/Sub system. Three subscriptions ($s_{1}$, $s_{2}$, and $s_{3}$) are registered, and the Pub/Sub system monitors $k$NN objects for each subscription. Assume $k=1$ and focus on $s_{1}$, which specifies Japanese and Noodle as keywords. At time $t$, i.e., in Figure 1(a), the NN object for $s_{1}$ is $o_{2}$, because it contains the keyword Noodle and is the nearest to $s_{1}$ among $\\{o_{1},o_{2},o_{3}\\}$. Also, the NN object for $s_{2}$ ($s_{3}$) is $o_{3}$ ($o_{2}$).
Assume further that a new object $o_{4}$ is generated at time $t+1$, as shown in Figure 1(b). Since $o_{4}$ also contains the keyword Japanese, the NN object of $s_{3}$ is updated to $o_{4}$ (and the NN objects for the other subscriptions do not change). Users require up-to-date results, so Pub/Sub systems have to efficiently update the $k$NN objects of their subscriptions when new objects are given. However, this is a difficult task, because many applications employing Pub/Sub systems have to deal with a large number of (often million-scale) subscriptions (wang2015ap_, ). Besides, due to the usefulness of location-aware Pub/Sub systems, the number of subscriptions is further increasing (wang2017top, ). It is therefore hard for a single server to update the result for each subscription in real time (chen2017distributed, ). This suggests that we need to make location-aware Pub/Sub systems efficient and scalable, motivating us to consider a distributed solution: given multiple workers, each registered subscription is assigned to a specific worker so that parallel $k$NN update is enabled.

Challenge. Although a distributed solution is promising, it faces some challenges in scaling well to massive objects and subscriptions (i.e., continuous spatial-keyword $k$NN queries). (1) A distributed solution has to maintain load balancing. This is not trivial for continuous spatial-keyword $k$NN queries, because each subscription specifies arbitrary locations and keywords, i.e., the loads of subscriptions are different and not explicitly provided. (2) It is necessary to deal with subscription insertions and deletions. Although some variants of the spatial-keyword $k$NN monitoring problem (hu2015location, ; wang2016skype, ) accept subscription insertions and deletions, these solutions consider centralized environments, and extending them to decentralized environments is not trivial.
In addition, (chen2017distributed, ; wang2017top, ) assume subscription insertions and deletions in distributed processing environments. However, (chen2017distributed, ) considers the number of subscriptions rather than their costs, which is not effective for load balancing, and (wang2017top, ) does not consider load balancing.

### 1.2. Contribution

We overcome these challenges and propose two baselines and D$k$M-SKS (Distributed $k$NN Monitoring on Spatial-Keyword data Stream). Our solutions employ

* Cost models for subscriptions: We design cost models for subscriptions, so that we can estimate the load of a given subscription when a new object is generated. Specifically, we propose keyword- and space-oriented cost models. Our models use a practical assumption and can deal with new subscriptions. Based on these models, we further propose a hybrid of these two models.

* Cost-based subscription partitioning: Based on our cost models, a set of subscriptions is divided into disjoint subsets, each of which is assigned to a specific worker. In particular, D$k$M-SKS considers both spatial and keyword information, so that $k$NN update costs can be minimized. We use a greedy algorithm for subscription partitioning, because optimal cost-based partitioning is NP-hard.

Furthermore, D$k$M-SKS allows an arbitrary exact algorithm for $k$NN update. This is a good property because it can incorporate a state-of-the-art algorithm to accelerate performance. To demonstrate the efficiency of D$k$M-SKS, we conduct experiments on two real datasets. From the experimental results, we confirm that D$k$M-SKS outperforms the baselines and a state-of-the-art technique. This is the full version of our preliminary paper (tsuruoka2020distributed, ).

Organization. The rest of this paper is organized as follows. We formally define our problem in Section 2. Then, we design baseline solutions in Section 3. We propose D$k$M-SKS in Section 4, and introduce our experimental results in Section 5.
Related works are reviewed in Section 6. Finally, this paper is concluded in Section 7.

## 2\. Preliminary

Problem definition. Let us first define spatial-keyword objects.

Definition 1 (Spatial-keyword object). A spatial-keyword object $o$ is defined as $o=\langle p,\psi,t\rangle$, where $p$ is a 2-dimensional location of $o$, $\psi$ is a set of keywords held by $o$, and $t$ is the time-stamp when $o$ is generated.

Hereinafter, we simply call $o$ an object. Note that we assume discrete time in this paper. Next, we define continuous spatial-keyword $k$ nearest neighbor ($k$NN) queries.

Definition 2 (Continuous spatial-keyword $k$NN query). A continuous spatial-keyword $k$NN query $s$ is defined as $s=\langle p,\psi,k,t\rangle$, where $p$ is a 2-dimensional location of interest for $s$, $\psi$ is a set of keywords in which $s$ is interested, $k$ is the number of results required by $s$, and $t$ is the time-stamp when $s$ is registered. Let $O$ be a set of objects generated so far, and let $O(s)$ be the set of objects $o\in O$ where $o.\psi\cap s.\psi\neq\varnothing$ and $s.t\leq o.t$. Given $O(s)$, this query monitors a set of objects $A$ that satisfy (i) $|A|=k$ and (ii) $\forall o\in A$, $\forall o^{\prime}\in O(s)-A$, $dist(o.p,s.p)\leq dist(o^{\prime}.p,s.p)$, where $dist(p,p^{\prime})$ evaluates the Euclidean distance between points $p$ and $p^{\prime}$ (ties are broken arbitrarily).

That is, we consider continuous spatial-keyword $k$NN queries with a boolean (i.e., OR) semantic for keywords (almaslukh2018evaluating, ; amagata2015distributed, ; chen2013spatial, ) and a time constraint (amagata2016diversified, ; qiao2016range, ) for obtaining objects that are as fresh as possible. In this paper, a subscription corresponds to a continuous spatial-keyword $k$NN query, as shown in Example 1; we hence use the two terms interchangeably. Then, our problem is defined as follows:

Figure 2.
A toy example of objects and subscriptions that have been respectively generated and registered at time $t$.

Problem statement. Given $O$ and a set of registered subscriptions $S$, our problem is to exactly monitor $A$ for each subscription $\in S$.

Example 2. Figure 2 illustrates a toy example which is used throughout this paper. Assume that $s_{1}$, …, $s_{10}$ ($o_{1},...,o_{9}$) have been registered (generated) at time $t$. Consider $s_{1}$, then $O(s_{1})=\\{o_{1},o_{4},o_{6},o_{7}\\}$, because they contain a, b, or c. Assuming $s_{1}.k=2$, $A$ of $s_{1}$ is $\\{o_{1},o_{4}\\}$.

This paper proposes a distributed solution to achieve real-time monitoring and scale well to large $|O|$ and $|S|$.

System overview. We assume that a location-aware Pub/Sub system employs a general distributed setting consisting of a main server and $m$ workers (amagata2018space, ; luo2014distributed, ). The main server (each worker) directly communicates with workers (the main server). (A worker can be a CPU core or a machine that can use a thread.) The main server takes the following roles: it

* assigns each subscription to a specific worker,

* receives a stream of objects and broadcasts them to all workers, and

* accepts subscription insertions and deletions.

The main operations of each worker are as follows: it

* accepts subscriptions assigned by the main server,

* removes requested subscriptions, and

* updates the $k$NN objects for each assigned subscription.

We see that the $k$NN objects for each subscription are updated in parallel, so this approach is promising for massive subscriptions. An important issue in achieving this is load balancing. That is, distributed solutions have to consider how to make the computation time of each worker almost equal when new objects are generated. We analyze this problem theoretically below. Let $C(s)$ be the $k$NN update cost of a subscription $s$ (how to obtain $C(s)$ is introduced later).
Furthermore, let $C(w_{i})$ be the cost (load) of a worker $w_{i}$, which is defined as

$C(w_{i})=\sum_{s\in S(w_{i})}C(s),$

where $S(w_{i})$ is the set of subscriptions assigned to $w_{i}$. We want to minimize the load difference between workers through a good subscription assignment. This can be formalized as follows:

Definition 3 (Optimal subscription assignment problem). Given a set of objects $O$, a set of subscriptions $S$, and $m$ workers, this problem is to find a subscription assignment that minimizes

$\max_{i\in[1,m]}C(w_{i})-\min_{j\in[1,m]}C(w_{j}).$

We have the following theorem w.r.t. the above problem (amagata2019identifying, ).

Theorem 1. The optimal subscription assignment problem is NP-hard.

This theorem shows that it is not practical to obtain the optimal assignment, which suggests that we need a heuristic approach. We hence consider $C(s)$ to capture the load of $s$ and then design an approach that partitions $S$ into $m$ disjoint subsets whose loads are well balanced. Note that $C(s)$ is dependent on a given cost model. In addition, we consider how to manage new subscriptions (we can easily deal with subscription deletions: the main server simply requests the corresponding workers to remove them).

## 3\. Baselines

Because this is the first work that proposes a distributed solution for processing continuous spatial-keyword $k$NN queries defined in Definition 2, we first design baseline solutions. We propose two baselines that respectively employ keyword- and space-oriented subscription partitioning. We assume that some subscriptions are registered at the initial time, and we partition $S$ when $O$ becomes sufficiently large. (This is common to D$k$M-SKS.) We use $O_{init}$ to denote the set of objects when $S$ is partitioned.

### 3.1. Keyword-oriented Partition

To start with, we design keyword-oriented partition.
One possible approach partitions $S$ so that the sets of distinct keywords of the subscriptions held by the workers are disjoint. This approach is not efficient, because it does not consider keyword frequencies. In other words, if a worker has subscriptions with keywords that are contained by many objects, its load becomes heavy, resulting in load imbalance. Hence, our keyword-oriented partition takes keyword frequencies into account.

Cost estimation. Similar to (wang2015ap, ), we estimate the load of a subscription based on the distributions of keywords in $O_{init}$, because the distributions of large datasets rarely change in practice (yoon2019nets, ). Assume that the appearance probability of each keyword is independent. Given an object $o$, the probability that a keyword $\lambda$ is contained in $o.\psi$, $P(\lambda)$, is

$P(\lambda)=\frac{|O_{\lambda}|}{|O_{init}|},$

where $O_{\lambda}\subseteq O_{init}$ is the set of objects $o_{i}$ such that $\lambda\in o_{i}.\psi$. Recall that $O(s)$ is a set of objects $o_{j}$ where $o_{j}.\psi\cap s.\psi\neq\varnothing$. Therefore, the $k$NN update cost of a subscription $s$, $C(s)$, can be estimated as:

(1) $C(s)=\sum_{\lambda\in s.\psi}P(\lambda).$

Subscription partition. Keyword-oriented partition employs the cost model defined in Equation (1) and a 3/2-approximation greedy algorithm (graham1969bounds, ) for subscription partitioning. This approach first computes $C(s)$ for every $s\in S$, and sorts $S$ in descending order of $C(s)$. Then, this approach sequentially accesses subscriptions while assigning an accessed subscription to the worker $w_{i}$ with the minimum $C(w_{i})$. Algorithm 1 details this approach.

$k$NN update. Each worker $w$ maintains an inverted file $w.I$ to index its assigned subscriptions. The inverted file is a set of postings lists $w.I[\lambda]$ that maintain subscriptions containing a keyword $\lambda$.
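As a minimal sketch (our own illustrative code and naming, not the authors' implementation), the cost model of Equation (1), the greedy assignment of Algorithm 1, and a worker-side inverted file might look like the following, with subscriptions and objects represented as plain dicts:

```python
from collections import defaultdict

def keyword_probabilities(O_init):
    """P(lambda) = |O_lambda| / |O_init| for every keyword appearing in O_init."""
    counts = defaultdict(int)
    for o in O_init:
        for kw in o["keywords"]:
            counts[kw] += 1
    return {kw: c / len(O_init) for kw, c in counts.items()}

def cost(s, prob):
    """Equation (1): C(s) = sum of P(lambda) over the subscription's keywords."""
    return sum(prob.get(kw, 0.0) for kw in s["keywords"])

def greedy_assign(S, m, prob):
    """Greedy step of Algorithm 1: visit subscriptions in descending cost order,
    always assigning to the currently least-loaded worker."""
    loads = [0.0] * m
    workers = [[] for _ in range(m)]
    for s in sorted(S, key=lambda s: cost(s, prob), reverse=True):
        w = min(range(m), key=lambda i: loads[i])
        workers[w].append(s)
        loads[w] += cost(s, prob)
    return workers, loads

def build_inverted_file(assigned):
    """w.I: postings list w.I[lambda] -> subscriptions containing keyword lambda."""
    inv = defaultdict(list)
    for s in assigned:
        for kw in s["keywords"]:
            inv[kw].append(s)
    return inv
```

For instance, with two workers and subscription costs 0.9, 0.5, and 0.4, the greedy step places the 0.9-cost subscription on one worker and the other two on the other, keeping both loads at 0.9.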
Given a new object $o$ (broadcast by the main server), each worker $w$ retrieves, from $w.I$, the subscriptions that contain keywords in $o.\psi$, while pruning irrelevant subscriptions. After that, $w$ updates the $k$NN of the corresponding subscriptions.

Example 3. We partition $S$ in Figure 2 into two disjoint subsets for workers $w_{1}$ and $w_{2}$, based on keyword-oriented partition. Figure 3 illustrates an overview. The left part shows subscriptions and their costs obtained from Equation (1), and the right part shows the partition result, i.e., $w_{1}$ has $s_{1}$, $s_{6}$, $s_{7}$, $s_{9}$, and $s_{10}$ while $w_{2}$ has $s_{2}$, $s_{3}$, $s_{4}$, $s_{5}$, and $s_{8}$. They are maintained by inverted files (the rightmost tables).

Algorithm 1 Subscription-Assignment

Input: $S$ (a set of subscriptions) and $m$ workers

1: Sort $S$ in descending order of cost
2: Set $C(w)=0$ for each worker
3: for each $s\in S$ do
4:  $w\leftarrow\mathop{\rm arg\,min}\limits_{m}C(w)$
5:  $S(w)\leftarrow S(w)\cup\\{s\\}$, $C(w)\leftarrow C(w)+C(s)$

Figure 3. An example of keyword-oriented partition for two workers $w_{1}$ and $w_{2}$ (based on objects and subscriptions in Figure 2)

Subscription insertion. A new subscription $s^{\prime}$ can also obtain its estimated cost from Equation (1), because its cost model assumes that the keyword distribution rarely changes (wang2015ap_, ). The main server maintains $C(w)$ for each worker $w$. (This is common to all of our solutions.) Given a new subscription $s^{\prime}$, the main server computes $C(s^{\prime})$ from Equation (1). Then the main server assigns $s^{\prime}$ to the worker with the minimum $C(w)$.

Subscription deletion. For subscription deletion, the main server simply requests the worker that has the corresponding subscription to remove it, and then updates $C(w)$. This is also common to our solutions.

### 3.2. Space-oriented Partition

We next design space-oriented partition.
The most straightforward approach is to partition the data space into $m$ equal-sized subspaces. Clearly, this is not efficient, because some of them have more objects than the others, which also leads to load imbalance. We hence consider a space-based cost model below.

Cost estimation. Consider a set $S_{r}$ of subscriptions that exist in a subspace $r$, and let $C(S_{r})$ be its cost. Note that $C(S_{r})$ can be interpreted as the probability that the $k$NN objects of subscriptions in $S_{r}$ may be updated, given a new object $o$. Let $o_{k}$ be the current $k$-th nearest neighbor object of a subscription $s$. Furthermore, let $B(s)$ be a ball whose center and radius are respectively $s.p$ and $dist(o_{k}.p,s.p)$. We see that new objects that are generated within $B(s)$ may become new $k$NNs of $s$. Now consider a rectangle $R$ that encloses all balls of $S_{r}$. It is also true that new objects that are generated within $R$ may become new $k$NNs of $s\in S_{r}$. The space-based cost also utilizes the distribution of $O_{init}$. Given the set $O_{R}$ of objects in $O_{init}$ existing within $R$, the probability that a new object is generated within $R$, $P(R)$, is

(2) $P(R)=\frac{|O_{R}|}{|O_{init}|}.$

Then we define $C(S_{r})$ as follows:

(3) $C(S_{r})=P(R)\cdot|S_{r}|$

It can be seen that $C(S_{r})$ takes the number of subscriptions into account. Assume that $R$ is small but contains many subscriptions. We see that the $k$NN update cost of $R$ is not small when a new object is generated within $R$. However, without $|S_{r}|$, $C(S_{r})$ would be small, which contradicts the above intuition. We therefore make Equation (3) an expected value, different from Equation (1).

Subscription partition. Here, we introduce how to obtain $R$ (or $r$). Let $\mathbb{R}^{2}$ be the space where objects and subscriptions exist.
We partition $\mathbb{R}^{2}$ in a similar way to a quadtree (finkel1974quad, ), motivated by a recent empirical evaluation on a spatial-keyword stream that confirms the superiority of quadtree-based space partition (almaslukh2018evaluating, ). Specifically, we partition $\mathbb{R}^{2}$ into four equal-sized subspaces and compute $C(S_{r})$ for each subspace $r$. Then we pick the subspace that has the largest $C(S_{r})$ and partition it in the same way. This is repeated until we have $n\geq\theta\cdot m$, where $n$ and $\theta$ are the number of subspaces and a threshold (system parameter), respectively. Now we have $n$ disjoint subsets of $S$ and determine their assignment in a similar way to Algorithm 1. Note that space-oriented partition considers the assignment of subsets $S_{r}$, unlike keyword-oriented partition. That is, the input of the greedy algorithm is a collection of subsets $S_{r}$.

$k$NN update. Space-oriented partition takes a different approach from keyword-oriented partition. Assume that a worker $w$ has a collection $S(w)$ of $S_{r}$. For each $S_{r}\in S(w)$, we build an inverted file $I(S_{r})$ for $S_{r}$. This aims at pruning irrelevant subscriptions, i.e., we can prune $S_{r}$ when a new object is generated within $R$ but does not contain any keywords in $S_{r}$. Given a new object $o$ that is generated within $R$, $w$ computes the subscriptions that contain the keywords in $o.\psi$ by using $I(S_{r})$. If there are such subscriptions, $w$ updates their $k$NNs.

Figure 4. An example of space-oriented partition for two workers $w_{1}$ and $w_{2}$ (based on objects and subscriptions in Figure 2). (a) Space-oriented partition for $S$ in Figure 2. (b) Subscriptions that are assigned to $w_{1}$. (c) Subscriptions that are assigned to $w_{2}$.

Example 4. We partition $S$ in Figure 2 into two disjoint subsets for workers $w_{1}$ and $w_{2}$, based on space-oriented partition. Figure 4 illustrates an example.
For simplicity, $S$ is partitioned into four subsets $S_{1}$, $S_{2}$, $S_{3}$, and $S_{4}$ (see Figure 4(a)). Assume that their costs are 0.36, 0.11, 0.2, and 0.28, respectively. Then $S_{1}$ and $S_{2}$ ($S_{3}$ and $S_{4}$) are assigned to $w_{1}$ ($w_{2}$), as shown in Figure 4(b) (4(c)). Given a new object $o_{10}$ that is shown in Figures 4(b) and 4(c), $w_{1}$ needs to deal with $s_{1}$, $s_{2}$, $s_{3}$, and $s_{4}$, because $o_{10}$ exists within the rectangle of $S_{1}$. Similarly, $w_{2}$ needs to consider $s_{7}$.

New subscription. Our space-oriented partition provides a cost for a set of subscriptions. On the other hand, for new subscriptions, we should provide their respective costs, because the number of new subscriptions at a given time is much smaller than that of the initial set of subscriptions. We therefore estimate the cost of a new subscription based on Equation (2). Given a new subscription $s$, the main server computes its $k$NN among $O_{init}$ to obtain $B(s)$. (Note that its exact $k$NN is monitored after $s$ is assigned to a worker, because $O_{init}$ does not satisfy the time constraint of $O(s)$.) Then the main server has a rectangle $R$ (i.e., a space $r$) that encloses $B(s)$. Now we can obtain its cost from Equation (2), because $S_{r}=\\{s\\}$, i.e., Equation (3) becomes Equation (2). The way $s$ is assigned to a worker is the same as in keyword-oriented partition.

Although the above approach can deal with new subscriptions, it loses the property of “space-oriented”, because a new subscription $s$ may be assigned to a worker $w$ that does not have subscriptions close to $s$. This case may degrade the pruning performance, because the data space that $w$ has to cover becomes larger. For example, assume that a new subscription $s_{11}$ is registered and its location is a point in $S_{2}$ of Figure 4(a). Assume furthermore that $s_{11}$ is assigned to $w_{2}$; then the space that $w_{2}$ has to cover becomes larger.

## 4\. D$k$M-SKS

Motivation.
Our baselines partition $S$ based only on either keyword or spatial information. However, given a subspace, the better partitioning approach depends on the space and keyword distributions of the subspace. For example:

Example 5. Figure 5 depicts two data distributions. Black points, dashed circles, and balloons represent the locations of subscriptions $s$, $B(s)$, and keywords of $s$, respectively. Focus on Figure 5(a) and let the solid rectangle show $r$. We see that $B(s)$ of each subscription $s$ is small, so the entire cost is small if we use space-oriented partition for $r$, because the pruning probability becomes large. Next, consider Figure 5(b). Each subscription has a large $B(s)$ and it overlaps with the others. For this distribution, space-oriented partition is clearly not a good choice, because the size of the rectangle that encloses each ball does not change much even if $r$ is partitioned.

Motivated by the above observation, D$k$M-SKS considers a better partitioning approach when it partitions a (sub)set of subscriptions, to minimize the entire load. Then D$k$M-SKS assigns each subscription to a specific worker based on the greedy algorithm (graham1969bounds, ) and an additional heuristic.

Figure 5. An example that depicts data distributions for considering a better partitioning approach. Black points, dashed circles, and balloons represent the locations of subscriptions $s$, $B(s)$, and keywords of $s$, respectively. (a) A distribution in which space-oriented partition is effective. (b) A distribution in which space-oriented partition is not effective.

### 4.1. Cost Estimation

D$k$M-SKS also utilizes $O_{init}$ to estimate the cost of a subscription. Different from keyword- and space-oriented partition, D$k$M-SKS considers both space and keyword information. Consider a subset $S_{r}$ of $S$, and let $R$ be the rectangle that encloses the balls of the subscriptions in $S_{r}$.
Based on a similar idea to Equation (2), the probability that an object $o$ is generated within $R$ and contains a keyword $\lambda$ is

(4) $P(R,\lambda)=\frac{|O_{R,\lambda}|}{|O_{init}|},$

where $O_{R,\lambda}$ is the set of objects in $O_{init}$ that exist within $R$ and contain $\lambda$ in their keywords. Now take a subscription $s\in S_{r}$. Equation (4) focuses on a single keyword, so the estimated cost of $s$ is

(5) $C(s)=\sum_{\lambda\in s.\psi}P(R,\lambda).$

Then the cost of $S_{r}$ is defined as

(6) $C(S_{r})=\sum_{s\in S_{r}}C(s).$

Note that D$k$M-SKS provides a cost both for a single subscription and for a set of subscriptions.

### 4.2. Subscription Partition and Assignment

Subscription partition. Given $S_{r}$, D$k$M-SKS selects the better of the following two partitioning approaches for $S_{r}$.

Space-only-Partition. Consider a space $r$ where $S_{r}$ exists. This approach partitions $r$ into equal-sized disjoint subspaces $r_{i}$ ($1\leq i\leq 4$), as with space-oriented partition. Then D$k$M-SKS obtains the sets of subscriptions $S_{r_{i}}$ that exist in $r_{i}$.

Hybrid-Partition. This approach also partitions $S_{r}$ into four disjoint subsets $S_{h_{i}}$ ($1\leq i\leq 4$), but in a different way from Space-only-Partition. (This approach obtains four subsets to be comparable to Space-only-Partition.) This approach utilizes $C(s)$, which is defined in Equation (5), and considers both space and keyword information. More specifically, D$k$M-SKS sorts $S_{r}$ in descending order of $C(s)$, and runs the greedy algorithm to assign each $s\in S_{r}$ to the $S_{h_{i}}$ with the minimum $\sum_{s\in S_{h_{i}}}C(s)$. We do not consider keyword-only partition, because it does not consider spatial information and cannot reduce the size of $R$.

We define the better partition as the one that yields the smaller entire cost after $S_{r}$ is partitioned.
The entire costs $C_{s}$ and $C_{h}$, which are respectively provided by Space-only-Partition and Hybrid-Partition, are defined as

$C_{s}=\sum_{1\leq i\leq 4}C(S_{r_{i}})$ and $C_{h}=\sum_{1\leq i\leq 4}C(S_{h_{i}}).$

If $C_{s}<C_{h}$, D$k$M-SKS selects Space-only-Partition. Otherwise, D$k$M-SKS selects Hybrid-Partition.

Algorithm description. Now we are ready to introduce how to partition $S$ through D$k$M-SKS. Algorithm 2 describes the details. The objective of this algorithm is to obtain at least $\gamma_{1}\cdot m$ subsets of $S$, where $\gamma_{1}$ is a system parameter. For ease of presentation, assume that we are given a subset $S^{\prime}$ of $S$. (At initialization, $S^{\prime}=S$.) D$k$M-SKS considers partitioning $S^{\prime}$. Because Hybrid-Partition needs to compute Equation (5) for each subscription in $S^{\prime}$, it incurs a large computational cost if $|S^{\prime}|$ is large. Therefore, if $|S^{\prime}|>\gamma_{2}$, where $\gamma_{2}$ is also a system parameter, D$k$M-SKS always utilizes Space-only-Partition to partition $S^{\prime}$ into four subsets. On the other hand, if $|S^{\prime}|\leq\gamma_{2}$, D$k$M-SKS tests both Space-only-Partition and Hybrid-Partition. D$k$M-SKS then selects the result of Space-only-Partition if $C_{s}<C_{h}$. Otherwise, D$k$M-SKS selects that of Hybrid-Partition. The four subsets obtained by the better partition are inserted into a collection $\mathcal{S}$ of subsets. After that, $\mathcal{S}$ is sorted in descending order of the estimated cost of each subset. D$k$M-SKS checks $|\mathcal{S}|$, and if $|\mathcal{S}|<\gamma_{1}\cdot m$, D$k$M-SKS picks the subset with the largest cost and repeats the above operations.
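The cost model of Equations (4)-(6) that drives this choice between Space-only-Partition and Hybrid-Partition can be sketched in Python as follows (an illustrative sketch under our own naming, not the authors' implementation; the containment test is simplified to an axis-aligned rectangle):

```python
def in_rect(p, R):
    """True iff point p lies in the axis-aligned rectangle R = ((x1, y1), (x2, y2))."""
    (x, y), ((x1, y1), (x2, y2)) = p, R
    return x1 <= x <= x2 and y1 <= y <= y2

def p_r_lambda(O_init, R, lam):
    """Equation (4): fraction of initial objects that lie in R and contain keyword lam."""
    hits = sum(1 for o in O_init if in_rect(o["p"], R) and lam in o["keywords"])
    return hits / len(O_init)

def cost_sub(s, R, O_init):
    """Equation (5): C(s) = sum over the subscription's keywords of P(R, lambda)."""
    return sum(p_r_lambda(O_init, R, lam) for lam in s["keywords"])

def cost_subset(S_r, R, O_init):
    """Equation (6): C(S_r) = sum of C(s) over the subscriptions in S_r."""
    return sum(cost_sub(s, R, O_init) for s in S_r)
```

The selection step would then compare $C_{s}$ and $C_{h}$, obtained by applying `cost_subset` to the four subsets produced by each partitioning approach.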
Algorithm 2 Subscription-Partitioning

Input: $S$ (a set of subscriptions), $m$ workers, $\gamma_{1}$, and $\gamma_{2}$ (system parameters)

1: $\mathcal{S}\leftarrow\langle S,0\rangle$
2: while $|\mathcal{S}|<\gamma_{1}\cdot m$ do
3:  $\langle S^{\prime},C(S^{\prime})\rangle\leftarrow$ the front of $\mathcal{S}$
4:  $\mathcal{S}\leftarrow\mathcal{S}-\langle S^{\prime},C(S^{\prime})\rangle$
5:  if $|S^{\prime}|>\gamma_{2}$ then
6:   $\mathbb{S}\leftarrow$ Space-only-Partition$(S^{\prime})$
7:   for each $S_{r}\in\mathbb{S}$ do
8:    $\mathcal{S}\leftarrow\mathcal{S}\cup\langle S_{r},C(S_{r})\rangle$
9:  else
10:   $\mathbb{S}\leftarrow$ Space-only-Partition$(S^{\prime})$
11:   $\mathbb{S^{\prime}}\leftarrow$ Hybrid-Partition$(S^{\prime})$
12:   if $C_{s}<C_{h}$ then
13:    for each $S_{r}\in\mathbb{S}$ do
14:     $\mathcal{S}\leftarrow\mathcal{S}\cup\langle S_{r},C(S_{r})\rangle$
15:   else
16:    for each $S_{h}\in\mathbb{S^{\prime}}$ do
17:     $\mathcal{S}\leftarrow\mathcal{S}\cup\langle S_{h},C(S_{h})\rangle$
18:  Sort $\mathcal{S}$ in descending order of $C(S^{\prime})$
19: return $\mathcal{S}$

Algorithm 3 Subscription-Assignment for D$k$M-SKS

Input: $\mathcal{S}$ (a collection of subsets of $S$) and $m$ workers

1: Set $C(w)=0$ for each worker
2: Sort $\mathcal{S}$ in descending order of cost
3: for each $S^{\prime}\in\mathcal{S}$ do
4:  Sort $S^{\prime}$ in descending order of cost
5:  for each $s\in S^{\prime}$ do
6:   $w\leftarrow\mathop{\rm arg\,min}\limits_{m}C(w)$
7:   $S(w)\leftarrow S(w)\cup\\{s\\}$, $C(w)\leftarrow C(w)+C(s)$

Subscription assignment. From the subscription partition, D$k$M-SKS has a collection $\mathcal{S}$ of subsets $S^{\prime}$. It is important to note that $S^{\prime}$ tends to contain subscriptions with close locations and similar keyword sets. That is, given a new object $o$, the $k$NN results of all subscriptions in $S^{\prime}$ may be changed by $o$.
In this case, assigning the subscriptions $s\in S^{\prime}$ to different workers is better than assigning the whole of $S^{\prime}$ to a single worker, because it exploits parallel $k$NN updates. We use this heuristic for subscription assignment. Algorithm description. For subscription assignment, D$k$M-SKS employs an approach similar to the keyword-oriented partition. In other words, D$k$M-SKS uses the greedy algorithm (graham1969bounds, ), but accesses subscriptions in a different order. Given $\mathcal{S}$, we first sort $\mathcal{S}$ in descending order of the cost obtained from Equation (6). Then, for each $S^{\prime}\in\mathcal{S}$, we sort $S^{\prime}$ in the same way as $\mathcal{S}$ and run the greedy algorithm. Algorithm 3 elaborates this operation. ### 4.3. $k$NN Update Algorithm In fact, D$k$M-SKS can employ an arbitrary index for updating the $k$NN of each subscription. This is a useful property, because it can always make use of a state-of-the-art index. By default, each worker $w$ in D$k$M-SKS utilizes a hybrid structure of a grid and an inverted file, because this structure is update-friendly. The grid is a set of cells, and for each cell, we implement an inverted file. More specifically, consider a subscription $s\in S(w)$ and a cell $g$ that overlaps with the ball of $s$, $B(s)$. This cell $g$ maintains $s$ in its inverted file $g.I$ (i.e., $g.I[\lambda]$ maintains $s$ if $\lambda\in s.\psi$). Given a new object $o$ broadcast by the main server, each worker $w$ obtains the cell $g$ to which $o$ is mapped. Then $w$ checks whether it needs to update the $k$NN of subscriptions in $S(w)$ found via $g.I$ (i.e., subscriptions that do not contain any keywords in $o.\psi$ are pruned). If necessary, $w$ updates the $k$NN of the corresponding subscriptions and then updates $g.I$ accordingly. (a) Subscription partitioning of D$k$M-SKS (b) Subscriptions assigned to $w_{1}$ and the data structure maintained by $w_{1}$ (c) Subscriptions assigned to $w_{2}$ and the data structure maintained by $w_{2}$ Figure 6.
An example of subscription partitioning and assignment of D$k$M-SKS Example 6. D$k$M-SKS partitions $S$ in Figure 2 into two disjoint subsets for the two workers $w_{1}$ and $w_{2}$. Assume that the table in Figure 6(a) depicts the estimated cost of each subscription in D$k$M-SKS. Assume furthermore that the result of partitioning is $\{S_{1},S_{2},S_{3},S_{4},S_{5}\}$, which is also shown in Figure 6(a). Following Algorithm 3, the result of subscription assignment of D$k$M-SKS is $\{s_{1},s_{3},s_{7},s_{8}\}$ for $w_{1}$ and $\{s_{2},s_{4},s_{5},s_{6},s_{9},s_{10}\}$ for $w_{2}$, as illustrated in Figures 6(b) and 6(c), respectively. Consider a case where a new object $o_{10}$ (see Figure 4), which contains keyword b, is generated. Since it is mapped to $g_{3}$, $w_{1}$ needs to consider $s_{7}$, which can be seen from $g_{3}.I[\textsf{b}]$. Similarly, $w_{2}$ needs to consider $s_{4}$. Compared with Example 4, D$k$M-SKS achieves better load balancing. ### 4.4. Dealing with New Subscriptions Recall that the estimated cost of a subscription $s$ is obtained from $S_{r}\subset S$ ($s\in S_{r}$), as described in Equations (4) and (5). When a new subscription $s_{n}$ is registered, it does not belong to any subset of $S$. A straightforward approach to providing $s_{n}$ with an estimated cost is to re-run Algorithm 2, but this is obviously very computationally expensive. It is nevertheless desirable that $s_{n}$ be handled as if it had been registered at the initial time. We therefore take an approximate approach to estimating the cost of $s_{n}$. Consider the collection $\mathcal{S}$ of subsets of $S$ obtained by Algorithm 2. The main server maintains $R$ for each subset $S_{r}\in\mathcal{S}$. Given a new subscription $s_{n}$, we first do the same operation as the space-oriented partition: the main server computes its $k$NN among $O_{init}$, and then computes the rectangle $R_{n}$ that encloses $B(s_{n})$.
Let $R\cap R_{n}$ denote the overlap between $R$ and $R_{n}$, and let $|R\cap R_{n}|$ be its area. The main server computes (7) $S^{*}=\mathop{\rm arg\,max}\limits_{S_{r}\in\mathcal{S}}|R\cap R_{n}|.$ (In practice, $|\mathcal{S}|$ is small, so the cost of this computation is trivial.) Let $R^{*}$ be the rectangle of $S^{*}$; by construction, $R^{*}$ is the rectangle that overlaps with $R_{n}$ the most. By using Equation (5) with $R^{*}$, $s_{n}$ obtains its estimated cost. Then $s_{n}$ is assigned to the worker $w$ with the minimum cost $C(w)$. ## 5\. Experiment ### 5.1. Setting We conducted experiments on a cluster of six machines. One of them is the main server, with a 3.0 GHz Intel Xeon Gold and 512 GB RAM. The others are equipped with a 6-core 2.4 GHz Intel Core i7-8700T and 32 GB RAM; each worker ran on a single core. The main server and workers communicate via a 1 Gbps Ethernet network. As with (wang2016skype, ), we set $|O_{init}|=1,000,000$. That is, when 1,000,000 objects had been generated, we partitioned $S$ among the $m$ workers. After that, we generated 1,000 objects and requested 100 subscription insertions and deletions at each time-stamp. Dataset. We used two real spatial-keyword stream datasets, Place (place, ) and Twitter (twitter, ). Table 1 shows the statistics of these datasets. We generated subscriptions for each dataset so that they follow the distributions of the corresponding dataset (wang2017top, ). When a subscription $s$ was generated, we randomly picked one object to determine its location $s.p$ and then picked at most five keywords uniformly at random from its keyword set to obtain $s.\psi$. The value of $s.k$ was a random integer $\in[1,k_{max}]$. Algorithm. We evaluated the following algorithms: * • PS2Stream (chen2017distributed, ): a state-of-the-art algorithm for continuous spatial-keyword range queries. We extended the original algorithm so that it can deal with our problem.
* • KOP: our first baseline, the keyword-oriented partition, introduced in Section 3.1. * • SOP: our second baseline, the space-oriented partition, introduced in Section 3.2. * • D$k$M-SKS: the solution proposed in this paper. All algorithms were implemented in C++. Parameters. The default values of $m$, $k_{max}$, and the initial $|S|$ are 20, 10, and 10,000,000, respectively. When we investigated the impact of a given parameter, the other parameters were fixed. In addition, we set $\theta=20$, $\gamma_{1}=100,000$, and $\gamma_{2}=20$ based on preliminary experiments. (a) Place (b) Twitter Figure 7. Update time as a function of time Criteria. To evaluate the performance of each algorithm, we measured the following criteria: * • Update time: the average computation time for updating the $k$NN objects of all registered subscriptions, plus the time for dealing with subscription insertions and deletions, per time-stamp. * • Load balance: the average difference between the maximum and the minimum time to finish the $k$NN update across workers, per time-stamp.

Table 1. Dataset statistics

Dataset | Place | Twitter
---|---|---
Cardinality | 9,356,750 | 20,000,000
Distinct # of keywords | 53,931 | 2,225,654
Avg. # of keywords in one object | 2.94 | 5.51

### 5.2. Results Justification of using $O_{init}$ and analysis. We first empirically demonstrate that the cost estimation based on $O_{init}$ functions well. Figure 7 depicts the time series of the update time of each algorithm. The result of D$k$M-SKS shows that its update time does not vary even as new objects are given, new subscriptions are inserted, and some subscriptions are removed, on both Place and Twitter. This suggests that D$k$M-SKS maintains load balance, which is the result of its cost estimation. Table 2 shows that the load of each worker in D$k$M-SKS is actually balanced. We see that PS2Stream also has this tendency. However, this result does not mean that PS2Stream balances the load.
We observed that the initial partition of PS2Stream is highly imbalanced, i.e., one worker $w$ has a very heavy load at initialization. Because of this, new subscriptions are assigned to the other workers, but this does not overcome the load imbalance. Therefore, the update time of PS2Stream is simply determined by the load of $w$. This is also confirmed by Table 2, which shows that the load in PS2Stream is significantly imbalanced. Next, we focus on KOP. This algorithm shows a similar trend. From Figure 7 and Table 2, we see that the load balance of KOP is reasonably good. Because both the subscription partition and the $k$NN update in KOP consider only keyword information, they go well together. However, KOP is outperformed by D$k$M-SKS, which considers both spatial and keyword information. This result confirms the effectiveness of the partitioning approach of D$k$M-SKS. Let us now consider SOP. Figure 7 shows that, unlike the other algorithms, the update time of SOP increases as time goes by. This stems from the low accuracy of its cost estimation for new subscriptions. Specifically, SOP usually underestimates the cost of a new subscription, although the actual cost is large. Because of this, the load of a worker holding new subscriptions becomes heavy and bottlenecks the system. Table 2 also demonstrates this fact.

Table 2. Load balance [msec] (default parameters)

Algorithm | PS2Stream | KOP | SOP | D$k$M-SKS
---|---|---|---|---
Place | 27490.11 | 549.23 | 14037.33 | 32.08
Twitter | 21962.50 | 486.97 | 12265.40 | 51.03

Table 3. Decomposed time [msec] on Place

Algorithm | PS2Stream | KOP | SOP | D$k$M-SKS
---|---|---|---|---
$k$NN update | 32892.90 | 7684.48 | 18549.04 | 471.00
Subscription ins. | 1.51 | 1.59 | 1110.65 | 35.36
Subscription del. | 1.54 | 1.42 | 0.79 | 1.10

Last, we investigate the details of the update time.
Table 3 decomposes the update time of each algorithm on Place into the $k$NN update time, the subscription insertion time, and the subscription deletion time, each of which includes the index update time. The result on Twitter is omitted, because its tendency is similar to that on Place. It can be seen that the main part of the update time is the $k$NN update time, and that subscription deletion incurs a trivial cost. D$k$M-SKS significantly outperforms (is more than 10 times faster than) the other algorithms and exploits the available workers to reduce the $k$NN update time. We see that the subscription insertion time of D$k$M-SKS is longer than those of KOP and PS2Stream. This is because D$k$M-SKS needs to compute Equation (7), which incurs a higher cost than Equation (2). We can also observe that the subscription insertion time of SOP is much longer than those of the others. As explained earlier, (most) new subscriptions are assigned to a single worker $w$, so $w$ incurs a long index update time. Varying $m$. We next study the impact of $m$, the number of workers. Figure 8 depicts the result. Since PS2Stream is significantly outperformed by D$k$M-SKS, we omit its result. We see that each algorithm reduces its update time as $m$ increases. This is an intuitive result, since subscriptions are distributed to more workers. The load balances of KOP and D$k$M-SKS are not much affected by $m$, since their cost estimations yield balanced loads. On the other hand, as $m$ increases, the load-balance gap of SOP decreases. The reason is simple: the update time of the worker with the largest load becomes shorter as $m$ increases. (a) Update time (Place) (b) Update time (Twitter) (c) Load balance (Place) (d) Load balance (Twitter) Figure 8. Impact of $m$ (a) Update time (Place) (b) Update time (Twitter) (c) Load balance (Place) (d) Load balance (Twitter) Figure 9. Impact of $|S|$ (a) Update time (Place) (b) Update time (Twitter) (c) Load balance (Place) (d) Load balance (Twitter) Figure 10.
Impact of $k_{max}$ Varying $|S|$. To investigate the scalability of each algorithm, we studied the influence of $|S|$. Figure 9 shows the result. Due to its load imbalance, the update time of SOP is long even when $|S|$ is small. KOP and D$k$M-SKS scale linearly with $|S|$, and D$k$M-SKS always outperforms KOP. Note that the linear scalability shows that their subscription assignment functions well. From Figs. 9(c) and 9(d), we see that the load balance of each algorithm generally becomes larger as $|S|$ increases. The increase is, however, trivial. For example, in D$k$M-SKS on Twitter, the difference is only 40 msec between the cases of $|S|=2.5\cdot 10^{6}$ and $|S|=10\cdot 10^{6}$. Varying $k_{max}$. Last, we study the impact of $k_{max}$; the result is shown in Figure 10. The update times of KOP and D$k$M-SKS increase slightly as $k_{max}$ increases. This is also a reasonable result: as $k_{max}$ increases, the probability that a new object updates the $k$NN of some subscriptions becomes higher. SOP shows a different pattern from KOP and D$k$M-SKS because of its load imbalance; its update time is simply determined by its load balance. ## 6\. Related Work Spatial-keyword search. Due to the prevalence of spatial-keyword objects, algorithms for efficiently searching them have been extensively devised (chen2013spatial, ). To index a given dataset, these algorithms employ hybrid structures of spatial indices, e.g., R-tree and quadtree, and textual indices, such as inverted files (cong2009efficient, ; zhang2016inverted, ). A well-known index is the IR-tree (cong2009efficient, ). This is essentially an R-tree in which each node contains an inverted file. The hybrid structure of a grid and an inverted file, which is employed by D$k$M-SKS, is derived from the IR-tree. An R-tree, however, is not update-efficient, because it partitions the data space based on a given dataset. We therefore did not employ R-tree-like structures.
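The grid-plus-inverted-file structure described in Section 4.3 can be sketched as follows. This is an illustrative reconstruction, not the paper's code: the uniform cell resolution, the bounding-square overlap test for $B(s)$, and the candidate routine are our own simplifications.

```python
from collections import defaultdict

class GridInvertedIndex:
    """Uniform grid over [0, extent)^2; each cell g keeps an inverted file
    g.I mapping keyword -> ids of subscriptions whose ball overlaps g."""

    def __init__(self, extent, n_cells):
        self.cell = extent / n_cells
        self.n = n_cells
        # inv[(cx, cy)][keyword] -> set of subscription ids
        self.inv = defaultdict(lambda: defaultdict(set))

    def _cells_overlapping(self, p, r):
        # All cells intersecting the bounding square of the ball B(s).
        x0 = max(0, int((p[0] - r) // self.cell))
        x1 = min(self.n - 1, int((p[0] + r) // self.cell))
        y0 = max(0, int((p[1] - r) // self.cell))
        y1 = min(self.n - 1, int((p[1] + r) // self.cell))
        return [(cx, cy) for cx in range(x0, x1 + 1)
                         for cy in range(y0, y1 + 1)]

    def insert(self, sid, p, r, keywords):
        # Register subscription sid (center p, kNN-ball radius r, keyword set)
        # in the inverted file of every overlapping cell.
        for c in self._cells_overlapping(p, r):
            for kw in keywords:
                self.inv[c][kw].add(sid)

    def candidates(self, obj_p, obj_keywords):
        # Subscriptions whose ball overlaps the object's cell and that share
        # at least one keyword with the object; all others are pruned.
        c = (int(obj_p[0] // self.cell), int(obj_p[1] // self.cell))
        out = set()
        for kw in obj_keywords:
            out |= self.inv[c].get(kw, set())
        return out
```

Because insertion and deletion only touch per-cell hash maps, the structure is cheap to update as subscriptions and kNN balls change, which is the property the text calls update-friendly.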
It is also important to note that these works consider snapshot queries and static datasets, whereas we consider continuous queries and streaming data. Some related queries support moving users (wu2013moving, ), find potential users (choudhury2016maximizing, ), or analyze the result of a spatial-keyword search (chen2015answering, ). These works also differ substantially from our problem. Distributed query processing systems. Recently, distributed spatial query processing systems have been developed on Hadoop, Spark, and Storm. (D$k$M-SKS is orthogonal to these systems.) For example, Hadoop-GIS (aji2013hadoop, ) and SpatialHadoop (eldawy2015spatialhadoop, ) support efficient processing of spatial queries, e.g., range and $k$NN queries, in MapReduce environments. However, they do not consider keyword information and cannot deal with our problem. Tornado (mahmood2015tornado, ) is a system based on Storm (storm, ) that supports spatio-textual queries. The main focus of this system is to achieve efficient spatial-keyword query processing, not to support massive subscriptions. Hence, it is not trivial for Tornado to provide subscription partitioning for continuous spatial-keyword $k$NN queries. SSTD (chen2020sstd, ) is also a system that supports spatio-textual queries on streaming data. However, SSTD requires that an object contain all keywords specified by a query in order to match it. This condition is often too strict and may result in no matching objects. Location-aware Pub/Sub. Many studies have addressed the problem of dealing with spatio-textual subscriptions. The works (li2013location, ; mahmood2018fast, ; mahmood2018adaptive, ; wang2015ap, ) considered continuous boolean range queries as subscriptions. Although PS2Stream (chen2017distributed, ) also deals with boolean range queries, it is the work most related to ours, because it assumes the same distributed setting.
Our empirical study has demonstrated that the cost model proposed in (chen2017distributed, ) is not efficient for our problem and that D$k$M-SKS significantly outperforms PS2Stream. Some studies (chen2015temporal, ; nishio2017geo, ; nishio2020lamps, ; wang2016skype, ) also tackled the problem of spatio-textual $k$NN (or top-k) monitoring. (chen2015temporal, ) considers a decay model for streaming data, while (nishio2020lamps, ; wang2016skype, ) adopt a sliding-window model. In addition, they employ an aggregation function for object scoring, i.e., spatial proximity and keyword (textual) similarity are aggregated into a single score through a weighting parameter $\alpha$. Based on this scoring function, they monitor the top-k objects for each subscription. Their techniques are specific to this scoring function and to their assumed streaming model (decay or sliding-window), and thereby cannot deal with our problem. Besides, it is well known that specifying an appropriate $\alpha$ is generally hard for ordinary users (he2012answering, ). We therefore consider boolean-based $k$NN monitoring, which is more user-friendly. ## 7\. Conclusion In this paper, to scale well to massive objects and subscriptions in location-aware Pub/Sub environments, we proposed D$k$M-SKS, a distributed solution to the problem of spatial-keyword $k$NN monitoring over massive subscriptions. D$k$M-SKS employs a new cost model to effectively reflect the load of a given subscription. Besides, D$k$M-SKS partitions a set of subscriptions so that the entire load becomes as small as possible, and then assigns each subscription to a specific worker while considering load balancing. We conducted experiments on two real datasets, and the results demonstrate that D$k$M-SKS outperforms the baselines and a state-of-the-art algorithm and scales well to massive subscriptions. ## Acknowledgments This research is partially supported by JSPS Grant-in-Aid for Scientific Research (A) Grant Number 18H04095.
## References * [1] https://archive.org/details/2011-08-SimpleGeo-CC0-Public-Spaces. * [2] http://www.ntu.edu.sg/home/gaocong/datacode.html. * [3] http://storm.apache.org/. * [4] A. Aji, F. Wang, H. Vo, R. Lee, Q. Liu, X. Zhang, and J. Saltz. Hadoop gis: a high performance spatial data warehousing system over mapreduce. PVLDB, 6(11):1009–1020, 2013. * [5] A. Almaslukh and A. Magdy. Evaluating spatial-keyword queries on streaming data. In SIGSPATIAL, pages 209–218, 2018. * [6] D. Amagata and T. Hara. Diversified set monitoring over distributed data streams. In DEBS, pages 1–12, 2016. * [7] D. Amagata and T. Hara. Identifying the most interactive object in spatial databases. In ICDE, pages 1286–1297, 2019. * [8] D. Amagata, T. Hara, and S. Nishio. Distributed top-k query processing on multi-dimensional data with keywords. In SSDBM, pages 10:1–10:12, 2015. * [9] D. Amagata, T. Hara, and M. Onizuka. Space filling approach for distributed processing of top-k dominating queries. IEEE Transactions on Knowledge and Data Engineering, 30(6):1150–1163, 2018. * [10] L. Chen, G. Cong, X. Cao, and K.-L. Tan. Temporal spatial-keyword top-k publish/subscribe. In ICDE, pages 255–266, 2015. * [11] L. Chen, G. Cong, C. S. Jensen, and D. Wu. Spatial keyword query processing: an experimental evaluation. PVLDB, 6(3):217–228, 2013. * [12] L. Chen, X. Lin, H. Hu, C. S. Jensen, and J. Xu. Answering why-not questions on spatial keyword top-k queries. In ICDE, pages 279–290, 2015. * [13] Y. Chen, Z. Chen, G. Cong, A. R. Mahmood, and W. G. Aref. Sstd: A distributed system on streaming spatio-textual data. PVLDB, 13(11):2284–2296. * [14] Z. Chen, G. Cong, Z. Zhang, T. Z. Fuz, and L. Chen. Distributed publish/subscribe query processing on the spatio-textual data stream. In ICDE, pages 1095–1106, 2017. * [15] F. M. Choudhury, J. S. Culpepper, Z. Bao, and T. Sellis. Batch processing of top-k spatial-textual queries. ACM Transactions on Spatial Algorithms and Systems, 3(4):1–40, 2018\. * [16] F. 
M. Choudhury, J. S. Culpepper, T. Sellis, and X. Cao. Maximizing bichromatic reverse spatial and textual k nearest neighbor queries. PVLDB, 9(6):456–467, 2016. * [17] G. Cong, C. S. Jensen, and D. Wu. Efficient retrieval of the top-k most relevant spatial web objects. PVLDB, 2(1):337–348, 2009. * [18] A. Eldawy and M. F. Mokbel. Spatialhadoop: A mapreduce framework for spatial data. In ICDE, pages 1352–1363, 2015. * [19] R. A. Finkel and J. L. Bentley. Quad trees a data structure for retrieval on composite keys. Acta informatica, 4(1):1–9, 1974. * [20] R. L. Graham. Bounds on multiprocessing timing anomalies. SIAM journal on Applied Mathematics, 17(2):416–429, 1969. * [21] Z. He and E. Lo. Answering why-not questions on top-k queries. IEEE Transactions on Knowledge and Data Engineering, 26(6):1300–1315, 2012. * [22] H. Hu, Y. Liu, G. Li, J. Feng, and K.-L. Tan. A location-aware publish/subscribe framework for parameterized spatio-textual subscriptions. In ICDE, pages 711–722, 2015. * [23] G. Li, Y. Wang, T. Wang, and J. Feng. Location-aware publish/subscribe. In KDD, pages 802–810, 2013. * [24] S. Luo, Y. Luo, S. Zhou, G. Cong, J. Guan, and Z. Yong. Distributed spatial keyword querying on road networks. In EDBT, pages 235–246, 2014. * [25] A. R. Mahmood, A. M. Aly, and W. G. Aref. Fast: frequency-aware indexing for spatio-textual data streams. In ICDE, pages 305–316, 2018. * [26] A. R. Mahmood, A. M. Aly, T. Qadah, E. K. Rezig, A. Daghistani, A. Madkour, A. S. Abdelhamid, M. S. Hassan, W. G. Aref, and S. Basalamah. Tornado: A distributed spatio-textual stream processing system. PVLDB, 8(12):2020–2023, 2015. * [27] A. R. Mahmood, A. Daghistani, A. M. Aly, M. Tang, S. Basalamah, S. Prabhakar, and W. G. Aref. Adaptive processing of spatial-keyword data over a distributed streaming cluster. In SIGSPATIAL, pages 219–228, 2018. * [28] S. Nishio, D. Amagata, and T. Hara. Geo-social keyword top-k data monitoring over sliding window. In DEXA, pages 409–424, 2017. * [29] S. 
Nishio, D. Amagata, and T. Hara. Lamps: Location-aware moving top-k pub/sub. IEEE Transactions on Knowledge and Data Engineering, 2020. * [30] M. Qiao, J. Gan, and Y. Tao. Range thresholding on streams. In SIGMOD, pages 571–582, 2016. * [31] S. Tsuruoka, D. Amagata, S. Nishio, and T. Hara. Distributed spatial-keyword knn monitoring for location-aware pub/sub. In International Conference on Advances in Geographic Information Systems, pages 111–114, 2020. * [32] X. Wang, W. Zhang, Y. Zhang, X. Lin, and Z. Huang. Top-k spatial-keyword publish/subscribe over sliding window. The VLDB Journal, 26(3):301–326, 2017. * [33] X. Wang, Y. Zhang, W. Zhang, X. Lin, and Z. Huang. Skype: top-k spatial-keyword publish/subscribe over sliding window. PVLDB, 9(7):588–599, 2016. * [34] X. Wang, Y. Zhang, W. Zhang, X. Lin, and W. Wang. Ap-tree: Efficiently support continuous spatial-keyword queries over stream. In ICDE, pages 1107–1118, 2015. * [35] X. Wang, Y. Zhang, W. Zhang, X. Lin, and W. Wang. Ap-tree: efficiently support location-aware publish/subscribe. The VLDB Journal, 24(6):823–848, 2015. * [36] D. Wu, M. L. Yiu, and C. S. Jensen. Moving spatial keyword queries: Formulation, methods, and analysis. ACM Transactions on Database Systems, 38(1):1–47, 2013. * [37] S. Yoon, J.-G. Lee, and B. S. Lee. Nets: extremely fast outlier detection from a data stream via set-based processing. PVLDB, 12(11):1303–1315, 2019. * [38] C. Zhang, Y. Zhang, W. Zhang, and X. Lin. Inverted linear quadtree: Efficient top k spatial keyword search. IEEE Transactions on Knowledge and Data Engineering, 28(7):1706–1721, 2016.
# Carbon nanotubes collapse phase diagram with arbitrary number of walls. Collapse modes and macroscopic analog. Y. Magnin<EMAIL_ADDRESS>F. Rondepierre W. Cui D.J. Dunstan A. San-Miguel <EMAIL_ADDRESS>MIT Energy Initiative, Massachusetts Institute of Technology, Cambridge, MA, United States Consultant, Total@Saclay NanoInnov, 2 boulevard Thomas Gobert, 91120 Palaiseau Cedex, France. Univ Lyon, Université Claude Bernard Lyon 1, CNRS, Institut Lumière Matière, Campus LyonTech - La Doua, F-69622 LYON, France School of Physics and Electronic Engineering, Jiangsu Normal University, Xuzhou 221116, China School of Physics and Astronomy, Queen Mary University of London, London, E1 4NS, UK ###### Abstract Carbon nanotubes tend to collapse when their diameters exceed a certain threshold, or when a sufficiently large external pressure is applied on their walls. The radial stability of tubes has been studied in each of these cases; however, a general theory able to predict collapse is still lacking. Here, we propose a simple model predicting stability limits as a function of the tube diameter, the number of walls, and the pressure. The model is supported by atomistic simulations and experiments, and is used to plot collapse phase diagrams. We have identified the most stable carbon nanotube, which can support a maximum pressure of $\sim$18 GPa before collapsing. The latter was identified as a multiwall tube with an internal tube diameter of $\sim$12 nm and $\sim$30 walls. This maximum pressure is lowered for other internal tube diameters and numbers of walls. We then identify a tube diameter domain in which the radial mechanical stability can be treated as equivalent to that of macroscopic tubes, known to be described by the canonical Lévy-Carrier law. This multiscale behavior is shown to be in good agreement with experiments on the collapse of O-ring gaskets, proposed as a simple macroscopic parallel to nanotubes in this domain.
###### keywords: Carbon nanotubes, High pressure, Irreversible transformation, Atomistic simulations ††journal: Carbon ## 1 Introduction Low-dimensional carbon structures such as fullerenes, graphene, carbon nanotubes (CNT), nanocones, and nano-junctions [1, 2, 3, 4, 5, 6] have deeply changed fundamental concepts of condensed matter physics during the last decades [7]. The many associated technological breakthroughs have opened perspectives in a broad range of applications, from electronics [8, 9], sensor development [10], and energy transport and storage [11, 12, 13] to biology and medical sciences, such as drug delivery technology [14]. While both graphene and CNT sp2 structures have attracted an important part of recent research efforts, there exist many hybrid structures between these two which have been much less explored [15]. In the literature, they are referred to as ”collapsed nanotubes”, ”flattened carbon nanotubes”, ”closed-edge graphene nanoribbons” or ”dogbones”. Such structures correspond to a geometrical evolution of the CNT radial cross-section, from circular towards a continuum of shapes, in which the internal walls become closer in at least one radial direction. The terms mentioned above are most frequently used when the distance between the internal walls in the collapse direction tends to the graphitic interlayer distance, where van der Waals (vdW) interactions have to be considered. For clarity, we will refer to this state as the ”collapsed” shape, and to the deviations from the circular cross-section leading to it as ”collapse transition” shapes. The latter can be either first-order-like for large tube diameters, or continuous for smaller ones [16], going through different geometries including oval, race-track or polygonal [17], all grouped in the ”collapse transition” domain.
Characterizing the CNT collapse behavior is largely motivated by the change in electronic structure from the pristine circular cross-section to the deformed or collapsed geometry [18, 19, 20, 21, 22, 23, 24]. Hence, geometrical electronic tuning based on shape modification may offer an interesting alternative to substitutional doping in nano-engineering design [25, 26, 27]. Figure 1: Scheme of the different physical mechanisms allowing the evolution of a carbon nanotube to a symmetrical collapsed structure (s): s1 External applied pressure; s2 Self-collapse for large tube diameters; s3 Defective tubes; s4 Through charge injection. Mechanisms leading to an asymmetrical collapsed structure (a): a1 Through interaction with a substrate; a2 Through interaction with other nanotubes. Soon after the first dedicated study of CNT by Iijima et al. [28], collapsed geometries were evidenced by electron microscopy on large-diameter CNT [29]. It is now generally accepted that a collapse event is favored for large tube diameters and/or small numbers of concentric tubes [30, 31, 32, 33, 34]. Other collapse parameters at ambient conditions have also been identified, including interactions between the external walls of CNT, interactions with a substrate or with molecular adsorbates [35, 36, 37], defect formation by electron beam irradiation [38], or the application of electrostatic fields [39, 40]. The different mechanisms that can be responsible for collapse are illustrated in Figure 1. For small tube diameters with a stable circular cross-section at ambient conditions, it has been shown that a high external pressure causes collapse. This was first shown for SWCNT, for which the collapse pressure has been demonstrated to be a function of the CNT diameter [41, 42, 43, 44, 45, 16, 46, 47, 48, 49, 50, 51, 52, 53, 54]. Pressure-induced collapse is also observed for double-wall [55, 56, 57, 58], triple-wall [59] and multi-wall carbon nanotubes [60].
Any effect of helicity (or chirality), a geometrical characteristic reported to originate from the configurational tube edge entropy [61], on the pressure response of CNT was reported to be small, or to occur only in tubes of very small diameters [62]. In this work, we explore the stability conditions of CNT from their circular cross-section to collapsed shapes, focusing on the stability domain as a function of the geometrical parameters (diameter and number of tube walls), as well as on the effect of an external pressure. The instability of the circular cross-section is found to be driven by the competition between the elastic energy, related to the bond-bending energy, and the external and internal forces. External forces include the pressure applied to the CNT external surfaces and interactions with the surrounding molecular medium, while internal forces include the vdW interactions between the tube walls. While a number of atomistic calculations and models have attempted to predict the CNT stability conditions [43, 63, 16, 55, 56, 47, 60, 32, 53, 54, 63], none of them fully covers the range of structural tube properties, including tube diameters, number of walls, etc. In addition, we propose a multiscale approach, showing that the collapse onset of a range of nanotubes with diameters of about nanometer size is well described by the canonical Lévy-Carrier (LC) law, formulated 150 years ago [64] and originally developed for macroscopic tube collapse. Our modified LC-based model behaves consistently in this domain with atomistic simulations, density functional tight-binding (DFTB) and molecular dynamics (MD), as well as with a macroscale analog based on O-ring gasket deformation. Our model is then used to plot collapse phase diagrams for carbon nanotubes, providing a better understanding of the radial stability and collapse mechanisms of single- and multi-wall nanotubes (MWCNT), including nanotube bundles, as a function of their diameter from one nanometer to dozens of nanometers.
We finally suggest that such an approach could be adapted to more complex porous materials. ## 2 Results and discussion ### 2.1 Theoretical SWCNT collapse model In mechanics, the radial collapse pressure $P_{c}$ of macroscopic tubes with a diameter $d_{0}$ is known to scale as $P_{c}\ \propto d^{-3}_{0}$, as expressed by Lévy-Carrier [64]. At the nanoscale, it has been shown that such a formalism is consistent for SWCNT when including an additional correction term, $\beta^{2}/d^{2}_{0}$. This term emerges both in simulations and in experiments, and may be related to the large built-in curvature energy of small tube diameters, or to the discrete nature of the nanotubes [65]. This approach is called the modified LC equation [66], and is written, $P_{c}=\frac{24D}{d^{3}_{0}}\left(1-\frac{\beta^{2}}{d^{2}_{0}}\right),$ (1) where $D$ is the bending stiffness of graphene, and $\beta$ corresponds to the diameter of the smallest free-standing stable SWCNT [66]. All parameters are given in Table 1. When $d_{0}\gg\beta$, the LC law is recovered; however, when $d_{0}<\beta$, $P_{c}<0$, corresponding to unfeasibly small unsupported tube diameters. Eq.1, originally based on experimental observations [54], has been shown to be well suited for SWCNT with diameters in the range $d_{0}\sim$ 0.7 - 2 nm, and was found to be independent of the tube chirality [66]. For increasingly large tube diameters, the correction term decreases, and the interaction of the external tube with the pressure transmitting medium (PTM) becomes more important. To include the PTM in the model, we integrate Eq.1 and add a surface energy $\gamma_{F-C}$ in order to account for the surrounding PTM (an argon bath in MD simulations) [67].
Doing so, we obtain the enthalpy of a SWCNT of length $L$ as, $H\ =\ \frac{48D}{d_{0}^{3}}\left(1-\frac{\beta^{2}}{3d_{0}^{2}}\right)\ \frac{\pi L}{4}d^{2}_{0}\ +\ P\frac{\pi L}{4}d^{2}_{0}+\ \gamma_{F-C}\ \pi L\ d_{0}.$ (2) Minimizing Eq.2 as a function of $d_{0}$, we obtain the collapse pressure for a SWCNT interacting with the PTM as, $P_{c}\ =\ \frac{24D}{d^{3}_{0}}\left(1-\frac{\beta^{2}}{d^{2}_{0}}\right)\ -\ 2\frac{\gamma_{F-C}}{d_{0}}.$ (3) ### 2.2 Theoretical SWCNT bundle collapse model For a bundle formed of SWCNT, Eq.3 can be modified following Pugno et al. [67], considering that each individual tube in a bundle interacts with its neighboring tubes, which act as a SWCNT pseudo-fluid, while the outer surface of the bundle interacts with the PTM. Then, $P_{c}$ can be obtained by minimizing the corresponding enthalpy expression, $P_{c}\ =\ \frac{24D}{d^{3}_{0}}\left(1-\frac{\beta^{2}}{d^{2}_{0}}\right)\ -\ 2\left(\frac{\gamma_{C-C}}{d_{0}}\ +\ \frac{\gamma_{F-C}}{d_{B}}\right)$ (4) where $d_{B}$ represents the bundle diameter. In Eq.4, $\gamma_{C-C}$ is the carbon formation energy, corresponding to the inter-tube vdW interactions, while the last term represents the interaction between the PTM and the external surface area of the bundle. It is worth noting that, following Pugno et al. [32] and consistent with atomistic modeling [59], the bundle will have undergone polygonization before collapse. ### 2.3 Theoretical MWCNT collapse model To go a step further, we now generalize our model to MWCNT. We follow the methodology proposed by Gadakar et al. [56], in which friction between tubes is neglected so that the bending stiffnesses are additive. Thus, the net external pressure needed to collapse the $N$ walls of a MWCNT is written as the sum of the pressures $P_{c_{i}}$ needed to collapse the $i=1$ to $N$ corresponding individual SWCNT. The pressure energy is distributed between the different tubes, leading to the additive character of $P_{c}$.
In a MWCNT, the inter-wall vdW interactions largely compensate, since each individual inner tube interacts both with its next inner and next outer neighbors, $i$-1 and $i$+1 respectively. However, it is necessary to account for the interaction of the innermost tube ($i$=0) with only its next larger neighbor ($i$=1), for the interaction of the outermost tube ($i$=$N$-1) with ($i$=$N$-2), and finally with the external PTM (the last term in Eq.5). Accounting for all these interactions, the collapse pressure becomes, $\begin{split}P_{c}&=\ \sum^{N-1}_{i=0}\ \frac{24D}{d_{i}^{3}}\ \left(1-\frac{\beta^{2}}{d_{i}^{2}}\right)\ -\ 2\left(\ \frac{\gamma_{C-C}}{d_{0}}\ -\ \frac{\gamma_{C-C}}{d_{N-1}}\ +\ \frac{\gamma_{F-C}}{d_{N-1}}\right).\end{split}$ (5) Inter-tube distances are reported to range between 0.27 nm and 0.42 nm; however, the most common distances in MWCNT are about 0.32-0.35 nm [68]. For the sake of simplicity, we consider that all walls in a MWCNT are separated by the graphitic interlayer distance $\delta$ (see Table 1), so that $d_{i}=d_{0}+2\delta\cdot i$ in Eq.5. We note that even if a general MWCNT can no longer be considered a thin tube, our evaluation of the collapse pressure as a sum of contributions of individual SWCNT in interaction with their environment allows us to keep using the LC expression, which is strictly valid only for thin-walled tubes. Table 1: Parameters used in the various Lévy-Carrier models. $D$ | 1.7 (eV) | Ref [66, 69] ---|---|--- $\beta$ | 0.44 (nm) | Ref [66] $\gamma_{F-C}$ | 0.11 (J/m2) | This work $\gamma_{C-C}$ | 0.23 (J/m2) | Ref [69, 70, 71] $\delta$ | 0.34 (nm) | Ref [69] ### 2.4 Numerical simulations, experiments and model validation In order to check the accuracy of our model, we compare it with both numerical and experimental data.
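Before turning to the data, the bookkeeping of Eqs.3 and 5 can be made explicit. The sketch below (our own Python transcription, with the Table 1 parameters; function names are ours) evaluates Eq.5 and checks that for $N=1$ the $\gamma_{C-C}$ terms cancel, recovering the single-wall expression of Eq.3.

```python
# Transcription of Eqs. (3) and (5) with Table 1 parameters, SI units.
EV = 1.602176634e-19
D = 1.7 * EV        # bending stiffness (J)
BETA = 0.44e-9      # m
G_FC = 0.11         # PTM-carbon surface energy (J/m^2)
G_CC = 0.23         # inter-tube vdW surface energy (J/m^2)
DELTA = 0.34e-9     # graphitic inter-wall distance (m)

def pc_swcnt(d0):
    """Eq. (3): SWCNT interacting with the PTM (Pa)."""
    return 24.0 * D / d0**3 * (1.0 - BETA**2 / d0**2) - 2.0 * G_FC / d0

def pc_mwcnt(d0, n):
    """Eq. (5): N-wall tube, innermost diameter d0, with d_i = d0 + 2*delta*i."""
    elastic = sum(24.0 * D / (d0 + 2.0 * DELTA * i)**3
                  * (1.0 - BETA**2 / (d0 + 2.0 * DELTA * i)**2)
                  for i in range(n))
    d_out = d0 + 2.0 * DELTA * (n - 1)
    return elastic - 2.0 * (G_CC / d0 - G_CC / d_out + G_FC / d_out)

# For N = 1, d_out = d0 and the gamma_CC terms cancel: Eq. (5) -> Eq. (3).
assert abs(pc_mwcnt(1.0e-9, 1) - pc_swcnt(1.0e-9)) < 1.0
# Adding walls stiffens the tube: P_c grows with N at fixed d0.
assert pc_mwcnt(1.0e-9, 5) > pc_mwcnt(1.0e-9, 1)
```

The $N=1$ reduction is a useful internal consistency check of the additive construction described above.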
Simulations have been performed with two methods: DFTB for small tube diameters [72], and MD based on the empirical AIREBO bond-order potential [73], which accounts both for C-C covalent bonds and for long-range vdW interactions [74]. All simulations have been performed for tubes or bundles immersed in an Ar bath at 300 K. Ar-Ar and C-Ar interactions have been modeled by a (12-6) Lennard-Jones potential, using the Lorentz-Berthelot mixing rule (see the Methods section for simulation details). In Fig.2.a, we show the evolution of $P_{c}$ as a function of the tube diameter from DFTB, MD, and from the theoretical models detailed above. As can be seen, for small $d_{0}$, the modified LC approach (light green line) is in good agreement with the DFTB simulations (yellow stars). When $d_{0}<$0.57 nm, $P_{c}$ decreases, corresponding to a situation where tube diameters are so small that the curvature energy is large enough to make the tubes unstable. A deviation is observed with respect to the macroscopic LC law (dashed black line), which does not include the small-diameter correction term discussed above. When $d_{0}>$3 nm, the vdW-LC approach (red line) shows a sharp decrease of $P_{c}$, corresponding to tube diameters where interactions between the PTM and the tube walls dominate, while the curvature energy becomes negligible. SWCNT simulations (red circles) show a self-collapse diameter of 5.3 nm, in good agreement with the reported experimental data; about 5.1 nm is cited in the review of He et al. [15]. This agreement then allows us to fit $\gamma_{F-C}$ (see Table 1) in the vdW-LC model for SWCNT. As shown in Fig.2.a, we find an excellent agreement between simulations and the theoretical model over the full diameter range simulated. Figure 2: a.
Collapse pressure $P_{c}$ of SWCNT as a function of the tube diameter $d_{0}$, using three models based on the Lévy-Carrier formula: the standard LC (black dashed line), the modified LC (green line), and the vdW-LC developed in this work for SWCNT (red line) and bundles (blue line). The results are compared to numerical simulations: DFTB (yellow stars) and MD simulations with a long-range bond-order potential for SWCNT (red circles) and bundles (blue squares). b. Innermost tube diameters $d_{0}$ of MWCNT for which collapsed configurations have been observed, plotted against the number of tube walls $N$. The red circles correspond to experimental observations of collapsed tubes, while the yellow square corresponds to an experimental determination of the collapse pressure for SWCNT. The black line corresponds to the prediction of our vdW-LC model. The gray area corresponds to the phase where tubes are not collapsed (circular cross-section), while the part of the diagram above corresponds to nanotubes collapsed at ambient pressure. We then applied the vdW-LC model to bundles. In Fig.2.a, the blue squares correspond to the results of simulations performed on bundles made of 37 SWCNT. As can be seen, the self-collapse diameter is smaller than that of isolated SWCNT, a behavior already observed in [75]. This behavior can be explained by the contribution of the polygonization of the tubes to the surface energy, which results from inter-tube interactions and tends to lower $P_{c}$. Using Eq.4 and the $\gamma_{F-C}$ previously fitted from isolated SWCNT, we see that the vdW-LC model applied to the bundle configuration is in good agreement with simulations using the C-C interaction energy $\gamma_{C-C}$ (see Table 1). Note that in the bundles used in simulations, $d_{B}\gg d_{0}$, so that the interaction between the PTM and the external bundle surface can be neglected compared to the inter-tube interactions.
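The self-collapse diameter discussed above can be recovered directly from Eq.3: it is the large-diameter zero of $P_{c}$. A minimal bisection sketch (our own, with the Table 1 parameters) lands near 5.4 nm, consistent with the 5.3 nm obtained in the MD simulations and the $\sim$5.1 nm experimental value.

```python
# Self-collapse diameter at ambient pressure: large-d0 root of Eq. (3).
EV = 1.602176634e-19
D = 1.7 * EV
BETA = 0.44e-9
G_FC = 0.11  # J/m^2, fitted surface energy (Table 1)

def pc_swcnt(d0):
    """Eq. (3) collapse pressure (Pa)."""
    return 24.0 * D / d0**3 * (1.0 - BETA**2 / d0**2) - 2.0 * G_FC / d0

lo, hi = 1.0e-9, 1.0e-8           # bracket: P_c > 0 at 1 nm, < 0 at 10 nm
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if pc_swcnt(mid) > 0.0:
        lo = mid
    else:
        hi = mid
print(0.5 * (lo + hi) * 1e9)      # ~5.4 nm
```

Larger tubes are thus predicted to collapse spontaneously under the PTM surface interaction alone.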
In Fig.2.b, we compare experiments with the vdW-LC model for MWCNT (Eq.5). The tube stability is presented as the innermost tube diameter $d_{0}$ against the number of tube walls $N$ at ambient pressure. The continuous line separating the two domains represents the critical internal diameter for collapse of MWCNT at ambient pressure. The experimental points (red circles) correspond to collapsed tubes observed by electron microscopy, extracted from the different sources summarized in the work of Balima et al. [34]. As expected, these points are mainly found in the collapsed domain. We have highlighted a particular point from the work of He et al. [15] (yellow square), which corresponds to a determination of the critical collapse diameter from observations of many SWCNT. This point should therefore lie on the limiting curve, and is indeed found to be in very good agreement with the prediction of our model. We also note that two points are found below the theoretical prediction. These exceptions may be explained by interactions with the substrate (Fig.1.a1), which are known to favor tube collapse [76]. Overall, our prediction is very consistent with experiments. The result presented in Fig.2.b also shows that with an increasing number of tube walls, MWCNT of larger internal diameter can be stabilized at ambient conditions. Note that the inter-wall vdW interactions may lead to the metastability of a collapsed geometry at a diameter below the critical one [29, 75, 32, 34]. The critical diameter may also be strongly reduced for defective carbon nanotubes [33]. ### 2.5 Nanotube collapse phase diagram We have demonstrated that the vdW-LC model is a robust, simple, and suitable approach for predicting the stability of single- and multi-wall carbon nanotubes, as well as of bundles. We now use this model to determine the nanotube collapse phase diagram. In Fig.3.a, we plot the stability diagram of MWCNT at ambient pressure, extending beyond the domain explored in Fig.2.b.
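The limiting curve of Fig.2.b can be traced numerically from Eq.5: for each $N$, the critical internal diameter at ambient pressure is the large-$d_{0}$ root of $P_{c}(d_{0},N)=0$. The sketch below (our own transcription, Table 1 parameters) scans $N$ and recovers a maximum critical diameter of roughly 12 nm around $N\approx 45$ walls.

```python
# Ambient-pressure stability boundary from Eq. (5): d0 such that P_c = 0.
EV = 1.602176634e-19
D = 1.7 * EV
BETA = 0.44e-9
G_FC, G_CC, DELTA = 0.11, 0.23, 0.34e-9

def pc_mwcnt(d0, n):
    """Eq. (5) collapse pressure (Pa)."""
    elastic = sum(24.0 * D / (d0 + 2.0 * DELTA * i)**3
                  * (1.0 - BETA**2 / (d0 + 2.0 * DELTA * i)**2)
                  for i in range(n))
    d_out = d0 + 2.0 * DELTA * (n - 1)
    return elastic - 2.0 * (G_CC / d0 - G_CC / d_out + G_FC / d_out)

def critical_d0(n):
    """Large-d0 root of P_c(d0, N) = 0: stability limit at ambient pressure."""
    lo, hi = 1.0e-9, 5.0e-8       # bracket: P_c > 0 at 1 nm, < 0 at 50 nm
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if pc_mwcnt(mid, n) > 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

best_n = max(range(1, 151), key=critical_d0)
print(best_n, critical_d0(best_n) * 1e9)   # ~45 walls, d0 ~ 12 nm
```

Beyond the maximum, the curve slowly decays toward its asymptotic value as the inter-tube vdW terms take over.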
The stable phase, either circular or collapsed, depends on $d_{0}$ and $N$. There also exists an instability domain, i.e., a region in which tubes cannot exist, at very small $d_{0}$ (red domain in Fig.3.a,b). Hence, with an increasingly large $d_{0}$, the number of tube walls has to be increased in order to stabilize a circular MWCNT. Nevertheless, the critical $d_{0}$ reaches a maximum of $d_{0}$=12 nm at $N$=45 walls, corresponding theoretically to the largest possible internal cavity in a MWCNT. For larger $N$, $d_{0}$ slightly decreases and becomes asymptotically constant at $d_{0}$=11.3 nm. This behavior results from the competition between the inter-tube vdW and the tube-PTM interactions, corresponding to the second term in Eq.5. Figure 3: a. Stability diagram of carbon nanotubes at ambient pressure as a function of the internal diameter $d_{0}$ and the number of tube walls $N$. Three regions are defined from bottom to top: (1) the red phase corresponds to tubes that cannot exist due to too small an internal diameter, (2) the gray phase corresponds to the stability domain of tubes with a circular cross-section, and (3) the white phase corresponds to the stability domain of collapsed tubes. b. Zoom on the red phase in (a) (unstable tubes), for small tube diameters. c. Stability diagram for MWCNT with $d_{0}$=0.56 nm, corresponding to the tubes found to show the highest collapse pressure. The red dashed line corresponds to the maximum collapse pressure found in the model. In Fig.3.b, we see that the smallest possible tube diameter for a free-standing SWCNT, corresponding to $d_{0}$=0.44 nm [66], can be slightly reduced by increasing the number of tube walls. This result is qualitatively supported by the literature, where $d_{0}\sim$0.407 nm has been reported for double-wall carbon nanotubes [77], and $d_{0}\sim$0.3 nm for a $N$=13 MWCNT [78]. Our calculations do not allow for internal tube diameters below $\sim$0.43 nm.
Such a limitation could result from the tube-substrate interactions, the discrete distribution of carbon atoms on the wall, or small-diameter helicity effects, all neglected in the model and beyond the scope of this work. Figure 4: a. Collapse pressure $P_{c}$ as a function of the innermost tube diameter $d_{0}$ for different numbers of tube walls $N$, ranging from 1 to 100, determined from the theoretical model accounting for vdW interactions. The gray area represents the multiscale domain, i.e., a $d_{0}$ range where the original macroscopic LC model agrees with the vdW-LC model at the nanoscale (see text and (b) for details and criteria). b. Normalized collapse pressure $P_{c}d_{0}^{3}$ as a function of the internal diameter for tubes with $N$=1, 2, 3, 5, 10, 20 and 100 walls, from inner to outer curves. The continuous part of the curves represents the domain in which, within $\pm$5 % accuracy, a macroscopic LC model (horizontal dashed lines) can be used to approximate the vdW-LC one. c. Domain of validity of the continuum mechanics corresponding to the LC model (gray area). We have considered that the LC model is valid when it differs by less than 5% from a linear approximation in the $P_{c}d_{0}^{3}$ representation as a function of $d_{0}$. The maximum pressure needed to collapse any MWCNT can now be found by searching for the maximum collapse pressure from Eq.5, as a function of $d_{0}$ and $N$. The maximum $P_{c}$ is found invariably at $d_{0}\sim$0.56 nm, whatever the value of $N$. Its value evolves from $P_{c}\sim$13.9 GPa for SWCNT and progressively increases with $N$, converging to a maximum of $P_{c}$=18.2 GPa. As shown in Fig.3.c, we note the quick convergence of $P_{c}$ for $N$ ranging from 1 to 4.
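This maximum follows from a direct scan of Eq.5 over $d_{0}$; a sketch (our own, with the Table 1 parameters) recovers roughly 13.9 GPa for $N=1$ and the large-$N$ limit of about 18.2 GPa, both near $d_{0}\approx 0.56$ nm.

```python
# Grid search of the maximum of Eq. (5) over the innermost diameter d0.
EV = 1.602176634e-19
D = 1.7 * EV
BETA = 0.44e-9
G_FC, G_CC, DELTA = 0.11, 0.23, 0.34e-9

def pc_mwcnt(d0, n):
    """Eq. (5) collapse pressure (Pa); reduces to Eq. (3) for n = 1."""
    elastic = sum(24.0 * D / (d0 + 2.0 * DELTA * i)**3
                  * (1.0 - BETA**2 / (d0 + 2.0 * DELTA * i)**2)
                  for i in range(n))
    d_out = d0 + 2.0 * DELTA * (n - 1)
    return elastic - 2.0 * (G_CC / d0 - G_CC / d_out + G_FC / d_out)

def max_pc(n, dmin=0.45e-9, dmax=1.2e-9, steps=1500):
    """Return (d0*, P_c max) for an N-wall tube, on a uniform d0 grid."""
    best_d, best_p = dmin, pc_mwcnt(dmin, n)
    for k in range(1, steps + 1):
        d = dmin + (dmax - dmin) * k / steps
        p = pc_mwcnt(d, n)
        if p > best_p:
            best_d, best_p = d, p
    return best_d, best_p

d1, p1 = max_pc(1)        # single wall: ~13.9 GPa near d0 ~ 0.56 nm
d100, p100 = max_pc(100)  # large N: converges to ~18.2 GPa at the same d0
```

Without the PTM term, the optimum of Eq.1 alone sits at $d_{0}=\beta\sqrt{5/3}\approx 0.57$ nm, which explains why the position of the maximum barely moves with $N$.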
The number of walls is thus found to increase the stability pressure by about $\sim$30%, consistent with observations of deformed or collapsed geometries for MWCNT with a relatively high number of walls in nanocomposites under compression [34]. We now use the model to find how $P_{c}$ evolves as a function of $d_{0}$, for $N$ ranging from 1 to 100, in the collapse phase diagram, Fig.4.a. As can be seen, for small $d_{0}$, $N$ has no significant effect on $P_{c}$, so that all curves approximately collapse onto a single one for internal diameters below $d_{0}\sim$1 nm. For larger $d_{0}$, $N$ plays a more significant role in tube stability. Interestingly, in Fig.4.b, the collapse pressures obtained with the LC model are comparable with those from the vdW-LC model for certain tubes. We define the multiscale domain as the range where $P_{c}d_{0}^{3}$, plotted as a function of $d_{0}$, can be approximated by a constant, as in the LC law. We show this behavior for $N=$ 1, 2, 3, 5, 10, 20 and 100, where the continuous parts of the plots can be assimilated to horizontal lines, assuming an uncertainty on experimental pressure determinations of the order of $\pm 5$ %. In Fig.4.c, we show the tube diameter as a function of the number of walls within the multiscale domain. We observe that SWCNT with diameters ranging from 0.95 to 2.5 nm behave as macroscopic tubes. This diameter domain shifts to higher diameters with increasing $N$, converging for $N=$ 20 to internal diameters ranging from $\sim$3.5 to $\sim$7.5 nm. The chosen accuracy of $\pm 5$ % corresponds to a conservative choice of pressure determination in high-pressure measurements [79]. This link between the nano- and the macroscopic scales can be understood from the dominant role played by the innermost tube in the pressure stability, as shown in Fig.3.a. It is noteworthy that such a multiscale behavior has been previously reported in the case of SWCNT [43, 80].
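The $\pm$5% criterion can be made explicit for $N=1$: starting from the maximum of $P_{c}d_{0}^{3}$, one expands the interval while the curve stays within a $\pm$5% band around its best horizontal fit, i.e., while $f\geq f_{\max}\times 0.95/1.05$. The sketch below (our own reading of the criterion, Table 1 parameters) recovers a window close to the reported 0.95-2.5 nm.

```python
# +/-5% "multiscale" window for N = 1, from P_c * d0^3 of Eq. (3).
EV = 1.602176634e-19
D = 1.7 * EV
BETA = 0.44e-9
G_FC = 0.11

def pc_d3(d0):
    """P_c * d0^3 from Eq. (3); a constant in the pure LC law."""
    return 24.0 * D * (1.0 - BETA**2 / d0**2) - 2.0 * G_FC * d0**2

grid = [0.6e-9 + k * 1.0e-12 for k in range(3401)]   # 0.6 to 4.0 nm
vals = [pc_d3(d) for d in grid]
i_max = vals.index(max(vals))
# Within +/-5% of the best horizontal line <=> f >= f_max * 0.95/1.05.
thr = vals[i_max] * 0.95 / 1.05
lo = i_max
while lo > 0 and vals[lo - 1] >= thr:
    lo -= 1
hi = i_max
while hi < len(vals) - 1 and vals[hi + 1] >= thr:
    hi += 1
print(grid[lo] * 1e9, grid[hi] * 1e9)   # ~0.95 and ~2.5 nm
```

The left edge of the window is set by the $\beta^{2}/d_{0}^{2}$ curvature correction and the right edge by the PTM surface term, which is why the window shifts upward with $N$.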
## 3 A macroscopic model for the collapse of carbon nanotubes We have shown above that Eq.1 is a suitable multiscale equation for comparing the collapse behavior of macroscopic tubes with that of certain tubes at the nanoscale. It thus consistently predicts $P_{c}$ for SWCNT with diameters ranging from $d_{0}\sim$ 1-2 nm and for MWCNT with a limited number of tube walls. Note that this diameter range corresponds to the most common tube diameters produced experimentally [81]. Figure 5: a. Schema and picture of the experiment, in which O-rings are collapsed by a fluid pressure medium (air or water). b. Averaged collapse pressure $P_{c}$ of O-rings in the experiment described in (a), as a function of their diameters $d$ (red circles). Experimental data are found to fit $P_{c}\propto d^{-3.0\pm 0.7}$ (black line). c. Schema and picture of the experiment in which O-rings are collapsed by a solid pressure medium (ball bearings). In order to verify this theoretical result, we have developed a table-top experiment using macroscopic nitrile rubber gaskets to mimic CNT. In this experiment, nanotubes are replaced by macroscopic toroidal rings (O-rings) with diameters $d_{0}$=11.25, 13.7, 16.25 and 28.75 mm. The O-rings are placed between two transparent plates to limit movement in the direction of the torus axis. A pressure vessel is made by using a large outer O-ring surrounding the smaller inner one. The vessel is connected to a bicycle pump with a pressure gauge to generate and monitor the pressure acting on the inner O-ring (Fig.5.a). The deformation of this O-ring is measured as a function of the applied pressure and quantified by image analysis (details are given in the Methods section; videos of the experiment are also available in the Supplementary Information). The O-rings used in this experiment are not strictly speaking tubes, but rings, and we first verify that they follow the LC law.
To do so, four measurements per O-ring diameter have been performed, and the averaged data are shown in Fig.5.b. The data are fitted by an inverse power law, $P_{c}=a\,d^{-\alpha}$ with $a$=1.51 J (a constant) and $\alpha$=3.0 $\pm$ 0.7, as expected from Eq.1. The O-rings are thus demonstrated to have a radial mechanical response similar to that of macroscopic tubes, and can reasonably be used for comparison with SWCNT with $d_{0}$ ranging from $\sim$1 to 3.45 nm. In Fig.6, we compare the dynamical collapse behavior of a macroscopic O-ring bundle with data from MD simulations at the nanoscale. In the experiment, a bundle of 37 O-rings is surrounded by a bath of steel ball bearings acting as a PTM. Pressure is applied by tightening a sheathed guitar string surrounding the ball bearings (Fig.5.c). Figure 6: a. Evolution of an O-ring bundle immersed in the ball-bearing pressure-transmitting medium during a compression cycle. b. Collapse behavior of a SWCNT bundle immersed in a solid argon medium. The simulation was performed at pressures ranging from 1 to 3 GPa, with pressure increasing from left to right. The change of PTM (ball bearings instead of air or water) compared to the previous experiment (Fig.5.a) is necessary to match the simulations. Indeed, the pressure needed to collapse a bundle of SWCNT with diameters around 1 nm is a few GPa, a pressure at which the argon PTM used in simulations and experiments is solid [82]. The ball bearings tend to form a hexagonal close-packed planar domain, which mimics a macro-crystalline argon PTM. In the simulation, we have built a bundle formed of 37 SWCNT with $d_{0}$=1.3 nm, immersed in an argon bath at $P\sim$2 GPa. Collapse experiments on O-rings are compared to these MD simulations in Fig.6. The collapse process in both cases is found to be in good qualitative agreement. In the early stages, tubes show a small deformation. Then some of the tubes collapse to a peanut shape, while others stay circular or ovalize.
Later, in the semi-collapsed state, a large number of tubes have collapsed, but a few tubes remain ovalized. At the end of the process, all tubes show a collapsed shape. Note that both experiments and simulations show that in a solid PTM, the collapse of a SWCNT bundle, at least when it can be assimilated to a macroscopic bundle, is a complex and non-homogeneous process. It has already been shown that the collapse of even a single isolated elastic ring is not instantaneous [83], i.e., complete at $P_{c}$, but proceeds progressively through ovalization to the collapsed shape over the range $P_{c}$ to 1.5$P_{c}$; see also Fig.3 in [54]. ## 4 Conclusion We propose a simple theoretical model to determine the stability domain of carbon nanotubes as a function of their diameters and their number of walls. The geometrical stability limits at ambient pressure, as well as the collapse pressures for an arbitrary number of walls, are characterized by introducing the long-range van der Waals interactions into the modified Lévy-Carrier equation formulated for tubes at the nanoscale. The model proposed in this work is validated by numerical simulations at the nanoscale, as well as by experiments. We have thus found that, depending on the number of tube walls, nanotubes show a maximum collapse pressure ranging from $\sim$13 to 18 GPa at an inner-tube diameter of $d_{0}$=0.56 nm. It is noteworthy that our model fits experimental data despite neglecting inter-layer friction in multi-wall nanotubes. When $d_{0}$ is smaller than 0.56 nm, the collapse pressure drops sharply, due to the strong tube curvature, resulting in unstable nanotubes. For large tube diameters, the collapse pressure decreases, corresponding to the van der Waals interactions that tend to favor the collapse.
From the collapse phase diagrams plotted with the model presented in this work, we have shown that the Lévy-Carrier equation (originally established for macroscopic tubes) remains valid for tube diameters of a few nanometers, depending on the number of tube walls. This is an important result, underlining that the collapse process of most common nanotubes produced by standard experimental techniques takes place as in macroscopic tubes, linking behavior at the nano- to the macroscale. This behavior was verified by comparing numerical simulations with experiments at the nanoscale and at the macroscale, where nanotubes were replaced by polymer O-rings. We think that such an analogy may be useful for studying the mechanical deformation of nanotubes under pressure more easily, or of more complex porous systems such as metal-organic frameworks (MOF), zeolites, or even disordered porous materials such as kerogen. ## 5 Methods ### 5.1 Numerical simulations Density functional tight-binding calculations were performed using the DFTB software package [72] with the matsci-0-3 parameter set [84]. This algorithm was used only for small tube diameters, ranging from $d_{0}$=0.5 to 1.4 nm, due to its computational cost. In this approach, the Kohn-Sham density-functional theory is approximated with fitted integrals from reference calculations. The method increases simulation efficiency compared to density functional theory (DFT), while a priori retaining better accuracy than empirical approaches. The C-C Slater-Koster parameters implemented in this work have been extensively used for CNT simulations and can be found elsewhere [84]. In Fig.2.a, we have determined $P_{c}$ for a dozen armchair SWCNT. For each pressure, a random displacement of 0.002 nm is applied to each atom, and both atomic positions and cell vectors were optimized until the magnitude of all forces became smaller than $10^{-4}$ Ha/Bohr.
In this process, $P$ was increased in steps of 0.2 GPa, up to the tube collapse. This phenomenon is generally found to be abrupt, and can be easily identified from a discontinuity in the enthalpy as a function of $P$, corresponding to the transformation to a collapsed shape. In some rare cases, especially for small $d_{0}$, the discontinuity is not visible, and $P_{c}$ was determined by eye, i.e., we assigned $P_{c}$ to the first collapsed geometry found. A second set of simulations, using an MD algorithm, was performed to study larger tube diameters. MD simulations were conducted for systems with SWCNT immersed in an argon bath, in order to transmit pressure to the nanotubes. Inter-atomic interactions (Ar-Ar and Ar-C) were modeled by a (12-6) Lennard-Jones (LJ) potential, with a cutoff fixed at 2 nm, $U=4\ \epsilon\left[\left(\frac{\sigma}{r}\right)^{12}-\left(\frac{\sigma}{r}\right)^{6}\right],$ (6) where $\sigma$ corresponds to the atomic diameter, $r$ is the inter-atomic distance, and $\epsilon$ is the interaction energy between two atoms. The LJ parameters (given in Table 2) for the interactions between the different species are determined with the Lorentz-Berthelot mixing rule as, $\sigma_{ij}=\left(\frac{\sigma_{i}\ +\ \sigma_{j}}{2}\right)\mbox{\hskip 28.45274pt and \hskip 28.45274pt}\epsilon_{ij}=\sqrt{\epsilon_{i}\epsilon_{j}}.$ (7) Table 2: Lennard-Jones parameters for argon and carbon atoms. Atoms | $\sigma$ (nm) | $\epsilon$ (meV) ---|---|--- Ar-Ar | 0.341 | 10.3 C-C | 0.336 | 2.4 Ar-C | 0.338 | 5 The C-C interactions were modeled with the long-range bond-order potential AIREBO [73]. This potential models both the carbon covalent bonds and the long-range vdW interactions, with a cutoff fixed at 2 nm.
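As a quick check, the mixing rule of Eq.7 applied to the Ar-Ar and C-C parameters gives the cross terms directly; the arithmetic below (a sketch of ours) yields $\sigma_{Ar-C}\approx$ 0.339 nm and $\epsilon_{Ar-C}\approx$ 4.97 meV, consistent with the tabulated Ar-C values up to rounding.

```python
import math

# Lorentz-Berthelot mixing rule (Eq. 7) applied to the Table 2 parameters.
sigma_ar, eps_ar = 0.341, 10.3   # Ar-Ar: nm, meV
sigma_c, eps_c = 0.336, 2.4      # C-C:   nm, meV

sigma_arc = 0.5 * (sigma_ar + sigma_c)   # arithmetic mean of diameters
eps_arc = math.sqrt(eps_ar * eps_c)      # geometric mean of well depths

print(sigma_arc, eps_arc)   # ~0.339 nm, ~4.97 meV
```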
The first step of the simulations consists of generating an Ar bulk phase with the grand canonical Monte Carlo algorithm GCMC ($\mu_{Ar}$,$V$,$T$), where $\mu_{Ar}$ is the chemical potential of Ar, $V$ the volume of the simulation box, and $T$=300 K the temperature. GCMC was thus performed to generate a pure argon bulk configuration at a pressure of about $P$=2 GPa. A tube (or a bundle) is then inserted inside the bulk configuration, and the inner part of the tube is cleared of argon atoms. MD simulations are then performed in the isothermal-isobaric ensemble ($N$,$P$,$T$), and $P_{c}$ is determined from the enthalpy as a function of $P$. When tube collapse occurs, the tube energy changes abruptly, and the collapse pressure can be easily identified. In Fig.7, we show the tube shape evolution for a SWCNT with $d_{0}\sim$5 nm at $P\sim$0.5 MPa. As can be seen, the tube goes from a circular to a collapsed shape, corresponding to its equilibrium state in such thermodynamic conditions. For all MD simulations performed in this work, we used zigzag SWCNT. Figure 7: Shape evolution of a SWCNT with $d_{0}\sim$5 nm immersed in an argon bath at a pressure of $P\sim$0.5 MPa. The first two snapshots correspond to out-of-equilibrium configurations; the last configuration corresponds to the equilibrium collapsed state. ### 5.2 Experiments In order to mimic the elastic properties of the radial buckling of SWCNT, we have used toroidal elastomer gaskets (O-rings), commonly used as vacuum seals. O-rings were placed between two transparent plates of PMMA (poly(methyl methacrylate)) of 25 mm thickness, while a gasket of larger diameter was used to create a cavity around the smaller one. We then used torque wrenches to ensure a uniform and weak pressure on the O-ring. Both the O-rings and the internal plate surfaces were oiled in order to reduce friction.
Holes were then drilled into the top plate and used to connect a bicycle pump and a manometer, to vary and monitor the pressure respectively, Fig.8. A schema and a picture of the experiment are shown in Fig.5.a. Doing so, we were able to observe the effect of a radially applied pressure on O-rings of different diameters ($d_{0}$=11.25, 13.7, 16.25 and 28.75 mm), taking care that displacements are constrained to the plane. In order to validate that O-rings can serve as analogs of SWCNT in the consistent LC domain, we have first shown that they verify the macroscopic LC approach itself, i.e., that the collapse pressure is inversely proportional to the cube of the torus diameter, Fig.5.b. Figure 8: Determination of the collapse pressure for individual O-rings from video image analysis during the compression, following the scheme of Fig.5.a. a. Time evolution of O-ring radial sectors, measured in pixels from the image. The color scale represents the radial distribution of the detected pixels. The higher this value, the more circular the O-ring; conversely, the lower this value, the more the O-ring deviates from a circular shape, corresponding to a collapse situation. b. Pressure read from the manometer during the image analysis. The correlation with (a) allows the collapse pressure to be determined. See videos in the supplementary material. The second experiment conducted in this work is devoted to O-ring deformation using a pressure-transmitting medium that mimics the situation where SWCNT are immersed in a high-pressure ($>$1 GPa) argon bath. To do this, we have used bearing balls enclosed by a guitar string, which is tightened to increase the pressure on the O-ring surfaces. As shown in Fig.9.a, when the string loop is tightened, the balls organize into a close-packed hexagonal structure. This crystalline medium shows grains, as in a polycrystalline solid.
Note that the grain structure induces rugosity at the surface of the O-rings that replicates the situation at the nanoscale, see Fig.3 in [54]. In order to limit such spurious effects, we have created a mixed medium by disseminating other types of particles, such as plastic beads, into the ball-bearing medium, Fig.9.a. Despite this trick, we were not able to fully collapse this O-ring, as the remaining grain domains prevented collapse. We show the expected four-lobe geometry in a fully collapsed shape, obtained by MD simulations, in Fig.9.b. Figure 9: a. Four-lobe collapse of an O-ring obtained using a mixed medium of ball bearings and plastic beads (red colour). The mixed medium leads to the formation of smaller crystalline domains in the pressure-transmitting medium. b. Numerical simulation of tube collapse with a metastable state corresponding to a four-lobe shape (the pressure-transmitting medium is not shown in this snapshot). ## Acknowledgments We acknowledge the platform PLECE of the University de Lyon and iLMTech (CNRS and University Claude Bernard Lyon 1). Y. Magnin gratefully acknowledges the Computational Center of Cergy-Pontoise University (UCP) for the computational time. A. San-Miguel and D. J. Dunstan acknowledge the support of the 2D-PRESTO ANR-19-CE00-0027 project. ## References * [1] H. W. Kroto, J. R. Heath, S. C. O’Brien, R. F. Curl, R. E. Smalley, C60: Buckminsterfullerene, Nature 318 (6042) (1985) 162–163. * [2] A. K. Geim, K. S. Novoselov, The rise of graphene, in: Nanoscience and technology: a collection of reviews from nature journals, World Scientific, 2010, pp. 11–19. * [3] S. Iijima, T. Ichihashi, Single-shell carbon nanotubes of 1-nm diameter, Nature 363 (6430) (1993) 603–605. * [4] N. Yang, G. Zhang, B. Li, Carbon nanocone: A promising thermal rectifier, Applied Physics Letters 93 (24) (2008) 243111. * [5] D. Wei, Y. Liu, The intramolecular junctions of carbon nanotubes, Advanced Materials 20 (15) (2008) 2815–2841. * [6] S. Nasir, M. Z.
# Structural Interventions in Networks††thanks: For useful comments and suggestions we thank Hanming Fang, Ben Golub, Sanjeev Goyal, Matthew Jackson, Evan Sadler, Adam Szeidl, Yves Zenou, and participants at conferences and workshops. Yang Sun Department of Economics, Sichuan University, China. Email: <EMAIL_ADDRESS>Wei Zhao Department of Economics and Decision Sciences, HEC Paris, France. Email: <EMAIL_ADDRESS>Junjie Zhou Department of Economics, National University of Singapore, Singapore. Email: <EMAIL_ADDRESS> ###### Abstract Two types of interventions are commonly implemented in networks: characteristic interventions, which influence individuals’ intrinsic incentives, and structural interventions, which target the social links among individuals. In this paper we provide a general framework to evaluate the distinct equilibrium effects of both types of interventions. We identify a hidden equivalence between a structural intervention and an _endogenously determined_ characteristic intervention. Compared with existing approaches in the literature, the perspective afforded by this equivalence offers several advantages in the analysis of interventions targeting the network structure. We present a wide range of applications of our theory, including identifying the most wanted criminal(s) in delinquent networks and targeting the key connector for isolated communities. ## 1 Introduction Social ties shape economic agents’ decisions in a connected world: which product consumers adopt, how much time pupils spend on study, how much effort workers exert within a team, whether teenagers engage in crime, and so on.111Numerous studies have highlighted the influence of networks in different contexts such as microfinance (Banerjee et al. (2013)), firm performance (Cai and Szeidl (2018)), productivity at work (Mas and Moretti (2009)), R&D (Goyal and Moraga-González (2001)), education (Sacerdote (2001); Calvó-Armengol et al.
(2009)), crime (Ballester et al. (2006)), public goods provision (Bramoullé and Kranton (2007); Allouch (2017)), brand choice (David and Dina (2004)), and policy intervention (Galeotti et al. (2020)). For recent surveys, see, for instance, Bramoullé et al. (2016); Jackson et al. (2017); Elliott et al. (2019). These social ties, structurally represented as a network, govern individual incentives and therefore collectively determine equilibrium outcomes and welfare in the society. Thus, structural intervention on social ties provides an important policy instrument for the social planner. A natural research question arises: how best to intervene in the social structure to maximize a given performance objective subject to resource constraints. This problem is inherently difficult because networks are well known to operate in complex ways. Local changes of social links among a few nodes can influence the actions of a large set of nodes, including those far away, through ripple effects. Furthermore, the influence is not homogeneous: nodes that are closer to (further from) the origin of shocks tend to be more (less) responsive. These key features of shock propagation and heterogeneous responses make the analysis of structural interventions both intriguing and challenging.222Admittedly, these two features are also true for other types of intervention, such as the characteristic intervention. As shown in Section 2.2, the problem of characteristic intervention in a fixed network is much simpler and has been extensively studied in the literature. In this paper, we propose a general yet tractable framework to quantitatively assess the consequences of an arbitrary structural intervention on social ties for the equilibrium actions. Overcoming the challenges mentioned above, we present a neat characterization result in Proposition 1 which evaluates the change of equilibrium behavior in response to changes in the network structure.
Then we apply Proposition 1 to several economic settings, such as key group removal in delinquent networks (in Section 3) and key connectors for isolated communities (in Section 4). More specifically, our model of structural intervention builds on a seminal paper by Ballester et al. (2006) (BCZ hereafter), who propose a simple yet powerful model of interactions on a fixed network.333The model in BCZ has been applied, empirically tested, and generalized extensively in the network literature; see, for example, Calvó-Armengol et al. (2009); Chen et al. (2018b); Galeotti et al. (2020). They identify the equivalence between equilibrium actions in a network game and the Katz-Bonacich centralities in sociology (Bonacich (1987)). The Katz-Bonacich centrality of a node on a network simply counts the sum of geometrically discounted walks originating from this node to all the other nodes in the network, weighted by the characteristics of the ending nodes.444This Katz-Bonacich centrality (and its variants and generalizations) plays important roles in shaping agents’ decisions in a wide range of network models; see, for instance, production networks (Acemoglu et al. (2012); Baqaee (2018); Liu (2019)) and pricing of social products (Candogan et al. (2012); Bloch and Quérou (2013); Chen et al. (2018a)). Proposition 1 in our paper characterizes the impacts of structural intervention on the equilibrium by utilizing another equivalence result: any (local) intervention on the network structure is equivalent to a (local) _endogenously determined_ intervention on characteristics. Specifically, we find that the equilibrium induced by a structural intervention coincides with that induced by an endogenously determined characteristic intervention that leaves the network structure unchanged. Moreover, this endogenously determined characteristic intervention only changes the characteristics of the players whose social ties are altered by the structural intervention.
The analysis of the post-intervention equilibrium becomes much simpler after translating a structural intervention into a characteristic intervention, since the latter changes individuals’ equilibrium behavior linearly and is well studied in the literature, whereas the former is non-linear. Furthermore, in Corollary 1, we provide a sufficient condition for a structural intervention to induce higher aggregate equilibrium activity, and use it to check the effect of a link reallocation or a link swap on the aggregate action. For applications, we first adopt the outcome equivalence result to study the key group problem, which aims to specify the group of players whose removal reduces the aggregate equilibrium activity the most. Specifically, the removal of a group of players is equivalent to a certain characteristic intervention restricted to this group. Such a characteristic intervention is chosen so that, within the original network, the induced equilibrium efforts of nodes in this group are driven to zero. As a generalization of the single-node inter-centrality given by BCZ, we provide an explicit closed-form index of group inter-centrality. This index takes into account both the activities of nodes in this group and their influences on nodes outside of the group. The group inter-centrality index reveals that the higher the connectedness between nodes within a group, the lower the inter-centrality of the group. Therefore, the greedy algorithm, which sequentially selects nodes with the highest single-node inter-centrality, may fail to find the key group with the highest inter-centrality. We also show that the group inter-centrality index is equivalent to the aggregate sum of all walks that must pass through the group. As a by-product, we characterize the aggregate sum of all walks starting from one group and ending at a second group that do _not_ pass through a third group.
This result generalizes some of the findings about the targeting centrality proposed by Bramoullé and Garance (2018) in the information diffusion setting. Next, we introduce a bridge index to characterize the impact of building a bridge between separated networks (Proposition 4). We use the bridge index to fully solve the key bridge problem. Furthermore, we show that the key bridge player must lie on the Pareto frontier of Katz-Bonacich centrality and self-loop in the network. In general, the selection of a bridge pair is an interdependent decision across the two networks, as the identity of the key bridge player in one network depends on who is selected as his partner in the second network. These findings are summarized in Corollary 2 and illustrated in Example 3. We also extend the analysis to consider the value of an existing link (the key link problem) and the value of a potential link for an arbitrary network in Section 4.2. As an illustration, we compare inter-group links and intra-group links in Example 4. Our paper builds on the vast literature on network games (see Ballester et al. (2006), Bramoullé and Kranton (2007) and Galeotti and Goyal (2010)). These papers typically characterize the effects of network structure on equilibrium behavior. Our paper instead focuses on how interventions on network structure affect equilibrium outcomes, and sheds light on policy design targeting the network structure. The literature on interventions in networks can be broadly divided into two categories: characteristic intervention and network structure intervention. In the first category, the characteristics of individuals can be changed by subsidizing or taxing their choices. For instance, Demange (2017) and Galeotti et al. (2020) study the optimal intervention on characteristics subject to a fixed budget constraint and a quadratic adjustment cost, respectively. Motivated by Ballester et al.
(2006, 2010), our paper mainly focuses on the second category: structural intervention. The identified equivalence between a structural intervention and an endogenously determined characteristic intervention in our paper provides an interesting link between these two categories. Several papers on network formation study the most efficient network by analyzing the impact of link shifting (e.g., Belhaj et al. (2016) and Li (2020)). As a complementary result, we propose a sufficient condition to guarantee that a structural intervention leads to higher aggregate action. An important topic in social networks is to study the relative importance of a node in a given network using some indices. Various centrality measures have been proposed to serve this purpose. Bloch et al. (2020) take an axiomatic approach to provide a unified perspective on several commonly used centrality measures. Ballester et al. (2006) give a micro-foundation of Katz-Bonacich centrality and propose another measure, i.e., inter-centrality, to characterize the impact of a node removal. Analogous measures for a group of nodes, instead of a single node, are not fully developed. One exception is Ballester et al. (2010), who define the group inter-centrality. One of our contributions is to propose a specific form of group inter-centrality using statistics of the underlying network, and to show that the group inter-centrality decreases with the connectedness between group members. Bramoullé and Garance (2018) study the contribution of a pair of nodes (one sender and one receiver) in an information transmission setting. Our analysis in Section 3.2 can be viewed as an extension of their results by allowing for multiple senders and multiple receivers. This paper also speaks to the literature on the effect of bridge(s) between isolated communities. Verdier and Zenou (2015, 2018) study the communication between cultural leaders, who serve as a bridge connecting different race groups.
Cai and Szeidl (2018) demonstrate that business meetings facilitate interfirm communications and create enormous economic value by increasing firm performance. To the best of our knowledge, Golub and Lever (2010) is the only network paper to theoretically study the impact of a bridge. Golub and Lever (2010) consider a social learning model, and their main focus is on eigenvalue centrality. Our paper, instead, studies the impact of a bridge on Katz-Bonacich centralities and proposes an explicit bridge index to characterize the key pair of nodes connecting two separated networks. See Golub and Lever (2010) for a comprehensive discussion of the related literature. ## 2 Interventions in networks: theory ### 2.1 Setup Baseline game played on a network Consider a network game played by a set of players $N=\left\\{1,2,\ldots,n\right\\}$ embedded in a social network $\mathbf{G}$, which is represented by an $n\times n$ adjacency matrix $\mathbf{G}=\left(g_{ij}\right)_{n\times n}$. Each player $i$ chooses an effort $a_{i}\in[0,\infty)$ simultaneously, with payoff function given as follows:555In Section 5, we discuss several extensions of the baseline model. $u_{i}\left(a_{i},\mathbf{a}_{-i}\right)=\theta_{i}a_{i}-\frac{1}{2}a_{i}^{2}+\delta\underset{k=1}{\overset{n}{\sum}}g_{ik}a_{i}a_{k},~{}~{}~{}~{}i\in N.$ (1) This payoff specification closely follows Ballester et al. (2006), where $\theta_{i}$ measures player $i$’s intrinsic marginal utility (hence $i$’s characteristic), $\frac{1}{2}a_{i}^{2}$ denotes player $i$’s cost of effort, and the last term $\delta\underset{k=1}{\overset{n}{\sum}}g_{ik}a_{i}a_{k}$ captures the interaction term representing the local network effects among players. The scalar parameter $\delta$ controls the strength of network interaction. We assume $\delta>0$ so the game exhibits strategic complementarity.
We use $\Gamma\left(\mathbf{G},\boldsymbol{\theta},\delta\right)$ to denote the network game represented above, where $\boldsymbol{\theta}=\left(\theta_{1},\ldots,\theta_{n}\right)^{\prime}$ is the characteristics vector. Throughout the paper, we impose the standard assumptions that (i) $\mathbf{G}$ is symmetric with $g_{ij}=g_{ji}\in\left\\{0,1\right\\}$, and (ii) $g_{ii}=0$ for all $i\in N$.666For ease of interpretation, we focus on an undirected zero-one network matrix $\mathbf{G}$. Our results can be easily generalized to weighted directed networks. Let $\lambda_{\max}\left(\mathbf{G}\right)$ denote the spectral radius of matrix $\mathbf{G}$. By the Perron–Frobenius theorem, $\lambda_{\max}\left(\mathbf{G}\right)$ also equals the largest eigenvalue of $\mathbf{G}$. The following is a well-known measure of centrality in the network literature. ###### Definition 1. Given a network $\mathbf{G}$, a scalar $\delta$, and an $n$-dimensional vector $\boldsymbol{\theta}$, we define the $\boldsymbol{\theta}$-weighted Katz-Bonacich centralities as $\mathbf{b}\left(\mathbf{G},\boldsymbol{\theta},\delta\right)=(b_{1},b_{2},\cdots,b_{n})^{\prime}\equiv\left(\mathbf{I}-\delta\mathbf{G}\right)^{-1}\boldsymbol{\theta},$ (2) provided that $\delta<\frac{1}{\lambda_{\max}\left(\mathbf{G}\right)}$. When $\boldsymbol{\theta}=(1,1,\cdots,1)^{\prime}=\mathbf{1}$, we call $\mathbf{b}\left(\mathbf{G},\delta\right)\equiv\mathbf{b}\left(\mathbf{G},\boldsymbol{1},\delta\right)$ the unweighted Katz-Bonacich centralities. Define the Leontief inverse matrix $\mathbf{M}\left(\mathbf{G},\delta\right)=\left(m_{ij}\left(\mathbf{G}\right)\right)_{n\times n}\equiv\left(\mathbf{I}-\delta\mathbf{G}\right)^{-1},$ (3) so that $b_{i}\left(\mathbf{G},\boldsymbol{\theta},\delta\right)=\underset{j=1}{\overset{n}{\sum}}m_{ij}\left(\mathbf{G}\right)\theta_{j}$.
Intuitively, $m_{ij}\left(\mathbf{G}\right)$ counts the total number of walks from $i$ to $j$ in network $\mathbf{G}$, with walks of length $k$ discounted by $\delta^{k}$. So $i$’s Katz-Bonacich centrality $b_{i}\left(\mathbf{G},\boldsymbol{\theta},\delta\right)$ is the sum of walks starting from $i$ and ending at any node $j$, with weights $\theta_{j}$.777It follows from the following identity of the Leontief inverse matrix: $\mathbf{M}\left(\mathbf{G},\delta\right)=\left(\mathbf{I}-\delta\mathbf{G}\right)^{-1}=\mathbf{I}+\delta\mathbf{G}+\delta^{2}\mathbf{G}^{2}+\cdots.$ This Neumann series converges when $0\leq\delta<\frac{1}{\lambda_{\max}\left(\mathbf{G}\right)}$. Interestingly, Ballester et al. (2006) show that, when $\delta<\frac{1}{\lambda_{\max}\left(\mathbf{G}\right)}$, the game $\Gamma\left(\mathbf{G},\boldsymbol{\theta},\delta\right)$ has a unique Nash equilibrium in which each player $i$’s equilibrium action $x_{i}^{\ast}$ is exactly equal to $i$’s Katz-Bonacich centrality, i.e., $\mathbf{x}^{\ast}=\mathbf{b}\left(\mathbf{G},\boldsymbol{\theta},\delta\right)\text{.}$ (4) The more influential players, as measured by the Katz-Bonacich centralities, are more active in equilibrium. Such an elegant relationship between equilibrium outcomes and Katz-Bonacich centralities is the starting point of our analysis. Interventions on networks The network structure $\mathbf{G}$ and the characteristics vector $\boldsymbol{\theta}$ jointly shape the equilibrium actions of players and welfare in $\Gamma\left(\mathbf{G},\boldsymbol{\theta},\delta\right)$. We introduce two primary types of interventions to influence the equilibrium outcomes: _characteristic intervention_ and _structural intervention_. In the former case, $\boldsymbol{\theta}$ is modified to $\boldsymbol{\hat{\theta}}$ while $\mathbf{G}$ is fixed, and in the latter case, $\mathbf{G}$ is changed to $\mathbf{\hat{G}}$ while $\boldsymbol{\theta}$ is fixed.
The economic consequences of these two types of interventions are characterized in detail in the next two subsections. The hybrid case involving both types of interventions is discussed in Section 5.2. Importantly, changes in either $\mathbf{G}$ or $\boldsymbol{\theta}$ in our paper occur for exogenous and independent reasons. Furthermore, we do not consider the possibility that changes in characteristics may induce changes in the network structure, and vice versa. The parameter $\delta$ is fixed throughout the paper, and is often omitted from the expressions when the context is clear.

Assumptions

To ensure the uniqueness of Nash equilibrium before and after the intervention, we impose the standard spectral conditions: $\delta<\frac{1}{\lambda_{\max}\left(\mathbf{G}\right)}$ and $\delta<\frac{1}{\lambda_{\max}\left(\mathbf{\hat{G}}\right)}$. Our main focus is on the effects of interventions on equilibrium actions. In applications we analyze optimal interventions under certain resource constraints (such as limiting the number of links or players that can be targeted). Beyond that, we do not explicitly model the cost side of interventions. (With parametric assumptions on the cost of interventions, certainly more can be said about the optimal interventions for the social planner. Assuming a quadratic loss function of the Euclidean distance between $\boldsymbol{\theta}$ and $\boldsymbol{\hat{\theta}}$, Galeotti et al. (2020) explicitly solve the optimal characteristic intervention, both for the case with $\delta>0$ (strategic complements) and for that with $\delta<0$ (strategic substitutes). They relate the optimal interventions to the spectral properties of the interaction network and show that the optimal intervention takes a simple form when the budget of the planner is sufficiently large.)

Notation

Before proceeding, we introduce some notation that will be used throughout the paper.
In the network $\left(N,\mathbf{G}\right)$, for any subset $A\subseteq N$, we let $|A|$ denote the cardinality of this set, and let $A^{C}=N\backslash A$ denote the complement of $A$. Let $\mathbf{G}_{AA}$ denote the $|A|\times|A|$ adjacency matrix of the subnetwork formed by the players in $A$. Moreover, the adjacency matrix $\mathbf{G}$ can be written as a block matrix $\mathbf{G}=\begin{bmatrix}\mathbf{G}_{A^{C}A^{C}}&\mathbf{G}_{A^{C}A}\\\ \mathbf{G}_{AA^{C}}&\mathbf{G}_{AA}\end{bmatrix}.$ Similarly, we can rewrite a column vector $\mathbf{x}$ of length $n$ as $\begin{bmatrix}\mathbf{x}_{A^{C}}\\\ \mathbf{x}_{A}\end{bmatrix}$. We use $x$ to denote the sum of all elements in the vector $\mathbf{x=}\left(x_{i}\right)_{n\times 1}$, i.e., $x=\underset{i=1}{\overset{n}{\sum}}x_{i}$. The transpose of a matrix $\mathbf{H}$ is denoted by $\mathbf{H}^{\prime}$. For two matrices $\mathbf{Q=}\left(q_{ij}\right)_{n\times m}$ and $\mathbf{P}=\left(p_{ij}\right)_{n\times m}$ of the same dimension, we write $\mathbf{Q}\succeq(\preceq)\mathbf{P}$ if and only if $q_{ij}\geq(\leq){p}_{ij}$ for all $i$, $j$.

### 2.2 Effects of a characteristic intervention

In this subsection, we consider the impact of a characteristic intervention. In practice, the characteristics of players can be increased by subsidies or decreased by taxation (see, for example, Galeotti et al. (2020) for further illustrations of changing $\boldsymbol{\theta}$). The characteristic intervention changes the characteristics vector $\boldsymbol{\theta}$ to $\boldsymbol{\hat{\theta}}$. The game after the intervention, $\Gamma\left(\mathbf{G},\boldsymbol{\hat{\theta}}\right)$, reaches a new equilibrium, denoted $\mathbf{\hat{x}}^{\ast}$. Define $\Delta\boldsymbol{\theta}:=\boldsymbol{\hat{\theta}}-\boldsymbol{\theta}$ as the differences in players' characteristics and $\Delta\mathbf{x}^{\ast}=\mathbf{\hat{x}}^{\ast}-\mathbf{x}^{\ast}$ as the changes in equilibrium actions.
Since interventions can be targeted, not every player is equally affected; $\Delta\theta_{i}$ need not have the same sign or magnitude as $\Delta\theta_{j}$. Define $S=\left\\{i\in N:\Delta\theta_{i}\neq 0\right\\}$ as the set of players involved in this characteristic intervention, and rewrite $\Delta\boldsymbol{\theta}$ as $\begin{bmatrix}\mathbf{0}\\\ \Delta\boldsymbol{\theta}_{S}\end{bmatrix}$ after suitable relabelling of players. That is, the characteristics of players in $S$ are changed by $\Delta\boldsymbol{\theta}_{S}$, while the characteristics of players in its complement $S^{C}$ are not affected. The following Lemma summarizes the effects of a characteristic intervention.

###### Lemma 1.

After the characteristic intervention $\Delta\boldsymbol{\theta}=\begin{bmatrix}\mathbf{0}\\\ \Delta\boldsymbol{\theta}_{S}\end{bmatrix}$, the change in equilibrium is $\Delta\mathbf{x}_{A}^{\ast}=\mathbf{M}_{AS}\left(\mathbf{G}\right)\Delta\boldsymbol{\theta}_{S}$ (5) for any subset $A\subseteq N$. Moreover, the change in the aggregate action is $\Delta x^{\ast}=\sum_{i\in N}\Delta x_{i}^{*}=\mathbf{b}_{S}^{\prime}\left(\mathbf{G}\right)\Delta\boldsymbol{\theta}_{S}\text{.}$ (6)

This Lemma follows directly from equation (4): the network structure is fixed during the intervention, and the equilibrium action profile $\mathbf{x}^{\ast}$ is linear in the characteristics vector $\boldsymbol{\theta}$, with sensitivity matrix $\mathbf{M}(\mathbf{G})$. In particular, consider a characteristic intervention at a single node, $S=\\{j\\}$, by $\Delta\theta_{j}$; then for $A=\\{i\\}$, $\Delta x_{i}^{\ast}=m_{ij}\left(\mathbf{G}\right)\Delta\theta_{j}$ by Lemma 1. The marginal contribution of $j$'s characteristic to $i$'s equilibrium behavior is exactly $m_{ij}(\mathbf{G})$: the total number of walks from $i$ to $j$ in the network, with length discount $\delta$.
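Equations (5)-(6) can be checked directly; the sketch below perturbs a single player's characteristic on a hypothetical 4-node network (illustrative only):

```python
import numpy as np

G = np.array([[0, 1, 0, 0],
              [1, 0, 1, 1],
              [0, 1, 0, 1],
              [0, 1, 1, 0]], dtype=float)   # hypothetical network
delta = 0.2
theta = np.ones(4)

M = np.linalg.inv(np.eye(4) - delta * G)    # Leontief inverse M(G)
b = M @ np.ones(4)                          # unweighted centralities (M symmetric)
x_old = M @ theta

# Characteristic intervention on S = {node 1}: raise theta_1 by 0.5
d_theta = np.array([0.0, 0.5, 0.0, 0.0])
x_new = M @ (theta + d_theta)

# Eq. (5) with A = N, S = {1}: Delta x* = M[:, S] Delta theta_S
assert np.allclose(x_new - x_old, M[:, 1] * d_theta[1])
# Eq. (6): the aggregate change equals b_1 * Delta theta_1
assert np.isclose((x_new - x_old).sum(), b[1] * d_theta[1])
```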
Summing over all $i$, the marginal contribution of $j$'s characteristic to the aggregate effort is just $\sum_{i\in N}m_{ij}(\mathbf{G})=\sum_{i\in N}m_{ji}(\mathbf{G})=b_{j}\left(\mathbf{G}\right)$ (here we exploit the symmetry of the matrix $\mathbf{M}$). In other words, we have $\frac{\partial x^{\ast}}{\partial{\theta}_{j}}=\frac{\partial\\{\sum_{i\in N}x_{i}^{\ast}\\}}{\partial{\theta}_{j}}=b_{j}(\mathbf{G}).$ (7) When the characteristics of multiple players are modified during the intervention (so $S$ contains multiple players), Lemma 1 yields a form of linearity: the change in player $i$'s equilibrium action is simply the sum, over $j$ in $S$, of the effects caused by each $j$, i.e., $\Delta x_{i}^{\ast}=\sum_{j\in S}m_{ij}\left(\mathbf{G}\right)\Delta\theta_{j}$. In other words, $\frac{\partial\mathbf{x}_{A}^{\ast}}{\partial\boldsymbol{\theta}_{S}}=\mathbf{M}_{AS}\left(\mathbf{G}\right),\mbox{for any subset $A\subseteq N$.}$ (8) As we will see in the next subsection, this desirable feature of linearity does not hold for structural interventions, whose effects are nonlinear and hence more complex to analyze.

### 2.3 Effects of a structural intervention

In this subsection, we study the impact of a structural intervention, i.e., changing $\mathbf{G}$ to $\mathbf{\hat{G}}$. The equilibrium action profile changes from $\mathbf{x}^{*}$ in the original game $\Gamma\left(\mathbf{G},\boldsymbol{\theta},\delta\right)$ to $\mathbf{\hat{x}}^{*}$ in the new game $\Gamma\left(\mathbf{\hat{G}},\boldsymbol{\theta},\delta\right)$. Define $\mathbf{C}=\mathbf{\hat{G}}-\mathbf{G}$ as the change in the network structure, and $\Delta\mathbf{x}^{*}=\mathbf{\hat{x}}^{\ast}-\mathbf{x}^{\ast}$ as the change in equilibrium actions. Structural interventions may occur when new links are formed and/or existing links or nodes are deleted. The matrix $\mathbf{C}=\left(c_{ij}\right)_{n\times n}$ is symmetric with entries in $\left\\{1,0,-1\right\\}$.
In particular, $\mathbf{C}$ is not necessarily a non-negative matrix. Let $S=\left\\{i\in N:c_{ij}\neq 0\text{ for some }j\in N\right\\}$ denote the set of players involved in this intervention. Rearranging the order of players if necessary, we can represent the intervention matrix $\mathbf{C}$ by the following block matrix $\begin{bmatrix}\mathbf{0}&\mathbf{0}\\\ \mathbf{0}&\mathbf{C}_{SS}\end{bmatrix}.$ To state the next result regarding the effects of the structural intervention $\mathbf{C}$, we define an $|S|$-dimensional vector $\boldsymbol{\Delta\theta}_{S}^{\ast}$ as follows: $\boldsymbol{\Delta\theta}_{S}^{\ast}\equiv\delta\mathbf{C}_{SS}\left(\mathbf{I}-\delta\mathbf{M}_{SS}\left(\mathbf{G}\right)\mathbf{C}_{SS}\right)^{-1}\mathbf{b}_{S}\left(\mathbf{G},\boldsymbol{\theta}\right).$ (9) Note that this vector can be easily computed using centrality measures from before the intervention (such as $\mathbf{b}_{S}\left(\mathbf{G}\right)$ and $\mathbf{M}_{SS}\left(\mathbf{G}\right)$) and the intervention matrix $\mathbf{C}_{SS}$.

###### Lemma 2 (Equivalence between structural and characteristic interventions).

Starting from $\Gamma\left(\mathbf{G},\boldsymbol{\theta},\delta\right)$, a structural intervention $\mathbf{C}=\begin{bmatrix}\mathbf{0}&\mathbf{0}\\\ \mathbf{0}&\mathbf{C}_{SS}\end{bmatrix}$ has the same effect on equilibrium actions as a characteristic intervention $\widetilde{\Delta\boldsymbol{\theta}}\equiv\begin{bmatrix}\mathbf{0}\\\ \boldsymbol{\Delta\theta}_{S}^{\ast}\end{bmatrix},$ where $\boldsymbol{\Delta\theta}_{S}^{\ast}$ is given in equation (9).

The main idea behind Lemma 2 is very simple. An equilibrium is a fixed point of the best-response mapping, which, in the framework of BCZ, is linearly additively separable in actions $\mathbf{x}$ and characteristics $\boldsymbol{\theta}$.
That is, $\mathbf{x}^{\ast}$ is an equilibrium of the game $\Gamma\left(\mathbf{G},\boldsymbol{\theta},\delta\right)$ if and only if $\mathbf{x}^{\ast}=\boldsymbol{\theta}+\delta\mathbf{G}\mathbf{x}^{\ast}$. We can re-interpret the post-intervention equilibrium $\mathbf{\hat{x}}^{\ast}$ as a fixed point of the pre-intervention game after modifying the characteristics vector of players in $S$ from $\boldsymbol{\theta}_{S}$ to $\boldsymbol{\theta}_{S}+\boldsymbol{\Delta\theta}_{S}^{\ast}$ with $\boldsymbol{\Delta\theta}_{S}^{\ast}=\delta\mathbf{C}_{SS}\mathbf{\hat{x}}_{S}^{\ast}$. Formally, the post-intervention equilibrium action profile $\mathbf{\hat{x}}^{\ast}$ solves $\displaystyle\mathbf{\hat{x}}^{\ast}$ $\displaystyle=$ $\displaystyle\boldsymbol{\theta}+\delta\left(\mathbf{G}+\mathbf{C}\right)\mathbf{\hat{x}}^{\ast}=\left(\boldsymbol{\theta}+\delta\mathbf{C}\mathbf{\hat{x}}^{\ast}\right)+\delta\mathbf{G\hat{x}}^{\ast}=\left(\boldsymbol{\theta}+\begin{bmatrix}\mathbf{0}\\\ \delta\mathbf{C}_{SS}\mathbf{\hat{x}}_{S}^{*}\end{bmatrix}\right)+\delta\mathbf{G\hat{x}}^{\ast}.$ Put differently, we have $\mathbf{\hat{x}}^{\ast}=\mathbf{b}(\mathbf{G},\boldsymbol{\theta}+\widetilde{\Delta\boldsymbol{\theta}})$ with $\widetilde{\Delta\boldsymbol{\theta}}=\begin{bmatrix}\mathbf{0}\\\ \delta\mathbf{C}_{SS}\mathbf{\hat{x}}_{S}^{*}\end{bmatrix}$.
To determine $\mathbf{\hat{x}}_{S}^{\ast}$, which is endogenous, we make use of the following identity: $\underbrace{\mathbf{\hat{x}}_{S}^{\ast}-\mathbf{b}_{S}}_{=\Delta\mathbf{x}_{S}^{\ast}}=\mathbf{M}_{SS}\underbrace{\left(\delta\mathbf{C}_{SS}\mathbf{\hat{x}}_{S}^{\ast}\right)}_{=\boldsymbol{\Delta\theta}_{S}^{\ast}}.$ The above identity follows from equation (5): the term on the left-hand side is just the change in the equilibrium profile of $S$ (recall that $\mathbf{b}_{S}$ is just the pre-intervention effort profile of $S$), and the term on the right-hand side follows from the equivalent characteristic re-interpretation of the structural intervention given above. Lemma 2 demonstrates a simple equivalence between a structural intervention and an _endogenously determined_ characteristic intervention. Lemma 2, combined with Lemma 1, greatly simplifies the analysis of the effects of structural interventions in networks. There are three key features worth noting.

The first is _locality_. The vector of the equivalent characteristic intervention is non-zero only on $S$ (the set of nodes involved in the intervention), and it only requires information about $\mathbf{M(G)}$ and $\mathbf{b(G)}$ for nodes in $S$, i.e., the entries of $\mathbf{M}_{SS}(\mathbf{G})$ and $\mathbf{b}_{S}(\mathbf{G})$. This feature is appealing, as in many applications $|S|$ is relatively small compared with the network size $|N|$ (see our applications in subsequent sections). Locality makes the expression of the effects of structural interventions much more succinct, and hence easier to interpret.

The second is _convenience_. On top of the intervention $\mathbf{C}$, the determination of the equivalent characteristic intervention uses the Leontief inverse matrix $\mathbf{M}(\mathbf{G})$ and the Katz-Bonacich centralities $\mathbf{b}(\mathbf{G},\boldsymbol{\theta})$ evaluated _before_ the intervention, rather than indices of the post-intervention network $\hat{\mathbf{G}}$.
Since this information about pre-intervention centralities is usually available, the amount of additional information needed to evaluate a structural intervention is minimal. The second feature also makes comparative analysis across different structural interventions manageable. To compare the effects of two structural interventions, say $\mathbf{C^{\prime}}$ and $\mathbf{C^{\prime\prime}}$, we keep track of the differences in the vectors of characteristic interventions by focusing mainly on the differences between $\mathbf{C^{\prime}}$ and $\mathbf{C^{\prime\prime}}$, as the information about centralities comes from a common source: the pre-intervention equilibrium.

Third, a characteristic intervention affects players' equilibrium efforts linearly, with sensitivity matrix $\mathbf{M}\left(\mathbf{G}\right)$ (see Lemma 1). In contrast, the impact of a structural intervention is much more involved. Using the Neumann series definition (or the walk-counting interpretation) of the centrality measures, we obtain the following decomposition of the changes in actions: $\displaystyle\Delta\mathbf{x}^{\ast}$ $\displaystyle=$ $\displaystyle\left(\mathbf{I}-\delta\left(\mathbf{G+C}\right)\right)^{-1}\boldsymbol{\theta}-\left(\mathbf{I}-\delta\mathbf{G}\right)^{-1}\boldsymbol{\theta}$ $\displaystyle=$ $\displaystyle\left\\{\delta\mathbf{C}+\delta^{2}\left(\left(\mathbf{G+C}\right)^{2}-\mathbf{G}^{2}\right)+\delta^{3}\left(\left(\mathbf{G+C}\right)^{3}-\mathbf{G}^{3}\right)+\cdots\right\\}\boldsymbol{\theta}\text{.}$ For each $k=1,2,\cdots$, the term $\left(\mathbf{G+C}\right)^{k}-\mathbf{G}^{k}$ keeps track of the changes in the number of walks of length $k$ due to the intervention $\mathbf{C}$. Evaluating this term directly becomes increasingly complicated as $k$ gets larger. By transforming the structural intervention into an _endogenously determined_ characteristic intervention, Proposition 1 bypasses most of the challenging issues associated with the evaluation of structural interventions.
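The equivalence in Lemma 2 and formula (9) can be verified numerically: compute the post-intervention equilibrium directly from $\mathbf{G}+\mathbf{C}$ and compare it with the pre-intervention game under the shifted characteristics. The sketch below adds a single link to a hypothetical 4-node line network (illustrative only):

```python
import numpy as np

G = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)   # hypothetical line network
delta = 0.25
theta = np.ones(4)
n = G.shape[0]

M = np.linalg.inv(np.eye(n) - delta * G)
b = M @ theta                                # b(G, theta, delta)

# Structural intervention: create the link (0, 3), so S = {0, 3}
C = np.zeros((n, n))
C[0, 3] = C[3, 0] = 1.0
S = [0, 3]
C_SS, M_SS = C[np.ix_(S, S)], M[np.ix_(S, S)]

# Eq. (9): the equivalent characteristic intervention on S
d_theta_S = delta * C_SS @ np.linalg.solve(
    np.eye(len(S)) - delta * M_SS @ C_SS, b[S])

# Lemma 2: the direct post-intervention equilibrium ...
x_hat = np.linalg.solve(np.eye(n) - delta * (G + C), theta)
# ... coincides with the pre-intervention game under theta shifted on S
theta_tilde = theta.copy()
theta_tilde[S] += d_theta_S
assert np.allclose(x_hat, M @ theta_tilde)
```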
The following Proposition immediately follows from Lemmas 1 and 2.

###### Proposition 1 (Effects of structural interventions).

After the structural intervention $\mathbf{C}=\begin{bmatrix}\mathbf{0}&\mathbf{0}\\\ \mathbf{0}&\mathbf{C}_{SS}\end{bmatrix}$,

1. (i) the change in equilibrium is $\Delta\mathbf{x}_{A}^{\ast}=\mathbf{M}_{AS}\left(\mathbf{G}\right)\Delta\boldsymbol{\theta}_{S}^{\ast},$ (10) for any subset $A\subseteq N$; and

2. (ii) the change in the aggregate action is $\Delta x^{\ast}=\mathbf{b}_{S}^{\prime}\left(\mathbf{G}\right)\Delta\boldsymbol{\theta}_{S}^{\ast},$ (11) where $\boldsymbol{\Delta\theta}_{S}^{\ast}$ is given in equation (9).

Proposition 1 is applicable to an arbitrary structural intervention. In what follows, we present several examples of structural interventions that are commonly used in the network literature, though under different contexts.

###### Example 1 (Different types of structural interventions).

1. (i) Creating a new link between $i$ and $j$: $\mathbf{C}=\mathbf{E}_{ij}$, where $\mathbf{E}_{ij}$ denotes the matrix with $1$ in the $\left(i,j\right)$ and $\left(j,i\right)$ entries and $0$ in all other entries. (For instance, Golub and Lever (2010) evaluate the impact of adding a link (a weak tie) between two disconnected networks on the eigenvalue centralities.)

2. (ii) Removing an existing link between $i$ and $j$: $\mathbf{C}=-\mathbf{E}_{ij}$. (For instance, Ballester et al. (2010) investigate the impact of removing a link in a delinquent network.)

3. (iii) Removing all the links associated with a given player $i$: $\mathbf{C}=-\sum_{j:g_{ij}=1}\mathbf{E}_{ij}$. (Ballester et al. (2006) study the impact of removing a single node from a criminal network; see also Bramoullé et al. (2016) for a recent survey of the key player literature.)

4. (iv) Creating new links while removing existing links simultaneously. (For instance, Cai and Szeidl (2018) show that business meetings, which help firms build up social connections, have positive impacts on firm performance. König et al. (2014) study a model of network formation in which new links are added and existing links removed dynamically. To study efficient network design with a fixed number of total links, Belhaj et al. (2016) analyze the effects of a _link swap_, an operation which cuts an existing link between $i$ and $j$ while adding a new link between $k$ and $l$; in our language, $\mathbf{C}=-\mathbf{E}_{ij}+\mathbf{E}_{kl}$ for a swap.)

Proposition 1 (i) provides the effect of an intervention on each player. In many applications, the designer may care about the aggregate action, or even just its sign. Obviously, if links are only created, i.e., $\mathbf{C}\succeq\mathbf{0}$, the aggregate action unambiguously increases. Likewise, when links are only removed, i.e., $\mathbf{C}\preceq\mathbf{0}$, the aggregate action decreases. When an intervention $\mathbf{C}$ simultaneously forms new links and removes existing ones, some players become more active while others become less active, and the sign of the net effect on the aggregate action is less clear-cut. Proposition 1 (ii) provides a necessary and sufficient condition. In the next Corollary, we present a sufficient condition guaranteeing that a structural intervention leads to a higher aggregate action. Such a condition is much simpler to check than that in Proposition 1 (ii), which requires checking the sign of $\mathbf{b}_{S}^{\prime}\delta\mathbf{C}_{SS}\left(\mathbf{I}-\delta\mathbf{M}_{SS}\left(\mathbf{G}\right)\mathbf{C}_{SS}\right)^{-1}\mathbf{b}_{S}\left(\mathbf{G},\boldsymbol{\theta}\right)$.

###### Corollary 1.

Assume $\boldsymbol{\theta}=\mathbf{1}$.
In a network $\left(N,\mathbf{G}\right)$, a structural intervention $\mathbf{C}$ satisfying $\mathbf{b}^{\prime}\mathbf{C}\mathbf{b}=\mathbf{b}_{S}^{\prime}\mathbf{C}_{SS}\mathbf{b}_{S}\geq(>)0$ (12) always increases (strictly increases) the aggregate equilibrium action, where $\mathbf{b}=\mathbf{b}\left(\mathbf{G},\mathbf{1},\delta\right)$.

Corollary 1 is a direct consequence of the following inequality: $\underbrace{b(\mathbf{G}+\mathbf{C})-b(\mathbf{G})}_{:=\Delta x^{*}}\geq\delta\mathbf{b}^{\prime}\left(\mathbf{G}\right)\mathbf{C}\mathbf{b}(\mathbf{G}),$ (13) which, under the condition $\boldsymbol{\theta}=\mathbf{1}$, provides a lower bound on the change in the aggregate action for any intervention $\mathbf{C}$ in $\Gamma(\mathbf{G},\mathbf{1})$. The inequality exploits a convexity property of the aggregate equilibrium effort $b(\mathbf{G})$ as a function of the network topology $\mathbf{G}$: the term on the right-hand side can be viewed as a linear approximation (hence an under-estimate, by convexity) of the change in the aggregate action. The condition stated in Corollary 1 is sufficient but in general not necessary; an intervention that does not satisfy it could still improve the aggregate action. Since $c_{ij}\in\\{0,\pm 1\\}$, we can reformulate the expression in Corollary 1 as follows: $\mathbf{b}_{S}^{\prime}\left(\mathbf{G}\right)\mathbf{C}_{SS}\mathbf{b}_{S}\left(\mathbf{G}\right)=\sum_{i,j\in S}c_{ij}b_{i}b_{j}=\left(\sum_{(i,j):c_{ij}=1}b_{i}b_{j}\right)-\left(\sum_{(i,j):c_{ij}=-1}b_{i}b_{j}\right).$ (14) If we call the product of the Katz-Bonacich centralities of the two nodes of a link the _l-value_ of that link, then Corollary 1 states that if the sum of _l-values_ over the new links in an intervention $\mathbf{C}$ exceeds the sum of _l-values_ over the removed links in $\mathbf{C}$, the aggregate action must increase after the intervention. To see some immediate implications of this Corollary, we present two simple examples.
1. First, we consider a link reallocation of the form $\mathbf{C}=-\mathbf{E}_{ij}+\mathbf{E}_{kl}$, i.e., removing the link between $i$ and $j$ while adding a new link between $k$ and $l$. (To make such an intervention $\mathbf{C}$ legitimate for network $\mathbf{G}$, we assume $g_{ij}=1$ and $g_{kl}=0$; moreover, at least three elements of $\\{i,j,k,l\\}$ must be distinct.) Corollary 1 then implies that such a link reallocation increases the aggregate action if $b_{i}b_{j}<b_{k}b_{l}$. In particular, this holds when $b_{i}<b_{k}$ and $b_{j}\leq b_{l}$. Whenever the newly formed link connects nodes with higher Katz-Bonacich centralities than the removed link, this type of link reallocation increases the aggregate action.

2. Second, we consider a link swap $\mathbf{\tilde{C}}=-\mathbf{E}_{ij}+\mathbf{E}_{il}$ (a specific reallocation with $i=k$), i.e., removing the link between $i$ and $j$ while adding a new link between $i$ and $l$. Such a swap, by Corollary 1, increases the aggregate action whenever $b_{j}<b_{l}$. Cutting an old link from $i$ to a neighbouring node $j$ with lower Katz-Bonacich centrality, while creating a new link from $i$ to an unconnected node $l$ with higher Katz-Bonacich centrality, makes the whole group more active overall.

In both examples, we identify simple ways to reallocate or swap links of an existing network to improve the aggregate action. This argument is complementary to the critical Lemma (Lemma 1) in Belhaj et al. (2016), which states that a certain type of link swap or reallocation leads to higher aggregate welfare. (The planner's objective in Belhaj et al. (2016) is aggregate welfare, not aggregate effort as in Corollary 1, but the underlying driving forces behind Lemma 1 in their paper are similar to ours.) See our companion paper Sun et al. (2021) for related discussions and further implications of this Corollary for efficient network design.
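The sufficient condition of Corollary 1 and the lower bound (13) can be illustrated with a small link reallocation; the sketch below uses a hypothetical 5-node network, rewiring a peripheral link toward the central node (an illustration, not the paper's code):

```python
import numpy as np

def aggregate_action(G, delta):
    """Aggregate equilibrium action b(G, 1, delta) summed over players."""
    n = G.shape[0]
    return np.linalg.solve(np.eye(n) - delta * G, np.ones(n)).sum()

# Hypothetical network: star center 0 with leaves 1, 2, 3, plus pendant 4 on 3
G = np.zeros((5, 5))
for i, j in [(0, 1), (0, 2), (0, 3), (3, 4)]:
    G[i, j] = G[j, i] = 1.0
delta = 0.2

M = np.linalg.inv(np.eye(5) - delta * G)
b = M @ np.ones(5)                    # unweighted Katz-Bonacich centralities

# Reallocation C = -E_{3,4} + E_{0,4}: cut (3,4), add (0,4)
C = np.zeros((5, 5))
C[3, 4] = C[4, 3] = -1.0
C[0, 4] = C[4, 0] = 1.0

# Corollary 1 condition: b'Cb = 2*b_4*(b_0 - b_3) > 0 since b_0 > b_3
assert b @ C @ b > 0
# The aggregate action rises, by at least delta * b'Cb (inequality (13))
dx = aggregate_action(G + C, delta) - aggregate_action(G, delta)
assert dx > delta * (b @ C @ b) > 0
```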
Another potential application of Corollary 1 is to provide local optimality conditions for constrained network optimization problems. Take a set of networks $\mathcal{G}$ and suppose $\mathbf{G}^{\ast}\in\mathcal{G}$ solves the problem $\max{x}^{\ast}(\mathbf{G^{\prime}})$ subject to $\mathbf{G}^{\prime}\in\mathcal{G}$. The optimality of $\mathbf{G}^{\ast}$ then immediately implies that $\mathbf{b}^{\prime}\mathbf{C}\mathbf{b}\leq 0$ for any $\mathbf{C}=\mathbf{G^{\prime}}-\mathbf{G}^{\ast}$ with $\mathbf{G}^{\prime}\in\mathcal{G}$, where $\mathbf{b}=\mathbf{b}\left(\mathbf{G}^{\ast}\right)$. (Since the network structures we consider in this paper are discrete (each bilateral link is either zero or one), not continuous, the standard KKT conditions for optimality do not directly apply. Focusing on the class of weighted and directed networks, Li (2020) employs KKT conditions to show that the optimal networks in his setting are generalized nested split graphs.) For certain specifications of $\mathcal{G}$, combinations of these local necessary conditions are rich enough to infer useful structural properties of the resulting optimal network. (In a companion paper on designing efficient networks sequentially, Sun et al. (2021), we show that the optimal network $\mathbf{G}^{t}$ in each step $t$ must be contained in an important class of networks called quasi-complete graphs.)

We mainly focus on the network model in Ballester et al. (2006). As shown by Bramoullé et al. (2014), our results regarding the effects of interventions on networks carry over to a more general class of utility functions that induce linear best responses. Two main forms of intervention are studied in this Section. Lemma 1 covers general characteristic interventions. Proposition 1 and Corollary 1 provide a unified approach to analyzing the impacts of structural interventions on equilibrium behavior at the individual and aggregate levels.
The gist of Lemma 2 is to offer a new perspective: identifying a structural intervention with an _endogenously determined_ characteristic intervention. Restricting attention to more specific types of structural interventions in the subsequent applications, we illustrate several advantages of our theory of interventions in networks compared with existing approaches in the literature.

## 3 The most wanted criminal(s) in delinquent networks

### 3.1 The key group problem and the intercentrality index

Consider the following optimization problem: $\underset{S\subseteq N,|S|\leq k}{\min}b\left(\mathbf{G}_{S^{C}S^{C}},\boldsymbol{\theta}_{S^{C}}\right),$ (15) which is motivated by the application to criminal networks (see Ballester et al. (2006, 2010) for a detailed discussion): the government, facing a group of criminals in a network $\mathbf{G}$, wants to identify a subset $S$ of criminals in $N$ (also known as _the most wanted_) such that the total action (criminal effort in this context) in the remaining network is minimized after removing $S$ from the original network $\mathbf{G}$. (It is well established that criminality is a social action with strong peer influences; see, for example, Jerzy (2001); Mark (2002); Patacchini and Zenou (2012).) Note that, before the intervention, the criminals play the game $\Gamma\left(\mathbf{G},\boldsymbol{\theta}\right)$ with total criminal activity $b\left(\mathbf{G},\boldsymbol{\theta}\right)$; after the removal of $S$ from $\mathbf{G}$, the remaining criminals $S^{C}$ play the game $\Gamma\left(\mathbf{G}_{S^{C}S^{C}},\boldsymbol{\theta}_{S^{C}}\right)$, which leads to the objective stated in program (15). To accommodate the constraint on the government side (for instance, limited police resources), the size of $S$ is bounded above by a positive integer $k$. Problem (15) is called the key player problem for $k=1$, and the key group problem in general for $k\geq 2$. Ballester et al. (2010) introduce the following definition.
###### Definition 2.

The intercentrality index of group $S$ in network $\left(N,\mathbf{G}\right)$ is defined as $d_{S}\left(\mathbf{G},\boldsymbol{\theta}\right)\equiv b\left(\mathbf{G},\boldsymbol{\theta}\right)-b\left(\mathbf{G}_{S^{C}S^{C}},\boldsymbol{\theta}_{S^{C}}\right)\text{.}$

The intercentrality of group $S$, $d_{S}\left(\mathbf{G},\boldsymbol{\theta}\right)$, is precisely the reduction in aggregate activity from removing group $S$, and it can be decomposed into two parts: a direct effect from the removed players in $S$, $\sum_{l\in S}b_{l}\left(\mathbf{G},\boldsymbol{\theta}\right)$, and an indirect effect due to the decrease in the equilibrium actions of the remaining players $j\in S^{C}$, $\sum_{j\in S^{C}}\left\\{b_{j}\left(\mathbf{G},\boldsymbol{\theta}\right)-b_{j}\left(\mathbf{G}_{S^{C}S^{C}},\boldsymbol{\theta}_{S^{C}}\right)\right\\}$. The next Lemma gives a simple expression for $d_{S}$.

###### Lemma 3.

For any $S\subseteq N$, we have $d_{S}\left(\mathbf{G},\boldsymbol{\theta}\right)=\mathbf{b}_{S}^{\prime}\left(\mathbf{G}\right)\left(\mathbf{M}_{SS}\left(\mathbf{G}\right)\right)^{-1}\mathbf{b}_{S}\left(\mathbf{G},\boldsymbol{\theta}\right).$ (16)

Figure 1: The key player problem viewed through the lens of a characteristic intervention

For illustration, we consider Figure 1. Imagine a thought experiment in which we change $\theta_{4}$ by $\Delta\theta_{4}^{\ast}=-\frac{b_{4}\left(\mathbf{G},\boldsymbol{\theta}\right)}{m_{44}\left(\mathbf{G}\right)}$ while keeping every other $\theta_{i}$, $i\neq 4$, the same. By Lemma 1, after this characteristic intervention $\Delta\theta_{4}^{\ast}$, each $i$'s equilibrium action changes by $m_{i4}(\mathbf{G})\Delta\theta_{4}^{\ast}$, and the aggregate action changes by $b_{4}(\mathbf{G})\Delta\theta_{4}^{\ast}=-b_{4}(\mathbf{G},\mathbf{1})\frac{b_{4}\left(\mathbf{G},\boldsymbol{\theta}\right)}{m_{44}\left(\mathbf{G}\right)}$.
This critical value $\Delta\theta_{4}^{\ast}$ is chosen so that player $4$ chooses exactly zero in equilibrium after the characteristic intervention: $b_{4}\left(\mathbf{G},\boldsymbol{\theta}\right)+m_{44}(\mathbf{G})\Delta\theta_{4}^{\ast}=0$. Given that player 4 is inactive in equilibrium, the other three players effectively play a network game with node $4$ removed. In other words, such a change of $\theta_{4}$ by $\Delta\theta_{4}^{\ast}$ exactly replicates the impact of removing $4$ from the network in terms of equilibrium choices. As a result, the total impact of removing $4$ on the aggregate equilibrium activity is given by $d_{4}(\mathbf{G},\boldsymbol{\theta})=b_{4}(\mathbf{G},\mathbf{1})\frac{b_{4}\left(\mathbf{G},\boldsymbol{\theta}\right)}{m_{44}\left(\mathbf{G}\right)}$. The idea presented in Figure 1 for a single-node removal extends easily to the setting in which multiple nodes are removed simultaneously. The key observation is that the removal of group $S$ from the network has the same effects as changing the characteristics of the players in $S$ by $\Delta\boldsymbol{\theta}_{S}=-\left(\mathbf{M}_{SS}\left(\mathbf{G}\right)\right)^{-1}\mathbf{b}_{S}\left(\mathbf{G},\boldsymbol{\theta}\right)\text{.}$ According to equation (6), this characteristic intervention reduces the aggregate action by $-\mathbf{b}_{S}^{\prime}\left(\mathbf{G}\right)\Delta\boldsymbol{\theta}_{S}=d_{S}(\mathbf{G},\boldsymbol{\theta})$ in equation (16). Therefore, Lemma 3 follows immediately from the equivalence between a structural intervention (the removal of a set of nodes) and a characteristic intervention (changing $\boldsymbol{\theta}_{S}$ by $\Delta\boldsymbol{\theta}_{S}$).
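Lemma 3 and the key group program can be put to work in a few lines; the sketch below implements the intercentrality index (16) and a brute-force search over groups (the problem itself is NP-hard), tested on a hypothetical 4-node network against Definition 2:

```python
import numpy as np
from itertools import combinations

def intercentrality(G, theta, delta, S):
    """d_S = b_S'(G) (M_SS)^{-1} b_S(G, theta), Lemma 3 / eq. (16)."""
    n = G.shape[0]
    M = np.linalg.inv(np.eye(n) - delta * G)
    b_w = M @ theta                 # theta-weighted centralities
    b_u = M @ np.ones(n)            # unweighted centralities
    S = list(S)
    return b_u[S] @ np.linalg.solve(M[np.ix_(S, S)], b_w[S])

def key_group(G, theta, delta, k):
    """Brute-force search over all groups of size k."""
    n = G.shape[0]
    return max(combinations(range(n), k),
               key=lambda S: intercentrality(G, theta, delta, S))

def total_action(G, theta, delta):
    n = G.shape[0]
    return np.linalg.solve(np.eye(n) - delta * G, theta).sum()

# Hypothetical network: triangle {0,1,2} with pendant node 3 attached to 1
G = np.array([[0, 1, 1, 0],
              [1, 0, 1, 1],
              [1, 1, 0, 0],
              [0, 1, 0, 0]], dtype=float)
delta, theta = 0.2, np.ones(4)

# Check against Definition 2: d_S = b(G, theta) - b(G_{S^C S^C}, theta_{S^C})
S, keep = [1, 2], [0, 3]
d_direct = (total_action(G, theta, delta)
            - total_action(G[np.ix_(keep, keep)], theta[keep], delta))
assert np.isclose(intercentrality(G, theta, delta, S), d_direct)
```

With $k=1$, the search reduces to the key player index $d_{i}=b_{i}^{2}/m_{ii}$; in this toy network the highest-degree node is the key player.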
Consequently, the key group program (15) can be reformulated as $\underset{S\subseteq N,|S|\leq k}{\max}d_{S}\left(\mathbf{G},\boldsymbol{\theta}\right)=\mathbf{b}_{S}^{\prime}\left(\mathbf{G}\right)\left(\mathbf{M}_{SS}\left(\mathbf{G}\right)\right)^{-1}\mathbf{b}_{S}\left(\mathbf{G},\boldsymbol{\theta}\right).$ (17) When $k=1$, taking $S=\left\\{i\right\\}$, we obtain $d_{i}(\mathbf{G},\boldsymbol{\theta})=\frac{b_{i}\left(\mathbf{G}\right)b_{i}\left(\mathbf{G},\boldsymbol{\theta}\right)}{m_{ii}\left(\mathbf{G}\right)}$ by equation (16), which coincides with the key player index in Ballester et al. (2006). To the best of our knowledge, there is no analogous simple expression for the key group index (or the intercentrality index) with $k\geq 2$ beyond the definition. As a non-trivial generalization of the key player index, Lemma 3 uses the self-loops and centralities of the removed players to construct the key group index. Thus, we can conveniently identify the key group from the information in the matrix $\mathbf{M}\left(\mathbf{G}\right)$ without recomputing the new equilibrium after the removal of nodes. Furthermore, the analytical simplicity of the expression in Lemma 3 enables us to make inferences regarding the key group.

###### Proposition 2.

Assume $\boldsymbol{\theta}=\mathbf{1}$. Consider two subsets, $S$ and $S^{\prime}$:

1. (i) if $S\subseteq(\subset)S^{\prime}$, then $d_{S}\left(\mathbf{G},\boldsymbol{1}\right)\leq(<)d_{S^{\prime}}\left(\mathbf{G},\boldsymbol{1}\right)$;

2. (ii) if $\left|S\right|=\left|S^{\prime}\right|$, $\mathbf{b}_{S}\left(\mathbf{G}\right)\preceq\mathbf{b}_{S^{\prime}}\left(\mathbf{G}\right)$, and $\mathbf{M}_{SS}\left(\mathbf{G}\right)\succeq\mathbf{M}_{S^{\prime}S^{\prime}}\left(\mathbf{G}\right)$, then $d_{S}\left(\mathbf{G},\boldsymbol{1}\right)\leq d_{S^{\prime}}\left(\mathbf{G},\boldsymbol{1}\right)$.

Proposition 2 (i) is rather intuitive: removing a larger group induces a larger impact.
In particular, when searching for the optimal $S^{\ast}$ in equation (15), it is without loss of generality to consider $S$ with $|S|=k$. Proposition 2 (ii) shows that, when comparing groups of the same size, a group $S^{\prime}$ with greater Katz-Bonacich centralities, $\mathbf{b}_{S^{\prime}}$, and fewer walks between any pair of nodes in $S^{\prime}$ (as measured by $\mathbf{M}_{S^{\prime}S^{\prime}}\left(\mathbf{G}\right)$) has a larger intercentrality. These monotonicity results are useful for reducing the set of candidates for the key group problem, as shown in Example 2 below. (Ballester et al. (2010), in their Example 2 on the key group problem with $k=2$, make observations similar to our Proposition 2 (ii).)

###### Example 2.

Consider the regular network depicted in Figure 2. We consider two cases: $k=1$ (the key player problem) and $k=2$ (the key group problem). Assume $\boldsymbol{\theta=1}$ and $\delta=0.2$; then all nodes have the same unweighted Katz-Bonacich centralities: $b_{i}(\mathbf{G},\mathbf{1})=b_{j}(\mathbf{G},\mathbf{1})$ for any $i,j$. (This network is regular with degree $d=3$, so $\mathbf{G}^{k}\mathbf{1}=3^{k}\mathbf{1}$ for all $k$, and $b_{i}(\mathbf{G},\mathbf{1})=\frac{1}{1-d\delta}=2.5$ for all $i$. The same network is also analyzed in Calvó-Armengol and Jackson (2004) and Zhou and Chen (2015) under different contexts.)

Figure 2: A regular network with degree three

1. (i) Assume $k=1$, so that we can remove only one node ($|S|=1$). For $S=\\{i\\}$, $d_{i}=\frac{b_{i}^{2}}{m_{ii}}$ by Lemma 3. Since $b_{i}$ is the same for all $i$, $d_{l}>d_{j}\Longleftrightarrow m_{ll}<m_{jj}$, consistent with Proposition 2 (ii). Table 1 summarizes $m_{ii}$ and $d_{i}$ for each equivalence class of players. (For the key player problem ($k=1$), there are only three equivalence classes due to the symmetry of the network. For instance, players 1 and 6 are equivalent; similarly, $2,5,7,10$ are mutually equivalent.)
The key player index is negatively related to self-loops. Therefore, player 1 (equivalently, player 6) is the key player.

Table 1: The key player

$S$ | $m_{SS}\left(\mathbf{G}\right)$ | $d_{S}\left(\mathbf{G},\boldsymbol{1}\right)$
---|---|---
{1} | 1.1688 | 5.3474*
{2} | 1.1981 | 5.2166
{3} | 1.2162 | 5.1390

Table 2: The key group

$S$ | $d_{S}\left(\mathbf{G},\boldsymbol{1}\right)$ | $S$ | $d_{S}\left(\mathbf{G},\boldsymbol{1}\right)$
---|---|---|---
{1,2} | 8.4725 | {2,3} | 8.0331
{1,3} | 9.3419 | {2,5} | 8.9529
{1,6} | 8.7506 | {2,7} | 10.2938*
{1,7} | 10.0150 | {2,8} | 10.2863
{1,8} | 10.2081 | {3,4} | 7.8174
 | | {3,8} | 10.2431

2. (ii) Assume $k=2$, i.e., we can remove two nodes ($|S|=2$). Table 2 shows the intercentralities of all equivalent types of groups of size $k=2$.252525For this key group problem with $k=2$, there are exactly $11$ types up to equivalence, as shown in Table 2. Note that node $2$ is equivalent to $7$ for the key player problem ($m_{22}=m_{77}$ and $b_{2}=b_{7}$), but $\\{1,2\\}$ and $\\{1,7\\}$ are not equivalent for the key group problem as $m_{12}\neq m_{17}$. By Lemma 3, for $S=\\{i,j\\}$, $d_{S}$ is proportional to $\mathbf{1}^{\prime}\left(\mathbf{M}_{SS}\right)^{-1}\mathbf{1}=\frac{m_{ii}+m_{jj}-2m_{ij}}{m_{ii}m_{jj}-m_{ij}^{2}}$ (note that $b_{i}=b_{j}$ for any $i,j$) and, in line with Proposition 2 (ii), decreases in $m_{ii}$, $m_{ij}$, and $m_{jj}$. Observe that $d_{\\{2,7\\}}$ is higher than $d_{\\{2,5\\}}$. Both sets $\\{2,7\\}$ and $\\{2,5\\}$ share the same player 2; furthermore, $m_{77}=m_{55}=1.1981$, but $m_{27}=0.0162<m_{25}=0.1981$, implying $d_{\\{2,7\\}}>d_{\\{2,5\\}}$.262626By the same token, we can show $d_{\\{1,2\\}}<d_{\\{1,7\\}}$, $d_{\\{1,3\\}}<d_{\\{1,8\\}}$, $d_{\\{2,3\\}}<d_{\\{2,8\\}}$. In fact, as demonstrated by Table 2, group $S^{\ast}=\left\\{2,7\right\\}$ is the key group. For the key player problem with $k=1$, comparing the self-loops $m_{ii}$ suffices to determine the most wanted player in the network in Figure 2.
The group version of intercentrality requires more detailed information beyond self-loops. In particular, the number of walks between the players in $S$, $m_{ij}$, also matters, as shown by the comparison between $\\{2,7\\}$ and $\\{2,5\\}$. Nevertheless, the monotonicity result in Proposition 2 (ii) enables us to rule out many dominated groups for consideration. Another interesting point is the comparison between $S^{\ast}=\left\\{2,7\right\\}$ and $\hat{S}=\\{1,6\\}$. Both players 1 and 6 are key players with $k=1$ (see Table 1), but $\hat{S}=\\{1,6\\}$, the combination of the two key players, does not form the key group with $k=2$ as $d_{\\{1,6\\}}<d_{\\{2,7\\}}$.272727Note that Proposition 2 (ii) is not applicable here as the matrix $\mathbf{M}_{\hat{S}\hat{S}}$ does not dominate $\mathbf{M}_{S^{*}S^{*}}$ entry by entry ($m_{22}=m_{77}>m_{11}=m_{66}$, but $m_{27}<m_{16}$). Simply collecting all the key players together does not solve the key group problem. In fact, in this example, the key group with $k=2$ does not include any key player with $k=1$. These observations point to the computational complexity of the key group problem, which is NP-hard (see Proposition 5 in Ballester et al. (2010) and the detailed discussion therein).

### 3.2 A view from walk counting

Given the close relationship between equilibrium actions in the game and Katz-Bonacich centralities in the network, we offer an explanation of the intercentrality index from the viewpoint of walk counting. In network $\left(N,\mathbf{G}\right)$, $m_{ij}\left(\mathbf{G}\right)$ summarizes the total number of walks from $i$ to $j$ (with length discount $\delta$). In particular, for a non-empty set $S\subset N$, $m_{ij}\left(\mathbf{G}\right)$ counts the walks passing through one or more nodes in $S$, as well as the other walks never hitting any node in $S$.
The former type of walks, with length discount, exactly measures the importance of group $S$ in the key group problem, since those walks do not contribute to the centrality in the remaining network $\mathbf{G}_{S^{C}S^{C}}$. To distinguish these two types of walks and facilitate the walk counting, we introduce the following notation.

###### Definition 3.

Fixing a non-empty proper subset $S$ of $N$ in network $\left(N,\mathbf{G}\right)$, for any $i$, $j\in N$, we define $w_{ij}\left(\mathbf{G},S\right)$ as the total number of walks with length discount from $i$ to $j$ that do not pass any node in $S$, with the possible exception of the starting node $i$ and the ending node $j$. Let $\mathbf{W}\left(\mathbf{G},S\right)=\left(w_{ij}\left(\mathbf{G},S\right)\right)_{n\times n}$.

Different from $m_{ij}\left(\mathbf{G}\right)$, $w_{ij}\left(\mathbf{G},S\right)$ precludes the walks from $i$ to $j$ that cross group $S$. In particular, $w_{ij}\left(\mathbf{G},\emptyset\right)=m_{ij}\left(\mathbf{G}\right)$. If $i$, $j\in S^{C}$, then $w_{ij}\left(\mathbf{G},S\right)$ counts the total number of walks from $i$ to $j$ that never pass group $S$; if $i\in S^{C}$ and $j\in S$, then $w_{ij}\left(\mathbf{G},S\right)$ counts the total number of walks from $i$ to $j$ that never pass group $S$ before stopping at node $j\in S$; if $i$, $j\in S$, then $w_{ij}\left(\mathbf{G},S\right)$ denotes the total number of walks from $i$ to $j$ that never pass group $S$ except at the starting and ending nodes $i$, $j$. Since network $\mathbf{G}$ is undirected, the matrix $\mathbf{W}\left(\mathbf{G},S\right)$ is necessarily symmetric: any walk from $i$ to $j$ that bypasses group $S$ is also a walk from $j$ to $i$ that does not cross $S$, and vice versa.
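Definition 3 can be made concrete by brute-force counting: truncate the discounted walk series and restrict all internal nodes to $S^{C}$. The sketch below (Python/NumPy; the 5-cycle, $\delta$, and all names are illustrative assumptions) checks that $w_{ij}(\mathbf{G},\emptyset)=m_{ij}(\mathbf{G})$ and that, for $i,j\in S^{C}$, the count coincides with $m_{ij}$ of the reduced network $\mathbf{G}_{S^{C}S^{C}}$.

```python
import numpy as np

def walks_avoiding(G, S, delta, i, j, max_len=80):
    """Discounted number of walks i -> j whose *internal* nodes avoid S
    (Definition 3); the endpoints i, j may lie in S. Series truncated at max_len."""
    n = G.shape[0]
    Sc = [v for v in range(n) if v not in S]
    H = G[np.ix_(Sc, Sc)]                 # steps among allowed internal nodes
    total = 1.0 if i == j else 0.0        # the length-0 walk
    total += delta * G[i, j]              # length-1 walks (no internal node)
    left, right = G[i, Sc], G[Sc, j]      # first step into / last step out of S^C
    Hk = np.eye(len(Sc))                  # H^(k-2) for the current length k
    for k in range(2, max_len + 1):
        total += delta**k * (left @ Hk @ right)
        Hk = Hk @ H
    return total

# Hypothetical 5-cycle with delta = 0.2.
n, delta = 5, 0.2
G = np.zeros((n, n))
for v in range(n):
    G[v, (v + 1) % n] = G[(v + 1) % n, v] = 1

M = np.linalg.inv(np.eye(n) - delta * G)
assert np.isclose(walks_avoiding(G, [], delta, 0, 2), M[0, 2])   # w(G, {}) = m_ij
Sc = [0, 1, 2, 3]                                                # take S = {4}
Mred = np.linalg.inv(np.eye(4) - delta * G[np.ix_(Sc, Sc)])
assert np.isclose(walks_avoiding(G, [4], delta, 0, 2), Mred[0, 2])
```

The truncation at `max_len` is harmless here since $\delta$ is well below the inverse spectral radius, so the discarded tail is negligible.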
After suitable relabelling of nodes, $\mathbf{W}\left(\mathbf{G},S\right)$ can be represented as the following block matrix: $\mathbf{W}\left(\mathbf{G},S\right)=\left[\begin{array}[]{cc}\mathbf{W}_{S^{C}S^{C}}\left(\mathbf{G},S\right)&\mathbf{W}_{S^{C}S}\left(\mathbf{G},S\right)\\\ \mathbf{W}_{SS^{C}}\left(\mathbf{G},S\right)&\mathbf{W}_{SS}\left(\mathbf{G},S\right)\end{array}\right].$ The matrix $\mathbf{M}\left(\mathbf{G}\right)$ can be partitioned in the same way. We establish the following result.

###### Proposition 3.

For any $S\subseteq N$, the following identities hold: $\displaystyle\mathbf{W}_{S^{C}S^{C}}\left(\mathbf{G},S\right)$ $\displaystyle=$ $\displaystyle\mathbf{M}_{S^{C}S^{C}}\left(\mathbf{G}\right)-\mathbf{M}_{S^{C}S}\left(\mathbf{G}\right)\left(\mathbf{M}_{SS}\left(\mathbf{G}\right)\right)^{-1}\mathbf{M}_{SS^{C}}\left(\mathbf{G}\right)$ (18) $\displaystyle\mathbf{W}_{S^{C}S}\left(\mathbf{G},S\right)$ $\displaystyle=$ $\displaystyle\mathbf{M}_{S^{C}S}\left(\mathbf{G}\right)\left(\mathbf{M}_{SS}\left(\mathbf{G}\right)\right)^{-1}$ (19) $\displaystyle\mathbf{W}_{SS}\left(\mathbf{G},S\right)$ $\displaystyle=$ $\displaystyle 2\mathbf{I}-\left(\mathbf{M}_{SS}\left(\mathbf{G}\right)\right)^{-1}$ (20) In particular, for any $A,B\subseteq N$ with $A\cap B=\emptyset$, $\displaystyle\mathbf{W}_{AB}\left(\mathbf{G},A\cup B\right)$ $\displaystyle=\left(\mathbf{M}_{AA}\left(\mathbf{G}\right)\right)^{-1}\mathbf{M}_{AB}\left(\mathbf{G}\right)\left(\mathbf{W}_{BB}\left(\mathbf{G},A\right)\right)^{-1}$ (21) $\displaystyle=\left(\mathbf{W}_{AA}\left(\mathbf{G},B\right)\right)^{-1}\mathbf{M}_{AB}\left(\mathbf{G}\right)\left(\mathbf{M}_{BB}\left(\mathbf{G}\right)\right)^{-1}$

Proposition 3 characterizes the impact of removing group $S$ on the total number of walks between each pair of nodes.292929We provide a view from walk counting for the identities in Proposition 3 in Appendix B. There are three points worth noting.

1.
Equation (18) uses centrality measures in the original network to quantify all the walk changes in the remaining network when a set of nodes is removed. Specifically, $\mathbf{M}_{S^{C}S^{C}}\left(\mathbf{G}\right)-\mathbf{W}_{S^{C}S^{C}}\left(\mathbf{G},S\right)=\mathbf{M}_{S^{C}S}\left(\mathbf{G}\right)\left(\mathbf{M}_{SS}\left(\mathbf{G}\right)\right)^{-1}\mathbf{M}_{SS^{C}}\left(\mathbf{G}\right)$ summarizes the reduction in the total number of walks between each pair of nodes in $S^{C}$. When $S=\left\\{i\right\\}$, for any pair $\left(j,k\right)\in S^{C}$, equation (18) yields ${m_{jk}}({\mathbf{G}})-{w_{jk}}({\mathbf{G}},\\{i\\})=\frac{{{m_{ji}}\left({\mathbf{G}}\right){m_{ik}}\left({\mathbf{G}}\right)}}{{{m_{ii}}\left({\mathbf{G}}\right)}}\text{.}$ This equation is equivalent to Lemma 1 in Ballester et al. (2006), which characterizes the change of walks in the network after removing a single node $i$ and leads to the intercentrality index. Equation (18) extends Ballester et al. (2006)’s Lemma 1 to the case of removing multiple nodes.

2. Equation (19) indicates that the intercentrality measure $d_{S}$ in equation (16) is precisely the discounted number of walks that pass through group $S$. To fix ideas, we set $\boldsymbol{\theta}=\mathbf{1}$. The intercentrality of group $S$ can be decomposed according to whether the starting node of such a walk is in $S$ (type I walks) or not (type II walks): $d_{S}\left(\mathbf{G,1}\right)={{\underbrace{b_{S}\left(\mathbf{G}\right)}_{\text{Term I}}+\underbrace{\mathbf{1}^{\prime}\mathbf{W}_{S^{C}S}(\mathbf{G},S)\mathbf{b}_{S}\left(\mathbf{G}\right)}_{\text{Term II}}}\text{.}}$ Term I is precisely the sum of walks with the starting node in $S$, i.e., type I walks: $b_{S}(\mathbf{G})=\sum_{i\in S}{b_{i}(\mathbf{G})}$. Term II exactly captures the walks that start at a node in $S^{C}$ and pass group $S$ at least once, i.e., type II walks.
Each walk of type II can be decomposed as the concatenation of two walks: consider an arbitrary walk starting from node $i\in S^{C}$ and ending at $j$ (which may or may not be in $S$) that passes the group $S$ at least once. Let $l\in S$ be the first node at which the walk meets group $S$. Then, this walk can be _uniquely_ decomposed as the concatenation of a walk from $i$ to $l$ and another walk from $l$ to $j$. The former walk never crosses group $S$ before ending and is therefore summarized by $w_{il}\left(\mathbf{G},S\right)$. The total number of the latter walks, with length discount, is counted by $m_{lj}\left(\mathbf{G}\right)$. Consequently, the number of type II walks from $i$ to $j$ is given by $\sum_{l\in S}w_{il}\left(\mathbf{G},S\right)m_{lj}\left(\mathbf{G}\right)$. Summing over the indices $i\in S^{C}$ and $j\in N$, we obtain Term II: $\sum_{i\in S^{C}}\sum_{j\in N}\sum_{l\in S}w_{il}(\mathbf{G},S)m_{lj}(\mathbf{G})=\mathbf{1}^{\prime}\mathbf{W}_{S^{C}S}(\mathbf{G},S)\mathbf{b}_{S}\left(\mathbf{G}\right)$. As a whole, the intercentrality of group $S$, $d_{S}\left(\mathbf{G,1}\right)$, is the discounted number of walks in network $\left(N,\mathbf{G}\right)$ that pass group $S$ at least once.

3. Equation (20) captures, for $i$, $j\in S$, the aggregate walks from $i$ to $j$ that do not pass through $S$ except at the endpoints. In particular, for $S=\left\\{i,j\right\\}$, we have ${w_{ij}}({\mathbf{G}},\\{i,j\\})=\frac{{{m_{ij}}({\mathbf{G}})}}{{{m_{ii}}({\mathbf{G}}){m_{jj}}({\mathbf{G}})-{{\left({{m_{ij}}({\mathbf{G}})}\right)}^{2}}}}\text{.}$ This equation is consistent with Proposition 2 in Bramoullé and Garance (2018) on targeting centralities, which characterizes the expected number of times $i$’s request reaches $j$ if both nodes $i$ and $j$ are excluded from favor re-transmission.323232The economic issue explored in Bramoullé and Garance (2018) and the notation they use are slightly different from ours.
Here we have adapted their results using our notation. In fact, Proposition 3 generalizes Bramoullé and Garance (2018)’s targeting centrality in two dimensions. Specifically, equation (20) captures the expected number of times that $i$’s request reaches $j$ when a group of individuals, rather than only $i$ and $j$ as in Bramoullé and Garance (2018), is excluded from re-transmission. Meanwhile, equations (18) and (19) capture the cases in which either the request originator $i$, the receiver $j$, or both are allowed to retransmit the request while a group of other individuals cannot. Finally, it is worth noting that equation (21) demonstrates a symmetric decomposition of the walks between two disjoint sets of nodes in the network. This symmetric property is a group generalization of Bramoullé and Garance (2018)’s observation (cf. footnote 7 in Bramoullé and Garance (2018)).333333Bramoullé and Garance (2018) state that (in our notation): “for any $i$, $j$, $m_{ii}\left(\mathbf{G}\right)w_{jj}\left(\mathbf{G},\left\\{i\right\\}\right)=$ $m_{jj}\left(\mathbf{G}\right)w_{ii}\left(\mathbf{G},\left\\{j\right\\}\right)$. To our knowledge, this provides a novel result in matrix analysis.” The symmetric property is consistent with (21) after canceling out the common term $m_{ij}\left(\mathbf{G}\right)$ on both sides.

## 4 The key bridge connecting isolated networks

### 4.1 The bridge index and the key bridge

Consider two isolated networks, $\left(N_{1},\mathbf{N}^{1}\right)$ and $\left(N_{2},\mathbf{N}^{2}\right)$, where $N_{i}$ denotes the set of players and $\mathbf{N}^{i}$ denotes the corresponding adjacency matrix for $i\in\left\\{1,2\right\\}$. Define $N=N_{1}\cup N_{2}$ and the adjacency matrix $\mathbf{G}=\begin{bmatrix}\mathbf{N}^{1}&\mathbf{0}\\\ \mathbf{0}&\mathbf{N}^{2}\end{bmatrix}$. Note that $m_{ij}(\mathbf{G})=m_{ji}(\mathbf{G})=0$ for $i\in N_{1},j\in N_{2}$. To simplify the notation, we set $\theta_{k}=1$ for any $k\in N$ throughout this section.
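Numerically, the block-diagonal structure of $\mathbf{G}$ means that no walk crosses between the two components, so the off-diagonal blocks of $\mathbf{M}(\mathbf{G})$ vanish and aggregate centrality is additive across the components. A minimal Python/NumPy sketch (the star and triangle components and $\delta=0.2$ are hypothetical):

```python
import numpy as np

# Hypothetical components: N1 a star with hub 0 and two leaves, N2 a triangle.
N1 = np.array([[0, 1, 1], [1, 0, 0], [1, 0, 0]], float)
N2 = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], float)
G = np.block([[N1, np.zeros((3, 3))], [np.zeros((3, 3)), N2]])

delta = 0.2
M = np.linalg.inv(np.eye(6) - delta * G)

# No walk crosses components: m_ij(G) = 0 for i in N1, j in N2 ...
assert np.allclose(M[:3, 3:], 0)
# ... so aggregate centrality is additive across the isolated components.
b = lambda A: np.linalg.inv(np.eye(len(A)) - delta * A).sum()
assert np.isclose(M.sum(), b(N1) + b(N2))
```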
The planner’s problem is to maximize the aggregate equilibrium effort by adding a new link between some nodes $i\in N_{1}$ and $j\in N_{2}$. Mathematically, the planner solves $\max_{(i,j)\in N_{1}\times N_{2}}b\left(\mathbf{G+E}_{ij}\right)\text{.}$ (24) The pair of nodes $(i^{\ast},j^{\ast})$ that solves the above problem is called the key bridge pair. We call node $i^{\ast}$ (node $j^{\ast}$) the key bridge player in network $\mathbf{N}^{1}$ ($\mathbf{N}^{2}$). The key bridge problem naturally arises in many economic settings. For example, in the integration of new immigrants into a new country, the communication between cultural leaders serves as a bond connecting two initially isolated communities (see Verdier and Zenou (2015, 2018)). For another example, a firm can be viewed as a network among workers with synergies, as a worker’s productivity is influenced by that of his peers through knowledge sharing and skill complementarity. Building up interfirm social connections creates further economic value. For instance, Cai and Szeidl (2018) document the effects of interfirm meetings among young Chinese firms on their business performance.343434In a large-scale experimental study of network formation, Choi et al. (2019) highlight the role of connectors and influencers.

###### Definition 4.

For any pair $(i,j)\in N_{1}\times N_{2}$, define the bridge index $L_{ij}\left(\mathbf{N}^{1},\mathbf{N}^{2}\right)$ as $L_{ij}\left(\mathbf{N}^{1},\mathbf{N}^{2}\right)\equiv\frac{\delta m_{jj}\left(\mathbf{N}^{2}\right)b_{i}^{2}\left(\mathbf{N}^{1}\right)+\delta m_{ii}\left(\mathbf{N}^{1}\right)b_{j}^{2}\left(\mathbf{N}^{2}\right)+2b_{j}\left(\mathbf{N}^{2}\right)b_{i}\left(\mathbf{N}^{1}\right)}{1-\delta^{2}m_{jj}\left(\mathbf{N}^{2}\right)m_{ii}\left(\mathbf{N}^{1}\right)}.$ (25)

###### Proposition 4.
The key bridge pair $(i^{*},j^{*})$ must maximize the bridge index, i.e., $(i^{*},j^{*})\in\arg\max_{(i,j)\in N_{1}\times N_{2}}L_{ij}\left(\mathbf{N}^{1},\mathbf{N}^{2}\right)\text{.}$ This proposition fully solves the key bridge problem using the bridge index. It follows that, when $\mathbf{G}$ is the union of two isolated networks, $b\left(\mathbf{G+E}_{ij}\right)-b\left(\mathbf{G}\right)=\delta L_{ij}\left(\mathbf{N}^{1},\mathbf{N}^{2}\right),~{}~{}~{}\forall i\in N_{1},j\in N_{2}.$ (26) Thus, the bridge index $L_{ij}$ summarizes all the walks passing through bridge $(i,j)$ at least once. Depending on the starting node, the ending node, and how many times such a new walk crosses the bridge, we can sort these additional walks into different categories and provide a walk-counting interpretation for each category in the bridge index (see Appendix B for details). To obtain further insights into who exactly is the key bridge player using the primitive information, we present the following corollary. Let $e_{i}=\sum_{j\in N}g_{ij}$ denote the degree of player $i$.

###### Corollary 2.

The following properties of the bridge index $L_{ij}\left(\mathbf{N}^{1},\mathbf{N}^{2}\right)$ hold:

1. (i) For two nodes $i$ and $i^{\prime}$ in $N_{1}$ with $b_{i}\left(\mathbf{N}^{1}\right)\geq b_{i^{\prime}}\left(\mathbf{N}^{1}\right)$ and $m_{ii}\left(\mathbf{N}^{1}\right)\geq m_{i^{\prime}i^{\prime}}\left(\mathbf{N}^{1}\right)$, we have $L_{ij}\left(\mathbf{N}^{1},\mathbf{N}^{2}\right)\geq L_{i^{\prime}j}\left(\mathbf{N}^{1},\mathbf{N}^{2}\right)$ for any $j\in N_{2}$.

2.
(ii) For nodes $i$, $i^{\prime}$ in $N_{1}$ and $j$, $j^{\prime}$ in $N_{2}$ such that $b_{i}\left(\mathbf{N}^{1}\right)=b_{i^{\prime}}\left(\mathbf{N}^{1}\right)$, $m_{jj}\left(\mathbf{N}^{2}\right)=m_{j^{\prime}j^{\prime}}\left(\mathbf{N}^{2}\right)$, $m_{ii}\left(\mathbf{N}^{1}\right)\geq m_{i^{\prime}i^{\prime}}\left(\mathbf{N}^{1}\right)$ and $b_{j}\left(\mathbf{N}^{2}\right)\geq b_{j^{\prime}}\left(\mathbf{N}^{2}\right)$, we have $L_{ij}\left(\mathbf{N}^{1},\mathbf{N}^{2}\right)-L_{ij^{\prime}}\left(\mathbf{N}^{1},\mathbf{N}^{2}\right)\geq L_{i^{\prime}j}\left(\mathbf{N}^{1},\mathbf{N}^{2}\right)-L_{i^{\prime}j^{\prime}}\left(\mathbf{N}^{1},\mathbf{N}^{2}\right)\text{.}$

3. (iii) For two nodes $i$, $i^{\prime}$ in $N_{1}$ with $e_{i}>e_{i^{\prime}}$, there exists $\bar{\delta}>0$ such that for any $0<\delta<\bar{\delta}$, $L_{ij}\left(\mathbf{N}^{1},\mathbf{N}^{2}\right)>L_{i^{\prime}j}\left(\mathbf{N}^{1},\mathbf{N}^{2}\right)$ for any $j\in N_{2}$.

Corollary 2 (i) implies that to find the key bridge player $i^{\ast}$ in the first network $N_{1}$, it suffices to focus on the set of nodes in $N_{1}$ that lie on the Pareto frontier of the Katz-Bonacich centrality $b_{i}(\mathbf{N}^{1})$ and the self-loops $m_{ii}(\mathbf{N}^{1})$. Therefore, if the most active player (the one with the highest $b_{i}(\mathbf{N}^{1})$) also happens to be the one with the highest $m_{ii}(\mathbf{N}^{1})$, it must be the key bridge player. The reason is that $L_{ij}\left(\mathbf{N}^{1},\mathbf{N}^{2}\right)$ increases with $b_{i}(\mathbf{N}^{1})$ and $m_{ii}(\mathbf{N}^{1})$, fixing $j$. Meanwhile, the intercentrality index, $\frac{b_{i}^{2}(\mathbf{N}^{1})}{m_{ii}(\mathbf{N}^{1})}$, increases in $b_{i}(\mathbf{N}^{1})$ but decreases in $m_{ii}(\mathbf{N}^{1})$. Therefore, the key player may differ from the key bridge player.
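The bridge index in equation (25) can be checked against its defining property (26) by brute force: add the bridge, recompute $\mathbf{M}$, and compare the gain in aggregate centrality with $\delta L_{ij}$. The Python/NumPy sketch below uses hypothetical 3-node components; all names are ours, not the paper’s.

```python
import numpy as np

def katz(A, delta):
    """Return (M, b): the Katz matrix and the unweighted centralities."""
    M = np.linalg.inv(np.eye(len(A)) - delta * A)
    return M, M @ np.ones(len(A))

def bridge_index(N1, N2, i, j, delta):
    """L_ij of equation (25), for node i in the first and j in the second network."""
    M1, b1 = katz(N1, delta)
    M2, b2 = katz(N2, delta)
    num = (delta * M2[j, j] * b1[i]**2 + delta * M1[i, i] * b2[j]**2
           + 2 * b2[j] * b1[i])
    den = 1 - delta**2 * M2[j, j] * M1[i, i]
    return num / den

# Hypothetical networks: N1 a 3-node star (hub 0), N2 a 3-node path; delta = 0.2.
N1 = np.array([[0, 1, 1], [1, 0, 0], [1, 0, 0]], float)
N2 = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], float)
delta, n1 = 0.2, 3
G = np.block([[N1, np.zeros((3, 3))], [np.zeros((3, 3)), N2]])

# Equation (26): adding bridge (i, j) raises b(G) by exactly delta * L_ij.
for i in range(3):
    for j in range(3):
        Gnew = G.copy()
        Gnew[i, n1 + j] = Gnew[n1 + j, i] = 1
        gain = katz(Gnew, delta)[0].sum() - katz(G, delta)[0].sum()
        assert np.isclose(gain, delta * bridge_index(N1, N2, i, j, delta))
```

Maximizing `bridge_index` over all pairs therefore identifies the key bridge pair without solving the game once per candidate bridge.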
Moreover, when the player with the largest $b_{i}(\mathbf{N}^{1})$ differs from the one with the largest $m_{ii}(\mathbf{N}^{1})$ in $N_{1}$, the selection of the key bridge player in $N_{1}$ crucially depends on who is chosen as the bridge player $j$ in the other network $\mathbf{N}^{2}$. In other words, the two members of the key bridge pair are not selected independently. As suggested by Corollary 2 (ii), the role of node $i$’s self-loops in determining the bridge index $L_{ij}\left(\mathbf{N}^{1},\mathbf{N}^{2}\right)$ becomes more significant if a node $j$ with larger Katz-Bonacich centrality is selected. Correspondingly, $i$’s Katz-Bonacich centrality plays a more important role than his self-loops when a node $j$ with larger self-loops is selected. Consider a scenario in which the two networks are highly imbalanced, in the sense that the largest Katz-Bonacich centrality in network $\mathbf{N}^{2}$ is much larger than that in network ${\mathbf{N}^{1}}$.353535This could be the case when network $\mathbf{N}^{2}$ has many more members, who have denser connections with each other. Then the key bridge pair may consist of the central node in $\mathbf{N}^{2}$ and the node with the largest self-loop in network $\mathbf{N}^{1}$ (even if that node is not the most central one). This is because the self-loop of the player in $\mathbf{N}^{1}$ contributes more than his Katz-Bonacich centrality to the bridge index $L_{ij}\left(\mathbf{N}^{1},\mathbf{N}^{2}\right)$ when the Katz-Bonacich centrality of the bridge player in $\mathbf{N}^{2}$ is sufficiently large. Furthermore, when $\delta$ is below a threshold, degree centrality plays the dominant role in the bridge index, by Corollary 2 (iii). We illustrate these observations using the following example.

###### Example 3.

Consider the two isolated networks depicted in Figure 3.
Figure 3: Connecting $\mathbf{N}^{1}$ and $\mathbf{N}^{2}$ by adding a bridge

Table 3: Measures with $\delta=0.25$

Players | $m_{ii}$ | $b_{i}$
---|---|---
$a_{1}$ | 1.5686 | 4.7059
$a_{2}$ | 1.5980 | 4.6765
$a_{3}$ | 1.2686 | 3.2059
$a_{4}$ | 1.4255 | 4.1471
$a_{5}$ | 1.0980 | 2.1765
$h$ | 1.7778 | 4.8889
$l$ | 1.1111 | 2.2222

Table 4: Bridges with $\delta=0.25$

bridge $i$-$j$ | $L_{ij}\left(\mathbf{G}\right)$
---|---
$h$-$a_{1}$ | 78.9970
$h$-$a_{2}$ | 79.0258

Table 3 gives the Katz-Bonacich centralities $b_{i}$ and self-loops $m_{ii}$ for $\delta=0.25$. In the first network $\left(N_{1},\mathbf{N}^{1}\right)$, the hub player $h$ is more important than any of the periphery nodes in terms of both the centrality and self-loop measures. In the second network $\left(N_{2},\mathbf{N}^{2}\right)$, $a_{1}$ dominates $a_{2}$ in terms of Katz-Bonacich centrality $b_{i}$, while $a_{2}$ dominates $a_{1}$ in terms of self-loops $m_{ii}$. All the other nodes in $N_{2}$ are dominated by $a_{1}$ and $a_{2}$ in both $m_{ii}$ and $b_{i}$. By Corollary 2 (i), the key bridge pair is either $(h,a_{1})$ or $(h,a_{2})$. Table 4 demonstrates that the bridge index $L_{ha_{2}}\left(\mathbf{N}^{1},\mathbf{N}^{2}\right)>L_{ha_{1}}\left(\mathbf{N}^{1},\mathbf{N}^{2}\right)$; thus $a_{2}$ is the key bridge player in $\left(N_{2},\mathbf{N}^{2}\right)$, yet $a_{2}$ is neither the most active player (in terms of $b_{i}$) nor the key player (in terms of the intercentrality $b_{i}^{2}/m_{ii}$) in $\left(N_{2},\mathbf{N}^{2}\right)$. Next, we consider $\delta=0.23$ (see Tables 5 and 6). By the same logic, it suffices to consider connecting the hub $h$ in the first network to either $a_{1}$ or $a_{2}$ in the second network. However, for $\delta=0.23$, $b_{i}$ plays a more prominent role in the bridge index than $m_{ii}$, and indeed $a_{1}$ is now the key bridge player.
This observation is consistent with Corollary 2 (iii): the key bridge player is the player with the highest degree when $\delta$ is relatively small (the degree of $a_{1}$ is larger than that of $a_{2}$). Keeping $\delta=0.23$, suppose we increase the number of periphery nodes in $\left(N_{1},\mathbf{N}^{1}\right)$ from $7$ to $17$. The Katz-Bonacich centrality of the hub player, $b_{h}=48.76$, is then significantly larger than that of the players in $N_{2}$. Thus, the self-loop $m_{jj}$ of the bridge player $j$ in $N_{2}$ becomes more pronounced than his Katz-Bonacich centrality $b_{j}$ in shaping the relative values of $L_{hj}$. As a result, $(h,a_{2})$ is the key bridge (indeed, $L_{ha_{2}}\left(\mathbf{N}^{1},\mathbf{N}^{2}\right)=4744>L_{ha_{1}}\left(\mathbf{N}^{1},\mathbf{N}^{2}\right)=4680$).

Table 5: Measures with $\delta=0.23$

Players | $m_{ii}$ | $b_{i}$
---|---|---
$a_{1}$ | 1.4213 | 3.8423
$a_{2}$ | 1.4300 | 3.7545
$a_{3}$ | 1.1969 | 2.6348
$a_{4}$ | 1.3063 | 3.3533
$a_{5}$ | 1.0752 | 1.8837
$h$ | 1.5881 | 4.1448
$l$ | 1.0840 | 1.9533

Table 6: Bridges with $\delta=0.23$

bridge $i$-$j$ | $L_{ij}\left(\mathbf{G}\right)$
---|---
$h$-$a_{1}$ | 48.6711
$h$-$a_{2}$ | 47.6461

### 4.2 The value of an existing link and the value of a potential link

Instead of considering two isolated networks, in this subsection we consider a general network and explore the effects of link creation and deletion.

###### Lemma 4.
For any network $\mathbf{G}$,

* (i) Suppose $g_{ij}=0$; then $b(\mathbf{G}+\mathbf{E}_{ij})-b(\mathbf{G})=\delta{L}_{ij}\left(\mathbf{G}\right)$ (27) where ${L}_{ij}\left(\mathbf{G}\right)=\frac{\delta m_{ii}\left(\mathbf{G}\right)b_{j}^{2}\left(\mathbf{G}\right)+\delta m_{jj}\left(\mathbf{G}\right)b_{i}^{2}\left(\mathbf{G}\right)+2\left(1-\delta m_{ij}\left(\mathbf{G}\right)\right)b_{i}\left(\mathbf{G}\right)b_{j}\left(\mathbf{G}\right)}{\left(1-\delta m_{ij}\left(\mathbf{G}\right)\right)^{2}-\delta^{2}m_{ii}\left(\mathbf{G}\right)m_{jj}\left(\mathbf{G}\right)}\text{.}$

* (ii) Suppose $g_{ij}=1$; then $b(\mathbf{G}-\mathbf{E}_{ij})-b(\mathbf{G})=-\delta l_{ij}\left(\mathbf{G}\right)$ (28) where $l_{ij}\left(\mathbf{G}\right)=\frac{2\left(1+\delta m_{ij}\left(\mathbf{G}\right)\right)b_{i}\left(\mathbf{G}\right)b_{j}\left(\mathbf{G}\right)-\left(\delta m_{ii}\left(\mathbf{G}\right)b_{j}^{2}\left(\mathbf{G}\right)+\delta m_{jj}\left(\mathbf{G}\right)b_{i}^{2}\left(\mathbf{G}\right)\right)}{\left(1+\delta m_{ij}\left(\mathbf{G}\right)\right)^{2}-\delta^{2}m_{ii}\left(\mathbf{G}\right)m_{jj}\left(\mathbf{G}\right)}\text{.}$

The index $L_{ij}$ measures the value of a potential new link $(i,j)$ in the network $\mathbf{G}$, while $l_{ij}$ measures the value of an existing link $(i,j)$ in $\mathbf{G}$. Therefore, the index $L_{ij}$ is useful in determining the optimal location for a new link (for instance, in the key bridge problem). Meanwhile, $l_{ij}$ measures the contribution of an existing link $\left(i,j\right)$ to the total Katz-Bonacich centralities. For instance, Ballester et al. (2010) derive the same measure using a different method (see their Lemma 2) and use it to study the most important existing link (the key link). Both indices $L_{ij}$ and $l_{ij}$ share properties similar to those of the bridge index in Corollary 2; hence, we omit the details.
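Both indices in Lemma 4 are computable directly from $\mathbf{M}(\mathbf{G})$, and equations (27) and (28) can be verified by recomputing the equilibrium after the intervention. A minimal Python/NumPy sketch (the 4-cycle, the candidate chord, and $\delta=0.2$ are hypothetical choices of ours):

```python
import numpy as np

def katz(A, delta):
    """Return (M, b): the Katz matrix and the unweighted centralities."""
    M = np.linalg.inv(np.eye(len(A)) - delta * A)
    return M, M @ np.ones(len(A))

def L_index(G, i, j, delta):
    """Value of a potential link (i, j), Lemma 4 (i); assumes g_ij = 0."""
    M, b = katz(G, delta)
    num = (delta * M[i, i] * b[j]**2 + delta * M[j, j] * b[i]**2
           + 2 * (1 - delta * M[i, j]) * b[i] * b[j])
    den = (1 - delta * M[i, j])**2 - delta**2 * M[i, i] * M[j, j]
    return num / den

def l_index(G, i, j, delta):
    """Value of an existing link (i, j), Lemma 4 (ii); assumes g_ij = 1."""
    M, b = katz(G, delta)
    num = (2 * (1 + delta * M[i, j]) * b[i] * b[j]
           - delta * M[i, i] * b[j]**2 - delta * M[j, j] * b[i]**2)
    den = (1 + delta * M[i, j])**2 - delta**2 * M[i, i] * M[j, j]
    return num / den

# Hypothetical 4-cycle; candidate chord (0, 2).
G = np.array([[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]], float)
delta = 0.2
E = np.zeros((4, 4)); E[0, 2] = E[2, 0] = 1

# Equation (27): adding the link raises b(G) by delta * L_02(G).
gain = katz(G + E, delta)[0].sum() - katz(G, delta)[0].sum()
assert np.isclose(gain, delta * L_index(G, 0, 2, delta))
# Equation (28): deleting it again from G + E removes the same amount.
assert np.isclose(gain, delta * l_index(G + E, 0, 2, delta))
```

The second assertion also shows concretely that evaluating $l_{ij}$ on the post-addition network recovers the value of the link that was just added.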
Both results in Lemma 4 follow directly from Proposition 1: we set $\mathbf{C}=\mathbf{E}_{ij}$ in case (i) and set $\mathbf{C}=-\mathbf{E}_{ij}$ in case (ii). Note that in both cases, $m_{ij}$ and $b_{i}$ in the expressions of $L_{ij}$ and $l_{ij}$ are evaluated at the existing network $\mathbf{G}$. Since removing the newly added link $(i,j)$ in $\mathbf{G}+\mathbf{E}_{ij}$ results in the original network $\mathbf{G}$, Lemma 4 reveals the following relationship between the two indices: $l_{ij}(\mathbf{G}+\mathbf{E}_{ij})=L_{ij}(\mathbf{G}).$ (29) This identity enables us to express $L_{ij}(\mathbf{G})$ using the centrality measures of the new network $\mathbf{G}+\mathbf{E}_{ij}$. Lemma 4 (i) generalizes the bridge index in equation (25). Indeed, when $\mathbf{G}$ is the union of two isolated networks $\mathbf{N}^{1}$ and $\mathbf{N}^{2}$, we have $m_{ij}=0$ for $i\in N_{1},j\in N_{2}$, and the index $L_{ij}$ in equation (27) reduces to $L_{ij}(\mathbf{N}^{1},\mathbf{N}^{2})$ in equation (25). Unlike the bridge index, which does not depend on $m_{ij}$, the general link index $L_{ij}$ increases with $m_{ij}$ (note that in a general network, $m_{ij}$ can be positive even when $i$ and $j$ are not directly connected). Hence, the designer may prefer connecting nodes that are already well connected in the original network over adding bridges between separated groups. The next example illustrates when it is desirable to add intra-group link(s) vs. inter-group link(s).

###### Example 4.

Consider an original network $\mathbf{G}$ composed of two separate cycles, each of size four. We search for the optimal way to add one or two links, where the links can be formed between or within the two cycles.

* (a) Consider adding one link. By symmetry, it suffices to compare $\hat{\mathbf{G}}_{1}$ and $\hat{\mathbf{G}}_{2}$ in Figure 4.
Given $m_{2,3}>m_{2,5}=0$ while all other measures are equal, adding the intra-group link $(2,3)$ strictly dominates adding the inter-group link $(2,5)$, i.e., $\hat{\mathbf{G}}_{2}$ dominates $\hat{\mathbf{G}}_{1}$ (see Table 7).

Figure 4: Adding a single link between two separated networks

* (b) What if we can add one additional link? It is easy to see that the optimal network is among the following three networks: $\bar{\mathbf{G}}_{1}$, $\bar{\mathbf{G}}_{2}$, and $\bar{\mathbf{G}}_{3}$ (see Figure 5).363636All other ways of forming two links are dominated. Denote $((i,j),(k,l))$ as an intervention of adding two links. For instance, $((2,5),(4,7))$ is strictly dominated by $((2,5),(2,7))$ since $b_{2}>b_{4}$, $m_{2,2}>m_{4,4}$ and $m_{2,7}>m_{4,7}$ once a bridge $(2,5)$ is added. $((2,5),(2,7))$ strictly dominates $((2,5),(2,8))$ since $m_{2,7}>m_{2,8}$ once a bridge $(2,5)$ is added. $((2,3),(6,7))$ is strictly dominated by $((1,4),(2,3))$ since $b_{1}>b_{6}$, $m_{1,1}>m_{6,6}$ and $m_{1,4}>m_{6,7}$ once $(2,3)$ is added. $((2,5),(6,7))$ is strictly dominated by $((5,8),(6,7))$ since $b_{8}>b_{2}$, $m_{8,8}>m_{2,2}$ and $m_{5,8}>m_{2,5}$ once $(6,7)$ is added. Among these three, Table 7 shows that $\bar{\mathbf{G}}_{3}$ is optimal. In other words, adding two inter-group bridges $((2,5),(2,7))$ strictly dominates building two intra-group links $((1,4),(2,3))$, even though building one intra-group link is myopically optimal as in part (a).373737Starting with $\hat{\mathbf{G}}_{2}$, the optimal network with one extra link from part (a), and conditioning on $(2,3)$ as the first link, $(1,4)$ strictly dominates $(2,5)$ as the second link, as $\bar{\mathbf{G}}_{2}$ has higher aggregate Katz-Bonacich centrality than $\bar{\mathbf{G}}_{1}$ (see Table 7). However, neither $\bar{\mathbf{G}}_{1}$ nor $\bar{\mathbf{G}}_{2}$ is optimal.
Nevertheless, starting from the dominated network $\hat{\mathbf{G}}_{1}$ in part (a), we can reach the optimal network $\bar{\mathbf{G}}_{3}$ by adding the link $(2,7)$.

Figure 5: Adding two links between two separated networks

Table 7: Aggregate Katz-Bonacich centralities ($\delta=0.21$)

Add one link | $b\left(\cdot\right)$ | Add two links | $b\left(\cdot\right)$
---|---|---|---
$\hat{\mathbf{G}}_{1}$ | 15.4198 | $\bar{\mathbf{G}}_{1}$ | 17.7010
$\hat{\mathbf{G}}_{2}$ | 15.4689* | $\bar{\mathbf{G}}_{2}$ | 17.7074
 | | $\bar{\mathbf{G}}_{3}$ | 17.7547*

## 5 Extensions and concluding remarks

### 5.1 Alternative network models

Our analysis so far focuses on the impact of structural interventions on the Katz-Bonacich centralities in the baseline model of Ballester et al. (2006). Since Katz-Bonacich centrality plays a critical role in many network models, our results (and subsequent applications) naturally extend to these alternative models. For instance, Currarini et al. (2017) extend the single-activity network model of Ballester et al. (2006) with direct complements to incorporate indirect substitutes among players at distance two, and characterize the equilibrium using both $\mathbf{G}$ and $\mathbf{G}^{2}$. In the Appendix, we show that the equilibrium in Currarini et al. (2017) can be written as a linear combination of two Katz-Bonacich centralities. Ballester et al. (2006) extend the baseline model in equation (1) to allow global substitution and show that the aggregate action is a monotone transformation of Katz-Bonacich centralities. In addition, Chen et al. (2018b) consider a network game with multiple activities and show that the equilibrium can be represented as a weighted sum of two Katz-Bonacich centralities (with different synergy parameters and characteristics). See Appendix C for details.

### 5.2 Hybrid interventions

We can study the effect of general interventions combining both structural and characteristic interventions in networks.
A general hybrid intervention is given by $\left(\mathbf{C},\Delta\boldsymbol{\theta}\right)$, where $\mathbf{C}$ is the structural intervention and $\Delta\boldsymbol{\theta}$ is the characteristic intervention. The hybrid intervention $\left(\mathbf{C},\Delta\boldsymbol{\theta}\right)$ on network game $\Gamma\left(\mathbf{G},\boldsymbol{\theta}\right)$ can be viewed as a structural intervention $\mathbf{C}$ on network game $\Gamma\left(\mathbf{G},\boldsymbol{\theta+}\Delta\boldsymbol{\theta}\right)$. By Proposition 1, this hybrid intervention $\left(\mathbf{C},\Delta\boldsymbol{\theta}\right)$ is outcome equivalent to a characteristic intervention $\widetilde{\Delta\boldsymbol{\theta}^{\ast}}=\Delta\boldsymbol{\theta}+\begin{bmatrix}\mathbf{0}\\\ \delta\mathbf{C}_{SS}\left(\mathbf{I}-\delta\mathbf{M}_{SS}\left(\mathbf{G}\right)\mathbf{C}_{SS}\right)^{-1}\mathbf{b}_{S}\left(\mathbf{G},\boldsymbol{\theta}+\Delta\boldsymbol{\theta}\right)\end{bmatrix}\text{,}$ on $\Gamma\left(\mathbf{G},\boldsymbol{\theta}\right)$. Thus, the new equilibrium after this hybrid intervention is given by $\mathbf{\hat{x}}^{\ast}=\mathbf{b}\left(\mathbf{G},\boldsymbol{\theta}\right)+\mathbf{M}\left(\mathbf{G}\right)\widetilde{\Delta\boldsymbol{\theta}^{\ast}}.$ This characterization of the equilibrium effects of hybrid interventions enables us to study optimal combinations of intervention policies in networks.

### 5.3 Concluding remarks

In this paper, we present a theory of interventions in networks. By showing an equivalence between a structural intervention and an _endogenously determined_ characteristic intervention, we analyze how these two types of interventions affect the equilibrium actions and offer new insights regarding optimal interventions in a range of applications. We discuss several avenues for future work. First, this paper mainly focuses on the benefits of structural interventions without explicitly modelling the cost of cutting/building links and nodes.
It is interesting to study the optimal intervention policy with some budget on the cost of interventions (see Galeotti et al. (2020)). Second, we treat the two instruments, i.e., characteristics and social links, independently in our analysis. In some contexts, intervention in one space (say the characteristics) may induce endogenous responses in the other space (the network links). For instance, Banerjee et al. (2018) show that new links are formed and existing links are removed after exposure to formal credit markets. Extending hybrid interventions to accommodate interdependence between the two instruments is an intriguing subject. Finally, it is natural to extend our approach to network games with non-linear responses (see, for instance, Allouch (2017) and Elliott and Golub (2018)). These and other generalizations will enrich our understanding of optimal interventions in economic settings involving networks.

Appendix

## Appendix A Proofs

Proof of Lemma 1: In the game $\Gamma\left(\mathbf{G,\theta}\right)$, the equilibrium action profile is $\mathbf{x}^{\ast}=\mathbf{M}\left(\mathbf{G}\right)\boldsymbol{\theta}$, and the aggregate equilibrium action is ${x}^{\ast}=\mathbf{1}^{\prime}\mathbf{M}\left(\mathbf{G}\right)\boldsymbol{\theta}=\mathbf{b}^{\prime}(\mathbf{G})\boldsymbol{\theta}$. Since the network $\mathbf{G}$ is fixed for a characteristic intervention, the results directly follow. $\Box$

Proof of Lemma 2: The equilibrium action profile of $\Gamma\left(\mathbf{G},\boldsymbol{\theta}\right)$ satisfies $\mathbf{x}^{\ast}=\boldsymbol{\theta}+\delta\mathbf{Gx}^{\ast}$.
Under the structural intervention $\mathbf{C}$, the new equilibrium action profile $\mathbf{\hat{x}}^{\ast}$ satisfies $\mathbf{\hat{x}}^{\ast}=\boldsymbol{\theta}+\delta\left(\mathbf{G}+\mathbf{C}\right)\mathbf{\hat{x}}^{\ast}=\left(\boldsymbol{\theta}+\underbrace{\delta\mathbf{C\hat{x}}^{\ast}}_{=\mathbf{\Delta}\boldsymbol{\theta}^{\ast}}\right)+\delta\mathbf{G\hat{x}}^{\ast}\text{.}$ That is, the structural intervention $\mathbf{C}$ is outcome equivalent to a change of players’ intrinsic marginal utilities from $\boldsymbol{\theta}$ to $\boldsymbol{\theta}+\mathbf{\Delta}\boldsymbol{\theta}^{\ast}=\boldsymbol{\theta}+\delta\mathbf{C\hat{x}}^{\ast}$. Given $\mathbf{C}=\begin{bmatrix}\mathbf{0}&\mathbf{0}\\\ \mathbf{0}&\mathbf{C}_{SS}\end{bmatrix}$, we obtain $\mathbf{\Delta}\boldsymbol{\theta}^{\ast}=\delta\mathbf{C\hat{x}}^{\ast}=\delta\begin{bmatrix}\mathbf{0}&\mathbf{0}\\\ \mathbf{0}&\mathbf{C}_{SS}\end{bmatrix}\begin{bmatrix}\mathbf{\hat{x}}_{S^{C}}^{\ast}\\\ \mathbf{\hat{x}}_{S}^{\ast}\end{bmatrix}=\begin{bmatrix}\mathbf{0}\\\ \delta\mathbf{C}_{SS}\mathbf{\hat{x}}_{S}^{\ast}\end{bmatrix}\text{.}$ That is, the structural intervention $\mathbf{C}$ is outcome equivalent to changing the characteristics of players in $S$ by $\mathbf{\Delta}\boldsymbol{\theta}_{S}^{\ast}=\delta\mathbf{C}_{SS}\mathbf{\hat{x}}_{S}^{\ast}$.
Moreover, from equation (5), $\mathbf{\hat{x}}_{S}^{\ast}$ must satisfy the following: $\mathbf{\hat{x}}_{S}^{\ast}=\mathbf{b}_{S}\left(\mathbf{G},\boldsymbol{\theta}\right)+\mathbf{M}_{SS}\left(\mathbf{G}\right)\mathbf{\Delta}\boldsymbol{\theta}_{S}^{\ast}=\mathbf{b}_{S}\left(\mathbf{G},\boldsymbol{\theta}\right)+\delta\mathbf{M}_{SS}\left(\mathbf{G}\right)\mathbf{C}_{SS}\mathbf{\hat{x}}_{S}^{\ast}\text{.}$ Solving it yields $\mathbf{\hat{x}}_{S}^{\ast}=\left(\mathbf{I}-\delta\mathbf{M}_{SS}\left(\mathbf{G}\right)\mathbf{C}_{SS}\right)^{-1}\mathbf{b}_{S}\left(\mathbf{G},\boldsymbol{\theta}\right).$ Consequently, we obtain $\mathbf{\Delta}\boldsymbol{\theta}_{S}^{\ast}=\delta\mathbf{C}_{SS}\mathbf{\hat{x}}_{S}^{\ast}=\delta\mathbf{C}_{SS}\left(\mathbf{I}-\delta\mathbf{M}_{SS}\left(\mathbf{G}\right)\mathbf{C}_{SS}\right)^{-1}\mathbf{b}_{S}\left(\mathbf{G},\boldsymbol{\theta}\right).$ $\square$

Proof of Proposition 1: It directly follows from Lemmas 1 and 2. $\square$

Proof of Corollary 1: Define $\eta(t)=\mathbf{1}^{\prime}(\mathbf{I}-\delta(\mathbf{G}+t\mathbf{C}))^{-1}\mathbf{1},t\in[0,1]$. Since we have assumed that $\mathbf{I}-\delta\mathbf{G}$ and $\mathbf{I}-\delta(\mathbf{G}+\mathbf{C})$ are both symmetric positive definite, $(\mathbf{I}-\delta(\mathbf{G}+t\mathbf{C}))$ is positive definite for any $t\in[0,1]$, hence $\eta$ is well-defined. Given $\boldsymbol{\theta}=\mathbf{1}$, $\eta(0)=\mathbf{1}^{\prime}(\mathbf{I}-\delta\mathbf{G})^{-1}\mathbf{1}=b(\mathbf{G},\mathbf{1})$ is the equilibrium aggregate action before the intervention, and $\eta(1)=\mathbf{1}^{\prime}(\mathbf{I}-\delta(\mathbf{G}+\mathbf{C}))^{-1}\mathbf{1}=b(\mathbf{G+C},\mathbf{1})$ is the equilibrium aggregate action after the intervention. Direct computation (using the fact that $d(\mathbf{A}^{-1})=-\mathbf{A}^{-1}(d\mathbf{A})\mathbf{A}^{-1}$) shows that
$\begin{split}\eta^{\prime}(0)=\eta^{\prime}(t)|_{t=0}&=\mathbf{1}^{\prime}(\mathbf{I}-\delta(\mathbf{G}+t\mathbf{C}))^{-1}\delta\mathbf{C}(\mathbf{I}-\delta(\mathbf{G}+t\mathbf{C}))^{-1}\mathbf{1}|_{t=0}\\\ &=\mathbf{1}^{\prime}(\mathbf{I}-\delta\mathbf{G})^{-1}\delta\mathbf{C}(\mathbf{I}-\delta\mathbf{G})^{-1}\mathbf{1}=\delta\mathbf{b}^{\prime}(\mathbf{G})\mathbf{C}\mathbf{b}(\mathbf{G}).\end{split}$ Critically, $\eta(\cdot)$ is convex in $t$ by Lemma 5 below; therefore, $\eta(1)-\eta(0)\geq\eta^{\prime}(0)(1-0).$ In other words, $b(\mathbf{G+C},\mathbf{1})-b(\mathbf{G},\mathbf{1})\geq\delta\mathbf{b}^{\prime}(\mathbf{G})\mathbf{C}\mathbf{b}(\mathbf{G}),$ which implies Corollary 1. (As seen from the proof, Corollary 1 holds for weighted undirected networks as well.) $\Box$

###### Lemma 5.

Let $\mathcal{O}$ denote the set of $n$ by $n$ symmetric positive definite matrices. Then the function $V(\mathbf{A}):=\mathbf{1}^{\prime}\mathbf{A}^{-1}\mathbf{1}$ is convex in $\mathbf{A}\in\mathcal{O}$. Proof of Lemma 5: Define $H(\mathbf{A},\mathbf{x})=2\mathbf{1}^{\prime}\mathbf{x}-\mathbf{x}^{\prime}\mathbf{A}\mathbf{x}$, where $\mathbf{A}\in\mathcal{O},\mathbf{x}\in\mathbf{R}^{n}$. Fixing a positive definite matrix $\mathbf{A}\in\mathcal{O}$, $H(\mathbf{A},\cdot)$ is strictly concave in $\mathbf{x}$ with the maximum value $\max_{\mathbf{x}\in\mathbf{R}^{n}}H(\mathbf{A},\mathbf{x})=\mathbf{1}^{\prime}\mathbf{A}^{-1}\mathbf{1}=V(\mathbf{A}),$ obtained at $\mathbf{x}^{\ast}=\mathbf{A}^{-1}\mathbf{1}$. Moreover, $H(\mathbf{A},\mathbf{x})$ is linear in $\mathbf{A}$ for fixed $\mathbf{x}$, so $V(\mathbf{A})=\max_{\mathbf{x}\in\mathbf{R}^{n}}H(\mathbf{A},\mathbf{x})$ is convex in $\mathbf{A}\in\mathcal{O}$, as the maximum of a family of linear functions is convex (see Boyd and Vandenberghe (2004)).
$\Box$

Proof of Lemma 3: As demonstrated in the main text, $d_{S}\left(\mathbf{G},\boldsymbol{\theta}\right)$ exactly equals the effect of the characteristic intervention $\Delta\boldsymbol{\theta}_{S}=\left(\mathbf{M}_{SS}\left(\mathbf{G}\right)\right)^{-1}\mathbf{b}_{S}\left(\mathbf{G},\boldsymbol{\theta}\right)$ on the aggregate action. Therefore, by Lemma 1, $d_{S}\left(\mathbf{G},\boldsymbol{\theta}\right)=\mathbf{b}_{S}^{\prime}\left(\mathbf{G}\right)\Delta\boldsymbol{\theta}_{S}=\mathbf{b}_{S}^{\prime}\left(\mathbf{G}\right)\left(\mathbf{M}_{SS}\left(\mathbf{G}\right)\right)^{-1}\mathbf{b}_{S}\left(\mathbf{G},\boldsymbol{\theta}\right)$. $\Box$

Proof of Proposition 2: Part (i) is obvious. For part (ii), we first define $\mathbf{v}:=\left(\mathbf{M}_{SS}\left(\mathbf{G}\right)\right)^{-1}\mathbf{b}_{S}\left(\mathbf{G},\boldsymbol{1}\right)$. We then show that $\mathbf{v}$ is a positive vector, i.e., $\mathbf{v}\succeq\mathbf{0}$: $\displaystyle\mathbf{v}$ $\displaystyle=$ $\displaystyle\left(\mathbf{M}_{SS}\left(\mathbf{G}\right)\right)^{-1}\mathbf{b}_{S}\left(\mathbf{G},\boldsymbol{1}\right)=\left(\mathbf{M}_{SS}\left(\mathbf{G}\right)\right)^{-1}\left(\mathbf{M}_{SS^{C}}\left(\mathbf{G}\right)\mathbf{1}_{|S^{C}|}+\mathbf{M}_{SS}\left(\mathbf{G}\right)\mathbf{1}_{|S|}\right)$ $\displaystyle=$ $\displaystyle\left(\mathbf{M}_{SS}\left(\mathbf{G}\right)\right)^{-1}\mathbf{M}_{SS^{C}}\left(\mathbf{G}\right)\mathbf{1}_{|S^{C}|}+\mathbf{1}_{|S|}=\underbrace{\delta\mathbf{G}_{SS^{C}}\left(\mathbf{I}-\delta\mathbf{G}_{S^{C}S^{C}}\right)^{-1}\mathbf{1}_{|S^{C}|}}_{\succeq\mathbf{0}}+\mathbf{1}_{|S|}\succeq\mathbf{0},$ where in the last equality we use the identity $\left(\mathbf{M}_{SS}\left(\mathbf{G}\right)\right)^{-1}\mathbf{M}_{SS^{C}}=\delta\mathbf{G}_{SS^{C}}\left(\mathbf{I}-\delta\mathbf{G}_{S^{C}S^{C}}\right)^{-1}$.
Moreover, given $\mathbf{b}_{S}\left(\mathbf{G}\right)\preceq\mathbf{b}_{S^{\prime}}\left(\mathbf{G}\right)$, $\mathbf{M}_{SS}\left(\mathbf{G}\right)\succeq\mathbf{M}_{S^{\prime}S^{\prime}}\left(\mathbf{G}\right)$, and $\mathbf{v}\succeq\mathbf{0}$, we have $d_{S}\left(\mathbf{G},\mathbf{1}\right)=2\mathbf{b}_{S}^{\prime}\left(\mathbf{G}\right)\mathbf{v}-\mathbf{v}^{\prime}\mathbf{M}_{SS}\left(\mathbf{G}\right)\mathbf{v}\leq 2\mathbf{b}_{S^{\prime}}^{\prime}\left(\mathbf{G}\right)\mathbf{v}-\mathbf{v}^{\prime}\mathbf{M}_{S^{\prime}S^{\prime}}\left(\mathbf{G}\right)\mathbf{v}.$ (30) Solving the following concave program (note that, as a principal submatrix of the positive definite matrix $\left(\mathbf{I}-\delta\mathbf{G}\right)^{-1}$, $\mathbf{M}_{S^{\prime}S^{\prime}}\left(\mathbf{G}\right)$ is also positive definite) yields $\underset{\mathbf{x\in}\mathbb{R}^{|S|}}{\max}\left\\{2\mathbf{b}_{S^{\prime}}^{\prime}\left(\mathbf{G}\right)\mathbf{x}-\mathbf{x}^{\prime}\mathbf{M}_{S^{\prime}S^{\prime}}\left(\mathbf{G}\right)\mathbf{x}\right\\}=\mathbf{b}_{S^{\prime}}^{\prime}\left(\mathbf{G}\right)\left(\mathbf{M}_{S^{\prime}S^{\prime}}\left(\mathbf{G}\right)\right)^{-1}\mathbf{b}_{S^{\prime}}\left(\mathbf{G}\right)=d_{S^{\prime}}\left(\mathbf{G},\mathbf{1}\right).$ (31) By optimality, $\underset{\mathbf{x\in}\mathbb{R}^{|S|}}{\max}\left\\{2\mathbf{b}_{S^{\prime}}^{\prime}\left(\mathbf{G}\right)\mathbf{x}-\mathbf{x}^{\prime}\mathbf{M}_{S^{\prime}S^{\prime}}\left(\mathbf{G}\right)\mathbf{x}\right\\}\geq 2\mathbf{b}_{S^{\prime}}^{\prime}\left(\mathbf{G}\right)\mathbf{v}-\mathbf{v}^{\prime}\mathbf{M}_{S^{\prime}S^{\prime}}\left(\mathbf{G}\right)\mathbf{v}.$ (32) Combining equations (30), (31), and (32) yields $d_{S}\left(\mathbf{G},\mathbf{1}\right)\leq d_{S^{\prime}}\left(\mathbf{G},\mathbf{1}\right)$. $\square$

Proof of Proposition 3:

###### Remark 1.
We have the following alternative expressions for the blocks of the matrix $\mathbf{W}$: $\displaystyle\mathbf{W}_{S^{C}S^{C}}\left(\mathbf{G},S\right)$ $\displaystyle=$ $\displaystyle\left(\mathbf{I}-\delta\mathbf{G}_{S^{C}S^{C}}\right)^{-1}\text{;}$ $\displaystyle\mathbf{W}_{S^{C}S}\left(\mathbf{G},S\right)$ $\displaystyle=$ $\displaystyle\left(\mathbf{I}-\delta\mathbf{G}_{S^{C}S^{C}}\right)^{-1}\delta\mathbf{G}_{S^{C}S}\text{;}$ $\displaystyle\mathbf{W}_{SS}\left(\mathbf{G},S\right)$ $\displaystyle=$ $\displaystyle\delta\mathbf{G}_{SS^{C}}\left(\mathbf{I}-\delta\mathbf{G}_{S^{C}S^{C}}\right)^{-1}\delta\mathbf{G}_{S^{C}S}+\delta\mathbf{G}_{SS}+\mathbf{I}\text{.}$ These expressions directly follow from the definition of $\mathbf{W}$. Unlike Proposition 3, these expressions use the centralities in the remaining network $\mathbf{G}_{S^{C}S^{C}}$. The Leontief inverse matrix can be written in the block form $\left(\mathbf{I}-\delta\mathbf{G}\right)^{-1}=\left[\begin{array}[]{cc}\mathbf{I}-\delta\mathbf{G}_{S^{C}S^{C}}&-\delta\mathbf{G}_{S^{C}S}\\\ -\delta\mathbf{G}_{SS^{C}}&\mathbf{I}-\delta\mathbf{G}_{SS}\end{array}\right]^{-1}=\left[\begin{array}[]{cc}\mathbf{M}_{S^{C}S^{C}}\left(\mathbf{G}\right)&\mathbf{M}_{S^{C}S}\left(\mathbf{G}\right)\\\ \mathbf{M}_{SS^{C}}\left(\mathbf{G}\right)&\mathbf{M}_{SS}\left(\mathbf{G}\right)\end{array}\right].$ For ease of notation, let $\mathbf{I}-\delta\mathbf{G}_{S^{C}S^{C}}=\mathbf{A}$, $-\delta\mathbf{G}_{S^{C}S}=\mathbf{B}$, $-\delta\mathbf{G}_{SS^{C}}=\mathbf{C}$ and $\mathbf{I}-\delta\mathbf{G}_{SS}=\mathbf{D}$. Then by the remark above, we have $\mathbf{W}_{S^{C}S^{C}}\left(\mathbf{G},S\right)=\mathbf{A}^{-1}$, $\mathbf{W}_{S^{C}S}\left(\mathbf{G},S\right)=-\mathbf{A}^{-1}\mathbf{B}$ and $\mathbf{W}_{SS}\left(\mathbf{G},S\right)=\mathbf{C}\mathbf{A}^{-1}\mathbf{B}-\mathbf{D}+2\mathbf{I}$.
Using the block matrix inversion, $\displaystyle\left[\begin{array}[]{cc}\mathbf{A}&\mathbf{B}\\\ \mathbf{C}&\mathbf{D}\end{array}\right]^{-1}$ $\displaystyle=$ $\displaystyle\left[\begin{array}[]{cc}\mathbf{A}^{-1}\mathbf{+A}^{-1}\mathbf{B}\left(\mathbf{D}-\mathbf{CA}^{-1}\mathbf{B}\right)^{-1}\mathbf{CA}^{-1}&-\mathbf{A}^{-1}\mathbf{B}\left(\mathbf{D}-\mathbf{CA}^{-1}\mathbf{B}\right)^{-1}\\\ \mathbf{-\left(\mathbf{D}-\mathbf{CA}^{-1}\mathbf{B}\right)^{-1}CA}^{-1}&\left(\mathbf{D}-\mathbf{CA}^{-1}\mathbf{B}\right)^{-1}\end{array}\right]$ $\displaystyle=$ $\displaystyle\left[\begin{array}[]{cc}\mathbf{M}_{S^{C}S^{C}}\left(\mathbf{G}\right)&\mathbf{M}_{S^{C}S}\left(\mathbf{G}\right)\\\ \mathbf{M}_{SS^{C}}\left(\mathbf{G}\right)&\mathbf{M}_{SS}\left(\mathbf{G}\right)\end{array}\right]\text{.}$ Thus, $\left(\mathbf{D}-\mathbf{CA}^{-1}\mathbf{B}\right)^{-1}=\mathbf{M}_{SS}\left(\mathbf{G}\right)$, which implies $\mathbf{W}_{SS}\left(\mathbf{G},S\right)=\mathbf{C}\mathbf{A}^{-1}\mathbf{B}-\mathbf{D}+2\mathbf{I}=2\mathbf{I}-\left(\mathbf{M}_{SS}\left(\mathbf{G}\right)\right)^{-1}\text{.}$ We further have $-\mathbf{A}^{-1}\mathbf{B}\left(\mathbf{D}-\mathbf{CA}^{-1}\mathbf{B}\right)^{-1}=\mathbf{M}_{S^{C}S}\left(\mathbf{G}\right)$. Therefore, $\mathbf{W}_{S^{C}S}\left(\mathbf{G},S\right)=-\mathbf{A}^{-1}\mathbf{B}=\mathbf{M}_{S^{C}S}\left(\mathbf{G}\right)\left(\mathbf{M}_{SS}\left(\mathbf{G}\right)\right)^{-1}\text{.}$ In addition, $\mathbf{A}^{-1}\mathbf{B}\left(\mathbf{D}-\mathbf{CA}^{-1}\mathbf{B}\right)^{-1}\mathbf{CA}^{-1}=\mathbf{M}_{S^{C}S}\left(\mathbf{G}\right)\left(\mathbf{M}_{SS}\left(\mathbf{G}\right)\right)^{-1}\mathbf{M}_{SS^{C}}\left(\mathbf{G}\right)$.
Substituting in the identity $\mathbf{A}^{-1}\mathbf{+A}^{-1}\mathbf{B}\left(\mathbf{D}-\mathbf{CA}^{-1}\mathbf{B}\right)^{-1}\mathbf{CA}^{-1}=\mathbf{M}_{S^{C}S^{C}}\left(\mathbf{G}\right)$, we obtain $\mathbf{A}^{-1}=\mathbf{W}_{S^{C}S^{C}}\left(\mathbf{G},S\right)=\mathbf{M}_{S^{C}S^{C}}\left(\mathbf{G}\right)-\mathbf{M}_{S^{C}S}\left(\mathbf{G}\right)\left(\mathbf{M}_{SS}\left(\mathbf{G}\right)\right)^{-1}\mathbf{M}_{SS^{C}}\left(\mathbf{G}\right)\text{.}$ Let $S=A\cup B$; then the inverse of the matrix $\boldsymbol{M}_{SS}$ is given by $\begin{aligned} (\boldsymbol{M}_{SS})^{-1}=&\left[\begin{array}[]{cc}\boldsymbol{M}_{AA}&\boldsymbol{M}_{AB}\\\ \boldsymbol{M}_{BA}&\boldsymbol{M}_{BB}\end{array}\right]^{-1}\\\ =&\left[\begin{array}[]{cc}(\boldsymbol{M}_{AA})^{-1}\left(\boldsymbol{I}+\boldsymbol{M}_{AB}\left(\boldsymbol{W}_{BB}(\boldsymbol{G},A)\right)^{-1}\boldsymbol{M}_{BA}(\boldsymbol{M}_{AA})^{-1}\right)&-(\boldsymbol{M}_{AA})^{-1}\boldsymbol{M}_{AB}\left(\boldsymbol{W}_{BB}(\boldsymbol{G},A)\right)^{-1}\\\ -\left(\boldsymbol{W}_{BB}(\boldsymbol{G},A)\right)^{-1}\boldsymbol{M}_{BA}(\boldsymbol{M}_{AA})^{-1}&\left(\boldsymbol{W}_{BB}(\boldsymbol{G},A)\right)^{-1}\end{array}\right]\\\ =&\left[\begin{array}[]{cc}\left(\boldsymbol{W}_{AA}(\boldsymbol{G},B)\right)^{-1}&-\left(\boldsymbol{W}_{AA}(\boldsymbol{G},B)\right)^{-1}\boldsymbol{M}_{AB}(\boldsymbol{M}_{BB})^{-1}\\\ -(\boldsymbol{M}_{BB})^{-1}\boldsymbol{M}_{BA}\left(\boldsymbol{W}_{AA}(\boldsymbol{G},B)\right)^{-1}&(\boldsymbol{M}_{BB})^{-1}\left(\boldsymbol{I}+\boldsymbol{M}_{BA}\left(\boldsymbol{W}_{AA}(\boldsymbol{G},B)\right)^{-1}\boldsymbol{M}_{AB}(\boldsymbol{M}_{BB})^{-1}\right)\end{array}\right]\end{aligned}.$ Using (20), we obtain that $\displaystyle\boldsymbol{W}_{AB}(\boldsymbol{G},A\cup B)$ $\displaystyle=\left(2\boldsymbol{I}-(\boldsymbol{M}_{SS})^{-1}\right)_{AB}=-\left((\boldsymbol{M}_{SS})^{-1}\right)_{AB}$
$\displaystyle=(\boldsymbol{M}_{AA})^{-1}\boldsymbol{M}_{AB}\left(\boldsymbol{W}_{BB}(\boldsymbol{G},A)\right)^{-1}$ $\displaystyle=\left(\boldsymbol{W}_{AA}(\boldsymbol{G},B)\right)^{-1}\boldsymbol{M}_{AB}(\boldsymbol{M}_{BB})^{-1}.$

Proof of Proposition 4: To prove Proposition 4, it suffices to show equation (26). Indeed, when the bridge link between $i\in N_{1}$ and $j\in N_{2}$ is added, the new equilibrium efforts of $i$ and $j$ satisfy $\begin{cases}\hat{x}_{i}^{\ast}{{=b_{i}\left(\mathbf{N}^{1}\right)+m_{ii}\left(\mathbf{N}^{1}\right)\underbrace{\delta\hat{x}_{j}^{\ast}}_{=\Delta\theta_{i}^{\ast}}}}\\\ \hat{x}_{j}^{\ast}{{=b_{j}\left(\mathbf{N}^{2}\right)+m_{jj}\left(\mathbf{N}^{2}\right)\underbrace{\delta\hat{x}_{i}^{\ast}}_{=\Delta\theta_{j}^{\ast}}}}\end{cases}\text{.}$ (35) Here we have translated the structural intervention (adding the link $i-j$) into the corresponding characteristic intervention: $\Delta\theta_{i}^{\ast}=\delta\hat{x}_{j}^{\ast}$, $\Delta\theta_{j}^{\ast}=\delta\hat{x}_{i}^{\ast}$ and $\Delta\theta_{k}^{\ast}=0$ for all $k\notin\left\\{i,j\right\\}$ (see Lemma 2). Note that $m_{ij}=m_{ji}=0$, as the two networks are initially isolated.
Simple algebra yields $\displaystyle\hat{x}_{i}^{\ast}$ $\displaystyle=$ $\displaystyle\frac{b_{i}\left(\mathbf{N}^{1},\boldsymbol{\theta}^{1}\right)+\delta m_{ii}\left(\mathbf{N}^{1}\right)b_{j}\left(\mathbf{N}^{2},\boldsymbol{\theta}^{2}\right)}{1-\delta^{2}m_{ii}\left(\mathbf{N}^{1}\right)m_{jj}\left(\mathbf{N}^{2}\right)}\text{,~{}~{}~{}}\hat{x}_{j}^{\ast}=\frac{b_{j}\left(\mathbf{N}^{2},\boldsymbol{\theta}^{2}\right)+\delta m_{jj}\left(\mathbf{N}^{2}\right)b_{i}\left(\mathbf{N}^{1},\boldsymbol{\theta}^{1}\right)}{1-\delta^{2}m_{ii}\left(\mathbf{N}^{1}\right)m_{jj}\left(\mathbf{N}^{2}\right)}\text{.}$ By Proposition 1, the change in aggregate action equals $b\left(\mathbf{G+E}_{ij}\right)-b\left(\mathbf{G}\right)=b_{i}\left(\mathbf{N}^{1}\right)\underbrace{\delta\hat{x}_{j}^{\ast}}_{=\Delta\boldsymbol{\theta}_{i}^{\ast}}+b_{j}\left(\mathbf{N}^{2}\right){{\underbrace{\delta\hat{x}_{i}^{\ast}}_{=\Delta\boldsymbol{\theta}_{j}^{\ast}}}}=\delta L_{ij}\left(\mathbf{N}^{1},\mathbf{N}^{2}\right).$ $\square$

Proof of Corollary 2: For item (i), we first note that, by Definition 4, $L_{ij}$ clearly increases with $b_{i}$ and $m_{ii}$ for each given $j$. The claim follows. For item (ii), given $m_{jj}=m_{j^{\prime}j^{\prime}}$ and $b_{j}\geq b_{j^{\prime}}$, we have $\displaystyle L_{ij}-L_{ij^{\prime}}$ $\displaystyle=$ $\displaystyle\frac{\delta m_{ii}\left(b_{j}^{2}-b_{j^{\prime}}^{2}\right)}{1-\delta^{2}m_{ii}m_{jj}}+\frac{2b_{i}\left(b_{j}-b_{j^{\prime}}\right)}{1-\delta^{2}m_{ii}m_{jj}}=\frac{\delta\left(b_{j}^{2}-b_{j^{\prime}}^{2}\right)}{\frac{1}{m_{ii}}-\delta^{2}m_{jj}}+\frac{2b_{i}\left(b_{j}-b_{j^{\prime}}\right)}{1-\delta^{2}m_{ii}m_{jj}}\text{,}$ which clearly increases in $m_{ii}$. The result follows by noting that $b_{i}=b_{i^{\prime}}$ and $m_{ii}\geq m_{i^{\prime}i^{\prime}}$.
For item (iii), we apply Taylor expansions to obtain that $b_{k}\left(\mathbf{N}^{1}\right)=1+\delta e_{k}\left(\mathbf{N}^{1}\right)+O\left(\delta^{2}\right),m_{kk}\left(\mathbf{N}^{1}\right)=1+O\left(\delta^{2}\right),~{}~{}k\in N_{1},$ where $O\left(\delta^{2}\right)$ denotes a real-valued function such that $\limsup_{\delta\rightarrow 0}|\frac{O\left(\delta^{2}\right)}{\delta^{2}}|<\infty$. Consequently, $L_{ij}\left(\mathbf{N}^{1},\mathbf{N}^{2}\right)=2+2\delta\left(1+e_{i}\left(\mathbf{N}^{1}\right)+e_{j}\left(\mathbf{N}^{2}\right)\right)+O\left(\delta^{2}\right)\text{.}$ Thus, when $\delta$ is sufficiently small, only the degree centrality matters for the bridge index $L_{ij}$. $\square$

Proof of Lemma 4: The proof is similar to that of Proposition 4, with the exception that $m_{ij}$ is not necessarily zero. For item (i), applying Proposition 1 with $S=\\{i,j\\}$ and $\mathbf{C}_{SS}=\begin{bmatrix}0&1\\\ 1&0\end{bmatrix}$ yields $\Delta x^{*}=\mathbf{b}_{S}^{\prime}\left(\mathbf{G}\right)\Delta\boldsymbol{\theta}_{S}^{\ast}=\mathbf{b}_{S}^{\prime}\delta\mathbf{C}_{SS}\left(\mathbf{I}-\delta\mathbf{M}_{SS}\left(\mathbf{G}\right)\mathbf{C}_{SS}\right)^{-1}\mathbf{b}_{S}\left(\mathbf{G},\boldsymbol{\theta}\right)\text{,}$ which reduces to $\delta L_{ij}$ after some algebra. For item (ii), we apply Proposition 1 with $S=\\{i,j\\}$, $\mathbf{C}_{SS}=-\begin{bmatrix}0&1\\\ 1&0\end{bmatrix}$. The analysis is similar, hence omitted. $\Box$

## Appendix B A walk-counting interpretation

### B.1 Intercentrality of group

In this subsection, we show that any block of $\mathbf{W}\left(\mathbf{G},S\right)$ can be decomposed using the idea of walk concatenation. We first introduce some notation.

###### Definition 5.

A walk $\mathcal{P}$ in network $\mathbf{G}$ is a finite sequence of nodes $(i_{1},i_{2},\cdots,i_{t+1})$ such that $g_{i_{1}i_{2}}=g_{i_{2}i_{3}}=\cdots=g_{i_{t}i_{t+1}}=1$. The node $i_{1}$ is the starting node and $i_{t+1}$ is the ending node.
The length of the walk is denoted by $\\#(\mathcal{P})=t$. A walk of length zero is taken to be a one-tuple $(i_{1})$. Fixing the parameter $\delta$, we define $\vee_{\delta}(\mathcal{P}):=\delta^{\\#(\mathcal{P})}$ for a walk $\mathcal{P}$. This definition is linearly extended to a set of walks $\mathbb{P}$: $\vee_{\delta}(\mathbb{P}):=\sum_{\mathcal{P}\in\mathbb{P}}\vee_{\delta}(\mathcal{P})=\sum_{\mathcal{P}\in\mathbb{P}}\delta^{\\#(\mathcal{P})}.$ (We often drop the subscript $\delta$ in $\vee$ when the context is clear.) Given two walks $\mathcal{P}=(i_{1},i_{2},\cdots,i_{t+1})$, $\mathcal{Q}=(j_{1},j_{2},\cdots,j_{s+1})$ with $i_{t+1}=j_{1}$, we construct the concatenation of $\mathcal{P}$ and $\mathcal{Q}$ as $\mathcal{P}\odot\mathcal{Q}=(i_{1},i_{2},\cdots,i_{t+1},j_{2},\cdots,j_{s+1})$. Clearly, we have $\vee(\mathcal{P}\odot\mathcal{Q})=\vee(\mathcal{P})\cdot\vee(\mathcal{Q})\mbox{~{}and~{} }\\#(\mathcal{P}\odot\mathcal{Q})=\\#(\mathcal{P})+\\#(\mathcal{Q}).$ (36) These definitions are convenient. For instance, let $\mathbb{M}_{ij}(\mathbf{G})$ denote the set of walks that start at $i$ and end at $j$ in network $\mathbf{G}$. Then, we have $m_{ij}(\mathbf{G})=\vee(\mathbb{M}_{ij}(\mathbf{G}))$.

###### Definition 6.

Fixing a non-empty proper subset $S$ of $N$ in the network $\left(N,\mathbf{G}\right)$, for any $i$, $j\in N$, we define $\mathbb{W}_{ij}\left(\mathbf{G},S\right)$ as the set of walks in $\mathbb{M}_{ij}(\mathbf{G})$ that do not contain any node in $S$, with the possible exception of the starting node $i$ and the ending node $j$. By the definition, it is obvious that $w_{ij}\left(\mathbf{G},S\right)=\vee(\mathbb{W}_{ij}\left(\mathbf{G},S\right))$. Now we are in a position to show the equations in Proposition 3 using walk counting.
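As a numerical aside, the walk sums above can be checked directly: truncating the series $\sum_{t\geq 0}\delta^{t}\mathbf{G}^{t}$ recovers $\mathbf{M}$ entrywise, and the analogous restricted series (walks whose interior avoids $S$) recovers $\mathbf{W}_{S^{C}S}$. The following Python sketch illustrates this; the network, $\delta$, and $S$ are illustrative choices, not examples from the paper:

```python
import numpy as np

# Illustrative 5-node undirected network; delta * lambda_max(G) < 1 so the series converges.
n, delta, T = 5, 0.2, 200
G = np.zeros((n, n))
for i, j in [(0, 1), (1, 2), (2, 3), (3, 4), (0, 4), (1, 3)]:
    G[i, j] = G[j, i] = 1.0

# m_ij = sum_t delta^t * (# walks of length t from i to j); walk counts are entries of G^t.
M = np.linalg.inv(np.eye(n) - delta * G)
M_series = sum(delta**t * np.linalg.matrix_power(G, t) for t in range(T))

# Restricted sums: walks from S^C to S whose interior stays in S^C, with S = {3, 4}.
# Closed form (I - delta*G_{S^C S^C})^{-1} * delta*G_{S^C S}, as in Remark 1 below.
Sc, S = [0, 1, 2], [3, 4]
G_cc, G_cs = G[np.ix_(Sc, Sc)], G[np.ix_(Sc, S)]
W_cs = np.linalg.inv(np.eye(len(Sc)) - delta * G_cc) @ (delta * G_cs)
W_cs_series = sum(delta**t * np.linalg.matrix_power(G_cc, t) @ (delta * G_cs)
                  for t in range(T))

print(np.max(np.abs(M - M_series)), np.max(np.abs(W_cs - W_cs_series)))
```

With $T=200$ the truncation error is far below machine precision, so both maxima print as essentially zero.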
Given $i\in S^{C},j\in S$, any walk $\mathcal{Q}$ in $\mathbb{M}_{ij}$ can be _uniquely_ decomposed as the concatenation of two walks, $\mathcal{P}^{il}$ and $\mathcal{P}^{lj}$ for some $l\in S$, where $l$ is the _first_ node along the walk $\mathcal{Q}$ that lies in $S$. Such a walk $\mathcal{Q}$ contains at least one node in $S$, as the ending node $j$ is in $S$. By this definition of $l$, we have $\mathcal{P}^{il}\in\mathbb{W}_{il}(\mathbf{G},S)$. Clearly, $\mathcal{P}^{lj}\in\mathbb{M}_{lj}$. Furthermore, such a decomposition is unique. Consequently, we have the following decomposition $\mathbb{M}_{ij}=\bigcup_{l\in S}\bigcup_{\mathcal{P}^{il}\in\mathbb{W}_{il}(\mathbf{G},S)}\bigcup_{\mathcal{P}^{lj}\in\mathbb{M}_{lj}}\mathcal{P}^{il}\odot\mathcal{P}^{lj}.$ Taking $\vee$ on both sides and applying the properties of $\vee$ yields the linear equations $m_{ij}=\sum_{l\in S}w_{il}m_{lj}$ for any $i\in S^{C},j\in S$. In matrix form, we obtain $\mathbf{M}_{S^{C}S}\left(\mathbf{G}\right)=\mathbf{W}_{S^{C}S}\left(\mathbf{G},S\right)\mathbf{M}_{SS}\left(\mathbf{G}\right)$. Hence, $\mathbf{W}_{S^{C}S}\left(\mathbf{G},S\right)=\mathbf{M}_{S^{C}S}\left(\mathbf{G}\right)\left(\mathbf{M}_{SS}\left(\mathbf{G}\right)\right)^{-1}$. Now we turn to equation (18). It suffices to show the following $\mathbf{M}_{S^{C}S^{C}}\left(\mathbf{G}\right)=\mathbf{W}_{S^{C}S^{C}}\left(\mathbf{G},S\right)+\mathbf{W}_{S^{C}S}\left(\mathbf{G},S\right)\mathbf{M}_{SS^{C}}\left(\mathbf{G}\right),$ (37) which holds because, for $i,j\in S^{C}$, the walks in $\mathbb{M}_{ij}$ can be decomposed into the disjoint union: $\mathbb{M}_{ij}=\mathbb{W}_{ij}\bigcup\left(\bigcup_{l\in S}\bigcup_{\mathcal{P}^{il}\in\mathbb{W}_{il}}\bigcup_{\mathcal{P}^{lj}\in\mathbb{M}_{lj}}\mathcal{P}^{il}\odot\mathcal{P}^{lj}\right).$ The intuition follows from a simple counting exercise.
For a walk $\mathcal{Q}$ in $\mathbb{M}_{ij}$, either it does not contain any node in $S$, or it contains at least one node in $S$. The set of walks in the former case is precisely $\mathbb{W}_{ij}$, while the set of walks in the latter case is precisely $\left(\bigcup_{l\in S}\bigcup_{\mathcal{P}^{il}\in\mathbb{W}_{il}}\bigcup_{\mathcal{P}^{lj}\in\mathbb{M}_{lj}}\mathcal{P}^{il}\odot\mathcal{P}^{lj}\right)$ (note that $l$ is the first node in a walk $\mathcal{Q}$ that lies in $S$). Taking the operator $\vee$ on both sides yields equation (37). To show equation (20), it suffices to show that ${{\mathbf{M}}_{SS}}({\mathbf{G}})=\underbrace{{{\mathbf{W}}_{SS}}({\mathbf{G}},S)}_{{\text{Walks never passing }}S}+\underbrace{\left({{{\mathbf{W}}_{SS}}({\mathbf{G}},S)-{\mathbf{I}}}\right)\left({{{\mathbf{M}}_{SS}}({\mathbf{G}})-{\mathbf{I}}}\right)}_{{\text{Walks always passing }}S}$ (38) which follows from the following decomposition (the intuition is similar, hence omitted; here $k$ is the first node of a walk in $\mathbb{M}_{ij}(\mathbf{G})\backslash\mathbb{W}_{ij}(\mathbf{G},S)$ that lies in $S$, and we must remove the walk of length zero from $\mathbb{W}_{ik}$ and $\mathbb{M}_{kj}$ to guarantee uniqueness of the concatenation decomposition, which explains the identity matrices in equation (38)): $\mathbb{M}_{ij}(\mathbf{G})\backslash\mathbb{W}_{ij}(\mathbf{G},S)=\bigcup_{k\in S}\bigcup_{\mathcal{P}^{ik}\in\mathbb{W}_{ik}(\mathbf{G},S)\backslash\\{(i)\\}}\bigcup_{\mathcal{P}^{kj}\in\mathbb{M}_{kj}(\mathbf{G})\backslash\\{(k)\\}}\mathcal{P}^{ik}\odot\mathcal{P}^{kj}$ for $i,j\in S$. Taking $\vee$ on both sides yields $m_{ij}-w_{ij}=\sum_{k\in S}{[w_{ik}(\mathbf{G},S)-\mathbf{1}_{\\{k=i\\}}][m_{kj}(\mathbf{G})-\mathbf{1}_{\\{k=j\\}}]},$ which holds for any $i,j\in S$ and thus is equivalent to equation (38).
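The identities just proved can be verified numerically. The Python sketch below (the network, $\delta$, and partition are illustrative, not from the paper) builds the blocks of $\mathbf{W}$ from the expressions in Remark 1 and checks $\mathbf{W}_{S^{C}S}=\mathbf{M}_{S^{C}S}(\mathbf{M}_{SS})^{-1}$, equation (37), and $\mathbf{W}_{SS}=2\mathbf{I}-(\mathbf{M}_{SS})^{-1}$:

```python
import numpy as np

# Illustrative 6-node undirected network, with S = {3, 4, 5} and S^C = {0, 1, 2}.
n, delta = 6, 0.15
G = np.zeros((n, n))
for i, j in [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (0, 5), (1, 4)]:
    G[i, j] = G[j, i] = 1.0
Sc, S = [0, 1, 2], [3, 4, 5]

M = np.linalg.inv(np.eye(n) - delta * G)
M_cc, M_cs = M[np.ix_(Sc, Sc)], M[np.ix_(Sc, S)]
M_sc, M_ss = M[np.ix_(S, Sc)], M[np.ix_(S, S)]

# Blocks of W built directly from the expressions in Remark 1.
A_inv = np.linalg.inv(np.eye(len(Sc)) - delta * G[np.ix_(Sc, Sc)])
W_cc = A_inv
W_cs = A_inv @ (delta * G[np.ix_(Sc, S)])
W_ss = (delta * G[np.ix_(S, Sc)]) @ A_inv @ (delta * G[np.ix_(Sc, S)]) \
       + delta * G[np.ix_(S, S)] + np.eye(len(S))

M_ss_inv = np.linalg.inv(M_ss)
ok_cs = np.allclose(W_cs, M_cs @ M_ss_inv)                # W_{S^C S} = M_{S^C S} (M_SS)^{-1}
ok_cc = np.allclose(M_cc, W_cc + W_cs @ M_sc)             # equation (37)
ok_ss = np.allclose(W_ss, 2 * np.eye(len(S)) - M_ss_inv)  # W_SS = 2I - (M_SS)^{-1}
print(ok_cs, ok_cc, ok_ss)
```

All three checks hold exactly (up to floating-point error) for any $\delta$ with $\delta\lambda_{\max}(\mathbf{G})<1$ and any proper subset $S$.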
### B.2 Key bridge index

In this section we prove the following identity: $b\left(\mathbf{G+E}_{ij}\right)-b\left(\mathbf{G}\right)=\delta L_{ij}\left(\mathbf{N}^{1},\mathbf{N}^{2}\right)=\frac{\delta^{2}m_{jj}\left(\mathbf{N}^{2}\right)b_{i}^{2}\left(\mathbf{N}^{1}\right)+\delta^{2}m_{ii}\left(\mathbf{N}^{1}\right)b_{j}^{2}\left(\mathbf{N}^{2}\right)+2\delta b_{j}\left(\mathbf{N}^{2}\right)b_{i}\left(\mathbf{N}^{1}\right)}{1-\delta^{2}m_{jj}\left(\mathbf{N}^{2}\right)m_{ii}\left(\mathbf{N}^{1}\right)}$ (39) $\forall i\in N_{1},j\in N_{2},$ where $\mathbf{G}$ is the union of two isolated networks. For ease of notation we drop the network variable in $m_{ii},b_{i},b_{j},m_{jj}$. Define $\mathbb{W}_{.i}=\bigcup_{j\in N}\mathbb{W}_{ji}$ as the set of walks ending at $i$. Define $\mathbb{W}_{i.}=\bigcup_{l\in N}\mathbb{W}_{il}$ as the set of walks starting at $i$. Clearly, $\vee(\mathbb{W}_{.i})=\vee(\mathbb{W}_{i.})=\sum_{j\in N}m_{ji}=\sum_{l\in N}m_{il}=b_{i}$. Note that the term on the left-hand side of equation (39) is the discounted sum of the additional walks due to the new link $(i,j)$. We will divide these new walks into four types, and attribute each type to a term on the right-hand side of equation (39). (Type I) The new walks starting at a node in $\mathbf{N}^{1}$ and ending at a node in $\mathbf{N}^{2}$. Take such a walk $\mathcal{Q}$. If it passes the bridge link $(i,j)$ only once, it can be uniquely written as $\mathcal{P}^{.i}\odot(i,j)\odot\mathcal{P}^{j.}$, where $\mathcal{P}^{.i}\in\mathbb{W}_{.i}(\mathbf{N}^{1})$ and $\mathcal{P}^{j.}\in\mathbb{W}_{j.}(\mathbf{N}^{2})$ (see Figure 6 for such an example).
We have $\vee\left(\bigcup_{\mathcal{P}^{.i}\in\mathbb{W}_{.i}(\mathbf{N}^{1})}\bigcup_{\mathcal{P}^{j.}\in\mathbb{W}_{j.}(\mathbf{N}^{2})}\mathcal{P}^{.i}\odot(i,j)\odot\mathcal{P}^{j.}\right)=\vee\left(\mathbb{W}_{.i}\right)\times\delta\times\vee\left(\mathbb{W}_{j.}\right)=\delta b_{i}b_{j}.$ However, $\mathcal{Q}$ can also pass the bridge link $(i,j)$ three times, five times, etc. We can apply a similar exercise to show that the set of walks originating from nodes in $\mathbf{N}^{1}$ and stopping at nodes in $\mathbf{N}^{2}$ that pass $\left(i,j\right)$ exactly three times is given by $\bigcup_{\left(k,l\right)\in N_{1}\times N_{2}}\bigcup_{(\mathcal{P}^{ki},\mathcal{P}^{jj},\mathcal{P}^{ii},\mathcal{P}^{jl})\in(\mathbb{M}_{ki}(\mathbf{N}^{1}))\times(\mathbb{M}_{jj}(\mathbf{N}^{2}))\times(\mathbb{M}_{ii}(\mathbf{N}^{1}))\times(\mathbb{M}_{jl}(\mathbf{N}^{2}))}(\mathcal{P}^{ki}\odot\left(i,j\right)\odot\mathcal{P}^{jj}\odot\left(j,i\right)\odot\mathcal{P}^{ii}\odot\left(i,j\right)\odot\mathcal{P}^{jl})$ (Figure 7 gives a walk that passes the bridge three times.) Taking $\vee$ yields $\displaystyle\sum_{k\in N_{1}}\vee\left(\mathbb{M}_{ki}(\mathbf{N}^{1})\right)\cdot\delta\cdot\vee\left(\mathbb{M}_{jj}(\mathbf{N}^{2})\right)\cdot\delta\cdot\vee\left(\mathbb{M}_{ii}(\mathbf{N}^{1})\right)\cdot\delta\cdot\sum_{l\in N_{2}}\vee\left(\mathbb{M}_{jl}(\mathbf{N}^{2})\right)$ $\displaystyle=$ $\displaystyle\delta b_{i}b_{j}(\delta^{2}m_{ii}m_{jj}).$ Similarly, $\delta b_{i}b_{j}(\delta^{2}m_{ii}m_{jj})^{2}$ captures type I walks that pass the link $(i,j)$ five times.
Taking the sum yields $\delta b_{i}b_{j}+\delta b_{i}b_{j}(\delta^{2}m_{ii}m_{jj})+\delta b_{i}b_{j}(\delta^{2}m_{ii}m_{jj})^{2}+\cdots=\delta b_{i}b_{j}\frac{1}{1-\delta^{2}m_{ii}m_{jj}}.$ (40)

Figure 6: New walk from one network to another by passing bridge once

Figure 7: New walk from one network to another by passing bridge three times

(Type II) For the same reason, the new walks starting at a node in $\mathbf{N}^{2}$ and ending at a node in $\mathbf{N}^{1}$ contribute $\delta b_{i}b_{j}\frac{1}{1-\delta^{2}m_{ii}m_{jj}}$ to equation (39). (Type III) The new walks starting at a node in $\mathbf{N}^{1}$ and ending at a node in $\mathbf{N}^{1}$. By definition, such a walk must pass the bridge $(i,j)$ two times, four times, etc. To pass the bridge twice, the walk must decompose into the concatenation $\mathcal{P}^{.i}\odot\left(i,j\right)\odot\mathcal{P}^{jj}\odot\left(j,i\right)\odot\mathcal{P}^{i.}$ where $\mathcal{P}^{.i}\in\mathbb{W}_{.i}$, $\mathcal{P}^{jj}\in\mathbb{M}_{jj}$, and $\mathcal{P}^{i.}\in\mathbb{W}_{i.}$ (see Figure 8 for an example). Taking $\vee$ of these walks yields $\delta^{2}b_{i}^{2}m_{jj}$. To take into account the fact that such walks can pass the bridge four times, six times, etc., we should multiply $\delta^{2}b_{i}^{2}m_{jj}$ by $\frac{1}{1-\delta^{2}m_{ii}m_{jj}}$ (the underlying logic is similar to the exercise for type I). Together, type III walks contribute exactly $\delta^{2}b_{i}^{2}m_{jj}\frac{1}{1-\delta^{2}m_{ii}m_{jj}}$ to equation (39).

Figure 8: New walk starting and ending at a same network

(Type IV) The new walks starting at a node in $\mathbf{N}^{2}$ and ending at a node in $\mathbf{N}^{2}$. This part is similar to type III, and it contributes $\delta^{2}b_{j}^{2}m_{ii}\frac{1}{1-\delta^{2}m_{ii}m_{jj}}$ to equation (39).
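Equation (39) can also be confirmed numerically: form two isolated components, add one bridge, and compare the exact change in the aggregate centrality with $\delta L_{ij}$. A minimal Python sketch (the components, bridge, and $\delta$ are illustrative choices, not from the paper):

```python
import numpy as np

delta = 0.15
G = np.zeros((6, 6))
for u, v in [(0, 1), (1, 2), (0, 2)]:   # component N^1: a triangle on {0, 1, 2}
    G[u, v] = G[v, u] = 1.0
for u, v in [(3, 4), (4, 5)]:           # component N^2: a line on {3, 4, 5}
    G[u, v] = G[v, u] = 1.0

def b_vec(A):
    """Katz-Bonacich vector b(A, 1) = (I - delta*A)^{-1} 1."""
    return np.linalg.solve(np.eye(A.shape[0]) - delta * A, np.ones(A.shape[0]))

M = np.linalg.inv(np.eye(6) - delta * G)
b = b_vec(G)
i, j = 0, 3                             # bridge endpoints, i in N^1 and j in N^2
m_ii, m_jj, b_i, b_j = M[i, i], M[j, j], b[i], b[j]

# Right-hand side of equation (39), i.e. delta * L_ij.
dL = (delta**2 * m_jj * b_i**2 + delta**2 * m_ii * b_j**2
      + 2 * delta * b_i * b_j) / (1 - delta**2 * m_ii * m_jj)

G_new = G.copy()
G_new[i, j] = G_new[j, i] = 1.0
gain = b_vec(G_new).sum() - b.sum()     # b(G + E_ij) - b(G)
print(gain, dL)
```

The two printed numbers agree to machine precision, matching the four-type walk decomposition above.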
## Appendix C Interventions in alternative network models

Our paper is applicable to many other network models in which the Katz-Bonacich centrality plays an important role in shaping the equilibrium. We list several models. A common theme is that the Katz-Bonacich centrality, as in Ballester et al. (2006), is a building block of the equilibrium. 1. 1. (Multiple activities) Chen et al. (2018b) consider a network model with multiple interdependent activities. In the network $\left(N,\mathbf{G}\right)$, each player $i$ can choose the levels of two activities $\left(a_{i}^{A},a_{i}^{B}\right)=\mathbf{a}_{i}$ with utility $\displaystyle u_{i}\left(\mathbf{a}_{i},\mathbf{a}_{-i}\right)$ $\displaystyle=$ $\displaystyle\theta_{i}^{A}a_{i}^{A}+\theta_{i}^{B}a_{i}^{B}-\left\\{\frac{1}{2}\left(a_{i}^{A}\right)^{2}+\frac{1}{2}\left(a_{i}^{B}\right)^{2}+\beta a_{i}^{A}a_{i}^{B}\right\\}+\delta\underset{j}{\sum}g_{ij}a_{i}^{A}a_{j}^{A}+\delta\underset{j}{\sum}g_{ij}a_{i}^{B}a_{j}^{B}\text{,}$ where $\boldsymbol{\theta}_{i}=\left(\theta_{i}^{A},\theta_{i}^{B}\right)$ is $i$’s vector of characteristics, $\frac{1}{2}\left(a_{i}^{A}\right)^{2}+\frac{1}{2}\left(a_{i}^{B}\right)^{2}+\beta a_{i}^{A}a_{i}^{B}$ is the cost of action $\mathbf{a}_{i}$, and $\delta\underset{j}{\sum}g_{ij}a_{i}^{A}a_{j}^{A}+\delta\underset{j}{\sum}g_{ij}a_{i}^{B}a_{j}^{B}$ captures the network externalities. Chen et al.
(2018b) show that if $0\leq\delta\leq\frac{1-|\beta|}{\lambda_{\max}\left(\mathbf{G}\right)}$, there exists a unique Nash equilibrium given by $\begin{bmatrix}\mathbf{x}^{A}\\\ \mathbf{x}^{B}\end{bmatrix}=\begin{bmatrix}\frac{1}{2\left(1+\beta\right)}\mathbf{b}\left(\mathbf{G},\boldsymbol{\theta}^{A}+\boldsymbol{\theta}^{B},\frac{\delta}{1+\beta}\right)+\frac{1}{2\left(1-\beta\right)}\mathbf{b}\left(\mathbf{G},\boldsymbol{\theta}^{A}-\boldsymbol{\theta}^{B},\frac{\delta}{1-\beta}\right)\\\ \frac{1}{2\left(1+\beta\right)}\mathbf{b}\left(\mathbf{G},\boldsymbol{\theta}^{A}+\boldsymbol{\theta}^{B},\frac{\delta}{1+\beta}\right)-\frac{1}{2\left(1-\beta\right)}\mathbf{b}\left(\mathbf{G},\boldsymbol{\theta}^{A}-\boldsymbol{\theta}^{B},\frac{\delta}{1-\beta}\right)\end{bmatrix}\text{,}$ where $\mathbf{b}\left(\mathbf{G},\boldsymbol{\theta},\frac{\delta}{1+\beta}\right)=\left(\mathbf{I}-\frac{\delta}{1+\beta}\mathbf{G}\right)^{-1}\boldsymbol{\theta}$. That is, the equilibrium profile is the weighted sum of two Katz-Bonacich centralities.

2. (Direct complements and indirect substitutes) Currarini et al. (2017) introduce a linear quadratic network game in which each agent faces peer effects from distance-one neighbors but also exhibits local congestion effects from distance-two neighbors. Specifically, the payoff of individual $i$ is given by $u_{i}\left(a_{i},\mathbf{a}_{-i}\right)=\theta_{i}a_{i}-\frac{1}{2}a_{i}^{2}+\delta\sum_{k=1}^{n}g_{ik}a_{i}a_{k}-\gamma\sum_{k=1}^{n}g_{ik}^{\left[2\right]}a_{i}a_{k}\text{,}$ where the last term $-\gamma\Sigma_{k=1}^{n}g_{ik}^{\left[2\right]}a_{i}a_{k}$ is new compared with Ballester et al. (2006) and captures the strategic substitution effect between players at distance two in the network. Note that $\gamma\geq 0$ and $g_{ik}^{\left[2\right]}$ is the $ik$-th element of matrix $\mathbf{G}^{2}$. Under some regularity assumptions on $\delta$ and $\gamma$, Currarini et al.
(2017) show that the unique equilibrium equals $\mathbf{x}^{\ast}=\left(\mathbf{I}-\delta\mathbf{G}+\gamma\mathbf{G}^{2}\right)^{-1}\boldsymbol{\theta}\text{.}$ In fact, we can rewrite the above equilibrium succinctly as a linear combination of two Katz-Bonacich centralities: $\mathbf{x}^{\ast}=\frac{\beta_{1}}{\beta_{1}-\beta_{2}}\mathbf{b}\left(\mathbf{G},\boldsymbol{\theta,}\beta_{1}\right)-\frac{\beta_{2}}{\beta_{1}-\beta_{2}}\mathbf{b}\left(\mathbf{G},\boldsymbol{\theta,}\beta_{2}\right)\text{,}$ (41) where $\beta_{1}=\frac{\delta+\sqrt{\delta^{2}-4\gamma}}{2}$ and $\beta_{2}=\frac{\delta-\sqrt{\delta^{2}-4\gamma}}{2}$. Note that $\beta_{1}$ and $\beta_{2}$ satisfy $\beta_{1}+\beta_{2}=\delta,\beta_{1}\beta_{2}=\gamma$. Equation (41) directly follows from the following mathematical identity: $\displaystyle\left(\mathbf{I}-\delta\mathbf{G}+\gamma\mathbf{G}^{2}\right)^{-1}$ $\displaystyle\boldsymbol{=}$ $\displaystyle\left(\mathbf{I}-\left(\beta_{1}+\beta_{2}\right)\mathbf{G}+\beta_{1}\beta_{2}\mathbf{G}^{2}\right)^{-1}=\frac{\beta_{1}}{\beta_{1}-\beta_{2}}\left(\mathbf{I-}\beta_{1}\mathbf{G}\right)^{-1}-\frac{\beta_{2}}{\beta_{1}-\beta_{2}}\left(\mathbf{I-}\beta_{2}\mathbf{G}\right)^{-1}\text{.}$ That is, the equilibrium in Currarini et al. (2017) equals the weighted sum of two Katz-Bonacich centralities.

3. (Local complementarity and global substitution) In addition to local complementarities, players might experience global competitive effects. In one extension, Ballester et al. (2006) consider the following utility function of individual $i$: $u_{i}\left(a_{i},\mathbf{a}_{-i}\right)=a_{i}-\frac{1}{2}a_{i}^{2}-\phi\Sigma_{k\neq i}a_{i}a_{k}+\delta\Sigma_{k=1}^{n}g_{ik}a_{i}a_{k}\text{,}$ where the term $\phi\Sigma_{k\neq i}a_{i}a_{k}$ is a global interaction effect that corresponds to substitutability in efforts across all players, and $\phi\geq 0$ measures the intensity of the global interdependence.
For simplicity, we assume that each player’s intrinsic marginal utilities are identical. Ballester et al. (2006) show that the equilibrium of this network game is $\mathbf{x}=\frac{1}{1-\phi+\phi b\left(\mathbf{G},\frac{\delta}{1-\phi}\right)}\mathbf{b}\left(\mathbf{G},\frac{\delta}{1-\phi}\right)\text{.}$ Each player’s equilibrium strategy is a function of Katz-Bonacich centralities. In particular, the aggregate equilibrium action, $\mathbf{1}^{\prime}\mathbf{x=}\frac{b\left(\mathbf{G},\frac{\delta}{1-\phi}\right)}{1-\phi+\phi b\left(\mathbf{G},\frac{\delta}{1-\phi}\right)}$, is a monotonic function of $b\left(\mathbf{G},\frac{\delta}{1-\phi}\right)$.

Since the equilibrium in each of these models is either a linear combination or a transformation of the Katz-Bonacich centralities, we can directly apply Proposition 1 to study the effects of structural and characteristic interventions, and study similar issues such as the key group and the key link problems.

## References

* Acemoglu et al. (2012) Acemoglu, D., V. M. Carvalho, A. Ozdaglar, and A. Tahbaz-Salehi (2012). The network origins of aggregate fluctuations. Econometrica 80(5), 1977–2016.
* Allouch (2017) Allouch, N. (2017). On the private provision of public goods on networks. Journal of Economic Theory 157, 527–552.
* Ballester et al. (2006) Ballester, C., A. Calvó-Armengol, and Y. Zenou (2006). Who’s who in networks. Wanted: The key player. Econometrica 74(5), 1403–1417.
* Ballester et al. (2010) Ballester, C., Y. Zenou, and A. Calvó-Armengol (2010). Delinquent networks. Journal of the European Economic Association 8(1), 34–61.
* Banerjee et al. (2013) Banerjee, A., A. G. Chandrasekhar, E. Duflo, and M. O. Jackson (2013). The diffusion of microfinance. Science 341(6144).
* Banerjee et al. (2018) Banerjee, A., A. G. Chandrasekhar, E. Duflo, and M. O. Jackson (2018). Changes in social network structure in response to exposure to formal credit markets. Available at SSRN 3245656.
* Baqaee (2018) Baqaee, D. R.
(2018). Cascading failures in production networks. Econometrica 86(5), 1819–1838.
* Belhaj et al. (2016) Belhaj, M., S. Bervoets, and F. Deroïan (2016). Efficient networks in games with local complementarities. Theoretical Economics 11(1), 357–380.
* Bloch et al. (2020) Bloch, F., M. O. Jackson, and P. Tebaldi (2020). Centrality measures in networks. Working paper.
* Bloch and Quérou (2013) Bloch, F. and N. Quérou (2013). Pricing in social networks. Games and Economic Behavior 80, 243–261.
* Bonacich (1987) Bonacich, P. (1987). Power and centrality: A family of measures. American Journal of Sociology 92(5), 1170–1182.
* Boyd and Vandenberghe (2004) Boyd, S. and L. Vandenberghe (2004). Convex optimization. Cambridge University Press.
* Bramoullé et al. (2016) Bramoullé, Y., A. Galeotti, and B. Rogers (2016). The Oxford handbook of the economics of networks. Oxford University Press.
* Bramoullé et al. (2016) Bramoullé, Y., A. Galeotti, B. Rogers, and Y. Zenou (2016). Key players.
* Bramoullé and Garance (2018) Bramoullé, Y. and G. Garance (2018). Diffusion centrality: Foundations and extensions. Working paper.
* Bramoullé and Kranton (2007) Bramoullé, Y. and R. Kranton (2007). Public goods in networks. Journal of Economic Theory 135(1), 478–494.
* Bramoullé et al. (2014) Bramoullé, Y., R. Kranton, and M. D’Amours (2014). Strategic interaction and networks. American Economic Review 104(3), 898–930.
* Cai and Szeidl (2018) Cai, J. and A. Szeidl (2018). Interfirm relationships and business performance. The Quarterly Journal of Economics 133(3), 1229–1282.
* Calvó-Armengol and Jackson (2004) Calvó-Armengol, A. and M. O. Jackson (2004). The effects of social networks on employment and inequality. American Economic Review 94(3), 426–454.
* Calvó-Armengol et al. (2009) Calvó-Armengol, A., E. Patacchini, and Y. Zenou (2009). Peer effects and social networks in education. The Review of Economic Studies 76(4), 1239–1267.
* Candogan et al. (2012) Candogan, O., K.
Bimpikis, and A. Ozdaglar (2012). Optimal pricing in networks with externalities. Operations Research 60(4), 883–905.
* Chen et al. (2018a) Chen, Y.-J., Y. Zenou, and J. Zhou (2018a). Competitive pricing strategies in social networks. The RAND Journal of Economics 49(3), 672–705.
* Chen et al. (2018b) Chen, Y.-J., Y. Zenou, and J. Zhou (2018b). Multiple activities in networks. American Economic Journal: Microeconomics 10(3), 34–85.
* Choi et al. (2019) Choi, S., S. Goyal, and F. Moisan (2019). Connectors and influencers.
* Currarini et al. (2017) Currarini, S., E. Fumagalli, and F. Panebianco (2017). Peer effects and local congestion in networks. Games and Economic Behavior 105, 40–58.
* David and Dina (2004) David, G. and M. Dina (2004). Using online conversations to study word-of-mouth communication. Marketing Science 23(4), 545–560.
* Demange (2017) Demange, G. (2017). Optimal targeting strategies in a network under complementarities. Games and Economic Behavior 105, 84–103.
* Elliott and Golub (2018) Elliott, M. and B. Golub (2018). A network approach to public goods. Journal of Political Economy, forthcoming.
* Elliott et al. (2019) Elliott, M. L., S. Goyal, and A. Teytelboym (2019). Networks and economic policy. Oxford Review of Economic Policy 35(4), 565–585.
* Galeotti et al. (2020) Galeotti, A., B. Golub, and S. Goyal (2020). Targeting interventions in networks. Econometrica 88(6), 2445–2471.
* Galeotti and Goyal (2010) Galeotti, A. and S. Goyal (2010). The law of the few. American Economic Review 100(4), 1468–1492.
* Golub and Lever (2010) Golub, B. and C. Lever (2010). The leverage of weak ties: how linking groups affects inequality. Working paper.
* Goyal and Moraga-González (2001) Goyal, S. and J. L. Moraga-González (2001). R&D networks. The RAND Journal of Economics, 686–707.
* Jackson et al. (2017) Jackson, M. O., B. W. Rogers, and Y. Zenou (2017). The economic consequences of social-network structure. Journal of Economic Literature 55(1), 49–95.
* Jerzy (2001) Jerzy, S. (2001). Delinquent Networks: Youth Co-Offending in Stockholm. Cambridge Studies in Criminology. Cambridge University Press.
* König et al. (2014) König, M. D., C. J. Tessone, and Y. Zenou (2014). Nestedness in networks: A theoretical model and some applications. Theoretical Economics 9(3), 695–752.
* Li (2020) Li, X. (2020). Designing weighted and directed networks under complementarities. Working paper at SSRN 3299331.
* Liu (2019) Liu, E. (2019). Industrial policies in production networks. The Quarterly Journal of Economics 134(4), 1883–1948.
* Mark (2002) Mark, W. (2002). Companions in Crime: The Social Aspects of Criminal Conduct. Cambridge Studies in Criminology. Cambridge University Press.
* Mas and Moretti (2009) Mas, A. and E. Moretti (2009). Peers at work. American Economic Review 99(1), 112–145.
* Patacchini and Zenou (2012) Patacchini, E. and Y. Zenou (2012). Juvenile delinquency and conformism. Journal of Law, Economics, and Organization 28(1), 1–31.
* Sacerdote (2001) Sacerdote, B. (2001). Peer effects with random assignment: Results for Dartmouth roommates. The Quarterly Journal of Economics 116(2), 681–704.
* Sun et al. (2021) Sun, Y., W. Zhao, and J. Zhou (2021). Building up efficient networks sequentially. Working paper.
* Verdier and Zenou (2015) Verdier, T. and Y. Zenou (2015). The role of cultural leaders in the transmission of preferences. Economics Letters 136, 158–161.
* Verdier and Zenou (2018) Verdier, T. and Y. Zenou (2018). Cultural leader and the dynamics of assimilation. Journal of Economic Theory 175, 374–414.
* Zhou and Chen (2015) Zhou, J. and Y.-J. Chen (2015). Key leaders in social networks. Journal of Economic Theory 157, 212–235.
# lrsarith: a small fixed/hybrid arithmetic C library

David Avis School of Informatics, Kyoto University, Kyoto, Japan and School of Computer Science, McGill University, Montréal, Québec, Canada

Charles Jordan Graduate School of Information Science and Technology, Hokkaido University, Sapporo, Japan

###### Abstract

We describe lrsarith, a small fixed precision and hybrid arithmetic C library for integers and rationals that we developed for use in the lrslib library for polyhedral computation. Using a generic set of operations, a program can be compiled with either 64-bit or 128-bit (if available) fixed precision, or with an extended precision library such as GMP or the built-in MP routines. A simple scheme checks for overflow and either terminates the program or, in hybrid mode, changes to a higher precision arithmetic. Implementing these arithmetics in lrslib resulted in only minimal changes to the original code. We give computational results using lrs and mplrs, vertex/facet enumeration codes in lrslib, using 64- and 128-bit fixed integer arithmetic with and without overflow checking, GMP arithmetic, lrsarith hybrid arithmetic with both GMP and MP, and FLINT hybrid arithmetic. We give a small self-contained example C program using the lrsarith package in both fixed precision and hybrid mode.

## 1 Introduction

When writing mathematical software, a fundamental choice is the arithmetic to use. It is easy to be seduced by the performance of native integers, but the possibility of incorrect output caused by overflow can be a major concern. To guarantee correctness, in many cases one must rely on multiprecision arithmetic such as that provided by the GNU Multiple Precision (GMP) Arithmetic Library. This comes with a large performance penalty compared to native integers on inputs where the latter suffice, and it is tempting to maintain multiple versions with different arithmetic.
Until version 7, this was the situation with lrslib, a library of programs for polyhedral computation including e.g. lrs for vertex/facet enumeration and redund for redundancy removal. From the outset lrslib used exact arithmetic and could be compiled with either fixed precision C integer arithmetic (without overflow checking) or with an internal extended precision arithmetic library. For safety, the default was to use extended precision. Later versions could also be compiled with the GMP package and more recently, with the hybrid arithmetic package available in FLINT. The fixed precision versions perform several times faster than the extended precision versions but can produce incorrect results if overflow occurs. For this reason we developed a simple scheme to detect the possibility of overflow occurring while using fixed precision arithmetic. The code could then halt and allow the user to restart using higher or extended precision arithmetic. This led to the development of a hybrid scheme in lrslib to allow automatic restart with the next higher precision arithmetic. This gives users the performance of native arithmetic in most cases where it is safe, while keeping the correctness provided by the extended precision versions. The restart is from (very near) the point where a possible overflow was detected. Implementing this in lrslib resulted in only minimal changes to the original code and gives a significant performance improvement. The main purpose of this paper is to explain how overflow checking and hybrid arithmetic was implemented and to give numerical results for a case study involving vertex/facet enumeration. This explains the improved performance of lrslib 7 compared to previous versions. These techniques can be used in any C code without requiring the full lrslib library, so we created the much smaller, independent lrsarith package which contains only the arithmetic code. It can be downloaded from [1]. 
Implementing arithmetic in lrsarith allows developers to postpone the choice of arithmetic until compile time. With some additional effort to handle overflows and restart from intermediate points of the computation, it also allows the developer to obtain the performance of native integers when they suffice and the correctness of an extended precision library when required.

### 1.1 Related work

When comparing various codes for vertex/facet enumeration [2], we noted that the hybrid arithmetic available in normaliz could give a large speedup for certain inputs. The default in normaliz is to begin the computation in 64-bit integers and to restart using GMP extended precision if overflow occurs. However, the restart is from the beginning of the computation, which can hurt performance on certain inputs where the repeated work is significant (see the comparison in [2]). The Fast Library for Number Theory (FLINT) [5] is a collection of modules that support various arithmetic operations. In particular, we use the fmpz module for extended precision arithmetic that balances “very efficient” performance on small integers with performance similar to GMP for large integers. We will compare lrsarith performance with FLINT in Section 4.4. Although FLINT is easier to develop with since switching arithmetic is transparent to the user, lrsarith can give higher performance on small integers. FLINT contains much additional functionality beyond extended precision arithmetic, but usually requires the user to install it. Porta [4] is a collection of routines related to polytopes, including vertex/facet enumeration. It supports both fixed precision and its own multiprecision arithmetic and was included in the comparison [2] mentioned earlier. The standard general purpose library for extended precision arithmetic with large integers is GMP (https://gmplib.org/). It is highly optimized for many platforms and was the default in lrslib for some time.
Historically, lrslib used its own multiprecision arithmetic (MP) and this is still supported. Comparisons are given in Section 4.4.

### 1.2 Organization of the paper

The paper contains two parts that can essentially be read independently. The first part gives a detailed explanation of how we implement the various arithmetics, with a simple but complete example program that can be used as a template for developing with lrsarith. In Section 2 we show how overflow checking is implemented in our fixed precision lrslong library. We give a simple example to show how a single C code can be compiled with 64-bit, 128-bit, MP or GMP arithmetic. In the first three cases the program terminates when the possibility of overflow is detected. In Section 3 we continue the example to show how a program can restart after overflow has been detected, using the next level of arithmetic. This form of hybrid arithmetic is used in lrslib. Computational results are presented in both sections for a larger example involving Collatz sequences. The second part of the paper begins in Section 4, where we present our case study and give computational results comparing various arithmetic packages that can be used in lrslib. These include fixed integer arithmetic with and without overflow checking, GMP arithmetic, two versions of lrsarith hybrid arithmetic and FLINT hybrid arithmetic.

## 2 Fixed arithmetic

### 2.1 Definitions and overflow handling

Arithmetic is handled in lrsarith by defining a generic integer type and a set of generic operations. A generic integer a, integer vector v and integer matrix A are defined

lrs_mp a; lrs_mp_vector v; lrs_mp_matrix A;

allocated

lrs_alloc_mp(a); v=lrs_alloc_mp_vector(n); A=lrs_alloc_mp_matrix(m,n);

and freed

lrs_clear_mp(a); lrs_clear_mp_vector(v,n); lrs_clear_mp_matrix(A,m,n);

where m and n are the dimensions. The types are assigned at compile time depending on the arithmetic used.
For 64-bit and 128-bit integers they are assigned to fixed length integers, and for GMP arithmetic to the GMP integer type:

typedef long long lrs_mp[1]; /* one long integer */
typedef __int128 lrs_mp[1]; /* one 128-bit integer */
typedef long long lrs_mp[MAX_DIGITS + 1]; /* one MP integer */
typedef mpz_t lrs_mp; /* one GMP integer */

Operations using lrs_mp integers are written generically. Typical examples are as follows, where a,b,c,d,e are lrs_mp integers, i is a long long and the equivalent C code is given in parentheses:

itomp(i,a); (a=i)
copy(b,a); (b=a)
addint(a,b,c); (c=a+b)
mulint(a,b,c); (c=a*b)
divint(a,b,c); (c=a/b, a=a%b)
linint(a, ka, b, kb); (a=a*ka+b*kb for ka, kb long long integers)
qpiv(a,b,c,d,e); (a=(a*b-c*d)/e where the division is exact)

A small C program, fixed.c, using some of these operations is given in Appendix A. It reads in an integer $k$ and attempts to repeatedly square it six times. We will discuss the program later in this section. Generic operations are either assigned at compile time by macros, for example:

#define addint(a,b,c) *(c) = *(a) + *(b) /* 64-bit or 128-bit */
#define addint(a,b,c) mpz_add((c),(a),(b)) /* GMP */

or by C code in the case of MP. In this way arithmetic operations are essentially the same as using the underlying arithmetic package. The problem is that overflow is not detected in 64-bit and 128-bit arithmetic. We solve this problem by a technique we call lazy overflow handling. Using very light extra computation we detect when it is possible that an overflow may occur. Control is then given to an overflow handler that either halts computation or restarts the program using arithmetic with higher precision. While restarting from the beginning of execution is simplest, later we will see options for restarting from intermediate checkpoints – allowing programs to use faster arithmetic for initial portions of computations in some cases.
To incorporate lazy overflow handling we define some constants that depend on the word size $W$, which is either 64 or 128 in lrsarith: $\textsf{MAXDm}=\left\lfloor\sqrt{2^{W-1}-1}\right\rfloor~{}~{}~{}\textsf{MAXDl}=2^{W/2-1}-1~{}~{}~{}\textsf{MAXDa}=2^{W-2}-1$ (1)

MAXDa is the constant used in testing for overflow when using addition or similar operations, MAXDl is used for linint and MAXDm is used for multiplying. Three macros test whether the operands a and b are out of bounds:

#define mpsafem(a,b) *(a) > MAXDm || *(b) > MAXDm || *(a) < -MAXDm || *(b) < -MAXDm
#define mpsafel(a,b) *(a) > MAXDl || *(b) > MAXDl || *(a) < -MAXDl || *(b) < -MAXDl
#define mpsafea(a,b) *(a) > MAXDa || *(b) > MAXDa || *(a) < -MAXDa || *(b) < -MAXDa

Using these macros we can write macros for the generic operations above as follows:

#define addint(a,b,c) {if(mpsafea(a,b)) lrs_overflow(1); else *(c) = *(a) + *(b);}
#define mulint(a,b,c) {if(mpsafem(a,b)) lrs_overflow(1); else *(c) = *(a) * *(b);}
#define divint(a,b,c) {*(c) = *(a) / *(b); *(a) = *(a) % *(b);}
#define linint(a,ka,b,kb) {if(mpsafel(a,b)) lrs_overflow(1); else *(a)=*(a)*ka+*(b)*kb;}
#define qpiv(a,b,c,d,e) {if(mpsafel(a,b) || mpsafel(c,d)) lrs_overflow(1); else *(a) = (*(a) * *(b) - *(c) * *(d))/(*e);}

We claim that if the integer arithmetic would overflow then lrs_overflow is called. Since we are using signed integer arithmetic this is equivalent to proving that the result (and any intermediate values) in each case is at most $2^{W-1}-1$ in absolute value. Proceeding case by case:

* addint($a,b,c$): $|a|,|b|\leq\textsf{MAXDa}$ and so $|a+b|\leq 2(2^{W-2}-1)=2^{W-1}-2$;
* mulint($a,b,c$): $|a|,|b|\leq\textsf{MAXDm}$ and so $|ab|\leq\left(\sqrt{2^{W-1}-1}\right)^{2}=2^{W-1}-1$;
* linint($a,ka,b,kb$): $|a|,|b|,|ka|,|kb|\leq\textsf{MAXDl}$ and so $|a\,ka+b\,kb|\leq 2\,(2^{W/2-1}-1)^{2}=2^{W-1}-2^{W/2+1}+2$.

Note that divint cannot overflow and that the analysis for qpiv is thus essentially the same as for linint, proving the claim.
Rational arithmetic is handled in lrsarith in a very simple way by representing the rational number $a/b$ by two lrs_mp integers a and b. The operation reduce(a,b) divides a and b by their greatest common divisor. Arithmetic operations for rationals are based on the integer operations above. For example mulrat(a,b,c,d,e,f) (e/f = a/b * c/d, with e/f reduced) is implemented by

mulint(a,c,e); mulint(b,d,f); reduce(e,f);

Overflow checking for rational arithmetic is inherited from the corresponding integer arithmetic.

We now look more closely at the code fixed.c for iterated squaring given in Listing 1 of Appendix A. Among the includes on lines 1–3 we have the lrsarith header file. On lines 7–9 is the function lrs_overflow that handles overflow processing. In this case it simply prints a message and halts the program. In the main routine we see that two lrs_mp variables are declared. These are allocated and cleared on lines 16 and 26. These are null operations for 64- or 128-bit arithmetic and MP, but are needed in GMP. On line 15 we initialize the arithmetic package. The next lines read an integer $k$ and attempt to iteratively square it six times. Using the makefile in Listing 2 we can compile fixed.c with each of the arithmetic packages, depending on the compiler switches used. For 64- or 128-bit arithmetic the -DLRSLONG switch is set, with -DB128 also set for 128-bit arithmetic. The -DSAFE switch enables overflow checking and handling as described in this section. The files lrsarith.h and lrsarith.c use these switches to ensure that the correct arithmetic files are included. Lines 2 and 3 produce the binaries fixed1 and fixed2. For illustrative purposes we also compile without the -DSAFE switch, obtaining fixed1n and fixed2n in lines 4 and 5. In this case no overflow checking is performed. With the -DMP switch set the MP version is produced. Finally, by setting the -DGMP switch we compile with the external GMP library, which must be preinstalled.
For simplicity, we assume that the necessary files are in standard locations; otherwise the locations need to be specified. We ran fixed1, fixed2, fixedmp and fixedgmp with the input $k=5$, getting the following output, respectively:

5 25 625 390625 152587890625 overflow detected:halting

5 25 625 390625 152587890625 23283064365386962890625 overflow detected:halting

5 25 625 390625 152587890625 23283064365386962890625 542101086242752217003726400434970855712890625

5 25 625 390625 152587890625 23283064365386962890625 542101086242752217003726400434970855712890625

fixed1 can compute $5^{16}$ before overflowing and fixed2 can also compute $5^{32}$. Both fixedmp and fixedgmp can compute $5^{64}$. Running fixed1n and fixed2n with no overflow protection produces incorrect output. This can be observed by noting the three cases where the last digit is not 5:

5 25 625 390625 152587890625 3273344365508751233 7942358959831785217

5 25 625 390625 152587890625 23283064365386962890625 -30240059481632067979667719627811971327

### 2.2 Fixed precision performance

We briefly compare performance of the various fixed arithmetic packages in order to evaluate the overhead used by overflow checking. While this can also be seen in the more extended comparison done in Section 4, here we use a simple code to further demonstrate the use of lrsarith. We consider a problem that uses a great deal of relatively simple computations and little output. The Collatz conjecture [6, 7] is a famous open problem in number theory: given a number $k$, we replace $k$ by $3k+1$ if $k$ is odd and by $k/2$ if it is even. The process continues in the same way and the question is whether, for every starting value, the sequence eventually reaches $1$. There is a great deal of work [6, 7] on the conjecture and it has been verified [3] for all $k<2^{68}$ (see also David Barina’s page on the current status: https://pcbarina.fit.vutbr.cz/).
Our goal is to compare arithmetic packages and not to computationally verify larger values, so we avoid techniques such as huge lookup tables for simplicity. We consider the Collatz sequence from $k$ as a path from the integer $k$ to the value one. The set of such paths defines an infinite tree with root $k=1$, and the conjecture holds if all integers appear in the tree. One way to make the tree finite is to give a parameter $\mathit{max}$ and consider the tree of all paths to the root that do not contain an integer greater than $\mathit{max}$. This finite tree can be generated from the root, without using memory to store its nodes, using the reverse search procedure. We wrote a code coll (contained in lrsarith-010) to generate the tree in this way. As long as $\mathit{max}<2^{63}/3$ the program will not overflow in 64-bit arithmetic. Table 1 gives the running times (on _mai48_: 2x AMD EPYC7552 2.3GHz, 48 cores, 512 GB memory, CentOS 7.6, gcc 4.8.5) when the various arithmetic packages are compiled with coll. The binary collflint uses FLINT arithmetic (version 2.6.3) and the other binaries are labelled in the same way as for fixed.c above.

$\mathit{max}$ | nodes | coll1 | coll1n | coll2 | coll2n | collgmp | collflint | collmp
---|---|---|---|---|---|---|---|---
$10^{8}$ | 39523168 | 0.70 | 0.60 | 5.20 | 4.63 | 9.20 | 7.24 | 7.19
$10^{9}$ | 395436300 | 6.50 | 5.79 | 54.37 | 48.40 | 91.87 | 72.26 | 72.35
$10^{10}$ | 3953296865 | 82.28 | 57.26 | 570.26 | 511.66 | 923.77 | 866.20 | 864.84

Table 1: Collatz tree generation, times in seconds (_mai48_)

The values in Table 1 are small enough to avoid overflows, but one can see that overflow checking in lrslong results in only minor overhead. The relative performance of the packages will vary between machines, mix of arithmetic operations, compilers and versions of extended precision libraries; however, these results give a sample of what can be obtained.
It is also worth noting that the multiprecision arithmetics are generally focused more on larger operands. In any case, for this computation on this machine, 128-bit arithmetic is roughly eight times slower than 64-bit arithmetic. FLINT arithmetic is somewhat faster than GMP (version 6.0) but slower than the dedicated fixed precision arithmetics. This is generally reasonable – FLINT makes more effort to obtain good performance on small operands; however, lrsarith fixed arithmetic is essentially the same as using native arithmetic.

## 3 Hybrid arithmetic

The fixed arithmetic versions described in the previous section are very simple to implement and easy to use, giving good performance for inputs that do not require very long integers. If overflow occurs the user can manually rerun the job with a higher precision arithmetic package. However, it is certainly more convenient to automatically switch from one arithmetic to another with higher precision when overflow could occur. In this section we describe how this can be done using lrsarith, while making only minor modifications to the original user code.

### 3.1 Combining fixed arithmetic packages

We illustrate our approach using the example program of the previous section that reads an integer $k$ and successively squares it. The hybrid code is shown in Appendix B. Lines 9–29 contain the function run, which is essentially the function main from Appendix A. Instead of reading the integer $k$, it is passed to run as a parameter. The major change is the use of setjmp to allow lrs_overflow to return control to the program that calls run. This is achieved via the variable buf1 defined on line 7. The main loop (lines 18–25) of run is enclosed by a test on line 17. If lrs_overflow (lines 31–33) is called by the arithmetic package, a long jump is performed to the statement immediately after the end of the enclosed loop, i.e. line 26. This prints a warning and returns to the routine that called run.
In fact there are three routines that call run, namely run1, run2, and runmp (GMP or MP), but only the first two can trigger a call to lrs_overflow. The function run itself will need to be compiled three times, once for each package. The arithmetic package used and the routine to call run are selected at compile time by compiler directives in the makefile and on lines 35–43, respectively. The problem now arises that since run will be compiled three times, the three versions will need different names, a process known as name mangling. In fact all of the routines that use lrs_mp will also need name mangling. The particular scheme used in lrsarith was suggested by David Bremner and is illustrated in line 5. A unique suffix suf, defined in the arithmetic header file used, is added to the function run. Such #define lines will be needed for each user-supplied routine. The #define for lrs_overflow is contained in lrslong.h. Listing 5 contains the main program, with its header file given in Listing 4. It begins by reading the parameter $k$ and calling run1, which uses 64-bit arithmetic. If overflow occurs, a return code of 1 is triggered by lrs_overflow, causing main to call run2 using 128-bit arithmetic. If this in turn overflows, a final call is made to runmp, which uses GMP. (For simplicity we have omitted the lines which produce a hybrid executable with MP, but they are in the lrsarith makefile.) In the makefile given in Listing 6, lines 4–6 compile the arithmetic libraries. Lines 7–9 compile hybridlib.c three times, once with each arithmetic library. Finally line 11 combines everything into a single executable hybrid.
Running hybrid with $k=5$ produces:

5 25 625 390625 152587890625
overflow detected:restarting
5 25 625 390625 152587890625 23283064365386962890625
overflow detected:restarting
5 25 625 390625 152587890625 23283064365386962890625 542101086242752217003726400434970855712890625

### 3.2 Hybrid performance

We continue the example of generating Collatz trees introduced in Section 2.2. This time we use much larger values of $\mathit{max}$ in order to create overflow conditions in both 64-bit and 128-bit arithmetic. As the trees now become extremely large, we terminate the programs after $10^{8}$ nodes have been generated. The program is deterministic, so in each case the same nodes are traversed. The program coll is the hybrid code combining coll1, coll2 and collgmp, created in the same way as hybrid described above. The results are shown in Table 2. On each line the final arithmetic used in coll is shown in blue. It is interesting to observe that for each of these three runs collgmp runs in roughly the same time and that, for $\mathit{max}=10^{32}$, this is faster than the 128-bit arithmetic coll2 (and hence coll).

$\mathit{max}$ | coll (hybrid) | coll1 | coll2 | collgmp | collmp | collflint (hybrid)
---|---|---|---|---|---|---
$10^{16}$ | 1.67 | 1.70 | 18.60 | 24.43 | 22.93 | 24.85
$10^{32}$ | 36.21 | (o) | 36.17 | 24.15 | 34.10 | 34.04
$10^{48}$ | 25.59 | (o) | (o) | 24.66 | 45.83 | 45.97

(o) = possible overflow detected

Table 2: Collatz tree generation: $10^{8}$ nodes, times in seconds (_mai48_)

The simple scheme described in this section will be adequate in many applications but has a number of disadvantages. First, memory allocated during the run for a given arithmetic may not be freed after an overflow occurs, causing a memory leak. More importantly, the simple scheme restarts from the very beginning of program execution; this forgetting of the past can hurt performance if overflow occurs near the end of program execution.
It would be much better not to redo work that has already been done, especially with slower arithmetic. In applications such as lrslib, these are serious issues. The definition of lrs_mp depends on the arithmetic, so offsets into structures containing lrs_mp can change after switching arithmetic. Handling overflows appropriately is therefore more complicated than simply forgetting the past and restarting, as was described here. The solution used in lrslib was to use a global variable that points to the allocated data structures. This allows lrs_overflow to free the data after overflow is detected, before switching to the next arithmetic. A further improvement in lrslib was to allow the program to restart at the point in the calculation where the overflow was detected. Again this was done using a global variable, together with the ability of the original code to restart, which fortunately was already available. In other programs it may be necessary to add periodic checkpoints, and it can be helpful to separate structures that contain lrs_mp from those that do not.

## 4 Vertex/facet enumeration problems

When comparing various codes for vertex/facet enumeration [2], we noted that hybrid arithmetic could give a large speedup for certain inputs, but this was not available in lrslib at the time. We now turn our attention to the original motivation for developing the hybrid version of lrsarith: obtaining these speedups in lrslib v. 7. We begin by introducing the basics of vertex and facet enumeration. Then we explain the parallel hybrid version mplrs, since it handles overflow somewhat differently than hybrid lrs. In Section 4.3 we describe the experimental setup and in Section 4.4 we present a comparison between the different arithmetic packages as used in lrs and mplrs.
### 4.1 Background

A convex polyhedron $P$ can be represented by either a list of vertices and extreme rays, called a V-representation, or a list of its facet-defining inequalities, called an H-representation. The vertex enumeration problem is to convert an H-representation to a V-representation. The computationally equivalent facet enumeration problem performs the reverse transformation. For further background see Ziegler [8]. We will use the lrs program in lrslib to experiment with various instances of these problems. A comparison with other codes, including mplrs, a parallel version of lrs, to solve these problems can be found in [2].

The input is represented by an $m$ by $n$ matrix. For a vertex enumeration problem this is a list of $m$ inequalities in $n-1$ variables whose intersection defines $P$. A vertex (resp. extreme ray) is the intersection of a set of $n-1$ (resp. $n-2$) input inequalities, taken as equations, that satisfies the remaining inequalities. A major difficulty is caused by degeneracy, which occurs when more than $n-1$ inequalities intersect at a vertex. In this case lrs generates multiple representations of the vertex, known as bases. For a facet enumeration problem the input is a list of the vertices of $P$, each beginning with a 1 in column one (extreme rays would be indicated by a zero in column one). A facet is defined by a set of $n-1$ input vertices that span a hyperplane for which all of the other input vertices lie on one side. Here degeneracy is manifested when more than $n-1$ vertices lie on the hyperplane. Again lrs will generate the facet multiple times, each known as a basis. When degeneracy occurs, the vertex or facet is output when the index-wise lexicographically minimum basis is found.

### 4.2 Hybrid mplrs

On detecting a possible overflow, the hybrid version of lrs switches to the next highest precision arithmetic package, which it uses from that point onwards.
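Before continuing, the Section 4.1 definitions can be made concrete with a toy brute-force sketch (this is not lrs's reverse search algorithm, and the names are ours): with $n-1=2$ variables, every vertex is the intersection of two inequalities, taken as equations, that satisfies the rest, and degeneracy means several such pairs produce the same vertex.

```c
#include <math.h>

/* Naive illustration (NOT lrs's reverse-search algorithm) of vertex
 * enumeration in two variables.  Inequalities are a*x + b*y <= c. */

typedef struct { double a, b, c; } Ineq;

/* Enumerate vertices of the polygon defined by q[0..m-1]; store them in
 * vx[]/vy[] and return how many distinct vertices were found (multiple
 * representations arising from degeneracy are merged). */
int enum_vertices(const Ineq *q, int m, double *vx, double *vy) {
    int found = 0;
    for (int i = 0; i < m; i++)
        for (int j = i + 1; j < m; j++) {
            /* solve the 2x2 system q[i], q[j] taken as equations */
            double det = q[i].a * q[j].b - q[j].a * q[i].b;
            if (fabs(det) < 1e-12) continue;        /* parallel lines */
            double x = (q[i].c * q[j].b - q[j].c * q[i].b) / det;
            double y = (q[i].a * q[j].c - q[j].a * q[i].c) / det;
            int feas = 1;                 /* check remaining inequalities */
            for (int k = 0; k < m; k++)
                if (q[k].a * x + q[k].b * y > q[k].c + 1e-9) feas = 0;
            if (!feas) continue;
            int dup = 0;                  /* degenerate vertex seen before? */
            for (int k = 0; k < found; k++)
                if (fabs(vx[k] - x) + fabs(vy[k] - y) < 1e-9) dup = 1;
            if (!dup) { vx[found] = x; vy[found] = y; found++; }
        }
    return found;
}
```

For the unit square $\{x\ge 0,\,y\ge 0,\,x\le 1,\,y\le 1\}$ this finds the four corners; adding the redundant inequality $x+y\le 2$, which passes through $(1,1)$, makes that vertex degenerate — three pairs of inequalities now produce it — yet the vertex count is unchanged, mirroring how lrs generates a degenerate vertex once per basis but outputs it only once.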
The sequential implementation was parallelized as mplrs [2], which dynamically partitions the work and provides a series of subproblems to multiple lrs workers. If each of these workers were a hybrid lrs process, it would start each subproblem with the fastest arithmetic and switch to higher precision on detecting a possible overflow. This has a few drawbacks:

1. Duplicate output: it is possible that when restarting after an overflow, we reprint the most recent line of output.
2. Performance: if overflow occurs, it usually occurs often in the run (i.e., not infrequently). There is overhead in restarting and switching arithmetic, and so we would prefer to avoid frequent switches.

In the hybrid version of mplrs, each of the workers is a fixed arithmetic process that returns to mplrs when possible overflow is detected. At that point, mplrs re-initializes the worker in question using the next available arithmetic. All output produced by the subproblem is discarded and the worker restarts from the beginning (of the subproblem). The worker uses the new arithmetic from that point onwards. This approach avoids duplicate output: by default, hybrid mplrs holds output produced by the worker if the arithmetic could overflow, flushing it only when the job ends. It also resolves the second problem: each worker can only overflow and switch arithmetic twice in the overall run – the same as hybrid lrs. In addition, if overflow is a rare event then it is possible for only some workers to overflow. This makes larger speedups possible than with hybrid lrs.

There is overhead in two areas. First, when overflow is detected the worker restarts its job. This overhead is limited because jobs are usually very small (see, e.g., Figure 3(c,d) in [2]) and workers can overflow at most twice during the run. Next, since mplrs traverses different parts of the reverse search tree in a different order compared to lrs, it is possible for overflow to occur earlier than it would in lrs.
This could hurt performance if lrs overflows only at the end, but could also help performance by avoiding early lrs overflows. One complication is that for volume computation, mplrs internally uses the maximum precision arithmetic available. This means that mplrs may not agree with the worker process on arithmetic. Communication between the worker process and mplrs therefore avoids using lrs_mp integers. After checkpointing and restarting, hybrid mplrs begins again with the fastest arithmetic available. Checkpoint files are compatible between the various arithmetic packages.

### 4.3 Experimental setup

The polytopes we tested are described in Table 3 and, except for two new problems, were previously described and used in [2]. The problems range from non-degenerate to highly degenerate polyhedra. This table includes the results of an lrs run on each polytope, as lrs gives the number of bases in a symbolic perturbation of the polytope. The new problems are:

* _cp7_ is the cut polytope for 7 points, which has input coefficients 0 or 1 and is highly degenerate.
* _p8-6_ is related to the holographic cone studied in physics and is an extension of the eight point cut polytope. Input coordinates are 0, 1 or -1 and it is the most degenerate problem studied.

We include a column labelled degeneracy, which is the number of bases divided by the number of vertices (or facets) output, rounded to the nearest integer. We have sorted the table in order of increasing degeneracy. The horizontal line separates the non-degenerate from the degenerate problems. The corresponding input files are available by following the download link at [1]. Note that the input sizes are small, roughly comparable, and much smaller than the output sizes.
Name | H/V | $m$ | $n$ | input size | V/H | output size | bases | secs | depth | degeneracy
---|---|---|---|---|---|---|---|---|---|---
_c30_ | V | 30 | 16 | 4.7K | 341088 | 73.8M | 319770 | 36 | 14 | 1
_c40_ | V | 40 | 21 | 12K | 40060020 | 15.6G | 20030010 | 7531 | 19 | 1
_km22_ | H | 44 | 23 | 4.8K | 4194304 | 1.2G | 4194304 | 234 | 22 | 1
_perm10_ | H | 1023 | 11 | 29K | 3628800 | 127M | 3628800 | 283 | 45 | 1
_vf500_ | V | 500 | 7 | 98K | 56669 | 38M | 202985 | 137 | 41 | 4
_vf900_ | V | 900 | 7 | 20K | 55903 | 3.9M | 264385 | 23 | 45 | 5
_mit71_ | H | 71 | 61 | 9.5K | 3149579 | 1.1G | 57613364 | 15474 | 20 | 18
_fq48_ | H | 48 | 19 | 2.1K | 119184 | 8.7M | 7843390 | 44 | 24 | 66
_mit_ | H | 729 | 9 | 21K | 4862 | 196K | 1375608 | 132 | 101 | 283
_cp7_ | V | 64 | 22 | 3K | 116764 | 7.4M | 308644212 | 4071 | 41 | 2643
_bv7_ | H | 69 | 57 | 8.1K | 5040 | 867K | 84707280 | 1256 | 17 | 16807
_p8-6_ | H | 154 | 92 | 36K | 4452 | 1.2M | 110640628 | 16152 | 63 | 24852

Table 3: Polytopes tested and lrs v. 7.1 times (_mai48_). The columns H/V, $m$, $n$ and input size describe the input; V/H and output size describe the output; bases, secs and depth come from the lrs run.

### 4.4 Comparison of arithmetic packages

In this section we give numerical results on using lrslib to solve the vertex enumeration problems described in the previous section with the various arithmetic packages in lrsarith. We test the following suite of codes, which, apart from the exception noted below, are available in lrslib v. 7.1:

* lrs1: 64-bit fixed arithmetic with overflow checking.
* lrs2: 128-bit fixed arithmetic with overflow checking.
* lrsgmp: GMP arithmetic (version 6.0). Comparable to default lrs v. 6.2 and earlier.
* lrs: lrsarith hybrid arithmetic. Starts with lrs1 and switches to lrs2 (if available) and finally lrsgmp. Default version of lrs.
* lrsMP (available in v. 7.2 or on request): As lrs except that in the final step the internal MP arithmetic is used.
* lrsflint: FLINT hybrid arithmetic (version 2.6.3).
* mplrsgmp: Multi-core version of lrsgmp, comparable to default mplrs v. 6.2 and earlier.
* mplrs: Hybrid multi-core version of lrs. Default version of mplrs.

Also available in lrslib v. 7.1 are the parallel versions mplrs1, mplrs2, and mplrsflint of the corresponding lrs codes above.

Name | lrs1 | lrs2 | lrsgmp | lrs | lrsMP | lrsflint | mplrsgmp | mplrs
---|---|---|---|---|---|---|---|---
_c30_ | (o) | (o) | 36 | 36 | 295 | 42 | 2 | 3
_c40_ | (o) | (o) | 7707 | 7581 | 97532 | 8341 | 402 | 389
_km22_ | (o) | (o) | 237 | 234 | 384 | 187 | 8 | 8
_perm10_ | 288 | 538 | 2429 | 283 | 284 | 1120 | 113 | 16
_vf500_ | (o) | (o) | 139 | 137 | 1658 | 163 | 12 | 11
_vf900_ | (o) | 23 | 103 | 23 | 23 | 170 | 8 | 2
_mit71_ | (o) | (o) | 15489 | 15474 | 110467 | 26432 | 731 | 723
_fq48_ | 44 | 77 | 265 | 44 | 44 | 147 | 11 | 1
_mit_ | (o) | 134 | 563 | 132 | 131 | 270 | 27 | 7
_cp7_ | 4105 | 6824 | 25030 | 4071 | 4039 | 14413 | 1141 | 191
_bv7_ | 1276 | 2346 | 7981 | 1256 | 1252 | 4874 | 357 | 65
_p8-6_ | 16197 | 23271 | 56519 | 16152 | 15970 | 45838 | 2395 | 661

(o) = possible overflow detected. lrs1, lrs2 and lrsgmp use a single arithmetic; lrs, lrsMP and lrsflint are hybrid; mplrsgmp and mplrs ran on 40 cores.

Table 4: Comparison of running times for various types of arithmetic, lrslib v. 7.1 (_mai48_)

The results of the tests are shown in Table 4. The programs lrs1 and lrs2 either produce the correct output or indicate that an overflow may occur (o) and terminate. The columns lrs and lrsMP give the running times for the hybrid versions. When 64- or 128-bit arithmetic suffices, these two running times are essentially the same. When extended precision is required, the internal MP arithmetic may be much slower than GMP, especially for _c40_, which requires 459 decimal digits. The final two columns show the speedups obtained by using the multicore versions mplrs and mplrsgmp with 40 processors available. In the table we show in blue which arithmetic version was in use when lrs, shown in red, terminated. As is to be expected, lrs performs roughly the same as lrsgmp when the integers become too large for fixed precision.
Speedups of 2–4 times are observed for the combinatorial problems, which can be solved using only 64 or 128 bits. In general lrsflint did not perform as well as lrs, but it is usually faster than lrsgmp on the combinatorial problems. The approach used in lrsarith hybrid arithmetic allows it to obtain essentially native integer performance, but requires effort from the developer to handle overflow. FLINT is easier for the developer but did not achieve the same performance when all values are small. Another possible reason for the lack of performance is that we did not use FLINT matrices, as this would have required substantial reprogramming of lrs. We can observe that 128-bit arithmetic runs roughly 1.5–2 times slower than 64-bit arithmetic when 64-bit arithmetic suffices. Hence there is a strong incentive to start the computation with 64 bits, switching to 128 bits only when necessary. This latter outcome (a 64-bit overflow followed by a successful 128-bit run) occurred for _mit_ and _vf900_.

## 5 Conclusion

We introduced lrsarith, a small C library for performing fixed precision arithmetic on integers and rationals with overflow protection that also allows hybrid arithmetic. It was developed as part of the lrslib polyhedral computation package; however, due to its small size and ease of use, we decided to release it as an independent package. The details of the method used and small examples were given in the first part of the paper.

In the second part we gave computational results for some vertex enumeration problems. The examples show the diversity of such problems, and this is reflected in the arithmetic precision required to solve them. Many combinatorial polytopes involve calculations that can be completed without overflow using 64 (or 128) bit integers. Using fixed arithmetic with overflow checking or hybrid arithmetic, speedups of 2–4 times can be achieved, and these carry over to parallel implementations.
The new hybrid version described here explains the performance improvements found in lrslib version 7 relative to previous versions and earlier comparisons [2].

## Acknowledgements

We thank David Bremner for many helpful discussions and in particular for the elegant implementation of name mangling. This work was partially supported by JSPS Kakenhi Grants 16H02785, 18H05291, 18K18027, 20H00579 and 20H00595.

## References

* [1] D. Avis. http://cgm.cs.mcgill.ca/~avis/C/lrs.html.
* [2] David Avis and Charles Jordan. mplrs: A scalable parallel vertex/facet enumeration code. Mathematical Programming Computation, 10(2):267–302, 2018.
* [3] David Barina. Convergence verification of the Collatz problem. The Journal of Supercomputing, 2020.
* [4] T. Christof and A. Loebel. http://porta.zib.de.
* [5] William B. Hart. Fast library for number theory: an introduction. In Mathematical Software – ICMS 2010, volume 6327 of Lecture Notes in Computer Science, pages 88–91, 2010.
* [6] Jeffrey C. Lagarias. The $3x+1$ problem: An annotated bibliography (1963–1999) (sorted by author). arXiv:math/0309224, 2003.
* [7] Jeffrey C. Lagarias. The $3x+1$ problem: An annotated bibliography, II (2000–2009). arXiv:math/0608208, 2006.
* [8] Günter M. Ziegler. Lectures on polytopes. Springer, 1995.
## Appendix A Single arithmetic code for repeated squaring

Listing 1: fixed.c

 3  #include <stdio.h>
 4  #include <stdlib.h>
 5  #include "lrsarith.h"
 6
 7  FILE *lrs_ifp,*lrs_ofp;
 8
 9  void lrs_overflow(int parm){
10    fprintf(lrs_ofp," overflow detected:halting\n");
11    exit (1);
12  }
13
14  int main (void){
15    long i,k;
16    lrs_mp a,b;
17    lrs_mp_init(10,stdin,stdout);
18    lrs_alloc_mp(a); lrs_alloc_mp(b);
19    fscanf(lrs_ifp,"%ld",&k);
20    itomp(k,b);
21    pmp("",b);
22    for(i=1;i<=6;i++)
23    {
24      copy(a,b);
25      mulint(a,a,b);
26      pmp("",b);
27    }
28    lrs_clear_mp(a); lrs_clear_mp(b);
29    fprintf(lrs_ofp,"\n");
30    return 0;
31  }

Listing 2: makefile

16  fixed: fixed.c lrsarith.c lrsarith.h lrslong.c lrslong.h lrsgmp.c lrsgmp.h lrsmp.c lrsmp.h
17      $(CC) -DLRSLONG -DSAFE -o fixed1 lrsarith.c fixed.c
18      $(CC) -DLRSLONG -DB128 -DSAFE -o fixed2 lrsarith.c fixed.c
19      $(CC) -DLRSLONG -o fixed1n lrsarith.c fixed.c
20      $(CC) -DLRSLONG -DB128 -o fixed2n lrsarith.c fixed.c
21      $(CC) -DMP -o fixedmp lrsarith.c fixed.c
22      $(CC) -DGMP -o fixedgmp lrsarith.c fixed.c -lgmp

## Appendix B Hybrid arithmetic code for repeated squaring

Listing 3: hybridlib.c

 4  #include <stdio.h>
 5  #include <stdlib.h>
 6  #include <setjmp.h>
 7  #include "lrsarith.h"
 8  #define run suf(run)
 9
10  static jmp_buf buf1; /* return location when overflowing */
11
12  int run(long k){
13    long i;
14    lrs_mp a,b;
15    lrs_mp_init(0,stdin,stdout);
16    lrs_alloc_mp(a); lrs_alloc_mp(b);
17    itomp(k,b);
18    pmp("",b);
19
20    if (!setjmp(buf1)){ /* overflow test */
21      for(i=1;i<=6;i++){
22        copy(a,b);
23        mulint(a,a,b);
24        pmp("",b);
25      }
26      lrs_clear_mp(a); lrs_clear_mp(b);
27      return 0;
28    }
29    printf(" overflow detected:restarting\n");
30    lrs_clear_mp(a); lrs_clear_mp(b);
31    return 1;
32  }
33
34  void lrs_overflow(int parm){
35    longjmp(buf1,1);
36  }
37
38  #if defined(MA) && defined(LRSLONG)
39  #ifdef B128
40  int run2(long k){ /* 128 bit */
41  #else
42  int run1(long k){ /* 64 bit */
43  #endif
44  #else
45  int runmp(long k){ /* other arithmetic */
46  #endif
47
48    return(run(k));
49  }
Listing 4: hybrid.h

 4  #include <stdio.h>
 5  #include <stdlib.h>
 6
 7  FILE *lrs_ifp; /* input file pointer */
 8  FILE *lrs_ofp; /* output file pointer */
 9
10  int run1(long k);
11  int run2(long k);
12  int runmp(long k);

Listing 5: hybrid.c

 3  int main (void) {
 4  long k;
 5
 6  scanf("%ld",&k);
 7
 8  if(run1(k)) /* TRUE on 64 bit overflow */
 9    if(run2(k)) /* TRUE on 128 bit overflow */
10      runmp(k);
11
12  printf("\n");
13  return 0;
14  }

Listing 6: makefile

 3  HOBJ=lrslong1.o hybridlib1.o lrslong2.o lrsgmp.o hybridlib2.o hybridlibgmp.o
 4
 5  hybrid: hybrid.c lrslong.c lrslong.h lrsgmp.c lrsgmp.h hybridlib.c hybrid.h lrsarith.c
 6      $(CC) -DMA -DSAFE -DLRSLONG -c -o lrslong1.o lrsarith.c
 7      $(CC) -DMA -DB128 -DSAFE -DLRSLONG -c -o lrslong2.o lrsarith.c
 8      $(CC) -DMA -DGMP -c -o lrsgmp.o lrsarith.c
 9      $(CC) -DMA -DSAFE -DLRSLONG -c -o hybridlib1.o hybridlib.c
10      $(CC) -DMA -DB128 -DSAFE -DLRSLONG -c -o hybridlib2.o hybridlib.c
11      $(CC) -DMA -DGMP -c -o hybridlibgmp.o hybridlib.c
12
13      $(CC) -DMA -DLRSLONG -DSAFE -o hybrid ${HOBJ} hybrid.c -lgmp
# Zero-Error Communication over Adversarial MACs

Yihan Zhang^{1,2}
^1 Faculty of Computer Science, Technion Israel Institute of Technology
^2 Institute of Theoretical Computer Science and Communications, The Chinese University of Hong Kong
<EMAIL_ADDRESS> <EMAIL_ADDRESS>

###### Abstract

We consider zero-error communication over a two-transmitter deterministic adversarial multiple access channel (MAC) governed by an adversary who has access to the transmissions of both senders (hence called _omniscient_) and aims to maliciously corrupt the communication. Neither the encoders, the jammer, nor the decoder is allowed to randomize using private or public randomness. This enforces a combinatorial nature of the problem. Our model covers a large family of channels studied in the literature, including all deterministic discrete memoryless noisy or noiseless MACs. In this work, given an arbitrary two-transmitter deterministic omniscient adversarial MAC, we characterize when the capacity region

1. has nonempty interior (in particular, is two-dimensional);
2. consists of two line segments (in particular, has empty interior);
3. consists of one line segment (in particular, is one-dimensional);
4. or only contains $(0,0)$ (in particular, is zero-dimensional).

This extends a recent result by Wang, Budkuley, Bogdanov and Jaggi (2019) from the point-to-point setting to the multiple access setting. Indeed, our converse arguments build upon their generalized Plotkin bound and involve delicate case analysis. One of the technical challenges is to take care of both “joint confusability” and “marginal confusability”. In particular, the treatment of marginal confusability does _not_ follow from the point-to-point results by Wang et al. Our achievability results follow from random coding with expurgation.

## I Introduction

The multiple access channel (MAC) model was first (implicitly) considered by Shannon [Sha61].
This model is arguably one of the simplest communication models beyond the point-to-point setting. The problem concerns information transmission over a three-node network. Two independent senders simultaneously send signals to the channel; a single receiver aims to recover both senders’ transmitted messages given the channel-distorted signal. (In this paper, we only consider MACs with two transmitters. Generalizations to more transmitters are left as an open question; see Item 3 in Section XVI.) The goal for the parties in such a communication scenario is to reliably deliver as much information as possible from the senders to the receiver.

The fundamental limits (i.e., _capacity region_, see Definition 7) of discrete memoryless MACs under the average error criterion were derived independently by Ahlswede [Ahl73, Ahl74] and Liao [Lia72]. (Their capacity region is written in terms of the convex hull of the union of multiple regions. An alternative form involving an auxiliary time-sharing variable was given by Slepian and Wolf [SW73]; a cardinality bound on the alphabet of the auxiliary variable was given in [CK11].) The Gaussian counterpart was solved by Cover [Cov75] and Wyner [Wyn74]; this paper only concerns MACs with finite-sized alphabets and will not deal with the Euclidean case. MACs are so far essentially the only multiuser channel whose fundamental limits are well-understood in full generality.

In the classical Shannon setup of the MAC problem, it is assumed that the channel is given by a _fixed_ (i.e., time-invariant) law $W_{{\mathbf{y}}|{\mathbf{x}}^{1},{\mathbf{x}}^{2}}$ (we use lowercase boldface letters to denote scalar random variables) that maps a given pair of input symbols (throughout this paper, we use superscripts to denote the index of the transmitter: $x^{1}$, resp. $x^{2}$, denotes a symbol transmitted by the first, resp. second, transmitter)
$(x^{1},x^{2})\in{\mathcal{X}}_{1}\times{\mathcal{X}}_{2}$ to an output symbol $y\in{\mathcal{Y}}$ with probability $W_{{\mathbf{y}}|{\mathbf{x}}^{1},{\mathbf{x}}^{2}}\left(y|x^{1},x^{2}\right)$. Such a channel well models white noise between the senders and the receiver, while it fails to model _adversarial_ noise that is potentially injected by a malicious adversary. In this paper, we take a coding-theoretic perspective on multiple access. A general _omniscient adversarial MAC_ model is introduced and studied. We assume that the channel is governed by an adversary who has full access to the transmitted signals from both senders (hence called _omniscient_). The adversary aims to prevent communication from happening by transmitting a carefully designed noise sequence to the channel. We therefore at times also call the adversary the _jammer_. None of the encoders, the jammer and the decoder is allowed to randomize. To enforce a combinatorial nature of the problem, it is further assumed that the channel obeys a zero-one law, i.e., the distribution $W_{{\mathbf{y}}|{\mathbf{x}}^{1},{\mathbf{x}}^{2},{\mathbf{s}}}$ (where ${\mathbf{s}}$ denotes the symbol sent by the jammer) only takes values in $\\{0,1\\}$ and can be realized by a deterministic function $y=W(x^{1},x^{2},s)$ (with a slight abuse of notation). The main contribution of this paper is a _zero-th_ order (see the next paragraph) characterization of the capacity region of an arbitrary omniscient adversarial MAC with _maximum_ error probability. In fact, since nothing in the system is stochastic, it is not hard to see that maximum error criterion is equivalent to zero error criterion. Our results can be appreciated through different lenses, e.g., arbitrarily varying channels, zero-error information theory, coding theory, etc. Elaboration on various connections is deferred to Section II. 
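For reference (this display is classical material, not a result of the present paper), the capacity region of a non-adversarial discrete memoryless MAC $W_{{\mathbf{y}}|{\mathbf{x}}^{1},{\mathbf{x}}^{2}}$ under the average error criterion [Ahl73, Lia72] is the closure of the convex hull of

$$\bigcup_{P_{{\mathbf{x}}^{1}}P_{{\mathbf{x}}^{2}}}\left\{(R_{1},R_{2})\colon\ R_{1}\le I({\mathbf{x}}^{1};{\mathbf{y}}\,|\,{\mathbf{x}}^{2}),\ R_{2}\le I({\mathbf{x}}^{2};{\mathbf{y}}\,|\,{\mathbf{x}}^{1}),\ R_{1}+R_{2}\le I({\mathbf{x}}^{1},{\mathbf{x}}^{2};{\mathbf{y}})\right\},$$

where the union ranges over product input distributions; an equivalent form with a time-sharing variable is due to Slepian and Wolf [SW73]. No analogous characterization is known for the omniscient adversarial model studied here.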
Classical Shannon theory and combinatorial coding theory provide systematic ways of studying the _first-order_ asymptotics, i.e., capacity, of (stochastic and adversarial, respectively) communication channels. By first-order we mean the number of bits that can be reliably transmitted through the channel. The first-order asymptotics of discrete memoryless channels (DMCs) were established in the seminal paper by Shannon [Sha48] which laid the foundation of information theory. The first-order asymptotics of most multiuser channels remain open, except for the MAC as mentioned before and a handful of other special cases. On the other hand, in the theory of error-correcting codes, which deals with worst-case errors, essentially no capacity is characterized for any nontrivial channel. Indeed, even the capacity of the adversarial bitflip channel, one of the simplest nontrivial channels, remains a holy grail problem in coding theory. This problem is well known to be equivalent to the sphere packing problem in binary Hamming space.

Our work can be viewed as a first step towards pushing the existing wisdom of classical coding theory to the general multiuser setting. For one thing, we consider very general channel models, not just the bitflip channel, which is the most studied one in coding theory. For another, we go beyond the point-to-point setting and consider MACs. Due to the lack of techniques for characterizing the capacity, this work only aims to characterize the “shape” of the capacity region of any given adversarial MAC. More specifically, we determine the dimension of the capacity region – when it has nonempty interior; when it only consists of (one or two) line segment(s); and when it only contains $(0,0)$. We call such positivity conditions a characterization of the _zero-th_ order asymptotics of the channel. See Section XI for the formal statements of our results.
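To recall the classical zero-th order fact behind this: if an omniscient adversary may flip up to $pn$ of the $n$ transmitted bits, zero-error (unique) decoding requires minimum Hamming distance $d>2pn$, and combining the Gilbert–Varshamov construction with the Plotkin bound yields

$$C(p)>0\quad\text{for }p<\tfrac{1}{4},\qquad C(p)=0\quad\text{for }p\ge\tfrac{1}{4},$$

since the Plotkin bound forces any binary code with relative distance above $1/2$ to have at most linearly many codewords in $n$, so the rate vanishes. It is exactly this kind of positivity threshold that the present paper establishes for general adversarial MACs.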
Finally, we remark that there has been a stream of work on higher-order (second-/third-/fourth-order) asymptotics of channels [PPV10, TT13, TT15, SMiF14, YKE20, Kos20].

###### Remark 1.

The capacity region of a (non-adversarial) MAC under the average error criterion can be achieved using deterministic encoding, and the region is invariant even if stochastic encoding is allowed. However, unlike the point-to-point case, under the _maximum_ error criterion and _deterministic_ encoding, the capacity region of a MAC is strictly smaller than that under the average error criterion [Due78]. To the best of our knowledge, the exact capacity region in this case is still open. Furthermore, under the maximum error criterion, stochastic encoding can achieve the capacity region attainable with average probability of error. This shows that randomization at the encoders can boost the capacity under the maximum error criterion – a phenomenon absent in the point-to-point setting.

## II Related work

Our model and results are connected to various facets of information theory and adjacent fields. We list, non-exhaustively, several connections below and compare, when appropriate, our results with existing ones.

### II-A Arbitrarily varying channels

Our model of a general omniscient adversarial MAC is intimately related to a classical model studied in the literature known as the _arbitrarily varying channel (AVC)_. An AVC is a channel with a state ${\mathbf{s}}$ that does not follow any fixed distribution, i.e., is arbitrarily varying. A noticeable difference between the classical AVC model and ours is that the bulk of the literature on AVCs deals with channels with an _oblivious_ adversary who does not know anything about the transmitted sequence. Under the average error criterion, this problem is significantly easier (though not trivial) than the omniscient counterpart.
Indeed, the fundamental limits of point-to-point AVCs [CN88b, CN91] and arbitrarily varying MACs (AVMACs) [AC99, PS19] (and several other channels which we do not spell out here) are well-understood. In fact, an oblivious AVMAC with maximum probability of error is equivalent to our model of an omniscient adversarial MAC. However, the maximum error criterion is much less studied in the AVC literature. Obtaining a tight first-order characterization of the capacity remains a formidable challenge even for very simple channels. The main focus of this work is a zero-th order characterization of the capacity region of general omniscient adversarial MACs. Though we do present nontrivial inner and outer bounds, there is no reason to expect any of them to be optimal. Item 1 in Section XVI contains more discussions and open problems regarding error criteria. See also Section XI-B for an in-depth comparison between our work and [PS19] on AVMACs.

### II-B Zero-error information theory

Since randomization in the encoding/jamming/decoding strategies is ruled out from our model and only deterministic channels are considered, there is no probability anywhere in the system and the maximum error criterion is equivalent to the zero error criterion. For this reason, it is worth mentioning the connections between our work and zero-error information theory – a combinatorial facet of information theory. The basic deviation of zero-error information theory from ordinary Shannon theory is to insist on the _zero error_ criterion, which changes the nature of the problem in a fundamental way. Despite years of research, there is essentially no capacity result for any general channel model except for sporadic special channels [Lov79]. Usually the channels studied in zero-error information theory do not involve adversarial noise (a.k.a. an arbitrarily varying state in AVC jargon).
It turns out that if the adversarial noise in our model is _unconstrained_ (i.e., the state vector ${\underline{s}}$ can take any value in ${\mathcal{S}}^{n}$; we use underlines to denote vectors of length $n$, the number of channel uses, and refer to Section V for the notational conventions of this paper), then the channel is equivalent to a non-adversarial channel under the zero error criterion. On the other hand, the presence of state constraints has a significant effect on the behaviour of the channel. Such a phenomenon already shows up in the point-to-point setting [CN88b].

Classical zero-error information theory approaches the problem of zero-error communication via the notion of the _Shannon capacity_ of graphs [Sha56] – getting rid of channel probabilities. (Unfortunately, Shannon capacity is not computable since it is defined as a limit as $n$, the blocklength, goes to infinity. See Section II-D and Item 5 in Section XVI for remarks on $n$-letter capacity expressions.) Recently, the positivity of the zero-error capacity of MACs (and several other multiuser channels) was characterized by Devroye [Dev16]. However, she only dealt with non-adversarial channels, or equivalently, adversarial channels without state constraints. Several other general multiuser channels with zero error, such as two-way channels [GS19] and relay channels [CSD14, CD15, CD17, APBD18], were also studied in the literature. Many other works on zero-error multiuser channels concentrate on specific channels such as the binary adder MAC [AKKN17], the $\mathsf{AND}\text{-}\mathsf{OR}$ interference channel [NY20], etc. See Section II-F for more related work on special MACs.
### II-C Kolmogorov complexity Besides Shannon’s notion of graph capacity, Kolmogorov [Kol56, Tik93] introduced the $\varepsilon$-entropy and $\varepsilon$-capacity (which are the normalized covering and packing numbers (using balls of radius $\varepsilon$) of a space) as another non-stochastic approach to zero-error source and channel coding, respectively. However, there were no coding theorems accompanying these notions. The results in [WBBJ19] which we build upon can be cast as packing _general_ shapes (not necessarily balls) without overlap in a general space. For MACs, the geometric interpretation of packing and covering does not seem to be as obvious/clean as in the point-to-point case. ### II-D Non-stochastic information theory Recently, Nair [Nai11, Nai13] proposed yet another framework towards understanding zero-error communication known as _non-stochastic information theory_. He introduced non-stochastic analogs of information measures and proved coding theorems for worst-case error models. Extensions to MACs (see [ZNE19] for the two-transmitter case and [ZN20] for the multi-transmitter case), channels with feedback [Nai12, SFN18, SFN20b], channels with memory [SFN20a, SFN19] and function evaluation [FN20] are presented in follow-up works by Nair and his coauthors. In most cases, Nair’s framework only gives $n$-letter expressions for capacity, similar to the graph-theoretic approach mentioned in Section II-B. More recently, Lim–Franceschetti [LF17] and Rangi–Franceschetti [RF19] refined Nair’s framework by introducing new non-stochastic information measures to incorporate decoding errors while retaining the worst-case nature of the error model. The latter work [RF19] also studied the possibility of obtaining single-letter expressions for the capacity of a certain family of channels. As a comparison, our approach does not even yield $n$-letter capacity expressions.
However, we can handle general adversarial channels with potentially constrained adversarial noise. In [RF19], following Nair’s framework, such channels are treated as _nonstationary_ channels with _memory_ for which no $n$-letter capacity expression was obtained. More remarks on $n$-letter expressions can be found in Item 5 of Section XVI. ### II-E Coding theory and generalized Plotkin bound Since our problem inherently exhibits a combinatorial nature, one can view our contributions as Shannon-theoretic results for a coding-theoretic model. We borrow insights and techniques from both information theory and coding theory and try to build a bridge between them in the particular MAC setting. At a technical level, the principal tool that we use is inspired by a recent Plotkin-type bound for general point-to-point omniscient adversarial channels [WBBJ19]. Our contribution is to generalize it to the MAC setting and use it, along with delicate case analysis, to characterize the “dimension” of the capacity region. The results in both [WBBJ19] and this paper are in turn generalizations of the Plotkin bound in classical coding theory. This bound (together with a standard probabilistic construction) pins down the exact threshold of the noise level of a bitflip channel888A bitflip channel takes a binary sequence as input and arbitrarily flips a fixed fraction of bits. such that positive rates are achievable (see Definition 7 for the formal definition of achievable rates). ### II-F Specific channels Our model covers a large family of channels studied in the literature, including the $\operatorname{\mathsf{OR}}$ MAC, the collision MAC, the adder MAC [Gu18, AKKN17], the disjunctive MAC [DPSV19], the multiple access hyperchannel [Shc16], etc. Indeed, our model incorporates all deterministic channel models. Interested readers are encouraged to refer to the lecture notes [GGLR] and [PW14, Chapters 29 and 30].
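To make the Plotkin-type threshold from Section II-E concrete, consider the bitflip channel of footnote 8. The following sketch (our own illustration; the function names are hypothetical) exhibits the standard "midpoint attack": whenever two codewords are within Hamming distance $2t$, an adversary with a budget of $t$ flips per transmission can drive both to a common output, so a zero-error code must have pairwise distances exceeding $2t=2pn$; for $p>1/4$ this exceeds $n/2$ and the classical Plotkin bound caps such codes at constant size.

```python
def midpoint_attack(x1, x2, t):
    # if d(x1, x2) <= 2t, return jamming vectors s1, s2 of Hamming
    # weight <= t with x1 XOR s1 == x2 XOR s2; otherwise return None
    n = len(x1)
    diff = [i for i in range(n) if x1[i] != x2[i]]
    if len(diff) > 2 * t:
        return None
    s1, s2 = [0] * n, [0] * n
    half = len(diff) // 2
    for i in diff[:half]:
        s1[i] = 1          # move x1 toward x2 on half the disagreements
    for i in diff[half:]:
        s2[i] = 1          # move x2 toward x1 on the rest
    return s1, s2

xor = lambda x, s: [a ^ b for a, b in zip(x, s)]

x1, x2 = [0, 0, 0, 0, 0, 0], [1, 1, 1, 1, 0, 0]        # distance 4
s1, s2 = midpoint_attack(x1, x2, t=2)
assert sum(s1) <= 2 and sum(s2) <= 2
assert xor(x1, s1) == xor(x2, s2)      # receiver cannot tell them apart
assert midpoint_attack([0] * 6, [1] * 6, t=2) is None  # distance 6 > 2t: safe
```

The MAC-level analogue of this attack, applied to pairs of codeword tuples rather than single codewords, is what our generalized Plotkin bound has to control.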
## III Overview of our results This work initiates a systematic study of memoryless MACs in the presence of an omniscient adversary (who may _not_ behave memorylessly) under the maximum probability of error criterion. In particular, the main attention of this paper is focused on the capacity threshold. In what follows, we summarize the contributions of this paper. 1. 1. We introduce in Section VII the model of _omniscient adversarial MACs_ which covers a large family of channels of interest. In particular, all component-wise deterministic memoryless channels with finite alphabets fall into our framework. In this work we focus on the maximum probability of error criterion. For technical reasons, we make additional assumptions that are listed in Section VII-B. 2. 2. We introduce in Section IX the notion of _confusability_ , both the operational version (12) and the distributional version (Definition 11), which turn out to be equivalent (14, Remark 5). Specifically, we define the _joint confusability set_ and the (first and second) _marginal confusability sets_ (for both transmitters separately) to capture the inability to reliably transmit both (for the joint case) or exactly one (for the marginal cases) of the sequences. One can think of the confusability sets as the sets of “bad” distributions that (the types999The type of a (collection of) vector(s) is the empirical distribution/histogram. See Definition 3 for a formal definition. of) any good code should avoid. The significance of the notion of confusability is that it precisely captures all information one needs for understanding the capacity region of any adversarial MAC. In fact, adversarial MACs with the same confusability sets share a common capacity region (16), though they may appear different at first glance. Various properties of the confusability sets are presented in Proposition 15. 3. 3.
Towards understanding capacity thresholds, we find a class of distributions that we call _good_ (Definition 15). Again, they are separately tailored for the joint case and the two marginal cases. While of independent interest in their own right, the sets of good distributions are particularly useful in our context of determining the capacity threshold. One should think of these classes of distributions as the _only_ types of distributions that one needs to consider for the purpose of achieving positive rates (though in this way one may not be able to achieve the capacity, which is anyway unknown given the current techniques). We also define a cone of tensors referred to as _co-good_ tensors (Definition 16) and show that the cones of good and co-good tensors are dual to each other (Theorem 18), which will be critical to the proofs in the subsequent sections. Various properties of good distributions and co-good tensors are presented. We expect these distributions/tensors and the associated duality to be useful elsewhere. 4. 4. We completely characterize, for any given omniscient adversarial MAC, the “shape” of the capacity region, that is, when the capacity region 1. (a) has nonempty interior (in particular, is two-dimensional); 2. (b) consists of two line segments (in particular, has empty interior); 3. (c) consists of one line segment (in particular, is one-dimensional); 4. (d) or only contains $(0,0)$ (in particular, is zero-dimensional). The proof consists of the direct part and the converse part. The technically most challenging case is to handle the (non-)achievability of rate pairs both of whose components are strictly positive. For the marginal cases, we emphasize that they do _not_ follow from the point-to-point results in [WBBJ19] in a black-box manner. We then briefly discuss separately our achievability and converse results and the techniques for proving them. For a more detailed discussion on the proof techniques, see Section XII. 1. 1.
For the achievability part, one could use good non-confusable distributions (whenever they exist) to sample good codes of positive rates (Lemma 23). This follows from the standard random coding argument, which in turn is proved using Chernoff-union bounds. We also strengthen the above positivity results by giving _inner bounds_ on the capacity region (Lemma 24). This follows by carefully expurgating the codes and analyzing the large deviation exponents of the error events using Sanov’s theorem (Lemma 3). The most challenging case is where both transmitters are able to achieve positive rates. 2. 2. On the other hand, for the converse part, if one cannot construct positive-rate good codes using good distributions, then she/he cannot construct them using any other types of distributions (Theorem 20). This part is much less obvious and forms the bulk of the technically most challenging portion of this work. As alluded to above, the crux of the proof is to leverage the duality between the cone of good distributions and the cone of co-good tensors defined before and to apply a double counting trick that is reminiscent of the one used in the classical Plotkin bound in coding theory. Technically, to make the trick actually work, we have to preprocess the code by applying a standard constant composition reduction and an equicoupled subcode extraction (using Ramsey’s theorem; see Theorems 26 and 35). The hardest case is to show that two transmitters cannot simultaneously achieve positive rates as long as there does not exist a distribution that is _simultaneously_ jointly good and (first and second) marginally good. ## IV Organization of this paper The rest of the paper is organized as follows. Notational conventions of this paper are listed in Section V, followed by preliminaries in Section VI. We formally introduce the omniscient adversarial MAC model in Section VII.
Before proceeding, we first study the special case of binary noisy $\operatorname{\mathsf{XOR}}$ MACs in Section VIII with proofs deferred to Appendix B. Then in Sections IX and X respectively, we introduce two important notions of (sets of) distributions, viz.: the confusability sets and the sets of good distributions, and prove their properties. Building on the machinery we have developed in the previous sections, the main result (Theorem 19) of this paper, i.e., a characterization of the “shape” of the capacity region, is formally stated in Section XI. Before presenting the detailed proofs, we outline a roadmap with the underlying ideas of the proofs in Section XII. Section XIII contains a full proof of the achievability part of our main theorem. Sections XIV and XV prove the “joint” case and the “marginal” cases of the converse part, respectively. We conclude the paper with a list of remarks and open questions in Section XVI. A table of frequently used notation can be found in Appendix A. ## V Notation Sets are denoted by capital letters in calligraphic typeface, e.g., ${\mathcal{X}},{\mathcal{S}},{\mathcal{Y}}$, etc. All alphabets in this paper are of finite size. For a positive integer $M$, we use $[M]$ to denote $\left\\{1,\cdots,M\right\\}$. Let ${\mathcal{X}}$ be a finite set. For an integer $0\leq k\leq{\left|{\mathcal{X}}\right|}$, we use $\binom{{\mathcal{X}}}{k}$ to denote $\left\\{{\mathcal{X}}^{\prime}\subseteq{\mathcal{X}}\colon\left|{\mathcal{X}}^{\prime}\right|=k\right\\}$. Random variables are denoted by lowercase letters in boldface, e.g., ${\mathbf{x}},{\mathbf{s}},{\mathbf{y}}$, etc. Their realizations are denoted by corresponding lowercase letters in plain typeface, e.g., $x,s,y$, etc.
Vectors (random or fixed) of length $n$, where, unless specified otherwise, $n$ is the blocklength of the code, are denoted by lowercase letters with underlines, e.g., ${\underline{\mathbf{x}}},{\underline{\mathbf{s}}},{\underline{\mathbf{y}}},{\underline{x}},{\underline{s}},{\underline{y}}$, etc. The $i$-th entry of a vector ${\underline{x}}\in{\mathcal{X}}^{n}$ (resp. ${\underline{\mathbf{x}}}\in{\mathcal{X}}^{n}$) is denoted by ${\underline{x}}(i)$ (resp. ${\underline{\mathbf{x}}}(i)$). For vectors and random variables/vectors, we use superscripts to denote the indices of the transmitters, e.g., ${\underline{x}}^{1},{\mathbf{x}}^{1},{\underline{\mathbf{x}}}^{1}$ (resp. ${\underline{x}}^{2},{\mathbf{x}}^{2},{\underline{\mathbf{x}}}^{2}$) correspond to the first (resp. second) transmitter. We use the standard Bachmann–Landau (Big-Oh) notation. For two real-valued functions $f(n),g(n)$ of positive integers, we say that $f(n)$ _asymptotically equals_ $g(n)$, denoted by $f(n)\asymp g(n)$, if $\lim_{n\to\infty}{f(n)}/{g(n)}=1$. We write $f(n)\doteq g(n)$ (read $f(n)$ _dot equals_ $g(n)$) if $\lim_{n\to\infty}\left(\log f(n)\right)/\left(\log g(n)\right)=1$. Note that $f(n)\asymp g(n)$ implies $f(n)\doteq g(n)$, but the converse is not true. For any ${\mathcal{A}}\subseteq{\mathcal{X}}$, the indicator function of ${\mathcal{A}}$ is defined as, for any $x\in{\mathcal{X}}$, $\mathds{1}_{{\mathcal{A}}}(x)\coloneqq\begin{cases}1,&x\in{\mathcal{A}}\\\ 0,&x\notin{\mathcal{A}}\end{cases}.$ At times, we will slightly abuse notation by saying that $\mathds{1}{\left\\{{\mathsf{A}}\right\\}}$ is $1$ when event ${\mathsf{A}}$ happens and $0$ otherwise. Note that $\mathds{1}_{{\mathcal{A}}}(\cdot)=\mathds{1}{\left\\{\cdot\in{\mathcal{A}}\right\\}}$. In this paper, all logarithms are to the base 2. We use $\Delta({\mathcal{X}})$ to denote the probability simplex on ${\mathcal{X}}$.
Related notations such as $\Delta({\mathcal{X}}\times{\mathcal{Y}})$ and $\Delta({\mathcal{Y}}|{\mathcal{X}})$ are similarly defined. For a distribution $P_{{\mathbf{x}},{\mathbf{y}}|{\mathbf{u}}}\in\Delta({\mathcal{X}}\times{\mathcal{Y}}|{\mathcal{U}})$, we use $\left[P_{{\mathbf{x}},{\mathbf{y}}|{\mathbf{u}}}\right]_{{\mathbf{x}}|{\mathbf{u}}}\in\Delta({\mathcal{X}}|{\mathcal{U}})$ to denote the marginal distribution onto ${\mathbf{x}}$ given ${\mathbf{u}}$, i.e., for every $x\in{\mathcal{X}},u\in{\mathcal{U}}$, $\left[P_{{\mathbf{x}},{\mathbf{y}}|{\mathbf{u}}}\right]_{{\mathbf{x}}|{\mathbf{u}}}(x|u)=\sum_{y\in{\mathcal{Y}}}P_{{\mathbf{x}},{\mathbf{y}}|{\mathbf{u}}}(x,y|u)$. We use $\Delta^{(n)}({\mathcal{X}})$ to denote the set of types (i.e., empirical distributions/histograms, see Definition 3 for formal definitions) of length-$n$ vectors over alphabet ${\mathcal{X}}$. That is, $\Delta^{(n)}({\mathcal{X}})$ consists of all distributions $P_{\mathbf{x}}\in\Delta({\mathcal{X}})$ that are induced by ${\mathcal{X}}^{n}$-valued vectors. Other notations such as $\Delta^{(n)}({\mathcal{X}}\times{\mathcal{Y}})$ and $\Delta^{(n)}({\mathcal{Y}}|{\mathcal{X}})$ are similarly defined. The notation ${\mathbf{x}}\sim P_{\mathbf{x}}$ (resp. ${\underline{\mathbf{x}}}\sim P_{{\underline{\mathbf{x}}}}$) means that the p.m.f. of a random variable (resp. vector) ${\mathbf{x}}$ (resp. ${\underline{\mathbf{x}}}$) is $P_{\mathbf{x}}$ (resp. $P_{\underline{\mathbf{x}}}$). If ${\mathbf{x}}$ is uniformly distributed in ${\mathcal{X}}$, then we write ${\mathbf{x}}\sim{\mathcal{X}}$. 
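As a quick illustration of types (formally defined in Definition 3 below), the following Python sketch, with hypothetical helper names of our own, computes the empirical distribution of a sequence over a finite alphabet and checks that concatenating two sequences mixes their types with weights proportional to the lengths.

```python
from collections import Counter
from fractions import Fraction

def type_of(x, alphabet):
    # the type (empirical distribution/histogram) of a sequence x,
    # computed with exact rational arithmetic
    n = len(x)
    counts = Counter(x)
    return {a: Fraction(counts[a], n) for a in alphabet}

x = "aabac"
tau = type_of(x, "abc")
assert sum(tau.values()) == 1                 # a type is a valid distribution
assert tau["a"] == Fraction(3, 5)

# concatenation mixes types: tau_{xy} = w*tau_x + (1-w)*tau_y, w = |x|/|xy|
y = "bbccc"
tau_y = type_of(y, "abc")
tau_xy = type_of(x + y, "abc")
w = Fraction(len(x), len(x) + len(y))
assert all(tau_xy[a] == w * tau[a] + (1 - w) * tau_y[a] for a in "abc")
```

The mixing identity checked at the end is the content of Fact 4 in Section VI.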
Throughout this paper, we use $d_{{\infty}}\left(\cdot,\cdot\right)$ and $d_{{1}}\left(\cdot,\cdot\right)$ to respectively denote the $\ell^{\infty}$ and $\ell^{1}$ distances between two distributions, which are defined as follows: $\displaystyle d_{{\infty}}\left(P,Q\right)\coloneqq\max_{x\in{\mathcal{X}}}\left|P(x)-Q(x)\right|,\quad d_{{1}}\left(P,Q\right)\coloneqq\sum_{x\in{\mathcal{X}}}\left|P(x)-Q(x)\right|,$ for any $P,Q\in\Delta({\mathcal{X}})$. For a distribution $P\in\Delta({\mathcal{X}})$ and a subset ${\mathcal{A}}\subseteq\Delta({\mathcal{X}})$, the distance (w.r.t. some metric $\operatorname{dist}(\cdot,\cdot)$) between $P$ and ${\mathcal{A}}$ is defined as $\operatorname{dist}(P,{\mathcal{A}})\coloneqq\inf_{Q\in{\mathcal{A}}}\operatorname{dist}(P,Q)$. For ${\mathcal{B}}\subseteq\Delta({\mathcal{X}})$, the distance between ${\mathcal{A}}$ and ${\mathcal{B}}$ is defined as $\operatorname{dist}({\mathcal{A}},{\mathcal{B}})\coloneqq\inf_{(P,Q)\in{\mathcal{A}}\times{\mathcal{B}}}\operatorname{dist}(P,Q)$. The inner product between $P$ and $Q$ is defined as $\left\langle P,Q\right\rangle\coloneqq\sum_{x\in{\mathcal{X}}}P(x)Q(x)$. The $\ell^{p}$-norm of a vector is denoted by $\left\|\cdot\right\|_{p}$. Note that $d_{{\infty}}\left(\cdot,\cdot\right)=\left\|\cdot-\cdot\right\|_{\infty}$ and $d_{{1}}\left(\cdot,\cdot\right)=\left\|\cdot-\cdot\right\|_{1}$. ## VI Preliminaries Let $P_{\mathbf{x}}\in\Delta({\mathcal{X}})$. We always assume $\operatorname{supp}(P_{\mathbf{x}})={\mathcal{X}}$. Otherwise, we can properly reduce ${\mathcal{X}}$ to ${\mathcal{X}}^{\prime}$ and again assume $P_{\mathbf{x}}\in\Delta({\mathcal{X}}^{\prime}),\operatorname{supp}(P_{\mathbf{x}})={\mathcal{X}}^{\prime}$. Define the polynomial $\nu(P_{\mathbf{x}},n)$ as $\displaystyle\nu(P_{\mathbf{x}},n)\coloneqq$ $\displaystyle\sqrt{(2\pi n)^{\left|{\mathcal{X}}\right|}\prod_{x\in{\mathcal{X}}}P_{\mathbf{x}}(x)}.$ (1) Note that $\nu(P_{\mathbf{x}},n)\neq 0$. ###### Lemma 1.
If ${\underline{\mathbf{x}}}\sim P_{\mathbf{x}}^{\otimes n}$, then for any ${\underline{x}}$ of type $P_{\mathbf{x}}$, we have $\Pr\left[{\underline{\mathbf{x}}}={\underline{x}}\right]=2^{-nH(P_{\mathbf{x}})}$. Moreover, $\Pr\left[\tau_{\underline{\mathbf{x}}}=P_{\mathbf{x}}\right]\asymp 1/\nu(P_{\mathbf{x}},n)$. ###### Lemma 2 (Chernoff bound). Let ${\mathbf{x}}_{1},\cdots,{\mathbf{x}}_{N}$ be independent $\left\\{0,1\right\\}$-valued random variables. Let ${\mathbf{x}}\coloneqq\sum_{i=1}^{N}{\mathbf{x}}_{i}$. Then for any $\delta\in[0,1]$, $\displaystyle\Pr\left[{\mathbf{x}}\geq(1+\delta)\mathbb{E}\left[{\mathbf{x}}\right]\right]\leq$ $\displaystyle\exp\left(-\frac{\delta^{2}}{3}\mathbb{E}\left[{\mathbf{x}}\right]\right),$ $\displaystyle\Pr\left[{\mathbf{x}}\leq(1-\delta)\mathbb{E}\left[{\mathbf{x}}\right]\right]\leq$ $\displaystyle\exp\left(-\frac{\delta^{2}}{2}\mathbb{E}\left[{\mathbf{x}}\right]\right),$ $\displaystyle\Pr\left[{\mathbf{x}}\notin(1\pm\delta)\mathbb{E}\left[{\mathbf{x}}\right]\right]\leq$ $\displaystyle 2\exp\left(-\frac{\delta^{2}}{3}\mathbb{E}\left[{\mathbf{x}}\right]\right).$ ###### Lemma 3 (Sanov’s theorem). Let ${\mathcal{Q}}\subseteq\Delta({\mathcal{X}})$ be a subset of distributions which equals the closure of its interior. Let ${\underline{\mathbf{x}}}\sim P_{\mathbf{x}}^{\otimes n}$ for some $P_{\mathbf{x}}\in\Delta({\mathcal{X}})$. Then $\displaystyle\lim_{n\to\infty}\frac{1}{n}\log\Pr\left[\tau_{\underline{\mathbf{x}}}\in{\mathcal{Q}}\right]=$ $\displaystyle-\inf_{Q_{\mathbf{x}}\in{\mathcal{Q}}}D\left(Q_{\mathbf{x}}\middle\|P_{\mathbf{x}}\right),$ where the Kullback–Leibler (KL) divergence $D\left(\cdot\middle\|\cdot\right)$ between two distributions is defined in Definition 2. ###### Fact 4. Let ${\underline{x}}=({\underline{x}}^{(1)},{\underline{x}}^{(2)})\in{\mathcal{X}}^{n}$ where ${\underline{x}}^{(1)}\in{\mathcal{X}}^{\alpha n}$ and ${\underline{x}}^{(2)}\in{\mathcal{X}}^{(1-\alpha)n}$ for some $\alpha\in[0,1]$.
Then we have $\tau_{{\underline{x}}}=\alpha\tau_{{\underline{x}}^{(1)}}+(1-\alpha)\tau_{{\underline{x}}^{(2)}}$. ###### Definition 1 (Net). Let $({\mathcal{X}},\operatorname{dist})$ be a metric space and $\eta>0$ be a constant. A subset ${\mathcal{N}}\subseteq{\mathcal{X}}$ is an _$\eta$ -net_ if for all $x\in{\mathcal{X}}$, there exists $x^{\prime}\in{\mathcal{N}}$ such that $\operatorname{dist}(x,x^{\prime})\leq\eta$. The following lemma can be proved by taking a simple coordinate quantization. A proof can be found in, e.g., [ZBJ20]. ###### Lemma 5 (Bound on size of a net). Let ${\mathcal{X}}$ be a finite alphabet. For any constant $\eta>0$, there exists an $\eta$-net of $(\Delta({\mathcal{X}}),d_{\infty})$ of size at most $\left\lceil\frac{{\left|{\mathcal{X}}\right|}}{2\eta}\right\rceil^{{\left|{\mathcal{X}}\right|}}\leq\left(\frac{{\left|{\mathcal{X}}\right|}}{2\eta}+1\right)^{{\left|{\mathcal{X}}\right|}}$. ###### Fact 6. For any ${\underline{x}},{\underline{y}}\in{\mathbb{R}}^{k}$, we have $d_{{\infty}}\left({\underline{x}},{\underline{y}}\right)\leq d_{{1}}\left({\underline{x}},{\underline{y}}\right)\leq k\cdot d_{{\infty}}\left({\underline{x}},{\underline{y}}\right)$. ###### Definition 2 (Kullback–Leibler (KL) divergence). Let ${\mathcal{X}}$ be a finite set and let $P,Q\in\Delta({\mathcal{X}})$. Assume that $P$ is absolutely continuous w.r.t. $Q$ (i.e., $\operatorname{supp}(P)\subseteq\operatorname{supp}(Q)$). The _Kullback–Leibler (KL) divergence_ between $P$ and $Q$ is defined as $D\left(P\middle\|Q\right)\coloneqq\sum_{x\in{\mathcal{X}}}P(x)\log\frac{P(x)}{Q(x)}$. ###### Definition 3 (Types). Let ${\mathcal{X}}$ be a finite set and $n\in{\mathbb{Z}}_{\geq 1}$. 
The _type_ of a vector ${\underline{x}}\in{\mathcal{X}}^{n}$, denoted by $\tau_{\underline{x}}\in\Delta({\mathcal{X}})$, is the empirical distribution/histogram of ${\underline{x}}$ defined as: for every $x\in{\mathcal{X}}$, $\tau_{\underline{x}}(x)=\frac{1}{n}\left|\left\\{i\in[n]\colon{\underline{x}}(i)=x\right\\}\right|$. The set of all types of ${\mathcal{X}}^{n}$-valued vectors is denoted by $\Delta^{(n)}({\mathcal{X}})$. Let ${\mathcal{Y}}$ be another finite set and ${\underline{y}}\in{\mathcal{Y}}^{n}$. The _joint type_ $\tau_{{\underline{x}},{\underline{y}}}$ (and $\Delta^{(n)}({\mathcal{X}}\times{\mathcal{Y}})$ correspondingly) and the _conditional type_ $\tau_{{\underline{x}}|{\underline{y}}}$ (and $\Delta^{(n)}({\mathcal{X}}|{\mathcal{Y}})$ correspondingly) are defined in a similar manner. Furthermore, these definitions can be extended to tuples of vectors in the canonical way. The set of vectors of the same type is called a _type class_. ###### Fact 7 (Types are dense in distributions). Let ${\mathcal{X}}$ be a finite set. The set $\bigcup_{n\in{\mathbb{Z}}_{\geq 1}}\Delta^{(n)}({\mathcal{X}})$ of types induced by vectors of all possible lengths is dense in the corresponding set $\Delta({\mathcal{X}})$ of distributions. The number of types of length-$n$ vectors is polynomial in $n$. ###### Lemma 8 (Number of types [Csi98]). The number of types corresponding to ${\mathcal{X}}^{n}$-valued vectors equals $\binom{n+\left|{\mathcal{X}}\right|-1}{\left|{\mathcal{X}}\right|-1}\leq(n+\left|{\mathcal{X}}\right|-1)^{\left|{\mathcal{X}}\right|-1}$. ###### Lemma 9 (Marginalization does not increase distance). Let $P_{{\mathbf{a}},{\mathbf{b}}},Q_{{\mathbf{a}},{\mathbf{b}}}\in\Delta({\mathcal{A}}\times{\mathcal{B}})$. Then $d_{{1}}\left(\left[P_{{\mathbf{a}},{\mathbf{b}}}\right]_{{\mathbf{a}}},\left[Q_{{\mathbf{a}},{\mathbf{b}}}\right]_{{\mathbf{a}}}\right)\leq d_{{1}}\left(P_{{\mathbf{a}},{\mathbf{b}}},Q_{{\mathbf{a}},{\mathbf{b}}}\right)$. ###### Proof.
The lemma follows from the triangle inequality. $\displaystyle d_{{1}}\left(\left[P_{{\mathbf{a}},{\mathbf{b}}}\right]_{{\mathbf{a}}},\left[Q_{{\mathbf{a}},{\mathbf{b}}}\right]_{{\mathbf{a}}}\right)\leq$ $\displaystyle\sum_{a\in{\mathcal{A}}}\left|\sum_{b\in{\mathcal{B}}}P_{{\mathbf{a}},{\mathbf{b}}}(a,b)-\sum_{b\in{\mathcal{B}}}Q_{{\mathbf{a}},{\mathbf{b}}}(a,b)\right|\leq\sum_{(a,b)\in{\mathcal{A}}\times{\mathcal{B}}}\left|P_{{\mathbf{a}},{\mathbf{b}}}(a,b)-Q_{{\mathbf{a}},{\mathbf{b}}}(a,b)\right|=d_{{1}}\left(P_{{\mathbf{a}},{\mathbf{b}}},Q_{{\mathbf{a}},{\mathbf{b}}}\right).\qed$ (2) ## VII Basic definitions ### VII-A Channel and coding ###### Definition 4 (Omniscient adversarial MACs). An _omniscient adversarial two-user multiple access channel (MAC)_ $\mathsf{MAC}_{2}=\left({\mathcal{X}}_{1},{\mathcal{X}}_{2},{\mathcal{S}},{\mathcal{Y}},\Gamma_{1},\Gamma_{2},\Lambda,W_{{\mathbf{y}}|{\mathbf{x}},{\mathbf{s}}}\right)$ consists of 1. 1. four alphabets ${\mathcal{X}}_{1},{\mathcal{X}}_{2},{\mathcal{S}},{\mathcal{Y}}$ for the input sequence from the first user, the input sequence from the second user, the jamming sequence and the output sequence, respectively; 2. 2. input constraints $\Gamma_{1}\subseteq\Delta({\mathcal{X}}_{1})$ and $\Gamma_{2}\subseteq\Delta({\mathcal{X}}_{2})$ for the first and second users, respectively; 3. 3. state constraints $\Lambda\subseteq\Delta({\mathcal{S}})$ for the jammer; 4. 4. and the adversarial channel transition law $W_{{\mathbf{y}}|{\mathbf{x}}^{1},{\mathbf{x}}^{2},{\mathbf{s}}}$ that is governed by the adversary. Suppose that the first (resp. second) transmitter wishes to send a message $m^{1}\in[M_{1}]$ (resp. $m^{2}\in[M_{2}]$) to the receiver. They are allowed to encode101010Importantly, the encoding process must be completed locally by two individual encoders without cooperation.
$(m^{1},m^{2})$ into two sequences (called _codewords_) $\operatorname{Enc}_{1}(m^{1})={\underline{x}}^{1}\in{\mathcal{X}}_{1}^{n}$ and $\operatorname{Enc}_{2}(m^{2})={\underline{x}}^{2}\in{\mathcal{X}}_{2}^{n}$ respectively such that $\tau_{{\underline{x}}^{1}}\in\Gamma_{1},\tau_{{\underline{x}}^{2}}\in\Gamma_{2}$. These two codewords are transmitted into the channel. Knowing the transmitted ${\underline{x}}^{1},{\underline{x}}^{2}$ and the codebooks $({\mathcal{C}}_{1},{\mathcal{C}}_{2})\in{\mathcal{X}}_{1}^{M_{1}\times n}\times{\mathcal{X}}_{2}^{M_{2}\times n}$ (i.e., the collection of codeword pairs that encode the messages in $[M_{1}]\times[M_{2}]$; see Definition 5), the adversary injects an adversarial noise (a.k.a. the _state vector_ or _jamming vector_) ${\underline{s}}\in{\mathcal{S}}^{n}$ such that $\tau_{{\underline{s}}}\in\Lambda$. The channel acts on the inputs ${\underline{x}}^{1},{\underline{x}}^{2},{\underline{s}}$ and generates an output ${\underline{\mathbf{y}}}$ memorylessly, i.e., for any ${\underline{y}}\in{\mathcal{Y}}^{n}$, $\displaystyle W_{{\underline{\mathbf{y}}}|{\underline{\mathbf{x}}}^{1},{\underline{\mathbf{x}}}^{2},{\underline{\mathbf{s}}}}\left({\underline{y}}|{\underline{x}}^{1},{\underline{x}}^{2},{\underline{s}}\right)=W_{{\mathbf{y}}|{\mathbf{x}}^{1},{\mathbf{x}}^{2},{\mathbf{s}}}^{\otimes n}\left({\underline{y}}|{\underline{x}}^{1},{\underline{x}}^{2},{\underline{s}}\right)=\prod_{j=1}^{n}W_{{\mathbf{y}}|{\mathbf{x}}^{1},{\mathbf{x}}^{2},{\mathbf{s}}}\left({\underline{y}}(j)|{\underline{x}}^{1}(j),{\underline{x}}^{2}(j),{\underline{s}}(j)\right).$ Receiving ${\underline{\mathbf{y}}}$, the decoder is required to output an estimate $\operatorname{Dec}({\underline{\mathbf{y}}})=\left(\widehat{m}^{1},\widehat{m}^{2}\right)$ of the transmitted messages $(m^{1},m^{2})$. See Figure 1 for a system diagram of $\mathsf{MAC}_{2}$. Figure 1: A system diagram of a general two-user omniscient adversarial MAC. ###### Remark 2. 
Though the channel from the transmitters to the receiver is memoryless, the state vector ${\underline{\mathbf{s}}}$ is not necessarily generated memorylessly by the jammer given ${\underline{\mathbf{x}}}^{1},{\underline{\mathbf{x}}}^{2}$. That is, $P_{{\underline{\mathbf{s}}}|{\underline{\mathbf{x}}}^{1},{\underline{\mathbf{x}}}^{2}}$ may not factor. Indeed, the adversary can put probability mass one on a single sequence ${\underline{s}}$. ###### Definition 5 (Codes). A code pair $({\mathcal{C}}_{1},{\mathcal{C}}_{2})$ for an omniscient adversarial MAC $\mathsf{MAC}_{2}=\left({\mathcal{X}}_{1},{\mathcal{X}}_{2},{\mathcal{S}},{\mathcal{Y}},\Gamma_{1},\Gamma_{2},\Lambda,W_{{\mathbf{y}}|{\mathbf{x}},{\mathbf{s}}}\right)$ consists of 1. 1. two encoders $\operatorname{Enc}_{1}\colon[M_{1}]\to{\mathcal{X}}_{1}^{n}$ and $\operatorname{Enc}_{2}\colon[M_{2}]\to{\mathcal{X}}_{2}^{n}$ for the first and the second users which map $m^{1}\in[M_{1}]$ and $m^{2}\in[M_{2}]$ to $\operatorname{Enc}_{1}(m^{1})={\underline{x}}^{1}_{m^{1}}$ and $\operatorname{Enc}_{2}(m^{2})={\underline{x}}^{2}_{m^{2}}$ respectively; and 2. 2. a decoder $\operatorname{Dec}\colon{\mathcal{Y}}^{n}\to[M_{1}]\times[M_{2}]$ that maps ${\underline{y}}$ to $\operatorname{Dec}({\underline{y}})=\left(\widehat{m}^{1},\widehat{m}^{2}\right)$. We call the images of $\operatorname{Enc}_{1}$ and $\operatorname{Enc}_{2}$ a _codebook pair_ (or simply a _code pair_ , overloading the terminology), denoted, with a slight abuse of notation, by $({\mathcal{C}}_{1},{\mathcal{C}}_{2})\in{\mathcal{X}}_{1}^{M_{1}\times n}\times{\mathcal{X}}_{2}^{M_{2}\times n}$. The length $n$ of each codeword is called the _blocklength_. The _rate pair_ of $({\mathcal{C}}_{1},{\mathcal{C}}_{2})$ is defined as $R_{1}=R({\mathcal{C}}_{1})\coloneqq\frac{\log M_{1}}{n\log\left|{\mathcal{X}}_{1}\right|}$ and $R_{2}=R({\mathcal{C}}_{2})\coloneqq\frac{\log M_{2}}{n\log\left|{\mathcal{X}}_{2}\right|}$. 
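The normalization by $n\log\left|{\mathcal{X}}_{i}\right|$ in the rate definition of Definition 5 puts each rate in $[0,1]$, with $R_{i}=1$ meaning the transmitter exhausts its alphabet. A toy sketch (our own illustrative codebooks, not codes for any particular channel in this paper):

```python
from math import log2
from itertools import product

def rate(M, n, alphabet_size):
    # normalized rate from Definition 5: R = log M / (n log |X|)
    return log2(M) / (n * log2(alphabet_size))

n = 4
C1 = [(0, 0, 0, 0), (0, 1, 0, 1), (1, 0, 1, 0), (1, 1, 1, 1)]  # M1 = 4 over {0,1}
C2 = [p + p for p in product(range(3), repeat=2)]              # M2 = 9 over {0,1,2}

assert rate(len(C1), n, 2) == 0.5                 # log 4 / (4 log 2)
assert abs(rate(len(C2), n, 3) - 0.5) < 1e-12     # log 9 / (4 log 3)
```

Note that with this convention the two rates are comparable even though the two users have different alphabet sizes: both toy codebooks above use half of their respective alphabet capacities.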
We assume that the code pair $({\mathcal{C}}_{1},{\mathcal{C}}_{2})$ is known to $\operatorname{Enc}_{1},\operatorname{Enc}_{2},\operatorname{Jam}$ (see Definition 6 below) and is fixed before communication is instantiated. ###### Remark 3. When we talk about “a” code (pair), we always mean an infinite sequence of codes of increasing blocklengths, i.e., $\left\\{\left({\mathcal{C}}_{1}^{(i)},{\mathcal{C}}_{2}^{(i)}\right)\right\\}_{i\geq 1}$ each of blocklength $n_{i}$ where $n_{1}<n_{2}<\cdots\in{\mathbb{Z}}_{\geq 1}$. ###### Definition 6 (Maximum probability of error). A code pair $({\mathcal{C}}_{1},{\mathcal{C}}_{2})\in{\mathcal{X}}_{1}^{M_{1}\times n}\times{\mathcal{X}}_{2}^{M_{2}\times n}$ (equipped with encoders $\operatorname{Enc}_{1},\operatorname{Enc}_{2}$ and a decoder $\operatorname{Dec}$) is said to attain _maximum probability of error $\varepsilon$_ for an omniscient adversarial MAC $\mathsf{MAC}_{2}=\left({\mathcal{X}}_{1},{\mathcal{X}}_{2},{\mathcal{S}},{\mathcal{Y}},\Gamma_{1},\Gamma_{2},\Lambda,W_{{\mathbf{y}}|{\mathbf{x}},{\mathbf{s}}}\right)$ if $\displaystyle\max_{(m^{1},m^{2})\in[M_{1}]\times[M_{2}]}\max_{\begin{subarray}{c}\operatorname{Jam}(\operatorname{Enc}_{1}(m^{1}),\operatorname{Enc}_{2}(m^{2}))\in{\mathcal{S}}^{n}\\\ \tau_{\operatorname{Jam}(\operatorname{Enc}_{1}(m^{1}),\operatorname{Enc}_{2}(m^{2}))}\in\Lambda\end{subarray}}{\mathop{\Pr}_{{\underline{\mathbf{y}}}\sim W_{{\mathbf{y}}|{\mathbf{x}}^{1},{\mathbf{x}}^{2},{\mathbf{s}}}^{\otimes n}\left(\cdot|\operatorname{Enc}_{1}\left(m^{1}\right),\operatorname{Enc}_{2}\left(m^{2}\right),\operatorname{Jam}(\operatorname{Enc}_{1}(m^{1}),\operatorname{Enc}_{2}(m^{2}))\right)}\left[\operatorname{Dec}\left({\underline{\mathbf{y}}}\right)\neq\left(m^{1},m^{2}\right)\right]}$ $\displaystyle=$ $\displaystyle\max_{(m^{1},m^{2})\in[M_{1}]\times[M_{2}]}\max_{\begin{subarray}{c}\operatorname{Jam}(\operatorname{Enc}_{1}(m^{1}),\operatorname{Enc}_{2}(m^{2}))\in{\mathcal{S}}^{n}\\\ 
\tau_{\operatorname{Jam}(\operatorname{Enc}_{1}(m^{1}),\operatorname{Enc}_{2}(m^{2}))}\in\Lambda\end{subarray}}\sum_{{\underline{y}}\in{\mathcal{Y}}^{n}:\operatorname{Dec}({\underline{y}})\neq(m^{1},m^{2})}W_{{\mathbf{y}}|{\mathbf{x}}^{1},{\mathbf{x}}^{2},{\mathbf{s}}}^{\otimes n}\left({\underline{y}}|\operatorname{Enc}_{1}\left(m^{1}\right),\operatorname{Enc}_{2}\left(m^{2}\right),\operatorname{Jam}(\operatorname{Enc}_{1}(m^{1}),\operatorname{Enc}_{2}(m^{2}))\right)$ $\displaystyle\leq$ $\displaystyle\varepsilon.$ (3) The second maximization is over all legitimate jamming functions $\operatorname{Jam}\colon{\mathcal{X}}_{1}^{n}\times{\mathcal{X}}_{2}^{n}\to{\mathcal{S}}^{n}$ such that $\tau_{\operatorname{Jam}(\operatorname{Enc}_{1}(m^{1}),\operatorname{Enc}_{2}(m^{2}))}\in\Lambda$. ###### Remark 4. We emphasize that this paper is focused on the maximum probability of error as defined in Definition 6. One can instead place different bounds on the constituent error probabilities [TK13] $\displaystyle\max_{(m^{1},m^{2})\in[M_{1}]\times[M_{2}]}\max_{{\underline{s}}:\tau_{\underline{s}}\in\Lambda}\Pr\left[\left\\{\widehat{\mathbf{m}}^{1}\neq m^{1}\right\\}\cup\left\\{\widehat{\mathbf{m}}^{2}\neq m^{2}\right\\}\right],$ $\displaystyle\max_{(m^{1},m^{2})\in[M_{1}]\times[M_{2}]}\max_{{\underline{s}}:\tau_{\underline{s}}\in\Lambda}\Pr\left[{\widehat{\mathbf{m}}^{1}\neq m^{1}}\right],$ $\displaystyle\max_{(m^{1},m^{2})\in[M_{1}]\times[M_{2}]}\max_{{\underline{s}}:\tau_{\underline{s}}\in\Lambda}\Pr\left[{\widehat{\mathbf{m}}^{2}\neq m^{2}}\right].$ This may create wacky behaviours of the capacity region [ZVJ20] and is a more challenging question. ###### Definition 7 (Achievable rate pairs and capacity region). 
A rate pair $(R_{1},R_{2})$ is said to be _achievable_ for an omniscient adversarial MAC $\mathsf{MAC}_{2}$ under the maximum error criterion if there exists a code ${({\mathcal{C}}_{1},{\mathcal{C}}_{2})}$ for $\mathsf{MAC}_{2}$ of rates $R({\mathcal{C}}_{1})\geq R_{1}$ and $R({\mathcal{C}}_{2})\geq R_{2}$ with $o(1)$ maximum probability of error. The closure of all achievable rate pairs is called the _capacity region_ of $\mathsf{MAC}_{2}$. ###### Definition 8 (Constant composition codes). A code ${\mathcal{C}}\subseteq{\mathcal{X}}^{n}$ is called $P$-constant composition for some distribution $P\in\Delta({\mathcal{X}})$ if all codewords in ${\mathcal{C}}$ have type $P$. A simple application of Markov’s inequality and Lemma 8 yields the following reduction from general codes to constant composition codes. ###### Lemma 10 (Constant composition reduction). For any code ${\mathcal{C}}\subseteq{\mathcal{X}}^{n}$, there exists a constant composition subcode ${\mathcal{C}}^{\prime}\subseteq{\mathcal{C}}$ of size at least $|{\mathcal{C}}|/(n+\left|{\mathcal{X}}\right|-1)^{\left|{\mathcal{X}}\right|-1}$. In particular, $R({\mathcal{C}}^{\prime})$ is the same as $R({\mathcal{C}})$ (asymptotically in $n$). Lemma 10 shows that for the purpose of understanding the capacity (region), it suffices to study constant composition codes. Throughout this paper, we focus on constant composition code pairs by fixing two feasible input distributions $(P_{1},P_{2})\in\Gamma_{1}\times\Gamma_{2}$. ### VII-B Additional technical assumptions For technical reasons, we make further assumptions on the model considered throughout this paper. 1. 1. All alphabets ${\mathcal{X}}_{1},{\mathcal{X}}_{2},{\mathcal{S}},{\mathcal{Y}}$ are finite. In particular, our proof will heavily rely on the assumption of the finiteness of ${\mathcal{X}}_{1}$ and ${\mathcal{X}}_{2}$. 
It is unclear how to extend our results to the large alphabet regime, e.g., the case where $\left|{\mathcal{X}}_{1}\right|,\left|{\mathcal{X}}_{2}\right|$ are increasing in $n$. In fact, we believe that the behaviour of adversarial MACs is considerably different when the alphabet sizes are sufficiently large. See Item 11 in Section XVI. 2. 2. In this work we only focus on _state deterministic_ channels, i.e., channels for which $W_{{\mathbf{y}}|{\mathbf{x}}^{1},{\mathbf{x}}^{2},{\mathbf{s}}}$ is a zero-one law. Alternatively, the channel transition law can be written as a (deterministic) function $W\colon{\mathcal{X}}_{1}\times{\mathcal{X}}_{2}\times{\mathcal{S}}\to{\mathcal{Y}}$ such that $y=W(x^{1},x^{2},s)$. 3. 3. To avoid peculiar behaviours, we assume that $\Gamma_{1},\Gamma_{2},\Lambda$ are all convex sets. 4. 4. We do not assume the availability of common randomness between the encoders and the decoder (while kept secret from the jammer). In the AVC literature, the capacity in the presence of shared randomness is known as the _random code capacity_ [Ahl78, CN88a]. 5. 5. No party in the system is allowed to use private randomness. That is, the encoding/jamming/decoding functions are all deterministic. In the case of point-to-point omniscient adversarial channels [WBBJ19], there are reductions showing that the capacity remains the same under stochastic/deterministic encoding/jamming/decoding. Furthermore, average error criterion is equivalent to maximum error criterion which is further equivalent to zero error criterion when the channel is deterministic. Therefore, the omniscient point-to-point channel problem is combinatorial in nature. However, for our model of omniscient MACs, as alluded to in Remark 1, we expect neither the equivalence between stochastic and deterministic encoding nor the equivalence between average/maximum probability of error. 
For simplicity, we choose to work with deterministic encoding/jamming/decoding and maximum/zero error criterion in this paper. The average probability of error counterpart is left for future study (see Item 1 in Section XVI). Under the above assumptions of deterministic encoding/jamming/decoding/channel law and maximum error criterion, the probability in Equation 3 is either zero or one. Therefore, vanishing maximum probability of error implies zero error. This enforces a combinatorial nature of the problem at hand. Our results serve as a first step towards understanding omniscient adversarial MACs. ## VIII Warmup example: binary noisy $\operatorname{\mathsf{XOR}}$ MAC In this section, we study a warmup example, the binary noisy $\operatorname{\mathsf{XOR}}$ MAC, defined as follows. ###### Definition 9 (Binary noisy $\operatorname{\mathsf{XOR}}$ MAC). A two-user binary noisy $\operatorname{\mathsf{XOR}}$ MAC $\mathsf{XOR}\text{-}\mathsf{MAC}_{2}(p)$ takes as input two binary transmissions $({\underline{x}}^{1},{\underline{x}}^{2})\in\left(\\{0,1\\}^{n}\right)^{2}$ and a binary noise sequence ${\underline{s}}\in\\{0,1\\}^{n}$ of (relative) Hamming weight at most $p$, and outputs ${\underline{y}}={\underline{x}}^{1}\oplus{\underline{x}}^{2}\oplus{\underline{s}}$ where the addition is modulo two. The following theorem generalizes the classical Plotkin bound in coding theory to the multiuser setting. ###### Theorem 11. If $p>1/4$, then there exists no achievable rate pair $(R_{1},R_{2})$ with $R_{1}>0$ and $R_{2}>0$. ###### Proof. See Appendix B. ∎ ## IX Confusability sets and their properties In this section, we introduce one of the core definitions of this paper: the _confusability sets_ associated to an adversarial MAC. They are the sets of _bad_ distributions that any good code should avoid. As the name suggests, they precisely characterize the “confusability” of a given channel. 
In fact, they determine the capacity region of the channel and therefore are arguably the most important statistics associated to the channel. We also prove some properties of confusability sets. We first present an obvious-looking claim which relates the zero error criterion with _operational non-confusability_. ###### Claim 12 (Equivalence between zero error and operational non-confusability). Let $\mathsf{MAC}_{2}=\left({\mathcal{X}}_{1},{\mathcal{X}}_{2},{\mathcal{S}},{\mathcal{Y}},\Gamma_{1},\Gamma_{2},\Lambda,W_{{\mathbf{y}}|{\mathbf{x}},{\mathbf{s}}}\right)$ be a two-user omniscient adversarial MAC. A code pair $({\mathcal{C}}_{1},{\mathcal{C}}_{2})\in{\mathcal{X}}_{1}^{M_{1}\times n}\times{\mathcal{X}}_{2}^{M_{2}\times n}$ attains zero error for $\mathsf{MAC}_{2}$ if and only if all of the following conditions (which we call _operational non-confusability_ conditions) are satisfied: 1. 1. for all $1\leq i_{1}\neq i_{2}\leq M_{1}$ and $1\leq j_{1}\neq j_{2}\leq M_{2}$, there do not exist ${\underline{s}}^{1},{\underline{s}}^{2}\in{\mathcal{S}}^{n}$ with $\tau_{{\underline{s}}^{1}},\tau_{{\underline{s}}^{2}}\in\Lambda$ such that $W({\underline{x}}^{1}_{i_{1}},{\underline{x}}^{2}_{j_{1}},{\underline{s}}^{1})=W({\underline{x}}^{1}_{i_{2}},{\underline{x}}^{2}_{j_{2}},{\underline{s}}^{2})$; in this case we say that $({\underline{x}}^{1}_{i_{1}},{\underline{x}}^{2}_{j_{1}})$ and $({\underline{x}}^{1}_{i_{2}},{\underline{x}}^{2}_{j_{2}})$ are _non-confusable_; 2. 2. 
for all $1\leq i_{1}\neq i_{2}\leq M_{1}$ and $1\leq j\leq M_{2}$, there do not exist ${\underline{s}}^{1},{\underline{s}}^{2}\in{\mathcal{S}}^{n}$ with $\tau_{{\underline{s}}^{1}},\tau_{{\underline{s}}^{2}}\in\Lambda$ such that $W({\underline{x}}^{1}_{i_{1}},{\underline{x}}^{2}_{j},{\underline{s}}^{1})=W({\underline{x}}^{1}_{i_{2}},{\underline{x}}^{2}_{j},{\underline{s}}^{2})$; in this case we say that $({\underline{x}}^{1}_{i_{1}},{\underline{x}}^{2}_{j})$ and $({\underline{x}}^{1}_{i_{2}},{\underline{x}}^{2}_{j})$ are _non-confusable_ ; 3. 3. for all $1\leq i\leq M_{1}$ and $1\leq j_{1}\neq j_{2}\leq M_{2}$, there do not exist ${\underline{s}}^{1},{\underline{s}}^{2}\in{\mathcal{S}}^{n}$ with $\tau_{{\underline{s}}^{1}},\tau_{{\underline{s}}^{2}}\in\Lambda$ such that $W({\underline{x}}^{1}_{i},{\underline{x}}^{2}_{j_{1}},{\underline{s}}^{1})=W({\underline{x}}^{1}_{i},{\underline{x}}^{2}_{j_{2}},{\underline{s}}^{2})$; in this case we say that $({\underline{x}}^{1}_{i},{\underline{x}}^{2}_{j_{1}})$ and $({\underline{x}}^{1}_{i},{\underline{x}}^{2}_{j_{2}})$ are _non-confusable_. ###### Proof. Intuitively, a violation of the zero error criterion must be the case where a received vector ${\underline{y}}$ can be explained by (at least) two distinct pairs of codewords via admissible jamming vectors. In this case, the decoder is confused by (at least) two candidate pairs of codewords and is forced to make a decoding error with nonzero probability. Formally, the claim follows from the following simple arguments. We first prove the contrapositive of the direct part. If $({\mathcal{C}}_{1},{\mathcal{C}}_{2})$ has nonzero error, then there must exist a pair of codewords $({\underline{x}}^{1},{\underline{x}}^{2})\in({\mathcal{C}}_{1},{\mathcal{C}}_{2})$ which leads to a decoding error. In particular, at least one of ${\underline{x}}^{1}$ and ${\underline{x}}^{2}$ cannot be correctly decoded. Then at least one of Items 1, 2 and 3 must be satisfied. Indeed, 1. 1. 
Item 1 corresponds to the case where neither ${\underline{x}}^{1}$ nor ${\underline{x}}^{2}$ can be correctly decoded. More specifically, there must exist another pair of codewords $\widetilde{\underline{x}}^{1}\neq{\underline{x}}^{1}$ and $\widetilde{\underline{x}}^{2}\neq{\underline{x}}^{2}$ such that $W({\underline{x}}^{1},{\underline{x}}^{2},{\underline{s}})=W(\widetilde{\underline{x}}^{1},\widetilde{\underline{x}}^{2},\widetilde{\underline{s}})$ for some ${\underline{s}},\widetilde{\underline{s}}\in{\mathcal{S}}^{n}$ with $\tau_{{\underline{s}}},\tau_{\widetilde{\underline{s}}}\in\Lambda$. In this case, the decoder could not decide to output $({\underline{x}}^{1},{\underline{x}}^{2})$ or $(\widetilde{\underline{x}}^{1},\widetilde{\underline{x}}^{2})$. 2. 2. Item 2 corresponds to the case where ${\underline{x}}^{1}$ is confusable with another codeword. More specifically, there must exist another codeword $\widetilde{\underline{x}}^{1}\neq{\underline{x}}^{1}$ such that $W({\underline{x}}^{1},{\underline{x}}^{2},{\underline{s}})=W(\widetilde{\underline{x}}^{1},{\underline{x}}^{2},\widetilde{\underline{s}})$ for some ${\underline{s}},\widetilde{\underline{s}}\in{\mathcal{S}}^{n}$ with $\tau_{{\underline{s}}},\tau_{\widetilde{\underline{s}}}\in\Lambda$. In this case, the decoder could not decide to output $({\underline{x}}^{1},{\underline{x}}^{2})$ or $(\widetilde{\underline{x}}^{1},{\underline{x}}^{2})$. 3. 3. Item 3 corresponds to the case where ${\underline{x}}^{2}$ is confusable with another codeword. More specifically, there must exist another codeword $\widetilde{\underline{x}}^{2}\neq{\underline{x}}^{2}$ such that $W({\underline{x}}^{1},{\underline{x}}^{2},{\underline{s}})=W({\underline{x}}^{1},\widetilde{\underline{x}}^{2},\widetilde{\underline{s}})$ for some ${\underline{s}},\widetilde{\underline{s}}\in{\mathcal{S}}^{n}$ with $\tau_{{\underline{s}}},\tau_{\widetilde{\underline{s}}}\in\Lambda$. 
In this case, the decoder could not decide to output $({\underline{x}}^{1},{\underline{x}}^{2})$ or $({\underline{x}}^{1},\widetilde{\underline{x}}^{2})$. The converse part is straightforward. If a code pair $({\mathcal{C}}_{1},{\mathcal{C}}_{2})$ attains zero error, then none of Items 1, 2 and 3 is satisfied. Otherwise, (at least) one of Items 1, 2 and 3 above holds which results in a decoding error, violating the zero-error assumption. ∎ ###### Claim 13 (Permutation invariance of operational (non-)confusability). If two pairs of codewords $({\underline{x}}^{1},{\underline{x}}^{2})$ and $(\widetilde{\underline{x}}^{1},\widetilde{\underline{x}}^{2})$ (resp. $(\widetilde{\underline{x}}^{1},{\underline{x}}^{2})$ or $({\underline{x}}^{1},\widetilde{\underline{x}}^{2})$) are confusable/non- confusable (in the sense of 12), then any other pairs $({\underline{x}}^{1}_{*},{\underline{x}}^{2}_{*})$ and $(\widetilde{\underline{x}}^{1}_{*},\widetilde{\underline{x}}^{2}_{*})$ (resp. $(\widetilde{\underline{x}}^{1}_{*},{\underline{x}}^{2}_{*})$ or $({\underline{x}}^{1}_{*},\widetilde{\underline{x}}^{2}_{*})$) of the same joint type $\tau_{{\underline{x}}^{1}_{*},\widetilde{\underline{x}}^{1}_{*},{\underline{x}}^{2}_{*},\widetilde{\underline{x}}^{2}_{*}}=\tau_{{\underline{x}}^{1},\widetilde{\underline{x}}^{1},{\underline{x}}^{2},\widetilde{\underline{x}}^{2}}$ (resp. $\tau_{{\underline{x}}^{1}_{*},\widetilde{\underline{x}}^{1}_{*},{\underline{x}}^{2}_{*}}=\tau_{{\underline{x}}^{1},\widetilde{\underline{x}}^{1},{\underline{x}}^{2}}$ or $\tau_{{\underline{x}}^{1}_{*},{\underline{x}}^{2}_{*},\widetilde{\underline{x}}^{2}_{*}}=\tau_{{\underline{x}}^{1},{\underline{x}}^{2},\widetilde{\underline{x}}^{2}}$) are also confusable/non-confusable. ###### Proof. Since the channel is component-wise and memoryless, the confusability conditions (Items 1, 2 and 3 in 12) are invariant under coordinate permutations. 
That is, $({\underline{x}}^{1},{\underline{x}}^{2})$ is confusable with $(\widetilde{\underline{x}}^{1},\widetilde{\underline{x}}^{2})$ (resp. $(\widetilde{\underline{x}}^{1},{\underline{x}}^{2})$ or $({\underline{x}}^{1},\widetilde{\underline{x}}^{2})$) if and only if $(\pi({\underline{x}}^{1}),\pi({\underline{x}}^{2}))$ is confusable with $(\pi(\widetilde{\underline{x}}^{1}),\pi(\widetilde{\underline{x}}^{2}))$ (resp. $(\pi(\widetilde{\underline{x}}^{1}),\pi({\underline{x}}^{2}))$ or $(\pi({\underline{x}}^{1}),\pi(\widetilde{\underline{x}}^{2}))$) for any $\pi\in S_{n}$. Here for a vector ${\underline{v}}=({\underline{v}}(1),\cdots,{\underline{v}}(n))\in{\mathcal{V}}^{n}$, we use the notation $\pi({\underline{v}})\coloneqq({\underline{v}}(\pi(1)),\cdots,{\underline{v}}(\pi(n)))$. Indeed, one simply takes $\pi({\underline{s}}),\pi(\widetilde{\underline{s}})$, whose types satisfy $\tau_{\pi({\underline{s}})}=\tau_{{\underline{s}}}\in\Lambda$ and $\tau_{\pi(\widetilde{\underline{s}})}=\tau_{\widetilde{\underline{s}}}\in\Lambda$. Then for any $j\in[n]$, $\displaystyle W(\pi({\underline{x}}^{1}),\pi({\underline{x}}^{2}),\pi({\underline{s}}))(j)=$ $\displaystyle W(\pi({\underline{x}}^{1})(j),\pi({\underline{x}}^{2})(j),\pi({\underline{s}})(j))$ (4) $\displaystyle=$ $\displaystyle W({\underline{x}}^{1}(\pi(j)),{\underline{x}}^{2}(\pi(j)),{\underline{s}}(\pi(j)))$ $\displaystyle=$ $\displaystyle W({\underline{x}}^{1},{\underline{x}}^{2},{\underline{s}})(\pi(j))$ $\displaystyle=$ $\displaystyle\pi(W({\underline{x}}^{1},{\underline{x}}^{2},{\underline{s}}))(j).$ Equation 4 holds because the channel acts on the inputs component-wise. That is, $W(\pi({\underline{x}}^{1}),\pi({\underline{x}}^{2}),\pi({\underline{s}}))=\pi(W({\underline{x}}^{1},{\underline{x}}^{2},{\underline{s}}))$. 
Similarly, $W(\pi(\widetilde{\underline{x}}^{1}),\pi(\widetilde{\underline{x}}^{2}),\pi(\widetilde{\underline{s}}))=\pi(W(\widetilde{\underline{x}}^{1},\widetilde{\underline{x}}^{2},\widetilde{\underline{s}}))$ (resp. $W(\pi(\widetilde{\underline{x}}^{1}),\pi({\underline{x}}^{2}),\pi(\widetilde{\underline{s}}))=\pi(W(\widetilde{\underline{x}}^{1},{\underline{x}}^{2},\widetilde{\underline{s}}))$ or $W(\pi({\underline{x}}^{1}),\pi(\widetilde{\underline{x}}^{2}),\pi(\widetilde{\underline{s}}))=\pi(W({\underline{x}}^{1},\widetilde{\underline{x}}^{2},\widetilde{\underline{s}}))$). Since $W({\underline{x}}^{1},{\underline{x}}^{2},{\underline{s}})=W(\widetilde{\underline{x}}^{1},\widetilde{\underline{x}}^{2},\widetilde{\underline{s}})$ (resp. $W({\underline{x}}^{1},{\underline{x}}^{2},{\underline{s}})=W(\widetilde{\underline{x}}^{1},{\underline{x}}^{2},\widetilde{\underline{s}})$ or $W({\underline{x}}^{1},{\underline{x}}^{2},{\underline{s}})=W({\underline{x}}^{1},\widetilde{\underline{x}}^{2},\widetilde{\underline{s}})$) and $\pi$ is bijective, we have $W(\pi({\underline{x}}^{1}),\pi({\underline{x}}^{2}),\pi({\underline{s}}))=W(\pi(\widetilde{\underline{x}}^{1}),\pi(\widetilde{\underline{x}}^{2}),\pi(\widetilde{\underline{s}}))$ (resp. $W(\pi({\underline{x}}^{1}),\pi({\underline{x}}^{2}),\pi({\underline{s}}))=W(\pi(\widetilde{\underline{x}}^{1}),\pi({\underline{x}}^{2}),\pi(\widetilde{\underline{s}}))$ or $W(\pi({\underline{x}}^{1}),\pi({\underline{x}}^{2}),\pi({\underline{s}}))=W(\pi({\underline{x}}^{1}),\pi(\widetilde{\underline{x}}^{2}),\pi(\widetilde{\underline{s}}))$). Finally, permutation invariance of confusability follows from the observation that all vectors of the same type can be obtained by properly permuting the coordinates. Since permutations are bijections, non-confusability is also invariant under coordinate permutation. ∎ We are ready to give the definition of confusability sets. 
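The permutation argument in the proof of 13 is easy to sanity-check numerically. The sketch below (Python; the helper names `W`, `permute`, and `type_of` are ours) uses the binary $\operatorname{\mathsf{XOR}}$ law of Definition 9 as a concrete stand-in for a generic deterministic component-wise channel, and verifies Equation 4 together with the fact that coordinate permutations preserve types.

```python
import itertools
import random
from collections import Counter

def W(x1, x2, s):
    """A deterministic, component-wise channel law. As a concrete stand-in
    we use the binary XOR law of Definition 9: y(j) = x1(j) ^ x2(j) ^ s(j)."""
    return tuple(a ^ b ^ c for a, b, c in zip(x1, x2, s))

def permute(v, pi):
    """pi(v)(j) := v(pi(j)), the coordinate permutation used in Claim 13."""
    return tuple(v[pi[j]] for j in range(len(v)))

def type_of(v):
    """The type (empirical distribution) of a vector."""
    n = len(v)
    return {a: c / n for a, c in Counter(v).items()}

n = 4
rng = random.Random(0)
for _ in range(20):
    x1 = tuple(rng.randint(0, 1) for _ in range(n))
    x2 = tuple(rng.randint(0, 1) for _ in range(n))
    s = tuple(rng.randint(0, 1) for _ in range(n))
    for pi in itertools.permutations(range(n)):
        # Equation 4: W(pi(x1), pi(x2), pi(s)) = pi(W(x1, x2, s)).
        assert W(permute(x1, pi), permute(x2, pi), permute(s, pi)) == \
            permute(W(x1, x2, s), pi)
        # Permuting preserves types, so tau_{pi(s)} is in Lambda whenever tau_s is.
        assert type_of(permute(s, pi)) == type_of(s)
```

The exhaustive loop over $S_{4}$ mirrors the "for any $\pi\in S_{n}$" quantifier in the proof; any other deterministic component-wise `W` would pass the same check.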
Before doing so, we first define _self-couplings_ as distributions with prescribed marginals in accordance with the use of constant composition code pairs. ###### Definition 10 (Self-couplings). $\displaystyle{\mathcal{J}}_{1,2}\left(P_{1},P_{2}\right)\coloneqq$ $\displaystyle\left\\{P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}\in\Delta({\mathcal{X}}_{1}^{2}\times{\mathcal{X}}_{2}^{2})\colon\begin{array}[]{l}\left[P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}\right]_{{\mathbf{x}}^{1}_{1}}=\left[P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}\right]_{{\mathbf{x}}^{1}_{2}}=P_{1},\\\ \left[P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}\right]_{{\mathbf{x}}^{2}_{1}}=\left[P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}\right]_{{\mathbf{x}}^{2}_{2}}=P_{2}\end{array}\right\\},$ (7) $\displaystyle{\mathcal{J}}_{1}\left(P_{1},P_{2}\right)\coloneqq$ $\displaystyle\left\\{P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}}\in\Delta({\mathcal{X}}_{1}^{2}\times{\mathcal{X}}_{2})\colon\left[P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}}\right]_{{\mathbf{x}}^{1}_{1}}=\left[P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}}\right]_{{\mathbf{x}}^{1}_{2}}=P_{1},\left[P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}}\right]_{{\mathbf{x}}^{2}}=P_{2}\right\\},$ $\displaystyle{\mathcal{J}}_{2}\left(P_{1},P_{2}\right)\coloneqq$ 
$\displaystyle\left\\{P_{{\mathbf{x}}^{1},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}\in\Delta({\mathcal{X}}_{1}\times{\mathcal{X}}_{2}^{2})\colon\left[P_{{\mathbf{x}}^{1},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}\right]_{{\mathbf{x}}^{1}}=P_{1},\left[P_{{\mathbf{x}}^{1},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}\right]_{{\mathbf{x}}^{2}_{1}}=\left[P_{{\mathbf{x}}^{1},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}\right]_{{\mathbf{x}}^{2}_{2}}=P_{2}\right\\}.$ The previous two claims (12, 13) motivate us to make the following definition of _confusability sets_. One should think of the conditions in the definition below as the distributional version of operational confusability in 12. ###### Definition 11 (Confusability sets). Let $\mathsf{MAC}_{2}=\left({\mathcal{X}}_{1},{\mathcal{X}}_{2},{\mathcal{S}},{\mathcal{Y}},\Gamma_{1},\Gamma_{2},\Lambda,W_{{\mathbf{y}}|{\mathbf{x}},{\mathbf{s}}}\right)$ be a 2-user adversarial MAC. Let $P_{1}\in\Delta({\mathcal{X}}_{1})$ and $P_{2}\in\Delta({\mathcal{X}}_{2})$. The _joint confusability set_ ${\mathcal{K}}_{{1,2}}\left(P_{1},P_{2}\right)$, the _first marginal confusability set_ ${\mathcal{K}}_{{1}}\left(P_{1},P_{2}\right)$ and the _second marginal confusability set_ ${\mathcal{K}}_{{2}}\left(P_{1},P_{2}\right)$ of $\mathsf{MAC}_{2}$ w.r.t. 
input distributions $P_{1}$ and $P_{2}$ are defined as follows: $\displaystyle{\mathcal{K}}_{{1,2}}\left(P_{1},P_{2}\right)\coloneqq$ $\displaystyle\left\\{\begin{array}[]{rl}&P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}\in{\mathcal{J}}_{1,2}\left(P_{1},P_{2}\right)\colon\\\ \exists&P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2},{\mathbf{s}}_{1},{\mathbf{s}}_{2},{\mathbf{y}}}\in\Delta\left({\mathcal{X}}_{1}^{2}\times{\mathcal{X}}_{2}^{2}\times{\mathcal{S}}^{2}\times{\mathcal{Y}}\right)\mathrm{\ s.t.}\\\ &\left[P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2},{\mathbf{s}}_{1},{\mathbf{s}}_{2},{\mathbf{y}}}\right]_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}=P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}};\\\ \forall&\left(x^{1}_{1},x^{1}_{2},x^{2}_{1},x^{2}_{2},s_{1},s_{2},y\right)\in{\mathcal{X}}_{1}^{2}\times{\mathcal{X}}_{2}^{2}\times{\mathcal{S}}^{2}\times{\mathcal{Y}},\\\ &P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2},{\mathbf{s}}_{1},{\mathbf{s}}_{2},{\mathbf{y}}}\left(x^{1}_{1},x^{1}_{2},x^{2}_{1},x^{2}_{2},s_{1},s_{2},y\right)\\\ =&P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}\left(x^{1}_{1},x^{1}_{2},x^{2}_{1},x^{2}_{2}\right)P_{{\mathbf{s}}_{1},{\mathbf{s}}_{2}|{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}\left(s_{1},s_{2}|x^{1}_{1},x^{1}_{2},x^{2}_{1},x^{2}_{2}\right)W_{{\mathbf{y}}|{\mathbf{x}}^{1},{\mathbf{x}}^{2},{\mathbf{s}}}\left(y|x^{1}_{1},x^{2}_{1},s_{1}\right)\\\ 
=&P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}\left(x^{1}_{1},x^{1}_{2},x^{2}_{1},x^{2}_{2}\right)P_{{\mathbf{s}}_{1},{\mathbf{s}}_{2}|{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}\left(s_{1},s_{2}|x^{1}_{1},x^{1}_{2},x^{2}_{1},x^{2}_{2}\right)W_{{\mathbf{y}}|{\mathbf{x}}^{1},{\mathbf{x}}^{2},{\mathbf{s}}}\left(y|x^{1}_{2},x^{2}_{2},s_{2}\right)\end{array}\right\\},$ (15) $\displaystyle{\mathcal{K}}_{{1}}\left(P_{1},P_{2}\right)\coloneqq$ $\displaystyle\left\\{\begin{array}[]{rl}&P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}}\in{\mathcal{J}}_{1}\left(P_{1},P_{2}\right)\colon\\\ \exists&P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2},{\mathbf{s}}_{1},{\mathbf{s}}_{2},{\mathbf{y}}}\in\Delta\left({\mathcal{X}}_{1}^{2}\times{\mathcal{X}}_{2}\times{\mathcal{S}}^{2}\times{\mathcal{Y}}\right)\mathrm{\ s.t.}\\\ &\left[P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2},{\mathbf{s}}_{1},{\mathbf{s}}_{2},{\mathbf{y}}}\right]_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}}=P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}};\\\ \forall&\left(x^{1}_{1},x^{1}_{2},x^{2},s_{1},s_{2},y\right)\in{\mathcal{X}}_{1}^{2}\times{\mathcal{X}}_{2}\times{\mathcal{S}}^{2}\times{\mathcal{Y}},\\\ &P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2},{\mathbf{s}}_{1},{\mathbf{s}}_{2},{\mathbf{y}}}\left(x^{1}_{1},x^{1}_{2},x^{2},s_{1},s_{2},y\right)\\\ =&P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}}\left(x^{1}_{1},x^{1}_{2},x^{2}\right)P_{{\mathbf{s}}_{1},{\mathbf{s}}_{2}|{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}}\left(s_{1},s_{2}|x^{1}_{1},x^{1}_{2},x^{2}\right)W_{{\mathbf{y}}|{\mathbf{x}}^{1},{\mathbf{x}}^{2},{\mathbf{s}}}\left(y|x^{1}_{1},x^{2},s_{1}\right)\\\ 
=&P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}}(x^{1}_{1},x^{1}_{2},x^{2})P_{{\mathbf{s}}_{1},{\mathbf{s}}_{2}|{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}}\left(s_{1},s_{2}|x^{1}_{1},x^{1}_{2},x^{2}\right)W_{{\mathbf{y}}|{\mathbf{x}}^{1},{\mathbf{x}}^{2},{\mathbf{s}}}\left(y|x^{1}_{2},x^{2},s_{2}\right)\end{array}\right\\},$ (23) $\displaystyle{\mathcal{K}}_{{2}}\left(P_{1},P_{2}\right)\coloneqq$ $\displaystyle\left\\{\begin{array}[]{rl}&P_{{\mathbf{x}}^{1},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}\in{\mathcal{J}}_{2}\left(P_{1},P_{2}\right)\colon\\\ \exists&P_{{\mathbf{x}}^{1},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2},{\mathbf{s}}_{1},{\mathbf{s}}_{2},{\mathbf{y}}}\in\Delta\left({\mathcal{X}}_{1}\times{\mathcal{X}}_{2}^{2}\times{\mathcal{S}}^{2}\times{\mathcal{Y}}\right)\mathrm{\ s.t.}\\\ &\left[P_{{\mathbf{x}}^{1},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2},{\mathbf{s}}_{1},{\mathbf{s}}_{2},{\mathbf{y}}}\right]_{{\mathbf{x}}^{1},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}=P_{{\mathbf{x}}^{1},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}};\\\ \forall&\left(x^{1},x^{2}_{1},x^{2}_{2},s_{1},s_{2},y\right)\in{\mathcal{X}}_{1}\times{\mathcal{X}}_{2}^{2}\times{\mathcal{S}}^{2}\times{\mathcal{Y}},\\\ &P_{{\mathbf{x}}^{1},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2},{\mathbf{s}}_{1},{\mathbf{s}}_{2},{\mathbf{y}}}\left(x^{1},x^{2}_{1},x^{2}_{2},s_{1},s_{2},y\right)\\\ =&P_{{\mathbf{x}}^{1},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}\left(x^{1},x^{2}_{1},x^{2}_{2}\right)P_{{\mathbf{s}}_{1},{\mathbf{s}}_{2}|{\mathbf{x}}^{1},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}\left(s_{1},s_{2}|x^{1},x^{2}_{1},x^{2}_{2}\right)W_{{\mathbf{y}}|{\mathbf{x}}^{1},{\mathbf{x}}^{2},{\mathbf{s}}}\left(y|x^{1},x^{2}_{1},s_{1}\right)\\\ 
=&P_{{\mathbf{x}}^{1},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}\left(x^{1},x^{2}_{1},x^{2}_{2}\right)P_{{\mathbf{s}}_{1},{\mathbf{s}}_{2}|{\mathbf{x}}^{1},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}\left(s_{1},s_{2}|x^{1},x^{2}_{1},x^{2}_{2}\right)W_{{\mathbf{y}}|{\mathbf{x}}^{1},{\mathbf{x}}^{2},{\mathbf{s}}}\left(y|x^{1},x^{2}_{2},s_{2}\right)\end{array}\right\\}.$ (31) One should think of confusability sets as the sets of _bad_ distributions/types that any (sequence of) good codes should avoid. Indeed, one has the following claim. ###### Claim 14. Let $\mathsf{MAC}_{2}=\left({\mathcal{X}}_{1},{\mathcal{X}}_{2},{\mathcal{S}},{\mathcal{Y}},\Gamma_{1},\Gamma_{2},\Lambda,W_{{\mathbf{y}}|{\mathbf{x}},{\mathbf{s}}}\right)$ be a 2-user adversarial MAC and let $\left(P_{1},P_{2}\right)\in\Gamma_{1}\times\Gamma_{2}$ be a pair of feasible input distributions. Let $\left\\{\left({\mathcal{C}}_{1,i},{\mathcal{C}}_{2,i}\right)\right\\}_{i}\subseteq{\mathcal{X}}_{1}^{n_{i}}\times{\mathcal{X}}_{2}^{n_{i}}$ be a sequence of pairs of $P_{1}$\- and $P_{2}$-constant composition codes of increasing blocklengths $n_{i}$’s. 
Then $\left\\{\left({\mathcal{C}}_{1,i},{\mathcal{C}}_{2,i}\right)\right\\}_{i}$ achieves zero error for $\mathsf{MAC}_{2}$ if and only if for every $i$, there exist no $\left({\underline{x}}^{1}_{1},{\underline{x}}^{2}_{1}\right),\left({\underline{x}}^{1}_{2},{\underline{x}}^{2}_{2}\right)\in{\mathcal{C}}_{1,i}\times{\mathcal{C}}_{2,i}$, ${\underline{x}}^{1}\in{\mathcal{C}}_{1,i}$ and ${\underline{x}}^{2}\in{\mathcal{C}}_{2,i}$ such that at least one of the following happens: $\tau_{{\underline{x}}^{1}_{1},{\underline{x}}^{1}_{2},{\underline{x}}^{2}_{1},{\underline{x}}^{2}_{2}}\in{\mathcal{K}}_{{1,2}}\left(P_{1},P_{2}\right)$, $\tau_{{\underline{x}}^{1}_{1},{\underline{x}}^{1}_{2},{\underline{x}}^{2}}\in{\mathcal{K}}_{{1}}\left(P_{1},P_{2}\right)$, $\tau_{{\underline{x}}^{1},{\underline{x}}^{2}_{1},{\underline{x}}^{2}_{2}}\in{\mathcal{K}}_{{2}}\left(P_{1},P_{2}\right)$. ###### Proof. 13 implies that the non-confusability properties (Items 1, 2 and 3 in 12) depend only on the _type_ of vectors rather than the order of coordinates. We can therefore quotient out type classes (Definition 3) and work with types instead of vectors. (Formally, let $\sim_{\mathrm{perm}}$ be a relation on vectors defined as ${\underline{v}}\sim_{\mathrm{perm}}{\underline{v}}^{\prime}$ iff there is $\pi\in S_{n}$ such that ${\underline{v}}^{\prime}=\pi({\underline{v}})$. It is easy to check that $\sim_{\mathrm{perm}}$ is an equivalence relation. As 13 suggests, the confusability property is a _class invariant_ under $\sim_{\mathrm{perm}}$, i.e., it is invariant in each equivalence class by $\sim_{\mathrm{perm}}$. For the purpose of studying confusability, one can without loss of generality focus on equivalence classes (i.e., types) rather than vectors.) The above conditions are equivalent to 1. 1. 
for all $1\leq i_{1}\neq i_{2}\leq|{\mathcal{C}}_{1}|$ and $1\leq j_{1}\neq j_{2}\leq|{\mathcal{C}}_{2}|$, there do not exist ${\underline{s}}^{1},{\underline{s}}^{2}\in{\mathcal{S}}^{n}$ with $\tau_{{\underline{s}}^{1}},\tau_{{\underline{s}}^{2}}\in\Lambda$ and ${\underline{y}}\in{\mathcal{Y}}^{n}$ such that $\displaystyle\tau_{{\underline{x}}^{1}_{i_{1}},{\underline{x}}^{2}_{j_{1}},{\underline{x}}^{1}_{i_{2}},{\underline{x}}^{2}_{j_{2}},{\underline{s}}^{1},{\underline{s}}^{2},{\underline{y}}}(x^{1}_{1},x^{2}_{1},x^{1}_{2},x^{2}_{2},s_{1},s_{2},y)$ $\displaystyle=$ $\displaystyle\tau_{{\underline{x}}^{1}_{i_{1}},{\underline{x}}^{2}_{j_{1}},{\underline{x}}^{1}_{i_{2}},{\underline{x}}^{2}_{j_{2}}}(x^{1}_{1},x^{2}_{1},x^{1}_{2},x^{2}_{2})\tau_{{\underline{s}}^{1},{\underline{s}}^{2}|{\underline{x}}^{1}_{i_{1}},{\underline{x}}^{2}_{j_{1}},{\underline{x}}^{1}_{i_{2}},{\underline{x}}^{2}_{j_{2}}}(s_{1},s_{2}|x^{1}_{1},x^{2}_{1},x^{1}_{2},x^{2}_{2})W_{{\mathbf{y}}|{\mathbf{x}}^{1},{\mathbf{x}}^{2},{\mathbf{s}}}(y|x^{1}_{1},x^{2}_{1},s_{1})$ $\displaystyle=$ $\displaystyle\tau_{{\underline{x}}^{1}_{i_{1}},{\underline{x}}^{2}_{j_{1}},{\underline{x}}^{1}_{i_{2}},{\underline{x}}^{2}_{j_{2}}}(x^{1}_{1},x^{2}_{1},x^{1}_{2},x^{2}_{2})\tau_{{\underline{s}}^{1},{\underline{s}}^{2}|{\underline{x}}^{1}_{i_{1}},{\underline{x}}^{2}_{j_{1}},{\underline{x}}^{1}_{i_{2}},{\underline{x}}^{2}_{j_{2}}}(s_{1},s_{2}|x^{1}_{1},x^{2}_{1},x^{1}_{2},x^{2}_{2})W_{{\mathbf{y}}|{\mathbf{x}}^{1},{\mathbf{x}}^{2},{\mathbf{s}}}(y|x^{1}_{2},x^{2}_{2},s_{2})$ for all $(x^{1}_{1},x^{1}_{2},x^{2}_{1},x^{2}_{2},s_{1},s_{2},y)\in{\mathcal{X}}_{1}^{2}\times{\mathcal{X}}_{2}^{2}\times{\mathcal{S}}^{2}\times{\mathcal{Y}}$; 2. 2. 
for all $1\leq i_{1}\neq i_{2}\leq|{\mathcal{C}}_{1}|$ and $1\leq j\leq|{\mathcal{C}}_{2}|$, there do not exist ${\underline{s}}^{1},{\underline{s}}^{2}\in{\mathcal{S}}^{n}$ with $\tau_{{\underline{s}}^{1}},\tau_{{\underline{s}}^{2}}\in\Lambda$ and ${\underline{y}}\in{\mathcal{Y}}^{n}$ such that $\displaystyle\tau_{{\underline{x}}^{1}_{i_{1}},{\underline{x}}^{1}_{i_{2}},{\underline{x}}^{2}_{j},{\underline{s}}^{1},{\underline{s}}^{2},{\underline{y}}}(x^{1}_{1},x^{1}_{2},x^{2},s_{1},s_{2},y)$ $\displaystyle=$ $\displaystyle\tau_{{\underline{x}}^{1}_{i_{1}},{\underline{x}}^{1}_{i_{2}},{\underline{x}}^{2}_{j}}(x^{1}_{1},x^{1}_{2},x^{2})\tau_{{\underline{s}}^{1},{\underline{s}}^{2}|{\underline{x}}^{1}_{i_{1}},{\underline{x}}^{1}_{i_{2}},{\underline{x}}^{2}_{j}}(s_{1},s_{2}|x^{1}_{1},x^{1}_{2},x^{2})W_{{\mathbf{y}}|{\mathbf{x}}^{1},{\mathbf{x}}^{2},{\mathbf{s}}}(y|x^{1}_{1},x^{2},s_{1})$ $\displaystyle=$ $\displaystyle\tau_{{\underline{x}}^{1}_{i_{1}},{\underline{x}}^{1}_{i_{2}},{\underline{x}}^{2}_{j}}(x^{1}_{1},x^{1}_{2},x^{2})\tau_{{\underline{s}}^{1},{\underline{s}}^{2}|{\underline{x}}^{1}_{i_{1}},{\underline{x}}^{1}_{i_{2}},{\underline{x}}^{2}_{j}}(s_{1},s_{2}|x^{1}_{1},x^{1}_{2},x^{2})W_{{\mathbf{y}}|{\mathbf{x}}^{1},{\mathbf{x}}^{2},{\mathbf{s}}}(y|x^{1}_{2},x^{2},s_{2})$ for all $(x^{1}_{1},x^{1}_{2},x^{2},s_{1},s_{2},y)\in{\mathcal{X}}_{1}^{2}\times{\mathcal{X}}_{2}\times{\mathcal{S}}^{2}\times{\mathcal{Y}}$; 3. 3. 
for all $1\leq i\leq|{\mathcal{C}}_{1}|$ and $1\leq j_{1}\neq j_{2}\leq|{\mathcal{C}}_{2}|$, there do not exist ${\underline{s}}^{1},{\underline{s}}^{2}\in{\mathcal{S}}^{n}$ with $\tau_{{\underline{s}}^{1}},\tau_{{\underline{s}}^{2}}\in\Lambda$ and ${\underline{y}}\in{\mathcal{Y}}^{n}$ such that $\displaystyle\tau_{{\underline{x}}^{1}_{i},{\underline{x}}^{2}_{j_{1}},{\underline{x}}^{2}_{j_{2}},{\underline{s}}^{1},{\underline{s}}^{2},{\underline{y}}}(x^{1},x^{2}_{1},x^{2}_{2},s_{1},s_{2},y)$ $\displaystyle=$ $\displaystyle\tau_{{\underline{x}}^{1}_{i},{\underline{x}}^{2}_{j_{1}},{\underline{x}}^{2}_{j_{2}}}(x^{1},x^{2}_{1},x^{2}_{2})\tau_{{\underline{s}}^{1},{\underline{s}}^{2}|{\underline{x}}^{1}_{i},{\underline{x}}^{2}_{j_{1}},{\underline{x}}^{2}_{j_{2}}}(s_{1},s_{2}|x^{1},x^{2}_{1},x^{2}_{2})W_{{\mathbf{y}}|{\mathbf{x}}^{1},{\mathbf{x}}^{2},{\mathbf{s}}}(y|x^{1},x^{2}_{1},s_{1})$ $\displaystyle=$ $\displaystyle\tau_{{\underline{x}}^{1}_{i},{\underline{x}}^{2}_{j_{1}},{\underline{x}}^{2}_{j_{2}}}(x^{1},x^{2}_{1},x^{2}_{2})\tau_{{\underline{s}}^{1},{\underline{s}}^{2}|{\underline{x}}^{1}_{i},{\underline{x}}^{2}_{j_{1}},{\underline{x}}^{2}_{j_{2}}}(s_{1},s_{2}|x^{1},x^{2}_{1},x^{2}_{2})W_{{\mathbf{y}}|{\mathbf{x}}^{1},{\mathbf{x}}^{2},{\mathbf{s}}}(y|x^{1},x^{2}_{2},s_{2})$ for all $(x^{1},x^{2}_{1},x^{2}_{2},s_{1},s_{2},y)\in{\mathcal{X}}_{1}\times{\mathcal{X}}_{2}^{2}\times{\mathcal{S}}^{2}\times{\mathcal{Y}}$. We now get that $({\mathcal{C}}_{1},{\mathcal{C}}_{2})\in{\mathcal{X}}_{1}^{n}\times{\mathcal{X}}_{2}^{n}$ attains zero error for $\mathsf{MAC}_{2}$ if and only if the above conditions hold. Since these conditions should be satisfied for every $n$, by 7, we pass from types to distributions. According to Definition 11, we finally get that an infinite sequence of codes $\left\\{\left({\mathcal{C}}_{1}^{(n)},{\mathcal{C}}_{2}^{(n)}\right)\right\\}_{n\geq 1}$ attains zero error for $\mathsf{MAC}_{2}$ if and only if for every $n$, 1. 1. 
for all $1\leq i_{1}\neq i_{2}\leq|{\mathcal{C}}_{1}^{(n)}|$ and $1\leq j_{1}\neq j_{2}\leq|{\mathcal{C}}_{2}^{(n)}|$, $\tau_{{\underline{x}}^{1}_{i_{1}},{\underline{x}}^{1}_{i_{2}},{\underline{x}}^{2}_{j_{1}},{\underline{x}}^{2}_{j_{2}}}\notin{\mathcal{K}}_{{1,2}}\left(P_{1},P_{2}\right)$; 2. 2. for all $1\leq i_{1}\neq i_{2}\leq|{\mathcal{C}}_{1}^{(n)}|$ and $1\leq j\leq|{\mathcal{C}}_{2}^{(n)}|$, $\tau_{{\underline{x}}^{1}_{i_{1}},{\underline{x}}^{1}_{i_{2}},{\underline{x}}^{2}_{j}}\notin{\mathcal{K}}_{{1}}\left(P_{1},P_{2}\right)$; 3. 3. for all $1\leq i\leq|{\mathcal{C}}_{1}^{(n)}|$ and $1\leq j_{1}\neq j_{2}\leq|{\mathcal{C}}_{2}^{(n)}|$, $\tau_{{\underline{x}}^{1}_{i},{\underline{x}}^{2}_{j_{1}},{\underline{x}}^{2}_{j_{2}}}\notin{\mathcal{K}}_{{2}}\left(P_{1},P_{2}\right)$. This finishes the proof. ∎ ###### Remark 5. 12 and 14 actually imply that operational confusability and distributional confusability are equivalent, both of which are characterizations of zero error. ###### Remark 6. Using operational confusability, one can instead define the confusability sets in terms of types rather than distributions. 
$\displaystyle{\mathcal{K}}_{1,2}^{(n)}(P_{1},P_{2})\coloneqq$ $\displaystyle\left\\{\tau_{{\underline{x}}^{1}_{1},{\underline{x}}^{1}_{2},{\underline{x}}^{2}_{1},{\underline{x}}^{2}_{2}}\in{\mathcal{J}}_{1,2}\left(P_{1},P_{2}\right):\begin{array}[]{c}({\underline{x}}^{1}_{1},{\underline{x}}^{1}_{2},{\underline{x}}^{2}_{1},{\underline{x}}^{2}_{2})\in({\mathcal{X}}_{1}^{n})^{2}\times({\mathcal{X}}_{2}^{n})^{2}\\\ ({\underline{x}}^{1}_{1},{\underline{x}}^{2}_{1})\text{ and }({\underline{x}}^{1}_{2},{\underline{x}}^{2}_{2})\text{ satisfy the joint confusability condition}\end{array}\right\\},$ (34) $\displaystyle{\mathcal{K}}_{1}^{(n)}(P_{1},P_{2})\coloneqq$ $\displaystyle\left\\{\tau_{{\underline{x}}^{1}_{1},{\underline{x}}^{1}_{2},{\underline{x}}^{2}}\in{\mathcal{J}}_{1}\left(P_{1},P_{2}\right):\begin{array}[]{c}({\underline{x}}^{1}_{1},{\underline{x}}^{1}_{2},{\underline{x}}^{2})\in({\mathcal{X}}_{1}^{n})^{2}\times{\mathcal{X}}_{2}^{n}\\\ ({\underline{x}}^{1}_{1},{\underline{x}}^{2})\text{ and }({\underline{x}}^{1}_{2},{\underline{x}}^{2})\text{ satisfy the first-marginal confusability condition}\end{array}\right\\}$ (37) $\displaystyle{\mathcal{K}}_{2}^{(n)}(P_{1},P_{2})\coloneqq$ $\displaystyle\left\\{\tau_{{\underline{x}}^{1},{\underline{x}}^{2}_{1},{\underline{x}}^{2}_{2}}\in{\mathcal{J}}_{2}\left(P_{1},P_{2}\right):\begin{array}[]{c}({\underline{x}}^{1},{\underline{x}}^{2}_{1},{\underline{x}}^{2}_{2})\in{\mathcal{X}}_{1}^{n}\times({\mathcal{X}}_{2}^{n})^{2}\\\ ({\underline{x}}^{1},{\underline{x}}^{2}_{1})\text{ and }({\underline{x}}^{1},{\underline{x}}^{2}_{2})\text{ satisfy the second-marginal confusability condition}\end{array}\right\\}.$ (40) By 7 and Remark 5, the above definition is (almost) the same as Definition 11. 
Indeed, $\displaystyle{\mathcal{K}}_{{1,2}}\left(P_{1},P_{2}\right)=$ $\displaystyle\operatorname{cl}\left(\bigcup_{n=1}^{\infty}{\mathcal{K}}_{1,2}^{(n)}(P_{1},P_{2})\right),$ $\displaystyle{\mathcal{K}}_{{1}}\left(P_{1},P_{2}\right)=$ $\displaystyle\operatorname{cl}\left(\bigcup_{n=1}^{\infty}{\mathcal{K}}_{1}^{(n)}(P_{1},P_{2})\right),$ $\displaystyle{\mathcal{K}}_{{2}}\left(P_{1},P_{2}\right)=$ $\displaystyle\operatorname{cl}\left(\bigcup_{n=1}^{\infty}{\mathcal{K}}_{2}^{(n)}(P_{1},P_{2})\right),$ where $\operatorname{cl}(\cdot)$ denotes the closure of a set. We stick with the distribution version of the definition rather than the type version. ###### Proposition 15. Fix any $(P_{1},P_{2})\in\Gamma_{1}\times\Gamma_{2}$. The confusability sets enjoy the following properties. 1. 1. _Nontriviality._ Any distributions $P_{{\mathbf{x}}^{1},{\mathbf{x}}^{1},{\mathbf{x}}^{2},{\mathbf{x}}^{2}}\in{\mathcal{J}}_{1,2}\left(P_{1},P_{2}\right)$, $P_{{\mathbf{x}}^{1},{\mathbf{x}}^{1},{\mathbf{x}}^{2}}\in{\mathcal{J}}_{1}\left(P_{1},P_{2}\right)$ and $P_{{\mathbf{x}}^{1},{\mathbf{x}}^{2},{\mathbf{x}}^{2}}\in{\mathcal{J}}_{2}\left(P_{1},P_{2}\right)$ are in ${\mathcal{K}}_{{1,2}}\left(P_{1},P_{2}\right),{\mathcal{K}}_{{1}}\left(P_{1},P_{2}\right)$ and ${\mathcal{K}}_{{2}}\left(P_{1},P_{2}\right)$, respectively. 2. 2. 
_Transpositional invariance._ If $P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}$ is in ${\mathcal{K}}_{{1,2}}\left(P_{1},P_{2}\right)$, then $P_{{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{2}_{2},{\mathbf{x}}^{2}_{1}}$ is also in ${\mathcal{K}}_{{1,2}}\left(P_{1},P_{2}\right)$; if $P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}}$ is in ${\mathcal{K}}_{{1}}\left(P_{1},P_{2}\right)$, then $P_{{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{2}}$ is also in ${\mathcal{K}}_{{1}}\left(P_{1},P_{2}\right)$; if $P_{{\mathbf{x}}^{1},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}$ is in ${\mathcal{K}}_{{2}}\left(P_{1},P_{2}\right)$, then $P_{{\mathbf{x}}^{1},{\mathbf{x}}^{2}_{2},{\mathbf{x}}^{2}_{1}}$ is also in ${\mathcal{K}}_{{2}}\left(P_{1},P_{2}\right)$. 3. 3. _Convexity._ All of ${\mathcal{K}}_{{1,2}}\left(P_{1},P_{2}\right),{\mathcal{K}}_{{1}}\left(P_{1},P_{2}\right),{\mathcal{K}}_{{2}}\left(P_{1},P_{2}\right)$ are convex. ###### Proof. By Remark 5, it is convenient to prove the properties via operational confusability. To prove the first property, one simply observes that a pair of codewords $({\underline{x}}^{1},{\underline{x}}^{2})$ is clearly confusable with itself. In Item 1 (of 12), one takes ${\underline{s}}=\widetilde{\underline{s}}$. To prove the second property, one notes that if $({\underline{x}}^{1},{\underline{x}}^{2})$ is confusable with $(\widetilde{\underline{x}}^{1},\widetilde{\underline{x}}^{2})$ (resp. $(\widetilde{\underline{x}}^{1},{\underline{x}}^{2})$ or $({\underline{x}}^{1},\widetilde{\underline{x}}^{2})$), then $(\widetilde{\underline{x}}^{1},\widetilde{\underline{x}}^{2})$ (resp. $(\widetilde{\underline{x}}^{1},{\underline{x}}^{2})$ or $({\underline{x}}^{1},\widetilde{\underline{x}}^{2})$) is also confusable with $({\underline{x}}^{1},{\underline{x}}^{2})$. In the conditions of 12, one interchanges the corresponding ${\underline{s}}$ and $\widetilde{\underline{s}}$. 
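The swap underlying transpositional invariance can be made concrete. The sketch below (illustrative only; the alphabets and distributions are arbitrary choices) represents a product self-coupling $p_{1}\otimes p_{1}\otimes p_{2}\otimes p_{2}$ as a table over 4-tuples and checks that it is invariant under the permutation $(x^{1}_{1},x^{1}_{2},x^{2}_{1},x^{2}_{2})\mapsto(x^{1}_{2},x^{1}_{1},x^{2}_{2},x^{2}_{1})$:

```python
from itertools import product

# Arbitrary small alphabets for the sketch.
X1, X2 = [0, 1, 2], [0, 1]
p1 = {0: 0.5, 1: 0.3, 2: 0.2}   # a distribution on X1
p2 = {0: 0.6, 1: 0.4}           # a distribution on X2

# A product self-coupling p1 (x) p1 (x) p2 (x) p2, stored as a dict over 4-tuples.
P = {(a, b, c, d): p1[a] * p1[b] * p2[c] * p2[d]
     for a, b, c, d in product(X1, X1, X2, X2)}

# The transposition swaps the two codeword copies: (a, b, c, d) -> (b, a, d, c).
P_swapped = {(b, a, d, c): v for (a, b, c, d), v in P.items()}

assert all(abs(P[k] - P_swapped[k]) < 1e-12 for k in P)  # invariant under the swap
assert abs(sum(P.values()) - 1.0) < 1e-12                # still a distribution
```

A distribution that is not of product form need not pass the first assertion, which is exactly why transpositional invariance is stated as a property to be proved.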
To prove the third property, we note that for any $\alpha\in[0,1]$, if $(\vec{x}^{1}_{1},\vec{x}^{2}_{1})\in{\mathcal{X}}_{1}^{\alpha n}\times{\mathcal{X}}_{2}^{\alpha n}$ and $(\vec{x}^{1}_{2},\vec{x}^{2}_{2})\in{\mathcal{X}}_{1}^{\alpha n}\times{\mathcal{X}}_{2}^{\alpha n}$ are confusable (via $\vec{s}_{1}\in{\mathcal{S}}^{\alpha n}$ and $\vec{s}_{2}\in{\mathcal{S}}^{\alpha n}$), $(\vec{x}^{1}_{3},\vec{x}^{2}_{3})\in{\mathcal{X}}_{1}^{(1-\alpha)n}\times{\mathcal{X}}_{2}^{(1-\alpha)n}$ and $(\vec{x}^{1}_{4},\vec{x}^{2}_{4})\in{\mathcal{X}}_{1}^{(1-\alpha)n}\times{\mathcal{X}}_{2}^{(1-\alpha)n}$ are also confusable (via $\vec{s}_{3}\in{\mathcal{S}}^{(1-\alpha)n}$ and $\vec{s}_{4}\in{\mathcal{S}}^{(1-\alpha)n}$), then $((\vec{x}^{1}_{1},\vec{x}^{1}_{3}),(\vec{x}^{2}_{1},\vec{x}^{2}_{3}))\in{\mathcal{X}}_{1}^{n}\times{\mathcal{X}}_{2}^{n}$ and $((\vec{x}^{1}_{2},\vec{x}^{1}_{4}),(\vec{x}^{2}_{2},\vec{x}^{2}_{4}))\in{\mathcal{X}}_{1}^{n}\times{\mathcal{X}}_{2}^{n}$ are confusable (via $(\vec{s}_{1},\vec{s}_{3})\in{\mathcal{S}}^{n}$ and $(\vec{s}_{2},\vec{s}_{4})\in{\mathcal{S}}^{n}$). Here for two vectors $\vec{v}_{1}\in{\mathcal{V}}^{n_{1}}$ and $\vec{v}_{2}\in{\mathcal{V}}^{n_{2}}$, we use the notation $(\vec{v}_{1},\vec{v}_{2})\in{\mathcal{V}}^{n_{1}+n_{2}}$ to denote the concatenation of $\vec{v}_{1}$ and $\vec{v}_{2}$. 
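The elementary fact driving this concatenation argument is that the type of a concatenation is the length-weighted mixture of the component types. A minimal sketch (the alphabet and sequences below are arbitrary):

```python
from collections import Counter

def joint_type(seq):
    # Empirical distribution (type) of a sequence of symbols.
    n = len(seq)
    c = Counter(seq)
    return {a: c[a] / n for a in c}

s1 = ['a', 'b', 'a', 'a']          # length 4
s2 = ['b', 'b']                    # length 2
cat = s1 + s2                      # concatenation, length 6
alpha = len(s1) / len(cat)         # mixing weight alpha = 2/3

t1, t2, tc = joint_type(s1), joint_type(s2), joint_type(cat)
# type(cat) = alpha * type(s1) + (1 - alpha) * type(s2), symbol by symbol.
for a in set(t1) | set(t2):
    mix = alpha * t1.get(a, 0) + (1 - alpha) * t2.get(a, 0)
    assert abs(tc.get(a, 0) - mix) < 1e-12
```

The same identity applied to joint types of tuples of sequences is what turns the concatenation construction into convexity of the confusability sets.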
Therefore, by 4, if $P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}\in{\mathcal{K}}_{{1,2}}\left(P_{1},P_{2}\right)$ and $P_{\widetilde{{\mathbf{x}}^{1}_{1}},\widetilde{{\mathbf{x}}^{1}_{2}},\widetilde{{\mathbf{x}}^{2}_{1}},\widetilde{{\mathbf{x}}^{2}_{2}}}\in{\mathcal{K}}_{{1,2}}\left(P_{1},P_{2}\right)$ then $\alpha P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}+(1-\alpha)P_{\widetilde{{\mathbf{x}}^{1}_{1}},\widetilde{{\mathbf{x}}^{1}_{2}},\widetilde{{\mathbf{x}}^{2}_{1}},\widetilde{{\mathbf{x}}^{2}_{2}}}\in{\mathcal{K}}_{{1,2}}\left(P_{1},P_{2}\right)$ for any $\alpha\in[0,1]$. ∎ ###### Remark 7. If we define the relation $\sim_{\mathrm{conf}}$ on the set of feasible input sequences as $({\underline{x}}^{1},{\underline{x}}^{2})\sim_{\mathrm{conf}}(\widetilde{\underline{x}}^{1},\widetilde{\underline{x}}^{2})$ (resp. $({\underline{x}}^{1},{\underline{x}}^{2})\sim_{\mathrm{conf}}(\widetilde{\underline{x}}^{1},{\underline{x}}^{2})$ or $({\underline{x}}^{1},{\underline{x}}^{2})\sim_{\mathrm{conf}}({\underline{x}}^{1},\widetilde{\underline{x}}^{2})$) iff $\tau_{{\underline{x}}^{1},\widetilde{\underline{x}}^{1},{\underline{x}}^{2},\widetilde{\underline{x}}^{2}}\in{\mathcal{K}}_{{1,2}}\left(P_{1},P_{2}\right)$ (resp. $\tau_{{\underline{x}}^{1},\widetilde{\underline{x}}^{1},{\underline{x}}^{2}}\in{\mathcal{K}}_{{1}}\left(P_{1},P_{2}\right)$ or $\tau_{{\underline{x}}^{1},{\underline{x}}^{2},\widetilde{\underline{x}}^{2}}\in{\mathcal{K}}_{{2}}\left(P_{1},P_{2}\right)$), then Proposition 15 implies that $\sim_{\mathrm{conf}}$ is reflexive and symmetric. However, $\sim_{\mathrm{conf}}$ is not necessarily transitive. Therefore, it is not in general an equivalence relation. ###### Claim 16. Channels with the same confusability sets have the same capacity region. ###### Proof. 
Let $\mathsf{MAC}_{2}$ and $\mathsf{MAC}_{2}^{\prime}$ be two adversarial MACs with the same input constraints $\Gamma_{1},\Gamma_{2}$ and the same confusability sets ${\mathcal{K}}_{{1,2}}\left(P_{1},P_{2}\right),{\mathcal{K}}_{{1}}\left(P_{1},P_{2}\right),{\mathcal{K}}_{{2}}\left(P_{1},P_{2}\right)$ for all $(P_{1},P_{2})\in\Gamma_{1}\times\Gamma_{2}$. Note that $\mathsf{MAC}_{2}$ and $\mathsf{MAC}_{2}^{\prime}$ may have different state/output alphabets and channel laws. By 14, any code $({\mathcal{C}}_{1},{\mathcal{C}}_{2})$ that attains zero error for $\mathsf{MAC}_{2}$ also attains zero error for $\mathsf{MAC}_{2}^{\prime}$. Therefore, any achievable rate pair $(R_{1},R_{2})$ for $\mathsf{MAC}_{2}$ is also achievable for $\mathsf{MAC}_{2}^{\prime}$. ∎ ## X The sets of good distributions and their properties The geometry of various sets of distributions/tensors is depicted in Figure 2. Figure 2: The geometry of various sets of distributions/tensors. We only draw sets of joint distributions/tensors. The geometry of the corresponding marginal distributions/tensors is similar. The ambient space is $\Delta_{1,2}(P_{1},P_{2})$ which is defined in Definition 12. The set ${\mathcal{J}}_{1,2}\left(P_{1},P_{2}\right)$ of self-couplings is defined in Definition 10. The set $\mathsf{Sym}_{1,2}(P_{1},P_{2})$ of symmetric tensors is defined in Definition 13. Inside $\mathsf{Sym}_{1,2}(P_{1},P_{2})$, there is a pair of dual cones, viz.: ${\mathcal{G}}_{{1,2}}\left(P_{1},P_{2}\right)$ (Definition 15) and $\mathrm{co}\text{-}{\mathcal{G}}_{1,2}\left(P_{1},P_{2}\right)$ (Definition 16). The blue region denotes the set ${\mathcal{S}}_{1,2}\left(P_{1},P_{2}\right)$ of symmetric distributions (Definition 14) which is the intersection of $\mathsf{Sym}_{1,2}(P_{1},P_{2})$ and ${\mathcal{J}}_{1,2}\left(P_{1},P_{2}\right)$. ###### Definition 12 (Generalized self-couplings). 
$\displaystyle\Delta_{1,2}(P_{1},P_{2})\coloneqq$ $\displaystyle\left\\{T_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}\in{\mathbb{R}}^{\left|{\mathcal{X}}_{1}\right|^{2}\times\left|{\mathcal{X}}_{2}\right|^{2}}:\left\|T_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}\right\|_{1}=1,\begin{array}[]{l}\left[T_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}\right]_{{\mathbf{x}}^{1}_{1}}=\left[T_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}\right]_{{\mathbf{x}}^{1}_{2}}=P_{1},\\\ \left[T_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}\right]_{{\mathbf{x}}^{2}_{1}}=\left[T_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}\right]_{{\mathbf{x}}^{2}_{2}}=P_{2}\end{array}\right\\},$ (43) $\displaystyle\Delta_{1}(P_{1},P_{2})\coloneqq$ $\displaystyle\left\\{T_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}}\in{\mathbb{R}}^{\left|{\mathcal{X}}_{1}\right|^{2}\times\left|{\mathcal{X}}_{2}\right|}:\left\|T_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}}\right\|_{1}=1,\left[T_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}}\right]_{{\mathbf{x}}^{1}_{1}}=\left[T_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}}\right]_{{\mathbf{x}}^{1}_{2}}=P_{1},\left[T_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}}\right]_{{\mathbf{x}}^{2}}=P_{2}\right\\}$ $\displaystyle\Delta_{2}(P_{1},P_{2})\coloneqq$ 
$\displaystyle\left\\{T_{{\mathbf{x}}^{1},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}\in{\mathbb{R}}^{\left|{\mathcal{X}}_{1}\right|\times\left|{\mathcal{X}}_{2}\right|^{2}}:\left\|T_{{\mathbf{x}}^{1},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}\right\|_{1}=1,\left[T_{{\mathbf{x}}^{1},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}\right]_{{\mathbf{x}}^{1}}=P_{1},\left[T_{{\mathbf{x}}^{1},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}\right]_{{\mathbf{x}}^{2}_{1}}=\left[T_{{\mathbf{x}}^{1},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}\right]_{{\mathbf{x}}^{2}_{2}}=P_{2}\right\\}.$ ###### Remark 8. For a general tensor (not necessarily a distribution) $T_{{\mathbf{a}},{\mathbf{b}}}\in{\mathbb{R}}^{\left|{\mathcal{A}}\right|\times\left|{\mathcal{B}}\right|}$, the marginalization of $T_{{\mathbf{a}},{\mathbf{b}}}$ onto the first variable ${\mathbf{a}}$ is defined as $\left[T_{{\mathbf{a}},{\mathbf{b}}}\right]_{{\mathbf{a}}}(a)\coloneqq\sum_{b\in{\mathcal{B}}}\left|T_{{\mathbf{a}},{\mathbf{b}}}(a,b)\right|$ for any $a\in{\mathcal{A}}$. ###### Remark 9. For the convenience of discussion, the above sets should be thought of as generalizations of the self-couplings of Definition 10. ###### Definition 13 (Symmetric tensors). 
$\displaystyle\mathsf{Sym}_{1,2}(P_{1},P_{2})\coloneqq$ $\displaystyle\left\\{T_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}\in\Delta_{1,2}(P_{1},P_{2}):T_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}=T_{{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{2}_{2},{\mathbf{x}}^{2}_{1}}=T_{{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}=T_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{2},{\mathbf{x}}^{2}_{1}}\right\\},$ $\displaystyle\mathsf{Sym}_{1}(P_{1},P_{2})\coloneqq$ $\displaystyle\left\\{T_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}}\in\Delta_{1}(P_{1},P_{2}):T_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}}=T_{{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{2}}\right\\},$ $\displaystyle\mathsf{Sym}_{2}(P_{1},P_{2})\coloneqq$ $\displaystyle\left\\{T_{{\mathbf{x}}^{1},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}\in\Delta_{2}(P_{1},P_{2}):T_{{\mathbf{x}}^{1},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}=T_{{\mathbf{x}}^{1},{\mathbf{x}}^{2}_{2},{\mathbf{x}}^{2}_{1}}\right\\}.$ ###### Definition 14 (Symmetric distributions). $\displaystyle{\mathcal{S}}_{1,2}\left(P_{1},P_{2}\right)\coloneqq$ $\displaystyle{\mathcal{J}}_{1,2}\left(P_{1},P_{2}\right)\cap\mathsf{Sym}_{1,2}(P_{1},P_{2}),$ $\displaystyle{\mathcal{S}}_{1}\left(P_{1},P_{2}\right)\coloneqq$ $\displaystyle{\mathcal{J}}_{1}\left(P_{1},P_{2}\right)\cap\mathsf{Sym}_{1}(P_{1},P_{2}),$ $\displaystyle{\mathcal{S}}_{2}\left(P_{1},P_{2}\right)\coloneqq$ $\displaystyle{\mathcal{J}}_{2}\left(P_{1},P_{2}\right)\cap\mathsf{Sym}_{2}(P_{1},P_{2}).$ ###### Definition 15 (Good distributions). Let $(P_{1},P_{2})\in\Gamma_{1}\times\Gamma_{2}$. 
The set of _jointly good distributions_ ${\mathcal{G}}_{{1,2}}\left(P_{1},P_{2}\right)$, the set of _first marginally good distributions_ ${\mathcal{G}}_{{1}}\left(P_{1},P_{2}\right)$ and the set of _second marginally good distributions_ ${\mathcal{G}}_{{2}}\left(P_{1},P_{2}\right)$ w.r.t. $P_{1}$ and $P_{2}$ are defined as follows: $\displaystyle{\mathcal{G}}_{{1,2}}\left(P_{1},P_{2}\right)\coloneqq$ $\displaystyle\left\\{P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}\in{\mathcal{J}}_{1,2}\left(P_{1},P_{2}\right)\colon\begin{array}[]{l}\exists k\in{\mathbb{Z}}_{\geq 1},\left\\{\lambda_{i}\right\\}_{i=1}^{k}\subseteq[0,1],\left\\{P_{1,i}\right\\}_{i=1}^{k}\subseteq\Delta({\mathcal{X}}_{1}),\left\\{P_{2,i}\right\\}_{i=1}^{k}\subseteq\Delta({\mathcal{X}}_{2}),\mathrm{\ s.t.}\\\ \displaystyle\sum_{i=1}^{k}\lambda_{i}=1,P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}=\sum_{i=1}^{k}\lambda_{i}P_{1,i}^{\otimes 2}\otimes P_{2,i}^{\otimes 2}\end{array}\right\\},$ (46) $\displaystyle{\mathcal{G}}_{{1}}\left(P_{1},P_{2}\right)\coloneqq$ $\displaystyle\left\\{P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}}\in{\mathcal{J}}_{1}\left(P_{1},P_{2}\right)\colon\begin{array}[]{l}\exists k\in{\mathbb{Z}}_{\geq 1},\left\\{\lambda_{i}\right\\}_{i=1}^{k}\subseteq[0,1],\left\\{P_{1,i}\right\\}_{i=1}^{k}\subseteq\Delta({\mathcal{X}}_{1}),\left\\{P_{2,i}\right\\}_{i=1}^{k}\subseteq\Delta({\mathcal{X}}_{2}),\mathrm{\ s.t.}\\\ \displaystyle\sum_{i=1}^{k}\lambda_{i}=1,P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}}=\sum_{i=1}^{k}\lambda_{i}P_{1,i}^{\otimes 2}\otimes P_{2,i}\end{array}\right\\},$ (49) $\displaystyle{\mathcal{G}}_{{2}}\left(P_{1},P_{2}\right)\coloneqq$ $\displaystyle\left\\{P_{{\mathbf{x}}^{1},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}\in{\mathcal{J}}_{2}\left(P_{1},P_{2}\right)\colon\begin{array}[]{l}\exists k\in{\mathbb{Z}}_{\geq 
1},\left\\{\lambda_{i}\right\\}_{i=1}^{k}\subseteq[0,1],\left\\{P_{1,i}\right\\}_{i=1}^{k}\subseteq\Delta({\mathcal{X}}_{1}),\left\\{P_{2,i}\right\\}_{i=1}^{k}\subseteq\Delta({\mathcal{X}}_{2}),\mathrm{\ s.t.}\\\ \displaystyle\sum_{i=1}^{k}\lambda_{i}=1,P_{{\mathbf{x}}^{1},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}=\sum_{i=1}^{k}\lambda_{i}P_{1,i}\otimes P_{2,i}^{\otimes 2}\end{array}\right\\}.$ (52) In addition, we define the set of _simultaneously good_ distributions ${\mathcal{G}}\left(P_{1},P_{2}\right)$ w.r.t. $P_{1}$ and $P_{2}$ as $\displaystyle{\mathcal{G}}\left(P_{1},P_{2}\right)\coloneqq$ $\displaystyle\left\\{\begin{array}[]{rl}P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}\in{\mathcal{G}}_{{1,2}}\left(P_{1},P_{2}\right)\setminus{\mathcal{K}}_{{1,2}}\left(P_{1},P_{2}\right):&\\\ \left[P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}\right]_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1}}=&\left[P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}\right]_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{2}}\in{\mathcal{G}}_{{1}}\left(P_{1},P_{2}\right)\setminus{\mathcal{K}}_{{1}}\left(P_{1},P_{2}\right)\\\ \left[P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}\right]_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}=&\left[P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}\right]_{{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}\in{\mathcal{G}}_{{2}}\left(P_{1},P_{2}\right)\setminus{\mathcal{K}}_{{2}}\left(P_{1},P_{2}\right)\end{array}\right\\}.$ (56) ###### Proposition 17 (Properties of good distributions). The sets ${\mathcal{G}}_{{1}}\left(P_{1},P_{2}\right),{\mathcal{G}}_{{2}}\left(P_{1},P_{2}\right)$ and ${\mathcal{G}}_{{1,2}}\left(P_{1},P_{2}\right)$ enjoy the following properties. 1. 1. 
Good distributions are symmetric. $\displaystyle{\mathcal{G}}_{{1,2}}\left(P_{1},P_{2}\right)\subset{\mathcal{S}}_{1,2}\left(P_{1},P_{2}\right),\quad{\mathcal{G}}_{{1}}\left(P_{1},P_{2}\right)\subset{\mathcal{S}}_{1}\left(P_{1},P_{2}\right),\quad{\mathcal{G}}_{{2}}\left(P_{1},P_{2}\right)\subset{\mathcal{S}}_{2}\left(P_{1},P_{2}\right).$ 2. 2. For any $P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}\in{\mathcal{G}}_{{1,2}}\left(P_{1},P_{2}\right)$, $\displaystyle\left[P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}\right]_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1}}=$ $\displaystyle\left[P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}\right]_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{2}},\quad\left[P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}\right]_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}=\left[P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}\right]_{{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}.$ 3. 3. The sets ${\mathcal{G}}_{{1}}\left(P_{1},P_{2}\right)$ and ${\mathcal{G}}_{{2}}\left(P_{1},P_{2}\right)$ are projections of the set ${\mathcal{G}}_{{1,2}}\left(P_{1},P_{2}\right)$. 
$\displaystyle{\mathcal{G}}_{{1}}\left(P_{1},P_{2}\right)=$ $\displaystyle\left\\{\left[P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}\right]_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1}}:P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}\in{\mathcal{G}}_{{1,2}}\left(P_{1},P_{2}\right)\right\\},$ $\displaystyle{\mathcal{G}}_{{2}}\left(P_{1},P_{2}\right)=$ $\displaystyle\left\\{\left[P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}\right]_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}:P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}\in{\mathcal{G}}_{{1,2}}\left(P_{1},P_{2}\right)\right\\}.$ ###### Remark 10. Though the good sets ${\mathcal{G}}_{{1,2}}\left(P_{1},P_{2}\right),{\mathcal{G}}_{{1}}\left(P_{1},P_{2}\right),{\mathcal{G}}_{{2}}\left(P_{1},P_{2}\right)$ are _consistent under projections_ (the third property of Proposition 17), the confusability sets ${\mathcal{K}}_{{1,2}}\left(P_{1},P_{2}\right),{\mathcal{K}}_{{1}}\left(P_{1},P_{2}\right),{\mathcal{K}}_{{2}}\left(P_{1},P_{2}\right)$ are not. Operationally, this is because $({\underline{x}}^{1}_{i_{1}},{\underline{x}}^{2}_{j_{1}})$ (or $({\underline{x}}^{1}_{i_{1}},{\underline{x}}^{2}_{j_{2}})$) and $({\underline{x}}^{1}_{i_{2}},{\underline{x}}^{2}_{j_{1}})$ (or $({\underline{x}}^{1}_{i_{2}},{\underline{x}}^{2}_{j_{2}})$) are not necessarily confusable even if $({\underline{x}}^{1}_{i_{1}},{\underline{x}}^{2}_{j_{1}})$ and $({\underline{x}}^{1}_{i_{2}},{\underline{x}}^{2}_{j_{2}})$ are (for $i_{1}\neq i_{2}$ and $j_{1}\neq j_{2}$). Therefore, not even the second property of Proposition 17 is guaranteed to hold for $P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}\in{\mathcal{K}}_{{1,2}}\left(P_{1},P_{2}\right)$, let alone the third one. ###### Definition 16 (Co-good tensors). 
$\displaystyle\mathrm{co}\text{-}{\mathcal{G}}_{1,2}\left(P_{1},P_{2}\right)\coloneqq$ $\displaystyle\left\\{P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}\in\mathsf{Sym}_{1,2}(P_{1},P_{2})\colon\forall P_{{\mathbf{x}}^{1}}\in\Delta({\mathcal{X}}_{1}),\forall P_{{\mathbf{x}}^{2}}\in\Delta({\mathcal{X}}_{2}),\left\langle P_{{\mathbf{x}}^{1}}^{\otimes 2}\otimes P_{{\mathbf{x}}^{2}}^{\otimes 2},P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}\right\rangle\geq 0\right\\},$ $\displaystyle\mathrm{co}\text{-}{\mathcal{G}}_{1}\left(P_{1},P_{2}\right)\coloneqq$ $\displaystyle\left\\{P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}}\in\mathsf{Sym}_{1}(P_{1},P_{2})\colon\forall P_{{\mathbf{x}}^{1}}\in\Delta({\mathcal{X}}_{1}),\forall P_{{\mathbf{x}}^{2}}\in\Delta({\mathcal{X}}_{2}),\left\langle P_{{\mathbf{x}}^{1}}^{\otimes 2}\otimes P_{{\mathbf{x}}^{2}},P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}}\right\rangle\geq 0\right\\},$ $\displaystyle\mathrm{co}\text{-}{\mathcal{G}}_{2}\left(P_{1},P_{2}\right)\coloneqq$ $\displaystyle\left\\{P_{{\mathbf{x}}^{1},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}\in\mathsf{Sym}_{2}(P_{1},P_{2})\colon\forall P_{{\mathbf{x}}^{1}}\in\Delta({\mathcal{X}}_{1}),\forall P_{{\mathbf{x}}^{2}}\in\Delta({\mathcal{X}}_{2}),\left\langle P_{{\mathbf{x}}^{1}}\otimes P_{{\mathbf{x}}^{2}}^{\otimes 2},P_{{\mathbf{x}}^{1},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}\right\rangle\geq 0\right\\}.$ ###### Remark 11. Note that co-good tensors are not necessarily distributions. They may have negative entries. ###### Remark 12. 
It follows from the definitions that the sets of good distributions are subsets of the corresponding sets of co-good tensors, i.e., $\displaystyle{\mathcal{G}}_{{1,2}}\left(P_{1},P_{2}\right)\subset\mathrm{co}\text{-}{\mathcal{G}}_{1,2}\left(P_{1},P_{2}\right),\quad{\mathcal{G}}_{{1}}\left(P_{1},P_{2}\right)\subset\mathrm{co}\text{-}{\mathcal{G}}_{1}\left(P_{1},P_{2}\right),\quad{\mathcal{G}}_{{2}}\left(P_{1},P_{2}\right)\subset\mathrm{co}\text{-}{\mathcal{G}}_{2}\left(P_{1},P_{2}\right).$ ###### Definition 17 (Dual cone). The _dual cone_ ${\mathcal{B}}^{*}$ of a cone ${\mathcal{B}}$ in a Hilbert space ${\mathcal{H}}$ is defined as ${\mathcal{B}}^{*}\coloneqq\left\\{b^{\prime}\in{\mathcal{H}}:\forall b\in{\mathcal{B}},\;\left\langle b,b^{\prime}\right\rangle\geq 0\right\\}$. ###### Theorem 18 (Duality). The sets ${\mathcal{G}}_{{1,2}}\left(P_{1},P_{2}\right)$, ${\mathcal{G}}_{{1}}\left(P_{1},P_{2}\right)$ and ${\mathcal{G}}_{{2}}\left(P_{1},P_{2}\right)$ are all closed convex pointed cones with non-empty interior. Furthermore, the following duality relations hold. In $\mathsf{Sym}_{1,2}(P_{1},P_{2})$, ${\mathcal{G}}_{{1,2}}\left(P_{1},P_{2}\right)$ and $\mathrm{co}\text{-}{\mathcal{G}}_{1,2}\left(P_{1},P_{2}\right)$ are dual cones of each other. In $\mathsf{Sym}_{1}(P_{1},P_{2})$, ${\mathcal{G}}_{{1}}\left(P_{1},P_{2}\right)$ and $\mathrm{co}\text{-}{\mathcal{G}}_{1}\left(P_{1},P_{2}\right)$ are dual cones of each other. In $\mathsf{Sym}_{2}(P_{1},P_{2})$, ${\mathcal{G}}_{{2}}\left(P_{1},P_{2}\right)$ and $\mathrm{co}\text{-}{\mathcal{G}}_{2}\left(P_{1},P_{2}\right)$ are dual cones of each other. ###### Proof. We first prove the duality relations. 
Intuitively, the duality follows since the extremal rays of ${\mathcal{G}}_{{1,2}}\left(P_{1},P_{2}\right)$ (or ${\mathcal{G}}_{{1}}\left(P_{1},P_{2}\right)$, ${\mathcal{G}}_{{2}}\left(P_{1},P_{2}\right)$ respectively) are distributions of the form $P_{{\mathbf{x}}^{1}}^{\otimes 2}\otimes P_{{\mathbf{x}}^{2}}^{\otimes 2}$ (or $P_{{\mathbf{x}}^{1}}^{\otimes 2}\otimes P_{{\mathbf{x}}^{2}}$, $P_{{\mathbf{x}}^{1}}\otimes P_{{\mathbf{x}}^{2}}^{\otimes 2}$ respectively). Indeed, it follows from Definition 15 that $\displaystyle{\mathcal{G}}_{{1,2}}\left(P_{1},P_{2}\right)=$ $\displaystyle\operatorname{conv}\left\\{P_{{\mathbf{x}}^{1}}^{\otimes 2}\otimes P_{{\mathbf{x}}^{2}}^{\otimes 2}:P_{{\mathbf{x}}^{1}}\in\Delta({\mathcal{X}}_{1}),P_{{\mathbf{x}}^{2}}\in\Delta({\mathcal{X}}_{2})\right\\}\cap{\mathcal{J}}_{1,2}\left(P_{1},P_{2}\right),$ $\displaystyle{\mathcal{G}}_{{1}}\left(P_{1},P_{2}\right)=$ $\displaystyle\operatorname{conv}\left\\{P_{{\mathbf{x}}^{1}}^{\otimes 2}\otimes P_{{\mathbf{x}}^{2}}:P_{{\mathbf{x}}^{1}}\in\Delta({\mathcal{X}}_{1}),P_{{\mathbf{x}}^{2}}\in\Delta({\mathcal{X}}_{2})\right\\}\cap{\mathcal{J}}_{1}\left(P_{1},P_{2}\right),$ $\displaystyle{\mathcal{G}}_{{2}}\left(P_{1},P_{2}\right)=$ $\displaystyle\operatorname{conv}\left\\{P_{{\mathbf{x}}^{1}}\otimes P_{{\mathbf{x}}^{2}}^{\otimes 2}:P_{{\mathbf{x}}^{1}}\in\Delta({\mathcal{X}}_{1}),P_{{\mathbf{x}}^{2}}\in\Delta({\mathcal{X}}_{2})\right\\}\cap{\mathcal{J}}_{2}\left(P_{1},P_{2}\right),$ where $\operatorname{conv}\left\\{\cdot\right\\}$ denotes the convex hull of a set. 
Therefore, one can replace $P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}\in{\mathcal{G}}_{{1,2}}\left(P_{1},P_{2}\right)$ (or $P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}}$, $P_{{\mathbf{x}}^{1},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}$ respectively) in the definition of ${\mathcal{G}}_{{1,2}}\left(P_{1},P_{2}\right)^{*}$ (or ${\mathcal{G}}_{{1}}\left(P_{1},P_{2}\right)^{*}$, ${\mathcal{G}}_{{2}}\left(P_{1},P_{2}\right)^{*}$ respectively) below with $P_{{\mathbf{x}}^{1}}^{\otimes 2}\otimes P_{{\mathbf{x}}^{2}}^{\otimes 2}$ (or $P_{{\mathbf{x}}^{1}}^{\otimes 2}\otimes P_{{\mathbf{x}}^{2}}$, $P_{{\mathbf{x}}^{1}}\otimes P_{{\mathbf{x}}^{2}}^{\otimes 2}$ respectively). $\displaystyle{\mathcal{G}}_{{1,2}}\left(P_{1},P_{2}\right)^{*}=$ $\displaystyle\left\\{Q_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}\in\mathsf{Sym}_{1,2}(P_{1},P_{2}):\forall P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}\in{\mathcal{G}}_{{1,2}}\left(P_{1},P_{2}\right),\;\left\langle P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}},Q_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}\right\rangle\geq 0\right\\},$ $\displaystyle{\mathcal{G}}_{{1}}\left(P_{1},P_{2}\right)^{*}=$ $\displaystyle\left\\{Q_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}}\in\mathsf{Sym}_{1}(P_{1},P_{2}):\forall P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}}\in{\mathcal{G}}_{{1}}\left(P_{1},P_{2}\right),\;\left\langle P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}},Q_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}}\right\rangle\geq 0\right\\},$ $\displaystyle{\mathcal{G}}_{{2}}\left(P_{1},P_{2}\right)^{*}=$ $\displaystyle\left\\{Q_{{\mathbf{x}}^{1},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}\in\mathsf{Sym}_{2}(P_{1},P_{2}):\forall 
P_{{\mathbf{x}}^{1},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}\in{\mathcal{G}}_{{2}}\left(P_{1},P_{2}\right),\;\left\langle P_{{\mathbf{x}}^{1},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}},Q_{{\mathbf{x}}^{1},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}\right\rangle\geq 0\right\\}.$ After the replacement, we get exactly $\mathrm{co}\text{-}{\mathcal{G}}_{1,2}\left(P_{1},P_{2}\right)$ (or $\mathrm{co}\text{-}{\mathcal{G}}_{1}\left(P_{1},P_{2}\right)$, $\mathrm{co}\text{-}{\mathcal{G}}_{2}\left(P_{1},P_{2}\right)$, respectively). To formalize this intuition, we prove two-sided set inclusions for $\mathrm{co}\text{-}{\mathcal{G}}_{1,2}\left(P_{1},P_{2}\right)$ and $\mathrm{co}\text{-}{\mathcal{G}}_{1}\left(P_{1},P_{2}\right)$. The proof for $\mathrm{co}\text{-}{\mathcal{G}}_{2}\left(P_{1},P_{2}\right)$ is the same as that for $\mathrm{co}\text{-}{\mathcal{G}}_{1}\left(P_{1},P_{2}\right)$ up to change of notation. We first prove $\mathrm{co}\text{-}{\mathcal{G}}_{1,2}\left(P_{1},P_{2}\right)={\mathcal{G}}_{{1,2}}\left(P_{1},P_{2}\right)^{*}$. 1. $\subseteq$. Let $Q_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}\in\mathrm{co}\text{-}{\mathcal{G}}_{1,2}\left(P_{1},P_{2}\right)$. Let $P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}=\sum_{i=1}^{k}\lambda_{i}P_{1,i}^{\otimes 2}\otimes P_{2,i}^{\otimes 2}\in{\mathcal{G}}_{{1,2}}\left(P_{1},P_{2}\right)$. By Definition 16, we have $\left\langle Q_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}},P_{1,i}^{\otimes 2}\otimes P_{2,i}^{\otimes 2}\right\rangle\geq 0$ for all $i\in[k]$. Therefore, $\left\langle P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}},Q_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}\right\rangle\geq 0$, which means $Q_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}\in{\mathcal{G}}_{{1,2}}\left(P_{1},P_{2}\right)^{*}$. 
This proves $\mathrm{co}\text{-}{\mathcal{G}}_{1,2}\left(P_{1},P_{2}\right)\subseteq{\mathcal{G}}_{{1,2}}\left(P_{1},P_{2}\right)^{*}$. 2. $\supseteq$. Let $Q_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}\in{\mathcal{G}}_{{1,2}}\left(P_{1},P_{2}\right)^{*}$. By Definition 17, for any $P_{{\mathbf{x}}^{1}}\in\Delta({\mathcal{X}}_{1})$ and $P_{{\mathbf{x}}^{2}}\in\Delta({\mathcal{X}}_{2})$, we have $\left\langle Q_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}},P_{{\mathbf{x}}^{1}}^{\otimes 2}\otimes P_{{\mathbf{x}}^{2}}^{\otimes 2}\right\rangle\geq 0$ since $P_{{\mathbf{x}}^{1}}^{\otimes 2}\otimes P_{{\mathbf{x}}^{2}}^{\otimes 2}\in{\mathcal{G}}_{{1,2}}\left(P_{1},P_{2}\right)$. Therefore, $Q_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}\in\mathrm{co}\text{-}{\mathcal{G}}_{1,2}\left(P_{1},P_{2}\right)$ and ${\mathcal{G}}_{{1,2}}\left(P_{1},P_{2}\right)^{*}\subseteq\mathrm{co}\text{-}{\mathcal{G}}_{1,2}\left(P_{1},P_{2}\right)$. We then prove $\mathrm{co}\text{-}{\mathcal{G}}_{1}\left(P_{1},P_{2}\right)={\mathcal{G}}_{{1}}\left(P_{1},P_{2}\right)^{*}$ in the same way. 1. $\subseteq$. Let $Q_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}}\in\mathrm{co}\text{-}{\mathcal{G}}_{1}\left(P_{1},P_{2}\right)$. Let $P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}}=\sum_{i=1}^{k}\lambda_{i}P_{1,i}^{\otimes 2}\otimes P_{2,i}\in{\mathcal{G}}_{{1}}\left(P_{1},P_{2}\right)$. By Definition 16, for each $i\in[k]$, we have $\left\langle Q_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}},P_{1,i}^{\otimes 2}\otimes P_{2,i}\right\rangle\geq 0$. 
Therefore, $\left\langle P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}},Q_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}}\right\rangle\geq 0$, which means $Q_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}}\in{\mathcal{G}}_{{1}}\left(P_{1},P_{2}\right)^{*}$. This proves $\mathrm{co}\text{-}{\mathcal{G}}_{1}\left(P_{1},P_{2}\right)\subseteq{\mathcal{G}}_{{1}}\left(P_{1},P_{2}\right)^{*}$. 2. $\supseteq$. Let $Q_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}}\in{\mathcal{G}}_{{1}}\left(P_{1},P_{2}\right)^{*}$. By Definition 17, for any $P_{{\mathbf{x}}^{1}}\in\Delta({\mathcal{X}}_{1})$ and $P_{{\mathbf{x}}^{2}}\in\Delta({\mathcal{X}}_{2})$, we have $\left\langle Q_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}},P_{{\mathbf{x}}^{1}}^{\otimes 2}\otimes P_{{\mathbf{x}}^{2}}\right\rangle\geq 0$ since $P_{{\mathbf{x}}^{1}}^{\otimes 2}\otimes P_{{\mathbf{x}}^{2}}\in{\mathcal{G}}_{{1}}\left(P_{1},P_{2}\right)$. Therefore, $Q_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}}\in\mathrm{co}\text{-}{\mathcal{G}}_{1}\left(P_{1},P_{2}\right)$ and ${\mathcal{G}}_{{1}}\left(P_{1},P_{2}\right)^{*}\subseteq\mathrm{co}\text{-}{\mathcal{G}}_{1}\left(P_{1},P_{2}\right)$. This finishes the proof for duality. The claimed convexity and conic properties of $\mathrm{co}\text{-}{\mathcal{G}}_{1,2}\left(P_{1},P_{2}\right)$, $\mathrm{co}\text{-}{\mathcal{G}}_{1}\left(P_{1},P_{2}\right)$ and $\mathrm{co}\text{-}{\mathcal{G}}_{2}\left(P_{1},P_{2}\right)$ follow directly from Definition 16. The closedness of ${\mathcal{G}}_{{1,2}}\left(P_{1},P_{2}\right)$, ${\mathcal{G}}_{{1}}\left(P_{1},P_{2}\right)$ and ${\mathcal{G}}_{{2}}\left(P_{1},P_{2}\right)$ follows from the fact that the dual cone of any convex cone is closed. One can easily find distributions that are in the interior of the cones under consideration.
The pointedness of ${\mathcal{G}}_{{1,2}}\left(P_{1},P_{2}\right)$, ${\mathcal{G}}_{{1}}\left(P_{1},P_{2}\right)$ and ${\mathcal{G}}_{{2}}\left(P_{1},P_{2}\right)$ follows from nonnegativity of the entries of their elements. Finally, the pointedness of $\mathrm{co}\text{-}{\mathcal{G}}_{1,2}\left(P_{1},P_{2}\right)$, $\mathrm{co}\text{-}{\mathcal{G}}_{1}\left(P_{1},P_{2}\right)$ and $\mathrm{co}\text{-}{\mathcal{G}}_{2}\left(P_{1},P_{2}\right)$ follows from the fact that the dual cone of any convex cone with nonempty interior is pointed. ∎ ## XI A characterization of the shape of the capacity region ###### Theorem 19. Fix a pair of input distributions $(P_{1},P_{2})\in\Gamma_{1}\times\Gamma_{2}$. 1. 1. If ${\mathcal{G}}\left(P_{1},P_{2}\right)\neq\emptyset$, then the capacity region contains rate pairs $(R_{1},R_{2})$ such that $R_{1}>0,R_{2}>0$ or $R_{1}>0,R_{2}=0$ or $R_{1}=0,R_{2}>0$ or $R_{1}=0,R_{2}=0$. 2. 2. If ${\mathcal{G}}\left(P_{1},P_{2}\right)=\emptyset$, ${\mathcal{G}}_{{1}}\left(P_{1},P_{2}\right)\setminus{\mathcal{K}}_{{1}}\left(P_{1},P_{2}\right)\neq\emptyset$ and ${\mathcal{G}}_{{2}}\left(P_{1},P_{2}\right)\setminus{\mathcal{K}}_{{2}}\left(P_{1},P_{2}\right)\neq\emptyset$, then the capacity region only contains rate pairs $(R_{1},R_{2})$ such that $R_{1}>0,R_{2}=0$ or $R_{1}=0,R_{2}>0$ or $R_{1}=0,R_{2}=0$. 3. 3. If ${\mathcal{G}}_{{1}}\left(P_{1},P_{2}\right)\setminus{\mathcal{K}}_{{1}}\left(P_{1},P_{2}\right)\neq\emptyset$ and ${\mathcal{G}}_{{2}}\left(P_{1},P_{2}\right)\setminus{\mathcal{K}}_{{2}}\left(P_{1},P_{2}\right)=\emptyset$, then the capacity region only contains rate pairs $(R_{1},R_{2})$ such that $R_{1}>0,R_{2}=0$ or $R_{1}=0,R_{2}=0$. 4. 4.
If ${\mathcal{G}}_{{1}}\left(P_{1},P_{2}\right)\setminus{\mathcal{K}}_{{1}}\left(P_{1},P_{2}\right)=\emptyset$ and ${\mathcal{G}}_{{2}}\left(P_{1},P_{2}\right)\setminus{\mathcal{K}}_{{2}}\left(P_{1},P_{2}\right)\neq\emptyset$, then the capacity region only contains rate pairs $(R_{1},R_{2})$ such that $R_{1}=0,R_{2}>0$ or $R_{1}=0,R_{2}=0$. 5. 5. If ${\mathcal{G}}_{{1}}\left(P_{1},P_{2}\right)\setminus{\mathcal{K}}_{{1}}\left(P_{1},P_{2}\right)=\emptyset$ and ${\mathcal{G}}_{{2}}\left(P_{1},P_{2}\right)\setminus{\mathcal{K}}_{{2}}\left(P_{1},P_{2}\right)=\emptyset$, then the capacity region only contains $(0,0)$.

Cases | ${\mathcal{G}}\left(P_{1},P_{2}\right)\neq\emptyset$ | ${\mathcal{G}}_{{1}}\left(P_{1},P_{2}\right)\setminus{\mathcal{K}}_{{1}}\left(P_{1},P_{2}\right)\neq\emptyset$ | ${\mathcal{G}}_{{2}}\left(P_{1},P_{2}\right)\setminus{\mathcal{K}}_{{2}}\left(P_{1},P_{2}\right)\neq\emptyset$ | Capacity region
---|---|---|---|---
Case (1) | $\checkmark$ | $\checkmark$ | $\checkmark$ | $(+,+),(+,0),(0,+),(0,0)$
Case (2) | $\times$ | $\checkmark$ | $\checkmark$ | $(+,0),(0,+),(0,0)$
Case (3) | $\times$ | $\checkmark$ | $\times$ | $(+,0),(0,0)$
Case (4) | $\times$ | $\times$ | $\checkmark$ | $(0,+),(0,0)$
Case (5) | $\times$ | $\times$ | $\times$ | $(0,0)$

TABLE I: A characterization of the _shape_ of the capacity region of any omniscient adversarial two-user MAC. Note that the condition ${\mathcal{G}}\left(P_{1},P_{2}\right)\neq\emptyset$ implies both ${\mathcal{G}}_{{1}}\left(P_{1},P_{2}\right)\setminus{\mathcal{K}}_{{1}}\left(P_{1},P_{2}\right)\neq\emptyset$ and ${\mathcal{G}}_{{2}}\left(P_{1},P_{2}\right)\setminus{\mathcal{K}}_{{2}}\left(P_{1},P_{2}\right)\neq\emptyset$. Indeed, the former condition is strictly stronger. In each case, we highlight the conditions in colors in such a way that red conditions imply blue conditions. Note that the table above covers all possible cases.
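The case analysis of Table I can be summarized as a small decision procedure. The sketch below is our illustrative encoding of the table, not part of the paper: the three boolean arguments are placeholders for the set-emptiness checks of Theorem 19, and the function name is ours.

```python
def capacity_region_shape(g_nonempty, g1_minus_k1_nonempty, g2_minus_k2_nonempty):
    """Map the three conditions of Theorem 19 (columns of Table I) to the
    set of achievable sign patterns of rate pairs (R1, R2).

    Recall that g_nonempty implies both marginal conditions, so the
    combinations below cover all consistent cases of the table."""
    region = {("0", "0")}  # (0, 0) is always in the capacity region
    if g1_minus_k1_nonempty:
        region.add(("+", "0"))
    if g2_minus_k2_nonempty:
        region.add(("0", "+"))
    if g_nonempty:
        region.add(("+", "+"))
    return region

# Case (2) of Table I: both marginal conditions hold but G is empty.
print(sorted(capacity_region_shape(False, True, True)))
```

For example, Case (2) yields the sign patterns $(+,0),(0,+),(0,0)$, matching the second row of Table I.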
The proof of the above characterization consists of two parts: achievability (Lemma 23) and converse (Theorem 20). ###### Theorem (Achievability, restatement of Lemma 23). Fix input distributions $(P_{1},P_{2})\in\Gamma_{1}\times\Gamma_{2}$. 1. 1. If ${\mathcal{G}}\left(P_{1},P_{2}\right)\neq\emptyset$, then there exist achievable rate pairs $(R_{1},R_{2})$ such that $R_{1}>0,R_{2}>0$. 2. 2. If ${\mathcal{G}}_{{1}}\left(P_{1},P_{2}\right)\setminus{\mathcal{K}}_{{1}}\left(P_{1},P_{2}\right)\neq\emptyset$, then there exist achievable rate pairs $(R_{1},0)$ such that $R_{1}>0$. 3. 3. If ${\mathcal{G}}_{{2}}\left(P_{1},P_{2}\right)\setminus{\mathcal{K}}_{{2}}\left(P_{1},P_{2}\right)\neq\emptyset$, then there exist achievable rate pairs $(0,R_{2})$ such that $R_{2}>0$. Various achievability results are proved in Section XIII. Firstly, in Lemma 22, we prove the existence of positive rates using _product_ distributions. Next, in Lemma 23, we refine this result using _mixtures_ of product distributions, i.e., good distributions (Definition 15). Finally, in Lemma 24 we present _inner bounds_ on the capacity region using product distributions. ###### Theorem 20 (Converse). Fix a pair of input distributions $(P_{1},P_{2})\in\Gamma_{1}\times\Gamma_{2}$. 1. 1. If ${\mathcal{G}}\left(P_{1},P_{2}\right)=\emptyset$, then there does not exist an achievable rate pair $(R_{1},R_{2})$ such that $R_{1}>0,R_{2}>0$. 2. 2. If ${\mathcal{G}}_{{1}}\left(P_{1},P_{2}\right)\setminus{\mathcal{K}}_{{1}}\left(P_{1},P_{2}\right)=\emptyset$, then there does not exist an achievable rate pair $(R_{1},R_{2})$ such that $R_{1}>0$. 3. 3. If ${\mathcal{G}}_{{2}}\left(P_{1},P_{2}\right)\setminus{\mathcal{K}}_{{2}}\left(P_{1},P_{2}\right)=\emptyset$, then there does not exist an achievable rate pair $(R_{1},R_{2})$ such that $R_{2}>0$. ###### Proof. Item 1 is proved in Section XIV. Items 2 and 3 are proved in Section XV. ∎ ###### Observation 13.
For an omniscient two-user adversarial MAC, for $i=1,2$, if a rate $R_{i}>0$ is achievable for transmitter $i$, then any rate $0\leq R_{i}^{\prime}\leq R_{i}$ is also achievable for transmitter $i$. By Observation 13, if the capacity region contains a rate pair $(R_{1},R_{2})$ where $R_{1}>0,R_{2}>0$, then the rate pairs $(R_{1},0)$ and $(0,R_{2})$ are also in the capacity region. ### XI-A A remark on nonconvexity of the capacity region As suggested by Theorem 19, the capacity region of an adversarial MAC can be _nonconvex_. E.g., if a MAC satisfies the conditions in Item 2 of Theorem 19, then the capacity region only consists of two perpendicular line segments and is therefore nonconvex. However, the capacity region cannot be an arbitrary nonconvex region. Indeed, Observation 13 implies that if a rate pair $(R_{1},R_{2})$ with $R_{1}>0,R_{2}>0$ is achievable, then all rate pairs in the (closed) rectangle with vertices $(0,0),(R_{1},0),(0,R_{2}),(R_{1},R_{2})$ are also achievable. For AVMACs (i.e., the _oblivious_ adversarial MACs), the nonconvexity of the capacity region was noted by Gubner–Hughes [GH95] and Pereg–Steinberg [PS19] via the example of an (oblivious) erasure MAC. As a side note, for AVMACs equipped with common randomness, the capacity region may or may not be convex, depending on how the common randomness is instantiated. If each encoder shares an _independent_ secret key with the decoder, then the corresponding capacity region, known as the _divided-randomness_ capacity region, is not necessarily convex [GH95]. On the other hand, if both encoders and the decoder share the _same_ key, then the corresponding capacity region, known as the _random code_ capacity region, is always convex [PS19]. In our work, we do not equip any party with shared randomness. See [PS19] for a more detailed discussion on the nonconvexity of the capacity region of AVMACs.
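The rectangle closure implied by Observation 13 can be made concrete. The sketch below is our illustration, not from the paper: the achievable region is modeled as the union of the closed rectangles dominated by a finite set of achievable corner pairs, and the midpoint check exhibits the nonconvexity of Item 2 of Theorem 19.

```python
def achievable_closure(rate_pairs):
    """Given a finite set of achievable rate pairs, Observation 13 implies
    every coordinatewise-dominated pair is also achievable. Return a
    membership test for this union of closed rectangles (illustrative)."""
    def achievable(query):
        r1, r2 = query
        return any(0 <= r1 <= a1 and 0 <= r2 <= a2 for (a1, a2) in rate_pairs)
    return achievable

# Item 2 of Theorem 19: only the axis pairs (0.5, 0) and (0, 0.5) achievable.
ach = achievable_closure([(0.5, 0.0), (0.0, 0.5)])
# The midpoint of two achievable pairs need not be achievable: nonconvexity.
print(ach((0.5, 0.0)), ach((0.0, 0.5)), ach((0.25, 0.25)))
```

Here the region is exactly the two perpendicular segments mentioned above, and the midpoint $(0.25, 0.25)$ lies outside it.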
### XI-B Comparison of our results with [PS19] on (oblivious) AVMACs We compare below our results with the parallel results by Pereg and Steinberg on _oblivious_ AVMACs. For simplicity, we only compare the characterizations of _positivity_ of capacities. Specifically, an oblivious AVMAC is a general adversarial MAC with input and state constraints and an oblivious adversary who does _not_ know the transmitted sequences from any of the encoders. As with many other results in the AVC literature, their characterization involves the oblivious analog of confusability known as _symmetrizability_. Proper notions of _first marginal symmetrizability_, _second marginal symmetrizability_ and _joint symmetrizability_ (denoted in their notation by _symmetrizability-${\mathcal{X}}_{1}|{\mathcal{X}}_{2}$_, _symmetrizability-${\mathcal{X}}_{2}|{\mathcal{X}}_{1}$_ and _symmetrizability-${\mathcal{X}}_{1}\times{\mathcal{X}}_{2}$_ respectively) were introduced and were shown to characterize the capacity positivity. See Table II below.

Cases | non-joint symmetrizability | non-first marginal symmetrizability | non-second marginal symmetrizability | Capacity region
---|---|---|---|---
Case (1) | $\checkmark$ | $\checkmark$ | $\checkmark$ | $(+,+),(+,0),(0,+),(0,0)$
Case (2) | $\checkmark$ | $\checkmark$ | $\times$ | $(+,0),(0,0)$
Case (3) | $\checkmark$ | $\times$ | $\checkmark$ | $(0,+),(0,0)$
Case (4) | $\times$ | $?$ | $?$ | $(0,0)$
Case (5) | $\checkmark$ | $\times$ | $\times$ | $(0,0)$

TABLE II: Results in [PS19] on capacity positivity of oblivious AVMACs. In the table, “$\checkmark$” (resp. “$\times$”) means the corresponding non-symmetrizability condition is satisfied (resp. unsatisfied). A question mark “$?$” means the condition may be either satisfied or unsatisfied; the conclusion holds regardless. As noted, non-joint symmetrizability is a necessary condition for any positive achievable rate. Intuitively, one should think of symmetrizability as the oblivious analog of confusability defined in Section IX.
However, in the AVMAC setting, due to the “independence” between the jammer and the encoders, the formal definition of symmetrizability does not appear to be a straightforward adjustment of Definition 11. As a result, the characterization of positivity in [PS19] does not exactly parallel ours. An informal analogy between Pereg and Steinberg’s symmetrizability and our confusability is as follows. Non-first (resp. -second) marginal symmetrizability corresponds to ${\mathcal{G}}_{{1}}\left(P_{1},P_{2}\right)\setminus{\mathcal{K}}_{{1}}\left(P_{1},P_{2}\right)\neq\emptyset$ (resp. ${\mathcal{G}}_{{2}}\left(P_{1},P_{2}\right)\setminus{\mathcal{K}}_{{2}}\left(P_{1},P_{2}\right)\neq\emptyset$). Non-joint symmetrizability corresponds to ${\mathcal{G}}_{{1,2}}\left(P_{1},P_{2}\right)\setminus{\mathcal{K}}_{{1,2}}\left(P_{1},P_{2}\right)\neq\emptyset$. However, one gets _wrong_ results (for Items 1 and 2 in particular) if one translates the oblivious results verbatim to the omniscient setting using the aforementioned informal correspondence. In the AVMAC setting, non-joint symmetrizability is a necessary condition for the existence of $R_{1}>0$ or $R_{2}>0$. As a consequence, there is no situation in which $R_{1}>0$ or $R_{2}>0$ can be achieved separately yet not simultaneously (Item 2 in Theorem 19). In the omniscient setting, the condition that determines the possibility of $(R_{1},R_{2})$ with $R_{1}>0,R_{2}>0$ is in terms of ${\mathcal{G}}\left(P_{1},P_{2}\right)$ rather than ${\mathcal{G}}_{{1,2}}\left(P_{1},P_{2}\right)\setminus{\mathcal{K}}_{{1,2}}\left(P_{1},P_{2}\right)$. Communication at positive rates for both encoders simultaneously may not be possible even if ${\mathcal{G}}_{{1,2}}\left(P_{1},P_{2}\right)\setminus{\mathcal{K}}_{{1,2}}\left(P_{1},P_{2}\right)\neq\emptyset$.
It is possible only if there is a _single_ good distribution (as per Definition 15) that is simultaneously non-jointly symmetrizable and non-marginally symmetrizable (for both transmitters). ## XII Overview of proof techniques In this section we overview the proof techniques for establishing Theorem 19. Since there are cases where both/exactly one/none of the transmitters can achieve positive rates, we have to divide the analysis into several cases. Nevertheless, the proofs for different cases share roughly the same structure. In what follows, we briefly introduce the ideas behind the achievability part and the converse part separately. ### XII-A Proof techniques for achievability To show positive achievable rates under the conditions of Lemma 23, we use the standard method of random coding with expurgation. The conditions in Lemma 23 can be intuitively interpreted as the existence of _good_ distributions (according to Definition 15) that are not _bad_ (according to Definition 11). If one is able to find a _product_ distribution (which is always good by definition) that is outside the confusability sets, then one can simply sample positive-rate codes whose entries are i.i.d. according to the distribution. By concentration of measure, the joint type of any codeword tuple is tightly concentrated around the product distribution. In particular, any joint type is outside the confusability sets with high probability. Now by the large deviation principle, if the code rates are sufficiently small, a union bound over all codeword tuples allows us to conclude that no joint type is confusable and hence the whole code pair attains zero error with high probability. This gives Lemma 22. Lemma 22 can be strengthened in the following two ways. Firstly, even if product distributions are confusable, if one can find _mixtures_ of product distributions that are outside the confusability sets, then positive rates are still achievable. Here the additional idea is _time-sharing_.
Recall that a good distribution is a convex combination of product distributions. (Importantly, the components of such a convex combination do not have to satisfy the input constraints. This is why it is possible to find mixtures of product distributions that are non-confusable even if all _feasible_ product distributions are confusable. See Remark 14.) The coefficients of the convex combination can be regarded as giving a time-sharing sequence. We then sample random codes in the following way. All codewords are chopped up into chunks of lengths proportional to the convex combination coefficients. Entries of all codewords in a particular chunk are i.i.d. according to the corresponding component distribution of the convex combination. Effectively it is as if we convexly concatenate multiple codebooks of shorter lengths sampled from different product distributions. Again by a Chernoff-union argument, all joint types are tightly concentrated around the mixture distribution provided that the rates are sufficiently small. Since the mixture distribution itself is outside the confusability sets, the code pair attains zero error with high probability. This gives Lemma 23. Such a code construction is known as _coded time-sharing_ (see Remark 14). Secondly, by carefully analyzing the large deviation exponent, one can in fact obtain _inner bounds_ on the capacity region. To this end, one can no longer simply set the rates to be sufficiently small so as to admit a union bound. A standard trick is to _remove_ (a.k.a. _expurgate_) one codeword from each confusable pair. Using Sanov’s theorem (Lemma 3), one can get the exact exponent of the probability of sampling a confusable pair. One can then set the rates so as to guarantee that the (expected) number of expurgated codewords is at most, say, half of the code size. This ensures that the expurgation process does not hurt the rate. This gives Lemma 24.
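The chunked sampling behind coded time-sharing can be sketched numerically. The following is our toy illustration (not the paper's construction): one codeword is sampled chunk by chunk from two assumed component distributions, and its empirical type is compared against the mixture; the seed, alphabet, and parameters are ours.

```python
import random

random.seed(0)  # fixed seed so the simulation is reproducible

def coded_time_sharing_codeword(n, components, weights):
    """Sample one length-n codeword chunk by chunk: chunk i has length
    roughly weights[i]*n and its entries are i.i.d. from components[i].
    The empirical type of the whole codeword then concentrates around
    the mixture sum_i weights[i]*components[i]."""
    lengths = [int(round(w * n)) for w in weights[:-1]]
    lengths.append(n - sum(lengths))  # absorb rounding error in last chunk
    word = []
    for P, m in zip(components, lengths):
        word.extend(random.choices(range(len(P)), weights=P, k=m))
    return word

# Two component distributions on a ternary alphabet, mixed 60/40.
P_a, P_b = [0.8, 0.1, 0.1], [0.1, 0.1, 0.8]
n = 3000
x = coded_time_sharing_codeword(n, [P_a, P_b], [0.6, 0.4])
emp_type = [x.count(a) / n for a in range(3)]
mixture = [0.6 * pa + 0.4 * pb for pa, pb in zip(P_a, P_b)]
print(max(abs(t - m) for t, m in zip(emp_type, mixture)))  # small deviation
```

The printed deviation shrinks as $n$ grows, mirroring the Chernoff-union concentration of joint types around the mixture distribution.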
We remark that if one wishes to achieve a rate pair with two positive rates, then the above argument requires one to expurgate codewords that contribute to (at least one of) jointly confusable pairs, first marginally confusable pairs or second marginally confusable pairs. We believe that such an expurgation strategy is pessimistic and higher rates may be obtained using more clever expurgation strategies. See Item 4 in Section XVI. ### XII-B Proof techniques for converse The converse part is considerably more involved. At a high level, it is inspired by the classical Plotkin bound in coding theory and follows a similar structure to [WBBJ19]. However, due to the multiuser nature of the channel, the case analysis is more delicate. The basic proof strategy consists of the following components. Given any code pair $({\mathcal{C}}_{1},{\mathcal{C}}_{2})$ that attains zero error, we would like to show that they have zero rate(s) once the conditions in Theorem 20 are satisfied. To this end, we follow the steps below. 1. 1. First, we extract a subcode pair $({\mathcal{C}}_{1}^{\prime},{\mathcal{C}}_{2}^{\prime})$ which has nontrivial sizes and is “equicoupled”. More specifically, for one thing, the code sizes are mildly large in the sense that $|{\mathcal{C}}_{i}^{\prime}|\xrightarrow{|{\mathcal{C}}_{i}|\to\infty}\infty$ for $i=1,2$. In fact $|{\mathcal{C}}_{i}^{\prime}|=f(|{\mathcal{C}}_{i}|)$ where $f(\cdot)$ is the inverse Ramsey number, which grows extremely slowly. However, this is enough for our purposes since it will be ultimately proved that $\max\left\\{|{\mathcal{C}}_{1}^{\prime}|,|{\mathcal{C}}_{2}^{\prime}|\right\\}\leq C$ for some _constant_ $C>0$ independent of $n$. Then $\max\left\\{|{\mathcal{C}}_{1}|,|{\mathcal{C}}_{2}|\right\\}\leq f^{-1}(C)$, which is a huge constant. However, this is already more than sufficient to imply zero rates.
For another (more important) thing, the subcode pair we obtained is highly structured in the sense that the joint type of any codeword tuple from the subcode pair is approximately the same (hence the subcodes are at times called _equicoupled_ in this paper). This follows from Ramsey’s theorem (Theorem 26). At the cost of losing rates (which is actually fine), we localize some highly regular structures into a tiny subcode pair. 2. 2. We then focus on the subcode pair. It is unclear whether or not the distribution that all joint types are concentrated around is symmetric (as per Definition 14). However, viewing the codebook as a sequence of random variables, we can show (in Section XIV-B) that the size of the equicoupled subcode must be small if the distribution is asymmetric. This, after some preprocessing of the sequence of random variables, follows from a classical theorem by Komlós (Theorem 29). 3. 3. Now we assume that the equicoupled subcode is equipped with a symmetric distribution. Since we started with a code pair of zero error, all joint types are outside the confusability sets. Hence by the equicoupledness property, the associated distribution is outside the confusability sets as well. By the assumptions of Theorem 20, this distribution cannot be good (as per Definition 15) since the sets of good distributions are assumed to be subsets of the confusability sets. By the duality (Theorem 18) between the sets of good and “co-good” tensors (Definition 16), we can find a _witness_ (which itself is a co-good tensor) of the non-goodness of the distribution. This finally allows us to apply a Plotkin-type double counting trick. Specifically, we upper and lower bound the following crucial quantity (Equation 96): the average inner product between the witness and the joint types in the subcodes. Careful calculations give us upper and lower bounds on this quantity. Contrasting these bounds further gives us an upper bound on the code sizes as promised. 
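The Ramsey-style extraction in Step 1 can be illustrated on a toy scale. The sketch below is our own exponential-time stand-in, not the paper's argument: it colors codeword pairs by their pairwise joint type and brute-forces the largest subcode in which all pairs share one color; the sample code and function names are ours.

```python
from itertools import combinations

def joint_type(x, y):
    """Pairwise joint type of two equal-length codewords, as a hashable tuple."""
    n = len(x)
    counts = {}
    for a, b in zip(x, y):
        counts[(a, b)] = counts.get((a, b), 0) + 1
    return tuple(sorted((k, v / n) for k, v in counts.items()))

def equicoupled_subcode(code):
    """Brute-force the largest subcode whose codeword pairs all share the
    same pairwise joint type -- a toy stand-in for the Ramsey argument:
    monochromatic cliques always exist, but may be much smaller than code."""
    for r in range(len(code), 1, -1):
        for sub in combinations(range(len(code)), r):
            types = {joint_type(code[i], code[j])
                     for i, j in combinations(sub, 2)}
            if len(types) == 1:  # all pairs "colored" identically
                return [code[i] for i in sub]
    return []

code = [(0, 0, 1, 1), (0, 1, 0, 1), (1, 0, 1, 0), (1, 1, 0, 0)]
sub = equicoupled_subcode(code)
print(len(sub))
```

In this four-word example the pair colors are not all equal, so the extracted equicoupled subcode is strictly smaller than the original code, mirroring the rate loss the proof tolerates.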
A similar argument can be adapted to the marginal case where exactly one transmitter suffers from zero capacity. ## XIII Achievability We need the following lemma, which shows concentration of the size of the constant composition component of a random code. The proof follows from the Chernoff bound (Lemma 2) and can be found in, e.g., [ZBJ20]. ###### Lemma 21. Let ${\mathcal{C}}\subseteq{\mathcal{X}}^{n}$ be a random code that consists of codewords ${\underline{\mathbf{x}}}_{1},\cdots,{\underline{\mathbf{x}}}_{M}$ i.i.d. according to $P_{\mathbf{x}}^{\otimes n}$ for some $P_{\mathbf{x}}\in\Delta({\mathcal{X}})$. Let ${\mathcal{C}}^{\prime}\subseteq{\mathcal{C}}$ be the $P_{\mathbf{x}}$-constant composition subcode of ${\mathcal{C}}$. Then $\displaystyle\Pr\left[\left|{\mathcal{C}}^{\prime}\right|\notin(1\pm 1/2)\frac{M}{\nu(P_{\mathbf{x}},n)}\right]\leq$ $\displaystyle 2\exp\left(-\frac{M}{12\nu(P_{\mathbf{x}},n)}\right).$ ### XIII-A Positive achievable rates via product distributions ###### Lemma 22 (Positive achievable rates via product distributions). Let $(P_{1},P_{2})\in\Gamma_{1}\times\Gamma_{2}$. 1. 1. If $P_{1}^{\otimes 2}\otimes P_{2}^{\otimes 2}\notin{\mathcal{K}}_{{1,2}}\left(P_{1},P_{2}\right)$, $P_{1}^{\otimes 2}\otimes P_{2}\notin{\mathcal{K}}_{{1}}\left(P_{1},P_{2}\right)$ and $P_{1}\otimes P_{2}^{\otimes 2}\notin{\mathcal{K}}_{{2}}\left(P_{1},P_{2}\right)$, then there exist achievable rate pairs $(R_{1},R_{2})$ such that $R_{1}>0,R_{2}>0$. 2. 2. If $P_{1}^{\otimes 2}\otimes P_{2}\notin{\mathcal{K}}_{{1}}\left(P_{1},P_{2}\right)$, then there exist achievable rate pairs $(R_{1},R_{2})$ such that $R_{1}>0,R_{2}=0$. 3. 3. If $P_{1}\otimes P_{2}^{\otimes 2}\notin{\mathcal{K}}_{{2}}\left(P_{1},P_{2}\right)$, then there exist achievable rate pairs $(R_{1},R_{2})$ such that $R_{1}=0,R_{2}>0$. ###### Proof of Item 1 in Lemma 22. Assume that both $P_{1}$ and $P_{2}$ have no zero atoms.
Sample a random code pair $\left({\mathcal{C}}_{1},{\mathcal{C}}_{2}\right)\subseteq{\mathcal{X}}_{1}^{n}\times{\mathcal{X}}_{2}^{n}$ of sizes $(M_{1},M_{2})$, where ${\mathcal{C}}_{i}$ consists of codewords ${\underline{\mathbf{x}}}^{i}_{1},\cdots,{\underline{\mathbf{x}}}^{i}_{M_{i}}$ i.i.d. according to $P_{i}^{\otimes n}$ ($i=1,2$). Note that for any $1\leq i_{1}<i_{2}\leq M_{1}$ and $1\leq j_{1}<j_{2}\leq M_{2}$, $\displaystyle\mathbb{E}\left[\tau_{{\underline{\mathbf{x}}}^{1}_{i_{1}},{\underline{\mathbf{x}}}^{1}_{i_{2}},{\underline{\mathbf{x}}}^{2}_{j_{1}},{\underline{\mathbf{x}}}^{2}_{j_{2}}}\right]=$ $\displaystyle P_{1}^{\otimes 2}\otimes P_{2}^{\otimes 2}.$ (57) To see this, for any $(x^{1}_{1},x^{1}_{2},x^{2}_{1},x^{2}_{2})\in{\mathcal{X}}_{1}^{2}\times{\mathcal{X}}_{2}^{2}$, $\displaystyle\mathbb{E}\left[\tau_{{\underline{\mathbf{x}}}^{1}_{i_{1}},{\underline{\mathbf{x}}}^{1}_{i_{2}},{\underline{\mathbf{x}}}^{2}_{j_{1}},{\underline{\mathbf{x}}}^{2}_{j_{2}}}\right](x^{1}_{1},x^{1}_{2},x^{2}_{1},x^{2}_{2})=$ $\displaystyle\frac{1}{n}\sum_{k=1}^{n}\mathbb{E}\left[\mathds{1}{\left\\{{\underline{\mathbf{x}}}^{1}_{i_{1}}(k)=x^{1}_{1},{\underline{\mathbf{x}}}^{1}_{i_{2}}(k)=x^{1}_{2},{\underline{\mathbf{x}}}^{2}_{j_{1}}(k)=x^{2}_{1},{\underline{\mathbf{x}}}^{2}_{j_{2}}(k)=x^{2}_{2}\right\\}}\right]$ $\displaystyle=$ $\displaystyle\frac{1}{n}\sum_{k=1}^{n}\mathbb{E}\left[\mathds{1}{\left\\{{\underline{\mathbf{x}}}^{1}_{i_{1}}(k)=x^{1}_{1}\right\\}}\right]\mathbb{E}\left[\mathds{1}{\left\\{{\underline{\mathbf{x}}}^{1}_{i_{2}}(k)=x^{1}_{2}\right\\}}\right]\mathbb{E}\left[\mathds{1}{\left\\{{\underline{\mathbf{x}}}^{2}_{j_{1}}(k)=x^{2}_{1}\right\\}}\right]\mathbb{E}\left[\mathds{1}{\left\\{{\underline{\mathbf{x}}}^{2}_{j_{2}}(k)=x^{2}_{2}\right\\}}\right]$ (58) $\displaystyle=$
$\displaystyle\frac{1}{n}\sum_{k=1}^{n}\Pr\left[{\underline{\mathbf{x}}}^{1}_{i_{1}}(k)=x^{1}_{1}\right]\Pr\left[{\underline{\mathbf{x}}}^{1}_{i_{2}}(k)=x^{1}_{2}\right]\Pr\left[{\underline{\mathbf{x}}}^{2}_{j_{1}}(k)=x^{2}_{1}\right]\Pr\left[{\underline{\mathbf{x}}}^{2}_{j_{2}}(k)=x^{2}_{2}\right]$ $\displaystyle=$ $\displaystyle P_{1}(x^{1}_{1})P_{1}(x^{1}_{2})P_{2}(x^{2}_{1})P_{2}(x^{2}_{2}),$ (59) where Equation 58 follows since each codeword is sampled independently; Equation 59 follows since the entries of each codeword are identically distributed. Similarly, $\displaystyle\mathbb{E}\left[\tau_{{\underline{\mathbf{x}}}^{1}_{i_{1}},{\underline{\mathbf{x}}}^{1}_{i_{2}},{\underline{\mathbf{x}}}^{2}_{j_{1}}}\right]=$ $\displaystyle P_{1}^{\otimes 2}\otimes P_{2},\quad\mathbb{E}\left[\tau_{{\underline{\mathbf{x}}}^{1}_{i_{1}},{\underline{\mathbf{x}}}^{2}_{j_{1}},{\underline{\mathbf{x}}}^{2}_{j_{2}}}\right]=P_{1}\otimes P_{2}^{\otimes 2}.$ Let ${\mathcal{C}}_{i}^{\prime}$ be the $P_{i}$-constant composition subcode of ${\mathcal{C}}_{i}$ ($i=1,2$). By Lemma 21, for $i=1,2$, $\displaystyle\Pr\left[\left|{\mathcal{C}}_{i}^{\prime}\right|\notin(1\pm 1/2)\frac{M_{i}}{\nu(P_{i},n)}\right]\leq$ $\displaystyle 2\exp\left(-\frac{M_{i}}{12\nu(P_{i},n)}\right).$ (60) Let $\displaystyle\rho_{1,2}\coloneqq$ $\displaystyle d_{{\infty}}\left(P_{1}^{\otimes 2}\otimes P_{2}^{\otimes 2},{\mathcal{K}}_{{1,2}}\left(P_{1},P_{2}\right)\right),$ (61) $\displaystyle\rho_{1}\coloneqq$ $\displaystyle d_{{\infty}}\left(P_{1}^{\otimes 2}\otimes P_{2},{\mathcal{K}}_{{1}}\left(P_{1},P_{2}\right)\right),$ $\displaystyle\rho_{2}\coloneqq$ $\displaystyle d_{{\infty}}\left(P_{1}\otimes P_{2}^{\otimes 2},{\mathcal{K}}_{{2}}\left(P_{1},P_{2}\right)\right),$ $\displaystyle\varepsilon\coloneqq$ $\displaystyle\frac{1}{2}\min\left\\{\rho_{1,2},\rho_{1},\rho_{2}\right\\}.$ By the assumptions of Item 1, all the above quantities are _strictly_ positive.
Since $\varepsilon<\rho_{1,2}$, for any $1\leq i_{1}<i_{2}\leq M_{1}$ and $1\leq j_{1}<j_{2}\leq M_{2}$, $\displaystyle\Pr\left[\tau_{{\underline{\mathbf{x}}}^{1}_{i_{1}},{\underline{\mathbf{x}}}^{1}_{i_{2}},{\underline{\mathbf{x}}}^{2}_{j_{1}},{\underline{\mathbf{x}}}^{2}_{j_{2}}}\in{\mathcal{K}}_{{1,2}}\left(P_{1},P_{2}\right)\right]$ $\displaystyle\leq$ $\displaystyle\Pr\left[d_{{\infty}}\left(\tau_{{\underline{\mathbf{x}}}^{1}_{i_{1}},{\underline{\mathbf{x}}}^{1}_{i_{2}},{\underline{\mathbf{x}}}^{2}_{j_{1}},{\underline{\mathbf{x}}}^{2}_{j_{2}}},P_{1}^{\otimes 2}\otimes P_{2}^{\otimes 2}\right)\geq\varepsilon\right]$ $\displaystyle=$ $\displaystyle\Pr\left[\exists(x^{1}_{1},x^{1}_{2},x^{2}_{1},x^{2}_{2})\in{\mathcal{X}}_{1}^{2}\times{\mathcal{X}}_{2}^{2},\;\left|\tau_{{\underline{\mathbf{x}}}^{1}_{i_{1}},{\underline{\mathbf{x}}}^{1}_{i_{2}},{\underline{\mathbf{x}}}^{2}_{j_{1}},{\underline{\mathbf{x}}}^{2}_{j_{2}}}(x^{1}_{1},x^{1}_{2},x^{2}_{1},x^{2}_{2})-P_{1}(x^{1}_{1})P_{1}(x^{1}_{2})P_{2}(x^{2}_{1})P_{2}(x^{2}_{2})\right|\geq\varepsilon\right]$ $\displaystyle\leq$ $\displaystyle\sum_{(x^{1}_{1},x^{1}_{2},x^{2}_{1},x^{2}_{2})\in{\mathcal{X}}_{1}^{2}\times{\mathcal{X}}_{2}^{2}}\Pr\left[\left|\sum_{k=1}^{n}\mathds{1}{\left\\{{\underline{\mathbf{x}}}^{1}_{i_{1}}(k)=x^{1}_{1},{\underline{\mathbf{x}}}^{1}_{i_{2}}(k)=x^{1}_{2},{\underline{\mathbf{x}}}^{2}_{j_{1}}(k)=x^{2}_{1},{\underline{\mathbf{x}}}^{2}_{j_{2}}(k)=x^{2}_{2}\right\\}}-nP_{1}(x^{1}_{1})P_{1}(x^{1}_{2})P_{2}(x^{2}_{1})P_{2}(x^{2}_{2})\right|\geq n\varepsilon\right]$ $\displaystyle=$ $\displaystyle\sum_{(x^{1}_{1},x^{1}_{2},x^{2}_{1},x^{2}_{2})\in{\mathcal{X}}_{1}^{2}\times{\mathcal{X}}_{2}^{2}}\Pr\left[\sum_{k=1}^{n}\mathds{1}{\left\\{{\underline{\mathbf{x}}}^{1}_{i_{1}}(k)=x^{1}_{1},{\underline{\mathbf{x}}}^{1}_{i_{2}}(k)=x^{1}_{2},{\underline{\mathbf{x}}}^{2}_{j_{1}}(k)=x^{2}_{1},{\underline{\mathbf{x}}}^{2}_{j_{2}}(k)=x^{2}_{2}\right\\}}\notin\left(1\pm\frac{\varepsilon}{\mu}\right)n\mu\right]$ (62) $\displaystyle\leq$ $\displaystyle\sum_{(x^{1}_{1},x^{1}_{2},x^{2}_{1},x^{2}_{2})\in{\mathcal{X}}_{1}^{2}\times{\mathcal{X}}_{2}^{2}}2\exp\left(-\frac{1}{3}\left(\frac{\varepsilon}{\mu}\right)^{2}n\mu\right)$ (63) $\displaystyle=$ $\displaystyle\sum_{(x^{1}_{1},x^{1}_{2},x^{2}_{1},x^{2}_{2})\in{\mathcal{X}}_{1}^{2}\times{\mathcal{X}}_{2}^{2}}2\exp\left(-\frac{n\varepsilon^{2}}{3P_{1}(x^{1}_{1})P_{1}(x^{1}_{2})P_{2}(x^{2}_{1})P_{2}(x^{2}_{2})}\right)$ (64) $\displaystyle\leq$ $\displaystyle\left|{\mathcal{X}}_{1}\right|^{2}\left|{\mathcal{X}}_{2}\right|^{2}\cdot 2\exp\left(-\frac{n\varepsilon^{2}}{3}\right).$ (65) In Equation 62, we define $\displaystyle\mu=\mu(x^{1}_{1},x^{1}_{2},x^{2}_{1},x^{2}_{2})\coloneqq\mathbb{E}\left[\tau_{{\underline{\mathbf{x}}}^{1}_{i_{1}},{\underline{\mathbf{x}}}^{1}_{i_{2}},{\underline{\mathbf{x}}}^{2}_{j_{1}},{\underline{\mathbf{x}}}^{2}_{j_{2}}}\right](x^{1}_{1},x^{1}_{2},x^{2}_{1},x^{2}_{2})=P_{1}(x^{1}_{1})P_{1}(x^{1}_{2})P_{2}(x^{2}_{1})P_{2}(x^{2}_{2})>0.$ Equation 63 is by Lemma 2. In Equation 64, we used Equation 57. In Equation 65, we used the trivial bound: for $i=1,2$, $P_{i}(x)\leq 1$ for $x\in{\mathcal{X}}_{i}$. We only need to consider ordered pairs $i_{1}<i_{2}$ and $j_{1}<j_{2}$, since by Item 2 of Proposition 15, if $\tau_{{\underline{x}}^{1}_{i_{1}},{\underline{x}}^{1}_{i_{2}},{\underline{x}}^{2}_{j_{1}},{\underline{x}}^{2}_{j_{2}}}\in{\mathcal{K}}_{{1,2}}\left(P_{1},P_{2}\right)$ then $\tau_{{\underline{x}}^{1}_{i_{2}},{\underline{x}}^{1}_{i_{1}},{\underline{x}}^{2}_{j_{2}},{\underline{x}}^{2}_{j_{1}}}\in{\mathcal{K}}_{{1,2}}\left(P_{1},P_{2}\right)$.
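The concentration of joint types around $P_{1}^{\otimes 2}\otimes P_{2}^{\otimes 2}$ (Equation 57 and the Chernoff bound above) can be checked numerically. The sketch below is our illustration with an assumed binary alphabet, seed, and block length; it samples two codewords per user and measures the $d_{\infty}$ distance between their empirical joint type and the product distribution.

```python
import random

random.seed(1)  # fixed seed for reproducibility

def joint_type_4(w1, w2, w3, w4):
    """Empirical joint type of four equal-length codewords, as a dict
    mapping (x1_1, x1_2, x2_1, x2_2) to its frequency."""
    n = len(w1)
    tau = {}
    for cols in zip(w1, w2, w3, w4):
        tau[cols] = tau.get(cols, 0) + 1 / n
    return tau

n = 4000
P1, P2 = [0.3, 0.7], [0.6, 0.4]
sample = lambda P: random.choices([0, 1], weights=P, k=n)
# Two codewords per user, entries i.i.d. from P1 resp. P2 (cf. Equation 57).
w = [sample(P1), sample(P1), sample(P2), sample(P2)]
tau = joint_type_4(*w)
dev = max(abs(tau.get((a, b, c, d), 0.0) - P1[a] * P1[b] * P2[c] * P2[d])
          for a in (0, 1) for b in (0, 1) for c in (0, 1) for d in (0, 1))
print(dev)  # d_infty to P1^{x2} (x) P2^{x2}; exponentially small in n w.h.p.
```

For the union-bound step that follows, the point is that this deviation exceeds $\varepsilon$ only with probability $\exp(-\Omega(n))$, uniformly over codeword tuples.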
By the union bound, $\displaystyle\Pr\left[\exists((i_{1},i_{2}),(j_{1},j_{2}))\in\binom{[|{\mathcal{C}}_{1}^{\prime}|]}{2}\times\binom{[|{\mathcal{C}}_{2}^{\prime}|]}{2},\;\tau_{{\underline{\mathbf{x}}}^{1}_{i_{1}},{\underline{\mathbf{x}}}^{1}_{i_{2}},{\underline{\mathbf{x}}}^{2}_{j_{1}},{\underline{\mathbf{x}}}^{2}_{j_{2}}}\in{\mathcal{K}}_{{1,2}}\left(P_{1},P_{2}\right)\right]$ $\displaystyle\leq$ $\displaystyle\binom{M_{1}}{2}\binom{M_{2}}{2}\cdot\left|{\mathcal{X}}_{1}\right|^{2}\left|{\mathcal{X}}_{2}\right|^{2}\cdot 2\exp\left(-\frac{n\varepsilon^{2}}{3}\right)$ $\displaystyle\leq$ $\displaystyle\exp\left(n\left(2R_{1}\ln\left|{\mathcal{X}}_{1}\right|+2R_{2}\ln\left|{\mathcal{X}}_{2}\right|-\varepsilon^{2}/3+o(1)\right)\right).$ (66) Similar Chernoff-union argument yields $\displaystyle\Pr\left[\exists((i_{1},i_{2}),j)\in\binom{[|{\mathcal{C}}_{1}^{\prime}|]}{2}\times[|{\mathcal{C}}_{2}^{\prime}|],\;{\tau_{{\underline{\mathbf{x}}}^{1}_{i_{1}},{\underline{\mathbf{x}}}^{1}_{i_{2}},{\underline{\mathbf{x}}}^{2}_{j}}}\in{\mathcal{K}}_{{1}}\left(P_{1},P_{2}\right)\right]\leq$ $\displaystyle\exp\left(n\left(2R_{1}\ln\left|{\mathcal{X}}_{1}\right|+R_{2}\ln\left|{\mathcal{X}}_{2}\right|-\varepsilon^{2}/3+o(1)\right)\right),$ (67) $\displaystyle\Pr\left[\exists(i,(j_{1},j_{2}))\in[|{\mathcal{C}}_{1}^{\prime}|]\times\binom{[|{\mathcal{C}}_{2}^{\prime}|]}{2},\;{\tau_{{\underline{\mathbf{x}}}^{1}_{i},{\underline{\mathbf{x}}}^{2}_{j_{1}},{\underline{\mathbf{x}}}^{2}_{j_{2}}}}\in{\mathcal{K}}_{{2}}\left(P_{1},P_{2}\right)\right]\leq$ $\displaystyle\exp\left(n\left(R_{1}\ln\left|{\mathcal{X}}_{1}\right|+2R_{2}\ln\left|{\mathcal{X}}_{2}\right|-\varepsilon^{2}/3+o(1)\right)\right).$ (68) It suffices to take $(R_{1},R_{2})$ such that $2R_{1}\ln\left|{\mathcal{X}}_{1}\right|+2R_{2}\ln\left|{\mathcal{X}}_{2}\right|-\varepsilon^{2}/3<0$.
For instance, one can take $R_{1}=\frac{\varepsilon^{2}}{24\ln\left|{\mathcal{X}}_{1}\right|}$ and $R_{2}=\frac{\varepsilon^{2}}{24\ln\left|{\mathcal{X}}_{2}\right|}$. Then Equations 66, 67 and 68 are all $\exp\left(-\Omega(n)\right)$. Finally, combining Equations 60, 66, 67 and 68, we get that with probability $1-\exp(-\Omega(n))$, $({\mathcal{C}}_{1}^{\prime},{\mathcal{C}}_{2}^{\prime})$ is a good code pair of rates $R({\mathcal{C}}_{1}^{\prime})\asymp R_{1}>0$ and $R({\mathcal{C}}_{2}^{\prime})\asymp R_{2}>0$. ∎ ###### Proof of Items 2 and 3 in Lemma 22. We only prove Item 2; Item 3 follows similarly once the roles of user one and user two are interchanged. Suppose $P_{1}^{\otimes 2}\otimes P_{2}\notin{\mathcal{K}}_{{1}}\left(P_{1},P_{2}\right)$. We construct a codebook pair $({\mathcal{C}}_{1},{\mathcal{C}}_{2})$ as follows. The codebook ${\mathcal{C}}_{2}$ consists of only one (arbitrary) codeword ${\underline{x}}^{2}\in{\mathcal{X}}_{2}^{n}$ of type $P_{2}$. Clearly, $R({\mathcal{C}}_{2})\to 0$ as $n\to\infty$. Indeed, user two cannot even transmit a single bit reliably through the channel. The codebook ${\mathcal{C}}_{1}\in{\mathcal{X}}_{1}^{M\times n}$ consists of $M$ codewords ${\underline{\mathbf{x}}}^{1}_{1},\cdots,{\underline{\mathbf{x}}}^{1}_{M}$ i.i.d. according to $P_{1}^{\otimes n}$. Note that for all $1\leq i_{1}<i_{2}\leq M$, $\mathbb{E}\left[\tau_{{\underline{\mathbf{x}}}^{1}_{i_{1}},{\underline{\mathbf{x}}}^{1}_{i_{2}},{\underline{x}}^{2}}\right]=P_{1}^{\otimes 2}\otimes P_{2}$.
Indeed, for any $(x^{1}_{1},x^{1}_{2},x^{2})\in{\mathcal{X}}_{1}^{2}\times{\mathcal{X}}_{2}$, $\displaystyle\mathbb{E}\left[\tau_{{\underline{\mathbf{x}}}^{1}_{i_{1}},{\underline{\mathbf{x}}}^{1}_{i_{2}},{\underline{x}}^{2}}\right](x^{1}_{1},x^{1}_{2},x^{2})=$ $\displaystyle\frac{1}{n}\sum_{k=1}^{n}\mathbb{E}\left[\mathds{1}{\left\\{{\underline{\mathbf{x}}}^{1}_{i_{1}}(k)=x^{1}_{1},{\underline{\mathbf{x}}}^{1}_{i_{2}}(k)=x^{1}_{2},{\underline{x}}^{2}(k)=x^{2}\right\\}}\right]$ $\displaystyle=$ $\displaystyle\frac{1}{n}\sum_{k=1}^{n}\mathbb{E}\left[\mathds{1}{\left\\{{\underline{\mathbf{x}}}^{1}_{i_{1}}(k)=x^{1}_{1}\right\\}}\right]\mathbb{E}\left[\mathds{1}{\left\\{{\underline{\mathbf{x}}}^{1}_{i_{2}}(k)=x^{1}_{2}\right\\}}\right]\mathds{1}{\left\\{{\underline{x}}^{2}(k)=x^{2}\right\\}}$ $\displaystyle=$ $\displaystyle\Pr\left[{\underline{\mathbf{x}}}^{1}(1)=x^{1}_{1}\right]\Pr\left[{\underline{\mathbf{x}}}^{1}(1)=x^{1}_{2}\right]\frac{1}{n}\sum_{k=1}^{n}\mathds{1}{\left\\{{\underline{x}}^{2}(k)=x^{2}\right\\}}$ $\displaystyle=$ $\displaystyle P_{1}(x^{1}_{1})P_{1}(x^{1}_{2})\tau_{{\underline{x}}^{2}}(x^{2})$ $\displaystyle=$ $\displaystyle\left(P_{1}^{\otimes 2}\otimes P_{2}\right)(x^{1}_{1},x^{1}_{2},x^{2}).$ By Lemma 21, Equation 60 holds for the $P_{1}$-constant composition subcode of ${\mathcal{C}}_{1}$, denoted by ${\mathcal{C}}_{1}^{\prime}$. Therefore, ${\mathcal{C}}_{1}^{\prime}$ has asymptotically the same rate as $R({\mathcal{C}}_{1})$. We define the gap $\rho_{1}>0$ between $P_{1}^{\otimes 2}\otimes P_{2}$ and ${\mathcal{K}}_{{1}}\left(P_{1},P_{2}\right)$ in the same way as in Equation 61. Let $\varepsilon\coloneqq\rho_{1}/2$. 
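The factorization just computed can also be observed empirically. A minimal Monte Carlo sketch (hypothetical binary alphabets and distributions, not taken from the text) averages the joint type of two i.i.d. $P_{1}^{\otimes n}$ codewords with a fixed word of type $P_{2}$:

```python
import random

def joint_type(x1a, x1b, x2):
    # Empirical joint type tau over X1^2 x X2 of three equal-length words.
    n = len(x2)
    counts = {}
    for a, b, c in zip(x1a, x1b, x2):
        counts[(a, b, c)] = counts.get((a, b, c), 0.0) + 1.0 / n
    return counts

random.seed(0)
P1 = [0.3, 0.7]          # hypothetical P1 over X1 = {0, 1}
x2 = [0] * 5 + [1] * 5   # fixed word of type P2 = (0.5, 0.5)
n, trials = len(x2), 20000

avg = {}
for _ in range(trials):
    x1a = random.choices([0, 1], weights=P1, k=n)
    x1b = random.choices([0, 1], weights=P1, k=n)
    for key, v in joint_type(x1a, x1b, x2).items():
        avg[key] = avg.get(key, 0.0) + v / trials

# The average joint type factorizes as P1(a) * P1(b) * tau_{x2}(c).
for (a, b, c), v in avg.items():
    assert abs(v - P1[a] * P1[b] * 0.5) < 0.02
```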
A similar Chernoff-union-type argument as before yields $\displaystyle\Pr\left[\exists(i_{1},i_{2})\in\binom{[|{\mathcal{C}}_{1}^{\prime}|]}{2},\;\tau_{{\underline{\mathbf{x}}}^{1}_{i_{1}},{\underline{\mathbf{x}}}^{1}_{i_{2}},{\underline{x}}^{2}}\in{\mathcal{K}}_{{1}}\left(P_{1},P_{2}\right)\right]\leq$ $\displaystyle\binom{M}{2}\cdot\left|{\mathcal{X}}_{1}\right|^{2}\cdot 2\exp\left(-\frac{n\varepsilon^{2}}{3}\right)$ $\displaystyle\leq$ $\displaystyle\exp\left(n\left(2R_{1}\ln\left|{\mathcal{X}}_{1}\right|-\varepsilon^{2}/3+o(1)\right)\right).$ (69) Taking $R_{1}=\frac{\varepsilon^{2}}{12\ln\left|{\mathcal{X}}_{1}\right|}$, we get that with probability $1-2^{-\Omega(n)}$, the codebook pair $({\mathcal{C}}_{1}^{\prime},{\mathcal{C}}_{2})$ constructed above is good. ∎ ### XIII-B Positive achievable rates via mixtures of product distributions ###### Lemma 23 (Positive achievable rates via mixtures of product distributions). Fix input distributions $(P_{1},P_{2})\in\Gamma_{1}\times\Gamma_{2}$. 1. 1. If ${\mathcal{G}}\left(P_{1},P_{2}\right)\neq\emptyset$, then there exist achievable rate pairs $(R_{1},R_{2})$ such that $R_{1}>0,R_{2}>0$. 2. 2. If ${\mathcal{G}}_{{1}}\left(P_{1},P_{2}\right)\setminus{\mathcal{K}}_{{1}}\left(P_{1},P_{2}\right)\neq\emptyset$, then there exist achievable rate pairs $(R_{1},0)$ such that $R_{1}>0$. 3. 3. If ${\mathcal{G}}_{{2}}\left(P_{1},P_{2}\right)\setminus{\mathcal{K}}_{{2}}\left(P_{1},P_{2}\right)\neq\emptyset$, then there exist achievable rate pairs $(0,R_{2})$ such that $R_{2}>0$. ###### Proof of Item 1. By the condition in Item 1, we are able to find a distribution $P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}\in{\mathcal{G}}\left(P_{1},P_{2}\right)$.
Suppose $P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}=\sum_{{\ell}=1}^{k}\lambda_{\ell}P_{1,{\ell}}^{\otimes 2}\otimes P_{2,{\ell}}^{\otimes 2}$ for some $k\in{\mathbb{Z}}_{\geq 1}$, $\left\\{\lambda_{\ell}\right\\}_{{\ell}=1}^{k}\subset(0,1]$ with $\sum_{{\ell}=1}^{k}\lambda_{\ell}=1$ and distributions $\left\\{P_{1,{\ell}}\right\\}_{{\ell}=1}^{k}\subset\Delta({\mathcal{X}}_{1}),\left\\{P_{2,{\ell}}\right\\}_{{\ell}=1}^{k}\subset\Delta({\mathcal{X}}_{2})$. It simultaneously holds that $\displaystyle P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}\in$ $\displaystyle{\mathcal{G}}_{{1,2}}\left(P_{1},P_{2}\right)\setminus{\mathcal{K}}_{{1,2}}\left(P_{1},P_{2}\right),$ (70) $\displaystyle P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}}\coloneqq\sum_{\ell=1}^{k}\lambda_{\ell}P_{1,\ell}^{\otimes 2}\otimes P_{2,\ell}\in$ $\displaystyle{\mathcal{G}}_{{1}}\left(P_{1},P_{2}\right)\setminus{\mathcal{K}}_{{1}}\left(P_{1},P_{2}\right),$ $\displaystyle P_{{\mathbf{x}}^{1},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}\coloneqq\sum_{\ell=1}^{k}\lambda_{\ell}P_{1,\ell}\otimes P_{2,\ell}^{\otimes 2}\in$ $\displaystyle{\mathcal{G}}_{{2}}\left(P_{1},P_{2}\right)\setminus{\mathcal{K}}_{{2}}\left(P_{1},P_{2}\right).$ See Figure 3(a) for the geometry of the aforementioned distributions. Partition $[n]$ into $k$ subsets ${\mathcal{I}}_{1},\cdots,{\mathcal{I}}_{k}$ such that $|{\mathcal{I}}_{\ell}|=\lambda_{\ell}n$ (${\ell}\in[k]$). Now sample a codebook pair $({\mathcal{C}}_{1},{\mathcal{C}}_{2})\subseteq{\mathcal{X}}_{1}^{n}\times{\mathcal{X}}_{2}^{n}$ of sizes $(M_{1},M_{2})$ in the following way. For $i=1,2$, ${\ell}\in[k]$, the entries of each codeword of ${\mathcal{C}}_{i}$ that are in ${\mathcal{I}}_{\ell}$ are i.i.d. according to $P_{i,{\ell}}$. See Figure 3(b) for a pictorial explanation of the code construction. 
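The blockwise sampling can be sketched as follows; the mixture weights, component distributions and alphabet below are hypothetical, and the block sizes $\lambda_{\ell}n$ are assumed integral:

```python
import random

def sample_codeword(blocks, dists, alphabet=(0, 1)):
    # blocks[l] = |I_l|; dists[l] = component distribution for block l.
    # Entries of the codeword that fall in I_l are i.i.d. ~ dists[l].
    word = []
    for length, p in zip(blocks, dists):
        word += random.choices(alphabet, weights=p, k=length)
    return word

random.seed(1)
n, lam = 100, [0.4, 0.6]              # hypothetical lambda_1, lambda_2
blocks = [int(l * n) for l in lam]    # |I_l| = lambda_l * n, assumed integral
P1_comp = [[0.2, 0.8], [0.9, 0.1]]    # hypothetical components P_{1,1}, P_{1,2}

C1 = [sample_codeword(blocks, P1_comp) for _ in range(8)]
assert all(len(w) == n for w in C1)
```

Each codeword thus carries a different empirical composition on each block, matching the coded time-sharing construction.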
The proof is similar to that of Lemma 22 and the geometry of various distributions is depicted in Figure 3(c). (a) By the assumption ${\mathcal{G}}\left(P_{1},P_{2}\right)\neq\emptyset$, there exists a distribution $\sum_{i=1}^{k}\lambda_{i}P_{1,i}^{\otimes 2}\otimes P_{2,i}^{\otimes 2}\notin{\mathcal{K}}_{{1,2}}\left(P_{1},P_{2}\right)$ such that $\sum_{i=1}^{k}\lambda_{i}P_{1,i}^{\otimes 2}\otimes P_{2,i}\notin{\mathcal{K}}_{{1}}\left(P_{1},P_{2}\right)$ and $\sum_{i=1}^{k}\lambda_{i}P_{1,i}\otimes P_{2,i}^{\otimes 2}\notin{\mathcal{K}}_{{2}}\left(P_{1},P_{2}\right)$ (see Equation 70). (b) A pictorial explanation of our code construction from $\sum_{i=1}^{k}\lambda_{i}P_{1,i}^{\otimes 2}\otimes P_{2,i}^{\otimes 2}$. The construction can be viewed as an application of coded time-sharing where the time-sharing sequence is given by the convex combination coefficients $\left\\{\lambda_{i}\right\\}_{i=1}^{k}$. For any fixed value $\ell\in[k]$ of the time-sharing variable, each symbol of ${\mathcal{C}}_{i}$ is i.i.d. according to $P_{i,\ell}$. (c) By the assumption that $\sum_{i=1}^{k}\lambda_{i}P_{1,i}^{\otimes 2}\otimes P_{2,i}^{\otimes 2}$ is $\rho_{1,2}$-far from ${\mathcal{K}}_{{1,2}}\left(P_{1},P_{2}\right)$, $\sum_{i=1}^{k}\lambda_{i}P_{1,i}^{\otimes 2}\otimes P_{2,i}$ is $\rho_{1}$-far from ${\mathcal{K}}_{{1}}\left(P_{1},P_{2}\right)$ and $\sum_{i=1}^{k}\lambda_{i}P_{1,i}\otimes P_{2,i}^{\otimes 2}$ is $\rho_{2}$-far from ${\mathcal{K}}_{{2}}\left(P_{1},P_{2}\right)$, one can show via a Chernoff-union-type argument that all joint types of $({\mathcal{C}}_{1},{\mathcal{C}}_{2})$ are $\varepsilon$-far from the confusability sets and hence $({\mathcal{C}}_{1},{\mathcal{C}}_{2})$ attains positive rates and zero error. The gap factors $\rho_{1,2},\rho_{1},\rho_{2}$ and $\varepsilon$ are defined in Equation 72. Figure 3: Illustration of the proof of Item 1 of Lemma 23.
Under the assumption ${\mathcal{G}}\left(P_{1},P_{2}\right)\neq\emptyset$, the goal is to show the existence of zero-error code pairs $({\mathcal{C}}_{1},{\mathcal{C}}_{2})$ of positive rates. We can apply similar Chernoff-union argument to the $\ell$-th punctured codes of $({\mathcal{C}}_{1},{\mathcal{C}}_{2})$ for each $\ell\in[k]$ and then take a union bound over ${\ell}$. Here by the $\ell$-th punctured codes we mean the codes obtained by restricting codewords to ${\mathcal{I}}_{\ell}$. We use ${\underline{\mathbf{x}}}_{i,{\ell}}^{1}\in{\mathcal{X}}_{1}^{\lambda_{\ell}n}$ and ${\underline{\mathbf{x}}}_{j,{\ell}}^{2}\in{\mathcal{X}}_{2}^{\lambda_{\ell}n}$ to denote respectively the subsequences of ${\underline{\mathbf{x}}}^{1}_{i}$ and ${\underline{\mathbf{x}}}^{2}_{j}$ whose components are in ${\mathcal{I}}_{\ell}$. Note that for any $1\leq i_{1}<i_{2}\leq M_{1}$ and $1\leq j_{1}<j_{2}\leq M_{2}$, by 4, $\displaystyle\mathbb{E}\left[\tau_{{\underline{\mathbf{x}}}^{1}_{i_{1}},{\underline{\mathbf{x}}}^{1}_{i_{2}},{\underline{\mathbf{x}}}^{2}_{j_{1}},{\underline{\mathbf{x}}}^{2}_{j_{2}}}\right]=$ $\displaystyle\sum_{\ell=1}^{k}\lambda_{\ell}\mathbb{E}\left[\tau_{{\underline{\mathbf{x}}}^{1}_{i_{1},\ell},{\underline{\mathbf{x}}}^{1}_{i_{2},\ell},{\underline{\mathbf{x}}}^{2}_{j_{1},\ell},{\underline{\mathbf{x}}}^{2}_{j_{2},\ell}}\right]=\sum_{\ell=1}^{k}\lambda_{\ell}P_{1,\ell}^{\otimes 2}\otimes P_{2,\ell}^{\otimes 2}=P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}},$ $\displaystyle\mathbb{E}\left[\tau_{{\underline{\mathbf{x}}}^{1}_{i_{1}},{\underline{\mathbf{x}}}^{1}_{i_{2}},{\underline{\mathbf{x}}}^{2}_{j_{1}}}\right]=$ $\displaystyle\sum_{\ell=1}^{k}\lambda_{\ell}\mathbb{E}\left[\tau_{{\underline{\mathbf{x}}}^{1}_{i_{1},\ell},{\underline{\mathbf{x}}}^{1}_{i_{2},\ell},{\underline{\mathbf{x}}}^{2}_{j_{1},\ell}}\right]=\sum_{\ell=1}^{k}\lambda_{\ell}P_{1,\ell}^{\otimes 2}\otimes 
P_{2,\ell}=P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}},$ $\displaystyle\mathbb{E}\left[\tau_{{\underline{\mathbf{x}}}^{1}_{i_{1}},{\underline{\mathbf{x}}}^{2}_{j_{1}},{\underline{\mathbf{x}}}^{2}_{j_{2}}}\right]=$ $\displaystyle\sum_{\ell=1}^{k}\lambda_{\ell}\mathbb{E}\left[\tau_{{\underline{\mathbf{x}}}^{1}_{i_{1},\ell},{\underline{\mathbf{x}}}^{2}_{j_{1},\ell},{\underline{\mathbf{x}}}^{2}_{j_{2},\ell}}\right]=\sum_{\ell=1}^{k}\lambda_{\ell}P_{1,\ell}\otimes P_{2,\ell}^{\otimes 2}=P_{{\mathbf{x}}^{1},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}.$ Let ${\mathcal{C}}_{i}^{\prime}$ be the subcode of ${\mathcal{C}}_{i}$ such that all codewords in ${\mathcal{C}}_{i}^{\prime}$ restricted to ${\mathcal{I}}_{\ell}$ are $P_{i,\ell}$-constant composition ($i=1,2,\ell\in[k]$). The size of ${\mathcal{C}}_{i}^{\prime}$ can be concentrated similarly as before. $\displaystyle\mathbb{E}\left[|{\mathcal{C}}_{i}^{\prime}|\right]=$ $\displaystyle\sum_{j=1}^{M_{i}}\Pr\left[\forall\ell\in[k],\;\tau_{{\underline{\mathbf{x}}}^{i}_{j,\ell}}=P_{i,\ell}\right]=\sum_{j=1}^{M_{i}}\prod_{\ell=1}^{k}\Pr\left[\tau_{{\underline{\mathbf{x}}}^{i}_{j,\ell}}=P_{i,\ell}\right]\asymp M_{i}\prod_{\ell=1}^{k}\nu(P_{i,\ell},\lambda_{\ell}n)^{-1}.$ By Lemma 2, $\displaystyle\Pr\left[|{\mathcal{C}}_{i}^{\prime}|\notin(1\pm 1/2)\mathbb{E}\left[|{\mathcal{C}}_{i}^{\prime}|\right]\right]\leq$ $\displaystyle 2\exp\left(-\frac{M_{i}}{12\prod_{\ell=1}^{k}\nu(P_{i,\ell},\lambda_{\ell}n)}\right).$ (71) Let $\displaystyle\rho_{1,2}\coloneqq$ $\displaystyle d_{{\infty}}\left(P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}},{\mathcal{K}}_{{1,2}}\left(P_{1},P_{2}\right)\right)>0,$ (72) $\displaystyle\rho_{1}\coloneqq$ $\displaystyle d_{{\infty}}\left(P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}},{\mathcal{K}}_{{1}}\left(P_{1},P_{2}\right)\right)>0,$ $\displaystyle\rho_{2}\coloneqq$ $\displaystyle 
d_{{\infty}}\left(P_{{\mathbf{x}}^{1},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}},{\mathcal{K}}_{{2}}\left(P_{1},P_{2}\right)\right)>0,$ $\displaystyle\varepsilon\coloneqq$ $\displaystyle\frac{1}{2}\min\left\\{\rho_{1,2},\rho_{1},\rho_{2}\right\\}>0.$ For any $1\leq i_{1}<i_{2}\leq M_{1}$ and $1\leq j_{1}<j_{2}\leq M_{2}$, $\displaystyle\Pr\left[\tau_{{\underline{\mathbf{x}}}^{1}_{i_{1}},{\underline{\mathbf{x}}}^{1}_{i_{2}},{\underline{\mathbf{x}}}^{2}_{j_{1}},{\underline{\mathbf{x}}}^{2}_{j_{2}}}\in{\mathcal{K}}_{{1,2}}\left(P_{1},P_{2}\right)\right]\leq$ $\displaystyle\Pr\left[\exists\ell\in[k],\;d_{{\infty}}\left(\tau_{{\underline{\mathbf{x}}}^{1}_{i_{1},\ell},{\underline{\mathbf{x}}}^{1}_{i_{2},\ell},{\underline{\mathbf{x}}}^{2}_{j_{1},\ell},{\underline{\mathbf{x}}}^{2}_{j_{2},\ell}},P_{1,\ell}^{\otimes 2}\otimes P_{2,\ell}^{\otimes 2}\right)\geq\varepsilon\right]$ (73) $\displaystyle\leq$ $\displaystyle k\cdot\left|{\mathcal{X}}_{1}\right|^{2}\left|{\mathcal{X}}_{2}\right|^{2}\cdot 2\exp\left(-\frac{n\varepsilon^{2}}{3}\right).$ (74) Equation 73 follows since $d_{{\infty}}\left(\tau_{{\underline{\mathbf{x}}}^{1}_{i_{1},\ell},{\underline{\mathbf{x}}}^{1}_{i_{2},\ell},{\underline{\mathbf{x}}}^{2}_{j_{1},\ell},{\underline{\mathbf{x}}}^{2}_{j_{2},\ell}},P_{1,\ell}^{\otimes 2}\otimes P_{2,\ell}^{\otimes 2}\right)<\varepsilon$ for all $\ell\in[k]$ implies $\displaystyle d_{{\infty}}\left(\tau_{{\underline{\mathbf{x}}}^{1}_{i_{1}},{\underline{\mathbf{x}}}^{1}_{i_{2}},{\underline{\mathbf{x}}}^{2}_{j_{1}},{\underline{\mathbf{x}}}^{2}_{j_{2}}},P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}\right)$ $\displaystyle=$ 
$\displaystyle\max_{(x^{1}_{1},x^{1}_{2},x^{2}_{1},x^{2}_{2})\in{\mathcal{X}}_{1}^{2}\times{\mathcal{X}}_{2}^{2}}\left|\sum_{\ell=1}^{k}\lambda_{\ell}\tau_{{\underline{\mathbf{x}}}^{1}_{i_{1},\ell},{\underline{\mathbf{x}}}^{1}_{i_{2},\ell},{\underline{\mathbf{x}}}^{2}_{j_{1},\ell},{\underline{\mathbf{x}}}^{2}_{j_{2},\ell}}(x^{1}_{1},x^{1}_{2},x^{2}_{1},x^{2}_{2})-\sum_{\ell=1}^{k}\lambda_{\ell}P_{1,\ell}^{\otimes 2}\otimes P_{2,\ell}^{\otimes 2}(x^{1}_{1},x^{1}_{2},x^{2}_{1},x^{2}_{2})\right|$ $\displaystyle\leq$ $\displaystyle\sum_{\ell=1}^{k}\lambda_{\ell}\max_{(x^{1}_{1},x^{1}_{2},x^{2}_{1},x^{2}_{2})\in{\mathcal{X}}_{1}^{2}\times{\mathcal{X}}_{2}^{2}}\left|\tau_{{\underline{\mathbf{x}}}^{1}_{i_{1},\ell},{\underline{\mathbf{x}}}^{1}_{i_{2},\ell},{\underline{\mathbf{x}}}^{2}_{j_{1},\ell},{\underline{\mathbf{x}}}^{2}_{j_{2},\ell}}(x^{1}_{1},x^{1}_{2},x^{2}_{1},x^{2}_{2})-P_{1,\ell}^{\otimes 2}\otimes P_{2,\ell}^{\otimes 2}(x^{1}_{1},x^{1}_{2},x^{2}_{1},x^{2}_{2})\right|$ $\displaystyle=$ $\displaystyle\sum_{\ell=1}^{k}\lambda_{\ell}d_{{\infty}}\left(\tau_{{\underline{\mathbf{x}}}^{1}_{i_{1},\ell},{\underline{\mathbf{x}}}^{1}_{i_{2},\ell},{\underline{\mathbf{x}}}^{2}_{j_{1},\ell},{\underline{\mathbf{x}}}^{2}_{j_{2},\ell}},P_{1,\ell}^{\otimes 2}\otimes P_{2,\ell}^{\otimes 2}\right)<\varepsilon<\rho_{1,2},$ which in turn implies $\tau_{{\underline{\mathbf{x}}}^{1}_{i_{1}},{\underline{\mathbf{x}}}^{1}_{i_{2}},{\underline{\mathbf{x}}}^{2}_{j_{1}},{\underline{\mathbf{x}}}^{2}_{j_{2}}}\notin{\mathcal{K}}_{{1,2}}\left(P_{1},P_{2}\right)$. In Equation 74, we took a union bound over $\ell\in[k]$ where $k={\mathcal{O}}(1)$. 
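The key step in the chain above is that $d_{\infty}$ of a $\lambda$-mixture is bounded by the $\lambda$-average of the componentwise $d_{\infty}$ distances (convexity of the max-deviation distance). A small numeric check with hypothetical two-component mixtures over a three-point space:

```python
def d_inf(p, q):
    # Max-deviation (L-infinity) distance between two distributions.
    return max(abs(a - b) for a, b in zip(p, q))

def mix(lam, dists):
    # lambda-mixture of distributions given as lists of probabilities.
    return [sum(l * d[i] for l, d in zip(lam, dists)) for i in range(len(dists[0]))]

lam = [0.3, 0.7]
taus = [[0.5, 0.3, 0.2], [0.1, 0.6, 0.3]]  # hypothetical component empirical types
Ps = [[0.4, 0.4, 0.2], [0.2, 0.5, 0.3]]    # hypothetical component product dists

lhs = d_inf(mix(lam, taus), mix(lam, Ps))
rhs = sum(l * d_inf(t, p) for l, t, p in zip(lam, taus, Ps))
assert lhs <= rhs + 1e-12  # d_inf of mixture <= average of component d_inf
```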
Similarly, we have $\displaystyle\Pr\left[\tau_{{\underline{\mathbf{x}}}^{1}_{i_{1}},{\underline{\mathbf{x}}}^{1}_{i_{2}},{\underline{\mathbf{x}}}^{2}_{j}}\in{\mathcal{K}}_{{1}}\left(P_{1},P_{2}\right)\right]\leq$ $\displaystyle k\cdot\left|{\mathcal{X}}_{1}\right|^{2}\left|{\mathcal{X}}_{2}\right|\cdot 2\exp\left(-\frac{n\varepsilon^{2}}{3}\right),$ (75) for all $1\leq i_{1}<i_{2}\leq M_{1}$ and $1\leq j\leq M_{2}$; and $\displaystyle\Pr\left[\tau_{{\underline{\mathbf{x}}}^{1}_{i},{\underline{\mathbf{x}}}^{2}_{j_{1}},{\underline{\mathbf{x}}}^{2}_{j_{2}}}\in{\mathcal{K}}_{{2}}\left(P_{1},P_{2}\right)\right]\leq$ $\displaystyle k\cdot\left|{\mathcal{X}}_{1}\right|\left|{\mathcal{X}}_{2}\right|^{2}\cdot 2\exp\left(-\frac{n\varepsilon^{2}}{3}\right),$ (76) for all $1\leq i\leq M_{1}$ and $1\leq j_{1}<j_{2}\leq M_{2}$. Taking further union bounds on Equations 74, 75 and 76 over $((i_{1},i_{2}),(j_{1},j_{2}))$, $((i_{1},i_{2}),j)$ and $(i,(j_{1},j_{2}))$ respectively ensures that Equations 66, 67 and 68 still hold. The rest of the proof remains the same and we get a good code pair $({\mathcal{C}}_{1}^{\prime},{\mathcal{C}}_{2}^{\prime})$ of rates $R({\mathcal{C}}_{1}^{\prime})>0,R({\mathcal{C}}_{2}^{\prime})>0$. ∎ ###### Proof of Items 2 and 3. We only prove Item 2 since Item 3 is the same once the roles of the first and second users are swapped. Suppose $P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}}\in{\mathcal{G}}_{{1}}\left(P_{1},P_{2}\right)\setminus{\mathcal{K}}_{{1}}\left(P_{1},P_{2}\right)$ has a decomposition $P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}}=\sum_{\ell=1}^{k}\lambda_{\ell}P_{1,\ell}^{\otimes 2}\otimes P_{2,\ell}$ for some $k\in{\mathbb{Z}}_{\geq 1}$, $\left\\{\lambda_{\ell}\right\\}_{\ell=1}^{k}\subset(0,1]$ with $\sum_{\ell=1}^{k}\lambda_{\ell}=1$ and $\left\\{P_{1,\ell}\right\\}_{\ell=1}^{k}\subset\Delta({\mathcal{X}}_{1})$, $\left\\{P_{2,\ell}\right\\}_{\ell=1}^{k}\subset\Delta({\mathcal{X}}_{2})$.
Partition $[n]$ into $k$ subsets ${\mathcal{I}}_{1},\cdots,{\mathcal{I}}_{k}$ such that $|{\mathcal{I}}_{\ell}|=\lambda_{\ell}n$ (${\ell}\in[k]$). Construct a codebook pair $({\mathcal{C}}_{1},{\mathcal{C}}_{2})$ as follows. The second codebook ${\mathcal{C}}_{2}$ only consists of one (arbitrary) codeword ${\underline{x}}^{2}\in{\mathcal{X}}_{2}^{n}$ satisfying the following property. Let ${\underline{x}}^{2}_{\ell}\in{\mathcal{X}}_{2}^{\lambda_{\ell}n}$ denote the subsequence of ${\underline{x}}^{2}$ restricted to ${\mathcal{I}}_{\ell}$. For each $\ell\in[k]$, $\tau_{{\underline{x}}^{2}_{\ell}}=P_{2,\ell}$. The first codebook ${\mathcal{C}}_{1}\in{\mathcal{X}}_{1}^{M\times n}$ consists of $M$ codewords ${\underline{\mathbf{x}}}^{1}_{1},\cdots,{\underline{\mathbf{x}}}^{1}_{M}$, where for each $i\in[M]$ and $\ell\in[k]$, ${\underline{\mathbf{x}}}^{1}_{i,\ell}\overset{\mathrm{i.i.d.}}{\sim}P_{1,\ell}^{\otimes(\lambda_{\ell}n)}$. Note that for all $1\leq i_{1}<i_{2}\leq M$, $\mathbb{E}\left[\tau_{{\underline{\mathbf{x}}}^{1}_{i_{1}},{\underline{\mathbf{x}}}^{1}_{i_{2}},{\underline{x}}^{2}}\right]=P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}}$. Let ${\mathcal{C}}_{1}^{\prime}$ be the subcode of ${\mathcal{C}}_{1}$ whose codewords restricted to ${\mathcal{I}}_{\ell}$ are all $P_{1,\ell}$-constant composition ($\ell\in[k]$). For ${\mathcal{C}}_{1}^{\prime}$, Equation 71 still holds. Therefore, $R({\mathcal{C}}_{1}^{\prime})\asymp R({\mathcal{C}}_{1})$ ($n\to\infty$). We define $\rho_{1}$ in the same way as in Equation 72. Let $\varepsilon\coloneqq\rho_{1}/2$. 
Since $\displaystyle d_{{\infty}}\left(\tau_{{\underline{\mathbf{x}}}^{1}_{i_{1}},{\underline{\mathbf{x}}}^{1}_{i_{2}},{\underline{x}}^{2}},P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}}\right)\leq$ $\displaystyle\sum_{\ell=1}^{k}\lambda_{\ell}d_{{\infty}}\left(\tau_{{\underline{\mathbf{x}}}^{1}_{i_{1},\ell},{\underline{\mathbf{x}}}^{1}_{i_{2},\ell},{\underline{x}}^{2}_{\ell}},P_{1,\ell}^{\otimes 2}\otimes P_{2,\ell}\right),$ a Chernoff-union bound gives $\displaystyle\Pr\left[\tau_{{\underline{\mathbf{x}}}^{1}_{i_{1}},{\underline{\mathbf{x}}}^{1}_{i_{2}},{\underline{x}}^{2}}\in{\mathcal{K}}_{{1}}\left(P_{1},P_{2}\right)\right]\leq$ $\displaystyle\Pr\left[d_{{\infty}}\left(\tau_{{\underline{\mathbf{x}}}^{1}_{i_{1}},{\underline{\mathbf{x}}}^{1}_{i_{2}},{\underline{x}}^{2}},P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}}\right)\geq\varepsilon\right]$ $\displaystyle\leq$ $\displaystyle\Pr\left[\exists\ell\in[k],\;d_{{\infty}}\left(\tau_{{\underline{\mathbf{x}}}^{1}_{i_{1},\ell},{\underline{\mathbf{x}}}^{1}_{i_{2},\ell},{\underline{x}}^{2}_{\ell}},P_{1,\ell}^{\otimes 2}\otimes P_{2,\ell}\right)\geq\varepsilon\right]$ $\displaystyle\leq$ $\displaystyle k\cdot\left|{\mathcal{X}}_{1}\right|^{2}\cdot 2\exp\left(-\frac{n\varepsilon^{2}}{3}\right).$ Since $k$ is a constant independent of $n$, a union bound over $(i_{1},i_{2})\in\binom{[|{\mathcal{C}}_{1}^{\prime}|]}{2}$ gives Equation 69. Under a proper choice of $R_{1}>0$, we get that $({\mathcal{C}}_{1}^{\prime},{\mathcal{C}}_{2})$ is a good codebook pair with probability at least $1-2^{-\Omega(n)}$. ∎ ###### Remark 14. In the above proof of Lemma 23, the partition $\left\\{{\mathcal{I}}_{\ell}\right\\}_{\ell=1}^{k}$ can be thought of as a _time-sharing_ sequence ${\underline{u}}\in[k]^{n}$ of type $P_{\mathbf{u}}$ given by the coefficients $\left\\{\lambda_{i}\right\\}_{i=1}^{k}$ of the convex combination. That is, $P_{\mathbf{u}}(u)=\lambda_{u}$ for any $u\in[k]$. 
This particular type of time-sharing scheme is known as the _coded_ time-sharing in the literature [PS19]. As explained in [PS19, Remark 6], the classical _operational_ time-sharing in network information theory does not work for (oblivious) arbitrarily varying channels with constraints. This is because the adversary can concentrate his power on coordinates in a single ${\mathcal{I}}_{\ell}$. This effectively increases the noise level in ${\mathcal{I}}_{\ell}$ significantly and the $\ell$-th component codebook in the time-sharing is not necessarily resilient to this effective level of noise. The above argument also applies to the omniscient adversarial channel model. More discussions on the “non-tensorization” of good codes for adversarial channels and its implications to single-letterization of capacity expressions can be found in Item 5 of Section XVI. These phenomena suggest that the capacity region of adversarial channels does not have to be convex in general (see Section XI-A). Furthermore, we emphasize the following point in the above achievability proof. Each component $P_{1,\ell}$ and $P_{2,\ell}$ of the convex combinations is not necessarily non-confusable, i.e., $P_{1,\ell}^{\otimes 2}\otimes P_{2,\ell}^{\otimes 2}$, $P_{1,\ell}^{\otimes 2}\otimes P_{2,\ell}$ or $P_{1,\ell}\otimes P_{2,\ell}^{\otimes 2}$ may be confusable. Nonetheless, it is only desired that their convex combinations are non-confusable. ###### Remark 15. In the above proof of Items 2 and 3, the transmitter with zero capacity cannot even reliably transmit a single bit through the MAC since the codebook contains only one codeword. Such achievability proofs go through as long as there exist non-marginally confusable distributions. In contrast, in the AVMAC setting [PS19], besides non-marginal symmetrizability, non-joint symmetrizability is a necessary condition for achieving any positive rate even individually instead of jointly. 
More discussions on the differences between our results and Pereg–Steinberg’s [PS19] can be found in Section XI-B. ### XIII-C Inner bounds via product distributions ###### Lemma 24 (Inner bounds via product distributions). Fix input distributions $(P_{1},P_{2})\in\Gamma_{1}\times\Gamma_{2}$. 1. 1. If $P_{1}^{\otimes 2}\otimes P_{2}^{\otimes 2}\notin{\mathcal{K}}_{{1,2}}\left(P_{1},P_{2}\right)$, $P_{1}^{\otimes 2}\otimes P_{2}\notin{\mathcal{K}}_{{1}}\left(P_{1},P_{2}\right)$ and $P_{1}\otimes P_{2}^{\otimes 2}\notin{\mathcal{K}}_{{2}}\left(P_{1},P_{2}\right)$, then rate pairs $(R_{1},R_{2})\in{\mathbb{R}}_{\geq 0}^{2}$ satisfying $\displaystyle R_{1}\leq$ $\displaystyle D(P_{1},P_{2})-\widehat{D}(P_{1},P_{2})$ (77) $\displaystyle R_{2}\leq$ $\displaystyle D(P_{1},P_{2})-\widehat{D}(P_{1},P_{2})$ $\displaystyle R_{1}+R_{2}\leq$ $\displaystyle\widehat{D}(P_{1},P_{2})$ are achievable, where $\displaystyle D(P_{1},P_{2})\coloneqq$ $\displaystyle\min_{P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}\in{\mathcal{K}}_{{1,2}}\left(P_{1},P_{2}\right)}D\left(P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}\middle\|P_{1}^{\otimes 2}\otimes P_{2}^{\otimes 2}\right)$ $\displaystyle\widehat{D}(P_{1},P_{2})\coloneqq$ $\displaystyle\min\left\\{\min_{P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}}\in{\mathcal{K}}_{{1}}\left(P_{1},P_{2}\right)}D\left(P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}}\middle\|P_{1}^{\otimes 2}\otimes P_{2}\right),\min_{P_{{\mathbf{x}}^{1},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}\in{\mathcal{K}}_{{2}}\left(P_{1},P_{2}\right)}D\left(P_{{\mathbf{x}}^{1},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}\middle\|P_{1}\otimes P_{2}^{\otimes 2}\right)\right\\}.$ 2. 2.
If $P_{1}^{\otimes 2}\otimes P_{2}^{\otimes 2}\notin{\mathcal{K}}_{{1,2}}\left(P_{1},P_{2}\right)$, $P_{1}^{\otimes 2}\otimes P_{2}\notin{\mathcal{K}}_{{1}}\left(P_{1},P_{2}\right)$ and $P_{1}\otimes P_{2}^{\otimes 2}\in{\mathcal{K}}_{{2}}\left(P_{1},P_{2}\right)$, then rate pairs $(R_{1},0)$ satisfying $\displaystyle 0\leq R_{1}\leq\min_{P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}}\in{\mathcal{K}}_{{1}}\left(P_{1},P_{2}\right)}D\left(P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}}\middle\|P_{1}^{\otimes 2}\otimes P_{2}\right)$ (78) are achievable. 3. 3. If $P_{1}^{\otimes 2}\otimes P_{2}^{\otimes 2}\notin{\mathcal{K}}_{{1,2}}\left(P_{1},P_{2}\right)$, $P_{1}^{\otimes 2}\otimes P_{2}\in{\mathcal{K}}_{{1}}\left(P_{1},P_{2}\right)$ and $P_{1}\otimes P_{2}^{\otimes 2}\notin{\mathcal{K}}_{{2}}\left(P_{1},P_{2}\right)$, then rate pairs $(0,R_{2})$ satisfying $\displaystyle 0\leq R_{2}\leq\min_{P_{{\mathbf{x}}^{1},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}\in{\mathcal{K}}_{{2}}\left(P_{1},P_{2}\right)}D\left(P_{{\mathbf{x}}^{1},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}\middle\|P_{1}\otimes P_{2}^{\otimes 2}\right)$ (79) are achievable. ###### Corollary 25 (Inner bounds on capacity region). Let $\mathsf{MAC}_{2}=\left({\mathcal{X}}_{1},{\mathcal{X}}_{2},{\mathcal{S}},{\mathcal{Y}},\Gamma_{1},\Gamma_{2},\Lambda,W_{{\mathbf{y}}|{\mathbf{x}},{\mathbf{s}}}\right)$ be a two-user omniscient adversarial MAC.
The capacity region of $\mathsf{MAC}_{2}$ contains as a subset the following region $\displaystyle\bigcup_{\begin{subarray}{c}(P_{1},P_{2})\in\Gamma_{1}\times\Gamma_{2}\\\ \text{conditions in Item 1 of \lx@cref{creftypecap~refnum}{lem:inner-product} are satisfied}\end{subarray}}\left\\{(R_{1},R_{2}):(R_{1},R_{2})\text{ satisfies Equation 77}\right\\}$ $\displaystyle\cup\bigcup_{\begin{subarray}{c}(P_{1},P_{2})\in\Gamma_{1}\times\Gamma_{2}\\\ \text{conditions in Item 2 are satisfied}\end{subarray}}\left\\{(R_{1},0):R_{1}\text{ satisfies Equation 78}\right\\}$ $\displaystyle\cup\bigcup_{\begin{subarray}{c}(P_{1},P_{2})\in\Gamma_{1}\times\Gamma_{2}\\\ \text{conditions in Item 3 are satisfied}\end{subarray}}\left\\{(0,R_{2}):R_{2}\text{ satisfies Equation 79}\right\\}.$ ###### Proof of Item 1. Sample a random code pair $\left({\mathcal{C}}_{1},{\mathcal{C}}_{2}\right)\subseteq{\mathcal{X}}_{1}^{n}\times{\mathcal{X}}_{2}^{n}$ of sizes $(M_{1},M_{2})$, where ${\mathcal{C}}_{i}$ consists of codewords ${\underline{\mathbf{x}}}^{i}_{1},\cdots,{\underline{\mathbf{x}}}^{i}_{M_{i}}$ i.i.d. according to $P_{i}^{\otimes n}$ ($i=1,2$). By Lemma 1, the expected number of codewords in ${\mathcal{C}}_{i}$ of type $P_{i}$ is asymptotically $M_{i}/\nu(P_{i},n)$.
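The asymptotic count $M_{i}/\nu(P_{i},n)$ rests on the fact that an i.i.d. $P_{i}^{\otimes n}$ word has exact type $P_{i}$ with only polynomially small probability. A minimal numeric sketch (assuming, for illustration only, that $\nu(P,n)^{-1}$ is exactly this exact-type probability; binary alphabet, $n=20$):

```python
import math
import random

def exact_type_prob(p, n):
    # P[tau_x = p] for x ~ p^{tensor n}: multinomial probability of the
    # exact composition (n * p_a occurrences of each letter a).
    counts = [round(n * pa) for pa in p]
    coeff = math.factorial(n)
    for c in counts:
        coeff //= math.factorial(c)
    prob = float(coeff)
    for c, pa in zip(counts, p):
        prob *= pa ** c
    return prob

random.seed(3)
p, n, trials = [0.5, 0.5], 20, 50000
# Count how often an i.i.d. word has exactly n * p[1] ones.
hits = sum(
    sum(random.random() < p[1] for _ in range(n)) == round(n * p[1])
    for _ in range(trials)
)
assert abs(hits / trials - exact_type_prob(p, n)) < 0.01
```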
For any $1\leq i_{1}<i_{2}\leq M_{1}$ and $1\leq j_{1}<j_{2}\leq M_{2}$, by Lemma 3, $\displaystyle\Pr\left[\tau_{{\underline{\mathbf{x}}}^{1}_{i_{1}},{\underline{\mathbf{x}}}^{1}_{i_{2}},{\underline{\mathbf{x}}}^{2}_{j_{1}},{\underline{\mathbf{x}}}^{2}_{j_{2}}}\in{\mathcal{K}}_{{1,2}}\left(P_{1},P_{2}\right)\right]\doteq$ $\displaystyle\sup_{P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}\in{\mathcal{K}}_{{1,2}}\left(P_{1},P_{2}\right)}2^{-nD\left(P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}\middle\|P_{1}^{\otimes 2}\otimes P_{2}^{\otimes 2}\right)},$ $\displaystyle\Pr\left[\tau_{{\underline{\mathbf{x}}}^{1}_{i_{1}},{\underline{\mathbf{x}}}^{1}_{i_{2}},{\underline{\mathbf{x}}}^{2}_{j_{1}}}\in{\mathcal{K}}_{{1}}\left(P_{1},P_{2}\right)\right]\doteq$ $\displaystyle\sup_{P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}}\in{\mathcal{K}}_{{1}}\left(P_{1},P_{2}\right)}2^{-nD\left(P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}}\middle\|P_{1}^{\otimes 2}\otimes P_{2}\right)},$ $\displaystyle\Pr\left[\tau_{{\underline{\mathbf{x}}}^{1}_{i_{1}},{\underline{\mathbf{x}}}^{2}_{j_{1}},{\underline{\mathbf{x}}}^{2}_{j_{2}}}\in{\mathcal{K}}_{{2}}\left(P_{1},P_{2}\right)\right]\doteq$ $\displaystyle\sup_{P_{{\mathbf{x}}^{1},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}\in{\mathcal{K}}_{{2}}\left(P_{1},P_{2}\right)}2^{-nD\left(P_{{\mathbf{x}}^{1},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}\middle\|P_{1}\otimes P_{2}^{\otimes 2}\right)}.$ Hence the expected number of confusable tuples $({\underline{\mathbf{x}}}^{1}_{i_{1}},{\underline{\mathbf{x}}}^{1}_{i_{2}},{\underline{\mathbf{x}}}^{2}_{j_{1}},{\underline{\mathbf{x}}}^{2}_{j_{2}})$, $({\underline{\mathbf{x}}}^{1}_{i_{1}},{\underline{\mathbf{x}}}^{1}_{i_{2}},{\underline{\mathbf{x}}}^{2}_{j})$ and $({\underline{\mathbf{x}}}^{1}_{i},{\underline{\mathbf{x}}}^{2}_{j_{1}},{\underline{\mathbf{x}}}^{2}_{j_{2}})$ is respectively 
$\displaystyle\binom{M_{1}}{2}\binom{M_{2}}{2}2^{-n\inf D\left(P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}\middle\|P_{1}^{\otimes 2}\otimes P_{2}^{\otimes 2}\right)}\leq$ $\displaystyle M_{1}^{2}M_{2}^{2}2^{-n\inf D\left(P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}\middle\|P_{1}^{\otimes 2}\otimes P_{2}^{\otimes 2}\right)},$ $\displaystyle\binom{M_{1}}{2}M_{2}2^{-n\inf D\left(P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}}\middle\|P_{1}^{\otimes 2}\otimes P_{2}\right)}\leq$ $\displaystyle M_{1}^{2}M_{2}2^{-n\inf D\left(P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}}\middle\|P_{1}^{\otimes 2}\otimes P_{2}\right)},$ $\displaystyle M_{1}\binom{M_{2}}{2}2^{-n\inf D\left(P_{{\mathbf{x}}^{1},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}\middle\|P_{1}\otimes P_{2}^{\otimes 2}\right)}\leq$ $\displaystyle M_{1}M_{2}^{2}2^{-n\inf D\left(P_{{\mathbf{x}}^{1},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}\middle\|P_{1}\otimes P_{2}^{\otimes 2}\right)}.$ Pick $M_{1},M_{2}$ such that $\displaystyle M_{1}^{2}M_{2}^{2}2^{-n\inf D\left(P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}\middle\|P_{1}^{\otimes 2}\otimes P_{2}^{\otimes 2}\right)}\leq$ $\displaystyle\min\left\\{\frac{M_{1}}{3\nu(P_{1},n)},\frac{M_{2}}{3\nu(P_{2},n)}\right\\},$ $\displaystyle M_{1}^{2}M_{2}2^{-n\inf D\left(P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}}\middle\|P_{1}^{\otimes 2}\otimes P_{2}\right)}\leq$ $\displaystyle\frac{M_{1}}{3\nu(P_{1},n)}$ $\displaystyle M_{1}M_{2}^{2}2^{-n\inf D\left(P_{{\mathbf{x}}^{1},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}\middle\|P_{1}\otimes P_{2}^{\otimes 2}\right)}\leq$ $\displaystyle\frac{M_{2}}{3\nu(P_{2},n)}.$ This can be satisfied if $\displaystyle 2R_{1}+2R_{2}-\inf D\left(P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}\middle\|P_{1}^{\otimes 2}\otimes P_{2}^{\otimes 
2}\right)\leq$ $\displaystyle\min\left\\{R_{1},R_{2}\right\\}-o(1),$ $\displaystyle 2R_{1}+R_{2}-\inf D\left(P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}}\middle\|P_{1}^{\otimes 2}\otimes P_{2}\right)\leq$ $\displaystyle R_{1}-o(1),$ $\displaystyle R_{1}+2R_{2}-\inf D\left(P_{{\mathbf{x}}^{1},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}\middle\|P_{1}\otimes P_{2}^{\otimes 2}\right)\leq$ $\displaystyle R_{2}-o(1),$ i.e., $\displaystyle R_{1}+2R_{2}\leq$ $\displaystyle\inf D\left(P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}\middle\|P_{1}^{\otimes 2}\otimes P_{2}^{\otimes 2}\right)-o(1),$ $\displaystyle 2R_{1}+R_{2}\leq$ $\displaystyle\inf D\left(P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}\middle\|P_{1}^{\otimes 2}\otimes P_{2}^{\otimes 2}\right)-o(1),$ $\displaystyle R_{1}+R_{2}\leq$ $\displaystyle\min\left\\{\inf D\left(P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}}\middle\|P_{1}^{\otimes 2}\otimes P_{2}\right)-o(1),\inf D\left(P_{{\mathbf{x}}^{1},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}\middle\|P_{1}\otimes P_{2}^{\otimes 2}\right)-o(1)\right\\}.$ That is, it suffices to take $(R_{1},R_{2})$ satisfying Equation 77 (as $n\to\infty$). Now, we remove all codewords from ${\mathcal{C}}_{1}$ and ${\mathcal{C}}_{2}$ whose types are not $P_{1}$ and $P_{2}$ respectively. For all $1\leq i_{1}<i_{2}\leq M_{1}$ and $1\leq j_{1}<j_{2}\leq M_{2}$, we also remove 1. one of $({\underline{\mathbf{x}}}^{1}_{i_{1}},{\underline{\mathbf{x}}}^{1}_{i_{2}})$ from ${\mathcal{C}}_{1}$ and one of $({\underline{\mathbf{x}}}^{2}_{j_{1}},{\underline{\mathbf{x}}}^{2}_{j_{2}})$ from ${\mathcal{C}}_{2}$ if $\tau_{{\underline{\mathbf{x}}}^{1}_{i_{1}},{\underline{\mathbf{x}}}^{1}_{i_{2}},{\underline{\mathbf{x}}}^{2}_{j_{1}},{\underline{\mathbf{x}}}^{2}_{j_{2}}}\in{\mathcal{K}}_{{1,2}}\left(P_{1},P_{2}\right)$; 2.
one of $({\underline{\mathbf{x}}}^{1}_{i_{1}},{\underline{\mathbf{x}}}^{1}_{i_{2}})$ from ${\mathcal{C}}_{1}$ if $\tau_{{\underline{\mathbf{x}}}^{1}_{i_{1}},{\underline{\mathbf{x}}}^{1}_{i_{2}},{\underline{\mathbf{x}}}^{2}_{j_{1}}}\in{\mathcal{K}}_{{1}}\left(P_{1},P_{2}\right)$; 3. one of $({\underline{\mathbf{x}}}^{2}_{j_{1}},{\underline{\mathbf{x}}}^{2}_{j_{2}})$ from ${\mathcal{C}}_{2}$ if $\tau_{{\underline{\mathbf{x}}}^{1}_{i_{1}},{\underline{\mathbf{x}}}^{2}_{j_{1}},{\underline{\mathbf{x}}}^{2}_{j_{2}}}\in{\mathcal{K}}_{{2}}\left(P_{1},P_{2}\right)$. After the removal, $\left({\mathcal{C}}_{1},{\mathcal{C}}_{2}\right)$ becomes a good code pair. In total, the expected number of codewords we removed from ${\mathcal{C}}_{i}$ is at most $\displaystyle M_{i}-\frac{M_{i}}{\nu(P_{i},n)}+\frac{M_{i}}{3\nu(P_{i},n)}+\frac{M_{i}}{3\nu(P_{i},n)}=M_{i}-\frac{M_{i}}{3\nu(P_{i},n)}$ for $i=1,2$. Therefore, $(R_{1},R_{2})$ is asymptotically preserved after the removal. Having thus exhibited code pairs that attain zero error for $\mathsf{MAC}_{2}$ at the desired rates, we finish the proof. ∎ ###### Proof of Items 2 and 3. We only prove Item 2. Item 3 follows verbatim. Let ${\underline{x}}^{2}\in{\mathcal{X}}_{2}^{n}$ be an arbitrary codeword of type $P_{2}$. The codebook ${\mathcal{C}}_{2}$ only consists of ${\underline{x}}^{2}$. The codebook ${\mathcal{C}}_{1}$ consists of $M$ codewords ${\underline{\mathbf{x}}}^{1}_{1},\cdots,{\underline{\mathbf{x}}}^{1}_{M}$ i.i.d. according to $P_{1}^{\otimes n}$. Again, the expected number of codewords in ${\mathcal{C}}_{1}$ of type $P_{1}$ is asymptotically $M/\nu(P_{1},n)$.
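The type-counting estimate used here can be sanity-checked numerically. The following minimal sketch (binary alphabet and an illustrative block length $n$, not part of the formal development) verifies that an i.i.d. $P^{\otimes n}$ codeword has *exact* type $P$ with only polynomially small probability, so an expected $\sim M/\nu(P,n)$ fraction of $M$ random codewords survives the pruning to constant composition:

```python
import math

# Illustrative values: binary alphabet, P = (1/2, 1/2), n even so that nP(x)
# is an integer and the type class T_P is nonempty.
n = 20
p = 0.5
prob_exact_type = math.comb(n, n // 2) * p ** n  # P^{⊗n}(T_P)

# Crude method-of-types bound: the most probable type has mass at least
# 1 / #types = 1 / (n + 1) for a binary alphabet.
assert prob_exact_type >= 1 / (n + 1)
print(round(prob_exact_type, 4))  # ≈ 0.176
```

The probability decays like $1/\sqrt{n}$ here, in contrast with the exponential decay of the confusability probabilities, which is what makes the expurgation argument work.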
By Lemma 3, for any $1\leq i_{1}<i_{2}\leq M$, $\displaystyle\Pr\left[\tau_{{\underline{\mathbf{x}}}^{1}_{i_{1}},{\underline{\mathbf{x}}}^{1}_{i_{2}},{\underline{x}}^{2}}\in{\mathcal{K}}_{{1}}\left(P_{1},P_{2}\right)\right]\doteq$ $\displaystyle\sup_{P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}}\in{\mathcal{K}}_{{1}}\left(P_{1},P_{2}\right)}2^{-nD\left(P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}}\middle\|P_{1}^{\otimes 2}\otimes P_{2}\right)}.$ Hence the expected number of confusable tuples $({\underline{\mathbf{x}}}^{1}_{i_{1}},{\underline{\mathbf{x}}}^{1}_{i_{2}},{\underline{x}}^{2})$ is $\displaystyle\binom{M}{2}2^{-n\inf D\left(P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}}\middle\|P_{1}^{\otimes 2}\otimes P_{2}\right)}\leq$ $\displaystyle M^{2}2^{-n\inf D\left(P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}}\middle\|P_{1}^{\otimes 2}\otimes P_{2}\right)}.$ Pick $M$ such that $\displaystyle M^{2}2^{-n\inf D\left(P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}}\middle\|P_{1}^{\otimes 2}\otimes P_{2}\right)}\leq$ $\displaystyle\frac{M}{2\nu(P_{1},n)}.$ It suffices to take $\displaystyle 2R_{1}-\inf D\left(P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}}\middle\|P_{1}^{\otimes 2}\otimes P_{2}\right)\leq$ $\displaystyle R_{1}-o(1),$ i.e., $R_{1}$ asymptotically satisfies Equation 78. We then remove all codewords from ${\mathcal{C}}_{1}$ which have type different from $P_{1}$. We also remove ${\underline{\mathbf{x}}}^{1}_{i_{1}}$ if $\tau_{{\underline{\mathbf{x}}}^{1}_{i_{1}},{\underline{\mathbf{x}}}^{1}_{i_{2}},{\underline{x}}^{2}}\in{\mathcal{K}}_{{1}}\left(P_{1},P_{2}\right)$ for some $i_{1}<i_{2}\leq M$. After removal we get a constant composition codebook pair that attains zero error. The expected number of codewords we removed from ${\mathcal{C}}_{1}$ is at most $M-M/\nu(P_{1},n)+M/2\nu(P_{1},n)=M-M/2\nu(P_{1},n)$. 
Therefore, the removal does not (asymptotically) change the rate. This finishes the proof. ∎ ###### Remark 16. In Lemma 24, we did not obtain a pentagon region defined by three mutual information terms as is commonly seen in problems regarding MACs. It is perhaps due to our crude expurgation strategy. We believe that our inner bounds can be improved by employing more careful expurgation strategies (see Item 4 in Section XVI). ## XIV Converse, Item 1 in Theorem 20 In this section, we assume that ${\mathcal{G}}\left(P_{1},P_{2}\right)=\emptyset$. Let $({\mathcal{C}}_{1},{\mathcal{C}}_{2})\subseteq{\mathcal{X}}_{1}^{n}\times{\mathcal{X}}_{2}^{n}$ be any good codebook pair. Without loss of rate, we assume that ${\mathcal{C}}_{1}$ is $P_{1}$-constant composition and ${\mathcal{C}}_{2}$ is $P_{2}$-constant composition. Our goal is to show that $R({\mathcal{C}}_{1})$ and $R({\mathcal{C}}_{2})$ cannot be simultaneously positive. In fact, we will show that at least one of $M_{1}\coloneqq|{\mathcal{C}}_{1}|$ and $M_{2}\coloneqq|{\mathcal{C}}_{2}|$ is bounded from above by a _constant_ (independent of $n$). ### XIV-A Subcode pair extraction ###### Definition 18 (Bipartite, uniform, complete hypergraphs). A hypergraph ${\mathcal{H}}=({\mathcal{V}},{\mathcal{E}})$ is called _$(N_{1},N_{2})$ -bipartite_ if it is bipartite with ${\mathcal{V}}={\mathcal{V}}_{1}\sqcup{\mathcal{V}}_{2}$ where $|{\mathcal{V}}_{1}|=N_{1}$ and $|{\mathcal{V}}_{2}|=N_{2}$. It is called _$(k_{1},k_{2})$ -uniform_ if every hyperedge contains $k_{1}$ vertices in ${\mathcal{V}}_{1}$ and $k_{2}$ vertices in ${\mathcal{V}}_{2}$. It is called _complete_ if every $k_{1}$-tuple of vertices in ${\mathcal{V}}_{1}$ and every $k_{2}$-tuple of vertices in ${\mathcal{V}}_{2}$ are connected. ###### Theorem 26 (Bipartite hypergraph Ramsey’s theorem [BLA76]). Let $N_{1},N_{2},D$ be integers that are at least 2. 
There exist constants $K_{1}=K_{1}(N_{1},N_{2},D)$ and $K_{2}=K_{2}(N_{1},N_{2},D)$ such that for every $(M_{1},M_{2})$-bipartite $(2,2)$-uniform complete hypergraph ${\mathcal{H}}=(({\mathcal{V}}_{1},{\mathcal{V}}_{2}),{\mathcal{E}})$ with $|{\mathcal{V}}_{1}|=M_{1}\geq K_{1}$ and $|{\mathcal{V}}_{2}|=M_{2}\geq K_{2}$, and for every $D$-coloring of ${\mathcal{E}}$, there must exist ${\mathcal{V}}_{1}^{\prime}\subseteq{\mathcal{V}}_{1}$ and ${\mathcal{V}}_{2}^{\prime}\subseteq{\mathcal{V}}_{2}$ such that $|{\mathcal{V}}_{1}^{\prime}|\geq N_{1},|{\mathcal{V}}_{2}^{\prime}|\geq N_{2}$ and all hyperedges crossing ${\mathcal{V}}_{1}^{\prime}$ and ${\mathcal{V}}_{2}^{\prime}$ have the same color. ###### Lemma 27 (Subcode pair extraction). For any code pair $\left({\mathcal{C}}_{1},{\mathcal{C}}_{2}\right)=\left(\left\\{{\underline{x}}^{1}_{k}\right\\}_{k=1}^{M_{1}},\left\\{{\underline{x}}^{2}_{\ell}\right\\}_{\ell=1}^{M_{2}}\right)$ of sizes $M_{1}$ and $M_{2}$, respectively, there exists a subcode pair $\left({\mathcal{C}}_{1}^{\prime},{\mathcal{C}}_{2}^{\prime}\right)=\left(\left\\{{\underline{x}}^{1}_{i}\right\\}_{i=1}^{M_{1}^{\prime}},\left\\{{\underline{x}}^{2}_{j}\right\\}_{j=1}^{M_{2}^{\prime}}\right)$ of sizes $M_{1}^{\prime}\geq f_{1}(|{\mathcal{X}}_{1}|,|{\mathcal{X}}_{2}|,\eta,M_{1},M_{2})\xrightarrow{M_{1}\to\infty}\infty$ and $M_{2}^{\prime}\geq f_{2}(|{\mathcal{X}}_{1}|,|{\mathcal{X}}_{2}|,\eta,M_{1},M_{2})\xrightarrow{M_{2}\to\infty}\infty$, respectively, and there exists a distribution $P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}\in{\mathcal{J}}_{1,2}\left(P_{1},P_{2}\right)$ such that, for all $1\leq i_{1}<i_{2}\leq M_{1}^{\prime}$ and $1\leq j_{1}<j_{2}\leq M_{2}^{\prime}$, it holds that
$d_{{\infty}}\left(\tau_{{\underline{x}}^{1}_{i_{1}},{\underline{x}}^{1}_{i_{2}},{\underline{x}}^{2}_{j_{1}},{\underline{x}}^{2}_{j_{2}}},P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}\right)\leq\eta$. ###### Proof. To apply Theorem 26, we build an $(M_{1},M_{2})$-bipartite $(2,2)$-uniform complete hypergraph ${\mathcal{H}}$. The left and right vertex sets of ${\mathcal{H}}$ are the codewords in ${\mathcal{C}}_{1}$ and the codewords in ${\mathcal{C}}_{2}$ respectively. Every pair of codewords $({\underline{x}}^{1}_{i_{1}},{\underline{x}}^{1}_{i_{2}})\in\binom{{\mathcal{C}}_{1}}{2}$ (where $1\leq i_{1}<i_{2}\leq M_{1}$) in the left vertex set is connected to all pairs of codewords $({\underline{x}}^{2}_{j_{1}},{\underline{x}}^{2}_{j_{2}})\in\binom{{\mathcal{C}}_{2}}{2}$ (for all $1\leq j_{1}<j_{2}\leq M_{2}$) in the right vertex set. We now color all hyperedges of ${\mathcal{H}}$ using distributions in ${\mathcal{J}}_{1,2}\left(P_{1},P_{2}\right)$. To this end, we first take an $\eta$-net ${\mathcal{N}}$ of ${\mathcal{J}}_{1,2}\left(P_{1},P_{2}\right)$ with respect to $d_{\infty}$. By Lemma 5, $D\coloneqq\left|{\mathcal{N}}\right|$ can be made no larger than $\left(\frac{\left|{\mathcal{X}}_{1}\right|^{2}\times\left|{\mathcal{X}}_{2}\right|^{2}}{2\eta}+1\right)^{\left|{\mathcal{X}}_{1}\right|^{2}\times\left|{\mathcal{X}}_{2}\right|^{2}}$. The hyperedges in ${\mathcal{H}}$ are colored in the following way.
If a hyperedge $(({\underline{x}}^{1}_{i_{1}},{\underline{x}}^{1}_{i_{2}}),({\underline{x}}^{2}_{j_{1}},{\underline{x}}^{2}_{j_{2}}))$ (where $1\leq i_{1}<i_{2}\leq M_{1}$ and $1\leq j_{1}<j_{2}\leq M_{2}$) satisfies $d_{{\infty}}\left(\tau_{{\underline{x}}^{1}_{i_{1}},{\underline{x}}^{1}_{i_{2}},{\underline{x}}^{2}_{j_{1}},{\underline{x}}^{2}_{j_{2}}},P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}\right)\leq\eta$ for some $P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}\in{\mathcal{N}}$, then we color this hyperedge by $P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}$. Note that by the covering property of ${\mathcal{N}}$, such a distribution must exist. By Theorem 26, there exist subcodes $({\mathcal{C}}_{1}^{\prime},{\mathcal{C}}_{2}^{\prime})$ of $({\mathcal{C}}_{1},{\mathcal{C}}_{2})$ satisfying 1. $M_{1}^{\prime}\coloneqq|{\mathcal{C}}_{1}^{\prime}|\geq N_{1},M_{2}^{\prime}\coloneqq|{\mathcal{C}}_{2}^{\prime}|\geq N_{2}$ for $N_{1}=N_{1}(M_{1},M_{2},D),N_{2}=N_{2}(M_{1},M_{2},D)$ with $N_{1}\xrightarrow{M_{1}\to\infty}\infty,N_{2}\xrightarrow{M_{2}\to\infty}\infty$; 2. all hyperedges between ${\mathcal{C}}_{1}^{\prime}$ and ${\mathcal{C}}_{2}^{\prime}$ are monochromatic. In other words, according to the way we colored the hyperedges, there is a distribution $P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}\in{\mathcal{J}}_{1,2}\left(P_{1},P_{2}\right)$ such that for all $1\leq i_{1}<i_{2}\leq M_{1}^{\prime}$ and $1\leq j_{1}<j_{2}\leq M_{2}^{\prime}$, we have $d_{{\infty}}\left(\tau_{{\underline{x}}^{1}_{i_{1}},{\underline{x}}^{1}_{i_{2}},{\underline{x}}^{2}_{j_{1}},{\underline{x}}^{2}_{j_{2}}},P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}\right)\leq\eta$. This completes the proof.
∎ In what follows, we will prove that the “equicoupled” subcode pair $({\mathcal{C}}_{1}^{\prime},{\mathcal{C}}_{2}^{\prime})$ must have at least one zero rate. We do so by treating separately the case where $P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}$ is (almost) symmetric and the case where it is (significantly) asymmetric. We will actually show that $M_{1}^{\prime}=f(M_{1})\leq C_{1}$ or $M_{2}^{\prime}=f(M_{2})\leq C_{2}$ for some constants (independent of $n$) $C_{1}>0$ and $C_{2}>0$. (Hereafter we use the simplified notation $M_{1}^{\prime}=f(M_{1})$ and $M_{2}^{\prime}=f(M_{2})$, where $f(\cdot)$ is an increasing function, to emphasize the respective dependence of $|{\mathcal{C}}_{1}^{\prime}|$ and $|{\mathcal{C}}_{2}^{\prime}|$ on $|{\mathcal{C}}_{1}|$ and $|{\mathcal{C}}_{2}|$, ignoring the dependence on other parameters. Indeed, noting $M_{1},M_{2}\geq 1$ and treating $\left|{\mathcal{X}}_{1}\right|,\left|{\mathcal{X}}_{2}\right|,\eta$ as constants, one can take $f(\cdot)=\min\left\\{f_{1}(\left|{\mathcal{X}}_{1}\right|,\left|{\mathcal{X}}_{2}\right|,\eta;\cdot,1),f_{2}(\left|{\mathcal{X}}_{1}\right|,\left|{\mathcal{X}}_{2}\right|,\eta;1,\cdot)\right\\}$, where $f_{1}$ and $f_{2}$ are from Lemma 27.) Since $f(\cdot)$ is a (slowly) increasing function, this implies that the original code pair $({\mathcal{C}}_{1},{\mathcal{C}}_{2})$ has sizes $M_{1}\leq f^{-1}(C_{1})$ and $M_{2}\leq f^{-1}(C_{2})$, which are still constants (though enormous). This is a stronger statement than that $({\mathcal{C}}_{1},{\mathcal{C}}_{2})$ has at least one zero rate. ### XIV-B Asymmetric case ###### Definition 19 (Asymmetry and approximate symmetry).
The _$\left\\{1,2\right\\}$ -asymmetry_, the _$\left\\{1\right\\}$ -asymmetry_, the _$\left\\{2\right\\}$ -asymmetry_ and the _asymmetry_ of a distribution $P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}\in\Delta({\mathcal{X}}_{1}^{2}\times{\mathcal{X}}_{2}^{2})$ is respectively defined as $\displaystyle\mathrm{asymm}_{1,2}(P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}})\coloneqq$ $\displaystyle\max_{(x^{1}_{1},x^{1}_{2})\in{{\mathcal{X}}_{1}}^{2}}\max_{(x^{2}_{1},x^{2}_{2})\in{{\mathcal{X}}_{2}}^{2}}\left|P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}(x^{1}_{1},x^{1}_{2},x^{2}_{1},x^{2}_{2})-P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}(x^{1}_{2},x^{1}_{1},x^{2}_{2},x^{2}_{1})\right|,$ $\displaystyle\mathrm{asymm}_{1}(P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}})\coloneqq$ $\displaystyle\max_{(x^{1}_{1},x^{1}_{2})\in{{\mathcal{X}}_{1}}^{2}}\max_{(x^{2}_{1},x^{2}_{2})\in{{\mathcal{X}}_{2}}^{2}}\left|P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}(x^{1}_{1},x^{1}_{2},x^{2}_{1},x^{2}_{2})-P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}(x^{1}_{2},x^{1}_{1},x^{2}_{1},x^{2}_{2})\right|,$ $\displaystyle\mathrm{asymm}_{2}(P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}})\coloneqq$ $\displaystyle\max_{(x^{1}_{1},x^{1}_{2})\in{{\mathcal{X}}_{1}}^{2}}\max_{(x^{2}_{1},x^{2}_{2})\in{{\mathcal{X}}_{2}}^{2}}\left|P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}(x^{1}_{1},x^{1}_{2},x^{2}_{1},x^{2}_{2})-P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}(x^{1}_{1},x^{1}_{2},x^{2}_{2},x^{2}_{1})\right|,$ 
$\displaystyle\mathrm{asymm}(P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}})\coloneqq$ $\displaystyle\max\left\\{\mathrm{asymm}_{1,2}(P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}),\mathrm{asymm}_{1}(P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}),\mathrm{asymm}_{2}(P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}})\right\\}.$ A distribution $P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}$ is called _$\alpha$ -symmetric_ if $\mathrm{asymm}(P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}})\leq\alpha$. ###### Remark 17. By definition, a self-coupling $P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}\in{\mathcal{J}}_{1,2}\left(P_{1},P_{2}\right)$ is in ${\mathcal{S}}_{1,2}\left(P_{1},P_{2}\right)$ if and only if $\mathrm{asymm}(P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}})=0$. According to Definition 19, the asymmetry of $P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}$ that was extracted in Lemma 27 can be divided into eight different cases as shown in Table III below. Case (1) in Table III corresponds to the case where $P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}$ is $\alpha$-symmetric. This case will be treated in Section XIV-C. Other cases correspond to when $P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}$ is asymmetric with asymmetry larger than $\alpha$. They will be treated in Sections XIV-B1, XIV-B2 and XIV-B3. 
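Before enumerating the cases, the quantities in Definition 19 can be illustrated with a short numerical sketch (hypothetical binary alphabets and distributions; not part of the formal development). Each asymmetry is a sup-norm distance between a four-index pmf and one of its index-swapped versions:

```python
import numpy as np

# Definition 19 for a joint pmf P indexed as P[x1_1, x1_2, x2_1, x2_2]
# (axes 0, 1 range over X_1; axes 2, 3 range over X_2).
def asymm_12(P):  # swap both the X_1 pair and the X_2 pair
    return float(np.max(np.abs(P - P.transpose(1, 0, 3, 2))))

def asymm_1(P):   # swap only the X_1 pair
    return float(np.max(np.abs(P - P.transpose(1, 0, 2, 3))))

def asymm_2(P):   # swap only the X_2 pair
    return float(np.max(np.abs(P - P.transpose(0, 1, 3, 2))))

def asymm(P):
    return max(asymm_12(P), asymm_1(P), asymm_2(P))

# The product self-coupling P1 ⊗ P1 ⊗ P2 ⊗ P2 has zero asymmetry of every
# kind (dyadic masses are used so the check is exact in floating point).
P1 = np.array([0.25, 0.75])
P2 = np.array([0.5, 0.5])
P = np.einsum('a,b,c,d->abcd', P1, P1, P2, P2)
print(asymm(P))  # 0.0
```

A generic (non-product) self-coupling will have strictly positive asymmetry, which is exactly what the case analysis below exploits.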
Cases | $\mathrm{asymm}_{1,2}(P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}})\stackrel{{\scriptstyle?}}{{\leq}}\alpha$ | $\mathrm{asymm}_{1}(P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}})\stackrel{{\scriptstyle?}}{{\leq}}\alpha$ | $\mathrm{asymm}_{2}(P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}})\stackrel{{\scriptstyle?}}{{\leq}}\alpha$ | Section
---|---|---|---|---
Case (1) | $\leq$ | $\leq$ | $\leq$ | Section XIV-C
Case (2) | $>$ | $\leq$ | $\leq$ | Section XIV-B3
Case (3) | $\leq$ | $>$ | $\leq$ | Section XIV-B3
Case (4) | $\leq$ | $\leq$ | $>$ | Section XIV-B3
Case (5) | $>$ | $>$ | $\leq$ | Section XIV-B1
Case (6) | $>$ | $\leq$ | $>$ | Section XIV-B1
Case (7) | $\leq$ | $>$ | $>$ | Section XIV-B2
Case (8) | $>$ | $>$ | $>$ | Section XIV-B2

TABLE III: The asymmetric case can be divided into several sub-cases.

For the asymmetric cases (Cases (5)-(8) in Table III), we prove the following lemma. ###### Lemma 28. If a code pair $({\mathcal{C}}_{1}^{\prime},{\mathcal{C}}_{2}^{\prime})\in{\mathcal{X}}_{1}^{M_{1}^{\prime}\times n}\times{\mathcal{X}}_{2}^{M_{2}^{\prime}\times n}$ satisfies that there exists a distribution $P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}\in{\mathcal{J}}_{1,2}\left(P_{1},P_{2}\right)$ such that 1. ${\mathcal{C}}_{i}^{\prime}$ is $P_{i}$-constant composition for $i=1,2$; 2. for all $1\leq i_{1}<i_{2}\leq M_{1}^{\prime}$ and $1\leq j_{1}<j_{2}\leq M_{2}^{\prime}$, $d_{{\infty}}\left(\tau_{{\underline{x}}^{1}_{i_{1}},{\underline{x}}^{1}_{i_{2}},{\underline{x}}^{2}_{j_{1}},{\underline{x}}^{2}_{j_{2}}},P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}\right)\leq\eta$; 3.
$\mathrm{asymm}(P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}})\geq\alpha$, then at least one of $M_{1}^{\prime}$ and $M_{2}^{\prime}$ is at most a constant $C(\alpha,\eta)>0$. ###### Proof. The proof is divided into several cases. As we shall see in Sections XIV-B1 and XIV-B2, only Cases (5)-(8) in Table III are asymmetric cases. Cases (2)-(4), handled in Section XIV-B3, can be reduced to the symmetric case (Case (1)). The symmetric Case (1) will be treated in the next section (Section XIV-C). ∎ The following theorem will be crucial in the succeeding subsections. ###### Theorem 29 ([Kom90]). Let ${\mathbf{v}}_{1},\cdots,{\mathbf{v}}_{M}$ be a sequence of random variables over a common finite alphabet ${\mathcal{W}}$. If there exist a distribution $P_{{\mathbf{w}}_{1},{\mathbf{w}}_{2}}\in\Delta({\mathcal{W}}^{2})$ and a constant $\eta\geq 0$ such that $\left\|P_{{\mathbf{v}}_{i},{\mathbf{v}}_{j}}-P_{{\mathbf{w}}_{1},{\mathbf{w}}_{2}}\right\|_{\infty}\leq\eta$ for all $1\leq i<j\leq M$, then $\mathrm{asymm}(P_{{\mathbf{w}}_{1},{\mathbf{w}}_{2}})\leq{6}/{\sqrt{M}}+4\sqrt{\eta}+2\eta$, where $\displaystyle\mathrm{asymm}(P_{{\mathbf{w}}_{1},{\mathbf{w}}_{2}})\coloneqq$ $\displaystyle\max_{(w_{1},w_{2})\in{\mathcal{W}}\times{\mathcal{W}}}\left|P_{{\mathbf{w}}_{1},{\mathbf{w}}_{2}}(w_{1},w_{2})-P_{{\mathbf{w}}_{1},{\mathbf{w}}_{2}}(w_{2},w_{1})\right|.$ #### XIV-B1 Cases (5) & (6) in Table III We only prove Case (5) since Case (6) is the same up to change of notation. We will show that $M_{1}^{\prime}\coloneqq|{\mathcal{C}}_{1}^{\prime}|$ is at most a constant. We identify $P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}$ with $P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{z}}^{2}}$ where ${\mathbf{z}}^{2}=({\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2})$ is a random variable over ${\mathcal{Z}}_{2}\coloneqq{\mathcal{X}}_{2}^{2}$.
Immediately, $\alpha<\mathrm{asymm}_{1}(P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}})=\mathrm{asymm}_{1}(P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{z}}^{2}})$ where $\mathrm{asymm}_{1}(P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{z}}^{2}})$ is naturally defined as $\displaystyle\mathrm{asymm}_{1}(P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{z}}^{2}})\coloneqq$ $\displaystyle\max_{(x^{1}_{1},x^{1}_{2})\in{\mathcal{X}}_{1}^{2}}\max_{z^{2}\in{\mathcal{Z}}_{2}}\left|P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{z}}^{2}}(x^{1}_{1},x^{1}_{2},z^{2})-P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{z}}^{2}}(x^{1}_{2},x^{1}_{1},z^{2})\right|.$ We then have the following simple lemma. ###### Lemma 30. If a distribution $P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{z}}^{2}}$ satisfies $\mathrm{asymm}_{1}(P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{z}}^{2}})>\alpha$, then $\mathrm{asymm}(P_{{\mathbf{w}}_{1},{\mathbf{w}}_{2}})>\alpha$, where ${\mathbf{w}}_{i}\coloneqq({\mathbf{x}}^{1}_{i},{\mathbf{z}}^{2})\in{\mathcal{W}}\coloneqq{\mathcal{X}}_{1}\times{\mathcal{Z}}_{2}$ for $i=1,2$. ###### Proof.
The lemma follows from the following simple (in)equalities: $\displaystyle\left|P_{{\mathbf{w}}_{1},{\mathbf{w}}_{2}}(w_{1},w_{2})-P_{{\mathbf{w}}_{1},{\mathbf{w}}_{2}}(w_{2},w_{1})\right|=$ $\displaystyle\left|P_{({\mathbf{x}}^{1}_{1},{\mathbf{z}}^{2}),({\mathbf{x}}^{1}_{2},{\mathbf{z}}^{2})}((x^{1}_{1},z^{2}),(x^{1}_{2},z^{2}))-P_{({\mathbf{x}}^{1}_{1},{\mathbf{z}}^{2}),({\mathbf{x}}^{1}_{2},{\mathbf{z}}^{2})}((x^{1}_{2},z^{2}),(x^{1}_{1},z^{2}))\right|$ $\displaystyle=$ $\displaystyle\left|P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{z}}^{2}}(x^{1}_{1},x^{1}_{2},z^{2})-P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{z}}^{2}}(x^{1}_{2},x^{1}_{1},z^{2})\right|>\alpha.\qed$ (80) Recall $M_{1}^{\prime}\coloneqq\left|{\mathcal{C}}_{1}^{\prime}\right|,M_{2}^{\prime}\coloneqq\left|{\mathcal{C}}_{2}^{\prime}\right|$. By equicoupledness, for any fixed $1\leq j_{1}<j_{2}\leq M_{2}^{\prime}$, we have $d_{{\infty}}\left(\tau_{{\underline{x}}^{1}_{i_{1}},{\underline{x}}^{1}_{i_{2}},{\underline{x}}^{2}_{j_{1}},{\underline{x}}^{2}_{j_{2}}},P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}\right)\leq\eta$ for all $1\leq i_{1}<i_{2}\leq M_{1}^{\prime}$. Identify the codewords ${\underline{x}}^{1}_{1},\cdots,{\underline{x}}^{1}_{M_{1}^{\prime}}$ in ${\mathcal{C}}_{1}^{\prime}$, together with ${\underline{x}}^{2}_{j_{1}},{\underline{x}}^{2}_{j_{2}}\in{\mathcal{C}}_{2}^{\prime}$, with a sequence of random variables ${\boldsymbol{\chi}}_{1},\cdots,{\boldsymbol{\chi}}_{M_{1}^{\prime}},{\boldsymbol{\zeta}}^{2}\in{\mathcal{X}}_{1}^{M_{1}^{\prime}}\times{\mathcal{Z}}_{2}$. That is, $P_{{\boldsymbol{\chi}}_{1},\cdots,{\boldsymbol{\chi}}_{M_{1}^{\prime}},{\boldsymbol{\zeta}}^{2}}\coloneqq\tau_{{\underline{x}}^{1}_{1},\cdots,{\underline{x}}^{1}_{M_{1}^{\prime}},({\underline{x}}^{2}_{j_{1}},{\underline{x}}^{2}_{j_{2}})}$.
Arrange this sequence in the following way: ${\mathbf{v}}_{1},\cdots,{\mathbf{v}}_{M_{1}^{\prime}}$ where ${\mathbf{v}}_{i}=({\boldsymbol{\chi}}_{i},{\boldsymbol{\zeta}}^{2})\in{\mathcal{W}}\coloneqq{\mathcal{X}}_{1}\times{\mathcal{Z}}_{2}$. This sequence satisfies $d_{{\infty}}\left(P_{{\mathbf{v}}_{i_{1}},{\mathbf{v}}_{i_{2}}},P_{{\mathbf{w}}_{1},{\mathbf{w}}_{2}}\right)\leq\eta$ for every $1\leq i_{1}<i_{2}\leq M_{1}^{\prime}$. To see this, $\displaystyle d_{{\infty}}\left(P_{{\mathbf{v}}_{i_{1}},{\mathbf{v}}_{i_{2}}},P_{{\mathbf{w}}_{1},{\mathbf{w}}_{2}}\right)=$ $\displaystyle d_{{\infty}}\left(P_{({\boldsymbol{\chi}}_{i_{1}},{\boldsymbol{\zeta}}^{2}),({\boldsymbol{\chi}}_{i_{2}},{\boldsymbol{\zeta}}^{2})},P_{({\mathbf{x}}^{1}_{1},{\mathbf{z}}^{2}),({\mathbf{x}}^{1}_{2},{\mathbf{z}}^{2})}\right)$ $\displaystyle=$ $\displaystyle d_{{\infty}}\left(P_{{\boldsymbol{\chi}}_{i_{1}},{\boldsymbol{\chi}}_{i_{2}},{\boldsymbol{\zeta}}^{2}},P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{z}}^{2}}\right)$ $\displaystyle=$ $\displaystyle d_{{\infty}}\left(\tau_{{\underline{x}}^{1}_{i_{1}},{\underline{x}}^{1}_{i_{2}},{\underline{x}}^{2}_{j_{1}},{\underline{x}}^{2}_{j_{2}}},P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}\right)\leq\eta.$ (81) Equation 81 is by the second assumption of Lemma 28. Now by Theorem 29 and Lemma 30, $\alpha<\mathrm{asymm}(P_{{\mathbf{w}}_{1},{\mathbf{w}}_{2}})\leq 6/\sqrt{M_{1}^{\prime}}+4\sqrt{\eta}+2\eta$, i.e., $M_{1}^{\prime}<36/(\alpha-4\sqrt{\eta}-2\eta)^{2}$. This finishes the proof of this case. #### XIV-B2 Cases (7) & (8) in Table III In both Cases (7) & (8), it simultaneously holds that $\mathrm{asymm}_{1}(P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}})>\alpha$ and $\mathrm{asymm}_{2}(P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}})>\alpha$. 
By the analysis of the previous case, we have $M_{1}^{\prime}<36/(\alpha-4\sqrt{\eta}-2\eta)^{2}$ and $M_{2}^{\prime}<36/(\alpha-4\sqrt{\eta}-2\eta)^{2}$. #### XIV-B3 Cases (2)-(4) in Table III We apply the following lemma to handle Cases (2)-(4). ###### Lemma 31. The following relations hold. $\displaystyle\mathrm{asymm}_{1,2}(P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}})\leq$ $\displaystyle\mathrm{asymm}_{1}(P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}})+\mathrm{asymm}_{2}(P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}),$ (82) $\displaystyle\mathrm{asymm}_{1}(P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}})\leq$ $\displaystyle\mathrm{asymm}_{1,2}(P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}})+\mathrm{asymm}_{2}(P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}),$ (83) $\displaystyle\mathrm{asymm}_{2}(P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}})\leq$ $\displaystyle\mathrm{asymm}_{1,2}(P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}})+\mathrm{asymm}_{1}(P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}).$ (84) ###### Proof. The lemma is a simple consequence of the triangle inequality. We only prove Equation 82. Equations 83 and 84 follow similarly. 
$\displaystyle\mathrm{asymm}_{1,2}(P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}})=$ $\displaystyle\max_{(x^{1}_{1},x^{1}_{2})\in{{\mathcal{X}}_{1}}^{2}}\max_{(x^{2}_{1},x^{2}_{2})\in{{\mathcal{X}}_{2}}^{2}}\left|P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}(x^{1}_{1},x^{1}_{2},x^{2}_{1},x^{2}_{2})-P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}(x^{1}_{2},x^{1}_{1},x^{2}_{2},x^{2}_{1})\right|$ $\displaystyle\leq$ $\displaystyle\max_{(x^{1}_{1},x^{1}_{2})\in{{\mathcal{X}}_{1}}^{2}}\max_{(x^{2}_{1},x^{2}_{2})\in{{\mathcal{X}}_{2}}^{2}}\left(\left|P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}(x^{1}_{1},x^{1}_{2},x^{2}_{1},x^{2}_{2})-P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}(x^{1}_{1},x^{1}_{2},x^{2}_{2},x^{2}_{1})\right|\right.$ $\displaystyle+\left.\left|P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}(x^{1}_{1},x^{1}_{2},x^{2}_{2},x^{2}_{1})-P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}(x^{1}_{2},x^{1}_{1},x^{2}_{2},x^{2}_{1})\right|\right)$ $\displaystyle\leq$ $\displaystyle\max_{(x^{1}_{1},x^{1}_{2})\in{{\mathcal{X}}_{1}}^{2}}\max_{(x^{2}_{1},x^{2}_{2})\in{{\mathcal{X}}_{2}}^{2}}\left|P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}(x^{1}_{1},x^{1}_{2},x^{2}_{1},x^{2}_{2})-P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}(x^{1}_{1},x^{1}_{2},x^{2}_{2},x^{2}_{1})\right|$ 
$\displaystyle+\max_{(x^{1}_{1},x^{1}_{2})\in{{\mathcal{X}}_{1}}^{2}}\max_{(x^{2}_{1},x^{2}_{2})\in{{\mathcal{X}}_{2}}^{2}}\left|P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}(x^{1}_{1},x^{1}_{2},x^{2}_{2},x^{2}_{1})-P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}(x^{1}_{2},x^{1}_{1},x^{2}_{2},x^{2}_{1})\right|$ $\displaystyle=$ $\displaystyle\mathrm{asymm}_{1}(P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}})+\mathrm{asymm}_{2}(P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}).\qed$ (85) By Lemma 31, we can reduce Cases (2)-(4) to the symmetric case (Case (1)) with $\alpha$ replaced by $2\alpha$. Indeed, in Case (2), $\displaystyle\alpha<\mathrm{asymm}_{1,2}(P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}})\leq$ $\displaystyle\mathrm{asymm}_{1}(P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}})+\mathrm{asymm}_{2}(P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}})\leq 2\alpha.$ Cases (3) and (4) are similar. #### XIV-B4 Case (1) in Table III Case (1) is treated in the next section. ### XIV-C Symmetric case In the previous section, we showed that $P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}$ associated to the subcode pair $({\mathcal{C}}_{1}^{\prime},{\mathcal{C}}_{2}^{\prime})$ must be approximately symmetric (in the sense of Definition 19) for both $|{\mathcal{C}}_{1}^{\prime}|$ and $|{\mathcal{C}}_{2}^{\prime}|$ to be large, regardless of the channel structure. 
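For concreteness, the asymmetric-case argument yields an explicit (if crude) constant: whenever the relevant asymmetry exceeds $\alpha$, Theorem 29 forces $M^{\prime}<36/(\alpha-4\sqrt{\eta}-2\eta)^{2}$. The following sketch evaluates this bound for hypothetical values of $\alpha$ and $\eta$ (the values are purely illustrative):

```python
import math

# Constant bound on the size of an equicoupled subcode whose associated
# self-coupling has asymmetry exceeding alpha.  Valid once eta is small
# enough that alpha - 4*sqrt(eta) - 2*eta > 0.
def subcode_size_bound(alpha, eta):
    slack = alpha - 4 * math.sqrt(eta) - 2 * eta
    if slack <= 0:
        raise ValueError("eta too large relative to alpha")
    return 36 / slack ** 2

# Hypothetical parameters: the bound is a constant, independent of n.
print(subcode_size_bound(0.5, 1e-4))
```

Note that the bound degrades as $\eta$ grows, which is why the equicoupling tolerance must be chosen small relative to the asymmetry threshold $\alpha$.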
Therefore, in this section we focus on the case where $\displaystyle\mathrm{asymm}(P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}})\leq\alpha.$ (86) Though we assume ${\mathcal{G}}\left(P_{1},P_{2}\right)=\emptyset$ in Item 1 of Theorem 20, the set ${\mathcal{G}}_{{1,2}}\left(P_{1},P_{2}\right)\setminus{\mathcal{K}}_{{1,2}}\left(P_{1},P_{2}\right)$ may or may not be empty (see Figure 5). We treat these two cases separately in the subsequent two subsections (Sections XIV-C1 and XIV-C2). #### XIV-C1 The case where ${\mathcal{G}}_{{1,2}}\left(P_{1},P_{2}\right)\setminus{\mathcal{K}}_{{1,2}}\left(P_{1},P_{2}\right)=\emptyset$ In this subsection, we show that if ${\mathcal{G}}_{{1,2}}\left(P_{1},P_{2}\right)\setminus{\mathcal{K}}_{{1,2}}\left(P_{1},P_{2}\right)=\emptyset$, then both $M_{1}^{\prime}$ and $M_{2}^{\prime}$ are bounded from above by a constant. Therefore, any good code pair $({\mathcal{C}}_{1},{\mathcal{C}}_{2})$ has rates $R_{1}=0$ and $R_{2}=0$. The geometry of various sets of distributions that are involved in the following proof is depicted in Figure 4. Figure 4: The geometry of various sets of distributions in the converse proof in Section XIV-C1. We assume ${\mathcal{G}}_{{1,2}}\left(P_{1},P_{2}\right)\setminus{\mathcal{K}}_{{1,2}}\left(P_{1},P_{2}\right)=\emptyset$ and would like to show that any zero-error code pair has rate $R_{1}=0$ and $R_{2}=0$. In the above figure, the ambient space is the set ${\mathcal{J}}_{1,2}\left(P_{1},P_{2}\right)$ of self-couplings equipped with $\ell^{1}$ metric. The set ${\mathcal{G}}_{{1,2}}\left(P_{1},P_{2}\right)$ is a strict subset of ${\mathcal{K}}_{{1,2}}\left(P_{1},P_{2}\right)$ such that they are $\varepsilon$-separated (see Equation 87). 
The joint types of the equicoupled subcode pair $({\mathcal{C}}_{1}^{\prime},{\mathcal{C}}_{2}^{\prime})$ are in an $\eta^{\prime}$-ball (see Equation 91) around a distribution $P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}$ which is assumed to be $\alpha$-symmetric (see Equation 86). We then project $P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}$ to obtain a symmetric distribution $\overline{P}_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}$ defined in Equation 88. (Note that $\overline{P}_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}$ may be slightly inside ${\mathcal{K}}_{{1,2}}\left(P_{1},P_{2}\right)$.) It can be shown that $P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}$ and $\overline{P}_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}$ are $\alpha^{\prime}$-close (see Equation 89). Since $({\mathcal{C}}_{1}^{\prime},{\mathcal{C}}_{2}^{\prime})$ attains zero error and all joint types are outside ${\mathcal{K}}_{{1,2}}\left(P_{1},P_{2}\right)$, one can show that $\overline{P}_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}$ is $(\varepsilon-\eta^{\prime}-\alpha^{\prime})$-far from ${\mathcal{G}}_{{1,2}}\left(P_{1},P_{2}\right)$ (see 32). This allows us to proceed with the double counting argument. We assume that ${\mathcal{G}}_{{1,2}}\left(P_{1},P_{2}\right)$ is a _proper_ subset of ${\mathcal{K}}_{{1,2}}\left(P_{1},P_{2}\right)$.
Specifically, we assume that there exists a constant $\varepsilon>0$ such that $\displaystyle d_{{1}}\left({\mathcal{G}}_{{1,2}}\left(P_{1},P_{2}\right),{\mathcal{J}}_{1,2}\left(P_{1},P_{2}\right)\setminus{\mathcal{K}}_{{1,2}}\left(P_{1},P_{2}\right)\right)\geq\varepsilon.$ (87) We first project $P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}$ to ${\mathcal{S}}_{1,2}\left(P_{1},P_{2}\right)$ and obtain an _exactly_ symmetric distribution $\overline{P}_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}$, $\displaystyle\overline{P}_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}\coloneqq\frac{1}{4}\left(P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}+P_{{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}+P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{2},{\mathbf{x}}^{2}_{1}}+P_{{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{2}_{2},{\mathbf{x}}^{2}_{1}}\right).$ (88) Since the four summands are all in ${\mathcal{J}}_{1,2}\left(P_{1},P_{2}\right)$, $\overline{P}_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}$ is also in ${\mathcal{J}}_{1,2}\left(P_{1},P_{2}\right)$. Also, one can easily check that it is indeed symmetric in the sense of Definition 14. Furthermore, $\overline{P}_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}$ and $P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}$ are close to each other. 
$\displaystyle d_{{1}}\left(\overline{P}_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}},P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}\right)=$ $\displaystyle\sum_{(x^{1}_{1},x^{1}_{2},x^{2}_{1},x^{2}_{2})\in{\mathcal{X}}_{1}^{2}\times{\mathcal{X}}_{2}^{2}}\left|\overline{P}_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}(x^{1}_{1},x^{1}_{2},x^{2}_{1},x^{2}_{2})-P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}(x^{1}_{1},x^{1}_{2},x^{2}_{1},x^{2}_{2})\right|$ $\displaystyle\leq$ $\displaystyle\sum_{(x^{1}_{1},x^{1}_{2},x^{2}_{1},x^{2}_{2})\in{\mathcal{X}}_{1}^{2}\times{\mathcal{X}}_{2}^{2}}\frac{1}{4}\left(\left|P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}(x^{1}_{1},x^{1}_{2},x^{2}_{1},x^{2}_{2})-P_{{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}(x^{1}_{1},x^{1}_{2},x^{2}_{1},x^{2}_{2})\right|\right.$ $\displaystyle+\left|P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}(x^{1}_{1},x^{1}_{2},x^{2}_{1},x^{2}_{2})-P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{2},{\mathbf{x}}^{2}_{1}}(x^{1}_{1},x^{1}_{2},x^{2}_{1},x^{2}_{2})\right|$ $\displaystyle+\left.\left|P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}(x^{1}_{1},x^{1}_{2},x^{2}_{1},x^{2}_{2})-P_{{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{2}_{2},{\mathbf{x}}^{2}_{1}}(x^{1}_{1},x^{1}_{2},x^{2}_{1},x^{2}_{2})\right|\right)$ $\displaystyle\leq$ $\displaystyle\frac{3}{4}|{\mathcal{X}}_{1}|^{2}|{\mathcal{X}}_{2}|^{2}\alpha\eqqcolon\alpha^{\prime}.$ (89) Equation 89 follows from the assumption given by Equation 86. 
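The projection in Equation 88 and the bound in Equation 89 are easy to sanity-check numerically. The following sketch is illustrative only and not part of the proof: it uses toy alphabet sizes and takes $\alpha$ to be the largest entrywise difference between $P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}$ and its swapped versions, which upper-bounds each difference term in the chain leading to Equation 89.

```python
import numpy as np

rng = np.random.default_rng(0)
X1, X2 = 2, 3  # toy alphabet sizes |X_1|, |X_2|

# A random joint distribution P(x1_1, x1_2, x2_1, x2_2).
P = rng.random((X1, X1, X2, X2))
P /= P.sum()

# Symmetrization as in Equation 88: average over swapping the first
# pair of coordinates, the second pair, or both.
P_bar = 0.25 * (P
                + P.transpose(1, 0, 2, 3)   # swap x1_1 <-> x1_2
                + P.transpose(0, 1, 3, 2)   # swap x2_1 <-> x2_2
                + P.transpose(1, 0, 3, 2))  # swap both pairs

# P_bar is exactly symmetric under both swaps.
assert np.allclose(P_bar, P_bar.transpose(1, 0, 2, 3))
assert np.allclose(P_bar, P_bar.transpose(0, 1, 3, 2))

# Largest entrywise swap difference of P (a proxy for its asymmetry).
alpha = max(np.abs(P - P.transpose(1, 0, 2, 3)).max(),
            np.abs(P - P.transpose(0, 1, 3, 2)).max(),
            np.abs(P - P.transpose(1, 0, 3, 2)).max())

# The l^1 bound of Equation 89: d_1(P_bar, P) <= (3/4) |X1|^2 |X2|^2 alpha.
d1 = np.abs(P_bar - P).sum()
assert d1 <= 0.75 * X1**2 * X2**2 * alpha + 1e-12
```

The assertions encode exact symmetry of the projection and the inequality of Equation 89; they hold deterministically for any input $P$, not only for this random one.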
Though we will not use it, the above bound can be slightly improved to $\alpha^{\prime}=\frac{1}{4}\left(3|{\mathcal{X}}_{1}|^{2}|{\mathcal{X}}_{2}|^{2}-|{\mathcal{X}}_{1}||{\mathcal{X}}_{2}|^{2}-|{\mathcal{X}}_{1}|^{2}|{\mathcal{X}}_{2}|-|{\mathcal{X}}_{1}||{\mathcal{X}}_{2}|\right)\alpha$ by noting that some terms corresponding to $x^{1}_{1}=x^{1}_{2}$ or $x^{2}_{1}=x^{2}_{2}$ do not contribute to the sum. ###### Claim 32. Under the assumptions of Section XIV-C1, $\overline{P}_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}$ is not in ${\mathcal{G}}_{{1,2}}\left(P_{1},P_{2}\right)$: $\displaystyle d_{{1}}\left(\overline{P}_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}},{\mathcal{G}}_{{1,2}}\left(P_{1},P_{2}\right)\right)\geq$ $\displaystyle\varepsilon-\eta^{\prime}-\alpha^{\prime},$ (90) where $\eta^{\prime}\coloneqq\left|{\mathcal{X}}_{1}\right|^{2}\left|{\mathcal{X}}_{2}\right|^{2}\eta$ and $\alpha^{\prime}$ was defined in Equation 89. ###### Proof.
To prove the claim, first recall that for any $1\leq i_{1}<i_{2}\leq M_{1}^{\prime}$ and $1\leq j_{1}<j_{2}\leq M_{2}^{\prime}$, we have (by 6) $\displaystyle d_{{1}}\left(\tau_{{\underline{x}}^{1}_{i_{1}},{\underline{x}}^{1}_{i_{2}},{\underline{x}}^{2}_{j_{1}},{\underline{x}}^{2}_{j_{2}}},P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}\right)\leq$ $\displaystyle\left|{\mathcal{X}}_{1}\right|^{2}\left|{\mathcal{X}}_{2}\right|^{2}d_{{\infty}}\left(\tau_{{\underline{x}}^{1}_{i_{1}},{\underline{x}}^{1}_{i_{2}},{\underline{x}}^{2}_{j_{1}},{\underline{x}}^{2}_{j_{2}}},P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}\right)\leq\left|{\mathcal{X}}_{1}\right|^{2}\left|{\mathcal{X}}_{2}\right|^{2}\eta\eqqcolon\eta^{\prime}.$ (91) Since $({\mathcal{C}}_{1},{\mathcal{C}}_{2})$ is a good code pair, $({\mathcal{C}}_{1}^{\prime},{\mathcal{C}}_{2}^{\prime})$ is also good. Hence $\tau_{{\underline{x}}^{1}_{i_{1}},{\underline{x}}^{1}_{i_{2}},{\underline{x}}^{2}_{j_{1}},{\underline{x}}^{2}_{j_{2}}}$ is not confusable, i.e., $\displaystyle\tau_{{\underline{x}}^{1}_{i_{1}},{\underline{x}}^{1}_{i_{2}},{\underline{x}}^{2}_{j_{1}},{\underline{x}}^{2}_{j_{2}}}\in{\mathcal{J}}_{1,2}\left(P_{1},P_{2}\right)\setminus{\mathcal{K}}_{{1,2}}\left(P_{1},P_{2}\right).$ (92) We get that $\tau_{{\underline{x}}^{1}_{i_{1}},{\underline{x}}^{1}_{i_{2}},{\underline{x}}^{2}_{j_{1}},{\underline{x}}^{2}_{j_{2}}}$ is strictly bounded away from ${\mathcal{G}}_{{1,2}}\left(P_{1},P_{2}\right)$. 
$\displaystyle d_{{1}}\left(\tau_{{\underline{x}}^{1}_{i_{1}},{\underline{x}}^{1}_{i_{2}},{\underline{x}}^{2}_{j_{1}},{\underline{x}}^{2}_{j_{2}}},{\mathcal{G}}_{{1,2}}\left(P_{1},P_{2}\right)\right)\geq d_{{1}}\left({\mathcal{G}}_{{1,2}}\left(P_{1},P_{2}\right),{\mathcal{J}}_{1,2}\left(P_{1},P_{2}\right)\setminus{\mathcal{K}}_{{1,2}}\left(P_{1},P_{2}\right)\right)\geq\varepsilon.$ (93) The first inequality is by Equation 92 and the second one follows from the assumption given by Equation 87. Equations 93 and 91 imply that $P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}$ is strictly outside ${\mathcal{G}}_{{1,2}}\left(P_{1},P_{2}\right)$. $\displaystyle d_{{1}}\left(P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}},{\mathcal{G}}_{{1,2}}\left(P_{1},P_{2}\right)\right)\geq$ $\displaystyle d_{{1}}\left(\tau_{{\underline{x}}^{1}_{i_{1}},{\underline{x}}^{1}_{i_{2}},{\underline{x}}^{2}_{j_{1}},{\underline{x}}^{2}_{j_{2}}},{\mathcal{G}}_{{1,2}}\left(P_{1},P_{2}\right)\right)-d_{{1}}\left(\tau_{{\underline{x}}^{1}_{i_{1}},{\underline{x}}^{1}_{i_{2}},{\underline{x}}^{2}_{j_{1}},{\underline{x}}^{2}_{j_{2}}},P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}\right)\geq\varepsilon-\eta^{\prime}.$ (94) Combining Equations 94 and 89, we further have $\displaystyle d_{{1}}\left(\overline{P}_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}},{\mathcal{G}}_{{1,2}}\left(P_{1},P_{2}\right)\right)\geq$ $\displaystyle d_{{1}}\left(P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}},{\mathcal{G}}_{{1,2}}\left(P_{1},P_{2}\right)\right)-d_{{1}}\left(P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}},\overline{P}_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}\right)\geq\varepsilon-\eta^{\prime}-\alpha^{\prime}.$ This finishes the 
proof of 32. ∎ Since $\overline{P}_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}\notin{\mathcal{G}}_{{1,2}}\left(P_{1},P_{2}\right)$ by Equation 90, we can apply Theorem 18. There exists $Q_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}\in\mathrm{co}\text{-}{\mathcal{G}}_{1,2}\left(P_{1},P_{2}\right)$ such that $\displaystyle\left\langle\overline{P}_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}},Q_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}\right\rangle\leq$ $\displaystyle-\varepsilon^{\prime}<0$ (95) for some constant $\varepsilon^{\prime}>0$. To prove upper bounds on $M_{1}^{\prime}$ and $M_{2}^{\prime}$, the trick is to upper and lower bound the following quantity $\displaystyle\sum_{(i_{1},i_{2})\in[M_{1}^{\prime}]\times[M_{1}^{\prime}]}\sum_{(j_{1},j_{2})\in[M_{2}^{\prime}]\times[M_{2}^{\prime}]}\left\langle\tau_{{\underline{x}}^{1}_{i_{1}},{\underline{x}}^{1}_{i_{2}},{\underline{x}}^{2}_{j_{1}},{\underline{x}}^{2}_{j_{2}}},Q_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}\right\rangle.$ (96) Contrasting the upper and lower bounds on Equation 96 will give us an upper bound on $\max\left\\{M_{1}^{\prime},M_{2}^{\prime}\right\\}$. We first prove an upper bound on Equation 96. ###### Claim 33.
Equation 96 is at most $\displaystyle\sum_{(i_{1},i_{2})\in[M_{1}^{\prime}]\times[M_{1}^{\prime}]}\sum_{(j_{1},j_{2})\in[M_{2}^{\prime}]\times[M_{2}^{\prime}]}\left\langle\tau_{{\underline{x}}^{1}_{i_{1}},{\underline{x}}^{1}_{i_{2}},{\underline{x}}^{2}_{j_{1}},{\underline{x}}^{2}_{j_{2}}},Q_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}\right\rangle$ $\displaystyle\leq$ $\displaystyle M_{1}^{\prime}(M_{1}^{\prime}-1)M_{2}^{\prime}(M_{2}^{\prime}-1)(\eta^{\prime}+\alpha^{\prime}-\varepsilon^{\prime})+M_{1}^{\prime 2}M_{2}^{\prime}+M_{1}^{\prime}M_{2}^{\prime 2}+M_{1}^{\prime}M_{2}^{\prime}.$ (97) ###### Proof. Expanding the summation in Equation 96, we have $\displaystyle\sum_{(i_{1},i_{2})\in[M_{1}^{\prime}]\times[M_{1}^{\prime}]}\sum_{(j_{1},j_{2})\in[M_{2}^{\prime}]\times[M_{2}^{\prime}]}\left\langle\tau_{{\underline{x}}^{1}_{i_{1}},{\underline{x}}^{1}_{i_{2}},{\underline{x}}^{2}_{j_{1}},{\underline{x}}^{2}_{j_{2}}},Q_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}\right\rangle$ $\displaystyle=$ $\displaystyle\sum_{\begin{subarray}{c}(i_{1},i_{2},j_{1},j_{2})\in[M_{1}^{\prime}]^{2}\times[M_{2}^{\prime}]^{2}\\\ i_{1}\neq i_{2},j_{1}\neq j_{2}\end{subarray}}+\sum_{\begin{subarray}{c}(i_{1},i_{2},j_{1},j_{2})\in[M_{1}^{\prime}]^{2}\times[M_{2}^{\prime}]^{2}\\\ i_{1}=i_{2}\mathrm{\ or\ }j_{1}=j_{2}\end{subarray}}\left\langle\tau_{{\underline{x}}^{1}_{i_{1}},{\underline{x}}^{1}_{i_{2}},{\underline{x}}^{2}_{j_{1}},{\underline{x}}^{2}_{j_{2}}},Q_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}\right\rangle$ $\displaystyle=$ $\displaystyle\sum_{i_{1}\neq i_{2},j_{1}\neq j_{2}}+\sum_{i_{1}\neq i_{2},j_{1}=j_{2}}+\sum_{i_{1}=i_{2},j_{1}\neq
j_{2}}+\sum_{i_{1}=i_{2},j_{1}=j_{2}}\left\langle\tau_{{\underline{x}}^{1}_{i_{1}},{\underline{x}}^{1}_{i_{2}},{\underline{x}}^{2}_{j_{1}},{\underline{x}}^{2}_{j_{2}}},Q_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}\right\rangle.$ (98) Note that $\displaystyle\left\langle\tau_{{\underline{x}}^{1}_{i_{1}},{\underline{x}}^{1}_{i_{2}},{\underline{x}}^{2}_{j_{1}},{\underline{x}}^{2}_{j_{2}}},Q_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}\right\rangle\leq$ $\displaystyle\left\|\tau_{{\underline{x}}^{1}_{i_{1}},{\underline{x}}^{1}_{i_{2}},{\underline{x}}^{2}_{j_{1}},{\underline{x}}^{2}_{j_{2}}}\right\|_{2}\left\|Q_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}\right\|_{2}\leq\left\|\tau_{{\underline{x}}^{1}_{i_{1}},{\underline{x}}^{1}_{i_{2}},{\underline{x}}^{2}_{j_{1}},{\underline{x}}^{2}_{j_{2}}}\right\|_{1}\left\|Q_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}\right\|_{1}=1.$ The last three terms in Equation 98 are at most $\displaystyle M_{1}^{\prime 2}M_{2}^{\prime}+M_{1}^{\prime}M_{2}^{\prime 2}+M_{1}^{\prime}M_{2}^{\prime}.$ (99) The first term in Equation 98 can be bounded as follows.
$\displaystyle\sum_{i_{1}\neq i_{2},j_{1}\neq j_{2}}\left\langle\tau_{{\underline{x}}^{1}_{i_{1}},{\underline{x}}^{1}_{i_{2}},{\underline{x}}^{2}_{j_{1}},{\underline{x}}^{2}_{j_{2}}},Q_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}\right\rangle$ $\displaystyle=$ $\displaystyle\sum_{i_{1}\neq i_{2},j_{1}\neq j_{2}}\left(\left\langle\tau_{{\underline{x}}^{1}_{i_{1}},{\underline{x}}^{1}_{i_{2}},{\underline{x}}^{2}_{j_{1}},{\underline{x}}^{2}_{j_{2}}}-\overline{P}_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}},Q_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}\right\rangle+\left\langle\overline{P}_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}},Q_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}\right\rangle\right).$ (100) For any $i_{1}\neq i_{2}$ and $j_{1}\neq j_{2}$, the first term of the summand in Equation 100 is at most $\displaystyle\left\langle\tau_{{\underline{x}}^{1}_{i_{1}},{\underline{x}}^{1}_{i_{2}},{\underline{x}}^{2}_{j_{1}},{\underline{x}}^{2}_{j_{2}}}-\overline{P}_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}},Q_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}\right\rangle\leq$ $\displaystyle\left\|\tau_{{\underline{x}}^{1}_{i_{1}},{\underline{x}}^{1}_{i_{2}},{\underline{x}}^{2}_{j_{1}},{\underline{x}}^{2}_{j_{2}}}-\overline{P}_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}\right\|_{1}\left\|Q_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}\right\|_{\infty}$ (101) $\displaystyle\leq$ $\displaystyle 
d_{{1}}\left(\tau_{{\underline{x}}^{1}_{i_{1}},{\underline{x}}^{1}_{i_{2}},{\underline{x}}^{2}_{j_{1}},{\underline{x}}^{2}_{j_{2}}},P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}\right)+d_{{1}}\left(P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}},\overline{P}_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}\right)$ (102) $\displaystyle\leq$ $\displaystyle\eta^{\prime}+\alpha^{\prime}.$ (103) In Equation 101, we used the symmetry (as per Definition 14) of $\overline{P}_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}$. (The double counting argument crucially relies on the symmetry of $\overline{P}_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}$, which is the reason why we treat the symmetric and asymmetric cases separately.) Specifically, since $\overline{P}_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}\in{\mathcal{S}}_{1,2}\left(P_{1},P_{2}\right)$, we have $\displaystyle d_{{1}}\left(\tau_{{\underline{x}}^{1}_{i_{2}},{\underline{x}}^{1}_{i_{1}},{\underline{x}}^{2}_{j_{2}},{\underline{x}}^{2}_{j_{1}}},\overline{P}_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}\right)=$ $\displaystyle d_{{1}}\left(\tau_{{\underline{x}}^{1}_{i_{1}},{\underline{x}}^{1}_{i_{2}},{\underline{x}}^{2}_{j_{1}},{\underline{x}}^{2}_{j_{2}}},\overline{P}_{{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{2}_{2},{\mathbf{x}}^{2}_{1}}\right)=d_{{1}}\left(\tau_{{\underline{x}}^{1}_{i_{1}},{\underline{x}}^{1}_{i_{2}},{\underline{x}}^{2}_{j_{1}},{\underline{x}}^{2}_{j_{2}}},\overline{P}_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}\right),$ $\displaystyle
d_{{1}}\left(\tau_{{\underline{x}}^{1}_{i_{2}},{\underline{x}}^{1}_{i_{1}},{\underline{x}}^{2}_{j_{1}},{\underline{x}}^{2}_{j_{2}}},\overline{P}_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}\right)=$ $\displaystyle d_{{1}}\left(\tau_{{\underline{x}}^{1}_{i_{1}},{\underline{x}}^{1}_{i_{2}},{\underline{x}}^{2}_{j_{1}},{\underline{x}}^{2}_{j_{2}}},\overline{P}_{{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}\right)=d_{{1}}\left(\tau_{{\underline{x}}^{1}_{i_{1}},{\underline{x}}^{1}_{i_{2}},{\underline{x}}^{2}_{j_{1}},{\underline{x}}^{2}_{j_{2}}},\overline{P}_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}\right),$ $\displaystyle d_{{1}}\left(\tau_{{\underline{x}}^{1}_{i_{1}},{\underline{x}}^{1}_{i_{2}},{\underline{x}}^{2}_{j_{2}},{\underline{x}}^{2}_{j_{1}}},\overline{P}_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}\right)=$ $\displaystyle d_{{1}}\left(\tau_{{\underline{x}}^{1}_{i_{1}},{\underline{x}}^{1}_{i_{2}},{\underline{x}}^{2}_{j_{1}},{\underline{x}}^{2}_{j_{2}}},\overline{P}_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{2},{\mathbf{x}}^{2}_{1}}\right)=d_{{1}}\left(\tau_{{\underline{x}}^{1}_{i_{1}},{\underline{x}}^{1}_{i_{2}},{\underline{x}}^{2}_{j_{1}},{\underline{x}}^{2}_{j_{2}}},\overline{P}_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}\right).$ Hence, the bound in Equation 103 holds for all $i_{1}\neq i_{2}$ and $j_{1}\neq j_{2}$ (not only for $i_{1}<i_{2}$ and $j_{1}<j_{2}$). In Equation 102, we used the trivial bound $\left\|Q_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}\right\|_{\infty}\leq 1$ since $Q_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}$ is a probability distribution. Equation 103 is by Equations 89 and 91. 
Combining Equations 103 and 95, we get that the first term in Equation 98 is at most $\displaystyle M_{1}^{\prime}(M_{1}^{\prime}-1)M_{2}^{\prime}(M_{2}^{\prime}-1)(\eta^{\prime}+\alpha^{\prime}-\varepsilon^{\prime}).$ (104) Overall, by Equations 99 and 104, we get an upper bound on Equation 96: $\displaystyle M_{1}^{\prime}(M_{1}^{\prime}-1)M_{2}^{\prime}(M_{2}^{\prime}-1)(\eta^{\prime}+\alpha^{\prime}-\varepsilon^{\prime})+M_{1}^{\prime 2}M_{2}^{\prime}+M_{1}^{\prime}M_{2}^{\prime 2}+M_{1}^{\prime}M_{2}^{\prime},$ which completes the proof of 33. ∎ On the other hand, a lower bound on Equation 96 follows from a direct calculation. ###### Claim 34. Equation 96 is nonnegative, i.e., $\displaystyle\sum_{(i_{1},i_{2})\in[M_{1}^{\prime}]\times[M_{1}^{\prime}]}\sum_{(j_{1},j_{2})\in[M_{2}^{\prime}]\times[M_{2}^{\prime}]}\left\langle\tau_{{\underline{x}}^{1}_{i_{1}},{\underline{x}}^{1}_{i_{2}},{\underline{x}}^{2}_{j_{1}},{\underline{x}}^{2}_{j_{2}}},Q_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}\right\rangle\geq 0.$ (105) ###### Proof. We compute Equation 96 from first principles and interchange the summations.
$\displaystyle\sum_{(i_{1},i_{2})\in[M_{1}^{\prime}]\times[M_{1}^{\prime}]}\sum_{(j_{1},j_{2})\in[M_{2}^{\prime}]\times[M_{2}^{\prime}]}\left\langle\tau_{{\underline{x}}^{1}_{i_{1}},{\underline{x}}^{1}_{i_{2}},{\underline{x}}^{2}_{j_{1}},{\underline{x}}^{2}_{j_{2}}},Q_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}\right\rangle$ $\displaystyle=$ $\displaystyle\sum_{(i_{1},i_{2})\in[M_{1}^{\prime}]^{2}}\sum_{(j_{1},j_{2})\in[M_{2}^{\prime}]^{2}}\sum_{(x^{1}_{1},x^{1}_{2})\in{\mathcal{X}}_{1}^{2}}\sum_{(x^{2}_{1},x^{2}_{2})\in{\mathcal{X}}_{2}^{2}}\tau_{{\underline{x}}^{1}_{i_{1}},{\underline{x}}^{1}_{i_{2}},{\underline{x}}^{2}_{j_{1}},{\underline{x}}^{2}_{j_{2}}}(x^{1}_{1},x^{1}_{2},x^{2}_{1},x^{2}_{2})Q(x^{1}_{1},x^{1}_{2},x^{2}_{1},x^{2}_{2})$ $\displaystyle=$ $\displaystyle\sum_{\begin{subarray}{c}(x^{1}_{1},x^{1}_{2})\in{\mathcal{X}}_{1}^{2}\\\ (x^{2}_{1},x^{2}_{2})\in{\mathcal{X}}_{2}^{2}\end{subarray}}\sum_{\begin{subarray}{c}(i_{1},i_{2})\in[M_{1}^{\prime}]^{2}\\\ (j_{1},j_{2})\in[M_{2}^{\prime}]^{2}\end{subarray}}\frac{1}{n}\sum_{k\in[n]}\mathds{1}{\left\\{{\underline{x}}^{1}_{i_{1}}(k)=x^{1}_{1}\right\\}}\mathds{1}{\left\\{{\underline{x}}^{1}_{i_{2}}(k)=x^{1}_{2}\right\\}}\mathds{1}{\left\\{{\underline{x}}^{2}_{j_{1}}(k)=x^{2}_{1}\right\\}}\mathds{1}{\left\\{{\underline{x}}^{2}_{j_{2}}(k)=x^{2}_{2}\right\\}}Q(x^{1}_{1},x^{1}_{2},x^{2}_{1},x^{2}_{2})$ $\displaystyle=$ $\displaystyle\frac{1}{n}\sum_{\begin{subarray}{c}(x^{1}_{1},x^{1}_{2})\in{\mathcal{X}}_{1}^{2}\\\ (x^{2}_{1},x^{2}_{2})\in{\mathcal{X}}_{2}^{2}\end{subarray}}\sum_{k\in[n]}\left(\sum_{i_{1}\in[M_{1}^{\prime}]}\mathds{1}{\left\\{{\underline{x}}^{1}_{i_{1}}(k)=x^{1}_{1}\right\\}}\right)\left(\sum_{i_{2}\in[M_{1}^{\prime}]}\mathds{1}{\left\\{{\underline{x}}^{1}_{i_{2}}(k)=x^{1}_{2}\right\\}}\right)
$\displaystyle\left(\sum_{j_{1}\in[M_{2}^{\prime}]}\mathds{1}{\left\\{{\underline{x}}^{2}_{j_{1}}(k)=x^{2}_{1}\right\\}}\right)\left(\sum_{j_{2}\in[M_{2}^{\prime}]}\mathds{1}{\left\\{{\underline{x}}^{2}_{j_{2}}(k)=x^{2}_{2}\right\\}}\right)Q(x^{1}_{1},x^{1}_{2},x^{2}_{1},x^{2}_{2})$ $\displaystyle=$ $\displaystyle\frac{M_{1}^{\prime 2}M_{2}^{\prime 2}}{n}\sum_{k\in[n]}\sum_{\begin{subarray}{c}(x^{1}_{1},x^{1}_{2})\in{\mathcal{X}}_{1}^{2}\\\ (x^{2}_{1},x^{2}_{2})\in{\mathcal{X}}_{2}^{2}\end{subarray}}P_{1}^{(k)}(x^{1}_{1})P_{1}^{(k)}(x^{1}_{2})P_{2}^{(k)}(x^{2}_{1})P_{2}^{(k)}(x^{2}_{2})Q(x^{1}_{1},x^{1}_{2},x^{2}_{1},x^{2}_{2})$ (106) $\displaystyle=$ $\displaystyle M_{1}^{\prime 2}M_{2}^{\prime 2}\left\langle\frac{1}{n}\sum_{k\in[n]}\left(P_{1}^{(k)}\right)^{\otimes 2}\otimes\left(P_{2}^{(k)}\right)^{\otimes 2},Q_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}\right\rangle$ $\displaystyle\geq$ $\displaystyle 0.$ (107) In Equation 106, $P_{i}^{(k)}$ denotes the empirical distribution of the $k$-th _column_ of the codebook ${\mathcal{C}}_{i}^{\prime}\in{\mathcal{X}}_{i}^{M_{i}^{\prime}\times n}$, i.e., for any $x^{i}\in{\mathcal{X}}_{i}$, $\displaystyle P_{i}^{(k)}(x^{i})=\frac{1}{M_{i}^{\prime}}\left|\left\\{\ell\in[M_{i}^{\prime}]:{\underline{x}}^{i}_{\ell}(k)=x^{i}\right\\}\right|.$ (108) Equation 107 follows from Theorem 18 since $\frac{1}{n}\sum_{k\in[n]}\left(P_{1}^{(k)}\right)^{\otimes 2}\otimes\left(P_{2}^{(k)}\right)^{\otimes 2}\in{\mathcal{G}}_{{1,2}}\left(P_{1},P_{2}\right)$ and $Q_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}\in\mathrm{co}\text{-}{\mathcal{G}}_{1,2}\left(P_{1},P_{2}\right)$. This finishes the proof of 34. 
∎ Finally, Equations 97 and 105 yield $\displaystyle 0\leq$ $\displaystyle M_{1}^{\prime 2}M_{2}^{\prime 2}(\eta^{\prime}+\alpha^{\prime}-\varepsilon^{\prime})+M_{1}^{\prime 2}M_{2}^{\prime}+M_{1}^{\prime}M_{2}^{\prime 2}+M_{1}^{\prime}M_{2}^{\prime}$ $\displaystyle\implies$ $\displaystyle 0\leq$ $\displaystyle M_{1}^{\prime}M_{2}^{\prime}(\eta^{\prime}+\alpha^{\prime}-\varepsilon^{\prime})+M_{1}^{\prime}+M_{2}^{\prime}+1$ $\displaystyle\implies$ $\displaystyle 0\leq$ $\displaystyle-\delta M^{\prime 2}+2M^{\prime}+1$ (109) $\displaystyle\implies$ $\displaystyle M^{\prime}\leq$ $\displaystyle\frac{1+\sqrt{1+\delta}}{\delta}.$ (110) In Equation 109, we let $M^{\prime}\coloneqq\max\left\\{M_{1}^{\prime},M_{2}^{\prime}\right\\}$ and $\delta\coloneqq\varepsilon^{\prime}-\eta^{\prime}-\alpha^{\prime}>0$. Equation 110 gives us the desired bound $\max\left\\{M_{1}^{\prime},M_{2}^{\prime}\right\\}\leq C$ for some constant $C>0$ independent of $n$. #### XIV-C2 The case where ${\mathcal{G}}_{{1,2}}\left(P_{1},P_{2}\right)\setminus{\mathcal{K}}_{{1,2}}\left(P_{1},P_{2}\right)\neq\emptyset$ Intuitively, this case is impossible for the following reasons. In the last subsection, we have shown that for either $|{\mathcal{C}}_{1}^{\prime}|$ or $|{\mathcal{C}}_{2}^{\prime}|$ to be large, the distribution $\overline{P}_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}$ should (approximately) belong to ${\mathcal{G}}_{{1,2}}\left(P_{1},P_{2}\right)\setminus{\mathcal{K}}_{{1,2}}\left(P_{1},P_{2}\right)$.
For one thing, since $\overline{P}_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}\in{\mathcal{G}}_{{1,2}}\left(P_{1},P_{2}\right)$, by the second property in Proposition 17, $\left[\overline{P}_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}\right]_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1}}$ (approximately) belongs to ${\mathcal{G}}_{{1}}\left(P_{1},P_{2}\right)$ and $\left[\overline{P}_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}\right]_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}$ (approximately) belongs to ${\mathcal{G}}_{{2}}\left(P_{1},P_{2}\right)$. For another thing, since the code pair $({\mathcal{C}}_{1}^{\prime},{\mathcal{C}}_{2}^{\prime})$ is assumed to attain zero error in the first place, we have that $\left[\overline{P}_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}\right]_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1}}$, which is close to $\tau_{{\underline{x}}^{1}_{i_{1}},{\underline{x}}^{1}_{i_{2}},{\underline{x}}^{2}_{j_{1}}}$, is (approximately) outside ${\mathcal{K}}_{{1}}\left(P_{1},P_{2}\right)$ and $\left[\overline{P}_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}\right]_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}$, which is close to $\tau_{{\underline{x}}^{1}_{i_{1}},{\underline{x}}^{2}_{j_{1}},{\underline{x}}^{2}_{j_{2}}}$, is (approximately) outside ${\mathcal{K}}_{{2}}\left(P_{1},P_{2}\right)$.
In summary, we found a distribution $\overline{P}_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}\in{\mathcal{G}}_{{1,2}}\left(P_{1},P_{2}\right)\setminus{\mathcal{K}}_{{1,2}}\left(P_{1},P_{2}\right)$ with $\left[\overline{P}_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}\right]_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1}}\in{\mathcal{G}}_{{1}}\left(P_{1},P_{2}\right)\setminus{\mathcal{K}}_{{1}}\left(P_{1},P_{2}\right)$ and $\left[\overline{P}_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}\right]_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}\in{\mathcal{G}}_{{2}}\left(P_{1},P_{2}\right)\setminus{\mathcal{K}}_{{2}}\left(P_{1},P_{2}\right)$. This, nevertheless, contradicts the assumption ${\mathcal{G}}\left(P_{1},P_{2}\right)=\emptyset$ of Item 1 in Theorem 20. The above intuition can be formalized by taking good care of various slack factors. We flesh out the details below.
In the previous section, we showed that for any constant $\gamma>0$, if the distribution $\overline{P}_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}$ (which is the symmetrized version of $P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}$, as defined in Equation 88) associated to $({\mathcal{C}}_{1}^{\prime},{\mathcal{C}}_{2}^{\prime})$ is $\gamma$-far (in $\ell^{1}$ distance) from ${\mathcal{G}}_{{1,2}}\left(P_{1},P_{2}\right)$, then both $M_{1}^{\prime}$ and $M_{2}^{\prime}$ are at most a constant $g(\gamma)$ for some function $g(\gamma)\xrightarrow{\gamma\to 0}\infty$. (In the previous section, $\gamma=\varepsilon-\eta^{\prime}-\alpha^{\prime}$, as we got in Equation 90, and $g(\gamma)=g(\varepsilon,\eta^{\prime},\alpha^{\prime})=\frac{1+\sqrt{1+\varepsilon^{\prime}-\eta^{\prime}-\alpha^{\prime}}}{\varepsilon^{\prime}-\eta^{\prime}-\alpha^{\prime}}$ where $\varepsilon^{\prime}=\varepsilon^{\prime}(\varepsilon)$, as we obtained in Equation 110.) In other words, for $M_{1}^{\prime}$ or $M_{2}^{\prime}$ to be sufficiently large, $\overline{P}_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}$ has to be $\gamma$-close (in $\ell^{1}$ distance) to ${\mathcal{G}}_{{1,2}}\left(P_{1},P_{2}\right)$ for an arbitrarily small constant $\gamma>0$. Note also that unlike $\tau_{{\underline{x}}^{1}_{i_{1}},{\underline{x}}^{1}_{i_{2}},{\underline{x}}^{2}_{j_{1}},{\underline{x}}^{2}_{j_{2}}}$, the distribution $\overline{P}_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}$ can be _slightly_ inside ${\mathcal{K}}_{{1,2}}\left(P_{1},P_{2}\right)$. However, it cannot be _significantly_ inside ${\mathcal{K}}_{{1,2}}\left(P_{1},P_{2}\right)$.
Specifically, for any $1\leq i_{1}<i_{2}\leq M_{1}^{\prime}$ and $1\leq j_{1}<j_{2}\leq M_{2}^{\prime}$, $\displaystyle d_{{1}}\left(\overline{P}_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}},{\mathcal{J}}_{1,2}\left(P_{1},P_{2}\right)\setminus{\mathcal{K}}_{{1,2}}\left(P_{1},P_{2}\right)\right)$ $\displaystyle\leq$ $\displaystyle d_{{1}}\left(\overline{P}_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}},P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}\right)+d_{{1}}\left(P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}},\tau_{{\underline{x}}^{1}_{i_{1}},{\underline{x}}^{1}_{i_{2}},{\underline{x}}^{2}_{j_{1}},{\underline{x}}^{2}_{j_{2}}}\right)+d_{{1}}\left(\tau_{{\underline{x}}^{1}_{i_{1}},{\underline{x}}^{1}_{i_{2}},{\underline{x}}^{2}_{j_{1}},{\underline{x}}^{2}_{j_{2}}},{\mathcal{J}}_{1,2}\left(P_{1},P_{2}\right)\setminus{\mathcal{K}}_{{1,2}}\left(P_{1},P_{2}\right)\right)$ (111) $\displaystyle\leq$ $\displaystyle\alpha^{\prime}+\eta^{\prime}.$ (112) In Equation 112, we used Equations 89 and 91. Also, the last term in Equation 111 is zero due to Equation 92. Overall, we have that for any good code pair $({\mathcal{C}}_{1}^{\prime},{\mathcal{C}}_{2}^{\prime})\in{\mathcal{X}}_{1}^{M_{1}^{\prime}\times n}\times{\mathcal{X}}_{2}^{M_{2}^{\prime}\times n}$ extracted from Lemma 27, for either $M_{1}^{\prime}$ or $M_{2}^{\prime}$ to be sufficiently large, $\overline{P}_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}$ has to be $(\varepsilon-\eta^{\prime}-\alpha^{\prime})$-close to ${\mathcal{G}}_{{1,2}}\left(P_{1},P_{2}\right)$ and $(\alpha^{\prime}+\eta^{\prime})$-close to ${\mathcal{J}}_{1,2}\left(P_{1},P_{2}\right)\setminus{\mathcal{K}}_{{1,2}}\left(P_{1},P_{2}\right)$ for arbitrarily small constants $\varepsilon,\eta^{\prime},\alpha^{\prime}>0$.
Therefore, we can without loss of rigor drop these slack factors and assume for convenience $\displaystyle\overline{P}_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}\in{\mathcal{G}}_{{1,2}}\left(P_{1},P_{2}\right)\setminus{\mathcal{K}}_{{1,2}}\left(P_{1},P_{2}\right).$ (113) For this to be possible, in this subsection we consider the case where ${\mathcal{G}}_{{1,2}}\left(P_{1},P_{2}\right)\setminus{\mathcal{K}}_{{1,2}}\left(P_{1},P_{2}\right)\neq\emptyset$. Let $\displaystyle\overline{P}_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}}\coloneqq$ $\displaystyle\left[\overline{P}_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}\right]_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1}}=\left[\overline{P}_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}\right]_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{2}},$ $\displaystyle\overline{P}_{{\mathbf{x}}^{1},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}\coloneqq$ $\displaystyle\left[\overline{P}_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}\right]_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}=\left[\overline{P}_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}\right]_{{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}.$ Since $\overline{P}_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}\in{\mathcal{G}}_{{1,2}}\left(P_{1},P_{2}\right)$, the equality of the respective marginals above is by the second property of Proposition 17. 
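The marginalization notation $[\cdot]$ and the equal-marginals property above can be sanity-checked numerically. The sketch below (alphabet sizes and the random distribution are hypothetical; it is only an illustration, not part of the proof) symmetrizes a 4-coordinate joint distribution over its paired coordinates and verifies that the two resulting marginals coincide:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy alphabets: |X1| = 2, |X2| = 3 (sizes chosen arbitrarily).
# Axes of P are ordered as (x^1_1, x^1_2, x^2_1, x^2_2).
P = rng.random((2, 2, 3, 3))
# Symmetrize over the two x^2 axes and the two x^1 axes, mimicking
# membership in the symmetric set appearing in Proposition 17.
P = (P + P.transpose(0, 1, 3, 2)) / 2
P = (P + P.transpose(1, 0, 2, 3)) / 2
P /= P.sum()

# [P]_{x^1_1, x^1_2, x^2_1}: marginalize out x^2_2 (axis 3).
marg_keep_x2_1 = P.sum(axis=3)
# [P]_{x^1_1, x^1_2, x^2_2}: marginalize out x^2_1 (axis 2).
marg_keep_x2_2 = P.sum(axis=2)

# Symmetry in the x^2 coordinates forces the two marginals to agree.
assert np.allclose(marg_keep_x2_1, marg_keep_x2_2)
```

The same check with axes 0 and 1 verifies the analogous identity for the $x^{1}$ marginals.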
Furthermore, by Equation 112 and Lemma 9, $\displaystyle d_{{1}}\left(\overline{P}_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}},{\mathcal{J}}_{1}\left(P_{1},P_{2}\right)\setminus{\mathcal{K}}_{{1}}\left(P_{1},P_{2}\right)\right)\leq$ $\displaystyle\alpha^{\prime}+\eta^{\prime},$ (114) $\displaystyle d_{{1}}\left(\overline{P}_{{\mathbf{x}}^{1},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}},{\mathcal{J}}_{2}\left(P_{1},P_{2}\right)\setminus{\mathcal{K}}_{{2}}\left(P_{1},P_{2}\right)\right)\leq$ $\displaystyle\alpha^{\prime}+\eta^{\prime}.$ (115) We further divide the analysis into two cases (as shown in Figure 5). (a) The case where $\widetilde{{\mathcal{G}}_{{1}}}\left(P_{1},P_{2}\right)\setminus{\mathcal{K}}_{{1}}\left(P_{1},P_{2}\right)=\emptyset$ where $\widetilde{{\mathcal{G}}_{{1}}}\left(P_{1},P_{2}\right)$ is defined in Equation 116. (b) The case where $\widetilde{{\mathcal{G}}_{{1}}}\left(P_{1},P_{2}\right)\setminus{\mathcal{K}}_{{1}}\left(P_{1},P_{2}\right)\neq\emptyset$ while $\widetilde{{\mathcal{G}}_{{2}}}\left(P_{1},P_{2}\right)\setminus{\mathcal{K}}_{{2}}\left(P_{1},P_{2}\right)=\emptyset$ where $\widetilde{{\mathcal{G}}_{{2}}}\left(P_{1},P_{2}\right)$ is defined in Equation 126. Figure 5: Under the assumptions ${\mathcal{G}}\left(P_{1},P_{2}\right)=\emptyset$ and ${\mathcal{G}}_{{1,2}}\left(P_{1},P_{2}\right)\setminus{\mathcal{K}}_{{1,2}}\left(P_{1},P_{2}\right)\neq\emptyset$, we further divide the converse analysis into two cases. The goal is to show that in these cases there do not exist zero-error code pairs of rates $R_{1}>0$ and $R_{2}>0$. In the above figures, pink sets are confusability sets and green sets are sets of good distributions. 
Two-dimensional sets are sets of joint distributions (e.g., ${\mathcal{G}}_{{1,2}}\left(P_{1},P_{2}\right),{\mathcal{K}}_{{1,2}}\left(P_{1},P_{2}\right)$) and one-dimensional sets are sets of marginal distributions (e.g., ${\mathcal{G}}_{{1}}\left(P_{1},P_{2}\right),{\mathcal{G}}_{{2}}\left(P_{1},P_{2}\right),{\mathcal{K}}_{{1}}\left(P_{1},P_{2}\right),{\mathcal{K}}_{{2}}\left(P_{1},P_{2}\right)$, etc.). 1. Define $\displaystyle\widetilde{{\mathcal{G}}_{{1}}}\left(P_{1},P_{2}\right)\coloneqq$ $\displaystyle\left\\{\left[P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}\right]_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1}}:P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}\in{\mathcal{G}}_{{1,2}}\left(P_{1},P_{2}\right)\setminus{\mathcal{K}}_{{1,2}}\left(P_{1},P_{2}\right)\right\\}\subseteq{\mathcal{G}}_{{1}}\left(P_{1},P_{2}\right).$ (116) Note that by Equation 113, $\displaystyle\overline{P}_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}}\in\widetilde{{\mathcal{G}}_{{1}}}\left(P_{1},P_{2}\right).$ (117) Since we assume ${\mathcal{G}}\left(P_{1},P_{2}\right)=\emptyset$ in Item 1 of Theorem 20, $\widetilde{{\mathcal{G}}_{{1}}}\left(P_{1},P_{2}\right)\setminus{\mathcal{K}}_{{1}}\left(P_{1},P_{2}\right)$ may or may not be empty. We first handle the case where $\widetilde{{\mathcal{G}}_{{1}}}\left(P_{1},P_{2}\right)\setminus{\mathcal{K}}_{{1}}\left(P_{1},P_{2}\right)=\emptyset$. In fact, let us assume $\displaystyle d_{{1}}\left(\widetilde{{\mathcal{G}}_{{1}}}\left(P_{1},P_{2}\right),{\mathcal{J}}_{1}\left(P_{1},P_{2}\right)\setminus{\mathcal{K}}_{{1}}\left(P_{1},P_{2}\right)\right)\geq\varepsilon_{1}$ (118) for some $\varepsilon_{1}>0$. See Figure 5(a). However, Equations 117 and 114 lead to a contradiction if $\alpha^{\prime}$ and $\eta^{\prime}$ are sufficiently small so that $\alpha^{\prime}+\eta^{\prime}<\varepsilon_{1}$. 
Therefore, this case cannot occur. 2. Now we assume $\displaystyle\widetilde{{\mathcal{G}}_{{1}}}\left(P_{1},P_{2}\right)\setminus{\mathcal{K}}_{{1}}\left(P_{1},P_{2}\right)\neq\emptyset.$ (119) The analysis of the above case shows that $\overline{P}_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}}\in\widetilde{{\mathcal{G}}_{{1}}}\left(P_{1},P_{2}\right)$ has to be $(\alpha^{\prime}+\eta^{\prime})$-close to ${\mathcal{J}}_{1}\left(P_{1},P_{2}\right)\setminus{\mathcal{K}}_{{1}}\left(P_{1},P_{2}\right)$ for arbitrarily small $\alpha^{\prime}$ and $\eta^{\prime}$. Similar to the assumption given by Equation 113, in the present case we may as well assume for convenience $\displaystyle\overline{P}_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}}\in\widetilde{{\mathcal{G}}_{{1}}}\left(P_{1},P_{2}\right)\setminus{\mathcal{K}}_{{1}}\left(P_{1},P_{2}\right).$ (120) Now define $\displaystyle\widetilde{{\mathcal{G}}_{{2}}}\left(P_{1},P_{2}\right)\coloneqq$ $\displaystyle\left\\{\left[P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}\right]_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}:\begin{array}[]{rl}P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}\in&{\mathcal{G}}_{{1,2}}\left(P_{1},P_{2}\right)\setminus{\mathcal{K}}_{{1,2}}\left(P_{1},P_{2}\right)\\\ \left[P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}\right]_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1}}\in&{\mathcal{G}}_{{1}}\left(P_{1},P_{2}\right)\setminus{\mathcal{K}}_{{1}}\left(P_{1},P_{2}\right)\end{array}\right\\}$ (123) $\displaystyle=$ 
$\displaystyle\left\\{\left[P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}\right]_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}:\begin{array}[]{rl}P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}\in&{\mathcal{G}}_{{1,2}}\left(P_{1},P_{2}\right)\setminus{\mathcal{K}}_{{1,2}}\left(P_{1},P_{2}\right)\\\ \left[P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}\right]_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}_{1}}\in&\widetilde{{\mathcal{G}}_{{1}}}\left(P_{1},P_{2}\right)\setminus{\mathcal{K}}_{{1}}\left(P_{1},P_{2}\right)\end{array}\right\\}\subseteq{\mathcal{G}}_{{2}}\left(P_{1},P_{2}\right).$ (126) Equation 126 is by Equation 116. By the assumption given by Equation 119, $\widetilde{{\mathcal{G}}_{{2}}}\left(P_{1},P_{2}\right)\neq\emptyset$. Note that by Equations 113 and 120, $\displaystyle\overline{P}_{{\mathbf{x}}^{1},{\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2}}\in\widetilde{{\mathcal{G}}_{{2}}}\left(P_{1},P_{2}\right).$ (127) On the other hand, by the assumption ${\mathcal{G}}\left(P_{1},P_{2}\right)=\emptyset$ and Equation 119, $\widetilde{{\mathcal{G}}_{{2}}}\left(P_{1},P_{2}\right)\setminus{\mathcal{K}}_{{2}}\left(P_{1},P_{2}\right)$ must be empty. In fact, let us assume $\displaystyle d_{{1}}\left(\widetilde{{\mathcal{G}}_{{2}}}\left(P_{1},P_{2}\right),{\mathcal{J}}_{2}\left(P_{1},P_{2}\right)\setminus{\mathcal{K}}_{{2}}\left(P_{1},P_{2}\right)\right)\geq\varepsilon_{2}$ (128) for some $\varepsilon_{2}>0$. See Figure 5(b). Now by Equations 115 and 127, we again reach a contradiction if $\alpha^{\prime}+\eta^{\prime}<\varepsilon_{2}$. Therefore, this case also cannot occur. ## XV Converse, Items 2 and 3 in Theorem 20 In this section, we only prove Item 2. Item 3 follows by interchanging notation. 
We assume that ${\mathcal{G}}_{{1}}\left(P_{1},P_{2}\right)\setminus{\mathcal{K}}_{{1}}\left(P_{1},P_{2}\right)=\emptyset$. More precisely, we assume $\displaystyle d_{{1}}\left({\mathcal{G}}_{{1}}\left(P_{1},P_{2}\right),{\mathcal{J}}_{1}\left(P_{1},P_{2}\right)\setminus{\mathcal{K}}_{{1}}\left(P_{1},P_{2}\right)\right)\geq\varepsilon$ (129) for some $\varepsilon>0$. Let $({\mathcal{C}}_{1},{\mathcal{C}}_{2})$ be any good code pair. Suppose $R_{1}>0$. Our goal is to derive a contradiction. ### XV-A Subcode extraction ###### Theorem 35 (Ramsey’s theorem [Wik21]). Let ${\mathcal{K}}_{M}$ denote the (undirected) complete graph on $M$ vertices. Let $N\in{\mathbb{Z}}_{\geq 1},D\in{\mathbb{Z}}_{\geq 2}$. Then there exists a constant $K=K(N,D)$ such that for every $D$-coloring of the edges of ${\mathcal{K}}_{M}$ with $M\geq K$, there is a monochromatic clique in ${\mathcal{K}}_{M}$ of size at least $N$. ###### Lemma 36 (Subcode extraction). Let $({\mathcal{C}}_{1},{\mathcal{C}}_{2})\subseteq{\mathcal{X}}_{1}^{n}\times{\mathcal{X}}_{2}^{n}$ be any $(P_{1},P_{2})$-constant composition code pair of sizes $M_{1},M_{2}$, respectively. Let $j\in[M_{2}]$. Then there exists a subcode ${\mathcal{C}}_{1}^{\prime}\subseteq{\mathcal{C}}_{1}$ of size $M_{1}^{\prime}\geq f(\left|{\mathcal{X}}_{1}\right|,\left|{\mathcal{X}}_{2}\right|,\eta,M_{1})\xrightarrow{M_{1}\to\infty}\infty$ and a distribution $P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}}\in{\mathcal{J}}_{1}\left(P_{1},P_{2}\right)$ such that for all $1\leq i_{1}<i_{2}\leq M_{1}^{\prime}$, we have $d_{{\infty}}\left(\tau_{{\underline{x}}^{1}_{i_{1}},{\underline{x}}^{1}_{i_{2}},{\underline{x}}^{2}_{j}},P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}}\right)\leq\eta$. ###### Proof. The proof is similar to that of Lemma 27 and follows readily from Theorem 35. We first build a complete graph ${\mathcal{K}}_{M_{1}}$ whose vertex set is ${\mathcal{C}}_{1}$. 
We then color the edges of ${\mathcal{K}}_{M_{1}}$ using distributions in ${\mathcal{J}}_{1}\left(P_{1},P_{2}\right)$. Let ${\mathcal{N}}$ be an $\eta$-net of ${\mathcal{J}}_{1}\left(P_{1},P_{2}\right)$ of size at most $|{\mathcal{N}}|\leq\left(\frac{\left|{\mathcal{X}}_{1}\right|^{2}\times\left|{\mathcal{X}}_{2}\right|}{2\eta}+1\right)^{\left|{\mathcal{X}}_{1}\right|^{2}\times\left|{\mathcal{X}}_{2}\right|}\eqqcolon D$ (by Lemma 5). An edge $({\underline{x}}^{1}_{i_{1}},{\underline{x}}^{1}_{i_{2}})$ ($1\leq i_{1}<i_{2}\leq M_{1}$) is colored by a distribution $P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}}\in{\mathcal{N}}$ if $d_{{\infty}}\left(\tau_{{\underline{x}}^{1}_{i_{1}},{\underline{x}}^{1}_{i_{2}},{\underline{x}}^{2}_{j}},P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}}\right)\leq\eta$. Now by Theorem 35, there is a monochromatic subcode ${\mathcal{C}}_{1}^{\prime}\subseteq{\mathcal{C}}_{1}$ of size at least $M_{1}^{\prime}\geq f(\left|{\mathcal{X}}_{1}\right|,\left|{\mathcal{X}}_{2}\right|,\eta,M_{1})$, where $f(\left|{\mathcal{X}}_{1}\right|,\left|{\mathcal{X}}_{2}\right|,\eta,M_{1})\xrightarrow{M_{1}\to\infty}\infty$. According to the way we colored the edges, this means that for all $1\leq i_{1}<i_{2}\leq M_{1}^{\prime}$, $d_{{\infty}}\left(\tau_{{\underline{x}}^{1}_{i_{1}},{\underline{x}}^{1}_{i_{2}},{\underline{x}}^{2}_{j}},P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}}\right)\leq\eta$. ∎ Fix any $j\in[M_{2}]$. 
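The edge-coloring procedure in the proof above can be made concrete on a toy instance. In the sketch below (all parameters are hypothetical, and brute-force search stands in for the Ramsey bound, which is only feasible at toy sizes), each edge of the complete graph on a small codebook is colored by its joint type quantized to an $\eta$-grid, and a monochromatic clique is then extracted:

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)

# Toy parameters, chosen purely for illustration.
n, M1 = 60, 8
eta = 0.1
C1 = rng.integers(0, 2, size=(M1, n))   # binary codebook for user 1
x2 = rng.integers(0, 2, size=n)         # a fixed codeword of user 2

def joint_type(a, b, c):
    """Empirical joint type tau over (x^1_1, x^1_2, x^2)."""
    t = np.zeros((2, 2, 2))
    for k in range(n):
        t[a[k], b[k], c[k]] += 1
    return t / n

def color(t):
    """Quantize the type to an eta-grid; this plays the role of the eta-net."""
    return tuple(np.round(t.flatten() / eta).astype(int))

# Color every edge of the complete graph on the codebook.
edge_color = {
    (i1, i2): color(joint_type(C1[i1], C1[i2], x2))
    for i1, i2 in itertools.combinations(range(M1), 2)
}

# Brute-force search for the largest monochromatic clique (feasible at toy M1;
# Ramsey's theorem guarantees its size grows with M1).
best = ()
for size in range(M1, 1, -1):
    for clique in itertools.combinations(range(M1), size):
        if len({edge_color[e] for e in itertools.combinations(clique, 2)}) == 1:
            best = clique
            break
    if best:
        break

# Within the clique, all pairwise joint types lie in a common eta-ball.
assert len(best) >= 2
```

Within `best`, every pair of codewords has an $\eta$-close joint type with the fixed `x2`, which is exactly the conclusion of Lemma 36 at toy scale.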
By Lemma 36, there is a subcode ${\mathcal{C}}_{1}^{\prime}\subseteq{\mathcal{C}}_{1}$ of size $M_{1}^{\prime}\xrightarrow{M_{1}\to\infty}\infty$ such that for some distribution $P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}}\in{\mathcal{J}}_{1}\left(P_{1},P_{2}\right)$, we have $\displaystyle d_{{\infty}}\left(\tau_{{\underline{x}}^{1}_{i_{1}},{\underline{x}}^{1}_{i_{2}},{\underline{x}}^{2}_{j}},P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}}\right)\leq\eta$ (130) for all $1\leq i_{1}<i_{2}\leq M_{1}^{\prime}$. Equation 130 implies, by 6, that $\displaystyle d_{{1}}\left(\tau_{{\underline{x}}^{1}_{i_{1}},{\underline{x}}^{1}_{i_{2}},{\underline{x}}^{2}_{j}},P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}}\right)\leq\left|{\mathcal{X}}_{1}\right|^{2}\left|{\mathcal{X}}_{2}\right|\eta\eqqcolon\eta^{\prime}.$ (131) In the following two sections (Sections XV-B and XV-C) we treat the cases where $P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}}$ is (noticeably) asymmetric and (approximately) symmetric (in the sense of Definition 14) separately. ### XV-B Asymmetric case Reusing the proof for Cases (5) & (6) of Lemma 28 with ${\mathbf{z}}^{2}$ being ${\mathbf{x}}^{2}$ (instead of $({\mathbf{x}}^{2}_{1},{\mathbf{x}}^{2}_{2})$ as in Section XIV-B) and ${\boldsymbol{\zeta}}^{2}$ corresponding to ${\underline{x}}^{2}_{j}$ (instead of $({\underline{x}}^{2}_{j_{1}},{\underline{x}}^{2}_{j_{2}})$ as in Section XIV-B), we get that $\mathrm{asymm}(P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}})\leq\alpha$ as long as $M_{1}^{\prime}\geq 36/(\alpha-4\sqrt{\eta}-2\eta)^{2}$. ### XV-C Symmetric case As we saw in the last section, for $M_{1}^{\prime}$ to be sufficiently large, $\mathrm{asymm}(P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}})\leq\alpha$. 
Under such an approximate symmetry condition, we then pass to an _exactly_ symmetric distribution $\overline{P}_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}}\in{\mathcal{S}}_{1}\left(P_{1},P_{2}\right)$ defined as $\displaystyle\overline{P}_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}}\coloneqq$ $\displaystyle\frac{1}{2}\left(P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}}+P_{{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{2}}\right).$ Furthermore, $\displaystyle d_{{1}}\left(\overline{P}_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}},P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}}\right)=$ $\displaystyle\sum_{(x^{1}_{1},x^{1}_{2},x^{2})\in{\mathcal{X}}_{1}^{2}\times{\mathcal{X}}_{2}}\left|\overline{P}_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}}(x^{1}_{1},x^{1}_{2},x^{2})-P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}}(x^{1}_{1},x^{1}_{2},x^{2})\right|$ $\displaystyle\leq$ $\displaystyle\frac{1}{2}\sum_{(x^{1}_{1},x^{1}_{2},x^{2})\in{\mathcal{X}}_{1}^{2}\times{\mathcal{X}}_{2}}\left|P_{{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{2}}(x^{1}_{1},x^{1}_{2},x^{2})-P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}}(x^{1}_{1},x^{1}_{2},x^{2})\right|$ $\displaystyle\leq$ $\displaystyle\frac{1}{2}\left|{\mathcal{X}}_{1}\right|^{2}\left|{\mathcal{X}}_{2}\right|\alpha\eqqcolon\alpha^{\prime}.$ (132) To apply the duality theorem (Theorem 18), we argue that $\overline{P}_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}}$ is not in ${\mathcal{G}}_{{1}}\left(P_{1},P_{2}\right)$. 
$\displaystyle d_{{1}}\left(\overline{P}_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}},{\mathcal{G}}_{{1}}\left(P_{1},P_{2}\right)\right)\geq$ $\displaystyle d_{{1}}\left(P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}},{\mathcal{G}}_{{1}}\left(P_{1},P_{2}\right)\right)-d_{{1}}\left(P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}},\overline{P}_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}}\right)$ $\displaystyle\geq$ $\displaystyle d_{{1}}\left(\tau_{{\underline{x}}^{1}_{i_{1}},{\underline{x}}^{1}_{i_{2}},{\underline{x}}^{2}_{j}},{\mathcal{G}}_{{1}}\left(P_{1},P_{2}\right)\right)-d_{{1}}\left(\tau_{{\underline{x}}^{1}_{i_{1}},{\underline{x}}^{1}_{i_{2}},{\underline{x}}^{2}_{j}},P_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}}\right)-\alpha^{\prime}$ (133) $\displaystyle\geq$ $\displaystyle d_{{1}}\left({\mathcal{G}}_{{1}}\left(P_{1},P_{2}\right),{\mathcal{J}}_{1}\left(P_{1},P_{2}\right)\setminus{\mathcal{K}}_{{1}}\left(P_{1},P_{2}\right)\right)-\eta^{\prime}-\alpha^{\prime}$ (134) $\displaystyle\geq$ $\displaystyle\varepsilon-\eta^{\prime}-\alpha^{\prime}.$ (135) Equation 133 is by Equation 132. Equation 134 is by Equation 131 and the fact that $\tau_{{\underline{x}}^{1}_{i_{1}},{\underline{x}}^{1}_{i_{2}},{\underline{x}}^{2}_{j}}\notin{\mathcal{K}}_{{1}}\left(P_{1},P_{2}\right)$. Equation 135 follows from Equation 129. By Theorem 18, there exists $Q_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}}\in\mathrm{co}\text{-}{\mathcal{G}}_{1}\left(P_{1},P_{2}\right)$, such that $\displaystyle\left\langle\overline{P}_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}},Q_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}}\right\rangle\leq-\varepsilon^{\prime}$ (136) for some constant $\varepsilon^{\prime}>0$. 
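The symmetrization step and the bound in Equation 132 can be checked numerically. Below is a minimal sketch with a random joint distribution (the alphabet sizes are arbitrary; this is an illustration of the algebra, not of the actual code-induced distributions):

```python
import numpy as np

rng = np.random.default_rng(2)

# A random joint distribution on (x^1_1, x^1_2, x^2) with |X1| = 2, |X2| = 3.
P = rng.random((2, 2, 3))
P /= P.sum()
P_swapped = P.transpose(1, 0, 2)   # swap the roles of x^1_1 and x^1_2

# The exactly symmetric version, as in the definition of P-bar.
P_bar = (P + P_swapped) / 2
assert np.allclose(P_bar, P_bar.transpose(1, 0, 2))

# The l1 distance from P-bar to P is exactly half the asymmetry of P,
# matching the computation leading to Equation 132.
d1_to_P = np.abs(P_bar - P).sum()
asymm = np.abs(P_swapped - P).sum()
assert np.isclose(d1_to_P, asymm / 2)
```

Since $\overline{P}-P=(P_{\text{swapped}}-P)/2$ entrywise, the inequality in Equation 132 is in fact an equality up to the final bound by $\alpha$.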
The strategy is to bound $\displaystyle\sum_{(i_{1},i_{2})\in[M_{1}^{\prime}]^{2}}\left\langle\tau_{{\underline{x}}^{1}_{i_{1}},{\underline{x}}^{1}_{i_{2}},{\underline{x}}^{2}_{j}},Q_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}}\right\rangle.$ (137) For an upper bound, $\displaystyle\sum_{(i_{1},i_{2})\in[M_{1}^{\prime}]^{2}}\left\langle\tau_{{\underline{x}}^{1}_{i_{1}},{\underline{x}}^{1}_{i_{2}},{\underline{x}}^{2}_{j}},Q_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}}\right\rangle$ $\displaystyle=$ $\displaystyle\sum_{\begin{subarray}{c}(i_{1},i_{2})\in[M_{1}^{\prime}]^{2}\\\ i_{1}\neq i_{2}\end{subarray}}\left\langle\tau_{{\underline{x}}^{1}_{i_{1}},{\underline{x}}^{1}_{i_{2}},{\underline{x}}^{2}_{j}},Q_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}}\right\rangle+\sum_{i\in[M_{1}^{\prime}]}\left\langle\tau_{{\underline{x}}^{1}_{i},{\underline{x}}^{1}_{i},{\underline{x}}^{2}_{j}},Q_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}}\right\rangle$ $\displaystyle=$ $\displaystyle\sum_{\begin{subarray}{c}(i_{1},i_{2})\in[M_{1}^{\prime}]^{2}\\\ i_{1}\neq i_{2}\end{subarray}}\left(\left\langle\tau_{{\underline{x}}^{1}_{i_{1}},{\underline{x}}^{1}_{i_{2}},{\underline{x}}^{2}_{j}}-\overline{P}_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}},Q_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}}\right\rangle-\left\langle\overline{P}_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}},Q_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}}\right\rangle\right)+\sum_{i\in[M_{1}^{\prime}]}\left\langle\tau_{{\underline{x}}^{1}_{i},{\underline{x}}^{1}_{i},{\underline{x}}^{2}_{j}},Q_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}}\right\rangle$ $\displaystyle\leq$ $\displaystyle M_{1}^{\prime 2}(\eta^{\prime}+\alpha^{\prime}-\varepsilon^{\prime})+M_{1}^{\prime}.$ (138) In the above Equation 138, besides Equations 131, 132 and 136, we also used the fact that 
$\overline{P}_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}}\in{\mathcal{S}}_{1}\left(P_{1},P_{2}\right)$ and hence by Definition 14 $\displaystyle d_{{1}}\left(\tau_{{\underline{x}}^{1}_{i_{2}},{\underline{x}}^{1}_{i_{1}},{\underline{x}}^{2}_{j}},\overline{P}_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}}\right)=d_{{1}}\left(\tau_{{\underline{x}}^{1}_{i_{1}},{\underline{x}}^{1}_{i_{2}},{\underline{x}}^{2}_{j}},\overline{P}_{{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{2}}\right)=d_{{1}}\left(\tau_{{\underline{x}}^{1}_{i_{1}},{\underline{x}}^{1}_{i_{2}},{\underline{x}}^{2}_{j}},\overline{P}_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}}\right).$ For a lower bound, $\displaystyle\sum_{(i_{1},i_{2})\in[M_{1}^{\prime}]^{2}}\left\langle\tau_{{\underline{x}}^{1}_{i_{1}},{\underline{x}}^{1}_{i_{2}},{\underline{x}}^{2}_{j}},Q_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}}\right\rangle$ $\displaystyle=$ $\displaystyle\sum_{(i_{1},i_{2})\in[M_{1}^{\prime}]^{2}}\sum_{(x^{1}_{1},x^{1}_{2},x^{2})\in{\mathcal{X}}_{1}^{2}\times{\mathcal{X}}_{2}}\tau_{{\underline{x}}^{1}_{i_{1}},{\underline{x}}^{1}_{i_{2}},{\underline{x}}^{2}_{j}}(x^{1}_{1},x^{1}_{2},x^{2})Q_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}}(x^{1}_{1},x^{1}_{2},x^{2})$ $\displaystyle=$ $\displaystyle\sum_{(i_{1},i_{2})\in[M_{1}^{\prime}]^{2}}\sum_{(x^{1}_{1},x^{1}_{2},x^{2})\in{\mathcal{X}}_{1}^{2}\times{\mathcal{X}}_{2}}\frac{1}{n}\sum_{k\in[n]}\mathds{1}{\left\\{{\underline{x}}^{1}_{i_{1}}(k)=x^{1}_{1},{\underline{x}}^{1}_{i_{2}}(k)=x^{1}_{2},{\underline{x}}^{2}_{j}(k)=x^{2}\right\\}}Q_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}}(x^{1}_{1},x^{1}_{2},x^{2})$ $\displaystyle=$ $\displaystyle M_{1}^{\prime 
2}\sum_{(x^{1}_{1},x^{1}_{2},x^{2})\in{\mathcal{X}}_{1}^{2}\times{\mathcal{X}}_{2}}\frac{1}{n}\sum_{k\in[n]}P_{1}^{(k)}(x^{1}_{1})P_{1}^{(k)}(x^{1}_{2})P_{2}^{(k)}(x^{2})Q_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}}(x^{1}_{1},x^{1}_{2},x^{2})$ (139) $\displaystyle=$ $\displaystyle M_{1}^{\prime 2}\left\langle\frac{1}{n}\sum_{k\in[n]}\left(P_{1}^{(k)}\right)^{\otimes 2}\otimes P_{2}^{(k)},Q_{{\mathbf{x}}^{1}_{1},{\mathbf{x}}^{1}_{2},{\mathbf{x}}^{2}}\right\rangle\geq 0.$ (140) In Equation 139, $P_{1}^{(k)}$ denotes the empirical distribution of the $k$-th column of ${\mathcal{C}}_{1}^{\prime}$ as defined in Equation 108 for $i=1$; $P_{2}^{(k)}$ is the indicator distribution $P_{2}^{(k)}(x^{2})\coloneqq\mathds{1}{\left\\{{\underline{x}}^{2}_{j}(k)=x^{2}\right\\}}$ for all $x^{2}\in{\mathcal{X}}_{2}$. Equation 140 is by duality (Theorem 18). Equations 138 and 140 jointly yield $\displaystyle M_{1}^{\prime 2}(\eta^{\prime}+\alpha^{\prime}-\varepsilon^{\prime})+M_{1}^{\prime}\geq 0,$ i.e., $\displaystyle M_{1}^{\prime}\leq$ $\displaystyle\frac{1}{\varepsilon^{\prime}-\eta^{\prime}-\alpha^{\prime}}.$ ###### Remark 18. The marginal cases (Items 2 and 3) of Theorem 20 proved in this section do _not_ directly follow from the point-to-point results by Wang et al. [WBBJ19] in a black-box manner. Unlike in the achievability proof (see proofs of Items 2 and 3 of Lemma 22, proofs of Items 2 and 3 of Lemma 23 and proofs of Items 2 and 3 of Lemma 24), we cannot assume in a converse argument that a zero-rate codebook only contains one codeword. Indeed, a rateless code may contain subexponentially many codewords. Consequently, the adversary may leverage his knowledge of this small code and jam the communication in a potentially more malicious way than as if he was not aware of the existence of the small code (in which case the problem reduces to the point-to-point setting). 
Incorporating such strength of the adversary requires more delicate care in the converse argument, as carried out in this section. Finally, we reiterate the nontriviality of the marginal cases of MACs even given the point-to-point results. Indeed, similar issues also arise in the study of AVMACs (where the adversary is oblivious), another adversarial model that has received more attention than ours over the past years. The corner cases where exactly one of the transmitters has zero capacity were left as a gap in Ahlswede and Cai’s paper [AC99], though the point-to-point results [Ahl78, CN88b] had long been known by then. The gap was later noticed by Wiese and Boche [WB12] and recently filled by Pereg and Steinberg [PS19], more than twenty years after [AC99]. ## XVI Concluding remarks and open problems In the following remarks we reflect on the results we obtained and the techniques we leveraged in this paper, and interleave them with several promising open questions. 1. Another highly related yet different model not considered in this paper is the adversarial MAC with _average_ probability of error. As briefly discussed in Remark 1, even for _stochastic_ MACs, the capacity region exhibits different behaviours under the average error criterion than under the maximum error criterion. Therefore, we do not believe that the average error criterion behaves the same (at least under deterministic encoding) as the maximum one (which, under deterministic encoding, is equivalent to the zero error criterion) under our omniscient _adversarial_ MAC model. Characterizing capacity positivity and proving inner and outer bounds on the capacity region with _average_ probability of error are left for future research. In contrast, for point-to-point AVCs, the capacity remains the same under average probability of error (with deterministic encoding) and maximum probability of error (with stochastic encoding) [CN88b]. 2. 
For technical simplicity, this paper only handles deterministic MACs. For general (potentially stochastic) MACs, the maximum error criterion is _not_ equivalent to the zero error criterion (though the two coincide for deterministic MACs). Techniques along the lines of [CK81] are relevant for extending our results to general adversarial MACs. 3. It is possible to generalize our results on capacity positivity to $t$-user MACs with $t>2$, though the case analysis may become baroque. 4. We believe that the capacity inner bounds obtained in Lemma 24 can be improved. In particular, the expurgation method we employed is crude: we expurgated one codeword from each user’s codebook for _every pair_ of confusable pairs $(({\underline{\mathbf{x}}}^{1}_{i_{1}},{\underline{\mathbf{x}}}^{2}_{j_{1}}),({\underline{\mathbf{x}}}^{1}_{i_{2}},{\underline{\mathbf{x}}}^{2}_{j_{2}}))$. Noting that a pair of codewords $({\underline{\mathbf{x}}}^{1}_{i_{1}},{\underline{\mathbf{x}}}^{2}_{j_{1}})$ participates in $\Theta(M_{1}M_{2})$ many pairs $(({\underline{\mathbf{x}}}^{1}_{i_{1}},{\underline{\mathbf{x}}}^{2}_{j_{1}}),({\underline{\mathbf{x}}}^{1}_{i_{2}},{\underline{\mathbf{x}}}^{2}_{j_{2}}))$, we may have expurgated more codewords than necessary. We believe that a more careful expurgation strategy may lead to improved inner bounds. For example, in [Gu18], a nontrivial lower bound for $t$-user binary adder MACs (one caveat is that Gu [Gu18] was dealing with $t$-user MACs in which all transmitters use the _same_ codebook; such codes are also known as $B_{t}$ codes) was obtained by only expurgating codewords with _minimal violation_ of the zero error criterion. A naive expurgation such as ours does _not_ yield such a bound. 5. 
In classical zero-error information theory where channels under consideration are non-adversarial (or equivalently, unconstrainedly adversarial under our framework), there is a well-known $n$-letter expression for the capacity of a general DMC with zero error. The expression involves the independence number of the $n$-fold strong product of the confusability graph associated to the channel. Similarly, the non-stochastic information theory framework initiated by Nair [Nai11, Nai13] also provides multi-letter expressions in terms of non-stochastic information measures. In our opinion, the availability of such formulas heavily relies on the unconstrainedness of the channel. That is, viewed as an adversarial channel, the noise sequence ${\underline{s}}$ can take any value in ${\mathcal{S}}^{n}$. Consequently, “good codes tensorize” in the sense that if ${\mathcal{C}}\subseteq{\mathcal{X}}^{n}$ attains zero error then ${\mathcal{C}}\times{\mathcal{C}}\subseteq{\mathcal{X}}^{2n}$ also attains zero error (here we think of the tensor product ${\mathcal{C}}\times{\mathcal{C}}$ as the set of concatenated codewords of length $2n$ with both length-$n$ components from ${\mathcal{C}}$). Unfortunately, such a tensorization property is not true for channels with state constraints. It can easily be seen that the adversary can allocate his power on the long codeword in a nonuniform manner so as to confuse the decoder. Codes for the adversarial bitflip channel are a concrete counterexample. (Consider a bitflip channel which can arbitrarily flip a $p$ fraction of bits in the transmitted sequence. Let ${\mathcal{C}}\subseteq\\{0,1\\}^{n}$ be a good code for this channel; that is, the minimum distance of ${\mathcal{C}}$ is at least $2np$. Then ${\mathcal{C}}\times{\mathcal{C}}$ still has distance $2np$ while its length doubles. This means that it can only correct a $p/2$ fraction of errors, no longer attaining zero error for the original channel with noise level $p$.) 
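The bitflip counterexample can be run directly. The toy code below (chosen by hand purely for illustration) has minimum distance 3 at length 6; concatenating it with itself keeps the minimum distance at 3 while doubling the length, so the tolerable fraction of flips halves:

```python
import itertools

def hamming(a, b):
    # Hamming distance between two equal-length binary strings.
    return sum(x != y for x, y in zip(a, b))

def min_distance(code):
    # Minimum pairwise Hamming distance of a code.
    return min(hamming(a, b) for a, b in itertools.combinations(code, 2))

# A hand-picked code of length 6 with minimum distance 3.
C = ["000000", "111000", "000111", "111111"]
# The tensor product C x C: all concatenations, length 12.
CC = [a + b for a in C for b in C]

d, dd = min_distance(C), min_distance(CC)
assert d == 3 and dd == 3          # same absolute distance...
assert dd / 12 == (d / 6) / 2      # ...but half the relative distance
```

Since zero-error decoding against a $p$-fraction of flips needs relative distance greater than $2p$, the concatenated code no longer works at the original noise level, illustrating why tensorization fails under state constraints.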
The possibility of obtaining tight $n$-letter expressions for the capacity of omniscient adversarial channels using our framework is left for future investigations. 6. Recall that our main theorem asserts that for the sake of capacity positivity, it suffices to consider only distributions corresponding to mixtures of i.i.d. random variables. Achievability-wise, one can achieve positive rates, whenever possible, by sampling random codes using mixtures of product self-couplings, i.e., “good” distributions as per Definition 15. Conversely, if one cannot achieve positive rates using good distributions, then one cannot achieve them using _any other distributions_. In the above sense, the set of good distributions we introduced plays a fundamental role in understanding capacity thresholds. This raises the natural question of whether there exist scenarios where correlated distributions help enlarge the region of positive rates and are hence also fundamentally “good”. One feasible way of physically instantiating correlation between input distributions is to allow _cooperation_. There is a recent line of works on _oblivious_ adversarial MACs (i.e., the classical AVMAC model) with cooperation [WBBJ11, WB12, BS16, HS17]. That is, two encoders are allowed to communicate through a rate-limited channel (note that if the channel between the two encoders is rate-unbounded, then the MAC problem reduces to a point-to-point problem). It is an interesting problem to examine the behaviour of MACs with cooperation under the _omniscient_ model. 7. It is an intriguing question to extend our results to list decoding with constant list sizes. The list decoding problem for both (oblivious) AVCs [Hug97, SG12, BSP18, HK19, ZJB20] and AVMACs [BS16, Nit13, Cai16, Zha20] is well-studied. There are also papers on combinatorial list decoding for special MACs [DPSV19, Shc16], not to mention a huge body of work on list decoding for bitflip channels. 
However, zero-error list decoding for _general omniscient_ adversarial channels remained relatively uncharted until recently [ZBJ20]. One of the major technical challenges for MACs that is absent in the point-to-point case has to do with list configurations. A list for a MAC can be represented by a bipartite graph [Cai16, Zha20]. For a target list size $L\in{\mathbb{Z}}_{\geq 2}$, the bipartite graph with $L$ edges corresponding to an $L$-list may have different “shapes”. Such complications call for delicate analysis. 8. It is plausible that our framework, built upon the prior work [WBBJ19], is suitable for tackling the capacity threshold problem of other adversarial multiuser channels, e.g., broadcast channels, interference channels, relay channels, etc. We leave this for further exploration. The non-adversarial/unconstrained version of these problems has been considered by Devroye [Dev16]. 9. Motivated by the situation where the fundamental limit of oblivious MACs is well-understood [PS19] while that of the omniscient counterpart is out of reach of the current techniques, it is tempting to study an intermediate model which interpolates between the oblivious and the omniscient models. One model of this kind, known as _myopic_ channels, was initiated by Sarwate [Sar10] and advanced in a sequence of follow-up works [DJL15, BDJ+20, ZVJS18]. Despite the progress, even the capacity threshold of general point-to-point myopic channels is unknown. In the case of MACs, one natural definition of the myopic variant could be that the adversary gets to observe a noisy version of the transmitted sequence pair through a _stochastic_ (non-adversarial) MAC. Such a model, as far as we know, remains unexplored. 10. Strictly speaking, both our achievability and converse proofs rely on a _strict_ separation between the set of good distributions and the confusability set. 
Specifically, we have to assume that the good set minus the confusability set has nonempty interior in the achievability proof; we have to assume that the good set is a proper subset of the confusability set in the converse proof. The case where the good set _kisses_ the confusability set remains unsolved. Such boundary cases are solved for some special channels, including the (point-to-point) bitflip channel (see, e.g., [GRS12, Theorem 4.4.1]). Similar subtleties also arise in the oblivious AVC/AVMAC setting, where the boundary cases are in general open but are solved when the optimal jamming strategy is deterministic (which is the case, in particular, if the channel is deterministic) [CN88b, PS19]. In all the above solved cases, the capacity is zero at the boundary. Inspired by these results, we conjecture that the capacity of our omniscient adversarial MACs is also zero in the boundary case. That is, our converse can be (conjecturally) strengthened. 11. Our proof heavily relies on the assumption of finite alphabets. It is unclear how to extend our proof to the case where the alphabet sizes grow with $n$. In fact, we believe that the behaviour of the capacity (region) is significantly different in the large-alphabet regime. Indeed, for bitflip channels, there are algebraic constructions (notably the Reed–Solomon codes) attaining the capacity upper bound (the Singleton bound). In other words, unlike in the small-alphabet case, the first-order asymptotics of bitflip channels are known as long as the alphabet sizes are sufficiently large (in particular, at least $n$ suffices). It remains an intriguing question to explore the behaviour of omniscient adversarial MACs in the large-alphabet regime. 12. Our converse results (Theorem 20) give upper bounds on the size of codes when the channel does not admit positive rates.
For instance, if the set of good distributions is “$\varepsilon$-contained” (as per Equation 87) in the confusability set, then our proof gives $\max\left\\{\left|{\mathcal{C}}_{1}\right|,\left|{\mathcal{C}}_{2}\right|\right\\}\leq f(1/\varepsilon)$, which is independent of $n$. However, the function $f(\cdot)$ involves Ramsey numbers and is therefore enormous. We do not expect this bound to have an optimal dependence on $1/\varepsilon$. This type of question regarding the size of codes above the Plotkin bound was previously studied only for special channels. For instance, for the (point-to-point) bitflip channels with noise level $p$, the optimal dependence is known to be $\Theta(1/\varepsilon)$ [Lev61], where $\varepsilon=p-1/4$ is the gap between the noise level and the Plotkin point. Optimal bounds are also known for list decoding over bitflip channels with odd list sizes [ABP18]. (In [ABP18], the list size was parameterized by $L-1$ and optimal bounds were only shown for _even_ $L$, i.e., _odd_ list sizes.) We are not aware of any result on codes above the Plotkin bound for adversarial MACs. ## XVII Acknowledgement We thank Amitalok J. Budkuley and Sidharth Jaggi for many helpful discussions at the early stage of this work. We also thank Nir Ailon, Qi Cao and Chandra Nair for discussions on a related problem regarding zero-error binary adder MACs. This project has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 682203-ERC-[Inf-Speed-Tradeoff]. ## Appendix A Table of notation Frequently used notation is listed in the following table (Table IV). TABLE IV: Table of frequently used notation.
Notation | Meaning | Definition ---|---|--- $\mathrm{asymm}_{1}(\cdot),\mathrm{asymm}_{2}(\cdot),\mathrm{asymm}_{1,2}(\cdot),\mathrm{asymm}(\cdot)$ | Asymmetry of a joint distribution | Definition 19 $({\mathcal{C}}_{1},{\mathcal{C}}_{2})\subseteq{\mathcal{X}}_{1}^{n}\times{\mathcal{X}}_{2}^{n}$ | Code pair | Definition 5 $\mathrm{co}\text{-}{\mathcal{G}}_{1}\left(P_{1},P_{2}\right),\mathrm{co}\text{-}{\mathcal{G}}_{2}\left(P_{1},P_{2}\right),\mathrm{co}\text{-}{\mathcal{G}}_{1,2}\left(P_{1},P_{2}\right)$ | Sets of co-good tensors with marginals $(P_{1},P_{2})$ | Definition 16 $\operatorname{Dec}\colon{\mathcal{Y}}^{n}\to[M_{1}]\times[M_{2}]$ | Decoder of the receiver | Definition 5 $\operatorname{Enc}_{1}\colon[M_{1}]\to{\mathcal{X}}_{1}^{n},\operatorname{Enc}_{2}\colon[M_{2}]\to{\mathcal{X}}_{2}^{n}$ | Encoders of the transmitters | Definition 5 ${\mathcal{G}}_{{1}}\left(P_{1},P_{2}\right),{\mathcal{G}}_{{2}}\left(P_{1},P_{2}\right),{\mathcal{G}}_{{1,2}}\left(P_{1},P_{2}\right)$ | Sets of good distributions with marginals $(P_{1},P_{2})$ | Definition 15 ${\mathcal{G}}\left(P_{1},P_{2}\right)$ | Set of simultaneously good distributions with marginals $(P_{1},P_{2})$ | Definition 15 ${\mathcal{J}}_{1}\left(P_{1},P_{2}\right),{\mathcal{J}}_{2}\left(P_{1},P_{2}\right),{\mathcal{J}}_{1,2}\left(P_{1},P_{2}\right)$ | Sets of self-couplings with marginals $(P_{1},P_{2})$ | Definition 10 $\operatorname{Jam}\colon{\mathcal{X}}_{1}^{n}\times{\mathcal{X}}_{2}^{n}\to{\mathcal{S}}^{n}$ | Jamming function of the adversary | Definition 6 ${\mathcal{K}}_{{1}}\left(P_{1},P_{2}\right),{\mathcal{K}}_{{2}}\left(P_{1},P_{2}\right),{\mathcal{K}}_{{1,2}}\left(P_{1},P_{2}\right)$ | Confusability sets with marginals $(P_{1},P_{2})$ | Definition 11 $\mathsf{MAC}_{2}=\left({\mathcal{X}}_{1},{\mathcal{X}}_{2},{\mathcal{S}},{\mathcal{Y}},\Gamma_{1},\Gamma_{2},\Lambda,W_{{\mathbf{y}}|{\mathbf{x}},{\mathbf{s}}}\right)$ | Omniscient adversarial MAC | Definition 4 
$(m^{1},m^{2})\in[M_{1}]\times[M_{2}]$ | Messages of the transmitters | Definition 4 $M_{1}=|{\mathcal{C}}_{1}|,M_{2}=|{\mathcal{C}}_{2}|$ | Sizes of codebooks | Definition 5 $\left[P_{{\mathbf{x}},{\mathbf{y}}}\right]_{{\mathbf{x}}}\in\Delta({\mathcal{X}})$ | Marginal distribution of $P_{{\mathbf{x}},{\mathbf{y}}}\in\Delta({\mathcal{X}}\times{\mathcal{Y}})$ on the variable ${\mathbf{x}}$ | Section V $(R_{1},R_{2})$ | Rate pair | Definition 5 ${\underline{s}}\in{\mathcal{S}}^{n}$ | Jamming sequence of the adversary | Definition 4 ${\mathcal{S}}$ | Alphabet of the adversary | Definition 4 ${\mathcal{S}}_{1}\left(P_{1},P_{2}\right),{\mathcal{S}}_{2}\left(P_{1},P_{2}\right),{\mathcal{S}}_{1,2}\left(P_{1},P_{2}\right)$ | Sets of symmetric distributions with marginals $(P_{1},P_{2})$ | Definition 14 $\mathsf{Sym}_{1}(P_{1},P_{2}),\mathsf{Sym}_{2}(P_{1},P_{2}),\mathsf{Sym}_{1,2}(P_{1},P_{2})$ | Sets of symmetric tensors with marginals $(P_{1},P_{2})$ | Definition 13 $W_{{\mathbf{y}}|{\mathbf{x}}^{1},{\mathbf{x}}^{2},{\mathbf{s}}}$ | Channel transition law | Definition 4 $({\underline{x}}^{1},{\underline{x}}^{2})\in{\mathcal{X}}_{1}^{n}\times{\mathcal{X}}_{2}^{n}$ | Input sequences from the transmitters | Definition 4 ${\mathcal{X}}_{1},{\mathcal{X}}_{2}$ | Alphabets of the transmitters | Definition 4 ${\underline{y}}\in{\mathcal{Y}}^{n}$ | Output sequence to the receiver | Definition 4 ${\mathcal{Y}}$ | Alphabet of the receiver | Definition 4 $(\Gamma_{1},\Gamma_{2})\subseteq\Delta({\mathcal{X}}_{1})\times\Delta({\mathcal{X}}_{2})$ | Input constraints | Definition 4 $\Delta({\mathcal{X}})$ | Probability simplex on ${\mathcal{X}}$ | Section V $\Delta_{1}(P_{1},P_{2}),\Delta_{2}(P_{1},P_{2}),\Delta_{1,2}(P_{1},P_{2})$ | Sets of generalized self-couplings with marginals $(P_{1},P_{2})$ | Definition 12 $\Delta^{(n)}({\mathcal{X}})$ | Sets of types of ${\mathcal{X}}^{n}$-valued vectors | Definition 3 $\Lambda\subseteq\Delta({\mathcal{S}})$ | State constraints | Definition 4 
$\nu(P_{\mathbf{x}},n)$ | – | Equation 1 $\tau_{{\underline{x}}}\in\Delta^{(n)}({\mathcal{X}})$ | Type of ${\underline{x}}\in{\mathcal{X}}^{n}$ | Definition 3 ## Appendix B Proof of Plotkin bound for binary noisy $\operatorname{\mathsf{XOR}}$ MACs (Theorem 11) ###### Proof of Theorem 11. Suppose $p=1/4+\varepsilon$ for some constant $\varepsilon>0$. Let $\left({\mathcal{C}}_{1},{\mathcal{C}}_{2}\right)$ be a code pair which attains zero error on the binary noisy $\operatorname{\mathsf{XOR}}$ MAC. Let $M_{1}\coloneqq\left|{\mathcal{C}}_{1}\right|,M_{2}\coloneqq\left|{\mathcal{C}}_{2}\right|$. We will show that $M_{1}M_{2}<\nicefrac{{1}}{{4\varepsilon}}+1$. To this end, inspired by the classical Plotkin bound in coding theory, we estimate the following quantity: $\displaystyle\sum_{\left({\underline{x}}^{1}_{1},{\underline{x}}^{1}_{2},{\underline{x}}^{2}_{1},{\underline{x}}^{2}_{2}\right)\in{\mathcal{C}}_{1}^{2}\times{\mathcal{C}}_{2}^{2}}d_{\mathrm{H}}\left({\underline{x}}^{1}_{1}\oplus{\underline{x}}^{2}_{1},{\underline{x}}^{1}_{2}\oplus{\underline{x}}^{2}_{2}\right).$ (141) On the one hand, by the goodness of $({\mathcal{C}}_{1},{\mathcal{C}}_{2})$, as long as $({\underline{x}}^{1}_{1},{\underline{x}}^{2}_{1})\neq({\underline{x}}^{1}_{2},{\underline{x}}^{2}_{2})$, we have $d_{\mathrm{H}}\left({\underline{x}}^{1}_{1}\oplus{\underline{x}}^{2}_{1},{\underline{x}}^{1}_{2}\oplus{\underline{x}}^{2}_{2}\right)>2np$. For $({\underline{x}}^{1}_{1},{\underline{x}}^{2}_{1})=({\underline{x}}^{1}_{2},{\underline{x}}^{2}_{2})$, the summand is clearly zero. Therefore, Equation 141 is larger than $(M_{1}^{2}M_{2}^{2}-M_{1}M_{2})\cdot 2np$. On the other hand, we can expand Equation 141 as follows.
$\displaystyle\sum_{\left({\underline{x}}^{1}_{1},{\underline{x}}^{1}_{2},{\underline{x}}^{2}_{1},{\underline{x}}^{2}_{2}\right)\in{\mathcal{C}}_{1}^{2}\times{\mathcal{C}}_{2}^{2}}d_{\mathrm{H}}\left({\underline{x}}^{1}_{1}\oplus{\underline{x}}^{2}_{1},{\underline{x}}^{1}_{2}\oplus{\underline{x}}^{2}_{2}\right)$ $\displaystyle=$ $\displaystyle\sum_{\left({\underline{x}}^{1}_{1},{\underline{x}}^{1}_{2},{\underline{x}}^{2}_{1},{\underline{x}}^{2}_{2}\right)\in{\mathcal{C}}_{1}^{2}\times{\mathcal{C}}_{2}^{2}}wt_{\mathrm{H}}\left({\underline{x}}^{1}_{1}\oplus{\underline{x}}^{2}_{1}\oplus{\underline{x}}^{1}_{2}\oplus{\underline{x}}^{2}_{2}\right)$ $\displaystyle=$ $\displaystyle\sum_{\left({\underline{x}}^{1}_{1},{\underline{x}}^{1}_{2},{\underline{x}}^{2}_{1},{\underline{x}}^{2}_{2}\right)\in{\mathcal{C}}_{1}^{2}\times{\mathcal{C}}_{2}^{2}}\sum_{(a_{1},b_{1},a_{2},b_{2})\in{\mathcal{M}}}\sum_{j=1}^{n}\mathds{1}{\left\\{{\underline{x}}^{1}_{1}(j)=a_{1}\right\\}}\mathds{1}{\left\\{{\underline{x}}^{2}_{1}(j)=b_{1}\right\\}}\mathds{1}{\left\\{{\underline{x}}^{1}_{2}(j)=a_{2}\right\\}}\mathds{1}{\left\\{{\underline{x}}^{2}_{2}(j)=b_{2}\right\\}}$ (142) $\displaystyle=$ $\displaystyle\sum_{j=1}^{n}\sum_{(a_{1},b_{1},a_{2},b_{2})\in{\mathcal{M}}}\left(\sum_{{\underline{x}}^{1}_{1}\in{\mathcal{C}}_{1}}\mathds{1}{\left\\{{\underline{x}}^{1}_{1}(j)=a_{1}\right\\}}\right)\left(\sum_{{\underline{x}}^{2}_{1}\in{\mathcal{C}}_{2}}\mathds{1}{\left\\{{\underline{x}}^{2}_{1}(j)=b_{1}\right\\}}\right)\left(\sum_{{\underline{x}}^{1}_{2}\in{\mathcal{C}}_{1}}\mathds{1}{\left\\{{\underline{x}}^{1}_{2}(j)=a_{2}\right\\}}\right)\left(\sum_{{\underline{x}}^{2}_{2}\in{\mathcal{C}}_{2}}\mathds{1}{\left\\{{\underline{x}}^{2}_{2}(j)=b_{2}\right\\}}\right)$ $\displaystyle=$ $\displaystyle\sum_{j=1}^{n}\big{(}(M_{1}-S_{j})(M_{2}-T_{j})(M_{1}-S_{j})T_{j}+(M_{1}-S_{j})(M_{2}-T_{j})S_{j}(M_{2}-T_{j})$ 
$\displaystyle+(M_{1}-S_{j})T_{j}(M_{1}-S_{j})(M_{2}-T_{j})+S_{j}(M_{2}-T_{j})(M_{1}-S_{j})(M_{2}-T_{j})$ $\displaystyle+S_{j}T_{j}S_{j}(M_{2}-T_{j})+S_{j}T_{j}(M_{1}-S_{j})T_{j}+S_{j}(M_{2}-T_{j})S_{j}T_{j}+(M_{1}-S_{j})T_{j}S_{j}T_{j}\big{)}$ (143) $\displaystyle=$ $\displaystyle M_{1}^{2}M_{2}^{2}\sum_{j=1}^{n}\left(\overline{\alpha}_{j}\overline{\beta}_{j}\overline{\alpha}_{j}\beta_{j}+\overline{\alpha}_{j}\overline{\beta}_{j}\alpha_{j}\overline{\beta}_{j}+\overline{\alpha}_{j}\beta_{j}\overline{\alpha}_{j}\overline{\beta}_{j}+\alpha_{j}\overline{\beta}_{j}\overline{\alpha}_{j}\overline{\beta}_{j}+\alpha_{j}\beta_{j}\alpha_{j}\overline{\beta}_{j}+\alpha_{j}\beta_{j}\overline{\alpha}_{j}\beta_{j}+\alpha_{j}\overline{\beta}_{j}\alpha_{j}\beta_{j}+\overline{\alpha}_{j}\beta_{j}\alpha_{j}\beta_{j}\right)$ (144) In Equation 142, we use ${\mathcal{M}}\coloneqq\left\\{0001,0010,0100,1000,1110,1101,1011,0111\right\\}$ to denote the set of length-4 binary sequences with odd parity. In Equation 143, we define $S_{j}\coloneqq\sum_{{\underline{x}}^{1}\in{\mathcal{C}}_{1}}\mathds{1}{\left\\{{\underline{x}}^{1}(j)=1\right\\}}$ and $T_{j}\coloneqq\sum_{{\underline{x}}^{2}\in{\mathcal{C}}_{2}}\mathds{1}{\left\\{{\underline{x}}^{2}(j)=1\right\\}}$ to be the number of 1’s in the $j$-th column of ${\mathcal{C}}_{1}\in\\{0,1\\}^{M_{1}\times n}$ and ${\mathcal{C}}_{2}\in\\{0,1\\}^{M_{2}\times n}$ respectively. In Equation 144, we further define $\alpha_{j}\coloneqq S_{j}/M_{1}$ and $\beta_{j}\coloneqq T_{j}/M_{2}$ to be the density of 1’s in the $j$-th column of ${\mathcal{C}}_{1}$ and ${\mathcal{C}}_{2}$ respectively; we also use the notation $\overline{a}\coloneqq 1-a$ for $a\in[0,1]$. For any $j\in[n]$, since $\alpha_{j},\beta_{j}\in[0,1]$ the summand of Equation 144 is at most $1/2$. 
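As a quick numerical sanity check of this claim (illustrative only, not part of the formal proof), the degree-4 summand of Equation 144 can be evaluated on a fine grid over $[0,1]^{2}$:

```python
def summand(a, b):
    """Degree-4 polynomial from Equation 144; ac and bc stand for 1-a and 1-b."""
    ac, bc = 1.0 - a, 1.0 - b
    return (ac * bc * ac * b + ac * bc * a * bc + ac * b * ac * bc
            + a * bc * ac * bc + a * b * a * bc + a * b * ac * b
            + a * bc * a * b + ac * b * a * b)

# Grid search over [0, 1]^2; the maximum should not exceed 1/2.
N = 200
grid_max = max(summand(i / N, j / N)
               for i in range(N + 1) for j in range(N + 1))
assert grid_max <= 0.5 + 1e-12
print(round(grid_max, 6))  # prints 0.5, attained e.g. at (alpha, beta) = (1/4, 1/2)
```

Algebraically, the summand simplifies to $2\beta\overline{\beta}\left(\alpha^{2}+\overline{\alpha}^{2}\right)+2\alpha\overline{\alpha}\left(\beta^{2}+\overline{\beta}^{2}\right)=2a+2b-8ab$ with $a\coloneqq\alpha\overline{\alpha}\leq 1/4$ and $b\coloneqq\beta\overline{\beta}\leq 1/4$; since this expression is linear in each of $a$ and $b$ with nonnegative coefficients on that range, its maximum over the range is $1/2$.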
This can be verified by solving the following simple constrained (degree-4) polynomial optimization problem: $\displaystyle\max_{(\alpha,\beta)\in[0,1]^{2}}{\overline{\alpha}\overline{\beta}\overline{\alpha}\beta+\overline{\alpha}\overline{\beta}\alpha\overline{\beta}+\overline{\alpha}\beta\overline{\alpha}\overline{\beta}+\alpha\overline{\beta}\overline{\alpha}\overline{\beta}+\alpha\beta\alpha\overline{\beta}+\alpha\beta\overline{\alpha}\beta+\alpha\overline{\beta}\alpha\beta+\overline{\alpha}\beta\alpha\beta}.$ The maximum $1/2$ is attained, e.g., at $\alpha=1/4,\beta=1/2$. Therefore, Equation 141 is at most $M_{1}^{2}M_{2}^{2}n/2$. Putting the lower and upper bounds on Equation 141 together, we have $\begin{array}[]{rrl}&\left(M_{1}^{2}M_{2}^{2}-{M_{1}M_{2}}\right)\cdot 2np<&\frac{M_{1}^{2}M_{2}^{2}n}{2}\\\ \iff&\left(1-\frac{1}{M_{1}M_{2}}\right)2\left(\frac{1}{4}+\varepsilon\right)<&\frac{1}{2}\\\ \iff&M_{1}M_{2}<&\frac{1}{4\varepsilon}+1,\end{array}$ which finishes the proof of Theorem 11. ∎ ## References * [ABP18] Noga Alon, Boris Bukh, and Yury Polyanskiy. List-decodable zero-rate codes. IEEE Transactions on Information Theory, 65(3):1657–1667, 2018. * [AC99] Rudolf Ahlswede and Ning Cai. Arbitrarily varying multiple-access channels. I. Ericson’s symmetrizability is adequate, Gubner’s conjecture is true. IEEE Transactions on Information Theory, 45(2):742–749, 1999. * [Ahl73] Rudolf Ahlswede. Multi-way communication channels. In Second International Symposium on Information Theory: Tsahkadsor, Armenia, USSR, Sept. 2-8, 1971, 1973. * [Ahl74] Rudolf Ahlswede. The capacity region of a channel with two senders and two receivers. The Annals of Probability, 2(5):805–814, 1974. * [Ahl78] R. Ahlswede. Elimination of correlation in random codes for arbitrarily varying channels. Z. Wahrscheinlichkeitstheorie verw. Gebiete, 44:181–193, 1978. * [AKKN17] Per Austrin, Petteri Kaski, Mikko Koivisto, and Jesper Nederlof.
Sharper upper bounds for unbalanced uniquely decodable code pairs. IEEE Transactions on Information Theory, 64(2):1368–1373, 2017. * [APBD18] Meysam Asadi, Kenneth Palacio-Baus, and Natasha Devroye. A relaying graph and special strong product for zero-error problems in primitive relay channels. In 2018 IEEE International Symposium on Information Theory (ISIT), pages 281–285. IEEE, 2018. * [BDJ+20] Amitalok J Budkuley, Bikash Kumar Dey, Sidharth Jaggi, Michael Langberg, Anand D Sarwate, and Carol Wang. Symmetrizability for myopic AVCs. In 2020 IEEE International Symposium on Information Theory (ISIT), pages 2103–2107. IEEE, 2020. * [BLA76] L. W. Beineke and A. J. Schwenk. On a bipartite form of the Ramsey problem. 1976. * [BS16] Holger Boche and Rafael F Schaefer. Arbitrarily varying multiple access channels with conferencing encoders: List decoding and finite coordination resources. Advances in Mathematics of Communications, 10(2):333–354, 2016. * [BSP18] Holger Boche, Rafael F Schaefer, and H Vincent Poor. Analytical properties of Shannon’s capacity of arbitrarily varying channels under list decoding: Super-additivity and discontinuity behavior. Problems of Information Transmission, 54(3):199–228, 2018. * [Cai16] Ning Cai. List decoding for arbitrarily varying multiple access channel revisited: List configuration and symmetrizability. IEEE Transactions on Information Theory, 62(11):6095–6110, 2016. * [CD15] Yanying Chen and Natasha Devroye. On the optimality of colour-and-forward relaying for a class of zero-error primitive relay channels. In 2015 IEEE International Symposium on Information Theory (ISIT), pages 1272–1276. IEEE, 2015. * [CD17] Yanying Chen and Natasha Devroye. Zero-error relaying for primitive relay channels. IEEE Transactions on Information Theory, 63(12):7708–7715, 2017. * [CK81] I. Csiszár and J. Körner. On the capacity of the arbitrarily varying channel for maximum probability of error. Z. Wahrscheinlichkeitstheorie verw.
Gebiete, 57:87–101, 1981. * [CK11] Imre Csiszár and János Körner. Information theory: coding theorems for discrete memoryless systems. Cambridge University Press, 2011. * [CN88a] Imre Csiszár and Prakash Narayan. Arbitrarily varying channels with constrained inputs and states. IEEE Trans. Inf. Theory, 34:27–34, 1988. * [CN88b] Imre Csiszár and Prakash Narayan. The Capacity of the Arbitrarily Varying Channel Revisited : Positivity, Constraints. IEEE Trans. Inf. Theory, 34:181–193, 1988. * [CN91] Imre Csiszár and Prakash Narayan. Capacity of the gaussian arbitrarily varying channel. IEEE Transactions on Information Theory, 37(1):18–26, 1991. * [Cov75] Thomas M Cover. Some advances in broadcast channels. In Advances in communication systems, volume 4, pages 229–260. Elsevier, 1975. * [CSD14] Yanying Chen, Sara Shahi, and Natasha Devroye. Colour-and-forward: relaying “what the destination needs” in the zero-error primitive relay channel. In 2014 52nd Annual Allerton Conference on Communication, Control, and Computing (Allerton), pages 987–995. IEEE, 2014. * [Csi98] Imre Csiszár. The method of types [information theory]. IEEE Transactions on Information Theory, 44(6):2505–2523, 1998\. * [Dev16] Natasha Devroye. When is the zero-error capacity positive in the relay, multiple-access, broadcast and interference channels? In 2016 54th Annual Allerton Conference on Communication, Control, and Computing (Allerton), pages 672–678. IEEE, 2016. * [DJL15] Bikash Kumar Dey, Sidharth Jaggi, and Michael Langberg. Sufficiently myopic adversaries are blind. In Information Theory (ISIT), 2015 IEEE International Symposium on, pages 1164–1168, 2015. * [DPSV19] Arkadii D’yachkov, Nikita Polyanskii, Vladislav Shchukin, and Ilya Vorobyev. Separable codes for the symmetric multiple-access channel. IEEE Transactions on Information Theory, 65(6):3738–3750, 2019\. * [Due78] G Dueck. Maximal error capacity regions are smaller than average error capacity regions for multi-user channels. 1978\. 
* [FN20] Farhad Farokhi and Girish Nair. Non-stochastic private function evaluation. arXiv preprint arXiv:2010.09968, 2020. * [GGLR] L Gyorfi, Sándor Gyori, Bálint Laczay, and M Ruszinko. Lectures on multiple access channels. Web: http://www.szit.bme.hu/gyori/AFOSR, 5. * [GH95] John A Gubner and Brian L Hughes. Nonconvexity of the capacity region of the multiple-access arbitrarily varying channel subject to constraints. IEEE Transactions on Information Theory, 41(1):3–13, 1995. * [GRS12] Venkatesan Guruswami, Atri Rudra, and Madhu Sudan. Essential coding theory. Draft available at http://www.cse.buffalo.edu/~atri/courses/coding-theory/book, 2012. * [GS19] Yujie Gu and Ofer Shayevitz. On the non-adaptive zero-error capacity of the discrete memoryless two-way channel. In 2019 IEEE International Symposium on Information Theory (ISIT), pages 3107–3111. IEEE, 2019. * [Gu18] Yuzhou Gu. Zero-error communication over adder MAC. arXiv preprint arXiv:1809.07364, 2018. * [HK19] Fatemeh Hosseinigoki and Oliver Kosut. List-decoding capacity of the Gaussian arbitrarily-varying channel. Entropy, 21(6):575, 2019. * [HS17] Wasim Huleihel and Yossef Steinberg. Channels with cooperation links that may be absent. IEEE Transactions on Information Theory, 63(9):5886–5906, 2017. * [Hug97] Brian L. Hughes. The smallest list for the arbitrarily varying channel. IEEE Transactions on Information Theory, 43(3):803–815, 1997. * [Kol56] Andrey Nikolaevich Kolmogorov. Certain asymptotic characteristics of completely bounded metric spaces. Doklady Akademii Nauk SSSR, 108(3):385–388, 1956. * [Kom90] János Komlós. A strange pigeon-hole principle. Order, 7(2):107–113, 1990. * [Kos20] Oliver Kosut. A second-order converse bound for the multiple-access channel via wringing dependence. arXiv preprint arXiv:2007.15664, 2020. * [Lev61] VI Levenshtein. Application of Hadamard matrices to a problem in coding. Problems of Cybernetica, 5:123–136, 1961. * [LF17] Taehyung J Lim and Massimo Franceschetti.
Information without rolling dice. IEEE Transactions on Information Theory, 63(3):1349–1363, 2017. * [Lia72] H. H. J. Liao. Multiple Access Channels. Ph.D. thesis, University of Hawaii, Honolulu, 1972. * [Lov79] László Lovász. On the Shannon capacity of a graph. IEEE Transactions on Information Theory, 25(1):1–7, 1979. * [Nai11] Girish N Nair. A non-stochastic information theory for communication and state estimation over erroneous channels. In 2011 9th IEEE International Conference on Control and Automation (ICCA), pages 159–164. IEEE, 2011. * [Nai12] Girish N Nair. A nonstochastic information theory for feedback. In 2012 IEEE 51st IEEE Conference on Decision and Control (CDC), pages 1343–1348. IEEE, 2012. * [Nai13] Girish N Nair. A nonstochastic information theory for communication and state estimation. IEEE Transactions on Automatic Control, 58(6):1497–1510, 2013. * [Nit13] Sirin Nitinawarat. On the deterministic code capacity region of an arbitrarily varying multiple-access channel under list decoding. IEEE Transactions on Information Theory, 59(5):2683–2693, 2013. * [NY20] Chandra Nair and Mehdi Yazdanpanah. On the AND-OR interference channel and the sandglass conjecture. In 2020 IEEE International Symposium on Information Theory (ISIT), pages 1540–1545. IEEE, 2020. * [PPV10] Yury Polyanskiy, H Vincent Poor, and Sergio Verdú. Channel coding rate in the finite blocklength regime. IEEE Transactions on Information Theory, 56(5):2307–2359, 2010. * [PS19] Uzi Pereg and Yossef Steinberg. The capacity region of the arbitrarily varying MAC: with and without constraints. In 2019 IEEE International Symposium on Information Theory (ISIT), pages 445–449. IEEE, 2019. * [PW14] Yury Polyanskiy and Yihong Wu. Lecture notes on information theory. Lecture Notes for ECE563 (UIUC) and, 6(2012-2016):7, 2014. * [RF19] Anshuka Rangi and Massimo Franceschetti. Towards a non-stochastic information theory. In 2019 IEEE International Symposium on Information Theory (ISIT), pages 997–1001.
IEEE, 2019. * [Sar10] Anand Sarwate. Coding against Myopic Adversaries. In Proc. IEEE Information Theory Workshop, Dublin, Ireland, 2010\. * [SFN18] Amir Saberi, Farhad Farokhi, and Girish Nair. Estimation and control over a nonstochastic binary erasure channel. IFAC-PapersOnLine, 51(23):265–270, 2018. * [SFN19] Amir Saberi, Farhad Farokhi, and Girish N Nair. State estimation over worst-case erasure and symmetric channels with memory. arXiv preprint arXiv:1902.00726, 2019. * [SFN20a] Amir Saberi, Farhad Farokhi, and Girish N Nair. Bounded state estimation over finite-state channels: Relating topological entropy and zero-error capacity. arXiv preprint arXiv:2003.11954, 2020. * [SFN20b] Amir Saberi, Farhad Farokhi, and Girish N Nair. An explicit formula for the zero-error feedback capacity of a class of finite-state additive noise channels. arXiv preprint arXiv:2006.00892, 2020. * [SG12] Anand D Sarwate and Michael Gastpar. List-decoding for the arbitrarily varying channel under state constraints. IEEE Transactions on Information Theory, 58(3):1372–1384, 2012\. * [Sha48] Claude E Shannon. A mathematical theory of communication. The Bell system technical journal, 27(3):379–423, 1948. * [Sha56] Claude Shannon. The zero error capacity of a noisy channel. IRE Transactions on Information Theory, 2(3):8–19, 1956. * [Sha61] Claude E Shannon. Two-way communication channels. In Proceedings of the Fourth Berkeley Symposium on Mathematical Statistics and Probability, Volume 1: Contributions to the Theory of Statistics. The Regents of the University of California, 1961. * [Shc16] V Yu Shchukin. List decoding for a multiple access hyperchannel. Problems of Information Transmission, 52(4):329–343, 2016. * [SMiF14] Jonathan Scarlett, Alfonso Martinez, and Albert Guillén i Fàbregas. Second-order rate region of constant-composition codes for the multiple-access channel. IEEE Transactions on Information Theory, 61(1):157–172, 2014. * [SW73] David Slepian and Jack Keil Wolf. 
A coding theorem for multiple access channels with correlated sources. Bell System Technical Journal, 52(7):1037–1076, 1973. * [Tik93] VM Tikhomirov. $\varepsilon$-entropy and $\varepsilon$-capacity of sets in functional spaces. In Selected works of AN Kolmogorov, pages 86–170. Springer, 1993\. * [TK13] Vincent YF Tan and Oliver Kosut. On the dispersions of three network information theory problems. IEEE Transactions on Information Theory, 60(2):881–903, 2013. * [TT13] Marco Tomamichel and Vincent YF Tan. A tight upper bound for the third-order asymptotics for most discrete memoryless channels. IEEE Transactions on Information Theory, 59(11):7041–7051, 2013\. * [TT15] Vincent Yan Fu Tan and Marco Tomamichel. The third-order term in the normal approximation for the awgn channel. IEEE Transactions on Information Theory, 61(5):2430–2438, 2015\. * [WB12] Moritz Wiese and Holger Boche. The arbitrarily varying multiple-access channel with conferencing encoders. IEEE transactions on information theory, 59(3):1405–1416, 2012\. * [WBBJ11] Moritz Wiese, Holger Boche, Igor Bjelakovic, and Volker Jungnickel. The compound multiple access channel with partially cooperating encoders. IEEE transactions on information theory, 57(5):3045–3066, 2011\. * [WBBJ19] Xishi Wang, Amitalok J Budkuley, Andrej Bogdanov, and Sidharth Jaggi. When are large codes possible for avcs? In 2019 IEEE International Symposium on Information Theory (ISIT), pages 632–636. IEEE, 2019. * [Wik21] Wikipedia contributors. Ramsey’s theorem — Wikipedia, the free encyclopedia, 2021. [Online; accessed 10-January-2021]. * [Wyn74] Aaron Wyner. Recent results in the shannon theory. IEEE Transactions on information Theory, 20(1):2–10, 1974. * [YKE20] Recep Can Yavas, Victoria Kostina, and Michelle Effros. Gaussian multiple and random access in the finite blocklength regime. arXiv preprint arXiv:2001.03867, 2020. * [ZBJ20] Yihan Zhang, Amitalok J Budkuley, and Sidharth Jaggi. Generalized list decoding. 
In 11th Innovations in Theoretical Computer Science Conference (ITCS 2020). Schloss Dagstuhl-Leibniz-Zentrum für Informatik, 2020. * [Zha20] Yihan Zhang. List Decoding for Oblivious Arbitrarily Varying MACs: Constrained and Gaussian, 2020. * [ZJB20] Yihan Zhang, Sidharth Jaggi, and Amitalok J Budkuley. Tight list-sizes for oblivious AVCs under constraints. arXiv preprint arXiv:2009.03788, 2020. * [ZN20] Ghassen Zafzouf and Girish N Nair. Distributed state estimation with bounded errors over multiple access channels. arXiv preprint arXiv:2002.03294, 2020. * [ZNE19] Ghassen Zafzouf, Girish N Nair, and Jamie S Evans. Zero-error capacity of multiple access channels via nonstochastic information. In 2019 IEEE Information Theory Workshop (ITW), pages 1–5. IEEE, 2019. * [ZVJ20] Yihan Zhang, Shashank Vatedka, and Sidharth Jaggi. Quadratically constrained two-way adversarial channels. arXiv preprint arXiv:2001.02575, 2020. * [ZVJS18] Yihan Zhang, Shashank Vatedka, Sidharth Jaggi, and Anand D Sarwate. Quadratically constrained myopic adversarial channels. In 2018 IEEE International Symposium on Information Theory (ISIT), pages 611–615. IEEE, 2018.
Giordano Fava, Tommaso Giovannelli, Massimo Roma: Dipartimento di Ingegneria Informatica Automatica e Gestionale “A. Ruberti”, SAPIENZA Università di Roma, via Ariosto, 25 – 00185 Roma, Italy. Email: <EMAIL_ADDRESS>, <EMAIL_ADDRESS>, <EMAIL_ADDRESS>

Mauro Messedaglia: ACTOR, start-up of SAPIENZA Università di Roma, via Nizza 45, 00198 Roma, Italy. Email: <EMAIL_ADDRESS>

# Effect of different patient peak arrivals on an Emergency Department via discrete event simulation Giordano Fava Tommaso Giovannelli https://orcid.org/0000-0002-1436-5348 Mauro Messedaglia Massimo Roma https://orcid.org/0000-0002-9858-3616 ###### Abstract Emergency Department (ED) overcrowding is a well-recognized worldwide phenomenon. Its consequences range from long waiting times for patient visits and treatment up to life-threatening health conditions. The international community is devoting greater and greater efforts to analyzing this phenomenon, aiming at reducing waiting times and improving the quality of service. Within this framework, we propose a Discrete Event Simulation (DES) model to study the patient flow through a medium-size ED located in a region of Central Italy recently hit by a severe earthquake. In particular, our aim is to simulate unusual ED conditions, corresponding to critical events (like a natural disaster) which cause a sudden spike in the number of patient arrivals. The availability of detailed data concerning the ED processes enabled us to build an accurate DES model and to perform extensive scenario analyses. The model provides a valid decision support system for the ED managers, also in defining specific emergency plans to be activated in case of mass-casualty disasters.
###### Keywords: Emergency Department · Discrete Event Simulation · Patient flow · Overcrowding · Patient peak arrivals ## 1 Introduction The Emergency Department (ED) belongs to the Emergency Medical Services (EMS), which are among the most important healthcare services, considering that they directly affect people’s lives (see the recent survey paper Aringhieri.2017 for a complete review of EMS). In particular, the task of an ED is to provide healthcare services for people who need urgent medical treatment. An ED is open 365 days a year, 24 hours a day, and people with different levels of urgency arrive requiring treatment. Since the services delivered by an ED are time-critical, the main issue concerns the response time. Unfortunately, the well-known and increasing problem of overcrowding tends to lengthen waiting times, endangering the lives of critical patients. Today overcrowding is an international problem widely considered in the literature (see e.g. hoot.aronsky:08; Weiss.2004; Weiss.2006 and the references reported therein). In particular, according to hoot.aronsky:08, possible causes of overcrowding are insufficient staff, structural shortcomings, flu season, requests for nonurgent treatments, and unavailability of hospital beds. Besides treatment delays, other possible consequences of overcrowding are reduced quality of service, higher patient mortality, ambulance diversion, a growing number of patients who leave without being visited, and greater expenses for the service provider due to longer patient stays. Of course, these critical issues are greatly amplified whenever mass-casualty disasters occur. In this case, the rate of patient arrivals suddenly increases, and different tactical and strategic decisions must be adopted to ensure timely treatment.
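To give a concrete (toy) sense of why such arrival spikes are damaging, the sketch below simulates a single-server FIFO queue in plain Python. It is a deliberately minimal stand-in for a full ED simulation model, and all arrival and service times are illustrative assumptions rather than data from any real ED:

```python
def fifo_waits(arrival_times, service_time):
    """Single-server FIFO queue with deterministic service times.
    Returns the waiting time (queue time before service) of each patient."""
    waits, server_free = [], 0.0
    for t in arrival_times:
        start = max(t, server_free)  # wait if the server is still busy
        waits.append(start - t)
        server_free = start + service_time
    return waits

# Baseline: one arrival every 2.0 time units; surge: one every 0.5 units.
baseline = [2.0 * k for k in range(10)]      # arrivals at t = 0, 2, ..., 18
surge = [20.0 + 0.5 * k for k in range(20)]  # arrivals at t = 20, 20.5, ..., 29.5
waits = fifo_waits(baseline + surge, service_time=1.0)

mean = lambda xs: sum(xs) / len(xs)
print(mean(waits[:10]), mean(waits[10:]))  # prints 0.0 4.75
```

With the baseline interarrival time (2.0) the server keeps up and nobody waits; during the surge (interarrival 0.5 against a service time of 1.0) the queue builds and the waiting time of the $k$-th surge patient is $0.5k$, i.e., waits grow linearly for the whole duration of the spike. A full DES model of an ED captures the same mechanism across triage, visit and treatment stages, with stochastic times and multiple resources.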
In this paper we propose a DES-based approach for analyzing the operation of the ED of a medium-size hospital located in an Italian region where recent earthquakes have put a strain on the EDs of the area. For such an ED it is crucial to assess the impact of unusual/critical events which cause peak arrivals, in order to possibly adopt suitable contingency plans. Therefore, we study the effects of spikes in the number of patient arrivals on the ED operation. The availability of detailed data concerning the ED processes allowed us to build an accurate DES model, well reproducing the actual operating modes of the ED. After a complete input analysis, the DES model has been implemented by using the ARENA 15.1 Simulation Software ARENA; simulationwitharena, which is one of the most commonly used general-purpose DES packages. Based on flowchart modules, it enables users to build the simulation model and to perform input analysis, simulation runs and output analysis. The model has then been accurately validated in order to guarantee its reliability. Several scenario analyses have been performed, aiming at evaluating the impact of possible changes in the ED operating conditions. We focused on assessing the main Key Performance Indicators (KPIs) of the ED for different scenarios under critical conditions. In particular, we simulated unusual ED conditions, corresponding to spikes in the number of patient arrivals. These patient surges can follow different patterns, and they seriously affect the ED operation. We tested several artificial scenarios, specially created to reproduce conditions that have actually occurred. The model we propose is very helpful for ED managers, both in the daily management of resources and in defining suitable emergency plans to be adopted in case of critical emergencies. The paper is organized as follows: Section 2 reports some background material and a literature review.
In Section 3 generalities on the operating conditions of Italian EDs are briefly reported. Section 4 describes the case study of the ED we consider. In Section 5 we detail the DES model we propose, along with the input analysis and the model validation. Section 6 reports results for the "as–is" status along with extensive scenario analyses, mainly focused on peak patient arrivals. Finally, some concluding remarks are included in Section 7.

## 2 Background

In recent years, techniques from Operations Research have frequently been applied to tackle the problem of ED overcrowding. Since 2005 the _National Academy of Engineering_ and the _Institute of Medicine_ in NAS.2005 have highlighted the importance of using tools from Operations Research and Systems Engineering (statistical process control, queuing theory, mathematical modelling and simulation) in healthcare delivery, in order to improve the performance of care processes and units. Among them, _simulation_ is considered of fundamental importance for analyzing several healthcare settings (see, e.g., NAS.2005 and the several examples reported therein, and Almagooshi.2015 ). In particular, simulation models have been widely applied to emergency medical service operations (see Aboueljinane.2013 for a survey). Several papers in the simulation literature are devoted to studying the patient flow through an ED by means of _Discrete Event Simulation_ (DES) models (see e.g. Aboueljinane.2013 ; Gul.2015 ; Joshi.2016 ; Kuo.2016 ; Rado.2014 ; Wong.2016 ; Zeinali.2015 ) and _Agent Based Simulation_ (ABS) models (see e.g. Aringhieri.2018 ; Kaushal.2015 ; Liu.2015 ; Taboada.2011 ; Wang.2009 ). In particular, DES has been widely used to study the causes and effects of ED overcrowding.
We refer to the paper Paul.2010 (and to the many references reported therein) and to the more recent paper Nahhas.2017 for a review of the simulation studies available in the literature on the ED overcrowding phenomenon, and for a discussion of the effectiveness of DES-based approaches for tackling this problem. Among the many papers dealing with ED overcrowding we mention Wong.2016 , where an ED in Hong Kong is simulated in order to assess how modifying the path of the patient clinical process and the level of physician resources affects performance; Joshi.2016 , where a simulation model is used to understand how waiting times and length of stay can be shortened by varying the workload among staff members and by giving nonurgent patients the possibility to return afterwards; the many papers addressing the adoption of fast–track systems in the ED, such as Kuo.2018 and Aroua.2017 , which propose sending less urgent patients to specific queues so that they receive service early, and hence are discharged earlier, making the ED less crowded; Whitt.2017 , where an in-depth study of patient interarrival times and length of stay in an ED located in Israel is reported; and Daldoul.2018 , where a stochastic model is considered in order to reduce the average total patient waiting time in a university hospital. From the wide literature on the topic, it clearly emerges that a study based on DES provides important insights into ED overcrowding. Moreover, as is well known, a simulation-based study can also be combined with optimization tools in order to determine which setting performs best, once one or more objectives (to be minimized or maximized) are defined. The resulting _Simulation Optimization_ approach has also been applied in healthcare contexts (see e.g. Chanchaichujit:2019 ; Granja:2014 ; optl2016 ; ieee-tase2016 ; Zhang:2019 ) and, in particular, in dealing with EDs (see e.g.
Ahmed.2009 ; Diefenbach.2011 ; Guo.2017 ; Guo.2016 ). For instance, in Guo.2017 a simulation model for the ED of a public hospital in Hong Kong is built and integrated with an optimization tool in order to find an optimal medical staff configuration that minimizes the total labor cost given the service quality requirement; in Diefenbach.2011 the patient flow through an Australian ED is studied and optimized on the basis of the bed configuration. Moreover, interesting papers are devoted to preventing and predicting strain situations in EDs (see e.g. Kadri:2014 ) and to studying the effect of spikes in patient arrivals due to disasters or extreme events. A comprehensive review highlighting the importance of simulation for improving ED policies under critical conditions can be found in Gul.2015 . Furthermore, in Gul.2015b a DES model is proposed to study disaster scenarios corresponding to a patient flow surge for an ED located in an earthquake area in Istanbul. The aim is to enable early preparedness of ED resources to overcome bottlenecks due to critical situations. Finally, we mention Xiao.2012 , where the patient workflow through an ED located in Western New York during extreme conditions is studied, and a framework to reconfigure the workflow is proposed with the aim of improving the overall management of the patient flow.

## 3 Generalities on EDs and Italian guidelines

The ED involves two categories of stakeholders: _physicians_ and _nurses_ , who are the human resources with different responsibilities, and _patients_ , who need specific care. Moreover, there are physical resources, such as beds, machinery, stretchers and so forth, necessary to host and visit patients during their stay. Each patient arriving at the ED goes through a specific clinical path.
This flow comprises several steps, which generally consist of the triage, whose aim is to assign an urgency code to every patient, medical visits and examinations and, finally, the leaving of the ED, which may involve different kinds of discharge. As a first step, a triage tag is assigned to every incoming patient, in order to determine the priority of treatment. Different classification systems are usually adopted. The most commonly used scale in Italy is reported in Table 1.

Table 1: Colour coding scheme for triage of incoming patients.

| Tag | Meaning |
|---|---|
| Red tag | Very critical, danger of life. The patient must be visited immediately |
| Yellow tag | Fairly critical, high risk. The patient should be visited as soon as possible |
| Green tag | Minor injury, no risk of conditions worsening. The treatment can be delayed |
| White tag | No injury, minimal pain with no risk features. The treatment can be deferred |

Moreover, in some regions a blue tag is also used as an intermediate case between green and yellow tags. In some countries more fine-grained (sometimes numerically valued) scales are adopted. In order to guarantee more appropriate clinical paths, and following the main current international scientific evidence, the Italian Ministry of Health is going to adopt new guidelines. They are based on a new triage which considers five numerical urgency codes (see triage.2001 ; accordo.2013 ), as detailed in Table 2.

Table 2: Numeric coding scheme for the triage of incoming patients.

| Code | Meaning |
|---|---|
| Code 1 | Very critical conditions, immediate treatment |
| Code 2 | Fairly critical, high level of risk |
| Code 3 | Not very critical, no risk of worsening |
| Code 4 | Not critical, acute but not serious |
| Code 5 | Not critical, not serious, not acute |

This classification closely resembles the Emergency Severity Index (ESI) adopted in the US, which is based on an algorithm that rapidly yields a grouping of patients into five classes, as described in Gilboy.2012 .
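The five-level scheme in Table 2 (like the colour tags of Table 1) implicitly defines a priority discipline: the lower the code, the more urgent the patient, with ties broken by arrival order. As a minimal illustration of this discipline outside ARENA (the function and patient names here are ours, purely hypothetical), a triage queue can be sketched with a binary min-heap:

```python
import heapq
import itertools

# Priority discipline implied by the five-level triage of Table 2:
# lower code = more urgent; ties broken by arrival order (FIFO).
_counter = itertools.count()

def admit(queue, patient_name, triage_code, arrival_time):
    """Push a patient onto the triage queue (illustrative sketch)."""
    # heapq is a min-heap: (code, arrival order) sorts most urgent first.
    heapq.heappush(queue, (triage_code, next(_counter), arrival_time, patient_name))

def next_patient(queue):
    """Pop the most urgent, earliest-arrived patient."""
    code, _, t, name = heapq.heappop(queue)
    return name, code, t

queue = []
admit(queue, "A", 3, 10.0)   # not very critical
admit(queue, "B", 1, 12.0)   # very critical
admit(queue, "C", 3, 11.0)
print(next_patient(queue))   # ('B', 1, 12.0)
```

Patient "B" is served first despite arriving last, which is exactly the behaviour the urgency codes are meant to enforce.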
After the triage code has been assigned, the nurse activates the most appropriate Diagnostic Therapeutic Care Path (DTCP). In particular, a patient can be sent: 1) to an ED room, 2) to outpatient facilities, 3) toward a _"Fast Track"_ , 4) to the _"See and Treat"_ service. The Fast Track and See and Treat are novel services recently introduced in order to reduce waiting times, the Length Of Stay (LOS) in the ED, as well as the percentage of patients who Leave Without Being Seen (LWBS). The patients directed to ED rooms follow different clinical pathways, which include medical examination and diagnostic tests up to the definition of the outcome. The patient flow inside ED rooms is very complex, due to the many different specific needs (often difficult to identify in a short time) and the high variability of the medical conditions of the incoming patients. Moreover, the flow is also strongly affected by the availability of resources such as staff on duty, number of rooms and machines dedicated to different services, capacity of holding areas, and beds for hospitalization. The ED process is usually characterized by the following outcomes: _discharged home_ with reliance, if necessary, on territorial structures which provide control at outpatient facilities; _hospitalization_ at a hospital ward (if a bed is available) or _transfer_ to another hospital; admission to the _Short Stay Unit_ (SSU) (whenever such a unit exists). The SSU is an inpatient unit attached to the ED, managed under the clinical governance of the ED staff, designed for the short term treatment, observation, assessment and reassessment of patients. When a patient is discharged at the end of the clinical pathway, a physician must assign an exit code in order to identify the patient's clinical severity level, similarly to that assigned at the triage. Unfortunately, very frequently, the great number of incoming patients leads to the overcrowding of the ED.
Overcrowding is a worldwide phenomenon and it is clearly perceived by the stakeholders: long patient waiting times before the medical examination, an excessive number of patients in the ED, and a high percentage of patients who LWBS are clear indications of the problem. In order to give a formal assessment of the degree of overcrowding, several measures have been proposed. They make it possible to monitor the state of the ED, describing the current situation, and can also serve as alarm bells to avoid reaching a critical level. The most commonly used are: the Real Time Emergency Analysis of Demand Indicators (READI), the Emergency Department Work Index (EDWIN), the Work Score, and the National Emergency Department Overcrowding Scale (NEDOCS). They are continuous-valued indicators computed on the basis of operational variables which quantify the degree of overcrowding of an ED (see Hoot.2007 and the references reported therein for the definition of these measures). However, the study reported in Hoot.2007 showed that none of these measures actually provides a reliable predictive analysis with a low percentage of false warnings. In order to analyze (and possibly prevent) the overcrowding phenomenon, it is necessary to measure the time spent by the patient inside the ED during the different phases of the whole process. To this aim, the novel Italian guidelines recommend monitoring the times of the clinical pathway in relation to the assigned priority codes. In light of these guidelines, ED managers show great interest in tools which enable scenario analysis, like those provided by DES. The aim is to assess how the main KPIs change after a possible redesign of the ED patient flows and changes in the model of care. A great and increasing interest concerns simulation modelling for studying the impact of patient arrival surges caused by disasters.
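Among the indicators mentioned above, EDWIN is perhaps the simplest to state. The sketch below uses its commonly cited form, EDWIN = Σᵢ nᵢ·tᵢ / (Nₐ·(B_T − B_A)); the numeric snapshot and the overcrowding threshold in the docstring are our assumptions for illustration, not data or definitions taken from this study:

```python
def edwin(patients_by_acuity, attendings, treatment_bays, admitted_boarders):
    """
    Sketch of the EDWIN overcrowding index in its commonly cited form:
        EDWIN = sum_i n_i * t_i / (N_a * (B_T - B_A))
    patients_by_acuity: dict mapping acuity weight t_i (higher = sicker)
                        to the number n_i of ED patients with that weight;
    attendings (N_a): physicians on duty;
    treatment_bays (B_T): number of treatment bays;
    admitted_boarders (B_A): admitted patients boarding in the ED.
    Values above roughly 2 are usually read as overcrowded (threshold
    is an assumption here).
    """
    load = sum(n * t for t, n in patients_by_acuity.items())
    free_capacity = attendings * (treatment_bays - admitted_boarders)
    if free_capacity <= 0:
        return float("inf")  # saturated: no usable bays per physician
    return load / free_capacity

# Hypothetical snapshot: acuity weights 5..1, from most to least urgent.
print(edwin({5: 1, 4: 3, 3: 6, 2: 4, 1: 2},
            attendings=2, treatment_bays=8, admitted_boarders=3))  # 4.5
```

A live dashboard would recompute this on every arrival, discharge, or shift change, which is exactly the "alarm bell" role described above.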
The related Italian guidelines state specific measures to be adopted in order to efficiently tackle such situations. In particular, the so-called "Internal Emergency Plan for Massive Inflow of Injured" (in Italian: PEIMAF, Piano di Emergenza Interno per il Massiccio Afflusso di Feriti) has been issued in recent years. In this plan, different critical levels are provided, and suitable operative measures are indicated to reallocate ED human and physical resources whenever the plan is activated due to critical events. Moreover, low complexity patients can be directed to outpatient facilities to enable the ED staff to timely deliver the most urgent treatments.

## 4 The case study: the ED of the "E. Profili" Fabriano hospital

In this section we detail our case study concerning the ED of the "E. Profili" Fabriano (Ancona) hospital. This hospital is located in the Italian region of Marche and its catchment area covers about 48000 inhabitants. Every year about 27000 patients arrive at the ED requiring medical assistance, hence it can be considered of medium size. A detailed understanding of the ED operation was gained through process mapping performed together with the ED staff. In the sequel we report a brief description of the ED rooms and staff; moreover, we summarize the patient flows through the ED. This ED is composed of:

* a _triage area_ , where a nurse assigns the colour tag to each incoming patient; it can host one patient at a time;
* a _waiting room_ , where patients queue for the triage and (after the triage) wait for the medical examination;
* three areas for medical treatment:
  * the _green area_ , for green and white tagged patients,
  * the _yellow area_ , for yellow tagged patients,
  * the _shock room_ , for red tagged patients;
* a _holding area_ ;
* a _Short Stay Unit (SSU)_ .
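The layout just listed can be encoded as a small resource table, which is essentially what a DES model needs in order to route entities. The capacities reflect the description of this ED (the triage area hosts one patient at a time, the shock room two, the green and yellow areas one seat each); the field and function names are ours, for illustration only:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Area:
    name: str
    capacity: int            # patients hosted simultaneously
    tags_served: tuple       # colour tags routed here

# Capacities as described for this ED (illustrative sketch).
ED_AREAS = (
    Area("triage",      1, ("white", "green", "yellow", "red")),
    Area("green_area",  1, ("white", "green")),
    Area("yellow_area", 1, ("yellow",)),
    Area("shock_room",  2, ("red",)),
)

def area_for(tag):
    """Route a colour tag to its treatment area (daytime routing)."""
    for a in ED_AREAS[1:]:           # skip triage, which every patient passes
        if tag in a.tags_served:
            return a.name
    raise ValueError(f"unknown tag: {tag}")

print(area_for("yellow"))  # yellow_area
```

In a fuller model the routing function would also take the hour of day as an argument, since at night the yellow area is closed and yellow tagged patients are visited in the green area.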
As regards the areas for medical treatment, the shock room is the most equipped one, and two critical patients can be hosted there simultaneously. In the green area and in the yellow area one seat each is available. During the night (9.00 p.m. – 8.00 a.m.) only the shock room and the green area are in operation, so that yellow tagged patients are also visited in the green area. As regards the ED staff, physicians and nurses are on duty according to the shifts reported in Table 3 and Table 4. Other staff members can be engaged in particular emergencies.

Table 3: Number of physicians on duty in the weekdays (WD) and in the public holidays (PH).

| | WD | PH |
|---|---|---|
| Morning (8.00 a.m. – 2.00 p.m.) | 2 | 1 |
| Afternoon (2.00 p.m. – 9.00 p.m.) | 2 | 1 |
| Night (9.00 p.m. – 8.00 a.m.) | 1 | 1 |

Table 4: Number of nurses on duty each day.

| Shift | Nurses |
|---|---|
| Morning (7.00 a.m. – 2.00 p.m.) | 3 |
| Afternoon (2.00 p.m. – 10.00 p.m.) | 3 |
| Night (10.00 p.m. – 7.00 a.m.) | 2 |

As regards the patient flow, arrivals are by ambulance or autonomous. All incoming patients are registered at the check–in desk and admitted to the triage area, where a nurse collects the patient's health information and assigns the colour tag. Critical patients arriving by ambulance are directly transferred to a medical area for immediate treatment, without going through the triage area. After the triage, a patient waits for the call in the waiting room, where the estimated waiting time is displayed on a screen. Then the patient is transferred to the appropriate area inside the ED for medical treatment according to the assigned colour tag. In severely urgent cases (red tag), the patient is examined in the shock room for possible immediate interventions. In less severe cases, physicians, after performing a health assessment, decide the clinical pathway which must be followed by the patient. The pathways can be very differentiated on the basis of the acuity of the patient's illness.
In many cases, the physician requires additional examinations for the patient (e.g. clinical laboratory tests, X-rays, EKGs). In other cases, the patient is transferred to the SSU for a short observation. Moreover, patients with less serious illnesses, having a single specialist relevance, are assigned to the fast track service, a specific area of the ED provided by a multidisciplinary team, where timely patient treatment and discharge are ensured. In the ED, usually one physician and one nurse manage the examination and the treatment of a patient; however, in case of severe injuries, the whole ED staff can support them. Whenever all the examinations are completed and the related reports have been issued, a reassessment of the patient is performed by the ED physician, who can require further examinations and/or an additional observation period. At the end of the pathway, the final diagnosis is delivered and the patient is discharged from the ED. When a patient is discharged, a physician assigns an exit code in order to identify the patient's clinical severity level, similarly to that assigned at the triage. This implies that the code assigned at the triage on arrival can be confirmed or changed. Furthermore, a detailed description of the outcome must be issued. The outcome is encoded according to the following list: _O1_ : the patient is discharged home; _O2_ : the patient is discharged home with reliance on outpatient facilities or the family physician; _O3_ : the patient is hospitalized at a hospital ward; _O4_ : the patient is transferred to another hospital due to bed unavailability at the appropriate ward of the hospital; _O5_ : the patient refuses hospitalization and leaves the ED despite the medical request; _O6_ : the patient leaves during examinations, i.e. the patient does not complete all the required tests and abandons the ED without informing the staff; _O7_ : the patient leaves without being seen, i.e.
the patient abandons the ED waiting room before being examined by a physician (LWBS); _O8_ : the patient dies during the stay at the ED; _O9_ : the patient arrived at the ED already deceased. In order to perform a complete process mapping of the ED, many interviews were carried out with the staff (physicians, nurses, managers) and direct observations took place. Moreover, all the available data concerning the patient flow during February 2018 were anonymously collected. From 00:00 of February 1 to 23:59 of February 28, 2018, the overall number of patients who arrived at the ED was 2046. The time–stamps recorded are reported in Figure 1. They have been extracted and organized in a suitable database. Note that, since the _holding area_ and the _SSU_ are not the subject of our study, these two units of the ED are not specified in our model, and hence not represented in Figure 1.

Figure 1: Collected timestamps for the ED process.

In Table 5 we report the number and the percentage of colour tags assigned at the triage on arrival, as well as the number and the percentage of colour tags assigned on discharge (as already noticed, in some cases the discharge tag, at the end of the clinical pathway, can differ from that assigned at the triage). In the same table the percentage of patients leaving without being seen is reported for each triage tag, together with the number and the percentage of deceased patients.

Table 5: Number and percentage of colour tags assigned at the triage to incoming patients (columns 2-3); number and percentage of colour tags assigned at the discharge (columns 4-5); percentage of patients LWBS (column 6).

| | Triage tag (n) | Triage tag (%) | Discharge tag (n) | Discharge tag (%) | LWBS |
|---|---|---|---|---|---|
| White | 149 | 7.28% | 171 | 8.36% | 4.65% |
| Green | 1448 | 70.77% | 1612 | 78.78% | 1.61% |
| Yellow | 434 | 21.21% | 243 | 11.88% | 0.41% |
| Red | 15 | 0.74% | 18 | 0.88% | - |
| Deceased | | | 2 | 0.1% | |
| Total | 2046 | | 2046 | | |

For the sake of brevity we do not report a table with all the tag changes, but from Table 5 it is clear that some colour tags are changed to another (always adjacent) tag. In Table 6 we report the distribution of patients on the basis of the discharge tag and according to the list of outcomes.

Table 6: Distribution of patients on the basis of the discharge tag and according to the list of the outcomes.

| | White | Green | Yellow | Red | Total |
|---|---|---|---|---|---|
| _O1_ | 121 | 1025 | 23 | | 1169 |
| _O2_ | 39 | 496 | 21 | | 556 |
| _O3_ | | 29 | 182 | 14 | 225 |
| _O4_ | | | 4 | 4 | 8 |
| _O5_ | | 19 | 10 | | 29 |
| _O6_ | 3 | 17 | 2 | | 22 |
| _O7_ | 8 | 26 | 1 | | 35 |
| _O8_ | | | | 2 | 2 |
| _O9_ | | | | | - |
| Total | 171 | 1612 | 243 | 20 | 2046 |

## 5 The Discrete Event Simulation model

In this section we detail the DES model of the ED described in Section 4. An entity is created on the patient arrival. This event represents the entry of the patient into the system under consideration. Then the entity flows through the different segments of the model according to specified logical rules, which reproduce the patient flow described in Section 4. At the end of the DTCP, the entity is discharged from the system model. We implemented the simulation model by means of the ARENA 15.1 Simulation Software ARENA ; simulationwitharena . As regards the KPIs of interest, to meet the specific demand of the ED managers, we focus on the analysis of the patient flow from the beginning of the DTCP and, in particular, on monitoring, for each colour tag,

* the Waiting Time (WT) between the end of the triage and the start of the visit, namely $t_{2}-t_{1}$ in Figure 1;
* the Total Time (TT) after the triage, i.e.
the time between the end of the triage and the discharge, namely $t_{5}-t_{1}$ in Figure 1. Note that TT does not coincide with the LOS, since the initial part of the pathway, up to the end of the triage, is not considered. This choice is motivated by the practitioners' request to focus on all the processes following the triage phase.

### 5.1 Input analysis

The data were used for a detailed input analysis of all the processes in the ED. As regards the arrival process, we adopt the standard assumption used in the literature that the arrival process to an ED is a Nonhomogeneous Poisson Process (NHPP) (see Whitt.2017 ; Kuo.2016 ; Zeinali.2015 ; Ahmed.2009 ; Ahalt.2018 ; Guo.2017 ). In fact, the adoption of a nonhomogeneous process is necessary since patient interarrival times are strongly affected by the arrival hour. In order to obtain a good accuracy of the arrival rate, we consider 24 time slots for each day on an hourly basis, starting from 00:00. Therefore, by using a standard procedure (see e.g. law:15 ), we approximate the arrival rate function by a piecewise constant function. A plot of the hourly arrival rate is reported in Figure 2.

Figure 2: Plot of the hourly arrival rates.

Thanks to an ARENA built-in tool (see simulationwitharena ) it is possible to generate entity arrivals according to a nonstationary Poisson process. As regards the times employed in the _visit_ process and in the _additional examination_ process, by using the collected data we obtain the probability distributions of the times (in minutes) reported in Table 7 for each colour tag.

Table 7: Probability distribution of _visit_ and _additional examination_ times.

| | Visit | Additional exams |
|---|---|---|
| White | Lognormal(7.87, 9.76) | Exp(44.4) |
| Green | Lognormal(11.7, 10.6) | Exp(80.4) |
| Yellow | Normal(15.2, 7.72) | Weibull(236, 0.731) |
| Red | $8+$Gamma(10.2, 1.49) | Weibull(86.9, 0.678) |

As concerns the resources used, each patient requires one physician, one nurse and one seat in the area corresponding to the colour tag.

### 5.2 Model validation

In order to guarantee that the DES model we built provides output of sufficient accuracy, the model has been extensively verified and validated. As regards the validation, we compared the real system values with the corresponding simulation outputs, namely the average values (with their confidence intervals) obtained from 50 independent simulation replications, each one month long, with a warm-up period of 24 hours. In particular, we consider some fundamental KPIs of the overall process in terms of _times_ and _entity counters_. As concerns times, we focus on the waiting time WT and on the total time TT previously defined. In Figure 3 and Figure 4 we report the current values, namely those corresponding to the "as–is" status, and the simulation outputs of WT and TT (in minutes) with the relative confidence intervals (at the 95% confidence level).

Figure 3: Plot of current values (in orange) and simulation output (in blue) of WT (in minutes) with the confidence interval.

Figure 4: Plot of current values (in orange) and simulation output (in blue) of TT (in minutes) with the confidence interval.

These plots clearly show the reliability and the good accuracy of the simulation model. Indeed, the simulation output represents a good approximation of the current values for all colour tags. As regards the entity counters, we compare the current values of the outcomes, as reported in Table 6, with the corresponding outputs of the simulation model reported in Table 8 (for the sake of brevity, we do not report the corresponding plots).
Table 8: Output values (with confidence intervals) for the outcomes returned by the simulation.

| | White | Green | Yellow | Red |
|---|---|---|---|---|
| _O1_ | $120.53\pm 2.22$ | $1019.74\pm 5.88$ | $23.35\pm 0.91$ | |
| _O2_ | $39.41\pm 1.29$ | $489.85\pm 4.69$ | $21.45\pm 1.02$ | |
| _O3_ | | $28.83\pm 1.15$ | $179.18\pm 3.08$ | $13.51\pm 0.73$ |
| _O4_ | | | $3.79\pm 0.42$ | $4.18\pm 0.41$ |
| _O5_ | | $18.14\pm 0.87$ | $10.24\pm 0.65$ | |
| _O6_ | $3.15\pm 0.32$ | $16.93\pm 0.76$ | $2.09\pm 0.28$ | |
| _O7_ | $7.06\pm 0.60$ | $23.63\pm 1$ | $1.64\pm 0.28$ | |
| _O8_ | | | | $2.16\pm 0.30$ |

A comparison between the two tables clearly shows the reliability and the good accuracy of the simulation model. In particular, the model output values corresponding to each outcome and to each colour tag are an accurate approximation of the current values, taking the confidence intervals into account.

## 6 Design of experiments and results

In this section, we report the experimental results obtained with our DES model. Our aim is to determine the performance measures of the ED under different scenarios, in order to evaluate possible policy changes leading to an overall improvement. To this aim, we consider hypothetical scenarios where the patient arrival rate is artificially changed. In particular, we preliminarily consider an increase of a prefixed percentage of the arrival rate due to growth in demand, and a mildly loaded situation, namely a gradual increase of patient arrivals over a period of a few days of the week. Then we turn to the main focus of this work, namely extremely loaded situations, possibly due to critical conditions such as a natural disaster. For each scenario, we assess how the ED response times change. In particular, we consider the KPIs of interest defined in Section 5, i.e. the waiting time WT and the total time TT. Moreover, we monitor the resource usage. More specifically, we monitor the instantaneous utilization (on an hourly basis) of the resources.
Note that instantaneous utilization is preferred to average utilization since high utilization can cause saturation and performance deterioration, even if the utilization is low when averaged over a long interval. We used VBA (Visual Basic for Applications) ARENA modules to collect data concerning the instantaneous utilization of the resources, so that we were able to process them. In particular, we consider the usage of the green and yellow areas. We recall that both green and white tagged patients are assigned to the green area.

### 6.1 Increase of a prefixed percentage of the arrival rate

We consider an increase of $10\%$ of the hourly arrival rate with respect to the current value. We run 10 independent replications, each one month long, with a warm-up period of 24 hours. In Figures 5-6 we report the comparison in terms of WT and TT, respectively.

Figure 5: WT (in minutes): plot of the comparison between the current "as–is" status (in blue) and a prefixed percentage increase of the arrival rate (in green).

Figure 6: TT (in minutes): plot of the comparison between the current "as–is" status (in blue) and a prefixed percentage increase of the arrival rate (in green).

Of course, the uniform increase of the demand implies longer waiting/stay times. However, this growth of the arrival rate does not significantly affect waiting/stay times for yellow and red tagged patients. This is mainly due to the priority criterion and also to the small number of red and yellow tagged patients arriving with respect to green and white ones. In Figures 7-8 we report the comparison of the usage of the green and yellow areas over the 24 hours, namely the instantaneous utilization (on an hourly basis) of each of them. We do not consider the usage of the shock room because of the small number of red tagged patients.

Figure 7: Plot of the usage of the green area.

Figure 8: Plot of the usage of the yellow area.
From Figure 7 it can be observed that the usage of the green area during the rush hours is close to one even in the "as–is" status and that, as expected, the percentage increase causes a corresponding uniform growth of the utilization. Before discussing the usage of the yellow area, note that since only one physician is on duty during the night, only the green area is actually used at night for the treatment of both green and yellow tagged patients. Therefore, as shown in Figure 8, no patient is visited in the yellow area during the night, while throughout the day, when two physicians are on duty, the area is heavily used and a uniform increase of the usage is observed, according to the growth of the average hourly arrival rate.

### 6.2 A mildly and an extremely loaded scenario

Now we consider unexpected conditions, namely a spike in the patient arrival rate due to a sudden and critical event (for instance, in the extreme case, an earthquake). We focus on two possible artificial scenarios: a mildly and an extremely loaded scenario, corresponding to two different unpredictable occurrences. For each experiment, we run 10 independent replications, each 5 days long, with a warm-up period of 24 hours. The choice of a 5-day length is motivated by the need to accurately monitor the actual effect of arrival spikes. More precisely, the aim is to avoid that, in the computation of the average values for those KPIs which are computed as an average over the whole replication length, too many occurrences concerning standard days (without spikes) are included. Indeed, this could cause a non-negligible bias in the results. As regards the mildly loaded situation, we adopt a gradual increase/decrease of the arrival rate over three days of the week (starting from day 2). More precisely, similarly to Ahalt.2018 , we increase the arrival rate from 5% to 25%, depending on the time slot, according to the scheme in Table 9.
Table 9: Percentage increases of the arrival rate for _Day 2_ , _Day 3_ and _Day 4_ for different time slots.

| | _Day 2_ | _Day 3_ | _Day 4_ |
|---|---|---|---|
| 00:00 – 08:00 | $+5\%$ | $+20\%$ | $+20\%$ |
| 08:00 – 14:00 | $+10\%$ | $+25\%$ | $+15\%$ |
| 14:00 – 20:00 | $+15\%$ | $+25\%$ | $+10\%$ |
| 20:00 – 24:00 | $+20\%$ | $+20\%$ | $+5\%$ |

As concerns the extremely loaded scenario, we try to reproduce a major emergency. To this aim, we consider a 300% increase of the arrival rate centered over the 24 hours of Day 2 of the week. Figure 9 reports the increased hourly arrival rates for both scenarios, along with the unmodified arrival rate.

Figure 9: Plot of the increased patient arrival rates (mildly loaded in grey, extremely loaded in yellow) and the unmodified arrival rate (in blue).

In Figure 10 we report the comparison between the current "as–is" status and the two scenarios in terms of WT. Similarly, in Figure 11 the same comparison is reported in terms of TT.

Figure 10: WT (in minutes): plot of the comparison between the current "as–is" status (in blue) and the mildly (in grey) and extremely (in yellow) loaded scenarios. At the bottom, the detail for yellow and red tagged patients.

Figure 11: TT (in minutes): plot of the comparison between the current "as–is" status (in blue) and the mildly (in grey) and extremely (in yellow) loaded scenarios.

As expected, we observe a huge increase of both WT and TT in the extremely loaded scenario for low–complexity patients (white and green tagged): the WT would exceed one day for white tagged patients and 10 hours for green tagged ones, which is not acceptable. As regards the mildly loaded scenario, a moderate increase is observed, showing that both WT and TT actually remain feasible. A different outcome is observed for higher complexity patients.
As regards the red tagged patients, in both scenarios we do not change their current percentage, with respect to the other color tags, given in Table 5 (scenarios with changes in this percentage are reported afterwards). In this case, even a huge increase of the overall arrivals does not lead to more than one red tagged patient arrival per hour. Therefore, both WT and TT do not grow significantly, also due to the high priority assigned to these patients. A similar result is observed for the yellow tagged patients. Moreover, note that the WT for red tagged patients is still approximately zero, in accordance with their high urgency level.

Figures 12-13 report the comparison between the current “as–is” status and the two artificial scenarios in terms of usage of the green and yellow areas.

Figure 12: Plot of the usage of the green area: the current “as–is” status (in blue), the mildly loaded (in grey) and the extremely loaded (in yellow) scenarios.

Figure 13: Plot of the usage of the yellow area: the current “as–is” status (in blue), the mildly loaded (in grey) and the extremely loaded (in yellow) scenarios.

From both these figures, it can be observed how, in the two scenarios, the peak patient arrivals cause a sudden increase of the usage of both resources (the green and yellow areas). In the extremely loaded scenario, the peak causes resource saturation even immediately before and after the peak center. Note that this phenomenon can be observed only by monitoring the instantaneous resource utilization rather than the average utilization. We now analyze the extremely loaded scenario in more detail. Indeed, due to the unpredictability of peak arrivals caused by critical events, it is difficult to create artificial scenarios that actually reproduce what could happen in reality. Therefore, we now analyze some variants of the $300\%$ increase of the arrival rate already considered.
In particular, we report the comparison of the current “as–is” status with respect to increases of $100\%$, $200\%$ and $400\%$ of the arrival rate (always centered on Day 2 and with the same percentage distribution of the color tags). In Figures 14–15, the corresponding WT and TT are reported, along with those obtained for the $300\%$ increase scenario.

Figure 14: WT (in minutes): plot of the comparison between the current “as–is” status (in grey) and the extremely loaded scenario with increases of the arrival rate of $100\%$ (in blue), $200\%$ (in orange), $300\%$ (in yellow) and $400\%$ (in light blue). At the bottom, the detail for yellow and red tagged patients.

Figure 15: TT (in minutes): plot of the comparison between the current “as–is” status (in grey) and the extremely loaded scenario with increases of the arrival rate of $100\%$ (in blue), $200\%$ (in orange), $300\%$ (in yellow) and $400\%$ (in light blue).

From Figure 14, it can be observed that the WT for red tagged patients still remains acceptable in all four scenarios, due to the low percentage of red tagged patient arrivals. As regards the yellow tagged patients, the WT remains below 10 minutes only for the $100\%$ increase. As regards the white and green tagged patients, the WT becomes unacceptable even with the $100\%$ increase. The TT reported in Figure 15 are a direct consequence of the corresponding WT. The same comparison between the current “as–is” status and the increases of $100\%$, $200\%$ and $400\%$ of the arrival rate is reported in Figures 16–17 in terms of resource usage for the green and the yellow areas, respectively.

Figure 16: Plot of the usage of the green area: the current “as–is” status and the extremely loaded scenarios.

Figure 17: Plot of the usage of the yellow area: the current “as–is” status and the extremely loaded scenarios.
These figures clearly highlight how, as expected, the percentage increase of the patient arrival rate strongly affects the utilization of both the green and the yellow areas. Note that the usage of the green area also depends on the fact that during the night it is used for yellow tagged patients as well, since the yellow area is not in use during the night. Both resources reach saturation even in the most cautious scenario (the $100\%$ increase). All the scenarios analyzed so far are based on increases of the patient arrival rate, keeping the percentage distribution of the different color tags unchanged. Actually, during a critical event, it is also likely that the percentage of high priority patients grows during the peak arrivals. During the latest earthquake (August 24, 2016) that hit the regions of central Italy (where the ED considered in this work is located), up to 14 patients with trauma due to crushing (i.e., red tagged patients) requested assistance in the hour corresponding to the maximum arrival peak. As already mentioned in Section 3, in these cases, i.e., in case of a so-called “maxi–emergency”, according to current regulations, Italian EDs adopt the “Internal Emergency Plan for Massive Inflow of Injured” (we use the Italian acronym PEIMAF). This implies further resource availability and different operating rules aimed at providing adequate and timely assistance to all the patients who require it. Of course, such a plan cannot be tested during the normal ED activity. However, it is really important to check it accurately in order to ensure its effectiveness whenever it is activated. A natural way of performing such testing is the use of a DES model. Therefore, we used our DES model also to provide some insights, useful for the decision makers, concerning the definition of the PEIMAF.
In the sequel, we report some analyses we performed aiming at reproducing a critical situation corresponding to the extremely loaded scenario with an increase of $400\%$ of the overall arrival rate and, in addition, an increase of red tagged patient arrivals during the peak. In particular, we assume an increase of the percentage of red tagged patient arrivals such that about 14 of these patients arrive in the peak hour, as actually occurred during the recent earthquake. Moreover, we assume the adoption of a possible maxi–emergency PEIMAF plan and compare the KPIs obtained. More specifically, we adopt the following assumptions:

A1 Patients arrivals:
* – the percentage of color tags assigned at the triage station is modified during the whole peak day, by assuming that $50\%$ of the arrivals are red tagged patients.

A2 Maxi–emergency plan (PEIMAF):
* – the number of physicians and nurses on duty is doubled during the peak day, starting from the peak hour (10:00 a.m.); then the shifts return to the normal scheme;
* – green and yellow tagged patients are not admitted to the ED but sent to outpatient facilities;
* – the green and yellow areas can also be used for the treatment of red tagged patients.

In this manner, by A1 we obtain about 14 red tagged patient arrivals in the peak hour, reproducing a critical situation that actually occurred. By A2 we implement a possible simple configuration of a maxi–emergency plan, for experimental purposes only. Actually, the operational procedures provided by a PEIMAF plan are much more complex and articulated, and here we only report an illustrative example to show how our DES model can be fruitfully used to test a maxi–emergency plan. Figures 18–19 report the WT and the TT for the “as–is” status and the extremely loaded scenario with a $400\%$ increase of the patient arrival rate and the percentage of red tags assigned at the triage station modified according to A1.
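Assumptions A1-A2 can be encoded as a scenario configuration for the DES model, as in the sketch below; the class and field names are hypothetical, and the values simply mirror A1-A2 rather than an actual PEIMAF specification:

```python
from dataclasses import dataclass

@dataclass
class MaxiEmergencyScenario:
    """Illustrative encoding of the extreme-load experiment (A1-A2)."""
    arrival_increase: float = 4.0      # +400% overall arrival rate
    red_tag_share: float = 0.50        # A1: 50% red tags on the peak day
    double_staff_from: int = 10        # A2: staff doubled from 10:00 a.m.
    divert_low_acuity: bool = True     # A2: green/yellow sent to outpatient care
    all_areas_treat_red: bool = True   # A2: green/yellow areas also treat red

def staff_on_duty(scenario, base_staff, hour, is_peak_day):
    """Physicians (or nurses) on duty at a given hour under A2."""
    if is_peak_day and hour >= scenario.double_staff_from:
        return 2 * base_staff
    return base_staff
```

For instance, with two physicians normally on duty, four are available from the peak hour onward on the peak day, while non-peak days keep the normal shift scheme.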
The comparison concerns the values of these KPIs obtained without adopting a maxi–emergency plan and by using the plan described in A2.

Figure 18: WT (in minutes): plot of the comparison between the current “as–is” status (in blue) and the extremely loaded scenario with the modified percentage of red tags assigned at the triage as in A1, with (in yellow) and without (in grey) adopting a maxi–emergency plan as in A2.

Figure 19: TT (in minutes): plot of the comparison between the current “as–is” status (in blue) and the extremely loaded scenario with the modified percentage of red tags assigned at the triage as in A1, with (in yellow) and without (in grey) adopting a maxi–emergency plan as in A2.

The figures clearly show the very long times obtained without adopting the maxi–emergency plan. Even by adopting the plan as specified in A2, the WT exceeds 3 hours for yellow tagged patients and 1 hour for red tagged ones, which is obviously unacceptable. This situation is confirmed by the plots of the usage of the green and yellow areas and the shock room reported in Figures 20–22.

Figure 20: Plot of the usage of the green area: the current “as–is” status and the extremely loaded scenario with and without the maxi–emergency plan.

Figure 21: Plot of the usage of the yellow area: the current “as–is” status and the extremely loaded scenario with and without the maxi–emergency plan.

Figure 22: Plot of the usage of the shock room: the current “as–is” status and the extremely loaded scenarios with and without the maxi–emergency plan.

First, note that when the emergency plan is not activated, and hence only two physicians are on duty during the peak day, both are assigned to the shock room, and therefore the green and yellow areas cannot be used. On the contrary, if the emergency plan is activated, and hence four physicians are on duty during the peak day, the green and yellow areas can also be used for the treatment of higher complexity patients.
In any case, even if the adoption of the maxi–emergency plan as assumed in A2 leads to an improvement, its overall inadequacy is very evident. Resource saturation is reached for long periods, implying that the resources receive additional requests which cannot be satisfied and are hence queued. Of course, in case the resources (both human and physical) cannot be further enlarged, a patient diversion policy towards neighboring EDs should be adopted. Being aware in advance of the real operational capacity of an ED in case of peak arrivals can be really important for the ED managers. In fact, this allows them to define or adjust a maxi–emergency plan, taking into account the specific environmental context, too. Thanks to the high flexibility of our ARENA implementation of the DES model of the ED under study, an extensive scenario analysis can be performed aimed at assessing the ED operational capacity, namely all the KPIs of interest, in many different real critical situations.

## 7 Conclusions

In this paper we propose a DES model for studying the ED of a medium-sized hospital in a region of central Italy recently hit by a severe earthquake. The aim is to assess how fundamental KPIs change in response to different increase patterns of the patient arrival rate. In particular, we focus on extremely loaded situations for the ED, due to critical events like a natural disaster. To evaluate the performance of the ED, time-related measurements have been considered, as well as resource usage. Several scenarios have been considered, including some artificially created ones, trying to reproduce real mass casualty occurrences. The experimental results showed that, when the increase of the arrival rate is low or moderate, the ED performance does not significantly deteriorate. Instead, in case of extreme events with high patient arrival peaks, the adoption of an exceptional emergency plan is necessary to ensure effective and timely assistance.
The model proposed in this paper refers to a specific ED, but thanks to the flexibility of its implementation, it can be easily adapted to reproduce the patient flow of other EDs. We believe that the proposed model has a twofold merit: on one side, it represents an effective decision support system, allowing one to assess the performance of the ED under study, highlighting possible operational improvements and enabling decision makers to better allocate ED resources. On the other side, the model can be used to perform scenario analyses that help managers define maxi–emergency plans in advance, which, of course, cannot be tested during the normal activity of the ED.

###### Acknowledgements.

The authors wish to express their gratitude to Dr. Stefania Mancinelli and to Dr. Marco Pierandrei from “Direzione Medica Ospedale di Fabriano” who enabled us to carry out this study. Moreover, we thank Dr. Massimo Maurici and Ing. Luca Paulon from “Dipartimento di Biomedicina e Prevenzione” and “Laboratorio di Simulazione e Ottimizzazione dei servizi del SSN” of the Università di Roma “Tor Vergata” for useful discussions at an early stage of this work.

## Conflict of interest

The authors declare that they have no conflict of interest.

## References

* (1) Aboueljinane, L., Sahin, E., Jemai, Z.: A review on simulation models applied to emergency medical service operations. Computers & Industrial Engineering 66(4), 734–750 (2013)
* (2) Ahalt, V., Argon, N., Strickler, J., Mehrotra, A.: Comparison of emergency department crowding scores: a discrete-event simulation approach. Health Care Management Science 21, 144–155 (2018)
* (3) Ahmed, M.A., Alkhamis, T.M.: Simulation optimization for an emergency department healthcare unit in Kuwait. European Journal of Operational Research 198(3), 936–942 (2009)
* (4) Almagooshi, S.: Simulation modelling in healthcare: Challenges and trends.
Procedia Manufacturing 3, 301–307 (2015)
* (5) Aringhieri, R., Bonetta, G., Duma, D.: Reducing overcrowding at the emergency department through a different physician and nurse shift organisation: a case study. In: P. Daniele, L. Scrimali (eds.) New Trends in Emergency Complex Real Life Problems, _AIRO Springer Series_ , vol. 1, pp. 43–53. Springer Nature Switzerland (2018)
* (6) Aringhieri, R., Bruni, M., Khodaparasti, S., van Essen, J.: Emergency medical services and beyond: Addressing new challenges through a wide literature review. Computers & Operations Research 78, 349–368 (2017)
* (7) Aroua, A., Abdulnour, G.: Optimization of the emergency department in hospitals using simulation and experimental design: Case study. 2017 Winter Simulation Conference (WSC), pp. 4511–4513 (2017)
* (8) Chanchaichujit, J., Tan, A., Meng, F., Eaimkhong, S.: Optimization, Simulation and Predictive Analytics in Healthcare. In: Healthcare 4.0 – Next Generation Processes with Latest Technologies, pp. 95–121. Palgrave Pivot (2019)
* (9) Daldoul, D., Nouaouri, I., Bouchriha, H., Allaoui, H.: A stochastic model to minimize patient waiting time in an emergency department. Operations Research for Health Care 18, 16–25 (2018). EURO 2016 – New Advances in Health Care Applications
* (10) Diefenbach, M., Kozan, E.: Effects of bed configurations at a hospital emergency department. Journal of Simulation 5(1), 44–57 (2011)
* (11) Gilboy, N., Tanabe, T., Travers, D., Rosenau, A.: Emergency Severity Index (ESI): A Triage Tool for Emergency Department Care, Version 4, Implementation Handbook, 2012 edn. Agency for Healthcare Research and Quality, Rockville, MD (2011). AHRQ Publication No. 12-0014
* (12) Granja, C., Almada-Lobo, B., Janela, F., Seabra, J., Mendes, A.: An optimization based on simulation approach to the patient admission scheduling problem using a linear programming algorithm.
Journal of Biomedical Informatics 52, 427–437 (2014)
* (13) Gul, M., Guneri, A.: Simulation modelling of a patient surge in an emergency department under disaster conditions. Croatian Operational Research Review 6, 429–443 (2015)
* (14) Gul, M., Guneri, A.F.: A comprehensive review of emergency department simulation applications for normal and disaster conditions. Computers & Industrial Engineering 83(C), 327–344 (2015)
* (15) Guo, H., Gao, S., Tsui, K., Niu, T.: Simulation optimization for medical staff configuration at emergency department in Hong Kong. IEEE Transactions on Automation Science and Engineering 14(4), 1655–1665 (2017)
* (16) Guo, H., Goldsman, D., Tsui, K.L., Zhou, Y., Wong, S.Y.: Using simulation and optimisation to characterise durations of emergency department service times with incomplete data. International Journal of Production Research 54(21), 6494–6511 (2016)
* (17) Hoot, N., Aronsky, D.: Systematic review of emergency department crowding: causes, effects, and solutions. Annals of Emergency Medicine 52(2), 126–136 (2008)
* (18) Hoot, N.R., Zhou, C.H., Jones, I., Aronsky, D.: Measuring and forecasting emergency department crowding in real time. Annals of Emergency Medicine 49(6), 747–755 (2007)
* (19) Joshi, V., Lim, C., Teng, S.G.: Simulation study: Improvement for non-urgent patient processes in the emergency department. Engineering Management Journal 28(3), 145–157 (2016)
* (20) Kadri, F., Chaabane, S., Tahon, C.: A simulation-based decision support system to prevent and predict strain situations in emergency department systems. Simulation Modelling Practice and Theory 32, 42–52 (2014)
* (21) Kaushal, A., Zhao, Y., Peng, Q., Strome, T., Weldon, E., Zhang, M., Chochinov, A.: Evaluation of fast track strategies using agent-based simulation modeling to reduce waiting time in a hospital emergency department. Socio-Economic Planning Sciences 50, 18–31 (2015)
* (22) Kelton, W., Sadowski, R., Zupick, N.: Simulation with Arena, 6th edn.
McGraw–Hill Professional (2014)
* (23) Kuo, Y.H., Leung, J.M., Graham, C.A., Tsoi, K.K., Meng, H.M.: Using simulation to assess the impacts of the adoption of a fast-track system for hospital emergency services. Journal of Advanced Mechanical Design, Systems, and Manufacturing 12(3), 1–11 (2018)
* (24) Kuo, Y.H., Rado, O., Lupia, B., Leung, J.M.Y., Graham, C.A.: Improving the efficiency of a hospital emergency department: a simulation study with indirectly imputed service-time distributions. Flexible Services and Manufacturing Journal 28(1), 120–147 (2016)
* (25) Law, A.: Simulation Modeling and Analysis, 5th edn. McGraw-Hill (2015)
* (26) Liu, Z., Cabrera, E., Taboada, M., Epelde, F., Rexachs, D., Luque, E.: Quantitative evaluation of decision effects in the management of emergency department problems. Procedia Computer Science 51, 433–442 (2015)
* (27) Lucidi, S., Maurici, M., Paulon, L., Rinaldi, F., Roma, M.: A derivative-free approach for a simulation-based optimization problem in healthcare. Optimization Letters 10, 219–235 (2016)
* (28) Lucidi, S., Maurici, M., Paulon, L., Rinaldi, F., Roma, M.: A simulation-based multiobjective optimization approach for health care service management. IEEE Transactions on Automation Science and Engineering 13(4), 1480–1491 (2016)
* (29) Presidenza del Consiglio dei Ministri: Accordo Stato Regioni, Riorganizzazione del sistema di emergenza urgenza in rapporto alla continuità assistenziale. Rep. Atti n. 36/CSR (2013)
* (30) Nahhas, A., Alwadi, A., Reggelin, T.: Simulation and the emergency department overcrowding problem. Procedia Engineering 178, 368–376 (2017)
* (31) Paul, S.A., Reddy, M.C., De Flitch, C.J.: A systematic review of simulation studies investigating emergency department overcrowding. Simulation 86(8-9), 559–571 (2010)
* (32) Rado, O., Lupia, B., Leung, J.M.Y., Kuo, Y.H., Graham, C.A.: Using simulation to analyze patient flows in a hospital emergency department in Hong Kong. In: A. Matta, J. Li, E. Sahin, E.
Lanzarone, J. Fowler (eds.) Proceedings of the International Conference on Health Care Systems Engineering, pp. 289–301. Springer International Publishing, Cham (2014)
* (33) Reid, P., Compton, W., Grossman, J., Fanjiang, G.: Building a Better Delivery System: A New Engineering/Health Care Partnership. The National Academies Press, Washington, D.C. (2005)
* (34) Rockwell Automation: Arena User’s Guide (2010). Allen–Bradley, Rockwell Software
* (35) Ministero della Salute: Documento di proposta di aggiornamento delle linee guida sul triage intraospedaliero. G.U. Serie Generale, n. 285, 07 dicembre 2001
* (36) Taboada, M., Cabrera, E., Iglesias, M.L., Epelde, F., Luque, E.: An agent-based decision support system for hospitals emergency departments. Procedia Computer Science 4, 1870–1879 (2011). Proceedings of the International Conference on Computational Science, ICCS 2011
* (37) Wang, L.: An agent-based simulation for workflow in emergency department. In: 2009 Systems and Information Engineering Design Symposium, pp. 19–23 (2009)
* (38) Weiss, S., Derlet, R., Arndahl, J., Ernst, A., Richards, J., Fernández-Frankelton, M., Schwab, R., Stair, T., Vicellio, P., Levy, D., Brautigan, M., Johnson, A., Nick, T.: Estimating the degree of emergency department overcrowding in academic medical centers: Results of the National ED Overcrowding Study (NEDOCS). Academic Emergency Medicine 11(1), 38–50 (2004)
* (39) Weiss, S., Ernst, A.A., Nick, T.G.: Comparison of the national emergency department overcrowding scale and the emergency department work index for quantifying emergency department crowding. Academic Emergency Medicine 13(5), 513–518 (2006)
* (40) Whitt, W., Zhang, X.: A data-driven model of an emergency department.
Operations Research for Health Care 12, 1–15 (2017)
* (41) Wong, Z.S.Y., Lit, A.C.H., Leung, S.Y., Tsui, K.L., Chin, K.S.: A discrete-event simulation study for emergency room capacity management in a Hong Kong hospital. 2016 Winter Simulation Conference (WSC), pp. 1970–1981 (2016)
* (42) Xia, N., Sharman, R., Rao, H.R., Dutta, S.: A simulation-based study for managing hospital emergency department’s capacity in extreme events. International Journal of Business Excellence 5, 140–154 (2012)
* (43) Zeinali, F., Mahootchi, M., Sepehri, M.: Resource planning in the emergency departments: a simulation-based metamodeling approach. Simulation Modelling Practice and Theory 53, 123–138 (2015)
* (44) Zhang, H., Best, T., Chivu, A., Meltzer, D.: Simulation-based optimization to improve hospital patient assignment to physicians and clinical units. Health Care Management Science (2019)
# Peeler: Profiling Kernel-Level Events to Detect Ransomware

Muhammad Ejaz Ahmed, Hyoungshick Kim, Seyit Camtepe, and Surya Nepal M. E. Ahmed, S. Camtepe, and S. Nepal are with Data61 CSIRO, Sydney, Australia (e-mail: {ejaz.ahmed, seyit.camtepe, surya.nepal}@data61.csiro.au). H. Kim is with the College of Software, Sungkyunkwan University (SKKU), Suwon, Korea (e-mail<EMAIL_ADDRESS>

###### Abstract

Ransomware is a growing threat that typically operates by either encrypting a victim’s files or locking a victim’s computer until the victim pays a ransom. However, it is still challenging to detect such malware in a timely manner with existing traditional malware detection techniques. In this paper, we present a novel ransomware detection system, called “Peeler” (Profiling kErnEl-Level Events to detect Ransomware). Peeler deviates from signatures for individual ransomware samples and relies on common and generic characteristics of ransomware exhibited at the kernel level. Analyzing diverse ransomware families, we observed ransomware’s inherent behavioral characteristics such as stealth operations performed before the attack, file I/O request patterns, process spawning, and correlations among kernel-level events. Based on those characteristics, we developed Peeler, which continuously monitors a target system’s kernel events and detects ransomware attacks on the system. Our experimental results show that Peeler achieves more than a 99% detection rate with a 0.58% false-positive rate against 43 distinct ransomware families, containing samples from both crypto and screen-locker types of ransomware. For crypto ransomware, Peeler detects it promptly after only one file is lost (within 115 milliseconds on average). Peeler utilizes around 4.9% of CPU time with only 9.8 MB of memory under normal workload conditions. Our analysis demonstrates that Peeler can efficiently detect diverse malware families by monitoring their kernel-level events.
###### Index Terms:

Ransomware detection, Crypto ransomware, Screen-locker, Malware behavior analysis, Machine learning

## I Introduction

Ransomware is a class of malware that has significantly impacted enterprises and consumers. The goal of this type of malware is to obtain financial gain by holding a victim’s system or resources hostage, either by encrypting the victim’s files (called crypto ransomware) or by locking the victim’s desktop screen (termed screen-locker ransomware). Recent ransomware statistics show that in the first three quarters of 2019, 151.9 million ransomware attacks were launched targeting enterprises and consumers [1, 2]. The average payment to release files to the victims spiked to $84,116 in the last quarter of 2019, more than double what it was in the previous quarter [3]. In 2019, the U.S. alone was hit by an unprecedented barrage of ransomware attacks that impacted more than 966 government agencies, educational institutions, and healthcare providers at a potential cost of around $7.5 billion [4]. As such, ransomware represents one of the most visible threats to enterprises as well as users. Therefore, a large number of countermeasures have recently been proposed to fight against ransomware: machine learning models (e.g., [5, 6, 7, 8, 9]), use of decoy files (e.g., [10, 11, 7]), use of a key escrow mechanism (e.g., [12]), and file I/O pattern profiling (e.g., [13, 14, 15, 16, 17, 9, 18, 7, 19]). However, these techniques have at least one of the following limitations: 1) Localized visibility: existing approaches with limited localized visibility struggle to detect certain types of malicious activities. For example, ransomware may first attempt to delete Windows backup files or disable anti-malware or real-time monitoring services on the victim’s machine to remain undetected before actually starting to encrypt user files.
Techniques relying on file I/O request patterns, cryptographic primitives, or network traffic would fail to detect these activities, making it challenging to recover from the attack. If suspicious activities are detected in a timely manner, ransomware can be detected without data loss. 2) No guarantee on data loss: most approaches do not provide any recovery or minimal-data-loss guarantees, i.e., detection comes late, after several files have already been encrypted or the computer has been locked [14, 10, 20, 15, 9]. Scaife et al. [15] proposed CryptoDrop, which is based on the premise that ransomware aggressively encrypts user files. Their approach is able to detect a ransomware attack after a median of ten file losses. REDFISH [13] detects ransomware activity after ten files are lost, with a detection time of around 20 seconds. Similarly, RWGuard [10] took 8.87 seconds on average to detect all malicious processes spawned by ransomware. 3) Not flexible: most crypto ransomware detection approaches [10, 13, 15, 20] are built upon either file I/O patterns or cryptographic primitives; however, these techniques are not applicable to screen-locker ransomware, which requires a separate detection module. For instance, in addition to detecting crypto ransomware (based on file I/O request patterns), Kharraz et al. [14] developed a computer-vision-based machine learning module just to detect screen-locker ransomware. Similarly, PayBreak [12] can decrypt files only for ransomware families that use system-provided crypto functions. Such inflexibilities make it difficult to extend existing approaches to different types of malware. Moreover, collecting a diverse set of system events efficiently and consuming them in real time is in itself a challenging task [20]. For instance, Windows OS audit policies (auditpol.exe) allow users to enable several types of system events, but those are only a subset of all the telemetry available in Windows systems, limiting their visibility [21].
There are several other types of data sources available, but they cannot be easily enabled using an audit policy. Similarly, there are certain APIs (e.g., the Win32 Event Tracing API, System.Diagnostics.Eventing, TraceEvent) that allow logging system events interactively, but they incur higher CPU and memory overheads [20]. For an effective ransomware detection scheme, a diverse set of data sources should be easily accessible for analysis with minimum overhead. To address the shortcomings identified above, we develop a novel ransomware detection system, called “Peeler” (Profiling kErnEl-Level Events to detect Ransomware), which tracks ransomware activities from the execution of a ransomware binary on a victim’s computer to the display of a ransom note on the victim’s screen. For a successful attack, ransomware first aims to remain undetected by anti-malware services on the victim’s computer. To achieve stealthiness, it may disable real-time monitoring or archive scanning, or even delete its own binary from the disk. After that, it launches the actual attack – encryption of user files or locking of the user’s desktop screen – and, finally, payment guidance activities, e.g., changing the background display picture to contain a ransom payment note. Peeler exploits several behavioral characteristics observed during the execution of ransomware through a set of methodologies that leverage rules, a file I/O pattern matcher, and machine learning models. Peeler provides comprehensive visibility via broad and deep monitoring of current and historical security configurations and events associated with sensitive operations. Unlike existing approaches that rely on APIs or command-line tools to collect system events, Peeler extracts the required system events from the native layer of Windows OS using the Event Tracing for Windows (ETW) framework. As a result, Peeler performs significantly better than existing APIs and tools.
Moreover, Peeler can intercept events from system-wide Windows OS providers (e.g., there are more than 1,100 providers in Windows 10), providing greater visibility. Finally, the generated system events can be consumed in real time for early detection. Our key contributions are summarized below:

* • Accurate: We develop a set of methodologies based on a comprehensive analysis of ransomware, from binary execution on the victim’s computer to the display of a ransom note on the victim’s screen, and design a fast and highly accurate ransomware detection system, called Peeler. It relies on suspicious commands, file I/O event patterns, and two classification models with 13 system behavior features to achieve both fast detection and high detection accuracy. Overall, Peeler achieves 99.52% detection accuracy with a false positive rate of only 0.58% against 43 ransomware families – the largest dataset so far in terms of diversity. Moreover, Peeler achieves 100% and 99.5% detection accuracy against crypto and screen-locker ransomware, respectively.
* • Fast: We measure the detection time of Peeler against both crypto and screen-locker ransomware families. Peeler took 115.3 milliseconds on average to detect crypto ransomware. Compared to the best existing crypto ransomware detection solution, Peeler is about five times faster. In addition, Peeler took 16.4 seconds on average to detect screen-locker ransomware, while the time screen-locker ransomware needs to completely lock the victim’s desktop screen is 302.8 seconds on average, demonstrating that Peeler can detect screen-locker ransomware at an early stage, before the victim’s system is locked.
* • Transferable: We evaluate Peeler (without new training) against 86 samples from 26 new and unseen ransomware families, including crypto, screen-locker, and general malware samples. Peeler achieved more than 95% detection accuracy on new and unseen ransomware samples.
Additionally, Peeler also detected 9 out of 16 general malware samples.

## II Key characteristics to detect ransomware

Typically, a ransomware attack consists of three stages: perform stealth operations to remain undetected, launch the actual attack, and display a ransom note after a successful attack. In the first stage, all actions required to perform a successful ransomware attack without being detected are executed. For instance, ransomware may attempt to change Windows OS configurations (e.g., disable real-time monitoring) to remain undetected by anti-malware services. In the second stage, the actual attack is launched; e.g., adversaries delete Windows OS shadow copies to prevent the victim from restoring the Windows OS to a previous version. In the final stage, a ransom note, along with guidance, is provided to the victim.

Peeler exploits the differences in system behavioral characteristics between ransomware and benign applications. We collected kernel-level events generated by diverse ransomware samples and benign applications and analyzed those events to gain insights into the unique and inherent system behaviors of ransomware. This section explores those characteristics in detail.

TABLE I: Set of actions required to perform attacks.

| Goal | Action |
|---|---|
| Stealth | Delete the ransomware executable from the disk. |
| | Kill the task responsible for running the malware. |
| | Set boot entry elements to ignore errors on a failed boot or failed checkpoint. |
| | Modify registry values to hide files and filename extensions. |
| | Turn off User Account Control (UAC) to remain undetected. |
| | Disable archive scanning and real-time monitoring by anti-malware services. |
| Attack | Delete Windows OS shadow copies to prevent the victim from restoring Windows OS. |
| | Disable the automatic repair feature in Windows OS. |
| | Start executing malicious tasks, e.g., encryption of user files, with the highest privileges. |
| | Launch encoded PowerShell commands to change configurations. |
| | Stop or bypass detection by a number of popular anti-malware services, including Windows OS Defender. |
| | Deny the specific rights to delete child folders. |
| Payment guidance | Change the Windows OS wallpaper to inform the victim about the attack. |
| | Change the background display picture to one containing ransom payment notes. |
| | Open a read-me document, e.g., in Notepad, containing all information about the ransom payment. |

### II-A Malicious commands

To maximize the impact of the encryption, ransomware performs malicious activities with the following three goals.

#### II-A1 Stealthiness

Ransomware tries to remain undetected by anti-malware services or Windows Defender running on the victim's computer. For example, it may disable the following services: runtime monitoring, archive scanning, and automatic startup repair (see Table I). Moreover, it may delete the ransomware executable from the disk, stop all anti-malware services, or turn off User Account Control (UAC).

#### II-A2 Infeasibility of recovery

Ransomware deletes the shadow copies and the system backup/restore data automatically created by Windows OS. For example, vssadmin is a default Windows OS utility that controls volume shadow copies of user files on a given computer. These shadow copies are regularly used as recovery points; they can be leveraged to re-establish or return a file to a previous state if it is destroyed or lost for some reason. An adversary exploits the vssadmin utility, by executing the command vssadmin.exe delete shadows /all /quiet, to delete Windows OS shadow copies, making it impossible to restore the system to its previous state. Of course, there are also other ways to delete shadow copies, such as via PowerShell, wmic, etc.
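Commands like these can be flagged with simple pattern rules over a process's command line, which is essentially what Peeler's malicious command detector (Section III-C) does. A minimal Python sketch, with a small illustrative rule set modeled on Table I (Peeler's actual rule set in Appendix -C is larger and Windows-specific):

```python
import re

# Illustrative rules modeled on Table I's actions; not Peeler's actual rule set.
MALICIOUS_COMMAND_RULES = [
    re.compile(r"\bvssadmin(\.exe)?\s+delete\s+shadows", re.IGNORECASE),
    re.compile(r"\bnet(\.exe)?\s+stop\s+\S+", re.IGNORECASE),
    re.compile(r"\btaskkill(\.exe)?\s+/IM\s+\S+", re.IGNORECASE),
    re.compile(r"\bbcdedit(\.exe)?\s+/set\s+\S+\s+bootstatuspolicy\s+ignoreallfailures",
               re.IGNORECASE),
]

def is_malicious_command(command_line: str) -> bool:
    """Return True if a Process Start event's command line matches any rule."""
    return any(rule.search(command_line) for rule in MALICIOUS_COMMAND_RULES)
```

A matched rule identifies both the suspicious action and the offending process, so the process can be halted immediately.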
The ransomware can leverage the Windows OS program net.exe to stop or bypass detection by several popular antivirus products, in addition to defeating Windows Defender; e.g., net.exe stop avpsus /y stops the Kaspersky Seamless Update Service process, and this command is used by the Thanos ransomware family. Moreover, ransomware sometimes tries to kill the processes related to specific programs, such as SQL Server, to initiate the encryption of the user files on which these programs were operating. After encrypting user files, ransomware shows a ransom note and payment guidelines.

#### II-A3 Post-attack guidance on ransom payment

Finally, a ransom note is displayed along with a read-me document to help the victim pay the ransom and restore the system files. To achieve this, the ransomware writer typically adds a registry key to the autorun path to show the ransom-note window. Table I lists the set of actions performed by typical ransomware to achieve the respective goals. The actual commands used by real-world ransomware to perform the actions listed in Table I are given in Appendix -C. By intercepting such malicious commands, ransomware can be effectively detected. For instance, Hendler et al. [22, 23] proposed deep learning approaches to detect malicious PowerShell commands. However, in practice, malicious commands can be launched not only by PowerShell but also from legitimate Windows OS utilities, such as vssadmin, wmic, etc. We consider all malicious command sources rather than relying on malicious commands executed from a specific program.

### II-B File I/O patterns

Our analysis of the collected events reveals that crypto ransomware samples typically encrypt a user's file by performing the following four steps: access the file (access), read the content of the file (read), write the encrypted content to temporary memory or a new file (write), and overwrite/delete the user's original file (overwrite/delete).
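This access/read/write/overwrite sequence can be checked mechanically over a file's ordered event types. A simplified Python sketch of the idea (the single-letter encoding and the regular expression are our own illustration, not Peeler's actual $RE_{crypt}$, which covers the pattern variants described next):

```python
import re

# Our own single-letter encoding of file I/O event types, for illustration only:
# C = FileCreate, R = Read, W = Write, N = Rename, D = FileDelete
ENCODE = {"FileCreate": "C", "Read": "R", "Write": "W",
          "Rename": "N", "FileDelete": "D"}

# access -> read(s) -> write(s) -> overwrite/delete (a rename or delete at the end)
FOUR_STEP_PATTERN = re.compile(r"CR+W+[ND]")

def looks_like_encryption(event_types):
    """True if the ordered event types contain the four-step encryption sequence."""
    seq = "".join(ENCODE.get(t, "") for t in event_types)
    return FOUR_STEP_PATTERN.search(seq) is not None
```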
For example, Figure 1 shows a sequence of file I/O events from a variant of Cerber. These events align with the observed four file I/O steps as follows: 1) in the access step, the ransomware sample accesses a file (D_186.wav) with the FileCreate event; 2) in the read step, the ransomware sample reads the content of D_186.wav with the two Read events; 3) in the write step, the ransomware sample writes the encrypted content to the same file with the two Write events; and 4) in the overwrite step, the file is finally renamed with the Rename, FileDelete, and FileCreate events. As shown in Figure 1, the FileDelete operation removes the content of the original file D_186.wav and FileCreate assigns a new name 2O8nlobpEl.8cbe. We note that the File Key remains the same for all events even though the original file's name and extension are changed.

Figure 1: File I/O events generated by Cerber ransomware.

We observe that most crypto ransomware samples follow similar steps to lock a victim's files. However, the locking strategy adopted by each ransomware sample can differ. For example, some families lock a file without creating a temporary file, whereas others lock a file via a temporary file; some delete the original file, whereas others overwrite it. We performed numerous trials and experiments and observed four _generic_ file I/O patterns that characterize the behaviors of most crypto ransomware families. We briefly summarize our findings as follows.

1. Memory-to-File with Post-Overwrite: As shown in Figure 1, some crypto ransomware samples directly overwrite a user's file with its encrypted data without creating a new file. The observed I/O event pattern is to overwrite the encrypted data to the original file and then rename the file. This pattern can be observed in the following ransomware families: Cerber, Keypass, TeslaCrypt, and Gandcrab.
2.
Memory-to-File with Pre-Overwrite: This file I/O event pattern is similar to “Memory-to-File with Post-Overwrite” except that the original file is first renamed, and then the encrypted data is overwritten to that file. This pattern can be observed in samples from the Locky ransomware family. Figure 2 shows a sequence of file I/O events generated from a sample in the Locky family. We can see that the original file is first renamed.
3. File-to-File with Delete: Some crypto ransomware samples create a new file and copy the encrypted data of the original file to the new file instead of overwriting the original file itself. After completing the copy process, the original file is finally deleted. This pattern can be observed in the following families: InfinityCrypt, Dharma, Malevich, Sage, and Syrk. Figure 9 in Appendix -A shows a sequence of file I/O events generated from a sample in the InfinityCrypt family. We can see that the ransomware sample reads the file with the file key (FFFFB203AFD146F0) and writes it (in encrypted form) to the file with the file key (FFFFB203AFD14160). After copying all the file content to the new file, the original file with the file key (FFFFB203AFD146F0) is deleted.
4. File-to-File with Rename and Delete: This file I/O event pattern is similar to “File-to-File with Delete” except that the new file is renamed after the encrypted data of the original file is copied to it. This pattern can be observed in samples from the WannaCry ransomware family. Figure 10 in Appendix -A shows a sequence of file I/O events generated from a sample in the WannaCry family. After copying all the file content to the new file, the file is renamed from nasa.txt.WNCRYT to nasa.txt.WNCRY.

Figure 2: File I/O events generated by Locky ransomware.

We find that most crypto ransomware families follow one of the four file I/O patterns described above.

### II-C Application process tree

Applications can spawn one or more processes if needed.
If a process in an application creates another process, the creator process is called the parent process, and the created process is called the child process. We observe that some ransomware families spawn many child processes in a distinctive pattern compared to benign applications. For example, Figure 3 shows a snapshot of the application process tree of VirLock ransomware during its execution. VirLock is self-reproducing ransomware that not only locks a victim's screen but also infects her files. Both behaviors – self-reproducing and infecting files – were observed in the application process tree of VirLock. We can see locker.exe is replicated at level 3 of the tree.

Figure 3: Application process tree of VirLock.

To infect a victim's files, VirLock performs stealthy malicious activities to deceive victims. For instance, while creating files on the victim's computer, VirLock modifies the registry in the following ways: 1) it disables Windows OS User Account Control (UAC), a feature designed to prevent unauthorized changes to desktop computers; 2) it hides all files created on the victim's desktop; and 3) it hides all created file extensions. As shown in Figure 3, three child processes (reg.exe in red color) perform the actual registry modifications. The corresponding command-line execution is shown at the bottom of Figure 3. In contrast, most benign applications spawn far fewer processes than screen-locker ransomware. Figure 4 shows the application process trees of six benign applications (Chrome, Adobe Acrobat Reader, MS Visual Studio 2019, MS Office 365 ProPlus, Spotify, and MS Outlook).

Figure 4: Application process trees of benign applications.

Our observations on the behaviors of ransomware families show that more than 60% of samples spawn multiple processes. For instance, a VirLock ransomware sample spawns 44 processes on average.
On the other hand, our observations on more than 50 of the most popular benign Microsoft applications show that they spawn 16 processes on average, significantly fewer than a screen-locker ransomware sample.

### II-D Correlations between system events

We analyzed the system events generated by ransomware samples and observed strong correlations between certain events during ransomware operation. For example, every file that is read must also be written (encrypted), which naturally produces a correlation between Read and Write events. Similar correlations appear among events collected from different providers such as File, Process, Image, and Thread, because ransomware samples generate a large number of file Read and Write events to perform their malicious tasks. Such relationships between certain events can be quantified using the correlation coefficients of the events (see Table II).

TABLE II: Correlation coefficients for some events.

| Events pair | Ransomware | Benign applications |
|---|---|---|
| (File Read, File Write) | 0.9433 | 0.3500 |
| (Process End, Image Unload) | 0.9451 | 0.7174 |
| (Process Start, Image Load) | 0.9476 | 0.7397 |
| (Thread Start, Thread End) | 0.9560 | 0.6585 |

For example, a crypto ransomware sample generates Read and Write events regularly. As presented in Table II, there is a strong correlation between the number of Read events and the number of Write events. Such a correlation does not generally appear in benign applications' Read and Write requests. Similarly, during ransomware execution, we observe correlations between the number of process Start events and image Load events, between the number of process End events and image Unload events, and between the number of thread Start and thread End events. These correlation coefficients were computed from the analysis of 206 ransomware samples and the 50 most popular benign applications.
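The coefficients in Table II are ordinary Pearson correlation coefficients computed over paired event counts. A minimal computation sketch (the per-window counts below are illustrative values, not measurements from our dataset):

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length count series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Per-window Read/Write counts: ransomware re-writes nearly everything it reads,
# so the two series track each other closely (illustrative numbers).
reads = [120, 340, 290, 510, 480, 600]
writes = [118, 335, 295, 505, 490, 598]
r = pearson(reads, writes)  # close to 1 for ransomware-like behavior
```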
Figure 5 shows the correlations among three pairs of events (Read and Write, Start and Load, End and Unload). We clearly observe strong correlations for ransomware compared to benign applications. Therefore, Peeler uses those correlations to detect ransomware.

Figure 5: Correlations among some event types for ransomware and benign applications.

## III System Design

Peeler is designed to minimize the overall damage of ransomware attacks using three different detection modules. Our design principle is to detect ransomware attacks as early as possible, because the number of encrypted files (i.e., the victim's damage) increases if attack detection is delayed. Therefore, Peeler first applies simple pattern matching to detect an attack as soon as the first file is encrypted. We believe that the sacrifice of one file is the best-effort scenario for dynamic analysis, because Peeler leverages file I/O event patterns that are generated only when a file is encrypted by ransomware. To overcome the limitations of existing rule-based detectors, we consider not only specific rules to detect atomic observables (e.g., a known malicious file hash or a specific registry key modification) but also generic rules to identify ransomware's inherent behaviors. In practice, considering both specific and generic rules is essential in designing effective malware detection engines and the defenders' security posture [24]. Based on this principle, Peeler's rules are categorized as follows: 1) key observables such as indicators of compromise (e.g., malicious commands trying to delete the Windows OS shadow copy); and 2) generic system event sequences (the file I/O patterns discussed in Section II-B).
To detect sophisticated ransomware samples that are not caught by the rule-based approach, Peeler additionally adopts two machine learning models, extracting behavioral features from process execution patterns (see Section II-C) and correlations among various system events (see Section III-E2).

### III-A Overview

Peeler has three main ransomware detection modules: 1) the malicious command detector, 2) the file I/O pattern matcher, and 3) the machine learning-based classifier. Figure 6 illustrates the overall design of Peeler. Peeler continuously monitors system events and uses them to perform ransomware detection in real time. The malicious command detector uses pre-defined rules to check whether processes execute malicious commands whose execution is mainly observed in ransomware activities. The file I/O pattern matcher takes _kernel-level_ file I/O events as input and looks for suspicious file I/O patterns that are characteristic of ransomware. Once a pattern that characterizes ransomware behavior is detected, Peeler generates an alert. The machine learning-based classifier module extracts features from the application process tree and the system event providers (Process, Image, File, and Thread). The application process tree features are used to build a multinomial logistic regression model, whereas the system event features are used to build a Support Vector Machine (SVM) model. For detection, the scores from both classification models are fused in an ensemble approach, and then detection is performed.

Figure 6: Overview of Peeler.

Algorithm 1 lists all the key steps required for ransomware detection. A window size $W$ and a set of rules ($R_{s}$) are given as inputs to Algorithm 1.
As shown in Figure 6, the system events monitor collects kernel events and forwards them to the three modules (the malicious command detector, the file I/O pattern matcher, and the machine learning-based classifier) in parallel. There are two differences between these modules: 1) the malicious command detector and the file I/O pattern matcher consume events generated from the Process and File providers to identify malicious commands and suspicious file I/O patterns, respectively, whereas the ML-based classifier utilizes events from all four providers (File, Process, Image, and Thread) and application process tree features to detect both crypto and screen-locker ransomware; and 2) the malicious command detector and the file I/O pattern matcher perform analysis on a per-event basis (Steps 3 and 7, respectively), whereas the ML-based classifier accumulates events for $W$ seconds (Step 11) and then processes them in a batch (Step 12). If no module detects suspicious ransomware activity, Peeler continues monitoring the system (Step 15).

### III-B System events monitor

Peeler provides a module called the _system events monitor_, which relies on the Event Tracing for Windows (ETW) framework (ETW was first introduced in Windows 2000 and is now built into all Windows OS versions). ETW is a built-in, general-purpose logging and diagnostic framework in Windows OS. It is efficient (high speed and low overhead), flexible (events can be consumed in real time or logged to a file), and provides great visibility into the system: it allows registering to more than 1,100 subsystem providers [25] to receive and consume events. The ETW framework can be used through both APIs and command-line tools and applications. Using ETW-based command-line tools and applications is more common than using the ETW APIs (e.g., TraceEvent, System.Diagnostic.Eventing, Event Tracing) to view system events [20].
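The event routing of Algorithm 1 (per-event checks for the two rule-based detectors, $W$-second batches for the ML classifier) can be sketched as follows. The detector callables and the event-dict fields are hypothetical stand-ins for Peeler's components:

```python
def dispatch(event_stream, command_detector, file_matcher, ml_classifier, window=5):
    """Route each event per-event to the command detector and file I/O matcher,
    while accumulating all events into window-sized batches for the ML classifier.
    Yields (module, payload) alerts; the helper callables are hypothetical."""
    batch, window_start = [], None
    for ev in event_stream:
        # Per-event path 1: malicious command detector (Process Start events).
        if ev["provider"] == "Process" and ev["etype"] == "Start":
            if command_detector(ev.get("command_line", "")):
                yield ("command", ev["pid"])
        # Per-event path 2: file I/O pattern matcher (File provider events).
        if ev["provider"] == "File" and file_matcher(ev):
            yield ("file_io", ev["pid"])
        # Batched path: accumulate events for the ML-based classifier.
        if window_start is None:
            window_start = ev["timestamp"]
        batch.append(ev)
        if ev["timestamp"] - window_start >= window:
            if ml_classifier(batch):
                yield ("ml", [e["pid"] for e in batch])
            batch, window_start = [], None
```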
Algorithm 1 Overall Process of Peeler

1: Input: $W$ and $R_{s}$
2: while true do
3:  System events monitor receives an $event$ from ETW.
4:  /* Input events to both detectors in parallel. */
5:  if event type is Process Start then
6:   Extract command from commandLine and match against rules in $R_{s}$.
7:   if command matched to a rule in $R_{s}$ then
8:    Raise the alert and halt the process using PID.
9:  if event is from File provider then
10:   CryptoMatcher = FileIOPatternMatcher(event, $RE_{crypt}$)
11:   if CryptoMatcher then
12:    Raise the alert and halt the process using PID.
13:  IncomingEvents = Accumulate all events in a $W$-seconds window.
14:  MLClassifierLabel = ML-basedClassifier(IncomingEvents)
15:  if MLClassifierLabel then
16:   Raise the alert and halt the process using PID.
17:  else Keep on monitoring the system.

Existing malware detection techniques use either built-in tools such as Logman, TraceRpt, and Event Viewer, or third-party applications such as Xperf, PerfView, and Microsoft Message Analyzer. However, the above-mentioned tools suffer from at least one of the following issues: 1) they fail to parse events in real time, i.e., events are first logged to disk and then parsed; 2) they expose only a subset of all the telemetry available in Windows OS [21]; and 3) they incur high CPU and memory overhead. Table III compares existing ETW-based approaches for event collection with Peeler's system events monitor in terms of these performance metrics.

TABLE III: Tools and applications using ETW framework.

| Metric | Command line tools and applications | ETW APIs | Peeler |
|---|---|---|---|
| Visibility | limited | limited | greater |
| Real-time | $\times$ | ✓ | ✓ |
| Lightweight | ✓ | $\times$ | ✓ |
| Native | $\times$ | $\times$ | ✓ |
| Static support | ✓ | ✓ | ✓ |

We designed the system events monitor module based on ETW. Our module is lightweight because it directly interacts with the native layer and filters the system events.
In terms of visibility, Peeler extracts events from the following providers: Process, Image, File, and Thread. A unified data model (UDM) containing a set of events is extracted and then consumed to detect ransomware. The data obtained by Peeler's system events monitor take the form of a continuous sequence of events $t_{i}$. An event is represented as

$t_{i}=<PID, TID, Prov., EType, E_{timestamp}, E_{attrs}>,$

where PID is a process identifier, TID is a thread identifier corresponding to the process PID, Prov. is the provider name, EType is the event name, $E_{timestamp}$ is the time of event occurrence, and $E_{attrs}$ is the set of attributes of the event. Each event has a schema describing the type of data contained in the event payload. The overall data schema is listed in Table IV. To implement the system events monitor in Peeler, we used the open-source project krabsetw [26], a C++ library that simplifies interactions with ETW. We modified krabsetw at both the API and native layers to collect only the events needed by Peeler.

TABLE IV: Providers and events used in Peeler.

| Provider | Event | Common attributes | Provider-specific event attributes |
|---|---|---|---|
| Process | Start, End | PID, TID, Prov., Event, Timestamp | SessionId, ParentId, ImageFileName, CommandLine |
| File | Read, Write | PID, TID, Prov., Event, Timestamp | FileKey, FileObject, IoSize |
| File | Rename, Delete | PID, TID, Prov., Event, Timestamp | FileKey, FileObject |
| File | FileCreate, FileDelete | PID, TID, Prov., Event, Timestamp | FileObject, FileName |
| Thread | Start, End | PID, TID, Prov., Event, Timestamp | ParentId |
| Image | Load, Unload | PID, TID, Prov., Event, Timestamp | ImageSize, FileName |

### III-C Malicious commands detector

Peeler uses a component called the _malicious command detector_ to filter suspicious activities conducted by ransomware, using a database of the malicious commands needed to perform ransomware's activities (see Table I).
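The unified event record $t_{i}$ and the schema of Table IV map naturally onto a small record type. A sketch (the field names are ours, following Table IV, not Peeler's actual data structures):

```python
from dataclasses import dataclass, field
from typing import Any, Dict

@dataclass
class Event:
    """One ETW event t_i = <PID, TID, Prov., EType, E_timestamp, E_attrs>."""
    pid: int
    tid: int
    provider: str   # "Process" | "File" | "Thread" | "Image"
    etype: str      # e.g. "Start", "Read", "FileCreate", "Load"
    timestamp: float
    attrs: Dict[str, Any] = field(default_factory=dict)

# A file-read event carrying its provider-specific attributes (Table IV).
ev = Event(pid=1234, tid=5678, provider="File", etype="Read",
           timestamp=1_600_000_000.0,
           attrs={"FileKey": "FFFFB203AFD146F0", "IoSize": 4096})
```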
Peeler collects malicious commands from the Process provider's CommandLine attribute (see Table IV). Typically, the CommandLine attribute of a Process event contains the actual command (see example commands in Table XVIII in Appendix -C). For example, the Windows OS utility net.exe can start, stop, pause, or restart any service using the command net.exe stop ServiceName, launched via a script/batch file or the command prompt. If an adversary leverages such commands to stop several anti-malware services, Peeler can detect the process using pre-defined rules. Similarly, the utility taskkill.exe ends one or more tasks or processes; typically, ransomware specifies the image name of the process to be terminated (e.g., taskkill.exe /IM ImageName). The sc.exe utility modifies the value of a service's entries in the registry and the service control manager database; ransomware may use it to disable several defending services on the victim's machine before starting encryption. Peeler uses rules based on utility names and the actions performed in order to detect ransomware infection.

### III-D File I/O pattern matcher

To effectively detect suspicious file I/O patterns, Peeler relies on detecting sequences of suspicious file I/O events, as discussed in Section II-B. Algorithm 2 is composed of three main stages for detecting crypto ransomware. The input to the algorithm is the stream of events from the file I/O provider, together with a set of regular expressions $RE_{crypt}$. An event is specified as event = $<$PID, EType, FileObject, FileName, FileKey$>$. PID and EType refer to the process identifier and event type, respectively. FileObject is a unique key assigned to every file, whereas FileName contains the name of a file with its full path. FileKey is an identifier used to find the file on which an event is performed; for example, the event schema for Read and Write (see Table IV) does not contain the FileName attribute.
To obtain the file name, the FileKey attribute of the Read/Write event is matched with the FileObject attribute of the FileCreate event.

#### III-D1 Preparing the data

The continuous stream of incoming file I/O events is first processed such that each event is added to a separate list based on the file it operates on. Whenever a user's file is accessed (i.e., a FileCreate event is received), an events list FileEventsList is created and the corresponding event (FileCreate) with its attributes is added to that list. All subsequent file I/O operation events (such as Read, Write, Rename, Delete, FileCreate, and FileDelete) on that file are added to its FileEventsList (Steps 1–6 in Algorithm 2). However, it is challenging to accurately add all file I/O events corresponding to a user's file to its respective FileEventsList, because a single file may have multiple FileObject keys and vice versa. For example, as shown in Figure 9, the FileObject keys for Read and Write events on the file D_186.wav are not the same, but the actual file on which the operations are performed is the same. To ensure that all events associated with a file are fully recorded, Peeler leverages both the FileObject and FileName attributes to accurately identify a user's file.

Algorithm 2 FileIOPatternMatcher

1: Input: event = $<$PID, EType, FileObject, FileName, FileKey$>$
2: Output: Benign and Ransomware
3:
4: Stage 1: Preparing the data
5: if EType is ‘FileCreate’ then
6:  if FileObject and FileName are newly observed then
7:   Create events list FileEventsList for the newly observed file.
8:   Add event to FileEventsList.
9: else add event to respective file’s FileEventsList.
10:
11: Stage 2: File I/O patterns detection
12: if the number of unique ETypes in the FileEventsList is equal to or greater than four then
13:  Extract ETypes sequence from FileEventsList.
14:  if ETypes sequence matched to at least one of the file I/O patterns in Section II-B then
15:   Flag the process PID in event.
16:   Ransomware = True
17: else continue
18:
19: Stage 3: Filtering false positives
20: if FilePaths are different in FileEventsList’s events then
21:  Benign = True
22: if PIDs in FileEventsList’s events are not the same then
23:  Benign = True

#### III-D2 File I/O pattern matcher

Suspicious file I/O patterns are detected by analyzing the sequence of events in the list FileEventsList corresponding to a file. If the number of unique ETypes is four or more (Step 6 in Algorithm 2), the events in the list are analyzed for suspicious patterns. Peeler enforces the four-unique-events constraint to reduce unnecessary computational overhead, because at least four unique event types are required to encrypt a user file (see Figures 1–10). If the sequence of incoming events in FileEventsList matches the file I/O patterns in Section II-B, i.e., file access, read, write (of encrypted content), and file renaming operations are performed in order on a user's file, Peeler captures this sequence as evidence of crypto ransomware activity and issues a security warning. We note that Memory-to-File operations can be distinguished from File-to-File operations by using the FileObject keys. When the same file is overwritten, we observe that the FileObject key remains the same for all file I/O events; e.g., the sequence of events to encrypt the file D_186.wav is shown in Figure 1 for Cerber ransomware. For File-to-File operations, multiple files with associated FileObject keys are involved; in this case, the file I/O events from multiple files are accumulated into the same FileEventsList.

#### III-D3 Filtering false positives

We observe that some benign applications may generate ransomware-like file I/O patterns. For example, some benign applications may overwrite the Windows OS Activation Tokens file (tokens.dat), which can lead to false positives because doing so generates file I/O events that look similar to Memory-to-File patterns.
We applied the following two heuristics to reduce the possibility of such false-positive cases (see Stage 3 in Algorithm 2):

* If the FilePaths in FileEventsList’s events do not point to the same file, the sequence is ignored, because the encrypted file must remain in the same location.
* All the file I/O events must be performed by the same process (i.e., the same PID), with the exception of the system and explorer.exe processes. For example, the event lists in Figures 9 and 10 show that the system process (PID = $4$) is involved. Similarly, explorer.exe is a system process that assists other processes in performing tasks.

### III-E Machine learning-based classifier

Peeler uses two machine learning-based (ML-based) classifiers, with application process tree features and system event features, to detect ransomware samples that cannot be detected by the file I/O pattern matcher. Algorithm 3 lists all the steps of the ML-based classifiers. The input to the algorithm is the set of events accumulated over a $W$-second window. In our current implementation, we empirically set $W=5$ to optimize the speed-accuracy tradeoff. Features are extracted to build two machine learning models (Steps 1–4 in Algorithm 3). The built machine learning models are then used to detect ransomware attacks (Steps 5–7 in Algorithm 3).

Algorithm 3 ML-basedClassifier

1: Input: IncomingEvents
2: Output: Benign and Ransomware
3:
4: Stage 1: Feature sets extraction and model building
5: $FV_{MLR}$ = extract application process tree features from IncomingEvents (see Table V).
6: $FV_{SVM}$ = extract system events features from IncomingEvents (see Table V).
7: M1 = Build multinomial logistic regression model with $FV_{MLR}$ feature set.
8: M2 = Build SVM-RBF model with $FV_{SVM}$ feature set.
9:
10: Stage 2: Attack detection
11: Extract feature set vectors for test samples.
12: Test $FV_{MLR}$ and $FV_{SVM}$ features on models M1 and M2, respectively.
13: Fuse the scores from the models and output the class label (either benign or ransomware).

#### III-E1 Building the first classifier with the application process tree features

Windows OS applications typically descend from the parent process explorer.exe; therefore, all applications' processes share explorer.exe as their parent process. As discussed in Section II-C, we observed that several ransomware samples spawn a significantly larger number of child processes than benign applications. Based on this observation, Peeler constructs a machine learning model using the application process tree features. Specifically, Peeler extracts the following features from the application process tree: the number of processes, the number of unique processes, the number of threads created by processes, the maximum depth level of the tree, and the number of leaf nodes in the tree. For the first classifier, using the feature set $FV_{MLR}$ (see Table V), we selected a multinomial logistic regression (MLR) model because we do not assume linear relationships among the features in $FV_{MLR}$ [27], and MLR produces the best performance with those features.

#### III-E2 Building the second classifier with the system event features

As discussed in Section II-D, Peeler leverages events from four providers (File, Process, Image, and Thread) that exhibit causal relationships. In total, the following four pairs of events are used: (Read, Write), (Start, Load), (End, Unload), and (Start, End). To capture these causal relationships simply, Peeler extracts frequency features (see Table V) and trains an SVM model on the feature set $FV_{SVM}$ for classification. We selected an SVM with an RBF kernel because it is lightweight and produces the best accuracy with $FV_{SVM}$.

TABLE V: Feature extraction.
| Feature set | Feature | Model |
|---|---|---|
| $FV_{\text{MLR}}$ | # of processes | MLR |
| | Sum of threads from processes | |
| | Maximum depth level of process tree | |
| | # of leaf nodes | |
| | # of unique process names | |
| $FV_{\text{SVM}}$ | # of process starts | SVM-RBF |
| | # of process ends | |
| | # of DLL image loads | |
| | # of DLL image unloads | |
| | # of file reads | |
| | # of file writes | |
| | # of thread starts | |
| | # of thread ends | |

#### III-E3 Attack detection

Peeler uses two classification models (MLR and SVM-RBF) and decides the final classification outcome by fusing their scores. We note that MLR and SVM-RBF are constructed with different feature sets: MLR is trained with $FV_{\text{MLR}}$, while SVM-RBF is trained with $FV_{\text{SVM}}$ (see Table V). The scores from the two models are fused by taking their average for detection.

## IV Dataset collection

We aimed to collect a ransomware dataset containing diverse ransomware families rather than similar ransomware variants. Therefore, we collected ransomware samples from several sources, including VirusTotal [28], a malware repository [29], malwares [30], and other online communities. We also collected benign applications exhibiting at least one of the following characteristics: 1) encryption or compression capabilities, 2) spawning multiple processes, and 3) being among the most commonly used benign applications (see Appendix -B), to evaluate Peeler's robustness against false positives.

### IV-A Ransomware

We collected 28,034 ransomware samples from VirusTotal [28], MalwareBazaar [31], a malware repository [29], malwares [30], and other online communities. However, we excluded many malware samples from our final dataset for the experiments. First, we found that many samples were not actual ransomware, although they were classified as ransomware by some vendors in VirusTotal; we therefore discarded such samples. This finding is consistent with the observation in previous work [15].
Second, ransomware often needs to interact with command-and-control (C&C) servers to perform its malicious activities, and several ransomware samples did not work properly because their corresponding C&C servers had been taken down. Also, some sophisticated malware samples can detect the analysis environment and remain inactive to evade detection [32]. More importantly, the numbers of samples we observed from a few ransomware families were significantly larger than those from other families. For example, we found more than 20,000 ransomware samples from the Virlock family, including Virlock Gen.1, VirLock Gen.4, and VirLock Gen.8 variants. Therefore, we kept a limited number of samples from the Virlock family and discarded the rest. Finally, we collected 292 active samples from 67 ransomware families that perform their activities correctly. We used 206 ransomware samples from 43 ransomware families (see Table VI) in the first set of experiments. Out of the 43 families, 34 (102 samples) were crypto ransomware and 9 (104 samples) were screen-locker ransomware. The remaining 24 ransomware families with 86 samples were collected at a later stage of data collection and used to evaluate Peeler on new and unseen ransomware samples.

TABLE VI: Ransomware families, types, and samples.

no. | Family | Type | Samples | no. | Family | Type | Samples
---|---|---|---|---|---|---|---
1 | Cerber | Crypto | 33 | 23 | Petya | Crypto | 1
2 | Sodinokibi | Crypto | 14 | 24 | Satana | Crypto | 1
3 | GoldenEye | Crypto | 12 | 25 | Shade | Crypto | 1
4 | Sage | Crypto | 5 | 26 | Syrk | Crypto | 1
5 | Locky | Crypto | 5 | 27 | TeslaCrypt | Crypto | 1
6 | Dharma | Crypto | 3 | 28 | ucyLocker | Crypto | 1
7 | dotExe | Crypto | 3 | 29 | Unlock92 | Crypto | 1
8 | Troldesh | Crypto | 1 | 30 | Vipasana | Crypto | 1
9 | WannaCry | Crypto | 3 | 31 | Xorist | Crypto | 2
10 | Da Vinci Code | Crypto | 1 | 32 | Malevich | Crypto | 1
11 | CryptoShield | Crypto | 1 | 33 | Jigsaw | Crypto | 1
12 | CryptoWire | Crypto | 1 | 34 | Adobe | Crypto | 1
13 | District | Crypto | 1 | 35 | Virlock.Gen.5 | Screen | 83
14 | Gandcrab | Crypto | 1 | 36 | LockScreen.AGU | Screen | 12
15 | GlobeImposter | Crypto | 1 | 37 | Alphabet | Screen | 2
16 | Hexadecimal | Crypto | 1 | 38 | EgyptianGhosts | Screen | 1
17 | InfinityCrypt | Crypto | 1 | 39 | Lockey-Pay | Screen | 1
18 | IS (Ordinpt) | Crypto | 1 | 40 | Blue-Howl | Screen | 1
19 | Keypass | Crypto | 1 | 41 | ShellLocker | Screen | 1
20 | Lockcrypt | Crypto | 1 | 42 | DerialLock | Screen | 1
21 | Pack14 | Crypto | 1 | 43 | Trojan.Ransom | Screen | 1
22 | PocrimCrypt | Crypto | 1 | - | Total samples: 206, crypto = 102, screen-locker = 104 | |

#### IV-A1 User environment and ground truth (labeled) dataset

We used VirtualBox 6.1 [33] to create and manage the computing environment locally for experiments. Rather than using artificially generated data, we used a real user’s data running on the 64-bit Windows 10 operating system (a copied version of real user data) to set up a benign user’s environment realistically. Multimedia files (e.g., bmp, jpeg, png, and mp4), Microsoft Office documents (e.g., docx, xlsx, and pptx files), and other important files (e.g., cpp, py, pdf, and wav files) were copied to various directories in different locations.
We note that such files are typically the most attractive targets for ransomware. Each ransomware sample was executed and then manually labeled with its family type. We ran each ransomware sample for ten minutes or until all user files were encrypted. It took more than 90 days to run all samples and collect data. We only considered samples that encrypted user files or locked desktop screens; if no files were modified, we excluded the sample from our dataset. We also obtained labeled ransomware samples from two well-known malware repositories [31, 29]. Many ransomware families use their respective names as file extensions after encrypting a user file; e.g., WannaCry adds the .wncry file extension after encrypting a user file, and similarly, Ranzy, Ryuk, and Peta add the .rnz, .ryuk, and .peta file extensions, respectively. Moreover, we manually verified each sample’s ransomware family label against VirusTotal’s vendors, i.e., if more than 15 vendors assigned the same label to a sample, we labeled it accordingly.

#### IV-A2 Diversity in our dataset

Table VI presents the list of ransomware families used in our evaluation. To the best of our knowledge, this is the most comprehensive dataset containing diverse ransomware families. According to previous work [34, 15], the use of diverse families is more important than the number of ransomware samples from a few families for evaluating the performance of ransomware detectors. For instance, building a model on 1,000 Locky (and its variants) ransomware samples should prove no more useful than building a model on just one Locky sample [34]. Scaife et al. [15] confirmed that, due to the homogeneous nature of file I/O behavior among samples within each family, a small number of representative samples in each family is sufficient to evaluate detection performance. This is because the core behavioral traits shown by crypto ransomware when encrypting data do not change from one variant to another within a family.
Since our study covered more than eight times the number of families of a previous study [35] and more than twice the number of families covered in the studies [14, 15], and since there is not much diversity within families, there was little need to collect additional samples.

### IV-B Benign applications

We also collected a dataset of popular applications that are typically installed on a benign user’s computer. In addition to popularly used applications, we also considered several benign applications that could resemble ransomware in certain behavioral aspects, in order to investigate false positive rates when benign applications potentially resemble ransomware. We divide the benign dataset into three main categories targeting various types of ransomware: 1) benign applications with file I/O patterns resembling crypto ransomware, 2) benign applications spawning many processes, and 3) commonly used benign applications.

#### IV-B1 Benign applications resembling crypto ransomware

Benign applications performing encryption or compression might generate file I/O patterns similar to crypto ransomware, which could result in false positives. To evaluate Peeler against such applications, we collected data from several crypto-like benign applications, listed in Appendix -B. There are key differences in the file I/O patterns generated by encryption/compression tools compared to crypto ransomware. First, unlike ransomware, which incurs a massive number of repeated file I/O patterns, encryption/compression tools operate on a limited number of files only. Second, the original user files remain intact, i.e., they are not overwritten or deleted, even after compression or encryption is performed. It is, therefore, unlikely that benign applications show ransomware file I/O patterns. Third, unlike crypto ransomware, which encrypts user files arbitrarily, benign applications create a dedicated process that needs sophisticated inputs from the OS to complete the task.
For instance, the compression tool 7-zip takes several parameters to specify target files. Each tool in Appendix -B is run twice, once for compression and once for decompression, on a given set of files to collect data.

#### IV-B2 Benign applications spawning many processes

We collected a dataset containing applications that spawn many child processes to evaluate Peeler’s robustness against false positives. We found that certain benign applications such as Pycharm and Visual Studio spawn many child processes and may thus resemble screen-locker ransomware. The list of benign applications is shown in Appendix -B. We collected data by running each application individually.

#### IV-B3 Commonly used benign applications

We also collected a user’s system usage data under normal conditions while interacting with commonly used applications. A user runs many different applications at the same time; for example, a user may read a document using Adobe Acrobat Reader, switch to an internet browser to view online reviews about a product, and then use Adobe Acrobat Reader again. Our goal here is to analyze system events generated in an interleaved manner by commonly used benign applications. The collected data covers around 12 hours of computer usage. During data collection, the user interacted with multiple applications from the list of benign applications in Appendix -B.

## V Evaluation

We demonstrate Peeler’s performance in terms of detection accuracy, detection time, and CPU/memory overheads. For evaluation, we used the dataset described in Section IV. For training, 20% of both the screen-locker ransomware and benign application samples are randomly selected.
We used a small training dataset (only 20% of the entire dataset) for the following reasons: 1) the size of a training set is typically limited in the real world; 2) we want to increase the number of testing samples as much as possible to make Peeler robust against unseen ransomware samples; and 3) we want to reduce the model size for quick training. All the remaining ransomware and benign samples (i.e., 80% of the total samples) are used for testing purposes.

### V-A Detection accuracy

Table VII shows a summary of Peeler’s detection accuracy. Overall, Peeler achieved 99.52% accuracy with a false positive rate of only 0.58%. Since we used only 20% of the applications, selected randomly, for training, all performance statistics are averaged over 100 runs to mitigate biases in the training and testing datasets. Similarly, Peeler achieved high precision and F1 score, both greater than 99%.

TABLE VII: Peeler’s detection accuracy.

Acc. (%) | TPR (%) | FPR (%) | FNR (%) | Prec. (%) | Rec. (%) | F1 (%)
---|---|---|---|---|---|---
99.52 | 99.63 | 0.58 | 0.37 | 99.41 | 99.63 | 99.52

#### V-A1 False positive analysis

Minimizing false positives is essential to develop practically useful malware detectors because excessive false positives can annoy users and undermine the system’s effectiveness. We evaluate Peeler’s performance against three different types of benign application scenarios (see Table VIII):

TABLE VIII: Peeler’s false positive analysis.

Scenario | TNR (%) | FPR (%) | FNR (%)
---|---|---|---
Crypto-like benign apps | 98.27 | 1.72 | 0.96
Locker-like benign apps | 99.5 | 0.31 | 0.5
Commonly used benign apps | 99.78 | 0.21 | 0.87
All ransomware | 99.42 | 0.58 | 0.37

Crypto ransomware-like benign applications. For behavior-based crypto ransomware detection solutions, a significant challenge is to avoid flagging benign applications that have compression or encryption capabilities, because their system behaviors might be similar to crypto ransomware.
We deeply investigated 11 different applications that, like crypto ransomware, use compression or encryption operations on a large number of files (see Appendix -B). We observed that the event sequences of some benign applications such as ZipExtractor and BreeZip are quite similar to those of typical crypto ransomware, but unlike crypto ransomware, they do not restrict access to files via encryption. Table VIII shows that Peeler correctly classifies 98.27% of these applications, with a false positive rate of 1.72%, even against crypto ransomware-like benign applications.

Benign applications spawning many processes. Certain ransomware spawns many child processes. Therefore, we examine how Peeler’s performance can be degraded by benign apps exhibiting such behaviors. For this analysis, we investigated the 34 most popular applications from Microsoft’s Windows OS Store (https://www.microsoft.com/en-us/store/apps/windows) (see Appendix -B) and selected 18 applications showing such behaviors. Table IX presents the process-tree-related feature ($FV_{MLR}$) values of three representative benign applications showing such behaviors. We observe that Pycharm and Visual Studio spawned 140 and 46 child processes, respectively. Interestingly, Chrome spawns 42 processes, but the depth of its application tree is one, and the number of threads created by these processes is 1,480. We examined the results of Peeler with those 18 applications.

TABLE IX: Benign applications’ process tree features.

Application | # processes | Depth | # leaf nodes | # unique processes | # threads
---|---|---|---|---|---
Pycharm | 140 | 4 | 70 | 11 | 993
Visual Studio | 46 | 4 | 29 | 21 | 568
Chrome | 42 | 1 | 41 | 2 | 1,480

Table VIII shows that Peeler correctly detected 99.5% with a false positive rate of 0.31%, even though these benign applications spawned many processes. This is because Peeler also considers the other set of features ($FV_{SVM}$), which are related to system events.
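The process-tree features discussed here (and listed in Table IX) can be computed from a simple parent-to-child map. The sketch below is illustrative only: the snapshot records, field names, and root handling are our assumptions, since Peeler builds the actual tree from ETW Process events.

```python
# Minimal sketch of the FV_MLR process-tree features over a hypothetical
# snapshot of (pid, ppid, name, threads) records rooted at explorer.exe.
from collections import defaultdict

def process_tree_features(procs, root_pid):
    children = defaultdict(list)
    for p in procs:
        if p["pid"] != root_pid:
            children[p["ppid"]].append(p["pid"])

    def depth(pid):
        # Number of edges on the longest path below this process.
        kids = children[pid]
        return 0 if not kids else 1 + max(depth(k) for k in kids)

    app = [p for p in procs if p["pid"] != root_pid]  # exclude explorer.exe itself
    return {
        "n_processes": len(app),
        "n_threads": sum(p["threads"] for p in app),
        "max_depth": depth(root_pid),
        "n_leaves": sum(1 for p in app if not children[p["pid"]]),
        "n_unique_names": len({p["name"] for p in app}),
    }

# Hypothetical snapshot: explorer.exe -> app.exe -> two workers.
snapshot = [
    {"pid": 1, "ppid": 0, "name": "explorer.exe", "threads": 30},
    {"pid": 2, "ppid": 1, "name": "app.exe", "threads": 8},
    {"pid": 3, "ppid": 2, "name": "worker.exe", "threads": 4},
    {"pid": 4, "ppid": 2, "name": "worker.exe", "threads": 4},
]
print(process_tree_features(snapshot, root_pid=1))
```

For this toy snapshot the features are 3 processes, 16 threads, depth 2, 2 leaf nodes, and 2 unique names; a Chrome-like flat tree would instead yield a depth of one with many leaves, matching the pattern in Table IX.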
Commonly used benign applications. We also evaluated Peeler’s performance with commonly used benign applications such as Microsoft Office, Adobe Acrobat Reader, an email client, and instant messengers, as presented in Section IV-B. We show that Peeler correctly detects all benign activities performed by a user, achieving a true negative rate of 99.78%. The false positive and false negative rates under normal system usage are 0.21% and 0.87%, respectively. Note that the overall detection accuracy in all three scenarios is above 99%.

#### V-A2 False negative analysis

The false negative rate for ransomware detection is another important metric for evaluating the effectiveness of Peeler. Table X shows that the overall false negative rate is only 0.37%. The false negative rates for crypto and screen-locker ransomware are 0% and 0.5%, respectively.

TABLE X: Peeler’s false negative rate analysis.

Ransomware | TPR (%) | FPR (%) | FNR (%)
---|---|---|---
Crypto | 100 | 0.8 | 0
Screen-locker | 99.50 | 0.31 | 0.50
All ransomware | 99.63 | 0.58 | 0.37

### V-B Detection time

For crypto ransomware, the detection time refers to the time interval between the end of encryption of a file by a process and the moment of detection, that is, how long it takes Peeler to detect the ransomware attack after a ransomware sample encrypts a file. We excluded the time taken for the file encryption itself because that time varies depending on the file size. Figure 7 shows that Peeler can detect over 70% of samples within 115 milliseconds, with a mean time of 115.3 milliseconds, demonstrating that Peeler outperforms existing crypto ransomware detection solutions in detection time. Peeler can promptly detect crypto ransomware with simple file I/O pattern matching, unlike other existing solutions relying on complicated file activity analysis (e.g., identifying encryption operations based on entropy computation) or machine learning models.
These detection time results demonstrate the superiority of Peeler compared with existing crypto ransomware detection solutions. CryptoLock [15] detects crypto ransomware only after 10 files are encrypted. Similarly, Mehnaz et al. [10] presented a solution that takes on average 8.87 seconds to detect malicious processes.

Figure 7: CDF of the crypto ransomware detection time.

For screen-locker ransomware, the detection time refers to the time interval between the start of activity by a ransomware process and the moment of detection. Peeler, on average, took 16.4 seconds to detect screen-locker ransomware, while the execution time for screen-locker ransomware to lock a user’s desktop screen completely is 302.8 seconds on average, demonstrating that Peeler can detect screen-locker ransomware at a very early stage, preventing an attacker from locking a victim’s system. Figure 8 shows the probability density function of the screen-locker detection time. Moreover, the detection time is not affected even if ransomware is running in parallel with many other irrelevant system applications, because Peeler’s system event monitor directly intercepts the events and then analyzes them.

Figure 8: Probability density function of detection times for screen-locker ransomware with mean ($\mu$) and standard deviation ($\sigma$).

### V-C Robustness against unseen families

To test Peeler against unseen ransomware families, we additionally collected new and unseen ransomware samples three months after the first experiments and monitored online repositories for new or unseen ransomware samples. A total of 70 samples from more than 24 distinct unseen or new ransomware families were tested. We used the previously constructed Peeler without retraining. All samples tested in this experiment were manually verified using VirusTotal and other online malware repositories to confirm their family and type.
Peeler correctly detected 67 of the total 70 new and unseen ransomware samples, achieving a detection rate of more than 95%. The ransomware families and the number of corresponding samples tested are given in Table XI. Peeler can also detect 9 out of 16 conventional malware samples, even though those samples do not have ransomware capabilities. For example, our investigation revealed that Backdoor.Generic is a malware family that enables attackers to control infected computers remotely to create large groups of zombie computers (botnets), which are then used for malicious purposes without the user’s knowledge. We surmise that those malware samples exhibit some behaviors in common with ransomware.

TABLE XI: Detection of unseen ransomware and malware.

Type/Detection | Family | Sample | Family | Sample
---|---|---|---|---
Ransomware 95.7% (67/70) | Ryuk | 6 | Zeppelin | 6
 | Ranzy | 4 | Netwalker | 2
 | Core | 3 | Fox | 3
 | Crpren | 1 | MedusaLocker | 1
 | Balaclava | 5 | Crylock | 7
 | Matrix | 4 | DarkSide | 4
 | Ragnarlocker | 2 | HiddenTear | 2
 | Mespinoza | 5 | Thanos | 3
 | Vaggen | 3 | Mountlocker | 2
 | Nemty | 2 | Phobos | 1
 | Jsworm | 1 | Winlock | 1
 | Maze | 1 | Unknown | 1
Malware 56.2% (9/16) | Backdoor.Generic | 2 | Unknown | 14

### V-D Model-specific detection accuracy

We constructed Peeler by building multiple detection components (a file I/O pattern matcher and a machine learning-based classifier). Here we show the effectiveness of each component in detecting ransomware. Tables XII and XIII show the evaluation results of the two components, respectively.

TABLE XII: File I/O pattern matcher’s performance.

Ransomware | TPR (%) | FPR (%) | FNR (%) | Prec. (%) | Rec. (%) | F1 score (%)
---|---|---|---|---|---|---
Crypto | 95.19 | 0 | 4.81 | 100 | 95.19 | 97.53
Screen-locker | 33.33 | 0 | 66.66 | 100 | 33.33 | 50.00
All ransomware | 65.50 | 0 | 34.5 | 100 | 65.50 | 79.15

Table XII shows that the file I/O pattern matcher achieves a detection rate of 95.19% with a false positive rate of 0% in detecting crypto ransomware; however, it is not effective in detecting screen-locker ransomware. Table XIII shows that the machine learning-based classifier works well in detecting both crypto and screen-locker ransomware, achieving more than 98% detection accuracy with less than a 2% false positive rate for both types of ransomware attacks. The high detection rate of the machine learning-based classifier can be attributed to the feature vectors ($FV_{\text{MLR}}$ and $FV_{\text{SVM}}$). However, to shorten the detection time, we can first apply the file I/O pattern matcher and then use the machine learning-based classifier only when the file I/O pattern matcher fails to detect.

TABLE XIII: Machine learning-based classifier’s performance.

Ransomware | TPR (%) | FPR (%) | FNR (%) | Prec. (%) | Rec. (%) | F1 score (%)
---|---|---|---|---|---|---
Crypto | 98.45 | 1.84 | 1.54 | 98.24 | 98.45 | 98.29
Screen-locker | 99.5 | 0.31 | 0.50 | 99.68 | 99.50 | 99.59
All ransomware | 99.05 | 1.9 | 0.94 | 98.19 | 99.05 | 98.59

Each component can serve as an alternative and complementary detection method to the other. For instance, the file I/O pattern matcher failed to detect some crypto ransomware samples such as Hexadecimal, Cryptowire, CryptoShield, and CryptoLock, but the machine learning-based classifier successfully detected them.

### V-E CPU and memory overheads

We evaluate the performance of Peeler with respect to CPU and memory overheads.
Since Peeler intercepts low-level kernel events and then analyzes them to detect ransomware attacks, its CPU and memory overheads can vary depending on the system’s workload. For instance, we observe that if computationally intensive tasks are running, the overheads of Peeler inherently increase because Peeler must frequently intercept the large number of kernel events generated by such computationally intensive processes.

TABLE XIV: CPU and memory overheads of Peeler.

Workload | CPU mean (%) | CPU std. (%) | Memory mean (MB) | Memory std. (MB)
---|---|---|---|---
Low | 2.7 | 2.0 | 9.8 | 0.3
Normal | 4.9 | 2.2 | 9.8 | 0.4
High | 15.8 | 3.9 | 11.3 | 1.9

Our experiments were conducted on a computing device equipped with two Intel Core(TM) i5-7300U (2.60GHz) CPUs and 8GB RAM, running the 64-bit Windows 10 Enterprise edition operating system; our CPU and memory usage results were measured in this environment. We considered three workload conditions: low, normal, and high. Under the low workload condition, Peeler is only running in the background, continuously collecting system events and writing them to a log file. Under the normal workload condition, we additionally performed the following tasks: 1) drafting an MS Word document; 2) using Chrome to browse online material; and 3) reading a document using Adobe Acrobat Reader. Under the high workload condition, we additionally ran a CPU-intensive algorithm as a background process. We note that an anti-malware program called CylanceProtect and the default Windows OS services were continuously running in the background under all workload conditions. Table XIV shows the CPU and memory usage of Peeler. We observe that the mean CPU usage and its standard deviation are 4.9% and 2.2%, respectively, under the normal workload condition.
Table XIV also shows that the mean memory usage is around 9.8MB under the low and normal workload conditions, which is quite stable with little variation (standard deviation of 0.4MB). Under the high workload condition, the average CPU usage of Peeler increases significantly to 15.8% with a standard deviation of 3.9%, while its average memory usage increases slightly to 11.3MB with a standard deviation of 1.9MB.

## VI Discussion and limitations

### VI-A Comparison with existing solutions

As mentioned in Section I, there are many existing methods to detect ransomware attacks. However, since most existing solutions used their own datasets for evaluation and their source code is not publicly available, we do not directly compare Peeler with those solutions. Instead, we compare Peeler with those solutions based on the experimental results reported in their papers. Table XV shows a summary of the comparison results.

TABLE XV: Comparison with existing approaches.

Method | TPR (%) | FPR (%) | Files lost | Screen-locker? | Samples/families | Real-time
---|---|---|---|---|---|---
Redemption [6] | 100 | 0.8 | 5 | $\times$ | 677/29 | $\times$
CryptoLock [15] | 100 | 0.03 | 10 | $\times$ | 492/14 | $\times$
UNVEIL [14] | 96.3 | 0 | - | $\checkmark$ | 2121/ - | $\times$
REDFISH [13] | 100 | - | 10 | $\times$ | 54/19 | $\times$
RWGuard [10] | - | 0.1 | partial recovery | $\times$ | - /14 | $\checkmark$
Elderan [9] | 93.3 | 1.6 | - | $\times$ | 582/11 | $\times$
CM&CB [36] | 98 | Vary | - | $\times$ | 8/ - | $\times$
RansHunt [37] | 97 | 3 | - | $\times$ | 360/20 | $\times$
ShieldFS [7] | 100 | 0.038 | - | $\times$ | 383/11 | $\times$
Peeler | 99.63 | 0.58 | 1 | $\checkmark$ | 206/43 | $\checkmark$

We can see that the ransomware detection rates overall range from 93% to 100%. However, it is important to note that the number of ransomware samples/families used for evaluation differs for each approach.
CryptoLock [15], Redemption [6], REDFISH [13], and ShieldFS [7] achieved 100% detection rates, but those solutions were tested on only 14, 29, 19, and 11 ransomware families, respectively. In contrast, Peeler was tested against 43 distinct ransomware families, including both crypto and screen-locker ransomware, and still achieved a 99.63% detection rate. For the false positive rate with benign applications, Peeler achieved 0.58% FPR, which is comparable with the other solutions. For ransomware detection, one of the most critical evaluation metrics is the number of user files lost before a ransomware sample is detected. Peeler detects ransomware immediately after a single file is encrypted, indicating that Peeler outperforms other solutions. From Table XV, we can see that only two solutions (Peeler and UNVEIL [14]) considered the detection of screen-locker ransomware. However, Peeler’s detection time (16.4 seconds on average) would be significantly faster than UNVEIL’s. Although Kharaz et al. [14] did not report the exact detection time of UNVEIL, we surmise that UNVEIL would take a longer detection time because it needs to capture screenshots of a victim’s desktop periodically and then compute the similarity between the captured images.

### VI-B Secure implementation of Peeler

Peeler runs in a privileged kernel mode with administrative rights because it needs to collect kernel-level events from four main kernel providers (File, Process, Image, and Thread). Therefore, the integrity of Peeler can be securely protected against malicious processes running in user mode. However, if we consider powerful attackers (e.g., rootkit-based ransomware) with root privileges, we additionally need to deploy a kernel protection mechanism to protect Peeler against such attackers. Recently, several techniques (e.g., real-time kernel protection (RKP) [38]) have been proposed to protect kernel code and objects from unauthorized modifications.
With such a secure kernel protection mechanism, we can implement Peeler as a Windows OS service running in the kernel.

### VI-C Peeler’s extension to other computing environments

To extend Peeler to other computing environments such as Linux or Android, two aspects need to be considered: platform-dependent system events and key ransomware behavioral characteristics. The latter is platform-independent because the key behavioral characteristics of ransomware remain almost the same no matter which platform they target. However, the system events of other computing environments need to be carefully analyzed to extend Peeler to those environments. For instance, the suspicious file I/O patterns and the corresponding regular expressions should be updated to reflect platform-dependent system events.

### VI-D Limitations

In our experiments, the tested benign applications are only a small part of a massive number of benign applications. Therefore, there is still a chance of encountering (unknown) benign applications that generate suspicious file I/O patterns, which might result in false positives. For instance, we found that when Windows OS creates a backup file tokens.dat.bak from the file tokens.dat, the sequence of kernel-level file I/O events generated by this benign process can lead to a false positive result from the file I/O pattern matcher. In addition, sophisticated malware could memory-map a user file and then copy the block of memory using memcpy() to evade file I/O pattern matching. The Windows OS may not observe any write operation in such a scenario, and therefore Peeler may not detect such attacks.
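The kind of per-file event-sequence matching discussed here, and the backup-file false positive it can produce, can be illustrated with a toy regular expression. The event alphabet and the pattern below are assumptions for illustration; Peeler’s actual regular expressions over ETW File events are more elaborate.

```python
import re

# Toy per-file event alphabet: (C)reate, (R)ead, (W)rite, (D)elete.
# Hypothetical "read victim file, write ciphertext, delete original" pattern.
SUSPICIOUS = re.compile(r"R+W+D")

def is_suspicious(event_sequence):
    """Return True if the flattened per-file event string matches the toy pattern."""
    return bool(SUSPICIOUS.search("".join(event_sequence)))

print(is_suspicious("CRRWWD"))  # crypto-ransomware-like: read, overwrite, delete
print(is_suspicious("CRRR"))    # ordinary read access
print(is_suspicious("RWW"))     # copy-like access that keeps the original file
```

Note how a benign backup routine that reads a file, writes a copy, and later deletes the original would also produce an `RWD`-shaped sequence, which is exactly the kind of false positive the tokens.dat.bak example above describes.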
## VII Related work

We categorize the literature on ransomware detection into three groups: 1) crypto ransomware detection techniques that are mainly based on certain behavioral indicators (e.g., file I/O event patterns), 2) machine learning-based approaches that build models leveraging system behavior features, and 3) decoy-based approaches that deploy decoy files and monitor whether ransomware samples tamper with them.

Crypto ransomware detection. There have been several proposals to monitor the file I/O request patterns of applications to detect crypto ransomware. Kharraz et al. [14] studied crypto ransomware families’ file I/O request patterns and presented a dynamic analysis-based ransomware detection system called UNVEIL, which detected 13,647 ransomware samples from a dataset of 148,223 general malware samples. Kharraz et al. [6] proposed another ransomware detection system using file I/O patterns, achieving a 100% detection rate with 0.8% false positives on 677 samples from 29 ransomware families. Scaife et al. [15] also presented a system called CryptoDrop that detects ransomware based on suspicious file activities, e.g., a large number of file accesses within a time interval; according to their experimental results, the number of lost files is ten on average. Morato et al. [13] proposed a ransomware detection algorithm based on a copy of the network traffic, without impacting a user’s activities. Their proposed system achieved a 100% detection rate on 19 different ransomware families after the loss of ten files.

Machine learning-based ransomware detection. RWGuard [10] is a machine learning-based crypto ransomware detection system. It achieved a 0.1% false positive rate while incurring a 1.9% CPU overhead on 14 crypto ransomware families. RWGuard leverages features of processes’ I/O requests and the changes performed on files because it was mainly designed to detect crypto ransomware only. Sgandurra et al.
[9] proposed EldeRan, another machine learning approach that builds a model using system activities performed by applications, such as API invocations, registry events, and file operations. EldeRan achieved a 93.3% detection rate with a 1.6% false alarm rate on 582 samples from 11 ransomware families. Hirano et al. [39] and Al-rimy [19] proposed behavior-based machine learning models for ransomware detection. Hirano et al. selected five-dimensional features that were extracted from both ransomware and benign applications’ I/O log files. Nieuwenhuizen [34] proposed another behavior-based machine learning model using a feature set that quantifies the behavioral traits of ransomware’s malicious activities.

Decoy file-based ransomware detection. Decoy techniques [10, 11, 7] have also been frequently proposed to detect ransomware attacks. For example, Gomez et al. [11] developed a tool called R-Locker that uses honey files to trap ransomware. When file operations are performed on honey files by a process, the process is detected and completely blocked, because benign processes do not perform any file operations on honey files. However, if the generated decoy files look different from real user files, sophisticated ransomware samples can ignore them [10]. Moreover, it is also unclear how those solutions would detect some ransomware families (e.g., Petya) that affect predefined system files only.

## VIII Conclusion

We propose a new, effective, and efficient solution called Peeler to detect ransomware attacks using their system behaviors. Peeler is built on both rule-based detection (e.g., a malicious command detector and an I/O pattern matcher) and machine learning models to improve detection accuracy and reduce detection time.
Most crypto ransomware can be detected efficiently by the I/O pattern matcher; crypto ransomware that cannot be detected by the I/O pattern matcher, as well as screen-locker ransomware, can be detected more accurately by the machine learning models. To show the effectiveness of Peeler, we evaluated its performance with 43 ransomware families containing both crypto and screen-locker ransomware. In the experiments, Peeler achieved 99.52% accuracy with a false positive rate of only 0.58%. Moreover, Peeler is efficient in detecting crypto ransomware: over 70% of crypto ransomware samples can be detected within 115 milliseconds. Although Peeler’s detection time (16.4 seconds on average) is relatively slower for screen-locker ransomware, it is still sufficient because screen-locker ransomware typically takes a longer time (302.8 seconds on average) to lock a victim’s system entirely.

## References

* [1] First Three Quarters of 2019: 7.2 Billion Malware Attacks, 151.9 Million Ransomware Attacks. https://www.securitymagazine.com/articles/91133-first-three-quarters-of-2019-72-billion-malware-attacks-1519-million-ransomware-attacks.
* [2] B. A. S. Al-rimy, M. A. Maarof, and S. Z. M. Shaid, “Ransomware threat success factors, taxonomy, and countermeasures: A survey and research directions,” Computers & Security, vol. 74, pp. 144–166, 2018.
* [3] N. Popper, Ransomware Attacks Grow, Crippling Cities and Businesses. https://www.nytimes.com/2020/02/09/technology/ransomware-attacks.html.
* [4] The State of Ransomware in the US. https://blog.emsisoft.com/en/34822/the-state-of-ransomware-in-the-us-report-and-statistics-2019/.
* [5] S. Sivakorn, K. Jee, Y. Sun, L. Kort-Parn, Z. Li, C. Lumezanu, Z. Wu, L.-A. Tang, and D. Li, “Countering malicious processes with process-dns association,” in Network and Distributed Systems Security, 2019.
* [6] A. Kharraz and E.
Kirda, “Redemption: Real-time protection against ransomware at end-hosts,” in International Symposium on Research in Attacks, Intrusions, and Defenses, pp. 98–119, 2017.
* [7] A. Continella, A. Guagnelli, G. Zingaro, G. De Pasquale, A. Barenghi, S. Zanero, and F. Maggi, “ShieldFS: a self-healing, ransomware-aware filesystem,” in Proceedings of the 32nd Annual Conference on Computer Security Applications, pp. 336–347, 2016.
* [8] S. Homayoun, A. Dehghantanha, M. Ahmadzadeh, S. Hashemi, and R. Khayami, “Know abnormal, find evil: frequent pattern mining for ransomware threat hunting and intelligence,” IEEE Transactions on Emerging Topics in Computing, 2017.
* [9] D. Sgandurra, L. Muñoz-González, R. Mohsen, and E. C. Lupu, “Automated dynamic analysis of ransomware: Benefits, limitations and use for detection,” arXiv preprint arXiv:1609.03020, 2016.
* [10] S. Mehnaz, A. Mudgerikar, and E. Bertino, “RWGuard: A real-time detection system against cryptographic ransomware,” in International Symposium on Research in Attacks, Intrusions, and Defenses, pp. 114–136, 2018.
* [11] J. Gómez-Hernández, L. Álvarez-González, and P. García-Teodoro, “R-Locker: Thwarting ransomware action through a honeyfile-based approach,” Computers & Security, vol. 73, pp. 389–398, 2018.
* [12] E. Kolodenker, W. Koch, G. Stringhini, and M. Egele, “PayBreak: Defense against cryptographic ransomware,” in Proceedings of the ACM on Asia Conference on Computer and Communications Security, pp. 599–611, 2017.
* [13] D. Morato, E. Berrueta, E. Magaña, and M. Izal, “Ransomware early detection by the analysis of file sharing traffic,” Journal of Network and Computer Applications, vol. 124, pp. 14–32, 2018.
* [14] A. Kharaz, S. Arshad, C. Mulliner, W. Robertson, and E. Kirda, “UNVEIL: A large-scale, automated approach to detecting ransomware,” in 25th USENIX Security Symposium (USENIX Security 16), pp. 757–772, 2016.
* [15] N. Scaife, H. Carter, P. Traynor, and K. R.
Butler, “Cryptolock (and drop it): stopping ransomware attacks on user data,” in 36th IEEE International Conference on Distributed Computing Systems (ICDCS), pp. 303–312, 2016. * [16] J. Huang, J. Xu, X. Xing, P. Liu, and M. K. Qureshi, “Flashguard: Leveraging intrinsic flash properties to defend against encryption ransomware,” in Proceedings of the ACM SIGSAC Conference on Computer and Communications Security, pp. 2231–2244, 2017. * [17] S. M. Milajerdi, B. Eshete, R. Gjomemo, and V. Venkatakrishnan, “Poirot: Aligning attack behavior with kernel audit records for cyber threat hunting,” in Proceedings of ACM SIGSAC Conference on Computer and Communications Security, pp. 1795–1812, 2019. * [18] L. Zhao and M. Mannan, “TEE-aided write protection against privileged data tampering,” arXiv preprint arXiv:1905.10723, 2019. * [19] B. A. S. Al-rimy, M. A. Maarof, and S. Z. M. Shaid, “A 0-day aware crypto-ransomware early behavioral detection framework,” in International Conference of Reliable Information and Communication Technology, pp. 758–766, 2017. * [20] B. Lelonek and N. Rogers, Make ETW greate again. https://ruxcon.org.au/assets/2016/slides/ETW_16_RUXCON_NJR_no_notes.pdf. * [21] R. Rodriguez, Threat Hunting with ETW events and HELK — Part 1: Installing SilkETW, 2019. https://medium.com/threat-hunters-forge/threat-hunting-with-etw-events-and-helk-part-1-installing-silketw-6eb74815e4a0. * [22] D. Hendler, S. Kels, and A. Rubin, “Detecting malicious powershell commands using deep neural networks,” in Proceedings of the 2018 on Asia Conference on Computer and Communications Security, pp. 187–197, 2018. * [23] D. Hendler, S. Kels, and A. Rubin, “Amsi-based detection of malicious powershell code using contextual embeddings,” in Proceedings of the 15th ACM Asia Conference on Computer and Communications Security, pp. 679–693, 2020. 
* [24] Chronicle, “YARA-L: A new detection language for modern threats [white paper],” pp. 1–9, 2020. https://go.chronicle.security/hubfs/YARA-L%20Overview%20White%20Paper.pdf.
* [25] About Event Tracing. https://docs.microsoft.com/en-us/windows/win32/etw/about-event-tracing.
* [26] Krabsetw. https://github.com/microsoft/krabsetw.
* [27] A. Bayaga, “Multinomial logistic regression: Usage and application in risk analysis,” Journal of Applied Quantitative Methods, vol. 5, no. 2, 2010.
* [28] VirusTotal. https://www.virustotal.com/.
* [29] A Live Malware Repository. https://github.com/ytisf/theZoo.
* [30] Malware samples. https://github.com/fabrimagic72/malware-samples.
* [31] MalwareBazaar. https://bazaar.abuse.ch/.
* [32] N. Miramirkhani, M. P. Appini, N. Nikiforakis, and M. Polychronakis, “Spotless sandboxes: Evading malware analysis systems using wear-and-tear artifacts,” in IEEE Symposium on Security and Privacy (SP), pp. 1009–1024, 2017.
* [33] VirtualBox. https://www.virtualbox.org.
* [34] D. Nieuwenhuizen, “A behavioural-based approach to ransomware detection,” MWR Labs Whitepaper, 2017.
* [35] A. Kharraz, W. Robertson, D. Balzarotti, L. Bilge, and E. Kirda, “Cutting the gordian knot: A look under the hood of ransomware attacks,” in International Conference on Detection of Intrusions and Malware, and Vulnerability Assessment, pp. 3–24, 2015.
* [36] M. M. Ahmadian, H. R. Shahriari, and S. M. Ghaffarian, “Connection-monitor & connection-breaker: A novel approach for prevention and detection of high survivable ransomwares,” in IEEE International Iranian Society of Cryptology Conference on Information Security and Cryptology (ISCISC), pp. 79–84, 2015.
* [37] M. M. Hasan and M. M. Rahman, “RansHunt: A support vector machines based ransomware analysis framework with integrated feature set,” in 20th IEEE International Conference of Computer and Information Technology (ICCIT), pp. 1–7, 2017.
* [38] A. M. Azab, P. Ning, J. Shah, Q. Chen, R. Bhutkar, G.
Ganesh, J. Ma, and W. Shen, “Hypervision across worlds: Real-time kernel protection from the arm trustzone secure world,” in Proceedings of ACM SIGSAC Conference on Computer and Communications Security, pp. 90–102, 2014.
* [39] M. Hirano and R. Kobayashi, “Machine learning based ransomware detection using storage access patterns obtained from live-forensic hypervisor,” in Sixth IEEE International Conference on Internet of Things: Systems, Management and Security (IOTSMS), pp. 1–6, 2019.

### -A File encryption patterns

Figure 9: File I/O events generated by InfinityCrypt ransomware.

Figure 10: File I/O events generated by Wannacry ransomware.

### -B Benign applications

In this section, we present the benign applications used in the evaluation of Peeler that can potentially show ransomware-like behavior: 1) benign encryption, compression, and shredder applications (Table XVI), 2) the most commonly used benign applications, and 3) benign applications spawning multiple processes (Table XVII).

TABLE XVI: Ransomware-like benign applications.

Tool | Application | Operation | Version
---|---|---|---
Compression | 7-zip | Compression | 19.00
 | 7-zip | Decompression |
 | Winzip | Compression | 24
 | Winzip | Decompression |
 | Winrar | Compression | 5.80
 | Winrar | Decompression |
 | BreeZip | Compression | -
 | BreeZip | Decompression |
 | Alzip | Compression | 11.04
 | Alzip | Decompression |
 | Peazip | Compression | 7.1.1
 | Peazip | Decompression |
Encryption | AESCrypt | Encryption | 310.00
 | AESCrypt | Decryption |
 | AxCrypt | Encryption | -
 | AxCrypt | Decryption |
Shredder | Eraser | Delete | 6.2.0.2986
 | Ccleaner | Delete | -
 | Windows Delete | Delete | -

TABLE XVII: Benign applications spawning multiple processes.

Type | Application | Version | spawn processes?
---|---|---|---
Office | MS Word | 16.0.11929.20436 | $\times$
 | MS Powerpoint | 16.0.11929.20436 | $\times$
 | MS Excel | 16.0.11929.20436 | $\times$
 | MS Outlook | 16.0.11929.20436 | ✓
 | Trio Office: Word, Slide, Spreadsheet | - | ✓
Development | Pycharm | 11.0.3+12-b304.56 amd64 | ✓
 | Matlab | R2019a | ✓
 | Visual Studio C++ | 2019 community version | ✓
 | Android Studio | 191.6010548 | ✓
Tools | Adobe Acrobat Reader | 20.006.20034 | ✓
 | Adobe Photoshop Express | 3.0.316 | $\times$
 | PhotoScape | 3.7 | $\times$
 | Cool File Viewer | - | $\times$
 | PicArt Photo Studio | - | $\times$
 | Paint 3D | - | $\times$
Cloud and Internet | Dropbox | - | $\times$
 | Googledrive | - | $\times$
 | Internet Explorer | 11.1039.17763 | ✓
 | Google Chrome | 80.0.3987.132 | ✓
 | Remote Desktop | - | $\times$
Messenger | Telegram | 1.9.7 | $\times$
 | WhatsApp | 0.4.930 | ✓
 | Skype | 1.9.7 | $\times$
 | Facebook Messenger | - | ✓
Document | Wordpad | - | $\times$
 | Notepad | - | $\times$
 | OneNote | 16001.12527.20128.0 | $\times$
Media player | VLC | 3.0.8 | $\times$
 | Netflix | 6.95.602 | $\times$
 | GOM Player | 2.3.49.5312 | $\times$
Miscellaneous | Spotify | - | ✓
 | KeePass Password manager | 1.38 | $\times$
 | Discord | - | $\times$
 | Facebook | - | $\times$

### -C Malicious commands triggered during ransomware execution

In Table XVIII, we show example malicious commands extracted by Peeler during ransomware execution. This is not an exhaustive list; instead, we show a few important malicious commands found during ransomware execution.

TABLE XVIII: Malicious commands.
no | Command
---|---
1 | vssadmin.exe delete shadows /all /quiet
2 | bcdedit.exe /set default recoveryenabled No
3 | bcdedit.exe /set default bootstatuspolicy ignoreallfailures
4 | powershell.exe -e Get-WmiObject Win32_Shadowcopy \| ForEach-Object{$_.Delete();}
5 | taskkill /t /f /im mal.exe
6 | del mal.exe
7 | reg add HKCU\Software\Microsoft\Windows\CurrentVersion\Explorer\Advanced /f /v HideFileExt /t REG_DWORD /d 1
8 | reg add HKCU\Software\Microsoft\Windows\CurrentVersion\Explorer\Advanced /f /v Hidden /t REG_DWORD /d 2
9 | reg add HKCU\Software\Microsoft\Windows\CurrentVersion\Explorer\Advanced /f /v EnableLUA /d 0 /t REG_DWORD /f
10 | reg add HKCU\Control Panel\Desktop /v Wallpaper /t REG_SZ /d PathtoRansomNoteImage /f
11 | reg add HKCU\Control Panel\Desktop /v WallpaperStyle /t REG_SZ /d "0" /f
12 | reg add HKCU\Control Panel\Desktop /v TileWallpaper /t REG_SZ /d "0" /f
13 | powershell.exe -ExecutionPolicy Restricted -Command Write-Host 'Final result: 1';
14 | powershell.exe Set-MpPreference -DisableArchiveScanning $true;
15 | powershell.exe Set-MpPreference -DisableBlockAtFirstSeen $true;
16 | icacls Path /deny *S-1-1-0:(OI)(CI)(DE,DC)
17 | icacls . /grant Everyone:F /T /C /Q
18 | notepad.exe C:\Users\USER\Music\# RESTORING FILES #.TXT
19 | cscript C:\Users\kim105\AppData\Local\Temp/SUwk.vbs
20 | wmic.exe shadowcopy delete
21 | net.exe stop vss
22 | net.exe stop McAfeeDLPAgentService /y
23 | schtasks /create /sc onlogon /tn TASK_NAME /rl highest /tr PATH_TO_EXE
24 | vssadmin.exe resize shadowstorage /for=c: /on=c: /maxsize=401MB
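Command patterns like those in Table XVIII lend themselves to straightforward rule matching. The sketch below is illustrative only: the `is_malicious_command` helper and the handful of regexes are assumptions for demonstration, not Peeler's actual rule set.

```python
import re

# Hypothetical rule set inspired by the commands in Table XVIII.
# These regexes are illustrative, not Peeler's actual detection rules.
MALICIOUS_PATTERNS = [
    re.compile(r"vssadmin(\.exe)?\s+delete\s+shadows", re.I),          # shadow-copy removal
    re.compile(r"bcdedit(\.exe)?\s+/set\s+\S+\s+recoveryenabled\s+no", re.I),
    re.compile(r"wmic(\.exe)?\s+shadowcopy\s+delete", re.I),
    re.compile(r"\bnet(\.exe)?\s+stop\s+vss\b", re.I),                 # stop the shadow-copy service
]

def is_malicious_command(cmdline: str) -> bool:
    """Return True if the command line matches any known-malicious pattern."""
    return any(p.search(cmdline) for p in MALICIOUS_PATTERNS)
```

A matcher like this is cheap enough to run on every process-creation event; the machine learning models would then cover behaviors that no fixed rule captures.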
# Counterfactual State Explanations for Reinforcement Learning Agents via Generative Deep Learning

Matthew L. Olson <EMAIL_ADDRESS> Roli Khanna Lawrence Neal Fuxin Li Weng-Keen Wong Oregon State University, OR, USA

###### Abstract

Counterfactual explanations, which deal with “why not?” scenarios, can provide insightful explanations to an AI agent’s behavior (Miller, 2019). In this work, we focus on generating counterfactual explanations for deep reinforcement learning (RL) agents which operate in visual input environments like Atari. We introduce counterfactual state explanations, a novel example-based approach to counterfactual explanations based on generative deep learning. Specifically, a counterfactual state illustrates what minimal change is needed to an Atari game image such that the agent chooses a different action. We also evaluate the effectiveness of counterfactual states on human participants who are not machine learning experts. Our first user study investigates if humans can discern if the counterfactual state explanations are produced by the actual game or produced by a generative deep learning approach. Our second user study investigates if counterfactual state explanations can help non-expert participants identify a flawed agent; we compare against a baseline approach based on a nearest neighbor explanation which uses images from the actual game. Our results indicate that counterfactual state explanations have sufficient fidelity to the actual game images to enable non-experts to more effectively identify a flawed RL agent compared to the nearest neighbor baseline and to having no explanation at all.

###### keywords: Deep Learning, Reinforcement Learning, Explainable AI, Interpretable AI
††journal: Journal of Artificial Intelligence

## 1 Introduction

Despite the impressive advances made by deep reinforcement learning (RL) agents, their decision-making process is challenging for humans to understand.
This limitation is a serious concern for settings in which trust and reliability are critical, and deploying RL agents in these settings requires ensuring that they are making decisions for the right reasons. To address this problem, researchers are developing techniques to provide human-understandable answers to explanatory questions about an agent’s decision-making. Explanatory questions can be classified into three types (Miller, 2019, Pearl and Mackenzie, 2018): “What?” (associative reasoning), “How?” (interventionist reasoning) and “Why?” (counterfactual reasoning). Of the three types, “Why?” questions are the most challenging, as they require counterfactual reasoning (Lewis, 1973, Wachter et al., 2017), which involves reasoning about alternate outcomes that have not happened; counterfactual reasoning in turn requires both associative and interventionist reasoning (Miller, 2019). In our work, we present a counterfactual explanation method to tackle the “Why?” question in Miller’s classification. More specifically, we answer the “Why not?” question by using a deep generative model that can visually change the current state to produce alternate outcomes.

Figure 1: A counterfactual example in the game of Space Invaders that demonstrates an agent’s action changing by the removal of an enemy. Left: The game state in which an agent takes action “move left and shoot”. Right: The counterfactual state where the agent will take the action “move right”.

Underlying an RL agent is the mathematical framework of a Markov Decision Process (MDP) (Puterman, 1994), which models an agent making a sequence of decisions as it interacts with a stochastic environment. In the notation to follow in this section and in the rest of the manuscript, vectors, matrices and sets are in boldface while scalars are not.
Formally, an MDP is a tuple $(\bm{S},\bm{\mathcal{A}},T,R,\gamma)$, where $\bm{S}$ is a set of states, $\bm{\mathcal{A}}$ is a set of actions, $T(\bm{s^{\prime}},\bm{s},a)$ is a transition function capturing the probability of moving from state $\bm{s}$ to $\bm{s^{\prime}}$ when action $a$ is performed in state $\bm{s}$, $R(\bm{s},a)$ is a reward function returning a reward for being in state $\bm{s}$ and performing action $a$, and $\gamma$ is called a discount factor (where $0\leq\gamma\leq 1$), which weights the importance of future rewards. Using the MDP framework, we introduce the concept of a counterfactual state as a counterfactual explanation (an early version of this work appeared in Olson et al. (2019)). More precisely, for an agent in state $\bm{s}$ performing action $a$ according to its learned policy, a counterfactual state $\bm{s^{\prime}}$ is a state that involves a minimal change to $\bm{s}$ such that the agent’s policy chooses action $a^{\prime}$ instead of $a$. For example, a counterfactual state can be seen in Figure 1 for the video game Space Invaders (Brockman et al., 2016). In this game, an agent exchanges fire with approaching enemies while taking cover underneath three barriers. Our approach is intended for deep RL agents that operate in visual input environments such as Atari. The main role of deep learning in these environments is to learn a lower dimensional representation of the state that captures the salient aspects needed to learn a successful policy. Our approach investigates how changes to the state cause the agent to choose a different action. As such, we do not focus on explaining the long term, sequential decision-making effects of following a learned policy, though this is a direction of interest for future work. Our end goal is a tool for acceptance testing for end users of a deep RL agent.
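The MDP ingredients above can be captured in a minimal sketch. The concrete container types and the `discounted_return` helper are illustrative assumptions for exposition, not part of the original formulation.

```python
from typing import Callable, NamedTuple, Sequence

# Minimal sketch of the MDP tuple (S, A, T, R, gamma) from the text.
class MDP(NamedTuple):
    states: Sequence       # S: set of states
    actions: Sequence      # A: set of actions
    T: Callable            # T(s_next, s, a) -> transition probability
    R: Callable            # R(s, a) -> reward
    gamma: float           # discount factor, 0 <= gamma <= 1

def discounted_return(rewards: Sequence[float], gamma: float) -> float:
    """Weight future rewards by the discount factor: sum_t gamma^t * r_t."""
    return sum(gamma ** t * r for t, r in enumerate(rewards))
```

For example, with `gamma = 0.5`, a reward sequence `[1, 1, 1]` yields a discounted return of `1 + 0.5 + 0.25 = 1.75`, illustrating how smaller discount factors down-weight future rewards.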
We envision counterfactual states being used in a replay environment in which a human user observes the agent as it executes its learned policy. At key frames in the replay, the user can ask the agent to generate counterfactual states which help the user determine if the agent has captured relevant aspects of the visual input for its decision making. Our approach relies on a novel deep generative architecture to create counterfactual states. Past work on counterfactuals in visual input environments has relied on other techniques like part-swapping with a distractor image (Goyal et al., 2019) or region in-filling (Chang et al., 2019) to create counterfactual explanations. In contrast, our approach is more flexible in that it can generate entire counterfactual state images on demand by moving through the deep network’s latent space. We investigate the following research questions in this work:

1. RQ1: Can deep generative models produce high-fidelity counterfactual states that appear as if they are generated by the Atari game?
2. RQ2: Can counterfactual states help human users, who are non-experts in machine learning, understand enough of an agent’s decision making to identify a flawed agent?
3. RQ3: Can counterfactual states be more effective for helping users understand an agent’s decision-making process than a nearest neighbor baseline technique?

Our contributions are thus twofold. First, we introduce a new deep generative approach to generate counterfactual states to provide insight into an RL agent’s decision making. Second, we present the results of user studies that investigate these research questions. Our results indicate that counterfactual state explanations are indeed useful. In our studies, they have sufficient fidelity to aid non-experts in identifying flawed RL agents.

## 2 Related Work

### 2.1 Explainable Artificial Intelligence

The literature on explainable AI is vast and we briefly summarize only the most directly related work.
Much of the past work on explaining machine learning has focused on explaining what features or regions of visual input were important for a prediction or action. A large class of approaches of this type fall under saliency map techniques, which use properties of the gradient to estimate the effect of pixels on the output (e.g. Simonyan et al., 2013, Springenberg et al., 2014, Zeiler and Fergus, 2014, Selvaraju et al., 2017, Fong and Vedaldi, 2017, Shrikumar et al., 2017, Dabkowski and Gal, 2017, Sundararajan et al., 2017, Zhang et al., 2018, Greydanus et al., 2018, Qi et al., 2019). Recent work, however, has found some saliency map techniques to be problematic. For instance, Adebayo et al. (2018) found that some saliency map techniques still produced the same results even if the model parameters or the data labels were randomized. In addition, Atrey et al. (2020) used counterfactual reasoning to evaluate if saliency maps were true explanations of an RL agent’s behavior. Their findings indicated a negative result, namely that saliency maps, by themselves, could lead to incorrect inferences by humans and should not be used as an explanation of an agent’s behavior. Other explanation techniques include extracting a simpler interpretable model from a more complex model (Craven and Shavlik, 1995), using locally interpretable models (e.g. Marco Tulio Ribeiro and Guestrin, 2018, Ribeiro et al., 2016), generating plots from Generalized Additive Models with pairwise interaction terms (Caruana et al., 2015) and using influence functions to determine which training data instances most affect the prediction (Koh and Liang, 2017). These methods, however, do not specifically identify changes in the current data instance that would result in a different outcome (or classification). These changes are a key part of the counterfactual reasoning needed to answer a “Why?” or “Why Not?” question.
One of the first methods to do so was the Contrastive Explanations Method (CEM) (Dhurandhar et al., 2018), which identified critical features or differences that would cause a data instance to be classified as another class. We found the hyperparameters for CEM to be difficult to tune to create high-fidelity counterfactuals for high-dimensional data like Atari images. As we will show in Section 5.2.1, CEM produced counterfactuals for Atari games that were filled with “snow” artifacts. CEM has also been extended to explain differences between policies in reinforcement learning (van der Waa et al., 2018). This approach focused on differences between trajectories in the environments rather than on the visual elements of a state, which is the focus of our work. Two other recent approaches focused on producing counterfactuals for images. Chang et al. (2019) introduced the FIDO algorithm, which generates counterfactuals for images by determining which regions, when filled in with values produced by a generative model, would most change the predicted class of the image. The focus of the FIDO algorithm was on producing saliency maps and they used existing generative models for the infilling. In contrast, we develop a novel generative model to produce counterfactual state explanations; the goal of our method is to generate a realistic version of the entire counterfactual state (e.g. the whole Atari game frame image) in addition to producing difference highlights which are similar to saliency maps. Furthermore, Chang et al. (2019) did not evaluate their counterfactual explanations on human users, while our user study results are one of our key contributions. Goyal et al. (2019) generated counterfactual visual explanations for images by finding the minimal number of region swaps between the original image $\bm{I}$ with class $c$ and a distractor image $\bm{I^{\prime}}$ with class $c^{\prime}$ such that the class of $\bm{I}$ would change to $c^{\prime}$.
This method suffered from the problem that their counterfactual explanations could generate images with swapped regions that looked odd, e.g. due to pose misalignment between the two images. Their user study also focused on machine teaching, which is different from our focus of assessing agents for acceptance testing.

### 2.2 Explainable Reinforcement Learning

Past work on explaining RL has focused on explaining different aspects of the RL formulation. Techniques for explaining policies include explaining policies from Markov Decision Processes with logic-based templates (Khan et al., 2009), state abstractions created through t-SNE embeddings (Mnih et al., 2015, Zahavy et al., 2016), human-interpretable predicates (Hayes and Shah, 2017), high-level, domain-specific programming languages (Verma et al., 2018) and finite state machines for RNN policies (Koul et al., 2019). Juozapaitis et al. (2019) explained decisions made by RL agents by decomposing reward functions into simpler but semantically meaningful components. Finally, Mott et al. (2019) used an attention mechanism to identify relevant parts of the game environment for decision making. Another category of techniques for explaining RL used machine teaching to help end-users understand an agent’s goals. Huang et al. (2019) taught end-users about an agent’s reward function using example trajectories chosen by an approximate-inference inverse RL algorithm. Lage et al. (2019) investigated using both inverse RL and imitation learning to produce summaries of an agent’s policy; their work highlighted the need for personalized summarization techniques, as end-users varied in their preference of one technique over the other. Other methods looked at summarizing an agent’s behavior by presenting key moments of trajectories executed by a trained agent (Amir and Amir, 2018, Huang et al., 2018, Sequeira and Gervasio, 2020).
These key moments were intended to demonstrate an agent’s capabilities, which could improve end-user trust. Key moments could be chosen by importance (Amir and Amir, 2018), i.e. the largest difference in q-value for a given state (Torrey and Taylor, 2013), or by critical states in which the q-value for one action was clearly superior to others (Huang et al., 2018). Sequeira and Gervasio (2020) explored interestingness based on the four dimensions of frequency, uncertainty, predictability and contradiction. For a summary, rather than presenting a single moment, they presented a sequence of states that varied according to a particular dimension. These methods are all fundamentally different, yet complementary to our counterfactual approach of generating explanations. More specifically, our work can be used as an explanation technique to demonstrate an agent’s proficiency once a key interaction moment has been chosen, such as by one of the aforementioned approaches.

### 2.3 Generative Deep Learning

As our counterfactuals are produced by a deep generative model, we briefly discuss related work on generative deep learning. Generative deep learning methods model the process that generates the data, thereby allowing never-before-seen data instances to be produced. Generative methods include auto-encoders (Ballard, 1987), which encode an input feature vector into a lower-dimensional latent representation, and then decode that latent representation back to the original input space. Once the auto-encoder is trained, a common method to generate novel instances is to move about in the latent space and then decode the resulting latent space representation. However, these modifications in the latent space often result in unrealistic outputs (Bengio et al., 2013) due to “holes” in the learned latent space.
This issue can be addressed by incorporating an additional loss function term that makes the latent representation match a pre-defined distribution (Kingma and Welling, 2013, Makhzani et al., 2015, Tolstikhin et al., 2018). Another class of generative deep models are adversarial networks, which have gained increased attention due to their novel applications in modeling high-resolution data, especially generating faces that do not exist (Goodfellow et al., 2014). Adversarial networks have been used to remove information predictive of the class label from a latent space. For example, Fader Networks (Lample et al., 2017) encoded an image of a flower to a lower dimensional latent representation that retained its shape and background, but did not contain information regarding its color (where color is the class label). The class label could then be combined with the latent representation to fully reconstruct the original data image, but crucially, the class label did not need to be the original one. This method could recreate many different versions of the same input that retained some properties, but had the characteristics relevant to the label changed. Thus, in this example, we can use Fader Networks to create a flower image with a specific shape and background, but with a different color from the original label.

## 3 Methodology: A Generative Deep Learning Model for Counterfactual States

### 3.1 Counterfactual States

The goal of this work is to shed some light into the decision making of a trained deep RL agent through counterfactual explanations. We are specifically interested in gaining some insight into what aspects of the visual input state $\bm{s}$ inform the choice of action $a$. Given a query state $\bm{s}$, we generate a _counterfactual state_ $\bm{s}^{\prime}$ that minimally differs in some sense from $\bm{s}$, but results in the agent performing action $a^{\prime}$ rather than action $a$. We refer to $a^{\prime}$ as the _counterfactual action_.
Figure 2: The components of a pre-trained agent.

Our approach requires a trained deep RL agent to be given to us by an external party. We now describe this agent, illustrated in Figure 2. This agent has a learned policy represented by a deep neural network. We divide this policy network into two partitions of interest (Figure 2). The first partition of the network layers, which we denote as $A$, takes a state $\bm{s}$ and maps it to a latent representation $\bm{z}=A(\bm{s})$. The vector $\bm{z}$ corresponds to the latent representation of $\bm{s}$ in the second to last fully connected layer in the network. The second partition of network layers, which we denote as $\bm{\pi}$, takes $\bm{z}$ and converts it to an action distribution $\bm{\pi}(\bm{z})$, i.e. a vector of probabilities for each action. Typically, $\bm{\pi}$ consists of a fully connected linear layer followed by a softmax. We use $\bm{\pi}(\bm{z},a)$ to refer to the probability of action $a$ in the action distribution $\bm{\pi}(\bm{z})$. We highlight the distinction in our Atari setting between a state $\bm{s}$, which is a raw Atari game image (also called a game frame), and the latent state $\bm{z}$, which is obtained from the second to last fully connected layer of the policy network. This latent layer, which we call $\bm{Z}$, is important in our diagnosis because it is used by the agent to inform its choice of actions. Our generative model is trained using a training dataset $\mathcal{X}=\{(\bm{s}_{1},\bm{a}_{1}),\ldots,(\bm{s}_{N},\bm{a}_{N})\}$ of $N$ state-action pairs, where the action vectors $\bm{a}_{i}$ are action distributions obtained from the trained agent as it executes its learned policy. In summary, the agent (which may have other components, such as the value function network; our current work only uses the policy network, but we would like to apply similar ideas to the value function network) can be viewed as the mapping $\bm{\pi}(A(\bm{s}))$.
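The partition of the policy network into $A$ and $\bm{\pi}$ can be illustrated with stand-in numpy arrays. The shapes and random weights below are placeholders for a trained agent, not the actual network.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Illustrative stand-ins for the two partitions of the policy network:
# A maps a state s to its latent representation z, and pi maps z to an
# action distribution via a linear layer + softmax (as in the text).
# The weights here are random placeholders, not a trained agent.
W_A = rng.normal(size=(16, 84 * 84))   # "feature extractor" A (flattened frame -> z)
W_pi = rng.normal(size=(4, 16))        # final linear layer of pi (z -> 4 action logits)

def A(s):
    return np.tanh(W_A @ s.ravel())    # latent z = A(s)

def pi(z):
    return softmax(W_pi @ z)           # action distribution pi(z)

s = rng.random((84, 84))               # a fake 84x84 "game frame"
z = A(s)                               # the latent state in Z
probs = pi(z)                          # pi(z): probabilities over actions
a = int(np.argmax(probs))              # the agent's chosen action
```

Here `probs[a]` plays the role of $\bm{\pi}(\bm{z},a)$, the probability of action $a$ in the action distribution.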
Our approach to counterfactual explanations is to create counterfactual states using a deep generative model, which have been shown to produce realistic images (Radford et al., 2015). Our strategy is to encode the query state $\bm{s}$ to a latent representation. Then, from this latent representation, we move in the latent space $\bm{Z}$ in a direction that increases the probability of performing the counterfactual action $a^{\prime}$. However, as previously noted by prior work, the latent space of a standard auto-encoder is filled with “holes”, and counterfactual states generated from these holes would look unrealistic (Bengio et al., 2013). To produce a latent space that is more amenable to creating representative outputs, we create a novel architecture that involves an adversarial auto-encoder (Makhzani et al., 2015) and a Wasserstein auto-encoder (Tolstikhin et al., 2018). Other approaches for navigating the latent space are possible, such as the methods presented by Jahanian et al. (2020) and Besserve et al. (2020), but these approaches do not specify an encoder, which is required in our framework to encode a query state $\bm{s}$ to a latent representation.

Figure 3: An overview of our architecture, which consists of the encoder $E$, generator $G$, discriminator $D$ and pre-trained agent (grey).

### 3.2 The Deep Network Architecture

Figure 3 depicts the architecture that we use during training. The RL agent is shaded gray to indicate that it has already been trained. First, we describe the Encoder ($E$), the Discriminator ($D$) and the Generator ($G$), which act together to produce counterfactual state images that vary depending on an input action distribution. Second, we describe the Wasserstein auto-encoder ($E_{w},D_{w}$), which produces a new latent space based on the agent’s latent space $\bm{Z}$; this new latent space enables perturbations within this space to produce meaningful counterfactual states.
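The core idea from Section 3.1 of moving through latent space toward a counterfactual action can be illustrated in a toy setting: gradient ascent on $\log\bm{\pi}(\bm{z},a^{\prime})$ for a linear-softmax policy head. The linear head, random weights, and step sizes below are simplifying assumptions; the actual method perturbs the learned Wasserstein latent space of the full architecture.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Toy sketch of "moving through the latent space" toward a counterfactual
# action: gradient ascent on log pi(a' | z) with a linear-softmax policy
# head. Random weights stand in for a trained agent.
rng = np.random.default_rng(1)
W = rng.normal(size=(4, 16))          # linear policy head: z -> 4 action logits

def ascend_to_counterfactual(z, a_cf, lr=0.05, steps=1000):
    z = z.copy()
    for _ in range(steps):
        p = softmax(W @ z)
        if int(np.argmax(p)) == a_cf:  # stop once a_cf becomes the chosen action
            break
        # analytic gradient of log p[a_cf] with respect to z
        z += lr * (W[a_cf] - p @ W)
    return z

z0 = rng.normal(size=16)               # latent encoding of the query state
a0 = int(np.argmax(softmax(W @ z0)))   # the agent's original action
a_cf = (a0 + 1) % 4                    # pick some other action as the target
z_cf = ascend_to_counterfactual(z0, a_cf)
```

Because the log-probability of a fixed action under a linear-softmax head is concave in $\bm{z}$, small gradient steps steadily raise $\bm{\pi}(\bm{z},a^{\prime})$ until the counterfactual action becomes the argmax.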
Each of these components contributes a loss term to the overall loss function used to train the network. #### 3.2.1 The Encoder, Discriminator and Generator ##### Auto-encoder Loss The encoder $E$ and generator $G$ act as an encoder-decoder pair. $E$ is a deep convolutional neural network that maps an input state $\bm{s}$ to a lower-dimensional latent representation $E(\bm{s})$. We note that the encoder $E$ is different from the encoder used by the agent's policy network and thus has a different latent space. $G$ is a deep convolutional generative neural network that creates an Atari image given its latent representation $E(\bm{s})$ and a policy vector $\bm{\pi}(\bm{z})$ (where $\bm{z}=A(\bm{s})$). The auto-encoding loss function of $E$ and $G$ is the mean squared error (MSE) function: $L_{AE}=\frac{1}{|\mathcal{X}|}\sum_{(\bm{s},\bm{a})\in\mathcal{X}}||G(E(\bm{s}),\bm{\pi}(A(\bm{s})))-\bm{s}||^{2}_{2}$ (1) To generate counterfactual states, we want to create a new image by changing the action distribution $\bm{\pi}(A(\bm{s}))$ to reflect the desired counterfactual action $a^{\prime}$. However, in our experiments, we found that using the loss function $L_{AE}$ by itself causes $G$ to ignore $\bm{\pi}(A(\bm{s}))$ and use only $E(\bm{s})$; this behavior occurs because the loss function encourages reconstruction of $\bm{s}$, which can be achieved with only the encoding $E(\bm{s})$ and without $\bm{\pi}(A(\bm{s}))$. To make the generator conditioned on the action distribution, we add an adversarial loss term using a discriminator $D$. ##### Discriminator Loss To ensure that $\bm{\pi}(\bm{z})$ is not ignored, we cause the encoder to create an action-invariant representation $E(\bm{s})$. By action-invariant, we mean that the representation $E(\bm{s})$ no longer captures aspects of the state $\bm{s}$ that inform the choice of action.
Having done so, adding $\bm{\pi}(\bm{z})$ as an input to $G$, along with $E(\bm{s})$, provides the necessary information that allows $G$ to recreate the effects of $\bm{\pi}$. To create an action-invariant representation, we perform adversarial training on the latent space, similar to the approach taken by Lample et al. (2017). We thus add a discriminator $D$ that is trained to predict the full action distribution $\bm{\pi}(\bm{z})$ given $E(\bm{s})$. The action-invariant latent representation is learned by $E$ such that $D$ is unable to predict the true $\bm{\pi}(\bm{z})$ from our agent. As in Generative Adversarial Networks (GANs) (Goodfellow et al., 2014), this setting corresponds to a two-player game in which $D$ aims to maximize its ability to identify the action distribution, and $E$ aims to prevent $D$ from being a good discriminator. The discriminator $D$ approximates $\bm{\pi}(\bm{z})$ given the encoded state $E(\bm{s})$, and is trained with the MSE loss shown below: $L_{D}=\frac{1}{|\mathcal{X}|}\sum_{(\bm{s},\bm{a})\in\mathcal{X}}||D(E(\bm{s}))-\bm{\pi}(A(\bm{s}))||^{2}_{2}$ (2) ##### Adversarial Loss The objective of the encoder $E$ is now to learn a latent representation that optimizes two objectives. The first objective causes the generator to reconstruct the state $\bm{s}$ given $E(\bm{s})$ and $\bm{\pi}(A(\bm{s}))$, while the second objective causes the discriminator to be unable to predict $\bm{\pi}(A(\bm{s}))$ given $E(\bm{s})$. To induce this inability in $D$, we want $E$ to maximize the entropy $H(D(E(\bm{s})))$, where $H(\bm{p})=-\sum_{i}p_{i}\log(p_{i})$. Therefore, the adversarial loss can be written as: $L_{Adv}=\frac{\lambda}{|\mathcal{X}|}\sum_{(\bm{s},\bm{a})\in\mathcal{X}}-H(D(E(\bm{s})))$ (3) The hyper-parameter $\lambda>0$ weights the importance of this adversarial loss in the overall loss function.
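A minimal numpy sketch of the three loss terms (Eqs. 1-3) on a toy one-example batch; all numbers below are illustrative assumptions:

```python
import numpy as np

def mse(a, b):
    return np.mean((a - b) ** 2)

def entropy(p):
    # H(p) = -sum_i p_i log p_i, clipped for numerical safety
    p = np.clip(p, 1e-12, 1.0)
    return -np.sum(p * np.log(p))

# A toy "batch" of one example:
s         = np.array([0.2, 0.8, 0.5])    # state (flattened pixels)
recon     = np.array([0.25, 0.75, 0.5])  # G(E(s), pi(A(s)))
agent_pi  = np.array([0.7, 0.2, 0.1])    # pi(A(s)) from the frozen agent
disc_pred = np.array([0.4, 0.35, 0.25])  # D(E(s))
lam = 50.0                               # lambda from Eq. (3)

L_AE  = mse(recon, s)                    # Eq. (1): reconstruction loss
L_D   = mse(disc_pred, agent_pi)         # Eq. (2): discriminator loss
L_Adv = lam * (-entropy(disc_pred))      # Eq. (3): encoder's adversarial term
```

Note that $L_{Adv}$ is the negative entropy, so minimizing it pushes $D(E(\bm{s}))$ toward a uniform, uninformative prediction.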
A larger $\lambda$ amplifies the importance of a high-entropy $D(E(\bm{s}))$, which in turn reduces the amount of action-related information in $E(\bm{s})$ and, if pushed to the extreme, results in the generator $G$ producing unrealistic game frames. On the other hand, small values of $\lambda$ lower $G$'s reliance on the input $\bm{\pi}(\bm{z})$, resulting in small changes to the game state when $\bm{\pi}(\bm{z})$ is modified. For an analysis of the effects of varying $\lambda$, see Figure 18. #### 3.2.2 Wasserstein Autoencoder The counterfactual states require a notion of closeness between the query state $\bm{s}$ and the counterfactual state $\bm{s}^{\prime}$. This notion of closeness can be measured in terms of distance in the agent's latent space $\bm{Z}$. We want to create a counterfactual state in the latent space $\bm{Z}$ because it directly influences the action distribution $\bm{\pi}$. We perform gradient descent in this feature space with respect to our target action $a^{\prime}$ to produce a new $\bm{\pi}$ that has an increased probability of the counterfactual action $a^{\prime}$. However, as previously mentioned, moving about in the standard auto-encoder's latent representation can result in unrealistic counterfactuals (Bengio et al., 2013). To avoid this problem, we map $\bm{Z}$ to a lower-dimensional manifold $\bm{Z_{W}}$ that is more compact and better behaved for producing representative counterfactuals. We use a Wasserstein auto-encoder (WAE) to learn a mapping from the agent's original latent space to a well-behaved manifold (Tolstikhin et al., 2018). By using the concept of optimal transport, WAEs can learn not just a low-dimensional embedding, but one in which data points retain the notion of closeness they had in the original feature space, so that likely data points remain close together. This closeness-preserving property of the WAE plays an important role when creating an action distribution vector $\bm{\pi}(\bm{z})$.
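The WAE's regularizer can be estimated from samples. A minimal numpy sketch, assuming an inverse multiquadratic kernel and toy 8-dimensional latents (both assumptions for illustration):

```python
import numpy as np

def imq_kernel(x, y, c=1.0):
    # inverse multiquadratic kernel k(x, y) = c / (c + ||x - y||^2)
    return c / (c + np.sum((x - y) ** 2))

def mmd2(xs, ys, c=1.0):
    # Plug-in (biased) estimate of squared MMD between two sample sets,
    # e.g. samples from a latent prior vs. encoded points E_w(A(s)).
    k_xx = np.mean([imq_kernel(a, b, c) for a in xs for b in xs])
    k_yy = np.mean([imq_kernel(a, b, c) for a in ys for b in ys])
    k_xy = np.mean([imq_kernel(a, b, c) for a in xs for b in ys])
    return k_xx + k_yy - 2.0 * k_xy

rng = np.random.default_rng(2)
prior   = rng.normal(size=(64, 8))        # samples from a latent prior
matched = rng.normal(size=(64, 8))        # "encoded" samples, same law
shifted = rng.normal(size=(64, 8)) + 3.0  # a badly mismatched encoder
```

Under this estimate, a well-matched encoder yields a small penalty while a mismatched one is penalized heavily, which is what keeps the learned manifold well behaved.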
In our counterfactual setting, we want to investigate the effect of performing action $a^{\prime}$. However, we cannot simply convert $a^{\prime}$ to an action distribution vector and assign a probability of 1 to the corresponding component in this vector, as this approach could result in unrepresentative and low-fidelity images. Instead, we follow a gradient in the $\bm{Z_{W}}$ space, which produces action distribution vectors that are more representative of those produced by the RL agent. This process, in turn, enables the Generator $G$ to produce more realistic images. Figure 4: The Wasserstein auto-encoder (shown as the pair $E_{W}$ and $D_{W}$) approximates the distribution of internal agent states $\bm{z}$. We train a WAE, with encoder $E_{W}$ and decoder $D_{W}$, on data instances represented in the agent's latent space $\bm{Z}$ (see Figure 4). We use an MSE loss regularized by the Maximum Mean Discrepancy (MMD): $L_{WAE}=\frac{1}{|\mathcal{X}|}\sum_{(\bm{s},\bm{a})\in\mathcal{X}}\left\|D_{W}(E_{W}(A(\bm{s})))-A(\bm{s})\right\|^{2}_{2}\ +\,MMD_{k}(P_{Z},Q_{Z})$ (4) where $MMD_{k}(P_{Z},Q_{Z})=\left\|\int_{Z}k(z,\cdot)\,dP_{Z}(z)-\int_{Z}k(z,\cdot)\,dQ_{Z}(z)\right\|_{\mathcal{H}_{k}}$ (5) Here $P_{Z}$ is a prior distribution over the WAE latent space, $Q_{Z}$ is the distribution of encoded points $E_{W}(A(\bm{s}))$, and $\mathcal{H}_{k}$ is a reproducing kernel Hilbert space; in our work, an inverse multiquadratic kernel is used (Tolstikhin et al., 2018). #### 3.2.3 Training We let a pre-trained agent play the game with $\epsilon$-greedy exploration and train with the resulting dataset $\mathcal{X}=\{(\bm{s}_{1},\bm{a}_{1}),\ldots,(\bm{s}_{N},\bm{a}_{N})\}$. We train with the overall loss function $L=L_{AE}+L_{D}+L_{Adv}+L_{WAE}$. The loss function is minimized at each game time step with stochastic gradient descent using an Adam optimizer (Kingma and Ba, 2014). #### 3.2.4 Loss Function Clipping Generative models have been shown to have great difficulty in retaining small objects (Alvernaz and Togelius, 2017).
We follow Kaiser et al. (2020) in using loss clipping, defined as $\max(Loss,C)$ for a constant $C$. This clipping is applied only to our auto-encoder loss, and it is critical because otherwise the many small gradients from easy-to-predict background pixels outweigh the cost of mispredicting the hard-to-encode small objects. In our setting, we find that this loss clipping ensures the retention of small but key objects during auto-encoding and the creation of these objects when generating counterfactual states, such as the bullets in the Atari game Space Invaders. ### 3.3 Generating Counterfactuals Our goal is to generate counterfactual images that closely resemble real states of the game environment but result in the agent taking action $a^{\prime}$ instead of action $a$. To identify the necessary elements of the state that would need to be changed, we require that the generated counterfactual state $\bm{s^{\prime}}$ is minimally changed from the original query state $\bm{s}$. Similar to Neal et al. (2018), we formulate this process as an optimization: minimize $\displaystyle||E_{w}(A(\bm{s}))-\bm{z_{w}^{*}}||_{2}^{2}$ subject to $\displaystyle\operatorname*{arg\,max}_{a\in\mathcal{A}}\ \ \bm{\pi}(D_{W}(\bm{z_{w}^{*}}),a)=a^{\prime}$ where $\bm{s}$ is the given query state, $\mathcal{A}$ is the set of actions, and $\bm{z_{w}^{*}}$ is a latent point representing a possible internal state of the agent. This optimization can be relaxed as follows: $\bm{z_{w}^{*}}=\operatorname*{arg\,min}_{\bm{z_{w}}}\Big\{||\bm{z_{w}}-E_{w}(A(\bm{s}))||_{2}^{2}+\log\left(1-\bm{\pi}(D_{W}(\bm{z_{w}}),a^{\prime})\right)\Big\}$ (6) where $\bm{\pi}(\bm{z},a)$ is the probability of the agent taking a discrete action $a$ on the counterfactual state representation $\bm{z}$. By minimizing the second term, we aim to increase the probability of taking action $a^{\prime}$ and reduce the probability of taking all other actions.
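The relaxed objective can be minimized by plain gradient descent. A toy numpy sketch with assumed linear stand-ins for $D_{W}$ and the policy head (a numerical gradient keeps the sketch dependency-free; none of the weights or sizes come from the paper):

```python
import numpy as np

rng = np.random.default_rng(3)
W_dec = 0.1 * rng.normal(size=(16, 8))  # toy Wasserstein decoder D_w: z_w -> z
W_pol = 0.1 * rng.normal(size=(4, 16))  # toy policy head pi over 4 actions

def pi_prob(z, a):
    logits = W_pol @ z
    e = np.exp(logits - logits.max())
    return (e / e.sum())[a]

def objective(z_w, z_w0, a_prime):
    # Relaxed objective of Eq. (6): stay close to the encoded query point
    # E_w(A(s)) while pushing probability toward the counterfactual action.
    closeness = np.sum((z_w - z_w0) ** 2)
    return closeness + np.log(1.0 - pi_prob(W_dec @ z_w, a_prime))

def num_grad(f, x, eps=1e-5):
    g = np.zeros_like(x)
    for i in range(x.size):
        d = np.zeros_like(x)
        d[i] = eps
        g[i] = (f(x + d) - f(x - d)) / (2 * eps)
    return g

z_w0 = rng.normal(size=8)   # stand-in for E_w(A(s)) of the query state
a_prime = 2                 # desired counterfactual action
z_w = z_w0.copy()
for _ in range(200):        # plain gradient descent on the relaxed objective
    z_w = z_w - 0.05 * num_grad(lambda x: objective(x, z_w0, a_prime), z_w)

p_before = pi_prob(W_dec @ z_w0, a_prime)
p_after = pi_prob(W_dec @ z_w, a_prime)
```

The quadratic term anchors the solution near the query encoding, so the probability of $a^{\prime}$ rises only as far as a nearby latent point allows.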
To generate a counterfactual state, we select a state from the training set and encode it to a Wasserstein latent point $\bm{z_{w}}=E_{W}(A(\bm{s}))$. We then minimize Equation 6 through gradient descent with respect to $\bm{z_{w}}$ to find $\bm{z_{w}^{*}}$, decode the latent point to create a new $\bm{\pi}(\bm{z})$, and pass this to the generator, along with $E(\bm{s})$, to create the counterfactual state $\bm{s^{\prime}}$. ### 3.4 Experimental Setup The pre-trained agent is a deep convolutional feed-forward network trained with Asynchronous Advantage Actor-Critic (A3C) (Mnih et al., 2015) to maximize score in an Atari game. Games are played with a fixed frame-skip of 8 (7 for Space Invaders). The network takes a set of 4 concatenated monochrome frames as input and is trained to maximize game score using the A3C algorithm. We decompose the agent into two functions: $A(\bm{s})$, which takes as input 4 concatenated video frames and produces a 256-dimensional vector $\bm{z}$, and $\bm{\pi}(\bm{z})$, which outputs a distribution among actions. The frames are downsampled and cropped to 80x80, with values normalized to [0,1]. This input is processed by 4 convolutional layers (each with 32 filters, kernel sizes of 3, strides of 2, and paddings of 1), followed by a fully connected layer of size 256 and a final fully connected layer of size $|\mathcal{A}|+1$, where $|\mathcal{A}|$ is the action space size. We apply a softmax activation to the first $|\mathcal{A}|$ neurons to obtain the action distribution $\bm{\pi}(\bm{s})$ and use the last neuron to predict the value, $V(\bm{s})$. The A3C algorithm was run with a learning rate of $\alpha=10^{-4}$, a discount factor of $\gamma=0.99$, and a policy loss computed using Generalized Advantage Estimation with $\lambda=1.0$. We find that convergence is more difficult with such a large frame skip, so each policy was trained asynchronously for a total of 50 million frames.
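As a quick sanity check of the stated layer hyper-parameters, the spatial dimensions of the convolutional stack can be traced with the standard output-size formula (the action-space size of 6 below is an assumption for illustration):

```python
def conv_out(size, kernel=3, stride=2, padding=1):
    # standard output-size formula for a convolutional layer
    return (size + 2 * padding - kernel) // stride + 1

size = 80                  # input frames are 80x80
for _ in range(4):         # 4 conv layers: 32 filters, k=3, s=2, p=1
    size = conv_out(size)

flat = 32 * size * size    # flattened features entering the FC layer
n_actions = 6              # assumed |A|, e.g. a minimal Atari action set
head = n_actions + 1       # |A| policy logits plus one value output
```

The spatial size halves at each layer (80, 40, 20, 10, 5), so 800 features feed the 256-unit fully connected layer.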
When training our generative model, we do not downscale or greyscale the game state; we pass in the current game time step as a 3-channel RGB image. To generate the dataset $\mathcal{X}$, we set the $\epsilon$ exploration value to $0.2$ and have the agent play for 25 million environment steps. #### 3.4.1 Network Details The encoder $E$ consists of 6 convolutional layers followed by 2 fully-connected layers with LeakyReLU activations and batch normalization. The output $E(\bm{s})$ is a 16-dimensional vector. For most of our agents, we find that a value of $\lambda=50$ enforces a good trade-off between state reconstruction and reliance on $\bm{\pi}(\bm{z})$. The generator $G$ consists of one fully-connected layer followed by 6 transposed convolutional layers, all with LeakyReLU activations and batch normalization. The encoded state $E(\bm{s})$ and the action distribution $\bm{\pi}(\bm{z})$ are fed to the first layer of the generator. Additionally, following the recommendation of Lample et al. (2017), $\bm{\pi}(\bm{z})$ is appended as an additional input channel to each subsequent layer, which ensures that $G$ learns to depend on the values of $\bm{\pi}(\bm{z})$ for image creation when $\bm{\pi}(\bm{z})$ is modified during counterfactual generation. The discriminator $D$ consists of two fully-connected layers followed by a softmax function, and outputs a distribution among actions with the same dimensionality as $\bm{\pi}(\bm{z})$. The Wasserstein encoder $E_{w}$ consists of 3 fully-connected layers mapping $\bm{z}$ to a 128-dimensional vector $\bm{z}_{w}$, normalized such that $\left\|\bm{z_{w}}\right\|_{2}=1$. Each layer has dimensionality 256, except the output of the third layer, which is 128. Additionally, the first two layers are followed by batch normalization and leaky ReLU with a leak of $0.2$.
The corresponding Wasserstein decoder $D_{w}$ is symmetric to $E_{w}$, with batch normalization and leaky ReLU after the first two layers, and maps $\bm{z_{w}}$ back to $\bm{z}$. #### 3.4.2 Training Details The encoder, generator, and discriminator are all trained through stochastic gradient descent using an Adam optimizer, with parameters $\alpha=10^{-4},\beta_{1}=0,\beta_{2}=0.9$. These networks were typically trained on 25 million game states to achieve high-fidelity reconstructions, but we found that even a tenth of the game states was enough to produce meaningful counterfactual states. We set the max-loss clipping constant $C=0.0001$, meaning that if a reconstructed pixel is within 2 intensity values (out of 0-255) of the original, its gradients are ignored. When training the agent, we use the current time step and the previous 3 time steps concatenated to represent the state. For our generative model, we only use the current state. The Wasserstein auto-encoder was trained with Adam optimizers with the same learning rate $\alpha=10^{-4}$ and the default $\beta$ parameters. Training was performed for 15 million frames, after which we found that selecting actions from $\bm{\pi}(D_{w}(E_{w}(A(\bm{s}))))$ consistently achieved the same average game score as the original agent. All models are constructed and trained using PyTorch (Paszke et al., 2019). For more information about our architecture and training parameters, our code can be accessed at: https://github.com/mattolson93/counterfactual-state-explanations/ #### 3.4.3 Creating Counterfactual State Highlights A counterfactual state often contains small changes that are difficult to notice without careful inspection, so we mimic the saliency map generation process in Greydanus et al. (2018) to highlight the difference between the original and counterfactual state. We take the absolute difference between the original state $\bm{s}$ and counterfactual state $\bm{s^{\prime}}$ to create a counterfactual mask $\bm{m_{c}}=||\bm{s}-\bm{s^{\prime}}||_{1}$.
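The highlight computation can be sketched in numpy; here a simple box blur stands in for a Gaussian blur, and all array sizes and pixel values are toy assumptions:

```python
import numpy as np

def box_blur(img, radius=1):
    # simple box blur standing in for a Gaussian blur
    padded = np.pad(img, radius, mode="edge")
    out = np.zeros_like(img, dtype=float)
    n, m = img.shape
    for dy in range(2 * radius + 1):
        for dx in range(2 * radius + 1):
            out += padded[dy:dy + n, dx:dx + m]
    return out / (2 * radius + 1) ** 2

s = np.zeros((8, 8))
s[4, 4] = 1.0                    # toy original state with one bright pixel
s_cf = np.zeros((8, 8))          # toy counterfactual state: pixel removed

mask = np.abs(s - s_cf)          # per-pixel mask m_c = |s - s'|
blurred = box_blur(mask)

# overlay the blurred mask on a single colour channel of the original
# greyscale state (here the third channel, i.e. blue)
highlight = np.stack([s, s, np.clip(s + blurred, 0.0, 1.0)], axis=-1)
```

The blur spreads a one-pixel change over its neighbourhood, which is what makes small changes visible to participants.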
For further clarity of the changes, we apply a Gaussian blur over the mask. Lastly, we set the blurred mask to a single color channel and combine this color mask with the original state to get the highlights. In our experiments, the highlights are in different colors for different games (e.g., blue for Space Invaders and red for Qbert), as we want colors that contrast starkly with the color scheme of the game. ## 4 Methodology: User Studies In general, evaluating explanations is a challenging problem, and counterfactual explanations are particularly difficult. A good counterfactual explanation helps humans understand why an agent performed a particular action. This human-based criterion is infeasible to capture with quantitative metrics. For instance, using the probability $\bm{\pi}(A(\bm{s^{\prime}}),a^{\prime})$ as a quantitative metric for a counterfactual state $\bm{s}^{\prime}$ is misleading, because this probability can be high for some Atari images that humans can immediately recognize as not generated by the game itself, and also high for adversarial examples with imperceptible changes to the original state $\bm{s}$. Since evaluating counterfactuals requires human inspection, we designed two user studies. In the first user study, we evaluated the fidelity of our counterfactual states to the game. By fidelity, we refer to how well the counterfactual images appear to be generated by the game itself rather than by a generative deep learning model. In the second user study, we investigated whether our counterfactual states could help humans understand enough of an agent's decision making to perform the downstream task of identifying a flawed RL agent. ### 4.1 User Study 1: Fidelity of Counterfactual States (RQ1) In order to evaluate the fidelity of our counterfactual states, we needed to create baseline methods for comparison.
First, we experimented with using pertinent negatives from the Contrastive Explanation Method (CEM) (Dhurandhar et al., 2018) as counterfactuals. These pertinent negatives highlight absent features that would cause the agent to select an alternate action. We generated pertinent negatives from Atari states with pixels as features, and interpreted them as counterfactual states. We performed an extensive search over hyper-parameters to generate high-fidelity states, but found CEM very difficult to tune due to the high-dimensional nature of Atari images. The generated counterfactual states were either identical to the original query state or had obvious “snow” artifacts, as shown in Figure 5, making them of too low quality to serve as a reasonable baseline for our user study. Figure 5: Counterfactual states generated using the Contrastive Explanation Method with three choices of parameters on different states. Images are in black and white because the original CEM source code operates on the direct input to the agent: down-scaled, grey images. We then created a baseline method consisting of counterfactual images from an ablated version of our generative model. In the ablated version of the network, the encoder, discriminator, and Wasserstein auto-encoder were removed, and the generator was trained with the MSE loss to reconstruct $\bm{s}$ given $\bm{z}$ as input. Counterfactual images were generated by performing gradient descent with respect to $\bm{z}$ to maximize $\bm{\pi}(\bm{z},a^{\prime})$ for a counterfactual action $a^{\prime}$. We found that counterfactual states generated in this way did not always construct a perfectly convincing game state, as shown in Figure 6, but they were of sufficient quality to use as a baseline in our user study. Appendix 4 details other ablation experiments, which reveal the negative effects of removing any specific component from our architecture. Figure 6: Three examples of counterfactual states generated using the ablated model.
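The ablated baseline's generation step, gradient descent directly in $\bm{z}$ to maximize $\bm{\pi}(\bm{z},a^{\prime})$, can be sketched with a toy linear policy head (all weights and sizes are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(5)
W_pol = 0.1 * rng.normal(size=(4, 16))  # toy policy head pi over latent z

def pi_prob(z, a):
    logits = W_pol @ z
    e = np.exp(logits - logits.max())
    return (e / e.sum())[a]

def num_grad(f, x, eps=1e-5):
    g = np.zeros_like(x)
    for i in range(x.size):
        d = np.zeros_like(x)
        d[i] = eps
        g[i] = (f(x + d) - f(x - d)) / (2 * eps)
    return g

z = rng.normal(size=16)      # latent of the query state
a_prime = 1                  # counterfactual action to encourage
p_before = pi_prob(z, a_prime)

# gradient descent on -log pi(z, a') directly in z: there is no encoder
# constraint and no Wasserstein manifold, which is the ablation's weakness
for _ in range(100):
    z = z - 0.1 * num_grad(lambda x: -np.log(pi_prob(x, a_prime)), z)

p_after = pi_prob(z, a_prime)
```

Because nothing keeps $\bm{z}$ near the data manifold, the decoded frames can drift toward unconvincing states, as seen in Figure 6.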
Finally, we also included images from the game itself. In summary, the images in our first user study were generated by three different sources: 10 from the actual game, 10 from our counterfactual state explanation method, and 10 from our ablated network. These images were randomly sorted for each user. We evaluated our counterfactual state explanations through a user study in our lab with 30 participants (20 male, 10 female) who were not experts in machine learning; approximately half were undergraduates and the others were members of the local community. 80% were between the ages of 18-30, 10% were between 30-50, and the remaining 10% were between 50-60. We chose to focus our study on Space Invaders because it is straightforward to learn for a participant unfamiliar with video games. To familiarize participants with Space Invaders, we started the study by having participants play the game for 5 minutes. Participants then rated the fidelity of 30 randomly ordered game images on a Likert scale from 1 to 6: (1) Completely fake, (2) Most parts fake, (3) More than half fake, (4) More than half real, (5) Most parts real, and (6) Completely real. ### 4.2 User Study 2: Using Counterfactuals to Detect a Flawed Agent (RQs 2 and 3) Our second user study was intended to evaluate the effectiveness of our counterfactual state explanations. Our focus was on a real-world setting in which a user, who was not a machine learning expert, needed to assess an RL agent that was about to be deployed. We designed an objective task that relied on the user's understanding of the agent's decision-making process gained from the counterfactual explanations. The task required participants to identify which of two RL agents was flawed based on the counterfactual explanations provided. As in the first user study, we chose Space Invaders, since it was quick to learn and its optimal strategy was not immediately obvious.
Since we recruited non-experts in AI and machine learning, we henceforth refer to the RL agent as an AI agent in our user study for simplicity. The counterfactual explanation's effectiveness was measured with a (2x2)x2 mixed factorial design, as it included both a within-subjects comparison and a between-subjects comparison. The within-subjects comparison involved the two independent variables of RL agent type (flawed versus normal) and explanation presence (with and without explanation). Thus, all participants were shown the behavior of the flawed and the normal agents, both with and without counterfactual explanations. The between-subjects comparison contrasted counterfactual explanation methods; one group of participants was shown a baseline counterfactual explanation method based on nearest neighbors, and the other group was shown our counterfactual state explanation method. #### 4.2.1 Experimental Design The participants were presented with the task of identifying which of the two agents was flawed. We designed our two agents such that their average scores on the game were almost equal, so the score could not be used to determine which agent was flawed. In addition, humans could not identify the flawed agent by simply watching the agents play the game. Consequently, the counterfactual explanations were the participants' main source of insight into the agents' decision-making. An alternative approach for evaluating the effectiveness of counterfactual explanations would have been to have participants predict an agent's action in a new state. While action prediction may be feasible in some environments (e.g., Madumal et al. (2020)), it can be challenging in other environments such as Atari games and real-time strategy games. Anderson et al. (2020) showed that using explanations to predict future actions was difficult, sometimes even worse than random guessing, because AI agents could be successful in these games in ways that were unintuitive to humans.
Our “normal” agent was the agent described in Section 3.4. For the flawed agent, we tried designing agents that were blind to different parts of the game, but many of these possibilities were easy for humans to detect. Blocking half of the screen resulted in the agent only playing in the visible half. Removing the barriers had no effect, as the agent eventually learned their locations during training. Removing the bullets caused a noticeable behavior change, as the agent hid under the barriers for the majority of the game. Finally, we were unable to train an agent that performed well by removing the enemies from the observations. We ultimately settled on a flawed Space Invaders agent created by masking the region of the screen containing the green ship, effectively making the agent unaware of its own ship's position. This flaw was subtle and difficult to detect without the aid of counterfactual explanations. This flawed agent was harder to train than a normal Space Invaders agent and thus required 160 million game steps to achieve sufficiently good performance. In addition, for our flawed agent, we set the adversarial loss hyper-parameter $\lambda=100$ to make the generated counterfactual states have visually evident changes from the original query state. #### 4.2.2 Conditions This study involved two conditions corresponding to different counterfactual explanation methods. The first condition used a naive baseline based on a simple nearest neighbor approach. The second condition involved our counterfactual state explanations. ##### Nearest Neighbor Counterfactual Explanations (NNCE) For this approach, the agent played the game for $N=25$ million time steps with $\epsilon$-greedy exploration to produce a game trace dataset $\mathcal{\bm{D}}$, which we used for nearest neighbor selection.
For each step, we stored in $\mathcal{\bm{D}}$ the state $\bm{s}$, the representation $\bm{z}=A(\bm{s})$, and the action taken $a$, resulting in a dataset $\mathcal{D}=\{(\bm{s}_{1},\bm{z}_{1},a_{1}),\ldots,(\bm{s}_{N},\bm{z}_{N},a_{N})\}$. To generate a counterfactual from this dataset, the agent played a new game, and for the desired query state $\bm{s}$ we found the nearest latent point $\bm{z}^{*}\in\mathcal{D}$ to the current point $\bm{z}=A(\bm{s})$ where the agent took the desired action $a^{\prime}$; we used the $L_{2}$ distance to determine closeness. We then displayed the associated state $\bm{s}^{*}$ from the triplet $(\bm{s}^{*},\bm{z}^{*},a^{\prime})$ as the closest counterfactual state where the agent took a different action $a^{\prime}$. Note that the images from the nearest neighbor approach were always faithful to the game, as they were actual game frames from the Atari game. However, even with a very large game trace dataset of size 25 million, the nearest neighbor approach did not always retrieve a game state that was “close” to the query state. In contrast, our counterfactual state explanations were always close to the query state by design, but they may not always have complete fidelity to the game. ##### Choosing Counterfactual Query States and Counterfactual Actions The specific images serving as query states to present to participants for our counterfactual state explanations were objectively chosen using a heuristic based on the entropy of the policy vector $\bm{\pi}(A(\bm{s}))$ of state $\bm{s}$; this entropy score has been used in the past for choosing key frames for establishing trust (Huang et al., 2018). For diversity, if an image at time $t$ was selected, we did not allow images to be selected until after time $t+10$. This restriction was especially important for diversity in the counterfactual states chosen for the flawed agent, as it had very low entropy in its policy vector in the initial states, but higher entropy later on.
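The entropy-based query-state selection can be sketched as follows; this toy version uses random stand-in policy vectors and a symmetric-spacing variant of the time-step restriction:

```python
import numpy as np

rng = np.random.default_rng(6)
T = 400
policies = rng.dirichlet(np.ones(4), size=T)  # toy pi(A(s_t)) per time step

def entropy(p):
    p = np.clip(p, 1e-12, 1.0)
    return -np.sum(p * np.log(p))

scores = np.array([entropy(p) for p in policies])

# Greedily pick the highest-entropy states, enforcing a minimum spacing
# of 10 time steps between any two selected query states.
selected = []
for t in np.argsort(-scores):
    if all(abs(int(t) - u) > 10 for u in selected):
        selected.append(int(t))
    if len(selected) == 10:
        break
```

High-entropy frames are states where the policy is least decisive, which is exactly where an explanation of the agent's choice is most informative.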
As Space Invaders is a relatively simple game in which the aliens move faster as time progresses, we only considered diversity in terms of the progression of time within a round, and we selected query states at different points in time. All query states used in the study can be seen in Appendix Figures 22-25. We thus emphasize that the counterfactual states and corresponding actions we presented to participants were not hand-picked; rather, they were selected objectively by our heuristic. For our counterfactual state explanations, once a query state was selected, we chose the counterfactual action $a^{\prime}$ as the one that involved the largest $L_{2}$ change between the original Wasserstein latent state $\bm{z_{w}}$ and the counterfactual Wasserstein latent state $\bm{{z_{w}}^{\prime}}$ (ignoring the no-operation action). For NNCE in our user study, we used the same entropy-based state selection heuristic to determine which query states to show to the participant, thereby ensuring that query states were identical between the two conditions. What varied between the two conditions was the explanation process, which selects the counterfactual action $a^{\prime}$ and the resulting counterfactual state $\bm{s^{\prime}}$. The method we used for selecting the counterfactual action $a^{\prime}$ for NNCE differs from the heuristic used by our counterfactual state explanations. To select the counterfactual action in NNCE, we find the closest nearest neighbor in the latent space $\bm{z}$ (via $L_{2}$ distance) where the agent performs a different action $a^{\prime}\neq a$. The action selection heuristics were slightly different between the two conditions in order to maximize the quality of the counterfactual states selected by each method.
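The NNCE retrieval step, a nearest-neighbor lookup in the agent's latent space under $L_{2}$ distance restricted to different-action entries, can be sketched as follows (dataset sizes and dimensions are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy game-trace dataset D: latent points z_i paired with the action a_i
# the agent took there.
Z = rng.normal(size=(100, 16))
acts = rng.integers(0, 4, size=100)

def nnce_index(z_query, a_query):
    # Index of the nearest stored latent point (L2 distance) where the
    # agent took a different action; its stored frame s* is the NNCE.
    dists = np.linalg.norm(Z - z_query, axis=1)
    dists[acts == a_query] = np.inf   # exclude same-action entries
    return int(np.argmin(dists))

idx = nnce_index(Z[0], int(acts[0]))
```

Because the retrieved state is an actual stored frame, it is always faithful to the game, but nothing bounds how visually far it is from the query.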
The two methods differed in how far the counterfactual images were from the query state, due to the different latent spaces used by the two methods and the granularity of their movement within those spaces. Our counterfactual state explanations operated in a Wasserstein latent space. Because they were created by a generative process rather than retrieved from a dataset, using the closest Wasserstein point with a different action often caused very little change or no change at all. In contrast, the NNCE method operated in the latent space of the pre-trained agent, which does not have a Wasserstein latent space. The NNCEs used pre-existing images from the dataset $\mathcal{\bm{D}}$, which were usually further away from the query state (visually) than most counterfactual states created by our method. Had we chosen the counterfactual action that involved the largest $L_{2}$ change in latent space, NNCEs would have produced images that were often dramatically different from the query state, which would likely have produced worse results in our user study. Instead, to give the NNCE condition the best possible counterfactual images (based on visual inspection), we ultimately selected the counterfactual action to be the action (different from the original action $a$) associated with the nearest neighbor with the closest $L_{2}$ distance in latent space. #### 4.2.3 Participants and Procedure We recruited 60 participants at Oregon State University, with 30 participants in each condition. The target audience for our user study was people who were not experts in machine learning. Approximately half were undergraduates and the others were from the community. All participants were between the ages of 18-40; 40% were women and 60% were men. This study consisted of 6 sections: 1. Gameplay 2. Agents Analysis (pre-evaluation) 3. Tutorial 4. Evaluation (main task) 5. Agents Analysis (post-evaluation) 6.
Reflection ##### 1. Gameplay A facilitator started the study with a guided tutorial about the game rules and described the task to be performed, after which the participants were allowed to use the system. To understand the game better, all participants first played the Atari 2600 video game Space Invaders for 5 minutes. ##### 2. Agents Analysis (pre-evaluation) After having enough hands-on experience with the game, each participant watched a video of the normal agent and a video of the flawed agent, each playing one complete episode of the game from start to finish. The identities of the agents were hidden from participants. The videos were selected such that the agent cleared all enemies before they reached the bottom while avoiding all incoming bullets. We randomized the order of presentation of the normal and flawed agents. For concreteness, we described the flawed agent to the participants as an agent with a malfunction in its sensors. After viewing the videos, we asked the participants, “Which of the two AI do you believe has a malfunction?”, with their choices being “AI one”, “AI two”, or “CAN'T TELL”. We then asked the participants if they could identify which part of the game the flawed AI was blind to: the yellow aliens, the white bullets, the green ship, or the orange barriers. After answering both questions, participants were placed on a waiting screen to ensure that the next section occurred simultaneously for everyone. At this point, participants were unable to change their answers to the previous questions and were unable to view the videos for the rest of the study. The answers from this section formed the data corpus of the participants' descriptive analysis of the AI agents before they saw the explanations. Figure 7: The explanation tool used for displaying counterfactual states to participants in our user study. ##### 3.
Tutorial

The facilitator then gave a detailed guided tutorial to describe the counterfactual explanation display tool, particularly since counterfactuals are an esoteric topic for most non-experts. The display tool consisted of a tuple of 3 images: the original state where an agent took its preferred action $a$, the changed state where the agent took the action $a^{\prime}$, and the image highlights. The original state was an interactive UI element, which, if clicked, changed into a GIF that sequentially displayed the current and three previous game steps to give context for the query state. We omitted the term “counterfactual” as we found the additional vocabulary to be confusing in our preliminary studies. Our approach of presenting the original image, highlights, and the counterfactual alongside each other ties in with the causal connection of abductive reasoning and inference (Miller, 2019), where the highlights served the purpose of “fixing the eyeballs” of the participants, and the counterfactuals offered reasoning for the AI’s actions. The exact script for the tutorial is provided in the appendix section 21.

##### 4\. Evaluation (main task)

Following the tutorial, the participants evaluated 20 screenshots of the game in the display tool: 10 states for each agent selected via the heuristic described above. All 10 states were selected from the single game episode shown in the videos from earlier. An example of a screen in the explanation tool can be seen in Figure 7. For each screenshot, we asked the participant two questions. The first was “What objects do you think this AI pays attention to?”, with 4 check boxes to be potentially selected (aliens, bullets, ship, barriers), and the second was “Which explanation did you use for making your decision?”, with 6 options (Only Highlights, Mostly Highlights, Highlights and Changed State Equally, Mostly Changed State, Only Changed State, or Neither).
We presented the 20 explanations (of normal and flawed agents) in randomized order to avoid biasing the participants. The answers from this section formed the data corpus of the participants’ descriptive analysis of the AI agents after they saw the explanation.

Figure 8: An example of the results screen a user would see after completing the evaluation for the Counterfactual States condition.

##### 5\. Agents Analysis (post-evaluation)

Once the participants finished evaluating the 20 explanations, we summarized their own responses to the question “What objects do you think this AI pays attention to?” in a table and a chart, separating the two different agents and tallying the number of times the participant selected each object. We then re-asked the same questions from study section two: which AI is malfunctioning and in what way is it malfunctioning. An example of the final results screen can be seen in Figure 8, where each vertical element of the UI was hidden until the user clicked a “continue” button, to guide participants through the summary data one step at a time. We found that showing only the tallied results before re-asking the questions was the best way to get participants to focus on the explanations. In our preliminary experiments leading to the design of the final study, we found that participants were overwhelmed with data if they could go back to either look at the individual examples or re-watch the videos of the agents playing the game.

##### 6\. Reflection

We ended the study by asking the participants to perform a short written reflection after they submitted their answer, to gauge their understanding of the explanations and to elicit their opinion on the explanation. The questions included “Which parts of the explanation tool influenced your decision in determining the malfunctioning AI?” to understand what participants found helpful in the explanation, and what contributed to successfully finding the flawed agent.
We also asked the participants to describe components of the explanation, “In your own words, can you briefly explain what the 3rd image from the explanation tool is (the images titled: “AI Response Changed State”)?”, to gauge whether participants understood the concept of a counterfactual reasonably well, and to see how they had performed in the main task if they did not. Appendix D describes the content analysis applied to these two questions.

## 5 Results

### 5.1 Example Counterfactual States

We now show examples of counterfactual states for pre-trained agents in various Atari games; these examples include both high and low quality counterfactuals. In Figures 9 to 12, we show sets of images in which the left image is the original query state where the agent would take action $a$ according to its policy, the right image is the counterfactual state where the agent would take the selected action $a^{\prime}$, and the center image is the highlighted difference between the two.

#### 5.1.1 $\text{Q}^{*}$bert

(a) $a=\text{MoveUpRight}$, $a^{\prime}=\text{MoveUpLeft}$ (b) $a=\text{MoveUpRight}$, $a^{\prime}=\text{MoveDownLeft}$

Figure 9: Each row shows an example of a counterfactual state explanation for $\text{Q}^{*}$bert: Query state with action $a$ (left), counterfactual state with action $a^{\prime}$ (right), and red highlights (center).

In this game, the agent controls the orange character Q*bert, who starts each game with 3 lives at the top of a pyramid and has 5 actions for hopping diagonally from cube to cube (or staying still). Landing on a cube causes it to change color, and changing every cube to the target color allows the agent to progress to the next stage. The agent must avoid purple enemies or lose a life upon contact. Green enemies that revert cube color changes can be stopped via contact. In the top row of Figure 9, the counterfactual shows that if the up-right square were yellow (already visited), Qbert would move up-left.
In the bottom row of Figure 9, if Qbert had been higher up on the structure, the agent would jump down and left; in this example, the Qbert image is not perfectly realistic but is enough to give a sense of the agent’s decision making.

#### 5.1.2 Seaquest

(a) $a=\text{MoveUpRightAndShoot}$, $a^{\prime}=\text{MoveUpLeftAndShoot}$ (b) $a=\text{MoveUpLeftAndShoot}$, $a^{\prime}=\text{MoveUpLeft}$ (c) $a=\text{MoveLeftAndShoot}$, $a^{\prime}=\text{MoveDown}$

Figure 10: Each row shows an example counterfactual state explanation for Seaquest: Query state with action $a$ (left), counterfactual state with action $a^{\prime}$ (right), and highlights (center).

In this game, an agent must shoot torpedoes at oncoming enemies while rescuing friendly divers. In Figure 10 (top row), a new enemy must appear to the left in order for the agent to take an action that turns the submarine around while firing. Thus, the agent has an understanding of enemy spawns and submarine direction. The middle row of Figure 10 shows a scenario (best viewed on a computer) where the agent would move up and left but not shoot, because the agent would not be fully aligned with the enemy fish on the left to hit it; in addition, the submarine has already shot its torpedo in anticipation of an enemy fish appearing on the bottom right, and there can only be one torpedo on the screen at a time. Note that the torpedo is actually highlighted in red, but due to the size of the image in Figure 10, these highlights are imperceptible. Figure 10 (bottom row) shows an unrealistic counterfactual: despite never seeing two submarines in the training data, the best prediction of the Generator (given the counterfactual inputs) is to place a submarine at both locations.
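The red highlights in these figures mark where the counterfactual image differs from the query image. A minimal per-pixel sketch of this kind of difference overlay is shown below; the threshold value and the solid-red overlay are our illustrative choices, not the paper's exact procedure:

```python
import numpy as np

def highlight_difference(query, counterfactual, thresh=0.1):
    """Overlay red on pixels where the counterfactual image differs from
    the query image. Both are float arrays in [0, 1] of shape (H, W, 3)."""
    per_pixel = np.abs(query - counterfactual).max(axis=-1)  # largest channel change
    mask = per_pixel > thresh                                # changed pixels
    out = query.copy()
    out[mask] = [1.0, 0.0, 0.0]                              # paint them red
    return out, mask

# Toy 2x2 images in which only the top-left pixel changes
q = np.zeros((2, 2, 3))
c = q.copy()
c[0, 0] = 0.5
out, mask = highlight_difference(q, c)
print(int(mask.sum()))  # -> 1 changed pixel
```

Thresholding before overlaying suppresses tiny generator artefacts (e.g. slight blur) so that only substantive changes are highlighted.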
#### 5.1.3 Crazy Climber

(a) $a=\text{MoveRight}$, $a^{\prime}=\text{MoveBodyUp}$ (b) $a=\text{MoveLeft}$, $a^{\prime}=\text{MoveArmsUp}$

Figure 11: Each row shows an example counterfactual state explanation for Crazy Climber: Query state with action $a$ (left), counterfactual state with action $a^{\prime}$ (right), and highlights (center).

In this game, an agent must climb up a building while avoiding various obstacles. Figure 11 (top row) shows the original state in which the agent is in a position to move horizontally, whereas the counterfactual state shows the climber in a ready state to move vertically, as indicated by the position of its legs. Figure 11 (bottom row) demonstrates that the agent would climb up once the enemy is no longer above it. For both examples, because the climber stays in a fixed vertical position with the entire tower itself moving down, the highlights are difficult to interpret. These examples show the importance of using both the highlights and the counterfactual states: in some cases, the counterfactual states are much easier to understand than the highlights.

#### 5.1.4 Space Invaders

Figure 12: An example of a counterfactual state explanation for Space Invaders with the “normal” agent. Here, action $a=\text{MoveRightAndShoot}$ (left), counterfactual state where action $a^{\prime}=\text{MoveRight}$ (right), and the highlighted difference (center).

In this game, an agent exchanges fire with approaching enemies while taking cover underneath three barriers. Figure 12 depicts the example, which was also used in our user study. This example reveals that the agent has learned to prefer specific locations for safely lining up shots, selectively choosing enemies to shoot.

Figure 13: An example of a counterfactual state explanation for Space Invaders with the flawed agent from our second user study.
Here, action $a=\text{MoveLeftAndShoot}$ (left), counterfactual state where action $a^{\prime}=\text{MoveRight}$ (right), and the highlighted difference (center).

We also include an example of a counterfactual state explanation with the flawed agent in our second user study. Figure 13 shows that in the generated counterfactual state explanation, the flawed agent does not move the ship, as it is blind to its own ship’s location; in fact, the flawed agent never moves the ship in any of our counterfactual state explanations.

### 5.2 User Study Results

#### 5.2.1 RQ 1: Fidelity of Counterfactuals

| Ablated Version | Counterfactual State Explanations | Actual Game Score |
|---|---|---|
| 1.93 | 4.00 | 4.97 |

Table 1: Average results on a 6 point Likert scale from the fidelity user study.

In terms of fidelity, the average ratings on the 6 point Likert scale are shown in Table 1. The differences between the fidelity ratings for the counterfactual states and real states were not statistically significant ($\alpha=0.05$, p-value = 0.458, one-sided Wilcoxon signed-rank test). These results show that our counterfactual states were on average close to appearing faithful to the game states, but they were not perfect. In the next section, we will show that despite these imperfections, the counterfactuals were still useful to participants.

#### 5.2.2 RQ 2: Can counterfactual states help users identify a flawed agent?

Participants were significantly more successful at identifying the flawed agent when provided with counterfactual explanations, for both the counterfactual state explanations ($\alpha=0.05$, p-value = 0.0011, Pearson’s Chi-square test) and the NNCEs ($\alpha=0.05$, p-value = 0.0009, Pearson’s Chi-square test). This finding was further reinforced when all participants in both conditions self-reported to have found the explanation useful in the evaluation section. Only 1 participant out of 60 stated that the video in the Agents Analysis section was useful.
Instead, participants found the highlights and the counterfactuals to be more useful than the video.

Figure 14: The total selection count over all participants and all explanations regarding the self-reported usefulness of each explanation component.

In the evaluation section, for each explanation from a given counterfactual method, we asked participants to rate the usefulness of each component of the explanation on a 5 point Likert scale (1: Highlights only, 2: Mostly Highlights, 3: Both Equally, 4: Mostly Counterfactuals, 5: Counterfactuals only). For counterfactual state explanations, “Mostly Highlights” was the most common response (204/600 times; 34%) for helping participants identify the flaw in the AI. For NNCEs, “Both Equally” was the most common response (236/600 times; 39%). The full response distribution for each condition is shown in Figure 14. These results indicate that neither component in isolation was ideal. Most of the time, participants preferred having both, but with varying degrees of usefulness. We also found this result in the qualitative data from the post-task questionnaire, wherein participants in both conditions overwhelmingly self-reported to have used the highlights as a supporting artefact for the counterfactual explanations, and vice versa:

> Participant 43 in Counterfactual State condition: "I used the highlights tool primarily because it was the easiest way to see what was changing from the original state. Then, I would reference the changed state tool to see how the original changed."

> Participant 14 in Nearest Neighbor condition: "Mostly highlights, I used changed state sparingly to cement assertions from the highlights."

Participants in both conditions found the summary chart to be helpful in consolidating their ideas and facilitating recall.
For instance, two participants commented in their responses:

> Participant 35 in Counterfactual State condition: "The bar graph at the end of my responses for both AIs influenced it the most."

> Participant 16 in Nearest Neighbor condition: "The charts at the end heavily influenced my decision, because I thought the malfunctioning AI couldn’t see the barriers because they had more damage on the ship side of the barriers than the alien side, but the charts showed that that was a poor assumption because almost every time I evaluated the barriers as something they could see."

#### 5.2.3 RQ 3: Comparison of Counterfactual Methods

| | Incorrect Identification | Correct Identification | Can’t tell |
|---|---|---|---|
| Without explanation | 10 ($33\%$) | 17 ($57\%$) | 3 ($10\%$) |
| With explanation | 2 ($7\%$) | 27 ($90\%$) | 1 ($3\%$) |

Table 2: The number of participants, with and without counterfactual state explanations, who incorrectly identified the normal AI, correctly identified the flawed AI, and who were unable to tell the difference.

Participants that were provided with counterfactual state explanations identified the flawed AI with a far higher success rate than the NNCEs. Without any explanations, $57\%$ of the participants correctly identified the flawed agent (Table 2). With counterfactual state explanations, this percentage improved to $90\%$, a significant improvement ($\alpha=0.05$, p-value = $10^{-9}$, Pearson’s Chi-square test). In addition, none of these participants were able to correctly determine the specific flaw in the agent in the first Agents Analysis section. However, after using our counterfactual state explanations, $60\%$ of participants accurately diagnosed the specific flaw, which is statistically significant ($\alpha=0.05$, p-value $=$ 0, Pearson’s Chi-square test).
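To illustrate the kind of Pearson chi-square test used in these comparisons, the statistic can be computed directly from the counts in Table 2. This stdlib-only sketch is our own illustration, not the authors' analysis code, and the paper's reported p-values may come from differently configured tests:

```python
def chi2_stat(table):
    """Pearson chi-square statistic for an R x C contingency table,
    given as a list of rows of observed counts."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    n = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / n
            stat += (observed - expected) ** 2 / expected
    return stat

# Counts from Table 2: rows = without / with explanation,
# columns = incorrect / correct / can't tell
table = [[10, 17, 3], [2, 27, 1]]
print(round(chi2_stat(table), 2))  # -> 8.61
```

With $(2-1)(3-1)=2$ degrees of freedom, this statistic exceeds the $\alpha=0.05$ critical value of about 5.99, consistent with a significant difference between the two conditions.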
| | Incorrect Identification | Correct Identification | Can’t tell |
|---|---|---|---|
| Without explanation | 9 ($30\%$) | 19 ($63\%$) | 2 ($7\%$) |
| With explanation | 9 ($30\%$) | 14 ($47\%$) | 7 ($23\%$) |

Table 3: The number of participants, with and without NNCEs, who incorrectly identified the normal AI, correctly identified the flawed AI, and who were unable to tell the difference.

Figure 15: The total difference in object counts over all participants, where the _object_ refers to the Space Invaders element that a participant determines the agent pays attention to. Here, the y-axis measures the total object counts for the flawed agent minus the total object counts for the normal agent. Positive numbers indicate that participants consider the flawed agent to pay more attention to that object than the normal agent, whereas negative numbers indicate that participants consider the flawed agent to pay less attention to that object.

In contrast, NNCEs often confused participants. $63\%$ of the participants identified the flawed agent correctly with just the video (Table 3), but after viewing the explanations, this percentage dropped to $47\%$ ($\alpha=0.05$, p-value = 0.1432, Pearson’s Chi-square test). Figure 15 contains an aggregate comparison of how well participants shown NNCEs versus counterfactual state explanations could identify the specific flaw. The histogram in Figure 15 depicts the difference in object counts over all participants, where the object refers to the Space Invaders element that a participant determines the agent pays attention to. The difference is computed as the total object counts for the flawed agent minus the total object counts for the normal agent. Participants for both counterfactual approaches were able to pick up on the correct flaw, but participants shown counterfactual state explanations did so in much higher numbers than participants shown NNCEs.
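The Figure-15-style tally can be computed by summing each participant's object selections per agent and subtracting. This sketch is our own illustration of that aggregation; the toy data below is invented, and the paper's exact bookkeeping may differ:

```python
from collections import Counter

def object_count_difference(flawed_selections, normal_selections):
    """For each game object, total times participants said the flawed agent
    attends to it, minus the same total for the normal agent."""
    flawed, normal = Counter(), Counter()
    for selection in flawed_selections:   # each selection: set of checked boxes
        flawed.update(selection)
    for selection in normal_selections:
        normal.update(selection)
    objects = ["aliens", "bullets", "ship", "barriers"]
    return {obj: flawed[obj] - normal[obj] for obj in objects}

# Toy data: two participants, neither credits the flawed agent with "ship"
flawed = [{"aliens", "bullets"}, {"aliens", "barriers"}]
normal = [{"aliens", "bullets", "ship"}, {"aliens", "ship", "barriers"}]
print(object_count_difference(flawed, normal))
# -> {'aliens': 0, 'bullets': 0, 'ship': -2, 'barriers': 0}
```

A strongly negative count for one object (here "ship") is the signature of a flaw: participants consistently judge that the flawed agent ignores that object relative to the normal agent.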
Figure 16: An example of the nearest neighbor counterfactual explanation method for the normally trained agent with query state where action $a=\text{MoveRightAndShoot}$ (left), counterfactual state where action $a^{\prime}=\text{MoveLeftAndShoot}$ (right), and the highlighted difference (center).

Figure 17: An example of the nearest neighbor counterfactual explanation method for the flawed agent with query state where action $a=\text{MoveLeftAndShoot}$ (left), counterfactual state where action $a^{\prime}=\text{MoveLeft}$ (right), and the highlighted difference (center).

One of the main reasons for this decrease is that the NNCEs are inconsistent in quality, as their quality depends on the existence of an instance in the game trace dataset $\mathcal{\bm{D}}$ that is reasonably close (in latent space) to the query state. Despite a very large game trace dataset (25 million game frames) as a pool for the NNCEs, a suitable instance that can serve as a counterfactual may not exist, resulting in odd changes to the query state (e.g. an extra alien appearing on the opposite side from the agent) or counterfactuals that were extremely different from the current state (e.g. a reset of the game has occurred or many enemies have been added/removed). Examples of low quality nearest neighbor counterfactuals can be seen in Figures 16 and 17. Note that both examples have a large number of highlights. In addition, the NNCE in Figure 17 actually moves the ship, which obscures the true flaw in the agent. This reliance on finding a suitable counterfactual in the game trace dataset is a major disadvantage of nearest neighbor counterfactuals. It is likely infeasible to generate a large enough dataset to facilitate retrieval of a reasonable counterfactual for any arbitrary game state in a sufficiently complex game.
In contrast, our counterfactual state explanation generates the game frame on the fly, and even though it is not perfectly faithful to the game, it has sufficient fidelity to give meaningful insight to participants. The inconsistency in quality of the NNCEs likely contributed to the confusion of participants. In the retrospective review, participants who were given the Nearest Neighbor counterfactual explanations frequently self-reported being confused by the highlights or the counterfactual itself. For instance, when asked if the highlights helped make their decision:

> Participant 17 in Nearest Neighbor condition: "Sometimes, but it got confusing sometimes as I couldn’t tell what highlight belonged to what image, so I couldn’t get the AI’s thoughts from it"

Similarly, when asked if the counterfactual state image helped make their decision:

> Participant 26 in Nearest Neighbor condition: "No, I was not sure how it related to the decision to move or shoot."

The NNCEs seemed to be useful to only a small group of participants, as $17\%$ of the participants were able to accurately diagnose the specific flaw in the flawed agent, up from none ($\alpha=0.05$, p-value = 0.014, Pearson’s Chi-square test). This percentage is slightly higher than 12.5%, which is the probability of correctly guessing the flawed agent and correctly guessing the correct flaw purely by chance.

## 6 Discussion

In summary, participants in our first study found our counterfactual state explanations to generate game frames that were close in terms of fidelity, though not perfectly so. In our second user study, these counterfactual state explanations were of sufficient fidelity that $90\%$ of our participants could use them to identify which agent was flawed. $60\%$ of the participants were able to use our counterfactual state explanations to perform the more challenging task of diagnosing the specific flaw.
During this study, participants specifically mentioned the highlights and summary chart as being particularly useful in their decision-making, thereby suggesting that these visual elements greatly enhance counterfactual explanations. Our counterfactual state explanations were also much more effective in our user study than the nearest neighbor baseline. Participants using the counterfactual state explanations were much more successful at identifying the flawed agent as well as the specific flaw than participants using the NNCEs. In addition, even though the NNCEs were 100% faithful to the game, they were not always close to the query state. Participants found that our counterfactual state explanations, which produced images that were “close” to the original query state, were more insightful despite not having 100% fidelity. Our study also indicated that "No explanation is better than a bad explanation", as participants using the NNCEs were often confused and the number of participants correctly identifying the flawed agent actually decreased after seeing the NNCE. There are a few issues with our approach that remain an open area of investigation. First, our deep generative approach adds some artefacts when creating counterfactual states, which impacts the faithfulness of our explanation. Empirically, we found most artefacts were minor, such as blurry images, and did not seem to be a major roadblock for our participants. One of the more noticeable artefacts is how small objects, such as the shot in Space Invaders, occasionally disappear. While this issue is somewhat alleviated with max loss clipping, small objects are difficult to preserve in the counterfactual generation process. These small objects, however, could be important for other domains (e.g. Pong). It is likely that some of these artefacts could be fixed by training longer, with more data, and with better architectures.
This problem also raises an open question in representation learning about preserving small, but important, objects in images. A second issue is how to select query states from a replay such that the counterfactual states, and actions, provide the most insight to a human. Our criterion was based on heuristics, and a deeper investigation is needed, as other criteria could be used, such as those used by other methods for selecting key moments (Amir and Amir, 2018; Sequeira and Gervasio, 2020). Moreover, we chose the counterfactuals to present to the participants using a heuristic rather than allowing the participants to interactively explore the space of counterfactuals. We made this choice because many of the counterfactual actions result in no change to the image, and users need more guidance as to which counterfactual actions and states are useful. We recognize that this choice directly affects the diversity among the counterfactuals that the participants see, and may hinder the building of a sufficiently well-rounded mental model (Mothilal et al., 2020). Another area for future work is in choosing the way a counterfactual state is generated. Our counterfactual state generation was based on finding a state $\bm{s^{\prime}}$ that was minimally changed (in the latent space) from the query state $\bm{s}$ and that would result in a different action $a^{\prime}$ from $a$. The reason for the minimal change was to identify the necessary aspects of a state that would produce the action $a^{\prime}$, without distracting the user with other irrelevant elements in the image. This minimal change criterion is similar to approaches used by other recent methods for generating counterfactuals, such as the minimal-edit approach for replacing regions in an image (Goyal et al., 2019) and the search for smallest deletion / supporting regions (Chang et al., 2019). However, we could use other criteria besides minimal change to define the space of modifications to the query state.
For instance, we could permit changes that lead to specific properties on future time steps, or allow the user to help define the space of changes allowed. Finally, we recognize that our findings are specific to the visual input environment of Atari, and the success of a generative deep learning method for producing counterfactuals in other visual environments is an open question. In particular, the fidelity of the counterfactual states depends on the amount of training data available and the ability of the deep neural network to capture salient aspects of the images from that domain. While the primary application of our work is for Atari-like domains, more sophisticated auto-encoding training methods have been shown to produce high quality images in more visually rich environments (Nie et al., 2020). Therefore, we believe that there are some findings in our study that could be more broadly applicable. The general framework of our explanation, namely presenting original–highlights–counterfactual, could be effective in many domains. Moreover, our results also indicate that perfect fidelity may not be necessary. Counterfactual images with sufficient fidelity could give enough insight in other domains, and even the highlights by themselves might be sufficiently insightful for other visual environments.

## 7 Conclusion

We introduced a deep generative model to produce counterfactual state explanations as a way to provide insight into a deep RL agent’s decision making. The counterfactual states showed what minimal changes needed to occur to a state to produce a different action by the trained RL agent. Results from our first user study showed that these counterfactual state explanations had sufficient fidelity to the actual game. Results from our second user study demonstrated that despite having some artefacts, these counterfactual state explanations were indeed useful for identifying the flawed agent in our study as well as the specific flaw in the agent.
In comparison, the nearest neighbor counterfactual explanations confused participants and resulted in fewer participants identifying the correct agent after they were shown the explanation. Furthermore, only a small proportion of participants were able to identify the specific flaw. Our study also demonstrated that the highlights and summary table were important elements to accompany counterfactual explanations. Our results suggest that perfect fidelity may not be necessary for counterfactual state explanations to give non-machine learning experts sufficient understanding of an agent’s decision making in order to use this knowledge for a downstream task. While our study focused on Atari agents, we believe this approach is promising and could apply more broadly to domains beyond Atari with more complex visual input, though more investigation is needed. Moreover, using counterfactual state explanations in conjunction with other established and complementary explanation techniques could form a formidable toolset to help non-experts understand decisions made by deep RL agents.

## 8 Acknowledgements

This work was supported by DARPA under the grant N66001-17-2-4030. We would like to thank Andrew Anderson, Margaret Burnett, Jonathan Dodge, Alan Fern, Stefan Lee, Neale Ratzlaff, and Janet Schmidt for their expertise and helpful comments.

## References

* Adebayo et al. (2018) Julius Adebayo, Justin Gilmer, Michael Muelly, Ian Goodfellow, Moritz Hardt, and Been Kim. Sanity checks for saliency maps. In _Proceedings of the 32nd International Conference on Neural Information Processing Systems_, pages 9525–9536, Red Hook, NY, USA, 2018. Curran Associates Inc.
* Alvernaz and Togelius (2017) Samuel Alvernaz and Julian Togelius. Autoencoder-augmented neuroevolution for visual doom playing. In _2017 IEEE Conference on Computational Intelligence and Games (CIG)_. IEEE, 2017.
* Amir and Amir (2018) Dan Amir and Ofra Amir. Highlights: Summarizing agent behavior to people.
In _Proceedings of the 17th International Conference on Autonomous Agents and MultiAgent Systems_, pages 1168–1176, Richland, SC, 2018. International Foundation for Autonomous Agents and Multiagent Systems.
* Anderson et al. (2020) Andrew Anderson, Jonathan Dodge, Amrita Sadarangani, Zoe Juozapaitis, Evan Newman, Jed Irvine, Souti Chattopadhyay, Matthew Olson, Alan Fern, and Margaret Burnett. Mental models of mere mortals with explanations of reinforcement learning. _ACM Transactions on Interactive Intelligent Systems (TiiS)_, 10(2):1–37, 2020.
* Atrey et al. (2020) Akanksha Atrey, Kaleigh Clary, and David Jensen. Exploratory not explanatory: Counterfactual analysis of saliency maps for deep reinforcement learning. In _International Conference on Learning Representations_, 2020. URL https://openreview.net/forum?id=rkl3m1BFDB.
* Ballard (1987) Dana H Ballard. Modular learning in neural networks. In _AAAI_, 1987.
* Bengio et al. (2013) Yoshua Bengio, Aaron Courville, and Pascal Vincent. Representation learning: A review and new perspectives. _IEEE Transactions on Pattern Analysis and Machine Intelligence_, 2013.
* Besserve et al. (2020) Michel Besserve, Arash Mehrjou, Rémy Sun, and Bernhard Schölkopf. Counterfactuals uncover the modular structure of deep generative models. In _International Conference on Learning Representations_, 2020. URL https://openreview.net/forum?id=SJxDDpEKvH.
* Brockman et al. (2016) Greg Brockman, Vicki Cheung, Ludwig Pettersson, Jonas Schneider, John Schulman, Jie Tang, and Wojciech Zaremba. OpenAI Gym. _arXiv preprint arXiv:1606.01540_, 2016.
* Caruana et al. (2015) Rich Caruana, Yin Lou, Johannes Gehrke, Paul Koch, Marc Sturm, and Noemie Elhadad. Intelligible models for healthcare: Predicting pneumonia risk and hospital 30-day readmission. In _Proceedings of the 21st ACM SIGKDD International Conference on Knowledge Discovery and Data Mining_, pages 1721–1730, New York, NY, USA, 2015. Association for Computing Machinery.
ISBN 9781450336642.
* Chang et al. (2019) Chun-Hao Chang, Elliot Creager, Anna Goldenberg, and David Duvenaud. Explaining image classifiers by counterfactual generation. In _International Conference on Learning Representations_, 2019.
* Craven and Shavlik (1995) Mark W. Craven and Jude W. Shavlik. Extracting tree-structured representations of trained networks. In _Proceedings of the 8th International Conference on Neural Information Processing Systems_, pages 24–30, Cambridge, MA, USA, 1995. MIT Press.
* Dabkowski and Gal (2017) Piotr Dabkowski and Yarin Gal. Real time image saliency for black box classifiers. In _Advances in Neural Information Processing Systems_, 2017.
* Dhurandhar et al. (2018) Amit Dhurandhar, Pin-Yu Chen, Ronny Luss, Chun-Chen Tu, Paishun Ting, Karthikeyan Shanmugam, and Payel Das. Explanations based on the missing: Towards contrastive explanations with pertinent negatives. In _Advances in Neural Information Processing Systems_, 2018.
* Fong and Vedaldi (2017) Ruth C Fong and Andrea Vedaldi. Interpretable explanations of black boxes by meaningful perturbation. In _Proceedings of the IEEE International Conference on Computer Vision_, 2017.
* Goodfellow et al. (2014) Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In _Advances in Neural Information Processing Systems_, 2014.
* Goyal et al. (2019) Yash Goyal, Ziyan Wu, Jan Ernst, Dhruv Batra, Devi Parikh, and Stefan Lee. Counterfactual visual explanations. In _International Conference on Machine Learning (ICML)_, 2019.
* Greydanus et al. (2018) Samuel Greydanus, Anurag Koul, Jonathan Dodge, and Alan Fern. Visualizing and understanding Atari agents. In _Proceedings of the 35th International Conference on Machine Learning_, 2018.
* Hayes and Shah (2017) Bradley Hayes and Julie A Shah. Improving robot controller transparency through autonomous policy explanation.
In _Proceedings of the 2017 ACM/IEEE International Conference on Human-Robot Interaction_, 2017. * Hsieh and Shannon (2005) Hsiu-Fang Hsieh and Sarah E. Shannon. Three approaches to qualitative content analysis. _Qualitative Health Research_, 2005. * Huang et al. (2018) Sandy H. Huang, Kush Bhatia, Pieter Abbeel, and Anca D. Dragan. Establishing appropriate trust via critical states. In _2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)_, pages 3929–3936, 2018. doi: 10.1109/IROS.2018.8593649. * Huang et al. (2019) Sandy H. Huang, David Held, Pieter Abbeel, and Anca D. Dragan. Enabling robots to communicate their objectives. _Auton. Robots_, 43(2):309–326, February 2019. * Jaccard (1908) Paul Jaccard. Nouvelles recherches sur la distribution florale. _Bull. Soc. Vaud. Sci. Nat._, 44, 1908. * Jahanian et al. (2020) Ali Jahanian, Lucy Chai, and Phillip Isola. On the "steerability" of generative adversarial networks. In _International Conference on Learning Representations_, 2020. URL https://openreview.net/forum?id=HylsTT4FvB. * Juozapaitis et al. (2019) Zoe Juozapaitis, Anurag Koul, Alan Fern, Martin Erwig, and Finale Doshi-Velez. Explainable reinforcement learning via reward decomposition. In _Proceedings of the IJCAI 2019 Workshop on Explainable Artificial Intelligence_, 2019. * Kaiser et al. (2020) Łukasz Kaiser, Mohammad Babaeizadeh, Piotr Miłos, Błażej Osiński, Roy H Campbell, Konrad Czechowski, Dumitru Erhan, Chelsea Finn, Piotr Kozakowski, Sergey Levine, Afroz Mohiuddin, Ryan Sepassi, George Tucker, and Henryk Michalewski. Model based reinforcement learning for Atari. In _International Conference on Learning Representations_, 2020. URL https://openreview.net/forum?id=S1xCPJHtDB. * Khan et al. (2009) Omar Zia Khan, Pascal Poupart, and James P. Black. Minimal sufficient explanations for factored Markov decision processes.
In _Proceedings of the Nineteenth International Conference on Automated Planning and Scheduling_, 2009. * Kingma and Ba (2014) Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. _CoRR_, abs/1412.6980, 2014. URL http://arxiv.org/abs/1412.6980. * Kingma and Welling (2013) Diederik P. Kingma and Max Welling. Auto-encoding variational Bayes. _CoRR_, abs/1312.6114, 2013. * Koh and Liang (2017) Pang Wei Koh and Percy Liang. Understanding black-box predictions via influence functions. In _Proceedings of the 34th International Conference on Machine Learning - Volume 70_, ICML'17, pages 1885–1894. JMLR.org, 2017. * Koul et al. (2019) Anurag Koul, Alan Fern, and Sam Greydanus. Learning finite state representations of recurrent policy networks. In _International Conference on Learning Representations_, 2019. URL https://openreview.net/forum?id=S1gOpsCctm. * Lage et al. (2019) Isaac Lage, Daphna Lifschitz, Finale Doshi-Velez, and Ofra Amir. Exploring computational user models for agent policy summarization. In Sarit Kraus, editor, _Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence_, pages 1401–1407. ijcai.org, 2019. * Lample et al. (2017) Guillaume Lample, Neil Zeghidour, Nicolas Usunier, Antoine Bordes, Ludovic Denoyer, et al. Fader networks: Manipulating images by sliding attributes. In _Advances in Neural Information Processing Systems_, 2017. * Lewis (1973) David Lewis. _Counterfactuals_. John Wiley & Sons, 1973. * Madumal et al. (2020) Prashan Madumal, Tim Miller, Liz Sonenberg, and Frank Vetere. Explainable reinforcement learning through a causal lens. In _Proceedings of the AAAI Conference on Artificial Intelligence_, 2020. * Makhzani et al. (2015) Alireza Makhzani, Jonathon Shlens, Navdeep Jaitly, and Ian J. Goodfellow. Adversarial autoencoders. _CoRR_, abs/1511.05644, 2015. URL http://arxiv.org/abs/1511.05644.
* Ribeiro et al. (2018) Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. Anchors: High-precision model-agnostic explanations. In _Proceedings of the 32nd AAAI Conference on Artificial Intelligence_, 2018. * Miller (2019) Tim Miller. Explanation in artificial intelligence: Insights from the social sciences. _Artificial Intelligence_, 2019. * Mnih et al. (2015) Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A. Rusu, Joel Veness, Marc G. Bellemare, Alex Graves, Martin Riedmiller, Andreas K. Fidjeland, Georg Ostrovski, Stig Petersen, Charles Beattie, Amir Sadik, Ioannis Antonoglou, Helen King, Dharshan Kumaran, Daan Wierstra, Shane Legg, and Demis Hassabis. Human-level control through deep reinforcement learning. _Nature_, 2015. * Mothilal et al. (2020) Ramaravind K. Mothilal, Amit Sharma, and Chenhao Tan. Explaining machine learning classifiers through diverse counterfactual explanations. In _Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency_, 2020. * Mott et al. (2019) Alex Mott, Daniel Zoran, Mike Chrzanowski, Daan Wierstra, and Danilo Jimenez Rezende. Towards interpretable reinforcement learning using attention augmented agents. In _NeurIPS_, 2019. * Neal et al. (2018) Lawrence Neal, Matthew Olson, Xiaoli Fern, Weng-Keen Wong, and Fuxin Li. Open set learning with counterfactual images. In _Proceedings of the European Conference on Computer Vision (ECCV)_, 2018. * Nie et al. (2020) Weili Nie, Tero Karras, Animesh Garg, Shoubhik Debhath, Anjul Patney, Ankit B. Patel, and Anima Anandkumar. Semi-supervised StyleGAN for disentanglement learning. _arXiv preprint_, 2020. * Olson et al. (2019) Matthew Olson, Lawrence Neal, Fuxin Li, and Weng-Keen Wong. Counterfactual states for Atari agents via generative deep learning. In _Proceedings of the IJCAI 2019 Workshop on Explainable Artificial Intelligence_, 2019. * Paszke et al.
(2019) Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Kopf, Edward Yang, Zachary DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. PyTorch: An imperative style, high-performance deep learning library. In _Advances in Neural Information Processing Systems_, 2019. * Pearl and Mackenzie (2018) Judea Pearl and Dana Mackenzie. _The Book of Why: The New Science of Cause and Effect_. Basic Books, 1st edition, 2018. * Puterman (1994) Martin L. Puterman. _Markov Decision Processes: Discrete Stochastic Dynamic Programming_. John Wiley & Sons, Inc., 1994. * Qi et al. (2019) Zhongang Qi, Saeed Khorram, and Fuxin Li. Visualizing deep networks by optimizing with integrated gradients. _CoRR_, abs/1905.00954, 2019. URL http://arxiv.org/abs/1905.00954. * Radford et al. (2015) Alec Radford, Luke Metz, and Soumith Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. _arXiv preprint arXiv:1511.06434_, 2015. * Ribeiro et al. (2016) Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. "Why should I trust you?": Explaining the predictions of any classifier. In _Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining_, 2016. * Selvaraju et al. (2017) Ramprasaath R. Selvaraju, Michael Cogswell, Abhishek Das, Ramakrishna Vedantam, Devi Parikh, and Dhruv Batra. Grad-CAM: Visual explanations from deep networks via gradient-based localization. In _Proceedings of the IEEE International Conference on Computer Vision_, 2017. * Sequeira and Gervasio (2020) Pedro Sequeira and Melinda Gervasio. Interestingness elements for explainable reinforcement learning: Understanding agents' capabilities and limitations. _Artificial Intelligence_, 288:103367, November 2020. doi: 10.1016/j.artint.2020.103367.
URL http://dx.doi.org/10.1016/j.artint.2020.103367. * Shrikumar et al. (2017) Avanti Shrikumar, Peyton Greenside, and Anshul Kundaje. Learning important features through propagating activation differences. In _Proceedings of the 34th International Conference on Machine Learning - Volume 70_, 2017. * Simonyan et al. (2013) Karen Simonyan, Andrea Vedaldi, and Andrew Zisserman. Deep inside convolutional networks: Visualising image classification models and saliency maps. _arXiv preprint arXiv:1312.6034_, 2013. * Springenberg et al. (2014) Jost Tobias Springenberg, Alexey Dosovitskiy, Thomas Brox, and Martin Riedmiller. Striving for simplicity: The all convolutional net. _arXiv preprint arXiv:1412.6806_, 2014. * Sundararajan et al. (2017) Mukund Sundararajan, Ankur Taly, and Qiqi Yan. Axiomatic attribution for deep networks. In _Proceedings of the 34th International Conference on Machine Learning - Volume 70_, ICML'17, pages 3319–3328. JMLR.org, 2017. * Tolstikhin et al. (2018) Ilya Tolstikhin, Olivier Bousquet, Sylvain Gelly, and Bernhard Schoelkopf. Wasserstein auto-encoders. In _International Conference on Learning Representations_, 2018. URL https://openreview.net/forum?id=HkL7n1-0b. * Torrey and Taylor (2013) Lisa Torrey and Matthew Taylor. Teaching on a budget: Agents advising agents in reinforcement learning. In _Proceedings of the 2013 International Conference on Autonomous Agents and Multi-Agent Systems_, pages 1053–1060, Richland, SC, 2013. International Foundation for Autonomous Agents and Multiagent Systems. * van der Waa et al. (2018) Jasper van der Waa, Jurriaan van Diggelen, Karel van den Bosch, and Mark Neerincx. Contrastive explanations for reinforcement learning in terms of expected consequences. In _Proceedings of the IJCAI/ECAI 2018 Workshop on Explainable AI_, 2018. * Verma et al. (2018) Abhinav Verma, Vijayaraghavan Murali, Rishabh Singh, Pushmeet Kohli, and Swarat Chaudhuri. Programmatically interpretable reinforcement learning.
_CoRR_, abs/1804.02477, 2018. URL http://arxiv.org/abs/1804.02477. * Wachter et al. (2017) Sandra Wachter, Brent Mittelstadt, and Chris Russell. Counterfactual explanations without opening the black box: Automated decisions and the GDPR. _Harv. JL & Tech._, 2017. * Zahavy et al. (2016) Tom Zahavy, Nir Ben-Zrihem, and Shie Mannor. Graying the black box: Understanding DQNs. In _International Conference on Machine Learning_, 2016. * Zeiler and Fergus (2014) Matthew D. Zeiler and Rob Fergus. Visualizing and understanding convolutional networks. In _European Conference on Computer Vision_, 2014. * Zhang et al. (2018) Jianming Zhang, Sarah Adel Bargal, Zhe Lin, Jonathan Brandt, Xiaohui Shen, and Stan Sclaroff. Top-down neural attention by excitation backprop. _International Journal of Computer Vision_, 2018. ## Appendix A Tuning the $\lambda$ Parameter Figure 18: An example of different models trained with varying $\lambda$ parameters for the normally trained agent. The original state with action $a=$ MoveLeftAndShoot for which to generate a counterfactual is shown in the top left, with the rest of the images being counterfactual states where the agent would take counterfactual action $a^{\prime}=$ Fire. Figure 18 shows the effects of varying the $\lambda$ parameter. As $\lambda$ increases, so does the amount of change in the counterfactual state, with low $\lambda$ values causing nearly imperceptible changes and high $\lambda$ values producing distorted, low-quality states. From our first user study, we found that non-experts were able to clearly identify poor-fidelity images caused by $\lambda$ parameters that were too high. Given a set of images produced by different $\lambda$ values, we feel that finding a "sweet spot" between too high and too low should be manageable for a non-expert viewer, as there is a fairly wide range of $\lambda$ values that produce reasonably high-quality counterfactuals.
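The role of $\lambda$ in such a sweep can be illustrated with a toy latent-space version of the optimization. The sketch below is our own simplification, not the paper's actual networks: a random linear policy head stands in for the agent, and $\lambda$ weights the counterfactual-action term against a proximity term, so larger $\lambda$ permits larger departures from the original latent.

```python
import numpy as np

def softmax(logits):
    e = np.exp(logits - logits.max())
    return e / e.sum()

def counterfactual_latent(z, W, target, lam, steps=400, lr=0.05):
    """Toy sketch: gradient descent on lam * CE(softmax(W z*), target)
    + 0.5 * ||z* - z||^2. Larger lam lets z* drift further from z to
    raise the probability of the counterfactual (target) action."""
    z_star = z.copy()
    for _ in range(steps):
        p = softmax(W @ z_star)
        g = p.copy()
        g[target] -= 1.0                 # d CE / d logits = p - onehot
        z_star = z_star - lr * (lam * (W.T @ g) + (z_star - z))
    return z_star

rng = np.random.default_rng(0)
W = 0.3 * rng.normal(size=(4, 8))        # hypothetical 4-action linear policy head
z = rng.normal(size=8)                   # hypothetical latent of the original state
dists = [np.linalg.norm(counterfactual_latent(z, W, target=2, lam=lam) - z)
         for lam in (0.1, 1.0, 5.0)]     # the sweep over lambda
```

In a sweep like this, the distance moved grows with $\lambda$, mirroring the behavior in Figure 18; a viewer would then pick the smallest $\lambda$ whose counterfactual shows a visible yet realistic change.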
Automating the process of selecting a $\lambda$ for high-fidelity counterfactual production is beyond the scope of this work, but is an area of future interest. ## Appendix B Ablation Experiments for the Counterfactual State Neural Network Architecture Ablation experiment | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 ---|---|---|---|---|---|---|---|---|---|--- $E(s)$ | | | | | | ✓ | ✓ | ✓ | ✓ | ✓ $z$ | | ✓ | ✓ | | | | ✓ | ✓ | | $z_{w}$ | | | | ✓ | ✓ | | | | ✓ | ✓ $\bm{\pi}(A(\bm{s}))$ | ✓ | | ✓ | | ✓ | ✓ | | ✓ | | ✓ Table 4: An overview of which elements are given as input to the generator for each ablation experiment. In this section, we describe several ablation experiments on the neural network architecture described in Section 3.2 for our counterfactual state explanations. These ablations illustrate how each component in our architecture is necessary to achieve high-quality counterfactual images. A generator is always used with MSE reconstruction loss, but what we pass into the generator changes for each ablation experiment. We provide an overview of the different ablations in Table 4; Figures 19 and 20 contain images representative of the issues for each ablation experiment. Next, we discuss each ablation experiment in detail: Figure 19: From left to right: ablations 1–5. In every column, the top image is the original state where $a=$ MoveRight, the center image is the auto-encoded reconstruction of the original state, and the bottom image is a counterfactual state where $a^{\prime}=$ MoveRightAndFire. Figure 20: From left to right: ablations 6–10. In each column, the top image is an original state where $a=$ MoveRight, the middle image is an auto-encoded reconstruction, and the bottom is a counterfactual state where $a^{\prime}=$ MoveRightAndFire. 1.
We investigate the effect of only using the agent’s policy to generate reconstructed states, hand-modifying it to create counterfactual states: we remove all parts from our model except the agent and the generator, passing only $\bm{\pi}(\bm{z})$, where $\bm{z}=A(\bm{s})$, into the generator. This gives us reconstructed states in the form of $G(\bm{\pi}(\bm{z}))$. We modify the policy vector $\bm{\pi}(\bm{z})$ by selecting a counterfactual action $a^{\prime}$, setting $\bm{\pi}(\bm{z},a^{\prime})=\bm{\pi}(\bm{z},a)*1.01$, and normalizing the probabilities so they sum to 1. This hand modification is clearly not representative of the agent. As shown in Figure 19, the reconstructed and counterfactual states are extremely low quality. 2. We investigate the effect of only using the agent’s learned representation to generate both reconstructed states and counterfactual states. We removed all parts from our model except the agent and the generator, passing $\bm{z}$ into the generator. This gives us reconstructed states in the form of $G(\bm{z})$ and counterfactual states by modifying $\bm{z}$ with gradient descent as described in Section 3.3 to get a $\bm{z}^{*}$. As shown in Figure 19, the counterfactual states are quite unrealistic, but surprisingly the reconstructed states are accurate. 3. We removed all parts from our model except the agent and the generator, this time passing both $\bm{z}$ and $\bm{\pi}(\bm{z})$ into the generator. This gives us reconstructed states in the form of $G(\bm{z},\bm{\pi}(\bm{z}))$ and counterfactual states by modifying $\bm{z}$ with gradient descent as described in Section 3.3 to get a $\bm{z}^{*}$. As shown in Figure 19, the counterfactual states are quite unrealistic, but surprisingly the reconstructed states are accurate. 4. We investigate using only the Wasserstein auto-encoder.
Here we pass only $\bm{z_{w}}$ into the generator, where $\bm{z_{w}}$ is the latent representation of the state in Wasserstein space, $\bm{z_{w}}=E_{w}(A(\bm{s}))$. This gives us reconstructed states in the form of $G(\bm{z_{w}})$ and counterfactual states by modifying $\bm{z_{w}}$ with gradient descent as described in Section 3.3 to get a $\bm{z_{w}}^{*}$. As shown in Figure 19, both the reconstructed and counterfactual states are quite unrealistic. 5. We removed all parts from our model except the agent, the Wasserstein auto-encoder, and the generator. Here we pass both $\bm{z_{w}}$ and $\bm{\pi}(\bm{z})$ into the generator. This gives us reconstructed states in the form of $G(\bm{z_{w}},\bm{\pi}(\bm{z}))$ and counterfactual states by modifying $\bm{z_{w}}$ with gradient descent as described in Section 3.3 to get a $\bm{z_{w}}^{*}$. As shown in Figure 20, both the reconstructed and counterfactual states improve relative to the previous ablation, but are still quite unrealistic. 6. Here we investigate the effect of keeping the encoder and discriminator, but hand-modifying the policy input to the generator instead of using the Wasserstein auto-encoder or gradient descent. The input to the generator is equivalent to our work described in Section 3. We hand-modify the policy vector $\bm{\pi}(\bm{z})$ by selecting a counterfactual action $a^{\prime}$, setting $\bm{\pi}(\bm{z},a^{\prime})=\bm{\pi}(\bm{z},a)*1.01$, and normalizing the probabilities so they sum to 1. This hand modification may or may not be representative of what the agent does. As shown in Figure 20, the states have the same generated quality as our method and the counterfactual state has a small but meaningful change. 7. This ablation is similar to the previous one, but instead of passing the policy vector $\bm{\pi}(\bm{z})$ to the generator, we input the agent’s latent space $\bm{z}$.
As with previous ablations, we generate counterfactual states by modifying $\bm{z}$ with gradient descent as described in Section 3.3 to get a $\bm{z}^{*}$. As shown in Figure 20, the states have decent quality, but the counterfactual states have relatively large changes and a couple of artifacts. 8. Similar to the previous ablation, but instead of passing just $\bm{z}$, we pass both the policy vector $\bm{\pi}(\bm{z})$ and $\bm{z}$ to the generator. As with previous ablations, we generate counterfactual states by modifying $\bm{z}$ with gradient descent as described in Section 3.3 to get a $\bm{z}^{*}$. As shown in Figure 20, the states are better quality than when passing in only $\bm{z}$, but the counterfactuals are lower quality than our method's. 9. We add the Wasserstein auto-encoder back into the previous ablation. Instead of passing the agent’s latent space $\bm{z}$ to the generator, we pass in the Wasserstein representation $\bm{z_{w}}=E_{w}(A(\bm{s}))$. As described in Section 3.3, we generate counterfactual states by modifying $\bm{z_{w}}$ to get a $\bm{z_{w}}^{*}$. As shown in Figure 20, the states are high quality, but the counterfactual states typically have no changes. 10. This experiment is an ablation in the sense that we remove the disconnection between the generator and $\bm{z_{w}}$. In other words, we take our original method and add $\bm{z_{w}}$ as input to the generator. When counterfactual states are generated, $\bm{z_{w}}^{*}$ is passed into the generator along with $E(\bm{s})$ and $\bm{\pi}(\bm{z_{w}}^{*})$. As shown in Figure 20, the states are high quality and the counterfactual states are interesting. We were not able to find a difference in quality for generated states between this ablation and our method. Since this ablation is more complex and requires more parameters, we decided not to use it for our purposes. ## Appendix C Details for User Study 2 In this section, we provide further details on the second user study.
Specifically, we include the tutorial script and the images used in the second user study. ### C.1 User Study Tutorial Script (a) (b) Figure 21: The user study tutorial examples used to describe counterfactual states, where the top row of images is one potential counterfactual explanation and the bottom is another. Query state with action $a=\text{TurnRight}$ where a self-driving car is taking you home (left), counterfactual state where action $a^{\prime}=\text{GoStraight}$ (right), and the highlighted difference (center). In this tutorial we will introduce you to the tool for finding the malfunctioning AI. This tool shows the AI's responses to specific "What if" questions. Both the functioning and the malfunctioning AI provide answers to the "What if" questions. For this study, we have selected 20 different screenshots from the videos. After learning how to use the tool, you will examine the selected screenshots to collect data on the two AIs. The identity of the AIs will remain anonymous until the final evaluation. At this time, please click the checkbox, then the continue button. For each selected screenshot, you will see three images arranged in a table. We will now go over how the table is arranged. Please click Next. The first image is a screenshot from the original videos. Please click Next. In this column, you will also see context for the original screenshot in the form of a short gif. Please click Next. Click on the image to change it into a gif. The gif shows the three previous game states. Then click again to return to the image. In this column, you will also see the original action the AI decided to take at that moment in the video. Please click Next. In this example, the AI originally decided it would take the "shoot" action. We then asked the AI, "What would the current screen need to look like for you to perform the 'move right' action?" To answer this question, the AI will only evaluate the current moment in the game, not the past or the future.
Please click Next. For a more concrete example, consider the following. Imagine there is a red self-driving car that is taking you home. It approaches an intersection, and it wants to turn right to take you to your destination. (Reveal Figure 21 top-left) Now imagine a situation where the red car would choose to go straight instead of turning right. There are various reasons why this could happen. One example is if the brown tree fell over and blocked the road. (Reveal Figure 21 top-right) In this example, an answer to the question of "what would need to change" right now for the car to choose to go straight at this intersection (point to left image) would be "the brown tree fell over, which blocks the right turn" (point to right image). Is that clear? Excellent. Now in the examples you will look at, the AI will answer the question of "what needs to change" by responding with 2 images. Please click Next. The first response is the changed state. This response shows the smallest amount of change in the game needed to take the different action of "move right". Back to the car example: if the original image was the intersection (point to left image), the response image would be the intersection with the fallen brown tree (point to right image). Please click Next. In the third column, note how the game has subtly changed in two ways: the ship is under the barrier and the barrier is fully armored. Please click Next. The second AI response is the image highlights, which take the original screenshot and add blue highlights to the changes. This response shows where the AI is looking for change to occur. Using the car example, this response would look like the original intersection with a blue highlight where the brown tree has moved. (Reveal Figure 21 top-center) Is that clear? Excellent. It is also possible for multiple objects to influence an AI's decision. (Reveal Figure 21 bottom-left).
In this example, two things influence the red self-driving car's decision to take the move straight action. The first is that the brown tree has fallen over; the second is that the red car's position has changed such that it is past the intersection. (Reveal entirety of Figure 21). The highlights for this example show both the red car and the brown tree highlighted in blue. Is this second example clear? Excellent. Let us continue with the table; please click Next. Note how the changed objects are highlighted in blue: the repaired barrier and the new ship location. As you are viewing the table for each selected screenshot, you will be asked two questions. The first question is: "What objects in the game do you think the AI pays attention to?" Please click Next to view this question. You do not need to select an answer for this tutorial. Do note that you can select more than one checkbox, or no checkboxes at all. The second question you will be asked is: "Which AI response or responses did you use for making your decision?" Please click Next. Again, you do not need to answer this for the tutorial. You will be asked these same questions for every selected screenshot. This is the full tool you will be using to analyze each screenshot, presented in random order. This section will take about 10 to 15 minutes. For each set of images, you will be asked to spend at least 30 seconds. There will be a timer on the screen. After you have finished examining the 20 randomized screenshots, you will use the data to complete the second evaluation. Your results from the tool will be displayed in both a table and a chart. Additionally, we will reveal to you which examples were from AI one and which were from AI two. With this information, you will re-answer the question: "Which AI is malfunctioning, and what objects in the game can it not see?" And finally, after you have submitted the second evaluation, we ask you to perform a short written reflection.
When you are ready, click "Finish tutorial" to begin viewing the 20 selected screenshots. I will leave the tutorial example on the projector and the car example on the whiteboard. You may begin. ### C.2 Images In this section we show a further selection of explanations from our user study. Figures 22 and 23 show explanations for the normally trained agent for both the counterfactual state explanations and the nearest neighbor counterfactual explanations, sorted by game time step. Figures 24 and 25 similarly show explanations for the flawed agent. These figures show how the nearest neighbor counterfactual explanations often show the green ship's position changing for the flawed agent, whereas our counterfactual state explanations never change the ship's position. (a) $a^{\prime}=\text{MoveLeft}$ $a=\text{MoveRightAndShoot}$, $a_{nn}^{\prime}=\text{MoveRight}$ (b) $a^{\prime}=\text{MoveRight}$ $a=\text{MoveRightAndShoot}$, $a_{nn}^{\prime}=\text{Shoot}$ (c) $a^{\prime}=\text{MoveRight}$ $a=\text{Shoot}$, $a_{nn}^{\prime}=\text{MoveRightAndShoot}$ (d) $a^{\prime}=\text{MoveLeft}$ $a=\text{MoveRightAndShoot}$, $a_{nn}^{\prime}=\text{MoveLeftAndShoot}$ (e) $a^{\prime}=\text{MoveLeft}$ $a=\text{MoveRightAndShoot}$, $a_{nn}^{\prime}=\text{MoveLeftAndShoot}$ Figure 22: The first five explanations for the normally trained agent used in the user study. (Center) The original state $s$ where the agent took action $a$. (Left) The counterfactual state explanations where the agent takes action $a^{\prime}$. (Right) The nearest neighbor counterfactual state where the agent takes action $a_{nn}^{\prime}$. (Center Left/Right) The highlighted difference between the counterfactual state and the original state.
(a) $a^{\prime}=\text{MoveLeft}$ $a=\text{MoveRightAndShoot}$, $a_{nn}^{\prime}=\text{Shoot}$ (b) $a^{\prime}=\text{Shoot}$ $a=\text{MoveRight}$, $a_{nn}^{\prime}=\text{MoveRightAndShoot}$ (c) $a^{\prime}=\text{Shoot}$ $a=\text{MoveRight}$, $a_{nn}^{\prime}=\text{MoveRightAndShoot}$ (d) $a^{\prime}=\text{MoveLeft}$ $a=\text{MoveRightAndShoot}$, $a_{nn}^{\prime}=\text{MoveRight}$ (e) $a^{\prime}=\text{MoveLeft}$ $a=\text{MoveRightAndShoot}$, $a_{nn}^{\prime}=\text{MoveRight}$ Figure 23: Explanations 6 through 10 for the normally trained agent used in the user study. (Center) The original state $s$ where the agent took action $a$. (Left) The counterfactual state explanations where the agent takes action $a^{\prime}$. (Right) The nearest neighbor counterfactual state where the agent takes action $a_{nn}^{\prime}$. (Center Left/Right) The highlighted difference between the counterfactual state and the original state. (a) $a^{\prime}=\text{MoveRight}$ $a=\text{MoveLeft}$, $a_{nn}^{\prime}=\text{MoveLeftAndShoot}$ (b) $a^{\prime}=\text{MoveRight}$ $a=\text{Shoot}$, $a_{nn}^{\prime}=\text{StayStill}$ (c) $a^{\prime}=\text{MoveLeft}$ $a=\text{MoveRightAndShoot}$, $a_{nn}^{\prime}=\text{MoveRight}$ (d) $a^{\prime}=\text{MoveLeft}$ $a=\text{MoveRightAndShoot}$, $a_{nn}^{\prime}=\text{MoveRight}$ (e) $a^{\prime}=\text{MoveRight}$ $a=\text{MoveRightAndShoot}$, $a_{nn}^{\prime}=\text{Shoot}$ Figure 24: The first five explanations for the flawed agent used in the user study. (Center) The original state $s$ where the agent took action $a$. (Left) The counterfactual state explanations where the agent takes action $a^{\prime}$. (Right) The Nearest Neighbor counterfactual state where the agent takes action $a_{nn}^{\prime}$. (Center Left/Right) The highlighted difference between the counterfactual state and the original state. 
(a) $a^{\prime}=\text{MoveRight}$ $a=\text{MoveRightAndShoot}$, $a_{nn}^{\prime}=\text{Shoot}$ (b) $a^{\prime}=\text{MoveRight}$ $a=\text{MoveLeftAndShoot}$, $a_{nn}^{\prime}=\text{MoveLeft}$ (c) $a^{\prime}=\text{MoveRight}$ $a=\text{MoveLeftAndShoot}$, $a_{nn}^{\prime}=\text{MoveLeft}$ (d) $a^{\prime}=\text{MoveRight}$ $a=\text{MoveLeftAndShoot}$, $a_{nn}^{\prime}=\text{MoveLeft}$ (e) $a^{\prime}=\text{MoveRight}$ $a=\text{MoveLeftAndShoot}$, $a_{nn}^{\prime}=\text{MoveLeft}$ Figure 25: Explanations 6 through 10 for the flawed agent used in the user study. (Center) The original state $s$ where the agent took action $a$. (Left) The counterfactual state explanations where the agent takes action $a^{\prime}$. (Right) The nearest neighbor counterfactual state where the agent takes action $a_{nn}^{\prime}$. (Center Left/Right) The highlighted difference between the counterfactual state and the original state. ## Appendix D User study data analysis For answering research questions 2 and 3, two researchers collectively applied content analysis (Hsieh and Shannon, 2005) to the post-task questionnaire data corpus. They developed the codes shown in Table 5. These codes were defined by having two researchers code 20% of the data corpus individually, achieving inter-rater reliability (IRR) of at least 90% (calculated using the Jaccard Index (Jaccard, 1908)) across all the data sets. Code | Description | Example ---|---|--- Helpful | The participant found the artifact helpful for the main task; it helped them better understand and evaluate the agent. | “Yes the third image played a role in helping me make my decision.” Problematic | The participant found the artifact to be a hindrance and problematic in the main task. | “The changed state portion confused me because I wasn’t sure if that was the next action the AI took or the action it thought about taking given the highlighted circumstances.” Table 5: The qualitative codes used in our analysis
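The Jaccard Index used for the IRR calculation is simply intersection over union of the two coders' applied code sets; a minimal sketch (the function and variable names here are our own, not part of the study's tooling):

```python
def jaccard_index(codes_a, codes_b):
    """Jaccard index between two coders' sets of applied codes:
    |A intersect B| / |A union B| (taken as 1.0 when both sets are empty)."""
    a, b = set(codes_a), set(codes_b)
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

# Two coders labelling the same questionnaire response:
agreement = jaccard_index({"Helpful"}, {"Helpful", "Problematic"})  # 0.5
```

Averaging such per-item scores over the jointly coded 20% of the corpus yields the reported IRR figure.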
# The Mind’s Eye: Visualizing Class-Agnostic Features of CNNs ###### Abstract Visual interpretability of Convolutional Neural Networks (CNNs) has gained significant popularity because of the great challenges that CNN complexity imposes on understanding their inner workings. Although many techniques have been proposed to visualize class features of CNNs, most of them do not provide a correspondence between inputs and the extracted features in specific layers. This prevents the discovery of the stimuli that each layer responds to most strongly. We propose an approach to visually interpret CNN features given a set of images by creating corresponding images that depict the most informative features of a specific layer. Exploring features in this class-agnostic manner allows for a greater focus on the feature extractor of CNNs. Our method uses a dual-objective activation maximization and distance minimization loss, without requiring a generator network or modifications to the original model. This limits the number of FLOPs to that of the original network. We demonstrate the visualization quality on widely used architectures. (Code is available at https://git.io/JL9Wg and our demo video: https://youtu.be/Au3jaUdnPKM) Index Terms— Feature visualization, CNN explainability, convolutional features ## 1 Introduction Fig. 1: Top 50 extracted features. ResNet-50 [1] was used, with features visualized from ‘layer6.5.conv1’. Deep learning architectures have achieved substantial breakthroughs, in comparison to hand-coded feature extractors, for a wide variety of image and video tasks. While their performance is high, their interpretability remains limited. For this reason, methods for visualizing features of CNNs have received significant research interest over the last few years. We identify two main approaches. The first is to consider the image regions that networks find informative [2, 3, 4].
This approach allows for the selection of salient regions, but it does not provide a description of the feature’s appearance. The second set of methods addresses this shortcoming by explicitly creating visualizations that activate features of specific classes [5, 6]. While both classes of approaches have shown great promise in establishing robust visual explanations for CNN features, one key aspect of deep learning methods has not been explicitly addressed. Although early extracted features can be easily interpreted based on edges and textural patterns, features of deeper layers are significantly more complicated and mostly do not correspond to singular concepts. We refer to features with polysemantic interpretations as entangled [7]. To visualize neurons that encapsulate entangled features, we propose a multi-objective method that creates class-agnostic visual representations of image features. As a result, we can show the top $k$ descriptive features in sets of images. Our approach further addresses the common problem of interpretations changing under perturbations [8], as the method is not constrained to visualizing features in a per-class fashion. An example appears in Figure 1. Our contributions are as follows: * • We propose a class-agnostic method for visual explanations of CNN features. Our method uses a multi-objective loss based on activation maximization that optimizes an input image through the excitation of a user-defined number of layer neurons. * • We design an axiomatic distance objective to address entangled features by minimizing the distance between produced image activations and the averaged target activations of real images. * • We test our approach on different layers and neurons of popular CNNs and show that our visualizations can uncover interesting features in sets of images. The remainder of the paper is structured as follows. We first summarize related work on visual interpretations. We detail our method in Section 3. 
We demonstrate the produced feature visualizations in Section 4 and conclude in Section 5. ## 2 Related Work Recent works have argued for the importance of interpretability in CNNs [9, 10] and how it can further lead to CNN improvements [11]. However, creating methods that capture the inner workings of CNNs has proven challenging. One widely used method to visualize CNN features is to maximize neurons that correspond to a specific class [5]. To maximize a class neuron, the input is composed of trainable parameters that are updated based on the gradients. As the sole consideration of class activations does not give an intuitive representation of the features that correspond to classes, Zeiler and Fergus [6] proposed a de-convolution approach that aims at approximating layer features. Later works of Simonyan et al. [12] and Nguyen et al. [13] have shown how the exclusive use of an activation maximization objective can lead to the creation of unrealistic images, since the space of possible images and patterns that are close to extracted patterns is extremely vast. This motivated the exploration of regularization techniques aimed at constraining the space of possible visual representations. Some of these techniques include the use of Gaussian filters [14, 15] during the image optimization process, jitter effects [16], and center-biased gradient masks [17]. Other approaches to improve the realism of images use a separate network that is capable of synthesizing feature visualizations [18, 19] based on adversarial training similar to that of generative networks. Although the visual quality in generative models is higher, generators fall short in representing the causality of learned features [20], as they include an additional factor of ambiguity through the generator sub-network. 
To address the problems associated with activation maximization, we propose a method that is inspired by recent distance-minimization-based generative networks. Our method optimizes an image-based dot product of activations from generated and real-world images while additionally decreasing their activation distances within the feature space. Fig. 2: Feature space distances. Points correspond to the top-10 embeddings ($out_{n},\;n\in N$) with center $out_{c}$. ## 3 Visualization methodology We use a dot-product activation maximization with an additional distance minimization regression objective to optimize a trainable input image to represent the most informative features for a specific layer in the model. ### 3.1 Multi-faceted clustered neuron selection We use a multi-faceted technique similar to the one proposed by Nguyen et al. [17] to optimize the creation of the initial images ($\overline{img}$) as well as the target activations ($t_{i}$) of layer $i$ used for regression. We define $C$ target facets. We use real images and perform a normal forward pass in the network until layer $i$. We then reduce the original channel dimensionality of activations $a_{i}$ in layer $i$ through PCA [21] and t-SNE [22] to create 2-dimensional embeddings $out$ that are then clustered into $C$ clusters with k-Means [23]. Instead of using the average within each of the $C$ clusters $c$ ($1\leq c\leq C$), we consider the $N=10$ 2D embeddings closest to each cluster center ($out_{c}$) to allow for a better correspondence to feature activations that are characteristic for cluster $c$. Based on the Euclidean distance between the 2D embeddings and the cluster center ($edist(out_{c},out_{n})\forall n\in N$), we create a weight penalty $w_{c,n}$. The weight determines the contribution of each activation ($a_{i,n}$) and decreases exponentially with the Euclidean distance between its 2D embedding $out_{n}$ and the cluster center $out_{c}$. This is summarized in Eq. 
1 with the image ($\overline{img}_{c}$) initialization and the discovered target activations ($t_{i,c}$). We include a constant ($\gamma=1e^{-5}$) for numeric stability. $\begin{split}{w}_{c,n}=\frac{e^{edist(out_{c},out_{n})^{-1}+\gamma}}{\sum\limits_{n\in N}e^{edist(out_{c},out_{n})^{-1}+\gamma}}\qquad\qquad\\\ \overline{img}_{c}=\sum\limits_{n\in N}img_{n}*w_{c,n}\;\;\&\;\;t_{i,c}=\sum\limits_{n\in N}a_{i,n}*w_{c,n}\end{split}\vspace{-1.5mm}$ (1) The effect is visualized in Figure 2, where the distance between each 2D embedding $out_{n}$ and the corresponding cluster center $out_{c}$ determines the contribution of each activation $a_{i,n}$ to the final target activation $t_{i,c}$. Fig. 3: Image optimization iterations. The target image from ImageNet’s [24] padlock class is the image closest to the cluster center. The top 30 features of wide-ResNet101’s [25] ‘layer6.12.conv3’ are used. ### 3.2 Objective formalization We define our loss function based on two additional auxiliary objectives to improve the feature clarity while simultaneously reducing the effects of feature entanglement. To visualize specific features, we select the top $k$ features of each target activation $t_{i,c}$ of each cluster $c$. This creates an averaged overview of the most informative top $k$ features that should be visualized. We define a dot-product activation maximization loss ($DM$) as the channel-wise dot product of the produced activation maps $a_{i}$ and the discovered target layer activations $t_{i,c}$. This calculation is performed for all top $k$ channels: $DM=\sum\limits_{j\in k}a_{i,j}*t_{i,c,j}$ (2) By maximizing the dot product of the produced activations and the target layer activations, we give the gradients a path towards meaningful features. However, this objective alone also encourages feature entanglement, as the span of possible gradient directions that yield positive improvements can be extensive. 
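As a concrete NumPy sketch of Eqs. 1 and 2 (shapes and variable names are illustrative, not from the paper's code): the additive $\gamma$ in the exponent of Eq. 1 cancels in the normalization, so here $\gamma$ is instead added to the distance before inversion to keep a zero distance finite.

```python
import numpy as np

def cluster_weights(out_n, out_c, gamma=1e-5):
    """Eq. 1 weights: softmax of inverse Euclidean distances to the
    cluster center, so nearer embeddings contribute exponentially more."""
    d = np.linalg.norm(out_n - out_c, axis=1)  # edist(out_c, out_n)
    scores = 1.0 / (d + gamma)                 # gamma keeps d = 0 finite
    scores -= scores.max()                     # stabilize the exponential
    w = np.exp(scores)
    return w / w.sum()

def facet_targets(imgs, acts, w):
    """Eq. 1 blends: weighted image initialization and target activations."""
    img_bar = np.tensordot(w, imgs, axes=1)    # sum_n w_n * img_n
    t = np.tensordot(w, acts, axes=1)          # sum_n w_n * a_{i,n}
    return img_bar, t

def dm_loss(a, t, top_k):
    """Eq. 2: dot product of produced and target maps over the top-k
    channels, ranked here (an assumption) by total target activation."""
    order = np.argsort(t.reshape(t.shape[0], -1).sum(axis=1))
    return sum(float(np.sum(a[j] * t[j])) for j in order[-top_k:])
```

With $N=10$ embeddings of shape (10, 2), images of shape (10, H, W, 3), and activations of shape (10, C, H', W'), `cluster_weights` returns ten weights summing to one, and `dm_loss` grows as the produced maps align with the targets.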
To address this issue, we include a second objective: a multi-dimensional distance minimization between the produced activation maps $a_{i}$ and the target activations $t_{i}$. For the distance loss function, we use a superset of distance methods as proposed by Barron [26] (denoted as $AD$). Because of the heteroscedastic nature of the produced distances, the trainable parameters ($r,b$) shown in Eq. 3 can better fit the produced multivariate distances. $\begin{split}AD=\begin{cases}\frac{1}{2}(\frac{mdist}{r})^{2},\;for\;b=2\\\ log(\frac{1}{2}(\frac{mdist}{r})^{2}+1)\;for\;b=0\\\ 1-exp(-\frac{1}{2}(\frac{mdist}{r})^{2})\;for\;b\rightarrow-\infty\\\ \frac{|b-2|}{b}\left(\left(\frac{(\frac{mdist}{r})^{2}}{|b-2|}+1\right)^{b/2}-1\right)\;otherwise\end{cases}\\\ where,\>\>mdist=\left(\sum\limits_{j\in k}||a_{i,j}-t_{i,c,j}||\right)\qquad\quad\end{split}$ (3) Finally, we synthesize our loss function $\mathcal{L}$ from Eqs. 2 and 3 by including a penalty for the activations of the previous layer ($a_{i-1}$) with channel size $D$, through $L_{1}$ regularization. The scaling value $\lambda$ has an initial value of $1e^{-3}$, which is linearly decreased to $1e^{-4}$ during training. The final loss is: $\mathcal{L}=AD-DM+\lambda*\sum\limits_{j\in D}||a_{i-1,j}||\vspace{-1.5mm}$ (4) ### 3.3 Parameterization setup We include parameterized noise functions within our workflow as constraints on the high-frequency gradients during back-propagation. This allows for the minimization of possible feature imbalances [15] and improves the final visual representation quality of CNN features. We include popular techniques used in visualization methods, such as image blurring with low-variance Gaussian kernels [14, 15], in combination with a denoising split Bregman algorithm [27]. We additionally include a center-based gradient mask [17] to limit feature cluttering as well as feature duplication during training. 
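A NumPy sketch of Eqs. 3 and 4 follows. The shape parameter $b$ and scale $r$ are trainable in the paper but plain floats here, and the channel ranking used to pick the top $k$ is an assumption; the general branch uses the $-1$ offset of Barron's general robust loss, which is consistent with the $b=2$ and $b=0$ special cases.

```python
import numpy as np

def adaptive_distance(mdist, b, r):
    """Eq. 3: Barron-style robust loss on the pooled activation distance
    mdist, with shape b and scale r (special cases b=2, b=0, b -> -inf)."""
    z = (mdist / r) ** 2
    if b == 2:
        return 0.5 * z
    if b == 0:
        return np.log(0.5 * z + 1.0)
    if b == -np.inf:
        return 1.0 - np.exp(-0.5 * z)
    return (abs(b - 2.0) / b) * ((z / abs(b - 2.0) + 1.0) ** (b / 2.0) - 1.0)

def total_loss(a_i, t_i, a_prev, b, r, lam, top_k):
    """Eq. 4: AD - DM + lambda * L1 penalty on the previous layer."""
    order = np.argsort(t_i.reshape(t_i.shape[0], -1).sum(axis=1))[-top_k:]
    dm = sum(float(np.sum(a_i[j] * t_i[j])) for j in order)
    mdist = sum(float(np.linalg.norm((a_i[j] - t_i[j]).ravel())) for j in order)
    ad = adaptive_distance(mdist, b, r)
    return ad - dm + lam * float(np.abs(a_prev).sum())
```

When the produced maps match the targets exactly, `adaptive_distance` vanishes and the loss is dominated by the negative $DM$ term plus the small $L_{1}$ penalty, which is the desired minimum.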
The learning rate $lr$ and the standard deviations ($\sigma$) of the Gaussian blur, denoising, and center-based gradient mask are decreased linearly over the iterations $t$ ($1\leq t\leq T$), from user-set starting ($start$) to final ($end$) values: $lr,\sigma=\frac{((end-start)*t)+(start*T)}{T-1}\vspace{-1.5mm}$ (5) We present iterative changes in Figure 3, where an image is optimized to visualize features that correspond to a target. Fig. 4: Cross-layer visualizations of top-100 features. Images were sampled from ImageNet’s ‘screw’ class. The targets include variations in orientation, number of objects, and shape. We use ResNeXt-101 [28] with the corresponding optimization layers displayed at the top of each column. ## 4 Visualizations We demonstrate in Figure 1 three cases of the same object where the produced feature visualizations present some degree of variation. Although the target images show the same object (from the ‘banana’ class in ImageNet), the number of objects present may lead to differences in the most dominant feature activations. We note, however, that differences in the image targets do not have detrimental effects on the overall feature combination. In Figure 3, we present how features are visualized over time. This shows the variations in the regions of the image being optimized and how feature-inclusive regions change over time. It also demonstrates the ability of a specific architecture to combine only a certain number of features in order to encapsulate the general appearance of an object. For example, for images from the ‘padlock’ class, the 30 most highly activated features can be seen as sufficient for describing the overall appearance of the object. This can aid in determining the images and classes from which the network finds it easier or harder to extract meaningful features. Lastly, we show in Figure 4 the features that are extracted among different layers of the same network. 
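As an aside, the shared decay schedule of Eq. 5 in Section 3.3 translates directly into code. This sketch implements the formula as written, with the paper's 1-based iteration index $t$, under which the value moves approximately from $start$ to $end$:

```python
def decay(t, T, start, end):
    """Eq. 5: shared linear decay for the learning rate and the sigmas of
    the Gaussian blur, denoising, and center-based gradient mask."""
    return ((end - start) * t + start * T) / (T - 1)

# Example: decay a blur sigma from 0.05 to 0.005 over T = 50 iterations.
values = [decay(t, 50, start=0.05, end=0.005) for t in range(1, 51)]
```

For large $T$ the endpoints approach $start$ and $end$; here the sequence runs monotonically from about 0.050 down to about 0.005.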
Using images from ImageNet’s ‘screw’ class, we sample target sets of images varying in terms of their general orientation, with objects being perpendicular or at an angle, as well as in screw head types and number of objects. Visualizations of later layers (7.2 and 6.20) include the metallic look of the object as well as screw body patterns and detail in terms of the screw head type (circular and hexagonal). Such specific features, however, are absent in earlier layers, whose feature extractors seem to capture basic aspects of the general object appearance, with a number of information-rich entangled features lower than that of later layers. ## 5 Conclusions We have proposed a novel feature visualization method aimed at providing visual explanations for the top features extracted by CNN layers. Images are optimized based on a dual-objective loss that combines dot-product activation maximization and distance minimization between the produced and target layer activations. To reduce feature overlap and improve the overall visualization clarity, we apply blurring and de-blurring filters as well as a gradient mask to the generated images during optimization. We present corresponding CNN features that are associated with specific classes and images. Based on this, we believe that the produced visual explanations can improve the in-depth understanding of trained CNNs and of the features with polysemantic interpretations that are associated with a specific image or set of images. ## References * [1] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun, “Deep residual learning for image recognition,” in Computer Vision and Pattern Recognition (CVPR), 2016, pp. 770–778. * [2] Ruth C Fong and Andrea Vedaldi, “Interpretable explanations of black boxes by meaningful perturbation,” in International Conference on Computer Vision (ICCV), 2017, pp. 3429–3437. 
* [3] Grégoire Montavon, Sebastian Lapuschkin, Alexander Binder, Wojciech Samek, and Klaus-Robert Müller, “Explaining nonlinear classification decisions with deep Taylor decomposition,” Pattern Recognition, vol. 65, pp. 211–222, 2017. * [4] Ramprasaath R Selvaraju, Michael Cogswell, Abhishek Das, Ramakrishna Vedantam, Devi Parikh, Dhruv Batra, et al., “Grad-CAM: Visual explanations from deep networks via gradient-based localization,” in International Conference on Computer Vision (ICCV), 2017, pp. 618–626. * [5] Dumitru Erhan, Yoshua Bengio, Aaron Courville, and Pascal Vincent, “Visualizing higher-layer features of a deep network,” Tech. Rep. 1341-3, University of Montreal, 2009. * [6] Matthew D Zeiler and Rob Fergus, “Visualizing and understanding convolutional networks,” in European Conference on Computer Vision (ECCV). Springer, 2014, pp. 818–833. * [7] Jesse Mu and Jacob Andreas, “Compositional explanations of neurons,” Advances in Neural Information Processing Systems (NeurIPS), vol. 33, 2020. * [8] Amirata Ghorbani, Abubakar Abid, and James Zou, “Interpretation of neural networks is fragile,” in Conference on Artificial Intelligence (AAAI), 2019, vol. 33, pp. 3681–3688. * [9] David Bau, Bolei Zhou, Aditya Khosla, Aude Oliva, and Antonio Torralba, “Network dissection: Quantifying interpretability of deep visual representations,” in Conference on Computer Vision and Pattern Recognition (CVPR), 2017, pp. 6541–6549. * [10] Dong Huk Park, Lisa Anne Hendricks, Zeynep Akata, Anna Rohrbach, Bernt Schiele, Trevor Darrell, and Marcus Rohrbach, “Multimodal explanations: Justifying decisions and pointing to the evidence,” in Conference on Computer Vision and Pattern Recognition (CVPR), 2018, pp. 8779–8788. * [11] Lisa Anne Hendricks, Ronghang Hu, Trevor Darrell, and Zeynep Akata, “Grounding visual explanations,” in European Conference on Computer Vision (ECCV), 2018, pp. 269–286. 
* [12] Karen Simonyan, Andrea Vedaldi, and Andrew Zisserman, “Deep inside convolutional networks: Visualising image classification models and saliency maps,” arXiv preprint arXiv:1312.6034, 2013. * [13] Anh Nguyen, Jason Yosinski, and Jeff Clune, “Deep neural networks are easily fooled: High confidence predictions for unrecognizable images,” in Conference on Computer Vision and Pattern Recognition (CVPR), 2015, pp. 427–436. * [14] Jason Yosinski, Jeff Clune, Anh Nguyen, Thomas Fuchs, and Hod Lipson, “Understanding neural networks through deep visualization,” in International Conference on Machine Learning Workshops (ICML), 2015. * [15] Feng Wang, Haijun Liu, and Jian Cheng, “Visualizing deep neural network by alternately image blurring and deblurring,” Neural Networks, vol. 97, pp. 162–172, 2018. * [16] Alexander Mordvintsev, Christopher Olah, and Mike Tyka, “Inceptionism: Going deeper into neural networks,” 2015. * [17] Anh Nguyen, Jason Yosinski, and Jeff Clune, “Multifaceted feature visualization: Uncovering the different types of features learned by each neuron in deep neural networks,” in International Conference on Machine Learning Workshops (ICML), 2016. * [18] Christian F Baumgartner, Lisa M Koch, Kerem Can Tezcan, Jia Xi Ang, and Ender Konukoglu, “Visual feature attribution using Wasserstein GANs,” in Conference on Computer Vision and Pattern Recognition (CVPR), 2018, pp. 8309–8319. * [19] Anh Nguyen, Alexey Dosovitskiy, Jason Yosinski, Thomas Brox, and Jeff Clune, “Synthesizing the preferred inputs for neurons in neural networks via deep generator networks,” in Advances in Neural Information Processing Systems (NeurIPS), 2016, pp. 3387–3395. * [20] Andreas Holzinger, Georg Langs, Helmut Denk, Kurt Zatloukal, and Heimo Müller, “Causability and explainability of artificial intelligence in medicine,” Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery, vol. 9, no. 4, pp. e1312, 2019. 
* [21] IT Jolliffe, “Principal component analysis,” Technometrics, vol. 45, no. 3, pp. 276, 2003. * [22] Laurens van der Maaten and Geoffrey Hinton, “Visualizing data using t-SNE,” Journal of machine learning research, vol. 9, no. 86, pp. 2579–2605, 2008. * [23] Stuart Lloyd, “Least squares quantization in PCM,” IEEE Transactions on Information Theory, vol. 28, no. 2, pp. 129–137, 1982. * [24] Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, Alexander C. Berg, and Li Fei-Fei, “ImageNet Large Scale Visual Recognition Challenge,” International Journal of Computer Vision (IJCV), vol. 115, no. 3, pp. 211–252, 2015. * [25] Sergey Zagoruyko and Nikos Komodakis, “Wide residual networks,” in British Machine Vision Conference (BMVC), 2016, pp. 87.1–87.12. * [26] Jonathan T Barron, “A general and adaptive robust loss function,” in Conference on Computer Vision and Pattern Recognition (CVPR), 2019, pp. 4331–4339. * [27] Pascal Getreuer, “Rudin-Osher-Fatemi total variation denoising using split Bregman,” Image Processing On Line, vol. 2, pp. 74–95, 2012. * [28] Saining Xie, Ross Girshick, Piotr Dollár, Zhuowen Tu, and Kaiming He, “Aggregated residual transformations for deep neural networks,” in Computer Vision and Pattern Recognition (CVPR), 2017, pp. 5987–5995.
Further author information: M. Hazumi (E-mail<EMAIL_ADDRESS> # LiteBIRD satellite: JAXA’s new strategic L-class mission for all-sky surveys of cosmic microwave background polarization M. Hazumi High Energy Accelerator Research Organization (KEK), Tsukuba, Ibaraki 305-0801, Japan Japan Aerospace Exploration Agency (JAXA), Institute of Space and Astronautical Science (ISAS), Sagamihara, Kanagawa 252-5210, Japan Kavli Institute for the Physics and Mathematics of the Universe (Kavli IPMU, WPI), UTIAS, The University of Tokyo, Kashiwa, Chiba 277-8583, Japan The Graduate University for Advanced Studies (SOKENDAI), Miura District, Kanagawa 240-0115, Hayama, Japan P.A.R. Ade Cardiff University, School of Physics and Astronomy, Cardiff CF10 3XQ, UK A. Adler Stockholm University E. Allys Laboratoire de Physique de l’$\acute{\rm E}$cole Normale Sup$\acute{\rm e}$rieure, ENS, Universit$\acute{\rm e}$ PSL, CNRS, Sorbonne Universit$\acute{\rm e}$, Universit$\acute{\rm e}$ de Paris, 75005 Paris, France K. Arnold University of California, San Diego, Department of Physics, San Diego, CA 92093-0424, USA D. Auguste Université Paris-Saclay, CNRS/IN2P3, IJCLab, 91405 Orsay, France J. Aumont IRAP, Universit$\acute{\rm e}$ de Toulouse, CNRS, CNES, UPS, (Toulouse), France R. Aurlien University of Oslo, Institute of Theoretical Astrophysics, NO-0315 Oslo, Norway J. Austermann National Institute of Standards and Technology (NIST), Boulder, Colorado 80305, USA C. Baccigalupi International School for Advanced Studies (SISSA), Via Bonomea 265, 34136, Trieste, Italy A.J. Banday IRAP, Universit$\acute{\rm e}$ de Toulouse, CNRS, CNES, UPS, (Toulouse), France R. Banerji University of Oslo, Institute of Theoretical Astrophysics, NO-0315 Oslo, Norway R.B. Barreiro Instituto de Fisica de Cantabria (IFCA, CSIC-UC), Avenida los Castros SN, 39005, Santander, Spain S. 
Basak School of Physics, Indian Institute of Science Education and Research Thiruvananthapuram, Maruthamala PO, Vithura, Thiruvananthapuram 695551, Kerala, India J. Beall National Institute of Standards and Technology (NIST), Boulder, Colorado 80305, USA D. Beck Stanford University, Department of Physics, CA 94305-4060, USA S. Beckman University of California, Berkeley, Department of Physics, Berkeley, CA 94720, USA J. Bermejo Instituto Universitario de Microgravedad Ignacio Da Riva (IDR/UPM), Plaza Cardenal Cisneros 3, 28040 - Madrid, Spain P. de Bernardis Dipartimento di Fisica, Università La Sapienza, P. le A. Moro 2, Roma, Italy and INFN Roma M. Bersanelli Dipartimento di Fisica, Università degli Studi di Milano, INAF-IASF Milano, and Sezione INFN Milano J. Bonis Université Paris-Saclay, CNRS/IN2P3, IJCLab, 91405 Orsay, France J. Borrill Lawrence Berkeley National Laboratory (LBNL), Computational Cosmology Center, Berkeley, CA 94720, USA University of California, Berkeley, Space Science Laboratory, Berkeley, CA 94720, USA F. Boulanger Laboratoire de Physique de l’$\acute{\rm E}$cole Normale Sup$\acute{\rm e}$rieure, ENS, Universit$\acute{\rm e}$ PSL, CNRS, Sorbonne Universit$\acute{\rm e}$, Universit$\acute{\rm e}$ de Paris, 75005 Paris, France S. Bounissou Institut d’Astrophysique Spatiale (IAS), CNRS, UMR 8617, Universit$\acute{\rm e}$ Paris-Sud 11, B$\hat{\rm a}$timent 121, 91405 Orsay, France M. Brilenkov University of Oslo, Institute of Theoretical Astrophysics, NO-0315 Oslo, Norway M. Brown University of Manchester, Manchester M13 9PL, United Kingdom M. Bucher Université de Paris, CNRS, Astroparticule et Cosmologie, F-75013 Paris, France E. Calabrese Cardiff University, School of Physics and Astronomy, Cardiff CF10 3XQ, UK P. Campeti International School for Advanced Studies (SISSA), Via Bonomea 265, 34136, Trieste, Italy A. Carones Dipartimento di Fisica, Università di Roma ”Tor Vergata”, and Sezione INFN Roma2 F.J. 
Casas Instituto de Fisica de Cantabria (IFCA, CSIC-UC), Avenida los Castros SN, 39005, Santander, Spain A. Challinor DAMTP, Centre for Mathematical Sciences, Wilberforce Road, Cambridge CB3 0WA, U.K. Institute of Astronomy, Madingley Road, Cambridge CB3 0HA, U.K. Kavli Institute for Cosmology Cambridge, Madingley Road, Cambridge CB3 0HA, U.K. V. Chan University of Toronto, Canada K. Cheung University of California, Berkeley, Department of Physics, Berkeley, CA 94720, USA Y. Chinone University of Tokyo, School of Science, Research Center for the Early Universe, RESCEU J.F. Cliche McGill University, Physics Department, Montreal, QC H3A 0G4, Canada L. Colombo Dipartimento di Fisica, Università degli Studi di Milano, INAF-IASF Milano, and Sezione INFN Milano F. Columbro Dipartimento di Fisica, Università La Sapienza, P. le A. Moro 2, Roma, Italy and INFN Roma J. Cubas Universidad Politécnica de Madrid A. Cukierman University of California, Berkeley, Department of Physics, Berkeley, CA 94720, USA Stanford University, Department of Physics, CA 94305-4060, USA D. Curtis University of California, Berkeley, Space Science Laboratory, Berkeley, CA 94720, USA G. D’Alessandro Dipartimento di Fisica, Università La Sapienza, P. le A. Moro 2, Roma, Italy and INFN Roma N. Dachlythra Stockholm University M. De Petris Dipartimento di Fisica, Università La Sapienza, P. le A. Moro 2, Roma, Italy and INFN Roma C. Dickinson University of Manchester, Manchester M13 9PL, United Kingdom P. Diego-Palazuelos Instituto de Fisica de Cantabria (IFCA, CSIC-UC), Avenida los Castros SN, 39005, Santander, Spain M. Dobbs McGill University, Physics Department, Montreal, QC H3A 0G4, Canada T. Dotani Japan Aerospace Exploration Agency (JAXA), Institute of Space and Astronautical Science (ISAS), Sagamihara, Kanagawa 252-5210, Japan L. Duband Univ. Grenoble Alpes, CEA, IRIG-DSBT, 38000 Grenoble, France S. Duff National Institute of Standards and Technology (NIST), Boulder, Colorado 80305, USA J.M. 
Duval Univ. Grenoble Alpes, CEA, IRIG-DSBT, 38000 Grenoble, France K. Ebisawa Japan Aerospace Exploration Agency (JAXA), Institute of Space and Astronautical Science (ISAS), Sagamihara, Kanagawa 252-5210, Japan T. Elleflot Lawrence Berkeley National Laboratory (LBNL), Physics Division, Berkeley, CA 94720, USA H.K. Eriksen University of Oslo, Institute of Theoretical Astrophysics, NO-0315 Oslo, Norway J. Errard Université de Paris, CNRS, Astroparticule et Cosmologie, F-75013 Paris, France T. Essinger-Hileman NASA Goddard Space Flight Center F. Finelli INAF - OAS Bologna, via Piero Gobetti, 93/3, 40129 Bologna (Italy) R. Flauger University of California, San Diego, Department of Physics, San Diego, CA 92093-0424, USA C. Franceschet Dipartimento di Fisica, Università degli Studi di Milano, INAF-IASF Milano, and Sezione INFN Milano U. Fuskeland University of Oslo, Institute of Theoretical Astrophysics, NO-0315 Oslo, Norway M. Galloway University of Oslo, Institute of Theoretical Astrophysics, NO-0315 Oslo, Norway K. Ganga Université de Paris, CNRS, Astroparticule et Cosmologie, F-75013 Paris, France J.R. Gao SRON Netherlands Institute for Space Research R. Genova-Santos Instituto de Astrofisica de Canarias (IAC), Spain M. Gerbino Dipartimento di Fisica e Scienze della Terra, Università di Ferrara and Sezione INFN di Ferrara, Via Saragat 1, 44122 Ferrara, Italy M. Gervasi University of Milano Bicocca, Physics Department, p.zza della Scienza, 3, 20126 Milan Italy T. Ghigna Kavli Institute for the Physics and Mathematics of the Universe (Kavli IPMU, WPI), UTIAS, The University of Tokyo, Kashiwa, Chiba 277-8583, Japan University of Oxford E. Gjerløw University of Oslo, Institute of Theoretical Astrophysics, NO-0315 Oslo, Norway M.L. Gradziel National University of Ireland Maynooth J. Grain Institut d’Astrophysique Spatiale (IAS), CNRS, UMR 8617, Universit$\acute{\rm e}$ Paris-Sud 11, B$\hat{\rm a}$timent 121, 91405 Orsay, France F. 
Grupp Max Planck Institute for Extraterrestrial Physics, Giessenbachstrasse, 85748 Garching, Germany A. Gruppuso INAF - OAS Bologna, via Piero Gobetti, 93/3, 40129 Bologna (Italy) J.E. Gudmundsson Stockholm University T. de Haan High Energy Accelerator Research Organization (KEK), Tsukuba, Ibaraki 305-0801, Japan N.W. Halverson Center for Astrophysics and Space Astronomy, University of Colorado, Boulder, CO, 80309, USA P. Hargrave Cardiff University, School of Physics and Astronomy, Cardiff CF10 3XQ, UK T. Hasebe Japan Aerospace Exploration Agency (JAXA), Institute of Space and Astronautical Science (ISAS), Sagamihara, Kanagawa 252-5210, Japan M. Hasegawa High Energy Accelerator Research Organization (KEK), Tsukuba, Ibaraki 305-0801, Japan M. Hattori Tohoku University, Graduate School of Science, Astronomical Institute, Sendai, 980-8578, Japan S. Henrot-Versillé Université Paris-Saclay, CNRS/IN2P3, IJCLab, 91405 Orsay, France D. Herman University of Oslo, Institute of Theoretical Astrophysics, NO-0315 Oslo, Norway D. Herranz Instituto de Fisica de Cantabria (IFCA, CSIC-UC), Avenida los Castros SN, 39005, Santander, Spain C.A. Hill Lawrence Berkeley National Laboratory (LBNL), Physics Division, Berkeley, CA 94720, USA University of California, Berkeley, Department of Physics, Berkeley, CA 94720, USA G. Hilton National Institute of Standards and Technology (NIST), Boulder, Colorado 80305, USA Y. Hirota The University of Tokyo, Tokyo 113-0033, Japan E. Hivon Institut d’Astrophysique de Paris, CNRS/Sorbonne Universit$\acute{\rm e}$, Paris France R.A. Hlozek University of Toronto, Canada Y. Hoshino Saitama University, Saitama 338-8570, Japan E. de la Hoz Instituto de Fisica de Cantabria (IFCA, CSIC-UC), Avenida los Castros SN, 39005, Santander, Spain J. Hubmayr National Institute of Standards and Technology (NIST), Boulder, Colorado 80305, USA K. Ichiki Nagoya University, Kobayashi-Masukawa Institute for the Origin of Particle and the Universe, Aichi 464-8602, Japan T. 
Iida ispace, inc. H. Imada National Astronomical Observatory of Japan, Mitaka, Tokyo 181-8588, Japan K. Ishimura Waseda University, Tokyo, Japan H. Ishino Okayama University, Department of Physics, Okayama 700-8530, Japan G. Jaehnig Center for Astrophysics and Space Astronomy, University of Colorado, Boulder, CO, 80309, USA T. Kaga Japan Aerospace Exploration Agency (JAXA), Institute of Space and Astronautical Science (ISAS), Sagamihara, Kanagawa 252-5210, Japan S. Kashima National Astronomical Observatory of Japan, Mitaka, Tokyo 181-8588, Japan N. Katayama Kavli Institute for the Physics and Mathematics of the Universe (Kavli IPMU, WPI), UTIAS, The University of Tokyo, Kashiwa, Chiba 277-8583, Japan A. Kato High Energy Accelerator Research Organization (KEK), Tsukuba, Ibaraki 305-0801, Japan The Graduate University for Advanced Studies (SOKENDAI), Miura District, Kanagawa 240-0115, Hayama, Japan T. Kawasaki Kitasato University, Sagamihara, Kanagawa 252-0373, Japan R. Keskitalo Lawrence Berkeley National Laboratory (LBNL), Computational Cosmology Center, Berkeley, CA 94720, USA University of California, Berkeley, Space Science Laboratory, Berkeley, CA 94720, USA T. Kisner Lawrence Berkeley National Laboratory (LBNL), Computational Cosmology Center, Berkeley, CA 94720, USA University of California, Berkeley, Space Science Laboratory, Berkeley, CA 94720, USA Y. Kobayashi The University of Tokyo, Tokyo 113-0033, Japan N. Kogiso Osaka Prefecture University, Sakai, Osaka 599-8531, Japan A. Kogut NASA Goddard Space Flight Center K. Kohri High Energy Accelerator Research Organization (KEK), Tsukuba, Ibaraki 305-0801, Japan E. Komatsu Max Planck Institute for Astrophysics, Karl-Schwarzschild-Strasse 1, D-85740 Garching, Germany K. Komatsu Okayama University, Department of Physics, Okayama 700-8530, Japan K. Konishi The University of Tokyo, Tokyo 113-0033, Japan N. Krachmalnicoff International School for Advanced Studies (SISSA), Via Bonomea 265, 34136, Trieste, Italy I. 
Kreykenbohm Dr. Remeis-Sternwarte and ECAP, Friedrich-Alexander-Universität Erlangen-Nürnberg, Sternwartstr. 7, 96049 Bamberg, Germany C.L. Kuo SLAC National Accelerator Laboratory, Kavli Institute for Particle Astrophysics and Cosmology (KIPAC), Menlo Park, CA 94025, USA Stanford University, Department of Physics, CA 94305-4060, USA A. Kushino Kurume University, Kurume, Fukuoka 830-0011, Japan L. Lamagna Dipartimento di Fisica, Università La Sapienza, P. le A. Moro 2, Roma, Italy and INFN Roma J.V. Lanen National Institute of Standards and Technology (NIST), Boulder, Colorado 80305, USA M. Lattanzi Istituto Nazionale di Fisica Nucleare - Sezione di Ferrara A.T. Lee University of California, Berkeley, Department of Physics, Berkeley, CA 94720, USA Lawrence Berkeley National Laboratory (LBNL), Physics Division, Berkeley, CA 94720, USA C. Leloup Université de Paris, CNRS, Astroparticule et Cosmologie, F-75013 Paris, France F. Levrier Laboratoire de Physique de l’$\acute{\rm E}$cole Normale Sup$\acute{\rm e}$rieure, ENS, Universit$\acute{\rm e}$ PSL, CNRS, Sorbonne Universit$\acute{\rm e}$, Universit$\acute{\rm e}$ de Paris, 75005 Paris, France E. Linder University of California, Berkeley, Space Science Laboratory, Berkeley, CA 94720, USA Lawrence Berkeley National Laboratory (LBNL), Physics Division, Berkeley, CA 94720, USA T. Louis Université Paris- Saclay, CNRS/IN2P3, IJCLab, 91405 Orsay, France G. Luzzi Italian Space Agency (ASI) T. Maciaszek Centre National d’Etudes Staptiales (CNES), France B. Maffei Institut d’Astrophysique Spatiale (IAS), CNRS, UMR 8617, Universit$\acute{\rm e}$ Paris-Sud 11, B$\hat{\rm a}$timent 121, 91405 Orsay, France D. Maino Dipartimento di Fisica, Università degli Studi di Milano, INAF-IASF Milano, and Sezione INFN Milano M. Maki High Energy Accelerator Research Organization (KEK), Tsukuba, Ibaraki 305-0801, Japan S. Mandelli Dipartimento di Fisica, Università degli Studi di Milano, INAF-IASF Milano, and Sezione INFN Milano E. 
Martinez-Gonzalez Instituto de Fisica de Cantabria (IFCA, CSIC-UC), Avenida los Castros SN, 39005, Santander, Spain S. Masi Dipartimento di Fisica, Università La Sapienza, P. le A. Moro 2, Roma, Italy and INFN Roma T. Matsumura Kavli Institute for the Physics and Mathematics of the Universe (Kavli IPMU, WPI), UTIAS, The University of Tokyo, Kashiwa, Chiba 277-8583, Japan A. Mennella Dipartimento di Fisica, Università degli Studi di Milano, INAF-IASF Milano, and Sezione INFN Milano M. Migliaccio Dipartimento di Fisica, Università di Roma “Tor Vergata”, and Sezione INFN Roma2 Y. Minami High Energy Accelerator Research Organization (KEK), Tsukuba, Ibaraki 305-0801, Japan K. Mitsuda National Astronomical Observatory of Japan, Mitaka, Tokyo 181-8588, Japan J. Montgomery McGill University, Physics Department, Montreal, QC H3A 0G4, Canada L. Montier IRAP, Université de Toulouse, CNRS, CNES, UPS, (Toulouse), France G. Morgante INAF - OAS Bologna, via Piero Gobetti, 93/3, 40129 Bologna (Italy) B. Mot IRAP, Université de Toulouse, CNRS, CNES, UPS, (Toulouse), France Y. Murata Japan Aerospace Exploration Agency (JAXA), Institute of Space and Astronautical Science (ISAS), Sagamihara, Kanagawa 252-5210, Japan J.A. Murphy National University of Ireland Maynooth M. Nagai National Astronomical Observatory of Japan, Mitaka, Tokyo 181-8588, Japan Y. Nagano Okayama University, Department of Physics, Okayama 700-8530, Japan T. Nagasaki High Energy Accelerator Research Organization (KEK), Tsukuba, Ibaraki 305-0801, Japan R. Nagata Japan Aerospace Exploration Agency (JAXA), Institute of Space and Astronautical Science (ISAS), Sagamihara, Kanagawa 252-5210, Japan S. Nakamura Yokohama National University, Yokohama, Kanagawa 240-8501, Japan T. Namikawa DAMTP, Centre for Mathematical Sciences, Wilberforce Road, Cambridge CB3 0WA, U.K. P.
Natoli Dipartimento di Fisica e Scienze della Terra, Università di Ferrara and Sezione INFN di Ferrara, Via Saragat 1, 44122 Ferrara, Italy S. Nerval University of Toronto, Canada T. Nishibori Japan Aerospace Exploration Agency (JAXA), Research and Development Directorate, Tsukuba, Ibaraki 305-8505, Japan H. Nishino University of Tokyo, School of Science, Research Center for the Early Universe, RESCEU F. Noviello Cardiff University, School of Physics and Astronomy, Cardiff CF10 3XQ, UK C. O’Sullivan National University of Ireland Maynooth H. Ogawa Osaka Prefecture University, Sakai, Osaka 599-8531, Japan H. Ogawa Japan Aerospace Exploration Agency (JAXA), Institute of Space and Astronautical Science (ISAS), Sagamihara, Kanagawa 252-5210, Japan S. Oguri Japan Aerospace Exploration Agency (JAXA), Institute of Space and Astronautical Science (ISAS), Sagamihara, Kanagawa 252-5210, Japan H. Ohsaki The University of Tokyo, Tokyo 113-0033, Japan I.S. Ohta Konan University, Kobe, Japan N. Okada Japan Aerospace Exploration Agency (JAXA), Institute of Space and Astronautical Science (ISAS), Sagamihara, Kanagawa 252-5210, Japan N. Okada Osaka Prefecture University, Sakai, Osaka 599-8531, Japan L. Pagano Instituto de Astrofisica de Canarias (IAC), Spain A. Paiella Dipartimento di Fisica, Università La Sapienza, P. le A. Moro 2, Roma, Italy and INFN Roma D. Paoletti INAF - OAS Bologna, via Piero Gobetti, 93/3, 40129 Bologna (Italy) G. Patanchon Université de Paris, CNRS, Astroparticule et Cosmologie, F-75013 Paris, France J. Peloton Université Paris-Saclay, CNRS/IN2P3, IJCLab, 91405 Orsay, France F. Piacentini Dipartimento di Fisica, Università La Sapienza, P. le A. Moro 2, Roma, Italy and INFN Roma G. Pisano Dipartimento di Fisica, Università La Sapienza, P. le A. Moro 2, Roma, Italy and INFN Roma Cardiff University, School of Physics and Astronomy, Cardiff CF10 3XQ, UK G. Polenta Space Science Data Center, Italian Space Agency, via del Politecnico, 00133, Roma, Italy D. 
Poletti International School for Advanced Studies (SISSA), Via Bonomea 265, 34136, Trieste, Italy T. Prouvé Univ. Grenoble Alpes, CEA, IRIG-DSBT, 38000 Grenoble, France G. Puglisi Stanford University, Department of Physics, CA 94305-4060, USA D. Rambaud IRAP, Université de Toulouse, CNRS, CNES, UPS, (Toulouse), France C. Raum University of California, Berkeley, Department of Physics, Berkeley, CA 94720, USA S. Realini Dipartimento di Fisica, Università degli Studi di Milano, INAF-IASF Milano, and Sezione INFN Milano M. Reinecke Max Planck Institute for Astrophysics, Karl-Schwarzschild-Strasse 1, D-85740 Garching, Germany M. Remazeilles University of Manchester, Manchester M13 9PL, United Kingdom A. Ritacco Institut d’Astrophysique Spatiale (IAS), CNRS, UMR 8617, Université Paris-Sud 11, Bâtiment 121, 91405 Orsay, France Laboratoire de Physique de l’École Normale Supérieure, ENS, Université PSL, CNRS, Sorbonne Université, Université de Paris, 75005 Paris, France G. Roudil IRAP, Université de Toulouse, CNRS, CNES, UPS, (Toulouse), France J.A. Rubino-Martin Instituto de Astrofisica de Canarias (IAC), Spain M. Russell University of California, San Diego, Department of Physics, San Diego, CA 92093-0424, USA H. Sakurai The Institute for Solid State Physics (ISSP), The University of Tokyo, Kashiwa, Chiba 277-8581, Japan Y. Sakurai Kavli Institute for the Physics and Mathematics of the Universe (Kavli IPMU, WPI), UTIAS, The University of Tokyo, Kashiwa, Chiba 277-8583, Japan M. Sandri INAF - OAS Bologna, via Piero Gobetti, 93/3, 40129 Bologna (Italy) M. Sasaki Dr. Remeis-Sternwarte and ECAP, Friedrich-Alexander-Universität Erlangen-Nürnberg, Sternwartstr. 7, 96049 Bamberg, Germany G. Savini Optical Science Laboratory, Physics and Astronomy Dept., University College London (UCL) D. Scott University of British Columbia, Canada J.
Seibert University of California, San Diego, Department of Physics, San Diego, CA 92093-0424, USA Y. Sekimoto Japan Aerospace Exploration Agency (JAXA), Institute of Space and Astronautical Science (ISAS), Sagamihara, Kanagawa 252-5210, Japan The University of Tokyo, Department of Astronomy, Tokyo 113-0033, Japan High Energy Accelerator Research Organization (KEK), Tsukuba, Ibaraki 305-0801, Japan B. Sherwin DAMTP, Centre for Mathematical Sciences, Wilberforce Road, Cambridge CB3 0WA, U.K. Kavli Institute for Cosmology Cambridge, Madingley Road, Cambridge CB3 0HA, U.K. Lawrence Berkeley National Laboratory (LBNL), Physics Division, Berkeley, CA 94720, USA K. Shinozaki Japan Aerospace Exploration Agency (JAXA), Research and Development Directorate, Tsukuba, Ibaraki 305-8505, Japan M. Shiraishi National Institute of Technology, Kagawa College P. Shirron NASA Goddard Space Flight Center G. Signorelli INFN Sezione di Pisa, Largo Bruno Pontecorvo 3, 56127 Pisa (Italy) G. Smecher Three-Speed Logic, Inc. S. Stever Okayama University, Department of Physics, Okayama 700-8530, Japan Kavli Institute for the Physics and Mathematics of the Universe (Kavli IPMU, WPI), UTIAS, The University of Tokyo, Kashiwa, Chiba 277-8583, Japan R. Stompor Université de Paris, CNRS, Astroparticule et Cosmologie, F-75013 Paris, France H. Sugai Kavli Institute for the Physics and Mathematics of the Universe (Kavli IPMU, WPI), UTIAS, The University of Tokyo, Kashiwa, Chiba 277-8583, Japan S. Sugiyama Saitama University, Saitama 338-8570, Japan A. Suzuki Lawrence Berkeley National Laboratory (LBNL), Physics Division, Berkeley, CA 94720, USA J. Suzuki High Energy Accelerator Research Organization (KEK), Tsukuba, Ibaraki 305-0801, Japan T.L. Svalheim University of Oslo, Institute of Theoretical Astrophysics, NO-0315 Oslo, Norway E. Switzer NASA Goddard Space Flight Center R. 
Takaku Japan Aerospace Exploration Agency (JAXA), Institute of Space and Astronautical Science (ISAS), Sagamihara, Kanagawa 252-5210, Japan The University of Tokyo, Department of Physics, Tokyo 113-0033, Japan H. Takakura The University of Tokyo, Department of Astronomy, Tokyo 113-0033, Japan Japan Aerospace Exploration Agency (JAXA), Institute of Space and Astronautical Science (ISAS), Sagamihara, Kanagawa 252-5210, Japan S. Takakura Kavli Institute for the Physics and Mathematics of the Universe (Kavli IPMU, WPI), UTIAS, The University of Tokyo, Kashiwa, Chiba 277-8583, Japan Y. Takase Okayama University, Department of Physics, Okayama 700-8530, Japan Y. Takeda Japan Aerospace Exploration Agency (JAXA), Institute of Space and Astronautical Science (ISAS), Sagamihara, Kanagawa 252-5210, Japan A. Tartari INFN Sezione di Pisa, Largo Bruno Pontecorvo 3, 56127 Pisa (Italy) E. Taylor University of California, Berkeley, Department of Physics, Berkeley, CA 94720, USA Y. Terao The University of Tokyo, Tokyo 113-0033, Japan H. Thommesen University of Oslo, Institute of Theoretical Astrophysics, NO-0315 Oslo, Norway K.L. Thompson SLAC National Accelerator Laboratory, Kavli Institute for Particle Astrophysics and Cosmology (KIPAC), Menlo Park, CA 94025, USA Stanford University, Department of Physics, CA 94305-4060, USA B. Thorne University of Oxford T. Toda Okayama University, Department of Physics, Okayama 700-8530, Japan M. Tomasi Dipartimento di Fisica, Università degli Studi di Milano, INAF-IASF Milano, and Sezione INFN Milano M. Tominaga The University of Tokyo, Department of Astronomy, Tokyo 113-0033, Japan Japan Aerospace Exploration Agency (JAXA), Institute of Space and Astronautical Science (ISAS), Sagamihara, Kanagawa 252-5210, Japan N. Trappe National University of Ireland Maynooth M. Tristram Université Paris-Saclay, CNRS/IN2P3, IJCLab, 91405 Orsay, France M. Tsuji National Institute of Technology, Kagawa College M. 
Tsujimoto Japan Aerospace Exploration Agency (JAXA), Institute of Space and Astronautical Science (ISAS), Sagamihara, Kanagawa 252-5210, Japan C. Tucker Cardiff University, School of Physics and Astronomy, Cardiff CF10 3XQ, UK J. Ullom National Institute of Standards and Technology (NIST), Boulder, Colorado 80305, USA G. Vermeulen Néel Institute, CNRS P. Vielva Instituto de Fisica de Cantabria (IFCA, CSIC-UC), Avenida los Castros SN, 39005, Santander, Spain F. Villa INAF - OAS Bologna, via Piero Gobetti, 93/3, 40129 Bologna (Italy) M. Vissers National Institute of Standards and Technology (NIST), Boulder, Colorado 80305, USA N. Vittorio Dipartimento di Fisica, Università di Roma ”Tor Vergata”, and Sezione INFN Roma2 I. Wehus University of Oslo, Institute of Theoretical Astrophysics, NO-0315 Oslo, Norway J. Weller Universitäts-Sternwarte, Fakultät für Physik, Ludwig- Maximilians Universität München, Scheinerstr. 1, 81679 München, Germany Max Planck Institute for Extraterrestrial Physics, Giessenbachstrasse, 85748 Garching, Germany B. Westbrook University of California, Berkeley, Department of Physics, Berkeley, CA 94720, USA J. Wilms Dr. Remeis-Sternwarte and ECAP, Friedrich-Alexander-Universität Erlangen- Nürnberg, Sternwartstr. 7, 96049 Bamberg, Germany B. Winter Optical Science Laboratory, Physics and Astronomy Dept., University College London (UCL) Mullard Space Science Laboratory, University College London, London E.J. Wollack Lawrence Berkeley National Laboratory (LBNL), Physics Division, Berkeley, CA 94720, USA N.Y. Yamasaki Japan Aerospace Exploration Agency (JAXA), Institute of Space and Astronautical Science (ISAS), Sagamihara, Kanagawa 252-5210, Japan T. Yoshida Japan Aerospace Exploration Agency (JAXA), Institute of Space and Astronautical Science (ISAS), Sagamihara, Kanagawa 252-5210, Japan J. Yumoto The University of Tokyo, Tokyo 113-0033, Japan M. Zannoni University of Milano Bicocca, Physics Department, p.zza della Scienza, 3, 20126 Milan Italy A. 
Zonca San Diego Supercomputer Center, University of California, San Diego, La Jolla, California, USA

###### Abstract

LiteBIRD, the Lite (Light) satellite for the study of $B$-mode polarization and Inflation from cosmic background Radiation Detection, is a space mission for primordial cosmology and fundamental physics. JAXA selected LiteBIRD in May 2019 as a strategic large-class (L-class) mission, with its expected launch in the late 2020s using JAXA’s H3 rocket. LiteBIRD plans to map the cosmic microwave background (CMB) polarization over the full sky with unprecedented precision. Its main scientific objective is to carry out a definitive search for the signal from cosmic inflation, either making a discovery or ruling out well-motivated inflationary models. The measurements of LiteBIRD will also provide us with an insight into the quantum nature of gravity and other new physics beyond the standard models of particle physics and cosmology. To this end, LiteBIRD will perform full-sky surveys for three years at the Sun-Earth Lagrangian point L2 for 15 frequency bands between 34 and 448 GHz with three telescopes, to achieve a total sensitivity of 2.16 $\mu$K-arcmin with a typical angular resolution of 0.5° at 100 GHz. We provide an overview of the LiteBIRD project, including scientific objectives, mission requirements, top-level system requirements, operation concept, and expected scientific outcomes.

###### keywords: LiteBIRD, cosmic inflation, cosmic microwave background, $B$-mode polarization, primordial gravitational waves, quantum gravity, space telescope

## 1 Project Overview

LiteBIRD, the Lite (Light) satellite for the study of $B$-mode polarization and Inflation from cosmic background Radiation Detection, is a space mission for primordial cosmology and fundamental physics. After some initial conceptual studies[1, 2, 3, 4, 5] that started in 2008, we proposed LiteBIRD in 2015 as JAXA’s large-class (L-class) mission candidate.
JAXA’s L-class is for flagship science missions with a 300 M USD cost cap. There will be three L-class missions in about ten years launched using JAXA’s H3 rocket. LiteBIRD passed an initial down-selection and in 2018 completed a two-year Pre-Phase-A2 concept development phase. JAXA selected LiteBIRD in May 2019 as the second L-class mission after MMX, the Martian Moons Exploration, which will be launched around 2025. The LiteBIRD Joint Study Group has more than 250 researchers from Japan, North America, and Europe with experience in CMB experiments, X-ray satellite missions, and other large projects in high-energy physics and astronomy. In particular, a large number of researchers who worked on the Planck satellite are members of LiteBIRD. We thus consider LiteBIRD to be the successor to the Planck satellite. The main scientific objective of LiteBIRD is to carry out a definitive search for the signal from cosmic inflation, either making a discovery or ruling out well-motivated inflationary models. The measurements of LiteBIRD will also provide us with insight into the quantum nature of gravity and other new physics beyond the standard models of particle physics and cosmology. To this end, LiteBIRD plans to map the CMB polarization over the full sky with unprecedented precision. Although the hot Big-Bang picture is well supported by many distinct types of observation, several critical ‘origins’ problems remain unanswered. The leading theory today to resolve these problems is cosmological inflation, hypothesizing that our Universe went through an accelerating expansion phase at very early stages, effectively beginning the hot Big Bang. The cosmic inflation hypothesis predicts the emission of primordial gravitational waves during the inflationary era. These primordial gravitational waves should have imprinted a unique signature, called the $B$ modes, in the polarization pattern of the CMB[6, 7, 8, 9]. 
Measurements of the large-angle CMB polarization are known to be the most sensitive probe of primordial gravitational waves. State-of-the-art technology is required for detection, since the $B$-mode signal will be much fainter than the already-detected $E$-mode pattern. The primordial $B$-mode measurements with LiteBIRD will also be the first stringent test of quantum gravity, which should exist behind any inflationary model. Here ‘quantum gravity’ means a theory that unifies, in a single framework, two pillars of physics: (1) Einstein’s theory of general relativity, which describes gravity; and (2) quantum mechanics. At this SPIE 2020 conference, there are several contributions with more details on the design of individual components of the LiteBIRD satellite[10, 11, 12, 13, 14, 15, 16, 17]. The purpose of this article is to give a concise overview of LiteBIRD as an introduction to the other LiteBIRD proceedings. In Sect. 2, we describe our Level-1 mission requirements, or scientific requirements, and the rationale behind them. In Sect. 3, we introduce our requirements flow and explain the measurement requirements. After describing the launch vehicle (Sect. 4), the spacecraft (Sect. 5), the payload module (Sect. 6), and the operation concept (Sect. 7), we discuss the expected scientific outcomes in Sect. 8 and give a summary in Sect. 9.

## 2 Science Requirements

Figure 1: Summary of present measurements of CMB power spectra[18, 19, 20, 21, 22, 23, 24, 25, 26, 27] and expected polarization sensitivity of LiteBIRD.

In Fig. 1 we summarize the present measurements of the CMB power spectra, including $B$ modes, with the expected polarization sensitivities of LiteBIRD displayed. The $B$-mode power is proportional to the tensor-to-scalar ratio, $r$, which is observationally constrained to be $r<0.06$ (95% C.L.)[22], with a recent update using Planck data to $r<0.044$[28].
The next generation of ground-based CMB polarization experiments has the potential to see a hint of the signal around $\ell\sim 100$, coming from the recombination epoch. However, if $r$ is less than approximately 0.03, the $B$ modes due to gravitational lensing become dominant. Removing the contamination of the lensing $B$ modes, often called ‘delensing’, is needed in this case. In contrast, another excess at $\ell<10$, which is due to reionization, is larger than the lensing $B$ modes even at $r=0.001$. In order to access the reionization peak, one needs to survey the full sky, where the advantage of observing from space is clear. The critical question is: to what precision should $r$ be measured? Here we introduce the total uncertainty on $r$, $\delta r$, which consists of five components: (instrumental) statistical uncertainties; systematic uncertainties; uncertainties due to contamination by foreground components; uncertainties due to gravitational lensing; and uncertainties due to observer biases. There are many different inflationary models under active discussion, which predict different values of $r$. Among them, there are well-motivated inflationary models that predict $r>0.01$[29]. If our requirement is $\delta r<0.001$, we can provide more than $10\,\sigma$ detection significance for such models. On the other hand, if LiteBIRD finds no primordial $B$ modes and obtains an upper limit on $r$, this limit would be stringent enough to set severe constraints on the physics of inflation. As discussed in Ref. 30, if we obtain an upper limit at $r\sim 0.003$, we can completely rule out one important category of models, namely any single-field model in which the characteristic field-variation scale of the inflaton potential is greater than the reduced Planck mass.
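The quoted detection significance follows from simple arithmetic on $r$ and the total uncertainty $\delta r$. The following back-of-envelope check is our own illustration, not the paper's full forecast:

```python
# Back-of-envelope check (illustration only, not LiteBIRD's full forecast):
# a model predicting tensor-to-scalar ratio r, measured with total
# uncertainty delta_r, is detected at roughly r / delta_r sigma.
def naive_significance(r: float, delta_r: float) -> float:
    return r / delta_r

# Well-motivated models with r > 0.01, measured with delta_r < 0.001,
# yield the "more than 10 sigma" quoted in the text.
print(naive_significance(0.01, 0.001))  # 10.0
```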
ID | Title | Requirement description
---|---|---
Lv1.01 | Tensor-to-scalar ratio $r$ measurement sensitivity | The mission shall measure $r$ with a total uncertainty of $\delta r\,{<}\,1\times 10^{-3}$. This value shall include contributions from instrument statistical noise fluctuations, instrumental systematics, residual foregrounds, lensing $B$ modes, and observer bias, and shall not rely on future external data sets.
Lv1.02 | Polarization angular power spectrum measurement capability | The mission shall obtain full-sky CMB linear polarization maps for achieving ${>}\,5\sigma$ significance using $2\,{\leq}\,\ell\,{\leq}\,10$ and $11\,{\leq}\,\ell\,{\leq}\,200$ separately, assuming $r\,{=}\,0.01$. We adopt a fiducial optical depth of $\tau=0.05$ for this calculation.

Table 1: Two science requirements of LiteBIRD, also called Level 1 (Lv1) mission requirements.

Based on all the considerations described above, we decided to impose the requirements described in Table 1. The first, Lv1.01, shall be achieved without delensing using external data; if external data are available, we may further reduce $\delta r$[31]. The second requirement, Lv1.02, becomes essential when $r$ is large. If there is some indication of the primordial $B$ modes before observations by LiteBIRD, that would imply a relatively large value of $r$. In this case, data from LiteBIRD will allow us to measure the $B$-mode signals from reionization and recombination simultaneously. If the spectral shape is consistent with the expectation from the standard cosmology, that will narrow down the list of possible inflationary models, and provide a much deeper insight into the correct model. If we observe an unexpected power spectrum beyond the standard model prediction, that will lead to a revolution in our picture of the Universe. Lv1.02 also sets the angular resolution requirement for LiteBIRD.
## 3 Measurement Requirements and System Requirements

To satisfy the science requirements described in the previous section, we use the requirements flow-down framework shown in Table 2. To derive Lv2 measurement requirements from Lv1 science requirements, we also consider program-level constraints, such as the cost cap, which are not controlled by the LiteBIRD team. We also use agreed-upon assumptions between the LiteBIRD team and other parties or within the LiteBIRD team. Examples include assumptions on the complexity of the astronomical foreground components, the cooling-chain lifetime, and basic system redundancy guidelines. There are in total 11 Lv2 measurement requirements, on the statistical uncertainty (Lv2.01), the systematic uncertainty (Lv2.02), the scan strategy (Lv2.03), the angular resolution (Lv2.04), calibration measurements (Lv2.05), error budget allocation (Lv2.06), systematic error budget allocation (Lv2.07), the duration of the normal observation phase (Lv2.08), the orbit (Lv2.09), observer bias (Lv2.10), and noise-covariance knowledge (Lv2.11).

Our error budget (Lv2.06) is defined such that an equal amount, $(1/\sqrt{3})\times 10^{-3}=0.57\times 10^{-3}$, is given to each of the following three components: the total statistical error after foreground separation, $\sigma_{\rm stat}$; the total systematic error, $\sigma_{\rm syst}$; and a margin. The requirements are thus $\sigma_{\rm stat}<0.57\times 10^{-3}$ on the statistical uncertainty (Lv2.01) and $\sigma_{\rm syst}<0.57\times 10^{-3}$ on the systematic uncertainty (Lv2.02). Since we assume no delensing using external data, $\sigma_{\rm stat}$ includes uncertainties from the lensing $B$-mode component. Uncertainties due to foreground separation are also in $\sigma_{\rm stat}$. The observer bias (Lv2.10) shall be much smaller than $\sigma_{\rm syst}$.
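The equal three-way split of the $\delta r$ budget is a quadrature allocation. A minimal arithmetic sketch, assuming the components add in quadrature as is standard for independent errors:

```python
import math

# Quadrature split of the total delta_r budget into three equal parts:
# statistical error, systematic error, and margin (values from the text).
delta_r_total = 1.0e-3
per_component = delta_r_total / math.sqrt(3.0)  # ~0.57e-3 each

# The three components re-sum in quadrature to the total budget.
recombined = math.sqrt(3.0 * per_component**2)
print(f"{per_component:.2e}", f"{recombined:.2e}")  # 5.77e-04 1.00e-03
```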
The requirement on the statistical uncertainty (Lv2.01) has six sub-requirements, on (1) the sensitivity to the CMB, (2) the sensitivity to dust emission, (3) the sensitivity to synchrotron emission, (4) the separation of CO lines, (5) the number of observing bands, and (6) the observing frequency range. These are determined through detailed simulations (described in Sect. 8). We require full-sky surveys (Lv2.03) to obtain the $B$ modes down to the lowest multipole of $\ell=2$. The angular resolution (Lv2.04) shall be better than 80 arcmin (FWHM) at the lowest frequency band in order to perform precision measurements at $\ell\,{=}\,200$. The regular observation phase (Lv2.08) shall be three years, considering the total cost cap and cooling-chain lifetime. The orbit (Lv2.09) shall be a Lissajous orbit around the Sun-Earth L2 point to avoid the influence of radiation from the Sun, Moon, or Earth (discussed further in Sect. 7). Requirements on calibration measurements (Lv2.05, Lv2.11) and systematic error budget allocation (Lv2.07) will be explained in Sects. 6 and 8, respectively.

Lv1 and Lv2 requirements are collectively called ‘mission requirements.’ In general, several possible designs meet the mission requirements. We therefore performed implementation trade-off studies to choose the best design. Here, we also consider the program-level constraints and assumptions that we used to set the Lv2 requirements. Lv3 instrument requirements constitute top-level system requirements. An essential distinction between Lv2 and Lv3 is that Lv3 instrument requirements are for the instrument chosen from the trade-off studies, while Lv2 measurement requirements do not assume a specific instrument in principle. Lv3 requirements include general system requirements not only for mission instruments but also for the bus system (also called the ‘service module,’ or ‘SVM’ for short), ground segments, and ground-support equipment. There are too many Lv3 requirements to list here.
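The 80-arcmin figure can be related to $\ell\,{=}\,200$ through the standard Gaussian beam window function $b_\ell=\exp[-\ell(\ell+1)\sigma^2/2]$ with $\sigma={\rm FWHM}/\sqrt{8\ln 2}$. The sketch below uses this textbook formula (our illustration, not taken from the LiteBIRD documents) to show how much of the $\ell=200$ signal such a beam transmits:

```python
import math

def gaussian_beam_window(ell: int, fwhm_arcmin: float) -> float:
    """Textbook Gaussian beam window function b_ell."""
    fwhm_rad = math.radians(fwhm_arcmin / 60.0)
    sigma = fwhm_rad / math.sqrt(8.0 * math.log(2.0))
    return math.exp(-ell * (ell + 1) * sigma**2 / 2.0)

# An 80-arcmin (FWHM) beam at ell = 200: the signal amplitude is
# suppressed by b_ell (power by b_ell**2) but remains recoverable
# with a well-characterized beam.
b200 = gaussian_beam_window(200, 80.0)
print(round(b200, 3))
```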
The requirement flow’s tree structure is also too detailed to show, since some Lv3 requirements derive from more than one Lv2 requirement; however, we will explain some essential Lv3 requirements in Sect. 6.

Class | Symbol | Description
---|---|---
Mission requirements: Level 1 (Lv1) science requirements | Lv1.XX (e.g., Lv1.01) | Top-level quantitative science requirements that are directly connected to the full success of the mission.
Mission requirements: Level 2 (Lv2) measurement requirements | Lv2.XX(.YY) (e.g., Lv2.01, Lv2.01.01) | Measurement requirements to achieve Lv1. No assumption is made on an instrument.
$\downdownarrows$ Implementation trade-off studies $\downdownarrows$ | |
System requirements: Level 3 (Lv3) instrument requirements | Lv3.XX(.YY) (e.g., Lv3.01, Lv3.01.01) | Top-level implementation requirements for a chosen instrument to achieve Lv2. Between Lv2 and Lv3 are trade-off studies for instrument selection.
System requirements: Level 4 (Lv4) component requirements | Lv4.XX(.YY) (e.g., Lv4.01, Lv4.01.01) | Component-level requirements to achieve Lv3.
System requirements: Level 5 (Lv5) sub-level build specifications | Lv5.XX(.YY) (e.g., Lv5.01, Lv5.01.01) | Sub-level build specifications to achieve Lv4.

Table 2: Definitions of the five requirement levels used in LiteBIRD’s requirements flow-down. We split the requirements into five levels, from the top-level science requirements (Lv1) to sub-level build specifications (Lv5). Each level is allowed to have a sub-structure; for example, the Level 2 requirement Lv2.01 has six sub-requirements (such as Lv2.01.01 and Lv2.01.02).

## 4 Launch Vehicle

LiteBIRD will be launched on an H3[32], Japan’s new flagship rocket. It will achieve high flexibility, high reliability, and high performance at a lower cost than the currently used H-IIA rocket. The H3 rocket is under development with its prime contractor, Mitsubishi Heavy Industries, with a maiden flight scheduled in the Japanese fiscal year of 2021.
The first stage of the H3 rocket will adopt the newly-developed liquid engine, LE-9, which achieves a 1.4 times larger thrust than the LE-7A engine currently in use. Its second-stage engine, LE-5B-3, and the solid rocket booster, SRB-3, will also be improved. The launch capability of the H3 rocket to the geostationary transfer orbit will be the highest ever among JAXA’s launch vehicles, exceeding that of the existing H-IIA and H-IIB launch vehicles. The launch facility at Tanegashima Space Center will also be upgraded following the development of H3. The design of the H3 rocket allows for several different configurations. The rocket type is defined by the combination of the number of first-stage engines (2 or 3), the number of solid rocket boosters (0, 2, or 4), and the length of the fairing (short or long). (Some of the combinations may not be offered as a standard lineup, based on market research.) These lineups make it possible to cope with various payload sizes and orbits. Considering the size, weight, and orbit of LiteBIRD, we plan to adopt the H3-22L configuration, which means two first-stage engines, two boosters, and the long fairing. The H3 rocket is designed to have a launch capability of at least 4 t to a Sun-synchronous orbit (500 km in altitude), and 6.5 t to the geostationary transfer orbit. The parameters of these orbits, however, are not appropriate for estimating the capability to L2. We thus use the C3-based launch capability to evaluate the maximum allowed weight for the L2 orbit. Here, C3 is defined as the square of the residual velocity that the payload launched from the Earth possesses at infinity. The launch capability defined for ${\rm C3}\,{=}\,0$ is a good approximation for L2. It is noteworthy that the launch capability changes vastly with the number of solid rocket boosters, whereas the number of main engines has only a moderate impact on the launch capability. The selection of the fairing also has little impact on the launch capability.
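The C3 figure of merit just defined has a simple textbook form. This sketch uses the standard two-body definition (our illustration; no H3-specific performance data is involved):

```python
import math

# Standard two-body definition of the characteristic energy C3
# (illustration only; no H3-specific performance data is used here):
# C3 = v**2 - 2*mu/r is the square of the hyperbolic excess speed,
# and C3 = 0 marks a parabolic, just-escaping trajectory -- the
# approximation adopted in the text for transfers toward L2.
MU_EARTH = 398600.4418  # km^3 s^-2, Earth's gravitational parameter

def c3(speed_km_s: float, radius_km: float) -> float:
    """C3 in km^2 s^-2 for a given speed at geocentric distance radius_km."""
    return speed_km_s**2 - 2.0 * MU_EARTH / radius_km

# Escape speed at 500 km altitude gives C3 = 0 by construction.
r_km = 6378.137 + 500.0
v_escape = math.sqrt(2.0 * MU_EARTH / r_km)
print(abs(round(c3(v_escape, r_km), 6)))  # 0.0
```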
With no solid rocket booster, e.g., with H3-30S, the launch capability for ${\rm C3}\,{=}\,0$ is far less than 2 t, which is much smaller than what is required for LiteBIRD. On the other hand, if two solid rocket boosters are used, e.g., with H3-22L or H3-32L, the launch capability (${\rm C3}\,{=}\,0$) becomes larger than 3.5 t, which is sufficient for LiteBIRD. This launch capability changes only slightly with the number of main engines. Because two main engines afford sufficient launch capability, we select H3-22L as the launch vehicle for LiteBIRD. Note that the long fairing is necessary for LiteBIRD to fit. Considering the estimated launch capability (${\rm C3}\,{=}\,0$) of H3-22L and the current estimate of the weight of LiteBIRD, together with the various uncertainties in these estimates, we set the provisional requirement on the total weight of LiteBIRD as ${<}\,3.5$ t. This requirement may be updated after the first flight of the H3 rocket. In most cases, the launch environment of H3 is expected to be similar to, or more moderate than, that of H-IIA. Details of the launch environment may depend on the rocket configuration, especially on the number of solid rocket boosters, the satellite mass, and the flight path. To be on the safe side, we assume the launch environment of H-IIA in general for the design of LiteBIRD. However, when the launch environment is critical in the design, such as for the mechanical requirement on the fundamental frequency of the satellite, we adopt requirements based on the current best estimate of the performance of H3.

## 5 Spacecraft

Figure 2: Conceptual design of the LiteBIRD spacecraft. The payload module (PLM) houses the low-frequency telescope (LFT), the medium-frequency telescope (MFT), and the high-frequency telescope (HFT).

The overall structure of the spacecraft for LiteBIRD is determined directly from the mission requirements. The axisymmetric shape of the spacecraft is selected to make spinning easier.
Because the satellite’s spin axis should be near the Earth-Sun line, it is natural to place the telescopes and the solar panels at the opposite ends of the spacecraft. We chose to place the PLM (including the telescopes) at the top of the spacecraft and the solar panels at its bottom, perpendicular to the spin axis. The high-gain antenna should be placed on the bottom side of the satellite, i.e., opposite the mission instruments, to point to the Earth and reduce interference with the telescopes. Based on these considerations, we show the basic structure of the spacecraft in Fig. 2. In this configuration, the whole spacecraft spins; the possibility of using a slip-ring to rotate only the PLM is not adopted. The main reasons for this choice are to handle the large heat dissipation in the PLM and to reduce the possibility of a single-point failure. The PLM is equipped with mechanical coolers, which dissipate a fairly large amount of heat. A radiator large enough to dissipate this heat can be accommodated only in the service module (SVM), and it is not easy to transfer heat from a spinning PLM through a slip-ring to a non-spinning SVM. The slip-ring would also introduce a single point whose failure would be critical for the mission. Furthermore, a slip-ring might produce micro-vibrations and could increase the detector noise significantly. For these reasons, we decided not to adopt the slip-ring and to rotate the whole spacecraft. The spacecraft has a thrust tube at its center, which transfers the PLM launch load to the rocket. We will install the fuel tank inside the thrust tube to utilize the inner space effectively. The insides of the side panels are used to mount various electric components of both the SVM and PLM. PLM components are preferentially placed on the upper parts of the side panels, whereas SVM components are on the lower parts.
The outer sides of the upper parts of the side panels are used to mount radiators, which radiate the heat dissipated by the PLM, such as from the mechanical coolers and electronics boxes.

Figure 3: Block diagram of the spacecraft for LiteBIRD. Boxes with broken lines represent electric equipment, while those with solid lines are subsystems composed of multiple equipment types. Lines and arrows connecting boxes are only representative.

We show the block diagram of the spacecraft in Fig. 3. The LiteBIRD spacecraft takes a typical satellite configuration. Although the spacecraft spins, its attitude control system works like that of a 3-axis stabilized satellite to satisfy the attitude control and determination accuracy requirements. The low spin rate (nominal 0.05 rpm, contingency 0.3 rpm) makes this possible. The spacecraft will have a total weight of 2.6 t, including approximately 400 kg of fuel, and a total height of 5.3 m. Thus the current weight has a large margin compared to the rocket's capability. We estimate the total power of the spacecraft to be 3.0 kW. The downlink rate will be $10\,{\rm Mbits}\,{\rm s}^{-1}$ in X-band, transferring a total of 17.9 GB of scientific data every day. All these parameters are subject to change as the conceptual design of the satellite continues to be developed.

## 6 Payload Module

Figure 4: Overview of the payload module (PLM).

Figure 5: Schematic overview of the cryogenic readout system, with digital frequency-domain multiplexing (left) and images of a chip of 40 inductor-capacitor resonators (right top) and a single gradiometric SQUID (right bottom). The red section indicates the part of the circuit located at 300 K, the blue section is the part located on the 100-mK stage, and the yellow section is the twisted-pair wiring harness that connects them.

Figure 4 shows an overview of the baseline-mission payload design of LiteBIRD.
The LiteBIRD payload module (PLM) consists of three telescopes – at low, medium, and high frequencies – with their respective focal planes and cryostructure cooled down to 0.1 K. It also includes the global cooling chain from 300 K to 5 K and room-temperature elements, such as drivers and warm readout electronics of the detectors. We derive PLM requirements from the top-level requirement of achieving a tensor-to-scalar ratio error of $\delta r<0.001$. This implies technical challenges for the PLM regarding sensitivity, optical properties, stability, and compactness over a wide range of frequencies, from 34 to 448 GHz. (The lowest and highest frequency bands have their center frequencies at 40 GHz and 402 GHz, with fractional bandwidths of 30 % and 23 %, giving the lower and upper band edges at 34 GHz and 448 GHz, respectively.) On the other hand, the angular resolution requirement is not stringent, since we only need to cover the multipoles $2\leq\ell\leq 200$. As a result, we obtain relaxed constraints on the telescopes' angular resolution (less than 80 arcmin), but robust control of the systematics is needed to minimize the $1/f$ noise. In this context, a critical technical choice made for LiteBIRD was to use as the first optical element a continuously-rotating half-wave plate (HWP) in the polarization modulation unit (PMU) for each telescope. The HWP allows us to distinguish between the instrumental polarization signal and the sky signal, because the HWP modulates only the latter, at a frequency of $4f_{\rm HWP}$. If we do not use the HWP, we need to take the signal difference between pairs of detectors that are mutually orthogonal in polarization orientation. This method is known to cause leakage from temperature to polarization if there are any differences in the beam, gain, or band-pass features between the two detectors. Using the HWP, we can significantly reduce this intensity-to-polarization leakage, enabling us to measure the polarization with a single detector.
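As an illustration of this single-detector scheme, the sketch below modulates a toy polarized signal at $4f_{\rm HWP}$ and recovers $Q$ and $U$ by lock-in demodulation. Only the 46 rpm HWP rate and the 19 Hz sampling rate come from the text; the Stokes amplitudes, noise level, and integration time are made-up assumptions.

```python
import math
import random

# Toy lock-in demodulation for a single detector behind a rotating HWP.
F_HWP = 46.0 / 60.0        # HWP rotation frequency in Hz (46 rpm, LFT value)
F_SAMP = 19.0              # detector sampling rate in Hz
I, Q, U = 10.0, 0.3, -0.2  # assumed Stokes parameters seen by the detector

# Simulate 10 minutes of data: the sky polarization is modulated at 4*f_HWP,
#   d(t) = I + Q*cos(4*w*t) + U*sin(4*w*t) + noise.
random.seed(0)
w4 = 2.0 * math.pi * 4.0 * F_HWP
t = [i / F_SAMP for i in range(int(600 * F_SAMP))]
d = [I + Q * math.cos(w4 * ti) + U * math.sin(w4 * ti) + random.gauss(0.0, 0.5)
     for ti in t]

# Lock-in demodulation: project the timestream onto the 4*f_HWP quadratures.
# The unmodulated intensity I (and slow drifts) averages away here, which is
# why a single detector suffices to measure Q and U.
n = len(d)
q_hat = 2.0 / n * sum(di * math.cos(w4 * ti) for di, ti in zip(d, t))
u_hat = 2.0 / n * sum(di * math.sin(w4 * ti) for di, ti in zip(d, t))
print(q_hat, u_hat)   # close to the input Q = 0.3 and U = -0.2
```

The 10-minute interval contains an integer number of modulation cycles, so the demodulation is exact up to the injected noise; in a real pipeline the HWP angle would come from an encoder rather than an ideal clock.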
The presence of the continuously-rotating HWP additionally provides effective suppression of the $1/f$ noise. We carried out detailed trade-off studies between the two cases, with and without the HWP, simulating the polarization effects caused by the imperfections of the HWP to make a fair comparison. We found that the performance without the HWP is lower than in the case with the HWP, preventing us from satisfying the scientific requirement on $\delta r$. Hence, while guaranteeing appropriate thermal performance in terms of stability and minimal heat load, the three telescopes will be equipped with PMUs. The revolution rate of each HWP is a function of the scan speed and the beam size. We chose 46, 39, and 61 rpm for LFT, MFT, and HFT, respectively. We have optimized the number of bands and their distribution over a wide range of frequencies, from 34 to 448 GHz, to deal with the following constraints.

1. The spectral bandwidth has to ensure the appropriate characterization of the expected complexity of the spectral energy distribution of the synchrotron and dust Galactic foregrounds, leading to 15 partially overlapping broad bands.
2. The limited frequency range of HWP materials (sapphire and metal mesh) led us to split the entire spectral range into three telescopes.
3. The spectral mapping of the CO lines has to be optimized by rejecting such molecular lines from some of the bands and including them in others (note that we decided not to use notch filters, since we have demonstrated that the rotating HWP strongly mitigates temperature-to-polarization leakage from CO lines).
4. An overlap between bands and instruments was foreseen to mitigate systematic effects.

We ended up with the following distribution: a reflective telescope at low frequencies, the LFT (34–161 GHz); and two refractive telescopes at middle and high frequencies, the MFT (89–225 GHz) and HFT (166–448 GHz). We plan to mount the MFT and HFT on the same structure.
They point in a different direction than the LFT; however, they cover the same circle on the sky as the spacecraft spins. More details on the LFT are found in Ref. 10, and on the MFT and HFT in Ref. 11. The three telescopes' focal planes, with large fields of view ($18^{\circ}\times 9^{\circ}$ for LFT, $28^{\circ}$ for MFT and HFT), are populated with multichroic polarized transition-edge sensor (TES) detectors (one to three bands per pixel). The multichroic technology allows a very compact design with sufficient flexibility to optimize the sensitivity per band required to improve the component separation. We have been using two detector technologies: lenslet-coupled detectors for the low- and medium-frequency telescopes; and horn-coupled detectors for the high-frequency telescope, for a total of 4339 detectors cooled down to 100 mK. More details on the focal plane design and detector fabrication are described in Ref. 12. The readout electronics[33, 34] (Fig. 5) takes advantage of the frequency-multiplexing scheme to accommodate this large set of detectors without losing information and with minimal power dissipation on the focal planes.

Figure 6: Assembly, integration, verification (AIV), and pre-launch calibration of LiteBIRD. The inset in the top-right corner shows the current LiteBIRD calibration strategy.

The instruments' temperature stability is another crucial point for CMB $B$-mode polarization probes, for two reasons. First, the temperature fluctuations of optical components contribute to noise stability and $1/f$ noise. Second, temperature variations of mechanical structures have a direct impact on pointing stability. Hence the three LiteBIRD telescopes are entirely cooled to 4.8 K to minimize the heat load on the focal planes. The proposed 300-K to 4.8-K cryogenic chain for LiteBIRD adopts the architecture developed as part of the SPICA-SAFARI mission.
This combines radiative cooling (V-grooves) down to 30 K with mechanical cryocoolers to provide cooling to temperatures down to about 4.8 K. In its current definition, a 15-K pulse tube cooler associated with three V-groove radiators, at 160 K, 90 K, and 30 K respectively, intercepts part of the thermal loads before a helium Joule-Thomson loop (4-K JT, 4He), pre-cooled by two two-stage Stirling coolers (100 K and 20 K). Between the 4.8-K mechanical enclosure and the 0.1-K detectors, all telescopes have intermediate cold stages at 1.8 K and 0.3 K. The 1.8-K cooler has three ADR stages operating in parallel to provide continuous cooling at 1.8 K. We will use the controlled heat rejection of the 1.8-K cooler operation to damp the thermal oscillations of the 4.8-K stage. The sub-kelvin cooler consists of two ADR stages in parallel to provide stable and continuous cooling at 0.3 K, combined with two other ADR stages in parallel at 0.1 K. We have optimized this cryochain design to ensure maximum temperature stability of the focal planes and of the optical elements of the telescopes.

Figure 6 shows the assembly, integration, verification (AIV), and pre-launch calibration of LiteBIRD. We have chosen the integration scheme carefully to keep interfaces between different institutions and countries as simple as possible. The inset in the top-right corner of Fig. 6 shows our calibration strategy. The plan is to derive a common approach for both instruments, LFT and MHFT, apart from some specific exceptions. The requirements on the accuracy of the measurements of the instrumental parameters, which serve as inputs for the determination of the calibration strategy, are derived from detailed systematics studies (see Sect. 8). The first step is to characterize the performance at the component level. These characterizations are part of the deliverables of the sub-systems. They will be based on the LiteBIRD specification and carried out before integration at the instrument level.
We will also use the data from these characterizations to build an instrument model and forecast the in-flight performance as we develop the system. Considering the specific challenges of beam calibration, we foresee a dedicated set of measurements, identified as 'RF characterization' in the inset. We will perform the instrument-level calibration for LFT and MHFT independently (in Japan and Europe, respectively) in a cold, flight-like environment. We will perform part of the final verification at the PLM level (system-level testing) when we assemble the LFT and MHFT with the satellite PLM and the SVM, together with the entire LiteBIRD cooling system. As highlighted in the inset, we plan to rely mostly on ground-calibration operations for some parameters, to ensure that they are determined accurately enough (to limit the impact of the systematic error on $r$); the spectral response is an example. For other parameters (such as the main beam), the planned accuracy with flight data should allow us to rely on the flight data themselves. In such cases, we will use ground measurements as the first guess for preliminary analysis and, later, as a reference for verification. It is worth noting that we do not exclude the option of solving for some systematic parameters as part of the map-making or component-separation processes. However, the calibration design philosophy does not rely on those post-analysis mitigations. Our strategy is to pursue the hardware development of ground-segment equipment as the current baseline. In parallel, we explore the possibility of mitigating the systematics through post-analysis steps, which we currently view as a potential safeguard. We will refine the calibration plans, the error budget allocation for hardware development, and the post-flight analysis mitigation strategies as the project evolves.

## 7 Operations concept

Figure 7: Scan strategy of LiteBIRD in a Lissajous orbit around L2.
Figure 8: Expected organization structure of the mission and science operations of LiteBIRD.

We will use JAXA's H3 rocket to launch LiteBIRD and insert it into an orbit around the Sun-Earth Lagrangian point known as L2, to carry out full-sky surveys for three years. A Lissajous orbit is our choice, due mainly to the better thermal conditions compared to halo orbits. In our scan strategy (Fig. 7), the spacecraft spins about the spin axis at 0.05 rpm, i.e., one rotation every 20 minutes. The spin axis itself also rotates, or precesses, around the anti-Sun axis; however, the precession rate is much lower, with the current studies using a precession period of 3.2058 hours. This value is chosen so that the spin and precession periods are not commensurate, avoiding systematics due to synchronization of the spin and precession. The spin axis is canted by $\alpha=45^{\circ}$ off the Sun-L2 axis, while the angle $\beta$ between the boresight and the spin axis is $50^{\circ}$. The requirements on the scan strategy include high thermal stability, good uniformity of the moving direction of the boresight pointing across each sky pixel ('attack angle' uniformity), near-uniform observation of sky pixels, broad daily sky coverage, and short revisit times for each sky pixel. These are all important to mitigate instrumental systematic uncertainties. Figure 8 shows the expected organizational structure of the mission and science operations of LiteBIRD. JAXA is preparing a new GRound station for deep-space Exploration And Telecommunication, named GREAT[35], which should be operational before the launch of LiteBIRD. The X-band transmission capability of GREAT will be sufficient to downlink all the data every day. JAXA is also responsible for the management of the mission operation team, while the science operations team (the SOT) is international and is responsible for analyzing observational data and producing scientific results. The SOT also works on the integration of the observational data and ground-calibration data.
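The scan geometry quoted above can be checked with a short sketch. The spherical-triangle formula for the angle between the boresight and the anti-Sun axis is standard; the sampling cadence below is an arbitrary choice.

```python
import math

# Scan geometry from the text: precession angle alpha = 45 deg, boresight
# angle beta = 50 deg, spin period 20 min, precession period 3.2058 h.
# The boresight angle theta from the anti-Sun axis follows from the
# spherical triangle (anti-Sun axis, spin axis, boresight):
#   cos(theta) = cos(alpha)cos(beta) + sin(alpha)sin(beta)cos(spin phase).
ALPHA = math.radians(45.0)
BETA = math.radians(50.0)
SPIN_PERIOD = 20.0 * 60.0        # seconds
PREC_PERIOD = 3.2058 * 3600.0    # seconds

def boresight_sun_angle(t):
    """Boresight angle from the anti-Sun axis (degrees) at time t (seconds)."""
    phase = 2.0 * math.pi * t / SPIN_PERIOD
    c = (math.cos(ALPHA) * math.cos(BETA)
         + math.sin(ALPHA) * math.sin(BETA) * math.cos(phase))
    return math.degrees(math.acos(c))

# Over one precession period the boresight sweeps between |alpha - beta| = 5
# deg and alpha + beta = 95 deg off the anti-Sun axis, so each spin crosses
# a wide band of the sky, reaching slightly past the poles of the scan.
angles = [boresight_sun_angle(t) for t in range(0, int(PREC_PERIOD), 10)]
print(min(angles), max(angles))   # approximately 5 and 95 degrees
```

The 95-degree maximum shows why this choice of $\alpha$ and $\beta$ yields full-sky coverage once the precession and the yearly motion around the Sun are included.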
## 8 Expected scientific outcomes

For cosmological forecasts, we have used the focal-plane parameters summarized in Table 3.

Telescope | Band ID | Center frequency [GHz] | Frequency band [GHz] (fraction) | $\theta_{\rm FWHM}$ [arcmin] | Detector pixel size [mm] | Total number of detectors | NET${}^{T}_{\rm array}$ [$\mu$K$\sqrt{\rm s}$] | $\omega^{-1/2}_{P}$ [$\mu$K-arcmin]
---|---|---|---|---|---|---|---|---
LFT | 1 | 40 | 12 (0.30) | 70.5 | 32 | 48 | 18.50 | 37.42
LFT | 2 | 50 | 15 (0.30) | 58.5 | 32 | 24 | 16.54 | 33.46
LFT | 3 | 60 | 14 (0.23) | 51.1 | 32 | 48 | 10.54 | 21.31
LFT | 4 | 68 | 16 (0.23) | (41.6, 47.1) | (16, 32) | (144, 24) | (9.84, 15.70) | (19.91, 31.77)
combined | | | | | | | 8.34 | 16.87
LFT | 5 | 78 | 18 (0.23) | (36.9, 43.8) | (16, 32) | (144, 48) | (7.69, 9.46) | (15.55, 19.13)
combined | | | | | | | 5.97 | 12.07
LFT | 6 | 89 | 20 (0.23) | (33.0, 41.5) | (16, 32) | (144, 24) | (6.07, 14.22) | (12.28, 28.77)
combined | | | | | | | 5.58 | 11.30
LFT/MFT | 7 | 100 | 23 (0.23) | 30.2/37.8 | 16/11.6 | 144/366 | 5.11/4.19 | 10.34/8.48
combined | | | | | | | 3.24 | 6.56
LFT/MFT | 8 | 119 | 36 (0.30) | 26.3/33.6 | 16/11.6 | 144/488 | 3.8/2.82 | 7.69/5.70
combined | | | | | | | 2.26 | 4.58
LFT/MFT | 9 | 140 | 42 (0.30) | 23.7/30.8 | 16/11.6 | 144/366 | 3.58/3.16 | 7.25/6.38
combined | | | | | | | 2.37 | 4.79
MFT | 10 | 166 | 50 (0.30) | 28.9 | 11.6 | 488 | 2.75 | 5.57
MFT/HFT | 11 | 195 | 59 (0.30) | 28.0/28.6 | 11.6/6.6 | 366/254 | 3.48/5.19 | 7.05/10.50
combined | | | | | | | 2.89 | 5.85
HFT | 12 | 235 | 71 (0.30) | 24.7 | 6.6 | 254 | 5.34 | 10.79
HFT | 13 | 280 | 84 (0.30) | 22.5 | 6.6 | 254 | 6.82 | 13.80
HFT | 14 | 337 | 101 (0.30) | 20.9 | 6.6 | 254 | 10.85 | 21.95
HFT | 15 | 402 | 92 (0.23) | 17.9 | 5.7 | 338 | 23.45 | 47.45
Total | | | | | | 4508 | | 2.16

Table 3: Focal-plane parameters of the 2020 baseline design of LiteBIRD.
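The "combined" rows of Table 3, and the $2.16\,\mu$K-arcmin total, are consistent with inverse-quadrature (inverse-variance) combination of the individual $\omega^{-1/2}_{P}$ values. The combination rule below is inferred from the table entries, not stated explicitly in the text.

```python
import math

def combine(sens):
    """Combine polarization sensitivities (uK-arcmin) by inverse variance."""
    return 1.0 / math.sqrt(sum(1.0 / s ** 2 for s in sens))

# LFT band 4: the two pixel types (19.91 and 31.77 uK-arcmin) combine to 16.87.
print(round(combine([19.91, 31.77]), 2))   # 16.87

# All 15 bands, using each band's combined value from the table,
# reproduce the quoted total sensitivity.
bands = [37.42, 33.46, 21.31, 16.87, 12.07, 11.30, 6.56, 4.58,
         4.79, 5.57, 5.85, 10.79, 13.80, 21.95, 47.45]
print(round(combine(bands), 2))            # 2.16
```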
In the calculation, the aperture stop temperature, mirrors for LFT, and lenses for HFT are at 5 K. The NET values include a margin (13 %), and the expected noise on the polarization signal on a sky pixel ($\omega^{-1/2}_{P}$) takes into account the end-to-end detector/readout yield of 80 %, as well as inefficiencies due to cosmic-ray hits (15 %) and ADR recycling (15 %). We use the method described in Ref. 36 for foreground cleaning. One of the critical points is that we take spatial variations into account as much as possible. We separate the entire sky into 768 regions. In each sky region, we model the synchrotron radiation with a spectral index and its running, and dust emission with a spectral index and a modified blackbody temperature parameter. We treat Stokes $Q$ and $U$ polarization independently. As a result, we have eight parameters in each sky region and $8\times 768=6144$ fit parameters in total. Figure 9: Sensitivity contours of LiteBIRD and the Simons Observatory. Systematic uncertainties are not included here. The plot assumes that the actual value of $r$ is 0.004. Our simulations yield $\sigma_{\rm stat}=0.6\times 10^{-3}$ for $r=0$ as an input after foreground cleaning, with negligibly small bias. We also confirmed a consistent result from an alternative foreground cleaning method described in Ref. 37. Figure 9 shows sensitivity contours of LiteBIRD. As a comparison, we also show the expectation from a leading ground-based project in this decade, namely Simons Observatory[38]. Here one of the most promising inflationary models is assumed, specifically the Starobinsky model that predicts $r\simeq 0.004$. We see the excellent discovery potential of LiteBIRD. More details will be given in a comprehensive overview of LiteBIRD[39] that is in preparation. We adopt a methodical approach for estimating systematic uncertainties. We start by listing sources of systematic uncertainties, where we have identified 70 items in 14 categories. 
For each item, we allocate 1 % of the total budget. This is based on our measurement requirement on the systematic error budget allocation (Lv2.07), which states the following: 'We shall decouple studies of each systematic error on $r$ as much as possible. Each component of systematic error on $r$ shall be less than 1 % of the total budget (Lv2.02), i.e., $\sigma_{\rm syst}$ from each component shall be less than $0.57\times 10^{-5}$. In case an outstanding component is identified, however, it is allowed to allocate a particular budget for that item. If this happens, a careful investigation shall be done and a collaboration-wide agreement shall be made.' We define our method for each source of systematic uncertainty, where some assumptions on the calibration methods are also introduced. As an example of such studies, we considered a source of systematic errors from half-wave plate imperfections. We model these imperfections with a Mueller matrix obtained from rigorous coupled-wave analysis (RCWA) simulations. We obtain leakage maps and the resulting $B$-mode power from this model, and find that the result satisfies our requirements. Another example is the systematic uncertainty due to cosmic-ray hits, which is described elsewhere.[16, 40] In case we find some outstanding items, we will allocate particular budgets. To obtain the total systematic error on $r$, $\sigma_{\rm syst}$, we make a sum in the map basis. We iterate this procedure, adjusting the error budget allocations, until we achieve the goal. Finally, combining $\sigma_{\rm syst}$ with $\sigma_{\rm stat}$, we have confirmed that we meet the requirement of $\delta r<0.001$ under some assumptions on the accuracy of our calibration methods[39]. Here we do not assume any combination of LiteBIRD data with future astronomical observations that will likely be available when we analyze LiteBIRD data.
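As a rough numerical sketch of this budget: the actual systematic summation is performed in the map basis, so the simple quadrature combination below is our simplifying assumption, used only to illustrate that the quoted numbers are mutually consistent.

```python
import math

# Quoted values: statistical error on r after foreground cleaning, and the
# total systematic budget on r (Lv2.02).
sigma_stat = 0.6e-3
syst_budget = 0.57e-3

# Lv2.07: each individual systematic component is allocated at most 1 % of
# the total systematic budget.
per_item = 0.01 * syst_budget
print(f"{per_item:.2e}")      # 5.70e-06, i.e., the 0.57e-5 quoted above

# Combining statistics and the full systematic budget in quadrature
# (an assumed rule, for illustration) stays within delta r < 0.001.
delta_r = math.sqrt(sigma_stat ** 2 + syst_budget ** 2)
print(f"{delta_r:.2e}")       # about 8.3e-04, below the 1e-3 requirement
```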
We have reasonable expectations of improved CMB polarization data at high $\ell$ from ground-based CMB projects, infrared survey data from space observations for delensing, and low-frequency millimeter-wave observations on the ground for synchrotron radiation cleaning.[39] If we combine all these non-LiteBIRD data, we expect to obtain an even better constraint on the tensor-to-scalar ratio. We define this possibility as the 'extra success' that enhances our science case beyond the 'full success' of achieving $\delta r<0.001$ for $2\leq\ell\leq 200$. We derive the system requirements of LiteBIRD from those for the tensor-to-scalar measurements alone. Once we carry out the observations successfully, we can use LiteBIRD data to study many additional topics in cosmology, particle physics, and astronomy. Examples include: (1) characterization of $B$ modes and searches for source fields, such as scale invariance, non-Gaussianity, and parity violation; (2) power-spectrum features in polarization; (3) cosmic-variance-limited measurements of large-scale (low-$\ell$) $E$ modes, with implications for the reionization history and the sum of neutrino masses; (4) searches for cosmic birefringence; (5) thermal Sunyaev-Zeldovich effects and relativistic corrections; (6) elucidating large-angle anomalies; and (7) Galactic astrophysics. LiteBIRD has very targeted mission requirements, and at the same time will provide rich scientific outcomes.

## 9 Summary

LiteBIRD is a space mission for primordial cosmology and fundamental physics. JAXA selected LiteBIRD in May 2019 as a strategic large-class (L-class) mission, with its expected launch in the 2020s using JAXA's H3 rocket. The LiteBIRD Joint Study Group has more than 250 researchers from Japan, North America, and Europe. LiteBIRD plans to map the CMB polarization over the full sky with unprecedented precision.
Its main scientific objective is to carry out a definitive search for the signal from cosmic inflation, either making a discovery or ruling out well-motivated inflationary models. The measurements of LiteBIRD will also provide us with insight into the quantum nature of gravity and other new physics beyond the standard models of particle physics and cosmology. To satisfy its essential science requirement, $\delta r\,{<}\,0.001$ for $2\,{\leq}\,\ell\,{\leq}\,200$, LiteBIRD will perform full-sky surveys for three years at the Sun-Earth Lagrangian point L2. LiteBIRD will use 15 frequency bands between 34 and 448 GHz with three telescopes to achieve a total sensitivity of $2.16\,\mu$K-arcmin with a typical angular resolution of 0.5∘ at 100 GHz. We have successfully completed the pre-phase-A2 concept development studies of LiteBIRD. Table 4 shows the baseline specifications of LiteBIRD resulting from these studies.

Item | Specification
---|---
Science requirement | $\delta r<0.001$ for $2\leq\ell\leq 200$
Target launch year | 2029
Launch vehicle | JAXA H3
Observation type | All-sky CMB surveys
Observation time | 3 years
Orbit | L2 Lissajous orbit
Scan and data recording | Spin and precession (precession angle $\alpha=45^{\circ}$, spin angle $\beta=50^{\circ}$); spin period = 20 minutes, precession period = 3.2058 hours; PMU revolution rate = 46/39/61 rpm for LFT/MFT/HFT; sampling rate = 19 Hz
Observing frequencies | 34–448 GHz
Number of bands | 15
Polarization sensitivity | $2.16\,\mu$K-arcmin (after 3 years)
Angular resolution | 0.5∘ at 100 GHz (FWHM for LFT)
Mission instruments | Superconducting detector arrays; crossed-Dragone mirrors (LFT) + two refractive telescopes (MFT and HFT); PMU with continuously-rotating HWP on each telescope; 0.1-K cooling chain (ST/JT/ADR)
Data size | $17.9\,{\rm GB}\,{\rm day}^{-1}$
Mass | 2.6 t
Power | 3.0 kW

Table 4: Main specifications of LiteBIRD.
Parameters are from the LiteBIRD pre-phase-A2 concept development studies and additional studies in 2020 as preparation for the system-requirements review. ###### Acknowledgements. This work is supported in Japan by ISAS/JAXA for Pre-Phase A2 studies, by the acceleration program of JAXA research and development directorate, by the World Premier International Research Center Initiative (WPI) of MEXT, by the JSPS Core-to-Core Program of A. Advanced Research Networks, and by JSPS KAKENHI Grant Numbers JP15H05891, JP17H01115, and JP17H01125. The Italian LiteBIRD phase A contribution is supported by the Italian Space Agency (ASI Grants No. 2020-9-HH.0 and 2016-24-H.1-2018), the National Institute for Nuclear Physics (INFN) and the National Institute for Astrophysics (INAF). The French LiteBIRD phase A contribution is supported by the Centre National d’Etudes Spatiale (CNES), by the Centre National de la Recherche Scientifique (CNRS), and by the Commissariat à l’Energie Atomique (CEA). The Canadian contribution is supported by the Canadian Space Agency. The US contribution is supported by NASA grant no. 80NSSC18K0132. Norwegian participation in LiteBIRD is supported by the Research Council of Norway (Grant No. 263011). The Spanish LiteBIRD phase A contribution is supported by the Spanish Agencia Estatal de Investigación (AEI), project refs. PID2019-110610RB-C21 and AYA2017-84185-P. Funds that support the Swedish contributions come from the Swedish National Space Agency (SNSA/Rymdstyrelsen) and the Swedish Research Council (Reg. no. 2019-03959). The German participation in LiteBIRD is supported in part by the Excellence Cluster ORIGINS, which is funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany’s Excellence Strategy (Grant No. EXC-2094 - 390783311). 
This research used resources of the Central Computing System owned and operated by the Computing Research Center at KEK, as well as resources of the National Energy Research Scientific Computing Center, a DOE Office of Science User Facility supported by the Office of Science of the U.S. Department of Energy. ## References * [1] Hazumi, M., “Jumping into CMB polarization measurements: A new group at KEK,” AIP Conf. Proc. 1040(1), 78–88 (2008). * [2] Hazumi, M., “Future CMB polarization measurements and Japanese contributions,” Prog. Theor. Phys. Suppl. 190, 75–89 (2011). * [3] Hazumi, M. et al., “LiteBIRD: a small satellite for the study of B-mode polarization and inflation from cosmic background radiation detection,” Proc. SPIE Int. Soc. Opt. Eng. 8442, 844219 (2012). * [4] Matsumura, T. et al., “Mission design of LiteBIRD,” J. Low Temp. Phys. 176, 733 (2014). * [5] Matsumura, T. et al., “LiteBIRD: mission overview and design tradeoffs,” Proc. SPIE Int. Soc. Opt. Eng. 9143, 91431F (2014). * [6] Kamionkowski, M., Kosowsky, A., and Stebbins, A., “A Probe of primordial gravity waves and vorticity,” Phys. Rev. Lett. 78, 2058–2061 (1997). * [7] Seljak, U. and Zaldarriaga, M., “Signature of gravity waves in polarization of the microwave background,” Phys. Rev. Lett. 78, 2054–2057 (1997). * [8] Zaldarriaga, M. and Seljak, U., “An all sky analysis of polarization in the microwave background,” Phys. Rev. D 55, 1830–1840 (1997). * [9] Kamionkowski, M., Kosowsky, A., and Stebbins, A., “Statistics of cosmic microwave background polarization,” Phys. Rev. D 55, 7368–7388 (1997). * [10] Sekimoto, Y. and the LiteBIRD Joint Study Group, “Wide field-of-view design of low frequency telescope on CMB B-mode polarization satellite LiteBIRD,” Space Telescopes and Instrumentation 2020: Optical, Infrared, and Millimeter Wave (2020 in preparation). * [11] Montier, L. 
and the LiteBIRD Joint Study Group, “Overview of the Medium- and High-Frequency Telescopes of the LiteBIRD satellite mission,” Space Telescopes and Instrumentation 2020: Optical, Infrared, and Millimeter Wave (2020 in preparation). * [12] Westbrook, B., Raum, C., and Suzuki, A., “Detector fabrication development for the LiteBIRD satellite mission,” in [Space Telescopes and Instrumentation 2020: Optical, Infrared, and Millimeter Wave ], (2020 in preparation). * [13] Sakurai, Y. et al., “Breadboard model of polarization modulator unit based on a continuous rotating half-wave plate for low frequency telescope of LiteBIRD space mission,” in [Space Telescopes and Instrumentation 2020 ], SPIE (2020). * [14] Takakura, H., Sekimoto, Y., Inatani, J., Kashima, S., and Sugimoto, M., “Polarization angle measurement of LiteBIRD low frequency telescope scaled model,” in [Space Telescopes and Instrumentation 2020 ], SPIE (2020). * [15] Tsuji, M., Tsujimoto, M., Sekimoto, Y., Dotani, T., and Shiraishi, M., “Simulating electromagnetic transfer function from the transmission antennae to the sensors vicinity in LiteBIRD,” Space Telescopes and Instrumentation 2020: Optical, Infrared, and Millimeter Wave (2020 in preparation). * [16] Tominaga, M., Tsujimoto, M., Stever, S., Ghigna, T., Ishino, H., and Ebisawa, K., “Cosmic ray glitch predictions, physical modeling, and overall effect on the LiteBIRD space mission (2),” in [Space Telescopes and Instrumentation 2020: Optical, Infrared, and Millimeter Wave ], (2020 in preparation).
* [17] Lamagna, L., Gudmundsson, J. E., Imada, H., Hargrave, P., Franceschet, C., De Petris, M., Austermann, J., Bounissou, S., Columbro, F., de Bernardis, P., Henrot-Versillé, S., Hubmayr, J., Jaehnig, G., Keskitalo, R., Maffei, B., Masi, S., Matsumura, T., Montier, L., Mot, B., Noviello, F., O’Sullivan, C., Paiella, A., Pisano, G., Realini, S., Ritacco, A., Savini, G., Suzuki, A., Trappe, N., and Winter, B., “The optical design of the LiteBIRD Middle and High Frequency Telescope,” in [Space Telescopes and Instrumentation 2020: Optical, Infrared, and Millimeter Waves ], 11443-283, Int. Soc. for Optics and Photonics, SPIE (2021). * [18] Hinshaw, G. et al., “Nine-Year Wilkinson Microwave Anisotropy Probe (WMAP) Observations: Cosmological Parameter Results,” Astrophys. J. Suppl. 208, 19 (2013). * [19] Bennett, C. et al., “Nine-Year Wilkinson Microwave Anisotropy Probe (WMAP) Observations: Final Maps and Results,” Astrophys. J. Suppl. 208, 20 (2013). * [20] Ade, P. et al., “A Measurement of the Cosmic Microwave Background $B$-Mode Polarization Power Spectrum at Sub-Degree Scales from 2 years of POLARBEAR Data,” Astrophys. J. 848(2), 121 (2017). * [21] Henning, J. et al., “Measurements of the Temperature and E-Mode Polarization of the CMB from 500 Square Degrees of SPTpol Data,” Astrophys. J. 852(2), 97 (2018). * [22] Ade, P. et al., “BICEP2 / Keck Array x: Constraints on Primordial Gravitational Waves using Planck, WMAP, and New BICEP2/Keck Observations through the 2015 Season,” Phys. Rev. Lett. 121, 221301 (2018). * [23] Aghanim, N. et al., “Planck 2018 results. V. CMB power spectra and likelihoods,” Astron. Astrophys. 641, A5 (2020). * [24] Sayre, J. et al., “Measurements of B-mode Polarization of the Cosmic Microwave Background from 500 Square Degrees of SPTpol Data,” Phys. Rev. D 101(12), 122003 (2020). * [25] Aiola, S. et al., “The Atacama Cosmology Telescope: DR4 Maps and Cosmological Parameters,” arXiv e-prints, arXiv:2007.07288 (July 2020). * [26] Choi, S. K.
et al., “The Atacama Cosmology Telescope: A Measurement of the Cosmic Microwave Background Power Spectra at 98 and 150 GHz,” arXiv e-prints, arXiv:2007.07289 (July 2020). * [27] Adachi, S. et al., “A measurement of the CMB E-mode angular power spectrum at subdegree scales from 670 square degrees of POLARBEAR data,” Astrophys. J. 904(1), 65 (2020). * [28] Tristram, M., Banday, A. J., Górski, K. M., Keskitalo, R., Lawrence, C. R., Andersen, K. J., Barreiro, R. B., Borrill, J., Eriksen, H. K., Fernandez-Cobos, R., Kisner, T. S., Martínez-González, E., Partridge, B., Scott, D., Svalheim, T. L., Thommesen, H., and Wehus, I. K., “Planck constraints on the tensor-to-scalar ratio,” arXiv e-prints, arXiv:2010.01139 (Oct. 2020). * [29] Kamionkowski, M. and Kovetz, E. D., “The Quest for B Modes from Inflationary Gravitational Waves,” Ann. Rev. Astron. Astrophys. 54, 227–269 (2016). * [30] Linde, A., “Gravitational waves and large field inflation,” J. Cosm. Astropart. Phys. 02, 006 (2017). * [31] Diego-Palazuelos, P., Vielva, P., Martínez-González, E., and Barreiro, R. B., “Comparison of delensing methodologies and assessment of the delensing capabilities of future experiments,” J. Cosm. Astropart. Phys. 2020, 058 (Nov. 2020). * [32] Japan Aerospace Exploration Agency (JAXA), “H3 Launch Vehicle.” https://global.jaxa.jp/projects/rockets/h3/. * [33] Dobbs, M., Bissonnette, E., and Spieler, H., “Digital Frequency Domain Multiplexer for Millimeter-Wavelength Telescopes,” IEEE Transactions on Nuclear Science 55, 21–26 (Jan. 2008). * [34] Montgomery, J., Digital Frequency Domain Multiplexing readout: design and performance of the SPT-3G instrument and LiteBIRD satellite readout, PhD thesis, McGill University (2020). * [35] Japan Aerospace Exploration Agency (JAXA), “GREAT, Ground Station for Deep Space Exploration and Telecommunication.” https://global.jaxa.jp/projects/sas/great/. * [36] Errard, J.
and Stompor, R., “Characterizing bias on large scale CMB B-modes after galactic foregrounds cleaning,” Phys. Rev. D 99(4), 043529 (2019). * [37] Andersen, K. et al., “BeyondPlanck I. Global Bayesian analysis of the Planck Low Frequency Instrument data,” in [BeyondPlanck Release Conference ], (11 2020). * [38] Ade, P. et al., “The Simons Observatory: Science goals and forecasts,” JCAP 02, 056 (2019). * [39] The LiteBIRD Joint Study Group, “Probing Cosmic Inflation with the LiteBIRD Cosmic Microwave Background Polarization Survey,” Progress in Theoretical and Experimental Physics , in preparation. * [40] Stever, S. L., Ghigna, T., Tominaga, M., Tsujimoto, M., Minami, Y., Sugiyama, S., Kato, A., Matsumura, T., Ishino, H., and Hazumi, M., “Simulations of systematic effects arising from cosmic rays in the LiteBIRD space telescope, and effects on the measurements of CMB $B$ modes,” J. Cosmol. Astrophys. (2020 in preparation).
# A Study on the Association between Maternal Childhood Trauma Exposure and Placental-fetal Stress Physiology during Pregnancy

Eileen Zhang (Department of Statistics, University of California, Irvine <EMAIL_ADDRESS>)

Abstract

Background: It has been found that the effects of childhood trauma (CT) exposure may pass on to the next generation. Scientists have hypothesized that the association between CT exposure and placental-fetal stress physiology is the mechanism, and a study was conducted to examine this hypothesis.

Method: To examine the association between CT exposure and placental corticotrophin-releasing hormone (pCRH), a linear mixed effect model and a hierarchical Bayesian linear model were constructed. In the Bayesian inference, conditionally conjugate priors were specified and a Gibbs sampler was used to draw MCMC samples. A piecewise linear mixed effect model was fitted to accommodate the dramatic change of pCRH at around week 20 of pregnancy. Pearson residual, QQ, ACF, and trace plots were used to assess model adequacy, and likelihood ratio tests and DIC were used for model selection.

Results: The association between CT exposure and pCRH during pregnancy is evident, and the effect of CT exposure on pCRH varies dramatically over gestational age. Women with one childhood trauma would experience 11.9% higher pCRH towards the end of pregnancy than those without childhood trauma. The rate of increase of pCRH after week 20 is almost four-fold that before week 20. Frequentist and Bayesian inference produce similar results.

Conclusion: The findings support the hypothesis that CT exposure affects pCRH over GA, with the effect changing dramatically at around week 20 of pregnancy.

Keywords: linear mixed effect model; Gibbs sampler; variable selection; likelihood ratio test

## 1 Introduction

Childhood trauma (CT) is a traumatic experience that happens to children around ages 0-6.
It has been indicated that such adverse experiences may affect the developing brain [1] and the development of depression and anxiety disorders [2]. Trauma survivors may become vulnerable during periods when there is other stress or change in their lives [3]. Recognizing the widespread consequences of CT, several studies have been conducted. Among them, an interesting finding is that CT may be transmitted between generations, so that its impact persists across generations [3]. The mechanism behind this remains unresolved. Research in this area has proposed that traumatized parents may be functionally unavailable to their infants, resulting in enhanced symptomatology in their children [8]. Moreover, through parents' potentially traumatizing behavior, childhood trauma may pass on to the next generation [15]. All these findings indicate the difficulty of identifying the mechanism of CT transmission. It has been shown that placental corticotrophin-releasing hormone (pCRH) is key to communication between the mother and the unborn child [16]. The concentration of pCRH is highly related to fetal and infant health development [17]. The pCRH system serves as a sensor, transducer, and effector of fetal brain development and peripheral systems [19]. Motivated by these findings, a novel biological pathway has been proposed by Moog, N.K. et al. [16] to explain the mechanism of CT transmission across generations. In their work, they hypothesized that, through the effect of maternal CT exposure on placental-fetal stress physiology, especially pCRH, intergenerational transmission may take place during gestation [16]. Specifically, their study was conducted in a cohort of 295 pregnant women with CT exposure measurements. Linear mixed effect models and Bayesian piecewise linear models were implemented to show the association between maternal CT exposure and placental-fetal stress physiology.
In this study, the dataset, consisting of pCRH concentrations along with CT exposure measurements and other information, was obtained from a sociodemographically diverse cohort of 88 pregnant women. The key scientific questions are to study the effect of CT exposure on pCRH over gestation (using both frequentist and Bayesian inference) and to characterize the substantial change in the rate of change of pCRH after gestational age exceeds 20 weeks. Motivated by these, this report aims to provide answers to those questions.

## 2 Data Description and Statistical Methods

### 2.1 Observed data

The dataset in this study was collected from a sociodemographically diverse cohort of 88 pregnant women. To measure CT exposure, the 28-item Childhood Trauma Questionnaire was administered. It covers the following five dimensions of childhood maltreatment: emotional abuse (EA), physical abuse (PA), sexual abuse (SA), emotional neglect (EN), and physical neglect (PN). CT-Sum is the total number of these traumas that a mother experienced during her childhood. Placental CRH (pCRH) concentrations were measured in maternal blood collected over the course of gestation. In addition, the following information was collected from each woman: gestational age in weeks (GA), depression score based on a questionnaire by the Center for Epidemiological Studies (DCES), an indicator of obstetric risk conditions (OB-risk), pre-pregnancy Body Mass Index (BMI), childhood socioeconomic score using a 15-item measure that characterizes distinct aspects of economic status during childhood (CSES), and number of previous pregnancies (Parity).

### 2.2 Scientific Questions

The following scientific questions are considered:

SQ1: Under the framework of frequentist inference, study the effect of CT-Sum on pCRH as gestation (GA) progresses while accounting for all potential confounding factors.

SQ2: Repeat the analysis of SQ1 under the framework of Bayesian inference.
SQ3: With the prior information that “the rate of change of pCRH changes remarkably around 20 weeks into pregnancy”, modify the models accordingly.

### 2.3 Statistical Methods

As for the preliminary data exploration. The main objective in this section is to study the characteristics of pregnant women during gestation and how pCRH changes over GA for different levels of CT-Sum. Quantitative covariates are summarized by their mean, range, and skewness, along with graphical techniques such as spaghetti plots, regression line plots, and scatterplots; categorical variables are summarized by their percentage of the total population.

As for SQ1. The strategy was based on the following principles:

Linear mixed effect model. Since the current dataset contains multiple observations per subject and is unbalanced (each individual was measured at different gestational ages), linear mixed effect models were employed to study the effect of CT-Sum on pCRH over GA. We assume that subjects are mutually independent and that a linear relationship exists between pCRH (or a transformed form of pCRH) and CT-Sum.

Transformation of covariates. It has been indicated that pCRH increases approximately exponentially over GA [20]. Following the strategy of previous work [16, 10, 4, 6, 7], a log-transformation of pCRH was used in fitting the models. The model is

$\log(pCRH_{i})=X_{i}\bm{\beta}+Z_{i}b_{i}+\epsilon_{i},$ (1)

where $i=1,2,\cdots,88.$ The subscript $i$ denotes the subject id. $\bm{\beta}$ and $X_{i}$ are the coefficients and design matrix for the fixed effects. $Z_{i}$ and $b_{i}$ are the design matrix and coefficients for the random effects. $\epsilon_{i}$ is the within-subject error term.

Variable selection. To study the effect of CT-Sum on pCRH over GA, the covariates CT-Sum, GA, and their interaction were included in the linear mixed effect model.
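The exponential-growth rationale for the log transformation above can be checked numerically. The sketch below uses synthetic numbers, not study data; the level (5.0) and weekly growth rate (0.14) are made-up values for illustration only:

```python
import numpy as np

# Synthetic illustration only: pCRH rising roughly exponentially over
# gestational age, as the cited literature suggests.
ga = np.arange(14.0, 41.0, 2.0)        # gestational age in weeks
pcrh = 5.0 * np.exp(0.14 * ga)         # hypothetical exponential trajectory

log_pcrh = np.log(pcrh)
slopes = np.diff(log_pcrh) / np.diff(ga)

# After the log transform the trajectory is linear in GA, so a linear
# mixed effect model on log(pCRH) is a natural fit.
assert np.allclose(slopes, 0.14)
```

This is why model (1) is written for $\log(pCRH)$ rather than pCRH itself: the exponential trend becomes a constant slope on the log scale.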
It has been found that there is a strong association between socioeconomic background and childhood abuse, which in turn influences pCRH [18, 12]. Thus, we have strong evidence that CSES should be included in the fixed effects. The other covariates are psychological or biophysical factors that may also be associated with CT-Sum or pCRH. For instance, studies show that childhood sexual abuse has a strong impact on depression during pregnancy [9, 11]. Childhood trauma survivors may deny their pregnancy or hide it from others [14, 5, 13], which may result in a low number of previous pregnancies. Thus, in this study, we include all these factors in the fixed effects of the model. To account for between-subject variability in the effect of CT-Sum on pCRH, preliminary analysis suggests that the intercept varies across subjects. Moreover, the rate of change of $\log pCRH$ over GA may also vary across subjects. Hence, we consider including both a random intercept and a random slope of GA in the model. Likelihood ratio tests suggest that the random slope need not be included; more detailed analysis can be found in Section 3.2.

As for SQ2. Bayesian inference was implemented to study the effect of CT-Sum on pCRH over GA. The strategy follows these principles:

Bayesian hierarchical model. By the same argument as in SQ1, model (1) was also implemented in this section. To address the question in the Bayesian framework, we use the following priors: $\displaystyle\tau_{1}^{2}\sim\text{Inv}-\chi^{2}(c,d),\sigma_{\epsilon}^{2}\sim\text{Inv}-\chi^{2}(a,b),\beta_{l}\sim N(0,\sigma_{l}^{2}),l=0,1,\cdots,k,$ $\displaystyle b_{1i}|\tau_{1}^{2}\overset{iid}{\sim}N(0,\tau_{1}^{2}),\epsilon_{ij}|\sigma_{\epsilon}^{2}\overset{iid}{\sim}N(0,\sigma_{\epsilon}^{2}),i=1,2,\cdots,88,j=1,2,\cdots,n_{i}.$ As stated in SQ1, we include an intercept in the random effects, denoted $b_{1i}$.
$\beta_{l}$ denotes the coefficient of the $l$-th covariate in the fixed effects.

MCMC estimation. A Gibbs sampler was implemented to make inference on the parameters of interest; the derivation of the conditional distributions can be found in Appendix A. The first 20% of the MCMC samples were discarded as burn-in. Point estimates of parameters were calculated as sample averages, and 95% credible intervals were approximated by the corresponding sample quantiles. MCMC convergence diagnostics were also conducted.

Variable selection. From the previous arguments, we include all the factors and the interaction CT-Sum*GA in the fixed effects and an intercept in the random effects. To determine whether to additionally include a random slope of GA, DIC (Deviance Information Criterion) was used to compare the two models.

As for SQ3. A piecewise linear mixed effect model was employed to study the dramatic change of rate at around 20 weeks. A knot was placed at GA = 20. Based on the model proposed in SQ1, we allow an additional slope and intercept when GA is larger than 20. A likelihood ratio test was conducted to compare the model with only an additional slope against the model with both an additional slope and intercept.

### 2.4 Model Diagnosis

To evaluate the adequacy of the models proposed in SQ1 and SQ3, Pearson residual plots were employed to check the mean model, and QQ plots were used to check the normality assumption. To assess the model proposed in SQ2, we employed the statistic $T(y,\theta)=-2\sum_{i=1}^{N}\log(p(y_{i}|\theta))$. Based on the posterior samples, replicated data $y^{rep}$ are generated, and the posterior predictive p-value is calculated as $pB=Pr(T(y^{rep},\theta)>T(y,\theta)).$ An extremely small value indicates model inadequacy.

## 3 Result

### 3.1 Exploratory Data Analysis

A summary of descriptive statistics of the categorical variables is shown in Table 4 (Appendix A).
It can be found that almost half of the subjects have no childhood trauma experience. The numbers of those who experienced one, two, or three childhood traumas are roughly the same (17.0%, 14.8%, and 12.5%), and only 4.5% of the subjects experienced four childhood traumas. Most of the pregnant women (68.2%) have low obstetric risk, compared with 31.8% at high obstetric risk. 39.8% of the women have no previous pregnancy and 38.6% have had one; only a small portion (12.5%) have had two previous pregnancies, along with 6.8% with three and 2.3% with four. Table 5 (Appendix A) presents characteristics of the quantitative variables in this study. The sample average depression score is 0.65. The pre-pregnancy body mass index is 24.56 on average, and 11.50 is the average childhood socioeconomic score among all subjects. The gestational age is roughly centered around 26.73 weeks. Placental corticotrophin-releasing hormone has an average of 236.80. Three variables (DCES, CSES, and GA) are not highly skewed (skewness of 0.88, -0.77, and 0.01, respectively), whereas BMI and the response variable pCRH are highly skewed.

Figure 1: $\log pCRH$ over GA. (A) Spaghetti plot of $\log pCRH$ over GA for each individual. (B) Scatter plot overlaid with regression lines for each CT-Sum.

Figure 2: $95\%$ confidence intervals of slopes of GA. (A) Individuals without childhood trauma. (B) Individuals with childhood trauma.

Figure 3: $95\%$ confidence intervals of intercepts. (A) Individuals without childhood trauma. (B) Individuals with childhood trauma.

To further study the effect of CT-Sum on $\log pCRH$ over GA, we conducted preliminary analysis. Figure 1(A) shows the spaghetti plot of $\log pCRH$ over GA.
It can be found that as GA increases, $\log pCRH$ increases linearly, and the rate of change starts to grow when GA is around 25-30 weeks. After controlling for CT-Sum, the spaghetti plots are summarized in Figure 4 (Appendix B). They show that as GA approaches 15, the values of $\log pCRH$ differ across individuals even after controlling for the level of CT-Sum; the same pattern can also be found in Figure 1(A). Figure 3 shows the $95\%$ confidence intervals of the intercepts of the fitted linear regression lines for each subject. It can be seen that, after controlling for CT-Sum, the fitted intercepts differ across subjects and the overlap is relatively small. These findings motivate us to include a random intercept in the linear mixed effect model. To further investigate the rate of change of $\log pCRH$ over GA, Figure 1(B) shows the scatter plot overlaid with regression lines after controlling for CT-Sum. For each value of CT-Sum, $\log pCRH$ increases over GA, and the rate of change (slope) differs among the levels of CT-Sum, which indicates an effect of CT-Sum on $\log pCRH$ over GA. Figure 2 presents the $95\%$ confidence intervals of the fitted slopes of GA for different subjects after controlling for CT-Sum. The fitted slope changes dramatically among subjects, especially in the group without childhood trauma. These findings suggest introducing a random slope of GA in the linear mixed effect model; we discuss this in more detail in Section 3.2.

### 3.2 Scientific Question 1

Following the analysis in Section 3.1, a linear mixed effect model was fitted to the dataset. We include all the factors and the interaction CT-Sum*GA in the fixed effects. For the random effects, we considered two scenarios: a random intercept only, and both a random intercept and a random slope of GA.
The two candidate models are $\displaystyle\begin{split}\log(\text{pCRH}_{ij})=&\beta_{0}+\beta_{1}*\text{GA}_{ij}+\beta_{2}*\text{CT-Sum}_{i}+\beta_{3}*\text{GA}_{ij}*\text{CT-Sum}_{i}\\\ &+\beta_{4}*\text{BMI}_{i}+\beta_{5}*\text{CSES}_{i}+\beta_{6}*\text{DCES}_{i}+\beta_{7}*\text{OB-risk}_{i}+\beta_{8}*\text{Parity}_{i}\\\ &+b_{1i}+b_{2i}*\text{GA}_{ij}+\epsilon_{ij},\end{split}$ (2) and $\displaystyle\begin{split}\log(\text{pCRH}_{ij})=&\beta_{0}+\beta_{1}*\text{GA}_{ij}+\beta_{2}*\text{CT-Sum}_{i}+\beta_{3}*\text{GA}_{ij}*\text{CT-Sum}_{i}\\\ &+\beta_{4}*\text{BMI}_{i}+\beta_{5}*\text{CSES}_{i}+\beta_{6}*\text{DCES}_{i}+\beta_{7}*\text{OB-risk}_{i}+\beta_{8}*\text{Parity}_{i}\\\ &+b_{1i}+\epsilon_{ij}.\end{split}$ (3) To compare model (2) with model (3), a likelihood ratio test was conducted. The test statistic approximately follows a mixture of $\chi^{2}$ distributions, $\frac{1}{2}\chi^{2}(1)+\frac{1}{2}\chi^{2}(2).$ The result (p value of 0.78) shows that there is no strong evidence for including a random slope of GA in the model. Thus, the proposed model is $\displaystyle\begin{split}\log(\text{pCRH}_{ij})=&\beta_{0}+\beta_{1}*\text{GA}_{ij}+\beta_{2}*\text{CT-Sum}_{i}+\beta_{3}*\text{GA}_{ij}*\text{CT-Sum}_{i}\\\ &+\beta_{4}*\text{BMI}_{i}+\beta_{5}*\text{CSES}_{i}+\beta_{6}*\text{DCES}_{i}+\beta_{7}*\text{OB-risk}_{i}+\beta_{8}*\text{Parity}_{i}\\\ &+b_{1i}+\epsilon_{ij},\end{split}$ (4) where the fixed effects contain all the factors and the interaction GA*CT-Sum, and the random effects include an intercept. Model diagnostics were conducted to check the constant variance and normality assumptions. The Pearson residual plot in Figure 5 (Appendix C) shows no obvious pattern, so the constant variance assumption holds; the QQ plot in Figure 5 is close to a line with slope 1, so the normality assumption holds. Table 1 shows the fixed effect estimates of linear mixed effect model (4).
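The mixture reference distribution used in the likelihood ratio test above has closed-form tail probabilities for one and two degrees of freedom, so the p-value needs no special libraries. A sketch follows; the LR statistic value passed in is a placeholder for illustration, not the one from the fitted models:

```python
import math

def chi2_sf(x, df):
    """Chi-square survival function; closed forms for df = 1 and df = 2."""
    if df == 1:
        return math.erfc(math.sqrt(x / 2.0))   # P(Z^2 > x) = erfc(sqrt(x/2))
    if df == 2:
        return math.exp(-x / 2.0)              # exponential with mean 2
    raise ValueError("only df = 1 or 2 are handled in this sketch")

def mixture_lrt_pvalue(lr_stat):
    """p-value under the 0.5*chi2(1) + 0.5*chi2(2) boundary mixture used
    when testing whether the random slope of GA is needed."""
    return 0.5 * chi2_sf(lr_stat, 1) + 0.5 * chi2_sf(lr_stat, 2)

# A large p-value (0.78 in the text) argues against the random slope.
p = mixture_lrt_pvalue(0.25)   # placeholder LR statistic
```

The half-and-half mixture arises because the null hypothesis (zero random-slope variance) sits on the boundary of the parameter space, so the usual single-$\chi^{2}$ reference would be conservative.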
It can be found that the covariates GA, BMI, and CT-Sum*GA contribute significantly to explaining the change in log(pCRH). Although CT-Sum does not appear statistically significant, this does not imply that CT-Sum is not a meaningful predictor; previous work [16] has shown the significance of CT-Sum. Moreover, Table 1 shows that the association between CT-Sum and pCRH increases over gestational age (GA), since the estimate of CT-Sum*GA is positive. We exponentiate the estimates to interpret the results. It can be concluded from the table that at gestational age 14 (the first time point in the dataset), holding all other factors equal, women with one childhood trauma tend to have a 1.8% lower pCRH than those without childhood trauma. However, as gestational age increases, the expected median pCRH increases as well. For example, at gestational age 40 (the last time point in the dataset), the expected median pCRH increases by 11.9% per additional childhood trauma. In summary, to address SQ1, we propose model (4) as the final fitted model. The effect of CT-Sum on pCRH varies dramatically over GA: at early gestational ages CT-Sum decreases pCRH, and as gestation progresses the effect becomes positive. If a pregnant woman experienced 1.2 childhood traumas (the dataset average), her expected median pCRH would be 14.4% higher towards the end of gestation than that of a woman without childhood trauma. At gestational age 26.7 (the dataset average), women with one childhood trauma have a 4.7% higher median pCRH than those without childhood trauma.
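The percentage interpretations above follow from exponentiating the relevant linear combination of coefficients; a sketch using the rounded Table 1 estimates reproduces them:

```python
import math

# Rounded fixed effect estimates from Table 1 (model 4):
# CT-Sum main effect and CT-Sum * GA interaction.
b_ct, b_ct_ga = -0.088, 0.005

def pct_effect(n_traumas, ga_weeks):
    """Percent change in expected median pCRH for n_traumas additional
    childhood traumas at a given gestational age (log-scale model)."""
    return (math.exp(n_traumas * (b_ct + b_ct_ga * ga_weeks)) - 1.0) * 100.0

print(round(pct_effect(1, 14), 1))     # -1.8 : 1.8% lower at GA = 14
print(round(pct_effect(1, 40), 1))     # 11.9 : 11.9% higher at GA = 40
print(round(pct_effect(1, 26.7), 1))   # 4.7  : 4.7% higher at the mean GA
print(round(pct_effect(1.2, 40), 1))   # 14.4 : the dataset-average case
```

Because the model is linear on the log scale, the effect of $n$ traumas multiplies the median pCRH by $\exp(n(\beta_{2}+\beta_{3}\,\text{GA}))$, which is why the sign of the effect flips once GA exceeds $-\beta_{2}/\beta_{3}=17.6$ weeks.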
Table 1: Fixed effect estimates of linear mixed effect model (4)

| Covariates of fixed effect | Estimate | Standard error | Degrees of freedom | T value | P value* |
|---|---|---|---|---|---|
| Intercept | 1.750 | 0.270 | 113.200 | 6.502 | $<0.001$ |
| GA | 0.142 | 0.004 | 301.900 | 37.254 | $<0.001$ |
| CT-Sum | -0.088 | 0.071 | 375.900 | -1.242 | 0.215 |
| CT-Sum*GA | 0.005 | 0.002 | 299.400 | 1.971 | 0.050 |
| BMI | -0.021 | 0.007 | 86.900 | -3.056 | 0.003 |
| CSES | -0.018 | 0.015 | 84.700 | -1.233 | 0.221 |
| DCES | -0.093 | 0.102 | 87.100 | -0.914 | 0.363 |
| OB-risk | 0.035 | 0.089 | 86.900 | 0.391 | 0.697 |
| Parity | -0.076 | 0.041 | 90.400 | -1.830 | 0.071 |

*The p value is based on the Satterthwaite approximation.

### 3.3 Scientific Question 2

Following the arguments in Section 2.3, Bayesian inference was used to study the association between CT-Sum and pCRH over GA. By arguments similar to Section 3.2, we include all the factors and the interaction GA*CT-Sum in the fixed effects. DIC was used to determine whether to include a random slope of GA. The results (random intercept and slope: 684.12; random intercept only: 577.75) suggest including only a random intercept in the model. Thus, we propose model (4) as the final hierarchical Bayesian linear model as well. Model diagnostics covered both MCMC convergence and model adequacy. Trace and ACF plots of the parameters (Figures 6 and 7, Appendix D) show that the MCMC chains mix well and have low autocorrelation, so the Gibbs sampler provides good estimates of the posterior distribution of each parameter. Figure 8 (Appendix D) compares the observed test statistic defined in Section 2.3 with the values obtained from the replicated samples. The estimated $pB$ is 0.58, which implies the model fits the data well. The fixed effect estimates of the proposed model can be found in Table 2.
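The posterior means and 95% intervals reported in Table 2 are computed from the retained Gibbs draws in the usual way. Below is a sketch with a synthetic stand-in chain; a real chain would come from the sampler whose conditionals are derived in Appendix A, and the location/scale here only loosely mimic the GA row:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in Gibbs output for a single coefficient (synthetic draws).
chain = rng.normal(loc=0.141, scale=0.005, size=10_000)

burn = int(0.2 * chain.size)              # discard the first 20% as burn-in
kept = chain[burn:]

post_mean = kept.mean()                   # point estimate
ci_lo, ci_hi = np.quantile(kept, [0.025, 0.975])   # 95% credible interval
```

The same burn-in fraction and quantile-based interval are applied to every parameter's chain; convergence should still be confirmed with the trace and ACF plots mentioned above.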
Similar to the frequentist inference, the intercept, GA, and BMI are significant since their credible intervals do not contain 0. Although the credible interval of the interaction GA*CT-Sum covers 0, its lower bound is close to 0, so we still regard it as meaningful. CT-Sum does not appear statistically significant, but we cannot neglect it, by the same argument as in Section 3.2. To interpret the results, we again exponentiate the estimates. The positive value of 0.005 suggests a positive association between CT-Sum and pCRH as gestational age increases. To address SQ2, we conclude that an increment in CT-Sum results in a decrease in pCRH at early gestational ages, whereas at later gestational ages, increasing CT-Sum leads to an increase in pCRH. The conclusions are similar to the frequentist inference: if a pregnant woman has 1.2 childhood traumas (the dataset average), her expected median pCRH will be 14.4% higher at the end of gestation than that of a woman without childhood trauma, and at the average gestational age of 26.7, women with one childhood trauma have a 4.7% higher pCRH than those without childhood trauma.

Table 2: Fixed effect estimates of hierarchical Bayesian model (4)

| Covariates of fixed effect | Posterior mean | 95% Probability Interval |
|---|---|---|
| Intercept | 1.751 | (1.152, 2.328) |
| GA | 0.141 | (0.132, 0.151) |
| CT-Sum | -0.088 | (-0.265, 0.088) |
| CT-Sum*GA | 0.005 | (-0.001, 0.011) |
| BMI | -0.020 | (-0.035, -0.006) |
| CSES | -0.018 | (-0.049, 0.013) |
| DCES | -0.098 | (-0.313, 0.118) |
| OB-risk | 0.037 | (-0.149, 0.225) |
| Parity | -0.077 | (-0.164, 0.011) |

### 3.4 Scientific Question 3

To modify model (4) with the additional information, we place a knot at GA = 20. The question is whether only an additional slope, or both an additional slope and intercept, should be included in the fitted model.
To address this question, the model with only an additional slope and the model with both an additional slope and intercept were fitted. The likelihood ratio test (p value of 0.43) suggests that there is no evidence supporting both an additional intercept and slope. Hence, we propose the model with only an additional slope: $\displaystyle\begin{split}\log(\text{pCRH}_{ij})&=\beta_{0}+\beta_{1}*\text{GA}_{ij}+\beta_{2}*\text{CT-Sum}_{i}+\beta_{3}*\text{GA}_{ij}*\text{CT-Sum}_{i}+\beta_{4}*\text{BMI}_{i}\\\ &+\beta_{5}*(\text{GA}_{ij}-20)_{+}+\beta_{6}*\text{CSES}_{i}+\beta_{7}*\text{Parity}_{i}\\\ &+\beta_{8}*\text{DCES}_{i}+\beta_{9}*\text{OB-risk}_{i}+b_{1i}+\epsilon_{ij},\end{split}$ (5) where $(\text{GA}_{ij}-20)_{+}=max\\{\text{GA}_{ij}-20,0\\}.$ Model diagnostics were conducted to check adequacy and the constant variance and normality assumptions. The residual plot (Figure 9, Appendix D) shows no obvious pattern, with residuals scattered around 0, and the QQ plot supports the normality assumption. A summary of the fixed effect estimates of model (5) is shown in Table 3. As in SQ1 and SQ2, CT-Sum is not statistically significant, but since the objective is to study the change of the effect of CT-Sum on pCRH over GA, there is no reason to remove CT-Sum from the model. Following the same strategy as in SQ1 and SQ2, we exponentiate the coefficient estimates to interpret the results. Table 3 reveals that, around gestational age 20, there is a dramatic increase in the rate of change of pCRH over gestation. In particular, among women with gestational age below 20 who experienced 1.2 childhood traumas (the dataset average), the expected median pCRH increases by 3.8% per week; after week 20, these women experience a 17.9% increase in expected median pCRH per week towards the end of gestation.
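The per-week percentages above come from exponentiating the piecewise slope for the relevant CT-Sum value. A sketch using the rounded Table 3 coefficients follows; small mismatches with the quoted figures (e.g. 3.9% vs 3.8%) stem from coefficient rounding:

```python
import math

# Rounded Table 3 (model 5) coefficients: baseline GA slope, extra
# post-knot slope for (GA - 20)_+, and the CT-Sum * GA interaction.
b_ga, b_knot, b_ct_ga = 0.032, 0.127, 0.005

def knot_term(ga_weeks):
    """(GA - 20)_+ = max(GA - 20, 0): zero before the knot at week 20."""
    return max(ga_weeks - 20.0, 0.0)

def weekly_pct_increase(ct_sum, after_knot):
    """Percent change in expected median pCRH per gestational week."""
    slope = b_ga + b_ct_ga * ct_sum + (b_knot if after_knot else 0.0)
    return (math.exp(slope) - 1.0) * 100.0

pre = weekly_pct_increase(1.2, after_knot=False)    # roughly 3.9%/week
post = weekly_pct_increase(1.2, after_knot=True)    # roughly 17.9%/week
```

Because $(\text{GA}-20)_{+}$ is zero before the knot, $\beta_{5}$ captures only the extra post-week-20 slope, which is what makes the post-knot weekly rate roughly four-fold the pre-knot rate.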
Among women with one childhood trauma (the mode of the dataset), the expected median pCRH increases by 3.9% per week before week 20 and 17.8% per week after week 20, while among those without childhood trauma, the expected median pCRH increases by 3.3% (before week 20) and 17.2% (after week 20) per week. Thus, although the increase of pCRH becomes dramatic around week 20, women without childhood trauma still retain a lower pCRH increase rate than those with childhood trauma.

Table 3: Fixed effect estimates of linear mixed effect model (5)

| Covariates of fixed effect | Estimate | Standard error | Degrees of freedom | T value | P value* |
|---|---|---|---|---|---|
| Intercept | 3.729 | 0.374 | 300.500 | 9.961 | $<0.001$ |
| GA | 0.032 | 0.015 | 298.600 | 2.126 | 0.034 |
| $(\text{GA}-20)_{+}$ | 0.127 | 0.017 | 297.400 | 7.428 | $<0.001$ |
| CT-Sum | -0.107 | 0.067 | 371.700 | -1.617 | 0.107 |
| CT-Sum*GA | 0.005 | 0.002 | 298.400 | 2.409 | 0.016 |
| BMI | -0.020 | 0.007 | 87.200 | -3.020 | 0.003 |
| CSES | -0.020 | 0.015 | 85.200 | -1.349 | 0.181 |
| Parity | -0.078 | 0.041 | 90.200 | -1.916 | 0.059 |
| DCES | -0.084 | 0.100 | 87.400 | -0.838 | 0.404 |
| OB-risk | 0.039 | 0.088 | 87.200 | 0.437 | 0.663 |

*The p value is based on the Satterthwaite approximation.

## 4 Discussion

### 4.1 Conclusion

The objectives of this study were to examine the effect of maternal CT exposure on pCRH and to modify the model to capture the difference before and after gestational age 20 (in weeks). Regarding the first scientific question, linear mixed effect models were implemented, with all the factors and the interaction GA*CT-Sum as fixed effects and an intercept as the random effect. The results indicated that the association between CT exposure and pCRH varies over gestational age: during the first weeks of pregnancy, women with childhood trauma were likely to have lower pCRH than those without childhood trauma.
However, as gestational age progressed, those with childhood trauma experienced a much higher rate of increase of pCRH. At the end of pregnancy (GA = 40), women with one childhood trauma have an almost 14.4% higher expected median pCRH than those without childhood trauma. Regarding the second scientific question, a hierarchical Bayesian linear mixed effect model was implemented. By choosing conditionally conjugate priors, a Gibbs sampler was employed to obtain samples from the posterior distribution of the parameters. The same model as in SQ1 was selected after comparing DIC values, and the results are similar to the frequentist inference: at early gestational ages, women with more childhood trauma tend to have lower pCRH, while as the pregnancy progresses, more exposure to childhood trauma leads to a much higher rate of increase of pCRH. At the average gestational age (GA = 26.7), women with one childhood trauma have a 4.7% higher pCRH than those without childhood trauma. Regarding the last scientific question, a piecewise linear model with a knot at week 20 was fitted. The results indicated that after week 20 of pregnancy, the increase of pCRH over GA becomes much steeper: the weekly rate of increase after week 20 is almost four-fold that before week 20. Women without childhood trauma retain a lower increase rate than those with one childhood trauma both before and after week 20.

### 4.2 Limitations

The main drawbacks of this study lie in two aspects. On one hand, the response (pCRH) was not measured at the same time points (gestational ages) across subjects. It has been pointed out that such an unbalanced structure may result in misspecification of the within-subject association over continuous time [21], and if the responses are not missing completely at random, such misspecification may lead to biased estimates of the mean response [22]. On the other hand, there may be additional potential confounding factors.
For instance, characteristics such as race, ethnicity, drug use, alcohol use in pregnancy, and age are not considered in this study, which may bias the fitted models.

## References

* [1] Nemeroff, C.B. Neurobiological consequences of childhood trauma. Journal of Clinical Psychiatry. 65: p. 18-28. 2004.
* [2] Heim, C., Nemeroff, C.B. The role of childhood trauma in the neurobiology of mood and anxiety disorders: preclinical and clinical studies. Biological Psychiatry. 49(12): p. 1023-1039. 2001.
* [3] Schwerdtfeger, K.L., Nelson Goff, B.S. Intergenerational transmission of trauma: Exploring mother-infant prenatal attachment. Journal of Traumatic Stress. 20(1): p. 39-51. 2007.
* [4] Gao, X., Shen, W., Shahbaba, B., Fortin, N. and Ombao, H. Evolutionary state-space model and its application to time-frequency analysis of local field potentials. arXiv preprint arXiv:1610.07271.
* [5] Gao, X., Shen, W., Zhang, L., Hu, J., Fortin, N.J., Frostig, R.D. and Ombao, H., 2020. Regularized matrix data clustering and its application to image analysis. Biometrics.
* [6] Gao, X., Shahbaba, B. and Ombao, H., 2018. Modeling binary time series using Gaussian processes with application to predicting sleep states. Journal of Classification, 35(3), pp. 549-579.
* [7] Cheng, Q., Gao, X. and Martin, R., 2014. Exact prior-free probabilistic inference on the heritability coefficient in a linear mixed model. Electronic Journal of Statistics, 8(2), pp. 3062-3076.
* [8] Walker, M. The inter-generational transmission of trauma: The effects of abuse on the survivor's relationship with their children and on the children themselves. European Journal of Psychotherapy, Counseling and Health. 2: p. 281-296. 1999.
* [9] Wosu, A.C., Gelay, B., William, M.A. History of childhood sexual abuse and risk of prenatal and postpartum depression or depressive symptoms: an epidemiologic review. Archives of Women's Mental Health. 2015.
* [10] Gao, X., Shen, W., Ting, C.M., Cramer, S.C., Srinivasan, R.
and Ombao, H., 2019, April. Estimating brain connectivity using copula Gaussian graphical models. In 2019 IEEE 16th International Symposium on Biomedical Imaging (ISBI 2019) (pp. 108-112). IEEE.
* [11] Gao, X., Shen, W., Hu, J., Fortin, N. and Ombao, H., 2019, March. Modeling local field potentials with regularized matrix data clustering. In 2019 9th International IEEE/EMBS Conference on Neural Engineering (NER) (pp. 597-602). IEEE.
* [12] Gao, X., Gillen, D. and Ombao, H., 2018. Fisher information matrix of binary time series. Metron, 76(3), pp. 287-304.
* [13] Wang, Y., Ting, C.M., Gao, X. and Ombao, H., 2019, March. Exploratory analysis of brain signals through low dimensional embedding. In 2019 9th International IEEE/EMBS Conference on Neural Engineering (NER) (pp. 997-1002). IEEE.
* [14] Simkin, P., Klaus, P. When Survivors Give Birth: Understanding and Healing the Effects of Early Sexual Abuse on Childbearing Women. Seattle: Classic Day Press. 2004.
* [15] Felsen, I. Transgenerational transmission of effects of the Holocaust: The North American research perspective. The Plenum series on stress and coping. p. 43-68. 1998.
* [16] Moog, N.K., Buss, C., Entringer, S., Shahbaba, B., Gillen, D., Hobel, C.J., Wadhwa, P.D. Maternal exposure to childhood trauma is associated during pregnancy with placental-fetal stress physiology. Biological Psychiatry (to appear).
* [17] Davis, E.P., Glynn, L.M., Dunkel, S.G., Hobel, C., Chicz-Demet, A., Sandman, C.A. Corticotropin-releasing hormone during pregnancy is associated with infant temperament. Developmental Neuroscience. 27(5): p. 299-305. 2005.
* [18] Springer, K.W., Sheridan, J., Kuo, D., Carnes, M. The long-term health outcomes of childhood abuse. Journal of General Internal Medicine. 18(10): p. 864-870. 2003.
* [19] Buss, C., Entringer, S., Wadhwa, P.D. Fetal programming of brain development: Intrauterine stress and susceptibility to psychopathology. Science Signal.
## Appendix A

The derivation of the conditional distributions used in implementing the Gibbs sampler:

$\displaystyle b_{1i}|\bm{\beta},\tau_{1}^{2},\sigma_{\epsilon}^{2}\sim N\left(\frac{\sum_{j=1}^{n_{i}}(\log(pCRH_{ij})-X_{ij}\bm{\beta})/\sigma_{\epsilon}^{2}}{n_{i}/\sigma_{\epsilon}^{2}+1/\tau_{1}^{2}},\;(n_{i}/\sigma_{\epsilon}^{2}+1/\tau_{1}^{2})^{-1}\right),$

$\displaystyle\bm{\beta}|\tau_{1}^{2},b_{1i},\sigma_{\epsilon}^{2}\sim N\left((1/\sigma_{\epsilon}^{2})((1/\sigma_{\epsilon}^{2})X^{T}X+\Lambda_{0})^{-1}X^{T}\tilde{Y},\;((1/\sigma_{\epsilon}^{2})X^{T}X+\Lambda_{0})^{-1}\right),$

$\displaystyle\text{where }X=\begin{bmatrix}X_{11}\\ X_{12}\\ \vdots\\ X_{88,n_{88}}\end{bmatrix},\quad\Lambda_{0}=\begin{bmatrix}1/\sigma_{0}^{2}&0&0\\ 0&\ddots&0\\ 0&0&1/\sigma_{k}^{2}\end{bmatrix},\quad\tilde{Y}=(\log(pCRH_{11})-b_{11},\cdots,\log(pCRH_{88,n_{88}})-b_{1,88})^{T},$

$\displaystyle\tau_{1}^{2}|\bm{\beta},b_{1i},\sigma_{\epsilon}^{2}\sim\text{Inv-}\chi^{2}\left(88+c,\;\frac{\sum_{i=1}^{88}b_{1i}^{2}+c\,d}{88+c}\right),$

$\displaystyle\sigma_{\epsilon}^{2}|\bm{\beta},\tau_{1}^{2},b_{1i}\sim\text{Inv-}\chi^{2}\left(\sum_{i=1}^{88}n_{i}+a,\;\frac{\sum_{i=1}^{88}\sum_{j=1}^{n_{i}}(\log(pCRH_{ij})-X_{ij}\bm{\beta}-b_{1i})^{2}+a\,b}{\sum_{i=1}^{88}n_{i}+a}\right).$

Table 4: Descriptive statistics of categorical variables

| Variables | Number of observations | Percentage |
|---|---|---|
| No childhood trauma (CT-Sum=0) | 45 | 51.1% |
| One childhood trauma (CT-Sum=1) | 15 | 17.0% |
| Two childhood traumas (CT-Sum=2) | 13 | 14.8% |
| Three childhood traumas (CT-Sum=3) | 11 | 12.5% |
| Four childhood traumas (CT-Sum=4) | 4 | 4.5% |
| High obstetric risk (OB-risk=1) | 28 | 31.8% |
| Low obstetric risk (OB-risk=0) | 60 | 68.2% |
| No previous pregnancy (Parity=0) | 35 | 39.8% |
| One previous pregnancy (Parity=1) | 34 | 38.6% |
| Two previous pregnancies (Parity=2) | 11 | 12.5% |
| Three previous pregnancies (Parity=3) | 6 | 6.8% |
| Four previous pregnancies (Parity=4) | 2 | 2.3% |

Table 5: Descriptive statistics of quantitative variables

| Variables | Mean | Range | Skewness |
|---|---|---|---|
| Depression score (DCES) | 0.65 | 1.72 | 0.88 |
| Pre-pregnancy body mass index (BMI) | 24.56 | 30.00 | 1.15 |
| Childhood socioeconomic score (CSES) | 11.50 | 11.00 | -0.77 |
| Gestational age (in weeks) (GA) | 26.73 | 26.00 | 0.01 |
| Placental corticotropin-releasing hormone (pCRH) | 236.80 | 1337.00 | 1.94 |

## Appendix B

Figure 4: Spaghetti plots of $\log pCRH$ over GA after controlling for CT-Sum (five panels: CT-Sum=0 through CT-Sum=4).

## Appendix C

Figure 5: QQ and Pearson residual plots of the model in SQ1 (left: QQ plot; right: Pearson residual plot).

## Appendix D

Figure 6: Trace plots of parameters of the model in SQ2.

Figure 7: ACF plots of parameters of the model in SQ2.

Figure 8: Model diagnosis plot of the model in SQ2.

Figure 9: QQ and Pearson residual plots of the model in SQ3 (left: QQ plot; right: Pearson residual plot).
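The conditional distributions in Appendix A map directly onto a Gibbs sampler. The following is a minimal numpy sketch on simulated data; the update formulas follow the appendix, but the subject counts, data-generating values, and hyperparameter settings (`a_hp`, `b_hp`, `c_hp`, `d_hp`, `Lambda0`) are illustrative assumptions, not the study's:

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Simulated unbalanced longitudinal data (sizes illustrative, not the study's) ---
n_subj, k = 30, 3                      # subjects; fixed-effect dimension
n_i = rng.integers(3, 8, size=n_subj)  # visits per subject (unbalanced over time)
subj = np.repeat(np.arange(n_subj), n_i)
X = np.column_stack([np.ones(subj.size), rng.normal(size=(subj.size, k - 1))])
beta_true = np.array([1.0, 0.5, -0.3])
b_true = rng.normal(0.0, 0.7, size=n_subj)            # random intercepts b_{1i}
y = X @ beta_true + b_true[subj] + rng.normal(0.0, 0.4, size=subj.size)  # y = log(pCRH)

# --- Hyperparameters a, b, c, d and prior precision Lambda_0, as in Appendix A ---
a_hp, b_hp, c_hp, d_hp = 3.0, 0.5, 3.0, 0.5
Lambda0 = np.eye(k) / 100.0            # beta ~ N(0, 100 I) a priori

def scaled_inv_chi2(nu, s2):
    """Draw from the scaled inverse chi-square Inv-chi^2(nu, s2)."""
    return nu * s2 / rng.chisquare(nu)

beta, b1, tau2, sig2 = np.zeros(k), np.zeros(n_subj), 1.0, 1.0
draws = []
for it in range(2000):
    # b_{1i} | rest: normal with precision n_i/sigma_eps^2 + 1/tau_1^2
    prec = n_i / sig2 + 1.0 / tau2
    mean = (np.bincount(subj, weights=y - X @ beta) / sig2) / prec
    b1 = rng.normal(mean, np.sqrt(1.0 / prec))
    # beta | rest: normal with covariance (X'X/sigma_eps^2 + Lambda_0)^{-1}
    V = np.linalg.inv(X.T @ X / sig2 + Lambda0)
    beta = rng.multivariate_normal(V @ X.T @ (y - b1[subj]) / sig2, V)
    # tau_1^2 | rest: Inv-chi^2(88 + c, (sum b_{1i}^2 + c d)/(88 + c)), with 88 -> n_subj
    tau2 = scaled_inv_chi2(n_subj + c_hp, (b1 @ b1 + c_hp * d_hp) / (n_subj + c_hp))
    # sigma_eps^2 | rest: Inv-chi^2(sum n_i + a, (SSE + a b)/(sum n_i + a))
    e = y - X @ beta - b1[subj]
    sig2 = scaled_inv_chi2(y.size + a_hp, (e @ e + a_hp * b_hp) / (y.size + a_hp))
    if it >= 500:                      # discard burn-in
        draws.append(beta)

post = np.mean(draws, axis=0)
print(post)  # posterior means of beta
```

In practice the retained draws would also be examined with the trace and ACF diagnostics of the kind shown in Appendix D before reporting posterior summaries.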
# Homogeneous varieties under split solvable algebraic groups Michel Brion Université Grenoble Alpes, Institut Fourier, CS 40700, 38058 Grenoble Cedex 9, France ###### Abstract. We present a modern proof of a theorem of Rosenlicht, asserting that every variety as in the title is isomorphic to a product of affine lines and punctured affine lines. ## 1\. Introduction Throughout this note, we consider algebraic groups and varieties over a field $k$. An algebraic group $G$ is _split solvable_ if it admits a chain of closed subgroups $\\{e\\}=G_{0}\subset G_{1}\subset\cdots\subset G_{n}=G$ such that each $G_{i}$ is normal in $G_{i+1}$ and $G_{i+1}/G_{i}$ is isomorphic to the additive group ${\mathbb{G}}_{a}$ or the multiplicative group ${\mathbb{G}}_{m}$. This class features prominently in a series of articles by Rosenlicht on the structure of algebraic groups, see [Ro56, Ro57, Ro63]. The final result of this series may be stated as follows (see [Ro63, Thm. 5]): ###### Theorem 1. Let $X$ be a homogeneous variety under a split solvable algebraic group $G$. Then there is an isomorphism of varieties $X\simeq{\mathbb{A}}^{m}\times({\mathbb{A}}^{\times})^{n}$ for unique nonnegative integers $m$, $n$. Here ${\mathbb{A}}^{m}\simeq({\mathbb{A}}^{1})^{m}$ denotes the affine $m$-space, and ${\mathbb{A}}^{\times}={\mathbb{A}}^{1}\setminus\\{0\\}$ the punctured affine line. Rosenlicht’s articles use the terminology and methods of algebraic geometry à la Weil, and therefore have become hard to read. In view of their fundamental interest, many of their results have been rewritten in more modern language, e.g. in the book [DG70] by Demazure & Gabriel and in the second editions of the books on linear algebraic groups by Borel and Springer, which incorporate developments on “questions of rationality” (see [Bo91, Sp98]). The above theorem is a notable exception: the case of the group $G$ acting on itself by multiplication is handled in [DG70, Cor. IV.4.3.8] (see also [Sp98, Cor. 
14.2.7]), but the general case is substantially more complicated. (Footnote 1: The case where $k$ is algebraically closed and $X=G/H$ for some smooth connected subgroup $H\subset G$ is proposed as an exercise in [Sp98, §14.2].) The aim of this note is to fill this gap by providing a proof of Theorem 1 in the language of modern algebraic geometry. As it turns out, this theorem is self-improving: combined with Rosenlicht’s theorem on rational quotients (see [Ro56, Thm. 2], and [BGR17, Sec. 2] for a modern proof) and some “spreading out” arguments, it yields the following stronger version: ###### Theorem 2. Let $X$ be a variety equipped with an action of a split solvable algebraic group $G$.
Then there exist a variety $Y$, an open immersion $\varphi:{\mathbb{A}}^{1}\times Y\to X$ and a monic additive polynomial $P\in{\mathcal{O}}(Y)[t]$ such that $g\cdot\varphi(x,y)=\varphi(x+P(y,g),y)$ for all $g\in{\mathbb{G}}_{a}$, $x\in{\mathbb{A}}^{1}$ and $y\in Y$. Here $P$ is said to be additive if it satisfies $P(y,t+u)=P(y,t)+P(y,u)$ identically; then ${\mathbb{G}}_{a}$ acts on ${\mathbb{A}}^{1}\times Y$ via $g\cdot(x,y)=(x+P(y,g),y)$, and $\varphi$ is equivariant for this action. If ${\rm char}(k)=0$, then we have $P=t$ and hence ${\mathbb{G}}_{a}$ acts on ${\mathbb{A}}^{1}\times Y$ by translation on ${\mathbb{A}}^{1}$. So Corollary 4 just means that every nontrivial ${\mathbb{G}}_{a}$-action becomes a trivial ${\mathbb{G}}_{a}$-torsor on some dense open invariant subset. On the other hand, if ${\rm char}(k)=p>0$, then $P$ is a $p$-polynomial, i.e., $P=a_{0}t+a_{1}t^{p}+\cdots+a_{n}t^{p^{n}}$ for some integer $n\geq 1$ and $a_{0},\ldots,a_{n}\in{\mathcal{O}}(Y)$. Thus, the map $(P,{\rm id}):{\mathbb{G}}_{a}\times Y\longrightarrow{\mathbb{G}}_{a}\times Y,\quad(g,y)\longmapsto(P(y,g),y)$ is an endomorphism of the $Y$-group scheme ${\mathbb{G}}_{a,Y}={\rm pr}_{Y}:{\mathbb{G}}_{a}\times Y\to Y$; conversely, every such endomorphism arises from an additive polynomial $P$, see [DG70, II.3.4.4]. Thus, Corollary 4 asserts that for any nontrivial ${\mathbb{G}}_{a}$-action, there is a dense open invariant subset on which ${\mathbb{G}}_{a}$ acts by a trivial torsor twisted by such an endomorphism. These twists occur implicitly in the original proof of Theorem 1, see [Ro63, Lem. 3]. Footnote 2: Rosenlicht was very well aware of the limitations of classical methods.
He wrote in the introduction of [Ro63]: “The methods of proof we use here are refinements of those of our previous Annali paper [Ro57] and cry for improvement; there are unnatural complexities and it seems that something new that is quite general, and possibly quite subtle, must be brought to light before appreciable progress can be made.” This note is organized as follows. In Section 2, we gather background results on split solvable algebraic groups. Section 3 presents further preliminary material, on the quotient of a homogeneous space $G/H$ by the left action of a normal subgroup scheme $N\triangleleft G$; here $G$ is a connected algebraic group, and $H\subset G$ a subgroup scheme. In particular, we show that such a quotient is a torsor under a finite quotient of $N$, if either $N\simeq{\mathbb{G}}_{m}$ or $N\simeq{\mathbb{G}}_{a}$ and ${\rm char}(k)=0$ (Lemma 3.4). The more involved case where $N\simeq{\mathbb{G}}_{a}$ and ${\rm char}(k)>0$ is handled in Section 4; we then show that the quotient is a “torsor twisted by an endomorphism” as above (Lemma 4.3). The proofs of our main results are presented in Section 5. Notation and conventions. We consider schemes over a field $k$ of characteristic $p\geq 0$ unless otherwise mentioned. Morphisms and products of schemes are understood to be over $k$ as well. A _variety_ is an integral separated scheme of finite type. An _algebraic group_ $G$ is a group scheme of finite type. By a _subgroup_ $H\subset G$, we mean a (closed) subgroup scheme. A $G$-_variety_ is a variety $X$ equipped with a $G$-action $\alpha:G\times X\longrightarrow X,\quad(g,x)\longmapsto g\cdot x.$ We say that $X$ is $G$-_homogeneous_ if $G$ is smooth, $X$ is geometrically reduced, and the morphism $({\rm id},\alpha):G\times X\longrightarrow X\times X,\quad(g,x)\longmapsto(x,g\cdot x)$ is surjective. If in addition $X$ is equipped with a $k$-rational point $x$, then the pair $(X,x)$ is a $G$-_homogeneous space_. 
Then $(X,x)\simeq(G/\operatorname{Stab}_{G}(x),x_{0})$, where $\operatorname{Stab}_{G}(x)\subset G$ denotes the stabilizer, and $x_{0}$ the image of the neutral element $e\in G(k)$ under the quotient morphism $G\to G/\operatorname{Stab}_{G}(x)$. Given a field extension $K/k$ and a $k$-scheme $X$, we denote by $X_{K}$ the $K$-scheme $X\times_{{\rm Spec}(k)}{\rm Spec}(K)$. We will freely use results from the theory of faithfully flat descent, for which a convenient reference is [GW10, Chap. 14, App. C]. ## 2\. Split solvable groups We first recall some basic properties of these groups, taken from [DG70, IV.4.3] where they are called “groupes $k$-résolubles” (see also [Mi17, §16.g]). Every split solvable group is smooth, connected, affine and solvable. Conversely, every smooth connected affine solvable algebraic group over an algebraically closed field is split solvable (see [DG70, IV.4.3.4]). Clearly, every extension of split solvable groups is split solvable. Also, recall that every nontrivial quotient group of ${\mathbb{G}}_{m}$ is isomorphic to ${\mathbb{G}}_{m}$, and likewise for ${\mathbb{G}}_{a}$ (see [DG70, IV.2.1.1]). As a consequence, every quotient group of a split solvable group is split solvable as well. We now obtain a key preliminary result (a version of [Ro63, Lem. 1], see also [Sp98, Cor. 14.3.9]): ###### Lemma 2.1. Let $G$ be a split solvable group. Then there exists a chain of subgroups $G_{0}=\\{e\\}\subset G_{1}\subset\cdots\subset G_{m}\subset\cdots\subset G_{m+n}=G,$ where $G_{i}\triangleleft G$ for $i=0,\ldots,m+n$ and $G_{i+1}/G_{i}\simeq\begin{cases}{\mathbb{G}}_{a}&\text{ if }i=0,\ldots,m-1,\\\ {\mathbb{G}}_{m}&\text{ if }i=m,\ldots,m+n-1.\\\ \end{cases}$ ###### Proof. Arguing by induction on $\dim(G)$, it suffices to show that either $G$ is a split torus, or it admits a normal subgroup $N$ isomorphic to ${\mathbb{G}}_{a}$.
By [DG70, IV.4.3.4], $G$ admits a normal unipotent subgroup $U$ such that $G/U$ is diagonalizable; moreover, $U$ is split solvable. Since $G$ is smooth and connected, $G/U$ is a split torus $T$. Also, since every subgroup and every quotient group of a unipotent group are unipotent, $U$ admits a chain of subgroups $\\{e\\}=U_{0}\subset U_{1}\subset\cdots\subset U_{m}=U$ such that $U_{i}\triangleleft U_{i+1}$ and $U_{i+1}/U_{i}\simeq{\mathbb{G}}_{a}$ for any $i=0,\ldots,m-1$. By [DG70, IV.4.3.14], it follows that either $U$ is trivial or it admits a central characteristic subgroup $V$ isomorphic to ${\mathbb{G}}_{a}^{n}$ for some integer $n>0$. In the former case, $G=T$ is a split torus. In the latter case, $V\triangleleft G$ and the conjugation action of $G$ on $V$ factors through an action of $T$. By [Co15, Thm. 4.3], there is a $T$-equivariant isomorphism of algebraic groups $V\simeq V_{0}\times V^{\prime}$, where $V_{0}$ is fixed pointwise by $T$ and $V^{\prime}$ is a vector group on which $T$ acts linearly. If $V^{\prime}$ is nontrivial, then it contains a $T$-stable subgroup $N\simeq{\mathbb{G}}_{a}$; then $N\triangleleft G$. On the other hand, if $V^{\prime}$ is trivial then $V$ is central in $G$; thus, every copy of ${\mathbb{G}}_{a}$ in $V$ yields the desired subgroup $N$. ∎ ## 3\. Quotients of homogeneous spaces by normal subgroups Let $G$ be an algebraic group, $H\subset G$ a subgroup, and $N\triangleleft G$ a smooth normal subgroup. Then $H$ acts on $N$ by conjugation. The semi-direct product $N\rtimes H$ defined by this action (as in [Mi17, Sec. 2.f]) is equipped with a homomorphism to $G$, with schematic image the subgroup $NH\subset G$. Recall that $H\triangleleft NH\subset G$ and $NH/H\simeq N/N\cap H$. Denote by $q:G\longrightarrow G/H,\quad r:G\longrightarrow G/NH$ the quotient morphisms. Then $q$ is an $H$-torsor, and hence a categorical quotient by $H$. 
Since $r$ is invariant under the $H$-action on $G$ by right multiplication, there exists a unique morphism $f:G/H\longrightarrow G/NH$ such that the triangle $\begin{array}{ccc}G&\xrightarrow{\;q\;}&G/H\\ &{\scriptstyle r}\searrow&\downarrow{\scriptstyle f}\\ &&G/NH\end{array}$ commutes. We will also need the following observation (see [Mi17, Prop. 7.15]): ###### Lemma 3.1. With the above notation, the square $\begin{array}{ccc}G\times NH/H&\xrightarrow{\;a\;}&G/H\\ {\scriptstyle{\rm pr}_{G}}\downarrow&&\downarrow{\scriptstyle f}\\ G&\xrightarrow{\;r\;}&G/NH\end{array}$ is cartesian, where $a$ denotes the restriction of the action $G\times G/H\to G/H$ and ${\rm pr}_{G}$ denotes the projection. ###### Proof. Since $r$ is an $NH$-torsor, we have a cartesian square $\begin{array}{ccc}G\times NH&\xrightarrow{\;m\;}&G\\ {\scriptstyle{\rm pr}_{G}}\downarrow&&\downarrow{\scriptstyle r}\\ G&\xrightarrow{\;r\;}&G/NH,\end{array}$ where $m$ denotes the restriction of the multiplication $G\times G\to G$.
Also, the square $\begin{array}{ccc}G\times NH&\xrightarrow{\;m\;}&G\\ {\scriptstyle({\rm id},q)}\downarrow&&\downarrow{\scriptstyle q}\\ G\times NH/H&\xrightarrow{\;a\;}&G/H\end{array}$ is commutative, and hence cartesian since the vertical arrows are $H$-torsors. As $q$ is faithfully flat, this yields the assertion by descent. ∎ For simplicity, we set $X=G/H$ and $Y=G/NH$. These homogeneous spaces come with base points $x_{0}$, $y_{0}$ such that $f(x_{0})=y_{0}$. ###### Lemma 3.2. 1. (i) With the above notation, $f$ is $G$-equivariant and $N$-invariant, where $G$ (and hence $N$) acts on $X,Y$ by left multiplication. 2. (ii) $f$ is smooth, surjective, and its fibers are exactly the $N$-orbits. 3. (iii) The morphism $\gamma:N\times X\longrightarrow X\times_{Y}X,\quad(n,x)\longmapsto(x,n\cdot x)$ is faithfully flat. 4. (iv) The map $f^{\\#}:{\mathcal{O}}_{Y}\to f_{*}({\mathcal{O}}_{X})$ yields an isomorphism ${\mathcal{O}}_{Y}\stackrel{{\scriptstyle\sim}}{{\to}}f_{*}({\mathcal{O}}_{X})^{N}$, where the right-hand side denotes the subsheaf of $N$-invariants. 5. (v) If $N\cap H$ is central in $G$, then $f$ is an $N/N\cap H$-torsor. ###### Proof. (i) Let $R$ be a $k$-algebra, $g\in G(R)$ and $x\in X(R)$. As $q$ is faithfully flat, there exist a faithfully flat $R$-algebra $R^{\prime}$ and $g^{\prime}\in G(R^{\prime})$ such that $x=g^{\prime}\cdot x_{0}$. Then $f(g\cdot x)=f(gg^{\prime}\cdot x_{0})=gg^{\prime}\cdot y_{0}=g\cdot(g^{\prime}\cdot y_{0})=g\cdot f(x)$ in $Y(R^{\prime})$, and hence in $Y(R)$. This yields the $G$-equivariance of $f$. If $g\in N(R)$ then $gg^{\prime}=g^{\prime}n$ for some $n\in N(R^{\prime})$. Thus, $f(gg^{\prime}\cdot x_{0})=f(g^{\prime}\cdot x_{0})$, i.e., $f(g\cdot x)=f(x)$, proving the $N$-invariance.
(ii) Observe that $NH/H$ is homogeneous under the smooth algebraic group $N$, and hence is smooth. Thus, ${\rm pr}_{G}:G\times NH/H\to G$ is smooth as well. It follows that $f$ is smooth by using Lemma 3.1 and the faithful flatness of $r$. Also, $f$ is surjective since so are ${\rm pr}_{G}$ and $r$. Let $K/k$ be a field extension, $x\in X(K)$, and $y=f(x)$. There exist a field extension $L/K$ and $g\in G(L)$ such that $x=g\cdot x_{0}$. Thus, $y=g\cdot y_{0}$ and the fiber $X_{y}$ satisfies $(X_{y})_{L}=g(X_{y_{0}})_{L}$. Also, $X_{y_{0}}=N\cdot x_{0}$ in view of Lemma 3.1 together with the isomorphisms $N\cdot x_{0}\simeq N/N\cap H\simeq NH/H$. Thus, $(X_{y})_{L}=g\cdot(Nx_{0})_{L}=(Ng\cdot x_{0})_{L}=(N\cdot x)_{L}$, and therefore $X_{y}=N_{K}\cdot x$ by descent. (iii) Consider the commutative triangle $\begin{array}{ccc}N\times X&\xrightarrow{\;\gamma\;}&X\times_{Y}X\\ &{\scriptstyle{\rm pr}_{X}}\searrow&\downarrow{\scriptstyle{\rm pr}_{1}}\\ &&X.\end{array}$ Clearly, the morphism ${\rm pr}_{X}$ is faithfully flat. Also, ${\rm pr}_{1}$ is faithfully flat, since it is obtained from $f$ by base change. Moreover, for any field extension $K/k$ and any $x\in X(K)$, the restriction $\gamma_{x}:N\times x=N_{K}\to X_{x}$ is the orbit map $n\mapsto n\cdot x$, and hence is faithfully flat by (ii). So the assertion follows from the fiberwise flatness criterion (see [EGA, IV.11.3.11]). (iv) We have ${\mathcal{O}}_{Y}=r_{*}({\mathcal{O}}_{G})^{NH}=f_{*}q_{*}({\mathcal{O}}_{G})^{NH}=f_{*}(q_{*}({\mathcal{O}}_{G})^{H})^{N}=f_{*}({\mathcal{O}}_{X})^{N},$ since $q$ (resp. $r$) is a torsor under $H$ (resp. $NH$). (v) The subgroup $N\cap H\subset G$ fixes $x_{0}$ and is central in $G$. By a lifting argument as in (i), it follows that $N\cap H$ fixes $X=G\cdot x_{0}$ pointwise.
Thus, the $N$-action on $X$ factors uniquely through an action of $N/N\cap H$. Since the square $\begin{array}{ccc}G\times N/N\cap H&\xrightarrow{\;a\;}&X\\ {\scriptstyle{\rm pr}_{G}}\downarrow&&\downarrow{\scriptstyle f}\\ G&\xrightarrow{\;r\;}&Y\end{array}$ is cartesian (Lemma 3.1) and $r$ is faithfully flat, this yields the assertion. ∎ In view of the assertions (i), (ii), (iii) and (iv), $f$ is a geometric quotient by $N$ in the sense of [MFK94, Def. 0.7]. Next, denote by $\operatorname{Stab}_{N}\subset N\times X$ the stabilizer, i.e., the pullback of the diagonal in $X\times_{Y}X$ under $\gamma$. Then $\operatorname{Stab}_{N}$ is a closed subgroup scheme of the $X$-group scheme $N_{X}=({\rm pr}_{X}:N\times X\to X)$, stable under the $G$-action on $N\times X$ via $g\cdot(n,x)=(gng^{-1},g\cdot x)$. ###### Lemma 3.3. 1. (i) The projection ${\rm pr}_{X}:\operatorname{Stab}_{N}\to X$ is faithfully flat and $G$-equivariant. Its fiber at $x_{0}$ is $H$-equivariantly isomorphic to $N\cap H$ on which $H$ acts by conjugation. 2. (ii) ${\rm pr}_{X}$ is finite if and only if $N\cap H$ is finite. ###### Proof. (i) Clearly, ${\rm pr}_{X}$ is equivariant and its fiber $\operatorname{Stab}_{N}(x_{0})$ is as asserted. Form the cartesian square $\begin{array}{ccc}Z&\xrightarrow{\;\pi\;}&G\\ \downarrow&&\downarrow{\scriptstyle q}\\ \operatorname{Stab}_{N}&\xrightarrow{\;{\rm pr}_{X}\;}&G/H.\end{array}$ Then $Z$ is equipped with a $G$-action such that $\pi$ is equivariant, with fiber at $e$ being $N\cap H$.
As a consequence, the morphism $G\times N\cap H\longrightarrow Z,\quad(g,z)\longmapsto g\cdot z$ is an isomorphism with inverse being $z\mapsto(\pi(z),\pi(z)^{-1}\cdot z)$. Via this isomorphism, $\pi$ is identified with the projection $G\times N\cap H\to G$. Thus, $\pi$ is faithfully flat, and hence so is ${\rm pr}_{X}$. (ii) This also follows from the above cartesian square, since $\pi$ is finite if and only if $N\cap H$ is finite. ∎ ###### Lemma 3.4. Assume that $N\not\subset H$. 1. (i) If $N\simeq{\mathbb{G}}_{m}$ and $G$ is connected, then $f$ is an $N/N\cap H$-torsor. Moreover, $N/N\cap H\simeq{\mathbb{G}}_{m}$. 2. (ii) If $N\simeq{\mathbb{G}}_{a}$ and $p=0$, then $f$ is an $N$-torsor. ###### Proof. (i) In view of the rigidity of tori (see [SGA3, Exp. IX, Cor. 5.5] or [Mi17, Cor. 12.37]), $N$ is central in $G$. Also, $N\cap H$ is a finite subgroup of $N$, and hence $N/N\cap H\simeq{\mathbb{G}}_{m}$. So we conclude by Lemma 3.2 (v). (ii) Likewise, $N\cap H$ is a finite subgroup of ${\mathbb{G}}_{a}$, and hence is trivial since $p=0$. So we conclude by Lemma 3.2 (v) again. ∎ ## 4\. Quotients by the additive group We first record two preliminary results, certainly well-known but for which we could locate no appropriate reference. ###### Lemma 4.1. Let $X$ be a locally noetherian scheme. Let $Z\subset{\mathbb{A}}^{1}\times X$ be a closed subscheme such that the projection ${\rm pr}_{X}:Z\to X$ is finite and flat. Then $Z$ is the zero subscheme of a unique monic polynomial $P\in{\mathcal{O}}(X)[t]$. ###### Proof. First consider the case where $X={\rm Spec}(A)$, where $A$ is a local algebra with maximal ideal $\mathfrak{m}$ and residue field $K$. Denoting by $x$ the closed point of $X$, the fiber $Z_{x}$ is a finite subscheme of ${\mathbb{A}}^{1}_{K}$. Thus, $Z_{x}=V(P)$ for a unique monic polynomial $P\in K[t]$. So the images of $1,t,\ldots,t^{n-1}$ in ${\mathcal{O}}(Z_{x})$ form a basis of this $K$-vector space, where $n={\rm deg}(P)$. 
Also, ${\mathcal{O}}(Z)$ is a finite flat $A$-module, hence free. By Nakayama’s lemma, the images of $1,t,\ldots,t^{n-1}$ in ${\mathcal{O}}(Z)$ form a basis of this $A$-module. So we have $t^{n}+a_{1}t^{n-1}+\cdots+a_{n}=0$ in ${\mathcal{O}}(Z)$ for unique $a_{1},\ldots,a_{n}\in A$. Thus, the natural map $A[t]/(t^{n}+a_{1}t^{n-1}+\cdots+a_{n})\to{\mathcal{O}}(Z)$ is an isomorphism, since it sends a basis to a basis. This proves the assertion in this case. For an arbitrary scheme $X$, the assertion holds in a neighborhood of every point by the local case. In view of the uniqueness of $P$, this completes the proof. ∎ ###### Lemma 4.2. Let $X$ be a locally noetherian scheme, and $H\subset{\mathbb{G}}_{a,X}$ a finite flat subgroup scheme. Then $H={\rm Ker}(P,{\rm id})$ for a unique monic additive polynomial $P\in{\mathcal{O}}(X)[t]$, where $(P,{\rm id})$ denotes the endomorphism ${\mathbb{G}}_{a,X}\longrightarrow{\mathbb{G}}_{a,X},\quad(g,x)\longmapsto(P(x,g),x).$ ###### Proof. We may assume that $X$ is affine by the uniqueness property. Let $X={\rm Spec}(A)$, then $H=V(P)$ for a unique monic polynomial $P\in A[t]$ (Lemma 4.1). We now adapt an argument from [DG70, IV.2.1.1] to show that $P$ is an additive polynomial. Denote by $m:{\mathbb{G}}_{a,X}\times_{X}{\mathbb{G}}_{a,X}\to{\mathbb{G}}_{a,X}$ the group law. Since $H$ is a subgroup scheme, we have $H\times_{X}H\subset m^{-1}(H)$. Considering the ideals of these closed subschemes of ${\mathbb{G}}_{a,X}\times_{X}{\mathbb{G}}_{a,X}\simeq{\mathbb{G}}_{a}\times{\mathbb{G}}_{a}\times X={\rm Spec}(A[t,u])$ yields that $P(t+u)\in(P(t),P(u))$ in $A[t,u]$. 
So there exist $Q,R\in A[t,u]$ such that $P(t+u)-P(t)-P(u)=Q(t,u)P(t)+R(t,u)P(u).$ Since $P$ is monic, there exist unique $Q_{1},Q_{2}\in A[t,u]$ such that $Q(t,u)=Q_{1}(t,u)P(u)+Q_{2}(t,u),\quad{\rm deg}_{u}(Q_{2})<{\rm deg}(P)=n.$ Thus, we have $P(t+u)-P(t)-P(u)-Q_{2}(t,u)P(t)=(Q_{1}(t,u)P(t)+R(t,u))P(u).$ As the left-hand side has degree in $u$ at most $n-1$, it follows that $Q_{1}(t,u)P(t)+R(t,u)=0$ and $P(t+u)-P(t)-P(u)=Q_{2}(t,u)P(t)$. Considering the degree in $t$, we obtain $Q_{2}=0$ and $P(t+u)=P(t)+P(u)$ identically. ∎ Next, we return to the setting of Section 3: $G$ is an algebraic group, $H\subset G$ a subgroup, $N\triangleleft G$ a smooth normal subgroup, and $f:X=G/H\to G/NH=Y$ the natural morphism. Since $f$ is $N$-invariant (Lemma 3.2 (i)), we may view $X$ as a $Y$-scheme equipped with an action of the $Y$-group scheme $N_{Y}$. ###### Lemma 4.3. Assume in addition that $N\simeq{\mathbb{G}}_{a}$ and $N\not\subset H$. Then there exist a faithfully flat morphism of $Y$-group schemes $\varphi:N_{Y}\to{\mathbb{G}}_{a,Y}$ and a ${\mathbb{G}}_{a,Y}$-action on $X$ such that $f$ is a ${\mathbb{G}}_{a,Y}$-torsor. ###### Proof. By Lemma 3.3, the stabilizer $\operatorname{Stab}_{N}$ is finite and flat over $Y$. Thus, $\operatorname{Stab}_{N}={\rm Ker}(P,{\rm id})$ for a unique monic $p$-polynomial $P\in{\mathcal{O}}(X)[t]$ (Lemma 4.2). Also, $\operatorname{Stab}_{N}\subset N\times X$ is stable under the action of the abstract group $N(k)$ via $g\cdot(n,x)=(n,g\cdot x)$; as a consequence, we have $P(g\cdot x,t)=P(x,t)$ identically on $X$, for any $g\in N(k)$. This still holds after base change by a field extension $K/k$, since the formation of $\operatorname{Stab}_{N}$ commutes with such base change and hence $P$ is invariant under any such extension. Since $N(K)$ is dense in $N_{K}$ for any infinite field $K$, it follows that $P$ is $N$-invariant. As ${\mathcal{O}}(X)^{N}={\mathcal{O}}(Y)$ (Lemma 3.2 (iv)), we see that $P\in{\mathcal{O}}(Y)[t]$.
Choose an isomorphism ${\mathbb{G}}_{a}\stackrel{{\scriptstyle\sim}}{{\to}}N$ and consider the morphism $\varphi=(P,{\rm id}):{\mathbb{G}}_{a,Y}\longrightarrow{\mathbb{G}}_{a,Y},\quad(t,y)\longmapsto(P(y,t),y).$ Then $\varphi$ is an endomorphism of the $Y$-group scheme ${\mathbb{G}}_{a,Y}$. Moreover, $\varphi$ is faithfully flat, as follows from the fiberwise flatness criterion (see [EGA, IV.11.3.11]), since ${\mathbb{G}}_{a,Y}$ is faithfully flat over $Y$ and for any $y\in Y$, the morphism $\varphi_{y}:t\mapsto P(y,t)$ is faithfully flat. Denote by $K$ the kernel of $\varphi$. Then we have $K\times_{Y}X=\operatorname{Stab}_{N}$; thus, $K$ is finite and flat over $Y$, by Lemma 3.3 and descent. Moreover, the square $\begin{array}{ccc}K\times_{Y}{\mathbb{G}}_{a,Y}&\xrightarrow{\;m\;}&{\mathbb{G}}_{a,Y}\\ {\scriptstyle{\rm pr}}\downarrow&&\downarrow{\scriptstyle\varphi}\\ {\mathbb{G}}_{a,Y}&\xrightarrow{\;\varphi\;}&{\mathbb{G}}_{a,Y}\end{array}$ is cartesian, where $m$ denotes the group law, and ${\rm pr}$ the projection (indeed, $P(y,t)=P(y,u)$ if and only if $(u-t,y)\in K$). So $\varphi$ is a $K$-torsor. The action $\alpha:{\mathbb{G}}_{a,Y}\times_{Y}X={\mathbb{G}}_{a}\times X\longrightarrow X,\quad(t,x)\longmapsto t\cdot x$ is a $K$-invariant morphism. By descent again, it follows that there is a unique morphism $\beta:{\mathbb{G}}_{a}\times X\to X$ such that the triangle $\begin{array}{ccc}{\mathbb{G}}_{a}\times X&\xrightarrow{\;\alpha\;}&X\\ {\scriptstyle\varphi}\downarrow&\nearrow{\scriptstyle\beta}&\\ {\mathbb{G}}_{a}\times X&&\end{array}$ commutes.
Thus, $\beta(t,x)=\alpha(P(f(x),t),x)$ identically on ${\mathbb{G}}_{a}\times X$. In particular, $\beta(0,x)=\alpha(0,x)=x$ identically on $X$. Also, $\beta$ satisfies the associativity property of an action, since so does $\alpha$ and $\varphi$ is faithfully flat. So $\beta$ is an action of ${\mathbb{G}}_{a,Y}$ on $X$. Consider the associated morphism $\delta:{\mathbb{G}}_{a}\times X\longrightarrow X\times_{Y}X,\quad(t,x)\longmapsto(\beta(t,x),x)$ as a morphism of $X$-schemes. For any field extension $K/k$ and any $x\in X(K)$, we get a morphism $\delta_{x}:{\mathbb{G}}_{a,K}\to X_{x}$ such that $\delta_{x}\circ P_{x}=\alpha_{x}$. Thus, $\delta_{x}$ is an isomorphism by the construction of $P$. In view of the fiberwise isomorphism criterion (see [EGA, IV.17.9.5]), it follows that $\delta$ is an isomorphism. So $f$ is a ${\mathbb{G}}_{a,Y}$-torsor relative to this action $\beta$. ∎ ## 5\. Proofs of the main results ### 5.1. Proof of Theorem 1 We first consider the case where $X$ is equipped with a $k$-rational point $x_{0}$. Then $X=G/H$ for some subgroup $H\subset G$. If $G$ is a torus, then $G/H$ has the structure of a split torus, and hence is isomorphic to $({\mathbb{A}}^{\times})^{n}$ for some integer $n\geq 0$. Otherwise, $G$ admits a normal subgroup $N\simeq{\mathbb{G}}_{a}$ by Lemma 2.1. If $N\subset H$ then $X\simeq(G/N)/(H/N)$ and we conclude by induction on $\dim(G)$. So we may assume that $N\not\subset H$. Then we have a morphism $f:X=G/H\longrightarrow G/NH\simeq(G/N)/(NH/N).$ Moreover, $f$ is a ${\mathbb{G}}_{a}$-torsor by Lemma 3.4 (if $p=0$) and Lemma 4.3 (if $p>0$). By induction on $\dim(G)$ again, we may assume that $Y\simeq{\mathbb{A}}^{m}\times({\mathbb{A}}^{\times})^{n}$ as a variety. In particular, $Y$ is affine, and hence the ${\mathbb{G}}_{a}$-torsor $f$ is trivial. So $X\simeq{\mathbb{A}}^{1}\times Y\simeq{\mathbb{A}}^{m+1}\times({\mathbb{A}}^{\times})^{n}$ as a variety.
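As a concrete illustration of this induction (an added example, not part of the original argument), take for $G$ the affine group of the line acting on itself, so that the single step of the induction can be followed explicitly:

```latex
% The affine group of the line: split solvable, with G_a normal and G/N = G_m.
G=\left\{\begin{pmatrix}a&b\\ 0&1\end{pmatrix}:a\in k^{\times},\; b\in k\right\}
 \simeq \mathbb{G}_m\ltimes\mathbb{G}_a,
\qquad
N=\left\{\begin{pmatrix}1&b\\ 0&1\end{pmatrix}\right\}\simeq\mathbb{G}_a\,\triangleleft\, G.
% Taking X = G (so H = {e} and N is not contained in H), the morphism
% f : X \to G/N \simeq \mathbb{G}_m is a G_a-torsor over the affine base
% \mathbb{A}^{\times}, hence trivial:
X=G\;\simeq\;\mathbb{A}^{1}\times\mathbb{A}^{\times}
\qquad\text{(so $m=n=1$ in Theorem 1).}
```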
To complete the proof, it suffices to show that every homogeneous $G$-variety has a $k$-rational point. This follows from a result of Rosenlicht (see [Ro56, Thm. 10]) and is reproved in [Bo91, Thm. 15.11], [Sp98, Thm. 14.3.13]. For completeness, we present a proof based on the following lemma, also due to Rosenlicht (see [Ro56, Lem., p. 425]): ###### Lemma 5.1. Let $X$ be a homogeneous variety under $G={\mathbb{G}}_{a}$ or ${\mathbb{G}}_{m}$. Then $X$ has a $k$-rational point. (Footnote 3: This lemma is reproved in [Bo91, Prop. 15.6], but the argument there is unclear to me. In modern language, it is asserted that every smooth, geometrically rational curve is an open subvariety of a smooth complete curve of genus $0$. Yet this fails for nontrivial forms of the affine line, see [Ru70, Lem. 1.1]. Also, it is asserted that the $G$-action on $X$ extends to an action on its regular completion; this requires a proof.) ###### Proof. Since $X$ is a smooth curve, it admits a unique regular completion $\bar{X}$, i.e., $\bar{X}$ is a regular projective curve equipped with an open immersion $X\to\bar{X}$. Moreover, $\bar{X}$ is geometrically integral since so is $X$. We identify $X$ with its image in $\bar{X}$, and denote by $Z=\bar{X}\setminus X$ the closed complement, equipped with its reduced subscheme structure. Then $Z=\coprod_{i=1}^{n}{\rm Spec}(K_{i})$, where the $K_{i}/k$ are finite extensions of fields. By the smoothness of $X$ again, we may choose a finite separable extension $K/k$ such that $X$ has a $K$-rational point $x_{0}$. Then $(X_{K},x_{0})$ is a homogeneous space under $G_{K}$, and hence is isomorphic to $G_{K}$ as a variety. Also, $\bar{X}_{K}$ is the regular completion of $X_{K}$; moreover, $Z_{K}$ is reduced and $\bar{X}_{K}\setminus X_{K}=Z_{K}$. Since $X_{K}\simeq{\mathbb{A}}^{1}_{K}$ or ${\mathbb{A}}^{\times}_{K}$, it follows that $\bar{X}_{K}\simeq{\mathbb{P}}^{1}_{K}$; in particular, $\bar{X}$ is a smooth projective curve of genus $0$.
This identifies $Z_{K}$ with ${\rm Spec}(K)$ (the point at infinity) if $G={\mathbb{G}}_{a}$, resp. with ${\rm Spec}(K)\coprod{\rm Spec}(K)=\\{0,\infty\\}$ if $G={\mathbb{G}}_{m}$. In the former case, we have $Z={\rm Spec}(k)$ and hence $\bar{X}$ has a $k$-rational point. Thus, $\bar{X}\simeq{\mathbb{P}}^{1}$ , so that $X$ has a $k$-rational point as well. In the latter case, let $L=k(X)$; then $L/k$ is separable and $X_{L}$ has an $L$-rational point. Thus, we see as above that $\bar{X}_{L}\simeq{\mathbb{P}}^{1}_{L}$ and this identifies $Z_{L}$ with $\\{0,\infty\\}$. In particular, $Z(L)=Z(K)$. Since $K$ and $L$ are linearly disjoint over $k$, it follows that $Z(k)$ consists of two $k$-rational points; we then conclude as above. ∎ Returning to a homogeneous variety $X$ under a split solvable group $G$, we may choose $N\triangleleft G$ such that $N\simeq{\mathbb{G}}_{a}$ or ${\mathbb{G}}_{m}$ (Lemma 2.1). Also, we may choose a finite Galois extension $K/k$ such that $X$ has a $K$-rational point $x_{0}$. Let $H=\operatorname{Stab}_{G_{K}}(x_{0})$; then $(X_{K},x_{0})$ is the homogeneous space $G_{K}/H$, and hence there is a geometric quotient $f:X_{K}=G_{K}/H\longrightarrow G_{K}/N_{K}H$ (Lemma 3.2). Then $f$ is a categorical quotient, and hence is unique up to unique isomorphism. By Galois descent (which applies, since all considered varieties are affine), we obtain a $G$-equivariant morphism $\varphi:X\to Y$ such that $\varphi_{K}=f$. In particular, $Y$ is a homogeneous variety under $G/N$. Arguing by induction on $\dim(G)$, we may assume that $Y$ has a $k$-rational point $y$. Then the fiber $X_{y}$ is a homogeneous $N$-variety, and hence has a $k$-rational point. ### 5.2. Proof of Theorem 2 We may freely replace $X$ with any dense open $G$-stable subvariety. 
In view of Rosenlicht’s theorem on rational quotients mentioned in the introduction, we may thus assume that there exist a variety $Y$ and a $G$-invariant morphism $f:X\longrightarrow Y$ such that $k(Y)\stackrel{{\scriptstyle\sim}}{{\to}}k(X)^{G}$ and the fiber of $f$ at every $y\in Y$ is a homogeneous variety under $G_{\kappa(y)}$, where $\kappa(y)$ denotes the residue field at $y$. By generic flatness, we may further assume that $f$ is flat. Denoting by $\eta$ the generic point of $Y$, the fiber $X_{\eta}$ is a homogeneous variety under $G_{\eta}=G_{k(Y)}$. By Theorem 1, this yields an isomorphism (5.1) $Z_{\eta}\stackrel{{\scriptstyle\sim}}{{\longrightarrow}}X_{\eta},$ where $Z={\mathbb{A}}^{m}\times({\mathbb{A}}^{\times})^{n}$ for unique integers $m,n\geq 0$. This yields in turn a birational map $\varphi:Z\times Y\dasharrow X$ such that $f\circ\varphi={\rm pr}_{Y}$ as rational maps. It suffices to show that there exists a dense open subvariety $Y_{0}\subset Y$ such that $\varphi$ is defined on $Z\times Y_{0}$ and yields an open immersion $Z\times Y_{0}\to X$ with $G$-stable image. For this, we start with some reductions. We may assume that $Y$ is affine (by replacing $X$ with the preimage of a dense open affine subvariety) and also that $X$ is normal (since its normal locus is a dense open $G$-stable subvariety). In view of a result of Sumihiro (see [Su75, Thm. 3.9]), we may further assume that $X$ is a locally closed $G$-stable subvariety of the projectivization ${\mathbb{P}}(V)$, where $V$ is a finite-dimensional $G$-module. The closure $\bar{X}$ of $X$ in ${\mathbb{P}}(V)$ and its boundary $\bar{X}\setminus X$ are $G$-stable. By a version of Borel’s fixed point theorem (see [DG70, IV.4.3.2]), there exist a positive integer $N$ and a nonzero $s\in H^{0}(\bar{X},{\mathcal{O}}(N))$ which vanishes identically on $\bar{X}\setminus X$ and is a $G$-eigenvector. 
Then the dense open subvariety $\bar{X}_{s}$ is affine, $G$-stable and contained in $X$; thus, we may further assume that $X$ is affine. This replaces $Y$ with a dense open subset $Y_{0}$ (as $f$ is flat and hence open). As $Y$ is affine, we may choose a nonzero $t\in{\mathcal{O}}(Y)$ which vanishes identically on $Y\setminus Y_{0}$. Replacing $X$ with $X_{t}$ and $Y$ with $Y_{t}$, we may finally assume that $X$, $Y$ are affine and $X$ is normal. Choose a closed immersion of $Y$-varieties $X\to{\mathbb{A}}^{N}\times Y$; then $\varphi$ yields a rational map $(\varphi_{1},\ldots,\varphi_{N},{\rm pr}_{Y}):Z\times Y\dasharrow{\mathbb{A}}^{N}\times Y$ such that the pull-back $Z_{\eta}\to{\mathbb{A}}^{N}_{\eta}$ is a closed immersion. In particular, $\varphi_{1},\ldots,\varphi_{N}\in{\mathcal{O}}(Z_{\eta})={\mathcal{O}}(Z)\otimes_{k}k(Y)$. Replacing again $Y$ with a dense open affine subvariety, we may thus assume that $\varphi_{1},\ldots,\varphi_{N}\in{\mathcal{O}}(Z)\otimes_{k}{\mathcal{O}}(Y)={\mathcal{O}}(Z\times Y)$. As a consequence, $\varphi$ is a morphism. Denote by $\operatorname{Isol}(\varphi)$ the set of points of $Z\times Y$ which are isolated in their fiber; then $\operatorname{Isol}(\varphi)$ contains the points of $Z_{\eta}$. By Zariski’s Main Theorem (see [EGA, III.4.4.3]), $\operatorname{Isol}(\varphi)$ is open in $Z\times Y$ and the restriction of $\varphi$ to $\operatorname{Isol}(\varphi)$ factors as $\operatorname{Isol}(\varphi)\stackrel{{\scriptstyle\psi}}{{\longrightarrow}}X^{\prime}\stackrel{{\scriptstyle\gamma}}{{\longrightarrow}}X,$ where $\psi$ is an open immersion and $\gamma$ is finite. Replacing $X^{\prime}$ with the schematic image of $\psi$, we may assume that $\psi$ is schematically dominant; then $X^{\prime}$ is a variety. Since $\varphi$ is birational, so is $\gamma$; as $X$ is normal, it follows that $\gamma$ is an isomorphism. Thus, $\varphi$ restricts to an open immersion $\operatorname{Isol}(\varphi)\to X$. 
Consider the closed complement $F=(Z\times Y)\setminus\operatorname{Isol}(\varphi)$. Then $F_{\eta}$ is empty, and hence the ideal $I(F)\subset{\mathcal{O}}(Z\times Y)$ satisfies $1\in I(F)\otimes_{{\mathcal{O}}(Y)}k(Y)$. Replacing $Y$ with a principal open subvariety, we may thus assume that $1\in I(F)$, i.e., $F$ is empty and $\operatorname{Isol}(\varphi)=Z\times Y$. Equivalently, $\varphi:Z\times Y\to X$ is an open immersion. It remains to show that the image of $\varphi$ is $G$-stable. The isomorphism (5.1) is equivariant relative to some action $\alpha:G_{\eta}\times_{\eta}Z_{\eta}\to Z_{\eta}$. We may view $\alpha$ as a morphism $G\times Z\times\eta\to Z$, i.e., a family $(x_{1},\ldots,x_{m},y_{1},\ldots,y_{n})$, where $x_{1},\ldots,x_{m}\in{\mathcal{O}}(G\times Z\times\eta)$ and $y_{1},\ldots,y_{n}\in{\mathcal{O}}(G\times Z\times\eta)^{\times}$ (the group of invertible elements). Shrinking $Y$ again, we may assume that $x_{1},\ldots,x_{m}\in{\mathcal{O}}(G\times Z\times Y)$ and $y_{1},\ldots,y_{n}\in{\mathcal{O}}(G\times Z\times Y)^{\times}$. Then $\alpha$ is given by a morphism $G\times Z\times Y\to Z$, i.e., an action of $G_{Y}$ on $Z\times Y$. Moreover, $\varphi$ is $G_{Y}$-equivariant, since so is $\varphi_{\eta}$. This completes the proof of Theorem 2. The proof of Corollary 4 is completely similar; the point is that the generic fiber $X_{\eta}$ is a nontrivial ${\mathbb{G}}_{a,\eta}$-homogeneous variety, and hence is isomorphic to ${\mathbb{A}}^{1}_{\eta}$ on which ${\mathbb{G}}_{a,\eta}$ acts via a monic additive polynomial $P\in k(Y)[t]$ (Lemma 5.1). We leave the details to the reader. ###### Remark 5.2. (i) Theorem 1 may be reformulated as follows: every homogeneous variety $X$ under a split solvable algebraic group $G$ is affine and satisfies ${\mathcal{O}}(X)\simeq k[x_{1},\ldots,x_{m},y_{1},y_{1}^{-1},\ldots,y_{n},y_{n}^{-1}],$ where $x_{1},\ldots,x_{m},y_{1},\ldots,y_{n}$ are algebraically independent. 
So the invertible elements of the algebra ${\mathcal{O}}(X)$ are exactly the Laurent monomials $cy_{1}^{a_{1}}\cdots y_{n}^{a_{n}}$, where $c\in k^{\times}$ and $a_{1},\ldots,a_{n}\in{\mathbb{Z}}$. As a consequence, the projection $f:X\longrightarrow({\mathbb{A}}^{\times})^{n}$ is uniquely determined (but the projection $X\to{\mathbb{A}}^{m}$ is not: as an example, $k[x,y,y^{-1}]\simeq k[x+P(y),y,y^{-1}]$ for any $P\in k[t]$). In fact, $f$ is the quotient by the unipotent part $U$ of $G$, as follows from the proof of Theorem 1. (ii) Likewise, in the setting of Theorem 2, the projection $X_{0}\to({\mathbb{A}}^{\times})^{n}\times Y$ is the rational quotient by $U$. This theorem is known, in a more precise formulation, for a variety $X$ equipped with an action of a connected reductive algebraic group $G$ over an algebraically closed field of characteristic $0$. Then one considers the action of a Borel subgroup of $G$, and uses the “local structure theorem” as in [Kn90, Satz 2.3]. The dimension of $Y$ is the complexity of the $G$-action on $X$, and $n$ is its rank; both are important numerical invariants of the action (see e.g. [Ti11, Chap. 2]). These invariants still make sense in positive characteristic, and the local structure theorem still holds in a weaker form (see [Kn93, Satz 1.2]). Theorem 2 gives additional information in this setting. (iii) Corollary 4 also holds for a variety $X$ equipped with a nontrivial action of the multiplicative group: there exist a variety $Y$, a nonzero integer $n$ and an open immersion $\varphi:{\mathbb{A}}^{\times}\times Y\to X$ such that $g\cdot\varphi(x,y)=\varphi(g^{n}x,y)$ identically. This follows from the fact that every nontrivial ${\mathbb{G}}_{m,\eta}$-homogeneous variety is isomorphic to ${\mathbb{A}}^{\times}_{\eta}$ on which ${\mathbb{G}}_{m,\eta}$ acts by the $n$th power map for some $n\neq 0$. This extends to the action of a split torus $T$: using [Su75, Cor. 
3.11], one reduces to the case where $X$ is affine and $T$ acts via a free action of a quotient torus $T^{\prime}$. Then the quotient $X\to Y$ exists and is a $T^{\prime}$-torsor, see [SGA3, Exp. IX, Thm. 5.1] for a much more general result. ## References * [BGR17] J. P. Bell, D. Ghioca, Z. Reichstein, On a dynamical version of a theorem of Rosenlicht, Ann. Sci. Norm. Super. Pisa Cl. Sci. (5) 17 (2017), no. 1, 187–204. * [Bo91] A. Borel, Linear algebraic groups. Second enlarged edition, Grad. Texts in Math. 126, Springer, New York, 1991. * [CZ17] C. Chin, D-Q. Zhang, Rationality of homogeneous varieties, Trans. Amer. Math. Soc. 369 (2017), 2651–2673. * [Co15] B. Conrad, The structure of solvable algebraic groups over general fields, pp. 159–192 in: Panor. Synth. 46, Soc. Math. France, 2015. * [DG70] M. Demazure, P. Gabriel, Groupes algébriques, Masson, Paris, 1970. * [EGA] A. Grothendieck, Éléments de géométrie algébrique (rédigés avec la collaboration de J. Dieudonné), Pub. Math. I.H.É.S. 4, 8, 11, 17, 20, 24, 28, 32 (1961–1967). * [GW10] U. Görtz, T. Wedhorn, Algebraic Geometry I, Vieweg, Wiesbaden, 2010. * [Kn90] F. Knop, Weylgruppe und Momentabbildung, Invent. Math. 99 (1990), 1–23. * [Kn93] F. Knop, Über Bewertungen, welche unter einer reduktiven Gruppe invariant sind, Math. Ann. 293 (1993), 333–363. * [Mi17] J. S. Milne, Algebraic groups. The theory of group schemes of finite type over a field, Cambridge Stud. Adv. Math. 170, Cambridge University Press, 2017. * [MFK94] D. Mumford, J. Fogarty, F. Kirwan, Geometric invariant theory. Third enlarged edition, Ergeb. Math. Grenzgeb. 34, Springer, 1994. * [Po16] V. L. Popov, Birational splitting and algebraic group actions, European J. Math. 2 (2016), 283–290. * [Ro56] M. Rosenlicht, Some basic theorems on algebraic groups, Amer. J. Math. 78 (1956), 401–443. * [Ro57] M. Rosenlicht, Questions of rationality for algebraic groups, Ann. Mat. Pura Appl. 78 (1957), 25–50. * [Ro63] M. 
Rosenlicht, Questions of rationality for solvable algebraic groups over nonperfect fields, Ann. Mat. Pura Appl. 62 (1963), 97–120. * [Ru70] P. Russell, Forms of the affine line and its additive group, Pacific J. Math. 32 (1970), 527–539. * [SGA3] M. Demazure, A. Grothendieck, Séminaire de Géométrie Algébrique du Bois Marie, 1962–64, Schémas en groupes (SGA3), Tome I. Propriétés générales des schémas en groupes, Doc. Math. 7, Soc. Math. France, Paris, 2011. * [Sp98] T. A. Springer, Linear algebraic groups. Second edition, Prog. Math. 9, Birkhäuser, Basel, 1998. * [Su75] H. Sumihiro, Equivariant completion II, J. Math. Kyoto Univ. 15 (1975), 573–605. * [Ti11] D. A. Timashev, Homogeneous spaces and equivariant embeddings, Encyclopaedia Math. Sci. 138, Springer, 2011.
# A Companion Curve Tracing Method for Rank-deficient Polynomial Systems Wenyuan Wu and Changbo Chen (corresponding author) Chongqing Institute of Green and Intelligent Technology, Chinese Academy of Sciences, Chongqing, China ###### Abstract We propose a method for tracing implicit real algebraic curves defined by polynomials with rank-deficient Jacobians. For a given curve $f^{-1}(0)$, it first utilizes a regularization technique to compute at least one witness point per connected component of the curve. We improve this step by establishing a sufficient condition for testing the emptiness of $f^{-1}(0)$. We also analyze the convergence rate and carry out an error analysis for refining the witness points. The witness points are obtained by computing the minimum distance of a random point to a smooth manifold embedding the curve while at the same time penalizing the residual of $f$ at the local minima. To trace the curve starting from these witness points, we prove that if one drags the random point along a trajectory inside a tubular neighborhood of the embedded manifold of the curve, the projection of the trajectory on the manifold is unique and can be computed by numerical continuation. We then show how to choose such a trajectory to approximate the curve by computing eigenvectors of certain matrices. Effectiveness of the method is illustrated by examples. ###### keywords: rank-deficiency, real algebraic curve, curve tracing, numerical continuation, penalty function, Tikhonov regularization 65H10, 14Q30, 90C23 ## 1 Introduction Given an implicit real algebraic curve in $\mathbb{R}^{n}$ defined by a finite set of polynomials $f\subset\mathbb{R}[x_{1},\ldots,x_{n}]$, producing a polygonal chain approximation of it is a classical problem. 
Existing numerical continuation methods [Allgower2003] for solving this problem often require that the Jacobian of $f$, denoted by $\mathcal{J}_{f}$, is of nullity one at all (or almost all) points of the real zero set of $f$, that is, $f^{-1}(0)$. It remains a challenge to trace the curve defined by $f$ when $\mathcal{J}_{f}$ is rank-deficient, that is, of nullity greater than one (the dimension of the curve). One typical example is when $f$ is a sum of squares of polynomials or more generally when $f$ is nonnegative. For a smooth curve in $\mathbb{R}^{n}$ defined by $f=\\{f_{1},\ldots,f_{n-1}\\}$, where $\mathcal{J}_{f}$ is of full rank at every point of the curve, the main technical challenge would be to identify all branches of $f^{-1}(0)$ and make sure that there is no jumping during curve tracing. Identifying all branches of $f^{-1}(0)$ is the problem of computing witness points for every connected component of $f^{-1}(0)$ [Rouillier2000, Hauenstein2012, WR13]. Techniques for preventing or detecting curve jumping also exist [Blum1997, Beltran2013, Martin2013, Yu2014, WRF2017]. For an almost smooth curve in $\mathbb{R}^{n}$ defined by $f=\\{f_{1},\ldots,f_{n-1}\\}$, where $\mathcal{J}_{f}$ is of full rank at nearly all points of the curve, one further difficulty is to trace across the singular points and get the correct topology around the singular points. Many works exist for handling such problems [Hong1996, Daouda2008, Cheng2010, Gomes2014, DBLP:journals/jossac/ChenWF20, DBLP:journals/jossac/JinC20a]. In this paper, we are interested in the case that $\mathcal{J}_{f}$ is rank-deficient at every point of $f^{-1}(0)$, such as when $f$ consists of a polynomial that is a sum of squares. When $\mathcal{J}_{f}$ is rank-deficient, one of the main obstacles is that it is hard to find a tracing direction since the dimension of the nullspace of $\mathcal{J}_{f}$ is at least two. 
Our starting point for solving this problem is a penalty function based method for computing witness points for every connected component of $f^{-1}(0)$ [Wu2017]. We then extend it to a companion curve tracing method. The main idea of the penalty function is to embed the curve in a higher-dimensional smooth manifold defined by a system $g$ with full rank Jacobians and compute the minimum distance of a random point to the manifold while at the same time penalizing the residual of $f$ at the local minima. To render the curve, one natural idea is to move the random point (as a guiding point) along a trajectory and hope that the corresponding minima will be dragged continuously. To implement this idea, there are two main challenges to overcome. One is the potential occurrence of discontinuity, which indeed may happen as illustrated in Section 3 and proved in Section 5. Another is to make sure the minima do move along the curve as one drags the guiding point along the trajectory. We show that both challenges can be overcome and the idea of moving the guiding point is indeed feasible if the point is moved along the directions defined by eigenvectors of certain matrices and inside a tubular neighborhood of the embedded manifold of the curve. The trajectory formed by moving the random point is called a companion curve of $f^{-1}(0)$. Interestingly, the penalty function method, although initially proposed in a completely different context, turns out to be closely related to Tikhonov regularization for solving rank-deficient nonlinear least squares problems [Tikhonov95, Engl1996, Eriksson2005], which is explained in the preliminary section. The paper is structured as follows. In Section 2, we recall how to compute at least one witness point for every connected component of an arbitrary real algebraic variety, whose defining system is allowed to be rank-deficient, via the so-called penalty function method [Wu2017]. 
In Section 3, we illustrate by a simple example the challenge of tracing real algebraic curves defined by rank-deficient systems, with the initial witness points provided by the penalty function method, as well as the main idea of our companion curve method for handling this problem. One weakness of the penalty function method for computing witness points is that it always returns a non-empty set of points even when the given real variety is empty. In Section 4, we provide a sufficient criterion for testing emptiness of a variety by the penalty function method. To successfully trace a real algebraic curve, it is important to make the approximate witness points close enough to the curve. In Section 5, we propose a homotopy method for improving the precision of witness points. With all these preparations and another tool from differential geometry, namely the Tubular Neighborhood Theorem, we propose a companion curve method for curve tracing in Section LABEL:sec:pathtrack. The effectiveness of the method is illustrated by several examples in Section LABEL:sec:ex. Finally, in Section LABEL:sec:con, we draw conclusions and propose several ways to improve the current method. ## 2 Preliminaries In this section, we recall some preliminary results that were introduced in [Wu2017] for computing witness points of rank-deficient polynomial systems. Let $x=(x_{1},\ldots,x_{n})$ and let $f=\\{f_{1},...,f_{k}\\}\subset\mathbb{R}[x]$. Let $V_{\mathbb{R}}(f)$ be the zero set of $f$ in $\mathbb{R}^{n}$. Let $\mathfrak{a}=(\mathfrak{a}_{1},...,\mathfrak{a}_{n})\notin V_{\mathbb{R}}(f)$ be a point in $x$-space and consider the minimal squared distance from $V_{\mathbb{R}}(f)$ to this point: (1) $\displaystyle\min\;\sum_{i=1}^{n}(x_{i}-\mathfrak{a}_{i})^{2}$ $\displaystyle s.t.\hskip 28.45274ptf(x)=0.$ Clearly every semi-algebraically connected component of $V_{\mathbb{R}}(f)$ has at least one point attaining the local minimum. 
These points make the following matrix lose full rank: $A:=\left(\begin{array}[]{ccc}\partial f_{1}/\partial x_{1}&\cdots&\partial f_{1}/\partial x_{n}\\\ \vdots&\ddots&\vdots\\\ \partial f_{k}/\partial x_{1}&\cdots&\partial f_{k}/\partial x_{n}\\\ x_{1}-\mathfrak{a}_{1}&\cdots&x_{n}-\mathfrak{a}_{n}\\\ \end{array}\right).$ Note that the first $k$ rows of $A$ are exactly the Jacobian matrix of $f$ w.r.t. $x$, denoted by $\mathcal{J}_{f}$. If $\mathcal{J}_{f}$ is rank-deficient at every point of $V_{\mathbb{R}}(f)$, the matrix $A$ automatically loses full rank and thus does not provide any extra helpful information on computing the semi-algebraically connected components of $V_{\mathbb{R}}(f)$. To overcome this algebraic rank deficiency problem, the paper [Wu2017] introduces a penalty function based approach. Instead of solving the optimization problem (1), one considers the following unconstrained optimization problem: (2) $\displaystyle\min\;\mu$ $\displaystyle=(\beta\cdot(f_{1}^{2}+\cdots+f_{k}^{2})+\sum_{i=1}^{n}(x_{i}-\mathfrak{a}_{i})^{2})/2.$ Note that as $\beta$ approaches infinity, $f_{i}$, $i=1,\ldots,k$ are forced to be zero. Intuitively, this provides an approximate solution to the problem (1) for large enough $\beta$. Indeed, the paper [Wu2017] proves the following result justifying this intuition. ###### Proposition (Corollary $1$ in [Wu2017]). Let $p$ be a local minimum of (1). There exists a local minimum $p^{\prime}$ of (2) for sufficiently large $\beta$, such that $\|p-p^{\prime}\|$ can be arbitrarily small. Moreover, one can get a rough estimate of the distance between $p$ and $p^{\prime}$ via the notion of degree index. ###### Definition (Definition $3$ in [Wu2017]). For a given $v\neq 0\in\mathbb{R}^{n}$, let $f_{v}:=f(x=vt)$. Denote by $\deg_{\min}(f_{v})$ the trailing degree of $f_{v}$. We call $\deg_{ind}(f)=\max_{v}\deg_{\min}(f_{v})$ the degree index of $f$. 
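The rank deficiency of $A$ described above can be observed numerically; a minimal sketch assuming NumPy, using the curve $f=(x_{1}^{3}-x_{2})^{2}$ that serves as the running example in Section 3 (helper names are mine):

```python
import numpy as np

# For f = (x1^3 - x2)^2 (a single polynomial, k = 1), the gradient of f
# vanishes identically on f^{-1}(0), so the matrix A formed by stacking
# J_f and the row x - a loses full rank at every point of the curve.
def grad_f(x1, x2):
    u = x1**3 - x2
    return np.array([6.0 * x1**2 * u, -2.0 * u])

a = np.array([0.0, -1.0])        # the point \mathfrak{a}
p = np.array([0.5, 0.125])       # a point on the curve x2 = x1^3

A = np.vstack([grad_f(*p), p - a])
print(np.linalg.matrix_rank(A))  # 1: only the row p - a contributes
```

So the Lagrange-multiplier matrix carries no information beyond the distance row, which is exactly the obstruction the penalty function is designed to circumvent.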
Given a point $p\in V_{\mathbb{R}}(f)$, the degree index of $f$ at $p$ is defined as $\deg_{ind}(f(x+p))$. ###### Theorem 2.1 (Theorem $5$ in [Wu2017]). For a random point $\mathfrak{a}\in\mathbb{R}^{n}$ and a sufficiently large $\beta$, suppose that $p\in V_{\mathbb{R}}(f)$ attains the local minimal distance to $\mathfrak{a}$. Then there is a solution $p^{\prime}$ of Equation (3) such that $\|p^{\prime}-p\|=O(\sqrt[2\mathpzc{I}-1]{1/\beta}\,)$, where $\mathpzc{I}=\max\\{\deg_{ind}(f_{i}(x+p)),i=1,...,k\\}$. The local minima of (2) are points at which the gradient of $\mu$ vanishes, that is, points satisfying the following equation: (3) $\left(\begin{array}[]{c}x_{1}\\\ \vdots\\\ x_{n}\\\ \end{array}\right)+\beta\cdot\mathcal{J}^{t}\cdot\left(\begin{array}[]{c}f_{1}\\\ \vdots\\\ f_{k}\\\ \end{array}\right)=\left(\begin{array}[]{c}\mathfrak{a}_{1}\\\ \vdots\\\ \mathfrak{a}_{n}\\\ \end{array}\right),\\\ $ where the $n\times k$ matrix $\mathcal{J}^{t}$ is the transpose of the Jacobian of $f$. The left hand side of Equation (3) defines a smooth mapping $M:\mathbb{R}^{n}\rightarrow\mathbb{R}^{n}$. ###### Lemma 2.2 (Lemma $2$ in [Wu2017]). For almost all points $\mathfrak{a}=(\mathfrak{a}_{1},...,\mathfrak{a}_{n})\notin V_{\mathbb{R}}(f)$, $M^{-1}(\mathfrak{a})$ is a nonempty finite set and every point of $M^{-1}(\mathfrak{a})$ is a regular point of $M$. Proposition 2 and Lemma 2.2 together show that for almost all points $\mathfrak{a}$ of $\mathbb{R}^{n}$, $M^{-1}(\mathfrak{a})$ contains points meeting every semi-algebraically connected component of $V_{\mathbb{R}}(f)$. We warn that $M^{-1}(\mathfrak{a})$ may contain extra points not belonging to $V_{\mathbb{R}}(f)$. In particular, even if $V_{\mathbb{R}}(f)$ is empty, $M^{-1}(\mathfrak{a})$ is always nonempty. Numerically testing if a real variety is empty in general is a difficult problem. We provide a partial answer to this problem in Section 4. 
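On a toy instance, the claim that Equation (3) is exactly the first-order condition of (2) can be checked symbolically; a sketch assuming SymPy, with $k=1$, $n=2$ and an example polynomial of my choosing:

```python
import sympy as sp

# Symbolic check that the stationarity condition (3) is the vanishing
# gradient of the penalty function mu in (2), for k = 1 and n = 2.
x1, x2, a1, a2, beta = sp.symbols('x1 x2 a1 a2 beta')
f = (x1**3 - x2)**2                                   # example system, k = 1
mu = (beta * f**2 + (x1 - a1)**2 + (x2 - a2)**2) / 2

lhs = sp.Matrix([sp.diff(mu, x1), sp.diff(mu, x2)])   # gradient of mu
J_t = sp.Matrix([sp.diff(f, x1), sp.diff(f, x2)])     # transposed Jacobian
rhs = sp.Matrix([x1 - a1, x2 - a2]) + beta * J_t * f  # (3): M(x) - a

print(sp.simplify(lhs - rhs))  # the zero vector: the two conditions agree
```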
Sometimes, it is useful to use the following two equivalent formulations of (2). Let $z=(z_{1},\ldots,z_{k})$ be $k$ slack variables and $g=\\{f_{1}+z_{1},f_{2}+z_{2},...,f_{k}+z_{k}\\}$. Note that we have $g\subset\mathbb{R}[x,z]$ and $V_{\mathbb{R}}(g)\subseteq\mathbb{R}^{n+k}$. (4) $\displaystyle\min\;\mu$ $\displaystyle=(\beta\cdot(z_{1}^{2}+\cdots+z_{k}^{2})+\sum_{i=1}^{n}(x_{i}-\mathfrak{a}_{i})^{2})/2$ $\displaystyle s.t.\hskip 28.45274ptg(x,z)=0.$ Let $z_{i}=w_{i}/\sqrt{\beta}$, $i=1,\ldots,k$, and substitute them into (4). Let $h=\\{f_{1}+w_{1}/\sqrt{\beta},\ldots,f_{k}+w_{k}/\sqrt{\beta}\\}$. (5) $\displaystyle\min\;$ $\displaystyle(w_{1}^{2}+\cdots+w_{k}^{2}+\sum_{i=1}^{n}(x_{i}-\mathfrak{a}_{i})^{2})/2$ $\displaystyle s.t.\hskip 28.45274pth=0.$ One nice thing about (4) and (5) is that the real varieties defined by $g$ and $h$ are smooth submanifolds of $\mathbb{R}^{n+k}$. ###### Remark 2.3. If we set $\beta=1/t$, we obtain another equivalent formulation: (6) $\displaystyle\min\;\mu$ $\displaystyle=((f_{1}^{2}+\cdots+f_{k}^{2})+t\sum_{i=1}^{n}(x_{i}-\mathfrak{a}_{i})^{2})/2.$ Such a formulation is exactly the Tikhonov regularization for nonlinear least squares problems [Tikhonov95, Engl1996, Eriksson2005], where $t\sum_{i=1}^{n}(x_{i}-\mathfrak{a}_{i})^{2}/2$ is the regularization term for the minimization problem $\min\|f\|^{2}$. Here, one must be cautious about the choice of $\mathfrak{a}$, since Lemma 2.2 does not exclude the possibility that the solution of problem (6) is not regular for certain $\mathfrak{a}$, which indeed poses a challenge for curve tracing as explained in the next section. ## 3 An introductory example In this section, we illustrate by an example the challenge of a pure numerical method for tracing algebraic curves defined by rank-deficient polynomials, as well as the main idea of our companion curve method. Let $f:={x_{{1}}}^{6}-2\,{x_{{1}}}^{3}x_{{2}}+{x_{{2}}}^{2}=(x_{1}^{3}-x_{2})^{2}$. 
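For this $f$ (a single polynomial, $k=1$), the smoothness gained by the slack-variable formulation above can be verified numerically: the Jacobian of $g=f+z$ has full rank everywhere, even on the curve itself where $\mathcal{J}_{f}$ vanishes. A sketch assuming NumPy:

```python
import numpy as np

# Slack system g = f + z for f = (x1^3 - x2)^2: its Jacobian with respect
# to (x1, x2, z) is (df/dx1, df/dx2, 1). The constant 1 in the slack
# column keeps the rank equal to k = 1 everywhere, in particular at
# curve points where the Jacobian of f alone is identically zero.
def jac_g(x1, x2, z):
    u = x1**3 - x2
    return np.array([[6.0 * x1**2 * u, -2.0 * u, 1.0]])

J = jac_g(0.5, 0.125, 0.0)        # evaluated at a point of f^{-1}(0)
print(np.linalg.matrix_rank(J))   # 1 = k: g^{-1}(0) is a smooth manifold
```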
Recall from Section 2 that one can obtain approximate witness points of $V_{\mathbb{R}}(f)$ by solving Equation (3). Choosing $\beta=10^{4}$ and $\mathfrak{a}=(0,-1)$, the equation becomes $\\{60000\,{x_{{1}}}^{11}-180000\,{x_{{1}}}^{8}x_{{2}}+180000\,{x_{{1}}}^{5}{x_{{2}}}^{2}-60000\,{x_{{1}}}^{2}{x_{{2}}}^{3}+x_{{1}},-20000\,{x_{{1}}}^{9}+60000\,{x_{{1}}}^{6}x_{{2}}-60000\,{x_{{1}}}^{3}{x_{{2}}}^{2}+20000\,{x_{{2}}}^{3}+x_{{2}}+1\\}$. By the homotopy continuation method, say the one implemented in Hom4PS-2.0 [Lee2008], one obtains three approximate witness points: $(-0.3639,-0.0840)$, $(-0.8296,-0.5982)$, $(0,-0.0364)$. Geometrically, these points are actually the projection onto $(x_{1},x_{2})$-space of the following three local minima of the optimization problem (2) in its equivalent slack form: $(-0.3639,-0.0840,-0.1280),(-0.8296,-0.5982,-0.0739),(0,-0.0364,-0.1324).$ These three points attain the local minimum distance from the point $(0,-1,0)$ to the manifold $f+w/100=0$, as illustrated by Figure 1. Figure 1: (Color online) Left: the random point $\mathfrak{a}=(0,-1,0)$ (in red $\bullet$), the three local minima (in blue $+$), the smooth surface defined by $f+w/100=0$ and its intersection with $w=0$ (gold curve). Right: the random point $\mathfrak{a}=(0,-1)$ (in red $\bullet$), the three approximate witness points (in blue $+$), and the curve $f=0$ (gold curve). ### 3.1 The challenge Next we would like to generate more points of $V_{\mathbb{R}}(f)$ by curve tracing with the three initial points. One natural idea is to consider the following linear homotopy: (7) $H(x,\tau)=x+\beta_{0}\mathcal{J}^{t}\cdot f-(\tau p_{1}+(1-\tau)p_{0})\equiv 0,$ where $\beta_{0}=10000$ and one moves $\mathfrak{a}$ from $p_{0}=(0,-1)$ to $p_{1}=(-0.5,-1.5)$ along a line. 
As long as the number of solutions of $H(x,\mathfrak{a})=x+\beta_{0}\mathcal{J}^{t}\cdot f-\mathfrak{a}$ in $x$ remains unchanged and the graphs of the solutions of $H(x,\mathfrak{a})$ as functions of $\mathfrak{a}$ remain disjoint and smooth, one should encounter not much difficulty during curve tracing. However, as illustrated by the left subfigure of Fig. 2, the path of $\mathfrak{a}$ crosses the discriminant locus [LazardRouillier2007] of $H(x,\mathfrak{a})$, which can be obtained by a Gröbner basis computation [LazardRouillier2007] in view of Lemma 5.1 in Section 5. As illustrated by the right subfigure of Fig. 2, when $\mathfrak{a}$ approaches the discriminant locus, the solution curve starting with $(0,-0.0364)$ (blue $+$) and the solution curve starting with $(-0.3639,-0.0840)$ (purple $\bullet$) gradually approach each other and finally cease to be solutions of $H(x,\mathfrak{a})$. Interestingly, if we apply Newton iteration now starting with these non-solution points, they finally converge to the third solution curve traced starting with the local minimum $(-0.8296,-0.5982)$ (gold $\star$). Therefore, curve jumping happened when the parameter $\mathfrak{a}$ crossed the discriminant locus. Figure 2: (Color online) Left: the discriminant locus of $H(x,\mathfrak{a})$ (two “V” curves in dark green) and the segment $\overline{p_{0}p_{1}}$ (in red). Right: the traced curve starting with the initial point $(-0.8296,-0.5982)$ (gold $\star$), the traced curve starting with the initial point $(0,-0.0364)$ (blue $+$), and the traced curve starting with the initial point $(-0.3639,-0.0840)$ (purple $\bullet$). ### 3.2 The companion curve solution Note that in Fig. 1, there are three points on the smooth surface attaining the local minimum distance to the given random point $(0,-1,0)$. Intuitively, there should only be one local minimum point if the given random point is close enough to the surface and the surface is connected. 
This is indeed true, which will be proved in Section LABEL:sec:pathtrack, thanks to the Tubular Neighborhood Theorem in differential geometry. Now suppose that $\mathfrak{a}=p_{0}$ is the initial random point and $(x,w)=(q_{0},r_{0})$ is an approximate local minimum of the optimization problem (2) in its equivalent slack form. Suppose that $\|r_{0}\|$ is small enough such that $x=q_{0}$ can be seen as an approximate witness point of $V_{\mathbb{R}}(f)$. We first move the point $\mathfrak{a}$ to another point $p_{1}$ on the segment $\overline{p_{0}q_{0}}$ such that $(p_{1},0)$ is close to the surface and hope that there is only one point $(q_{1},r_{1})$ on the surface with local minimum distance to $(p_{1},0)$ nearby. We then pick a well chosen direction, say $\vec{v}$, by an eigenvector computation and move $\mathfrak{a}$ from $p_{1}$ to $p_{1}^{\prime}$. Accordingly, $x$ is moved in the same direction from $q_{1}$ to $q_{1}^{\prime}$ (it may be further refined to $q_{1}^{\prime\prime}$ by Newton iteration if necessary). It is possible that $(p_{1}^{\prime},0)$ is now outside the tubular neighborhood of the surface and one may repeat the previous step to drag $p_{1}^{\prime}$ to $p_{2}$ and produce $q_{2}$. And then move $q_{2}$ to $q_{2}^{\prime}$, $p_{2}$ to $p_{2}^{\prime}$, and so on. The polygonal chain formed by the sequence of points $p_{1},p_{2},\ldots$ is called the companion curve of $V_{\mathbb{R}}(f)$, as illustrated in Fig. 3. Figure 3: (Color online) Left: illustrating the idea of the companion curve tracing method. Right: an approximation of $V_{\mathbb{R}}(f)$ (polygonal chain in gold $\bullet$) and its companion curve (polygonal chain in blue $+$). The initial random point $(0,-1)$ (in red $\bullet$) at the bottom was moved to a point (in blue $\Box$) close to $V_{\mathbb{R}}(f)$. ## 4 Emptiness of real variety In this section, we propose a criterion for testing the emptiness of a given real variety. 
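The dragging loop described above can be sketched in a simplified form, assuming NumPy/SciPy. Here the guiding point follows a hand-chosen offset path near the curve instead of the eigenvector-based directions, so this illustrates only the warm-started re-minimization, not the full method:

```python
import numpy as np
from scipy.optimize import minimize

beta = 1e4

def mu(x, a):
    # penalty function (2) for f = (x1^3 - x2)^2, with guiding point a
    f = (x[0]**3 - x[1])**2
    return 0.5 * (beta * f**2 + np.sum((x - a)**2))

q = np.array([-0.8296, -0.5982])      # initial witness point q_0
trace = []
for t in np.linspace(-0.8, 0.8, 17):
    a = np.array([t, t**3 - 0.1])     # guide kept at offset 0.1 below curve
    # re-solve the local minimum with a warm start at the previous q,
    # so the minimizer is dragged continuously along V_R(f)
    q = minimize(mu, q, args=(a,), method='BFGS').x
    trace.append(q)

residuals = [(x1**3 - x2)**2 for x1, x2 in trace]
print(max(residuals))                 # every traced point lies near f^{-1}(0)
```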
Lemma 2.2 implies that all the solutions of Equation (3) can be obtained by applying homotopy continuation methods. Among these solutions, we look for solutions with small residuals, i.e., $\|z\|\ll 1$. It is possible that such points do not exist, which then provides strong evidence that $V_{\mathbb{R}}(f)$ is empty. Intuitively, this is because if $V_{\mathbb{R}}(f)$ is not empty, increasing the penalty factor $\beta$ will force $\|z\|$ close to zero. Thus, the minimal value of $\mu$ will be slightly larger than half the squared distance from $\mathfrak{a}$ to $V_{\mathbb{R}}(f)$. To study the relationship between $\mu_{\min}$ and the emptiness of $V_{\mathbb{R}}(f)$, we homogenize the system $f$ by adding a variable $x_{0}$ satisfying a new equation $\bar{f}_{k+1}=\sum_{i=0}^{n}x_{i}^{2}-1=0$ to obtain a system $\bar{f}$ that is homogeneous except for the new inhomogeneous equation. The corresponding unconstrained optimization problem is (8) $\displaystyle\min\;\bar{\mu}$ $\displaystyle=(\beta\cdot(\bar{f}_{1}^{2}+\cdots+\bar{f}_{k}^{2}+\bar{f}_{k+1}^{2})+\sum_{i=0}^{n}(x_{i}-\mathfrak{a}_{i})^{2})/2,$ where $\mathfrak{a}$ is chosen randomly in the unit ball $B(0;1)$ of $\mathbb{R}^{n+1}$. ###### Proposition 1. Let $\bar{\mu}_{\min}$ be the global minimal value of $\bar{\mu}$ in the optimization problem (8). If $\bar{\mu}_{\min}>2$, then $V_{\mathbb{R}}(f)=\emptyset$. ###### Proof 4.1. We prove it by contradiction. Suppose $V_{\mathbb{R}}(f)\neq\emptyset$. Then $V_{\mathbb{R}}(\bar{f})\neq\emptyset$ and let $p\in V_{\mathbb{R}}(\bar{f})$. We know that $p,\mathfrak{a}\in B(0;1)$, which implies $\|p-\mathfrak{a}\|\leq 2$. Thus, $\bar{\mu}_{\min}\leq\beta\cdot 0+\|p-\mathfrak{a}\|^{2}/2\leq 2$. It contradicts the assumption $\bar{\mu}_{\min}>2$. $\square$ ###### Example 4.2. Let $f={x}^{4}+{y}^{4}-3\,xy+3/2$. 
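The positivity of this $f$ admits an exact sum-of-squares certificate (the one displayed next in the text), which can be checked symbolically; a sketch assuming SymPy:

```python
import sympy as sp

x, y = sp.symbols('x y')
f = x**4 + y**4 - 3*x*y + sp.Rational(3, 2)

# sum-of-squares certificate with the coefficients quoted in Example 4.2
sos = (sp.Rational(159, 784)*x**4 + sp.Rational(15, 64)*y**4
       + sp.Rational(3, 50)
       + (sp.Rational(6, 5) - sp.Rational(5, 4)*x*y)**2
       + (sp.Rational(25, 28)*x**2 - sp.Rational(7, 8)*y**2)**2)

print(sp.expand(f - sos))  # 0: f is a positive constant plus squares,
                           # hence f > 0 everywhere and V_R(f) is empty
```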
We can verify that $f={\frac{159}{784}}\,{x}^{4}+{\frac{15}{64}}\,{y}^{4}+{\frac{3}{50}}+\left(\frac{6}{5}-\frac{5}{4}\,xy\right)^{2}+\left({\frac{25}{28}}\,{x}^{2}-{\frac{7}{8}}\,{y}^{2}\right)^{2}>0.$ Homogenizing $f$ yields $\bar{f}=\\{{x}^{4}+{y}^{4}-3\,{h}^{2}xy+3/2\,{h}^{4},{h}^{2}+{x}^{2}+{y}^{2}-1\\}$. Choose $\mathfrak{a}=(0.2,0.5,0.3)$ and $\beta=10000$ and solve the corresponding system of Equation (3) by Hom4PS-2. It gives $23$ real roots among $111$ complex ones. The minimal value of $\bar{\mu}$ is $28.6$, attained at $(x=0.565,y=0.565,h=0.596)$. By Proposition 1, this indicates that $V_{\mathbb{R}}(f)=\emptyset$. To apply this proposition we may choose a sufficiently large $\beta$ to make $\bar{\mu}_{\min}>2$ possible. However, for some positive polynomials this will never happen no matter how large $\beta$ is. ###### Example 4.3. Consider $f=(xy-1)^{2}+y^{2}$, which is positive but takes values arbitrarily close to zero. Fixing $\mathfrak{a}=(0,0,0)$, by Equation (8), we have $\bar{\mu}=\beta\left({h}^{4}-2\,{h}^{2}xy+{h}^{2}{y}^{2}+{x}^{2}{y}^{2}\right)^{2}+\beta\left({h}^{2}+{x}^{2}+{y}^{2}-1\right)^{2}+{h}^{2}+{x}^{2}+{y}^{2}.$ Numerical computation shows that $\bar{\mu}_{\min}=0.999975$ at the point $(x=2.9\times 10^{-8},y=0.9999,h=8.5\times 10^{-9})$ when $\beta=10^{4}$. Actually, $\bar{\mu}_{\min}$ must be no greater than $1$ since $\bar{\mu}(0,1,0)=1$ for any $\beta$. This means that we cannot tell whether the real variety of $f$ is empty or not by Proposition 1. Proposition 1 only gives a sufficient condition for $V_{\mathbb{R}}(f)=\emptyset$; if $\bar{\mu}_{\min}\leq 2$, deciding emptiness remains an open question. In the rest of this paper, we always assume that $V_{\mathbb{R}}(f)\neq\emptyset$. ## 5 Refinement By Proposition 2, theoretically we can use Equation (3) to update the approximate root $x^{\prime}$ by increasing $\beta$. Suppose we have all the real solutions of Equation (3) for $\beta=\beta_{0}$, denoted by $R_{\beta_{0}}$.
When we increase $\beta$ from $\beta_{0}$ to $\beta_{1}$, there are two ways to obtain $R_{\beta_{1}}$. One way is to solve Equation (3) in the complex field and then keep the real solutions. Alternatively, we may trace the real curves of the following homotopy, starting from all points of $R_{\beta_{0}}$, by moving $\beta$ from $\beta_{0}$ to $\beta_{1}$ continuously. (9) $H(x,\beta)=(x-\mathfrak{a})+\beta\cdot\mathcal{J}^{t}\cdot f\equiv 0.$ However, we must guard against singular Jacobians to ensure successful tracing. ###### Lemma 5.1. For any polynomial system $f\subset\mathbb{R}[x]$, let $F=\\{{x}-\mathfrak{a}+\beta\cdot\mathcal{J}^{t}\cdot{f}\\}$ and $G=\\{F,\det(\frac{\partial F}{\partial x})\\}\subset\mathbb{R}[x,\mathfrak{a},\beta]$. Then there is a nonzero polynomial $\phi(\mathfrak{a},\beta)\in\langle G\rangle$.
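Tracing Eq. (9) can be illustrated on a toy instance (our own one-variable example, not from the paper): take $f=x^{2}-2$ and $\mathfrak{a}=0.5$, so $\mathcal{J}=df/dx=2x$. At $\beta=0$ the root of $H$ is exactly $\mathfrak{a}$; Newton-correcting while $\beta$ grows slides the tracked root toward the variety point $\sqrt{2}$, and along this particular path $\partial H/\partial x$ stays away from zero, so no singular Jacobian is encountered.

```python
# Toy continuation for Eq. (9) with f = x^2 - 2 (so J^t * f = 2x(x^2 - 2))
# and anchor a = 0.5.  This is our own illustrative instance, not an
# example from the paper.
A = 0.5

def H(x, beta):
    return (x - A) + beta * 2.0 * x * (x * x - 2.0)

def dH_dx(x, beta):
    # derivative of H in x: 1 + beta * (6 x^2 - 4)
    return 1.0 + beta * (6.0 * x * x - 4.0)

def trace(beta_end=1e4, steps=2000):
    x = A                                     # root of H at beta = 0
    for k in range(1, steps + 1):
        beta = beta_end * (k / steps) ** 3    # fine steps while beta is small
        for _ in range(30):                   # Newton corrector at fixed beta
            h = H(x, beta)
            if abs(h) < 1e-12:
                break
            x -= h / dH_dx(x, beta)
    return x

# trace() approaches the real variety point sqrt(2) ~ 1.41421 as beta grows
```

The cubic schedule keeps the $\beta$ increments small where the path bends most; with a uniform schedule the very first Newton solve can jump to a different branch of $H=0$, which is exactly the failure mode Lemma 5.1 is meant to control.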
1 CAS Key Laboratory of Geospace Environment, School of Earth and Space Sciences, University of Science and Technology of China, Hefei 230026, China (email: <EMAIL_ADDRESS>)
2 CAS Center for Excellence in Comparative Planetology, University of Science and Technology of China, Hefei 230026, China
3 Mengcheng National Geophysical Observatory, School of Earth and Space Sciences, University of Science and Technology of China, Hefei 230026, China
4 Collaborative Innovation Center of Astronautical Science and Technology, Hefei, Anhui 230026, China
5 School of Atmospheric Sciences, Sun Yat-sen University, Zhuhai, Guangdong, 519000, China
6 Institute for the Study of Earth, Ocean, and Space, University of New Hampshire, Durham, NH 03824, USA # How flux feeding causes eruptions of solar magnetic flux ropes with the hyperbolic flux tube configuration? Quanhao Zhang
Rui Liu, Yuming Wang, Zhenjun Zhou, Bin Zhuang,
Xiaolei Li
Coronal magnetic flux ropes are generally considered to be the core structure of large-scale solar eruptions. Recent observations found that solar eruptions could be initiated by a sequence of "flux feeding," during which chromospheric fibrils rise upward from below and merge with a pre-existing prominence. A further theoretical study has confirmed that the flux feeding mechanism is efficient in causing the eruption of flux ropes that are wrapped by bald patch separatrix surfaces. However, it is unclear how flux feeding influences coronal flux ropes that are wrapped by hyperbolic flux tubes (HFT), and whether it is able to cause the flux rope to erupt. In this paper, we use a 2.5-dimensional magnetohydrodynamic model to simulate flux feeding processes in HFT configurations. It is found that flux feeding injects axial magnetic flux into the flux rope, whereas the poloidal flux of the rope is reduced after flux feeding. Flux feeding is able to cause the flux rope to erupt, provided that the injected axial flux is large enough that the critical axial flux of the rope is reached. Otherwise, the flux rope system evolves to a stable equilibrium state after flux feeding, which might be even farther away from the onset of the eruption, indicating that flux feeding could stabilize the rope system with the HFT configuration in this circumstance.
###### Key Words.: Sun: filaments, prominences – Sun: flares – Sun: coronal mass ejections (CMEs) – Sun: magnetic fields – Sun: activity ## 1 Introduction Large-scale solar eruptions include prominence/filament eruptions, flares, and coronal mass ejections (CMEs) (Benz, 2008; Chen, 2011; Parenti, 2014; Liu, 2020). They are capable of inflicting severe impacts on the solar-terrestrial system (Švestka, 2001; Cheng et al., 2014; Shen et al., 2014; Lugaz et al., 2017; Gopalswamy et al., 2018). It is widely accepted that the different kinds of large-scale solar eruptions are closely related to each other: they are essentially different manifestations of the same eruptive process of a coronal magnetic flux rope system (Zhang et al., 2001; Vršnak et al., 2005; van Driel-Gesztelyi & Green, 2015; Jiang et al., 2018; Liu et al., 2018; Yan et al., 2020). Therefore, it is of great significance to investigate how the eruption of coronal magnetic flux ropes is initiated. According to the magnetic topology, coronal flux ropes are classified into two types of configurations: if the flux rope sticks to the photosphere, with a bald patch separatrix surface (BPSS, Titov et al., 1993; Titov & Démoulin, 1999; Gibson & Fan, 2006) wrapping the flux rope, it is usually called the BPS configuration (Filippov, 2013); if the rope is suspended in the corona and wrapped by a hyperbolic flux tube (HFT), it is called the HFT configuration (Titov et al., 2003; Aulanier et al., 2005; Chintzoglou et al., 2017). Many theoretical analyses have been carried out to investigate the eruptive mechanism of coronal magnetic flux ropes.
Apart from the well-known magnetohydrodynamic (MHD) instabilities (Romano et al., 2003; Török & Kliem, 2003; Kliem & Török, 2006; Fan & Gibson, 2007; Aulanier et al., 2010; Guo et al., 2010; Savcheva et al., 2012), it has also been suggested by many previous studies that catastrophes could be responsible for solar eruptions (e.g., Forbes & Isenberg, 1991; Isenberg et al., 1993; Lin et al., 2001; Chen et al., 2007; Démoulin & Aulanier, 2010; Longcope & Forbes, 2014; Kliem et al., 2014). For example, if either the axial flux or the poloidal flux of a flux rope exceeds the corresponding critical value, a catastrophic loss of equilibrium will occur, resulting in the eruption of the rope (Li & Hu, 2002; Bobra et al., 2008; Su et al., 2011; Zhang et al., 2016; Zhuang et al., 2018). Alternatively, magnetic reconnection may play a dominant role in flux rope eruptions, as in the breakout model (Antiochos et al., 1999; Sterling & Moore, 2004), the tether-cutting model (Moore et al., 2001; Inoue et al., 2015), and the flux emergence model (Chen & Shibata, 2000; Archontis & Hood, 2008). Recently, Zhang et al. (2014) observed chromospheric fibrils rising from below a quiescent prominence and merging with it. This phenomenon is called the "flux feeding" process. As observed by Zhang et al. (2014), the flux feeding process occurred three times, followed by the eruption of the quiescent prominence. This implies that coronal eruptions could be initiated by flux feeding. Since accurate measurements of the local conditions in the corona are lacking, the physical essence of the flux feeding mechanism cannot be established from observational results alone. To address this, Zhang et al. (2020) (hereafter Paper I) carried out numerical simulations to investigate the scenario of the flux feeding mechanism in BPS configurations. In their simulations, the rising chromospheric fibril is represented by a small flux rope emerging from below a pre-existing coronal magnetic flux rope with a BPS configuration.
It was found that flux feeding processes only inject axial magnetic flux into the pre-existing flux rope, whose poloidal flux, however, remains unchanged after flux feeding. If the amount of the injected axial flux is large enough that the total axial flux of the flux rope exceeds its critical value, the eruption of the rope is initiated by the upward catastrophe associated with the critical axial flux; otherwise, the flux rope remains in the stable BPS configuration after flux feeding. Different from BPS configurations, HFT configurations are usually considered to be the pre-eruptive states of coronal eruptions (Galsgaard et al., 2003; Aulanier et al., 2010; Filippov, 2013), indicating that HFT configurations could be metastable or even unstable. This raises several questions: how does flux feeding influence flux rope systems with an HFT configuration? Is flux feeding still able to cause eruptions? If the flux feeding mechanism is still efficient, what are the similarities and differences between the BPS and the HFT cases? To resolve these questions, theoretical analyses of flux feeding in HFT configurations are needed, which will further shed light on the physical nature of the flux feeding mechanism. In this paper, we carry out 2.5-dimensional numerical simulations to investigate the influence of flux feeding on flux rope systems with an HFT configuration, especially how flux feeding causes the rope's eruption. The rest of the paper is arranged as follows: the simulating procedures are introduced in Sect. 2; the simulation results of a typical flux feeding process and the flux rope eruption caused by this process are presented in Sect. 3; the influence of flux feeding on the rope system and the physical scenario of the flux feeding mechanism in HFT configurations are analyzed in Sect. 4. Finally, discussion and conclusions are given in Sect. 5.
## 2 Simulating procedures For the 2.5-dimensional cases in our simulations, all the quantities satisfy $\partial/\partial z=0$, so the magnetic field can be written as $\displaystyle\textbf{B}=\triangledown\psi\times\hat{\textbf{\emph{z}}}+B_{z}\hat{\textbf{\emph{z}}}.$ (1) Here $B_{z}$ is the component of the magnetic field in the $z$-direction; $\psi$ is the magnetic flux function, and the isolines of $\psi$ correspond to the magnetic field lines projected onto the $x$-$y$ plane. In this paper, the multi-step implicit scheme (Hu, 1989) is used to simulate the evolution of coronal flux rope systems. The initial state in our simulation is obtained by numerical procedures: first we insert a flux rope into a potential background field from the lower base, and then, by adjusting the properties of this rope, a flux rope system with an HFT configuration is eventually obtained. The basic equations and the detailed simulating procedures are introduced in Appendix A. The magnetic configuration of the initial state is shown in Fig. 1(a): a flux rope is suspended above a bundle of closed arcades, resulting in an X-type magnetic structure below the rope. The green curves in Fig. 1(a) mark the boundary of the flux rope and that of the arcades below the rope. The magnetic flux of these arcades per unit length along the $z$-direction is $\Phi_{a}=0.069\times 10^{10}\leavevmode\nobreak\ \mathrm{Mx}\leavevmode\nobreak\ \mathrm{cm}^{-1}$. The background field is a partially open bipolar field, with a negative and a positive surface magnetic charge located at the lower photosphere within $-25\leavevmode\nobreak\ \mathrm{Mm}<x<-15\leavevmode\nobreak\ \mathrm{Mm}$ and $15\leavevmode\nobreak\ \mathrm{Mm}<x<25\leavevmode\nobreak\ \mathrm{Mm}$, respectively. $B_{z}$ is zero in the background field. Figure 1: Evolution of the magnetic configuration during a flux feeding process with $C_{E}=2.23$.
Panel (a) shows the initial flux rope system with an HFT configuration; the green curves mark the boundary of the major flux rope and that of the arcades below the rope. The times marked in panels (b)-(i) are in units of $\tau_{A}$. Different colors depict the magnetic field lines corresponding to different values of $\psi$. Similar to Paper I, the emerging fibril in the scenario of flux feeding is represented by a small flux rope, and the pre-existing large flux rope is called the "major rope" for simplicity in the rest of this paper. The emergence of the small rope is achieved by simulating procedures similar to those in Paper I: the small rope emerges from the central region right below the major rope at a constant speed; the emergence begins at $t=0$ and ends at $t=\tau_{E}=60\tau_{A}$, where $\tau_{A}=17.4$ s is the characteristic Alfvén transit time. Thus the emerged part of the small rope at the base is located within $-x_{E}\leqslant x\leqslant x_{E}$, where $x_{E}=(a^{2}-h_{E}^{2})^{1/2}$, $h_{E}=a(2t/\tau_{E}-1)$, $0\leqslant t\leqslant\tau_{E}$, and $a$ is the radius of the small rope.
Based on this, by adjusting $\psi$, $B_{z}$, the velocities $v_{x,y,z}$, the temperature $T$, and the density $\rho$ at the base ($y=0,-x_{E}\leqslant x\leqslant x_{E}$), the emergence of the small flux rope is achieved by: $\displaystyle\psi(t,x,y=0)=\psi_{i}(x,y=0)+\psi_{E}(t,x),$ (2) $\displaystyle\psi_{E}(t,x)=\frac{C_{E}}{2}\mathrm{ln}\left(\frac{2a^{2}}{a^{2}+x^{2}+h_{E}^{2}}\right),$ (3) $\displaystyle B_{z}(t,x,y=0)=C_{E}a(a^{2}+x^{2}+h_{E}^{2})^{-1},$ (4) $\displaystyle v_{y}(t,x,y=0)=v_{E}=2a/\tau_{E},\leavevmode\nobreak\ v_{x}(t,x,y=0)=v_{z}(t,x,y=0)=0,$ (5) $\displaystyle T(t,x,y=0)=2\times 10^{5}\mathrm{\leavevmode\nobreak\ K},\leavevmode\nobreak\ \rho(t,x,y=0)=1.67\times 10^{-12}\mathrm{\leavevmode\nobreak\ kg\leavevmode\nobreak\ m^{-3}}.$ (6) Here $v_{E}=2a/\tau_{E}$ is the emerging velocity; $\psi_{i}$ is the magnetic flux function of the initial state, which is determined by numerical means (see Eq. 17 in Appendix A). It is noteworthy that $\psi$ at the lower base is fixed at $\psi_{i}$ except during the emergence of the small rope, so that the normal component of the photospheric magnetic field is unchanged, implying that the lower base corresponds to the photosphere (Lin et al., 2003). The parameter $C_{E}$ determines the magnetic field strength of the small rope; its dimensionless values given in this paper are in units of $0.373\times 10^{10}\mathrm{\leavevmode\nobreak\ Mx\leavevmode\nobreak\ cm^{-1}}$. In both the small and the major ropes, $B_{z}$ is positive and the component of the magnetic field in the $x$-$y$ plane is counterclockwise.
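The driving profiles of Eqs. (2)-(6) are simple to evaluate. The sketch below (our own illustration in arbitrary units with $a=1$ and $C_{E}=2.23$; not the production boundary code) checks the geometry they encode: the emerged footprint half-width $x_{E}$ grows from $0$ to $a$ and back to $0$ over $0\leqslant t\leqslant\tau_{E}$, and $B_{z}$ on the axis peaks at $t=\tau_{E}/2$, when the small-rope axis crosses the base.

```python
import math

# Sketch of the boundary driving of Eqs. (2)-(6), arbitrary units.
# A_ROPE (small-rope radius a), C_E, and TAU_E are illustrative stand-ins.
A_ROPE, C_E, TAU_E = 1.0, 2.23, 60.0

def h_E(t):
    # axis height of the small rope relative to the base: h_E = a(2t/tau_E - 1)
    return A_ROPE * (2.0 * t / TAU_E - 1.0)

def x_E(t):
    # half-width of the emerged footprint: x_E = sqrt(a^2 - h_E^2)
    return math.sqrt(max(A_ROPE**2 - h_E(t)**2, 0.0))

def psi_E(t, x):
    # Eq. (3): flux function added at the base
    return 0.5 * C_E * math.log(2.0 * A_ROPE**2 /
                                (A_ROPE**2 + x**2 + h_E(t)**2))

def B_z(t, x):
    # Eq. (4): axial field at the base
    return C_E * A_ROPE / (A_ROPE**2 + x**2 + h_E(t)**2)

# footprint closed at t = 0 and t = tau_E, widest (x_E = a) at t = tau_E / 2;
# B_z and psi_E on the axis (x = 0) are largest at t = tau_E / 2, where h_E = 0
```

This makes explicit why the driving is transient: at $t=\tau_{E}$ the footprint closes and Eq. (2) reverts to the fixed $\psi_{i}$, consistent with the line-tied photospheric base.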
Anomalous resistivity is used in our simulations so as to restrict magnetic reconnection to the regions of current sheets: $\displaystyle\eta=\begin{cases}0,&\leavevmode\nobreak\ j\leq j_{c}\\\ \eta_{m}\mu_{0}v_{0}L_{0}(\frac{j}{j_{c}}-1)^{2},&\leavevmode\nobreak\ j>j_{c}\end{cases}$ (7) Here $\eta_{m}=9.95\times 10^{-2}$, $L_{0}=10^{7}$ m, and $v_{0}=128.57$ km s$^{-1}$; $\mu_{0}$ is the vacuum magnetic permeability; the critical current density is $j_{c}=4.45\times 10^{-4}$ A m$^{-2}$. ## 3 Simulation results ### 3.1 Flux feeding process With the procedures introduced above, we are able to simulate flux feeding processes in HFT configurations. The simulation results of a typical flux feeding process with $C_{E}=2.23$ are illustrated in Figs. 1(b)-1(i); different colors are used to depict the magnetic field lines corresponding to different values of the flux function $\psi$, so as to clearly demonstrate the interaction between the major rope and the emerging small rope in detail. Figure 2: Eruptive process of the resultant flux rope after the flux feeding process with $C_{E}=2.23$. Panels (a)-(h) illustrate the magnetic configurations at different times, and the blue color depicts the distribution of the current density. Panel (i) plots the evolution of the height of the rope axis, and the times of panels (a)-(h) are marked by the vertical dotted lines in panel (i). The green curves in panels (a)-(h) mark the outer boundaries of the flux ropes and the plasmoids. At the beginning of the flux feeding process, the small rope starts to emerge from the lower boundary, as marked by the brown curve in Fig. 1(b). The arcades below the major rope in the initial state are then pushed upward by the emerging small rope, so that the top boundary of these arcades reaches the lower boundary of the major rope (as shown by the green curves in Fig. 1(c)), and then these arcades reconnect with the magnetic fields of the major rope (Fig. 1(d)). A comparison between Fig. 1(d) and Fig.
1(b) shows that the outer section of the major rope is peeled off by the reconnection. After all the arcades have reconnected with the major rope, the small rope itself also reaches the major rope. During the first half ($0\sim\tau_{E}/2=30\tau_{A}$) of the flux feeding process, what emerges from the lower base is the top half of the small rope, in which the direction of $B_{x}$ is opposite to that in the bottom half of the major rope. Thus a horizontal current sheet forms at the interface between the two ropes. This current sheet can be clearly recognized in Fig. 2(a), in which the blue color depicts the distribution of the current density. As a result of the reconnection within this current sheet, the two ropes gradually merge, as shown by, for example, the magnetic field lines plotted in red and pink in Figs. 1(e)-1(f). The magnetic cancellation occurring in this current sheet should decrease the local magnetic pressure below the major rope, so that the major rope gradually descends during the early period of the flux feeding process (see Figs. 1(f) and 1(g), and the height profile of the major rope axis plotted in Fig. 2(i)). During the second half ($30\tau_{A}\sim 60\tau_{A}$) of the flux feeding process, however, it is the bottom half of the small rope that emerges from the lower base; the direction of the magnetic field lines in the small rope is now the same as that in the bottom half of the major rope, so there is no further reconnection. Therefore, after $t=30\tau_{A}$, magnetic flux is injected from the lower base, so that magnetic pressure accumulates locally below the major rope, which causes the major rope to gradually rise, as shown in Figs. 1(g)-1(h). Eventually, the flux feeding process ends at $t=60\tau_{A}$. The configuration of the resultant flux rope after flux feeding is illustrated in Fig. 1(i): the rope sticks to the photosphere, without any arcade below it.
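The reconnection episodes above stay confined to current sheets because of the anomalous resistivity of Eq. (7): $\eta$ is identically zero below the critical current density $j_{c}$ and switches on quadratically above it. A minimal transcription of Eq. (7), using the constants quoted in Sect. 2 (SI units):

```python
import math

MU0 = 4.0e-7 * math.pi      # vacuum permeability mu_0 (SI)
ETA_M = 9.95e-2             # dimensionless amplitude eta_m
L0 = 1.0e7                  # length unit L_0 (m)
V0 = 128.57e3               # velocity unit v_0 (m/s)
J_C = 4.45e-4               # critical current density j_c (A m^-2)

def eta(j):
    """Anomalous resistivity of Eq. (7): zero in smooth regions,
    quadratic in (j / j_c - 1) inside current sheets."""
    if j <= J_C:
        return 0.0
    return ETA_M * MU0 * V0 * L0 * (j / J_C - 1.0) ** 2
```

Both branches give zero at $j=j_{c}$, so $\eta$ is continuous there and switching it on introduces no jump in the induction equation.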
### 3.2 Eruption caused by flux feeding Further evolution of the resultant major rope after flux feeding indicates that the major rope is caused to erupt by this flux feeding process with $C_{E}=2.23$. As demonstrated by Fig. 2, the major rope keeps rising after flux feeding. Due to the line-tying effect of the photosphere, the rope does not separate from the lower base immediately; its lower boundary sticks to the photosphere for a certain period (Fig. 2(b)). As the rope rises, the lower section of the flux rope and the ambient background field are stretched. Eventually, as shown in Figs. 2(c)-2(d), the rope is fully detached from the lower base, and magnetic reconnection occurs within the vertical current sheet formed below the rope. The reconnection results in closed arcades below the rope, which can be clearly recognized in the lower section of Fig. 2(d). Moreover, as shown in Fig. 2(e), a plasmoid appears within the current sheet below the rope, and then rises and merges with the major rope (Fig. 2(f)), similar to the observations and simulations in Gou et al. (2019). This process is followed by another similar one, as shown in Figs. 2(g)-2(h). These plasmoids should result from the tearing mode instability within the current sheet, which has been investigated in detail in many previous studies (e.g., Bárta et al., 2008; Shen et al., 2011; Ni et al., 2012; Huang et al., 2017). The height of the major rope axis versus time is plotted in Fig. 2(i). As explicated in Sect. 3.1, due to the magnetic cancellation during the early period of the flux feeding process, the major flux rope gradually descends, accompanied by the contraction of the background arcades above the major rope (see Figs. 1(f) and 1(g)).
This kind of downward movement before the onset of the eruption is often considered a signature of "coronal implosion" (Hudson, 2000), which has been reported in many previous observational studies (e.g., Ji et al., 2004; Veronig et al., 2006; Joshi et al., 2009; Liu et al., 2009; Li et al., 2017). For example, Liu et al. (2009) observed the large-scale contraction of the overlying coronal loops before the onset of the flare, which was attributed to the prolonged preheating phase dominated by coronal thermal emissions. Our simulation results suggest another scenario for coronal implosions: internal magnetic cancellation during the early period of the flux feeding process in HFT configurations decreases the local magnetic pressure below the major flux rope, resulting in the downward movement of the rope before the onset of its eruption. For comparison, the height profile of the flux rope eruption caused by flux feeding in BPS configurations is monotonic, without any downward movement before the eruption (see Paper I). It has been suggested in many previous studies that solar eruptions not only could be initiated by the gradual variations of photospheric magnetic fields, but also might in turn cause rapid magnetic field changes in the photosphere (Wang et al., 1994; Liu et al., 2005; Wang & Liu, 2015; Toriumi & Wang, 2019). The issue of this kind of photospheric response to coronal eruptions is graphically called the "tail wags the dog" problem (Aulanier, 2016). This phenomenon has been observed in many previous studies (e.g., Wang et al., 2012; Liu et al., 2012; Petrie, 2012; Sun et al., 2017; Castellanos Durán et al., 2018). For example, by measuring various magnetic parameters of a compact region at the central polarity inversion line (PIL) of an active region, Sun et al. (2017) found that the horizontal component of the photospheric magnetic field in this compact region increased markedly after the flare. In our simulation results, Fig.
3 shows the photospheric distribution of the horizontal magnetic field, $B_{x}$, in the central region right below the major rope (we note that $B_{z}$ is zero in the background field). In the initial state, as shown in Fig. 3(e), $B_{x}$ at the photosphere is negative, consistent with the arcades below the rope in the initial state. During the eruptive process of the resultant major rope after flux feeding, closed arcades form right below the rising rope, which also results in negative photospheric $B_{x}$, as shown in Fig. 3(f). Moreover, as the rope rises, more and more field lines are closed by reconnection and pile up above the PIL. As a result of the reconnection-driven contraction of the flare loops, the horizontal photospheric magnetic field $B_{x}$ increases accordingly (see Figs. 3(f)-3(h)), which is consistent with the simulation results in Barczynski et al. (2019). Comparing the photospheric $B_{x}$ at $t=200\tau_{A}$ with that in the initial state, the average strength of the photospheric $B_{x}$ after the eruption increases from 6.27 G to 8.50 G. Figure 3: Variation of the photospheric magnetic field after the eruption. Panels (a)-(d) illustrate the magnetic configurations in the central region above the PIL, and panels (e)-(h) plot the corresponding horizontal component of the photospheric magnetic fields, $B_{x}$. The times marked in panels (b)-(d) are in units of $\tau_{A}$. Moreover, as shown in Fig. 3, the magnetic field strength is of the order of 10 G, indicating that coronal eruptions originating in quiescent regions could also produce the "tail wags the dog" phenomenon, that is, such eruptions could also cause variations of the photospheric magnetic fields. ## 4 Initiation of Eruptions In order to investigate how the eruption is initiated, we first investigate the influence of flux feeding on the major flux rope.
The magnetic properties of a flux rope can be characterized by its axial magnetic flux, $\Phi_{z}$, and its poloidal magnetic flux per unit length along the $z$-direction, $\Phi_{p}$. For the initial major rope shown in Fig. 1(a), the axial flux is $\Phi_{z0}=4.366\times 10^{19}\leavevmode\nobreak\ \mathrm{Mx}$, and the poloidal flux is $\Phi_{p0}=1.186\times 10^{10}\leavevmode\nobreak\ \mathrm{Mx}\leavevmode\nobreak\ \mathrm{cm}^{-1}$. Here we select the flux rope at $t=80\tau_{A}$ (Fig. 1(i)) as the resultant rope after flux feeding, whose axial flux is $\Phi_{z}=7.115\times 10^{19}\leavevmode\nobreak\ \mathrm{Mx}$ and poloidal flux is $\Phi_{p}=1.116\times 10^{10}\leavevmode\nobreak\ \mathrm{Mx}\leavevmode\nobreak\ \mathrm{cm}^{-1}$. The increased axial flux is injected by the flux feeding process, similar to the BPS cases investigated in Paper I. Different from the result in the BPS cases, however, the poloidal flux is reduced by $0.070\times 10^{10}\leavevmode\nobreak\ \mathrm{Mx}\leavevmode\nobreak\ \mathrm{cm}^{-1}$ after flux feeding. Since the distribution of the coronal and the photospheric magnetic fields plays a dominant role in triggering coronal flux rope eruptions (Yeates, 2014; Yang et al., 2018; Thalmann et al., 2019; Xing et al., 2020), the influence of flux feeding processes on the major rope system should be sensitive to the scale of the magnetic field strength in the small rope. The magnetic parameters of the resultant flux ropes after the flux feeding processes with different $C_{E}$ are tabulated in Table 1. Larger $C_{E}$ implies stronger magnetic field strength in the small emerging rope, so that more axial flux is injected into the major rope.
The reductions of the poloidal flux in the different cases, however, are very close to each other: $\Delta\Phi_{p}=0.069\sim 0.070\times 10^{10}\leavevmode\nobreak\ \mathrm{Mx}\leavevmode\nobreak\ \mathrm{cm}^{-1}$, which is almost the same as the magnetic flux of the arcades below the rope in the initial state, $\Phi_{a}$. This indicates that the reduced poloidal flux should be caused by the interaction between the major rope and the arcades below the rope in the HFT configuration, which has been introduced in Sect. 3.1: pushed by the rising small flux rope, these arcades reconnect with the major rope, so that the outer section of the major rope is peeled off.

Table 1: The parameters of the resultant flux ropes after the flux feeding processes with different $C_{E}$.

$C_{E}$ | $\Phi_{z}$ ($10^{19}$ Mx) | $\Phi_{p}$ ($10^{10}\leavevmode\nobreak\ \mathrm{Mx}\leavevmode\nobreak\ \mathrm{cm}^{-1}$) | Erupt or not
---|---|---|---
Initial | 4.366 | 1.186 | N
1.80 | 5.431 | 1.116 | N
1.90 | 5.684 | 1.116 | N
2.00 | 5.990 | 1.117 | N
2.10 | 6.426 | 1.117 | N
2.20 | 6.929 | 1.117 | N
2.22 | 7.051 | 1.117 | N
2.23 | 7.115 | 1.116 | Y
2.30 | 7.562 | 1.116 | Y
2.40 | 8.281 | 1.116 | Y

$\Phi_{z}$ and $\Phi_{p}$ are the axial flux and the poloidal magnetic flux per unit length along the $z$-direction of the major rope, respectively. In the cases with different $C_{E}$, the further evolution of the resultant rope also varies, as tabulated in the last column of Table 1. The eruptive and the non-eruptive cases are well separated: the resultant rope erupts if $C_{E}$ is no smaller than 2.23, and the axial fluxes of the resultant ropes in all the eruptive cases are larger than those in the non-eruptive cases. This indicates that there should exist a critical axial magnetic flux of the resultant rope, and the onset of its eruption should be initiated by the upward catastrophe associated with this critical axial flux (e.g., Su et al., 2011; Zhang et al., 2016).
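The critical-flux estimate can be read off Table 1 mechanically: the threshold must lie between the largest non-eruptive and the smallest eruptive axial flux. A trivial sketch (rows copied from Table 1, $\Phi_{z}$ in units of $10^{19}$ Mx):

```python
# (C_E, axial flux Phi_z in 10^19 Mx, erupts?) -- rows of Table 1
cases = [
    (1.80, 5.431, False), (1.90, 5.684, False), (2.00, 5.990, False),
    (2.10, 6.426, False), (2.20, 6.929, False), (2.22, 7.051, False),
    (2.23, 7.115, True),  (2.30, 7.562, True),  (2.40, 8.281, True),
]

lo = max(phi for _, phi, erupts in cases if not erupts)  # largest non-eruptive
hi = min(phi for _, phi, erupts in cases if erupts)      # smallest eruptive

# the critical axial flux lies in the bracket (7.051, 7.115] x 10^19 Mx
```

Narrowing the bracket further would require additional runs between $C_{E}=2.22$ and $C_{E}=2.23$; the value $\sim 7.1\times 10^{19}$ Mx quoted below is the midpoint-level precision the table supports.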
The value of the critical axial flux is of the order of $7.1\times 10^{19}\leavevmode\nobreak\ \mathrm{Mx}$. The simulation results of the critical non-eruptive case with $C_{E}=2.22$ are illustrated in Fig. 4: the resultant rope does not keep rising after flux feeding (Figs. 4(b)-4(c)), but falls back to the photosphere (Figs. 4(d)-4(e)), resulting in an equilibrium state in which the rope sticks to the photosphere (Fig. 4(f)). Therefore, if the injected axial flux is not large enough that the critical axial flux of the major rope is reached, not only is the flux feeding process unable to cause the major rope to erupt, but the major rope system is also transformed from an HFT configuration into a stable BPS configuration. This result might also suggest a possible scenario for confined flares in HFT configurations. Figure 4: Simulation results of the critical non-eruptive case with $C_{E}=2.22$. The meanings of the symbols are the same as those in Fig. 2. As demonstrated in Fig. 2, obvious magnetic reconnection occurs in the vertical current sheet below the rising major flux rope. Here we simulate a specific case to distinguish between the role that reconnection plays and the role that the upward catastrophe plays in initiating the flux rope eruption: $C_{E}$ is again 2.23, the same as in the eruptive case shown in Fig. 2, but magnetic reconnection in the current sheet below the rope is prohibited. To achieve this, simulating procedures similar to those of, for example, Zhang et al. (2016) are used here: after $t=80\tau_{A}$, first, the resistivity is set to zero; second, the flux function $\psi$ along the entire vertical current sheet (if it exists) below the major rope is reassigned the initial value $\psi_{c}=\psi_{i}(x=0,\leavevmode\nobreak\ y=0)$ at each time step, so that $\psi$ along the current sheet is invariant. With these procedures, both numerical and physical magnetic reconnection below the major rope is prohibited.
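The two-step suppression procedure above can be sketched schematically (our own simplified grid code with hypothetical array names, not the paper's implicit-scheme implementation): at every time step, zero the resistivity along the sheet and re-pin $\psi$ on the $x=0$ column below the rope to its initial value $\psi_{c}$.

```python
def suppress_reconnection(psi, eta, i0, j_rope_base, psi_c):
    """Schematic version of the reconnection-suppression step.

    psi, eta : 2-D arrays (lists of columns), indexed [i][j] with i along x
               and j along y; i0 is the x = 0 column; j_rope_base is the
               grid index of the rope's lower boundary (both hypothetical).
    """
    for j in range(j_rope_base):   # every cell of the vertical sheet below the rope
        psi[i0][j] = psi_c         # hold the flux function at its initial value
        eta[i0][j] = 0.0           # forbid any (numerical or physical) diffusion
    return psi, eta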
Figure 5: Simulation results of the critical eruptive case with $C_{E}=2.23$, in which magnetic reconnection below the major rope is prohibited. The meanings of the symbols are the same as those in Fig. 2. The red dotted line in panel (g) is the corresponding eruptive profile with reconnection, which is the same as that in Fig. 2(i).

The simulation results for this specific case are illustrated in Fig. 5. After flux feeding, the upward catastrophe occurs and the resultant rope rises for a while (Figs. 5(a)-5(c)), but then stops rising; eventually, the rope does not erupt but levitates in the corona (Figs. 5(d)-5(f)). Without magnetic reconnection, the closed background arcades above the major rope always remain rooted at the photosphere. These closed arcades constrain the major rope and thus prevent it from rising further. For comparison, in the case with reconnection demonstrated in Fig. 2, the background arcades above the rope reconnect at the vertical current sheet below the rope, so that the “tether” constraining the rising major rope is removed. As suggested by Green et al. (2018), different eruptive mechanisms can be classified as “triggers” and “drivers” according to their predominant role in the eruption process. In our simulation results, the flux feeding process causes the upward catastrophe, which triggers the flux rope eruption. Magnetic reconnection, as well as the further evolution of the upward catastrophe, acts as the driver of the flux rope eruption.

## 5 Discussion and conclusion

In this paper, we investigate the influence of flux feeding on coronal magnetic flux ropes with an HFT configuration, especially how flux feeding causes the flux rope to erupt.
During the flux feeding process, both the small emerging flux rope and the arcades below the major rope with the HFT configuration reconnect with the pre-existing major flux rope, so that axial magnetic flux is injected into the major rope, whereas the poloidal magnetic flux of the major rope is reduced. Flux feeding is able to cause the major rope to erupt, provided that the amount of axial flux injected by flux feeding is large enough that the total axial flux of the major rope reaches its critical value, similar to the BPS cases investigated in Paper I. During the early period of the flux feeding process, the major rope gradually descends due to magnetic cancellation, demonstrating a downward movement of the flux rope right before its eruption, which suggests an alternative theoretical scenario for coronal implosions. Moreover, our simulations reproduce the photospheric magnetic response to the eruption of the coronal flux rope: the horizontal component of the photospheric magnetic field increases around the PIL after the eruption, as is often observed in the wake of major solar eruptions. Although the onset of the eruption in both the BPS and the HFT cases is dominated by the upward catastrophe associated with the critical axial flux, the influence of flux feeding on flux ropes with an HFT configuration differs from that on ropes with a BPS configuration. In BPS cases, the major flux rope sticks to the photosphere, so that the small rope interacts directly with the major rope from the very beginning of the flux feeding process, and the major rope reconnects only with the small emerging rope during the flux feeding process; the non-eruptive resultant rope after flux feeding still has a BPS configuration. In HFT cases, there exists an additional interaction between the major rope and the arcades below it in the initial state.
Consequently, not only is the poloidal flux of the major rope reduced by the reconnection with these arcades, but the HFT configuration also collapses after flux feeding. The flux feeding in HFT configurations results in a dramatic change of topology, with the flux rope either erupting or falling back to a BPS configuration. Moreover, previous theoretical studies found that a flux rope could erupt if either its axial or poloidal magnetic flux exceeds the corresponding critical value. In BPS cases, each episode of flux feeding injects axial flux into the flux rope while maintaining its poloidal flux, thereby pushing the rope system one step closer to eruption, no matter how small the amount of injected axial flux is. In HFT cases, however, if the injected axial flux is too small, the reduction in poloidal flux could push the rope system even farther away from the onset of the eruption; moreover, the rope system may also evolve from a metastable HFT configuration to a stable BPS configuration after flux feeding. Therefore, the flux feeding process in HFT configurations does not always favor the eruption of the major rope; it can even stabilize the rope system. As introduced in Sect. 4, flux feeding cases with different $C_{E}$ are simulated in this paper. The emerging velocity of the small rope, however, is unchanged across the cases. Since the emerging velocity might also influence the evolution of the resultant rope, we will simulate flux feeding cases with various emerging velocities in our future work, so as to investigate the role that the kinetic properties of flux feeding processes play in initiating coronal flux rope eruptions.

###### Acknowledgements.

We thank Prof. Jie Zhang for his insightful suggestions and comments.
This research is supported by the National Natural Science Foundation of China (NSFC 41804161, 41774178, 41761134088, 41774150, 41842037 and 41574165), the Strategic Priority Program of CAS (XDB41000000 and XDA15017300), and the Fundamental Research Funds for the Central Universities. We acknowledge the support from the National Space Science Data Center, National Science & Technology Infrastructure of China (www.nssdc.ac.cn).

## References

* Antiochos et al. (1999) Antiochos, S. K., DeVore, C. R., & Klimchuk, J. A. 1999, ApJ, 510, 485
* Archontis & Hood (2008) Archontis, V. & Hood, A. W. 2008, ApJ, 674, L113
* Aulanier (2016) Aulanier, G. 2016, Nature Physics, 12, 998
* Aulanier et al. (2005) Aulanier, G., Pariat, E., & Démoulin, P. 2005, A&A, 444, 961
* Aulanier et al. (2010) Aulanier, G., Török, T., Démoulin, P., & DeLuca, E. E. 2010, ApJ, 708, 314
* Barczynski et al. (2019) Barczynski, K., Aulanier, G., Masson, S., & Wheatland, M. S. 2019, ApJ, 877, 67
* Bárta et al. (2008) Bárta, M., Vršnak, B., & Karlický, M. 2008, A&A, 477, 649
* Benz (2008) Benz, A. O. 2008, Living Reviews in Solar Physics, 5, 1
* Bobra et al. (2008) Bobra, M. G., van Ballegooijen, A. A., & DeLuca, E. E. 2008, ApJ, 672, 1209
* Castellanos Durán et al. (2018) Castellanos Durán, J. S., Kleint, L., & Calvo-Mozo, B. 2018, ApJ, 852, 25
* Chen (2011) Chen, P. F. 2011, Living Reviews in Solar Physics, 8, 1
* Chen & Shibata (2000) Chen, P. F. & Shibata, K. 2000, ApJ, 545, 524
* Chen et al. (2007) Chen, Y., Hu, Y. Q., & Sun, S. J. 2007, ApJ, 665, 1421
* Cheng et al. (2014) Cheng, X., Ding, M. D., Zhang, J., et al. 2014, ApJ, 789, 93
* Chintzoglou et al. (2017) Chintzoglou, G., Vourlidas, A., Savcheva, A., et al. 2017, ApJ, 843, 93
* Démoulin & Aulanier (2010) Démoulin, P. & Aulanier, G. 2010, ApJ, 718, 1388
* Fan & Gibson (2007) Fan, Y. & Gibson, S. E. 2007, ApJ, 668, 1232
* Filippov (2013) Filippov, B. 2013, Sol. Phys., 283, 401
* Forbes & Isenberg (1991) Forbes, T. G. & Isenberg, P. A. 1991, ApJ, 373, 294
* Galsgaard et al. (2003) Galsgaard, K., Titov, V. S., & Neukirch, T. 2003, ApJ, 595, 506
* Gibson & Fan (2006) Gibson, S. E. & Fan, Y. 2006, ApJ, 637, L65
* Gopalswamy et al. (2018) Gopalswamy, N., Akiyama, S., Yashiro, S., & Xie, H. 2018, Journal of Atmospheric and Solar-Terrestrial Physics, 180, 35
* Gou et al. (2019) Gou, T., Liu, R., Kliem, B., Wang, Y., & Veronig, A. M. 2019, Science Advances, 5, 7004
* Green et al. (2018) Green, L. M., Török, T., Vršnak, B., Manchester, W., & Veronig, A. 2018, Space Sci. Rev., 214, 46
* Guo et al. (2010) Guo, Y., Ding, M. D., Schmieder, B., et al. 2010, ApJ, 725, L38
* Hu (1989) Hu, Y. Q. 1989, Journal of Computational Physics, 84, 441
* Hu (2001) Hu, Y. Q. 2001, Solar Physics, 200, 115
* Hu et al. (1995) Hu, Y. Q., Li, X., & Ai, G. X. 1995, ApJ, 451, 843
* Huang et al. (2017) Huang, Y.-M., Comisso, L., & Bhattacharjee, A. 2017, ApJ, 849, 75
* Hudson (2000) Hudson, H. S. 2000, ApJ, 531, L75
* Inoue et al. (2015) Inoue, S., Hayashi, K., Magara, T., Choe, G. S., & Park, Y. D. 2015, ApJ, 803, 73
* Isenberg et al. (1993) Isenberg, P. A., Forbes, T. G., & Demoulin, P. 1993, ApJ, 417, 368
* Ji et al. (2004) Ji, H., Wang, H., Goode, P. R., Jiang, Y., & Yurchyshyn, V. 2004, ApJ, 607, L55
* Jiang et al. (2018) Jiang, C., Feng, X., & Hu, Q. 2018, ApJ, 866, 96
* Joshi et al. (2009) Joshi, B., Veronig, A., Cho, K. S., et al. 2009, ApJ, 706, 1438
* Kliem et al. (2014) Kliem, B., Lin, J., Forbes, T. G., Priest, E. R., & Török, T. 2014, ApJ, 789, 46
* Kliem & Török (2006) Kliem, B. & Török, T. 2006, Physical Review Letters, 96, 255002
* Li & Hu (2002) Li, G. & Hu, Y. 2002, Science in China A: Mathematics, 45, 65
* Li et al. (2017) Li, Y., Sun, X., Ding, M. D., Qiu, J., & Priest, E. R. 2017, ApJ, 835, 190
* Lin et al. (2001) Lin, J., Forbes, T. G., & Isenberg, P. A. 2001, J. Geophys. Res., 106, 25053
* Lin et al. (2003) Lin, J., Soon, W., & Baliunas, S. L. 2003, New A Rev., 47, 53
* Liu et al. (2012) Liu, C., Deng, N., Liu, R., et al. 2012, ApJ, 745, L4
* Liu et al. (2005) Liu, C., Deng, N., Liu, Y., et al. 2005, ApJ, 622, 722
* Liu et al. (2018) Liu, L., Cheng, X., Wang, Y., et al. 2018, ApJ, 867, L5
* Liu (2020) Liu, R. 2020, arXiv e-prints, arXiv:2007.11363
* Liu et al. (2009) Liu, R., Wang, H., & Alexander, D. 2009, ApJ, 696, 121
* Longcope & Forbes (2014) Longcope, D. W. & Forbes, T. G. 2014, Sol. Phys., 289, 2091
* Lugaz et al. (2017) Lugaz, N., Farrugia, C. J., Winslow, R. M., et al. 2017, ApJ, 848, 75
* Moore et al. (2001) Moore, R. L., Sterling, A. C., Hudson, H. S., & Lemen, J. R. 2001, ApJ, 552, 833
* Ni et al. (2012) Ni, L., Roussev, I. I., Lin, J., & Ziegler, U. 2012, ApJ, 758, 20
* Parenti (2014) Parenti, S. 2014, Living Reviews in Solar Physics, 11, 1
* Petrie (2012) Petrie, G. J. D. 2012, ApJ, 759, 50
* Romano et al. (2003) Romano, P., Contarino, L., & Zuccarello, F. 2003, Sol. Phys., 214, 313
* Savcheva et al. (2012) Savcheva, A. S., van Ballegooijen, A. A., & DeLuca, E. E. 2012, ApJ, 744, 78
* Shen et al. (2011) Shen, C., Lin, J., & Murphy, N. A. 2011, ApJ, 737, 14
* Shen et al. (2014) Shen, F., Shen, C., Zhang, J., et al. 2014, Journal of Geophysical Research (Space Physics), 119, 7128
* Sterling & Moore (2004) Sterling, A. C. & Moore, R. L. 2004, ApJ, 602, 1024
* Su et al. (2011) Su, Y., Surges, V., van Ballegooijen, A., DeLuca, E., & Golub, L. 2011, ApJ, 734, 53
* Sun et al. (2017) Sun, X., Hoeksema, J. T., Liu, Y., Kazachenko, M., & Chen, R. 2017, ApJ, 839, 67
* Thalmann et al. (2019) Thalmann, J. K., Linan, L., Pariat, E., & Valori, G. 2019, ApJ, 880, L6
* Titov & Démoulin (1999) Titov, V. S. & Démoulin, P. 1999, A&A, 351, 707
* Titov et al. (2003) Titov, V. S., Galsgaard, K., & Neukirch, T. 2003, ApJ, 582, 1172
* Titov et al. (1993) Titov, V. S., Priest, E. R., & Demoulin, P. 1993, A&A, 276, 564
* Toriumi & Wang (2019) Toriumi, S. & Wang, H. 2019, Living Reviews in Solar Physics, 16, 3
* Török & Kliem (2003) Török, T. & Kliem, B. 2003, A&A, 406, 1043
* Švestka (2001) Švestka, Z. 2001, Space Sci. Rev., 95, 135
* van Driel-Gesztelyi & Green (2015) van Driel-Gesztelyi, L. & Green, L. M. 2015, Living Reviews in Solar Physics, 12, 1
* Veronig et al. (2006) Veronig, A. M., Karlický, M., Vršnak, B., et al. 2006, A&A, 446, 675
* Vršnak et al. (2005) Vršnak, B., Sudar, D., & Ruždjak, D. 2005, A&A, 435, 1149
* Wang et al. (1994) Wang, H., Ewell, M. W., Jr., Zirin, H., & Ai, G. 1994, ApJ, 424, 436
* Wang & Liu (2015) Wang, H. & Liu, C. 2015, Research in Astronomy and Astrophysics, 15, 145
* Wang et al. (2012) Wang, S., Liu, C., & Wang, H. 2012, ApJ, 757, L5
* Xing et al. (2020) Xing, C., Cheng, X., Qiu, J., et al. 2020, ApJ, 889, 125
* Yan et al. (2020) Yan, X., Xue, Z., Cheng, X., et al. 2020, ApJ, 889, 106
* Yang et al. (2018) Yang, S., Büchner, J., Skála, J., & Zhang, H. 2018, A&A, 613, A27
* Yeates (2014) Yeates, A. R. 2014, Sol. Phys., 289, 631
* Zhang et al. (2001) Zhang, J., Dere, K. P., Howard, R. A., Kundu, M. R., & White, S. M. 2001, ApJ, 559, 452
* Zhang et al. (2014) Zhang, Q., Liu, R., Wang, Y., et al. 2014, ApJ, 789, 133
* Zhang et al. (2016) Zhang, Q., Wang, Y., Hu, Y., & Liu, R. 2016, ApJ, 825, 109
* Zhang et al. (2017) Zhang, Q., Wang, Y., Hu, Y., Liu, R., & Liu, J. 2017, ApJ, 835, 211
* Zhang et al. (2020) Zhang, Q., Wang, Y., Liu, R., et al. 2020, ApJ, 898, L12
* Zhuang et al. (2018) Zhuang, B., Hu, Y., Wang, Y., et al. 2018, Journal of Geophysical Research (Space Physics), 123, 2513

## Appendix A Basic equations and initial preparations

Combining the 2.5-dimensional form of the magnetic field (Eq.
1), we can write the MHD equations in non-dimensional form as:

$\displaystyle\frac{\partial\rho}{\partial t}+\nabla\cdot(\rho\textbf{\emph{v}})=0,$ (8)

$\displaystyle\frac{\partial\textbf{\emph{v}}}{\partial t}+\frac{2}{\rho\beta_{0}}(\nabla^{2}\psi\,\nabla\psi+B_{z}\nabla B_{z}+\nabla\psi\times\nabla B_{z})+\textbf{\emph{v}}\cdot\nabla\textbf{\emph{v}}+\nabla T+\frac{T}{\rho}\nabla\rho+g\hat{\textbf{\emph{y}}}=0,$ (9)

$\displaystyle\frac{\partial\psi}{\partial t}+\textbf{\emph{v}}\cdot\nabla\psi-\frac{2\eta}{\beta_{0}}\nabla^{2}\psi=0,$ (10)

$\displaystyle\frac{\partial B_{z}}{\partial t}+\nabla\cdot(B_{z}\textbf{\emph{v}})+(\nabla\psi\times\nabla v_{z})\cdot\hat{\textbf{\emph{z}}}-\frac{2\eta}{\beta_{0}}\nabla^{2}B_{z}=0,$ (11)

$\displaystyle\frac{\partial T}{\partial t}-\frac{4\eta(\gamma-1)}{\rho R\beta_{0}^{2}}\left[(\nabla^{2}\psi)^{2}+|\nabla\times(B_{z}\hat{\textbf{\emph{z}}})|^{2}\right]+\textbf{\emph{v}}\cdot\nabla T+(\gamma-1)T\nabla\cdot\textbf{\emph{v}}=0,$ (12)

where

$\displaystyle\nabla^{2}\psi=\frac{\partial^{2}\psi}{\partial x^{2}}+\frac{\partial^{2}\psi}{\partial y^{2}},\quad\nabla^{2}B_{z}=\frac{\partial^{2}B_{z}}{\partial x^{2}}+\frac{\partial^{2}B_{z}}{\partial y^{2}}.$ (13)

Here $\rho$ and $T$ denote the density and the temperature; $v_{x}$, $v_{y}$, and $v_{z}$ represent the $x$-, $y$-, and $z$-components of the velocity, respectively; $\gamma=5/3$ is the polytropic index in our simulation; $g$ is the normalized gravity; and $\eta$ is the resistivity.
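As a quick numerical illustration (not the paper's actual scheme), the two-dimensional Laplacian in Eq. (13) can be evaluated with second-order central differences on a uniform mesh, as used throughout the discretized equations above:

```python
import numpy as np

def laplacian(f, dx, dy):
    """Second-order central-difference Laplacian on a uniform 2-D mesh
    (interior points only; boundary values are left at zero)."""
    lap = np.zeros_like(f)
    lap[1:-1, 1:-1] = (
        (f[1:-1, 2:] - 2 * f[1:-1, 1:-1] + f[1:-1, :-2]) / dx**2
        + (f[2:, 1:-1] - 2 * f[1:-1, 1:-1] + f[:-2, 1:-1]) / dy**2
    )
    return lap

# Check on psi = x^2 + y^2, whose exact Laplacian is 4 everywhere
# (central differences are exact for quadratics).
x = np.linspace(0.0, 1.0, 101)
y = np.linspace(0.0, 1.0, 101)
X, Y = np.meshgrid(x, y)  # X varies along axis 1, Y along axis 0
lap = laplacian(X**2 + Y**2, x[1] - x[0], y[1] - y[0])
print(bool(np.allclose(lap[1:-1, 1:-1], 4.0)))
```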
Here $\beta_{0}=2\mu_{0}\rho_{0}RT_{0}L_{0}^{2}/\psi_{0}^{2}=0.1$ is the characteristic ratio of the gas pressure to the magnetic pressure, where $\rho_{0}=3.34\times 10^{-13}\ \mathrm{kg\ m^{-3}}$, $T_{0}=10^{6}\ \mathrm{K}$, $L_{0}=10^{7}\ \mathrm{m}$, and $\psi_{0}=3.73\times 10^{3}\ \mathrm{Wb\ m^{-1}}$ are the characteristic values of the density, temperature, length, and magnetic flux function, respectively, which also serve as the normalization units in the simulation. For the other quantities, the corresponding characteristic values are $v_{0}=128.57\ \mathrm{km\ s^{-1}}$, $t_{0}=77.8\ \mathrm{s}$, $B_{0}=3.37\times 10^{-4}\ \mathrm{T}$, and $g_{0}=1.65\times 10^{3}\ \mathrm{m\ s^{-2}}$. The numerical domain in our simulation is $0<x<200$ Mm, $0<y<300$ Mm, discretized into 400$\times$600 uniform meshes. At the left side of the domain ($x=0$), a symmetric boundary condition is used. The radiation and the heat conduction in the energy equation are neglected.

Figure 6: Panel (a) shows the magnetic configuration of the background field. Panel (b) shows an interim state used to construct the initial state. Panel (c) shows the obtained flux rope system with an HFT configuration, which is the initial state of our simulation.

In order to simulate flux feeding processes in HFT cases, we must first construct a typical coronal flux rope system with an HFT configuration. Here we select a partially open bipolar field as the background field, in which a negative and a positive surface magnetic charge are located at the photosphere within $-b<x<-a$ and $a<x<b$, respectively. The background field can then be obtained by the complex variable method (e.g., Hu et al., 1995; Zhang et al., 2017).
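The characteristic values quoted above can be cross-checked numerically. The paper does not state its gas constant $R$ explicitly; the sketch below assumes a fully ionized hydrogen plasma, $R=2k_{B}/m_{p}$, which reproduces the quoted numbers to better than one percent (this choice of $R$ is our assumption, not taken from the paper):

```python
import math

# Assumed gas constant for a fully ionized hydrogen plasma: R = 2 k_B / m_p.
k_B, m_p = 1.380649e-23, 1.672622e-27       # J/K, kg
mu_0 = 4e-7 * math.pi                        # vacuum permeability, H/m
R = 2 * k_B / m_p                            # ~1.65e4 J kg^-1 K^-1

# Characteristic values quoted in the appendix.
rho_0, T_0, L_0, psi_0 = 3.34e-13, 1e6, 1e7, 3.73e3

v_0 = math.sqrt(R * T_0)                     # characteristic speed, m/s
t_0 = L_0 / v_0                              # characteristic time, s
g_0 = v_0**2 / L_0                           # characteristic gravity, m/s^2
beta_0 = 2 * mu_0 * rho_0 * R * T_0 * L_0**2 / psi_0**2

print(f"v_0 = {v_0 / 1e3:.2f} km/s, t_0 = {t_0:.1f} s, "
      f"g_0 = {g_0:.4g} m/s^2, beta_0 = {beta_0:.4f}")
```

Under this assumption one recovers $v_{0}\approx 128.5\ \mathrm{km\ s^{-1}}$, $t_{0}\approx 77.8\ \mathrm{s}$, $g_{0}\approx 1.65\times 10^{3}\ \mathrm{m\ s^{-2}}$, and $\beta_{0}\approx 0.0996\approx 0.1$, consistent with the quoted values.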
The background magnetic field can be cast in the complex variable form

$\displaystyle f(\omega)\equiv B_{x}-iB_{y}=\frac{(\omega+iy_{N})^{1/2}(\omega-iy_{N})^{1/2}}{F(a,b,y_{N})}\mathrm{ln}\left(\frac{\omega^{2}-a^{2}}{\omega^{2}-b^{2}}\right),$ (14)

where $\omega=x+iy$, and

$\displaystyle F(a,b,y_{N})=\frac{1}{b-a}\int_{a}^{b}(x^{2}+y_{N}^{2})^{1/2}dx=\frac{1}{2(b-a)}\left[b(b^{2}+y_{N}^{2})^{1/2}-a(a^{2}+y_{N}^{2})^{1/2}+y_{N}^{2}\,\mathrm{ln}\left(\frac{b+(b^{2}+y_{N}^{2})^{1/2}}{a+(a^{2}+y_{N}^{2})^{1/2}}\right)\right].$ (15)

Here $a=15$ Mm, $b=25$ Mm, and ($x=0$, $y=y_{N}=34.5$ Mm) is the position of the neutral point of the partially open bipolar field. There is a neutral current sheet located at $(x=0,\ y\geq y_{N})$. The magnetic flux function can then be calculated as

$\displaystyle\psi(x,y)=\mathrm{Im}\left\{\int f(\omega)d\omega\right\},$ (16)

and the flux function at the lower base is

$\psi_{i}(x,0)=\left\{\begin{array}[]{ll}{\psi_{c}},&{|x|<a}\\ {\psi_{c}F(|x|,b,y_{N})/F(a,b,y_{N})},&{a\leqslant|x|\leqslant b}\\ {0},&{|x|>b}\end{array}\right.$ (17)

where $\psi_{c}=\pi\psi_{0}$; the flux function at the neutral point $y=y_{N}$ is

$\displaystyle\psi_{N}=\frac{\pi(b^{2}-a^{2})}{2F(a,b,y_{N})}.$ (18)

Setting $B_{z}=0$ in the background field, we obtain the configuration of the background field shown in Fig. 6(a). The initial corona is isothermal and static:

$\displaystyle T_{c}\equiv T(0,x,y)=1\times 10^{6}\ \mathrm{K},\ \ \rho_{c}\equiv\rho(0,x,y)=\rho_{0}\mathrm{e}^{-gy}.$ (19)

Except during the emergence of the small rope, the lower boundary is fixed: the flux function $\psi$ is fixed at $\psi_{i}$ given by Eq. 17; $B_{z}$ and the velocity are always 0; the density and the temperature are fixed at their initial values. Increment equivalent extrapolation is used at the right and top boundaries.
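The closed form of $F(a,b,y_{N})$ in Eq. (15) can be verified numerically. The sketch below (an independent check, not from the paper) compares it against a midpoint-rule quadrature of the defining integral for the stated parameters $a=15$ Mm, $b=25$ Mm, $y_{N}=34.5$ Mm:

```python
import math

a, b, y_N = 15.0, 25.0, 34.5  # lengths in Mm, as stated in the text

def F_closed(a, b, y_N):
    """Closed form of Eq. (15)."""
    ra = math.hypot(a, y_N)   # (a^2 + y_N^2)^{1/2}
    rb = math.hypot(b, y_N)   # (b^2 + y_N^2)^{1/2}
    return (b * rb - a * ra
            + y_N**2 * math.log((b + rb) / (a + ra))) / (2 * (b - a))

# Midpoint-rule quadrature of (1/(b-a)) * integral_a^b sqrt(x^2 + y_N^2) dx.
n = 100_000
h = (b - a) / n
F_quad = sum(math.hypot(a + (i + 0.5) * h, y_N) for i in range(n)) * h / (b - a)

print(F_closed(a, b, y_N), F_quad)
```

Both evaluations agree to high precision (about 39.96 Mm for these parameters), confirming that the bracketed expression in Eq. (15) is the antiderivative of $(x^{2}+y_{N}^{2})^{1/2}$ evaluated between $a$ and $b$.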
With the initial condition and the background configuration, Equations (8) to (12) are solved by a multi-step implicit scheme. First, we let a flux rope emerge from the lower base of the background field, as shown in Fig. 6(b). By multiplying $B_{z}$ within the rope by a factor larger than 1, we increase the axial magnetic flux of the flux rope. The rope then rises, and magnetic reconnection occurs within the vertical current sheet formed below the rope, resulting in closed arcades below the rope. After that, the axial flux of the rope is adjusted again, so that the rope stops rising. Eventually, the rope system is allowed to relax to an equilibrium state with an HFT configuration, as shown in Fig. 6(c). This is the initial state of our simulation. It is noteworthy that the radius of the flux rope in our simulation is finite, so that the thin-rope approximation is not satisfied. Under this circumstance, the initial state can only be obtained by numerical procedures.
# Homomorphisms of algebraic groups: representability and rigidity Michel Brion Université Grenoble Alpes, Institut Fourier, CS 40700, 38058 Grenoble Cedex 9, France ###### Abstract. Given two algebraic groups $G$, $H$ over a field $k$, we investigate the representability of the functor of morphisms (of schemes) $\mathbf{Hom}(G,H)$ and the subfunctor of homomorphisms (of algebraic groups) $\mathbf{Hom}_{{\rm gp}}(G,H)$. We show that $\mathbf{Hom}(G,H)$ is represented by a group scheme, locally of finite type, if the $k$-vector space ${\mathcal{O}}(G)$ is finite- dimensional; the converse holds if $H$ is not étale. When $G$ is linearly reductive and $H$ is smooth, we show that $\mathbf{Hom}_{{\rm gp}}(G,H)$ is represented by a smooth scheme $M$; moreover, every orbit of $H$ acting by conjugation on $M$ is open. ## 1\. Introduction The starting point of this article is the classification problem for actions of an algebraic group $G$ on an algebraic variety $X$. When $X$ is proper over the ground field $k$, its automorphism functor is represented by a locally algebraic group ${\rm Aut}_{X}$ (i.e., a group scheme, locally of finite type), see [MO67, Thm. 3.7]. Then the $G$-actions on $X$ correspond bijectively to the homomorphisms $f:G\to{\rm Aut}_{X}$, and the above problem is equivalent to classifying these homomorphisms up to conjugation by ${\rm Aut}(X)={\rm Aut}_{X}(k)$. This motivates the following questions: * • Given an algebraic group $G$ and a locally algebraic group $H$, is the functor of homomorphisms $\mathbf{Hom}_{{\rm gp}}(G,H)$ represented by a scheme $M$? * • If so, $M$ is equipped with an action of $H$ by conjugation on the target; how to describe the orbits? 
When $G$ is of multiplicative type and $H$ is smooth and affine, the representability of $\mathbf{Hom}_{{\rm gp}}(G,H)$ is due to Grothendieck; he showed in addition that the representing scheme $M$ is smooth and the morphism (1.1) $H\times M\longrightarrow M\times M,\quad(h,f)\longmapsto(hfh^{-1},f)$ is smooth as well (see [SGA3, Exp. XI, Thm. 4.2, Cor. 5.1]; these results are obtained over an arbitrary base). As a consequence, for any field extension $K/k$ and any $f\in M(K)=\operatorname{Hom}_{K-{\rm gp}}(G_{K},H_{K})$, the orbit map $H_{K}\to M_{K}$, $h\mapsto hfh^{-1}$ is smooth, and hence every orbit is open. This may be viewed as a rigidity property for actions of group schemes of multiplicative type: the only way to deform such an action is via conjugation on the target. For $G$ reductive and $H$ smooth affine, the representability of $\mathbf{Hom}_{{\rm gp}}(G,H)$ was obtained by Demazure; in characteristic $0$, he also showed that the representing scheme $M$ is smooth (see [SGA3, Exp. XXIV, Cor. 7.2.3, Prop. 7.3.1]; these results hold again over an arbitrary base). Further rigidity properties of $M$ were obtained by Margaux when $G$ is linearly reductive (see [Mar09] and Remark 6.4). Note that the above scheme $M$ is not necessarily of finite type; for example, if $G=H={\mathbb{G}}_{m}$ then $M$ is the constant scheme ${\mathbb{Z}}_{k}$. Much more elaborate examples occur in recent work of Lesieutre (see [Les18, Lem. 12, Cor. 15]) and Dinh & Oguiso (see [DO19, Lem. 4.5]): they constructed smooth complex projective varieties $X$ such that ${\rm Aut}(X)$ is discrete and has infinitely many conjugacy classes of involutions. Yet if $\mathbf{Hom}_{{\rm gp}}(G,H)$ is represented by a scheme $M$, then $M$ is locally of finite type. 
Indeed, when viewed as a functor from $k$-algebras to sets, $\mathbf{Hom}_{{\rm gp}}(G,H)$ commutes with direct limits as a consequence of [EGA, IV.8.8.2]; thus, the assertion follows from the characterization of schemes locally of finite type in terms of their functors of points obtained in [EGA, IV.8.14.2]. Likewise, if the functor of morphisms (of schemes) $\mathbf{Hom}(G,H)$ is represented by a scheme $N$, then $N$ is locally of finite type. The functors $\mathbf{Hom}(G,H)$ and $\mathbf{Hom}_{{\rm gp}}(G,H)$ usually contain more information than their sets of $K$-valued points for all field extensions $K/k$. For example, every morphism $f:{\mathbb{G}}_{a,K}\to{\mathbb{G}}_{m,K}$ is constant, but $\mathbf{Hom}({\mathbb{G}}_{a},{\mathbb{G}}_{m})$ is not representable, nor is $\mathbf{Hom}_{{\rm gp}}({\mathbb{G}}_{a},{\mathbb{G}}_{m})$ (see e.g. [Mil17, Exercise 1.1]). Also, if $G$ and $H$ are linear algebraic groups over an algebraically closed field $k$ of characteristic $0$, then the set of homomorphisms $\operatorname{Hom}_{{\rm gp}}(G,H)$ has a natural structure of affine ind-variety of finite dimension; if in addition $G$ is unipotent, then one even gets an affine variety (by results of Furter & Kraft, see [FK18, Lem. 8.2.1, Prop. 8.4.1]). But in the latter case, $\mathbf{Hom}_{{\rm gp}}(G,H)$ is representable if and only if $H$ is an extension of a finite group by a unipotent one. With these motivations and examples in mind, we consider in this article the issues of representability of $\mathbf{Hom}(G,H)$ and $\mathbf{Hom}_{{\rm gp}}(G,H)$, where again $G$ is an algebraic group, and $H$ a locally algebraic group. Observe that $\mathbf{Hom}(G,H)$ is a group functor relative to pointwise multiplication in $H$, and is the semi-direct product of the normal subgroup functor $\mathbf{Hom}(G,H;e_{G}\mapsto e_{H})$ of pointed homomorphisms by the group $H$ of constant morphisms (see Lemma 2.6 for details). 
Also, it is easy to show that $\mathbf{Hom}_{{\rm gp}}(G,H)$ is a closed subfunctor of $\mathbf{Hom}(G,H)$ (Lemma 2.1). The case of an étale group scheme $H$ is easy as well: then $\mathbf{Hom}(G,H)$ is represented by an étale scheme (Proposition 4.1). Thus, $\mathbf{Hom}_{{\rm gp}}(G,H)$ is represented by an étale scheme as well. So we may exclude this case in our first main result, which handles the representability of $\mathbf{Hom}(G,H)$: ###### Theorem 1. Let $G$ be an algebraic group, and $H$ a locally algebraic group. Assume that $H$ is not étale. Then $\mathbf{Hom}(G,H)$ is represented by a locally algebraic group if and only if the vector space ${\mathcal{O}}(G)$ is finite- dimensional. This can be reformulated by using the affinization theorem (see [DG70, III.3.8.2]): for any algebraic group $G$, the affine scheme $G^{{\rm aff}}=\operatorname{Spec}{\mathcal{O}}(G)$ is an algebraic group and the canonical morphism $G\to G^{{\rm aff}}$ is a faithfully flat homomorphism. Moreover, its kernel $N$ is smooth, connected, central in $G^{0}$ (in particular, commutative) and satisfies ${\mathcal{O}}(N)=k$; we say that $N$ is anti-affine. As a consequence, $N$ is the largest anti-affine subgroup of $G$; we denote it by $G_{{\rm ant}}$. Thus, Theorem 1 asserts that $\mathbf{Hom}(G,H)$ is represented by a locally algebraic group if and only if $G$ is an extension of a finite group scheme by an anti-affine one. The proof begins with a reduction to the case where $G$ is anti-affine; we then show that $\mathbf{Hom}(G,H;e_{G}\mapsto e_{H})$ is equal to $\mathbf{Hom}_{{\rm gp}}(G,H)$ and is represented by a form of some constant group scheme ${\mathbb{Z}}^{n}_{k}$ (Proposition 5.3). For this, we use a rigidity lemma for anti-affine schemes (Lemma 5.2), a version of a result of C. and F. Sancho de Salas (see [SS09, Thm. 1.7]). Our second main result gives a sufficient condition for the homomorphism functor to be representable. 
To state it, we introduce a variant of the classical notion of linear reductivity. We say that an algebraic group $G$ (possibly non-affine) is _linearly reductive_ if every $G$-module is semi- simple; this is equivalent to the affinization $G^{{\rm aff}}$ being linearly reductive. Examples of linearly reductive groups include group schemes of multiplicative type and semi-abelian varieties; see the beginning of Section 6 for more on this notion. We may now state our second main result in a simplified version; see Theorem 6.3 for the full, more technical statement. ###### Theorem 2. Let $G$ be a linearly reductive algebraic group, and $H$ a locally algebraic group. Then $\mathbf{Hom}_{{\rm gp}}(G,H)$ is represented by a smooth scheme $M$. Moreover, the morphism (1.1) is smooth. Conversely, if the assertions of Theorem 2 hold for an algebraic group $G$ and all affine algebraic groups $H$, then $G$ is linearly reductive; see Remark 6.5. So this theorem yields a version of Grothendieck’s representability and rigidity results mentioned above, which is close to optimal for group schemes over a field. We refer to [Ro21, Thm. 2] for a generalization of the representability theorem in its original setting of group schemes of multiplicative type over an arbitrary base. This article is organized as follows. Section 2 contains preliminary results on functors of (homo)morphisms; some of them are obtained in [SGA3, Exp. I] in a much greater generality. The tangent spaces to these functors are described in Section 3 by using constructions and results from [SGA3, Exp. II]. Section 4 collects representability results for these functors, when restricted to various classes of group schemes. In Section 5, we first prove the rigidity lemma mentioned above, and then deduce Theorem 1. Theorem 6.3 is stated and proved in the final Section 6, after some preliminary results on linear reductivity and a closely related notion of semi-reductivity. Notation and conventions. 
We consider schemes over a field $k$ of characteristic $p\geq 0$, with separable closure $k_{s}$ and algebraic closure $\bar{k}$. Morphisms and products are understood to be over $k$ unless otherwise stated. Schemes are assumed to be separated and locally of finite type throughout. The structural morphism of a scheme $X$ is denoted by $\pi_{X}:X\to\operatorname{Spec}(k)$. Given a field extension $K/k$, we denote by $X_{K}$ the $K$-scheme obtained from $X$ by the base change $\operatorname{Spec}(K)\to\operatorname{Spec}(k)$. Group schemes are assumed to be locally algebraic in view of our convention on schemes. Morphisms of group schemes will also be called homomorphisms. The neutral element of a group scheme $G$ is denoted by $e_{G}$. An algebraic group is a group scheme of finite type. We will freely use some of the functorial language of algebraic geometry developed in [DG70, I.1, I.2] (see also [EH00, Chap. VI]). We identify every scheme $S$ with its functor of points $h_{S}$. ## 2\. Hom functors We first recall some basic notions and results from [SGA3, Exp. I, §7] in our special setting. Given two schemes $X$, $Y$, we denote by $\mathbf{Hom}(X,Y)$ the contravariant functor from schemes to sets which sends every scheme $S$ to $\operatorname{Hom}_{S}(X\times S,Y\times S)$, and every morphism of schemes $u:S^{\prime}\to S$ to the pullback map $\operatorname{Hom}_{S}(X\times S,Y\times S)\longrightarrow\operatorname{Hom}_{S^{\prime}}(X\times S^{\prime},Y\times S^{\prime}).$ We may identify $\mathbf{Hom}(X,Y)(S)$ with $\operatorname{Hom}(X\times S,Y)$ by sending every morphism $f:X\times S\to Y$ to $(f,{\rm pr}_{S}):X\times S\to Y\times S$. This identifies $\mathbf{Hom}(X,Y)(u)$ with the map $\operatorname{Hom}(X\times S,Y)\longrightarrow\operatorname{Hom}(X\times S^{\prime},Y),\quad f\longmapsto f\circ({\rm id},u)$ for any $u$ as above. As a consequence, $\mathbf{Hom}(\operatorname{Spec}(k),Y)$ may be identified with $Y$. 
The formation of $\mathbf{Hom}(X,Y)$ commutes with base change by field extensions $K/k$. Also, every morphism of schemes $\varphi:X^{\prime}\to X$ induces a morphism of functors $\varphi^{*}:\mathbf{Hom}(X,Y)\longrightarrow\mathbf{Hom}(X^{\prime},Y)$ via precomposition with $\varphi$. Likewise, every morphism of schemes $\psi:Y\to Y^{\prime}$ induces a morphism of functors $\psi_{*}:\mathbf{Hom}(X,Y)\longrightarrow\mathbf{Hom}(X,Y^{\prime})$ via postcomposition with $\psi$. For any family of schemes $(X_{i})_{i\in I}$, the inclusions $X_{i}\to\coprod_{j\in I}X_{j}$ yield an isomorphism of functors $\mathbf{Hom}(\coprod_{i\in I}X_{i},Y)\stackrel{{\scriptstyle\sim}}{{\longrightarrow}}\prod_{i\in I}\mathbf{Hom}(X_{i},Y).$ Likewise, for any family of schemes $(Y_{i})_{i\in I}$, the projections ${\rm pr}_{i}:\prod_{j\in I}Y_{j}\to Y_{i}$ yield an isomorphism of functors $\mathbf{Hom}(X,\prod_{i\in I}Y_{i})\stackrel{{\scriptstyle\sim}}{{\longrightarrow}}\prod_{i\in I}\mathbf{Hom}(X,Y_{i}).$ We will also freely use the canonical isomorphism of functors $\mathbf{Hom}(X\times Y,Z)\stackrel{{\scriptstyle\sim}}{{\longrightarrow}}\mathbf{Hom}(X,\mathbf{Hom}(Y,Z))$ for any schemes $X$, $Y$, $Z$ (see [SGA3, Exp. I, Prop. 1.7.1]). This identifies $\mathbf{Hom}(Y,Z)$ with the Weil restriction functor $\mathbf{R}_{Y/k}(Z)$. Next, recall the following result (a special case of [DG70, I.2.7.5]):

###### Lemma 2.1.

Let $\psi:Y\to Y^{\prime}$ be a closed immersion of schemes. Then the morphism of functors $\psi_{*}:\mathbf{Hom}(X,Y)\to\mathbf{Hom}(X,Y^{\prime})$ is a closed immersion.
###### Lemma 2.2. With the above notation, $\mathbf{Ker}(\varphi_{1}^{*},\varphi_{2}^{*})$ is a closed subfunctor of $\mathbf{Hom}(X,Y)$. ###### Proof. We have a cartesian diagram of functors $\begin{array}{ccc}\mathbf{Ker}(\varphi_{1}^{*},\varphi_{2}^{*})&\longrightarrow&\mathbf{Hom}(X,Y)\\ \downarrow&&\downarrow{\scriptstyle(\varphi_{1}^{*},\varphi_{2}^{*})}\\ \mathbf{Hom}(X^{\prime},Y)&\stackrel{\Delta_{*}}{\longrightarrow}&\mathbf{Hom}(X^{\prime},Y\times Y),\end{array}$ where $\Delta:Y\to Y\times Y$ denotes the diagonal, and $\mathbf{Hom}(X^{\prime},Y\times Y)$ is identified with $\mathbf{Hom}(X^{\prime},Y)\times\mathbf{Hom}(X^{\prime},Y)$. Moreover, $\Delta_{*}$ is a closed immersion by Lemma 2.1; this yields the assertion. ∎ ###### Lemma 2.3. Let $\varphi:X^{\prime}\to X$ be a faithfully flat morphism of schemes, and ${\rm pr}_{1},{\rm pr}_{2}:X^{\prime}\times_{X}X^{\prime}\to X^{\prime}$ the projections. 1. (i) $\varphi^{*}$ identifies $\mathbf{Hom}(X,Y)$ with the equalizer $\mathbf{Ker}({\rm pr}_{1}^{*},{\rm pr}_{2}^{*})$. 2. (ii) $\varphi^{*}$ is a closed immersion. ###### Proof. (i) This holds by descent theory, see e.g. [Vis05, Thm. 2.55] (note that $\varphi$ is locally of finite presentation in view of our standing assumption on schemes). (ii) This follows from (i) together with Lemma 2.2. ∎ In particular, we obtain: ###### Corollary 2.4. The structural morphism of $X$ yields a closed immersion $\pi_{X}^{*}:Y=\mathbf{Hom}(\operatorname{Spec}(k),Y)\longrightarrow\mathbf{Hom}(X,Y).$ We may thus see $Y$ as the closed subfunctor of $\mathbf{Hom}(X,Y)$ consisting of constant morphisms. As a further direct consequence of Lemma 2.3, we record: ###### Corollary 2.5. Let $G$ be a group scheme, and $\varphi:X^{\prime}\to X$ a $G$-torsor for the fpqc topology.
Then $\varphi^{*}$ identifies $\mathbf{Hom}(X,Y)$ with the closed subfunctor of $\mathbf{Hom}(X^{\prime},Y)$ consisting of $G$-invariant morphisms. We now assume that $X$ (resp. $Y$) is equipped with a $k$-rational point $x$ (resp. $y$). This yields a subfunctor $\mathbf{Hom}(X,Y;x\mapsto y)$ of $\mathbf{Hom}(X,Y)$, such that for any scheme $S$, the set $\mathbf{Hom}(X,Y;x\mapsto y)(S)$ consists of the morphisms $f:X\times S\to Y$ which satisfy $f(x,s)=y$ identically on $S$. In view of the cartesian diagram of functors $\begin{array}{ccc}\mathbf{Hom}(X,Y;x\mapsto y)&\longrightarrow&\mathbf{Hom}(X,Y)\\ \downarrow&&\downarrow{\scriptstyle x^{*}}\\ \operatorname{Spec}(k)&\stackrel{y}{\longrightarrow}&Y,\end{array}$ we see that $\mathbf{Hom}(X,Y;x\mapsto y)$ is a closed subfunctor of $\mathbf{Hom}(X,Y)$. In particular, for any group scheme $H$, we obtain a closed subfunctor $\mathbf{Hom}(X,H;x\mapsto e_{H})$ of $\mathbf{Hom}(X,H)$. Also, note that $\mathbf{Hom}(X,H)$ is a group functor relative to pointwise multiplication, and $\mathbf{Hom}(X,H;x\mapsto e_{H})$ is a normal subgroup functor. We also have the closed subfunctor $H$ of constant morphisms (Corollary 2.4); this is a subgroup functor as well. ###### Lemma 2.6. For any scheme $X$ equipped with a $k$-rational point $x$ and for any group scheme $H$, we have an isomorphism of group functors $\mathbf{Hom}(X,H)\simeq\mathbf{Hom}(X,H;x\mapsto e_{H})\rtimes H.$ ###### Proof. Let $S$ be a scheme, and $f\in\operatorname{Hom}(X\times S,H)$. Then we have $f=gh$, where $g\in\operatorname{Hom}(X\times S,H)$ sends $x\times S$ to $e_{H}$, and $h\in\operatorname{Hom}(S,H)$: just take $h(s)=f(x,s)$ and $g=fh^{-1}$. Moreover, such a decomposition of $f$ is clearly unique. This yields the statement.
∎ Next, consider an exact sequence of group schemes $1\longrightarrow N\stackrel{{\scriptstyle i}}{{\longrightarrow}}H\stackrel{{\scriptstyle q}}{{\longrightarrow}}Q,$ i.e., $i$, $q$ are homomorphisms, $i$ is a closed immersion, and its schematic image is the kernel of $q$. Then we readily obtain: ###### Lemma 2.7. With the above notation, the sequence of group functors $1\longrightarrow\mathbf{Hom}(X,N)\stackrel{{\scriptstyle i_{*}}}{{\longrightarrow}}\mathbf{Hom}(X,H)\stackrel{{\scriptstyle q_{*}}}{{\longrightarrow}}\mathbf{Hom}(X,Q)$ is exact. If $X$ is equipped with a $k$-rational point $x$, then the sequence of group functors $1\to\mathbf{Hom}(X,N;x\mapsto e_{N})\stackrel{{\scriptstyle i_{*}}}{{\to}}\mathbf{Hom}(X,H;x\mapsto e_{H})\stackrel{{\scriptstyle q_{*}}}{{\to}}\mathbf{Hom}(X,Q;x\mapsto e_{Q})$ is exact as well. Given two group schemes $G$, $H$, we denote by $\mathbf{Hom}_{{\rm gp}}(G,H)$ the subfunctor of $\mathbf{Hom}(G,H;e_{G}\mapsto e_{H})$ consisting of homomorphisms. Clearly, the $H$-action on $\mathbf{Hom}(G,H;e_{G}\mapsto e_{H})$ by conjugation on the target normalizes $\mathbf{Hom}_{{\rm gp}}(G,H)$. If $H$ is commutative, then $\mathbf{Hom}_{{\rm gp}}(G,H)$ is a subgroup functor of the commutative group functor $\mathbf{Hom}(G,H)$. ###### Lemma 2.8. For any group schemes $G$, $H$, the subfunctor $\mathbf{Hom}_{{\rm gp}}(G,H)$ is closed in $\mathbf{Hom}(G,H)$. ###### Proof. We adapt the argument of the proof of Lemma 2.2. Let $S$ be a scheme, and $f\in\operatorname{Hom}(G\times S,H)$. 
Then $f$ is a homomorphism if and only if the diagram $\begin{array}{ccc}G\times G\times S&\stackrel{(\mu,{\rm id})}{\longrightarrow}&G\times S\\ {\scriptstyle(f,f,{\rm id})}\downarrow&&\downarrow{\scriptstyle(f,{\rm id})}\\ H\times H\times S&\stackrel{(\nu,{\rm id})}{\longrightarrow}&H\times S\end{array}$ commutes, where $\mu$ (resp. $\nu$) denotes the multiplication in $G$ (resp. $H$). This yields a cartesian diagram of functors $\begin{array}{ccc}\mathbf{Hom}_{{\rm gp}}(G,H)&\longrightarrow&\mathbf{Hom}(G,H)\\ {\scriptstyle\mu^{*}}\downarrow&&\downarrow{\scriptstyle(\mu^{*},\nu_{*}\circ\Delta)}\\ \mathbf{Hom}(G\times G,H)&\longrightarrow&\mathbf{Hom}(G\times G,H\times H),\end{array}$ where $\Delta=(\Delta_{H})_{*}:\mathbf{Hom}(G,H)\to\mathbf{Hom}(G,H\times H)=\mathbf{Hom}(G,H)\times\mathbf{Hom}(G,H)$ denotes the diagonal. Since $\Delta_{H}$ is a closed immersion, this yields the statement in view of Lemma 2.1. ∎ ###### Lemma 2.9. Let $G_{1}$, $G_{2}$, $H$ be group schemes, and $\alpha:G_{1}\times G_{2}\to G_{2}$ an action of $G_{1}$ on $G_{2}$ by automorphisms. Consider the semi-direct product $G=G_{1}\ltimes G_{2}$. Then the product of restriction functors $\mathbf{Hom}_{{\rm gp}}(G,H)\longrightarrow\mathbf{Hom}_{{\rm gp}}(G_{1},H)\times\mathbf{Hom}_{{\rm gp}}(G_{2},H)$ is a closed immersion. ###### Proof. Let $S$ be a scheme, and $f:G\times S\to H$ a homomorphism. Denote by $f_{1}$ (resp. $f_{2}$) the restriction of $f$ to $G_{1}\times S$ (resp. $G_{2}\times S$).
Then we have identically on $G_{1}\times G_{2}\times S$ $f(g_{1}g_{2},s)=f_{1}(g_{1},s)f_{2}(g_{2},s),\quad f_{2}(\alpha(g_{1},g_{2}),s)=f_{1}(g_{1},s)f_{2}(g_{2},s)f_{1}(g_{1},s)^{-1}$ by the definition of the semi-direct product. Conversely, every pair of homomorphisms $(f_{1},f_{2})$ satisfying the latter equality defines a unique homomorphism $f:G\times S\to H$, where the scheme $G$ is identified with $G_{1}\times G_{2}$. This yields the assertion by arguing as in the proof of Lemma 2.2. ∎ ## 3\. Tangent spaces We first recall the notion of tangent space for a functor $\mathbf{F}$ from $k$-algebras to sets (see e.g. [EH00, VI.1.3]). Denote by $D=k[t]/(t^{2})$ the algebra of dual numbers, so that $D=k\oplus k\varepsilon$ where $\varepsilon^{2}=0$. The algebra homomorphism $\sigma:D\longrightarrow k,\quad\varepsilon\longmapsto 0$ yields a map $\mathbf{F}(\sigma):\mathbf{F}(D)\to\mathbf{F}(k)$. The fiber of this map at $f\in\mathbf{F}(k)$ is the tangent space $T_{f}(\mathbf{F})$; it is equipped with an action of the multiplicative group $k^{\times}$ (the automorphism group of the $k$-algebra $D$). More generally, each vector space $V$ yields a $k$-algebra $\mathbf{D}(V)=k\oplus\varepsilon V$, equipped with the projection $\sigma_{V}:\mathbf{D}(V)\to k$ with kernel the ideal $\varepsilon V$ of square $0$. This defines a functor $\mathbf{D}$ from vector spaces to algebras, satisfying $\mathbf{D}(k)=D$. 
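The algebra of dual numbers can be made concrete: the sketch below (a hypothetical minimal implementation over $k={\mathbb{Q}}$, not code from any library) shows how a $D$-point $a+\varepsilon b$ of the affine line records a $k$-point together with a tangent direction, and how a polynomial map acts on it through its differential.

```python
from dataclasses import dataclass
from fractions import Fraction

@dataclass(frozen=True)
class Dual:
    """An element a + eps*b of D = k[t]/(t^2), with eps^2 = 0 and k = Q."""
    a: Fraction  # image under sigma: D -> k, eps -> 0
    b: Fraction  # the "tangent" coefficient of eps

    def __add__(self, other):
        return Dual(self.a + other.a, self.b + other.b)

    def __mul__(self, other):
        # (a1 + eps*b1)(a2 + eps*b2) = a1*a2 + eps*(a1*b2 + b1*a2)
        return Dual(self.a * other.a, self.a * other.b + self.b * other.a)

def F(x):
    """The map x -> x**2 + x, evaluated on points of any Q-algebra."""
    return x * x + x

# A D-point of the affine line lying over the k-point a = 3 is a tangent
# vector at 3; F sends it to a D-point over F(3) whose eps-part is F'(3)*b.
a, b = Fraction(3), Fraction(1)
image = F(Dual(a, b))
assert image.a == F(a)               # lies over F(3) = 12
assert image.b == (2 * a + 1) * b    # differential F'(3) = 7 acts on b
```

The fiber of $\sigma$ over a fixed $k$-point is exactly the set of such tangent vectors, matching the definition of $T_{f}(\mathbf{F})$ above.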
For any two vector spaces $V$, $W$, we obtain a fiber product of algebras (3.1) $\begin{array}{ccc}\mathbf{D}(V\oplus W)&\stackrel{\mathbf{D}({\rm pr}_{V})}{\longrightarrow}&\mathbf{D}(V)\\ {\scriptstyle\mathbf{D}({\rm pr}_{W})}\downarrow&&\downarrow{\scriptstyle\sigma_{V}}\\ \mathbf{D}(W)&\stackrel{\sigma_{W}}{\longrightarrow}&k.\end{array}$ If $\mathbf{F}$ commutes with such fiber products, then $T_{f}(\mathbf{F})$ has a natural structure of $k$-vector space. Given two functors $\mathbf{F}_{1}$, $\mathbf{F}_{2}$ as above and a morphism of functors $u:\mathbf{F}_{1}\to\mathbf{F}_{2}$, the induced map $u(D):\mathbf{F}_{1}(D)\to\mathbf{F}_{2}(D)$ yields the differential $T_{f}(u):T_{f}(\mathbf{F}_{1})\longrightarrow T_{u(f)}(\mathbf{F}_{2})$ for any $f\in\mathbf{F}_{1}(k)$. If $\mathbf{F}_{1}$, $\mathbf{F}_{2}$ commute with the fiber products (3.1), then $T_{f}(u)$ is $k$-linear. These notions may be applied to any contravariant functor from schemes to sets, and hence to $\mathbf{Hom}(X,Y)$ where $X$, $Y$ are schemes. The resulting functor from algebras to sets commutes with the fiber products (3.1) in view of [SGA3, Exp. II, Cor. 3.11.2]. Moreover, for any $f\in\operatorname{Hom}(X,Y)$, we have a canonical isomorphism of vector spaces (3.2) $T_{f}\mathbf{Hom}(X,Y)\simeq\operatorname{Hom}_{Y}(X,{\mathbb{V}}(\Omega^{1}_{Y})),$ where $\Omega^{1}_{Y}$ denotes the ${\mathcal{O}}_{Y}$-module of Kähler differentials of $Y$ over $k$, and ${\mathbb{V}}(\Omega^{1}_{Y})=\operatorname{Spec}_{Y}\operatorname{Sym}_{{\mathcal{O}}_{Y}}(\Omega^{1}_{Y})$ (see [SGA3, Exp. II, Prop. 3.3, Cor. 3.11.3]).
Equivalently, we have canonical isomorphisms of vector spaces $T_{f}\mathbf{Hom}(X,Y)\simeq\operatorname{Hom}_{{\mathcal{O}}_{Y}}(\Omega^{1}_{Y},f_{*}({\mathcal{O}}_{X}))\simeq\operatorname{Hom}_{{\mathcal{O}}_{X}}(f^{*}(\Omega^{1}_{Y}),{\mathcal{O}}_{X}).$ Next, consider a group scheme $H$. By [SGA3, Exp. I, Prop. 6.8.6], the ${\mathcal{O}}_{H}$-module $\Omega^{1}_{H}$ is $H\times H$-equivariant, where $H\times H$ acts on $H$ by left and right multiplication. In particular, $\Omega^{1}_{H}$ is equivariant relative to the right $H$-action. Thus, there is a canonical isomorphism $\Omega^{1}_{H}\simeq{\mathcal{O}}_{H}\otimes\Omega^{1}_{H}(e_{H})$ in view of [SGA3, Exp. I, Prop. 6.8.1]. Moreover, we have canonical isomorphisms $\Omega^{1}_{H}(e_{H})\simeq\mathfrak{m}/\mathfrak{m}^{2}\simeq\operatorname{Lie}(H)^{*}$, where $\mathfrak{m}$ denotes the maximal ideal of the local ring ${\mathcal{O}}_{H,e_{H}}$, and $\operatorname{Lie}(H)$ stands for the Lie algebra. This yields a canonical isomorphism $\Omega^{1}_{H}\simeq{\mathcal{O}}_{H}\otimes\operatorname{Lie}(H)^{*}$. In view of the isomorphism (3.2), this yields in turn: ###### Lemma 3.1. Let $X$ be a scheme, $H$ a group scheme, and $f:X\to H$ a morphism. Then there is a canonical isomorphism of vector spaces (3.3) $i:T_{f}\mathbf{Hom}(X,H)\stackrel{{\scriptstyle\sim}}{{\longrightarrow}}{\mathcal{O}}(X)\otimes\operatorname{Lie}(H).$ ###### Remark 3.2. (i) With the above notation, we may view $\operatorname{Lie}(H)$ as the affine space $\operatorname{Spec}\,\operatorname{Sym}(\mathfrak{m}/\mathfrak{m}^{2})$; this identifies ${\mathcal{O}}(X)\otimes\operatorname{Lie}(H)$ with $\operatorname{Hom}(X,\operatorname{Lie}(H))$. If $X$ is equipped with a $k$-rational point $x$ and $f(x)=e_{H}$, then $i$ restricts to an isomorphism (3.4) $j:T_{f}\mathbf{Hom}(X,H;x\mapsto e_{H})\stackrel{{\scriptstyle\sim}}{{\longrightarrow}}\operatorname{Hom}(X,\operatorname{Lie}(H);x\mapsto 0).$ (ii) By [SGA3, Exp. 
II, §3.11], the isomorphism (3.3) may be interpreted as follows: consider $\varphi\in T_{f}\mathbf{Hom}(X,H)=\operatorname{Hom}_{H}(X,{\mathbb{V}}(\Omega^{1}_{H}))=H^{0}(X,f^{*}(T_{H})),$ where $T_{H}$ denotes the tangent bundle. Let $S$ be a scheme, and $x\in X(S)$; then $f(x)\in H(S)$. Thus, we may view $\varphi(x)$ as an $S$-point of $T_{H}$ above $f(x)$. Also, $T_{H}$ is equipped with a group scheme structure, the semi-direct product $\operatorname{Lie}(H)\rtimes H$ (see [SGA3, Exp. II, §4.1]). Thus, $\varphi(x)f(x)^{-1}$ is an $S$-point of the affine space $\operatorname{Lie}(H)$, and the assignment $x\mapsto\varphi(x)f(x)^{-1}$ gives back the isomorphism $i$. Next, consider a homomorphism of group schemes $f:G\to H$, that is, $f\in\mathbf{Hom}_{{\rm gp}}(G,H)(k)$. Then the $H$-action on $\mathbf{Hom}_{{\rm gp}}(G,H)$ by conjugation yields a morphism of functors $\gamma_{f}:H\longrightarrow\mathbf{Hom}_{{\rm gp}}(G,H),\quad h\longmapsto(g\mapsto hf(g)h^{-1})$ that we may see as the orbit map associated with $f$. Since $\gamma_{f}(e_{H})=f$, we have the differential $T_{e_{H}}(\gamma_{f}):T_{e_{H}}(H)\longrightarrow T_{f}\mathbf{Hom}_{{\rm gp}}(G,H)\subset T_{f}\mathbf{Hom}(G,H)$ that we will view as a map $\operatorname{Lie}(H)\to\operatorname{Hom}(G,\operatorname{Lie}(H))$ by using the isomorphism (3.3). ###### Lemma 3.3. Keep the above notation and assumptions. 1. (i) The tangent space $T_{f}\mathbf{Hom}_{{\rm gp}}(G,H)$ is the subspace $Z^{1}(G,\operatorname{Lie}(H))$ of $\operatorname{Hom}(G,\operatorname{Lie}(H))$ consisting of the morphisms $\varphi:G\to\operatorname{Lie}(H)$ such that $\varphi(g_{1}g_{2})=\varphi(g_{1})+\operatorname{Ad}(f(g_{1}))\varphi(g_{2})$ identically on $G\times G$. 2. (ii) The image of the differential $T_{e_{H}}(\gamma_{f})$ is the subspace $B^{1}(G,\operatorname{Lie}(H))$ of $\operatorname{Hom}(G,\operatorname{Lie}(H))$ consisting of the morphisms $g\mapsto\operatorname{Ad}(f(g))x-x$, where $x\in\operatorname{Lie}(H)$.
###### Proof. (i) This follows from [SGA3, Exp. II, Prop. 4.2]. (ii) We have $\gamma_{f}(h)f(g)^{-1}=hf(g)h^{-1}f(g)^{-1}$ identically on $G\times H$. In view of Remark 3.2, it follows that $T_{e_{H}}(\gamma_{f})(x)(g)=x-\operatorname{Ad}(f(g))x$ for any $x\in\operatorname{Lie}(H)$ and any schematic point $g$ of $G$. ∎ ###### Remark 3.4. (i) The space $Z^{1}(G,\operatorname{Lie}(H))$ consists of the $1$-cocycles of the Hochschild complex $C^{*}(G,\operatorname{Lie}(H))$, where $\operatorname{Lie}(H)$ is a $G$-module via $\operatorname{Ad}\circ f$; moreover, the subspace $B^{1}(G,\operatorname{Lie}(H))$ consists of the $1$-coboundaries (see [SGA3, Exp. I, §5.1] or [DG70, II.3]). Thus, we have $Z^{1}(G,\operatorname{Lie}(H))/B^{1}(G,\operatorname{Lie}(H))=H^{1}(G,\operatorname{Lie}(H)),$ the first cohomology group of this module. With the notation and assumptions of Lemma 3.3, it follows that the map $T_{e_{H}}(\gamma_{f}):T_{e_{H}}(H)\longrightarrow T_{f}\mathbf{Hom}_{{\rm gp}}(G,H)$ is surjective if and only if $H^{1}(G,\operatorname{Lie}(H))=0$. (ii) Most results on cohomology groups of a group scheme $G$ are obtained under the assumption that $G$ is affine. When $G$ is an algebraic group, this entails no loss of generality in view of the affinization theorem recalled in the introduction. Indeed, the pullback map ${\mathcal{O}}(G^{{\rm aff}})\to{\mathcal{O}}(G)$ is an isomorphism. Moreover, the representation $\operatorname{Ad}\circ f:G\to\operatorname{GL}(\operatorname{Lie}(H))$ factors uniquely through a representation of $G^{{\rm aff}}$. Therefore, $\operatorname{Lie}(H)$ is a $G^{{\rm aff}}$-module, and the pullback maps $Z^{1}(G^{{\rm aff}},\operatorname{Lie}(H))\to Z^{1}(G,\operatorname{Lie}(H)),\;B^{1}(G^{{\rm aff}},\operatorname{Lie}(H))\to B^{1}(G,\operatorname{Lie}(H))$ are isomorphisms. 
So we obtain an isomorphism $H^{1}(G^{{\rm aff}},\operatorname{Lie}(H))\stackrel{{\scriptstyle\sim}}{{\longrightarrow}}H^{1}(G,\operatorname{Lie}(H))$ (which extends to all cohomology groups of all $G$-modules). Next, we obtain a key smoothness result: ###### Lemma 3.5. Let $G$ be an algebraic group, and $H$ a smooth group scheme. Assume that $\mathbf{Hom}_{{\rm gp}}(G,H)$ is represented by a scheme $M$. Then the following conditions are equivalent: 1. (i) The morphism (1.1) $\gamma:H\times M\longrightarrow M\times M,\quad(h,f)\longmapsto(hfh^{-1},f)$ is smooth. 2. (ii) For any field extension $K/k$ and any $f\in M(K)=\operatorname{Hom}_{K-{\rm gp}}(G_{K},H_{K})$, the morphism $\gamma_{f}:H_{K}\longrightarrow M_{K},\quad h\longmapsto(g\mapsto hf(g)h^{-1})$ is smooth. 3. (iii) For any field extension $K/k$ and any $f\in M(K)$, we have $H^{1}(G_{K},\operatorname{Lie}(H_{K}))=0,$ where $\operatorname{Lie}(H_{K})$ is a $G_{K}$-module via $\operatorname{Ad}\circ f$. These conditions hold whenever $G^{{\rm aff}}$ is linearly reductive. ###### Proof. (i)$\Rightarrow$(ii) Just observe that $\gamma$ is a morphism of $M$-schemes relative to the second projections; moreover, $\gamma_{f}$ is obtained from $\gamma$ by base change via $f:\operatorname{Spec}(K)\to M$. (ii)$\Rightarrow$(i) This follows from the above observation together with the fiberwise criterion for smoothness (see [EGA, IV.17.8.2]). (ii)$\Rightarrow$(iii) Since $\gamma_{f}$ is smooth at $e_{H}$, its differential at this point is surjective. This yields the assertion in view of Lemma 3.3. (iii)$\Rightarrow$(ii) We may assume that $K=k$ is algebraically closed. Then $H$ is a disjoint union of $H(k)$-cosets of the neutral component $H^{0}$, and hence we may further assume that $H$ is a smooth connected algebraic group. Also, $f\in M(k)$ and the morphism $\gamma_{f}:H\to M$ has a surjective differential at $e_{H}$ by Lemma 3.3 again. 
We now adapt to this setting some standard considerations for actions of smooth algebraic groups on schemes of finite type (recall that $M$ is locally of finite type). Consider the schematic image $X$ of $\gamma_{f}$; this is a closed integral subscheme of $M$, stable under $H(k)$ and hence under $H$ (see [DG70, II.5.3.2]). Moreover, $\gamma_{f}$ factors uniquely through a dominant morphism $\varphi:H\to X$, equivariant for the action of $H$ by left multiplication on itself. By generic flatness (see [EGA, IV.6.9.1]), there exists a dense open subscheme $U$ of $X$ such that the pullback $\varphi^{-1}(U)\to U$ is flat. Since the $H(k)$-translates of $\varphi^{-1}(U)$ cover $H$, it follows that $\varphi$ is flat. The fiber of $\varphi$ at $e_{H}$ is the isotropy subgroup scheme ${\rm Stab}_{H}(f)$, which is smooth in view of the vanishing of $H^{1}(G,\operatorname{Lie}(H))$ (see [DG70, II.5.2.8]). Thus, $\varphi$ is smooth at $e_{H}$ (see [EGA, IV.17.5.1]) and hence everywhere by equivariance. So the image of $\varphi$ (the orbit $H\cdot f$) is open in $X$ and smooth (see [EGA, IV.2.4.6, IV.6.8.3]); in particular, $H\cdot f$ is locally closed in $M$. Also, $T_{f}(H\cdot f)=T_{f}(M)$ by Lemma 3.3 again. We now claim that the natural homomorphism of local rings $u:{\mathcal{O}}_{M,f}\longrightarrow{\mathcal{O}}_{H\cdot f,f}$ is an isomorphism. Indeed, $u$ is clearly surjective. Consider the associated graded homomorphism ${\rm gr}(u):{\rm gr}({\mathcal{O}}_{M,f})\longrightarrow{\rm gr}({\mathcal{O}}_{H\cdot f,f}).$ The right-hand side is a polynomial ring in $n$ generators of degree $1$, where $n=\dim(H\cdot f)$. Also, ${\rm gr}(u)$ induces an isomorphism on subspaces of degree $1$. As the left-hand side is generated in degree $1$, it follows that ${\rm gr}(u)$ is an isomorphism. As a consequence, $u$ is injective, proving the claim. By this claim, $H\cdot f$ contains an open neighborhood of $f$ in $M$. 
Using equivariance again, it follows that $H\cdot f$ is open in $M$. Since the morphism $H\to H\cdot f$ is smooth, so is $\gamma_{f}$. Finally, if $G^{{\rm aff}}$ is linearly reductive, then Remark 3.4(ii) and [DG70, II.3.3.7] yield the vanishing of $H^{1}(G_{K},\operatorname{Lie}(H_{K}))$ for any field extension $K/k$. ∎ ## 4\. Representability: first steps ### 4.1. Morphisms to étale group schemes For any group scheme $G$, we denote by $G^{0}$ its neutral component, i.e., the connected component of $e_{G}$. Recall that $G^{0}$ is an algebraic group, and is the kernel of the homomorphism $\gamma:G\longrightarrow\pi_{0}(G),$ where $\pi_{0}(G)$ is the étale group scheme of connected components. Moreover, $\gamma$ is faithfully flat (see [DG70, II.5.1.8]), and hence a $G^{0}$-torsor. ###### Proposition 4.1. Let $G$ be an algebraic group, and $H$ an étale group scheme. 1. (i) The pullback $\gamma^{*}:\mathbf{Hom}(\pi_{0}(G),H)\to\mathbf{Hom}(G,H)$ is an isomorphism. 2. (ii) The functor $\mathbf{Hom}(G,H)$ is represented by an étale group scheme $N$. If $H$ is finite, then $N$ is finite as well. ###### Proof. (i) By Corollary 2.5, $\gamma^{*}$ identifies $\mathbf{Hom}(\pi_{0}(G),H)$ with the subfunctor of $G^{0}$-invariants in $\mathbf{Hom}(G,H)$. So it suffices to show that for any scheme $S$, every morphism $G\times S\to H$ is invariant under $G^{0}$. For this, we may assume $k$ algebraically closed by descent. Then the group schemes $\pi_{0}(G)$ and $H$ are constant; moreover, $G=\coprod_{i\in I}g_{i}G^{0}$ for some family $(g_{i})_{i\in I}$ of $G(k)$, where $I=\pi_{0}(G)(k)$. So we may further assume that $G$ is connected. Then it suffices to show that every morphism $f:G\times S\to H$ that sends $e_{G}\times S$ to $e_{H}$ is constant (Lemma 2.6). We may assume that $S$ is connected; then $G\times S$ is connected as well (see e.g. [SGA3, Exp. V, Lem. 2.1.2]). 
The schematic fiber of $f$ at $e_{H}$ is open and closed in $G\times S$, and contains $e_{G}\times S$; so this fiber is the whole $G\times S$ as desired. (ii) In view of (i), we may replace $G$ with $\pi_{0}(G)$, and hence assume that $G$ is finite and étale. Using Galois descent, we may further assume that $G$ is constant. Then $\mathbf{Hom}(G,H)$ is the constant group scheme associated with the group of maps $G(k)\to H(k)$. ∎ ###### Corollary 4.2. Let $G$ be a connected algebraic group, and $H$ a group scheme. Then the inclusion of $H^{0}$ in $H$ induces isomorphisms $\mathbf{Hom}(G,H^{0};e_{G}\mapsto e_{H})\stackrel{{\scriptstyle\sim}}{{\longrightarrow}}\mathbf{Hom}(G,H;e_{G}\mapsto e_{H}),$ $\mathbf{Hom}_{{\rm gp}}(G,H^{0})\stackrel{{\scriptstyle\sim}}{{\longrightarrow}}\mathbf{Hom}_{{\rm gp}}(G,H).$ ###### Proof. By Proposition 4.1, the natural morphism $\pi_{0}(H)\to\mathbf{Hom}(G,\pi_{0}(H))$ is an isomorphism. Thus, $\mathbf{Hom}(G,\pi_{0}(H);e_{G}\mapsto e_{\pi_{0}(H)})=\operatorname{Spec}(k)$. Together with the exact sequence $1\to H^{0}\to H\to\pi_{0}(H)$ and with Lemma 2.7, this yields the assertions. ∎ ### 4.2. Two finiteness notions We introduce finiteness notions on schemes, which will be very convenient for proving Theorem 6.3. We say that a scheme $X$ satisfies (FT) (resp. (AFT)) if every connected component of $X$ is of finite type (resp. affine of finite type). Since $X$ is locally of finite type, its connected components are open (see [EGA, I.Cor. 6.1.9]). Thus, (FT) (resp. (AFT)) is equivalent to $X$ being a sum (in the sense of [EGA, I.3.1]) of schemes of finite type (resp. affine of finite type). We now record some basic properties of these notions, with no attempt for exhaustivity. ###### Lemma 4.3. Consider a group scheme $G$ and two schemes $X$, $Y$. 1. (i) $G$ satisfies (FT). Also, $G$ satisfies (AFT) if and only if $G^{0}$ is affine. 2. (ii) If $X$ is étale, then it satisfies (AFT). 3. (iii) If $X$ and $Y$ satisfy (FT) (resp. 
(AFT)), then so does $X\times Y$. 4. (iv) If $X$ satisfies (FT) (resp. (AFT)), then so does the $K$-scheme $X_{K}$ for any field extension $K/k$. 5. (v) If there exists a finite Galois extension $K/k$ such that $X_{K}$ satisfies (FT) (resp. (AFT)), then so does $X$. 6. (vi) If $Y$ is a closed subscheme of $X$, and $X$ satisfies (FT) (resp. (AFT)), then so does $Y$. ###### Proof. (i) The assertion on (FT) follows from the “theorem of the neutral component” (see [DG70, II.5.1.1] or [SGA3, Exp. VIA, Cor. 2.4.1]). If $G$ satisfies (AFT), then $G^{0}$ is clearly affine. Conversely, assume that $G^{0}$ is affine, and consider a connected component $C$ of $G$. Then $C_{\bar{k}}$ is the sum of finitely many translates of $G^{0}_{\bar{k}}$ and hence is affine. By descent, it follows that $C$ is affine as well. (ii) Just recall that every connected component of $X$ is of the form $\operatorname{Spec}(K)$ for some finite (separable) field extension $K/k$. (iii) This follows from the fact that finite products commute with sums (see [EGA, I.3.2.8]), and preserve the properties of being of finite type (resp. affine of finite type); see [EGA, I.Prop. 6.3.4]. (iv) This is checked by a similar argument. (v) Assume that $X_{K}$ satisfies (FT) for $K/k$ finite Galois with group $\Gamma$. Then $\Gamma$ acts on $X_{K}$, and permutes its connected components. Thus, $X_{K}$ is a sum of $\Gamma$-stable schemes of finite type. By Galois descent for subschemes, it follows that $X$ is a sum of schemes of finite type, i.e., it satisfies (FT). The same argument works for (AFT). (vi) Let $Y^{\prime}$ be a connected component of $Y$. Then $Y^{\prime}$ is a closed subscheme of a unique connected component $X^{\prime}$ of $X$. So the assertion follows from [EGA, I.Prop. 6.3.4] again. ∎ ### 4.3. Morphisms from finite group schemes We first record an easy observation: ###### Lemma 4.4. Let $X$ be a finite scheme, and $H$ a group scheme. 
Then the functor $\mathbf{Hom}(X,H)$ is represented by a group scheme. ###### Proof. Recall the isomorphism $\mathbf{Hom}(X,H)\simeq\mathbf{R}_{X/k}(H)$, where $\mathbf{R}_{X/k}$ denotes the Weil restriction functor. By Lemma 4.3, every finite subset of the underlying topological space of $H$ is contained in an open affine subscheme of finite type. The representability of $\mathbf{R}_{X/k}(H)$ by a scheme (locally of finite type) follows from this by [DG70, I.1.6.6] and its proof. ∎ Next, let $G$, $H$ be group schemes, where $G$ is finite. Combining Lemmas 2.8, 4.3 and 4.4, we see that $\mathbf{Hom}_{{\rm gp}}(G,H)$ is represented by a scheme $M$ satisfying (FT). If $H$ is algebraic, then $M$ is of finite type. Also, recall from [DG70, II.4.7.1] that a group scheme $G$ is _infinitesimal_ if $G$ is finite and $e_{G}$ is its unique point. ###### Proposition 4.5. With the above notation and assumptions, the scheme $M$ is affine of finite type under either of the following conditions: 1. (i) $G$ is infinitesimal. 2. (ii) $H$ is affine. 3. (iii) $H$ is connected. ###### Proof. (i) We may assume that ${\rm char}(k)=p>0$. For any scheme $X$, we denote by $F_{X}:X\to X^{(p)}$ the relative Frobenius morphism and by $F^{n}_{X}:X\longrightarrow X^{(p^{n})}$ its $n$th iterate, where $n$ is a positive integer. If $X$ is a group scheme, then $F^{n}_{X}$ is a homomorphism; we denote by $X_{n}$ its kernel (the $n$th Frobenius kernel). By assumption, we have $G=G_{n}$ for $n\gg 0$. 
For any scheme $S$ and any homomorphism $f:G\times S\to H$, we have a commutative diagram $\begin{array}{ccc}G\times S&\stackrel{f}{\longrightarrow}&H\\ {\scriptstyle(\pi_{G},F^{n}_{S})}\downarrow&&\downarrow{\scriptstyle F^{n}_{H}}\\ S^{(p^{n})}&\stackrel{f^{(n)}}{\longrightarrow}&H^{(p^{n})}.\end{array}$ Thus, we have identically on $G\times S$ $F^{n}_{H}(f(g,s))=f^{(n)}(F^{n}_{S}(s))=F^{n}_{H}f(e_{G},s)=e_{H^{(p^{n})}}.$ So $f$ factors uniquely through $H_{n}$. Thus, $\mathbf{Hom}_{{\rm gp}}(G,H_{n})\stackrel{{\scriptstyle\sim}}{{\longrightarrow}}\mathbf{Hom}_{{\rm gp}}(G,H)$. As $G$ and $H_{n}$ are finite, we may view $\mathbf{Hom}_{{\rm gp}}(G,H_{n})$ as the functor of Hopf algebra homomorphisms ${\mathcal{O}}(H_{n})\to{\mathcal{O}}(G)$. This is a closed subfunctor of the functor of morphisms of $k$-modules ${\mathcal{O}}(H_{n})\to{\mathcal{O}}(G)$, and the latter functor is represented by an affine space. (ii) Since Hom functors commute with base change, we may assume $k$ algebraically closed. Then $G\simeq I\rtimes F$, where $I=G^{0}$ is infinitesimal, and $F\simeq\pi_{0}(G)$ is the constant group scheme associated with $G(k)$ (see e.g. [DG70, II.5.2.4]). In view of Lemma 2.9, we may thus assume in addition that $G$ is either infinitesimal or constant. In the former case, we conclude by (i). In the latter case, $\mathbf{Hom}(G,H)\simeq H^{n}$, where $n$ denotes the order of $G$. Thus, $\mathbf{Hom}(G,H)$ is affine of finite type, and hence so is $\mathbf{Hom}_{{\rm gp}}(G,H)$. (iii) By arguing as in the proof of (ii), we reduce to the case where $k$ is algebraically closed and $G$ is constant of order $n$. Then $g^{n}=e_{G}$ identically on $G$.
Thus, for any scheme $S$, every homomorphism $f:G\times S\to H$ factors uniquely through the schematic fiber at $e_{H}$ of the $n$th power map of $H$. Denoting this fiber by $H[n]$, it follows that $\mathbf{Hom}_{{\rm gp}}(G,H)$ is isomorphic to a closed subscheme of $\mathbf{Hom}(G,H[n])\simeq H[n]^{n}$. So it suffices to show that $H[n]$ is affine. As $H$ is connected, there exists an exact sequence of algebraic groups (4.1) $1\longrightarrow N\longrightarrow H\stackrel{{\scriptstyle q}}{{\longrightarrow}}A\longrightarrow 1,$ where $N$ is affine and $A$ is an abelian variety (see [Ray70, Lem. IX.2.7]). Thus, $H[n]$ is a closed subscheme of the pullback $q^{-1}(A[n])$. Since $A[n]$ is a finite group scheme and $q$ is affine, this yields the desired statement. ∎ ###### Example 4.6. Let $G$ be the constant group scheme of order $2$, and $H=E\rtimes G$, where $E$ is an elliptic curve on which $G$ acts by $\pm 1$. Then $H$ is a smooth proper non-connected algebraic group, and one readily checks that $\mathbf{Hom}_{{\rm gp}}(G,H)$ is represented by $H[2]\simeq E[2]\coprod E$. So $\mathbf{Hom}_{{\rm gp}}(G,H)$ is not affine. ###### Example 4.7. Assume that $p>0$ and consider the infinitesimal group scheme $\alpha_{p}$, the kernel of the Frobenius endomorphism of ${\mathbb{G}}_{a}$. For any group scheme $H$, the functor $\mathbf{Hom}_{{\rm gp}}(\alpha_{p},H)$ is represented by the fiber at $0$ of the $p$th power map of $\operatorname{Lie}(H)$, as follows from [SGA3, Exp. VII, Thm. 7.2] or [DG70, II.7.4.2]. For example, $\mathbf{Hom}_{{\rm gp}}(\alpha_{p},{\mathbb{G}}_{a})$ is represented by the affine line. ### 4.4. Homomorphisms from tori The following result is a version of Grothendieck’s representability theorem stated in the introduction (see [SGA3, Exp. XI, Thm. 4.2]): ###### Proposition 4.8. Let $G$ be a torus, and $H$ a group scheme. Then the functor $\mathbf{Hom}_{{\rm gp}}(G,H)$ is represented by a scheme satisfying (AFT). ###### Proof. 
We first consider the case where $H$ is an abelian variety. We claim that for any scheme $S$, every homomorphism $f:G\times S\to H$ is constant. To show this, we adapt a classical rigidity argument. By descent, we may assume that $k$ is algebraically closed. Also, we may assume that $S$ is connected. Choose a positive integer $n$ prime to $p$. Then the $n$-torsion subgroups $G[n]$, $H[n]$ are finite and constant. For any $g\in G[n](k)$, the morphism $S\to H[n]$, $s\mapsto f(g,s)$ is constant. Choose $s_{0}\in S(k)$; then we have $f(g,s)=f(g,s_{0})$ identically on $G[n]\times S$. Since the family of the $G[n]$ for $n$ as above is schematically dense in $G$, the family of the $G[n]\times S$ is schematically dense in $G\times S$ (see [EGA, IV.11.10.6]). Thus, $f(g,s)=f(g,s_{0})$ identically on $G\times S$. Also, the morphism $G\to H$, $g\mapsto f(g,s_{0})$ is a homomorphism; it follows that $f(g,s_{0})=e_{H}$ identically on $G$, since $G$ is a torus and $H$ an abelian variety. This yields the claim. We now consider the general case. By Corollary 4.2, we may assume that $H$ is a connected algebraic group. Thus, $H$ lies in an exact sequence of the form (4.1). Using the above claim together with Lemma 2.7, we may further assume that $H$ is affine. Then $H$ is isomorphic to a closed subgroup scheme of some general linear group $\operatorname{GL}_{n}$. In view of Lemmas 2.1, 2.8 and 4.3, we may thus reduce to the case where $H=\operatorname{GL}_{n}$. Also, since the torus $G$ is split by a finite Galois extension of $k$, we may assume that $G\simeq{\mathbb{G}}_{m}^{r}$ by using Lemma 4.3 again. For any homomorphism $f:G\times S\to\operatorname{GL}_{n}$, the ${\mathcal{O}}_{S}$-module ${\mathcal{O}}_{S}^{n}$ is a $G$-module via $(f,{\rm id})$, and hence is the direct sum of its weight submodules (see [SGA3, Exp. I, Prop. 4.7.3]).
It follows that $\mathbf{Hom}_{{\rm gp}}(G,H)$ is represented by the scheme $\coprod_{(n_{1},\ldots,n_{s})}S_{s}\times\operatorname{GL}_{n}/\operatorname{GL}_{n_{1}}\times\cdots\times\operatorname{GL}_{n_{s}},$ where $(n_{1},\ldots,n_{s})$ runs over the tuples of positive integers with sum $n$, and $S_{s}\subset({\mathbb{Z}}^{r})^{s}$ denotes the set of $s$-tuples of pairwise distinct weights (viewed as a constant scheme). Since every homogeneous space $\operatorname{GL}_{n}/\operatorname{GL}_{n_{1}}\times\cdots\times\operatorname{GL}_{n_{s}}$ is affine of finite type, this completes the proof. ∎ ###### Corollary 4.9. Let $G$ be a reductive algebraic group, and $H$ a group scheme. Then the functor $\mathbf{Hom}_{{\rm gp}}(G,H)$ is represented by a scheme $M$ satisfying (FT). If $H$ is affine, then $M$ satisfies (AFT). ###### Proof. By Corollary 4.2, we may assume that $H$ is a connected algebraic group. Also, we may choose a maximal torus $T$ of $G$. Then $\mathbf{Hom}_{{\rm gp}}(T,H)$ is representable by Proposition 4.8; moreover, the morphism of functors $u:\mathbf{Hom}_{{\rm gp}}(G,H)\longrightarrow\mathbf{Hom}_{{\rm gp}}(T,H)$ is relatively representable by a morphism locally of finite presentation, as a consequence of [SGA3, Exp. XXIV, Prop. 7.2.1]. Thus, $\mathbf{Hom}_{{\rm gp}}(G,H)$ is representable by a scheme $M$ (locally of finite type). To show that $M$ satisfies (FT), recall that $T_{K}$ is split for some finite Galois field extension $K/k$. Using Galois descent and Lemma 4.3 (v), we may therefore assume that $T$ is split. We now use [SGA3, Exp. XXIV, Cor. 7.1.9], which asserts that the above morphism $u$ satisfies every property ${\mathcal{P}}$ of morphisms which is stable by composition and base change, and holds for closed immersions and for the structural morphism $\pi_{H}$. In view of the assumption on $H$ and [EGA, I.Prop. 6.3.4], we may take for ${\mathcal{P}}$ the property of being of finite type. 
Also, $\mathbf{Hom}_{{\rm gp}}(T,H)$ satisfies (FT) by Proposition 4.8 again. It follows readily that $\mathbf{Hom}_{{\rm gp}}(G,H)$ satisfies (FT). The proof for (AFT) is obtained similarly by taking for ${\mathcal{P}}$ the property of being affine of finite type. ∎ ### 4.5. Morphisms from abelian varieties The following result is a consequence of the rigidity lemma proved in the next section (Lemma 5.2). We provide a short and direct proof. ###### Proposition 4.10. Let $G$ be an abelian variety, and $H$ a group scheme. 1. (i) $H$ has a largest abelian subvariety $H_{{\rm ab}}$. 2. (ii) We have $\mathbf{Hom}_{{\rm gp}}(G,H_{{\rm ab}})=\mathbf{Hom}(G,H_{{\rm ab}};e_{G}\mapsto e_{H})=\mathbf{Hom}(G,H;e_{G}\mapsto e_{H})$ and this functor is represented by a commutative étale group scheme. ###### Proof. (i) Clearly, we may assume that $H$ is connected, and hence an algebraic group. Let $A$, $B$ be abelian subvarieties of $H$, and assume that $A$ has maximal dimension among all such subvarieties. Then $A$, $B$ are both contained in $H_{{\rm ant}}$, and hence centralize each other (since every anti-affine group is commutative). Thus, the morphism $A\times B\to H$, $(a,b)\mapsto ab$ is a homomorphism, and its image is an abelian subvariety $C$ of $H$. By maximality, we have $C=A$, and hence $B\subset A$. (ii) By Corollary 4.2, we may again assume that $H$ is a connected algebraic group. Then the scheme $H$ is quasi-projective (see [Ray70, Cor. VI.2.6]); also, $G$ is projective. As a consequence, the functor $\mathbf{Hom}(G,H)$ is represented by a quasi-projective scheme $M$ (see [Gro61, p. 268]). Thus, the closed subfunctor $\mathbf{Hom}(G,H;e_{G}\mapsto e_{H})$ is represented by a closed subscheme $N$ of $M$. Moreover, the formation of $N$ commutes with base change by field extensions. We now show that $N$ is étale. For this, we may assume that $k$ is algebraically closed in view of [EGA, IV.17.7.3]. 
Since $N$ is locally of finite type, it suffices to show that $T_{f}(N)=0$ for any $f\in N(k)$. But this follows from the isomorphism (3.4), since every morphism $G\to\operatorname{Lie}(H)$ is constant. We have a chain of closed subfunctors $\mathbf{Hom}_{{\rm gp}}(G,H_{{\rm ab}})\subset\mathbf{Hom}(G,H_{{\rm ab}};e_{G}\mapsto e_{H})\subset\mathbf{Hom}(G,H;e_{G}\mapsto e_{H}).$ Moreover, the resulting inclusions of sets of $K$-points are equalities for any algebraically closed field $K$, in view of (i) and [Mum08, §4, Cor. 1]. Since $\mathbf{Hom}(G,H;e_{G}\mapsto e_{H})$ is represented by an étale scheme, these inclusions of subfunctors are equalities. As $\mathbf{Hom}_{{\rm gp}}(G,H_{{\rm ab}})$ is a commutative group functor, this yields the assertion. ∎ It is shown in [LS21, Thm. 7.2] that the formation of $H_{{\rm ab}}$ commutes with base change by field extensions; we will not need this fact. ## 5\. Proof of Theorem 1 We begin with an easy observation: ###### Lemma 5.1. Let $G$ be a group scheme. 1. (i) If $\mathbf{Hom}(G,H)$ is representable for some non-étale group scheme $H$, then the vector space ${\mathcal{O}}(G)$ is finite-dimensional. 2. (ii) If $\mathbf{Hom}(G,{\mathbb{G}}_{a};e_{G}\mapsto 0)=\mathbf{Hom}_{{\rm gp}}(G,{\mathbb{G}}_{a})$, then $G$ is anti-affine. ###### Proof. (i) Consider the constant morphism $f:G\to H$ with image $e_{H}$. Then $T_{f}\mathbf{Hom}(G,H)\simeq{\mathcal{O}}(G)\otimes\operatorname{Lie}(H)$ by Lemma 3.1; also, $\operatorname{Lie}(H)\neq 0$ as $H$ is not étale. If $\mathbf{Hom}(G,H)$ is representable, then its tangent space at $f$ is finite- dimensional, hence the assertion. (ii) Every $f\in{\mathcal{O}}(G)$ such that $f(e_{G})=0$ satisfies $f(g_{1}g_{2})=f(g_{1})+f(g_{2})$ identically on $G\times G$. Applying this to $f^{2}$, we obtain that $2f(g_{1})f(g_{2})=0$ identically. If $p\neq 2$, it follows that $f=0$; thus, ${\mathcal{O}}(G)=k$. If $p=2$ we consider $f^{3}$ and argue similarly. 
∎ Next, we obtain a key rigidity result: ###### Lemma 5.2. Let $X$ be a geometrically reduced scheme of finite type such that ${\mathcal{O}}(X)=k$. Let $Y$ be a geometrically connected scheme, and $f:X\times Y\to Z$ a morphism of schemes. Assume that there exist $x_{0}\in X(\bar{k})$ and $y_{0}\in Y(\bar{k})$ such that $f(x,y_{0})=f(x_{0},y_{0})$ for all $x\in X(\bar{k})$. Then $f$ factors through the projection ${\rm pr}_{Y}:X\times Y\to Y$. ###### Proof. By fpqc descent, it suffices to show that the base change $f_{\bar{k}}$ factors through $({\rm pr}_{Y})_{\bar{k}}$. Thus, we may assume that $k$ is algebraically closed. Let $W$ be the pullback of ${\rm diag}(Z)$ under the morphism $X\times Y\longrightarrow Z\times Z,\quad(x,y)\longmapsto(f(x,y),f(x_{0},y)).$ Then the equalizer $W$ is a closed subscheme of $X\times Y$. Consider the subset $Y^{\prime}$ of $Y$ consisting of those $y$ such that $X\times\{y\}\subset W$ as sets; equivalently, $X\times\{y\}\subset W$ as schemes, since $X$ is reduced. We claim that $Y^{\prime}$ is closed in $Y$. Indeed, $X\times Y^{\prime}\subset W$ as sets, and hence $\overline{X\times Y^{\prime}}\subset W$. Since the projection $X\times Y\to Y$ is open, we have $\overline{X\times Y^{\prime}}=X\times\overline{Y^{\prime}}$. Thus, $\overline{Y^{\prime}}=Y^{\prime}$, proving the claim. By this claim and the connectedness of $Y$, it suffices to show that every $y\in Y^{\prime}(k)$ admits an open neighborhood $U=U(y)$ such that $X\times U\subset W$ as schemes. Let $z=f(x_{0},y)$; then $z\in Z(k)$, and $f$ induces a morphism $f_{n}:X\times Y_{n}\longrightarrow Z_{n}$ for any positive integer $n$, where $Y_{n}$ (resp. $Z_{n}$) denotes the $n$th infinitesimal neighborhood of $y$ in $Y$ (resp. of $z$ in $Z$). 
Since $Z_{n}$ is finite, $f_{n}$ is given by an algebra homomorphism $f_{n}^{\#}:{\mathcal{O}}(Z_{n})\longrightarrow{\mathcal{O}}(X\times Y_{n}).$ But we have ${\mathcal{O}}(Y_{n})\stackrel{{\scriptstyle\sim}}{{\longrightarrow}}{\mathcal{O}}(X\times Y_{n})$ since $X$ is of finite type, ${\mathcal{O}}(X)=k$ and $Y_{n}$ is finite (see [DG70, I.2.2.6]). Thus, $f_{n}$ factors through ${\rm pr}_{Y_{n}}:X\times Y_{n}\to Y_{n}$. So $f(x,y)=f(x_{0},y)$ identically on $X\times Y_{n}$, i.e., $X\times Y_{n}\subset W$. By Krull’s intersection theorem, the family $(Y_{n})_{n\geq 1}$ is schematically dense in an open neighborhood $U$ of $y$ in $Y$. Thus, the family $(X\times Y_{n})_{n\geq 1}$ is schematically dense in $X\times U$ in view of [EGA, IV.11.10.6]. Since $W$ is a closed subscheme of $X\times Y$, we obtain that $X\times U\subset W$ as desired. ∎ As mentioned in the introduction, the above result is a version of [SS09, Thm. 1.7]. The proof presented there (and again in [BSU13, §4.3] and [Bri17, Lem. 3.3.3]) requires some minor corrections, e.g., there is a confusion in the final step between density and schematic density. We may now obtain the following result, a version of [BSU13, Prop. 5.1.4]: ###### Proposition 5.3. Let $G$ be an anti-affine algebraic group, and $H$ a group scheme. Then $\mathbf{Hom}_{{\rm gp}}(G,H)=\mathbf{Hom}(G,H;e_{G}\mapsto e_{H})$ and this functor is represented by a form of ${\mathbb{Z}}^{n}_{k}$ for some integer $n\geq 0$. ###### Proof. To show the equality of functors, we may assume that $k$ is algebraically closed. We now adapt a classical argument (see [Mum08, p. 43]). Let $S$ be a connected scheme, and $f:G\times S\to H$ a morphism that sends $e_{G}\times S$ to $e_{H}$. Consider the morphism $F:G\times G\times S\longrightarrow H,\quad(x,y,s)\longmapsto f(xy,s)f(y,s)^{-1}f(x,s)^{-1}.$ Then $F(x,e_{G},s)=F(e_{G},x,s)=e_{H}$ for all $x\in G$ and $s\in S$. Moreover, $G\times S$ is connected. 
By Lemma 5.2, it follows that $F(x,y,s)=e_{H}$ identically on $G\times G\times S$, i.e., $f$ is a homomorphism. It remains to show that $\mathbf{Hom}_{{\rm gp}}(G,H)$ is represented by a form of some ${\mathbb{Z}}^{n}_{k}$. We first treat the case where $k$ is separably closed. Consider again a connected scheme $S$, and let $f:G\times S\longrightarrow H$ be a homomorphism. Choose an $s_{0}\in S(\bar{k})$; then the morphism of $\bar{k}$-schemes $G_{\bar{k}}\times_{\bar{k}}S_{\bar{k}}\longrightarrow H_{\bar{k}},\quad(x,s)\longmapsto f(x,s)f(x,s_{0})^{-1}$ sends $G_{\bar{k}}\times_{\bar{k}}s_{0}$ to $e_{H}$, and hence factors through the projection $G_{\bar{k}}\times_{\bar{k}}S_{\bar{k}}\to S_{\bar{k}}$ by Lemma 5.2. Since $f(e_{G},s)=e_{H}$ identically on $S$, it follows that $f(g,s)=f(g,s_{0})$ identically on $G_{\bar{k}}\times_{\bar{k}}S_{\bar{k}}$. Thus, $f_{\bar{k}}$ factors through the projection $G_{\bar{k}}\times_{\bar{k}}S_{\bar{k}}\to G_{\bar{k}}$. By fpqc descent, it follows that $f$ factors through the projection $G\times S\to G$. This shows that $\mathbf{Hom}_{{\rm gp}}(G,H)$ is represented by the constant scheme $\operatorname{Hom}_{{\rm gp}}(G,H)_{k}$. Since $\operatorname{Hom}_{{\rm gp}}(G,H^{0}_{{\rm ant}})\stackrel{{\scriptstyle\sim}}{{\to}}\operatorname{Hom}_{{\rm gp}}(G,H)$, this yields the assertion in view of [Bri09, Lem. 1.5(ii)]. The case of an arbitrary field $k$ follows by Galois descent. Indeed, for any $f\in\operatorname{Hom}_{k_{s}-{\rm gp}}(G_{k_{s}},H_{k_{s}})$, there exist a finite Galois extension $K/k$ and a homomorphism of $K$-group schemes $\varphi:G_{K}\to H_{K}$ such that $f=\varphi_{k_{s}}$, since $f$ factors through the algebraic group $H^{0}_{k_{s}}$. ∎ Completion of the proof of Theorem 1. If $\mathbf{Hom}(G,H)$ is representable, then the vector space ${\mathcal{O}}(G)$ is finite-dimensional by Lemma 5.1. Conversely, assume that ${\mathcal{O}}(G)$ is finite-dimensional. 
Then we have an exact sequence of algebraic groups $1\longrightarrow G_{{\rm ant}}\longrightarrow G\longrightarrow F\longrightarrow 1,$ where $F$ is finite. By [Bri15, Thm. 1.1], there exists a finite subgroup scheme $F^{\prime}\subset G$ such that $G=G_{{\rm ant}}F^{\prime}$. This yields an exact sequence of algebraic groups $1\longrightarrow F^{\prime\prime}\longrightarrow G_{{\rm ant}}\rtimes F^{\prime}\longrightarrow G\longrightarrow 1,$ where $F^{\prime\prime}$ is finite as well. In view of Corollary 2.5, we may thus assume that $G=G_{{\rm ant}}\rtimes F^{\prime}$. Then $G\simeq G_{{\rm ant}}\times F^{\prime}$ as schemes, and hence $\mathbf{Hom}(G,H)\simeq\mathbf{Hom}(G_{{\rm ant}},\mathbf{Hom}(F^{\prime},H)).$ Since $\mathbf{Hom}(F^{\prime},H)$ is represented by a group scheme (Lemma 4.4), we may further assume that $G$ is anti-affine. Then the assertion follows from Lemma 2.6 and Proposition 5.3. ## 6\. Proof of Theorem 2 We begin with some observations and structure results on the class of linearly reductive groups. We will also consider the class of _semi-reductive_ groups: we say that an algebraic group is semi-reductive if its affinization $G^{{\rm aff}}$ is an extension of a finite group scheme by a reductive group scheme. Both classes turn out to be closely related. The _affine_ linearly reductive groups are well-understood: if $p=0$, they are exactly the extensions of finite group schemes by reductive group schemes, i.e., the affine semi-reductive groups (see e.g. [DG70, IV.3.3.3]). If $p>0$, then the affine linearly reductive groups are exactly the extensions $1\longrightarrow H\longrightarrow G\longrightarrow F\longrightarrow 1,$ where $F$ is a finite group scheme of order prime to $p$ and $H$ is a connected group scheme of multiplicative type (see [DG70, IV.3.3.6]). Moreover, $H$ has a largest subtorus $T$, with Cartier dual being the quotient of the Cartier dual of $H$ by its torsion subgroup (see [DG70, IV.1.3]). 
As a consequence, $T$ is characteristic in $H$, and hence normal in $G$. So the affine linearly reductive groups are exactly the extensions of finite linearly reductive groups by tori. Thus, every affine linearly reductive group is semi- reductive, but the converse fails (if $p>0$ again). As a consequence, _every linearly reductive group is semi-reductive; the converse holds if and only if $p=0$._ If $p>0$ then the linearly reductive groups are exactly the extensions of finite linearly reductive groups by semi- abelian varieties (as follows from the fact that every anti-affine group is a semi-abelian variety, see [Bri09, Prop. 2.2]). In particular, the smooth connected linearly reductive groups are exactly the semi-abelian varieties. We now discuss the behavior of both classes under base change by a field extension $K/k$. By [Mar09, Prop. 3.2], an affine algebraic group $G$ is linearly reductive if and only if so is $G_{K}$. Clearly, this invariance property also holds for anti-affine algebraic groups. In view of the affinization theorem, it follows that _an algebraic group $G$ is linearly reductive if and only if so is $G_{K}$._ Also, _if an algebraic group $G$ is semi-reductive, then so is $G_{K}$._ The converse holds if $K/k$ is separable algebraic (by Galois descent), but fails for purely inseparable extensions. For example, if $k$ is separably closed but not algebraically closed, then there exists a non-split extension of ${\mathbb{G}}_{m}$ by the infinitesimal group scheme $\alpha_{p}$, see [SGA3, Exp. XVII, 5.9 c)]. As every such extension splits over $\bar{k}$, this yields an example of an affine algebraic group $G$ such that $G_{\bar{k}}$ is semi- reductive, but $G$ is not. Next, we discuss the behavior of both classes under taking quotients, normal subgroup schemes and extensions. Clearly, _every quotient of a linearly reductive group is linearly reductive. 
The class of semi-reductive groups is also stable under quotients,_ since so are the classes of anti-affine, reductive and finite group schemes. By [Mar09, Prop. 3.4], the affine linearly reductive groups are also stable by normal subgroup schemes and extensions. But the class of linearly reductive groups is not stable under normal subgroup schemes. To see this if $p=0$, consider a non-trivial extension $G$ of an elliptic curve $E$ by ${\mathbb{G}}_{a}$; then $G$ is anti-affine (see [Bri09, Prop. 2.3]), but of course ${\mathbb{G}}_{a}$ is not. If $p>0$, let $H=\alpha_{p}\rtimes{\mathbb{G}}_{m}$, where ${\mathbb{G}}_{m}$ acts on $\alpha_{p}$ as its automorphism group. Also, let $E$ be a supersingular elliptic curve; then $\alpha_{p}$ is isomorphic to a subgroup of $E$. Consider the pushout $\begin{array}{ccccccccc}1&\longrightarrow&\alpha_{p}&\longrightarrow&H&\longrightarrow&{\mathbb{G}}_{m}&\longrightarrow&1\\ &&\downarrow&&\downarrow&&\|&&\\ 1&\longrightarrow&E&\longrightarrow&G&\longrightarrow&{\mathbb{G}}_{m}&\longrightarrow&1.\end{array}$ Then $G$ is linearly reductive, but $H$ is not. These examples also show that the class of semi-reductive groups is not stable under normal subgroup schemes. _The class of linearly reductive groups is stable by extensions_. Indeed, by a standard argument, an algebraic group $G$ is linearly reductive if and only if the fixed point functor $V\mapsto V^{G}$ (from finite-dimensional $G$-modules to vector spaces) is exact. 
Moreover, for any exact sequence of algebraic groups $1\to N\to G\to Q\to 1$ and any $G$-module $V$, we have $V^{G}=(V^{N})^{Q}$. In particular, if $p=0$ then the class of semi-reductive groups is stable under extensions. This fails if $p>0$ in view of the following: ###### Example 6.1. Let $V$ be a vector space of finite dimension $n\geq 1$. Consider the relative Frobenius morphism $F_{V/k}:V\to V^{(p)}$ and denote by $U$ its kernel; then $U$ is an infinitesimal unipotent subgroup scheme of $V$, normalized by the natural action of $\operatorname{GL}(V)$. Form the semi-direct product $G=U\rtimes\operatorname{GL}(V)$; then $G$ is clearly an extension of a reductive group scheme by a finite group scheme. But there is no exact sequence $1\longrightarrow R\longrightarrow G\longrightarrow F\longrightarrow 1,$ where $R$ is reductive and $F$ is finite. Otherwise, $R$ is the reduced subscheme $G^{0}_{{\rm red}}$, and hence $R=\operatorname{GL}(V)$. As $R\triangleleft G$, it follows that $\operatorname{GL}(V)$ centralizes $U$, a contradiction. Next, we obtain several criteria for semi-reductivity: ###### Proposition 6.2. Let $G$ be an algebraic group. Consider the conditions: 1. (i) $G$ is semi-reductive. 2. (ii) There exists an exact sequence of algebraic groups (6.1) $1\longrightarrow F_{1}\longrightarrow G_{1}\times G_{2}\longrightarrow G\longrightarrow F_{2}\longrightarrow 1,$ where $F_{1}$, $F_{2}$ are finite, $G_{1}$ is anti-affine, and $G_{2}$ is reductive. 3. (iii) The affinization $G^{{\rm aff}}$ is linearly reductive. 4. (iv) $G$ is smooth and $G_{\bar{k}}$ has no non-trivial smooth connected unipotent normal subgroup. Then (iii)$\Rightarrow$(i)$\Leftrightarrow$(ii) and (iv)$\Rightarrow$(i). If $p=0$ then (i), (ii), (iii) are equivalent. If $p>0$ then (i), (ii), (iv) are equivalent for $G$ smooth. ###### Proof. 
(ii)$\Rightarrow$(i) Cut the long exact sequence (6.1) in two short exact sequences (6.2) $1\longrightarrow F_{1}\longrightarrow G_{1}\times G_{2}\longrightarrow G_{0}\longrightarrow 1,\quad 1\longrightarrow G_{0}\longrightarrow G\longrightarrow F_{2}\longrightarrow 1,$ where $G_{0}$ is the largest smooth connected normal subgroup scheme of $G$. Denote by $N$ the schematic image of $G_{1}$ in $G_{0}$; then $N=(G_{0})_{{\rm ant}}$ in view of [Bri17, Lem. 3.3.6]. Thus, $N=G_{{\rm ant}}$ is normal in $G$, and $G/N=G^{{\rm aff}}$ is an extension of the finite group scheme $F_{2}$ by the schematic image of $G_{2}$ in $G_{0}$; this image is a reductive group scheme. This yields the assertion. (iii)$\Rightarrow$(i) Just recall that every linearly reductive group is semi- reductive. (iv)$\Rightarrow$(i) It suffices to show that $G^{0}$ is semi-reductive. We claim that $G^{0}$ satisfies (iv). Indeed, $G^{0}$ is smooth since so is $G$. Consider the largest smooth connected unipotent normal subgroup $U$ of $G^{0}_{\bar{k}}$; then $U$ is normalized by $G(\bar{k})$, and hence $U\triangleleft G_{\bar{k}}$. So $U$ is trivial, proving the claim. Thus, we may assume that $G$ is connected. By the Rosenlicht decomposition (see e.g. [Bri17, Thm. 5.5.1]), there exists a smooth connected affine algebraic $\bar{k}$-group $H\triangleleft G_{\bar{k}}$ such that $G_{\bar{k}}=(G_{\bar{k}})_{{\rm ant}}H$. Since $(G_{\bar{k}})_{{\rm ant}}$ is central in $G_{\bar{k}}$, we see that the unipotent radical of $H$ is trivial, i.e., $H$ is reductive. So $G_{\bar{k}}/(G_{\bar{k}})_{{\rm ant}}$ is reductive as well. Since the formation of $G_{{\rm ant}}$ commutes with field extensions, it follows that $G/G_{{\rm ant}}$ is reductive. For the remaining implications, we treat the cases $p=0$ and $p>0$ separately as we use the structure of anti-affine groups, which differs in both cases (see [Bri09, §2]). Assume that $p=0$. 
Then (i)$\Rightarrow$(iii) follows from the linear reductivity of affine semi-reductive groups. We now show (i)$\Rightarrow$(ii). By assumption, we have an exact sequence (6.3) $1\longrightarrow G_{{\rm ant}}\longrightarrow G^{0}\longrightarrow R\longrightarrow 1,$ where $R$ is reductive. Consider again the Rosenlicht decomposition $G^{0}=G_{{\rm ant}}G^{0}_{{\rm aff}}$. By the main result of [Mo56], there exists a Levi decomposition $G^{0}_{{\rm aff}}=R_{u}(G^{0}_{{\rm aff}})\rtimes L$, where $L$ is reductive, and $R_{u}$ denotes the unipotent radical. In view of (6.3), it follows that $R_{u}(G^{0}_{{\rm aff}})$ is the largest unipotent subgroup of $G_{{\rm ant}}$, and $G^{0}=G_{{\rm ant}}L$. This yields an exact sequence $1\longrightarrow M\longrightarrow L\longrightarrow R\longrightarrow 1,$ where $M=G_{{\rm ant}}\cap L$ is central in $L$, and hence of multiplicative type. So there exists a reductive subgroup scheme $L^{\prime}$ of $L$ such that $L=ML^{\prime}$ and $M\cap L^{\prime}$ is finite. Thus, $G^{0}=G_{{\rm ant}}L^{\prime}$ and $G_{{\rm ant}}\cap L^{\prime}$ is finite. Equivalently, we have an exact sequence $1\longrightarrow G_{{\rm ant}}\cap L^{\prime}\longrightarrow G_{{\rm ant}}\times L^{\prime}\longrightarrow G^{0}\longrightarrow 1,$ which yields the assertion. Next, we assume that $p>0$, and show that (i)$\Rightarrow$(ii). Recall that $G_{{\rm ant}}$ is a semi-abelian variety (see [Bri09, Prop. 2.1]). We may assume that there is an exact sequence $1\longrightarrow G_{{\rm ant}}\longrightarrow G\longrightarrow R\longrightarrow 1$, where $R$ is reductive. In particular, $G$ is smooth and connected; hence its derived subgroup ${\rm D}(G)$ is smooth, connected and affine. Also, $R=T{\rm D}(R)$ for some central torus $T$. Thus, the pullback of $T$ in $G$ is the largest normal semi-abelian variety $G_{{\rm sab}}$; moreover, $G=G_{{\rm sab}}{\rm D}(G)$, and ${\rm D}(G)$ is reductive. 
It follows that $G=G_{{\rm ant}}L$ for some reductive subgroup scheme $L$, and we conclude as above. Still assuming that $p>0$, we show that (i)$\Rightarrow$(iv) when $G$ is smooth. We may assume that $k$ is algebraically closed. Let $U$ be a smooth connected unipotent normal subgroup of $G$; then $U\cap G_{{\rm ant}}$ is finite. As the unipotent radical of $G/G_{{\rm ant}}=G^{{\rm aff}}$ is trivial, this yields the assertion. ∎ (As already mentioned, the implication (i)$\Rightarrow$(iii) fails if $p>0$. Also, the implication (i)$\Rightarrow$(iv) fails if $p=0$, as shown again by the example of a non-trivial extension of an elliptic curve by the additive group.) We now obtain a version of Theorem 2 in arbitrary characteristic: ###### Theorem 6.3. Let $G$ be a semi-reductive algebraic group, and $H$ a group scheme. Then $\mathbf{Hom}_{{\rm gp}}(G,H)$ is represented by a scheme $M$ satisfying (FT). If $H$ is affine, then $M$ satisfies (AFT). Also, the morphism (1.1) is smooth if $G$ is linearly reductive and $H$ is smooth. ###### Proof. If $G$ is finite, then the first assertion follows from Lemmas 2.8, 4.3 and 4.4. For an arbitrary $G$, consider again the two exact sequences (6.2), where $G_{1}$ is anti-affine, $G_{2}$ is reductive, and $F_{1}$, $F_{2}$ are finite. Thus, there exists a finite subgroup scheme $F\subset G$ such that $G=G_{0}F$ (see [Bri15, Thm. 1]). By arguing as in the proof of Theorem 1 and using Corollary 2.5 and Lemma 2.9, we may thus assume that $G=G_{0}$. In particular, $G$ is connected. In view of Corollary 4.2, we may further assume that $H$ is connected. We now use again Corollary 2.5 and Lemma 2.9 to reduce to the case where $G$ is either anti-affine or reductive. In the latter case, we conclude by Corollary 4.9; in the former case, we use Proposition 5.3. This shows the assertion about Property (FT); the proof of the assertion about (AFT) is similar and left to the reader. The final assertion follows from Lemma 3.5. ∎ ###### Remark 6.4. 
Assume that $k$ is algebraically closed of characteristic $0$. Consider a semi-reductive group $G$ and a group scheme $H$. By Theorem 6.3, for any $m\in M(k)$, the orbit $H^{0}\cdot m$ is open in the connected component of $m$ in $M$. Since every such orbit is connected, it follows that the connected components of $M$ are exactly the $H^{0}$-orbits of $k$-rational points. So the set of $H$-orbits in $M$ is in one-to-one correspondence with the set of $k$-rational points of the quotient $\pi_{0}(M)/\pi_{0}(H)$, where $\pi_{0}(M)$ denotes the (constant) scheme of connected components. Also, since $k=\bar{k}$, the above set of $H$-orbits in $M$ may be identified with the orbit space $M(k)/H(k)=\operatorname{Hom}_{{\rm gp}}(G,H)/H(k)$. As a consequence, we have (6.4) $\operatorname{Hom}_{{\rm gp}}(G,H)/H(k)\stackrel{{\scriptstyle\sim}}{{\longrightarrow}}\operatorname{Hom}_{K-{\rm gp}}(G_{K},H_{K})/H(K)$ for any algebraically closed field extension $K/k$. Next, assume that $k$ is algebraically closed of characteristic $p>0$, and consider a linearly reductive group $G$ and a smooth group scheme $H$. Then all the above results hold without any change, in view of Lemma 3.5 and Theorem 6.3 again. The bijection (6.4) gives back the main result of [Mar09] (Theorem 1.1; see also [Vin96, Prop. 10]). ###### Remark 6.5. Theorem 6.3 has a partial converse: let $G$ be an algebraic group and assume that for any smooth affine algebraic group $H$, the functor $\mathbf{Hom}_{{\rm gp}}(G,H)$ is represented by a scheme $M$ such that the morphism (1.1) is smooth. Then $G$ is linearly reductive. Indeed, we have $H^{1}(G,\operatorname{Lie}(H))=0$ for any $f\in\operatorname{Hom}_{{\rm gp}}(G,H)$, where $G$ acts on $\operatorname{Lie}(H)$ via $\operatorname{Ad}\circ f$ (Lemma 3.5). As a consequence, $H^{1}(G,\operatorname{End}(M))=0$ for any finite-dimensional representation $f:G\to\operatorname{GL}(M)$. 
But every finite-dimensional $G$-module $V$ is a summand of some $\operatorname{End}(M)$: just take $M=V\oplus k$, where $G$ acts trivially on $k$. Thus, $H^{1}(G,V)=0$ for any such $V$; this yields the assertion by [DG70, II.3.3.7] together with Remark 3.4. Acknowledgments. Many thanks to Mathieu Florence, Matthieu Romagny and Antoine Vézier for their careful reading of preliminary versions and for very helpful comments. Example 6.1 was suggested by Mathieu Florence. I thank Matthieu Romagny and Peng Du for pointing out a confusion between density and schematic density in the original proof of the rigidity lemma. Also, thanks to Philippe Gille for asking a number of stimulating questions and for drawing my attention to the rigidity result of Margaux. Finally, I thank the referee for valuable remarks and comments. ## References * [Bri09] M. Brion: Anti-affine algebraic groups, J. Algebra 321 (2009), 934–952. * [Bri15] M. Brion: On extensions of algebraic groups with finite quotients, Pacific J. Math. 279 (2015), 135–153. * [Bri17] M. Brion: Some structure theorems for algebraic groups, Proc. Symp. Pure Math. 94 (2017), 53–125. * [BSU13] M. Brion, P. Samuel, V. Uma: Lectures on the structure of algebraic groups and geometric applications, Hindustan Book Agency, New Delhi, 2013; available at https://www-fourier.univ-grenoble-alpes.fr/~mbrion/chennai.pdf * [DG70] M. Demazure, P. Gabriel: Groupes algébriques, Masson, Paris, 1970. * [DO19] T.-C. Dinh, K. Oguiso: A surface with discrete and non-finitely generated automorphism group, Duke Math. J. 168 (2019), 941–966. * [EGA] A. Grothendieck: Éléments de géométrie algébrique (rédigés avec la collaboration de J. Dieudonné), Pub. Math. I.H.É.S. 4, 8, 11, 17, 20, 24, 28, 32 (1961–1967). * [EH00] D. Eisenbud, J. Harris: The geometry of schemes, Grad. Text Math. 197, Springer, 2000. * [FK18] J.-P. Furter, H. Kraft: On the geometry of the automorphism groups of affine varieties, arXiv:1809.04175. * [Gro61] A. Grothendieck: Techniques de construction et théorèmes d’existence en géométrie algébrique IV : les schémas de Hilbert, Sém. Bourbaki, Vol. 6 (1960–1961), Exp. 221, 249–276. * [LS21] B. Laurent, S. Schröer: Para-abelian varieties and Albanese maps, preprint, arXiv:2101.10829. * [Les18] J. Lesieutre: A projective variety with discrete, non-finitely generated automorphism group, Inventiones Math. 212 (2018), no. 1, 189–211. * [Mar09] B. Margaux: Vanishing of Hochschild cohomology for affine group schemes and rigidity of homomorphisms between algebraic groups, Documenta Math. 14 (2009), 653–672. * [MO67] H. Matsumura, F. Oort: Representability of group functors, and automorphisms of algebraic schemes, Invent. Math. 4 (1967), 1–25. * [Mil17] J. S. Milne: Algebraic groups. The theory of group schemes of finite type over a field, Cambridge Stud. Adv. Math. 126, Cambridge Univ. Press, 2017. * [Mo56] G. D. Mostow: Fully reducible subgroups of algebraic groups, Amer. J. Math. 78 (1956), 200–221. * [Mum08] D. Mumford: Abelian varieties. With appendices by C. P. Ramanujam and Yuri Manin. Corrected reprint of the 2nd ed. 1974, Hindustan Book Agency, New Delhi, 2008. * [Ray70] M. Raynaud: Faisceaux amples sur les schémas en groupes et les espaces homogènes, Lecture Note Math. 119, Springer, 1970. * [Ro21] M. Romagny: Fixed point stacks under groups of multiplicative type, preprint, arXiv:2101.02450. * [SGA3] M. Demazure, A. Grothendieck: Schémas en groupes (SGA3), Tome I. Propriétés générales des schémas en groupes; Tome III. Structure des schémas en groupes réductifs, Revised version edited by P. Gille and P. Polo, Doc. Math. 7, 8, Soc. Math. France, Paris, 2011. * [SS09] C. Sancho de Salas, F. Sancho de Salas: Principal bundles, quasi-abelian varieties and structure of algebraic groups, J. Algebra 322 (2009), 2751–2772. * [Vin96] E. B. Vinberg: On invariants of a set of matrices, J. Lie Theory 6 (1996), 249–269. * [Vis05] A. Vistoli: Notes on Grothendieck topologies, fiber categories and descent theory, in: Fundamental algebraic geometry: Grothendieck’s FGA explained, Math. Surveys Monogr. 123, Amer. Math. Soc., Providence, RI, 2005.
# Synthesizing Monolingual Data for Neural Machine Translation Benjamin Marie Atsushi Fujita National Institute of Information and Communications Technology 3-5 Hikaridai, Seika-cho, Soraku-gun, Kyoto, 619-0289, Japan {bmarie<EMAIL_ADDRESS> ###### Abstract In neural machine translation (NMT), monolingual data in the target language are usually exploited through the so-called “back-translation” method to synthesize additional training parallel data. The synthetic data have been shown to be helpful for training better NMT, especially for low-resource language pairs and domains. Nonetheless, large monolingual data in the target domains or languages are not always available to generate large synthetic parallel data. In this work, we propose a new method to generate large synthetic parallel data leveraging very small monolingual data in a specific domain. We fine-tune a pre-trained GPT-2 model on such small in-domain monolingual data and use the resulting model to generate a large amount of synthetic in-domain monolingual data. Then, we perform back-translation, or forward translation, to generate synthetic in-domain parallel data. Our preliminary experiments on three language pairs and five domains show the effectiveness of our method in generating fully synthetic but useful in-domain parallel data for improving NMT in all configurations. We also show promising results in extreme adaptation for personalized NMT. ## 1 Introduction Neural machine translation (NMT) systems usually require a large quantity of parallel data for training. For most language pairs and domains, such resources do not exist, or exist only in very small quantities, mainly because they are costly to produce (Germann, 2001). Unlike parallel data, monolingual data are readily available in large quantity for many languages. Previous work has proposed various strategies to integrate monolingual data into NMT systems and has confirmed their usefulness for improving NMT, especially in low-resource configurations. 
The so-called _back-translation_ of monolingual data (Sennrich et al., 2016a) is undoubtedly the most prevalent one. This approach uses a target-to-source MT system to translate monolingual data in the target language into the source language. The generated synthetic parallel data can be used together with the original parallel data to increase the size of the training data and, eventually, to obtain better NMT systems. Nonetheless, generating synthetic parallel data in large quantity with this approach also requires a large quantity of monolingual data. For most domains in most languages, however, a large quantity of monolingual data is unavailable, and thus generating large synthetic parallel data through back-translation is impracticable.

In this preliminary work, we propose a new approach that leverages small in-domain monolingual data to generate large synthetic in-domain parallel data. We demonstrate that synthetic in-domain monolingual data generated by a GPT-2 model (Radford et al., 2019), fine-tuned on our very small in-domain monolingual data, can be successfully translated by NMT to generate synthetic in-domain parallel data. Our results on three language pairs and five domains show improvements in BLEU for all configurations when using our synthetic data to train NMT. We also show that this approach can be used in extreme adaptation for personalized NMT.

(a) Generated by GPT-2 fine-tuned on Medical data

> $\ldots$ Because of methodological differences we could not obtain a comparable result for 17S viral nucleic acids or 16S viral nucleic acid using different methods.
> The SARI statement: A measure of the quality of health services, including the availability of drugs, is a basic criterion for measuring the quality of health services system.
> The 12 patients seen at the DCP + IC applied for six of these six HDCP methods (75%) successfully completed their pharmacy duties as per the guidelines.$\ldots$

(b) Generated by GPT-2 fine-tuned on IT data

> $\ldots$You can use the Page Colors application that you can find on Google+
> The maps of Portugal are free but you can acquire maps for other countries
> You can use the program Ringtone Maker which you can find on Google$\ldots$

(c) Generated by GPT-2 fine-tuned on tweets (natural disaster domain)

> $\ldots$A volcanic eruption in 1815 set off a massive effluence that sowed #wildfire on the west coast!Thanks NSW #NSWgovernors for treating these #shills
> 4.4 earthquake occurred near Negros Region, Chile at 22: 10 UTC! #earthquake #NegrosRegion
> Day: Malta - Black cloud surrounded by rain. 16: 35 JST 14 / 16 / 19 - 21: 45 JST 15 / 16 / 19 - 17: 00 JST$\ldots$

(d) Generated by GPT-2 not fine-tuned

> $\ldots$On Thursday, fossil fuels minister Emmanuel Ponting said the year 2000 was the year that the entire human genome was analyzed and ”explored for new genes.”
> Consider the mercenary work that Columbia University puts in.
> Such coins have been suggested by Buzzfeed tech reporter Alex Seitz, who wrote a very thorough investigation into the issue.$\ldots$

Figure 1: Examples of three raw consecutive lines inside one sequence generated by different GPT-2 models. We manually added “$\ldots$” to show the reader that they are extracts from a sequence. GPT-2 models, fine-tuning data, and hyper-parameters used to obtain these examples are presented in Section 4.

## 2 Motivation

This work relies on three assumptions:

* • GPT models generate mostly correct sentences.
* • Sentences generated by a GPT model exhibit some of the characteristics of the in-domain data on which the model has been fine-tuned, even when the data are small.
* • NMT training is robust, to some extent, to the noise in the texts generated by GPT models.
For our first two assumptions, we can obtain some hints on their validity by manually checking sentences generated by fine-tuned GPT-2 models. Examples of such sentences are presented in Figure 1. These examples show that GPT-2 models successfully generate sentences that are mostly correct and present characteristics of the domain on which they have been fine-tuned. For our third assumption, we rely on previous work showing that back-translations in which artificial noise has been injected can improve translation quality when used for training NMT (Edunov et al., 2018).

## 3 Synthesizing Large Parallel Data Leveraging Small Monolingual Data

### 3.1 Requirements

Our method has few requirements in terms of data, which makes it applicable in most MT scenarios. Precisely, we need the following three types of data:

* • a GPT model or large (general-domain) monolingual data: in this preliminary work, we only exploit the smallest GPT-2 model released by OpenAI. For the future, we plan to experiment with in-house GPT models, trained on large general-domain monolingual data.
* • small in-domain monolingual data: most of our experiments use 50k sentences for each target domain, but our experiments in extreme adaptation for personalized NMT show that our method is useful even when only hundreds of sentences are available.
* • some parallel data: all our experiments use at least 156k sentence pairs.

### 3.2 Synthetic Monolingual Data

We use GPT-2 (Radford et al., 2019)111https://github.com/openai/gpt-2 to generate synthetic monolingual data. GPT models are auto-regressive Transformer (Vaswani et al., 2017) decoders. Given some context, or no context at all if this is the first token of the sequence, the model predicts the next token. To generate texts in a particular domain, we fine-tuned a given GPT-2 model on a small amount of text in the target domain and language.
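A generate-and-collect loop of this kind can be sketched as follows; the sampler call is stubbed out (in practice it would invoke a fine-tuned GPT-2 model, e.g. through the gpt-2-simple framework named in Section 4.2), and the length thresholds and deduplication are our own illustrative assumptions, not the paper's exact procedure:

```python
def collect_synthetic_corpus(sample_fn, target_size, min_tokens=3, max_tokens=120):
    """Sample text chunks from a (fine-tuned) language model and keep
    unique, reasonably sized lines until target_size lines are collected.

    sample_fn stands in for a call to a fine-tuned GPT-2 sampler; it is
    stubbed here so the sketch stays self-contained.
    """
    corpus, seen = [], set()
    while len(corpus) < target_size:
        for line in sample_fn().splitlines():
            line = line.strip()
            n = len(line.split())
            if not (min_tokens <= n <= max_tokens):
                continue  # drop fragments and over-long outputs
            if line in seen:
                continue  # deduplicate exact repeats
            seen.add(line)
            corpus.append(line)
            if len(corpus) == target_size:
                break
    return corpus


# Stub standing in for a fine-tuned GPT-2 sampler returning multi-line chunks.
_chunks = iter([
    "4.4 earthquake occurred near Negros Region !\nok\n",
    "The maps of Portugal are free .\n4.4 earthquake occurred near Negros Region !\n",
    "You can use the program Ringtone Maker .\n",
])
corpus = collect_synthetic_corpus(lambda: next(_chunks), target_size=3)
```

Because sampling is cheap relative to human data collection, such a loop can be run until millions of unique lines are gathered.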
Since GPT-2 is efficient for text generation, we can generate millions of in-domain monolingual sentences.

### 3.3 Synthetic Parallel Data

Once the synthetic monolingual data are generated, they can be used in NMT as any other monolingual data. In this work, we demonstrate their usefulness through back-translation (Sennrich et al., 2016a) and forward translation to generate in-domain synthetic parallel data. For back-translation, we adopted the tagged approach (Caswell et al., 2019), which has been shown to provide better results, especially for translating texts that are not translationese (Marie et al., 2020). In this configuration, the target side of the synthetic parallel data was generated by GPT-2, in English, and the source side by NMT. For forward translation, we did not use tags. In this configuration, the source side was generated by GPT-2, in English, while the target side was obtained through NMT. Forward translation is known to underperform back-translation (Bogoychev and Sennrich, 2019). Nonetheless, since we do not have GPT-2 models in languages other than English, forward translation was the only way to exploit synthetic monolingual data for translation directions with English on the source side.
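The two ways of turning synthetic English text into parallel data can be sketched as below; the `<BT>` tag string, the stub translator, and the function names are illustrative assumptions, with a tag added only on the back-translation source side as in the tagged approach:

```python
def make_parallel(synthetic_en, translate_fn, direction, bt_tag="<BT>"):
    """Turn GPT-generated English sentences into synthetic parallel data.

    direction="back":    English is the target side; the machine-translated
                         source side is prefixed with a tag (tagged
                         back-translation).
    direction="forward": English is the source side; the target side is
                         machine-translated, with no tag.
    translate_fn stands in for an NMT system translating one sentence.
    """
    pairs = []
    for en in synthetic_en:
        mt = translate_fn(en)
        if direction == "back":
            pairs.append((bt_tag + " " + mt, en))  # (tagged source, English target)
        elif direction == "forward":
            pairs.append((en, mt))                 # (English source, MT target)
        else:
            raise ValueError("direction must be 'back' or 'forward'")
    return pairs


fake_mt = lambda s: "MT(" + s + ")"  # stub for a baseline NMT system
mono = ["the patient recovered .", "the dosage was reduced ."]
bt_pairs = make_parallel(mono, fake_mt, "back")
ft_pairs = make_parallel(mono, fake_mt, "forward")
```

The resulting pairs would simply be concatenated with the original parallel data before training.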
## 4 Experiments

### 4.1 Data

#### 4.1.1 Training

We trained NMT systems for English–German (En-De), English–French (En-Fr), and Japanese–English (Ja-En) on the following parallel data (numbers of sentence pairs are given after pre-processing described in Section 4.2):

* • En-De: WMT17222http://statmt.org/wmt17/translation-task.html parallel data (5.1M sentence pairs)
* • En-Fr: WMT14333http://statmt.org/wmt14/translation-task.html (32.7M sentence pairs)
* • En-Ja: Training parallel data provided in the MTNT dataset (Michel and Neubig, 2018b)444http://www.cs.cmu.edu/~pmichel1/mtnt/ which is a concatenation of three different datasets (TED talks555https://wit3.fbk.eu/, The Kyoto Free Translation Task (KFTT)666http://www.phontron.com/kftt/, and JESC777https://nlp.stanford.edu/projects/jesc/) (3.9M sentence pairs)

#### 4.1.2 Validation

We used one validation dataset for each language pair to select the best model after training NMT (see Section 4.2):

* • En-De: WMT16 newstest888http://data.statmt.org/wmt20/translation-task/dev.tgz (2,999 sentence pairs)
* • En-Fr: WMT13 newstest999http://data.statmt.org/wmt20/translation-task/dev.tgz (3,000 sentence pairs)
* • En-Ja: Validation data provided in the MTNT dataset101010http://www.cs.cmu.edu/~pmichel1/mtnt/ that is a concatenation of data from the TED Talks, KFTT, and JESC corpora (4,451 sentence pairs)

#### 4.1.3 Test

We used several datasets from different domains for evaluating the translation quality of our NMT systems for each language pair:

* • En-De:
  * – News domain: WMT17 news translation task111111http://data.statmt.org/wmt20/translation-task/dev.tgz (3,004 sentence pairs)
  * – Medical domain: WMT14 medical translation task, khresmoi summary121212http://www.statmt.org/wmt14/medical-task/khresmoi-summary-test-set.tgz (1,000 sentence pairs)
  * – IT domain: WMT16 IT translation task, batch 3131313http://data.statmt.org/wmt16/it-translation-task/wmt16-it-task-references.tgz (1,000 sentence pairs)
* • En-Fr:
  * – News
domain: WMT14 news translation task141414http://data.statmt.org/wmt20/translation-task/dev.tgz (3,003 sentence pairs)
  * – Medical domain: WMT14 medical translation task, khresmoi summary151515http://www.statmt.org/wmt14/medical-task/khresmoi-summary-test-set.tgz (1,000 sentence pairs)
  * – Reddit domain: MTNT test sets,161616http://www.cs.cmu.edu/~pmichel1/mtnt/ one for each translation direction (for En$\rightarrow$Fr: 1,020 sentence pairs, for Fr$\rightarrow$En: 1,022 sentence pairs)
* • En-Ja:
  * – News domain: ALT test set171717https://www2.nict.go.jp/astrec-att/member/mutiyama/ALT/ (1,018 sentence pairs)
  * – Reddit domain: MTNT test sets,181818http://www.cs.cmu.edu/~pmichel1/mtnt/ one for each translation direction (for En$\rightarrow$Ja: 1,002 sentence pairs, for Ja$\rightarrow$En: 1,001 sentence pairs)
  * – Twitter natural disaster domain: Tweets test set compiled and translated by ourselves (not publicly available) (1,400 sentence pairs)

#### 4.1.4 English Monolingual Data

English monolingual data are used as a source for back/forward translation and for fine-tuning GPT-2.
There is one dataset for each domain:

* • News domain: News Crawl 2019191919http://data.statmt.org/news-crawl/en/news.2019.en.shuffled.deduped.gz (1M lines for backward/forward translation, 50k lines for GPT-2 fine-tuning)
* • IT domain: English side of the training parallel data provided for the WMT16 IT translation task, batch 1 and batch 2202020http://ufallab.ms.mff.cuni.cz/~popel/batch1and2.zip (2k lines for backward/forward translation and GPT-2 fine-tuning)
* • Medical domain: English side of the En-Fr EMEA parallel data212121http://opus.lingfil.uu.se/download.php?f=EMEA/en-fr.txt.zip provided for the WMT14 medical translation task (100k lines for backward/forward translation, 50k lines for GPT-2 fine-tuning)
* • Reddit domain: English data crawled with the Reddit API (1M lines for backward/forward translation, 50k lines for GPT-2 fine-tuning)
* • Twitter natural disaster domain: English tweets crawled with the Twitter API with the same keywords used to crawl the English tweets of the test set (not publicly released) (148k lines for backward/forward translation, 50k lines for GPT-2 fine-tuning)

### 4.2 Framework and Settings

We exploited GPT-2 through the gpt-2-simple framework.222222https://github.com/minimaxir/gpt-2-simple We did not perform any pre-processing on the monolingual data used for fine-tuning GPT-2. For NMT, we tokenized and truecased all the data in English, French, and German with the Moses toolkit (Koehn et al., 2007).232323https://github.com/moses-smt/mosesdecoder The truecaser was trained on 1M lines randomly sampled from the News Crawl corpora in each language. For NMT, the training data, validation data, and source-side test sets are all segmented into subword units.
We used byte-pair encoding (BPE) (Sennrich et al., 2016b)242424https://github.com/rsennrich/subword-nmt for English, German, and French, separately trained on 10M lines from the News Crawl 2019 corpora252525http://data.statmt.org/news-crawl/ for each language to learn 32k BPE operations. For Japanese, we used SentencePiece (Kudo and Richardson, 2018)262626https://github.com/google/sentencepiece to learn 16k sentence pieces, also from the News Crawl 2019 corpus. We used Marian (Junczys-Dowmunt et al., 2018)272727https://marian-nmt.github.io/, version v1.7.6 1d4ba73 2019-05-11 17:16:31 +0100, for NMT with standard hyper-parameters for training (see Table 1).

--type transformer --max-length 120 --mini-batch-fit --valid-freq 5000 --save-freq 5000 --workspace 10000 --disp-freq 500 --beam-size 12 --normalize=1 --valid-mini-batch 16 --overwrite --early-stopping 5 --cost-type=ce-mean-words --valid-metrics ce-mean-words bleu --keep-best --enc-depth 6 --dec-depth 6 --transformer-dropout 0.1 --learn-rate 0.0003 --lr-warmup 16000 --lr-decay-inv-sqrt 16000 --lr-report --label-smoothing 0.1 --devices 0 1 2 3 4 5 6 7 --optimizer-params 0.9 0.98 1e-09 --clip-norm 5 --sync-sgd --exponential-smoothing
---
Table 1: Hyper-parameters of Marian used for training our NMT systems.

We performed decoding with a beam size of 12 and a length normalization at 1.0. For evaluation, we used SacreBLEU (Post, 2018)282828https://github.com/mjpost/sacrebleu and report BLEU scores (Papineni et al., 2002) for English, French, and German, and chrF scores (Popović, 2015) for Japanese. Before evaluation, we post-processed the NMT output by undoing the BPE and SentencePiece subword segmentations. Then, except for Japanese, we detokenized and detruecased the output with Moses.

### 4.3 Results with Back-translation

The performance of our NMT systems trained on several different sets of back-translations is shown in Table 2.
First, we assessed to what extent the human-made in-domain monolingual data used for fine-tuning GPT-2 are useful for back-translation. As we can see, despite their small size, they improve BLEU over the baseline systems for all configurations. When using all the human-made in-domain monolingual data, or up to 1M sentences, BLEU improvements are even larger for almost all configurations (except for Ja$\rightarrow$En, Reddit). This result confirms the usefulness of exploiting more in-domain monolingual data through back-translation when available. Using 1M sentences generated by a GPT-2 model that is not fine-tuned leads to lower BLEU scores than using all the human-made in-domain monolingual data (except for Ja$\rightarrow$En, Reddit). The last two rows give the results of our approach: they use at most 50k sentences of human-made in-domain monolingual data for fine-tuning the GPT-2 model, but millions of synthetic monolingual sentences. They show that the back-translations of the monolingual data generated by the fine-tuned GPT-2 model are useful. We obtained better, or comparable, BLEU scores when using the back-translations of our synthetic monolingual data to train NMT systems than when using the back-translations of human-made monolingual data. Using more synthetic monolingual data (last row) also tends to lead to better BLEU scores (except for Ja$\rightarrow$En, Reddit and Twitter).
System | Back-translated | De$\rightarrow$En | Fr$\rightarrow$En | Ja$\rightarrow$En
---|---|---|---|---
Data | News | Medical | IT | News | Medical | Reddit | News | Reddit | Twitter
Baseline | none | 32.9 | 36.4 | 42.0 | 36.6 | 48.4 | 34.5 | 14.5 | 7.8 | 5.5
+ H-TBT | fine-tuning | 34.2 | 40.2 | 42.7 | 37.1 | 48.9 | 34.7 | 17.2 | 8.3 | 16.6
+ H-TBT | all | 35.8 | 40.7 | 43.4 | 37.4 | 49.6 | 35.9 | 22.1 | 8.0 | 17.1
+ GPT_notft-TBT | 1M sentences | 34.6 | 37.3 | 41.9 | 37.1 | 48.5 | 34.7 | 20.0 | 8.6 | 9.8
+ GPT-TBT | 1M sentences | 35.5 | 42.6 | 42.6 | 37.4 | 49.3 | 35.7 | 20.9 | 9.3 | 17.7
+ GPT-TBT | 10M sentences | 35.5 | 42.9 | 44.6 | 37.8 | 50.3 | 36.9 | 22.3 | 8.7 | 15.9

Table 2: BLEU scores of our NMT systems translating into English, for each domain. “H-TBT” denotes systems trained on the back-translated human-made monolingual data (the data used for fine-tuning GPT or all the monolingual data described in Section 4.1.4). “GPT-TBT” denotes systems trained on the back-translation of either 1M or 10M monolingual sentences generated by a GPT-2 model fine-tuned on the in-domain monolingual data. “GPT_notft-TBT” denotes a configuration in which GPT-2 has not been fine-tuned.

### 4.4 Results with Forward Translation

We performed similar experiments as in Section 4.3 but with forward translation instead of back-translation. Our results are shown in Table 3. We did not observe consistent improvements in BLEU and chrF scores when exploiting human-made monolingual data (H-FT configurations). Increasing the amount of monolingual data can either increase or decrease BLEU and chrF scores. Our approach (GPT-FT) leads to scores better than, or similar to, those of the H-FT configurations that use all the human-made monolingual data.
We conclude that forward translations perform reasonably well, but not consistently, in these configurations, and that GPT-2 models in languages other than English would be necessary to properly evaluate to what extent our approach can improve BLEU and chrF scores when English is not the target language.

System | Translated | En$\rightarrow$De (BLEU) | En$\rightarrow$Fr (BLEU) | En$\rightarrow$Ja (chrF)
---|---|---|---|---
Data | News | Medical | IT | News | Medical | Reddit | News | Reddit | Twitter
Baseline | none | 27.3 | 28.8 | 37.4 | 36.3 | 40.9 | 25.5 | 0.2436 | 0.1419 | 0.0987
+ H-FT | fine-tuning | 27.9 | 29.6 | 38.6 | 36.5 | 40.9 | 23.3 | 0.2643 | 0.1400 | 0.0839
+ H-FT | all | 27.9 | 29.7 | 38.6 | 36.2 | 41.6 | 23.4 | 0.2847 | 0.1348 | 0.0845
+ GPT_notft-FT | 1M sentences | 27.4 | 28.7 | 36.7 | 36.0 | 40.5 | 22.5 | 0.2479 | 0.1301 | 0.0799
+ GPT-FT | 1M sentences | 27.9 | 29.6 | 39.1 | 36.2 | 42.0 | 23.1 | 0.2513 | 0.1324 | 0.0832
+ GPT-FT | 10M sentences | 28.0 | 30.1 | 38.9 | 36.3 | 42.3 | 23.3 | 0.2749 | 0.1321 | 0.0810

Table 3: BLEU and chrF scores of our NMT systems translating from English, for each domain. “H-FT” denotes systems trained on forward-translated human-made monolingual data (the data used for fine-tuning GPT or all the monolingual data described in Section 4.1.4). “GPT-FT” denotes systems trained on the forward-translation of either 1M or 10M monolingual sentences generated by a GPT-2 model fine-tuned on the in-domain monolingual data. “GPT_notft-FT” denotes a configuration in which GPT-2 has not been fine-tuned.

## 5 Extreme Adaptation for Personalized NMT

The objective of extreme adaptation for personalized NMT is to adapt a given NMT system so that it can better translate texts written or spoken by a specific person. Ideally, for such a task, we would require as much parallel data as possible in which one side consists of texts written or spoken by the target person, in order to personalize our NMT system.
Obviously, such large data do not exist and would be too costly to create. Thus, we propose to synthesize such data with our approach. The main difference from the domain adaptation scenarios presented in Section 4 is that we cannot even expect to obtain thousands of sentences written by the target person to fine-tune GPT-2. For our extremely personalized NMT experiments, we used the Speaker Annotated TED Talks (SATED) corpus (Michel and Neubig, 2018a)292929http://www.cs.cmu.edu/~pmichel1/sated/ available for:

* • English–German (En-De): 156k sentence pairs, 1,670 speakers
* • English–Spanish (En-Es): 183k sentence pairs, 1,922 speakers
* • English–French (En-Fr): 178k sentence pairs, 1,887 speakers

Each sentence pair is provided with a tag that identifies the speaker. Note that this corpus is already pre-processed: tokenized and lower-cased. Validation data and test data contain two sentence pairs per speaker. In order to generate synthetic monolingual data for each specific speaker, we exploit the speaker tag by concatenating it to the English side of the parallel data and then use the resulting data to fine-tune the GPT-2 model. Through fine-tuning, we assume that GPT-2 learns the characteristics of each individual speaker by relying on the speaker tag. At decoding time, we then expect GPT-2 to generate texts for a particular speaker when prompted with its speaker tag.

System | De$\rightarrow$En | Es$\rightarrow$En | Fr$\rightarrow$En
---|---|---|---
Baseline | 24.6 | 32.2 | 29.5
Speaker Tags | 24.7 | 32.2 | 29.9
Speaker Tags + GPT-TBT | 27.6 | 34.6 | 32.2
Speaker Tags + GPT_speaker-TBT | 28.5 | 35.6 | 32.4

Table 4: BLEU scores of our NMT systems for the SATED translation task. “Speaker Tags” denotes the use of the speaker tags to tag each sentence pair in the given parallel data and each source sentence in the test data.
“GPT-TBT” denotes systems trained on back-translation of 500k sentences generated by a GPT-2 model fine-tuned on the English side of the SATED parallel data. “GPT_speaker-TBT” is similar to “GPT-TBT,” except (1) that the data used for fine-tuning the GPT-2 model are also tagged with the speaker tag and (2) that each sentence generated by the fine-tuned GPT-2 model is tagged with a speaker tag.

The results of our experiments are presented in Table 4. In addition to a vanilla baseline NMT system, we used an adaptation approach (second row) that uses the parallel data with the speaker tag concatenated to the source sentence to train NMT systems (Michel and Neubig, 2018a). BLEU scores with this approach are close to the scores of the baseline system. Then, we tried our approach similarly to our experiments in Section 4.3 (third row). We fine-tuned GPT-2 on the English side of the SATED parallel data and generated 500k sentences with the fine-tuned model. Then, we back-translated the generated data with the En$\rightarrow\ast$ baseline systems to obtain synthetic parallel data for the three language pairs and concatenated it to the speaker-tagged SATED parallel data exploited in our experiments of the second row. The NMT systems trained on the resulting data improve BLEU by several points for all the translation directions. In the last row, we finally report the results of exploiting the speaker tags also when fine-tuning the GPT-2 model. We generated 500k sentences, randomly prompting GPT-2 with one of the speaker tags, and exploited the resulting speaker-tagged monolingual data as for the other models. BLEU scores are further improved, implying that GPT-2 successfully exploits the speaker tag to generate better synthetic data for each speaker.
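The speaker-tagging scheme can be sketched as follows; the angle-bracket tag format and the helper names are illustrative assumptions, not the exact format used with SATED:

```python
import random

def tag_for_finetuning(pairs):
    """Prepend each speaker's tag to the English side of the parallel data,
    yielding the text on which GPT-2 would be fine-tuned."""
    return ["<%s> %s" % (speaker, english) for speaker, english, _target in pairs]

def prompts_for_generation(speakers, n, seed=0):
    """Randomly choose speaker tags to prompt the fine-tuned model with, so
    that generated sentences come out already associated with a speaker."""
    rng = random.Random(seed)
    return ["<%s>" % rng.choice(speakers) for _ in range(n)]

# Toy speaker-annotated data in the SATED layout: (speaker, English, target).
sated = [("spk_0007", "so here is my idea .", "voici donc mon idée ."),
         ("spk_0042", "thank you very much .", "merci beaucoup .")]
finetune_text = tag_for_finetuning(sated)
prompts = prompts_for_generation(["spk_0007", "spk_0042"], n=4)
```

The generated speaker-tagged sentences would then be back-translated and mixed with the tagged SATED data, as in the last row of Table 4.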
## 6 Conclusion and Future Work

In this preliminary work, we showed that our approach can leverage small human-produced in-domain monolingual data to generate large synthetic in-domain parallel data. Even though the parallel data are entirely synthetic, unlike in standard backward/forward translation, we obtained improvements in BLEU scores in all our configurations when using the generated data to train NMT systems. We also reported on successful experiments in extreme adaptation for personalized NMT. In our future work, we would like to perform an in-depth analysis to better understand our results. We will also conduct more experiments exploiting in-house GPT models for other languages.

## References

* Bogoychev and Sennrich (2019) Nikolay Bogoychev and Rico Sennrich. 2019. Domain, translationese and noise in synthetic data for neural machine translation. _arXiv preprint arXiv:1911.03362_.
* Caswell et al. (2019) Isaac Caswell, Ciprian Chelba, and David Grangier. 2019. Tagged back-translation. In _Proceedings of the Fourth Conference on Machine Translation (Volume 1: Research Papers)_ , pages 53–63, Florence, Italy. Association for Computational Linguistics.
* Edunov et al. (2018) Sergey Edunov, Myle Ott, Michael Auli, and David Grangier. 2018. Understanding back-translation at scale. In _Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing_ , pages 489–500, Brussels, Belgium. Association for Computational Linguistics.
* Germann (2001) Ulrich Germann. 2001. Building a statistical machine translation system from scratch: How much bang for the buck can we expect? In _Proceedings of the ACL Workshop on Data-Driven Methods in Machine Translation_.
* Junczys-Dowmunt et al. (2018) Marcin Junczys-Dowmunt, Roman Grundkiewicz, Tomasz Dwojak, Hieu Hoang, Kenneth Heafield, Tom Neckermann, Frank Seide, Ulrich Germann, Alham Fikri Aji, Nikolay Bogoychev, André F. T. Martins, and Alexandra Birch. 2018.
Marian: Fast neural machine translation in C++. In _Proceedings of ACL 2018, System Demonstrations_ , pages 116–121, Melbourne, Australia. Association for Computational Linguistics.
* Koehn et al. (2007) Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ondřej Bojar, Alexandra Constantin, and Evan Herbst. 2007. Moses: Open source toolkit for statistical machine translation. In _Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics Companion Volume Proceedings of the Demo and Poster Sessions_ , pages 177–180, Prague, Czech Republic. Association for Computational Linguistics.
* Kudo and Richardson (2018) Taku Kudo and John Richardson. 2018. SentencePiece: A simple and language independent subword tokenizer and detokenizer for neural text processing. In _Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations_ , pages 66–71, Brussels, Belgium. Association for Computational Linguistics.
* Marie et al. (2020) Benjamin Marie, Raphael Rubino, and Atsushi Fujita. 2020. Tagged back-translation revisited: Why does it really work? In _Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics_ , pages 5990–5997, Online. Association for Computational Linguistics.
* Michel and Neubig (2018a) Paul Michel and Graham Neubig. 2018a. Extreme adaptation for personalized neural machine translation. In _Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)_ , pages 312–318, Melbourne, Australia. Association for Computational Linguistics.
* Michel and Neubig (2018b) Paul Michel and Graham Neubig. 2018b. MTNT: A testbed for machine translation of noisy text. In _Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing_ , pages 543–553, Brussels, Belgium.
Association for Computational Linguistics.
* Papineni et al. (2002) Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In _Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics_ , pages 311–318, Philadelphia, USA. Association for Computational Linguistics.
* Popović (2015) Maja Popović. 2015. chrF: character n-gram f-score for automatic MT evaluation. In _Proceedings of the Tenth Workshop on Statistical Machine Translation_ , pages 392–395, Lisbon, Portugal. Association for Computational Linguistics.
* Post (2018) Matt Post. 2018. A call for clarity in reporting BLEU scores. In _Proceedings of the Third Conference on Machine Translation: Research Papers_ , pages 186–191, Brussels, Belgium. Association for Computational Linguistics.
* Radford et al. (2019) A. Radford, Jeffrey Wu, R. Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners.
* Sennrich et al. (2016a) Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016a. Improving neural machine translation models with monolingual data. In _Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)_ , pages 86–96, Berlin, Germany. Association for Computational Linguistics.
* Sennrich et al. (2016b) Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016b. Neural machine translation of rare words with subword units. In _Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)_ , pages 1715–1725, Berlin, Germany. Association for Computational Linguistics.
* Vaswani et al. (2017) Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R.
Garnett, editors, _Advances in Neural Information Processing Systems 30_, pages 5998–6008. Curran Associates, Inc.
# Contact Pose Identification for Peg-in-Hole Assembly under Uncertainties

Shiyu Jin, Xinghao Zhu, Changhao Wang, and Masayoshi Tomizuka Department of Mechanical Engineering, University of California, Berkeley, CA, USA. {jsy, zhuxh, changhaowang<EMAIL_ADDRESS>

###### Abstract

Peg-in-hole assembly is a challenging contact-rich manipulation task. There is no general solution to identify the relative position and orientation between the peg and the hole. In this paper, we propose a novel method to classify the contact poses based on a sequence of contact measurements. When the peg contacts the hole with pose uncertainties, a tilt-then-rotate strategy is applied, and the contacts are measured as a group of patterns to encode the contact pose. A convolutional neural network (CNN) is trained to classify the contact poses according to the patterns. In the end, an admittance controller guides the peg towards the error direction and finishes the peg-in-hole assembly. Simulations and experiments are provided to show that the proposed method can be applied to the peg-in-hole assembly of different geometries. We also demonstrate the ability to alleviate the sim-to-real gap.

## I Introduction

Robotic peg-in-hole assembly has been studied for decades. It is challenging because it requires accurate state estimations of the peg and the hole for alignment, and a combination of precise planning and control algorithms for insertion. Identifying the contact pose, the relative position and orientation between the peg and the hole, is required to align the peg and the hole before insertion. Visual feedback is the most common strategy to identify the pose [1, 2]. However, vision sensors suffer from high precision requirements and occlusions during the assembly task. In order to avoid such problems, search-based algorithms such as random search or spiral search [3] have been proposed to compensate for the uncertainties of the contact pose.
The search strategy generates a search path within the search area for hole localization, which is inefficient, especially when the search area is large and the search dimension is high. For insertion, the clearance between the peg and the hole is usually smaller than the precision of a robot. A tiny position and orientation error could cause the workpieces to jam and wedge and may lead to failure or even damage to the workpieces. Compliance, either passive or active, has been shown to be effective in handling small uncertainties of position and orientation. Passive compliance utilizes compliant hardware such as RCC [4, 5] to compensate for uncertainties. In contrast, active compliance applies control strategies in software to let the robot mimic a spring-damping behavior [6, 7]. In contact-rich scenarios, force/torque-based methods normally convey more information than vision-based and search-based methods. Tang [8] analyzed a three-point contact model for a round peg and hole, but the method lacked the ability to generalize to complex geometries. Kim proposed a peg shape recognition and hole detection algorithm using the force/torque sensor by inclining the peg in all directions, but their method suffered from cumulative error [9]. In recent years, many learning-based methods have been proposed to solve the peg-in-hole assembly problem [10, 11, 12, 13, 14]. They treated the task as a Markov decision process, where the contact feedback at the current time step is used to determine the action of the next step. However, the mapping from the force/torque feedback to the contact pose is not injective, as shown in Fig. 1. On one hand, the same contact forces can be measured at different contact poses. On the other hand, the same contact pose could generate different contact forces, i.e., all the possible forces within the Coulomb friction cone.
To deal with the above problem, particle filters were applied to identify the location based on multiple observations in [15, 16]. However, it is time-consuming to generate the force-position mapping in the real world, and the mapping is hard to generalize. Figure 1: (a) The same upward contact force could come from many possible contact poses. (b) The same contact pose could generate many possible contact forces within the Coulomb friction cone. In this paper, we propose a novel method that can identify the contact poses based on a sequence of contact measurements. At initialization, the peg contacts the hole with pose uncertainties. The peg then follows a designed tilt-then-rotate motion to make contact with the hole. The contact measurements are plotted in polar coordinates to generate a group of patterns. An injective mapping between the patterns and contact poses is learned by a convolutional neural network (CNN), which classifies the contact poses based on the error directions. Finally, an admittance controller guides the peg towards the error direction and finishes insertion. There are two main contributions of this paper. 1) We construct the mapping using a sequence of measurements as input instead of feedback at one single time step. This makes the mapping one-to-one. 2) We classify the contact pose based on patterns, which improves the generalization ability of the proposed method. It can even tackle the sim-to-real gap. The remainder of this paper is organized as follows. Section II introduces the background, including task description, admittance control, and assembly strategy. Section III describes the proposed contact pose identification method according to contact patterns. Section IV shows the performance of the proposed method in both simulations and real-world experiments. Section V discusses the advantages and disadvantages of the proposed method and proposes future work.
## II Background ### II-A Task Description We focus on the peg-in-hole assembly task under pose uncertainties. Generally speaking, the poses of the peg and the hole might be noisy due to sensor inaccuracy. To simplify the problem, we assume the peg is fixed to the robot end-effector, and its pose can be obtained via forward kinematics. The hole is fixed on the table, and its pose can be estimated by a vision system with uncertainties in 6 degrees of freedom (DOF). The magnitudes of the uncertainties are roughly $\pm 20mm$ and $\pm 3\degree$ for position and orientation respectively, which are determined by the precision of the visual system. The clearance between the peg and the hole is $1mm$. The goal of the task is to compensate for the uncertainties of the contact pose and achieve the peg-in-hole assembly. The contact surfaces of both the peg and the hole are assumed to be flat. ### II-B Admittance Control Admittance control [6, 7] is widely used in robotic manipulation tasks to handle contact dynamics. By adding a virtual spring-damping system, the contact between the robot and the environment becomes soft, which improves the manipulation performance and prevents damage to either the robot or the environment. We apply admittance control to the following assembly strategy to track the desired peg trajectory and compensate for small uncertainties in assembly. In admittance control, the desired pose $x_{0}$ and measured external force/torque $F_{ext}$ are inputs to the admittance control block (Fig. 2), which generates the reference pose $x_{d}$ for the PD position control. Figure 2: Admittance Control. $\displaystyle F+F_{ext}$ $\displaystyle=m\ddot{x}$ (1) $\displaystyle F$ $\displaystyle=k_{p}(x_{d}-x)-k_{d}\dot{x}$ (2) $\displaystyle F_{ext}$ $\displaystyle=M_{d}(\ddot{x}_{d}-\ddot{x}_{0})+D_{d}(\dot{x}_{d}-\dot{x}_{0})+K_{d}(x_{d}-x_{0})$ (3) where $M_{d}$, $D_{d}$, and $K_{d}$ represent the desired inertia, damping, and stiffness, respectively.
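Equation (3) can be integrated numerically to obtain the reference pose. The following is a minimal one-dimensional sketch with Euler integration and a constant desired pose; the function name and the gain values are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def admittance_step(x_d, xd_d, x0, F_ext, M_d, D_d, K_d, dt):
    """One Euler step of the admittance law in Eq. (3), assuming the
    desired pose x0 is constant (its velocity and acceleration are zero)."""
    # Solve Eq. (3) for the reference acceleration.
    xdd_d = (F_ext - D_d * xd_d - K_d * (x_d - x0)) / M_d
    # Integrate to update the reference velocity and pose.
    xd_d = xd_d + xdd_d * dt
    x_d = x_d + xd_d * dt
    return x_d, xd_d

# With zero external force the reference pose stays at the desired pose;
# with a constant force it settles at x0 + F_ext / K_d.
x_d, xd_d = 0.0, 0.0
for _ in range(100):
    x_d, xd_d = admittance_step(x_d, xd_d, x0=0.0, F_ext=0.0,
                                M_d=1.0, D_d=20.0, K_d=100.0, dt=0.01)
```

The virtual spring-damping behavior is visible in the steady state: the contact force is traded against the stiffness $K_{d}$, which is what lets the peg "give way" on contact.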
$k_{p}$ and $k_{d}$ are PD position control gains. ### II-C Assembly Strategy Peg-in-hole assembly has been studied for decades. An efficient and widely used assembly strategy divides the task into several stages [9, 17]: initialization, approaching, contact pose estimation, alignment, and insertion. At initialization, the peg and the hole are fixed on the robot manipulator and the table, respectively. A vision system is applied to roughly estimate the pose of the hole. At approaching, the peg approaches the hole with an admittance controller. With well-tuned controller parameters, the plane contact between the flat surfaces of the peg and the hole could eliminate the pose uncertainties in three dimensions: the roll, pitch, and z axes. At contact pose estimation, the peg explores along the surface of the hole to estimate the relative position and orientation between the peg and the hole. This stage eliminates the uncertainties in the x and y axes. Finally, based on the contact pose estimation, the peg can slide towards the hole and finish insertion with an admittance controller. A small oscillation is added to the yaw axis in this stage, together with admittance control, to compensate for small uncertainties in the yaw axis. In this paper, we mainly focus on the contact pose estimation stage, which is introduced in Section III. ## III Proposed Method Figure 3: Framework of the proposed method. Peg-in-hole assembly can be accomplished easily by a human even with eyes closed. The human will first use the peg to make contact with the hole. Then he/she will locally move the peg to sense the hole’s location based on a sequence of contacts instead of just one single contact. If there is a hole in one direction, the tip of the peg could slide into the hole a little bit, and the force/torque feedback also has an impulse in that direction.
Based on the historical measurements in a sequence of contacts, the human keeps updating the knowledge of the contact pose and eliminating the hole uncertainties. Inspired by the human strategy, we propose to use a sequence of contact feedback to identify the contact pose under uncertainties (Fig. 3). ### III-A Tilt-then-Rotate Strategy The peg contacts the hole after the approaching stage (Fig. 4.1). The peg and the hole have some overlap but are not aligned well due to the uncertainties of the contact pose. We propose a tilt-then-rotate strategy to identify the contact pose. Figure 4: Snapshots of the tilt-then-rotate strategy. The blue line is the z-axis. The yellow cone represents the designed trajectory for rotation. Tilt (2) then rotate (2-9) the peg for $2\pi$. While the peg is being rotated, a constant downward force is applied to maintain a single-point contact (3,5,9), line contact (2,4,6,8), or two-point contact (7) between the peg and the hole. We tilt the peg by $\alpha$ degrees in all directions by rotating the peg for $2\pi$ (Fig. 4). The tilt-then-rotate trajectory can be described as continuously changing $\theta$ from $0$ to $2\pi$ in order to change the roll and the pitch angles: $\\{roll,pitch\\}=\\{\alpha\sin(\theta),\alpha\cos(\theta)\\},\quad\theta\in[0,2\pi)$ (4) The desired tilt-then-rotate trajectory is tracked by an admittance controller. At the same time, a constant downward force is applied to the peg in order to maintain contact with the hole. During the procedure, contact force and torque are measured by a force/torque sensor. As the peg is tilted in all directions, the contact keeps switching between one-point contact, two-point contact, and line contact (Fig. 4.2-4.9). The tip of the peg could go into the hole when the peg tilts towards the hole, and the force/torque measurements would also have an impulse. Different contact poses will result in different sequences of measurements along the designed tilt-then-rotate trajectory.
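The trajectory of Eq. (4) is straightforward to sample numerically. A minimal sketch, using the $\alpha=15\degree$ tilt and $N=2000$ steps reported later for data collection (the function name is ours, not from the paper):

```python
import numpy as np

def tilt_then_rotate(alpha_deg=15.0, n_steps=2000):
    """Roll/pitch targets along the tilt-then-rotate path of Eq. (4)."""
    alpha = np.deg2rad(alpha_deg)
    theta = np.linspace(0.0, 2.0 * np.pi, n_steps, endpoint=False)
    roll = alpha * np.sin(theta)
    pitch = alpha * np.cos(theta)
    return roll, pitch

roll, pitch = tilt_then_rotate()
# Since sin^2 + cos^2 = 1, the tilt magnitude stays at alpha throughout,
# which is what sweeps the peg's tip around the rim of the hole.
tilt = np.sqrt(roll**2 + pitch**2)
```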
Compared with one measurement at a single time step, the mapping from a sequence of measurements to contact poses becomes an injective mapping. ### III-B Contact Pattern Generation The tilt-then-rotate strategy generates a sequence of measurements in 12 dimensions, including force ($\mathbb{R}^{3}$), torque ($\mathbb{R}^{3}$), and peg pose ($\mathbb{R}^{6}$). For different control forces or different sizes of the parts, those measurements can differ in order of magnitude. Humans can sense the contact pose in different scenarios using the same exploration strategy, so there must be some high-level features we can extract from the measurements. We propose to plot the measurements of each dimension in polar coordinates as one channel. The data in each channel is normalized, then smoothed by a moving average. The normalization makes the data invariant to control forces and sizes of the parts. The moving average reduces sensor noise. We utilize the plotted image with 12 channels as one contact pattern, which encodes high-level features about the contact pose. Fig. 5 shows the z-axis channel of the contact pattern for different contact poses. Figure 5: Contact patterns in polar coordinates for 3 different contact poses. Only the z-axis channel is shown. One contact pose corresponds to one contact pattern with 12 channels. In order to construct an informative mapping, we need to perform hundreds of tilt-then-rotate motions for all contact poses. This is not only time-consuming but also inaccurate due to the limitation of pose sensing in the real world. We propose to generate the contact patterns in the MuJoCo physics engine. The simulated environment can perform hundreds of trials in a short time. In addition, the ground truth contact pose can be obtained easily in simulation (Fig. 4). ### III-C Contact Pose Classification Neural Network Contact poses of a square peg-hole can be classified into 9 classes according to which edge of the peg contacts the hole (Fig. 6).
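The channel processing described above (normalization, moving-average smoothing, polar rasterization into a $200\times 200$ binary image) can be sketched as follows for a single channel; the exact pixel mapping is an illustrative assumption, not the authors' code:

```python
import numpy as np

def channel_to_pattern(signal, size=200, window=20):
    """Turn one measurement channel into a polar-coordinate binary image.

    Normalization and moving-average smoothing follow Sec. III-B; how
    (theta, value) pairs are mapped to pixels is our illustrative choice.
    """
    n = len(signal)
    # Normalize to [0, 1] so the pattern is invariant to scale.
    lo, hi = signal.min(), signal.max()
    r = (signal - lo) / (hi - lo + 1e-12)
    # Moving-average smoothing to suppress sensor noise.
    kernel = np.ones(window) / window
    r = np.convolve(r, kernel, mode="same")
    # Rasterize (theta, r) in polar coordinates into a binary image.
    theta = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    img = np.zeros((size, size), dtype=np.uint8)
    x = ((r * np.cos(theta) + 1.0) * 0.5 * (size - 1)).astype(int)
    y = ((r * np.sin(theta) + 1.0) * 0.5 * (size - 1)).astype(int)
    img[y, x] = 1
    return img

pattern = channel_to_pattern(np.sin(np.linspace(0, 4 * np.pi, 2000)))
```

Applying this to all 12 measurement dimensions yields one 12-channel contact pattern per tilt-then-rotate sweep.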
Each class of contact pose has a different error direction. Classifying the contact poses from the contact patterns is an image recognition problem. CNNs have shown great success in image recognition in terms of efficiency and accuracy [18]. We train one simple CNN to classify the contact patterns. The CNN has two convolutional layers, two pooling layers, and one fully-connected layer. The input data are the 3 most informative channels out of the 12-channel pattern. The output is the class of contact pose, which has 9 error directions for a square peg-hole and 11 error directions for a pentagonal peg-hole. Once the contact pose is identified, the peg will be guided towards the error direction with admittance control and inserted into the hole. Figure 6: The contact poses are classified into 9 classes according to which edge of the peg contacts the hole. ### III-D Failure Recovery From the experiments, we observe failure cases even with the method described above. The reason is either that the contact pose classification model predicts a wrong error direction or that the admittance control fails to compensate for small uncertainties. To increase the robustness of the proposed method, we add a failure recovery module. If we fail to insert the peg into the hole, the peg will be initialized to a slightly different pose than the original one, and the tilt-then-rotate strategy is executed again. ## IV Simulations and experiments ### IV-A Simulations #### IV-A1 Simulation Setup The simulated environment in MuJoCo is shown in Fig. 4. The environment includes a peg and a hole, where the hole is fixed on the ground, and the peg is controlled by a well-tuned admittance controller. The side length of the hole is $50mm$ and the side length of the peg is $49mm$ (clearance = $1mm$). Contact force/torque is measured at the peg’s center of mass. #### IV-A2 Data Collection A self-supervised scheme is applied to collect the data and build the contact pose mapping.
As mentioned in II-C, once the peg contacts the hole on the flat surface, the uncertainties in the roll, pitch, and z axes are eliminated. We only consider the remaining uncertainties in the x, y, and yaw axes. The contact poses are uniformly sampled from $x\in[-20,+20]mm$, $y\in[-20,+20]mm$, and $yaw\in[-3\degree,+3\degree]$. After the approaching stage in II-C, the tilt-then-rotate strategy is applied, and $\alpha$ in equation (4) is set to $15\degree$. The tilt-then-rotate motion is executed by the admittance controller in $N$ time steps, where $N=2000$. The 12-dimension peg pose and contact force/torque are recorded in a matrix $A\in\mathbb{R}^{N\times 12}$. The data of each dimension is normalized then smoothed by a moving average with a window length $n=20$, and the contact pattern is recorded in polar coordinates as a $12\times 200\times 200$ binary image. We label the contact patterns of a square peg-hole with 9 classes according to the initial contact poses (Fig. 6). The uncertainty in the yaw axis is compensated by the admittance controller and small oscillations in the yaw axis. We also add $5\%$ noise to the parameters of the admittance controller in order to introduce variance to the collected data. We perform the self-supervised data collection for 5000 trials. The computation time is around 10 minutes. We split $80\%$ of the data as the training set and $20\%$ as the test set. #### IV-A3 Model Training From the 12-channel contact patterns, we select 3 channels $A^{\prime}\in\mathbb{R}^{N\times 3}$, including the position in the $z$ axis $X_{z}$, the torque in the roll axis $M_{x}$, and the torque in the pitch axis $M_{y}$, as the input to the CNN. We select these 3 channels because we experimentally find that they contain more features than the other channels. We downsample the contact patterns into $3\times 20\times 20$ images. We use an NVIDIA GeForce GTX 1080 Ti GPU for training. The training time is around 1 minute.
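The input preparation described above — stacking the three selected channels and downsampling to $3\times 20\times 20$ — can be sketched as follows; block-max pooling is one plausible downsampling choice (the text does not specify one), and the random patterns here are stand-ins for real ones:

```python
import numpy as np

def downsample_pattern(pattern, out=20):
    """Downsample one 200x200 binary channel to out x out by block max,
    so a set pixel anywhere in a 10x10 block survives the reduction."""
    size = pattern.shape[0]
    b = size // out
    blocks = pattern[: out * b, : out * b].reshape(out, b, out, b)
    return blocks.max(axis=(1, 3))

# Stack the three most informative channels (X_z, M_x, M_y in the text)
# into the 3 x 20 x 20 CNN input.
rng = np.random.default_rng(0)
channels = [(rng.random((200, 200)) > 0.99).astype(np.uint8)
            for _ in range(3)]
cnn_input = np.stack([downsample_pattern(c) for c in channels])
```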
#### IV-A4 Results The test accuracy of the contact pose classification neural network is $97.4\%$. Most of the failure cases are contact poses at the boundary between two classes. We test on a second data set by collecting $1000$ data points from a smaller square peg-hole, where the side length of the hole is $32mm$ (clearance = $1mm$). The test accuracy is $96.8\%$. This shows the generalization ability of the proposed method. Although the sizes of the parts and the contact measurements such as force and torque are different, the model still works very well. The reason is that we predict the contact pose according to the contact pattern, which is invariant to the size of the parts. We perform another simulation experiment on a pentagonal peg-hole. The side length of the hole is $37mm$ (clearance = $1mm$). Because the contact pattern highly depends on the geometry of the peg-hole, we cannot apply the model learned from the square peg-hole to the pentagonal peg-hole. We redo the data collection and model training on the pentagonal peg-hole. Everything is the same as for the square peg-hole, except that the number of contact pose classes becomes 11. The test accuracy is $91.0\%$. We test the entire peg-in-hole assembly framework using the proposed method. We perform 100 trials on both the square and pentagonal peg-holes. If the peg fails to be inserted into the hole, the failure recovery module will initialize the peg to a slightly different pose than the original one and redo the trial. If it requires more than 3 attempts to finish the task, we claim it fails. Table I shows the number of attempts needed to finish assembly in simulation. The high success rate shows that the proposed framework works well. TABLE I: Peg-in-hole assembly in simulation # of attempts | 1 | 2 | 3 | $>3$ | total | success rate ---|---|---|---|---|---|--- square ($50mm$) | 96 | 3 | 1 | 0 | 100 | $100\%$ pentagon ($37mm$) | 82 | 11 | 3 | 4 | 100 | $96\%$ ### IV-B Experiments Figure 7: Snapshots of the experiments.
#### IV-B1 Experimental Setup The experimental environment (Fig. 7) includes a 6-DOF FANUC LR-Mate 200iD, an ATI Mini45 F/T sensor, and 3D-printed peg-holes. The F/T sensor is embedded in the robot end-effector to measure the force and torque during assembly. The force/torque measured at the robot wrist can be transferred to the force/torque at the peg’s center of mass. The peg is fixed on the robot end-effector and the hole is fixed on a vise. The peg’s pose can be controlled with an admittance controller at $125Hz$. The hole is randomly initialized with position and orientation uncertainties of $\pm 20mm$ and $\pm 3\degree$, respectively. Three pairs of 3D-printed peg-holes are tested, including a $50mm$ square hole (clearance = $1mm$), a $32mm$ square hole (clearance = $0.5mm$), and a $37mm$ pentagonal hole (clearance = $1mm$). #### IV-B2 Results Fig. 8 shows the comparison of the contact patterns generated from the tilt-then-rotate strategy in simulation and real-world experiments. They are generated from the same class of contact pose. The data collected from the real world has much more noise than that from simulation. Although there is a huge sim-to-real gap [19] between the simulated environment and the real world in terms of friction coefficient, inertia, stiffness, damping ratio, etc., we observe that the contact patterns do share similar features. Figure 8: Comparison of the contact patterns in simulations and real-world experiments. They are generated from the same class of contact pose. The contact pattern classification model learned in simulation is applied to real-world experiments. Fig. 7 shows the snapshots of the assembly experiments. We perform 20 experiments on each of the 3 different pairs of peg-holes. Table II shows the number of attempts needed to finish assembly in real-world experiments.
The model learned in simulation ($50mm$ square, clearance = $1mm$) can be successfully applied to real-world peg-holes of a different size ($32mm$) and smaller clearance ($0.5mm$). This shows that the proposed method is able to tackle the sim-to-real gap. Supplementary videos can be found in [20]. TABLE II: Peg-in-hole assembly in real-world experiments # of attempts | clearance | 1 | 2 | 3 | $>3$ | total | success rate ---|---|---|---|---|---|---|--- square ($50mm$) | $1mm$ | 16 | 3 | 1 | 0 | 20 | $100\%$ square ($32mm$) | $0.5mm$ | 10 | 5 | 2 | 3 | 20 | $85\%$ pentagon ($37mm$) | $1mm$ | 15 | 3 | 1 | 1 | 20 | $95\%$ ## V Discussion In this paper, we propose a novel framework to identify the contact pose for peg-in-hole assembly under uncertainties. The proposed method utilizes a tilt-then-rotate strategy to generate contact patterns. A CNN is utilized to classify the contact poses and guide the robot to achieve the assembly task with admittance control. Simulation and experiment results are provided to demonstrate the effectiveness of the proposed method. The main advantages of the proposed method include: * • The injective mapping from the contact pattern to the contact pose. * • Robustness to sensor noise. * • The contact pose classification model is easy to obtain. All the training data can be quickly generated in simulation with a self-supervised scheme. * • Good generalization ability and a small sim-to-real gap. Since the contact data is normalized and recorded in polar coordinates, the pattern is sensitive neither to the size of the object nor to the parameters of the admittance controller. A model learned from a larger peg-hole can be successfully applied to smaller ones as long as the geometries are the same. Furthermore, the model learned in simulation can be adapted to the real world, despite the huge sim-to-real gap.
Here are the limitations of the proposed framework: * • The proposed method can only find the direction of the error, while it is unable to obtain the magnitude. In order to compensate for the error, the admittance controller needs to be well-tuned. * • The contact pose classification model can handle only position uncertainties, but it cannot classify the orientation uncertainties in the yaw axis. For future work, we plan to improve the algorithm so that it can handle orientation uncertainties and test it in more challenging scenarios. We also intend to incorporate active and adaptive sensing strategies into our framework. ## References * [1] Y. Xiang, T. Schmidt, V. Narayanan, and D. Fox, “Posecnn: A convolutional neural network for 6d object pose estimation in cluttered scenes,” in _Robotics: Science and Systems (RSS)_ , 2018. * [2] B. Tekin, S. N. Sinha, and P. Fua, “Real-Time Seamless Single Shot 6D Object Pose Prediction,” in _CVPR_ , 2018. * [3] S. Chhatpar and M. Branicky, “Search strategies for peg-in-hole assemblies with position uncertainty,” in _2001 IEEE/RSJ International Conference on Intelligent Robots and Systems_ , Maui, Hawaii, USA, 2001. * [4] S. Drake, “Using compliance in lieu of sensory feedback for automatic assembly,” 1978. * [5] D. E. Whitney, “Quasi-Static Assembly of Compliantly Supported Rigid Parts,” _Journal of Dynamic Systems, Measurement, and Control_ , vol. 104, no. 1, pp. 65–77, 03 1982. * [6] S. Bruno and O. Khatib, “Springer handbook of robotics,” 2008. * [7] C. Ott, R. Mukherjee, and Y. Nakamura, “Unified impedance and admittance control,” in _2010 IEEE International Conference on Robotics and Automation_ , 2010, pp. 554–561. * [8] T. Tang, H. Lin, Y. Zhao, W. Chen, and M. Tomizuka, “Autonomous alignment of peg and hole by force/torque measurement for robotic assembly,” in _2016 IEEE International Conference on Automation Science and Engineering (CASE)_ , 2016, pp. 162–167. * [9] Y. Kim, H. Song, and J.
Song, “Hole detection algorithm for chamferless square peg-in-hole based on shape recognition using f/t sensor,” _International Journal of Precision Engineering and Manufacturing_ , vol. 15, no. 3, pp. 425–432, Mar. 2014. * [10] T. Tang, H. Lin, Y. Zhao, Y. Fan, W. Chen, and M. Tomizuka, “Teach industrial robots peg-hole-insertion by human demonstration,” in _2016 IEEE International Conference on Advanced Intelligent Mechatronics (AIM)_ , 2016, pp. 488–494. * [11] S. Levine, C. Finn, T. Darrell, and P. Abbeel, “End-to-end training of deep visuomotor policies,” _The Journal of Machine Learning Research_ , 2016. * [12] Y. Fan, J. Luo, and M. Tomizuka, “A learning framework for high precision industrial assembly,” in _2019 International Conference on Robotics and Automation (ICRA)_ , 2019, pp. 811–817. * [13] J. C. Triyonoputro, W. Wan, and K. Harada, “Quickly inserting pegs into uncertain holes using multi-view images and deep network trained on synthetic data,” _CoRR_ , vol. abs/1902.09157, 2019. * [14] M. A. Lee, Y. Zhu, K. Srinivasan, P. Shah, S. Savarese, L. Fei-Fei, A. Garg, and J. Bohg, “Making sense of vision and touch: Self-supervised learning of multimodal representations for contact-rich tasks,” in _2019 International Conference on Robotics and Automation (ICRA)_ , 2019, pp. 8943–8950. * [15] S. R. Chhatpar and M. S. Branicky, “Particle filtering for localization in robotic assemblies with position uncertainty,” in _2005 IEEE/RSJ International Conference on Intelligent Robots and Systems_ , 2005, pp. 3610–3617. * [16] U. Thomas, S. Molkenstruck, R. Iser, and F. M. Wahl, “Multi sensor fusion in robot assembly using particle filters,” in _Proceedings 2007 IEEE International Conference on Robotics and Automation_ , 2007, pp. 3837–3843. * [17] L. Johannsmeier, M. Gerchow, and S.
Haddadin, “A framework for robot manipulation: Skill formalism, meta learning and adaptive control,” in _2019 International Conference on Robotics and Automation (ICRA)_ , 2019, pp. 5844–5850. * [18] A. Krizhevsky, I. Sutskever, and G. E. Hinton, “Imagenet classification with deep convolutional neural networks,” in _Advances in Neural Information Processing Systems 25_. Curran Associates, Inc., 2012, pp. 1097–1105. * [19] J. Tobin, R. Fong, A. Ray, J. Schneider, W. Zaremba, and P. Abbeel, “Domain randomization for transferring deep neural networks from simulation to the real world,” in _2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)_ , 2017, pp. 23–30. * [20] Supplementary videos of the tilt-then-rotate strategy: https://shiyujin0.github.io/TiltThenRotate/ACC2021.html
We determine the complete space-time metric from the bootstrapped Newtonian potential generated by a static spherically symmetric source in the surrounding vacuum. This metric contains post-Newtonian parameters which can be further used to constrain the underlying dynamical theory and quantum state of gravity. For values of the post-Newtonian parameters within experimental bounds, the reconstructed metric appears very close to the Schwarzschild solution of General Relativity in the whole region outside the event horizon. The latter is however larger in size for the same value of the mass compared to the Schwarzschild case. PACS - 04.70.Dy, 04.70.-s, 04.60.-m § INTRODUCTION AND MOTIVATION General Relativity predicts that the gravitational collapse of any compact source will generate geodesically incomplete space-times whenever a trapping surface appears [1]. Moreover, eternal point-like sources are mathematically incompatible with the Einstein field equations [2]. A consistent quantum theory should fix this pathological classical picture of black hole formation, just as quantum mechanics explains the stability of the hydrogen atom. Whether this can be achieved by modifications of the gravitational dynamics solely at the Planck scale or with sizeable implications for astrophysical compact objects remains open to debate, because it is intrinsically very difficult to describe quantum states of strongly interacting systems. Strong interactions imply large nonlinearities, so that the space of classical solutions does not admit a vector basis for the canonical variables which are usually lifted to quantum operators. Of course, this quantisation process can be introduced in a linearised version of any theory, but it becomes questionable whether one can then effectively obtain a reliable approximation for the quantum state of what would classically be a strongly interacting configuration.
For instance, the physical relevance of the quantum theory of linear perturbations around a given classical solution entirely relies on whether the chosen “background” is actually the one realised in nature. In the Einstein theory of gravity, we know classical solutions, like the Schwarzschild metrics for the interior of a homogeneous spherical star and the exterior of any spherical source, which cannot be obtained by perturbing the Minkowski vacuum. On the other hand, Deser [4] conjectured that it should be possible to reconstruct the full dynamics of General Relativity from the Fierz-Pauli action in Minkowski space-time by adding gravitational self-coupling terms consistent with diffeomorphism invariance. On closer inspection, this reconstruction of the Einstein-Hilbert action does not appear free of ambiguities since, for instance, it involves fixing the very important boundary terms in a specific way [5]. Generically, we know that any (modified) metric theory of gravity is invariant under changes of coordinates and must therefore be covariant under diffeomorphisms. Different choices of those boundary terms in the reconstruction proposed by Deser would therefore lead to different modified theories of gravity. What we do not know a priori is which (if any) of such theories describes the dynamics realised in nature and what the quantum state of the Universe really is. [We also remark that Lovelock's theorem [7] only holds in the vacuum, whereas our Universe is obviously a very different state and so are astrophysical compact objects.] Moreover, any reconstruction of the dynamics starting from the Minkowski vacuum can be practically effective only if the contribution of matter sources is perturbatively small, which introduces the further problem of reconstructing a large astrophysical source along with the ensuing gravitational field.
Such considerations inspired a programme called bootstrapped Newtonian gravity [9, 10], which consists in adding gravitational self-coupling terms to a Fierz-Pauli type of action for the static Newtonian potential generated by an arbitrarily large matter source. Furthermore, the coupling constants for such additional terms are allowed to vary from their Einstein-Hilbert values in order to effectively accommodate corrections arising from the underlying dynamics which, as mentioned above, we do not wish to restrict a priori. The direct outcome of this programme is a nonlinear equation, which determines the gravitational potential generated by a large static source and acting on test particles at rest, including pressure effects and the gravitational self-interaction to next-to-leading order in the Newton constant. [One could ideally iterate the process to any order, but the equations quickly become intractable analytically.] It is important to remark that our main aim eventually is to investigate the actual quantum state of such systems, and the resulting bootstrapped Newtonian potential must therefore be viewed as a mean-field result depending on effective coupling constants which entail properties of such an (otherwise unknown) state. Our approach is not meant to provide solutions of the linearised Einstein equations (or any modifications thereof), but to describe features of the proper quantum state of gravity. Compact objects were studied with this equation [11, 12, 13] and, at least for the simplest case of homogeneous density, one can explicitly build a coherent quantum state (for a scalar field) which reproduces the classical gravitational potential [14, 15]. Interestingly, these quantum states share some of the properties [16] found in the corpuscular model of black holes [17, 18].
Accurate descriptions of the interior of matter sources, whether it is a black hole or a more regular, yet highly compact, distribution, should be given in terms of quantum physics, possibly resulting in an effective equation of state. The relevant observables would eventually be represented by the radius and the mass of stable configurations. Instead, the exterior region of any astrophysical compact object is phenomenologically characterised by the (geodesic) motion of test particles, including photon trajectories. Studying these trajectories, and comparing them with those predicted by General Relativity, is more directly done by means of a full (effective) metric tensor, rather than the bootstrapped Newtonian potential describing only forces which act on static particles. The aim of this work is precisely to reconstruct a complete space-time metric from the bootstrapped Newtonian potential in the vacuum outside a spherically symmetric source. Of course, by employing an effective metric tensor we implicitly assume that the effective dynamics is also invariant under changes of coordinates, which is compatible with the underlying fundamental theory of gravity being covariant under diffeomorphisms, although the particular metric we will find does not need to be a solution of the Einstein equations in the vacuum. Moreover, we will express this metric in terms of quantities which, if not directly observable, have at least an intrinsic geometric meaning. In particular, we will take advantage of the spherical symmetry and employ the usual angular coordinates on the spheres (as surfaces of symmetry of the system) of area $\mathcal A=4\,\pi\,\bar r^2$, along with the areal radius $\bar r$. The latter differs from the radial coordinate $r$ associated with the harmonic coordinates used to express the potential [19], which is a source of significant technical complication.
Furthermore, starting from the potential acting on test particles at rest in a given (harmonic) reference frame does not fix the reconstructed spherically symmetric metric uniquely. For this reason, it will be useful to write the metric in the weak-field region in terms of post-Newtonian parameters, which allow for a direct comparison with experimental bounds. This procedure should, in principle, determine the entire metric in terms of the post-Newtonian parameters all the way into the strong coupling region, if we could solve all equations exactly. However, the post-Newtonian expansion fails near the horizon, so that an explicit calculation will require us to employ also a different near-horizon expansion. Since the potential is a smooth function of $r$, so must be the metric and the relation $\bar r=\bar r(r)$. The coefficients in the near-horizon expansion are therefore fully determined by the post-Newtonian parameters via matching conditions in a suitable intermediate region, but analytical expressions become rather involved very quickly. In the present work, we shall therefore just carry out the analysis by including the first few terms in each of the above two expansions. The main result is that the bootstrapped metric at large distance from the source approaches the Schwarzschild form in a way that can make it compatible with bounds from Solar system tests and other measurements of the first post-Newtonian parameters. The bootstrapped metric is however necessarily different from the exact Schwarzschild form, and this can be interpreted from the point of view of General Relativity as indicating the presence of an effective fluid, filling the space around the source with a non-vanishing energy-momentum tensor which violates the classical energy conditions. The presence of an effective fluid in bootstrapped Newtonian gravity was already noted in Refs. [20]. 
Moreover, the near-horizon region differs from the General Relativistic prediction mostly in that the horizon size is larger than the Schwarzschild radius for a given black hole mass. The paper is organised as follows: in Section <ref>, we review the derivation of the bootstrapped Newtonian potential acting on a static test particle generated by a static spherically symmetric source. In Section <ref>, we discuss the relation between the harmonic coordinates used to express the potential and the more common areal radius. This relation plays a crucial role in the reconstruction of the metric performed at large distance from the source in Section <ref>, where corrections to the perihelion precession, light deflection and time delay are also estimated. The geometry near the horizon is studied in Section <ref> by matching with the weak-field expressions. We conclude with comments and an outlook in Section <ref>.

§ POTENTIAL IN THE VACUUM

In General Relativity (and metric theories of gravity in general), the motion of test particles is determined by the entire metric tensor and there is no invariant notion of a gravitational potential. However, one can still introduce a potential for specific types of motion on specific metric space-times starting from the corresponding geodesic equation. For example, the geodesic equation in the weak field and non-relativistic limit reduces to the Newtonian equation of motion with the potential which solves the linearised Einstein equations in the vacuum, provided one uses harmonic coordinates. In the following, we will reverse this argument and start from a bootstrapped Newtonian potential obtained in harmonic coordinates in order to reconstruct a compatible metric.
§.§ Potential for static test particles

We consider a massive particle moving along the trajectory $x^\mu=x^\mu(\tau)$ that satisfies the geodesic equation
\begin{equation}
\label{geodesic}
\ddot x^\mu
+\Gamma^\mu_{\alpha\beta}\,\dot x^\alpha\,\dot x^\beta
= 0
\ ,
\end{equation}
where dots denote derivatives with respect to the particle's proper time $\tau$ and $\Gamma^\mu_{\alpha\beta}$ are the Christoffel symbols of the metric $g_{\mu\nu}$. If the space-time is static, one can choose a time coordinate $x^0$ in which the metric reads
\begin{equation}
g_{\mu\nu} = \eta_{\mu\nu} + \epsilon\,h_{\mu\nu}(x^i)
\ ,
\end{equation}
where $\epsilon$ is a parameter we introduce to keep track of deviations from flat space-time. We can now say that the particle is (initially) at rest if $\dot x^i=0$ in this reference frame, which implies that $\dot x^0\simeq 1$ and, as long as $|\dot x^i|\simeq\epsilon\ll 1$ (weak-field approximation), Eq. (<ref>) to first order in $\epsilon$ reduces to
\begin{equation}
\ddot x^i \simeq \frac{1}{2}\,\epsilon\,h_{00,i}
\ ,
\label{Fma}
\end{equation}
which yields Newton's second law for a particle in the potential $V$ if we set
\begin{equation}
\label{weak field limit}
\epsilon\,h_{00} = -2\,V
\ ,
\end{equation}
and the spatial coordinates $x^i$ in Eq. (<ref>) are the analogue of Cartesian coordinates in Newtonian mechanics. In fact, the explicit form of the potential $V$ generated by a given source can be obtained from the linearised Einstein equations, which then reduce to the Poisson equation for the Newtonian potential in the de Donder gauge
\begin{equation}
\partial^\nu h_{\mu\nu} - \frac{1}{2}\,\partial_\mu h = 0
\ ,
\end{equation}
where $h\equiv \eta^{\alpha\beta}\,h_{\alpha\beta}$. We must correspondingly assume that the coordinates $x^\mu$ in which the components of the metric take the form in Eq. (<ref>) are harmonic coordinates satisfying $\Box\,x^\mu=0$. Note that, for a static metric with $|h_{ij}|\ll 1$, the condition (<ref>) is always satisfied to this order.

§.§ Bootstrapped Newtonian vacuum

We just recalled that the interpretation of $V$ in Eq.
(<ref>) as the gravitational potential for massive particles at rest is consistent with the fact that, in the same approximation, the linearised Einstein field equations reduce to the Poisson equation of Newton's theory,
\begin{equation}
\triangle V = 4\,\pi\,\gn\,\rho
\ ,
\end{equation}
where $\rho$ is the energy density of the static source and $\triangle$ the flat space Laplacian. The de Donder gauge condition (<ref>) implemented in the derivation of Eq. (<ref>) was thus employed explicitly also in deriving the equation for the bootstrapped Newtonian potential $V$ from the Einstein-Hilbert action in Ref. [15]. For the sake of brevity, we here review a more heuristic derivation of $V=V(r)$ outside static and spherically symmetric sources from a bootstrapped Newtonian effective action [15, 9, 11, 13]. We start from the Newtonian Lagrangian for a source of density $\rho=\rho(r)$, to wit
\begin{equation}
L_{\rm N}[V]
=
-4\,\pi\int_0^\infty r^2\,\d r
\left[
\frac{(V')^2}{8\,\pi\,\gn}
+V\,\rho
\right]
\ ,
\end{equation}
from which Eq. (<ref>) can be derived, and stress that the radial coordinate $r$ is the one obtained from harmonic coordinates $x^i$, as we shall see in more detail in Section <ref>. To this action several interacting terms for the field potential $V$ will be added for the motivation, stated in the introductory section, of describing mean-field deviations from General Relativity induced by quantum physics. First of all, we couple $V$ to a gravitational current proportional to its own energy density,
\begin{equation}
J_V
\simeq
-\frac{[V'(r)]^2}{2\,\pi\,\gn}
\ ,
\end{equation}
where $\mathcal{V}$ is the spatial volume and $U_{\rm N}$ the Newtonian potential energy. Moreover, we add the “loop correction” $J_{\rho}\simeq-2\,V^2$, which couples to $\rho$ and, since the pressure gravitates and becomes relevant for large compactness, we also add to the energy density the pressure term $3\,q_p\,p$ [11], [We only consider isotropic fluids.] where $U_p$ is the potential energy associated with the work done by the force responsible for the pressure.
The total Lagrangian then reads
\begin{eqnarray}
L[V]
&=&
L_{\rm N}[V]
-4\,\pi\int_0^\infty r^2\,\d r
\left[
q_V\,J_V\,V
+3\,q_p\,J_p\,V
+q_\rho\,J_\rho\left(\rho+3\,q_p\,p\right)
\right]
\nonumber
\\
&=&
-4\,\pi\int_0^\infty r^2\,\d r
\left[
\frac{(V')^2}{8\,\pi\,\gn}
\left(1-4\,q_V\,V\right)
+\left(\rho+3\,q_p\,p\right)V\left(1-2\,q_\rho\,V\right)
\right]
\ ,
\end{eqnarray}
where the coupling constants $q_V$, $q_p$ and $q_\rho$ can be used to track the effects of the different contributions. As we mentioned previously, different values of these couplings would correspond to different quantum states and depend on the underlying microscopic quantum theory of gravity and matter. For instance, the case $q_V=q_p=q_\rho=1$ reproduces the Einstein-Hilbert action at next-to-leading order in the expansion in $\epsilon$ in Eq. (<ref>) and can be naturally used as a primary reference [15] (see also Refs. [11, 13] for more details on the role of these coupling parameters). Eventually, their values should be fixed by experimental constraints. Finally, the field equation for $V$ reads
\begin{equation}
\triangle V
=
4\,\pi\,\gn\,
\frac{1-4\,q_\rho\,V}{1-4\,q_V\,V}
\left(\rho+3\,q_p\,p\right)
+\frac{2\,q_V\,(V')^2}{1-4\,q_V\,V}
\ ,
\end{equation}
which must be solved along with the conservation equation $p' = -V'\left(\rho+p\right)$. In vacuum, where $\rho=p=0$, Eq. (<ref>) simplifies to
\begin{equation}
\triangle V
=
\frac{2\,q_V\,(V')^2}{1-4\,q_V\,V}
\ ,
\end{equation}
which allows for absorbing the coupling constant by rescaling $V\to \tilde V=q_V\,V$. The exact solution was found in Ref. [9] and reads
\begin{equation}
V_0
=
\frac{1}{4\,q_V}
\left[
1-\left(1+\frac{6\,q_V\,\gn\,M}{r}\right)^{2/3}
\right]
\ .
\end{equation}
The asymptotic expansion away from the source yields
\begin{equation}
V_0
\simeq
-\frac{\gn\,M}{r}
+q_V\,\frac{\gn^2\,M^2}{r^2}
-\frac{8\,q_V^2\,\gn^3\,M^3}{3\,r^3}
\ ,
\end{equation}
so that the Newtonian behaviour is always recovered and the post-Newtonian terms are seen to depend on the coupling $q_V$. The value of $q_V$ can be constrained by experimental bounds once we compute trajectories to compare with.

§ HARMONIC AND AREAL COORDINATES FOR STATIC SPHERICAL SYSTEMS

The argument leading to the potential (<ref>) starting from a general metric involves several approximations, which makes it impossible to determine the starting metric uniquely. In order to reconstruct a metric compatible with Eq. (<ref>), we will therefore have to supply further conditions.
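The exact vacuum solution and its expansion lend themselves to a quick symbolic check. The following sketch (symbol names such as `GN` are ours, and we assume `sympy` is available) verifies that $V_0$ solves the vacuum field equation and reproduces the quoted large-$r$ expansion:

```python
import sympy as sp

r, M, GN, qV = sp.symbols("r M G_N q_V", positive=True)

# Exact vacuum solution V_0 of the bootstrapped field equation
V0 = (1 - (1 + 6*qV*GN*M/r)**sp.Rational(2, 3)) / (4*qV)

# Vacuum equation: triangle V = 2 q_V (V')^2 / (1 - 4 q_V V),
# with the flat radial Laplacian  triangle V = V'' + 2 V'/r.
lhs = sp.diff(V0, r, 2) + 2*sp.diff(V0, r)/r
rhs = 2*qV*sp.diff(V0, r)**2 / (1 - 4*qV*V0)
assert sp.simplify(lhs - rhs) == 0

# Asymptotic expansion: -GN M/r + q_V GN^2 M^2/r^2 - 8 q_V^2 GN^3 M^3/(3 r^3)
series = sp.series(V0, r, sp.oo, 4).removeO()
expected = (-GN*M/r + qV*(GN*M)**2/r**2
            - sp.Rational(8, 3)*qV**2*(GN*M)**3/r**3)
assert sp.simplify(series - expected) == 0
```

Both assertions pass, confirming the Newtonian limit and the $q_V$-dependent post-Newtonian terms.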
Before we get to that point, however, we need to discuss in detail the relation between the radial coordinate $r$ used to express the potential in the previous section and the areal coordinate $\bar r$ usually employed to write the general static spherically symmetric metric as
\begin{equation}
\d s^2
=
-\bar B\,\d \bar t^2
+\bar A\,\d \bar r^2
+\bar r^2\,\d \Omega^2
\ ,
\end{equation}
where $\bar A=\bar A(\bar r)$, $\bar B=\bar B(\bar r)$, and $\d \Omega^2=\d \theta^2+\sin^2\theta\,\d \phi^2$ is the usual solid angle on the unit sphere, with $0\le\theta\le\pi$ and $0\le\phi<2\,\pi$. Cartesian coordinates $x^i=(x,y,z)$ in flat space satisfy Eq. (<ref>). This condition can be extended to general space-times by defining harmonic coordinates $x^\mu=(t,x,y,z)=(t,\bm x)$ such that
\begin{equation}
g^{\alpha\beta}\,\Gamma_{\alpha\beta}^\mu = 0
\ ,
\end{equation}
which coincides with the de Donder gauge condition (<ref>). In particular, we are interested in spherically symmetric space-times with a metric of the form (<ref>) and we therefore find it convenient to employ polar coordinates associated to the harmonic ones by
\begin{equation}
x=r(\bar r)\,\sin\theta\,\cos\phi
\ ,
\qquad
y=r(\bar r)\,\sin\theta\,\sin\phi
\ ,
\qquad
z=r(\bar r)\,\cos\theta
\ ,
\end{equation}
where we assume that the “harmonic” [Polar coordinates do not satisfy Eq. (<ref>) even in Minkowski space-time, but we shall refer to $r$ as the “harmonic” radial coordinate for the sake of brevity.] $r$ is an invertible smooth function of the areal coordinate $\bar r$. A straightforward calculation of Eq. (<ref>) reveals that the function $r=r(\bar r)$ must satisfy [19]
\begin{equation}
\frac{\d}{\d \bar r}
\left(
\bar r^2\,\sqrt{\frac{\bar B}{\bar A}}\,\frac{\d r}{\d \bar r}
\right)
=
2\,\sqrt{\bar A\,\bar B}\,r
\ .
\end{equation}
Expressing the metric (<ref>) in terms of the rotationally invariant forms $\d \bm x^2=\d r^2+r^2\,\d\Omega^2$ and $(\bm x\cdot \d \bm x)^2=r^2\,\d r^2$, we deduce that the line element in harmonic coordinates reads
\begin{equation}
\d s^2
=
-B\,\d t^2
+\frac{\bar r^2}{r^2}\,\d \bm x^2
+\left[
\frac{A}{\left(\d r/\d \bar r\right)^2}
-\frac{\bar r^2}{r^2}
\right]
\frac{\left(\bm x\cdot\d \bm x\right)^2}{r^2}
\ ,
\end{equation}
where $\d t=\d\bar t$, $\bar r=\bar r(r)$, $A=\bar A(\bar r(r))$ and $B=\bar B(\bar r(r))$. The unique Schwarzschild solution of the Einstein field equations in the vacuum outside a spherical source [Birkhoff's theorem ensures that uniqueness follows from spherical symmetry.
In more general cases, other vacuum solutions can be obtained from the linearised solutions [25].] is given by
\begin{equation}
\bar B_{\rm S}
=
\bar A_{\rm S}^{-1}
=
1-\frac{\Rh}{\bar r}
\ ,
\end{equation}
where
\begin{equation}
\Rh = 2\,\gn\,M
\end{equation}
is the gravitational radius. By solving Eq. (<ref>), one finds that the harmonic radial coordinate for the Schwarzschild metric is simply given by
\begin{equation}
r = \bar r - \gn\,M
\ ,
\end{equation}
which leads to the potential for the Schwarzschild metric in harmonic coordinates
\begin{equation}
V_{\rm S}
=
-\frac{\gn\,M}{r+\gn\,M}
\ .
\end{equation}
By comparing with the expansion of $V_0$ in Eq. (<ref>), we then see that the unique prediction of General Relativity is recovered to first order in $q_V$ if $q_V=1$ (see Fig. <ref>).

Comparison between the Newtonian $V_{\rm N}$, Schwarzschild $V_{\rm S}$ and bootstrapped Newtonian $V_{0}$ (with $q_V=1$).

We can now replace $V_{\rm S}$ with the potential $V_0$ in Eq. (<ref>), that is
\begin{equation}
\bar B = 1+2\,V_0
\ ,
\end{equation}
and start to reconstruct the bootstrapped metric in the areal coordinate $\bar r$. In particular, we notice that the metric coefficient $\bar B$ is fully determined by the potential $V_0$ and the relation $r=r(\bar r)$. Moreover, the Schwarzschild metric has the important property that $\bar A_{\rm S}\,\bar B_{\rm S}=1$, which is related with the vanishing of the light-like component of the Ricci tensor, $R_{\mu\nu}\,k^\mu\,k^\nu=0$ for any $k_\mu\,k^\mu=0$ [26], and the validity of the Equivalence Principle. Using $\bar C\equiv \bar A\,\bar B$, it is also convenient to rewrite Eq. (<ref>) as
\begin{equation}
\bar r\,r''
+\left(
2
-\frac{\bar r\,\bar C'}{2\,\bar C}
+\frac{\bar r\,\bar B'}{\bar B}
\right) r'
=
\frac{2\,\bar C\,r}{\bar B\,\bar r}
\ ,
\end{equation}
where a prime denotes the derivative with respect to $\bar r$. This equation determines the relation between $\bar A$ and $\bar r$, but one equation is not sufficient to determine both $r=r(\bar r)$ and $\bar A=\bar A(\bar r)$ given $B=B(r)$, and we will have to resort to further conditions.
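Two quick symbolic checks are possible here: that $r=\bar r-\gn\,M$ indeed solves the harmonic condition for the Schwarzschild coefficients, and that $V_{\rm S}$ and $V_0$ (with $q_V=1$) agree through first post-Newtonian order. A sketch with `sympy` (symbol names ours):

```python
import sympy as sp

rb, M, GN = sp.symbols("rbar M G_N", positive=True)
B = 1 - 2*GN*M/rb          # Schwarzschild \bar B (A = 1/B)
r = rb - GN*M              # candidate harmonic radius

# Harmonic condition d/drb [ rb^2 sqrt(B/A) dr/drb ] = 2 sqrt(A B) r,
# using sqrt(B/A) = B and sqrt(A B) = 1 since A = 1/B here.
lhs = sp.diff(rb**2*B*sp.diff(r, rb), rb)
rhs = 2*r
assert sp.simplify(lhs - rhs) == 0

# V_S = -GN M/(r + GN M) and V_0 with q_V = 1 share the 1/r and 1/r^2 terms
rr = sp.symbols("r", positive=True)
VS = -GN*M/(rr + GN*M)
V0 = (1 - (1 + 6*GN*M/rr)**sp.Rational(2, 3))/4
assert sp.simplify(sp.series(VS - V0, rr, sp.oo, 3).removeO()) == 0
```

The second assertion makes explicit that deviations from Schwarzschild only start at the third order in $\gn M/r$ when $q_V=1$.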
§ EFFECTIVE SPACE-TIME PICTURE: WEAK FIELD We first analyse the region far from the source by Taylor expanding the metric coefficients and $r=r(\bar r)$ in powers of the dimensionless ratio $\Rh/\bar r\sim M/\bar r$, that is 1+∑_k=1 a_k /r̅^k 1+∑_k=1 b_k /r̅^k 1+∑_k=1 σ_k /r̅^k We also introduce 1+ ∑_k=1 c_k /r̅^k in which the coefficients $a_k$'s are fully determined by the $c_k$'s and $b_k$'s since $\bar C=\bar A\,\bar B$. The above expressions for $\bar C$, $\bar B$ and $r/\bar r$ solve Eq. (<ref>) [equivalently, Eq. (<ref>)] at zero order in $\Rh/\bar r$ and ensure asymptotic flatness for $r\sim\bar r\to\infty$. In the following, we will solve Eq. (<ref>) in order to determine the metric up to third order in $\Rh/\bar r$. At first and second order in $\Rh/\bar r$ we obtain -3/4 c_1 2 c_1-b_1 and the third-order equation yields \begin{equation} \label{c3 coefficient} \frac{5}{2}\,c_1\,c_2 \ . \end{equation} We can now fix the coefficients $b_k$ to match Eq. (<ref>), that is +q_V ^2/2 [r(r̅)]^2 -2 q_V^2 ^3/3 [r(r̅)]^3 which yields $b_1=-1$ and -3/4 c_1 2+3 c_1 -2/3 q_V^2 Upon replacing the above expressions in the expansion of $\bar A$, we obtain +7/4 c_1 (2 q_V -9/4 c_1 5 c_2 17/8 c_1 3 c_1^2 In order to uniquely fix all of the coefficients in the above expansions from physical considerations, it is useful to introduce the Eddington-Robertson parameterised post-Newtonian (PPN) formalism, in which the metric reads [19] 1-α /r̅ +(β-α γ) ^2/2 r̅^2 +(ζ-1) ^3/r̅^3 1+γ /r̅ +ξ ^2/r̅^2 +r̅^2 Ω̣^2 where one can set $\alpha=1$ by the definition of the gravitational radius (<ref>). 
This is in agreement with $b_1=-\alpha=-1$ and allows us to identify the first order PPN parameters Finally, we obtain ^2/2 r̅^2 7+4 β (5+γ)-32 β^2-γ (26-7 γ)-24 c_2 ^3/48 r̅^3 γ /r̅ β-3 γ-2 c_2 ^2/2 r̅^2 32 β^2 4 β (9+γ) 3 γ (6+15 γ-8 γ^2) 8 c_2 (2+5 γ) ^3/16 r̅^3 so that (γ-1) /r̅ c_2 ^2/r̅^2 32 β^2 8 β (4-γ) γ (22-59 γ-36 γ^2) 12 c_2 (1-5 γ) ^3/24 r̅^3 The harmonic radius is also given by +1-3 γ/4 1-3 γ+2 γ^2 -2 c_2 ^2/4 r̅ Experimental data strongly constrain $\abs{\gamma-1}\simeq \abs{\beta-1}\ll 1$. Upon assuming $\beta=\gamma=1$, that is $c_1=0$ and $q_V=1$, we find that the bootstrapped metric which describes the minimum deviation from the Schwarzschild form is given by -2 M/r̅ 2(5+6 c_2) ^3 M^3/3 r̅^3 2(6 ξ-1) ^3 M^3/3 r̅^3 2 M/r̅ ^2 M^2/r̅^2 2 (9+14 c_2) ^3 M^3/r̅^3 ^2 M^2/r̅^2 2(14 ξ-9) ^3 M^3/r̅^3 2 (ξ-1) ^2 M^2/r̅ For completeness, we also display the Ricci scalar obtained from the above metric coefficients to next-to-leading order in the $\Rh/r$ expansion, 1-4 c_2-5 γ+4 γ^2 ^2/2 r^4 +16 c_2 -40 β+32 β^2 +14 γ+16 c_2 γ+16 β γ+5 γ^2 -16 γ^3 ^3/8 r^5 -2 c_2 5+8 c_2 ^3/2 r^5 where we set $\beta=\gamma=1$ in the second expression. Clearly, the above expression of $\bar R$ shows that the effective metric increasingly differs from Schwarzschild's $\bar R_{\rm S}=0$ as one goes closer to the source. In the above, the second order PPN parameters are both determined by the one parameter $c_2$ as -5+6 c_2/12 13-6 ξ/12 so that the combination $\xi=\zeta=1$ corresponding to the PPN expansion of the Schwarzschild metric is not allowed. We can see that the new contribution to $\bar A$ at second order in $\Rh/\bar r$ only vanishes for $c_2=0$, but higher-order corrections then cannot be eliminated. Correspondingly, for $\beta=\gamma=1$, we have (ξ-1) ^2/r̅^2 (12 ξ-7) ^3/6 r̅^3 and the Schwarzschild case $\bar C=1$ cannot be reproduced. In the following, we shall analyse the effects of these second order terms in Eqs. (<ref>) and (<ref>). 
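As a numerical illustration of the first post-Newtonian order used in the orbit analysis below, the standard PPN precession formula $\Delta\phi^{(1)}=2\,\pi\,(2-\beta+2\,\gamma)\,\gn M/\ell$ can be evaluated for Mercury. The orbital inputs below are standard Solar-system values, not taken from this paper:

```python
import math

GM_sun = 1.32712440018e20      # heliocentric GM [m^3/s^2]
c = 299792458.0                # speed of light [m/s]
a = 5.790905e10                # Mercury semi-major axis [m]
e = 0.205630                   # Mercury eccentricity
ell = a*(1 - e**2)             # semilatus rectum [m]

def precession(beta, gamma):
    """First-order PPN perihelion shift per orbit [rad]."""
    return 2*math.pi*(2 - beta + 2*gamma)*GM_sun/(c**2*ell)

dphi = precession(1.0, 1.0)    # beta = gamma = 1 (Schwarzschild values)
# convert to arcsec per century (Mercury period ~ 87.969 d)
arcsec_century = dphi*(180/math.pi)*3600*(36525/87.969)
print(round(arcsec_century, 1))   # ~43.0, the classic GR value
```

Tightening $|\gamma-1|$ and $|\beta-1|$ in this formula is precisely what forces $c_1=0$ and $q_V\simeq 1$ in the expansions above.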
§.§ Effective energy-momentum tensor

Since the effective metric with components (<ref>) and (<ref>) differs from the Schwarzschild geometry, the space-time must contain a non-vanishing effective spherically symmetric energy-momentum tensor
\begin{equation}
T^{\rm eff}_{\mu\nu}
=
\rho^{\rm eff}\,u_\mu\,u_\nu
+p_r^{\rm eff}\,r_\mu\,r_\nu
+p_t^{\rm eff}\,\theta_\mu\,\theta_\nu
+p_t^{\rm eff}\,\phi_\mu\,\phi_\nu
\ ,
\end{equation}
where $\rho^{\rm eff}=\rho^{\rm eff}(\bar r)$, $p_r^{\rm eff}=p_r^{\rm eff}(\bar r)$ and $p_t^{\rm eff}=p_t^{\rm eff}(\bar r)$ are respectively the energy density, the radial pressure and the surface tension of the static effective fluid. In the coordinates $\bar x^\mu=(\bar t,\bar r,\theta,\phi)$ of Eq. (<ref>), we also have the tetrad components
\begin{equation}
u^\mu=\frac{\delta^\mu_1}{\sqrt{\bar B}}
\ ,
\qquad
r^\mu=\frac{\delta^\mu_2}{\sqrt{\bar A}}
\ ,
\qquad
\theta^\mu=\frac{\delta^\mu_3}{\bar r}
\ ,
\qquad
\phi^\mu=\frac{\delta^\mu_4}{\bar r\,\sin\theta}
\ .
\end{equation}
We can compute the density and pressures from the Einstein tensor,
\begin{eqnarray}
\rho^{\rm eff}
&=&
T^{\rm eff}_{\mu\nu}\,u^\mu\,u^\nu
=
\frac{G_{00}}{8\,\pi\,\gn\,\bar B}
=
\frac{(\bar A-1)\,\bar A+\bar r\,\bar A'}{8\,\pi\,\gn\,\bar r^2\,\bar A^2}
\\
p_r^{\rm eff}
&=&
T^{\rm eff}_{\mu\nu}\,r^\mu\,r^\nu
=
\frac{G_{11}}{8\,\pi\,\gn\,\bar A}
=
\frac{\bar B-\bar C+\bar r\,\bar B'}{8\,\pi\,\gn\,\bar r^2\,\bar C}
\\
p_t^{\rm eff}
&=&
T^{\rm eff}_{\mu\nu}\,\theta^\mu\,\theta^\nu
=
\frac{G_{22}}{8\,\pi\,\gn\,\bar r^2}
=
\frac{2\,\bar C\,(2\,\bar B'+\bar r\,\bar B'')-(2\,\bar B+\bar r\,\bar B')\,\bar C'}{32\,\pi\,\gn\,\bar r\,\bar C^2}
\ ,
\end{eqnarray}
where a prime denotes differentiation with respect to $\bar r$. The above expressions of course vanish for the Schwarzschild metric, whereas we obtain [The general expressions in terms of the Eddington-Robertson parameters are given in Appendix <ref>.] M^2/2 π r̅^4 (1-6 ξ) (1-ξ) M^2/2 π r̅^4 2 M/r̅ For $\xi=1$ (that is $c_2=0$), the pressure and tension vanish, at this order of approximation, but one is still left with a negative energy density.

§.§.§ Energy conditions

One can now check if the effective source satisfies (some of) the energy conditions. Since $p_r^{\rm eff}\simeq -p_t^{\rm eff}$ the effective fluid is in general anisotropic. In particular, for anisotropic fluids, the null energy condition is implied by all other energy conditions and requires
\begin{eqnarray}
\rho^{\rm eff}+p_r^{\rm eff}
&=&
\frac{\bar B\,\bar C'}{8\,\pi\,\gn\,\bar r\,\bar C^2}
\ge 0
\\
\rho^{\rm eff}+p_t^{\rm eff}
&=&
\frac{2\,\bar r^2\,\bar C\,\bar B''-\bar r^2\,\bar B'\,\bar C'+\bar B\left(2\,\bar r\,\bar C'-4\,\bar C\right)+4\,\bar C^2}{32\,\pi\,\gn\,\bar r^2\,\bar C^2}
\ge 0
\ ,
\end{eqnarray}
where primes again denote differentiation with respect to $\bar r$.
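The first null-energy-condition combination can be verified symbolically: with $\bar C=\bar A\,\bar B$, the sum of the Einstein-tensor projections above indeed collapses to $\bar B\,\bar C'/8\,\pi\,\gn\,\bar r\,\bar C^2$. A `sympy` sketch in units $\gn=1$ (function names ours):

```python
import sympy as sp

rb = sp.symbols("rbar", positive=True)
B = sp.Function("B", positive=True)(rb)   # generic \bar B(\bar r)
C = sp.Function("C", positive=True)(rb)   # generic \bar C(\bar r)
A = C/B                                   # \bar A = \bar C / \bar B

# Einstein-tensor projections quoted above (GN = 1)
rho = ((A - 1)*A + rb*sp.diff(A, rb)) / (8*sp.pi*rb**2*A**2)
p_r = (B - C + rb*sp.diff(B, rb)) / (8*sp.pi*rb**2*C)

# Claimed identity: rho^eff + p_r^eff = B C' / (8 pi rbar C^2)
nec_r = B*sp.diff(C, rb) / (8*sp.pi*rb*C**2)
assert sp.simplify(rho + p_r - nec_r) == 0
```

This makes explicit why the radial null energy condition reduces to the sign of $\bar C'$, so that $\bar C=1$ (Schwarzschild) saturates it identically.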
For $\beta=\gamma=1$, we have ≃ M^2/π r̅^4 (3-8 ξ) M/2 r̅ (1+4 ξ) ^2 M^3/2 π r̅^5 and, in order to enforce the above conditions (<ref>) and (<ref>) for $\bar r\gg \Rh$, we would then need $\xi<-1/4$ (that is, $c_2<-5/4$). The case $\xi=1$ (or $c_2=0$) of minimal deviation from the Schwarzschild metric necessarily violates the classical energy conditions. In principle, this conclusion is in line with the original idea that the effective metric should incorporate corrections stemming from quantum physics. The fact that the effective energy-momentum tensor does not vanish at large distance from the source means that quantum effects associated with a localised source will affect the space-time even at much larger scales.

§.§.§ Misner-Sharp-Hernandez mass

It is also interesting to cast the above result in terms of the Misner-Sharp-Hernandez mass [27, 28, 29] which is known to play an important role in the study of the viability of quantum and quantum-corrected black hole solutions (see e.g. [30, 31] and references therein).[It is also worth mentioning that the Misner-Sharp-Hernandez mass has a role in determining the location of horizons for static spherically symmetric spacetimes, thus providing a straightforward method for the characterization of the causal structure of such spaces (see e.g. Ref. [29]).] For $\beta=\gamma=1$, we find (ξ-1) 2 M/r̅ (6 ξ-1) ^2 M^2/r̅^2 which equals
\begin{equation}
m(\bar r)
=
M_{\rm s}
+4\,\pi\int_{\bar r_{\rm s}}^{\bar r} \rho^{\rm eff}(x)\,x^2\,\d x
\ ,
\end{equation}
where $\bar r_{\rm s}\gg \Rh$ is the areal radius of the source of mass $M_{\rm s}=m(\bar r_{\rm s})$. For $\xi\ge 1$ (or $c_2\ge 0$), one therefore finds that the asymptotic ADM [34] mass $m(\bar r\to \infty)=M<M_{\rm s}$ (the effective negative energy density screens gravity), whereas for $\xi<1$ (or $c_2<0$) we have $M>M_{\rm s}$ (the positive effective energy density causes an anti-screening effect [35]).

§.§ Geodesics

Geodesics $\bar x^\mu=\bar x^\mu(\lambda)$ in a metric of the form in Eq.
(<ref>) can be obtained from the Lagrangian
\begin{equation}
2\,L
=
\bar B\,\dot{\bar t}^2
-\bar A\,\dot{\bar r}^2
-\bar r^2\left(\dot\theta^2+\sin^2\theta\,\dot\phi^2\right)
=
k
\ ,
\end{equation}
where a dot denotes differentiation with respect to $\lambda$. The constant $k=1$ and $\lambda=\tau$ is the proper time for massive particles, whereas $k=0$ and $\lambda$ is an affine parameter for light signals. Staticity and spherical symmetry ensure the existence of the usual integrals of motion, namely the conserved energy
\begin{equation}
\mathcal E = \bar B\,\dot{\bar t}
\end{equation}
and
\begin{equation}
\mathcal J = \bar r^2\,\dot\phi
\ ,
\end{equation}
which is proportional to the angular momentum around the axis that defines the angle $\phi$, having chosen the trajectory to lie on the plane $\theta=\pi/2$. We are now left with just the equations of motion for $\phi=\phi(\tau)$ and $r=r(\tau)$, for which it is easier to use the mass-shell condition (<ref>), which we write as
\begin{equation}
\dot{\bar r}^2
=
\frac{\mathcal E^2}{\bar C}
-\mathcal V^{\rm eff}
\ ,
\end{equation}
where the effective potential
\begin{equation}
\mathcal V^{\rm eff}
=
\frac{1}{\bar A}
\left(k+\frac{\mathcal J^2}{\bar r^2}\right)
\ .
\end{equation}
An interesting feature is that $\bar C=\bar A\,\bar B\not= 1$ in general, see Eq (<ref>), and one therefore expects an energy-dependent term in the acceleration experienced by a particle, in apparent violation of the equivalence principle [36], as predicted by some quantum models of gravity [37]. For the purpose of studying orbits with $\mathcal J\not=0$, it is more useful to parameterise the trajectories with the angle $\phi$, and therefore solve
\begin{equation}
\left(\frac{\d \bar r}{\d\phi}\right)^2
=
\frac{\bar r^4}{\mathcal J^2}
\left(\frac{\mathcal E^2}{\bar C}-\mathcal V^{\rm eff}\right)
\ .
\end{equation}
We next analyse massive ($k=1$) and massless ($k=0$) cases separately.

§.§.§ Perihelion precession

The precession of almost Newtonian orbits of planets and stars ($k=1$) with semilatus rectum $\ell$ and eccentricity $\varepsilon$ can be easily expressed in terms of the PPN parameters. In particular, at first order in $\Rh/\ell$, one finds [19]
\begin{equation}
\Delta\phi^{(1)}
\simeq
2\,\pi\left(2-\beta+2\,\gamma\right)\frac{\gn\,M}{\ell}
\ ,
\end{equation}
which reproduces the General Relativistic result
\begin{equation}
\Delta\phi^{(1)}_{\rm S}
=
\frac{6\,\pi\,\gn\,M}{\ell}
\end{equation}
for $\beta=\gamma=1$ of the Schwarzschild metric. The second order correction depends on both $\xi$ and $\zeta$ and, for $\beta=\gamma=1$, is given by Δ ϕ^(2) (41+10 ξ-24 ζ) (16 ξ-13) ε^2/2 ^2 M^2/ℓ^2 (37+22 c_2) (3+16 c_2) ε^2/2 ^2 M^2/ℓ^2 Δ ϕ^(2)_S 2 π[ 11 ξ-7 4 (ξ-1) ε^2 ^2 M^2/ℓ^2 where we used Eq.
(<ref>), and the General Relativistic result $\Delta\,\phi^{(2)}_{\rm S}$ corresponds to $\xi=\zeta=1$. We see that, in the minimal case with $c_2=0$, we have $\xi=1$ but $\zeta\not=1$, and a correction remains which is independent of the eccentricity. Binary systems could therefore be employed in order to test the effective bootstrapped Newtonian metric at the second PPN order.

§.§.§ Light deflection

For light signals ($k=0$), one can likewise express the weak lensing angle for a trajectory reaching the minimum areal radius $\bar r_0$ from infinity in terms of the PPN parameters. At first order in $\Rh/\bar r_0$, we have [19]
\begin{equation}
\Delta\phi^{(1)}
\simeq
\left(1+\gamma\right)\frac{2\,\gn\,M}{\bar r_0}
\ ,
\end{equation}
which reproduces the result from the Schwarzschild geometry for $\gamma=1$ by construction. The second order correction for $\beta=\gamma=1$, however, only depends on $\xi$ and is given by
\begin{equation}
\Delta\phi^{(2)}
\simeq
\left[\left(15+4\,c_2\right)\pi-16\right]\frac{\gn^2\,M^2}{4\,\bar r_0^2}
=
\Delta\phi^{(2)}_{\rm S}
+\left(\xi-1\right)\pi\,\frac{\gn^2\,M^2}{\bar r_0^2}
\ ,
\end{equation}
which equals the General Relativistic result in the minimal case $\xi=1$ (or $c_2=0$). This shows that light is not significantly affected and weak gravitational lensing cannot be efficiently used to test the bootstrapped Newtonian metric.

§.§.§ Time delay

The radial equation (<ref>) for $\beta=\gamma=1$ reads -k { 1-2 M/r̅ +2 c_2 M/r̅ +(5+6 c_2) ^2 M^2/r̅^2 (1-2 M/r̅) 4 ^2 M^2/r̅^2 (5+12 c_2) M/3 r̅ which, even for the minimal deviation with $c_2=0$, contains an additional term proportional to $\mathcal E^2$. This term will give rise to an additional acceleration $\sim \gn^3\,M^3/\bar r^4$, which will affect the time of flight of both massive and light signals compared to the General Relativistic expectation. Let us consider, in particular, a trajectory with $\mathcal J=0$ between $\bar r_1=\bar r(\lambda_1)$ and $\bar r_2=\bar r(\lambda_2)>\bar r_1$. Eq.
(<ref>) with $c_2=0$ then reads 1-2 M/r̅ +5 ^2 M^2/r̅^2 20 ^3 M^3/3 r̅^3 For light signals, since $k=0$, 10 ^3 M^3/3 r̅^3 which yields 5 ^3 M^3/3 r̅_1^2 r̅_2^2 The expected relative time delay $\delta\lambda/\Delta\lambda$ for light signals is therefore independent of $\mathcal E$.

§ EFFECTIVE SPACE-TIME PICTURE: NEAR HORIZON

The task of reconstructing a metric from the potential (<ref>) is more challenging near the horizon, as we have far fewer experimental constraints to rely upon. Moreover, we need to first discuss how the horizon would be determined by the potential in harmonic coordinates. For the Schwarzschild solution (<ref>), the horizon areal radius is given by $\bar r=\Rh$, which corresponds to the harmonic radius $r_{\rm S}=\Rh/2=\gn\,M$, according to Eq. (<ref>). The potential (<ref>) then takes the value $V_{\rm S}(r_{\rm S})=-\frac{1}{2}$, in agreement with the Newtonian concept of escape velocity being equal to the speed of light. In Refs. [9, 11, 12, 16], we relied on this result and likewise defined the horizon as the radius where the escape velocity equals the speed of light for the bootstrapped Newtonian potential, that is
\begin{equation}
2\,V_0(\rh) = -1
\ ,
\end{equation}
which yields
\begin{equation}
\rh
=
\frac{6\,q_V\,\gn\,M}{\left(1+2\,q_V\right)^{3/2}-1}
\ ,
\end{equation}
provided $q_V>0$. Note also that $\rh\to 2\,\gn\,M$ for $q_V\to 0$, which is twice the Schwarzschild value $r_{\rm S}=\Rh/2$. Considering Eq. (<ref>) and the constraints on the PPN parameters $\gamma$ and $\beta$ from the weak-field regime, we must have $q_V\simeq 1$. In particular, for the minimal deviation from Schwarzschild given by $q_V=1$, we have
\begin{equation}
\rh
=
\frac{6\,\gn\,M}{3\,\sqrt{3}-1}
\simeq
1.43\,\gn\,M
\ ,
\end{equation}
which is also significantly larger than the corresponding harmonic Schwarzschild radius $r_{\rm S}$. Since $\Rh/\rh\sim 1$, the perturbative PPN expansion (<ref>) cannot be effectively extended into the near-horizon region.
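The horizon condition above is easy to evaluate numerically; a minimal sketch (units $\gn\,M=1$) checking both the $q_V\to 0$ limit and the $q_V=1$ value:

```python
import math

def r_h(qV):
    """Harmonic horizon radius from 2 V0(r_h) = -1, in units GN M = 1."""
    return 6*qV/((1 + 2*qV)**1.5 - 1)

# q_V -> 0 recovers 2 GN M, twice the Schwarzschild harmonic radius GN M
assert abs(r_h(1e-8) - 2.0) < 1e-6

# minimal deviation q_V = 1: r_h = 6 GN M / (3 sqrt(3) - 1)
assert abs(r_h(1.0) - 6/(3*math.sqrt(3) - 1)) < 1e-12
print(round(r_h(1.0), 2))   # ~1.43, in units of GN M
```

The monotonic growth of `r_h(qV)` with $q_V$ also makes explicit why weak-field bounds forcing $q_V\simeq 1$ pin the horizon well above its Schwarzschild counterpart.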
We instead have 1+2 V_0 where $\mathcal B=\mathcal B(r)$ is a regular and strictly positive function for $r\ge \rh$, which can be Taylor expanded as Of course, the coefficients $\beta_k$ are fully determined from the explicit form of $V_0$, although their expressions are rather cumbersome. The first few ones, for instance, are given by (1+2 q_V)^3/2-1/3 q_V √(1+2 q_V) q_V (3+6 q_V+4 q_V^2) -(1+2 q_V)^3/2/9 q_V (1+2 q_V)^2 where the numerical estimates are obtained by setting $q_V=1$. In order to change to the standard coordinates, we similarly expand the harmonic coordinate $r$ around $\bar r_{\rm H}\equiv \bar r(\rh)$ as ρ_0 r̅_H ∑_k=1 ρ_k r̅-r̅_H/r̅_H^k where $\rh=\rho_0\,\bar r_{\rm H}$ is again the harmonic horizon radius in Eq. (<ref>). By inserting Eq. (<ref>) into Eq. (<ref>), one can write where the coefficients ${\mathcal B}_k$ are determined by the known $\beta_j$'s in Eq. (<ref>) and the still undetermined $\rho_j$'s in Eq. (<ref>). We notice in particular that $\bar B(\bar r>\bar r_{\rm H})>0$ implies that ${\mathcal B}_0>0$ and each ${\mathcal B}_k$ depends on the $\rho_{j\le k+1}$'s, which quickly makes all expressions very cumbersome. In order to have a proper event horizon, we must require that both $\bar B$ and $\bar A$ become negative for $\bar r<\bar r_{\rm H}$. We thus assume [Note that we require that the determinant of the metric $g\sim \bar A\,\bar B$ is regular everywhere for $\bar r\ge \bar r_{\rm H}$.] where the function $\bar{ \mathcal A}$ is also regular and strictly positive for $\bar r\ge\bar r_{\rm H}$ and can be expanded as ∑_k=0 𝒜_k r̅-r̅_H/r̅_H^k where ${\mathcal A}_0>0$. It follows that $\bar C=\bar{\mathcal A}\,\bar{\mathcal B}$ and, upon replacing into Eq. (<ref>), we obtain r̅ r” 2 𝒜̅ r/r̅-r̅_H r̅ ℬ̅'/2 ℬ̅ r̅ 𝒜̅'/2 𝒜̅ where primes again denote derivatives with respect to $\bar r$. 
In principle, this equation can be solved order by order in $(\bar r-\bar{r}_{\rm H})$, thus relating the coefficients ${\mathcal A}_k$ to the ${\mathcal B}_k$'s and $\rho_k$'s (equivalently, to the $\beta_k$'s and $\rho_k$'s). At leading order, for $\bar r\simeq \bar{r}_{\rm H}$, we have 2 ρ_0 𝒜_0 which implies ρ_1/2 ρ_0 At next to leading order, we then have ρ_0 𝒜_1 +ρ_1 𝒜_0 3 𝒜_1 4-4 𝒜_0 where we recall that ${\mathcal A}_0$ and ${\mathcal B}_0$ must be positive. In particular, if $\rho_1\simeq 1$ and $|\rho_2|\ll 1$, [We will see next that this is a rather accurate estimate.] we must have ≃1/2 ρ_0 ≃2/3 ρ_0 -1/2 ρ_0 -ℬ_1/4 ℬ_0 where the known and exact coefficient (1+2 q_V)^3/2-1/3 ρ_0 q_V √(1+2 q_V) and, since ${\mathcal B}_1$ depends also on $\rho_2$, we do not show its rather long expression It is important to remark that the unknown coefficients ${\mathcal A}_k$'s depend on the coefficients $\rho_k$'s, both through Eq. (<ref>) and because the ${\mathcal B}_k$'s depend on the $\rho_k$'s. The only way to fix this ambiguity, related with the expression of the harmonic $r=r(\bar r)$, on physical grounds is to match the near-horizon expressions of the metric components $\bar A$ and $\bar B$ with their analogue in the weak-field The latter was obtained previously by imposing observational constraints to partly fix $r=r(\bar r)$ therein. The matching between the two regimes will therefore leave unspecified only those parameters which do not conflict with the experimental bounds at large distance from the source. §.§ Matching with weak field Let us start from noting that the Taylor expansion for the near-horizon regime is comparable with the one for weak field when r̅ - r̅_H/r̅_H or $\bar r\simeq \bar r_{\rm m}$, with 1+√(1+4 /r̅_H) /2 ρ_0 1+√(1+4 ρ_0 /) where we recall that $\rho_0>0$ and the harmonic $\rh$ is given in Eq. (<ref>). 
Moreover, the first few terms in the two expansions still provide a reliable approximation at $\bar r=\bar r_{\rm m}$ if 2 ρ_0 / 1+√(1+4 ρ_0 /) The above condition is satisfied for ≡2 / 6 q_V/(1+2 q_V)^3/2-1 In particular, by matching the expressions of the harmonic coordinates (<ref>) and (<ref>) at $\bar r=\bar r_{\rm m}$, that is ∑_k=2 σ_k ρ_0 r̅_H ρ_0 r̅_H we obtain (1-ρ_1) r̅_m/r̅_H /2 r̅_H (ρ_k-r̅_m/r̅_H σ_k) At leading order, we thus find ≃(1-ρ_1) r̅_m/r̅_H /2 r̅_H This estimate can be further improved by considering yet another expansion about $\bar r =\bar r_{\rm m}$ and determining the corresponding Taylor coefficients from the matching with the weak-field expansion for $\bar r\gtrsim \bar r_{\rm m}$ and with the near-horizon expansion for $\bar r\lesssim \bar r_{\rm m}$. This is equivalent to imposing continuity of the function $r=r(\bar r)$ and its derivatives across $\bar r_{\rm m}$ (see Appendix <ref>). We remark here that the result for $|c_2|=|\xi-1|\lesssim 1$ is consistent with the above expressions for $\rho_1\simeq 1$ and $|\rho_2|\ll 1$.

§.§ Near-horizon geometry

The better estimate of $\rho_0$ in Eq. (<ref>) yields for the areal radius of the bootstrapped Newtonian horizon
\begin{equation}
\bar r_{\rm H}
\simeq
\left(1.21+0.27\,c_2\right)\Rh
\ .
\end{equation}
The value of $c_2=c_2^{\rm S}$ which would give $\bar r_{\rm H}=\Rh=2\,\gn\,M$ according to this equation is outside our range of approximation (namely, $|c_2|\ll 1$). In fact, resorting to Eq. (<ref>), we obtain $c_2^{\rm S}\approx -0.696$, corresponding to $\xi=c_2^{\rm S}+1\simeq 0.3$. On using Eqs. (<ref>), (<ref>) and (<ref>), we obtain ≃1.37 + 0.50 c_2 ≃0.85 + 0.31 c_2 In particular, for $c_2=0$, we find $\rho_1\simeq 1$ and Eq. (<ref>) yields the same relation between the harmonic and the areal horizon radii which holds for the Schwarzschild solution, that is $\rh=\bar r_{\rm H}-\gn\,M$. From the bootstrapped potential we thus obtain
\begin{equation}
\bar r_{\rm H}
\simeq
\frac{\left(1+2\,q_V\right)^{3/2}-1+6\,q_V}{\left(1+2\,q_V\right)^{3/2}-1}\,\gn\,M
\simeq
2.43\,\gn\,M
\ ,
\end{equation}
where the last value is for $q_V=1$.
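Assuming the Schwarzschild-like relation $\bar r=r+\gn\,M$ near the horizon (as found above for $c_2=0$), the areal horizon radius and the corresponding Schwarzschild-type areal compactness can be evaluated directly; the comparison with the Buchdahl bound anticipates the compactness discussion below:

```python
import math

def rbar_H(qV):
    """Areal horizon radius (units GN M = 1), assuming rbar_H = r_h + GN M
    with r_h the harmonic horizon radius from 2 V0(r_h) = -1."""
    x = (1 + 2*qV)**1.5 - 1
    return (x + 6*qV)/x

rH = rbar_H(1.0)
print(round(rH, 2))        # ~2.43, vs the Schwarzschild value 2 GN M
assert rH > 2.0            # horizon larger than the Schwarzschild radius

# Schwarzschild-type areal compactness at the horizon, X = R_H / rbar_H
X = 2.0/rH
print(round(X, 2))         # ~0.82, below the Buchdahl bound 8/9 ~ 0.89
assert X < 8/9
```

The same function shows that pushing `rbar_H(qV)` down to the Schwarzschild value $2\,\gn M$ requires a much larger self-coupling, consistent with the estimate quoted in the next paragraph.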
The corresponding metric coefficients $\bar B$ and $\bar A$ at leading order in the near-horizon expansion are shown in Fig. <ref>, where they are compared with their Schwarzschild analogues. The only relevant difference is given by the areal radius of the bootstrapped horizon. For this reason we plot $\bar{r}_{\rm H}$ in units of $\Rh$ in Fig. <ref>, and note that $\bar{r}_{\rm H}=\Rh$ for
\begin{equation}
q_V
=
\frac{3+2\,\sqrt{3}}{2}
\simeq
3.23
\ .
\end{equation}
Clearly, this much stronger self-coupling would not be compatible with the weak-field bounds, further supporting the result that the bootstrapped Newtonian metric contains a horizon $\bar{r}_{\rm H}$ larger than Schwarzschild's $\Rh$.

Comparison between the Schwarzschild and bootstrapped Newtonian metric components for $\Rh<\bar r<\bar{r}_{\rm m}$. The vertical line in the right panel is the location of the bootstrapped horizon $\bar r=\bar{r}_{\rm H}$.

Bootstrapped horizon $\bar{r}_{\rm H}$ in units of $\Rh$. The horizontal dotted line is unity.

Since the matching radius $\bar r_{\rm m}\simeq 3.73\,\gn\,M$, one can expect a correction for the radius $\bar r_{\rm ph}$ of the photon orbit, whose value is $3\,\gn\,M$ in General Relativity. Using $\bar C\simeq {\mathcal A}_0\,{\mathcal B}_0\simeq 1.17$ and approximately constant, the latter can be estimated from Eq. (<ref>) as
\begin{equation}
\bar r_{\rm ph}
\simeq
\frac{3}{2}\,\bar r_{\rm H}
\ ,
\end{equation}
where $\mathcal V^{\rm eff}$ is the potential in Eq. (<ref>) with $k=0$ for null trajectories. The result $\bar r_{\rm ph}\simeq 3.64\,\gn\,M$ is just short of $\bar r_{\rm m}$, and a better reconstruction of the near-horizon metric including a few higher order terms ${\mathcal A}_k$ and ${\mathcal B}_k$ is therefore likely to modify this value. In fact, we note that $\bar C$ must approach the General Relativistic value $\bar C=1$ rather fast in the weak-field regime, according to Eq. (<ref>), and $\bar C'$ cannot therefore be neglected near the horizon.
For example, if we simply employ a linear approximation for $\bar{\mathcal B}$, and take $\rho_1=1$ and $\rho_2=0$, we get ≃3 ℬ_0-2 ℬ_1/2 ℬ_0-2 ℬ_1 ≃3.26 M which is closer to the prediction of General Relativity. On the other hand, the innermost stable circular orbit of General Relativity is located at $\bar r_{\rm ISCO}=6\,\gn\,M$, and its location in the bootstrapped Newtonian metric should instead be recovered rather accurately from the weak-field approximation. From Eq. (<ref>) and (<ref>) evaluated at $\bar r=\bar r_{\rm ISCO}$ we indeed obtain for the deviation of the bootstrapped metric from Schwarzschild's ≃5+17 c_2/72 where we expect $|c_2|=|\xi-1|\ll 1$. §.§ Harmonic and areal compactness For a source of harmonic radius $R$ in the Schwarzschild space-time, one can introduce the compactness in the harmonic coordinate as ≡ M/R or in the areal coordinate as ≡2 M/R̅ 2 X_S/1+X_S where we used Eq. (<ref>). In particular, $\bar X_{\rm S}(\Rh)=X_{\rm S}(r_{\rm S})=1$ for a Schwarzschild black hole. For the bootstrapped metric, we could likewise introduce the harmonic compactness and the areal compactness X/ρ_0+(1-ρ_0) X in which we employed the leading order transformation (<ref>) with $\rho_1\simeq 1$, ≃ρ_0 r̅_H +r̅ -r̅_H so that $\bar X(\bar r_{\rm H})=X(\rh)=1$. For the purpose of comparing with General Relativity, it is however more convenient to use the Schwarzschild quantities and note that, for a bootstrapped Newtonian black hole 6 q_V/(1+2 q_V)^3/2-1 2 ρ_0 X_S 2 X_S/1+X_S where the numerical values are those for $q_V=1$, as usual. We notice incidentally that this value is just slightly smaller than the Buchdahl limit $\bar X_{\rm B}=8/9\simeq 0.89$. § CONCLUSIONS AND OUTLOOK The bootstrapped Newtonian approach is devised to capture quantum effects which induce large (mean-field) deviations from classical General Relativity when large matter sources are involved. 
Such effects would be completely determined if we knew the proper quantum state describing specific self-gravitating systems. What we know for certain is that the strong field regime of gravity governed by the Einstein field equations is not linear. Determining the relevant quantum state therefore requires that one solves a nonlinear quantum dynamics, which seems hardly a tenable task for large and very compact sources. The bootstrapped Newtonian approach considers a simplified form of nonlinear dynamics for gravity, compared to General Relativity, but aims at including quantum deviations from classicality in a form that is sufficiently general to confront observations. This generality is manifested in the coupling constants appearing in the action (<ref>). The potential experienced by a test particle at rest is however not sufficient to determine all deviations from the classical solutions of General Relativity. Starting from the bootstrapped Newtonian potential outside a static and spherically symmetric source, we here obtained a complete metric by supplying further conditions of compatibility with observations in the weak-field regime. The main difference with respect to the unique Schwarzschild solution of General Relativity is given by the larger horizon radius estimated in Eq. (<ref>). This prediction makes the bootstrapped Newtonian programme experimentally testable, for instance, by measurements of light trajectories reaching the photon orbit. A more detailed analysis of these trajectories in terms of the parameters of the effective metric is the natural continuation of the work presented here. A possible conclusion of such a phenomenological analysis could be that a consistent description of the near-horizon region of black holes requires more than the first few nonlinear terms included in the bootstrapped Newtonian Lagrangian (<ref>).
This possibility will be investigated in the future, but, in this respect, it is important to recall that the entire programme of bootstrapped Newtonian gravity is motivated by the idea that black holes and similarly compact sources might require a fully quantum, rather than semiclassical, description. It is therefore not a priori clear to what extent the effective metric we obtained is meaningful at such short distances from the (would-be classical) horizon. More precisely, one expects that the interaction of matter and light falling towards the black hole should be described in terms of scattering processes, for which classical geodesic lines will become an unreliable approximation if black holes are indeed extended quantum objects (for a non-exhaustive list, see Refs. [17, 16, 39, 41, 42, 43, 46, 47]). This viewpoint will also require a more detailed quantum description of the matter source itself, which is left completely out here. Finally, we would like to mention that the weak-field regime is also worthy of further study. First of all, there is the possibility that deviations from the Schwarzschild geometry reproduce the kind of effective dark fluid responsible for Dark Matter phenomenology, as explored in Refs. [20]. Moreover, the propagation of gravitational waves and other signals would also be affected by the non-trivial background corresponding to the effective metric. All of these developments are left for future investigations. §.§ Acknowledgments R.C. and I.K. are partially supported by the INFN grant FLAG. The work of R.C. and A.G. has also been carried out in the framework of activities of the National Group of Mathematical Physics (GNFM, INdAM) and the COST action Cantata.
§ WEAK-FIELD EFFECTIVE ENERGY-MOMENTUM TENSOR The effective fluid density for general values of the Robertson-Eddington parameters is given by $\rho\simeq\frac{\gn\,M^2}{4\,\pi\,\bar r^4}\left\{\beta-3\,\gamma+2\,\gamma^2-2\,c_2+\left[32\,\beta^2-12\,\beta\left(3-\gamma\right)+\gamma\left(18-3\,\gamma-8\,\gamma^2\right)+8\,c_2\left(2+\gamma\right)\right]\frac{\gn\,M}{2\,\bar r}\right\}\ ,$ the pressure by $p_r\simeq\frac{\gn\,M}{4\,\pi\,\bar r^3}\left[\left(2-\beta-3\,\gamma+2\,\gamma^2-2\,c_2\right)\frac{\gn\,M}{\bar r}+\left(1-3\,\gamma+2\,\gamma^2-2\,c_2\right)\frac{\gn^2\,M^2}{\bar r^2}\right]\ ,$ and the tension by $p_t\simeq\frac{\gn\,M}{8\,\pi\,\bar r^3}\left[\left(2\,\beta-3+5\,\gamma-4\,\gamma^2+4\,c_2\right)\frac{\gn\,M}{\bar r}+\left(1-2\,\beta+\gamma+6\,\gamma^2+c_2\left(2+6\,\gamma\right)\right)\frac{\gn^2\,M^2}{\bar r^2}\right]\ .$ The anisotropy $\Pi\equiv p_r-p_t$ therefore is $\Pi\simeq\frac{\gn\,M}{8\,\pi\,\bar r^3}\left[3\left(1-\gamma\right)+\left(7-4\,\beta-11\,\gamma+8\,\gamma^2-8\,c_2\right)\frac{\gn\,M}{\bar r}+\left(1+2\,\beta-3\,\gamma+10\,\gamma^2-2\,c_2\left(3+5\,\gamma\right)\right)\frac{\gn^2\,M^2}{\bar r^2}\right]\ .$ The Misner-Sharp-Hernandez mass reads $m\simeq M\left[1+\left(\beta-3\,\gamma+2\,\gamma^2-2\,c_2\right)\frac{\gn\,M}{\bar r}-\left(32\,\beta^2-12\,\beta\left(3-\gamma\right)+18\,\gamma-3\,\gamma^2-8\,\gamma^3+8\,c_2\left(2+\gamma\right)\right)\frac{\gn^2\,M^2}{4\,\bar r^2}\right]\ .$ For $\beta=\gamma=1$, the above expressions reduce to those shown in the main text. § INTERMEDIATE RANGE EXPANSION Let us start by expanding the Schwarzschild metric in harmonic coordinates around the horizon $r_{\rm S}=\Rh/2$. From the general form (<ref>) and Eq. (<ref>) we obtain $B\simeq\frac{r-\gn\,M}{2\,\gn\,M}\ .$ The analogous expansion for the coefficient $B$ of the bootstrapped Newtonian metric can be derived from Eq. (<ref>). To be more specific, we shall only consider the case $q_V=\gamma=1$ here, which yields $B\simeq\frac{r-1.43\,\gn\,M}{1.77\,\gn\,M}\ .$ However, we need both $\rho_0$ and $\rho_1$ in Eq. (<ref>) in order to obtain the leading order expression for the coefficient $A$. For this purpose, we expand $r=r(\bar r)$ around $\bar r_{\rm m}$ defined in Eq. (<ref>) as the radius at which the weak-field expansion becomes comparable to the near-horizon one. This intermediate expansion of $r=r(\bar r)$ can then be constrained by using the weak-field expansion to the right of $\bar r_{\rm m}$ and the near-horizon expansion to the left of $\bar r_{\rm m}$.
In particular, continuity of $r=r(\bar r)$ and of its first few derivatives around $\bar r_{\rm m}$ implies $\rho_0+\rho_1\,\frac{\Rh}{\bar r_{\rm m}}+\rho_2\,\frac{\Rh^2}{\bar r_{\rm m}^2}+\mathcal O(3)\simeq 1-\frac{\Rh}{2\,\bar r_{\rm m}}-\frac{c_2}{2}\,\frac{\Rh^2}{\bar r_{\rm m}^2}+\mathcal O(3)\ ,$ together with the analogous conditions involving $\rho_1+2\,\rho_2\,\frac{\Rh}{\bar r_{\rm m}}$ and $2\,\rho_2+\mathcal O(1)$ from the first and second derivatives, where $\mathcal O(k)$ denotes a quantity proportional to $\pqty{\Rh/\bar r_{\rm m}}^k$. Solving these equations gives $\rho_0$, $\rho_1$ and $\rho_2$ in terms of $\Rh/\bar r_{\rm m}$ and $c_2$. We next note that $\rho_0\simeq{\rh}/{\bar r_{\rm H}}$, where the bootstrapped Newtonian horizon $\rh$ is given in Eq. (<ref>) and the last equality follows from the definition (<ref>). With this expression for $\rho_0$, Eq. (<ref>) can be used to relate ${\Rh}/{\bar r_{\rm m}}$ to the weak-field coefficient $c_2=\xi-1$, and one can then obtain explicit estimates for $\rho_0$, $\rho_1$ and $\rho_2$. Using again $q_V=1$, we get $\frac{\Rh}{\bar r_{\rm m}}\simeq 0.54-0.09\,c_2+\mathcal O(3)\ ,$ which is smaller than one, as required for the validity of the truncation. Correspondingly, we have $\rho_0\simeq 0.59-0.13\,c_2+\mathcal O(3)$ and $\rho_1\simeq 1+0.14\,c_2+\mathcal O(2)\ .$ From Eq. (<ref>), we can further estimate the orders of magnitude of the neglected quantities, namely $\mathcal O(1)\approx 0.54$, $\mathcal O(2)\approx 0.29$ and $\mathcal O(3)\approx 0.15$, assuming proportionality constants of order one as well. Moreover, using the above estimates in Eq. (<ref>) yields $\rho_0\simeq 0.59$ to leading order. This confirms that the direct matching between the weak-field and near-horizon expansions is already rather accurate. In fact, the ratio between the first term that we neglected and the last we included in the direct matching in Eq. (<ref>), that is $\frac{\rho_2\,\bar r_{\rm H}-\sigma_2\,\bar r_{\rm m}}{\rho_1\,\bar r_{\rm H}-\sigma_1\,\bar r_{\rm m}}\cdot\frac{\Rh}{\bar r_{\rm m}}\simeq 0.08+0.19\,c_2\ ,$ is reasonably small in the expected range of values of $c_2=\xi-1$. Finally, Eq. (<ref>) with the above estimates yields $A\simeq\frac{\left(2.06+1.50\,c_2\right)\gn\,M}{r-1.43\,\gn\,M}\ ,$ where we used the approximation of small $c_2$ and neglected all $\mathcal O(k)$ terms. [1] S. W. Hawking and G. F. R. Ellis, “The Large Scale Structure of Space-Time,” (Cambridge University Press, Cambridge, 1973) [2] R. P. Geroch and J. H. Traschen, Phys. Rev. D 36 (1987) 1017 [Conf. Proc. C 861214 (1986) 138]; H. Balasin and H.
Nachbagauer, Class. Quant. Grav. 10 (1993) 2271 [4] S. Deser, Gen. Rel. Grav. 1 (1970) 9 Gen. Rel. Grav. 42 (2010) 641 [arXiv:0910.2975 [gr-qc]]. [5] R. M. Wald, Phys. Rev. D 33 (1986) 3613; K. Heiderich and W. Unruh, Phys. Rev. D 38 (1988) 490; M. P. Hertzberg, JHEP 1709 (2017) 119 [arXiv:1702.07720 [hep-th]]; D. Bai and Y. H. Xing, Nucl. Phys. B 932 (2018) 15 [arXiv:1610.00241 [hep-th]]; R. Carballo-Rubio, F. Di Filippo and N. Moynihan, JCAP 1910 (2019) 030 [arXiv:1811.08192 [hep-th]]; D. Hansen, J. Hartong and N. A. Obers, Phys. Rev. Lett. 122 (2019) 061106 [arXiv:1807.04765 [hep-th]]. [7] D. Lovelock, J. Math. Phys. 12 (1971) 498; J. Math. Phys. 13 (1972) 874. [9] R. Casadio, M. Lenzi and O. Micu, Phys. Rev. D 98 (2018) 104016 [arXiv:1806.07639 [gr-qc]]. [10] R. Casadio and I. Kuntz, Eur. Phys. J. C 80 (2020) 581 [arXiv:2003.03579 [gr-qc]]. [11] R. Casadio, M. Lenzi and O. Micu, Eur. Phys. J. C 79 (2019) 894 [arXiv:1904.06752 [gr-qc]]. [12] R. Casadio and O. Micu, Phys. Rev. D 102 (2020) 104058 [arXiv:2005.09378 [gr-qc]]. [13] R. Casadio, O. Micu and J. Mureika, Mod. Phys. Lett. A 35 (2020) 2050172 [arXiv:1910.03243 [gr-qc]]. [14] R. Casadio, A. Giugno and A. Giusti, Phys. Lett. B 763 (2016) 337 [arXiv:1606.04744 [gr-qc]] [15] R. Casadio, A. Giugno, A. Giusti and M. Lenzi, Phys. Rev. D 96 044010 (2017) [arXiv:1702.05918 [gr-qc]]. [16] R. Casadio, M. Lenzi and A. Ciarfella, Phys. Rev. D 101 (2020) 124032 [arXiv:2002.00221 [gr-qc]]. [17] G. Dvali and C. Gomez, Fortsch. Phys. 61 (2013) 742 [arXiv:1112.3359 [hep-th]]; G. Dvali, C. Gomez and S. Mukhanov, “Black Hole Masses are Quantized,” arXiv:1106.5894 [hep-ph]. G. Dvali and C. Gomez, Phys. Lett. B 719 (2013) 419 [arXiv:1203.6575 [hep-th]]; Phys. Lett. B 716 (2012) 240 [arXiv:1203.3372 [hep-th]]; Eur. Phys. J. C 74 (2014) 2752 [arXiv:1207.4059 [hep-th]]. [18] A. Giusti, Int. J. Geom. Meth. Mod. Phys. 16 (2019) 1930001. [19] S. 
Weinberg, “Gravitation and Cosmology: Principles and Applications of the General Theory of Relativity,” (Wiley & Sons, 1972) [20] R. Casadio and A. Giusti, “Bootstrapped Newtonian cosmology and the cosmological constant problem,” [arXiv:2009.10667 [gr-qc]]; M. Cadoni and A. P. Sanna, “Emergence of a Cosmological Constant in Anisotropic Fluid Cosmology,” [arXiv:2012.08335 [gr-qc]]; M. Cadoni, M. Tuveri and A. P. Sanna, Symmetry 12 (2020) no.9, 1396 [arXiv:2006.16652 [gr-qc]]; M. Cadoni, R. Casadio, A. Giusti and M. Tuveri, Phys. Rev. D 97 (2018) 044047 [arXiv:1801.10374 [gr-qc]]; M. Cadoni, R. Casadio, A. Giusti, W. Mück and M. Tuveri, Phys. Lett. B 776 (2018) 242 [arXiv:1707.09945 [gr-qc]]. [25] B. C. Xanthopoulos, J. Math. Phys. 19, 1607 (1978). [26] T. Jacobson, Class. Quant. Grav. 24 (2007) 5717 [arXiv:0707.3222 [gr-qc]]. [27] C. W. Misner and D. H. Sharp, Phys. Rev. 136 (1964) B571. [28] W. C. Hernandez and C. W. Misner, Astrophys. J. 143 (1966) 452. [29] V. Faraoni, “Cosmological and Black Hole Apparent Horizons,” (Springer Lect. Notes Phys. 907, 2015). [30] V. Faraoni and A. Giusti, Symmetry 12 (2020) 1264 [arXiv:2006.12577 [gr-qc]]; V. Faraoni, A. Giusti and T. F. Bean, “Asymptotic flatness and Hawking quasilocal mass,” [arXiv:2010.00069 [gr-qc]]. [31] R. Casadio and F. Scardigli, Eur. Phys. J. C 74 (2014) 2685 [arXiv:1306.5298 [gr-qc]]; R. Casadio, A. Giugno and O. Micu, Int. J. Mod. Phys. D 25 (2016) 1630006 [arXiv:1512.04071 [hep-th]]. [34] R.L. Arnowitt, S. Deser and C.W. Misner, Phys. Rev. 116 (1959) 1322. [35] A. Bonanno, R. Casadio and A. Platania, JCAP 01 (2020) 022 [arXiv:1910.11393 [gr-qc]]. [36] M. Gasperini, “Gravity at Finite Temperature, Equivalence Principle, and Local Lorentz Invariance,” [arXiv:2101.00458 [gr-qc]]. [37] N. E. J. Bjerrum-Bohr, J. F. Donoghue, B. K. El-Menoufi, B. R. Holstein, L. Planté and P. Vanhove, Int. J. Mod. Phys. D 24 (2015) 1544013 [arXiv:1505.04974 [hep-th]]; N. E. J. Bjerrum-Bohr, J. F. Donoghue, B. R. Holstein, L. 
Planté and P. Vanhove, Phys. Rev. Lett. 114 (2015) 061301 [arXiv:1410.7590 [hep-th]]. [39] P. Nicolini, A. Smailagic and E. Spallucci, Phys. Lett. B 632 (2006) 547 [arXiv:gr-qc/0510112 [gr-qc]]; P. Nicolini, Int. J. Mod. Phys. A 24 (2009) 1229 [arXiv:0807.1939 [hep-th]]. [41] D. C. Dai, D. Minic and D. Stojkovic, “On black holes as macroscopic quantum objects,” [arXiv:2006.09202 [gr-qc]]. [42] M. Bojowald, Universe 6 (2020) 125 [arXiv:2009.13565 [gr-qc]]; A. Perez, Rept. Prog. Phys. 80 (2017) 126901 [arXiv:1703.09149 [gr-qc]]. [43] R. Casadio and A. Orlandi, JHEP 08 (2013) 025 [arXiv:1302.7138 [hep-th]]; W. Mück and G. Pozzo, JHEP 05 (2014) 128 [arXiv:1403.1422 [hep-th]]; R. Casadio, A. Giugno, A. Giusti and O. Micu, Eur. Phys. J. C 77 (2017) 322 [arXiv:1701.05778 [gr-qc]]. [46] X. Calmet, R. Casadio and F. Kuipers, Phys. Rev. D 100 (2019) 086010 [arXiv:1909.13277 [hep-th]]. [47] A. Bonanno and M. Reuter, Phys. Rev. D 62 (2000) 043008 [arXiv:hep-th/0002196 [hep-th]]; A. Platania, Eur. Phys. J. C 79 (2019) 470 [arXiv:1903.10411 [gr-qc]].
# Battery-constrained Federated Edge Learning in UAV-enabled IoT for B5G/6G Networks Shunpu Tang, Wenqi Zhou, Lunyuan Chen, Lijia Lai, Junjuan Xia and Liseng Fan S. Tang, W. Zhou, L. Chen, L. Lai, J. Xia and L. Fan are all with the School of Computer Science, Guangzhou University, Guangzhou, China (e-mail: {tangshunpu, 2112006156, 2112019037<EMAIL_ADDRESS><EMAIL_ADDRESS>lsfan2019@126.com). ###### Abstract In this paper, we study how to optimize federated edge learning (FEEL) in UAV-enabled Internet of things (IoT) for B5G/6G networks, from a deep reinforcement learning (DRL) approach. Federated learning is an effective framework to train a shared model between decentralized edge devices or servers without exchanging raw data, which helps protect data privacy. In UAV-enabled IoT networks, latency and energy consumption are two important metrics limiting the performance of FEEL. Although most existing works have studied how to reduce the latency and improve the energy efficiency, few works have investigated the impact of the limited device batteries on FEEL. Motivated by this, we study the battery-constrained FEEL, where the UAVs can adjust their operating CPU-frequency to prolong the battery life and avoid withdrawing from the federated learning training untimely. We optimize the system by jointly allocating the computational resource and wireless bandwidth in time-varying environments. To solve this optimization problem, we employ a deep deterministic policy gradient (DDPG) based strategy, where a linear combination of latency and energy consumption is used to evaluate the system cost. Simulation results are finally presented to show that the proposed strategy outperforms the conventional ones. In particular, it enables all the devices to complete all rounds of FEEL with limited batteries and meanwhile reduces the system cost effectively. ###### Index Terms: UAV, federated learning, latency, energy consumption, mobile edge computing.
## I Introduction Swarms of unmanned aerial vehicles (UAVs) play a non-negligible role in the Internet of things (IoT) for beyond the fifth-generation (B5G) and the forthcoming sixth-generation (6G) wireless mobile networks [1, 2, 3]. Due to their mobility and flexibility, UAVs can effectively collect data and communicate in edge computing-based IoT networks. Thanks to these advantages, UAVs have been widely used in many application scenarios, such as environmental monitoring, emergency communication, transportation control and remote sensing. In recent years, deep learning has shown great success in speech recognition, computer vision and many other domains [4]. Especially in the era of big data, millions or billions of data samples are collected by various sensors and are used to train deep learning models. The authors in [5] proposed a large-scale hierarchical database for image classification and promoted the emergence of state-of-the-art (SOTA) CNN models [6, 7, 8]. Large amounts of high-quality data are the key to improving the performance of a deep learning model. As a matter of fact, data collection faces a series of challenges. Though a lot of data are produced by heterogeneous devices, especially IoT devices and smartphones at the mobile edge, these data are fragmented and scattered across various devices. This incurs a huge communication cost to centralize data on the server for model training when the traditional centralized training method of deep learning is adopted. Meanwhile, increasing concerns about sensitive privacy make data collection more difficult. People do not want their personal data (e.g., images, voice, chat history) to be uploaded to others’ servers. There are also companies, banks and hospitals with a lot of sensitive data which they are not allowed to divulge. Moreover, laws on privacy protection have been promulgated continuously.
Due to these reasons, data are difficult to centralize for training deep learning models, a situation called the “data island” problem [9]. In this context, federated learning has been proposed to break the “data island”, as it makes full use of each device’s data to train a model with fine performance. In federated learning, all devices use local data to train a model and share only the model, on the premise of safety, to aggregate a global model with federated optimization [10, 11, 12]. In edge computing-based IoT networks, in order to speed up the training and inference of deep learning, a new concept, “Edge AI”, was presented to bring models closer to the places where data are generated, as the computational capabilities of edge devices are continually growing [13, 14, 15]. Federated learning is also applied to train deep learning models cooperatively in MEC networks, which is referred to as federated edge learning (FEEL) [16, 17]. Google [18] used an edge server to build a keyboard input prediction model based on FEEL, and FEEL can also be applied to improve the performance of content caching without gathering users’ data centrally for training [19, 20]. Although FEEL has been successfully applied to many application scenarios, there still exist some challenges. One major challenge is the high requirement on latency and energy consumption in UAV-enabled IoT networks, which is an important and critical issue. More importantly, UAVs are powered by limited batteries and can only work for a few dozen minutes. It is dangerous for UAVs to run out of power while they are working, and it is of vital importance to control the remaining power. On the other hand, enough participants are the guarantee of the performance of the FEEL training.
How to complete more training rounds of federated learning with a limited battery power is a largely unconsidered issue, and this motivates us to control the batteries effectively to prolong their service life. Accordingly, in this paper we study the battery-constrained FEEL, where the UAVs can adjust their operating CPU-frequency to prolong the battery life and avoid withdrawing from the federated learning training untimely. We optimize the system by jointly allocating the computational resource and wireless bandwidth in time-varying environments. To solve this optimization problem, we employ a deep deterministic policy gradient (DDPG) based strategy, where a linear combination of latency and energy consumption is used to evaluate the system cost. The main contributions of this work are summarized below. * • We study the resource allocation strategy for the FEEL in complicated scenarios, where UAVs in edge computing-based IoT networks are powered by limited batteries. Moreover, the characteristics of each device, such as the computational capability and channel condition, are different and time-varying. * • We propose a resource allocation strategy based on deep reinforcement learning for the FEEL optimization, where the total latency and energy consumption can be minimized by adjusting the CPU-frequency of the devices and the upload wireless bandwidth for each device. More importantly, the strategy can prolong the battery life of each device and enable the devices to complete the required rounds of federated learning. * • We conduct simulation experiments to evaluate the performance of the proposed resource allocation strategy and compare it with some conventional strategies to demonstrate the superiority of the proposed approach. The rest of this paper is organized as follows. Existing and relevant works are introduced in Sec. II. Then, Sec. III describes the system model and formulates the optimization problem, and Sec.
IV provides the DDPG-based allocation strategy involving the optimization of computational resource and wireless bandwidth in time-varying environments. After that, Sec. V gives some simulation results and discussions, and we finally conclude our work in Sec. VI. ## II Related Works Mobile edge computing networks: With the explosive growth of the number of IoT devices in the 5G and 6G era, more and more data are generated at the mobile edge, and mobile edge computing (MEC) is used to process a large quantity of data with a lower latency and energy consumption [21]. Many researchers focus on the offloading strategy in MEC to reduce the system cost (e.g., Zhao [22] optimized the computation offloading based on the discrete particle swarm algorithm, the authors in [23] studied the multi-user computational offloading, and Li [24] proposed a DRL-based approach to solve the problem of offloading for multiuser and multi-CAP MEC networks, and so on [25, 26, 27]). FL in wireless networks: In MEC networks, there are many existing works on reducing the latency and energy consumption by allocating the system resources. But federated learning is basically distributed and heterogeneous, and it needs to synchronize all the participants with different channel conditions and computational capabilities [28]. Hence, another challenge is how to reduce the cost of FEEL. In this direction, Takayuki [29] proposed a client selection scheme for federated learning at the mobile edge to accelerate the training. In [30, 31], the authors studied federated learning in wireless networks and tried to allocate more bandwidth to nodes with a poor channel or weak computational capabilities. Due to the lack of continuous connection between the UAV swarms, a joint power allocation design was proposed to optimize the convergence of federated learning [32]. The authors in [33] presented worker-centric model selection for federated learning in MEC networks.
Zhou [34] proposed a blockchain-based FL framework in the B5G network. So far, few works have considered that most of the IoT devices are powered by limited batteries. Zhan et al. [35] applied DRL to adjust the CPU-frequency, making a good trade-off between the latency and energy consumption. However, this work did not quantify the amount of battery power. Figure 1: System model of FEEL in the UAV-enabled IoT networks. ## III System Model and Problem Formulation ### III-A Federated Learning Federated learning is a distributed machine learning method to train a shared model in the context of protecting personal privacy. It allows users to train on their datasets locally instead of uploading the sensitive data to a server. The server collects the information from different users and generates a global model. This process can be expressed as $\displaystyle\arg\min_{\omega\in\mathbb{R}}F(\omega)=\frac{1}{M}\sum_{m=1}^{M}F_{m}(\omega),$ (1) where $\omega$ is the model weight, $M$ is the number of users, and $F_{m}(\cdot)$ is the loss function. As a matter of fact, FedAvg [10] is used to aggregate a global model to reduce the number of communication rounds. Firstly, all the devices train their models locally and then update the weights as $\displaystyle\omega_{k+1}^{m}\leftarrow\omega_{k}^{m}-\alpha\nabla F_{m}(\omega),$ (2) where $\alpha$ is a positive learning rate, $\nabla(\cdot)$ denotes the gradient operation and $k$ is the round index. Each user executes (2) several times, and then it uploads the weight $\omega_{k+1}^{m}$ to the server. The server aggregates the collected weights and calculates the weighted average as $\displaystyle\overline{\omega}_{k+1}\leftarrow\sum_{m=1}^{M}\frac{n_{m}}{n}\omega^{m}_{k+1},$ (3) where $\frac{n_{m}}{n}$ is the proportion of the samples of user $m$ in the total. This is called model averaging [11, 36]. Lastly, the server broadcasts the new global model to each user. ### III-B System Model Fig.
1 shows the system model of the considered FEEL framework, where there are $M$ distributed users and a centralized server. In practice, the $M$ users can be various UAV devices in the edge computing-based IoT network, and the parameter server can be the base station or one of the UAVs. We use $\mathcal{M}=\left\\{1,2,\cdots,M\right\\}$ to denote the user set. The number of data samples which need to be trained locally at device $m$ is $D_{m}$. We assume that a round of FL training can be completed in a time slot. For each device $m\in\mathcal{M}$ in the $k$-th round, the base CPU-frequency is $f_{m}^{k}$ and it requires $c_{m}$ CPU-cycles to process a sample of the local dataset. To deal with the problem of limited communication resources, each device will train $e$ times locally before uploading the weights [10]. The local training time of device $m$ at the $k$-th round can be written as $\displaystyle t^{k}_{m,local}=\frac{ec_{m}D_{m}}{\eta_{m}^{k}f_{m}^{k}},$ (4) where $\eta_{m}^{k}$ is the coefficient of frequency adjustment which determines the practical operating CPU-frequency, and $K$ denotes the maximum number of FL rounds. The weights will be uploaded to the centralized server through the wireless link after each device completes the local training. The data rate of the wireless link from device $m$ to the centralized server is: $\displaystyle r_{m}^{k}=B_{m}^{k}\log_{2}\left(1+\frac{P_{m}|h_{m}^{k}|^{2}}{\sigma_{m}^{2}}\right),$ (5) where $h_{m}^{k}\sim\mathcal{CN}(0,\beta)$ is the channel parameter of the link from device $m$ to the centralized server, $\sigma_{m}^{2}$ is the variance of the additive white Gaussian noise (AWGN) at the server and $P_{m}$ is the transmit power of device $m$. Notation $B_{m}^{k}$ is the bandwidth of device $m$ and it can be allocated by the base station.
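As a concrete illustration of the update rules above, the sketch below (hypothetical toy shapes, with model weights represented as plain lists of floats) implements the local gradient step of Eq. (2) and the weighted model average of Eq. (3):

```python
def local_update(w, grad, alpha, e):
    """Eq. (2): perform e local gradient steps, w <- w - alpha * grad(w)."""
    for _ in range(e):
        w = [wi - alpha * gi for wi, gi in zip(w, grad(w))]
    return w

def fedavg(weights, sample_counts):
    """Eq. (3): weighted model average, w_bar = sum_m (n_m / n) * w^m."""
    n = sum(sample_counts)
    return [sum(n_m / n * w[i] for w, n_m in zip(weights, sample_counts))
            for i in range(len(weights[0]))]
```

For instance, averaging two device models with a 1:3 sample split weights the second model three times as heavily in the global model.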
At each round, $B_{m}^{k}$ should meet the following requirement $\displaystyle\sum_{m=1}^{M}B_{m}^{k}=B_{total}.$ (6) According to (5), the transmission latency can be calculated as $\displaystyle t^{k}_{m,up}=\frac{\epsilon}{r_{m}^{k}},$ (7) where $\epsilon$ is the size of the deep learning model’s weights. The total time for device $m$ to provide a local model can be expressed as $\displaystyle T^{k}_{m}=t^{k}_{m,local}+t^{k}_{m,up}.$ (8) Each device trains its local model and uploads it in parallel. The centralized server has to wait for the local model from the slowest device, which can be viewed as the bottleneck of the system, to aggregate a new global model. So the system latency at the $k$-th round is $\displaystyle T_{k}=\max_{m\in\mathcal{M}}{T^{k}_{m}}.$ (9) Similarly, we can calculate the energy consumption in two stages. At the first stage, device $m$ performs algorithms such as back propagation (BP) to update the weights at the cost of huge energy consumption. According to [2], the training energy consumption can be written as $\displaystyle E^{k}_{m,local}=\zeta_{m}c_{m}D_{m}(\eta_{m}^{k}f_{m}^{k})^{2},$ (10) where $\zeta_{m}$ is the energy consumption coefficient of the CPU chip. Then device $m$ uploads the weights with transmit power $P_{m}$, and the energy consumption of transmission can be easily expressed as $\displaystyle E^{k}_{m,up}=P_{m}t^{k}_{m,up}.$ (11) So the total energy consumption of device $m$ at the $k$-th round is $\displaystyle E^{k}_{m}=E^{k}_{m,local}+E^{k}_{m,up}.$ (12) Under ideal conditions, devices can continuously participate in FedAvg [10] and contribute their local models until achieving the best performance. However, in the case of limited resources, we have to consider that the batteries of the devices may run out. The energy model should meet the following requirement $\displaystyle\sum_{k=1}^{K}E^{k}_{m}\leq\delta_{m},$ (13) where $\delta_{m}$ is the total battery power of device $m$.
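Putting Eqs. (4)-(12) together, one round's latency and energy for a single device can be computed as in the following sketch (all argument values are illustrative placeholders, not parameters from the paper):

```python
import math

def device_round(e, c_m, d_m, eta, f_m, P_m, h2, sigma2, B_m, eps, zeta_m):
    """Per-device latency and energy in one FEEL round."""
    t_local = e * c_m * d_m / (eta * f_m)            # Eq. (4): local training time
    r_m = B_m * math.log2(1 + P_m * h2 / sigma2)     # Eq. (5): uplink data rate
    t_up = eps / r_m                                 # Eq. (7): upload latency
    E_local = zeta_m * c_m * d_m * (eta * f_m) ** 2  # Eq. (10): training energy
    E_up = P_m * t_up                                # Eq. (11): transmission energy
    return t_local + t_up, E_local + E_up            # Eqs. (8) and (12)
```

The system latency of Eq. (9) is then the maximum of the per-device totals, since the server has to wait for the slowest device.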
To describe the total system cost, we use a linear combination of latency and energy consumption as follows [35, 24] $\displaystyle\Phi=\sum_{k=1}^{K}\left[\lambda T_{k}+(1-\lambda)\sum_{m}^{M}E^{k}_{m}\right],$ (14) where $\lambda\in\left[0,1\right]$ is a factor describing the relative importance of latency and energy consumption in the system cost. We can adjust $\lambda$ to trade off the latency and energy consumption. Specifically, the linear combination approaches the energy consumption when $\lambda$ goes to zero, while it degenerates into the latency if $\lambda$ is near one. In fact, the linear combination is a method to solve multi-objective programming problems by giving a proper weight coefficient according to the importance of each objective, thereby converting them into single-objective programming problems. ### III-C Problem Formulation For the practical battery-constrained devices, we expect that they can participate in as many rounds of FEEL training as possible and meanwhile try to avoid running out of power. Moreover, the system cost measured by the latency and energy consumption should be minimized, which is particularly important for the IoT devices. By taking into account these factors, we can optimize the training rounds and system cost by adjusting the CPU-frequency and bandwidth. The system optimization problem can be given by $\displaystyle\max_{\\{{\eta_{m}^{k}},B_{m}^{k}\\}}K-\overline{\Phi}$ $\displaystyle\quad\quad\mathrm{s.t.}\quad C_{1}:$ $\displaystyle\eta_{m}^{k}\in[0,1],$ (15) $\displaystyle C_{2}:$ $\displaystyle\sum_{m=1}^{M}B_{m}^{k}=B_{total},$ $\displaystyle C_{3}:$ $\displaystyle\sum_{k=1}^{K}E^{k}_{m}\leq\delta_{m},$ where $\overline{\Phi}=\frac{\Phi}{K}$ is the average cost per round. It is generally very hard to solve the above optimization problem by using conventional optimization methods such as convex optimization.
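The linear combination of Eq. (14) and the battery constraint of Eq. (13) can be sketched in a few lines (illustrative numbers only; round costs are assumed to come from the per-device model above):

```python
def system_cost(T_rounds, E_rounds, lam):
    """Eq. (14): Phi = sum_k [lam * T_k + (1 - lam) * sum_m E_m^k]."""
    return sum(lam * T_k + (1 - lam) * sum(E_k)
               for T_k, E_k in zip(T_rounds, E_rounds))

def battery_ok(E_per_round_m, delta_m):
    """Eq. (13): device m can complete all rounds iff sum_k E_m^k <= delta_m."""
    return sum(E_per_round_m) <= delta_m
```

As the text notes, $\lambda\to 1$ recovers a pure-latency objective and $\lambda\to 0$ a pure-energy one.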
Especially, in a time-varying environment, the base CPU-frequency and channel condition are different in each round, causing heterogeneity in the training process. Due to these reasons, a learning-based algorithm should be developed to adapt to different states in order to find a proper solution. The notations used in this section are summarized in Table I. TABLE I: Symbol notations Notation | Definition ---|--- $\alpha$ | learning rate of FL $\omega^{m}_{k}$ | model weight of the $m$-th user in the $k$-th round $\epsilon$ | size of the deep learning model weights $\zeta_{m}$ | energy consumption coefficient of the $m$-th user $\Phi$ | system cost $r^{k}_{m}$ | transmission rate between the $m$-th user and the server $e$ | number of local training times $c_{m}$ | CPU-cycles to process a sample of the $m$-th user $D_{m}$ | number of data samples of the $m$-th user $\delta_{m}$ | total battery power of device $m$ $\eta_{m}^{k}$ | coefficient of frequency adjustment of the $m$-th user in the $k$-th round $f_{m}^{k}$ | base CPU-frequency of the $m$-th user in the $k$-th round $t^{k}_{m,local}$ | local latency of the $m$-th user in the $k$-th round $t^{k}_{m,up}$ | transmission latency of the $m$-th user in the $k$-th round $B_{m}^{k}$ | wireless bandwidth of the $m$-th user in the $k$-th round $T_{k}$ | system latency in the $k$-th round $E^{k}_{m,local}$ | local energy consumption of the $m$-th user in the $k$-th round $E^{k}_{m}$ | total energy consumption of the $m$-th user in the $k$-th round $F(\cdot)$ | loss function of FL ## IV DDPG-Based Allocation Strategy In this section, we will solve the system optimization problem in (15) by using the DDPG-based allocation strategy. Specifically, we will first describe the Markov decision process (MDP), and then introduce how to implement the DDPG-based resource allocation strategy.
### IV-A Markov Decision Process The MDP is used to model decision-making in time-varying environments, and it mainly consists of a 4-tuple $\\{S,A,P_{a},R_{a}\\}$. Specifically, $S=\\{k,\mathbf{O^{k}},\mathbf{F^{k}},\mathbf{H^{k}}\\}$ is the state space, where $\mathbf{O^{k}}$ is the vector of the remaining battery power, $\mathbf{F^{k}}$ is the vector of the current base CPU-frequencies of all the devices, and $\mathbf{H^{k}}$ contains the channel parameters of the $k$-th round, which can be obtained by using some channel estimation methods. We use $A=\\{\mathbf{N^{k}},\mathbf{B^{k}}\\}$ to denote the action space, where $\mathbf{N^{k}}=\\{\eta_{1}^{k},\eta_{2}^{k},\cdots\eta_{M}^{k}\\}$ is the vector of coefficients of CPU-frequency adjustment and $\mathbf{B^{k}}=\\{B^{k}_{1},B^{k}_{2}\cdots B^{k}_{M}\\}$ is the vector of allocated bandwidth. At the $k$-th round of the FEEL training, the current state is $s_{k}\in S$ and the agent will perform an action $a_{k}\in A$ in the environment. From the feedback of the environment, $s_{k}$ will transit to $s_{k+1}$ with a conditional probability $P$. Meanwhile, the agent will receive an instant reward from the environment, which can be expressed by $\displaystyle R^{k}=k-\phi,$ (16) where $k$ is the positive feedback and $\phi$ is the instant cost of a round in the FEEL training, which is given by the linear combination of energy consumption and latency. The agent expects to achieve a long-term average reward from the environment by maximizing $K$ and minimizing $\overline{\Phi}$. However, it is difficult to estimate the conditional probability $P$ in many application scenarios. Hence, we turn to the DDPG algorithm to solve this problem. Figure 2: Actor-critic framework. ### IV-B Deep Deterministic Policy Gradient In the considered FEEL scenario, the CPU-frequency and bandwidth allocation vary over a continuous range with infinite possibilities.
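The instant reward of Eq. (16) can be sketched as follows, where the round index $k$ acts as positive feedback for surviving another round and the per-round cost reuses the linear combination of Eq. (14) (illustrative values only):

```python
def instant_reward(k, T_k, E_k, lam):
    """Eq. (16): R^k = k - phi, with phi the per-round linear cost."""
    phi = lam * T_k + (1 - lam) * sum(E_k)  # per-round term of Eq. (14)
    return k - phi
```

The reward thus grows with the number of completed rounds and shrinks with the round's latency-energy cost, matching the objective $K-\overline{\Phi}$ in (15).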
Accordingly, although the deep Q-learning network (DQN) [37, 38] can deal with discrete and low-dimensional action spaces, it is unable to work efficiently with a continuous action space, since its performance relies heavily on finding the maximum of the value function approximated by the neural network in each iteration. For this reason, we adopt the DDPG strategy to solve the optimization problem of CPU-frequency and bandwidth allocation. The deterministic policy gradient [39] was proposed to output a deterministic action, instead of the values of all the actions, under the actor-critic framework. Fig. 2 shows the actor-critic framework in the DDPG, where a deep neural network called the actor network is used to produce a deterministic action, while a critic network is employed to approximate the value function that evaluates the action. To formulate the DDPG strategy, we use $\mu(s|\theta^{\mu})$ and $Q(s,a|\theta^{Q})$ to denote the actor and critic networks, respectively. For a deterministic action $a_{t}$ in the state $s_{t}$, the Bellman equation for the action-state value function can be written as $\displaystyle Q(s,a)=\mathbb{E}_{s_{t+1}\sim S}\left[r(s_{t},a_{t})+\gamma Q(s_{t+1},a_{t+1})\right],$ (17) where $s_{t+1}$ is the next state when the agent executes the action $a_{t}$ in the state $s_{t}$, and $a_{t+1}$ is the next action given by the actor network. Figure 3: Implementation structure of DDPG. Motivated by the idea of double networks in the DQN, target networks and main networks are used in DDPG. Fig. 3 shows the implementation structure of DDPG, where four neural networks work together: * • Main actor network $\mu$: We obtain the deterministic action from $a_{t}=\mu(s|\theta^{\mu})$, and $\mu$ is used to update the weights of the target actor network. * • Target actor network $\mu^{\prime}$: The target actor network predicts the next action by $a_{t+1}=\mu^{\prime}(s_{t+1}|\theta^{\mu^{\prime}})$.
* • Main critic network $Q$: According to the current state $s_{t}$ and the action given by the main actor network $\mu$, the main critic network $Q$ is responsible for giving the $Q$-value by $Q(s,a|\theta^{Q})$. * • Target critic network $Q^{\prime}$: In the state $s_{t+1}$, the $Q$-value is evaluated by $Q^{\prime}(s_{t+1},\mu^{\prime}(s_{t+1}|\theta^{\mu^{\prime}})|\theta^{Q^{\prime}})$. The use of target networks reduces the correlation between the current $Q$-value and the target $Q$-value, which helps increase the robustness of training. For the main critic network, the loss function can be expressed as $\displaystyle L(\theta^{Q})=\mathbb{E}_{\mu^{\prime}}\left[(Q(s,a|\theta^{Q})-y_{i})^{2}\right],$ (18) where $\displaystyle y_{i}=r(s_{t},a_{t})+\gamma Q^{\prime}(s_{t+1},\mu^{\prime}(s_{t+1}|\theta^{\mu^{\prime}})|\theta^{Q^{\prime}}),$ (19) in which $\gamma$ is a positive discount factor of the reward. By executing gradient descent to minimize the loss function in (18), we can update the main critic network. Additionally, to optimize action decisions and achieve higher rewards, the $Q$-value is taken as the objective of the actor network. The main actor network is updated by gradient ascent from $\displaystyle\nabla_{\theta^{\mu}}=\mathbb{E}_{\mu^{\prime}}\left[\nabla_{a}Q(s,a|\theta^{Q})|_{s=s_{t},a=\mu(s_{t})}\nabla_{\theta^{\mu}}\mu(s|\theta^{\mu})|_{s_{t}}\right].$ (20) Different from the DQN, which copies weights from the main networks at fixed intervals, the target networks in DDPG adopt a soft update in each step as $\displaystyle\theta^{Q^{\prime}}\leftarrow\tau\theta^{Q}+(1-\tau)\theta^{Q^{\prime}}$ (21) $\displaystyle\theta^{\mu^{\prime}}\leftarrow\tau\theta^{\mu}+(1-\tau)\theta^{\mu^{\prime}},$ (22) where $\tau$ is an update factor. In addition, an experience replay unit (ERU) is used to collect training samples, and the agent randomly chooses batches of samples during the training process.
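The flow of one training step under the rules (18)-(22), with mini-batch sampling from the ERU, can be sketched in pure NumPy; the linear parameterizations below are stand-ins for the actual deep networks (the actor-gradient step (20) is elided for brevity), so this is a schematic, not the paper's implementation:

```python
import random
from collections import deque

import numpy as np

rng = np.random.default_rng(0)
DIM_S, DIM_A = 4, 2
GAMMA, TAU, LR = 0.999, 1e-3, 1e-2

# Linear stand-ins: Q(s, a) = theta . [s; a] and mu(s) = W s.
theta_q = rng.normal(size=DIM_S + DIM_A)   # main critic
theta_qt = theta_q.copy()                  # target critic
w_mu = rng.normal(size=(DIM_A, DIM_S))     # main actor
w_mut = w_mu.copy()                        # target actor

buffer = deque(maxlen=10_000)              # experience replay unit (ERU)

def q_value(theta, s, a):
    return theta @ np.concatenate([s, a])

def train_step(batch_size=4):
    global theta_q, theta_qt, w_mut
    batch = random.sample(buffer, batch_size)    # random mini-batch from the ERU
    for s, a, r, s_next in batch:
        a_next = w_mut @ s_next                               # mu'(s'), target actor
        y = r + GAMMA * q_value(theta_qt, s_next, a_next)     # target value, Eq. (19)
        err = q_value(theta_q, s, a) - y
        # One gradient-descent step on (Q(s,a) - y)^2, Eq. (18).
        theta_q = theta_q - LR * 2.0 * err * np.concatenate([s, a])
    # Soft updates of the target networks, Eqs. (21)-(22).
    theta_qt = TAU * theta_q + (1.0 - TAU) * theta_qt
    w_mut = TAU * w_mu + (1.0 - TAU) * w_mut

# Fill the ERU with dummy transitions and run one step.
for _ in range(16):
    s, s_next = rng.normal(size=DIM_S), rng.normal(size=DIM_S)
    buffer.append((s, rng.normal(size=DIM_A), rng.normal(), s_next))
train_step()
```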
This helps break the correlation and non-stationary distribution among training samples and improves the performance. ### IV-C DDPG-based resource allocation strategy Algorithm 1 DDPG-based resource allocation strategy Input: current state $S=\\{k,\mathbf{O^{k}},\mathbf{F^{k}},\mathbf{H^{k}}\\}$ Output: allocation matrix $A=\\{\mathbf{N^{k}},\mathbf{B^{k}}\\}$ 1:Initialize the target networks $Q^{\prime}$ and $\mu^{\prime}$ with $\theta^{Q^{\prime}}\leftarrow\theta^{Q}$, $\theta^{\mu^{\prime}}\leftarrow\theta^{\mu}$ in the server. 2:Initialize experience replay memory $\mathcal{D}$ 3:for episode = 1,M do 4: Initialize a random process $\mathcal{N}$ for exploration. 5: Initialize state $s_{1}$ 6: while $s\notin S_{end}$ do 7: Devices upload the base frequency to the server and estimate the channel parameters. 8: Server chooses an action $a_{k}=\mu(s_{k}|\theta^{\mu})+\mathcal{N}_{k}$ 9: Execute the action $a_{k}$ and observe the reward $r_{k}$ and next state $s_{k+1}$ 10: Store $(s_{k},a_{k},r_{k},s_{k+1})$ in $\mathcal{D}$ 11: Randomly sample a mini-batch of transitions $(s_{i},a_{i},r_{i},s_{i+1})$ from $\mathcal{D}$ 12: $y_{i}=r(s_{i},a_{i})+\gamma Q^{\prime}(s_{i+1},\mu^{\prime}(s_{i+1}|\theta^{\mu^{\prime}})|\theta^{Q^{\prime}}).$ 13: Update the critic network by minimizing: $L=\frac{1}{N}\sum_{i}(Q(s_{i},a_{i}|\theta^{Q})-y_{i})^{2}$ 14: Update the actor network by gradient ascent: $\nabla_{\theta^{\mu}}=\mathbb{E}_{\mu^{\prime}}\left[\nabla_{a}Q(s,a|\theta^{Q})|_{s=s_{i},a=\mu(s_{i})}\nabla_{\theta^{\mu}}\mu(s|\theta^{\mu})|_{s_{i}}\right]$ 15: Update the target networks: $\theta^{Q^{\prime}}\leftarrow\tau\theta^{Q}+(1-\tau)\theta^{Q^{\prime}}$ $\theta^{\mu^{\prime}}\leftarrow\tau\theta^{\mu}+(1-\tau)\theta^{\mu^{\prime}}$ 16: end while 17:end for In this part, we introduce the proposed DDPG-based resource allocation strategy. Firstly, we design a multi-task neural network which outputs the results of $N_{k}$ and $B_{k}$ simultaneously.
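A minimal sketch of such a two-headed output mapping follows the paper's design of a Sigmoid head for the frequency coefficients and a Softmax head for the bandwidth shares; the function names and sample logits here are illustrative, not the paper's exact architecture:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def softmax(x):
    z = np.exp(x - np.max(x))        # shift by the max for numerical stability
    return z / z.sum()

def map_outputs(eta_logits, bw_logits, b_total=5e6):
    """Map the two raw heads of the multi-task network to a valid action:
    frequency-adjustment coefficients eta_m in (0, 1) via Sigmoid, and a
    bandwidth allocation summing exactly to the total bandwidth via Softmax."""
    eta = sigmoid(np.asarray(eta_logits, dtype=float))
    bandwidth = softmax(np.asarray(bw_logits, dtype=float)) * b_total
    return eta, bandwidth

eta, bw = map_outputs([0.0, 2.0, -1.0], [1.0, 1.0, 1.0])
```

By construction the bandwidth head always respects the total-bandwidth constraint, so no projection step is needed after the network.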
In order to ensure that the outputs lie in the correct ranges, the Sigmoid and Softmax activation functions are used in the last layer of the network. Before each round of FEEL training, clients need to report their available resources, such as the base CPU-frequency and remaining battery power, to the server. At the same time, the server estimates the channel parameters of the links from it to the users. This information is fed as the input of the main actor network to obtain a deterministic action for adjusting the practical CPU-frequency and bandwidth allocation. The base station then informs the participants of their operating CPU-frequency and available wireless bandwidth. The agent receives different rewards in each round according to the latency, energy consumption, and current number of rounds. When the experience replay unit has enough samples, the agent adjusts its strategy as introduced in the previous part. In order to guarantee global convergence, we reset the environment and re-initialize the state $s_{1}$ to train the agent when it reaches the final state. After a number of episodes, upon convergence, the DDPG strategy can find a proper result for $\eta_{m}^{k}$ and $B_{m}^{k}$. In this way, the optimization problem in (III-C) is solved. The whole procedure of the DDPG strategy is summarized in Algorithm 1. ## V Simulations and Discussions In this section, we demonstrate the performance of the proposed DDPG-based resource allocation strategy by simulations. To simulate practical FEEL scenarios, we suppose that there are 20 users whose limited battery power follows $\mathcal{U}(2\times 10^{4},3\times 10^{4})$ and whose initial base CPU-frequencies follow $\mathcal{U}(1\times 10^{7},5\times 10^{7})$. The users train locally 5 times per round and participate in at most 1000 rounds of the FEEL training.
In addition, the number of CPU cycles to process a sample follows $\mathcal{U}(7\times 10^{4},2\times 10^{5})$, the sizes of the users' datasets follow $\mathcal{U}(400,600)$, and the energy consumption coefficients of the CPU chips follow $\mathcal{U}(1\times 10^{-22},2\times 10^{-22})$. Moreover, the transmit power of all the devices is set to $5\times 10^{-5}$, the total wireless bandwidth is $5$ MHz, and the model size is set to $10$ MB. We assume that the channel condition remains static within a round, while it varies across different rounds of training. To simulate the time-varying environments, we let the base CPU-frequency vary between $0.8$ and $1.2$ times its initial value. Similarly, the average channel gain of the wireless links is set to unity, and the variance of the AWGN is set to $10^{-9}$. As for the DDPG network, we implement it with the well-known PyTorch library [40]. The actor networks consist of 2 hidden layers with 64 and 256 nodes, respectively. In the critic networks, there are 2 hidden layers with 30 nodes per layer. To enhance the fitting ability of the networks, the Rectified Linear Unit (ReLU) [41] is used as the activation function. The Adam [42] method is used to optimize the loss functions of the networks, and the learning rates are $10^{-6}$ and $10^{-2}$ in the main actor and main critic networks, respectively. We set the capacity of the ERU to $10^{4}$ and the batch size to $128$. Besides, the value of $\gamma$ is $0.999$, and $\tau$ is set to $10^{-3}$. The DDPG agent trains $50$ times at the end of FEEL in each episode. The total number of DRL episodes is $800$, and we repeat the experiments to reduce accidental errors. Figure 4: Convergence of the proposed DDPG approach with $\lambda=0.5$. Figure 5: Convergence of the maximum round. Fig. 4 shows the training process of the proposed DDPG-based resource allocation strategy, where $\lambda=0.5$ and there are 800 episodes in total. We focus on the total reward in each episode.
From this figure, we can find that the curve of the total reward grows very fast in the first 200 episodes and then continues to increase with a slower growth tendency. After about 500 episodes, the proposed DDPG approach achieves an asymptotic reward value of about 4000. These results help verify the proposed DDPG-based strategy. Fig. 5 depicts the maximum round of the proposed strategy versus episodes, where $\lambda=0.5$ and there are 800 episodes in total. From this figure, we can observe that at the beginning of the episodes, the battery-constrained devices easily run out of power and can only participate in about 500 rounds. As the episodes increase, the devices can participate in more rounds of training. This is because, with the learning of the DDPG agent, the proposed resource allocation strategy continues to improve and all the devices can prolong the life of their batteries. Once the curve has converged, all the devices can complete 1000 rounds of training. This not only ensures the performance of training, but also avoids the danger caused by shutdown. Fig. 6 shows the maximum round of two strategies versus the static adjustment factor of frequency, where $\lambda=0.5$ and the adjustment factor varies from 0.1 to 1. For comparison, we provide the performance of the static strategy, where the coefficient of frequency adjustment is fixed at each FL training round and the bandwidth is allocated evenly with $B_{m}^{k}=B_{total}/M$ for each device. We use the static strategy for comparison because, in a practical time-varying scenario, it is difficult to determine how to set the proper operating CPU-frequency of the devices, and the devices should run at a proportion of the base CPU-frequency in order to reduce the energy consumption. From this figure, we can see that the proposed strategy remains unchanged with the adjustment factor, and it enables all the devices to complete 1000 rounds of training.
In contrast, the static strategy can only obtain about 200 rounds of training when the adjustment factor is around 1, since all the devices operate at the base CPU-frequency with a high energy consumption. When the adjustment factor decreases, the static strategy enables all devices to reduce the energy consumption and take part in more rounds of training. In particular, the static approach can achieve 1000 rounds of training when the adjustment factor is smaller than 0.3. The comparison results in this figure further verify the effectiveness of the proposed DDPG strategy. Figure 6: Maximum rounds versus the static adjustment factor of frequency. Figure 7: Energy consumption versus the static adjustment factor of frequency. Fig. 7 shows the energy consumption versus the static adjustment factor of frequency, where $\lambda=0.5$. The energy consumption of the proposed strategy is about $69$, which is a little higher than that of the static approach when the adjustment factor is smaller than $0.2$. With the increase of the static adjustment factor, the energy consumption of the static approach grows explosively. This causes the battery power of the devices to be easily exhausted and hinders the continuous training of FL. Fig. 8 describes the latency versus the static adjustment factor of frequency, where $\lambda=0.5$. From this figure, we can observe that the latency is more than 200 when the adjustment factor is smaller than $0.2$, while the latency of the proposed strategy is only about 96. Hence, it is unacceptable to save energy by decreasing the adjustment factor below $0.2$. With the increase of the adjustment factor, the latency decreases. Moreover, the latency of the proposed strategy is almost identical to that of the static strategy when the adjustment factor is $0.5$. By combining the results in Figs. 7-8, we can conclude that the proposed strategy makes a good trade-off between the energy consumption and latency efficiently.
This further verifies the effectiveness of the proposed DDPG strategy. Fig. 9 shows the system cost versus the static adjustment factor of frequency. For a clear description, we use the linear combination to express the system cost, where $\lambda=0.5$. We can find from this figure that the system cost of the proposed DDPG strategy remains unchanged with the adjustment factor, which is similar to the phenomena in Figs. 6-8. In contrast, the static approach is affected by the adjustment factor significantly. Specifically, the system cost of the static approach becomes smaller when the adjustment factor increases in the low region of the adjustment factor. This is due to the energy saving, which however leads to a huge latency. On the contrary, the system cost of the static approach becomes larger when the adjustment factor increases in the high region of the adjustment factor. This is because the pursuit of reducing latency leads to a great energy consumption. In particular, the static approach achieves its minimum system cost when the adjustment factor is 0.3, which is still higher than that of the proposed DDPG strategy. Fig. 10 depicts the system cost of several strategies versus the number of users, where $\lambda=0.5$. For comparison, we plot the performance of the proposed DDPG strategy with even bandwidth allocation, denoted by E-DDPG. From this figure, we can find that the proposed DDPG outperforms E-DDPG, since the former can exploit the wireless bandwidth resources in the learning process, which helps improve the system performance. Moreover, E-DDPG is much better than the static approach, since the latter cannot utilize the system communication and computational resources efficiently. Furthermore, the system cost of all three strategies increases with a larger number of users, as more users impose a heavier burden on the training process. Figure 8: Latency versus the static adjustment factor of frequency.
Figure 9: Average system cost versus the static adjustment factor of frequency. Figure 10: Average system cost versus the number of users. In Fig. 11, we plot the performance of several resource allocation strategies versus the wireless bandwidth, where $\lambda=0.5$ and the wireless bandwidth varies from 1 MHz to 9 MHz. From this figure, we can see that the performance of the three strategies becomes better when the bandwidth increases, since a larger bandwidth can help reduce the transmission latency as well as the transmission energy consumption. Moreover, the proposed strategy outperforms the static approach and E-DDPG for various values of the wireless bandwidth, since it can exploit the system communication and computational resources efficiently. In particular, when the total bandwidth is 1 MHz, the costs of the proposed DDPG strategy, E-DDPG, and the static approach are about 130, 140, and 170, respectively. Furthermore, when the bandwidth increases, the transmission latency becomes negligible in the system cost for the proposed DDPG and E-DDPG strategies, which makes the performance gap between these two strategies decrease. The results in this figure further demonstrate the merits of the proposed DDPG strategy. Figure 11: Average system cost versus different bandwidths. ## VI Conclusions This paper studied how to optimize FEEL in UAV-enabled IoT networks where the UAVs have limited batteries, from a deep reinforcement learning approach. Specifically, we provided an optimization framework in which the devices can adjust their operating CPU-frequency to prolong battery life and avoid withdrawing from the federated learning training prematurely, by jointly allocating the computational resources and wireless bandwidth in time-varying environments. To solve this optimization problem, we employed the DDPG-based strategy, where a linear combination of latency and energy consumption was used to evaluate the system cost.
Simulation results demonstrated that the proposed strategy can efficiently prevent devices from withdrawing from the FL training prematurely while reducing the average system cost. ## References * [1] M. Mozaffari, A. T. Z. Kasgari, W. Saad, M. Bennis, and M. Debbah, “Beyond 5G with UAVs: Foundations of a 3D wireless cellular network,” _IEEE Transactions on Wireless Communications_, vol. 18, no. 1, pp. 357–372, 2019. * [2] X. Li, Q. Wang, Y. Liu, T. A. Tsiftsis, Z. Ding, and A. Nallanathan, “UAV-aided multi-way NOMA networks with residual hardware impairments,” _IEEE Wireless Communications Letters_, vol. 9, no. 9, pp. 1538–1542, 2020. * [3] M. Mozaffari, W. Saad, M. Bennis, Y. Nam, and M. Debbah, “A tutorial on UAVs for wireless networks: Applications, challenges, and open problems,” _IEEE Communications Surveys & Tutorials_, vol. 21, no. 3, pp. 2334–2360, 2019. * [4] Y. LeCun, Y. Bengio, and G. Hinton, “Deep learning,” _Nature_, vol. 521, no. 7553, pp. 436–444, 2015. * [5] J. Deng, W. Dong, R. Socher, L. Li, K. Li, and F. Li, “ImageNet: A large-scale hierarchical image database,” in _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)_, 2009, pp. 248–255. * [6] A. Krizhevsky, I. Sutskever, and G. E. Hinton, “ImageNet classification with deep convolutional neural networks,” in _Advances in Neural Information Processing Systems (NIPS)_, 2012, pp. 1106–1114. * [7] K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)_, 2016, pp. 770–778. * [8] G. Huang, Z. Liu, L. Van Der Maaten, and K. Q. Weinberger, “Densely connected convolutional networks,” in _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)_, 2017, pp. 4700–4708. * [9] Q. Yang, Y. Liu, T. Chen, and Y.
Tong, “Federated machine learning: Concept and applications,” _ACM Transactions on Intelligent Systems and Technology (TIST)_ , vol. 10, no. 2, pp. 1–19, 2019. * [10] B. McMahan, E. Moore, D. Ramage, S. Hampson, and B. A. y Arcas, “Communication-efficient learning of deep networks from decentralized data,” in _Proceedings of the 20th International Conference on Artificial Intelligence and Statistics, AISTATS 2017, 20-22_ , vol. 54, 2017, pp. 1273–1282. * [11] J. Konečnỳ, H. B. McMahan, F. X. Yu, P. Richtárik, A. T. Suresh, and D. Bacon, “Federated learning: Strategies for improving communication efficiency,” _arXiv preprint arXiv:1610.05492_ , 2016. * [12] J. Konečnỳ, H. B. McMahan, D. Ramage, and P. Richtárik, “Federated optimization: Distributed machine learning for on-device intelligence,” _arXiv preprint arXiv:1610.02527_ , 2016. * [13] W. Shi, J. Cao, Q. Zhang, Y. Li, and L. Xu, “Edge computing: Vision and challenges,” _IEEE internet of things journal_ , vol. 3, no. 5, pp. 637–646, 2016. * [14] E. Li, Z. Zhou, and X. Chen, “Edge intelligence: On-demand deep learning model co-inference with device-edge synergy,” in _Proceedings of the 2018 Workshop on Mobile Edge Communications, MECOMM@SIGCOMM_. ACM, 2018, pp. 31–36. * [15] E. Li, L. Zeng, Z. Zhou, and X. Chen, “Edge ai: On-demand accelerating deep neural network inference via edge computing,” _IEEE Transactions on Wireless Communications_ , vol. 19, no. 1, pp. 447–457, 2020. * [16] W. Y. B. Lim, N. C. Luong, D. T. Hoang, Y. Jiao, Y. Liang, Q. Yang, D. Niyato, and C. Miao, “Federated learning in mobile edge networks: A comprehensive survey,” _IEEE Communications Surveys & Tutorials_, vol. 22, no. 3, pp. 2031–2063, 2020. * [17] Y. Guo, F. Liu, Z. Cai, L. Chen, and N. Xiao, “Feel: A federated edge learning system for efficient and privacy-preserving mobile healthcare,” in _49th International Conference on Parallel Processing-ICPP_ , 2020, pp. 1–11. * [18] T. Yang, G. Andrew, H. Eichner, H. Sun, W. Li, N. Kong, D. 
Ramage, and F. Beaufays, “Applied federated learning: Improving google keyboard query suggestions,” _arXiv preprint arXiv:1812.02903_ , 2018. * [19] Z. Yu, J. Hu, G. Min, H. Lu, Z. Zhao, H. Wang, and N. Georgalas, “Federated learning based proactive content caching in edge computing,” in _IEEE Global Communications Conference (GLOBECOM)_ , 2018, pp. 1–6. * [20] L. Cui, X. Su, Z. Ming, Z. Chen, S. Yang, Y. Zhou, and W. Xiao, “Creat: Blockchain-assisted compression algorithm of federated learning for content caching in edge computing,” _IEEE Internet of Things Journal_ , pp. 1–1, 2020. * [21] Y. Mao, C. You, J. Zhang, K. Huang, and K. B. Letaief, “A survey on mobile edge computing: The communication perspective,” _IEEE Communications Surveys & Tutorials_, vol. 19, no. 4, pp. 2322–2358, 2017. * [22] Z. Zhao, R. Zhao, J. Xia, X. Lei, D. Li, C. Yuen, and L. Fan, “A novel framework of three-hierarchical offloading optimization for mec in industrial iot networks,” _IEEE Transactions on Industrial Informatics_ , vol. 16, no. 8, pp. 5424–5434, 2019. * [23] Z. Liang, Y. Liu, T.-M. Lok, and K. Huang, “Multiuser computation offloading and downloading for edge computing with virtualization,” _IEEE Transactions on Wireless Communications_ , vol. 18, no. 9, pp. 4298–4311, 2019\. * [24] C. Li, J. Xia, Y. Rao, F. Liu, L. Fan, G. K. Karagiannidis, and A. Nallanathan, “Dynamic offloading for multiuser muti-cap mec networks: A deep reinforcement learning approach,” _IEEE Transactions on Vehicular Technology_ , vol. 72, no. 3, pp. 3424–3438, 2020. * [25] J. Feng, F. R. Yu, Q. Pei, J. Du, and L. Zhu, “Joint optimization of radio and computational resources allocation in blockchain-enabled mobile edge computing systems,” _IEEE Transactions on Wireless Communications_ , vol. 19, no. 6, pp. 4321–4334, 2020. * [26] J. Feng, Q. Pei, F. R. Yu, X. Chu, J. Du, and L. 
Zhu, “Dynamic network slicing and resource allocation in mobile edge computing systems,” _IEEE Transactions on Vehicular Technology_ , vol. 69, no. 7, pp. 7863–7878, 2020. * [27] Y. Wang, X. Tao, Y. T. Hou, and P. Zhang, “Effective capacity-based resource allocation in mobile edge computing with two-stage tandem queues,” _IEEE Transactions on Communications_ , vol. 67, no. 9, pp. 6221–6233, 2019\. * [28] K. Bonawitz, H. Eichner, W. Grieskamp, D. Huba, A. Ingerman, V. Ivanov, C. Kiddon, J. Konečnỳ, S. Mazzocchi, H. B. McMahan _et al._ , “Towards federated learning at scale: System design,” _arXiv preprint arXiv:1902.01046_ , 2019. * [29] T. Nishio and R. Yonetani, “Client selection for federated learning with heterogeneous resources in mobile edge,” in _IEEE International Conference on Communications (ICC)_ , 2019, pp. 1–7. * [30] W. Shi, S. Zhou, and Z. Niu, “Device scheduling with fast convergence for wireless federated learning,” in _IEEE International Conference on Communications (ICC)_ , 2020, pp. 1–6. * [31] W. Shi, S. Zhou, Z. Niu, M. Jiang, and L. Geng, “Joint device scheduling and resource allocation for latency constrained wireless federated learning,” _IEEE Transactions on Wireless Communications_ , 2020. * [32] T. Zeng, O. Semiari, M. Mozaffari, M. Chen, W. Saad, and M. Bennis, “Federated learning in the sky: Joint power allocation and scheduling with uav swarms,” in _IEEE International Conference on Communications (ICC): Next-Generation Networking and Internet Symposium_ , 2020. * [33] H. Huang and Y. Yang, “Workerfirst: Worker-centric model selection for federated learning in mobile edge computing,” in _2020 IEEE/CIC International Conference on Communications in China (ICCC)_ , 2020, pp. 1039–1044. * [34] S. Zhou, H. Huang, W. Chen, P. Zhou, Z. Zheng, and S. Guo, “Pirate: A blockchain-based secure framework of distributed machine learning in 5g networks,” _IEEE Network_ , vol. 34, no. 6, pp. 84–91, 2020. * [35] Y. Zhan, P. Li, and S. 
Guo, “Experience-driven computational resource allocation of federated learning by deep reinforcement learning,” in _Proceedings of IEEE International Parallel and Distributed Processing Symposium (IPDPS)_ , 2020, pp. 234–243. * [36] H. Yu, S. Yang, and S. Zhu, “Parallel restarted sgd with faster convergence and less communication: Demystifying why model averaging works for deep learning,” in _Proceedings of the AAAI Conference on Artificial Intelligence_ , vol. 33, 2019, pp. 5693–5700. * [37] V. Mnih, K. Kavukcuoglu, D. Silver, A. Graves, I. Antonoglou, D. Wierstra, and M. Riedmiller, “Playing atari with deep reinforcement learning,” _arXiv preprint arXiv:1312.5602_ , 2013. * [38] V. Mnih, K. Kavukcuoglu, D. Silver, A. A. Rusu, J. Veness, M. G. Bellemare, A. Graves, M. Riedmiller, A. K. Fidjeland, G. Ostrovski _et al._ , “Human-level control through deep reinforcement learning,” _nature_ , vol. 518, no. 7540, pp. 529–533, 2015. * [39] D. Silver, G. Lever, N. Heess, T. Degris, D. Wierstra, and M. Riedmiller, “Deterministic policy gradient algorithms,” in _Proceedings of the 31st International Conference on Machine Learning (ICML)_ , vol. 32, 2014, pp. 387–395. * [40] A. Paszke, S. Gross, F. Massa, A. Lerer, J. Bradbury, G. Chanan, T. Killeen, Z. Lin, N. Gimelshein, L. Antiga _et al._ , “Pytorch: An imperative style, high-performance deep learning library,” in _Advances in neural information processing systems_ , 2019, pp. 8026–8037. * [41] X. Glorot, A. Bordes, and Y. Bengio, “Deep sparse rectifier neural networks,” in _Proceedings of the fourteenth international conference on artificial intelligence and statistics_ , 2011, pp. 315–323. * [42] D. P. Kingma and J. Ba, “Adam: A method for stochastic optimization,” _arXiv preprint arXiv:1412.6980_ , 2014.
# Dual exponential polynomials and a problem of Ozawa J. Heittokangas, K. Ishizaki, K. Tohge and Z.-T. Wen ###### Abstract Complex linear differential equations with entire coefficients are studied in the situation where one of the coefficients is an exponential polynomial and dominates the growth of all the other coefficients. If such an equation has an exponential polynomial solution $f$, then the order of $f$ and of the dominant coefficient are equal, and the two functions possess a certain duality property. The results presented in this paper improve earlier results by some of the present authors, and the paper concludes with two open problems. Key words: Dual exponential polynomials, exponential sum, finite order, linear differential equation, Ozawa’s problem, value distribution. 2020 MSC: Primary 30D15; Secondary 30D35. ## 1 Introduction Frei [2] proved that the differential equation $f^{\prime\prime}+e^{-z}f^{\prime}+\alpha f=0,\quad\alpha\in\mathbb{C}\setminus\\{0\\},$ (1.1) has a subnormal (that is, non-trivial and finite-order) solution if and only if $\alpha=-m^{2}$ for a positive integer $m$. The subnormal solution $f$ is a polynomial in $e^{z}$ of degree $m$, that is, an exponential sum of the form $f(z)=1+C_{1}e^{z}+\cdots+C_{m}e^{mz},\quad C_{j}\in\mathbb{C}.$ (1.2) It was discovered in [18, Lemma 1] that in this representation one has $C_{j}\neq 0$ for $1\leq j\leq m$. Substituting the subnormal solution $f$ into (1.1), we get $\sum_{j=1}^{m}C_{j}j^{2}e^{jz}+\sum_{j=1}^{m}C_{j}je^{(j-1)z}-m^{2}\sum_{j=1}^{m}C_{j}e^{jz}=m^{2}.$ By the Borel-Nevanlinna theorem [3, pp. 70, 108], or simply by an elementary observation on three polynomials in $e^{z}$, this gives rise to the recursive formula $C_{1}=m^{2},\quad(m^{2}-j^{2})C_{j}=(j+1)C_{j+1},\quad 1\leq j\leq m-1,$ from which $C_{j}=\frac{1}{j!}\prod_{k=0}^{j-1}(m^{2}-k^{2})$ for $1\leq j\leq m$.
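This computation is easy to check symbolically. The following SymPy snippet substitutes the subnormal solution with the coefficients $C_{j}=\frac{1}{j!}\prod_{k=0}^{j-1}(m^{2}-k^{2})$ into (1.1) with $\alpha=-m^{2}$ and confirms that the left-hand side vanishes identically (shown here for the sample value $m=3$):

```python
from math import factorial, prod

import sympy as sp

z = sp.symbols('z')
m = 3  # any positive integer works; m = 3 keeps the output small

# C_j = (1/j!) * prod_{k=0}^{j-1} (m^2 - k^2), for j = 1, ..., m
C = {j: sp.Rational(prod(m**2 - k**2 for k in range(j)), factorial(j))
     for j in range(1, m + 1)}

f = 1 + sum(C[j] * sp.exp(j * z) for j in range(1, m + 1))

# f'' + e^{-z} f' - m^2 f should vanish identically
residue = sp.simplify(sp.diff(f, z, 2) + sp.exp(-z) * sp.diff(f, z) - m**2 * f)
```

For $m=3$ this gives $C_{1}=9$, $C_{2}=36$, $C_{3}=60$, and `residue` simplifies to $0$.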
Due to the presence of the transcendental coefficient $e^{-z}$, any solution of (1.1) linearly independent with $f$ in (1.2) must be of infinite order [7]. For example, when $\alpha=-1$, the function $g(z)=\exp(e^{-z}+z)$ is an infinite order solution of (1.1) and linearly independent with $f(z)=1+e^{z}$. Ozawa [15] showed that if $a\neq 0$, then the non-trivial solutions of $f^{\prime\prime}+e^{-z}f^{\prime}+(az+b)f=0$ are of infinite order of growth. If $P(z)$ is a non-constant polynomial, the question of whether all non-trivial solutions of $f^{\prime\prime}+e^{-z}f^{\prime}+P(z)f=0$ are of infinite order of growth has been known as the Ozawa problem. This problem has been answered affirmatively for particular polynomials $P(z)$ by Amemiya-Ozawa [1] and by Gundersen [4], while the complete solution is due to Langley [12]. We proceed to state three new examples of Frei-Ozawa type. ###### Example 1.1 If $H$ is an arbitrary entire function, then $f(z)=e^{z}+1$ solves $f^{\prime\prime}+(H-1+He^{-z})f^{\prime}-Hf=0.$ Of particular interest is the case when $H$ is a polynomial. ###### Example 1.2 The function $f(z)=1+(1-3c)\left(e^{z}+\left(1-\frac{3}{4}c\right)\right)e^{2z}$, $c\in\mathbb{C}\setminus\\{\frac{1}{3},\frac{4}{3}\\}$, with two exponential terms solves $f^{\prime\prime}+\left(-\frac{5}{3}-c+\frac{2}{3}e^{-z}\right)f^{\prime}+\left(2c-\frac{2}{3}\right)f=0.$ ###### Example 1.3 The function $f(z)=1+3e^{2z}+\sqrt{6}ie^{3z}$ with two exponential terms solves $f^{\prime\prime}+\left(1-\sqrt{6}ie^{-z}+2e^{-2z}\right)f^{\prime}-12f=0,$ where the transcendental coefficient has two exponential terms also. By making a change of variable $z\to wz$, where $w\in\mathbb{C}\setminus\\{0\\}$, we see that $g^{\prime\prime}+w\left(1-\sqrt{6}ie^{-wz}+2e^{-2wz}\right)g^{\prime}-12w^{2}g=0$ has a solution $g(z)=1+3e^{2wz}+\sqrt{6}ie^{3wz}=f(wz)$.
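Examples of this kind are straightforward to verify symbolically. The following SymPy check confirms Example 1.3 together with its rescaled variant $g(z)=f(wz)$ for a sample value of $w$:

```python
import sympy as sp

z = sp.symbols('z')
i = sp.I
f = 1 + 3 * sp.exp(2 * z) + sp.sqrt(6) * i * sp.exp(3 * z)

# Example 1.3: f'' + (1 - sqrt(6) i e^{-z} + 2 e^{-2z}) f' - 12 f = 0
A = 1 - sp.sqrt(6) * i * sp.exp(-z) + 2 * sp.exp(-2 * z)
res_f = sp.simplify(sp.diff(f, z, 2) + A * sp.diff(f, z) - 12 * f)

# Change of variable z -> wz: g(z) = f(wz) solves the rescaled equation.
w = sp.Integer(2)          # sample non-zero w; any w != 0 works
g = f.subs(z, w * z)
Aw = w * (1 - sp.sqrt(6) * i * sp.exp(-w * z) + 2 * sp.exp(-2 * w * z))
res_g = sp.simplify(sp.diff(g, z, 2) + Aw * sp.diff(g, z) - 12 * w**2 * g)
```

Both residues simplify to $0$, which reflects the identity $g^{\prime\prime}(z)+w A(wz)g^{\prime}(z)-12w^{2}g(z)=w^{2}\bigl(f^{\prime\prime}+Af^{\prime}-12f\bigr)(wz)$.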
One might wonder about possible examples of solutions $f$ with a single exponential term and of transcendental coefficients $A(z)$ having at least two exponential terms. The non-existence of such examples will be confirmed in Theorem 3.2 below. For example, it will be shown that a function $f(z)=1+be^{wz}$ for $b,w\in\mathbb{C}\setminus\\{0\\}$ is a solution of $f^{\prime\prime}+\left\\{P_{1}(z)+P_{2}(z)e^{-wz}\right\\}f^{\prime}-P(z)f=0$ for $P(z),P_{1}(z),P_{2}(z)\in\mathbb{C}[z]$ if and only if $P_{1}(z)=\frac{1}{w}P(z)-w$ and $P_{2}(z)=\frac{1}{bw}P(z)$. In contrast to Ozawa’s problem and complementing the three examples above, our primary focus is on exponential polynomial solutions of linear differential equations, in particular of second order equations $f^{\prime\prime}+A(z)f^{\prime}+B(z)f=0,$ (1.3) where $A(z)$ and $B(z)$ are entire. An _exponential polynomial_ is a function of the form $f(z)=P_{1}(z)e^{Q_{1}(z)}+\cdots+P_{k}(z)e^{Q_{k}(z)},$ (1.4) where $P_{j}$, $Q_{j}$ are polynomials for $1\leq j\leq k$. Observe that a polynomial is a special case of an exponential polynomial. A transcendental exponential polynomial $f$ can be written in the _normalized form_ $f(z)=F_{0}(z)+F_{1}(z)e^{w_{1}z^{q}}+\cdots+F_{m}(z)e^{w_{m}z^{q}},$ (1.5) where $q=\max\\{\deg(Q_{j})\\}\geq 1$ is the order of $f$, the frequencies $w_{j}$ are non-zero and pairwise distinct, the multipliers $F_{j}$ are exponential polynomials of order $\leq q-1$ such that $F_{j}(z)\not\equiv 0$ for $1\leq j\leq m$, and $m\leq k$ [8, 16]. ###### Definition 1.4 ([18]) Let $f$ be given in the normalized form (1.5). If the non-zero frequencies ${w}_{1},\ldots,{w}_{m}$ of $f$ all lie on a fixed ray $\arg(w)=\theta$, then $f$ is called a _simple exponential polynomial_. If $g$ is another simple exponential polynomial of the same order $q$ as $f$ such that the non-zero frequencies of $g$ all lie on the opposite ray $\arg(w)=\theta+\pi$, then $f$ and $g$ are called _dual exponential polynomials_.
For example, the functions $f(z)=z^{2}e^{-iz}+ze^{z^{2}}+e^{2z^{2}+(1-i)z}$ and $g(z)=2e^{-z^{2}+(1+i)z}+z^{2}e^{-4z^{2}+iz}$ are dual exponential polynomials of order 2. In studying the differential equation (1.3) with entire coefficients $A(z)$ and $B(z)$, it is fundamental that each of its solutions $f$ is an entire function also. In this paper we study cases when $f$ can be an exponential polynomial assuming that $A(z)$ is an exponential polynomial and that $B(z)$ grows slowly compared to $A(z)$. Naturally, the set ${\mathcal{E}}$ of entire functions is a ring closed under differentiation, and the set $\operatorname{Exp}_{q}$ of exponential polynomials of order $\leq q\in\mathbb{N}$ together with constants in $\mathbb{C}$ and ordinary polynomials in $\mathbb{C}[z]=:\operatorname{Exp}_{0}$ becomes a differential subring of $\mathcal{E}$. On the other hand, $\operatorname{Exp}_{q}$ is not closed under integration in general except for the set $\operatorname{Exp}_{1}$ of exponential polynomials of order $\leq 1$, which plays a role in our discussions. To identify a primitive of each element in $\operatorname{Exp}_{1}$, it is convenient to use the formula $\int z^{n}e^{wz}\,dz=\left(\frac{1}{w}z^{n}+\sum_{\nu=0}^{n-1}\frac{(-1)^{n-\nu}n!}{w^{n-\nu+1}\nu!}z^{\nu}\right)e^{wz}+\text{constant}$ for $n\in\mathbb{N}\cup\\{0\\}$ and $w\in\mathbb{C}\setminus\\{0\\}$. Of course, an analogous formula is not in general available for $z^{n}e^{wz^{q}}$ when $q\geq 2$. Indeed, recall the error function $\mathrm{erf}(z)$ defined also for complex argument $z$ by $\mathrm{erf}(z)=\frac{2}{\sqrt{\pi}}\int_{0}^{z}e^{-\zeta^{2}}\,d\zeta.$ It is the primitive of $\frac{2}{\sqrt{\pi}}e^{-z^{2}}\in\operatorname{Exp}_{2}$, but the function itself is not an exponential polynomial. For this reason one needs the special expression $\mathrm{erf}(z)$ for this function as in the real argument case. This is also the case when $q\geq 3$. 
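The displayed primitive formula can be checked mechanically: writing the bracket as $Q(z)$, the claim is exactly that $Q^{\prime}(z)+wQ(z)=z^{n}$. The following Python sketch (ours, not from the text) verifies this coefficientwise with exact rational arithmetic, restricting to rational $w\neq 0$ so that the computation is exact:

```python
from fractions import Fraction
from math import factorial

def primitive_coeffs(n, w):
    """Coefficients q_0, ..., q_n of Q(z) in the displayed formula,
    so that (Q(z) e^{wz})' = z^n e^{wz}."""
    q = [Fraction(0)] * (n + 1)
    q[n] = Fraction(1) / w
    for nu in range(n):
        q[nu] = (Fraction((-1) ** (n - nu) * factorial(n), factorial(nu))
                 / w ** (n - nu + 1))
    return q

def check(n, w):
    q = primitive_coeffs(n, w)
    # coefficient of z^nu in Q'(z) + w Q(z) must be 1 for nu = n and 0 otherwise
    for nu in range(n + 1):
        deriv = (nu + 1) * q[nu + 1] if nu < n else Fraction(0)
        if deriv + w * q[nu] != (1 if nu == n else 0):
            return False
    return True

assert all(check(n, Fraction(3, 2)) for n in range(6))
assert all(check(n, Fraction(-7, 5)) for n in range(6))
```

For $q\geq 2$ no analogous closed form exists, in line with the remark on the error function.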
The value distribution of the functions $\int_{0}^{z}e^{-\zeta^{q}}d\zeta$, $q\in\mathbb{N}$, as described in Nevanlinna’s monograph [14, pp. 168–170], is quite different from that of exponential polynomials [8, 10, 16]. Along with $\operatorname{Exp}_{q-1}$, the set $\mathcal{S}_{q}(\theta)$ of simple exponential polynomials of order $q$ with respect to a fixed angle $\theta\in[0,2\pi)$ forms a differential subring of $\operatorname{Exp}_{q}$. A unit element in $\mathcal{S}_{q}(\theta)$ is a single exponential term $e^{wz^{q}+p(z)}$ with $\arg(w)=\theta$, $p(z)\in\mathbb{C}[z]$ and $\deg(p)\leq q-1$, whose multiplicative inverse belongs to the set $\mathcal{S}_{q}(\theta+\pi)$ as its dual exponential polynomial. It should be observed that if $f\in\mathcal{S}_{q}(\theta)$ and $g\in\mathcal{S}_{q}(\theta+\pi)$ are dual exponential polynomials, then $fg\in\operatorname{Exp}_{q-1}$ might not hold, but even so, the growth of $fg$ in terms of the characteristic function could be somewhat reduced from that of $f$ or $g$. ###### Example 1.5 If $f(z)=e^{z}+e^{2z}$ and $g(z)=e^{-4z}$, then $T(r,f)=\frac{2}{\pi}r+O(\log r)$ and $T(r,g)=\frac{4}{\pi}r+O(\log r)$, while $T(r,fg)=\frac{3}{\pi}r+O(\log r)$, see [10]. Alternatively, the choice $g(z)=e^{-z}$ gives $T(r,g)=\frac{1}{\pi}r$ and $T(r,fg)=\frac{1}{\pi}r+O(1)$. In our setting, the duality of two exponential polynomials is an interdependence among them in order to reduce the growth under multiplication, especially when combined with differentiation. For example, if $A$ and $f$ are dual exponential polynomials of order $q$, then $A$ and $f^{\prime}$ are also dual, and at times the growth of $Af^{\prime}$ is reduced to $\rho(Af^{\prime})<q$. The motivation for studying exponential polynomial solutions of (1.3) arises from the following previous result. 
###### Theorem 1.6 ([18]) Suppose that $f$ is a transcendental exponential polynomial solution of (1.3), where $A(z)$ and $B(z)$ are exponential polynomials satisfying $\rho(B)<\rho(A)$. Then the following assertions hold. * (a) $f$ and $A(z)$ are dual exponential polynomials of order $q\in\mathbb{N}$, and $f$ has the normalized representation $f(z)=c+F_{1}(z)e^{w_{1}z^{q}}+\cdots+F_{m}(z)e^{w_{m}z^{q}},$ (1.6) where $m\in\mathbb{N}$ and $c\in\mathbb{C}\setminus\\{0\\}$. * (b) If $\rho(Af^{\prime})<q$, then $q=1$ and $A(z)=ae^{-wz},\quad B(z)=-w^{2}\quad\text{and}\quad f(z)=c\left(1+\frac{w}{a}e^{wz}\right),$ (1.7) where $w=w_{1}$ and $a\in\mathbb{C}\setminus\\{0\\}$. If $a=c=w=1$, then (1.7) reduces to Frei’s equation (1.1) and Frei’s solution (1.2) in the case $m=1$. The following example illustrates that it is not always the case that the differential equation (1.3) possesses a non-trivial exponential polynomial solution when $A(z)$ and $B(z)$ are exponential polynomials satisfying $\rho(B)<\rho(A)$. ###### Example 1.7 For a fixed $n\in\mathbb{Z}$, let $A(z)=-\frac{5}{3}+n+\frac{2}{3}e^{-z}$ and $B(z)=-\frac{8}{3}+n$. Then (1.3) has a zero-free solution $f(z)=\exp\left\\{\dfrac{2}{3}e^{-z}+\left(\dfrac{8}{3}-n\right)z\right\\}.$ (1.8) Note that $f$ is an exponential of an exponential polynomial. Another solution of (1.3), linearly independent with $f$, is $\displaystyle g(z)$ $\displaystyle=$ $\displaystyle f(z)\int^{z}\frac{e^{-\zeta}}{f(\zeta)}d\zeta$ $\displaystyle=$ $\displaystyle\exp\left\\{\frac{2}{3}e^{-z}+\left(\frac{8}{3}-n\right)z\right\\}\int^{z}\exp\left\\{\frac{2}{3}e^{-\zeta}+\left(\frac{5}{3}-n\right)\zeta\right\\}d\zeta,$ where the integral represents an arbitrary primitive function. 
We may re-write this as $g(z)\exp\left\\{-\frac{2}{3}e^{-z}-\left(\frac{8}{3}-n\right)z\right\\}=\int^{z}\exp\left\\{\frac{2}{3}e^{-\zeta}+\left(\frac{5}{3}-n\right)\zeta\right\\}d\zeta$ to see that $g$ solves a first order equation $g^{\prime}(z)+\left\\{\frac{2}{3}e^{-z}-\left(\frac{8}{3}-n\right)\right\\}g(z)=\exp\left\\{\frac{4}{3}e^{-z}+\left(\frac{13}{3}-2n\right)z\right\\}.$ This shows that $g$, being of infinite order, cannot be an exponential polynomial. Hence it is necessary in Theorem 1.6 to assume that (1.3) has a non-trivial exponential polynomial solution $f$. One may also observe that a small perturbation of the above coefficients $A(z)$ and $B(z)$ leads to our desired case. In fact, by choosing $A(z)=-\frac{5}{3}-n+\frac{2}{3}e^{-z}$ and $B(z)=-\frac{2}{3}+2n$ for any $n\in\mathbb{Z}$, the equation (1.3) permits the exponential polynomial solution $f(z)=1+(1-3n)e^{z}+(1-3n)\left(1-\dfrac{3}{4}n\right)e^{2z}.$ (1.9) A difference between these two cases can also be observed in the logarithmic derivatives: If $f$ is the function in (1.8), then $\frac{f^{\prime}(z)}{f(z)}=-\frac{2}{3}e^{-z}+\frac{8}{3}-n$, while if $f$ is the function in (1.9), then $\frac{f^{\prime}(z)}{f(z)}$ is not an exponential polynomial but an irreducible rational function in $e^{z}$. After discussing some properties of exponential polynomials in Section 2, we will show in Section 3 that the conclusions in Theorem 1.6(a) can be made stronger under weaker assumptions. Complementing the condition $\rho(Af^{\prime})<q$ in Theorem 1.6(b), some new conditions implying the conclusion $q=1$ will be discovered. Examples on higher order duality as well as on the cases where a solution is dual to more than one coefficient will be discussed in Sections 4 and 5, respectively. Two open problems are formulated in the hope that these findings would give rise to further discussions in the future.
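The family (1.9) can be tested numerically for several integers $n$ at once. The following Python sketch (ours) evaluates the residual of (1.3) with the perturbed coefficients:

```python
import cmath

def residual(n, z):
    """Residual of f'' + A f' + B f for the solution (1.9) with
    A(z) = -5/3 - n + (2/3) e^{-z} and B(z) = -2/3 + 2n."""
    c1 = 1 - 3*n
    c2 = (1 - 3*n) * (1 - 0.75*n)
    f   = 1 + c1*cmath.exp(z) +   c2*cmath.exp(2*z)
    fp  =     c1*cmath.exp(z) + 2*c2*cmath.exp(2*z)
    fpp =     c1*cmath.exp(z) + 4*c2*cmath.exp(2*z)
    A = -5/3 - n + (2/3)*cmath.exp(-z)
    B = -2/3 + 2*n
    return fpp + A*fp + B*f

# the residual vanishes (up to rounding) for every integer n
for n in (-2, -1, 0, 1, 2, 5):
    for z in (0.4 + 0.9j, -1.1 - 0.6j, 1.5 + 0j):
        assert abs(residual(n, z)) < 1e-6
```

Note that $n=\frac{1}{3}$ or $n=\frac{4}{3}$ would make the solution degenerate, matching the excluded values in Example 1.2.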
## 2 Preliminaries on exponential polynomials We need to introduce several concepts some of which are new. ###### Definition 2.1 ([11, p. 214]) Let $f$ in (1.5) be a simple exponential polynomial. If there exists a constant $w\in\mathbb{C}\setminus\\{0\\}$ such that $w_{j}/w$ is a positive integer for every $j=1,2,\ldots,m$, then the (non-zero) frequencies of $f$ are said to be _commensurable_ , and $w$ is called a _common factor_. For example, $f(z)=e^{\pi z}+3e^{2\pi z}+ze^{3\pi z}$ and $g(z)=e^{4iz}+e^{6iz}$ are simple exponential polynomials, both of their frequencies are commensurable, and examples for common factors are $\pi,\pi/2$ for $f$ and $i,2i$ for $g$. In particular, a common factor is not unique. Note that it is usual to say that non-zero real numbers $a$ and $b$ are commensurable if their ratio $a/b$ is a rational number. Equivalently, there exist a real number $c$ and integers $m$ and $n$ such that $a=mc$ and $b=nc$. In Definition 2.1 we are concerned with a simple exponential polynomial and a fixed $\theta\in[0,2\pi)$, and thus all the non-zero frequencies are of the form $w_{j}=r_{j}e^{i\theta}$ for $r_{j}>0$, and the ratio of $w_{j}$ and $w_{i}$ is $\frac{w_{j}}{w_{i}}=\frac{r_{j}}{r_{i}}=\frac{w_{j}/w}{w_{i}/w}.$ This is a positive rational number for a common factor $w$. If we consider the dual exponential polynomial as well, all those frequencies are commensurable in the usual sense. If $f$ is a simple exponential polynomial of order one with constant multipliers, and if its frequencies are commensurable as in Frei’s case (1.2), then by the fundamental theorem of algebra, $f$ can be written as $f(z)=A\prod_{j=1}^{m}\left(e^{wz}-\alpha_{j}\right),$ where $A\neq 0$, $\alpha_{j}$’s are complex constants and $m$ is a positive integer. In particular, all the zeros of $f$ lie on at most $m$ lines. 
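The zero-line phenomenon just described is easy to observe numerically. With $w=1$ and the sample constants $\alpha_{1}=2$, $\alpha_{2}=3i$ (our choice, purely for illustration), the zeros of each factor $e^{wz}-\alpha_{j}$ are $z=(\log\alpha_{j}+2\pi ik)/w$, $k\in\mathbb{Z}$, and all of them share the real part $\ln|\alpha_{j}|$, so each factor contributes one vertical line of zeros:

```python
import cmath, math

W = 1.0
ALPHAS = (2.0, 3j)  # sample nonzero constants alpha_1, alpha_2 (our choice)

def f(z):
    # f(z) = (e^{Wz} - alpha_1)(e^{Wz} - alpha_2)
    p = 1.0 + 0j
    for a in ALPHAS:
        p *= cmath.exp(W * z) - a
    return p

for a in ALPHAS:
    base = cmath.log(a) / W                  # principal zero of e^{Wz} - a
    for k in range(-3, 4):
        z = base + 2j * math.pi * k / W      # the full zero set of this factor
        assert abs(f(z)) < 1e-8              # indeed a zero of f
        assert abs(z.real - base.real) < 1e-12  # same line Re(Wz) = ln|a|
```

For general $w$ the zeros of each factor lie on the line $\operatorname{Re}(wz)=\ln|\alpha_{j}|$, which is vertical exactly when $w$ is real.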
We note that if the non-zero frequencies of $f$ are commensurable, then they are clearly linearly dependent over rationals (see [13] for results in this direction), but not the other way around. For example, the points $w_{1}=1,w_{2}=\sqrt{2},w_{3}=\sqrt{2}-1$ are linearly dependent over rationals but not commensurable. ###### Definition 2.2 Suppose that $f$ and $g$ are dual exponential polynomials with commensurable frequencies $\\{w_{j}\\}$ $(j>0)$ and $\\{\lambda_{i}\\}$ $(i>0)$, respectively, sharing the same common factor $w$ but with opposite signs. If the points $w_{j}+\lambda_{i}$ are on one ray including the origin for all $i,j>0$, then $f$ and $g$ are called _strongly dual exponential polynomials_. For example, the functions $f(z)=1+ze^{z}+2e^{3z}$ and $g(z)=1-e^{-z}$ are strongly dual exponential polynomials, while $f(z)$ and $h(z)=g(z)+2z^{2}e^{-2z}$ are not. Note that if $\arg(w_{j})=\theta$, then $\arg(\lambda_{j})=\theta+\pi$ by duality, and moreover, if $w_{j}+\lambda_{i}\neq 0$, then precisely one of $\arg(w_{j}+\lambda_{i})=\theta$ or $\arg(w_{j}+\lambda_{i})=\theta+\pi$ holds for all $i,j>0$. Alternatively, strong duality of $f$ and $g$ of order $q$ can be expressed as follows: There exists a non-zero constant $w$ such that $f(z)=\sum_{j=0}^{m}F_{j}(z)(e^{wz^{q}})^{j}\quad\textnormal{and}\quad g(z)=\sum_{i=0}^{m}G_{i}(z)(e^{-wz^{q}})^{i},$ (2.1) where $F_{j},G_{i}$ are exponential polynomials of order $\leq q-1$. Hence $f$ is a polynomial in $e^{wz^{q}}$ and $g$ is a polynomial in $e^{-wz^{q}}$, with smaller exponential polynomials as multipliers. Using the notation above, $f\in\operatorname{Exp}_{q-1}[e^{wz^{q}}]$ and $g\in\operatorname{Exp}_{q-1}[e^{-wz^{q}}]$. Differing from the situation in (1.5), some of the multipliers $F_{j},G_{i}$ $(i,j>0)$ in (2.1) must suitably vanish identically so that only one of $j-i\geq 0$ or $j-i\leq 0$ always holds for all non-vanishing multipliers $F_{j},G_{i}$ $(i,j>0)$. This is a consequence of Definition 2.2. 
We may think that being strong in our duality means that the product of $f(z)-F_{0}(z)$ and $g(z)-G_{0}(z)$ becomes again a commensurable exponential polynomial with either $w$ or $-w$ as a common factor. In the case when both $F_{0}(z)$ and $G_{0}(z)$ are constant, each product of the derivatives $f^{(k)}(z)$ and $g^{(\ell)}(z)$, $k,\ell\in\mathbb{N}$, is a commensurable exponential polynomial with the same common factor as the product of $f(z)-F_{0}(z)$ and $g(z)-G_{0}(z)$. ###### Definition 2.3 ([18]) Denote the set of complex conjugate frequencies of the function $f$ in (1.5) by $W_{f}=\\{\overline{w}_{0},\overline{w}_{1},\ldots,\overline{w}_{m}\\}$, where $\overline{w}_{0}=0$ is related to the multiplier $F_{0}(z)\not\equiv 0$, and $W_{f}=\\{\overline{w}_{1},\ldots,\overline{w}_{m}\\}$ when $F_{0}(z)\equiv 0$. Denote the convex hull of the set $W_{f}$ by $\operatorname{co}(W_{f})$, and let $C(\operatorname{co}(W_{f}))$ denote the circumference of $\operatorname{co}(W_{f})$. The set $\operatorname{co}(W_{f})$ is defined as the intersection of all closed convex sets containing $W_{f}$, and as such it is either a convex polygon or a line segment. The latter occurs when $f$ is simple, and, in particular, when $w_{1},\ldots,w_{m}$ are commensurable. The vertices of $\operatorname{co}(W_{f})$ are formed by some (possibly all) of the points $\overline{w}_{0},\overline{w}_{1},\ldots,\overline{w}_{m}$. The circumference $C(\operatorname{co}(W_{f}))$ of $\operatorname{co}(W_{f})$ plays an important role in describing the value distribution of $f$, see [8, 10, 16]. 
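The quantitative role of the circumference can be illustrated numerically via the estimate $T(r,f)=C(\operatorname{co}(W^{0}_{f}))\frac{r^{q}}{2\pi}+o(r^{q})$ of [16] recalled below. The following Python sketch (ours; a crude quadrature, not a proof) compares a numerically integrated proximity function with the circumference, both for a genuine polygon ($f(z)=e^{z}+e^{iz}$, hull of $\\{0,1,-i\\}$, circumference $2+\sqrt{2}$) and for a degenerate segment ($f(z)=e^{z}+e^{2z}$, segment $[0,2]$ traversed twice, circumference $4$):

```python
import cmath, math

def prox(r, freqs, N=4000):
    """Crude quadrature for m(r, f), where f(z) is the sum of e^{wz}
    over the given frequencies w."""
    s = 0.0
    for k in range(N):
        z = r * cmath.exp(2j * math.pi * k / N)
        exps = [w * z for w in freqs]
        top = max(e.real for e in exps)          # factor out the dominant term
        mag = abs(sum(cmath.exp(e - top) for e in exps))
        if mag > 0.0:
            s += max(top + math.log(mag), 0.0)   # log^+ |f(z)|
    return s / N

r = 100.0
# conjugated frequencies {1, -i}: hull of {0, 1, -i}, circumference 2 + sqrt(2)
assert abs(prox(r, (1, 1j)) / r - (2 + math.sqrt(2)) / (2 * math.pi)) < 0.02
# frequencies {1, 2}: hull of {0, 1, 2} is a segment, circumference 2*2 = 4
assert abs(prox(r, (1, 2)) / r - 4 / (2 * math.pi)) < 0.02
```

The second case reproduces the value $T(r,f)=\frac{2}{\pi}r+O(\log r)$ from Example 1.5.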
Let $h$ be a quotient of two transcendental exponential polynomials, say $h(z)=f(z)/g(z),$ where $f$ is of the form (1.5) and $g$ is an exponential polynomial of the normalized form $g(z)=G_{0}(z)+G_{1}(z)e^{w_{1}z^{q}}+\cdots+G_{m}(z)e^{w_{m}z^{q}}.$ In these representations of $f$ and $g$ for the quotient $h$, we allow that some of the multipliers $F_{j}$ or $G_{j}$ may vanish identically, but we suppose that the matching multipliers $F_{j}$ and $G_{j}$ do not both vanish identically for any $j$. For the quotient $h=f/g$, define the set $W_{h}=\\{\overline{w}_{0},\overline{w}_{1},\ldots,\overline{w}_{m}\\}$. The proximity function of $h$ is $m(r,h)=\big{(}C(\operatorname{co}(W_{h}))-C(\operatorname{co}(W_{g}))\big{)}\frac{r^{q}}{2\pi}+o(r^{q}),$ (2.2) see [17, Satz 1]. In particular, if $g\equiv 1$, then $W_{g}=\\{0\\}$ and $C(\operatorname{co}(W_{g}))=0$. This yields [16, Satz 1] as a special case, namely $T(r,f)=m(r,f)=C(\operatorname{co}(W^{0}_{f}))\frac{r^{q}}{2\pi}+o(r^{q}),$ (2.3) where $W^{0}_{f}=W_{f}\cup\\{0\\}$. The estimates (2.2) and (2.3) are consistent with the estimate $m\left(r,\frac{f^{\prime}}{f}\right)=o(T(r,f)),$ known as the lemma on the logarithmic derivative, since $W_{f^{\prime}/f}=W_{f}$ holds for any given exponential polynomial $f$ of the form (1.5). We also point out that $W_{f/f^{\prime}}=W_{f^{\prime}}$. This fact will be used in proving our main results in Section 3. ## 3 The main results Motivated by Example 1.3, we improve Theorem 1.6(a) under weaker assumptions on $B(z)$. ###### Theorem 3.1 Suppose that $f$ and $A(z)$ in (1.3) are transcendental exponential polynomials, and that $B(z)$ is an entire function satisfying $T(r,B)=o(T(r,A))$. Then the following assertions hold. * (a) $f$ and $A(z)$ are dual exponential polynomials of order $q\in\mathbb{N}$, $f$ has the normalized representation (1.6), and $B(z)$ is an exponential polynomial of order $\rho(B)\leq q-1$. 
* (b) The frequencies of $f$ are commensurable if and only if the frequencies of $A(z)$ are commensurable. In both cases, $f$ and $A(z)$ are strongly dual exponential polynomials. Proof. (a) Suppose that $0\leq\rho(f)\leq\rho(A)-1$. The case $\rho(f)=0$ is not possible because $f$ is transcendental. Hence $\rho(f)\geq 1$. But now $|A|\leq|f^{\prime\prime}/f^{\prime}|+|B||f/f^{\prime}|$ and the assumption $T(r,B)=o(T(r,A))$ imply $T(r,A)=m(r,A)\leq m(r,B)+O\left(r^{\rho(A)-1}\right)=o(T(r,A)),$ which is a contradiction. Here we have used (2.2) for $h=f/f^{\prime}$ and $g=f^{\prime}$, as well as (2.3) for $A$ in place of $f$. The following two cases are also impossible by the proof of [9, Theorem 3.6]: * (1) $\rho(f)=\rho(A)$ and either $F_{0}(z)\equiv 0$ or $F^{\prime}_{0}(z)\not\equiv 0$. * (2) $\rho(f)\geq\rho(A)+1$. Thus $\rho(f)=\rho(A)=q\geq 1$ and $f$ has the representation (1.6). We proceed to prove that $f$ and $A(z)$ are dual exponential polynomials. Using (1.3), we find that $m\left(r,\frac{Af^{\prime}}{f}\right)=O(\log r)+m(r,B)=o(T(r,A))=o\left(r^{q}\right).$ The formula (7.3) in [18] should be replaced by this. Thus the formula (7.7) in [18] holds, and the reasoning in [18] shows that $f$ and $A(z)$ are dual exponential polynomials. To complete the proof of (a), it suffices to prove that $B(z)$ is an exponential polynomial of order $\rho(B)\leq q-1$. Since the frequencies $w_{j}$ of $f$ are all on one ray, we may appeal to a rotation, and suppose that $w_{1},\ldots,w_{m}\in\mathbb{R}_{+}$. By renaming the frequencies $w_{j}$, if necessary, we may further suppose that $0<w_{1}<\cdots<w_{m}$. Thus the dual coefficient must be of the form $A(z)=A_{0}(z)+\sum_{j=1}^{k}A_{j}(z)e^{-\lambda_{j}z^{q}},$ (3.1) where $A_{j}(z)\not\equiv 0$ for all $j\in\\{1,\ldots,k\\}$ and $\lambda_{1},\ldots,\lambda_{k}\in\mathbb{R}_{+}$. Renaming the frequencies $\lambda_{j}$, if necessary, we may suppose that $0<\lambda_{1}<\cdots<\lambda_{k}$. 
Write $f^{\prime}(z)=\sum_{j=1}^{m}G_{j}(z)e^{w_{j}z^{q}}\quad\textnormal{and}\quad f^{\prime\prime}(z)=\sum_{j=1}^{m}H_{j}(z)e^{w_{j}z^{q}},$ where $G_{j}(z)=F_{j}^{\prime}(z)+qw_{j}z^{q-1}F_{j}(z)\not\equiv 0$ and $H_{j}(z)=G_{j}^{\prime}(z)+qw_{j}z^{q-1}G_{j}(z)\not\equiv 0$. Next, write $-Af^{\prime}=Bf+f^{\prime\prime}$ in the form $-\left(\sum_{j=1}^{k}A_{j}e^{-\lambda_{j}z^{q}}\right)\left(\sum_{j=1}^{m}G_{j}e^{w_{j}z^{q}}\right)=cB+\sum_{j=1}^{m}(A_{0}G_{j}+BF_{j}+H_{j})e^{w_{j}z^{q}}.$ (3.2) From (3.2) we find that $B$ is an exponential polynomial of order $\rho(B)\leq q$. In fact, from (2.3) and the assumption $T(r,B)=o(T(r,A))$, it follows that $\rho(B)\leq q-1$. (b) We begin with some preparations. From [16] and [17], we have $m\left(r,\sum_{j=1}^{k}A_{j}e^{-\lambda_{j}z^{q}}\right)=T\left(r,\sum_{j=1}^{k}A_{j}e^{-\lambda_{j}z^{q}}\right)=\frac{\lambda_{k}}{\pi}r^{q}+o(r^{q}),$ and $\displaystyle m\left(r,\bigg{\\{}cB+\sum_{j=1}^{m}(A_{0}G_{j}+BF_{j}+H_{j})e^{w_{j}z^{q}}\bigg{\\}}\Big{/}\sum_{j=1}^{m}G_{j}e^{w_{j}z^{q}}\right)$ $\displaystyle\qquad\qquad=\frac{2w_{m}-2(w_{m}-w_{1})}{2\pi}r^{q}+o(r^{q})=\frac{w_{1}}{\pi}r^{q}+o(r^{q}).$ Therefore, we deduce that $0<\lambda_{1}<\cdots<\lambda_{k}=w_{1}<\cdots<w_{m}.$ (3.3) Thus from (3.2), it follows that $-A_{k}G_{1}=cB.$ (3.4) If $A_{0}G_{m}+BF_{m}+H_{m}\not\equiv 0$, then from [10, Theorem 2.2] and (3.3), we get $\displaystyle N(r,0,L)$ $\displaystyle=$ $\displaystyle\frac{2(w_{m}-w_{1})+2(\lambda_{k}-\lambda_{1})}{2\pi}r^{q}+O(r^{q-1}+\log r)$ $\displaystyle=$ $\displaystyle\frac{w_{m}-\lambda_{1}}{\pi}r^{q}+O(r^{q-1}+\log r),$ $\displaystyle N(r,0,R)$ $\displaystyle=$ $\displaystyle\frac{w_{m}}{\pi}r^{q}+O(r^{q-1}+\log r),$ where $N(r,0,L)$ and $N(r,0,R)$ are the counting functions of zeros of the exponential polynomials on the left-hand side and on the right-hand side of (3.2), respectively. This implies $w_{m}=w_{m}-\lambda_{1}$, which is impossible. 
Thus we have $A_{0}G_{m}+BF_{m}+H_{m}\equiv 0.$ (3.5) Now (3.2) reduces to $-\left(\sum_{j=1}^{k}A_{j}e^{-\lambda_{j}z^{q}}\right)\left(\sum_{j=1}^{m}G_{j}e^{w_{j}z^{q}}\right)=cB+\sum_{j=1}^{m-1}(A_{0}G_{j}+BF_{j}+H_{j})e^{w_{j}z^{q}}.$ (3.6) From the Borel-Nevanlinna theorem, and from $A_{i}G_{j}\not\equiv 0$ for $j\in\\{1,2,\ldots,m\\}$ and $i\in\\{1,2,\ldots,k\\}$, it follows that there are only two possibilities: * (I) For some pairs $(j,i)$, where $j\in\\{1,2,\ldots,m\\}$ and $i\in\\{1,2,\ldots,k\\}$, there exists $\ell\in\\{0,1,\ldots,m-1\\}$ such that $w_{j}-\lambda_{i}=w_{\ell}.$ (3.7) * (II) For some pairs $(j,i)$, where $j\in\\{1,2,\ldots,m\\}$ and $i\in\\{1,2,\ldots,k\\}$, there exist $s\in\\{1,2,\ldots,m\\}\setminus\\{j\\}$ and $t\in\\{1,2,\ldots,k\\}\setminus\\{i\\}$ such that $w_{j}-\lambda_{i}=w_{s}-\lambda_{t}.$ (3.8) After these preparations we proceed to prove that the frequencies of $f$ are commensurable if and only if the frequencies of $A(z)$ are commensurable. By appealing to (3.3) and to a change of variable as in Example 1.3, we may suppose that $w_{1}=\lambda_{k}\in\mathbb{N}$. Thus we prove that $w_{j}\in\mathbb{N}$ for $j\in\\{1,\ldots,m\\}$ if and only if $\lambda_{i}\in\mathbb{N}$ for $i\in\\{1,\ldots,k\\}$. * (i) Suppose that $w_{j}\in\mathbb{N}$ for $j\in\\{1,\ldots,m\\}$. From (3.3), we see that $w_{m}-\lambda_{1}=\max_{j,i}\\{w_{j}-\lambda_{i}\\}$ and $w_{m}-\lambda_{1}>w_{j}-\lambda_{i}$ for any $j\neq m$ and $i\neq 1$. Hence, from (3.7) and (3.8), there exists $p<m$ such that $w_{m}-\lambda_{1}=w_{p}$, which implies that $\lambda_{1}\in\mathbb{N}$. In addition, from (3.3), we have $w_{m}-\lambda_{2}>w_{j}-\lambda_{i}$ for any $j\neq m$ and $i>2$. Thus, from (3.7) and (3.8), there are only two possibilities: (1) There exists $p<m$ such that $w_{m}-\lambda_{2}=w_{p}-\lambda_{1}$. (2) There exists $p<m$ such that $w_{m}-\lambda_{2}=w_{p}$. In both cases, it follows that $\lambda_{2}\in\mathbb{N}$. 
Repeating this argument $k$ times gives us $\lambda_{i}\in\mathbb{N}$ for $i\in\\{1,\ldots,k\\}$. * (ii) Suppose that $\lambda_{i}\in\mathbb{N}$ for $i\in\\{1,\ldots,k\\}$. From (3.3), we have $\lambda_{k}=w_{1}$, and consequently $w_{1}\in\mathbb{N}$. Moreover, from (3.3), we have $w_{2}-\lambda_{k}<w_{j}-\lambda_{i}$ for any $j>1$ and $i\neq k$. Thus, from (3.7) and (3.8), there are only two possibilities: There exists $p<k$ such that either $w_{2}-\lambda_{k}=w_{1}-\lambda_{p}$ or $w_{2}-\lambda_{k}=w_{1}$. In both cases, we have $w_{2}\in\mathbb{N}$. Repeating this argument $m$ times gives us $w_{j}\in\mathbb{N}$ for $j\in\\{1,\ldots,m\\}$. If the frequencies are commensurable for one of $f,A(z)$, then they are commensurable for both of $f,A(z)$ by the reasoning above. The remaining fact that $f$ and $A(z)$ are strongly dual exponential polynomials now follows by (3.3). $\Box$ The assumption $\rho(Af^{\prime})<\rho(f)$ in Theorem 1.6(b) seems to be the only known sufficient condition for the conclusion $q=1$. However, in the case of Frei’s result (1.1), we have $A(z)f^{\prime}(z)=e^{-z}\sum_{j=1}^{m}jC_{j}e^{jz}=\sum_{j=0}^{m-1}(j+1)C_{j+1}e^{jz},$ and so $\rho(Af^{\prime})=\rho(f)=1$. This shows that $q=1$ may happen even if $\rho(Af^{\prime})=\rho(f)$. Theorem 3.2 below shows that $f$ having only one large exponential term is also a sufficient condition for $q=1$. In contrast, if $A(z)$ has only one large exponential term, then $f$ can have multiple large exponential terms as in (1.1). ###### Theorem 3.2 Suppose that $f(z)=F_{0}(z)+F_{1}(z)e^{wz^{q}}$ is a solution of (1.3), where $A(z)$ is an exponential polynomial and $B(z)$ is an entire function satisfying $T(r,B)=o(T(r,A))$. Then $q=1$, and there are constants $c,b\in\mathbb{C}\setminus\\{0\\}$ and a non-trivial polynomial $P(z)$ such that $f(z)=c+be^{wz},\ A(z)=\frac{b}{c}P(z)-w+P(z)e^{-wz}\ \text{and}\ B(z)=-\frac{wb}{c}P(z).$ Proof.
We proceed similarly as in the proof of Theorem 3.1 until (3.6), which now reduces to the form $-\left(\sum_{j=1}^{k}A_{j}e^{-\lambda_{j}z^{q}}\right)G_{1}e^{wz^{q}}=cB,$ (3.9) where $F_{0}(z)\equiv c\in\mathbb{C}\setminus\\{0\\}$. Hence $k=1$, and consequently $A(z)$ reduces to the form $A(z)=A_{0}(z)+A_{1}(z)e^{-wz^{q}}.$ From (3.9) and (3.5), with $k=1=m$, we find that $-A_{1}G_{1}=cB\quad\textnormal{and}\quad-A_{0}G_{1}=BF_{1}+H_{1}.$ In other words, $c^{-1}A_{1}G_{1}F_{1}=A_{0}G_{1}+H_{1}=A_{0}G_{1}+G_{1}^{\prime}+qwz^{q-1}G_{1}.$ (3.10) Dividing both sides of (3.10) by $G_{1}$, we observe that at every zero of $G_{1}$ the right-hand side has a pole but the left-hand side does not. Thus $G_{1}$ has no zeros, and so we may write it in the form $G_{1}=e^{g}$, where $g(z)=a_{q-1}z^{q-1}+\cdots+a_{0}$ is a polynomial of degree $\leq q-1$. Since $G_{1}=F_{1}^{\prime}+qwz^{q-1}F_{1}=e^{g},$ we obtain $\left(F_{1}(z)e^{wz^{q}}\right)^{\prime}=e^{wz^{q}+g(z)}$, and consequently $F_{1}(z)e^{wz^{q}}=\int^{z}e^{w\zeta^{q}+a_{q-1}\zeta^{q-1}+\cdots+a_{0}}\,d\zeta.$ (3.11) Here the right-hand side is an exponential polynomial, which happens only if $q=1$. Since $q=1$, we see from (3.11) that $F_{1}(z)$ reduces to a non-zero constant, say $F_{1}(z)\equiv b$. Thus $f(z)=c+be^{wz}$, and we have $G_{1}(z)\equiv wb$ and $H_{1}(z)\equiv w^{2}b$. A substitution to (3.10) followed by a simplification gives $\frac{b}{c}A_{1}=A_{0}+w.$ There is no restriction for $A_{1}$ other than the fact that $A$ is an exponential polynomial. Thus we may suppose that $A_{1}$ is any non-trivial polynomial, say $A_{1}=P$. This gives us $A_{0}=\frac{b}{c}P-w$, and finally $B=-\frac{wb}{c}P$. $\Box$ Example 1.1 shows that the coefficient $B(z)$ in (1.3) can be a polynomial. Next, we prove that this is equivalent to $A_{0}(z)$ in (3.1) being a polynomial, and reveal another sufficient condition for the conclusion $q=1$. 
###### Proposition 3.3 Under the assumptions of Theorem 3.1, the term $A_{0}(z)$ of $A(z)$ in (3.1) is a polynomial if and only if $B(z)$ is a polynomial. Moreover, if the multipliers of $f$ and of $A(z)$ are constants, then $q=1$ and $B(z)$ is a constant function. Proof. From the proof of Theorem 3.1 we find that (3.5) holds, that is, $A_{0}G_{m}+BF_{m}+H_{m}=0,$ (3.12) where $\displaystyle G_{m}$ $\displaystyle=$ $\displaystyle F_{m}^{\prime}+w_{m}qz^{q-1}F_{m},$ $\displaystyle H_{m}$ $\displaystyle=$ $\displaystyle F_{m}^{\prime\prime}+2w_{m}qz^{q-1}F_{m}^{\prime}+\left(w_{m}q(q-1)z^{q-2}+w_{m}^{2}q^{2}z^{2q-2}\right)F_{m}.$ Thus $F_{m}$ solves the second order differential equation $F^{\prime\prime}_{m}+P(z)F^{\prime}_{m}+Q(z)F_{m}=0,$ (3.13) where $\displaystyle P(z)$ $\displaystyle=$ $\displaystyle 2w_{m}qz^{q-1}+A_{0},$ $\displaystyle Q(z)$ $\displaystyle=$ $\displaystyle w_{m}qz^{q-1}A_{0}+w_{m}q(q-1)z^{q-2}+w_{m}^{2}q^{2}z^{2(q-1)}+B.$ Suppose first that $A_{0}(z)$ is a polynomial. If $B(z)$ is transcendental, then it follows from (3.13) and [5, Corollary 1] that $\rho(F_{m})=\infty$, which is a contradiction. Hence $B(z)$ must be a polynomial. Conversely, suppose that $B(z)$ is a polynomial. Suppose on the contrary to the assertion that $A_{0}(z)$ is a transcendental exponential polynomial. Then there exists an open sector $S$ such that $A_{0}(z)$ blows up exponentially in $S$. Using [6, Corollary 1] and $\rho(F_{m})\leq q-1$ in (3.13), we obtain on almost every ray in $S$ that $|w_{m}qz^{q-1}||A_{0}(z)|\leq O\left(|z|^{\max\\{2q,\deg(B)\\}}\right)+O\left(|z|^{q-2+\varepsilon}|A_{0}(z)|\right).$ However, this is obviously a contradiction, and hence $A_{0}(z)$ is a polynomial. Finally, suppose that the multipliers of $f$ and of $A(z)$ are constants. From (3.4), we find that $B(z)=Cz^{q-1}$ for some constant $C\in\mathbb{C}\setminus\\{0\\}$. 
Since $F_{m}(z)$ is a non-zero constant function, it follows that the coefficient $Q(z)$ in (3.13) vanishes identically. But this is not possible because $A_{0}(z)$ is a constant function, unless $q=1$. $\Box$ Remark. (a) The equation (3.13) implies that every possible zero of $F_{m}$ is simple. (b) Assuming that $A_{0}(z)$ is a polynomial, we give an alternative proof for the fact that $B(z)$ is a polynomial. We already know from Theorem 3.1 that $B(z)$ is of order $\leq q-1$. Since the non-zero frequencies of $A(z)$ are all on one ray by duality, it follows that the plane divides into $2q$ sectors of opening $\pi/q$ such that in every other sector $A(z)$ either blows up exponentially or is asymptotic to the polynomial $A_{0}(z)$. In the latter case, if $A_{0}(z)\equiv 0$, then $A(z)$ decays to zero exponentially. Thus, from [5, Theorem 7], we deduce that $B(z)$ is a polynomial. Note, in particular, that the constant $\mu$ in [5, Theorem 7] satisfies $\mu=\pi/q$. Open problem 1. _Under the assumptions of Theorem 3.1, is it always true that $q=1$ and $B(z)$ is a polynomial?_ This problem is fragile in the sense that the desired conclusion is not valid if a minor modification in the assumptions of Theorem 3.1 is performed. For example, the differential equation $f^{\prime\prime}-\bigl{(}qwz^{q-1}+z^{-1}e^{-wz^{q}}\bigr{)}f^{\prime}-q(q-1)wz^{q-2}f=0$ possesses an exponential polynomial solution $f(z)=e^{wz^{q}}-1/(q-1)$ for any $q\geq 2$. Moreover, the function $f(z)=e^{z^{2}}+1$ satisfies the differential equations $\begin{split}f^{\prime\prime}+\left(\frac{e^{-z^{2}}-1}{2z}-2z\right)f^{\prime}-f&=0,\\\ f^{\prime\prime}-\frac{e^{-z^{2}}(z-1)+4z^{2}+z+1}{2z}f^{\prime}+(z-1)f&=0.\end{split}$ (3.14) The transcendental coefficients in (3.14) are entire exponential polynomials with rational multipliers because $z=0$ is a removable singularity for both. 
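Both fragile examples can be confirmed numerically. The Python sketch below (ours) checks the $q\geq 2$ family and the two equations in (3.14) at sample points away from $z=0$:

```python
import cmath

def res_fragile(q, w, z):
    """Residual of f'' - (q w z^{q-1} + z^{-1} e^{-w z^q}) f' - q(q-1) w z^{q-2} f
    for f(z) = e^{w z^q} - 1/(q-1), q >= 2."""
    E = cmath.exp(w * z**q)
    f = E - 1 / (q - 1)
    fp = q * w * z**(q - 1) * E
    fpp = (q * (q - 1) * w * z**(q - 2) + (q * w * z**(q - 1))**2) * E
    return fpp - (q * w * z**(q - 1) + cmath.exp(-w * z**q) / z) * fp \
               - q * (q - 1) * w * z**(q - 2) * f

def res_314(z):
    """Combined residual of the two equations in (3.14) for f(z) = e^{z^2} + 1."""
    E = cmath.exp(z * z)
    f, fp, fpp = E + 1, 2 * z * E, (2 + 4 * z * z) * E
    r1 = fpp + ((cmath.exp(-z * z) - 1) / (2 * z) - 2 * z) * fp - f
    r2 = fpp - ((cmath.exp(-z * z) * (z - 1) + 4 * z * z + z + 1) / (2 * z)) * fp \
             + (z - 1) * f
    return abs(r1) + abs(r2)

for z in (0.5 + 0.3j, -0.8 + 1.1j, 1.2 - 0.7j):
    assert res_314(z) < 1e-8
    for q in (2, 3, 4):
        assert abs(res_fragile(q, 0.7 - 0.2j, z)) < 1e-6
```
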
## 4 Duality for higher order functions Next we construct examples of differential equations of order $n\geq 2$ having an exponential polynomial solution $f$ of order $\rho(f)=n-1$ which is dual with one of the coefficients. ###### Example 4.1 If $H$ is an arbitrary entire function, then $f(z)=e^{z^{2}}+1$ solves $\begin{split}f^{\prime\prime\prime}+\left(1+e^{-z^{2}}\right)Hf^{\prime\prime}-(6+4z^{2})f^{\prime}-(2+4z^{2})Hf&=0,\\\ f^{\prime\prime\prime}-2zf^{\prime\prime}+(H-4+He^{-z^{2}})f^{\prime}-2zHf&=0.\end{split}$ (4.1) A particularly interesting case is when $H$ is either a polynomial or an exponential polynomial of order one. Thus either of the two possible coefficients can be dual with $f$. Examples of second order dual solutions for third order equations can be found in [18] but for polynomial coefficients only. We can use the relation $zf^{\prime\prime}(z)=(2z^{2}+1)f^{\prime}(z)$ to see that, in addition to (4.1), the function $f(z)=e^{z^{2}}+1$ satisfies the equations $\displaystyle f^{\prime\prime\prime}(z)-2zf^{\prime\prime}(z)-4f^{\prime}(z)$ $\displaystyle=$ $\displaystyle 0,$ $\displaystyle(1+e^{-z^{2}})f^{\prime}(z)-2zf(z)$ $\displaystyle=$ $\displaystyle 0,$ $\displaystyle(1+e^{-z^{2}})f^{\prime\prime}(z)-(2+4z^{2})f(z)$ $\displaystyle=$ $\displaystyle 0.$ ###### Example 4.2 If $H$ is an arbitrary entire function, then $f(z)=e^{z^{3}}+1$ solves $\displaystyle f^{(4)}+\left(1+e^{-z^{3}}\right)Hf^{\prime\prime\prime}-9z^{4}f^{\prime\prime}-30\left(2+3z^{3}\right)f^{\prime}-\left(6+54z^{3}+27z^{6}\right)Hf$ $\displaystyle=$ $\displaystyle 0,$ $\displaystyle f^{(4)}-3z^{2}f^{\prime\prime\prime}+\left(H-18z+He^{-z^{3}}\right)f^{\prime\prime}-18f^{\prime}-H\left(6z+9z^{4}\right)f$ $\displaystyle=$ $\displaystyle 0,$ $\displaystyle f^{(4)}-3z^{2}f^{\prime\prime\prime}-27zf^{\prime\prime}+\left(H+27z^{3}+He^{-z^{3}}\right)f^{\prime}-3z^{2}Hf$ $\displaystyle=$ $\displaystyle 0.$ A particularly interesting case is when $H$ is an exponential polynomial of order at
most two. Thus all three of the possible coefficients can be dual with $f$. Examples of third order dual solutions do not seem to have appeared before. As in the previous example, we can use the relations $zf^{\prime\prime}(z)=(3z^{3}+2)f^{\prime}(z)$ and $zf^{\prime\prime\prime}(z)=(3z^{3}+1)f^{\prime\prime}(z)+9z^{2}f^{\prime}(z)$ to see that $f(z)=e^{z^{3}}+1$ satisfies the equations $\displaystyle f^{(4)}(z)-3z^{2}f^{\prime\prime\prime}(z)-18zf^{\prime\prime}(z)-18f^{\prime}(z)$ $\displaystyle=$ $\displaystyle 0,$ $\displaystyle(1+e^{-z^{3}})f^{\prime}(z)-3z^{2}f(z)$ $\displaystyle=$ $\displaystyle 0,$ $\displaystyle(1+e^{-z^{3}})f^{\prime\prime}(z)-(6z+9z^{4})f(z)$ $\displaystyle=$ $\displaystyle 0,$ $\displaystyle(1+e^{-z^{3}})f^{\prime\prime\prime}(z)-(6+54z^{3}+27z^{6})f(z)$ $\displaystyle=$ $\displaystyle 0.$ In light of Open problem 1 and the examples just discussed, it is natural to pose our second open problem. Open problem 2. _If a solution and the dominant coefficient are dual exponential polynomials of order $q$, then is the differential equation in question of order at least $q+1$?_ For the fragility of this problem, recall the equations (3.14) satisfied by $f(z)=e^{z^{2}}+1$. Moreover, the function $f(z)=e^{z^{3}}+1$ satisfies the third order equation $f^{\prime\prime\prime}+\left(\frac{e^{-z^{3}}-1}{2z}\right)f^{\prime\prime}-3z(3z^{3}+5)f^{\prime}-\frac{3}{2}(3z^{3}+2)f=0$ with entire coefficients. As a first step toward a better understanding of Open problem 2, we summarize the fundamental ideas in constructing Examples 4.1 and 4.2.
###### Lemma 4.3 The function $f(z)=e^{z^{q}}+1$, $q\in\mathbb{N}$, possesses the following two properties: * (i) $\displaystyle(1+e^{-z^{q}})f^{(j+1)}(z)=\sum_{k=0}^{j}P_{j,k}(z)f^{(k)}(z),\quad j\in\mathbb{N}\cup\\{0\\}$, * (ii) $\displaystyle f^{(q+1)}(z)=\sum_{\ell=1}^{q}Q_{\ell}(z)f^{(\ell)}(z)$, where the $P_{j,k}(z)$ and $Q_{\ell}(z)$ are non-zero polynomials satisfying * (a) $\displaystyle\left\\{\begin{array}[]{l}P_{j+1,j+1}(z)=P_{j,j}(z)-qz^{q-1},\quad P_{0,0}(z)=qz^{q-1},\\\ P_{j+1,k}(z)=P_{j,k}^{\prime}(z)+qz^{q-1}P_{j,k}(z)+P_{j,k-1}(z),\quad P_{j,-1}(z)\equiv 0,\quad 1\leq k\leq j,\\\ P_{j+1,0}(z)=P_{j,0}^{\prime}(z)+qz^{q-1}P_{j,0}(z),\end{array}\right.$ * (b) $\displaystyle Q_{\ell}(z)=-\binom{q}{\ell-1}(e^{-z^{q}})^{(q-\ell+1)}e^{z^{q}}.$ Proof. First, let us prove (i) by induction on $j$. Of course, by taking their logarithmic derivatives, we have $(1+e^{-z^{q}})f^{\prime}(z)=qz^{q-1}f(z)$ immediately, that is, the case when $j=0$ follows with $P_{0,0}(z)=qz^{q-1}$. Assume (i) is true for each $j=0,1,\ldots,n$. 
Then $\displaystyle(1+e^{-z^{q}})f^{(n+2)}(z)$ $\displaystyle=$ $\displaystyle qz^{q-1}e^{-z^{q}}f^{(n+1)}(z)+\sum_{k=0}^{n}\bigl\{P_{n,k}^{\prime}(z)f^{(k)}(z)+P_{n,k}(z)f^{(k+1)}(z)\bigr\}$ $\displaystyle=$ $\displaystyle qz^{q-1}(1+e^{-z^{q}})f^{(n+1)}(z)+\bigl\{P_{n,n}(z)-qz^{q-1}\bigr\}f^{(n+1)}(z)+$ $\displaystyle+\sum_{k=1}^{n}\bigl\{P_{n,k}^{\prime}(z)+P_{n,k-1}(z)\bigr\}f^{(k)}(z)+P_{n,0}^{\prime}(z)f(z)$ $\displaystyle=$ $\displaystyle qz^{q-1}\sum_{k=0}^{n}P_{n,k}(z)f^{(k)}(z)+\bigl\{P_{n,n}(z)-qz^{q-1}\bigr\}f^{(n+1)}(z)+$ $\displaystyle+\sum_{k=1}^{n}\bigl\{P_{n,k}^{\prime}(z)+P_{n,k-1}(z)\bigr\}f^{(k)}(z)+P_{n,0}^{\prime}(z)f(z)$ $\displaystyle=$ $\displaystyle\bigl\{P_{n,n}(z)-qz^{q-1}\bigr\}f^{(n+1)}(z)+$ $\displaystyle+\sum_{k=1}^{n}\bigl\{P_{n,k}^{\prime}(z)+qz^{q-1}P_{n,k}(z)+P_{n,k-1}(z)\bigr\}f^{(k)}(z)+$ $\displaystyle+\bigl\{P_{n,0}^{\prime}(z)+qz^{q-1}P_{n,0}(z)\bigr\}f(z),$ which is precisely (i) for $j=n+1$. Second, let us calculate the $q$-th order derivative of both sides of $f^{\prime}(z)e^{-z^{q}}=qz^{q-1}$. Since $qz^{q-1}$ is a polynomial of degree $q-1$, the Leibniz rule gives $\sum_{\ell=0}^{q}\binom{q}{\ell}f^{(\ell+1)}(z)(e^{-z^{q}})^{(q-\ell)}\equiv 0.$ Denoting $Q_{\ell+1}(z)=-\binom{q}{\ell}(e^{-z^{q}})^{(q-\ell)}e^{z^{q}}$ for $0\leq\ell\leq q-1$, we have $f^{(q+1)}(z)=\sum_{\ell=0}^{q-1}Q_{\ell+1}(z)f^{(\ell+1)}(z)=\sum_{\ell=1}^{q}Q_{\ell}(z)f^{(\ell)}(z),$ as desired. $\Box$ ###### Example 4.4 We may apply the two identities in Lemma 4.3 to construct differential equations of arbitrary order.
Given any entire function $H$, we have the identity $f^{(q+1)}(z)-\sum_{\ell=1}^{q}Q_{\ell}(z)f^{(\ell)}(z)=H(z)\left((1+e^{-z^{q}})f^{(j)}(z)-\sum_{k=0}^{j-1}P_{j-1,k}(z)f^{(k)}(z)\right),$ that is, $f(z)=e^{z^{q}}+1$ solves $\displaystyle f^{(q+1)}(z)$ $\displaystyle-$ $\displaystyle\sum_{\ell=j+1}^{q}Q_{\ell}(z)f^{(\ell)}(z)-\left((1+e^{-z^{q}})H(z)+Q_{j}(z)\right)f^{(j)}(z)$ $\displaystyle+$ $\displaystyle\sum_{\ell=1}^{j-1}\bigl{(}P_{j-1,\ell}(z)H(z)-Q_{\ell}(z)\bigr{)}f^{(\ell)}(z)+P_{j-1,0}(z)H(z)f(z)=0,$ where $1\leq j\leq q$, the sum $\sum_{\ell=1}^{j-1}$ is empty if $j=1$ and the sum $\sum_{\ell=j+1}^{q}$ is empty if $j=q$. ## 5 Multiple duality The possibility that a solution $f$ would be dual to more than one coefficient has not been studied rigorously. In this case there would be at least two equally strong dominant coefficients, or, in the case of (1.3), both coefficients $A(z),B(z)$ would be equally strong. For example, $f(z)=e^{-z}$ solves $f^{\prime\prime}+e^{z}f^{\prime}+(e^{z}-1)f=0$ and is dual to both coefficients. Obviously the coefficients are not dual to each other. More examples can be produced from Example 1.1. Note that $f(z)=e^{z}$ solves (1.3) if $A(z)=-B(z)-1$. Hence $f$ is not necessarily dual with either of $A(z),B(z)$. If $H$ is any entire function, then $f(z)=e^{z^{q}}$ solves $f^{\prime\prime}+\left(H(z)-qz^{q-1}\right)f^{\prime}-\left(q(q-1)z^{q-2}+qz^{q-1}H(z)\right)f=0.$ This example is from [5]. Note that $f(z)=e^{z^{q}}$ satisfies both $f^{\prime}(z)-qz^{q-1}f(z)=0$ and $f^{\prime\prime}(z)-qz^{q-1}f^{\prime}(z)-q(q-1)z^{q-2}f(z)=0$. Recall [9, Theorem 2.1], according to which there cannot be even one ray on which $B(z)$ would be stronger than $A(z)$ in the sense of the Phragmén- Lindelöf indicator, for otherwise all solutions of (1.3) are of infinite order. This happens, for example, when $A(z)$ and $B(z)$ are dual to each other. 
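Each of the identities just discussed is easy to confirm symbolically; the sketch below (assuming the sympy library, with $q=3$ and $H(z)=e^{z}$ as illustrative choices that are not prescribed by the text) checks the equation solved by $e^{-z}$ and the relations satisfied by $e^{z^{q}}$:

```python
import sympy as sp

z = sp.symbols('z')

# f(z) = exp(-z) is dual to both coefficients of f'' + e^z f' + (e^z - 1) f = 0
f1 = sp.exp(-z)
res1 = sp.diff(f1, z, 2) + sp.exp(z)*sp.diff(f1, z) + (sp.exp(z) - 1)*f1
assert sp.simplify(res1) == 0

# f(z) = exp(z^q) solves the second-order equation built from any entire H;
# q = 3 and H(z) = exp(z) are sample choices
q = 3
H = sp.exp(z)
f2 = sp.exp(z**q)
res2 = (sp.diff(f2, z, 2) + (H - q*z**(q - 1))*sp.diff(f2, z)
        - (q*(q - 1)*z**(q - 2) + q*z**(q - 1)*H)*f2)
assert sp.simplify(res2) == 0

# exp(z^q) simultaneously satisfies both lower-order relations quoted above
assert sp.simplify(sp.diff(f2, z) - q*z**(q - 1)*f2) == 0
assert sp.simplify(sp.diff(f2, z, 2) - q*z**(q - 1)*sp.diff(f2, z)
                   - q*(q - 1)*z**(q - 2)*f2) == 0
```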
###### Example 5.1 One may observe the necessity of the assumption on the duality of $f$ and $A(z)$, as well as that on the dominance of $A(z)$ over $B(z)$, by the following example: The function $f(z)=\bigl(e^{z}+e^{-z}\bigr)e^{z^{q}}$, $q\in\mathbb{N}$, satisfies $\displaystyle f^{\prime\prime}$ $\displaystyle+$ $\displaystyle\bigl\{H(z)e^{z}+H(z)e^{-z}-2qz^{q-1}\bigr\}f^{\prime}$ $\displaystyle-$ $\displaystyle\bigl\{(qz^{q-1}+1)H(z)e^{z}+(qz^{q-1}-1)H(z)e^{-z}-q^{2}z^{2(q-1)}+q(q-1)z^{q-2}+1\bigr\}f=0$ for any entire function $H$. When $q=1$, this becomes $f^{\prime\prime}+\bigl\{H(z)e^{z}+H(z)e^{-z}-2\bigr\}f^{\prime}-2H(z)e^{z}f=0$ with $f(z)=e^{2z}+1$. Thus we may use it in order to observe the duality of $A(z)$ and $B(z)$ for several choices of $H$, such as $H(z)=e^{nz}$ for $n\in\mathbb{Z}$ or $H(z)=e^{iz}$. Here we note that $f(z)=F(z)e^{z^{q}}$ satisfies $\dfrac{f^{\prime}}{f}=\dfrac{F^{\prime}}{F}+qz^{q-1}$ and $\frac{f^{\prime\prime}}{f}=\frac{F^{\prime\prime}}{F}+2qz^{q-1}\frac{f^{\prime}}{f}+\bigl(q(q-1)z^{q-2}-q^{2}z^{2(q-1)}\bigr)$, so that the freedom in choosing the function $F$ is rather limited. For example, taking an Airy function as $F$, we cannot obtain our desired equation $f^{\prime\prime}+A(z)f^{\prime}+B(z)f=0$ with exponential polynomial coefficients $A(z)$ and $B(z)$. Acknowledgements. Ishizaki was supported by JSPS KAKENHI Grant Number 20K03658. Wen was supported by the National Natural Science Foundation of China (No. 11971288 and No. 11771090) and Shantou University SRFT (NTF18029). ## References * [1] Amemiya I. and M. Ozawa, _Non-existence of finite order solutions of $w^{\prime\prime}+e^{-z}w^{\prime}+Q(z)w=0$_. Hokkaido Math. J. 10 (1981), Special Issue, 1–17. * [2] Frei M., _Über die subnormalen Lösungen der Differentialgleichung $w^{\prime\prime}+e^{-z}w^{\prime}+\textnormal{konst}\cdot w=0$_. Comment. Math. Helv. 36 (1961), 1–8. (German) * [3] Gross F., _Factorization of Meromorphic Functions_.
Mathematics Research Center, Naval Research Laboratory, Washington, D. C., 1972. * [4] Gundersen G. G., _On the question of whether $f^{\prime\prime}+e^{-z}f^{\prime}+B(z)f=0$ can admit a solution $f\not\equiv 0$ of finite order_. Proc. Roy. Soc. Edinburgh Sect. A 102 (1986), no. 1–2, 9–17. * [5] Gundersen G. G., _Finite order solutions of second order linear differential equations_. Trans. Amer. Math. Soc. 305 (1988), no. 1, 415–429. * [6] Gundersen G. G., _Estimates for the logarithmic derivative of a meromorphic function, plus similar estimates_. J. London Math. Soc. (2) 37 (1988), no. 1, 88–104. * [7] Gundersen G. G., E. Steinbart and S. Wang, _The possible orders of solutions of linear differential equations with polynomial coefficients_. Trans. Amer. Math. Soc. 350 (1998), no. 3, 1225–1247. * [8] Heittokangas J., K. Ishizaki, K. Tohge and Z.-T. Wen, _Zero distribution and division results for exponential polynomials_. Israel J. Math. 227 (2018), no. 1, 397–421. * [9] Heittokangas J., I. Laine, K. Tohge and Z.-T. Wen, _Completely regular growth solutions of second order complex linear differential equations_. Ann. Acad. Sci. Fenn. Math. 40 (2015), no. 2, 985–1003. * [10] Heittokangas J. and Z.-T. Wen, _Generalization of Pólya’s zero distribution theory for exponential polynomials, and sharp results for asymptotic growth_. Comput. Methods Funct. Theory, online publ., https://doi.org/10.1007/s40315-020-00336-7, 26 pp. * [11] Langer R. E., _On the zeros of exponential sums and integrals_. Bull. Amer. Math. Soc. 37 (1931), no. 4, 213–239. * [12] Langley J. K., _On complex oscillation and a problem of Ozawa_. Kodai Math. J. 9 (1986), no. 3, 430–439. * [13] Moreno C. J., _The zeros of exponential polynomials_. I. Compositio Math. 26 (1973), 69–78. * [14] Nevanlinna R., _Analytic Functions_. Translated from the second German edition by Phillip Emig. Die Grundlehren der mathematischen Wissenschaften, Band 162 Springer-Verlag, New York-Berlin, 1970. 
* [15] Ozawa M., _On a solution of $w^{\prime\prime}+e^{-z}w^{\prime}+(az+b)w=0$_. Kodai Math. J. 3 (1980), no. 2, 295–309. * [16] Steinmetz N., _Zur Wertverteilung von Exponentialpolynomen_. Manuscripta Math. 26 (1978/79), no. 1–2, 155–167. (German) * [17] Steinmetz N., _Zur Wertverteilung der Quotienten von Exponentialpolynomen_. Arch. Math. (Basel) 35 (1980), no. 5, 461–470. (German) * [18] Wen Z. T., G. G. Gundersen and J. Heittokangas, _Dual exponential polynomials and linear differential equations_. J. Differential Equations 264 (2018), no. 1, 98–114. _J. Heittokangas_ University of Eastern Finland, Department of Physics and Mathematics, P.O. Box 111, 80101 Joensuu, Finland <EMAIL_ADDRESS> _K. Ishizaki_ The Open University of Japan, Faculty of Liberal Arts, Mihama-ku, Chiba, Japan <EMAIL_ADDRESS> _K. Tohge_ Kanazawa University, College of Science and Engineering, Kakuma-machi, Kanazawa 920-1192, Japan <EMAIL_ADDRESS> _Z.-T. Wen_ Shantou University, Department of Mathematics, Daxue Road No. 243, Shantou 515063, China <EMAIL_ADDRESS>
# A Comprehensive Survey on 6G Networks: Applications, Core Services, Enabling Technologies, and Future Challenges

Amin Shahraki*, Mahmoud Abbasi, Md. Jalil Piran*, and Amir Taherkordi

Amin Shahraki (Corresponding Author) is with the Department of Informatics, University of Oslo, Oslo, Norway (e-mail: am.shahraki@ieee.org). Mahmoud Abbasi is with Islamic Azad University, Mashhad, Iran (e-mail: mahmoud.abbasi@ieee.org). Md. Jalil Piran (Corresponding Author) is with the Department of Computer Science and Engineering, Sejong University, Seoul, 05006, South Korea (e-mail: piran@sejong.ac.kr). Amir Taherkordi is with the University of Oslo, Oslo, Norway (e-mail: amirhost@ifi.uio.no). Manuscript received xxx xx, 2021; revised xxx xx, 2021.

###### Abstract

Cellular Internet of Things (IoT) is considered the de facto paradigm for improving communication and computation systems. Cellular IoT connects a massive number of physical and virtual objects to the Internet using cellular networks. The latest generation of cellular networks, i.e., the fifth generation (5G), uses evolutionary and revolutionary technologies to notably improve the performance of wireless networks. However, given the envisioned new use cases, e.g., holographic communication, and the ever-increasing deployment of massive smart physical end-devices in IoT, the volume of network traffic has risen considerably, and the current generation of mobile networks cannot wholly meet the ever-increasing demands. Hence, it is envisioned that the next generation, sixth-generation (6G) networks, will need to play a critical role in alleviating these challenges in IoT by providing new communication services, greater network capacity, and ultra-reliable low-latency communications (URLLC). In this paper, first, the need for 6G networks is discussed. Then, the potential 6G requirements and trends, as well as the latest research activities related to 6G, are introduced, e.g., the Tactile Internet and Terahertz (THz) communication.
Furthermore, the key performance indicators, applications, new services, and the potential key enabling technologies for 6G networks are presented. Finally, several potential unresolved challenges for future 6G networks are outlined.

###### Index Terms:

Internet of Things (IoT), 6G, 5G, URLLC, THz, Tactile Internet, cellular IoT.

**footnotetext: Equal contribution in case of corresponding author [Note: This work has been submitted to the IEEE Transactions on Network and Service Management journal for possible publication. Copyright may be transferred without notice, after which this version may no longer be accessible.]

## I Introduction

IoT is the most prevalent framework in the realm of Information and Communication Technology (ICT). Cisco predicted that by 2023 there will be 14.7 billion devices connected to IoT [1], with various applications generating a huge volume of data [2]. Moreover, the popularity of multimedia services over wireless networks is exploding [1]. Therefore, supporting such a volume of data demands new generations of cellular networks, given the limitations of the previous generations. Since 2019, the fifth generation (5G) of cellular networks has been commercially available in several countries [3]. 5G uses revolutionary technologies, e.g., higher frequencies, network function virtualization (NFV), software-defined networking (SDN), and network slicing, and evolutionary technologies, e.g., massive multiple-input and multiple-output (MIMO), to make a significant improvement in data rates, energy efficiency, reliability, and connection density. Meanwhile, 5G networks are used in a wide range of applications such as IoT, smart cities, Industry 4.0 [4], e-health, wearables, smart utilities [5], etc. Generally, the main 5G service classes include: 1) enhanced mobile broadband (eMBB), 2) ultra-reliable low-latency communications (URLLC), and 3) massive machine-type communication (mMTC).
eMBB is capable of providing high data rates of 1 Gb/s for mobile users [6], while URLLC focuses on the reliability (99.999%) and latency (milliseconds) of the communication, especially for applications such as industrial IoT (IIoT) and vehicle-to-everything (V2X) [6]. mMTC emphasizes the number of connected devices in IoT deployments (up to 1 million connections per km²) [7]. However, taking the present-day and emerging advancements of wireless communications into account, 5G may not meet future demands for the following reasons:

* • Given the ever-increasing growth of the deployment of IoT devices, there is a particular need to further improve the connection density (10 million connections per km²) and coverage of 5G-enabled IoT networks [8, 9].
* • New emerging IoT services such as extended reality (XR), telemedicine systems, mind-machine interfaces (MMI), and flying cars will challenge the original 5G service classes. To effectively provide IoT services such as XR and telemedicine systems for mobile devices, future mobile networks must simultaneously provide high transmission rates, high reliability, and low latency, which significantly exceeds the original goals of the 5G networks [10, 11, 12].
* • It is expected that future mobile networks will be ultra-large-scale, highly dynamic, and incredibly complex systems, e.g., due to the massive and heterogeneous devices in the IoT. However, the architecture of the current wireless networks (e.g., 4G and 5G) is often fixed, and the optimization tasks are defined to cope with specific, identified challenges and services. Hence, the prevailing manual optimization and configuration tasks are no longer appropriate for future networks [13, 6, 14, 15].
To deal with the challenges mentioned above, 6G networks are expected to provide new service classes, use new spectrum for wireless communications, offer enormous network capacity and ultra-low-latency communications, and adopt novel energy-efficient transmission methods [16]. We aim to present a comprehensive survey on 6G cellular networks by considering a wide range of 6G aspects. Our contributions can be summarized, but are not limited to, as follows.

* • Discussing the need for a new generation of cellular networks beyond 5G.
* • Providing a detailed review of the existing works on 6G.
* • Introducing the 6G key performance indicators (KPIs) and new use cases, e.g., holographic communication and industrial automation.
* • Studying various potential enabling technologies that will make important contributions to 6G.
* • Highlighting the potential 6G requirements, challenges, and trends, e.g., green 6G and 3D coverage.
* • Outlining several future 6G research directions.

The rest of this paper is organized as follows. Section II reviews the related survey and magazine articles on 6G. In Section III, we present the requirements and trends of 6G. The research activities and motivation are discussed in Section IV. Moreover, in this section, we provide a comprehensive list of 6G KPIs and use cases, as well as new service classes. In Sections V and VI, we introduce the important evolutionary and revolutionary technologies, respectively, that contribute to the future 6G networks. We discuss several 6G challenges in Section VII. Finally, Section VIII concludes the paper.
Figure 1: The organizational structure of the survey.

## Glossary

3CLS: Control, Localization, and Sensing
3D: 3-Dimensional
3GPP: 3rd Generation Partnership Project
AES: Advanced Encryption Standard
AI: artificial intelligence
AID: autonomous intelligent driving
AMP: approximate message passing
APs: access points
AR: augmented reality
AU: aerial user
CAD: cooperative automated driving
CRAN: cloud radio access network
CSI: channel state information
D2D: device-to-device
DL: deep learning
E2E: end-to-end
ECC: elliptic curve encryption
EI: edge intelligence
EM: electromagnetic
eMBB: enhanced mobile broadband
ETSI: European Telecommunications Standards Institute
GPS: global positioning system
HMIMOS: holographic MIMO surface
ICT: Information and Communication Technology
IDS: intrusion detection system
IIoT: industrial Internet of Things
IoT: Internet of Things
IR: infrared
ITU: International Telecommunication Union
KPI: key performance indicator
LIS: large intelligent surface
LoRaWAN: Long Range Wide Area Network
LoS: line-of-sight
MEC: mobile edge computing
MIMO: multiple-input and multiple-output
ML: machine learning
MMI: mind-machine interface
mMTC: massive machine-type communication
mmWave: millimetre wave
MTC: machine-type communication
NB-IoT: narrowband IoT
NFV: network function virtualization
NMO: network management and orchestration
NOMA: non-orthogonal multiple access
NTC: network traffic classification
NTMA: network traffic monitoring and analysis
NTP: network traffic prediction
OFDM: orthogonal frequency-division multiplexing
OMA: orthogonal multiple access
OWC: optical wireless communication
PLMN: public land mobile network
QC: quantum computing
QKD: quantum key distribution
QOC: quantum optical communication
QoE: quality of experience
QoS: quality of service
RA: random access
RF: radio frequency
RIS: reconfigurable intelligent surface
RL: reinforcement learning
SAL: single-anchor localization
SDN: software-defined networking
SIC: successive interference cancellation
SR: super resolution
SVM: support vector machine
SWAP: size/weight and power
THz: terahertz
UAV: unmanned aerial vehicle
UDHN: ultra-dense heterogeneous network
UE: user equipment
UHDD: ultra-high data density
uHSLLC: ultra-high-speed-with-low-latency communication
uMUB: ubiquitous mobile ultra-broadband
URLLC: ultra-reliable low-latency communications
UV: ultraviolet
V2X: vehicle to everything
VLC: visible light communication
VMR: virtual meeting room
VR: virtual reality
WIPT: wireless information and power transfer
WPT: wireless power transfer
XR: extended reality

## II Related Survey Articles

Several papers provide a vision of 6G mobile networks, including enabling technologies, potential applications, requirements, and potential challenges. In the sequel, we present a brief overview of the existing survey papers that discuss different aspects of the 6G networks, and compare them with our work. The authors in [17] provided a vision of the 6G network and its requirements (e.g., battery lifetime) from a user viewpoint. Meanwhile, the authors envisioned some key enabling technologies for 6G, such as energy harvesting techniques, AI, and ML. In [18], the authors investigated driver factors for 6G (i.e., holographic, massive-connectivity, and time-sensitive/time-engineered applications), the 6G network design principles, and propagation characteristics of the 6G networks. The authors in [19] introduced the state-of-the-art of LISs, and then 6G. The work in [20] discussed the evolution of mobile networks to the 6G network. More specifically, the authors focused on expected architectural changes, e.g., 3D architecture and pervasive AI in the 6G networks. They highlighted that these changes are essential for cellular networks to satisfy the demands of future use cases. In [21], the authors painted a comprehensive picture of 6G, foreseen applications, emerging trends, enabling technologies, and open research challenges.
The authors in [5] presented a vision of 6G for wireless communications and discussed the applications and critical abilities of 6G. They also refer to AI as a key enabling technology in the 6G era for achieving autonomous networks and producing innovative services and applications. As AI-based mobile applications become increasingly popular, the authors in [22] discussed 6G networks from an AI perspective. In this paper, the authors pointed out that the 6G networks will be expected to adopt ubiquitous AI solutions from the network core to the edge devices. In [23], the authors highlighted the need for a new wireless communication network from technological and societal perspectives. Then, they listed several new use-case scenarios that cannot be efficiently supported by the current 5G networks, such as holographic communications, smart environments, and high-precision industrial manufacturing. A similar work towards envisioning 6G wireless communications has been performed in [16]. They described the potential 6G requirements and provided a sketch of the latest technological achievements evolving toward 6G networks. Furthermore, the authors presented several technical challenges related to 6G and potential solutions. Table I: Comparison of Related Survey Articles.
Research | Year | Enabling Technologies
---|---|---
[17] | 2018 | Optical communication, radio charging
[20] | 2019 | Terahertz, visible light communication (VLC), energy harvesting, molecular communication, blockchain, quantum computing (QC)-assisted, intelligent reflecting surface (IRS)
[21] | 2019 | Terahertz, VLC, edge AI, non-terrestrial technologies, IRS, energy harvesting
[5] | 2019 | Blockchain, terahertz, LISs, molecular communication, QC-assisted, VLC and laser, OAM multiplexing, holographic beamforming
[22] | 2019 | AI, big data, AI-powered closed-loop optimization, intelligent communication
[23] | 2019 | Edge AI, self-optimization, 3D coverage, terahertz and VLC, distributed security
[16] | 2019 | Terahertz and VLC, mmWave band, non-terrestrial technologies, multi-mode ultra-massive MIMO, OAM multiplexing, multi-domain index modulation, AI and big data, heterogeneous networks
[24] | 2020 | Pervasive AI, radar-enabled communications, cell-free network, metamaterials, VLC and WPT, energy harvesting, OAM multiplexing
[25] | 2020 | AI-empowered air interface, AI-empowered optimization, sub-networks, hyper-specialized slicing, new cognitive spectrum
[26] | 2020 | Terahertz and VLC, synthetic materials, very large scale antenna, blockchain, AI
[27] | 2020 | AI/DL and distributed processing, space-air-ground-sea integrated (SAGSI), non-terrestrial technologies, super massive MIMO, advanced multiple access techniques, rate splitting multiple access (RSMA), WPT and energy harvesting
[28] | 2020 | AI, terahertz, operational/environmental intelligence
[18] | 2021 | New frequency bands, new physical layer techniques, multiple antenna techniques, intelligent surfaces, multiple-access techniques, AI
[29] | 2021 | AI, blockchain, digital twin, IE, communication-computing-control convergence
[19] | 2021 | LIS technology
Our survey | 2021 | Terahertz, AI, non-terrestrial technologies, optical wireless technology, 3D network architecture, energy harvesting, quantum communications, LISs, edge intelligence

One of the latest studies on 6G networks has been conducted in [24]. The authors provided their vision of 6G networks and discussed 6G requirements. Moreover, they speculated on 6G-era applications and shed light on the main expected challenges. In [25], the authors presented a list of 6G requirements derived from the potential future use cases (e.g., AR and VR for industry, and biosensors). The authors in [26] provided a systematic overview of 6G networks. Their overview covers enabling techniques, potential use-case scenarios, challenges, as well as prospects and development of 6G. One of the few articles that thoroughly investigates the 6G network's core services is [27]. The authors in [28] focused on the future mobile network, with the aim of giving a vision for 6G that would guide researchers in the beyond-5G era [30]. The authors refer to the critical fact that 6G will still be a human-centric network. Given this fact, tight security and privacy will be essential requirements of 6G networks. Last but not least, in [29], the authors conducted a survey covering potential use cases, core services, vision, requirements, and enabling technologies of the 6G mobile systems. To the best of our knowledge, most of the existing survey papers on 6G do not fully cover all aspects of the 6G networks. We have listed the published survey papers in Table I.
The table summarizes the existing survey papers that focus on different aspects of the 6G networks, such as requirements, trends, vision, service classes, architecture, applications, challenges, and enabling technologies, and indicates which of these topics each paper and our work cover. Compared to the existing literature, the objective of this paper is to give a comprehensive view of 6G networks in various aspects. To this end, we intend to respond to the following fundamental questions:

1. Is there any need for a new mobile network generation? If yes, why?
2. What are the fundamental requirements and trends relevant to the 6G mobile networks?
3. What are the most recent research activities in the 6G network domain?
4. What are the critical 6G KPIs and potential applications?
5. What are the new service classes in the 6G era, and why will these services be needed?
6. Which are the most critical enabling technologies for 6G that need further study?
7. Which are the main expected future challenges and open issues in the 6G era that call for significant efforts to resolve?

We use seven factors for the evaluation of the existing works on 6G networks, including requirements, trends, vision, service classes, architecture, applications, and challenges. To the best of our knowledge, the existing papers in the field of 6G networks only partially considered these factors (see Table I). This work moves beyond the previously mentioned papers and mainly focuses on the fundamental aspects of 6G networks (e.g., requirements, trends, and KPIs). Unlike the existing works on 6G, which do not fully cover all essential aspects of 6G (i.e., requirements, trends, visions, new service classes, architectures, applications, challenges, and enabling technologies) (see Figure 2), we try to cover all these aspects to an acceptable extent.
Figure 2: 6G vision: Enabling technologies, requirements, and trends.

## III 6G Requirements And Trends

To tackle the challenges that current cellular networks face, e.g., quality of service (QoS) provisioning in terms of data rate, latency, and quality of experience (QoE), operators must consider new strategies such as operation in shared spectrum bands, inter-operator spectrum sharing, heterogeneous networks, leasing network slices on demand, etc. As announced by the corresponding authorities, 6G will need to meet more stringent requirements compared with 5G, as shown in Figure 3. In particular, key 6G requirements and trends are introduced as follows [31]:

* • Broad frequency bands: According to the requirements of the envisioned use cases of 6G, it is evident that the bands allocated to NR, e.g., sub-6 GHz and mmWave bands, will not be able to support the required QoS and QoE. Therefore, it is predicted that future networks will require higher frequency spectrum bands, such as 73 GHz, 140 GHz, 1 THz, and 3 THz.
* • Opportunistic data rate: To support emerging applications such as immersive multimedia, a very high and opportunistic peak data rate is required [32].
* • Opportunistic latency: 6G requires the latency to approach zero and the end-to-end delay to be less than 1 ms. For instance, XR services require latency close to zero in order to improve the QoS [33]. Furthermore, telepresence requires latency in the sub-millisecond range [31].
* • mMTC: A huge number of connected devices will need to be supported by 6G. The current human-centric solutions will not be efficient due to the network complexity and the sheer number of connected devices. Therefore, a new, machine-centric trend will be necessary to support such a huge number of devices. However, there will be some critical challenges in this context, including scalability, efficient connectivity, coverage improvement, as well as QoS and QoE provisioning.
Moreover, along with high-data-rate and low-latency communications, 6G use cases (e.g., mission-critical applications) also demand fast connectivity and high reliability and availability (i.e., 99.999%). Towards this end, the 6G networks must comply with strict requirements on availability, very short latency, and reliability, known as super MTC (sMTC). sMTC will play a critical role in real-time control processes in cyber-physical systems, vehicular communications, and Industry 4.0 operations.
* • Self-X network: In comparison with the previous generations, 6G networks must be more flexible and robust. Such flexible and robust networks are beyond human ability to control manually. Therefore, to manage such networks, sophisticated machine learning (ML) techniques are of vital importance. ML techniques are used to support network autonomy as well as to capture insights into, and comprehension of, the surrounding environment in which the networks operate. Hence, with the help of ML algorithms, the 6G network will be self-learning, self-reconfiguring, self-optimizing, self-healing, self-organizing, self-aggregating, and self-protecting.
* • Super-precision operation: The traditional statistical multiplexing techniques are not adequate to support future high-precision services and applications, such as tele-surgery and intelligent transportation systems. Such services and applications require very high-level and guaranteed precision, e.g., absolute delivery time. To this end, a set of new functional components is necessary, such as user-network interfaces, reservation signaling, a new forwarding paradigm, and intrinsic and self-monitoring operation, administration, and management for network configuration.
* • Super-precision positioning: The global positioning system (GPS) uses signal travel time to determine a position. However, the precision of such services is not adequate because of the errors they struggle with, which can be on the order of meters [34].
The future services require very high precision in positioning, even sub-millimeter accuracy. Tele-surgery and the tactile Internet are among those services.
* • Scalability: IoT is going to connect billions of devices, ranging from high-end devices to sensors, actuators, smartphones, tablets, wearables, home appliances, vehicles, and many more, to the Internet [35]. A huge amount of data is generated by such connected devices. In order to extract the relevant hidden knowledge, ML is required. ML-enabled devices can analyze the data and extract the hidden knowledge, and hence mitigate the issue of raw-data transmission, which improves network resource utilization.
* • Super energy efficiency: 6G devices are supposed to operate in higher frequency bands and therefore need much more energy compared to 5G devices. As an example, energy harvesting is under investigation to overcome the issue of energy efficiency in 6G.
* • Connectivity in 3D coverage: 6G users will be able to experience beyond-2D services in 6G applications such as 3D holographic displays [36]. To realize such services, terrestrial and aerial devices will be employed.
* • Integration with satellites: Satellite communication technologies will be used in 6G to provide global coverage. Telecommunication satellites, earth-imaging satellites, and navigation satellites will be integrated into 6G in order to provide localization services, broadcasting, and Internet connectivity.
* • SDN: Network management in 6G needs dynamic and programmatically efficient network configuration. NFV enables the consolidation of network instruments onto servers located at data centers, distributed network devices, or even end-user premises. Moreover, network slicing offers a cognitive and dynamic network framework on demand, which can support several virtual networks on top of a shared physical infrastructure.

Figure 3: The requirements of 5G and 6G.
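The broad-frequency-band and data-rate requirements above can be related through the Shannon capacity $C = B \log_2(1+\mathrm{SNR})$. The sketch below is a rough, illustrative calculation; the 10 dB SNR is an assumption for the sake of the example, not a 6G specification:

```python
import math

def required_bandwidth_hz(target_rate_bps, snr_db):
    """Bandwidth needed to reach target_rate_bps at the Shannon limit."""
    snr_linear = 10 ** (snr_db / 10)
    return target_rate_bps / math.log2(1 + snr_linear)

# A 1 Tb/s peak rate at an assumed 10 dB SNR needs roughly 289 GHz of
# spectrum, far beyond sub-6 GHz or mmWave allocations, hence the push
# toward THz bands.
bw = required_bandwidth_hz(1e12, 10.0)
print(f"{bw / 1e9:.0f} GHz")  # prints "289 GHz"
```

In practice massive MIMO and spatial multiplexing reduce the per-link bandwidth demand, but the order of magnitude motivates the 140 GHz to 3 THz candidate bands quoted above.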
## IV 6G Research Activities and Motivation
The 6G mobile network is envisioned to simultaneously meet future applications' stringent requirements, such as ultra-high reliability, efficiency, capacity, and low latency. Motivated by these foreseen advantages, several research institutes and countries have started research projects on 6G network developments. For example, the International Telecommunication Union (ITU) organized a research group for the network 2030, namely FG NET-2030 [37]. The central aim of FG NET-2030 is to recognize the suitable set of system technologies to meet future applications' requirements, such as ultra-massive connectivity and various types of resource requirements. In Finland, the University of Oulu collaborates with NOKIA on the 6G network, _6Genesis_ [38]. The project started at the beginning of 2018 to conceptualize the 6G enabling technologies. In 2017, the European Union (EU) developed a three-year research plan aimed at identifying the essential system technologies of the 6G network [39]. One of the funded projects is TERRANOVA, which aims to study and confirm the usefulness of ultra-high-bandwidth wireless links operating in the THz frequency band. The United States, as one of the primary countries in 6G-related research, decided to use the terahertz frequency band for 6G. Towards this end, the Wireless Center at New York University (NYU WIRELESS) is designing THz communication channels with data rates of up to 100 Gb/s [40]. China launched its research on 6G in March 2018 to prepare for the proliferation of IoT applications in the future [5]. LG Electronics signed an agreement with the Korea Advanced Institute of Science and Technology to develop 6G wireless communication systems in South Korea [11]. The project's main focus is on studying THz communications and wireless solutions to achieve a 1 Tb/s data transmission speed. Samsung started studying the 6G networks in June 2019 and released its 6G vision in June 2020 [41].
This vision discussed different aspects of 6G, including enabling technologies, new core services, and requirements. The 6G Wireless Summit is another research activity towards enabling the next generation of cellular communications; the first summit was hosted in Lapland, Finland, in 2019 [42]. Industry companies such as Ericsson, Huawei, ZTE, and NOKIA are the summit's principal patrons. The summit's main objective is to identify the critical 6G challenges, potential use cases in the 6G era, possible technical requirements, and candidate enabling technologies. In the following section, we discuss the main 6G key performance indicators (KPIs) and foreseen applications.
### IV-A The Key 6G KPIs and Use-cases
This section discusses the main KPIs and characteristics of 6G applications that are envisioned to be widely implemented in the future. A schematic classification of the envisioned services and use-cases for 6G is represented in Fig. 6.
* • Holographic communication: Holographic telepresence will be one of the most critical applications of 6G for both professional and social communication [43]. It will enable users to enrich their traditional audiovisual communication with the sense of touch while they are in different geographical locations. Holographic communication is a highly data-intensive application, and 5G is not expected to handle many simultaneous holographic communications with absolute reliability. This is mainly because holographic communication imposes strict requirements such as terabit data rates (up to 4 Tb/s), ultra-low latency (sub-milliseconds), and reliable communications.
* • Industrial automation: It is foreseen that the Industry 4.0 transformation, i.e., the digital transformation of traditional manufacturing and industrial processes via cyber-physical and IoT systems, will be completed in the 6G era.
The central goal of Industry 4.0 will be to decrease the demand for direct human intervention in manufacturing practices through automatic control systems and communication networks. Toward this end, 6G has to meet tight KPIs and requirements, such as a significant level of reliability (i.e., above $1-10^{-9}$), very low latency (i.e., below 1 ms), and multiple connected links [44]. Also, some industrial control operations (e.g., industrial augmented reality (AR) and virtual reality (VR)) call for establishing real-time communications with a very low delay jitter of $1\,\mu s$ and Gb/s peak data rates. Moreover, it is envisioned that the 6G network can provide exceptional support for connected robotics and autonomous systems, such as unmanned aerial vehicle (UAV) delivery services, swarms of self-governing drones, the air-ground integrated vehicular network (AGVN) [45], and autonomous cars. 6G will enable autonomous vehicles to engage more actively in everyday life, industry, and transportation. More specifically, 6G will fully actualize the large-scale deployment and services of autonomous cars [46].
* • Smart environments: The concepts of smart cities and smart homes refer to smart environments that can significantly increase the quality of life by optimizing services and operations, resource management, function automation, service efficiency, monitoring, etc. Accordingly, the 6G network will combine ICT and an ultra-massive number of smart physical devices (i.e., IoT devices) to optimize daily-life processes such as home security, waste management, transportation systems, traffic monitoring, and utilities-related operations, to name a few. Realizing smart environments is one of the critical goals of the 5G network; however, in the 5G era, these applications can only partially be realized due to their stringent requirements.
These include high connection density, ultra-high-reliability communications (e.g., for transportation systems), tight security (e.g., for smart-home applications), and massive data rates (e.g., for social XR and connected cars).
* • E-Health: 6G will also offer enormous advantages to the health system through technological innovations such as holographic communication, artificial intelligence (AI), and AR/VR. 6G will help the health system by removing time and space boundaries through telemedicine and by optimizing health-system functions and workflows [47]. Guaranteeing eHealth services will require satisfying demanding QoS requirements such as reliable communication (99.99999%), ultra-low latency ($<$ 1 ms), and mobility robustness.
* • Tactile Internet: The 5G-enabled IoT systems mostly focus on perception and connectivity. In the future, 6G will provide a more intelligent human-to-machine type of communication for real-time control of IoT devices, namely the tactile Internet. According to [26], the tactile Internet is a wireless communication system that will enable humans and machines to exchange control, touch, and sense data in real time. The tactile Internet will be an enabling technology for haptic interfaces and, consequently, for visual feedback and remote response behavior. The tactile Internet is expected to play a vital role in different services, such as Industry 4.0, everyday life, cooperative automated driving (CAD), e-commerce 4.0, and other human-machine-interaction-based applications. To enable tactile communications in the future, 6G has to provide ultra-real-time responses, ultra-low latency, and reliable communications. One typical application of the tactile Internet is presented in Figure 4.
Figure 4: Remote surgery.
Figure 5: Relative requirement scores for the seven applications, represented by ball diameter. Adapted from [48].
* • Private 6G networks: Different industries and corporations (e.g., factories, airports, oil and gas, the health sector, and grids) have employed IIoT to create autonomous networks for control and monitoring tasks without human intervention. To realize IIoT, the establishment of the underlying network is crucially important. Traditionally, the underlying network has been created using both wired (e.g., fiber, industrial Ethernet, and Fieldbus) and wireless technologies (e.g., WiFi, WiMAX, and Bluetooth). However, these communication technologies still fail to satisfy the stringent requirements of IIoT applications, such as security, low latency, and reliability. Moreover, the ever-increasing number of IIoT devices, their mobility, and the rise of security and privacy threats motivate the industrial community to replace these technologies with private cellular networks [49]. Newly launched 5G cellular networks provide an effective alternative to the traditional IIoT underlying network, named the private 5G network, to fulfill stringent communication requirements in industry and other sectors. The private 5G network refers to a 5G new-radio-technology-based local area network for dedicated wireless coverage in a particular area. The private 5G network can provide distinctive features, such as mobility, positioning, improved security, guaranteed QoS, exclusive capacity, customized service, and intrinsic control, that are especially attractive for industrial communication. Another expected evolution path will be the transition from private 5G networks to private 6G networks. This is mainly because many more industries may participate in the competition of adopting custom solutions tailored to their actual use cases, such as industrial automation, warehouse operations, and remote industrial operation.
* • AR and VR: AR and VR have been considered among the most distinguished services of the 5G networks.
5G has adopted new frequency spectrum (i.e., millimeter wave (mmWave)) to increase the network capacity and consequently support data-hungry applications such as AR and VR. AR/VR technologies dramatically affect many research areas and provide new use cases, such as remote surgery, MMI, haptic technology, and game technology [50]. These use cases will need different levels of latency and reliability. Despite the fact that some of the 5G service classes (e.g., URLLC) provide high-reliability and low-latency communications, some extremely sensitive use cases (e.g., remote surgery) will need latency shorter than one millisecond, which is still not feasible in the forthcoming 5G network. Moreover, other upcoming VR-based applications, such as haptic technology and the virtual meeting room (VMR), call for the transmission of massive amounts of real-time data, expected to exceed the capability of 5G. Hence, they will need the 6G system to satisfy the end-to-end latency requirements. Figure 5 represents the relative significance of each requirement for the applications.
Figure 6: The envisioned services and use-cases for 6G [31].
### IV-B New Service Classes for 6G
The applications mentioned above and their related requirements will lead to new 6G service classes. These service classes are expected to refine the 5G core service classes, i.e., URLLC, eMBB, and mMTC. In the following, we explain the identified new service classes for the 6G network.
* • Massive URLLC: 5G URLLC refers to communications with high reliability, low latency, and high availability for mission-critical scenarios, such as IIoT and remote surgery. URLLC for many devices will be an essential scenario for future communication systems and networks [51]. Towards this end, 6G has to extend the 5G URLLC service to a massive scale, leading to a new service class, called massive URLLC, which combines 5G URLLC with classical mMTC.
Autonomous intelligent driving (AID) is one of the foreseeable applications of massive URLLC, in which several important considerations must be taken into account simultaneously, such as motion planning, automated driving, automatic vehicle monitoring, obstacle detection, emergency rescue operations, and so on. Massive URLLC in 6G must provide low latency, high reliability, high data rates, massive connectivity, and full mobility at the same time to meet such applications' requirements. To realize massive URLLC, multiple access techniques such as OMA, NOMA, and contention-based multiple access could be promising solutions. By applying OMA techniques, massive URLLC in 6G could experience a linear increase in the required bandwidth along with the rise in the number of devices. Moreover, other multiple access techniques (e.g., NOMA and contention-based) can be used to achieve proper trade-offs among latency, reliability, and scalability [52]. Massive URLLC calls for the transmission of massive numbers of short data packets to guarantee time-sensitive 6G applications with excellent resource efficiency and low latency.
* • eMBB + URLLC: Furthermore, some other applications, such as AR, VR, and holographic meetings, are promising examples of 5G and beyond-5G applications. Such applications often need high transmission rates (e.g., high-quality video streams), low latency (e.g., real-time interactive instructions), and high-reliability communications [24]. Moreover, these requirements must also be satisfied in high-mobility conditions such as sea and air travel. As a result, a new service class, the so-called enhanced mobile broadband URLLC (eMBB + URLLC), has been envisioned to allow 6G to support any scenario subject to high rate-reliability-latency requirements. Energy efficiency will be a severe concern for this service class due to its direct effect on reliability and data rate.
Compared with the eMBB and URLLC of the 5G networks, this envisioned service class should be highly competent in optimizing mobile communication systems in terms of the handover process, interference, and big data transmission/processing. Furthermore, the security threats and privacy concerns related to the enhanced mobile broadband URLLC communication service shall be considered. Developing resource-sharing techniques for the coexistence of URLLC and eMBB services in the 6G networks seems to be one of the most significant technical challenges towards enabling enhanced mobile broadband URLLC [53].
* • Massive eMBB: In Section IV-A, we mentioned that the tactile Internet would be one of the 6G use cases (e.g., Industry 4.0), which will pose stringent requirements such as high data rates, ultra-low latency, and reliable communications [5]. Meanwhile, to gain tactile perceptions and convert them into digital information, the connection density would often be very high in Industry 4.0-based scenarios (e.g., 100 connections per $m^{3}$). Hence, massive eMBB will attract considerable interest in the 6G era for improving the operations and functions in large-scale IIoT by supporting massive low-latency connectivity among workers, sensors, and actuators.
Beyond these services, the works conducted in [22] and [54] envisioned three other service classes for 6G, including Computation Oriented Communications, Event Defined uRLLC, and Contextually Agile eMBB Communications. In [55], Zong et al. note that advances in industry and autonomous intelligent driving will lead to the appearance of new core service classes in the future network, such as ultra-high-speed-with-low-latency communication (uHSLLC), ultra-high data density (UHDD), and ubiquitous mobile ultra-broadband (uMUB). Considering the foreseen 6G usage scenarios and KPIs, a different classification of the new service classes to be supported by 6G is introduced in [5].
These services include extremely reliable and low-latency communications, ultra-massive machine-type communications, further-enhanced mobile broadband, ultra-low-power transmissions, and long-distance and high-mobility communications. Moreover, Human-Centric Services and Multi-Purpose Control, Localization, and Sensing (3CLS) and Energy Services have been introduced as new 6G service classes [21]. In the next section, we discuss the technologies that are expected to be integrated into 6G as enabling technologies.
## V Evolutionary Technologies of 6G
This section and the next one investigate the technologies arising as enablers of the 6G network use-cases and related KPIs, discussed in Sections IV-B and IV-A. Some of these technologies have already been considered or discussed for the 5G networks; however, due to technological limitations or market boundaries, they are not commercially available in the 5G networks. The 6G breakthroughs can happen in different layers (e.g., PHY), network architecture, communication protocols, network intelligence, etc. This section is dedicated to the evolutionary technologies of 6G. As mentioned above, evolutionary solutions aim to use existing or previously adopted technologies (e.g., MIMO) to realize the 6G networks, whereas revolutionary solutions aim to exploit novel technologies (e.g., THz communications) to serve 6G. In other words, the revolutionary technologies will fundamentally change different layers of cellular networks (e.g., the PHY layer) compared to the 5G networks. Note that many aspects of the revolutionary technologies are still at the stage of scientific investigation.
### V-A Non-Terrestrial Technologies
The existing cellular networks based on legacy terrestrial technologies have struggled with the challenges of providing extensive wireless coverage to rural areas, limited availability/reliability, and vulnerability to natural and human-made disasters.
To address these challenges, the 6G networks will integrate non-terrestrial technologies, i.e., UAV-assisted wireless communications and satellite connectivity, in order to afford full-coverage and high-capacity connectivity [56].
* • UAV-assisted wireless communications: UAVs can fly at low altitudes (<100 m) and have recently attracted ever-increasing interest thanks to their simplicity and low cost, whether to implement broadband, broad-scale wireless coverage during emergencies or to act as relay nodes for terrestrial wireless communications. One of the promising foreseen applications of UAV-assisted communications will be in 6G-enabled IoT networks. UAVs can overcome geographical and environmental limitations on wireless communications, such as ships on the ocean, sensors deployed in remote/isolated regions, and areas out of terrestrial network coverage. It is almost impossible for traditional IoT communication technologies, such as Long Range Wide Area Network (LoRaWAN) and Narrowband IoT (NB-IoT), to be applied in such situations. Regarding the integration of UAVs into the existing and future cellular networks, two possible scenarios can be imagined [57]. In the first scenario, UAVs can be incorporated into the cellular network as a new category of User Equipment (UE) that receives services while flying in the sky, referred to as an aerial user (AU). It is envisioned that AUs will be a win-win solution for UAV technologies and cellular networks because it is a cost-effective approach. More significantly, AUs can reuse many installed cellular base stations (BSs) without the need to construct new infrastructure. AUs will introduce many new UAV-based use cases in the 6G era, such as urban/road traffic control, search and rescue missions in remote areas, and environmental photography. Moreover, AUs can be used as a complementary tool to assist cellular-network-based positioning systems.
In the second scenario, UAVs can work as aerial BSs or relay nodes to assist legacy terrestrial wireless communications with connectivity from the sky. Non-terrestrial BSs/relay nodes could deliver enormous benefits compared to terrestrial BSs/relays installed at fixed sites [58]. For instance, aerial BSs/relays can be quickly deployed on demand. This feature is crucially important for use-case scenarios such as emergencies, unforeseen disasters, and search and rescue operations. Moreover, compared to terrestrial BSs/relays, aerial BSs/relays are more likely to establish radio links with terrestrial UEs since they are above the ground. Hence, they can make more reliable connections and provide multi-user scheduling and resource allocation for radio access. Several significant issues are, however, posed by UAV-based communications, such as the different communication QoS requirements for UAV command/control signals and payload data, severe interference in air-ground communications (uplink/downlink) due to the line-of-sight (LoS)-dominant channels, and challenges related to UAV size, weight, and power (SWaP) constraints.
* • Satellite communications: To achieve ubiquitous connectivity, 6G will integrate satellite communications with the cellular network. High-throughput satellite technology will be widely used for establishing broadband Internet connectivity, especially for remote and out-of-coverage areas of terrestrial communication networks. It is expected that this Internet access service will be competitive with ground-based services in terms of pricing and bandwidth. In the 6G era, satellite-enabled communication is an anticipated alternative to terrestrial and UAV-assisted communication technologies for providing global connectivity to smart physical devices on the ground, and hence it can be utilized for IoT scenarios.
Recently, some efforts have been initiated by the 3rd Generation Partnership Project (3GPP) to provide satellite communication standards in order to serve upcoming terrestrial wireless communications [59]. Motivated by the enormous advantages of satellite-assisted IoT communications, such as communication reliability, broad coverage, security/protection, fast rural deployment, and longevity [60], some satellite communications companies, e.g., Globalstar and Iridium Communications, are involved in designing dedicated satellites for satellite-based IoT networks. One can refer to several applications for satellite-based IoT networks, including mission-critical services, location-based services, navigation systems, and the healthcare sector, to name a few.
### V-B AI and 6G
Perhaps the most influential and recently proposed enabling technology for the 6G network is AI [31, 61]. AI has been used in 5G only to a limited extent, especially ML algorithms such as deep learning (DL) and reinforcement learning (RL). Classical ML algorithms have also been applied to 5G, for instance, support vector machines (SVMs) and Bayesian networks. One can refer to a wide range of ML applications in the 5G era, such as network traffic classification (NTC)/network traffic prediction (NTP), intrusion detection systems (IDSs), and network traffic monitoring and analysis (NTMA), to name a few [62]. Nonetheless, 6G will fully operationalize AI for different purposes (e.g., intelligent reasoning, decision making, and the design and optimization of architecture, operations, and protocols), beyond classification/prediction tasks. The 6G network is expected to provide ubiquitous AI services from the core to its end devices. AI can leverage massive training data to learn, provide forecasts, and make decisions, making it a powerful tool for enhancing wireless networks' performance even when comprehensive system information is not available.
Hence, AI can be considered a foundation stone for different aspects, e.g., design and optimization, of future wireless networks, especially the 6G network. It is predicted that the most significant impacts of AI on 6G will be in air-interface design and optimization. AI methods were first broadly adopted to meet challenges in the upper layers of communication systems and networks, where they achieved remarkable successes [62]. Motivated by these successes, and stimulated by the foreseen KPIs (see Section IV-A) and challenges of the next-generation mobile network (see Section VII), AI-based approaches are now also being studied at the other layers of communication systems and networks. In the following, we discuss AI-enabled methodologies and technologies for the 6G network. SDN and NFV, also known as network softwarization, are crucial 5G enabling technologies that enable architectural innovations such as network slicing. Network slicing is one of the key innovative design aspects of the 5G network: it allows enormous virtualization capability and, consequently, encourages diverse enterprise business models with a considerable degree of flexibility, more profit for the service provider, and lower operational costs. However, 6G will be a more complex and heterogeneous network, and hence network softwarization alone will not be sufficient. As mentioned, 6G will benefit from new radio access interfaces such as the THz frequency band and intelligent surfaces. Moreover, the 6G network will need to serve more difficult IoT-related functionalities, e.g., sensing, data gathering, data analytics, and storage. Taken together, 6G will need an adaptive, flexible, and intelligent architecture. This calls for further improvement of the current technologies (e.g., SDN, NFV, and network slicing) to meet the requirements mentioned above.
AI-based approaches will present a more versatile network slicing architecture in the 6G era by allowing rapid learning and adaptation to dynamics. The ever-increasing growth in the volume and variety of data produced in communication networks calls for developing data-driven network operations and planning that adapt to future networks' highly dynamic nature. Leveraging ML-based methods for big data analytics is one of the AI applications that can be used in the 6G network. The key applications of AI in 6G networks include [63]:
* • Descriptive analytics: Descriptive techniques refer to presenting historical data to easily understand different aspects of the network, such as channel conditions, network performance, traffic profiles, etc.
* • Diagnostic analytics: Using AI, 6G networks can use network data to detect network faults, find faulty services, and identify the root causes of network anomalies, thus enhancing network performance in terms of reliability and security [64, 65].
* • Predictive analytics: Using AI, 6G networks can provide forecasts about future or unknown events, such as traffic patterns and network congestion, resource availability, and the future locations of users.
* • Prescriptive analytics: Prescriptive techniques leverage descriptive and predictive analytics to help with decisions about network slicing (e.g., the number of slices), virtualization, caching placement problems, and resource allocation.
As mentioned earlier, 6G will be a highly dynamic and complex network because of its extremely large scale, high connection density, and heterogeneity. Hence, conventional approaches to wireless network optimization (e.g., mathematical and dynamic programming) will not be applicable to the 6G network [22]. As a result of this incapability, it is expected that 6G will benefit from AI-enabled automated and closed-loop optimization.
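The closed-loop optimization idea can be illustrated with a minimal sketch (a hypothetical toy, not a 6G algorithm): an epsilon-greedy bandit agent repeatedly selects among candidate network configurations and refines its reward estimates from noisy feedback, a simplified stand-in for the learning loops envisioned for 6G. The reward values below are made up for illustration.

```python
import random

# Toy closed-loop optimizer: an epsilon-greedy agent picks among candidate
# network configurations (the "arms") and improves its estimates from noisy
# observed feedback (e.g., measured throughput). All rewards are illustrative.

def epsilon_greedy(true_rewards, steps=5000, eps=0.1, seed=0):
    rng = random.Random(seed)
    n = len(true_rewards)
    counts = [0] * n
    estimates = [0.0] * n
    for _ in range(steps):
        if rng.random() < eps:
            arm = rng.randrange(n)                 # explore a random config
        else:
            arm = estimates.index(max(estimates))  # exploit the best so far
        reward = true_rewards[arm] + rng.gauss(0, 0.1)  # noisy feedback
        counts[arm] += 1
        estimates[arm] += (reward - estimates[arm]) / counts[arm]
    return estimates.index(max(estimates))

best = epsilon_greedy([0.2, 0.5, 0.9])  # config 2 has the best true reward
print("agent converged to configuration", best)
```

The same explore/exploit feedback structure underlies the deep-RL loops discussed next, with the lookup table replaced by a learned policy.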
Recent advances in ML methods (e.g., deep RL) can build a feedback-loop system between the decision-making agent and the cyber-physical system [66]. The agent can iteratively improve its performance by receiving the system's feedback, eventually achieving optimal performance [67].
### V-C Energy Harvesting
Energy harvesting mechanisms have been incorporated into 5G to meet strict energy limitations affordably and sustainably. These mechanisms can generate electrical power from external sources for the energy supply of network devices, e.g., BSs and UEs. However, 5G energy harvesting mechanisms currently encounter some challenges, such as the coexistence of these mechanisms with communication protocols and efficiency degradation when converting harvested signals to electricity. Considering the foreseen massive scale of the 6G network, and stimulated by the fact that any sustainable development in communication systems and networks should devote careful attention to energy consumption, 6G will have to develop effective energy harvesting mechanisms and energy-efficient communication techniques. Moreover, IoT is expected to be enabled in the 6G era through the massive connectivity of low-power and batteryless smart devices. Nonetheless, finding practical solutions to increase batteryless devices' lifetime is a serious issue. Two potential solutions have attracted increasing attention for tackling this issue: 1) further improving the energy efficiency of low-power devices, and 2) energy harvesting mechanisms and wireless information and power transfer (WIPT) [68]. Emerging 6G enabling technologies such as Terahertz (THz) communications and intelligent surfaces open up ample opportunities to achieve the energy self-sufficiency and self-sustainability vision for the 6G network. For example, because of its better directionality, the THz frequency band is more efficient than lower frequency bands for WIPT scenarios.
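The WIPT power budget can be made concrete with a back-of-the-envelope sketch (all parameter values are illustrative assumptions, not from the source): received RF power from the textbook Friis free-space model, followed by an assumed rectifier conversion efficiency, showing why conversion losses and path loss make harvested power so small.

```python
import math

# Illustrative WIPT budget: Friis free-space reception, then DC power after
# a rectifier with an assumed conversion efficiency. Numbers are made up.

def friis_rx_power_dbm(tx_power_dbm, gain_tx_dbi, gain_rx_dbi, freq_hz, dist_m):
    """Received power (dBm) under the Friis free-space model."""
    wavelength = 3e8 / freq_hz
    path_loss_db = 20 * math.log10(4 * math.pi * dist_m / wavelength)
    return tx_power_dbm + gain_tx_dbi + gain_rx_dbi - path_loss_db

def harvested_power_mw(rx_power_dbm, rectifier_efficiency=0.3):
    """DC power (mW) after RF-to-DC conversion at the assumed efficiency."""
    return rectifier_efficiency * 10 ** (rx_power_dbm / 10)

rx = friis_rx_power_dbm(30, 10, 5, 2.4e9, 10.0)  # 1 W TX at 2.4 GHz, 10 m
print(f"received {rx:.1f} dBm, harvested {harvested_power_mw(rx):.4f} mW")
```

Even with generous antenna gains, the harvested power at 10 m is well below a milliwatt, which is why highly directional bands (such as THz) and efficient rectifiers matter for WIPT.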
### V-D Large Intelligent Surfaces (LIS)
As mentioned in Section IV, spectral efficiency is also one of the 6G KPIs. Massive MIMO is a leading-edge technology for improving spectral efficiency, making it possible to serve many UEs in a cellular cell over the same bandwidth. Besides, the THz frequency band is one of the technologies that promise to improve spectral efficiency. However, in the 6G era, the simultaneous deployment of traditional massive MIMO technology and the THz frequency band can cause several challenges, including high power consumption, extreme complexity in signal-processing algorithms, and increased hardware cost. Using LISs for communications will be a solution to alleviate these challenges in the 6G network. Intelligent surfaces are smart electromagnetic materials that can be embedded in our environments, such as buildings, walls, and clothes. These surfaces are able to change the reflection of radio waves and are expected to lead to the introduction of new communication technologies such as holographic MIMO and holographic radio frequency (RF) [69]. Concepts such as smart radio environments, reconfigurable intelligent surfaces (RISs), and LISs are branches of intelligent-surface technology, and each of them will bring its own advantages. For example, RIS-assisted systems can improve the UEs' achievable data rate and increase the rank efficiency of the MIMO communication channel. Among the technologies mentioned earlier, LISs have gained the most attention from academia and industry. LIS refers to utilizing artificial electromagnetic metasurfaces as large antennas in order to increase the capacity of the network [70] or to adopt the single-anchor localization (SAL) approach. Indeed, LISs can be considered an extension of traditional massive MIMO technology, but with different array architectures and operating mechanisms.
With LIS technology, the impressive performance gains of massive MIMO systems can be achieved while also improving their energy efficiency, as LIS elements do not depend on any active power source to transmit data [71].
### V-E Multi-access edge computing (MEC)
MEC, formerly known as mobile edge computing, is a network paradigm defined by the European Telecommunications Standards Institute (ETSI) that refers to the deployment and execution of distributed computing capabilities, content caching, network data analytics, and network decision making at the network edge [72]. MEC will become a primary player in the 6G networks as it can act as an intermediate layer that allows active data analytics where the data is generated. This paradigm is crucially important for resource-constrained services/applications [73]. One can refer to V2X communication, improving the energy efficiency and computation offloading of URLLC, and security and privacy purposes as the prominent use cases related to MEC. Several emerging services, such as AR/VR and V2X, need low end-to-end (E2E) latency. In this case, MEC can dramatically reduce E2E latency by providing edge-based data processing and analytics. Besides, MEC's localized data pre-processing capabilities can reduce the need to send a considerable amount of redundant or unnecessary data to cloud data centers. MEC is also expected to be used to efficiently manage network resources (e.g., computational and communication). More specifically, the deployment and operationalization of edge servers, also called MEC servers, at the edge of the network can realize semi-centralized resource allocation schemes, in which centralized resource allocation techniques can be used to assign network resources to a cluster of edge devices in the presence of limited channel state information (CSI) and with low complexity.
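The E2E latency benefit of edge processing can be sketched with a toy model (all parameter values are illustrative assumptions): latency is transmission delay plus round-trip propagation plus compute time, and the much shorter round trip to a nearby MEC server can outweigh the faster CPUs of a distant cloud.

```python
# Toy edge-vs-cloud latency comparison. All numbers are illustrative:
# a 1 Mb task needing 1e8 CPU cycles, offloaded over a 1 Gb/s link.

def e2e_latency_ms(task_bits, link_rate_bps, rtt_ms, cpu_cycles, cpu_hz):
    """Transmission delay + round-trip propagation + compute time, in ms."""
    tx_ms = task_bits / link_rate_bps * 1000
    compute_ms = cpu_cycles / cpu_hz * 1000
    return tx_ms + rtt_ms + compute_ms

task = dict(task_bits=1e6, cpu_cycles=1e8)
edge = e2e_latency_ms(**task, link_rate_bps=1e9, rtt_ms=1.0, cpu_hz=5e9)
cloud = e2e_latency_ms(**task, link_rate_bps=1e9, rtt_ms=40.0, cpu_hz=20e9)
print(f"edge: {edge:.1f} ms, cloud: {cloud:.1f} ms")
```

With these assumed numbers the edge finishes in 22 ms against 46 ms for the cloud: the cloud's 4x-faster CPU cannot compensate for its 40 ms round trip, which is the core argument for MEC in latency-sensitive AR/VR and V2X services.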
### V-F Non-orthogonal Multiple Access (NOMA)

Multiple access techniques have always been essential in the development of communication systems and networks, and 6G is no exception [74]. NOMA will become one of the most influential radio access mechanisms for the 5G and 6G cellular networks. NOMA plays an essential role in the implementation and optimization of polar coding and channel polarization methods. As a multiple access technology, NOMA has been proposed for spectral efficiency in 5G/B5G. In comparison to traditional orthogonal multiple access (OMA) technologies, NOMA also represents considerable improvements in terms of security, secrecy capacity, and user fairness [75]. NOMA leverages different techniques to provide these improvements, such as the successive interference cancellation (SIC) technique and strong/weak user decoding order. Massive URLLC is envisioned as one of the leading service classes in the 6G networks, and NOMA technologies have remarkable abilities in guaranteeing services such as mMTC and URLLC. Adopting NOMA techniques is crucially important for current and future mobile networks because they can help improve bandwidth utilization and the efficient allocation of resources. Moreover, the convergence of MEC with NOMA, also called NOMA-assisted MEC, can be further studied to enhance computation services in B5G [76].

### V-G Device-to-Device (D2D) Communication

It is expected that D2D communication will be one of the most innovative technologies helping to fulfill the requirements of different emerging use cases in the 6G era [14]. Towards this end, D2D communication can provide the 6G network infrastructure with various D2D-based solutions for NOMA, network slicing, and MEC, to name a few. Furthermore, it is envisioned that low-latency and high-speed D2D communication will be essential for the 6G networks to deal with the limited communication distance of THz technology in future ultra-dense heterogeneous networks (UDHN).
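The SIC and strong/weak decoding order mentioned in the NOMA subsection above can be sketched for the common power-domain flavour of NOMA. The channel gains, power split, and noise power below are illustrative, and the SIC step is modeled by its ideal rate outcome rather than symbol-level decoding.

```python
import numpy as np

# Two-user downlink power-domain NOMA (toy, illustrative numbers).
# The far (weak) user gets more power; the near (strong) user applies SIC:
# it first decodes and removes the far user's signal, then decodes its own.

p_total = 1.0
alpha_far, alpha_near = 0.8, 0.2   # power split (far user favoured)
g_far, g_near = 0.1, 1.0           # channel gains (near user stronger)
n0 = 0.01                          # noise power

# Far user decodes its own signal, treating the near user's as interference
sinr_far = alpha_far * p_total * g_far / (alpha_near * p_total * g_far + n0)
r_far = np.log2(1 + sinr_far)

# Near user: after SIC removes the far user's signal, only noise remains
sinr_near = alpha_near * p_total * g_near / n0
r_near = np.log2(1 + sinr_near)

# OMA baseline: each user gets half the bandwidth with full power
r_far_oma = 0.5 * np.log2(1 + p_total * g_far / n0)
r_near_oma = 0.5 * np.log2(1 + p_total * g_near / n0)

print(f"NOMA rates: far={r_far:.2f}, near={r_near:.2f} bit/s/Hz")
print(f"OMA  rates: far={r_far_oma:.2f}, near={r_near_oma:.2f} bit/s/Hz")
```

With these parameters both users do at least as well as under OMA and the sum rate is higher, which is the spectral-efficiency argument made above.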
Regarding MEC, the UEs' spare resources (e.g., computational and communication resources) can be used to improve network edge computing performance. D2D can enable 6G to use these idle resources by establishing a virtual network infrastructure to manage them. D2D can use different topology management techniques, e.g., clustering, that allow the network to use spare resources efficiently [77]. In the timeframe of 6G, thanks to the employment of THz-band D2D technology, communication between two nearby UEs will be near real-time [78]. Hence, it is expected that the 6G networks will use the capabilities of D2D clusters in edge computing. In terms of network slicing, it is envisioned that D2D-enhanced intelligent network slicing mechanisms will encourage telecommunications operators to effectively centralize and combine network resources, such as D2D clusters, private third-party networks, and the public land mobile network (PLMN), at the edge of the network. Finally, utilizing NOMA as a multiple-access approach for D2D can enable a D2D transmitter to communicate with multiple D2D receivers simultaneously. As a result, the performance gains of D2D will improve significantly [79].

### V-H Grant-free Transmission

Grant-free transmission technology has been introduced as one of the primary trends in future mobile networks [80]. Indeed, this technology has been classified as a critical medium access control mechanism for enabling massive IoT connectivity over mobile networks. For the 5G networks, different grant-free transmission techniques have been adopted for mMTC and URLLC services; however, the capacity provided by these techniques is still restricted [81]. Given the ever-increasing number of smart physical devices and the popularity of these two services, more efficient grant-free transmission technologies will need to be designed for the 6G networks.
Fortunately, the integration of NOMA with grant-free transmission, GF-NOMA, is a promising solution for 6G-enabled IoT systems because of NOMA's low-delay performance [82]. The majority of conventional NOMA techniques consider a centralized scheduling scheme, in which IoT devices are already connected and different network parameters, e.g., spreading sequences and power control, are pre-determined. However, due to the specific characteristics of mMTC traffic, such as massive uplink communication, small and periodic data transmissions, and various QoS requirements, the performance of conventional NOMA techniques can be highly degraded. In other words, this type of traffic can cause signaling overhead and increase the latency of the centralized scheduler. To deal with this challenge, grant-free transmission is a viable solution, in which devices can send their data over randomly chosen time-frequency resources in an automatic manner to realize low-latency access and reduce the signaling overhead associated with scheduling requests. Signature-based, compressive sensing-based, and compute-and-forward-based schemes are the main categories of grant-free NOMA [82].

### V-I Sparse Signal Processing

Sparse sampling, also known as compressive sensing, is a sparse signal processing paradigm that optimally exploits sparsity in signals to reconstruct them efficiently from fewer samples. The applications of this paradigm have been studied in different aspects of the 5G/B5G networks, including MIMO random access, embedded security, cloud radio access network (CRAN), channel-source network coding based on the compressive paradigm, etc. [83]. Sparse signal processing algorithms can be used to accurately and effectively recognize active IoT devices in the grant-free transmission approach.
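A minimal sketch of how sparse recovery supports active-device detection is shown below, using orthogonal matching pursuit (OMP) as a stand-in for more elaborate grant-free receivers. The problem sizes, random Gaussian signatures, and noiseless measurements are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy grant-free setting (illustrative sizes): n_dev devices, each with a
# known signature column in A; only k_active are transmitting, so the
# composite signal x is sparse and can be sought from m < n_dev samples.
n_dev, m, k_active = 128, 48, 4
A = rng.standard_normal((m, n_dev)) / np.sqrt(m)   # signature matrix
x = np.zeros(n_dev)
support = rng.choice(n_dev, k_active, replace=False)
x[support] = rng.uniform(1.0, 2.0, k_active)        # active devices' gains
y = A @ x                                            # noiseless measurement

def omp(A, y, k):
    """Orthogonal matching pursuit: greedily pick the column most
    correlated with the residual, then re-fit by least squares."""
    residual, picked = y.copy(), []
    for _ in range(k):
        picked.append(int(np.argmax(np.abs(A.T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, picked], y, rcond=None)
        residual = y - A[:, picked] @ coef
    x_hat = np.zeros(A.shape[1])
    x_hat[picked] = coef
    return x_hat, set(picked)

x_hat, detected = omp(A, y, k_active)
print("true active devices    :", sorted(int(i) for i in support))
print("detected active devices:", sorted(detected))
```

With enough measurements relative to the number of active devices, the detected set typically coincides with the true one; the point of the sketch is that m can be far smaller than the device population n_dev.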
One of the main challenges in grant-free transmission, and consequently in enabling massive IoT connectivity, is to identify the active IoT devices for data decoding [84]. Sparse signal processing is also important for realizing THz communications in the 6G networks. Due to the sparse nature of THz channels, compressive sensing methods can be used for sparse channel recovery in THz channel estimation. For example, in [85], the applicability of approximate message passing (AMP) as a compressive sensing technique has been investigated for THz channel estimation. By leveraging the sparsity feature, the compressive sensing paradigm demonstrates an excellent ability to improve the spectrum and energy efficiency of future wireless networks and IoT systems.

### V-J Holographic MIMO Surfaces

One of the key 6G enabling technologies is the holographic MIMO surface (HMIMOS) [86]. In the 5G networks, massive MIMO systems (i.e., BSs with large antenna arrays) have been used to satisfy the throughput requirements of 5G networks. Nevertheless, for various reasons such as energy consumption concerns and the considerable cost of fabrication/operation, it is difficult to fully realize massive MIMO systems. Given the remarkable advances in programmable metamaterials, RISs show enormous potential to deal with massive MIMO challenges and to realize the challenging vision of the 6G networks by actualizing seamless connectivity and control of the environment in cellular wireless networks through intelligent software. HMIMOS is expected to improve massive MIMO technology in terms of size, cost, weight, and energy consumption by transforming the wireless network environment into a reconfigurable intelligent entity. Towards this end, HMIMOS may take three different roles: receiver, transmitter, and reflector.
The distinguishing characteristics of HMIMOS (i.e., intelligence and reconfigurability) make it a prospective technology for fulfilling the different 6G requirements, including low-latency, low-power, and high-throughput communications. In terms of communications, HMIMOS applications fall into two main groups: outdoor and indoor. HMIMOS outdoor applications include energy-efficient beamforming, creating connections between the users and the BS, PHY layer security, and wireless power transfer (WPT), whereas indoor applications include accurate indoor positioning and coverage enhancement in indoor environments [86].

## VI Revolutionary Technologies of 6G

As mentioned in Section V, 6G enabling technologies can be categorized into evolutionary and revolutionary technologies. In this section, we study the revolutionary technologies of 6G as follows.

### VI-A THz Communications

Despite the successful 5G deployment with the help of enabling technologies such as the mmWave frequency spectrum, the demand for higher data rates continues. In this sense, higher frequencies in the terahertz band will be fundamental in the 6G network. The terahertz frequency band lies between the mmWave and optical bands and ranges from 100 GHz to 10 THz [87]. The THz frequency band promises data rates of hundreds of Gbps (e.g., 100 Gbps), secure transmissions, extensive connectivity, highly dense networks, and enhanced spectral efficiency, consequently increasing the available bandwidth (>50 GHz) to meet the requirements of 6G use cases with massive data rates and ultra-low latency. Moreover, the THz frequency band benefits from high time-domain resolution, which is vital for super-resolution (SR) sensing technology (e.g., remote sensing) and high-precision positioning services (e.g., autonomous driving). Despite these remarkable advantages, multiple unique issues arise from THz frequency communications.
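One of these issues, severe free-space path loss, follows directly from the Friis path-loss formula. The short sketch below compares a sub-6 GHz carrier with a 300 GHz carrier at the same illustrative distance.

```python
import math

C = 299_792_458.0  # speed of light, m/s

def fspl_db(distance_m: float, freq_hz: float) -> float:
    """Free-space path loss (Friis), in dB."""
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / C)

d = 100.0  # link distance in metres (illustrative)
loss_3ghz = fspl_db(d, 3e9)      # typical sub-6 GHz carrier
loss_300ghz = fspl_db(d, 300e9)  # low end of the THz band

print(f"FSPL at   3 GHz, 100 m: {loss_3ghz:.1f} dB")
print(f"FSPL at 300 GHz, 100 m: {loss_300ghz:.1f} dB")
# Raising the carrier by 100x costs an extra 20*log10(100) = 40 dB,
# before molecular absorption, which degrades THz links further still.
```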
For example, THz links are prone to excessive signal attenuation, rapid channel fluctuation, and severe propagation loss, notably restricting communications over long distances. Besides, to use the terahertz frequency band in commercial communication systems, one should consider engineering-related challenges, e.g., designing very large-scale antennas and providing the high computational power required to support the extensive bandwidth. Fortunately, rapid advances in the infrastructure and algorithmic aspects of communication systems, such as ultra-massive MIMO (UM-MIMO), intelligent surfaces, new signal processing methods, and communication protocols, will mature THz communications.

### VI-B Optical Wireless Technology

Alongside mobile communications based on RF, optical wireless communications (OWCs) will be widely used in the timeframe of 6G. The OWC frequency range comprises the infrared (IR), VLC, and ultraviolet (UV) spectra [88]. OWC has been in operation since the 4G network; nevertheless, it will be deployed more broadly to satisfy the requirements of 6G. Among OWC technologies, VLC is the most promising frequency spectrum because of its technological maturity and the extensive use of light-emitting diodes (LEDs). OWC in the visible spectrum (380 to 740 nanometers) is generally known as VLC, which is visible to the human eye. For short-range communication distances (up to a few meters), VLC technology offers unique advantages over its RF-based counterparts [89]. First, the spectrum occupied by VLC systems is free and unlicensed, and it can provide extensive bandwidth (THz-level bandwidth). Second, VLC-based communications do not emit electromagnetic (EM) radiation and are not seriously affected by other potential EM interference sources. This means that VLC can be adopted for EM-interference-sensitive applications such as gas stations, aircraft, and hospitals. Communication security and privacy are the third advantage of VLC.
The transmission medium in a VLC-based network cannot penetrate walls and other opaque obstructions; in other words, the transmission range of the network is limited to indoor spaces. As a result, it can protect users' privacy and sensitive information from malicious adversaries. This is all the more relevant given that people tend to spend about 80% or more of their time indoors. Last but not least, VLC can rapidly establish wireless networks and does not need expensive BSs, since it uses illumination light sources as BSs. The maximum data rate of OWC is highly dependent on lighting technology. For example, in [90][91] the authors reported achieving data rates of up to 4 Gbps with a gallium nitride (GaN)-based LED. Given the technological improvements in LED lamps and related fields, e.g., digital modulation techniques, it is expected that the achievable data rate of VLC will reach hundreds of Gbps in the 6G network [92]. It is envisioned that VLC technology will be widely used in different applications, such as intelligent transportation systems (ITS), smart cities and homes, the advertising and entertainment industry, and hospitals.

### VI-C 3-Dimensional (3D) Network Architecture

The currently and previously deployed cellular network architectures are designed for 2-Dimensional (2D) connectivity between network access points and ground-based UEs. Conversely, it is envisioned that the 6G network will integrate terrestrial and non-terrestrial technologies (see Section V-A) to support 3D network coverage. Compared with fixed 2D infrastructures, the 3D strategy is much more timely and economically efficient (telecommunications operators have to bear the cost of deploying dense mobile networks to guarantee massive connectivity), especially when operators want to quickly provide seamless/reliable/continuous services in rural areas or in the case of natural disasters. 3D coverage will also enable communication systems for deep-sea and high-altitude scenarios.
Despite the significant advantages mentioned above, 3D network architecture poses many challenges that need to be addressed before this technology can effectively be used in real-world cellular networks, e.g., channel models for air-to-ground communications, trajectory optimization, resource management, topology, etc. [93][94].

### VI-D Edge Intelligence (EI)

EI, or edge AI, is another promising computing paradigm that has gained enormous interest [95][96]. Big data sources, as an enabler of learning-based solutions, have recently shifted markedly from cloud data centers to the ever-increasing number of edge devices, e.g., smartphones and industrial IoT devices [97]. It is evident that these edge devices can fuel AI and enable several new use cases by providing a massive volume of data. Motivated by this marked shift in big data sources and stimulated by the advantages mentioned earlier, there is an imperative to push AI solutions to the edge of the network to fully exploit the potential of edge big data sources. Nevertheless, providing AI solutions at the network edge is not a trivial task because of issues related to performance, data/user privacy, and cost. To deal with these issues, the traditional approach is to transfer the data generated by the edge devices to cloud data centers for processing and analytics. Clearly, transferring such a considerable amount of data brings monetary/communication costs and causes delays. Moreover, data protection and privacy preservation can also be significant concerns in this scenario. On-device data analytics has been proposed as a remedy, in which AI solutions run on the edge devices to process generated data locally. Nonetheless, in this alternative, lower performance and energy efficiency are the primary concerns [98].
This is mainly because most AI solutions need immense computational power that significantly exceeds the capability of power- and resource-constrained edge devices. EI has emerged to tackle the issues mentioned above. EI is the combination of edge computing and AI, which promises to provide tremendous advantages compared to conventional cloud-based approaches, such as privacy preservation, low latency, efficient energy consumption, cost-effective communications, etc. [99]. It is generally believed that the 6G networks will adopt ubiquitous AI solutions from the network core to the edge devices. Nevertheless, conventional centralized ML algorithms require the availability of a large amount of centralized data and training on a central server (e.g., a cloud server or centralized machine), which will be a bottleneck in the future ultra-large-scale mobile networks [100]. Fortunately, as an emerging distributed ML technique, FL is a promising solution to deal with this challenge and realize ubiquitous AI in the 6G networks. FL is an ML technique in which model creation does not rely on storing all data on a central server where model training can occur. Instead, the innovative idea of FL is to train an ML model at each device (participant or data owner) where data is generated or a data source resides, and then let the participants send their individual models to a server (or aggregation server) to reach agreement on a global model (see Figure 7). FL can alleviate the privacy and security challenges associated with traditional centralized ML algorithms, as well as guarantee ubiquitous and secure ML for the 6G networks [101]. The centralized ML technique contradicts the ubiquitous ML services promised by 6G, mainly due to the data collection- and data processing-related overheads in centralized ML techniques.
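The FL workflow just described, local training followed by server-side weighted aggregation, can be sketched in a minimal FedAvg-style example. Linear least-squares models and synthetic data stand in for real on-device training; only the weights leave each participant.

```python
import numpy as np

rng = np.random.default_rng(2)

# Minimal FedAvg-style sketch: each participant fits a linear model on its
# own local data (which stays on-device); only model weights travel to the
# server, which averages them weighted by local dataset size.

true_w = np.array([2.0, -1.0])

def local_train(n_samples):
    """Simulate one participant: local least-squares fit, data stays local."""
    X = rng.standard_normal((n_samples, 2))
    y = X @ true_w + 0.01 * rng.standard_normal(n_samples)
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return w, n_samples

def fedavg(local_updates):
    """Server-side aggregation: weighted average of participants' weights."""
    total = sum(n for _, n in local_updates)
    return sum(w * (n / total) for w, n in local_updates)

updates = [local_train(n) for n in (50, 120, 80)]  # three participants
global_w = fedavg(updates)
print("global model weights:", np.round(global_w, 2))
```

Real FL runs this exchange over many communication rounds with gradient or weight updates from deep models; the one-round linear case above only illustrates the data-stays-local aggregation pattern.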
As a result, distributed ML techniques, mostly FL, in which all training data remains locally on remote devices, are required for future communication systems. FL is proving to be an accelerator for the extension of privacy-sensitive applications/services. However, despite FL's considerable potential advantages for the 6G networks, FL is still in its infancy and encounters various challenges to full operationalization in the 6G networks.

Figure 7: A typical federated learning architecture.

The main challenges facing FL in the 6G era include the significant communication cost of model updating and aggregation, privacy concerns associated with gradient leakage attacks and membership inference attacks [102], security concerns resulting from heterogeneous and diverse data owners (e.g., data poisoning attacks), and model training and inference concerns caused by the ultra-large scale of 6G networks. It is envisioned that an enormous number of heterogeneous devices and communication technologies will be deployed in the 6G networks; hence, it is crucially important to improve the communication efficiency of FL algorithms to reduce the number of times the aggregation server collects gradients from the participant devices. Most importantly, it is necessary to develop privacy-enhancing mechanisms in FL, as the current techniques proposed for improving privacy in FL, e.g., homomorphic encryption and secure multiparty computation, cannot deal with the attacks mentioned above [103].

### VI-E Quantum Communications

Motivated by the great potential of parallelism shown by QC, the field of QC-assisted communications has gained much interest. This is mainly due to the fact that quantum communication has a strong potential to meet the stringent requirements of 6G, such as massive data rates, efficient computing, and strong security [104].
Toward this end, technologies such as quantum optical communications (QOCs), the quantum optical twin, quantum communication, and quantum key distribution (QKD) have been investigated in the literature [105][106]. The main idea behind QC-assisted communications is that QC uses photons (or quantum fields) to encode the data in quantum states (or qubits) and transmits the qubits from a quantum emitter to a quantum receiver. Using qubits in communications brings enormous advantages, such as communication security, high speed and low transmission losses in optical and radio mediums, a lower chance of decoherence, etc. Moreover, QC-assisted communication shows excellent potential for long-distance communications. More specifically, quantum repeaters can be used over long distances to divide the communication link into multiple shorter middle segments and then correct errors such as photon loss and operation errors in these segments [107]. Several works have practically implemented applications of quantum-based technologies in communication systems and networks. For example, QKD's capabilities for building a quantum access network have been investigated in [108][109]. Besides, the works conducted in [110][111] focused on implementing and testing quantum switches. AI is another field that will be revolutionized by QC. The currently available AI techniques are quite expensive in terms of energy, time, and resources, especially DL models. This is mainly because these techniques use traditional Boolean algebra-based transistors for processing massive amounts of data, particularly in DL models. In other words, the technological advancement in chipsets does not grow at the same pace as AI techniques do. Fortunately, computing systems based on quantum principles are a promising solution to this problem, as these systems are significantly faster and more energy-efficient than their traditional counterparts.
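The basis-sifting step at the heart of a BB84-style QKD protocol can be illustrated with a toy classical simulation. It assumes an ideal channel with no eavesdropper, and it omits error reconciliation and privacy amplification, which real QKD systems require.

```python
import random

random.seed(42)

# Toy BB84 sifting sketch (ideal channel, no eavesdropper): Alice sends
# qubits prepared in random bases, Bob measures in random bases, and both
# keep only the positions where their bases happened to match. On those
# positions the sifted keys agree, yielding a shared secret.

n = 64  # number of transmitted qubits (illustrative)
alice_bits = [random.randint(0, 1) for _ in range(n)]
alice_bases = [random.choice("+x") for _ in range(n)]   # rectilinear/diagonal
bob_bases = [random.choice("+x") for _ in range(n)]

bob_bits = []
for bit, a_basis, b_basis in zip(alice_bits, alice_bases, bob_bases):
    if a_basis == b_basis:
        bob_bits.append(bit)                   # matching basis: deterministic
    else:
        bob_bits.append(random.randint(0, 1))  # wrong basis: random outcome

# Public basis comparison; keep only matching positions (the "sifting" step)
keep = [i for i in range(n) if alice_bases[i] == bob_bases[i]]
alice_key = [alice_bits[i] for i in keep]
bob_key = [bob_bits[i] for i in keep]

print(f"kept {len(keep)}/{n} positions; keys match: {alice_key == bob_key}")
```

An eavesdropper measuring in random bases would disturb roughly a quarter of the sifted bits, which is exactly what the subsequent error-check phase of BB84 is designed to detect.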
### VI-F Cell-less Architecture

Cell-less communication architecture, also known as cell-free, has been proposed to deal with the performance degradation posed by the cellular networks' handover process [112]. Under this architecture, a UE can communicate with cooperative BSs (or access points (APs)) through coordinated multipoint transmission and reception techniques instead of connecting to a single BS. Establishing cell-less communications can enhance connectivity and lower the latency induced by the handover process. Cell-less communications will be inevitable in the 6G era due to the fast deployment of heterogeneous communication systems and the use of several frequency bands, where UEs will transfer from one network to another without needing a handover process [46]. The UEs will then choose the best link from the available heterogeneous links (e.g., THz, mmWave, and VLC) in an automated manner. As a result, the traditional handover process issues, such as data loss and handover delays/failures, can be alleviated, achieving better QoS. In other words, cell-less communications will ensure UEs' seamless mobility without the overhead of the handover process.

## VII 6G Challenges and Future Research Directions

A few potential critical open issues for future research work in the 6G networks are presented in this section.

### VII-A Energy Consumption Challenges

In the era of 6G, an unprecedented number of low-power IoT devices and battery-less smartphones will be served over 6G to realize the IoT. As a result, super energy-efficient techniques will be fundamental to guarantee the QoE. The energy-related issue is even more notable in smartphones, as the current smartphone battery life without charging is barely one day, which will be problematic for the development of cellular communications.
To deal with this challenge and comprehensively improve 6G network performance so as to serve more end devices, designing effective power supply mechanisms together with novel signal processing techniques is necessary. Several energy supply techniques, especially wireless energy harvesting-based techniques, can be adopted in the 6G network. In particular, the design of low-density channel codes and energy-efficient modulation techniques is useful. Energy-efficient communication paradigms, such as symbiotic radio (SR), can also be investigated. Furthermore, energy management optimization in the future cellular network is another promising technique to dynamically balance energy demand and supply.

### VII-B AI-related Challenges

It is undeniable that ML is an integral solution in 6G, but there are some challenges that should be considered. AI tasks usually generate a heavy computational load and are often designed, trained, and used at servers with task-customized hardware. Considering the rapid proliferation of smart physical end devices, it is envisioned that a vast number of AI applications will be designed and used by these devices at the network edge. However, given the current mobile networks, user privacy, data security, and wireless link capacity/latency are the main concerns of mobile AI-based applications. Regarding privacy and security, the emergence of decentralized machine learning techniques is a promising way to preserve users' privacy and protect sensitive data. FL tries to create a joint ML model by training an algorithm on data located at several distributed sites. In FL, each participant (data owner or client) trains a local model and sends the model weight updates to a central server (a.k.a. the aggregation server) instead of sending raw data. FL offers multiple advantages. For example, it preserves users' privacy and protects the security of data.
Moreover, FL allows various participants to train an ML model collaboratively to achieve a better model than what they could achieve alone. ML can be integrated with 6G in two aspects:

* ML on 6G networks: In this aspect, distributed ML techniques, e.g., FL and split learning, are performed on 6G network infrastructure for special tasks, e.g., EI.
* ML for 6G: In this aspect, ML techniques are used as decision-makers for networking solutions to provide automated network infrastructure establishment and maintenance.

In more depth, ML solutions for 6G can be performed in a distributed manner, although most existing solutions in this area work as centralized models [113]. One of the most important aspects of ML for 6G is resource management. Both aspects are resource-hungry tasks, but they have different goals: ML on 6G networks tries to efficiently allocate idle resources to ML tasks, whereas an optimal ML-for-6G solution can run the 6G network with the lowest resource consumption. To achieve both goals, the available resources should be divided between these two aspects efficiently, which is a complex task. In other words, it is unaffordable to spend a large share of the available resources on managing the 6G network and its resources, as they should ultimately be allocated to ML tasks for specific purposes.

### VII-C Terahertz Communications Challenges

THz communications are expected to be a critical enabling technology for the 6G network. However, as we mentioned, THz communications face some challenges, notably severe propagation loss and constrained communication over long distances. Hence, research communities need to work jointly to deal with these challenges and realize THz communications.
Fortunately, several research activities are ongoing, such as THz wireless transmission by Fraunhofer HHI, Communications and Sensing at Terahertz by NYU WIRELESS, and the ICT-09-2017 networking research funded by the European Horizon 2020 programme. Due to the nature of THz communication features, the design of the THz MAC also poses many challenges, including serious deafness problems, complicated network discovery and coupling operations, and the need to design efficient concurrent transmission scheduling techniques. These challenges and many others are tackled in the work conducted by Han et al. [71]. Hardware design is another challenge in THz communications. More specifically, real-world THz communication deployments call for innovations in circuit and antenna design, as well as the miniaturization of current bulky high-frequency transceivers. For instance, the antenna size needed to support joint communication in the mmWave and THz bands may vary (from nanometers to micrometers) and needs to be redesigned.

### VII-D 3D Coverage Challenges

As mentioned in Section VI-C, the 6G network will support 3D network coverage by integrating terrestrial and non-terrestrial technologies (e.g., UAV-assisted and satellite communications). However, this calls for collaborative research on different aspects of 3D network architecture. First, air-to-ground 3D channel modeling and measurement for communications are required. Second, novel topology optimization and network planning methods (e.g., for the deployment of non-terrestrial BSs/relay nodes) must be developed. Finally, novel network optimization tools and techniques for energy efficiency, mobility management, trajectory, and resource management in 3D network architecture are required.

### VII-E Security Challenges

Given the new service classes, evolving threat landscape, heightened privacy concerns, and new wireless communication methods (e.g., non-terrestrial technologies), security/privacy is expected to be a critical issue for 6G.
Moreover, network virtualization and softwarization in the 6G era will cause network security/privacy boundaries to gradually fade. As a result, the security defects induced by the network architecture will become more and more notable. The increased and closer integration of big data analytics techniques, AI, and EI may also introduce data security risks at the network's edge. The conventional independent security approaches will not be practical for the internal network security risks (network- and access-side) posed by enabling technologies, new use case scenarios, and service classes. Hence, it is crucially important to evolve the conventional approaches and devise new security methods [114][37]. In the 6G era, new security techniques, e.g., integrated network security, will complement traditional physical layer security techniques, especially if they consider requirements such as tight security levels and low complexity. Towards this end, extensions of some previously proposed PHY layer security methods can be used for 6G. For example, PHY layer security mechanisms for mmWave communications can be adapted for terahertz communications. As mentioned in [114], authentication-, access control-, communication security-, and data encryption-related technologies are the key security enabling technologies for 6G. Another envisioned security challenge in 6G relates to the continuing proliferation of IoT devices. At present, the most prevalent IoT communication protocols include 6LoWPAN (short for IPv6 over Low-Power Wireless Personal Area Networks), IEEE 802.15.4, and LoRa. These communication technologies use cryptographic algorithms such as Elliptic Curve Cryptography (ECC) and the Advanced Encryption Standard (AES) as security mechanisms.
However, with the emergence of new communication technologies, e.g., QC-assisted communications, and the growth in end devices' computational power, the traditional IoT protocols will no longer be secure [115].

### VII-F Tactile Internet Challenges

Incorporating control, touch, communication, and sensing capabilities into a shared real-time system poses a crucial issue in accomplishing the tactile Internet. Despite the use of virtualization and softwarization technologies and MEC to achieve the low latency requirements, the tactile Internet is still in its infancy, and further research is needed to solve some open technical challenges, such as physical layer challenges. Moreover, to alleviate signaling overhead and air interface latency, taking into account several other issues, including optimal waveform selection algorithms, robust modulation techniques, an intelligent control plane, and Control and User Plane Separation techniques, is essential. Scalable routing mechanisms and adaptive network coding schemes are also worthy of further research, as they can reduce end-to-end delay. Besides, security flaws are among the main concerns about tactile Internet-based use case scenarios. Hence, providing an effective security mechanism to deal with malicious activities is crucial for realizing the tactile Internet.

### VII-G Random Access Protocols

The proliferation of IoT devices calls for wireless solutions that can transfer data in a reliable/energy-efficient/spectral-efficient manner. However, this is not a trivial task, especially when an IoT system consists of a vast number of IoT devices. This is mainly because IoT devices send/receive data packets sporadically and unpredictably, which makes it challenging to design an effective resource allocation mechanism. Random Access (RA) protocols have been proposed to deal with this challenge, as they can reduce the cost of communication in terms of wireless resources.
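The capacity ceiling of classical RA can be seen in a toy slotted-ALOHA simulation; the device count and transmit probability are illustrative, and the well-known 1/e throughput limit is what modern RA designs aim to surpass.

```python
import math
import random

random.seed(3)

# Toy slotted-ALOHA simulation: n devices each transmit in a slot with
# probability p; a slot succeeds only if exactly one device transmits.
# Throughput peaks near 1/e, the classic limit that modern RA schemes
# (e.g., NOMA-aided grant-free access) try to push beyond.

def simulate_throughput(n_devices, p_tx, n_slots=100_000):
    successes = 0
    for _ in range(n_slots):
        transmitters = sum(random.random() < p_tx for _ in range(n_devices))
        successes += (transmitters == 1)
    return successes / n_slots

n = 50
p_opt = 1 / n                                  # offered load G = n*p = 1
sim = simulate_throughput(n, p_opt)
theory = n * p_opt * (1 - p_opt) ** (n - 1)    # -> G*e^-G ~ 1/e for large n

print(f"simulated throughput: {sim:.3f}")
print(f"theoretical         : {theory:.3f}  (1/e ~ {1/math.e:.3f})")
```

At its optimum, fewer than 37% of slots carry a successful packet; the rest are lost to collisions or silence, which motivates the grant-free and NOMA-assisted access schemes discussed earlier in the paper.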
The majority of RA protocols employed in existing wireless solutions, such as 5G and LoRaWAN, are not optimal in networks with a massive number of nodes requesting access. To help deal with this situation, combining modern RA methods with technologies such as NOMA, Orthogonal Frequency-Division Multiplexing (OFDM), massive MIMO, and sparse signal processing could lead to more efficient wireless systems [116]. ### VII-H Privacy concerns in the future smart services The 6G networks are envisioned to offer ubiquitous and unlimited network access for many users and MTC devices. This seamless connectivity is a prominent enabler of future smart services such as smart environments, homes, health, industries, cities, utilities, government, etc. As an example of a smart service, consider a smart lighting system for a home, which can provide more efficient lighting in terms of energy consumption and cost. However, such systems will use and share sensitive and private information, e.g., occupancy times, household information, habits, and preferences. Indeed, the use of such de-identified data by a provider to deliver smart services may in many scenarios be a double-edged sword rather than an unqualified benefit. Given the technological advancements and the availability of rich data flows, another privacy-related challenge is the need for a clear definition of de-identified data. This is especially important when it comes to introducing measures for determining the privacy and sensitivity level of the data in a dataset on any occasion. Integrating blockchain technology can be considered a potential solution to improve the privacy of 6G but, to the best of our knowledge, there are few efforts to apply blockchain in 6G [117]. ### VII-I Green 6G 6G is expected to integrate terrestrial wireless networks with space, aerial, and underwater communications to realize connectivity in 3D coverage. 
Indeed, the primary objective of 6G is to ensure anytime, anywhere network connectivity for a vast number of IoT devices and users. These devices and users impose diverse QoS requirements and generate a massive, heterogeneous amount of traffic that must be handled at very low power consumption, which calls for the design of energy-efficient wireless communication protocols, computing, and transmitter/receiver technologies. Besides the trends and innovative technologies (i.e., the evolutionary and revolutionary technologies) that we mentioned throughout the paper, green communication and green computing will be among the primary goals of the next generation of wireless networks. This is important mainly for reducing overall power consumption and operating costs, which in turn can bring positive effects on environmental and business aspects. Towards this end, making a shift from self-organizing networks to is an increasing trend in the past few years that can be applied in 6G networks. ### VII-J Network Management and Orchestration (NMO), NTMA, and 6G 6G is considered one of the most important network infrastructures for IoT networks. Owing to the diversity of IoT applications, massive-scale IoT networks pose significant challenges to NMO and NTMA techniques [118]. While the former techniques are used to organize the network infrastructure, e.g., fault management and network configuration management [77], the latter methods are broadly used to evaluate networking performance in different aspects, e.g., performance management and security management. Both sets of techniques can strongly affect the QoS and QoE, which are crucial factors in 6G. To the best of our knowledge, there is a shortage of efforts to overcome these challenges in 6G, especially in complex 6G network infrastructures, e.g., D2D-enabled 6G and massive-scale IoT-based 6G networks. 
During the last decade, ML techniques have been proposed to overcome the challenges of NMO [119] and NTMA [66], but there is a considerable lack of research in these fields for 6G, which can be considered as a future research direction. ## VIII Conclusion In this paper, we studied 6G as the next communication paradigm for IoT. We have first discussed the need for 6G networks. Then, we have introduced the potential 6G requirements and trends, as well as the latest research activities related to 6G. Furthermore, the key performance indicators, applications, new services, and the potential key enabling technologies for 6G networks have been presented. Finally, we have presented several potential unresolved challenges for future 6G networks. ## References * [1] Cisco white paper, “Cisco Visual Networking Index; Global mobile data traffic forecast update, 2018–2023,” 2019. * [2] A. Shahraki and Ø. Haugen, “Social ethics in internet of things: An outline and review,” in _2018 IEEE Industrial Cyber-Physical Systems (ICPS)_. IEEE, 2018, pp. 509–516. * [3] M. Cave, “How disruptive is 5G?” _Telecommunications Policy_ , vol. 42, no. 8, pp. 653–658, Sep. 2018. * [4] S. Filin, H. Murakami, K. Ibuka, H. Kawasaki, K. Ishizu, and F. Kojima, “5G and B5G technologies to implement private operators supporting high quality video production in dense user environments,” in _2019 22nd International Symposium on Wireless Personal Multimedia Communications (WPMC)_. IEEE, 2019, pp. 1–6. * [5] Z. Zhang, Y. Xiao, Z. Ma, M. Xiao, Z. Ding, X. Lei, G. K. Karagiannidis, and P. Fan, “6G wireless networks: Vision, requirements, architecture, and key technologies,” _IEEE Vehicular Technology Magazine_ , vol. 14, no. 3, pp. 28–41, Jul. 2019. * [6] L. Zhang, Y.-C. Liang, and D. Niyato, “6G visions: Mobile ultra-broadband, super internet-of-things, and artificial intelligence,” _China Communications_ , vol. 16, no. 8, pp. 1–14, Aug. 2019. * [7] J. Chai, L. Feng, F. Zhou, P. Zhao, P. Yu, and W. 
Li, “Energy-Efficient Resource Allocation Based on Hypergraph 3D Matching for D2D-Assisted mMTC Networks,” in _Proc. IEEE Global Communications Conference (GLOBECOM)_ , Abu Dhabi, United Arab Emirates, Dec. 2018. * [8] D. Jiang and G. Liu, “An overview of 5G requirements,” in _5G Mobile Communications_. Springer, Oct. 2017, pp. 3–26. * [9] S. P. RM, S. Bhattacharya, P. K. R. Maddikunta, S. R. K. Somayaji, K. Lakshmanna, R. Kaluri, A. Hussien, and T. R. Gadekallu, “Load balancing of energy cloud using wind driven and firefly algorithms in internet of everything,” _Journal of Parallel and Distributed Computing_ , Aug. 2020\. * [10] R. Gupta, A. Shukla, and S. Tanwar, “BATS: A blockchain and AI-empowered drone-assisted telesurgery system towards 6G,” _IEEE Transactions on Network Science and Engineering_ , Dec. 2020. * [11] L. U. Khan, I. Yaqoob, M. Imran, Z. Han, and C. S. Hong, “6G wireless systems: A vision, architectural elements, and future directions,” _IEEE Access_ , vol. 8, pp. 147 029–147 044, Aug. 2020. * [12] Z. Lv, L. Qiao, and I. You, “6G-enabled network in box for internet of connected vehicles,” _IEEE Transactions on Intelligent Transportation Systems_ , Nov. 2020. * [13] S. Nayak and R. Patgiri, “6G: Envisioning the key issues and challenges,” _arXiv preprint arXiv:2004.04024_ , Jun. 2020. * [14] S. Zhang, J. Liu, H. Guo, M. Qi, and N. Kato, “Envisioning device-to-device communications in 6G,” _IEEE Network_ , vol. 34, no. 3, pp. 86–91, March 2020. * [15] F. Tang, Y. Kawamoto, N. Kato, and J. Liu, “Future intelligent and secure vehicular network toward 6G: Machine-learning approaches,” _Proceedings of the IEEE_ , vol. 108, no. 2, pp. 292–307, Dec. 2019. * [16] P. Yang, Y. Xiao, M. Xiao, and S. Li, “6G wireless communications: Vision and potential techniques,” _IEEE Network_ , vol. 33, no. 4, pp. 70–75, Jul. 2019. * [17] K. David and H. Berndt, “6G vision and requirements: Is there any need for beyond 5G?” _IEEE Vehicular Technology Magazine_ , vol. 
13, no. 3, pp. 72–80, Jul. 2018. * [18] H. Tataria, M. Shafi, A. F. Molisch, M. Dohler, H. Sjöland, and F. Tufvesson, “6g wireless systems: Vision, requirements, challenges, insights, and opportunities,” _Proceedings of the IEEE_ , pp. 1–34, 2021\. * [19] R. Alghamdi, R. Alhadrami, D. Alhothali, H. Almorad, A. Faisal, S. Helal, R. Shalabi, R. Asfour, N. Hammad, A. Shams _et al._ , “Intelligent surfaces for 6g wireless networks: A survey of optimization and performance analysis techniques,” _IEEE Access_ , 2020. * [20] T. Huang, W. Yang, J. Wu, J. Ma, X. Zhang, and D. Zhang, “A survey on green 6G network: Architecture and technologies,” _IEEE Access_ , vol. 7, pp. 175 758–175 768, Dec. 2019. * [21] W. Saad, M. Bennis, and M. Chen, “A vision of 6G wireless systems: Applications, trends, technologies, and open research problems,” _IEEE Network_ , vol. 34, no. 3, pp. 134–142, May/June. 2019. * [22] K. B. Letaief, W. Chen, Y. Shi, J. Zhang, and Y.-J. A. Zhang, “The roadmap to 6G: AI empowered wireless networks,” _IEEE Communications Magazine_ , vol. 57, no. 8, pp. 84–90, Aug. 2019. * [23] E. C. Strinati, S. Barbarossa, J. L. Gonzalez-Jimenez, D. Ktenas, N. Cassiau, L. Maret, and C. Dehos, “6G: The next frontier: From holographic messaging to artificial intelligence using subterahertz and visible light communication,” _IEEE Vehicular Technology Magazine_ , vol. 14, no. 3, pp. 42–50, Aug. 2019. * [24] F. Tariq, M. R. Khandaker, K.-K. Wong, M. A. Imran, M. Bennis, and M. Debbah, “A speculative study on 6G,” _IEEE Wireless Communications_ , vol. 27, no. 4, pp. 118–125, Aug. 2020. * [25] H. Viswanathan and P. E. Mogensen, “Communications in the 6G era,” _IEEE Access_ , vol. 8, pp. 57 063–57 074, Mar. 2020. * [26] Y. Lu and X. Zheng, “6G: A survey on technologies, scenarios, challenges, and the related issues,” _Journal of Industrial Information Integration_ , vol. 19, p. 100158, Jul. 2020. * [27] G. Gui, M. Liu, F. Tang, N. Kato, and F. 
Adachi, “6G: Opening new horizons for integration of comfort, security, and intelligence,” _IEEE Wireless Communications_ , vol. 27, no. 5, pp. 126–132, Oct. 2020. * [28] S. Dang, O. Amin, B. Shihada, and M.-S. Alouini, “What should 6G be?” _Nature Electronics_ , vol. 3, no. 1, pp. 20–29, Jan. 2020. * [29] W. Jiang, B. Han, M. A. Habibi, and H. D. Schotten, “The road towards 6g: A comprehensive survey,” _IEEE Open Journal of the Communications Society_ , 2021. * [30] X. Li, Q. Wang, M. Liu, J. Li, H. Peng, J. Piran, and L. Li, “Cooperative wireless-powered NOMA relaying for B5G IoT networks with hardware impairments and channel estimation errors,” _IEEE Internet of Things Journal_ , Oct. 2020. * [31] M. Piran and D. Y. Suh, “Learning-driven wireless communications, towards 6G,” in _Proc. IEEE International Conference on Computing, Electronics & Communication Engineering_, London, UK, Aug. 2019. * [32] B. Sliwa, R. Falkenberg, and C. Wietfeld, “Towards cooperative data rate prediction for future mobile and vehicular 6G networks,” in _Proc. 6G Wireless Summit (6G SUMMIT)_ , Levi, Finland, Finland, Mar. 2020, pp. 1–5. * [33] M. J. Piran, Q.-V. Pham, S. R. Islam, S. Cho, B. Bae, D. Y. Suh, and Z. Han, “Multimedia communication over cognitive radio networks from qos/qoe perspective: A comprehensive survey,” _Journal of Network and Computer Applications_ , vol. 172, p. 102759, Dec. 2020. * [34] S. U. Taki, A. Chakrabarty, M. J. Piran, Q.-V. Pham, and D. Y. Suh, “An indoor positioning and navigation system using named data networking,” _IEEE Access_ , vol. 8, pp. 196 408–196 424, Oct. 2020. * [35] M. Woźniak, A. Zielonka, A. Sikora, M. J. Piran, and A. Alamri, “6g-enabled iot home environment control using fuzzy rules,” _IEEE Internet of Things Journal_ , Dec. 2020. * [36] X. Xu, Y. Pan, P. P. M. Y. Lwin, and X. Liang, “3D holographic display and its data transmission requirement,” in _2011 International Conference on Information Photonics and Optical Communications_. 
IEEE, Oct. 2011, pp. 1–4. * [37] A. Clemm, M. F. Zhani, and R. Boutaba, “Network management 2030: Operations and control of network 2030 services,” _Journal of Network and Systems Management_ , vol. 28, no. 721-750, pp. 1–30, Mar. 2020. * [38] M. Katz, M. Matinmikko-Blue, and M. Latva-Aho, “6Genesis flagship program: Building the bridges towards 6G-enabled wireless smart society and ecosystem,” in _Proc. IEEE Latin-American Conference on Communications (LATINCOM)_ , Guadalajara, Mexico, Nov. 2018, pp. 1–9. * [39] R. Song and N. Li, “High speed terahertz communication in the space and terrestrial integrated next generation wireless communication systems,” in _2020 13th UK-Europe-China Workshop on Millimetre-Waves and Terahertz Technologies (UCMMT)_ , Sept. 2020, pp. 1–3. * [40] T. S. Rappaport, Y. Xing, O. Kanhere, S. Ju, A. Madanayake, S. Mandal, A. Alkhateeb, and G. C. Trichopoulos, “Wireless communications and applications above 100 ghz: Opportunities and challenges for 6g and beyond,” _IEEE Access_ , vol. 7, pp. 78 729–78 757, Jun. 2019. * [41] R. Shafin, L. Liu, V. Chandrasekhar, H. Chen, J. Reed, and J. C. Zhang, “Artificial intelligence-enabled cellular networks: A critical path to beyond-5G and 6G,” _IEEE Wireless Communications_ , vol. 27, no. 2, pp. 212–217, Mar. 2020. * [42] S. P. Rout, “6G wireless communication: Its vision, viability, application, requirement, technologies, encounters and research,” in _2020 11th International Conference on Computing, Communication and Networking Technologies (ICCCNT)_. IEEE, Oct. 2020, pp. 1–8. * [43] A. Clemm, M. T. Vega, H. K. Ravuri, T. Wauters, and F. De Turck, “Toward truly immersive holographic-type communication: challenges and solutions,” _IEEE Communications Magazine_ , vol. 58, no. 1, pp. 93–99, Jan. 2020. * [44] G. Berardinelli, N. H. Mahmood, I. Rodriguez, and P. Mogensen, “Beyond 5G wireless IRT for industry 4.0: Design principles and spectrum aspects,” in _Proc. 
IEEE Global Communications Conference Workshops (GC Wkshps)_ , Abu Dhabi, United Arab Emirates, United Arab Emirates, Dec. 2018, pp. 1–6. * [45] J. Sun, F. Liu, Y. Zhou, G. Gui, T. Ohtsuki, S. Guo, and F. Adachi, “Surveillance plane aided air-ground integrated vehicular networks: Architectures, applications, and potential,” _IEEE Wireless Communications_ , 2020. * [46] M. Z. Chowdhury, M. Shahjalal, S. Ahmed, and Y. M. Jang, “6G wireless communication systems: Applications, requirements, technologies, challenges, and research directions,” _IEEE Open Journal of the Communications Society_ , vol. 1, pp. 957–975, Jul. 2020. * [47] Q. Zhang, J. Liu, and G. Zhao, “Towards 5G enabled tactile robotic telesurgery,” _arXiv preprint arXiv:1803.03586_ , 2018. * [48] I. FG-NET2030, “Representative use cases and key network requirements for network 2030,” _FG-NET2030 document NET2030-O-027_ , 2020. * [49] A. Aijaz, “Private 5g: The future of industrial wireless,” _IEEE Industrial Electronics Magazine_ , vol. 14, no. 4, pp. 136–145, 2020. * [50] D. Van Den Berg, R. Glans, D. De Koning, F. A. Kuipers, J. Lugtenburg, K. Polachan, P. T. Venkata, C. Singh, B. Turkovic, and B. Van Wijk, “Challenges in haptic communications over the tactile internet,” _IEEE Access_ , vol. 5, pp. 23 502–23 518, Oct. 2017. * [51] H. S. Dhillon, H. Huang, and H. Viswanathan, “Wide-area wireless communication challenges for the Internet of Things,” _IEEE Communications Magazine_ , vol. 55, no. 2, pp. 168–174, Feb. 2017. * [52] B. Singh, O. Tirkkonen, Z. Li, and M. A. Uusitalo, “Contention-based access for ultra-reliable low latency uplink transmissions,” _IEEE Wireless Communications Letters_ , vol. 7, no. 2, pp. 182–185, 2017. * [53] A. K. Bairagi, M. Munir, M. Alsenwi, N. H. Tran, S. S. Alshamrani, M. Masud, Z. Han, C. S. Hong _et al._ , “Coexistence mechanism between eMBB and uRLLC in 5G wireless networks,” _arXiv preprint arXiv:2003.04551_ , 2020\. * [54] C. Sergiou, M. Lestas, P. Antoniou, C. 
Liaskos, and A. Pitsillides, “Complex systems: A communication networks perspective towards 6G,” _IEEE Access_ , vol. 8, pp. 89 007–89 030, May 2020. * [55] B. Zong, C. Fan, X. Wang, X. Duan, B. Wang, and J. Wang, “6G technologies: Key drivers, core requirements, system architectures, and enabling technologies,” _IEEE Vehicular Technology Magazine_ , vol. 14, no. 3, pp. 18–27, Jul. 2019. * [56] M. Chen, M. Mozaffari, W. Saad, C. Yin, M. Debbah, and C. S. Hong, “Caching in the sky: Proactive deployment of cache-enabled unmanned aerial vehicles for optimized quality-of-experience,” _IEEE Journal on Selected Areas in Communications_ , vol. 35, no. 5, pp. 1046–1061, Mar. 2017. * [57] Y. Zeng, Q. Wu, and R. Zhang, “Accessing from the sky: A tutorial on UAV communications for 5G and beyond,” _Proceedings of the IEEE_ , vol. 107, no. 12, pp. 2327–2375, Dec. 2019. * [58] Y. Zeng, R. Zhang, and T. J. Lim, “Wireless communications with unmanned aerial vehicles: Opportunities and challenges,” _IEEE Communications Magazine_ , vol. 54, no. 5, pp. 36–42, May 2016. * [59] “Joint access and backhaul resource management in satellite-drone networks: A competitive market approach.” * [60] S. K. Routray and H. M. Hussein, “Satellite based IoT networks for emerging applications,” _arXiv preprint arXiv:1904.00520_ , Mar 2019. * [61] M. Chen, U. Challita, W. Saad, C. Yin, and M. Debbah, “Artificial neural networks-based machine learning for wireless networks: A tutorial,” _IEEE Communications Surveys & Tutorials_, vol. 21, no. 4, pp. 3039–3071, Jul. 2019. * [62] M. E. Morocho-Cayamcela, H. Lee, and W. Lim, “Machine learning for 5G/B5G mobile and wireless communications: Potential, limitations, and future directions,” _IEEE Access_ , vol. 7, pp. 137 184–137 206, Sep. 2019. * [63] F. Balali, J. Nouri, A. Nasiri, and T. Zhao, “Data analytics,” in _Data Intensive Industrial Asset Management_. Springer, 2020, pp. 105–113. * [64] M. G. Kibria, K. Nguyen, G. P. Villardi, O. Zhao, K. 
Ishizu, and F. Kojima, “Big data analytics, machine learning, and artificial intelligence in next-generation wireless networks,” _IEEE Access_ , vol. 6, pp. 32 328–32 338, May 2018. * [65] A. Shahraki, M. Abbasi, and Ø. Haugen, “Boosting algorithms for network intrusion detection: A comparative evaluation of real adaboost, gentle adaboost and modest adaboost,” _Engineering Applications of Artificial Intelligence_ , vol. 94, p. 103770, 2020. * [66] M. Abbasi, A. Shahraki, M. J. Piran, and A. Taherkordi, “Deep reinforcement learning for qos provisioning at the mac layer: A survey,” _Engineering Applications of Artificial Intelligence_ , vol. 102, p. 104234, 2021. * [67] N. C. Luong, D. T. Hoang, S. Gong, D. Niyato, P. Wang, Y.-C. Liang, and D. I. Kim, “Applications of deep reinforcement learning in communications and networking: A survey,” _IEEE Communications Surveys & Tutorials_, vol. 21, no. 4, pp. 3133–3174, May 2019. * [68] D. W. K. Ng, T. Q. Duong, C. Zhong, and R. Schober, _Wireless information and power transfer: theory and practice_. John Wiley & Sons, Jan 2019. * [69] M. Di Renzo, A. Zappone, M. Debbah, M.-S. Alouini, C. Yuen, J. de Rosny, and S. Tretyakov, “Smart radio environments empowered by reconfigurable intelligent surfaces: How it works, state of research, and road ahead,” _arXiv preprint arXiv:2004.09352_ , Apr 2020. * [70] N. Shlezinger, O. Dicker, Y. C. Eldar, I. Yoo, M. F. Imani, and D. R. Smith, “Dynamic metasurface antennas for uplink massive MIMO systems,” _IEEE Transactions on Communications_ , vol. 67, no. 10, pp. 6829–6843, Jul. 2019. * [71] Y. Yuan, Y. Zhao, B. Zong, and S. Parolari, “Potential key technologies for 6G mobile communications,” _Science China Information Sciences_ , vol. 63, pp. 1–19, Aug. 2020. * [72] T. Taleb, K. Samdanis, B. Mada, H. Flinck, S. Dutta, and D. 
Sabella, “On multi-access edge computing: A survey of the emerging 5G network edge cloud architecture and orchestration,” _IEEE Communications Surveys & Tutorials_, vol. 19, no. 3, pp. 1657–1681, May 2017. * [73] N. H. Mahmood, H. Alves, O. A. López, M. Shehab, D. P. M. Osorio, and M. Latva-aho, “Six key enablers for machine type communication in 6G,” _arXiv preprint arXiv:1903.05406_ , Mar. 2019. * [74] W. U. Khan, F. Jameel, M. A. Jamshed, H. Pervaiz, S. Khan, and J. Liu, “Efficient power allocation for NOMA-enabled IoT networks in 6G era,” _Physical Communication_ , vol. 39, p. 101043, Apr. 2020. * [75] W. U. Khan, F. Jameel, T. Ristaniemi, S. Khan, G. A. S. Sidhu, and J. Liu, “Joint spectral and energy efficiency optimization for downlink noma networks,” _IEEE Transactions on Cognitive Communications and Networking_ , vol. 6, no. 2, pp. 645–656, Oct. 2020. * [76] H. Li, F. Fang, and Z. Ding, “Joint resource allocation for hybrid NOMA-assisted MEC in 6G networks,” _Digital Communications and Networks_ , vol. 6, no. 3, pp. 241–252, Aug. 2020. * [77] A. Shahraki, A. Taherkordi, Ø. Haugen, and F. Eliassen, “A survey and future directions on clustering: From wsns to iot and modern networking paradigms,” _IEEE Transactions on Network and Service Management_ , Nov. 2020\. * [78] M. Abbasi, A. Shahraki, H. R. Barzegar, and C. Pahl, “Synchronization techniques in” device to device-and vehicular to vehicular-enabled” cellular networks: A survey,” 2021. * [79] S. Zhang, H. Zhang, and L. Song, “Beyond D2D: Full Dimension UAV-to-Everything Communications in 6G,” _IEEE Transactions on Vehicular Technology_ , vol. 69, no. 6, pp. 6592–6602, Apr. 2020. * [80] Q. Bi, “Ten Trends in the Cellular Industry and an Outlook on 6G,” _IEEE Communications Magazine_ , vol. 57, no. 12, pp. 31–36, Dec. 2019. * [81] D. Zucchetto and A. Zanella, “Uncoordinated access schemes for the IoT: approaches, regulations, and performance,” _IEEE Communications Magazine_ , vol. 55, no. 9, pp. 
48–54, Sep. 2017. * [82] M. B. Shahab, R. Abbas, M. Shirvanimoghaddam, and S. J. Johnson, “Grant-free non-orthogonal multiple access for IoT: A survey,” _IEEE Communications Surveys Tutorials_ , vol. 22, no. 3, pp. 1805–1838, May 2020. * [83] G. Wunder, H. Boche, T. Strohmer, and P. Jung, “Sparse signal processing concepts for efficient 5G system design,” _IEEE Access_ , vol. 3, pp. 195–208, Feb. 2015. * [84] L. Liu, E. G. Larsson, W. Yu, P. Popovski, C. Stefanovic, and E. de Carvalho, “Sparse signal processing for grant-free massive connectivity: A future paradigm for random access protocols in the Internet of Things,” _IEEE Signal Processing Magazine_ , vol. 35, no. 5, pp. 88–99, Sep. 2018\. * [85] H. Sarieddeen, M.-S. Alouini, and T. Y. Al-Naffouri, “An overview of signal processing techniques for terahertz communications,” _arXiv preprint arXiv:2005.13176_ , 2020. * [86] C. Huang, S. Hu, G. C. Alexandropoulos, A. Zappone, C. Yuen, R. Zhang, M. D. Renzo, and M. Debbah, “Holographic MIMO surfaces for 6G wireless networks: Opportunities, challenges, and trends,” _IEEE Wireless Communications_ , vol. 27, no. 5, pp. 118–125, Jul. 2020. * [87] J. M. Jornet and I. F. Akyildiz, “Channel modeling and capacity analysis for electromagnetic wireless nanonetworks in the terahertz band,” _IEEE Transactions on Wireless Communications_ , vol. 10, no. 10, pp. 3211–3221, Aug. 2011. * [88] M. Z. Chowdhury, M. T. Hossan, A. Islam, and Y. M. Jang, “A comparative survey of optical wireless technologies: Architectures and applications,” _IEEE Access_ , vol. 6, pp. 9819–9840, Jan. 2018. * [89] S. U. Rehman, S. Ullah, P. H. J. Chong, S. Yongchareon, and D. Komosny, “Visible light communication: A system perspective—overview and challenges,” _Sensors_ , vol. 19, no. 5, pp. 1153–1175, Jan. 2019. * [90] C. Lee, C. Zhang, M. Cantore, R. M. Farrell, S. H. Oh, T. Margalith, J. S. Speck, S. Nakamura, J. E. Bowers, and S. P. 
DenBaars, “4 Gbps direct modulation of 450 nm GaN laser for high-speed visible light communication,” _Optics express_ , vol. 23, no. 12, pp. 16 232–16 237, Jun. 2015. * [91] D. Tsonev, H. Chun, S. Rajbhandari, J. J. McKendry, S. Videv, E. Gu, M. Haji, S. Watson, A. E. Kelly, G. Faulkner _et al._ , “A 3-Gb/s Single-LED OFDM-Based Wireless VLC Link Using a Gallium Nitride,” _IEEE Photonics Technology Letters_ , vol. 26, no. 7, pp. 637–640, Jan. 2014. * [92] S. Viola, M. S. Islim, S. Watson, S. Videv, H. Haas, and A. E. Kelly, “15 Gb/s OFDM-based VLC using direct modulation of 450 GaN laser diode,” in _Advanced Free-Space Optical Communication Techniques and Applications III_ , vol. 10437. International Society for Optics and Photonics, Oct. 2017, p. 104370E. * [93] M. Giordani, M. Polese, M. Mezzavilla, S. Rangan, and M. Zorzi, “Toward 6G networks: Use cases and technologies,” _IEEE Communications Magazine_ , vol. 58, no. 3, pp. 55–61, Mar. 2020. * [94] J. Sun, G. Gui, H. Sari, H. Gacanin, and F. Adachi, “Aviation data lake: Using side information to enhance future air-ground vehicle networks,” _IEEE Vehicular Technology Magazine_ , 2020. * [95] M. Chen, Z. Yang, W. Saad, C. Yin, H. V. Poor, and S. Cui, “A joint learning and communications framework for federated learning over wireless networks,” _IEEE Transactions on Wireless Communications_ , to appear, 2020. * [96] M. Chen, H. V. Poor, W. Saad, and S. Cui, “Wireless communications for collaborative federated learning,” _arXiv preprint arXiv:2006.02499_ , Jun. 2020. * [97] A. Shahraki, M. Geitle, and Ø. Haugen, “A comparative node evaluation model for highly heterogeneous massive-scale internet of things-mist networks,” _Transactions on Emerging Telecommunications Technologies_. * [98] K. A. Ogudo, D. Muwawa Jean Nestor, O. Ibrahim Khalaf, and H. Daei Kasmaei, “A device performance and data analytics concept for smartphones’ IoT services and machine-type communication in cellular networks,” _Symmetry_ , vol. 
11, no. 4, pp. 593–609, Apr. 2019. * [99] X. Wang, Y. Han, V. C. Leung, D. Niyato, X. Yan, and X. Chen, “Convergence of edge computing and deep learning: A comprehensive survey,” _IEEE Communications Surveys & Tutorials_, vol. 22, no. 2, pp. 869–904, Jan. 2020\. * [100] Y. Lu, X. Huang, K. Zhang, S. Maharjan, and Y. Zhang, “Low-latency federated learning and blockchain for edge association in digital twin empowered 6g networks,” _IEEE Transactions on Industrial Informatics_ , pp. 1–1, Aug. 2020. * [101] Y. Liu, X. Yuan, Z. Xiong, J. Kang, X. Wang, and D. Niyato, “Federated learning for 6g communications: Challenges, methods, and future directions,” _arXiv preprint arXiv:2006.02931_ , Jun. 2020. * [102] W. Wei, L. Liu, M. Loper, K.-H. Chow, M. E. Gursoy, S. Truex, and Y. Wu, “A framework for evaluating gradient leakage attacks in federated learning,” _arXiv preprint arXiv:2004.10397_ , Apr. 2020. * [103] T. Li, A. K. Sahu, A. Talwalkar, and V. Smith, “Federated learning: Challenges, methods, and future directions,” _IEEE Signal Processing Magazine_ , vol. 37, no. 3, pp. 50–60, May 2020. * [104] L. Gyongyosi and S. Imre, “A survey on quantum computing technology,” _Computer Science Review_ , vol. 31, pp. 51–71, Feb. 2019. * [105] A. Manzalini, “Quantum communications in future networks and services,” _Quantum Reports_ , vol. 2, no. 1, pp. 221–232, Mar. 2020. * [106] M. M. Wilde and M.-H. Hsieh, “The quantum dynamic capacity formula of a quantum channel,” _Quantum Information Processing_ , vol. 11, no. 6, pp. 1431–1463, Dec. 2012. * [107] Q. Ruihong and M. Ying, “Research progress of quantum repeaters,” vol. 1237, no. 5, pp. 052 032–052 039, Jun. 2019. * [108] B. Fröhlich, J. F. Dynes, M. Lucamarini, A. W. Sharpe, Z. Yuan, and A. J. Shields, “A quantum access network,” _Nature_ , vol. 501, no. 7465, pp. 69–72, Sep. 2013. * [109] C. Cai, Y. Sun, J. Niu, and Y. Ji, “A quantum access network suitable for internetworking optical network units,” _IEEE Access_ , vol. 7, pp. 
92 091–92 099, Jul. 2019. * [110] G. Rubino, L. A. Rozema, F. Massa, M. Araújo, M. Zych, Č. Brukner, and P. Walther, “Experimental entanglement of temporal orders,” in _Quantum Information and Measurement_. Optical Society of America, Apr. 2019, pp. S3B–3. * [111] L. M. Procopio, A. Moqanaki, M. Araújo, F. Costa, I. A. Calafell, E. G. Dowd, D. R. Hamel, L. A. Rozema, Č. Brukner, and P. Walther, “Experimental superposition of orders of quantum gates,” _Nature communications_ , vol. 6, no. 1, pp. 1–6, Aug. 2015. * [112] L. Wang, T. Han, Q. Li, J. Yan, X. Liu, and D. Deng, “Cell-less communications in 5G vehicular networks based on vehicle-installed access points,” _IEEE Wireless Communications_ , vol. 24, no. 6, pp. 64–71, Dec. 2017. * [113] J. Park, S. Samarakoon, M. Bennis, and M. Debbah, “Wireless network intelligence at the edge,” _Proceedings of the IEEE_ , vol. 107, no. 11, pp. 2204–2239, 2019. * [114] E. Yaacoub and M.-S. Alouini, “A key 6G challenge and opportunity—connecting the base of the pyramid: A survey on rural connectivity,” _Proceedings of the IEEE_ , vol. 108, no. 4, pp. 533–582, Mar. 2020. * [115] S. Chen, Y.-C. Liang, S. Sun, S. Kang, W. Cheng, and M. Peng, “Vision, requirements, and technology trend of 6G: How to tackle the challenges of system coverage, capacity, user data-rate and movement speed,” _IEEE Wireless Communications_ , vol. 27, no. 2, pp. 218–228, Feb. 2020. * [116] E. De Carvalho, E. Bjornson, J. H. Sorensen, P. Popovski, and E. G. Larsson, “Random access protocols for massive MIMO,” _IEEE Communications Magazine_ , vol. 55, no. 5, pp. 216–222, Apr. 2017. * [117] T. Hewa, G. Gür, A. Kalla, M. Ylianttila, A. Bracken, and M. Liyanage, “The role of blockchain in 6g: challenges, opportunities and research directions,” in _2020 2nd 6G Wireless Summit (6G SUMMIT)_. IEEE, 2020, pp. 1–5. * [118] A. Shahraki, A. Taherkordi, and Ø. Haugen, “Tonta: Trend-based online network traffic analysis in ad-hoc iot networks,” _Computer Networks_ , p. 
108125, 2021. * [119] S. Ayoubi, N. Limam, M. A. Salahuddin, N. Shahriar, R. Boutaba, F. Estrada-Solano, and O. M. Caicedo, “Machine learning for cognitive network management,” _IEEE Communications Magazine_ , vol. 56, no. 1, pp. 158–165, 2018.
The Ext-algebra of the Brauer tree algebra associated to a line Olivier Dudas Université de Paris and Sorbonne Université, CNRS, IMJ-PRG, F-75006 Paris, France. The author gratefully acknowledges financial support by the ANR, Project No ANR-16-CE40-0010-01. We compute the $\mathsf{Ext}$-algebra of the Brauer tree algebra associated to a line with no exceptional vertex. § INTRODUCTION This note provides a detailed computation of the $\mathsf{Ext}$-algebra for a very specific finite dimensional algebra, namely a Brauer tree algebra associated to a line, with no exceptional vertex. Such algebras appear for example as the principal $p$-block of the symmetric group $\mathfrak{S}_p$, and, in a different context, as blocks of the Verlinde categories $\mathsf{Ver}_{p^2}$ studied by Benson–Etingof in [2] (our computation is actually motivated by <cit.>). Let us emphasise that $\mathsf{Ext}$-algebras for more general biserial algebras were explicitly computed by Green–Schroll–Snashall–Taillefer in [4], but under an assumption on the multiplicity of the vertices which is not satisfied for the simple example treated in this note. Other general results relying on Auslander–Reiten theory were obtained by Antipov–Generalov [1] and Brown [3]. However, we did not manage to use their work to get an explicit description in our case. Nevertheless, the simple structure of the projective indecomposable modules for the line allows a straightforward approach using explicit projective resolutions of simple modules. The Poincaré series for the $\mathsf{Ext}$-algebra is given in Proposition <ref> and its structure as a path algebra with relations is given in Proposition <ref>. § ACKNOWLEDGMENTS We thank Raphaël Rouquier and Rachel Taillefer for providing helpful references. § NOTATION Let $\mathbb{F}$ be a field, and $A$ be a self-injective finite dimensional $\mathbb{F}$-algebra. All $A$-modules will be assumed to be finitely generated. 
Given an $A$-module $M$, we denote by $\Omega(M)$ the kernel of a projective cover $P\twoheadrightarrow M$. Up to isomorphism it does not depend on the cover. We then define inductively $\Omega^n(M) = \Omega(\Omega^{n-1}(M))$ for $n \geq 1$. To compute the extension groups between simple modules we will use the property that if $\Omega^n(M)$ is indecomposable and non-projective then $$\mathsf{Ext}_A^n(M,S) \simeq \mathsf{Hom}_{A}(\Omega^n(M),S)$$ for every simple $A$-module $S$ and all $n \geq 1$. For computing the algebra structure on the various $\mathsf{Ext}$-groups it will be convenient to work in the homotopy category $\mathsf{Ho}(A)$ of complexes of finitely generated $A$-modules. If $S$ (resp. $S'$) is a simple $A$-module, and ${P}_\bullet \rightarrow S$ (resp. $P_\bullet' \rightarrow S'$) is a projective resolution, then $$ \mathsf{Ext}_A^n(S,S') \simeq \mathsf{Hom}_{\mathsf{Ho(A)}}(P_\bullet,P_\bullet'[n])$$ with the Yoneda product being given by the composition of maps in $\mathsf{Ho}(A)$. Assume now that $A$ is the $\mathbb{F}$-algebra associated to the following Brauer tree with $N+1$ vertices: $$\begin{tikzpicture} \node[shape=circle,draw=black] (A) at (0,0) {}; \node[shape=circle,draw=black] (B) at (2,0) {}; \node[shape=circle,draw=black] (C) at (4,0) {}; \node[shape=circle,draw=black] (D) at (8,0) {}; \node[shape=circle,draw=black] (E) at (10,0) {}; \path (A) edge node[above] {$S_1$} (B); \path (B) edge node[above] {$S_2$} (C); \path[dashed] (C) edge node[above] {} (D); \path (D) edge node[above] {$S_N$} (E); \end{tikzpicture}$$ Here, unlike in [4], we assume that there is no exceptional vertex. The edges are labelled by the simple $A$-modules $S_1, \ldots,S_{N}$. 
The head and socle of $P_i$ are isomorphic to $S_i$, and $\mathsf{rad}(P_i)/\mathsf{soc}(P_i) \simeq S_{i-1}\oplus S_{i+1}$ with the convention that $S_0 = S_{N+1} = 0$.

§ EXT-GROUPS

Given $1 \leq i \leq j \leq N$ with $i-j$ even, there is, up to isomorphism, a unique non-projective indecomposable module ${}^i \sfX^j$ such that
* $\mathsf{rad}({}^i \sfX^j) = S_{i+1} \oplus S_{i+3} \oplus \cdots \oplus S_{j-1}$
* $\mathsf{hd}({}^i \sfX^j) = S_{i} \oplus S_{i+2} \oplus \cdots \oplus S_{j}$.
The structure of ${}^i \sfX^j$ can be represented by the following diagram:
$$\begin{tikzcd}
&[-20pt] S_i \ar[rdd,dash]&[-30pt] &[-30pt] S_{i+2} \ar[rdd,dash]&[-30pt] &[-30pt] S_{i+4}&[-20pt] \cdots &[-20pt] S_{j-2} \ar[rdd,dash] &[-30pt] &[-30pt] S_j \\[-15pt] {}^i \sfX^j = & & & & & & \cdots & & \\[-15pt] & & S_{i+1} \ar[ruu,dash] & & S_{i+3} \ar[ruu,dash] & & \cdots & & S_{j-1} \ar[ruu,dash] \end{tikzcd}$$
Similarly we denote by ${}_i \sfX_j$ the unique indecomposable module with the following structure:
$$\begin{tikzcd}
&[-20pt] &[-30pt] S_{i+1} \ar[rdd,dash] &[-30pt] &[-30pt] S_{i+3} \ar[rdd,dash] &[-30pt] &[-20pt] \cdots &[-20pt] &[-30pt] S_{j-1} \ar[rdd,dash] &[-30pt] \\[-15pt] {}_i \sfX_j = & & & & & & \cdots & & \\[-15pt] &S_i \ar[ruu,dash]& & S_{i+2} \ar[ruu,dash]& & S_{i+4}& \cdots & S_{j-2} \ar[ruu,dash] & & S_j \end{tikzcd}$$
Finally, in the case where $i-j$ is odd we define the modules ${}_i \sfX^j$ and ${}^i \sfX_j$ as the indecomposable modules with the following respective structure:
$$\begin{tikzcd}
&[-20pt] &[-30pt] S_{i+1} \ar[rdd,dash] &[-30pt] &[-30pt] S_{i+3} \ar[rdd,dash] &[-30pt] &[-20pt] \cdots &[-20pt] &[-30pt] S_{j} \\[-15pt] {}_i \sfX^j = & & & & & & \cdots & & \\[-15pt] &S_i \ar[ruu,dash]& & S_{i+2} \ar[ruu,dash]& & S_{i+4}& \cdots & S_{j-1} \ar[ruu,dash] & \end{tikzcd}$$
$$\begin{tikzcd}
&[-20pt] S_i \ar[rdd,dash]&[-30pt] &[-30pt] S_{i+2} \ar[rdd,dash]&[-30pt] &[-30pt] S_{i+4}&[-20pt] \cdots &[-20pt] S_{j-1} \ar[rdd,dash] &[-30pt] \\[-15pt] {}^i \sfX_j = & & & & & & \cdots & & \\[-15pt] & & S_{i+1} \ar[ruu,dash] & & S_{i+3}
\ar[ruu,dash] & & \cdots & & S_{j} \end{tikzcd}$$ For convenience we will extend the notation ${}^i \sfX^j$, ${}_i \sfX_j$, ${}_i \sfX^j$ and ${}^i \sfX_j$ to any integers $i,j \in \mathbb{Z}$ (with the suitable parity condition on $i-j$) so that the following relations hold: \begin{equation} \label{eq:xij} {}^i \sfX = {}_{1-i} \sfX, \qquad {}^i \sfX^j = {}_{j} \sfX_i, \qquad {}^{i \pm 2N} \sfX = {}^i \sfX. \end{equation} Note that this also implies $\sfX^{j} = \sfX_{1-j}$ and $\sfX^{j\pm 2N} = \sfX^j$. Let $i,j \in \mathbb{Z}$ with $i-j$ even. Then $$\Omega ( {}^i \sfX^j) \simeq {}^{i-1} \sfX^{j+1}.$$ Using the relations (<ref>) it is enough to prove that for $1 \leq k \leq l \leq N$ we have the following isomorphisms $$ \Omega ( {}^k \sfX^l) \simeq {}^{k-1} \sfX^{l+1}, \quad \Omega ( {}_k \sfX^l) \simeq {}_{k+1} \sfX^{l+1}, \quad \Omega ( {}^k \sfX_l) \simeq {}^{k-1} \sfX_{l-1}, \quad \Omega ( {}_k \sfX_l) \simeq {}_{k+1} \sfX_{l-1}.$$ We only prove the first one; the others are similar. If $1 \leq k \leq l \leq N$, a projective cover of ${}^k \sfX^l$ is given by $P_k \oplus P_{k+2} \oplus \cdots \oplus P_l \twoheadrightarrow {}^k \sfX^l$, whose kernel is isomorphic to ${}^{k-1} \sfX^{l+1}$. Note that this holds even when $k=1$ since ${}^0\sfX^{l+1} = {}_1 \sfX^{l+1}$ or when $l = N$ since ${}^{k-1} \sfX^{N+1} = {}^{k-1} \sfX^{-N+1} = {}^{k-1} \sfX_{N}$. We deduce from Lemma <ref> that for any simple module $S_i$ and for all $k \geq 0$ we have $$ \Omega^k (S_i) = \Omega^k ({}^i\sfX^i) \simeq {}^{i-k} \sfX^{i+k}$$ as $A$-modules. Consequently we have $$\mathsf{Ext}_A^k(S_i,S_j) = \left\{ \begin{array}{ll} \mathbb{F} & \text{if $S_j$ appears in the head of ${}^{i-k} \sfX^{i+k}$}, \\ 0 & \text{otherwise.} \end{array}\right.$$ From this description one can compute explicitly the Poincaré series of the $\mathsf{Ext}$-groups.
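Before doing so, note that the relations (<ref>) and the syzygy formula lend themselves to a mechanical check. The following sketch (our own notation, not part of the argument) stores an end of a diagram as an index together with a head/socle decoration and repeatedly applies $i \mapsto 1-i$ (which toggles the decoration) and $i \mapsto i - 2N$ until both indices lie in $\{1,\ldots,N\}$; it confirms, for instance, that $\Omega^N(S_i) = S_{N+1-i}$ and $\Omega^{2N}(S_i) = S_i$.

```python
def normalize(N, a, da, b, db):
    """Bring the label of {}^a X^b (and its variants) into the fundamental
    range 1 <= index <= N.  The moves i -> 1 - i (exchanging a head 'hd' and
    a socle 'soc' decoration) and i -> i - 2N encode the relations (eq:xij)."""
    def one_end(i, d):
        while not (1 <= i <= N):
            if i < 1:
                i, d = 1 - i, ('soc' if d == 'hd' else 'hd')
            else:
                i -= 2 * N
        return i, d
    a, da = one_end(a, da)
    b, db = one_end(b, db)
    return a, da, b, db

def omega_label(N, i, k):
    # Omega^k(S_i) = {}^{i-k} X^{i+k}, returned in normalized form
    return normalize(N, i - k, 'hd', i + k, 'hd')
```

In particular `omega_label(N, i, N)` returns both ends equal to $N+1-i$ with socle decorations, i.e. the simple module $S_{N+1-i}$, and `omega_label(N, i, 2*N)` returns $S_i$ again, exhibiting the $2N$-periodicity of the minimal resolution.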
Given $1 \leq i,j\leq N$, the Poincaré series of $\mathsf{Ext}_A^\bullet(S_i,S_j)$ is given by $$ \sum_{k \geq 0} \mathsf{dim}_\mathbb{F} \, \mathsf{Ext}_A^k(S_i,S_j) t^k= \frac{Q_{i,j}(t) + t^{2N-1} Q_{i,j}(t^{-1})}{1-t^{2N}}$$ where $Q_{i,j}(t) = t^{|j-i|} + t^{|j-i|+2} + \cdots + t^{N-1-|N+1-j-i|}$. Without loss of generality we can assume that $i \leq j$. Let $k \in \{0,\ldots,N-1\}$. If $i+j \leq N+1$, the simple module $S_j$ appears in the head of ${}^{i-k} \sfX^{i+k}$ if and only if $k=j-i, j-i+2, \ldots, j+i-2$. The limit cases are indeed ${}^{2i-j} \sfX^{j}$ for $k = j-i$ and ${}^{2-j} \sfX^{2i+j-2} = {}_{j-1}\sfX^{2i+j-2}$ for $k = j+i-2$. Note that if $j-i \leq k \leq i+j-2$ then $j \leq i+k$ and $j \leq 2N-i-k$, so that $S_j$ appears in the head of ${}^{i-k} \sfX^{i+k} = {}^{i-k} \sfX_{2N-i-k+1}$ whenever $k$ has the suitable parity. If $i+j > N+1$ one must ensure that $j \leq 2N-i-k$, and therefore $S_j$ appears in the head of ${}^{i-k} \sfX^{i+k}$ if and only if $k=j-i, j-i+2, \ldots, 2N-i-j$. Consequently we have \begin{equation} \label{eq:ext} \begin{aligned} \sum_{k = 0}^{N-1} \mathsf{dim}_\mathbb{F} \, \mathsf{Ext}_A^k(S_i,S_j) t^k & \, = t^{j-i} + t^{j-i+2} + \cdots + t^{N-1-|N+1-j-i|} \\[-10pt] & \, = t^{|j-i|} + t^{|j-i|+2} + \cdots + t^{N-1-|N+1-j-i|} \\ & \, = Q_{i,j}(t).\end{aligned} \end{equation} Now the relation $$\Omega^N (S_i) = {}^{i-N} \sfX^{i+N} = {}_{1+N-i}\sfX_{1-N-i} = {}_{1+N-i}\sfX_{1+N-i} = S_{N+1-i}$$ yields $$\sum_{k = 0}^{2N-1} \mathsf{dim}_\mathbb{F} \, \mathsf{Ext}_A^k(S_i,S_j) t^k = \sum_{k = 0}^{N-1} \mathsf{dim}_\mathbb{F} \, \mathsf{Ext}_A^k(S_i,S_j) t^k + t^N \sum_{k = 0}^{N-1} \mathsf{dim}_\mathbb{F} \, \mathsf{Ext}_A^k(S_{N+1-i},S_j) t^k,$$ and the proposition follows from (<ref>) after observing that $Q_{N+1-i,j}(t) = t^{N-1} Q_{i,j}(t^{-1})$.
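The proposition can be cross-checked numerically: the dimensions obtained from the head of $\Omega^k(S_i)$ (the criterion in the proof, together with the periodicity $\Omega^N(S_i)=S_{N+1-i}$) must agree with the power-series expansion of the closed formula. A small self-contained sketch, with function names of our own choosing:

```python
def ext_dim(N, i, j, k):
    # dim_F Ext^k(S_i, S_j): for 0 <= k <= N-1 it is 1 exactly when
    # k = j - i (mod 2) and |j-i| <= k <= N-1-|N+1-i-j|; for k >= N use
    # Omega^N(S_i) = S_{N+1-i} to shift the degree down by N.
    if k >= N:
        return ext_dim(N, N + 1 - i, j, k - N)
    if (k - (j - i)) % 2 != 0:
        return 0
    return 1 if abs(j - i) <= k <= N - 1 - abs(N + 1 - i - j) else 0

def poincare_coeffs(N, i, j, kmax):
    # series coefficients of (Q_{i,j}(t) + t^{2N-1} Q_{i,j}(1/t)) / (1 - t^{2N});
    # dividing by 1 - t^{2N} makes the coefficient sequence 2N-periodic
    num = [0] * (2 * N)
    for k in range(abs(j - i), N - abs(N + 1 - i - j), 2):
        num[k] += 1              # contribution of Q_{i,j}(t)
        num[2 * N - 1 - k] += 1  # contribution of t^{2N-1} Q_{i,j}(1/t)
    return [num[k % (2 * N)] for k in range(kmax + 1)]
```

For every small $N$ and all pairs $(i,j)$ the two computations coincide, which in particular exercises the identity $Q_{N+1-i,j}(t) = t^{N-1} Q_{i,j}(t^{-1})$ used at the end of the proof.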
§ ALGEBRA STRUCTURE §.§ Minimal resolution Given $1\leq i \leq N- 1$ we fix non-zero maps $f_i : P_{i} \longrightarrow P_{i+1}$ and $f_i^* : P_{i+1} \longrightarrow P_{i}$ such that $f_i^* \circ f_i + f_{i-1} \circ f_{i-1}^* = 0$ for all $2\leq i \leq N- 1$. Given $1 \leq i \leq j \leq N$ with $j-i$ even we denote by ${}_iP_j$ the following projective $A$-module $$ {}_iP_j := P_i \oplus P_{i+2} \oplus \cdots \oplus P_{j-2} \oplus P_{j}.$$ For $1\leq i < j \leq N$ with $j-i$ even we let $d_{i,j} : {}_{i}P_{j} \longrightarrow {}_{i+1}P_{j-1}$ be the morphism of $A$-modules corresponding to the following matrix: $$ d_{i,j} = \begin{bmatrix} f_{i} & f_{i+1}^* & 0 & \cdots & \cdots & 0 \\ 0 & f_{i+2} & f_{i+3}^* & 0 & & \vdots \\ \vdots & \ddots & \ddots& \ddots & \ddots & \vdots\\ \vdots & & \ddots & \ddots & \ddots & 0 \\ 0 & \cdots & \cdots & 0 & f_{j-2} & f_{j-1}^* \end{bmatrix}$$ The definition of ${}_iP_j$ extends to any integers $i,j \in \mathbb{Z}$ with the convention that \begin{equation}\label{eq:pij} {}_iP_j = {}_{j+1}P_{i-1}, \quad {}_{i}P_{-j} = {}_{i}P_j, \quad {}_iP_{j\pm2N} = {}_iP_{j}. \end{equation} Note that these relations imply ${}_{1-i}P_{j} = {}_{1+i}P_j$ and ${}_{i\pm2N}P_{j} = {}_iP_{j}$. Furthermore, the definition of $d_{i,j}$ extends naturally to any pair $i,j$ if we set in addition $$d_{i,i} = (-1)^i f_i^* f_i = (-1)^{i-1} f_{i-1} f_{i-1}^*,$$ a map from ${}_iP_i = P_i$ to ${}_{i+1}P_{i-1} = P_i$. With this notation one checks that for all $k > 0$ the image of the map $d_{i\mn k,i\pl k} : {}_{i-k}P_{i+k} \longrightarrow {}_{i-k+1}P_{i+k-1}$ is isomorphic to ${}^{i-k} \sfX^{i+k} \simeq \Omega^k(S_i)$ so that the bounded above complex $$R_i := \cdots \xrightarrow{d_{i\mn k\mn1,i\pl k\pl1}} {}_{i\mn k}P_{i\pl k} \xrightarrow{d_{i\mn k,i\pl k}} \cdots \xrightarrow{d_{i\mn 2,i\pl 2}} {}_{i\mn 1}P_{i\pl 1} \xrightarrow{d_{i\mn1,i\pl1}} P_i \longrightarrow 0$$ forms a minimal projective resolution of $S_i$. 
§.§ Generators and relations

We will consider two kinds of generators for the $\mathsf{Ext}$-algebra, of respective degrees $1$ and $N$. We start by defining a map $x_i \in \mathsf{Hom}_{\mathsf{Ho}(A)}(R_i,R_{i+1}[1])$ for any $1\leq i \leq N-1$. Let $k$ be a positive integer. If $k \notin N\mathbb{Z}$, the projective modules ${}_{i\mn k}P_{i\pl k}$ and ${}_{i\pl 1\mn(k\mn1)}P_{i\pl1\pl(k\mn1)} = {}_{i\mn k\pl2}P_{i\pl k}$ have at least one indecomposable summand in common and we can consider the map $X_{i,k} : {}_{i\mn k}P_{i\pl k} \longrightarrow {}_{i\mn k\pl2}P_{i\pl k}$ given by the identity map on the common factors. If $k \in N+2N\mathbb{Z}$ then from the relations (<ref>) we have $${}_{i\mn k}P_{i\pl k} = {}_{i\mn N}P_{i\pl N} = {}_{i\pl N\pl1}P_{i\mn N\mn1} ={}_{\mn i\mn N+1}P_{\mn i\pl N\pl1} = P_{N\pl1\mn i }$$ $${}_{i\mn k\pl 2}P_{i\pl k} = {}_{i\mn N\pl2}P_{i\pl N} = {}_{N\mn i}P_{\mn N\mn i} =P_{N\mn i }.$$ In that case we set $X_{i,k} := (-1)^{N-i}f_{N-i}^*$. If $k \in 2N\mathbb{Z}$ then ${}_{i\mn k}P_{i\pl k} = P_i$, ${}_{i\mn k\pl 2}P_{i\pl k} = {}_{i+2}P_i = P_{i+1}$ and we set $X_{i,k} := (-1)^i f_i$. If $k \leq 0$ we set $X_{i,k} := 0$. Then the family of morphisms of $A$-modules $X_i := (X_{i,k})_{k\in \mathbb{Z}}$ defines a morphism of complexes of $A$-modules from $R_i$ to $R_{i+1}[1]$ and we denote by $x_i$ its image in $\mathsf{Ho}(A)$. Similarly we define a map $X_i^* : R_{i+1} \longrightarrow R_{i}[1]$ by exchanging the roles of $f$ and $f^*$. More precisely we consider in that case $X_{i,-N}^* := (-1)^{N-i}f_{N-i}$ and $X_{i,-2N}^* := (-1)^{i} f_{i}^*$. We denote by $x_i^*$ the image of $X_i^*$ in $\mathsf{Ho}(A)$. Assume now that $1\leq i \leq N$. The modules ${}_{i\mn k}P_{i\pl k}$ and ${}_{(N\pl1\mn i)\mn (k\mn N)}P_{(N\pl1\mn i)\pl (k\mn N)}$ are equal, which means that starting from the degree $-N$, the terms of the complexes $R_i$ and $R_{N+1-i}[N]$ coincide.
We denote by $Y_i : R_i \longrightarrow R_{N+1-i}[N]$ the natural projection from $R_i$ onto its obvious truncation in degrees $\leq -N$, and by $y_i$ its image in $\mathsf{Ho}(A)$. The following relations hold in $\mathrm{End}_{\mathsf{Ho}(A)}^\bullet(\bigoplus R_i)$: $\mathrm{(a)}$ $ x_1^*\circ x_1 = x_{N-1} \circ x_{N-1}^* = 0$; $\mathrm{(b)}$ $x_i \circ x_i^* = x_{i+1}^* \circ x_{i+1} $ for all $i = 1,\ldots,N-2$; $\mathrm{(c)}$ $y_{i+1} \circ x_i =x_{N-i}^* \circ y_i $ for all $i = 1,\ldots,N-1$; $\mathrm{(d)}$ $y_{i} \circ x_i^* = x_{N-i} \circ y_{i+1} $ for all $i = 1,\ldots,N-1$. If $N=1$ there is no relation to check. Therefore we assume $N \geq 2$. The relations in (a) follow from the fact that $\mathsf{Ext}^2_A(S_1,S_1) = \mathsf{Ext}^2_A(S_N,S_N) =0$, which is for example a consequence of Proposition <ref> when $N \geq 2$. To show (c), we observe that the morphism of complexes $X_{i} : R_i \longrightarrow R_{i+1}[1]$ defined above coincides with $X_{N-i}^*[N] : R_{N+1-i}[N] \longrightarrow R_{N-i}[N+1]$ in degrees less than $-N$. Since $Y_i$ and $Y_{i+1}$ are just the obvious truncation maps we actually have $Y_{i+1} \circ X_i =X_{N-i}^* \circ Y_i$. The relation (d) is obtained by a similar argument. We now consider (b). The morphisms of complexes $X_i \circ X_i^*$ and $X_{i+1}^* \circ X_{i+1}$ coincide at every degree $k$ except when $k$ is congruent to $0$ or $-1$ modulo $N$. Let us first look in detail at the degrees $-N$ and $-N-1$.
The map $X_i \circ X_i^*$ is as follows: $$ \begin{tikzcd}[ampersand replacement=\&] \cdots \arrow[r] \arrow[d]\&[10pt] P_{N\mn1\mn i} \oplus P_{N\pl 1\mn i} \arrow[r,"{\begin{bmatrix} f_{N\mn1\mn i} & f_{N\mn i}^* \end{bmatrix}}"] \arrow[d,"{\begin{bmatrix} 0 & 1 \end{bmatrix}}"] \&[10pt] P_{N\mn i} \arrow[r,"(-1)^{N\mn i}f_{N\mn i}^* \circ f_{N\mn i}"] \arrow[d,"(-1)^{N\mn i}f_{N\mn i}"] \&[10pt] P_{N\mn i} \arrow[d,"{\begin{bmatrix} 1 \\ 0 \end{bmatrix}}"] \\[40pt] P_{N\mn i} \oplus P_{N\pl2\mn i} \arrow[r,"{\begin{bmatrix} f_{N\mn i} & f_{N\pl1\mn i}^* \end{bmatrix}}"] \arrow[d,"{\begin{bmatrix} 1 & 0 \end{bmatrix}}"]\& P_{N\pl1\mn i} \arrow[r,"(-1)^{N\mn i}f_{N\mn i} \circ f_{N\mn i}^*"] \arrow[d,"(-1)^{N\mn i}f_{N\mn i}^*"] \& P_{N\pl1\mn i} \arrow[r,"{\begin{bmatrix} f_{N\mn i}^*\\ f_{N\pl1\mn i} \end{bmatrix}}"] \arrow[d,"{\begin{bmatrix} 0 \\ 1 \end{bmatrix}}"] \& P_{N\mn i} \oplus P_{N\pl2\mn i} \arrow[d] \\[40pt] P_{N\mn i} \arrow[r,"(-1)^{N\mn i}f_{N\mn i}^* \circ f_{N\mn i}"] \& P_{N\mn i} \arrow[r,"{\begin{bmatrix} f_{N\mn1 \mn i}^*\\ f_{N\mn i} \end{bmatrix}}"] \& P_{N\mn1 \mn i} \oplus P_{N\pl 1\mn i} \arrow[r] \& \cdots \end{tikzcd}$$ whereas the map $X_{i+1}^* \circ X_{i+1}$ corresponds to the following composition: $$ \begin{tikzcd}[ampersand replacement=\&] \cdots \arrow[r] \arrow[d]\&[10pt] P_{N\mn1\mn i} \oplus P_{N\pl 1\mn i} \arrow[r,"{\begin{bmatrix} f_{N\mn1\mn i} & f_{N\mn i}^* \end{bmatrix}}"] \arrow[d,"{\begin{bmatrix} 1 & 0 \end{bmatrix}}"] \&[10pt] P_{N\mn i} \arrow[r,"(-1)^{N\mn i}f_{N\mn i}^* \circ f_{N\mn i}"] \arrow[d,"(-1)^{N\mn1\mn i}f_{N\mn1\mn i}^*"] \&[10pt] P_{N\mn i} \arrow[d,"{\begin{bmatrix} 0 \\ 1 \end{bmatrix}}"] \\[40pt] P_{N\mn2\mn i} \oplus P_{N\mn i} \arrow[r,"{\begin{bmatrix} f_{N\mn2\mn i} & f_{N\mn1\mn i}^* \end{bmatrix}}"] \arrow[d,"{\begin{bmatrix} 0 & 1 \end{bmatrix}}"]\& P_{N\mn1\mn i} \arrow[r,"(-1)^{N\mn1\mn i}f_{N\mn1\mn i}^* \circ f_{N\mn1\mn i}"] \arrow[d,"(-1)^{N\mn1\mn i}f_{N\mn1\mn i}"] \& P_{N\mn1\mn i} \arrow[r,"{\begin{bmatrix} f_{N\mn2\mn i}^*\\ f_{N\mn1\mn i} \end{bmatrix}}"] \arrow[d,"{\begin{bmatrix} 1 \\ 0 \end{bmatrix}}"] \& P_{N\mn2\mn i} \oplus P_{N\mn i} \arrow[d] \\[40pt] P_{N\mn i} \arrow[r,"(-1)^{N\mn i}f_{N\mn i}^* \circ f_{N\mn i}"] \& P_{N\mn i} \arrow[r,"{\begin{bmatrix} f_{N\mn1 \mn i}^*\\ f_{N\mn i} \end{bmatrix}}"] \& P_{N\mn1 \mn i} \oplus P_{N\pl 1\mn i} \arrow[r] \& \cdots \end{tikzcd}$$ We deduce that at the degrees $-N$ and $-N-1$ the map $X_i \circ X_i^* - X_{i+1}^* \circ X_{i+1}$ is given by $$ \begin{tikzcd}[ampersand replacement=\&] P_{N\mn1\mn i} \oplus P_{N\pl 1\mn i} \arrow[r,"{\begin{bmatrix} f_{N\mn1\mn i} & f_{N\mn i}^* \end{bmatrix}}"] \arrow[d,swap,"{(-1)^{N\mn i} \begin{bmatrix} f_{N\mn1\mn i} & f_{N\mn i}^* \end{bmatrix}}"] \&[10pt] P_{N\mn i} \arrow[d,"(-1)^{N\mn i}{\begin{bmatrix} f_{N\mn1\mn i}^* \\ f_{N\mn i} \end{bmatrix}}"] \\[60pt] P_{N\mn i} \arrow[r,"{\begin{bmatrix} f_{N\mn1 \mn i}^*\\ f_{N\mn i} \end{bmatrix}}"] \& P_{N\mn1 \mn i} \oplus P_{N\pl 1\mn i} \end{tikzcd} $$ A similar picture holds at the degrees $-2N$ and $-2N-1$: $$ \begin{tikzcd}[ampersand replacement=\&] P_{i} \oplus P_{i\pl2} \arrow[r,"{\begin{bmatrix} f_{i} & f_{i\pl1}^* \end{bmatrix}}"] \arrow[d,swap,"{(-1)^{i} \begin{bmatrix} f_{i} & f_{i\pl1}^* \end{bmatrix}}"] \&[10pt] P_{i\pl1} \arrow[d,"(-1)^{i}{\begin{bmatrix} f_{i}^* \\ f_{i+1} \end{bmatrix}}"] \\[60pt] P_{i+1} \arrow[r,"{\begin{bmatrix} f_{i}^*\\ f_{i\pl1} \end{bmatrix}}"] \& P_{i} \oplus P_{i\pl2} \end{tikzcd} $$ Using the map $s : R_{i+1} \rightarrow R_{i+1}[1]$ defined by $$s_k := \left\{ \begin{array}{ll} (-1)^{N-i} \mathrm{Id}_{P_{N-i}} & \text{if $-k \in N + 2N\mathbb{N}$}, \\ (-1)^{i} \mathrm{Id}_{P_{i+1}} & \text{if $-k \in 2N + 2N\mathbb{N}$}, \\ 0 & \text{otherwise}, \end{array}\right.$$ we see that $X_i \circ X_i^* - X_{i+1}^* \circ X_{i+1}$ is null-homotopic, which proves that $x_i \circ x_i^* - x_{i+1}^* \circ x_{i+1}$ is zero in
$\mathsf{Hom}_{\mathsf{Ho}(A)}(R_{i+1},R_{i+1}[2])$. The next proposition shows that the relations given in Lemma <ref> are actually enough to describe the $\mathsf{Ext}$-algebra. We use here the concatenation of paths as opposed to the composition of arrows, which explains the discrepancy in the relations. The $\mathsf{Ext}$-algebra of $A$ is isomorphic to the path algebra associated with the following quiver $$ \begin{tikzcd} S_1 \arrow[r,bend left,swap,"x_1"] \arrow[rrrrrr,bend left=45,swap,"y_1"] & S_2 \arrow[l,bend left,swap,"x_1^*"] \arrow[r,bend left,swap,"x_2"] \arrow[rrrr,bend left=45,swap,"y_2"] & \arrow[l,bend left,swap,"x_2^*"] S_3 \arrow[rr,bend left=45,swap,"y_3"] &[-8mm] \cdots&[-8mm] S_{N-2} \arrow[ll,bend left=45,swap,"y_{N-2}"] \arrow[r,bend left,swap,"x_{N-2}"] & S_{N-1} \arrow[l,bend left,swap,"x_{N-2}^*"] \arrow[llll,bend left=45,swap,"y_{N-1}"] \arrow[r,bend left,swap,"x_{N-1}"]& \arrow[l,bend left,swap,"x_{N-1}^*"] S_{N\hphantom{-1}} \arrow[llllll,bend left=45,swap,"y_N"] \end{tikzcd} $$ with $x_i$'s of degree $1$ and $y_i$'s of degree $N$, subject to the relations $\mathrm{(a)}$ $x_1 x_1^* = x_{N-1}^*x_{N-1} = 0$; $\mathrm{(b)}$ $x_i^* x_i = x_{i+1}x_{i+1}^* $ for all $i = 1,\ldots,N-2$; $\mathrm{(c)}$ $x_i y_{i+1} = y_i x_{N-i}^* $ for all $i = 1,\ldots,N-1$; $\mathrm{(d)}$ $x_i^* y_{i} = y_{i+1} x_{N-i} $ for all $i = 1,\ldots,N-1$. Let $Q$ (resp. $I$) be the quiver (resp. the set of relations) given in the proposition. Let $\Gamma = \mathbb{F} Q/\langle I \rangle$ be the corresponding path algebra. By Lemma <ref>, the $\mathsf{Ext}$-algebra of $A$ is a quotient of $\Gamma$. To show that this quotient map is an isomorphism it is enough to show that the graded dimension of $\Gamma$ is at most that of the $\mathsf{Ext}$-algebra. Let $1 \leq i , j \leq N$ and $\gamma$ be a path between $S_i$ and $S_j$ in $Q$ containing only $x_l$'s and $x_l^*$'s. Let $k$ be the length of $\gamma$. We have $k \geq |i-j|$, which is the length of the minimal path from $S_i$ to $S_j$.
Using the relations, there exist loops $\gamma_1$ and $\gamma_2$ around $S_i$ and $S_j$ respectively such that $$\gamma = \left\{ \begin{array}{ll} \gamma_1 x_i x_{i+1} \cdots x_{j-1} = x_i x_{i+1} \cdots x_{j-1} \gamma_2 & \text{if $i\leq j$};\\ \gamma_1 x_{i-1}^* x_{i-2}^* \cdots x_j^* = x_{i-1}^* x_{i-2}^* \cdots x_j^* \gamma_2 & \text{otherwise}. \end{array}\right.$$ Maximal non-zero loops starting and ending at $S_i$ are either $x_{i-1}^*x_{i-2}^*\cdots x_1^* x_1 x_2 \cdots x_{i-1}$ or $x_{i}x_{i+1}\cdots x_{N-1} x_{N-1}^*\cdots x_{i+1}^* x_{i}^*$ depending on whether $S_i$ is closer to $S_1$ or $S_N$. Indeed, any longer loop will involve $x_1 x_1^*$ or $x_{N-1}^* x_{N-1}$, which are zero by (a). Therefore if $\mathsf{deg}(\gamma_1) > 2(i-1)$ or $\mathsf{deg}(\gamma_1) > 2(N-i)$ then $\gamma_1 =0$. Using a similar argument for loops around $S_j$ we deduce that $\gamma$ is zero whenever $$k = \mathsf{deg}(\gamma) > |i-j| + 2\,\mathsf{min}(i-1,j-1,N-i,N-j)$$ which is equivalent to $k= \mathsf{deg}(\gamma) > N-1 - |N+1-j-i|$. This proves that $\gamma$ is zero unless $ |i-j| \leq k \leq N-1 - |N+1-j-i|$, in which case it is equal to $$\gamma = x_i x_{i+1} \cdots x_{r-1} x_{r-1}^* x_{r-2}^*\cdots x_j^* $$ where $k = 2r-i-j$. Assume now that $\gamma$ is any path between $S_i$ and $S_j$ in $Q$. Using the relations one can write $\gamma$ as $\gamma = y_i^a \gamma_1 \gamma_2$ where $\gamma_2$ is a loop around $S_j$ containing only $y_l$'s, $\gamma_1$ is a product of $x_l$'s and $x_l^*$'s, and $a\in\{0,1\}$. Note that $\mathsf{deg}(\gamma_2)$ is a multiple of $2N$ and $\gamma_1$ is either a path from $S_i$ to $S_j$ if $a=0$ or a path from $S_{N+1-i}$ to $S_j$ if $a=1$. From the previous discussion and Proposition <ref> we conclude that $\gamma$ is zero if $\mathsf{dim}_\mathbb{F}\, \mathsf{Ext}^k_A(S_i,S_j) = 0$ and is unique modulo $I$ otherwise. This shows that the projection from $\Gamma$ onto the $\mathsf{Ext}$-algebra of $A$ must be an isomorphism.

[1] M. A. Antipov, A. I. Generalov.
Finite generability of Yoneda algebras of symmetric special biserial algebras. (Russian) Algebra i Analiz 17 (2005), 1–23; translation in St. Petersburg Math. J. 17 (2006), 377–392.
[2] D. Benson, P. Etingof. On cohomology in symmetric tensor categories in prime characteristic. Preprint, 2020.
[3] P. Brown. The Ext-algebra of a representation-finite biserial algebra. J. Algebra 221 (1999), 611–629.
[4] E. Green, S. Schroll, N. Snashall, R. Taillefer. The Ext algebra of a Brauer graph algebra. J. Noncommut. Geom. 11 (2017), 537–579.
# Charge exchange radiation diagnostic with gas jet target for measurement of plasma flow velocity in the linear magnetic trap

A. Lizunov (corresponding author)

###### Abstract

In the linear gas dynamic trap, the ambipolar electrostatic potential rising along a magnetic field line from the grounded wall to the centre governs the achievable suppression of axial heat and particle losses. In this paper, a visible-range optical diagnostic is described that uses the Doppler shift of plasma emission lines to measure this accelerating potential drop. We used a room-temperature hydrogen jet puffed directly onto the line of sight as the charge exchange target for plasma ions moving in the expanding flux from the mirror towards the wall. The velocity distribution functions of both bulk-plasma protons and $He^{2+}$ ions can be studied spectroscopically; the latter population is produced via a neutral He tracer puff into the central cell plasma. In this way, the potential in the centre and in the mirror area can be measured simultaneously, along with the ion temperature. An accuracy of $4\div 8\%$ was achieved in observations with a frame rate of $\approx 1~{}kHz$. Active acquisition on the gas jet also provides a spatial resolution better than 5 mm in the mid-plane radial coordinate, owing to the strong compression of the observed volume when projected to the centre along the magnetic flux surface. The charge exchange radiation diagnostic operates with three emission lines: H-$\alpha$ 656.3 nm, He-I 667.8 nm and He-I 587.6 nm. Recorded spectra are shown and examples of physical dependences are presented. The technique can be scaled to an upgraded multi-point diagnostic for next-generation linear traps and other magnetic confinement systems.
## 1 Introduction

Linear magnetic systems for plasma confinement, also frequently referred to as open-ended traps, share the common issue of field lines terminating on a grounded metallic wall somewhere beyond the mirror. The particular magnetic field configuration varies between devices. In order for these confinement concepts to be attractive for real applications, the axial heat flux through direct contact with the wall must be suppressed radically compared to the classical Spitzer [1] heat conductivity. The gas dynamic trap (GDT) [2] utilizes a strongly expanding magnetic "fan" beyond the mirror with straight or inward-curved field lines. The axial profile of the plasma electrostatic potential plays a crucial role in the actual heat transport physics. In a steady state, this ambipolar potential equalises the electron and ion currents onto the wall. The study of axial particle and energy transport [3] is one of the top priorities in the GDT scientific task list. This activity embraces the development of new diagnostics as well as experimental and theoretical research. The layout of the GDT device in the Budker Institute is shown in Figure 1. A detailed description of the plasma heating and sustainment scenarios can be found in [2], [4].

Figure 1: The gas dynamic trap: 1 – central cell, 2 – right expander tank, 3 – magnetic coil of central solenoid, 4 – atomic beam injector, 5 – deuterium beam, 6 – beam dump, 7 – arc discharge plasma source, 8 – plasma dump in the left expander tank, 9 – radial limiter, 10 – left gas box, 11 – right gas box, 12 – waveguides of ECRH system, 13 – diamagnetic loop, 14 – Thomson scattering diagnostic.

The current GDT scenario employs an electron cyclotron resonance (ECR) discharge for the plasma startup, but does not involve ECR heating. Only one of the gyrotrons and waveguides (12) drawn in Figure 1 is used. In this scenario, the energetic ion population and the bulk plasma heating are produced via neutral beam injection.
## 2 Diagnostic setup

Analysis of the velocity distribution function of ions streaming out of the magnetic mirror yields the desired information about the electrostatic potential drop along the trajectory and about the ion temperature. In GDT plasmas, the particle flow through the mirror towards the absorber surface is effectively collisionless. The ion energy and magnetic moment conservation in a collisionless regime is expressed as

$\frac{m_{i}v^{2}}{2}=\frac{m_{i}v_{0}^{2}}{2}+\Delta U(z),\qquad v_{\perp}^{2}=v_{\perp_{0}}^{2}\frac{H(z)}{H_{0}}\thickapprox 0.$ (2.1)

Here in (2.1), $v$ and $v_{0}$ are the ion velocities at the measurement point in the expander and in the central cell, respectively, and $U(z)=q\phi(z)$ is the potential energy of a $q$-charged ion in the electrostatic potential distribution $\phi(z)$. The $z=0$ point is the device mid-plane. We assume that the measurement location is far beyond the mirror, so $H(z)/H_{0}\ll 1$ and one can take $v\approxeq v_{z}$. The half of the Maxwellian ion distribution function (IDF) with $v_{z}\geq 0$ leaves the trap through the mirror; accordingly, the IDF at a point within the expansion region is

$f_{i}=n\left(\frac{m_{i}}{2\pi T_{i}}\right)^{3/2}e^{U/T_{i}}\exp\left(-\frac{m_{i}v^{2}}{2T_{i}}\right).$ (2.2)

Acceleration in the potential drop means that (2.2) is non-zero only for axial velocities above the value defined by the expression

$\frac{m_{i}v_{z_{min}}^{2}}{2}=q\Delta\phi.$ (2.3)

Equation (2.3) already suggests the measurement approach. A natural and commonly used way to measure the axial ion velocity (or energy) in the plasma flux is by means of a gridded electrostatic analyser, where a scanning analysing voltage is applied to decelerate the ions. In past GDT campaigns, such measurements were successfully performed [2] (page 26). This technique has some drawbacks, however. For example, it is typically difficult to arrange measurements at multiple radial points with a grid energy analyser.
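For orientation, equation (2.3) fixes the low-velocity cutoff of the accelerated distribution (2.2). The following is a minimal numerical sketch (SI units, illustrative values; the function names are ours, not from the paper) of an unnormalized 1D cut of the accelerated half-Maxwellian:

```python
import math

e = 1.602176634e-19    # elementary charge, C
m_p = 1.67262192e-27   # proton mass, kg

def v_z_min(delta_phi_V, q=1, m_i=m_p):
    # eq. (2.3): minimal axial velocity of an ion accelerated through delta_phi
    return math.sqrt(2.0 * q * e * delta_phi_V / m_i)

def f_axial(v_z, delta_phi_V, T_i_eV, q=1, m_i=m_p):
    # 1D cut of eq. (2.2), up to normalization: zero below the cutoff,
    # exp(U/T) * exp(-m v^2 / 2T) above it; note f = 1 exactly at the cutoff
    if v_z < v_z_min(delta_phi_V, q, m_i):
        return 0.0
    T = T_i_eV * e
    U = q * e * delta_phi_V
    return math.exp(U / T) * math.exp(-m_i * v_z**2 / (2.0 * T))
```

For a proton and a 200 V potential drop, for instance, the cutoff velocity is close to $2\times 10^{5}$ m/s, and the distribution vanishes identically below it.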
A special high voltage power supply for such an analyser can also be complex and expensive. In this paper, we consider a spectroscopic approach to the task. The method relies on charge exchange conversion of streaming ions into atoms with subsequent light emission, following the classical Charge eXchange Radiation Spectroscopy (CXRS) scheme:

$A^{Z+}+H^{0}\rightarrow A^{*(Z-1)+}+H^{+}\rightarrow A^{(Z-1)+}+H^{+}+h\nu.$ (2.4)

A neutral hydrogen target is used in (2.4), as is typical for CXRS. The shape of the emitted light spectrum encodes the IDF parameters. The ion temperature can be calculated from the Doppler FWHM (Full Width at Half Maximum) as

$\displaystyle T_{i}=\frac{m_{i}c^{2}}{2\ln 2}\left(\frac{\delta\lambda_{1/2}}{2\lambda_{0}}\right)^{2},$ (2.5a)

where $\lambda_{0}$ is the unshifted wavelength. In turn, the accelerating potential drop is linked to the Doppler line shift as

$\Delta\phi=\frac{m_{i}c^{2}}{2q}\left(\frac{\Delta\lambda_{D}}{\lambda_{0}}\right)^{2}\frac{1}{\cos^{2}\Theta},$ (2.5b)

where $\Theta$ is the angle between the ion velocity and the Line of Sight (LOS) direction. CXRS diagnostics in a tokamak or another magnetic plasma confinement device typically rely on an atomic (hydrogen or deuterium) beam acting as a target. Indeed, it is generally a crucial requirement to deliver a substantial target atom density to a given point inside the bulk of the plasma. The major performance criterion here is the signal-above-background ratio (S/B). In the simplest case, it is given by the ratio of the active CX signal recorded on the target to the passive emission collected along the LOS. A respectable $S/B\gtrsim 1$ can be achieved with a sufficiently high beam current density at the measurement point and a particle energy high enough to keep beam trapping on the way to this point low. In some particular cases, supersonic gas jets [5, 6] or a thermal gas puff can be used for optical diagnostics.
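Relations (2.5a) and (2.5b) above are straightforward to evaluate numerically. A minimal sketch (SI constants; function names and default proton mass are ours) that inverts a measured line FWHM and Doppler shift into $T_i$ and $\Delta\phi$:

```python
import math

c = 2.99792458e8       # speed of light, m/s
e = 1.602176634e-19    # elementary charge, C
m_p = 1.67262192e-27   # proton mass, kg

def ion_temperature_eV(fwhm_nm, lambda0_nm, m_i=m_p):
    # eq. (2.5a): T_i from the Doppler FWHM of the emission line;
    # only the ratio fwhm/lambda0 enters, so nm units cancel
    T_J = m_i * c**2 / (2.0 * math.log(2)) * (fwhm_nm / (2.0 * lambda0_nm))**2
    return T_J / e

def potential_drop_V(shift_nm, lambda0_nm, theta_deg, q=1, m_i=m_p):
    # eq. (2.5b): accelerating potential drop from the Doppler line shift,
    # with Theta the angle between the ion velocity and the line of sight
    return (m_i * c**2 / (2.0 * q * e)
            * (shift_nm / lambda0_nm)**2
            / math.cos(math.radians(theta_deg))**2)
```

Because only relative wavelengths enter, the same two functions apply unchanged to the H-$\alpha$ and He-I lines used later in the paper (with the appropriate ion mass and charge).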
A well-established method for the study of electron temperature and density in the scrape-off layer and of edge turbulence in hot magnetically confined plasmas is based on the relative intensities of neutral helium (He-I) lines [7, 8, 9, 10]. These diagnostics record light emitted by the introduced helium atoms. For our spectroscopy needs in the GDT expander, a room-temperature jet of molecular hydrogen is applied as the charge exchange target for plasma ions. The setup of the CXRS measurements in the GDT expander is drawn in Figure 2.

Figure 2: Layout of CXRS measurements in the left GDT expander: 1 – cone part of the GDT central cell, 2 – magnetic coil, 3 – mirror magnetic coil assembly, 4 – gas puff volume, 5 – boundary magnetic field line, 6 – expander tank, 7 – plasma dump, 8 – translation vacuum feedthrough of the gas feed tube, 9 – quartz tube 2 mm diameter, 10 – charge exchange gas target, 11 – $H_{2}$ reservoir and the feed line with the pulsed electromagnetic valve, 12 – 2-inch lens for light collection, 13 – light collection solid angle, 14 – optical fibre light guide, 15 – spectrometer coupled with the CCD camera.

The ion density (both of the plasma majority and of impurities) in the expanding plasma flow drops roughly linearly with the expansion ratio $k=H(z)/H_{0}$. In the region under study, it is $50\div 100$ times less than in the central cell, which poses a certain diagnostic challenge. On the other hand, restrictions against perturbing the plasma parameters are mild. It is proven [11] that even a strong emission of cold electrons downstream in the expander does not affect the central cell plasma. It is therefore advantageous to use a gas jet puffed from a narrow capillary placed directly on the LOS instead of an energetic atomic beam, because a much larger local atomic density (and rate of CX events) can be achieved. The CXRS gas target assembly is a 2 mm diameter quartz tube inserted via a vacuum feedthrough and connected to a fast pulsed electromagnetic valve.
The valve opening delay relative to the measurement time window is set to the minimum that is still sufficient to generate an effective CX target. The main concern is to reduce the additional gas load in the expander tank during the plasma discharge and thus to minimize contamination of other measurements in the expander. The estimated molecular hydrogen density of $n(H_{2})\cong 10^{19}~{}m^{-3}$ proves sufficient for an ample optical signal of CX emission, as shown below in the paper. The local observation volume (gas cloud) has a size of $\approx 20~mm$, but the actual spatial resolution is defined by the cloud size mapped onto the GDT mid-plane along the magnetic flux surface: $\delta r_{0}\approx\delta r_{cloud}\sqrt{H(z)/H_{0}}\approx 2\div 5~mm$. Throughout the paper, radii are expressed in the device mid-plane if not otherwise specified. As Figure 2 illustrates, the diagnostic LOS is fixed. The gas tube tip is movable between the axis and the plasma periphery along the LOS, allowing profile acquisition over a series of shots. One should keep in mind that both the radius and the z-coordinate change simultaneously along the LOS. The optical system (12) (see Figure 2) collects the light, which is delivered to the spectrometer (15) via the fibre-optic light guide (14). Table 1 summarizes the main parameters of the registration system. The spectrometer is a factory LOMO MDR-23 model with a custom cylindrical output lens for astigmatism correction. It is coupled with the Princeton Instruments PyLoN CCD camera [12], which has low dark current and readout noise. The CCD is configured in the "Kinetics" mode (see [12]), featuring multiple exposures on the sensor within a single digitization and readout cycle. This regime trades a reasonably fast frame rate in the kHz range and a limited number of exposures against reduced light throughput.
We set ten exposures of $0.5~ms$ duration and an effective frame rate of 1.1 kHz; the spectrometer entrance slit is accordingly opened over only $1/10$ of its height.

Table 1: Main parameters of the optical registration system.

| _Spectrometer_ | |
|---|---|
| Optical scheme | Czerny-Turner |
| Focal distance | 600 mm |
| F/No. | F/6 |
| Groove density | 1800 g/mm |
| Blaze | 550 nm |
| Dispersion | 0.69 nm/mm |
| Spectral resolution | 0.041 nm |
| _CCD_ | |
| Model | PyLoN 2KB eXcelon (LN-cooled to -120℃) |
| Sensor | 2048x512 pixels, $13.5\times 13.5\mu m$ |
| Binning | _Kinetics_, 50 rows vertical |
| Exposures on CCD | 10 |
| Exposure duration | 0.5 ms |
| Frame rate | 1.1 kHz |
| System noise | $\approx 3.5\,e^{-}$ |

## 3 Measurement of IDF by Doppler spectroscopy

### Hydrogen H-alpha spectral line.

In the observation geometry shown in Figure 2, the angle between the ion velocity vector and the LOS vector (the latter directed towards the optics) is $\Theta>90^{\circ}$. This angle varies from $\Theta_{0}=150^{\circ}$ on the axis to $\Theta_{edge}=130^{\circ}$ on the edge field line projecting to $r_{0}=15~cm$, the radial limiter radius. For these angles, the Doppler shift is towards larger wavelengths. In all GDT experimental scenarios, there is a non-negligible hydrogen plasma component even if deuterium is used to create both the bulk plasma and the fast ions via neutral beam injection. Possible explanations are organic residuals inside the vacuum vessel and micro-leaks of air with water vapor. In any case, observation of the Doppler-shifted D-$\alpha$ 656.1 nm spectral line is not expedient because the left wing of the cold-gas H-$\alpha$ 656.3 nm line would overlap it. Instead, we modified the plasma scenario with a portion of hydrogen puffed from the left gas box, see (10) in Figure 1. A typical molecule or atom travels $\lesssim 10~{}mm$ before ionization, so the hydrogen ion birth places can be assumed localized in the left mirror area.
We will refer to the plasma electrostatic potential calculated from the H-$\alpha$ Doppler shift as the "potential in the mirror", to distinguish it from the central potential. Figure 3: The spectrum recorded on the axis with the CX gas target in GDT shot 49311 at $t=8~{}ms$, exposure $\tau=0.5~{}ms$: 1 – bright D-$\alpha$ and H-$\alpha$ lines (cut out), 2 – C-II lines 657.8 nm and 658.3 nm, 3 – He-I line 667.8 nm with Doppler-shifted component. Figure 4: Spectrum of H-$\alpha$ emission on the axis used for calculation of the mirror potential: magenta curve – active CX frame acquired in GDT shot 48994, blue curve – passive frame acquired in GDT shot 48993, black curve – model fit of the active CX spectrum. Vertical dashed lines mark positions of unshifted D-$\alpha$ and H-$\alpha$. 1 – bright cold-gas D-$\alpha$ and H-$\alpha$ lines (cut out), 2 – D-$\alpha$ in the passive spectrum, 3 – part of the spectrum corresponding to the accelerated IDF. Acquisition timing parameters: $t=8~{}ms$, $\tau=0.5~{}ms$. ### He-I spectral lines. The diagnostic scheme becomes more versatile with an additional helium puff in the central GDT part. The helium component is used as a tracer; the net He amount is small compared to the deuterium and hydrogen puffs and should be barely enough to create an observable optical signal. He atoms are stripped down to $He^{2+}$ with a characteristic time of $\sim 0.1\div 0.5~{}ms$, which is smaller than the axial particle loss time $\tau_{l}\cong 1.5~{}ms$. One can therefore expect the prevailing content of $He^{2+}$ over $He^{+}$ in the plasma flow in the expander, which is useful from the spectral-analysis viewpoint due to the larger Doppler shift. One cannot, however, presume the He ion population to be in local thermodynamic equilibrium with the bulk deuterium or deuterium-hydrogen plasma. $He^{2+}$ ions are partially converted into $He^{*0}$ with subsequent emission of He-I lines. In this work, we observed the He-I lines 587.6 nm and 667.8 nm.
The latter option was used in most shots because this He-I line fits the same working spectral range as H-$\alpha$. In this way, measurements of the mirror potential, the central potential, the hydrogen temperature and the helium temperature were possible simultaneously. Figure 3 shows a sample spectrum taken in GDT shot 49311 with the charge-exchange gas target switched on. The 0.5 ms frame exposure was recorded at $t=8~{}ms$ during the plasma heating phase. One may observe both the narrow cold-gas He line and the broader red-shifted emission attributable to accelerated $He^{2+}$ ions. Figure 5: Spectrum of He-I emission 667.8 nm on the axis used for calculation of the central potential: magenta curve – active CX frame acquired in GDT shot 49707, blue curve – passive frame acquired in GDT shot 49705, black curve – model fit of the active CX spectrum. The vertical dashed line marks the unshifted line position. 1 – cold-gas He-I line, 2 – active CX spectrum, 3 – mirror reflection. Acquisition timing parameters: $t=8~{}ms$, $\tau=0.5~{}ms$. ### Fitting CX spectra and measurement of potential and ion temperature. Figure 4 shows the active CX H-$\alpha$ spectrum (magenta) measured on the axis in shot 48994 and the background, or passive, spectrum from shot 48993 (blue). The fit curve is also plotted (black). Note that the Doppler-shifted line has almost vanished in the passive sample. With this contrast ratio, we can neglect the passive contribution in the gas-target-enabled frame, thus considering the recorded optical signal to be the active CX emission. This ensures the spatial resolution considered above. In this particular series of shots, the hydrogen bulk plasma feed predominated. This led to a relatively small background D-$\alpha$ emission, which is rendered in the passive spectrum as a smoothed dent (2) on the H-$\alpha$ wing, see Figure 4.
Physical data, namely the temperature and the potential, are obtained by fitting the recorded spectra with a model function. The model we used is a superposition of multiple bi-Gaussian lines, each having its own set of parameters. Upon fit convergence, the line width and shift parameters are used for the calculation of the ion temperature and the potential drop via formulas (2.5a) and (2.5b). The mathematical processing codes are based on the nonlinear fit libraries provided in the CenterSpace NMath .NET software package [13]. Error bars shown in all graphs that follow reflect the accuracy of the spectrum fit with the model function. Recorded spectra of He-I emission on the axis are plotted in Figure 5. Similarly to Figure 4 with the H-$\alpha$ profile, both passive and active CX spectra are shown. There is no prominent Doppler-shifted signal above the noise level in the passive spectrum, and, as in the H-$\alpha$ case, no background subtraction is needed for processing of the active CX spectrum. It is easy to notice that the cold-gas He-I line (1) also features a positive Doppler shift $\Delta\lambda_{cold}(He)\simeq 0.04~{}nm$, which is a reliable and reproducible effect. Such a shift translates to the velocity $v_{cold}(He)\simeq 1.8\cdot 10^{3}~{}m/s$, which is consistent with collisions of cold He atoms in the expander with the accelerated plasma particles in the flow. We believe the peak (3) to result from mirror reflection of emitted light from the plasma dump surface. Due to an oversight, this surface is neither sand-blasted nor darkened, so a remarkable reflection with an approximately opposite Doppler shift, as observable in Figure 5, is indeed expected. We also admit a partial reflection of plasma flow particles from the dump surface. Using the mathematical spectrum analysis explained above, time evolutions of the hydrogen ion temperature and the $He^{2+}$ ion temperature are plotted in Figure 6(a).
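A minimal sketch of the bi-Gaussian fitting step on synthetic data (the actual processing uses the NMath libraries [13]; here scipy stands in, and all spectral parameters are illustrative, not taken from the shots discussed above):

```python
import numpy as np
from scipy.optimize import curve_fit

def bigauss(x, amp, x0, wl, wr):
    """Bi-Gaussian line: independent left/right 1/e half-widths wl, wr."""
    w = np.where(x < x0, wl, wr)
    return amp * np.exp(-((x - x0) / w) ** 2)

# Synthetic Doppler-shifted line (all numbers are illustrative)
lam = np.linspace(656.0, 657.2, 400)                 # wavelength grid, nm
truth = dict(amp=1.0, x0=656.55, wl=0.06, wr=0.09)
rng = np.random.default_rng(0)
spec = bigauss(lam, **truth) + 0.01 * rng.standard_normal(lam.size)

popt, _ = curve_fit(bigauss, lam, spec, p0=[0.8, 656.5, 0.05, 0.05])
amp, x0, wl, wr = popt
shift = x0 - 656.3                                   # nm, relative to rest H-alpha
print(f"shift = {shift:.3f} nm, widths = {wl:.3f}/{wr:.3f} nm")
```

In the paper's pipeline, the fitted width then feeds the temperature formula (2.5a) and the fitted shift feeds the potential formula (2.5b).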
The electron temperature is measured by Thomson scattering in the GDT centre. The dashed curve provides a reference diamagnetic signal showing the dynamics of the plasma energy content. Figure 6(b) shows the time evolution of the plasma electrostatic potential measured via the H-$\alpha$ Doppler shift (blue filled triangles) and the Doppler shifts of two He-I lines: 667.8 nm (red filled circles) and 587.6 nm (magenta open squares). The former two measurements are done simultaneously in the same shot; the latter one required retuning the spectrometer. (a) Ion temperature time evolution close to the axis. Horizontal error bars show the exposure duration of 0.5 ms. (b) Time evolution of the plasma electrostatic potential in two locations (close to the axis). Figure 6: Example of application of the CXRS diagnostic for the study of the ion velocity distribution function. ## 4 Conclusion The accuracy of the considered CXRS measurements on He-I lines, $\epsilon\approx 4\div 8\%$, is primarily a function of the active signal intensity, because the contributions of the background line emission along the LOS and of the continuum light are both small. Mathematical processing of the Doppler-shifted H-$\alpha$ is slightly complicated by the neighbouring cold-gas background line, which may be approximately two orders of magnitude stronger. The exposure duration of 0.5 ms and the frame rate of 1.1 kHz allow resolving the dynamics of plasma parameters during the plasma startup, heating and sustainment in GDT. The spatial resolution of $\lesssim 5~{}mm$, determined by the gas cloud projection onto the middle plane, is sufficient for the study of spatial profiles of the ion velocity distribution function. In the present diagnostic design, there is a single line of sight, which requires a series of shots for profile measurements. A comprehensive investigation of the axial transport of particles and energy in the gas dynamic trap with intensive CXRS diagnostic involvement is under way.
From the viewpoint of R&D of new instrumentation, some valuable experience has been gained as well. The basic diagnostic technique using the gas jet target is confirmed to be workable in a wide range of central-cell plasma parameters and residual gas pressures in the expander vessel. It provides a solid ground for the development of a more capable CXRS diagnostic version for the next-generation linear magnetic system for plasma confinement [14], the construction of which is to be started within a couple of years. The improved optical system will have multiple observation points distributed across the plasma fan in the expander area. The charge-exchange target should be a more collimated helium jet with a greater penetration depth, probably a supersonic gas jet similar to those of refs. [5, 6]. ## Acknowledgments This work is supported by the Russian Science Foundation, project No. 18-72-10084 issued on 31.07.2018. ## References * [1] V.P. Pastukhov, _Nucl. Fusion_ , 14 3 (1974). * [2] A. A. Ivanov and V. V. Prikhodko, _Gas-dynamic trap: an overview of the concept and experimental results_ , _Plasma Phys. Control. Fusion_ 55 (2013) 063001. * [3] E.I. Soldatkina, V.V. Maximov, V.V. Prikhodko, V.Ya. Savkin, D.I. Skovorodin, D.V. Yakovlev and P.A. Bagryansky, _Measurements of axial energy loss from magnetic mirror trap_ , _Nucl. Fusion_ 60 (2020) 086009. * [4] Bagryansky P.A., Shalashov A., Gospodchikov E., Lizunov A., Maximov V., Prikhodko V., Soldatkina E., Solomakhin A. and Yakovlev D., _Phys. Rev. Lett._ , 114 205001 (2015). * [5] K. Schmid and L. Veisz, _Supersonic gas jets for laser-plasma experiments_ , _Rev. Sci. Instrum._ , 83, 053304 (2012), http://dx.doi.org/10.1063/1.4719915 * [6] V. A. Soukhanovskii, H. W. Kugel, R. Kaita, R. Majeski, and A. L. Roquemore, _Rev. Sci. Instrum._ , 75, 4320 (2004), https://doi.org/10.1063/1.1787579 * [7] J. M. Muñoz Burgos, M. Agostini, et. al., _Physics of Plasmas_ , 23, 053302, (2016), https://doi.org/10.1063/1.4948554 * [8] M. Agostini, P. Scarin, R.
Cavazzana, A. Fassina, A. Alfier, and V. Cervaro, _Rev. Sci. Instrum._ , 81, 10D715 (2010), http://dx.doi.org/10.1063/1.3478679 * [9] M. Griener, E. Wolfrum, et. al., _Rev. Sci. Instrum._ , 89, 10D102 (2018), https://doi.org/10.1063/1.5034446 * [10] B. D. Yuan, Y. Yu, et. al., _Rev. Sci. Instrum._ , 91, 073505 (2020), https://doi.org/10.1063/5.0005545 * [11] Soldatkina E., Anikeev M., Bagryansky P., Korzhavina M., Maximov V., Savkin V., Yakovlev D., Yushmanov P. and Dunaevsky A. _Phys. Plasmas_ 24 (2017) 022505. * [12] https://www.princetoninstruments.com/wp-content/uploads/2020/04/PyLoN_2k_datasheet.pdf * [13] https://www.centerspace.net/nmath * [14] A. Beklemishev, A. Anikeev, et. al., _Novosibirsk Project of Gas-Dynamic Multiple-Mirror Trap_ , _Fusion Science and Technology_ , 63, 1T (2013) 46-51. https://doi.org/10.13182/FST13-A16872
# Scattering on Quasi-Spherical Black-Holes: Features and Beyond A.M<EMAIL_ADDRESS>and A.J. <EMAIL_ADDRESS> $\,{}^{\vardiamondsuit}$ Department of Physics & Technology, Karazin Kharkov National University, 4 Svobody Sq., Kharkov 61022 UA $\,{}^{\spadesuit}$ Akhiezer Institute for Theoretical Physics of NSC KIPT 1 Akademicheskaya St., Kharkov 61108 UA $\,{}^{\varheartsuit}$ Usikov Institute of Radiophysics and Electronics 12 Ak. Proskury, Kharkov 61085 UA ###### Abstract Recent developments in gravitational-wave interferometry require more pertinent theoretical models of gravitational wave generation and propagation. Leaving aside possible mechanisms of producing spin-2 spacetime perturbations, we consider their subsequent scattering on other black holes (BHs). Specifically, we consider a generalization of the Regge-Wheeler-Zerilli equations to the case of distorted BHs (BHs surrounded by matter) in Minkowski and Anti-de Sitter spacetimes, the metric potential of which obeys the Liouville equation. We establish significant differences in the scattering characteristics of waves of different spins and angular momenta, including gravitational waves, caused by the loss of spherical symmetry of their propagation background. In particular, we demonstrate the strong impact of the background geometry deformation on the grey-body factors, hence on the absorption cross-sections of scattered waves, and explore the issue of stability of the background geometry upon changing the deformation degree parameters. ## 1 Introduction Progress in observational astronomy is so fast that we are becoming witnesses of the transformation of astrophysics into a genuinely physical discipline, where theoretical predictions are checked against experimentally verified data. Many discoveries in the field have been made since the beginning of the century.
We will outline just a few breakthroughs in contemporary multi-messenger astrophysics [1], related in different ways to black holes (BHs): * • First, we should mention the LIGO-Virgo scientific collaborations for their fundamental contribution to the registration of gravitational waves (GWs) [2]. It is believed that GWs are mostly induced in processes which involve BHs. * • Second, it is important to note the detection of ultra-high-energy cosmic rays (UHECRs) with energies $\sim$$10^{19}$ eV—far exceeding the energies of the LHC (Large Hadron Collider)—coming from active galactic nuclei (AGN) [3]. It is believed that super-massive BHs (SMBHs) are located in the cores of AGN [4, 5], and mechanisms of UHECR generation are directly related to BH physics; see, e.g., refs. [6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19], in this respect. * • Next, it is worth mentioning the achievements of the Event Horizon Telescope (EHT) collaboration in revealing the BH event horizon (for the M87 SMBH with mass 6.5 $\times$ $10^{9}$ $M_{\odot}$ [20]). Due to the EHT activity, it is believed more and more that astrophysical BHs (BHs in the sky) share common properties with mathematical BHs (solutions to the Einstein equations). * • Last but not least: the recent data analysis of the EHT team puts the Einstein theory to the test [21]. It turns out that general relativity works well, and many alternative theories of gravity should be discarded as contradicting experimental data. However, one has to note that the conclusions of ref. [21] cannot be considered ultimate; see, e.g., ref. [22], in this respect. The mentioned achievements in contemporary astrophysics show how various studies of compact astrophysical objects give new prospects in resolving old and revealing new problems, which impact the development of physics as a whole. A large enough amount of information on BHs comes with d-waves (spin-2 waves; GWs).
In view of this fact, any current studies of d-waves become potentially important. One branch of this activity is the scattering of d-waves on compact astrophysical objects. Starting from the first seminal papers on the scattering of particles of different spins on Kerr and Schwarzschild BHs [23, 24, 25, 26, 27, 28, 29], cross-sections and grey-body factors of external particle fluxes propagating in the effective potentials of various BH models have been obtained, analytically and numerically; see, e.g., refs. [30, 31, 32, 33, 34, 38, 36, 37, 35, 39, 40, 41, 42, 43]. In what follows, we will be interested in solving the scattering problem for GWs in the background of the so-called distorted BHs. The rationale behind the appearance of distorted BHs on the scene comes from the point that the Schwarzschild solution is highly idealized and cannot be fully employed in any real astrophysical problem; the same concerns the Kerr solution. Any presence of matter does not just change the metric, but may drastically change its type, due to the backreaction of matter fields forcing the spacetime geometry to evolve in time. However, time-dependent solutions to the Einstein equations in analytical form are extremely rare (and exist only for very specific matter fields) [44, 45] and generally require involved numerical simulations. A significant simplification occurs upon replacing the backreaction of matter with an effective distortion of a static/stationary BH metric, keeping the time-independence of the solution. The solution describing a static distribution of matter localized outside the BH horizon and the vacuum spacetime in its proximity is then called a distorted BH [47, 46]; see also refs. [48, 49]. There are various astrophysical applications of distorted-BH-type metrics (see, e.g., refs.
[50, 51, 52, 53, 54, 55, 56, 57]), among which one can find the effective description of a double (neutron) “star” system, when the tidal forces of one of the components, deforming the shape of the other, are mimicked in the solution [58, 59, 60]. In the next parts of the paper, we will explore the scattering of d-waves on another class of distorted BHs, whose spacetime configuration is defined by a specific solution originally proposed and studied in refs. [61, 62, 63, 64] and later rediscovered in ref. [65] in a different context. In Section 2, we briefly discuss the relationship between the standard metric of an axisymmetric distorted BH [47, 46] and the quasi-spherical BH solution of refs. [61, 62, 63, 64]. In its customary formulation, the Weyl solution includes two metric potentials (functions of the radial direction and the polar angle) and describes a family of distorted BHs depending on their specific choice. The proposed generalization of the Weyl solution with a third metric potential, in Section 2, now depending on two angles, breaks the original axial symmetry. It results in fixing the two metric potentials of the Weyl solution, so that the final geometry becomes that of a quasi-spherical BH, the metric potential of which obeys the Liouville equation. The quasi-spherical solution also describes a family of BHs, now determined by the choice of a single metric potential. The local spacetime geometry we will explore further on is that of a static neutral distorted BH. It is evidently not enough to describe real astrophysical problems in full extent (see more on a quasi-spherical generalization of the Kerr solution in the last section). However, it is sufficient to reveal the main features of the scattering processes of external particle fluxes, which will also be inherent in scattering by rotating distorted black holes.
In Section 3, we consider equations for gravitational perturbations over the backgrounds of quasi-spherical neutral BHs in Minkowski and anti-de Sitter (AdS) spacetimes and examine the issue of separation of variables. As is well known, the possibility to separate the variables in the dynamical equations of relativistic fields in a curved background is related to symmetry properties of the background spacetime. By use of the Newman-Penrose formalism, we find that the spacetime of quasi-spherical BHs is of type D in the Petrov classification. Therefore, the radial and angular parts of the equations for small perturbations over the quasi-spherical BH background can be set apart, and the resulting expressions generalize the Regge-Wheeler-Zerilli equations and the spherical harmonics equation. In Section 4, we focus on the angular part of the scattering problem and survey the differences in the solution to the angular differential equation for the distorted/quasi-spherical spacetime in comparison to the standard Schwarzschild background. In general, the solution to the angular equation in a quasi-spherical BH background is reduced to a spectral problem for infinite-dimensional matrices. For a background with axial symmetry, viable in many astrophysical problems, we can make further simplifications and solve the spectral problem numerically. Here, we find the principal difference between the quasi-spherical and spherically-symmetric cases, which has the foremost impact on the scattering process: the eigenvalues of the examined spectral problem are not integers anymore; for each scattering mode, there is a set of eigenvalues, the number of which is determined by the value of the scattering mode angular momentum corresponding to spherical symmetry and by its projections (i.e., by the set of $(l,m)$); and, finally, the eigenvalues depend on the deformation degree of the background geometry from the spherically-symmetric one, specified by a single parameter.
From numerical computations, we recovered the functional dependence of the generalized eigenvalues on the degree of deformation in the axially-symmetric case. Section 5 contains computations of the grey-body factor for different waves scattering on a quasi-spherical Schwarzschild BH and a further comparison of the regular spherically-symmetric case [43] to its counterpart with a non-trivial deformation. Because the generalized eigenvalues found in the preceding section carry two indices and enter the Regge-Wheeler-Zerilli equations via the separation constant, the numerically computed grey-body factors for each type of perturbation—scalar, vector, and tensor—also become differentiated by the $(l,m)$ indices. Explicitly, for every scattering mode with angular momentum $l$, we find $l+1$ different values of the grey-body factor, the properties of which were compared to those of the spherically-symmetric case. In particular, we find that the grey-body factors, as functions of the deformation degree, increase with increasing value of this parameter, and that the transparency of the effective potential is reached for $(l,l)$ scattering modes at the lowest value of the corresponding frequencies. In Section 6, we consider the issue of stability of BH backgrounds by studying the quasinormal modes (QNMs). Here, we briefly review the relation between the stability of a spacetime and the positivity of the effective potentials in the Regge-Wheeler-Zerilli equations. Since the effective potential of small perturbations over distorted/quasi-spherical BH backgrounds depends on the deformation degree, this parameter becomes crucial for determining the stability of the background geometry against small perturbations. We find that the value of the deformation degree equal to one is the critical value for the stability of an axially-symmetric quasi-spherical Schwarzschild BH in Minkowski and AdS spacetimes.
While in flat spacetime this result does not depend on the size of the BH, the instability of a Schwarzschild-AdS BH may only be encountered for so-called large BHs, whose event horizons exceed the characteristic scale of empty AdS space. Finally, a discussion of the results and a summary of our findings are collected in the last section. ## 2 Background Metric: From Distorted to Quasi-Spherical Black Hole Let us begin with a clarification of how the background metric, mainly used throughout the paper, is related to the metric of a distorted BH. A distorted BH solution to the flat-spacetime vacuum Einstein equations is traditionally described by the Weyl axisymmetric metric [46] in the cylindrical space-time coordinates $(t,\rho,\theta,\varphi)$ [47], where $t$ is the time coordinate. However, for a static axisymmetric solution with an arbitrary quadrupole moment, things get essentially simplified in the prolate spheroidal space-time coordinates $(t,x,y,\varphi)$ [66], in which the line element looks as follows: $ds^{2}=-e^{2\psi(x,y)}dt^{2}+M^{2}e^{-2\psi(x,y)}\Bigg{[}e^{2\gamma(x,y)}(x^{2}-y^{2})\left(\frac{dx^{2}}{x^{2}-1}+\frac{dy^{2}}{1-y^{2}}\right)+(x^{2}-1)(1-y^{2})d\varphi^{2}\Bigg{]}.$ (1) To obey the Einstein equations in vacuum, the metric potentials $\psi(x,y)$ and $\gamma(x,y)$ of (1) should fulfill the following set of equations [66, 67]: $\partial_{x}\left((x^{2}-1)\partial_{x}\psi\right)+\partial_{y}\left((1-y^{2})\partial_{y}\psi\right)=0,$ (2) $\partial_{x}\gamma=\frac{1-y^{2}}{x^{2}-y^{2}}\left[x(x^{2}-1)(\partial_{x}\psi)^{2}-x(1-y^{2})(\partial_{y}\psi)^{2}-2y(x^{2}-1)\partial_{x}\psi\partial_{y}\psi\right],$ (3) $\partial_{y}\gamma=\frac{x^{2}-1}{x^{2}-y^{2}}\left[y(x^{2}-1)(\partial_{x}\psi)^{2}-y(1-y^{2})(\partial_{y}\psi)^{2}+2x(1-y^{2})\partial_{x}\psi\partial_{y}\psi\right],$ (4) where $\partial_{a}\equiv\partial/\partial a$.
Equations (2)–(4) leave enough freedom in choosing the specific form of the metric potentials. The constant $M$ entering (1) is associated with the mass of the BH. A non-trivial generalization of (1), which includes a third metric potential $\Phi$, now dependent on the $(y,\varphi)$ coordinates and therefore breaking the axisymmetric invariance of the original metric, is $ds^{2}=-e^{2\psi(x,y)}dt^{2}+M^{2}e^{-2\psi(x,y)}\times$ $\times\Bigg{[}e^{2\gamma(x,y)}(x^{2}-y^{2})\left(\frac{dx^{2}}{x^{2}-1}+\frac{e^{\Phi(y,\varphi)}dy^{2}}{1-y^{2}}\right)+e^{\Phi(y,\varphi)}(x^{2}-1)(1-y^{2})d\varphi^{2}\Bigg{]}.$ (5) In this case, the presence of the third metric potential with the specific dependence on coordinates puts strong restrictions on the metric potentials $\psi(x,y)$ and $\gamma(x,y)$. In particular, the non-triviality of $\Phi(y,\varphi)$ results in the appearance of non-diagonal terms in the Ricci tensor that, in the absence of matter in the Einstein equations, sets the following constraints on $\gamma(x,y)$: $\partial_{x}\gamma=\frac{x(1-y^{2})}{(x^{2}-y^{2})(x^{2}-1)},\qquad\partial_{y}\gamma=\frac{y}{x^{2}-y^{2}},$ (6) the further account of which leads to the additional restriction on $\psi(x,y)$: $\partial_{x}\psi\,\partial_{y}\psi=0.$ (7) A general solution to (6) is as follows: $\gamma(x,y)=\frac{1}{2}\ln\frac{x^{2}-1}{x^{2}-y^{2}}+C_{\gamma};$ (8) then, solving the corresponding equation for $\psi(x,y)$ constrained by (7), we get $\psi=\frac{1}{2}\ln\frac{x-1}{x+1}+C_{\psi}.$ (9) Note that (9) is a particular solution for $\psi$, which is convenient to choose in this form in order to establish, in what follows, the relation of (5) to a quasi-spherical Schwarzschild BH metric.
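The consistency of the particular solutions (8) and (9) (with vanishing integration constants) with equations (2), (6), and (7) can be verified symbolically:

```python
import sympy as sp

x, y = sp.symbols('x y')
psi = sp.Rational(1, 2) * sp.log((x - 1) / (x + 1))              # eq. (9), C_psi = 0
gamma = sp.Rational(1, 2) * sp.log((x**2 - 1) / (x**2 - y**2))   # eq. (8), C_gamma = 0

# Laplace-type equation (2) for psi
eq2 = sp.diff((x**2 - 1) * sp.diff(psi, x), x) + sp.diff((1 - y**2) * sp.diff(psi, y), y)
# Constraints (6) on gamma
eq6x = sp.diff(gamma, x) - x * (1 - y**2) / ((x**2 - y**2) * (x**2 - 1))
eq6y = sp.diff(gamma, y) - y / (x**2 - y**2)
# Restriction (7) on psi
eq7 = sp.diff(psi, x) * sp.diff(psi, y)

print(sp.simplify(eq2), sp.simplify(eq6x), sp.simplify(eq6y), eq7)  # -> 0 0 0 0
```

All residuals vanish identically, confirming that (8) and (9) solve the constrained system.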
With $\gamma$ and $\psi$ of (8) and (9), the metric potential $\Phi(y,\varphi)$ is confined by the following differential equation: $\partial_{y}\left((y^{2}-1)\partial_{y}\Phi\right)-2(e^{\Phi}-1)+e^{2C_{\gamma}}\frac{\partial^{2}_{\varphi}\Phi}{y^{2}-1}=0.$ (10) Fixing the two integration constants $C_{\gamma}$ and $C_{\psi}$ to zero and introducing new coordinates $(r,\theta)$, related to $(x,y)$ via $x=\frac{r}{M}-1,\qquad y=\cos\theta,$ (11) we recover the metric of a neutral BH [64, 63] (see also ref. [65]) in spherical coordinates $ds^{2}=-f(r)dt^{2}+\frac{dr^{2}}{f(r)}+r^{2}e^{\chi({\theta},\varphi)}\left(d\theta^{2}+\sin^{2}\theta\,d\varphi^{2}\right),$ (12) with the standard red-shift factor $f(r)=1-2M/r$ and the “smearing” function of the BH horizon $\chi(\theta,\varphi)$, which follows from $\Phi(y,\varphi)$ after the coordinate transformations (11). Equation (10) turns into the spherical Liouville equation $\Delta_{\theta,\varphi}\,\chi(\theta,\varphi)+2(e^{\chi(\theta,\varphi)}-1)=0,$ (13) with $\Delta_{\theta,\varphi}=\frac{1}{\sin\theta}\frac{\partial}{\partial\theta}\sin\theta\frac{\partial}{\partial\theta}+\frac{1}{\sin^{2}\theta}\frac{\partial^{2}}{\partial\varphi^{2}}.$ (14) Recall that Liouville equations are exactly solvable and possess general analytic solutions in terms of unrestricted functions; see, e.g., refs. [68, 69], or a brief summary in refs. [61, 64] and Appendix A below. Therefore, we have enough freedom in choosing a function to fit the desired shape of a two-dimensional surface. For a BH, this surface is a quasi-spherical/distorted horizon; in the case of a rigid celestial body (such as a neutron star), the two-dimensional surface is a 2-dimensional (2D) slice of the 3-dimensional (3D) surface, deformed by the tidal forces of another star/BH. Therefore, we have established the generalization of the standard Weyl-Erez-Rosen distorted BH solution (1) with an additional metric potential breaking the axial symmetry (cf. (5)).
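As a concrete illustration of this exact solvability, one can check numerically that the conformal factor obtained by pulling the round-sphere metric back through a stereographic dilation $w\to\lambda w$ (a one-parameter family of axisymmetric solutions; the value $\lambda=1.5$ is an arbitrary choice here) satisfies the spherical Liouville equation (13):

```python
import numpy as np

def chi_moebius(theta, lam=1.5):
    """Conformal factor of the round sphere pulled back by the dilation
    w -> lam*w in stereographic coordinates, with t = tan(theta/2):
    e^chi = lam^2 (1 + t^2)^2 / (1 + lam^2 t^2)^2."""
    t2 = np.tan(theta / 2) ** 2
    return np.log(lam**2 * (1 + t2) ** 2 / (1 + lam**2 * t2) ** 2)

# Residual of the axisymmetric form of (13):
#   chi'' + cot(theta) chi' + 2 (e^chi - 1) = 0
theta = np.linspace(0.1, 3.0, 20001)
h = theta[1] - theta[0]
chi = chi_moebius(theta)
d1 = np.gradient(chi, h, edge_order=2)
d2 = np.gradient(d1, h, edge_order=2)
res = d2 + d1 / np.tan(theta) + 2 * (np.exp(chi) - 1)
print(np.max(np.abs(res[2:-2])))   # small residual: the family solves (13)
```

For $\lambda=1$ the factor reduces to $\chi=0$, i.e., the undeformed sphere.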
As a result of this modification, the solution becomes more rigid and corresponds, after passing to the standard spherical coordinates, to the neutral BH solution with the Liouville mode [61, 62, 63, 64]. This makes it possible to consider the BH solution of refs. [61, 62, 63, 64, 65] as a specific distorted BH. Taking this point of view, we will not differentiate further between distorted and quasi-spherical BHs and will freely use both terms on an equal footing. To sum up this part of the work, the spacetime background we will consider throughout the paper is defined by the line element $ds^{2}=-f(r)dt^{2}+\frac{dr^{2}}{f(r)}+r^{2}e^{\chi(\theta,\varphi)}(d\theta^{2}+\sin^{2}\theta d\varphi^{2}),$ (15) with $f(r)=\frac{\Delta}{r^{2}},\quad\Delta=r^{2}-2Mr+\kappa^{2}r^{4}.$ (16) A non-trivial value of $\kappa^{2}=-{\Lambda}/{3}$, where $\kappa$ is the inverse characteristic length of AdS spacetime and $\Lambda$ is its cosmological constant, corresponds to the AdS-Schwarzschild BH; $\kappa=0$ is that of flat spacetime. To solve the vacuum Einstein equations, the metric potential $\chi(\theta,\varphi)$ has to be confined to the spherical Liouville Equation (13) with the angular Laplacian (14). ## 3 Basic Equations and Separation of Variables Technically, we would like to extend the known Regge-Wheeler-Zerilli [70, 71] equations to small d-wave perturbations over the background (15) and to solve them. The first step on this way is to separate the variables in the corresponding relativistic spin equation.
Our previous experience with s-wave perturbations over such a background [64] indicates the possibility to separate variables in the massless Klein-Gordon equation, the explicit form of which in the background (15) is as follows: $-\frac{1}{f}\partial^{2}_{t}\Phi+\frac{1}{r^{2}}\partial_{r}(r^{2}f\,\partial_{r}\Phi)+\frac{e^{-\chi(\theta,\varphi)}}{r^{2}}\triangle_{\theta,\varphi}\,\Phi=0,$ (17) by use of the separation ansatz $\Phi(t,r,\theta,\varphi)=e^{-i\omega t}\,Q_{0}(r)\,\Theta(\theta,\varphi).$ (18) However, it is well known that the (im)possibility to separate the variables strongly depends on special properties of the spacetime under consideration, specifically on its type within the Petrov spacetime classification scheme [72, 73]. To determine the Petrov type of the metric at hand, we follow the Newman-Penrose (NP) formalism [74] and introduce the null tetrad $e_{(1)}^{\mu}=l^{\mu},\;\;e_{(2)}^{\mu}=n^{\mu},\;\;e_{(3)}^{\mu}=m^{\mu},\;\;e_{(4)}^{\mu}=\bar{m}^{\mu},$ (19) which is standardly related to the metric via $g_{\mu\nu}=e_{\mu}^{(a)}\eta_{(a)(b)}e_{\nu}^{(b)}$ with $\eta_{(a)(b)}=\left(\begin{array}[]{cccc}0&-1&0&0\\\ -1&0&0&0\\\ 0&0&0&1\\\ 0&0&1&0\end{array}\right).$ (20) The curved-space indices $\mu,\nu$ run over $0,1,2,3$; ”$0$” corresponds to the time direction. The Latin indices ”$(a)$” are those of the flat tangent space and run over $1,2,3,4$. Explicitly, for the metric (15), we get $l_{\mu}=\delta_{\mu 0}-\frac{\delta_{\mu r}}{f(r)},\;\;n_{\mu}=\frac{f(r)}{2}\,\delta_{\mu 0}+\frac{\delta_{\mu r}}{2},\;\;m_{\mu}=\frac{r}{\sqrt{2}}\,e^{\frac{\chi(\theta,\varphi)}{2}}(\delta_{\mu\theta}+i\sin\theta\delta_{\mu\varphi}),$ (21) where $\delta_{\mu\nu}$ is the Kronecker delta.
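As a quick sanity check, the tetrad (21) can be verified numerically against the relation $g_{\mu\nu}=e_{\mu}^{(a)}\eta_{(a)(b)}e_{\nu}^{(b)}$, which for the flat tangent metric (20) reads $g_{\mu\nu}=-l_{\mu}n_{\nu}-n_{\mu}l_{\nu}+m_{\mu}\bar{m}_{\nu}+\bar{m}_{\mu}m_{\nu}$; the sample-point values of $r$, $\theta$, $\chi$ below are arbitrary:

```python
import numpy as np

def tetrad_check(r=5.0, theta=1.0, M=1.0, chi=0.3):
    """Verify g_{mu nu} = -l_mu n_nu - n_mu l_nu + m_mu mbar_nu + mbar_mu m_nu
    for the tetrad (21) against the metric (15) at a sample point (flat case,
    kappa = 0; coordinate order t, r, theta, phi)."""
    f = 1 - 2 * M / r
    l = np.array([1, -1 / f, 0, 0], dtype=complex)           # l_mu
    n = np.array([f / 2, 1 / 2, 0, 0], dtype=complex)         # n_mu
    m = (r / np.sqrt(2)) * np.exp(chi / 2) * np.array(
        [0, 0, 1, 1j * np.sin(theta)], dtype=complex)         # m_mu
    g = (-np.outer(l, n) - np.outer(n, l)
         + np.outer(m, m.conj()) + np.outer(m.conj(), m))
    g_exact = np.diag([-f, 1 / f,
                       r**2 * np.exp(chi),
                       r**2 * np.exp(chi) * np.sin(theta)**2]).astype(complex)
    return np.max(np.abs(g - g_exact))

print(tetrad_check())   # ~0: the tetrad reproduces the metric
```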
By use of the NP tetrad (21), one may compute the various spin-connection coefficients $\gamma_{(c)(a)(b)}=e_{(c)}^{\nu}e_{(a)\nu;\mu}e^{\mu}_{(b)}$, where, as usual, the semicolon denotes the covariant derivative with respect to the metric $g_{\mu\nu}$, and observe the vanishing of the coefficients (in the notation of ref. [74]) $\kappa$, $\sigma$, $\nu$, $\lambda$. According to the Goldberg-Sachs theorem, the trivialization of $\kappa$, $\sigma$, $\nu$, $\lambda$ corresponds to Petrov type D metrics [72, 73, 74]. Since the pioneering papers by Teukolsky [75, 76], it has been realized that, in Petrov type D spacetimes, the equations for gravitational perturbations decouple for the quantities $\psi_{0}=-C_{\alpha\beta\gamma\delta}l^{\alpha}m^{\beta}l^{\gamma}m^{\delta},\qquad\psi_{4}=-C_{\alpha\beta\gamma\delta}n^{\alpha}\bar{m}^{\beta}n^{\gamma}\bar{m}^{\delta},$ (22) formed with the Weyl tensor $C_{\alpha\beta\gamma\delta}$ and the NP tetrad (21). As we will see shortly, these quantities correspond to the odd (axial) gravitational perturbation of ref. [70] and the even (polar) d-wave perturbation of ref. [71]. Indeed, by use of the ansatz (we use indices ”$+2$” and ”$-2$” for odd and even d-wave perturbations, respectively) $\left(\begin{array}[]{c}\psi_{0}\\\ r^{4}\psi_{4}\end{array}\right)=e^{-i\omega t}\left(\begin{array}[]{c}\Psi_{+2}(r)\Theta_{+2}(\theta,\varphi)\\\ \Psi_{-2}(r)\Theta_{-2}(\theta,\varphi)\end{array}\right),$ (23) one may separate the temporal, radial and angular parts of the gravitational perturbations over the basic metric (15). Upon the separation of the angular part of the d-wave perturbations, we arrive at the following master equation for the fundamental angular variable $\Theta(\theta,\varphi)$: $\Delta_{\theta,\varphi}\Theta(\theta,\varphi)+Ce^{\chi(\theta,\varphi)}\Theta(\theta,\varphi)=0,$ (24) in which, for the sake of convenience in further comparison to the spherically symmetric example, we set the separation constant to $C=\nu(\nu+1)$.
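In the axisymmetric ($m=0$) case, the spectrum of admissible separation constants $C=\nu(\nu+1)$ in (24) can be estimated by truncating the problem in a Legendre basis in $x=\cos\theta$; a minimal numerical sketch, where the deformation $\chi$ is an arbitrary illustrative choice (a small quadrupole, not a Liouville solution):

```python
import numpy as np
from scipy.linalg import eigh
from scipy.special import eval_legendre

def angular_eigenvalues(chi, lmax=12, nquad=200):
    """Eigenvalues C = nu(nu+1) of  Delta Theta + C e^chi Theta = 0  for
    axisymmetric (m = 0) modes, in a truncated orthonormal Legendre basis.
    Projection gives the generalized problem  L a = C M a, with
    L = diag(l(l+1)) and M the overlap matrix weighted by e^chi."""
    x, w = np.polynomial.legendre.leggauss(nquad)   # quadrature in x = cos(theta)
    ls = np.arange(lmax + 1)
    P = np.array([np.sqrt((2 * l + 1) / 2) * eval_legendre(l, x) for l in ls])
    M = (P * (w * np.exp(chi(x)))) @ P.T
    L = np.diag(ls * (ls + 1)).astype(float)
    C, _ = eigh(L, M)
    return np.sort(C)

# Undeformed check: chi = 0 must give the integers l(l+1) = 0, 2, 6, 12, ...
print(angular_eigenvalues(lambda x: np.zeros_like(x))[:4])

# An assumed small quadrupole deformation shifts the eigenvalues off integers
print(angular_eigenvalues(lambda x: 0.2 * (3 * x**2 - 1) / 2)[:4])
```

The highest few eigenvalues are unreliable near the truncation order `lmax` and should be discarded; the constant mode $C=0$ survives any deformation, consistent with $\Theta=\mathrm{const}$ solving (24).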
Apparently, the spherical symmetry of the background spacetime is recovered for the trivial metric potential $\chi(\theta,\varphi)=0$ ($\chi(\theta,\varphi)=\mathrm{const}$ also recovers the spherical symmetry, after a rescaling of the radial coordinate). However, in general, the spherical symmetry is lost, which means that $\nu$ no longer falls into the set of integers. The fundamental angular variable $\Theta(\theta,\varphi)$, together with the angular differential operators $L_{n}=\partial_{\theta}-\frac{i}{\sin\theta}\partial_{\varphi}+n\left(\cot\theta+\frac{1}{2}\left(\partial_{\theta}\chi-\frac{i}{\sin\theta}\partial_{\varphi}\chi\right)\right),$ (25) forms the $\Theta_{\pm 2}(\theta,\varphi)$ of (23): $\Theta_{+2}(\theta,\varphi)=e^{-\frac{1}{2}\chi(\theta,\varphi)}L^{\dagger}_{-1}e^{-\frac{1}{2}\chi(\theta,\varphi)}L^{\dagger}_{0}\,\Theta(\theta,\varphi),$ $\Theta_{-2}(\theta,\varphi)=e^{-\frac{1}{2}\chi(\theta,\varphi)}L_{-1}e^{-\frac{1}{2}\chi(\theta,\varphi)}L_{0}\,\Theta(\theta,\varphi).$ (26) For the radial part of the perturbation equations of $\psi_{0}$ and $\psi_{4}$, the NP formalism is equivalent (after a transition to new radial functions $Q_{\pm 2}(r)$; see refs. [77, 78] for details) to the Regge-Wheeler-Zerilli equations $\left[\frac{\partial^{2}}{\partial r_{*}^{2}}+\omega^{2}-V_{s}(r)\right]Q_{s}=0,\;\;r_{*}\in(-\infty,+\infty),\;\;s=\pm 2.$ (27) Here, $r_{*}$ is the so-called "tortoise" coordinate ${dr}/{dr_{*}}=f(r)$; $V_{\pm 2}$ are the effective potentials, entering either the Regge-Wheeler [70] $V_{+2}(r)=-\frac{3f(r)\partial_{r}f(r)}{r}+\nu(\nu+1)\frac{f(r)}{r^{2}}+6\kappa^{2}f(r),$ (28) or the Zerilli [71] $V_{-2}(r)=\frac{2f(r)}{r^{3}}\frac{9M^{3}+3c^{2}Mr^{2}+c^{2}(1+c)r^{3}+9M^{2}\left(cr+3\kappa^{2}r^{3}\right)}{(3M+cr)^{2}},\quad c=\frac{\nu(\nu+1)}{2}-1$ (29) equations. Altogether, the Regge-Wheeler-Zerilli Equations (27) take the form of a stationary Schrödinger equation for the wave functions $Q_{\pm 2}$ with the effective potentials (28) and (29).
Note that, due to the lack of spherical symmetry, the total angular momentum (and its projection) is not conserved, though we can expect a quantization of the generalized angular momentum quantum numbers $\nu$. Let us also emphasize that the scalar and vector perturbations over the basic metric (15) are described by a Schrödinger-like Equation (27) as well, with the effective potential (33), in which one has to choose $s=0$ and $s=1$ for the scalar and vector modes, respectively. The common separation ansatz for the scalar, vector, and tensor perturbations looks as follows (cf. (23)): $\left(\begin{array}{c}r^{-1}\Phi\\ r\tilde{\phi}\\ \psi_{0}\\ r^{4}\psi_{4}\end{array}\right)=e^{-i\omega t}\left(\begin{array}{c}Q_{0}(r)\,\Theta(\theta,\varphi)\\ Q_{1}(r)\,\Theta_{1}(\theta,\varphi)\\ \Psi_{+2}\,\Theta_{+2}(\theta,\varphi)\\ \Psi_{-2}\,\Theta_{-2}(\theta,\varphi)\end{array}\right).$ (30) The $\Theta(\theta,\varphi)$ function of (30) obeys the master Equation (24); $\Theta_{\pm 2}(\theta,\varphi)$ are those of (26). $\Theta_{1}(\theta,\varphi)$ is also related to the fundamental angular variable $\Theta(\theta,\varphi)$; in this case we have $\Theta_{1}=e^{-\frac{1}{2}\chi(\theta,\varphi)}L_{1}^{\dagger}\,\Theta(\theta,\varphi),$ (31) with the angular differential operator $L_{1}$ of (25). Therefore, the "axial" perturbations over the quasi-spherical BH background are described by the equations $\left[\frac{\partial^{2}}{\partial r_{*}^{2}}+\omega^{2}-V_{s}(r)\right]Q_{s}=0,\;\;r_{*}\in(-\infty,+\infty),\;\;s=0,1,2,$ (32) with the effective potentials [70, 24, 25, 26, 79, 80, 81] $V_{s}(r)=\frac{(1-s^{2})f\partial_{r}f}{r}+\nu(\nu+1)\frac{f}{r^{2}}+3s(s-1)\kappa^{2}f,\quad s=0,1,2.$ (33) The polar d-wave perturbation is still described by the Zerilli Equation (27) with $s=-2$ and with the effective potential (29).
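The qualitative behaviour of the potentials (28), (29), and (33) can be probed numerically; the following sketch (our assumptions: $r_{+}=1$, $M$ fixed by $f(r_{+})=0$, and the spherical case $\nu=l$) checks that in flat spacetime the Regge-Wheeler and Zerilli potentials are positive, decay at large $r$, and nearly coincide, while the AdS scalar potential grows like $r^{2}$ far from the hole:

```python
# Sketch (assumptions: r_+ = 1, nu = l as in the spherical case): basic
# features of the effective potentials (28), (29) and (33).
import numpy as np

def V_axial(r, nu, M, kap2, s=2):             # Eq. (33); s = 2 gives V_{+2}
    f = 1 - 2*M/r + kap2*r**2
    df = 2*M/r**2 + 2*kap2*r
    return (1 - s**2)*f*df/r + nu*(nu + 1)*f/r**2 + 3*s*(s - 1)*kap2*f

def V_polar(r, nu, M, kap2):                  # Zerilli potential, Eq. (29)
    f = 1 - 2*M/r + kap2*r**2
    c = nu*(nu + 1)/2 - 1
    num = (9*M**3 + 3*c**2*M*r**2 + c**2*(1 + c)*r**3
           + 9*M**2*(c*r + 3*kap2*r**3))
    return 2*f/r**3*num/(3*M + c*r)**2

r = np.linspace(1.2, 50.0, 2000)
# flat spacetime, r_+ = 1  =>  M = 1/2, kappa^2 = 0; d-wave nu = 2
Vp, Vm = V_axial(r, 2, 0.5, 0.0), V_polar(r, 2, 0.5, 0.0)
print(Vp.min() > 0, Vm.min() > 0, Vp[-1])     # positive barriers, decaying tail
# AdS, kappa^2 = 0.1, horizon at r_+ = 1  =>  2M = r_+(1 + kappa^2 r_+^2)
Vads = V_axial(r, 0, 0.55, 0.1, s=0)          # scalar l = 0 mode
print(Vads[-1]/(2*0.1**2*50.0**2))            # ratio -> 1: ~ 2 kappa^4 r^2 growth
```

The near-coincidence of `Vp` and `Vm` is the numerical face of the isospectrality of the two Hamiltonians noted in the discussion of Figures 1 and 2.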
The shapes of the effective potentials for flat and AdS spacetimes in the spherically symmetric case are depicted in Figures 1 and 2. Two comments on Figures 1 and 2 are in order. First, the shape of the effective potentials in Minkowski spacetime (both panels of Figure 1) keeps its form up to spatial (in this case, radial) infinity. This is not true for AdS spacetime, where the shape of the potentials (in both panels of Figure 2) changes to a $\sim$$r^{2}$ profile at $r\gg 1$. This, in particular, means that there are bound states in AdS spacetime that result in the discrete part of the spectrum of admissible perturbations [64, 81, 82, 38, 83]. Second, the effective potentials (28) and (29) of the Regge-Wheeler-Zerilli equations are almost the same in Minkowski, as well as in AdS, spacetimes (cf. the right panels in Figures 1 and 2). This is an indication of the isospectrality of the Hamiltonians entering Equations (27) [81, 84, 85, 86]. Figure 1: Left panel: the shape of the effective potential of different modes ($l=0,1,2$) of the scalar perturbation over the spherically-symmetric Schwarzschild background. Right panel: comparison of the effective potentials $V_{+2}(r)$ (solid lines) and $V_{-2}(r)$ (dashed lines) of the d-wave perturbations ($l=2,3,4$) over the spherically-symmetric Schwarzschild background. The (almost) coincidence of $V_{\pm 2}$ reflects the isospectrality of the corresponding Hamiltonians of (27). The value of the horizon radial location is chosen to be $r_{+}=1$. Figure 2: Left panel: the shape of the effective potential of different modes ($l=0,1,2$) of the scalar perturbation over the spherically-symmetric Anti-de Sitter (AdS)-Schwarzschild background near the black hole (BH) horizon. Right panel: comparison of the effective potentials $V_{+2}(r)$ (solid lines) and $V_{-2}(r)$ (dashed lines) of the d-wave perturbations ($l=2,3,4$) over the spherically-symmetric AdS-Schwarzschild background near the BH horizon.
The value of the horizon radial location is $r_{+}=0.1$. The AdS cosmological constant is chosen to be $\kappa^{2}=-\Lambda/3=0.1$. ## 4 Generalized Angular Momentum Numbers: Generalities and Axial Symmetry Case Let us turn back to the master equation for the fundamental angular variable (24). It can be solved by a series expansion of $\Theta(\theta,\varphi)$ over the spherical harmonics $Y_{lm}$: $\Theta(\theta,\varphi)=\sum_{l,m}^{\infty}c_{lm}Y_{lm}(\theta,\varphi);\quad l\in\mathbb{Z},\,\,m=-l,\dots,l.$ (34) For a general function of angles, the expansion (34) contains an infinite number of terms. Therefore, once one plugs (34) into the master Equation (24), solving the master equation turns into a generalized eigenvalue problem: $\sum_{j,m}^{\infty}A_{km^{\prime},\;jm}c_{jm}=C\sum_{j,m}^{\infty}B_{km^{\prime},\;jm}c_{jm},\quad C=\nu(\nu+1),$ (35) with the infinite-dimensional matrices $A_{km^{\prime},jm}=j(j+1)\delta_{kj}\delta_{m^{\prime}m},\quad B_{km^{\prime},jm}=\int d\Omega\,e^{\chi(\theta,\varphi)}\,Y^{*}_{km^{\prime}}(\theta,\varphi)Y_{jm}(\theta,\varphi).$ (36) As usual, the measure of integration over the angle variables is determined by $d\Omega=\sin\theta d\theta d\varphi$. One way to tackle the generalized eigenvalue problem at hand is by numerical computation. However, even in this case, we have to establish an upper bound for $j$ in (35) to reduce the task to the generalized eigenvalue problem of $n\times n$ matrices ${\bf A}$ and ${\bf B}$ ${\bf A}\cdot{\bf c}=C\,{\bf B}\cdot{\bf c},$ (37) where $n=\sum_{j=0}^{j_{max}}(2j+1)=(1+j_{max})^{2}.$ (38) The upper bound value $j_{max}$ is chosen in such a way that the eigenvalues and the corresponding eigenvectors do not visibly change upon increasing $j_{max}$. Another simplification can be reached by restoring a part of the spherical symmetry, namely the axial symmetry.
Namely, the metric potential $\chi$, generally dependent on the two spherical angles, becomes a function of the polar angle $\theta$ alone. In this case, Equation (13) simplifies to $\frac{1}{\sin\theta}\frac{d}{d\theta}\left(\sin\theta\,\frac{d\chi(\theta)}{d\theta}\right)+2\left(e^{\chi(\theta)}-1\right)=0,$ (39) so one can establish that (see Appendix A) $e^{\chi(x)}=\frac{(2ab)^{2}\left(\frac{1-x}{1+x}\right)^{a}}{(1-x^{2})\left(b^{2}+\left(\frac{1-x}{1+x}\right)^{a}\right)^{2}},\quad x=\cos\theta,$ (40) where the constants $a=1+\alpha$ and $b=1+\beta$ are restricted to be real and positive. (The case of trivial deformation parameters $\alpha$ and $\beta$ corresponds to the spherical symmetry of the background metric.) An additional restriction on the deformation parameters is required by the finiteness of the metric potential $e^{\chi(\theta)}$ at the endpoints of the $\theta$ fundamental domain $\theta\in[0,\pi]$. According to this requirement, $\alpha$ has to be non-negative. Axial symmetry makes it possible to further separate the angular variables $\Theta\rightarrow\Theta_{m}(\theta,\varphi)=e^{im\varphi}S_{m}(\theta),\;\;\Theta_{m}(\theta,\varphi)=\Theta_{m}(\theta,\varphi+2\pi)\rightarrow m\in\mathbb{Z},$ (41) so that the master Equation (24) is reduced to an enlargement of the general Legendre equation: $\frac{d}{dx}\left[(1-x^{2})\frac{dS_{m}(x)}{dx}\right]+\left[\nu(\nu+1)e^{\chi(x)}-\frac{m^{2}}{1-x^{2}}\right]S_{m}(x)=0,\;\;x=\cos{\theta}.$ (42) Consequently, the matrices (36) are reduced to $A^{m}_{ij}=j(j+1)\frac{2(j+m)!}{(2j+1)(j-m)!}\delta_{ij},\quad B^{m}_{ij}=\int_{-1}^{1}dx\,e^{\chi(x)}P_{im}(x)P_{jm}(x),$ (43) and the generalized eigenvalue problem (35) turns into $\sum_{j=|m|}^{\infty}A^{m}_{ij}c_{j}=C\sum_{j=|m|}^{\infty}B^{m}_{ij}c_{j}.$ (44) Solving Equation (44), we arrive at the following conclusions: the eigenvalues $\nu$ are labeled with two indices—$l\geq 0$ and $m=\{-l,\dots,l\}$—with the trivialization condition $\nu_{lm}\rightarrow
l\in\mathbb{Z}$ once $\chi\rightarrow 0$; numerics also give $\nu_{l0}=l$ and $\nu_{l,-m}=\nu_{lm}$. Computations of the separation constant $C_{lm}=\nu_{lm}(\nu_{lm}+1)$ for different values of the admissible deformation degrees $\alpha$ and $\beta$ show that the results are independent of the value of $\beta$. See Table 1 as an example. Therefore, $\nu_{lm}$ do not depend on $\beta$ and are solely functions of the parameter $\alpha$. Table 1: Values of $C_{lm}=\nu_{lm}(\nu_{lm}+1)$ with $l=1,m=1$ for different deformation degrees $\alpha$ and $\beta$, obtained by solving Eq. (44) with different cut-off values of $j_{max}$.
$\boldsymbol{\alpha}$ | $\boldsymbol{\beta}$ | $\boldsymbol{j_{max}=42}$ | $\boldsymbol{j_{max}=72}$
---|---|---|---
$\alpha=0.001$ | $\beta=0.001$ | 1.99873 | 1.99700
$\alpha=0.001$ | $\beta=0.01$ | 1.99873 | 1.99700
$\alpha=0.001$ | $\beta=0.1$ | 1.99873 | 1.99700
$\alpha=0.001$ | $\beta=1.0$ | 1.99873 | 1.99700
$\alpha=0.01$ | $\beta=0.001$ | 1.97164 | 1.97039
$\alpha=0.01$ | $\beta=0.01$ | 1.97164 | 1.97039
$\alpha=0.01$ | $\beta=0.1$ | 1.97164 | 1.97039
$\alpha=0.01$ | $\beta=1.0$ | 1.97164 | 1.97039
$\alpha=0.1$ | $\beta=0.001$ | 1.73554 | 1.73553
$\alpha=0.1$ | $\beta=0.01$ | 1.73554 | 1.73553
$\alpha=0.1$ | $\beta=0.1$ | 1.73554 | 1.73553
$\alpha=0.1$ | $\beta=1$ | 1.73554 | 1.73553
$\alpha=1.0$ | $\beta=0.001$ | 0.74999 | 0.74999
$\alpha=1.0$ | $\beta=0.01$ | 0.74999 | 0.74999
$\alpha=1.0$ | $\beta=0.1$ | 0.74999 | 0.74999
$\alpha=1.0$ | $\beta=1$ | 0.74999 | 0.74999
Next, fixing $b=1$ and $a=1+\alpha$, we observe from numerics that the first eigenvalues, with $l,m=1,2$, as functions of the positive parameter $\alpha$ obey the inequality $\nu_{lm}\leq l$ (see Figure 3). This trend is preserved for other combinations of $(l,m)$ in $\nu_{lm}$, as well. (Cf. Table 2.) Figure 3: Eigenvalues $\nu_{lm}$ for $l=1,2$ and $m\leq l$.
Finally, comparing different values of $\nu_{lm}$ for different values of the deformation degree $\alpha$, we observe that (see Table 2) $\nu_{lm}=\nu_{l-1,m}+1.$ (45) Therefore, we conclude that the angular momentum is quantized, but not in integers. Table 2: Values of $\nu_{lm}$ ($l=0,\dots,5$, $m=0,\dots,4$) for different values of the deformation parameter $\alpha$.
| $\boldsymbol{\alpha=0.001}$ | $\boldsymbol{\alpha=0.01}$ | $\boldsymbol{\alpha=0.1}$ | $\boldsymbol{\alpha=1}$ | $\boldsymbol{\alpha=2}$ | $\boldsymbol{\alpha=3}$
---|---|---|---|---|---|---
$\nu_{00}$ | 0 | 0 | 0 | 0 | 0 | 0
$\nu_{10}$ | 1 | 1 | 1 | 1 | 1 | 1
$\nu_{20}$ | 2 | 2 | 2 | 2 | 2 | 2
$\nu_{30}$ | 3 | 3 | 3 | 3 | 3 | 3
$\nu_{40}$ | 4 | 4 | 4 | 4 | 4 | 4
$\nu_{50}$ | 5 | 5 | 5 | 5 | 5 | 5
$\nu_{11}$ | 0.999 | 0.990 | 0.909 | 0.5 | 0.333 | 0.25
$\nu_{21}$ | 1.999 | 1.990 | 1.909 | 1.5 | 1.333 | 1.25
$\nu_{31}$ | 2.999 | 2.990 | 2.909 | 2.5 | 2.333 | 2.25
$\nu_{41}$ | 3.999 | 3.990 | 3.909 | 3.5 | 3.333 | 3.25
$\nu_{51}$ | 4.999 | 4.990 | 4.909 | 4.5 | 4.333 | 4.25
$\nu_{22}$ | 1.9980 | 1.9802 | 1.8181 | 1 | 0.666 | 0.5
$\nu_{32}$ | 2.9980 | 2.9802 | 2.818 | 2 | 1.666 | 1.5
$\nu_{42}$ | 3.9980 | 3.9802 | 3.818 | 3 | 2.666 | 2.5
$\nu_{52}$ | 4.9980 | 4.9802 | 4.818 | 4 | 3.666 | 3.5
$\nu_{33}$ | 2.9970 | 2.9702 | 2.727 | 1.5 | 1.0 | 0.75
$\nu_{43}$ | 3.9970 | 3.9702 | 3.727 | 2.5 | 2.0 | 1.75
$\nu_{53}$ | 4.9970 | 4.9702 | 4.727 | 3.5 | 3.0 | 2.75
$\nu_{44}$ | 3.9960 | 3.9604 | 3.6363 | 2 | 1.333 | 1.0
$\nu_{54}$ | 4.9960 | 4.9604 | 4.6363 | 3 | 2.333 | 2.0
We end this section with an analysis of Table 2, which results in the following analytical expression for $\nu_{lm}(\alpha)$: $\nu_{lm}(\alpha)=l-\frac{\alpha}{1+\alpha}\,m.$ (46) The obtained expression is in fine agreement with the functional dependences of $\nu_{lm}$ on $\alpha$ computed numerically above (Figure 3).
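The closed-form expression (46) can be checked directly against the tabulated spectrum; a minimal sketch:

```python
# Sketch: check the closed-form expression (46), nu_lm = l - alpha*m/(1+alpha),
# against a few entries of Table 2, together with the recursion (45).
def nu(l, m, alpha):
    return l - alpha*m/(1 + alpha)

table2 = {          # (l, m, alpha): nu_lm as listed in Table 2
    (1, 1, 0.1): 0.909, (4, 1, 1.0): 3.5,  (2, 2, 2.0): 0.666,
    (4, 3, 2.0): 2.0,   (5, 4, 3.0): 2.0,  (3, 3, 0.01): 2.9702,
}
for (l, m, a), val in table2.items():
    assert abs(nu(l, m, a) - val) < 5e-3

# the recursion (45), nu_lm = nu_{l-1,m} + 1, is built into (46):
assert abs(nu(4, 2, 0.7) - (nu(3, 2, 0.7) + 1)) < 1e-12
print("formula (46) matches Table 2")
```

Note that the $m$-dependent shift $\alpha m/(1+\alpha)$ interpolates between $0$ (spherical limit $\alpha\rightarrow 0$) and $m$ ($\alpha\rightarrow\infty$), consistent with $\nu_{lm}\leq l$ above.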
Let us also notice the coincidence of $\nu_{lm}=\nu_{l+k,m+s}$ values for integers $(k,s)$, determined by $k(1+\alpha)=\alpha s$ and restricted by $l+k\geq m+s$. For instance, at $\alpha=2$, where this condition gives $(k,s)=(2,3)$, one has $\nu_{10}=\nu_{33}$, $\nu_{20}=\nu_{43}=\nu_{66}$, $\nu_{21}=\nu_{44}$, $\nu_{31}=\nu_{54}=\nu_{77}$, and so on. ## 5 The Grey-Body Factor: Schwarzschild vs Distorted Schwarzschild Now, let us turn to the radial part of the d-wave perturbations described by Equation (27). Solving these equations analytically is hampered by the complicated form of the effective potentials. Typically, one has to find solutions either numerically (as it was done in ref. [43]), or to follow the procedure of finding solutions in different coordinate domains—the near, mid and far zones (see, e.g., ref. [38], for a review)—with the subsequent construction of a unified solution in different approximations (as, for instance, in refs. [23, 25, 26, 27, 28, 29]). In the context of the scattering problem, solutions to Equation (27) are used in computing different cross-sections, one of the important ingredients of which is the so-called grey-body factor (GBF). To compute this characteristic, one has to solve the Schrödinger-like Equation (27) with specified boundary conditions (see, for example, ref. [38]), which, in particular, determine the transmission coefficient of an ingoing wave through the barrier of the effective potentials (28) or (29). Then, the grey-body factor is $\gamma(\omega)=|T(\omega)|^{2}$, where $T(\omega)$ stands for the transmission coefficient. Complete transmission means complete absorption of the incoming waves by a BH. To figure out hallmarks of scattering/absorption in the background of quasi-spherical/distorted BHs, we will compare the spherically symmetric case (here, we follow the approach of ref. [43]) with the deformed but axially symmetric one.
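The grey-body factor just defined can be computed by direct integration of (27); the following is a numerical sketch (not the code of ref. [43]; our assumptions: scalar modes $s=0$ with $\nu=l$, horizon at $r_{+}=1$, asymptotically flat $\kappa=0$). One imposes a purely ingoing wave at the horizon, integrates outward in the tortoise coordinate, and reads off the incident amplitude $A_{\rm in}$ at large $r_{*}$, so that $\gamma=1/|A_{\rm in}|^{2}$:

```python
# Sketch: grey-body factor gamma(omega) = |T|^2 for the potential (33), s = 0.
import numpy as np
from scipy.integrate import solve_ivp

rp = 1.0  # horizon radius; M = rp/2, kappa = 0 (assumptions, not from the text)

def V(r, nu, s=0):                       # effective potential, Eq. (33)
    f = 1 - rp/r
    return (1 - s**2)*f*(rp/r**2)/r + nu*(nu + 1)*f/r**2

def greybody(omega, nu, x0=-20.0, x1=400.0):
    def rhs(x, y):                       # y = (Q, dQ/dr_*, r), x = r_*
        Q, dQ, r = y
        return [dQ, (V(r.real, nu) - omega**2)*Q, 1 - rp/r]
    r0 = rp + np.exp(x0/rp - 1.0)        # invert r_* = r + rp*ln(r/rp - 1)
    y0 = np.array([np.exp(-1j*omega*x0), -1j*omega*np.exp(-1j*omega*x0), r0],
                  dtype=complex)         # purely ingoing wave at the horizon
    sol = solve_ivp(rhs, (x0, x1), y0, rtol=1e-10, atol=1e-10)
    Q, dQ = sol.y[0, -1], sol.y[1, -1]
    A_in = 0.5*(Q - dQ/(1j*omega))*np.exp(1j*omega*x1)    # incident amplitude
    A_out = 0.5*(Q + dQ/(1j*omega))*np.exp(-1j*omega*x1)  # reflected amplitude
    return 1.0/abs(A_in)**2, abs(A_out/A_in)**2           # (gamma, reflection)

g_lo, r_lo = greybody(0.3, 1)   # below the p-mode barrier: mostly reflected
g_hi, r_hi = greybody(1.2, 1)   # well above the barrier: mostly transmitted
print(g_lo, g_hi)
```

Flux conservation requires $\gamma+R=1$, which serves as a built-in accuracy check of the integration.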
Looking at Figure 4 with the results of numerics for the spherically symmetric (Schwarzschild) background (which in part reproduce the data in Figures 2 and 8 of ref. [43]), one may notice that: * • the scalar $s$-wave (left panel) reaches complete transmission at the lowest admissible value of the frequency $\omega$; * • increasing $l$ in the scalar mode perturbations (p- and d-modes on the left panel) requires higher values of $\omega$ to reach complete transmission; * • the basic axial gravitational perturbations ($d$-waves, right panel) reach complete transmission at a lower frequency than the $l=2$ modes of the axial electromagnetic (EM) and scalar perturbations. Figure 4: Left panel: the grey-body factors $\gamma(\omega)$ of different modes ($l=0,1,2$) of the scalar perturbation over the Schwarzschild background. Right panel: comparison of $\gamma(\omega)$ of the basic axial d-wave perturbation ($l=2$) to the corresponding mode (with $l=2$) grey-body factors (GBFs) of the scalar and axial EM perturbations over the Schwarzschild background. For the distorted BH background (15) with the metric potential (40), the deformation parameters of which are chosen to be $b=1$ and $a=1+\alpha=1+0.2$, plotting the data for the same types of waves, we encounter important differences in comparison to the previously considered cases (see Figure 5). We observe that: * • for each value of $l$ (recall, $l\in\mathbb{Z}$ is a non-negative degree of the corresponding spherical harmonics in the series expansion (34)), there are $l+1$ different values of the grey-body factor $\gamma(\omega)$; * • $\gamma(\omega)$ increases with increasing deformation degree $\alpha$; * • for a fixed $l$, the GBFs with the maximal projection value $m=l$ reach complete transmission at the lowest values of $\omega$. Figure 5: Left panel: the splitting of the grey-body factors $\gamma(\omega)$ for different modes ($l=0,1,2$) of the scalar perturbation in the distorted BH background.
Right panel: comparison of the GBFs of the basic axial d-wave perturbation ($l=2$) to those of the scalar and axial EM perturbations over the distorted BH background. The deformation parameter $\alpha$ is equal to $0.2$. ## 6 Quasinormal Modes of a Quasi-Spherical Axisymmetric Black Hole As we have noted above, the scattering problem for an axially-symmetric neutral BH is completely determined by a single parameter—the deformation degree $\alpha$—which is required to be non-negative without any additional limitations. However, new restrictions on $\alpha$ could appear from the demand of stability of the BH spacetime background against small perturbations, which is fully determined by the quasinormal modes (QNMs). Recall that the quasinormal modes (see, e.g., refs. [77, 78, 79, 80, 81, 83, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95]) correspond to solutions to Equations (27) and (32), which satisfy the specific boundary conditions (cf. [77, 79, 80]): ingoing waves at the horizon $Q_{s}(x)\simeq e^{-i\omega r_{*}},\;\;\;r_{*}\rightarrow-\infty\;(r\rightarrow r_{+}),$ (47) and outgoing waves at spatial infinity $\begin{split}&Q_{s}(x)\simeq e^{i\omega r_{*}},\;\;r_{*},r\rightarrow+\infty\;\;\;\text{flat space},\\ &Q_{s}(x)\rightarrow 0,\;\;\;\;\;r_{*},r\rightarrow+\infty\;\;\;\text{AdS space}\;.\end{split}$ (48) For fixed values of the spin $s$ and (projection of) angular momentum $(l,m)$, there is an infinite number of QNMs, which are labeled by the overtone number $n$; the least damped (fundamental) QNMs correspond to $n=0$. Generally, the QNMs are complex, and their imaginary part can be positive or negative. It is easy to verify [79, 80] that a positive imaginary part of any QNM results in the instability of the background geometry. Indeed, following refs.
[79, 80], we will find solutions to the Schrödinger-like equation $\left[\frac{\partial^{2}}{\partial r_{*}^{2}}+\omega^{2}-V_{s}(r)\right]Q_{s}=0,\;\;r_{*}\in(-\infty,+\infty),$ (49) with the "wave-function" $Q_{s}(r)=e^{-i\omega r_{*}}\phi_{s}(r),$ (50) and figure out restrictions on the admissible complex frequencies. To maintain the boundary conditions (47), (48), the "amplitude" $\phi_{s}(r)$ should satisfy $\phi_{s}(r)\simeq\mathrm{const},\;\;\;\;r\rightarrow r_{+},$ (51) and $\begin{split}&\phi_{s}(r)\rightarrow e^{2i\omega r_{*}},\;\;r_{*},r\rightarrow+\infty\;\;\;\text{flat space},\\ &\phi_{s}(r)\rightarrow 0,\;\;\;\;\;\;\;r_{*},r\rightarrow+\infty\;\;\;\text{AdS space}\;.\end{split}$ (52) Plugging the ansatz (50) into Equation (49), for the inherently complex-valued $\phi_{s}(r)$, we get $f\frac{d^{2}\phi_{s}}{dr^{2}}+\left(\frac{df}{dr}-2i\omega\right)\frac{d\phi_{s}}{dr}-\frac{V_{s}}{f}\phi_{s}=0,\quad f=1-\frac{2M}{r}+\kappa^{2}r^{2}.$ (53) Now, one multiplies both sides of (53) by $\phi_{s}^{*}(r)$ and integrates over $[r_{+},\infty)$: $\int_{r_{+}}^{\infty}dr\left(\phi_{s}^{*}\frac{d(f\partial_{r}\phi_{s})}{dr}-2i\omega\phi_{s}^{*}\frac{d\phi_{s}}{dr}-\frac{V_{s}}{f}|\phi_{s}|^{2}\right)=0\;.$ (54) Further integration by parts on account of the boundary conditions (51), (52) and the asymptotic behaviour of the red-shift factor $f(r)$ at the integration end points results in the following expression for the first term of the integrand in (54): $\int_{r_{+}}^{\infty}dr\frac{d(\phi_{s}^{*}f\partial_{r}\phi_{s})}{dr}=\phi_{s}^{*}f\frac{d\phi_{s}}{dr}\bigg|_{r=r_{+}}^{r=\infty}=\begin{cases}&2i\omega\;\;\;\text{flat}\\ &0\,\,\,\;\;\;\;\text{AdS}\end{cases}.$ (55) Hence, $\int_{r_{+}}^{\infty}dr\left(f\bigg|\frac{d\phi_{s}}{dr}\bigg|^{2}+2i\omega\phi_{s}^{*}\frac{d\phi_{s}}{dr}+\frac{V_{s}}{f}|\phi_{s}|^{2}\right)=\begin{cases}&2i\omega\;\;\;\text{flat}\\ &0\,\,\,\;\;\;\;\text{AdS}\end{cases}\;,$ (56) and, for the imaginary part of (56), we obtain
$\int_{r_{+}}^{\infty}dr\left(\omega\phi_{s}^{*}\frac{d\phi_{s}}{dr}+\omega^{*}\phi_{s}\frac{d\phi_{s}^{*}}{dr}\right)=\begin{cases}&\omega+\omega^{*}\;\;\;\text{flat}\\ &0\,\,\,\;\,\,\,\;\,\,\,\;\;\;\;\;\text{AdS}\end{cases}\;.$ (57) Using integration by parts for the left-hand-side (l.h.s.) of (57) once again results in $\int_{r_{+}}^{\infty}dr\left(\omega\phi_{s}^{*}\frac{d\phi_{s}}{dr}+\omega^{*}\phi_{s}\frac{d\phi_{s}^{*}}{dr}\right)=(\omega-\omega^{*})\int_{r_{+}}^{\infty}\phi_{s}^{*}\frac{d\phi_{s}}{dr}dr+\omega^{*}|\phi_{s}(r)|^{2}\bigg|_{r=r_{+}}^{r=\infty},$ (58) so that $\int_{r_{+}}^{\infty}\phi_{s}^{*}\frac{d\phi_{s}}{dr}dr=\begin{cases}&\frac{\omega+\omega^{*}|\phi_{s}(r_{+})|^{2}}{\omega-\omega^{*}}\;\;\;\text{flat}\\ &\frac{\omega^{*}|\phi_{s}(r_{+})|^{2}}{\omega-\omega^{*}}\,\,\,\;\;\;\;\;\text{AdS}\end{cases}\;.$ (59) Lastly, substituting (59) into the l.h.s. of (56) leads to [79, 80] $\int_{r_{+}}^{\infty}dr\left(f\bigg|\frac{d\phi_{s}}{dr}\bigg|^{2}+\frac{V_{s}}{f}|\phi_{s}|^{2}\right)=\begin{cases}&-\frac{|\omega|^{2}|\phi_{s}(r_{+})|^{2}+(\text{Re}\,\omega)^{2}+(\text{Im}\,\omega)^{2}}{\text{Im}\,\omega}\;\;\;\text{flat}\\ &-\frac{|\omega|^{2}|\phi_{s}(r_{+})|^{2}}{\text{Im}\,\omega}\,\,\,\;\,\,\,\;\,\,\,\;\,\,\,\;\,\,\,\;\,\,\,\;\,\,\,\;\,\,\,\;\;\;\text{AdS}\end{cases}\;.$ (60) Therefore, for a potential $V_{s}(r)$ that is non-negative in the domain $[r_{+},\infty)$ (cf. Figures 1 and 2 with the effective potentials in spherically-symmetric spacetimes), the l.h.s. of (60) is non-negative. The same is required for the right-hand-side (r.h.s.) of this expression, which means negativity of the denominator: $\text{Im}\,\omega<0$. Once the imaginary part of a frequency becomes positive, it corresponds to an unstable solution, because the fields begin to grow exponentially in time.
Put differently, this situation may occur when the effective potential $V_{s}(r)$ turns out to be negative within the physical domain $[r_{+},\infty)$. Below, we will explore the possibility of finding negative branches of the effective potentials for small perturbations over the quasi-spherical neutral BH background with axial symmetry in Minkowski and AdS spacetimes. ### 6.1 Flat Spacetime In Minkowski spacetime with $f(r)=1-r_{+}/r$, the effective potential (see (33) for $\kappa^{2}=0$) $V_{s}=\frac{f(r)}{r^{2}}\left((1-s^{2})\frac{r_{+}}{r}+\nu(\nu+1)\right)$ (61) could be negative only for odd (axial) gravitational perturbations with $s=+2$ (clearly, the polar tensor perturbations with the Zerilli potential (29) do not satisfy this condition); it happens for $\nu(\nu+1)<\frac{3r_{+}}{r},\;\;r\in[r_{+},\infty).$ (62) For $\nu$ satisfying (62), $V_{+2}(r)$ is negative within the interval $r\in[r_{+},r_{0})$ with $r_{0}=\frac{3r_{+}}{\nu(\nu+1)}$. Then, from $r_{0}>r_{+}$, we get the following condition on the separation constant: $\nu(\nu+1)<3$, or $\nu<1.303$. Eigenvalues $\nu_{2m}$ smaller than the critical value $\nu_{cr}=1.303$ produce a negative effective potential. Recall that, in the spherically symmetric case, $\nu\rightarrow l\geq 2$, so that the lowest value of the separation constant is $l(l+1)=6$. Hence, the effective potential $V_{s}(r)$ for $s=0,1,\pm 2$ is always positive. According to (60), this gives $\text{Im}\,\omega<0$, and the standard Schwarzschild background is stable against small perturbations [70, 71, 77]. In contrast, small perturbations over a quasi-spherical axially-symmetric BH background are characterized by three different values of $\nu_{lm}$ with $l=2$, viz., $\nu_{20},\nu_{21},\nu_{22}$. Their dependences on the deformation degree $\alpha$ are depicted in Figure 3 in accordance with the relation (46).
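The critical numbers quoted here follow from elementary algebra, which a short sketch makes explicit (the threshold $\alpha\approx 0.535$ at which $\nu_{22}$ first crosses $\nu_{cr}$ is our estimate from (46), not a value quoted in the text):

```python
# Sketch: the critical eigenvalue nu_cr solving nu(nu+1) = 3, and the
# deformation degree at which nu_22(alpha) = 2 - 2*alpha/(1+alpha), Eq. (46),
# drops below it, i.e. where V_{+2} first develops a negative branch.
import numpy as np

nu_cr = (np.sqrt(13.0) - 1.0)/2.0        # positive root of nu^2 + nu - 3 = 0
print(round(nu_cr, 3))                   # 1.303, as quoted in the text

# nu_22 < nu_cr  <=>  2*alpha/(1 + alpha) > 2 - nu_cr  <=>  alpha > alpha_neg
alpha_neg = (2.0 - nu_cr)/nu_cr          # our estimate of the crossing point
print(round(alpha_neg, 3))

# the instability threshold discussed below is alpha = 1, where nu_22 = 1
assert 2 - 2*1.0/(1 + 1.0) < nu_cr
```

Note the gap between the two thresholds: a negative branch of $V_{+2}$ is only a necessary hint of instability, while the QNM computation below places the actual onset at $\alpha=1$.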
One can notice that there are values of $\alpha$ for which $\nu_{22}$ and $\nu_{21}$ become smaller than the critical value $\nu_{cr}=1.303$; hence, we can expect the appearance of QNMs with a positive imaginary part. To find the QNMs, we follow the semianalytical Padé approximation [88, 94, 95], which improves the standard Wentzel-Kramers-Brillouin (WKB) technique, and the numerical Leaver method [90] based on continued fractions. As we have discussed, we are mainly interested in the values of the QNM frequencies corresponding to tensor perturbations with the eigenvalues $\nu_{20},\nu_{21},\nu_{22}$. The eigenvalue $\nu_{20}$ coincides with that of the spherically symmetric case; therefore, all possible QNMs (fundamental and overtones) related to this case will have a negative imaginary part. The other QNMs, for the eigenvalues $\nu_{21},\nu_{22}$ (of frequencies $\omega_{21}$ and $\omega_{22}$, respectively), are functions of the deformation degree $\alpha$. The corresponding frequencies (their real and imaginary parts) are shown in Figure 6. We find that, for $\alpha>1$, the imaginary part of $\omega_{22}$ becomes greater than zero (see Figure 7). The critical value of the deformation parameter $\alpha=1$ corresponds to $\nu_{22}=1$ (cf. (46)), so that the domain of negativity of $V_{+2}(r)$ is determined by $[r_{+},\frac{3}{2}r_{+})$. As $\alpha$ increases, this domain grows, and $\text{Im}\,\omega_{22}$ increases, too. For some values of $\alpha>1$, the imaginary parts of other tensor QNMs also become positive. To sum up, above the critical value of the deformation degree $\alpha=1$, the spacetime geometry of a distorted/quasi-spherical BH becomes unstable against small tensor perturbations. Figure 6: Left panel: the real part of the lowest odd gravitational perturbations $\omega_{2m}(\alpha)$ over a quasi-spherical axially-symmetric neutral BH background in flat spacetime. Right panel: the imaginary part of $\omega_{2m}(\alpha)$ under the same conditions.
Figure 7: The imaginary part of $\omega_{22}(\alpha)$ in more detail. ### 6.2 AdS Spacetime Now, we turn to AdS spacetime. Here, we have $f(r)=1+r^{2}\kappa^{2}-\frac{r_{+}}{r}(1+\kappa^{2}r_{+}^{2})$, and the effective potential of odd perturbations $V_{s}=\frac{f(r)}{r^{2}}\left((1-s^{2})\left(2\kappa^{2}r+(1+\kappa^{2}r_{+}^{2})\frac{r_{+}}{r}\right)+\nu(\nu+1)+3s(s-1)\kappa^{2}r^{2}\right)$ (63) could still be negative only for $s=+2$ (the effective potential for even tensor perturbations (29) is positive for any $\nu_{lm}\geq 2$). The condition for $V_{+2}(r)$ to be negative is $\nu(\nu+1)<(1+\kappa^{2}r_{+}^{2})\frac{3r_{+}}{r},$ (64) and $V_{+2}(r)<0$ within $r\in[r_{+},r_{0})$, where $r_{0}=\frac{3r_{+}(1+\kappa^{2}r^{2}_{+})}{\nu(\nu+1)}$ ($r_{0}>r_{+}$). Therefore, for the eigenvalues $\nu$, we get $\nu(\nu+1)<3(1+\kappa^{2}r_{+}^{2}).$ (65) Apparently, the result strongly depends on the specific value of $\kappa^{2}r_{+}^{2}$ ($r_{+}$ is the radius of the event horizon), so now the critical value of $\nu$ depends on the size of the BH. In general, the value of $\nu_{cr}$ in AdS becomes higher than in Minkowski spacetime; hence, QNMs with a positive imaginary part may appear for smaller values of $\alpha$. To compute the tensor QNMs, we will here follow the approach of ref. [79]. In addition, we will take into account the following features of the fundamental (i.e., least damped) QNMs of a Schwarzschild-AdS BH background, noted in ref. [81]: * • for large (with $r_{+}\kappa\simeq 100$) and intermediate (with $r_{+}\kappa\simeq 1$) BHs, the fundamental quasinormal modes are purely imaginary and scale as $\omega/\kappa\simeq(r_{+}\kappa)^{-1}$; * • for small BHs, the fundamental frequencies get non-trivial real and imaginary parts; the latter behaves as $\text{Im}\,\omega\simeq-r_{+}$. In the limit $r_{+}\rightarrow 0$, the QNMs turn into normal modes of AdS spacetime, determined by $\omega^{AdS}_{l}=2n+l+2$.
In the background of a quasi-spherical axisymmetric neutral BH, we have three different tensor QNMs—of frequencies $\omega_{20}$, $\omega_{21}$ and $\omega_{22}$—one of which, $\omega_{20}$, coincides with that of the standard spherically-symmetric AdS-Schwarzschild BH. The other two, $\omega_{21}$ and $\omega_{22}$, become functions of the deformation degree $\alpha$. It turns out that, similar to the spherically-symmetric case, the frequencies of the fundamental QNMs for large and intermediate BHs come out purely imaginary. Their functional dependences on $\alpha$ are plotted in Figure 8. Looking at the left panel of Figure 8, one may notice that, for a chosen deformation degree $\alpha$, the least damped QNM corresponds to $m=2$, which gives the smallest value of $\nu$. The values of the deformation degree $\alpha=0.2$ and $\alpha=0.5$ still give a negative imaginary part of the QNM frequencies. However, increasing the value of $\alpha$ may flip the sign of $\text{Im}\,\omega$. Indeed, in the right panel of Figure 8, one finds the dependences of $\text{Im}\,\omega_{21}$ and $\text{Im}\,\omega_{22}$ for a large BH (with $\kappa r_{+}=10.0$) on the parameter $\alpha$. Starting from $\alpha>1$, $\text{Im}\,\omega_{22}$ turns out to be positive, which corresponds to $\nu_{22}<1$. In contrast, $\text{Im}\,\omega_{21}$ remains negative, even for those $\alpha$ for which $V_{+2}(r)<0$. Figure 8: Left panel: imaginary parts of $\omega_{2m}$ as functions of $r_{+}\kappa$. Right panel: imaginary parts of quasinormal modes (QNMs) of frequencies $\omega_{2m}$ ($m=1,2$) as functions of the deformation degree $\alpha$. Figure 9: Functional dependences on $r_{+}\kappa$ of the real (left panel) and imaginary (right panel) parts of the fundamental QNMs of frequencies $\omega_{2m}$ ($m=1,2$). (Small BHs; the deformation degree is fixed to be $\alpha=0.2$.)
AdS-S stands for Anti-de Sitter-Schwarzschild; AdS-S LM is the shorthand notation of Anti-de Sitter-Schwarzschild with the Liouville Mode (distorted BH). In addition, in the left panel of Figure 8, we can see that the purely imaginary QNMs of large and intermediate BHs depend on $r_{+}\kappa$ quite similarly. This observation allows us to conclude that, for any fixed value of $\alpha$, one can find the spherically symmetric counterpart (of large and intermediate quasi-spherical BHs) with a larger value of the horizon location $r_{+}$, which possesses the same fundamental QNMs. The exception is the $\omega_{22}$ mode, the imaginary part of which becomes positive for $\alpha>1$, so such a correspondence does not take place. Examining the QNMs of small AdS-Schwarzschild BHs, we find that they behave much like those of the standard spherically-symmetric case. One observes that $\text{Im}\,\omega\simeq-r_{+}$ within the range $r_{+}\kappa\in[0.3;0.8]$ (right panel of Figure 9). Once $r_{+}\rightarrow 0$, the real parts of the fundamental QNMs are determined by the expression for the normal fundamental modes of empty AdS spacetime, $\omega_{AdS}=l+2$, in which $l$ is replaced with $\nu$ (left panel of Figure 9). Therefore, as in the case of flat spacetime, we have established the critical value of the deformation degree $\alpha=1$, below which the background geometry of distorted/quasi-spherical BHs is stable against small tensor perturbations, but above which the background geometry of large quasi-spherical BHs becomes unstable. ## 7 Summary and Open Questions Let us briefly sum up our achievements and findings. We have discussed the scattering problem for small perturbations—scalar, vector, and tensor—on quasi-spherical BHs in Minkowski and AdS spacetimes. Studying the problem, in general, has resulted in the following observations: * • There is a deep connection between distorted and quasi-spherical static BHs.
To be precise, we have established the generalization of the Weyl-Erez-Rosen solution to the flat-space Einstein vacuum equations that reduces to the quasi-spherical Schwarzschild BH solution, the metric potential of which obeys the Liouville equation. * • The obtained BH spacetime is of type D in the Petrov classification. This makes it possible, despite the spherical symmetry breaking, to separate the variables in the dynamical equations of small perturbations and to arrive at the generalized Regge-Wheeler-Zerilli equations, as well as at the generalization of the spherical harmonics equation. * • The outcomes of the spherical symmetry breaking are: non-integer eigenvalues in the generalized spectral problem, coming from the angular part of the dynamical equations of small perturbations; their multi-dimensional character for each scattering mode, where the dimension of the appropriate set of eigenvalues is determined by the degree of the corresponding spherical harmonics; and, last, the functional dependence of the generalized eigenvalues on the deformation degree parameters. Restoring a part of the spherical symmetry—the axial symmetry of the spacetime background—relevant for a wide range of astrophysical problems has led us to the following findings: * • The angular dependence of the corresponding quantities is further reduced to a dependence on the single polar angle; the spherical Liouville equation for the metric potential turns into the enlargement of the general Legendre equation, which can be explicitly solved. * • It turns out that the eigenvalues of the generalized spectral problem depend solely on a single deformation-degree parameter after all, and this functional dependence has been recovered in analytical form. In addition, it has been observed that the generalized eigenvalues are quantized in non-integer values.
* • For every scattering mode corresponding to the appropriate degree of spherical harmonics $l$, there are $l+1$ different values of the grey-body factors, the properties of which are determined by the deformation degree. For instance, we have observed that the grey-body factors increase with the value of the deformation degree. * • Studying the issue of stability of BH backgrounds, we have found that a deformation degree equal to one is the critical value for the stability of the axially symmetric quasi-spherical Schwarzschild BH in Minkowski and AdS spacetimes against the specific small tensor perturbations. * • We also find that, for large and intermediate AdS-Schwarzschild quasi-spherical BHs, the fundamental tensor QNMs at any fixed admissible value of the deformation degree are the same as those of a spherically-symmetric BH with a larger value of the event horizon along the radial direction. Therefore, we have observed significant differences in the scattering characteristics of gravitational waves caused by losing the spherical symmetry of the background spacetime of their propagation. It would be interesting to find signs of the established effects in the data of real astrophysical observations, at least for slowly rotating systems. Finally, we will touch upon the following point. It is well known that the tidal deformation of compact objects in double neutron star systems is described by the famous I-Love-Q relation; see refs. [96, 97] for reviews. It seems important to figure out any possible relation between the effective metric used in refs. [96, 97] and that of the distorted Kerr. Another direction of further studies is related to recovering the metric of a quasi-spherical rotating BH and studying the scattering processes in a more realistic setup. Our preliminary investigations showed the failure of the Newman-Janis algorithm [98] (also see refs. [99, 100, 101, 102, 103] and refs.
therein) in constructing the rotating extension of the quasi-spherical static metric mostly used in the paper. Perhaps the connection between distorted and quasi-spherical BHs observed here will make it possible to complete this task in another way. We hope to continue studies on this and other related topics and to report the results in forthcoming publications. Acknowledgments: A.M.A. is grateful to the Institute for Theoretical Physics of NSC KIPT, where part of this work was completed, for its warm hospitality. A.J.N. acknowledges all the colleagues with whom the subject of the present research was discussed for fruitful and stimulating conversations. The authors are thankful to the anonymous reviewers for their suggestions on improving the early version of the paper.
## Appendix A. Spherically symmetric and axially symmetric solutions to the Liouville equation
First, let us briefly discuss the way to solve the spherical Liouville Equation (13) in terms of unconstrained functions. See Appendix B of ref. [64] for more details.
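Before proceeding, we note that the radial solution (A.14) derived at the end of this appendix can be checked symbolically. A minimal sketch using the sympy computer-algebra package (not part of the paper itself); `residual` below is the left-hand side of the radial Liouville Equation (A.12) with $\Phi=\ln f$:

```python
import sympy as sp

# Check that f(rho) of (A.14) solves the radial Liouville equation (A.12),
# i.e. Phi'' + Phi'/rho + 2*exp(Phi) = 0 with Phi = ln f.
rho, C1, C2 = sp.symbols('rho C1 C2', positive=True)

k = sp.sqrt(2 + C1) / sp.sqrt(2)          # sqrt(2 + C1)/sqrt(2), as in (A.14)
f = (2 + C1) / (2 * rho**2 * sp.cosh(k * (C2 - sp.log(rho)))**2)
Phi = sp.log(f)

residual = sp.diff(Phi, rho, 2) + sp.diff(Phi, rho) / rho + 2 * sp.exp(Phi)
residual = sp.simplify(residual)
print(residual)  # analytically zero
```

The same substitution $t=\ln\rho$, $\Psi=\Phi+2t$ that underlies (A.14) reduces (A.12) to $\Psi''+2e^{\Psi}=0$, whose $\cosh^{-2}$ solution is classical.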
To this end, it is convenient to turn to the stereographic projection plane coordinates $z=e^{i\varphi}\tan\frac{\theta}{2},\qquad\bar{z}=e^{-i\varphi}\tan\frac{\theta}{2}\,.$ (A.1) Then, the angular part of the $S^{2}$ line element, $ds^{2}=d\theta^{2}+\sin^{2}\theta d\varphi^{2}$, becomes the Fubini-Study metric of the $CP^{1}$ complex projective space: $ds^{2}=\frac{4}{(1+z\bar{z})^{2}}\,dz\,d\bar{z},$ (A.2) and the angular part of the Laplacian (14) simplifies to $\Delta_{\theta,\varphi}=\frac{1}{4}(1+z\bar{z})^{2}\partial_{z}\partial_{\bar{z}}.$ (A.3) Consequently, the spherical Liouville Equation (13) for $\chi(\theta,\varphi)$ takes the form $\frac{1}{4}(1+z\bar{z})^{2}\partial_{z}\partial_{\bar{z}}\chi(z,\bar{z})+2e^{\chi(z,\bar{z})}-2=0$ (A.4) and can be solved in terms of an arbitrary complex analytic function $F(z)$ [63, 64]: $\chi(z,\bar{z})=-2\ln\left[F(z)\bar{F}(\bar{z})+1\right]+\ln\left[\frac{dF(z)}{dz}\frac{d\bar{F}(\bar{z})}{d{\bar{z}}}\right]+\ln(1+z\bar{z})^{2}.$ (A.5) Second, let us survey the way of obtaining expression (40) in the case of axial symmetry of a quasi-spherical BH. Turning to the isothermal coordinates $(x,y)$ in $z$, viz.
$z=x+iy$, for (A.4), we obtain $\frac{1}{4}(1+x^{2}+y^{2})^{2}\left(\frac{\partial^{2}\chi}{\partial x^{2}}+\frac{\partial^{2}\chi}{\partial y^{2}}\right)+2e^{\chi(x,y)}-2=0\,.$ (A.6) The Fubini-Study metric (A.2), in the coordinates $(x,y)$, becomes $ds^{2}=e^{\chi_{0}(x,y)}(dx^{2}+dy^{2}),$ (A.7) with $e^{\chi_{0}(x,y)}\equiv\frac{4}{(1+x^{2}+y^{2})^{2}},$ (A.8) or $\chi_{0}(x,y)=2\ln\left[\frac{2}{1+x^{2}+y^{2}}\right].$ (A.9) Next, we define a function $\Phi(x,y)$ as $\chi(x,y)=\Phi(x,y)-\chi_{0}(x,y)=\Phi(x,y)-2\ln\left[\frac{2}{1+x^{2}+y^{2}}\right].$ (A.10) Substituting (A.10) into (A.6) leads to the following equation for $\Phi(x,y)$: $\frac{1}{4}\bigg{(}1+x^{2}+y^{2}\bigg{)}^{2}\left[\frac{\partial^{2}\Phi}{\partial x^{2}}+\frac{\partial^{2}\Phi}{\partial y^{2}}+2e^{\Phi(x,y)}\right]=0\quad\leadsto\quad\frac{\partial^{2}\Phi}{\partial x^{2}}+\frac{\partial^{2}\Phi}{\partial y^{2}}+2e^{\Phi(x,y)}=0.$ (A.11) Now, the axially-symmetric case corresponds to demanding that $\Phi(x,y)$ depend solely on the radial coordinate $\rho$, related to $(x,y)$ via $\rho^{2}=x^{2}+y^{2}$. Hence, $\Phi(x,y)\rightarrow\Phi(\rho)$, and $\Phi(\rho)$ obeys $\frac{d^{2}\Phi(\rho)}{d\rho^{2}}+\frac{1}{\rho}\frac{d\Phi(\rho)}{d\rho}+2e^{\Phi(\rho)}=0.$ (A.12) With $\Phi=\ln f(\rho)$, Equation (A.12) transforms into $f\frac{d^{2}f(\rho)}{d\rho^{2}}-\left(\frac{df(\rho)}{d\rho}\right)^{2}+\frac{f}{\rho}\,\frac{df(\rho)}{d\rho}+2f^{3}(\rho)=0,$ (A.13) the solution to which is $f(\rho)=\frac{2+C_{1}}{2\rho^{2}\cosh^{2}\left(\frac{\sqrt{2+C_{1}}(C_{2}-\ln\rho)}{\sqrt{2}}\right)}.$ (A.14) Turning back to Equation (A.10), for $\chi(\rho)$, we have $\chi(\rho)=\ln\left[\frac{(2+C_{1})(1+\rho^{2})^{2}}{8\rho^{2}\cosh^{2}\left(\frac{\sqrt{2+C_{1}}(C_{2}-\ln\rho)}{\sqrt{2}}\right)}\right].$ (A.15) Finally, introducing the new constants $a=\sqrt{1+\frac{C_{1}}{2}}$, $b=e^{aC_{2}}$ and replacing $\rho$ with its functional dependence on $\theta$, $\rho=\tan\frac{\theta}{2}$ (cf.
(A.1)), we arrive at $e^{\chi(\theta)}=\left(\frac{a}{b}\right)^{2}\tan^{2a-2}\frac{\theta}{2}\left(\frac{1+\tan^{2}\frac{\theta}{2}}{(1+b^{-2}\tan^{2a}\frac{\theta}{2})}\right)^{2}.$ (A.16) On account of the trigonometric identity $\tan^{2}\frac{\theta}{2}=(1-x)/(1+x)$, $x=\cos\theta$, the obtained solution (A.16) to the Liouville equation in polar coordinates turns into expression (40).
## References
* [1] Bartos, I.; Kowalski, M. Multimessenger Astronomy; IOP Publishing: Bristol, UK, 2017. * [2] Abbott, B.P.; LIGO Scientific Collaboration; Virgo Collaboration. Binary Black Hole Mergers in the first Advanced LIGO Observing Run. Phys. Rev. X 2016, 6, 041015; Erratum in 2018, 8, 039903. * [3] Pierre Auger Collaboration. Correlation of the highest energy cosmic rays with nearby extragalactic objects. Science 2007, 318, 938. * [4] Eckart, A.; Genzel, R. Observations of stellar proper motions near the Galactic Centre. Nature 1996, 383, 415. * [5] Ghez, A.M.; Klein, B.L.; Morris, M.; Becklin, E.E. High proper motion stars in the vicinity of Sgr A*: Evidence for a supermassive black hole at the center of our galaxy. Astrophys. J. 1998, 509, 678. * [6] De Felice, F.; Sorge, F. Magnetized orbits around a Schwarzschild black hole. Class. Quant. Grav. 2003, 20, 469. * [7] De Felice, F.; Sorge, F.; Zilio, S. Magnetized orbits around a Kerr black hole. Class. Quant. Grav. 2004, 21, 961. * [8] Kachelriess, M. Lecture notes on high energy cosmic rays. arXiv 2008, arXiv:0801.4376. * [9] Stanev, T. High Energy Cosmic Rays, 2nd ed.; Springer & Praxis: Berlin, Germany; Chichester, UK, 2010. * [10] Dawson, B.R.; Fukushima, M.; Sokolsky, P. Past, Present and Future of UHECR Observations. PTEP 2017, 12, 12A101. * [11] Park, I.Y. Quantum-corrected Geometry of Horizon Vicinity. Fortsch. Phys. 2017, 65, 1700038. * [12] Nurmagambetov, A.J.; Park, I.Y. Quantum-induced trans-Planckian energy near horizon. J. High Energy Phys. 2018, 1805, 167.
* [13] Guépin, C.; Kotera, K.; Barausse, E.; Fang, K.; Murase, K. Ultra-High Energy Cosmic Rays and Neutrinos from Tidal Disruptions by Massive Black Holes. Astron. Astrophys. 2018, 616, A179; Erratum in 2020, 636, C3. * [14] Nurmagambetov, A.J.; Park, I.Y. Quantum-gravitational trans-Planckian energy of a time-dependent black hole. Symmetry 2019, 11, 1303. * [15] Nurmagambetov, A.J.; Park, I.Y. On Firewalls in quantum-corrected General Relativity. J. Phys. Conf. Ser. 2019, 1390, 012091. * [16] Comisso, L.; Sironi, L. The interplay of magnetically-dominated turbulence and magnetic reconnection in producing nonthermal particles. Astrophys. J. 2019, 886, 122. * [17] Nurmagambetov, A.J. Quantum Leaps in the Vicinity of One-Loop Gravity Black Holes. Phys. Part. Nucl. 2020, 51, 739. * [18] Nurmagambetov, A.J.; Park, I.Y. Quantum-gravitational trans-Planckian radiation by a rotating black hole. arXiv 2020, arXiv:2007.06070. * [19] Comisso, L.; Asenjo, F.A. Magnetic Reconnection as a Mechanism for Energy Extraction from Rotating Black Holes. arXiv 2020, arXiv:2012.00879. * [20] Event Horizon Telescope Collaboration. First M87 Event Horizon Telescope Results. I. The Shadow of the Supermassive Black Hole. Astrophys. J. 2019, 875, L1. * [21] EHT Collaboration. Gravitational Test beyond the First Post-Newtonian Order with the Shadow of the M87 Black Hole. Phys. Rev. Lett. 2020, 125, 141104. * [22] Gralla, S.E. Can the EHT M87 results be used to test general relativity? arXiv 2020, arXiv:2010.08557. * [23] Starobinskii, A.A.; Churilov, S.M. Amplification of electromagnetic and gravitational waves scattered by a rotating “black hole”. Sov. Phys. JETP 1974, 65, 1–5. * [24] Fabbri, R. Scattering and absorption of electromagnetic waves by a Schwarzschild black hole. Phys. Rev. D 1975, 12, 933–942. * [25] Unruh, W.G. Absorption Cross-Section of Small Black Holes. Phys. Rev. D 1976, 14, 3251–3259. * [26] Sanchez, N.G. Scattering of scalar waves from a Schwarzschild black hole. J.
Math. Phys. 1976, 17, 688–692. * [27] Sanchez, N.G. The Wave Scattering Theory and the Absorption Problem for a Black Hole. Phys. Rev. D 1977, 16, 937–945. * [28] Sanchez, N.G. Absorption and Emission Spectra of a Schwarzschild Black Hole. Phys. Rev. D 1978, 18, 1030–1036. * [29] Sanchez, N.G. Elastic Scattering of Waves by a Black Hole. Phys. Rev. D 1978, 18, 1798–1804. * [30] Cvetic, M.; Larsen, F. Grey body factors for rotating black holes in four-dimensions. Nucl. Phys. B 1997, 506, 107–120. * [31] Klebanov, I.R.; Mathur, S.D. Black hole grey body factors and absorption of scalars by effective strings. Nucl. Phys. B 1997, 500, 115–132. * [32] Cvetic, M.; Larsen, F. Greybody factors for black holes in four-dimensions: Particles with spin. Phys. Rev. D 1998, 57, 6297–6310. * [33] Kanti, P. Black holes in theories with large extra dimensions: A Review. Int. J. Mod. Phys. A 2004, 19, 4899–4951. * [34] Grain, J.; Barrau, A.; Kanti, P. Exact results for evaporating black holes in curvature-squared lovelock gravity: Gauss-Bonnet greybody factors. Phys. Rev. D 2005, 72, 104016. * [35] Keshet, U.; Neitzke, A. Asymptotic spectroscopy of rotating black holes. Phys. Rev. D 2008, 78, 044006. * [36] Boonserm, P.; Visser, M. Bounding the greybody factors for Schwarzschild black holes. Phys. Rev. D 2008, 78, 101502. * [37] Li, W.; Xu, L.; Liu, M. Greybody factors in rotating charged Goedel black holes. Class. Quant. Grav. 2009, 26, 055008. * [38] Harmark, T.; Natario, J.; Schiappa, R. Greybody Factors for d-Dimensional Black Holes. Adv. Theor. Math. Phys. 2010, 14, 727–793. * [39] Gonzalez, P.; Papantonopoulos, E.; Saavedra, J. Chern-Simons black holes: Scalar perturbations, mass and area spectrum and greybody factors. JHEP 2010, 08, 050. * [40] Boonserm, P.; Ngampitipan, T.; Visser, M. Regge-Wheeler equation, linear stability, and greybody factors for dirty black holes. Phys. Rev. D 2013, 88, 041502. * [41] Dong, R.; Stojkovic, D.
Greybody factors for a black hole in massive gravity. Phys. Rev. D 2015, 92, 084045. * [42] Catalán, M.; Cisternas, E.; González, P.A.; Vásquez, Y. Quasinormal modes and greybody factors of a four-dimensional Lifshitz black hole with z = 0. Astrophys. Space Sci. 2016, 361, 189. * [43] Gray, F.; Visser, M. Greybody Factors for Schwarzschild Black Holes: Path-Ordered Exponentials and Product Integrals. Universe 2018, 4, 93. * [44] Stephani, H.; Kramer, D.; MacCallum, M.; Hoenselaers, C.; Herlt, E. Exact Solutions of Einstein’s Field Equations, 2nd ed.; Cambridge University Press: Cambridge, UK, 2003. * [45] Griffiths, J.B.; Podolsky, J. Exact Space-Times in Einstein’s General Relativity; Cambridge University Press: Cambridge, UK, 2009. * [46] Weyl, H. Zur Gravitationstheorie. Ann. Phys. 1917, 54, 117–145. * [47] Geroch, R.; Hartle, J.B. Distorted black holes. J. Math. Phys. 1982, 23, 680–692. * [48] Thorne, K.S. Nonspherical Gravitational Collapse: Does it Produce Black Holes? Comments Astrophys. Space Phys. 1970, 2, 191–196. * [49] Thorne, K.S. Nonspherical Gravitational Collapse: A Short Review. In Magic Without Magic; Klauder, J.R., Ed.; Freeman: San Francisco, CA, USA, 1972; pp. 231–258. * [50] Tomimatsu, A. Distorted Rotating Black Holes. Phys. Lett. A 1984, 103, 374–376. * [51] Breton, N.; Denisova, T.E.; Manko, V.S. A Kerr black hole in the external gravitational field. Phys. Lett. A 1997, 230, 7–11. * [52] Breton, N.; Garcia, A.A.; Manko, V.S.; Denisova, T.E. Arbitrarily deformed Kerr Newman black hole in an external gravitational field. Phys. Rev. D 1998, 57, 3382–3388. * [53] Semerák, O.; Žačék, M. Gravitating discs around a Schwarzschild black hole: I. Class. Quant. Grav. 2000, 17, 1613–1626. * [54] Žačék, M.; Semerák, O. Gravitating discs around a Schwarzschild black hole: II. Czech. J. Phys. 2002, 52, 19–27. * [55] Letelier, P.S. On the stability of circular orbits of particles moving around black holes surrounded by axially symmetric structures. Phys.
Rev. D 2003, 68, 104002. * [56] Shoom, A.A.; Walsh, C.; Booth, I. Geodesic motion around a distorted static black hole. Phys. Rev. D 2016, 93, 064019. * [57] Kunz, J.; Nedkova, P.; Yazadjiev, S. Magnetized Black Holes in an External Gravitational Field. Phys. Rev. D 2017, 96, 024017. * [58] Araujo, M.E.; Oliveira, S.R. Static axisymmetric approach for the head-on collision of two black holes. Phys. Rev. D 1995, 52, 816–820. * [59] Araujo, M.E.; Letelier, P.S.; Oliveira, S.R. Two Kerr black holes with axisymmetric spins: An Improved Newtonian model for the head-on collision and gravitational radiation. Class. Quant. Grav. 1998, 15, 3051–3060. * [60] Semerák, O.; Basovnίk, M. Geometry of deformed black holes. I. Majumdar-Papapetrou binary. Phys. Rev. D 2016, 94, 044006. * [61] Moskalets, T.; Nurmagambetov, A. Liouville mode in Gauge/Gravity Duality. Eur. Phys. J. C 2015, 75, 551. * [62] Moskalets, T.M.; Nurmagambetov, A.J. Non-uniform horizons in Gauge/Gravity Duality. Phys. Atom. Nucl. 2016, 79, 1497–1499. * [63] Moskalets, T.M.; Nurmagambetov, A.J. Static and non-static black holes with the Liouville mode. Phys. Part. Nucl. Lett. 2017, 14, 365–367. * [64] Moskalets, T.; Nurmagambetov, A. Absorption cross-sections of small quasi-spherical black holes: The massless scalar case. arXiv 2016, arXiv:1607.08830. * [65] Boos, J.; Frolov, V.P. Stationary black holes with stringy hair. Phys. Rev. D 2018, 97, 024024. * [66] Erez, G.; Rosen, N. The Gravitational Field of a Particle Possessing a Multipole Moment. Bull. Res. Counc. Isr. 1959, 8F, 47–50. * [67] Quevedo, H. General static axisymmetric solution of Einstein’s vacuum field equations in prolate spheroidal coordinates. Phys. Rev. D 1989, 39, 2904–2911. * [68] Crowdy, D.G. General solutions to the 2D Liouville equation. Int. J. Eng. Sci. 1997, 35, 141. * [69] Popov, A.G. Exact formulae of constructing solutions to the Liouville equation by use of solutions to the Laplace equation. Dokl. Akad. Nauk 1993, 333, 440–441.
(In Russian) * [70] Regge, T.; Wheeler, J.A. Stability of a Schwarzschild singularity. Phys. Rev. 1957, 108, 1063–1069. * [71] Zerilli, F.J. Effective potential for even parity Regge-Wheeler gravitational perturbation equations. Phys. Rev. Lett. 1970, 24, 737–738. * [72] Petrov, A.Z. The classification of spaces defining gravitational fields. Sci. Not. Kazan State Univ. 1954, 114, 55–69. * [73] Petrov, A.Z. Einstein Spaces; Pergamon Press: Oxford, UK, 1969. * [74] Newman, E.; Penrose, R. An Approach to gravitational radiation by a method of spin coefficients. J. Math. Phys. 1961, 3, 566–578. * [75] Teukolsky, S.A. Perturbations of a rotating black hole. 1. Fundamental equations for gravitational electromagnetic and neutrino field perturbations. Astrophys. J. 1973, 185, 635–647. * [76] Press, W.H.; Teukolsky, S.A. Perturbations of a Rotating Black Hole. II. Dynamical Stability of the Kerr Metric. Astrophys. J. 1973, 185, 649–674. * [77] Chandrasekhar, S. The Mathematical Theory of Black Holes; Oxford University Press: Oxford, UK, 1983. * [78] Otsuki, H.; Futamase, T. Gravitational Perturbation of Schwarzschild-De Sitter Spacetime and Its Quasi-Normal Modes. Prog. Theor. Phys. 1991, 85, 771–778. * [79] Horowitz, G.T.; Hubeny, V.E. Quasinormal modes of AdS black holes and the approach to thermal equilibrium. Phys. Rev. D 2000, 62, 024027. * [80] Cardoso, V.; Lemos, J.P.S. Quasinormal modes of Schwarzschild anti-de Sitter black holes: Electromagnetic and gravitational perturbations. Phys. Rev. D 2001, 64, 084017. * [81] Cardoso, V.; Konoplya, R.; Lemos, J.P.S. Quasinormal frequencies of Schwarzschild black holes in anti-de Sitter space-times: A Complete study on the asymptotic behavior. Phys. Rev. D 2003, 68, 044024. * [82] Burgess, C.P.; Lutken, C.A. Propagators and Effective Potentials in Anti-de Sitter Space. Phys. Lett. B 1985, 153, 137–141. * [83] Konoplya, R.A.; Zhidenko, A. Quasinormal modes of black holes: From astrophysics to string theory. Rev. Mod. Phys.
2011, 83, 793–836. * [84] Dias, O.J.C.; Reall, H.S.; Santos, J.E. Strong cosmic censorship: Taking the rough with the smooth. JHEP 2018, 10, 001. * [85] Glampedakis, K.; Johnson, A.D.; Kennefick, D. Darboux transformation in black hole perturbation theory. Phys. Rev. D 2017, 96, 024036. * [86] Moulin, F.; Barrau, A. Analytical proof of the isospectrality of quasinormal modes for Schwarzschild-de Sitter and Schwarzschild-Anti de Sitter spacetimes. Gen. Rel. Grav. 2020, 52, 82. * [87] Arslanaliev, A.M.; Nurmagambetov, A.J. Price’s Theorem in Gauge/Gravity Duality. Phys. Part. Nucl. 2018, 49, 879–883. * [88] Konoplya, R.A. Quasinormal behavior of the d-dimensional Schwarzschild black hole and higher order WKB approach. Phys. Rev. D 2003, 68, 024018. * [89] Lin, K.; Qian, W.L. A Matrix Method for Quasinormal Modes: Schwarzschild Black Holes in Asymptotically Flat and (Anti-) de Sitter Spacetimes. Class. Quant. Grav. 2017, 34, 095004. * [90] Leaver, E.W. An Analytic representation for the quasi normal modes of Kerr black holes. Proc. Roy. Soc. Lond. A 1985, 402, 285–298. * [91] Nollert, H.P. Quasinormal modes: The characteristic ‘sound’ of black holes and neutron stars. Class. Quant. Grav. 1999, 16, R159–R216. * [92] Berti, E.; Kokkotas, K.D. Quasinormal modes of Reissner-Nordstrom-anti-de Sitter black holes: Scalar, electromagnetic and gravitational perturbations. Phys. Rev. D 2003, 67, 064020. * [93] Ferrari, V.; Gualtieri, L. Quasi-Normal Modes and Gravitational Wave Astronomy. Gen. Rel. Grav. 2008, 40, 945–970. * [94] Matyjasek, J.; Opala, M. Quasinormal modes of black holes. The improved semianalytic approach. Phys. Rev. D 2017, 96, 024011. * [95] Konoplya, R.A.; Zhidenko, A.; Zinhailo, A.F. Higher order WKB formula for quasinormal modes and grey-body factors: Recipes for quick and accurate calculations. Class. Quant. Grav. 2019, 36, 155002. * [96] Yagi, K.; Yunes, N.
I-Love-Q Relations in Neutron Stars and their Applications to Astrophysics, Gravitational Waves and Fundamental Physics. Phys. Rev. D 2013, 88, 023009. * [97] Yagi, K.; Yunes, N. I-Love-Q Relations: From Compact Stars to Black Holes. Class. Quant. Grav. 2016, 33, 095005. * [98] Newman, E.T.; Janis, A.I. Note on the Kerr spinning particle metric. J. Math. Phys. 1965, 6, 915–917. * [99] Drake, S.P.; Szekeres, P. Uniqueness of the Newman-Janis algorithm in generating the Kerr-Newman metric. Gen. Rel. Grav. 2000, 32, 445–458. * [100] Ferraro, R. Untangling the Newman-Janis algorithm. Gen. Rel. Grav. 2014, 46, 1705. * [101] Erbin, H. Janis-Newman algorithm: Simplifications and gauge field transformation. Gen. Rel. Grav. 2015, 47, 19. * [102] Rajan, D. Complex Spacetimes and the Newman-Janis trick. arXiv 2016, arXiv:1601.03862. * [103] Erbin, H. Janis-Newman algorithm: Generating rotating and NUT charged black holes. Universe 2017, 3, 19.
# Iterated Brownian motion ad libitum is not the pseudo-arc
Jérôme Casse (Université Paris-Dauphine. Email: <EMAIL_ADDRESS>) and Nicolas Curien (Université Paris-Saclay and Institut Universitaire de France. E-mail: <EMAIL_ADDRESS>)
###### Abstract We show that the construction of a random continuum $\mathcal{C}$ from independent two-sided Brownian motions as considered in [11] almost surely yields a non-degenerate, indecomposable, but not hereditarily indecomposable continuum. In particular $\mathcal{C}$ is (unfortunately) not the pseudo-arc.
## 1 Introduction
#### Iterated Brownian motions ad libitum. Let $(\mathfrak{B}_{i})_{i\geq 1}$ be a sequence of i.i.d. two-sided Brownian motions (BM), i.e. $(\mathfrak{B}_{i}(t))_{t\geq 0}$ and $(\mathfrak{B}_{i}(-t))_{t\geq 0}$ are independent standard linear Brownian motions started from $0$. The $n$th iterated BM is $I^{(n)}=\mathfrak{B}_{1}\circ\dots\circ\mathfrak{B}_{n}.$ (1) The doubly iterated Brownian motion $I^{(2)}$ was deeply studied in the 90’s. It permits the construction of solutions to partial differential equations [9], and many results about its probabilistic and analytic properties can be found in [1, 4, 5, 8, 10, 16, 17] and references therein. Of course $I^{(n)}$ is wilder and wilder as $n$ increases (see Figure 1), but in [7] the second author and Konstantopoulos proved that _the occupation measure_ of $I^{(n)}$ over $[0,1]$ converges as $n\to\infty$ towards a random probability measure $\Xi$, which can be thought of as iterated Brownian motions ad libitum. This object was then studied in [6] by the first author and Marckert, who gave a description of $\Xi$ using the invariant measure of an iterated function system (IFS). However, many distributional properties of $\Xi$ remain open. Figure 1: Simulations of $I^{(1)},I^{(2)}$ and $I^{(3)}$, the first three iterations of independent two-sided Brownian motions. The article studies a random continuum built out of the sequence $(I^{(n)}:n\geq 1)$.
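The iterated process (1) is easy to simulate, which is how pictures such as Figure 1 are produced. A rough sketch (all function names are ours; each two-sided BM is sampled as a piecewise-linear path on a grid and the composition is evaluated by linear interpolation, assuming the inner values stay within the simulated window $[-t_{\max},t_{\max}]$):

```python
import numpy as np

def two_sided_bm(t_max, n_steps, rng):
    """Piecewise-linear two-sided Brownian motion on [-t_max, t_max], B(0) = 0."""
    dt = t_max / n_steps
    grid = np.linspace(-t_max, t_max, 2 * n_steps + 1)
    right = np.concatenate([[0.0], np.cumsum(rng.normal(0.0, np.sqrt(dt), n_steps))])
    left = np.concatenate([[0.0], np.cumsum(rng.normal(0.0, np.sqrt(dt), n_steps))])
    vals = np.concatenate([left[::-1][:-1], right])  # two independent branches glued at 0
    return grid, vals

def iterated_bm(n, t, t_max=30.0, n_steps=100_000, rng=None):
    """Approximate I^(n)(t) = B_1(B_2(...B_n(t))) by composing interpolated paths.

    Since the B_i are i.i.d., the order of the freshly sampled paths is immaterial
    in distribution; here the innermost motion is the first one sampled and applied.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    x = np.asarray(t, dtype=float)
    for _ in range(n):
        grid, vals = two_sided_bm(t_max, n_steps, rng)
        x = np.interp(x, grid, vals)
    return x
```

Since every $\mathfrak{B}_{i}(0)=0$, the composition fixes the origin: `iterated_bm(n, 0.0)` is $0$ up to interpolation error.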
#### Continuum and pseudo-arc. In a recent work, Kiss and Solecki used iterated Brownian motions to define a random _continuum_. Recall that a _continuum_ is a nonempty, compact, connected metric space. They were interested in the so-called _pseudo-arc_. The _pseudo-arc_ is a homogeneous continuum which is similar to an arc, so similar that its existence was unclear at the beginning of the last century. A continuum $C$ is * • _chainable_ (also called _arc-like_ , see [15, Theorem 12.11]), if for each $\varepsilon>0$, there exists a continuous function $f:C\to[0,1]$ such that the pre-images of points under $f$ have diameter less than $\varepsilon$. * • _decomposable_ , if there exist two subcontinua $A$ and $B$ of $C$ such that $A,B\neq C$ and $C=A\cup B$. A non-decomposable continuum is called _indecomposable_. * • _hereditarily indecomposable_ if each of its subcontinua (not reduced to a singleton) is indecomposable. By [3], the _pseudo-arc_ is the unique (up to homeomorphisms) chainable and hereditarily indecomposable continuum not reduced to a singleton. In particular, any subcontinuum (not reduced to a singleton) of a pseudo-arc is a pseudo-arc. Its name “pseudo-arc” comes from this property, because arcs have the same property, in the sense that any subcontinuum (not reduced to a singleton) of an arc is an arc. For more information on the pseudo-arc, we refer the interested reader to the second paragraph of [15, Chapter XII] and to [2, 3, 12, 13]. Sadly, it is very complicated to get a “drawing” of the pseudo-arc due to its complicated crooked structure, see [15, Exercise 1.23]. Following the works of Bing, one can wonder whether the pseudo-arc is typical among arc-like continua and ask whether there is a natural probabilistic construction of the pseudo-arc. Let us recall the construction of continua from inverse limits used in [11]; see [15, Section II.2] for details.
Suppose we are given a sequence $\cdots\xrightarrow[]{f_{3}}X_{3}\xrightarrow[]{f_{2}}X_{2}\xrightarrow[]{f_{1}}X_{1}$ where, for any $i\geq 1$, the metric space $(X_{i},d_{i})$ is compact and $f_{i}:X_{i+1}\to X_{i}$ is a continuous surjective function. Then the _inverse limit_ of $(\{X_{i},f_{i}\})_{i\geq 1}$ is the subspace of $\prod_{i\geq 1}X_{i}$ defined by $\varprojlim(f_{i},X_{i}:i\geq 1)=\left\{(x_{i})_{i\geq 1}\in\prod_{i\geq 1}X_{i}:f_{i}(x_{i+1})=x_{i}\right\}.$ (2) In the application below the $X_{i}$ are compact intervals of $\mathbb{R}$, and in this case, by [15, Theorems 2.4 and 12.19], the inverse limit is a chainable continuum. In [11], Kiss and Solecki constructed such a system using independent two-sided Brownian motions $(\mathfrak{B}_{i}:i\geq 1)$. More precisely, they proved that for any interval $J$ of $\mathbb{R}$ with $0\in J$ and $J\neq\{0\}$, the following limit exists almost surely $\mathcal{I}_{i}=\lim_{m\to\infty}\mathfrak{B}_{i}\left(\mathfrak{B}_{i+1}\left(\dots\left(\mathfrak{B}_{i+m}\left(J\right)\right)\dots\right)\right),$ (3) and does not depend on $J$, so that we can consider the random chainable continuum $\mathcal{C}$ obtained as the inverse limit of the system $\cdots\xrightarrow[]{\mathfrak{B}_{3}}\mathcal{I}_{3}\xrightarrow[]{\mathfrak{B}_{2}}\mathcal{I}_{2}\xrightarrow[]{\mathfrak{B}_{1}}\mathcal{I}_{1}.$ Kiss and Solecki proved [11, Theorem 2.1] that the random chainable continuum $\mathcal{C}$ is almost surely non-degenerate and indecomposable. This note answers, in the negative, the obvious question raised by the preceding result: ###### Theorem 1. Almost surely, the random continuum $\mathcal{C}$ is _not_ hereditarily indecomposable (hence is not the pseudo-arc). The proof below could be adapted to prove that a random continuum constructed similarly from a sequence of i.i.d. _reflected_ Brownian motions is not a pseudo-arc either, answering a question in [11, Section 3.1.1].
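The limit (3) can also be explored numerically: under a piecewise-linear approximation of each $\mathfrak{B}_{i}$, the image of an interval is just the range of the path over it, and the nested images stabilize quickly. A rough sketch (all function names are ours; it assumes the nested images stay inside the simulated window $[-t_{\max},t_{\max}]$):

```python
import numpy as np

def bm_path(t_max, n_steps, rng):
    """Two-sided piecewise-linear Brownian path on [-t_max, t_max] with B(0) = 0."""
    dt = t_max / n_steps
    grid = np.linspace(-t_max, t_max, 2 * n_steps + 1)
    right = np.concatenate([[0.0], np.cumsum(rng.normal(0.0, np.sqrt(dt), n_steps))])
    left = np.concatenate([[0.0], np.cumsum(rng.normal(0.0, np.sqrt(dt), n_steps))])
    return grid, np.concatenate([left[::-1][:-1], right])

def interval_image(grid, vals, a, b):
    """Image [min, max] of [a, b] under the piecewise-linear path (grid, vals)."""
    ya, yb = np.interp([a, b], grid, vals)
    inside = vals[(grid > a) & (grid < b)]      # sampled values strictly inside [a, b]
    pts = np.concatenate([[ya, yb], inside])
    return pts.min(), pts.max()

def nested_image(paths, J):
    """Compute B_i(B_{i+1}(...B_{i+m}(J)...)) for a list of sampled paths."""
    a, b = J
    for grid, vals in reversed(paths):          # innermost motion is applied first
        a, b = interval_image(grid, vals, a, b)
    return a, b

rng = np.random.default_rng(1)
paths = [bm_path(30.0, 50_000, rng) for _ in range(8)]
print(nested_image(paths, (0.0, 1.0)))          # a truncated approximation of I_1
```

Since each path fixes $0$ and $0\in J$, every nested image contains $0$ (up to interpolation error), in line with the intervals $\mathcal{I}_{i}$ containing the origin.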
Although almost surely not homeomorphic to the pseudo-arc, the random continuum $\mathcal{C}$ is interesting in itself, and one could ask about its topological properties; e.g., we wonder whether the topology of $\mathcal{C}$ is almost surely constant and whether it is easy to characterise. Acknowledgements: We acknowledge support from the ERC 740943 “GeoBrown” and ANR 16-CE93-0003 “MALIN”.
## 2 Finding good intervals
In the rest of the article the Brownian motions $\mathfrak{B}_{i}$ are fixed, and we recall the definition of $\mathcal{I}_{i}$ in (3) and of the continuum $\mathcal{C}$. We will show that Theorem 1 follows from the proposition below, stated in terms of images of intervals under the flow of independent Brownian motions, whose proof occupies the remainder of the article: ###### Proposition 2. For any $\varepsilon>0$ small enough, with probability at least $p_{\varepsilon}=\prod_{i=1}^{\infty}\left[1-2\left(\varepsilon^{(5/4)^{i-1}}\right)^{1/8}\right]>0,$ there exist two sequences $(U_{i})_{i\geq 1}$ and $(V_{i})_{i\geq 1}$ of subintervals of $\mathbb{R}$ such that, for any $i\geq 1$, the following five conditions are satisfied 1. 1. $U_{i},V_{i}\subset\mathcal{I}_{i}$ where $\mathcal{I}_{i}$ is defined in (3), 2. 2. $U_{i}\nsubseteq V_{i}$ and $V_{i}\nsubseteq U_{i}$, 3. 3. $U_{i}\cap V_{i}\neq\emptyset$, 4. 4. $U_{i}=\mathfrak{B}_{i}(U_{i+1})$ and $V_{i}=\mathfrak{B}_{i}(V_{i+1})$, 5. 5. $|U_{i}|,|V_{i}|\leq\varepsilon^{(5/4)^{i-1}}$. ###### Proof of Theorem 1 given Proposition 2. In the proof, since we are always working with the functions $\mathfrak{B}_{i}$, we write $\varprojlim(W_{i}:i\geq 1)$ for the inverse limit previously denoted by $\varprojlim(\mathfrak{B}_{i},W_{i}:i\geq 1)$, for any sequence of intervals $W_{1},W_{2},...$ such that $W_{i+1}\xrightarrow{\mathfrak{B}_{i}}W_{i}$.
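The infinite product defining $p_{\varepsilon}$ in Proposition 2 converges to a positive limit (its factors approach $1$ very fast, since the exponents $(5/4)^{i-1}$ grow geometrically), and $p_{\varepsilon}\to 1$ as $\varepsilon\to 0$. A quick numerical truncation illustrates this; the code is ours, for illustration only, and requires $\varepsilon<2^{-8}$ so that the first factor is positive:

```python
def p_eps(eps, n_terms=200):
    """Truncated product p_eps = prod_{i>=1} [1 - 2 * (eps**((5/4)**(i-1)))**(1/8)].

    For i large, eps**((5/4)**(i-1)) underflows to 0, so the remaining
    factors are exactly 1.0 and the truncation is harmless.
    """
    p = 1.0
    for i in range(n_terms):
        p *= 1.0 - 2.0 * eps ** ((5.0 / 4.0) ** i / 8.0)
    return p

print(p_eps(1e-3), p_eps(1e-6), p_eps(1e-12))  # increases towards 1 as eps -> 0
```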
On the event described in the above proposition, which occurs with probability at least $p_{\varepsilon}>0$, we have: * • For any $i\geq 1$, $\mathfrak{B}_{i}(U_{i+1}\cup V_{i+1})=U_{i}\cup V_{i}$ (point 4), $U_{i}\cup V_{i}\subset\mathcal{I}_{i}$ (point 1) and $U_{i}\cup V_{i}$ is an interval (point 3), so by Lemma 2.6 of [15], $\varprojlim(U_{i}\cup V_{i}:i\geq 1)$ is a subcontinuum of $\mathcal{C}$. * • By Lemma 2.6 of [15], both $\varprojlim(U_{i}:i\geq 1)$ and $\varprojlim(V_{i}:i\geq 1)$ are also subcontinua of $\varprojlim(U_{i}\cup V_{i}:i\geq 1)$. * • Let $x=(x_{i})_{i\geq 1}\in\varprojlim(U_{i}\cup V_{i}:i\geq 1)$; then * – either, for any $i$, we have $x_{i}\in U_{i}\cap V_{i}$, and so $x\in\varprojlim(U_{i}:i\geq 1)$ and $x\in\varprojlim(V_{i}:i\geq 1)$, * – or there exists $j\geq 1$ such that $x_{j}\in U_{j}$ and $x_{j}\notin V_{j}$, but then by point 4 we have $x_{i}\in U_{i}$ for all $i\geq j$ and so $x\in\varprojlim(U_{i}:i\geq 1)$, * – or there exists $j\geq 1$ such that $x_{j}\notin U_{j}$ and $x_{j}\in V_{j}$, and similarly we deduce that $x\in\varprojlim(V_{i}:i\geq 1)$. Hence, $\varprojlim(U_{i}\cup V_{i}:i\geq 1)\subset\varprojlim(U_{i}:i\geq 1)\cup\varprojlim(V_{i}:i\geq 1)$, and the reverse inclusion is obvious. * • $\varprojlim(U_{i}\cup V_{i}:i\geq 1)\neq\varprojlim(U_{i}:i\geq 1)$ and $\varprojlim(U_{i}\cup V_{i}:i\geq 1)\neq\varprojlim(V_{i}:i\geq 1)$, by combining point 2 and point 4. All of these points imply that $\varprojlim(U_{i}\cup V_{i}:i\geq 1)$ is a decomposable subcontinuum of $\mathcal{C}=\varprojlim(\mathcal{I}_{i}:i\geq 1)$. This implies that $\mathcal{C}$ is not a pseudo-arc with probability at least $p_{\varepsilon}$, for any $\varepsilon>0$ small enough. As $p_{\varepsilon}\to 1$ when $\varepsilon\to 0$, $\mathcal{C}$ is not a pseudo-arc with probability one. ∎ ### 2.1 Construction of a decomposable subcontinuum using good shape excursions Let us now explain the idea behind the construction of the intervals of Proposition 2.
This relies on the concept of excursions with a good shape. Imagine that we have a sequence of nontrivial intervals $[u_{i},v_{i}]\subset[0,1]$ such that $\mathfrak{B}_{i}([u_{i+1},v_{i+1}])=[u_{i},v_{i}]$ and furthermore that $\mathfrak{B}_{i}(u_{i+1})=u_{i}$ and $\mathfrak{B}_{i}(v_{i+1})=v_{i}$ and $\mathfrak{B}_{i}(t)\in(u_{i},v_{i})$ for $t\in(u_{i+1},v_{i+1})$. In words, over the time interval $[u_{i+1},v_{i+1}]$, the Brownian motion $\mathfrak{B}_{i}$ makes an excursion from $u_{i}$ to $v_{i}$. We say that this excursion has a _good shape_ if it stays in the pentomino of Figure 2. Figure 2: An excursion from $u_{i}$ to $v_{i}$ over the time interval $[u_{i+1},v_{i+1}]$ has a good shape if it stays in the light grey region. If we have such a sequence of intervals and excursions, then one can define a sequence of intervals $U_{i},V_{i}$ by setting for any $i\geq 1$, $\displaystyle U_{i}=\lim_{n\to\infty}\underbrace{\left(\mathfrak{B}_{i}\circ\mathfrak{B}_{i+1}\circ\dots\circ\mathfrak{B}_{i+n-1}\right)\left(\left[u_{i+n},\frac{u_{i+n}+2v_{i+n}}{3}\right]\right)}_{U_{i,n}}\text{ and }$ $\displaystyle V_{i}=\lim_{n\to\infty}\left(\mathfrak{B}_{i}\circ\mathfrak{B}_{i+1}\circ\dots\circ\mathfrak{B}_{i+n-1}\right)\left(\left[\frac{2u_{i+n}+v_{i+n}}{3},v_{i+n}\right]\right).$ First, these two limits exist a.s. and are closed intervals a.s. because they are limits of decreasing sequences of closed intervals. Indeed, because $\mathfrak{B}_{i+n}$ performs a good shape excursion from $u_{i+n}$ to $v_{i+n}$ over $[u_{i+n+1},v_{i+n+1}]$ we have $\displaystyle\mathfrak{B}_{i+n}\left(\left[u_{i+n+1},\frac{u_{i+n+1}+2v_{i+n+1}}{3}\right]\right)$ $\displaystyle\subset\left[u_{i+n},\frac{u_{i+n}+2v_{i+n}}{3}\right],\text{ and so}$ $\displaystyle U_{i,n+1}$ $\displaystyle\subset U_{i,n},$ and the $U_{i,n}$ are intervals because the Brownian motion is continuous a.s. It is then an easy matter to check that the intervals constructed above satisfy points 2-4 of Proposition 2.
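The nested-image mechanism behind the containment $U_{i,n+1}\subset U_{i,n}$ is easy to experiment with numerically. The sketch below is ours, not part of the paper: it approximates a Brownian motion by a scaled Gaussian random walk and, using continuity (the image of an interval under a continuous path is the interval $[\min,\max]$ of the path over it), checks that images of nested time intervals are nested.

```python
import numpy as np

def sample_bm(n_steps, dt, seed):
    """Sample a Brownian path on a grid with n_steps increments of mesh dt."""
    rng = np.random.default_rng(seed)
    increments = rng.normal(0.0, np.sqrt(dt), n_steps)
    return np.concatenate([[0.0], np.cumsum(increments)])

def image_of_interval(path, dt, a, b):
    """Image [min, max] of the time interval [a, b] under the sampled path."""
    lo, hi = int(a / dt), int(b / dt) + 1
    seg = path[lo:hi]
    return seg.min(), seg.max()

dt = 1e-4
path = sample_bm(int(1 / dt), dt, seed=0)
big = image_of_interval(path, dt, 0.2, 0.8)
small = image_of_interval(path, dt, 0.3, 0.7)
# The image of a subinterval is contained in the image of the interval,
# which is the monotonicity used to get U_{i,n+1} ⊂ U_{i,n}.
assert big[0] <= small[0] and small[1] <= big[1]
```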
Our task is thus to construct the sequence $u_{i},v_{i}$ so that $\mathfrak{B}_{i}$ performs a good shape excursion from $u_{i}$ to $v_{i}$ over $[u_{i+1},v_{i+1}]$ and to ensure points $1$ and $5$ of Proposition 2. The key idea is to look for these intervals in the vicinity of $0$ because any given small interval close to $0$ has many pre-images close to $0$ by a Brownian motion. These many pre-images enable us to select one with a good shape. ### 2.2 Pre-images of a small interval by a Brownian motion In the following lemma the dependence on $i$ is superfluous but we keep it to make the connection with the preceding discussion easier to understand. ###### Lemma 3. Let $a_{i}$ be any positive real number small enough. Fix $[u_{i},v_{i}]\subset[0,a_{i}]$. Then with probability at least $1-2a_{i}^{1/8}$ we can find $[u_{i+1},v_{i+1}]\subset[0,a_{i}^{5/4}]$ so that $\mathfrak{B}_{i}$ performs an excursion with a good shape from $u_{i}$ to $v_{i}$ over the time interval $[u_{i+1},v_{i+1}]$. ###### Proof. Fix $0<u_{i}<v_{i}$ and consider the successive excursions $\mathcal{E}_{1},\mathcal{E}_{2},...$ that the Brownian motion $\mathfrak{B}_{i}$ performs from $u_{i}$ to $v_{i}$ over the respective time intervals $[u_{i+1}^{(1)},v_{i+1}^{(1)}],[u_{i+1}^{(2)},v_{i+1}^{(2)}],\cdots$. By the Markov property of Brownian motion and standard arguments in excursion theory, these excursions are i.i.d. We claim that $r=\mathbb{P}(\mathcal{E}\mbox{ has a good shape})>0.$ Indeed, since the law of Brownian motion has full support in the space of continuous functions (with the topology of uniform convergence over all compacts of $\mathbb{R}_{+}$), the first excursion from $u_{i}$ to $v_{i}$ can stay arbitrarily close to any prescribed continuous function; in particular, the probability that it has a good shape is strictly positive. See Figure 3.
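For intuition, this positivity claim is easy to probe by simulation. The sketch below is our own, hypothetical setup, not the paper's: the Brownian motion is approximated by a scaled random walk, and a plain corridor $(c,v_{i})$ stands in for the pentomino of Figure 2 (whose exact shape is immaterial to the argument). For this corridor the probability that a path started at $u_{i}$ reaches $v_{i}$ before leaving it is the gambler's-ruin value $(u_{i}-c)/(v_{i}-c)$, so the estimate doubles as a sanity check.

```python
import numpy as np

def corridor_excursion_prob(u, v, c, n_paths=2000, dt=1e-3, seed=1,
                            max_steps=200_000):
    """Estimate P(discretized BM from u hits v before c).

    Exact answer for driftless BM is (u - c) / (v - c) (gambler's ruin).
    """
    rng = np.random.default_rng(seed)
    x = np.full(n_paths, u)
    alive = np.ones(n_paths, dtype=bool)
    hits = 0
    for _ in range(max_steps):
        if not alive.any():
            break
        x[alive] += rng.normal(0.0, np.sqrt(dt), alive.sum())
        hits += int((alive & (x >= v)).sum())   # count first crossings of v
        alive &= (x > c) & (x < v)              # kill paths that have exited
    return hits / n_paths

p = corridor_excursion_prob(u=1.0, v=2.0, c=0.5)
# exact value: (1.0 - 0.5) / (2.0 - 0.5) = 1/3
```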
Figure 3: For any given continuous function $f$ starting from $0$ and any $\varepsilon>0$, the Brownian motion may stay within distance $\varepsilon$ of $f$ up to time $1$ with a positive probability. Choosing $f$ carefully, we deduce that the first excursion from $u_{i}$ to $v_{i}$ has a good shape with positive probability. Hence, the probability that at least one of the $k$ first excursions has a good shape is at least $1-(1-r)^{k}.$ Figure 4: In red, blue and orange, the excursions from $u_{i}$ to $v_{i}$ that we consider. To control the number of excursions from $u_{i}$ to $v_{i}$ performed up to time $a_{i}$ by $\mathfrak{B}_{i}$, we introduce the auxiliary stopping times defined by $w_{i+1}^{(1)}=\inf\\{t\geq 0:\mathfrak{B}_{i}(t)=u_{i}\\}$ and for $k\geq 2$ $w_{i+1}^{(k)}=\inf\\{t\geq v_{i+1}^{(k-1)}:\mathfrak{B}_{i}(t)=u_{i}\\}.$ Hence $w_{i+1}^{(1)}<v_{i+1}^{(1)}<w_{i+1}^{(2)}<v_{i+1}^{(2)}<\cdots$ are the successive hitting times of $u_{i},v_{i},u_{i},v_{i}$ by $\mathfrak{B}_{i}$, see Figure 4. For $a\geq 0$, we let $\mathcal{T}_{a}=\inf\\{t\geq 0:\mathfrak{B}_{i}(t)=a\\}$ be the hitting time of $a$ by a standard linear Brownian motion. It is classic (see e.g.
[14, Theorem 2.35]) that for $a>0$ we have $\mathcal{T}_{a}=a^{2}\cdot\mathcal{T}_{1}$ in law where $\mathcal{T}_{1}$ is distributed according to the Lévy law $\mathcal{T}_{1}\underset{(d)}{=}\frac{\mathrm{d}t}{\sqrt{2\pi t^{3}}}\mathrm{exp}\left(-\frac{1}{2t}\right)\mathbf{1}_{t>0}.$ In our case, applying the strong Markov property at the times $w_{i+1}^{(1)}<v_{i+1}^{(1)}<w_{i+1}^{(2)}<v_{i+1}^{(2)}<\cdots$ and using invariance by symmetry we deduce that we have the equalities in distribution $w_{i+1}^{(1)}\stackrel{{\scriptstyle(d)}}{{=}}\mathcal{T}_{u_{i}},\quad v_{i+1}^{(1)}\stackrel{{\scriptstyle(d)}}{{=}}\mathcal{T}_{u_{i}+|v_{i}-u_{i}|},\quad w^{(2)}_{i+1}\stackrel{{\scriptstyle(d)}}{{=}}\mathcal{T}_{u_{i}+2|v_{i}-u_{i}|},\cdots,\quad v^{(k)}_{i+1}\stackrel{{\scriptstyle(d)}}{{=}}\mathcal{T}_{u_{i}+(2k-1)|v_{i}-u_{i}|},$ for $k\geq 2$. Since $\mathcal{T}_{u_{i}+(2k-1)|v_{i}-u_{i}|}\leq\mathcal{T}_{2ka_{i}}$, the probability that the first $k$ excursions of $\mathfrak{B}_{i}$ occur before $a_{i}^{5/4}$ is at least $\mathbb{P}\left(\mathcal{T}_{2ka_{i}}<a_{i}^{5/4}\right)=\mathbb{P}\left(\mathcal{T}_{1}<\left(\frac{1}{2k\,a_{i}^{3/8}}\right)^{2}\right)\geq 1-\sqrt{\frac{2}{\pi}}2ka_{i}^{3/8}\text{ (for $ka_{i}^{3/8}$ small enough)}.$ Gathering up the above remarks and taking $k=\lfloor a_{i}^{-1/4}\rfloor$, we deduce that the probability of not finding an excursion from $u_{i}$ to $v_{i}$ with a good shape in $[0,a_{i}^{5/4}]$ is bounded above by $(1-r)^{\lfloor a_{i}^{-1/4}\rfloor}+2\sqrt{\frac{2}{\pi}}\lfloor a_{i}^{-1/4}\rfloor a_{i}^{3/8}\leq 2a_{i}^{1/8}\text{ (for $a_{i}$ small enough)}.\qed$ ## 3 Proof of Proposition 2 Let $(\mathfrak{B}_{i})_{i\geq 1}$ be a sequence of i.i.d. two-sided Brownian motions, and $\varepsilon$ be any positive real number small enough. For any $i\geq 1$, take $a_{i}=\varepsilon^{(5/4)^{i-1}}$.
Firstly, we put $[u_{1},v_{1}]=[0,\varepsilon]=[0,a_{1}]$; by Lemma 3, with probability at least $1-2a_{1}^{1/8}$, there exists an interval $[u_{2},v_{2}]\subset[0,a_{1}^{5/4}]=[0,a_{2}]$ such that $\mathfrak{B}_{1}$ performs a good shape excursion from $u_{1}$ to $v_{1}$ over the time interval $[u_{2},v_{2}]$. Now, we apply Lemma 3 to $[u_{2},v_{2}]\subset[0,a_{2}]$, etc. At the end, with probability at least $\prod_{i=1}^{\infty}\left(1-2\left(\varepsilon^{{(5/4)}^{i-1}}\right)^{1/8}\right),$ we obtain a sequence of nontrivial intervals $([u_{i},v_{i}])_{i\geq 1}$ such that for any $i$, $\mathfrak{B}_{i}$ makes a good shape excursion from $u_{i}$ to $v_{i}$ over $[u_{i+1},v_{i+1}]$. By Section 2.1, we can then construct two sequences of intervals $U_{i}$, $V_{i}$ that satisfy points 2-4 of Proposition 2. Moreover, by construction, $U_{i},V_{i}\subset[u_{i},v_{i}]\subset[0,a_{i}]$, hence point 5 is also satisfied. Finally, to obtain point 1, simply note that, for any $i,n\geq 1$, $[u_{i+n},v_{i+n}]\subset[0,a_{i+n}]\subset[0,1]$, so $\displaystyle U_{i}$ $\displaystyle=\lim_{n\to\infty}\left(\mathfrak{B}_{i}\circ\mathfrak{B}_{i+1}\circ\dots\circ\mathfrak{B}_{i+n}\right)\left(\left[u_{i+n+1},\frac{u_{i+n+1}+2v_{i+n+1}}{3}\right]\right)$ $\displaystyle\subset\lim_{n\to\infty}\left(\mathfrak{B}_{i}\circ\mathfrak{B}_{i+1}\circ\dots\circ\mathfrak{B}_{i+n}\right)\left(\left[0,1\right]\right)=\mathcal{I}_{i}\text{ (by\leavevmode\nobreak\ \eqref{eq:interval})}.$ Similarly, $V_{i}\subset\mathcal{I}_{i}$. ∎ ## References * [1] Jean Bertoin. Iterated Brownian motion and stable(1/4) subordinator. Statistics & Probability Letters, 27(2):111–114, 1996. * [2] R. H. Bing. A homogeneous indecomposable plane continuum. Duke Mathematical Journal, 15(3):729–742, 1948. * [3] R. H. Bing. Concerning hereditarily indecomposable continua. Pacific Journal of Mathematics, 1(1):43–51, 1951. * [4] Krzysztof Burdzy. Some path properties of iterated Brownian motion.
In Seminar on Stochastic Processes, 1992, pages 67–87. Springer, 1993. * [5] Krzysztof Burdzy and Davar Khoshnevisan. The level sets of iterated Brownian motion. In Séminaire de Probabilités XXIX, pages 231–236. Springer, 1995. * [6] Jérôme Casse and Jean-François Marckert. Processes iterated ad libitum. Stochastic Processes and their Applications, 126(11):3353–3376, 2016. * [7] Nicolas Curien and Takis Konstantopoulos. Iterating Brownian motions, ad libitum. Journal of theoretical probability, 27(2):433–448, 2014. * [8] Nathalie Eisenbaum and Zhan Shi. Uniform oscillations of the local time of iterated Brownian motion. Bernoulli, 5(1):49–65, 1999. * [9] Tadahisa Funaki. Probabilistic construction of the solution of some higher order parabolic differential equation. Proceedings of the Japan Academy, Series A, Mathematical Sciences, 55(5):176–179, 1979. * [10] Davar Khoshnevisan and Thomas M Lewis. Iterated Brownian motion and its intrinsic skeletal structure. In Seminar on Stochastic Analysis, Random Fields and Applications, pages 201–210. Springer, 1999. * [11] Viktor Kiss and Sławomir Solecki. Random continuum and Brownian motion. arXiv preprint arXiv:2004.01367, 2020. * [12] Bronisław Knaster. Un continu dont tout sous-continu est indécomposable. Fundamenta Mathematicae, 1(3):247–286, 1922. * [13] Edwin E. Moise. An indecomposable plane continuum which is homeomorphic to each of its nondegenerate subcontinua. Transactions of the American Mathematical Society, 63(3):581–594, 1948. * [14] Peter Mörters and Yuval Peres. Brownian motion, volume 30. Cambridge University Press, 2010. * [15] Sam Nadler. Continuum theory: an introduction. CRC Press, 1992. * [16] Enzo Orsingher and Luisa Beghin. Fractional diffusion equations and processes with randomly varying time. The Annals of Probability, pages 206–249, 2009. * [17] Yimin Xiao. Local times and related properties of multidimensional iterated Brownian motion. Journal of Theoretical Probability, 11(2):383–408, 1998.
# Efficient-CapsNet: Capsule Network with Self-Attention Routing Vittorio Mazzia Department of Electronics and Telecommunications Politecnico di Torino Turin, Italy 10124 <EMAIL_ADDRESS> &Francesco Salvetti Department of Electronics and Telecommunications Politecnico di Torino Turin, Italy 10124 <EMAIL_ADDRESS> &Marcello Chiaberge Department of Electronics and Telecommunications Politecnico di Torino Turin, Italy 10124 <EMAIL_ADDRESS> ###### Abstract Deep convolutional neural networks, assisted by architectural design strategies, make extensive use of data augmentation techniques and layers with a high number of feature maps to embed object transformations. That is highly inefficient, and for large datasets it implies a massive redundancy of feature detectors. Even though capsule networks are still in their infancy, they constitute a promising solution to extend current convolutional networks and endow artificial visual perception with a process to encode more efficiently all feature affine transformations. Indeed, a properly working capsule network should theoretically achieve higher results with a considerably lower parameter count, due to its intrinsic capability to generalize to novel viewpoints. Nevertheless, little attention has been given to this relevant aspect. In this paper, we investigate the efficiency of capsule networks and, pushing their capacity to the limits with an extreme architecture with barely 160K parameters, we prove that the proposed architecture is still able to achieve state-of-the-art results on three different datasets with only 2% of the original CapsNet parameters. Moreover, we replace dynamic routing with a novel non-iterative, highly parallelizable routing algorithm that can easily cope with a reduced number of capsules.
Extensive experimentation with other capsule implementations has proved the effectiveness of our methodology and the capability of capsule networks to efficiently embed visual representations that are more prone to generalization. ## 1 Introduction In the last decade, convolutional neural networks (CNN) drastically changed artificial visual perception, achieving remarkable results in all core fields of computer vision, from image classification [1, 2, 3] to object detection [4, 5, 6] and instance segmentation [7]. In contrast to other deep neural architectures, the main characteristic of a CNN is its capability to efficiently replicate the same knowledge at all locations in the spatial dimension of an input image. Indeed, using translated replicas of learned feature detectors, features learned at one spatial location are available at other locations. Local shared connectivity coupled with spatial reduction layers, such as max-pooling, extracts local translation-invariant features. So, as shown in Figure 1, object translations in the input space do not affect activations of high-level neurons, because max-pooling layers are able to route low-level features between the layers. Nevertheless, translation invariance achieved by a CNN comes at the expense of losing the precise encoding of object locations. Moreover, CNNs are not invariant to all other affine transformations. Over the years, different techniques have been developed to counterbalance that problem. Most of the commonly adopted solutions make use of an increased number of feature maps in such a way that the network is endowed with enough feature detectors for all additional transformations. Data augmentation techniques are used to produce the different poses to be learned, and residual connections and normalization techniques make it possible to enlarge the network's filter capacity.
However, all those additional mechanisms only partially make up for the intrinsic limitations of CNNs, preventing the model from recognising different transformations of the same objects encountered during training. Indeed, CNNs trained on large datasets have a massive redundancy of feature detectors and difficulty scaling to thousands of objects with their respective viewpoints. Hinton et al. [8] proposed to make neurons cooperate in a new form of unit, dubbed capsules, where individual activations no longer represent the presence of a specific feature but rather different properties of the same entity. In their paper they showed that groups of neurons, if properly trained, are able to produce a whole vector of numbers, explicitly representing the pose of the detected entity as in classical hand-engineered features [9]. After six years, Sabour et al. [10] presented a first architecture, named CapsNet, that introduced capsules inside a CNN. The major insight of the paper is that viewpoint changes have complicated effects on the pixel space, but simple linear effects on the pose that represents the relationship between an object-part and the whole. In a generic fully-connected or convolutional deep neural network, weights are used to encode feature detectors and neuron activations to represent the presence of a specific feature. So, fixing weights after training, the model is not able to detect simple transformation patterns not encountered during training. On the other hand, they suggested repurposing weights to embed relationships between object features. Indeed, since the intrinsic transformations between parts and a whole are invariant to the viewpoint, weights are perfectly fitted to represent them efficiently, and they should be automatically capable of generalizing to novel viewpoints.
Moreover, we no longer want to achieve activations invariant to transformations, but rather groups of neurons working in synergy to represent different properties of the same entity. Capsules are vector representations of features, and they are equivariant to viewpoint transformations. So, each capsule not only represents a specific type of entity but also dynamically describes how the entity is instantiated. Finally, the working principle of traditional networks, in which a scalar unit is activated based on the matching score with learned feature detectors, is dropped altogether in favour of a much more robust mechanism. Indeed, with viewpoint-invariant transformations encoded in the weights, we can make capsules predict the whole that they should be part of. So, we can consider the accordance of low-level capsule predictions to activate high-level capsules. That requires a process to measure their agreement and route capsules to their best-matching parent. Originally, dynamic routing was proposed as the first routing-by-agreement mechanism. Exploiting groups of neuron activations to make predictions and assess their reciprocal agreement is a much more effective way to capture covariance and should lead to models with a considerably reduced number of parameters and far better generalization capabilities. Figure 1: Compressed representation of a simple CNN with max-pooling layers for spatial reduction and two input objects obtained with a plain spatial translation. Max-pooling operations are schematized in such a way that their primitive routing role is highlighted for both digits. Low-level features detected in the earlier stage of the network are progressively routed to common high-level features. So, the model is translation invariant but gradually loses relevant object localization information. Nevertheless, little attention has been given to the efficiency aspect of capsule networks and their intrinsic capability to represent object transformations better.
Indeed, all model solutions presented so far account for a large number of parameters that inevitably hide the intrinsic generalization capability that capsules should provide. In this paper, we propose Efficient-CapsNet, an extreme architecture with barely 160K parameters and an 85% TOPs improvement upon the original CapsNet model that is perfectly capable of achieving state-of-the-art results on three distinct datasets, maintaining all important aspects of capsule networks. With extensive experimentation with traditional CNNs and other capsule implementations, we proved the effectiveness of our methodology and the important contribution led by capsules inside a network. Moreover, we propose a novel non-iterative routing algorithm that can easily cope with a reduced number of capsules by exploiting a self-attention mechanism. Indeed, attention, like max-pooling layers, can be seen as a way to route information inside a network. Our proposed solution exploits similarities between low-level capsules to cluster them and route them to more promising high-level capsules. Overall, the main contribution of our work lies in: * • Deep investigation of the generalization power of networks based on capsules, drastically reducing the number of trainable parameters compared to previous literature research studies. * • The conceptualization and development of an efficient, highly replicable, deep learning neural network based on capsules able to reach state-of-the-art results on three distinct datasets. * • The introduction of a novel non-iterative, highly parallelizable routing algorithm that exploits a self-attention mechanism to route a reduced number of capsules efficiently. All of our training and testing code is open source and publicly available111https://github.com/EscVM/Efficient-CapsNet. The remainder of this paper is structured as follows. Section II covers the related work on capsule networks, their developments in the latest years and practical applications.
Section III provides a comprehensive overview of the methodology, network architecture and its routing algorithm. Section IV discusses the experimentation and results with three datasets, MNIST, smallNorb and MultiMNIST. Moreover, it provides an introspective analysis of the inner operation of capsules inside a network. Finally, section V draws some conclusions and future directions. ## 2 Related Works Figure 2: Schematic representation of the overall architecture of Efficient-CapsNet. Primary capsules make use of depthwise separable convolution to create a vectorial representation of the features they represent. On the other hand, the first stack of convolutional layers maps the input tensor onto a higher-dimensional space, facilitating capsule creation. As already mentioned in the introduction to this paper, introducing a vectorial organization of neurons to encapsulate both probability and instantiation parameters of a detected feature was first proposed by Hinton et al. [8], who introduced the new concept of capsules. Sabour et al. [10] proposed the first CNN able to incorporate two layers of capsules, called CapsNet, and introduced the routing-by-agreement concept, with their dynamic routing. Several researchers have then investigated the routing process, proposing alternative ways to measure accordance between low-level capsules in activating high-level ones. Xi et al. [11] proposed a variant of the squash activation function used in the original CapsNet. Wang et al. [12] gave a formal description of the original dynamic routing as an optimization problem that minimizes a clustering loss and proposed a slightly modified version. Lenssen et al. [13] proposed group capsule networks, claiming they preserve equivariance for the output pose and invariance for activations. The same authors of the original CapsNet adapted the Expectation-Maximization algorithm to cluster similar votes, and route predictions [14].
Spectral capsule network [15] was based on this last work, and based its routing on the Singular Value Decomposition of votes from the previous layer. Ribeiro et al. [16] proposed a routing derived from Variational Bayes for fitting a Gaussian mixture model. Gu et al. [17] focused on making capsule networks robust to affine transformations by sharing transformation matrices between all low-level capsules and each high-level one. Paik et al. [18] called into question the effectiveness of the routing algorithms presented so far, claiming that better results can be obtained with no routing at all. On the other hand, Venkataraman et al. [19] proved that the routing-by-agreement mechanism is essential to ensure compositional structures of capsule-based networks. Byerly et al. [20], instead, proposed a new architecture based on a variation of the original capsule idea, named Homogeneous Filter Capsules, and with no routing between layers. The attention mechanism allows the network to dynamically give more importance to particular features that are considered more relevant for the problem under analysis. Such an idea gained great popularity in a number of Deep Learning applications and has been implemented in natural language processing [21, 22] and computer vision [23, 24, 3, 25, 26]. Choi et al. [27] applied the attention mechanism to capsule routing with a feed-forward operation with no iterations. However, they selected low-level capsules by multiplying their activations by a parameter vector learnt with backpropagation, and they did not measure agreement. In this way, the original idea of routing-by-agreement is drastically modified. Tsai et al. [28] slightly changed the original dynamic routing to compute the agreement between a pose of a high-level capsule and the votes of the low-level capsules by an inverted dot-product mechanism. They proposed a concurrent iterative routing instead of a sequential one, performing the routing procedure simultaneously on all the capsule layers.
Huang et al. [29] proposed a dual attention mechanism by adapting the squeeze-and-excitation block [3] to both Primary and Digit Caps, together with a change in the squash activation function. Peng et al. [30] applied capsules in combination with a self-attention-based backbone for an entity relation task in natural language processing. However, both these last two works used attention mechanisms as part of the computational graph of the proposed networks, without modifying the original dynamic routing proposed by Sabour et al. [10]. In this sense, our approach strongly differs from theirs, since we are the first to propose self-attention as a substitute routing algorithm between capsules. Capsule-based networks have also been recently used for a variety of applications. For example, they have been applied to natural language processing [30, 31, 32, 33], with GANs for image generation [34], computer vision [35, 36, 37] and medicine [38, 39]. ## 3 Methods ### 3.1 Efficient-CapsNet The overall architecture of Efficient-CapsNet is depicted in Figure 2. As a high-level description, the network can be broadly divided into three different parts, of which the first two are the main instruments used by the primary capsule layer to interact with the input space. Indeed, each capsule exploits the filters of the convolutional layers below to convert pixel intensities into a vectorial representation of the feature it stands for. So, the activities of neurons within an active capsule embody the various properties of the entity it learnt to represent during the training process. As stated in Sabour et al. [10], these properties can include many different types of instantiation parameters, such as pose, texture, deformation, and among those the existence of the feature itself. In our implementation, the length of each vector is used to represent the probability that the entity represented by a capsule is present.
That is compatible with our self-attention routing algorithm, which does not require any delicate objective-function minimization. Moreover, it makes biological sense as it does not use large activities to represent absent entities. Finally, the last part of the network operates under the self-attention algorithm to route low-level capsules to the whole they represent. More formally, in the case of a single instance $(i)$, the model takes as input an image that can be represented as a tensor X with a shape $H\times W\times C$ where $H$, $W$ and $C$ are the height, width, and channels/features of the single input image. Before entering the primary caps layer, we extract local features from the input image X by means of a set of convolutional and Batch Normalization layers [40]. Each output of a convolution layer $l$ is obtained by a convolutional operation with a certain kernel dimension $k$, number of feature maps $f$, stride $s=1$ and ReLU as activation function: $\displaystyle\mathrm{F}^{l+1}(\textbf{X}^{l})=\mathrm{ReLU}\left(Conv_{k\times k}(\textbf{X}^{l})\right)$ (1) Overall, the first convolutional part of the network can be modelled as a single function $H_{Conv}$ that maps the input image onto a higher-dimensional space that facilitates capsule creation. On the other hand, the second part of the network is the main instrument used by primary capsules to create a vectorial representation of the features they represent. As depicted in Figure 3, it is a depthwise separable convolution with linear activation that performs just the first step of a depthwise spatial convolution operation, acting separately on each channel.
Moreover, imposing a kernel dimension $k\times k$ and a number of filters $f$ equal to the output dimensions $H\times W$ and $F$ of the $H_{Conv}$ function, it is possible to obtain the primary capsule layer $\textbf{{S}}^{l}_{n,d}$ where $n^{l}$ and $d^{l}$ are the number of primary capsules and their individual dimension in the $l-th$ layer, respectively. Figure 3: The first part of the network can be modelled as a single function $H_{Conv}$ that maps the input image onto a higher-dimensional space. Then, the primary capsule layer $\textbf{{S}}^{l}_{n,d}$ is obtained with a depthwise separable convolution that greatly reduces the number of parameters needed for capsule creation. The depthwise separable convolution is an efficient operation that greatly simplifies and reduces the number of parameters required for the capsule creation process. We leave it to discriminative learning to make good use of its filters to smartly extract all capsule properties. After that operation, location information is no longer "place-coded" but "rate-coded" in the properties of the capsules. So, the base element of the network is no longer a single neuron but a vector-output capsule. Indeed, the first operation applied to the primary capsule layer is a capsule-wise activation function. In order to encode the probability that a certain entity exists with the length of vectors and let active capsules make predictions for the instantiation parameters of higher-level capsules, two important properties should be satisfied by the activation function: it should preserve a vector orientation and maintain the length between zero and one.
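These two requirements are straightforward to verify numerically. Below is a minimal sketch of the exponential squash variant given in Eq. (2); the function name and the NumPy framework are our choices, not the authors'.

```python
import numpy as np

def squash(s, axis=-1, eps=1e-9):
    """Exponential squash of Eq. (2): rescales s to length 1 - exp(-||s||)."""
    norm = np.linalg.norm(s, axis=axis, keepdims=True)
    return (1.0 - np.exp(-norm)) * s / (norm + eps)

v = np.array([3.0, 4.0])          # ||v|| = 5
w = squash(v)
# orientation preserved: w is a positive multiple of v
# length mapped into (0, 1): ||w|| = 1 - exp(-5) ≈ 0.9933
```

Note how a short vector such as `[0.01, 0.0]` is shrunk to length `1 - exp(-0.01) ≈ 0.00995`, while any long vector saturates just below one, which is exactly the behaviour required of a capsule-existence probability.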
Efficient-CapsNet makes use of a variant of the original activation function, dubbed the squash operation: $\displaystyle\textrm{squash}(\textbf{{s}}^{l}_{n})=\left(1-\frac{1}{e^{||\textbf{{s}}^{l}_{n}||}}\right)\frac{\textbf{{s}}^{l}_{n}}{||\textbf{{s}}^{l}_{n}||}$ (2) where we refer to a single capsule as $\textbf{{s}}^{l}_{n}$, which are the individual entries $n^{l}$ of $\textbf{{S}}^{l}_{(n,:)}$ (here $\textbf{{s}}^{l}_{n_{0}}:=\\{\textbf{{S}}^{l}_{n,d}|n^{l}=n_{0}^{l}\\}$) with $\textbf{{s}}^{l}_{n}\in\mathbb{R}^{d^{l}}$. The capsule-wise squash function of Eq. (2) satisfies the required two properties and is much more sensitive to small changes near zero, providing a boost to the gradient during the training phase [11]. So, after the squash activation we obtain a new matrix $\textbf{{U}}^{l}_{n,d}$ with all $n^{l}$ entries $\textbf{{u}}^{l}_{n}$ with the same dimensionality and properties of $\textbf{{s}}^{l}_{n}$, but with a length "squashed" between zero and one. Indeed, the non-linearity ensures that short vectors get shrunk to almost zero length and long vectors get shrunk to a length slightly below one. Figure 4: Capsules of the $l-th$ layer make predictions of the whole they could be part of. All predictions obtained with the weight tensor $\textbf{W}^{l}_{n^{l},n^{l+1},d^{l},d^{l+1}}$ are collected in $\hat{\textbf{U}}_{n^{l},n^{l+1},d^{l+1}}^{l}$ that is subsequently used in conjunction with the priors $\textbf{{B}}^{l}_{n^{l},n^{l+1}}$ and coupling coefficients $\textbf{{C}}^{l}_{n^{l},n^{l+1}}$ matrices to obtain all capsules $\textbf{{s}}^{l+1}_{n}$ of layer $l+1$. ### Self-Attention routing In order to route active capsules to the wholes they belong to, we make use of our self-attention routing algorithm. As shown in Figure 4, despite the additional dimension, the overall architecture is very similar to a fully-connected network with an additional branch brought by the self-attention algorithm.
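The routing computation spelled out next in Eqs. (3)-(6) can be condensed into a short sketch. This is our own reading of those equations, not the authors' reference code; array names are ours, and shapes follow the paper's notation of $n^{l}$ capsules of dimension $d^{l}$ below and $n^{l+1}$ capsules of dimension $d^{l+1}$ above.

```python
import numpy as np

def self_attention_routing(U, W, B):
    """One routing step between capsule layers, per Eqs. (3)-(6).

    U: (n_l, d_l)              squashed capsules of layer l
    W: (n_l, n_l1, d_l, d_l1)  affine prediction weights
    B: (n_l, n_l1)             discriminatively learnt log priors
    """
    n_l, d_l = U.shape
    # Eq. (3): every lower capsule predicts every upper capsule.
    U_hat = np.einsum('id,ijde->ije', U, W)                 # (n_l, n_l1, d_l1)
    # Eq. (5): pairwise agreement between predictions, per upper capsule.
    A = np.einsum('ike,jke->ijk', U_hat, U_hat) / np.sqrt(d_l)
    # Eq. (6): softmax over the upper capsules of the summed agreements.
    scores = A.sum(axis=1)                                  # (n_l, n_l1)
    C = np.exp(scores) / np.exp(scores).sum(axis=1, keepdims=True)
    # Eq. (4): weighted sum of predictions, priors added to the couplings.
    S = np.einsum('ike,ik->ke', U_hat, C + B)               # (n_l1, d_l1)
    return S, C

rng = np.random.default_rng(0)
U = rng.normal(size=(8, 4))          # 8 lower capsules of dimension 4
W = rng.normal(size=(8, 3, 4, 6))    # 3 upper capsules of dimension 6
B = np.zeros((8, 3))
S, C = self_attention_routing(U, W, B)
# the couplings of each lower capsule sum to one over the upper capsules
```

Unlike dynamic routing, there is no iteration: one pass of agreement scoring and one softmax produce the couplings, which is what makes the procedure highly parallelizable.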
Indeed, the total input of a capsule in the layer above, $\textbf{{s}}^{l+1}_{n}$, is a weighted sum over all "prediction vectors" from the capsules $\textbf{{u}}^{l}_{n}$ in the layer below. That is produced by a matrix multiplication of each capsule, $\textbf{{u}}^{l}_{n}$, belonging to $\textbf{{U}}^{l}_{n,d}$, by a weight matrix. Intuitively, the whole tensor $\textbf{W}^{l}_{n^{l},n^{l+1},d^{l},d^{l+1}}$ that contains all weight matrices embeds all affine transformations between capsules of two adjacent layers. So, each capsule of the layer $l$, in order to make its projections for the layer above, follows Eq. 3 $\displaystyle\hat{\textbf{U}}_{(n^{l},n^{l+1},:)}^{l}=\textbf{{u}}_{n}^{\textrm{T}l}\times\textbf{W}_{(n^{l},n^{l+1},:,:)}^{l}$ (3) where $\hat{\textbf{U}}_{n^{l},n^{l+1},d^{l+1}}^{l}$ contains all predictions of the $l-th$ capsules. Indeed, each $n^{l}$ capsule, by means of the weight matrix, predicts the properties of all $n^{l+1}$ capsules. Then, capsules of the layer above, $\textbf{{s}}^{l+1}_{n}$, can be computed with Eq. 4 $\displaystyle\textbf{{s}}^{l+1}_{n}=\hat{\textbf{U}}_{(:,n^{l+1},:)}^{\textrm{T}l}\times\left(\textbf{{C}}^{l}_{(:,n^{l+1})}+\textbf{{B}}^{l}_{(:,n^{l+1})}\right)$ (4) where $\textbf{{B}}^{l}_{n^{l},n^{l+1}}$ is the log priors matrix containing all weights discriminatively learnt at the same time as all the other weights. On the other hand, $\textbf{{C}}^{l}_{n^{l},n^{l+1}}$ is the matrix containing all coupling coefficients produced by the self-attention algorithm. So, the priors help to create biases towards more strongly linked capsules and the self-attention routing dynamically assigns detected shapes to the whole they represent in the specific $(i)$ instance taken into account. The coupling coefficients are computed starting from the self-attention tensor $\textbf{A}^{l}_{n^{l},n^{l},n^{l+1}}$ using Eq.
5: $\displaystyle\textbf{A}^{l}_{(:,:,n^{l+1})}=\frac{\hat{\textbf{U}}_{(:,n^{l+1},:)}^{l}\times\hat{\textbf{U}}_{(:,n^{l+1},:)}^{\textrm{T}l}}{\sqrt{d^{l}}}$ (5) which contains a symmetric matrix $\textbf{A}^{l}_{:,:,n^{l+1}}$ for each capsule $n^{l+1}$ of the layer above. The scaling term $\sqrt{d^{l}}$ stabilizes training and helps maintain a balance between the coupling coefficients and the log priors. Each self-attention matrix contains the agreement score for every pair of the $n^{l}$ capsule predictions, so these scores can be used to compute the coupling coefficients. In particular, Eq. 6 produces the final coefficients that are used in Eq. 4 to obtain all capsules $\textbf{{S}}^{l+1}_{n,d}$ of layer $l+1$: $\displaystyle\textbf{{C}}^{l}_{(:,n^{l+1})}=\frac{\exp\left(\sum_{n^{l}}\textbf{A}^{l}_{(:,n^{l},n^{l+1})}\right)}{\sum_{n^{l+1}}\exp\left(\sum_{n^{l}}\textbf{A}^{l}_{(:,n^{l},n^{l+1})}\right)}$ (6) As a result, the coupling coefficients between a capsule of layer $l$ and all the capsules of the layer above, $l+1$, sum to one. The log-prior probabilities are then added to the coupling coefficients to obtain the final routing weights. The procedure remains unchanged in the presence of multiple capsule layers stacked on top of each other to create a deeper hierarchy.

### 3.2 Margin Loss and reconstruction regularizer

The output layer is no longer represented by scalars but by vectors as well. Indeed, a capsule of the final layer does not only represent the probability that a certain object class exists, but also all its properties extracted from its individual parts. The length of the instantiation vector is used to represent the probability that a capsule's entity exists, and it should be close to one if and only if the entity it represents is present in the image. So, to allow multiple classes, we compute Eq.
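The whole routing step of Eqs. (3)–(6) can be sketched with NumPy `einsum` calls. The layer sizes below are illustrative, and the random arrays stand in for trained parameters; the softmax is stabilised by subtracting the row maximum, which is mathematically equivalent to Eq. (6):

```python
import numpy as np

rng = np.random.default_rng(0)
n_l, n_next, d_l, d_next = 16, 10, 8, 16            # illustrative layer sizes

U = rng.normal(size=(n_l, d_l))                     # squashed capsules u^l_n
W = rng.normal(size=(n_l, n_next, d_l, d_next))     # affine transformations
B = np.zeros((n_l, n_next))                         # learned log priors

# Eq. (3): each lower capsule predicts every upper capsule
U_hat = np.einsum('nd,nmde->nme', U, W)

# Eq. (5): self-attention agreement scores, scaled by sqrt(d^l)
A = np.einsum('imd,jmd->ijm', U_hat, U_hat) / np.sqrt(d_l)

# Eq. (6): sum the scores over one lower-layer axis, then softmax
# over the upper-layer axis (stabilised form)
score = A.sum(axis=1)                               # shape (n_l, n_next)
score -= score.max(axis=1, keepdims=True)
C = np.exp(score) / np.exp(score).sum(axis=1, keepdims=True)

# Eq. (4): weighted sum of predictions, biased by the log priors
S_next = np.einsum('imd,im->md', U_hat, C + B)      # capsules of layer l+1
```

The squash of Eq. (2) would then be applied to `S_next` before the next layer.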
7 for each class represented by a capsule $n^{L}$ of the last layer $L$: $\displaystyle\mathcal{L}_{n^{L}}=T_{n^{L}}\,\textrm{max}\left(0,m^{+}-||\textbf{{u}}^{L}_{n}||\right)^{2}+\lambda\left(1-T_{n^{L}}\right)\textrm{max}\left(0,||\textbf{{u}}^{L}_{n}||-m^{-}\right)^{2}$ (7) where $T_{n^{L}}$ is equal to one if class $n^{L}$ is present, and $m^{+}$, $m^{-}$ and $\lambda$ are hyperparameters to be tuned. The separate margin losses $\mathcal{L}_{n^{L}}$ are then summed to compute the final loss during the training phase. Finally, we adopt the reconstruction regularizer of [10] to encourage all final capsules to encode robust and meaningful properties: the output capsules $\{\textbf{{u}}^{L}_{n}\}_{n=1,...,N}$ are fed to the reconstruction decoder, and the mean L2 loss between the input image and the decoder output, scaled by a factor $r$, is added to the margin loss.

## 4 Results

We aim to demonstrate that a properly working capsule network can achieve higher results with a considerably lower number of parameters, due to its intrinsic capability to embed information more efficiently. In this section, we test the proposed methodology in an experimental context, assessing its generalization capabilities and efficiency with respect to traditional convolutional neural networks and similar works in the literature. For this purpose, we test our methodology on three of the datasets most commonly used to assess capsule-based networks: MNIST, smallNORB and MultiMNIST. On all datasets, we demonstrate a remarkable difference with respect to traditional solutions and comparable accuracy levels with similar methodologies, in most cases with a fraction of the trainable parameters. All experiments clearly show that a capsule network is capable of achieving higher results with a considerably lower parameter count.
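A small NumPy sketch of the margin loss of Eq. (7), using the hyperparameter values adopted later in the paper ($m^{+}=0.9$, $m^{-}=0.1$, $\lambda=0.5$); the array shapes and names are illustrative:

```python
import numpy as np

def margin_loss(lengths, targets, m_pos=0.9, m_neg=0.1, lam=0.5):
    """Margin loss of Eq. (7), summed over classes.

    `lengths` holds the output-capsule norms ||u^L_n||, and `targets`
    the one/zero class indicators T_{n^L}.
    """
    present = targets * np.maximum(0.0, m_pos - lengths) ** 2
    absent = lam * (1 - targets) * np.maximum(0.0, lengths - m_neg) ** 2
    return np.sum(present + absent)

lengths = np.array([0.95, 0.05, 0.6])   # norms of three output capsules
targets = np.array([1.0, 0.0, 0.0])     # class 0 is the one present
loss = margin_loss(lengths, targets)    # only the 0.6 absent capsule is penalised
```

Note that a confident, correct prediction (present capsules longer than $m^{+}$, absent ones shorter than $m^{-}$) incurs zero loss.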
Moreover, we show how a simple ensemble of a few instances of Efficient-CapsNet can easily establish state-of-the-art results on all three datasets. Finally, using principal component analysis, we give an introspection into the inner representations of the network and its capability to encode visual information.

### 4.1 Experimental settings

Method | Parameters [K] | OPS$|_{1batch}$ [G] | Improvement$|_{1batch}$ (%)
---|---|---|---
CapsNet [10] | 6800 | 0.401 | 84.96
AR CapsNet [27] | 5310 | 0.098 | 38.66
Matrix-CapsNet with EM routing [14] | 310 | 0.086 | 29.56
Efficient-CapsNet | 161 | 0.06 | -

Table 1: Comparison of the computational cost, in terms of necessary operations, between Efficient-CapsNet and other similar methodologies in the literature. Efficient-CapsNet, besides having a reduced number of trainable parameters, is much more efficient.

In all experiments, in order to map input samples onto a higher-dimensional space, we adopt four convolutional layers with $k=5$ for the first convolution and $k=3$ for all the others; the numbers of feature maps $f$ are 32, 64, 64 and 128, respectively. ReLU is used in all layers, although leaky-ReLU is a valuable alternative. As previously discussed, the number of capsules depends on the number of feature maps, $f$, of the last convolutional layer. Indeed, the depthwise separable operation has a kernel dimension $k\times k$ equal to the output dimension $H\times W$ of the $H_{Conv}$ function and a number of filters $f$ equal to its filter dimension $F$. The first layer of primary capsules, $\textbf{{S}}^{1}_{n,d}$, has $n^{1}=16$ capsules with a dimension $d^{1}$ of 8. Multiple fully-connected capsule layers can be added to increase the capacity of the network; however, we adopt only two capsule layers due to the relative simplicity of the datasets investigated. Finally, the output layer of the network has a number of capsules $n^{L}$ equal to the number of classes of the specific dataset taken into account.
Since higher-level capsules represent more complex entities with more degrees of freedom, their dimensionality increases. All loss parameters are taken from the CapsNet [10] training setup: for all experiments, $m^{+}$, $m^{-}$ and $\lambda$ are set to 0.9, 0.1 and 0.5, respectively. Moreover, the scaling factor $r$ for the reconstruction regularizer is set to 0.392: since we use the mean of the L2 loss, while CapsNet uses its sum, $0.392=0.0005\cdot 784$. All experiments are carried out on a workstation with an Nvidia RTX2080 GPGPU with 8GB of memory and 64GB of DDR4 SDRAM. We use the TensorFlow 2.x framework with CUDA 11. All result statistics are obtained as the mean of 30 trials. Table 1 presents a comparison between the architecture of Efficient-CapsNet and other similar methodologies. Our model has a far lower parameter count and is much more efficient in terms of the operations required, which clearly highlights the generalization capability of capsules with respect to traditional CNNs.

### 4.2 MNIST results

Figure 5: Digit reconstruction with the different tested methodologies. Even with different architecture strategies and training objectives, all networks are able to embed different properties of the input digits, keeping only the important details.

The MNIST dataset [41] is composed of 70000 $28\times 28$ images, divided into 60000 for training and 10000 for testing. We adopt the same data augmentation proposed by Byerly et al. [20]. The reconstruction network is a simple fully-connected network with two hidden layers of 512 and 1024 neurons. We test our methodology and compare it with different models and two custom CNN baselines. In particular, our baseline is identical to Sabour et al. [10], with the exception of a reduced number of feature maps and layers in order to keep the number of parameters as close as possible to Efficient-CapsNet.
On the other hand, "Base-CapsNet" is a CNN with a vectorial output, as in a capsule-based network, and is therefore also trained with the margin loss function. It is specifically devised to assess the role of the reconstruction network and its impact on the overall accuracy. Our networks are trained for 100 epochs with a batch size of 16, the Adam [42] optimizer and an initial learning rate of $\eta=5e-4$ with exponential decay 0.98. All hyperparameters are selected on a small percentage of validation data.

Method | Reconstruction | Parameters [K] | MNIST [%]
---|---|---|---
Our Baseline | no | 173 | 0.48
Base-CapsNet | no | 183 | 0.54
Our Baseline | yes | 173 | 0.4
Base-CapsNet | yes | 183 | 0.39
Efficient-CapsNet | yes | 161 | $0.26_{\pm 0.0002}$
Baseline [10] | no | 35400 | 0.39
CapsNet [10] | yes | 6800 | $0.25_{\pm 0.005}$ ($0.36_{\pm 0.04}$)*
Matrix-CapsNet with EM routing [14] | no | 310 | 0.44
DA-CapsNet [29] | yes | 7000 | 0.47
AR CapsNet [27] | yes | 5310 | 0.54
HFCs [20] | no | 1514 | $0.25_{\pm 0.0002}$

Table 2: Test error (%) on the MNIST classification task. All methodologies are reported with their number of parameters and the presence of the reconstruction regularizer during the training phase. * indicates the results from our experiments.

Table 2 reports the parameters and test errors of the different tested architectures. The gap between the baseline CNNs and all capsule-based networks is evident. Moreover, even though Efficient-CapsNet has barely 161K parameters, it is comparable with all other methodologies in the literature so far: it achieves a mean accuracy of 0.9974, with a minimum of 0.9971 and a maximum of 0.9978. Finally, a network with a vectorial output receives a significant boost in performance when using the reconstruction regularizer. Figure 5 presents some images generated by the reconstruction networks of the different tested methodologies.
It is also worth noticing that, even in the presence of an adaptive gradient descent method, Efficient-CapsNet does not overfit the training set, registering a similar accuracy on the test set after training.

Method | Year | Test Error [%]
---|---|---
Multi-Column Deep Neural Networks for Image Classification [43] | 2012 | 0.23
Regularization of Neural Networks using DropConnect [44] | 2013 | 0.21
RMDL: Random Multimodel Deep Learning for Classification [45] | 2018 | 0.18
Base-Branching & Merging CNN w/HFCs [20] | 2020 | 0.16
Efficient-CapsNet | 2021 | 0.16

Table 3: Test error (%) on the MNIST classification task of state-of-the-art ensemble-based methodologies over the years.

As previously stated, we also demonstrate that a simple ensemble of Efficient-CapsNet models can easily establish a state-of-the-art result. Indeed, we exploit the 30 networks trained for the test-score statistics to produce an ensemble prediction. In particular, we average the predictions of all networks with an accuracy greater than 0.9973, obtaining a final test error of 0.16. Table 3 summarizes the results of the top MNIST leaderboard methodologies. The considerable gap between the mean single-network test error, 0.26, and the ensemble one, 0.16, is due to the uncertainty of the predictions on the remaining ambiguous digits. Indeed, Efficient-CapsNet predicts the output class using the length of its output vectors; so, unlike the exclusive softmax function, most of the ambiguous digits are reflected in the uncertainty of the network outputs, and the ensemble simply steers predictions towards the most probable answer. That is a clear sign of the strong knowledge of the dataset encapsulated by the network during training. Indeed, analyzing the misclassified digits and their prediction scores in the case of a single model clarifies the correctness of its answers despite the given labels. As shown in Figure 6, misclassified examples are ambiguous, and classifying them correctly is largely a matter of luck.
In our opinion, this is why networks reaching the accuracy level of Efficient-CapsNet have modelled every important aspect of the MNIST dataset, and further improvements in the test score have no significant meaning.

Figure 6: Examples of digits misclassified by Efficient-CapsNet. Green bars represent the correct labels, and their height the corresponding capsule length. The ambiguity of these remaining questionable examples is reflected in the uncertainty of the network predictions.

### 4.3 smallNORB results

Method | Reconstruction | Parameters [K] | smallNORB [%]
---|---|---|---
Our Baseline | no | 198 | 5.9
Base-CapsNet | no | 167 | 4.58
Our Baseline | yes | 198 | 4.59
Base-CapsNet | yes | 167 | 4.33
Efficient-CapsNet | yes | 151 | $2.54_{\pm 0.003}$
Baseline [14] | no | 4200 | 5.2
Matrix-CapsNet with EM routing [14] | no | 310 | 1.8 ($4.4_{\pm 0.004}$)*
CapsNet [10] | yes | 6800 | 3.77
VB-Routing [16] | yes | 310 | $1.6_{\pm 0.06}$

Table 4: Test error (%) on the smallNORB classification task. All methodologies are reported with their number of parameters and the presence of the reconstruction regularizer during the training phase. * indicates the results from our experiments.

The smallNORB dataset is a collection of 48600 stereo grayscale images ($96\times 96\times 2$) representing 50 toys belonging to 5 generic categories: humans, airplanes, trucks, cars and four-legged animals. Each toy was photographed by two cameras under 6 lighting conditions, 9 elevations and 18 azimuths. The dataset is split in half: 5 instances of each category are used for training and the remaining ones for testing. Efficient-CapsNet has the same structure described in the "MNIST results" section, with the only exception of Instance Normalization [46] in place of the Batch Normalization layers.
This greatly helps the network deal with different lighting conditions and makes training as independent as possible of the contrast and brightness differences among the input images. On the other hand, we follow the same data augmentation and pre-processing proposed by Hinton et al. [14], with the only exception of the input dimension: we scale the original images to $64\times 64$ and use patches of $48\times 48$. We train for 200 epochs with a batch size of 16, the Adam optimizer and an initial learning rate of $\eta=5e-4$ with exponential decay of 0.99. Table 4 summarizes the results of the baseline networks, Efficient-CapsNet and some capsule-based methodologies in the literature. As for the MNIST dataset, the gap between classical CNNs and capsule-based networks is also evident for smallNORB. Moreover, our methodology again achieves results comparable with all other similar methodologies, but with half the parameters: a mean accuracy of 0.974, with a minimum of 0.97 and a maximum of 0.983. Finally, as before, we exploit the 30 networks trained for statistical evidence to produce an ensemble prediction. We select only the two networks with the lowest test error, and for both we adopt a 40-patch prediction [14] before averaging their results. We obtain a test error of 1.23%, setting a new state-of-the-art result for this dataset.

### 4.4 MultiMNIST results

The MultiMNIST dataset was proposed by Sabour et al. [10] and is based on the superposition of pairs of shifted digits from the MNIST dataset. Each original image is first padded to a $36\times 36$ pixel size. A MultiMNIST sample is generated by overlaying two padded digits, each shifted by up to 4 pixels in both dimensions, resulting in an average 80% overlap. The only condition to be met is that the two digits are of different classes. In the labels, the indexes corresponding to both classes are set to 1.
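The generation procedure just described can be sketched as follows; this is our own illustrative re-implementation (`make_multimnist` and its arguments are hypothetical names, not the authors' code), with random arrays standing in for the two MNIST digits:

```python
import numpy as np

def make_multimnist(d1, d2, rng, pad=4, shift=4):
    """Overlay two 28x28 digits of different classes into a 36x36 sample.

    Each digit is placed on its own 36x36 canvas (centered by `pad`),
    independently shifted by up to `shift` pixels on both axes, and the
    two canvases are summed and clipped to [0, 1].
    """
    canvas = np.zeros((2, 36, 36))
    for k, d in enumerate((d1, d2)):
        dy, dx = rng.integers(-shift, shift + 1, size=2)
        canvas[k, pad + dy:pad + dy + 28, pad + dx:pad + dx + 28] = d
    return np.clip(canvas[0] + canvas[1], 0.0, 1.0)

rng = np.random.default_rng(0)
a, b = rng.random((28, 28)), rng.random((28, 28))  # stand-ins for two digits
sample = make_multimnist(a, b, rng)
```

With `pad=4` and `shift=4`, every shifted digit still fits inside the $36\times 36$ canvas.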
In this way, the network's aim is to detect both digits concurrently. During training, the output capsules corresponding to the target classes are selected one at a time and used to reconstruct the two input images, while during testing we select the two most active capsules, i.e. the longest ones. Ideally, the network should be able to segment the two digits that generated the MultiMNIST sample and reconstruct them independently. During training, for each epoch, we randomly generate 10 MultiMNIST images for each original MNIST example. We train the model 5 times independently for about 100 epochs, with a batch size of 64, the Adam optimizer and an initial learning rate of $\eta=5e-4$ with exponential decay of 0.97. Since we generate two reconstruction images for each input sample, we halve the reconstruction regularizer. During testing, we generate 1000 MultiMNIST images for each MNIST digit, for a total of 10 million samples, to allow a fair comparison with the work by Sabour et al. [10]. We obtain a mean test error of $5.1\%_{\pm 0.005}$ with our model of 154K parameters, compared to the original work's test error of $5.2\%$ with more than 9M parameters. Moreover, with an ensemble of the three models whose accuracy is greater than a threshold of 0.9470, we reduce the test error to 3.8%. These results show that our methodology is able to correctly detect and recognize highly overlapping digits, encoding information about their position and style in the output-layer capsules.

### 4.5 Affine transformations embedding

Figure 7: Effect on the digit reconstruction of adding perturbations to the output capsule values, for the different tested methodologies. All networks are able to embed shape, position and orientation information of the input digit, except for the classical CNN with softmax output.
This suggests that the capsule structure of the output, in which each class has its own feature vector, is fundamental to obtain interpretable output embeddings. To understand what kind of information is embedded in the output capsules, we can perturb the prediction and observe how the reconstruction is affected. We select the capsule with the longest length and add small positive and negative contributions to its individual elements. Figure 7 shows some examples of perturbed images for the different methodologies. We can observe that Efficient-CapsNet behaves similarly to the original CapsNet [10], with the ability to encode combinations of different transformations of the digit. Retraining CapsNet with the proposed self-attention routing also yields a similar behaviour. A convolutional neural network with a fake capsule layer, i.e. a vector instead of a scalar for each output class, also demonstrates the ability to encode actual shape, position and orientation information. On the other hand, considering the last features of a classical CNN, we are not able to reproduce this behaviour. This suggests that a capsule organization of the output, in which each digit has its own instantiation parameters and the activation is measured by the length of the vector, is fundamental for a meaningful embedding of the information.

(a) Translations on x: [-5,+5] pixels (b) Translations on y: [-5,+5] pixels (c) Rotations: [-25,+25] degrees (d) Random

Figure 8: Test-set average cumulative variance explained with different numbers of PCA components by the Efficient-CapsNet output capsule. It is clearly visible how the model is able to linearly embed affine transformations in the output space.

To further investigate the ability of the proposed model to capture meaningful information in the components of the output capsules, we study the equivariance to transformations with a method similar to the one proposed by Choi et al. [27].
For each test image, we generate the images corresponding to the 11 translations between [-5,+5] pixels on each axis and to the 51 rotations between [-25,+25] degrees. If the model behaves as expected, each affine transformation (translation on x, translation on y, rotation) should be independently and linearly encoded in the activations of the correct output capsule. We verify this by computing a Principal Component Analysis of the output vectors for each type of transformation. We denote by $K$ the number of transformed images and by $N$ the number of output classes, and we collect the output predictions $\textbf{{u}}_{i},\;i=1,...,K$. We center the data points and compute the Singular Value Decomposition of the covariance matrix $\textbf{{C}}$: $\displaystyle\textbf{{z}}_{i}=\textbf{{u}}_{i}-\overline{\textbf{{u}}}$ (8) $\displaystyle\textbf{{C}}=\frac{1}{K}\sum_{i=1}^{K}\textbf{{z}}_{i}\,\textbf{{z}}_{i}^{T}$ (9) $\displaystyle\textbf{{C}}=\textbf{{U}}\bm{\varSigma}\textbf{{U}}^{T}$ (10) As a linearity metric, we consider the fraction of the first eigenvalue $\sigma_{1}$ of the matrix $\bm{\varSigma}$ over the sum of all its eigenvalues. Since the eigenvalues represent the variance of the original data points explained by each component of the PCA, if the transformations are linearly encoded we should capture a high fraction of the variance with just a single component, and thus obtain a high first-eigenvalue ratio: $\displaystyle r=\frac{\sigma_{1}}{\sum_{j=1}^{N}\sigma_{j}}$ (11) We perform this analysis on both the original CapsNet [10] and our model. The average results over all the test images are shown in Table 5, along with a comparison with the PCA performed on randomly generated vectors of the same dimension. Efficient-CapsNet shows higher linearity than the original CapsNet in the encoding of affine transformations in the output capsule space.
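Eqs. (8)–(11) amount to a first-principal-component variance ratio, which can be sketched in NumPy as follows (the shapes and the name `first_component_ratio` are our own, for illustration):

```python
import numpy as np

def first_component_ratio(U):
    """Eqs. (8)-(11): fraction of variance captured by the first PCA
    component of the output vectors U (one row per transformed image)."""
    Z = U - U.mean(axis=0)                       # Eq. (8): centre the outputs
    C = (Z.T @ Z) / len(U)                       # Eq. (9): covariance matrix
    sigma = np.linalg.svd(C, compute_uv=False)   # Eq. (10): eigenvalues of C
    return sigma[0] / sigma.sum()                # Eq. (11): linearity metric r

rng = np.random.default_rng(0)
t = np.linspace(-1, 1, 11)                       # 11 transformation steps
linear = np.outer(t, rng.normal(size=16))        # outputs moving along one line
noisy = rng.normal(size=(11, 16))                # unstructured outputs
# a perfectly linear trajectory puts essentially all variance
# in a single principal component, so its ratio is close to one
```

Since `C` is symmetric positive semi-definite, its singular values coincide with its eigenvalues, so the SVD of Eq. (10) directly yields the $\sigma_{j}$ of Eq. (11).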
Figure 8 presents the average cumulative variance explained as the number of PCA components increases, over the whole test set. For all three transformations, Efficient-CapsNet is able to capture all the information with just two components, showing an almost perfectly linear behaviour compared to the random example. This shows that our architecture can correctly embed position and orientation information of the recognized digit in the output vector components.

Method | Translations on x | Translations on y | Rotations
---|---|---|---
Random | $25.57\%_{\pm 0.028}$ | $25.54\%_{\pm 0.028}$ | $13.49\%_{\pm 0.009}$
CapsNet [10] | $83.78\%_{\pm 0.006}$ | $79.82\%_{\pm 0.009}$ | $88.01\%_{\pm 0.006}$
Efficient-CapsNet | $89.69\%_{\pm 0.005}$ | $87.28\%_{\pm 0.008}$ | $88.75\%_{\pm 0.005}$

Table 5: Average percentage of variance captured by the first PCA component of the output capsule vectors under the different transformations applied to the test-set images.

## 5 Conclusion

In this paper, we proposed Efficient-CapsNet, a novel capsule-based network that strongly highlights the generalization capabilities of capsules over traditional CNNs, showing a much stronger knowledge representation after training. Indeed, our implementation, even with a very limited number of parameters, is still capable of achieving state-of-the-art results on three distinct datasets, considerably outperforming previous implementations in terms of required operations. Moreover, we introduced an alternative non-iterative routing algorithm that exploits a self-attention mechanism to efficiently route a reduced number of capsules between subsequent layers. Further work will aim at designing a synthetic dataset to scale the network and at analyzing in depth viewpoint generalization and the network's inner feature representations.
## Acknowledgements

This work has been developed with the contribution of the Politecnico di Torino Interdepartmental Centre for Service Robotics PIC4SeR (https://pic4ser.polito.it) and SmartData@Polito (https://smartdata.polito.it).

## Author contributions statement

Conceptualization, V.M. and F.S.; methodology, V.M.; software, V.M. and F.S.; validation, V.M. and F.S.; formal analysis, V.M. and F.S.; investigation, V.M. and F.S.; resources, M.C.; data curation, V.M. and F.S.; writing original draft preparation, V.M. and F.S.; writing review and editing, V.M. and F.S.; visualization, V.M. and F.S.; supervision, V.M. and F.S.; project administration, V.M., F.S. and M.C.; funding acquisition, M.C.

## References

* [1] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural networks. Communications of the ACM, 60(6):84–90, 2017.
* [2] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770–778, 2016.
* [3] Jie Hu, Li Shen, and Gang Sun. Squeeze-and-excitation networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 7132–7141, 2018.
* [4] Joseph Redmon, Santosh Divvala, Ross Girshick, and Ali Farhadi. You only look once: Unified, real-time object detection. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 779–788, 2016.
* [5] Wei Liu, Dragomir Anguelov, Dumitru Erhan, Christian Szegedy, Scott Reed, Cheng-Yang Fu, and Alexander C Berg. SSD: Single shot multibox detector. In European conference on computer vision, pages 21–37. Springer, 2016.
* [6] Vittorio Mazzia, Aleem Khaliq, Francesco Salvetti, and Marcello Chiaberge. Real-time apple detection system using embedded systems with hardware accelerators: An edge AI application. IEEE Access, 8:9102–9114, 2020.
* [7] Kaiming He, Georgia Gkioxari, Piotr Dollár, and Ross Girshick. Mask R-CNN. In Proceedings of the IEEE international conference on computer vision, pages 2961–2969, 2017.
* [8] Geoffrey E Hinton, Alex Krizhevsky, and Sida D Wang. Transforming auto-encoders. In International conference on artificial neural networks, pages 44–51. Springer, 2011.
* [9] David G Lowe. Object recognition from local scale-invariant features. In Proceedings of the seventh IEEE international conference on computer vision, volume 2, pages 1150–1157. IEEE, 1999.
* [10] Sara Sabour, Nicholas Frosst, and Geoffrey E Hinton. Dynamic routing between capsules. Advances in neural information processing systems, 30:3856–3866, 2017.
* [11] Edgar Xi, Selina Bing, and Yang Jin. Capsule network performance on complex data. arXiv preprint arXiv:1712.03480, 2017.
* [12] Dilin Wang and Qiang Liu. An optimization view on dynamic routing between capsules, 2018.
* [13] Jan Eric Lenssen, Matthias Fey, and Pascal Libuschewski. Group equivariant capsule networks. arXiv preprint arXiv:1806.05086, 2018.
* [14] Geoffrey E Hinton, Sara Sabour, and Nicholas Frosst. Matrix capsules with EM routing. In International conference on learning representations, 2018.
* [15] Mohammad Taha Bahadori. Spectral capsule networks, 2018.
* [16] Fabio De Sousa Ribeiro, Georgios Leontidis, and Stefanos D Kollias. Capsule routing via variational bayes. In AAAI, pages 3749–3756, 2020.
* [17] Jindong Gu and Volker Tresp. Improving the robustness of capsule networks to image affine transformations. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 7285–7293, 2020.
* [18] Inyoung Paik, Taeyeong Kwak, and Injung Kim. Capsule networks need an improved routing algorithm. In Asian Conference on Machine Learning, pages 489–502. PMLR, 2019.
* [19] Sai Raam Venkatraman, Ankit Anand, S Balasubramanian, and R Raghunatha Sarma. Learning compositional structures for deep learning: Why routing-by-agreement is necessary. arXiv preprint arXiv:2010.01488, 2020.
* [20] Adam Byerly, Tatiana Kalganova, and Ian Dear. A branching and merging convolutional network with homogeneous filter capsules. arXiv preprint arXiv:2001.09136, 2020.
* [21] Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473, 2014.
* [22] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. arXiv preprint arXiv:1706.03762, 2017.
* [23] Max Jaderberg, Karen Simonyan, Andrew Zisserman, and Koray Kavukcuoglu. Spatial transformer networks. arXiv preprint arXiv:1506.02025, 2015.
* [24] Kelvin Xu, Jimmy Ba, Ryan Kiros, Kyunghyun Cho, Aaron Courville, Ruslan Salakhudinov, Rich Zemel, and Yoshua Bengio. Show, attend and tell: Neural image caption generation with visual attention. In International conference on machine learning, pages 2048–2057. PMLR, 2015.
* [25] Sanghyun Woo, Jongchan Park, Joon-Young Lee, and In So Kweon. CBAM: Convolutional block attention module. In Proceedings of the European conference on computer vision (ECCV), pages 3–19, 2018.
* [26] Francesco Salvetti, Vittorio Mazzia, Aleem Khaliq, and Marcello Chiaberge. Multi-image super resolution of remotely sensed images using residual attention deep neural networks. Remote Sensing, 12(14):2207, 2020.
* [27] Jaewoong Choi, Hyun Seo, Suii Im, and Myungjoo Kang. Attention routing between capsules. In Proceedings of the IEEE International Conference on Computer Vision Workshops, 2019.
* [28] Yao-Hung Hubert Tsai, Nitish Srivastava, Hanlin Goh, and Ruslan Salakhutdinov. Capsules with inverted dot-product attention routing. arXiv preprint arXiv:2002.04764, 2020.
* [29] Wenkai Huang and Fobao Zhou. DA-CapsNet: dual attention mechanism capsule network. Scientific Reports, 10(1):1–13, 2020.
* [30] Dunlu Peng, Dongdong Zhang, Cong Liu, and Jing Lu. BG-SAC: Entity relationship classification model based on self-attention supported capsule networks. Applied Soft Computing, 91:106186, 2020.
* [31] Bruce McIntosh, Kevin Duarte, Yogesh S Rawat, and Mubarak Shah. Multi-modal capsule routing for actor and action video segmentation conditioned on natural language queries. arXiv preprint arXiv:1812.00303, 2018.
* [32] Ningyu Zhang, Shumin Deng, Zhanlin Sun, Xi Chen, Wei Zhang, and Huajun Chen. Attention-based capsule networks with dynamic routing for relation extraction. arXiv preprint arXiv:1812.11321, 2018.
* [33] Yongping Du, Xiaozheng Zhao, Meng He, and Wenyang Guo. A novel capsule based hybrid neural network for sentiment classification. IEEE Access, 7:39321–39328, 2019.
* [34] Ayush Jaiswal, Wael AbdAlmageed, Yue Wu, and Premkumar Natarajan. CapsuleGAN: Generative adversarial capsule network. In Proceedings of the European Conference on Computer Vision (ECCV) Workshops, 2018.
* [35] Kevin Duarte, Yogesh S Rawat, and Mubarak Shah. VideoCapsuleNet: A simplified network for action detection. arXiv preprint arXiv:1805.08162, 2018.
* [36] Rodney LaLonde and Ulas Bagci. Capsules for object segmentation. arXiv preprint arXiv:1804.04241, 2018.
* [37] Huy H Nguyen, Junichi Yamagishi, and Isao Echizen. Capsule-forensics: Using capsule networks to detect forged images and videos. In ICASSP 2019-2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 2307–2311. IEEE, 2019.
* [38] Aryan Mobiny, Hengyang Lu, Hien V Nguyen, Badrinath Roysam, and Navin Varadarajan. Automated classification of apoptosis in phase contrast microscopy using capsule network. IEEE transactions on medical imaging, 39(1):1–10, 2019.
* [39] KR Kruthika, HD Maheshappa, Alzheimer's Disease Neuroimaging Initiative, et al. CBIR system using capsule networks and 3D CNN for Alzheimer's disease diagnosis. Informatics in Medicine Unlocked, 14:59–68, 2019.
* [40] Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167, 2015.
* [41] Yann LeCun. The MNIST database of handwritten digits. http://yann.lecun.com/exdb/mnist/, 1998.
* [42] Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
* [43] Dan Ciregan, Ueli Meier, and Jürgen Schmidhuber. Multi-column deep neural networks for image classification. In 2012 IEEE conference on computer vision and pattern recognition, pages 3642–3649. IEEE, 2012.
* [44] Li Wan, Matthew Zeiler, Sixin Zhang, Yann Le Cun, and Rob Fergus. Regularization of neural networks using dropconnect. In International conference on machine learning, pages 1058–1066, 2013.
* [45] Kamran Kowsari, Mojtaba Heidarysafa, Donald E Brown, Kiana Jafari Meimandi, and Laura E Barnes. RMDL: Random multimodel deep learning for classification. In Proceedings of the 2nd International Conference on Information System and Data Mining, pages 19–28, 2018.
* [46] Dmitry Ulyanov, Andrea Vedaldi, and Victor Lempitsky. Instance normalization: The missing ingredient for fast stylization. arXiv preprint arXiv:1607.08022, 2016.
# A Practical Two-Sample Test for Weighted Random Graphs

Mingao Yuan, Qian Wen
Department of Statistics, North Dakota State University, Fargo, ND, USA, 58102.

###### Abstract

Network (graph) data analysis is a popular research topic in statistics and machine learning. In applications, one is frequently confronted with graph two-sample hypothesis testing, where the goal is to test the difference between two graph populations. Several statistical tests have been devised for this purpose in the context of binary graphs. However, many practical networks are weighted, and existing procedures cannot be directly applied to weighted graphs. In this paper, we study the weighted-graph two-sample hypothesis testing problem and propose a practical test statistic. We prove that the proposed test statistic converges in distribution to the standard normal distribution under the null hypothesis and analyze its power theoretically. A simulation study shows that the proposed test has satisfactory performance and substantially outperforms the existing counterpart in the binary-graph case. A real data application is provided to illustrate the method.

###### keywords: two-sample hypothesis test, random graph, weighted graph

## 1 Introduction

A graph or network $\mathcal{G}=(V,E)$ is a mathematical model that consists of a set $V$ of nodes (vertices) and a set $E$ of edges. In the last decades, it has been widely used to represent a variety of systems in various regimes [20, 10, 12, 19, 9]. For instance, in social networks, a node denotes an individual and an edge represents the interaction between two individuals [12]; in brain graphs, a node may be a neural unit and the functional link between two units forms an edge [14]; in co-authorship networks, the authors of a collection of articles are the nodes and an edge is defined by the co-authorship of two authors [20].
Due to these widespread applications, network data analysis has drawn a lot of attention in both the statistics and machine learning communities [1, 2, 5, 7, 13, 18, 24]. Most of the existing literature focuses on mining a single network, such as community detection [1, 2, 7, 5], global testing of community structures [7, 18, 13, 24], and so on. In practice, a number of graphs from multiple populations may be available. For example, in the 1000 Functional Connectomes Project, 1093 fMRI (weighted) networks were collected from subjects located in 24 communities [8]; to study the relation between Alzheimer's disease and a functional disconnection of distant brain areas, dozens of functional connectivity (weighted) networks from patients and control subjects were constructed [21]. In this case, a natural and fundamental question is to test the difference between two graph populations, known as graph two-sample hypothesis testing. A few papers deal with the graph two-sample hypothesis testing problem [8, 22, 15, 16]. Specifically, [8] first investigated this problem and proposed a $\chi^{2}$-type test. In [22], the authors developed a kernel-based test statistic for random dot product graph models. Under a more general setting, [15] studied the graph two-sample test from a minimax testing perspective and proposed testing procedures based on graph distances such as the Frobenius norm or the operator norm. The thresholds of the test statistics in [15] have to be calculated by concentration inequalities, which usually makes the tests very conservative [16]. To overcome this issue, [16] derived the asymptotic distribution of the test statistic and proposed practical test methods that outperform the existing ones. In practice, most graphs are weighted [8, 21, 23, 4, 3]. The testing procedures in [16, 15, 22, 11] are designed in the context of binary (unweighted) graphs and cannot be directly applied to weighted graphs (see Section 3 for an example).
Consequently, before using these tests, one has to artificially convert weighted graphs into binary graphs, which can result in a loss of information [23, 4, 3]. Motivated by the $T_{fro}$ test in [16], we propose a powerful test statistic for weighted graph two-sample hypothesis testing. Under the null hypothesis, the proposed test statistic converges in distribution to the standard normal distribution, and the power of the test is theoretically characterized. The simulation study shows that the test can achieve high power and substantially outperforms its counterpart in the binary graph case. Besides, we apply the proposed test to a real data set. The rest of the paper is organized as follows. In Section 2, we formally state the weighted graph two-sample hypothesis testing problem and present the theoretical results. In Section 3, we present the simulation study and a real data application. The proofs of the main results are deferred to Section 4.

## 2 Weighted Graph Two-Sample Hypothesis Test

For convenience, let $X\sim F$ denote that the random variable $X$ follows distribution $F$, and let $Bern(r)$ denote the Bernoulli distribution with success probability $r$. Let $V=\\{1,2,\dots,n\\}$ be a vertex (node) set and $\mathcal{G}=(V,E)$ denote an undirected graph on $V$ with edge set $E$. The adjacency matrix of graph $\mathcal{G}$ is a symmetric matrix $A\in\\{0,1\\}^{n\times n}$ such that $A_{ij}=1$ if $(i,j)\in E$ and 0 otherwise. The graph $\mathcal{G}$ is binary or unweighted, since $A_{ij}$ only records the existence of an edge. If $A_{ij}\sim Bern(p_{ij})$, $0\leq p_{ij}\leq 1$, then the graph $\mathcal{G}$ is called an inhomogeneous random graph (inhomogeneous Erdös-Rényi graph). Let $\mu=(\mu_{ij})_{1\leq i<j\leq n}$ be a sequence of real numbers and $Q=(Q_{ij})_{1\leq i<j\leq n}$ be a sequence of distributions defined on a bounded interval, where each $Q_{ij}$ is uniquely parametrized by its mean value $\mu_{ij}$.
A weighted random graph $\mathcal{G}=(V,Q,\mu)$ is defined as follows. For nodes $i,j$, $A_{ij}=A_{ji},\ \ A_{ij}\sim Q_{ij}(\mu_{ij}),\ 1\leq i<j\leq n,$ $A_{ii}=0$ $(i=1,2,\dots,n)$ and $A_{ij}$ is independent of $A_{kl}$ if $\\{i,j\\}\neq\\{k,l\\}$. If $Q_{ij}(\mu_{ij})=Bern(\mu_{ij})$, then $\mathcal{G}=(V,Q,\mu)$ is just the inhomogeneous random graph [16, 15]. Given an i.i.d. graph sample $G_{1},\dots,G_{m}\sim\mathcal{G}_{1}=(V,Q,\mu_{1})$ and an i.i.d. graph sample $H_{1},\dots,H_{m}\sim\mathcal{G}_{2}=(V,Q,\mu_{2})$, we are interested in the weighted graph two-sample hypothesis testing problem $H_{0}:\mathcal{G}_{1}=\mathcal{G}_{2},\hskip 28.45274ptH_{1}:\mathcal{G}_{1}\neq\mathcal{G}_{2}.$ (1) Let $A_{G_{k}}$ and $A_{H_{k}}$ be the adjacency matrices of graphs $G_{k}$ and $H_{k}$, respectively. Then $A_{G_{k},ij}\sim Q_{ij}(\mu_{1,ij})$ and $A_{H_{k},ij}\sim Q_{ij}(\mu_{2,ij})$ for $1\leq i<j\leq n$. Consequently, (1) is equivalent to the following hypothesis test $H_{0}:\mu_{1}=\mu_{2},\hskip 28.45274ptH_{1}:\mu_{1}\neq\mu_{2}.$ In the binary graph case ($Q_{ij}(\mu_{ij})=Bern(\mu_{ij}),1\leq i<j\leq n$), several testing procedures for (1) are available in the literature. For $m\rightarrow\infty$ and small $n$, a $\chi^{2}$-type test was proposed in [8]. For $m=1$ and $n\rightarrow\infty$, under the random dot product model, a nonparametric test statistic was developed in [22], and a test based on the eigenvalues of the adjacency matrix under the inhomogeneous random graph can be found in [16]. A more practical case is small $m$ $(m\geq 2)$ and $n\rightarrow\infty$. In this case, a test called $T_{fro}$ was proposed in [15] and its asymptotic behavior was studied in [16]. Recently, [11] proposed a test statistic based on the largest eigenvalue of a Wigner matrix. In this work, we study (1) for a broad class of distributions $Q$ and focus on the regime $m\geq 2$ and $n\rightarrow\infty$. The sample size $m$ could be either fixed or tend to infinity along with $n$.
To define the test statistic, the two samples $G_{k},H_{k},(1\leq k\leq m)$ are randomly partitioned into two parts, denoted as $G_{k},H_{k},(1\leq k\leq m/2)$ and $G_{k},H_{k},(m/2<k\leq m)$ with a slight abuse of notation. Let $s_{n}^{2}=\sum_{1\leq i<j\leq n}T_{ij}^{2}$, where $T_{ij}=\sum_{k\leq\frac{m}{2}}(A_{G_{k},ij}-A_{H_{k},ij})\sum_{k>\frac{m}{2}}(A_{G_{k},ij}-A_{H_{k},ij}).$ We propose the following test statistic for (1): $\mathcal{T}_{n}=\frac{\sum_{1\leq i<j\leq n}T_{ij}}{s_{n}}.$ (2) Let $\sigma_{ij}^{2}=Var(A_{G_{k},ij})$ and $\eta_{ij}=\mathbb{E}(A_{G_{k},ij}-A_{H_{k},ij})^{4}$ under $H_{0}$. The asymptotic distribution of $\mathcal{T}_{n}$ is given in the following theorem. ###### Theorem 2.1. Suppose $\displaystyle n=o\big{(}\sum_{1\leq i<j\leq n}\sigma_{ij}^{4}\big{)},\ \hskip 34.14322pt\ \frac{\sum_{1\leq i<j\leq n}\sigma_{ij}^{8}}{(\sum_{1\leq i<j\leq n}\sigma_{ij}^{4})^{2}}=o(1),\ $ $\displaystyle\frac{\sum_{1\leq i<j\leq n}\sigma_{ij}^{4}\eta_{ij}}{m(\sum_{1\leq i<j\leq n}\sigma_{ij}^{4})^{2}}=o(1),\ \ \frac{\sum_{1\leq i<j\leq n}\eta_{ij}^{2}}{m^{2}(\sum_{1\leq i<j\leq n}\sigma_{ij}^{4})^{2}}=o(1).$ (3) Then under $H_{0}$, $\mathcal{T}_{n}$ converges in distribution to $N(0,1)$, the standard normal distribution, as $n\rightarrow\infty$. Theorem 2.1 states that the limiting distribution of $\mathcal{T}_{n}$ under the null hypothesis is $N(0,1)$. Given type I error $\alpha$, we reject $H_{0}$ if $|\mathcal{T}_{n}|>Z_{(1-\frac{\alpha}{2})}$, where $Z_{(1-\frac{\alpha}{2})}$ is the $100(1-\frac{\alpha}{2})\%$ quantile of the standard normal distribution. Condition (3) can be simplified in the binary case. Suppose $Q_{ij}(\mu_{ij})=Bern(\mu_{ij})$ and $\mu_{1,ij}=\mu_{2,ij}=\mu_{ij}\leq 1-\delta$ for some $\delta\in(0,1)$ under $H_{0}$. Then $\eta_{ij}\leq 2\sigma_{ij}^{2}\leq 2\mu_{ij}$. In this case, condition (3) reduces to $n=o(\|\mu\|_{F}^{2})$. Here $\|\mu\|_{F}$ denotes the Frobenius norm of matrix $\mu$.
To see this, let $C$ be a generic constant, then $0.5\delta^{2}\|\mu\|_{F}^{2}=\delta^{2}\sum_{1\leq i<j\leq n}\mu_{ij}^{2}\leq\sum_{1\leq i<j\leq n}\sigma_{ij}^{4}\leq\sum_{1\leq i<j\leq n}\mu_{ij}^{2}=0.5\|\mu\|_{F}^{2},$ (4) $\frac{\sum_{1\leq i<j\leq n}\sigma_{ij}^{8}}{(\sum_{1\leq i<j\leq n}\sigma_{ij}^{4})^{2}}\leq C\frac{\sum_{1\leq i<j\leq n}\mu_{ij}^{4}}{(\sum_{1\leq i<j\leq n}\mu_{ij}^{2})^{2}}\leq C\frac{\sum_{1\leq i<j\leq n}\mu_{ij}^{2}}{(\sum_{1\leq i<j\leq n}\mu_{ij}^{2})^{2}}=C\frac{1}{\|\mu\|_{F}^{2}}\rightarrow 0,$ (5) $\frac{\sum_{1\leq i<j\leq n}\sigma_{ij}^{4}\eta_{ij}}{m(\sum_{1\leq i<j\leq n}\sigma_{ij}^{4})^{2}}\leq C\frac{\sum_{1\leq i<j\leq n}\mu_{ij}^{2}}{m(\sum_{1\leq i<j\leq n}\mu_{ij}^{2})^{2}}=C\frac{1}{m\|\mu\|_{F}^{2}}\rightarrow 0,$ (6) $\frac{\sum_{1\leq i<j\leq n}\eta_{ij}^{2}}{m^{2}(\sum_{1\leq i<j\leq n}\sigma_{ij}^{4})^{2}}\leq C\frac{\sum_{1\leq i<j\leq n}\mu_{ij}^{2}}{m^{2}(\sum_{1\leq i<j\leq n}\mu_{ij}^{2})^{2}}=C\frac{1}{m^{2}\|\mu\|_{F}^{2}}\rightarrow 0.$ (7) If $n=o(\|\mu\|_{F}^{2})$, then (3) holds by (4),(5),(6),(7). In the following, we analyze the power of the proposed test statistic. Let $\sigma_{1,ij}^{2}=Var(A_{G_{k},ij})$, $\sigma_{2,ij}^{2}=Var(A_{H_{k},ij})$ under $H_{1}$ and $V_{ij}=\sigma_{1,ij}^{2}+\sigma_{2,ij}^{2}+(\mu_{1,ij}-\mu_{2,ij})^{2},\ \ \lambda_{n}=\frac{m\sum_{1\leq i<j\leq n}(\mu_{1,ij}-\mu_{2,ij})^{2}}{2\sqrt{\sum_{1\leq i<j\leq n}V_{ij}^{2}}}.$ ###### Theorem 2.2. Suppose $n=o\big{(}m\sum_{1\leq i<j\leq n}V_{ij}^{2}\big{)}$. Then under $H_{1}$, $\mathcal{T}_{n}=\lambda_{n}+O_{P}(1)$. According to Theorem 2.2, the power of the test goes to one if $\lambda_{n}\rightarrow\infty$, as $n\rightarrow\infty$. The expression of $\lambda_{n}$ explicitly characterizes the effect of sample size $m$ and the mean and variance of edge weight on the power of the test statistic. In the following, let’s restrict Theorem 2.2 to binary graphs to see when the test could achieve high power. 
Suppose $Q_{ij}(\mu_{t,ij})=Bern(\mu_{t,ij})$ and $\mu_{t,ij}\rightarrow 0$ for $t=1,2$, and $\mu_{1,ij}/\mu_{2,ij}\rightarrow\tau$ ($\tau>0$). Then $V_{ij}=\mu_{1,ij}(1-\mu_{1,ij})+\mu_{2,ij}(1-\mu_{2,ij})+(\mu_{1,ij}-\mu_{2,ij})^{2}=(\mu_{1,ij}+\mu_{2,ij})(1+o(1)).$ In this case, $n=o(m\sum_{1\leq i<j\leq n}V_{ij}^{2})$ requires $n=o(m\|\mu_{1}+\mu_{2}\|_{F}^{2})$. Besides, $\lambda_{n}=\frac{m\sum_{1\leq i<j\leq n}(\mu_{1,ij}-\mu_{2,ij})^{2}}{2\sqrt{\sum_{1\leq i<j\leq n}(\mu_{1,ij}+\mu_{2,ij})^{2}}}(1+o(1))=(1+o(1))\frac{m\|\mu_{1}-\mu_{2}\|_{F}^{2}}{2\sqrt{2}\|\mu_{1}+\mu_{2}\|_{F}}.$ (8) For fixed $\mu_{1}$ and $\mu_{2}$, as the sample size $m$ increases, the power increases. As $\|\mu_{1}-\mu_{2}\|_{F}^{2}$ gets larger, the power gets higher when $\|\mu_{1}+\mu_{2}\|_{F}^{2}$ and $m$ are held constant. ###### Remark 1. The quantity $\lambda_{n}$ in Theorem 2.2 completely characterizes the power of our test. For binary graphs, the sparsity may increase or decrease the power, depending on the model setup. To see this, we consider two scenarios below. (a) Suppose $\mu_{1,ij}=\tau a_{n}$ for a constant $\tau>0$ and $\mu_{2,ij}=a_{n}$ with $a_{n}=o(1)$, $1\leq i<j\leq n$. By (8), it follows that $\lambda_{n}=mna_{n}\frac{(\tau-1)^{2}}{4(\tau+1)}[1+o(1)].$ For fixed sample size $m$ and number of nodes $n$, the power of our test statistic declines as the networks get sparser (smaller $a_{n}$). (b) Suppose $\mu_{1,ij}=a_{n}+b_{n}$ and $\mu_{2,ij}=a_{n}-b_{n}$ with $a_{n}=o(1)$ and $b_{n}=o(1)$, $1\leq i<j\leq n$. Then by equation (8), one has $\lambda_{n}=\frac{mn}{2}\frac{b_{n}^{2}}{a_{n}}[1+o(1)].$ The ratio $\frac{b_{n}^{2}}{a_{n}}$ controls the power, if the sample size $m$ and the number of nodes $n$ are held constant. Model 1: $a_{n}=\frac{n^{0.6}}{n}$, $b_{n}=\sqrt{a_{n}}$, then $\frac{b_{n}^{2}}{a_{n}}=1$ and $\lambda_{n}=\frac{mn}{2}[1+o(1)]$.
Model 2: $a_{n}=\frac{n^{0.7}}{n}$, $b_{n}=\sqrt{\frac{a_{n}}{\log n}}$, then $\frac{b_{n}^{2}}{a_{n}}=\frac{1}{\log n}$ and $\lambda_{n}=\frac{mn}{2\log n}[1+o(1)]$. Clearly, Model 1 is sparser than Model 2 but our test achieves higher power under Model 1 than Model 2 based on Theorem 2.2. ###### Remark 2. Recall that the $T_{fro}$ test in [16] is defined as $T_{fro}=\frac{\sum_{1\leq i<j\leq n}T_{ij}}{t_{n}},$ where $t_{n}^{2}=\sum_{1\leq i<j\leq n}\sum_{k\leq\frac{m}{2}}(A_{G_{k},ij}+A_{H_{k},ij})\sum_{k>\frac{m}{2}}(A_{G_{k},ij}+A_{H_{k},ij}).$ The difference between $\mathcal{T}_{n}$ and $T_{fro}$ lies in the difference between $s_{n}$ and $t_{n}$. Note that $s_{n}^{2}$ in $\mathcal{T}_{n}$ is proved to be a consistent estimator of the variance of $\sum_{1\leq i<j\leq n}T_{ij}$ under $H_{0}$ for a broad class of distributions $Q$, while $t_{n}^{2}$ may not be a consistent estimator of the variance. To see this, $\tau_{n}^{2}=\mathbb{E}[t_{n}^{2}]=\sum_{1\leq i<j\leq n}m^{2}\mu_{ij}^{2}.$ By the proof of Theorem 2.1, we have $s_{n}^{2}=(1+o_{p}(1))\sigma_{n}^{2}$ and $\sigma_{n}^{2}=\mathbb{E}[s_{n}^{2}]=\sum_{1\leq i<j\leq n}m^{2}\sigma_{ij}^{4}.$ Here $\sigma_{n}^{2}$ is the variance of $\sum_{1\leq i<j\leq n}T_{ij}$. For any distribution $Q$ with $\tau_{n}^{2}\neq(1+o(1))\sigma_{n}^{2}$, the test statistic $T_{fro}$ will fail, since $t_{n}^{2}$ is not a consistent estimator of $\sigma_{n}^{2}$ in this case. For example, let $Q_{ij}(\mu_{ij})$ be the beta distribution $Beta(\alpha,\beta)$. Then for $1\leq i<j\leq n$, $\mu_{ij}=\frac{\alpha}{\alpha+\beta},\ \ \sigma_{ij}^{2}=\frac{\alpha\beta}{(\alpha+\beta)^{2}(\alpha+\beta+1)},\ \ \frac{\sigma_{ij}^{2}}{\mu_{ij}}=\frac{\beta}{(\alpha+\beta)(\alpha+\beta+1)}.$ When $\alpha$ and $\beta$ are fixed constants, $\tau_{n}^{2}\neq(1+o(1))\sigma_{n}^{2}$. In this case, $T_{fro}$ doesn’t work. For $Q_{ij}(\mu_{ij})=Bern(\mu_{ij})$, $\sigma_{ij}^{2}=\mu_{ij}(1-\mu_{ij})$. 
If $\mu_{ij}\geq 1-\epsilon$ with $\epsilon\in(0,1)$, then $\tau_{n}^{2}\neq(1+o(1))\sigma_{n}^{2}$. In this case, $T_{fro}$ will fail. In contrast, when $\mu_{ij}=o(1)$, $\sigma_{ij}^{2}=(1+o(1))\mu_{ij}$ and the test $T_{fro}$ may be valid. The simulation results in Section 3 are consistent with the above findings.

## 3 Simulation and Real Data

In this section, we evaluate the finite-sample performance of the proposed test $\mathcal{T}_{n}$ and compare it with the test $T_{fro}$ in [16] by simulation. Besides, we apply our test method to a real data set.

### 3.1 Simulation

Throughout this simulation, we set the nominal type I error $\alpha$ to be 0.05. The empirical size and power are obtained by repeating the experiment 1000 times. We take $n=10,30,50,100,200,300$ and $m=2,4,14$. In the first simulation, we generate weights from the beta distribution. Specifically, we generate $G_{1},\dots,G_{m}\sim\mathcal{G}_{1}=(V,Q,\mu_{1})$ with $Q_{ij}(\mu_{1,ij})=Beta(a,b)$ for $1\leq i<j\leq n/2$ or $n/2<i<j\leq n$ and $Q_{ij}(\mu_{1,ij})=Beta(c,d)$ for $1\leq i\leq n/2<j\leq n$. Denote the graph model as $\mathcal{G}_{1}(Beta(a,b),Beta(c,d))$. For a fixed constant $\epsilon$ ($\epsilon\geq 0$), we generate the second sample $H_{1},\dots,H_{m}\sim\mathcal{G}_{2}=(V,Q,\mu_{2})$, with $Q_{ij}(\mu_{2,ij})=Beta(a+\epsilon,b+\epsilon)$ for $1\leq i<j\leq n/2$ or $n/2<i<j\leq n$ and $Q_{ij}(\mu_{2,ij})=Beta(c+\epsilon,d+\epsilon)$ for $1\leq i\leq n/2<j\leq n$. Denote the graph model as $\mathcal{G}_{2}(Beta(a+\epsilon,b+\epsilon),Beta(c+\epsilon,d+\epsilon))$.
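As a concrete illustration, the two-block beta sampling scheme just described can be sketched as follows (our own code with hypothetical function names, not the authors' implementation):

```python
import numpy as np

def sample_beta_graphs(m, n, a, b, c, d, rng):
    """Draw m symmetric weighted graphs from G(Beta(a,b), Beta(c,d)):
    pairs with both nodes in the same half of V get Beta(a,b) weights,
    pairs straddling the two halves get Beta(c,d) weights."""
    in_first_half = np.arange(n) < n // 2
    same_block = np.equal.outer(in_first_half, in_first_half)
    graphs = np.zeros((m, n, n))
    for k in range(m):
        weights = np.where(same_block,
                           rng.beta(a, b, size=(n, n)),
                           rng.beta(c, d, size=(n, n)))
        upper = np.triu(weights, k=1)   # keep entries with i < j only
        graphs[k] = upper + upper.T     # symmetrize; diagonal stays 0
    return graphs

# e.g. the first sample of Table 1 with epsilon = 0:
rng = np.random.default_rng(0)
G = sample_beta_graphs(4, 50, 2, 3, 1, 3, rng)
```

Each draw is symmetrized over the upper triangle so that $A_{ij}=A_{ji}$ and $A_{ii}=0$, matching the definition in Section 2.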
Note that the constant $\epsilon$ ($\epsilon\geq 0$) characterizes the difference between $\mu_{2,ij}$ and $\mu_{1,ij}$ with fixed $a,b,c,d$, since for $Beta(a+\epsilon,b+\epsilon)$ the mean is equal to $\mu(\epsilon)=\frac{a+\epsilon}{a+b+2\epsilon}.$ For fixed $a,b$, $\mu(\epsilon)$ is monotone in $\epsilon$ $(\epsilon\geq 0)$, so a larger $\epsilon$ implies a larger difference in the means, and consequently the power of the test $\mathcal{T}_{n}$ is expected to increase. We take $a=2,b=3,c=1,d=3$ and $a=9,b=3,c=3,d=2$ to yield right-skewed and left-skewed beta distributions, respectively. The simulation results are summarized in Table 1 and Table 2, where the sizes (powers) are reported in the column(s) with $\epsilon=0$ ($\epsilon>0$). The sizes and powers of $T_{fro}$ are all zero, which indicates that this test (designed for binary graphs) does not apply to weighted graphs (see Remark 2 for an explanation). In contrast, all the sizes of the proposed test $\mathcal{T}_{n}$ are close to 0.05, which implies that the null distribution is valid even for small networks (small $n$) and small sample sizes (small $m$). Moreover, the power approaches one, which shows the consistency of the proposed test $\mathcal{T}_{n}$. The parameters $\epsilon$, $n$, and $m$ have a significant influence on the power. As any one of them increases with the rest held constant, the power of $\mathcal{T}_{n}$ gets higher.
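To make the comparison concrete, here is a minimal sketch (our own code) of how the statistic $\mathcal{T}_{n}$ in (2) can be evaluated; a fixed first-half/second-half split stands in for the random partition of the samples:

```python
import numpy as np

def two_sample_stat(G, H):
    """Test statistic of equation (2) for samples G, H of shape (m, n, n);
    m is assumed even here for simplicity."""
    m, n, _ = G.shape
    iu = np.triu_indices(n, k=1)          # index pairs with i < j
    D = G - H                             # A_{G_k} - A_{H_k} per graph
    first = D[: m // 2].sum(axis=0)[iu]   # sum over k <= m/2
    second = D[m // 2:].sum(axis=0)[iu]   # sum over k > m/2
    T_ij = first * second
    s_n = np.sqrt(np.sum(T_ij ** 2))      # s_n^2 = sum of T_ij^2
    return np.sum(T_ij) / s_n

# under H0 the two samples share the same edge-weight distribution:
rng = np.random.default_rng(1)
G = rng.beta(2, 3, size=(4, 60, 60))
H = rng.beta(2, 3, size=(4, 60, 60))
stat = two_sample_stat(G, H)
```

At level $\alpha=0.05$, $H_{0}$ is rejected when the returned value exceeds 1.96 in absolute value.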
Table 1: Simulated size and power with graphs generated from $\mathcal{G}_{1}(Beta(2,3),Beta(1,3))$ and $\mathcal{G}_{2}(Beta(2+\epsilon,3+\epsilon),Beta(1+\epsilon,3+\epsilon))$.

$n(m=2)$ | Method | $\epsilon=0$ (size) | $\epsilon=0.3$ (power) | $\epsilon=0.5$ (power) | $\epsilon=0.7$ (power)
---|---|---|---|---|---
10 | | 0.000 | 0.000 | 0.000 | 0.000
30 | | 0.000 | 0.000 | 0.000 | 0.000
50 | | 0.000 | 0.000 | 0.000 | 0.000
100 | $T_{fro}$ | 0.000 | 0.000 | 0.000 | 0.000
200 | | 0.000 | 0.000 | 0.000 | 0.000
300 | | 0.000 | 0.000 | 0.000 | 0.000
10 | | 0.046 | 0.052 | 0.060 | 0.069
30 | | 0.043 | 0.065 | 0.085 | 0.123
50 | | 0.049 | 0.079 | 0.093 | 0.203
100 | | 0.052 | 0.089 | 0.248 | 0.604
200 | $\mathcal{T}_{n}$ | 0.045 | 0.199 | 0.754 | 0.995
300 | | 0.055 | 0.383 | 0.968 | 1.000

$n(m=4)$ | Method | $\epsilon=0$ (size) | $\epsilon=0.3$ (power) | $\epsilon=0.5$ (power) | $\epsilon=0.7$ (power)
---|---|---|---|---|---
10 | | 0.000 | 0.000 | 0.000 | 0.000
30 | | 0.000 | 0.000 | 0.000 | 0.000
50 | | 0.000 | 0.000 | 0.000 | 0.000
100 | $T_{fro}$ | 0.000 | 0.000 | 0.000 | 0.000
200 | | 0.000 | 0.000 | 0.000 | 0.000
300 | | 0.000 | 0.000 | 0.000 | 0.000
10 | | 0.049 | 0.058 | 0.065 | 0.069
30 | | 0.048 | 0.067 | 0.134 | 0.237
50 | | 0.048 | 0.088 | 0.251 | 0.576
100 | $\mathcal{T}_{n}$ | 0.056 | 0.209 | 0.757 | 0.992
200 | | 0.047 | 0.594 | 0.999 | 1.000
300 | | 0.058 | 0.907 | 1.000 | 1.000

$n(m=14)$ | Method | $\epsilon=0$ (size) | $\epsilon=0.3$ (power) | $\epsilon=0.5$ (power) | $\epsilon=0.7$ (power)
---|---|---|---|---|---
10 | | 0.000 | 0.000 | 0.000 | 0.000
30 | | 0.000 | 0.000 | 0.000 | 0.000
50 | | 0.000 | 0.000 | 0.000 | 0.000
100 | $T_{fro}$ | 0.000 | 0.000 | 0.000 | 0.006
200 | | 0.000 | 0.000 | 0.538 | 1.000
300 | | 0.000 | 0.000 | 1.000 | 1.000
10 | | 0.044 | 0.056 | 0.115 | 0.244
30 | | 0.048 | 0.193 | 0.707 | 0.980
50 | | 0.047 | 0.460 | 0.989 | 1.000
100 | $\mathcal{T}_{n}$ | 0.044 | 0.960 | 1.000 | 1.000
200 | | 0.041 | 1.000 | 1.000 | 1.000
300 | | 0.044 | 1.000 | 1.000 | 1.000

Table 2: Simulated
size and power with graphs generated from $\mathcal{G}_{1}(Beta(9,3),Beta(3,2))$ and $\mathcal{G}_{2}(Beta(9+\epsilon,3+\epsilon),Beta(3+\epsilon,2+\epsilon))$.

$n(m=2)$ | Method | $\epsilon=0$ (size) | $\epsilon=0.5$ (power) | $\epsilon=0.7$ (power) | $\epsilon=0.9$ (power)
---|---|---|---|---|---
10 | | 0.000 | 0.000 | 0.000 | 0.000
30 | | 0.000 | 0.000 | 0.000 | 0.000
50 | | 0.000 | 0.000 | 0.000 | 0.000
100 | $T_{fro}$ | 0.000 | 0.000 | 0.000 | 0.000
200 | | 0.000 | 0.000 | 0.000 | 0.000
300 | | 0.000 | 0.000 | 0.000 | 0.000
10 | | 0.043 | 0.051 | 0.055 | 0.057
30 | | 0.050 | 0.056 | 0.061 | 0.065
50 | | 0.047 | 0.062 | 0.076 | 0.077
100 | $\mathcal{T}_{n}$ | 0.050 | 0.058 | 0.093 | 0.205
200 | | 0.049 | 0.136 | 0.304 | 0.611
300 | | 0.057 | 0.231 | 0.591 | 0.926

$n(m=4)$ | Method | $\epsilon=0$ (size) | $\epsilon=0.5$ (power) | $\epsilon=0.7$ (power) | $\epsilon=0.9$ (power)
---|---|---|---|---|---
10 | | 0.000 | 0.000 | 0.000 | 0.000
30 | | 0.000 | 0.000 | 0.000 | 0.000
50 | | 0.000 | 0.000 | 0.000 | 0.000
100 | $T_{fro}$ | 0.000 | 0.000 | 0.000 | 0.000
200 | | 0.000 | 0.000 | 0.000 | 0.000
300 | | 0.000 | 0.000 | 0.000 | 0.000
10 | | 0.047 | 0.053 | 0.056 | 0.057
30 | | 0.045 | 0.059 | 0.065 | 0.075
50 | | 0.055 | 0.063 | 0.110 | 0.181
100 | $\mathcal{T}_{n}$ | 0.053 | 0.113 | 0.294 | 0.602
200 | | 0.041 | 0.357 | 0.834 | 0.989
300 | | 0.046 | 0.674 | 0.988 | 1.000

$n(m=14)$ | Method | $\epsilon=0$ (size) | $\epsilon=0.3$ (power) | $\epsilon=0.5$ (power) | $\epsilon=0.7$ (power)
---|---|---|---|---|---
10 | | 0.000 | 0.000 | 0.000 | 0.000
30 | | 0.000 | 0.000 | 0.000 | 0.000
50 | | 0.000 | 0.000 | 0.000 | 0.000
100 | $T_{fro}$ | 0.000 | 0.000 | 0.000 | 0.000
200 | | 0.000 | 0.000 | 0.000 | 0.000
300 | | 0.000 | 0.000 | 0.000 | 0.000
10 | | 0.044 | 0.060 | 0.064 | 0.077
30 | | 0.048 | 0.070 | 0.136 | 0.290
50 | | 0.051 | 0.082 | 0.286 | 0.667
100 | $\mathcal{T}_{n}$ | 0.049 | 0.187 | 0.784 | 0.998
200 | | 0.051 | 0.578 | 1.000 | 1.000
300 | | 0.045 | 0.902 | 1.000 | 1.000

In the second simulation, we generate
binary graphs to compare the performance of $\mathcal{T}_{n}$ and $T_{fro}$. Specifically, we generate $G_{1},\dots,G_{m}\sim\mathcal{G}_{1}=(V,Q,\mu_{1})$ with $Q_{ij}(\mu_{1,ij})=Bern(a)$ for $1\leq i<j\leq n/2$ or $n/2<i<j\leq n$ and $Q_{ij}(\mu_{1,ij})=Bern(b)$ for $1\leq i\leq n/2<j\leq n$. Denote the graph model as $\mathcal{G}_{1}(Bern(a),Bern(b))$. For a constant $\epsilon$ ($\epsilon\geq 0$), the second sample $H_{1},\dots,H_{m}$ are generated from $\mathcal{G}_{2}=(V,Q,\mu_{2})$, with $Q_{ij}(\mu_{2,ij})=Bern(a+\epsilon)$ for $1\leq i<j\leq n/2$ or $n/2<i<j\leq n$ and $Q_{ij}(\mu_{2,ij})=Bern(b+\epsilon)$ for $1\leq i\leq n/2<j\leq n$. Denote the graph model as $\mathcal{G}_{2}(Bern(a+\epsilon),Bern(b+\epsilon))$. We take $a=0.05,b=0.01$, $a=0.1,b=0.05$, and $a=0.5,b=0.4$ to yield sparse, moderately sparse, and dense networks, respectively. Table 3, Table 4, and Table 5 summarize the simulation results, where the sizes (powers) are reported in the column(s) with $\epsilon=0$ ($\epsilon>0$). When $a=0.05,b=0.01$ and $a=0.1,b=0.05$, the networks are so sparse that the denominators of $\mathcal{T}_{n}$ and $T_{fro}$ may be zero for small $n$. Consequently, $\mathcal{T}_{n}$ and $T_{fro}$ may not be available, and we denote them as NA in Table 3 and Table 4. The sizes of $\mathcal{T}_{n}$ fluctuate around 0.05, and the pattern of the powers resembles that in Table 1 and Table 2. Since the networks are binary, the test $T_{fro}$ is applicable. For denser networks, this test seems to be quite conservative, since almost all the sizes are less than 0.04 in Table 4 and almost all the sizes are zero in Table 5 (see Remark 2 for an explanation). This fact undermines its power significantly. In contrast, the proposed test $\mathcal{T}_{n}$ has satisfactory power and outperforms $T_{fro}$ substantially. For the sparser networks in Table 3, the sizes of $T_{fro}$ are closer to 0.05 and its powers are close to those of $\mathcal{T}_{n}$.
This simulation shows the advantage of the proposed test $\mathcal{T}_{n}$ over $T_{fro}$ under the setting of binary graphs.

Table 3: Simulated size and power with graphs generated from $\mathcal{G}_{1}(Bern(0.05),Bern(0.01))$ and $\mathcal{G}_{2}(Bern(0.05+\epsilon),Bern(0.01+\epsilon))$.

$n(m=2)$ | Method | $\epsilon=0$ (size) | $\epsilon=0.03$ (power) | $\epsilon=0.05$ (power) | $\epsilon=0.07$ (power)
---|---|---|---|---|---
10 | | NA | NA | NA | NA
30 | | NA | NA | NA | NA
50 | | NA | 0.038 | 0.094 | 0.236
100 | $T_{fro}$ | 0.043 | 0.088 | 0.295 | 0.754
200 | | 0.044 | 0.244 | 0.880 | 1.000
300 | | 0.031 | 0.503 | 0.997 | 1.000
10 | | NA | NA | NA | NA
30 | | NA | NA | NA | NA
50 | | NA | 0.050 | 0.114 | 0.280
100 | $\mathcal{T}_{n}$ | 0.052 | 0.099 | 0.343 | 0.791
200 | | 0.048 | 0.283 | 0.903 | 1.000
300 | | 0.044 | 0.548 | 0.998 | 1.000

$n(m=4)$ | Method | $\epsilon=0$ (size) | $\epsilon=0.03$ (power) | $\epsilon=0.05$ (power) | $\epsilon=0.07$ (power)
---|---|---|---|---|---
10 | | NA | NA | NA | NA
30 | | 0.032 | 0.058 | 0.132 | 0.342
50 | | 0.043 | 0.078 | 0.314 | 0.750
100 | $T_{fro}$ | 0.035 | 0.231 | 0.868 | 1.000
200 | | 0.049 | 0.740 | 1.000 | 1.000
300 | | 0.041 | 0.976 | 1.000 | 1.000
10 | | NA | NA | NA | NA
30 | | 0.043 | 0.064 | 0.143 | 0.368
50 | | 0.047 | 0.094 | 0.344 | 0.767
100 | $\mathcal{T}_{n}$ | 0.050 | 0.259 | 0.885 | 1.000
200 | | 0.058 | 0.773 | 1.000 | 1.000
300 | | 0.051 | 0.986 | 1.000 | 1.000

$n(m=14)$ | Method | $\epsilon=0$ (size) | $\epsilon=0.03$ (power) | $\epsilon=0.05$ (power) | $\epsilon=0.07$ (power)
---|---|---|---|---|---
10 | | NA | 0.069 | 0.189 | 0.392
30 | | 0.038 | 0.263 | 0.872 | 1.000
50 | | 0.049 | 0.613 | 1.000 | 1.000
100 | $T_{fro}$ | 0.040 | 0.995 | 1.000 | 1.000
200 | | 0.048 | 1.000 | 1.000 | 1.000
300 | | 0.038 | 1.000 | 1.000 | 1.000
10 | | NA | 0.060 | 0.146 | 0.335
30 | | 0.048 | 0.277 | 0.858 | 0.998
50 | | 0.055 | 0.622 | 1.000 | 1.000
100 | $\mathcal{T}_{n}$ | 0.055 | 0.996 | 1.000 | 1.000
200 | | 0.060 | 1.000 | 1.000 | 1.000
300 | | 0.049 | 1.000 | 1.000 | 1.000

Table 4: Simulated size and power with graphs generated from $\mathcal{G}_{1}(Bern(0.1),Bern(0.05))$ and $\mathcal{G}_{2}(Bern(0.1+\epsilon),Bern(0.05+\epsilon))$.

$n(m=2)$ | Method | $\epsilon=0$ (size) | $\epsilon=0.03$ (power) | $\epsilon=0.05$ (power) | $\epsilon=0.07$ (power)
---|---|---|---|---|---
10 | | NA | NA | NA | NA
30 | | 0.025 | 0.031 | 0.036 | 0.054
50 | | 0.031 | 0.033 | 0.039 | 0.101
100 | | 0.030 | 0.038 | 0.123 | 0.311
200 | $T_{fro}$ | 0.034 | 0.085 | 0.401 | 0.891
300 | | 0.046 | 0.157 | 0.756 | 0.998
10 | | NA | NA | NA | NA
30 | | 0.043 | 0.055 | 0.060 | 0.083
50 | | 0.049 | 0.050 | 0.069 | 0.131
100 | | 0.053 | 0.063 | 0.172 | 0.403
200 | $\mathcal{T}_{n}$ | 0.049 | 0.123 | 0.483 | 0.933
300 | | 0.046 | 0.204 | 0.818 | 0.998

$n(m=4)$ | Method | $\epsilon=0$ (size) | $\epsilon=0.03$ (power) | $\epsilon=0.05$ (power) | $\epsilon=0.07$ (power)
---|---|---|---|---|---
10 | | NA | NA | NA | NA
30 | | 0.034 | 0.040 | 0.056 | 0.148
50 | | 0.030 | 0.037 | 0.120 | 0.328
100 | | 0.040 | 0.074 | 0.402 | 0.895
200 | $T_{fro}$ | 0.031 | 0.277 | 0.962 | 1.000
300 | | 0.036 | 0.531 | 1.000 | 1.000
10 | | NA | NA | NA | NA
30 | | 0.044 | 0.056 | 0.072 | 0.185
50 | | 0.049 | 0.053 | 0.165 | 0.392
100 | | 0.053 | 0.114 | 0.486 | 0.922
200 | $\mathcal{T}_{n}$ | 0.049 | 0.339 | 0.976 | 1.000
300 | | 0.048 | 0.606 | 1.000 | 1.000

$n(m=14)$ | Method | $\epsilon=0$ (size) | $\epsilon=0.03$ (power) | $\epsilon=0.05$ (power) | $\epsilon=0.07$ (power)
---|---|---|---|---|---
10 | | 0.039 | 0.044 | 0.075 | 0.156
30 | | 0.031 | 0.091 | 0.432 | 0.886
50 | | 0.035 | 0.199 | 0.874 | 1.000
100 | $T_{fro}$ | 0.022 | 0.717 | 1.000 | 1.000
200 | | 0.023 | 0.995 | 1.000 | 1.000
300 | | 0.026 | 1.000 | 1.000 | 1.000
10 | | 0.046 | 0.049 | 0.092 | 0.159
30 | | 0.055 | 0.120 | 0.480 | 0.889
50 | | 0.048 | 0.257 | 0.889 | 1.000
100 | $\mathcal{T}_{n}$ | 0.046 | 0.765 | 1.000 | 1.000
200 | | 0.040 | 0.997 | 1.000 | 1.000
300 | | 0.046 | 1.000 | 1.000 | 1.000

Table 5: Simulated size and power with graphs
generated from $\mathcal{G}_{1}(Bern(0.5),Bern(0.4))$ and $\mathcal{G}_{2}(Bern(0.5+\epsilon),Bern(0.4+\epsilon))$.

$n(m=2)$ | Method | $\epsilon=0$ (size) | $\epsilon=0.07$ (power) | $\epsilon=0.10$ (power) | $\epsilon=0.12$ (power)
---|---|---|---|---|---
10 | | 0.000 | 0.000 | 0.000 | 0.000
30 | | 0.000 | 0.000 | 0.000 | 0.002
50 | | 0.000 | 0.000 | 0.002 | 0.001
100 | $T_{fro}$ | 0.000 | 0.000 | 0.005 | 0.022
200 | | 0.000 | 0.006 | 0.131 | 0.523
300 | | 0.001 | 0.032 | 0.615 | 0.983
10 | | 0.041 | 0.043 | 0.047 | 0.051
30 | | 0.048 | 0.054 | 0.077 | 0.101
50 | | 0.054 | 0.065 | 0.104 | 0.182
100 | $\mathcal{T}_{n}$ | 0.051 | 0.109 | 0.290 | 0.551
200 | | 0.050 | 0.298 | 0.797 | 0.981
300 | | 0.042 | 0.550 | 0.989 | 1.000

$n(m=4)$ | Method | $\epsilon=0$ (size) | $\epsilon=0.07$ (power) | $\epsilon=0.10$ (power) | $\epsilon=0.12$ (power)
---|---|---|---|---|---
10 | | 0.001 | 0.001 | 0.001 | 0.002
30 | | 0.001 | 0.001 | 0.002 | 0.002
50 | | 0.001 | 0.002 | 0.003 | 0.030
100 | $T_{fro}$ | 0.000 | 0.007 | 0.139 | 0.502
200 | | 0.000 | 0.162 | 0.973 | 1.000
300 | | 0.000 | 0.625 | 1.000 | 1.000
10 | | 0.044 | 0.063 | 0.064 | 0.066
30 | | 0.045 | 0.066 | 0.124 | 0.232
50 | | 0.058 | 0.112 | 0.255 | 0.518
100 | $\mathcal{T}_{n}$ | 0.053 | 0.269 | 0.793 | 0.965
200 | | 0.047 | 0.787 | 1.000 | 1.000
300 | | 0.044 | 0.983 | 1.000 | 1.000

$n(m=14)$ | Method | $\epsilon=0$ (size) | $\epsilon=0.07$ (power) | $\epsilon=0.10$ (power) | $\epsilon=0.12$ (power)
---|---|---|---|---|---
10 | | 0.000 | 0.001 | 0.002 | 0.014
30 | | 0.000 | 0.013 | 0.174 | 0.559
50 | | 0.000 | 0.083 | 0.812 | 0.997
100 | $T_{fro}$ | 0.000 | 0.834 | 1.000 | 1.000
200 | | 0.032 | 0.998 | 1.000 | 1.000
300 | | 0.032 | 1.000 | 1.000 | 1.000
10 | | 0.052 | 0.075 | 0.120 | 0.189
30 | | 0.040 | 0.273 | 0.741 | 0.955
50 | | 0.040 | 0.648 | 0.990 | 1.000
100 | $\mathcal{T}_{n}$ | 0.055 | 0.997 | 1.000 | 1.000
200 | | 0.049 | 0.998 | 1.000 | 1.000
300 | | 0.050 | 1.000 | 1.000 | 1.000

### 3.2 Real Data Application

In this section, we consider applying the
proposed method to a real-life data set that can be downloaded from a public database (http://fcon_1000.projects.nitrc.org/indi/retro/cobre.html). This data set contains raw anatomical and functional scans from 146 subjects (72 patients with schizophrenia and 74 healthy controls). After a series of preprocessing steps by [6], only 124 subjects (70 patients with schizophrenia and 54 healthy controls) were kept. In their study, 263 brain regions of interest were chosen as nodes, and the connectivity between nodes was measured by edge weights that represent the Fisher-transformed correlation between the fMRI time series of the nodes after passing to ranks [17]. As the healthy group (54 networks) and the patient group (70 networks) have different sample sizes, our test statistic is not directly applicable. We adopt the following two methods to solve this issue. The first is to randomly sample 16 networks from the 54 networks in the healthy group and unite them with the 54 networks of the healthy group to yield 70 samples. Then the healthy group and the patient group have equal sample sizes, and we can calculate the test statistics. This process is repeated 100 times, and the five-number summary of the test statistics is presented in Table 6. The second method is to randomly sample 54 networks from the schizophrenia patient group and then calculate the test statistics based on the sampled 54 networks and the 54 networks in the healthy group. The random sampling procedure is repeated 100 times, and the five-number summary of the test statistics is presented in Table 7. All the calculated test statistics $\mathcal{T}_{n}$ and $T_{fro}$ are much larger than 1.96, which leads to the same conclusion that the patient population significantly differs from the healthy population at significance level $\alpha=0.05$. Moreover, the proposed test statistic $\mathcal{T}_{n}$ is almost twice $T_{fro}$, implying that our test is more powerful in detecting the population difference.
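The first resampling scheme can be sketched as follows (our own code on toy-sized arrays; `equalize_groups` is a hypothetical helper, not part of the authors' pipeline):

```python
import numpy as np

def equalize_groups(small, large, rng):
    """Append a random subsample (drawn without replacement) of the smaller
    group to itself so that both groups reach the size of the larger one."""
    deficit = len(large) - len(small)
    extra = rng.choice(len(small), size=deficit, replace=False)
    return np.concatenate([small, small[extra]]), large

# toy shapes standing in for the 54 healthy and 70 patient networks
rng = np.random.default_rng(0)
healthy = np.zeros((54, 5, 5))
patient = np.ones((70, 5, 5))
healthy_aug, patient = equalize_groups(healthy, patient, rng)
```

Repeating this draw gives the 100 replications summarized in Table 6.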
The computation of the proposed test statistic requires randomly splitting the two samples into two groups. In order to evaluate the effect of this random splitting on the proposed test, we randomly sample 54 networks from the patient group, denoted as $G_{k},(1\leq k\leq 54)$. Let $H_{k},(1\leq k\leq 54)$ be the 54 networks in the healthy group. Consider $G_{k},H_{k},(1\leq k\leq 54)$ as the two samples. We randomly partition the two samples into two groups, denoted as $\tilde{G}_{k},\tilde{H}_{k},(1\leq k\leq 27)$ and $\tilde{G}_{k},\tilde{H}_{k},(27<k\leq 54)$, and then compute the test statistics $\mathcal{T}_{n}$ and $T_{fro}$. This procedure is repeated 100 times, and the five-number summary of the 100 calculated test statistics is recorded in Table 8. The same conclusion can be drawn from all 100 statistics, implying that the random splitting does not significantly affect the proposed method. Additionally, to compare the performance of $\mathcal{T}_{n}$ and $T_{fro}$ in the binary graph setting, we artificially transform the weighted graphs into binary graphs by thresholding as follows. For a given threshold $\tau$, if the absolute value of an edge weight is greater than $\tau$, the edge is transformed to 1, and to 0 otherwise. Smaller (larger) $\tau$ yields denser (sparser) networks. We take the threshold values $\tau\in\\{0.01,0.03,0.1,0.3,0.5,0.7,0.9\\}$. For each $\tau$, we calculate the test statistics as in Table 6, and the results are summarized in Table 9. The threshold $\tau$ dramatically affects the conclusion. For $0.1\leq\tau\leq 0.7$, both $\mathcal{T}_{n}$ and $T_{fro}$ reject the null hypothesis $H_{0}$ that the two network populations are the same, with $\mathcal{T}_{n}$ more powerful than $T_{fro}$ in most cases. However, for $\tau=0.01$ (denser networks), $\mathcal{T}_{n}$ rejects the null hypothesis $H_{0}$, while $T_{fro}$ fails to reject $H_{0}$.
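The thresholding step just described can be written as a one-line rule (our own sketch; `binarize` is a hypothetical helper):

```python
import numpy as np

def binarize(weighted, tau):
    """Map a weighted adjacency matrix to a binary one: edges whose
    absolute weight exceeds tau become 1, all other entries become 0."""
    binary = (np.abs(weighted) > tau).astype(int)
    np.fill_diagonal(binary, 0)   # keep the diagonal at 0
    return binary

# small worked example: weights 0.2 and -0.4 survive tau = 0.1, 0.05 does not
A = np.array([[0.0, 0.2, -0.4],
              [0.2, 0.0, 0.05],
              [-0.4, 0.05, 0.0]])
B = binarize(A, 0.1)
# B is [[0, 1, 1], [1, 0, 0], [1, 0, 0]]
```

Applying this map with each $\tau$ before computing the statistics reproduces the setup of Table 9.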
This analysis highlights the importance of developing testing procedures for weighted networks, as artificially transforming weighted networks into unweighted networks may lead to contradictory conclusions.

Table 6: Repeated sampling of 16 networks from the 54 networks in the healthy group.

Method | Min. | 1st Qu. | Median | 3rd Qu. | Max.
---|---|---|---|---|---
$\mathcal{T}_{n}$ | 43.52 | 50.33 | 53.62 | 57.05 | 62.39
$T_{fro}$ | 22.27 | 25.93 | 27.43 | 29.45 | 32.34

Table 7: Repeated sampling of 54 networks from the 70 networks in the patient group.

Method | Min. | 1st Qu. | Median | 3rd Qu. | Max.
---|---|---|---|---|---
$\mathcal{T}_{n}$ | 27.81 | 35.45 | 37.55 | 39.83 | 47.57
$T_{fro}$ | 12.33 | 15.09 | 15.88 | 16.95 | 20.53

Table 8: Random splitting of the two samples $G_{k},H_{k},1\leq k\leq 54$.

Method | Min. | 1st Qu. | Median | 3rd Qu. | Max.
---|---|---|---|---|---
$\mathcal{T}_{n}$ | 28.25 | 35.28 | 38.19 | 40.95 | 46.33
$T_{fro}$ | 12.30 | 15.12 | 16.12 | 17.18 | 19.21

Table 9: Transforming weighted graphs to unweighted graphs with different thresholds $\tau$.

Method | Min. | 1st Qu. | Median | 3rd Qu. | Max.
---|---|---|---|---|---
$\mathcal{T}_{n}$ ($\tau=0.01$) | 9.70 | 16.40 | 19.61 | 22.10 | 31.88
$T_{fro}$ ($\tau=0.01$) | 0.42 | 0.70 | 0.83 | 0.93 | 1.31
$\mathcal{T}_{n}$ ($\tau=0.03$) | 11.91 | 17.32 | 21.20 | 24.90 | 33.23
$T_{fro}$ ($\tau=0.03$) | 1.54 | 2.17 | 2.62 | 3.05 | 3.95
$\mathcal{T}_{n}$ ($\tau=0.1$) | 11.86 | 19.66 | 23.01 | 25.78 | 38.03
$T_{fro}$ ($\tau=0.1$) | 4.88 | 7.81 | 9.01 | 9.93 | 14.10
$\mathcal{T}_{n}$ ($\tau=0.3$) | 14.06 | 26.37 | 29.64 | 32.42 | 41.33
$T_{fro}$ ($\tau=0.3$) | 11.81 | 20.90 | 23.20 | 25.34 | 32.16
$\mathcal{T}_{n}$ ($\tau=0.5$) | 11.86 | 16.57 | 18.19 | 19.21 | 24.43
$T_{fro}$ ($\tau=0.5$) | 10.14 | 13.81 | 15.10 | 16.14 | 20.14
$\mathcal{T}_{n}$ ($\tau=0.7$) | 5.60 | 6.93 | 7.32 | 7.93 | 9.37
$T_{fro}$ ($\tau=0.7$) | 6.55 | 8.02 | 8.53 | 9.08 | 10.60
$\mathcal{T}_{n}$ ($\tau=0.9$) | 0.47 | 1.79 | 2.19 | 2.52 | 3.36
$T_{fro}$ ($\tau=0.9$) | 0.81 | 2.54 | 2.89 | 3.55 | 5.02

## 4 Proof of Main Results

Proof of Theorem 2.1: We employ the Lindeberg Central Limit Theorem to prove Theorem 2.1. First, note that under $H_{0}$, we have $\displaystyle\sigma_{n}^{2}$ $\displaystyle=$ $\displaystyle\mathbb{E}[s_{n}^{2}]=\sum_{1\leq i<j\leq n}\mathbb{E}[T_{ij}^{2}]$ $\displaystyle=$ $\displaystyle\sum_{1\leq i<j\leq n}\mathbb{E}\Big{[}\sum_{k\leq\frac{m}{2}}(A_{G_{k},ij}-A_{H_{k},ij})\Big{]}^{2}\mathbb{E}\Big{[}\sum_{k>\frac{m}{2}}(A_{G_{k},ij}-A_{H_{k},ij})\Big{]}^{2}$ $\displaystyle=$ $\displaystyle\sum_{1\leq i<j\leq n}\sum_{k\leq\frac{m}{2}}\mathbb{E}(A_{G_{k},ij}-A_{H_{k},ij})^{2}\sum_{k>\frac{m}{2}}\mathbb{E}(A_{G_{k},ij}-A_{H_{k},ij})^{2}$ $\displaystyle=$ $\displaystyle\sum_{1\leq i<j\leq n}m^{2}\sigma_{ij}^{4}.$ Next, we verify the Lindeberg condition.
By the Cauchy-Schwarz inequality and Markov's inequality, it follows that for any $\epsilon>0$, $\displaystyle\mathbb{E}T_{ij}^{2}I[|T_{ij}|>\epsilon\sigma_{n}]$ $\displaystyle\leq$ $\displaystyle\sqrt{\mathbb{E}T_{ij}^{4}\mathbb{P}[|T_{ij}|>\epsilon\sigma_{n}]}\leq\sqrt{\mathbb{E}T_{ij}^{4}\frac{\mathbb{E}T_{ij}^{4}}{\epsilon^{4}\sigma_{n}^{4}}}=\frac{\mathbb{E}T_{ij}^{4}}{\epsilon^{2}\sigma_{n}^{2}}.$ Notice that $\displaystyle\mathbb{E}T_{ij}^{4}$ $\displaystyle=$ $\displaystyle\sum_{k_{1},k_{2},k_{3},k_{4}\leq\frac{m}{2}}\mathbb{E}(A_{G_{k_{1}},ij}-A_{H_{k_{1}},ij})(A_{G_{k_{2}},ij}-A_{H_{k_{2}},ij})(A_{G_{k_{3}},ij}-A_{H_{k_{3}},ij})(A_{G_{k_{4}},ij}-A_{H_{k_{4}},ij})$ $\displaystyle\times$ $\displaystyle\sum_{k_{1},k_{2},k_{3},k_{4}>\frac{m}{2}}\mathbb{E}(A_{G_{k_{1}},ij}-A_{H_{k_{1}},ij})(A_{G_{k_{2}},ij}-A_{H_{k_{2}},ij})(A_{G_{k_{3}},ij}-A_{H_{k_{3}},ij})(A_{G_{k_{4}},ij}-A_{H_{k_{4}},ij}).$ Since for distinct $k_{1}$, $k_{2}$, $k_{3},k_{4}\in\\{1,2,\dots,m\\}$, $\mathbb{E}[(A_{G_{k_{1}},ij}-A_{H_{k_{1}},ij})^{3}(A_{G_{k_{2}},ij}-A_{H_{k_{2}},ij})]=0,$ $\mathbb{E}[(A_{G_{k_{1}},ij}-A_{H_{k_{1}},ij})^{2}(A_{G_{k_{2}},ij}-A_{H_{k_{2}},ij})(A_{G_{k_{3}},ij}-A_{H_{k_{3}},ij})]=0,$ $\mathbb{E}[(A_{G_{k_{1}},ij}-A_{H_{k_{1}},ij})(A_{G_{k_{2}},ij}-A_{H_{k_{2}},ij})(A_{G_{k_{3}},ij}-A_{H_{k_{3}},ij})(A_{G_{k_{4}},ij}-A_{H_{k_{4}},ij})]=0.$ As a result, it follows that $\displaystyle\sum_{k_{1},k_{2},k_{3},k_{4}\leq\frac{m}{2}}\mathbb{E}(A_{G_{k_{1}},ij}-A_{H_{k_{1}},ij})(A_{G_{k_{2}},ij}-A_{H_{k_{2}},ij})(A_{G_{k_{3}},ij}-A_{H_{k_{3}},ij})(A_{G_{k_{4}},ij}-A_{H_{k_{4}},ij})$ $\displaystyle=$ $\displaystyle\sum_{k_{1}=k_{2}\neq k_{3}=k_{4}\leq\frac{m}{2}}\mathbb{E}(A_{G_{k_{1}},ij}-A_{H_{k_{1}},ij})^{2}(A_{G_{k_{3}},ij}-A_{H_{k_{3}},ij})^{2}$ $\displaystyle+\sum_{k_{1}=k_{3}\neq k_{2}=k_{4}\leq\frac{m}{2}}\mathbb{E}(A_{G_{k_{1}},ij}-A_{H_{k_{1}},ij})^{2}(A_{G_{k_{2}},ij}-A_{H_{k_{2}},ij})^{2}$ $\displaystyle+\sum_{k_{1}=k_{4}\neq
k_{2}=k_{3}\leq\frac{m}{2}}\mathbb{E}(A_{G_{k_{1}},ij}-A_{H_{k_{1}},ij})^{2}(A_{G_{k_{2}},ij}-A_{H_{k_{2}},ij})^{2}$ $\displaystyle+\sum_{k_{1}=k_{4}=k_{2}=k_{3}\leq\frac{m}{2}}\mathbb{E}(A_{G_{k_{1}},ij}-A_{H_{k_{1}},ij})^{4}$ $\displaystyle=$ $\displaystyle\sum_{k_{1}=k_{4}=k_{2}=k_{3}\leq\frac{m}{2}}\mathbb{E}(A_{G_{k_{1}},ij}-A_{H_{k_{1}},ij})^{4}+3\sum_{k_{1}\neq k_{2}\leq\frac{m}{2}}\mathbb{E}(A_{G_{k_{1}},ij}-A_{H_{k_{1}},ij})^{2}(A_{G_{k_{2}},ij}-A_{H_{k_{2}},ij})^{2}$ Then we have $\displaystyle\mathbb{E}T_{ij}^{4}$ $\displaystyle=$ $\displaystyle\Big{[}\sum_{k\leq\frac{m}{2}}\mathbb{E}(A_{G_{k},ij}-A_{H_{k},ij})^{4}+3\sum_{k_{1}\neq k_{2}\leq\frac{m}{2}}\mathbb{E}(A_{G_{k_{1}},ij}-A_{H_{k_{1}},ij})^{2}(A_{G_{k_{2}},ij}-A_{H_{k_{2}},ij})^{2}\Big{]}$ $\displaystyle\times$ $\displaystyle\Big{[}\sum_{k>\frac{m}{2}}\mathbb{E}(A_{G_{k},ij}-A_{H_{k},ij})^{4}+3\sum_{k_{1}\neq k_{2}>\frac{m}{2}}\mathbb{E}(A_{G_{k_{1}},ij}-A_{H_{k_{1}},ij})^{2}(A_{G_{k_{2}},ij}-A_{H_{k_{2}},ij})^{2}\Big{]}$ $\displaystyle=$ $\displaystyle\Big{(}m^{2}\sigma_{ij}^{4}+3m\eta_{ij}\Big{)}^{2}=m^{4}\sigma_{ij}^{8}+6m^{3}\sigma_{ij}^{4}\eta_{ij}+9m^{2}\eta_{ij}^{2},$ where $\eta_{ij}=\mathbb{E}(A_{G_{k},ij}-A_{H_{k},ij})^{4}$. Hence, by condition (3), it follows that $\displaystyle\frac{1}{\sigma_{n}^{2}}\sum_{1\leq i<j\leq n}\mathbb{E}T_{ij}^{2}I[|T_{ij}|>\epsilon\sigma_{n}]\leq\frac{1}{\epsilon^{2}\sigma_{n}^{4}}\sum_{1\leq i<j\leq n}\mathbb{E}T_{ij}^{4}$ $\displaystyle\leq$ $\displaystyle\frac{m^{4}\sum_{1\leq i<j\leq n}\sigma_{ij}^{8}+6m^{3}\sum_{1\leq i<j\leq n}\sigma_{ij}^{4}\eta_{ij}+9m^{2}\sum_{1\leq i<j\leq n}\eta_{ij}^{2}}{\epsilon^{2}m^{4}(\sum_{1\leq i<j\leq n}\sigma_{ij}^{4})^{2}}\rightarrow 0.$ By the Lindeberg Central Limit Theorem, we conclude that $\sigma_{n}^{-1}\sum_{1\leq i<j\leq n}T_{ij}$ converges in distribution to $N(0,1)$. Finally, we prove $\mathcal{T}_{n}$ converges in distribution to $N(0,1)$ by proving that $s_{n}^{2}=(1+o_{p}(1))\sigma_{n}^{2}$. 
Note that for $i<j$ and $k<l$, $\mathbb{E}\big{[}(T_{ij}^{2}-m^{2}\sigma_{ij}^{4})(T_{kl}^{2}-m^{2}\sigma_{kl}^{4})\big{]}=0,\quad\text{if }\\{i,j\\}\neq\\{k,l\\}.$ Consequently, one has $\displaystyle\mathbb{E}\big{[}s_{n}^{2}-\sigma_{n}^{2}\big{]}^{2}$ $\displaystyle=$ $\displaystyle\mathbb{E}\big{[}\sum_{1\leq i<j\leq n}(T_{ij}^{2}-m^{2}\sigma_{ij}^{4})\big{]}^{2}=\sum_{1\leq i<j\leq n}\mathbb{E}\big{[}(T_{ij}^{2}-m^{2}\sigma_{ij}^{4})\big{]}^{2}=O(n^{2}m^{4}).$ Hence, $s_{n}^{2}=\sigma_{n}^{2}+O_{p}(nm^{2})$. If $nm^{2}=o(\sigma_{n}^{2})$, then $s_{n}^{2}=(1+o_{p}(1))\sigma_{n}^{2}$, which implies $\mathcal{T}_{n}$ converges in distribution to $N(0,1)$ by Slutsky’s theorem. ∎ Proof of Theorem 2.2: Under $H_{1}$, we have $\displaystyle\Lambda_{ij}=\mathbb{E}[T_{ij}]$ $\displaystyle=$ $\displaystyle\mathbb{E}\Big{[}\sum_{k\leq\frac{m}{2}}(A_{G_{k},ij}-A_{H_{k},ij})\Big{]}\mathbb{E}\Big{[}\sum_{k>\frac{m}{2}}(A_{G_{k},ij}-A_{H_{k},ij})\Big{]}$ $\displaystyle=$ $\displaystyle\sum_{k\leq\frac{m}{2}}\mathbb{E}(A_{G_{k},ij}-A_{H_{k},ij})\sum_{k>\frac{m}{2}}\mathbb{E}(A_{G_{k},ij}-A_{H_{k},ij})$ $\displaystyle=$ $\displaystyle\frac{m^{2}}{4}(\mu_{1,ij}-\mu_{2,ij})^{2},$ and $\displaystyle V_{ij}=\mathbb{E}(A_{G_{k},ij}-A_{H_{k},ij})^{2}$ $\displaystyle=$ $\displaystyle\mathbb{E}(A_{G_{k},ij}-\mu_{1,ij}+\mu_{1,ij}-\mu_{2,ij}+\mu_{2,ij}-A_{H_{k},ij})^{2}$ $\displaystyle=$ $\displaystyle\mathbb{E}(A_{G_{k},ij}-\mu_{1,ij})^{2}+\mathbb{E}(\mu_{1,ij}-\mu_{2,ij})^{2}+\mathbb{E}(\mu_{2,ij}-A_{H_{k},ij})^{2}$ $\displaystyle=$ $\displaystyle\sigma_{1,ij}^{2}+\sigma_{2,ij}^{2}+(\mu_{1,ij}-\mu_{2,ij})^{2}.$ Then $\sigma_{1,n}^{2}=\mathbb{E}s_{n}^{2}=\frac{m^{2}}{4}\sum_{1\leq i<j\leq n}V_{ij}^{2},$ and $\displaystyle\mathbb{E}\big{[}s_{n}^{2}-\sigma_{1,n}^{2}\big{]}^{2}$ $\displaystyle=$ $\displaystyle\mathbb{E}\big{[}\sum_{1\leq i<j\leq n}(T_{ij}^{2}-\frac{m^{2}}{4}V_{ij}^{2})\big{]}^{2}=\sum_{1\leq i<j\leq
n}\mathbb{E}\big{[}(T_{ij}^{2}-\frac{m^{2}}{4}V_{ij}^{2})\big{]}^{2}=O(n^{2}m^{4}).$ Hence, $s_{n}^{2}=\sigma_{1,n}^{2}(1+o_{P}(1))$ if $nm=o(m^{4}\sum_{1\leq i<j\leq n}V_{ij}^{2})$. Note that $\mathbb{E}\big{[}\sum_{1\leq i<j\leq n}(T_{ij}-\Lambda_{ij})\big{]}^{2}=\sum_{1\leq i<j\leq n}\mathbb{E}(T_{ij}-\Lambda_{ij})^{2}\leq\sigma_{1,n}^{2}.$ As a result, under $H_{1}$, the test statistic is decomposed as $\displaystyle\mathcal{T}_{n}$ $\displaystyle=$ $\displaystyle\frac{\sum_{1\leq i<j\leq n}(T_{ij}-\Lambda_{ij})}{s_{n}}+\frac{\sum_{1\leq i<j\leq n}\Lambda_{ij}}{s_{n}}$ $\displaystyle=$ $\displaystyle\frac{\sigma_{1,n}}{s_{n}}\Big{(}\frac{\sum_{1\leq i<j\leq n}(T_{ij}-\Lambda_{ij})}{\sigma_{1,n}}+\frac{\sum_{1\leq i<j\leq n}\Lambda_{ij}}{\sigma_{1,n}}\Big{)}$ $\displaystyle=$ $\displaystyle\frac{\sum_{1\leq i<j\leq n}\Lambda_{ij}}{\sigma_{1,n}}+O_{P}(1).$ ∎

## Acknowledgement

The authors are grateful to the Editor, the Associate Editor and Referees for helpful comments that significantly improved this manuscript.

## References

* [1] Abbe, E. (2018). Community Detection and Stochastic Block Models: Recent Developments. Journal of Machine Learning Research, 18: 1-86.
* [2] Agarwal, S., Branson, K. and Belongie, S. (2006). Higher order learning with graphs. Proceedings of the International Conference on Machine Learning, 17-24.
* [3] Aicher, C. (2014). The Weighted Stochastic Block Model. Applied Mathematics Graduate Theses and Dissertations, 50.
* [4] Aicher, C., Jacobs, A. Z. and Clauset, A. (2015). Learning Latent Block Structure in Weighted Networks. Journal of Complex Networks, 3, 221-248.
* [5] Amini, A., Chen, A. and Bickel, P. (2013). Pseudo-likelihood methods for community detection in large sparse networks. Annals of Statistics, 41(4), 2097-2122.
* [6] Arroyo Relión, J. D., Kessler, D., Levina, E. and Taylor, S. F. (2019). Network classification with applications to brain connectomics. The Annals of Applied Statistics, 13, 1648-1677.
* [7] Bickel, P. J. and Sarkar, P. (2016). Hypothesis testing for automated community detection in networks. Journal of the Royal Statistical Society, Series B, 78, 253-273.
* [8] Ginestet, C. E., Li, J., Balachandran, P., Rosenberg, S. and Kolaczyk, E. D. (2017). Hypothesis testing for network data in functional neuroimaging. The Annals of Applied Statistics, 11(2): 725-750.
* [9] Chen, J. and Yuan, B. (2006). Detecting functional modules in the yeast protein-protein interaction network. Bioinformatics, 22(18): 2283-2290.
* [10] Costa, L. F., Oliveira, O. N. Jr, Travieso, G., Rodrigues, F. A., Villas Boas, P. R., Antiqueira, L., Viana, M. P. and Correa Rocha, L. E. (2011). Analyzing and modeling real-world phenomena with complex networks: a survey of applications. Advances in Physics, 60(3): 329-412.
* [11] Chen, L., Zhou, J. and Lin, L. Hypothesis testing for populations of networks. https://arxiv.org/pdf/1911.03783.pdf
* [12] Fortunato, S. (2010). Community detection in graphs. Physics Reports, 486(3): 75-174.
* [13] Gao, C. and Lafferty, J. (2017). Testing for global network structure using small subgraph statistics. https://arxiv.org/pdf/1710.00862.pdf
* [14] Garcia, J., Ashourvan, A., Muldoon, S., Vettel, J. and Bassett, D. (2018). Applications of community detection techniques to brain graphs: Algorithmic considerations and implications for neural function. Proceedings of the IEEE, 106: 846-867.
* [15] Ghoshdastidar, D. et al. (2019). Two-sample hypothesis testing for inhomogeneous random graphs. arXiv:1707.00833.
* [16] Ghoshdastidar, D. and von Luxburg, U. (2018). Practical Methods for Graph Two-Sample Testing. NIPS 2018, 3019-3028, Montréal, Canada.
* [17] Arroyo Relión, J. D. (2019). graphclass: Network classification. R package version 1.1.
* [18] Lei, J. (2016). A goodness-of-fit test for stochastic block models. The Annals of Statistics, 44(1): 401-424.
* [19] Ma’ayan, A. (2011). Introduction to Network Analysis in Systems Biology. Science Signaling, 4(190): tr5.
* [20] Newman, M. E. J. (2004).
Coauthorship networks and patterns of scientific collaboration. PNAS, 101: 5200-5205.
* [21] Stam, C. J., Jones, B. F., Nolte, G., Breakspear, M. and Scheltens, P. (2007). Small-world networks and functional connectivity in Alzheimer’s disease. Cerebral Cortex, 17(1): 92-99.
* [22] Tang, M., Athreya, A., Sussman, D. L., Lyzinski, V. and Priebe, C. E. (2017). A nonparametric two-sample hypothesis testing problem for random graphs. Bernoulli, 23(3): 1599-1630.
* [23] Thomas, A. C. and Blitzstein, J. K. (2011). Valued ties tell fewer lies: Why not to dichotomize network edges with thresholds. arXiv:1101.0788.
* [24] Yuan, M. and Nan, Y. (2020). Test dense subgraphs in sparse uniform hypergraph. Communications in Statistics - Theory and Methods, DOI:10.1080/03610926.2020.1723637.
# Analog dual to a 2+1-dimensional holographic superconductor

Neven Bilić<EMAIL_ADDRESS>Departamento de Física, Universidade Federal do Espírito Santo (UFES) Av. Fernando Ferrari s/n CEP 29.075-910, Vitória, ES, Brazil Division of Theoretical Physics, Rudjer Bošković Institute, 10002 Zagreb, Croatia

Júlio C. Fabris<EMAIL_ADDRESS>Departamento de Física, Universidade Federal do Espírito Santo (UFES) Av. Fernando Ferrari s/n CEP 29.075-910, Vitória, ES, Brazil

###### Abstract

We study an analog hydrodynamic model that mimics a 3+1-dimensional AdS planar black hole spacetime dual to a 2+1-dimensional superconductor. We demonstrate that the AdS4 bulk and its holographic dual could be realized in nature in an analog gravity model based on fluid dynamics. In particular, we mimic the metric of an $O_{2}$ holographic superconductor and calculate the entanglement entropy of a conveniently designed subsystem at the boundary of the analog AdS4 bulk.

## 1 Introduction

A pseudo-Riemannian geometry of spacetime can be mimicked by fluid dynamics in Minkowski spacetime. The basic idea is the emergence of an effective metric $G_{\mu\nu}=a[g_{\mu\nu}-(1-c_{\rm s}^{2})u_{\mu}u_{\nu}],$ (1) which describes the effective geometry for acoustic perturbations propagating in a potential flow of the fluid, with $u_{\mu}\propto\partial_{\mu}\theta$. The quantity $c_{\rm s}$ is the adiabatic speed of sound, the conformal factor $a$ is related to the equation of state of the fluid, and the background spacetime metric $g_{\mu\nu}$ is usually assumed to be Minkowski. The metric of the form (1) has been exploited in various contexts, including emergent gravity [1, 2], scalar theory of gravity [3], Einstein-aether gravity [4], acoustic geometry [5, 6, 7, 8] and Euclidean gravity [9, 10, 11].
The work presented here is motivated by the recent development of the anti-de Sitter/conformal field theory (AdS/CFT) dual theory of the 2+1-dimensional superconductor [12, 13, 14, 15, 16, 17, 18, 19, 20, 21] (for a review and additional references see [22]). The AdS/CFT duality in these models is based on a correspondence between a gravitational theory and the dynamics of a quantum field theory on the boundary of an asymptotically anti-de Sitter (AdS) spacetime. The gravity side can be well described by classical general relativity, while the dual field theory involves strongly interacting dynamics. This correspondence is often referred to as “holography”, since a higher-dimensional gravity system is described by a lower-dimensional field theory without gravity, which resembles optical holography. A particularly important work in this context is the minimal model of a holographic superconductor by Bobev et al. [19], with an Abelian gauge field embedded in a truncation of four-dimensional maximal gauged super-gravity. It is also worth mentioning the work on $d$-wave superconductivity by Benini et al. [16], in which interesting physical phenomena such as the formation of Fermi arcs are demonstrated. The AdS4 spacetime as a solution to Einstein’s equations cannot actually exist in nature due to instability problems. However, it can inspire configurations in which the underlying general gravitational structure can be studied through analog models. The aim of this paper is to demonstrate that AdS4 and its holographic dual could be realized in nature in an analog gravity model based on the hydrodynamics of a physical fluid. In particular, we will mimic the bulk metric of the minimal model of a holographic superconductor, consisting of the metric, a charged scalar with a non-trivial potential and an Abelian gauge field embedded in the truncation of four-dimensional maximal gauged super-gravity [19].
This model was recently studied in the context of holographic entanglement entropy [20, 21]. The entanglement entropy is an important tool for keeping track of symmetry breaking and phase transitions in strongly coupled systems. In the context of black-hole thermodynamics, the entropy of a black hole is proportional to the area of the horizon, in the same way as the entanglement entropy is proportional to the area of the boundary between two subsystems of a quantum system. Our first task is to derive an analog acoustic geometry which mimics a $d+1$-dimensional asymptotically AdS geometry with a general planar black hole (BH). Furthermore, we will apply this to a 3+1-dimensional model and calculate the entanglement entropy for a particular geometry obtained as a solution related to the holographic $O_{2}$ superconductor. We are specifically interested in the $O_{2}$ type because of its pronounced first-order phase transition at finite temperature. It is important to stress that analog gravity in general is concerned with curved geometry per se, without referring to sources of the gravitational field as in general relativity. More specifically, the fluid analog mimics the geometry only and says nothing about the source, such as matter and other fields. There are no equations analogous to Einstein’s which, as in general relativity, would involve the curvature tensor and the stress tensor. However, even without Einstein’s equations, the analog BH horizon entropy is realized via quantum entanglement of phonons. This is why the model studied in this paper, and analog gravity in general, can teach us something about black holes and related phenomena. We divide the remainder of the paper into three sections and two appendices. We start with section 2, in which we derive an analog metric for a $d+1$-dimensional AdS planar BH of the form relevant for a holographic description of the superconductor. In the next section, Sec.
3, we apply our formalism to a 3+1-dimensional bulk related to the minimal model of the 2+1-dimensional holographic superconductor. For a particular geometry related to the $O_{2}$ superconductor we calculate the entanglement entropy. Concluding remarks are given in section 4. In appendix A we outline a derivation of the relativistic acoustic metric, and in appendix B we derive the effective speed of sound in a fluid with an external pressure.

## 2 Analog planar black hole

Geometric structures in the form of a planar BH may have interesting applications in condensed matter physics [23]. In this section we construct a model of an analog planar BH in a general asymptotically AdSd+1 spacetime. A similar model for $d=4$ was discussed in detail by Hossenfelder [24, 25] and recently in [26, 27]. We will discuss in more detail the case $d=3$, which is of particular interest for the 2+1-dimensional superconductor [19, 20, 22]. In our approach we will consider a nonisentropic fluid flow which yields the desired analog metric. We start from a general form of the AdS planar BH metric in an arbitrary number of space-like dimensions $d$: $\displaystyle ds^{2}=\frac{\ell^{2}}{z^{2}}\left[e^{-\chi(z)}\gamma(z)dt^{2}-\gamma(z)^{-1}dz^{2}-d\mbox{\boldmath$x$}^{2}\right],$ (2) where $\ell$ is the curvature radius of AdSd+1 and $d\mbox{\boldmath$x$}^{2}=\sum_{i=1}^{d-1}{\rm d}x^{i}{\rm d}x^{i}.$ (3) For $d=3$ we will relate the functions $\chi$ and $\gamma$ to the truncated Lagrangian of the four-dimensional $\mathcal{N}=8$ super-gravity [28] studied by Bobev et al. [19] in the context of holographic superconductivity. In order to have an asymptotically AdS geometry for $z\rightarrow 0$ we can always rescale the time coordinate so that, without loss of generality, we may assume $\gamma(0)=1,\quad\chi(0)=0.$ (4) Next, the dimensionless functions $\chi$ and $\gamma$ can be thought of as functions of the dimensionless variable $z/z_{\rm h}$, where $z=z_{\rm h}$ is the location of the horizon.
In other words, $\gamma(z_{\rm h})=0$ (5) and $\gamma$ has no zeros on the interval $0<z<z_{\rm h}$. Then, the horizon temperature is $T=\left.\frac{e^{-\chi/2}}{4\pi}\frac{d\gamma}{dz}\right|_{z=z_{\rm h}}.$ (6) This temperature, measured in some chosen fixed units, e.g., in units of $\ell^{-1}$, is ambiguous because the geometry (2) is invariant under the rescaling $t\rightarrow\alpha t,\quad z\rightarrow\alpha z,\quad x^{i}\rightarrow\alpha x^{i},\quad z_{\rm h}\rightarrow\alpha z_{\rm h}.$ (7) Thus, the metric (2) has a rescaled horizon $z_{\rm h}/\alpha$ with the corresponding rescaled horizon temperature $\bar{T}=\left.\frac{e^{-\chi/2}}{4\pi}\frac{d\gamma(\alpha z)}{dz}\right|_{z=z_{\rm h}/\alpha}=\alpha T.$ (8) However, the temperature $T$ expressed in units of $1/z_{\rm h}$ is unique, i.e., the quantity $Tz_{\rm h}$ is invariant under the rescaling (7). Therefore, in the following we will express the temperature and other dimensionful physical quantities in units of some power of $z_{\rm h}$. Now we seek a fluid analog model which would mimic the induced metric of the form (2). The basic idea is to find a suitable coordinate transformation $t\to\bar{t}$, $z\to\bar{z}$ such that the new metric takes the form of the relativistic acoustic metric (92) derived in appendix A, with $g_{\mu\nu}$ replaced by the Minkowski metric $\eta_{\mu\nu}$: $G_{\mu\nu}=\frac{n}{m^{2}c_{\rm s}w}[\eta_{\mu\nu}-(1-c_{\rm s}^{2})u_{\mu}u_{\nu}]\,.$ (9) Here $n$ and $w$ denote the particle number density and the specific enthalpy, respectively, and an arbitrary mass scale $m$ is introduced to make $G_{\mu\nu}$ dimensionless. The specific enthalpy is defined as usual: $w=\frac{p+\rho}{n},$ (10) where $p$ and $\rho$ denote the pressure and energy density, respectively.
The quantity $c_{\rm s}$ is the so-called “adiabatic” speed of sound defined by $c_{\rm s}^{2}\equiv\left.\frac{\partial p}{\partial\rho}\right|_{s}=\frac{n}{w}\left(\left.\frac{\partial n}{\partial w}\right|_{s}\right)^{-1},$ (11) where $|_{s}$ denotes that the specific entropy, i.e., entropy per particle $s=S/N$, is kept fixed. The second equality in (11) follows from the thermodynamic law $dw=Tds+\frac{1}{n}dp.$ (12) Following Hossenfelder [24] we transform the metric (2) by making use of a coordinate transformation $t=\bar{t}+h(z),\quad z=z(\bar{z}),$ (13) where the functions $z(\bar{z})$ and $h(z)$ are determined by the requirement that the transformed metric takes the form (9). By simple algebraic manipulations the line element (2) can be recast into a convenient form $\displaystyle ds^{2}$ $\displaystyle=$ $\displaystyle\frac{\ell^{2}}{z^{2}}\biggl{\\{}d\bar{t}^{2}-d\bar{z}^{2}-d\mbox{\boldmath$x$}^{2}-(1-\tilde{\gamma})d\bar{t}^{2}$ (14) $\displaystyle+2(1-\tilde{\gamma})^{1/2}(c_{\rm s}^{2}-\tilde{\gamma})^{1/2}d\bar{t}d\bar{z}-(c_{\rm s}^{2}-\tilde{\gamma})d\bar{z}^{2}]\biggr{\\}},$ where we have set $\frac{dz}{d\bar{z}}=e^{\chi/2}c_{\rm s},$ (15) $\frac{dh}{dz}=\frac{(1-\tilde{\gamma})^{1/2}(c_{\rm s}^{2}-\tilde{\gamma})^{1/2}}{c_{\rm s}e^{\chi/2}\tilde{\gamma}},$ (16) and an abbreviation $\tilde{\gamma}=e^{-\chi}\gamma.$ (17) From (4) and (5) it follows $0\leq\tilde{\gamma}\leq 1;\quad\tilde{\gamma}(z_{\rm h})=0,\quad\tilde{\gamma}(0)=1.$ (18) Comparing (14) with the acoustic metric (9) we identify $c_{\rm s}$ as the speed of sound and the non-vanishing components of the velocity vector $u_{\bar{t}}$ and $u_{\bar{z}}$ in transformed coordinates as $u_{\bar{t}}=\frac{(1-\tilde{\gamma})^{1/2}}{(1-c_{\rm s}^{2})^{1/2}},\quad u_{\bar{z}}=-\frac{(c_{\rm s}^{2}-\tilde{\gamma})^{1/2}}{(1-c_{\rm s}^{2})^{1/2}}.$ (19) These equations imply $\tilde{\gamma}\leq c_{\rm s}^{2}\leq 1.$ (20) Next, by applying the potential-flow equation (see appendix A) 
$wu_{\mu}=\partial_{\mu}\theta$ (21) we derive closed expressions for $w$, $n$, and $c_{\rm s}$ in terms of the variable $z$. Since the metric is stationary, the velocity potential must be of the form $\theta=m\bar{t}+g(z),$ (22) where $m$ is an arbitrary mass parameter which we can identify with the mass scale that appears in (9) and $g(z)$ is a function of $\bar{z}$ through $z$. Then, from (21) and (22) it follows $w=\frac{m}{u_{\bar{t}}}=m\frac{(1-c_{\rm s}^{2})^{1/2}}{(1-\tilde{\gamma})^{1/2}},$ (23) and the function $g$ in (22) must satisfy $\frac{dg}{dz}=wu_{\bar{z}}\left(\frac{dz}{d\bar{z}}\right)^{-1}=-\frac{m}{c_{\rm s}e^{\chi/2}}\frac{(c_{\rm s}^{2}-\tilde{\gamma})^{1/2}}{(1-\tilde{\gamma})^{1/2}}.$ (24) The particle number density can be obtained from the condition that the conformal factor in (2) must be equal to that of (9), i.e., we require $\frac{n}{m^{2}c_{\rm s}w}=\frac{\ell^{2}}{z^{2}}.$ (25) As $m$ is arbitrary it is natural to choose $m=\frac{1}{\ell},$ (26) so using this and (23) we find $n=\frac{c_{\rm s}}{\ell z^{2}}\frac{(1-c_{\rm s}^{2})^{1/2}}{(1-\tilde{\gamma})^{1/2}}.$ (27) In this way, both $w$ and $n$ are expressed as functions of $z$ and $c_{\rm s}$. However, $c_{\rm s}$ is not independent since by the definition (11) $c_{\rm s}^{2}=\left.\frac{n}{w}\frac{\partial w}{\partial n}\right|_{s}=\frac{n}{w}\frac{dw}{dz}\left(\frac{dn}{dz}\right)^{-1}.$ (28) Using (28) with (23) and (27) we obtain a differential equation for $c_{\rm s}$ $2c_{\rm s}\frac{dc_{\rm s}}{dz}-c_{\rm s}^{2}\left[\frac{2}{z}+\frac{1}{2}\frac{d}{dz}\ln(1-\tilde{\gamma})\right]+\frac{1}{2}\frac{d}{dz}\ln(1-\tilde{\gamma})=0,$ (29) with solution $c_{\rm s}^{2}=1-\frac{z^{2}}{z_{\rm h}^{2}}(1-\tilde{\gamma})^{1/2}\left(K+2z_{h}^{2}\int_{z}^{z_{\rm h}}\frac{dz}{z^{3}}(1-\tilde{\gamma})^{-1/2}\right).$ (30) The integration constant must satisfy the constraint $1\geq K\geq 0$ as a consequence of the condition $0\leq c_{\rm s}^{2}\leq 1$. 
Dimensionless physical quantities such as $\ell w$, $c_{s}$ and the components of the fluid velocity field are functions of $z/z_{\rm h}$ and are invariant under the rescaling (7). Plugging (30) into (23) and (25) one obtains $w$ and $n$ as functions of $z$. Note that explicit functional forms of $z(\bar{z})$, $h(z)$, and $g(z)$ can be obtained by making use of (30) and integrating (15), (16), and (24), respectively. However, the precise forms of these functions are not really needed for obtaining a closed expression for the analog metric. It is of particular interest to discuss the above solution in the asymptotic limit, i.e., in the limit $z\rightarrow 0$. Motivated by the asymptotic behavior of the $O_{2}$ holographic superconductor with $d=3$ (see section 3.1), $\displaystyle\tilde{\gamma}(z)=1+c_{3}\left(\frac{z}{z_{\rm h}}\right)^{3}+\mathcal{O}(z^{4}),$ (31) in the following we assume for general $d$ $\displaystyle\tilde{\gamma}(z)=1+c_{d}\left(\frac{z}{z_{\rm h}}\right)^{d}+\mathcal{O}(z^{d+1}),$ (32) with $c_{d}<0$. Then, it may easily be shown that in the limit $z\rightarrow 0$ the sound speed squared tends to a constant, $c_{\rm s}^{2}\rightarrow d/(d+4)<1$. However, in this limit $\tilde{\gamma}\rightarrow 1$, so from equations (19) it follows that the limit $z\rightarrow 0$ cannot be reached, since we must have $c_{\rm s}^{2}\geq\tilde{\gamma}$. This constrains how close to the boundary our analog metric is applicable. Our analog model breaks down at the point $z=z_{\rm min}$ which is the maximal root of the equation $c_{\rm s}^{2}=\tilde{\gamma}$.
For the minimal value of $K$, $K_{\rm min}=0$, this equation reads $(1-\tilde{\gamma}(z))^{1/2}-2z^{2}\int_{z}^{z_{\rm h}}\frac{dy}{y^{3}}(1-\tilde{\gamma}(y))^{-1/2}=0.$ (33) In the case of a Schwarzschild-AdS planar black hole, i.e., for $\chi=0$ and $\gamma=1-(z/z_{\rm h})^{d}$, the integration in (29) can easily be performed, yielding $c_{\rm s}^{2}=\frac{d}{d+4}+\left(\frac{4}{d+4}-K\right)\left(\frac{z}{z_{\rm h}}\right)^{d/2+2}.$ (34) The condition $c_{\rm s}^{2}-\gamma=0$ now reads $\left(\frac{z}{z_{\rm h}}\right)^{d}+\left(\frac{4}{d+4}-K\right)\left(\frac{z}{z_{\rm h}}\right)^{d/2+2}-\frac{4}{d+4}=0.$ (35) For example, for $d=4$, the root $z_{\rm min}$ is given by $\frac{z_{\rm min}}{z_{\rm h}}=(3-2K)^{-1/4}\geq 3^{-1/4},$ (36) and for $d=3$ we find numerically $\frac{z_{\rm min}}{z_{\rm h}}=0.727,\quad{\rm for}\quad K=K_{\rm min}=0.$ (37) Hence, the simple prescription for an analog model is only valid from the point $z_{\rm min}$ up to the location of the horizon at $z_{\rm h}$. In principle we could place the boundary of our model at $z_{\rm min}$ and cut off the section of AdS from $z=0$ to $z_{\rm min}$, as has been done in the Randall-Sundrum model [29, 30]. However, as we aim to make a connection with the CFT at the boundary of AdS and calculate the boundary entanglement entropy at $z=0$, we would like to extend our model all the way down to the AdS boundary at $z=0$. As we demonstrate in appendix B, such an extension can be achieved by modifying the equation of state through the addition of an external pressure. For a fluid with an external pressure of the form $p_{\rm ext}=\alpha(p+\rho),$ (38) where $\alpha$ is a function of $z$, one finds the effective speed of sound $\tilde{c}_{\rm s}^{2}=\frac{c_{\rm s}^{2}-\alpha}{1+\alpha}.$ (39) Depending on the functional form of $\tilde{\gamma}$ we can choose $\alpha$ such that the quantity $\tilde{c}_{\rm s}^{2}$ satisfies condition (20) in the interval $0\leq z\leq z_{\rm h}$.
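The numbers quoted in (36) and (37) are easy to reproduce numerically, and the external-pressure construction can be checked at the same time. Below is a minimal sketch (the helper names are ours, not from the paper): the first function solves eq. (35) by bisection, and the second evaluates the effective sound speed (39) for the Schwarzschild-AdS profile $\tilde{\gamma}=1-(z/z_{\rm h})^{d}$, using the choice of $\alpha$ written out in eq. (40) below; the loop verifies $\tilde{\gamma}\leq\tilde{c}_{\rm s}^{2}\leq 1$ on the whole interval.

```python
def z_min_over_zh(d, K=0.0):
    """Largest root of eq. (35), x^d + (4/(d+4)-K) x^(d/2+2) = 4/(d+4),
    for a Schwarzschild-AdS planar BH, found by bisection on (0, 1)."""
    c = 4.0 / (d + 4) - K
    f = lambda x: x**d + c * x**(d / 2 + 2) - 4.0 / (d + 4)
    lo, hi = 1e-9, 1.0              # f(lo) < 0 <= f(hi) for 0 <= K <= 1
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) < 0 else (lo, mid)
    return 0.5 * (lo + hi)

def effective_cs2(x, d=3, K=0.0):
    """Effective sound speed squared (39) with the alpha of eq. (40),
    for the Schwarzschild-AdS profile tilde-gamma = 1 - x^d, x = z/z_h."""
    gam = 1.0 - x**d
    cs2 = d / (d + 4) + (4.0 / (d + 4) - K) * x**(d / 2 + 2)   # eq. (34)
    alpha = (d - (d + 4) * gam) / ((d + 4) * (1 + gam))        # eq. (40)
    return gam, (cs2 - alpha) / (1 + alpha)                    # eq. (39)

# d = 3 reproduces z_min/z_h = 0.727 of eq. (37);
# d = 4 matches the closed form (3 - 2K)^(-1/4) of eq. (36).
print(round(z_min_over_zh(3), 3), round(z_min_over_zh(4), 4))  # prints: 0.727 0.7598

# with the external pressure switched on, tilde-gamma <= cs2_eff <= 1 everywhere
for i in range(1, 1000):
    gam, cs2_eff = effective_cs2(i / 1000)
    assert gam <= cs2_eff <= 1.0
```

The grid check makes explicit that the external pressure removes the breakdown at $z_{\rm min}$: the inequality $\tilde{c}_{\rm s}^{2}\geq\tilde{\gamma}$, which fails for the bare sound speed below $z_{\rm min}$, now holds all the way to the boundary.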
For example, if $\tilde{\gamma}$ behaves as in (32) near $z=0$, we can choose $\alpha=\frac{d-(d+4)\tilde{\gamma}}{(d+4)(1+\tilde{\gamma})}$ (40) to obtain $\tilde{c}_{\rm s}^{2}\geq\tilde{\gamma}$ in the entire interval $0\leq z\leq z_{\rm h}$ and $\lim_{z\rightarrow 0}\tilde{c}_{\rm s}^{2}=1.$ (41)

## 3 Analog bulk for the holographic superconductor

Here we consider a concrete example of the analog metric of the form (2) for $d=3$, related to the holographic superconductor. Instead of solving the field equations, we will implement the already known solutions [19, 20, 21] in our analog setup. Based on the known results we will construct approximate analytic expressions for $\gamma$ corresponding to a chosen horizon temperature. With this we can calculate the entanglement entropy, and by comparison with the results of Refs. [20, 21] we can also find an analytic expression for $\chi(z)$. The analog geometry which we have derived in general form can be used to mimic these analytic expressions.

### 3.1 Holographic superconductor

Here we briefly review the minimal model of a holographic superconductor following Bobev et al. [19]. It is realized by an $SO(3)\times SO(3)$ invariant truncation of four-dimensional $\mathcal{N}=8$ gauged super-gravity [28]. The truncated action is $S=\frac{1}{16\pi G_{4}}\int d^{4}x\sqrt{-G}\left(-\mathcal{R}+\mathcal{L}\right),$ (42) where $\mathcal{L}$ involves two real dimensionless scalar fields $\lambda$ and $\varphi$ coupled to an Abelian gauge field $A_{\mu}$ and gravity.
The Lagrangian can be written as $\mathcal{L}=-\frac{1}{4}F_{\mu\nu}F^{\mu\nu}+2\partial_{\mu}\lambda\partial^{\mu}\lambda+\frac{\sinh^{2}\left(2\lambda\right)}{2}\left(\partial_{\mu}\varphi-\frac{g}{2}A_{\mu}\right)\left(\partial^{\mu}\varphi-\frac{g}{2}A^{\mu}\right)-\mathcal{P},$ (43) with the potential $\mathcal{P}=-g^{2}\left(6\cosh^{4}\lambda-8\cosh^{2}\lambda\sinh^{2}\lambda+\frac{3}{2}\sinh^{4}\lambda\right).$ (44) The gauge coupling $g$ sets the scale $\ell$ of AdS4 via the relation $\mathcal{P}\ell^{2}=-6$ [18], with the scalar potential evaluated at a critical point. For the critical point $\lambda=0$, related to the $SO(8)$ global symmetry [19, 28], we obtain the relation $g^{2}\ell^{2}=1$. The spacetime metric can be parameterized as $ds^{2}=\frac{\ell^{2}}{z^{2}}\left[\gamma(z)e^{-\chi(z)}dt^{2}-\left(dx_{1}^{2}+dx_{2}^{2}\right)-\frac{dz^{2}}{\gamma(z)}\right],$ (45) where the functions $\gamma$ and $\chi$ are to be determined by solving the field equations with appropriate boundary conditions. As we have noted in section 2, the value $\chi_{0}\equiv\chi(0)$ can be set to zero by rescaling the time coordinate. The field equations are derived in Ref. [19] for the gauge choice $\varphi=0$ and $A_{\mu}=(\psi(z),0,0,0)$ and solved for two types of superconductors, depending on the choice of boundary conditions, with non-trivial gauge fields and scalar condensates below some critical value of the temperature. The solutions are characterized by the vacuum expectation values of the charged operators $O_{1}$ and $O_{2}$ (see Figs. 1 and 2 in Ref. [19]). Depending on the asymptotic behavior of the field $\lambda$ we distinguish two solutions: i) $\lambda=\lambda_{1}\tilde{z}+\mathcal{O}(\tilde{z}^{3})$, corresponding to an $O_{1}$ superconductor with $O_{1}\propto\lambda_{1}$ and $O_{2}=0$, and ii) $\lambda=\lambda_{2}\tilde{z}^{2}+\mathcal{O}(\tilde{z}^{4})$, corresponding to an $O_{2}$ superconductor with $O_{2}\propto\lambda_{2}$ and $O_{1}=0$.
From here on we use the dimensionless variable $\tilde{z}=z/\ell$. As functions of temperature, the condensates $O_{1}$ and $O_{2}$ exhibit the second and first order phase transitions, respectively. The typical behavior of the condensates as functions of temperature is shown in Figs. 1 and 2 of Ref. [19]. The quantity $\rho_{\rm c}$ which was chosen to set the units in these figures appears as a coefficient in the expansion $\psi=\mu\ell-\rho_{\rm c}\ell z+\dots$ near the AdS boundary. Physically, $\mu$ and $\rho_{\rm c}$ are appropriately normalized chemical potential and charge density, respectively. From the field equations one can derive the following asymptotic expansions near $z=0$: $\lambda=\lambda_{1}\tilde{z}+\lambda_{2}\tilde{z}^{2}+\frac{\lambda_{1}}{24}\left(2\lambda_{1}^{2}-3e^{\chi_{0}}\psi_{0}^{2}\right)\tilde{z}^{3}+\mathcal{O}(\tilde{z}^{4}),$ (46) $\psi=\psi_{0}+\psi_{1}\tilde{z}+\frac{\psi_{0}}{2}\lambda_{1}^{2}\tilde{z}^{2}+\frac{\psi_{0}}{3}\lambda_{1}\lambda_{2}\tilde{z}^{3}+\mathcal{O}(\tilde{z}^{4}),$ (47) $\gamma=1+\lambda_{1}^{2}\tilde{z}^{2}+\gamma_{3}\tilde{z}^{3}+\mathcal{O}(\tilde{z}^{4}),$ (48) $\chi=\chi_{0}+\lambda_{1}^{2}\tilde{z}^{2}+\frac{8}{3}\lambda_{1}\lambda_{2}\tilde{z}^{3}+\frac{1}{4}\left(\lambda_{1}^{4}+8\lambda_{2}^{2}-e^{\chi_{0}}\lambda_{1}^{2}\psi_{0}^{2}\right)\tilde{z}^{4}+\mathcal{O}(\tilde{z}^{5}).$ (49) As we have mentioned, $\chi_{0}$ can be set to 0 and the other coefficients in the expansion are related to physical quantities as follows: $\lambda_{1}=4\ell O_{1}\,,\quad\lambda_{2}=4\ell^{2}O_{2}\,,$ (50) $\psi_{0}=\ell\mu\,,\quad\psi_{1}=-\ell\rho_{\rm c}.$ (51) For $\lambda=\chi=0$ there are no condensates and the solution is just the Reissner-Nordström (RN) AdS4 planar BH with $\gamma_{\rm RN}=1-(1+Q^{2})\frac{z^{3}}{z_{\rm RN}^{3}}+Q^{2}\frac{z^{4}}{z_{\rm RN}^{3}}$ (52) and $\psi_{\rm RN}=\frac{2Q\ell}{z_{\rm RN}}\left(1-\frac{z}{z_{\rm RN}}\right).$ (53) The charge squared $Q^{2}$ ranges between
0 and 3, where $Q^{2}=0$ corresponds to a Schwarzschild AdS4 planar BH and $Q^{2}=3$ to the maximal RN AdS4 planar BH. ### 3.2 Entanglement entropy Here we present the calculation of the holographic entanglement entropy in the analog model discussed in section 3.1. Before we proceed to do that let us first discuss basic notions related to the entanglement entropy in general. Suppose we have a quantum system with the density matrix $\rho=|\Psi\rangle\langle\Psi|$. If we divide the total system into two subsystems $A$ and $B$ we define the reduced density matrix for the subsystem $A$ by taking a partial trace over the subsystem $B$, i.e., $\rho_{A}=\mathrm{tr}_{B}\,|\Psi\rangle\langle\Psi|$. Then, the entanglement entropy defined as $\displaystyle S_{A}=-\mathrm{tr}_{A}\,\rho_{A}\log\rho_{A}$ (54) is the entropy for an observer who can access information only from the subsystem $A$ and can receive no information from $B$. The subsystem $B$ is analogous to the interior of a black hole horizon for an observer outside of the horizon. However, it is often not easy to compute the entanglement entropy, in particular in a field theory in 3+1 or higher dimensions. A convenient characterization of the entanglement entropy is available in a $d+1$-dimensional field theory. It has been shown that the leading term of the entanglement entropy can be expressed as the area law [31, 32] $S_{A}=a\frac{\mbox{Area}(\partial A)}{\ell^{d-1}}+\mbox{subleading terms},$ (55) where $\partial A$ is the boundary of $A$, $\ell$ is an ultraviolet cutoff or the minimal length in the theory, and $a$ is a constant which depends on the system. It is not accidental that this area law is of the same form as the Bekenstein-Hawking entropy of black holes in 3+1 dimensions which is proportional to the area of the event horizon, with the constants $d=3$, $a=1/4$, and $\ell$ equal to the Planck length.
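Definition (54) can be illustrated with the simplest possible example, a pair of qubits in a maximally entangled (Bell) state; tracing out subsystem $B$ leaves a maximally mixed $\rho_{A}$ with entropy $\ln 2$. A minimal numerical sketch (assuming NumPy; the example is purely illustrative and not part of the holographic setup):

```python
import numpy as np

# Bell state |Psi> = (|00> + |11>)/sqrt(2); basis index = 2*a + b for qubits A, B
psi = np.zeros(4)
psi[0] = psi[3] = 1.0 / np.sqrt(2.0)
rho = np.outer(psi, psi)                       # rho = |Psi><Psi|

# Partial trace over B: rho_A[a, a'] = sum_b rho[(a,b), (a',b)]
rho_A = rho.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)

# Entanglement entropy S_A = -tr rho_A ln rho_A, Eq. (54)
ev = np.linalg.eigvalsh(rho_A)
S_A = -sum(l * np.log(l) for l in ev if l > 1e-12)   # equals ln 2
```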
It is of particular relevance here that the entropy-area relation arises in the context of AdS/CFT duality. AdS/CFT, or gauge/gravity duality, is a correspondence between string theories in asymptotically anti-de Sitter bulk spacetimes and certain conformal field theories living on the holographic boundary [33]. According to AdS/CFT, the entanglement entropy, being basically tied to gravity in the bulk, should reflect fundamental features of the boundary gauge theory. In this regard we will study the so-called holographic entanglement entropy in 3+1 dimensions in the context of holographic superconductivity. There is a subtle difference between the usual entanglement entropy and the holographic entanglement entropy: although both obey the area law, in the case of the holographic entanglement entropy, as we will shortly demonstrate, for a fixed two-dimensional subsystem on the holographic boundary the area depends on the geometry in the bulk. In particular, we expect that the holographic entanglement entropy in our model should exhibit the phase transition discussed in the previous section, as demonstrated by the temperature dependence of the superconductor condensates [19]. The holographic entanglement entropy $S$ in a 2+1-dimensional boundary CFT for a subsystem $\mathcal{A}$ that has an arbitrary one-dimensional boundary $\partial\mathcal{A}$ is defined by the following area law [34, 35, 36] $S=\frac{{\rm Area}(\Sigma)}{4\ell_{\rm Pl}^{2}},$ (56) where $\Sigma$ is the two-dimensional static minimal surface in AdS4 with boundary $\partial\mathcal{A}$ and $\ell_{\rm Pl}$ is the Planck length. As we are dealing with an analog geometry, we will assume that there exists a minimal length, typically of the order of the atomic separation, below which the bulk description of the fluid fails. This length is referred to in the condensed matter literature as the coherence length, where the meaning of the word "coherence" is different from that in optics.
Since it describes the distance over which the wave function of a Bose-Einstein (BE) condensate tends to its bulk value when subjected to a localized perturbation, it is also referred to as the healing length [37]. In analog gravity systems, a healing length $\ell_{\rm hl}$ plays the role of the Planck length [38, 39, 40, 41, 42] and for a BE gas is typically of order $\ell_{\rm hl}\simeq 1/(mc_{\rm s})$ where $m$ is the boson mass. Hence, to calculate the entanglement entropy we use (56) with the Planck length $\ell_{\rm Pl}$ replaced by the healing length $\ell_{\rm hl}$. Furthermore, we will identify the arbitrary scale $\ell$ with $\ell_{\rm hl}$. Figure 1: Strip geometry employed to calculate the entanglement entropy. Next we apply the prescription (56) to the geometry suggested in Refs. [20, 34] illustrated in Fig. 1 and calculate the entropy $S$ as a function of the strip width $d$ for a fixed temperature. Consider the bulk metric (2) with $d=3$ and a surface $\Sigma$ defined by the equation $z-z(x)=0,$ (57) where $z(x)$ is a function of $x$ such that $\Sigma$ extends into the bulk and is bounded by the perimeter of $\mathcal{A}$ as illustrated in Fig. 1. The induced metric $\sigma_{ij}$ on $\Sigma$ defines the line element $ds_{\Sigma}^{2}=\sigma_{ij}dx^{i}dx^{j}=\frac{\ell^{2}}{z^{2}}\left[dx^{2}\left(1+\frac{{z^{\prime}}^{2}}{\gamma}\right)+dy^{2}\right].$ (58) The area of $\Sigma$ can be viewed as a functional $I[z,z^{\prime}]=-{\rm Area}(\Sigma)/L=\int_{-d/2}^{d/2}dx\mathcal{L},$ (59) where $L$ and $d$ are respectively the length and width of the strip, and $\mathcal{L}=-\frac{\ell^{2}}{z^{2}}\left(1+\frac{{z^{\prime}}^{2}}{\gamma}\right)^{1/2}.$ (60) Next we calculate the maximal area of $\Sigma$. Clearly, a maximum of the area corresponds to a minimum of $I$, and the variation of $I$ yields the equation of motion for $z$. Instead of solving the equation of motion we will use the Hamiltonian approach.
We define the conjugate momentum $\pi=\frac{\partial\mathcal{L}}{\partial z^{\prime}}$ (61) and construct the Hamiltonian $\mathcal{H}=\pi z^{\prime}-\mathcal{L}=\frac{\ell^{2}}{z^{2}}\frac{1}{(1+{z^{\prime}}^{2}/\gamma)^{1/2}}.$ (62) It can be easily shown that the equation of motion is satisfied if and only if the Hamiltonian is a constant of motion. In particular, at the bottom of the surface $z=z_{*}$ we have $z^{\prime}=0$ and the Hamiltonian is equal to $\ell^{2}/z_{*}^{2}$. In this way we obtain the equation $\frac{\ell^{2}}{z_{*}^{2}}=\frac{\ell^{2}}{z^{2}}\frac{1}{(1+{z^{\prime}}^{2}/\gamma)^{1/2}},$ (63) from which we can express $z^{\prime}$ as $z^{\prime}=\pm\frac{\sqrt{(z_{*}^{4}-z^{4})\gamma}}{z^{2}}.$ (64) Inserting this into (59) and changing the integration variable from $x$ to $z$ with $dx=dz/z^{\prime}$ we obtain the area of the extremal surface ${\rm Area}=8L\int_{0}^{z_{*}}dz\frac{z_{*}^{2}}{z^{2}}\frac{\ell^{2}}{\sqrt{(z_{*}^{4}-z^{4})\gamma}}.$ (65) Dividing this by $4\ell^{2}$ we obtain the entanglement entropy expressed as an integral over $z$ $S=\frac{{\rm Area}}{4\ell^{2}}=2L\int_{0}^{z_{*}}dz\frac{z_{*}^{2}}{z^{2}}\frac{1}{\sqrt{(z_{*}^{4}-z^{4})\gamma}}.$ (66) The location of the bottom $z_{*}$ of the extremal surface is related to the strip width $d=\int_{-d/2}^{d/2}dx=2\int_{0}^{z_{*}}dz\frac{z^{2}}{\sqrt{(z_{*}^{4}-z^{4})\gamma}}.$ (67) The integral in (66) is divergent near $z=0$ and can be regularized by adding and subtracting a counter-term $2L\int_{\epsilon}^{z_{*}}dz/z^{2}.$ (68) The entropy is then expressed as $S=S_{\rm fin}+\frac{2L}{\epsilon},$ (69) where the finite part reads $S_{\rm fin}=2L\int_{0}^{z_{*}}dz\left(\frac{z_{*}^{2}}{z^{2}}\frac{1}{\sqrt{(z_{*}^{4}-z^{4})\gamma}}-\frac{1}{z^{2}}\right)-\frac{2L}{z_{*}}.$ (70) Figure 2: The metric function $\gamma$ versus $z/z_{\rm h}$ for $T=0.61\times 10^{-2}\sqrt{\rho_{\rm c}}$.
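The integrals (67) and (70) are straightforward to evaluate by quadrature for any profile $\gamma(z)$. The sketch below (assuming SciPy; the function names are ours) factors the integrand of (70) to avoid numerical cancellation near $z=0$; for pure AdS, $\gamma\equiv 1$, it reproduces the known strip values $d/2=\sqrt{\pi}\,\Gamma(3/4)\,z_{*}/\Gamma(1/4)\approx 0.599\,z_{*}$ and $S_{\rm fin}\approx-0.599\times 2L/z_{*}$.

```python
import numpy as np
from scipy.integrate import quad

def strip_width(z_star, gamma):
    # Eq. (67): d = 2 * int_0^{z*} z^2 / sqrt((z*^4 - z^4) gamma(z)) dz
    f = lambda z: z**2 / np.sqrt((z_star**4 - z**4) * gamma(z))
    return 2.0 * quad(f, 0.0, z_star, limit=200)[0]

def s_fin(z_star, gamma):
    # Eq. (70) divided by 2L; the two 1/z^2 pieces are combined analytically
    # so the integrand stays finite and well-conditioned near z = 0
    f = lambda z: (z_star**2 / np.sqrt((z_star**4 - z**4) * gamma(z)) - 1.0) / z**2
    return quad(f, 0.0, z_star, limit=200)[0] - 1.0 / z_star
```

The inverse-square-root endpoint singularity at $z=z_{*}$ is integrable, and the adaptive QAGS routine behind `quad` handles it without a change of variables.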
Next we calculate the entanglement entropy using the bulk profile corresponding to an $O_{2}$ superconductor at fixed temperature. The reason why we specifically address the $O_{2}$ type is that the $O_{2}$ superconductor exhibits a first order phase transition which manifests itself as a discontinuity depicted in Fig. 2 of Ref. [19]. To calculate $S_{\rm fin}$ we use a polynomial function $\gamma=1+\sum_{i=3}^{6}c_{i}\left(\frac{z}{z_{\rm h}}\right)^{i}$ (71) with $c_{3}=-44,\quad c_{4}=118,\quad c_{5}=-98,\quad c_{6}=23.$ (72) We plot this function in Fig. 2. This choice is motivated by the superconductor bulk metric profile plotted in Fig. 7(b) of Ref. [21] for a fixed horizon temperature $T=0.61\times 10^{-2}\sqrt{\rho_{\rm c}}$ where $\rho_{\rm c}$ is the charge density of the $O_{2}$ superconductor (see section 3.1). The function (71) is an analytic approximation to the bulk metric found by numerically solving the field equations of the holographic superconductor. In Fig. 3 we plot $S_{\rm fin}$ as a function of $d/2$. For comparison we plot in the same figure the entanglement entropies of a Schwarzschild AdS planar BH and a maximal RN AdS planar BH which have the same asymptotic behavior near $z=0$. The metric profiles are determined so that the cubic terms are the same as in the $O_{2}$ superconductor case. Hence we have $\gamma_{\rm AdS}=1+c_{3}\left(\frac{z}{z_{\rm h}}\right)^{3}$ (73) for the Schwarzschild AdS planar BH and $\gamma_{\rm RN}=1+c_{3}\left(\frac{z}{z_{\rm h}}\right)^{3}+3\left(\frac{c_{3}}{4}\right)^{4/3}\left(\frac{z}{z_{\rm h}}\right)^{4}$ (74) for the maximal RN AdS planar BH. The coefficient of the quartic term in (74) was fixed by virtue of (52) and the requirement $\gamma_{\rm RN}(z_{\rm RN})=0$, where $z_{\rm RN}=(-4/c_{3})^{1/3}z_{\rm h}$ is the location of the RN BH horizon.
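The coefficients (72) are not arbitrary: together with the leading 1 they sum to zero, so that $\gamma$ of (71) vanishes at the horizon $z=z_{\rm h}$, and the quartic coefficient in (74) is tuned so that $\gamma_{\rm RN}(z_{\rm RN})=0$. A quick numerical check (plain Python; since $c_{3}<0$, we read the fractional power in (74) as acting on $|c_{3}|/4$):

```python
# Coefficients of Eq. (72); x = z / z_h
c3, c4, c5, c6 = -44.0, 118.0, -98.0, 23.0

def gamma_O2(x):                       # Eq. (71)
    return 1.0 + c3*x**3 + c4*x**4 + c5*x**5 + c6*x**6

def gamma_RN_cmp(x):                   # Eq. (74), with |c3| in the 4/3 power
    return 1.0 + c3*x**3 + 3.0 * (abs(c3) / 4.0)**(4.0/3.0) * x**4

x_RN = (4.0 / abs(c3))**(1.0/3.0)      # z_RN / z_h

assert abs(gamma_O2(1.0)) < 1e-9       # horizon of the O2 background at z = z_h
assert abs(gamma_RN_cmp(x_RN)) < 1e-9  # RN horizon at z = z_RN
```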
Figure 3: The finite part of the entanglement entropy $S_{\rm fin}$ in units of $2L/z_{\rm h}$ versus half strip width $d/2$ in units of $z_{\rm h}$ at fixed temperature. Dotted and dashed lines represent the entanglement entropies of the Schwarzschild AdS planar and maximal RN AdS planar BH, respectively. The right panel shows the zoomed-in crossover region. To complete our model we still have to determine the function $\chi(z)$. To do this we need to set the scale $z_{\rm h}$ in relation to the previous works [19, 20, 21]. We will make a comparison of the scales at a fixed temperature $T=0.61\times 10^{-2}\sqrt{\rho_{\rm c}}$, where $\rho_{\rm c}$ is the charge density of dimension length${}^{-2}$. In Ref. [19] $\rho_{\rm c}$ is chosen to set the scale whereas in Refs. [20, 21] the scale is set by the quantity $\tilde{\rho}_{\rm c}=\frac{\rho_{\rm c}\sqrt{16\pi G_{4}}}{\ell}.$ (75) The relation between $\tilde{\rho}_{\rm c}$ and $\rho_{\rm c}$ can be fixed by identifying the $O_{2}$ phase transition temperature $T_{\rm tr}$ of Albash and Johnson [20] (their figure 2(b)) $T_{\rm tr}=0.003635\tilde{\rho}_{\rm c}^{1/2}$ with that of Bobev et al. [19] (their figure 2) $T_{\rm tr}=0.007269\rho_{\rm c}^{1/2}$. From this we obtain $\tilde{\rho}_{\rm c}=4\rho_{\rm c}.$ (76) In our approach the scale is set by $z_{\rm h}$ so we have to find a relation between our $z_{\rm h}$ and $\tilde{\rho}_{\rm c}$ or $\rho_{\rm c}$. To this end we compare the transition point $d_{\rm tr}/2=0.744z_{\rm h}$ (Fig. 3) with that of Chakraborty [21] $d_{\rm tr}/2=2.56\tilde{\rho}_{\rm c}^{-1/2}$. This yields $\frac{1}{z_{\rm h}}=0.2906\tilde{\rho}_{\rm c}^{1/2}=0.5812\rho_{\rm c}^{1/2}.$ (77) Using this we can express the horizon temperature of our configuration depicted in Fig.
2 in units of $z_{\rm h}^{-1}$, $Tz_{\rm h}\equiv\frac{3}{\pi}e^{-\chi_{\rm h}/2}=1.05\times 10^{-2},$ (78) which yields $\chi_{\rm h}\equiv\chi(z_{\rm h})=-2\ln\frac{0.0105\pi}{3}=9.02.$ (79) Next, we express $\chi$ as a function of $z$ using the expression (49) from section 3.1 in which we set $\chi_{0}=0$, $\lambda_{1}=0$, keep the $z^{5}$ term and neglect the higher order terms. Hence we write $\chi(z)=2\lambda_{2}^{2}\frac{z^{4}}{\ell^{4}}+\chi_{5}\frac{z^{5}}{\ell^{5}},$ (80) where the coefficient $\lambda_{2}$ can be fixed from Eq. (50) with the value of $O_{2}$ deduced from Fig. 2 of Ref. [19]. At $T=0.61\times 10^{-2}\sqrt{\rho_{\rm c}}$ we find $\lambda_{2}=1.1462$ and using (79) we obtain $\chi(z)=2.63\left(\frac{z}{z_{\rm h}}\right)^{4}+6.39\left(\frac{z}{z_{\rm h}}\right)^{5}.$ (81) This equation together with (71) and (72) can be used to find closed expressions for the hydrodynamic functions and variables of our analog model. The considerations in this section can as well be carried out for the type $O_{1}$ superconductor. ## 4 Summary and conclusions We have derived an analog acoustic geometry which mimics a $d+1$-dimensional asymptotic AdS geometry with a planar black hole. In 3+1 dimensions, this geometry has been exploited as a holographic model for the 2+1-dimensional superconductor. We have applied this general analog geometry to a 3+1-dimensional bulk and calculated the entanglement entropy for a particular geometry obtained as a solution related to the holographic $O_{2}$ superconductor. We have demonstrated that the entanglement entropy in our analog model exhibits the usual first order phase transition which characterizes the $O_{2}$ superconductor. In this way we have confirmed the basic idea that a 3+1 AdS bulk with a planar BH can be realized in nature as a hydrodynamic analog gravity model.
Moreover, the analog bulk metric can be parameterized so that the coefficients in the asymptotic expansion in powers of $z$ are such that the dual AdS/CFT boundary field theory corresponds to the type $O_{2}$ superconductor. A procedure similar to the one described in section 3.2 can easily be applied to the case of the type $O_{1}$ superconductor. It would be of considerable interest to construct a concrete fluid system in the laboratory which would satisfy the properties of the analog geometry described above. It is fair to say that at this stage we cannot provide a clear proposal of how to prepare an adequate laboratory setup. As we are concerned with fluid velocities close to the speed of light $c$ and a sound speed close to $c$, we would need an essentially relativistic fluid. So far the only known realistic experimental setup for a relativistic-fluid laboratory is provided by high-energy colliders. The study of analog gravity in high energy collisions may in general improve our understanding of the dynamics of general relativistic fluids [43, 44]. Perhaps, with the advance of accelerator technology, it will one day be possible, e.g., by choosing appropriate heavy ions and a specially designed beam geometry, to obtain the desired equation of state and expansion flow of the fluid. ## Acknowledgments The work of N. Bilić has been partially supported by the European Union through the European Regional Development Fund - the Competitiveness and Cohesion Operational Programme (KK.01.1.1.06). J.C. Fabris thanks CNPq (Brazil) and FAPES (Brazil) for partial support. ## Appendix A Acoustic metric Here we briefly review the derivation of the relativistic acoustic metric. The acoustic metric is the effective metric perceived by acoustic perturbations propagating in a perfect fluid background. Under certain conditions the perturbations satisfy a Klein-Gordon equation in curved geometry with metric of the form (1).
We first derive a propagation equation for linear perturbations of a nonisentropic flow assuming a fixed background geometry. Following Landau and Lifshitz [45] we assume that the enthalpy flow $wu_{\mu}$ is a gradient of a scalar potential, i.e., that there exists a scalar function $\theta$ such that the velocity field satisfies $wu_{\mu}=\partial_{\mu}\theta,$ (82) where $w$ is the specific enthalpy defined by (10). Then, from the relativistic Euler equation and standard thermodynamic identities it follows [26] that the entropy gradient is also proportional to the gradient of the potential, i.e., $s_{,\mu}=w^{-1}u^{\nu}s_{,\nu}\theta_{,\mu}.$ (83) Furthermore, instead of the continuity equation $(nu^{\mu})_{;\mu}=0$, one finds $(nu^{\mu})_{;\mu}=\frac{1}{w}\frac{\partial p}{\partial s}u^{\mu}s_{,\mu}.$ (84) In a nonisentropic flow we have $u^{\mu}s_{,\mu}\neq 0$ and the above equation shows that the particle number is generally not conserved. As demonstrated in Ref. [26], from equation (83) and the Lagrangian description of fluid dynamics it follows that the specific entropy is a function of the velocity potential $\theta$ only. Then, using (83) equation (84) can be expressed in the form $(nu^{\mu})_{;\mu}=\frac{\partial p}{\partial\theta},$ (85) where $p=p(w,s(\theta))$ is the pressure of the fluid. Given some average bulk motion represented by $w$, $n$, and $u^{\mu}$, following the standard procedure [5, 6, 45], we make a replacement $w\rightarrow w+\delta w,\quad n\rightarrow n+\delta n,\quad u^{\mu}\rightarrow u^{\mu}+\delta u^{\mu},$ (86) where the perturbations $\delta w$, $\delta n$, and $\delta u^{\mu}$ are induced by a small perturbation $\delta\theta$ around a background velocity potential $\theta$.
From (82) it follows $\delta w=u^{\mu}\delta\theta_{,\mu},$ (87) $w\delta u^{\mu}=(g^{\mu\nu}-u^{\mu}u^{\nu})\delta\theta_{,\nu}.$ (88) Using this and (86), equation (85) at linear order yields $\left(f^{\mu\nu}\delta\theta_{,\nu}\right)_{;\mu}+\left[\left(\frac{\partial n}{\partial\theta}u^{\mu}\right)_{;\mu}-\left(\frac{\partial^{2}p}{\partial\theta^{2}}\right)\right]\delta\theta=0,$ (89) where $f^{\mu\nu}=\frac{n}{w}\left[g^{\mu\nu}-\left(1-\frac{w}{n}\frac{\partial n}{\partial w}\right)u^{\mu}u^{\nu}\right].$ (90) Then, it may be easily shown that equation (89) can be recast into the form $\frac{1}{\sqrt{-G}}\partial_{\mu}\left({\sqrt{-G}}\,G^{\mu\nu}\partial_{\nu}\delta\theta\right)+m_{\rm eff}^{2}\delta\theta=0,$ (91) where the matrix $G^{\mu\nu}$ is the inverse of the acoustic metric tensor $G_{\mu\nu}=\frac{n}{m^{2}c_{\rm s}w}[g_{\mu\nu}-(1-c_{\rm s}^{2})u_{\mu}u_{\nu}]\,,$ (92) with determinant $G$. Here $m$ is an arbitrary mass parameter introduced to make $G_{\mu\nu}$ dimensionless and $c_{\rm s}$ is the speed of sound defined by (11). The effective mass squared is given by $m^{2}\sqrt{|G|}\,m_{\rm eff}^{2}=\left[\left(\frac{\partial n}{\partial\theta}u^{\mu}\right)_{;\mu}-\frac{\partial^{2}p}{\partial\theta^{2}}\right].$ (93) Hence, the linear perturbations $\delta\theta$ propagate in the effective metric (92) and acquire an effective mass. In an equivalent field-theoretical description [1, 26, 46] the fluid velocity $u_{\mu}$ is derived from the scalar field as $u_{\mu}=\partial_{\mu}\theta/\sqrt{X}$, and $n$ and $c_{\rm s}$ are expressed in terms of the Lagrangian and its first and second derivatives with respect to the kinetic energy term $X=g^{\mu\nu}\theta_{,\mu}\theta_{,\nu}$. Obviously, the quantity $\sqrt{X}$ in this picture is identified with the specific enthalpy $w$. Equation (91) with (92) and (11) coincides with that of Ref. [1] derived in field theory with a general Lagrangian of the form $\mathcal{L}=\mathcal{L}(X,\theta)$.
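For reference, Eq. (92) is easy to assemble in components. The sketch below (assuming NumPy; the function name is ours) also checks the luminal limit: for $c_{\rm s}=1$ the term proportional to $u_{\mu}u_{\nu}$ drops out and the acoustic metric reduces to a conformal factor times the background metric.

```python
import numpy as np

def acoustic_metric(g, u, n, w, cs2, m=1.0):
    # Eq. (92): G_{mu nu} = n / (m^2 c_s w) * [g_{mu nu} - (1 - c_s^2) u_mu u_nu]
    return n / (m**2 * np.sqrt(cs2) * w) * (g - (1.0 - cs2) * np.outer(u, u))

# Sanity check: fluid at rest in Minkowski background, c_s = 1 and n = w = 1
eta = np.diag([1.0, -1.0, -1.0, -1.0])       # (+,-,-,-) signature, as in (45)
u = np.array([1.0, 0.0, 0.0, 0.0])
G = acoustic_metric(eta, u, n=1.0, w=1.0, cs2=1.0)   # equals eta
```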
## Appendix B Effective sound speed with external pressure Consider a fluid with internal variables $p$, $\rho$, and $n$. Suppose we apply to the fluid an external pressure $p_{\rm ext}$ so that the total pressure is $P=p+p_{\rm ext}.$ (94) The speed of sound is still defined by $c_{\rm s}^{2}=\left.\frac{\partial p}{\partial\rho}\right|_{s},$ (95) but the thermodynamic TdS equation (12) must include the external pressure, i.e., $dW=Tds+\frac{1}{n}dP,$ (96) where $W=\frac{P+\rho}{n}=w+\frac{p_{\rm ext}}{n}.$ (97) Then the sound speed is given by $c_{\rm s}^{2}=\left.\frac{\partial(P-p_{\rm ext})}{\partial\rho}\right|_{s}=\frac{n}{W}\left.\frac{\partial W}{\partial n}\right|_{s}-\frac{\partial p_{\rm ext}}{\partial\rho}.$ (98) For an isentropic process from (96) it follows $dP=ndW,\quad\quad d\rho=Wdn,$ (99) so by making use of $\frac{\partial}{\partial n}=W\frac{\partial}{\partial\rho}$ (100) we find $c_{\rm s}^{2}=\frac{n}{w+p_{\rm ext}/n}\left(\left.\frac{\partial w}{\partial n}\right|_{s}-\frac{p_{\rm ext}}{n^{2}}\right).$ (101) Now we make the following ansatz $p_{\rm ext}=\alpha(p+\rho),$ (102) where $\alpha=\alpha(z)$ will be determined by the requirement that the speed of sound is well defined as $z\rightarrow 0$. With this ansatz we find a modified expression for the sound speed $\tilde{c}_{\rm s}^{2}=\frac{1}{1+\alpha}\left(\frac{n}{w}\left.\frac{\partial w}{\partial n}\right|_{s}-\alpha\right)=\frac{c_{\rm s}^{2}-\alpha}{1+\alpha}.$ (103) ## References * [1] E. Babichev, V. Mukhanov, and A. Vikman, JHEP 0802, 101 (2008) [arXiv:0708.0561 [hep-th]]. * [2] M. Novello and E. Goulart, Class. Quant. Grav. 28, 145022 (2011) [arXiv:1102.1913 [gr-qc]]. * [3] M. Novello, E. Bittencourt, U. Moschella, E. Goulart, J. M. Salim, and J. D. Toniato, JCAP 1306, 014 (2013) [arXiv:1212.0770 [gr-qc]]. * [4] T. Jacobson, PoS QG-PH, 020 (2007) [arXiv:0801.1547 [gr-qc]]. * [5] M. Visser, Class. Quant. Grav. 15, 1767 (1998) [arXiv:gr-qc/9712010]. * [6] N. Bilić, Class. Quant.
Grav. 16, 3953 (1999) [arXiv:gr-qc/9908002]. * [7] S. Kinoshita, Y. Sendouda, and K. Takahashi, Phys. Rev. D 70, 123006 (2004). [astro-ph/0405149]. * [8] C. Barcelo, S. Liberati and M. Visser, Living Rev. Rel. 8, 12 (2005) [Living Rev. Rel. 14, 3 (2011)] [gr-qc/0505065]. * [9] J. F. Barbero G., Phys. Rev. D 54, 1492 (1996) [arXiv:gr-qc/9605066]. * [10] J. F. Barbero G. and E. J. S. Villasenor, Phys. Rev. D 68, 087501 (2003) [gr-qc/0307066]. * [11] S. Mukohyama and J. P. Uzan, Phys. Rev. D 87, 065020 (2013) [arXiv:1301.1361 [hep-th]]. * [12] S. A. Hartnoll, C. P. Herzog and G. T. Horowitz, Phys. Rev. Lett. 101, 031601 (2008) [arXiv:0803.3295 [hep-th]]. * [13] S. A. Hartnoll, C. P. Herzog and G. T. Horowitz, JHEP 12, 015 (2008) [arXiv:0810.1563 [hep-th]]. * [14] S. S. Gubser, C. P. Herzog, S. S. Pufu and T. Tesileanu, Phys. Rev. Lett. 103, 141601 (2009) [arXiv:0907.3510 [hep-th]]. * [15] J. P. Gauntlett, J. Sonner and T. Wiseman, Phys. Rev. Lett. 103, 151601 (2009) [arXiv:0907.3796 [hep-th]]. * [16] F. Benini, C. P. Herzog, R. Rahman and A. Yarom, JHEP 11, 137 (2010) [arXiv:1007.1981 [hep-th]]. * [17] G. T. Horowitz, Lect. Notes Phys. 828, 313-347 (2011) [arXiv:1002.1722 [hep-th]]. * [18] F. Aprile, D. Roest and J. G. Russo, JHEP 06, 040 (2011) [arXiv:1104.4473 [hep-th]]. * [19] N. Bobev, A. Kundu, K. Pilch and N. P. Warner, JHEP 1203, 064 (2012) [arXiv:1110.3454 [hep-th]]. * [20] T. Albash and C. V. Johnson, JHEP 1205, 079 (2012) [arXiv:1202.2605 [hep-th]]. * [21] A. Chakraborty, Class. Quant. Grav. 37, no.6, 065021 (2020) doi:10.1088/1361-6382/ab6d09 [arXiv:1903.00613 [hep-th]]. * [22] R. G. Cai, L. Li, L. F. Li and R. Q. Yang, Sci. China Phys. Mech. Astron. 58, no. 6, 060401 (2015) [arXiv:1502.00437 [hep-th]]. * [23] S. A. Hartnoll, Class. Quant. Grav. 26, 224002 (2009) [arXiv:0903.3246 [hep-th]]. * [24] S. Hossenfelder, Phys. Lett. B 752, 13 (2016) [arXiv:1508.00732 [gr-qc]]. * [25] S. Hossenfelder, Phys. Rev. D 91, no. 
12, 124064 (2015) [arXiv:1412.4220 [gr-qc]]. * [26] N. Bilić and H. Nikolic, Class. Quant. Grav. 35, no. 13, 135008 (2018) [arXiv:1802.03267 [gr-qc]]. * [27] N. Bilić and T. Zingg, arXiv:1903.03401 [gr-qc]. * [28] T. Fischbacher, K. Pilch and N. P. Warner, [arXiv:1010.4910 [hep-th]]. * [29] L. Randall and R. Sundrum, Phys. Rev. Lett. 83, 3370 (1999) * [30] L. Randall and R. Sundrum, Phys. Rev. Lett. 83, 4690 (1999) * [31] L. Bombelli, R. K. Koul, J. H. Lee and R. D. Sorkin, Phys. Rev. D 34, 373 (1986). * [32] M. Srednicki, Phys. Rev. Lett. 71, 666 (1993) [arXiv:hep-th/9303048]. * [33] J. M. Maldacena, Adv. Theor. Math. Phys. 2, 231-252 (1998) [arXiv:hep-th/9711200 [hep-th]]. * [34] S. Ryu and T. Takayanagi, Phys. Rev. Lett. 96, 181602 (2006) [arXiv:hep-th/0603001 [hep-th]]; JHEP 08, 045 (2006) [arXiv:hep-th/0605073 [hep-th]]. * [35] A. Lewkowycz and J. Maldacena, JHEP 08 (2013), 090 [arXiv:1304.4926 [hep-th]]. * [36] N. Engelhardt and A. C. Wall, JHEP 01 (2015), 073 [arXiv:1408.3203 [hep-th]]. * [37] C. J. Pethick and H. Smith, Bose-Einstein Condensation in Dilute Gases, Cambridge University Press, Cambridge (2006). * [38] M. Uhlmann, Y. Xu and R. Schutzhold, New J. Phys. 7, 248 (2005) [arXiv:quant-ph/0509063 [quant-ph]]. * [39] F. Girelli, S. Liberati and L. Sindoni, Phys. Rev. D 78, 084013 (2008) [arXiv:0807.4910 [gr-qc]]. * [40] V. Fleurov and R. Schilling, Phys. Rev. A 85, 045602 (2012) [arXiv:1105.0799 [cond-mat.quant-gas]]. * [41] M. Rinaldi, Phys. Rev. D 84, 124009 (2011) [arXiv:1106.4764 [gr-qc]]. * [42] P. R. Anderson, R. Balbinot, A. Fabbri and R. Parentani, Phys. Rev. D 87, no.12, 124018 (2013) [arXiv:1301.2081 [gr-qc]]. * [43] N. Bilic and D. Tolic, Phys. Rev. D 87, no.4, 044033 (2013) [arXiv:1210.3824 [gr-qc]]. * [44] N. Bilić and D. Tolić, Phys. Rev. D 88, 105002 (2013) [arXiv:1309.2833 [gr-qc]]. * [45] L. D. Landau, E. M. Lifshitz, Fluid Mechanics, (Pergamon, Oxford, 1993) p. 507. * [46] O. F. Piattella, J. C. Fabris, and N. Bilić, Class. Quant. Grav.
31, 055006 (2014) [arXiv:1309.4282 [gr-qc]].
Magnetization Reversal Mechanism in Exchange-Biased Spring-like Thin-Film Composite Marcin Perzanowski, Jakub Gregor-Pawlowski, Arkadiusz Zarzycki, Marta Marszalek Institute of Nuclear Physics Polish Academy of Sciences, Department of Materials Science, Radzikowskiego 152, 31-342 Krakow, Poland email<EMAIL_ADDRESS> Abstract: Development of modern spintronic devices requires materials exhibiting specific magnetic effects. In this paper, we investigate a magnetization reversal mechanism in a [Co/Pdx]7/CoO/[Co/Pdy]7 thin-film composite where an antiferromagnet is sandwiched between a hard and a soft ferromagnet with different coercivities. The antiferromagnet/ferromagnet interfaces give rise to the exchange bias effect. The application of soft and hard ferromagnetic films causes exchange-spring-like behavior, while the choice of the Co/Pd multilayers provides large out-of-plane magnetic anisotropy. We observed that the magnitude and the sign of the exchange bias anisotropy field are related to the arrangement of the magnetic moments in the antiferromagnetic layer. This ordering is induced by the spin orientation present in the neighboring ferromagnetic films, which is, in turn, dependent on the orientation and strength of the external magnetic field. DOI: 10.1021/acsami.0c14115 (OPEN ACCESS) Keywords: exchange bias, spring magnet, perpendicular magnetic anisotropy, magnetization reversal, FORC, thin films, multilayers, antiferromagnet ## 1 Introduction The exchange bias effect is a phenomenon occurring at the interface between a ferromagnet (FM) and an antiferromagnet (AFM). The exchange coupling occurs after cooling such a FM/AFM system in an external magnetic field below the Néel temperature of the AFM, giving rise to a shift of the magnetic hysteresis loop along the field axis.1, 2 The shift is called the exchange bias field $H_{\mathrm{ex}}$ and its magnitude is inversely proportional to the thickness of the FM material, revealing the interfacial nature of the effect.
In most cases the bias field decreases monotonically with increasing temperature, reaching $H_{\mathrm{ex}}=0$ at the blocking temperature for exchange bias. However, there are also systems where the bias field first increases, and then drops down as the temperature rises.3 Usually, the blocking temperature is lower than the Néel temperature of the AFM due to the structural imperfections present in the FM and AFM materials, as well as the condition of the interface. To describe the exchange bias phenomenon various models have been applied, considering the magnetic domains in the antiferromagnet,4 the role of uncompensated spins,5 as well as the roughness of the FM/AFM interface.6 The possible technological application of the exchange bias effect has been studied in the context of its implementation in sensors,7 biomedicine,8, 9 and magnetic read heads and spintronic devices.10, 11, 12 Most of the research on the exchange bias effect has been done on flat multilayers; however, there are also studies for materials in the form of magnetic antidots and dots,13, 14, 15 core-shell structures,16, 17, 18 or rings and disks.19, 20 One of the key features especially important for application in flat and patterned spintronic devices is a perpendicular magnetic anisotropy present in the magnetic material. For this reason, we focused on a system including Co/Pd ferromagnetic multilayers with the easy axis of magnetization perpendicular to the film plane.21, 22 As an antiferromagnetic material for the exchange bias effect studies we have chosen cobalt oxide CoO, since its properties are well known and the Co/CoO interface is considered to be a model system for such investigations.23, 24, 25, 26 Here, we present studies on the cooling field influence on the magnetization reversal mechanism for the exchange-biased system where the CoO antiferromagnetic layer is sandwiched between two [Co/Pd] ferromagnetic multilayers with different coercivities.
The issue of the cooling field impact on the magnitude and sign of the exchange bias field has been recently raised in a few research papers. However, these works focused on an AFM/FM bilayer,27 a spin glass/FM interface,28 or a system where the AFM and FM layers are separated by a paramagnetic material.29 This paper develops this field of research further by combining two magnetic effects, exchange bias and magnetic exchange spring, in one study to find how they affect the reversal process. The type of FM/AFM/FM composite that we study here is similar to the hard-soft exchange-spring materials which can be applied in high density magnetic recording devices.30, 31, 32 We find that the magnetization switching process and the magnitude and the sign of the exchange bias field are different depending on the magnetic state of both the ferromagnetic and the antiferromagnetic films induced by cooling the system in various external magnetic fields. The studies of the magnetic hysteresis loops obtained under different conditions are supported by the First Order Reversal Curve (FORC) measurements. The FORC investigations were carried out for both ascending and descending branches of the hysteresis loop, which is especially significant for exchange-biased systems with the bias loop shifted from the zero position. ## 2 Experimental The samples were fabricated by thermal evaporation at room temperature in ultrahigh vacuum under a pressure of $10^{-7}$ Pa. The systems were deposited on single-crystal Si(100) substrates with a 2 nm thick Pd buffer layer. Ferromagnetic multilayers consisted of [Co/Pd]7 stacks with a Co thickness of 0.3 nm. The Pd layers have a thickness of 0.6 or 1.2 nm. The antiferromagnetic CoO layer (AFM) was obtained by oxidizing a 1-nm-thick Co layer in an atmosphere of pure oxygen under a pressure of $3\times 10^{2}$ Pa for 10 minutes.
In this paper, four systems were studied: the [Co/Pd${}_{\mathrm{0.6\ nm}}$]7 and [Co/Pd${}_{\mathrm{1.2\ nm}}$]7 multilayers, and the [Co/Pd${}_{\mathrm{0.6\ nm}}$]7/[Co/Pd${}_{\mathrm{1.2\ nm}}$]7 and [Co/Pd${}_{\mathrm{0.6\ nm}}$]7/CoO/[Co/Pd${}_{\mathrm{1.2\ nm}}$]7 composites. During the deposition the substrates were kept at 300 K. The multilayer structure of the samples was investigated by X-ray reflectivity (XRR) measurements carried out using an X’Pert Pro PANalytical diffractometer equipped with a Cu X-ray tube operated at 40 kV and 30 mA. For the systems without an AFM layer the XRR method was used only for validation of the assumed layered structure, while for the AFM-based composite it was also applied to determine the CoO thickness and density. Magnetic studies were done using a Quantum Design MPMS XL SQUID magnetometer. The zero-field-cooled (ZFC) and field-cooled (FC) magnetization curves were measured as follows. First, the sample was demagnetized at 300 K by application of an oscillating external magnetic field, inducing zero magnetization. Then, the sample was cooled down to 10 K in the absence of an external field. At 10 K, a field of 500 Oe was applied in the direction perpendicular to the sample surface, and the ZFC magnetization curve was measured during heating to 300 K. After reaching the final temperature, the FC curve was recorded during cooling down to 10 K in the same external field. Both ZFC and FC curves were measured at a temperature change rate of 3 K/min. The hysteresis loops were measured in out-of-plane geometry at 10 K, 50 K, and 100 K, with the external magnetic field perpendicular to the sample plane. During the cooling down to low temperature, various external magnetic cooling fields were applied; for more details see further in the text. First-order reversal curve (FORC) measurements were done at 10 K in out-of-plane geometry.
During a FORC measurement along a single hysteresis branch, first, a $\pm$10 kOe field was set to magnetically saturate the ferromagnetic components of the system. Then, the field was changed to the reversal field $H_{\mathrm{R}}$ and the magnetization was measured as a function of the external field $H$ toward the initial $\pm$10 kOe point. Next, the succeeding $H_{\mathrm{R}}$ field was set prior to the subsequent magnetization measurement. The set of $M(H,H_{\mathrm{R}})$ curves measured for different starting $H_{\mathrm{R}}$ fields was transformed into a FORC distribution. ## 3 Results and discussion First, we studied the magnetic properties of the two ferromagnetic components of the complex composite system. The hysteresis loops together with the d$M$/d$H$ switching field distributions, measured at 10 K after field cooling in +50 kOe for the [Co/Pd${}_{\mathrm{0.6\ nm}}$]7 and [Co/Pd${}_{\mathrm{1.2\ nm}}$]7 multilayers and the [Co/Pd${}_{\mathrm{0.6\ nm}}$]7/[Co/Pd${}_{\mathrm{1.2\ nm}}$]7 system, are presented in Figure 1. To obtain quantitative information on the magnetic properties of the systems, both the upper and lower magnetization branches of the hysteresis loops for the [Co/Pd${}_{\mathrm{0.6\ nm}}$]7 and [Co/Pd${}_{\mathrm{1.2\ nm}}$]7 multilayers were fitted using the following expression13, 33, 34 $M(H)=\frac{2}{\pi}M_{\mathrm{s}}\arctan\left(g\left[\frac{H-H_{\mathrm{c}}}{H_{\mathrm{c}}}\right]\right)\ ,$ (1) where $M_{\mathrm{s}}$ is the saturation magnetization of the system, $H_{\mathrm{c}}$ is its coercivity, and $g$ represents the slope of the magnetization curve. The [Co/Pd${}_{\mathrm{0.6\ nm}}$]7 multilayer (Figure 1a) shows a remanent magnetization equal to 0.98 of the saturation magnetization $M_{\mathrm{s}}$, indicating that the easy axis of magnetization is perpendicular to the sample plane.
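To illustrate how a single branch of Eq. (1) can be fitted, the sketch below generates a synthetic ascending branch with assumed parameters ($M_{\mathrm{s}}=1$ a.u., $H_{\mathrm{c}}=1$ kOe, $g=25$; all values are hypothetical, not the measured data) and recovers them with `scipy.optimize.curve_fit`:

```python
import numpy as np
from scipy.optimize import curve_fit

def branch(H, Ms, Hc, g):
    """Single-branch magnetization model of Eq. (1):
    M(H) = (2/pi) * Ms * arctan(g * (H - Hc) / Hc)."""
    return (2.0 / np.pi) * Ms * np.arctan(g * (H - Hc) / Hc)

# Synthetic ascending branch (fields in kOe); the parameter values and
# noise level are assumptions for this sketch, not experimental data.
rng = np.random.default_rng(0)
H = np.linspace(-10.0, 10.0, 400)
M_meas = branch(H, Ms=1.0, Hc=1.0, g=25.0) + 0.01 * rng.normal(size=H.size)

# Recover (Ms, Hc, g) from the noisy branch.
popt, _ = curve_fit(branch, H, M_meas, p0=[0.9, 0.8, 10.0])
Ms_fit, Hc_fit, g_fit = popt
```

The fitted $H_{\mathrm{c}}$ is simply the field at which the branch crosses zero magnetization; fitting both branches in this way yields the loop coercivity and any bias shift.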
Figure 1: Magnetic hysteresis loops measured at 10 K in out-of-plane geometry for (a) [Co/Pd${}_{\mathrm{0.6\ nm}}$]7, (b) [Co/Pd${}_{\mathrm{1.2\ nm}}$]7, and (c) [Co/Pd${}_{\mathrm{0.6\ nm}}$]7/[Co/Pd${}_{\mathrm{1.2\ nm}}$]7 systems after cooling in a +50 kOe external magnetic field. The upper panels present d$M$/d$H$ switching field distributions. The points represent experimental data, and solid lines are fits (see text). In panel (c) the hard magnet (HM) and soft magnet (SM) components are marked with the corresponding color fields. The switching field distributions d$M$/d$H$ for both the upper and lower magnetization branches are symmetric and sharp, denoting an abrupt magnetization reversal process. The coercivity of the sample obtained from the fit is 1 kOe. The [Co/Pd${}_{\mathrm{1.2\ nm}}$]7 multilayer (Figure 1b) has a similar remanence, demonstrating a large out-of-plane magnetic anisotropy. In comparison to the [Co/Pd${}_{\mathrm{0.6\ nm}}$]7 multilayer, the switching field distributions d$M$/d$H$ of this sample are broader, showing that the rotation of the magnetic moments induced by the magnetic field sweep takes place in a more gradual way. The coercivity of this multilayer is 5.4 kOe, and therefore, it will be denoted further in the text as a hard magnet (HM), while the [Co/Pd${}_{\mathrm{0.6\ nm}}$]7 multilayer will be labeled as a soft magnet (SM). In both cases the same total Co thickness was deposited, and the saturation magnetizations normalized to the surface area are equal. Due to the symmetric single-step magnetization reversal, both the SM and HM systems can be described as rigid magnets. The hysteresis loop and the d$M$/d$H$ distributions for the [Co/Pd${}_{\mathrm{0.6\ nm}}$]7/[Co/Pd${}_{\mathrm{1.2\ nm}}$]7 system, being a superposition of the hard and soft ferromagnets and further labeled as SM/HM, are shown in Figure 1c.
The d$M$/d$H$ curves demonstrate asymmetric switching field distributions related to the presence of the different magnetic phases SM and HM reversing at distinct external magnetic fields. To acquire quantitative information on the coercive fields and saturation magnetizations of the components, each magnetization branch was fitted using a sum of two $M(H)$ functions expressed by Eq. (1) $\begin{split}M(H)=\frac{2}{\pi}M_{\mathrm{s}}^{\mathrm{SM}}\arctan\left(g^{\mathrm{SM}}\left[\frac{H-H_{\mathrm{c}}^{\mathrm{SM}}}{H_{\mathrm{c}}^{\mathrm{SM}}}\right]\right)\\\ +\frac{2}{\pi}M_{\mathrm{s}}^{\mathrm{HM}}\arctan\left(g^{\mathrm{HM}}\left[\frac{H-H_{\mathrm{c}}^{\mathrm{HM}}}{H_{\mathrm{c}}^{\mathrm{HM}}}\right]\right)\ ,\end{split}$ (2) where the SM and HM superscripts denote the quantities associated with the corresponding constituent multilayers. The sharper maxima, indicating swift magnetization reversal, occur at a magnetic field of $\pm 1.7$ kOe and are associated with the reversal of the SM component. Accordingly, the broader d$M$/d$H$ maxima, suggesting a slower process of magnetization rotation and giving a coercivity of 2.2 kOe, are related to the HM multilayer. The saturation magnetizations of both the SM and HM components are similar to those observed for the single [Co/Pd] multilayers (Figures 1a and 1b). As a consequence, the SM/HM system has a saturation magnetization twice as large as an individual [Co/Pd] stack. The two-step reversal process demonstrates that the SM/HM composite acts like an exchange-spring system rather than like a single rigid magnet.35, 36 The increase of the SM and the decrease of the HM coercivities, in comparison to the single [Co/Pd] stacks, result from the exchange coupling between the magnetic materials.
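The two-step decomposition of Eq. (2) can be checked numerically: summing a sharp soft-magnet branch and a broad hard-magnet branch reproduces the asymmetric d$M$/d$H$ distribution, whose sharp maximum marks the SM reversal. The parameter values below are illustrative, chosen only to echo the reported 1.7 kOe and 2.2 kOe switching fields:

```python
import numpy as np

def branch(H, Ms, Hc, g):
    """Single arctan branch of Eq. (1)."""
    return (2.0 / np.pi) * Ms * np.arctan(g * (H - Hc) / Hc)

# Illustrative ascending branch of the SM/HM composite per Eq. (2):
# a sharp SM step plus a broad HM step (assumed parameters, not fits).
H = np.linspace(-10.0, 10.0, 2001)
M = branch(H, Ms=1.0, Hc=1.7, g=40.0) + branch(H, Ms=1.0, Hc=2.2, g=5.0)

# Switching-field distribution dM/dH: the sharp SM peak dominates,
# while the HM contribution appears only as a broad shoulder.
dMdH = np.gradient(M, H)
H_sharp = H[np.argmax(dMdH)]   # position of the sharp SM maximum
```

A larger slope parameter $g$ narrows the corresponding d$M$/d$H$ peak, which is why the SM and HM components remain distinguishable even when their coercive fields are close.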
Similar changes observed for exchange-spring films with out-of-plane anisotropy were reported by Casoli et al.37 In order to study the magnetization reversal mechanism of a spring-like exchange-biased system, an antiferromagnetic (AFM) CoO layer was sandwiched between the SM and HM multilayers. The XRR measurement (Figure 2) showed that the density of the cobalt oxide layer is 6.38 g/cm3, which is close to the bulk CoO value of 6.44 g/cm3. Figure 2: XRR measurement (red points) and fitted curve (blue line) for the [Co/Pd${}_{\mathrm{0.6\ nm}}$]7/CoO/[Co/Pd${}_{\mathrm{1.2\ nm}}$]7 composite. Therefore, we can expect that the Co layer was oxidized to CoO and has the appropriate magnetic properties to create magnetic exchange coupling with the ferromagnetic layers. The thickness of the CoO film determined from the XRR equals 1.8 nm, which is larger than the deposited Co thickness due to the incorporation of oxygen atoms into the material. According to the work by van der Zaag et al.,38 for such a CoO thickness the blocking temperature for the exchange bias is approximately 160 K. Accordingly, the temperature of 10 K, at which the magnetization reversal studies were carried out, is sufficiently low to register a meaningful exchange bias field. The fit shows that the roughness of the Co layers within the [Co/Pd] stacks is comparable to their thickness of 0.3 nm, indicating large jaggedness of the Co-Pd interfaces. Therefore, it is highly unlikely that the deposited Co forms continuous layers. Taking into account the calculated roughness of each Pd layer, approximately 0.4 nm, the [Co/Pd${}_{\mathrm{0.6\ nm}}$]7 stack should instead be considered as an intermixed Co-Pd film. In the case of the [Co/Pd${}_{\mathrm{1.2\ nm}}$]7 multilayer, due to the larger thickness of the Pd layers, the intermixed Co-Pd interface regions are separated from each other by layers of pure Pd.
The zero-field-cooled (ZFC) and field-cooled (FC) curves for the SM/AFM/HM system are shown in Figure 3. The system was demagnetized at 300 K and, due to the cooling in zero magnetic field, that state was preserved at 10 K. Application of the 500 Oe external field induced a nonzero magnetization caused by the partial orientation of the magnetic moments along the field direction. Heating the system increased the thermal fluctuations of the magnetic moments, leading to more moments being unblocked and able to align with the field. This process can be observed in the ZFC curve as a progressive rise of the magnetization up to 300 K. Cooling the system from 300 K (the FC curve) resulted in a gradual decrease of the thermal fluctuation energy, resulting in the blocking of the magnetic moments. At the same time, the presence of the external magnetic field caused a stepwise reorientation of the spins along the field direction. These two factors are responsible for the progressive increase of the magnetization seen in the FC measurement. Figure 3: ZFC/FC curves for the [Co/Pd${}_{\mathrm{0.6\ nm}}$]7/CoO/[Co/Pd${}_{\mathrm{1.2\ nm}}$]7 composite measured with a +500 Oe external magnetic field. The shape of the ZFC and FC curves is typical for ferromagnets and indicates that the composite reveals ferromagnetic behavior in the whole temperature range from 10 K to 300 K. Since both curves are monotonic without any maxima, no superparamagnetic or superferromagnetic effects need to be taken into account during further analysis of the magnetization switching process.24 Additionally, in both the FC and ZFC curves there is a small increase of the signal at low temperatures. Such behavior suggests that a small paramagnetic contribution is present in the system. The magnetization reversal mechanism of the exchange-biased SM/AFM/HM composite was studied by a series of hysteresis loops measured at 10 K in the out-of-plane geometry.
Prior to the measurement, an external perpendicular magnetic field of +50 kOe was set at 300 K to align all ferromagnetic moments in the SM and HM layers in the positive out-of-plane direction. Then, the field was changed to $H_{\mathrm{cool}}$, in which the sample was cooled down to 10 K and the loops were measured. Representative hysteresis loops for various $H_{\mathrm{cool}}$ values are shown in Figure 4a. Figure 4: (a) Representative hysteresis loops measured for the [Co/Pd${}_{\mathrm{0.6\ nm}}$]7/CoO/[Co/Pd${}_{\mathrm{1.2\ nm}}$]7 composite at 10 K for different cooling fields. (b) Hysteresis loops measured at 10 K, 50 K, and 100 K after field cooling in +50 kOe. The upper panels in both figures show the d$M$/d$H$ switching field distributions. The points represent experimental data, and the solid lines are fits (see text). The hard (HM) and soft (SM) magnetization components are marked with the corresponding color fields. In all loops two pairs of switching fields per magnetization branch are present, similar to those observed for the SM/HM system, indicating a comparable spring-like behaviour of the composite film, with the SM and HM multilayers reversing at different external fields, lower and higher, respectively. Moreover, small signal steps are observed in the $M(H)$ loops around $H=0$, reflected in the d$M$/d$H$ curves as small maxima centered around zero external field. Similarly to the previous cases, the upper and lower magnetization branches of the loops were fitted by the function of Eq. (1). However, here Eq.
(2) used for the SM/HM system was complemented by another component, labeled C, to reflect the presence of the signal observed for $H\\!\approx\\!0$ Oe, giving the following expression $\begin{split}M(H)=\frac{2}{\pi}M_{\mathrm{s}}^{\mathrm{SM}}\arctan\left(g^{\mathrm{SM}}\left[\frac{H-H_{\mathrm{c}}^{\mathrm{SM}}}{H_{\mathrm{c}}^{\mathrm{SM}}}\right]\right)\\\ +\frac{2}{\pi}M_{\mathrm{s}}^{\mathrm{HM}}\arctan\left(g^{\mathrm{HM}}\left[\frac{H-H_{\mathrm{c}}^{\mathrm{HM}}}{H_{\mathrm{c}}^{\mathrm{HM}}}\right]\right)\\\ +\frac{2}{\pi}M_{\mathrm{s}}^{\mathrm{C}}\arctan\left(g^{\mathrm{C}}\left[\frac{H-H_{\mathrm{c}}^{\mathrm{C}}}{H_{\mathrm{c}}^{\mathrm{C}}}\right]\right)\ .\end{split}$ (3) The fits showed that the C component reveals no coercivity and its saturation magnetization $M_{\mathrm{s}}^{\mathrm{C}}$ accounts on average for 5% of the total magnetization exhibited by the SM/AFM/HM system. Figure 4b demonstrates the evolution of the hysteresis loop with increasing temperature, indicating a gradual decrease of the saturation magnetization as well as a reduction of the coercive field, accompanied by a reduction of the exchange bias field. Moreover, after heating the SM/AFM/HM system to 50 K or 100 K the signal at $H=0$ does not appear. Taking into account the shape of the ZFC/FC curves (Figure 3) and the results obtained from the hysteresis loops, it can be concluded that a small fraction of a paramagnetic phase is present in the SM/AFM/HM system. This is consistent with the $M\\!\propto\\!T^{-1}$ Curie law, according to which a paramagnetic contribution to the magnetic signal becomes more prominent as temperature approaches zero. The origin of such a phase can be related to the oxidation procedure applied to obtain antiferromagnetic CoO. Most of the Co volume deposited between the SM and HM stacks transformed into the antiferromagnet.
Therefore, the antiparallel arrangement of the magnetic moments gives zero net magnetization and does not contribute to the magnetization recorded in the measurements. However, there are still some Co atoms which were not oxidized and provide a paramagnetic contribution to the system. Due to their paramagnetic nature they are not magnetically coupled to the other magnetic regions. Thereby, this phase does not influence the magnetic properties revealed by the soft, hard, and antiferromagnetic components of the system, and does not affect the reversal mechanism present in the composite. For each value of the cooling field $H_{\mathrm{cool}}$ the saturation magnetizations $M_{\mathrm{s}}^{\mathrm{SM}}$ and $M_{\mathrm{s}}^{\mathrm{HM}}$ of the soft and hard ferromagnetic components are approximately equal and similar to those observed for the SM and HM stacks (Figures 1a and 1b), and to the magnetization components obtained from fitting of the SM/HM hysteresis loop (Figure 1c). Thereby, the overall magnetization of the SM/AFM/HM system, neglecting the paramagnetic contribution described above, is similar to that recorded for the SM/HM system, confirming that the magnetic signal comes only from the Co atoms within the [Co/Pd] stacks. The oxidized Co layer becomes antiferromagnetic below the Néel temperature, and preserves this property regardless of the cooling procedure, which is observed as a lack of change of the overall magnetization upon the $H_{\mathrm{cool}}$ alteration. The dependencies of the exchange bias fields and coercivities on the cooling field $H_{\mathrm{cool}}$, obtained from the fits, are presented in Figures 5a and b. Figure 5: (a) Dependencies of the full loop exchange bias field $H_{\mathrm{ex}}$ (black squares) and coercivity $H_{\mathrm{c}}$ (green squares) on the cooling field $H_{\mathrm{cool}}$ for the SM/AFM/HM composite.
(b) Dependencies of the exchange bias fields $H_{\mathrm{ex}}^{\mathrm{SM}}$ and $H_{\mathrm{ex}}^{\mathrm{HM}}$ (full squares) and coercivities $H_{\mathrm{c}}^{\mathrm{SM}}$ and $H_{\mathrm{c}}^{\mathrm{HM}}$ (full triangles) on the cooling field $H_{\mathrm{cool}}$ for the soft (SM) and hard (HM) magnetization components of the SM/AFM/HM composite. (c) Schematic representation of the different magnetization reversal mechanisms for the different regions marked in subfigures (a) and (b) by Roman numerals. The dependencies can be divided into four regions in which the magnetization reversal process takes place in a different manner. In the first region (I), covering the $H_{\mathrm{cool}}$ range from high positive fields down to -0.2 kOe, there is no significant change of the full loop bias field $H_{\mathrm{ex}}$, equal to -0.5 kOe. The corresponding $H_{\mathrm{ex}}^{\mathrm{SM}}$ and $H_{\mathrm{ex}}^{\mathrm{HM}}$ values also do not alter with the $H_{\mathrm{cool}}$: the $H_{\mathrm{ex}}^{\mathrm{SM}}$ is -0.4 kOe, and the $H_{\mathrm{ex}}^{\mathrm{HM}}$ is slightly larger than the $H_{\mathrm{ex}}$. The initial field of +50 kOe forces all magnetic moments in both the SM and HM multilayers to align and point in the positive out-of-plane direction. The absence of $H_{\mathrm{ex}}$ changes suggests that the $H_{\mathrm{cool}}$ field in this range is too weak to alter the original arrangement of the spins. Therefore, the system is cooled down with all ferromagnetic moments pointing in the direction perpendicular to the sample plane (see the corresponding path I in Figure 5c). This lack of spin reorientation upon the external field change, even for slightly negative $H_{\mathrm{cool}}$ values, is a consequence of the large out-of-plane anisotropy accompanied by the high magnetic remanence present in both the SM and HM layers.
Below the Néel temperature the CoO orders antiferromagnetically, and below the exchange bias blocking temperature its magnetic moments start to couple with the SM and HM ferromagnetic spins. Due to this exchange coupling, the orientation of the ferromagnetic spins imposes the out-of-plane alignment of the AFM moments and freezes them, making the AFM spin structure insensitive to further changes of the external magnetic field. Such a highly ordered perpendicular arrangement of the SM, HM, and AFM magnetic moments favors a strong out-of-plane exchange bias coupling at both the SM/AFM and AFM/HM interfaces, which is reflected as the largest value of $H_{\mathrm{ex}}$, seen also for both the SM and HM components. The second region (II) covers the $H_{\mathrm{cool}}$ range from -0.2 kOe to -0.5 kOe. Here, the $H_{\mathrm{ex}}$ value changes rapidly and alters its sign from negative to positive. As previously, the external field was set to +50 kOe and then changed to $H_{\mathrm{cool}}$, prior to the cooling and loop measurement. Contrary to the previous case, the cooling field becomes strong enough to drive changes in the ferromagnetic spin orientation and to start the magnetization reversal process within the SM and HM layers. Since the SM has a lower coercivity, we can expect that the arrangement of its magnetic moments is more sensitive to the external field change. Additionally, we can anticipate that the spin arrangement in the SM layer diverges from the ideal out-of-plane orientation to a greater extent than for the HM with its larger coercivity (see path II in Figure 5c). Therefore, at the AFM/HM interface the CoO spins are coupled antiferromagnetically to each other and aligned mostly perpendicularly, as imposed by the exchange coupling to the HM magnetic moments whose out-of-plane orientation is driven by the external field.
On the other hand, the random orientation of the SM magnetic moments induced by the cooling field caused the formation of variously oriented regions in the neighboring CoO layer. Within these regions the AFM spins are coupled antiparallel, and the spatial direction along which they are arranged is determined by the orientation of the nearby ferromagnetic moments due to the exchange interaction induced during the cooling procedure. Since the cooling field caused a high degree of spin disorder in the SM layer, the number of perpendicularly oriented AFM magnetic moments is smaller than in region (I). This results in the decrease of the out-of-plane exchange bias field $H_{\mathrm{ex}}$ and is also reflected in a faster reduction of the $H_{\mathrm{ex}}^{\mathrm{SM}}$ value upon the $H_{\mathrm{cool}}$ change than for the $H_{\mathrm{ex}}^{\mathrm{HM}}$ field (see Figure 5b). A further change of the $H_{\mathrm{cool}}$ to the value of -0.45 kOe (see the corresponding path in Figure 5c) leads to the bias field $H_{\mathrm{ex}}=0$. In this situation only a small fraction of the HM spins is aligned in the initial positive out-of-plane direction. Thereby, this restricts the amount of the CoO material being frozen in the same orientation and causes a meaningful reduction of the $H_{\mathrm{ex}}^{\mathrm{HM}}$ field measured in the out-of-plane direction. In the case of the SM layer, since it has a lower coercivity than the HM, the cooling field was sufficiently large to rotate a limited number of the magnetic moments and to point them in the negative out-of-plane direction. As a consequence, the amount of the AFM spins coupled in the same manner is restricted, which leads to the appearance of a weak positive out-of-plane $H_{\mathrm{ex}}^{\mathrm{SM}}$ bias field of the opposite sign in comparison to the $H_{\mathrm{ex}}^{\mathrm{HM}}$.
The contributions from these two effects, taking place in opposite spatial directions, compensate each other and give an overall exchange bias field $H_{\mathrm{ex}}$ equal to zero. In the third region of the $H_{\mathrm{ex}}$ dependence, for $H_{\mathrm{cool}}$ from -0.5 kOe to -2 kOe, the positive exchange bias field rises steadily but less rapidly than in the second region. In this regime, it can be expected that the $H_{\mathrm{cool}}$ field is sufficiently large to reverse most of the spins in the SM layer and point them in the direction opposite to the initial state set by +50 kOe. However, the field is still not strong enough to fully rotate the magnetic moments in the HM. Therefore, a good out-of-plane arrangement of AFM spins imposed by the ordered ferromagnet is present mainly at the SM/AFM interface, where the spins point in the negative perpendicular direction. This leads to the perpendicular exchange coupling and gives rise to the positive bias loop shift (see the accompanying path in Figure 5c). A gradual change of the $H_{\mathrm{cool}}$ enforces an out-of-plane spin alignment also in the HM. This results in an increase of the exchange coupling strength at the AFM/HM interface which, together with the coupling at the SM/AFM interface, is responsible for a constant rise of the exchange bias field. The situation is similar to that observed in the second region, with a considerable difference in the $H_{\mathrm{ex}}$ slope. The magnetization reversal process in the SM is more abrupt than in the HM, as seen in Figure 1 for the [Co/Pd] multilayers. Therefore, when $H_{\mathrm{cool}}$ drives the magnetization reversal, a small modification of the external field value can cause a larger spin rotation in the SM than in the HM. Since the arrangement of the ferromagnetic moments is a key parameter for the magnitude of the exchange bias field, the sharp reversal of the SM spins leads to a substantial change of $H_{\mathrm{ex}}$.
The reversal of the HM requires larger $H_{\mathrm{cool}}$ fields and is more protracted, and for these reasons it provides a less dynamic change of $H_{\mathrm{ex}}$ in the third region. The fourth region extends from -2 kOe up to -3 kOe $H_{\mathrm{cool}}$ fields. The $H_{\mathrm{ex}}$ values are maximal and the same as in region I, but with the opposite sign, which indicates the same magnetization reversal mechanism. In the first region the $H_{\mathrm{cool}}$ field was too weak to change the orientation of the spins ordered by the initial +50 kOe field, so the spins were pointing in the positive out-of-plane direction. Here, the $H_{\mathrm{cool}}$ field is large enough to magnetically saturate the SM and HM layers in the opposite, negative, out-of-plane direction (see the corresponding path in Figure 5c). Such an orientation of the SM and HM moments maximizes their exchange coupling strength to the AFM, producing the largest hysteresis loop shift. We observed that in region (I) the coercive field of the full loop has a value of approximately 3.3 kOe. In region (II) it starts to increase, reaches its maximum for $H_{\mathrm{cool}}\\!=\\!-0.45$ kOe, and then decreases in region (III), reaching in region (IV) the same value as observed in region (I). Similar behavior is observed for the $H_{\mathrm{c}}^{\mathrm{HM}}$ and $H_{\mathrm{c}}^{\mathrm{SM}}$ coercive fields. In region (I), the magnetization reversal taking place along the upper hysteresis branch is more energetically demanding than the reversal along the lower branch due to the unidirectional anisotropy induced by the ferromagnetic-antiferromagnetic coupling. In region (IV) this situation is mirrored, since the cooling field strength was sufficient to magnetically saturate both the SM and HM components. Considering region (II), a stepwise change of the cooling field leads to progressive spin-orientation disorder induced in the ferromagnetic layers.
The SM layer is more susceptible to this disarranging process due to its softer ferromagnetic properties, contrary to the HM component revealing a larger coercivity. As a result, the exchange coupling process causes the formation of variously spatially oriented regions within the antiferromagnetic layer below the blocking temperature during the cooling procedure. These regions, maintaining their antiferromagnetic nature, couple the ferromagnetic layers in diverse spatial directions and hinder the reversal process along both the upper and lower magnetization branches. Consequently, the reversal process becomes more energetically demanding as the spatial orientations of the pinning AFM regions get more diverse, leading to the coercivity rise. For the cooling field of -0.45 kOe, at which the coercivity is maximal, some of the regions in the antiferromagnet at the SM/AFM interface already enforce the appearance of a small positive exchange bias field, while the bias field associated with the AFM/HM interface still remains negative. This means that some of the AFM regions counteract the reversal along the upper magnetization branch, due to the pinning effects driven by the exchange coupling, while the others hinder the magnetization switching in the opposite field sweep direction. The coexistence of these two processes causes the reversal to be energetically demanding along both branches, leading to the increase of coercivity. In region (III) both the $H_{\mathrm{ex}}^{\mathrm{HM}}$ and $H_{\mathrm{ex}}^{\mathrm{SM}}$ bias fields changed their sign to positive. In this situation, a stepwise change of the external cooling field enforces gradual spin ordering along the negative perpendicular direction. Thereby, the magnetization reversal along the upper hysteresis branch becomes less energetically demanding than in region (II) due to the fact that the exchange anisotropy at the SM/AFM and AFM/HM interfaces tends to couple the ferromagnetic layers along the same direction.
At the same time, since the cooling field orientation induced the unidirectional anisotropy in the direction opposite to that in region (I), the reversal along the lower branch becomes less difficult. This results in the progressive decrease of the coercivity to the minimal value observed in regions (I) and (IV). To support the information on the magnetization reversal mechanism provided by the analysis of the $H_{\mathrm{ex}}$ and $H_{\mathrm{c}}$ dependencies on the cooling field $H_{\mathrm{cool}}$, a series of FORC measurements were carried out. The experiments were done at 10 K for the SM/AFM/HM system cooled in the external fields of $+50$ kOe and $-0.45$ kOe to obtain FORC distributions for the cases of the maximal exchange bias field value and of the unbiased loop. Materials with the exchange bias effect show a training effect, which is observed as a gradual decrease of the exchange bias field magnitude through consecutive hysteresis loop cycling.39 The $|H_{\mathrm{ex}}|$ falloff is mainly present during the first few cycles, and after that the changes of the exchange field become less prominent. During a FORC measurement the external field is switched many times between the reversal $H_{\mathrm{R}}$ and saturation $\pm 10$ kOe fields. Therefore, starting the measurement from the untrained state would let the training effect distort the resultant distribution, especially for the first few $M(H_{\mathrm{R}},H)$ curves. To minimize the influence of this effect on the final FORC, prior to the measurements at low temperature, the external magnetic field was switched from +50 kOe to -50 kOe 15 times. Figure 6: FORC measurements for the [Co/Pd${}_{\mathrm{0.6\ nm}}$]7/CoO/[Co/Pd${}_{\mathrm{1.2\ nm}}$]7 composite obtained at 10 K for different $H_{\mathrm{cool}}$ cooling fields: (a) +50 kOe, and (b) -0.45 kOe.
Left: $M(H,H_{\mathrm{R}})$ FORC loop families for the ascending and descending magnetization branches, overlaid on the hysteresis loops. The characteristic features are marked with color arrows (see text). The initial magnetization data, dependent on the reversal field $H_{\mathrm{R}}$ and the external field $H$ and measured for gradually ascending and descending $H$ and $H_{\mathrm{R}}$, were transformed into FORC distributions $\rho(H_{\mathrm{R}},H)$ by calculating mixed second-order derivatives of the magnetization $M(H_{\mathrm{R}},H)$:40 $\rho(H_{\mathrm{R}},H)=-\frac{1}{2}\frac{\partial^{2}M(H_{\mathrm{R}},H)}{\partial H\partial H_{\mathrm{R}}}\ .$ (4) Such a definition of the FORC distribution $\rho(H_{\mathrm{R}},H)$ eliminates reversible components of the magnetization reversal process. Therefore, any nonzero $\rho(H_{\mathrm{R}},H)$ signal represents an irreversible magnetization switching. The FORC $\rho$ distribution can be plotted in a coordinate system rotated by 45∘, with the coercive field $h_{\mathrm{c}}$ and interaction field $h_{\mathrm{u}}$ on the horizontal and vertical axes, respectively. These coordinates are defined as follows: $h_{\mathrm{c}}=\frac{H-H_{\mathrm{R}}}{2}\ ,\ h_{\mathrm{u}}=\frac{H+H_{\mathrm{R}}}{2}\ .$ (5) In such a coordinate system, $h_{\mathrm{c}}$ represents the local coercive field of each hysteron, and $h_{\mathrm{u}}$ corresponds to the local interaction field shifting the hysteron’s center along the field axis.41 The obtained FORC distributions $\rho(h_{\mathrm{c}},h_{\mathrm{u}})$ for both the ascending and descending magnetization branches, together with the FORC curve families $M(H_{\mathrm{R}},H)$ overlaid on the hysteresis loop, are shown in Figure 6. In each FORC distribution there are two main prominent features related to the irreversible magnetization switching processes.
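Equations (4) and (5) can be sketched numerically. The example below builds a synthetic FORC family for a single smoothed hysteron with an assumed local coercivity $h_{c0}=1$ kOe and interaction shift $h_{u0}=-0.3$ kOe (hypothetical values, not the measured data), evaluates the mixed derivative of Eq. (4) by finite differences, and recovers the hysteron position in the rotated coordinates of Eq. (5):

```python
import numpy as np

def forc_distribution(M, H, HR):
    """FORC distribution of Eq. (4): rho = -(1/2) d2M/(dH dHR),
    evaluated on a regular (HR, H) grid by central finite differences."""
    dM_dH = np.gradient(M, H, axis=1)             # derivative along the sweep field H
    return -0.5 * np.gradient(dM_dH, HR, axis=0)  # then along the reversal field HR

# Synthetic descending-branch FORC family for one smoothed hysteron
# (assumed parameters; fields in kOe).
hc0, hu0, k = 1.0, -0.3, 4.0
H = np.linspace(-5.0, 5.0, 201)
HR = np.linspace(-5.0, 5.0, 201)
HH, RR = np.meshgrid(H, HR)                       # rows: HR, columns: H

# Fraction of the hysteron switched down at the reversal field HR ...
p_down = 0.5 * (1.0 - np.tanh(k * (RR - (hu0 - hc0))))
# ... which switches back up near H = hu0 + hc0 on the return sweep.
M = (1.0 - p_down) + p_down * np.tanh(k * (HH - (hu0 + hc0)))

rho = forc_distribution(M, H, HR)
i, j = np.unravel_index(np.argmax(rho), rho.shape)
hc_peak = (H[j] - HR[i]) / 2.0                    # rotated coordinates, Eq. (5)
hu_peak = (H[j] + HR[i]) / 2.0
```

The irreversible signal peaks at $(h_{\mathrm{c}},h_{\mathrm{u}})\approx(h_{c0},h_{u0})$, so a nonzero $h_{\mathrm{u}}$ shift of a ridge maps directly onto an interaction (bias) field, which is how the ridge shift in the measured distributions is read.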
These features are characteristic of a reversal process taking place by the nucleation of magnetic domains and subsequent domain-wall motion.42 First, the ridges seen in all FORCs and marked by the green arrows in Figure 6 are directly related to the nucleation and subsequent rapid propagation of the magnetic domains with reversed magnetization orientation within the system. It can be seen in both FORCs measured for the descending magnetization branches that the ridge is centered on the point with no shift along the vertical $h_{\mathrm{u}}$ axis. On the other hand, the ridge center observed for ascending fields after cooling the sample in +50 kOe is shifted along $h_{\mathrm{u}}$ by approximately -0.3 kOe (blue arrow in Figure 6a). This feature is connected to the presence of the exchange interaction in the system, which is, in our case, the ferromagnetic-antiferromagnetic coupling present at the SM/AFM and AFM/HM interfaces in the composite. The value of the $h_{\mathrm{u}}$ shift is lower than the exchange bias field of -0.5 kOe observed in the hysteresis loop. However, it has to be taken into account that the FORCs were measured on the trained system, where the value of $H_{\mathrm{ex}}$ is expected to be lower than for the untrained composite. Therefore, the $h_{\mathrm{u}}$ shift is evidence of the presence of the ferromagnetic-antiferromagnetic coupling in the system which drives the exchange bias effect. For the case of $H_{\mathrm{cool}}\\!=\\!-0.45$ kOe, when no hysteresis loop shift was recorded (see Figure 5), the corresponding ascending ridge also reveals no shift along the interaction axis $h_{\mathrm{u}}$. The second feature observed in each FORC is a positive-negative pair of irreversible regions along the lines marked by the purple arrows in Figures 6a and b.
Such paired signals are distinctive of magnetization switching by domain nucleation and domain-wall motion, and are related to the annihilation of magnetic domains at the end of the magnetization reversal process, as the external field induces magnetic saturation of the system. It is also seen in the FORCs for the system cooled in $H_{\mathrm{cool}}\\!=\\!-0.45$ kOe (Figure 6b) that the irreversible regions related to the domain annihilation are more pronounced than the corresponding features observed after field cooling in +50 kOe (black arrows in Figure 6b). All FORCs show no irreversible signal around $h_{\mathrm{c}}\\!=\\!0$. This confirms that the magnetization drop observed in the hysteresis loops for an external magnetic field around 0 (see Figure 4a) has a paramagnetic nature. The paramagnetic contribution is also reflected in the FC and ZFC curves (Figure 3) and observed as a small magnetization rise at low temperature. The origin of this effect is the presence of unoxidized Co atoms in the AFM. They do not exhibit ferromagnetic behavior and are not magnetically coupled to the other magnetic regions. Due to their nature, they do not influence the magnetization reversal process of the composite. ## 4 Conclusions In this paper, we describe the magnetization reversal mechanism of the exchange-biased [Co/Pd${}_{\mathrm{0.6\ nm}}$]7/CoO/[Co/Pd${}_{\mathrm{1.2\ nm}}$]7 composite, where the antiferromagnetic film is sandwiched between the soft and hard ferromagnetic layers. Upon cooling such an exchange-spring-like system in an external magnetic field $H_{\mathrm{cool}}$, the initial alignment of the ferromagnetic spins forces the antiferromagnet to freeze in a certain state. This state, induced by the cooling field, is then crucial for the magnetization switching process, which is reflected in the change of the magnitude and sign of the exchange bias field $H_{\mathrm{ex}}$ with $H_{\mathrm{cool}}$.
Since there are two different ferromagnetic films on either side of the AFM, the shape of the dependence of the exchange bias field $H_{\mathrm{ex}}$ on the cooling field $H_{\mathrm{cool}}$ is conditioned by the reversal mechanisms manifested by these two materials. In the case where the external field is large enough to affect the spin orientation in the soft magnet while being too weak to rotate the moments of the hard magnet, we observed a rapid change of the exchange bias field. On the other hand, the rotation of the magnetic moments of the hard magnetic film induced by $H_{\mathrm{cool}}$ resulted in a less prominent slope of the $H_{\mathrm{ex}}$ curve. Therefore, the dependence is not symmetrical around the point $H_{\mathrm{ex}}\\!=\\!0$. We also observed that cooling the system in zero field does not eliminate the exchange bias loop shift; this is caused by the large out-of-plane anisotropy, which leads to the high magnetic remanence displayed by the system. The FORC studies show that the magnetization reversal is driven by the nucleation of magnetic domains with opposite orientation, followed by domain-wall motion. Additionally, we recorded a shift of the FORC irreversible region caused by the presence of the exchange bias field in the system. ## References * [1] Kiwi, M. Exchange Bias Theory. J. Magn. Magn. Mater. 2001, 234, 584–595. * [2] Nogués, J.; Schuller, I. K. Exchange Bias. J. Magn. Magn. Mater. 1999, 192, 203–232. * [3] Shiratsuchi, Y.; Nakano, Y.; Inami, N.; Ueno, T.; Ono, K.; Kumai, R.; Sagayama, R.; Nakatani, R. Determination of Specific Ion Positions of Cr3+ and O2- in Cr2O3 Thin Films and their Relationship to Exchange Anisotropy at Co/Cr2O3 Interfaces. J. Appl. Phys. 2018, 123, 103903. * [4] Miltényi, P.; Gierlings, M.; Keller, J.; Beschoten, B.; Güntherodt, G.; Nowak, U.; Usadel, K. D. Diluted Antiferromagnets in Exchange Bias: Proof of the Domain State Model. Phys. Rev. Lett. 2000, 84, 4224–4227. * [5] Takano, K.; Kodama, R.
H.; Berkowitz, A. E.; Cao, W.; Thomas, G. Interfacial Uncompensated Antiferromagnetic Spins: Role in Unidirectional Anisotropy in Polycrystalline Ni81Fe19/CoO Bilayers. Phys. Rev. Lett. 1997, 79, 1130–1133. * [6] Malozemoff, A. P. Random-Field Model of Exchange Anisotropy at Rough Ferromagnetic-Antiferromagnetic Interfaces. Phys. Rev. B 1987, 35, 3679–3682. * [7] Negulescu, B.; Lacour, D.; Montaigne, F.; Gerken, A.; Paul, J.; Spetter, V.; Marien, J.; Duret, D.; Hehn, M. Wide Range and Tunable Linear Magnetic Tunnel Junction Sensor Using Two Exchange Pinned Electrodes. Appl. Phys. Lett. 2009, 95, 112502. * [8] Ehresmann, A.; Koch, I.; Holzinger, D. Manipulation of Superparamagnetic Beads on Patterned Exchange-Bias Layer Systems for Biosensing Applications. Sensors 2015, 15, 28854–28888. * [9] Issa, B.; Obaidat, I. M.; Albiss, B. A.; Haik, Y. Magnetic Nanoparticles: Surface Effects and Properties Related to Biomedicine Applications. Int. J. Mol. Sci. 2013, 14, 21266–21305. * [10] Parkin, S.; Jiang, X.; Kaiser, C.; Panchula, A.; Roche, K.; Samant, M. Magnetically Engineered Spintronic Sensors and Memory. Proc. IEEE 2003, 91, 661–679. * [11] Gasi, T.; Nayak, A. K.; Winterlik, J.; Ksenofontov, V.; Adler, P.; Nicklas, M.; Felser, C. Exchange-Spring Like Magnetic Behavior of the Tetragonal Heusler Compound Mn2FeGa as a Candidate for Spin-Transfer Torque. Appl. Phys. Lett. 2013, 102, 202402. * [12] Polenciuc, I.; Vick, A. J.; Allwood, D. A.; Hayward, T. J.; Vallejo-Fernandez, G.; O’Grady, K.; Hirohata, A. Domain Wall Pinning for Racetrack Memory Using Exchange Bias. Appl. Phys. Lett. 2014, 105, 162406. * [13] Perzanowski, M.; Krupinski, M.; Zarzycki, A.; Dziedzic, A.; Zabila, Y.; Marszalek, M. Exchange Bias in the [CoO/Co/Pd]10 Antidot Large Area Arrays. ACS Appl. Mater. Interfaces 2017, 9, 33250–33256. * [14] Carpenter, R.; Vick, A. J.; Hirohata, A.; Vallejo-Fernandez, G.; O’Grady, K. Effect of Grain Cutting in Exchange Biased Nanostructures. J. Appl. Phys.
2014, 115, 17B905. * [15] Suck, S. Y.; Neu, V.; Wolff, U.; Bahr, S.; Bourgeois, O.; Givord, D. Magnetic Force Microscopy Analysis of Magnetization Reversal in Exchange-Biased Co/CoO Nanostructure Arrays. Appl. Phys. Lett. 2009, 95, 162503. * [16] Salazar-Alvarez, G.; Geshev, J.; Agramunt-Puig, S.; Navau, C.; Sanchez, A.; Sort, J.; Nogués, J. Tunable High-Field Magnetization in Strongly Exchange-Coupled Freestanding Co/CoO Core/Shell Coaxial Nanowires. ACS Appl. Mater. Interfaces 2016, 8, 22477–22483. * [17] Swiatkowska-Warkocka, Z.; Pyatenko, A.; Shimizu, Y.; Perzanowski, M.; Zarzycki, A.; Jany, B. R.; Marszalek, M. Tailoring of Magnetic Properties of NiO/Ni Composite Particles Fabricated by Pulsed Laser Irradiation. Nanomaterials 2018, 8, 790. * [18] Shi, D.-W.; Javed, K.; Ali, S. S.; Chen, J.-Y.; Li, P.-S.; Zhao, Y.-G.; Han, X.-F. Exchange-Biased Hybrid Ferromagnetic-Multiferroic Core-Shell Nanostructures. Nanoscale 2014, 6, 7215–7220. * [19] Tripathy, D.; Adeyeye, A. O.; Singh, N.; Stamps, R. L. Controlling the Magnetization Reversal in Exchange-Biased Co/CoO Elongated Nanorings. Nanotechnology 2009, 20, 015304. * [20] Gilbert, D. A.; Ye, L.; Varea, A.; Agramunt-Puig, S.; del Valle, N.; Navau, C.; López-Barbera, J. F.; Buchanan, K. S.; Hoffmann, A.; Sánchez, A.; Sort, J.; Liu, K.; Nogués, J. A New Reversal Mode in Exchange Coupled Antiferromagnetic/Ferromagnetic Disks: Distorted Viscous Vortex. Nanoscale 2015, 7, 9878–9885. * [21] Carcia, P. F.; Meinhaldt, A. D.; Suna, A. Perpendicular Magnetic Anisotropy in Pd/Co Thin Film Layered Structures. Appl. Phys. Lett. 1985, 47, 178–180. * [22] Carrey, J.; Berkowitz, A. E.; Egelhoff, W. F.; Smith, D. J. Influence of Interface Alloying on the Magnetic Properties of Co/Pd Multilayers. Appl. Phys. Lett. 2003, 83, 5259–5261. * [23] Menéndez, E.; Modarresi, H.; Dias, T.; Geshev, J.; Pereira, L. M. C.; Temst, K.; Vantomme, A.
Tuning the Ferromagnetic-Antiferromagnetic Interfaces of Granular Co-CoO Exchange Bias Systems by Annealing. J. Appl. Phys. 2014, 115, 133915. * [24] Perzanowski, M.; Marszalek, M.; Zarzycki, A.; Krupinski, M.; Dziedzic, A.; Zabila, Y. Influence of Superparamagnetism on Exchange Anisotropy at CoO/[Co/Pd] Interfaces. ACS Appl. Mater. Interfaces 2016, 8, 28159–28165. * [25] Dobrynin, A. N.; Givord, D. Exchange Bias in a Co/CoO/Co Trilayer with Two Different Ferromagnetic-Antiferromagnetic Interfaces. Phys. Rev. B 2012, 85, 014413. * [26] Dias, T.; Menéndez, E.; Liu, H.; Van Haesendonck, C.; Vantomme, A.; Temst, K.; Schmidt, J. E.; Giulian, R.; Geshev, J. Rotatable Anisotropy Driven Training Effects in Exchange Biased Co/CoO Films. J. Appl. Phys. 2014, 115, 243903. * [27] Hu, Y.; Lu, Q.; Chi, X.; Zhang, Z.; Hu, T.; Li, R.; Yu, L.; Du, A. Cooling-Field Dependence of Dipole-Induced Loop Bias. Nanotechnology 2019, 30, 325701. * [28] Rui, W. B.; Hu, Y.; Du, A.; You, B.; Xiao, M. W.; Zhang, W.; Zhou, S. M.; Du, J. Cooling Field and Temperature Dependent Exchange Bias in Spin Glass/Ferromagnet Bilayers. Sci. Rep. 2015, 5, 13640. * [29] Torres, F.; Morales, R.; Schuller, I. K.; Kiwi, M. Dipole-Induced Exchange Bias. Nanoscale 2017, 9, 17074–17079. * [30] Thiele, J.-U.; Maat, S.; Fullerton, E. E. FeRh/FePt Exchange Spring Films for Thermally Assisted Magnetic Recording Media. Appl. Phys. Lett. 2003, 82, 2859–2861. * [31] Asti, G.; Ghidini, M.; Pellicelli, R.; Pernechele, C.; Solzi, M.; Albertini, F.; Casoli, F.; Fabbrici, S.; Pareti, L. Magnetic Phase Diagram and Demagnetization Processes in Perpendicular Exchange-Spring Multilayers. Phys. Rev. B 2006, 73, 094406. * [32] de Sousa, N.; Apolinario, A.; Vernay, F.; Monteiro, P. M. S.; Albertini, F.; Casoli, F.; Kachkachi, H.; Schmool, D. S. Spin Configurations in Hard/Soft Coupled Bilayer Systems: Transitions from Rigid Magnet to Exchange-Spring. Phys. Rev. B 2010, 82, 104433. * [33] Stearns, M. B.; Cheng, Y.
Determination of Para- and Ferromagnetic Components of Magnetization and Magnetoresistance of Granular Co/Ag Films. J. Appl. Phys. 1994, 75, 6894–6899. * [34] Kuzminski, M.; Slawska-Waniewska, A.; Lachowicz, H. K.; Knobel, M. Effect of Particle Size and Surface-to-Volume Ratio Distribution on Giant Magnetoresistance (GMR) in Melt-Spun Cu-Co Alloys. J. Magn. Magn. Mater. 1999, 205, 7–13. * [35] Kneller, E. F.; Hawig, R. The Exchange-Spring Magnet: A New Material Principle for Permanent Magnets. IEEE Trans. Magn. 1991, 27, 3588–3600. * [36] Fullerton, E. E.; Jiang, J. S.; Grimsditch, M.; Sowers, C. H.; Bader, S. D. Exchange-Spring Behavior in Epitaxial Hard/Soft Magnetic Bilayers. Phys. Rev. B 1998, 58, 12193–12200. * [37] Casoli, F.; Albertini, F.; Nasi, L.; Fabbrici, S.; Cabassi, R.; Bolzoni, F.; Bocchi, C. Strong Coercivity Reduction in Perpendicular FePt/Fe Bilayers Due to Hard/Soft Coupling. Appl. Phys. Lett. 2008, 92, 142506. * [38] van der Zaag, P. J.; Ijiri, Y.; Borchers, J. A.; Feiner, L. F.; Wolf, R. M.; Gaines, J. M.; Erwin, R. W.; Verheijen, M. A. Difference between Blocking and Néel Temperatures in the Exchange Biased Fe3O4/CoO System. Phys. Rev. Lett. 2000, 84, 6102–6105. * [39] Binek, C. Training of the Exchange-Bias Effect: A Simple Analytic Approach. Phys. Rev. B 2004, 70, 014421. * [40] Palmero, E. M.; Béron, F.; Bran, C.; del Real, R. P.; Vázquez, M. Magnetic Interactions in Compositionally Modulated Nanowire Arrays. Nanotechnology 2016, 27, 435705. * [41] Ruta, S.; Hovorka, O.; Huang, P.-W.; Wang, K.; Ju, G.; Chantrell, R. First Order Reversal Curves and Intrinsic Parameter Determination for Magnetic Materials; Limitations of Hysteron-Based Approaches in Correlated Systems. Sci. Rep. 2017, 7, 45218. * [42] Rahman, M. T.; Dumas, R. K.; Eibagi, N.; Shams, N. N.; Wu, Y.-C.; Liu, K.; Lai, C.-H. Controlling Magnetization Reversal in Co/Pt Nanostructures with Perpendicular Anisotropy. Appl. Phys. Lett. 2009, 94, 042507.
# One-parameter robust global frequency estimator for slowly varying amplitude and noisy oscillations Michael Ruderman<EMAIL_ADDRESS>University of Agder, 4604-Norway ###### Abstract Robust online estimation of an oscillation frequency belongs to the classical problems of system identification and adaptive control. The given harmonic signal can be noisy and of varying amplitude at the same time, as in the case of damped vibrations. A novel robust frequency-estimation algorithm is proposed here, motivated by an existing globally convergent frequency estimator. The advantage of the proposed estimator is that it requires only one design parameter and is robust against measurement noise and initial conditions. The proven global convergence also allows for slowly varying amplitudes, which is useful for applications with damped oscillations or additionally shaped harmonic signals. The proposed analysis is simple and relies on an averaging theory of periodic signals. Our results show an exponential convergence rate, which depends, analytically, on the sought frequency, the adaptation gain and the oscillation amplitude. Numerical and experimental examples demonstrate the robustness and efficiency of the proposed estimator for signals with slowly varying amplitude and noise. ###### keywords: Frequency estimation , adaptive notch filter , robust estimator , identification algorithm ††journal: Mechanical Systems and Signal Processing ## 1 Introductory note A common problem associated with online estimation of the unknown frequency of harmonic signals has been studied in multiple works (see, e.g., [1, 2, 3, 4, 5] and references therein). Some differing approaches rely on extended observer or Kalman filter principles (see, e.g., [6]). Other approaches (particularly those adopted in power-electronics applications) use so-called phase-locked-loop (PLL) algorithms (see, e.g., [7]).
Frequency estimation can be seen as one of the fundamental questions in systems and signals theory, and it has multiple practical mechanical and electrical applications. For instance, it can be found in the following: power system converters and controllers (e.g., [7]); active control of sound and vibrations (e.g., [8]); rotary machines, like in magnetic bearings (e.g., [9]); drives with eccentricities (e.g., [10]); and motion control with periodic and vibrational disturbances of, e.g., disk drives [11] or suspensions [12], to mention just a few. In several application scenarios, like the case of damped vibrations, the time-varying amplitude of a harmonic signal is, however, the most challenging factor for robust and sufficiently fast estimation of the unknown frequency. Besides, measurement and process noise can further degrade those estimation approaches for which the convergence is theoretically proven but which suffer from sensitivity to real measured (physical) data, making their applicability questionable in terms of robust convergence. Among the numerous existing frequency-estimation methods, the globally convergent estimator introduced in [2] appears promising due to its structural simplicity and low number of design parameters. This paper focuses on the globally convergent frequency estimator (cf. [2, 4]) and proposes a one-parameter robust modification, which targets unbiased harmonic signals with a slowly varying amplitude and band-limited white noise. The rest of the paper is structured as follows. The problem statement for a noisy harmonic signal with slowly varying amplitude and an unknown frequency of interest is given in Section 2. The existing globally convergent frequency estimator (relevant to this work) is summarized in Section 3. In this regard, the differences in the proposed estimator (and therefore the contributions of the paper) are highlighted. The main results, with corresponding analysis and proofs, are provided in Section 4.
In Section 5, various simulated and experimental harmonic signals are shown to confirm the highlighted estimator properties. Brief conclusions are drawn in Section 6. ## 2 Problem statement We consider a classical estimation problem, which is of importance for system identification and adaptive control, where a signal of the form $\sigma(t)=k(t)\sin(\omega_{0}t)+\eta(t),$ (1) is the single measured oscillating quantity. The harmonic signal has a slowly varying amplitude within a certain range $\underline{k}\leq k\leq\overline{k}$, and an unknown angular frequency $\omega_{0}>0$, which we are mainly interested in. (For the rest of the paper, we assume a sufficiently slow amplitude variation, i.e., a low $|\dot{k}|$ compared to the basic frequency $\omega_{0}$ of the sinusoidal signal. In terms of the averaging theory of periodic signals, the system (1) then contains a timescale separation, meaning a faster oscillation with the angular frequency $\omega_{0}$ versus a slower drift of $k(t)$.) The measured $\sigma(t)$ is affected by the noise $\eta(t)$, which is a zero-mean ergodic process uniformly distributed over the whole frequency range $\omega$. In other words, from a signal-processing viewpoint, $\eta(t)$ can be seen as power-limited (and therefore band-limited) white noise. We will assume a constant power spectral density (PSD) of the noise, i.e., $\mathrm{PSD}\\{\eta(\omega)\\}\equiv p=\mathrm{const}$, and a reasonable (in terms of the signal-to-noise ratio) finite variance $\mathrm{Var}\\{\eta(t)\\}=\tau^{2}$, without going into further details about the spectral properties of $\eta(j\omega)$. Although biased sinusoids with unknown frequency are often considered (e.g., [3, 13]), this work focuses only on unbiased sinusoidal signals (1), keeping in mind that a constant bias can be removed by high-pass or other dedicated filtering approaches. Rather, we emphasize that $k(t)$ can be slowly varying.
For instance, if one allows for $k(t)\rightarrow 0$ with progressing time, then (1) will represent a damped oscillation response $\sigma(t)$, where only a finite number of periods is available for estimating $\omega_{0}$. ## 3 Globally convergent adaptive notch filter The globally convergent adaptive filter (also denoted as an adaptive notch filter (ANF) due to the structural properties of a second-order notch filter) was provided and analyzed in [2], based on the original work [14]. A continuous-time version of the ANF [14] can also be found in [1]. The ANF considers a sinusoidal signal (1), but without explicit amplitude variation $k(t)$ and noise $\eta(t)$, and estimates the unknown frequency $\omega_{0}$ using the following structure [2]: $\displaystyle\ddot{x}+2\zeta\theta\dot{x}+\theta^{2}x$ $\displaystyle=$ $\displaystyle\theta^{2}\sigma,$ (2) $\displaystyle\dot{\theta}$ $\displaystyle=$ $\displaystyle-\gamma x\bigl{(}\theta^{2}\sigma-2\zeta\theta\dot{x}\bigr{)}.$ (3) Subsequently, another scaling of the forcing signal in (2) and, correspondingly, of the error signal in (3) was proposed and analyzed in [4], where it was claimed that the adaptation scheme and the necessary stability condition become independent of the damping ratio $\zeta$. Both aforementioned formulations of the ANF have two real positive design parameters: $\gamma$, which determines the ’adaptation speed’, and $\zeta$, which determines the ’depth of the notch’ and hence the noise sensitivity, according to [4]. In both approaches [2, 4], the stability analysis and proof of convergence rely on the concept of the uniqueness of a periodic orbit $\bigl{[}\bar{x},\bar{\dot{x}},\bar{\theta}\bigr{]}(t)$, where $(\bar{\cdot})$ denotes equilibria (not necessarily constant). Towards this unique orbit, with $\bar{\theta}=\omega_{0}$, the adaptive system (2), (3) is then shown to converge globally.
Since its appearance, the ANF approach has become popular, and its performance, design equations and applications, as well as signal-to-noise ratios and initialization, have been addressed in several works (see, e.g., [15]). The estimator introduced below keeps the same motivating ANF principle while (i) adapting the scaling factor, (ii) using the sign instead of the $x$-state in (3), and (iii) canceling the damping-ratio design parameter, which is shown to be unnecessary. The proposed convergence analysis is based on a simpler consideration of steady states in the frequency domain, through keeping $\theta$ as a frozen parameter, which avoids the challenges of demonstrating convergence of (2), (3) to a unique periodic orbit (cf. [2, 4]). At the same time, we can demonstrate explicitly an exponential convergence rate and can take into account (explicitly) the impact of measurement noise and (implicitly) the insensitivity to slow amplitude variations. ## 4 Main results The proposed robust global frequency estimator is provided by the auxiliary second-order system $\displaystyle\left[\begin{array}[]{c}\dot{x}_{1}\\\ \dot{x}_{2}\\\ \end{array}\right]$ $\displaystyle=$ $\displaystyle\left[\begin{array}[]{cc}0&1\\\ -\theta^{2}&-2\theta\\\ \end{array}\right]\left[\begin{array}[]{c}x_{1}\\\ x_{2}\\\ \end{array}\right]+\left[\begin{array}[]{c}0\\\ 2\theta\\\ \end{array}\right]\sigma,$ (12) $\displaystyle y$ $\displaystyle=$ $\displaystyle\left[\begin{array}[]{cc}0&1\\\ \end{array}\right]\left[\begin{array}[]{c}x_{1}\\\ x_{2}\\\ \end{array}\right],$ (16) and the adaptation law $\dot{\theta}=-\gamma\,\mathrm{sign}(x_{1})(\sigma-y),$ (17) where $\gamma>0$ appears as the single design parameter. Being excited by $\sigma(t)$, the vector of dynamic states $[x_{1},x_{2}]^{T}$ performs steady-state oscillations at the angular frequency $\theta=\omega_{0}$, once the input-output synchronization brings the _output error_ to zero, i.e., $e=\sigma-y\rightarrow 0$.
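A minimal forward-Euler discretization of the estimator (12), (16), (17) can look as follows. The step size, gain, duration and initialization are illustrative choices (loosely matching the first example of Section 5.1), not values prescribed by the paper.

```python
import numpy as np

def estimate_frequency(sigma, dt, gamma, theta0):
    """Forward-Euler integration of the estimator (12), (16), (17).

    sigma: sampled measurement (1); dt: step size; gamma: the single design
    parameter (adaptation gain); theta0: frequency initialization (rad/s).
    """
    x1, x2, theta = 0.0, 0.0, theta0
    hist = np.empty(sigma.size)
    for i, s in enumerate(sigma):
        y = x2                            # output, Eq. (16)
        dx1 = x2                          # state equations, Eq. (12)
        dx2 = -theta**2 * x1 - 2.0 * theta * x2 + 2.0 * theta * s
        dtheta = -gamma * np.sign(x1) * (s - y)   # adaptation law, Eq. (17)
        x1 += dt * dx1
        x2 += dt * dx2
        theta += dt * dtheta
        hist[i] = theta
    return hist

# demo: pure sinusoid with omega0 = 50 rad/s, theta(0) = 100 rad/s (cf. Sec. 5.1)
dt = 1e-4
t = np.arange(0.0, 4.0, dt)
omega0 = 50.0
theta_hist = estimate_frequency(np.sin(omega0 * t), dt, gamma=100.0, theta0=100.0)
print(theta_hist[-1])   # converges toward omega0 = 50
```

Since the $[x_{1},x_{2}]$ subsystem has a double pole at $-\theta$, the explicit Euler step stays stable for $dt\ll 1/\theta$; a stiffer integrator would be needed for much larger gains or frequencies.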
For a slowly varying $k(t)$, the $2\theta$ input coupling factor used (on the right-hand side of (12)) renders the input-output ratio of (12), (16) independent of $k$ in a steady state, thus making the frequency estimation insensitive to slow amplitude variations. It is also worth noting that, unlike in similar adaptation mechanisms [2], [4], no further scaling factors are assigned to the forcing signal $\sigma(t)$. A clear advantage of this purposeful simplification will be shown later when analyzing the convergence rate. While an ANF contains an additional damping parameter, which determines the ’depth of the notch’ and, according to [2, 4], its noise sensitivity, we deliberately assume a critically damped dynamic system (12), (16) (compare with (2) for $\zeta=1$). This simplification not only removes the second design parameter but also eliminates transient oscillations of $\theta(t)$ in the course of the frequency adaptation. This comes as no surprise, since the linear $[x_{1},x_{2}]^{T}$ sub-dynamics with $\zeta=1$ have no conjugate-complex poles and therefore no transient oscillations (thus also none in $e(t)$). Later, we will demonstrate a performance degradation when a damping factor $0<\zeta<1$ is included, as a parameter, in (12), (16). Denoting the system matrix and the input and output coupling vectors in (12), (16) by $A$, $B$ and $C$, correspondingly, the input-to-output transfer function $G(j\omega)=\frac{y(j\omega)}{\sigma(j\omega)}=C^{T}(j\omega I-A)^{-1}B,$ (18) can be written for the frequency domain, when $\theta$ is considered to be a frozen parameter. Obviously, $I$ is the $2\times 2$ identity matrix, and $\omega$ is the angular frequency variable. As long as the output error is not zero, it is excited as $e(j\omega)=(1-G)G^{-1}y(j\omega),$ (19) by the forced dynamics (12), (16), so that $\bigl{|}e(j\omega)\bigr{|}>0$ for all $\omega$ except $\omega=\omega_{0}$.
This motivates the adaptation law (17), which ensures $\dot{\theta}\neq 0$ always except at $\theta=\omega_{0}$. Denoting the above transfer function, i.e., from the filter output to the error, by $E(j\omega)=\frac{e(j\omega)}{y(j\omega)}=\bigl{(}1-G(j\omega)\bigr{)}G(j\omega)^{-1},$ we can take a closer look at the amplitude and phase response of $E(\cdot)$, illustrated in Fig. 1 for $\theta=10$ rad/sec. Figure 1: Amplitude and phase response of the transfer function $E$, for exemplary $\theta=10$ rad/sec; the transfer function with an additional damping $\zeta=0.1$ (cf. with (2)) is included (gray line) for the sake of comparison. It becomes evident that the _error transfer function_ amplitude $|E|$ has its global minimum at $\omega=\theta$, so that $|e(j\omega)|\rightarrow 0$ when $\theta(t)\rightarrow\omega_{0}$, and this is independent of the estimator initialization $\theta(0)$. Indeed, without loss of generality, we can assume an arbitrary $0<\theta(0)\neq\omega_{0}$ so that $\bigl{|}e(j\omega)\bigr{|}_{\omega=\omega_{0}}=a$, where $a>0$ is some positive magnitude determined by the oscillating output state and the error transfer function. We also recall that the oscillating output in a steady state will then be given by $y=G(j\omega)\sigma$, since the system (12), (16) is asymptotically stable and, moreover, critically damped. This basically leads to the harmonic behavior of $y(t)$ and $e(t)$, provided $\omega_{0}=\mathrm{const}.$ Further, it becomes evident (cf. the phase response in Fig. 1) that $\angle e(j\omega)$ always lags behind the phase $\angle y(j\omega)$ for $\omega<\omega_{0}$, which is due to $\angle E(j\omega)\rightarrow-\pi/2$. Correspondingly, $\angle e(j\omega)$ always leads the phase $\angle y(j\omega)$ for $\omega>\omega_{0}$, which is due to $\angle E(j\omega)\rightarrow\pi/2$. It is only at $\omega=\omega_{0}$ that both signals $y(j\omega)$ and $e(j\omega)$ are in phase and $e\rightarrow 0$.
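For the critically damped system (12), (16), direct evaluation of (18) gives the closed forms $G(s)=2\theta s/(s+\theta)^{2}$ and $E(s)=(s^{2}+\theta^{2})/(2\theta s)$ (a derivation sketch, consistent with Fig. 1). A short numeric check of the minimum of $|E|$ and the $\mp\pi/2$ phases:

```python
import numpy as np

theta = 10.0                       # frozen parameter, as in Fig. 1 (rad/s)
w = np.linspace(0.1, 100.0, 99901) # angular frequency grid, step = 0.001 rad/s
s = 1j * w

# G(jw) of (12), (16) and the error transfer function E = (1 - G)/G
G = 2.0 * theta * s / (s**2 + 2.0 * theta * s + theta**2)
E = (1.0 - G) / G                  # closed form: (s^2 + theta^2)/(2*theta*s)

i_min = int(np.argmin(np.abs(E)))
print(w[i_min])                         # the global minimum of |E| sits at w = theta
print(np.angle(E[0]), np.angle(E[-1]))  # approx -pi/2 below theta, +pi/2 above
```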
The $\pm\pi/2$ phase response of $\angle E(j\omega)$ allows for providing an ever-increasing or decreasing $\theta(t)$ on the left-hand or, respectively, right-hand side of $\omega_{0}$. With this in mind, we are now in a position to formally prove the global convergence of the adaptation law (17). ###### Theorem 1. The frequency estimator (12)-(17) is global for (1) and converges asymptotically as $\theta(t)\rightarrow\omega_{0}$ for $t\rightarrow\infty$, regardless of the initialization $\theta(0)>0$, provided a small adaptation gain $\gamma>0$ and a slowly varying amplitude $k(t)$. The frequency-estimation error $\varepsilon(t)=\omega_{0}-\theta(t)$ converges uniformly and exponentially in terms of $\bigl{|}\varepsilon(t_{2})\bigr{|}<\alpha\bigl{|}\varepsilon(t_{1})\bigr{|}\exp\bigl{(}-\beta(t_{2}-t_{1})\bigr{)}$ (20) for some $\alpha>0$ and $\forall\;t_{2}>t_{1}$. The exponential rate of convergence is independent of the $\eta(t)$ noise, as follows: $\beta=0.5\,\gamma k\,\omega_{0}^{-1}+\delta,$ (21) where $\delta$ is a small positive constant independent of $\gamma$, $k$, $\omega_{0}$. ###### Proof. Let $0<\theta(0)<\omega_{0}$ be an arbitrary initialization of the estimator (12)-(17). Note that for $\theta(0)>\omega_{0}$, the proof is fully identical due to the phase symmetry of $\angle E\bigl{(}j\theta\bigr{)}\rightarrow\pm\pi/2$ for all $\theta\neq\omega_{0}$ and, as a consequence, $\mathrm{sign}\bigl{(}\dot{\theta}\bigr{)}=\pm 1$. A harmonic excitation (1) leads to an output harmonic $y(t)=b\sin(\omega_{0}t+c),$ (22) where $b=k\bigl{|}G(j\omega_{0})\bigr{|}$, while the phase shift $c=\angle\bigl{(}G(j\omega_{0})\bigr{)}$ is of minor relevance here. The internal dynamic state of the estimator then becomes $x_{1}(t)=-\frac{b}{\omega_{0}}\cos(\omega_{0}t+c)=-\frac{b}{\omega_{0}}\sin(\omega_{0}t+c+\pi/2),$ (23) and the output error becomes $e(t)=a\sin(\omega_{0}t+c+\pi/2),$ (24) where $a=b|E(j\omega_{0})|>0$ for all $\theta<\omega_{0}$.
Substituting (23) and (24) into (17), and writing out $a$ and $b$, results in $\dot{\theta}=\gamma k\,\bigl{|}G(j\omega_{0})\bigr{|}\,\Bigl{|}\frac{1-G(j\omega_{0})}{G(j\omega_{0})}\Bigr{|}\,\bigl{|}\sin(\omega_{0}t+c+\pi/2)\bigr{|}.$ (25) It is clear that for all $\gamma,k>0$, $\mathrm{sign}(\dot{\theta})=+1$ as long as $\theta(t)<\omega_{0}$. This implies global uniform convergence and completes the first part of the proof. Evaluating the first and second modulus $|\cdot|$-terms in (25), $\bigl{|}G(j\omega_{0})\bigr{|}\,\Bigl{|}\frac{1-G(j\omega_{0})}{G(j\omega_{0})}\Bigr{|}=\frac{\bigl{|}\theta^{2}-\omega_{0}^{2}\bigr{|}}{\theta^{2}+\omega_{0}^{2}}\equiv\Omega(\theta),$ (26) one can show that the $\theta$-dependent magnitude $\Omega$ always decreases monotonically, with $1\geq\Omega(\theta)\geq 0$ on the interval $\theta\in[0,\,\omega_{0}]$. Inspecting the $\Omega(\theta)$ function, with the $\omega_{0}$-normalized argument, as depicted in Fig. 2, Figure 2: $\Omega(\theta)$ function of the $\omega_{0}$-normalized argument. one can linearly approximate the decreasing magnitude by $\Omega^{*}(\theta)=-\theta\omega_{0}^{-1}+1.$ (27) Using (27) and the fact that the third modulus $|\cdot|$-term in (25) is always positive, with a mean value equal to $1/2$, one can write the first-order approximation $\dot{\theta}^{*}=0.5\,\gamma k\bigl{(}-\theta^{*}\omega_{0}^{-1}+1\bigr{)}$ (28) of the estimator dynamics. Note that the $\angle E\bigl{(}j\theta\bigr{)}\rightarrow\pm\pi/2$ phase, which determines the sign of $\dot{\theta}$ (cf. (25)), allows us to write (28) regardless of whether $\theta^{*}(0)<\omega_{0}$ or $\theta^{*}(0)>\omega_{0}$. Because (27) is an under-approximation of the $(\theta\omega_{0}^{-1})$-dependent gain factor of the adaptation rate (cf. Fig. 2), the following can be concluded from eq. (28).
The asymptotic convergence of $\theta(t)$ to $\omega_{0}$ has an exponential rate of $0.5\gamma k\omega_{0}^{-1}+\delta$, which is not slower than that of the dynamics $\dot{\theta}(t)+0.5\gamma k\omega_{0}^{-1}\theta(t)=0.5\gamma k\omega_{0}^{-1}\cdot\omega_{0},$ (29) (cf. (28) and (29)). This completes the second part of the proof. ∎ ###### Remark 1. Note that the asymptotic convergence of $\theta(t)$ can be guaranteed only once $y(t)$ is in a steady state, i.e., after the transients of the (12), (16) dynamics excited by (1). The transient response results in a temporary bias of the output harmonic $y(t)$, depending on the initial phase of the excitation signal $\sigma(t)$. Consequently, the $\theta(t)$ trajectory can drift, in an oscillatory manner, in the opposite direction, away from $\omega_{0}$, until $y(t)$ is in a steady state (cf. the first numerical example in Section 5.1). This initial side effect must be taken into account when assigning $0<\theta(0)<\omega_{0}$, while it is irrelevant for $\theta(0)>\omega_{0}$. ###### Remark 2. When including the damping factor $0<\zeta<1$ as an additional design parameter of the estimator (cf. [2, 4]), the dynamic system (12), (16) needs to be modified with respect to the $A$ and $B$ terms (cf. with (2)) and consequently becomes non-critically damped. By implication, additional oscillating dynamics of $y(t)$ appear, both during the transients and in a steady state of the dynamic $\theta(t)$ trajectory. Notice that $\zeta$ has a minor influence on the asymptotic convergence and its exponential rate (cf. Fig. 1 for $\zeta=0.1$). At the same time, it deteriorates the smoothness of $\theta(t)$ during the transient phase and produces residual steady-state oscillations of $\varepsilon(t)$ (cf. the $\theta(t)$-trajectories, exemplified in Fig. 3 for $\zeta=\\{0.1,0.5,1\\}$). Figure 3: Convergence trajectories of the estimator ($\gamma=100$, $\omega_{0}=20$ rad/sec) with additional damping $\zeta=\\{0.1,0.5,1\\}$.
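As a side check of the key steps of the proof, the magnitude identity (26) and the under-approximation property of (27) can be verified numerically; $\omega_{0}=20$ rad/s below is an arbitrary test value.

```python
import numpy as np

omega0 = 20.0                        # arbitrary test frequency (rad/s)
theta = np.linspace(1e-3, omega0, 500)
s0 = 1j * omega0                     # evaluate at the excitation frequency

# left-hand side of (26), from the frozen-theta transfer function (18)
G = 2.0 * theta * s0 / (s0**2 + 2.0 * theta * s0 + theta**2)
lhs = np.abs(G) * np.abs((1.0 - G) / G)

Omega = np.abs(theta**2 - omega0**2) / (theta**2 + omega0**2)   # (26)
Omega_star = -theta / omega0 + 1.0                              # (27)

print(np.max(np.abs(lhs - Omega)))       # (26) holds to machine precision
print(bool(np.all(Omega_star <= Omega))) # True: (27) under-approximates on [0, omega0]
```

The under-approximation $\Omega^{*}\leq\Omega$ on $[0,\omega_{0}]$ is what makes the rate obtained from (28) a conservative ("not slower than") estimate of the actual convergence.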
Once we have shown the asymptotic convergence of $\theta(t)$ to $\omega_{0}$ and estimated the exponential convergence rate, it is of further interest to analyze the non-vanishing residual $\varepsilon(t)$, depending on the signal noise $\eta(t)$. ###### Lemma 2. The residual frequency-estimation error is a zero-mean ergodic process, with $\mathrm{Var}\\{\varepsilon(t)\\}<4\tau^{2}\omega_{0}k^{-1},$ (30) for the signal (1) with band-limited white noise, which has the variance $\mathrm{Var}\\{\eta(t)\\}=\tau^{2}$. Note that Lemma 2 claims the upper bound of the second moment of $\varepsilon(t)$ for all times $t>t_{c}>0$, since $\varepsilon(t)$ is a random process driven by $\eta(t)$ after $\theta(t)$ has converged to a neighborhood of $\omega_{0}$ at some finite time $t_{c}$. ###### Proof. Denoting the harmonic part of the signal (1) by $\tilde{\sigma}(j\omega_{0})$ and that of the output (16) by $\tilde{y}(j\omega_{0})$, respectively, the output error in the frequency domain can be written as $e(j\omega)=\tilde{\sigma}(j\omega_{0})-\tilde{y}(j\omega_{0})+\eta(j\omega)-G(j\omega)\eta(j\omega).$ (31) Provided the estimator has already converged to a neighborhood of $\omega_{0}$, the harmonic part of the error can be set to zero, and the residual error, due to the noise, is to be analyzed further. Since no phase response can be considered for a stochastic noise signal (only the magnitude), one can assume $|\hat{e}(j\omega)|=2|\eta(j\omega)|$ (32) as a worst case (i.e., an upper bound), since $\|G(j\omega)\|_{\infty}=1$. Using the variance (for the noise magnitude) and substituting it into (17), one can write the noise-driven dynamics of the estimate as $|\dot{\hat{\theta}}|=\gamma\,2\tau^{2},$ (33) for some neighborhood $\hat{\theta}$ of the true value $\omega_{0}$. Note that the sign of $\dot{\hat{\theta}}$ is also a random process, driven by $\mathrm{sign}\bigl{(}x_{1}(t,\eta)\bigr{)}\,\mathrm{sign}\bigl{(}\eta(t)\bigr{)}$ (cf. with (17)).
Now, comparing (25) and (33), one can see that for the estimate dynamics not driven by noise, the following inequality should hold $\mathrm{sign}\bigl{(}\dot{\theta}\bigr{)}=\mathrm{const}=\gamma\,0.5k\,\Omega(\theta)>\gamma\,2\tau^{2}.$ (34) Using the linear approximation (27), and substituting it into (34), yields $k\Bigl{(}-\frac{\theta}{\omega_{0}}+1\Bigr{)}>4\tau^{2}.$ (35) Solving (35) with respect to $|\varepsilon|=|\omega_{0}-\theta(t)|$ results in $|\varepsilon|>4\tau^{2}\omega_{0}k^{-1},$ (36) which should be fulfilled so that the estimate dynamics (17) do not become driven by the signal noise. Turning back to the random nature of the noise- driven residual estimation error, one can state (30) (cf. with (36)), which completes the proof. ∎ ## 5 Numerical and experimental examples ### 5.1 Simulated signals We firstly demonstrate convergence of the estimator (12)-(17) for a purely sinusoidal signal $\sigma(t)=\sin(\omega_{0}t)$, i.e., with a constant unity amplitude and without noise. Assuming two estimator initializations $\theta(0)=\\{10,100\\}$ rad/sec, i.e., one higher and one lower than $\omega_{0}=50$ rad/sec, the $\theta(t)$ trajectories are shown in Fig. 4 for $\gamma=\\{200,100,50\\}$ adaptation gains. Figure 4: Convergence of $\theta(t)$ with $\gamma=\\{200,100,50\\}$ to $\omega_{0}=50$ rad/sec for $\sigma(t)=\sin(\omega_{0}t)$ signal with constant amplitude and without noise. The resulting exponential shape of convergence is in accord with (21). Note that for $\theta(0)=10$ rad/sec, the initial $\theta(t)$ trajectory is oscillatory, progressing in the opposite direction. This occurs due to a transient response of (12), (16) to the $\sigma$-excitation, which takes about three oscillation periods for the given $\theta(0)$ and $\omega_{0}$ values (cf. Remark 1). Next, we consider the signal (1) with $k=1$ for different $\omega_{0}=\\{10,30,50,70\\}$ rad/sec. 
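The pure-sinusoid experiment above (Fig. 4) can be reproduced with a minimal numerical sketch. Since the estimator equations (12)-(17) are not reproduced in this excerpt, the filter and adaptation law below are assumptions, not the paper's exact algorithm: a critically damped band-pass $G(s)=2\theta s/(s^{2}+2\theta s+\theta^{2})$ (which has the unity peak gain $\|G(j\omega)\|_{\infty}=1$ used in the analysis) and a sign-based adaptation $\dot{\theta}=-\gamma\,\mathrm{sign}(x_{1})\,e$ on the output error $e=\sigma-y$.

```python
import numpy as np

def estimate_frequency(sigma, dt, gamma=100.0, theta0=100.0):
    """Sign-based frequency estimator (assumed structure, see lead-in).

    Band-pass filter G(s) = 2*theta*s / (s^2 + 2*theta*s + theta^2) in
    controllable canonical form (x1' = x2), critically damped (zeta = 1),
    with unity peak gain at omega = theta; forward-Euler integration.
    """
    x1, x2, theta = 0.0, 0.0, theta0
    hist = np.empty(len(sigma))
    for i, s in enumerate(sigma):
        e = s - 2.0 * theta * x2                  # output error e = sigma - y
        dtheta = -gamma * np.sign(x1) * e         # assumed adaptation law
        x1, x2 = (x1 + dt * x2,
                  x2 + dt * (-theta**2 * x1 - 2.0 * theta * x2 + s))
        theta += dt * dtheta
        hist[i] = theta
    return hist

dt = 1e-4
t = np.arange(0.0, 10.0, dt)
theta_hist = estimate_frequency(np.sin(50.0 * t), dt)  # omega0 = 50 rad/sec
```

With these assumptions, averaging the adaptation over one excitation period yields a stable first-order approach to $\omega_{0}$ of the same form as (29), and the script drives $\theta$ from $100$ to $\approx 50$ rad/sec within a few seconds, mirroring one trace of Fig. 4.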
For all angular frequencies, an additional band-limited white noise $\eta(t)$ with $p=1e-7$ and $\tau^{2}=0.001$ is included, as exemplified in Fig. 5 (a) for $\omega_{0}=10$ rad/sec. The adaptation gain is set to $\gamma=100$. Figure 5: (a) Simulated signal (1) with $k=1$, $\omega_{0}=10$ rad/sec; (b) convergence of $\theta(t)$ with $\gamma=100$ for $\omega_{0}=\\{10,30,50,70\\}$ rad/sec. The convergence of $\theta(t)$ depends inversely on $\omega_{0}$ and is in accord with (21), as can be seen from Fig. 5 (b). Note that the noise of $\eta(t)$ does not affect the convergence rate but solely the steady-state fluctuations of the residual estimation error $\varepsilon(t)$, in accord with the Lemma 2. To demonstrate insensitivity of the frequency estimator to the slow amplitude variations of the signal (1), we consider $\omega_{0}=40$ rad/sec with $k(t)=5+5\sin(0.9t-\pi/2)$, as depicted in Fig. 6 (a). Figure 6: (a) Simulated signal (1) with $k(t)=5+5\sin(0.9t-\pi/2)$ and $\omega_{0}=40$ rad/sec; (b) convergence of $\theta(t)$ with $\gamma=100$. For the adaptation gain $\gamma=100$ and two different estimator initializations $\theta(0)=\\{20,80\\}$, the $\theta(t)$ convergence is shown in Fig. 6 (b). After a stable transient of $\theta(t)$, whose shape is dynamically affected by the $k(t)$ variations, the estimation error $\varepsilon(t)$ in a steady state (for $t>1.8$ sec) does not appear to be affected by persistent variations in the amplitude $k(t)$. Finally, we are eager to see how the proposed robust estimator can deal with the continuously varying frequencies $\omega_{0}(t)$. For this purpose, the simulated signal (1) with $k=1$ is designed as a linear down-chirp $\omega_{0}(t)=\omega_{0}(0)-\mu t$, with the frequency bounds $\omega_{0}(0)=20\cdot 2\pi$ rad/sec and $\omega_{0}(30)=1\cdot 2\pi$ rad/sec, and the resulting $\mu=3.98$, as depicted in Fig. 7 (a). 
Figure 7: (a) Simulated signal (1) with $k=1$, $\omega_{0}(t)=\omega_{0}(0)-\mu t$; (b) convergence of $\theta(t)$ with $\gamma=200$, $\theta(0)=10$ rad/sec. The $\theta(t)$ trajectory, for the assigned $\gamma=200$, is shown in Fig. 7 (b) versus the linearly changing $\omega_{0}(t)$. It can be seen that after a certain time, the $\theta(t)$ trajectory closely follows $\omega_{0}(t)$. The visible residual estimation error $\varepsilon(t)\neq 0$ is clearly due to the dynamically changing excitation frequency $\omega_{0}$; a more detailed analysis of this effect is beyond the scope of this work. Still, the estimator appears sufficiently robust to also follow a continuously varying excitation frequency. ### 5.2 Experimental case The proposed frequency estimation algorithm can equally be used for mechanical system applications in which the oscillating behavior (including vibrations) of structural parts and elements with elasticities requires tracking of the frequency for various purposes. Those include, e.g., controller tuning, condition and fault monitoring, commissioning and identification, and others. The experimental case provided below is realized in a laboratory setting which nevertheless represents a standard situation of an unknown mechanical oscillation frequency in combination with low damping. Such application scenarios commonly appear in two-inertia systems with a load-dependent, varying natural frequency, as in flexible robotic joints (see e.g. [16, 17]) and in machine tools and instruments with cantilevers and flexible frames (see e.g. [18, 19]), or they are associated with varying excitation frequencies that propagate through a flexible, correspondingly oscillating structure (see e.g. [20, 21]). For benchmarking with existing frequency estimators of the same principle (cf. Sections 1, 3), the ANF modified and proposed in [4] was also implemented, in the same numerical setting as the proposed algorithm (12)-(17). 
Both frequency estimators are then evaluated, as described below, on the same experimental data and for the same initial and parametric conditions. The case study with a simultaneous variation of the signal amplitude $k(t)$ and angular frequency $\omega_{0}(t)$ is evaluated experimentally using the laboratory setup [22] shown in Fig. 8. For more details on the system dynamics and modeling, which are of lower relevance for the present work, the interested reader is referred to [23]. (a) (b) Figure 8: Experimental setup: (a) laboratory view of the two-mass oscillator, passive free-hanging mass ($M$) is placed on the holder-disc when not in operation; (b) schematic representation of two-mass oscillator, oscillating displacement $y\equiv\sigma(t)$ is measured contactlessly. The oscillating displacement of a free-hanging load, attached through a nearly linear spring ($K$), is measured contactlessly by means of an inductive distance sensor, which has $\pm 12$ $\mu$m repeatability. The sampling rate of real-time measurements, with analog-to-digital conversion at 16-bit quantization, is 2 kHz. Being subject to both sensing ($\eta(t)$) and process disturbances (non-modeled $d$ and $D$), the measured response $\sigma(t)$ constitutes an oscillating and noisy (cf. zoom-in in Fig. 9) time series. The double-mass experimental setup, with the first ’active’ mass ($m$) of the voice-coil-motor actuator and second ’passive’ mass ($M$) of a free-hanging load, allows for testing of the eigendynamics response and the excited (i.e., input-driven) response, both of a low-damped oscillating nature. The measured signal (Fig. 9) represents an exemplary response to an actuated chirp excitation, which yields a linearly increasing angular frequency $\omega_{0}(t)=\mu t$, with some initial value and $\mu>0$. The measured displacement $\sigma(t)$ is freed (by data postprocessing) from a steady-state offset, thus approaching a single harmonic, in accord with (1). 
Note that apart from the noise, the measured $\sigma(t)$ is still slightly affected by an asymmetry around zero, thus disclosing a certain additional non-constant bias. Figure 9: Experimentally measured signal $\sigma(t)$ with varying amplitude $k(t)$ and linearly increasing angular frequency $\omega_{0}(t)$. The evaluation setting, in terms of the parameters and initial conditions, is summarized in Table 1 for both estimators under benchmark. Table 1: Estimators’ evaluation setting Set parameter | ANF according to [4] | Proposed estimator ---|---|--- $\theta(0)$ (rad/sec) | 20 | 20 $\gamma$ | $4e4$ | $2e4$ $\zeta$ | $\\{0.7,\,1,\,1.3\\}$ | $\\{0.7,\,1,\,1.3\\}$ Note that for the proposed estimator, the assigned $\gamma$-gain is half of the $\gamma$ for the ANF [4], since the latter already incorporates the factor $2\zeta$, cf. [4, eqs. (5),(6)]. Further, for the sake of completeness and a fair comparison, the damping ratio $\zeta$ is additionally included in the proposed estimator. Recall that, otherwise, $\zeta=1$ is set as the default and does not appear as a design parameter, according to (12)-(17). The online estimate of the angular frequency $\theta(t)$ is shown in Fig. 10 versus the linear progress of the chirp-driven true $\omega_{0}(t)$ value. The estimated $\theta(t)$ values are plotted on top of each other for the ANF [4] and the proposed estimator, for $\zeta=0.7$ in (a), $\zeta=1$ in (b), and $\zeta=1.3$ in (c), respectively. Figure 10: Online estimate of the angular frequency $\theta(t)$ versus the chirp-driven $\omega_{0}(t)$: (a) for $\zeta=0.7$, (b) $\zeta=1$, and (c) for $\zeta=1.3$. One can recognize that in all three cases, the proposed estimation algorithm follows the varying true angular frequency similarly to [4], with the benefit of one design parameter fewer. The transient convergence appears slightly faster, and smaller deviations from $\omega_{0}(t)$ appear for the critically damped case of $\zeta=1$. 
## 6 Conclusions In this paper, the problem of estimating the unknown frequency of noisy sinusoidal signals with slowly varying amplitude has been considered. The existing globally convergent frequency estimator was modified by changing the scaling of the excitation signal and output error, and canceling the damping ratio as a free design parameter. Furthermore, the main robustification was achieved by using the sign of an internal state, instead of the state itself, within the adaptation law. Relying on the averaging theory of periodic signals, an easy-to-follow and straightforward analysis was developed in the frequency domain, assuming that the timescales of a relatively fast harmonic (to be estimated) and a relatively slow drift of the amplitude can be separated. We analyzed and proved the global asymptotic convergence of the frequency estimate and determined the exponential convergence rate. The dependence of the residual steady-state estimation error on band-limited white noise was established. The demonstrated numerical and experimental results confirm the properties and performance of the proposed estimator. A more detailed evaluation of the estimation performance and comparison with other frequency estimation algorithms, also for different experimental data sets, are the subject of our future work. ## References * [1] M. Bodson, S. C. Douglas, Adaptive algorithms for the rejection of sinusoidal disturbances with unknown frequency, Automatica 33 (12) (1997) 2213–2221. * [2] L. Hsu, R. Ortega, G. Damm, A globally convergent frequency estimator, IEEE Transactions on Automatic Control 44 (4) (1999) 698–713. * [3] R. Marino, G. L. Santosuosso, P. Tomei, Robust adaptive compensation of biased sinusoidal disturbances with unknown frequency, Automatica 39 (10) (2003) 1755–1761. * [4] M. Mojiri, A. R. Bakhshai, An adaptive notch filter for frequency estimation of a periodic signal, IEEE Transactions on Automatic Control 49 (2) (2004) 314–318. 
* [5] A. A. Vedyakov, A. O. Vediakova, A. A. Bobtsov, A. A. Pyrkin, S. V. Aranovskiy, A globally convergent frequency estimator of a sinusoidal signal with a time-varying amplitude, Europ. J. of Control 38 (2017) 32–38. * [6] S. Bittanti, S. M. Savaresi, On the parametrization and design of an extended Kalman filter frequency tracker, IEEE transactions on automatic control 45 (9) (2000) 1718–1724. * [7] M. Karimi-Ghartemani, M. R. Iravani, A method for synchronization of power electronic converters in polluted and variable-frequency environments, IEEE Transactions on Power Systems 19 (3) (2004) 1263–1270. * [8] C. Fuller, A. Von Flotow, Active control of sound and vibration, IEEE Con. Syst. Mag. 15 (6) (1995) 9–19. * [9] R. Herzog, P. Buhler, C. Gahler, R. Larsonneur, Unbalance compensation using generalized notch filters in the multivariable feedback of magnetic bearings, IEEE Transactions on control systems technology 4 (5) (1996) 580–586. * [10] C. C. De Wit, L. Praly, Adaptive eccentricity compensation, IEEE Transactions on Control Systems Technology 8 (5) (2000) 757–766. * [11] A. Sacks, M. Bodson, P. Khosla, Experimental results of adaptive periodic disturbance cancellation in a high performance magnetic disk drive, Journal of Dynamic Systems, Measurement, and Control 118 (3) (1996) 416–424. * [12] I. D. Landau, A. Constantinescu, D. Rey, Adaptive narrow band disturbance rejection applied to an active suspension - an internal model principle approach, Automatica 41 (2005) 563–574. * [13] S. Aranovskiy, A. Bobtsov, A. Kremlev, N. Nikolaev, O. Slita, Identification of frequency of biased harmonic signal, Europ. J. of Control 16 (2) (2010) 129–139. * [14] P. A. Regalia, An improved lattice-based adaptive IIR notch filter, IEEE trans. on signal proc. 39 (1991) 2124–2128. * [15] D. Clarke, On the design of adaptive notch filters, Int. Jour. of Adaptive Control and Signal Processing 15 (7) (2001) 715–744. * [16] M. J. Kim, F. Beck, C. Ott, A. 
Albu-Schäffer, Model-free friction observers for flexible joint robots with torque measurements, IEEE Transactions on Robotics 35 (6) (2019) 1508–1515. * [17] M. Ruderman, On stability of virtual torsion sensor for control of flexible robotic joints with hysteresis, Robotica 38 (7) (2020) 1191–1204. * [18] F. Leonard, J. Lanteigne, S. Lalonde, Y. Turcotte, Free-vibration behaviour of a cracked cantilever beam and crack detection, Mechanical systems and signal processing 15 (3) (2001) 529–548. * [19] M. A. Beijen, R. Voorhoeve, M. F. Heertjes, T. Oomen, Experimental estimation of transmissibility matrices for industrial multi-axis vibration isolation systems, Mechanical Systems and Signal Processing 107 (2018) 469–483. * [20] A. Baz, Active control of periodic structures, ASME Journal of Vibration and Acoustics 123 (4) (2001) 472–479. * [21] J. Helsen, B. Marrant, F. Vanhollebeke, F. De Coninck, D. Berckmans, D. Vandepitte, W. Desmet, Assessment of excitation mechanisms and structural flexibility influence in excitation propagation in multi-megawatt wind turbine gearboxes: experiments and flexible multibody model optimization, Mechanical Systems and Signal Processing 40 (1) (2013) 114–135. * [22] M. Ruderman, Oscillating actuator-load setup, UiA (2020). URL https://home.uia.no/michaeru/IMG1878.MOV * [23] M. Ruderman, Robust output feedback control of non-collocated low-damped oscillating load, in: IEEE 29th Mediterranean Conference on Control and Automation (MED’21), 2021, pp. 639–644.
# Neutrinoless double beta decays tell nature of right-handed neutrinos Takehiko Asaka Department of Physics, Niigata University, Niigata 950-2181, Japan Hiroyuki Ishida KEK Theory Center, IPNS, Tsukuba, Ibaraki 305-0801, Japan Kazuki Tanaka Graduate School of Science and Technology, Niigata University, Niigata, 950-2181, Japan ###### Abstract We consider the minimal seesaw model, the Standard Model extended by two right-handed neutrinos, for explaining the neutrino masses and mixing angles measured in oscillation experiments. When one of the right-handed neutrinos is lighter than the electroweak scale, it can give a sizable contribution to neutrinoless double beta ($0\nu\beta\beta$) decay. We show that the detection of the $0\nu\beta\beta$ decay by future experiments gives a significant implication for the search for such a light right-handed neutrino. ††preprint: KEK-TH-2292 The Standard Model (SM) of particle physics preserves two accidental global symmetries in the (classical) Lagrangian, namely the baryon and lepton number symmetries. It is well known that these global symmetries are non-perturbatively broken at the quantum level tHooft:1976rip ; tHooft:1976snw , especially at high temperature of the universe Dimopoulos:1978kv ; Manton:1983nd ; Klinkhamer:1984di ; Kuzmin:1985mm . Even at the quantum level, however, the baryon minus lepton number symmetry, often called $U(1)_{\rm B-L}$ #1#1#1We do not specifically consider this symmetry as a gauge symmetry., is preserved in the SM. The simplest way to break the $U(1)_{\rm B-L}$ symmetry without losing renormalizability is to introduce right-handed neutrinos (RH$\nu$s) into the SM. Since RH$\nu$s are singlets under the SM gauge symmetries, we can write a mass term for them, the so-called Majorana mass term, without conflicting with the gauge principle. The Majorana mass term breaks the lepton number symmetry by two units. 
Therefore, the phenomena of lepton number violation can be a definite signal of the existence of RH$\nu$s. The existence of RH$\nu$s is important not only for the violation of the $U(1)_{\rm B-L}$ symmetry but also for explaining the origin of the observed tiny neutrino masses. In the renormalizable Lagrangian with RH$\nu$s, we obtain two kinds of neutrino mass terms: Dirac masses and Majorana masses. When a sufficient hierarchy between these masses is realized, we can simply explain the tiny neutrino masses by the seesaw mechanism Minkowski:1977sc ; Yanagida:1979as ; Yanagida:1980xy ; Ramond:1979 ; GellMann:1980vs ; Glashow:1979 ; Mohapatra:1979ia . In addition, the violation of $U(1)_{\rm B-L}$ can seed the origin of the baryon asymmetry of the universe #2#2#2There are various possibilities for generating the baryon asymmetry through lepton number violation, but the details of the mechanism are independent of the discussion below. . One of the most promising signals of the $U(1)_{\rm B-L}$ violation is the neutrinoless double beta decay, which breaks the lepton number by two units while keeping the baryon number. (See, for example, articles Doi:1985dx ; Pas:2015eia ; DellOro:2016tmg ; Dolinski:2019nrj .) The rate of the decay is characterized by the effective mass defined by the neutrino masses and mixing angles. When we simply add to the SM Majorana masses for the three (active, or left-handed) neutrinos responsible for the neutrino oscillations, the effective mass can be predicted depending on the lightest active neutrino mass together with the unknown CP violating phases. In view of the fundamental models for the origin of the neutrino masses, the mass of the lightest active neutrino cannot be determined uniquely, leading to different predictions on the effective mass. 
It should be noted that the effective mass can vanish in the normal hierarchy (NH) case of the active neutrinos in a certain parameter region. In such a case, the contribution from new physics (other than active neutrinos) including RH$\nu$s would be more important for the detection. So far, no neutrinoless double beta decay has been detected, and upper bounds on the effective mass have been imposed by various experiments.#3#3#3In a recent analysis 1833580 , the differential rate of the two-neutrino double beta decay is discussed to constrain the mixing elements of RH$\nu$s with masses at $\mathcal{O}(0.1$–$10)~{}{\rm MeV}$. The most stringent bound at present is $61$–$165$ meV by the KamLAND-Zen experiment KamLAND-Zen:2016pfg . Since this limit is approaching the predicted range in the inverted hierarchy (IH) case, experimental results in the near future can give us some implications for RH$\nu$s. There are several interesting possibilities that the effective mass can be significantly modified due to the destructive or constructive contribution from RH$\nu$s. This additional contribution becomes important when the masses of RH$\nu$s are smaller than or comparable to the typical scale of the Fermi momentum in the decaying nucleus ($\sim\mathcal{O}(100)~{}{\rm MeV}$). Recently, we have pointed out the interesting possibility that a RH$\nu$ may hide one of the neutrinoless double beta decay processes Asaka:2020wfo ; Asaka:2020lsx (see also Refs. Halprin:1983ez ; Leung:1984vy ). This is due to the destructive contribution of the RH$\nu$ to the effective mass. Note that the impact of the RH$\nu$ does depend on the decaying nuclei. If this is the case, the mixing elements of the RH$\nu$ can be predicted in terms of its mass in a certain range, which is a good target of future search experiments. 
In this paper, we work out the consequences of the opposite situation, namely the case when the neutrinoless double beta decay is observed in some nucleus, and discuss the impacts on the mixing elements of RH$\nu$s. First of all, let us explain the framework of the present analysis, the minimal seesaw model. It is the Standard Model extended by two right-handed neutrinos $\nu_{RI}$ ($I=1,2$), whose Lagrangian is given by $\displaystyle{\cal L}=$ $\displaystyle{\cal L}_{\rm SM}+i\overline{\nu_{RI}}\gamma^{\mu}\partial_{\mu}\nu_{RI}$ $\displaystyle-\left(F_{\alpha I}\overline{L_{\alpha}}\Phi\nu_{RI}+\frac{M_{I}}{2}\overline{\nu_{RI}^{c}}\nu_{RI}+h.c.\right)\,,$ (1) where $L_{\alpha}=(\nu_{L\alpha},e_{L\alpha})^{T}$ ($\alpha=e,\mu,\tau$) and $\Phi$ are the weak doublets of left-handed leptons and the Higgs, respectively. The Yukawa coupling constants and the Majorana masses for neutrinos are denoted by $F_{\alpha I}$ and $M_{I}$. By assuming that the Dirac masses $F_{\alpha I}\langle\Phi\rangle$ are much smaller than the Majorana masses $M_{I}$, the seesaw mechanism works, and the mass eigenstates of neutrinos are three active neutrinos $\nu_{i}$ ($i=1,2,3$) with masses $m_{i}$ and two heavy neutral leptons (HNLs) $N_{I}$ with masses $M_{I}$. The mass ordering of active neutrinos is not determined by the oscillation data, and two possibilities, the normal hierarchy (NH) with $m_{3}>m_{2}>m_{1}=0$ and the inverted hierarchy (IH) with $m_{2}>m_{1}>m_{3}=0$, are allowed. Note that the lightest active neutrino is massless in this situation. On the other hand, we can take the masses of HNLs as $M_{2}\geq M_{1}$ without loss of generality. The left-handed (flavor) neutrinos are then written as $\displaystyle\nu_{L\alpha}=\sum_{i}U_{\alpha i}\,\nu_{i}+\sum_{I}\Theta_{\alpha I}\,N_{I}^{c}\,,$ (2) where $U_{\alpha i}$ is the mixing matrix of active neutrinos, called the PMNS matrix, while $\Theta_{\alpha I}$ is that of HNLs. 
One of the most important consequences of the seesaw mechanism is that active neutrinos and HNLs are both Majorana particles. In this case the lepton number violating processes are induced by these particles, which is a clear signature of physics beyond the SM. One promising example is the $0\nu\beta\beta$ decay, and the quest for the decay is being pursued by various experiments. The rate for the $0\nu\beta\beta$ decay mediated by active neutrinos and HNLs is proportional to $|m_{\rm eff}|^{2}$, where $m_{\rm eff}$ is the so-called effective (neutrino) mass in the $0\nu\beta\beta$ decay. In the minimal seesaw model it is given by $\displaystyle m_{\rm eff}=m_{\rm eff}^{\nu}+m_{\rm eff}^{N}\,.$ (3) Here the first term on the right-hand side represents the contributions from the active neutrinos, which is given by $\displaystyle m_{\rm eff}^{\nu}$ $\displaystyle=\sum_{i}U_{ei}^{2}\,m_{i}\,.$ (4) On the other hand, the contributions from HNLs are expressed as $\displaystyle m_{\rm eff}^{N}$ $\displaystyle=\sum_{I}\Theta_{eI}^{2}\,M_{I}\,f_{\beta}(M_{I})\,,$ (5) where $f_{\beta}$ is the suppression factor, compared to $m_{\rm eff}^{\nu}$, due to the heaviness of the HNLs, $M_{I}\gg m_{i}$. Here we apply the result in Refs. Faessler:2014kka ; Barea:2015zfa and assume the following form $\displaystyle f_{\beta}(M)=\frac{\Lambda_{\beta}^{2}}{\Lambda_{\beta}^{2}+M^{2}}\,,$ (6) where $\Lambda_{\beta}={\cal O}(10^{2})$ MeV denotes the typical scale of the Fermi momentum in the $0\nu\beta\beta$ decay. Hereafter we take $\Lambda_{\beta}=200$ MeV as a representative value. In this letter we consider the impacts of the detection of the $0\nu\beta\beta$ decay by future experiments on the properties of HNLs. The measurement of the decay rate gives the value of $|m_{\rm eff}|$. Note that $m_{\rm eff}$ is a complex number. First, we consider the case when the right-handed neutrinos possess the hierarchical masses $M_{2}\gg M_{1}$. 
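As a quick numerical illustration of Eqs. (3)-(6), the sketch below evaluates the effective mass for a single light HNL whose mixing saturates the intrinsic seesaw relation $\Theta_{e}^{2}M=-\sum_{i}U_{ei}^{2}m_{i}$, so that $m_{\rm eff}=m_{\rm eff}^{\nu}\left[1-f_{\beta}(M)\right]$. The numerical values are illustrative only (all masses in eV):

```python
LAMBDA_BETA = 200.0e6      # typical Fermi momentum Lambda_beta = 200 MeV, in eV

def f_beta(M):
    # suppression factor of Eq. (6); M in eV
    return LAMBDA_BETA**2 / (LAMBDA_BETA**2 + M**2)

def m_eff(m_eff_nu, thetas_sq, masses):
    # Eqs. (3)-(5): active-neutrino part plus the HNL contributions
    return m_eff_nu + sum(t2 * M * f_beta(M) for t2, M in zip(thetas_sq, masses))

# one HNL of mass 1 GeV whose (real, destructive) mixing saturates the
# seesaw relation Theta_e^2 * M = -m_eff_nu
m_nu = 48.4e-3             # illustrative IH value quoted in the text, eV
M1 = 1.0e9                 # 1 GeV in eV
val = m_eff(m_nu, [-m_nu / M1], [M1])
```

In the two limits the HNL contribution behaves as expected: for $M\gg\Lambda_{\beta}$ the suppression $f_{\beta}\to 0$ and $m_{\rm eff}\to m_{\rm eff}^{\nu}$, while for $M\ll\Lambda_{\beta}$ the cancellation is complete and $m_{\rm eff}\to 0$.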
We then find that the mixing element $|\Theta_{e1}|^{2}$ of the lighter HNL is given by $\displaystyle\Theta_{e1}^{2}=\frac{m_{\rm eff}-m_{\rm eff}^{\nu}\left[1-f_{\beta}(M_{2})\right]}{M_{1}\left[f_{\beta}(M_{1})-f_{\beta}(M_{2})\right]}\,.$ (7) Here we have used the intrinsic relation between the mixing elements in the seesaw mechanism $\displaystyle 0$ $\displaystyle=\sum_{i}U_{ei}^{2}\,m_{i}+\sum_{I}\Theta_{eI}^{2}\,M_{I}\,.$ (8) Importantly, the mixing element $|\Theta_{e1}|^{2}$ is given by $m_{\rm eff}$ and $m_{\rm eff}^{\nu}$ together with the masses $M_{1}$ and $M_{2}$. This means that, if $|m_{\rm eff}|$ is found by the detection of the $0\nu\beta\beta$ decay, the range of $|\Theta_{e1}|^{2}$ can be predicted. In practice, both upper and lower bounds on $|\Theta_{e1}|^{2}$ are obtained by varying the unknown parameters in $m_{\rm eff}^{\nu}$ (i.e., the Majorana phase $\eta$ and the mass ordering) and the phase of $m_{\rm eff}$. Figure 1: Upper and lower bounds on $|\Theta_{e1}|^{2}$ for the NH (red solid lines) and IH (blue dashed lines) cases. Here $M_{1}=1$ GeV and $M_{2}=200$ GeV. When $M_{1}=1$ GeV and $M_{2}=200$ GeV, these bounds are shown in Fig. 1 in terms of the (would-be) observed value of $|m_{\rm eff}|$ denoted by $m_{\rm eff}^{\rm obs}$. In the present analysis we take the central values of the mass squared differences, the mixing angles, and the Dirac phase in the PMNS matrix given in Ref. Esteban:2020cvm for the estimation of $|m_{\rm eff}^{\nu}|$. We find that $|m_{\rm eff}^{\nu}|=1.45$–$3.68$ meV and 18.6–48.4 meV for the NH and IH cases, respectively. It is found from Eq. (7) that the lower bound on $|\Theta_{e1}|^{2}$ vanishes when $m_{\rm eff}^{\rm obs}=|m_{\rm eff}^{\nu}|(1-f_{\beta}(M_{2}))$. Figure 2: Upper and lower bounds on $|\Theta_{e1}|^{2}$ for the NH (left) and IH (right) cases. We take $m_{\rm eff}^{\rm obs}$ =100 meV (red solid lines), 50 meV (blue dashed lines), and 10 meV (green dot-dashed lines). Here $M_{2}=200$ GeV. 
The current (conservative) upper bound on $|\Theta_{e1}|^{2}$ from $|m_{\rm eff}|<165$ meV is shown by the black solid line (and the light-gray region is excluded). The dark-gray regions are excluded by the direct search experiments. The dotted lines show the sensitivities of the future experiments. See details in the main text. The predicted range of $|\Theta_{e1}|^{2}$ is shown in Fig. 2, where the current upper bounds and the sensitivities on $|\Theta_{e1}|^{2}$ of future search experiments are also shown PIENU:2011aa ; Aguilar-Arevalo:2017vlf ; NA62:2020mcv ; Blondel:2014bra ; SHiP:2018xqw ; Krasnov:2019kdc ; Alpigiani:2020tva . We take the (would-be) observed value of the effective mass as $|m_{\rm eff}|$ = 100 meV, 50 meV, and 10 meV. Importantly, most of the predicted range can be tested by the future experiments. We should note that the understanding of $f_{\beta}(M)$ is important for the precise prediction of the mixing elements, since it contains an uncertainty of order unity. For this purpose a better understanding of the nuclear matrix elements of the $0\nu\beta\beta$ decay mediated by HNLs is crucial. Next, let us consider the case when the masses of HNLs are degenerate, $\displaystyle M_{1}=M_{2}=M_{N}\,.$ (9) In this case, the total effective mass is given by $\displaystyle m_{\rm eff}=m_{\rm eff}^{\nu}\left[1-f_{\beta}(M_{N})\right]\,,$ (10) and hence the total value is always smaller than that from the active neutrinos, $|m_{\rm eff}|<|m_{\rm eff}^{\nu}|$, as long as HNLs participate in the $0\nu\beta\beta$ decay. Note that the arguments of $m_{\rm eff}$ and $m_{\rm eff}^{\nu}$ are the same. 
In this case, we find interesting consequences if $|m_{\rm eff}|$ is measured: First, the mass of the degenerate HNLs is determined depending on the measured value of $|m_{\rm eff}|$ as $\displaystyle M_{N}=\Lambda_{\beta}\sqrt{\frac{m_{\rm eff}^{\rm obs}}{|m_{\rm eff}^{\nu}|-m_{\rm eff}^{\rm obs}}}\,.$ (11) This shows that, once $m_{\rm eff}^{\rm obs}$ is fixed, the unknown Majorana phase in $m_{\rm eff}^{\nu}$ determines $M_{N}$. Second, the sum of the mixing elements is found to be $\displaystyle\left|\Theta_{e1}^{2}+\Theta_{e2}^{2}\right|=\frac{|m_{\rm eff}^{\nu}|}{\Lambda_{\beta}}\sqrt{\frac{|m_{\rm eff}^{\nu}|-m_{\rm eff}^{\rm obs}}{m_{\rm eff}^{\rm obs}}}\,.$ (12) Figure 3: The degenerate mass $M_{N}$ and mixing element $|\Theta_{e1}^{2}+\Theta_{e2}^{2}|$ in terms of the observed value $m_{\rm eff}^{\rm obs}$ in the NH (red solid line) or IH (blue dashed line). We take the Majorana phase $\eta=0$. Figure 4: Range of the mixing element $|\Theta_{e1}^{2}+\Theta_{e2}^{2}|$ in terms of the degenerate mass $M_{N}$ by taking the Majorana phase $\eta=0$–$\pi$ in the NH (red solid line) or IH (blue dashed line). These results are shown in Fig. 3. Here we take the Majorana phase as $\eta=0$, and $|m_{\rm eff}^{\nu}|$ = 3.54 meV and 48.4 meV for the NH and IH cases, respectively. It is seen that an observed effective mass $m_{\rm eff}^{\rm obs}$ of a few tens of meV corresponds to a Majorana mass $M_{N}\simeq{\cal O}(0.1-1)$ GeV, and the mass ordering is the IH, since the HNL contributions are always destructive to the active-neutrino ones. The relation between $M_{N}$ and $|\Theta_{e1}^{2}+\Theta_{e2}^{2}|$ is shown in Fig. 4. We find that, in order to test the degenerate case, an improvement of the sensitivity of future experiments is required, especially for the NH case. Before concluding the paper, we stress the impact of the difference among the $0\nu\beta\beta$ decay nuclei Asaka:2020lsx . 
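The degenerate-mass relations (11) and (12) above can be verified with a short round-trip sketch: substituting the $M_{N}$ of Eq. (11) back into Eq. (10) must return the observed effective mass exactly. The numbers below are illustrative IH inputs (all masses in eV, $\Lambda_{\beta}=200$ MeV):

```python
import math

LAMBDA_BETA = 0.2e9        # 200 MeV in eV

def f_beta(M):
    # suppression factor of Eq. (6)
    return LAMBDA_BETA**2 / (LAMBDA_BETA**2 + M**2)

def degenerate_case(m_eff_obs, m_eff_nu):
    # Eq. (11): degenerate HNL mass fixed by the observed effective mass
    M_N = LAMBDA_BETA * math.sqrt(m_eff_obs / (m_eff_nu - m_eff_obs))
    # Eq. (12): corresponding sum of mixing elements
    mix = (m_eff_nu / LAMBDA_BETA) * math.sqrt((m_eff_nu - m_eff_obs) / m_eff_obs)
    return M_N, mix

# IH example with eta = 0: |m_eff_nu| = 48.4 meV, observed 20 meV
M_N, mix = degenerate_case(0.020, 0.0484)
```

For this input the round trip gives $m_{\rm eff}^{\nu}\,[1-f_{\beta}(M_{N})]=20$ meV, with $M_{N}\approx 0.17$ GeV, consistent with the ${\cal O}(0.1$-$1)$ GeV range quoted above.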
Throughout this paper, we have assumed the approximated form of the suppression function $f_{\beta}$ to be Eq. (6) and fixed the typical Fermi momentum as $\Lambda_{\beta}=200~{}{\rm MeV}$. The important point is that the nuclear matrix elements including the suppression factor due to HNLs are different depending on the decaying nuclei used in the $0\nu\beta\beta$ experiments. This effect may be quantified by the choice of the typical Fermi momentum in this analysis. Figure 5: Upper and lower bounds of the predicted effective mass with $\tilde{\Lambda}_{\beta}=100~{}{\rm MeV}$ (left) and $\tilde{\Lambda}_{\beta}=400~{}{\rm MeV}$ (right) in the NH case. We assume that the effective mass observed in the nucleus with $\Lambda_{\beta}=200~{}{\rm MeV}$ is $100~{}{\rm meV}$ (red, solid), $50~{}{\rm meV}$ (blue, dashed), and $10~{}{\rm meV}$ (green, dot-dashed). Here, we fix $M_{2}=200~{}{\rm GeV}$. In Fig. 5, we plot the upper and lower values of the predicted effective mass with a Fermi momentum different from $200~{}{\rm MeV}$, while assuming that the $0\nu\beta\beta$ decay is observed at the experiment with $\Lambda_{\beta}=200~{}{\rm MeV}$ in the NH case. We can obtain similar behavior straightforwardly in the IH case as well. We take the observed value of the effective mass to be $100~{}{\rm meV}$, $50~{}{\rm meV}$, or $10~{}{\rm meV}$. Interestingly, the predicted effective mass can be significantly enhanced when $\tilde{\Lambda}_{\beta}$ becomes sufficiently larger than $200~{}{\rm MeV}$ and $M_{1}$ gets heavier. By inserting Eq. 
(7) into the expression of the effective mass, we can obtain $\displaystyle\tilde{m}_{\rm eff}$ $\displaystyle=\left[1-\tilde{f}_{\beta}(M_{2})\right]m_{\rm eff}^{\nu}$ $\displaystyle+\left[m_{\rm eff}-m_{\rm eff}^{\nu}\left[1-f_{\beta}(M_{2})\right]\right]\frac{\tilde{f}_{\beta}(M_{1})-\tilde{f}_{\beta}(M_{2})}{f_{\beta}(M_{1})-f_{\beta}(M_{2})}\,,$ (13) where $\Lambda_{\beta}=200$ MeV in $f_{\beta}$, while the corresponding scale in $\tilde{f}_{\beta}$, denoted by $\tilde{\Lambda}_{\beta}$, differs from $200$ MeV. The last fraction on the RHS of Eq. (13) can be simplified as $\displaystyle\frac{\tilde{f}_{\beta}(M_{1})-\tilde{f}_{\beta}(M_{2})}{f_{\beta}(M_{1})-f_{\beta}(M_{2})}=\frac{\tilde{\Lambda}_{\beta}^{2}}{\Lambda_{\beta}^{2}}\frac{\left(\Lambda_{\beta}^{2}+M_{1}^{2}\right)\left(\Lambda_{\beta}^{2}+M_{2}^{2}\right)}{\left(\tilde{\Lambda}_{\beta}^{2}+M_{1}^{2}\right)\left(\tilde{\Lambda}_{\beta}^{2}+M_{2}^{2}\right)}\,,$ (14) namely, the HNL contribution to the effective mass is enhanced or suppressed by the factor $\tilde{\Lambda}_{\beta}^{2}/\Lambda_{\beta}^{2}$ as $M_{1}$ gets greater than the typical Fermi momentum in $\tilde{f}_{\beta}$. As clearly seen, a significant enhancement or suppression can occur depending on the value of $\tilde{\Lambda}_{\beta}$ due to the contributions from HNLs. Thus, we can claim that multiple detections by the $0\nu\beta\beta$ experiments using different nuclei are crucial to reveal the properties of HNLs. In conclusion, we have considered the minimal seesaw model with two right-handed neutrinos. It has been shown that, if the effective mass in the $0\nu\beta\beta$ decay is measured by future experiments, the possible range of the mixing elements for the lighter heavy neutral lepton (right-handed neutrino) is determined. Especially, when the two heavy neutral leptons are hierarchical and the lighter mass is below the electroweak scale, $N_{1}$ is a good target of direct search experiments. 
It has also been shown that the predicted effective mass can depend on the nucleus used in the experiment. Therefore, comprehensive studies of the neutrinoless double beta decay within the seesaw mechanism are necessary to extract concrete information on the heavy neutral leptons.

## Acknowledgments

The work of T.A. was partially supported by JSPS KAKENHI Grants No. 17K05410, No. 18H03708, No. 19H05097, and No. 20H01898. The work of H.I. was supported by JSPS KAKENHI Grant No. 18H03708.
# Parsimonious Bayesian Factor Analysis for modelling latent structures in spectroscopy data

Alessandro Casa School of Mathematics & Statistics, University College Dublin Vistamilk SFI Research Centre Tom F. O’Callaghan School of Food & Nutritional Sciences, University College Cork Vistamilk SFI Research Centre Thomas Brendan Murphy School of Mathematics & Statistics, University College Dublin Vistamilk SFI Research Centre

###### Abstract

In recent years, within the dairy sector, animal diet and management practices have been receiving increased attention, in particular examining the impact of pasture-based feeding strategies on the composition and quality of milk and dairy products, in line with the increased prevalence of premium _grass-fed_ dairy products appearing on market shelves. To date, there are limited testing methods available for the verification of _grass-fed_ dairy, and as a consequence these products are susceptible to food fraud and adulteration. Therefore, enhanced statistical tools for studying potential differences among milk samples from animals on different feeding systems are required, providing increased security around the authenticity of these products. Infrared spectroscopy techniques are widely used to collect data on milk samples and to predict milk-related traits and characteristics. While these data are routinely used to predict the composition of the macro components of milk, each spectrum also provides a reservoir of unharnessed information about the sample. The accumulation and subsequent interpretation of these data present a number of challenges due to their high dimensionality and the relationships amongst the spectral variables. In this work, directly motivated by a dairy application, we propose a modification of the standard factor analysis model to induce a parsimonious summary of spectroscopic data.
The proposed procedure maps the observations into a low-dimensional latent space while simultaneously clustering observed variables. The proposed method indicates possible redundancies in the data and helps disentangle the complex relationships among the wavelengths. A flexible Bayesian estimation procedure is proposed for model fitting, providing reasonable values for the number of latent factors and clusters. The method is applied to milk mid-infrared (MIR) spectroscopy data from dairy cows on distinctly different pasture and non-pasture based diets, providing accurate modelling of the data correlation, the clustering of variables in the data and information on differences between milk samples from cows on different diets. Keywords: Food authenticity studies, dairy science, spectroscopy, chemometrics, factor analysis, redundant variables, clustering, Gibbs sampling

## 1 Introduction

In recent years, the food industry has gone through rapid changes, partially due to ever-evolving consumer preferences and consumers’ increased awareness of food and health. We are currently at the forefront of growing demand for detailed and accurate knowledge concerning food quality, authentication and security. The sectors potentially most vulnerable to these changing trends are those processing and preparing foodstuffs of animal origin. As a consequence, the dairy sector has been particularly involved in this transition, with increasing attention towards product quality, traceability and adherence to procedures respectful of animal welfare and the environment. In this scenario, one of the aspects gaining particular attention and relevance concerns the cattle feeding regimen. In general, consumers regard pasture-based feeding as more respectful of animal well-being, producing products that are more natural and healthy (Elgersma, 2012).
These perceptions appear to have some basis in fact, as mounting evidence has demonstrated that outdoor pasture-based feeding of cows results in milk and dairy products with enhanced beneficial nutrients compared to indoor total mixed ration (TMR) like systems (O’Callaghan et al., 2016a,b, 2017). Furthermore, pasture-based feeding has been demonstrated to lead to ameliorations in the organoleptic characteristics of dairy products, providing signature-like traits (Faulkner et al., 2018; Alothman et al., 2019; Garvey et al., 2020) and improved quality. As such, in many markets, _grass-fed_ dairy products usually command a premium price from consumers, and as often happens with expensive products, milk produced by grass-fed cows is susceptible to food adulteration and fraud. As a direct consequence, there has been an increased requirement for the development of methods capable of detecting the presence of adulterants and authenticating the traceability of the milk (see Kamal and Karoui, 2015, for a recent review). Several approaches proposed in the literature rely on the identification of one or more useful biomarkers in order to build meaningful authentication methods (see Capuano et al., 2014, and references therein). Furthermore, O’Callaghan et al. (2016b, 2018) highlighted the ability of fatty acid profiling coupled with multivariate analysis and H-NMR to distinguish between milk from pasture- and TMR-based diets. Nonetheless, these techniques are considered to be expensive and time-consuming since they require laboratory extraction routines to collect the data, compromising their widespread utility.
On the other hand, vibrational spectroscopy techniques, such as Fourier transform near-infrared (NIR) and mid-infrared (MIR) spectroscopy, are known to be cheap, rapid and non-destructive alternatives to collect large amounts of data and to analyze different biological materials. Such methods have already proven useful in food authenticity studies (e.g. Murphy et al., 2010); readers may refer to Downey (1996) and Reid et al. (2006) for more thorough reviews. When a material is analyzed via MIR spectroscopy, light is passed through a sample of that material at a sequence of wavelengths in the mid-infrared region (900 to 5000 cm$^{-1}$). The passage of the light excites the sample’s chemical bonds, leading to an absorption of energy from the light itself. The amount of energy absorbed or transmitted by the sample, at different wavelengths, constitutes the spectrum of that sample, which may subsequently be used to analyze its characteristics (see Figure 1 for a graphical illustration of some MIR spectra). Figure 1: The mid-infrared spectra recorded for a subset of the analyzed milk samples corresponding to different diet regimens. The samples are colored as pasture-diet = red, total mixed ration-diet = blue. Vibrational spectroscopy techniques have already been fruitfully used in the dairy framework to determine milk characteristics such as protein, lactose and casein concentration (De Marchi et al., 2014), as well as to predict milk- and animal-related traits such as milk fatty acids (Bonfatti et al., 2017) and energy efficiency and intake (McParland et al., 2014; McParland and Berry, 2016). However, the usefulness of infrared spectroscopy data to authenticate cow feeding regimens has been less widely explored, even though standard classification tools have produced promising results in Coppa et al. (2012) and Valenti et al. (2013).
Therefore, a thorough exploration of the features of spectroscopy data when used to analyze samples coming from differently fed animals is still missing. From a statistical point of view, NIR and MIR spectroscopy data introduce some challenges that have to be carefully addressed. The first is their high dimensionality since, usually, each single observation consists of more than 1000 transmittance or absorbance values over the MIR or NIR regions. Moreover, these data are highly correlated, with underlying chemical processes entailing rather complex correlation structures, as can be seen in Figure 2. From the figure it is clear that, although adjacent wavelengths tend to be highly correlated, strong relationships are also observed among regions of the spectrum far from each other. The information contained in each spectrum is indeed known to be structured in a rather complicated way and possibly spread over different locations. For these reasons, statistical methodologies able to provide a parsimonious representation of the correlation structures in spectroscopy data turn out to be particularly useful. On the one hand, they can mitigate high-dimensionality related issues by summarizing the information parsimoniously in a lower-dimensional representation. On the other hand, a proper reconstruction of the relations seen in Figure 2 may help in identifying which wavelength regions carry similar information when the aim is to discriminate between samples coming from differently fed animals. This identification, when coupled with subject-matter knowledge, can highlight which chemical structures are responsible for the patterns seen in the milk sample spectra. Moreover, it can be useful to identify the main nutritional differences implied in the milk from different diet regimens, serving as a stepping stone for classification purposes.
When analyzing spectroscopy data, latent variable models are considered the state of the art to tackle some of the challenges mentioned above. Techniques such as _Partial Least Squares_ (PLS) and _principal component analysis_ (PCA) are widely used both for predictive purposes and to reduce the dimensionality of the data, by summarizing the information in a smaller number of newly built features. In a similar fashion, _factor analysis_ (FA, Everitt, 1984; Bartholomew et al., 2011) provides a parsimonious representation of the observed data, by building new variables called _factors_, while simultaneously explaining the correlation among high-dimensional observations. For this reason, when the aim is to reconstruct structures such as the ones in Figure 2, factor analysis represents a suitable strategy to follow. Nonetheless, even if it effectively reduces the dimensionality of the data, standard FA does not provide information about possible redundancies in the observed features. For this reason, in this work, we propose a suitable modification of the standard factor analysis model which allows the detection of redundant variables and which produces a partition of the variables themselves, thus possibly gaining useful insights about similarly behaving spectral regions. Figure 2: Sample correlation matrices computed on the milk samples produced by pasture-fed cows (on the left) and total mixed ration fed cows (on the right). The rest of the paper is structured as follows. In Section 2 we describe the mid-infrared spectroscopy data which motivate our proposal. In Section 3 we outline the proposed methodology with a specific focus on the proposed Bayesian estimation procedure and on the involved model selection steps. Some analyses on synthetic datasets are reported in Section 4, while in Section 5 we present the results obtained on the milk spectroscopy data.
Finally, in Section 6, we conclude with some final remarks and highlight some advantageous avenues for future research.

## 2 Dairy diet MIR spectroscopy data

The data we consider in this study consist of a collection of mid-infrared spectra from 4320 milk samples produced from cattle on three dietary treatments over a three-year period on the Teagasc Moorepark Dairy research Farm (Fermoy, Co. Cork, Ireland). The data comprise spectra extracted from morning (am) and evening (pm) milk samples collected weekly from Holstein-Friesian cows using a ProFoss FT6000 series instrument (FOSS, Ireland) between May and August in 2015, 2016 and 2017. In each year, 54 cows were randomly assigned to each of the dietary treatments for the entire lactation period of that year. Treatments included grass (GRS), which consisted of cows maintained outdoors on a perennial ryegrass sward only; clover (CLV), whereby cows were maintained outdoors on a perennial ryegrass with 20% white clover sward only; and total mixed ration (TMR), where cows were maintained indoors year round and nutrients were combined in a single nutritional mix consisting of grass silage, maize silage and concentrates. For further information on the experimental design and dietary treatments, see Faulkner et al. (2018) and O’Callaghan et al. (2016a,b, 2017, 2018). More specifically, 2931 samples come from cows fed with pasture (GRS and CLV), while the remaining 1389 come from cows fed with TMR. Note that the original milk samples may be grouped according to the feeding system into three different classes (GRS, CLV and TMR) rather than two (pasture and TMR). In practice, in our work the first two classes have been merged into a general pasture-based diet group, because of their strong similarities from a compositional perspective.
The total number of cows involved in the experiment is 120, thus implying that multiple measurements per animal are available, with a mean of 36 samples per cow. The samples were collected following a yearly balanced scheme and represent a balance of different parities. The samples considered in this work have been restricted to those collected mainly in the summer months, since this is the period of milk production with the highest prevalence of grass growth. Note that, for each sample, a spectrum consists of 1060 transmittance measurements in the region from 925 cm$^{-1}$ to 5010 cm$^{-1}$. Finally, we have some additional information about the fat, protein and lactose content of the available milk samples, obtained using channels on the FT6000 calibrated against wet chemistry results.

## 3 Factor analysis with redundant variables

### 3.1 Framework and model specification

As mentioned in the introduction, standard factor analysis (denoted as FA in the following) provides a convenient and parsimonious representation of the dependence structure among high-dimensional observations by mapping them into a low-dimensional latent space. Let $X=\\{x_{1},\dots,x_{n}\\}$, with $x_{i}\in\mathbb{R}^{p}$, be the set of observed data. Factor analysis models each observation $x_{i}$ as a linear combination of latent variables, called factors, as follows: $x_{i}=\mu+\Lambda u_{i}+\varepsilon_{i},\hskip 28.45274pti=1,\dots,n,$ (1) where $\mu\in\mathbb{R}^{p}$ is the mean vector, $\Lambda=\\{\lambda_{jk}\\}_{j=1,\dots,p,k=1,\dots,K}$ is a $p\times K$ factor loadings matrix with $K$ being the number of factors, $u_{i}\in\mathbb{R}^{K}$ denotes the factor scores vector, while $\varepsilon_{i}\sim\mathcal{N}_{p}(0,\Psi)$ is an idiosyncratic error term where $\Psi=\text{diag}(\psi_{1},\dots,\psi_{p})$, with the $\psi_{j}$’s often referred to as the uniquenesses. Without loss of generality we assume in the following that the data have been centered, hence $\mu=0$.
Moreover, the latent factors are assumed to be normally distributed with zero mean and covariance equal to the identity matrix. Consequently, we have that $(x_{i}|u_{i})\sim\mathcal{N}_{p}(\Lambda u_{i},\Psi)$, while marginally $x_{i}$ is distributed according to a Gaussian distribution with zero mean and covariance matrix $\Sigma=\Lambda\Lambda^{T}+\Psi\;.$ (2) In practical applications the number of variables $p$ is considerably higher than the number of factors $K$. Therefore the decomposition in (2) introduces a convenient and parsimonious representation of the relationships among the observed features in high-dimensional settings. From (2), and recalling that $\Psi$ is a diagonal matrix, it follows that, in standard FA, the correlation between the original variables is modelled via the loading matrix $\Lambda$. Therefore, in the literature, attention has focused on how to model and estimate $\Lambda$ appropriately. In recent years many different solutions have been proposed, both from a frequentist (see e.g. Hirose and Konishi, 2012; Hirose and Yamamoto, 2015) and from a Bayesian (see e.g. Bhattacharya and Dunson, 2011; Legramanti et al., 2020) standpoint, to obtain sparse estimates of the loading matrix. Setting some values of $\Lambda$ exactly equal to zero allows an even more parsimonious representation of the original covariance structure. Moreover, it can be convenient from an interpretative point of view, since relating each factor to a smaller number of observed variables helps to give meaning to the factors themselves. Lastly, note that if all the elements in a row of the loading matrix are equal to zero, we obtain an indication of the uninformative nature of the $j$-th variable, which is uncorrelated with all the other features and essentially represented as noise.
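The decomposition in Eq. (2) can be illustrated with a short simulation from the standard FA model; the sizes below are illustrative (not the spectral dimensions), and the sketch only checks that the sample covariance approaches $\Lambda\Lambda^{T}+\Psi$:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, K = 50_000, 8, 2                    # illustrative sizes only

Lambda = rng.normal(size=(p, K))          # factor loadings
psi = rng.uniform(0.5, 1.5, size=p)       # uniquenesses, diagonal of Psi

U = rng.normal(size=(n, K))               # u_i ~ N(0, I_K)
eps = rng.normal(size=(n, p)) * np.sqrt(psi)
X = U @ Lambda.T + eps                    # Eq. (1) with mu = 0

Sigma = Lambda @ Lambda.T + np.diag(psi)  # implied covariance, Eq. (2)
S = np.cov(X, rowvar=False)               # sample covariance
print(np.max(np.abs(S - Sigma)))          # shrinks as n grows
```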
The detection of uninformative features in the FA framework is tackled in the aforementioned references, but to the best of our knowledge possible redundancy has not been addressed yet. A variable is defined as redundant when it carries information similar to that provided by another variable (or variables), usually due to the strong correlation between them. The effective detection of redundancies, which is a challenging task, can lead to more parsimonious modelling. Note that, as briefly introduced in the previous sections, and as graphically represented in Figure 2, redundancy can be a complex issue when analyzing spectroscopy data. Therefore, in order to properly account for it, in this work we introduce a model in which some of the variables are mapped into the latent space by means of the same loading coefficients, thus giving an indication of possible grouping structures in the observed features themselves. The proposed model is then defined as follows: $\displaystyle x_{i}$ $\displaystyle=$ $\displaystyle Z\Lambda_{c}u_{i}+\varepsilon_{i}$ (3) $\displaystyle=$ $\displaystyle\tilde{\Lambda}u_{i}+\varepsilon_{i}\hskip 28.45274pti=1,\dots,n,$ with $x_{i},u_{i}$ and $\varepsilon_{i}$ as previously defined, while $Z=\\{z_{j}\\}_{j=1,\dots,p}$, with $z_{j}=(z_{j1},\dots,z_{jG})$, is a $p\times G$ latent allocation matrix where $G$ is the number of variable clusters. Here the standard binary representation is adopted for $Z$; therefore $z_{jg}=1$ if the $j$-th variable belongs to the $g$-th group and 0 otherwise. Lastly, $\Lambda_{c}=\\{\Lambda_{c,g}\\}_{g=1,\dots,G}$, with $\Lambda_{c,g}=(\lambda_{c,g1},\dots,\lambda_{c,gK})$, is a $G\times K$ matrix whose $g$-th row contains the unique and representative loading values for the $g$-th variable cluster. Note that, as a direct consequence of the specification of model (3), $\tilde{\Lambda}$ has duplicated rows.
We believe this constitutes a sensible way to account for redundancy in the observed features, by simply constraining the relations with the latent factors to be equal for those variables belonging to the same cluster. The distributional properties highlighted above for the standard FA model are still valid. Therefore, we have that $(x_{i}|u_{i},z)\sim\mathcal{N}_{p}(\tilde{\Lambda}u_{i},\Psi)$ while $(x_{i}|z)\sim\mathcal{N}_{p}(0,\tilde{\Sigma})$ where $\tilde{\Sigma}=\tilde{\Lambda}\tilde{\Lambda}^{T}+\Psi$. It is straightforward to see how the proposed model induces an even more parsimonious decomposition of the covariance matrix. In fact, the specification of our model entails a possibly drastic reduction in the total number of covariance parameters to estimate: for model (3) this number is equal to $(G\times K)+p$, while for model (1) it is equal to $(p\times K)+p$; due to rotational invariance in FA, the number of identifiable parameters is fewer than this. Clearly, the smaller the number of variable clusters $G$, and hence the more redundancy observed in the data, the greater the reduction. From an interpretative point of view, the estimation of the allocation matrix $Z$ allows obtaining a clustering of the variables. This partition gives insights into the redundancy phenomenon under study by clearly highlighting which variables are strongly correlated and provide similar information. Finally, note that our proposal may be adapted in order to detect both redundant and uninformative variables by a priori forcing all the elements in a single specific row of $\Lambda_{c}$ to be exactly equal to zero. This would imply that all the variables assigned to the corresponding group are modelled as noise and are uncorrelated with all the other variables.
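The structure of model (3) and the parameter counts above can be made concrete with a minimal sketch (illustrative sizes and variable names are ours):

```python
import numpy as np

p, K, G = 10, 3, 4                     # illustrative sizes, not the 1060 wavelengths
rng = np.random.default_rng(1)

labels = rng.integers(0, G, size=p)    # cluster label of each variable
Z = np.eye(G)[labels]                  # p x G binary allocation matrix
Lambda_c = rng.normal(size=(G, K))     # one loading row per variable cluster
Lambda_tilde = Z @ Lambda_c            # Eq. (3): duplicated rows within clusters

# variables assigned to the same cluster share an identical loading row
for j in range(p):
    assert np.allclose(Lambda_tilde[j], Lambda_c[labels[j]])

n_params_clustered = G * K + p         # model (3): (G x K) + p
n_params_standard = p * K + p          # model (1): (p x K) + p
print(n_params_clustered, n_params_standard)   # 22 vs 40 here
```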
### 3.2 Likelihood and prior specification

Under the specification of model (3), and recalling that $(x_{i}|u_{i},z)\sim\mathcal{N}_{p}(Z\Lambda_{c}u_{i},\Psi)$, the corresponding likelihood function is given by $\displaystyle\mathcal{L}(X|\Lambda_{c},\Psi,Z,U)$ $\displaystyle=$ $\displaystyle\prod_{i=1}^{n}(2\pi)^{-\frac{p}{2}}|\Psi|^{-\frac{1}{2}}\exp\left\\{-\frac{1}{2}(x_{i}-Z\Lambda_{c}u_{i})^{T}\Psi^{-1}(x_{i}-Z\Lambda_{c}u_{i})\right\\}$ (4) $\displaystyle\propto$ $\displaystyle|\Psi|^{-\frac{n}{2}}\exp\left\\{-\frac{1}{2}\text{tr}\left[\Psi^{-1}(X-U\Lambda_{c}^{T}Z^{T})^{T}(X-U\Lambda_{c}^{T}Z^{T})\right]\right\\}$ where $U=\\{u_{i}\\}_{i=1,\dots,n}$, with $u_{i}=(u_{i1},\dots,u_{iK})$, is the $n\times K$ factor scores matrix, while $X,\Lambda_{c},Z$ and $\Psi$ are defined as in the previous section. Lastly, $|A|$ and $\text{tr}[A]$ denote respectively the determinant and the trace of a generic matrix $A$. Different strategies might be adopted to estimate the parameters involved in (3). From a frequentist perspective, the maximum likelihood estimates are usually obtained via iterative algorithms such as those proposed in Jöreskog (1967), Jennrich and Robinson (1969) and Rubin and Thayer (1982). Conversely, in this work we adopt a Bayesian approach to factor analysis estimation (see e.g. Press and Shigemasu, 1989; Arminger and Muthén, 1998; Song and Lee, 2001).
More specifically, we assume independent prior distributions for the model parameters as follows $\displaystyle\Lambda_{c,g}$ $\displaystyle\sim$ $\displaystyle\mathcal{N}_{K}(0,\sigma^{2}_{\lambda}\mathbb{I}_{K})\hskip 48.36958pt\text{for}\;\;g=1,\dots,G$ (5) $\displaystyle u_{i}$ $\displaystyle\sim$ $\displaystyle\mathcal{N}_{K}(0,\mathbb{I}_{K})\hskip 60.3197pt\text{for}\;\;i=1,\dots,n$ (6) $\displaystyle\psi_{j}$ $\displaystyle\sim$ $\displaystyle\text{IG}(\alpha,\beta_{j})\hskip 66.01059pt\text{for}\;\;j=1,\dots,p$ (7) $\displaystyle z_{j}$ $\displaystyle\sim$ $\displaystyle\text{PPM}(\alpha_{z})\hskip 66.01059pt\text{for}\;\;j=1,\dots,p.$ (8) The choice of the hyperparameters for the inverse gamma prior on the uniquenesses is guided by the suggestions in Frühwirth-Schnatter and Lopes (2010); here the authors avoid the Heywood problem by choosing $\alpha$ and $\beta_{j}$ so that $\psi_{j}$ tends to be bounded away from zero. More specifically, in our analyses we set $\alpha=2.5$ and $\beta_{j}=(\alpha-1)/S^{-1}_{jj}$ where $S^{-1}$ represents the inverse of the sample covariance matrix. On the other hand, $\sigma^{2}_{\lambda}$ might be chosen to be suitably large in order to obtain an uninformative prior for the rows of $\Lambda_{c}$. Some words of caution are required for the prior in (8). Let ${\bf c}$ be a clustering of the indices $\\{1,\dots,p\\}$; even if different representations are possible, we consider ${\bf c}=\\{C_{1},\dots,C_{G}\\}$ as a collection of disjoint subsets such that $C_{g}$ contains all the indices of the variables belonging to the $g$-th cluster. A product partition model (PPM, Hartigan, 1990; Barry and Hartigan, 1992) assumes that the prior probability for ${\bf c}$ is expressed as follows $\displaystyle\pi({\bf c}=\\{C_{1},\dots,C_{G}\\})\propto\prod_{g=1}^{G}\rho(C_{g})$ (9) where $\rho(\cdot)$ is known as the _cohesion function_.
Since we have a one-to-one correspondence between the representation of the partition ${\bf c}$ as a collection of blocks and the one via the allocation matrix $Z$, with a slight abuse of notation, we specify the prior as in (8) even in our framework. Several different specifications for $\rho(\cdot)$ have been proposed in the literature: in this work we consider $\pi({\bf c})\propto\alpha_{z}^{G}\prod_{g=1}^{G}(|C_{g}|-1)!$, where $|C_{g}|$ denotes the cardinality of the $g$-th cluster, which shares strong connections with the Dirichlet process widely used in the Bayesian clustering framework (Quintana and Iglesias, 2003). Note that in our analyses we set $\alpha_{z}=1$ as is standard; however, this choice did not seem influential. When specifying a prior over the set of partitions, another reasonable approach within our framework would consist of borrowing ideas from the Bayesian spatial clustering literature or considering some additional information such as distances between the objects to be clustered (see e.g. Blei and Frazier, 2011; Page et al., 2016; Dahl et al., 2017; Wehrhahn et al., 2020, and references therein). In this way, contiguous groups of wavelengths would be favored, leading to some advantages in the modelling process for specific applications.
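Given the connection with the Dirichlet process, a draw from the chosen partition prior $\pi({\bf c})\propto\alpha_{z}^{G}\prod_{g}(|C_{g}|-1)!$ can be generated sequentially via its Chinese-restaurant-process representation; the following is an illustrative sketch (the function name and defaults are ours):

```python
import random

def sample_partition(p, alpha_z=1.0, seed=0):
    """Sequential (Chinese restaurant process) draw from the PPM prior
    pi(c) proportional to alpha_z^G * prod_g (|C_g| - 1)!."""
    rng = random.Random(seed)
    sizes, labels = [], []
    for j in range(p):
        # join existing cluster g w.p. proportional to |C_g|,
        # or open a new cluster w.p. proportional to alpha_z
        weights = sizes + [alpha_z]
        g = rng.choices(range(len(weights)), weights=weights)[0]
        if g == len(sizes):
            sizes.append(0)
        sizes[g] += 1
        labels.append(g)
    return labels

labels = sample_partition(p=1060, alpha_z=1.0)  # one spectrum-sized draw
print(max(labels) + 1)                          # number of clusters G in this draw
```

Under this prior the number of clusters grows slowly with $p$, which is why the choice $\alpha_{z}=1$ already favors a parsimonious partition.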
Given the likelihood function in (4) and the specification of the priors outlined above, the posterior distribution is defined as follows $\displaystyle\pi(\Lambda_{c},\Psi,Z,U|X)$ $\displaystyle=$ $\displaystyle\mathcal{L}(X|\Lambda_{c},\Psi,Z,U)\pi(\Lambda_{c}|\sigma^{2}_{\lambda})\pi(\Psi|\alpha,\beta_{j})\pi(Z|\alpha_{z})\pi(U)$ $\displaystyle\propto$ $\displaystyle\mathcal{L}(X|\Lambda_{c},\Psi,Z,U)\prod_{g=1}^{G}\phi^{(K)}(\Lambda_{c,g};0,\sigma^{2}_{\lambda}\mathbb{I}_{K})\prod_{j=1}^{p}\text{IG}(\alpha,\beta_{j})$ $\displaystyle\times\,\alpha_{z}^{G}\prod_{g=1}^{G}(|C_{g}|-1)!\prod_{i=1}^{n}\phi^{(K)}(u_{i};0,\mathbb{I}_{K})$ where $\phi^{(K)}(\cdot;\mu,\Sigma)$ denotes the pdf of a $K$-dimensional Gaussian random variable with mean vector $\mu$ and covariance matrix $\Sigma$.

### 3.3 Model estimation

Model parameter estimation is carried out in a Bayesian framework, using Markov Chain Monte-Carlo. Due to the conditionally conjugate nature of the adopted prior distributions, samples from (3.2) are obtained using a Gibbs sampling scheme, with the exception of the latent variable allocation matrix $Z$, which is sampled via a Metropolis-Hastings step. The full conditional distributions are listed below (more details on their derivation and on the involved parameters are provided in the Appendix). $\displaystyle\text{vec}(\Lambda_{c})|\dots$ $\displaystyle\sim$ $\displaystyle\mathcal{N}_{G\times K}(\mu_{\lambda},\Sigma_{\lambda})$ (11) $\displaystyle u_{i}|\dots$ $\displaystyle\sim$ $\displaystyle\mathcal{N}_{K}(\mu_{u},\Sigma_{u})$ (12) $\displaystyle\psi_{j}|\dots$ $\displaystyle\sim$ $\displaystyle\text{IG}(\alpha+n/2,\beta_{j}^{*})$ (13) For the Metropolis-Hastings step to sample the allocation matrix $Z$, we adapt to our case one of the moves proposed by Nobile and Fearnside (2007) in the so-called _allocation sampler_.
Each single move attempts to reallocate to cluster $g_{2}$ a group of variables previously assigned to cluster $g_{1}$; in this way, by possibly reallocating blocks of variables, big moves are proposed so that the space is explored faster. The detailed steps of the procedure are outlined hereafter:

1. Draw, from the total $G$ variable clusters, a group $g_{1}$. If $n_{g_{1}}=|C_{g_{1}}|=0$, where $|C_{g_{1}}|$ denotes the cardinality of group $g_{1}$, the move fails;
2. Compute $d(\Lambda_{c,g_{1}},\Lambda_{c,g^{\prime}})$, the Euclidean distance between $\Lambda_{c,g_{1}}$ and $\Lambda_{c,g^{\prime}}$, $\forall\;g^{\prime}\in\\{1,\dots,G\\}$ with $g^{\prime}\neq g_{1}$. Afterwards, a second group $g_{2}$ is drawn from the set $\\{1,\dots,G\\}\setminus g_{1}$ with $\mathbb{P}(g^{\prime}=g_{2})\propto d(\Lambda_{c,g_{1}},\Lambda_{c,g^{\prime}})^{-1}$;
3. Draw $M$ from the set $\\{1,\dots,n_{g_{1}}\\}$ with $\mathbb{P}(M=m)\propto 1/m,\;\forall m=1,\dots,n_{g_{1}}$;
4. Select randomly $M$ variables among the $n_{g_{1}}$ belonging to group $g_{1}$ and reallocate them to group $g_{2}$;
5. Denote with $Z$ and $Z^{\prime}$ respectively the starting latent allocation matrix and the one after the reallocation move. The move itself is then accepted with probability $\min\\{1,R\\}$, where $R$ is given by $\displaystyle R=\frac{\pi(\Lambda_{c},\Psi,Z^{\prime},U|X)}{\pi(\Lambda_{c},\Psi,Z,U|X)}\frac{\mathbb{P}(Z^{\prime}\rightarrow Z)}{\mathbb{P}(Z\rightarrow Z^{\prime})}\;.$

It can be easily shown that the proposal ratio is $\frac{\mathbb{P}(Z^{\prime}\rightarrow Z)}{\mathbb{P}(Z\rightarrow Z^{\prime})}=\frac{\sum_{m=1}^{n_{g_{1}}}\frac{1}{m}}{\sum_{m=1}^{n_{g_{2}}+M}\frac{1}{m}}\frac{n_{g_{1}}!n_{g_{2}}!}{(n_{g_{1}}-M)!(n_{g_{2}}+M)!}\;\;\;.$ Our modification of the procedure proposed by Nobile and Fearnside (2007) consists of changing the probabilities involved in the selection of $g_{2}$ and $M$.
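The proposal part of the move (steps 1-4 together with the proposal ratio above) can be sketched as follows; this is an illustrative implementation under our own naming, and the posterior ratio entering $R$ is deliberately omitted:

```python
import numpy as np
from math import lgamma, exp

def propose_reallocation(labels, Lambda_c, rng):
    """Propose moving a random block of variables from cluster g1 to g2.
    Returns (new_labels, proposal_ratio) with proposal_ratio = P(Z'->Z)/P(Z->Z'),
    or None if the move fails; the posterior ratio in R is not computed here."""
    G = Lambda_c.shape[0]
    g1 = int(rng.integers(G))                            # step 1
    members = np.where(labels == g1)[0]
    n_g1 = len(members)
    if n_g1 == 0:
        return None                                      # empty cluster: move fails
    others = [g for g in range(G) if g != g1]            # step 2: g2 ~ 1/distance
    d = np.array([np.linalg.norm(Lambda_c[g1] - Lambda_c[g]) for g in others])
    w = 1.0 / np.maximum(d, 1e-12)
    g2 = others[int(rng.choice(len(others), p=w / w.sum()))]
    wm = 1.0 / np.arange(1, n_g1 + 1)                    # step 3: P(M=m) ~ 1/m
    M = 1 + int(rng.choice(n_g1, p=wm / wm.sum()))
    moved = rng.choice(members, size=M, replace=False)   # step 4
    new_labels = labels.copy()
    new_labels[moved] = g2
    n_g2 = int(np.sum(labels == g2))
    harmonic = lambda n: float(np.sum(1.0 / np.arange(1, n + 1)))
    log_perm = (lgamma(n_g1 + 1) + lgamma(n_g2 + 1)
                - lgamma(n_g1 - M + 1) - lgamma(n_g2 + M + 1))
    ratio = harmonic(n_g1) / harmonic(n_g2 + M) * exp(log_perm)
    return new_labels, ratio

rng = np.random.default_rng(3)
labels = np.arange(30) % 4                 # 30 variables, 4 non-empty clusters
out = propose_reallocation(labels, rng.normal(size=(4, 3)), rng)
```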
The rationale lies in the need to propose, at each step, the reallocation of blocks of variables while increasing the acceptance ratio, by proposing moves involving similar clusters and by keeping the size of the blocks reasonable. Lastly, note that the model we are proposing, having a factor analytic structure, inherits the standard identifiability issues related to the rotational invariance property. A common solution consists in imposing constraints on the factor loadings (see e.g., Arminger and Muthén, 1998; Lopes and West, 2004). In our framework, where the factor analytic structure of the model may be seen as a tool to reconstruct $\Sigma$ in a parsimonious way, identification is not strictly necessary. Moreover it has been noted (Bhattacharya and Dunson, 2011) that identifiability constraints may lead to order dependence among the variables and general inefficiencies. As a direct consequence we decided not to consider such constraints in our modelling strategy.

### 3.4 Model selection

In the previous sections, the number of factors $K$ has been considered as fixed; in practice, inference on $K$ constitutes one of the most challenging issues to tackle when considering factor analytic models. Several different approaches have been proposed in the literature, with a standard one resorting to widely known information criteria, such as the AIC and BIC, as selection tools. Nonetheless these criteria might not be reliable in high-dimensional settings and they are not theoretically justified in the FA framework. From a Bayesian perspective it is worth mentioning the work by Lopes and West (2004), where the authors propose a reversible jump Markov chain Monte Carlo algorithm moving between models having different numbers of factors.
Another viable approach comes from the nonparametric literature, where models with an infinite number of factors have been widely used in combination with shrinkage priors on the loading matrices (Bhattacharya and Dunson, 2011; Durante, 2017; Schiavon and Canale, 2020); such a strategy allows the automatic choice of the number of active factors, being the ones with non-negligible loading values. Note that in the framework developed herein, the model selection step is even more troublesome, since the choice of $K$ is coupled with that of the number of variable clusters $G$. A similar problem arises in the context of mixtures of factor analyzers (Ghahramani and Hinton, 1996), where usually different models, corresponding to different configurations of factors and mixture components, are compared by means of information criteria. In a Bayesian framework a solution has been proposed by Fokoué and Titterington (2003), where the authors adopt a stochastic model search to jointly select the optimal number of clusters and factors. In order to address the issues mentioned above, different strategies may be adopted in the considered settings. A viable alternative to the information criteria usually adopted, and one more coherent with the estimation routine outlined in Section 3.3, consists of the BICM (BIC Monte-Carlo) or the AICM (AIC Monte-Carlo) proposed by Raftery et al. (2007) and defined as $\text{BICM}=2\log\tilde{\mathcal{L}}-2s^{2}_{l}\log(n)$ and $\text{AICM}=2\log\overline{\mathcal{L}}-2s^{2}_{l}$ where $\tilde{\mathcal{L}}$, $\overline{\mathcal{L}}$ and $s^{2}_{l}$ are respectively the maximum observed value, the mean and the variance of the likelihood computed for each posterior sample. Another, somewhat heuristic, approach is given by the so-called BIC-MCMC (Frühwirth-Schnatter, 2011) with $\text{BIC-MCMC}=2\log\tilde{\mathcal{L}}-\nu\log(n)$ where $\nu$ is the total number of parameters in the model.
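As a quick illustration, all three criteria can be computed from the log-likelihood trace of the sampler. The sketch below is ours and follows the formulas as stated, under the assumption that $s^{2}_{l}$ is the sample variance of the log-likelihood values across draws:

```python
import math

def information_criteria(loglik_draws, n, nu):
    """Monte-Carlo information criteria from the log-likelihood evaluated
    at each posterior draw (formulas as stated in the text).

    loglik_draws : log-likelihood values, one per MCMC sample
    n            : sample size
    nu           : total number of free parameters (for BIC-MCMC)
    """
    L = len(loglik_draws)
    l_max = max(loglik_draws)                     # maximum observed
    l_bar = sum(loglik_draws) / L                 # mean
    # sample variance of the log-likelihood values (our reading of s^2_l)
    s2 = sum((l - l_bar) ** 2 for l in loglik_draws) / (L - 1)
    return {
        "BICM": 2 * l_max - 2 * s2 * math.log(n),
        "AICM": 2 * l_bar - 2 * s2,
        "BIC-MCMC": 2 * l_max - nu * math.log(n),
    }
```

For example, `information_criteria([-100.0, -102.0, -101.0], n=500, nu=10)` gives an AICM of $2(-101)-2\cdot 1=-204$.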
Note that, since it does not depend on $s^{2}_{l}$, the BIC-MCMC turns out to be less influenced by possible fluctuations and jumps in the log-likelihood values across the MCMC draws. Furthermore, an exhaustive search of the model space is computationally expensive, if not infeasible, considering that in our scenario both $K$ and $G$ might span a wide range of values. Moreover, in the given framework, the focus is on models providing good and parsimonious reconstructions of the covariance matrices, jointly with indications about which variables provide similar information through redundancy, rather than on finding the optimal number of factors and groups. For these reasons, in this work, we consider an ad hoc initialization strategy which yields a promising configuration $(K_{\text{init}},G_{\text{init}})$ for the number of factors and variable clusters. The procedure consists of the following steps:

1. Estimate a standard FA model as defined in (1) for $k=1,\dots,K_{\text{max}}$, with $K_{\text{max}}$ chosen sufficiently large. This yields the loading matrices $\Lambda_{k}$, $k=1,\dots,K_{\text{max}}$;

2. Use a model-based clustering strategy (see Fraley and Raftery (2002) or Bouveyron et al. (2019) for a recent review) to obtain a partition of the rows of $\Lambda_{k}$, for $k=1,\dots,K_{\text{max}}$, into $G_{k}$ groups, with $G_{k}$ automatically selected by means of the Bayesian Information Criterion (BIC);

3. Build new loading matrices $\overline{\Lambda}_{k}$, for $k=1,\dots,K_{\text{max}}$, where the rows of $\Lambda_{k}$ are replaced with the mean of the cluster they belong to. In this way the repeated-row structure of $\tilde{\Lambda}$ in (3) is mimicked;

4. Considering the distributional properties of FA models, compute the BIC for all the models corresponding to the different configurations $(k,G_{k})$, with $k=1,\dots,K_{\text{max}}$.
5. Select as $(K_{\text{init}},G_{\text{init}})$ the configuration which attains the highest value of the BIC.

This approach allows us to find reasonable values that can be used as the starting point of a local search. More specifically, once $(K_{\text{init}},G_{\text{init}})$ are obtained by running the initialization procedure illustrated above, we consider a greedy model selection strategy. From a practical point of view, we fit four different models corresponding to $(K_{\text{init}}\pm 1,G_{\text{init}}\pm 1)$ and compare them by means of the BIC-MCMC. The model with the best value of the information criterion is then selected and its neighboring models are subsequently estimated. These two steps are iterated until no improvement in the BIC-MCMC values is found. The best model according to the BIC-MCMC is then selected and subsequently used to reconstruct the covariance structure and to obtain a partition of the wavelengths. Lastly, note that the sensitivity analyses conducted and reported in the next section confirm that running a global and exhaustive search is not strictly necessary if covariance reconstruction is the final aim.

## 4 Synthetic data

In this section we investigate the performances of the proposed procedure on some synthetic datasets. The aim of the analyses reported hereafter is twofold. On one hand we want to quantify the deterioration of the results when a wrong model, having different $(K,G)$ values with respect to the model generating the data, is employed. The possible deterioration is studied in terms of the quality of the variable partitions, measured according to the _Adjusted Rand Index_ (ARI; Hubert and Arabie, 1985), and of the correlation reconstruction.
The latter is evaluated by considering two different criteria measuring the dissimilarity between the true correlation matrix $R=D^{-1/2}\Sigma D^{-1/2}$ and the estimated one $\hat{R}=\hat{D}^{-1/2}\hat{\Sigma}\hat{D}^{-1/2}$, with $D=\text{diag}(\sigma^{2}_{1},\dots,\sigma^{2}_{p})$ and $\hat{D}=\text{diag}(\hat{\sigma}^{2}_{1},\dots,\hat{\sigma}^{2}_{p})$. The first criterion we considered is the _Mean Squared Error_ (MSE), defined as $\displaystyle\text{MSE}(R,\hat{R})=\frac{1}{\frac{p(p+1)}{2}}\sum_{j=1}^{p}\sum_{j^{\prime}\leq j}(R_{jj^{\prime}}-\hat{R}_{jj^{\prime}})^{2}$ where $R_{jj^{\prime}}$ represents the $(j,j^{\prime})$-th element of the matrix $R$. The second criterion adopted is the RV coefficient (Abdi, 2007), expressed as $\displaystyle\text{RV}(R,\hat{R})=\frac{\text{tr}(R^{T}\hat{R})}{\sqrt{\text{tr}(R^{T}R)\text{tr}(\hat{R}^{T}\hat{R})}}$ and taking values between 0 and 1, with values closer to 1 denoting greater similarity between the matrices. Note that we consider the correlation matrices and not the covariance ones in order to have more interpretable indications from an MSE perspective. On the other hand, the second aim of the simulation study consists in the numerical exploration of the quality of the initialization strategy proposed in Section 3.4, used to find promising configurations for the number of factors $K$ and variable clusters $G$. A total of $B=200$ samples have been drawn with sample size $n=500$ and $p=40$ variables. The data are generated according to the probabilistic mechanism underlying model (3), so that the sampled vectors $x_{i}$ are distributed as Gaussian random variables with zero mean and covariance matrix $\Sigma_{\text{true}}=\Lambda_{\text{true}}\Lambda_{\text{true}}^{T}+\Psi_{\text{true}}$. The true number of factors and of variable clusters have been fixed to $K_{\text{true}}=3$ and $G_{\text{true}}=5$ respectively.
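Both criteria are straightforward to compute; a minimal numpy sketch of ours, following the two formulas above (MSE over the lower triangle including the diagonal, RV via traces):

```python
import numpy as np

def mse_corr(R, R_hat):
    """MSE over the lower triangle (diagonal included) of p x p
    correlation matrices, divided by p(p+1)/2 entries."""
    idx = np.tril_indices(R.shape[0])
    d = R[idx] - R_hat[idx]
    return float(np.mean(d ** 2))

def rv_coefficient(R, R_hat):
    """RV coefficient between two symmetric matrices; values closer
    to 1 denote greater similarity."""
    num = np.trace(R.T @ R_hat)
    den = np.sqrt(np.trace(R.T @ R) * np.trace(R_hat.T @ R_hat))
    return float(num / den)
```

For instance, with $p=2$, $R=\begin{pmatrix}1&0.5\\0.5&1\end{pmatrix}$ and $\hat{R}=\mathbb{I}_{2}$, the MSE is $0.25/3$ and the RV coefficient is $2/\sqrt{5}$.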
Prior to running the Gibbs sampler outlined in Section 3.3, the involved parameters are initialized by estimating a standard FA model where the factor loadings are obtained as the cluster centroids of a $k$-means clustering procedure, which also allows us to obtain the starting values for the loadings partition. The hyperparameters have been selected according to the indications given in Section 3.2, with $\sigma_{\lambda}=5$ to entail a priori uninformativeness about the dispersion of the factor loadings. All the reported analyses have been conducted within the R environment (R Core Team, 2020) with the aid of the mclust package (Scrucca et al., 2016).

Table 1: Means, over the $B$ simulated samples, of the ARI values (and their standard errors) comparing the true and the estimated variable partitions for varying values of $K$ and $G$. Bold cell represents the true model generating the data.

| $K \backslash G$ | 3 | 4 | 5 | 6 | 7 |
|---|---|---|---|---|---|
| 2 | 0.728 (0.140) | 0.915 (0.082) | 0.985 (0.044) | 0.993 (0.029) | 0.995 (0.024) |
| 3 | 0.709 (0.155) | 0.890 (0.109) | **0.970 (0.063)** | 0.982 (0.054) | 0.991 (0.034) |
| 4 | 0.719 (0.140) | 0.861 (0.121) | 0.934 (0.101) | 0.962 (0.058) | 0.970 (0.051) |
| 5 | 0.696 (0.162) | 0.837 (0.141) | 0.909 (0.097) | 0.918 (0.093) | 0.936 (0.076) |

Results are reported in Tables 1, 2, 3 and 4. First of all, note that variable partitions might erroneously be seen as a mere byproduct of the procedure proposed in Section 3.1, helping to reduce even further the number of free parameters when resorting to factor analysis. Nonetheless, obtaining variable clusters can represent the final aim of the analyses, since it produces relevant insights about the phenomenon under study, as will be clear for the application in Section 5. For this reason a proper evaluation of the clustering performances, even in simulated scenarios, is crucial to validate our procedure.
From Table 1 we can see how, in these scenarios, the obtained variable partitions are generally close to the true groupings, as the ARI generally achieves good values. A more careful investigation of the results shows how, as expected, the values are strongly influenced by the number of clusters $G$ used in the model fitting procedure. Nonetheless this behavior turns out not to be symmetrical: while underestimating the number of clusters appears to be harmful, overestimating $G$ looks harmless if not beneficial; this might give a clue about the spuriousness of some of the clusters when $G>G_{\text{true}}$. The impact of the number of factors is less evident, as the ARI values appear robust with respect to changes in $K$. A relevant behavior to highlight is the slight degradation of the results when increasing the number of factors; note that, as $K$ increases, the dimensionality of the space in which cluster searches are conducted increases too, possibly enhancing the sparsity of the data and deteriorating clustering performances.

Table 2: Means, over the $B$ simulated samples, of the MSE (and their standard errors) comparing the true and the estimated correlation matrices for varying values of $K$ and $G$. Bold cell represents the true model generating the data.

| $K \backslash G$ | 3 | 4 | 5 | 6 | 7 |
|---|---|---|---|---|---|
| 2 | 0.052 (0.048) | 0.022 (0.030) | 0.009 (0.010) | 0.009 (0.010) | 0.010 (0.011) |
| 3 | 0.064 (0.060) | 0.019 (0.030) | **0.004 (0.011)** | 0.002 (0.010) | 0.002 (0.007) |
| 4 | 0.062 (0.059) | 0.027 (0.031) | 0.014 (0.030) | 0.007 (0.015) | 0.004 (0.010) |
| 5 | 0.066 (0.064) | 0.032 (0.041) | 0.015 (0.024) | 0.013 (0.026) | 0.008 (0.020) |

Table 3: Means, over the $B$ simulated samples, of the RV coefficients (and their standard errors) comparing the true and the estimated correlation matrices for varying values of $K$ and $G$. Bold cell represents the true model generating the data.
| $K \backslash G$ | 3 | 4 | 5 | 6 | 7 |
|---|---|---|---|---|---|
| 2 | 0.911 (0.089) | 0.972 (0.031) | 0.989 (0.019) | 0.987 (0.021) | 0.984 (0.025) |
| 3 | 0.895 (0.097) | 0.968 (0.052) | **0.993 (0.020)** | 0.997 (0.006) | 0.997 (0.011) |
| 4 | 0.898 (0.095) | 0.960 (0.049) | 0.982 (0.035) | 0.992 (0.015) | 0.994 (0.009) |
| 5 | 0.894 (0.104) | 0.953 (0.057) | 0.979 (0.035) | 0.985 (0.029) | 0.989 (0.022) |

In Tables 2 and 3 the results concerning correlation matrix reconstruction are reported. First of all, it is relevant to highlight how both the MSE and the RV coefficient tend to provide very similar indications. Generally speaking, the obtained performances are good overall, regardless of the values of $K$ and $G$. A more detailed look reveals how, coherently with what happens for the clustering quality, underestimation of the number of variable clusters $G$ seems to have a more visible impact on the degradation of the results. On the other hand, overestimation of $G$ seems to have little impact on the values of the MSE and RV coefficient, and the same holds for different choices of the number of factors. Finally, in Table 4 the performances of the model selection initialization strategy outlined in Section 3.4 are displayed. A first promising result is given by the fact that the true model generating the data is the one selected most often by this strategy. Moreover, the right number of factors is chosen in more than 80% of the cases. On the other hand, the correct number of clusters looks harder to detect, with $G=5$ being selected once every two samples. Nonetheless a tendency to overestimate $G$ is witnessed, with this behavior being generally beneficial according to the results, commented on above, concerning the quality of the partitions and the correlation reconstruction.

Table 4: Proportion of times, over the $B=200$ simulated samples, that a specific configuration $(K,G)$ has been selected by the initialization strategy outlined in Section 3.4. Bold cell represents the true model generating the data.
| $K \backslash G$ | 4 | 5 | 6 | 7 | 8 or more |
|---|---|---|---|---|---|
| 1 | 0.005 | 0.000 | 0.000 | 0.000 | 0.000 |
| 2 | 0.030 | 0.010 | 0.005 | 0.000 | 0.005 |
| 3 | 0.125 | **0.460** | 0.115 | 0.040 | 0.080 |
| 4 | 0.000 | 0.005 | 0.010 | 0.000 | 0.035 |
| 5 | 0.000 | 0.000 | 0.005 | 0.000 | 0.005 |
| 6 or more | 0.000 | 0.030 | 0.005 | 0.000 | 0.030 |

We strongly believe that the results reported in this section have to be considered as a whole, in order to obtain useful indications about reasonable paths to take when analyzing real datasets. From Table 4 we can see how the proposed initialization strategy never selected $G<4$ and how, when not selecting the true model generating the data, it favors the overestimation of both $K$ and $G$. These indications, if coupled with the ones obtained from Tables 1, 2 and 3, suggest that the initialization strategy tends to select models producing satisfactory performances from both a clustering and a correlation reconstruction perspective. As commented above, a more pronounced deterioration of the results is witnessed especially when $G=3$, while overestimation of both the number of clusters and factors generally leads to an improvement of the performances. As a consequence, we believe that the proposed strategy might be fruitfully used as a fast and effective replacement for more intensive and time-consuming grid searches over $K$ and $G$ coupled with reliance on some information criterion that has to be carefully selected. In fact, note that some further analyses, not reported in the paper, generally showed the unreliability of the information criteria introduced in Section 3.4. As a general indication, it has been noted that the BIC-MCMC is more stable than the AICM and BICM, since the latter two can be strongly influenced by jumps of the likelihood, usually due to updates of the allocation matrix $Z$, at a given step of the Gibbs updating procedure.
Nonetheless, in our analyses all three criteria selected the true model generating the data less often than the suggested initialization strategy.

## 5 Application to the milk MIR spectroscopy data

In this section, the proposed method is applied to the milk MIR spectroscopy data described in Section 2. The initialization of the parameters involved in the model and the specification of the hyperparameters have been carried out coherently with what was done for the synthetic data in the previous section. Moreover, prior to running the proposed methodology, we removed from each single spectrum three wavelength regions supposed to be highly noisy (Hewavitharana and van Brakel, 1997), namely the ones from 1592 cm$^{-1}$ to 1720 cm$^{-1}$, from 2996 cm$^{-1}$ to 3698 cm$^{-1}$ and from 3818 cm$^{-1}$ to 5010 cm$^{-1}$. Consequently we end up working with a dataset of $n=4320$ milk samples and $p=533$ measured wavelengths. The initialization procedure outlined in Section 3.4 selects different $(K,G)$ configurations for the Pasture and for the TMR samples. More specifically, in the first case it selects a number of factors $K$ equal to 4 and a number of variable clusters $G$ equal to 25, while in the latter one $K=3$ and $G=19$. This might give a first rough indication of the more complex wavelength relations underlying the samples coming from pasture fed cows, since in order to capture those relations a higher number of clusters, lying in a higher-dimensional reduced subspace, is needed.

Figure 3: Estimated correlation matrices computed on the milk samples produced by pasture fed cows (on the left) and TMR fed cows (on the right).

In Figure 3 the estimated correlation matrices, obtained by running the proposed model with the mentioned $(K,G)$ values, are reported.
By comparing these matrices with the sample correlation ones displayed in Figure 2, we can obtain an indication of the capabilities of our methodology to map the data into lower-dimensional subspaces while retaining the relevant correlation structures. Denoting with $R$ the sample correlation matrix and with $\tilde{R}$ the estimated one, we have that, for the pasture samples, $\text{MSE}(R_{\text{Pasture}},\tilde{R}_{\text{Pasture}})=0.021$ and $\text{RV}(R_{\text{Pasture}},\tilde{R}_{\text{Pasture}})=0.980$. On the other hand, in the case of TMR, we obtain $\text{MSE}(R_{\text{TMR}},\tilde{R}_{\text{TMR}})=0.035$ and $\text{RV}(R_{\text{TMR}},\tilde{R}_{\text{TMR}})=0.965$. These results suggest that our method reconstructs the relations among the wavelengths in quite a satisfactory way and that the initialization strategy selects reasonable values for $K$ and $G$. Furthermore, the graphical inspection of Figure 3 suggests how our variable clustering mechanism tends to favour the appearance of blocky structures, thus possibly simplifying the practical interpretation of the relations among wavelengths. Moreover, even in the face of a rather similar correlation structure, this characteristic of our proposal allows us to highlight even more the differences, between milk samples coming from pasture fed and TMR fed cows, in the correlation among different wavelength regions. These graphical differences may serve as an interesting starting point to study how the diet regimens can impact the chemical processes underlying the spectral behaviour.

Table 5: Confusion matrix comparing wavelength partitions obtained on samples from pasture fed (on the rows) and TMR fed (on the columns) cows. Blank spaces are used instead of zeroes.
| | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 | 16 | 17 | 18 | 19 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | 7 | 2 | | | | | | | | | | | | | | | | | |
| 2 | | | 15 | | | 1 | 2 | | | | | | | | | | | | |
| 3 | | | | 15 | | | | | | | | | | | | | | | |
| 4 | | | | | 6 | | 1 | 5 | | | | | | | | | | | |
| 5 | | | | | | 29 | | | | | | | | | | 3 | | | |
| 6 | | | | | | | | | 20 | | | | | | | | | | |
| 7 | | | | | | 15 | | | | | | | | | | | | | |
| 8 | | | | | | | 12 | | | 3 | | | | | | | | | |
| 9 | | | | | | | | | 12 | | | | | | | 2 | | | |
| 10 | 1 | 1 | | | | | | 2 | | 1 | | | | | | | | | |
| 11 | | | | | 2 | | | 6 | | 2 | | | | | | | | | |
| 12 | | | | | | | | | | | 55 | 8 | | | | | | | |
| 13 | | 4 | | | | | | 1 | | | | | 37 | | | | | | |
| 14 | | | | | | | | | 1 | | | | | 24 | | | | | |
| 15 | | | | | | | | | | | | 29 | 8 | | | | | | |
| 16 | | | | | | | | | | | 15 | | | | 5 | | | | |
| 17 | | | | | | | | | | | | | | | 26 | | | | |
| 18 | | | | | | | | | | | | | | | | | | 10 | |
| 19 | | | | | | | | | | | | | | | 3 | | 27 | | |
| 20 | | | | | | | | | | | | | | 6 | | | | 3 | |
| 21 | | | | | | | | | | | | | | | | 33 | | | |
| 22 | | | | | | | | | | | | | | | | 15 | 17 | | |
| 23 | | | | | | | | | | | | | | | | | | 11 | |
| 24 | | | | | | | | | | | | | | | | | | | 9 |
| 25 | | | | | | | | | | | | | | | | | | | 21 |

Coherently with the comments we have made for the synthetic data analyses, another way to assess the performance of our methodology consists in investigating the partitions of the variables. In Table 5, we report the confusion matrix comparing the two clusterings of the wavelengths obtained on the pasture and on the TMR milk samples. It stands out how, despite having a different number of clusters, the two partitions are similar, as the table shows an almost diagonal structure. A confirmation of their similarity is given by the quite high ARI value of 0.651.
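The ARI used here is available in standard software (e.g. mclust's adjustedRandIndex in R); for reference, a compact stand-alone version of ours computed from the two label vectors:

```python
from math import comb
from collections import Counter

def adjusted_rand_index(z1, z2):
    """Adjusted Rand Index between two partitions given as label lists;
    1 means identical partitions, 0 is the chance level."""
    n = len(z1)
    # contingency counts between the two partitions
    sum_ij = sum(comb(c, 2) for c in Counter(zip(z1, z2)).values())
    sum_a = sum(comb(c, 2) for c in Counter(z1).values())
    sum_b = sum(comb(c, 2) for c in Counter(z2).values())
    expected = sum_a * sum_b / comb(n, 2)
    max_index = (sum_a + sum_b) / 2
    if max_index == expected:          # degenerate partitions
        return 1.0
    return (sum_ij - expected) / (max_index - expected)
```

Note that the index is invariant to relabelling, which is what makes it suitable for comparing the pasture and TMR partitions despite their different numbers of clusters.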
The agreement between the two partitions is somewhat expected, since we are examining milk samples where the only differing experimental condition is the diet regimen. Moreover, this behaviour may be seen as a strong signal of the presence of a real clustering structure in the measured wavelengths, thus entailing a traceable redundancy in the information they provide. Nonetheless, a careful analysis of the results in Table 5 reveals how the different number of clusters in the two partitions generally implies that the large TMR variable clusters are split into two pasture variable clusters, as happens for example for TMR clusters 11 and 17. This behaviour might provide some initial indications, possibly deserving further exploration, about how the different diets can impact chemical features in the milk, in turn modifying the structures we see in the spectral data. Similar indications can be drawn from Figure 4, where the partitions of the wavelengths for the two different diet regimens are visually represented.

Figure 4: Wavelength partitions obtained on samples from pasture fed (on the top) and TMR fed (on the bottom) cows. The grey shaded areas correspond to the removed noisy regions.

Furthermore, note that the insights obtained from a clustering perspective may be exploited to build variable selection tools, possibly useful both for exploratory or graphical analyses and for classification purposes. In fact, the indication of the strong redundancy implied by the witnessed clustering structures can be used to build new features, defined as summaries of the groups themselves, possibly highlighting differences between pasture and TMR samples.

Figure 5: Adjusted R-squared values of the regression models where only the wavelengths in a specific cluster are used to predict the content of three different milk traits, namely the fat, protein and lactose contents.
On the left, the results for the pasture samples; on the right, those for the TMR ones. In order to gain some further useful and practical knowledge about the phenomenon under study, we considered a cluster-specific predictive analysis. Therefore, we ran different linear regression models, separately for pasture and TMR samples, where the covariates are given by the spectral measurements at the wavelengths belonging to a cluster, while the response variables are the content of fat, protein and lactose in the samples. These analyses, as briefly mentioned in the introduction, are important in order to understand whether spectroscopy data may be fruitfully used to predict some important features of the milk in a rapid and inexpensive way. Moreover, in our case, the predictions are based only on a small subset of variables, namely those assigned to a specific cluster, thus alleviating high-dimensionality-induced issues. In Figure 5 we report the results we obtained in terms of the _Adjusted R-squared_ index. At first glance it seems that the content of fat in the milk samples is easy to predict, regardless of the specific spectral region considered and of the diet regimens. Moreover, the predictive performances are generally higher for the TMR samples in comparison to the pasture ones. A more careful examination of the results, if paired with the suggestions obtained about the wavelength clustering structures, allows us to give some further indications about how the information carried in some spectral regions differs depending on the diet. For example, from Table 5 we can see how TMR cluster 15 finds its correspondence with pasture clusters 16, 17 and 19. It is straightforward to see how, in terms of the _Adjusted R-squared_, the wavelengths in the corresponding regions seem to produce better predictions of the lactose content for the TMR than for the pasture milk samples.
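The cluster-specific regressions just described can be sketched as follows (our own illustrative Python, the actual analyses having been run in R): for each wavelength cluster, fit OLS of a milk trait on the cluster's spectral columns and report the adjusted R-squared. The data layout (spectra as an $n\times p$ array, one cluster label per wavelength) is an assumption on our part.

```python
import numpy as np

def adjusted_r2(X, y):
    """Adjusted R-squared of an OLS fit of y on the columns of X
    (with an intercept added)."""
    n, k = X.shape
    Xd = np.column_stack([np.ones(n), X])
    beta, *_ = np.linalg.lstsq(Xd, y, rcond=None)
    resid = y - Xd @ beta
    ss_res = float(resid @ resid)
    ss_tot = float(((y - y.mean()) ** 2).sum())
    r2 = 1.0 - ss_res / ss_tot
    return 1.0 - (1.0 - r2) * (n - 1) / (n - k - 1)

def cluster_r2(spectra, labels, trait):
    """One regression per wavelength cluster: trait ~ wavelengths in g."""
    return {g: adjusted_r2(spectra[:, labels == g], trait)
            for g in np.unique(labels)}
```

Running `cluster_r2` once per trait (fat, protein, lactose) and per diet would reproduce the kind of summary displayed in Figure 5.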
Similar indications can be found by carefully studying jointly the results shown in Table 5 and Figure 5. The clustering results obtained, if paired with previously conducted studies, can lead to other relevant insights. For example, the work by Picque et al. (1993) suggests that the measurements in the region spanning from 1515 cm$^{-1}$ to 1593 cm$^{-1}$ are characteristic of the lactate ion. On the other hand, the regions from 1040 cm$^{-1}$ to 1100 cm$^{-1}$ and from 1298 cm$^{-1}$ to 1470 cm$^{-1}$ are related to the galactose component of milk. Note that, from a practical standpoint, both lactate and galactose can be seen as indicators of milk quality. As is visually clear in Figure 4, the wavelengths pertaining to the lactate ion region mainly belong to pasture cluster 8 and to TMR cluster 7; a closer inspection of Table 5 reveals how these groups are strongly related in the two partitions, giving an additional indication of the coherency of the clustering results. On the other hand, the wavelengths belonging to the galactose regions are split into groups 2, 5 and 7 for the pasture samples, while they are mainly associated with groups 3 and 6 for the TMR samples; again with a strong correspondence among these clusters visible in the confusion matrix. Moreover, the results obtained from the cluster-specific regression analyses show how these groups are among the best ones for predicting the lactose content in the milk samples. Since lactose is a disaccharide formed by the linking of the galactose and glucose monosaccharide units, these results serve as a confirmation of the practical utility of the variable partitions obtained from the model. Lastly, note that the correlation matrices estimated by using the proposed methodology can be used as an exploratory tool to deepen our knowledge about the relations among wavelengths in a specific spectral region. For example, we can focus our attention on the correlations among the first 50 wavelengths.
From a visual inspection of Figure 3, the wavelengths in this region seem to relate to one another differently depending on the diet; this region is shown more closely in Figure 6, from which we can see how the relationships among the spectral values at these wavelengths are highly dependent on the diet. If we compare the two sub-matrices using the metrics considered previously, we obtain $\text{MSE}(\hat{R}_{\text{Pasture}}^{(1:50)},\hat{R}_{\text{TMR}}^{(1:50)})=0.058$ and $\text{RV}(\hat{R}_{\text{Pasture}}^{(1:50)},\hat{R}_{\text{TMR}}^{(1:50)})=0.931$, thus providing an indication of stronger discrepancies in this spectral region compared to those witnessed between the full matrices. Again, considering these indications jointly with the results outlined above, we can hypothesize the reasons behind this difference. More specifically, in this case, the initial wavelengths seem to give consistently better performances when predicting the protein content for the TMR samples compared to the pasture samples. Similar analyses can be conducted also for other spectral regions, depending on the specific application interest.

Figure 6: Estimated correlation sub-matrices corresponding to the first 50 wavelengths computed on the milk samples produced by pasture fed cows (on the left) and TMR fed cows (on the right).

## 6 Discussion and further work

In this paper we have presented a modification of the standard factor analysis model in which the factor loadings matrix is reparameterised so that redundancy in the originally observed variables can be detected. Moreover, to estimate the proposed model, a flexible Metropolis-within-Gibbs sampler has been implemented. Our method yields a parsimonious representation of strongly dependent high-dimensional data with complex correlation structures. In fact, the specification we propose entails a substantial reduction in the number of parameters to be estimated with respect to a standard FA model.
At the same time, as a direct consequence of the specification itself, the model yields a grouping of the original variables when mapping them into the lower-dimensional subspace. The resulting partition sheds light on the relations among the observed features and on their possible redundancies. These indications, when supported by subject matter knowledge, can be translated into practical knowledge about the phenomenon under study. Our proposal was directly motivated by an application to vibrational spectroscopy data analysis and showed good performances on the dairy feed experiment data under investigation, both in terms of correlation reconstruction and interpretability of the results. Spectroscopy data usually present some recurring challenges from a statistical perspective, as they are high-dimensional with strong and peculiar correlation structures among the wavelengths, possibly entailing complex redundancies. The model we introduced has proven particularly useful in the given context, since it has provided a parsimonious characterization of the correlation matrix. From a practical point of view, this allowed us to gain relevant knowledge and to highlight differences among milk samples, thus possibly helping in authenticity assessment and in preventing food adulteration in the future. Moreover, by clustering the variables, it provided interesting insights about which spectral regions carry the same information, implying that, even if possibly far from each other, they may be influenced by similar chemical processes. Lastly, note that our proposal can be thought of as a starting point when building classification tools aiming to discriminate samples according to the relations occurring among the observed features. Note that the proposed methodology has been applied to MIR spectroscopy data but, in principle, its use may be extended to other data sharing some characteristics with the ones considered in this work.
A possible interesting extension and research direction consists in exploring different choices for $\pi(Z)$, the prior distribution of the allocation matrix. When accounting for peculiar correlation structures in the data it can be interesting, as briefly mentioned in Section 3.2, to explore prior distributions incorporating information about specific relations and constraints, such as spatial or temporal ones, for the variables to be clustered. Another aspect worth examining concerns model selection. In Section 3.4 we introduced an initialization strategy that provides good indications about reasonable values for the number of factors $K$ and clusters $G$. Nonetheless, several different approaches may be adopted, and a thorough exploration of different model selection tools may be beneficial. As briefly mentioned in Section 3.4, a possible extension consists in allowing $K$ to go toward infinity and in considering shrinkage priors on the factor loadings, as proposed in Bhattacharya and Dunson (2011) and Murphy et al. (2018). This strategy, possibly fruitful even for the determination of $G$, circumvents the issues related to the selection of $K$ by automating the choice of the active factors, i.e. those with non-negligible loading values, in characterizing the covariance structure. Finally, as mentioned above, our proposal can be thought of as a stepping stone when building new classification tools. A straightforward strategy pointing in this direction would consist in embedding the model we introduced in a Mixture of Factor Analyzers (MFA, Ghahramani and Hinton, 1996) framework, thus allowing classification and clustering of high-dimensional data.
## Posterior conditional distributions
Full conditional for the factor scores $U$
As in Section 3, denote $\tilde{\Lambda}=Z\Lambda_{c}$; in the derivations below, $\Lambda$ is shorthand for this product.
The full conditional (12) for the factor scores $U$ is obtained following standard FA results: $\displaystyle\pi(U|\dots)$ $\displaystyle\propto$ $\displaystyle\mathcal{L}(X|\Lambda_{c},\Psi,Z,U)\pi(U)$ $\displaystyle\propto$ $\displaystyle\exp\left\\{-\frac{1}{2}\text{tr}[\Psi^{-1}(X-U\Lambda^{T})^{T}(X-U\Lambda^{T})]\right\\}\exp\left\\{-\frac{1}{2}\text{tr}[U^{T}U]\right\\}$ $\displaystyle\propto$ $\displaystyle\dots$ $\displaystyle\propto$ $\displaystyle\exp\left\\{-\frac{1}{2}\text{tr}[(\mathbb{I}+\Lambda^{T}\Psi^{-1}\Lambda)(U-\tilde{U})^{T}(U-\tilde{U})]\right\\}$ Therefore $U$ is distributed as a matrix-normal random variable, $U\sim\mathcal{MN}_{n,K}(\tilde{U},\mathbb{I}_{n},(\mathbb{I}_{K}+\Lambda^{T}\Psi^{-1}\Lambda)^{-1})$. Focusing on a single row of the factor scores matrix, we obtain $(u_{i}|\dots)\sim\mathcal{N}_{K}((\mathbb{I}+\Lambda^{T}\Psi^{-1}\Lambda)^{-1}\Lambda^{T}\Psi^{-1}x_{i},(\mathbb{I}+\Lambda^{T}\Psi^{-1}\Lambda)^{-1})\;\;\text{for}\;i=1,\dots,n$ so that $\displaystyle\mu_{u}$ $\displaystyle=$ $\displaystyle(\mathbb{I}+\Lambda^{T}\Psi^{-1}\Lambda)^{-1}\Lambda^{T}\Psi^{-1}x_{i}$ $\displaystyle\Sigma_{u}$ $\displaystyle=$ $\displaystyle(\mathbb{I}+\Lambda^{T}\Psi^{-1}\Lambda)^{-1}$
Full conditional for the unique factor loadings matrix $\Lambda_{c}$
Using the properties of the trace operator and of vectorization several times, the full conditional (11) is obtained as follows $\displaystyle\pi(\Lambda_{c}|\dots)$ $\displaystyle\propto$ $\displaystyle\mathcal{L}(X|\Lambda_{c},\Psi,Z,U)\pi(\Lambda_{c})$ $\displaystyle\propto$ $\displaystyle\exp\left\\{-\frac{1}{2}\text{tr}[\Psi^{-1}(X-U\Lambda_{c}^{T}Z^{T})^{T}(X-U\Lambda_{c}^{T}Z^{T})]\right\\}\exp\left\\{-\frac{1}{2}\text{tr}[\sigma_{\lambda}^{-2}\Lambda_{c}^{T}\Lambda_{c}]\right\\}$ $\displaystyle=$
$\displaystyle\exp\left\\{-\frac{1}{2}\text{tr}[(X-U\Lambda_{c}^{T}Z^{T})\Psi^{-1}(X-U\Lambda_{c}^{T}Z^{T})^{T}]-\frac{1}{2}\sigma_{\lambda}^{-2}\text{tr}[\Lambda_{c}^{T}\Lambda_{c}]\right\\}$ $\displaystyle=$ $\displaystyle\exp\left\\{-\frac{1}{2}\left[\text{tr}(U\Lambda_{c}^{T}Z^{T}\Psi^{-1}Z\Lambda_{c}U^{T})-2\text{tr}(X\Psi^{-1}Z\Lambda_{c}U^{T})+\sigma_{\lambda}^{-2}\text{tr}(\Lambda_{c}^{T}\Lambda_{c})\right]\right\\}$ $\displaystyle=$ $\displaystyle\exp\biggl{\\{}-\frac{1}{2}\bigl{[}\text{tr}(U\Lambda_{c}^{T}Z^{T}\Psi^{-1}Z\Lambda_{c}U^{T})-2\text{vec}(\Lambda_{c})^{T}\text{vec}(Z^{T}\Psi^{-1}X^{T}U)+$ $\displaystyle\sigma_{\lambda}^{-2}\text{tr}(\Lambda_{c}^{T}\Lambda_{c})\bigr{]}\biggr{\\}}$ $\displaystyle=$ $\displaystyle\exp\biggl{\\{}-\frac{1}{2}\bigl{[}\text{vec}(\Psi^{-1/2}Z\Lambda_{c}U^{T})^{T}\text{vec}(\Psi^{-1/2}Z\Lambda_{c}U^{T})+\sigma_{\lambda}^{-2}\text{vec}(\Lambda_{c})^{T}\text{vec}(\Lambda_{c})-$ $\displaystyle 2\text{vec}(\Lambda_{c})^{T}\text{vec}(Z^{T}\Psi^{-1}X^{T}U)\bigl{]}\biggl{\\}}$ $\displaystyle=$ $\displaystyle\exp\biggl{\\{}-\frac{1}{2}\bigl{[}((U\otimes\Psi^{-1/2})\text{vec}(\Lambda_{c}))^{T}(U\otimes\Psi^{-1/2})\text{vec}(\Lambda_{c})+$ $\displaystyle\sigma_{\lambda}^{-2}\text{vec}(\Lambda_{c})^{T}\text{vec}(\Lambda_{c})-2\text{vec}(\Lambda_{c})^{T}\text{vec}(Z^{T}\Psi^{-1}X^{T}U)\bigl{]}\biggl{\\}}$ $\displaystyle\propto$ $\displaystyle\exp\biggl{\\{}-\frac{1}{2}\bigl{[}\text{vec}(\Lambda_{c})^{T}(U^{T}U\otimes Z^{T}\Psi^{-1}Z+\sigma_{\lambda}^{-2}\mathbb{I})\text{vec}(\Lambda_{c})-$ $\displaystyle 2\text{vec}(\Lambda_{c})^{T}\text{vec}(Z^{T}\Psi^{-1}X^{T}U)\bigl{]}\biggl{\\}}$ which is proportional to a pdf of a multivariate Gaussian random variable. 
Therefore we have that $\text{vec}(\Lambda_{c})\sim\mathcal{N}_{G\times K}(\mu_{\Lambda},\Sigma_{\Lambda})$ where $\displaystyle\mu_{\Lambda}$ $\displaystyle=$ $\displaystyle(U^{T}U\otimes Z^{T}\Psi^{-1}Z+\sigma_{\lambda}^{-2}\mathbb{I})^{-1}\text{vec}(Z^{T}\Psi^{-1}X^{T}U)$ $\displaystyle\Sigma_{\Lambda}$ $\displaystyle=$ $\displaystyle(U^{T}U\otimes Z^{T}\Psi^{-1}Z+\sigma_{\lambda}^{-2}\mathbb{I})^{-1}$
Full conditional for the uniquenesses $\psi_{j}$’s
Define $M=(X-U\Lambda_{c}^{T}Z^{T})^{T}(X-U\Lambda_{c}^{T}Z^{T})$ and let $M_{jj}$ be the $(j,j)$-th element of the matrix $M$. The full conditionals (13) for the uniquenesses $\psi_{j}$, $j=1,\dots,p$, are obtained as $\displaystyle\pi(\Psi|\dots)$ $\displaystyle\propto$ $\displaystyle\mathcal{L}(X|\Lambda_{c},\Psi,Z,U)\pi(\Psi)$ $\displaystyle\propto$ $\displaystyle|\Psi|^{-n/2}\exp\left\\{-\frac{1}{2}\text{tr}[\Psi^{-1}M]\right\\}\prod_{j=1}^{p}\frac{\beta_{j}^{\alpha}}{\Gamma(\alpha)}\psi_{j}^{-(\alpha+1)}\exp\left(-\frac{\beta_{j}}{\psi_{j}}\right)$ $\displaystyle\propto$ $\displaystyle\left(\prod_{j=1}^{p}\psi_{j}^{-n/2}\psi_{j}^{-(\alpha+1)}\right)\exp\left\\{-\frac{1}{2}\text{tr}[\Psi^{-1}M]-\sum_{j=1}^{p}\beta_{j}\psi_{j}^{-1}\right\\}$ $\displaystyle=$ $\displaystyle\left(\prod_{j=1}^{p}\psi_{j}^{-(\alpha+\frac{n}{2}+1)}\right)\exp\left\\{-\frac{1}{2}\sum_{j=1}^{p}\psi_{j}^{-1}M_{jj}-\sum_{j=1}^{p}\beta_{j}\psi_{j}^{-1}\right\\}$ $\displaystyle=$ $\displaystyle\left(\prod_{j=1}^{p}\psi_{j}^{-(\alpha+\frac{n}{2}+1)}\right)\exp\left\\{-\sum_{j=1}^{p}\psi_{j}^{-1}\left(\frac{M_{jj}}{2}+\beta_{j}\right)\right\\}$ Therefore we obtain that $(\psi_{j}|\dots)\sim\text{InvGamma}(\alpha+n/2,\beta_{j}+M_{jj}/2)$, so that $\beta_{j}^{*}=\beta_{j}+M_{jj}/2$.
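The three conjugate updates derived in this appendix can be sketched as one Gibbs sweep. This is a minimal illustration, not the actual sampler implementation: it assumes a diagonal $\Psi$, uses column-major vectorization of $\Lambda_{c}$, and all function names and the simulated data are illustrative.

```python
import numpy as np

def sample_factor_scores(X, Z, Lam_c, psi, rng):
    """Draw U row-wise from N(mu_u, Sigma_u), with Lam = Z Lam_c,
    Sigma_u = (I_K + Lam^T Psi^{-1} Lam)^{-1} and
    mu_u(x_i) = Sigma_u Lam^T Psi^{-1} x_i (Psi diagonal)."""
    Lam = Z @ Lam_c
    K = Lam.shape[1]
    PiL = Lam / psi[:, None]                       # Psi^{-1} Lam
    Sigma_u = np.linalg.inv(np.eye(K) + Lam.T @ PiL)
    Mu = X @ PiL @ Sigma_u                         # n x K matrix of row means
    return Mu + rng.standard_normal(Mu.shape) @ np.linalg.cholesky(Sigma_u).T

def sample_lambda_c(X, U, Z, psi, sigma2_lam, rng):
    """Draw vec(Lambda_c) from N(mu_Lambda, Sigma_Lambda), where
    Sigma_Lambda^{-1} = U^T U (kron) Z^T Psi^{-1} Z + sigma_lambda^{-2} I
    and mu_Lambda = Sigma_Lambda vec(Z^T Psi^{-1} X^T U)."""
    G, K = Z.shape[1], U.shape[1]
    Prec = np.kron(U.T @ U, Z.T @ (Z / psi[:, None])) + np.eye(G * K) / sigma2_lam
    b = Z.T @ (X.T / psi[:, None]) @ U             # Z^T Psi^{-1} X^T U, G x K
    Sigma = np.linalg.inv(Prec)
    Sigma = 0.5 * (Sigma + Sigma.T)                # enforce symmetry numerically
    mu = Sigma @ b.reshape(-1, order="F")          # column-major vec
    return rng.multivariate_normal(mu, Sigma).reshape(G, K, order="F")

def sample_psi(X, U, Z, Lam_c, alpha, beta, rng):
    """Draw each psi_j from InvGamma(alpha + n/2, beta_j + M_jj/2)."""
    M_diag = np.sum((X - U @ Lam_c.T @ Z.T) ** 2, axis=0)  # diagonal of M
    n = X.shape[0]
    return 1.0 / rng.gamma(alpha + n / 2, 1.0 / (beta + M_diag / 2))

# One sweep on simulated data with a known allocation matrix Z.
rng = np.random.default_rng(0)
n, p, G, K = 120, 12, 4, 2
Z = np.zeros((p, G))
Z[np.arange(p), np.arange(p) % G] = 1.0
Lam_true = rng.standard_normal((G, K))
psi = np.full(p, 0.5)
X = rng.standard_normal((n, K)) @ (Z @ Lam_true).T \
    + rng.standard_normal((n, p)) * np.sqrt(psi)
U = sample_factor_scores(X, Z, Lam_true, psi, rng)
Lc = sample_lambda_c(X, U, Z, psi, 1.0, rng)
ps = sample_psi(X, U, Z, Lc, 2.0, np.ones(p), rng)
```

The update of $Z$ itself (the Metropolis step of the Metropolis-within-Gibbs scheme) is omitted here, since it is not among the conditionals derived above.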
## Acknowledgements This publication has emanated from research conducted with the financial support of Science Foundation Ireland (SFI) and the Department of Agriculture, Food and Marine on behalf of the Government of Ireland under grant number (16/RC/3835) and the SFI Insight Research Centre under grant number (SFI/12/RC/2289_P2). ## References * Abdi, (2007) Abdi, H. (2007). Rv coefficient and congruence coefficient. Encyclopedia of measurement and statistics, 849:853. * Alothman et al., (2019) Alothman, M., Hogan, S. A., Hennessy, D., Dillon, P., Kilcawley, K. N., O’Donovan, M., Tobin, J., Fenelon, M. A., and O’Callaghan, T. F. (2019). The “grass-fed” milk story: understanding the impact of pasture feeding on the composition and quality of bovine milk. Foods, 8(8):350. * Arminger and Muthén, (1998) Arminger, G. and Muthén, B. O. (1998). A Bayesian approach to nonlinear latent variable models using the Gibbs sampler and the Metropolis-Hastings algorithm. Psychometrika, 63(3):271–300. * Barry and Hartigan, (1992) Barry, D. and Hartigan, J. A. (1992). Product partition models for change point problems. The Annals of Statistics, 20(1):260–279. * Bartholomew et al., (2011) Bartholomew, D. J., Knott, M., and Moustaki, I. (2011). Latent variable models and factor analysis: A unified approach. John Wiley & Sons. * Bhattacharya and Dunson, (2011) Bhattacharya, A. and Dunson, D. B. (2011). Sparse Bayesian infinite factor models. Biometrika, 98(2):291–306. * Blei and Frazier, (2011) Blei, D. M. and Frazier, P. I. (2011). Distance dependent Chinese restaurant processes. Journal of Machine Learning Research, 12(8):2461–2488. * Bonfatti et al., (2017) Bonfatti, V., Tiezzi, F., Miglior, F., and Carnier, P. (2017). Comparison of bayesian regression models and partial least squares regression for the development of infrared prediction equations. Journal of Dairy Science, 100(9):7306–7319. * Bouveyron et al., (2019) Bouveyron, C., Celeux, G., Murphy, T. B., and Raftery, A. E. 
(2019). Model-based clustering and classification for data science: with applications in R. Cambridge University Press. * Capuano et al., (2014) Capuano, E., Van der Veer, G., Boerrigter-Eenling, R., Elgersma, A., Rademaker, J., Sterian, A., and Van Ruth, S. M. (2014). Verification of fresh grass feeding, pasture grazing and organic farming by cows farm milk fatty acid profile. Food Chemistry, 164:234–241. * Coppa et al., (2012) Coppa, M., Martin, B., Agabriel, C., Chassaing, C., Sibra, C., Constant, I., Graulet, B., and Andueza, D. (2012). Authentication of cow feeding and geographic origin on milk using visible and near-infrared spectroscopy. Journal of Dairy Science, 95(10):5544–5551. * Dahl et al., (2017) Dahl, D. B., Day, R., and Tsai, J. W. (2017). Random partition distribution indexed by pairwise information. Journal of the American Statistical Association, 112(518):721–732. * De Marchi et al., (2014) De Marchi, M., Toffanin, V., Cassandro, M., and Penasa, M. (2014). Invited review: Mid-infrared spectroscopy as phenotyping tool for milk traits. Journal of Dairy Science, 97(3):1171–1186. * Downey, (1996) Downey, G. (1996). Authentication of food and food ingredients by near infrared spectroscopy. Journal of Near Infrared Spectroscopy, 4(1):47–61. * Durante, (2017) Durante, D. (2017). A note on the multiplicative gamma process. Statistics & Probability Letters, 122:198–204. * Elgersma, (2012) Elgersma, A. (2012). New developments in the netherlands: dairies reward grazing because of public perception. Grassland Science in Europe, 17:420–422. * Everitt, (1984) Everitt, B. S. (1984). An introduction to latent variable models. Chapman and Hall. * Faulkner et al., (2018) Faulkner, H., O’Callaghan, T. F., McAuliffe, S., Hennessy, D., Stanton, C., O’Sullivan, M. G., Kerry, J. P., and Kilcawley, K. N. (2018). Effect of different forage types on the volatile and sensory properties of bovine milk. Journal of Dairy Science, 101(2):2034–1047. 
* Fokoué and Titterington, (2003) Fokoué, E. and Titterington, D. (2003). Mixtures of factor analysers. Bayesian estimation and inference by stochastic simulation. Machine Learning, 50(1-2):73–94. * Fraley and Raftery, (2002) Fraley, C. and Raftery, A. E. (2002). Model-based clustering, discriminant analysis, and density estimation. Journal of the American Statistical Association, 97(458):611–631. * Frühwirth-Schnatter, (2011) Frühwirth-Schnatter, S. (2011). Label switching under model uncertainty. In Mixtures: Estimation and Application, pages 213–239. John Wiley & Sons. * Frühwirth-Schnatter and Lopes, (2010) Frühwirth-Schnatter, S. and Lopes, H. F. (2010). Parsimonious Bayesian factor analysis when the number of factors is unknown. Technical report, University of Chicago Booth School of Business. * Garvey et al., (2020) Garvey, E. C., Sander, T., O’Callaghan, T. F., Drake, M., Fox, S., O’Sullivan, M. G., Kerry, J. P., and Kilcawley, K. N. (2020). A cross-cultural evaluation of liking and perception of salted butter produced from different feed systems. Foods, 9(12):1767. * Ghahramani and Hinton, (1996) Ghahramani, Z. and Hinton, G. E. (1996). The em algorithm for mixtures of factor analyzers. Technical report, CRG-TR-96-1, University of Toronto. * Hartigan, (1990) Hartigan, J. A. (1990). Partition models. Communications in Statistics — Theory and Methods, 19(8):2745–2756. * Hewavitharana and van Brakel, (1997) Hewavitharana, A. K. and van Brakel, B. (1997). Fourier transform infrared spectrometric method for the rapid determination of casein in raw milk. Analyst, 122(7):701–704. * Hirose and Konishi, (2012) Hirose, K. and Konishi, S. (2012). Variable selection via the weighted group lasso for factor analysis models. Canadian Journal of Statistics, 40(2):345–361. * Hirose and Yamamoto, (2015) Hirose, K. and Yamamoto, M. (2015). Sparse estimation via nonconcave penalized likelihood in factor analysis model. Statistics and Computing, 25(5):863–875. 
* Hubert and Arabie, (1985) Hubert, L. and Arabie, P. (1985). Comparing partitions. Journal of classification, 2(1):193–218. * Jennrich and Robinson, (1969) Jennrich, R. I. and Robinson, S. M. (1969). A Newton-Raphson algorithm for maximum likelihood factor analysis. Psychometrika, 34(1):111–123. * Jöreskog, (1967) Jöreskog, K. G. (1967). Some contributions to maximum likelihood factor analysis. Psychometrika, 32(4):443–482. * Kamal and Karoui, (2015) Kamal, M. and Karoui, R. (2015). Analytical methods coupled with chemometric tools for determining the authenticity and detecting the adulteration of dairy products: A review. Trends in Food Science & Technology, 46(1):27–48. * Legramanti et al., (2020) Legramanti, S., Durante, D., and Dunson, D. B. (2020). Bayesian cumulative shrinkage for infinite factorizations. Biometrika, 107(3):745–752. * Lopes and West, (2004) Lopes, H. F. and West, M. (2004). Bayesian model assessment in factor analysis. Statistica Sinica, 14:41–67. * McParland and Berry, (2016) McParland, S. and Berry, D. (2016). The potential of fourier transform infrared spectroscopy of milk samples to predict energy intake and efficiency in dairy cows. Journal of Dairy Science, 99(5):4056–4070. * McParland et al., (2014) McParland, S., Lewis, E., Kennedy, E., Moore, S. G., McCarthy, B., O’Donovan, M., Butler, S. T., Pryce, J., and Berry, D. P. (2014). Mid-infrared spectrometry of milk as a predictor of energy intake and efficiency in lactating dairy cows. Journal of Dairy Science, 97(9):5863–5871. * Murphy et al., (2018) Murphy, K., Viroli, C., and Gormley, I. C. (2018). Infinite mixtures of infinite factor analysers. Bayesian Analysis. * Murphy et al., (2010) Murphy, T. B., Dean, N., and Raftery, A. E. (2010). Variable selection and updating in model-based discriminant analysis for high dimensional data with food authenticity applications. Annals of Applied Statistics, 4(1):396–421. * Nobile and Fearnside, (2007) Nobile, A. and Fearnside, A. T. (2007). 
Bayesian finite mixtures with an unknown number of components: The allocation sampler. Statistics and Computing, 17(2):147–162. * O’Callaghan et al., (2017) O’Callaghan, T. F., Mannion, D. T., Hennessy, D., McAuliffe, S., O’Sullivan, M. G., Leeuwendaal, N., Beresford, T. P., Dillon, P., Kilcawley, K. N., Sheehan, J. J., Ross, R. P., and Stanton, C. (2017). Effect of pasture versus indoor feeding systems on quality characteristics, nutritional composition, and sensory and volatile properties of full-fat cheddar cheese. Journal of Dairy Science, 100(8):6053–6073. * O’Callaghan et al., (2016a) O’Callaghan, T. F., Faulkner, H., McAuliffe, S., O’Sullivan, M. G., Hennessy, D., Dillon, P., Kilcawley, K. N., Stanton, C., and Ross, R. P. (2016a). Quality characteristics, chemical composition, and sensory properties of butter from cows on pasture versus indoor feeding systems. Journal of Dairy Science, 99(12):9441–9460. * O’Callaghan et al., (2016b) O’Callaghan, T. F., Hennessy, D., McAuliffe, S., Kilcawley, K. N., O’Donovan, M., Dillon, P., Ross, R. P., and Stanton, C. (2016b). Effect of pasture versus indoor feeding systems on raw milk composition and quality over an entire lactation. Journal of Dairy Science, 99(12):9424–9440. * O’Callaghan et al., (2018) O’Callaghan, T. F., Vázquez-Fresno, R., Serra-Cayuela, A., Dong, E., Mandal, R., Hennessy, D., McAuliffe, S., Dillon, P., Wishart, D. S., Stanton, C., and Ross, R. (2018). Pasture feeding changes the bovine rumen and milk metabolome. Metabolites, 8(2):27. * Page et al., (2016) Page, G. L., Quintana, F. A., et al. (2016). Spatial product partition models. Bayesian Analysis, 11(1):265–298. * Picque et al., (1993) Picque, D., Lefier, D., Grappin, R., and Corrieu, G. (1993). Monitoring of fermentation by infrared spectrometry: Alcoholic and lactic fermentations. Analytica Chimica Acta, 279(1):67–72. * Press and Shigemasu, (1989) Press, S. J. and Shigemasu, K. (1989). Bayesian inference in factor analysis. In Contributions to probability and statistics, pages 271–287.
Springer. * Quintana and Iglesias, (2003) Quintana, F. A. and Iglesias, P. L. (2003). Bayesian clustering and product partition models. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 65(2):557–574. * R Core Team, (2020) R Core Team (2020). R: A Language and Environment for Statistical Computing. R Foundation for Statistical Computing, Vienna, Austria. * Raftery et al., (2007) Raftery, A. E., Newton, M. A., Satagopan, J. M., and Krivitsky, P. N. (2007). Estimating the integrated likelihood via posterior simulation using the harmonic mean identity. Bayesian statistics 8, pages 1–45. * Reid et al., (2006) Reid, L. M., O’Donnell, C. P., and Downey, G. (2006). Recent technological advances for the determination of food authenticity. Trends in Food Science & Technology, 17(7):344–353. * Rubin and Thayer, (1982) Rubin, D. B. and Thayer, D. T. (1982). EM algorithms for ML factor analysis. Psychometrika, 47(1):69–76. * Schiavon and Canale, (2020) Schiavon, L. and Canale, A. (2020). On the truncation criteria in infinite factor models. Stat, 9(1):e298. * Scrucca et al., (2016) Scrucca, L., Fop, M., Murphy, T. B., and Raftery, A. E. (2016). mclust 5: Clustering, classification and density estimation using gaussian finite mixture models. The R journal, 8(1):289. * Song and Lee, (2001) Song, X.-Y. and Lee, S.-Y. (2001). Bayesian estimation and test for factor analysis model with continuous and polytomous data in several populations. British Journal of Mathematical and Statistical Psychology, 54(2):237–263. * Valenti et al., (2013) Valenti, B., Martin, B., Andueza, D., Leroux, C., Labonne, C., Lahalle, F., Larroque, H., Brunschwig, P., Lecomte, C., and Brochard, M. (2013). Infrared spectroscopic methods for the discrimination of cows’ milk according to the feeding system, cow breed and altitude of the dairy farm. International Dairy Journal, 32(1):26–32. * Wehrhahn et al., (2020) Wehrhahn, C., Leonard, S., Rodriguez, A., Xifara, T., et al. (2020). 
A Bayesian approach to disease clustering using restricted chinese restaurant processes. Electronic Journal of Statistics, 14(1):1449–1478.
# Nuclear Quantum Effects on Autoionization of Water Isotopologues Studied by Ab Initio Path Integral Molecular Dynamics Bo Thomsen<EMAIL_ADDRESS>CCSE, Japan Atomic Energy Agency, 178-4-4, Wakashiba, Kashiwa, Chiba, 277-0871, Japan Motoyuki Shiga <EMAIL_ADDRESS>CCSE, Japan Atomic Energy Agency, 178-4-4, Wakashiba, Kashiwa, Chiba, 277-0871, Japan ###### Abstract In this study we investigate the nuclear quantum effects (NQEs) on the acidity constant (p$K_{A}$) of liquid water isotopologues at ambient conditions by path integral molecular dynamics (PIMD) simulations. We compared simulations using a fully explicit solvent model with a classical polarizable force field, density functional tight binding, and ab initio density functional theory, corresponding to empirical, semiempirical, and ab initio PIMD simulations, respectively. The centroid variable with respect to the proton coordination number of a water molecule was restrained to compute the gradient of the free energy, which measures the reversible work of proton abstraction for the quantum mechanical system. The free energy curve obtained by thermodynamic integration was used to compute the p$K_{A}$ value based on probabilistic determination. This technique not only reproduces the experimentally measured p$K_{A}$ value of liquid D2O (14.86) but also allows for a theoretical prediction of the p$K_{A}$ values of liquid T2O and of aqueous HDO and HTO, which are unknown due to their scarcity. It is also shown that the NQEs on the free energy curve can result in a downshift of $4.5\pm 0.9$ p$K_{A}$ units in the case of liquid water, which indicates that the NQEs play an indispensable role in the absolute determination of p$K_{A}$. The results of this study can help to inform further extensions into the calculation of the acidity constants of isotope-substituted species with high accuracy.
## I Introduction
The acidity constant, p$K_{A}$, plays a fundamental role in acid-base chemistry.
There is no doubt about the importance of quantitatively estimating p$K_{A}$ for a given functional group in a molecular species. Computational chemistry rooted in molecular theory has the merit that it is possible to evaluate p$K_{A}$ independent of experiments.alongi_chapter_2010 ; alexov_progress_2011 ; ho_universal_2010 However, computational evaluation of p$K_{A}$ has yet to reach the quantitative level of accuracy, even for the most basic case of liquid water, i.e., the water autoionization constant, p$K_{W}$. Previous computational evaluations of p$K_{A}$ have been based on either an implicit solvent model, in which a solute molecule is embedded in a solvent described by polarizable continuum medium, tomasi_quantum_2005 ; miertus_electrostatic_1981 ; foresman_solvent_1996 ; pascualahuir_gepol:_1994 ; miertus_approximate_1982 ; cossi_energies_2003 ; klamt_cosmo:_1993 ; klamt_conductor-like_1995 ; marenich_universal_2009 ; soteras_extension_2005 or an explicit solvent model, in which both the solute and the solvent are described explicitly as molecules. doltsinis_theoretical_2003 ; schilling_determination_2019 ; chen_prediction_2012 ; davies_estimating_2002 ; sandmann_copperii-mediated_2019 ; tummanapelli_dissociation_2014 ; tummanapelli_ab_2015 ; tummanapelli_ab_2015-1 ; daub_ab_2019 In general, the implicit solvent model is able to provide p$K_{A}$ values with small computational effort but limited accuracy. The intrinsic error of the implicit solvent model arises from the lack of complexity to describe solute-solvent interactions. The inclusion of explicit solvent molecules xu_methods_2019 ; sutton_first_2012 ; zhang_reliable_2012 ; thapa_calculations_2015 ; thapa_density_2016 and extended conformational samplinghaworth_modeling_2017 have recently been shown to partially solve these issues. Both methods, however, require careful (re)-parametrization of the implicit solvent model and selection of the conformers to be considered in the calculation. 
It is therefore preferable to deal with the solvent molecules explicitly for quantitatively estimating p$K_{A}$, so that solute-solvent interactions are taken into account properly. In this case p$K_{A}$ is estimated directly from the free energy change upon the protonation of a solute molecule by molecular dynamics (MD) techniques, such as coordination-constrained MD (often referred to as the blue-moon ensemble method).carter_constrained_1989 ; sprik_free_1998 p$K_{A}$ is known to have a strong dependence on thermodynamic conditions such as the temperature and pressure. For liquid water under a pressure of 15 MPa, the p$K_{W}$ value of 14 at ambient temperature decreases to about 11 in subcritical conditions at 300 ∘C, and then increases to about 20 in supercritical conditions at 400 ∘C.bandura_ionization_2005 It is also known that p$K_{A}$ clearly differs between the hydrogen isotopologues,mora-diez_theoretical_2015 e.g., p$K_{W}$ of D2O under ambient conditions is 14.86, which is larger than that of H2O, 14.00.shoesmith_ionization_1976 Constrained MD simulations allow the estimation of p$K_{A}$ under different thermodynamic conditions. However, they will produce identical results for the hydrogen isotopologues, because the free energy change calculated by classical MD does not depend on the nuclear masses. The isotope effect on the free energy can be traced back to the quantum nature whereby the kinetic and potential energy operators in the Boltzmann density do not commute. It therefore follows that the nuclear quantum effects (NQEs) should not be ignored in the quantitative estimation of p$K_{A}$. In fact, hydrogen is generally known to exhibit quantum behaviors such as zero-point vibration and tunneling because of its light mass. The p$K_{W}$ of water has been a long-standing interest for theoretical studies, since the long time scale on which the autodissociation takes place makes it difficult to sample efficiently.
Furthermore, the strong solvation effects of the produced OH- and H+ ions make it important to consider the solvation structure explicitly. The computation of p$K_{W}$ of liquid water has been performed with explicit solvent in several studies.sprik_computation_2000 ; perlt_predicting_2017 ; strajbl_ab_2002 ; wang_first-principles_2020 The reaction mechanism and kinetics of the autoionization of liquid water have been the subject of extensive studies.moqadam_local_2018 ; geissler_autoionization_2001 ; trout_analysis_1999 However, none of these studies have explored the NQEs on the p$K_{W}$ of water in ab initio simulations based on first principles. In this paper we introduce a coordination-restrained path integral molecular dynamics (PIMD) method based on an explicit solvent model to study the p$K_{W}$ of liquid water and its isotopologues. PIMD is a rigorous approach that takes into account both nuclear quantum and temperature effects based on the imaginary-time path integral theory of quantum statistical mechanics.shiga_path_2018 ; parrinello_study_1984 ; hall_nonergodicity_1984 ; ceriotti_efficient_2010 ; tuckerman_efficient_1993 PIMD has been used to study the NQEs on several properties of water, as summarized in a recent review.ceriotti_nuclear_2016 In this review the NQEs on the p$K_{W}$ of water are estimated by extrapolating the effect on the water isotopologues to the case of infinite nuclear mass, which corresponds to the result of a classical MD simulation. The change from classical to quantum nuclei was found to cause a downshift of around 3 p$K_{A}$ units. This empirical shift was subsequently used in the study by Wang et al. to correct the p$K_{W}$ calculated from classical MD.wang_first-principles_2020 Two important references in the context of this paper are the study on the solvated proton and hydroxide ion by Marx et al.
marx_nature_1999 and the recent study on the NQEs in proton transport in water under an electrical field by Cassone.cassone_nuclear_2020 Here we will extend the PIMD method for p$K_{W}$/p$K_{A}$ estimations, allowing the computation of quantum free energies upon the protonation of the solute, by restraining the centroid variable of the coordination number (CN). In Section II we will outline the theory of coordination-restrained PIMD and the calculation of p$K_{W}$ in terms of probabilistic and absolute methods. Section III contains the computational details for the simulations conducted in this study. The results of MD and PIMD will then be discussed in Section IV, where we will also investigate the isotope effects in pure D2O and T2O and in the solvated isotope-substituted species HDO and HTO. Finally, in Section V, we will summarize our findings.
## II Theory
### II.1 Coordination Number Restrained Path Integral Molecular Dynamics
Consider the quantum Hamiltonian for an $N$ atom system within the Born-Oppenheimer approximation $\hat{H}=\sum_{I=1}^{N}\frac{\hat{\mathbf{P}}^{2}_{I}}{2M_{I}}+V(\hat{\mathbf{R}}_{1},\ldots,\hat{\mathbf{R}}_{N}),$ (1) where $M_{I}$ is the mass of the $I$th particle, $\hat{\mathbf{P}}_{I}$ is the momentum operator $\left(\hat{P}_{I,x},\hat{P}_{I,y},\hat{P}_{I,z}\right)$, $\hat{\mathbf{R}}_{I}$ is the position operator $\left(\hat{R}_{I,x},\hat{R}_{I,y},\hat{R}_{I,z}\right)$ and $V$ is the potential. The quantum partition function for this system is given as $Z=\int d\mathbf{R}_{1}\cdots\int d\mathbf{R}_{N}\left\langle\mathbf{R}_{1}\cdots\mathbf{R}_{N}\left|\exp\left(-\beta\hat{H}\right)\right|\mathbf{R}_{1}\cdots\mathbf{R}_{N}\right\rangle=\mathrm{Tr}\exp\left(-\beta\hat{H}\right).$ (2) Here $\beta=\frac{1}{k_{b}T}$, where $k_{b}$ is the Boltzmann constant and $T$ is the temperature.
It is assumed that the thermal de Broglie wavelength is smaller than the separation of two identical atoms, thereby allowing us to ignore the possibility for exchange of fermion/boson positions. By dividing the Boltzmann operator in Equation (2) into $P$ terms, inserting closure relations in coordinate spaces between each term of the Boltzmann operator, and applying the second-order Suzuki-Trotter expansion one can derive the following expression for the partition function, $Z=\lim_{P\rightarrow\infty}Z_{P}$ (3) where $\displaystyle Z_{P}$ $\displaystyle=$ $\displaystyle\prod_{I=1}^{N}\left[\left(\frac{M_{I}P}{2\pi\beta\hbar^{2}}\right)^{\frac{3P}{2}}\int d\mathbf{R}_{I}^{(1)}\int d\mathbf{R}_{I}^{(2)}\cdots\int d\mathbf{R}_{I}^{(P)}\right]$ (4) $\displaystyle\times\exp\left[-\beta\left\\{\sum_{s=1}^{P}\sum_{I=1}^{N}\frac{M_{I}}{2}\omega_{P}^{2}\left(\mathbf{R}_{I}^{(s)}-\mathbf{R}_{I}^{(s-1)}\right)^{2}+\sum_{s=1}^{P}\frac{1}{P}V\left(\mathbf{R}_{1}^{(s)},\ldots,\mathbf{R}_{N}^{(s)}\right)\right\\}\right].$ The constant $\omega_{P}$ is given as $\frac{\sqrt{P}}{\beta\hbar}$. The factor $\prod_{I=1}^{N}\left(\frac{M_{I}P}{2\pi\beta\hbar^{2}}\right)^{\frac{3P}{2}}$ is a constant and will be omitted in the following, as it does not alter the relative free energy differences. Rearranging the exponent in the above equation we arrive at the following, $Z_{P}\propto\prod_{I=1}^{N}\left[\int d\mathbf{R}_{I}^{(1)}\int d\mathbf{R}_{I}^{(2)}\cdots\int d\mathbf{R}_{I}^{(P)}\right]\exp\left(-\beta V_{\mathrm{eff}}(\left\\{\mathbf{R}\right\\})\right),$ (5) where $V_{\mathrm{eff}}\left(\left\\{\mathbf{R}\right\\}\right)=\sum^{P}_{s=1}\left\\{\sum^{N}_{I=1}\frac{M_{I}}{2}\omega_{P}^{2}\left(\mathbf{R}_{I}^{(s)}-\mathbf{R}_{I}^{(s-1)}\right)^{2}+\frac{1}{P}V\left(\mathbf{R}_{1}^{(s)},\ldots,\mathbf{R}_{N}^{(s)}\right)\right\\}.$ (6) These two equations show that the quantum behaviour of an $N$ particle system can be mimicked by considering an $NP$ particle system. 
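The effective potential of Equation (6) is straightforward to evaluate directly. The following one-dimensional, single-particle sketch (function names are illustrative, not from the authors' code) makes the $P$-bead ring-polymer structure explicit:

```python
import numpy as np

def v_eff(R, mass, beta, hbar, potential):
    """Effective potential of Equation (6) for one particle in one
    dimension: harmonic springs between adjacent beads (bead P closes
    the ring with bead 1) plus the physical potential averaged over
    the P beads."""
    R = np.asarray(R, dtype=float)
    P = len(R)
    omega_P_sq = P / (beta * hbar) ** 2    # omega_P = sqrt(P) / (beta * hbar)
    spring = 0.5 * mass * omega_P_sq * np.sum((R - np.roll(R, 1)) ** 2)
    return spring + np.mean(potential(R))

harmonic = lambda x: 0.5 * x ** 2
```

With $P=1$ the spring term vanishes and $V_{\mathrm{eff}}$ reduces to the classical potential; the same happens for any $P$ when all beads collapse onto one point, which is the classical limit of the ring polymer.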
The individual terms in $P$ are often referred to as beads on a chain, where each bead corresponds to a single copy of the classical system, evolved according to its classical forces and its harmonic coupling to the neighboring beads of the ring polymer. The cost of this method thus scales steeply with the number of beads and, by extension, with the accuracy desired. The number of beads required for convergence, however, scales inversely with temperature, i.e. a few tens of beads can be sufficient to accurately describe quantum effects for systems under standard conditions. The blue-moon sampling approachcarter_constrained_1989 ; sprik_free_1998 is used here to compute the free energy curve of water autoionization. Following a study by Sprik,sprik_computation_2000 the CN of an oxygen atom (labeled as “O∗”) is used for studying this reaction. We introduce a rational function for the CN as $n_{\mathrm{O^{*}}}(\\{{\bf R}\\})=\sum_{j\in H}\frac{1-\left(\frac{r_{\mathrm{O^{*}}j}}{d_{\mathrm{OH}}}\right)^{6}}{1-\left(\frac{r_{\mathrm{O^{*}}j}}{d_{\mathrm{OH}}}\right)^{12}},$ (7) where $\\{{\bf R}\\}$ is the set of atomic coordinates of the system, $r_{\mathrm{O^{*}}j}$ are the distances from O∗ to the hydrogen atoms in the system, and $d_{\mathrm{OH}}$ is a constant set to 1.35 Å. In this study the CN is restrained so as to vary the coordination of hydrogen in a single OH- moiety from one to zero; these two limits correspond to the moiety forming an H2O molecule and a solvated OH- ion, respectively. In practice, for a random oxygen atom, one of its attached hydrogens is chosen to remain bound, while the other attached hydrogen and all other hydrogens are subject to the CN restraint. The same method is used in the case of D2O and T2O for the deuterium and tritium atoms in the simulation, respectively. For HDO and HTO in solution all hydrogens are restrained, while the core OD- or OT- is kept unrestrained.
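The rational function of Equation (7) can be evaluated directly. The sketch below uses the algebraic simplification $(1-x^{6})/(1-x^{12})=1/(1+x^{6})$, which avoids the removable singularity at $r=d_{\mathrm{OH}}$ (the function name is illustrative):

```python
import numpy as np

def coordination_number(r_OH, d_OH=1.35):
    """Coordination number of Equation (7), given the O*-H distances
    r_OH in angstrom; the ratio (1 - x^6)/(1 - x^12) is evaluated
    in the equivalent form 1/(1 + x^6)."""
    x6 = (np.asarray(r_OH, dtype=float) / d_OH) ** 6
    return float(np.sum(1.0 / (1.0 + x6)))
```

A covalently bonded hydrogen at roughly 1 Å contributes a value close to one, a distant hydrogen essentially zero, the switch passing through one half exactly at $r=d_{\mathrm{OH}}$; the total is additive over all hydrogens, as in the sum over $j\in H$.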
To derive the free energy difference in the blue-moon ensemble, one considers the free energy of a system restrained to a fixed coordination number $\bar{n}_{\mathrm{O^{*}}}$, $A(\bar{n}_{\mathrm{O^{*}}})=-\beta^{-1}\log\rho(\bar{n}_{\mathrm{O^{*}}})+A_{0}.$ (8) $A_{0}$ is a constant term absorbing the constant prefactors dropped in Equation (5). It needs to be considered only when calculating the absolute value of the free energy, as it cancels in relative free energy differences. The distribution $\rho(\bar{n}_{\mathrm{O^{*}}})$ is the scaled probability of finding the system at the given coordination number $\bar{n}_{\mathrm{O^{*}}}$, which in the PIMD formalism is expressed as $\displaystyle\rho(\bar{n}_{\mathrm{O^{*}}})=\left\langle\delta\left(\bar{n}_{\mathrm{O^{*}}}-\frac{1}{P}\sum_{s=1}^{P}n_{\mathrm{O^{*}}}\left(\mathbf{R}^{(s)}\right)\right)\right\rangle$ $\displaystyle=$ $\displaystyle\lim_{P\rightarrow\infty}Z_{P}^{-1}\prod_{I=1}^{N}\left[\int d\mathbf{R}_{I}^{(1)}\cdots\int d\mathbf{R}_{I}^{(P)}\right]\delta\left(\bar{n}_{\mathrm{O^{*}}}-\frac{1}{P}\sum_{s=1}^{P}n_{\mathrm{O^{*}}}\left(\mathbf{R}^{(s)}\right)\right)\exp\left(-\beta V_{\mathrm{eff}}(\left\\{\mathbf{R}\right\\})\right).$ (9) The brackets indicate the ensemble average of the enclosed function, which equals its time average under the assumption of ergodicity.
To further simplify the expression, the delta function is approximated by a narrowly peaked Gaussian of variance $1/(\beta\kappa)$, giving $\rho(\bar{n}_{\mathrm{O^{*}}})\approx\lim_{P\rightarrow\infty}Z_{P}^{-1}\sqrt{\frac{\beta\kappa}{2\pi}}\prod_{I=1}^{N}\left[\int d\mathbf{R}_{I}^{(1)}\cdots\int d\mathbf{R}_{I}^{(P)}\right]\exp\left(-\beta V_{\mathrm{eff}}^{\mathrm{cons}}(\left\\{\mathbf{R}\right\\},\bar{n}_{\mathrm{O^{*}}})\right)$ (10) where $V^{\mathrm{cons}}_{\mathrm{eff}}(\left\\{\mathbf{R}\right\\},\bar{n}_{\mathrm{O^{*}}})=V_{\mathrm{eff}}(\left\\{\mathbf{R}\right\\})+\frac{\kappa}{2}\left(\bar{n}_{\mathrm{O^{*}}}-\frac{1}{P}\sum_{s=1}^{P}n_{\mathrm{O^{*}}}\left(\mathbf{R}^{(s)}\right)\right)^{2}.$ (11) Calculating the free energy value itself through the above equations is exceedingly difficult. Its derivative with respect to the restrained value is, however, much easier to obtain. This derivative is given as $f(\bar{n}_{\mathrm{O^{*}}})=\frac{\partial A(\bar{n}_{\mathrm{O^{*}}})}{\partial\bar{n}_{\mathrm{O^{*}}}}=\left\langle\kappa\left(\bar{n}_{\mathrm{O^{*}}}-\frac{1}{P}\sum_{s=1}^{P}n_{\mathrm{O^{*}}}\left(\mathbf{R}^{(s)}\right)\right)\right\rangle_{\mathrm{eff}},$ (12) where the subscript “eff” denotes sampling by a PIMD simulation with the restraint. This derivative corresponds to the time-averaged restraint force, which will be used in the following to calculate the free energy surface as a function of the CN.
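In practice, Equation (12) amounts to time-averaging the recorded restraint force along the restrained trajectory. A minimal sketch with illustrative argument names:

```python
def mean_restraint_force(n_bar, kappa, bead_cns):
    """Time-averaged restraint force of Eq. (12).

    n_bar: restrained CN value.
    kappa: restraint force constant.
    bead_cns: one entry per MD step, each a list of the CN evaluated
              on each of the P beads at that step.
    Returns an estimate of dA/dn_bar at n_bar.
    """
    forces = [kappa * (n_bar - sum(cns) / len(cns)) for cns in bead_cns]
    return sum(forces) / len(forces)
```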
### II.2 Methodology for Calculating the Autoionization Constant of Water Using the time-averaged forces, $f(\bar{n}_{\mathrm{O^{*}}i})$, from Equation (12) and the time-averaged CNs, $n_{\mathrm{O^{*}}i}$, obtained from a number of CN-restrained simulations, the free energy difference can be estimated through thermodynamic integration, $\Delta A(\bar{n}_{\mathrm{O^{*}}i})=\int_{\bar{n}_{\mathrm{O^{*}}1}}^{\bar{n}_{\mathrm{O^{*}}i}}f(\bar{n})d\bar{n}.$ (13) The numerical integral is evaluated using a third-order B-spline interpolation that passes through all of the simulated restrained CNs $(\bar{n}_{\mathrm{O^{*}}1},\ldots,\bar{n}_{\mathrm{O^{*}}M})$. Using the free energy surface, one can employ a probabilistic (PROB) method to calculate the p$K_{W}$ of water and the p$K_{A}$ of an acid, as suggested by Davies et al. (2002) based on the work of Chandler (1987). This method relies on the relative probability of finding the system in a bound state. For this purpose we define a cutoff bond distance, $R_{c}$, at which the O-H bond breaks and the OH- and H3O+ ions are formed. The probability ratio between the bound and dissociated states is given by $\gamma(R_{c})=\frac{\int_{0}^{R_{c}}\exp(-\beta\Delta A(\bar{n}_{\mathrm{O^{*}}}(r)))4\pi r^{2}dr}{\int_{0}^{R_{\mathrm{max}}}\exp(-\beta\Delta A(\bar{n}_{\mathrm{O^{*}}}(r)))4\pi r^{2}dr},$ (14) where the factor $4\pi r^{2}$ arises from the Jacobian of spherical coordinates. The mapping $\bar{n}_{\mathrm{O^{*}}}(r)$ is constructed by assigning the time-averaged closest distance between the central oxygen and the restrained hydrogens, $r_{i}$, to the given restraint $\bar{n}_{\mathrm{O^{*}}i}$. The free energy difference calculated in Equation (13), $\Delta A(\bar{n}_{\mathrm{O^{*}}}(r))$, is then linearly interpolated to numerically evaluate the probability.
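The thermodynamic integration of Equation (13) can be sketched as follows; for simplicity the trapezoidal rule stands in here for the third-order B-spline interpolation used in the paper, and the names are illustrative.

```python
def free_energy_profile(n_points, forces):
    """Cumulative thermodynamic integration of Eq. (13).

    n_points: restrained CN values, strictly increasing.
    forces: mean forces f(n_i) from Eq. (12) at those points.
    Returns Delta A at each n_i relative to the first point;
    trapezoidal rule used instead of the paper's cubic B-spline.
    """
    dA = [0.0]
    for i in range(1, len(n_points)):
        step = n_points[i] - n_points[i - 1]
        dA.append(dA[-1] + 0.5 * (forces[i] + forces[i - 1]) * step)
    return dA
```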
$R_{\mathrm{max}}$ is the time averaged distance when the CN restraint is set to the lowest coordination number, $\bar{n}_{\mathrm{O^{*}}M}$. The value of $R_{c}$ is determined so that it gives $\mathrm{p}K_{W}=14.00$ for liquid H2O. For water autoionization $\mathrm{H_{2}O(l)}\rightleftharpoons\mathrm{OH^{-}(aq)+H^{+}(aq)},$ (15) the autoionization constant is $\mathrm{p}K_{W}=-\log\left([\mathrm{OH^{-}(aq)}][\mathrm{H^{+}(aq)}]\right).$ (16) Rewriting this using the probabilities of water dissociation $(\gamma_{W}(R_{c}))$ from Equation (14) leads to $\mathrm{p}K_{W}(R_{c})=-2\log\left(\left(1-\gamma_{W}(R_{c})\right)\frac{N_{W}}{c_{0}V}\right),$ (17) where $N_{W}$ and $V$ are the number of water molecules and the volume of the simulation box, respectively, and $c_{0}$ is the standard concentration (1 M). To calculate $R_{c}$, the reference value at standard conditions is used, resulting in $\gamma_{W}\left(R_{c}\right)=1-\frac{10^{-7}}{c_{W}}$ (18) where $c_{W}=\frac{N_{W}}{c_{0}V}$ (which is about 55.6 in ambient conditions). This can then be used to find $R_{c}$ for Equation (14). It is then commonly assumed that this distance is also applicable to breaking the A-H bond to form A- and H3O+ for a common acid A. The p$K_{A}$ of an acid, or acid group of a molecule, (A) in water can be found by considering $\mathrm{AH(aq)}\rightleftharpoons\mathrm{A^{-}(aq)+H^{+}(aq)},$ (19) and the following expression for the acidity constant (p$K_{A}$), $\mathrm{p}K_{A}=-\log\left(\frac{[\mathrm{A^{-}}][\mathrm{H^{+}}]}{[\mathrm{AH}]}\right).$ (20) Here we have assumed a dilute aqueous solution, where the activities of all species are determined by their concentration in the solution. 
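Equations (17) and (18) can be verified numerically. The sketch below assumes $c_{W}\approx 55.6$ and uses illustrative function names; setting $\gamma_{W}$ to the calibration value of Equation (18) recovers $\mathrm{p}K_{W}=14.00$ by construction.

```python
import math

def pkw_from_gamma(gamma_w, c_w=55.6):
    """Eq. (17): pKw from the bound-state probability gamma_W."""
    return -2.0 * math.log10((1.0 - gamma_w) * c_w)

def gamma_target(c_w=55.6):
    """Eq. (18): the gamma_W that reproduces pKw = 14.00, used to fix R_c."""
    return 1.0 - 1e-7 / c_w
```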
The solvated protons, $\mathrm{H^{+}}$, can in principle stem from both water autoionization and the dissociation of AH, leading to the following expression for their concentration, $[\mathrm{H^{+}}]=[\mathrm{H^{+}}]_{\mathrm{acid}}+[\mathrm{H^{+}}]_{\mathrm{water}}=(1-\gamma_{A}(R_{c}))c_{A}+(1-\gamma_{W}(R_{c}))c_{W}^{\prime},$ (21) where $c_{W}^{\prime}=\frac{N_{W}^{\prime}}{c_{0}V}$ and $c_{A}=\frac{N_{A}}{c_{0}V}$. Here $N_{W}^{\prime}=N_{W}-N_{A}$, and $N_{A}$ is the number of acid molecules replacing water molecules in the box. It is implicitly assumed that the p$K_{W}$ of water is unchanged in the solution of AH, a reasonable assumption given that the solution is dilute. Using the probabilities of dissociation for the acid and water we can rewrite Equation (20) as $\displaystyle\mathrm{p}K_{A}({\rm PROB})$ $\displaystyle=$ $\displaystyle-\log\left(\frac{(1-\gamma_{A}(R_{c}))c_{A}\left((1-\gamma_{A}(R_{c}))c_{A}+(1-\gamma_{W}(R_{c}))c_{W}^{\prime}\right)}{c_{A}\gamma_{A}}\right)$ (22) $\displaystyle=$ $\displaystyle-\log\left(\frac{(1-\gamma_{A})}{\gamma_{A}}\left(\frac{(1-\gamma_{A})N_{A}}{c_{0}V}+10^{-7}\frac{N_{W}-N_{A}}{N_{W}}\right)\right).$ Generally one finds that $(1-\gamma_{A}(R_{c}))\gg(1-\gamma_{W}(R_{c}))$, so the above equation can be reasonably approximated as $\mathrm{p}K_{A}({\rm PROB})\approx-\log\left(\frac{(1-\gamma_{A}(R_{c}))^{2}}{\gamma_{A}(R_{c})}\frac{N_{A}}{c_{0}V}\right).$ (23) Equation (23) was used in previous studies to predict the p$K_{A}$ of acidic substances for which the proton concentration stems mainly from the dissociation of the solute (Doltsinis and Sprik, 2003; Schilling and Luber, 2019; Chen et al., 2012; Davies et al., 2002; Sandmann et al., 2019). In this study we instead use Equation (22), since the p$K_{A}$ of HDO and HTO in aqueous solution is expected to be close to that of the solvent H2O itself.
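Equation (22) translates directly into a short routine. The sketch below uses illustrative argument names and takes concentrations in units of $N/(c_{0}V)$.

```python
import math

def pka_prob(gamma_a, gamma_w, n_a, n_w, c0_v):
    """Eq. (22): probabilistic pKa for a very weak acid AH in water.

    gamma_a, gamma_w: bound-state probabilities of the acid and of water.
    n_a, n_w: numbers of acid and water molecules; c0_v is c_0 * V, so
    concentrations are N / (c0 * V).
    """
    c_a = n_a / c0_v
    c_w_prime = (n_w - n_a) / c0_v  # N_W' = N_W - N_A
    h_plus = (1.0 - gamma_a) * c_a + (1.0 - gamma_w) * c_w_prime
    return -math.log10((1.0 - gamma_a) * c_a * h_plus / (gamma_a * c_a))
```

With the water contribution switched off ($\gamma_{W}=1$), the routine reduces to the single-source form of Equation (23), which is a useful consistency check.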
This approach is, however, general to all very weak acids, whose p$K_{A}$ values lie close to that of the solvent. Such acids have not previously been studied in this context, which is why most studies have relied on Equation (23) for calculating p$K_{A}$ values. Another way of calculating the probabilities used in the equations above is to employ a basic two-state model. That is, we assume that the minimum of the free energy surface corresponds to the free energy of the bound state of water, while the free energy at the lowest CN restraint corresponds to the dissociated state. From this model we can formulate the probability of finding a water molecule in a dissociated state as $1-\gamma=\frac{\exp\left(-\beta\Delta A\right)}{1+\exp\left(-\beta\Delta A\right)},$ (24) where $\Delta A$ is taken as the difference between the maximum (dissociated state) value and the minimum (bound state) value of $\Delta A(\bar{n}_{\mathrm{O^{*}}i})$ in Equation (13). For the autoionization of water and its isotopologues this probability can be approximated as $\exp\left(-\beta\Delta A\right)$, as the free energy difference is very large. Inserting this into Equation (17) results in $\mathrm{p}K_{W}({\rm ABS})=-2\log\left(\exp\left(-\beta\Delta A\right)\frac{N_{W}}{c_{0}V}\right)=\frac{2\beta\Delta A}{\ln(10)}-2\log\left(\frac{N_{W}}{c_{0}V}\right).$ (25) This method allows us to compare the calculated p$K_{W}$ of H2O directly to that of D2O and T2O, as opposed to the probabilistic method, in which the H2O simulation is used as a reference. In the following this method will be referred to as the absolute (ABS) method, as it depends on the absolute free energy difference between two states. ## III Computational Details MD and PIMD simulations of liquid H2O and its isotopologues were performed in the canonical ensemble at a temperature of 300 K. The interatomic potentials and the associated forces were computed on the fly using ab initio DFT, semiempirical DFTB, or empirical OSS2 methods.
The simulation conditions are listed in Table I. Systems of $32-64$ water molecules were contained in cubic boxes with periodic boundary conditions. The box size was set such that the volume per molecule was 29.86 Å3, which amounts to a mass density of 1.00 g/cm3 for H2O. An example structure is depicted in Figure 1 along with snapshots of the constrained water molecule and its surroundings under different CN restraints recorded from the DFT trajectories. The numerical integration schemes for MD and PIMD were based on the reversible reference system propagation algorithm (RESPA) as implemented in the PIMD software package (Shiga, 2020). The temperature was strongly controlled by attaching massive Nosé-Hoover chain (MNHC) thermostats (Nosé, 1984; Hoover, 1985; Martyna et al., 1992) to each degree of freedom in the MD simulations. The MNHC thermostats were also attached to each normal mode representing the ring polymer in the PIMD simulations. The fifth-order Suzuki-Yoshida factorization was used for the numerical integration of the MNHC thermostats. The simulations were run for $12.5-25.0$ ps with a step size of $0.25-0.43$ fs, depending on the system and the interatomic potential. Restraints with a force constant $\kappa=4-10$ hartree were applied at 15 points in the range $0.16\leq n_{i}\leq 0.98$, which kept the fluctuations of the CN on the order of 0.01. To deal with the fast oscillations caused by the restraints, the restraint force was updated 5 times per MD/PIMD step within the RESPA scheme. The ab initio DFT energy calculations were carried out using the Vienna ab initio simulation package (VASP) (Kresse and Hafner, 1993, 1994; Kresse and Furthmüller, 1996). We employed the Perdew-Burke-Ernzerhof (PBE) exchange-correlation functional (Perdew et al., 1996) and Grimme’s D3 van der Waals correction (Grimme et al., 2010). The core electrons were taken into account using the projector augmented wave (PAW) method.
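The multiple-time-step idea described above, with the restraint force updated 5 times per MD/PIMD step, can be illustrated with a one-dimensional reversible RESPA step. This is a generic sketch, not the PIMD package's implementation; the force partitioning and names are illustrative.

```python
def respa_step(x, v, mass, dt, slow_force, fast_force, n_inner=5):
    """One reversible RESPA step for a single 1D degree of freedom.

    The cheap (fast) force is integrated with n_inner velocity-Verlet
    substeps per update of the expensive (slow) force, mirroring the
    5 restraint-force updates per step used in the text.
    """
    v += 0.5 * dt * slow_force(x) / mass      # half kick with slow force
    h = dt / n_inner
    for _ in range(n_inner):                   # inner velocity-Verlet loop
        v += 0.5 * h * fast_force(x) / mass
        x += h * v
        v += 0.5 * h * fast_force(x) / mass
    v += 0.5 * dt * slow_force(x) / mass      # closing half kick
    return x, v
```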
The valence electrons were expanded in a linear combination of plane-wave basis functions with a cutoff of 400 eV. Only the $\Gamma$-point of the Brillouin zone was sampled. The semiempirical DFTB energy calculations were carried out using DFTB+ (Hourahine et al., 2020). We employed the third-order self-consistent-charge density-functional tight-binding (SCC-DFTB) method (Elstner et al., 1998) with the 3ob Slater-Koster parameter set (Gaus et al., 2013). The empirical energy calculations based on the OSS2 potential (Ojamäe et al., 1998) were implemented in an in-house version of the PIMD software package. The OSS2 potential can be categorized as a polarizable force field composed of short-range intramolecular bonds and long-range interactions between point charges and induced point dipoles with damping. The parameters of the OSS2 potential are fitted to reproduce the ab initio energies of neutral and protonated water clusters at the level of second-order Møller-Plesset perturbation theory (MP2). The OSS2 potential was originally developed for small water clusters under free boundary conditions, but it can be applied to bulk water by adapting it to periodic boundary conditions. For this we used the Ewald sum of the point charges and induced point dipoles, where the point dipoles in the electrostatic field were determined by the matrix inversion method (Sala et al., 2010). The resulting restraints, forces, and trajectories were analysed using a locally developed Python script, in which the MDAnalysis library (Michaud-Agrawal et al., 2011; Gowers et al., 2016) was used for calculating the O-H distances. The errors reported here were calculated by the method outlined by Flyvbjerg and Petersen (1989). The errors of the cutoff distance $R_{c}$ and of the probabilistic p$K_{A}$ or p$K_{W}$ are not considered to be correlated, i.e., the errors in p$K_{W}$ or p$K_{A}$ are calculated using a fixed value of $R_{c}$.
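The blocking analysis of Flyvbjerg and Petersen used for the error bars can be sketched as follows. This is a simplified plateau heuristic (taking the largest standard-error estimate across blocking levels), not the script actually used in the study.

```python
import math

def blocking_error(data):
    """Flyvbjerg-Petersen blocking estimate of the standard error of the mean.

    Pair-averages the series repeatedly; at each level the naive standard
    error sqrt(var / (n - 1)) is computed, and the largest value seen is
    returned as a simple plateau heuristic for correlated data.
    """
    x = list(data)
    best = 0.0
    while len(x) >= 2:
        n = len(x)
        mean = sum(x) / n
        var = sum((v - mean) ** 2 for v in x) / n
        best = max(best, math.sqrt(var / (n - 1)))
        x = [0.5 * (x[2 * i] + x[2 * i + 1]) for i in range(n // 2)]
    return best
```

For uncorrelated data the estimate is largest at the first level; correlated data require a few blocking steps before the estimate stops growing.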
All figures depicting the molecular systems were visualized using the VMD software (Humphrey et al., 1996). The numerical integrals needed for the calculations outlined in the theory section were all carried out using linear interpolation with a step size of $1\cdot 10^{-4}$ in both CN and O-H distance space. ## IV Results The free energy curves, $A(n(r))$, obtained from ab initio MD and PIMD simulations are displayed in Figure 2(a). These represent, in our opinion, the most reliable results of the present study. The following features become clear when comparing the results presented in Figure 2(a). Ab initio PIMD yields a much lower free energy for the dissociation process than ab initio MD. This effect can be explained by the NQEs, which are expected to lower the free energy of dissociation through the delocalization of the proton (Ceriotti et al., 2016). Comparing the results for the isotopologues H2O and T2O, we find that the free energy curves differ in ab initio PIMD; calculations of these two species using ab initio MD would yield identical curves owing to the absence of NQEs in classical MD. An important consequence of this difference is that the cutoff distance $R_{c}$ determined by ab initio MD ($1.30\pm 0.03$ Å) must be adapted to that of ab initio PIMD ($1.50\pm 0.02$ Å for H2O and $1.53\pm 0.04$ Å for D2O; see Table II and the SI, respectively) in order to reproduce the reference p$K_{W}$ value of liquid water. We note in passing that the value obtained from ab initio MD is consistent with the values of $1.22-1.3$ Å used in previous studies (Davies et al., 2002; Doltsinis and Sprik, 2003; Chen et al., 2012; Schilling and Luber, 2019). One way of interpreting the increase in the cutoff distance in the PIMD simulations is that the proton remains bound to the restrained oxygen out to larger distances due to NQEs. In Figures 2(b) and 2(c) we display the results obtained from semiempirical MD/PIMD and empirical MD/PIMD, respectively.
Comparing Figures 2(b) and 2(c) with Figure 2(a), the free energy curves of semiempirical MD and PIMD differ considerably in absolute value from those of ab initio MD and PIMD. Accordingly, the cutoff distances $R_{c}$ are quite different from one another. On the other hand, the reduction of the free energy curves by the NQEs behaves similarly. We therefore speculate that the isotope effects predicted by empirical and semiempirical PIMD can be as reliable as those from ab initio PIMD. The free energy curves of the classical simulations show anharmonic behavior, since the mean force upon proton dissociation is a nonlinear function of $r$. In addition, the difference between the free energy curves of the classical and quantum simulations varies along $r$, which implies an influence of anharmonicity on the NQEs. In general, the magnitude of the NQEs tends to be large where the curvature of the potential, characterized by the frequency $\omega$, exceeds $1/\beta\hbar$, as can be understood from the formula for the quantum harmonic correction to the free energy, $A_{\rm qhc}=-\beta^{-1}\log\left\\{\frac{\beta\hbar\omega/2}{\sinh(\beta\hbar\omega/2)}\right\\}$ (Shiga, 2012). The computational effort needed to obtain the free energy curve increases in proportion to the number of restraints. We therefore set up the ab initio MD and PIMD simulations with a smaller system and number of beads ($N=32$ and $P=12$) compared to our previous work on the same system without restraints ($N=64$ and $P=16$) (Machida et al., 2017). To verify this setup, we checked the size and bead dependence of the MD and PIMD simulations using the semiempirical DFTB potential. Figure 3(a) shows the free energy curves obtained from a semiempirical PIMD simulation with a larger number of beads, $P=32$, while Figure 3(b) shows the free energy curves obtained from semiempirical MD and PIMD simulations of a larger system with 64 water molecules.
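The quantum harmonic correction $A_{\rm qhc}$ quoted above can be evaluated directly. A minimal sketch, assuming the caller supplies $\beta$ and $\hbar\omega$ in mutually consistent units; the function name is illustrative.

```python
import math

def quantum_harmonic_correction(beta, hbar_omega):
    """A_qhc = -(1/beta) * ln[(beta*hbar*omega/2) / sinh(beta*hbar*omega/2)].

    A positive correction that vanishes in the classical limit
    (beta*hbar*omega -> 0) and grows once hbar*omega exceeds ~1/beta,
    i.e. where the potential curvature makes quantum effects large.
    """
    x = beta * hbar_omega / 2.0
    if x == 0.0:
        return 0.0
    return -math.log(x / math.sinh(x)) / beta
```

For small $\beta\hbar\omega$ the correction behaves as $(\beta\hbar\omega)^{2}/24\beta$, consistent with quantum effects being negligible for soft modes at high temperature.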
It can be clearly seen that Figures 3(a) and 3(b) follow the same trend as Figure 2(b), in the sense that the free energy is reduced by the NQEs within the semiempirical simulations. We can therefore expect that the results obtained from the ab initio MD and PIMD simulations shown in Figure 2(a) are reasonable with respect to the nuclear quantum and isotope effects on the free energy curves. In Table II, we display the autoionization constants, p$K_{W}$, of liquid D2O and T2O calculated using Equation (17) and the free energy curves obtained from PIMD simulations. The cutoff parameter $R_{c}$ was determined for each particular setup of the PIMD simulations such that the p$K_{W}$ of liquid H2O is 14.00. The p$K_{W}$ value of liquid T2O is consistently higher than 14.00, in agreement with experimental expectations (Ceriotti et al., 2016). However, the difference between the p$K_{W}$ values of liquid H2O and D2O in the ab initio PIMD simulations lies within the error bars. All the PIMD simulations predicted, with statistical significance, that the p$K_{W}$ value of liquid T2O is larger than that of liquid H2O. We also calculated p$K_{W}$ using the D2O p$K_{W}$ as a reference, with the results given in Table SI of the supplementary information (SI); these agree well with the ones presented in Table II. The autoionization constants calculated using the absolute method outlined around Equation (25) are given in Table III. This method makes it possible to compare the p$K_{W}$ values calculated from MD and PIMD simulations directly, as no reference value is required. Comparing the ab initio MD and PIMD results for H2O reveals that the NQEs play an important role in determining the correct p$K_{W}$ value. We note that the values predicted from the semiempirical MD and PIMD simulations are far from the correct value of p$K_{W}$ (see Table SII in the SI), as can be expected from their free energy profiles.
The results using the empirical OSS2 force field are, however, comparable with those of the ab initio method, with a slightly smaller difference between the MD and PIMD results. As shown above, the absolute method with ab initio PIMD simulations can correct the large overestimation, by ab initio MD, of the free energy required to remove a proton from water. Additionally, the absolute method produces a reasonable result for the isotope effect for all pure isotopologues studied here. In Table IV, we display the acidity constants, p$K_{A}$, associated with the reaction, ${\rm HXO}({\rm aq})\rightleftharpoons{\rm OX}^{-}({\rm aq})+{\rm H}^{+}({\rm aq}),$ (26) where ${\rm X=D}$ or ${\rm T}$. The values were calculated using Equation (22) and the free energy curves obtained from PIMD simulations. Experimental values do not exist for the p$K_{A}$ of HDO and HTO. However, the “rational” p$K_{A}$ of H2O, which is $15.74=\mathrm{p}K_{W}+\log\left(c_{W}\right)$, can serve as a reference. The rational p$K_{A}$ is obtained by using an activity for the H2O molecules in which the dissociating water molecule is misleadingly assumed to be distinguishable from the rest of the water molecules in solution (Silverstein, 2017; Meister et al., 2014). It can, however, serve well as a reference in the present case, where the HDO and HTO molecules are in fact distinguishable from the solvent molecules. It is expected that the p$K_{A}$ values of HDO and HTO in water are either similar to or larger than the “rational” p$K_{A}$ of H2O, assuming that the isotope effect follows the same trend as for the p$K_{W}$ of H2O, D2O, and T2O. It turned out that the p$K_{A}$ value of HDO predicted from empirical PIMD is close to the “rational” p$K_{A}$ of H2O, while the predicted p$K_{A}$ value of HTO is larger than that of H2O with statistical significance. For the semiempirical calculations we find that both the p$K_{A}$ of HDO and that of HTO are larger than the “rational” p$K_{A}$ of H2O.
We note that these p$K_{A}$ values are difficult to measure experimentally, but they are important in determining the concentration of ${\rm OD}^{-}$ and ${\rm OT}^{-}$ ions in liquid water. ## V Conclusion In this study we established a first-principles approach to compute the autoionization and acidity constants of water, taking NQEs into account by means of PIMD with CN restraints. The simulations were carried out using different potential energy surfaces, i.e., ab initio DFT, semiempirical DFTB, and empirical OSS2 methods. The findings presented here are in line with previous empirical results (Ceriotti et al., 2016). The current study differentiates itself from previous work by targeting the autoionization process directly, without any empirical factors entering the estimation of the free energy curves. It was found that the free energy curve for proton dissociation obtained from the quantum PIMD simulations is downshifted significantly compared with that obtained from the classical MD simulations, demonstrating the importance of the NQEs for the autoionization and acidity constants. The p$K_{W}$ values of the water isotopologues, liquid D2O and T2O, were estimated with a probabilistic method using the shifts in the free energy curves of D2O and T2O with respect to that of H2O. The results agree well with experimental values within the statistical uncertainties of our simulations. We went on to compute the p$K_{A}$ values of aqueous HDO and HTO molecules, which are difficult to measure experimentally. The results predict that the p$K_{A}$ of aqueous HTO (HDO) is larger than (close to) that of H2O. The work presented here opens the possibility of accurate p$K_{A}$ calculations for more complex systems, such as small organic molecules in solution. It furthermore makes it possible to predict the isotope effect in such systems by direct calculation.
These goals, together with the temperature and pressure dependence of the autoionization constant of water, will be subjects of future studies. We finally note that the probabilistic method requires the reference p$K_{W}$ value of H2O (14.00), while the absolute method does not. For the latter, however, an accurate estimation of absolute p$K_{A}$ values remains a difficult challenge. In fact, the p$K_{W}$ values estimated using the absolute method with the present simulations differ considerably from those obtained with the probabilistic method (see Tables II and III), because the absolute free energy curve depends strongly on the potential models and the system sizes. These issues should be studied more carefully in the future. It is clear, however, that the inclusion of the NQEs is important for determining autoionization and acidity constants. In fact, we find a difference of $4.5\pm 0.9$ p$K_{A}$ units between the ab initio MD and PIMD results, where the PIMD result is clearly closer to the true value of the p$K_{W}$ of water. This difference is, to some extent, in line with the experimental extrapolation of 3 p$K_{A}$ units suggested earlier (Ceriotti et al., 2016). ## VI Supplementary material See the supplementary material for tables showing the autoionization constants of water isotopologues calculated by the probabilistic method using the experimental p$K_{W}$ of D2O to calculate $R_{c}$, and the autoionization constants of water isotopologues calculated by the absolute method using the semiempirical DFTB method. ## VII Acknowledgements This work was completed under the project “Hydrogenomics” in the Grant-in-Aid for Scientific Research on Innovative Areas, MEXT, Japan. The computations were mostly run on the supercomputer facilities at the Japan Atomic Energy Agency and the Institute for Solid State Physics, The University of Tokyo. M.S.
thanks JSPS KAKENHI (18H05519, 18H01693, 18K05208) and MEXT Program for Promoting Research on the Supercomputer Fugaku (Fugaku Battery & Fuel Cell Project) for financial support. We thank Prof. Nikos Doltsinis in Universität Münster for his advice on the coordination number constraints. We thank Dr. Alex Malins in JAEA for proofreading the text. ## VIII Data Availability The data that support the findings of this study are available within the article and its supplementary material. ## References * (1) K. S. Alongi and G. C. Shields, “Chapter 8 - Theoretical Calculations of Acid Dissociation Constants: A Review Article,” in Annual Reports in Computational Chemistry (R. A. Wheeler, ed.), vol. 6, pp. 113–138, Elsevier, Jan. 2010. * (2) E. Alexov, E. L. Mehler, N. Baker, A. M. Baptista, Y. Huang, F. Milletti, J. E. Nielsen, D. Farrell, T. Carstensen, M. H. M. Olsson, J. K. Shen, J. Warwicker, S. Williams, and J. M. Word, “Progress in the prediction of pKa values in proteins,” Proteins, vol. 79, pp. 3260–3275, Dec. 2011\. * (3) J. Ho and M. Coote, “A universal approach for continuum solvent pK a calculations: are we there yet?,” Theor. Chem. Acc., 2010. * (4) J. Tomasi, B. Mennucci, and R. Cammi, “Quantum Mechanical Continuum Solvation Models,” Chem. Rev., vol. 105, pp. 2999–3094, Aug. 2005\. * (5) S. Miertuš, E. Scrocco, and J. Tomasi, “Electrostatic interaction of a solute with a continuum. A direct utilizaion of AB initio molecular potentials for the prevision of solvent effects,” Chem. Phys., vol. 55, pp. 117–129, Feb. 1981. * (6) J. B. Foresman, T. A. Keith, K. B. Wiberg, J. Snoonian, and M. J. Frisch, “Solvent Effects. 5. Influence of Cavity Shape, Truncation of Electrostatics, and Electron Correlation on ab Initio Reaction Field Calculations,” J. Phys. Chem., vol. 100, pp. 16098–16104, Jan. 1996. * (7) J. L. Pascual‐ahuir, E. Silla, and I. Tuñon, “GEPOL: An improved description of molecular surfaces. III. 
A new algorithm for the computation of a solvent-excluding surface,” J. Comput. Chem., vol. 15, no. 10, pp. 1127–1138, 1994. * (8) S. Miertuš and J. Tomasi, “Approximate evaluations of the electrostatic free energy and internal energy changes in solution processes,” Chem. Phys., vol. 65, pp. 239–245, Mar. 1982. * (9) M. Cossi, N. Rega, G. Scalmani, and V. Barone, “Energies, structures, and electronic properties of molecules in solution with the C-PCM solvation model,” J. Comput. Chem., vol. 24, no. 6, pp. 669–681, 2003. * (10) A. Klamt and G. Schüürmann, “COSMO: a new approach to dielectric screening in solvents with explicit expressions for the screening energy and its gradient,” J. Chem. Soc., Perkin Trans. 2, pp. 799–805, Jan. 1993. * (11) A. Klamt, “Conductor-like Screening Model for Real Solvents: A New Approach to the Quantitative Calculation of Solvation Phenomena,” J. Phys. Chem., vol. 99, pp. 2224–2235, Feb. 1995. * (12) A. V. Marenich, C. J. Cramer, and D. G. Truhlar, “Universal Solvation Model Based on Solute Electron Density and on a Continuum Model of the Solvent Defined by the Bulk Dielectric Constant and Atomic Surface Tensions,” J. Phys. Chem. B, vol. 113, pp. 6378–6396, May 2009\. * (13) I. Soteras, C. Curutchet, A. Bidon-Chanal, M. Orozco, and F. J. Luque, “Extension of the MST model to the IEF formalism: HF and B3lyp parametrizations,” J. Mol. Struct. (Theochem), vol. 727, pp. 29–40, Aug. 2005. * (14) N. L. Doltsinis and M. Sprik, “Theoretical pKa estimates for solvated P(OH)5 from coordination constrained Car–Parrinello molecular dynamics,” Phys. Chem. Chem. Phys., vol. 5, pp. 2612–2618, June 2003. * (15) M. Schilling and S. Luber, “Determination of pKa Values via ab initio Molecular Dynamics and its Application to Transition Metal-Based Water Oxidation Catalysts,” Inorganics, vol. 7, p. 73, June 2019\. * (16) Y.-L. Chen, N. L. Doltsinis, R. C. Hider, and D. J. Barlow, “Prediction of Absolute Hydroxyl pKa Values for 3-Hydroxypyridin-4-ones,” J. 
Phys. Chem. Lett., vol. 3, pp. 2980–2985, Oct. 2012. * (17) J. E. Davies, N. L. Doltsinis, A. J. Kirby, C. D. Roussev, and M. Sprik, “Estimating pKa Values for Pentaoxyphosphoranes,” J. Am. Chem. Soc., vol. 124, pp. 6594–6599, June 2002. * (18) N. Sandmann, J. Bachmann, A. Hepp, N. L. Doltsinis, and J. Müller, “Copper(II)-mediated base pairing involving the artificial nucleobase 3h-imidazo[4,5-f]quinolin-5-ol,” Dalton Trans., vol. 48, pp. 10505–10515, July 2019. * (19) A. K. Tummanapelli and S. Vasudevan, “Dissociation Constants of Weak Acids from ab Initio Molecular Dynamics Using Metadynamics: Influence of the Inductive Effect and Hydrogen Bonding on pKa Values,” J. Phys. Chem. B, vol. 118, pp. 13651–13657, Nov. 2014. * (20) A. K. Tummanapelli and S. Vasudevan, “Ab Initio MD Simulations of the Brønsted Acidity of Glutathione in Aqueous Solutions: Predicting pKa Shifts of the Cysteine Residue,” J. Phys. Chem. B, vol. 119, pp. 15353–15358, Dec. 2015. * (21) A. K. Tummanapelli and S. Vasudevan, “Ab Initio Molecular Dynamics Simulations of Amino Acids in Aqueous Solutions: Estimating pKa Values from Metadynamics Sampling,” J. Phys. Chem. B, vol. 119, pp. 12249–12255, Sept. 2015. * (22) C. D. Daub and L. Halonen, “Ab Initio Molecular Dynamics Simulations of the Influence of Lithium Bromide Salt on the Deprotonation of Formic Acid in Aqueous Solution,” J. Phys. Chem. B, vol. 123, pp. 6823–6829, Aug. 2019. * (23) L. Xu and M. L. Coote, “Methods To Improve the Calculations of Solvation Model Density Solvation Free Energies and Associated Aqueous pKa Values: Comparison between Choosing an Optimal Theoretical Level, Solute Cavity Scaling, and Using Explicit Solvent Molecules,” J. Phys. Chem. A, vol. 123, pp. 7430–7438, Aug. 2019. * (24) C. C. R. Sutton, G. V. Franks, and G. da Silva, “First Principles pKa Calculations on Carboxylic Acids Using the SMD Solvation Model: Effect of Thermodynamic Cycle, Model Chemistry, and Explicit Solvent Molecules,” J. Phys. Chem. B, vol. 
116, pp. 11999–12006, Oct. 2012. * (25) S. Zhang, “A reliable and efficient first principles-based method for predicting pKa values. III. Adding explicit water molecules: Can the theoretical slope be reproduced and pKa values predicted more accurately?,” J. Comput. Chem., vol. 33, no. 5, pp. 517–526, 2012. * (26) B. Thapa and H. B. Schlegel, “Calculations of pKa’s and Redox Potentials of Nucleobases with Explicit Waters and Polarizable Continuum Solvation,” J. Phys. Chem. A, vol. 119, pp. 5134–5144, May 2015. * (27) B. Thapa and H. B. Schlegel, “Density Functional Theory Calculation of pKa’s of Thiols in Aqueous Solution Using Explicit Water Molecules and the Polarizable Continuum Model,” J. Phys. Chem. A, vol. 120, pp. 5726–5735, July 2016. * (28) N. L. Haworth, Q. Wang, and M. L. Coote, “Modeling Flexible Molecules in Solution: A pKa Case Study,” J. Phys. Chem. A, vol. 121, pp. 5217–5225, July 2017. * (29) E. A. Carter, G. Ciccotti, J. T. Hynes, and R. Kapral, “Constrained reaction coordinate dynamics for the simulation of rare events,” Chem. Phys. Lett., vol. 156, pp. 472–477, Apr. 1989. * (30) M. Sprik and G. Ciccotti, “Free energy from constrained molecular dynamics,” J. Chem. Phys., vol. 109, pp. 7737–7744, Nov. 1998. * (31) A. V. Bandura and S. N. Lvov, “The Ionization Constant of Water over Wide Ranges of Temperature and Density,” J. Phys. Chem. Ref. Data, vol. 35, pp. 15–30, Dec. 2005. * (32) N. Mora-Diez, Y. Egorova, H. Plommer, and P. R. Tremaine, “Theoretical study of deuterium isotope effects on acid–base equilibria under ambient and hydrothermal conditions,” RSC Adv., vol. 5, pp. 9097–9109, Jan. 2015. * (33) D. W. Shoesmith and W. Lee, “The ionization constant of heavy water (D2o) in the temperature range 298 to 523 K,” Can. J. Chem., vol. 54, pp. 3553–3558, Nov. 1976. * (34) M. Sprik, “Computation of the pK of liquid water using coordination constraints,” Chemical Physics, vol. 258, pp. 139–150, Aug. 2000. * (35) E. Perlt, M. v. Domaros, B. 
Kirchner, R. Ludwig, and F. Weinhold, “Predicting the Ionic Product of Water,” Sci Rep, vol. 7, pp. 1–10, Aug. 2017. * (36) M. Štrajbl, G. Hong, and A. Warshel, “Ab Initio QM/MM Simulation with Proper Sampling: “First Principle” Calculations of the Free Energy of the Autodissociation of Water in Aqueous Solution,” J. Phys. Chem. B, vol. 106, pp. 13333–13343, Dec. 2002. * (37) R. Wang, V. Carnevale, M. L. Klein, and E. Borguet, “First-Principles Calculation of Water pKa Using the Newly Developed SCAN Functional,” J. Phys. Chem. Lett., vol. 11, pp. 54–59, Jan. 2020. * (38) M. Moqadam, A. Lervik, E. Riccardi, V. Venkatraman, B. K. Alsberg, and T. S. v. Erp, “Local initiation conditions for water autoionization,” PNAS, vol. 115, pp. E4569–E4576, May 2018. * (39) P. L. Geissler, C. Dellago, D. Chandler, J. Hutter, and M. Parrinello, “Autoionization in Liquid Water,” Science, vol. 291, pp. 2121–2124, Mar. 2001. * (40) B. L. Trout and M. Parrinello, “Analysis of the Dissociation of H2O in Water Using First-Principles Molecular Dynamics,” J. Phys. Chem. B, vol. 103, pp. 7340–7345, Aug. 1999. * (41) M. Shiga, “Path Integral Simulations,” in Reference Module in Chemistry, Molecular Sciences and Chemical Engineering, Elsevier, Jan. 2018. * (42) M. Parrinello and A. Rahman, “Study of an F center in molten KCl,” J. Chem. Phys., vol. 80, pp. 860–867, Jan. 1984. * (43) R. W. Hall and B. J. Berne, “Nonergodicity in path integral molecular dynamics,” J. Chem. Phys., vol. 81, pp. 3641–3643, Oct. 1984. * (44) M. Ceriotti, M. Parrinello, T. E. Markland, and D. E. Manolopoulos, “Efficient stochastic thermostatting of path integral molecular dynamics,” J. Chem. Phys., vol. 133, p. 124104, Sept. 2010. * (45) M. E. Tuckerman, B. J. Berne, G. J. Martyna, and M. L. Klein, “Efficient molecular dynamics and hybrid Monte Carlo algorithms for path integrals,” J. Chem. Phys., vol. 99, pp. 2796–2808, Aug. 1993. * (46) M. Ceriotti, W. Fang, P. G. Kusalik, R. H. McKenzie, A. Michaelides, M. A. 
Morales, and T. E. Markland, “Nuclear Quantum Effects in Water and Aqueous Systems: Experiment, Theory, and Current Challenges,” Chem. Rev., vol. 116, pp. 7529–7550, July 2016. * (47) D. Marx, M. E. Tuckerman, J. Hutter, and M. Parrinello, “The nature of the hydrated excess proton in water,” Nature, vol. 397, pp. 601–604, Feb. 1999. * (48) G. Cassone, “Nuclear Quantum Effects Largely Influence Molecular Dissociation and Proton Transfer in Liquid Water under an Electric Field,” J. Phys. Chem. Lett., vol. 11, pp. 8983–8988, Nov. 2020. * (49) D. Chandler, Introduction to Modern Statistical Mechanics. Oxford: Oxford University Press, 1987. * (50) M. Shiga, “PIMD,” 2020. * (51) S. Nosé, “A unified formulation of the constant temperature molecular dynamics methods,” J. Chem. Phys., vol. 81, pp. 511–519, July 1984. * (52) W. G. Hoover, “Canonical dynamics: Equilibrium phase-space distributions,” Phys. Rev. A, vol. 31, pp. 1695–1697, Mar. 1985. * (53) G. J. Martyna, M. L. Klein, and M. Tuckerman, “Nosé–Hoover chains: The canonical ensemble via continuous dynamics,” J. Chem. Phys., vol. 97, pp. 2635–2643, Aug. 1992. * (54) G. Kresse and J. Hafner, “Ab initio molecular dynamics for liquid metals,” Phys. Rev. B, vol. 47, pp. 558–561, Jan. 1993. * (55) G. Kresse and J. Hafner, “Ab initio molecular-dynamics simulation of the liquid-metal–amorphous-semiconductor transition in germanium,” Phys. Rev. B, vol. 49, pp. 14251–14269, May 1994. * (56) G. Kresse and J. Furthmüller, “Efficiency of ab-initio total energy calculations for metals and semiconductors using a plane-wave basis set,” Comput. Mater. Sci., vol. 6, pp. 15–50, July 1996. * (57) J. P. Perdew, K. Burke, and M. Ernzerhof, “Generalized Gradient Approximation Made Simple,” Phys. Rev. Lett., vol. 77, pp. 3865–3868, Oct. 1996. * (58) S. Grimme, J. Antony, S. Ehrlich, and H. Krieg, “A consistent and accurate ab initio parametrization of density functional dispersion correction (DFT-D) for the 94 elements H-Pu,” J. Chem. 
Phys., vol. 132, p. 154104, Apr. 2010. * (59) B. Hourahine, B. Aradi, V. Blum, F. Bonafé, A. Buccheri, C. Camacho, C. Cevallos, M. Y. Deshaye, T. Dumitrică, A. Dominguez, S. Ehlert, M. Elstner, T. van der Heide, J. Hermann, S. Irle, J. J. Kranz, C. Köhler, T. Kowalczyk, T. Kubař, I. S. Lee, V. Lutsker, R. J. Maurer, S. K. Min, I. Mitchell, C. Negre, T. A. Niehaus, A. M. N. Niklasson, A. J. Page, A. Pecchia, G. Penazzi, M. P. Persson, J. Řezáč, C. G. Sánchez, M. Sternberg, M. Stöhr, F. Stuckenberg, A. Tkatchenko, V. W.-z. Yu, and T. Frauenheim, “DFTB+, a software package for efficient approximate density functional theory based atomistic simulations,” J. Chem. Phys., vol. 152, p. 124101, Mar. 2020. * (60) M. Elstner, D. Porezag, G. Jungnickel, J. Elsner, M. Haugk, T. Frauenheim, S. Suhai, and G. Seifert, “Self-consistent-charge density-functional tight-binding method for simulations of complex materials properties,” Phys. Rev. B, vol. 58, pp. 7260–7268, Sept. 1998. * (61) M. Gaus, A. Goez, and M. Elstner, “Parametrization and Benchmark of DFTB3 for Organic Molecules,” J. Chem. Theory Comput., vol. 9, pp. 338–354, Jan. 2013. * (62) L. Ojamäe, I. Shavitt, and S. J. Singer, “Potential models for simulations of the solvated proton in water,” J. Chem. Phys., vol. 109, pp. 5547–5564, Sept. 1998. * (63) J. Sala, E. Guàrdia, and M. Masia, “The polarizable point dipoles method with electrostatic damping: Implementation on a model system,” J. Chem. Phys., vol. 133, p. 234101, Dec. 2010. * (64) N. Michaud-Agrawal, E. J. Denning, T. B. Woolf, and O. Beckstein, “MDAnalysis: A toolkit for the analysis of molecular dynamics simulations,” J. Comput. Chem., vol. 32, no. 10, pp. 2319–2327, 2011. * (65) R. J. Gowers, M. Linke, J. Barnoud, T. J. E. Reddy, M. N. Melo, S. L. Seyler, J. Domański, D. L. Dotson, S. Buchoux, I. M. Kenney, and O. 
Beckstein, “MDAnalysis: A Python Package for the Rapid Analysis of Molecular Dynamics Simulations,” Proceedings of the 15th Python in Science Conference, pp. 98–105, 2016. * (66) H. Flyvbjerg and H. G. Petersen, “Error estimates on averages of correlated data,” J. Chem. Phys., vol. 91, pp. 461–466, July 1989. * (67) W. Humphrey, A. Dalke, and K. Schulten, “VMD: Visual molecular dynamics,” J. Mol. Graph., vol. 14, pp. 33–38, Feb. 1996. * (68) M. Shiga and H. Fujisaki, “A quantum generalization of intrinsic reaction coordinate using path integral centroid coordinates,” J. Chem. Phys., vol. 136, p. 184103, May 2012. * (69) M. Machida, K. Kato, and M. Shiga, “Nuclear quantum effects of light and heavy water studied by all-electron first principles path integral simulations,” J. Chem. Phys., vol. 148, p. 102324, Dec. 2017. * (70) T. P. Silverstein and S. T. Heller, “pKa Values in the Undergraduate Curriculum: What Is the Real pKa of Water?,” J. Chem. Educ., vol. 94, pp. 690–695, June 2017. * (71) E. C. Meister, M. Willeke, W. Angst, A. Togni, and P. Walde, “Confusing Quantitative Descriptions of Brønsted-Lowry Acid-Base Equilibria in Chemistry Textbooks – A Critical Review and Clarifications for Chemical Educators,” Helv. Chim. Acta, vol. 97, no. 1, pp. 1–31, 2014.
Figure Captions
Figure 1: (A) Initial structure containing 32 water molecules. The four insets on the right show the water molecules within 4 Å of the central oxygen atom during the simulation with a coordination number restraint of (B) 0.98, (C) 0.80, (D) 0.60, (E) 0.31. The distance from the central oxygen to the nearest constrained hydrogen is (B) 1.00 Å, (C) 1.16 Å, (D) 1.29 Å, (E) 1.47 Å. The O-H bonds are drawn, in all figures, if the O-H distance is less than 1.3 Å, with the exception of the O∗-H bond, in which case the bonds are drawn for distances up to 1.5 Å. 
The central oxygen is in (B-E) marked with orange, and the teal hydrogen atom is the only hydrogen atom not subject to any constraints during the simulation.
Figure 2: The free energy curves, $A(n(r))$, for H2O using MD (black), H2O using PIMD (orange) and T2O using PIMD (green). (a) was calculated using the ab initio potential, while (b) uses the semiempirical potential and (c) uses the empirical potential. For the MD simulations using OSS2, DFTB and DFT we found the values of $R_{c}$ to be $1.39\pm 0.02$ Å, $1.13\pm 0.00$ Å, $1.27\pm 0.01$ Å, respectively. See Table II for the corresponding values for the PIMD simulations. The $R_{c}$ values are shown in this figure as vertical lines for H2O using MD (black) and PIMD (orange).
Figure 3: The free energy curves of semiempirical simulations for H2O and T2O in a box containing (a) 32 and (b) 64 molecules. In both figures the black curve represents classical MD on H2O, and the corresponding black vertical line is the calculated $R_{c}$ distance for this simulation, (a) $1.13\pm 0.00$ Å and (b) $1.15\pm 0.00$ Å, respectively. The orange and green curves represent H2O and T2O respectively in a PIMD simulation with (a) P = 32 or (b) P = 12. The orange vertical lines represent the calculated $R_{c}$ of the two H2O PIMD simulations, (a) $1.16\pm 0.00$ Å and (b) $1.17\pm 0.00$ Å, respectively.
Table I. Simulation setup of water isotopologues.
---
Method | System | Molecules | $P$ | $\Delta t$ [ps] | Length [ps] | Restraints
DFTa | liq. H2O | 32 | 1 | 0.25 | 24.0 | 15
DFTb | liq. H2O | 32 | 12 | 0.25 | 25.0 | 15
DFTb | liq. D2O | 32 | 12 | 0.25 | 25.0 | 15
DFTb | liq. T2O | 32 | 12 | 0.25 | 25.0 | 15
DFTBc | liq. H2O | 32 | 1 | 0.25 | 25.0 | 15
DFTBd | liq. H2O | 32 | 12 | 0.25 | 25.0 | 15
DFTBd | liq. H2O | 32 | 32 | 0.25 | 25.0 | 15
DFTBd | liq. D2O | 32 | 12 | 0.25 | 25.0 | 15
DFTBd | liq. T2O | 32 | 12 | 0.25 | 25.0 | 15
DFTBd | liq. T2O | 32 | 32 | 0.25 | 25.0 | 15
DFTBd | aq. HDO | 32 | 12 | 0.25 | 25.0 | 15
DFTBd | aq. HTO | 32 | 12 | 0.25 | 25.0 | 15
DFTBc | liq. H2O | 64 | 1 | 0.25 | 25.0 | 15
DFTBd | liq. H2O | 64 | 12 | 0.25 | 25.0 | 15
DFTBd | liq. T2O | 64 | 12 | 0.25 | 25.0 | 15
OSS2e | liq. H2O | 64 | 1 | 0.25 | 12.5 | 15
OSS2f | liq. H2O | 64 | 12 | 0.25 | 12.5 | 15
OSS2f | liq. D2O | 64 | 12 | 0.35 | 17.5 | 15
OSS2f | liq. T2O | 64 | 12 | 0.43 | 21.5 | 15
OSS2f | aq. HDO | 64 | 12 | 0.25 | 12.5 | 15
OSS2f | aq. HTO | 64 | 12 | 0.25 | 12.5 | 15
a: Ab initio MD. b: Ab initio PIMD. c: Semiempirical MD. d: Semiempirical PIMD. e: Empirical MD. f: Empirical PIMD.
Table II. Autoionization constants of water isotopologues calculated by the probabilistic method.
---
Method | System | Molecules | $P$ | p$K_{W}$ (PROB) | $R_{c}$ [Å]
DFTa | liq. H2O | 32 | 12 | 14.0e | 1.50$\pm$0.02
DFTa | liq. D2O | 32 | 12 | 13.5 $\pm$ 1.2 | 1.50$\pm$0.02
DFTa | liq. T2O | 32 | 12 | 15.4 $\pm$ 0.9 | 1.50$\pm$0.02
DFTBb | liq. H2O | 32 | 12 | 14.0e | 1.16$\pm$0.00
DFTBb | liq. D2O | 32 | 12 | 15.0 $\pm$ 0.3 | 1.16$\pm$0.00
DFTBb | liq. T2O | 32 | 12 | 15.0 $\pm$ 0.2 | 1.16$\pm$0.00
DFTBb | liq. H2O | 32 | 32 | 14.0e | 1.17$\pm$0.00
DFTBb | liq. T2O | 32 | 32 | 15.9 $\pm$ 0.3 | 1.17$\pm$0.00
DFTBb | liq. H2O | 64 | 12 | 14.0e | 1.16$\pm$0.00
DFTBb | liq. T2O | 64 | 12 | 14.4 $\pm$ 0.2 | 1.16$\pm$0.00
OSS2c | liq. H2O | 64 | 12 | 14.0e | 1.51$\pm$0.01
OSS2c | liq. D2O | 64 | 12 | 15.3 $\pm$ 0.5 | 1.51$\pm$0.01
OSS2c | liq. T2O | 64 | 12 | 15.6 $\pm$ 0.5 | 1.51$\pm$0.01
Exptld | liq. H2O | – | – | 14.00 | –
Exptld | liq. D2O | – | – | 14.86 (33) | –
Exptld | liq. T2O | – | – | 15.2 (46) | –
a: Ab initio PIMD. b: Semiempirical PIMD. c: Empirical PIMD. d: Experimental values. e: Reference value used to determine $R_{c}$.
Table III. Autoionization constants of water isotopologues calculated by the absolute method.
---
Method | System | Molecules | $P$ | p$K_{W}$ (ABS)
DFTa | liq. H2O | 32 | 1 | 19.7 $\pm$ 0.3
DFTb | liq. H2O | 32 | 12 | 15.2 $\pm$ 0.9
DFTb | liq. D2O | 32 | 12 | 14.8 $\pm$ 0.7
DFTb | liq. T2O | 32 | 12 | 16.0 $\pm$ 0.8
OSS2c | liq. H2O | 64 | 1 | 15.0 $\pm$ 0.5
OSS2d | liq. H2O | 64 | 12 | 13.5 $\pm$ 0.5
OSS2d | liq. D2O | 64 | 12 | 14.9 $\pm$ 0.7
OSS2d | liq. T2O | 64 | 12 | 14.6 $\pm$ 0.6
a: Ab initio MD. b: Ab initio PIMD. c: Empirical MD. d: Empirical PIMD.
Table IV. Acidity constants of water isotopologues, calculated using Equation (23).
---
Method | System | p$K_{A}$ (PROP) | $R_{c}$ [Å]
DFTBa | aq. HDO | 16.1 $\pm$ 0.1 | 1.16$\pm$0.00
DFTBa | aq. HTO | 16.3 $\pm$ 0.1 | 1.16$\pm$0.00
OSS2b | aq. HDO | 15.6 $\pm$ 0.3 | 1.51$\pm$0.01
OSS2b | aq. HTO | 17.1 $\pm$ 0.4 | 1.51$\pm$0.01
a: Semiempirical PIMD of 32 water molecules with $P=12$. b: Empirical PIMD of 64 water molecules with $P=12$.
# Destruction of refractory carbon grains drives the final stage of proto-planetary disk chemistry Arthur D. Bosman Department of Astronomy, University of Michigan, 323 West Hall, 1085 S. University Avenue, Ann Arbor, MI 48109, USA Felipe Alarcón Department of Astronomy, University of Michigan, 323 West Hall, 1085 S. University Avenue, Ann Arbor, MI 48109, USA Ke Zhang Department of Astronomy, University of Michigan, 323 West Hall, 1085 S. University Avenue, Ann Arbor, MI 48109, USA Edwin A. Bergin Department of Astronomy, University of Michigan, 323 West Hall, 1085 S. University Avenue, Ann Arbor, MI 48109, USA (Received XX; Revised YY; Accepted ZZ; ) ###### Abstract Here we aim to explore the origin of the strong C2H lines to reimagine the chemistry of protoplanetary disks. There are a few key aspects that drive our analysis. First, C2H is detected in young and old systems, hinting at a long-lived chemistry. Second, as a radical, C2H is rapidly destroyed, within $<$1000 yr. These two statements hint that the chemistry responsible for C2H emission must be predominantly in the gas phase and must be in equilibrium. Combining new and published chemical models, we find that elevating the total volatile (gas and ice) C/O ratio is the only natural way to create a long-lived, high C2H abundance. Most of the C2H resides in gas with $F_{\mathrm{UV}}/n_{\mathrm{gas}}\sim 10^{-7}\,G_{0}\,\mathrm{cm}^{3}$. To elevate the volatile C/O ratio, additional carbon has to be released into the gas to enable an equilibrium chemistry under oxygen-poor conditions. Photo-ablation of carbon-rich grains seems the most straightforward way to elevate the C/O ratio above 1.5, powering a long-lived equilibrium cycle. The regions where the conditions are optimal for a high C/O ratio and elevated C2H abundances in the gas disk, as set by the $F_{\mathrm{UV}}/n_{\mathrm{gas}}$ condition, lie just outside the pebble disk as well as possibly in disk gaps. 
This process can thus also explain the (hints of) structure seen in C2H observations. Protoplanetary disks, Astrochemistry, Chemical abundances ††journal: ApJ††software: RAC2D (Du & Bergin, 2014), SciPy (Virtanen et al., 2020), NumPy (Van Der Walt et al., 2011), Matplotlib (Hunter, 2007). ## 1 Introduction The abundance of the volatile elements (carbon, nitrogen, oxygen and sulfur), and whether these are in the gas, incorporated in ices, or part of the refractory material, is an important parameter in proto-planetary disk physics and chemistry. Volatile elemental abundances influence the molecular composition, which in turn influences the ionization and temperature of the gas. Furthermore, giant planets forming in the disk will accrete the local gas, so the elemental composition of the gas will influence the final (elemental) composition of the planet. The abundance of volatile elements, both in the gas and in the ice, in proto-planetary disk atmospheres appears to be different from the volatile ISM abundances. Herschel studies have shown that H2O vapor and ice are strongly underabundant, a factor of 10 to 1000 lower than expected from ISM-composition chemical models, in the upper layers beyond the snowline (Bergin et al., 2010; Hogerheijde et al., 2011; Kamp et al., 2013; Du et al., 2017). This water is most likely trapped in ice on large grains that have settled to the midplane (e.g. Krijt et al., 2016). Furthermore, sub-millimeter studies show that CO isotopologue emission is weaker than expected (e.g. Favre et al., 2013; Ansdell et al., 2016; Miotello et al., 2017). This indicates that the CO abundance is reduced from its expected value ($\sim 10^{-4}$ relative to H2) in the surface layers of the outer ($\gtrsim$ 20 AU) disk. As such, the dominant carriers of volatile oxygen and carbon are missing from both the gas and the ice in the surface layers of the outer disk. 
As disk mass estimates are usually based on the dust mass, these low oxygen and carbon abundances could be interpreted as a low gas-to-dust ratio. However, Hydrogen-Deuteride (HD) observations towards a handful of disks provide an independent measurement of the gas mass, finding gas-to-dust ratios in agreement with earlier assumptions (Bergin et al., 2013; McClure et al., 2016). On top of this, measured accretion rates and the composition of accreting material imply disk gas-to-dust ratios of 100 (the ISM value) or higher (Kama et al., 2015; Manara et al., 2016; McClure, 2019). Nitrogen-bearing molecules provide an additional constraint, with analysis of N2H+ and HCN also implying gas-to-dust ratios of 100 (van ’t Hoff et al., 2017; Cleeves et al., 2018; Anderson et al., 2019). Finally, observations of atomic carbon and oxygen lines towards TW Hya are consistent with the missing CO and H2O (Kama et al., 2016; Trapman et al., 2017). It is thus unlikely that the missing carbon and oxygen are present in unobservable species such as CH4 and O2 in the upper layers. At face value, current data and analysis suggest that the carbon and oxygen are sequestered in ices on large grains near the mid-plane. Observational evidence suggests that this process is relatively fast and takes place in the first Myr after disk formation (Zhang et al., 2020; Bergner et al., 2020). The low total abundance of CO and the even lower abundance of H2O indicate that total volatile C/O ratios are elevated above the ISM ratio of 0.4. This is confirmed by the brightness of the C2H lines detected towards many disks (Guilloteau et al., 2016; Bergner et al., 2019; Miotello et al., 2019), with C2H line fluxes comparable to those of 13CO (Kastner et al., 2014). Models that match these observations require C/O ratios of 1.5 – 2 (Bergin et al., 2016; Miotello et al., 2019). 
Under these high C/O ratio conditions, CO is the dominant oxygen-bearing molecule, so these conditions also explain the low H2O fluxes observed with Herschel (Kamp et al., 2013). To get to these higher C/O ratios it is not enough to remove volatiles from the surface layers and leave a small fraction of the CO. It is necessary to create a surplus of carbon relative to oxygen. This can be done by extracting oxygen from CO and putting it into water ice sequestered near the mid-plane, or by releasing excess carbon from a refractory reservoir, carbonaceous grains or PAHs (Draine, 1979; Finocchi et al., 1997; Visser et al., 2007; Alata et al., 2014, 2015; Anderson et al., 2017). Observations of some galactic PDRs also show the need for elevated C/O ratios (e.g. Guzmán et al., 2015; Le Gal et al., 2019). There, the high C/O ratio can be caused by the release of carbon from grains due to photo-ablation (Alata et al., 2015); a similar process might thus be active in proto-planetary disk surface layers. Strong C2H emission is seen in a majority of proto-planetary disks spread over all ages (Bergner et al., 2019; Miotello et al., 2019). Furthermore, C2H emission in many disks shows (signs of) structure (Bergin et al., 2016; Bergner et al., 2019). As such, the conditions for bright C2H emission must be set early, persist for the disk lifetime and have some dependence on local disk conditions. The C2H chemical timescales are short; the abundant C2H thus has to be the result of a long-lived (millions of years) equilibrium cycle. The goal of this paper is to elucidate the conditions necessary for this cycle. ## 2 Chemistry of C2H Figure 1: Carbon chemistry network showing the important reaction pathways in the UV dominated layers of proto-planetary disks. Thick arrows show major pathways, thin arrows show minor pathways. Reactions involving oxygen, which always end in CO, are shown in yellow arrows. 
CH4 and C2H2 (light blue background) are species that can only be efficiently destroyed by UV photons. In the absence of UV photons these species can contain a significant amount of carbon. C2H (red background) can only be formed if the cycle is active, that is, when there are sufficient UV photons to release carbon from CH4 and C2H2. If water ice is present, these same photons would release atomic oxygen from the H2O ice, quenching the C2H production in regions with large amounts of water ice. Figure 2: The C2H abundance normalized to the abundance at 10 Myr as a function of time using the Bosman et al. (2018b) gas-grain network. All models that end with a C2H abundance greater than $10^{-10}$ are plotted, with colors denoting the model densities (left) and final C2H abundance (right). All models converge on the final C2H abundance within 1 Myr, and models that have a high C2H abundance at the end of the chemical model converge faster than models with a lower abundance. The ethynyl radical, C2H, is often used to trace the C/O ratio of gas (e.g. Bergin et al., 2016; Cleeves et al., 2018; Miotello et al., 2019). A simplified reaction network for the chemistry that leads to C2H is shown in Fig. 1. The C2H abundance is strongly dependent on the amount of free carbon, that is, carbon not contained in CO. Furthermore, C2H and related hydrocarbons react very quickly, especially with atomic oxygen and the OH radical. Reactions with oxygen-bearing species inevitably lead to CO as one of the products. As such, to produce a high abundance of C2H, a high C/O ratio is necessary (Bergin et al., 2016; Miotello et al., 2019). As Fig. 1 shows, a significant level of UV is also necessary to form C2H, as further exemplified by its use as a PDR tracer (e.g. Jansen et al., 1995; Nagy et al., 2015). The level of UV necessary to create abundant C2H depends on the density of the gas and the C/O ratio. 
For a C/O of 0.4 and the low density ($10^{5}$ cm-3) outer regions of the disk a $F_{\mathrm{UV}}=1\,G_{0}$ is enough, while in denser disk surface layers ($10^{9}$ cm-3) a $F_{\mathrm{UV}}=10^{4}\,G_{0}$ is necessary, where $G_{0}$ is the Habing (1968) flux of $1.6\times 10^{-3}$ erg s-1 cm-2. 1The relation between abundant C2H and the physical conditions will be explored in more detail at the end of this section. This means that C2H is most abundant in regions with an active chemistry which equilibrates quickly. Figure 2 shows the converging behavior of the C2H abundance in a set of chemical point models for conditions relevant to the disk layers with abundant C2H. As UV photons are abundant in the regions where C2H is produced, it is not enough to change the C/O ratio in the gas phase alone, for example by freezing out H2O. Only by also removing the oxygen from the UV-dominated layer, so that neither photo-desorption nor photo-dissociation can replenish oxygen to the gas phase, is it possible to increase the C2H abundance. The chemical models in Fig. 2 are based on the network presented in Bosman et al. (2018b) with photo-dissociation reactions from Heays et al. (2017). The conditions were chosen to encompass the PDR layer of a proto-planetary disk model around a T-Tauri star. The density was varied between $10^{6}$ and $10^{10}$ cm-3 with a UV field between 10 and $10^{5}$ $G_{0}$. Gas and dust temperatures were varied between 30 and 300 K. Finally, the C/O ratio was varied between 0.4 and 2.0. All models that end up with a C2H abundance larger than $10^{-10}$ equilibrate within 1 Myr. Variation of the initial composition in these chemical models shows, as also shown in Bergin et al. (2016), that a high initial abundance of C2H2 or CH4 will initially lead to high, $>10^{-10}$, C2H abundances. 
However, the high C2H abundance is not long-lived unless the conditions are right to sustain a high C2H abundance in kinetic equilibrium, which is reached in $10^{5}$ – $10^{6}$ years (Fig. 2 and Bergin et al., 2016). The physical parameter space in which this happens is larger when the total C/O ratio is larger than 1. Put simply, there must be more carbon available for the chemistry than oxygen. Figure 3: Mass-averaged C2H abundance as a function of the UV field over the density ($F_{\mathrm{UV}}/n_{\mathrm{gas}}$) for a C/O of 0.4 (blue), 1.0 (purple) and 2.0 (red) in the TW Hya model of Bergin et al. (2016) (left axis). The total mass in each of the bins is shown in green (right axis). In regions with $F_{\mathrm{UV}}/n_{\mathrm{gas}}>10^{-5}G_{0}\,\mathrm{cm}^{3}$ the C2H is independent of the C/O ratio; only in regions with lower $F_{\mathrm{UV}}/n_{\mathrm{gas}}$ does the C2H abundance start to strongly depend on the C/O ratio. A smooth disk will naturally have much more mass at lower $F_{\mathrm{UV}}/n_{\mathrm{gas}}$, as the density is higher in these regions. This leads to a strong C2H abundance increase at higher C/O ratios. This can be clearly seen in the C2H abundances in the thermo-chemical models from Bergin et al. (2016). Fig. 3 shows the mass-averaged C2H abundance as a function of $F_{\mathrm{UV}}/n_{\mathrm{gas}}$ for a TW Hya disk model (Bergin et al., 2016). The chemical network used in these thermo-chemical models is different from the one used in the single-point models in Fig. 2. It contains a more restricted grain surface chemistry, and the gas-phase chemistry in RAC2D is based on UMIST06 (Woodall et al., 2007; Du & Bergin, 2014), while the Bosman et al. (2018b) model is based on UMIST12 (McElroy et al., 2013). The reactions important for C2H formation and destruction (see Fig. 1) are the same between the two networks. 
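The mass-averaged abundance binned in $F_{\mathrm{UV}}/n_{\mathrm{gas}}$ that Fig. 3 plots can be sketched with NumPy histograms. This is an illustrative reconstruction with toy inputs, not the RAC2D post-processing itself; the function name and the sample values are ours.

```python
import numpy as np

def mass_averaged_abundance(ratio, abundance, mass, bins):
    """Mass-weighted mean abundance in bins of F_UV/n_gas,
    mirroring what Fig. 3 plots (illustrative sketch)."""
    total_mass, _ = np.histogram(ratio, bins=bins, weights=mass)
    weighted, _ = np.histogram(ratio, bins=bins, weights=mass * abundance)
    with np.errstate(invalid="ignore"):
        return weighted / total_mass  # NaN where a bin contains no mass

# Toy grid cells: F_UV/n_gas [G0 cm^3], C2H abundance, and cell gas mass.
# Note that 1 G0 at 1e5 cm^-3 and 1e4 G0 at 1e9 cm^-3 give the same ratio,
# 1e-5 G0 cm^3, which is why F_UV/n_gas is the natural x-axis.
ratio = np.array([3e-8, 2e-7, 3e-7, 2e-6, 2e-5])
abundance = np.array([1e-10, 1e-8, 5e-9, 1e-9, 1e-10])
mass = np.array([1.0, 10.0, 5.0, 2.0, 1.0])  # arbitrary units
bins = np.logspace(-8, -3, 6)                # decade-wide bins
print(mass_averaged_abundance(ratio, abundance, mass, bins))
```

Because the weighting uses the cell gas mass, the dense (low $F_{\mathrm{UV}}/n_{\mathrm{gas}}$) cells dominate the average, which is the effect the text describes for a smooth disk.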
$F_{\mathrm{UV}}/n_{\mathrm{gas}}$ is a central parameter describing effects on the chemistry and thermal physics within PDR models (Kaufman et al., 1999), as it balances the destruction and heating rates ($\propto F_{\mathrm{UV}}$) against the formation and cooling rates ($\propto n_{\mathrm{gas}}$) of molecules. Throughout the paper $F_{\mathrm{UV}}/n_{\mathrm{gas}}$ will be expressed in units of $G_{0}\,\mathrm{cm}^{3}$. High C2H abundances are found between $F_{\mathrm{UV}}/n_{\mathrm{gas}}=10^{-8}-10^{-3}\,G_{0}\,\mathrm{cm}^{3}$, with conditions between $10^{-8}-10^{-5}\,G_{0}\,\mathrm{cm}^{3}$ being most sensitive to changes in the C/O ratio. As densities in the disk are greater than $10^{6}$ cm-3, these conditions always correspond to an effective radiation field $>0.1\,G_{0}$. This effectively means that the gas responsible for C2H emission is not fully shielded from UV emission. This supports the conclusion of Bergin et al. (2016) that dust evolution is important for C2H formation: dust evolution, especially grain growth and settling, increases the penetration of UV into the disk, increasing the mass fraction of the disk where C2H can be abundant. Fig. 3 also demonstrates that high C/O ratios are needed to elevate the C2H abundance within regions that carry significant mass. In all our models the maximum C2H abundance seems to be around $10^{-8}$, or $\sim$ 0.01% of the total carbon. In the CO/C/C+ transition layer, which is the layer that produces C2H at low C/O ratios, this only allows a C2H column of order $10^{12}\,\mathrm{cm}^{-2}$. Thus an increase in abundance in the deeper, denser layers of the disk is necessary to reproduce the observed high C2H columns of $10^{14}$–$10^{15}$ cm-2, which only happens when the C/O ratio is above 1.0 (e.g. Bergin et al., 2016; Bergner et al., 2019). ## 3 Towards a high C/O ratio ### 3.1 High C/O due to Volatile Depletion? 
It is currently unclear what exactly is causing the loss of oxygen- and carbon-bearing species from the surface and outer regions of protoplanetary disks, with leading theories exploring dust evolution or chemical processing (Krijt et al., 2016; Schwarz et al., 2018; Krijt et al., 2018; Bosman et al., 2018a). The high sublimation temperature of water (150-300 K, depending on density; Bergin & Cleeves, 2018) means that it is found as water ice over the majority of the disk mass, and hence dust grain growth would appear to be the most relevant process (Krijt & Ciesla, 2016). For CO, the much lower sublimation temperature (20–25 K; Schwarz et al., 2016; Pinte et al., 2018; Qi et al., 2019) makes it more difficult for dust evolution to be the sole process, and models have therefore also explored chemical processing of CO into less volatile forms such as CO2 or CH3OH (Furuya & Aikawa, 2014; Schwarz et al., 2018; Bosman et al., 2018a; Dodson-Robinson et al., 2018; Schwarz et al., 2019). It seems, however, that the CO removal process is relatively fast and happens within the first Myr after disk formation (Zhang et al., 2020). This is shorter than the timescales necessary to reduce the CO abundance in chemical models using cosmic-ray driven gas-grain chemistry, which are generally $>1$ Myr, unless elevated cosmic ray ionization rates are assumed (e.g. Schwarz et al., 2018; Bosman et al., 2018b). Furthermore, as C2H is created in a photon-dominated layer, oxygen-carrying species in the ice which are formed from CO destruction, such as H2O and CO2, would be photo-desorbed and dissociated by the same UV that is necessary to form C2H. The released oxygen would destroy the C2H in the gas phase. Chemical conversion of CO into less volatile ices thus does not directly create the high C/O ratio conditions necessary for the high observed C2H abundances. 
A combined chemical-dynamical process would thus be necessary, and chemical and dynamical effects indeed seem to strengthen each other, shortening the CO depletion timescale (Krijt et al., 2020). In these models the total C/O ratio is tracked and a rise in the C/O ratio is seen. The total (gas+ice) C/O does rise to 1.0 between the CO and CH4 snow surfaces, but only in a layer with a low column. In these models the gas-phase C/O ratio is greater than one, but this excess carbon is balanced by CO2 and H2O in the ice. Under the UV conditions necessary for C2H production, this oxygen would be released from the ice, smothering C2H formation. These chemical-dynamical models thus do not naturally create the conditions for strong C2H emission. Furthermore, if the process of CO depletion is linked with the increase of the C/O ratio above 1.0, there should be a clear trend between CO abundance and C2H flux. This, however, is not seen observationally (Miotello et al., 2019). Finally, the C2H emission is structured in both TW Hya and DM Tau. Thus it is likely that the C/O ratio is similarly structured. This is not easily explained by volatile depletion, which seems to be relatively smooth (Zhang et al., 2019; Krijt et al., 2020). The depletion of CO is not directly responsible for the high C/O ratios necessary to explain the C2H abundances. However, the lower elemental abundance of oxygen in the surface layers due to CO depletion does make it easier for a source of additional carbon to elevate the C/O ratio. We therefore propose that the depletion of CO is a necessary pre-condition for the high C/O ratios observed. ### 3.2 Photo-ablation of Refractory Carbon Figure 4: Carbon grain lifetime for TW Hya (left) and DM Tau (right). The density structure and resulting UV field used for the calculation of the grain lifetime are from the models in Bergin et al. (2016). 
Most of the disk surface has a carbon grain lifetime less than 1 Myr, while the region with a refractory carbon lifetime less than 3 Myr spans the entire disk except for the pebble disk. The radial size of the pebble disk is denoted with the vertical black line at the top of the figure. The white lines denote the locations where C2H is abundant, $x_{\ce{C2H}}>10^{-9}$. The black contours encompass the regions with $F_{\mathrm{UV}}/n_{\mathrm{gas}}$ between $10^{-8}$ and $10^{-5}$ $G_{0}\,\mathrm{cm}^{3}$. In this region, an increase in C/O ratio leads to the strongest response in total C2H abundance. The areas where C2H is abundant and where $F_{\mathrm{UV}}/n_{\mathrm{gas}}$ is between $10^{-8}$ and $10^{-5}$ $G_{0}\,\mathrm{cm}^{3}$ do not overlap completely as the models have a varying C/O ratio. A C/O ratio $<$ 1.0 suppresses C2H formation inside 30 and outside 100 AU in TW Hya and between 40 and 200 AU in DM Tau. The initial evolution of the disk leaves the surface layer and outer disk depleted of volatiles and dust. The volatile carbon and oxygen budgets are dominated by CO at an abundance between $10^{-6}$ and $10^{-5}$ with respect to H. The gas thus has a C/H that is 1 to 2 orders of magnitude lower than the volatile ISM value and a C/O close to unity as a result of the CO and H2O depletion episode. Finally, grain growth and settling also allow the UV to penetrate more deeply into the disk. If the excess carbon is not drawn from a volatile source, it thus has to originate from a refractory source. In interstellar space about 50% of the carbon is contained in refractory form, from PAHs and nano-particles to amorphous carbon grains and carbon “goo” coatings on silicate grains (Greenberg et al., 1995; Jones et al., 2013; Chiar et al., 2013; Mishra & Li, 2015). Carbon can be extracted from these refractory forms by interactions with energetic particles or by oxidation (Draine, 1979; Finocchi et al., 1997; Alata et al., 2014, 2015; Anderson et al., 2017). 
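The size of this refractory reservoir can be turned into a quick budget check. Using the numbers quoted above (a refractory C/H of $\sim 10^{-4}$ in the ISM, a depleted gas-phase C/H of $10^{-6}$–$10^{-5}$, and an ISM gas-to-dust ratio of 100), the arithmetic below, a minimal illustrative sketch rather than a calculation from the paper, shows how large the small-grain gas-to-dust ratio can be while still allowing ablation to double the gas-phase carbon:

```python
# Illustrative back-of-the-envelope budget using the numbers quoted in the
# text; function names are ours, not from the paper.

C_H_REFRACTORY_ISM = 1e-4  # ~50% of interstellar carbon is refractory

def available_refractory_c(gas_to_dust):
    """Refractory C/H accessible in a layer where grain growth and settling
    have raised the gas-to-dust ratio above the ISM value of 100."""
    return C_H_REFRACTORY_ISM * 100.0 / gas_to_dust

def max_gas_to_dust(c_h_gas):
    """Largest gas-to-dust ratio that still lets ablation double the
    gas-phase carbon, i.e. raise C/O from ~1 to ~2."""
    return C_H_REFRACTORY_ISM * 100.0 / c_h_gas

# Gas-phase C/H in the depleted surface layers is 1e-6 - 1e-5
# (1-2 orders of magnitude below the volatile ISM value):
print(max_gas_to_dust(1e-5))  # ~1000  -> less depleted disks
print(max_gas_to_dust(1e-6))  # ~10000 -> strongly depleted disks
```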
As the regions of interest here are mostly cold ($<100$ K) and oxygen poor, oxidation should not play a role. As such we will only consider the release of carbon by energetic photons, which can penetrate more deeply into the disk due to the dust evolution. Specifically, we consider the release of carbon from carbonaceous grains due to UV photons; other carbon release mechanisms and carbon reservoirs will be considered in the discussion (Sec. 4.2). Hydrogenated amorphous carbon on the surface of grains can be photo-ablated, releasing the carbon to the gas phase, mostly in the form of CH4 (Alata et al., 2014). The grain lifetime, following Anderson et al. (2017), is given by: $\tau_{\mathrm{C}_{\mathrm{ref}}}=N_{\mathrm{C\,grain}}/\left(\sigma Y_{\mathrm{C}}F_{\mathrm{UV}}\right)$ (1) where $N_{\mathrm{C\,grain}}=\rho_{\mathrm{C}_{\mathrm{ref}}}\frac{4}{3}\pi a^{3}/m_{\mathrm{C}}$ (2) is the number of carbon atoms per grain, $\sigma$ is the geometric cross section of the grain, $Y_{\mathrm{C}}=8\times 10^{-4}\,\mathrm{photon}^{-1}$ is the carbon sputtering yield (Alata et al., 2014, 2015), $F_{\mathrm{UV}}$ is the UV field, $10^{8}\times G_{0}\,\mathrm{photons}\,\mathrm{cm}^{-2}\,\mathrm{s}^{-1}$, $\rho_{\mathrm{C}_{\mathrm{ref}}}=2.24\,\mathrm{g}\,\mathrm{cm}^{-3}$ is the density of carbonaceous grains, $a=0.1\,\mu\mathrm{m}$ is the grain radius and $m_{\mathrm{C}}$ is the mass of a carbon atom. This carbonaceous grain lifetime is calculated for the TW Hya and DM Tau disks using the model structures from Bergin et al. (2016) and shown in Fig. 4. The carbon grain lifetime is significantly less than a few million years for most of the C2H emitting region. This is less than the expected disk lifetime, or even the age of the 5-10 Myr old TW Hya disk (Weinberger et al., 2013). Carbonaceous grains can thus be processed enough to remove a significant fraction of the carbon from the grains, enriching the gas.
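Equation (1) is straightforward to evaluate numerically with the constants quoted above; the following is a minimal sketch, where the choice of $G_{0}$ is an illustrative assumption (in the paper the UV field comes from the Bergin et al. (2016) disk models rather than a single value):

```python
# Sketch of the carbonaceous-grain photo-ablation lifetime of Eq. (1),
# using the constants quoted in the text.
import math

M_C = 12.011 * 1.6605e-24      # mass of a carbon atom [g]
RHO_C = 2.24                   # carbonaceous grain density [g cm^-3]
A_GRAIN = 0.1e-4               # grain radius, 0.1 micron [cm]
Y_C = 8e-4                     # carbon sputtering yield [photon^-1]
SEC_PER_MYR = 3.156e13

def grain_lifetime_myr(g0):
    """Carbon grain lifetime tau = N_C / (sigma * Y_C * F_UV), in Myr."""
    n_c = RHO_C * (4.0 / 3.0) * math.pi * A_GRAIN**3 / M_C  # atoms per grain
    sigma = math.pi * A_GRAIN**2                            # cross section [cm^2]
    f_uv = 1e8 * g0                                         # [photons cm^-2 s^-1]
    return n_c / (sigma * Y_C * f_uv) / SEC_PER_MYR

print(grain_lifetime_myr(1.0))  # ~0.6 Myr for a G0 = 1 field
```

For a $G_{0}=1$ field this gives roughly 0.6 Myr, consistent with the sub-Myr lifetimes over most of the disk surface in Fig. 4; the lifetime scales inversely with $G_{0}$.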
As the carbon grain lifetimes are short, a single enrichment event is expected, in contrast to a slow continuous release of carbon over the disk's lifetime. It is thus qualitatively possible to enrich the gas with carbon of refractory origin, but what is necessary to quantitatively match the extreme C/O ratios (C/O $\approx 2$)? In the ISM grains contain $\sim$50% of the total carbon, $10^{-4}$ w.r.t. H (Draine, 2003; Mishra & Li, 2015). However, the grains have grown and settled, lowering the abundance of small grains (those grains that are well coupled to the gas) and increasing the gas-to-dust ratio above 100 in the C2H emitting layers. The available carbon abundance in refractory form is thus $10^{-4}\times 100/$gas-to-dust ratio. To elevate the C/O ratio from the depleted state with a C/O of 1.0 to 2.0 it is thus necessary to add as much carbon as there already is in the gas phase, which is between $10^{-6}$ and $10^{-5}$ w.r.t. H. This requires ablation of 1–10% of all the refractory carbon originally in the disk surface layers. To have enough grains to provide the carbon it is thus necessary that the gas-to-dust ratio is $\leq 1/\left(100\times\left(C/H\right)_{\mathrm{gas}}\right)$ in the layers where carbon is efficiently photo-ablated. For strongly volatile-depleted disks the surface layer gas-to-dust ratio should thus be less than $10^{4}$, while for less depleted disks a lower gas-to-dust ratio (around 1000) is necessary to be able to provide the required carbon to the gas. The actual grain abundance in the surface layers and outer regions of the disk is hard to constrain. SED fitting models generally use a gas-to-dust ratio of 1000-10000 in the surface layers (e.g. Andrews et al., 2013). The TW Hya scattered light model of van Boekel et al. (2017) also has a gas-to-dust ratio of 10000 in the surface layers, consistent with the SED models; this is a factor of five higher than the gas-to-dust ratio in the thermo-chemical model of Bergin et al.
(2016), which also matches a host of other gas tracers (Du et al., 2015). The DM Tau model of Bergin et al. (2016) even has a gas-to-dust ratio of 125 in the surface layers, enough to elevate the C/O ratio to approximately 2.0 even if there is very little carbon and oxygen depletion. These models, while highly degenerate, are at least in the right ballpark to provide enough grain material in these regions to elevate the C/O ratio to the required values. Settling and drift models show a larger range of dust depletions in the outer disk. Models with a low $\alpha$ ($<10^{-3}$), as inferred from observations (Pinte et al., 2016; Flaherty et al., 2017; Teague et al., 2018), predict more than a factor 10 depletion of the small dust grains in the surface layers, in general yielding less dust than required by the SED models (e.g. Facchini et al., 2017; Krijt et al., 2018; Woitke et al., 2019). All these things considered, it is very hard to say what is, and what is not, enough turbulence to provide the small grains, and thus the excess carbon, necessary for the elevated C/O ratios. A disk that has a close-to-ISM O/H ratio in the surface layers needs a lot of small dust, consistent with no settling, and thus very high turbulence ($\alpha>10^{-2}$). Very oxygen-depleted disks, such as TW Hya, however, need only 1% of the original dust, which for that disk is consistent with an $\alpha$ of $10^{-4}$ (van Boekel et al., 2017). We note, however, that only a single injection of excess carbon from the small grains is needed. So even if the current level of small grains, or by extension, the current turbulent $\alpha$, is not enough, it is possible that the high C/O ratios are a result of a previous stage of the disk evolution in which there were enough carbonaceous grains in the disk atmosphere. ## 4 Discussion ### 4.1 Structure in C2H Observations of C2H show that the C2H is structured.
High-resolution observations of DM Tau and TW Hya show an emission ring outside of the pebble disk, while lower resolution data from Bergner et al. (2019) and Miotello et al. (2019) hint at structure below the observed resolution in many of the disks observed to date. A typical disk model with a tapered power-law surface density structure and a constant gas-to-dust ratio will lead to very smooth C2H surface density profiles outside of $\sim$ 20 AU (e.g. Bergin et al., 2016, Fig. 7). The structures that are visible are thus due to a combination of radial changes in the C/O ratio and changes in the UV penetration. As the UV penetration can only significantly change the C2H abundance for elevated C/O ratios, changes in the C/O ratio are the expected dominant driver of the C2H structure. If the C/O ratio is elevated due to the photo-ablation of carbonaceous grains, then the C/O ratio is linked to the (historical) UV penetration and small grain abundances. #### 4.1.1 Rings outside of the pebble disk Outside of the pebble disk, the radiation field is a combination of the interstellar radiation field, which can reach a large volume of the outer disk, and stellar UV photons that scatter from the disk surface downward. The radiation field is typically between 0.1 and $10\times G_{0}$. As the external UV is only barely attenuated in these disk regions, changing the amount of small dust does little to change the UV field in the outer disk. As such the increase in C2H column must be due to a change in the C/O ratio outside of the pebble disk. The region outside the pebble disk is a natural place for elevated C/O ratios to occur due to photo-ablation. Grain coagulation is less effective at larger disk radii as densities are lower (e.g. Brauer et al., 2008). With less growth, a smaller fraction of grains will have settled, leaving a larger reservoir of small grains ($a<0.1\mu$m) exposed to UV photons.
On top of that, vertical mixing is expected to concentrate volatiles in ices on pebbles near the disk mid-plane; as there is a lack of pebbles near the mid-plane outside the pebble disk, this volatile sink is not present there, and the excess carbon can potentially stay in the gas phase for the entire disk lifetime. Of course, this does depend on timescales for radial motions and gas loss via winds. However, beyond the pebble disk it is clear that the depletion cycle will not be activated. #### 4.1.2 Structures in the Pebble Disk Figure 5: Gas (scaled down by a factor 100) and small dust surface densities (top), and gas surface density that has a $F_{\mathrm{UV}}/n_{\mathrm{gas}}$ between $10^{-5}$ and $10^{-3}$ (middle), and $10^{-8}$ and $10^{-5}$ $G_{0}\,\mathrm{cm}^{3}$ (bottom) for the 100 AU gap of the AS 209 model of Alarcon et al. (2020). The small dust surface density is varied in the gap, with a gap that is shallower in the small dust than in the gas (blue), has equal depth in the gas and the small dust (purple), and is deeper in the small dust than in the gas (red). The surface density in the high $F_{\mathrm{UV}}/n_{\mathrm{gas}}$ layer follows the radial profile of the gas density and does not depend on the amount of small dust. The lower $F_{\mathrm{UV}}/n_{\mathrm{gas}}$ layer, however, is dependent on the level of small dust depletion; this layer contains more mass, and thus is brighter in C2H, at lower dust surface densities. Axisymmetric structures in and above the pebble disks have been observed in many disks in the (sub-)millimeter (e.g. Andrews et al., 2018) and scattered light (e.g. Garufi et al., 2017). These features are attributed to changes in the physical conditions that impact the distribution of dust and thus the penetration of UV photons, which directly affects the C2H abundance. Locations of prime interest in this regard would be planet-created gaps. Alarcon et al.
(2020) looked at the chemistry in the gaps of AS 209 and found little to no effect of the inclusion of a gap on the C2H columns and fluxes for a solar C/O ratio. These models only considered a constant gas-to-small-dust ratio in the gap, while increasing this ratio would enhance the UV penetration and increase the amount of mass available at a given $F_{\mathrm{UV}}/n_{\mathrm{gas}}$. To check the effects of a possible gap on the distribution of $F_{\mathrm{UV}}/n_{\mathrm{gas}}$ in the gap region, three models were run, based on the AS 209 model of Alarcon et al. (2020) (see their Table 1 for general model parameters). The model includes a gap, centered around 100 AU. The gap is modeled as a Gaussian with a FWHM of 16 AU and has a factor 12 depletion in the gas and large dust. For the small dust we consider here three different distributions: a model in which the small dust is depressed by the same value as the gas, keeping a constant gas-to-dust ratio in the gap; a model in which the dust is only depleted by 20% w.r.t. a smooth model, giving a factor $\sim$ 10 lower gas-to-dust ratio in the gap center; and a model with a small dust depression that is 10 times stronger than the depression in the gas surface density, leading to a higher gas-to-dust ratio in the gap. The gas and small dust surface densities, as well as the mass that is in the strongly irradiated and mildly irradiated layers in the vicinity of the gap, are shown in Fig. 5. Interestingly, changing the gas-to-small-dust ratio does not change the amount of mass in the strongly irradiated layer, $F_{\mathrm{UV}}/n_{\mathrm{gas}}=10^{-5}-10^{-3}$ $G_{0}\,\mathrm{cm}^{3}$. Figure 6: Sketch of the physical processes that set the distribution of C2H in the disk. In the inner disk vertical mixing will lock any excess carbon in the mid-plane. We predict that in the disk gap, the increased UV field, combined with the lack of large grains in the mid-plane, will lead to a ring of C2H emission.
Similarly, the lack of large grains outside of the pebble disk leads to the formation of a large reservoir of long-lived carbon-rich gas, creating a C2H emission ring outside of the pebble disk, such as observed in TW Hya and DM Tau (Bergin et al., 2016). The mass in the mildly irradiated layer, $F_{\mathrm{UV}}/n_{\mathrm{gas}}=10^{-8}-10^{-5}$ $G_{0}\,\mathrm{cm}^{3}$, does show a dependence on the small-dust surface density. Thus gaps with C/O $>$ 1.0 will have an increased column density as long as there is a depletion of the small dust in the gap. We note, however, that a factor 5 depletion in the small dust only leads to an increase of a factor two in the available mass in these models. This indicates that large variations in the C2H column are more likely due to changes in C/O ratios, but that factor-of-a-few changes in C2H column can still be directly due to changes in UV penetration. Gaps have an increased UV field in deeper layers of the disk. Aside from the effect that this has on the chemistry directly through changing the distribution of $F_{\mathrm{UV}}/n_{\mathrm{gas}}$, it also exposes more, previously unprocessed, dust grains. Photo-desorption and -ablation can then release species from the dust surfaces into the gas. If the grains are water-ice poor, this can increase the C/O ratio, and thus increase the C2H abundance. Meridional flows may also be efficient in bringing unprocessed material into regions of high UV flux (Morbidelli et al., 2014; Teague et al., 2019). This could lead to C2H rings around the millimeter dust gaps that have been imaged. Hints of this can be seen in the DM Tau disk (Bergin et al., 2016), although conversely, the rings in TW Hya do not show corresponding C2H rings. The C2H emission in Bergner et al. (2019) does show some hints of structure; however, the resolution of the data is not enough to correlate it with the location of millimeter structure.
Further high-resolution observations of C2H are thus needed to check for a correlation between millimeter gap and C2H ring locations in a large sample of disks. ### 4.2 Source of Refractory Carbon So far it has been assumed that the source of the excess carbon is on grains in the form of hydrogenated amorphous carbon. This is, however, not the only source of refractory carbon in the disk. Up to 20% of the refractory carbon in the ISM is in the form of PAHs (Tielens, 2008). These PAHs could thus provide a significant amount of carbon to the gas, if they are present and can be efficiently destroyed. It is clear from observations that PAHs are not present above the 10 $\mu$m continuum disk photosphere (Geers et al., 2007), which means that PAHs are not present in the C2H emitting layers. If PAHs act like a large molecule, with corresponding freeze-out behavior, then they should be depleted together with the volatiles and sequestered into the mid-plane. If this is not the case, then the PAH absence needs to be explained by the destruction of PAHs. UV photons, especially in T-Tauri disks, have difficulties destroying PAHs (Visser et al., 2007), so they can be ruled out as a reason for the lack of PAHs in the disk surface. X-rays are, however, able to destroy PAHs around T-Tauri stars (e.g. Siebenmorgen & Krügel, 2010; Siebenmorgen & Heymann, 2012). In this case, the disk surface layers can easily be enriched in carbon for T-Tauri disks, but for the disks around the X-ray weaker Herbig Ae/Be stars this might not be the case. At the same time, a large reservoir of gas outside the pebble disk near the mid-plane is shielded from X-rays (Rab et al., 2018). This region, however, can contribute significantly to the C2H emission rings outside of the sub-millimeter continuum. It is thus unlikely that PAH destruction by X-rays can explain all the necessary excess carbon.
As such, to elevate the C/O ratio it is critical that there is enough carbonaceous material in small grains in the outer disk. Collision experiments with carbonaceous grains are scarce; however, there is evidence that carbonaceous grains stick less efficiently than silicate grains at temperatures below 220 K (Kouchi et al., 2002), although no experiments at cryogenic temperatures have been performed. If the sticking of carbonaceous material is lower than that of silicate materials, this would naturally lead to a high fraction of small carbonaceous grains in the surface layers of the disk. This could lead to an observable increase of carbonaceous material in the lower density regions of proto-planetary disks and could even help explain the low carbon content of carbonaceous chondrites. ## 5 Conclusions We have studied the chemistry of C2H in the photon-dominated layers of proto-planetary disks. To be able to explain the observed C2H we find a specific set of conditions that need to be satisfied. These are also shown schematically in Fig. 6. 1. The short chemical timescales of C2H necessitate a gas-phase equilibrium cycle. 2. As previously concluded by, for example, Bergin et al. (2015) and Miotello et al. (2019), this cycle is active in disk regions with a C/O ratio $>1.5$. These high C/O ratios allow the C2H emitting layer to extend deeper, from just the top $F_{\mathrm{UV}}/n_{\mathrm{gas}}=10^{-5}-10^{-3}$ layer at C/O $<$ 1.0, down to the $F_{\mathrm{UV}}/n_{\mathrm{gas}}=10^{-8}-10^{-3}$ layer. 3. To replicate the elevated C/O ratio we propose the photo-ablation of refractory carbon carried by carbonaceous grains as the source of the excess carbon in the gas. As significant amounts of oxygen carried by CO and H2O are no longer present in the disk surface layers, only 1–10% of the refractory carbon is necessary to increase the C/O ratio above 1.5. Based on the existing laboratory data the timescales for photo-ablation are $<1$ Myr for the C2H emitting area. 4.
These processes reach a maximum in areas where the small grain population dominates in the mid-plane, which naturally occurs at the edges of pebble disks, as well as possibly at the locations of the millimeter gaps, leading to the formation of long-lived rings rich in hydrocarbons. Chemical models coupled with gas-grain dynamics can be used to test the efficiency of carbon photo-ablation in increasing the C/O ratios. The vertical motion of the grains and gas needs to be accounted for to assess the fraction of grains that can lose their carbon during the disk lifetime and the longevity of the high C/O ratios. These models are outside the scope of this paper, but will be considered in follow-up studies. Furthermore, future infrared scattered light and absorption studies will be critical in constraining the abundance as well as the composition of the small grains. Especially absorption studies of edge-on protoplanetary disks should show the absence or presence of the C-H stretch of hydrogenated amorphous carbon and PAHs around 3 $\mu$m. ADB and EAB acknowledge support from NSF Grant #1907653 and NASA grant XRP 80NSSC20K0259. K. Z. acknowledges the support of NASA through Hubble Fellowship grant HST-HF2-51401.001 awarded by the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., for NASA, under contract NAS5-26555. ## References * Alarcon et al. (2020) Alarcon, F., Teague, R., Zhang, K., Bergin, E., & Barraza-Alfaro, M. 2020, arXiv e-prints, arXiv:2010.08000 * Alata et al. (2014) Alata, I., Cruz-Diaz, G. A., Muñoz Caro, G. M., & Dartois, E. 2014, A&A, 569, A119 * Alata et al. (2015) Alata, I., Jallat, A., Gavilan, L., et al. 2015, A&A, 584, A123 * Anderson et al. (2017) Anderson, D. E., Bergin, E. A., Blake, G. A., et al. 2017, ApJ, 845, 13 * Anderson et al. (2019) Anderson, D. E., Blake, G. A., Bergin, E. A., et al. 2019, ApJ, 881, 127 * Andrews et al. (2018) Andrews, S. M., Huang, J., Pérez, L.
M., et al. 2018, ApJ, 869, L41 * Andrews et al. (2013) Andrews, S. M., Rosenfeld, K. A., Kraus, A. L., & Wilner, D. J. 2013, ApJ, 771, 129 * Ansdell et al. (2016) Ansdell, M., Williams, J. P., van der Marel, N., et al. 2016, ApJ, 828, 46 * Bergin et al. (2015) Bergin, E. A., Blake, G. A., Ciesla, F., Hirschmann, M. M., & Li, J. 2015, Proceedings of the National Academy of Science, 112, 8965 * Bergin & Cleeves (2018) Bergin, E. A. & Cleeves, L. I. 2018, Chemistry During the Gas-Rich Stage of Planet Formation, ed. H. J. Deeg & J. A. Belmonte, 137 * Bergin et al. (2013) Bergin, E. A., Cleeves, L. I., Gorti, U., et al. 2013, Nature, 493, 644 * Bergin et al. (2016) Bergin, E. A., Du, F., Cleeves, L. I., et al. 2016, ApJ, 831, 101 * Bergin et al. (2010) Bergin, E. A., Hogerheijde, M. R., Brinch, C., et al. 2010, A&A, 521, L33 * Bergner et al. (2020) Bergner, J. B., Oberg, K. I., Bergin, E. A., et al. 2020, arXiv e-prints, arXiv:2006.12584 * Bergner et al. (2019) Bergner, J. B., Öberg, K. I., Bergin, E. A., et al. 2019, ApJ, 876, 25 * Bosman et al. (2018a) Bosman, A. D., Tielens, A. G. G. M., & van Dishoeck, E. F. 2018a, A&A, 611, A80 * Bosman et al. (2018b) Bosman, A. D., Walsh, C., & van Dishoeck, E. F. 2018b, A&A, 618, A182 * Brauer et al. (2008) Brauer, F., Dullemond, C. P., & Henning, T. 2008, A&A, 480, 859 * Chiar et al. (2013) Chiar, J. E., Tielens, A. G. G. M., Adamson, A. J., & Ricca, A. 2013, ApJ, 770, 78 * Cleeves et al. (2018) Cleeves, L. I., Öberg, K. I., Wilner, D. J., et al. 2018, ApJ, 865, 155 * Dodson-Robinson et al. (2018) Dodson-Robinson, S. E., Evans, Neal J., I., Ramos, A., Yu, M., & Willacy, K. 2018, ApJ, 868, L37 * Draine (1979) Draine, B. T. 1979, ApJ, 230, 106 * Draine (2003) Draine, B. T. 2003, ApJ, 598, 1017 * Du & Bergin (2014) Du, F. & Bergin, E. A. 2014, ApJ, 792, 2 * Du et al. (2017) Du, F., Bergin, E. A., Hogerheijde, M., et al. 2017, ApJ, 842, 98 * Du et al. (2015) Du, F., Bergin, E. A., & Hogerheijde, M. R. 
2015, ApJ, 807, L32 * Facchini et al. (2017) Facchini, S., Birnstiel, T., Bruderer, S., & van Dishoeck, E. F. 2017, A&A, 605, A16 * Favre et al. (2013) Favre, C., Cleeves, L. I., Bergin, E. A., Qi, C., & Blake, G. A. 2013, ApJ, 776, L38 * Finocchi et al. (1997) Finocchi, F., Gail, H. P., & Duschl, W. J. 1997, A&A, 325, 1264 * Flaherty et al. (2017) Flaherty, K. M., Hughes, A. M., Rose, S. C., et al. 2017, ApJ, 843, 150 * Furuya & Aikawa (2014) Furuya, K. & Aikawa, Y. 2014, ApJ, 790, 97 * Garufi et al. (2017) Garufi, A., Meeus, G., Benisty, M., et al. 2017, A&A, 603, A21 * Geers et al. (2007) Geers, V. C., van Dishoeck, E. F., Visser, R., et al. 2007, A&A, 476, 279 * Greenberg et al. (1995) Greenberg, J. M., Li, A., Mendoza-Gomez, C. X., et al. 1995, ApJ, 455, L177 * Guilloteau et al. (2016) Guilloteau, S., Reboussin, L., Dutrey, A., et al. 2016, A&A, 592, A124 * Guzmán et al. (2015) Guzmán, V. V., Pety, J., Goicoechea, J. R., et al. 2015, ApJ, 800, L33 * Habing (1968) Habing, H. J. 1968, Bull. Astron. Inst. Netherlands, 19, 421 * Heays et al. (2017) Heays, A. N., Bosman, A. D., & van Dishoeck, E. F. 2017, A&A, 602, A105 * Hogerheijde et al. (2011) Hogerheijde, M. R., Bergin, E. A., Brinch, C., et al. 2011, Science, 334, 338 * Hunter (2007) Hunter, J. D. 2007, Computing in Science & Engineering, 9, 90 * Jansen et al. (1995) Jansen, D. J., Spaans, M., Hogerheijde, M. R., & van Dishoeck, E. F. 1995, A&A, 303, 541 * Jones et al. (2013) Jones, A. P., Fanciullo, L., Köhler, M., et al. 2013, A&A, 558, A62 * Kama et al. (2016) Kama, M., Bruderer, S., Carney, M., et al. 2016, A&A, 588, A108 * Kama et al. (2015) Kama, M., Folsom, C. P., & Pinilla, P. 2015, A&A, 582, L10 * Kamp et al. (2013) Kamp, I., Thi, W. F., Meeus, G., et al. 2013, A&A, 559, A24 * Kastner et al. (2014) Kastner, J. H., Hily-Blant, P., Rodriguez, D. R., Punzi, K., & Forveille, T. 2014, ApJ, 793, 55 * Kaufman et al. (1999) Kaufman, M. J., Wolfire, M. G., Hollenbach, D. J., & Luhman, M. L. 
1999, ApJ, 527, 795 * Kouchi et al. (2002) Kouchi, A., Kudo, T., Nakano, H., et al. 2002, ApJ, 566, L121 * Krijt et al. (2020) Krijt, S., Bosman, A. D., Zhang, K., et al. 2020, ApJ, 899, 134 * Krijt & Ciesla (2016) Krijt, S. & Ciesla, F. J. 2016, ApJ, 822, 111 * Krijt et al. (2016) Krijt, S., Ciesla, F. J., & Bergin, E. A. 2016, ApJ, 833, 285 * Krijt et al. (2018) Krijt, S., Schwarz, K. R., Bergin, E. A., & Ciesla, F. J. 2018, ApJ, 864, 78 * Le Gal et al. (2019) Le Gal, R., Brady, M. T., Öberg, K. I., Roueff, E., & Le Petit, F. 2019, ApJ, 886, 86 * Manara et al. (2016) Manara, C. F., Rosotti, G., Testi, L., et al. 2016, A&A, 591, L3 * McClure (2019) McClure, M. K. 2019, A&A, 632, A32 * McClure et al. (2016) McClure, M. K., Bergin, E. A., Cleeves, L. I., et al. 2016, ApJ, 831, 167 * McElroy et al. (2013) McElroy, D., Walsh, C., Markwick, A. J., et al. 2013, A&A, 550, A36 * Miotello et al. (2019) Miotello, A., Facchini, S., van Dishoeck, E. F., et al. 2019, A&A, 631, A69 * Miotello et al. (2017) Miotello, A., van Dishoeck, E. F., Williams, J. P., et al. 2017, A&A, 599, A113 * Mishra & Li (2015) Mishra, A. & Li, A. 2015, ApJ, 809, 120 * Morbidelli et al. (2014) Morbidelli, A., Szulágyi, J., Crida, A., et al. 2014, Icarus, 232, 266 * Nagy et al. (2015) Nagy, Z., Ossenkopf, V., Van der Tak, F. F. S., et al. 2015, A&A, 578, A124 * Pinte et al. (2016) Pinte, C., Dent, W. R. F., Ménard, F., et al. 2016, ApJ, 816, 25 * Pinte et al. (2018) Pinte, C., Ménard, F., Duchêne, G., et al. 2018, A&A, 609, A47 * Qi et al. (2019) Qi, C., Öberg, K. I., Espaillat, C. C., et al. 2019, ApJ, 882, 160 * Rab et al. (2018) Rab, C., Güdel, M., Woitke, P., et al. 2018, A&A, 609, A91 * Schwarz et al. (2016) Schwarz, K. R., Bergin, E. A., Cleeves, L. I., et al. 2016, ApJ, 823, 91 * Schwarz et al. (2018) Schwarz, K. R., Bergin, E. A., Cleeves, L. I., et al. 2018, ApJ, 856, 85 * Schwarz et al. (2019) Schwarz, K. R., Bergin, E. A., Cleeves, L. I., et al. 
2019, ApJ, 877, 131 * Siebenmorgen & Heymann (2012) Siebenmorgen, R. & Heymann, F. 2012, A&A, 543, A25 * Siebenmorgen & Krügel (2010) Siebenmorgen, R. & Krügel, E. 2010, A&A, 511, A6 * Teague et al. (2019) Teague, R., Bae, J., & Bergin, E. A. 2019, Nature, 574, 378 * Teague et al. (2018) Teague, R., Henning, T., Guilloteau, S., et al. 2018, ApJ, 864, 133 * Tielens (2008) Tielens, A. G. G. M. 2008, ARA&A, 46, 289 * Trapman et al. (2017) Trapman, L., Miotello, A., Kama, M., van Dishoeck, E. F., & Bruderer, S. 2017, A&A, 605, A69 * van Boekel et al. (2017) van Boekel, R., Henning, T., Menu, J., et al. 2017, ApJ, 837, 132 * Van Der Walt et al. (2011) Van Der Walt, S., Colbert, S. C., & Varoquaux, G. 2011, Computing in Science & Engineering, 13, 22 * van ’t Hoff et al. (2017) van ’t Hoff, M. L. R., Walsh, C., Kama, M., Facchini, S., & van Dishoeck, E. F. 2017, A&A, 599, A101 * Virtanen et al. (2020) Virtanen, P., Gommers, R., Oliphant, T. E., et al. 2020, Nature Methods, 17, 261 * Visser et al. (2007) Visser, R., Geers, V. C., Dullemond, C. P., et al. 2007, A&A, 466, 229 * Weinberger et al. (2013) Weinberger, A. J., Anglada-Escudé, G., & Boss, A. P. 2013, ApJ, 762, 118 * Woitke et al. (2019) Woitke, P., Kamp, I., Antonellini, S., et al. 2019, PASP, 131, 064301 * Woodall et al. (2007) Woodall, J., Agúndez, M., Markwick-Kemper, A. J., & Millar, T. J. 2007, A&A, 466, 1197 * Zhang et al. (2019) Zhang, K., Bergin, E. A., Schwarz, K., Krijt, S., & Ciesla, F. 2019, ApJ, 883, 98 * Zhang et al. (2020) Zhang, K., Schwarz, K. R., & Bergin, E. A. 2020, ApJ, 891, L17
# Tree-based Node Aggregation in Sparse Graphical Models Ines Wilmsa and Jacob Bienb a Department of Quantitative Economics, Maastricht University, Maastricht, The Netherlands b Data Sciences and Operations, University of Southern California, Los Angeles, CA, USA #### Abstract. High-dimensional graphical models are often estimated using regularization that is aimed at reducing the number of edges in a network. In this work, we show how even simpler networks can be produced by aggregating the nodes of the graphical model. We develop a new convex regularized method, called the tree-aggregated graphical lasso or tag-lasso, that estimates graphical models that are both edge-sparse and node-aggregated. The aggregation is performed in a data-driven fashion by leveraging side information in the form of a tree that encodes node similarity and facilitates the interpretation of the resulting aggregated nodes. We provide an efficient implementation of the tag-lasso by using the locally adaptive alternating direction method of multipliers and illustrate our proposal’s practical advantages in simulation and in applications in finance and biology. #### Keywords. aggregation, graphical model, high-dimensionality, regularization, sparsity ## 1 Introduction Graphical models are highly useful for understanding the relationships among large numbers of variables. Yet, estimating graphical models with many more parameters than observations is challenging, which has led to an active area of research on high-dimensional inverse covariance estimation. Numerous methods attempt to curb the curse of dimensionality through regularized estimation procedures (e.g., Meinshausen and Bühlmann, 2006; Yuan and Lin, 2007; Banerjee et al., 2008; Friedman et al., 2008; Rothman et al., 2008; Peng et al., 2009; Yuan, 2010; Cai et al., 2011, 2016). Such methods aim for sparsity in the inverse covariance matrix, which corresponds to graphical models with only a small number of edges.
A common method for estimating sparse graphical models is the graphical lasso (glasso) (Yuan and Lin, 2007; Banerjee et al., 2008; Rothman et al., 2008; Friedman et al., 2008), which adds an $\ell_{1}$-penalty to the negative log-likelihood of a sample of multivariate normal random variables. While this and many other methods focus on the edges for dimension reduction, far fewer contributions (e.g., Tan et al., 2015; Eisenach et al., 2020; Pircalabelu and Claeskens, 2020) focus on the nodes as a guiding principle for dimension reduction. Nonetheless, node dimension reduction is becoming increasingly relevant in many areas where data are being measured at finer levels of granularity. For instance, in biology, modern high-throughput sequencing technologies provide low-cost microbiome data at high resolution; in neuroscience, brain activity in hundreds of regions of interest can be measured; in finance, data at the individual company level at short time scales are routinely analyzed; and in marketing, joint purchasing data on every stock-keeping-unit (product) is recorded. The fine-grained nature of this data brings new challenges. The sheer number of fine-grained, often noisy, variables makes it difficult to detect dependencies. Moreover, there can be a mismatch between the resolution of the measurement and the resolution at which natural meaningful interpretations can be made. The purpose of an analysis may be to draw conclusions about entities at a coarser level of resolution than happened to be measured. Because of this mismatch, practitioners are sometimes forced to devise ad hoc post-processing steps involving, for example, coloring the nodes based on some classification of them into groups in an attempt to make the structure of an estimated graphical model more interpretable and the domain- specific takeaways more apparent (e.g., Millington and Niranjan, 2019). 
Figure 1: Top: True full graph and precision matrix $\boldsymbol{\Omega}$ with corresponding aggregated graph and precision matrix. Middle: Estimation output of the tag-lasso. Bottom: Estimation output of the glasso. Our solution to this problem is to incorporate the side information about the relationship between nodes directly into the estimation procedure. In our framework, this side information is encoded as a tree whose leaves correspond to the measured variables. Such tree structures are readily available in many domains (e.g., taxonomies in biology and hierarchical classifications of jobs, companies, and products in business) and are well suited to expressing multi-resolution structure that is present in many problems. We propose a new convex regularization procedure, called tag-lasso, which stands for tree-aggregated graphical lasso. This procedure combines node (or variable) aggregation with edge-sparsity. The tree-based aggregation serves both to amplify the signal of similar, low-level variables and to render a graphical model involving nodes at a level of scale that is relevant and interpretable. The edge-sparsity encourages the graphical model involving the aggregated nodes to have a sparse network structure. Our procedure is based on a tree-based parameterization strategy that translates the node aggregation problem into a sparse modeling problem, following an approach previously introduced in the regression setting (Yan and Bien, 2020). In Figure 1 (to be discussed more thoroughly in Section 4), we see that the tag-lasso is able to recover the aggregated, sparse graph structure. By doing so, it yields a more accurate estimate of the true graph, and its output is easier to interpret than the full, noisy graph obtained by the glasso. The rest of the paper is organized as follows. Section 2 introduces the tree-based parameterization structure for nodewise aggregation in graphical models.
Section 3 introduces the tag-lasso estimator, formulated as a solution to a convex optimization problem, for which we derive an efficient algorithm. Section 4 presents the results of a simulation study. Section 5 illustrates the practical advantages of the tag-lasso on financial and microbiome data sets. Section 6 concludes.

## 2 Node Aggregation in Penalized Graphical Models

Let $\bf S$ be the empirical covariance matrix based on $n$ multivariate normal observations of dimension $p$, with mean vector $\boldsymbol{\mu}$ and covariance matrix $\boldsymbol{\Sigma}$. The target of estimation is the precision matrix $\boldsymbol{\Omega}=\boldsymbol{\Sigma}^{-1}$, whose sparsity pattern provides the graph structure of the Gaussian graphical model, since $\Omega_{jk}=0$ is equivalent to variables $j$ and $k$ being conditionally independent given all other variables. To estimate the precision matrix, it is common to use a convex penalization method of the form $\widehat{\boldsymbol{\Omega}}=\underset{\boldsymbol{\Omega}}{\operatorname{argmin}}\\{-\text{logdet}(\boldsymbol{\Omega})+\text{tr}({\bf S}\boldsymbol{\Omega})+\lambda\mathcal{P}(\boldsymbol{\Omega})\ \ \text{s.t.}\ \boldsymbol{\Omega}=\boldsymbol{\Omega}^{\top},\boldsymbol{\Omega}\succ 0\\},$ (1) where $\text{tr}(\cdot)$ denotes the trace, $\mathcal{P}(\cdot)$ is a convex penalty function, and $\lambda>0$ is a tuning parameter controlling the degree of penalization. Choosing the $\ell_{1}$-norm $\mathcal{P}(\boldsymbol{\Omega})=\|\boldsymbol{\Omega}^{-\text{diag}}\|_{1},$ (2) where $\boldsymbol{\Omega}^{-\text{diag}}$ contains the unique off-diagonal elements, yields the graphical lasso (glasso) (Friedman et al., 2008; Yuan and Lin, 2007; Banerjee et al., 2008; Rothman et al., 2008). It encourages $\widehat{\boldsymbol{\Omega}}$ to be sparse, corresponding to a graphical model with few edges.
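To make the objective in (1)–(2) concrete, the following sketch evaluates the glasso objective at a candidate precision matrix. It is an illustration in NumPy, not the authors' code; the helper name is our own.

```python
import numpy as np

def glasso_objective(Omega, S, lam):
    """Objective of (1) with the l1-penalty (2):
    -logdet(Omega) + tr(S @ Omega) + lam * sum_{i<j} |Omega_ij|."""
    sign, logdet = np.linalg.slogdet(Omega)
    if sign <= 0:
        return np.inf  # outside the feasible cone Omega > 0
    # Penalty over the unique (upper-triangular) off-diagonal elements
    off_diag_l1 = np.abs(Omega[np.triu_indices_from(Omega, k=1)]).sum()
    return -logdet + np.trace(S @ Omega) + lam * off_diag_l1

# Sanity check: at Omega = S^{-1} and lam = 0 the value is logdet(S) + p
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 4))
S = np.cov(X, rowvar=False)
val = glasso_objective(np.linalg.inv(S), S, lam=0.0)
```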
However, when $\boldsymbol{\Omega}$ is not sparse, demanding sparsity in $\widehat{\boldsymbol{\Omega}}$ may not be helpful, as we will show in Section 2.1. Such settings can arise when data are measured and analyzed at ever higher resolutions (a growing trend in many areas, see e.g. Callahan et al. 2017). A tree is a natural way to represent the different scales of data resolution, and we introduce a new choice for $\mathcal{P}$ that uses this tree to guide node aggregation, thereby allowing for a data-adaptive choice of the scale at which dependencies are captured. Such tree-based structures are available in many domains. For instance, companies can be aggregated according to hierarchical industry classification codes; products can be aggregated from brands towards product categories; brain voxels can be aggregated according to brain regions; microbiome data can be aggregated according to taxonomy. The resulting penalty function then encourages a more general and yet still highly interpretable structure for $\widehat{\boldsymbol{\Omega}}$. In the following subsection, we use a toy example to illustrate the power of such an approach.

### 2.1 Node Aggregation

Consider a toy example with $p$ variables $X_{1}=\sum_{j=3}^{p}X_{j}+\varepsilon_{1},\qquad X_{2}=\sum_{j=3}^{p}X_{j}+\varepsilon_{2},\qquad X_{j}=\varepsilon_{j}\ \text{for}\ 3\leq j\leq p,$ where $\varepsilon_{1},\ldots,\varepsilon_{p}$ are independent standard normal random variables. By construction, it is clear that there is a very simple relationship between the variables: the first two variables both depend on the sum of the other $p-2$ variables. However, a standard graphical model on the $p$ variables does not naturally express this simplicity. The first row of Table 1 shows the covariance and precision matrices for the full set of variables $X_{1},\ldots,X_{p}$.
The graphical model on the full set of variables is extremely dense, with $O(p^{2})$ edges. Imagine if instead we could form a graphical model with only three variables: $X_{1},X_{2},\widetilde{X}$, where the last variable $\widetilde{X}=\sum_{j=3}^{p}X_{j}$ aggregates all but the first two variables. The bottom row of Table 1 shows that this aggregation results in a graphical model that matches the simplicity of the situation. The lack of sparsity in the $p$-node graphical model means that the graphical lasso will not do well. Nonetheless, a method that could perform node aggregation would be able to yield a highly interpretable aggregated sparse graphical model, since $X_{1}$ and $X_{2}$ are conditionally independent given the aggregated variable $\widetilde{X}$.

Table 1: Toy example: Covariance and precision matrices with corresponding graphical model (drawn for $p=50$) for the full (top) and aggregated (bottom) set of nodes. Nodes | Covariance Matrix | Precision Matrix | Graphical ---|---|---|--- | $\boldsymbol{\Sigma}$ | $\boldsymbol{\Omega}$ | Model $X_{1},\ldots,X_{p}$ | ${\footnotesize\begin{pmatrix}p-1&p-2&{\bf 1}_{p-2}^{\top}\\\ p-2&p-1&{\bf 1}_{p-2}^{\top}\\\ {\bf 1}_{p-2}&{\bf 1}_{p-2}&{\bf I}_{p-2}\\\ \end{pmatrix}}$ | ${\footnotesize\begin{pmatrix}1&0&-{\bf 1}_{p-2}^{\top}\\\ 0&1&-{\bf 1}_{p-2}^{\top}\\\ -{\bf 1}_{p-2}&-{\bf 1}_{p-2}&{\bf L}\\\ \end{pmatrix}}$ | | | with ${\bf L}={\bf I}_{p-2}+2\cdot{\bf 1}_{p-2}\cdot{\bf 1}_{p-2}^{\top}$ | $X_{1},X_{2},\widetilde{X}$ | ${\footnotesize\begin{pmatrix}p-1&p-2&p-2\\\ p-2&p-1&p-2\\\ p-2&p-2&p-2\\\ \end{pmatrix}}$ | ${\footnotesize\begin{pmatrix}1&0&-1\\\ 0&1&-1\\\ -1&-1&2+1/(p-2)\end{pmatrix}}$ | Note: Let ${\bf 1}_{d}$ denote a $d$-dimensional column vector of ones, and ${\bf I}_{d}$ be the $d\times d$ identity matrix.

It is useful to map from the small aggregated graphical model to the original $p$-node graphical model.
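The matrices in Table 1 are easy to verify numerically. The following sketch (NumPy; our own illustration) builds $\boldsymbol{\Sigma}$ and $\boldsymbol{\Omega}$ for the full model, checks that they are inverses, and recovers the aggregated precision matrix by inverting ${\bf M}^{\top}\boldsymbol{\Sigma}{\bf M}$:

```python
import numpy as np

p = 50  # as in Table 1

# Full covariance matrix of (X_1, ..., X_p) from the toy model
Sigma = np.eye(p)
Sigma[0, 0] = Sigma[1, 1] = p - 1
Sigma[0, 1] = Sigma[1, 0] = p - 2
Sigma[0, 2:] = Sigma[2:, 0] = 1.0
Sigma[1, 2:] = Sigma[2:, 1] = 1.0

# Full precision matrix from Table 1, with L = I_{p-2} + 2 * ones * ones^T
Omega = np.eye(p)
Omega[0, 2:] = Omega[2:, 0] = -1.0
Omega[1, 2:] = Omega[2:, 1] = -1.0
Omega[2:, 2:] = np.eye(p - 2) + 2.0 * np.ones((p - 2, p - 2))

# Aggregation: X_1 and X_2 stay, X_3 + ... + X_p collapse into one node
M = np.zeros((p, 3))
M[0, 0] = M[1, 1] = 1.0
M[2:, 2] = 1.0
Omega_agg = np.linalg.inv(M.T @ Sigma @ M)
```

The recovered `Omega_agg` matches the bottom-row precision matrix of Table 1, including the $2+1/(p-2)$ entry.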
One does so by writing the precision matrix in “$G$-block” format (Bunea et al., 2020, although they introduce this terminology in the context of the covariance matrix, not its inverse) for a given partition $G=\\{G_{1},...,G_{K}\\}$ of the nodes $\\{1,\ldots,p\\}$ and corresponding $p\times K$ membership matrix ${\bf M}$, with entries $M_{jk}=1$ if $j\in G_{k}$, and $M_{jk}=0$ otherwise. In particular, there exists a $K\times K$ symmetric matrix ${\bf C}$ and a $p\times p$ diagonal matrix ${\bf D}$ such that the precision matrix can be written as $\boldsymbol{\Omega}={\bf M}{\bf C}{\bf M}^{\top}+{\bf D}$. The block structure of $\boldsymbol{\Omega}$ is captured by the first part of this decomposition; the aggregated $K\times K$ precision matrix on the set of aggregated nodes can then be written as $\boldsymbol{\Omega}_{\text{agg}}={\bf C}+{\bf D}_{\text{agg}},$ where ${\bf D}_{\text{agg}}=({\bf M}^{\top}{\bf D}^{-1}{\bf M})^{-1}$ is diagonal. In the above example, $K=3$, $G_{1}=\\{1\\},\ G_{2}=\\{2\\},\ G_{3}=\\{3,\ldots,p\\}$, and ${\bf M}{\bf C}{\bf M}^{\top}$ has only three distinct rows/columns since the aggregated variables $j=3,\ldots,p$ share all their entries. In the presence of node aggregation and edge sparsity, the graphical model corresponding to the aggregated precision matrix is far more parsimonious than the graphical model on the full precision matrix (see Table 1). As motivated by this example, our main goal is to estimate the precision matrix in such a way that we can navigate from a $p$-dimensional problem to a $K$-dimensional problem whose corresponding graphical model provides a simple description of the conditional dependency structure among $K$ aggregates of the original variables. In the following proposition, we show that this can be accomplished by looking for a precision matrix that has a $G$-block structure. The proof of the proposition is included in Appendix A.

###### Proposition 2.1.
Suppose ${\bf X}\sim N_{p}({\bf 0},\boldsymbol{\Omega}^{-1})$ with $\boldsymbol{\Omega}={\bf M}{\bf C}{\bf M}^{\top}+{\bf D}$, where ${\bf M}\in\\{0,1\\}^{p\times K}$ is the membership matrix and $\bf D\succ 0$, and let ${\bf\widetilde{X}}={\bf M}^{\top}{\bf X}\in\mathds{R}^{K}$ be the vector of aggregated variables. Then ${\bf\widetilde{X}}$ has precision matrix ${\bf C}+{\bf D}_{\text{agg}}$, where ${\bf D}_{\text{agg}}$ is a diagonal matrix, and therefore $c_{ij}=0$ is equivalent to the aggregates $\widetilde{X}_{i}$ and $\widetilde{X}_{j}$ being conditionally independent given all other aggregated variables.

While Proposition 2.1 gives us the desired interpretation in the graphical model with $K$ aggregated nodes, in practice the partition $G$, its size $K$, and the corresponding membership matrix ${\bf M}$ are unknown. Rather than considering arbitrary partitions of the variables, we constrain ourselves specifically to partitions guided by a known tree. In so doing, we allow ourselves to exploit side information and help ensure that the aggregated nodes will be easily interpretable. To this end, we introduce a tree-based parameterization strategy that allows us to embed the node dimension reduction into a convex optimization framework.

### 2.2 Tree-Based Parameterization

Our aggregation procedure assumes that we have, as side information, a tree that represents the closeness (or similarity) of variables. We introduce here a matrix-valued extension of the tree-based parameterization developed in Yan and Bien (2020) for the regression setting. We consider a tree $\mathcal{T}$ with $p$ leaves $\boldsymbol{\Omega}_{1},\ldots,\boldsymbol{\Omega}_{p}$, where $\boldsymbol{\Omega}_{j}$ denotes column $1\leq j\leq p$ of $\boldsymbol{\Omega}$. We restrict ourselves to partitions that can be expressed as a collection of branches of $\mathcal{T}$. Newly aggregated nodes are then formed by summing variables within branches.
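Proposition 2.1 can also be checked numerically: build a $G$-block precision matrix, aggregate, and compare the precision matrix of $\widetilde{\bf X}$ with ${\bf C}+{\bf D}_{\text{agg}}$. A minimal sketch (NumPy; the partition and the construction of ${\bf C}$ are arbitrary choices of ours):

```python
import numpy as np

rng = np.random.default_rng(1)
p, K = 6, 3
# Membership matrix for the partition G1={1,2}, G2={3,4}, G3={5,6}
M = np.kron(np.eye(K), np.ones((2, 1)))

D = np.diag(rng.uniform(1.0, 2.0, size=p))   # diagonal, D > 0
B = rng.standard_normal((K, K))
C = B @ B.T + np.eye(K)                      # symmetric positive definite
Omega = M @ C @ M.T + D                      # G-block precision matrix
assert np.all(np.linalg.eigvalsh(Omega) > 0)

# Precision matrix of the aggregated vector M^T X ...
Omega_agg = np.linalg.inv(M.T @ np.linalg.inv(Omega) @ M)
# ... should equal C + D_agg with D_agg = (M^T D^{-1} M)^{-1}
D_agg = np.linalg.inv(M.T @ np.linalg.inv(D) @ M)
```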
To this end, we assign a $p$-dimensional parameter vector ${\boldsymbol{\gamma}}_{u}$ to each node $u$ in the tree $\mathcal{T}$ (see Figure 2 for an example). Writing the set of nodes in the path from the root to the $j^{\text{th}}$ leaf (variable) as $\text{ancestor}(j)\cup\\{j\\}$, we express each column/row in the precision matrix as $\boldsymbol{\Omega}_{j}=\sum_{u\in\text{ancestor}(j)\cup\\{j\\}}\boldsymbol{\gamma}_{u}+d_{j}{\bf e}_{j},$ (3) where we sum over all the $\boldsymbol{\gamma}_{u}$’s along this path, and ${\bf e}_{j}$ denotes the $p$-dimensional vector with all zeros except for its $j^{\text{th}}$ element that is equal to one. In the remainder, we will make extensive use of the more compact notation $\boldsymbol{\Omega}={\bf A}\boldsymbol{\Gamma}+{\bf D},$ where ${\bf A}\in\\{0,1\\}^{p\times|\mathcal{T}|}$ is a binary matrix with $A_{jk}=\mathds{1}\\{u_{k}\in\text{ancestor}(j)\cup\\{j\\}\\}=\mathds{1}\\{j\in\text{descendant}(u_{k})\cup\\{u_{k}\\}\\}$, $\boldsymbol{\Gamma}$ is a $|\mathcal{T}|\times p$ parameter matrix collecting the $\boldsymbol{\gamma}_{u}$’s in its rows, and ${\bf D}$ is a diagonal parameter matrix with elements $d_{1},\ldots,d_{p}$.

Figure 2: An example of a tree $\mathcal{T}$ encoding similarity among $p=5$ variables.

Figure 3: Left: An example of a $5\times 5$-dimensional $\boldsymbol{\Omega}$ and a tree $\mathcal{T}$ that relates the corresponding $p=5$ variables. We have $\boldsymbol{\Omega}_{i}={\boldsymbol{\gamma}}_{i}+\boldsymbol{\gamma}_{1:3}+\boldsymbol{\gamma}_{1:5}$ for $i=1,2,3$ and $\boldsymbol{\Omega}_{j}={\boldsymbol{\gamma}}_{j}+\boldsymbol{\gamma}_{4:5}+\boldsymbol{\gamma}_{1:5}$ for $j=4,5$, by equation (3), ignoring the diagonal elements.
Middle: By zeroing out the $\boldsymbol{\gamma}_{i}$’s in the gray nodes, we aggregate the rows/columns of $\boldsymbol{\Omega}$ into two groups indicated by the two colors: $\boldsymbol{\Omega}_{1}=\boldsymbol{\Omega}_{2}=\boldsymbol{\Omega}_{3}=\boldsymbol{\gamma}_{1:3}+\boldsymbol{\gamma}_{1:5}$ (blue) and $\boldsymbol{\Omega}_{4}=\boldsymbol{\Omega}_{5}=\boldsymbol{\gamma}_{1:5}$ (red). Right: The precision matrix $\boldsymbol{\Omega}$ thus has a block-structure.

By zeroing out $\boldsymbol{\gamma}_{u}$’s, certain nodes will be aggregated, as can be seen from the illustrative example in Figure 3. More precisely, let ${\mathcal{Z}}=\\{u:{{\boldsymbol{\gamma}}}_{u}\neq{\bf 0}\\}$ denote the set of non-zero rows in ${\boldsymbol{\Gamma}}$ and let ${\bf A}_{{\mathcal{Z}}}$ be the sub-matrix of ${\bf A}$ in which only the columns corresponding to the non-zero rows of ${{\boldsymbol{\Gamma}}}$ are kept. The number of blocks $K$ in the aggregated network is then given by the number of unique rows in ${\bf A}_{{\mathcal{Z}}}$. The membership matrix $\bf M$ (Section 2.1), and hence the set of aggregated nodes, can then be derived from the variables (rows) in the matrix ${\bf A}_{{\mathcal{Z}}}$ that share all their row-entries. We are now ready to introduce the tag-lasso, which is based on this parameterization.

## 3 Tree-Aggregated Graphical Lasso

To achieve dimension reduction via node aggregation and edge sparsity simultaneously, we extend optimization problem (1) by incorporating the parameterization introduced above.
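Before turning to the estimator, here is a concrete sketch of the Section 2.2 machinery it builds on (NumPy; the node indexing is our own convention). It constructs ${\bf A}$ for the $p=5$ tree of Figure 2 and counts the blocks obtained by zeroing out all $\boldsymbol{\gamma}_{u}$'s except those of $u_{1:3}$ and the root, as in Figure 3:

```python
import numpy as np

# Tree of Figure 2: leaves 0..4, node 5 = u_{1:3}, node 6 = u_{4:5}, node 7 = root
ancestors = {0: [5, 7], 1: [5, 7], 2: [5, 7], 3: [6, 7], 4: [6, 7]}
p, n_nodes = 5, 8
A = np.zeros((p, n_nodes), dtype=int)
for j, anc in ancestors.items():
    A[j, j] = 1      # A_jk = 1 if node k is leaf j itself...
    A[j, anc] = 1    # ...or one of its ancestors

# Keep gamma_u non-zero only at u_{1:3} (node 5) and the root (node 7)
Z = [5, 7]
A_Z = A[:, Z]

# K = number of unique rows of A_Z; leaves sharing a row form one aggregated node
uniq = np.unique(A_Z, axis=0)
K = uniq.shape[0]
blocks = np.array([int(np.flatnonzero((uniq == row).all(axis=1))[0]) for row in A_Z])
```

Leaves 1–3 share the row of `A_Z` for node $u_{1:3}$ and the root, while leaves 4–5 load only on the root, so two aggregated nodes result, matching Figure 3.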
Our estimator, called the tag-lasso, is defined as $(\widehat{\boldsymbol{\Omega}},\widehat{\boldsymbol{\Gamma}},\widehat{\boldsymbol{D}})=\underset{\boldsymbol{\Omega},\boldsymbol{\Gamma},\boldsymbol{D}}{\operatorname{argmin}}\\{-\text{logdet}(\boldsymbol{\Omega})+\text{tr}({\bf S}\boldsymbol{\Omega})+\lambda_{1}\|\boldsymbol{\Gamma}_{-r}\|_{2,1}+\lambda_{2}\|\boldsymbol{\Omega}^{-\text{diag}}\|_{1}\ \\\ \text{s.t.}\ \boldsymbol{\Omega}=\boldsymbol{\Omega}^{\top},\boldsymbol{\Omega}\succ{\bf 0},\boldsymbol{\gamma}_{r}=\gamma{\bf 1}_{p},\ \boldsymbol{\Omega}={\bf A}\boldsymbol{\Gamma}+{\bf D},\ {\bf D}\ \text{diag},\ D_{jj}\geq 0\ \text{for}\ j=1,\ldots,p\\},$ (4) with $\|\boldsymbol{\Gamma}_{-r}\|_{2,1}=\sum_{u\in\mathcal{T}_{-r}}\|\boldsymbol{\gamma}_{u}\|_{2}$ and $\mathcal{T}_{-r}$ being the set of all nodes in $\mathcal{T}$ other than the root. This norm induces row-wise sparsity on all non-root rows of ${\boldsymbol{\Gamma}}$. This row-wise sparsity, in turn, induces node aggregation as explained in Section 2.2. The root is excluded from this penalty term so that in the extreme of large $\lambda_{1}$ one gets complete aggregation but not necessarily sparsity (in this extreme, all off-diagonal elements of $\widehat{\boldsymbol{\Omega}}$ are equal to the scalar $\gamma$ that appears in the equality constraint involving $\boldsymbol{\gamma}_{r}$). While $\lambda_{1}$ controls the degree of node aggregation, $\lambda_{2}$ controls the degree of edge sparsity. When $\lambda_{1}=0$, the optimization problem in (4) reduces to the glasso. 
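To fix ideas, the objective in (4) can be evaluated at any feasible point $\boldsymbol{\Omega}={\bf A}\boldsymbol{\Gamma}+{\bf D}$. A minimal sketch (NumPy; the tiny two-leaf tree and helper name are our own illustration, not the authors' code):

```python
import numpy as np

def tag_lasso_objective(Gamma, d, S, A, lam1, lam2, root):
    """Objective of (4) at a feasible point Omega = A @ Gamma + diag(d);
    `root` indexes the root row of Gamma (excluded from the group penalty)."""
    Omega = A @ Gamma + np.diag(d)
    sign, logdet = np.linalg.slogdet(Omega)
    if sign <= 0:
        return np.inf
    group = sum(np.linalg.norm(Gamma[u]) for u in range(Gamma.shape[0]) if u != root)
    off_l1 = np.abs(Omega[np.triu_indices_from(Omega, k=1)]).sum()
    return -logdet + np.trace(S @ Omega) + lam1 * group + lam2 * off_l1

# Two leaves plus a root: one column of A per tree node
A = np.array([[1, 0, 1],
              [0, 1, 1]], dtype=float)
S = np.array([[1.5, 0.3],
              [0.3, 1.0]])
Gamma = np.zeros((3, 2))   # all gamma_u = 0, so Omega = diag(d)
d = np.ones(2)
val = tag_lasso_objective(Gamma, d, S, A, lam1=0.5, lam2=0.5, root=2)
```

With $\boldsymbol{\Gamma}={\bf 0}$ and ${\bf D}={\bf I}$, both penalty terms vanish and the value reduces to $\text{tr}({\bf S})$.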
Finally, note that optimization problem (4) fits into the general formulation of penalized graphical models given in (1) since it can be equivalently expressed as $\widehat{\boldsymbol{\Omega}}=\underset{\boldsymbol{\Omega}}{\operatorname{argmin}}\\{-\text{logdet}(\boldsymbol{\Omega})+\text{tr}({\bf S}\boldsymbol{\Omega})+\lambda_{1}\mathcal{P}_{\text{aggregate}}(\boldsymbol{\Omega})+\lambda_{2}\mathcal{P}_{\text{sparse}}(\boldsymbol{\Omega})\ \text{s.t.}\ \boldsymbol{\Omega}=\boldsymbol{\Omega}^{\top},\boldsymbol{\Omega}\succ{\bf 0}\\},$ where $\mathcal{P}_{\text{aggregate}}(\boldsymbol{\Omega})=\underset{\boldsymbol{\Gamma},{\bf D}}{\operatorname{min}}\ \\{\|\boldsymbol{\Gamma}_{-r}\|_{2,1}\ \text{s.t.}\ \boldsymbol{\gamma}_{r}=\gamma{\bf 1}_{p},\ \boldsymbol{\Omega}={\bf A}\boldsymbol{\Gamma}+{\bf D},\ {\bf D}\ \text{diag},\ D_{jj}\geq 0\ \text{for}\ j=1,\ldots,p\\}$ and $\mathcal{P}_{\text{sparse}}(\boldsymbol{\Omega})$ is the $\ell_{1}$-norm defined in (2). ### 3.1 Locally Adaptive Alternating Direction Method of Multipliers We develop an alternating direction method of multipliers (ADMM) algorithm (Boyd et al., 2011), specifically tailored to solving (4). 
Our ADMM algorithm is based on solving this equivalent formulation of (4): $\underset{\underset{\boldsymbol{\Gamma}^{(1)},\boldsymbol{\Gamma}^{(2)},\boldsymbol{\Omega},\boldsymbol{\Gamma},\boldsymbol{D}}{\boldsymbol{\Omega}^{(1)},\boldsymbol{\Omega}^{(2)},\boldsymbol{\Omega}^{(3)}}}{\operatorname{min}}\\{-\text{logdet}(\boldsymbol{\Omega}^{(1)})+\text{tr}({\bf S}\boldsymbol{\Omega}^{(1)})+\lambda_{1}\|\boldsymbol{\Gamma}^{(1)}_{-r}\|_{2,1}+\lambda_{2}\|{\boldsymbol{\Omega}^{(3)}}^{-\text{diag}}\|_{1}\\\ \text{s.t.}\ \boldsymbol{\Omega}^{(1)}={\boldsymbol{\Omega}^{(1)}}^{\top},\boldsymbol{\Omega}^{(1)}\succ{\bf 0},\boldsymbol{\gamma}_{r}^{(1)}=\gamma^{(1)}{\bf 1}_{p},\ \boldsymbol{\Omega}^{(2)}={\bf A}\boldsymbol{\Gamma}^{(2)}+{\bf D},\ {\bf D}\ \text{diag},\ D_{jj}\geq 0\ \text{for}\ j=1,\ldots,p,\\\ \boldsymbol{\Omega}=\boldsymbol{\Omega}^{(1)}=\boldsymbol{\Omega}^{(2)}=\boldsymbol{\Omega}^{(3)}\ \text{and}\ \boldsymbol{\Gamma}=\boldsymbol{\Gamma}^{(1)}=\boldsymbol{\Gamma}^{(2)}\\}.$ (5) Additional copies of $\boldsymbol{\Omega}$ and $\boldsymbol{\Gamma}$ are introduced to efficiently decouple the optimization problem. Furthermore, we use an extension called locally adaptive ADMM (LA-ADMM, Xu et al., 2017) with adaptive penalization to improve performance. The full details of the algorithm are provided in Appendix B.
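The LA-ADMM for (5) is spelled out in Appendix B. As an illustration of the two proximal updates it builds on (the logdet prox solved by an eigendecomposition, and elementwise soft-thresholding), here is the standard ADMM for the plain glasso in the style of Boyd et al. (2011). This is a simplified sketch, not the authors' algorithm:

```python
import numpy as np

def soft_threshold(X, t):
    return np.sign(X) * np.maximum(np.abs(X) - t, 0.0)

def glasso_admm(S, lam, rho=1.0, n_iter=500):
    """Standard ADMM for the glasso: split Omega (logdet + trace term)
    from Z (l1 term), with scaled dual variable U."""
    p = S.shape[0]
    Z = np.eye(p)
    U = np.zeros((p, p))
    for _ in range(n_iter):
        # Omega-update: prox of -logdet(.) + tr(S .), via eigendecomposition
        d, V = np.linalg.eigh(rho * (Z - U) - S)
        Omega = (V * ((d + np.sqrt(d ** 2 + 4.0 * rho)) / (2.0 * rho))) @ V.T
        # Z-update: soft-threshold off-diagonal entries; diagonal is unpenalized
        Z = soft_threshold(Omega + U, lam / rho)
        di = np.diag_indices(p)
        Z[di] = (Omega + U)[di]
        # Dual update
        U = U + Omega - Z
    return Z
```

With $\lambda=0$ the iterates converge to ${\bf S}^{-1}$; with a large $\lambda$ the off-diagonal entries are thresholded to zero and the solution becomes diagonal.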
### 3.2 Selection of the Tuning Parameters To select the tuning parameters $\lambda_{1}$ and $\lambda_{2}$, we form a $10\times 10$ grid of ($\lambda_{1},\lambda_{2}$) values and find the pair that minimizes a 5-fold cross-validated likelihood-based score, $\frac{1}{5}\sum_{k=1}^{5}\left\\{-\text{logdet}(\widehat{\boldsymbol{\Omega}}_{-\mathcal{F}_{k}})+\text{tr}({\bf S}_{\mathcal{F}_{k}}\widehat{\boldsymbol{\Omega}}_{-\mathcal{F}_{k}})\right\\},$ (6) where $\widehat{\boldsymbol{\Omega}}_{-\mathcal{F}_{k}}$ is an estimate of the precision matrix trained while withholding the samples in the $k^{\text{th}}$ fold and ${\bf S}_{\mathcal{F}_{k}}$ is the sample covariance matrix computed on the $k^{\text{th}}$ fold. In particular, we take $\widehat{\boldsymbol{\Omega}}_{-\mathcal{F}_{k}}$ to be a re-fitted version of our estimator (e.g., Belloni and Chernozhukov, 2013). After fitting the tag-lasso, we obtain $\widehat{\mathcal{Z}}=\\{u:\widehat{{\boldsymbol{\gamma}}}_{u}\neq{\bf 0}\\},$ the set of non-zero rows in $\widehat{{\boldsymbol{\Gamma}}}$, which suggests a particular node aggregation; and $\widehat{\mathcal{P}}=\\{(i,j):\widehat{{\boldsymbol{\Omega}}}_{ij}\neq{\bf 0}\\},$ the set of non-zero elements in $\widehat{{\boldsymbol{\Omega}}}$, which suggests a particular edge sparsity structure. 
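The cross-validated score in (6) is straightforward to compute. A sketch (NumPy; `fit` is a hypothetical stand-in for the re-fitted estimator, and the random fold split is our own choice):

```python
import numpy as np

def cv_score(X, fit, n_folds=5, seed=0):
    """5-fold cross-validated likelihood-based score, equation (6).
    `fit` maps a training covariance matrix to a precision-matrix estimate."""
    n = X.shape[0]
    folds = np.array_split(np.random.default_rng(seed).permutation(n), n_folds)
    total = 0.0
    for idx in folds:
        train = np.delete(X, idx, axis=0)
        Omega_hat = fit(np.cov(train, rowvar=False))   # trained without fold k
        S_fold = np.cov(X[idx], rowvar=False)          # covariance on fold k
        sign, logdet = np.linalg.slogdet(Omega_hat)
        total += -logdet + np.trace(S_fold @ Omega_hat)
    return total / n_folds

rng = np.random.default_rng(1)
X = rng.standard_normal((100, 4))
score = cv_score(X, fit=lambda S: np.linalg.inv(S))
```

In practice `fit` would be the re-fitted tag-lasso at a given $(\lambda_{1},\lambda_{2})$, and the pair minimizing `score` over the grid is selected.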
We then re-estimate $\boldsymbol{\Omega}$ by maximizing the likelihood subject to these aggregation and sparsity constraints: $\displaystyle\underset{\boldsymbol{\Omega},\boldsymbol{\Gamma}_{\widehat{\mathcal{Z}}},\boldsymbol{D}}{\operatorname{min}}$ $\displaystyle-\text{logdet}(\boldsymbol{\Omega})+\text{tr}({\bf S}\boldsymbol{\Omega})$ (7) subject to $\displaystyle\boldsymbol{\Omega}=\boldsymbol{\Omega}^{\top},\boldsymbol{\Omega}\succ{\bf 0},$ $\displaystyle\boldsymbol{\gamma}_{\widehat{\mathcal{Z}},r}=\gamma{\bf 1}_{p},$ $\displaystyle\boldsymbol{\Omega}={\bf A}_{\widehat{\mathcal{Z}}}\boldsymbol{\Gamma}_{\widehat{\mathcal{Z}}}+{\bf D},{\bf D}\ \text{diag.},\ D_{jj}\geq 0\ \text{for}\ j=1,\ldots,p$ $\displaystyle\boldsymbol{\Omega}_{ij}=0,\ \text{for}\ (i,j)\notin\widehat{\mathcal{P}}.$ We solve this with an LA-ADMM algorithm similar to what is described in Section 3.1 and Appendix B. ### 3.3 Connections to Related Work Combined forms of dimension reduction in graphical models can be found in, amongst others, Chandrasekaran et al. (2012); Tan et al. (2015); Eisenach et al. (2020); Brownlees et al. (2020); Pircalabelu and Claeskens (2020). Chandrasekaran et al. (2012) consider a blend of principal component analysis with graphical modeling by combining sparsity with a low-rank structure. Tan et al. (2015) and Eisenach et al. (2020) both propose two-step procedures that first cluster variables in an initial dimension reduction step and subsequently estimate a cluster-based graphical model. Brownlees et al. (2020) introduce partial correlation network models with community structures but rely on the sample covariance matrix of the observations to perform spectral clustering. Our procedure differs from these works by introducing a single convex optimization problem that simultaneously induces aggregation and edge sparsity for the precision matrix. 
Our work is most closely related to Pircalabelu and Claeskens (2020), who estimate a penalized graphical model and simultaneously classify nodes into communities. However, Pircalabelu and Claeskens (2020) do not use tree-based node aggregation. Our approach, in contrast, considers the tree $\mathcal{T}$ as an important part of the problem: it helps determine the extent of node aggregation, and as a consequence the number of aggregated nodes (i.e., clusters, communities, or blocks) $K$, in a data-driven way.

## 4 Simulations

We investigate the advantages of jointly exploiting node aggregation and edge sparsity in graphical models. To this end, we compare the performance of the tag-lasso to two benchmarks:

(i) oracle: The aggregated, sparse graphical model in (7) is estimated subject to the true aggregation and sparsity constraints. The oracle is only available for simulated data and serves as a “best case” benchmark.

(ii) glasso: This does not perform any aggregation (corresponding to the tag-lasso with $\lambda_{1}=0$); a sparse graph on the full set of variables is estimated. The glasso is computed using the same LA-ADMM algorithm as detailed in Appendix B. The tuning parameter is selected from a 10-dimensional grid as the value that minimizes the 5-fold cross-validation likelihood-based score in equation (6), with $\widehat{\boldsymbol{\Omega}}_{-\mathcal{F}_{k}}$ taken to be the glasso estimate.

All simulations were performed using the simulator package (Bien, 2016) in R (R Core Team, 2017). We evaluate the estimators in terms of three performance metrics: estimation accuracy, aggregation performance, and sparsity recovery.
We evaluate estimation accuracy by averaging, over many simulation runs, the Kullback-Leibler (KL) distance $\text{KL}=-\text{logdet}(\boldsymbol{\Sigma}\widehat{\boldsymbol{\Omega}})+\text{tr}(\boldsymbol{\Sigma}\widehat{\boldsymbol{\Omega}})-p,$ where $\boldsymbol{\Sigma}=\boldsymbol{\Omega}^{-1}$ is the true covariance matrix. Note that the KL distance is zero if the estimated precision matrix equals the true precision matrix. To evaluate aggregation performance, we use two measures: the Rand index (Rand, 1971) and the adjusted Rand index (Hubert and Arabie, 1985). Both indices measure the degree of similarity between the true partition on the set of nodes $\\{1,\ldots,p\\}$ and the estimated partition. The Rand index ranges from zero to one, where one means that both partitions are identical. The adjusted Rand index performs a re-scaling to account for the fact that random chance will cause some variables to occupy the same group. Finally, to evaluate sparsity recovery, we use the false positive and false negative rates $\text{FPR}=\frac{\\#\\{(i,j):\widehat{\Omega}_{ij}\neq 0\>\text{and}\>\Omega_{ij}=0\\}}{\\#\\{(i,j):\Omega_{ij}=0\\}}\ \ \text{and}\ \ \text{FNR}=\frac{\\#\\{(i,j):\widehat{\Omega}_{ij}=0\>\text{and}\>\Omega_{ij}\neq 0\\}}{\\#\\{(i,j):\Omega_{ij}\neq 0\\}}.$ The FPR reports the fraction of truly zero components of the precision matrix that are estimated as nonzero. The FNR gives the fraction of truly nonzero components of the precision matrix that are estimated as zero.

Figure 4: Four aggregation designs: chain, random, unbalanced and unstructured graphs with corresponding precision matrix (top) and graph on the set of aggregated nodes (bottom).

### 4.1 Simulation Designs

Data are drawn from a multivariate normal distribution with mean zero and covariance matrix $\boldsymbol{\Sigma}=\boldsymbol{\Omega}^{-1}$. We take $p=15$ variables and investigate the effect of increasing the number of variables in Section 4.3.
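These metrics are straightforward to compute. A sketch (NumPy; as an interpretation, we apply the FPR/FNR counts to the off-diagonal entries, since the diagonal of a precision matrix is never zero):

```python
import numpy as np

def kl_distance(Sigma, Omega_hat):
    """KL distance: -logdet(Sigma @ Omega_hat) + tr(Sigma @ Omega_hat) - p."""
    P = Sigma @ Omega_hat
    return -np.linalg.slogdet(P)[1] + np.trace(P) - Sigma.shape[0]

def fpr_fnr(Omega, Omega_hat, tol=1e-10):
    """False positive / false negative rates over off-diagonal entries."""
    off = ~np.eye(Omega.shape[0], dtype=bool)
    true_zero = off & (np.abs(Omega) < tol)
    true_nonzero = off & (np.abs(Omega) >= tol)
    est_nonzero = np.abs(Omega_hat) >= tol
    fpr = (true_zero & est_nonzero).sum() / max(true_zero.sum(), 1)
    fnr = (true_nonzero & ~est_nonzero).sum() / max(true_nonzero.sum(), 1)
    return fpr, fnr
```

As a check, the KL distance vanishes when $\widehat{\boldsymbol{\Omega}}=\boldsymbol{\Sigma}^{-1}$, and a diagonal estimate of a precision matrix with off-diagonal structure yields FNR one and FPR zero.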
We consider four different simulation designs, shown in Figure 4, each having a different combination of aggregation and sparsity structures for the precision matrix $\boldsymbol{\Omega}$. Aggregation is present in the first three structures, where the precision matrix has a $G$-block structure with $K=3$ blocks. In Section 4.4, we investigate the effect of varying the number of blocks. In the chain graph, adjacent aggregated groups are connected through an edge; this structure corresponds to the motivating example of Section 2.1. In the random graph, one non-zero edge in the aggregated network is chosen at random. In the unbalanced graph, the clusters are of unequal size. In the unstructured graph, no aggregation is present. Across all designs, we take the diagonal elements of $\boldsymbol{\Omega}$ to be $1$, the elements within a block of aggregated variables to be $0.5$, and the non-zero elements across blocks to be $0.25$. We generate $100$ different data sets for every simulation design and use a sample size of $n=120$. The number of parameters ($p+p(p-1)/2=120$) thus equals the sample size.

Figure 5: A simple tree used for the “tag-lasso ideal” (left) and a more realistic tree used for the “tag-lasso realistic” (right).

The tag-lasso estimator relies on the existence of a tree to perform node dimension reduction. We consider two different tree structures throughout the simulation study. First, we use an “ideal” tree which contains the true aggregation structure as the sole aggregation level between the leaves and the root of the tree. As an example, the true aggregation structure for the chain graph is shown in the left panel of Figure 5. We form ${\bf A}$ corresponding to this oracle tree to obtain the “tag-lasso ideal” estimator. We also consider a more realistic tree, shown in the right panel of Figure 5, following a construction similar to that of Yan and Bien (2020).
The tree is formed by performing hierarchical clustering of $p$ latent points chosen to ensure that the tree contains the true aggregation structure and that these true clusters occur across a variety of depths. In particular, we generate $K$ cluster means $\mu_{1},\ldots,\mu_{K}$ with $\mu_{i}=1/i$. We set the number of latent points associated with each of the $K$ means equal to the cluster sizes from Figure 4. These latent points are then drawn independently from $N(\mu_{i},[0.05\cdot\min_{j\neq i}|\mu_{i}-\mu_{j}|]^{2})$. Finally, we form ${\bf A}$ corresponding to this tree to obtain the “tag-lasso realistic” estimator.

### 4.2 Results

We subsequently discuss the results on estimation accuracy, aggregation performance, and sparsity recovery.

Figure 6: Estimation accuracy of the three estimators relative to the oracle.

#### Estimation Accuracy.

Boxplots of the KL distances for the three estimators (tag-lasso ideal, tag-lasso realistic and glasso) relative to the oracle are given in Figure 6. The first three panels correspond to simulation designs with aggregation structures. In these settings, the tag-lasso estimators considerably outperform the glasso, on average by a factor of five. The tag-lasso ideal method performs nearly as well as the oracle. Comparing the tag-lasso realistic method to the tag-lasso ideal method suggests a minimal price paid for using a more realistic tree. The “unstructured” panel of Figure 6 shows a case in which there is sparsity but no aggregation in the true data generating model. As expected, the glasso performs best in this case; however, we observe minimal cost to applying the tag-lasso approaches (which encompass the glasso as a special case when $\lambda_{1}=0$).

#### Aggregation Performance.

Table 2 summarizes the aggregation performance of the three estimators in terms of the Rand index (RI) and adjusted Rand index (ARI).
No results on the ARI in the unstructured simulation design are reported since it cannot be computed for a partition consisting of singletons. The tag-lasso estimators perform very well. If one can rely on an oracle tree, the tag-lasso perfectly recovers the aggregation structure, as reflected in the perfect (A)RI values of the tag-lasso ideal method. Even when the tag-lasso uses a more complex tree structure, it recovers the correct aggregation structure in the vast majority of cases. The glasso returns a partition of singletons as it is unable to perform dimension reduction through aggregation, as can be seen from its zero values on the ARI. Table 2: Aggregation performance of the three estimators, as measured by the Rand index (RI) and adjusted Rand index (ARI), for the four simulation designs. Standard errors are in parentheses. Estimators | chain | random | unbalanced | unstructured ---|---|---|---|--- | RI | ARI | RI | ARI | RI | ARI | RI | ARI tag-lasso ideal | 1.00 (.00) | 1.00 (.01) | 1.00 (.00) | 1.00 (.00) | 1.00 (.00) | 0.99 (.01) | 0.84 (.02) | NA tag-lasso realistic | 0.95 (.01) | 0.88 (.01) | 0.97 (.01) | 0.93 (.01) | 0.94 (.01) | 0.85 (.02) | 0.81 (.02) | NA glasso | 0.71 (.00) | 0.00 (.00) | 0.71 (.00) | 0.00 (.00) | 0.67 (.00) | 0.00 (.00) | 1.00 (.00) | NA #### Sparsity Recovery. Table 3 summarizes the results on sparsity recovery (FPR and FNR). The tag- lasso estimators enjoy favorable FPR and FNR, mostly excluding the irrelevant conditional dependencies (as reflected by their low FPR) and including the relevant conditional dependencies (as reflected by their low FNR). In the simulation designs with aggregation, the glasso pays a big price for not being able to reduce dimensionality through aggregation, leading it to include too many irrelevant conditional dependencies, as reflected through its large FPRs. In the unstructured design, the rates of all estimators are, overall, low. 
Table 3: Sparsity recovery of the three estimators, as measured by the false positive rate (FPR) and false negative rate (FNR), for the four simulation designs. Standard errors are in parentheses. Estimators | chain | random | unbalanced | unstructured ---|---|---|---|--- | FPR | FNR | FPR | FNR | FPR | FNR | FPR | FNR tag-lasso ideal | 0.22 (.04) | 0.00 (.00) | 0.19 (.04) | 0.00 (.01) | 0.46 (.05) | 0.00 (.00) | 0.06 (.01) | 0.15 (.01) tag-lasso realistic | 0.30 (.04) | 0.02 (.01) | 0.13 (.02) | 0.09 (.01) | 0.44 (.04) | 0.05 (.01) | 0.05 (.01) | 0.14 (.01) glasso | 0.80 (.02) | 0.08 (.01) | 0.73 (.01) | 0.09 (.01) | 0.82 (.02) | 0.07 (.01) | 0.16 (.01) | 0.04 (.01) ### 4.3 Increasing the Number of Nodes We investigate the sensitivity of our results to an increasing number of variables $p$. We focus on the chain simulation design from Section 4.1 and subsequently double $p$ from 15 to 30, 60 and 120 while keeping the number of blocks $K$ fixed at three. The sample size $n$ is set proportional to the complexity of the model, as measured by $Kp+p$. Hence, the sample sizes corresponding to the increasing values of $p$ are respectively, $n=120,240,480,960$, thereby keeping the ratio of the sample size to the complexity fixed at two. In each setting, the number of parameters to be estimated is large, equal to 120, 465, 1830, 7260, respectively; thus increasing relative to the sample size. The left panel of Figure 7 shows the mean KL distance (on a log-scale) of the four estimators as a function of $p$. As the number of nodes increases, the estimation accuracy of the tag-lasso estimators and the oracle increases slightly. For fixed $K$ and increasing $p$, the aggregated nodes—which can be thought of as the average of $p/K$ random variables—may be stabler, thereby explaining why the problem at hand does not get harder when increasing $p$ for the methods with node aggregation. 
By contrast, the glasso—which is unable to exploit the aggregation structure—performs worse as $p$ increases. For $p=120$, for instance, the tag-lasso estimators outperform the glasso by a factor of 50.

Figure 7: Estimation accuracy of the four estimators (on a log-scale) for an increasing number of variables $p$ (with fixed $K=3$, left panel) and an increasing number of blocks $K$ (with fixed $p=30$, right panel).

Results on aggregation performance and sparsity recovery are presented in Figure 12 of Appendix C. The tag-lasso ideal method perfectly recovers the aggregation structure for all values of $p$. The realistic tag-lasso’s aggregation performance is close to perfect and remains relatively stable as $p$ increases. The glasso is unable to detect the aggregation structure, as expected and reflected through its zero ARIs. The tag-lasso estimators also maintain a better balance between the FPR and FNR than the glasso. While their FPRs increase as $p$ increases, their FNRs remain close to zero, hence all relevant conditional dependencies are recovered. The glasso, in contrast, fails to recover the majority of relevant conditional dependencies when $p=60,120$, thereby explaining its considerable drop in estimation accuracy.

### 4.4 Increasing the Number of Blocks

Finally, we investigate the effect of increasing the number of blocks $K$. We take the chain simulation design from Section 4.1 and increase the number of blocks from $K=3$ to $K=5,6,10$, while keeping the number of variables fixed at $p=30$. The right panel of Figure 7 shows the mean KL distance (on a log-scale) of the four estimators as a function of $K$. As one would expect, the difference between the aggregation methods and the glasso decreases as $K$ increases. However, for all $K$ considered, the glasso performs far worse than the aggregation-based methods. Similar conclusions hold in terms of aggregation and sparsity recovery performance. Detailed results are presented in Figure 13 of Appendix C.
The tag-lasso ideal method performs as well as the oracle in terms of capturing the aggregation structure; the tag-lasso realistic method comes close to perfect recovery, and its aggregation performance improves with increasing $K$. In terms of sparsity recovery, the tag-lasso estimators hardly miss relevant conditional dependencies and include only a small number of irrelevant conditional dependencies. The glasso's sparsity recovery performance is worse overall but does improve with increasing $K$.

## 5 Applications

### 5.1 Financial Application

We demonstrate our method on a financial data set containing daily realized variances of $p=31$ stock market indices from across the world in 2019 ($n=254$). Daily realized variances based on five-minute returns are taken from the Oxford-Man Institute of Quantitative Finance (publicly available at http://realized.oxford-man.ox.ac.uk/data/download). Following standard practice, all realized variances are log-transformed. An overview of the stock market indices is provided in Appendix D. We encode similarity between the 31 stock market indices according to geographical region and use the tree shown in Figure 8 to apply the tag-lasso estimator. Since observations on consecutive days are serially dependent, we first fit the simple and popular heterogeneous autoregressive (HAR) model of Corsi (2009) to each of the individual log-transformed realized variance series. Graphical displays of the residual series of these 31 HAR models suggest that almost all autocorrelation in the series is captured. We then apply the tag-lasso to the residual series to learn the conditional dependency structure among the stock market indices.

Figure 8: Geography-based tree for the stock market data, which aggregates the $p=31$ stock market indices (leaves) over several sub-continents towards a single root. Leaves, which represent individual stock markets, are displayed horizontally.

#### Estimated Graphical Model.
We fit the tag-lasso estimator, with 5-fold cross-validation to select the tuning parameters, to the full data set, with the matrix ${\bf A}$ encoding the tree structure in Figure 8. The tag-lasso returns a solution with $K=6$ aggregated blocks; the sparsity pattern of the full estimated precision matrix is shown in the top left panel of Figure 9. The coloring of the row labels and the numbering of the columns convey the membership of each variable in the aggregated blocks (to avoid clutter, only the first column of each block is labeled).

Figure 9: Stock market indices data. Top left: Sparsity pattern (non-zeros in black) of the full $\hat{\boldsymbol{\Omega}}$ with the aggregation structure conveyed through row label coloring and column numbering. Top right: Test errors across the ten replications (dots) for the tag-lasso versus the glasso. Bottom: Aggregated graph for the $K=6$ nodes obtained with the tag-lasso, shown as an adjacency matrix (bottom left) and as a network (bottom right) with the size of each node proportional to the number of original variables it aggregates.

Dimension reduction mainly occurs through node aggregation, as can be seen from the aggregated precision matrix in the bottom left panel of Figure 9. The resulting aggregated graphical model is rather dense, with about half of the off-diagonal entries of the estimated aggregated precision matrix being non-zero, thereby suggesting strong volatility connectedness. The solution returned by the tag-lasso estimator consists of one single-market block (block 5: Canada) and five multi-market blocks, which vary in size. The Australian, South-American, and all Asian stock markets form one aggregated block (block 6). Note that the tag-lasso has "aggregated" these merely because they have the same non-dependence structure (i.e., all of these markets are estimated to be conditionally independent of each other and of all other markets).
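The block structure just described can be summarized by collapsing a full sparsity pattern onto the aggregated nodes. A hedged illustration (the tag-lasso estimates the aggregated precision matrix directly; the helper `aggregate_support` below is only a post-hoc summary of ours):

```python
import numpy as np

def aggregate_support(omega_hat, labels, tol=1e-8):
    """Collapse a p x p precision support to a K x K block adjacency:
    blocks a and b are linked if any entry between their members is
    non-zero. Post-hoc summary for illustration only."""
    labels = np.asarray(labels)
    K = labels.max() + 1
    supp = np.abs(omega_hat) > tol
    adj = np.zeros((K, K), dtype=bool)
    for a in range(K):
        for b in range(K):
            adj[a, b] = supp[np.ix_(labels == a, labels == b)].any()
    return adj
```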
The remaining aggregated nodes concern the US market (block 4) and three European blocks: North-Europe (block 1), Central-, South-Europe & STOXX50E (block 2), and West-Europe (block 3). In the aggregated network, the latter two and the US play a central role as they are the most strongly connected nodes: these three nodes are connected to each other, the US node is additionally connected to Canada, and the two European nodes are additionally connected to North-Europe.

#### Out-of-sample Performance.

We conduct an out-of-sample exercise to compare the tag-lasso estimator to the glasso estimator. We randomly select $n=203$ observations (80% of the full data set) to form a "training sample" covariance matrix and use the remaining data to form a "test sample" covariance matrix ${\bf S}^{\text{test}}$; we repeat this procedure ten times. We fit both the tag-lasso and glasso estimators to the training covariance matrix, with 5-fold cross-validation on the training data to select the tuning parameters. Next, we compute their corresponding out-of-sample errors on the test data, as in (6). The top right panel of Figure 9 shows each of these ten test errors for both the tag-lasso (x-axis) and the glasso estimator (y-axis). In all ten replicates the points lie well above the 45-degree line: the tag-lasso attains a lower test error than the glasso every time, amounting to a substantial reduction relative to the glasso's test errors. This indicates that jointly exploiting edge and node dimension reduction is useful for precision matrix estimation in this context.

### 5.2 Microbiome Application

We next turn to a data set of gut microbial amplicon data in HIV patients (Rivera-Pinto et al., 2018), where our goal is to estimate an interpretable graphical model capturing the interplay between different taxonomic groups of the microbiome. Bien et al.
(2020) recently showed that tree-based aggregation in a supervised setting leads to parsimonious predictive models. The data set has $n=152$ HIV patients, and we apply the tag-lasso estimator to all $p=104$ bacterial operational taxonomic units (OTUs) that have non-zero counts in over half of the samples. We use the taxonomic tree that arranges the OTUs into natural hierarchical groupings of taxa: 17 genera, 11 families, five orders, five classes, three phyla, and one kingdom (the root node). We employ a standard data transformation from the field of compositional data analysis (see, e.g., Aitchison, 1982) called the centered log-ratio (clr) transformation, which is commonly used in microbiome graphical modeling (Kurtz et al., 2015; Lo and Marculescu, 2018; Kurtz et al., 2019). After this transformation, Kurtz et al. (2015) apply the glasso, Lo and Marculescu (2018) incorporate phylogenetic information into the glasso's optimization problem through weights within the $\ell_{1}$-penalty, and Kurtz et al. (2019) estimate a latent graphical model that combines sparsity with a low-rank structure. We instead use the tag-lasso to learn a sparse aggregated network from the clr-transformed microbiome compositions. While the clr-transform induces dependence between otherwise independent components, Proposition 1 in Cao et al. (2019) provides intuition that, as long as the underlying graphical model is sparse and $p$ is large, these induced dependencies may have minimal effect on the covariance matrix. Future work could more carefully account for the induced dependence, incorporating ideas from Cao et al. (2019) or Kurtz et al. (2019).

Figure 10: Microbiome data. Full precision matrix (left) and aggregated precision matrix (right) estimated by the tag-lasso with an unconstrained five-fold cross-validation (top) and with a cross-validation subject to the constraint that there are at most ten blocks (bottom).

#### Estimated Graphical Model.
We fit the tag-lasso to the full data set and use 5-fold cross-validation to select the tuning parameters. The tag-lasso estimator provides a sparse aggregated graphical model with $K=28$ aggregated blocks (a substantial reduction in nodes from the original $p=104$ OTUs). The top panel of Figure 10 shows the sparsity pattern of the $p\times p$ estimated precision matrix (top left) and of the $K\times K$ estimated aggregated precision matrix (top right). A notable feature of the tag-lasso solution is that it returns a wide range of aggregation levels: the aggregated network consists of 17 individual OTUs, seven nodes aggregated to the genus level (these nodes start with "g_"), three to the family level (these nodes start with "f_"), and one node aggregated to the kingdom level (this node starts with "k_"). Some aggregated nodes, such as the "g_Blautia" node (block 19), contain all OTUs within their taxa; others, indicated with an asterisk like the "k_Bacteria*" node (block 28), have some of their OTUs missing. This latter "block" consists of 18 OTUs from across the phylogenetic tree that are estimated to be conditionally independent of all other OTUs in the data set.

Figure 11: Microbiome data. Left: Aggregated network estimated by the constrained CV version of the tag-lasso. The colour of the nodes is based on their level of aggregation (OTU: pink, genus: orange, family: blue); their width is proportional to the number of OTUs they aggregate. Middle: Network estimated by the glasso. Right: Test errors across the ten replications for the unconstrained (solid black) and constrained (unfilled blue) CV versions of the tag-lasso versus the glasso.

While the tag-lasso determines the aggregation level in a data-driven way through cross-validation, practitioners or researchers may also sometimes wish to restrict the number of blocks $K$ to a pre-determined level when such prior knowledge is available or when this is desirable for interpretability.
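Such a restriction can be implemented as a filter over cross-validated candidate fits; a minimal sketch (the candidate-list interface is hypothetical, not the `taglasso` package API):

```python
def constrained_cv_select(candidates, k_max=10):
    """Among candidate fits, each summarized by its cross-validated error
    and number of aggregated blocks K, return the fit with the smallest
    CV error among those satisfying K <= k_max. The (cv_error, K)
    summaries would come from the tuning-parameter grid."""
    feasible = [c for c in candidates if c["K"] <= k_max]
    if not feasible:
        raise ValueError("no candidate satisfies K <= k_max")
    return min(feasible, key=lambda c: c["cv_error"])
```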
As an illustration, we consider a constrained cross-validation scheme in which we restrict the number of blocks $K$ to at most ten and select the sparsity parameters with the best cross-validated error among those solutions with $K\leq 10$. The bottom panel of Figure 10 shows the sparsity pattern of the full and aggregated precision matrices estimated by this constrained version of the tag-lasso. The resulting network consists of $K=8$ aggregated nodes. The "k_Bacteria*" node now aggregates 78 OTUs that are estimated to be conditionally independent of each other and of all others. The interactions among the remaining nodes are shown in the left panel of Figure 11; this network consists of three OTUs (OTU134, OTU156, and OTU161, in pink), three genera (Prevotella, Bacteroides, and Alistipes, in orange), and one family (Porphyromonadaceae, in blue). The resulting network is much simpler than the one estimated by the glasso, shown in the middle panel of Figure 11. The glasso finds 58 OTUs to be conditionally independent of all others, but the interactions among the remaining 46 OTUs are much more difficult to interpret. The glasso is limited to working at the OTU level, which prevents it from providing insights about interactions that span different levels of the taxonomy.

#### Out-of-sample Performance.

We conduct the same out-of-sample exercise as described in Section 5.1. The right panel of Figure 11 presents the ten test errors (black dots) for the unconstrained CV tag-lasso and the glasso. In all but one case, the tag-lasso leads to a better fit than the glasso, suggesting that it is better suited for modeling the conditional dependencies among the OTUs. The unfilled blue dots show the same for the constrained CV tag-lasso. In all ten cases, it underperforms the unconstrained CV tag-lasso (see the shift to the right on the horizontal axis); however, its performance is on a par with the glasso, with test errors close to the 45-degree line.
Thus, there does not appear to be a cost in out-of-sample performance for the interpretability gains of the constrained tag-lasso over the glasso.

## 6 Conclusion

Detecting conditional dependencies between variables, as represented in a graphical model, forms a cornerstone of multivariate data analysis. However, graphical models, characterized by a set of nodes and edges, can quickly explode in dimensionality due to the ever-increasing fine-grained levels of resolution at which data are measured. In many applications, a tree is available that organizes the measured variables into various meaningful levels of resolution. In this work, we introduce the tag-lasso, a novel estimation procedure for graphical models that curbs this curse of dimensionality through joint node and edge dimension reduction, leveraging this tree as side information. Node dimension reduction is achieved by a penalty that allows nodes to be aggregated according to the tree structure; edge dimension reduction is achieved through a standard sparsity-inducing penalty. As such, the tag-lasso generalizes the popular glasso approach to sparse graphical modelling. An `R` package called `taglasso` implements the proposed method and is available on the GitHub page of the first author.

#### Acknowledgments

We thank Christian Müller for useful discussions. Jacob Bien was supported in part by NSF CAREER Award DMS-1653017 and NIH Grant R01GM123993.

## References

* Aitchison (1982) Aitchison, J. (1982), "The statistical analysis of compositional data," Journal of the Royal Statistical Society: Series B (Methodological), 44, 139–160.
* Banerjee et al. (2008) Banerjee, O.; Ghaoui, L. E. and d'Aspremont, A. (2008), "Model selection through sparse maximum likelihood estimation for multivariate Gaussian or binary data," Journal of Machine Learning Research, 9, 485–516.
* Belloni and Chernozhukov (2013) Belloni, A. and Chernozhukov, V.
(2013), "Least squares after model selection in high-dimensional sparse models," Bernoulli, 19, 521–547.
* Bien (2016) Bien, J. (2016), "The simulator: an engine to streamline simulations," arXiv preprint arXiv:1607.00021.
* Bien et al. (2020) Bien, J.; Yan, X.; Simpson, L. and Müller, C. L. (2020), "Tree-Aggregated Predictive Modeling of Microbiome Data," bioRxiv.
* Boyd et al. (2011) Boyd, S.; Parikh, N.; Chu, E.; Peleato, B. and Eckstein, J. (2011), "Distributed optimization and statistical learning via the alternating direction method of multipliers," Found. Trends Mach. Learn., 3, 1–122.
* Brownlees et al. (2020) Brownlees, C.; Gumundsson, G. S. and Lugosi, G. (2020), "Community detection in partial correlation network models," Journal of Business & Economic Statistics, 1–33.
* Bunea et al. (2020) Bunea, F.; Giraud, C.; Luo, X.; Royer, M. and Verzelen, N. (2020), "Model assisted variable clustering: minimax-optimal recovery and algorithms," The Annals of Statistics, 48, 111–137.
* Cai et al. (2011) Cai, T.; Liu, W. and Luo, X. (2011), "A constrained $\ell_{1}$ minimization approach to sparse precision matrix estimation," Journal of the American Statistical Association, 106, 594–607.
* Cai et al. (2016) Cai, T. T.; Liu, W. and Zhou, H. H. (2016), "Estimating sparse precision matrix: Optimal rates of convergence and adaptive estimation," The Annals of Statistics, 44, 455–488.
* Callahan et al. (2017) Callahan, B. J.; McMurdie, P. J. and Holmes, S. P. (2017), "Exact sequence variants should replace operational taxonomic units in marker-gene data analysis," The ISME Journal, 11, 2639–2643.
* Cao et al. (2019) Cao, Y.; Lin, W. and Li, H. (2019), "Large covariance estimation for compositional data via composition-adjusted thresholding," Journal of the American Statistical Association, 114, 759–772.
* Chandrasekaran et al. (2012) Chandrasekaran, V.; Parrilo, P. A. and Willsky, A. S.
(2012), "Latent variable graphical model selection via convex optimization," The Annals of Statistics, 40, 1935–1967.
* Corsi (2009) Corsi, F. (2009), "A simple approximate long-memory model of realized volatility," Journal of Financial Econometrics, 7(2), 174–196.
* Eisenach et al. (2020) Eisenach, C.; Bunea, F.; Ning, Y. and Dinicu, C. (2020), "High-Dimensional Inference for Cluster-Based Graphical Models," Journal of Machine Learning Research, 21, 1–55.
* Friedman et al. (2008) Friedman, J.; Hastie, T. and Tibshirani, R. (2008), "Sparse inverse covariance estimation with the graphical lasso," Biostatistics, 9, 432–441.
* Henderson and Searle (1981) Henderson, H. V. and Searle, S. R. (1981), "On deriving the inverse of a sum of matrices," SIAM Review, 23, 53–60.
* Hubert and Arabie (1985) Hubert, L. and Arabie, P. (1985), "Comparing partitions," Journal of Classification, 2, 193–218.
* Kurtz et al. (2019) Kurtz, Z. D.; Bonneau, R. and Müller, C. L. (2019), "Disentangling microbial associations from hidden environmental and technical factors via latent graphical models," bioRxiv.
* Kurtz et al. (2015) Kurtz, Z. D.; Müller, C. L.; Miraldi, E. R.; Littman, D. R.; Blaser, M. J. and Bonneau, R. A. (2015), "Sparse and compositionally robust inference of microbial ecological networks," PLoS Comput Biol, 11, e1004226.
* Lo and Marculescu (2018) Lo, C. and Marculescu, R. (2018), "PGLasso: Microbial Community Detection through Phylogenetic Graphical Lasso," arXiv preprint arXiv:1807.08039.
* Meinshausen and Bühlmann (2006) Meinshausen, N. and Bühlmann, P. (2006), "High-dimensional graphs and variable selection with the lasso," The Annals of Statistics, 34, 1436–1462.
* Millington and Niranjan (2019) Millington, T. and Niranjan, M. (2019), "Quantifying influence in financial markets via partial correlation network inference," in 2019 11th International Symposium on Image and Signal Processing and Analysis (ISPA), IEEE, pp. 306–311.
* Peng et al.
(2009) Peng, J.; Wang, P.; Zhou, N. and Zhu, J. (2009), "Partial correlation estimation by joint sparse regression models," Journal of the American Statistical Association, 104, 735–746.
* Pircalabelu and Claeskens (2020) Pircalabelu, E. and Claeskens, G. (2020), "Community-Based Group Graphical Lasso," Journal of Machine Learning Research, 21, 1–32.
* R Core Team (2017) R Core Team (2017), R: A Language and Environment for Statistical Computing. R Foundation for Statistical Computing, Vienna, Austria.
* Rand (1971) Rand, W. M. (1971), "Objective criteria for the evaluation of clustering methods," Journal of the American Statistical Association, 66, 846–850.
* Rivera-Pinto et al. (2018) Rivera-Pinto, J.; Egozcue, J. J.; Pawlowsky-Glahn, V.; Paredes, R.; Noguera-Julian, M. and Calle, M. L. (2018), "Balances: a New Perspective for Microbiome Analysis," mSystems, 3, 1–12.
* Rothman et al. (2008) Rothman, A. J.; Bickel, P. J.; Levina, E. and Zhu, J. (2008), "Sparse permutation invariant covariance estimation," Electronic Journal of Statistics, 2, 494–515.
* Tan et al. (2015) Tan, K. M.; Witten, D. and Shojaie, A. (2015), "The cluster graphical lasso for improved estimation of Gaussian graphical models," Computational Statistics & Data Analysis, 85, 23–36.
* Xu et al. (2017) Xu, Y.; Liu, M.; Lin, Q. and Yang, T. (2017), "ADMM without a fixed penalty parameter: Faster convergence with new adaptive penalization," in Advances in Neural Information Processing Systems, pp. 1267–1277.
* Yan and Bien (2020) Yan, X. and Bien, J. (2020), "Rare feature selection in high dimensions," Journal of the American Statistical Association, doi:10.1080/01621459.2020.1796677.
* Yuan (2010) Yuan, M. (2010), "High dimensional inverse covariance matrix estimation via linear programming," Journal of Machine Learning Research, 11, 2261–2286.
* Yuan and Lin (2007) Yuan, M. and Lin, Y. (2007), "Model selection and estimation in the Gaussian graphical model," Biometrika, 94, 19–35.
## Appendices

## Appendix A Proof of Proposition 2.1

###### Proof.

First, note that ${\bf\widetilde{X}}$ follows a $K$-dimensional multivariate normal distribution with mean zero and covariance matrix ${\bf M}^{\top}({\bf D}+{\bf M}{\bf C}{\bf M}^{\top})^{-1}{\bf M}$. Next, we re-write this covariance matrix by two successive applications of equation (23) in Henderson and Searle (1981): $\displaystyle{\bf M}^{\top}({\bf D}+{\bf M}{\bf C}{\bf M}^{\top})^{-1}{\bf M}$ $\displaystyle=$ $\displaystyle{\bf M^{\top}}{\bf D}^{-1}{\bf M}-{\bf M^{\top}}{\bf D}^{-1}{\bf M}({\bf I}+{\bf C}{\bf M^{\top}}{\bf D}^{-1}{\bf M})^{-1}{\bf C}{\bf M^{\top}}{\bf D}^{-1}{\bf M}$ $\displaystyle=$ $\displaystyle\left([{\bf M^{\top}}{\bf D}^{-1}{\bf M}]^{-1}+{\bf C}\right)^{-1}.$ Hence, the precision matrix of ${\bf\widetilde{X}}$ is given by $({\bf M^{\top}}{\bf D}^{-1}{\bf M})^{-1}+{\bf C}$. Now, since $({\bf M^{\top}}{\bf D}^{-1}{\bf M})^{-1}$ is diagonal, $c_{ij}=0\Leftrightarrow\widetilde{X}_{i}\ \bot\ \widetilde{X}_{j}|{\bf\widetilde{X}}_{-\\{i,j\\}}$ for any $i,j=1,\ldots,K$, with ${\bf\widetilde{X}}_{-\\{i,j\\}}$ containing all aggregated variables except for aggregates $i$ and $j$.
∎

## Appendix B Details of the LA-ADMM Algorithm

The augmented Lagrangian of (5) is given by $\displaystyle-\text{logdet}(\boldsymbol{\Omega}^{(1)})+\text{tr}({\bf S}\boldsymbol{\Omega}^{(1)})+1_{\infty}\\{\boldsymbol{\Omega}^{(1)}={\boldsymbol{\Omega}^{(1)}}^{\top},\boldsymbol{\Omega}^{(1)}\succ{\bf 0}\\}+\langle{\bf U}^{(1)},{\boldsymbol{\Omega}}^{(1)}-{\boldsymbol{\Omega}}\rangle+\dfrac{\rho}{2}\|{\boldsymbol{\Omega}}^{(1)}-{\boldsymbol{\Omega}}\|^{2}_{F}$ (8) $\displaystyle+$ $\displaystyle\lambda_{1}\|\boldsymbol{\Gamma}^{(1)}_{-r}\|_{2,1}+1_{\infty}\\{\boldsymbol{\gamma}_{r}^{(1)}=\gamma^{(1)}{\bf 1}_{p}\\}\ +\langle{\bf U}^{(4)},{\boldsymbol{\Gamma}}^{(1)}-{\boldsymbol{\Gamma}}\rangle+\dfrac{\rho}{2}\|{\boldsymbol{\Gamma}}^{(1)}-{\boldsymbol{\Gamma}}\|^{2}_{F}$ $\displaystyle+$ $\displaystyle 1_{\infty}\\{\boldsymbol{\Omega}^{(2)}={\bf A}\boldsymbol{\Gamma}^{(2)}+{\bf D},{\bf D}\ \text{diag.},D_{jj}\geq 0\\}+\langle{\bf U}^{(2)},{\boldsymbol{\Omega}}^{(2)}-{\boldsymbol{\Omega}}\rangle+\dfrac{\rho}{2}\|{\boldsymbol{\Omega}}^{(2)}-{\boldsymbol{\Omega}}\|^{2}_{F}+\langle{\bf U}^{(5)},{\boldsymbol{\Gamma}}^{(2)}-{\boldsymbol{\Gamma}}\rangle+\dfrac{\rho}{2}\|{\boldsymbol{\Gamma}}^{(2)}-{\boldsymbol{\Gamma}}\|^{2}_{F}$ $\displaystyle+$ $\displaystyle\lambda_{2}\|\boldsymbol{\Omega}^{-\text{diag}(3)}\|_{1}+\langle{\bf U}^{(3)},{\boldsymbol{\Omega}}^{(3)}-{\boldsymbol{\Omega}}\rangle+\dfrac{\rho}{2}\|{\boldsymbol{\Omega}}^{(3)}-{\boldsymbol{\Omega}}\|^{2}_{F},$ where ${\bf U}^{(i)}$ (for $i=1,\ldots,5$) are the dual variables, and $\rho$ is a penalty parameter. Note that equation (8) is of the same form as Equation (3.1) in Boyd et al.
(2011) and thus involves iterating three basic steps: (i) minimization with respect to $(\boldsymbol{\Omega}^{(1)},\boldsymbol{\Omega}^{(2)},\boldsymbol{\Omega}^{(3)},\boldsymbol{\Gamma}^{(1)},\boldsymbol{\Gamma}^{(2)},{\bf D})$, (ii) minimization with respect to $(\boldsymbol{\Omega},\boldsymbol{\Gamma})$, and (iii) update of $({\bf U}^{(1)},\ldots,{\bf U}^{(5)})$. Step (i) decouples into four independent problems, whose solutions are worked out in Sections B.1-B.4. Step (ii) involves the minimization of a differentiable function of ${\bf\Omega}$ and ${\bf\Gamma}$ and boils down to the calculation of simple averages, as shown in Section B.5. Step (iii)’s update of the dual variables is provided in B.6. Algorithms 1-2 then provide an overview of the LA-ADMM algorithm to solve problem (5). We use the LA-ADMM algorithm with $\rho_{1}=0.01,\ \texttt{T}_{\text{stages}}=10,\ \texttt{maxit}=100$. Algorithm 1 ADMM Input: ${\bf S},{\bf A},p,|\mathcal{T}|,\lambda_{1},\lambda_{2},\rho,\texttt{maxit},\boldsymbol{\Omega}_{0},\boldsymbol{\Gamma}_{0}.$ Initialization: Set * $\widehat{{\boldsymbol{\Omega}}}^{(i)}_{0}\leftarrow\widehat{{\bf U}}^{(i)}_{0}\leftarrow\boldsymbol{\Omega}_{0}\ \hskip 11.38092pt\text{for}\ i=1,\ldots,3$ * $\widehat{{\boldsymbol{\Gamma}}}^{(j)}_{0}\leftarrow\widehat{{\bf U}}^{(j+3)}_{0}\leftarrow\boldsymbol{\Gamma}_{0}\ \text{for}\ j=1,\ldots,2$ * $k\leftarrow 0$ for $k\leq{\tt maxit}$ do * $k\leftarrow k+1$ * $\widehat{{\boldsymbol{\Omega}}}_{k}^{(1)}\leftarrow{\bf Q}{\bar{\boldsymbol{\Omega}}}_{k-1}{\bf Q}^{\top}$, see equation (10). * $\widehat{\Omega}^{(3)}_{k,ij}\leftarrow S({\widehat{\Omega}}_{k-1,ij}-{{\widehat{U}}}_{k-1,ij}^{(3)}/\rho,\ \lambda_{2}/\rho),\forall\ i,j=1,\ldots,p$, see equation (16). * $\boldsymbol{\widehat{\Gamma}}^{(1)}_{k,j}\leftarrow S_{G}(\boldsymbol{\widehat{\Gamma}}_{k-1,j}-{\bf{\widehat{U}}}_{k-1,j}^{(4)}/\rho,\ \lambda_{1}/\rho),\forall j=1,\ldots,|\mathcal{T}|\backslash\\{r\\}$, see equation (11). 
* $\boldsymbol{\widehat{\Gamma}}^{(1)}_{k,r}\leftarrow\widehat{\gamma}_{k-1}{\bf 1}_{p}$, see equation (11). * $\text{diag}({\bf{\widehat{D}}}_{k})\leftarrow\text{diag}({\bf C}^{\top}{\bf C})^{-1}\text{diag}({\bf B}^{\top}{\bf C})_{+}$, see equation (15). * $\boldsymbol{\widehat{\Gamma}}^{(2)}_{k}\leftarrow({\bf A}^{\top}{\bf A}+{\bf I}_{|\mathcal{T}|})^{-1}({\bf A}^{\top}:{\bf I}_{|\mathcal{T}|})({\bf\widetilde{M}}-{\bf{\widetilde{D}}}_{k})$, see equation (14). * $\widehat{\boldsymbol{\Omega}}^{(2)}_{k}={\bf A}\widehat{\boldsymbol{\Gamma}}^{(2)}_{k}+{\bf{\widehat{D}}}_{k}$, see equation (13) * $\widehat{\boldsymbol{\Omega}}_{k}\leftarrow(\widehat{\boldsymbol{\Omega}}^{(1)}_{k}+\widehat{\boldsymbol{\Omega}}^{(2)}_{k}+\widehat{\boldsymbol{\Omega}}^{(3)}_{k})/3$ * $\widehat{\boldsymbol{\Gamma}}_{k}\leftarrow(\widehat{\boldsymbol{\Gamma}}^{(1)}_{k}+\widehat{\boldsymbol{\Gamma}}^{(2)}_{k})/2$ * $\widehat{\bf U}^{(i)}_{k}\leftarrow\widehat{\bf U}^{(i)}_{k-1}+\rho\left(\widehat{{\boldsymbol{\Omega}}}_{k}^{(i)}-\widehat{{\boldsymbol{\Omega}}}_{k}\right),\ \text{for}\ i=1,\ldots,3$ * $\widehat{\bf U}^{(j+3)}_{k}\leftarrow\widehat{\bf U}^{(j+3)}_{k-1}+\rho\left(\widehat{{\boldsymbol{\Gamma}}}_{k}^{(j)}-\widehat{{\boldsymbol{\Gamma}}}_{k}\right),\ \text{for}\ j=1,\ldots,2$ end for Output: $\widehat{{\boldsymbol{\Omega}}}_{\texttt{maxit}},\ \widehat{{\boldsymbol{\Gamma}}}_{\texttt{maxit}},\ \widehat{{\bf D}}_{\texttt{maxit}}$ Algorithm 2 LA-ADMM Input: ${\bf S},{\bf A},p,|\mathcal{T}|,\lambda_{1},\lambda_{2},\rho_{1},\texttt{maxit},\texttt{T}_{\text{stages}}$ Initialization: Set * $\widehat{{\boldsymbol{\Omega}}}_{0}\leftarrow{\bf 0}$; $\widehat{{\boldsymbol{\Gamma}}}_{0}\leftarrow{\bf 0}$ * $t\leftarrow 0$ for $t\leq\texttt{T}_{\text{stages}}$ do * $t\leftarrow t+1$ * $(\boldsymbol{\widehat{\Omega}}_{t},\boldsymbol{\widehat{\Gamma}}_{t},\widehat{{\bf D}}_{t})\leftarrow\text{ADMM}({\bf S},{\bf 
A},p,|\mathcal{T}|,\lambda_{1},\lambda_{2},\rho_{t},\texttt{maxit},\widehat{{\boldsymbol{\Omega}}}_{t-1},\widehat{{\boldsymbol{\Gamma}}}_{t-1})$ * $\rho_{t+1}\leftarrow 2\rho_{t}$ end for Output: $\widehat{{\boldsymbol{\Omega}}}_{\texttt{T}_{\text{stages}}},\ \widehat{{\boldsymbol{\Gamma}}}_{\texttt{T}_{\text{stages}}},\ \widehat{{\bf D}}_{\texttt{T}_{\text{stages}}}$ ### B.1 Solving for $\boldsymbol{\Omega}^{(1)}$ Minimizing the augmented Lagrangian with respect to $\boldsymbol{\Omega}^{(1)}$ gives $\displaystyle\widehat{\boldsymbol{\Omega}}^{(1)}_{k+1}$ $\displaystyle:=$ $\displaystyle\underset{\boldsymbol{\Omega}^{(1)}}{\operatorname{argmin}}\\{-\text{logdet}(\boldsymbol{\Omega}^{(1)})+\text{tr}({\bf S}\boldsymbol{\Omega}^{(1)})+\langle{\bf U}^{(1)},{\boldsymbol{\Omega}}^{(1)}-{\boldsymbol{\Omega}}\rangle+\dfrac{\rho}{2}\|{\boldsymbol{\Omega}}^{(1)}-{\boldsymbol{\Omega}}\|^{2}_{F}\ \ \text{s.t.}\ \ \boldsymbol{\Omega}^{(1)}={\boldsymbol{\Omega}^{(1)}}^{\top},\boldsymbol{\Omega}^{(1)}\succ{\bf 0}\\}$ $\displaystyle=$ $\displaystyle\underset{\boldsymbol{\Omega}^{(1)}}{\operatorname{argmin}}\\{-\text{logdet}(\boldsymbol{\Omega}^{(1)})+\text{tr}({\bf S}\boldsymbol{\Omega}^{(1)})+\dfrac{\rho}{2}\|\boldsymbol{\Omega}^{(1)}-(\boldsymbol{\widehat{\Omega}}_{k}-{\bf{\widehat{U}}}_{k}^{(1)}/\rho)\|^{2}_{F}\ \ \text{s.t.}\ \ \boldsymbol{\Omega}^{(1)}={\boldsymbol{\Omega}^{(1)}}^{\top},\boldsymbol{\Omega}^{(1)}\succ{\bf 0}\\}$ The solution should satisfy the first order optimality condition $\rho\boldsymbol{\widehat{\Omega}}^{(1)}_{k+1}-{\boldsymbol{\widehat{\Omega}}_{k+1}^{(1)}}^{-1}=\rho\boldsymbol{\widehat{\Omega}}_{k}-{\bf{\widehat{U}}}_{k}^{(1)}-{\bf S}.$ (9) This means that the eigenvectors of $\boldsymbol{\widehat{\Omega}}^{(1)}_{k+1}$ are the same as the eigenvectors of $\rho\boldsymbol{\widehat{\Omega}}_{k}-{\bf{\widehat{U}}}_{k}^{(1)}-{\bf S}$ and that the eigenvalues of $\boldsymbol{\widehat{\Omega}}^{(1)}_{k+1}$ are a simple function of the eigenvalues of 
$\rho\boldsymbol{\widehat{\Omega}}_{k}-{\bf{\widehat{U}}}_{k}^{(1)}-{\bf S}$. Consider the orthogonal eigenvalue decomposition of right hand side: $\rho\boldsymbol{\widehat{\Omega}}_{k}-{\bf{\widehat{U}}}_{k}^{(1)}-{\bf S}={\bf Q}{\boldsymbol{\Lambda}}{\bf Q}^{\top},$ where ${\boldsymbol{\Lambda}}={\text{\bf diag}}(\delta_{1},\ldots,\delta_{p})$ and ${\bf Q}{\bf Q}^{\top}={\bf Q}^{\top}{\bf Q}={\bf I}$. Multiply (9) by ${\bf Q}^{\top}$ on the left and ${\bf Q}$ on the right $\rho{\boldsymbol{\bar{\Omega}}}^{(1)}_{k+1}-{\boldsymbol{\bar{\Omega}}_{k+1}^{(1)}}^{-1}={\boldsymbol{\Lambda}},\ \text{with}\ {\boldsymbol{\bar{\Omega}}}^{(1)}_{k+1}={\bf Q}^{\top}{\boldsymbol{\widehat{\Omega}}}_{k+1}^{(1)}{\bf Q}.$ Then ${\boldsymbol{\widehat{\Omega}}}_{k+1}^{(1)}={\bf Q}{\boldsymbol{\bar{\Omega}}}_{k+1}^{(1)}{\bf Q}^{\top},\ \text{with}\ {{\bar{\Omega}}}^{(1)}_{k+1,jj}=\dfrac{\delta_{j}+\sqrt{\delta_{j}^{2}+4\rho}}{2\rho}.$ (10) ### B.2 Solving for $\boldsymbol{\Gamma}^{(1)}$ Minimizing the augmented Lagrangian with respect to $\boldsymbol{\Gamma}^{(1)}$ gives $\widehat{\boldsymbol{\Gamma}}^{(1)}_{k+1}:=\underset{\boldsymbol{\Gamma}^{(1)}}{\operatorname{argmin}}\\{\dfrac{\rho}{2}\|\boldsymbol{\Gamma}^{(1)}-(\boldsymbol{\widehat{\Gamma}}_{k}-{\bf{\widehat{U}}}_{k}^{(4)}/\rho)\|^{2}_{F}+\lambda_{1}\|\boldsymbol{\Gamma}^{(1)}_{-r}\|_{2,1}\ \text{s.t.}\ \boldsymbol{\gamma}_{r}^{(1)}=\gamma^{(1)}{\bf 1}_{p}\\}$ The solution is groupwise soft-thresholding: $\boldsymbol{\widehat{\Gamma}}^{(1)}_{k+1,j}=\begin{cases}S_{G}(\boldsymbol{\widehat{\Gamma}}_{k,j}-{\bf{\widehat{U}}}_{k,j}^{(4)}/\rho,\lambda_{1}/\rho),&\text{if}\ j=1,\ldots,|\mathcal{T}|\backslash\\{r\\}\\\ \widehat{\gamma}_{k}{\bf 1}_{p},&\text{if}\ j=r.\end{cases}$ (11) with the group-wise soft-thresholding operator $S_{G}({\boldsymbol{\gamma}},\lambda)=\text{max}(1-\lambda/\|{\boldsymbol{\gamma}}\|_{2},0){\boldsymbol{\gamma}}$ applied to $\boldsymbol{\gamma}\in\mathds{R}^{p}$, and $\widehat{\gamma}_{k}$ is equal to the 
average of the $p$-dimensional vector $\boldsymbol{\widehat{\Gamma}}_{k,r}-{\bf{\widehat{U}}}_{k,r}^{(4)}/\rho$. Note that in this Appendix we use the capitalized $\boldsymbol{\Gamma}_{j}$ notation to index the $j^{\text{th}}$ row of the matrix $\boldsymbol{\Gamma}$ whereas we use lowercase $\boldsymbol{\gamma}_{u}$ when indexing a node $u$ based on the tree structure in Section 2 of the main paper. ### B.3 Solving for $\boldsymbol{\Omega}^{(2)},\boldsymbol{\Gamma}^{(2)},{\bf D}$ Minimizing the augmented Lagrangian with respect to $\boldsymbol{\Omega}^{(2)},\boldsymbol{\Gamma}^{(2)},{\bf D}$ gives $(\widehat{\boldsymbol{\Omega}}^{(2)}_{k+1},\widehat{\boldsymbol{\Gamma}}^{(2)}_{k+1},\widehat{\boldsymbol{D}}_{k+1}):=\underset{\boldsymbol{\Omega}^{(2)},\boldsymbol{\Gamma}^{(2)},{\bf D}}{\operatorname{argmin}}\\{\dfrac{\rho}{2}\|\boldsymbol{\Omega}^{(2)}-(\boldsymbol{\widehat{\Omega}}_{k}-{\bf{\widehat{U}}}_{k}^{(2)}/\rho)\|^{2}_{F}+\dfrac{\rho}{2}\|\boldsymbol{\Gamma}^{(2)}-(\boldsymbol{\widehat{\Gamma}}_{k}-{\bf{\widehat{U}}}_{k}^{(5)}/\rho)\|^{2}_{F}\\\ \text{s.t.}\ \boldsymbol{\Omega}^{(2)}={\bf A}\boldsymbol{\Gamma}^{(2)}+{\bf D},\ {\bf D}\ \text{diagonal},\ D_{jj}\geq 0\ \text{for}\ j=1,\ldots,p,\\}$ (12) The solution $\widehat{\boldsymbol{\Omega}}^{(2)}_{k+1}={\bf A}\widehat{\boldsymbol{\Gamma}}^{(2)}_{k+1}+{\bf{\widehat{D}}}_{k+1}$ (13) is immediate and we are left with $(\widehat{\boldsymbol{\Gamma}}^{(2)}_{k+1},\widehat{\boldsymbol{D}}_{k+1}):=\underset{\boldsymbol{\Gamma}^{(2)},{\bf D}}{\operatorname{argmin}}\\{\dfrac{1}{2}\|{\bf{\widetilde{A}}}\boldsymbol{\Gamma}^{(2)}+{\bf{\widetilde{D}}}-{\bf{\widetilde{M}}}\|^{2}_{F}\ \text{s.t.}\ {\bf D}\ \text{diagonal},\ D_{jj}\geq 0\ \text{for}\ j=1,\ldots,p,\\}$ where we have substituted ${\boldsymbol{\Omega}}^{(2)}={\bf A}{\boldsymbol{\Gamma}}^{(2)}+{\bf{D}}$ and we denote ${\bf{\widetilde{A}}}=\begin{pmatrix}{\bf A}\\\ {{\bf I}_{|\mathcal{T}|}}\end{pmatrix}\in\mathds{R}^{(p+|\mathcal{T}|)\times|\mathcal{T}|},\ 
{\bf{\widetilde{D}}}=\begin{pmatrix}{\bf D}\\\ {{\bf 0}_{|\mathcal{T}|\times p}}\end{pmatrix}\in\mathds{R}^{(p+|\mathcal{T}|)\times p},\ \text{and}\ {\bf{\widetilde{M}}}=\begin{pmatrix}\boldsymbol{\widehat{\Omega}}_{k}-{\bf{\widehat{U}}}_{k}^{(2)}/\rho\\\ \boldsymbol{\widehat{\Gamma}}_{k}-{\bf{\widehat{U}}}_{k}^{(5)}/\rho\end{pmatrix}\in\mathds{R}^{(p+|\mathcal{T}|)\times p}.$ The solution $\displaystyle\boldsymbol{\widehat{\Gamma}}^{(2)}_{k+1}$ $\displaystyle=$ $\displaystyle({\bf{\widetilde{A}}}^{\top}{\bf{\widetilde{A}}})^{-1}{\bf{\widetilde{A}}}^{\top}({\bf{\widetilde{M}}}-{\bf{\widetilde{D}}}_{k+1})$ (14) $\displaystyle=$ $\displaystyle({\bf A}^{\top}{\bf A}+{\bf I}_{|\mathcal{T}|})^{-1}({\bf A}^{\top}:{\bf I}_{|\mathcal{T}|})({\bf\widetilde{M}}-{\bf{\widetilde{D}}}_{k+1})$ is immediate and we are left with $\displaystyle\widehat{\boldsymbol{D}}_{k+1}$ $\displaystyle:=$ $\displaystyle\underset{{\bf D}}{\operatorname{argmin}}\\{\dfrac{1}{2}\|({\bf{\widetilde{M}}}-{\bf{\widetilde{D}}})-{\bf{\widetilde{A}}}({\bf{\widetilde{A}}}^{\top}{\bf{\widetilde{A}}})^{-1}{\bf{\widetilde{A}}}^{\top}({\bf{\widetilde{M}}}-{\bf{\widetilde{D}}})\|^{2}_{F}\ \text{s.t.}\ {\bf D}\ \text{diag.},\ D_{jj}\geq 0\ \text{for}\ j=1,\ldots,p,\\}$ $\displaystyle=$ $\displaystyle\underset{{\bf D}}{\operatorname{argmin}}\\{\dfrac{1}{2}\|({\bf I}_{p+|\mathcal{T}|}-{\bf{\widetilde{A}}}({\bf{\widetilde{A}}}^{\top}{\bf{\widetilde{A}}})^{-1}{\bf{\widetilde{A}}}^{\top})({\bf{\widetilde{M}}}-{\bf{\widetilde{D}}})\|^{2}_{F}\ \text{s.t.}\ {\bf D}\ \text{diag.},\ D_{jj}\geq 0\ \text{for}\ j=1,\ldots,p,\\}$ $\displaystyle=$ $\displaystyle\underset{{\bf D}}{\operatorname{argmin}}\\{\dfrac{1}{2}\|{\bf B}-{\bf C}{\bf{D}}\|^{2}_{F}\ \text{s.t.}\ {\bf D}\ \text{diag.},\ D_{jj}\geq 0\ \text{for}\ j=1,\ldots,p,\\}$ with ${\bf B}=({{\bf 
I}_{p+|\mathcal{T}|}}-{\bf{\widetilde{A}}}({\bf{\widetilde{A}}}^{\top}{\bf{\widetilde{A}}})^{-1}{\bf{\widetilde{A}}}^{\top}){\bf{\widetilde{M}}}\in\mathds{R}^{(p+|\mathcal{T}|)\times p},\ {\bf C}=({{\bf I}_{p}}:{{\bf 0}_{p\times|\mathcal{T}|}})^{\top}-{\bf{\widetilde{A}}}({\bf{\widetilde{A}}}^{\top}{\bf{\widetilde{A}}})^{-1}{\bf{A}}^{\top}\in\mathds{R}^{(p+|\mathcal{T}|)\times p}$. The solution is $\text{diag}({\bf{\widehat{D}}}_{k+1})=\text{diag}({\bf C}^{\top}{\bf C})^{-1}\text{diag}({\bf B}^{\top}{\bf C})_{+}.$ (15) ### B.4 Solving for $\boldsymbol{\Omega}^{(3)}$ Minimizing the augmented Lagrangian with respect to $\boldsymbol{\Omega}^{(3)}$ gives $\displaystyle\widehat{\boldsymbol{\Omega}}^{(3)}_{k+1}$ $\displaystyle:=$ $\displaystyle\underset{\boldsymbol{\Omega}^{(3)}}{\operatorname{argmin}}\\{\langle{\bf U}^{(3)},{\boldsymbol{\Omega}}^{(3)}-{\boldsymbol{\Omega}}\rangle+\dfrac{\rho}{2}\|{\boldsymbol{\Omega}}^{(3)}-{\boldsymbol{\Omega}}\|^{2}_{F}+\lambda_{2}\|\boldsymbol{\Omega}^{-\text{diag}(3)}\|_{1}\\}$ $\displaystyle=$ $\displaystyle\underset{\boldsymbol{\Omega}^{(3)}}{\operatorname{argmin}}\\{\dfrac{\rho}{2}\|\boldsymbol{\Omega}^{(3)}-(\boldsymbol{\widehat{\Omega}}_{k}-{\bf{\widehat{U}}}_{k}^{(3)}/\rho)\|^{2}_{F}+\dfrac{1}{2\rho}\|{\bf U}^{(3)}\|_{F}^{2}+\lambda_{2}\|\boldsymbol{\Omega}^{-\text{diag}(3)}\|_{1}\\}$ $\displaystyle=$ $\displaystyle\underset{\boldsymbol{\Omega}^{(3)}}{\operatorname{argmin}}\\{\dfrac{\rho}{2}\|\boldsymbol{\Omega}^{(3)}-(\boldsymbol{\widehat{\Omega}}_{k}-{\bf{\widehat{U}}}_{k}^{(3)}/\rho)\|^{2}_{F}+\lambda_{2}\|\boldsymbol{\Omega}^{-\text{diag}(3)}\|_{1}\\}$ The solution is simply elementwise soft-thresholding: $\widehat{\Omega}^{(3)}_{k+1,ij}=\begin{cases}S({\widehat{\Omega}}_{k,ij}-{{\widehat{U}}}_{k,ij}^{(3)}/\rho,\lambda_{2}/\rho),&\text{if}\ i\neq j\\\ {\widehat{\Omega}}_{k,ij}-{{\widehat{U}}}_{k,ij}^{(3)}/\rho,&\text{if}\ i=j,\\\ \end{cases}$ (16) with the soft-threshold operator 
$S(\omega,\lambda)=\text{sign}(\omega)\text{max}(|\omega|-\lambda,0)$ applied to $\omega\in\mathds{R}$. ### B.5 Update Variables $\boldsymbol{\Omega}$ and $\boldsymbol{\Gamma}$ Minimizing the augmented Lagrangian with respect to variables $\boldsymbol{\Omega}$ and $\boldsymbol{\Gamma}$ gives $\displaystyle\widehat{\boldsymbol{\Omega}}_{k+1}$ $\displaystyle:=$ $\displaystyle\underset{\boldsymbol{\Omega}}{\operatorname{argmin}}\left\\{\sum_{i=1}^{3}\|\boldsymbol{\widehat{\Omega}}^{(i)}_{k+1}-(\boldsymbol{\Omega}-\widehat{{\bf U}}^{(i)}_{k}/\rho)\|_{F}^{2}\right\\}=\bar{{\boldsymbol{\Omega}}}_{k+1}+\dfrac{1}{\rho}\bar{\bf U}^{\Omega}_{k}$ (17) $\displaystyle\widehat{\boldsymbol{\Gamma}}_{k+1}$ $\displaystyle:=$ $\displaystyle\underset{\boldsymbol{\Gamma}}{\operatorname{argmin}}\left\\{\sum_{i=1}^{2}\|\boldsymbol{\widehat{\Gamma}}^{(i)}_{k+1}-(\boldsymbol{\Gamma}-\widehat{{\bf U}}^{(i+3)}_{k}/\rho)\|_{F}^{2}\right\\}=\bar{{\boldsymbol{\Gamma}}}_{k+1}+\dfrac{1}{\rho}\bar{\bf U}^{\Gamma}_{k},$ (18) where $\bar{{\boldsymbol{\Omega}}}_{k}:=\dfrac{\widehat{\boldsymbol{\Omega}}^{(1)}_{k}+\widehat{\boldsymbol{\Omega}}^{(2)}_{k}+\widehat{\boldsymbol{\Omega}}^{(3)}_{k}}{3},\bar{\bf U}^{\Omega}_{k}:=\dfrac{\widehat{\bf U}^{(1)}_{k}+\widehat{\bf U}^{(2)}_{k}+\widehat{\bf U}^{(3)}_{k}}{3},\bar{{\boldsymbol{\Gamma}}}_{k}:=\dfrac{\widehat{\boldsymbol{\Gamma}}^{(1)}_{k}+\widehat{\boldsymbol{\Gamma}}^{(2)}_{k}}{2},\bar{\bf U}^{\Gamma}_{k}:=\dfrac{\widehat{\bf U}^{(4)}_{k}+\widehat{\bf U}^{(5)}_{k}}{2}.$ ### B.6 Update Dual Variables The updates of the dual variables are given by $\displaystyle\widehat{\bf U}^{(i)}_{k+1}$ $\displaystyle:=$ $\displaystyle\widehat{\bf U}^{(i)}_{k}+\rho\left(\widehat{{\boldsymbol{\Omega}}}_{k+1}^{(i)}-\widehat{{\boldsymbol{\Omega}}}_{k+1}\right),\ \text{for}\ i=1,\ldots,3$ $\displaystyle\widehat{\bf U}^{(j+3)}_{k+1}$ $\displaystyle:=$ $\displaystyle\widehat{\bf 
U}^{(j+3)}_{k}+\rho\left(\widehat{{\boldsymbol{\Gamma}}}_{k+1}^{(j)}-\widehat{{\boldsymbol{\Gamma}}}_{k+1}\right),\ \text{for}\ j=1,\ldots,2.$ Similarly, averaging the first three updates and the latter two gives $\displaystyle\bar{\bf U}^{\Omega}_{k+1}$ $\displaystyle:=$ $\displaystyle\bar{\bf U}^{\Omega}_{k}+\rho\left(\bar{{\boldsymbol{\Omega}}}_{k+1}-\widehat{{\boldsymbol{\Omega}}}_{k+1}\right),$ (19) $\displaystyle\bar{\bf U}^{\Gamma}_{k+1}$ $\displaystyle:=$ $\displaystyle\bar{\bf U}^{\Gamma}_{k}+\rho\left(\bar{{\boldsymbol{\Gamma}}}_{k+1}-\widehat{{\boldsymbol{\Gamma}}}_{k+1}\right).$ (20) Substituting (17) and (18) into (19) and (20) yields that $\bar{\bf U}^{\Omega}_{k+1}=\bar{\bf U}^{\Gamma}_{k+1}={\bf 0}$ after the first iteration.

## Appendix C Additional Simulation Results

Figure 12: Simulation results for increasing number of nodes $p$. Top: Aggregation performance (RI: left; ARI: right); Bottom: Sparsity recovery (FPR: left; FNR: right) of the four estimators.

Figure 13: Simulation results for increasing number of blocks $K$. Top: Aggregation performance (RI: left; ARI: right); Bottom: Sparsity recovery (FPR: left; FNR: right) of the four estimators.

## Appendix D Financial Application: Data Description

Table 4: Financial Application: Data Description, as taken from https://realized.oxford-man.ox.ac.uk/data/assets.
Abbreviation | Description | Location
---|---|---
DJI | Dow Jones Industrial Average | US
IXIC | Nasdaq 100 | US
SPX | S&P 500 Index | US
RUT | Russell 2000 | US
GSPTSE | S&P/TSX Composite index | Canada
BVSP | BVSP BOVESPA Index | Brazil
MXX | IPC Mexico | Mexico
OMXC20 | OMX Copenhagen 20 Index | Denmark
OMXHPI | OMX Helsinki All Share Index | Finland
OMXSPI | OMX Stockholm All Share Index | Sweden
OSEAX | Oslo Exchange All-share Index | Norway
GDAXI | Deutscher Aktienindex | Germany
SSMI | Swiss Stock Market Index | Switzerland
BVLG | Portuguese Stock Index | Portugal
FTMIB | Financial Times Stock Exchange Milano Indice di Borsa | Italy
IBEX | Iberia Index 35 | Spain
SMSI | General Madrid Index | Spain
AEX | Amsterdam Exchange Index | Netherlands
BFX | Bell 20 Index | Belgium
FCHI | Cotation Assistée en Continue 40 | France
FTSE | Financial Times Stock Exchange 100 | UK
STOXX50E | EURO STOXX 50 | Europe
HSI | HANG SENG Index | Hong Kong
KS11 | Korea Composite Stock Price Index (KOSPI) | South Korea
N225 | Nikkei 225 | Japan
SSEC | Shanghai Composite Index | China
STI | Straits Times Index | Singapore
KSE | Karachi SE 100 Index | Pakistan
BSESN | S&P Bombay Stock Exchange Sensitive Index | India
NSEI | NIFTY 50 | India
AORD | All Ordinaries Index | Australia
# Learning User Preferences in Non-Stationary Environments

Wasim Huleihel Soumyabrata Pal Ofer Shayevitz

W. Huleihel is with the Department of Electrical Engineering-Systems at Tel-Aviv University, Tel-Aviv 6997801, Israel (e-mail: wasimh@tauex.tau.ac.il). S. Pal is with the Computer Science Department at the University of Massachusetts Amherst, Amherst, MA 01003, USA (email: spal@cs.umass.edu). O. Shayevitz is with the Department of Electrical Engineering-Systems at Tel-Aviv University, Tel-Aviv 6997801, Israel (email: ofersha@eng.tau.ac.il).

###### Abstract

Recommendation systems often use online collaborative filtering (CF) algorithms to identify items a given user likes over time, based on ratings that this user and a large number of other users have provided in the past. This problem has been studied extensively when users’ preferences do not change over time (the static case), an assumption that is often violated in practical settings. In this paper, we introduce a novel model for online non-stationary recommendation systems which allows for temporal uncertainties in the users’ preferences. For this model, we propose a user-based CF algorithm, and provide a theoretical analysis of its achievable reward. Compared to the related non-stationary multi-armed bandit literature, the main fundamental difficulty in our model lies in the fact that variations in the preferences of a certain user may severely affect the recommendations for other users. We also test our algorithm over real-world datasets, showing its effectiveness in real-world applications. One of the main surprising observations in our experiments is that our algorithm outperforms static algorithms even when preferences do not change over time. This hints toward the general conclusion that, in practice, dynamic algorithms such as the one we propose might be beneficial even in stationary environments.
## 1 Introduction

Recommendation systems provide users with appropriate items based on their revealed preferences, such as ratings. Due to their wide applicability, recommendation systems have received significant attention in the machine learning and data mining communities [53, 45, 47, 6, 36, 48, 31]. In practice, recommendation systems often employ _collaborative filtering_ (CF) [22], for recommending (potentially) liked items to a given user by considering items that other “similar” users liked. There are two main categories of CF algorithms: _user-based_ , e.g., [46, 30, 12, 7, 29], and _item-based_ , e.g., [25, 50, 38, 14], and many references therein. User-based algorithms exploit similarity in the user space by recommending to a particular user $u$ those items which were liked by other similar users. Item-based algorithms, in contrast, exploit similarity in the item space by recommending items similar to those which were liked in the past by a particular user. Prevalent recommendation systems typically operate in an online fashion where items are recommended to users over time, and the obtained ratings are used for future recommendations. Typically, the goal in such a problem is to maximize the number of likes revealed at any time. This problem has been studied extensively, e.g., [12, 14, 29, 13], always under the assumption that users’ preferences do not vary over time. In practice, however, temporal changes in the structure of the users’ preferences are an intrinsic characteristic of the problem, since users change their taste occasionally [27, 39, 42, 26, 33, 52, 5, 40, 43, 23]. Ignoring these changes results in recommendation algorithms which are doomed to failure in practice. This sets the goal of this paper: _we aim to model and understand the effect of a non-stationary environment on online recommendation systems_.
To that end, we introduce a novel latent probabilistic model for online non-stationary recommendation systems, and analyze the reward of an online user-based algorithm. Our model and certain elements of the algorithm are inspired by related static models and algorithms studied in [12, 14, 29]. In a nutshell, each user in our model has a latent probability preference vector which describes the extent to which he/she likes or dislikes each item. Similar users have similar preference vectors (this will be defined rigorously in the following section). At a given time step $t=1,2,\ldots$, the algorithm recommends a single item to each user, typically different for each user, and with probability specified by the corresponding preference vector, the user likes or dislikes the recommended item. Following [12, 14, 29], we impose the constraint that an item that has been rated by a user cannot be recommended to the same user again. This is due to the fact that in many applications, such as recommendation of movies or books, a rating often corresponds to consuming an item, and there is little point in, e.g., recommending a previously purchased product a second time, at least not immediately. While in certain applications this constraint might be unnecessary or irrelevant, it _forces_ our algorithm to exploit the user-item structure for collaboration. In any case, repeating the same recommendations only makes the problem easier algorithmically, and the results of this paper can be generalized to account for this case as well. Finally, to model the non-stationarity in the users’ preferences, we allow users to change their user-type over time.

Main Contributions. Despite the success of CF, there has been no theoretical development to justify its effectiveness in non-stationary environments. The main contributions of this paper are two-fold.
First, we introduce a novel model for general online non-stationary recommendation systems, where we generalize the stationary model introduced in [12] by allowing arbitrary shifts in user preferences over time. Our second main contribution is a theoretical analysis of a user-based CF algorithm that maximizes the number of recommendations that users like. As time evolves, our CF algorithm randomly explores batches of items, one batch at a time, in order to learn users’ preferences of new items in each batch. By splitting the time horizon into an optimal number of batches, our algorithm can start exploiting before having learned the users’ preferences for all items. Furthermore, in each batch, our algorithm tests for variations, and once a change in the preference of a certain user is detected, that user is removed from the exploitation steps. Our results allow us to quantify the “price of non-stationarity”, which mathematically captures the added complexity embedded in changing users’ preferences versus stationary ones. The proposed algorithm achieves near-optimal performance relative to an oracle that recommends all likable items first. Our findings, such as the scaling of the cold-start time on the various parameters, and the effect of a non-stationary environment on recommendation, can inform the design of recommendation algorithms, and refine our understanding of the associated benefits and costs when designing a practical recommendation system.

Related Work. While, to the best of our knowledge, this is the first work that _analytically_ studies temporal changes in the users’ preferences, theoretical results have been established for the stationary setting where there are no changes in the user preferences over time. We next briefly survey these prior works. One of the first asymptotic theoretical results concerning user-based CF was established in [11]. Most related to our approach are the setups and algorithms studied in [12, 14, 29, 13].
In particular, [12] analyzed a user-based algorithm for the online two-class CF problem in a setting similar to ours, while a corresponding item-based algorithm was analyzed in [14]. A probabilistic model in an online setup was studied in [19], while [4] studied a probabilistic model in an offline setup and derived asymptotically optimal performance guarantees for the two-class CF problem. Theoretical guarantees for one-class models were derived in [29]. Another related work is [21], which considered online recommendation systems in the context of the linear multi-armed bandit (MAB) problem [15]. Our setup relates to some variants of the MAB problem. An inherent conceptual difference between our setup and the standard MAB formulation [55] is that in our case an item can be recommended to a user just once, while in MAB an item (or arm) is allowed to be recommended repeatedly. In fact, the solution principle for MAB is to explore for the best item, and once found, keep exploiting (i.e., recommending) it [2, 15]. This observation applies also to clustered bandits [16] or bandits with dependent arms [44]. Specifically, in these formulations the arms are grouped into clusters, and within each cluster arms are correlated. It is assumed, however, that the assignment of arms to clusters is known, while in our setup this information is not available. Another related formulation of MAB is “sleeping bandits” [35], where the availability of arms varies over time in an adversarial manner, while in our setup, the sequence of items that are available is not adversarial. Finally, a more recent related version is the problem of MAB with non-stationary rewards, e.g., [28, 24, 56, 10, 8, 34, 41, 17, 18, 3, 32, 51, 37, 9, 1, 57]. This formulation allows for a broad range of temporal uncertainties in the rewards. While the motivation of this setup is similar to ours, due to the same reasons as above, the results and methods in these papers are quite different from ours.
In particular, the main fundamental difficulty in our model compared to the MAB literature lies in the fact that variations in the preferences of a certain user may severely affect the recommendations for other users.

## 2 Model and Learning Problem

In this section, we introduce the model and the learning problem considered in this paper. We start with the static setting where users’ preferences do not change over time, and then generalize to the dynamic setting.

Static Model. Consider a system with ${\mathsf{N}}$ users and ${\mathsf{M}}$ items. A user may “like” ($+1$) or “dislike” ($-1$) an item. At each time step, each user is recommended an item that he has not consumed yet. Each user $u\in[{\mathsf{N}}]\triangleq\\{1,2,\ldots,{\mathsf{N}}\\}$ is associated with a _latent_ (unknown) preference vector $\mathbf{p}_{u}\in\left[0,1\right]^{{\mathsf{M}}}$, whose entries $p_{ui}$ are the probabilities of user $u$ liking item $i\in[{\mathsf{M}}]$, independently across items. We assume that an item $i$ is either _likable_ for user $u$, i.e., $p_{ui}>1/2$, or _unlikable_ , i.e., $p_{ui}<1/2$. The reward earned by the recommendation system up to any time step is the total number of liked items that have been recommended so far across all users (a precise definition will be given in the sequel). Accordingly, to maximize this reward, likable items should clearly be recommended to a user before unlikable ones. For a particular item $i$, recommended to a user $u$, the observed rating is a random variable $\mathbf{R}_{ui}$, such that $\mathbf{R}_{ui}=1$ with probability $p_{ui}$, and $\mathbf{R}_{ui}=-1$ with probability $1-p_{ui}$. The ratings are assumed random in order to model the fact that users are not fully consistent in their ratings.
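To make the static model concrete, the following NumPy sketch draws one preference vector per user-type and samples $\pm 1$ ratings accordingly. The sizes, the gap `DELTA`, and the way likable entries are drawn are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes only (not the paper's experimental values).
N, M, K = 12, 30, 3        # users, items, latent user-types
DELTA = 0.2                # gap of every p_ui from 1/2

# One preference vector per type: each entry is either likable
# (1/2 + DELTA) or unlikable (1/2 - DELTA).
type_prefs = np.where(rng.random((K, M)) < 0.5, 0.5 + DELTA, 0.5 - DELTA)

user_type = rng.integers(0, K, size=N)   # latent type of each user
P = type_prefs[user_type]                # N x M matrix of p_ui

def rating(u, i):
    """Observed rating R_ui in {+1, -1}: +1 with probability p_ui."""
    return 1 if rng.random() < P[u, i] else -1
```

Users with the same latent type share a row of `type_prefs` exactly, matching the model of [12] described below; the relaxed similarity notion of [29] would replace this with approximately aligned vectors.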
A CF algorithm operates as follows: at each time step $t=0,1,\ldots$, the algorithm recommends a single item $i=\pi_{u,t}\in[{\mathsf{M}}]$ to each user $u\in[{\mathsf{N}}]$, and obtains a realization of the binary random variable $\mathbf{R}_{ui}$ in response. Thus, if $\mathbf{R}_{ui}=1$, user $u$ likes item $i$, and $\mathbf{R}_{ui}=-1$ means that user $u$ does not like item $i$. Either way, as we explain and motivate in the Introduction, once rated by user $u$, item $i$ _will not_ be recommended to that user again. To learn the preference of some user for an item, we need this user to rate that item. However, since we cannot recommend that item to the same user again, the only way to estimate the preferences of a user is through _collaboration_ (e.g., by making inferences from ratings made by other users). In order to make collaborative recommendations based on users’ preferences we must assume some structure/relation between users and/or items. To that end, we study a latent model, in which users are clustered. Specifically, following [4, 19, 12, 14, 29, 13] (and many references therein), we assume that each user belongs to one of ${\mathsf{K}}<{\mathsf{N}}$ _user-types_ , where users of the same type have “similar” item preference vectors. The number of types ${\mathsf{K}}$ represents the heterogeneity in the population. This assumption is common and implicitly invoked by contemporary user-based CF algorithms [49], which perform well in practice. Several empirical justifications for the clustering behavior in the user-item space can be found in, e.g., [54, 12, 29]. There are many ways to define similarity between users. For example, in [20, 4, 19, 13] two users are considered of the same type if they have the same exact ratings, and these rating vectors are generated at random. While this model is perhaps unrealistic and does not capture challenges in real-world datasets, it allows for a neat theoretical analysis. 
A slightly more general and flexible model was proposed in [12]. Here, two users $u$ and $v$ belong to the same type if their corresponding preference vectors ${\mathbf{p}}_{u}$ and ${\mathbf{p}}_{v}$ are the same; nonetheless, their actual ratings might differ. In [29] this assumption was significantly relaxed by assuming instead that the preference vectors belonging to the same type are more similar (in terms of the magnitude of their inner product) than those belonging to other types. Roughly speaking, in this paper we follow this latter assumption, but we relax it even further. The precise statement of our assumptions will be given in the following section. This concludes the presentation of the static model. We next incorporate the non-stationary aspect.

User Variations. As explained in the Introduction, our paper initiates the theoretical investigation of the situation where the preferences of users do not remain static but vary over time. To model this, we allow users to change their user-type over time. To wit, denote the type of user $u\in[{\mathsf{N}}]$ at time $t\in[{\mathsf{T}}]$ by ${\cal T}_{u}(t)\in[{\mathsf{K}}]$. In the sequel, ${\cal T}_{u}(1)$, for any $u\in[{\mathsf{N}}]$, designates the initial clustering of users into types. We assume that each user can change his type an _arbitrary_ number of times, but bound the total number of such variations. Specifically, we define two notions of variation: $\displaystyle\mathsf{V_{1}}$ $\displaystyle\triangleq\max_{u\in[{\mathsf{N}}]}\sum_{t\in[{\mathsf{T}}-1]}\mathds{1}[{\cal T}_{u}(t)\neq{\cal T}_{u}(t+1)],$ (1) and $\displaystyle\mathsf{V_{2}}$ $\displaystyle\triangleq{\mathsf{N}}^{-1}\sum_{t\in[{\mathsf{T}}-1]}\sum_{u\in[{\mathsf{N}}]}\mathds{1}[{\cal T}_{u}(t)\neq{\cal T}_{u}(t+1)],$ (2) where $\mathds{1}[\cdot]$ is the indicator function. These definitions capture the constraint imposed on the non-stationary environment faced by the CF algorithm.
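The variation measures in (1)–(2) are simple functions of the type trajectory; the following sketch computes both on a small made-up trajectory (the example values are illustrative, not from the paper).

```python
import numpy as np

def variation_counts(types):
    """types: T x N array; types[t, u] is the type of user u at step t.
    Returns (V1, V2) as in definitions (1) and (2)."""
    changes = types[:-1] != types[1:]     # (T-1) x N change indicators
    per_user = changes.sum(axis=0)        # variations of each user
    V1 = int(per_user.max())              # max over users
    V2 = per_user.sum() / types.shape[1]  # total variations divided by N
    return V1, V2

# Made-up trajectory: N = 3 users over T = 4 steps.
types = np.array([[0, 1, 1],
                  [1, 1, 1],
                  [0, 1, 1],
                  [0, 2, 1]])
V1, V2 = variation_counts(types)   # user 0 changes twice, user 1 once
```

Here $\mathsf{V_1}=2$ and $\mathsf{V_2}=1$, consistent with $\mathsf{V_1}\leq\mathsf{N}\cdot\mathsf{V_2}$.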
In words, $\mathsf{V}_{1}$ is the maximum number of variations of any single user over ${\mathsf{T}}$ steps, while $\mathsf{N}\mathsf{V}_{2}$ is the total number of variations. In general, $\mathsf{V_{1}}$ and $\mathsf{V_{2}}$ may depend on the time horizon ${\mathsf{T}}$. It turns out that in order to obtain the tightest bound on the reward, both definitions are needed. It is clear that $\mathsf{V_{1}}\leq\mathsf{N}\cdot\mathsf{V}_{2}$. In this paper, we consider the already non-trivial case where the values of (or, at least, upper bounds on) $\mathsf{V}_{1}$ and $\mathsf{V}_{2}$ are _given_ to the learner/algorithm in advance. This is in line with the various settings of classical results in the non-stationary MAB literature, e.g., [10, 8, 41]. Nonetheless, in a similar fashion to recent advances in the MAB literature [34, 3, 18], it is an important and challenging problem of practical importance to propose and analyze algorithms which are oblivious to the number of variations (see Appendix 5).

Learning Problem and Reward. Generally speaking, a reasonable objective for a CF algorithm is to maximize the expected reward up to time ${\mathsf{T}}$, i.e., $\displaystyle\overline{\mathsf{reward}}({\mathsf{T}})\triangleq\mathbb{E}\sum_{t\in[{\mathsf{T}}]}\frac{1}{{\mathsf{N}}}\sum_{u\in[{\mathsf{N}}]}\mathbf{R}_{u\pi_{u,t}},$ where we note that the recommended item $\pi_{u,t}$ is a random variable because it is chosen by the CF algorithm as a function of the responses to recommendations made at previous times. In this paper, however, we focus on recommending _likable_ items.
Following [12, 14, 29, 13], we consider the accumulated reward, defined as the expected total number of liked items that are recommended by an algorithm up to time ${\mathsf{T}}$, i.e., $\displaystyle\mathsf{reward}({\mathsf{T}})\triangleq\mathbb{E}\sum_{t\in[{\mathsf{T}}]}\frac{1}{{\mathsf{N}}}\sum_{u\in[{\mathsf{N}}]}\mathds{1}\left[p_{u\pi_{u,t}}>1/2\right].$ (3) Note that the main difference between these two objectives is that the former prioritizes items according to their probability of being liked, while the latter asks to recommend likable items for a user in an arbitrary order. This is sufficient for many real recommendation systems, such as those for movies and music (see [12, Sec. 2] for a detailed discussion). We would like to emphasize that, depending on the intended application, other metrics can be considered. For example, one may be interested in the number of actual “clicks” rather than the number of “likable” recommendations. Indeed, in some applications, the former is the measurable quantity. Nonetheless, we believe that our algorithms and techniques can handle such a criterion as well.

## 3 Algorithms and Theoretical Guarantees

In this section, we present our algorithm Collaborative, which recommends items to users over time, followed by a formal theoretical statement of its performance. We start with an informal description of our algorithm. The pseudocodes are given in Algorithms 1–3. The algorithm takes as input the parameters $(\alpha,\lambda,\mathsf{T_{static}},\Delta_{{\mathsf{T}}})$, which we shall discuss below. We now describe the proposed algorithm. Below, we use “$t$” to denote the time index at any step of the algorithm. Also, in the sequel, we use ${\cal P}$ to denote an estimated partitioning of the users into clusters, i.e., ${\cal P}$ is a collection of clusters, where each cluster refers to a set of users who have the same estimated user type.

#### Batches.
In Algorithm 1, the time horizon ${\mathsf{T}}$ is partitioned into $\left\lceil{\mathsf{T}}/\Delta_{\mathsf{T}}\right\rceil$ batches of size $\Delta_{\mathsf{T}}$ each, denoted by $\\{{\cal B}_{b}\\}_{b\geq 1}$. We use $\tau$ to denote the time index at which each batch starts, namely, for the $b^{\mathsf{th}}$ batch $\tau=(b-1)\Delta_{\mathsf{T}}$, for $b\in[1,\left\lceil{\mathsf{T}}/\Delta_{\mathsf{T}}\right\rceil]$. At the beginning of each batch, we _restart_ Algorithm Recommend, and run it for the entire batch. Also, at the beginning of each batch, we estimate an initial partition ${\cal P}_{0}$ of the set of all users $[\mathsf{N}]$ using Algorithm 3, which we describe below. At each subsequent time index $t$, the Algorithm Recommend either runs a _test for variations_ with probability (w.p.) $\mathsf{p}_{{\mathsf{T}}}\propto\sqrt{\mathsf{V}_{1}/{\mathsf{T}}}$, or explores-exploits w.p. $1-\mathsf{p}_{{\mathsf{T}}}$.

#### User partitioning in the batch.

The goal of Algorithm 3 is to create a partition of the users into types. To that end, routine $\textsc{Test}(\mathsf{T_{static}},\lambda,{\cal S}_{t-1})$ recommends $\mathsf{T_{static}}\in\mathbb{N}$ random items ${\cal T}_{\mathsf{test}}$ ($\left|{\cal T}_{\mathsf{test}}\right|=\mathsf{T_{static}}$) to all users in ${\cal S}_{t-1}\subseteq[{\mathsf{N}}]$, which is initialized in each batch to be the set of all users $[{\mathsf{N}}]$. Then, using the obtained responses $\left\\{\mathbf{R}_{ui}\right\\}_{u\in[{\cal S}],i\in{\cal T}_{\mathsf{test}}}$, in the second and third steps of this algorithm a partition of the users is created. Specifically, for any pair of distinct users $u,v\in{\cal S}_{t-1}$, we let $\mathsf{X}_{u,v}$ be the number of items for which $u$ and $v$ had the same responses. Let $\mathsf{E}_{u,v}=\mathds{1}\left[\mathsf{X}_{u,v}\geq\lambda\cdot\mathsf{T_{static}}\right]$, for some $\lambda>0$. Accordingly, users $u$ and $v$ are inferred to have the same type if $\mathsf{E}_{u,v}=1$.
Subsequently, if there exists a valid partitioning ${\cal P}$ over the set of users in ${\cal S}_{t-1}$ which is consistent with the variables $\mathsf{E}_{u,v}$, then we declare that ${\cal P}$ is our estimated user-partition; otherwise, we place all users in the same group. Such a valid partitioning exists precisely when the graph with edge set $\mathsf{E}_{u,v}$ is a disjoint union of cliques. Note that this partitioning procedure is equivalent to the cosine similarity test, which declares two users $u$ and $v$ to be neighbors if their cosine similarity is at least as large as some threshold. The values of $\mathsf{T_{static}}$ and $\lambda\in(0,1)$ are specified in Theorem 1.

#### Variations detection in the batch.

Given the partitions ${\cal P}_{0}$ and ${\cal P}$, in step 14 of Algorithm 2 we compare those partitions in order to detect any variations using routine Variation. We show that if the number of user variations is not “too large” in the corresponding batch, then it is possible to draw a one-to-one correspondence between the clusters in ${\cal P}$ and ${\cal P}_{0}$, and therefore, it is possible to identify the users who have changed their clusters, i.e., those who are in a different cluster in ${\cal P}$ than the one in ${\cal P}_{0}$. All such users are declared as “bad” users and are included in the set ${\cal V}_{t}$. We update ${\cal S}_{t}\leftarrow{\cal S}_{t-1}\setminus{\cal V}_{t}$. The users in ${\cal V}_{t}$ will be excluded from future exploitation rounds.

Procedure 1 Collaborative Algorithm for recommending items.
0: Parameters $(\alpha,\lambda,\mathsf{T_{static}},\Delta_{\mathsf{T}})$.
1: Set batch index $b=1$.
2: while $b\leq\left\lceil{\mathsf{T}}/\Delta_{\mathsf{T}}\right\rceil$ do
3: Set $\tau\leftarrow(b-1)\Delta_{\mathsf{T}}$.
4: Call Recommend$(\tau,\alpha,\lambda,\mathsf{T_{static}},\Delta_{\mathsf{T}})$.
5: Set $b\leftarrow b+1$ and return to the beginning of Step 2.
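The partitioning test described in the paragraph above can be sketched as follows; this is an illustrative implementation, not the authors' code. `lam` stands for the threshold $\lambda$, and validity is checked by verifying that every connected component of the agreement graph is a clique (equivalent to the graph being a disjoint union of cliques).

```python
import numpy as np

def partition_test(R, lam):
    """Sketch of the partitioning test.  R: n x T_static matrix of +/-1
    responses of the users in S to the same T_static test items.
    Returns a list of clusters."""
    n, T_static = R.shape
    # X[u, v]: number of test items on which users u and v agree.
    X = (R[:, None, :] == R[None, :, :]).sum(axis=2)
    E = X >= lam * T_static        # E_{u,v} = 1[X_{u,v} >= lam * T_static]
    np.fill_diagonal(E, True)
    # Connected components of the agreement graph.
    comps, unseen = [], set(range(n))
    while unseen:
        stack, comp = [unseen.pop()], set()
        while stack:
            a = stack.pop()
            comp.add(a)
            nbrs = {b for b in unseen if E[a, b]}
            unseen -= nbrs
            stack.extend(nbrs)
        comps.append(sorted(comp))
    # Valid iff every component is a clique; otherwise one big group.
    if all(E[a, b] for c in comps for a in c for b in c):
        return comps
    return [sorted(range(n))]

# Two pairs of identically-rating users, T_static = 4, lam = 0.75.
R = np.array([[1, 1, -1, 1],
              [1, 1, -1, 1],
              [-1, 1, 1, -1],
              [-1, 1, 1, -1]])
clusters = partition_test(R, 0.75)   # recovers {0, 1} and {2, 3}
```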
Procedure 2 Recommend$(\tau,\alpha,\lambda,\mathsf{T_{static}},\Delta_{\mathsf{T}})$
1: Select a random ordering $\sigma$ of the items $[{\mathsf{M}}]$.
2: Define $\mathsf{p_{R}}={\mathsf{N}}^{-\alpha}$ and $\mathsf{p_{T}}=\sqrt{\mathsf{V_{1}}/({\mathsf{T}}\cdot\mathsf{T_{static}})}$.
3: Let $t$ be the time index at any step of the algorithm.
4: ${\cal P}_{0}\leftarrow$Test($\mathsf{T_{static}}$,$\lambda$,$[{\mathsf{N}}]$).
5: Initialize ${\cal S}_{\tau+\mathsf{T_{static}}}\leftarrow[{\mathsf{N}}]$.
6: while $\tau+\mathsf{T_{static}}<t\leq\min({\mathsf{T}},\tau+\Delta_{\mathsf{T}})$ do
7: Draw $\Theta\sim\mathsf{Bern}(\mathsf{p_{T}})$.
8: if $\Theta=0$ then
9: ${\cal S}_{t}\leftarrow{\cal S}_{t-1}$.
10: $\mathbin{\vbox{\hbox{\scalebox{0.75}{$\bullet$}}}}$ With probability $\mathsf{p_{R}}$: $\pi_{u,t}\leftarrow$ a random item that user $u$ has not rated, for all $u\in[{\mathsf{N}}]$.
11: $\mathbin{\vbox{\hbox{\scalebox{0.75}{$\bullet$}}}}$ With probability $1-\mathsf{p_{R}}$: $\pi_{u,t}\leftarrow$ the item $i$ that user $u\in{\cal S}_{t}$ has not rated and that maximizes the score $\hat{p}_{ui}^{(t)}$ in (4).
12: else
13: ${\cal P}\leftarrow$Test($\mathsf{T_{static}}$,$\lambda$,${\cal S}_{t-1}$).
14: ${\cal V}_{t}\leftarrow$Variation$({\cal P},{\cal P}_{0})$.
15: ${\cal S}_{t}\leftarrow{\cal S}_{t-1}\setminus{\cal V}_{t}$.
Procedure 3 Test($\mathsf{T_{static}}$,$\lambda$,${\cal S}_{t-1}$) Algorithm for partitioning users.
1: Recommend $\mathsf{T_{static}}$ random items ${\cal T}_{\mathsf{test}}$ to all users in ${\cal S}_{t-1}$.
2: For any $u\neq v\in{\cal S}_{t-1}$, let $\mathsf{X}_{u,v}$ be the number of items in ${\cal T}_{\mathsf{test}}$ for which $u$ and $v$ had the same responses, and let $\mathsf{E}_{u,v}=\mathds{1}\left[\mathsf{X}_{u,v}\geq\lambda\cdot\mathsf{T_{static}}\right]$.
3: Let ${\cal P}$ be the valid partitioning over users consistent with the variables $\mathsf{E}_{u,v}$. If such a partitioning does not exist, let ${\cal P}\equiv{\cal S}_{t-1}$.
4: Return ${\cal P}$.

Procedure 4 Variation$({\cal P},{\cal P}_{0})$: Algorithm for testing variations.
1: For each cluster ${\cal C}$ in ${\cal P}$, find a cluster ${\cal C}^{\prime}$ in ${\cal P}_{0}$ that shares at least half the users in ${\cal C}$, i.e., $\left|{\cal C}\cap{\cal C}^{\prime}\right|\geq\frac{|{\cal C}|}{2}$.
2: Establish a one-to-one mapping from clusters in ${\cal P}$ to clusters in ${\cal P}_{0}$ in this manner. If such a one-to-one mapping is not possible, return $\emptyset$.
3: Identify the set of users ${\cal V}$ who belong to different clusters in ${\cal P}$ and ${\cal P}_{0}$.
4: Return ${\cal V}$.

#### Exploration-Exploitation.

Since we restart the main algorithm in each batch, we focus on a particular batch $b$ in the explanation below. For ease of notation, we omit the batch index from all definitions. As mentioned above, w.p. $1-\mathsf{p_{T}}$ our algorithm performs an exploration-exploitation routine. In such a case, with probability $\mathsf{p_{R}}={\mathsf{N}}^{-\alpha}$ the algorithm randomly explores the space of items, and with complementary probability $1-\mathsf{p_{R}}$, the algorithm exploits by recommending those items that maximize a certain score. With some abuse of notation, let $\mathbf{R}_{ui}^{(t)}\in\left\\{-1,0,1\right\\}$ be the observed rating of user $u$ for item $i$ up to time $t$ in the $b^{\mathsf{th}}$ batch, where “$0$” means that no rating has been given yet (in the $b^{\mathsf{th}}$ batch). When exploiting, the algorithm evaluates empirical probabilities $\hat{p}_{ui}^{(t)}$, at time $t$, for each user $u\in{\cal S}_{t}$ and item $i$.
These empirical probabilities are defined as follows:
$\displaystyle\hat{p}_{ui}^{(t)}\triangleq\begin{cases}\frac{\sum_{v\in\mathsf{neigh}(u,t)}\mathds{1}\left[\mathbf{R}_{vi}^{(t)}=1\right]}{\mathsf{C}_{ut}},&\text{if }\mathsf{C}_{ut}>0,\\ 1/2,&\text{otherwise},\end{cases}$ (4)
where $\mathsf{C}_{ut}\triangleq\sum_{v\in\mathsf{neigh}(u,t)}\mathds{1}[\mathbf{R}_{vi}^{(t)}\neq 0]$, and the “neighborhood” of user $u\in{\cal S}_{t}$ at time $t$ in the $b^{\mathsf{th}}$ batch is $\mathsf{neigh}(u,t)\triangleq{\cal P}_{0}(u)\cap{\cal S}_{t}$, where ${\cal P}_{0}(u)$ is the set of users in the same cluster as user $u$ in the initial partition ${\cal P}_{0}$ created at the beginning of the batch. Note that the exploitation step at any time index $t$ is performed only for those users present in ${\cal S}_{t}$. Finally, we emphasize that the empirical probabilities $\hat{p}_{ui}^{(t)}$, as well as the neighborhoods $\mathsf{neigh}(u,t)$, are all refreshed at the beginning of each batch; ratings from previous batches are ignored in the evaluation of these quantities.

###### Remark 1.

In practice, we can continue recommending items to any user $u$ in $\mathcal{V}_{t}$ (bad users) based on the items liked by other users who belong to $\mathcal{P}_{0}(u)$.

#### Theoretical performance guarantee.

In the following, we state our main theoretical result, which is a lower bound on the reward in (3) achieved by Algorithm Collaborative. To that end, we introduce three natural and prima facie necessary assumptions, which will be used to establish our result.

* A1 No $\Delta$-ambiguous items. For every user $u\in[{\mathsf{N}}]$ and item $i\in[{\mathsf{M}}]$, there exists a constant $\Delta\in(0,1/2]$ such that $\left|p_{ui}-1/2\right|\geq\Delta$.
* A2 Minimum portion of likeable items. There exists a constant $\mu\in[0,1]$ such that for every user $u\in[{\mathsf{N}}]$, it holds that $\sum_{i=1}^{{\mathsf{M}}}\mathds{1}\left[p_{ui}>1/2\right]\geq\mu{\mathsf{M}}$.
* A3 (In)coherence. There exist constants $\gamma_{2}\geq\gamma_{1}\in[0,1)$ such that if two users $u$ and $v$ are of _different_ types, then $\langle 2{\mathbf{p}}_{u}-\mathbf{1},2{\mathbf{p}}_{v}-\mathbf{1}\rangle\leq 4\gamma_{1}\Delta^{2}{\mathsf{M}}$, and if they are of the _same_ type, then $\langle 2{\mathbf{p}}_{u}-\mathbf{1},2{\mathbf{p}}_{v}-\mathbf{1}\rangle\geq 4\gamma_{2}\Delta^{2}{\mathsf{M}}$. Here $\mathbf{1}$ is the all-ones vector.

In a nutshell, Assumption A1 is necessary to assure that one can infer whether an item is likable or unlikable with a finite number of samples. The parameter $\Delta$ quantifies the inconsistency (or noise), where $\Delta=0$ ($\Delta=1/2$) is the completely noisy (noiseless) case. The second condition states that each user likes at least a fraction $\mu$ of the total items. This assumption is made to avoid degenerate situations where a user $u$ does not like any item. Note that one can always ignore those users whose activity is insignificant, since their contribution to the reward will be insignificant as well. Evidently, from a practical point of view, any real-world recommendation engine must prioritize users whose activity is significant. Notice that relevant literature, such as [12, 29], makes the same assumptions. The more interesting assumption is A3. The incoherence part of assumption A3 requires that the preference vectors of any two users $u$ and $v$ belonging to different user-types are not too similar, so that the cosine similarity test can separate users of different types over time. The parameter $\gamma_{1}$ controls this incoherence; the smaller $\gamma_{1}$ is, the less similar users of different types are. This incoherence assumption appears in [12] as well. The coherence part of assumption A3 asks that any two users of the same user-type share a large fraction of the items that they find likable, and this fraction is controlled by the parameter $\gamma_{2}$.
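As a small numeric illustration of assumption A3 (our own toy values, not taken from the analysis), the inner products $\langle 2{\mathbf{p}}_{u}-\mathbf{1},2{\mathbf{p}}_{v}-\mathbf{1}\rangle$ can be computed directly:

```python
import numpy as np

def a3_inner(p_u, p_v):
    """Inner product <2p_u - 1, 2p_v - 1> appearing in assumption A3."""
    return float(np.dot(2 * p_u - 1, 2 * p_v - 1))

# Toy preference vectors in the noiseless case: Delta = 1/2, M = 8.
M, Delta = 8, 0.5
user_a = np.array([1, 1, 0, 0, 1, 0, 1, 1], dtype=float)  # type-1 user
user_b = np.array([1, 1, 0, 0, 1, 0, 1, 0], dtype=float)  # same type, 1 flip
user_c = 1 - user_a                                        # opposite type

# Same-type pair: 7 agreements, 1 disagreement -> inner product 7 - 1 = 6,
# so the coherence part holds for gamma_2 <= 6 / (4 * Delta**2 * M) = 0.75.
print(a3_inner(user_a, user_b))
# Different-type pair: complete disagreement -> inner product -8, so the
# incoherence bound <= 4 * gamma_1 * Delta**2 * M holds for any gamma_1 >= 0.
print(a3_inner(user_a, user_c))
```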
This coherence assumption should be contrasted with [12], where the preference vectors ${\mathbf{p}}_{u}$ and ${\mathbf{p}}_{v}$ of two users $u$ and $v$ from the same user-type were assumed to be exactly the same, which is evidently a stronger assumption; accordingly, our coherence assumption relaxes that significantly. _We would like to clarify that the above assumptions are only required for the analysis; our proposed algorithm can be implemented regardless of these assumptions._ As is shown in Section 4, in real-world applications, our algorithm works well even if these assumptions do not hold. We next provide two examples where the typical values of the various parameters in assumptions A1–A3 are derived.

###### Example 1.

Consider the noiseless case where $\Delta=1/2$. In this case, the users’ ratings are deterministic given their user-types. Accordingly, we generate ${\mathsf{K}}$ $\mathsf{d}$-dimensional binary vectors $\\{\mathbf{b}_{i}\\}_{i=1}^{{\mathsf{K}}}$, one for each user-type, by randomly drawing $\mathsf{d}$ statistically independent $\mathsf{Bernoulli}(1/2)$ random variables. Here $\mathsf{d}\leq{\mathsf{M}}$ is some parameter. Then, the preference vector of any user in the $\ell$-th user-type (i.e., ${\cal T}_{\ell}$) is the concatenation of $\mathbf{b}_{\ell}$ with ${\mathsf{M}}-\mathsf{d}$ statistically independent $\mathsf{Bernoulli}(1/2)$ random variables. To wit, the preference vector of user $u\in{\cal T}_{\ell}$ is ${\mathbf{p}}_{u}=[\mathbf{b}_{\ell};\mathbf{e}_{u}]$, where $\mathbf{e}_{u}$ is a binary vector whose ${\mathsf{M}}-\mathsf{d}$ elements are statistically independent $\mathsf{Bernoulli}(1/2)$ random variables. Now, for any two users $u$ and $v$ from different user-types, it should be clear that the inner product $\frac{1}{{\mathsf{M}}}\langle 2{\mathbf{p}}_{u}-\mathbf{1},2{\mathbf{p}}_{v}-\mathbf{1}\rangle$ is merely a sum of ${\mathsf{M}}$ Rademacher random variables normalized by ${\mathsf{M}}$.
Accordingly, a standard concentration inequality for sums of Rademacher random variables tells us that the value of this inner product lies in the interval $[-\Theta(\sqrt{\frac{\log{\mathsf{M}}}{{\mathsf{M}}}}),\Theta(\sqrt{\frac{\log{\mathsf{M}}}{{\mathsf{M}}}})]$, with probability at least $1-\mathsf{poly}({\mathsf{M}}^{-1})$. Therefore, for the incoherence condition to hold with high probability we need $\gamma_{1}>\Theta(\sqrt{\frac{\log{\mathsf{M}}}{{\mathsf{M}}}})$. On the other hand, if $u$ and $v$ are from the same user-type, the inner product over the first $\mathsf{d}$ items is maximal (i.e., unity) by construction. Therefore, using the same arguments it can be shown that the value of the above inner product is at least $\frac{\mathsf{d}}{{\mathsf{M}}}-\Theta(\sqrt{\frac{({\mathsf{M}}-\mathsf{d})\log({\mathsf{M}}-\mathsf{d})}{{\mathsf{M}}^{2}}})\geq\frac{\mathsf{d}}{{\mathsf{M}}}-\Theta(\sqrt{\frac{\log{\mathsf{M}}}{{\mathsf{M}}}})$, with high probability. This implies that the coherence condition holds if $\gamma_{2}\leq\frac{\mathsf{d}}{{\mathsf{M}}}-\Theta(\sqrt{\frac{({\mathsf{M}}-\mathsf{d})\log{\mathsf{M}}}{{\mathsf{M}}^{2}}})$. When $\mathsf{d}={\mathsf{M}}$, users of the same user-type have exactly the same preference vectors, and therefore $\gamma_{2}$ can be as large as $1$. Otherwise, there is a certain price depending on how similar the preference vectors are, controlled by $\mathsf{d}$. Finally, the typical value of $\mu$ is clearly around $1/2$ with high probability.

###### Example 2.

We generalize the previous example. Consider the case where each entry of the $\mathsf{d}$-dimensional vectors $\\{\mathbf{b}_{\ell}\\}_{\ell=1}^{{\mathsf{K}}}$ is $\frac{1}{2}+\Delta$ with probability $\mu$ and $\frac{1}{2}-\Delta$ with probability $1-\mu$, for a fixed $\Delta$.
Then, as in the previous example, the preference vector of user $u\in{\cal T}_{\ell}$ is ${\mathbf{p}}_{u}=[\mathbf{b}_{\ell};\mathbf{e}_{u}]$, where $\mathbf{e}_{u}$ is now a random vector whose ${\mathsf{M}}-\mathsf{d}$ elements are statistically independent, each being $\frac{1}{2}+\Delta$ with probability $\mu$ and $\frac{1}{2}-\Delta$ with probability $1-\mu$. Then, using the same arguments as in the previous example, it can be shown that if users $u$ and $v$ are of different user-types, then the incoherence condition holds with high probability when $\gamma_{1}>(1-2\mu)^{2}+\Theta(\sqrt{\frac{\log{\mathsf{M}}}{{\mathsf{M}}}})$. On the other hand, if users $u$ and $v$ are of the same user-type, then the coherence condition holds with high probability when $\gamma_{2}\leq\frac{\mathsf{d}}{{\mathsf{M}}}-(1-2\mu)^{2}-\Theta(\sqrt{\frac{({\mathsf{M}}-\mathsf{d})\log({\mathsf{M}}-\mathsf{d})}{{\mathsf{M}}^{2}}})$. Let $a\vee b\triangleq\max(a,b)$ and $a\wedge b\triangleq\min(a,b)$, for $a,b\in\mathbb{R}$. We are now in a position to state our main result.

###### Theorem 1.

Let $\delta\in(0,1)$ and $\nu\in(0,1)$ be some pre-specified tolerances. Take as input to Collaborative $\alpha\in(0,4/7]$, any $\lambda\in(\lambda_{-},\lambda_{+})$, and $\Delta_{\mathsf{T}}={\mathsf{T}}\wedge\sqrt{\frac{2\nu{\mathsf{T}}}{3\mathsf{V_{2}}}\kappa}$, where $\lambda_{\pm}\triangleq 2\gamma_{1}\Delta^{2}+\frac{1}{2}\pm\Delta^{2}(\gamma_{2}-\gamma_{1})$ and $\kappa\triangleq\mathsf{T_{static}}(1-\delta-\mu)$. Define $\mathsf{T_{static}}\triangleq\frac{2\log(3{\mathsf{N}}^{2}/\delta)}{\Delta^{4}(\gamma_{2}-\gamma_{1})^{2}}$ and $\mathsf{T}_{\mathsf{learn}}\triangleq\left[1\vee\frac{3\mathsf{V_{2}}}{2\nu(1-\delta-\mu)}\right]\mathsf{T}_{\mathsf{static}}.$ Consider the latent source model and assumptions A1–A3.
If at every time point the portion of users belonging to any user-type is at least $\nu$, then, for any ${\mathsf{T}}_{\mathsf{learn}}\leq{\mathsf{T}}\leq\mu\cdot{\mathsf{M}}$, the expected number of liked items recommended by Collaborative up until time ${\mathsf{T}}$ satisfies
$\displaystyle\mathsf{reward}({\mathsf{T}})$ $\displaystyle\geq(1-\delta)\cdot{\mathsf{T}}-\kappa\vee\sqrt{\frac{3\mathsf{V_{2}}{\mathsf{T}}\kappa}{2\nu}}-2\mathsf{V_{1}}\mathsf{T_{static}}-2\sqrt{\mathsf{V_{1}}{\mathsf{T}}\mathsf{T_{static}}}-\frac{3\mathsf{V_{2}}{\mathsf{T}}}{2\nu}\wedge\sqrt{\frac{3\mathsf{V_{2}}{\mathsf{T}}\kappa}{2\nu}}.$ (5)
For ${\mathsf{T}}<{\mathsf{T}}_{\mathsf{learn}}$, the reward satisfies $\mathsf{reward}({\mathsf{T}})\geq\mu\cdot{\mathsf{T}}$.

The proof of Theorem 1 can be found in Appendix A, and we now discuss its implications. For ${\mathsf{T}}<{\mathsf{T}}_{\mathsf{learn}}$, the algorithm may give poor recommendations. This is reasonable, since in the first ${\mathsf{T}}_{\mathsf{learn}}$ rounds mostly random items are recommended, independently of the users’ responses, and thus the reward is at least $\mu\cdot{\mathsf{T}}$. This is the initial phase in which our CF algorithm gives poor recommendations. Then, for ${\mathsf{T}}_{\mathsf{learn}}<{\mathsf{T}}<\mu\cdot{\mathsf{M}}$, the algorithm becomes efficient. Specifically, when $\mathsf{V_{1}}=\mathsf{V_{2}}=0$, we get that $\mathsf{reward}({\mathsf{T}})/{\mathsf{T}}\geq(1-\mathsf{T_{learn}}/{\mathsf{T}})\cdot(1-\delta-\eta)$. Therefore, the proposed algorithm becomes near-optimal, as the achieved reward is $(1-\varepsilon^{\prime})$–close to that of an oracle that recommends only likeable items and thus achieves a reward of ${\mathsf{T}}$. Note that contrary to the multi-armed bandit literature, linear reward is common in collaborative filtering frameworks (see, for example, [12, 29, 13]).
For ${\mathsf{T}}>\mu\cdot{\mathsf{M}}$, on the other hand, one cannot guarantee that likable items remain. In the static case ($\mathsf{V_{1}}=\mathsf{V_{2}}=0$), the learning time (cold-start time) is ${\mathsf{T}}_{\mathsf{learn}}={\mathsf{T}}_{\mathsf{static}}$. Clearly, when $\mathsf{V_{1}},\mathsf{V_{2}}>0$ the reward decreases. Specifically, if both of these parameters scale as $O({\mathsf{T}}^{c})$, for some constant $c\in[0,1]$, then the cost of non-stationarity compared to the static case is of order $O(\sqrt{{\mathsf{T}}^{1+c}})$, a sub-linear cost in ${\mathsf{T}}$. In particular, ignoring the exact dependency of the reward on the various parameters, the scaling of (5) with ${\mathsf{T}}$ is $\mathsf{reward}({\mathsf{T}})/{\mathsf{T}}\geq 1-\delta-O(\sqrt{{\mathsf{T}}^{c-1}})$. This result provides a spectrum of cost orders, ranging between $O(\sqrt{{\mathsf{T}}})$ (constant number of variations) and $O({\mathsf{T}})$ (number of variations is $O({\mathsf{T}})$). This sub-linear growth implies that the long-run average performance of our user-based CF algorithm converges to the performance that would have been achieved in the static environment, where users’ preferences do not vary. Finally, in terms of the learning time, it can be checked that ${\mathsf{T}}_{\mathsf{learn}}=\Theta({\mathsf{T}}^{c}\log({\mathsf{N}}^{2}/\delta))$, and thus the condition ${\mathsf{T}}>{\mathsf{T}}_{\mathsf{learn}}$ boils down to ${\mathsf{T}}>\Theta([\log({\mathsf{N}}^{2}/\delta)]^{\frac{1}{1-c}})$. Therefore, when there are variations, the cold-start time grows, and the scaling of the variations with ${\mathsf{T}}$ dictates the poly-log order of this learning phase. Next, we study the dependency of the learning time in Theorem 1 on $\gamma_{1}$ and $\gamma_{2}$, for Example 1 above. It can be seen that the learning time depends on these parameters via the term $(\gamma_{2}-\gamma_{1})^{-2}$.
Accordingly, when $\mathsf{d}={\mathsf{M}}$ in Example 1, we get that $(\gamma_{2}-\gamma_{1})^{-2}$ is of constant order by taking $\gamma_{1}=\Theta(\sqrt{\frac{\log{\mathsf{M}}}{{\mathsf{M}}}})$ and $\gamma_{2}=1$. In fact, it is evident that this is true when $\mathsf{d}=\Theta({\mathsf{M}})$ as well; thus, if any two users from the same user-type share a constant fraction of the total number of items that they find likeable, then this has only a multiplicative constant effect on the learning time. If, however, $\mathsf{d}=o({\mathsf{M}})$, say $\mathsf{d}=O({\mathsf{M}}^{q})$ for $q\in(0,1/2)$, then we can set $\gamma_{2}=\Theta({\mathsf{M}}^{-c})$, and accordingly, ${\mathsf{T}}_{\mathsf{static}}$ will scale as ${\mathsf{M}}^{2q}\cdot\log({\mathsf{N}}^{2}/\delta)$, for $\alpha\to 0$. Accordingly, we see that when only a negligible number of common items are likeable by users of the same type, the learning time is significantly larger, as expected. Below, we briefly mention some of the technical challenges encountered in the proof of Theorem 1. First, we establish a connection between the static reward and the reward of a dynamic oracle in the non-stationary setting. This connection is general and can be used in other static recommendation-system models that incorporate a non-stationary environment. One of the main difficulties in the proof of Theorem 1 is the analysis of how a variation in the preference of some user affects other users in its estimated neighborhood. Unless this user is detected by Algorithm 3, the algorithm is unaware of this user’s change, and it will keep using this user’s feedback to make recommendations for other users. This is one source of added regret, incurred by unsuccessful and incorrect detections of the change-points. Other sources of regret are the cost associated with the detection/testing delay, and the regret incurred by variations happening during testing.
These costs are captured by the third and fourth terms on the R.H.S. of (5).

## 4 Experiments

We simulate an online recommender system using real-world data in order to understand whether our algorithm performs well even when the data is not generated by the probabilistic model introduced in Section 2. To that end, in a similar vein to [12, 29], we look at movie ratings from the popular Movielens25m dataset (https://grouplens.org/datasets/movielens/25m/), which provides 5-star rating and free-text tagging activity from Movielens, a movie recommendation service. We parsed the first $7$ million ratings for our experiment, and consider only those users who have rated at least $225$ movies, ending up with a total of ${\mathsf{N}}=247$ users. To avoid any kind of bias, we also restrict ourselves to movies which are more or less equally liked and disliked by the users. To that end, we choose those movies whose average rating is between $2.5$ and $3.5$; we found that ${\mathsf{M}}=10149$ such movies exist. Finally, we looked at two genres: Action and Romance. For each user $u\in[{\mathsf{N}}]$, we recover piece-wise stationary preferences by the following steps:

1. We sort the movies rated by user $u$ in ascending order according to the time-stamp.
2. We partition the movies rated by user $u$ into $15$ bins so that each bin contains an equal number of movies. We consider each bin to be a window of time.
3. For each bin, we find the number $\mathsf{a}_{u}\in\mathbb{N}$ of Action movies rated by user $u$, as well as the number $\mathsf{r}_{u}\in\mathbb{N}$ of Romance movies rated by the same user.
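The three preprocessing steps above can be sketched as follows. This is a minimal Python illustration; the function name, argument layout, and the fallback value of $0.5$ for bins containing neither genre are our own choices, not specified in the experiment.

```python
import numpy as np

def binned_genre_preferences(timestamps, is_action, is_romance, n_bins=15):
    """Recover piece-wise stationary genre preferences for a single user.

    timestamps : rating times of the movies the user rated.
    is_action, is_romance : boolean sequences tagging those movies.
    Returns, per bin, the probability a_u / (a_u + r_u) of liking an
    Action-but-not-Romance movie, following the three steps above.
    """
    order = np.argsort(timestamps)            # step 1: sort by time-stamp
    bins = np.array_split(order, n_bins)      # step 2: equal-size time bins
    prefs = []
    for b in bins:                            # step 3: count genres per bin
        a_u = int(np.sum(np.asarray(is_action)[b]))    # Action movies rated
        r_u = int(np.sum(np.asarray(is_romance)[b]))   # Romance movies rated
        # Fallback of 0.5 for bins with neither genre is our own choice.
        prefs.append(a_u / (a_u + r_u) if a_u + r_u > 0 else 0.5)
    return prefs
```

For a user whose first half of rated movies are Action and second half Romance, two bins recover the preferences `[1.0, 0.0]`, i.e., a piece-wise stationary profile of the kind shown in Fig. 1.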
Accordingly, note that in each bin, the probability of user $u$ liking a movie tagged Action but not Romance is $\mathsf{a}_{u}/(\mathsf{a}_{u}+\mathsf{r}_{u})$; the probability of liking a movie tagged Romance but not Action is $\mathsf{r}_{u}/(\mathsf{a}_{u}+\mathsf{r}_{u})$; the probability of liking a movie tagged both Action and Romance is $1$; and finally, the probability of liking a movie with neither of these tags is $0$. We want to point out that we consider the number of Action and Romance movies that were rated by the user, rather than just liked, since any user is biased towards rating the movies he will like (see [29]), and therefore the number of movies rated by the user is a better indicator of his preference towards the genre. Fig. 1 shows the probability of $5$ randomly chosen users liking Action movies across $10$ different bins. It is clear that the preferences exhibit a piece-wise stationary behaviour, and that the variations are significant. We now assume for simplicity that the number of rounds in each bin is $100$ (this value is unknown to the algorithm), and we take the total number of rounds to be ${\mathsf{T}}=600$. In lieu of creating the initial disjoint clusters at the beginning of each batch (i.e., $\mathcal{P}_{0}$), we recommend $\mathsf{T}_{\mathsf{static}}$ randomly chosen items to all users. For each user $u\in[{\mathsf{N}}]$, we take the neighbors of $u$ to be the top $10$ users whose feedback vector has the highest cosine similarity with that of user $u$, over the $\mathsf{T}_{\mathsf{static}}$ recommended items. Further, since ${\mathsf{T}}=600$ is quite small, we do not test for bad users in each batch (namely, we skip lines $13$–$15$ in Algorithm 2). The reasons for this modification are as follows. First, in the theoretical analysis, we have assumed that the ratings of a single bad user can potentially result in faulty recommendations for all other users in their user group.
However, in practice, that might not be the case, as future recommendations are determined by multiple other users who can negate the effect of that bad user. Secondly, as the dataset for our experiment is not very large ($10$ neighbors for each user), detecting bad users based on the ratings of neighbors can be unreliable. Finally, for small $\mathsf{T}$, $\mathsf{T}_{\mathsf{static}}$ is comparatively large, and therefore testing for bad users can potentially bias the accumulated reward towards larger batch-sizes. Nevertheless, as we will show, our experiment clearly demonstrates the dependence on $\Delta_{{\mathsf{T}}}$ and $\mathsf{T}_{\mathsf{static}}$ in the non-stationary setting. We run Algorithm 1 with $\mathsf{T}_{\mathsf{static}}=10$ and $\mathsf{p_{R}}=0.1$, for several different values of the batch-size $\Delta_{\mathsf{T}}$, each for $5$ different iterations. The performance of the algorithms is measured in terms of the average cumulative reward up to time ${\mathsf{T}}$, namely, $\mathsf{acc}\text{-}\mathsf{reward}({\mathsf{T}})\triangleq\sum_{t\in[{\mathsf{T}}]}\frac{1}{{\mathsf{N}}}\sum_{u\in[{\mathsf{N}}]}\mathbf{R}_{u\pi_{u,t}},$ where $\pi_{u,t}$ is the item recommended by the algorithm to user $u$ at time $t$. The average cumulative reward up to time ${\mathsf{T}}$ is given in Table 1. From this table, it is clear that the highest average cumulative reward is obtained when the batch-size is $\Delta_{\mathsf{T}}=100$, and that the reward decreases gradually as the batch-size increases. Finally, note that since we are not detecting bad users in our experiments, the knowledge of $\mathsf{V}_{1}$ is not required ($\mathsf{V}_{1}$ is only used to set $\mathsf{p_{T}}$). Notice that $\mathsf{V}_{2}$ is used to set the batch-size $\Delta_{\mathsf{T}}$ correctly.
Since an incorrect value of $\mathsf{V}_{2}$ results in a sub-optimal value for $\Delta_{\mathsf{T}}$, computing the average cumulative reward by iterating through different values of $\Delta_{\mathsf{T}}$ also gives an idea about the sensitivity of our algorithm with respect to this mis-specification. As can be seen from our results, the highest value of $\mathsf{acc}\text{-}\mathsf{reward}({\mathsf{T}})$ was achieved when $\Delta_{\mathsf{T}}=100$, while $\mathsf{acc}\text{-}\mathsf{reward}({\mathsf{T}})$ degrades gracefully with the mis-specification of $\Delta_{\mathsf{T}}$ (or, equivalently, $\mathsf{V}_{2}$).

| $\Delta_{\mathsf{T}}$ | $\mathsf{acc}\text{-}\mathsf{reward}({\mathsf{T}})$ |
|---|---|
| 50 | 316.707 |
| 100 | 325.716 |
| 150 | 306.538 |
| 200 | 278.219 |
| 300 | 278.642 |
| 350 | 224.893 |
| 400 | 239.410 |
| 450 | 204.127 |
| 500 | 162.96 |
| 550 | 169.97 |
| 600 | 137.40 |

Table 1: Accumulated reward as a function of the batch-size: $\Delta_{\mathsf{T}}=600$ corresponds to the static case, and $\Delta_{\mathsf{T}}=100$ corresponds to the optimal value.

Next, we illustrate the benefit of our algorithm compared to the static algorithm even in a stationary environment. To that end, we run Algorithm 1 with $\Delta_{\mathsf{T}}\in\\{100,600\\}$, ${\mathsf{T}}_{\mathsf{static}}\in\\{10,30,60,80,100\\}$, and assume a single bin of size ${\mathsf{T}}=600$. Our results are presented in Fig. 2, and perhaps surprisingly, Algorithm 1 with $\Delta_{\mathsf{T}}=100$ achieves a better accumulated reward compared to $\Delta_{\mathsf{T}}=600$ (the static algorithm), for small values of ${\mathsf{T}}_{\mathsf{static}}$. The main reason for this phenomenon is that, for Algorithm 1 with $\Delta_{\mathsf{T}}=600$, the neighbors of any user might not be well chosen due to the small value of $\mathsf{T}_{\mathsf{static}}$, in which case the user will receive poor recommendations throughout the entire time frame. On the other hand, running Algorithm 1 with $\Delta_{\mathsf{T}}=100$ restarts Algorithm 2 at periodic intervals.
As a result, the users have a good set of neighbors in some batches and a bad set in others, but the cumulative reward concentrates because the neighbors are independent across the batches. However, the performance of the algorithm with $\Delta_{\mathsf{T}}=600$ improves as ${\mathsf{T}}_{\mathsf{static}}$ gets larger, since the quality of the estimated neighborhood improves. This experiment hints that it is better to restart the recommendation algorithm periodically, i.e., to follow Algorithm 1 (with $\Delta_{{\mathsf{T}}}<{\mathsf{T}}$) even in stationary environments. We would like to emphasize that an insufficient number of samples for the initial clustering results in a worse accumulated reward for $\Delta_{\mathsf{T}}=600$. In practice, however, the number of samples needed for the initial clustering might be difficult to determine a priori. In that situation, we suggest restarting the algorithm periodically with a small value of $\mathsf{T}_{\mathsf{static}}$. Indeed, since the batches are independent, the accumulated reward concentrates due to the law of large numbers.

Figure 1: The probability $\mathsf{a}_{u}/(\mathsf{a}_{u}+\mathsf{r}_{u})$ of user $u$ liking a movie with an Action tag but not a Romance tag, for five different users, across $10$ different bins/windows.

Figure 2: Comparison of the average cumulative reward $\mathsf{acc}\text{-}\mathsf{reward}({\mathsf{T}})$ for batch-size $\Delta_{{\mathsf{T}}}\in\\{100,600\\}$ and ${\mathsf{T}}_{\mathsf{static}}\in\\{10,30,60,80,100\\}$.

Next, we further compare the performance of our algorithm to the static case [12], and to the Popularity Amongst Friends (PAF) algorithm [4]. We consider the same setting as in [12]. In particular, we again quantize movie ratings $\geq 4$ as $+1$ (likable), movie ratings $<3$ as $-1$ (unlikable), and missing ratings as $0$. We consider the top ${\mathsf{N}}=250$ users and the top ${\mathsf{M}}=500$ movies.
This results in $\approx 80\%$ nonzero entries among the total number of entries in the rating matrix. There are, of course, missing entries in the resulting rating matrix. Accordingly, in our simulation, if at a certain time item $i$ is recommended to a user $u$ who has not rated that item, we receive $0$ reward. Despite that, we still treat item $i$ as consumed by user $u$, and accordingly, item $i$ cannot be recommended to user $u$ again. Since we allow algorithms to recommend an item to a given user only once, after ${\mathsf{T}}={\mathsf{M}}=500$ time steps all items have been recommended to all users. As before, the performance of the algorithms is measured in terms of the average cumulative reward up to time ${\mathsf{T}}$.

Figure 3: The accumulated reward over time achieved by Algorithm Collaborative and the existing recommendation algorithm Popularity Amongst Friends [4], for several values of the variation budget $\mathsf{V}\in\\{0,5,10\\}$, using the Movielens10m dataset.

In the simulation, we run Algorithm Collaborative with three different values of the variation budget $\mathsf{V}=\mathsf{V_{1}}=\mathsf{V_{2}}\in\\{0,5,10\\}$; recall that $\mathsf{V}=0$ corresponds to the static case [12]. The results are given in Fig. 3. It is evident that Algorithm Collaborative significantly outperforms the PAF algorithm, a fact which was already observed in [12]. More importantly, we see that assuming $\mathsf{V}=5$, and accordingly recommending in batches, gives the best results among the considered values of $\mathsf{V}$, and in particular improves upon the static case. Beyond coping with variations in the preferences of users, this can also be attributed to _model mismatch_. To wit, the static algorithm Recommend was designed for a certain probabilistic model, which may not capture certain phenomena in real-world datasets.
Accordingly, it might be the case that the algorithm gets “stuck” on a certain wrong rating trajectory, which hinders the rate at which likeable movies are recommended. Working in batches, thereby letting the algorithm “restart” occasionally, may compensate for this mismatch. Finally, note that the reason for the $\cap$-shape of the obtained curves is that after recommending most of the likable items (around $t\approx 310$), mostly unlikable movies are left to recommend, until we exhaust all possible movies.

## 5 Conclusion and Outlook

In this paper, we introduced a novel model for online non-stationary recommendation systems, where users may change their preferences over time adversarially. For this model, we analyzed the performance of a CF recommendation algorithm, and derived a lower bound on its achievable reward. We hope our work has opened more doors than it closes. Apart from tightening the obtained lower bound on the reward, there are several exciting directions for future work. First, it is of significant importance to tackle the case where the number of variations is unknown. Devising universal algorithms that are oblivious to the degree of non-stationarity, and proving theoretical guarantees for them, is quite challenging (see, for example, the recent papers [34, 3, 18], where the problem of non-stationary MAB with an unknown number of variations was considered). Secondly, it is very interesting and technically challenging to derive information-theoretic upper bounds on the performance (reward) of any CF algorithm for the general model introduced in this paper. The results of this paper can be rather directly generalized to one-class recommendation systems, where users only rate what they like and never reveal what they dislike. It would be interesting to introduce and analyze models that combine content/graph information on top of the collaborative filtering information.
Also, while in this paper our ultimate goal was to design recommender systems that maximize the number of likes, in some applications one might want to take into account other aspects, such as fairness, novelty, and multi-stakeholder objectives. Formally analyzing such aspects has not been done, and is of practical and theoretical importance. Finally, as was mentioned in Section 4, there are several inherent challenges with the standard CF datasets used for simulating (non-stationary) online recommender systems. Implementing a real interactive online recommendation system and testing our algorithms over it is an important step towards a complete understanding of CF-based recommender systems.

## References

* [1] R. Allesiardo, R. Féraud, and O. Maillard. The non-stationary stochastic multi-armed bandit problem. Int J Data Sci Anal, 3:267–283, 2017.
* [2] P. Auer, N. Cesa-Bianchi, and P. Fischer. Finite-time analysis of the multiarmed bandit problem. Machine Learning, 47:235–256, May 2002.
* [3] P. Auer, P. Gajane, and R. Ortner. Adaptively tracking the best bandit arm with an unknown number of distribution changes. In Proceedings of the Thirty-Second Conference on Learning Theory, volume 99, pages 138–158. PMLR, 25–28 Jun 2019.
* [4] K. Barman and O. Dabeer. Analysis of a collaborative filter based on popularity amongst neighbors. IEEE Transactions on Information Theory, 58(12):7110–7134, Dec 2012.
* [5] P. Basile, A. Caputo, M. Degemmis, P. Lops, and G. Semeraro. Modeling short-term preferences in time-aware recommender systems. In UMAP Workshops, 2015.
* [6] R. Bell, Y. Koren, and C. Volinsky. Modeling relationships at multiple scales to improve accuracy of large recommender systems. In Proceedings of the 13th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD ’07, pages 95–104, New York, NY, USA, 2007. ACM.
* [7] A. Bellogin and J. Parapar.
Using graph partitioning techniques for neighbour selection in user-based collaborative filtering. In Proceedings of the Sixth ACM Conference on Recommender Systems, RecSys ’12, pages 213–216. ACM, 2012.
* [8] O. Besbes, Y. Gur, and A. Zeevi. Stochastic multi-armed-bandit problem with non-stationary rewards. In Advances in Neural Information Processing Systems 27, pages 199–207, 2014.
* [9] O. Besbes, Y. Gur, and A. Zeevi. Non-stationary stochastic optimization. Operations Research, 63(5):1227–1244, 2015.
* [10] O. Besbes, Y. Gur, and A. Zeevi. Stochastic multi-armed-bandit problem with non-stationary rewards. In Proceedings of the 20th International Conference on Neural Information Processing Systems, pages 199–207, 2014.
* [11] G. Biau, B. Cadre, and L. Rouvière. Statistical analysis of k-nearest neighbor collaborative recommendation. Ann. Statist., 38(3):1568–1592, June 2010.
* [12] G. Bresler, G. H. Chen, and D. Shah. A latent source model for online collaborative filtering. In Proceedings of the 27th International Conference on Neural Information Processing Systems - Volume 2, NIPS’14, pages 3347–3355, 2014.
* [13] G. Bresler and M. Karzand. Regret bounds and regimes of optimality for user-user and item-item collaborative filtering. arXiv:1711.02198, 2019.
* [14] G. Bresler, D. Shah, and L. F. Voloch. Collaborative filtering with low regret. In Proceedings of the 2016 ACM SIGMETRICS International Conference on Measurement and Modeling of Computer Science, SIGMETRICS ’16, pages 207–220, New York, NY, USA, 2016. ACM.
* [15] S. Bubeck and N. Cesa-Bianchi. Regret analysis of stochastic and nonstochastic multi-armed bandit problems. Foundations and Trends in Machine Learning, 5(1):1–122, 2012.
* [16] L. Bui, R. Johari, and S. Mannor. Clustered bandits. CoRR, 2012.
* [17] Y. Cao, W. Zheng, B. Kveton, and Y. Xie. Nearly optimal adaptive procedure for piecewise-stationary bandit: a change-point detection approach.
In Proceedings of the 22nd International Conference on Artificial Intelligence and Statistics (AISTATS), 2019. * [18] Y. Chen, C.-W. Lee, H. Luo, and C.-Y. Wei. A new algorithm for non-stationary contextual bandits: Efficient, optimal and parameter-free. In Proceedings of the Thirty-Second Conference on Learning Theory, volume 99, pages 696–726. PMLR, 25–28 Jun 2019. * [19] O. Dabeer. Adaptive collaborating filtering: The low noise regime. In 2013 IEEE International Symposium on Information Theory, pages 1197–1201, July 2013. * [20] A. S. Das, M. Datar, A. Garg, and S. Rajaram. Google news personalization: Scalable online collaborative filtering. In Proceedings of the 16th International Conference on World Wide Web, WWW ’07, pages 271–280, New York, NY, USA, 2007. ACM. * [21] Y. Deshpande and A. Montanari. Linear bandits in high dimension and recommendation systems. In Allerton Conference on Communication, Control, and Computation, pages 1750–1754, October 2012. * [22] M. D. Ekstrand, J. T. Riedl, and J. A. Konstan. Collaborative filtering recommender systems. Found. Trends Hum.-Comput. Interact., 4(2):81–173, Feb. 2011. * [23] F. Eskandanian and B. Mobasher. Detecting changes in user preferences using hidden markov models for sequential recommendation tasks. CoRR, abs/1810.00272, 2018. * [24] A. Garivier and E. Moulines. On upper-confidence bound policies for non-stationary bandit problems. 06 2008. * [25] D. L. Gregory, A. J. Jennifer, and A. B. Eric. Collaborative recommendations using item-to-item similarity mappings. U.S. Patent 6266649B1, 1998. * [26] N. Hariri, B. Mobasher, and R. Burke. Context adaptation in interactive recommender systems. In RecSys 2014 - Proceedings of the 8th ACM Conference on Recommender Systems, pages 41–48, 10 2014. * [27] N. Hariri, B. Mobasher, and R. Burke. Adapting to user preference changes in interactive recommendation. 
In Proceedings of the Twenty-Fourth International Joint Conference on Artificial Intelligence (IJCAI 2015), pages 4268–4274, 2015. * [28] C. Hartland, N. Baskiotis, S. Gelly, M. Sebag, and O. Teytaud. Change point detection and meta-bandits for online learning in dynamic environments. 04 2011. * [29] R. Heckel and K. Ramchandran. The sample complexity of online one-class collaborative filtering. In Proceedings of the 34th International Conference on Machine Learning - Volume 70, ICML’17, pages 1452–1460. JMLR.org, 2017. * [30] J. L. Herlocker, J. A. Konstan, A. Borchers, and J. Riedl. An algorithmic framework for performing collaborative filtering. In Proceedings of the 22Nd Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR ’99, pages 230–237, New York, NY, USA, 1999. ACM. * [31] M. Jahrer, A. Töscher, and R. Legenstein. Combining predictions for accurate recommender systems. In Proceedings of the 16th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD ’10, pages 693–702, New York, NY, USA, 2010. ACM. * [32] K.-S. Jun, R. Willett, S. Wright, and R. Nowak. Bilinear bandits with low-rank structure. In Proceedings of the 36th International Conference on Machine Learning, volume 97 of Proceedings of Machine Learning Research, pages 3163–3172. PMLR, 09–15 Jun 2019. * [33] B. Karahodza, H. Supic, and D. Donko. An approach to design of time-aware recommender system based on changes in group user’s preferences. In 2014 X International Symposium on Telecommunications (BIHTEL), pages 1–4, Oct 2014. * [34] Z. S. Karnin and O. Anava. Multi-armed bandits: Competing with optimal sequences. In Advances in Neural Information Processing Systems 29, pages 199–207. 2016. * [35] R. Kleinberg, A. Niculescu-Mizil, and Y. Sharma. Regret bounds for sleeping experts and bandits. volume 80, pages 425–436, 01 2008. * [36] Y. Koren. 
Factorization meets the neighborhood: A multifaceted collaborative filtering model. In Proceedings of the 14th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD ’08, pages 426–434, New York, NY, USA, 2008. ACM. * [37] B. Kveton, C. Szepesvari, A. Rao, Z. Wen, Y. Abbasi-Yadkori, and S. Muthukrishnan. Stochastic low-rank bandits, 2017. * [38] G. Linden, B. Smith, and J. York. Amazon.com recommendations: item-to-item collaborative filtering. IEEE Internet Computing, 7(1):76–80, Jan 2003. * [39] J. Liu, E. Pedersen, and P. Dolan. Personalized news recommendation based on click behavior. In 2010 International Conference on Intelligent User Interfaces, 2010. * [40] X. Liu. Modeling users’ dynamic preference for personalized recommendation. In Proceedings of the 24th International Conference on Artificial Intelligence, IJCAI’15, pages 1785–1791, 2015. * [41] H. Luo, A. Agarwal, and J. Langford. Efficient contextual bandits in non-stationary worlds. In COLT, 2017. * [42] J. L. Moore, S. Chen, T. Joachims, and D. Turnbull. Taste over time: the temporal dynamics of user preferences. In ISMIR, 2013. * [43] S. Mukherjee, H. Lamba, and G. Weikum. Item recommendation with evolving user preferences and experience. CoRR, abs/1705.02519, 2017. * [44] S. Pandey, D. Chakrabarti, and D. Agarwal. Multi-armed bandit problems with dependent arms. pages 721–728, 01 2007. * [45] J. D. M. Rennie and N. Srebro. Fast maximum margin matrix factorization for collaborative prediction. In Proceedings of the 22Nd International Conference on Machine Learning, ICML ’05, pages 713–719, New York, NY, USA, 2005. ACM. * [46] P. Resnick, N. Iacovou, M. Suchak, P. Bergstrom, and J. Riedl. Grouplens: An open architecture for collaborative filtering of netnews. In Proceedings of the 1994 ACM Conference on Computer Supported Cooperative Work, CSCW ’94, pages 175–186, New York, NY, USA, 1994. ACM. * [47] R. Salakhutdinov and A. Mnih. Probabilistic matrix factorization. 
In Proceedings of the 20th International Conference on Neural Information Processing Systems, NIPS’07, pages 1257–1264, USA, 2007. * [48] R. Salakhutdinov and A. Mnih. Bayesian probabilistic matrix factorization using markov chain monte carlo. In Proceedings of the 25th International Conference on Machine Learning, ICML ’08, pages 880–887, New York, NY, USA, 2008. ACM. * [49] B. Sarwar, G. Karypis, J. Konstan, and J. Riedl. Analysis of recommendation algorithms for e-commerce. In Proceedings of the 2Nd ACM Conference on Electronic Commerce, EC ’00, pages 158–167, New York, NY, USA, 2000. ACM. * [50] B. Sarwar, G. Karypis, J. Konstan, and J. Riedl. Item-based collaborative filtering recommendation algorithms. In Proceedings of the 10th International Conference on World Wide Web, WWW ’01, pages 285–295, New York, NY, USA, 2001. ACM. * [51] R. Sen, K. Shanmugam, M. Kocaoglu, A. Dimakis, and S. Shakkottai. Contextual Bandits with Latent Confounders: An NMF Approach. In Proceedings of the 20th International Conference on Artificial Intelligence and Statistics, volume 54 of Proceedings of Machine Learning Research, pages 518–527. PMLR, 20–22 Apr 2017. * [52] F. Shi, C. Ghedira, and J. Marini. Context adaptation for smart recommender systems. IT Professional, 17(6):18–26, Nov 2015. * [53] L. Si and R. Jin. Flexible mixture model for collaborative filtering. In Proceedings of the Twentieth International Conference on International Conference on Machine Learning, ICML’03, pages 704–711. AAAI Press, 2003. * [54] I. Sutskever, J. B. Tenenbaum, and R. R. Salakhutdinov. Modelling relational data using bayesian clustered tensor factorization. In Advances in Neural Information Processing Systems 22, pages 1821–1828. 2009. * [55] W. R. Thompson. On the likelihood that one unknown probability exceeds another in view of the evidence of two samples. Biometrika, 25(3/4):285–294, 1933. * [56] J. Y. Yu and S. Mannor. Piecewise-stationary bandit problems with side observations. 
In Proceedings of the 26th Annual International Conference on Machine Learning, ICML’09, pages 1177–1184, 2009. * [57] P. Zhao, L. Zhang, Y. Jiang, and Z.-H. Zhou. A simple approach for non-stationary linear bandits. In Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics, volume 108 of Proceedings of Machine Learning Research, pages 746–755. PMLR, 2020.

## Appendix A Proof of Theorem 1

To prove Theorem 1, we first establish a few accompanying results. We start with the following lemma, which bounds the probability that users of different (or the same) type give the same response. For this lemma, we assume that users _cannot_ change their type over time, and denote the type of user $u\in[{\mathsf{N}}]$ by ${\cal T}_{u}$.

###### Lemma 1 (Same Response Lemma).

Consider the latent source model and the incoherence Assumption A3. Let $\ell$ be an item chosen uniformly at random from $[{\mathsf{M}}]$. Then, the probability that two users $u$ and $v$ rate $\ell$ in the same way satisfies $\displaystyle\mathbb{P}\left[\mathbf{R}_{u\ell}=\mathbf{R}_{v\ell}|{\cal T}_{u}\neq{\cal T}_{v}\right]\leq 2\gamma_{1}\Delta^{2}+\frac{1}{2},$ (6) for users of different types, and $\displaystyle\mathbb{P}\left[\mathbf{R}_{u\ell}=\mathbf{R}_{v\ell}|{\cal T}_{u}={\cal T}_{v}\right]\geq 2\gamma_{2}\Delta^{2}+\frac{1}{2},$ (7) for users of the same type.

###### Proof.
Notice that for two users $u$ and $v$ belonging to different user groups, the probability in question is $\displaystyle\mathbb{P}\left[\mathbf{R}_{u\ell}=\mathbf{R}_{v\ell}|{\cal T}_{u}\neq{\cal T}_{v}\right]$ $\displaystyle=\frac{1}{{\mathsf{M}}}\sum_{i=1}^{{\mathsf{M}}}\left[p_{u,i}p_{v,i}+(1-p_{u,i})(1-p_{v,i})\right]$ $\displaystyle\quad=\frac{1}{{\mathsf{M}}}\sum_{i=1}^{{\mathsf{M}}}\left[\frac{(2p_{u,i}-1)(2p_{v,i}-1)}{2}+\frac{1}{2}\right]$ $\displaystyle\quad=\frac{1}{{\mathsf{M}}}\langle 2{\mathbf{p}}_{u}-\mathbf{1},2{\mathbf{p}}_{v}-\mathbf{1}\rangle+\frac{1}{2}$ $\displaystyle\quad\leq 2\gamma_{1}\Delta^{2}+\frac{1}{2},$ where the inequality follows from the incoherence Assumption A3. Similarly, for two users of the same type, $\displaystyle\mathbb{P}\left[\mathbf{R}_{u\ell}=\mathbf{R}_{v\ell}|{\cal T}_{u}={\cal T}_{v}\right]$ $\displaystyle=\frac{1}{{\mathsf{M}}}\langle 2{\mathbf{p}}_{u}-\mathbf{1},2{\mathbf{p}}_{v}-\mathbf{1}\rangle+\frac{1}{2}$ $\displaystyle\geq 2\gamma_{2}\Delta^{2}+\frac{1}{2},$ where, again, the inequality follows from the coherence Assumption A3. ∎ The following lemma gives a condition on the number of random recommendations needed for the cosine-similarity test to output the correct clustering with high probability, assuming that no variations happened during the test. We establish a few notations. Let ${\cal T}_{\mathsf{test}}\subseteq[{\mathsf{M}}]$ be a set of $\mathsf{L}$ items chosen uniformly at random from ${\mathsf{M}}$. Let $\mathsf{Y}_{u,v}\in\\{0,1\\}$ be a binary variable indicating whether $(u,v)$ are in the same cluster or not, for $u,v\in[{\mathsf{N}}]$. Using the responses $\\{\mathbf{R}_{u,i}\\}_{u\in[{\mathsf{N}}],i\in{\cal T}_{\mathsf{test}}}$, we would like to infer the values of $\mathsf{Y}_{u,v}$ for all $u,v\in[{\mathsf{N}}]$. 
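The elementary identity at the heart of the proof of Lemma 1, $p_{u,i}p_{v,i}+(1-p_{u,i})(1-p_{v,i})=\frac{(2p_{u,i}-1)(2p_{v,i}-1)}{2}+\frac{1}{2}$, and its averaged inner-product form can be checked numerically. The sketch below uses arbitrary, made-up preference vectors (nothing here is from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
M = 500  # number of items (arbitrary for this check)
p_u = rng.uniform(0.1, 0.9, size=M)  # made-up preference vectors
p_v = rng.uniform(0.1, 0.9, size=M)

# Agreement probability for a uniformly random item:
# (1/M) * sum_i [ p_{u,i} p_{v,i} + (1 - p_{u,i})(1 - p_{v,i}) ]
direct = np.mean(p_u * p_v + (1.0 - p_u) * (1.0 - p_v))

# Averaging the per-item identity term by term gives
# <2 p_u - 1, 2 p_v - 1> / (2M) + 1/2.
closed = np.dot(2.0 * p_u - 1.0, 2.0 * p_v - 1.0) / (2.0 * M) + 0.5

assert abs(direct - closed) < 1e-12
```

Bounding the inner product via Assumption A3 then yields the two bounds of the lemma.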
For any pair of distinct users $u,v\in[{\mathsf{N}}]$, let $\mathsf{X}_{u,v}$ be the random variable corresponding to the number of items for which $u$ and $v$ had the same responses. Finally, we let $\displaystyle\hat{\mathsf{Y}}_{u,v}=\begin{cases}1,\ &\mathrm{if}\;\mathsf{X}_{u,v}\geq\lambda\cdot\mathsf{L}\\\ 0,\ &\mathrm{otherwise},\end{cases}$ (8) for some $\lambda\geq 0$. We have the following result. ###### Lemma 2. Consider the latent source model and the incoherence Assumption A3. Let $\delta\in(0,1)$. For any $\mathsf{L}\geq\frac{2\log(3{\mathsf{N}}^{2}/\delta)}{\Delta^{4}(\gamma_{2}-\gamma_{1})^{2}}\triangleq\mathsf{T_{static}}$, and any $\lambda\in[\lambda_{-},\lambda_{+}]$ with $\lambda_{-}=2\gamma_{1}\Delta^{2}+\frac{1}{2}+\sqrt{\frac{2}{\mathsf{L}}\log(3{\mathsf{N}}^{2}/\delta)}$ and $\lambda_{+}=2\gamma_{2}\Delta^{2}+\frac{1}{2}-\sqrt{\frac{2}{\mathsf{L}}\log(3{\mathsf{N}}^{2}/\delta)}$, the test in (8) discriminates between $\mathsf{Y}_{u,v}=0$ and $\mathsf{Y}_{u,v}=1$, for any pair of users $u,v\in[{\mathsf{N}}]$, with probability at least $1-\delta/3$. ###### Proof. First, it is clear that Lemma 1 implies that $\displaystyle\mathbb{E}\left[\mathsf{X}_{u,v}|\mathsf{Y}_{u,v}=0\right]\leq\mathsf{L}\Big{(}2\gamma_{1}\Delta^{2}+\frac{1}{2}\Big{)},$ $\displaystyle\mathbb{E}\left[\mathsf{X}_{u,v}|\mathsf{Y}_{u,v}=1\right]\geq\mathsf{L}\Big{(}2\gamma_{2}\Delta^{2}+\frac{1}{2}\Big{)}.$ Then, we note that $\mathsf{X}_{u,v}$ is a sum of $\mathsf{L}$ random variables in $[-1,1]$, drawn without replacement from $[{\mathsf{M}}]$. 
Accordingly, Hoeffding’s inequality gives $\displaystyle\mathbb{P}\left[\left.\mathsf{X}_{u,v}\geq\lambda_{-}\cdot\mathsf{L}\right|\mathsf{Y}_{u,v}=0\right]$ $\displaystyle\leq\exp\left[-\frac{\left(\lambda_{-}\cdot\mathsf{L}-\mathbb{E}\left[\mathsf{X}_{u,v}|\mathsf{Y}_{u,v}=0\right]\right)^{2}}{2\mathsf{L}}\right]$ (9) $\displaystyle\leq\exp\left[-\frac{\left(\lambda_{-}-2\gamma_{1}\Delta^{2}-\frac{1}{2}\right)^{2}}{2}\mathsf{L}\right],$ (10) and $\displaystyle\mathbb{P}\left[\left.\mathsf{X}_{u,v}\leq\lambda_{+}\cdot\mathsf{L}\right|\mathsf{Y}_{u,v}=1\right]$ $\displaystyle\leq\exp\left[-\frac{\left(2\gamma_{2}\Delta^{2}+\frac{1}{2}-\lambda_{+}\right)^{2}}{2}\mathsf{L}\right].$ (11) Therefore, taking $\lambda_{-}=2\gamma_{1}\Delta^{2}+\frac{1}{2}+\sqrt{\frac{2}{\mathsf{L}}\log(3{\mathsf{N}}^{2}/\delta)}$ and $\lambda_{+}=2\gamma_{2}\Delta^{2}+\frac{1}{2}-\sqrt{\frac{2}{\mathsf{L}}\log(3{\mathsf{N}}^{2}/\delta)}$, we obtain that $\displaystyle\mathbb{P}\left[\left.\mathsf{X}_{u,v}\geq\lambda_{-}\cdot\mathsf{L}\right|\mathsf{Y}_{u,v}=0\right]$ $\displaystyle\leq\frac{\delta}{3{\mathsf{N}}^{2}},$ (12) and $\displaystyle\mathbb{P}\left[\left.\mathsf{X}_{u,v}\leq\lambda_{+}\cdot\mathsf{L}\right|\mathsf{Y}_{u,v}=1\right]$ $\displaystyle\leq\frac{\delta}{3{\mathsf{N}}^{2}}.$ (13) Picking any $\lambda\in[\lambda_{-},\lambda_{+}]$, we see that the bounds in (12)–(13) continue to hold with $\lambda_{-}$ and $\lambda_{+}$ replaced by $\lambda$. This is equivalent to $\mathbb{P}\left[\left.\hat{\mathsf{Y}}_{u,v}\neq\mathsf{Y}_{u,v}\right|\mathsf{Y}_{u,v}=\ell\right]\leq\delta/(3{\mathsf{N}}^{2})$, for $\ell=0,1$.
Such $\lambda$ exists if $\lambda_{+}\geq\lambda_{-}$, which holds whenever $\displaystyle\mathsf{L}\geq\frac{2\log(3{\mathsf{N}}^{2}/\delta)}{\Delta^{4}(\gamma_{2}-\gamma_{1})^{2}}=\mathsf{T_{static}}.$ (14) Finally, taking a union bound over all pairs of users (we trivially have at most $\mathsf{N}^{2}$ such pairs), we conclude that we can correctly infer the values of $\mathsf{Y}_{u,v}$ for all $u,v\in[{\mathsf{N}}]$ (and therefore cluster all such pairs of users correctly), with probability at least $1-\delta/3$, as claimed. ∎ We would like to mention here that the test described above can only distinguish between $\mathsf{Y}_{u,v}=1$ and $\mathsf{Y}_{u,v}=0$ under the _assumption_ that users did not change their type during the test. If, however, a test is conducted while there are switches, we can still correctly infer the clustering of those users who have not changed during the test. We are now in a position to prove Theorem 1. With some abuse of notation, let us denote by $\mathsf{reward}({\cal B}_{\ell})$ the expected reward accumulated in batch ${\cal B}_{\ell}$, i.e., $\displaystyle\mathsf{reward}({\cal B}_{\ell})$ $\displaystyle\triangleq\mathbb{E}\left[\sum_{t\in{\cal B}_{\ell}}\frac{1}{{\mathsf{N}}}\sum_{u=1}^{{\mathsf{N}}}\mathds{1}[\mathbf{R}_{u\pi_{u,t}}=1]\right]$ $\displaystyle=|{\cal B}_{\ell}|-\mathbb{E}\left[\sum_{t\in{\cal B}_{\ell}}\frac{1}{{\mathsf{N}}}\sum_{u=1}^{{\mathsf{N}}}\mathds{1}[\mathbf{R}_{u\pi_{u,t}}=0]\right]$ $\displaystyle\triangleq|{\cal B}_{\ell}|-\mathsf{regret}({\cal B}_{\ell}),$ (15) where $\mathsf{regret}({\cal B}_{\ell})$ is the regret accumulated during batch ${\cal B}_{\ell}$. As can be seen from Algorithm Collaborative, we decompose the recommendation horizon ${\mathsf{T}}$ into a sequence of batches of size $\Delta_{\mathsf{T}}$ each. To obtain Theorem 1, we will relate the total reward/regret to the local reward/regret of the static algorithm Recommend.
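To make Lemma 2 concrete, the following sketch computes $\mathsf{T_{static}}$ from (14) and the admissible threshold interval $[\lambda_{-},\lambda_{+}]$, for illustrative parameter values that are made up for this example (they are not taken from the paper). Once $\mathsf{L}\geq\mathsf{T_{static}}$, the interval is non-empty and a valid threshold $\lambda$ exists:

```python
import math

def t_static(N, delta, Delta, gamma1, gamma2):
    """Minimal test length from (14): 2 log(3 N^2 / delta) / (Delta^4 (g2 - g1)^2)."""
    return 2.0 * math.log(3.0 * N**2 / delta) / (Delta**4 * (gamma2 - gamma1) ** 2)

def threshold_interval(N, delta, Delta, gamma1, gamma2, L):
    """The interval [lambda_-, lambda_+] from Lemma 2, for a test of length L."""
    slack = math.sqrt(2.0 * math.log(3.0 * N**2 / delta) / L)
    lam_lo = 2.0 * gamma1 * Delta**2 + 0.5 + slack
    lam_hi = 2.0 * gamma2 * Delta**2 + 0.5 - slack
    return lam_lo, lam_hi

# Illustrative (made-up) parameters:
N, delta, Delta, g1, g2 = 1000, 0.05, 0.4, 0.1, 0.9
L = math.ceil(t_static(N, delta, Delta, g1, g2))
lam_lo, lam_hi = threshold_interval(N, delta, Delta, g1, g2, L)
assert lam_lo <= lam_hi  # a valid threshold exists once L >= T_static
```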
Specifically, let ${\cal B}_{\ell}$, for $\ell=1,2,\ldots,\left\lceil{\mathsf{T}}/\Delta_{\mathsf{T}}\right\rceil$, denote the $\ell$’th batch of size $\Delta_{\mathsf{T}}$, and let $t_{{\cal B}_{\ell}}$ be the ending time of batch ${\cal B}_{\ell}$. We will keep track of a set of users $\mathcal{V}_{t}\subseteq[{\mathsf{N}}]$, which will include all those users whom we have identified as having changed their user group at some point during the batch $\mathcal{B}_{\ell}$. We initialize $\mathcal{V}_{t_{\mathcal{B}_{\ell-1}}+1}=\phi$ (the empty set) at the beginning of the batch. We define $\displaystyle\mathsf{V}_{\mathcal{B}_{\ell},1}\triangleq\sum_{t\in{\cal B}_{\ell}\setminus t_{{\cal B}_{\ell}}}\mathds{1}\left[{\cal T}_{u}(t)\neq{\cal T}_{u}(t+1),\;\text{for some}\;u\in[{\mathsf{N}}]\right],$ (16) as the number of rounds of batch $\mathcal{B}_{\ell}$ in which at least one variation occurred. Furthermore, we let $\displaystyle\mathsf{V}_{\mathcal{B}_{\ell},2}\triangleq\frac{1}{{\mathsf{N}}}\sum_{u\in[{\mathsf{N}}]}\sum_{t\in{\cal B}_{\ell}\setminus t_{{\cal B}_{\ell}}}\mathds{1}\left[{\cal T}_{u}(t)\neq{\cal T}_{u}(t+1)\right],$ (17) denote the per-user average number of variations that have occurred during the batch $\mathcal{B}_{\ell}$. For $\tau\in\mathcal{B}_{\ell}$, we define $\mathsf{Z}_{\tau}$ to be an indicator random variable which is unity if some user switches his type in a window of $2\cdot\mathsf{T_{static}}$ rounds around round $\tau$ within the batch $\mathcal{B}_{\ell}$. For $\tau\in\mathcal{B}_{\ell}$, let us denote $\mathsf{W}_{\tau}:=\\{\max\\{\tau-\mathsf{T_{static}},t_{{\cal B}_{\ell-1}}+1\\},\dots,\min\\{\tau+\mathsf{T_{static}},t_{{\cal B}_{\ell}}\\}\\}$ as the window of size $2\cdot\mathsf{T_{static}}$ around $\tau$.
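The two variation counts in (16)–(17) can be illustrated on a toy type trajectory (made up for this example): $\mathsf{V}_{\mathcal{B}_{\ell},1}$ counts rounds in which at least one user switches, whereas $\mathsf{V}_{\mathcal{B}_{\ell},2}$ sums all switches and divides by $\mathsf{N}$:

```python
import numpy as np

# types[u, t] = user group of user u at round t (toy trajectory).
types = np.array([
    [0, 0, 0, 1, 1, 1],   # user 0 switches once (between t=2 and t=3)
    [1, 1, 1, 1, 1, 1],   # user 1 never switches
    [0, 1, 1, 1, 0, 0],   # user 2 switches twice
])
N = types.shape[0]

switches = types[:, :-1] != types[:, 1:]   # per-user, per-round switch indicators
V1 = int(np.any(switches, axis=0).sum())   # rounds with at least one switch, as in (16)
V2 = switches.sum() / N                    # per-user average number of switches, as in (17)

assert V1 == 3   # switches happen at three distinct rounds
assert V2 == 1.0  # three switches in total, averaged over N = 3 users
```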
Then, note that $\mathsf{Z}_{\tau}$ can be written as $\displaystyle\mathsf{Z}_{\tau}=\mathds{1}\left[\sum_{u\in[{\mathsf{N}}]}\sum_{t\in\mathsf{W}_{\tau}}\mathds{1}\left[{\cal T}_{u}(t)\neq{\cal T}_{u}(t+1)\right]>0\right].$ (18) As can be seen from Algorithm Recommend, at every round in each batch we start a test with probability $1/\sqrt{\Delta_{{\mathsf{T}}}}$, which involves recommending randomly sampled items to every user for $\mathsf{T_{static}}$ rounds. After each such test, we can use Lemma 2 to partition the set of users. In addition, in the fourth step of Algorithm Recommend we conduct a _reference test_ at the beginning of the batch. In the sequel, we denote this test by $\mathsf{Test}_{0}$, and further denote the $(j+1)^{\mathsf{th}}$ test by $\mathsf{Test}_{j}$. The partition induced by the $(j+1)^{\mathsf{th}}$ test is denoted by $\mathcal{P}_{\mathsf{Test}_{j}}$. By comparing the partitions $\mathcal{P}_{\mathsf{Test}_{j}}$ and $\mathcal{P}_{\mathsf{Test}_{0}}$, we will be able to partially identify users who have changed their user group in the batch. This is done in Algorithm Test. We will call those users who have changed their user group in a particular batch bad users, and those users who have not changed their user group throughout the batch good users. Moreover, a user is considered good until the round at which he changes his user group, and bad from that round onward. In order to bound the regret over each batch, we will consider the following three cases: * • Case 1: Consider the situation where at least $2/3$ of the users of some user group have changed their user group. We denote this event by $\mathcal{E}_{1}$. In such a case, we will upper bound the regret in the batch $\mathcal{B}_{\ell}$ by $\Delta_{\mathsf{T}}$.
Notice that since $\mathsf{V}_{\mathcal{B}_{\ell},2}\geq\frac{2\nu}{3}$ in this case, conditioned on ${\cal E}_{1}$ we have $\mathsf{regret}(\mathcal{B}_{\ell})\leq\frac{3\Delta_{\mathsf{T}}\mathsf{V}_{\mathcal{B}_{\ell},2}}{2\nu}$. * • Case 2: In this case, we will assume that for every user group, at most $1/3$ of the users change their user group in the batch. For any test $\mathcal{P}_{\mathsf{Test}_{j}}$, notice that we can actually end up with more than $\mathsf{K}$ clusters (say, $\mathsf{K}^{\prime}$ clusters) because of variations. In that case, we will identify all the users in the smallest $\mathsf{K}^{\prime}-\mathsf{K}$ clusters as users who have changed their user group. Note that it is possible that we make a mistake in this process, because one of the smallest $\mathsf{K}^{\prime}-\mathsf{K}$ clusters might correspond to good users who have not changed their user group. This, however, would mean that a larger cluster, among the largest $\mathsf{K}$ clusters, corresponds to users who have changed. This in turn implies that at least $\frac{2\nu\mathsf{N}}{3}$ users have changed, since the size of the smallest cluster corresponding to users who do not change their group throughout the batch $\mathcal{B}_{\ell}$ is at least $\frac{2\nu{\mathsf{N}}}{3}$. We will denote this event by $\mathcal{E}_{2}$. As in the previous case, we trivially upper bound the regret in the batch $\mathcal{B}_{\ell}$ by $\Delta_{\mathsf{T}}$, and similarly to Case 1, we have $\mathsf{regret}(\mathcal{B}_{\ell})\leq\frac{3\Delta_{\mathsf{T}}\mathsf{V}_{\mathcal{B}_{\ell},2}}{2\nu}$, conditioned on ${\cal E}_{2}$. * • Case 3: In this case, as in the previous case, we assume that for every user group, at most $1/3$ of the users change their user group in the batch.
Contrary to the previous case, we will also assume that in every test with more than $\mathsf{K}$ clusters (say, $\mathsf{K}^{\prime}$ clusters), the users in the smallest $\mathsf{K}^{\prime}-\mathsf{K}$ clusters are indeed users who have changed their user group. For a future test $j$ started at round $\tau_{\mathsf{test},j}$ such that $\mathsf{Z}_{\tau_{\mathsf{test},j}}=0$, we will compare the partitions $\mathcal{P}_{\mathsf{Test}_{0}}$ and $\mathcal{P}_{\mathsf{Test}_{j}}$ by establishing a bijective mapping between the clusters of the two partitions. For every cluster $\mathcal{C}$ in $\mathcal{P}_{\mathsf{Test}_{0}}$, we can find a cluster $\mathcal{C}^{\prime}$ in $\mathcal{P}_{\mathsf{Test}_{j}}$ such that at least two-thirds of the elements of $\mathcal{C}$ and $\mathcal{C^{\prime}}$ are common. Subsequently, all those users in $\mathcal{C}$ which are not present in $\mathcal{C}^{\prime}$ are correctly identified as users who have changed their user group. For a pair of distinct users $(u,v)\in[{\mathsf{N}}]\times[{\mathsf{N}}]$ belonging to the same user group at the beginning of the batch $\mathcal{B}_{\ell}$, we call the pair interesting if one of them has changed their user group. Note that any pair of interesting users $(u,v)$, where one of them has changed his user group at some round after the reference test and remains in a different user group when the $(j+1)^{\mathsf{th}}$ test starts, will belong to different clusters in $\mathcal{P}_{\mathsf{Test}_{j}}$. Note also that some user $u$ might change his user group during the first $\mathsf{T_{static}}$ rounds, while the reference test is being conducted. Even in this case, we can label the top $\mathsf{K}$ clusters (by the corresponding user group) returned by the reference test, since we know that at least two-thirds of the users of every user group did not change.
We denote by $\mathcal{P}_{\mathsf{Test}_{0}}(u)$ the cluster (label) that $u$ belongs to in the partition returned by the reference test $\mathsf{Test}_{0}$. Let us define an indicator random variable $\mathsf{L}_{u}$ which is unity if user $u$ has changed his user group in the first $\mathsf{T_{static}}$ rounds. Consider such a user $u$ for which $\mathsf{L}_{u}=1$. In that case, three things are possible at the end of the reference test: 1. $u$ might belong to the smallest $\mathsf{K}^{\prime}-\mathsf{K}$ clusters in the reference test, in which case $u$ is identified as a user who has changed his user group and is not involved in the main algorithm started after the reference test, i.e., $u$ is added to the set $\mathcal{V}_{\mathsf{T_{static}}}$. We define an indicator random variable $\mathsf{X}_{u,1}$ which is unity if user $u$ has changed his user group during the first $\mathsf{T_{static}}$ rounds in the batch, and is returned in the smallest $\mathsf{K}^{\prime}-\mathsf{K}$ clusters at the end of the reference test. 2. $u$ belongs to the cluster corresponding to his new user group, in which case we will not be able to infer that $u$ has changed his user group. In this case, we will consider $u$ to be a good user unless he changes his user group later, and take his user group at the end of the reference test ($\mathcal{P}_{\mathsf{Test}_{0}}(u)$, which is the same as $\mathcal{T}_{u}(t_{\mathcal{B}_{\ell-1}}+1+\mathsf{T_{static}})$) to be his actual user group. We will call this a special case, and define an indicator random variable $\mathsf{X}_{u,2}\triangleq\mathds{1}[\mathcal{P}_{\mathsf{Test}_{0}}(u)=\mathcal{T}_{u}(t_{\mathcal{B}_{\ell-1}}+1+\mathsf{T_{static}})]$ which is unity if user $u$ changes his user group during the first $\mathsf{T_{static}}$ rounds in the batch, and belongs to his final user group (the user group he belongs to at the end of the reference test). 3.
$u$ remains in his original user group (or an intermediate user group, if he changes his user group multiple times during the reference test). We define an indicator random variable $\mathsf{X}_{u,3}$ which is unity if the user changes his user group in the first $\mathsf{T_{static}}$ rounds and the label returned for him by the reference test differs from his final user group (the user group he belongs to at the end of the reference test), i.e., $\mathsf{X}_{u,3}\triangleq\mathds{1}[\mathcal{P}_{\mathsf{Test}_{0}}(u)\neq\mathcal{T}_{u}(t_{\mathcal{B}_{\ell-1}}+1+\mathsf{T_{static}})]$. For a round $\tau>\mathsf{T_{static}}$ in ${\cal B}_{\ell}$ (i.e., after the reference test), we define an indicator random variable $\mathsf{J}_{u,\tau}=\mathds{1}[\mathcal{T}_{u}(\tau)\neq\mathcal{P}_{\mathsf{Test}_{0}}(u)]$, which is unity if user $u$ is in a different group at round $\tau$ than the user group of $u$ returned by the reference test. We are now in a position to bound the regret over each batch. To that end, we will decompose the regret into a few terms and analyze the contribution of each term separately.
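The partition-matching step of Case 3 above can be sketched as follows. The helper `flag_changed_users` is hypothetical (it does not appear in the paper): each reference cluster is matched to the new cluster sharing at least two-thirds of its members, and members that moved elsewhere are flagged as changed:

```python
def flag_changed_users(ref_partition, new_partition):
    """For each reference cluster, find the new cluster sharing at least
    two-thirds of its members, and flag members that moved elsewhere.
    A toy sketch of the cluster-matching idea in Case 3."""
    changed = set()
    for ref_cluster in ref_partition:
        ref = set(ref_cluster)
        # the matching cluster is the one with the largest overlap
        match = max((set(c) for c in new_partition),
                    key=lambda c: len(ref & c))
        if len(ref & match) >= (2.0 / 3.0) * len(ref):
            changed |= ref - match  # members of ref that moved elsewhere
    return changed

# Toy partitions: user 6 switched from the first group to the second.
ref = [[1, 2, 3, 4, 5, 6], [7, 8, 9]]
new = [[1, 2, 3, 4, 5], [6, 7, 8, 9]]
assert flag_changed_users(ref, new) == {6}
```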
First, as described above, conditioned on Cases 1 and 2, namely on ${\cal A}\triangleq{\cal E}_{1}\cup{\cal E}_{2}$, we have $\displaystyle\mathsf{regret}({\cal B}_{\ell}|{\cal A})$ $\displaystyle\triangleq\frac{1}{{\mathsf{N}}}\sum_{t\in\mathcal{B}_{\ell}\setminus{\cal T}_{\mathsf{test},\ell}}\sum_{u\in[{\mathsf{N}}]}\mathbb{E}\left[\mathds{1}\left[\mathbf{R}_{u,\pi_{u,t}}=0\mid\mathcal{A}\right]\right]$ (19) $\displaystyle\leq\frac{3\Delta_{{\mathsf{T}}}\mathsf{V}_{\mathcal{B}_{\ell},2}}{2\nu}.$ (20) Next, we analyze Case 3, where we condition on ${\cal A}^{c}$, namely, $\displaystyle\mathsf{regret}({\cal B}_{\ell}|{\cal A}^{c})\triangleq\frac{1}{{\mathsf{N}}}\sum_{t\in\mathcal{B}_{\ell}\setminus{\cal T}_{\mathsf{test},\ell}}\sum_{u\in[{\mathsf{N}}]}\mathbb{E}\left[\mathds{1}\left[\mathbf{R}_{u,\pi_{u,t}}=0\mid\mathcal{A}^{c}\right]\right].$ (21) We do so by considering each of the sub-cases listed above.

### A.1 Variations When Testing

We first bound the regret for those rounds in the batch for which $\mathsf{Z}_{\tau}=1$. Specifically, for a round $\tau\in\mathcal{B}_{\ell}$, we denote by $\mathcal{E}_{\tau,1}$ the event that $\mathsf{Z}_{\tau}=1$, which by definition implies that there is a variation for some user in a window of size $2\cdot\mathsf{T_{static}}$ around round $\tau$.
In particular, using the definitions in (16) and (18), we note that $\displaystyle\sum_{\tau\in\mathcal{B}_{\ell}\setminus{\cal T}_{\mathsf{test},\ell}}\mathsf{Z}_{\tau}$ $\displaystyle\leq\sum_{\tau\in\mathcal{B}_{\ell}}\sum_{t\in\mathsf{W}_{\tau}}\mathds{1}\left[{\cal T}_{u}(t)\neq{\cal T}_{u}(t+1),\;\text{for some }u\in[{\mathsf{N}}]\right]$ (22) $\displaystyle\leq 2\cdot\mathsf{V}_{\mathcal{B}_{\ell},1}\cdot\mathsf{T_{static}}.$ (23) Therefore, we can bound the regret over those rounds where $\mathsf{Z}_{t}=1$ by $\displaystyle\mathsf{A}_{2}$ $\displaystyle\triangleq\frac{1}{{\mathsf{N}}}\mathbb{E}\left[\sum_{t\in\mathcal{B}_{\ell}\setminus{\cal T}_{\mathsf{test},\ell}:\mathsf{Z}_{t}=1}\sum_{u\in[{\mathsf{N}}]}\mathds{1}\left[\mathbf{R}_{u,\pi_{u,t}}=0\mid\mathcal{A}^{c}\right]\right]$ (24) $\displaystyle\leq\mathbb{E}\left[\sum_{t\in\mathcal{B}_{\ell}}\mathds{1}\left[\mathsf{Z}_{t}=1\right]\right]$ (25) $\displaystyle\leq 2\cdot\mathsf{V}_{\mathcal{B}_{\ell},1}\cdot\mathsf{T_{static}}.$ (26)

### A.2 Regret Due To Testing

We next bound the regret for those rounds where we test in Algorithm Recommend. Specifically, for a round $t\in\mathcal{B}_{\ell}$, we define the indicator random variable $\mathsf{Y}_{t}$ which is unity when a test is being conducted at round $t$. We have $\displaystyle\mathsf{A}_{3}$ $\displaystyle\triangleq\frac{1}{{\mathsf{N}}}\mathbb{E}\left[\sum_{t\in\mathcal{B}_{\ell}\setminus{\cal T}_{\mathsf{test},\ell}:\mathsf{Y}_{t}=1}\sum_{u\in[{\mathsf{N}}]}\mathds{1}\left[\mathbf{R}_{u,\pi_{u,t}}=0\mid\mathcal{A}^{c}\right]\right]$ (27) $\displaystyle\leq\mathbb{E}\left[\sum_{t\in\mathcal{B}_{\ell}}\mathds{1}\left[\mathsf{Y}_{t}=1\right]\right]$ (28) $\displaystyle\leq\Delta_{\mathsf{T}}\cdot p\cdot\mathsf{T_{static}},$ (29) where we have used the fact that $\mathbb{P}[\mathsf{Y}_{t}=1]=\mathbb{P}[t\in\mathrm{Test}]=p$, $|{\cal B}_{\ell}|=\Delta_{{\mathsf{T}}}$, and each test takes $\mathsf{T_{static}}$ rounds.
### A.3 Undetected Bad Users

For a user $u\in[{\mathsf{N}}]$, we define an indicator random variable $\mathsf{B}_{u,t}$ which is unity if the user is not included in the set of bad users $\mathcal{V}_{t}$ at round $t\in\mathcal{B}_{\ell}$. Furthermore, for a round $t$ after the reference test, namely, $t\in{\cal B}_{\ell}\setminus{\cal T}_{\mathsf{test},\ell}$, where ${\cal T}_{\mathsf{test},\ell}\triangleq[t_{\mathcal{B}_{\ell-1}}+1,\ldots,t_{\mathcal{B}_{\ell-1}}+\mathsf{T_{static}}]$, we define an indicator random variable $\mathsf{H}_{t}$ which is unity if an undetected (i.e., not yet flagged) bad user is still involved in the algorithm. As we explain below, this random variable can be decomposed into the union of the three sub-cases discussed above. For any round $t\in{\cal B}_{\ell}\setminus{\cal T}_{\mathsf{test},\ell}$, we have: * • A user $u$ who satisfies $\mathsf{L}_{u}=1,\mathsf{B}_{u,t}=1,\mathsf{X}_{u,3}=1,\mathsf{J}_{u,t}=1$ and $\mathsf{Z}_{t}=0$ is one who has changed his user group in the first $\mathsf{T_{static}}$ rounds of the batch, was not in his final user group at the end of the reference test, and whose user group at round $t$ is different from the user group returned for him by the reference test, i.e., $\displaystyle\mathcal{T}_{u}(t_{\mathcal{B}_{\ell-1}}+1+\mathsf{T_{static}})\neq\mathcal{P}_{\mathsf{Test}_{0}}(u)\quad\text{and}\quad\mathcal{T}_{u}(t)\neq\mathcal{P}_{\mathsf{Test}_{0}}(u).$ * • A user $u$ who satisfies $\mathsf{L}_{u}=1,\mathsf{B}_{u,t}=1,\mathsf{X}_{u,2}=1,\mathsf{J}_{u,t}=1$ and $\mathsf{Z}_{t}=0$ is one who has changed his user group in the first $\mathsf{T_{static}}$ rounds of the batch, whose user group at the end of the reference test matches the estimate of the reference test, but whose user group at round $t$ is different from his user group at the end of the reference test, i.e.,
$\displaystyle\mathcal{T}_{u}(t_{\mathcal{B}_{\ell-1}}+1+\mathsf{T_{static}})=\mathcal{P}_{\mathsf{Test}_{0}}(u)\quad\text{and}\quad\mathcal{T}_{u}(t)\neq\mathcal{P}_{\mathsf{Test}_{0}}(u).$ * • A user $u$ who satisfies $\mathsf{L}_{u}=0,\mathsf{B}_{u,t}=1,\mathsf{J}_{u,t}=1$ and $\mathsf{Z}_{t}=0$ is one who has not changed his user group in the first $\mathsf{T_{static}}$ rounds in the batch, but his user group at round $t$ is different from his user group at the beginning of the batch, i.e., $\displaystyle\mathcal{T}_{u}(t_{\mathcal{B}_{\ell-1}}+1+\mathsf{T_{static}})=\mathcal{T}_{u}(t),\quad\text{for}\;t\in{\cal T}_{\mathsf{test},\ell},$ $\displaystyle\mathcal{T}_{u}(t)\neq\mathcal{P}_{\mathsf{Test}_{0}}(u).$ Given the above three sub-cases, it is clear that $\mathsf{H}_{t}$ for $t\in{\cal B}_{\ell}\setminus{\cal T}_{\mathsf{test},\ell}$, can be written as $\displaystyle\mathsf{H}_{t}$ $\displaystyle=\mathds{1}\left[\sum_{u\in[{\mathsf{N}}]}\mathds{1}\left[\mathsf{L}_{u}=1,\mathsf{B}_{u,t}=1,\mathsf{X}_{u,3}=1,\mathsf{J}_{u,t}=1,\mathsf{Z}_{t}=0\right]\right.$ $\displaystyle\quad\quad\quad+\sum_{u\in[{\mathsf{N}}]}\mathds{1}\left[\mathsf{L}_{u}=0,\mathsf{B}_{u,t}=1,\mathsf{J}_{u,t}=1,\mathsf{Z}_{t}=0\right]$ $\displaystyle\left.\quad\quad\quad+\sum_{u\in[{\mathsf{N}}]}\mathds{1}\left[\mathsf{L}_{u}=1,\mathsf{B}_{u,t}=1,\mathsf{X}_{u,2}=1,\mathsf{J}_{u,t}=1,\mathsf{Z}_{t}=0\right]>0\right].$ (30) Basically, $\mathsf{H}_{t}$ indicates whether at time $t\in{\cal B}_{\ell}\setminus{\cal T}_{\mathsf{test},\ell}$ a bad user is present or not. 
Accordingly, we bound the regret in this case as follows $\displaystyle\mathsf{A}_{4}$ $\displaystyle\triangleq\frac{1}{{\mathsf{N}}}\mathbb{E}\left[\sum_{t\in{\cal B}_{\ell}\setminus{\cal T}_{\mathsf{test},\ell}:\mathsf{H}_{t}=1}\sum_{u\in[{\mathsf{N}}]}\mathds{1}\left[\mathbf{R}_{u,\pi_{u,t}}=0\mid\mathcal{A}^{c}\right]\right]$ (31) $\displaystyle\leq\mathbb{E}\sum_{t\in{\cal B}_{\ell}\setminus{\cal T}_{\mathsf{test},\ell}}\mathds{1}\left[\mathsf{H}_{t}=1\right]$ (32) $\displaystyle=\mathbb{E}\sum_{t\in{\cal B}_{\ell}:\exists u\in[{\mathsf{N}}],\mathsf{J}_{u,t}=1}\mathsf{G}_{t},$ (33) where in (33) we sum over all those rounds where some user changed its type, and $\mathsf{G}_{t}$ counts the number of rounds it takes to detect the bad users. This random variable is clearly stochastically dominated by a geometric random variable with mean $1/p$. Indeed, a test can start at every round with probability $p$, and a test that starts at a round where $\mathsf{Z}_{t}=0$ will certainly reveal that the user is in a different user group than the one returned in the reference test $\mathcal{P}_{\mathsf{test}_{0}}$. Accordingly, we will add that user to the set $\mathcal{V}_{t_{{\cal B}_{\ell-1}}+1+\mathsf{T_{static}}+t}$. 
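The geometric-dominance step is easy to sanity-check by simulation: if a test starts independently with probability $p$ in each round, the number of rounds until the first test is geometric with mean $1/p$, which is the term appearing in (34). A sketch (the seed and tolerance are arbitrary choices for reproducibility):

```python
import random

def detection_delay(p, rng):
    """Rounds until a test first starts, when each round one starts independently w.p. p."""
    rounds = 1
    while rng.random() >= p:
        rounds += 1
    return rounds

p = 0.2
rng = random.Random(0)
delays = [detection_delay(p, rng) for _ in range(200_000)]
mean_delay = sum(delays) / len(delays)
# A Geometric(p) variable has mean 1/p = 5, matching the 1/p factor in (34).
```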
Therefore, we obtain that $\displaystyle\mathsf{A}_{4}$ $\displaystyle\leq\mathbb{E}\sum_{t\in{\cal B}_{\ell}:\exists u\in[{\mathsf{N}}],\mathsf{J}_{u,t}=1}\mathsf{G}_{t}\leq\mathbb{E}\sum_{t\in{\cal B}_{\ell}:\exists u\in[{\mathsf{N}}],\mathsf{J}_{u,t}=1}\frac{1}{p}\leq\frac{\mathsf{V}_{{\cal B}_{\ell},1}}{p}.$ (34) ### A.4 The “Static” Regret It remains to bound the regret for those rounds where we do not test and all bad users are detected, i.e., $\displaystyle\mathsf{A}_{5}\triangleq\frac{1}{{\mathsf{N}}}\mathbb{E}\left[\sum_{t\in{\cal B}_{\ell}\setminus{\cal T}_{\mathsf{test},\ell}:\mathsf{H}_{t}=0,\mathsf{Y}_{t}=0}\sum_{u\in[{\mathsf{N}}]\setminus\mathcal{V}_{t}}\mathds{1}\left[\mathbf{R}_{u,\pi_{u,t}}=0\mid\mathcal{A}^{c}\right]\right].$ (35) We shall refer to this regret as the static regret. This static case was studied in [12], where algorithm Recommend was analyzed thoroughly. As discussed before, in [12] it was assumed that users of the same user-type have exactly the same preference vectors, while in this paper we assume the weaker coherence Assumption A3. Nonetheless, except for a few technical differences (which we highlight in the proof of the following result), our analysis relies on the proof of Theorem 1 in [12]. ###### Lemma 3 (No Variations). Let $\delta\in(0,1)$, and consider the latent source model and assumptions A1–A3. Also, assume that ${\mathsf{N}}=\Omega\left(\frac{{\mathsf{M}}}{\nu}\log\frac{1}{\delta}+\left(\frac{3}{\delta}\right)^{1/\alpha}\right)$. Then, for any $\mathsf{T_{static}}\leq\Delta_{{\mathsf{T}}}\leq\mu\cdot{\mathsf{M}}$, we have $\displaystyle\mathsf{A}_{5}$ $\displaystyle\leq(\Delta_{{\mathsf{T}}}-\mathsf{T_{static}})\cdot\delta.$ (36) ### A.5 Collecting Terms We finally collect all the above bounds to obtain the result stated in Theorem 1. 
Specifically, using (20), (26), (29), (34), and (36), we obtain $\displaystyle\mathsf{regret}({\cal B}_{\ell})$ $\displaystyle\leq\mathsf{T_{static}}\cdot(1-\mu)+(\Delta_{{\mathsf{T}}}-\mathsf{T_{static}})\cdot\delta+2\cdot\mathsf{V}_{\mathcal{B}_{\ell},1}\cdot\mathsf{T_{static}}+p\cdot\Delta_{{\mathsf{T}}}\cdot\mathsf{T_{static}}$ $\displaystyle\quad+\frac{\mathsf{V}_{\mathcal{B}_{\ell},1}}{p}+\frac{3\Delta_{{\mathsf{T}}}\mathsf{V}_{\mathcal{B}_{\ell},2}}{2\nu}$ (37) $\displaystyle\leq\delta\cdot\Delta_{{\mathsf{T}}}+\mathsf{T_{static}}\cdot(1-\delta-\mu)+2\cdot\mathsf{V}_{\mathcal{B}_{\ell},1}\cdot\mathsf{T_{static}}+p\cdot\Delta_{{\mathsf{T}}}\cdot\mathsf{T_{static}}$ $\displaystyle\quad+\frac{\mathsf{V}_{\mathcal{B}_{\ell},1}}{p}+\frac{3\Delta_{{\mathsf{T}}}\mathsf{V}_{\mathcal{B}_{\ell},2}}{2\nu},$ (38) where the first term at the r.h.s. of (37) is the regret due to the first $\mathsf{T_{static}}$ rounds where we recommend random items. Since (38) is true for every batch ${\cal B}_{\ell}$, we can sum up over $\ell$, and obtain that $\displaystyle\mathsf{regret}({\mathsf{T}})$ $\displaystyle\leq\sum_{\ell=1}^{\left\lceil{\mathsf{T}}/\Delta_{\mathsf{T}}\right\rceil}\mathsf{regret}({\cal B}_{\ell})$ (39) $\displaystyle\leq\delta\cdot{\mathsf{T}}+\frac{{\mathsf{T}}}{\Delta_{{\mathsf{T}}}}\mathsf{T_{static}}\cdot(1-\delta-\mu)+2\cdot\mathsf{V_{1}}\cdot\mathsf{T_{static}}+p\cdot{\mathsf{T}}\cdot\mathsf{T_{static}}+\frac{\mathsf{V}_{1}}{p}+\frac{3\Delta_{{\mathsf{T}}}\mathsf{V_{2}}}{2\nu}.$ (40) Minimizing the r.h.s. of the above inequality w.r.t. $p$, we obtain that its optimal value is $p^{\star}=\sqrt{\mathsf{V}_{1}/(\mathsf{T}\cdot\mathsf{T_{static}})}$. 
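This minimization is elementary: only the terms $p\cdot{\mathsf{T}}\cdot\mathsf{T_{static}}+\mathsf{V}_{1}/p$ depend on $p$, and by the AM-GM inequality their sum is minimized at $p^{\star}$, where both terms are equal and the sum is $2\sqrt{\mathsf{V}_{1}\cdot{\mathsf{T}}\cdot\mathsf{T_{static}}}$. A quick numeric check (all parameter values are arbitrary):

```python
import math

def p_terms(p, V1, T, T_static):
    """The two p-dependent terms on the r.h.s. of (40)."""
    return p * T * T_static + V1 / p

V1, T, T_static = 50.0, 10_000.0, 20.0
p_star = math.sqrt(V1 / (T * T_static))

# At p_star the sum equals 2*sqrt(V1*T*T_static) and beats perturbed choices.
min_value = p_terms(p_star, V1, T, T_static)
perturbed = min(p_terms(p_star * s, V1, T, T_static) for s in (0.5, 0.9, 1.1, 2.0))
```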
Therefore, $\displaystyle\mathsf{regret}({\mathsf{T}})$ $\displaystyle\leq\delta\cdot{\mathsf{T}}+\frac{{\mathsf{T}}}{\Delta_{{\mathsf{T}}}}\mathsf{T_{static}}\cdot(1-\delta-\mu)+2\cdot\mathsf{V_{1}}\cdot\mathsf{T_{static}}+2\sqrt{\mathsf{V}_{1}\cdot{\mathsf{T}}\cdot\mathsf{T_{static}}}+\frac{3\Delta_{{\mathsf{T}}}\mathsf{V}_{2}}{2\nu}.$ (41) It remains to minimize the r.h.s. of the above inequality over $\Delta_{\mathsf{T}}$. The optimal value is given in the form of a solution to a cubic equation. Alternatively, the following choice, which minimizes the two $\Delta_{\mathsf{T}}$-dependent terms at the r.h.s. of (41), turns out to suffice: $\displaystyle\Delta_{\mathsf{T}}^{*}=\min\left({\mathsf{T}},\sqrt{\frac{2\nu{\mathsf{T}}}{3\mathsf{V_{2}}}\kappa}\right),$ (42) where $\kappa\triangleq\mathsf{T_{static}}(1-\delta-\mu)$. Substituting this value back in (41) gives $\displaystyle\mathsf{regret}({\mathsf{T}})$ $\displaystyle\leq\delta\cdot{\mathsf{T}}+\max\left(\kappa,\sqrt{\frac{3\mathsf{V_{2}{\mathsf{T}}}\kappa}{2\nu}}\right)+2\cdot\mathsf{V_{1}}\cdot\mathsf{T_{static}}+2\sqrt{\mathsf{V}_{1}\cdot{\mathsf{T}}\cdot\mathsf{T_{static}}}$ $\displaystyle\quad\quad+\min\left(\frac{3\mathsf{V_{2}}{\mathsf{T}}}{2\nu},\sqrt{\frac{3\mathsf{V}_{2}{\mathsf{T}}\kappa}{2\nu}}\right),$ (43) and so $\mathsf{reward}({\mathsf{T}})={\mathsf{T}}-\mathsf{regret}({\mathsf{T}})$ is lower bounded by the same expression as in Theorem 1. Note that the condition $\Delta_{\mathsf{T}}>\mathsf{T_{static}}$ in Lemma 3 boils down to ${\mathsf{T}}>\mathsf{T}_{\mathsf{static}}\cdot\max\left\\{1,\frac{\mathsf{3V_{2}}}{2\nu(1-\delta-\mu)}\right\\}=\mathsf{T}_{\mathsf{learn}}$. Finally, for ${\mathsf{T}}\leq\mathsf{T}_{\mathsf{learn}}$, we get that $\mathsf{reward}({\mathsf{T}})\geq\mu\cdot{\mathsf{T}}$, as claimed. ### A.6 Proof of Lemma 3 To prove the result in Lemma 3, it suffices to lower bound the probability $\mathbb{P}\left[\mathbf{R}_{u,\pi_{u,t}}=1,\mathsf{Y}_{t}=0,\mathsf{H}_{t}=0\right]$. 
To that end, for any $u\in[{\mathsf{N}}]$ and $t\in[{\mathsf{T}}]$, define $\displaystyle{\cal G}_{u,t}\triangleq\left\\{|\partial_{t}(u)|\geq\frac{2\nu{\mathsf{N}}}{3}\right\\},$ (44) where $\partial_{t}(u)$ is the set of neighbors of user $u$ at time $t$ that belong to the same user type. For $t$ large enough, the probability of ${\cal G}_{u,t}$ is bounded strictly away from zero. To show this, recall that $|{\cal T}_{u}(t)|$ is the number of users in user $u$’s type at round $t$. As we argued above, at each round we know that $|{\cal T}_{u}(t)|>\frac{2\nu{\mathsf{N}}}{3}$. Also, recall that in the beginning of the batch we devote the first $\mathsf{T_{static}}$ recommendations for creating an initial partition ${\cal P}_{0}$ of the users into types (see the fourth step in Algorithm 2). We showed in Lemma 2 that the resulting partition is correct with probability at least $1-\delta/3$, and therefore, $|\partial_{t}(u)|\geq\frac{2\nu{\mathsf{N}}}{3}$ with the same probability, i.e., $\mathbb{P}[{\cal G}_{u,t}]\geq 1-\delta/3$, for $t\in{\cal B}_{\ell}\setminus{\cal T}_{\mathsf{test},\ell}$. Next, using the same steps as in the proof of Lemma 2 in [12], we show that the good neighborhoods have, through random exploration, accurately estimated the probability of liking each item. Thus, we correctly classify each item as likable or not with high probability. In particular, we show below that $\displaystyle\mathbb{P}\left[\left.\mathbf{R}_{u,\pi_{u,t}}=1,\mathsf{Y}_{t}=0,\mathsf{H}_{t}=0\right|{\cal G}_{u,t}\right]$ $\displaystyle\geq 1-2{\mathsf{M}}\exp\left(-2\frac{\Delta^{2}\nu t{\mathsf{N}}^{1-\alpha}}{3{\mathsf{M}}}\right)-\frac{1}{{\mathsf{N}}^{\alpha}}.$ (45) Before proving the above inequality, let us first show how we can use it to bound the regret. 
Indeed, combining the above inequality with the fact that $\mathbb{P}[{\cal G}_{u,t}]\geq 1-\delta/3$, we get $\displaystyle\mathbb{P}\left[\mathbf{R}_{u,\pi_{u,t}}=1,\mathsf{Y}_{t}=0,\mathsf{H}_{t}=0\right]$ $\displaystyle\geq 1-2{\mathsf{M}}\exp\left(-2\frac{\Delta^{2}\nu t{\mathsf{N}}^{1-\alpha}}{3{\mathsf{M}}}\right)-\frac{1}{{\mathsf{N}}^{\alpha}}-\frac{\delta}{3}.$ (46) It can be seen that if the number of users ${\mathsf{N}}$ satisfies ${\mathsf{N}}=\Omega\left(\frac{{\mathsf{M}}}{\nu}\log\frac{1}{\delta}+\left(\frac{3}{\delta}\right)^{1/\alpha}\right)$, and of course $t\geq\mathsf{T_{static}}$, then the r.h.s. of (46) is at least $1-\delta$, namely, $\mathbb{P}\left[\mathbf{R}_{u,\pi_{u,t}}=1,\mathsf{Y}_{t}=0,\mathsf{H}_{t}=0\right]\geq 1-\delta$. Therefore, we obtain $\displaystyle\mathsf{A}_{5}$ $\displaystyle\leq\sum_{t\in{\cal B}_{\ell}\setminus{\cal T}_{\mathsf{test},\ell}}\frac{1}{{\mathsf{N}}}\sum_{u=1}^{{\mathsf{N}}}\mathbb{P}\left[\mathbf{R}_{u,\pi_{u,t}}=0,\mathsf{Y}_{t}=0,\mathsf{H}_{t}=0|{\cal A}^{c}\right]$ $\displaystyle\leq(\Delta_{{\mathsf{T}}}-\mathsf{T_{static}})\cdot\delta,$ (47) where in the second inequality we have used Assumption A2. Next, we prove (45). First, we lower bound the number of times an arbitrary item has been rated by the good neighbors of some user $u$, conditioned on the event ${\cal G}_{u,t}$. To that end, note that the number of good neighbors of user $u$ who have rated item $i$ stochastically dominates $\mathsf{Binomial}\left(\frac{2\nu{\mathsf{N}}}{3},\frac{t}{{\mathsf{M}}{\mathsf{N}}^{\alpha}}\right)$. Let ${\cal D}$ be the event “item $i$ has less than $\frac{\nu t{\mathsf{N}}^{1-\alpha}}{3{\mathsf{M}}}$ ratings from good neighbors of $u$”. 
Chernoff’s bound then gives $\displaystyle\mathbb{P}\left({\cal D}\right)$ $\displaystyle\leq\mathbb{P}\left(\mathsf{Binomial}\left(\frac{2\nu{\mathsf{N}}}{3},\frac{t}{{\mathsf{M}}{\mathsf{N}}^{\alpha}}\right)\leq\frac{\nu t{\mathsf{N}}^{1-\alpha}}{3{\mathsf{M}}}\right)$ (48) $\displaystyle\leq\exp\left(-\frac{\nu t{\mathsf{N}}^{1-\alpha}}{3{\mathsf{M}}}\right).$ (49) Next, conditioned on ${\cal G}_{u,t}$ and ${\cal D}^{c}$, we prove that, with high probability, when exploiting, the algorithm correctly classifies every item as likable or unlikable for user $u$. Recall our definition of the posterior $\hat{p}_{u\ell}$ in (4). Suppose item $i$ is likable by user $u$, and let $\mathsf{G}\triangleq\frac{\nu t{\mathsf{N}}^{1-\alpha}}{3{\mathsf{M}}}$. Then, conditioned on $\mathsf{G}$, $\hat{p}_{ui}$ stochastically dominates $\tilde{p}_{ui}\triangleq\mathsf{Binomial}(\mathsf{G},p_{ui})/\mathsf{G}$. Then, $\displaystyle\mathbb{P}\left(\left.\tilde{p}_{ui}\leq\frac{1}{2}\right|\mathsf{G}\right)$ $\displaystyle=\mathbb{P}\left(\left.\mathsf{Binomial}(\mathsf{G},p_{ui})\leq\frac{\mathsf{G}}{2}\right|\mathsf{G}\right)$ (50) $\displaystyle\leq\exp\left(-2\mathsf{G}\Delta^{2}\right)$ (51) $\displaystyle\leq\exp\left(-2\frac{\Delta^{2}\nu t{\mathsf{N}}^{1-\alpha}}{3{\mathsf{M}}}\right),$ (52) where the first inequality follows from Hoeffding’s inequality, and the second inequality is because $p_{ui}\geq 1/2+\Delta$. Using monotonicity, we also have $\displaystyle\mathbb{P}\left(\left.\tilde{p}_{ui}\leq\frac{1}{2}\right|\mathsf{G}\geq\frac{\nu t{\mathsf{N}}^{1-\alpha}}{3{\mathsf{M}}}\right)$ $\displaystyle\leq\exp\left(-2\frac{\Delta^{2}\nu t{\mathsf{N}}^{1-\alpha}}{3{\mathsf{M}}}\right).$ (53) Using the same steps, we can show that if item $i$ is unlikable by user $u$, then the probability that $\tilde{p}_{ui}\geq\frac{1}{2}$ obeys the same bound. 
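The Hoeffding bound in (51) can be checked against the exact binomial lower tail for small parameters. A sketch; the values of $\mathsf{G}$ and $\Delta$ below are arbitrary, with $p_{ui}=1/2+\Delta$ at its smallest admissible value:

```python
import math

def binom_lower_tail(n, p, k):
    """Exact P(Binomial(n, p) <= k)."""
    return sum(math.comb(n, j) * p**j * (1 - p)**(n - j) for j in range(k + 1))

G, Delta = 60, 0.15        # arbitrary small parameters
p_ui = 0.5 + Delta         # a likable item: p_ui >= 1/2 + Delta

exact = binom_lower_tail(G, p_ui, G // 2)   # P(Binomial(G, p_ui) <= G/2)
hoeffding = math.exp(-2 * G * Delta**2)     # the bound in (51)
```

For these numbers the exact tail is roughly an order of magnitude below the Hoeffding bound, as expected for a tail bound of this type.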
Taking a union bound over all items we get that with probability at least $1-{\mathsf{M}}\exp\left(-2\frac{\Delta^{2}\nu t{\mathsf{N}}^{1-\alpha}}{3{\mathsf{M}}}\right)$ our algorithm correctly classifies every item as likable or unlikable for user $u$. We are now in a position to prove (45). Specifically, for user $u$ at time $t$, conditioned on ${\cal G}_{u,t}$ we have shown in (49) that with probability at least $1-{\mathsf{M}}\exp\left(-\frac{\nu t{\mathsf{N}}^{1-\alpha}}{3{\mathsf{M}}}\right)$ _every_ item has more than $\frac{\nu t{\mathsf{N}}^{1-\alpha}}{3{\mathsf{M}}}$ ratings from good neighbors of $u$. Now, using the fact that with probability at least $1-{\mathsf{M}}\exp\left(-2\frac{\Delta^{2}\nu t{\mathsf{N}}^{1-\alpha}}{3{\mathsf{M}}}\right)$ we classify correctly all items, coupled with the fact that we exploit with probability $1-{\mathsf{N}}^{-\alpha}$, we get $\displaystyle\mathbb{P}\left[\left.\mathbf{R}_{u,\pi_{u,t}}=1,\mathsf{Y}_{t}=0,\mathsf{H}_{t}=0\right|{\cal G}_{u,t}\right]$ $\displaystyle\geq 1-{\mathsf{M}}\exp\left(-\frac{\nu t{\mathsf{N}}^{1-\alpha}}{3{\mathsf{M}}}\right)-{\mathsf{M}}\exp\left(-2\frac{\Delta^{2}\nu t{\mathsf{N}}^{1-\alpha}}{3{\mathsf{M}}}\right)$ $\displaystyle\quad\quad-\frac{1}{{\mathsf{N}}^{\alpha}}$ (54) $\displaystyle\geq 1-2{\mathsf{M}}\exp\left(-2\frac{\Delta^{2}\nu t{\mathsf{N}}^{1-\alpha}}{3{\mathsf{M}}}\right)-\frac{1}{{\mathsf{N}}^{\alpha}},$ (55) as claimed.
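As a numerical illustration of the condition on ${\mathsf{N}}$ in (46): for one (arbitrarily chosen) parameter set satisfying it, the r.h.s. of (46) already exceeds $1-\delta$ at $t=\mathsf{T_{static}}$, and it only grows with $t$. This is a sketch, not a proof of the asymptotic claim:

```python
import math

def rhs_46(t, N, M, nu, alpha, Delta, delta):
    """The right-hand side of (46)."""
    return (1.0
            - 2 * M * math.exp(-2 * Delta**2 * nu * t * N**(1 - alpha) / (3 * M))
            - N**(-alpha)
            - delta / 3)

# Arbitrary parameters, with N chosen large enough for the Omega(.) condition.
N, M, nu, alpha, Delta, delta = 100_000, 20, 0.3, 0.5, 0.2, 0.05
T_static = 500

value = rhs_46(T_static, N, M, nu, alpha, Delta, delta)
# value exceeds 1 - delta = 0.95 here, and rhs_46 is increasing in t.
```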
# Challenges for Using Impact Regularizers to Avoid Negative Side Effects David Lindner, Kyle Matoba, Alexander Meulemans (the authors contributed equally) ###### Abstract Designing reward functions for reinforcement learning is difficult: besides specifying which behavior is rewarded for a task, the reward also has to discourage undesired outcomes. Misspecified reward functions can lead to unintended negative side effects, and overall unsafe behavior. To overcome this problem, recent work proposed to augment the specified reward function with an impact regularizer that discourages behavior that has a big impact on the environment. Although initial results with impact regularizers seem promising in mitigating some types of side effects, important challenges remain. In this paper, we examine the main current challenges of impact regularizers and relate them to fundamental design decisions. We discuss in detail which challenges recent approaches address and which remain unsolved. Finally, we explore promising directions to overcome the unsolved challenges in preventing negative side effects with impact regularizers. ## 1 Introduction Specifying a reward function in reinforcement learning (RL) that completely aligns with the designer’s intent is a difficult task. Besides specifying what is important to solve the task at hand, the designer also needs to specify how the AI system should behave in the environment in general, which is hard to fully cover. For example, RL agents playing video games often learn to achieve a high score without solving the desired task by exploiting the game (e.g. Saunders et al. 2018). Side effects occur when the behavior of the AI system diverges from the designer’s intent because of some considerations that were not anticipated beforehand, such as the possibility of exploiting a game. 
In this work, we focus on side effects that are tied to the reward function, which we define as side effects that would still occur if we had access to an oracle that finds an optimal policy for a given reward function. We explicitly do not consider side effects resulting from the used RL algorithm, which are often discussed using the term _safe exploration_ (García and Fernández 2015). In practice, the designer typically goes through several iterations of reward specification to optimize the agent’s performance and minimize side effects. This is often a tedious process and there is no guarantee that the agent will not exhibit side effects when it encounters new situations. In fact, such problems with misspecified reward functions have been observed in various practical applications of RL (Krakovna et al. 2020b). In most situations, it is useful to decompose the reward $R(s)$ into a task-related component $R^{\textnormal{task}}(s)$ and an environment-related component $R^{\textnormal{env}}(s)$, where the latter specifies how the agent should behave in the environment, regardless of the task. (We write the reward function only as a function of states for simplicity, as the state space can be formally extended to include the last action.) As Shah et al. (2019) observe, $R^{\textnormal{env}}$ is related to the frame problem in classical AI (McCarthy and Hayes 1969): we not only have to make a prediction about what is supposed to change, but also what is supposed to remain unchanged. $R^{\textnormal{env}}$ is more prone to misspecification, because it needs to specify everything beyond the task that can result in undesired outcomes. Because the designer builds an RL agent to solve a specific problem, it is relatively easy to anticipate considerations directly related to solving the task in $R^{\textnormal{task}}$. Shah et al. 
(2019) point out that environments are generally already optimized for humans, hence defining $R^{\textnormal{env}}$ primarily requires specifying which features of the environment the AI systems should not disturb. Therefore, penalizing large changes in the current state of the world can be thought of as a coarse approximation for $R^{\textnormal{env}}$. Impact regularization (IR) has emerged as a tractable and effective way to approximate $R^{\textnormal{env}}$ (Armstrong and Levinstein 2017; Krakovna et al. 2019; Turner, Hadfield-Menell, and Tadepalli 2020). The main idea behind IR is to approximate $R^{\textnormal{env}}$ through a measure of “impact on the environment”, which avoids negative side effects and reduces the burden on the reward designer. In this paper, we discuss IR of the form $\displaystyle R(s_{t})=R_{\textnormal{spec}}(s_{t})-\lambda\cdot d(s_{t},b(s_{0},s_{t-1},t))$ (1) where $s_{t}$ denotes the state at time step $t$, $R_{\textnormal{spec}}$ denotes the reward function specified by the designer ($R_{\textnormal{spec}}$ contains the specified parts of both $R^{\textnormal{task}}$ and $R^{\textnormal{env}}$), and: * • the _baseline_ $b(s_{0},s_{t-1},t)$ provides a state obtained by following a “default” or “safe” policy at timestep $t$ and uses either the initial state and the current time $(s_{0},t)$ to compute it, or else the current state $s_{t-1}$, * • $d$ measures the _deviation_ of the realized state from the baseline state, and * • $\lambda\geq 0$ gives a global _scale_ at which to trade off the specified reward and the regularization. Composing these three terms gives a general formulation of regularization that encompasses most proposals found in the literature, but permits separate analysis (Krakovna et al. 2019). We start by giving an overview of the related work on IR (Section 2), before we discuss the three main design decisions for IR. 
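The structure of eq. (1) maps directly onto code. A minimal sketch in Python; the toy dynamics, task reward, deviation measure, and the two no-op baselines below are illustrative assumptions, not part of any cited proposal:

```python
def regularized_reward(r_spec, d, baseline, lam, s_t, s0, s_prev, t):
    """Impact-regularized reward, following the structure of eq. (1)."""
    return r_spec(s_t) - lam * d(s_t, baseline(s0, s_prev, t))

# Toy 1-D world: the state is a position, a no-op leaves it unchanged,
# and the deviation measure is the absolute distance to the baseline state.
r_spec = lambda s: 1.0 if s == 3 else 0.0  # hypothetical task: reach position 3
d = lambda s, b: abs(s - b)

inaction = lambda s0, s_prev, t: s0               # no-op rollout from the initial state
stepwise_inaction = lambda s0, s_prev, t: s_prev  # one no-op step from the previous state

r_inaction = regularized_reward(r_spec, d, inaction, 0.1,
                                s_t=3, s0=0, s_prev=2, t=3)
r_stepwise = regularized_reward(r_spec, d, stepwise_inaction, 0.1,
                                s_t=3, s0=0, s_prev=2, t=3)
# r_inaction = 1.0 - 0.1*|3-0| = 0.7;  r_stepwise = 1.0 - 0.1*|3-2| = 0.9
```

Even in this toy setting, swapping the baseline changes the penalty, which previews the design decisions discussed next.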
First, we discuss how to choose a _baseline_ (Section 3), emphasizing considerations of environment dynamics and a tendency for agents to offset their actions. Second, we discuss how to quantify _deviations_ from the baseline (Section 4), especially the distinction between negative, neutral, and positive side effects. Third, we discuss how to choose the scale $\lambda$ (Section 5). Finally, we propose some directions to improve the effectiveness of IR (Section 6). The main contribution of this work is to discuss in detail the current main challenges of IR, building upon previous work, and to suggest possible ways forward to overcome these challenges. ## 2 Related Work Amodei et al. (2016) reviewed negative side effects as one of several problems in AI safety, and discussed using impact regularization (IR) to avoid negative side effects. Since then, several concrete approaches to IR have been proposed, of which eq. (1) gives the underlying structure. Armstrong and Levinstein (2017) proposed to measure the impact of the agent compared to the inaction baseline, starting from the initial state $s_{0}$. The inaction baseline assumes the agent does nothing, which can be formalized by assuming a non-action exists. (Armstrong and Levinstein (2017) define this baseline as the state the environment would be in had the agent never been deployed; this is slightly different from the definition of the inaction baseline we give here and that later work used, as the mere presence of the agent can influence the environment.) Armstrong and Levinstein (2017) emphasized the importance of a semantically meaningful state representation for the environment when measuring distances from the inaction baseline. While Armstrong and Levinstein (2017) discussed the problem of measuring the impact of an agent abstractly, Krakovna et al. (2019) proposed a concrete deviation measure called Relative Reachability (RR). 
RR measures the average reduction in the number of states reachable from the current state, compared to a baseline state. This captures the intuition that irreversible changes to the environment should be penalized more, but has advantages over directly using irreversibility as a measure of impact (as e.g. in Eysenbach et al. (2018)), such as allowing one to quantify the magnitude of different irreversible changes. Turner, Hadfield-Menell, and Tadepalli (2020) and Krakovna et al. (2019) generalized the concept of RR towards Attainable Utility Preservation (AUP) and Value Difference (VD) measures respectively, which both share the same structural form for the deviation measure: $\displaystyle d_{\textnormal{VD}}(s_{t},s^{\prime}_{t})=\sum_{x=1}^{X}w_{x}f\big{(}V_{x}(s^{\prime}_{t})-V_{x}(s_{t})\big{)},$ (2) where $x$ ranges over some sources of value, $V_{x}(s_{t})$ is the value of state $s_{t}$ according to $x$, $w_{x}$ is its weight in the sum and $f$ is a function characterizing the deviation between the values. AUP is a special case of this with $w_{x}=1/X$ for all $x$ and the absolute value operator as $f$. This formulation captures the same intuition as RR, but allows one to measure the impact of the agent in terms of different value functions, instead of just counting states. Concretely, AUP aims to measure the agent’s ability to achieve high utility on a range of different goals in the environment, and penalizes any change that reduces this ability. Turner, Hadfield-Menell, and Tadepalli (2020) also introduced the stepwise inaction baseline to mitigate offsetting behavior (cf. Section 3.2). This baseline follows an inaction policy starting from the previous state $s_{t-1}$ rather than the starting state $s_{0}$. Follow-up work scaled AUP towards more complex environments (Turner, Ratzlaff, and Tadepalli 2020). Krakovna et al. 
(2020a) built upon the VD measure and introduced an auxiliary loss representing how well the agent could solve future tasks in the same environment, given its current state. This can be seen as a deviation measure in eq. (1) that rewards similarity with a baseline instead of penalizing deviation from it. Eysenbach et al. (2018)’s approach to penalize irreversibility can be seen as a special case of Krakovna et al. (2020a). Aside from IR, Rahaman et al. (2019) proposed to learn an arrow of time, representing a directed measure of reachability, using the intuition that irreversible actions tend to leave the environment in a more disorderly state, making it possible to define an arrow of time with methods inspired by thermodynamics. As another alternative to IR, Zhang, Durfee, and Singh (2018, 2020) proposed to learn which environmental features an AI system is allowed to change by querying a human overseer. They provided an active querying approach that makes maximally informative queries. Shah et al. (2019) developed a method for learning which parts of the environment a human cares about by assuming that the world is optimized to suit humans. Saisubramanian, Kamar, and Zilberstein (2020) formulated the side effects problem as a multi-objective Markov Decision Process, where they learn a separate reward function penalizing negative side effects and optimize this secondary objective while staying close to the optimal policy of the task objective. Saisubramanian, Zilberstein, and Kamar (2020) provide a broad overview of the various existing approaches for mitigating negative side effects, while we zoom in on one class of approaches, IR, and discuss the corresponding challenges in detail. 
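The Value-Difference form in eq. (2) above is straightforward to instantiate, and the AUP special case corresponds to uniform weights $w_{x}=1/X$ with $f=|\cdot|$. A sketch with made-up auxiliary value functions over a scalar state:

```python
def value_difference(s, s_baseline, value_fns, weights, f):
    """Deviation measure d_VD of eq. (2): sum_x w_x * f(V_x(s') - V_x(s))."""
    return sum(w * f(V(s_baseline) - V(s))
               for V, w in zip(value_fns, weights))

# Two hypothetical auxiliary value functions (stand-ins, not from any paper).
value_fns = [lambda s: 2.0 * s, lambda s: s ** 2]
X = len(value_fns)

# AUP special case: uniform weights w_x = 1/X and f = absolute value.
aup = value_difference(s=1.0, s_baseline=2.0, value_fns=value_fns,
                       weights=[1.0 / X] * X, f=abs)
# (|4 - 2| + |4 - 1|) / 2 = 2.5
```

Swapping `f` (e.g. penalizing only decreases in value) yields other members of the VD family without changing the surrounding code.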
## 3 Choosing a Baseline Recent work mainly uses two types of baselines in impact regularization (IR): (i) the inaction baseline $b(s_{0},s_{t},t)=T(s_{t}|s_{0},\pi_{\mathrm{inaction}})$ and (ii) the stepwise inaction baseline $b(s_{0},s_{t},t)=T(s_{t}|s_{t-1},\pi_{\mathrm{inaction}})$, where $T$ is the distribution over states $s_{t}$ when starting at state $s_{0}$ or $s_{t-1}$ respectively and following the inaction policy $\pi_{\mathrm{inaction}}$ that always takes an action $a_{\mathrm{nop}}$ that does nothing. Unfortunately, the inaction baseline can lead to undesirable offsetting behavior, where the agent tries to undo the outcomes of its task after collecting the reward, moving back closer to the initial baseline (Turner, Hadfield-Menell, and Tadepalli 2020). The stepwise inaction baseline removes the offsetting incentive of the agent by branching off from the previous state instead of the starting state (Turner, Hadfield-Menell, and Tadepalli 2020). However, Krakovna et al. (2020a) argued that offsetting behavior is desirable in many cases. In Section 3.2 we contribute to this discussion by breaking down in detail when offsetting behavior is desirable or undesirable, whereas in Section 3.3, we argue that the inaction baseline and stepwise inaction baseline can lead to inaction incentives in nonlinear dynamical environments. We start, however, with the fundamental observation that the inaction baseline and stepwise inaction baseline do not always represent safe policies in Section 3.1. ### 3.1 Inaction Baselines are not Always Safe The baseline used in IR should represent a safe policy where the AI system does not harm its environment or itself. In many cases, taking no actions would be a safe policy for the agent, e.g. for a cleaning robot. However, if the AI system is responsible for a task requiring continuous control, inaction of the AI system can be disastrous. 
For example, if the agent is responsible for driving a car on a highway, doing nothing likely results in a crash. This is particularly problematic for the stepwise inaction baseline, which follows an inaction policy starting from the previous state. The inaction policy starting from the initial state can also be unsafe, for example, if an agent takes over the control of the car from a human, and therefore the initial state $s_{0}$ already has the car driving. For this reason, designing a safe baseline for a task or environment that requires continuous control is a hard problem. One possible approach is to design a policy that is known to be safe based on expert knowledge. However, this can be a time-consuming process, and is not always feasible. Designing safe baselines for tasks and environments that require continuous control is an open problem that has to be solved before IR can be used in these applications. ### 3.2 Offsetting An agent engages in offsetting behavior when it tries to undo the outcomes of previous actions, i.e. when it “covers up its tracks”. Offsetting behavior can be desirable or undesirable, depending on which outcomes the agent counteracts. Undesirable offsetting. Using IRs with an inaction baseline starting from the initial state can lead to undesirable offsetting behavior where the agent counteracts the outcomes of its task (Krakovna et al. 2019; Turner, Hadfield-Menell, and Tadepalli 2020). For example, Krakovna et al. (2019) consider a vase on a conveyor belt. The agent is rewarded for taking the vase off the belt, hence preventing it from falling off the belt. The desired behavior is to take the vase and stay put. The offsetting behavior is to take the vase off the belt, collect the reward, and afterwards put the vase back on the conveyor belt to reduce deviation from the baseline. To understand this offsetting behavior, recall the decomposition of the true reward into a task-related and an environment-related component from Section 1. 
A designer usually specifies a task reward $R^{\textnormal{task}}_{\textnormal{spec}}$ that rewards states signaling task completion (e.g. taking the vase off the belt). However, each task has consequences for the environment, which often are the reason why the task should be completed in the first place (e.g. the vase not being broken). In all but the simplest tasks, assigning a reward to every task consequence is impossible, and so by omission, they have a zero reward. When IR penalizes consequences of completing the task, because they differ from the baseline, this results in undesirable offsetting behavior. The stepwise inaction baseline (Turner, Ratzlaff, and Tadepalli 2020) successfully removes all offsetting incentives. However, in other situations offsetting might be desired. Desirable Offsetting. In many cases, offsetting behavior is desired, because it can prevent unnecessary side effects. Krakovna et al. (2020a) provide an example of an agent which is asked to go shopping, and needs to open the front door of the house to go to the shop. If the agent leaves the door open, wind from outside can knock over a vase inside, which the agent can prevent by closing the door after leaving the house. When using the stepwise inaction baseline (with rollouts, cf. Section 4.2), the agent gets penalized once when opening the door for knocking over the vase in the future, independent of whether it closes the door afterwards (and thus prevents the vase from breaking) or not. Hence, for this example, the offsetting behavior (closing the door) is desirable. The reasoning behind this example can be generalized to all cases where the offsetting behavior concerns states that are instrumental towards achieving the task (e.g. opening the door) and not a consequence of completing the task (e.g. the vase not being broken). A Crucial Need for a New Baseline. 
The recently proposed baselines either remove offsetting incentives altogether or allow for both undesirable and desirable offsetting to occur, which are both unsatisfactory solutions. Krakovna et al. (2020a) proposed resolving this issue by allowing all offsetting (e.g. by using the inaction baseline) and rewarding all states where the task is completed in the specified reward function. However, we see three important downsides to this approach. First, states that occur after task completion can still have negative side effects. If the reward associated with these states is high enough to prevent offsetting, it might also be high enough to encourage the agent to pursue these states and ignore their negative side effects. Second, not all tasks have a distinct goal state that indicates the completion of a task, but rather accumulate task-related rewards at various time steps during an episode. Third, this approach creates a new incentive for the agent to prevent shut-down, as it continues to get rewards after the task is completed (Hadfield-Menell et al. 2017). We conclude that offsetting is still an unsolved problem, highlighting the need for a new baseline that prevents undesirable offsetting behavior but allows for desirable offsetting. ### 3.3 Environment Dynamics and Inaction Incentives In dynamic environments that are highly sensitive to the agent’s actions, the agent will be susceptible to inaction incentives. Either the agent does not act at all (for all but small magnitudes of $\lambda$) or it will be insufficiently regularized, possibly resulting in undesired side effects (for small $\lambda$). Sensitivity to Typical Actions. Many real-world environments exhibit chaotic behavior, in which the state of the environment is highly sensitive to small perturbations. 
In such environments, the environment state where the agent has performed an action will be fundamentally different from the environment state for the inaction baseline (Armstrong and Levinstein 2017). Furthermore, for the stepwise inaction baseline, the same argument holds when comparing inaction to the planned action of the agent. Hence, when using these baselines for IR, all actions of the agent will be strongly regularized, creating the inaction incentive. When $\lambda$ is lowered to allow the agent to take actions, the agent can cause negative side effects when the IR cannot differentiate between negative side effects and chaotic changes in the environment. Here, it is useful to distinguish between _typical_ and _atypical_ actions. We say (informally) that an action is _typical_ if it is commonly used for solving a wide variety of tasks (e.g. moving). When the environment is highly sensitive to typical actions, IRs with the current baselines will prevent the agent from engaging in normal operations. However, it is not always a problem if the environment is highly sensitive to atypical actions of the agent (e.g. discharging onboard weaponry), as preventing atypical actions interferes less with the normal operation of the agent. Capability of the Agent. The inaction incentive will become more apparent for agents that are highly capable of predicting the detailed consequences of their actions, for example by using a powerful physics engine. As the ability to predict the consequences of an action is fundamental to minimizing side effects, limiting the prediction capabilities of an agent to prevent the inaction incentive is not desirable. Rather, for agents that can very accurately predict the implications of their actions, it is necessary to have an accompanying intelligent impact regularizer. State Features. 
Armstrong and Levinstein (2017) point out that for IR one should not represent states with overly fine-grained features, as presenting an agent with too much information exposes it to basing decisions on irrelevancies. For example, it would be counterproductive for an agent attempting to forecast demand in an online sales situation to model each potential customer separately, when broader aggregates would suffice. However, there remain two issues with this approach to mitigating the inaction incentive. First, the intrinsic dynamics of the environment remain unchanged, so it is still highly sensitive to small perturbations, whose results can be visible in the coarser features (e.g. the specific weather conditions). Second, for advanced AI systems, it might be beneficial to change their feature representation to become more capable of predicting the consequences of their actions. In this case, one would have no control over the granularity of the features. Deviation Measures. At the core of the inaction problem is that some negative side effects are worse than others. Usually, it does not matter if the agent changes the weather conditions by moving around; however, it would matter if the agent causes a serious negative side effect, for example a hurricane. While both outcomes can be a result of the complex and chaotic dynamics of the environment, we care less about the former and more about the latter. Differentiating between negative, neutral and positive side effects is the task of the deviation measure used in the IR, which is discussed in the next section. ## 4 Choosing a Deviation Measure A baseline defines a “safe” counterfactual to the agent’s actions. The deviation measure determines how much a deviation from this baseline by the agent should be penalized or rewarded. Currently, the main approaches to a deviation measure are the relative reachability (RR) measure (Krakovna et al. 
2019), the attainable utility preservation (AUP) measure (Turner, Hadfield-Menell, and Tadepalli 2020) and the future task (FT) reward (Krakovna et al. 2020a). AUP and FT still require a specification of which tasks the agent might want to achieve in the future. In this section, we argue that the current deviation measures still require the designer to specify a notion of _value_ of the impact to avoid unsatisfactory performance of the agent, and that new rollout policies should be designed that allow for a proper incorporation of delayed effects into the deviation measure. ### 4.1 Which Side Effects are Negative? The goal of IRs is to approximate $R^{\textnormal{env}}$ for all states in a tractable manner. They do this by penalizing impact on the environment, building on the assumption that the environment is already optimized for human preferences (Shah et al. 2019). The IR aims to penalize impact proportionally to the magnitude of this impact, which corresponds to the magnitude of the side effect (Krakovna et al. 2019; Turner, Hadfield-Menell, and Tadepalli 2020). However, not all impact is negative; it can also be neutral or even positive. $R^{\textnormal{env}}$ considers not only the magnitude of the impact on the environment, but also the degree to which this impact is negative, neutral or positive. Neglecting the associated value of impact can lead to suboptimal agent behavior, as highlighted in the example below. Example: The Chemical Production Plant. Consider an AI system controlling a plant producing a chemical product for which various unknown reactions exist, each producing a different combination of waste products. The task of the AI system is to optimize the production rate of the plant, i.e. it gets a reward proportional to the production rate. 
To minimize the impact of the plant on the environment, the reward function of the agent is augmented with an impact regularizer, which penalizes the mass of waste products released into the environment, compared to an inaction baseline (where the plant is not operational). Some waste products are harmless (e.g. $O_{2}$), whereas others can be toxic. When the deviation measure of the impact regularizer does not differentiate between negative, neutral or positive impact, the AI system is incentivized to use a reaction mechanism that maximizes production while minimizing waste. However, this reaction might output mostly toxic waste products, whereas another reaction outputs only harmless waste products and hence has no negative side effects. Tuning the regularizer magnitude $\lambda$ does not provide a satisfactory solution in this case, as either the plant is not operational (for high $\lambda$), or the plant is at risk of releasing toxic waste products into the environment. Positive Side Effects. The distinction between positive, neutral and negative impact is not only needed to allow for a satisfactory performance of the agent in many environments; it is also desirable for encouraging unanticipated positive side effects. Expanding upon the example in Section 4.1: if the agent discovered a way to costlessly sequester carbon dioxide alongside its other tasks, it should do so, whereas an IR would encourage the agent not to interfere. While very positive unexpected outcomes might be unlikely, this possibility should not be neglected in the analysis of impact regularizers. Value Differences. To distinguish between positive, neutral and negative side effects, we need an approximation of $R^{\textnormal{env}}$ that goes beyond measuring impact as a sole source of information. 
Attainable utility preservation (Turner, Hadfield-Menell, and Tadepalli 2020) allows for differentiating between positive and negative impact by defining the deviation measure as a sum of differences in value between a baseline and the agent’s state-action pair for various value functions. Hence, it is possible to reflect how much the designer values different kinds of side effects in these value functions. However, the challenge remains to design value functions that approximate $R^{\textnormal{env}}$ to a sufficient degree on the complete state space, which is again prone to reward misspecification. So although the value difference framework allows for specifying values for side effects, _how_ to specify this notion of value is still an open problem. ### 4.2 Rollout Policies Often, the actions of an agent cause delayed effects, i.e. effects that are not visible immediately after taking the action. The stepwise inaction baseline (Turner, Hadfield-Menell, and Tadepalli 2020) ignores all actions that took place before $t-1$; hence, to correctly penalize delayed effects, the deviation measure needs to incorporate future effects. This can be done by collecting rollouts of future trajectories using a simulator or model of the environment. These rollouts depend on which _rollout policy_ is followed by the agent in the simulation. For the baseline states, the inaction policy is the logical choice. For the rollout of the future effects of the agent’s action, it is less clear which rollout policy should be used. Turner, Hadfield-Menell, and Tadepalli (2020) use the inaction policy in this case. Hence, this IR considers a rollout where the agent takes its current action, after which it cannot take any further actions. This approach has significant downsides, because the IR does not allow the agent to plan a series of actions when determining the impact penalty (e.g. the agent can take an action to jump, but cannot plan for its landing accordingly in the rollout). 
Therefore, we argue that future work should develop rollout policies different from the inaction policy, such as the current policy of the agent. ## 5 Choosing the Magnitude of the Regularizer To combine the IR with a specified reward function, the designer has to choose the magnitude of the regularizer $\lambda$. Turner, Hadfield-Menell, and Tadepalli (2020) say that “loosely speaking, $\lambda$ can be interpreted as expressing the designer’s beliefs about the extent to which $R$ [the specified reward] might be misspecified”. It is crucial to choose the correct $\lambda$. If $\lambda$ is too small, the regularizer may not reduce the risk of undesirable side effects effectively. If $\lambda$ is too large, the regularizer will overly restrict necessary effects of the agent on the environment, and the agent will be less effective at achieving its goal. Note that while the regularizers proposed by Krakovna et al. (2019) and Turner, Hadfield-Menell, and Tadepalli (2020) already measure utility, in general $\lambda$ must also handle a unit conversion of the regularizer to make it comparable with the reward function. Some intuition for choosing $\lambda$ comes from a Bayesian perspective, where the regularizer encodes prior knowledge and $\lambda$ controls how far from the prior the posterior should have moved. Another distinct view on setting $\lambda$ comes from the dual optimization problem, where it represents the Lagrange multiplier on an implied set of constraints: $\lambda$ is the magnitude of the regularizer for which the solution to the penalized optimization problem coincides with that of a constrained optimization problem. Hence, the designer can use $\lambda$ to communicate constraints to the AI system, which is a natural way to phrase some common safety problems (Ray, Achiam, and Amodei 2019). 
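The trade-off that $\lambda$ controls can be made concrete with a toy sketch (our illustration only; the reward and deviation values are invented, and `regularized_reward` is a hypothetical helper, not an API from any of the cited works):

```python
# Toy sketch of an impact-regularized objective (invented numbers):
# R(s) = R_spec(s) - lambda * d(s, b), where d(s, b) is the deviation
# of the agent's state s from the baseline state b.

def regularized_reward(task_reward, deviation, lam):
    return task_reward - lam * deviation

# Acting completes the task (reward 1.0) but deviates from the baseline.
acting_low_lam = regularized_reward(1.0, 0.8, lam=0.5)
acting_high_lam = regularized_reward(1.0, 0.8, lam=2.0)
inaction = regularized_reward(0.0, 0.0, lam=2.0)

assert acting_low_lam > 0            # small lambda: the agent still acts
assert acting_high_lam < inaction    # large lambda: inaction is preferred
```

Between these two regimes lies the band of useful $\lambda$ values; how narrow and hard to find that band can be is discussed next.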
Armstrong and Levinstein (2017) discuss the problem of tuning $\lambda$ and note that, contrary to intuition, the region of useful $\lambda$’s can be very small and hard to find safely. In practice, $\lambda$ is often tuned until the desired behavior is achieved, e.g., by starting with a high $\lambda$ and reducing it until the agent achieves the desired behavior. This approach is in general insufficient to find the correct trade-off. For a fixed step size in decreasing $\lambda$, the tuning might always jump from a $\lambda$ that leads to inaction to a $\lambda$ that yields unsafe behavior. The same holds for other common procedures to tune hyperparameters. ## 6 Ways Forward In this section, we put forward promising future research directions to overcome the challenges discussed in the previous sections. ### 6.1 A Causal Framing of Offsetting In Section 3.2, we highlighted that some offsetting behavior is desired and some undesired. To design an IR that allows for desired offsetting but prevents undesired offsetting, one first needs a mechanism that can predict and differentiate between these two types of offsetting. Undesired offsetting concerns the environment states that are a consequence of the task. The difficulty lies in determining which states are a causal consequence of the task being completed and in differentiating them from states that could have occurred regardless of the task. Goal-based Tasks. When the task consists of reaching a certain goal state, the consequences of performing a task can be formalized in a causal framework (Pearl 2009). When a causal graph of the environment-agent interaction is available, the states that are a consequence of the task can be obtained from the graph as the causal children nodes of the goal state. 
Hence, a baseline that allows for desired offsetting behavior but prevents undesired offsetting behavior prevents the agent from interfering with the children nodes of the goal states, while allowing for offsetting on other states. General Tasks. Not all tasks have a distinct goal state that indicates the completion of a task; some instead accumulate task-related rewards at various time steps during an episode. Extending this argument to general tasks remains an open issue, for which causal influence diagrams (Everitt et al. 2019) can provide a mathematical framework. ### 6.2 Probabilities Instead of Counterfactuals as Baseline Armstrong and Levinstein (2017) made the interesting argument that probabilities are better suited than counterfactuals for measuring the impact of actions. Current implementations of IRs use a counterfactual as baseline (e.g. the inaction baseline or stepwise inaction baseline). Because this baseline is one specific trajectory, it will differ considerably from the actual trajectory of the agent in environments that exhibit chaotic dynamics. However, chaotic environments will also be highly sensitive to perturbations that do not originate from the agent’s actions. One possible way forward towards a more robust measure of the agent’s impact on the environment is hence to compare probabilities that marginalize over all external perturbations instead of comparing specific trajectories. Define $p(s_{t}|A)$ as the probability of reaching state $s_{t}$ given the trajectory of actions $A$ the agent took, and $p(s_{t}|B)$ as the probability of $s_{t}$ given the actions prescribed by the baseline. All influences of perturbations that did not arise from the agent are marginalized out in these probabilities. Hence, a divergence measure between these two probabilities can give a more robust measure of the potential impact of the agent, without being susceptible to unnecessary inaction incentives. 
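As a minimal numerical sketch of this idea (our illustration only: the three-state distributions are invented, and no published IR implements this exact penalty), the impact measure could be a divergence such as KL between the two marginal distributions:

```python
import math

# Sketch: compare p(s_t | A) (agent's actions) with p(s_t | B) (baseline
# actions), with external perturbations marginalized out. The distributions
# below are invented for illustration.

def kl_divergence(p, q):
    """KL(p || q) over a shared discrete support (assumes q > 0 where p > 0)."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# States: (vase intact, vase broken, door open), with probabilities
# marginalized over wind and other perturbations the agent did not cause.
p_agent = [0.90, 0.05, 0.05]     # p(s_t | A)
p_baseline = [0.92, 0.04, 0.04]  # p(s_t | B)

impact = kl_divergence(p_agent, p_baseline)
assert 0.0 < impact < 0.05  # small measured impact despite chaotic dynamics
```

Because chaotic-but-agent-independent fluctuations cancel in both marginals, a typical action yields only a small divergence, while an action that systematically changes outcomes (e.g. breaking the vase) would not.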
To the best of our knowledge, this idea has not yet been implemented as a concrete IR method and would hence be a promising direction for future research. ### 6.3 Improved Human-Computer Interaction Side effects occur if there is a difference between the outcome an AI system achieves and the intent of its (human) designer. Thus, improving how well the designer can communicate their intent to the AI system is an important aspect of eliminating side effects (Leike et al. 2018). This emphasis on the human component of learning to avoid negative side effects connects it closely to the problem of _scalable oversight_ proposed by Amodei et al. (2016). Improved Tools for Reward Designers. Commonly, a designer will aim to iteratively improve the AI system and its reward function. Similarly, when choosing an impact regularizer, a designer will iterate on the choice of baseline, deviation measure, and regularization strength and test them in a sequence of environments that increasingly resemble the production environment. At each iteration, the designer identifies weaknesses and corrects them, such that the criterion being optimized becomes increasingly true to the designer’s intent. For example, an AI with the goal to trade financial assets may be run against historical data (“backtested”) in order to understand how it might have reacted in the past, and presented with deliberately extreme inputs (“stress-tested”) in order to understand likely behavior in “out of sample” situations. To design a reward function and a regularizer, it is crucial for the designer to be able to understand how the system would react in novel situations and how to fix it in case it exhibits undesired behavior. Further research aiming to increase the designer’s ability to understand how a system will react will substantially help the designer to communicate their intent more effectively. Recent work in this direction concerning _interpretability_ (Gilpin et al. 2018) and _verification_ (e.g. Huang et al. 
2017) of machine learning models is particularly promising. Actively Learning from Humans. Considering the problem from the perspective of the AI system, the goal is to improve its ability to understand the designer’s intent, especially in novel, unanticipated scenarios. Instead of the designer _telling_ the system their intent, this problem can be addressed by the system _asking_ the designer about their intent. To decide what to ask the designer, the system may be able to determine which states it is highly uncertain about, even if it is not able to accurately ascribe values to some of them. Recent work shows that such an approach can be effectively used to learn from humans about the task at hand (Christiano et al. 2017), but it may also be used to learn something about the constraints of the environment and which side effects are desired or undesired (Zhang, Durfee, and Singh 2018). Active learning could also provide a different perspective on impact regularizers: instead of directly penalizing impact on the environment, a high value of the regularization term could be understood as indicating that the designer should give feedback. In particular, this approach could help to resolve situations in which a positive task reward conflicts with the regularization term. ## 7 Conclusion Avoiding negative side effects in systems that have the capacity to cause harm is necessary to fully realize the promise of artificial intelligence. In this paper, we discussed a popular approach to reduce negative side effects in RL: impact regularization (IR). We discussed the practical difficulty of choosing each of the three components: a baseline, a deviation measure and a regularization strength. Furthermore, we pointed to fundamental problems that are currently not addressed by state-of-the-art methods, and presented several new future research directions to address them. 
While our discussion showed that current approaches still leave significant opportunities for future work, IRs are a promising idea for building the next generation of safe AI systems, and we hope that our discussion is valuable for researchers trying to build new IRs. ## Acknowledgments We thank Andreas Krause, François Fleuret and Benjamin Grewe for their valuable comments and suggestions. Kyle Matoba was supported by the Swiss National Science Foundation under grant number FNS-188758 “CORTI”. ## References * Amodei et al. (2016) Amodei, D.; Olah, C.; Steinhardt, J.; Christiano, P.; Schulman, J.; and Mané, D. 2016. Concrete problems in AI safety. _arXiv:1606.06565_ . * Armstrong and Levinstein (2017) Armstrong, S.; and Levinstein, B. 2017. Low impact artificial intelligences. _arXiv:1705.10720_ . * Christiano et al. (2017) Christiano, P. F.; Leike, J.; Brown, T.; Martic, M.; Legg, S.; and Amodei, D. 2017. Deep reinforcement learning from human preferences. In _Advances in Neural Information Processing Systems_. * Everitt et al. (2019) Everitt, T.; Ortega, P. A.; Barnes, E.; and Legg, S. 2019. Understanding Agent Incentives using Causal Influence Diagrams. Part I: Single Action Settings. _arXiv:1902.09980_ . * Eysenbach et al. (2018) Eysenbach, B.; Gu, S.; Ibarz, J.; and Levine, S. 2018. Leave no trace: Learning to reset for safe and autonomous reinforcement learning. In _International Conference on Learning Representations (ICLR)_. * García and Fernández (2015) García, J.; and Fernández, F. 2015. A comprehensive survey on safe reinforcement learning. _Journal of Machine Learning Research_ 16(1): 1437–1480. * Gilpin et al. (2018) Gilpin, L. H.; Bau, D.; Yuan, B. Z.; Bajwa, A.; Specter, M.; and Kagal, L. 2018. Explaining explanations: An overview of interpretability of machine learning. In _IEEE 5th International Conference on Data Science and Advanced Analytics (DSAA)_ , 80–89. * Hadfield-Menell et al. 
(2017) Hadfield-Menell, D.; Dragan, A.; Abbeel, P.; and Russell, S. 2017. The off-switch game. In _Proceedings of International Joint Conferences on Artificial Intelligence (IJCAI)_. * Huang et al. (2017) Huang, X.; Kwiatkowska, M.; Wang, S.; and Wu, M. 2017. Safety verification of deep neural networks. In _International Conference on Computer Aided Verification_ , 3–29. Springer. * Krakovna et al. (2019) Krakovna, V.; Orseau, L.; Kumar, R.; Martic, M.; and Legg, S. 2019. Penalizing side effects using stepwise relative reachability. In _Workshop on Artificial Intelligence Safety at IJCAI_. * Krakovna et al. (2020a) Krakovna, V.; Orseau, L.; Ngo, R.; Martic, M.; and Legg, S. 2020a. Avoiding Side Effects By Considering Future Tasks. In _Advances in Neural Information Processing Systems_. * Krakovna et al. (2020b) Krakovna, V.; Uesato, J.; Mikulik, V.; Rahtz, M.; Everitt, T.; Kumar, R.; Kenton, Z.; Leike, J.; and Legg, S. 2020b. Specification gaming: the flip side of AI ingenuity. _URL https://deepmind.com/blog/article/Specification-gaming-the-flip-side-of-AI-ingenuity_ . * Leike et al. (2018) Leike, J.; Krueger, D.; Everitt, T.; Martic, M.; Maini, V.; and Legg, S. 2018. Scalable agent alignment via reward modeling: a research direction. _arXiv:1811.07871_ . * McCarthy and Hayes (1969) McCarthy, J.; and Hayes, P. 1969. Some philosophical problems from the standpoint of artificial intelligence. In _Machine Intelligence_ (Meltzer, B.; and Michie, D., eds.), vol. 4. * Pearl (2009) Pearl, J. 2009. _Causality_. Cambridge University Press. * Rahaman et al. (2019) Rahaman, N.; Wolf, S.; Goyal, A.; Remme, R.; and Bengio, Y. 2019. Learning the Arrow of Time for Problems in Reinforcement Learning. In _International Conference on Learning Representations (ICLR)_. * Ray, Achiam, and Amodei (2019) Ray, A.; Achiam, J.; and Amodei, D. 2019. Benchmarking safe exploration in deep reinforcement learning. _arXiv:1910.01708_ . 
* Saisubramanian, Kamar, and Zilberstein (2020) Saisubramanian, S.; Kamar, E.; and Zilberstein, S. 2020. A Multi-Objective Approach to Mitigate Negative Side Effects. In _Proceedings of International Joint Conferences on Artificial Intelligence (IJCAI)_. * Saisubramanian, Zilberstein, and Kamar (2020) Saisubramanian, S.; Zilberstein, S.; and Kamar, E. 2020. Avoiding negative side effects due to incomplete knowledge of AI systems. _arXiv:2008.12146_ . * Saunders et al. (2018) Saunders, W.; Sastry, G.; Stuhlmüller, A.; and Evans, O. 2018. Trial without Error: Towards Safe Reinforcement Learning via Human Intervention. In _Proceedings of International Conference on Autonomous Agents and MultiAgent Systems_. * Shah et al. (2019) Shah, R.; Krasheninnikov, D.; Alexander, J.; Abbeel, P.; and Dragan, A. 2019. Preferences Implicit in the State of the World. In _International Conference on Learning Representations (ICLR)_. * Turner, Hadfield-Menell, and Tadepalli (2020) Turner, A. M.; Hadfield-Menell, D.; and Tadepalli, P. 2020. Conservative agency via attainable utility preservation. In _Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society_. * Turner, Ratzlaff, and Tadepalli (2020) Turner, A. M.; Ratzlaff, N.; and Tadepalli, P. 2020. Avoiding Side Effects in Complex Environments. In _Advances in Neural Information Processing Systems_. * Zhang, Durfee, and Singh (2020) Zhang, S.; Durfee, E.; and Singh, S. 2020. Querying to Find a Safe Policy under Uncertain Safety Constraints in Markov Decision Processes. In _Proceedings of the AAAI Conference on Artificial Intelligence_. * Zhang, Durfee, and Singh (2018) Zhang, S.; Durfee, E. H.; and Singh, S. P. 2018. Minimax-Regret Querying on Side Effects for Safe Optimality in Factored Markov Decision Processes. In _Proceedings of International Joint Conferences on Artificial Intelligence (IJCAI)_.
# A Model of WiFi Performance With Bounded Latency Bjørn Ivar Teigen University of Oslo<EMAIL_ADDRESS>, Neil Davies Predictable Network Solutions Limited<EMAIL_ADDRESS>, Kai Olav Ellefsen University of Oslo<EMAIL_ADDRESS>, Tor Skeie University of Oslo<EMAIL_ADDRESS>, and Jim Torresen University of Oslo <EMAIL_ADDRESS> ###### Abstract. In September 2020, the Broadband Forum published a new industry standard for measuring network quality. The standard centers on the notion of quality attenuation. Quality attenuation is a measure of the distribution of latency and packet loss between two points connected by a network path. A vital feature of the quality attenuation idea is that we can express detailed application requirements and network performance measurements in the same mathematical framework. Performance requirements and measurements are both modeled as latency distributions. To the best of our knowledge, existing models of the 802.11 WiFi protocol do not permit the calculation of complete latency distributions without assuming steady-state operation. We present a novel model of the WiFi protocol. Instead of computing throughput numbers from a steady-state analysis of a Markov chain, we explicitly model latency and packet loss. Explicitly modeling latency and loss allows for both transient and steady-state analysis of latency distributions, and we can derive throughput numbers from the latency results. Our model is, therefore, more general than the standard Markov chain methods. We reproduce several known results with this method. Using transient analysis, we derive bounds on WiFi throughput under the requirement that latency and packet loss must be bounded. ## 1\. Introduction In September 2020 the Broadband Forum published a new industry standard for measuring network quality (Forum2020TR-452.1Requirements, 10). 
The standard is called “Quality Attenuation Measurement Architecture and Requirements”. Quality attenuation is a measure of the latency and packet loss performance of packet-switched networks. In this light, we revisit established modeling methodologies for the WiFi protocol, because most previous work on modeling the 802.11 protocol has focused on analysis of throughput values only (Bianchi2000PerformanceFunction, 2, 9, 19, 20). Throughput analysis can be used to calculate the WiFi link’s average latency (Bianchi2005RemarksAnalysis, 4), but average latency is not sufficient to model Quality of Experience (QoE) (1892, 15). “‘Performance’ is typically considered as a positive attribute of a service. However, a perfect service would be one without error, failure or delay, whereas real services always fall short of this ideal; we can say that their quality is attenuated relative to the ideal” (Thompson2020TowardsSystems, 18). The quality attenuation (abbreviated $\Delta Q$) concept has been developed through several decades of academic work (bradley-1999, 5, 7, 12, 15, 18). Thompson and Davies (Thompson2020TowardsSystems, 18) present a framework for performance management based on the notion of quality attenuation. The $\Delta Q$ framework centers on the assertion that network performance should be defined as the amount of latency and packet loss the network introduces. $\Delta Q$ can be modeled as the distribution of latency introduced by each hop along the network path, with packet loss modeled as infinite latency. The same work shows how the $\Delta Q$ of a network link can be used to model application performance over that link. In particular, the tail of the latency distribution at each hop is important for the end-to-end performance of an application, especially when many hops are involved in transmitting data. 
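This per-hop view composes naturally. The following sketch (our own illustration with invented numbers, not material from the cited works) represents each hop's $\Delta Q$ as a discrete latency distribution over 1 ms bins plus a loss probability, and composes hops in series by convolution:

```python
# Delta-Q sketch: a latency PMF over 1 ms bins; the missing probability
# mass is interpreted as infinite latency, i.e. packet loss.

def convolve(a, b):
    """Distribution of the sum of two independent discrete latencies."""
    out = [0.0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

def mean_latency(pmf):
    """Mean latency (ms), conditional on the packet being delivered."""
    return sum(k * p for k, p in enumerate(pmf)) / sum(pmf)

hop_pmf = [0.0, 0.7, 0.2, 0.05, 0.04]  # P(latency = k ms) for one hop
hop_loss = 1.0 - sum(hop_pmf)          # ~1% mass at infinity (loss)

e2e_pmf, e2e_loss = hop_pmf, hop_loss
for _ in range(4):                     # five identical hops in series
    e2e_pmf = convolve(e2e_pmf, hop_pmf)
    e2e_loss = 1.0 - (1.0 - e2e_loss) * (1.0 - hop_loss)

assert abs(e2e_loss - (1.0 - 0.99 ** 5)) < 1e-9  # loss compounds per hop
assert abs(mean_latency(e2e_pmf) - 5 * mean_latency(hop_pmf)) < 1e-9
```

In such a composition, the shape of the end-to-end distribution, and in particular its tail, is driven by the per-hop tails.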
Understanding the tail of the latency distribution is therefore key to understanding network performance as seen from the end-user perspective. The model described by Bianchi (Bianchi2000PerformanceFunction, 2) is perhaps the most influential WiFi model in the literature. The analysis in (Bianchi2000PerformanceFunction, 2) is performed using steady-state analysis of a Markov chain description of the WiFi protocol. Such steady-state analysis is suitable for analyzing average throughput over long time-scales. Throughput is proportional to average latency when the system is saturated, as shown in (Bianchi2005RemarksAnalysis, 4), and throughput and average latency therefore represent the same information about the system performance at saturation. However, for the purposes of relating WiFi performance to application-level outcomes, we need a more complete description of the latency distribution. This work presents a novel WiFi model that describes complete latency distributions and packet loss probabilities. We can describe long-term average throughput by computing the average latency, and our method is thus more general than the steady-state Markov chain analysis approach. This work analyzes the latency and packet loss performance of the WiFi protocol. We propose a method that explicitly models latency and packet loss. Evaluating our model requires more computational resources than a comparable Markov chain method, but we gain the ability to do transient analysis because we do not rely on the system being in a steady state. We compute throughput values from the latency results and show that our throughput values match those derived by Markov chain methods. We then derive bounds on throughput under the requirement that latency and packet loss must be bounded. We also reproduce some known results, such as the WiFi performance anomaly (Heusse2003Performance802.11b, 11). 
Our model is suitable for analysis of application layer performance using the methods described in (1892, 15, 18) because it accurately models the tail of the latency distribution. Section 2 lays out the most relevant related work. We explain our method and its application to WiFi in section 3. In section 4, we validate the convergence and accuracy of our model. In section 5, we use our model to find an upper bound on WiFi throughput with latency guarantees. We expand on the work on upper bounds in section 6 by exploring the impacts of several improvements to the WiFi protocol. Finally, we conclude the work in section 7. This work does not raise any ethical issues. ## 2\. Background Reeve (1892, 15) shows that mean value analysis of network latency is not sufficient to model application performance. One reason why average latency does not capture the notion of performance is that network users care about how reliably an outcome is delivered on time. We require a model that can capture the risk of not delivering the desired outcome in a specified time. Mean latency is not at all sufficient to capture this risk. Consider a network that loads a website in 1 millisecond 99 out of 100 times, but once every 100th time, loading the website takes 10 seconds. The average delay of this outcome is only about 100ms. An observer monitoring this network using average values only might conclude that a 100ms load time is very reasonable, but the unpredictable behavior is likely to annoy users. Thompson and Davies (Thompson2020TowardsSystems, 18) show how the notion of quality attenuation is related to the probability of delivering application outcomes in time. Bianchi (Bianchi2000PerformanceFunction, 2) models the WiFi distributed coordination function (DCF) using a Markov chain, and in doing so, makes a few key simplifications. The most important simplification is to abstract away the details of delays in the model. 
A time-step in the model is defined by the value of the back-off counters. That is to say, the model does not separate the case in which the medium is idle from the case in which the station (STA) has to wait for another transmission to complete before the back-off counter is decremented. In other words, the time-steps are defined in terms of the model state, not how much actual time has passed. Defining the time-steps in terms of back-off counter values is a useful simplification for a Markov chain analysis, but it comes at the cost of discarding timing information. Bianchi also points this out (Bianchi2000PerformanceFunction, 2, Section IV, A). Tinnirello et al. (Tinnirello2010RethinkingMethodology, 19) extend the methodology of Bianchi (Bianchi2000PerformanceFunction, 2). Here, a Markov chain is solved for the steady-state distribution of back-off timer values. While this approach was chosen to better model the different channel access probabilities of the Wireless Multi-Media (WMM) extension of WiFi, this method is also closer to modeling latency distributions. Tinnirello et al. deal with the distribution of back-off timers instead of simply the probability of packet transmission. Their model still makes simplifications that hide latency information because the model does not deal with differences in transmission times due to different data rates. Heusse et al. (Heusse2003Performance802.11b, 11) show that differences in data rates are very important for WiFi performance. A time-step in Tinnirello's model is defined as the period from the end of one transmission or collision to the end of the next transmission or collision. Thus, this model does not take into account that the size of the transmission time interval may change. That is not to say this model is incorrect, only that it lacks fidelity in modeling latency. In (Engelstad2006Non-saturationPrediction, 9), Engelstad et al. 
expand the model of (Bianchi2000PerformanceFunction, 2) to include the Access Categories of 802.11e and to handle both saturated and unsaturated networks. The model is validated by comparison to a simulation, but only throughput numbers are reported. Youm and Kim (Youm2013LatencyLANs, 21) derive latency distributions from the model of Bianchi (Bianchi2000PerformanceFunction, 2). Their analysis begins by assuming the system is in the steady state and assigning the appropriate latency values to each of the transitions. This approach is similar to the one we present in that it computes latency distributions by explicitly modeling the latency of each possible transition from one state to the next. Our work differs from that of Youm and Kim by avoiding the assumption that the system starts out in the steady state. Avoiding this requirement allows us to perform transient analysis starting from any system state. From our review of previous work, we see a lack of analysis of the latency and packet loss performance of WiFi. We address this shortcoming by developing a novel model of the 802.11 WiFi protocol that allows for the exact computation of the statistical distribution of latency and packet loss. ## 3\. Method Our model is based on the quality attenuation framework ($\Delta Q$ framework) (Thompson2020TowardsSystems, 18). Conceptually, quality attenuation is the delay and loss introduced by a network element. It describes how far the network element is from delivering the “perfect service” of zero delay and no chance of loss. Quality attenuation, denoted by $\Delta Q$, can be described by an improper random variable: a probability distribution over the possible latencies of an outcome. The variable is improper because there can be some chance that the outcome is never delivered. The possibility of an outcome never being achieved is modeled by including infinity in the domain of the random variable. 
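As a sketch of how such an improper random variable can be represented in code (our own illustrative encoding, not an artifact of the $\Delta Q$ framework itself), a discrete $\Delta Q$ can be stored as a latency histogram whose probability mass may sum to less than one; the missing mass is the mass placed at infinity, i.e. the loss probability:

```python
# A discrete improper ΔQ: probability mass over finite latency values (seconds).
# The latency values and probabilities below are hypothetical.
delta_q = {0.001: 0.6, 0.002: 0.3}

total_mass = sum(delta_q.values())
loss_probability = 1.0 - total_mass  # the mass placed "at infinity"

# A valid improper distribution never carries more than unit mass.
assert 0.0 <= loss_probability <= 1.0
print(f"P(outcome never delivered) = {loss_probability:.2f}")
```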
Analyzing the quality attenuation of the WiFi protocol is a matter of describing the latency of every possible path through the protocol state machine. The $\Delta Q$ framework provides the tools for adding up the contributions (the $\Delta Q$’s) from each possible path so that the complete system’s latency distribution can be accurately described. ### 3.1. Labeled Transition System Our model of the WiFi protocol describes the protocol state machine as a labeled transition system. A Labeled Transition System (LTS) is a directed multigraph with labeled edges (see the example LTS in Figure 1). Nodes in the graph represent states of the protocol state machine, and edges represent transitions between the states. Each edge is labeled with a $\Delta Q$ value, which describes the distribution of time needed to complete the transition and includes the probability that the transition never terminates. Each edge also has a probability associated with it, which describes the likelihood of the system progressing through that edge conditioned on the system being in the source state of the edge. Figure 1. An example labeled transition system ### 3.2. Unrolling the labeled transition system In the LTS shown in Figure 1, each transition is labeled with a distribution of possible delays and a probability of that transition from the source state. To make the states of the LTS Markovian, we transform the LTS so that every possible path to each state ends up in a separate copy of that state. In the unrolled LTS shown in Figure 2, new indices are added to distinguish different versions of the states from Figure 1. The cost of this transformation is a large (possibly infinitely large) increase in the size of the state space. If the LTS contains loops, this transformation will introduce infinite recursion, as shown in Figure 2. We can solve this problem by defining a maximum latency and equating any path that exceeds this limit with a loss. 
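As an illustration of this truncation (a toy retry loop of our own, not the WiFi state machine): suppose each pass through a loop takes a fixed delay and succeeds with probability $p$, and any path exceeding the maximum latency is counted as a loss. The unrolling then becomes finite:

```python
p = 0.7            # success probability per pass through the loop (illustrative)
d_ms = 10          # delay per pass, in milliseconds (illustrative)
max_latency_ms = 50

# Unroll the loop only while the accumulated latency stays within the bound;
# every longer path is folded into the loss probability.
max_passes = max_latency_ms // d_ms
pmf = {k * d_ms: (1 - p) ** (k - 1) * p for k in range(1, max_passes + 1)}
loss = (1 - p) ** max_passes

# All probability mass is accounted for: finite paths plus truncated paths.
assert abs(sum(pmf.values()) + loss - 1.0) < 1e-12
```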
The WiFi protocol state machine contains no loops, and so introducing the maximum latency is unnecessary in this case, but we include the idea here to show the generality of the method. Figure 2. The unrolled labeled transition system. ### 3.3. Composable operations #### 3.3.1. Convolution Our goal is to compute the time required for the LTS to evolve from a given starting state to some state which represents the desired outcome. This calculation includes answering the question: “Knowing the starting state, how long does it take to reach a given state?” For states that can only be reached by a single path through the graph, we simply sum the latency contributions of each transition along that path. In the unrolled version of the LTS (see Figure 2) we have, by design, only one path to every state. Because the latency of each edge is described by an improper random variable, the total $\Delta Q$ of a sequence of transitions can be calculated by convolving the $\Delta Q$ of each transition (bradley-1999, 5). See Figure 3. Figure 3. $\Delta Q$ convolution #### 3.3.2. Mixture density When we have more than one path from the starting state to the target state, we cannot compute the arrival time distribution to the target state with convolution alone. In this case, we create a mixture distribution consisting of the latency along each possible path. The weights in the mixture distribution are determined by the probability of taking each of the paths; see Figure 4. There are only two possible outcomes for a packet going through the WiFi protocol stack: successful transmission or packet loss. Consider the branches that go to the state “Done” in Figure 1. We unroll the LTS as illustrated in Figure 2 and perform the convolution operation above on each of the paths ending in a copy of the “Done” state. Now, we know the latency distribution associated with each possible path to the “Done” state. 
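A minimal sketch of both composable operations on discrete latency PMFs (our own illustrative encoding; a full implementation would also propagate the loss mass described above):

```python
from collections import defaultdict

def convolve(a, b):
    """Total ΔQ of two transitions in sequence: convolve the latency PMFs."""
    out = defaultdict(float)
    for la, pa in a.items():
        for lb, pb in b.items():
            out[la + lb] += pa * pb
    return dict(out)

def mixture(branches):
    """Total ΔQ over alternative paths: weight each path's PMF by its probability."""
    out = defaultdict(float)
    for weight, pmf in branches:
        for latency, p in pmf.items():
            out[latency] += weight * p
    return dict(out)

# Two transitions in sequence, each taking 1 or 2 time units:
step = {1: 0.5, 2: 0.5}
path = convolve(step, step)          # {2: 0.25, 3: 0.5, 4: 0.25}

# Two alternative paths to "Done", taken with probability 0.9 and 0.1:
done = mixture([(0.9, path), (0.1, {10: 1.0})])
```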
The total latency of the “Done” state in the original (not-unrolled) LTS is, therefore, the mixture density formed by weighting each of the possible latency distributions that terminate in a copy of the “Done” state by their respective probabilities. See Figure 5. Figure 4. $\Delta Q$ mixture density Figure 5. The reduced form of the WiFi protocol model ### 3.4. WiFi protocol background The WiFi protocol is “listen before talk”. That means that a WiFi station must check that the radio frequency is idle for a certain amount of time before starting a transmission. When a WiFi station begins a transmission procedure, it first selects a random number in the range $[0,CW_{min}]$ and assigns it to a back-off counter (Man2013, 17, Section 10.2.2). “CW” here stands for contention window. If the back-off counter is not zero, the station will wait for a single “slot time”. If the radio frequency is sensed to be idle during the waiting period, the back-off counter is decremented by one. When the back-off counter becomes zero, the station starts transmitting. The reason for this somewhat convoluted scheme is that a station cannot listen while transmitting. This limitation means a transmitting station cannot sense that another station is also transmitting at the same time. Therefore, collisions cannot be handled by both stations interrupting their ongoing transmissions. When many WiFi stations compete for access to the frequency, collisions can waste a significant amount of time. The back-off counter mechanism was introduced to reduce the risk of collisions. When a station is involved in a collision, the station will again choose a random value for its back-off counter. The size of the interval from which random values can be selected is a function of how many times a packet has been retried. The back-off window size doubles with each new retry of the same packet, up to a maximum value of $CW_{max}$. $maxretries$ determines how many times to retry a packet before it is dropped. 
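The window doubling described above can be written down directly, using the $CW_{min}=15$ and $CW_{max}=1023$ values from Table 2:

```python
CW_MIN, CW_MAX = 15, 1023

def contention_window(retries):
    """Window doubles with each retry: 15, 31, 63, ..., capped at CW_MAX."""
    return min((CW_MIN + 1) * 2 ** retries - 1, CW_MAX)

assert [contention_window(r) for r in range(7)] == [15, 31, 63, 127, 255, 511, 1023]
```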
See Table 2 for the values of each parameter. ### 3.5. 802.11 Model details We represent the state of a WiFi system with $n$ competing stations as an allocation of each of the $n$ stations to a back-off counter and retry counter value. Table 1 shows an example of the state representation. Each entry in the state representation counts how many stations have a specific combination of back-off counter and retry counter values. In the example in Table 1, two stations are at back-off counter value two after zero retries, and one station is at back-off counter value six after one retry. Note that our state representation is similar to that of Bianchi (Bianchi2000PerformanceFunction, 2, Figure 4).

| | 0 | 1 | 2 | 3 | 4 | 5 | 6 | $\dots$ |
|---|---|---|---|---|---|---|---|---|
| retry 0 | 0 | 0 | 2 | 0 | 0 | 0 | 0 | $\dots$ |
| retry 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | $\dots$ |
| $\vdots$ | | | | | | | | |
| retry $maxretries$ | 0 | 0 | 0 | 0 | 0 | 0 | 0 | $\dots$ |

Table 1. The state representation for the 802.11 model (columns are back-off counter values)

For a given state, the 802.11 protocol defines the possible transitions to a next state. Only three cases are possible:

1. No stations have a back-off counter value of zero, so all stations decrease their back-off counter value by one after one slot time.
2. Exactly one station has a back-off counter value of zero. This station successfully transmits, spending the time required for transmission. The transmitting station is then finished sending its packet, and it either leaves the system or restarts the back-off procedure with a new packet. The transmitting station resets its retry counter to zero. The remaining stations hold their back-off counters constant for the duration of the transmission.
3. More than one station has a back-off counter value of zero. This causes a collision, spending the amount of time required for the slowest of the colliding stations to transmit.
All the colliding stations then increase their retry counter by one and select a random back-off counter value from the range $[0,CW_{r}]$, where $r$ is the new retry counter value. The stations not involved in the collision keep their back-off counter values constant for the duration of the collision. We include the minimum interval between subsequent WiFi transmissions (Man2013, 17, Figure 10.4) in the time required for each transmission. The duration of this interval is $SIFS+\textit{Slot time}$, such that the earliest possible time for a transmission following a period with a busy medium is $SIFS+2*\textit{Slot time}$. This simplifies the model because we do not need to keep track of whether a transmission just occurred or not. Designing the model this way makes it difficult to model the 802.11e extension of WiFi, where the inter-frame space varies for traffic from different access categories. Future work will expand our model in this direction. ## 4\. Model evaluation, validation and comparison to existing WiFi models In this section, we explain how we evaluate our model, empirically test the accuracy of our model, and verify that the model converges to the same latency distribution with each evaluation. The state-space of our WiFi LTS model is large. Assuming the values for maximum retries, $CW_{min}$, and $CW_{max}$ from Table 2, the number of possible configurations of a single station is $\sum_{i=0}^{6}2^{4+i}=2032$. For $n$ stations, there are then $2032^{n}$ ways to assign them to back-off and retry counter values. Some of these assignments will be equivalent, but even so, evaluating the evolution of this system for all possible state configurations quickly becomes infeasible as $n$ increases. Progress has been made in solving large-scale semi-Markov models similar to the one we use here, although these models have been solved only up to the order of tens of millions of states (Bradley2004Hypergraph-basedModels, 6). 
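The state-space count above, and the three transition cases, can be checked with a much-simplified random simulation (our own illustrative sketch: a single fixed duration for every transmission or collision, no rate differences; parameter values loosely follow Table 2):

```python
import random

SLOT = 50e-6       # slot time in seconds (Table 2)
TX_TIME = 9e-3     # assumed fixed duration of a transmission or collision
CW_MIN_EXP = 4
CW_MAX = 1023
MAX_RETRIES = 6

# Sanity check: a single station has sum_{i=0}^{6} 2^(4+i) = 2032 configurations.
assert sum(2 ** (CW_MIN_EXP + i) for i in range(MAX_RETRIES + 1)) == 2032

def new_backoff(retries):
    """Draw a back-off counter uniformly from the window for this retry count."""
    window = min(2 ** (CW_MIN_EXP + retries), CW_MAX + 1)
    return random.randrange(window)

def time_to_empty(n_stations):
    """One packet per station; return the time until the system is empty."""
    stations = [{"bo": new_backoff(0), "retry": 0} for _ in range(n_stations)]
    t = 0.0
    while stations:
        ready = [s for s in stations if s["bo"] == 0]
        if not ready:                        # case 1: idle slot, all count down
            t += SLOT
            for s in stations:
                s["bo"] -= 1
        else:
            t += TX_TIME                     # case 2 or 3: the medium is busy
            if len(ready) == 1:              # case 2: success, station leaves
                ready[0]["done"] = True
            else:                            # case 3: collision, retry or drop
                for s in ready:
                    s["retry"] += 1
                    if s["retry"] > MAX_RETRIES:
                        s["done"] = True     # packet dropped
                    else:
                        s["bo"] = new_backoff(s["retry"])
            stations = [s for s in stations if not s.get("done")]
    return t
```

Averaging `time_to_empty` over many runs is the essence of the transient evaluation described in section 4.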
We approach the state-space explosion problem by using Monte Carlo simulation to approximate the evolution of the LTS model. In this work, we evaluate our model in two different ways: the case where every station always has a packet to send, and the case where every station has only a single packet to send. We label these “Ergodic evaluation” and “Transient evaluation”. #### 4.0.1. Ergodic evaluation When a station has either successfully transmitted its packet, or the packet is dropped, the station immediately restarts its back-off process. This corresponds to the saturation conditions used by Bianchi (Bianchi2000PerformanceFunction, 2). We call this method of evaluation “ergodic” because it corresponds to the evaluation of an ergodic Markov chain. For this case, we run the model forward from a random starting state until we have observed the outcomes of $10^{4}$ packets. We arrive at the throughput numbers by first calculating the latency distribution seen by the head-of-line packet at each station, and then calculating throughput using Equation 2 (see Section 5.1), appropriately scaled by the number of stations. #### 4.0.2. Transient evaluation When a station has either successfully transmitted its packet, or the packet is dropped, the station leaves the system. We record the time-to-empty, defined as the time at which the last station leaves the system. We call this method of evaluation “transient” because we are essentially modeling the transient response of the system to one packet simultaneously arriving at each of $n$ stations. To compute the distribution of the measured time-to-empty, we evaluate the model starting from a random state $10^{4}$ times. The ability to do transient analysis of latency distributions is the main advantage of our model over steady-state analysis of Markov chains. ### 4.1. 
Rate of convergence We now establish empirical results for the rate of convergence of our Monte Carlo simulations for both the ergodic and the transient evaluation method. For these experiments, we use the parameters in Table 2 and five competing stations. This analysis does not confirm that the results produced are correct, but it shows how consistent the results are across different runs of the Monte Carlo simulation. We have chosen to look at the convergence of the 90th percentile of latency because we are interested in accurately characterizing the tail of the latency distribution. Since events in the tail of the distribution are by definition rare, it takes longer for the 90th percentile to converge. Therefore, these results are stronger than showing convergence of the mean latency. Figure 6 shows the distribution of the 90th percentile latency as a function of the number of packet outcomes observed for the ergodic evaluation method described in section 4.0.1. We ran the simulation 1000 times to compute the distributions of the results. Figure 6. Convergence of the 90th percentile latency estimate as a function of number of packet outcomes observed for the ergodic evaluation method Figure 7 shows the distribution of the 90th percentile time-to-empty as a function of the number of evaluations for the transient evaluation method described in section 4.0.2. We ran 1000 separate simulations, starting from a random state $k*1000$ times in each simulation for $k$ from 1 to 10, and recorded the 90th percentile time-to-empty for each of the simulations. Figure 7. Convergence of the 90th percentile time-to-empty estimate as a function of number of evaluations observed for the transient evaluation method We conclude that the 90th percentile of the distribution converges with high probability for both the ergodic and the transient evaluation method with the number of packet outcomes or evaluations chosen. ### 4.2. 
Comparison to existing models To replicate the results of Bianchi (Bianchi2000PerformanceFunction, 2), we evaluate our model using the ergodic method described in section 4.0.1. We perform the evaluation using the same parameters as Bianchi (Bianchi2000PerformanceFunction, 2, Table 2), shown in Table 2. Figure 8 shows results for total system throughput as a function of initial back-off window size in the ergodic case, compared to results from (Bianchi2000PerformanceFunction, 2, Figure 9). We consider these results sufficiently close to those of (Bianchi2000PerformanceFunction, 2), which demonstrates that our model accurately describes a saturated WiFi system for a set of different system parameters.

| Parameter | Value |
|---|---|
| Slot time ($\mu s$) | 50 |
| SIFS ($\mu s$) | 28 |
| DIFS ($\mu s$) | 128 |
| PHY Header (bits) | 128 |
| MAC Header (bits) | 272 |
| ACK ($\mu s$) | PHY Header + 14*8/base rate |
| Base rate (Mbit/s) | 1 |
| $CW_{min}$ exponent | 4 |
| $CW_{min}$ | 15 |
| $CW_{max}$ | 1023 |
| $maxretries$ | 6 |
| packet size | 1023 |
| Back-off window size | $2^{CW_{min}\text{ exponent}+\text{Number of retries}}$ |

Table 2. Parameters used for comparison to the model of Bianchi (Bianchi2000PerformanceFunction, 2)

Figure 8. System throughput as a function of initial back-off window size. ## 5\. Analysis of latency in the WiFi protocol In this section, we explore the relation between latency and throughput in the WiFi protocol. First, we show how latency and throughput are related in the ergodic case, and show that latency for the ergodic case grows very quickly as the number of competing stations increases. We then discuss the notion of an upper bound on throughput under the condition that latency and packet loss must be bounded. ### 5.1. Relating latency to throughput Time is related to throughput as shown in Equation 1, where $T$ is throughput in packets per second, $N$ is the number of packets sent in some interval, and $D$ is the duration of that interval (in seconds). 
If we also know the average packet size, for instance in bytes, we can calculate the average throughput in Mbit/s. (1) $T=\frac{N}{D}$ In our model, we record the delay of each packet, and so we do not directly measure $D$ in Equation 1. Assuming the interval $D$ ends with the transmission of a packet, and that there was no idle time which did not count towards the delay of any of the recorded packets, we can calculate $D$. Consider the packets from a single station which sends packets back-to-back, as is true in the ergodic case. We denote the latency of packet $i$ by $d_{i}$, and assert $D=\sum_{i=1}^{N}d_{i}$. Observe that this means throughput is the inverse of the average per-packet delay, as shown in Equation 2. (2) $\frac{1}{T}=\frac{1}{N}\sum_{i=1}^{N}d_{i}$ Note that we only consider packets that are not lost, or else the sum of delays would be infinite. This is correct because lost packets do not contribute to the throughput. Lost packets will, however, increase the average latency for the packets that are not lost. This is also in accordance with the method used by Bianchi to calculate mean latency from throughput values (Bianchi2005RemarksAnalysis, 4). ### 5.2. Analyzing latency under saturation load We now proceed to investigate latency and packet loss performance in the ergodic case. Figures 9 and 10 show the latency and packet loss performance of the WiFi DCF for different back-off timer values and different numbers of stations. We read packet loss values in Figures 9 and 10 by observing how far the maximum of each CDF is from 1 on the y-axis. The quality attenuation found in these experiments is so large, especially for a high number of stations, that we argue the throughput results of Figure 8 are of little practical use. 
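Returning to Equation 2, the conversion from recorded per-packet delays to throughput can be checked with a short computation (the delay values below are illustrative, not measurements):

```python
# Per-packet delays (seconds) of successfully delivered packets, sent back-to-back.
delays = [0.008, 0.010, 0.012, 0.010]   # hypothetical values
packet_bits = 1023 * 8                   # packet size from Table 2, in bits

mean_delay = sum(delays) / len(delays)   # 10 ms
throughput_pps = 1.0 / mean_delay        # Equation 2: T = 1 / mean delay
throughput_mbps = throughput_pps * packet_bits / 1e6

assert abs(throughput_pps - 100.0) < 1e-6
print(f"{throughput_mbps:.3f} Mbit/s")   # ≈ 0.818 Mbit/s
```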
Even though total system throughput is very close to the theoretical optimum for the 50-station case with a back-off window size of 1024, the vast majority of user applications will not perform well when running over a network with this much latency and packet loss. Interactive applications such as gaming and video conferencing obviously cannot function well with this much latency, and TCP throughput is severely affected by loss rates as high as those in the 50-station case, as shown by Padhye et al. (Padhye2000ModelingValidation, 14). Our results are consistent with those of Youm and Kim (Youm2013LatencyLANs, 21). Note that the results show the latency of a head-of-line packet, and so queuing delays and potential packet loss due to full buffers will come in addition to the delays shown here. Existing WiFi models mostly evaluate performance under the assumption that the system is in the steady state and saturated (Bianchi1996PerformanceLANs, 3, 19, 20, 21). The results presented here, along with those of (Youm2013LatencyLANs, 21), show that the latency of WiFi under these conditions is very large. A more complete way of modeling WiFi performance is needed. We therefore argue for a different perspective on WiFi performance modeling and optimization. Instead of looking for the system parameters that will give the best throughput, or the highest system utilization, we should look for the system parameters most likely to deliver good Quality of Experience (QoE) with typical end-user applications. The main drivers of QoE, as argued in (1892, 15), are latency and packet loss. It is therefore crucial that our models accurately capture latency and packet loss performance under realistic conditions. Figure 9. CDF for latency and packet loss with initial back-off window size of 8 Figure 10. CDF for latency and packet loss with initial back-off window size of 1024 ### 5.3. 
Establishing an upper bound on time-to-empty We now compute throughput bounds under the condition that latency and packet loss must be kept bounded. We consider bounded latency and packet loss to be the absolute minimum requirement for a good user experience. The worst-case scenario for a WiFi system with $n$ stations is that all $n$ stations begin their back-off procedure at the same time. We can think of this as all stations being maximally correlated, or as having worst-case correlation between the stations. This scenario has the highest risk of collisions and will therefore lead to the longest possible time-to-empty. We investigate this worst-case correlation scenario to establish an upper bound on the time-to-empty of a WiFi system with $n$ stations. The time-to-empty of an 802.11b WiFi link is shown in Figure 11 for one to nine stations. To make the results comparable to those of Bianchi (Bianchi2000PerformanceFunction, 2), we use a channel rate of 1 Mbit/s. The time-to-empty represents the time from the moment all stations simultaneously initiate the back-off process until every station has either completed the transmission of, or dropped, a single packet. To calculate the distribution of time-to-empty, we evaluate the model using the transient evaluation method described in section 4.0.2. Figure 11. Time-to-empty for $n$ competing stations starting back-off procedures at the same time ### 5.4. Finding the maximum system throughput with bounded latency and packet loss A well-known result from queuing theory states that if, on average, packet arrivals to an unbounded queue occur more frequently than departures from the queue, then the queue length grows toward infinity. Thus, to avoid unbounded latency growth (or packet loss when queues are not infinitely large), the packet arrival rate must be smaller than the service rate. 
This relationship is expressed by the inequality in Equation 3, where $\lambda$ is the mean arrival rate in packets/second, and $E[s]$ is the mean service time per packet (also in seconds). We are now ready to find the upper bound on throughput. Our logic is as follows: if the arrival rate is slower than the worst-case service rate, then we know that latency is bounded. Using the mean service time, we calculate an upper bound on the packet arrival rate by setting $\lambda=\frac{1}{E[s]}$. (3) $\lambda<\frac{1}{E[s]}$ Because the time-to-empty represents the time for the system to process one packet from each station, the upper bound on throughput is reached when one packet arrives at each of the stations every mean time-to-empty. The throughput in Mbit/s can then be calculated, assuming we know the PHY rate and the packet size. The upper bound on throughput is shown in Figure 12 for one to seven stations. The throughput shown in Figure 12 is total system throughput, so to compute the per-station throughput, we must divide by the number of stations. The per-station throughput is shown in Figure 13. These results use the parameters shown in Table 2, which are the same as those in Bianchi's analysis (Bianchi2000PerformanceFunction, 2). We now have a tool for guiding the design and configuration of WiFi networks. The upper bound on throughput gives us a way to know whether a given WiFi configuration can support a certain set of applications without building queues. We can potentially use this to inform queuing and scheduling algorithms about the available capacity of the WiFi link so that large delays due to unnecessary congestion can be avoided. In particular, our results show that WiFi performance is very sensitive to the number of simultaneously active stations on a channel. Several methods for improving WiFi performance by reducing the number of simultaneously active stations have been reported. Saeed et al. 
(Saeed2018IfChanges, 16) proposed a token-based WiFi scheduling algorithm for reducing contention overhead. Maity et al. (Maity2017TCPSolution, 13) proposed a WiFi scheduler for TCP downloads which reduces the number of different stations transmitting ACKs to an access point at the same time. Channel planning and transmit power management can also reduce the number of concurrently active stations and thus improve WiFi performance (Akella2007Self-ManagementDeployments, 1, 8). We believe our model can help inform further work on WiFi optimization. Figure 12. Upper bound on total system throughput for $n$ competing stations with bounds on latency and packet loss Figure 13. Upper bound on throughput per station for $n$ competing stations with bounds on latency and packet loss ## 6\. Exploring modern WiFi standards Readers familiar with WiFi performance might object that the throughput bounds presented above are too strict. Indeed, WiFi networks exist today with much greater throughput performance. In this section, we explore some of the protocol features introduced in 802.11n and its accompanying amendments, and investigate how each improvement affects the throughput bounds. We compute the impact on the throughput bound of Request-to-send/Clear-to-send (RTS/CTS) in section 6.2 and of packet aggregation in section 6.3. We also reproduce the “WiFi performance anomaly” first reported by Heusse et al. (Heusse2003Performance802.11b, 11) in section 6.4. ### 6.1. 
An 802.11n baseline

| Parameter | Value |
|---|---|
| Slot time ($\mu s$) | 9 |
| SIFS ($\mu s$) | 10 |
| DIFS ($\mu s$) | 28 |
| PHY Header ($\mu s$) | 24 |
| MAC Header (bits) | 272 |
| ACK ($\mu s$) | PHY Header + 14*8/base rate |
| Basic rate set (Mbit/s) | [1, 2, 5.5, 11, 24] |
| $CW_{min}$ exponent | 4 |
| $CW_{min}$ | 15 |
| $CW_{max}$ | 1023 |
| $maxretries$ | 6 |
| packet size | 1023 |
| Back-off window size | $2^{CW_{min}\text{ exponent}+\text{Number of retries}}$ |

Table 3. Parameters used throughout section 6, unless otherwise specified. These represent the default parameters of 802.11n.

Figure 14. Upper bound on per station throughput for $n$ competing stations using 802.11n parameters (see Table 3) Figure 14 shows the upper bound on per-station throughput assuming throughput is fairly divided and latency and packet loss are bounded. Comparing to Figure 13, we see very similar behavior. However, as expected, the throughput is significantly higher using the 802.11n parameters compared to those of (Bianchi2000PerformanceFunction, 2). The results in Figure 14 will serve as a baseline comparison for the protocol features explored in this section. ### 6.2. RTS/CTS The request-to-send/clear-to-send mechanism in WiFi introduces an extra handshake between sender and receiver. Before a data packet is transmitted, the sender transmits an RTS packet. Upon hearing the RTS packet, the receiver responds with a CTS packet. If the sender hears the CTS packet, the data packet is transmitted. Because the RTS and CTS packets are small, and therefore take little time to transmit, the introduction of the extra handshake can reduce the amount of time spent transmitting whenever a collision occurs. The RTS/CTS mechanism also reduces the impact of hidden node problems. We can model the RTS/CTS mechanism by increasing the time to complete each transmission by the time required to perform the RTS/CTS handshake. 
If a collision occurs, the elapsed time is decreased, because the collision is detected by all involved stations when they fail to receive a CTS frame. Because we do not consider hidden nodes in this work, RTS/CTS can only reduce the amount of time spent waiting for colliding transmissions to cease. We now compare our RTS/CTS results to those of (Bianchi2000PerformanceFunction, 2, 20). We calculate the mean time-to-empty for a WiFi system with and without RTS/CTS enabled. We use WiFi parameters equal to those of (Bianchi2000PerformanceFunction, 2) (see Table 2), and vary the number of stations and the packet size. The packet size is increased in steps of 100 bytes from 100 to 9900 bytes. Figure 15 shows the percentage change in the upper bound on throughput from enabling RTS/CTS. Positive values indicate that the system with RTS/CTS enabled allows for greater throughput. Our results show a trend similar to those found in (Bianchi2000PerformanceFunction, 2, 20), but our results indicate that RTS/CTS should be enabled for a smaller packet size and for a smaller number of stations compared to the results of (Bianchi2000PerformanceFunction, 2, 20). Whereas Bianchi (Bianchi2000PerformanceFunction, 2) sets the threshold for enabling RTS/CTS in a system with five stations running at 1 Mbit/s at 3160 bytes, and Tinnirello et al. (Tinnirello2005RevisitNetworks, 20) set the threshold at 800 bytes, our results indicate that RTS/CTS should be enabled for packets above 600 bytes. Figure 15. Heatmap showing the percent impact of enabling RTS/CTS as a function of number of stations and packet size. Positive values (towards top right corner) indicate that enabling RTS/CTS increases the upper bound on throughput. The parameters are those listed in Table 2. Figure 16 shows the impact of enabling RTS/CTS for 802.11n WiFi using the parameters listed in Table 3 and a channel rate of 144 Mbit/s. 
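The collision-cost accounting behind these comparisons can be sketched as follows (our own simplified model: RTS frames are the standard 20 octets, while control-frame rate handling and other timing details are ignored):

```python
MAC_HEADER_BITS = 272        # from Table 3
PHY_HEADER_US = 24.0         # from Table 3
RATE_MBPS = 144.0            # channel rate used for Figure 16

def tx_time_us(payload_bits):
    """Airtime of one frame: PHY header plus payload at the channel rate."""
    return PHY_HEADER_US + payload_bits / RATE_MBPS

def collision_cost_us(packet_bytes, rts_cts):
    """Airtime wasted by one collision, with or without the handshake."""
    if rts_cts:
        return tx_time_us(20 * 8)  # only the short RTS frames collide
    return tx_time_us(MAC_HEADER_BITS + packet_bytes * 8)

# At a high channel rate the absolute saving per collision is small:
print(collision_cost_us(1023, rts_cts=False))  # ≈ 83 µs
print(collision_cost_us(1023, rts_cts=True))   # ≈ 25 µs
```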
Compared to Figure 15, it is clear that we are more at risk of negatively impacting the system throughput in this case, because the overhead of the RTS/CTS handshake is relatively larger when the channel rate is high. Figure 17 compares the time-to-empty CDF with and without RTS/CTS enabled for 5 stations and a packet size of 1023 bytes. According to the results shown in Figure 16, enabling RTS/CTS reduces total system throughput using these parameters. Figure 17 clearly shows that enabling RTS/CTS reduces the likelihood of high latency, at the cost of added overhead. Increasing predictability at the cost of higher minimum latency may be a desirable trade-off for jitter-sensitive applications, even if the total system throughput decreases.

Figure 16. Heatmap showing the percent impact of enabling RTS/CTS as a function of the number of stations and packet size. Positive values (towards the top right corner) indicate that enabling RTS/CTS increases the upper bound on throughput. The parameters are those listed in Table 3. The channel rate is 144 Mbit/s.

### 6.3. Packet aggregation

Packet aggregation is a mechanism by which several higher-layer packets (here typically IP packets) are transmitted together over a link, without individual MAC-layer headers. Packet aggregation increases the total throughput by reducing the MAC-layer overhead, because the number of transmit opportunities required to send a given number of IP packets is reduced. The 802.11 protocol describes the following two types of packet aggregation: MAC Service Data Unit (MSDU) aggregation and MAC Protocol Data Unit (MPDU) aggregation. To take advantage of either aggregation mechanism, the station must have buffered several packets with the same destination address. The WiFi protocol imposes a limit on the duration of each transmit opportunity. When this time expires, the station must perform the back-off mechanism. #### 6.3.1.
A-MSDU

MAC Service Data Unit aggregation works by grouping several IP-layer packets into a single MAC-layer packet for the WiFi transmission. This grouping reduces the protocol overhead, both in terms of transmission time and waiting time due to the back-off mechanism. Implementing this kind of aggregation in our model is very straightforward, because increasing the packet size parameter is sufficient. In 802.11n, the maximum A-MSDU size is 7935 octets. Using this packet size, we arrive at the maximum stable throughput per station shown in Figure 18.

Figure 17. Time-to-empty distributions for the cases of 5 stations and Tx and Rx rates of 144 Mbit/s with different WiFi extensions.

Figure 18. Upper bound on per-station throughput for $n$ competing stations with A-MSDU packet aggregation.

We now compare the latency of a WiFi link using packet aggregation to that of a link that does not. Figure 17 shows the time-to-empty CDF for a WiFi network with five stations and transmit and receive rates of 144 Mbit/s. “Baseline80211n” uses the exact parameters presented in Table 3. “RTSCTS” uses the RTS/CTS mechanism, and “AMSDU” uses packet aggregation with a packet size of 7935 bytes.

#### 6.3.2. A-MPDU

MAC Protocol Data Unit aggregation groups several MAC-layer packets into a single transmit opportunity. Once a station has won a transmit opportunity, the station can transmit several packets back-to-back without performing the back-off procedure or waiting for individual ACKs between each MAC-layer packet. The main difference between A-MPDU and A-MSDU is that with A-MPDU, each packet that is part of the aggregate can be ACKed separately. Separate ACKs mean the overhead of packet errors is smaller. Because we are investigating the latency induced by the WiFi DCF specifically, we do not explore the details of A-MPDU, other than to note that the DCF performance will be very similar to that of A-MSDU, because both methods compete for transmit opportunities in the same manner.
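A back-of-the-envelope view of why aggregation helps is that the fixed per-transmission overhead is amortized over the aggregated packets. The following is an illustrative sketch under assumed values, not our model; the 14-byte per-subframe header and the 1500-byte IP packet size are assumptions:

```python
# Per-IP-packet airtime with and without A-MSDU aggregation (sketch).
# Timing values follow Table 3; the per-subframe header size (14 bytes)
# and the IP packet size (1500 bytes) are illustrative assumptions.
SIFS, DIFS, PHY_HDR = 10e-6, 28e-6, 24e-6   # seconds
MAC_HDR_BITS = 272
RATE = 144e6                                 # bit/s
T_ACK = PHY_HDR + 14 * 8 / 24e6              # ACK at a 24 Mbit/s basic rate

def airtime_per_ip_packet(ip_bytes=1500, n_aggregated=1, subhdr_bytes=14):
    """Airtime charged to each IP packet when n are sent per MAC frame."""
    payload_bytes = n_aggregated * (ip_bytes + subhdr_bytes)
    t_frame = PHY_HDR + (payload_bytes * 8 + MAC_HDR_BITS) / RATE
    return (DIFS + t_frame + SIFS + T_ACK) / n_aggregated
```

Aggregating five 1500-byte packets (7570 bytes of payload, under the 7935-octet A-MSDU limit) cuts the per-packet airtime from roughly 177 $\mu s$ to roughly 103 $\mu s$ in this sketch.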
Both methods can send similar amounts of data in a single aggregate.

### 6.4. The WiFi performance anomaly

Heusse et al. (Heusse2003Performance802.11b, 11) first described “The WiFi Performance Anomaly,” a phenomenon by which a single low-rate station can lay claim to a large portion of the available airtime. This effect emerges because the WiFi DCF is designed to give each station an equal number of transmit opportunities. When one station holds on to its transmit opportunities for a longer period than all other stations each time it wins one, the result is an uneven allocation of airtime resources. Heusse et al. show that when this happens, all stations achieve the same throughput as the station using the lowest transmit rate. Our WiFi model can readily reproduce this phenomenon. The state representation is modified to include station identifiers such that we can assign different parameters to each station. Then, we assign one station a rate of 1 Mbit/s and vary the rates of the other stations. The resulting throughput per station is shown in Figure 19. As expected, the per-station rate is bounded above by 1 Mbit/s. Our results match those of Heusse et al. (Heusse2003Performance802.11b, 11).

Figure 19. Upper bound on throughput for each of $n$ stations when one of the stations transmits and receives at 1 Mbit/s.

## 7\. Conclusion

In this paper we have presented and validated a novel method for WiFi performance analysis. Our primary contribution is the modeling of complete latency distributions. At the cost of added computational complexity, explicit latency modeling allows more accurate performance analysis by directly modeling the reliability of network outcomes. Our model does this while retaining the ability to produce throughput numbers. We derive upper bounds for WiFi throughput under the requirement that latency and packet loss are bounded.
We also investigate the consequences of RTS/CTS and packet aggregation, and reproduce the result known as “The WiFi performance anomaly”. Our model lets us quantify the impact of trade-offs such as RTS/CTS, where some stability is gained at the expense of added overhead. Using a single framework, we can measure these impacts in terms of throughput and in terms of average latency, jitter and packet loss. This flexible modeling means we can determine which trade-offs should be made depending on the particular use-case of a given WiFi network. Future work will investigate how this new insight can be used to create WiFi control systems that automatically configure the WiFi network to best suit the needs of specific applications, such as video conferencing. ## References * (1) Aditya Akella, Glenn Judd, Srinivasan Seshan and Peter Steenkiste “Self-Management in Chaotic Wireless Deployments” In _Wireless Networks_ 13.6, 2007, pp. 737–755 DOI: 10.1007/s11276-006-9852-4 * (2) Giuseppe Bianchi “Performance analysis of the IEEE 802.11 distributed coordination function” In _IEEE Journal on Selected Areas in Communications_ 18.3 IEEE, 2000, pp. 535–547 DOI: 10.1109/49.840210 * (3) Giuseppe Bianchi, Luigi Fratta and Matteo Oliveri “Performance evaluation and enhancement of the CSMA/CA MAC protocol for 802.11 wireless LANs” In _IEEE International Symposium on Personal, Indoor and Mobile Radio Communications, PIMRC_ 2 IEEE, 1996, pp. 392–396 DOI: 10.1109/pimrc.1996.567423 * (4) Giuseppe Bianchi and Ilenia Tinnirello “Remarks on IEEE 802.11 DCF performance analysis” In _IEEE Communications Letters_ 9.8, 2005, pp. 765–767 DOI: 10.1109/LCOMM.2005.1496609 * (5) Jeremy T. Bradley “Towards Reliable Modelling with Stochastic Process Algebras” In _Transition_ , 1999 URL: http://portal.acm.org/citation.cfm?id=895880 * (6) Jeremy T. Bradley, Nicholas J. Dingle, William J. Knottenbelt and Helen J. 
Wilson “Hypergraph-based parallel computation of passage time densities in large semi-Markov models” In _Linear Algebra and Its Applications_ 386.1-3 SUPPL. North-Holland, 2004, pp. 311–334 DOI: 10.1016/j.laa.2003.12.018 * (7) Neil Davies, Judy Holyer and Peter Thompson “End-to-end management of mixed applications across networks” In _Proceedings - 1999 IEEE Workshop on Internet Applications_ , 1999, pp. 12–19 DOI: 10.1109/WIAPP.1999.788012 * (8) Frank Den Hartog et al. “A pathway to solving the Wi-Fi tragedy of the commons in apartment blocks” In _2017 27th International Telecommunication Networks and Applications Conference, ITNAC 2017_ 2017-Janua, 2017, pp. 1–6 DOI: 10.1109/ATNAC.2017.8215382 * (9) Paal E. Engelstad and Olav N. Østerbø “Non-saturation and saturation analysis of IEEE 802.11e EDCA with starvation prediction” In _ACM MSWiM 2005 - Proceedings of the Eighth ACM Symposium on Modeling, Analysis and Simulation of Wireless and Mobile Systems_ 2006, 2006, pp. 224–233 DOI: 10.1145/1089444.1089485 * (10) Broadband Forum “TR-452.1 Quality Attenuation Measurement Architecture and Requirements”, 2020 URL: https://www.broadband-forum.org/download/TR-452.1.pdf * (11) Martin Heusse, Franck Rousseau, Gilles Berger-Sabbatel and Andrzej Duda “Performance anomaly of 802.11b” In _Proceedings - IEEE INFOCOM_ 2, 2003, pp. 836–843 DOI: 10.1109/infcom.2003.1208921 * (12) Lucian Leahu “Analysis and predictive modeling of the performance of the ATLAS TDAQ network”, 2013 * (13) Mukulika Maity, Bhaskaran Raman and Mythili Vutukuru “TCP download performance in dense WiFi scenarios: Analysis and solution” In _IEEE Transactions on Mobile Computing_ 16.1 Institute of Electrical and Electronics Engineers Inc., 2017, pp. 213–227 DOI: 10.1109/TMC.2016.2540632 * (14) Jitendra Padhye, Victor Firoiu, Donald F. Towsley and James F. Kurose “Modeling TCP Reno performance: A simple model and its empirical validation” In _IEEE/ACM Transactions on Networking_ 8.2, 2000, pp.
133–145 DOI: 10.1109/90.842137 * (15) David C Reeve “A New Blueprint for Network QoS”, 2003, pp. 182–196 URL: http://www.cs.kent.ac.uk/pubs/2003/1892 * (16) Ahmed Saeed, Mostafa Ammar, Ellen Zegura and Khaled A. Harras “If you can’t Beat Them, Augment Them: Improving Local WiFi with Only Above-Driver Changes” In _Proceedings - International Conference on Network Protocols, ICNP_ 2018-September IEEE Computer Society, 2018, pp. 378–388 DOI: 10.1109/ICNP.2018.00053 * (17) The Institute of Electrical and Electronics Engineers “IEEE Standard for Information technology – Telecommunications and information exchange between systems LAN and MAN – Specific requirements - Part 11: Wireless LAN Medium Access Control (MAC) and Physical Layer (PHY) Specifications, IEEE Std 802.11-2016” In _IEEE Std 802.11-2016 (Revision of IEEE Std 802.11-2012)_ 2016, 2013, pp. 1–3534 URL: http://ieeexplore.ieee.org/document/7786995/ * (18) Peter Thompson and Neil Davies “Towards a RINA-Based Architecture for Performance Management of Large-Scale Distributed Systems” In _Computers_ 9.2 MDPI AG, 2020, pp. 53 DOI: 10.3390/computers9020053 * (19) Ilenia Tinnirello and Giuseppe Bianchi “Rethinking the IEEE 802.11e EDCA performance modeling methodology” In _IEEE/ACM Transactions on Networking_ 18.2, 2010, pp. 540–553 DOI: 10.1109/TNET.2009.2029101 * (20) Ilenia Tinnirello, Sunghyun Choi and Youngsoo Kim “Revisit of RTS/CTS exchange in high-speed IEEE 802.11 networks” In _Proceedings - 6th IEEE International Symposium on a World of Wireless Mobile and Multimedia Networks, WoWMoM 2005_ , 2005, pp. 240–248 DOI: 10.1109/WOWMOM.2005.89 * (21) Sungkwan Youm and Eui-Jik Kim “Latency and Jitter Analysis for IEEE 802.11e Wireless LANs” In _Journal of Applied Mathematics_ 2013 Hindawi Publishing Corporation, 2013 DOI: 10.1155/2013/792529
1 Electronic Properties of Materials, University of Vienna - Faculty of Physics, Vienna XXXX

# Synthesis of nitrogen doped single wall carbon nanotubes with caffeine.

Filippo Fedi,1 Oleg Domanov,1 Paola Ayala,1 Thomas Pichler1 (XXXX, revised XXXX, accepted XXXX)

###### Abstract

Nitrogen doped single wall carbon nanotubes have many functional benefits. Doping opens the possibility to control the electronic energy levels, surface energy, surface reactivity and charge carrier density. The additional electron in the outer shell of nitrogen changes the electronic properties of the nanotubes when this element is introduced into the carbon lattice. Here we present the latest findings on in-situ doping during the synthesis of single wall carbon nanotubes using caffeine as a precursor of both carbon and nitrogen. A special furnace with two heating elements allowed us to sublimate and decompose the solid precursor. Caffeine allowed us to reach a high doping percentage with high-quality nanotubes directly in a one-step synthesis procedure.

###### keywords: Single wall carbon nanotubes, Nitrogen doping, Raman scattering, X-ray photoelectron spectroscopy, synthesis.

## 1 Introduction

A precise control of the structure and properties is essential for capitalizing on the exceptional features of carbon nanotubes (CNTs) for real-life applications [2, 3, 4]. CNTs in their pristine form are materials with negligible chemical reactivity and with low surface energy, insufficient to satisfy diverse applications [5, 6]. A smart way to regulate their properties is to use different functionalization methods [7, 8, 9]. In order to achieve high control of the material properties, different approaches have been proposed by different groups [10]. A fascinating way of doping CNTs is substitutionally, i.e. introducing heteroatoms, e.g. nitrogen, into the graphitic lattice [11, 12, 13].
A lot of work has been done with multiwalled N-doped CNTs over the past two decades and important applications have been found [14]. Synthesizing single-walled (SW) tubes has posed more challenges. While some groups have achieved this by post-growth treatments of pristine nanocarbons [15, 16, 17], doping has also been obtained in situ, i.e. by directly growing SWCNTs with incorporated heteroatoms from a specific precursor [18, 19, 20]. Thanks to the additional electron that N contains compared to C, using this heteroatom has gained special interest over the past years [21, 22, 23], since some novel electronic properties have been predicted because the N-CNT would become an n-type doped material [24]. The efficiency of the incorporation of N atoms within the CNT lattice depends on several factors: choice of precursor, catalyst type, reaction temperature and time, gas flow rate and relative pressures [25, 26]. Although N-CNTs have several advantages, a synthesis method for their production that ensures high quality of the N-CNTs, in terms of D/G ratio, is not yet available. In fact, the quality of the materials obtained can sometimes be very low, the reproducibility is difficult, the retention of N atoms is weak, and N2 gas may be trapped inside the CNT [27, 28]. In addition, it is quite common to use toxic and corrosive precursors that are not easy to handle. Tuning the electronic properties of nitrogen doped single wall carbon nanotubes (N-SWCNTs) is very challenging and, if achieved in a controlled manner, a very promising material will be available for applications such as active elements of future nano-electronic devices, optical devices and sensors. The aim of this study is to elucidate the possibility of synthesizing high quality, stable N-SWCNTs starting from caffeine, a very common, cheap, and non-toxic organic molecule that contains both N and C.
The innovative precursor that we employed is a purine, a heterocyclic aromatic organic compound made of a pyrimidine ring fused to an imidazole ring, together with some other specific functional groups made of carbon, oxygen and nitrogen. Furthermore, to the best of our knowledge, employing caffeine as a precursor is something that has never been done so far. In order to characterize the doping level and sample quality, Raman and X-ray photoelectron spectroscopy (XPS) have been utilized.

## 2 Materials and methods

N-SWCNTs were synthesized using catalytic chemical vapor deposition (CCVD) with a two-stage furnace in high vacuum conditions. Caffeine (Sigma-Aldrich, ReagentPlus®), which is a powder, was used. The catalyst was produced by mixing 3 wt.% ammonium iron citrate (Sigma-Aldrich) with 97 wt.% magnesium oxide (Sigma-Aldrich) in ethanol, bath-sonicating for 72 hours, and drying at $70^{\circ}\text{C}$ for 24 h, as described elsewhere [29]. The growth temperature was set to $850^{\circ}\text{C}$ in a home-built quartz furnace with a base pressure of 8 x 10-7 mBar, while the pressure of the carrier gas (argon) was 5 x 10-6 mBar. After the synthesis, the as-grown N-doped SWCNTs were subjected to a purification process. The obtained powder material was soaked for 24 hours in HCl (Sigma–Aldrich, ACS reagent, 37%) to remove most of the MgO and catalyst particles. In order to extract the CNTs, the liquid phase was removed by filtration through a membrane (MF-Millipore, 0.20 $\mu$m). Afterward we collected an N-SWCNT thin film for the further characterizations. Raman spectroscopy was performed with a Horiba Jobin Yvon LabRAM HR800 Raman spectrometer under ambient conditions with a 633 nm laser, 0.5 mW laser power and a spectral resolution of $\sim$2 cm-1. XPS analysis was done using monochromatic AlK$\alpha$ radiation (1486.6 eV) and a hemispherical SCIENTA RS4000 photoelectron analyzer operating with a base pressure of 6x10-10 mBar, with an overall spectral resolution of 0.5 eV.
## 3 Results and discussion

The goal of this study was to grow high quality N-SWCNTs using a solid precursor that contains carbon and nitrogen, expecting to increase the incorporation of N heteroatoms in the lattice of the SWCNT compared to previously reported work. The choice to use a single precursor for both N and C has already been proved by several groups using melamine [30], acetonitrile [31, 32, 33], imidazole [28] and benzylamine [34]. It is worth mentioning that all the abovementioned precursors are either in liquid or gas form, whereas caffeine is a powder, and for that reason we adapted our system to a two-stage one. In this work, caffeine has been chosen because it is not poisonous, easy to manage, and its production is easily scalable. According to Ghosh et al. [28], during pyrolysis these C and N feedstocks supply the pre-existing C–N fragments on the catalyst’s surface, which consequently allows for the incorporation of the N atoms into the nanotube matrix. Our method proved successful. The products of the synthesis were black powders similar to those of typical N-CNTs. At first, we characterized the material using Raman spectroscopy to inspect the nanotube characteristic features. Fig. 1 shows in (A) the radial breathing mode (RBM) region, which is the typical fingerprint of SWCNTs [35, 36].

Figure 1: A) Raman spectrum of the low-energy section of the N-SWCNT. The RBMs have a narrow distribution, with diameters between 1.1 and 2.1 nm. B) Raman spectrum of the higher-energy section of the N-SWCNT. The nanotubes show a small D-band as confirmation of their purity.

According to Kuzmany et al. [37], it is possible to extract the values of the diameters from the RBM frequencies using the formula: $\nu[cm\textsuperscript{-1}]=\frac{234}{d[nm]}+C\textsubscript{2}$ (1) A predominant peak around 190 cm-1 is observed from the high resonance of the tubes corresponding to the matching laser excitation.
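Equation (1) can be inverted directly to estimate diameters from measured RBM frequencies. A minimal sketch, assuming an illustrative offset C2 = 10 cm-1 (a value of this order is commonly used with the 234/d relation):

```python
# Diameter estimate from the RBM frequency by inverting Eq. (1):
# nu = 234/d + C2. The offset C2 = 10 cm^-1 is an illustrative value.
def rbm_diameter(nu_cm, c2=10.0):
    """Tube diameter d in nm for an RBM frequency nu_cm in cm^-1."""
    return 234.0 / (nu_cm - c2)
```

For the predominant peak near 190 cm-1 this gives d of about 1.3 nm, within the 1.1 to 2.1 nm range quoted above.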
Nevertheless, carrying out measurements with more than one laser line has allowed us to identify a narrow distribution, with diameters between 1.1 and 2.1 nm. In Fig. 1B, the Raman spectrum of the higher-energy section embracing the D and G bands of the N-SWCNT is shown. The small D-band intensity, which is $\sim$20 times smaller than that of the G band, is a clear signature of the quality of the single-wall material and the low content of other carbonaceous species. The D peak located around $\simeq$1348 cm-1 is related to an inter-valley double-resonance Raman process associated with defects, vacancies in the lattice, distortions and out-of-plane atoms [38, 39]. In our material we can observe a small D-band with a ratio of the areas of the D band and G band (ID/IG) of only 0.04. Subsequent to the Raman characterization, we performed a wet purification, i.e. soaking of the as-grown material for 24 hours in HCl. Focusing again on the RBM region in the measurements with the 633 nm excitation wavelength, a small window of highest abundance around a mean diameter centered at 1.2 nm has been clearly identified [37]. It is interesting to observe the narrow distribution of the peaks in the low-frequency bands.

Figure 2: Inset of top figure: XPS survey of the material. The main peak around 285 eV is related to C; the peaks of N and O are also shown. Top figure: line shape analysis of C1s, where the peak at 284.5 eV corresponds to C in the sp2 configuration, while the one at 285.6 eV is related to C-N bonding. Lower figure: line shape analysis in the N1s region. The Voigtians (from right to left) are related to the different types of nitrogen: pyridinic, pyrrolic, substitutional N, and other gaseous species arising from the synthesis conditions.

XPS was employed to analytically study the material's chemical composition, the nitrogen content and its bonding environment.
The survey spectrum recorded on the nanotube sample shown in Figure 2 has three main lines: the C1s line of carbon around 285 eV as the most prominent line, the N1s line of nitrogen around 400 eV and oxygen around 530 eV. The last one is related to the minimal oxygen remaining on the surface, originating during the purification procedure and also the air manipulation of the sample before the measurements. This oxygen might be reduced by annealing at high temperature, but we refrained from doing this treatment to avoid causing damage to the doping species. Important to notice is the absence of any metals, metal oxides, filter residues or other contaminations in the survey, indicating the high quality of the N-SWCNT film. Note that the only signal in the survey that could represent a foreign element corresponds to gold, which is related to the sample holder of our spectrometer. Apart from that, no other signals have been detected within the experimental limit. The line shape analysis of the C1s line is shown in the top panel of Figure 2, where the main peak at 284.5 eV is related to carbon with sp2 configuration. The broad peak at 285.6 eV is related to the C-N bonds, which is the first evidence that nitrogen is actually incorporated in the carbon lattice [40, 41]. One of the main goals of this work is to unambiguously confirm the effective presence of nitrogen in the nanotubes, which has been successfully proved by the N1s core level measurements, taking into account the different atomic cross sections of C and N. Considering all types of possibilities in which N bonds, we observed a total amount of 2.56% N. This value has been estimated considering the ratio of the areas of the C and C-N peaks in the C1s line. Furthermore, we have analyzed the different types of N bonding environments along the wall of the SWCNTs, which has been done taking into account previously reported work [13, 25].
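The area-ratio estimate described above amounts to a one-line computation. A sketch with hypothetical peak areas (not the measured ones); the cross-section correction factor is a placeholder assumption:

```python
# Nitrogen fraction from C1s peak areas (sketch with hypothetical areas).
# sigma_ratio would correct for the different C and N photoionization
# cross sections; it is left at 1.0 here as a placeholder assumption.
def nitrogen_fraction(area_c_sp2, area_cn, sigma_ratio=1.0):
    """Fraction of lattice atoms inferred from the C-N / total area ratio."""
    return sigma_ratio * area_cn / (area_c_sp2 + area_cn)

# Hypothetical areas chosen so the ratio comes out near the 2.56% reported:
frac = nitrogen_fraction(area_c_sp2=97.44, area_cn=2.56)
```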
The N1s line is shown in the lower part of Figure 2, and it is mainly composed of four types of nitrogen-functional groups: pyridinic ($\simeq$398 eV), pyrrolic ($\simeq$399 eV), substitutional ($\simeq$400.7 eV) and nitrogenated gaseous compounds or N2 gas molecules ($\geq$ 402 eV) which are present at the nanotube surface. Performing the line shape analysis, a high value of 1.55% substitutional sp2 nitrogen and of 0.4% pyridinic nitrogen was observed. This value is above most of the other reported values for N-SWCNTs. Note that the XPS survey spectrum shows the absence, within the experimental limit of 1%, of any magnesium line that could derive from the catalyst used. This is a confirmation of the favorable outcome of the purification process.

## 4 Conclusion

In this work we successfully achieved the growth of high quality N-SWCNTs using a novel precursor, caffeine, which is non-toxic, easy to handle, and whose production is highly scalable. The nanotubes have a high content of substitutional nitrogen atoms in the lattice, 1.55%, contributing to the doping of the system, which is higher than most of the previously reported and experimentally confirmed values. This paves the way to further studies with this precursor, with the aim of increasing the control of the specific type of doping in order to tailor the properties of the nanotubes. The next steps will be the achievement of higher purity and the incorporation of these nanotubes in devices such as gas sensors, with the aim of exploiting their increased reactivity. This work was supported by the Austrian Science Fund through Project FWF P27769-N20 and by the EU project (2D-Ink FA726006). PA would like to acknowledge the contribution of the COST Action CA15107 (MultiComp).

## References

* [1] * [2] P. J. F. Harris, Carbon nanotube science: synthesis, properties and applications (Cambridge University Press, 2009). * [3] R. H. Baughman, A. A. Zakhidov, and W. A. De Heer, Science 297(5582), 787–792 (2002).
* [4] M. F. De Volder, S. H. Tawfick, R. H. Baughman, and A. J. Hart, Science 339(6119), 535–539 (2013). * [5] A. Hirsch, Angewandte Chemie International Edition 41(11), 1853–1859 (2002). * [6] Y. Zhao, L. Yang, S. Chen, X. Wang, Y. Ma, Q. Wu, Y. Jiang, W. Qian, and Z. Hu, Journal of the American Chemical Society 135(4), 1201–1204 (2013). * [7] P. Ayala, R. Arenal, A. Loiseau, A. Rubio, and T. Pichler, Reviews of modern physics 82(2), 1843 (2010). * [8] Y. P. Sun, K. Fu, Y. Lin, and W. Huang, Accounts of Chemical Research 35(12), 1096–1104 (2002). * [9] U. N. Maiti, W. J. Lee, J. M. Lee, Y. Oh, J. Y. Kim, J. E. Kim, J. Shim, T. H. Han, and S. O. Kim, Advanced materials 26(1), 40–67 (2014). * [10] Y. Deng, Y. Xie, K. Zou, and X. Ji, Journal of Materials Chemistry A 4(4), 1144–1173 (2016). * [11] M. Terrones, P. Ajayan, F. Banhart, X. Blase, D. Carroll, J. C. Charlier, R. Czerw, B. Foley, N. Grobert, R. Kamalakaran et al., Applied Physics A 74(3), 355–361 (2002). * [12] M. Terrones, A. G. Souza Filho, and A. M. Rao, Doped carbon nanotubes: synthesis, characterization and applications, in: Carbon nanotubes, (Springer, 2007), pp. 531–566. * [13] P. Ayala, R. Arenal, M. Rümmeli, A. Rubio, and T. Pichler, Carbon 48(3), 575–586 (2010). * [14] K. Gong, F. Du, Z. Xia, M. Durstock, and L. Dai, Science 323(5915), 760–764 (2009). * [15] S. Esconjauregui, L. D’Arsié, Y. Guo, J. Yang, H. Sugime, S. Caneva, C. Cepek, and J. Robertson, ACS Nano (2015). * [16] E. Van Hooijdonk, C. Bittencourt, R. Snyders, and J. F. Colomer, Beilstein journal of nanotechnology 4(1), 129–152 (2013). * [17] A. Shrestha, M. Batmunkh, C. J. Shearer, Y. Yin, G. G. Andersson, J. G. Shapter, S. Qiao, and S. Dai, Advanced Energy Materials 7(8) (2017). * [18] P. Ayala, F. L. Freire, M. H. Rümmeli, A. Grüneis, and T. Pichler, Phys. Status Solidi (b) 244(11), 4051–4055 (2007). * [19] P. Ayala, A. Grüneis, T. Gemming, D. Grimm, C. Kramberger, M. H. Rümmeli, F. L. Freire, H. Kuzmany, R. Pfeiffer, A. 
Barreiro et al., The Journal of Physical Chemistry C 111(7), 2879–2884 (2007). * [20] A. Elias, P. Ayala, A. Zamudio, M. Grobosch, E. Cruz-Silva, J. Romo-Herrera, J. Campos-Delgado, H. Terrones, T. Pichler, and M. Terrones, Journal of nanoscience and nanotechnology 10(6), 3959–3964 (2010). * [21] H. Sun, C. Kwan, A. Suvorova, H. M. Ang, M. O. Tadé, and S. Wang, Applied Catalysis B: Environmental 154, 134–141 (2014). * [22] T. Susi, Z. Zhu, G. Ruiz-Soria, R. Arenal, P. Ayala, A. G. Nasibulin, H. Lin, H. Jiang, O. Stephan, T. Pichler et al., Phys. Status Solidi (b) 247(11-12), 2726–2729 (2010). * [23] T. Sharifi, F. Nitze, H. R. Barzegar, C. W. Tai, M. Mazurkiewicz, A. Malolepszy, L. Stobinski, and T. Wågberg, Carbon 50(10), 3535–3541 (2012). * [24] R. Czerw, M. Terrones, J. C. Charlier, X. Blase, B. Foley, R. Kamalakaran, N. Grobert, H. Terrones, D. Tekleab, P. Ajayan et al., Nano Letters 1(9), 457–460 (2001). * [25] P. Ayala, A. Grüneis, T. Gemming, B. Büchner, M. Rümmeli, D. Grimm, J. Schumann, R. Kaltofen, F. Freire Jr, H. F. Filho et al., Chemistry of Materials 19(25), 6131–6137 (2007). * [26] P. Ayala, A. Grüneis, C. Kramberger, M. Rümmeli, I. Solorzano, F. Freire Jr, and T. Pichler, The Journal of chemical physics 127(18), 184709 (2007). * [27] M. Terrones, H. Terrones, N. Grobert, W. Hsu, Y. Zhu, J. Hare, H. Kroto, D. Walton, P. Kohler-Redlich, M. Rühle et al., Applied Physics Letters 75(25), 3932–3934 (1999). * [28] K. Ghosh, M. Kumar, T. Maruyama, and Y. Ando, Journal of Materials Chemistry 20(20), 4128–4134 (2010). * [29] L. Shi, M. Sauer, O. Domanov, P. Rohringer, P. Ayala, and T. Pichler, Phys. Status Solidi (b) 252(11), 2558–2563 (2015). * [30] J. Gaillard, M. Skove, and A. M. Rao, Applied Physics Letters 86(23), 233109 (2005). * [31] M. Glerup, M. Castignolles, M. Holzinger, G. Hug, A. Loiseau, and P. Bernier, Chemical Communications(20), 2542–2543 (2003). * [32] T. Thurakitseree, C. Kramberger, P. Zhao, S. Aikawa, S. Harish, S. Chiashi, E. 
Einarsson, and S. Maruyama, Carbon 50(7), 2635–2640 (2012). * [33] T. Thurakitseree, C. Kramberger, A. Kumamoto, S. Chiashi, E. Einarsson, and S. Maruyama, ACS Nano 7(3), 2205–2211 (2013). * [34] P. Ayala, A. Grüneis, T. Gemming, D. Grimm, C. Kramberger, M. H. Rümmeli, F. L. Freire, H. Kuzmany, R. Pfeiffer, A. Barreiro et al., The Journal of Physical Chemistry C 111(7), 2879–2884 (2007). * [35] M. S. Dresselhaus, G. Dresselhaus, R. Saito, and A. Jorio, Physics reports 409(2), 47–99 (2005). * [36] A. Rao, E. Richter, S. Bandow, B. Chase, P. Eklund, K. Williams, S. Fang, K. Subbaswamy, M. Menon, A. Thess et al., Science 275(5297), 187–191 (1997). * [37] H. Kuzmany, W. Plank, M. Hulman, C. Kramberger, A. Grüneis, T. Pichler, H. Peterlik, H. Kataura, and Y. Achiba, The European Physical Journal B-Condensed Matter and Complex Systems 22(3), 307–320 (2001). * [38] I. O. Maciel, N. Anderson, M. A. Pimenta, A. Hartschuh, H. Qian, M. Terrones, H. Terrones, J. Campos-Delgado, A. M. Rao, L. Novotny et al., Nature materials 7(11), 878–883 (2008). * [39] J. Campos-Delgado, I. O. Maciel, D. A. Cullen, D. J. Smith, A. Jorio, M. A. Pimenta, H. Terrones, and M. Terrones, ACS Nano 4(3), 1696–1702 (2010). * [40] G. Ruiz-Soria, A. Perez Paz, M. Sauer, D. J. Mowbray, P. Lacovig, M. Dalmiglio, S. Lizzit, K. Yanagi, A. Rubio, A. Goldoni et al., ACS nano 8(2), 1375–1383 (2014). * [41] R. Lv, Q. Li, A. R. Botello-Méndez, T. Hayashi, B. Wang, A. Berkdemir, Q. Hao, A. L. Elías, R. Cruz-Silva, H. R. Gutiérrez et al., Scientific reports 2, 586 (2012).
# A comparative accuracy and convergence study of eigenerosion and phase-field models of fracture A. Pandolfi1, K. Weinberg2 and M. Ortiz3 1Dipartimento di Ingegneria Civile e Ambientale, Politecnico di Milano, Piazza Leonardo da Vinci 32, 20133 Milano, Italy. 2Department of Mechanical Engineering, Universität Siegen, 57068 Siegen, Germany. 3Division of Engineering and Applied Science, California Institute of Technology, 1200 E. California Blvd., Pasadena, CA 91125, USA. <EMAIL_ADDRESS> ###### Abstract. We compare the accuracy, convergence rate and computational cost of eigenerosion (EE) and phase-field (PF) methods. For purposes of comparison, we specifically consider the standard test case of a center-crack panel loaded in biaxial tension and assess the convergence of the energy error as the length scale parameter and mesh size tend to zero simultaneously. The panel is discretized by means of a regular mesh consisting of standard bilinear or $\mathbb{Q}$1 elements. The exact stresses from the known analytical linear elastic solution are applied to the boundary. All element integrals over the interior and the boundary of the domain are evaluated exactly using the symbolic computation program Mathematica. When the EE inelastic energy is enhanced by means of Richardson extrapolation, EE is found to converge at twice the rate of PF and to exhibit much better accuracy. In addition, EE affords a one-order-of-magnitude computational speed-up over PF. ## 1\. Introduction The tracking of crack growth in solids is a free-discontinuity problem involving the formation of new internal surfaces. Modelling crack propagation remains a challenging problem in computational mechanics. In recent years, the phase-field (PF) method has gained in popularity, especially for crack propagation and tracking problems, cf., e. g., [1, 2, 3, 4, 5, 6, 7, 8]. 
Proposed originally by Ambrosio and Tortorelli [9, 10] as a regularization of the Mumford-Shah image segmentation functional, the PF method introduces an auxiliary continuous field, the phase field, as a means of representing the state of the material in the vicinity of the crack. In effect, the phase field smooths the sharp surfaces of the cracks over a neighboring volume of finite thickness. In this way, the problem is reformulated in terms of displacement and phase fields defined over the entire volume of the domain. The governing equations define a system of second-order partial differential equations, thus in principle eschewing the difficulties inherent to evolving boundaries and discontinuities. However, the convenience of the initial implementation comes at the price of exceedingly fine discretization requirements in the vicinity of the cracks, a doubling of degrees of freedom, a sensitive dependence on the choice of intrinsic length scale, strongly non-linear and possibly unstable dynamics, difficulties enforcing strict irreversibility, no-healing and positive dissipation, difficulties enforcing crack closure, difficulties enforcing mode-mixity dependent fracture criteria, onerous computing time requirements and other difficulties, which need to be carefully addressed and assessed. Element erosion (ER) methods [11, 12, 13, 14], consisting of approximating cracks as notches of small but finite width, supply another well-established class of computational methods for simulating crack growth which has been extensively used to simulate fracture in a number of areas of application, including fragmentation and terminal ballistics. In seminal work, Negri [15] noted that some of the early versions of element erosion fail to converge, or converge to the wrong limit, due to mesh-dependency of the crack path, and provided a remedy based on the use of local averages over intermediate scales [16, 17].
Subsequent enhancements of element erosion incorporating such local averaging [18, 19, 20, 21, 22, 23, 24, 25, 26] are provably convergent to Griffith fracture in the limit of vanishingly small mesh sizes [18]. Phase-field and element erosion methods have a common variational structure: i) an elastic energy-release mechanism, namely, progressive damage in the case of PF and abrupt damage in the case of ER; and ii) an energy cost of damage, derived from the phase-field and its gradients in the case of PF and from an estimate of the fracture area in the case of ER. In both cases, the static equilibrium configurations of the solid follow from global energy minimization. In addition, crack propagation is modeled in both cases by means of a rate-independent gradient flow that balances elastic energy-release rate and dissipation. We begin by formalizing the commonalities between PF and ER methods and showing how they are special cases of a common variational structure based on the general notion of eigendeformation. Eigendeformations are widely used in mechanics to describe deformation modes that cost no local energy, cf., e. g., [27]. In some sense, eigendeformations provide the most general representation of elastic energy-release mechanisms. Not surprisingly, therefore, the method of eigendeformations provides a common framework for both PF and ER methods. Within this unified view, PF and ER methods simply correspond to particular choices of restricted classes of allowable eigendeformations and cost functions thereof. Whereas the convergence properties of finite-element approximations of EE and PF are well established mathematically [28, 18], a direct quantitative comparison and assessment of both methods appears to have been missing. In this work, we endeavor to fill that gap by means of selected numerical tests.
By convergence we specifically understand convergence of the EE and PF solutions to the Griffith solution as the length parameter $\epsilon$ and the mesh size $h$ both tend to zero. Thus, we regard the Griffith solution as exact and the EE and PF solutions as approximations thereof. Given the variational and energy minimization principles at work for both EE and PF, it is natural to measure errors and convergence rates in terms of energy and endeavor to ascertain the rate at which the EE and PF energies approach the limiting Griffith energy. We also recall that, for propagating cracks, the energy release rate supplies the requisite driving force for crack advance. Therefore, accuracy and convergence of the energy is a sine qua non prerequisite for the accuracy and convergence of the crack tracking problem. It bears emphasis that we seek to characterize the convergence of solutions with respect to two parameters simultaneously, namely, $\epsilon$ and $h$. This double limit raises the fundamental question of the relative rates at which $\epsilon$ and $h$ should be reduced to zero. Mathematical analysis [28, 18] shows that convergence requires $\epsilon$ to decrease to zero more slowly than $h$, i. e., $\epsilon$ must be chosen on a scale intermediate between $h$ and the size of the domain. Remarkably, this requirement is contrary to rules of thumb often used in practice that recommend setting $\epsilon$ to a fixed multiple of $h$. Here, we instead seek to optimize $\epsilon$ for given $h$ as part of the approximation scheme. Specifically, for a given mesh size $h$ we determine the optimal value $\epsilon_{h}$ of $\epsilon$ by recourse to energy minimization, in the spirit of variational adaption. We show that $\epsilon_{h}$ indeed yields the energy closest to the Griffith limit and thus minimizes the energy error for given mesh size $h$.
The resulting mesh-size convergence plots for EE and PF may therefore be viewed as the best possible for each method, which makes the method comparison fair. We specifically consider the standard test case of a center-crack panel loaded in biaxial tension as a means of assessing the accuracy, convergence rate and computational cost performance of EE and PF. The panel is discretized by means of a regular square mesh consisting of standard bilinear or $\mathbb{Q}$1 elements. In order to render the method comparison fair, all fields, including displacements and phase fields, are interpolated using the same shape functions. The exact stresses from the known analytical linear elastic solution are applied to the boundary. In order to deconvolve the discretization and quadrature errors, all element integrals over the interior and the boundary of the domain are computed exactly using the symbolic computation program Mathematica [29]. The results of the numerical tests reveal a superior accuracy and computational efficiency of EE over PF. In particular, when the inelastic EE energy is enhanced by means of Richardson extrapolation, EE converges at twice the rate of PF and exhibits better accuracy. In addition, EE is found to afford a one-order-of-magnitude computational speed-up over PF. ## 2\. Multi-field models of brittle fracture According to Griffith’s criterion for fracture, in a brittle material crack growth results from the competition between elastic energy minimization and the fracture energy cost of creating new surface.
Assuming rate independence, crack growth in a solid occupying a domain $\Omega\subset\mathbb{R}^{3}$ is governed by the potential energy (1) $\displaystyle\Pi(u)=E(u)+\text{(forcing terms)}\,,$ where (2) $\displaystyle E(u)$ $\displaystyle=\int_{\Omega\backslash J_{u}}W(\varepsilon(u))\,{dx}+G_{c}|J_{u}|\,,$ is the total energy, including the elastic energy of the solid and the energy cost of fracture, $W(\varepsilon(u))$ denotes the strain energy density, $\varepsilon(u)=\operatorname{sym}\nabla u$ the linearized strain tensor, $u(x)$ the displacement field, $dx$ the element of volume and the forcing terms (not spelled out for brevity) include body forces, boundary tractions and prescribed displacements. The jump set $J_{u}$ collects the cracks across which the displacement $u$ may jump discontinuously and $|J_{u}|$ denotes the crack surface area. The material-specific parameter $G_{c}$ is the specific fracture energy density per unit area and measures the fracture strength of the solid. The central and all-encompassing governing principle of energy minimization posits that the displacement field $u$ at any given time is expected to minimize the potential energy $\Pi(u)$ subject to monotonicity of the jump set $J_{u}$, i. e., to the constraint that $J_{u}$ must contain all prior jump sets, and to crack closure constraints. In this manner, the problem of crack tracking is reduced to a pseudo-elastic problem, with monotonicity and closure constraints, for every state of loading. Such pseudo-elastic problems arise generally for rate-independent inelastic solids under monotonicity constraints (cf., e. g., [30] for a rigorous derivation) and were initially formulated in connection with deformation theory of plasticity [31]. The problem thus defined is a free-discontinuity problem in the sense that the displacement field $u$ is allowed to be discontinuous and the discontinuity or jump set $J_{u}$ itself, i. 
e., the crack surface in the present application, is an unknown of the problem. The existence and approximation properties of such problems have been extensively investigated in the mathematical literature (cf., e. g., [32] for a review). Free-discontinuity problems are notoriously difficult to solve computationally, which has spurred the search for sundry regularizations of the problem that relax, to good computational advantage, the sharpness of the discontinuities. In the present work, we specifically focus on two such regularizations, eigenfracture and phase-field models, which we briefly summarize next. ### 2.1. Eigenfracture The method of eigenfracture (EF) is an approximation scheme for generalized Griffith models based on the notion of eigendeformation [18]. The approximating energy functional is assumed to be of the form (3) $\begin{split}E_{\epsilon}(u,\varepsilon^{*})&=\int_{\Omega}W(\varepsilon(u)-{\varepsilon^{*}})\,{dx}+\frac{G_{c}}{2\epsilon}|\\{{\varepsilon^{*}}\neq 0\\}_{\epsilon}|\\\ &=E^{e}(u,\varepsilon^{*})+E_{\epsilon}^{i}(\varepsilon^{*})\end{split}$ where ${\varepsilon^{*}}$ is the eigendeformation field that accounts for fracture, $E^{e}(u,\varepsilon^{*})$ is the elastic energy, $E_{\epsilon}^{i}(\varepsilon^{*})$ is the energy cost of the eigendeformation, or inelastic energy, and $\epsilon$ is a small length parameter. The elastic energy $E^{e}(u,\varepsilon^{*})$ follows as the integral over the entire domain of the strain energy density $W$ as a function of the total strain $\varepsilon(u)$ reduced by the eigenstrain ${\varepsilon^{*}}$. In this manner, eigendeformations allow the displacement field to develop jumps at no cost in elastic energy. This local relaxation comes at the expense of a certain amount of fracture energy. The challenge in regularized models of fracture is to estimate the inelastic fracture energy $E_{\epsilon}^{i}(\varepsilon^{*})$ in a manner that converges properly as $\epsilon\to 0$. 
In the method of eigenfracture, the crack area is estimated as the volume of the $\epsilon$-neighborhood $\\{{\varepsilon^{*}}\neq 0\\}_{\epsilon}$ of the support $\\{{\varepsilon^{*}}\neq 0\\}$ of the eigendeformations scaled by $1/\epsilon$, cf. Fig. 3b. Specifically, in this construction $\\{{\varepsilon^{*}}\neq 0\\}$ is the set of points where the eigendeformations differ from zero, $\\{{\varepsilon^{*}}\neq 0\\}_{\epsilon}$ is the $\epsilon$-neighborhood of $\\{{\varepsilon^{*}}\neq 0\\}$, i. e., the set of points at a distance to $\\{{\varepsilon^{*}}\neq 0\\}$ less than or equal to $\epsilon$, and $|\\{{\varepsilon^{*}}\neq 0\\}_{\epsilon}|$ is the volume of $\\{{\varepsilon^{*}}\neq 0\\}_{\epsilon}$. Remarkably, the method of eigenfracture is provably convergent [18], in the sense that the total energy (3) $\Gamma$-converges to the Griffith energy (2) in the limit of $\epsilon\to 0$. This convergence property shows that the eigenfracture method is indeed physically and mathematically sound. We recall that $\Gamma$-convergence of the energy functionals in turn implies convergence of the solutions as $\epsilon\to 0$, i. e., the eigenfracture solutions converge to the solutions of Griffith fracture in the limit of vanishingly small length parameter $\epsilon$. ### 2.2. Phase-field models of fracture In the PF approximation of Griffith fracture, the state of the material is characterized by an additional continuous field $v(x)$ taking values in the interval $[0,1]$ and $v=0$ at the crack. The crack set $J_{u}$ is then approximated as a diffuse interface where $v\neq 1$. The corresponding fracture model traces back to the pioneering work of Ambrosio and Tortorelli [10], who showed that a two-field functional $\Gamma$-converges to the Mumford-Shah functional of image segmentation.
Generalized to three-dimensional elasticity, the two-field functional of Ambrosio and Tortorelli assumes the form (4) $\begin{split}E_{\epsilon}(u,v)&=\int_{\Omega}\Big{(}(v^{2}+o(\epsilon))W(\varepsilon(u))+G_{c}\Big{(}\frac{(1-v)^{2}}{4\epsilon}+\epsilon|\nabla v|^{2}\Big{)}\Big{)}\,{dx}\\\ &=E_{\epsilon}^{e}(u,v)+E_{\epsilon}^{i}(v),\end{split}$ where $\epsilon$ is a small length parameter and $o(\epsilon)$ stands in for a positive function that decreases to zero faster than the small parameter $\epsilon$. The work of Ambrosio and Tortorelli, and other similar works [33, 32], subsequently spawned numerous variants, extensions and implementations (e. g., [34, 35, 1, 15, 16, 36]), but the differential structure of the fracture energy $E_{\epsilon}^{i}(v)$ in (4) has remained essentially unchanged in the later works. ### 2.3. Eigenerosion Eigenerosion (EE) supplies an efficient implementation of the eigenfracture model [19]. To establish the connection between eigenfracture, eq. (3), and eigenerosion, assume that $W(\varepsilon)$ is quadratic and restrict eigendeformations to the particular form (5) $\varepsilon^{*}=\varepsilon(u)-(w+o(\epsilon))^{1/2}\varepsilon(u),$ with $w$ taking the values $0$ or $1$, i. e., $w(x)\in\\{0,1\\}$. Inserted into eq. (3) this gives the EE functional (6) $\begin{split}E_{\epsilon}(u,w)&=\int_{\Omega}(w+o(\epsilon))W(\varepsilon(u))\,{dx}+\frac{G_{c}}{2\epsilon}|\\{w=0\\}_{\epsilon}|\\\ &=E_{\epsilon}^{e}(u,w)+E_{\epsilon}^{i}(w).\end{split}$ By Jensen’s inequality and properties of extreme points [37], it follows that the range of $w$ can be extended to the entire interval $[0,1]$, i. e., $0\leq w(x)\leq 1$, without changing the solutions. It thus follows that EE is a restricted form of eigenfracture and, therefore, it supplies an upper bound of the eigenfracture energy in general.
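A useful sanity check on the Ambrosio-Tortorelli fracture energy in (4): in one dimension the optimal profile is $v(x)=1-\mathrm{e}^{-|x|/(2\epsilon)}$, and the inelastic energy evaluates to exactly $G_{c}$ per unit crack area for every $\epsilon$, i. e., the regularization spreads the crack over a width of order $\epsilon$ without altering the total fracture energy. A minimal numerical sketch of this property (illustrative, not part of the paper's implementation):

```python
import numpy as np

def at_fracture_energy(G_c, eps, n=200001):
    """Integrate the 1D Ambrosio-Tortorelli fracture energy density
    G_c*((1 - v)^2/(4*eps) + eps*v'^2) for the optimal profile
    v(x) = 1 - exp(-|x|/(2*eps)), using symmetry to integrate x >= 0 only."""
    L = 40.0 * eps                       # truncation width; the profile decays fast
    x = np.linspace(0.0, L, n)
    v = 1.0 - np.exp(-x / (2.0 * eps))
    dv = np.gradient(v, x)
    f = G_c * ((1.0 - v) ** 2 / (4.0 * eps) + eps * dv ** 2)
    dx = x[1] - x[0]
    return 2.0 * dx * (f.sum() - 0.5 * (f[0] + f[-1]))   # trapezoid rule

# Independently of eps, the result is G_c (here G_c = 1).
energies = [at_fracture_energy(1.0, e) for e in (0.1, 0.05, 0.01)]
```

The $\epsilon$-independence of the result is precisely what makes the AT functional a faithful regularization of the Griffith surface energy.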
Evidently, the EE energy (6) may be regarded as a PF model with phase field (7) $v=\sqrt{w}$ and a fracture energy computed by the $\epsilon$-neighborhood construction. Conversely, PF models of fracture may be viewed as special cases of EE, and hence eigenfracture, where the fracture energy is of the Ambrosio-Tortorelli type. The great advantage of the EE model (6) vs. the conventional Ambrosio- Tortorelli-type phase-field model (4) is that in the former, eq. (6), the phase-field is undifferentiated and evaluates the fracture energy through an integral expression, whereas the latter, eq. (4), requires the phase-field to be differentiated. Differentiation in turn requires regularity and conforming interpolation, e. g., by the finite-element method. By contrast, the integral form of the fracture energy in (6) allows the phase-field to be approximated, e. g., as piecewise constant $0$ or $1$, which leads to a considerable increase in implementational simplicity and robustness [19, 20]. ### 2.4. Non-local fracture as an Artificial Neural Network Figure 1. Artificial Neural Network representation of the $\epsilon$-neighborhood construction for the computation of the fracture energy. Artificial neural networks provide a compelling interpretation of the $\epsilon$-neighborhood construction that suggests an entire class of extensions thereof. To make this connection, we introduce the mollifier (8) $\varphi_{\epsilon}(x)=\left\\{\begin{array}[]{ll}1/(4\pi\epsilon^{3}/3),&|x|<\epsilon,\\\ 0,&\text{otherwise},\end{array}\right.$ and the activation function (9) $f(w)=\left\\{\begin{array}[]{ll}0,&w\leq 0,\\\ 1,&\text{otherwise}.\end{array}\right.$ Next we note that the function (10) $w_{\epsilon}(x)=\int_{\Omega}\varphi_{\epsilon}(x-y)w(y)\,\mathrm{d}y=(\varphi_{\epsilon}*w)(x),$ obtained by taking the convolution of $\varphi_{\epsilon}$ and $w$, is positive in the $\epsilon$-neighborhood $\\{w\neq 0\\}_{\epsilon}$ of the crack set and vanishes elsewhere. 
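The convolution construction of $w_{\epsilon}$ can be illustrated on a pixel grid: convolving the indicator of a crack-like set with a ball mollifier and thresholding yields the characteristic function of the $\epsilon$-neighborhood, whose measured area matches the elementary formula for a segment ($2a\cdot 2\epsilon$ plus two half-disks). A sketch, following the §2.4 convention that $w$ marks the crack set; all grid parameters are illustrative assumptions:

```python
import numpy as np

h = 0.01                       # grid spacing (illustrative)
eps = 0.15                     # neighborhood radius (illustrative)
L = 1.0
n = int(round(L / h)) + 1      # 101 x 101 grid on [-L/2, L/2]^2
xs = np.arange(n) * h - L / 2
X, Y = np.meshgrid(xs, xs, indexing="ij")

# Indicator w of a crack-like set: one row of pixels of length 2a.
a = 0.2
w = ((np.abs(X) <= a + 1e-9) & (np.abs(Y) <= h / 2)).astype(float)

# Ball mollifier on the same grid (normalization is irrelevant after
# thresholding).
phi = ((X ** 2 + Y ** 2) <= eps ** 2).astype(float)

# Discrete convolution w_eps = phi * w via FFT (circular, but the domain is
# large enough that wrap-around does not reach the crack neighborhood).
w_eps = np.fft.ifft2(np.fft.fft2(w) * np.fft.fft2(np.fft.ifftshift(phi))).real

# Activation: threshold to the characteristic function of the
# eps-neighborhood of the crack set.
chi = (w_eps > 0.5).astype(float)   # 0.5 guards against FFT round-off
area = chi.sum() * h * h

# Elementary area of the eps-neighborhood of a segment of length 2a:
# a 2a x 2eps rectangle plus two half-disks of radius eps.
area_exact = 2 * a * 2 * eps + np.pi * eps ** 2
```

The pixel count agrees with the exact neighborhood area up to boundary-resolution error of order $h$ times the neighborhood perimeter.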
Therefore, the filtered function $f(w_{\epsilon}(x))$ is $1$ in the $\epsilon$-neighborhood $\\{w\neq 0\\}_{\epsilon}$ and vanishes elsewhere, i. e., it is the characteristic function of the $\epsilon$-neighborhood. Finally, we have (11) $E_{\epsilon}^{i}(w)=\frac{G_{c}}{2\epsilon}\int_{\Omega}f(w_{\epsilon}(x))\,{dx}=\frac{G_{c}}{2\epsilon}\int_{\Omega}f\big{(}(\varphi_{\epsilon}*w)(x)\big{)}\,{dx},$ which supplies an integral representation of the $\epsilon$-neighborhood construction. In (11), we immediately recognize the structure characteristic of an artificial neural network (cf., e. g., [38]), Fig. 1. Thus, the fracture energy $E_{\epsilon}^{i}(w)$, which is the output of the network, follows from the input $w$ through the composition of three operations. The first operation is a convolution $\varphi_{\epsilon}*w$, which defines a linear neural network. The outcome $w_{\epsilon}$ of this operation is filtered locally by means of the activation function $f$ in the form of a binary switch, with the result that $f(w_{\epsilon})$ is the characteristic function of the $\epsilon$-neighborhood of the crack set. Finally, the fracture energy follows as an integral of $f(w_{\epsilon})$, suitably scaled by $G_{c}/2\epsilon$. The artificial neural network interpretation suggests an extension of eigenfracture to a more general class of fracture models where the fracture energy is of the form (11) but $\varphi_{\epsilon}$ and $f$ are a general mollifier and activation function, respectively. This generalization immediately raises the question of how the accuracy and convergence properties of the model depend on the choice of $\varphi_{\epsilon}$ and $f$. ## 3\. Test case: Slit crack under all-around tension We proceed to assess the accuracy and convergence of PF and EE approaches by recourse to the standard test case of a slit crack in an infinite solid subjected to all-around tension, Fig. 2. We regard the Griffith solution as exact and the EE and PF solutions as approximations thereof.
Since both the PF and EE problems are energy minimization problems, we specifically monitor energy errors and seek to characterize the convergence with respect to the length parameter $\epsilon$ and mesh size $h$ simultaneously. ### 3.1. Exact reference results Figure 2. (a) Slit crack in infinite plate under all around tension $\sigma_{0}$. (b) Definition of the geometrical coordinates defining the stress state around the crack. We specifically consider an infinite solid containing a straight crack of length $2a$ deforming in plane strain under the action of equibiaxial stress $\sigma_{0}$ at infinity, see Fig. 2. The crack-tip stress field is mode I since the loads are symmetric with respect to the crack line. Conveniently, the problem has an exact solution [39], namely, (12a) $\displaystyle\sigma_{11}=$ $\displaystyle\sigma_{0}\,\frac{r}{\sqrt{r_{1}r_{2}}}\left[\cos\left(\theta-\frac{1}{2}\theta_{1}-\frac{1}{2}\theta_{2}\right)-\frac{a^{2}}{r_{1}r_{2}}\sin\theta\sin\frac{3}{2}\left(\theta_{1}+\theta_{2}\right)\right],$ (12b) $\displaystyle\sigma_{22}=$ $\displaystyle\sigma_{0}\,\frac{r}{\sqrt{r_{1}r_{2}}}\left[\cos\left(\theta-\frac{1}{2}\theta_{1}-\frac{1}{2}\theta_{2}\right)+\frac{a^{2}}{r_{1}r_{2}}\sin\theta\sin\frac{3}{2}\left(\theta_{1}+\theta_{2}\right)\right],$ (12c) $\displaystyle\sigma_{12}=$ $\displaystyle\sigma_{0}\,\frac{r}{\sqrt{r_{1}r_{2}}}\left[\frac{a^{2}}{r_{1}r_{2}}\sin\theta\cos\frac{3}{2}\left(\theta_{1}+\theta_{2}\right)\right],$ where the coordinates $\theta$, $\theta_{1}$, $\theta_{2}$, $r$, $r_{1}$, and $r_{2}$ are defined in Fig. 2 and the lower indices correspond to cartesian coordinates $(x_{1},x_{2})$ aligned and centered at the crack. 
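Equations (12) can be transcribed directly into code, and two quick consistency checks are available: far from the crack the stresses revert to the remote equibiaxial state, and directly ahead of a tip the field is K-dominant. A sketch, with the coordinate conventions assumed from Fig. 2 ($(r,\theta)$ measured from the crack center, $(r_{1},\theta_{1})$ and $(r_{2},\theta_{2})$ from the tips at $x_{1}=\pm a$); the numerical values are illustrative:

```python
import numpy as np

def slit_crack_stresses(x1, x2, a, sigma0):
    """Stresses (12) around a slit crack |x1| <= a, x2 = 0, under remote
    equibiaxial tension sigma0; (r, th) from the crack center, (r1, th1)
    and (r2, th2) from the tips at x1 = +a and x1 = -a."""
    z = x1 + 1j * x2
    r, th = np.abs(z), np.angle(z)
    r1, th1 = np.abs(z - a), np.angle(z - a)
    r2, th2 = np.abs(z + a), np.angle(z + a)
    c = sigma0 * r / np.sqrt(r1 * r2)
    s = (a ** 2 / (r1 * r2)) * np.sin(th)
    phase = th - 0.5 * th1 - 0.5 * th2
    s11 = c * (np.cos(phase) - s * np.sin(1.5 * (th1 + th2)))
    s22 = c * (np.cos(phase) + s * np.sin(1.5 * (th1 + th2)))
    s12 = c * s * np.cos(1.5 * (th1 + th2))
    return s11, s22, s12

a, sigma0 = 1.0, 10.0      # illustrative values

# Far from the crack the stresses revert to the remote equibiaxial state.
s11, s22, s12 = slit_crack_stresses(300.0, 400.0, a, sigma0)

# Ahead of the right tip the field is K-dominant: s22 ~ K_I/sqrt(2*pi*d)
# with K_I = sigma0*sqrt(pi*a), cf. (23).
d = 1e-6
_, s22_tip, _ = slit_crack_stresses(a + d, 0.0, a, sigma0)
KI = sigma0 * np.sqrt(np.pi * a)
```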
Assuming plane strain conditions, the strains follow from Hooke’s law as (13a) $\displaystyle\varepsilon_{11}=\frac{1-\nu^{2}}{E}\sigma_{11}-\frac{\nu(1+\nu)}{E}\sigma_{22},$ (13b) $\displaystyle\varepsilon_{22}=\frac{1-\nu^{2}}{E}\sigma_{22}-\frac{\nu(1+\nu)}{E}\sigma_{11},$ (13c) $\displaystyle\gamma_{12}=\frac{2(1+\nu)}{E}\sigma_{12},$ where $E$ denotes the Young modulus and $\nu$ the Poisson’s ratio. The displacements $u$ can then be computed by integrating the relations (14) $\varepsilon_{11}=\frac{\partial u_{1}}{\partial x_{1}},\qquad\varepsilon_{22}=\frac{\partial u_{2}}{\partial x_{2}},\qquad\gamma_{12}=2\varepsilon_{12}=\frac{\partial u_{1}}{\partial x_{2}}+\frac{\partial u_{2}}{\partial x_{1}},$ using Cesaro’s method. Finally, we recall that the strain-energy density of the solid is (15) $W(\varepsilon)=\frac{\lambda}{2}(\varepsilon_{11}+\varepsilon_{22})^{2}+\mu\left(\varepsilon_{11}^{2}+\varepsilon_{22}^{2}+2\varepsilon_{12}^{2}\right)\,,\quad\lambda=\frac{E\nu}{(1+\nu)(1-2\nu)}\,,\quad\mu=\frac{E}{2(1+\nu)}\,.$ For a crack-free solid, the stresses reduce to (16) ${\sigma}^{0}_{11}={\sigma}^{0}_{22}=\sigma_{0},\qquad{\sigma}^{0}_{12}=0,$ the strains to (17) ${\varepsilon}^{0}_{11}={\varepsilon}^{0}_{22}=\frac{(1-2\nu)(1+\nu)}{E}\sigma_{0},\qquad{\gamma}^{0}_{12}=0,$ and the strain-energy density to (18) $W_{0}=\frac{(1-2\nu)(1+\nu)}{E}\sigma_{0}^{2}.$ In order to facilitate numerical calculations, we restrict the analysis to a bounded domain $\Omega$ surrounding the crack. To that end, we begin by noting that the restriction of the infinite-body displacement field $u$ to $\Omega$ minimizes the total potential energy (19) $\Pi(u)=\int_{\Omega}W(\varepsilon(x))\,{dx}_{1}\,{dx}_{2}-\int_{\partial\Omega}\sigma_{ij}n_{j}u_{i}\,{ds}$ where $\varepsilon(x)$ are the strains attendant to the trial displacements $u(x)$, $\partial\Omega$ denotes the boundary of $\Omega$, $n$ its outward unit normal and $\,{ds}$ is the element of arclength over $\partial\Omega$.
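The crack-free values (16)-(18) provide a convenient consistency check of the plane-strain relations: inserting the equibiaxial stresses into (13) and evaluating the plane-strain strain-energy density $W=\frac{\lambda}{2}(\operatorname{tr}\varepsilon)^{2}+\mu\,\varepsilon:\varepsilon$ recovers (17) and (18). A sketch with illustrative material constants:

```python
E, nu, sigma0 = 1.0e6, 0.25, 10.0     # illustrative material constants

# Plane-strain Hooke's law (13) applied to the equibiaxial state (16).
s11 = s22 = sigma0
s12 = 0.0
e11 = (1 - nu ** 2) / E * s11 - nu * (1 + nu) / E * s22
e22 = (1 - nu ** 2) / E * s22 - nu * (1 + nu) / E * s11
g12 = 2 * (1 + nu) / E * s12

# Lame moduli and the plane-strain strain-energy density
# W = lambda/2 (tr eps)^2 + mu eps:eps, with eps12 = g12/2.
lam = E * nu / ((1 + nu) * (1 - 2 * nu))
mu = E / (2 * (1 + nu))
W = 0.5 * lam * (e11 + e22) ** 2 + mu * (e11 ** 2 + e22 ** 2 + 2 * (g12 / 2) ** 2)

# Closed-form crack-free values (17)-(18).
e0 = (1 - 2 * nu) * (1 + nu) / E * sigma0
W0 = (1 - 2 * nu) * (1 + nu) / E * sigma0 ** 2
```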
The potential energy (19) represents a free-standing body occupying the domain $\Omega$ deforming under the action of tractions $\sigma_{ij}n_{j}$ corresponding to the stress field (12). In the absence of the crack, i. e., for crack length $a=0$, an application of Clapeyron’s theorem gives (20) $\Pi(u^{0})=-W_{0}|\Omega|,$ where $|\Omega|$ is the area of $\Omega$ and $W_{0}$ is given by (18). For a finite crack, the minimum value of the potential energy follows directly from an application of Rice’s $J$-integral [40]. Specifically, the energy-release rate is given by (21) $-\frac{\partial\Pi(u)}{\partial a}=\int_{\Gamma}\Big{(}W(\varepsilon)\,n_{1}-\sigma_{ij}n_{j}u_{i,1}\Big{)}\,{ds},$ where $\Gamma$ denotes a counter-clockwise closed contour surrounding the crack and contained in $\Omega$, $n$ is its outward unit normal, and $\,{ds}$ is the element of arc-length on $\Gamma$. Choosing $\Gamma$ to coincide with the flanks of the crack, together with small loops at the tips, and using the asymptotic $K$-field gives (22) $-\frac{\partial\Pi(u)}{\partial a}=2\frac{1-{\nu}^{2}}{E}K_{\rm I}^{2}\,,$ where $K_{\rm I}$ is the mode I stress-intensity factor, which can be computed directly from the stress field (12), with the result (23) $K_{\rm I}=\sigma_{0}\sqrt{\pi a}\,.$ Inserting (23) into (22), integrating with respect to $a$, and using (20) with (18) as initial condition gives (24) $\Pi(u)=-\frac{(1-2\nu)(1+\nu)}{E}\sigma_{0}^{2}|\Omega|-\frac{1-{\nu}^{2}}{E}\pi a^{2}\sigma_{0}^{2}.$ In addition, according to the Griffith model (2), the inelastic or fracture energy expended in the extension of the crack is (25) $E^{i}(u)=G_{c}\,2a.$ The exact results (24) and (25) are subsequently taken as a convenient basis for the analysis of the accuracy and convergence of EE and PF approximation schemes. ## 4\. Discretization and implementation Next we describe the discretization of the EE and PF models used in calculations.
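As a preliminary cross-check of the reference results of Section 3, the closed-form energy (24) can be differentiated numerically with respect to $a$ and compared with the energy-release rate (22)-(23); moreover, with the parameter values of Table 1 the configuration is, by construction, exactly critical in the Griffith sense. A sketch (illustrative, not part of the paper's implementation):

```python
import numpy as np

# Parameter values of Table 1 (Section 5); domain size D = 5.
E, nu, sigma0 = 1.0e6, 0.25, 10.0
G_c, two_a = 5.936506e-5, 0.403125
area = 5.0 ** 2                          # |Omega|

def Pi(a):
    """Closed-form minimum potential energy, eq. (24)."""
    return (-(1 - 2 * nu) * (1 + nu) / E * sigma0 ** 2 * area
            - (1 - nu ** 2) / E * np.pi * a ** 2 * sigma0 ** 2)

a = two_a / 2
# Energy-release rate from (24) by central finite differences ...
da = 1e-4
release_rate = -(Pi(a + da) - Pi(a - da)) / (2 * da)
# ... against the J-integral result (22)-(23), which counts both crack tips.
KI = sigma0 * np.sqrt(np.pi * a)
release_rate_J = 2 * (1 - nu ** 2) / E * KI ** 2
# Griffith criticality, cf. (41): (1 - nu^2)/E * KI^2 = G_c.
G = (1 - nu ** 2) / E * KI ** 2
```

The second check confirms the remark below Table 1 that the parameters are chosen to satisfy the criticality condition.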
All calculations are performed on a square domain $\Omega$ of size $D\gg 2a$. Figure 3. (a) Eigenerosion discretization of a slit crack. The string of shaded elements containing the crack are disabled. (b) $\epsilon$-neighborhood construction for the calculation of the crack length. ### 4.1. Eigenerosion model In the implementation of the EE model, the potential energy (26) $\Pi_{\epsilon}(u,w)=E_{\epsilon}(u,w)-\int_{\partial\Omega}\sigma_{ij}n_{j}u_{i}\,{ds}\,,$ with the energy $E_{\epsilon}(u,w)$ as in (6), is discretized by means of a regular mesh consisting of standard bilinear or $\mathbb{Q}$1 elements of size $h$. The exact stresses $\sigma_{ij}(x)$ in (26) are taken directly from the analytical solution (12). In order to disentangle the convergence with respect to the mesh size $h$ from quadrature error, in all the calculations we evaluate all element integrals over the interior and the boundary of the domain exactly using the symbolic computation program Mathematica [29]. The implementation of the EE fracture energy is illustrated in Fig. 3. The crack is represented as a string of missing, or ‘eroded’, elements where $w=0$, dark gray elements in Fig. 3, with $w=1$ elsewhere. The eroded elements approximate the crack geometry and the corresponding inelastic or fracture energy is approximated as (27) $E_{{\rm EE},\epsilon,h}^{i}=\frac{G_{c}}{2\epsilon}A_{\epsilon,h}\,,$ where $\epsilon$ is a length parameter and $A_{\epsilon,h}$ is the area of the $\epsilon$-neighborhood of the eroded elements, light gray area in Fig. 3, i. e., the set of points at a distance smaller or equal to $\epsilon$ from the eroded elements. Examples concerned with crack growth through slanted meshes have been presented in [19]. 
For a slit crack centered and aligned with the mesh, the eroded elements are precisely those which contain the crack, and $A_{\epsilon,h}$ follows exactly as (28) $A_{\epsilon,h}={\lceil 2a/h\rceil}h^{2}+2({\lceil 2a/h\rceil}+1)h\epsilon+\pi\epsilon^{2}\,.$ In this expression, ${\lceil 2a/h\rceil}$ is the number of eroded elements. The ceiling function $\lceil x\rceil$ equals the smallest integer larger than or equal to $x$. Inserting (28) into (27), we obtain (29) $E_{{\rm EE},\epsilon,h}^{i}=\frac{G_{c}}{2\epsilon}\Big{(}{\lceil 2a/h\rceil}h^{2}+2({\lceil 2a/h\rceil}+1)h\epsilon+\pi\epsilon^{2}\Big{)}.$ As expected, the EE approximation of the inelastic energy $E_{{\rm EE},\epsilon,h}^{i}$ depends on the choice of length parameter $\epsilon$. From a variational viewpoint, the optimal value $\epsilon_{h}$ of $\epsilon$ is that which minimizes $E_{{\rm EE},\epsilon,h}^{i}$, namely, (30) $\epsilon_{h}=h\sqrt{\frac{{\lceil 2a/h\rceil}}{\pi}}.$ The corresponding optimal inelastic energy is (31) $E_{{\rm EE},\epsilon_{h},h}^{i}\equiv E_{{\rm EE},h}^{i}=G_{c}h(1+{\lceil 2a/h\rceil}+\sqrt{\pi{\lceil 2a/h\rceil}}).$ This energy is the minimum inelastic energy that can be attained for fixed $h$. Thus, for $\epsilon\gg\epsilon_{h}$, the error incurred by the $\epsilon$-neighborhood construction becomes dominant, causing $E_{{\rm EE},\epsilon,h}^{i}$ to increase, whereas for $\epsilon\ll\epsilon_{h}$ the under-resolution of the $\epsilon$-neighborhood by the mesh size dominates and causes $E_{{\rm EE},\epsilon,h}^{i}$ to again increase.
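The optimality claims around (30)-(31) are easy to verify numerically: a brute-force scan of the inelastic energy (29) over $\epsilon$ recovers the closed-form minimizer and the minimum energy. A sketch (mesh size and crack length are illustrative choices):

```python
import numpy as np

def inelastic_energy(eps, h, a, G_c=1.0):
    """EE inelastic energy, eq. (29), for a slit crack aligned with the mesh."""
    N = np.ceil(2 * a / h)
    A = N * h ** 2 + 2 * (N + 1) * h * eps + np.pi * eps ** 2   # eq. (28)
    return G_c / (2 * eps) * A

a, h = 0.2015625, 0.005      # illustrative half crack length and mesh size
N = np.ceil(2 * a / h)

# Brute-force scan over eps and location of the minimizer.
eps_grid = np.linspace(0.1 * h, 50 * h, 20001)
eps_num = eps_grid[np.argmin(inelastic_energy(eps_grid, h, a))]

# Closed-form optimum (30) and optimal inelastic energy (31), with G_c = 1.
eps_opt = h * np.sqrt(N / np.pi)
E_opt = h * (1 + N + np.sqrt(np.pi * N))
```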
With (32) ${\lceil 2a/h\rceil}=\frac{2a}{h}+2\delta,\qquad 0\leq\delta<1,$ an asymptotic expansion of (30) gives in the limit of $h/a\ll 1$ (33) $\epsilon_{h}=\sqrt{\frac{2ah}{\pi}}+{\rm O}(h^{3/2}).$ A similar asymptotic expansion of the optimal inelastic energy (31) likewise gives (34) $E_{{\rm EE},h}^{i}=G_{c}2a+G_{c}\sqrt{2\pi ah}+{\rm O}(h).$ We observe from (33) that, to leading order, the optimal length parameter $\epsilon_{h}$ scales as $\sqrt{2ah}$, i. e., as the geometrical mean of the crack length and the mesh size. Thus, the optimal length parameter depends not only on the mesh size but also on the geometry of the crack and, in general, of the domain. We note that $\epsilon_{h}\to 0$ as $h\to 0$, albeit at a slower rate, as required by convergence [18]. We also observe from (34) that, to leading order, the fracture energy error is of order ${\rm O}(h^{1/2})$. This rate of convergence is slower than the ${\rm O}(h)$ rate of convergence of the elastic energy and, therefore, dominates the overall energy error. However, this loss of convergence can be remedied simply by recourse to a standard Richardson extrapolation technique (cf., e. g., [41, 42]). To this end, we note from (34) that the fracture energy attendant to a mesh of size $2h$ is (35) $E_{{\rm EE},2h}^{i}=G_{c}2a+2G_{c}\sqrt{\pi ah}+{\rm O}(h).$ We may replace $E_{{\rm EE},h}^{i}$ by the weighted sum (36) $E_{{\rm EE+RE},h}^{i}=\lambda E_{{\rm EE},h}^{i}+(1-\lambda)E_{{\rm EE},2h}^{i}$ without disturbing the limit. We then choose the weight $\lambda$ so as to cancel the ${\rm O}(h^{1/2})$ term, with the result (37) $\lambda=\frac{\sqrt{2}}{\sqrt{2}-1}.$ From (34) and (36) it then follows that (38) $E_{{\rm EE+RE},h}^{i}=G_{c}2a+{\rm O}(h),$ as desired. 
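The effect of the extrapolation (36)-(38) can be checked directly from the closed form (31): on a sequence of meshes the raw fracture-energy error decays as ${\rm O}(h^{1/2})$, whereas the extrapolated error decays as ${\rm O}(h)$. A sketch (with $G_{c}=1$, an illustrative crack length, and mesh sizes chosen so that $2a/h$ is an integer):

```python
import numpy as np

def E_i(h, a, G_c=1.0):
    """Optimal EE inelastic energy, eq. (31)."""
    N = np.ceil(2 * a / h)
    return G_c * h * (1 + N + np.sqrt(np.pi * N))

a = 0.2015625                                  # illustrative half crack length
lam = np.sqrt(2.0) / (np.sqrt(2.0) - 1.0)      # Richardson weight, eq. (37)
E_exact = 2 * a                                # Griffith fracture energy (25), G_c = 1

hs = a / 2.0 ** np.arange(4, 10)               # mesh sizes with 2a/h an integer
err_raw = np.abs(E_i(hs, a) - E_exact)
err_re = np.abs(lam * E_i(hs, a) + (1 - lam) * E_i(2 * hs, a) - E_exact)

# Log-log slopes: ~1/2 for the raw energy error, ~1 after extrapolation.
slope_raw = np.polyfit(np.log(hs), np.log(err_raw), 1)[0]
slope_re = np.polyfit(np.log(hs), np.log(err_re), 1)[0]
```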
Inserting (31) and (37) into (36), we find (39) $\begin{split}E_{{\rm EE+RE},h}^{i}&=\frac{\sqrt{2}}{\sqrt{2}-1}G_{c}h(1+{\lceil 2a/h\rceil}+\sqrt{\pi{\lceil 2a/h\rceil}})\\\ &-(1+\sqrt{2})G_{c}2h(1+{\lceil a/h\rceil}+\sqrt{\pi{\lceil a/h\rceil}}).\end{split}$ explicitly. This simple Richardson extrapolation effectively eliminates the low-order accuracy of the original $\epsilon$-neighborhood construction and restores the full order of convergence expected of the finite element method. ### 4.2. Phase-field model In order to have a fair comparison with EE, we discretize the potential energy (40) $\Pi_{\epsilon}(u,v)=E_{\epsilon}(u,v)-\int_{\partial\Omega}\sigma_{ij}n_{j}u_{i}\,{ds}$ with the energy $E_{\epsilon}(u,v)$ as in (4), by means of a regular mesh likewise consisting of standard bilinear or $\mathbb{Q}$1 elements of size $h$. Conforming interpolation is used for both the displacement and the phase fields. In order to separate interpolation errors from numerical quadrature errors, we again evaluate all element integrals over the interior and boundary of the domain exactly using the symbolic computation program Mathematica [29]. The phase field is unconstrained and, therefore, satisfies free Neumann boundary conditions on the outer boundary of the domain. The unknown fields $(u_{h},v_{h})$ are solved iteratively by the method of alternating directions [43], i. e., by successively fixing one of the fields and solving for the other. Conveniently, the scheme reduces the solution to a sequence of linear problems (cf., e. g., [44, 45]). The iteration is primed by setting, as initial condition for the iteration, $v_{h}=0$ on the nodes lying on the crack and $v_{h}=1$ elsewhere. The iteration may then be expected to approximate the solution for a crack of length $2a$ provided that the applied stress $\sigma_{0}$ equals the critical stress for crack extension, i. 
e., if (41) $G_{c}=\frac{1-\nu^{2}}{E}\,K^{2}_{I},\qquad K_{I}=\sigma_{0}\sqrt{\pi a}\,,$ as in this case the exact elasticity solution (12) and the crack length $2a$ jointly minimize the Griffith potential energy (1). The choice of the length parameter $\epsilon$, and its relation to the mesh size $h$ and geometrical features of the domain, is known to have a strong effect on the accuracy and convergence properties of PF approximations [28]. In particular, convergence requires $h$ to decrease to zero faster than $\epsilon$ ([28], Theorems 4.1 and 5.1). We expect the exact potential energy (24) to be approached by the PF potential energy from above (cf., e. g., Fig. 4b). For fixed $h$, we also expect the PF potential energy to exhibit a minimum at a certain value $\epsilon_{h}$ of $\epsilon$. Thus, for small $\epsilon$ the mesh size $h$ is unable to resolve the width of the crack, resulting in an overly stiff response and high PF potential energy. Contrariwise, since the exact potential energy (24) is attained from above for $\epsilon\to 0$, the PF potential energy diverges from the exact value for large $\epsilon$. As a consequence of these opposing trends, for fixed $h$ the PF potential energy attains a minimum at a certain $\epsilon_{h}$, as surmised (cf., e. g., Fig. 4b). Since, as already noted, the exact potential energy (24) is approached by the PF potential energy from above, the energy minimizing $\epsilon_{h}$ results in the least energy error for given $h$ and is therefore optimal from the standpoint of convergence. Evidently, $\epsilon_{h}$ depends on the geometry of the crack and of the body, the state of damage and the discretization. In calculations, we determine $\epsilon_{h}$ numerically by computing the PF minimum potential energy for given $h$ over a range of equally-spaced values of $\epsilon$, interpolating the computed energies as a function of $\epsilon$ and computing the minimum of the interpolated function (cf., e. g., Fig. 4b). ## 5\. 
Numerical Results $E$ | $\nu$ | $G_{c}$ | $\sigma_{0}$ | $2a$ ---|---|---|---|--- $10^{6}$ | 0.25 | $5.936506\times 10^{-5}$ | 10 | 0.403125 Table 1. Parameters used in the numerical calculations. We proceed to investigate the relative accuracy and convergence rates of the EE and PF methods by way of numerical testing. To this end, we fix the domain size at $D=5$, the crack length at $2a=0.403125$, and consider a sequence of five uniform meshes of sizes $h/D=0.02$, $0.01$, $0.005$, $0.0025$, and $0.00125$. The numerical parameters used in calculations are listed in Table 1. We note that the parameters are chosen so as to satisfy the criticality condition (41). (a) EE (b) PF Figure 4. Dependence of the minimum potential energy on the length parameter $\epsilon$ for various mesh sizes $h$. The limiting value of the minimum potential energy, (24), for $\epsilon\to 0$ is shown in red for reference. ### 5.1. Optimal choice of length parameter Fig. 4 shows the computed dependence of the minimum potential energy $\Pi_{\epsilon}$ on $\epsilon$ at fixed $h$ for both EE, eq. (26), and PF, eq. (40). As surmised, at fixed $h$ the minimum potential energy $\Pi_{\epsilon}$ attains a minimum at a well-defined optimal value $\epsilon_{h}$ for both EE and PF. For EE, $\epsilon_{h}$ is given analytically and in closed form by (30). For PF, $\epsilon_{h}$ is determined numerically by interpolating the results shown in Fig. 4b. As is evident from the figure, the optimized values $\epsilon_{h}$ of the length parameter minimize the potential energy error for fixed $h$ and thus result in the best possible rate of energy convergence with respect to mesh size $h$. (a) EE (b) PF Figure 5. Optimal value $\epsilon_{h}$ of the length parameter $\epsilon$ as a function of mesh size $h$. (a) Eigenerosion, eqs. (30) and (33). (b) Phase-field values obtained from Fig. 4b. The optimal $\epsilon_{h}$ values for EE and PF, i. e., the minimizers of the curves displayed in Fig. 4, are shown in Fig.
5 as a function of the mesh size $h$. For EE, Fig. 5(a) simply displays the theoretical optimal values, eqs. (30) and (33) for purposes of comparison. In all cases, the dependence of the optimal $\epsilon_{h}$ on the mesh size $h$ is strongly suggestive of a power- law scaling (42) $\epsilon_{h}\sim h^{1/2}.$ As expected [28, 18], the optimal length parameter $\epsilon_{h}$ converges to zero more slowly than the mesh size $h$. The scaling law (42) follows heuristically if we assume that near the crack the phase field varies on the scale of $\epsilon^{2}/L$, where $L$ is a characteristic size (intrinsic geometric feature, e. g., domain size, crack size, ligament size). In order to resolve this variation, we must choose $h\sim\epsilon^{2}/L$, whence (42) follows. The square-root nonlinearity of the scaling law (42), or equivalently $\epsilon_{h}\sim\sqrt{Lh}$, results in optimal values of the length parameter that may be strongly incommensurate with $h$, contrary to standard computational practice. Thus, for fine meshes, $h\ll L$ it follows that $\epsilon_{h}\gg h$, i. e., $\epsilon_{h}$ lags behind $h$ and $h$ resolves $\epsilon_{h}$ finely. Contrariwise, for coarse meshes, $h\gg L$, we have $\epsilon_{h}\ll h$, i. e., $\epsilon_{h}$ runs ahead of $h$ and is unresolved by $h$. This latter regime is clearly visible in Fig. 4b, e. g., at $h=0.1$, for which the corresponding $\epsilon_{h}$ is over one order of magnitude smaller. Figure 6. a) Energy convergence plots showing normalized potential energy errors vs. normalized mesh size $h/2a$ for eigenerosion (EE), eigenerosion with Richardson extrapolation (EE+RE) and phase-field (PF). b) Execution times as a function of mesh size $h$ for eigenerosion and phase-field ### 5.2. Energy convergence Potential energy convergence plots for eigenerosion (EE), eigenerosion with Richardson extrapolation (EE+RE) and phase-field (PF) are shown in Fig. 6a. 
The plot displays least-square fits of the data by functions of the form (43) $\inf\Pi_{\epsilon_{h}}=\Pi_{0}+Ch^{\alpha},$ with respect to the parameters $C$ and $\alpha$. The limiting potential energy $\Pi_{0}$ in (43) is given by (24). The exponent $\alpha$ is the rate of convergence and the constant $C$ shifts the error line vertically in log-log convergence plots. All energy errors are computed using the optimal $\epsilon_{h}$ corresponding to each mesh size, computed as described earlier. The figures visualize the least error in minimum potential energy as a function of $h$ for each method. All terms are normalized: the mesh size $h$ is normalized by the crack length, $2a$, and the potential energies are normalized by $\Pi_{0}$. As may be seen from Fig. 6a, both PF and EE exhibit sublinear (square-root) energy convergence, though the constant for EE is nearly one order of magnitude smaller (better) than that for PF, indicating superior accuracy of EE over PF under the conditions of the test. By far the best accuracy and convergence rate is obtained for EE+RE, which exhibits linear energy convergence, or twice the rate of convergence of EE and PF, and a better constant than that of both EE and PF. ### 5.3. Computational cost Fig. 6b shows a comparison between execution times for EE and PF. We recall that the preceding convergence calculations are carried out using exact integration of the element energy and nodal forces. This procedure has the advantage of singling out interpolation errors and deconvolving them from other sources of error such as numerical quadrature. However, the evaluation of the exact integrals is costly and deviates from common practice, which invariably relies on numerical quadrature as a further approximation. Therefore, in order to obtain practical estimates of execution times, we repeat all calculations using standard finite element numerical quadrature.
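The mechanics of the fit (43) and of the Richardson extrapolation behind EE+RE can be illustrated on synthetic energy data with a square-root leading error; the constants below are illustrative choices for the sketch, not the paper's computed values:

```python
import numpy as np

# Synthetic minimum-energy data with a square-root leading error, as in (43):
# E(h) = Pi0 + C*sqrt(h) + D*h.  Pi0, C, D are illustrative values only.
Pi0, C, D = -1.0, 0.5, 0.2
h = np.array([0.02, 0.01, 0.005, 0.0025, 0.00125])
E = lambda s: Pi0 + C * np.sqrt(s) + D * s

# A least-squares fit of log(E - Pi0) against log(h) recovers the rate alpha.
alpha, _ = np.polyfit(np.log(h), np.log(E(h) - Pi0), 1)
assert abs(alpha - 0.5) < 0.05          # sublinear (square-root) convergence

# Richardson extrapolation in the spirit of (39): combine E(h) and E(2h) so
# that the O(sqrt(h)) terms cancel, leaving an O(h) error.
r = np.sqrt(2.0)
E_re = (r * E(h) - E(2 * h)) / (r - 1.0)
alpha_re, _ = np.polyfit(np.log(h), np.log(np.abs(E_re - Pi0)), 1)
assert alpha_re > 0.9                   # restored (near-linear) convergence
```

The combination $(\sqrt{2}\,E(h)-E(2h))/(\sqrt{2}-1)$ is chosen precisely so that the $C\sqrt{h}$ terms cancel, which is why the extrapolated data recover first-order convergence.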
Specifically, we use four-point Gaussian quadrature for the element integrals and two-point Gaussian quadrature for the boundary-edge integrals. All calculations are performed using a sparse linear solver [46] in shared-memory configuration, using a single node, 16-core Intel Skylake (2.1 GHz), with 192 GB of memory and 2666 MT/s speed. Fig. 6b shows that, under the conditions of the test and for the particular computer architecture used in the calculations, EE is about one order of magnitude faster than PF. The higher efficiency of EE over PF is in fact expected, since EE entails only displacement degrees of freedom and a direct, non-iterative solution, whereas PF entails both displacement and phase-field degrees of freedom and an iterative solution. ## 6\. Summary and concluding remarks We have presented a comparison of the accuracy, convergence and computational cost performance of the eigenerosion (EE) and phase-field (PF) methods for brittle fracture. Both approaches operate on the principle of minimization of a potential energy functional that accounts for the elastic energy of the system, the inelastic energy attendant to crack growth and the work of the applied loads. Both approaches can be derived as special cases of the general method of eigendeformations by effecting particular choices of the eigendeformation field. The energy functionals incorporate an intrinsic length scale $\epsilon$ representing an effective crack width. The solution for Griffith fracture is obtained in the limit of $\epsilon\to 0$. Whereas the convergence of finite-element approximations of EE and PF is well established mathematically [28, 18], a head-on relative assessment of both methods appears to have been missing. For the standard test case of a center-crack panel loaded in biaxial tension, the results of the numerical tests reveal a superior accuracy and computational efficiency of EE over PF.
In particular, when the accuracy of the EE inelastic energy is enhanced by means of Richardson extrapolation, EE+RE converges at twice the rate of both PF and the original low-order version of EE. In addition, EE affords a one-order-of-magnitude computational speed-up over PF. There are other intangible benefits that confer on EE an advantage over PF. Thus, element erosion is exceedingly easy to implement in accordance with a critical energy-release rate criterion, and it guarantees automatically both irreversibility and positive dissipation. A further disadvantage of PF relative to EE is that it requires iteration, doubles the number of degrees of freedom, and poses a strongly nonlinear and non-convex problem with vastly many local minima, with the attendant difficulties in terms of stability and convergence. In closing, we emphasize that the conclusions reached in this study are based on a limited and selected set of numerical tests. Additional tests would be greatly beneficial and are likely to shed further light on the relative merits of the methods. ## Acknowledgements MO gratefully acknowledges funding by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) via project 211504053 - SFB 1060 and project 390685813 - GZ 2047/1 - HCM. ## References * [1] B. Bourdin, G. A. Francfort, and J.-J. Marigo. Numerical experiments in revisited brittle fracture. Journal of the Mechanics and Physics of Solids, 48:797–826, 2000. * [2] B. Bourdin and A. Chambolle. Implementation of an adaptive finite element approximation of the Mumford-Shah functional. Numerische Mathematik, 85:609–646, 2000. * [3] A. Karma, D. A. Kessler, and H. Levine. Phase-field model of mode III dynamic fracture. Physical Review Letters, 81:045501, 2001. * [4] C. Miehe, M. Hofacker, and F. Welschinger. A phase field model for rate-independent crack propagation: Robust algorithmic implementation based on operator splits.
Computer Methods in Applied Mechanics and Engineering, 199:2765–2778, 2010. * [5] M. J. Borden, C. V. Verhoosel, M. A. Scott, T. J. R. Hughes, and C. M. Landis. A phase-field description of dynamic brittle fracture. Computer Methods in Applied Mechanics and Engineering, 217–220:77–95, 2012. * [6] C. V. Verhoosel and R. de Borst. A phase-field model for cohesive fracture. International Journal for Numerical Methods in Engineering, 96(1):43–62, 2013. * [7] M. Ambati, T. Gerasimov, and L. De Lorenzis. A review on phase-field models of brittle fracture and a new fast hybrid formulation. Computational Mechanics, 55(2):383–405, 2015. * [8] C. Bilgen and K. Weinberg. On the crack-driving force of phase-field models in linearized and finite elasticity. Computer Methods in Applied Mechanics and Engineering, 353:348–372, 2019. * [9] L. Ambrosio and V. M. Tortorelli. Approximation of functionals depending on jumps by elliptic functionals via $\Gamma$-convergence. Communications on Pure and Applied Mathematics, 43:999–1036, 1990. * [10] L. Ambrosio and V. M. Tortorelli. On the approximation of free discontinuity problems. Bollettino dell’Unione Matematica Italiana 6-B, 7:105–123, 1992. * [11] G. R. Johnson and R. A. Stryk. Eroding interface and improved tetrahedral element algorithms for high velocity impacts in three dimensions. International Journal of Impact Engineering, 5:414–427, 1987. * [12] T. Belytschko and J. Lin. A three-dimensional impact-penetration algorithm with erosion. International Journal of Impact Engineering, 5(1–4):111–127, 1988. * [13] M. Ortiz and A. E. Giannakopoulos. Crack propagation in monolithic ceramics under mixed mode loading. International Journal of Fracture, 44:233–258, 1990. * [14] T. Borvik, O.S. Hopperstad, and K.O. Pedersen. Quasi-brittle fracture during structural impact of AA7075-T651 aluminum plates. International Journal of Impact Engineering, 37:537–551, 2008. * [15] M. Negri.
Finite element approximation of the Griffith’s model in fracture mechanics. Numerische Mathematik, 95(4):653–687, 2003. * [16] M. Negri. A discontinuous finite element approximation of free discontinuity problems. Advances in Mathematical Sciences and Applications, 15:283–306, 2005. * [17] M. Negri. A non-local approximation of free discontinuity problems in $SBV$ and $SBD$. Calculus of Variations and Partial Differential Equations, 25:33–62, 2005. * [18] B. Schmidt, F. Fraternali, and M. Ortiz. Eigenfracture: an eigendeformation approach to variational fracture. SIAM Multiscale Modeling & Simulation, 7(3):1237–1266, 2009. * [19] A. Pandolfi and M. Ortiz. An eigenerosion approach to brittle fracture. International Journal for Numerical Methods in Engineering, 92(8):694–714, 2012. * [20] A. Pandolfi, B. Li, and M. Ortiz. Modeling fracture by material-point erosion. International Journal of Fracture, 184(1-2):3–16, 2013. * [21] L. Bichet, F. Dubois, Y. Monerie, C. Pélissou, and F. Perales. An eigenerosion method for heterogeneous media. Materiaux et Techniques, 103(3), 2015. * [22] F. Stochino, A. Qinami, and M. Kaliske. Eigenerosion for static and dynamic brittle fracture. Engineering Fracture Mechanics, 182:537–551, 2017. * [23] P. Navas, R.C. Yu, B. Li, and G. Ruiz. Modeling the dynamic fracture in concrete: an eigensoftening meshfree approach. International Journal of Impact Engineering, 113:9–20, 2018. * [24] A. Qinami, E.C. Bryant, W.C. Sun, and M. Kaliske. Circumventing mesh bias by r- and h-adaptive techniques for variational eigenfracture. International Journal of Fracture, 220(2):129–142, 2019. * [25] K. Zhang, S.-L. Shen, and A. Zhou. Dynamic brittle fracture with eigenerosion enhanced material point method. International Journal for Numerical Methods in Engineering, 121(17):3768–3794, 2020. * [26] A. Qinami, A. Pandolfi, and M. Kaliske. Variational eigenerosion for rate-dependent plasticity in concrete modeling at small strain.
International Journal for Numerical Methods in Engineering, 121(7):1388–1409, 2020. * [27] T. Mura. Mechanics of Defects in Solids. Martinus Nijhoff, Dordrecht, 1987. * [28] G. Bellettini and A. Coscia. Discrete approximation of a free discontinuity problem. Numerical Functional Analysis and Optimization, 15(3-4):201–224, 1994. * [29] Wolfram Research, Inc. Mathematica, Version 12.1. Champaign, IL, 2020. * [30] L. Fokoua, S. Conti, and M. Ortiz. Optimal scaling in solids undergoing ductile fracture by void sheet formation. Archive for Rational Mechanics and Analysis, 212(1):331–357, 2014. * [31] J. B. Martin. Plasticity: fundamentals and general results. MIT Press, Cambridge, Mass, 1975. * [32] L. Ambrosio, N. Fusco, and D. Pallara. Functions of Bounded Variation and Free Discontinuity Problems. Oxford University Press, Oxford – New York, 2000. * [33] A. Braides and G. Dal Maso. Nonlocal approximation of the Mumford-Shah functional. Calculus of Variations and Partial Differential Equations, 5:293–322, 1997. * [34] G. Cortesani and R. Toader. Implementation of an adaptive finite element approximation of the Mumford-Shah functional. Numerical Functional Analysis and Optimization, 18:921–940, 1997. * [35] A. Chambolle and G. Dal Maso. Discrete approximation of the Mumford-Shah functional in dimension two. ESAIM: Mathematical Modelling and Numerical Analysis, 33:651–672, 1999. * [36] S. Conti, M. Focardi, and F. Iurlano. Phase field approximation of cohesive fracture models. Annales de l’Institut Henri Poincaré C, Analyse non linéaire, 33(4):1033–1067, 2016. * [37] A. J. Larsen. Local minimality and crack prediction in quasi-static Griffith fracture evolution. Discrete & Continuous Dynamical Systems-S, 6(1):121, 2013. * [38] M. H. Hassoun. Fundamentals of artificial neural networks. MIT press, 1995. * [39] A. T. Zehnder. Fracture Mechanics. Springer, 2012. * [40] J. R. Rice.
A path independent integral and the approximate analysis of strain concentration by notches and cracks. Journal of Applied Mechanics, 35:379–386, 1968. * [41] C. Brezinski and M. Redivo-Zaglia. Extrapolation methods. Applied Numerical Mathematics, 15(2):123–131, 1994. * [42] Z. Zlatev, I. Dimov, I. Faragó, and A. Havasi. Richardson Extrapolation: Practical Aspects and Applications. De Gruyter, Berlin/Boston, 2017. * [43] D. W. Peaceman and H. H. Rachford, Jr. The numerical solution of parabolic and elliptic differential equations. Journal of the Society for Industrial and Applied Mathematics, 3(1):28–41, 1955. * [44] C. Bilgen, A. Kopaničáková, R. Krause, and K. Weinberg. A phase-field approach to conchoidal fracture. Meccanica, 53(6):1203–1219, 2018. * [45] D. Knees and M. Negri. Convergence of alternate minimization schemes for phase-field fracture and damage. Mathematical Models and Methods in Applied Sciences, 27(09):1743–1794, 2017. * [46] X.S. Li, J.W. Demmel, J.R. Gilbert, L. Grigori, M. Shao, and I. Yamazaki. SuperLU Users’ Guide. Technical Report LBNL-44289, Lawrence Berkeley National Laboratory, September 1999.
# Sequential Mechanisms for Multi-type Resource Allocation Sujoy Sikdar Binghamton University <EMAIL_ADDRESS>Xiaoxi Guo Peking University <EMAIL_ADDRESS><EMAIL_ADDRESS>Haibin Wang Peking University <EMAIL_ADDRESS><EMAIL_ADDRESS>Lirong Xia Rensselaer Polytechnic Institute <EMAIL_ADDRESS>Yongzhi Cao Peking University <EMAIL_ADDRESS> ###### Abstract Several resource allocation problems involve multiple types of resources, with a different agency being responsible for “locally” allocating the resources of each type, while a central planner wishes to provide a guarantee on the properties of the final allocation given agents’ preferences. We study the relationship between properties of the local mechanisms, each responsible for assigning all of the resources of a designated type, and the properties of a sequential mechanism which is composed of these local mechanisms, one for each type, applied sequentially, under lexicographic preferences, a well-studied model of preferences over multiple types of resources in artificial intelligence and economics. We show that when preferences are $O$-legal, meaning that agents share a common importance order on the types, sequential mechanisms satisfy the desirable properties of anonymity, neutrality, non-bossiness, or Pareto-optimality if and only if every local mechanism also satisfies the same property, and they are applied sequentially according to the order $O$. Our main results are that under $O$-legal lexicographic preferences, every mechanism satisfying strategyproofness and a combination of these properties must be a sequential composition of local mechanisms that are also strategyproof, and satisfy the same combinations of properties. ## 1 Introduction Consider the example of a hospital where patients must be allocated surgeons and nurses with different specialties, medical equipment of different types, and a room Huh et al. (2013).
This example illustrates multi-type resource allocation problems (MTRAs), first introduced by Moulin (1995), where there are $p\geq 1$ types of indivisible items which are not interchangeable, and a group of agents having heterogeneous preferences over receiving combinations of an item of each type. The goal is to design a mechanism which allocates to each agent a bundle consisting of an item of each type. Often, a different agency is responsible for the allocation of each type of item in a distributed manner, using possibly different local mechanisms, while a central planner wishes that the mechanism composed of these local mechanisms satisfies certain desirable properties. For example, different departments may be responsible for the allocation of each type of medical resources, while the hospital wishes to deliver a high standard of patient care and satisfaction given the patients’ preferences and medical conditions; in an enterprise, clients have heterogeneous preferences over cloud computing resources like computation and storage Ghodsi et al. (2011, 2012); Bhattacharya et al. (2013), possibly provided by different vendors; in a university, students must be assigned to different types of courses handled by different departments; in a seminar class, the research papers and time slots Mackin and Xia (2016) may be assigned separately by the instructor and a teaching assistant respectively; and in rationing Elster (1992), different agencies may be responsible for allocating different types of rations such as food and shelter. Unfortunately, as Svensson (1999) shows, even when there is a single type of item and each agent is to be assigned a single item, serial dictatorships are the only strategyproof mechanisms which are non-bossy, meaning that no agent can falsely report her preferences to change the outcome without also affecting her own allocation, and neutral, meaning that the outcome is independent of the names of the items.
In a serial dictatorship, agents are assigned their favorite remaining items one after another according to a fixed priority ordering of the agents. Pápai (2001) shows a similar result for the multiple assignment problem, where agents may be assigned more than one item, that the only mechanisms which are strategyproof, non-bossy, and Pareto-optimal are sequential dictatorships, where agents pick a favorite remaining item one at a time according to a hierarchical picking sequence, where the next agent to pick an item depends only on the allocations made in previous steps. Pareto-optimality is the property that there is no other allocation which benefits an agent without making at least one agent worse off. More recently, Hosseini and Larson (2019) show that even under lexicographic preferences, the only mechanisms for the multiple assignment problem that are strategyproof, non-bossy, neutral and Pareto-optimal are serial dictatorships with a quota for each agent. Mackin and Xia (2016) study MTRAs in a slightly different setting from ours: a monolithic central planner controls the allocation of all types of items. They characterize serial dictatorships under the unrestricted domain of strict preferences over bundles with strategyproofness, non-bossiness, and type-wise neutrality, a weaker notion of neutrality where the outcome is independent of permutations on the names of items within each type. Perhaps in light of this and other negative results described above, there has been little further work on strategyproof mechanisms for MTRAs. This is the problem we address in this paper. We study the design of strategyproof sequential mechanisms for MTRAs with $p\geq 2$ types, which are composed of $p$ local mechanisms, one for each type, applied sequentially one after the other, to allocate all of the items of the type, under the assumption that agents’ preferences are lexicographic and $O$-legal.
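As a concrete sketch, a serial dictatorship for a single type of items can be written in a few lines; the function name, priority order, and preference data below are illustrative, not taken from the paper:

```python
def serial_dictatorship(priority, preferences, items):
    """Each agent, in priority order, takes her favorite remaining item.
    preferences[agent] is a ranking (list) over items, most preferred first."""
    remaining = set(items)
    allocation = {}
    for agent in priority:
        pick = next(x for x in preferences[agent] if x in remaining)
        allocation[agent] = pick
        remaining.remove(pick)
    return allocation

prefs = {1: ["a", "b", "c"], 2: ["a", "c", "b"], 3: ["b", "a", "c"]}
print(serial_dictatorship([1, 2, 3], prefs, ["a", "b", "c"]))
# -> {1: 'a', 2: 'c', 3: 'b'}
```

Agent 2 loses her favorite item "a" to the higher-priority agent 1 and falls back to "c", which is exactly the dictatorial behavior the characterization results refer to.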
For MTRAs, lexicographic preferences are a natural and well-studied assumption for reasoning about ordering alternatives based on multiple criteria in social science Gigerenzer and Goldstein (1996). In artificial intelligence, lexicographic preferences have been studied extensively, for MTRAs Sikdar et al. (2017, 2019); Sun et al. (2015); Wang et al. (2020); Guo et al. (2020), multiple assignment problems Hosseini and Larson (2019); Fujita et al. (2015), voting over multiple issues Lang and Xia (2009); Xia et al. (2011), and committee selection Sekar et al. (2017), since lexicographic preferences allow reasoning about and representing preferences in a structured and compact manner. In MTRAs, lexicographic preferences are defined by an importance order over the types of items, and local preferences over items of each type. The preference relation over any pair of bundles is decided in favor of the bundle that has the more preferred item of the most important type at which the pair of bundles differ, and this decision depends only on the items of more important types. In several problems, it is natural to assume that every agent shares a common importance order. For example, when rationing Elster (1992), it may be natural to assume that every agent thinks food is more important than shelter, and in a hospital Huh et al. (2013), all patients may consider their allocation of surgeons and nurses to be more important than the medical equipment and room. $O$-legal lexicographic preference profiles, where every agent has a common importance order $O$ over the types, have been studied recently by Lang and Xia (2009); Xia et al. (2011) for the multi-issue voting problem.
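For the separable case, in which local rankings do not depend on items of other types, the comparison rule just described can be sketched as follows (the helper name and data are illustrative assumptions):

```python
# Sketch of lexicographic bundle comparison under a common importance order,
# restricted to the separable case with unconditional local rankings.
def lex_prefers(x, y, importance, local_prefs):
    """True iff bundle x is preferred to bundle y: the decision is made at the
    most important type at which x and y differ."""
    for t in importance:
        if x[t] != y[t]:
            rank = local_prefs[t]
            return rank.index(x[t]) < rank.index(y[t])
    return False  # identical bundles

local = {0: ["1_1", "2_1"], 1: ["2_2", "1_2"]}   # one ranking per type
# Type 0 is most important, so (1_1, 1_2) beats (2_1, 2_2) regardless of type 1.
assert lex_prefers(("1_1", "1_2"), ("2_1", "2_2"), [0, 1], local)
assert not lex_prefers(("2_1", "1_2"), ("1_1", "2_2"), [0, 1], local)
```

In the general $O$-legal case the ranking `local_prefs[t]` would instead be looked up from a CPT conditioned on the items of more important types.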
When agents’ preferences are $O$-legal and lexicographic, it is natural to ask the following questions about sequential mechanisms that decide the allocation of each type sequentially using a possibly different local mechanism according to $O$, which we address in this paper: (1) if every local mechanism satisfies property $X$, does the sequential mechanism composed of these local mechanisms also satisfy $X$?, and (2) what properties must every local mechanism satisfy so that their sequential composition satisfies property $X$? ### 1.1 Contributions For $O$-legal preferences, a property $X\in\\{$anonymity, type-wise neutrality, non-bossiness, monotonicity, Pareto-optimality$\\}$, and any sequential mechanism $f_{O}=(f_{1},\dots,f_{p})$ which applies each local mechanism $f_{i}$ one at a time according to the importance order $O$, we show in 1 and 2 that $f_{O}$ satisfies $X$ if and only if every local mechanism it is composed of satisfies $X$. However, sequential compositions of locally strategyproof mechanisms are not guaranteed to be strategyproof, which raises the question: under what conditions are sequential mechanisms strategyproof? We begin by showing in 1 that when agents’ preferences are lexicographic, but agents have different importance orders, sequential mechanisms composed of locally strategyproof mechanisms are, unfortunately, not guaranteed to be strategyproof. In contrast, we show in 2 that sequential compositions of strategyproof mechanisms are indeed strategyproof when either: (1) agents’ preferences are separable and lexicographic, even when different agents may have different importance orders, or (2) agents’ preferences are lexicographic and $O$-legal and all of the local mechanisms are also non-bossy. Our main results characterize the class of mechanisms that satisfy strategyproofness, along with different combinations of non-bossiness, neutrality, and Pareto-optimality under $O$-legal preferences as $O$-legal sequential mechanisms.
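The object under study, a sequential composition of local mechanisms applied one type at a time in the order $O$, can be sketched as follows, using a serial dictatorship as every local mechanism and letting each agent's local ranking depend on the items of more important types she already holds; all names and data here are illustrative assumptions:

```python
# Sketch of an O-legal sequential mechanism: one local mechanism per type,
# applied in the importance order O.
def serial_dictatorship(profile, items):
    remaining, out = set(items), {}
    for agent in sorted(profile):              # fixed priority 1, 2, ...
        pick = next(x for x in profile[agent] if x in remaining)
        out[agent] = pick
        remaining.remove(pick)
    return out

def sequential_mechanism(local_mechs, cond_prefs, items_by_type, order):
    bundles = {a: () for a in cond_prefs}
    for t in order:
        # Local rankings may be conditioned on the more important items held.
        local_profile = {a: cond_prefs[a](t, bundles[a]) for a in cond_prefs}
        for a, item in local_mechs[t](local_profile, items_by_type[t]).items():
            bundles[a] = bundles[a] + (item,)
    return bundles

def rank(t, held):
    if t == 0:                                 # type-0 ranking: unconditional
        return ["1_1", "2_1"]
    return ["1_2", "2_2"] if held[0] == "1_1" else ["2_2", "1_2"]

cond = {1: rank, 2: rank}
items = {0: ["1_1", "2_1"], 1: ["1_2", "2_2"]}
mechs = {0: serial_dictatorship, 1: serial_dictatorship}
print(sequential_mechanism(mechs, cond, items, [0, 1]))
# -> {1: ('1_1', '1_2'), 2: ('2_1', '2_2')}
```

Note how agent 2's type-1 ranking flips once she receives the type-0 item `2_1`, which is precisely the conditioning that $O$-legal lexicographic preferences allow.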
We show: * • In 3, that under $O$-legal lexicographic preferences, the class of mechanisms satisfying strategyproofness and non-bossiness is exactly the class of mechanisms that can be decomposed into multiple locally strategyproof and non-bossy mechanisms, one for each combination of type and allocated items of more important types. This class of mechanisms is exactly the class of $O$-legal conditional rule nets (CR-nets) Lang and Xia (2009); * • In 4, that a mechanism is strategyproof, non-bossy, and type-wise neutral if and only if it is an $O$-legal sequential composition of serial dictatorships; * • In 5, that a mechanism is strategyproof, non-bossy, and Pareto-optimal if and only if it is an $O$-legal CR-net composed of serial dictatorships. Finally, despite the negative result in 1 that when agents’ preferences do not share a common importance order on the types, sequential compositions of locally strategyproof mechanisms may not satisfy strategyproofness, we show in Theorem 6 that computing beneficial manipulations w.r.t. a sequential mechanism is NP-complete. ## 2 Related Work and Discussion The MTRA problem was introduced by Moulin (1995). More recently, it was studied by Mackin and Xia (2016), who characterize the class of strategyproof and non-bossy mechanisms under the unrestricted domain of strict preferences over bundles as the class of serial dictatorships. However, as they note, it may be unreasonable to expect agents to express preferences as complete rankings over all possible bundles, besides the obvious communication and complexity issues arising from agents’ preferences being represented by complete rankings.
The literature on characterizations of strategyproof mechanisms Svensson (1999); Pápai (2001); Hosseini and Larson (2019) for resource allocation problems belongs to the line of research initiated by the famous Gibbard-Satterthwaite Theorem Gibbard (1973); Satterthwaite (1975), which showed that dictatorships are the only strategyproof voting rules which satisfy non-imposition, which means that every alternative is selected under some preference profile. Several following works have focused on circumventing these negative results by identifying reasonable and natural restrictions on the domain of preferences. For voting, Moulin (1980) provides non-dictatorial rules satisfying strategyproofness and non-imposition under single-peaked Black (1948) preferences. Our work follows in this vein and is closely related to the works by Le Breton and Sen (1999), who assume that agents’ preferences are separable, and more recently, Lang and Xia (2009) who consider the multi-issue voting problem under the restriction of $O$-legal lexicographic preferences, allowing for conditional preferences given by CP-nets similar to our work. Xia and Conitzer (2010) consider a weaker and more expressive domain of lexicographic preferences allowing for conditional preferences. Here, agents have a common importance order on the issues, and an agent’s preferences over any issue are conditioned only on the outcome of more important issues. They characterize the class of voting rules satisfying strategyproofness and non-imposition as being exactly the class of all CR-nets. CR-nets define a hierarchy of voting rules, where the voting rule for the most important issue is fixed, and the voting rule for every subsequent issue depends only on the outcome of the previous issues. Similar results were shown earlier by Barbera et al. (1993, 1997, 1991). In a similar vein, Sikdar et al.
(2017, 2019) consider the multi-type variant of the classic housing market Shapley and Scarf (1974), first proposed by Moulin (1995), and Fujita et al. (2015) consider the variant where agents can receive multiple items. These works circumvent previous negative results on the existence of strategyproof and core-selecting mechanisms under the assumption of lexicographic extensions of CP-nets, and lexicographic preferences over bundles consisting of multiple items of a single type, respectively. Wang et al. (2020); Guo et al. (2020) study MTRAs with divisible and indivisible items, and provide mechanisms that are fair and efficient under the notion of stochastic dominance by extending the famous probabilistic serial Bogomolnaia and Moulin (2001) and random priority Abdulkadiroğlu and Sönmez (1998) mechanisms, and show that while their mechanisms do not satisfy strategyproofness in general, under the domain restriction of lexicographic preferences, strategyproofness is restored, and stronger notions of efficiency can be satisfied. ## 3 Preliminaries A multi-type resource allocation problem (MTRA) Mackin and Xia (2016) is given by a tuple $(N,M,P)$. Here, (1) $N=\\{1,\dots,n\\}$ is a set of agents, (2) $M=D_{1}\cup\dots\cup D_{p}$ is a set of items of $p$ types, where for each $i\leq p$, $D_{i}$ is a set of $n$ items of type $i$, and (3) $P=(\succ_{j})_{j\leq n}$ is a preference profile, where for each $j\leq n$, $\succ_{j}$ represents the preferences of agent $j$ over the set of all possible bundles ${\mathcal{D}}=D_{1}\times\dots\times D_{p}$. For any type $i\leq p$, we use $k_{i}$ to refer to the $k$-th item of type $i$, and we define $T=\\{D_{1},\dots,D_{p}\\}$. We also use $D_{<i}$ to refer to the set $\\{D_{1},\dots,D_{i-1}\\}$, $D_{>i}$ to refer to $\\{D_{i+1},\dots,D_{p}\\}$, and $D_{\leq i},D_{\geq i}$ are defined in the same manner. For any profile $P$, and agent $j\leq n$, we define $P_{-j}=(\succ_{k})_{k\leq n,k\neq j}$, and $P=(P_{-j},\succ_{j})$. #### Bundles.
Each bundle ${\bf{x}}\in{\mathcal{D}}$ is a $p$-vector, where for each type $i\leq p$, $[{\bf{x}}]_{i}$ denotes the item of type $i$. We use $a\in{\bf{x}}$ to indicate that bundle ${\bf{x}}$ contains item $a$. For any $S\subseteq T$, we define ${\mathcal{D}}_{S}=\times_{D\in S}D$, and $-S=T\setminus S$. For any $S\subseteq T$, any bundle ${\bf{x}}\in{\mathcal{D}}_{S}$, for any $D\in-S$, and item $a\in D$, $(a,{\bf{x}})$ denotes the bundle consisting of $a$ and the items in ${\bf{x}}$, and similarly, for any $U\subseteq-S$, and any bundle ${\bf{y}}\in{\mathcal{D}}_{U}$, we use $({\bf{x}},{\bf{y}})$ to represent the bundle consisting of the items in ${\bf{x}}$ and ${\bf{y}}$. For any $S\subseteq T$, we use ${{\bf{x}}}_{\perp S}$ to denote the items in ${\bf{x}}$ restricted to the types in $S$. #### Allocations. An allocation $A:N\to{\mathcal{D}}$ is a one-to-one mapping from agents to bundles such that no item is assigned to more than one agent. ${\mathcal{A}}$ denotes the set of all possible allocations. Given an allocation $A\in{\mathcal{A}}$, $A(j)$ denotes the bundle allocated to agent $j$. For any $S\subseteq T$, we use ${A}_{\perp S}:N\to{\mathcal{D}}_{S}$ to denote the allocation of items restricted to the types in $S$. #### CP-nets and $O$-legal Lexicographic Preferences. An acyclic CP-net Boutilier et al. (2004) ${\mathcal{N}}$ over ${\mathcal{D}}$ is defined by (i) a directed graph $G=(T,E)$ called the dependency graph, and (ii) for each type $i\leq p$, a conditional preference table $CPT(D_{i})$ that contains a linear order $\succ^{{\bf{x}}}$ over $D_{i}$ for each ${\bf{x}}\in{\mathcal{D}}_{Pa(D_{i})}$, where $Pa(D_{i})$ is the set of types corresponding to the parents of $D_{i}$ in $G$. 
A CP-net ${\mathcal{N}}$ represents a partial order over ${\mathcal{D}}$, namely the transitive closure of the preference relations represented by the $CPT$ entries, i.e., of $\\{(a_{i},{\bf{u}},{\bf{z}})\succ(b_{i},{\bf{u}},{\bf{z}}):i\leq p,a_{i},b_{i}\in D_{i}\text{ with }a_{i}\succ^{{\bf{u}}}b_{i},{\bf{u}}\in{\mathcal{D}}_{Pa(D_{i})},{\bf{z}}\in{\mathcal{D}}_{-Pa(D_{i})\setminus\\{D_{i}\\}}\\}$. Let $O=[D_{1}\rhd\dots\rhd D_{p}]$ be a linear order over the types. A CP-net is $O$-legal if there is no edge $(D_{i},D_{l})$ with $i>l$ in its dependency graph. A lexicographic extension of an $O$-legal CP-net ${\mathcal{N}}$ is a linear order $\succ$ over ${\mathcal{D}}$ such that for any $i\leq p$, any ${\bf{x}}\in{\mathcal{D}}_{D_{<i}}$, any $a_{i},b_{i}\in D_{i}$, and any ${\bf{y}},{\bf{z}}\in{\mathcal{D}}_{D_{>i}}$, if $a_{i}\succ^{{{\bf{x}}}_{\perp Pa(D_{i})}}b_{i}$ in ${\mathcal{N}}$, then $({\bf{x}},a_{i},{\bf{y}})\succ({\bf{x}},b_{i},{\bf{z}})$. The linear order $O$ over types is called an importance order: $D_{1}$ is the most important type, $D_{2}$ is the second most important type, etc. We use ${\mathcal{O}}$ to denote the set of all possible importance orders over types. Given an importance order $O$, we use ${\mathcal{L}}_{O}$ to denote the set of all possible linear orders that can be induced by lexicographic extensions of $O$-legal CP-nets as defined above. A preference relation $\succ\in{\mathcal{L}}_{O}$ is said to be an $O$-legal lexicographic preference relation, and a profile $P\in{\mathcal{L}}_{O}^{n}$ is an $O$-legal lexicographic profile. An $O$-legal preference relation is separable if the dependency graph of the underlying CP-net has no edges. We will assume that all preferences are $O$-legal lexicographic preferences throughout this paper unless specified otherwise.

Figure 1: An $O$-legal lexicographic preference with an underlying CP-net, where $O=[D_{1}\rhd D_{2}]$.

###### Example 1.
Here we show how to compare bundles under an $O$-legal lexicographic preference with an underlying CP-net. Figure 1(a) shows the dependency graph, in which $D_{2}$ depends on $D_{1}$. Figure 1(b) gives the $CPT$s for both types, which imply $(1_{1},1_{2})\succ(1_{1},2_{2})$ and $(2_{1},2_{2})\succ(2_{1},1_{2})$. Figure 1(c) gives the importance order $O=[D_{1}\rhd D_{2}]$. Using $O$, we can compare further bundles directly. For example, $(1_{1},2_{2})\succ(2_{1},1_{2})$ and $(1_{1},1_{2})\succ(2_{1},2_{2})$, because the most important type on which the bundles differ is $D_{1}$ and $1_{1}\succ^{\emptyset}2_{1}$. Finally, Figure 1(d) shows the relations among all the bundles. We note that any lexicographic extension of an $O$-legal CP-net according to the order $O$ does not violate any of the relations induced by the original CP-net, and always induces a linear order over all possible bundles, unlike CP-nets, which may induce only partial orders. For any $O$-legal lexicographic preference relation $\succ$ over ${\mathcal{D}}$ and any given ${\bf{x}}\in{\mathcal{D}}_{D_{<i}}$, we use ${\succ}_{\perp D_{i},{\bf{x}}}$ to denote the projection of the relation $\succ$ onto $D_{i}$ given ${\bf{x}}$, and ${\succ}_{\perp D_{\geq i},{\bf{x}}}$ to denote the projection of $\succ$ onto $\\{({\bf{x}},{\bf{z}}):{\bf{z}}\in{\mathcal{D}}_{D_{\geq i}}\\}$. For convenience, given an allocation $A$, for any $i\leq p$, we define ${\succ}_{\perp D_{i},A}$ and ${\succ}_{\perp D_{\geq i},A}$ similarly, where the preferences are projected based on the allocation of items of types that are more important than $i$; and given an $O$-legal lexicographic profile $P$, we define ${P}_{\perp D_{i},A}$ and ${P}_{\perp D_{\geq i},A}$ similarly, by projecting the preferences of every agent. We omit ${\bf{x}}$ (and similarly, $A$) if $i=1$. We use $D_{-i}$ to denote the set of all types except $D_{i}$.

#### Sequential and Local Mechanisms.
An _allocation mechanism_ $f:{\mathcal{P}}\to{\mathcal{A}}$ maps $O$-legal preference profiles to allocations. Given an importance order $O=[D_{1}\rhd\dots\rhd D_{p}]$, an $O$-legal sequential mechanism $f_{O}=(f_{1},\dots,f_{p})$ is composed of $p$ local mechanisms that are applied one after the other in $p$ rounds, where in each round $i\leq p$, a local mechanism $f_{i}$ allocates all of the items of $D_{i}$ given agents’ projected preferences over $D_{i}$, conditioned on the partial allocation of the previous rounds.

#### Desirable Properties.

An allocation mechanism $f$ satisfies:

* _anonymity_, if for any permutation $\Pi$ on the names of agents, and any profile $P$, $f(\Pi(P))=\Pi(f(P))$;
* _type-wise neutrality_, if for any permutation $\Pi=(\Pi_{1},\dots,\Pi_{p})$, where for any $i\leq p$, $\Pi$ only permutes the names of the items of type $i$ according to a permutation $\Pi_{i}$, and any profile $P$, $f(\Pi(P))=\Pi(f(P))$;
* _Pareto-optimality_, if for every allocation $A$ such that there exists an agent $j$ with $A(j)\succ_{j}f(P)(j)$, there is another agent $k$ such that $f(P)(k)\succ_{k}A(k)$;
* _non-bossiness_, if no agent can misreport her preferences and change the allocation of other agents without also changing her own allocation, i.e., there does not exist any pair $(P,\succ^{\prime}_{j})$, where $P$ is a profile and $\succ^{\prime}_{j}$ is the misreported preference of agent $j$, such that $f(P)(j)=f(P_{-j},\succ^{\prime}_{j})(j)$ and for some agent $k\neq j$, $f(P)(k)\neq f(P_{-j},\succ^{\prime}_{j})(k)$;
* _non-bossiness of more important types_, if no agent $j$ can misreport her local preferences for less important types and change the allocation of more important types to other agents without also changing her own allocation of more important types, i.e.
for every profile $P$, every agent $j\leq n$, every type $D_{i},i\leq p$, and every misreport $\succ^{\prime}_{j}$ of agent $j$’s preferences where for every $h<i$ and every ${\bf{u}}\in{\mathcal{D}}_{Pa(D_{h})}$, ${\succ^{\prime}}_{j\perp D_{h},{\bf{u}}}={\succ}_{j\perp D_{h},{\bf{u}}}$, it holds that if for some agent $k\neq j$, ${f(P_{-j},\succ^{\prime}_{j})(k)}_{\perp D_{\leq i}}\neq{f(P)(k)}_{\perp D_{\leq i}}$, then ${f(P_{-j},\succ^{\prime}_{j})(j)}_{\perp D_{\leq i}}\neq{f(P)(j)}_{\perp D_{\leq i}}$;
* _monotonicity_, if for any agent $j$ and any profile $P$, whenever $\succ^{\prime}_{j}$ is a misreport obtained from $\succ_{j}$ by raising the ranks of a set of bundles $Y\subseteq{\mathcal{D}}$ while preserving their relative order, i.e., for every ${\bf{x}},{\bf{z}}\in Y$, ${\bf{x}}\succ_{j}{\bf{z}}\implies{\bf{x}}\succ^{\prime}_{j}{\bf{z}}$, it holds that $f(P_{-j},\succ^{\prime}_{j})(j)\in\\{f(P)(j)\\}\cup Y$;
* _strategyproofness_, if no agent has a beneficial manipulation, i.e., there is no pair $(P,\succ^{\prime}_{j})$, where $P$ is a profile and $\succ^{\prime}_{j}$ is a manipulation of agent $j$’s preferences, such that $f(P_{-j},\succ^{\prime}_{j})(j)\succ_{j}f(P)(j)$.

## 4 Properties of Sequential Mechanisms Under Lexicographic Preferences

###### Theorem 1.

For any importance order $O\in{\mathcal{O}}$ and any $X\in\\{$anonymity, type-wise neutrality, non-bossiness, monotonicity, Pareto-optimality$\\}$, let $f_{O}=(f_{1},\dots,f_{p})$ be any $O$-legal sequential mechanism. Then, for $O$-legal preferences, if for every $i\leq p$ the local mechanism $f_{i}$ satisfies $X$, then $f_{O}$ satisfies $X$.

###### Proof.

(Sketch) Throughout, we will assume that $O=[D_{1}\rhd\dots\rhd D_{p}]$, and that $P$ is an arbitrary $O$-legal preference profile over $p$ types. For any $i\leq p$, we define $g_{i}$ to be the sequential mechanism $(f_{1},\dots,f_{i})$. The proofs of anonymity and type-wise neutrality are relegated to the appendix.

non-bossiness. Let us assume for the sake of contradiction that the claim is false, i.e.
there exists a profile $P$, an agent $j$, and a misreport $\succ^{\prime}_{j}$ such that for $P^{\prime}=(P_{-j},\succ^{\prime}_{j})$, $f_{O}(P)(j)=f_{O}(P^{\prime})(j)$ and $f_{O}(P)\neq f_{O}(P^{\prime})$. Then, there is a type $i\leq p$ such that ${f_{O}(P)}_{\perp D_{<i}}={f_{O}(P^{\prime})}_{\perp D_{<i}}$ and ${f_{O}(P)}_{\perp D_{i}}\neq{f_{O}(P^{\prime})}_{\perp D_{i}}$. Let $A={f_{O}(P)}_{\perp D_{<i}}$. Then, there is an agent $k$ such that $f_{i}({P}_{\perp D_{i},A})(k)\neq f_{i}({P^{\prime}}_{\perp D_{i},A})(k)$. By the choice of $i$, and since only agent $j$’s report differs between $P$ and $P^{\prime}$, ${\succ_{j}}_{\perp D_{i},A}\neq{\succ^{\prime}_{j}}_{\perp D_{i},A}$. Then, $f_{i}({\succ_{-j}}_{\perp D_{i},A},{\succ_{j}}_{\perp D_{i},A})(j)=f_{i}({\succ_{-j}}_{\perp D_{i},A},{\succ^{\prime}_{j}}_{\perp D_{i},A})(j)$, but $f_{i}({\succ_{-j}}_{\perp D_{i},A},{\succ_{j}}_{\perp D_{i},A})(k)\neq f_{i}({\succ_{-j}}_{\perp D_{i},A},{\succ^{\prime}_{j}}_{\perp D_{i},A})(k)$, a contradiction to our assumption that $f_{i}$ is non-bossy.

monotonicity. Let $P^{\prime}=(P_{-j},\succ^{\prime}_{j})$ be an $O$-legal profile obtained from $P$, and let $Y\subseteq{\mathcal{D}}$ be the set of bundles whose ranks are raised in $\succ^{\prime}_{j}$, such that the relative rankings of the bundles in $Y$ are unchanged between $P$ and $P^{\prime}$. For any $Y\subseteq{\mathcal{D}}$ and any ${\bf{u}}\in{\mathcal{D}}_{D_{<i}}$, let $Y^{D_{i}\mid{\bf{u}}}=\\{[{\bf{x}}]_{i}:{\bf{x}}\in Y,[{\bf{x}}]_{h}=[{\bf{u}}]_{h}\text{ for all }h\leq i-1\\}$. It is easy to see that if ${\bf{x}}_{1}={f_{O}(P^{\prime})(j)}_{\perp\\{D_{1}\\}}$, then it follows from the monotonicity of $f_{1}$ that ${\bf{x}}_{1}\in\\{{f_{O}(P)(j)}_{\perp\\{D_{1}\\}}\\}\cup Y^{D_{1}}$. Now, either ${\bf{x}}_{1}\neq{f_{O}(P)(j)}_{\perp\\{D_{1}\\}}$, or ${\bf{x}}_{1}={f_{O}(P)(j)}_{\perp\\{D_{1}\\}}$. Suppose ${\bf{x}}_{1}\neq{f_{O}(P)(j)}_{\perp\\{D_{1}\\}}$. Then, by the monotonicity of $f_{1}$, ${\bf{x}}_{1}\succ{f_{O}(P)(j)}_{\perp\\{D_{1}\\}}$.
Then, by our assumption of $O$-legal lexicographic preferences, for any ${\bf{z}}\in{\mathcal{D}}_{\\{D_{2},\dots,D_{p}\\}}$, $({\bf{x}}_{1},{\bf{z}})\in Y$. Therefore, $f_{O}(P^{\prime})(j)\in Y$. Suppose instead ${\bf{x}}_{1}={f_{O}(P)(j)}_{\perp\\{D_{1}\\}}$. Then, by a similar argument, ${f_{O}(P^{\prime})(j)}_{\perp\\{D_{2}\\}}\in\\{{f_{O}(P)(j)}_{\perp\\{D_{2}\\}}\\}\cup Y^{D_{2}\mid({\bf{x}}_{1})}$. Applying our argument recursively, we get that $f_{O}(P^{\prime})(j)\in\\{f_{O}(P)(j)\\}\cup Y$.

Pareto-optimality. The claim holds trivially for $p=1$. Suppose the claim is true for $p\leq k$ types. Let $P$ be an $O$-legal lexicographic profile over $k+1$ types, and let $f_{O}=(f_{i})_{i\leq k+1}$ be a sequential composition of Pareto-optimal local mechanisms. Suppose for the sake of contradiction that there exists an allocation $B$ such that some agents are strictly better off compared to $f_{O}(P)$, and no agent is worse off. Then, by our assumption of lexicographic preferences, for every agent $m$ who is not strictly better off, $B(m)=f_{O}(P)(m)$, and for every agent $j$ who is strictly better off, one of two cases must hold: (1) ${B(j)}_{\perp D_{1}}\succ_{j}{f_{O}(P)(j)}_{\perp D_{1}}$, or (2) ${B(j)}_{\perp D_{1}}={f_{O}(P)(j)}_{\perp D_{1}}$. (1): If there exists an agent $j$ such that ${B(j)}_{\perp D_{1}}\succ_{j}{f_{O}(P)(j)}_{\perp D_{1}}$, this is a contradiction to our assumption that $f_{1}$ is Pareto-optimal. (2): Suppose ${B(j)}_{\perp D_{1}}={f_{O}(P)(j)}_{\perp D_{1}}$ for all agents who are strictly better off. Let $g=(f_{2},\dots,f_{k+1})$. W.l.o.g. let agent $1$ strictly prefer $B(1)$ to $f_{O}(P)(1)$.
Then, ${B(1)}_{\perp D_{\leq k+1}\setminus D_{1}}\succ_{1}g({P}_{\perp D_{\leq k+1}\setminus D_{1},{f_{O}(P)}_{\perp D_{1}}})(1)$, and for every other agent $l\neq 1$, either ${B(l)}_{\perp D_{\leq k+1}\setminus D_{1}}\succ_{l}g({P}_{\perp D_{\leq k+1}\setminus D_{1},{f_{O}(P)}_{\perp D_{1}}})(l)$, or ${B(l)}_{\perp D_{\leq k+1}\setminus D_{1}}=g({P}_{\perp D_{\leq k+1}\setminus D_{1},{f_{O}(P)}_{\perp D_{1}}})(l)$, which is a contradiction to our induction assumption that $g$ is Pareto-optimal. ∎

###### Theorem 2.

For any importance order $O\in{\mathcal{O}}$ and any $X\in\\{$anonymity, type-wise neutrality, non-bossiness, monotonicity, Pareto-optimality$\\}$, let $f_{O}=(f_{1},\dots,f_{p})$ be any $O$-legal sequential mechanism. For $O$-legal preferences, if $f_{O}$ satisfies $X$, then for every $i\leq p$, $f_{i}$ satisfies $X$.

###### Proof.

(Sketch) We only provide the proof of non-bossiness here. The rest of the proofs are in the appendix.

non-bossiness. Assume for the sake of contradiction that $k\leq p$ is the most important type such that $f_{k}$ does not satisfy non-bossiness. Then, there exists a preference profile $Q=(\succ^{k}_{j})_{j\leq n}$ over $D_{k}$, a bossy agent $l$, and a misreport $Q^{\prime}=(\succ^{k}_{-l},\bar{\succ}^{k}_{l})$, such that $f_{k}(Q^{\prime})(l)=f_{k}(Q)(l)$, but $f_{k}(Q^{\prime})\neq f_{k}(Q)$. Now, consider the $O$-legal separable lexicographic profile $P$, where for any type $i\leq p$, the preferences over type $D_{i}$ are denoted ${P}_{\perp D_{i}}$ and ${P}_{\perp D_{k}}=Q$, and the profile $P^{\prime}$ obtained from $P$ by replacing $\succ_{l}$ with $\succ^{\prime}_{l}$, which in turn is obtained from $\succ_{l}$ by replacing ${\succ_{l}}_{\perp D_{k}}$ with $\bar{\succ}^{k}_{l}$.
It is easy to see that ${f_{O}(P^{\prime})}_{\perp D_{<k}}={f_{O}(P)}_{\perp D_{<k}}$ and ${f_{O}(P^{\prime})(l)}_{\perp D_{k}}={f_{O}(P)(l)}_{\perp D_{k}}$, but ${f_{O}(P^{\prime})}_{\perp D_{k}}\neq{f_{O}(P)}_{\perp D_{k}}$, and by our assumption of separable preferences, ${f_{O}(P^{\prime})}_{\perp D_{>k}}={f_{O}(P)}_{\perp D_{>k}}$. This implies that $f_{O}(P^{\prime})(l)=f_{O}(P)(l)$, but $f_{O}(P^{\prime})\neq f_{O}(P)$, implying that $f_{O}$ does not satisfy non-bossiness, which is a contradiction. ∎

## 5 Strategyproofness of Sequential Mechanisms

A natural question to ask is whether it is possible to design strategyproof sequential mechanisms when preferences are lexicographic, but each agent $j\leq n$ may have a possibly different importance order $O_{j}\in{\mathcal{O}}$ over the types, and her preference over ${\mathcal{D}}$ is $O_{j}$-legal and lexicographic. A sequential mechanism applies local mechanisms according to some importance order $O\in{\mathcal{O}}$ and is only well defined for $O$-legal preferences. When preferences are not $O$-legal, it is necessary to define how to project agents’ preferences given a partial allocation when a sequential mechanism is applied. Consider an agent $j$ with $O_{j}$-legal lexicographic preferences, and a partial allocation ${A}_{\perp S}$ for some $S\subseteq T$, which allocates ${\bf{x}}\in{\mathcal{D}}_{S}$ to $j$. The question is how agent $j$’s preferences should be interpreted over a type $D_{i}$ that has not been allocated yet. We define two natural ways in which agents may wish their preferences to be interpreted.
We say that an agent is optimistic if, for any type $D_{i}\not\in S$ and any pair of items $a_{i},b_{i}\in D_{i}$, $a_{i}\succ b_{i}$ if and only if, according to her original preferences, $\sup\\{{\bf{y}}\in{\mathcal{D}}:{\bf{y}}_{k}={\bf{x}}_{k}\text{ for every }D_{k}\in S,{\bf{y}}_{i}=a_{i}\\}\succ\sup\\{{\bf{y}}\in{\mathcal{D}}:{\bf{y}}_{k}={\bf{x}}_{k}\text{ for every }D_{k}\in S,{\bf{y}}_{i}=b_{i}\\}$. Similarly, an agent is pessimistic if, for any type $D_{i}\not\in S$ and any pair of items $a_{i},b_{i}\in D_{i}$, $a_{i}\succ b_{i}$ if and only if $\inf\\{{\bf{y}}\in{\mathcal{D}}:{\bf{y}}_{k}={\bf{x}}_{k}\text{ for every }D_{k}\in S,{\bf{y}}_{i}=a_{i}\\}\succ\inf\\{{\bf{y}}\in{\mathcal{D}}:{\bf{y}}_{k}={\bf{x}}_{k}\text{ for every }D_{k}\in S,{\bf{y}}_{i}=b_{i}\\}$.

###### Proposition 1.

For any importance order $O\in{\mathcal{O}}$, when the preferences are not $O$-legal, and agents are either optimistic or pessimistic, a sequential mechanism $f_{O}$ composed of strategyproof mechanisms is not necessarily strategyproof.

###### Proof.

We show this with a counterexample. Consider a profile with two agents and two types, $H$ (houses) and $C$ (cars). Agent $1$’s importance order is $H\rhd C$, her preferences over $H$ are $1_{H}\succ 2_{H}$, and her preferences over $C$ are conditioned on the assigned house: given $1_{H}$, $1_{C}\succ 2_{C}$; given $2_{H}$, $2_{C}\succ 1_{C}$. Agent $2$ has importance order $C\rhd H$ and separable preferences, with order $2_{C}\succ 1_{C}$ on cars and $1_{H}\succ 2_{H}$ on houses. Consider the sequential mechanism composed of serial dictatorships where $H\rhd C$, the picking order over agents is $(2,1)$ for houses, and $(1,2)$ for cars.
When agents are truthful and either optimistic or pessimistic, the mechanism allocates $2_{H}2_{C}$ to agent $1$ and $1_{H}1_{C}$ to agent $2$. When agent $2$ misreports her preferences over houses as $2_{H}\succ 1_{H}$, and agent $1$ is truthful and either optimistic or pessimistic, the mechanism allocates $1_{H}1_{C}$ to agent $1$ and $2_{H}2_{C}$ to agent $2$, a beneficial misreport for agent $2$. ∎

In contrast, sequential mechanisms composed of locally strategyproof mechanisms are guaranteed to be strategyproof under two natural restrictions on the domain of lexicographic preferences: (1) when agents’ preferences are lexicographic and separable, but not necessarily $O$-legal w.r.t. a common importance order $O$, and (2) when agents have $O$-legal lexicographic preferences, and the local mechanisms also satisfy non-bossiness.

###### Proposition 2.

For any importance order $O\in{\mathcal{O}}$, a sequential mechanism composed of strategyproof local mechanisms is strategyproof,

(1) when agents are either optimistic or pessimistic, and their preferences are separable and lexicographic, or

(2) when agents’ preferences are lexicographic and $O$-legal and the local mechanisms also satisfy non-bossiness.

###### Proof.

(1): Let $P$ be a profile of separable lexicographic preferences. Suppose for the sake of contradiction that an agent $j$ has a beneficial misreport $\succ^{\prime}_{j}$, and let $P^{\prime}=(P_{-j},\succ^{\prime}_{j})$. Let $k$ be the type of highest importance to $j$ for which $[f_{O}(P^{\prime})(j)]_{k}\neq[f_{O}(P)(j)]_{k}$. Then, by our assumption that preferences are lexicographic, $k$ being the most important type for $j$ where her allocated item differs, and $P^{\prime}$ being a beneficial manipulation, it must hold that $[f_{O}(P^{\prime})(j)]_{k}\succ_{j}[f_{O}(P)(j)]_{k}$. Since preferences are separable, $[f(P^{\prime})]_{k}=f_{k}({P^{\prime}}_{\perp\\{D_{k}\\}})$.
Since every other agent is truthful, ${P^{\prime}}_{\perp\\{D_{k}\\}}=({P_{-j}}_{\perp\\{D_{k}\\}},{\succ^{\prime}_{j}}_{\perp D_{k}})$, and ${\succ^{\prime}_{j}}_{\perp D_{k}}\neq{\succ_{j}}_{\perp D_{k}}$ is a beneficial manipulation, which implies that $f_{k}$ is not strategyproof, a contradiction to our assumption.

(2): Now, we consider the case where the profile of truthful preferences $P$ is an arbitrary $O$-legal lexicographic profile of preferences that may not be separable, and the local mechanisms are non-bossy and strategyproof. Suppose for the sake of contradiction that an agent $j$ has a beneficial misreport $\succ^{\prime}_{j}$, and let $P^{\prime}=(P_{-j},\succ^{\prime}_{j})$. W.l.o.g. let $O=[D_{1}\rhd\dots\rhd D_{p}]$. Let $k$ be the most important type for which agent $j$ receives a different item. We begin by showing that, by our assumption that the local mechanisms are non-bossy and our assumption of $O$-legal lexicographic preferences, it holds that for every $i<k$ according to $O$, ${f_{O}(P^{\prime})}_{\perp D_{i}}={f_{O}(P)}_{\perp D_{i}}$. For the sake of contradiction, let $h<k$ be the most important type for which some agent $l$ receives a different item, i.e., $[f(P^{\prime})(l)]_{h}\neq[f(P)(l)]_{h}$ and ${f(P^{\prime})}_{\perp D_{<h}}={f(P)}_{\perp D_{<h}}$. Then, by our assumption of $O$-legal lexicographic preferences, and every other agent reporting truthfully, ${P^{\prime}}_{\perp D_{h},{f(P^{\prime})}_{\perp D_{<h}}}=({P}_{-j\perp D_{h},{f(P^{\prime})}_{\perp D_{<h}}},{\succ^{\prime}}_{j\perp D_{h},{f(P^{\prime})}_{\perp D_{<h}}})$. By minimality of $k$, we know that ${f(P^{\prime})(j)}_{\perp D_{h}}={f(P)(j)}_{\perp D_{h}}$. But ${f(P^{\prime})(l)}_{\perp D_{h}}\neq{f(P)(l)}_{\perp D_{h}}$, which implies that $f_{h}$ does not satisfy non-bossiness, which is a contradiction.
Now, by minimality of $k$, our assumption that preferences are $O$-legal and lexicographic, and the fact just shown that $k$ is the most important type for which any agent’s allocation changes, it must hold that $[f(P^{\prime})(j)]_{k}=f_{k}({P^{\prime}}_{\perp D_{k},{f(P^{\prime})}_{\perp D_{<k}}})(j)\succ f_{k}({P}_{\perp D_{k},{f(P)}_{\perp D_{<k}}})(j)=[f(P)(j)]_{k}$. However, ${f(P^{\prime})}_{\perp D_{<k}}={f(P)}_{\perp D_{<k}}$, and ${P^{\prime}_{-j}}_{\perp D_{k},{f(P^{\prime})}_{\perp D_{<k}}}={P_{-j}}_{\perp D_{k},{f(P^{\prime})}_{\perp D_{<k}}}$. This implies that $f_{k}$ is not strategyproof, which is a contradiction. ∎

Having established that it is possible to design strategyproof sequential mechanisms, we now turn our attention to strategyproof sequential mechanisms that satisfy other desirable properties such as non-bossiness, neutrality, monotonicity, and Pareto-optimality. In Theorem 3, we show that under $O$-legal preferences, a mechanism satisfies strategyproofness and non-bossiness of more important types if and only if it is an $O$-legal CR-net composed of mechanisms that satisfy the corresponding counterparts of these properties for allocating items of a single type, namely, local strategyproofness and non-bossiness.

###### Definition 1.

[CR-net] A (directed) conditional rule net (CR-net) ${\mathcal{M}}$ over ${\mathcal{D}}$ is defined by

(i) a directed graph $G=(\\{D_{1},...,D_{p}\\},E)$, called the dependency graph, and

(ii) for each $D_{i}$, a conditional rule table $\text{CRT}_{i}$ that contains a mechanism for $D_{i}$, denoted ${{\mathcal{M}}}_{\perp D_{i},A}$, for each allocation $A$ of all items of the types that are parents of $D_{i}$ in $G$, denoted $\text{Pa}(D_{i})$.

Let $O=[D_{1}\rhd\dots\rhd D_{p}]$; then a CR-net is $O$-legal if there is no edge $(D_{i},D_{l})$ with $i>l$ in its dependency graph.

Figure 2: A serial dictatorship CR-net $f$.

###### Example 2.
We note that each local mechanism in a CR-net may be any mechanism that can allocate $n$ items to $n$ agents given strict preferences. In Figure 2, we show a CR-net $f$ where all the local mechanisms are serial dictatorships. The directed graph is shown in Figure 2(a), which implies that $D_{2}$ depends on $D_{1}$. Figure 2(b) shows the CRT of $f$. In the CRT, $f_{1}:(b,a)$ means that in the serial dictatorship $f_{1}$, agent $b$ picks her most preferred item first, followed by agent $a$; similarly for $f_{2}$ and $f^{\prime}_{2}$. The conditions in the CR-net, which are partial allocations, are represented by mappings; for example, $(a\rightarrow 2_{1})$ means that agent $a$ gets $2_{1}$. Figures 2(c) and (d) give the $O$-legal preferences of agents $a$ and $b$, respectively, where $O=[D_{1}\rhd D_{2}]$. According to $f$, we first apply $f_{1}$ on $D_{1}$, and we have $a\rightarrow 1_{1},b\rightarrow 2_{1}$. Then, by the CRT of $f$, we use $f_{2}$ for $D_{2}$, and we have $a\rightarrow 1_{2},b\rightarrow 2_{2}$. Therefore, $f$ outputs the allocation $a\rightarrow(1_{1},1_{2}),b\rightarrow(2_{1},2_{2})$.

###### Lemma 1.

When agents’ preferences are restricted to the $O$-legal lexicographic preference domain, for any strategyproof mechanism $f$, any profile $P$, and any pair $(P_{-j},\succ^{\prime}_{j})$ obtained by agent $j$ misreporting her preferences by raising the rank of $f(P)(j)$, i.e., such that for any bundle $b$, $f(P)(j)\succ_{j}b\implies f(P)(j)\succ^{\prime}_{j}b$, it holds that $f(P_{-j},\succ^{\prime}_{j})(j)=f(P)(j)$.

###### Proof.

Suppose for the sake of contradiction that the claim fails for a strategyproof mechanism $f$. Let $P=(\succ_{j})_{j\leq n}$ be a profile, and let $j$ be an agent who misreports her preferences as $\succ^{\prime}_{j}$, obtained from $\succ_{j}$ by raising the rank of $f(P)(j)$; specifically, for any bundle $b$, $f(P)(j)\succ_{j}b\implies f(P)(j)\succ^{\prime}_{j}b$.
Then $f(P_{-j},\succ^{\prime}_{j})(j)\neq f(P)(j)$, and since preferences are linear orders, either (1) $f(P_{-j},\succ^{\prime}_{j})(j)\succ^{\prime}_{j}f(P)(j)$, or (2) $f(P)(j)\succ^{\prime}_{j}f(P_{-j},\succ^{\prime}_{j})(j)$.

(1) Suppose $f(P_{-j},\succ^{\prime}_{j})(j)\succ^{\prime}_{j}f(P)(j)$. First, we claim that if $f(P_{-j},\succ^{\prime}_{j})(j)\succ^{\prime}_{j}f(P)(j)$, then $f(P_{-j},\succ^{\prime}_{j})(j)\succ_{j}f(P)(j)$. Suppose for the sake of contradiction that this were not true; then $f(P)(j)\succ_{j}f(P_{-j},\succ^{\prime}_{j})(j)$ and $f(P_{-j},\succ^{\prime}_{j})(j)\succ^{\prime}_{j}f(P)(j)$. This is a contradiction to our assumption on $\succ^{\prime}_{j}$. This implies that $f(P_{-j},\succ^{\prime}_{j})(j)\succ_{j}f(P)(j)$ and $\succ^{\prime}_{j}$ is a beneficial misreport for agent $j$, a contradiction to our assumption that $f$ is strategyproof.

(2) If $f(P)(j)\succ^{\prime}_{j}f(P_{-j},\succ^{\prime}_{j})(j)$, then $\succ_{j}$ is a beneficial misreport for agent $j$ w.r.t. $P^{\prime}$, a contradiction to our assumption that $f$ is strategyproof. ∎

###### Theorem 3.

For any importance order $O$, a mechanism satisfies strategyproofness and non-bossiness of more important types under the $O$-legal lexicographic preference domain if and only if it is an $O$-legal locally strategyproof and non-bossy CR-net.

###### Proof.

The if part is obvious (and is proved in Proposition 2). We prove the only if part by induction.

###### Claim 1.

If an allocation mechanism satisfies non-bossiness of more important types and strategyproofness, then it can be decomposed into a locally strategyproof and non-bossy CR-net.

We prove the claim by induction on the number of types. The claim is trivially true for the base case with $p=1$ type. Suppose the claim holds for $p=k$ types, i.e., when there are at most $k$ types, if an allocation mechanism is non-bossy in more important types and strategyproof, then it can be decomposed into locally strategyproof and non-bossy mechanisms.
When $p=k+1$, we prove that any strategyproof allocation mechanism $f$ for a basic type-wise allocation problem that satisfies non-bossiness of more important types can be decomposed into two parts:

1. Applying a local allocation mechanism $f_{1}$ to $D_{1}$ to compute the allocation $[A]_{1}$.
2. Applying an allocation mechanism ${f}_{\perp D_{-1},[A]_{1}}$ to the types in $D_{-1}$.

$\bullet$ Step 1. For any strategyproof allocation mechanism satisfying non-bossiness of more important types, the allocation for type $1$ depends only on the preferences restricted to $D_{1}$.

###### Claim 2.

For any pair of profiles $P=(\succ_{j})_{j\leq n}$ and $Q=(\succ^{\prime}_{j})_{j\leq n}$ with ${P}_{\perp D_{1}}={Q}_{\perp D_{1}}$, we must have ${f(P)}_{\perp D_{1}}={f(Q)}_{\perp D_{1}}$.

###### Proof.

Suppose for the sake of contradiction that ${f(P)}_{\perp D_{1}}\neq{f(Q)}_{\perp D_{1}}$. For any $0\leq j\leq n$, define $P_{j}=(\succ^{\prime}_{1},\dots,\succ^{\prime}_{j},\succ_{j+1},\dots,\succ_{n})$. Since $P_{0}=P$, $P_{n}=Q$, and ${f(P_{0})}_{\perp D_{1}}\neq{f(P_{n})}_{\perp D_{1}}$, there is some $j\leq n-1$ such that ${f(P_{j})}_{\perp D_{1}}\neq{f(P_{j+1})}_{\perp D_{1}}$. Let $[A]_{1}={f(P_{j})(j+1)}_{\perp D_{1}}$ and $[B]_{1}={f(P_{j+1})(j+1)}_{\perp D_{1}}$.

Case 1: $[A]_{1}=[B]_{1}$, but for some other agent $\hat{j}$, ${f(P_{j})(\hat{j})}_{\perp D_{1}}\neq{f(P_{j+1})(\hat{j})}_{\perp D_{1}}$. This is a direct violation of non-bossiness of more important types, because ${P}_{j\perp D_{1}}={P}_{j+1\perp D_{1}}$ by construction.

Case 2: $[A]_{1}\neq[B]_{1}$. If $[B]_{1}{\succ}_{j+1\perp D_{1}}[A]_{1}$, then $(P_{j},\succ^{\prime}_{j+1})$ is a beneficial manipulation due to agents’ lexicographic preferences. Otherwise, if $[A]_{1}{\succ}_{j+1\perp D_{1}}[B]_{1}$, then $(P_{j+1},\succ_{j+1})$ is a beneficial manipulation due to our assumption that ${\succ}_{j+1\perp D_{1}}={\succ^{\prime}}_{j+1\perp D_{1}}$ and agents’ lexicographic preferences. This contradicts the strategyproofness of $f$. ∎

$\bullet$ Step 2. Show that $f_{1}$ satisfies strategyproofness and non-bossiness.
First, we show that $f_{1}$ must satisfy strategyproofness. Suppose for the sake of contradiction that $f$ is strategyproof but $f_{1}$ is not strategyproof. Let $P=(\succ_{j})_{j\leq n}$ be a profile of agents’ preferences over $D_{1}$. Then, there exists an agent $j^{*}$ with a beneficial manipulation $\succ^{\prime}_{j^{*}}$. Now, consider a profile $Q=(\bar{\succ}_{j})_{j\leq n}$ where for every agent $j$, ${\bar{\succ}}_{j\perp D_{1}}=\succ_{j}$, and the mechanism $f$ whose local mechanism for $D_{1}$ is $f_{1}$. We know from Step 1 that ${f(Q)}_{\perp D_{1}}=f_{1}({Q}_{\perp D_{1}})=f_{1}(P)$. However, in that case, because agents’ preferences are lexicographic with $D_{1}$ being the most important type, agent $j^{*}$ has a beneficial manipulation $\bar{\succ}^{\prime}_{j^{*}}$ where ${\bar{\succ}^{\prime}}_{j^{*}\perp D_{1}}=\succ^{\prime}_{j^{*}}$, since the resulting allocation $f(\bar{\succ}_{-j^{*}},\bar{\succ}^{\prime}_{j^{*}})$ gives $j^{*}$ a strictly preferred item of type $D_{1}$. This is a contradiction to our assumption on the strategyproofness of $f$.

Next, we show that $f_{1}$ satisfies non-bossiness. Suppose for the sake of contradiction that $f_{1}$ is not non-bossy. Let $P=(\succ_{j})_{j\leq n}$ be a profile of agents’ preferences over $D_{1}$. Then, there exists an agent $j^{*}$ with a bossy preference $\succ^{\prime}_{j^{*}}$ such that for $P^{\prime}=(\succ_{-j^{*}},\succ^{\prime}_{j^{*}})$, $f_{1}(P)(j^{*})=f_{1}(P^{\prime})(j^{*})$ while $f_{1}(P)(j)\neq f_{1}(P^{\prime})(j)$ for some $j$. Now, consider a profile $Q=(\bar{\succ}_{j})_{j\leq n}$ where for every agent $j$, ${\bar{\succ}}_{j\perp D_{1}}=\succ_{j}$, and the mechanism $f$ whose local mechanism for $D_{1}$ is $f_{1}$. We know from Step 1 that ${f(Q)}_{\perp D_{1}}=f_{1}({Q}_{\perp D_{1}})=f_{1}(P)$.
However, in that case, because agents’ preferences are lexicographic with $D_{1}$ being the most important type, agent $j^{*}$ has a bossy preference $\bar{\succ}^{\prime}_{j^{*}}$ where ${\bar{\succ}^{\prime}}_{j^{*}\perp D_{1}}=\succ^{\prime}_{j^{*}}$, such that ${f(Q)(j^{*})}_{\perp D_{1}}={f(\bar{\succ}_{-j^{*}},\bar{\succ}^{\prime}_{j^{*}})(j^{*})}_{\perp D_{1}}$ while ${f(Q)(j)}_{\perp D_{1}}\neq{f(\bar{\succ}_{-j^{*}},\bar{\succ}^{\prime}_{j^{*}})(j)}_{\perp D_{1}}$ for some $j$. This is a contradiction to our assumption that $f$ satisfies non-bossiness of more important types.

$\bullet$ Step 3. The allocation for the remaining types depends only on the allocation for $D_{1}$ and the preferences projected on the remaining types.

###### Claim 3.

Consider any pair of profiles $P_{1},P_{2}$ such that $[A]_{1}=f_{1}({P}_{1\perp D_{1}})=f_{1}({P}_{2\perp D_{1}})$ and ${P}_{1\perp D_{-1},[A]_{1}}={P}_{2\perp D_{-1},[A]_{1}}$; then $f(P_{1})=f(P_{2})$.

###### Proof.

We prove the claim by constructing a profile $P$ such that $f(P)=f(P_{1})=f(P_{2})$. Let $P_{1}=(\succ_{j})_{j\leq n}$, $P_{2}=(\bar{\succ}_{j})_{j\leq n}$, and $P=(\hat{\succ}_{j})_{j\leq n}$. Let $\hat{\succ}_{j}$ be obtained from $\succ_{j}$ by changing the preferences over $D_{1}$, raising $[A]_{1}(j)$ to the top position. Agents’ preferences over $D_{-1}$ are ${\hat{\succ}}_{j\perp D_{-1},[A]_{1}}={\succ}_{j\perp D_{-1},[A]_{1}}(={\bar{\succ}}_{j\perp D_{-1},[A]_{1}})$. It is easy to check that for every bundle $b$, $f(P_{1})(j)\succ_{j}b\implies f(P_{1})(j)\hat{\succ}_{j}b$. By applying Lemma 1 sequentially to every agent, $f(P)=f(P_{1})$. Similarly, $f(P)=f(P_{2})$. It follows that for any allocation $[A]_{1}$ of items of type $D_{1}$, there exists a mechanism ${f}_{\perp D_{-1},[A]_{1}}$ such that for any profile $P$, we can write $f(P)$ as $(f_{1}({P}_{\perp D_{1}}),{f}_{\perp D_{-1},[A]_{1}}({P}_{\perp D_{-1},[A]_{1}}))$. ∎

$\bullet$ Step 4.
Show that ${f}_{\perp D_{-1},[A]_{1}}$ satisfies strategyproofness and non-bossiness of more important types for any allocation $[A]_{1}$ of $D_{1}$.

Suppose for the sake of contradiction that ${f}_{\perp D_{-1},[A]_{1}}$ is not strategyproof for some profile ${P}_{\perp D_{-1},[A]_{1}}$. Then, for $P=(\succ_{j})_{j\leq n}$ there is an agent $j^{*}$ with a beneficial manipulation w.r.t. $P$ and $[A]_{1}$, with ${\succ^{\prime}}_{j^{*}\perp D_{-1},[A]_{1}}\neq{\succ}_{j^{*}\perp D_{-1},[A]_{1}}$ and ${\succ^{\prime}}_{j^{*}\perp D_{1}}={\succ}_{j^{*}\perp D_{1}}$. Let $Q=(\succ_{-j^{*}},\succ^{\prime}_{j^{*}})$. Then, $f(Q)(j^{*})=([A]_{1},{f}_{\perp D_{-1},[A]_{1}}({Q}_{\perp D_{-1},[A]_{1}}))(j^{*})\succ_{j^{*}}([A]_{1},{f}_{\perp D_{-1},[A]_{1}}({P}_{\perp D_{-1},[A]_{1}}))(j^{*})=f(P)(j^{*})$. This is a contradiction to the strategyproofness of $f$.

Suppose for the sake of contradiction that ${f}_{\perp D_{-1},[A]_{1}}$ does not satisfy non-bossiness of more important types. Then, there is a profile $P=(\succ_{j})_{j\leq n}$ and an agent $j^{*}$ with a bossy misreport of her projected preferences ${\succ}_{j^{*}\perp D_{-1},[A]_{1}}$. Then, it is easy to verify that $f$ also does not satisfy non-bossiness of more important types.

In Step 1, we showed that the allocation for $D_{1}$ depends only on the restriction of agents’ preferences to $D_{1}$, i.e., on ${P}_{\perp D_{1}}$. In Step 3, we showed that $f(P)$ can be decomposed as $(f_{1}({P}_{\perp D_{1}}),{f}_{\perp D_{-1},[A]_{1}}({P}_{\perp D_{-1},[A]_{1}}))$, where $[A]_{1}=f_{1}({P}_{\perp D_{1}})$. In Step 2, we showed that $f_{1}$ must be strategyproof and non-bossy. In Step 4, we showed that for any output $[A]_{1}$ of $f_{1}$, the mechanism ${f}_{\perp D_{-1},[A]_{1}}$ satisfies both strategyproofness and non-bossiness of more important types, i.e., we can apply the induction assumption that ${f}_{\perp D_{-1},[A]_{1}}$ is a locally strategyproof and non-bossy CR-net of allocation mechanisms.
Together with the statement of Step 2, this completes the inductive argument. ∎ In Theorem 4, we characterize the class of strategyproof, non-bossy of more important types, and type-wise neutral mechanisms under $O$-legal lexicographic preferences, as the class of $O$-legal sequential compositions of serial dictatorships. The proof relies on Theorem 3 and Claim 4, where we show that any CR-net mechanism that satisfies type-wise neutrality is an $O$-legal sequential composition of neutral mechanisms, one for each type. ###### Claim 4. For any importance order $O$, an $O$-legal CR-net with type-wise neutrality is an $O$-legal sequential composition of neutral mechanisms. ###### Proof. We prove the claim by induction. Suppose $f$ is such a CR-net. From the decomposition in the proof of Claim 1, we observe that the mechanism used for type $i$ depends on ${f(P)}_{\perp D_{\leq i}}$. From this observation, and the importance order $O$, we can deduce that the mechanism for type $1$ depends on no other type, and therefore there is only one mechanism for type $1$, say, $f_{1}$. First we show that $f_{1}$ is neutral. Otherwise, there exists a permutation $\Pi_{1}$ over $D_{1}$ such that $f_{1}(\Pi_{1}({P}_{\perp D_{1}}))\neq\Pi_{1}(f_{1}({P}_{\perp D_{1}}))$. Let $I=(I_{i})_{i\leq p}$ where $I_{i}$ is the identity permutation for type $i$. Then for $\Pi=(\Pi_{1},I_{-1})$, we have ${f(\Pi(P))}_{\perp D_{1}}=f_{1}(\Pi_{1}({P}_{\perp D_{1}}))\neq\Pi_{1}(f_{1}({P}_{\perp D_{1}}))={\Pi(f(P))}_{\perp D_{1}}$, a contradiction. Now, suppose that for a given $i$, there is only one mechanism $f_{i^{\prime}}$ for each type $i^{\prime}\leq i$, and each $f_{i^{\prime}}$ is neutral. Let $\Pi=(\Pi_{\leq i},I_{>i})$ and we have ${f(\Pi(P))}_{\perp D_{\leq i}}={\Pi(f(P))}_{\perp D_{\leq i}}$. Let $A={f(P)}_{\perp D_{\leq i}}$ and $B={f(\Pi(P))}_{\perp D_{\leq i}}=\Pi_{\leq i}(A)$. Because $P$ is chosen arbitrarily, $A$ and $B$ are also arbitrary outputs of mechanism $f$ over $D_{\leq i}$.
Let $f_{i+1}={f}_{\perp D_{i+1},A}$, and $f^{\prime}_{i+1}={f}_{\perp D_{i+1},B}$. Similarly, both $f_{i+1}$ and $f^{\prime}_{i+1}$ are arbitrary mechanisms in $CRT$. Because $f$ satisfies type-wise neutrality, we have ${f(\Pi(P))}_{\perp D_{i+1}}={\Pi(f(P))}_{\perp D_{i+1}}$, i.e. $f_{i+1}({P}_{\perp D_{i+1},A})=f^{\prime}_{i+1}({\Pi(P)}_{\perp D_{i+1},B})$. By assumption we know that $\Pi_{i+1}=I_{i+1}$, so ${P}_{\perp D_{i+1},A}={\Pi(P)}_{\perp D_{i+1},B}$. That means $f_{i+1}$ and $f^{\prime}_{i+1}$ can replace each other in $CRT$ of $f$ for type $i+1$. Therefore, in fact, there is only one mechanism $f_{i+1}$ for type $i+1$ in $CRT$. Moreover $f_{i+1}$ must be neutral. Otherwise, there must be some permutation $\Pi_{i+1}$ over $D_{i+1}$ such that $f_{i+1}(\Pi_{i+1}({P}_{\perp D_{i+1},A}))\neq\Pi_{i+1}(f_{i+1}({P}_{\perp D_{i+1},A}))$. Then for $\Pi=(\Pi_{\leq i+1},I_{>i+1})$, we have ${f(\Pi(P))}_{\perp D_{i+1}}=f_{i+1}({\Pi(P)}_{\perp D_{i+1},B})=f_{i+1}(\Pi_{i+1}({P}_{\perp D_{i+1},A}))\neq\Pi_{i+1}(f_{i+1}({P}_{\perp D_{i+1},A}))={\Pi(f(P))}_{\perp D_{i+1}}$, a contradiction. ∎ ###### Theorem 4. For any importance order $O$, under the $O$-legal lexicographic preference domain, an allocation mechanism satisfies strategyproofness, non-bossiness of more important types, and type-wise neutrality if and only if it is an $O$-legal sequential composition of serial dictatorships. ###### Proof. Let $O=[D_{1}\rhd D_{2}\rhd\dots\rhd D_{p}]$. When $p=1$, we know that serial dictatorship is characterized by strategyproofness, non-bossiness, and neutrality Mackin and Xia (2016). Let $P=(\succ_{j})_{j\leq n}$ be an arbitrary $O$-legal lexicographic preference profile. $\Rightarrow$: Let $f_{O}=(f_{1},\dots,f_{p})$. It follows from Theorem 3 that if each $f_{i}$ satisfies strategyproofness and non-bossiness, then $f_{O}$ satisfies strategyproofness and non-bossiness of more important types, because $f_{O}$ can be regarded as a CR-net with no dependency among types.
If each $f_{i}$ satisfies neutrality, then by Theorem 1 we have that $f$ satisfies type-wise neutrality. Therefore, since each $f_{i}$ is a serial dictatorship, which implies that it satisfies strategyproofness, non-bossiness, and neutrality, we have that $f_{O}$ satisfies strategyproofness, non-bossiness of more important types, and type-wise neutrality. $\Leftarrow$: We now prove the converse. Let $f$ be a strategyproof and non-bossy mechanism under $O$-legal lexicographic preferences. Then by Theorem 3, we have that $f$ is an $O$-legal strategyproof and non-bossy CR-net. The rest of the proof depends on Claim 4, which implies that there is only one mechanism $f_{i}$ for each type $i$ in $CRT$, and that $f_{i}$ is neutral. Therefore, with Theorem 3 and Claim 4, if $f$ satisfies strategyproofness, non-bossiness of more important types, and type-wise neutrality, we have that $f$ is an $O$-legal sequential composition of local mechanisms that are strategyproof, non-bossy, and neutral, which implies that they are serial dictatorships Mackin and Xia (2016). ∎ ###### Theorem 5. For any importance order $O$, under the $O$-legal lexicographic preference domain, an allocation mechanism satisfies strategyproofness, non-bossiness of more important types, and Pareto-optimality if and only if it is an $O$-legal CR-net composed of serial dictatorships. ###### Proof. (Sketch) For a single type, we know that serial dictatorship is characterized by strategyproofness, Pareto-optimality, and non-bossiness Pápai (2001). The proof is similar to that of Theorem 4, and uses a similar argument to Theorems 1 and 2, to show that an $O$-legal CR-net is Pareto-optimal if and only if every local mechanism is Pareto-optimal. The details are provided in the appendix. ∎ Finally, we revisit the question of strategyproofness when preferences are not $O$-legal w.r.t. a common importance order.
We show in Theorem 6 that even when agents’ preferences are restricted to lexicographic preferences, there is a computational barrier against manipulation: determining whether there exists a beneficial manipulation w.r.t. a sequential mechanism is NP-complete for MTRAs. Details and the full proof are relegated to the appendix. ###### Definition 2. Given an MTRA $(N,M,P)$, where $P$ is a profile of lexicographic preferences, and a sequential mechanism $f_{O}$, in BeneficialManipulation we are asked whether there exists an agent $j$ and an $O$-legal lexicographic preference relation $\succ^{\prime}_{j}$ such that $f_{O}((P_{-j},\succ^{\prime}_{j}))(j)\succ_{j}f_{O}(P)(j)$. ###### Theorem 6. BeneficialManipulation is NP-complete when preferences are not $O$-legal. ## 6 Conclusion and Future Work We studied the design of strategyproof sequential mechanisms for MTRAs under $O$-legal lexicographic preferences, and showed the relationship between properties of sequential mechanisms and the local mechanisms that they are composed of. In doing so, we obtained strong characterization results showing that any mechanism satisfying strategyproofness, and combinations of appropriate notions of non-bossiness, neutrality, and Pareto-optimality for MTRAs must be a sequential composition of local mechanisms. This decomposability of strategyproof mechanisms for MTRAs provides fresh hope for the design of decentralized mechanisms for MTRAs and multiple assignment problems. Going forward, there are several interesting open questions, such as whether it is possible to design decentralized mechanisms for MTRAs that are fair, efficient, and strategyproof under different preference domains. ## Acknowledgments LX acknowledges support from NSF #1453542 and #1716333 and a gift fund from Google. YC acknowledges NSFC under Grants 61772035 and 61932001. ## References * Abdulkadiroğlu and Sönmez (1998) Atila Abdulkadiroğlu and Tayfun Sönmez.
Random Serial Dictatorship and the Core from Random Endowments in House Allocation Problems. _Econometrica_ , 66(3):689–702, 1998. * Barbera et al. (1991) Salvador Barbera, Hugo Sonnenschein, and Lin Zhou. Voting by committees. _Econometrica_ , 59(3):595–609, 1991. * Barbera et al. (1993) Salvador Barbera, Faruk Gul, and Ennio Stacchetti. Generalized median voter schemes and committees. _Journal of Economic Theory_ , 61(2):262–289, 1993. * Barbera et al. (1997) Salvador Barbera, Jordi Masso, and Alejandro Neme. Voting under constraints. _Journal of Economic Theory_ , 76(2):298–321, 1997. * Bhattacharya et al. (2013) Arka A. Bhattacharya, David Culler, Eric Friedman, Ali Ghodsi, Scott Shenker, and Ion Stoica. Hierarchical scheduling for diverse datacenter workloads. In _Proceedings of the 4th Annual Symposium on Cloud Computing_ , pages 4:1–4:15, Santa Clara, CA, USA, 2013. * Black (1948) Duncan Black. On the rationale of group decision-making. _Journal of Political Economy_ , 56(1):23–34, 1948. * Bogomolnaia and Moulin (2001) Anna Bogomolnaia and Hervé Moulin. A New Solution to the Random Assignment Problem. _Journal of Economic Theory_ , 100(2):295–328, 2001. * Boutilier et al. (2004) Craig Boutilier, Ronen Brafman, Carmel Domshlak, Holger Hoos, and David Poole. CP-nets: A tool for representing and reasoning with conditional ceteris paribus statements. _Journal of Artificial Intelligence Research_ , 21:135–191, 2004. * Elster (1992) Jon Elster. _Local justice: How institutions allocate scarce goods and necessary burdens_. Russell Sage Foundation, 1992. * Fujita et al. (2015) Etsushi Fujita, Julien Lesca, Akihisa Sonoda, Taiki Todo, and Makoto Yokoo. A Complexity Approach for Core-Selecting Exchange with Multiple Indivisible Goods under Lexicographic Preferences. In _Proceedings of the Twenty-Ninth AAAI Conference on Artificial Intelligence_ , pages 907–913, 2015. * Ghodsi et al. 
(2011) Ali Ghodsi, Matei Zaharia, Benjamin Hindman, Andy Konwinski, Scott Shenker, and Ion Stoica. Dominant Resource Fairness: Fair Allocation of Multiple Resource Types. In _Proceedings of the 8th USENIX Conference on Networked Systems Design and Implementation_ , pages 323–336, Boston, MA, USA, 2011. * Ghodsi et al. (2012) Ali Ghodsi, Vyas Sekar, Matei Zaharia, and Ion Stoica. Multi-resource Fair Queueing for Packet Processing. In _Proceedings of the ACM SIGCOMM 2012 conference on Applications, technologies, architectures, and protocols for computer communication_ , volume 42, pages 1–12, Helsinki, Finland, 2012. * Gibbard (1973) Allan Gibbard. Manipulation of voting schemes: A general result. _Econometrica_ , 41:587–601, 1973. * Gigerenzer and Goldstein (1996) Gerd Gigerenzer and Daniel G. Goldstein. Reasoning the fast and frugal way: Models of bounded rationality. _Psychological Review_ , 103(4):650–669, 1996\. * Guo et al. (2020) Xiaoxi Guo, Sujoy Sikdar, Haibin Wang, Lirong Xia, Yongzhi Cao, and Hanpin Wang. Probabilistic serial mechanism for multi-type resource allocation. _arXiv preprint arXiv:2004.12062_ , 2020. * Hosseini and Larson (2019) Hadi Hosseini and Kate Larson. Multiple assignment problems under lexicographic preferences. In _Proceedings of the 18th International Conference on Autonomous Agents and MultiAgent Systems_ , pages 837–845, 2019. * Huh et al. (2013) Woonghee Tim Huh, Nan Liu, and Van-Anh Truong. Multiresource Allocation Scheduling in Dynamic Environments. _Manufacturing and Service Operations Management_ , 15(2):280–291, 2013. * Lang and Xia (2009) Jérôme Lang and Lirong Xia. Sequential composition of voting rules in multi-issue domains. _Mathematical Social Sciences_ , 57(3):304–324, 2009. * Le Breton and Sen (1999) Michel Le Breton and Arunava Sen. Separable preferences, strategyproofness, and decomposability. _Econometrica_ , 67(3):605–628, 1999. * Mackin and Xia (2016) Erika Mackin and Lirong Xia. 
Allocating Indivisible Items in Categorized Domains. In _Proceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence (IJCAI-16)_ , pages 359–365, 2016. * Moulin (1980) Hervé Moulin. On strategy-proofness and single peakedness. _Public Choice_ , 35(4):437–455, 1980. * Moulin (1995) Hervé Moulin. _Cooperative Microeconomics: A Game-Theoretic Introduction_. Prentice Hall, 1995. * Pápai (2001) Szilvia Pápai. Strategyproof and nonbossy multiple assignments. _Journal of Public Economic Theory_ , 3(3):257–71, 2001. * Satterthwaite (1975) Mark Satterthwaite. Strategy-proofness and Arrow’s conditions: Existence and correspondence theorems for voting procedures and social welfare functions. _Journal of Economic Theory_ , 10:187–217, 1975. * Sekar et al. (2017) Shreyas Sekar, Sujoy Sikdar, and Lirong Xia. Condorcet consistent bundling with social choice. In _Proceedings of the 2017 International Conference on Autonomous Agents and Multiagent Systems_. International Foundation for Autonomous Agents and Multiagent Systems, 2017. * Shapley and Scarf (1974) Lloyd Shapley and Herbert Scarf. On cores and indivisibility. _Journal of Mathematical Economics_ , 1(1):23–37, 1974. * Sikdar et al. (2017) Sujoy Sikdar, Sibel Adali, and Lirong Xia. Mechanism Design for Multi-Type Housing Markets. In _Proceedings of the 31st AAAI Conference on Artificial Intelligence_ , 2017. * Sikdar et al. (2019) Sujoy Sikdar, Sibel Adalı, and Lirong Xia. Mechanism design for multi-type housing markets with acceptable bundles. In _Proceedings of the AAAI Conference on Artificial Intelligence_ , volume 33, pages 2165–2172, 2019. * Sun et al. (2015) Zhaohong Sun, Hideaki Hata, Taiki Todo, and Makoto Yokoo. Exchange of Indivisible Objects with Asymmetry. In _Proceedings of the Twenty-Fourth International Joint Conference on Artificial Intelligence (IJCAI 2015)_ , pages 97–103, 2015. * Svensson (1999) Lars-Gunnar Svensson. Strategy-proof allocation of indivisible goods. 
_Social Choice and Welfare_ , 16(4):557–567, 1999. * Wang et al. (2020) H. Wang, Sujoy Sikdar, Xiaoxi Guo, Lirong Xia, Yongzhi Cao, and Hanpin Wang. Multi-type resource allocation with partial preferences. In _AAAI_ , 2020. * Xia and Conitzer (2010) Lirong Xia and Vincent Conitzer. Strategy-proof voting rules over multi-issue domains with restricted preferences. In _Proceedings of the Sixth Workshop on Internet and Network Economics (WINE)_ , pages 402–414, Stanford, CA, USA, 2010. * Xia et al. (2011) Lirong Xia, Vincent Conitzer, and Jérôme Lang. Strategic sequential voting in multi-issue domains and multiple-election paradoxes. In _Proceedings of the ACM Conference on Electronic Commerce (EC)_ , pages 179–188, San Jose, CA, USA, 2011. ## 7 Appendix ### 7.1 Proof of Theorem 1 See Theorem 1. ###### Proof. Throughout, we will assume that $O=[D_{1}\rhd\dots\rhd D_{p}]$, and that $P$ is an arbitrary $O$-legal preference profile over $p$ types. For any $i\leq p$, we define $g_{i}$ to be the sequential mechanism $(f_{1},\dots,f_{i})$. anonymity. It is easy to see that the claim is true when $p=1$. Now, suppose that the claim is true for all $p\leq k$. Let $P$ be an arbitrary profile over $k+1$ types. Let $g=(f_{2},\dots,f_{k+1})$. Now, $\Pi(f_{O}(P))=(\Pi(f_{1}({P}_{\perp D_{1}})),\Pi(g({P}_{\perp D_{\leq k+1}\setminus D_{1},\Pi(f_{1}({P}_{\perp D_{1}}))})))$, and $f_{O}(\Pi(P))=(f_{1}(\Pi({P}_{\perp D_{1}})),g(\Pi({P}_{\perp D_{\leq k+1}\setminus D_{1},f_{1}(\Pi({P}_{\perp D_{1}}))})))$. Since $f_{1}$ is anonymous, $\Pi(f_{1}({P}_{\perp D_{1}}))=f_{1}(\Pi({P}_{\perp D_{1}}))$. Therefore, ${P}_{\perp D_{\leq k+1}\setminus D_{1},\Pi(f_{1}({P}_{\perp D_{1}}))}={P}_{\perp D_{\leq k+1}\setminus D_{1},f_{1}(\Pi({P}_{\perp D_{1}}))}$. Then, by the induction assumption, $g$ satisfies anonymity, and we have $\Pi(g({P}_{\perp D_{\leq k+1}\setminus D_{1},\Pi(f_{1}({P}_{\perp D_{1}}))}))=g(\Pi({P}_{\perp D_{\leq k+1}\setminus D_{1},f_{1}(\Pi({P}_{\perp D_{1}}))}))$.
It follows that $\Pi(f_{O}(P))=f_{O}(\Pi(P))$. type-wise neutrality. We show only the induction step. Suppose that the claim is always true when $p\leq k$. Let $P$ be an arbitrary profile over $k+1$ types. Let $g=(f_{2},\dots,f_{k+1})$, and $\Pi_{-1}=(\Pi_{2},\dots,\Pi_{k+1})$. Let $A_{1}=\Pi_{1}(f_{1}({P}_{\perp D_{1}}))$ and $B_{1}=f_{1}(\Pi_{1}({P}_{\perp D_{1}}))$. Now, $\Pi(f_{O}(P))=(A_{1},\Pi_{-1}(g({P}_{\perp{D_{\leq k+1}\setminus D_{1}},A_{1}})))$, and $f_{O}(\Pi(P))=(B_{1},g(\Pi_{-1}({P}_{\perp D_{\leq k+1}\setminus D_{1},B_{1}})))$. Since $f_{1}$ is neutral, $A_{1}=B_{1}$. Then, ${P}_{\perp D_{\leq k+1}\setminus D_{1},A_{1}}={P}_{\perp D_{\leq k+1}\setminus D_{1},B_{1}}$. Then, by the induction assumption, $g$ satisfies type-wise neutrality, and $\Pi_{-1}(g({P}_{\perp D_{\leq k+1}\setminus D_{1},A_{1}}))=g(\Pi_{-1}({P}_{\perp D_{\leq k+1}\setminus D_{1},B_{1}}))$. It follows that $\Pi(f_{O}(P))=f_{O}(\Pi(P))$. non-bossiness. Let us assume for the sake of contradiction that the claim is false, i.e. there exists a profile $P$, an agent $j$ and a misreport $\succ^{\prime}_{j}$ such that for $P^{\prime}=(\succ_{-j},\succ^{\prime}_{j})$, $f_{O}(P)(j)=f_{O}(P^{\prime})(j)$, and $f_{O}(P)\neq f_{O}(P^{\prime})$. Then, there is a type $i\leq p$ such that ${f_{O}(P)}_{\perp D_{<i}}={f_{O}(P^{\prime})}_{\perp D_{<i}}$ and ${f_{O}(P)}_{\perp D_{i}}\neq{f_{O}(P^{\prime})}_{\perp D_{i}}$. Let $A={f_{O}(P)}_{\perp D_{<i}}$. Then, there is an agent $k$ such that $f_{i}({P}_{\perp D_{i},A})(k)\neq f_{i}({P^{\prime}}_{\perp D_{i},A})(k)$. By the choice of $i$, and the assumption that every other agent reports preferences truthfully, ${\succ_{j}}_{\perp D_{i},A}\neq{\succ^{\prime}_{j}}_{\perp D_{i},A}$.
Then, $f_{i}({\succ_{-j}}_{\perp D_{i},A},{\succ_{j}}_{\perp D_{i},A})(j)=f_{i}({\succ_{-j}}_{\perp D_{i},A},{\succ^{\prime}_{j}}_{\perp D_{i},A})(j)$, but $f_{i}({\succ_{-j}}_{\perp D_{i},A},{\succ_{j}}_{\perp D_{i},A})(k)\neq f_{i}({\succ_{-j}}_{\perp D_{i},A},{\succ^{\prime}_{j}}_{\perp D_{i},A})(k)$, a contradiction to our assumption that $f_{i}$ is non-bossy. monotonicity. Let $P^{\prime}=(P_{-j},\succ^{\prime}_{j})$ be an $O$-legal profile obtained from $P$ by raising the ranks of the bundles in a set $Y\subseteq{\mathcal{D}}$, such that the relative rankings of the bundles in $Y$ are unchanged between $P$ and $P^{\prime}$. For any $Y\subseteq{\mathcal{D}}$, and any ${\bf{u}}\in{\mathcal{D}}_{D_{<i}}$, let $Y^{D_{i}\mid{\bf{u}}}=\\{x_{i}:{\bf{x}}\in Y,x_{h}=u_{h}\text{ for all }h\leq i-1\\}$. Let ${\bf{x}}_{1}={f_{O}(P^{\prime})(j)}_{\perp\\{D_{1}\\}}$. It follows from strong monotonicity of $f_{1}$ that ${\bf{x}}_{1}\in\\{{f_{O}(P)(j)}_{\perp\\{D_{1}\\}}\\}\cup Y^{D_{1}}$. Now, either ${\bf{x}}_{1}\neq{f_{O}(P)(j)}_{\perp\\{D_{1}\\}}$, or ${\bf{x}}_{1}={f_{O}(P)(j)}_{\perp\\{D_{1}\\}}$. Suppose ${\bf{x}}_{1}\neq{f_{O}(P)(j)}_{\perp\\{D_{1}\\}}$. Then, by strong monotonicity of $f_{1}$, ${\bf{x}}_{1}\succ_{j}{f_{O}(P)(j)}_{\perp\\{D_{1}\\}}$. Then, by our assumption of $O$-legal lexicographic preferences, for any ${\bf{z}}\in{\mathcal{D}}_{\\{D_{2},\dots,D_{p}\\}}$, $({\bf{x}}_{1},{\bf{z}})\in Y$. Therefore, $f_{O}(P^{\prime})(j)\in Y$. Suppose ${\bf{x}}_{1}={f_{O}(P)(j)}_{\perp\\{D_{1}\\}}$, then by a similar argument, ${f_{O}(P^{\prime})(j)}_{\perp\\{D_{2}\\}}\in\\{{f_{O}(P)(j)}_{\perp\\{D_{2}\\}}\\}\cup Y^{D_{2}\mid({\bf{x}}_{1})}$. Applying our argument recursively, we get that $f_{O}(P^{\prime})(j)\in\\{f_{O}(P)(j)\\}\cup Y$. Pareto-optimality. Suppose the claim is true for $p\leq k$ types.
Let $P$ be an $O$-legal lexicographic profile over $k+1$ types, and $f_{O}=(f_{i})_{i\leq k+1}$ is a sequential composition of Pareto-optimal local mechanisms. Suppose for the sake of contradiction that there exists an allocation $B$ such that some agents are strictly better off compared to $f_{O}(P)$, and no agent is worse off. Then, by our assumption of lexicographic preferences, for every agent $k$ who is not strictly better off, $B(k)=f_{O}(P)(k)$, and for every agent $j$ who is strictly better off, one of two cases must hold. (1) ${B(j)}_{\perp D_{1}}\succ_{j}{f_{O}(P)(j)}_{\perp D_{1}}$, or (2) ${B(j)}_{\perp D_{1}}={f_{O}(P)(j)}_{\perp D_{1}}$. (1): If there exists an agent such that ${B(j)}_{\perp D_{1}}\succ_{j}{f_{O}(P)(j)}_{\perp D_{1}}$, this is a contradiction to our assumption that $f_{1}$ is Pareto-optimal. (2): Suppose ${B(j)}_{\perp D_{1}}={f_{O}(P)(j)}_{\perp D_{1}}$ for all agents who are strictly better off. Let $g=(f_{2},\dots,f_{k+1})$. W.l.o.g. let agent $1$ strictly prefer $B(1)$ to $f_{O}(P)(1)$. Then, ${B(1)}_{\perp D_{\leq k+1}\setminus D_{1}}\succ_{1}g({P}_{\perp D_{\leq k+1}\setminus D_{1},{f_{O}(P)}_{\perp D_{1}}})(1)$, and for every other agent $l\neq 1$, either ${B(l)}_{\perp D_{\leq k+1}\setminus D_{1}}\succ_{l}g({P}_{\perp D_{\leq k+1}\setminus D_{1},{f_{O}(P)}_{\perp D_{1}}})(l)$, or $g({P}_{\perp D_{\leq k+1}\setminus D_{1},{f_{O}(P)}_{\perp D_{1}}})(l)={B(l)}_{\perp D_{\leq k+1}\setminus D_{1}}$, which is a contradiction to our induction assumption. ∎ ### 7.2 Proof of Theorem 2 See Theorem 2. ###### Proof. anonymity. Suppose that for some $k\leq p$, $f_{k}$ does not satisfy anonymity. Then, there exists a profile $P_{k}$ on $D_{k}$ such that for some permutation $\Pi$ on agents, $f_{k}(\Pi(P_{k}))\neq\Pi(f_{k}(P_{k}))$. Now, consider the $O$-legal separable lexicographic profile $P$, where for any type $i\leq p$, the preferences over type $D_{i}$ are denoted ${P}_{\perp D_{i}}$ and ${P}_{\perp D_{k}}=P_{k}$.
It is easy to see that, $f_{O}(\Pi(P))=(f_{i}(\Pi({P}_{\perp D_{i}})))_{i\leq p}$, and $\Pi(f_{O}(P))=\Pi(f_{1}({P}_{\perp D_{1}}),\dots,f_{p}({P}_{\perp D_{p}}))=(\Pi(f_{1}({P}_{\perp D_{1}})),\dots,\Pi(f_{p}({P}_{\perp D_{p}})))$. By anonymity of $f_{O}$, $f_{O}(\Pi(P))=\Pi(f_{O}(P))$, which implies that $f_{k}(\Pi({P}_{\perp D_{k}}))=\Pi(f_{k}({P}_{\perp D_{k}}))$, which is a contradiction. type-wise neutrality. Suppose that for some $k\leq p$, $f_{k}$ does not satisfy neutrality. Then, there exists a profile $P_{k}$ on $D_{k}$ such that for some permutation $\Pi_{k}$ on $D_{k}$, $f_{k}(\Pi_{k}(P_{k}))\neq\Pi_{k}(f_{k}(P_{k}))$. Now, consider the $O$-legal separable lexicographic profile $P$, where for any type $i\leq p$, the preferences over type $D_{i}$ are denoted ${P}_{\perp D_{i}}$ and ${P}_{\perp D_{k}}=P_{k}$, and let $\Pi=(\Pi_{1},\dots,\Pi_{k},\dots,\Pi_{p})$ be a permutation over ${\mathcal{D}}$ obtained by applying $\Pi_{i}$ on $D_{i}$ for each type $i\leq p$. $f_{O}(\Pi(P))=(f_{i}(\Pi_{i}({P}_{\perp D_{i}})))_{i\leq p}$, and $\Pi(f_{O}(P))=(\Pi_{1}(f_{1}({P}_{\perp D_{1}})),\dots,\Pi_{p}(f_{p}({P}_{\perp D_{p}})))$. By type-wise neutrality of $f_{O}$, $f_{O}(\Pi(P))=\Pi(f_{O}(P))$. This implies that $f_{k}(\Pi_{k}({P}_{\perp D_{k}}))=\Pi_{k}(f_{k}({P}_{\perp D_{k}}))$, where ${P}_{\perp D_{k}}=P_{k}$, which is a contradiction. non-bossiness. Assume for the sake of contradiction that $k\leq p$ is the most important type such that $f_{k}$ does not satisfy non-bossiness. Then, there exists a preference profile $Q=(\succ^{k}_{j})_{j\leq n}$ over $D_{k}$, and a bossy agent $l$ and a misreport $Q^{\prime}=(\succ^{k}_{-l},\bar{\succ}^{k}_{l})$, such that $f_{k}(Q^{\prime})(l)=f_{k}(Q)(l)$, but $f_{k}(Q^{\prime})\neq f_{k}(Q)$.
Now, consider the $O$-legal separable lexicographic profile $P$, where for any type $i\leq p$, the preferences over type $D_{i}$ are denoted ${P}_{\perp D_{i}}$ and ${P}_{\perp D_{k}}=Q$, and the profile $P^{\prime}$ obtained from $P$ by replacing $\succ_{l}$ with $\succ^{\prime}_{l}$, which in turn is obtained from $\succ_{l}$ by replacing ${\succ_{l}}_{\perp D_{k}}$ with $\bar{\succ}^{k}_{l}$. It is easy to see that ${f_{O}(P^{\prime})}_{\perp D_{<k}}={f_{O}(P)}_{\perp D_{<k}}$, and ${f_{O}(P^{\prime})(l)}_{\perp\\{D_{k}\\}}={f_{O}(P)(l)}_{\perp\\{D_{k}\\}}$, but ${f_{O}(P^{\prime})}_{\perp\\{D_{k}\\}}\neq{f_{O}(P)}_{\perp\\{D_{k}\\}}$, and by our assumption of separable preferences, ${f_{O}(P^{\prime})}_{\perp D_{>k}}={f_{O}(P)}_{\perp D_{>k}}$. This implies that $f_{O}(P^{\prime})(l)=f_{O}(P)(l)$, but $f_{O}(P^{\prime})\neq f_{O}(P)$, implying that $f_{O}$ does not satisfy non-bossiness, which is a contradiction. monotonicity. Suppose for the sake of contradiction that $k$ is the most important type for which $f_{k}$ does not satisfy monotonicity. Then, there exists a profile $Q=(\succ^{k}_{j})_{j\leq n}$ of linear orders over $D_{k}$, such that for some agent $l$, $\bar{\succ}^{k}_{l}$ obtained from $\succ^{k}_{l}$ by raising the rank of a set of items $Z\subseteq D_{k}$ without changing their relative order, $f_{k}((Q_{-l},\bar{\succ}^{k}_{l}))(l)\not\in\\{f_{k}(Q)(l)\\}\cup Z$. Now, consider the $O$-legal separable lexicographic profile $P$, where for any type $i\leq p$, the preferences over type $D_{i}$ are denoted ${P}_{\perp D_{i}}$ and ${P}_{\perp D_{k}}=Q$, and the profile $P^{\prime}$ obtained from $P$ by replacing $\succ_{l}$ with $\succ^{\prime}_{l}$, which in turn is obtained from $\succ_{l}$ by replacing ${\succ_{l}}_{\perp D_{k}}$ with $\bar{\succ}^{k}_{l}$. It is easy to see that ${f_{O}(P^{\prime})}_{\perp D_{<k}}={f_{O}(P)}_{\perp D_{<k}}$, and ${f_{O}(P^{\prime})(l)}_{\perp D_{k}}\notin\\{{f_{O}(P)(l)}_{\perp D_{k}}\\}\cup Z$.
By our assumption of $O$-legal separable lexicographic preferences, this implies that $f_{O}$ does not satisfy monotonicity, which is a contradiction. Pareto-optimality. Suppose that for some $k\leq p$, $f_{k}$ does not satisfy Pareto-optimality. Then, there exists a profile $P_{k}$ such that $f_{k}(P_{k})$ is Pareto-dominated by an allocation $A$ of $D_{k}$. Now, consider the $O$-legal separable lexicographic profile $P$, where for any type $i\leq p$, the preferences over type $D_{i}$ are denoted ${P}_{\perp D_{i}}$ and ${P}_{\perp D_{k}}=P_{k}$. Then, $f_{O}(P)=(f_{i}({P}_{\perp D_{i}}))_{i\leq p}$ is Pareto-dominated by the allocation $B$ of all types, where for all types $i\neq k$, ${B}_{\perp D_{i}}=f_{i}({P}_{\perp D_{i}})$, and ${B}_{\perp D_{k}}=A$, which is a contradiction to the assumption that $f_{O}$ is Pareto-optimal. ∎ ### 7.3 Proof of Theorem 5 See Theorem 5. ###### Proof. Let $O=[D_{1}\rhd D_{2}\rhd\cdots\rhd D_{p}]$. For a single type, we know that serial dictatorship is characterized by strategyproofness, Pareto-optimality, and non-bossiness Pápai (2001). Let $P=(\succ_{j})_{j\leq n}$ be an arbitrary $O$-legal lexicographic preference profile. $\Rightarrow$: Let $f$ be an $O$-legal CR-net. From Theorem 3 we know that if each local mechanism of $f$ satisfies strategyproofness and non-bossiness, then $f$ satisfies strategyproofness and non-bossiness of more important types. We now prove that if each local mechanism is Pareto-optimal, then $f$ is Pareto-optimal, similarly to Theorem 1. Suppose for the sake of contradiction that $f$ is not Pareto-optimal, i.e. for some $P$, the allocation $B=(B_{i})_{i\leq p}$ Pareto-dominates $f(P)=A=(A_{i})_{i\leq p}$. Let $i$ be the most important type on which $A$ and $B$ differ, so that $A_{<i}=B_{<i}$ and $B_{i}$ Pareto-dominates $A_{i}$. Let $P_{i}={P}_{\perp D_{i},A_{<i}}$. However, by the assumption that $f$ is a CR-net, we know that $A_{i}={f}_{\perp D_{i},A_{<i}}(P_{i})$ is Pareto-optimal, i.e.
$A_{i}$ is not Pareto-dominated by $B_{i}$, which is a contradiction. Therefore if each local mechanism of $f$ is a serial dictatorship, which implies that it satisfies strategyproofness, non-bossiness, and Pareto-optimality, then $f$ satisfies strategyproofness, non-bossiness of more important types, and Pareto-optimality. $\Leftarrow$: Let $f$ be a mechanism for $O$-legal lexicographic preferences. With Theorem 3, we have that if $f$ satisfies strategyproofness and non-bossiness of more important types, then it is an $O$-legal strategyproof and non-bossy CR-net. We also have that if $f$ is a CR-net satisfying Pareto-optimality, then each local mechanism is also Pareto-optimal, with a similar proof to Theorem 2. Together we have that if $f$ satisfies strategyproofness, non-bossiness of more important types, and Pareto-optimality, then $f$ is an $O$-legal CR-net and each local mechanism satisfies strategyproofness, non-bossiness, and Pareto-optimality, which implies that it is a serial dictatorship Pápai (2001). ∎ ### 7.4 Proof of Theorem 6 See Theorem 6. ###### Proof. We show a reduction from 3-SAT. In an instance $I$ of 3-SAT involving $s$ Boolean variables $\\{x_{1},\dots,x_{s}\\}$, and a formula ${\mathcal{F}}$ involving $t$ clauses $\\{c_{1},\dots,c_{t}\\}$ in 3-CNF, we are asked if ${\mathcal{F}}$ is satisfiable. Given such an arbitrary instance $I$ of 3-SAT, we construct an instance $J$ of BeneficialManipulation in polynomial time. We will show that $I$ is a Yes instance of 3-SAT if and only if $J$ is a Yes instance of BeneficialManipulation. For each $j\leq t$, we label the three literals in clause $j$ as $l_{j_{1}}^{j},l_{j_{2}}^{j}$, and $l_{j_{3}}^{j}$ where $j_{1}<j_{2}<j_{3}$. We construct instance $J$ of BeneficialManipulation to have: Types: $s+1$ types. Agents: * • For every variable $i\leq s$, and every clause $j\leq t$, two agents $0_{i}^{j},1_{i}^{j}$, and a dummy agent $d_{i}^{j}$. * • For every clause $j$, an agent $c_{j}$. * • A special agent $0$.
Items: For every agent $a$ and every type $k\leq s+1$, an item named $[a]_{k}$. Preferences: For some agents, we only specify their importance orders (or local preferences) over types (or items) that are important for this proof, and assume that their preferences are an arbitrary linear order with the specified preferences over the top few types (or items). * • agent $0$ has importance order $s+1\rhd\text{others}$, with local preferences: * – type $s+1$: $[0]_{s+1}\succ[c_{1}]_{s+1}\succ\text{others}$ * – every other type $k<s+1$: $[0]_{k}\succ\text{others}$ * • agents $l_{i}^{j}$, $l\in\\{0,1\\}$, have importance order $i\rhd\text{others}$. * – type $i$: $\text{NEXT}_{i}^{j}\succ[l_{i}^{j}]_{i}\succ[0]_{i}\succ\text{others}$, where $\text{NEXT}_{i}^{j}=[l_{i}^{j+1}]_{i}$ if $j<t$, and $\text{NEXT}_{i}^{j}=[l_{i}^{1}]_{i}$ if $j=t$. * – type $s+1$ preferences are conditioned on the assignment of type $i$: * * an item in $D_{i}\setminus\\{\text{NEXT}_{i}^{j}\\}$: $[d_{i}^{j}]_{s+1}\succ\text{others}$. * * the item $\text{NEXT}_{i}^{j}$: $[l_{i}^{j}]_{s+1}\succ\text{others}$. * – every other type $k$: $[l_{i}^{j}]_{k}\succ\text{others}$. * • agents $d_{i}^{j}$ have importance order $i\rhd\text{others}$. * – type $i$: $[d_{i}^{j}]_{i}\succ\text{others}$. * • agents $c_{j}$ have importance order $s+1\rhd\text{others}$. * – type $s+1$: $[l_{j_{1}}^{j}]_{s+1}\succ[l_{j_{2}}^{j}]_{s+1}\succ[l_{j_{3}}^{j}]_{s+1}\succ[0]_{s+1}\succ\text{others}$. * – every other type $k$: $[c_{j}]_{k}\succ\text{others}$. Sequential mechanism: composed of serial dictatorships applied in the order $O=1\rhd\dots\rhd s+1$, where the priority orders over agents are: * • for types $i\leq s$: $(\text{others},0,0_{i}^{t},\dots,0_{i}^{1},1_{i}^{t},\dots,1_{i}^{1})$. * • type $s+1$: $(0_{1}^{1},\dots,0_{1}^{t},0_{2}^{1},\dots,0_{s}^{t},1_{1}^{1},\dots,1_{1}^{t},1_{2}^{1},\dots,1_{s}^{t},c_{1},\dots,c_{t},0,\allowbreak\text{others})$.
Similar to preferences, we only specify the part of the priority orderings over the agents for each serial dictatorship that is relevant to the proof, and assume that the priority orderings are linear orders over the agents, where the specified part holds. The main idea is that if the 3-SAT instance is satisfiable, special agent $0$ enables every $c_{j}$ agent to get an item of type $s+1$ corresponding to a literal $l_{i}^{j}$ that satisfies the clause $j$, by a beneficial manipulation which results in the agents $l_{i}^{j}$ corresponding to literals in clause $j$ not being allocated their favorite item of type $i$. When agents report preferences truthfully and are either optimistic or pessimistic, it is easy to check that $f_{O}$ allocates items as follows: for types $i<s+1$, agent $0$ gets $[0]_{i}$ and every agent $l_{i}^{j}$ gets $\text{NEXT}_{i}^{j}$. Then, for type $s+1$, for any $l\in\\{0,1\\}$, every agent $l_{i}^{j}$ gets the item $[l_{i}^{j}]_{s+1}$. This in turn makes the items $[l_{i}^{j}]_{s+1}$, for every $i\leq s,j\leq t$, unavailable to the agents $c_{j}$. Then, $c_{1}$ gets $[0]_{s+1}$, and finally, $0$ gets $[c_{1}]_{s+1}$. Upon examining agent $0$’s preferences, it is easy to check that the only way for agent $0$ to improve upon this allocation is to receive a better item of type $s+1$, specifically, item $[0]_{s+1}$. $\Rightarrow$ Let $\phi$ be a satisfying assignment for instance $I$. Consider the manipulation where agent $0$ reports her top item of type $i$ to be $[0_{i}^{1}]_{i}$ if $\phi_{i}=0$, and $[1_{i}^{1}]_{i}$ if $\phi_{i}=1$. Now, suppose that every other agent reports preferences truthfully. Let us consider the case where for some $i\leq s$, $\phi_{i}=0$. It is easy to check that for type $i$, agents’ allocations are as follows: agent $0$ gets $[0_{i}^{1}]_{i}$, agents $0_{i}^{j}$ in the sequence $j=t,\dots,2$ get items $[0_{i}^{j}]_{i}$ respectively, and agent $0_{i}^{1}$ gets $[0]_{i}$, i.e.
none of the agents $0_{i}^{j}$ gets their corresponding top item $\text{NEXT}_{i}^{j}$. Now, for type $s+1$, each agent $0_{i}^{j}$ gets $[d_{i}^{j}]_{s+1}$ according to their true preferences, since they did not receive their item $\text{NEXT}_{i}^{j}$ of type $i$, leaving item $[0_{i}^{j}]_{s+1}$ available. Then, the agents $1_{i}^{j}$ get the items $[1_{i}^{j}]_{s+1}$, crucially, before the agents $c_{j}$ get to choose any item. Then, for every agent $c_{j}$, if $0_{i^{*}}^{j}$ is the literal with the lowest index $i^{*}$ such that $\phi_{i^{*}}$ corresponds to a satisfying assignment of clause $c_{j}$, the item $[0_{i^{*}}^{j}]_{s+1}$ must be available when $c_{j}$ gets her turn to pick an item, and she picks it. Moreover, since $\phi$ is a satisfying assignment, there is such an item for every $c_{j}$. This leaves $[0]_{s+1}$ available when it is agent $0$’s turn to pick an item. Thus, special agent $0$ prefers the resulting allocation to her allocation when she picked items truthfully, and the manipulation was beneficial, irrespective of whether agent $0$ is optimistic or pessimistic. $\Leftarrow$ Suppose agent $0$ has a beneficial manipulation. Then, as we have already established, agent $0$ must get item $[0]_{s+1}$ as a result of the manipulation. Now, the agents $c_{j}$ get their turn to pick an item before agent $0$ in the serial dictatorship for type $s+1$. Then, since they are truthful, each agent $c_{j}$ receives an item $[l_{i}^{j}]_{s+1}$ corresponding to a satisfying assignment; otherwise, one of them would get $[0]_{s+1}$, a contradiction. Let us construct an assignment $\phi$ as follows: if $c_{j}$ gets item $[0_{i^{*}}^{j}]_{s+1}$ in the final allocation, set $\phi_{i^{*}}=0$, and set $\phi_{i^{*}}=1$ otherwise. We will show that $\phi$ is a satisfying assignment for $I$. 
Since the agents $0_{i}^{j}$ and $1_{i}^{j}$ come before the agents $c_{j}$ in the serial dictatorship, and are also truthful, it must be that, for every item $[l_{i^{*}}^{j}]_{s+1}$ allocated to agent $c_{j}$ in the final allocation, the corresponding agent $l_{i^{*}}^{j}$ does not get $\text{NEXT}_{i^{*}}^{j}$ of type $i^{*}$. By the construction of the preferences over type $i^{*}$ and of the serial dictatorship for type $i^{*}$, it must be that special agent $0$ picks an item $[l_{i^{*}}^{k}]_{i^{*}}$, where either $k>j$ or $k=1$. It is easy to check from the construction that if this is not the case, agent $l_{i}^{j}$ can pick item $\text{NEXT}_{i}^{j}$ when every agent other than $0$ picks truthfully. Further, if agent $0$ picks some item $[0_{i^{*}}^{\hat{j}}]_{i^{*}}$, it is easy to check from the construction of the preferences that every agent other than $1_{i^{*}}^{j}$ gets their top item. Thus, none of the agents $c_{j}$ may receive the item $[1_{i^{*}}^{j}]_{s+1}$, since it must already have been picked by the agent $1_{i^{*}}^{j}$ in the serial dictatorship. Together with the fact that every agent $c_{j}$ receives an item that corresponds to a satisfying assignment, this shows that the $\phi$ constructed above is a satisfying assignment for instance $I$. ∎
# Normalized ground states for the critical fractional NLS equation with a perturbation Maoding Zhen${}^{a}$, Binlin Zhang${}^{b}$ (corresponding author). E-mail addresses: <EMAIL_ADDRESS>(M. Zhen)<EMAIL_ADDRESS>(B. Zhang). ${}^{a}\,$School of Mathematics, Hefei University of Technology, Hefei 230009, P.R. China ${}^{b}\,$College of Mathematics and Systems Science, Shandong University of Science and Technology, Qingdao 266590, P.R. China ###### Abstract In this paper, we study normalized ground states for the following critical fractional NLS equation with prescribed mass: $\begin{cases}(-\Delta)^{s}u=\lambda u+\mu|u|^{q-2}u+|u|^{2_{s}^{\ast}-2}u,&x\in\mathbb{R}^{N},\\\ \int_{\mathbb{R}^{N}}u^{2}dx=a^{2},\\\ \end{cases}$ where $(-\Delta)^{s}$ is the fractional Laplacian, $0<s<1$, $N>2s$, $2<q<2_{s}^{\ast}$, with $2_{s}^{\ast}=2N/(N-2s)$ the fractional critical Sobolev exponent, $a>0$, and $\mu\in\mathbb{R}$. By using Jeanjean’s trick in [28] and the standard method from [9] to overcome the lack of compactness, we first prove several existence and nonexistence results for an $L^{2}$-subcritical (or $L^{2}$-critical or $L^{2}$-supercritical) perturbation $\mu|u|^{q-2}u$, and then we give some results about the behavior of the ground states obtained above as $\mu\rightarrow 0^{+}$. Our results extend and improve the existing ones in several directions. _Keywords:_ Fractional Laplacian; Critical exponent; Normalized ground state solution. _2010 Mathematics Subject Classification:_ 35J20, 35B33, 58E05. ## 1 Introduction and main results In this paper, we consider the following critical nonlinear Schrödinger equation involving the fractional Laplacian: $\displaystyle(-\Delta)^{s}u=\lambda u+\mu|u|^{q-2}u+|u|^{2_{s}^{\ast}-2}u$ $\displaystyle\ \ x\in\mathbb{R}^{N},$ (1.1) with the prescribed mass $\displaystyle\int_{\mathbb{R}^{N}}u^{2}dx=a^{2},$ (1.2) where $(-\Delta)^{s}$ is the fractional Laplacian, $0<s<1$, and $2<q<2_{s}^{\ast}$, with $2_{s}^{\ast}=2N/(N-2s)$ the fractional critical Sobolev exponent. 
The fractional Laplacian $(-\Delta)^{s}$ is defined by $(-\Delta)^{s}u(x)=C(N,s){\rm P.V.}\int_{\mathbb{R}^{N}}\frac{u(x)-u(y)}{|x-y|^{N+2s}}dy,\ \ x\in\mathbb{R}^{N}$ for $u\in C^{\infty}_{0}(\mathbb{R}^{N})$, where $C(N,s)$ is a suitable positive normalizing constant and P.V. denotes the Cauchy principal value. We refer to [16, 30, 17, 20, 37, 7] for a simple introduction to the basic properties of the fractional Laplace operator and concrete applications based on variational methods. Our main driving force for the study of (1.1) arises from the study of the following time-dependent fractional Schrödinger equation with combined power nonlinearities: $\displaystyle i\psi_{t}-(-\Delta)^{s}\psi+\mu|\psi|^{q-2}\psi+|\psi|^{2_{s}^{\ast}-2}\psi=0\ \ \mbox{in}\ \mathbb{R}^{N}.$ (1.3) When searching for stationary waves of the form $\psi(t,x)=e^{-i\lambda t}u(x)$, where $\lambda\in\mathbb{R}$ is the chemical potential and $u(x):\mathbb{R}^{N}\rightarrow\mathbb{C}$ is a time-independent function, one is led to studying (1.1). In this case, particular attention is paid to ground state solutions, i.e., solutions minimizing the energy functional among all non-trivial solutions. An alternative choice is to look for solutions to (1.1) having prescribed mass, and in this case $\lambda\in\mathbb{R}$ is part of the unknown. This approach seems particularly meaningful from the physical point of view, since, in addition to being a conserved quantity for the time-dependent equation (1.3), the mass often has an evident physical meaning; for example, it indicates the power supply in nonlinear optics, or the total number of atoms in Bose-Einstein condensation. Moreover, this approach gives better insight into the properties of the stationary solutions for (1.1), for example, stability or instability; see [32] for more details. 
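The reduction from (1.3) to (1.1) is a one-line computation; for convenience, we spell it out here (a routine check, not given in the text). With $\psi(t,x)=e^{-i\lambda t}u(x)$,

```latex
% Substituting \psi(t,x) = e^{-i\lambda t}u(x) into (1.3):
i\psi_{t}=i(-i\lambda)e^{-i\lambda t}u=\lambda e^{-i\lambda t}u,
\qquad
(-\Delta)^{s}\psi=e^{-i\lambda t}(-\Delta)^{s}u,
\qquad
|\psi|^{r-2}\psi=e^{-i\lambda t}|u|^{r-2}u .
% Cancelling the common factor e^{-i\lambda t} in (1.3) gives
\lambda u-(-\Delta)^{s}u+\mu|u|^{q-2}u+|u|^{2_{s}^{\ast}-2}u=0
\;\Longleftrightarrow\;
(-\Delta)^{s}u=\lambda u+\mu|u|^{q-2}u+|u|^{2_{s}^{\ast}-2}u ,
```

which is exactly equation (1.1).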
The existence of normalized stationary states can be formulated as follows: given $a>0$, $\mu\in\mathbb{R}$ and $2<q<2_{s}^{*}$, our aim is to find $(\lambda,u)\in\mathbb{R}\times H^{s}(\mathbb{R}^{N},\mathbb{C})$ such that (1.1) and (1.2) hold. For the Laplacian case, i.e., $s=1$ in (1.1), we would like to mention a seminal paper by Jeanjean [28], which dealt with the existence of normalized solutions when the energy functional is unbounded from below on the $L^{2}$ constraint. In fact, normalized solutions for nonlinear Schrödinger equations or systems have attracted much attention in recent years, both for their interesting theoretical structure and for their concrete applications (see [2, 3, 4, 32, 33] and references therein). Since $\lambda$ is part of the unknown, the Nehari manifold method is not available in the framework of normalized solutions. Meanwhile, the appearance of the $L^{2}$ constraint makes some classical methods, used to prove the boundedness of any Palais-Smale sequence for the unconstrained problem, difficult to implement. It is well known that a new $L^{2}$-critical exponent $\widetilde{p}=2+4/N$ plays a special role. Indeed, if the problem is $L^{2}$-subcritical, i.e., $2<q<p<\widetilde{p}$, the energy functional $E_{\mu}$ (defined in (1.4)) is bounded from below on the constraint $\overline{S}_{a}=\\{u\in H^{1}(\mathbb{R}^{N},\mathbb{C}):\int_{\mathbb{R}^{N}}u^{2}dx=a^{2}\\}$, so ground state solutions can be found as global minimizers of $E_{\mu}|_{\overline{S}_{a}}$. Moreover, if the problem is $L^{2}$-supercritical, i.e., $\widetilde{p}<q<p<2^{\ast}=2N/(N-2)$, then the energy functional $E_{\mu}$ is unbounded both from above and from below on $\overline{S}_{a}$. 
In this case, the ideas introduced by Jeanjean in [28] can be employed to obtain the existence of normalized solutions for any $a,\mu>0.$ Compared to the semilinear case, which corresponds to the Laplace operator, fractional Laplacian problems are nonlocal and more challenging. For fractional Laplacian equations or systems with fixed $\lambda_{i}$, the existence and non-degeneracy of solutions have been studied by many researchers, and there are many results on the existence, nonexistence and multiplicity of solutions for fractional Laplacian equations; since it seems almost impossible for us to provide a complete list of references, we just refer the reader to [10, 11, 6, 12, 14, 15, 25, 26, 27, 40, 41, 39, 23, 24] and references therein. Recently, Soave in [32, 33] first investigated the existence and properties of ground states for the nonlinear Schrödinger equation with combined power-type nonlinearities and also gave new criteria for global existence and finite-time blow-up in the associated dispersive equation. More precisely, Soave in [32] considered normalized solutions in the Sobolev subcritical case and gave a complete classification of the existence and nonexistence of normalized solutions in the $L^{2}$-subcritical, $L^{2}$-critical and $L^{2}$-supercritical regimes. For the critical case, the problem is also interesting and challenging. By focusing on the leading nonlinearity and analysing how the introduction of the lower-order term modifies the structure of the energy functional, Soave in [33] obtained the existence and nonexistence of normalized solutions in the $L^{2}$-subcritical, $L^{2}$-critical and $L^{2}$-supercritical regimes in the Sobolev critical case. Due to the lack of compactness of the Sobolev embedding $H^{1}(\mathbb{R}^{N})\hookrightarrow L^{2^{\ast}}(\mathbb{R}^{N})$, the problem is more complicated; however, the difficulty was overcome ingeniously by combining some ideas from [9] and [28]. 
Inspired by the above-mentioned works, especially by [32, 33], in the present paper our goal is two-fold. One is to show the existence and nonexistence of normalized ground states for fractional elliptic equations with critical exponent. Another is to give some results about the behavior of the ground state solutions obtained above as $\mu\rightarrow 0^{+}$. The method we use is Jeanjean’s method [28] combined with a Pohozaev manifold argument. By using test functions as in [31], we show that the least energy of the equation lies below the critical energy $\frac{s}{N}S^{N/(2s)}_{s}$ under suitable conditions on $N,s,p,\lambda$, which ensure that the Palais-Smale condition is satisfied. The main difficulty is to prove the convergence of the constrained Palais-Smale sequence. Indeed, if we find a bounded Palais-Smale sequence, then by the compactness of the embedding $H^{s}_{rad}(\mathbb{R}^{N})\hookrightarrow L^{p}(\mathbb{R}^{N}),\ 2<p<2_{s}^{\ast}$, we obtain a strongly convergent subsequence in $L^{p}(\mathbb{R}^{N})$, but we cannot deduce strong convergence in $L^{2}(\mathbb{R}^{N}).$ Hence we require new arguments to overcome the lack of compactness of the embedding $H^{s}_{rad}(\mathbb{R}^{N})\hookrightarrow L^{2}(\mathbb{R}^{N}).$ To this end, we adopt some ideas of [19] to obtain a Liouville-type result. Before we state our main results, we first introduce some notation. 
Let ${H^{s}(\mathbb{R}^{N})}$ be the Hilbert space of functions on $\mathbb{R}^{N}$ endowed with the standard inner product and norm $\langle u,v\rangle=\int_{\mathbb{R}^{N}}\left((-\Delta)^{\frac{s}{2}}u(-\Delta)^{\frac{s}{2}}v+uv\right)dx,\ \ \|u\|_{H^{s}(\mathbb{R}^{N})}^{2}=\langle u,u\rangle.$ Let $D_{s}(\mathbb{R}^{N})$ be the Hilbert space defined as the completion of $C_{c}^{\infty}(\mathbb{R}^{N})$ with the inner product $\langle u,v\rangle_{D_{s}(\mathbb{R}^{N})}=\frac{C(N,s)}{2}\iint_{\mathbb{R}^{2N}}\frac{(u(x)-u(y))(v(x)-v(y))}{|y-x|^{N+2s}}dxdy$ and norm $\|u\|^{2}_{D_{s}(\mathbb{R}^{N})}=\int_{\mathbb{R}^{N}}|(-\Delta)^{\frac{s}{2}}u|^{2}dx=\frac{C(N,s)}{2}\iint_{\mathbb{R}^{2N}}\frac{|u(x)-u(y)|^{2}}{|y-x|^{N+2s}}dxdy.$ The energy functional associated with (1.1) and the constraint are given by $\displaystyle E_{\mu}(u)$ $\displaystyle=\frac{1}{2}||u||^{2}_{D_{s}(\mathbb{R}^{N})}-\frac{\mu}{q}\int_{\mathbb{R}^{N}}|u|^{q}dx-\frac{1}{2_{s}^{\ast}}\int_{\mathbb{R}^{N}}|u|^{2_{s}^{\ast}}dx$ (1.4) and $S_{a}=\left\\{u\in H^{s}(\mathbb{R}^{N},\mathbb{C}):\int_{\mathbb{R}^{N}}u^{2}dx=a^{2}\right\\}.$ Let $S_{s}$ be the sharp embedding constant of ${D_{s}(\mathbb{R}^{N})}\hookrightarrow L^{2_{s}^{\ast}}(\mathbb{R}^{N}),$ $\displaystyle S_{s}=\inf\limits_{u\in{D_{s}(\mathbb{R}^{N})}\setminus\\{0\\}}\frac{\|u\|^{2}_{D_{s}(\mathbb{R}^{N})}}{(\int_{\mathbb{R}^{N}}|u|^{2_{s}^{\ast}}dx)^{\frac{2}{2_{s}^{\ast}}}}.$ (1.5) From [15] we know that $S_{s}$ is attained in $\mathbb{R}^{N}$ by $U_{\epsilon,x_{0}}(x)=\kappa(\epsilon^{2}+|x-x_{0}|^{2})^{-\frac{N-2s}{2}}$ (1.6) where $\kappa\in\mathbb{R}\setminus\\{0\\}$ and $\epsilon>0$ are fixed constants and $x_{0}\in\mathbb{R}^{N}$. 
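As a side check (ours, not in the text) that clarifies a restriction appearing later: the Aubin-Talenti extremal (1.6) decays like $|x-x_{0}|^{-(N-2s)}$ at infinity, so its $L^{2}$ norm is finite exactly when $N>4s$:

```latex
% U_{\epsilon,x_0}(x) = \kappa(\epsilon^2+|x-x_0|^2)^{-(N-2s)/2}
%                     \sim \kappa\,|x-x_0|^{-(N-2s)} \ \text{as}\ |x-x_0|\to\infty,
% so, in polar coordinates r = |x-x_0|,
\int_{|x-x_{0}|\geq 1}|U_{\epsilon,x_{0}}|^{2}\,dx
  \simeq \int_{1}^{\infty} r^{-2(N-2s)}\,r^{N-1}\,dr<\infty
  \;\Longleftrightarrow\; 2(N-2s)-(N-1)>1
  \;\Longleftrightarrow\; N>4s .
```

This is why a normalization $||U_{\epsilon,0}||_{L^{2}(\mathbb{R}^{N})}=a$ is only possible when $N>4s$.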
To present our main results, put $\displaystyle\gamma_{q,s}=\frac{N(q-2)}{2qs},$ $\displaystyle C^{\prime}=\frac{q(2_{s}^{\ast}-2)}{2C^{q}_{N,q,s}(2_{s}^{\ast}-q\gamma_{q,s})}\left(\frac{(2-q\gamma_{q,s})2_{s}^{\ast}}{2(2_{s}^{\ast}-q\gamma_{q,s})}S^{\frac{2_{s}^{\ast}}{2}}_{s}\right)^{\frac{2-q\gamma_{q,s}}{2_{s}^{\ast}-2}},$ (1.7) $\displaystyle C^{\prime\prime}=\frac{2\cdot 2_{s}^{\ast}}{(2_{s}^{\ast}-q\gamma_{q,s})C^{q}_{N,q,s}}\left(\frac{Nq\gamma^{2}_{q,s}S^{\frac{N}{2s}}_{s}}{(2-q\gamma_{q,s})s}\right)^{\frac{2-q\gamma_{q,s}}{2}}.$ (1.8) ###### Theorem 1.1. Let $N>2s,\ a,\mu>0$ and $2<q<\overline{p}:=2+4s/N$, and set $\alpha=\alpha(N,q,s):=\min\\{C^{\prime},C^{\prime\prime}\\}>0$. If $\displaystyle\mu a^{q(1-\gamma_{q,s})}<\alpha,$ (1.9) then $E_{\mu}|_{S_{a}}$ has a ground state $\widetilde{u}$ with the following properties: $\widetilde{u}$ is a positive, radially symmetric function and solves (1.1)–(1.2) for some $\widetilde{\lambda}<0.$ Moreover, $m(a,\mu)<0$ and $\widetilde{u}$ is an interior local minimizer of $E_{\mu}(u)$ on the set $A_{k}=\\{u\in S_{a}:||u||_{D_{s}(\mathbb{R}^{N})}<k\\},$ for suitable $k$ small enough. Any other ground state solution of $E_{\mu}$ on $S_{a}$ is a local minimizer of $E_{\mu}$ on $A_{k}$. ###### Theorem 1.2. Let $N>2s,\ a,\mu>0$ and $q=\overline{p}$. If $\displaystyle\mu a^{\frac{4s}{N}}<\overline{p}\left(2C^{\overline{p}}_{N,\overline{p},s}\right)^{-1},$ (1.10) then $E_{\mu}|_{S_{a}}$ has a ground state $\widetilde{u}$ with the following properties: $\widetilde{u}$ is a positive, radially symmetric function and solves (1.1)–(1.2) for some $\widetilde{\lambda}<0.$ Moreover, $0<m(a,\mu)<\frac{s}{N}S^{N/(2s)}_{s},$ and $\widetilde{u}$ is a critical point of Mountain Pass type. ###### Theorem 1.3. Let $N>2s,\ a,\mu>0$ and $\overline{p}<q<2_{s}^{\ast}$. If one of the following conditions holds: 1. 
$(1)$ $N>4s\ \text{and}\ \mu a^{q(1-\gamma_{q,s})}<\frac{S^{\frac{N}{4s}q(1-\gamma_{q,s})}_{s}}{\gamma_{q,s}},$ 2. $(2)$ $N=\frac{q}{q-1}2s\ \text{and}\ \mu a^{q(1-\gamma_{q,s})}<\frac{S^{\frac{N}{4s}q(1-\gamma_{q,s})}_{s}}{\gamma_{q,s}},$ 3. $(3)$ $N=4s\ \text{or}\ \frac{q}{q-1}2s<N<4s\ \text{or}\ 2s<N<\frac{q}{q-1}2s,$ then $E_{\mu}|_{S_{a}}$ has a ground state $\widetilde{u}$ with the following properties: $\widetilde{u}$ is a positive, radially symmetric function and solves (1.1)–(1.2) for some $\widetilde{\lambda}<0.$ Moreover, $0<m(a,\mu)<\frac{s}{N}S^{N/(2s)}_{s},$ and $\widetilde{u}$ is a critical point of Mountain Pass type. ###### Theorem 1.4. Let $a>0$ and $\mu=0$. Then we have the following conclusions: 1. $(1)$ If $N>4s$, then $E_{0}$ on $S_{a}$ has a unique positive radial ground state $U_{\epsilon,0}$ defined in (1.6) for the unique choice of $\epsilon>0$ which gives $||U_{\epsilon,0}||_{L^{2}(\mathbb{R}^{N})}=a.$ 2. $(2)$ If $2s<N\leq 4s$, then (1.1) has no positive solutions in $S_{a}$ for any $\lambda\in\mathbb{R}$. ###### Theorem 1.5. Let $u_{\mu}$ be the corresponding positive ground state solution obtained in Theorems 1.1–1.3 with energy level $m(a,\mu)$. Then the following conclusions hold: 1. $1)$ If $2<q<\overline{p}$, then $m(a,\mu)\rightarrow 0$, and $\|u_{\mu}\|^{2}_{D_{s}(\mathbb{R}^{N})}\rightarrow 0$ as $\mu\rightarrow 0^{+}$. 2. $2)$ If $\overline{p}\leq q<2_{s}^{\ast}$, then $m(a,\mu)\rightarrow\frac{s}{N}S^{\frac{N}{2s}}_{s}$, and $\|u_{\mu}\|^{2}_{D_{s}(\mathbb{R}^{N})}\rightarrow S^{\frac{N}{2s}}_{s}$ as $\mu\rightarrow 0^{+}$. ###### Remark 1.1. The assumptions (1.7), (1.8) and (1.10) are used to describe the geometry of $E_{\mu}$. Meanwhile, the assumptions in Theorem 1.3 are applied to overcome the lack of compactness. ###### Remark 1.2. 
We should point out that Luo and Zhang in [29] considered the subcritical fractional equation with combined nonlinearities and proved the existence and nonexistence of normalized solutions; in this paper, however, we consider the existence and nonexistence of normalized solutions for the critical fractional equation with combined nonlinearities. Compared with the subcritical case, the critical case is more complicated and requires overcoming the lack of compactness. In this paper, we invoke some ideas proposed by Soave in [32, 33]. Compared to Laplacian problems, fractional Laplacian problems are nonlocal and more challenging. Indeed, when we consider the fractional Laplacian problem, the corresponding algebraic equation is of fractional order, which is more complicated to deal with than an integer-order algebraic equation. Moreover, one of the main difficulties is to analyze the convergence of the constrained Palais-Smale sequence. To overcome the lack of compactness, we employ delicate methods which can be found in [9], that is, a cut-off technique and energy estimates. To show that the least energy is strictly less than “the threshold energy”, our analysis is more difficult and complicated. Indeed, when we deal with the $L^{2}$-critical and $L^{2}$-supercritical cases, we need to give an exact classification of the dimensions $N$, which depends on $s$ and $q$, and give exact estimates of the energy functional in each case. In particular, for $2s<N<4s$ and $\mu=0$, to show Theorem 1.4 (2), we need to prove that all solutions of the equation $(-\Delta)^{s}v=v^{2_{s}^{\ast}-1},\ v\geq 0\ \text{in}\ \mathbb{R}^{N},$ must be $\alpha U_{\epsilon,0}$ for some $\alpha,\epsilon>0$. Finally, let us sketch the proof of the above theorems. In general, this study can be considered as a counterpart of the fractional Brézis-Nirenberg problem in the context of normalized solutions. 
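The regimes in Theorems 1.1–1.3 are governed by the size of $q\gamma_{q,s}$ relative to $2$. A quick numerical sanity check of this trichotomy (the sample values $N=3$, $s=1/2$ are our illustrative choices, not taken from the paper):

```python
# Sanity check of p * gamma_{p,s} vs 2, with gamma_{p,s} = N(p-2)/(2ps).
# Sample values N = 3, s = 1/2 are illustrative assumptions, not from the paper.
N, s = 3, 0.5
pbar = 2 + 4 * s / N            # L^2-critical exponent: 8/3
p_star = 2 * N / (N - 2 * s)    # fractional critical Sobolev exponent 2_s^*: 3

def gamma(p):
    return N * (p - 2) / (2 * p * s)

print(gamma(2.2) * 2.2)     # < 2  (L^2-subcritical)
print(gamma(pbar) * pbar)   # = 2  (L^2-critical, up to rounding)
print(gamma(2.9) * 2.9)     # > 2  (L^2-supercritical)
print(gamma(p_star))        # = 1, i.e. gamma at 2_s^* equals 1
```

The three signs match the three cases of the classification, and the last value reflects the identity $\gamma_{2_{s}^{\ast},s}=1$.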
To overcome the lack of compactness, which is a crucial step in the critical case, we show that the least energy is strictly less than “the threshold energy”; to this end, we employ delicate methods which can be found in [9], namely a cut-off technique and energy estimates. The convergence of the Palais-Smale sequence (see Proposition 2.2) is one of the most delicate ingredients in the proofs of our main results. We introduce a fiber map $\Psi^{\mu}_{u}(t)$ (see (2.9)); it is well known that any critical point of $E_{\mu}|_{S_{a}}$ stays in $\mathcal{P}_{a,\mu}$ (see (2.5)), and the monotonicity and convexity properties of $\Psi^{\mu}_{u}(t)$ strongly affect the structure of $\mathcal{P}_{a,\mu}$. It is easy to see that $(\Psi^{\mu}_{u})^{\prime}(t)=P_{\mu}(t\star u)$, so that $t$ is a critical point of $\Psi^{\mu}_{u}(t)$ if and only if $t\star u\in\mathcal{P}_{a,\mu}$; in particular, $u\in\mathcal{P}_{a,\mu}$ if and only if $0$ is a critical point of $\Psi^{\mu}_{u}(t)$. In this spirit, we split $\mathcal{P}_{a,\mu}$ into three parts, and then prove that $\mathcal{P}^{0}_{a,\mu}=\emptyset$ and that $\mathcal{P}_{a,\mu}$ is a smooth manifold of codimension 1 in $S_{a}$ under suitable conditions. In the $L^{2}$-subcritical case, we restrict the energy functional $E_{\mu}$ to $\mathcal{P}_{a,\mu}$ and prove that $E_{\mu}|_{\mathcal{P}_{a,\mu}}$ is bounded from below, so a local minimizer $\widetilde{u}$ of $E_{\mu}$ on $\mathcal{P}_{a,\mu}$ can be obtained. In the $L^{2}$-critical and $L^{2}$-supercritical cases, we construct different linking structures to obtain the Mountain Pass type solutions. The paper is organized as follows. In Section 2, we introduce some preliminaries that will be used to prove Theorems 1.1–1.3. In Section 3, we give some lemmas for the $L^{2}$-subcritical perturbation. In Section 4, we give some preliminaries for the $L^{2}$-critical perturbation. In Section 5, we give some lemmas for the $L^{2}$-supercritical perturbation. In Section 6, we prove Theorem 1.1. 
In Section 7, we prove Theorems 1.2–1.3. In Section 8, we prove Theorem 1.4. Finally, the proof of Theorem 1.5 will be given in Section 9. ## 2 Preliminaries Recall that $S_{s}$ is the sharp embedding constant of ${D_{s}(\mathbb{R}^{N})}\hookrightarrow L^{2_{s}^{\ast}}(\mathbb{R}^{N}),$ $\displaystyle S_{s}=\inf\limits_{u\in{D_{s}(\mathbb{R}^{N})}\setminus\\{0\\}}\frac{\|u\|^{2}_{D_{s}(\mathbb{R}^{N})}}{(\int_{\mathbb{R}^{N}}|u|^{2_{s}^{\ast}}dx)^{\frac{2}{2_{s}^{\ast}}}}.$ (2.1) From [15], $S_{s}$ is attained in $\mathbb{R}^{N}$ by $\widetilde{u}(x)=\kappa(\epsilon^{2}+|x-x_{0}|^{2})^{-\frac{N-2s}{2}}$, where $\kappa\in\mathbb{R}\setminus\\{0\\}$ and $\epsilon>0$ are fixed constants and $x_{0}\in\mathbb{R}^{N}$. It is useful to introduce the fractional Gagliardo-Nirenberg-Sobolev inequality (see [24]) $\int_{\mathbb{R}^{N}}|u|^{p}dx\leq C_{N,p,s}\left(\int_{\mathbb{R}^{N}}|(-\Delta)^{\frac{s}{2}}u|^{2}dx\right)^{\frac{N(p-2)}{4s}}\left(\int_{\mathbb{R}^{N}}|u|^{2}dx\right)^{\frac{p}{2}-\frac{N(p-2)}{4s}}\ \ \text{for all}\ \ u\in H^{s}(\mathbb{R}^{N}).$ (2.2) Define $\displaystyle\gamma_{p,s}=\frac{N(p-2)}{2ps};$ it is easy to see that $p\gamma_{p,s}\left\\{\begin{array}[]{ll}<{\displaystyle 2},&\text{if}\ 2<p<\overline{p},\\\ ={\displaystyle 2},&\text{if}\ p=\overline{p},\\\ >{\displaystyle 2},&\text{if}\ \overline{p}<p<2_{s}^{\ast},\end{array}\right.\text{and that}\ \ \gamma_{2_{s}^{\ast},s}=1,$ (2.3) and $\displaystyle\|u\|_{L^{p}}\leq C_{N,p,s}\|(-\Delta)^{\frac{s}{2}}u\|^{\gamma_{p,s}}_{L^{2}}\|u\|^{1-\gamma_{p,s}}_{L^{2}}\ \text{for all}\ \ u\in H^{s}(\mathbb{R}^{N}).$ (2.4) We first give the following key Pohozaev identity for the fractional Laplace operator. ###### Proposition 2.1 (Theorem A.1 in [8]). 
Let $u\in H^{s}(\mathbb{R}^{N})\cap L^{\infty}(\mathbb{R}^{N})$ be a positive solution of $(-\Delta)^{s}u=f(u)$ with $F(u)\in L^{1}(\mathbb{R}^{N})$. Then it holds that $\frac{N-2s}{2}\int_{\mathbb{R}^{N}}|(-\Delta)^{\frac{s}{2}}u|^{2}dx=N\int_{\mathbb{R}^{N}}F(u)dx,$ where $F(u)=\int^{u}_{0}f(t)dt$. ###### Remark 2.1. Since $u\in H^{s}(\mathbb{R}^{N})$, by the fractional Sobolev embedding theorem (see [34, Theorem 2.2]), it is easy to see that $u\in L^{p}(\mathbb{R}^{N}),\ p\in[2,2_{s}^{\ast}],$ which implies that $F(u)=\frac{\lambda}{2}u^{2}+\frac{\mu}{q}|u|^{q}+\frac{1}{2_{s}^{\ast}}|u|^{2_{s}^{\ast}}\in L^{1}(\mathbb{R}^{N})$; hence we can modify the proof of Proposition 5.1 in [5] to obtain that $u\in L^{\infty}(B(0,\frac{r}{2}))$. Using the same arguments for a neighborhood of any $x\in\mathbb{R}^{N}$, we get $u\in L_{loc}^{\infty}(\mathbb{R}^{N})$. Thus we can use similar arguments as in the proof of Theorem 3.4 in [18] to obtain that $u\in L^{\infty}(\mathbb{R}^{N})$, which implies that the above Pohozaev identity can be applied to our equation. In fact, a similar method to prove $u\in L^{\infty}(\mathbb{R}^{N})$ can also be used to prove Proposition 4.1 in [13]. ###### Lemma 2.1. Let $u\in H^{s}(\mathbb{R}^{N})$ be a solution of (1.1). Then $P_{\mu}(u)=0$; that is, $u$ belongs to the Pohozaev manifold $\mathcal{P}_{a,\mu}=\\{u\in S_{a}:P_{\mu}(u)=0\\},$ (2.5) where $P_{\mu}(u)=s||u||^{2}_{D_{s}(\mathbb{R}^{N})}-\mu\gamma_{q,s}s\int_{\mathbb{R}^{N}}|u|^{q}dx-s\int_{\mathbb{R}^{N}}|u|^{2_{s}^{\ast}}dx.$ ###### Proof. 
From Proposition 2.1, we have $\frac{N-2s}{2}||u||^{2}_{D_{s}(\mathbb{R}^{N})}=\lambda\frac{N}{2}\int_{\mathbb{R}^{N}}u^{2}dx+\frac{N\mu}{q}\int_{\mathbb{R}^{N}}|u|^{q}dx+\frac{N}{2_{s}^{\ast}}\int_{\mathbb{R}^{N}}|u|^{2_{s}^{\ast}}dx.$ (2.6) Since $u$ is a solution of (1.1), we have $||u||^{2}_{D_{s}(\mathbb{R}^{N})}=\lambda\int_{\mathbb{R}^{N}}u^{2}dx+\mu\int_{\mathbb{R}^{N}}|u|^{q}dx+\int_{\mathbb{R}^{N}}|u|^{2_{s}^{\ast}}dx.$ (2.7) Combining (2.6) with (2.7), we obtain $s||u||^{2}_{D_{s}(\mathbb{R}^{N})}=\mu\gamma_{q,s}s\int_{\mathbb{R}^{N}}|u|^{q}dx+s\int_{\mathbb{R}^{N}}|u|^{2_{s}^{\ast}}dx,$ as desired. ∎ Define $(t\star u)(x)=e^{\frac{Nt}{2}}u(e^{t}x)\ \text{for a.e. }x\in\mathbb{R}^{N};$ (2.8) it is easy to see that $t\star u\in S_{a}.$ We define the fiber map as follows: $\Psi^{\mu}_{u}(t)=E_{\mu}(t\star u)=\frac{e^{2st}}{2}||u||^{2}_{D_{s}(\mathbb{R}^{N})}-\mu\frac{e^{q\gamma_{q,s}st}}{q}\int_{\mathbb{R}^{N}}|u|^{q}dx-\frac{e^{2_{s}^{\ast}st}}{2_{s}^{\ast}}\int_{\mathbb{R}^{N}}|u|^{2_{s}^{\ast}}dx.$ (2.9) It is easy to see that $(\Psi^{\mu}_{u})^{\prime}(t)=P_{\mu}(t\star u)$, so that $t$ is a critical point of $\Psi^{\mu}_{u}(t)$ if and only if $t\star u\in\mathcal{P}_{a,\mu}$; in particular, $u\in\mathcal{P}_{a,\mu}$ if and only if $0$ is a critical point of $\Psi^{\mu}_{u}(t)$. We split $\mathcal{P}_{a,\mu}$ into three parts. 
$\displaystyle\mathcal{P}^{+}_{a,\mu}$ $\displaystyle=\bigg{\\{}u\in\mathcal{P}_{a,\mu}\mid(\Psi^{\mu}_{u})^{\prime\prime}(0)>0\bigg{\\}}$ $\displaystyle=\bigg{\\{}u\in\mathcal{P}_{a,\mu}\mid 2s^{2}||u||^{2}_{D_{s}(\mathbb{R}^{N})}>\mu q\gamma^{2}_{q,s}s^{2}\int_{\mathbb{R}^{N}}|u|^{q}dx+2_{s}^{\ast}s^{2}\int_{\mathbb{R}^{N}}|u|^{2_{s}^{\ast}}dx\bigg{\\}},$ $\displaystyle\mathcal{P}^{0}_{a,\mu}$ $\displaystyle=\bigg{\\{}u\in\mathcal{P}_{a,\mu}\mid(\Psi^{\mu}_{u})^{\prime\prime}(0)=0\bigg{\\}}$ $\displaystyle=\bigg{\\{}u\in\mathcal{P}_{a,\mu}\mid 2s^{2}||u||^{2}_{D_{s}(\mathbb{R}^{N})}=\mu q\gamma^{2}_{q,s}s^{2}\int_{\mathbb{R}^{N}}|u|^{q}dx+2_{s}^{\ast}s^{2}\int_{\mathbb{R}^{N}}|u|^{2_{s}^{\ast}}dx\bigg{\\}},$ $\displaystyle\mathcal{P}^{-}_{a,\mu}$ $\displaystyle=\bigg{\\{}u\in\mathcal{P}_{a,\mu}\mid(\Psi^{\mu}_{u})^{\prime\prime}(0)<0\bigg{\\}}$ (2.10) $\displaystyle=\bigg{\\{}u\in\mathcal{P}_{a,\mu}\mid 2s^{2}||u||^{2}_{D_{s}(\mathbb{R}^{N})}<\mu q\gamma^{2}_{q,s}s^{2}\int_{\mathbb{R}^{N}}|u|^{q}dx+2_{s}^{\ast}s^{2}\int_{\mathbb{R}^{N}}|u|^{2_{s}^{\ast}}dx\bigg{\\}}.$ It is easy to see that $\mathcal{P}_{a,\mu}=\mathcal{P}^{+}_{a,\mu}\cup\mathcal{P}^{0}_{a,\mu}\cup\mathcal{P}^{-}_{a,\mu}.$ ###### Lemma 2.2. Let $N>2s,\ 2<q<2_{s}^{\ast}$ and $a,\mu>0$. Let $\\{u_{n}\\}\subset S_{a,r}=S_{a}\cap H^{s}_{rad}(\mathbb{R}^{N})$ be a Palais-Smale sequence for $E_{\mu}|_{S_{a}}$ at level $m(a,\mu)$. Then $\\{u_{n}\\}$ is bounded in $H^{s}(\mathbb{R}^{N})$. ###### Proof. Case 1: $q<\overline{p}$. This yields that $\gamma_{q,s}q<2$. 
Since $P_{\mu}(u_{n})\rightarrow 0$, we have $s||u_{n}||^{2}_{D_{s}(\mathbb{R}^{N})}-\mu\gamma_{q,s}s\int_{\mathbb{R}^{N}}|u_{n}|^{q}dx-s\int_{\mathbb{R}^{N}}|u_{n}|^{2_{s}^{\ast}}dx=o_{n}(1).$ Thus, by the fractional Gagliardo-Nirenberg-Sobolev inequality (2.4), we have $\displaystyle E_{\mu}(u_{n})$ $\displaystyle=\frac{s}{N}||u_{n}||^{2}_{D_{s}(\mathbb{R}^{N})}-\frac{\mu}{q}\left(1-\frac{q\gamma_{q,s}}{2_{s}^{\ast}}\right)\int_{\mathbb{R}^{N}}|u_{n}|^{q}dx+o_{n}(1)$ $\displaystyle\geq\frac{s}{N}||u_{n}||^{2}_{D_{s}(\mathbb{R}^{N})}-\frac{\mu}{q}\left(1-\frac{q\gamma_{q,s}}{2_{s}^{\ast}}\right)C^{q}_{N,q,s}||u_{n}||^{q\gamma_{q,s}}_{D_{s}(\mathbb{R}^{N})}a^{q(1-\gamma_{q,s})}.$ Since $\\{u_{n}\\}$ is a Palais-Smale sequence for $E_{\mu}|_{S_{a}}$ at level $m(a,\mu)$, we have $E_{\mu}(u_{n})\leq m(a,\mu)+1$ for $n$ large. Hence $\frac{s}{N}||u_{n}||^{2}_{D_{s}(\mathbb{R}^{N})}\leq\frac{\mu}{q}\left(1-\frac{q\gamma_{q,s}}{2_{s}^{\ast}}\right)C^{q}_{N,q,s}||u_{n}||^{q\gamma_{q,s}}_{D_{s}(\mathbb{R}^{N})}a^{q(1-\gamma_{q,s})}+m(a,\mu)+2,$ which, since $q\gamma_{q,s}<2$, implies that $\\{u_{n}\\}$ is bounded in $H^{s}(\mathbb{R}^{N})$. Case 2: $q=\overline{p}$. Then $\gamma_{\overline{p},s}\overline{p}=2$. 
Since $P_{\mu}(u_{n})\rightarrow 0$, we know $\displaystyle||u_{n}||^{2}_{D_{s}(\mathbb{R}^{N})}-\mu\gamma_{\overline{p},s}\int_{\mathbb{R}^{N}}|u_{n}|^{\overline{p}}dx-\int_{\mathbb{R}^{N}}|u_{n}|^{2_{s}^{\ast}}dx=o_{n}(1).$ (2.11) Thus, $E_{\mu}(u_{n})=\frac{s}{N}\int_{\mathbb{R}^{N}}|u_{n}|^{2_{s}^{\ast}}dx+o_{n}(1)\leq m(a,\mu)+1\Rightarrow\int_{\mathbb{R}^{N}}|u_{n}|^{2_{s}^{\ast}}dx\leq C.$ Since $q\in(2,2_{s}^{\ast})$, we have $q=2\alpha+(1-\alpha)2_{s}^{\ast}$ for a suitable $\alpha\in(0,1)$, so by Hölder’s inequality, we have $\int_{\mathbb{R}^{N}}|u_{n}|^{q}dx\leq\left(\int_{\mathbb{R}^{N}}|u_{n}|^{2}dx\right)^{\alpha}\left(\int_{\mathbb{R}^{N}}|u_{n}|^{2_{s}^{\ast}}dx\right)^{1-\alpha}\leq C.$ Thus, from (2.11), we know that $||u_{n}||^{2}_{D_{s}(\mathbb{R}^{N})}=\mu\gamma_{\overline{p},s}\int_{\mathbb{R}^{N}}|u_{n}|^{\overline{p}}dx+\int_{\mathbb{R}^{N}}|u_{n}|^{2_{s}^{\ast}}dx+o_{n}(1)\leq C.$ Case 3: $\overline{p}<q<2_{s}^{\ast}$. This implies that $\gamma_{q,s}q>2$. Since $P_{\mu}(u_{n})\rightarrow 0$, we know $\displaystyle||u_{n}||^{2}_{D_{s}(\mathbb{R}^{N})}-\mu\gamma_{q,s}\int_{\mathbb{R}^{N}}|u_{n}|^{q}dx-\int_{\mathbb{R}^{N}}|u_{n}|^{2_{s}^{\ast}}dx=o_{n}(1).$ Thus $E_{\mu}(u_{n})=\frac{\mu}{q}\left(\frac{\gamma_{q,s}q}{2}-1\right)\int_{\mathbb{R}^{N}}|u_{n}|^{q}dx+\frac{s}{N}\int_{\mathbb{R}^{N}}|u_{n}|^{2_{s}^{\ast}}dx+o_{n}(1)\leq m(a,\mu)+1.$ So $\int_{\mathbb{R}^{N}}|u_{n}|^{q}dx$ and $\int_{\mathbb{R}^{N}}|u_{n}|^{2_{s}^{\ast}}dx$ are both bounded. Hence $\displaystyle||u_{n}||^{2}_{D_{s}(\mathbb{R}^{N})}=\mu\gamma_{q,s}\int_{\mathbb{R}^{N}}|u_{n}|^{q}dx+\int_{\mathbb{R}^{N}}|u_{n}|^{2_{s}^{\ast}}dx+o_{n}(1)\leq C.$ This completes the proof. ∎ ###### Proposition 2.2. Let $N>2s,\ 2<q<2_{s}^{\ast}$ and $a,\mu>0$. 
Let $\\{u_{n}\\}\subset S_{a,r}=S_{a}\cap H^{s}_{rad}(\mathbb{R}^{N})$ be a Palais-Smale sequence for $E_{\mu}|_{S_{a}}$ at level $m(a,\mu)$ with $m(a,\mu)<\frac{s}{N}S^{\frac{N}{2s}}_{s}\ \text{and}\ m(a,\mu)\neq 0.$ Suppose in addition that $P_{\mu}(u_{n})\rightarrow 0\ \text{as}\ n\rightarrow+\infty.$ Then one of the following alternatives holds: 1. $(i)$ either up to a subsequence $u_{n}\rightharpoonup u$ weakly in $H^{s}(\mathbb{R}^{N})$ but not strongly, where $u\not\equiv 0$ is a solution of (1.1) for some $\lambda<0$, and $E_{\mu}(u)\leq m(a,\mu)-\frac{s}{N}S^{\frac{N}{2s}}_{s};$ 2. $(ii)$ or up to a subsequence $u_{n}\rightarrow u$ strongly in $H^{s}(\mathbb{R}^{N}),$ $E_{\mu}(u)=m(a,\mu)$ and $u$ solves (1.1)–(1.2) for some $\lambda<0.$ ###### Proof. By Lemma 2.2, we know that the sequence $\\{u_{n}\\}$ is bounded in $H^{s}(\mathbb{R}^{N})$ and consists of radial functions; by the compactness of the embedding $H_{rad}^{s}(\mathbb{R}^{N})\hookrightarrow\hookrightarrow L^{q}(\mathbb{R}^{N})$, up to a subsequence, $u_{n}\rightharpoonup u\ \text{in}\ H^{s}(\mathbb{R}^{N}),\ \ u_{n}\rightarrow u\ \text{in}\ L^{q}(\mathbb{R}^{N})\ \text{and a.e. in}\ \mathbb{R}^{N}.$ Since $\\{u_{n}\\}$ is a bounded Palais-Smale sequence for $E_{\mu}|_{S_{a}}$ at level $m(a,\mu)$, by the Lagrange multipliers rule, there exists $\\{\lambda_{n}\\}\subset\mathbb{R}$ such that for every $\varphi\in H^{s}(\mathbb{R}^{N})$ $\displaystyle\int_{\mathbb{R}^{N}}\left((-\Delta)^{\frac{s}{2}}u_{n}(-\Delta)^{\frac{s}{2}}\varphi-\lambda_{n}u_{n}\varphi-\mu|u_{n}|^{q-2}u_{n}\varphi-|u_{n}|^{2_{s}^{\ast}-2}u_{n}\varphi\right)dx=o_{n}(1)\|\varphi\|,\ \text{as}\ n\rightarrow+\infty.$ (2.12) Choosing $\varphi=u_{n}$ in (2.12), it is easy to see that $\\{\lambda_{n}\\}$ is bounded; hence, up to a subsequence, $\lambda_{n}\rightarrow\lambda\in\mathbb{R}$. 
By the fact that $P_{\mu}(u_{n})\rightarrow 0$ and $\gamma_{q,s}<1$, we deduce that $\displaystyle\lambda a^{2}$ $\displaystyle=\lim_{n\rightarrow+\infty}\lambda_{n}\int_{\mathbb{R}^{N}}|u_{n}|^{2}dx=\lim_{n\rightarrow+\infty}\left(||u_{n}||^{2}_{D_{s}(\mathbb{R}^{N})}-\int_{\mathbb{R}^{N}}\left(\mu|u_{n}|^{q}+|u_{n}|^{2_{s}^{\ast}}\right)dx\right)$ (2.13) $\displaystyle=\lim_{n\rightarrow+\infty}\mu(\gamma_{q,s}-1)\int_{\mathbb{R}^{N}}|u_{n}|^{q}dx=\mu(\gamma_{q,s}-1)\int_{\mathbb{R}^{N}}|u|^{q}dx\leq 0.$ It is easy to see that $\lambda=0$ if and only if $u\equiv 0$. Next, we show that $u\not\equiv 0.$ Assume by contradiction that $u\equiv 0$. Since $\\{u_{n}\\}$ is bounded in $H^{s}(\mathbb{R}^{N})$, up to a subsequence $||u_{n}||^{2}_{D_{s}(\mathbb{R}^{N})}\rightarrow\ell\in\mathbb{R}$. From $P_{\mu}(u_{n})\rightarrow 0$ and $u_{n}\rightarrow 0$ strongly in $L^{q}(\mathbb{R}^{N})$, we have $\displaystyle\int_{\mathbb{R}^{N}}|u_{n}|^{2_{s}^{\ast}}dx=||u_{n}||^{2}_{D_{s}(\mathbb{R}^{N})}-\mu\gamma_{q,s}\int_{\mathbb{R}^{N}}|u_{n}|^{q}dx+o_{n}(1)\rightarrow\ell.$ Therefore, by the definition of $S_{s}$ in (1.5), we have $\ell\geq S_{s}\ell^{\frac{2}{2_{s}^{\ast}}}$, from which we can deduce $\ell=0\ \text{or}\ \ell\geq S^{\frac{N}{2s}}_{s}.$ Case 1. If $||u_{n}||^{2}_{D_{s}(\mathbb{R}^{N})}\rightarrow\ell=0$, then $\int_{\mathbb{R}^{N}}|u_{n}|^{q}dx\rightarrow 0$ and $\int_{\mathbb{R}^{N}}|u_{n}|^{2_{s}^{\ast}}dx\rightarrow 0$, which implies that $E_{\mu}(u_{n})\rightarrow 0$; this contradicts the fact that $E_{\mu}(u_{n})\rightarrow m(a,\mu)\neq 0$. Case 2. 
If $\ell\geq S^{\frac{N}{2s}}_{s}$, from $P_{\mu}(u_{n})\rightarrow 0$ and $E_{\mu}(u_{n})\rightarrow m(a,\mu)$, we obtain $\displaystyle m(a,\mu)+o_{n}(1)$ $\displaystyle=E_{\mu}(u_{n})=\frac{s}{N}||u_{n}||^{2}_{D_{s}(\mathbb{R}^{N})}-\frac{\mu}{q}\left(1-\frac{q\gamma_{q,s}}{2_{s}^{\ast}}\right)\int_{\mathbb{R}^{N}}|u_{n}|^{q}dx+o_{n}(1)$ $\displaystyle=\frac{s}{N}||u_{n}||^{2}_{D_{s}(\mathbb{R}^{N})}+o_{n}(1)=\frac{s}{N}\ell+o_{n}(1),$ which implies that $m(a,\mu)=\frac{s}{N}\ell\geq\frac{s}{N}S^{\frac{N}{2s}}_{s},$ which contradicts our assumptions. Thus, $u\not\equiv 0.$ From (2.13), we know that $\lambda<0$. Passing to the limit in (2.12) by weak convergence, we obtain $\displaystyle(-\Delta)^{s}u=\lambda u+\mu|u|^{q-2}u+|u|^{2_{s}^{\ast}-2}u,\ $ $\displaystyle x\in\mathbb{R}^{N}.$ (2.14) By the Pohozaev identity, $P_{\mu}(u)=0.$ Let $\sigma_{n}=u_{n}-u$, then $\sigma_{n}\rightharpoonup 0$ in $H^{s}(\mathbb{R}^{N})$. Since $\displaystyle||u_{n}||^{2}_{D_{s}(\mathbb{R}^{N})}=||\sigma_{n}||^{2}_{D_{s}(\mathbb{R}^{N})}+||u||^{2}_{D_{s}(\mathbb{R}^{N})}+o_{n}(1)$ (2.15) and by the well-known Brézis-Lieb lemma, we get $\displaystyle\int_{\mathbb{R}^{N}}|u_{n}|^{2_{s}^{\ast}}dx=\int_{\mathbb{R}^{N}}|\sigma_{n}|^{2_{s}^{\ast}}dx+\int_{\mathbb{R}^{N}}|u|^{2_{s}^{\ast}}dx+o_{n}(1).$ (2.16) Therefore, from $P_{\mu}(u_{n})\rightarrow 0$ and $u_{n}\rightarrow u$ in $L^{q}(\mathbb{R}^{N})$, we have $||\sigma_{n}||^{2}_{D_{s}(\mathbb{R}^{N})}+||u||^{2}_{D_{s}(\mathbb{R}^{N})}=\mu\gamma_{q,s}\int_{\mathbb{R}^{N}}|u|^{q}dx+\int_{\mathbb{R}^{N}}|\sigma_{n}|^{2_{s}^{\ast}}dx+\int_{\mathbb{R}^{N}}|u|^{2_{s}^{\ast}}dx+o_{n}(1).$ Combining this with $P_{\mu}(u)=0,$ we know that $||\sigma_{n}||^{2}_{D_{s}(\mathbb{R}^{N})}=\int_{\mathbb{R}^{N}}|\sigma_{n}|^{2_{s}^{\ast}}dx+o_{n}(1),$ thus $\lim_{n\rightarrow+\infty}||\sigma_{n}||^{2}_{D_{s}(\mathbb{R}^{N})}=\lim_{n\rightarrow+\infty}\int_{\mathbb{R}^{N}}|\sigma_{n}|^{2_{s}^{\ast}}dx=\ell\geq 0.$ By the definition of $S_{s}$ in 
(1.5), we have $\ell\geq S_{s}\ell^{\frac{2}{2_{s}^{\ast}}}$, hence we can deduce $\ell=0\ \text{ or}\ \ell\geq S^{\frac{N}{2s}}_{s}.$ Case 1. $\ell\geq S^{\frac{N}{2s}}_{s}$. By (2.15) and (2.16), we have $\displaystyle m(a,\mu)$ $\displaystyle=\lim_{n\rightarrow+\infty}E_{\mu}(u_{n})=\lim_{n\rightarrow+\infty}\left(E_{\mu}(u)+\frac{1}{2}||\sigma_{n}||^{2}_{D_{s}(\mathbb{R}^{N})}-\frac{1}{2_{s}^{\ast}}\int_{\mathbb{R}^{N}}|\sigma_{n}|^{2_{s}^{\ast}}dx\right)$ $\displaystyle=E_{\mu}(u)+\frac{s}{N}\ell\geq E_{\mu}(u)+\frac{s}{N}S^{\frac{N}{2s}}_{s}.$ Thus, the conclusion $(i)$ holds, i.e., up to a subsequence, $u_{n}\rightharpoonup u$ weakly in $H^{s}(\mathbb{R}^{N})$ but not strongly, where $u\not\equiv 0$ is a solution of (1.1) for some $\lambda<0$, and $E_{\mu}(u)\leq m(a,\mu)-\frac{s}{N}S^{\frac{N}{2s}}_{s}.$ Case 2. $\ell=0.$ Then $u_{n}\rightarrow u$ strongly in $D_{s}(\mathbb{R}^{N})$, which implies that $u_{n}\rightarrow u$ strongly in $L^{2_{s}^{\ast}}(\mathbb{R}^{N})$ by the Sobolev embedding inequality. Next, we show that $u_{n}\rightarrow u$ strongly in $L^{2}(\mathbb{R}^{N}).$ Taking $\varphi=u_{n}-u$ in (2.12), multiplying both sides of (2.14) by $u_{n}-u$, integrating and subtracting, we obtain $\displaystyle||u_{n}-u||^{2}_{D_{s}(\mathbb{R}^{N})}-\int_{\mathbb{R}^{N}}(\lambda_{n}u_{n}-\lambda u)(u_{n}-u)dx$ $\displaystyle=\int_{\mathbb{R}^{N}}(|u_{n}|^{q-2}u_{n}-|u|^{q-2}u)(u_{n}-u)dx+\int_{\mathbb{R}^{N}}(|u_{n}|^{2_{s}^{\ast}-2}u_{n}-|u|^{2_{s}^{\ast}-2}u)(u_{n}-u)dx+o_{n}(1).$ Thus, by $u_{n}\rightarrow u$ strongly in $D_{s}(\mathbb{R}^{N})$, in $L^{2_{s}^{\ast}}(\mathbb{R}^{N})$ and in $L^{q}(\mathbb{R}^{N})$, we have $0=\lim_{n\rightarrow+\infty}\int_{\mathbb{R}^{N}}(\lambda_{n}u_{n}-\lambda u)(u_{n}-u)dx=\lim_{n\rightarrow+\infty}\lambda\int_{\mathbb{R}^{N}}(u_{n}-u)^{2}dx,$ which implies that $u_{n}\rightarrow u$ strongly in $L^{2}(\mathbb{R}^{N})$ by $\lambda<0$. Thus, the conclusion $(ii)$ holds, i.e. 
up to a subsequence $u_{n}\rightarrow u$ strongly in $H^{s}(\mathbb{R}^{N}),$ $E_{\mu}(u)=m(a,\mu)$ and $u$ solves (1.1)–(1.2) for some $\lambda<0.$ The proof is thus complete. ∎ By arguments similar to those in Proposition 2.2, we can obtain the following proposition. ###### Proposition 2.3. Let $N>2s,2<q<2_{s}^{\ast}$ and $a,\mu>0$. Let $\\{u_{n}\\}\subset S_{a}$ be a Palais-Smale sequence for $E_{\mu}|_{S_{a}}$ at level $m(a,\mu)$ with $m(a,\mu)<\frac{s}{N}S^{\frac{N}{2s}}_{s}\ \text{and}\ m(a,\mu)\neq 0.$ Suppose in addition that $\mathcal{P}_{a,\mu}(u_{n})\rightarrow 0\ \text{as}\ n\rightarrow+\infty,$ and that there exists a sequence $\\{v_{n}\\}\subset S_{a}$ of radially symmetric functions such that $\|u_{n}-v_{n}\|\rightarrow 0$ as $n\rightarrow+\infty.$ Then one of the alternatives (i) and (ii) in Proposition 2.2 holds. ## 3 $L^{2}$-subcritical perturbation For $N>2s$ and $2<q<2+4s/N$, let us recall $C^{\prime}$ in (1.7). We consider the constrained functional $E_{\mu}|_{S_{a}}$. For every $u\in S_{a}$, by the fractional Gagliardo-Nirenberg-Sobolev inequality (2.4) and the Sobolev inequality (1.5), we have $\displaystyle E_{\mu}(u)\geq\frac{1}{2}||u||^{2}_{D_{s}(\mathbb{R}^{N})}-\frac{\mu}{q}C^{q}_{N,q,s}||u||^{q\gamma_{q,s}}_{D_{s}(\mathbb{R}^{N})}a^{q(1-\gamma_{q,s})}-\frac{1}{2_{s}^{\ast}}S^{-\frac{2_{s}^{\ast}}{2}}_{s}||u||^{2_{s}^{\ast}}_{D_{s}(\mathbb{R}^{N})}.$ (3.1) In view of (3.1), we consider the function $h:\mathbb{R}^{+}\rightarrow\mathbb{R}$ $\displaystyle h(t)=\frac{1}{2}t^{2}-\frac{\mu}{q}C^{q}_{N,q,s}a^{q(1-\gamma_{q,s})}t^{q\gamma_{q,s}}-\frac{1}{2_{s}^{\ast}}S^{-\frac{2_{s}^{\ast}}{2}}_{s}t^{2_{s}^{\ast}}.$ (3.2) Since $\mu>0$ and $q\gamma_{q,s}<2<2_{s}^{\ast}$, we have $h(0^{+})=0^{-}$ and $h(+\infty)=-\infty.$ ###### Lemma 3.1. 
Under the assumption that $\mu a^{(1-\gamma_{q,s})q}<C^{\prime}$ (see (1.7)), the function $h$ has a local strict minimum at negative level, a global strict maximum at positive level, and no other critical points, and there exist $R_{0}$ and $R_{1}$, both depending on $a$ and $\mu$, such that $h(R_{0})=0=h(R_{1})$ and $h(t)>0$ if and only if $t\in(R_{0},R_{1}).$ ###### Proof. For $t>0$, we have $h(t)>0$ if and only if $\varphi(t)>\frac{\mu}{q}C^{q}_{N,q,s}a^{q(1-\gamma_{q,s})},\ \text{with}\ \varphi(t)=\frac{1}{2}t^{2-q\gamma_{q,s}}-\frac{1}{2_{s}^{\ast}}S^{-\frac{2_{s}^{\ast}}{2}}_{s}t^{2_{s}^{\ast}-q\gamma_{q,s}}.$ Since $\varphi^{\prime}(t)=\frac{2-q\gamma_{q,s}}{2}t^{1-q\gamma_{q,s}}-\frac{2_{s}^{\ast}-q\gamma_{q,s}}{2_{s}^{\ast}}S^{-\frac{2_{s}^{\ast}}{2}}_{s}t^{2_{s}^{\ast}-1-q\gamma_{q,s}},$ it is easy to see that $\varphi(t)$ is increasing on $(0,\overline{t})$ and decreasing on $(\overline{t},+\infty)$, so $\varphi$ attains a unique global maximum on $(0,+\infty)$, at positive level, at the point $\overline{t}=\left(\frac{(2-q\gamma_{q,s})2_{s}^{\ast}}{2(2_{s}^{\ast}-q\gamma_{q,s})}S^{\frac{2_{s}^{\ast}}{2}}_{s}\right)^{\frac{1}{2_{s}^{\ast}-2}}$. 
Thus the maximum level is $\varphi(\overline{t})=\frac{2_{s}^{\ast}-2}{2(2_{s}^{\ast}-q\gamma_{q,s})}\left(\frac{(2-q\gamma_{q,s})2_{s}^{\ast}}{2(2_{s}^{\ast}-q\gamma_{q,s})}S^{\frac{2_{s}^{\ast}}{2}}_{s}\right)^{\frac{2-q\gamma_{q,s}}{2_{s}^{\ast}-2}}.$ Therefore, $h$ is positive on an open interval $(R_{0},R_{1})$ if and only if $\varphi(\overline{t})>\frac{\mu}{q}C^{q}_{N,q,s}a^{q(1-\gamma_{q,s})}$, which is equivalent to $\mu a^{q(1-\gamma_{q,s})}<\frac{q(2_{s}^{\ast}-2)}{2C^{q}_{N,q,s}(2_{s}^{\ast}-q\gamma_{q,s})}\left(\frac{(2-q\gamma_{q,s})2_{s}^{\ast}}{2(2_{s}^{\ast}-q\gamma_{q,s})}S^{\frac{2_{s}^{\ast}}{2}}_{s}\right)^{\frac{2-q\gamma_{q,s}}{2_{s}^{\ast}-2}}.$ Since $h(0^{+})=0^{-}$, $h(+\infty)=-\infty$ and $h$ is positive on the open interval $(R_{0},R_{1})$, it is easy to see that $h$ has a global maximum at positive level in $(R_{0},R_{1})$ and a local minimum point at negative level in $(0,R_{0})$. Since $h^{\prime}(t)=t^{q\gamma_{q,s}-1}\left[t^{2-q\gamma_{q,s}}-\mu\gamma_{q,s}C^{q}_{N,q,s}a^{q(1-\gamma_{q,s})}-S^{-\frac{2_{s}^{\ast}}{2}}_{s}t^{2_{s}^{\ast}-q\gamma_{q,s}}\right]=0$ if and only if $\psi(t)=\mu\gamma_{q,s}C^{q}_{N,q,s}a^{q(1-\gamma_{q,s})},\ \text{where}\ \psi(t)=t^{2-q\gamma_{q,s}}-S^{-\frac{2_{s}^{\ast}}{2}}_{s}t^{2_{s}^{\ast}-q\gamma_{q,s}}.$ Obviously, $\psi(t)$ has only one critical point, which is a strict maximum. If $\max_{t>0}\psi(t)\leq\mu\gamma_{q,s}C^{q}_{N,q,s}a^{q(1-\gamma_{q,s})},$ then $h^{\prime}(t)\leq 0$ for all $t>0$, so $h$ would be non-increasing, contradicting the fact that $h(0^{+})=0^{-}$ and $h$ is positive on $(R_{0},R_{1})$. Thus, $\max_{t>0}\psi(t)>\mu\gamma_{q,s}C^{q}_{N,q,s}a^{q(1-\gamma_{q,s})}$, so $h^{\prime}$ has exactly two zeros, which implies that $h$ has only a local strict minimum at negative level, a global strict maximum at positive level, and no other critical points. ∎ ###### Lemma 3.2. Under the condition $\mu a^{(1-\gamma_{q,s})q}<C^{\prime}$ (see (1.7)), we have $\mathcal{P}^{0}_{a,\mu}=\emptyset$ and $\mathcal{P}_{a,\mu}$ is a smooth manifold of codimension 1 in $S_{a}$. ###### Proof. 
Assume by contradiction that there exists $u\in\mathcal{P}^{0}_{a,\mu}$, so that $\displaystyle||u||^{2}_{D_{s}(\mathbb{R}^{N})}-\mu\gamma_{q,s}\int_{\mathbb{R}^{N}}|u|^{q}dx-\int_{\mathbb{R}^{N}}|u|^{2_{s}^{\ast}}dx=0,$ (3.3) and $\displaystyle 2||u||^{2}_{D_{s}(\mathbb{R}^{N})}=\mu q\gamma^{2}_{q,s}\int_{\mathbb{R}^{N}}|u|^{q}dx+2_{s}^{\ast}\int_{\mathbb{R}^{N}}|u|^{2_{s}^{\ast}}dx.$ (3.4) Thus, from (1.5), (2.4), (3.3), and (3.4), we have $\mu\gamma_{q,s}(2-q\gamma_{q,s})\int_{\mathbb{R}^{N}}|u|^{q}dx=(2_{s}^{\ast}-2)\int_{\mathbb{R}^{N}}|u|^{2_{s}^{\ast}}dx,$ $\displaystyle||u||^{2}_{D_{s}(\mathbb{R}^{N})}=\frac{2_{s}^{\ast}-q\gamma_{q,s}}{2-q\gamma_{q,s}}\int_{\mathbb{R}^{N}}|u|^{2_{s}^{\ast}}dx\leq\frac{2_{s}^{\ast}-q\gamma_{q,s}}{2-q\gamma_{q,s}}S^{-\frac{2_{s}^{\ast}}{2}}_{s}||u||^{2_{s}^{\ast}}_{D_{s}(\mathbb{R}^{N})},$ (3.5) $\displaystyle||u||^{2}_{D_{s}(\mathbb{R}^{N})}=\mu\gamma_{q,s}\frac{2_{s}^{\ast}-q\gamma_{q,s}}{2_{s}^{\ast}-2}\int_{\mathbb{R}^{N}}|u|^{q}dx\leq\mu\gamma_{q,s}\frac{2_{s}^{\ast}-q\gamma_{q,s}}{2_{s}^{\ast}-2}C^{q}_{N,q,s}a^{q(1-\gamma_{q,s})}||u||^{q\gamma_{q,s}}_{D_{s}(\mathbb{R}^{N})}.$ (3.6) From (3.5) and (3.6), we can infer that $\left[\frac{2-q\gamma_{q,s}}{2_{s}^{\ast}-q\gamma_{q,s}}S^{\frac{2_{s}^{\ast}}{2}}_{s}\right]^{\frac{1}{2_{s}^{\ast}-2}}\leq\left[\mu\gamma_{q,s}\frac{2_{s}^{\ast}-q\gamma_{q,s}}{2_{s}^{\ast}-2}C^{q}_{N,q,s}a^{q(1-\gamma_{q,s})}\right]^{\frac{1}{2-q\gamma_{q,s}}},$ which implies that $\displaystyle\mu a^{q(1-\gamma_{q,s})}\geq\frac{2_{s}^{\ast}-2}{\gamma_{q,s}C^{q}_{N,q,s}(2_{s}^{\ast}-q\gamma_{q,s})}\left[\frac{2-q\gamma_{q,s}}{2_{s}^{\ast}-q\gamma_{q,s}}S^{\frac{2_{s}^{\ast}}{2}}_{s}\right]^{\frac{2-q\gamma_{q,s}}{2_{s}^{\ast}-2}}.$ (3.7) Next, we show that the right-hand side of (3.7) is greater than or equal to $C^{\prime}$. 
To show $\frac{2_{s}^{\ast}-2}{\gamma_{q,s}C^{q}_{N,q,s}(2_{s}^{\ast}-q\gamma_{q,s})}\left[\frac{2-q\gamma_{q,s}}{2_{s}^{\ast}-q\gamma_{q,s}}S^{\frac{2_{s}^{\ast}}{2}}_{s}\right]^{\frac{2-q\gamma_{q,s}}{2_{s}^{\ast}-2}}\geq\frac{q(2_{s}^{\ast}-2)}{2C^{q}_{N,q,s}(2_{s}^{\ast}-q\gamma_{q,s})}\left(\frac{(2-q\gamma_{q,s})2_{s}^{\ast}}{2(2_{s}^{\ast}-q\gamma_{q,s})}S^{\frac{2_{s}^{\ast}}{2}}_{s}\right)^{\frac{2-q\gamma_{q,s}}{2_{s}^{\ast}-2}},$ we only need to prove that $\left(\frac{q\gamma_{q,s}}{2}\right)^{2_{s}^{\ast}-2}\left(\frac{2_{s}^{\ast}}{2}\right)^{2-q\gamma_{q,s}}\leq 1,\ \text{for every}\ \ 2<q<\overline{p}<2_{s}^{\ast}.$ Setting $q\gamma_{q,s}=x\in(0,2)$, it suffices to show that $\left(\frac{x}{2}\right)^{2_{s}^{\ast}-2}\left(\frac{2_{s}^{\ast}}{2}\right)^{2-x}\leq 1.$ For this, we set $f(x)=\left(\frac{x}{2}\right)^{2_{s}^{\ast}-2}\left(\frac{2_{s}^{\ast}}{2}\right)^{2-x}$; then $f(x)$ is strictly increasing on $(0,\frac{2_{s}^{\ast}-2}{\ln 2_{s}^{\ast}-\ln 2})$ and decreasing on $(\frac{2_{s}^{\ast}-2}{\ln 2_{s}^{\ast}-\ln 2},+\infty)$. Since $\frac{2_{s}^{\ast}-2}{\ln 2_{s}^{\ast}-\ln 2}>2$ (which follows from $y-1>\ln y$ with $y=\frac{2_{s}^{\ast}}{2}>1$), $f$ is strictly increasing on $(0,2)$, so for $x\in(0,2)$ we have $f(x)<f(2)=1,$ which implies that $\left(\frac{q\gamma_{q,s}}{2}\right)^{2_{s}^{\ast}-2}\left(\frac{2_{s}^{\ast}}{2}\right)^{2-q\gamma_{q,s}}\leq 1.$ Therefore, the right-hand side of (3.7) is greater than or equal to $C^{\prime}$, so (3.7) yields $\mu a^{q(1-\gamma_{q,s})}\geq C^{\prime}$, which contradicts the assumption $\mu a^{q(1-\gamma_{q,s})}<C^{\prime}.$ Thus, $\mathcal{P}^{0}_{a,\mu}=\emptyset.$ Next, we show that $\mathcal{P}_{a,\mu}$ is a smooth manifold of codimension 1 in $S_{a}$. 
Since $\mathcal{P}_{a,\mu}=\bigg{\\{}u\in S_{a}:||u||^{2}_{D_{s}(\mathbb{R}^{N})}=\mu\gamma_{q,s}\int_{\mathbb{R}^{N}}|u|^{q}dx+\int_{\mathbb{R}^{N}}|u|^{2_{s}^{\ast}}dx\bigg{\\}}$, we know that $\mathcal{P}_{a,\mu}$ is defined by $P_{\mu}(u)=0$, $G(u)=0$, where $P_{\mu}(u)=s||u||^{2}_{D_{s}(\mathbb{R}^{N})}-\mu\gamma_{q,s}s\int_{\mathbb{R}^{N}}|u|^{q}dx-s\int_{\mathbb{R}^{N}}|u|^{2_{s}^{\ast}}dx\ \ \text{and}\ \ G(u)=\int_{\mathbb{R}^{N}}|u|^{2}dx-a^{2}.$ Since $P_{\mu}(u)$ and $G(u)$ are of class $C^{1}$, we only need to check that the differential $(dP_{\mu}(u),dG(u)):H^{s}(\mathbb{R}^{N})\rightarrow\mathbb{R}^{2}$ is surjective. If this is not true, then $dP_{\mu}(u)$ has to be linearly dependent on $dG(u)$, i.e., there exists $\nu\in\mathbb{R}$ such that $2s\int_{\mathbb{R}^{N}}(-\Delta)^{\frac{s}{2}}u(-\Delta)^{\frac{s}{2}}\varphi dx-\mu q\gamma_{q,s}s\int_{\mathbb{R}^{N}}|u|^{q-2}u\varphi dx-s2_{s}^{\ast}\int_{\mathbb{R}^{N}}|u|^{2_{s}^{\ast}-2}u\varphi dx=\nu\int_{\mathbb{R}^{N}}u\varphi dx$ for every $\varphi\in H^{s}(\mathbb{R}^{N})$, which implies that $2s(-\Delta)^{s}u=\nu u+\mu q\gamma_{q,s}s|u|^{q-2}u+2_{s}^{\ast}s|u|^{2_{s}^{\ast}-2}u\ \ \text{in}\ \mathbb{R}^{N}.$ By the Pohozaev identity for the above equation, we know that $2s^{2}||u||^{2}_{D_{s}(\mathbb{R}^{N})}=\mu q\gamma^{2}_{q,s}s^{2}\int_{\mathbb{R}^{N}}|u|^{q}dx+2_{s}^{\ast}s^{2}\int_{\mathbb{R}^{N}}|u|^{2_{s}^{\ast}}dx,$ that is, $u\in\mathcal{P}^{0}_{a,\mu}$, a contradiction. Hence, $\mathcal{P}_{a,\mu}$ is a natural constraint. ∎ ###### Lemma 3.3. For every $u\in S_{a}$, the function $\Psi^{\mu}_{u}(t)$ has exactly two critical points $a_{u}<t_{u}\in\mathbb{R}$ and two zeros $c_{u}<d_{u}\in\mathbb{R}$, with $a_{u}<c_{u}<t_{u}<d_{u}$. Moreover, 1. $(1)$ $a_{u}\star u\in\mathcal{P}^{+}_{a,\mu}$ and $t_{u}\star u\in\mathcal{P}^{-}_{a,\mu}$, and if $t\star u\in\mathcal{P}_{a,\mu}$, then either $t=a_{u}$ or $t=t_{u}.$ 2. 
$(2)$ $||t\star u||_{D_{s}(\mathbb{R}^{N})}\leq R_{0}$ for every $t\leq c_{u},$ and $E_{\mu}(a_{u}\star u)=\min\\{E_{\mu}(t\star u):t\in\mathbb{R}\ \text{and}\ ||t\star u||_{D_{s}(\mathbb{R}^{N})}<R_{0}\\}<0.$ 3. $(3)$ We have $E_{\mu}(t_{u}\star u)=\max\\{E_{\mu}(t\star u):t\in\mathbb{R}\\}>0$ and $\Psi^{\mu}_{u}(t)$ is strictly decreasing and concave on $(t_{u},+\infty)$. In particular, if $t_{u}<0$, then $P_{\mu}(u)<0.$ 4. $(4)$ The maps $u\in S_{a}\mapsto a_{u}\in\mathbb{R}$ and $u\in S_{a}\mapsto t_{u}\in\mathbb{R}$ are of class $C^{1}$. ###### Proof. Let $u\in S_{a}$. Note that $t\star u\in\mathcal{P}_{a,\mu}$ if and only if $(\Psi^{\mu}_{u})^{\prime}(t)=0$. We first show that $\Psi^{\mu}_{u}(t)$ has at least two critical points. From (3.1), we have $\Psi^{\mu}_{u}(t)=E_{\mu}(t\star u)\geq h(||t\star u||_{D_{s}(\mathbb{R}^{N})})=h(e^{st}||u||_{D_{s}(\mathbb{R}^{N})}).$ Thus, the $C^{2}$ function $\Psi^{\mu}_{u}(t)$ is positive on $\left(\frac{\ln\left(\frac{R_{0}}{||u||_{D_{s}(\mathbb{R}^{N})}}\right)}{s},\frac{\ln\left(\frac{R_{1}}{||u||_{D_{s}(\mathbb{R}^{N})}}\right)}{s}\right)$ and $\Psi^{\mu}_{u}(-\infty)=0^{-}$, $\Psi^{\mu}_{u}(+\infty)=-\infty$; it is then easy to see that $\Psi^{\mu}_{u}(t)$ has a local minimum point $a_{u}$ at negative level in $\left(-\infty,\frac{\ln\left(\frac{R_{0}}{||u||_{D_{s}(\mathbb{R}^{N})}}\right)}{s}\right)$ and a global maximum point $t_{u}$ at positive level in $\left(\frac{\ln\left(\frac{R_{0}}{||u||_{D_{s}(\mathbb{R}^{N})}}\right)}{s},\frac{\ln\left(\frac{R_{1}}{||u||_{D_{s}(\mathbb{R}^{N})}}\right)}{s}\right).$ Next, we show that $\Psi^{\mu}_{u}(t)$ has no other critical points. 
Indeed, $(\Psi^{\mu}_{u})^{\prime}(t)=0$ implies that $\Psi(t)=\mu\gamma_{q,s}s\int_{\mathbb{R}^{N}}|u|^{q}dx,\ \text{where}\ \Psi(t)=se^{(2-q\gamma_{q,s})st}\|u\|^{2}_{D_{s}(\mathbb{R}^{N})}-se^{(2_{s}^{\ast}-q\gamma_{q,s})st}\int_{\mathbb{R}^{N}}|u|^{2_{s}^{\ast}}dx.$ It is easy to see that $\Psi(t)$ has a unique maximum point, thus the above equation has at most two solutions. Since, for $u\in S_{a},$ $t\in\mathbb{R}$ is a critical point of $\Psi^{\mu}_{u}(t)$ if and only if $t\star u\in\mathcal{P}_{a,\mu},$ we have $a_{u}\star u,\ \ t_{u}\star u\in\mathcal{P}_{a,\mu}$ and $t\star u\in\mathcal{P}_{a,\mu}$ if and only if $t=a_{u}$ or $t=t_{u}$. Since $a_{u}$ is a local minimum point of $\Psi^{\mu}_{u}(t)$, we know that $(\Psi^{\mu}_{a_{u}\star u})^{\prime\prime}(0)=(\Psi^{\mu}_{u})^{\prime\prime}(a_{u})\geq 0$. Since $\mathcal{P}^{0}_{a,\mu}=\emptyset$, we know that $(\Psi^{\mu}_{u})^{\prime\prime}(a_{u})\neq 0$; thus $(\Psi^{\mu}_{a_{u}\star u})^{\prime\prime}(0)=(\Psi^{\mu}_{u})^{\prime\prime}(a_{u})>0$, which implies that $a_{u}\star u\in\mathcal{P}^{+}_{a,\mu}$; similarly, we have $t_{u}\star u\in\mathcal{P}^{-}_{a,\mu}.$ By the monotonicity and the behavior at infinity of $\Psi^{\mu}_{u}$, we know that $\Psi^{\mu}_{u}$ has exactly two zeros $c_{u}<d_{u}$ with $a_{u}<c_{u}<t_{u}<d_{u}$, and $\Psi^{\mu}_{u}$ has exactly two inflection points; in particular, $\Psi^{\mu}_{u}$ is concave on $[t_{u},+\infty)$ and hence, if $t_{u}<0$, then $P_{\mu}(u)=(\Psi^{\mu}_{u})^{\prime}(0)<0.$ Finally, we prove that the maps $u\in S_{a}\mapsto a_{u}\in\mathbb{R}$ and $u\in S_{a}\mapsto t_{u}\in\mathbb{R}$ are of class $C^{1}$. 
Indeed, we apply the implicit function theorem to the $C^{1}$ function $\Phi(t,u)=(\Psi^{\mu}_{u})^{\prime}(t)$: we have $\Phi(a_{u},u)=(\Psi^{\mu}_{u})^{\prime}(a_{u})=0$ and $\partial_{t}\Phi(a_{u},u)=(\Psi^{\mu}_{u})^{\prime\prime}(a_{u})>0$, so the map $u\in S_{a}\mapsto a_{u}\in\mathbb{R}$ is of class $C^{1}$; similarly, we can prove that $u\in S_{a}\mapsto t_{u}\in\mathbb{R}$ is of class $C^{1}$. ∎ For $k>0$, set $A_{k}=\\{u\in S_{a}:\|u\|^{2}_{D_{s}(\mathbb{R}^{N})}<k\\},\ \text{and}\ m(a,\mu)=\inf_{u\in A_{R_{0}}}E_{\mu}(u).$ From Lemma 3.3, we immediately have the following corollary: ###### Corollary 3.1. The set $\mathcal{P}^{+}_{a,\mu}$ is contained in $A_{R_{0}}=\\{u\in S_{a}:||u||_{D_{s}(\mathbb{R}^{N})}<R_{0}\\}\ \text{ and }\ \sup_{\mathcal{P}^{+}_{a,\mu}}E_{\mu}\leq 0\leq\inf_{\mathcal{P}^{-}_{a,\mu}}E_{\mu}.$ ###### Lemma 3.4. We have $m(a,\mu)\in(-\infty,0)$, $m(a,\mu)=\inf_{\mathcal{P}_{a,\mu}}E_{\mu}=\inf_{\mathcal{P}^{+}_{a,\mu}}E_{\mu},\ \text{and}\ m(a,\mu)<\inf_{\overline{A_{R_{0}}}\setminus A_{R_{0}-\rho}}E_{\mu}$ for $\rho>0$ small enough. ###### Proof. For $u\in A_{R_{0}}$, we have $E_{\mu}(u)\geq h(\|u\|_{D_{s}(\mathbb{R}^{N})})\geq\min_{t\in[0,R_{0}]}h(t)>-\infty.$ Therefore, $m(a,\mu)>-\infty.$ Moreover, for any $u\in S_{a}$, we obtain $||t\star u||_{D_{s}(\mathbb{R}^{N})}<R_{0}$ and $E_{\mu}(t\star u)<0$ for $t\ll-1$, and hence $m(a,\mu)<0.$ Since $\mathcal{P}^{+}_{a,\mu}\subset A_{R_{0}}$, we know that $m(a,\mu)\leq\inf_{\mathcal{P}^{+}_{a,\mu}}E_{\mu}.$ On the other hand, if $u\in A_{R_{0}}$, then $a_{u}\star u\in\mathcal{P}^{+}_{a,\mu}\subset A_{R_{0}}$ and $E_{\mu}(a_{u}\star u)=\min\\{E_{\mu}(t\star u):t\in\mathbb{R}\ \text{and}\ ||t\star u||_{D_{s}(\mathbb{R}^{N})}<R_{0}\\}\leq E_{\mu}(u),$ which implies that $\inf_{\mathcal{P}^{+}_{a,\mu}}E_{\mu}\leq m(a,\mu)$. 
Since $E_{\mu}\geq 0$ on $\mathcal{P}^{-}_{a,\mu}$ and $m(a,\mu)<0$, we know that $\inf_{\mathcal{P}^{+}_{a,\mu}}E_{\mu}=\inf_{\mathcal{P}_{a,\mu}}E_{\mu}.$ Finally, by the continuity of $h$ there exists $\rho>0$ such that $h(t)\geq\frac{m(a,\mu)}{2}$ if $t\in[R_{0}-\rho,R_{0}]$. Therefore, $E_{\mu}(u)\geq h(\|u\|_{D_{s}(\mathbb{R}^{N})})\geq\frac{m(a,\mu)}{2}>m(a,\mu)$ for every $u\in S_{a}$ with $R_{0}-\rho\leq\|u\|_{D_{s}(\mathbb{R}^{N})}\leq R_{0}$. This completes the proof. ∎ ## 4 $L^{2}$-critical perturbation In this section, we consider the case $N>2s$ and $2<q=\overline{p}$. Assume that $\displaystyle\mu a^{\frac{4s}{N}}<\overline{p}\left(2C^{\overline{p}}_{N,\overline{p},s}\right)^{-1}.$ (4.1) We recall the decomposition $\mathcal{P}_{a,\mu}=\mathcal{P}^{+}_{a,\mu}\cup\mathcal{P}^{0}_{a,\mu}\cup\mathcal{P}^{-}_{a,\mu}.$ ###### Lemma 4.1. $\mathcal{P}^{0}_{a,\mu}=\emptyset$ and $\mathcal{P}_{a,\mu}$ is a smooth manifold of codimension 1 in $S_{a}$. ###### Proof. Assume by contradiction that there exists $u\in\mathcal{P}^{0}_{a,\mu}$; then $\displaystyle||u||^{2}_{D_{s}(\mathbb{R}^{N})}-\mu\gamma_{q,s}\int_{\mathbb{R}^{N}}|u|^{q}dx-\int_{\mathbb{R}^{N}}|u|^{2_{s}^{\ast}}dx=0,$ (4.2) and $\displaystyle 2||u||^{2}_{D_{s}(\mathbb{R}^{N})}=\mu 2\gamma_{q,s}\int_{\mathbb{R}^{N}}|u|^{q}dx+2_{s}^{\ast}\int_{\mathbb{R}^{N}}|u|^{2_{s}^{\ast}}dx.$ (4.3) Thus, from (4.2) and (4.3), we have $\int_{\mathbb{R}^{N}}|u|^{2_{s}^{\ast}}dx=0$, that is, $u\equiv 0$, which is not possible since $u\in S_{a}.$ The rest of the proof is similar to that of Lemma 3.2, so we omit the details here. ∎ ###### Lemma 4.2. Under the condition (4.1), for every $u\in S_{a}$, there is a unique $t_{u}\in\mathbb{R}$ such that $t_{u}\star u\in\mathcal{P}_{a,\mu},$ where $t_{u}$ is the unique critical point of the function $\Psi^{\mu}_{u}$ and is a strict maximum point at positive level. Moreover, 1. $(1)$ $\mathcal{P}_{a,\mu}=\mathcal{P}^{-}_{a,\mu}$. 2. 
$(2)$ $\Psi^{\mu}_{u}(t)$ is strictly decreasing and concave on $(t_{u},+\infty)$, and $t_{u}<0$ implies that $P_{\mu}(u)<0.$ 3. $(3)$ The map $u\in S_{a}\mapsto t_{u}\in\mathbb{R}$ is of class $C^{1}$. 4. $(4)$ If $P_{\mu}(u)<0$, then $t_{u}<0$. ###### Proof. Since $\Psi^{\mu}_{u}(t)=E_{\mu}(t\star u)=\left[\frac{1}{2}||u||^{2}_{D_{s}(\mathbb{R}^{N})}-\frac{\mu}{\overline{p}}\int_{\mathbb{R}^{N}}|u|^{\overline{p}}dx\right]e^{2st}-\frac{e^{2_{s}^{\ast}st}}{2_{s}^{\ast}}\int_{\mathbb{R}^{N}}|u|^{2_{s}^{\ast}}dx,$ and $t\star u\in\mathcal{P}_{a,\mu}$ if and only if $(\Psi^{\mu}_{u})^{\prime}(t)=0$, it is easy to see that if $\left[\frac{1}{2}||u||^{2}_{D_{s}(\mathbb{R}^{N})}-\frac{\mu}{\overline{p}}\int_{\mathbb{R}^{N}}|u|^{\overline{p}}dx\right]$ is positive, then $\Psi^{\mu}_{u}(t)$ has a unique critical point $t_{u}$, which is a strict maximum point at positive level. By the fractional Gagliardo-Nirenberg-Sobolev inequality (2.4), we have $\frac{1}{2}||u||^{2}_{D_{s}(\mathbb{R}^{N})}-\frac{\mu}{\overline{p}}\int_{\mathbb{R}^{N}}|u|^{\overline{p}}dx\geq\left(\frac{1}{2}-\frac{\mu}{\overline{p}}C^{\overline{p}}_{N,\overline{p},s}a^{\frac{4s}{N}}\right)||u||^{2}_{D_{s}(\mathbb{R}^{N})},$ so under the condition $\mu a^{\frac{4s}{N}}<\overline{p}(2C^{\overline{p}}_{N,\overline{p},s})^{-1},$ we know that $\frac{1}{2}||u||^{2}_{D_{s}(\mathbb{R}^{N})}-\frac{\mu}{\overline{p}}\int_{\mathbb{R}^{N}}|u|^{\overline{p}}dx>0.$ If $u\in\mathcal{P}_{a,\mu}$, then $t=0$ is the critical point of $\Psi^{\mu}_{u}$, and since it is a maximum point, we have $(\Psi^{\mu}_{u})^{\prime\prime}(0)\leq 0.$ Since $\mathcal{P}^{0}_{a,\mu}=\emptyset,$ we have $(\Psi^{\mu}_{u})^{\prime\prime}(0)<0.$ Thus, $\mathcal{P}_{a,\mu}=\mathcal{P}^{-}_{a,\mu}.$ To prove that the map $u\in S_{a}\mapsto t_{u}\in\mathbb{R}$ is of class $C^{1}$, we can apply the implicit function theorem as in Lemma 3.3. Finally, since $(\Psi^{\mu}_{u})^{\prime}(t)<0$ if and only if $t>t_{u}$, we have $P_{\mu}(u)=(\Psi^{\mu}_{u})^{\prime}(0)<0$ if and only if $t_{u}<0$. ∎ ###### Lemma 4.3. 
$m(a,\mu)=\inf_{\mathcal{P}_{a,\mu}}E_{\mu}>0.$ ###### Proof. If $u\in\mathcal{P}_{a,\mu},$ then $P_{\mu}(u)=0$, and by the fractional Gagliardo-Nirenberg-Sobolev inequality (2.4) and the Sobolev inequality (1.5), we have $\displaystyle||u||^{2}_{D_{s}(\mathbb{R}^{N})}=\mu\frac{2}{\overline{p}}\int_{\mathbb{R}^{N}}|u|^{\overline{p}}dx+\int_{\mathbb{R}^{N}}|u|^{2_{s}^{\ast}}dx\leq\mu\frac{2}{\overline{p}}C^{\overline{p}}_{N,\overline{p},s}a^{\frac{4s}{N}}||u||^{2}_{D_{s}(\mathbb{R}^{N})}+S^{-\frac{2_{s}^{\ast}}{2}}_{s}||u||^{2_{s}^{\ast}}_{D_{s}(\mathbb{R}^{N})}.$ From (4.1) and the above inequality, we have $\displaystyle||u||^{2_{s}^{\ast}}_{D_{s}(\mathbb{R}^{N})}\geq S^{\frac{2_{s}^{\ast}}{2}}_{s}\left(1-\mu\frac{2}{\overline{p}}C^{\overline{p}}_{N,\overline{p},s}a^{\frac{4s}{N}}\right)||u||^{2}_{D_{s}(\mathbb{R}^{N})}\Rightarrow\inf_{\mathcal{P}_{a,\mu}}||u||_{D_{s}(\mathbb{R}^{N})}>0.$ (4.4) Thus, from $P_{\mu}(u)=0$ and the above inequality, we have $\displaystyle E_{\mu}(u)$ $\displaystyle=\frac{s}{N}\left(||u||^{2}_{D_{s}(\mathbb{R}^{N})}-\frac{2\mu}{\overline{p}}\int_{\mathbb{R}^{N}}|u|^{\overline{p}}dx\right)$ $\displaystyle\geq\frac{s}{N}\left(1-\mu\frac{2}{\overline{p}}C^{\overline{p}}_{N,\overline{p},s}a^{\frac{4s}{N}}\right)||u||^{2}_{D_{s}(\mathbb{R}^{N})}>0.$ Therefore, $m(a,\mu)=\inf_{\mathcal{P}_{a,\mu}}E_{\mu}>0,$ as required. ∎ ###### Lemma 4.4. There exists $k>0$ sufficiently small such that $0<\sup_{\overline{A_{k}}}E_{\mu}<m(a,\mu)\ \text{and}\ u\in\overline{A_{k}}\Rightarrow E_{\mu}(u)>0,\ P_{\mu}(u)>0,$ where $A_{k}=\left\\{u\in S_{a}:||u||^{2}_{D_{s}(\mathbb{R}^{N})}<k\right\\}.$ ###### Proof. 
By the fractional Gagliardo-Nirenberg-Sobolev inequality (2.4) and the Sobolev inequality (1.5), we have $\displaystyle E_{\mu}(u)\geq\left(\frac{1}{2}-\frac{\mu}{\overline{p}}C^{\overline{p}}_{N,\overline{p},s}a^{\frac{4s}{N}}\right)||u||^{2}_{D_{s}(\mathbb{R}^{N})}-\frac{1}{2_{s}^{\ast}}S^{-\frac{2_{s}^{\ast}}{2}}_{s}||u||^{2_{s}^{\ast}}_{D_{s}(\mathbb{R}^{N})}>0,$ and $\displaystyle P_{\mu}(u)$ $\displaystyle=s||u||^{2}_{D_{s}(\mathbb{R}^{N})}-s\mu\frac{2}{\overline{p}}\int_{\mathbb{R}^{N}}|u|^{\overline{p}}dx-s\int_{\mathbb{R}^{N}}|u|^{2_{s}^{\ast}}dx$ $\displaystyle\geq s\left(1-\frac{2\mu}{\overline{p}}C^{\overline{p}}_{N,\overline{p},s}a^{\frac{4s}{N}}\right)||u||^{2}_{D_{s}(\mathbb{R}^{N})}-sS^{-\frac{2_{s}^{\ast}}{2}}_{s}||u||^{2_{s}^{\ast}}_{D_{s}(\mathbb{R}^{N})}>0,$ provided that $u\in\overline{A_{k}}$ for $k$ small enough. By Lemma 4.3, we know that $m(a,\mu)>0$; thus, if necessary replacing $k$ with a smaller quantity, we also have $E_{\mu}(u)\leq\frac{1}{2}||u||^{2}_{D_{s}(\mathbb{R}^{N})}<m(a,\mu).$ The proof is complete. ∎ In order to apply Proposition 2.2 and recover compactness, we need an estimate from above on $m_{r}(a,\mu)=\inf_{\mathcal{P}_{a,\mu}\bigcap S^{r}_{a}}E_{\mu}$, where $S^{r}_{a}$ is the subset of radial functions in $S_{a}.$ ###### Lemma 4.5. Under condition (4.1), we have $m_{r}(a,\mu)<\frac{s}{N}S^{\frac{N}{2s}}_{s}.$ ###### Proof. From [15], we know that $S_{s}$ is attained in $\mathbb{R}^{N}$ by $U_{0}(x)=C(N,s)\left(\frac{1}{1+|x|^{2}}\right)^{\frac{N-2s}{2}}$ with $C(N,s)$ chosen so that $\|U_{0}(x)\|^{2}_{D_{s}(\mathbb{R}^{N})}=\int_{\mathbb{R}^{N}}|U_{0}(x)|^{2_{s}^{\ast}}dx=S^{\frac{N}{2s}}_{s}.$ Let $\eta(x)\in C^{\infty}_{0}(\mathbb{R}^{N},[0,1])$ be a cut-off function such that $0\leq\eta\leq 1$, $\eta=1\ \text{on}\ B(0,\delta)$ and $\eta=0\ \text{on}\ \mathbb{R}^{N}\setminus B(0,2\delta)$. 
Let $U_{\epsilon}(x)=\epsilon^{-\frac{N-2s}{2}}U_{0}(\frac{x}{\epsilon}),\ \ u_{\epsilon}=\eta(x)U_{\epsilon}(x)\ \ \text{and}\ \ v_{\epsilon}=a\frac{u_{\epsilon}}{\|u_{\epsilon}\|_{L^{2}(\mathbb{R}^{N})}}.$ From Proposition 21 and Proposition 22 in [31], it is easy to deduce that the following estimates hold: $\|v_{\epsilon}\|^{2}_{D_{s}(\mathbb{R}^{N})}\leq\|U_{0}\|^{2}_{D_{s}(\mathbb{R}^{N})}+o(\epsilon^{N-2s}),$ (4.5) $\int_{\mathbb{R}^{N}}|v_{\epsilon}|^{2}dx=\left\\{\begin{array}[]{rcl}C\epsilon^{2s}+o(\epsilon^{N-2s}),&&if\ N>4s,\\\ C\epsilon^{2s}\log(\frac{1}{\epsilon})+o(\epsilon^{2s}),&&if\ N=4s,\\\ C\epsilon^{N-2s}+o(\epsilon^{2s}),&&if\ N<4s.\end{array}\right.$ (4.6) It is easy to see that $\int_{\mathbb{R}^{N}}|v_{\epsilon}|^{{}^{2_{s}^{\ast}}}dx=\int_{\mathbb{R}^{N}}|U_{0}|^{{}^{2_{s}^{\ast}}}dx+o(\epsilon^{N}).$ (4.7) By arguments similar to those of Proposition 22 in [31], we can deduce that $\int_{\mathbb{R}^{N}}|v_{\epsilon}|^{\overline{p}}dx=\left\\{\begin{array}[]{rcl}&C\epsilon^{N-\frac{N-2s}{2}\overline{p}}+o(\epsilon^{\frac{N-2s}{2}\overline{p}}),&if\ N>\frac{\overline{p}}{\overline{p}-1}2s,\\\ &C\epsilon^{\frac{N}{2}}\log(\frac{1}{\epsilon})+o(\epsilon^{\frac{N}{2}}),&if\ N=\frac{\overline{p}}{\overline{p}-1}2s,\\\ &C\epsilon^{\frac{N-2s}{2}\overline{p}}+o(\epsilon^{N-\frac{N-2s}{2}\overline{p}}),&if\ N<\frac{\overline{p}}{\overline{p}-1}2s.\end{array}\right.$ (4.8) It is easy to see that $u_{\epsilon}\in C^{\infty}_{0}(\mathbb{R}^{N})$ and $v_{\epsilon}\in S_{a}^{r}.$ By Lemma 4.2, we know that $m_{r}(a,\mu)=\inf_{\mathcal{P}_{a,\mu}\bigcap S^{r}_{a}}E_{\mu}\leq E_{\mu}(t_{v_{\epsilon}}\star v_{\epsilon})=\max_{t\in\mathbb{R}}E_{\mu}(t\star v_{\epsilon}).$ Next, we give an upper estimate of $E_{\mu}(t_{v_{\epsilon}}\star v_{\epsilon})=\max_{t\in\mathbb{R}}E_{\mu}(t\star v_{\epsilon}).$ Step 1. 
Consider the case $\mu=0$ and estimate $\max_{t\in\mathbb{R}}\Psi^{0}_{v_{\epsilon}}(t)$, where $\Psi^{0}_{v_{\epsilon}}(t)=E_{0}(t\star v_{\epsilon})=\frac{e^{2st}}{2}||v_{\epsilon}||^{2}_{D_{s}(\mathbb{R}^{N})}-\frac{e^{2_{s}^{\ast}st}}{2_{s}^{\ast}}\int_{\mathbb{R}^{N}}|v_{\epsilon}|^{2_{s}^{\ast}}dx.$ It is easy to see that for every $v_{\epsilon}\in S_{a}$ the function $\Psi^{0}_{v_{\epsilon}}(t)$ has a unique critical point $t_{v_{\epsilon},0}$, which is a strict maximum point and is given by $\displaystyle e^{st_{v_{\epsilon},0}}=\left(\frac{||v_{\epsilon}||^{2}_{D_{s}(\mathbb{R}^{N})}}{\int_{\mathbb{R}^{N}}|v_{\epsilon}|^{2_{s}^{\ast}}dx}\right)^{\frac{1}{2_{s}^{\ast}-2}}.$ (4.9) Thus, from the estimates (4.5)–(4.7), we have $\displaystyle\max_{t\in\mathbb{R}}E_{0}(t\star v_{\epsilon})$ $\displaystyle=E_{0}(t_{v_{\epsilon},0}\star v_{\epsilon})=\Psi^{0}_{v_{\epsilon}}(t_{v_{\epsilon},0})$ $\displaystyle=\left[\frac{1}{2}\left(\frac{||v_{\epsilon}||^{2}_{D_{s}(\mathbb{R}^{N})}}{\int_{\mathbb{R}^{N}}|v_{\epsilon}|^{2_{s}^{\ast}}dx}\right)^{\frac{2}{2_{s}^{\ast}-2}}||v_{\epsilon}||^{2}_{D_{s}(\mathbb{R}^{N})}-\frac{1}{2_{s}^{\ast}}\left(\frac{||v_{\epsilon}||^{2}_{D_{s}(\mathbb{R}^{N})}}{\int_{\mathbb{R}^{N}}|v_{\epsilon}|^{2_{s}^{\ast}}dx}\right)^{\frac{2_{s}^{\ast}}{2_{s}^{\ast}-2}}\int_{\mathbb{R}^{N}}|v_{\epsilon}|^{2_{s}^{\ast}}dx\right]$ $\displaystyle=\frac{s}{N}\left(\frac{||v_{\epsilon}||^{2}_{D_{s}(\mathbb{R}^{N})}}{\left(\int_{\mathbb{R}^{N}}|v_{\epsilon}|^{2_{s}^{\ast}}dx\right)^{\frac{2}{2_{s}^{\ast}}}}\right)^{\frac{2_{s}^{\ast}}{2_{s}^{\ast}-2}}=\frac{s}{N}\left[\frac{S^{\frac{N}{2s}}_{s}+O(\epsilon^{N-2s})}{\left(S^{\frac{N}{2s}}_{s}+O(\epsilon^{N})\right)^{\frac{2}{2_{s}^{\ast}}}}\right]^{\frac{N}{2s}}=\frac{s}{N}S^{\frac{N}{2s}}_{s}+O(\epsilon^{N-2s}).$ Step 2. Estimate on $t_{v_{\epsilon},\mu}$. 
Note that $\Psi^{\mu}_{v_{\epsilon}}(t)=E_{\mu}(t\star v_{\epsilon})=\frac{e^{2st}}{2}||v_{\epsilon}||^{2}_{D_{s}(\mathbb{R}^{N})}-\mu\frac{e^{2st}}{\overline{p}}\int_{\mathbb{R}^{N}}|v_{\epsilon}|^{\overline{p}}dx-\frac{e^{2_{s}^{\ast}st}}{2_{s}^{\ast}}\int_{\mathbb{R}^{N}}|v_{\epsilon}|^{2_{s}^{\ast}}dx.$ Let $t_{v_{\epsilon},\mu}$ be the unique maximum point of $\Psi^{\mu}_{v_{\epsilon}}(t)$; then, by $(\Psi^{\mu}_{v_{\epsilon}})^{\prime}(t_{v_{\epsilon},\mu})=P_{\mu}(t_{v_{\epsilon},\mu}\star v_{\epsilon})=0$ and the fractional Gagliardo-Nirenberg-Sobolev inequality (2.4), we have $e^{(2_{s}^{\ast}-2)st_{v_{\epsilon},\mu}}=\frac{||v_{\epsilon}||^{2}_{D_{s}(\mathbb{R}^{N})}}{\int_{\mathbb{R}^{N}}|v_{\epsilon}|^{2_{s}^{\ast}}dx}-\frac{2\mu}{\overline{p}}\frac{\int_{\mathbb{R}^{N}}|v_{\epsilon}|^{\overline{p}}dx}{\int_{\mathbb{R}^{N}}|v_{\epsilon}|^{2_{s}^{\ast}}dx}\geq\left(1-\frac{2\mu}{\overline{p}}C^{\overline{p}}_{N,\overline{p},s}a^{\frac{4s}{N}}\right)\frac{||v_{\epsilon}||^{2}_{D_{s}(\mathbb{R}^{N})}}{\int_{\mathbb{R}^{N}}|v_{\epsilon}|^{2_{s}^{\ast}}dx}.$ Step 3. Estimate on $\max_{t\in\mathbb{R}}\Psi^{\mu}_{v_{\epsilon}}(t)$. 
Since $\displaystyle\max_{t\in\mathbb{R}}\Psi^{\mu}_{v_{\epsilon}}(t)=\Psi^{\mu}_{v_{\epsilon}}(t_{v_{\epsilon},\mu})=\Psi^{0}_{v_{\epsilon}}(t_{v_{\epsilon},\mu})-\mu\frac{e^{2st_{v_{\epsilon},\mu}}}{\overline{p}}\int_{\mathbb{R}^{N}}|v_{\epsilon}|^{\overline{p}}dx$ $\displaystyle\leq\sup_{\mathbb{R}}\Psi^{0}_{v_{\epsilon}}-\frac{\mu}{\overline{p}}\left(1-\frac{2\mu}{\overline{p}}C^{\overline{p}}_{N,\overline{p},s}a^{\frac{4s}{N}}\right)^{\frac{2}{2_{s}^{\ast}-2}}\left(\frac{||v_{\epsilon}||^{2}_{D_{s}(\mathbb{R}^{N})}}{\int_{\mathbb{R}^{N}}|v_{\epsilon}|^{2_{s}^{\ast}}dx}\right)^{\frac{2}{2_{s}^{\ast}-2}}\int_{\mathbb{R}^{N}}|v_{\epsilon}|^{\overline{p}}dx$ $\displaystyle\leq\frac{s}{N}S^{\frac{N}{2s}}_{s}+O(\epsilon^{N-2s})-\frac{\mu}{\overline{p}}\left(1-\frac{2\mu}{\overline{p}}C^{\overline{p}}_{N,\overline{p},s}a^{\frac{4s}{N}}\right)^{\frac{2}{2_{s}^{\ast}-2}}\frac{a^{\frac{4s}{N}}}{\|u_{\epsilon}\|^{\frac{4s}{N}}_{L^{2}(\mathbb{R}^{N})}}\frac{||u_{\epsilon}||^{\frac{4}{2_{s}^{\ast}-2}}_{D_{s}(\mathbb{R}^{N})}\int_{\mathbb{R}^{N}}|u_{\epsilon}|^{\overline{p}}dx}{\left(\int_{\mathbb{R}^{N}}|u_{\epsilon}|^{2_{s}^{\ast}}dx\right)^{\frac{2}{2_{s}^{\ast}-2}}}$ $\displaystyle\leq\frac{s}{N}S^{\frac{N}{2s}}_{s}+O(\epsilon^{N-2s})-C_{N,a,\mu}\frac{\int_{\mathbb{R}^{N}}|u_{\epsilon}|^{\overline{p}}dx}{\|u_{\epsilon}\|^{\frac{4s}{N}}_{L^{2}(\mathbb{R}^{N})}}$ From (4.6) and (4.8), we have the following estimate: $\displaystyle\frac{\int_{\mathbb{R}^{N}}|u_{\epsilon}|^{\overline{p}}dx}{\|u_{\epsilon}\|^{\frac{4s}{N}}_{L^{2}(\mathbb{R}^{N})}}=\left\\{\begin{aligned} &C\epsilon^{N-\frac{N-2s}{2}\overline{p}-\frac{4s^{2}}{N}}=C,&\text{if}\ &N>4s,\\\ &C\epsilon^{4s-s\overline{p}-s}|\ln\epsilon|^{-\frac{1}{2}}=C|\ln\epsilon|^{-\frac{1}{2}},&\text{if}\ &N=4s,\\\ &C\epsilon^{N-\frac{N-2s}{2}\overline{p}-\frac{N-2s}{2}\frac{4s}{N}}=C\epsilon^{\frac{2s(4s-N)}{N}},&\text{if}\ &\frac{\overline{p}}{\overline{p}-1}2s<N<4s,\\\ 
&C\epsilon^{\frac{N}{2}-\frac{N-2s}{2}\frac{4s}{N}}|\ln\epsilon|,&\text{if}\ &N=\frac{\overline{p}}{\overline{p}-1}2s,\\\ &C\epsilon^{\frac{N-2s}{2}\overline{p}-\frac{N-2s}{2}\frac{4s}{N}}=C\epsilon^{N-2s},&\text{if}\ &2s<N<\frac{\overline{p}}{\overline{p}-1}2s.\end{aligned}\right.$ Thus, for $\epsilon$ small enough, $\max_{t\in\mathbb{R}}\Psi^{\mu}_{v_{\epsilon}}(t)<\frac{s}{N}S^{\frac{N}{2s}}_{s}.$ The proof is thus finished. ∎ ## 5 $L^{2}$-supercritical perturbation In this section, we consider $N>2s$ and $\overline{p}<q<2_{s}^{\ast}$. We recall the decomposition $\mathcal{P}_{a,\mu}=\mathcal{P}^{+}_{a,\mu}\cup\mathcal{P}^{0}_{a,\mu}\cup\mathcal{P}^{-}_{a,\mu}.$ ###### Lemma 5.1. $\mathcal{P}^{0}_{a,\mu}=\emptyset$ and $\mathcal{P}_{a,\mu}$ is a smooth manifold of codimension 1 in $S_{a}$. ###### Proof. Assume by contradiction that there exists a $u\in\mathcal{P}^{0}_{a,\mu}$, then $\displaystyle||u||^{2}_{D_{s}(\mathbb{R}^{N})}-\mu\gamma_{q,s}\int_{\mathbb{R}^{N}}|u|^{q}dx-\int_{\mathbb{R}^{N}}|u|^{2_{s}^{\ast}}dx=0,$ (5.1) and $\displaystyle 2||u||^{2}_{D_{s}(\mathbb{R}^{N})}=\mu q\gamma^{2}_{q,s}\int_{\mathbb{R}^{N}}|u|^{q}dx+2_{s}^{\ast}\int_{\mathbb{R}^{N}}|u|^{2_{s}^{\ast}}dx.$ (5.2) Thus, from (5.1) and (5.2), we have $(2-q\gamma_{q,s})\mu\gamma_{q,s}\int_{\mathbb{R}^{N}}|u|^{q}dx=(2_{s}^{\ast}-2)\int_{\mathbb{R}^{N}}|u|^{2_{s}^{\ast}}dx=0.$ Since $2-q\gamma_{q,s}<0$ and $2_{s}^{\ast}-2>0$, both sides must vanish, so $u=0,$ which is impossible since $u\in S_{a}.$ The rest of the proof is similar to that of Lemma 3.2, so we omit the details here. ∎ ###### Lemma 5.2. For every $u\in S_{a}$, there is a unique $t_{u}\in\mathbb{R}$ such that $t_{u}\star u\in\mathcal{P}_{a,\mu},$ where $t_{u}$ is the unique critical point of the function $\Psi^{\mu}_{u}$ and is a strict maximum point at positive level; moreover, 1. $(1)$ $\mathcal{P}_{a,\mu}=\mathcal{P}^{-}_{a,\mu}$. 2. $(2)$ $\Psi^{\mu}_{u}(t)$ is strictly decreasing and concave on $(t_{u},+\infty)$ and $t_{u}<0$ implies that $P_{\mu}(u)<0.$ 3. 
$(3)$ The map $u\in S_{a}\mapsto t_{u}\in\mathbb{R}$ is of class $C^{1}$. 4. $(4)$ If $P_{\mu}(u)<0$, then $t_{u}<0$. ###### Proof. Since $\Psi^{\mu}_{u}(t)=E_{\mu}(t\star u)=\frac{e^{2st}}{2}||u||^{2}_{D_{s}(\mathbb{R}^{N})}-\mu\frac{e^{q\gamma_{q,s}st}}{q}\int_{\mathbb{R}^{N}}|u|^{q}dx-\frac{e^{2_{s}^{\ast}st}}{2_{s}^{\ast}}\int_{\mathbb{R}^{N}}|u|^{2_{s}^{\ast}}dx,$ and $(\Psi^{\mu}_{u})^{\prime}(t)=se^{2st}||u||^{2}_{D_{s}(\mathbb{R}^{N})}-\mu\gamma_{q,s}se^{q\gamma_{q,s}st}\int_{\mathbb{R}^{N}}|u|^{q}dx- se^{2_{s}^{\ast}st}\int_{\mathbb{R}^{N}}|u|^{2_{s}^{\ast}}dx,$ it follows that $(\Psi^{\mu}_{u})^{\prime}(t)=0$ if and only if $||u||^{2}_{D_{s}(\mathbb{R}^{N})}=f(t):=\mu\gamma_{q,s}e^{(q\gamma_{q,s}-2)st}\int_{\mathbb{R}^{N}}|u|^{q}dx+e^{(2_{s}^{\ast}-2)st}\int_{\mathbb{R}^{N}}|u|^{2_{s}^{\ast}}dx.$ It is easy to see that $f(t)$ is positive, continuous and monotone increasing, with $f(t)\rightarrow 0^{+}$ as $t\rightarrow-\infty$ and $f(t)\rightarrow+\infty$ as $t\rightarrow+\infty$. Thus, there exists a unique point $t_{u}\in\mathbb{R}$ such that $f(t_{u})=||u||^{2}_{D_{s}(\mathbb{R}^{N})}$. Since $\Psi^{\mu}_{u}(t)\rightarrow 0^{+}$ as $t\rightarrow-\infty$ and $\Psi^{\mu}_{u}(t)\rightarrow-\infty$ as $t\rightarrow+\infty$, we know that $t_{u}$ is the unique value of $t$ for which $t\star u\in\mathcal{P}_{a,\mu},$ and that $t_{u}$ is the unique critical point of the function $\Psi^{\mu}_{u}$ and is a strict maximum point at positive level. Since $t_{u}$ is a strict maximum point, we know that $(\Psi^{\mu}_{u})^{\prime\prime}(t_{u})\leq 0.$ Because $\mathcal{P}^{0}_{a,\mu}=\emptyset,$ we have $(\Psi^{\mu}_{u})^{\prime\prime}(t_{u})\neq 0,$ which implies that $t_{u}\star u\in\mathcal{P}^{-}_{a,\mu}$; since $\Psi^{\mu}_{u}(t)$ has exactly one maximum point, it follows that $\mathcal{P}_{a,\mu}=\mathcal{P}^{-}_{a,\mu}.$ To prove that the map $u\in S_{a}\mapsto t_{u}\in\mathbb{R}$ is of class $C^{1}$, we can apply the implicit function theorem as in Lemma 3.3. 
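For the reader's convenience, the second derivative invoked in the previous argument can be computed directly from the expression of $(\Psi^{\mu}_{u})^{\prime}(t)$ given above; the following routine differentiation is added here only as a sketch.

```latex
% Differentiating (\Psi^{\mu}_{u})'(t) once more:
(\Psi^{\mu}_{u})^{\prime\prime}(t)
  = 2s^{2}e^{2st}\,||u||^{2}_{D_{s}(\mathbb{R}^{N})}
  - \mu q\gamma_{q,s}^{2}s^{2}e^{q\gamma_{q,s}st}\int_{\mathbb{R}^{N}}|u|^{q}\,dx
  - 2_{s}^{\ast}s^{2}e^{2_{s}^{\ast}st}\int_{\mathbb{R}^{N}}|u|^{2_{s}^{\ast}}\,dx ,
% so (\Psi^{\mu}_{u})''(t_{u}) < 0 is precisely the strict inequality
% characterizing t_{u} \star u \in \mathcal{P}^{-}_{a,\mu}.
```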
Finally, $(\Psi^{\mu}_{u})^{\prime}(t)<0$ if and only if $t>t_{u}$, so $P_{\mu}(u)=(\Psi^{\mu}_{u})^{\prime}(0)<0$ if and only if $t_{u}<0$. ∎ ###### Lemma 5.3. There holds $m(a,\mu)=\inf_{\mathcal{P}_{a,\mu}}E_{\mu}>0.$ ###### Proof. If $u\in\mathcal{P}_{a,\mu},$ then $P_{\mu}(u)=0$, so by the fractional Gagliardo-Nirenberg-Sobolev inequality (2.4) and the Sobolev inequality (1.5), we have $\displaystyle||u||^{2}_{D_{s}(\mathbb{R}^{N})}$ $\displaystyle=\mu\gamma_{q,s}\int_{\mathbb{R}^{N}}|u|^{q}dx+\int_{\mathbb{R}^{N}}|u|^{2_{s}^{\ast}}dx$ $\displaystyle\leq\mu\gamma_{q,s}C^{q}_{N,q,s}a^{(1-\gamma_{q,s})q}||u||^{q\gamma_{q,s}}_{D_{s}(\mathbb{R}^{N})}+S^{-\frac{2_{s}^{\ast}}{2}}_{s}||u||^{2_{s}^{\ast}}_{D_{s}(\mathbb{R}^{N})}.$ Thus, from the above inequality and $||u||^{2}_{D_{s}(\mathbb{R}^{N})}\neq 0$ (since $u\in S_{a}$), we have $\displaystyle\mu\gamma_{q,s}C^{q}_{N,q,s}a^{(1-\gamma_{q,s})q}||u||^{q\gamma_{q,s}-2}_{D_{s}(\mathbb{R}^{N})}+S^{-\frac{2_{s}^{\ast}}{2}}_{s}||u||^{2_{s}^{\ast}-2}_{D_{s}(\mathbb{R}^{N})}\geq 1,\ \forall\ u\in\mathcal{P}_{a,\mu},$ which implies that $\inf_{u\in\mathcal{P}_{a,\mu}}||u||_{D_{s}(\mathbb{R}^{N})}>0$. 
Since $\mu\gamma_{q,s}\int_{\mathbb{R}^{N}}|u|^{q}dx+\int_{\mathbb{R}^{N}}|u|^{2_{s}^{\ast}}dx=||u||^{2}_{D_{s}(\mathbb{R}^{N})},$ we have $\inf_{u\in\mathcal{P}_{a,\mu}}\left[\mu\gamma_{q,s}\int_{\mathbb{R}^{N}}|u|^{q}dx+\int_{\mathbb{R}^{N}}|u|^{2_{s}^{\ast}}dx\right]>0.$ Thus, from $P_{\mu}(u)=0$ and the above inequality, we have $\displaystyle\inf_{u\in\mathcal{P}_{a,\mu}}E_{\mu}(u)$ $\displaystyle=\inf_{u\in\mathcal{P}_{a,\mu}}\left[\frac{1}{2}||u||^{2}_{D_{s}(\mathbb{R}^{N})}-\frac{\mu}{q}\int_{\mathbb{R}^{N}}|u|^{q}dx-\frac{1}{2_{s}^{\ast}}\int_{\mathbb{R}^{N}}|u|^{2_{s}^{\ast}}dx\right]$ $\displaystyle=\inf_{u\in\mathcal{P}_{a,\mu}}\left[\frac{\mu}{q}\left(\frac{q\gamma_{q,s}}{2}-1\right)\int_{\mathbb{R}^{N}}|u|^{q}dx+\frac{s}{N}\int_{\mathbb{R}^{N}}|u|^{2_{s}^{\ast}}dx\right]>0.$ Therefore, $m(a,\mu)=\inf_{\mathcal{P}_{a,\mu}}E_{\mu}>0.$ This finishes the proof. ∎ ###### Lemma 5.4. There exists $k>0$ sufficiently small such that $0<\sup_{\overline{A_{k}}}E_{\mu}<m(a,\mu)\ \text{and}\ u\in\overline{A_{k}}\Rightarrow E_{\mu}(u),P_{\mu}(u)>0,$ where $A_{k}=\left\\{u\in S_{a}:||u||^{2}_{D_{s}(\mathbb{R}^{N})}<k\right\\}.$ ###### Proof. By the fractional Gagliardo-Nirenberg-Sobolev inequality (2.4) and the Sobolev inequality (1.5), we have $\displaystyle E_{\mu}(u)\geq\frac{1}{2}||u||^{2}_{D_{s}(\mathbb{R}^{N})}-\frac{\mu}{q}C^{q}_{N,q,s}a^{q(1-\gamma_{q,s})}||u||^{q\gamma_{q,s}}_{D_{s}(\mathbb{R}^{N})}-\frac{1}{2_{s}^{\ast}}S^{-\frac{2_{s}^{\ast}}{2}}_{s}||u||^{2_{s}^{\ast}}_{D_{s}(\mathbb{R}^{N})}>0,$ and $\displaystyle P_{\mu}(u)$ $\displaystyle=s||u||^{2}_{D_{s}(\mathbb{R}^{N})}-s\mu\gamma_{q,s}\int_{\mathbb{R}^{N}}|u|^{q}dx-s\int_{\mathbb{R}^{N}}|u|^{2_{s}^{\ast}}dx$ $\displaystyle\geq s||u||^{2}_{D_{s}(\mathbb{R}^{N})}-s\mu\gamma_{q,s}C^{q}_{N,q,s}a^{q(1-\gamma_{q,s})}||u||^{q\gamma_{q,s}}_{D_{s}(\mathbb{R}^{N})}-sS^{-\frac{2_{s}^{\ast}}{2}}_{s}||u||^{2_{s}^{\ast}}_{D_{s}(\mathbb{R}^{N})}>0,$ provided $u\in\overline{A_{k}}$ with $k$ small enough. 
By Lemma 5.3, we know that $m(a,\mu)>0$; thus, replacing $k$ by a smaller quantity if necessary, we also have $E_{\mu}(u)\leq\frac{1}{2}||u||^{2}_{D_{s}(\mathbb{R}^{N})}<m(a,\mu).$ This ends the proof. ∎ In order to apply Proposition 2.2 and recover compactness, we need an estimate from above on $m_{r}(a,\mu)=\inf_{\mathcal{P}_{a,\mu}\bigcap S^{r}_{a}}E_{\mu}$, where $S^{r}_{a}$ is the subset of radial functions in $S_{a}.$ ###### Lemma 5.5. If one of the following conditions holds: 1. $(1)$ $N>4s\ \text{and}\ \mu a^{q(1-\gamma_{q,s})}<\frac{S^{\frac{N}{4s}q(1-\gamma_{q,s})}_{s}}{\gamma_{q,s}}$; 2. $(2)$ $N=\frac{q}{q-1}2s\ \text{and}\ \mu a^{q(1-\gamma_{q,s})}<\frac{S^{\frac{N}{4s}q(1-\gamma_{q,s})}_{s}}{\gamma_{q,s}}$; 3. $(3)$ $N=4s\ \text{or}\ \frac{q}{q-1}2s<N<4s\ \text{or}\ 2s<N<\frac{q}{q-1}2s$, then we have $m_{r}(a,\mu)<\frac{s}{N}S^{\frac{N}{2s}}_{s}.$ ###### Proof. Let us recall the definition of $u_{\epsilon}$ and $v_{\epsilon}$ as in Lemma 4.5. It is easy to see that $u_{\epsilon}\in C^{\infty}_{0}(\mathbb{R}^{N},[0,1])$ and $v_{\epsilon}\in S_{a}^{r}.$ By Lemma 5.2, we know that $m_{r}(a,\mu)=\inf_{\mathcal{P}_{a,\mu}\bigcap S^{r}_{a}}E_{\mu}\leq E_{\mu}(t_{v_{\epsilon},\mu}\star v_{\epsilon})=\max_{t\in\mathbb{R}}E_{\mu}(t\star v_{\epsilon}).$ By the same argument as Step 1 in Lemma 4.5, we have $\Psi^{0}_{v_{\epsilon}}(t_{v_{\epsilon},0})=\frac{s}{N}S^{\frac{N}{2s}}_{s}+O(\epsilon^{N-2s}).$ Step 1. Estimate on $t_{v_{\epsilon},\mu}$. 
Since $\Psi^{\mu}_{v_{\epsilon}}(t)=E_{\mu}(t\star v_{\epsilon})=\frac{e^{2st}}{2}||v_{\epsilon}||^{2}_{D_{s}(\mathbb{R}^{N})}-\mu\frac{e^{q\gamma_{q,s}st}}{q}\int_{\mathbb{R}^{N}}|v_{\epsilon}|^{q}dx-\frac{e^{2_{s}^{\ast}st}}{2_{s}^{\ast}}\int_{\mathbb{R}^{N}}|v_{\epsilon}|^{2_{s}^{\ast}}dx$ and $t_{v_{\epsilon},\mu}$ is the unique maximum point of $\Psi^{\mu}_{v_{\epsilon}}(t)$, by $(\Psi^{\mu}_{v_{\epsilon}})^{\prime}(t_{v_{\epsilon},\mu})=P_{\mu}(t_{v_{\epsilon},\mu}\star v_{\epsilon})=0,$ we have $e^{2_{s}^{\ast}st_{v_{\epsilon},\mu}}\int_{\mathbb{R}^{N}}|v_{\epsilon}|^{2_{s}^{\ast}}dx=e^{2st_{v_{\epsilon},\mu}}||v_{\epsilon}||^{2}_{D_{s}(\mathbb{R}^{N})}-\mu\gamma_{q,s}e^{q\gamma_{q,s}st_{v_{\epsilon},\mu}}\int_{\mathbb{R}^{N}}|v_{\epsilon}|^{q}dx\leq e^{2st_{v_{\epsilon},\mu}}||v_{\epsilon}||^{2}_{D_{s}(\mathbb{R}^{N})},$ which means that $\displaystyle e^{st_{v_{\epsilon},\mu}}\leq\left(\frac{||v_{\epsilon}||^{2}_{D_{s}(\mathbb{R}^{N})}}{\int_{\mathbb{R}^{N}}|v_{\epsilon}|^{2_{s}^{\ast}}dx}\right)^{\frac{1}{2_{s}^{\ast}-2}}.$ (5.3) By (5.3), $q\gamma_{q,s}>2$ and $v_{\epsilon}=\frac{au_{\epsilon}}{\|u_{\epsilon}\|_{L^{2}(\mathbb{R}^{N})}}$, we have $\displaystyle e^{(2_{s}^{\ast}-2)st_{v_{\epsilon},\mu}}$ (5.4) $\displaystyle=\frac{||v_{\epsilon}||^{2}_{D_{s}(\mathbb{R}^{N})}}{\int_{\mathbb{R}^{N}}|v_{\epsilon}|^{2_{s}^{\ast}}dx}-\mu\gamma_{q,s}e^{(q\gamma_{q,s}-2)st_{v_{\epsilon},\mu}}\frac{\int_{\mathbb{R}^{N}}|v_{\epsilon}|^{q}dx}{\int_{\mathbb{R}^{N}}|v_{\epsilon}|^{2_{s}^{\ast}}dx}$ $\displaystyle\geq\frac{||v_{\epsilon}||^{2}_{D_{s}(\mathbb{R}^{N})}}{\int_{\mathbb{R}^{N}}|v_{\epsilon}|^{2_{s}^{\ast}}dx}-\mu\gamma_{q,s}\frac{\int_{\mathbb{R}^{N}}|v_{\epsilon}|^{q}dx}{\int_{\mathbb{R}^{N}}|v_{\epsilon}|^{2_{s}^{\ast}}dx}\left(\frac{||v_{\epsilon}||^{2}_{D_{s}(\mathbb{R}^{N})}}{\int_{\mathbb{R}^{N}}|v_{\epsilon}|^{2_{s}^{\ast}}dx}\right)^{\frac{q\gamma_{q,s}-2}{2_{s}^{\ast}-2}}$ 
$\displaystyle\geq\frac{\|u_{\epsilon}\|^{2_{s}^{\ast}-2}_{L^{2}(\mathbb{R}^{N})}}{a^{2_{s}^{\ast}-2}}\frac{||u_{\epsilon}||^{2}_{D_{s}(\mathbb{R}^{N})}}{\int_{\mathbb{R}^{N}}|u_{\epsilon}|^{2_{s}^{\ast}}dx}-\mu\gamma_{q,s}\frac{\|u_{\epsilon}\|^{2_{s}^{\ast}-q}_{L^{2}(\mathbb{R}^{N})}}{a^{2_{s}^{\ast}-q}}\frac{\int_{\mathbb{R}^{N}}|u_{\epsilon}|^{q}dx}{\int_{\mathbb{R}^{N}}|u_{\epsilon}|^{2_{s}^{\ast}}dx}\left(\frac{\|u_{\epsilon}\|^{2_{s}^{\ast}-2}_{L^{2}(\mathbb{R}^{N})}}{a^{2_{s}^{\ast}-2}}\frac{||u_{\epsilon}||^{2}_{D_{s}(\mathbb{R}^{N})}}{\int_{\mathbb{R}^{N}}|u_{\epsilon}|^{2_{s}^{\ast}}dx}\right)^{\frac{q\gamma_{q,s}-2}{2_{s}^{\ast}-2}}$ $\displaystyle\geq\frac{\|u_{\epsilon}\|^{2_{s}^{\ast}-2}_{L^{2}(\mathbb{R}^{N})}}{a^{2_{s}^{\ast}-2}}\frac{\left(||u_{\epsilon}||^{2}_{D_{s}(\mathbb{R}^{N})}\right)^{\frac{q\gamma_{q,s}-2}{2_{s}^{\ast}-2}}}{\int_{\mathbb{R}^{N}}|u_{\epsilon}|^{2_{s}^{\ast}}dx}\left[\left(||u_{\epsilon}||^{2}_{D_{s}(\mathbb{R}^{N})}\right)^{\frac{2_{s}^{\ast}-q\gamma_{q,s}}{2_{s}^{\ast}-2}}-\frac{\mu\gamma_{q,s}a^{q(1-\gamma_{q,s})}\int_{\mathbb{R}^{N}}|u_{\epsilon}|^{q}dx}{\left(\int_{\mathbb{R}^{N}}|u_{\epsilon}|^{2_{s}^{\ast}}dx\right)^{\frac{q\gamma_{q,s}-2}{2_{s}^{\ast}-2}}\|u_{\epsilon}\|^{q(1-\gamma_{q,s})}_{L^{2}(\mathbb{R}^{N})}}\right].$ By the estimates in (4.5), (4.6), (4.7) and (4.8), we can infer that there exist $C_{1},C_{2},C_{3}>0$ (depending on $N,q$) such that $\displaystyle\left(||u_{\epsilon}||^{2}_{D_{s}(\mathbb{R}^{N})}\right)^{\frac{2_{s}^{\ast}-q\gamma_{q,s}}{2_{s}^{\ast}-2}}\geq C_{1}\ \text{and}\ C_{2}\leq\left(\int_{\mathbb{R}^{N}}|u_{\epsilon}|^{2_{s}^{\ast}}dx\right)^{\frac{q\gamma_{q,s}-2}{2_{s}^{\ast}-2}}\leq\frac{1}{C_{2}}$ (5.5) and $\displaystyle\frac{\int_{\mathbb{R}^{N}}|u_{\epsilon}|^{q}dx}{\|u_{\epsilon}\|^{q(1-\gamma_{q,s})}_{L^{2}(\mathbb{R}^{N})}}\leq\left\\{\begin{aligned} &C\epsilon^{N-\frac{N-2s}{2}q-q(1-\gamma_{q,s})},&\text{if}\ &N>4s,\\\ 
&C\epsilon^{N-\frac{N-2s}{2}q-q(1-\gamma_{q,s})}|\ln\epsilon|^{\frac{q(\gamma_{q,s}-1)}{2}},&\text{if}\ &N=4s,\\\ &C\epsilon^{N-\frac{N-2s}{2}q-\frac{N-2s}{2}q(1-\gamma_{q,s})},&\text{if}\ &\frac{q}{q-1}2s<N<4s,\\\ &C\epsilon^{\frac{N}{2}-\frac{N-2s}{2}q(1-\gamma_{q,s})}|\ln\epsilon|,&\text{if}\ &N=\frac{q}{q-1}2s,\\\ &C\epsilon^{\frac{N-2s}{2}q-\frac{N-2s}{2}q(1-\gamma_{q,s})},&\text{if}\ &2s<N<\frac{q}{q-1}2s.\end{aligned}\right.$ (5.6) Next, we claim that $\displaystyle e^{(2_{s}^{\ast}-2)st_{v_{\epsilon},\mu}}\geq C\frac{\|u_{\epsilon}\|^{2_{s}^{\ast}-2}_{L^{2}(\mathbb{R}^{N})}}{a^{2_{s}^{\ast}-2}}$ under suitable conditions. Case 1: $N>4s$. Since $\overline{p}<q<2_{s}^{\ast}$, we can deduce that $\displaystyle N-\frac{N-2s}{2}q-q(1-\gamma_{q,s})<0.$ (5.7) Indeed, since $\overline{p}<q<2_{s}^{\ast}$, we have $4s/N<q-2<4s/(N-2s)$, so $N-\frac{N-2s}{2}q-q(1-\gamma_{q,s})=N-\frac{N-2s}{2}(q-2)-(N-2s)-(q-2)-2+\frac{N(q-2)}{2s}:=f(q-2).$ It is easy to deduce that $f$ is strictly increasing in $q-2$; since $f(\frac{4s}{N-2s})=0$, we obtain $N-\frac{N-2s}{2}q-q(1-\gamma_{q,s})<0.$ Hence we cannot directly obtain $\displaystyle e^{(2_{s}^{\ast}-2)st_{v_{\epsilon},\mu}}\geq C\frac{\|u_{\epsilon}\|^{2_{s}^{\ast}-2}_{L^{2}(\mathbb{R}^{N})}}{a^{2_{s}^{\ast}-2}}\left[C_{1}-\mu\gamma_{q,s}a^{q(1-\gamma_{q,s})}\frac{C_{3}}{C_{2}}o_{\epsilon}(1)\right]\geq C\frac{\|u_{\epsilon}\|^{2_{s}^{\ast}-2}_{L^{2}(\mathbb{R}^{N})}}{a^{2_{s}^{\ast}-2}}$ for a positive constant $C=C(N,q,\mu,a)>0$ and every $\epsilon\in(0,\epsilon_{0})$ with $\epsilon_{0}$ sufficiently small. 
Thus, we have to give a more precise estimate. Let us recall the inequality for $e^{(2_{s}^{\ast}-2)st_{v_{\epsilon},\mu}}$ in (5.4); by the well-known interpolation inequality, we have $\displaystyle\frac{\int_{\mathbb{R}^{N}}|u_{\epsilon}|^{q}dx}{\left(\int_{\mathbb{R}^{N}}|u_{\epsilon}|^{2_{s}^{\ast}}dx\right)^{\frac{q\gamma_{q,s}-2}{2_{s}^{\ast}-2}}\|u_{\epsilon}\|^{q(1-\gamma_{q,s})}_{L^{2}(\mathbb{R}^{N})}}$ $\displaystyle\leq\frac{\left(\int_{\mathbb{R}^{N}}|u_{\epsilon}|^{2_{s}^{\ast}}dx\right)^{\frac{q-2}{2_{s}^{\ast}-2}}\left(\int_{\mathbb{R}^{N}}|u_{\epsilon}|^{2}dx\right)^{\frac{2_{s}^{\ast}-q}{2_{s}^{\ast}-2}}}{\left(\int_{\mathbb{R}^{N}}|u_{\epsilon}|^{2_{s}^{\ast}}dx\right)^{\frac{q\gamma_{q,s}-2}{2_{s}^{\ast}-2}}\|u_{\epsilon}\|^{q(1-\gamma_{q,s})}_{L^{2}(\mathbb{R}^{N})}}$ (5.8) $\displaystyle\leq\left(\int_{\mathbb{R}^{N}}|u_{\epsilon}|^{2_{s}^{\ast}}dx\right)^{\frac{q(1-\gamma_{q,s})}{2_{s}^{\ast}-2}}=\left(\left(\int_{\mathbb{R}^{N}}|u_{\epsilon}|^{2_{s}^{\ast}}dx\right)^{\frac{2}{2_{s}^{\ast}}}\right)^{\frac{2_{s}^{\ast}-q\gamma_{q,s}}{2_{s}^{\ast}-2}}.$ Therefore, by (5.4) and (5.8), we have $\displaystyle e^{(2_{s}^{\ast}-2)st_{v_{\epsilon},\mu}}=\frac{||v_{\epsilon}||^{2}_{D_{s}(\mathbb{R}^{N})}}{\int_{\mathbb{R}^{N}}|v_{\epsilon}|^{2_{s}^{\ast}}dx}-\mu\gamma_{q,s}e^{(q\gamma_{q,s}-2)st_{v_{\epsilon},\mu}}\frac{\int_{\mathbb{R}^{N}}|v_{\epsilon}|^{q}dx}{\int_{\mathbb{R}^{N}}|v_{\epsilon}|^{2_{s}^{\ast}}dx}$ (5.9) 
$\displaystyle\geq\frac{\|u_{\epsilon}\|^{2_{s}^{\ast}-2}_{L^{2}(\mathbb{R}^{N})}}{a^{2_{s}^{\ast}-2}}\frac{\left(||u_{\epsilon}||^{2}_{D_{s}(\mathbb{R}^{N})}\right)^{\frac{q\gamma_{q,s}-2}{2_{s}^{\ast}-2}}}{\int_{\mathbb{R}^{N}}|u_{\epsilon}|^{2_{s}^{\ast}}dx}\left[\left(||u_{\epsilon}||^{2}_{D_{s}(\mathbb{R}^{N})}\right)^{\frac{2_{s}^{\ast}-q\gamma_{q,s}}{2_{s}^{\ast}-2}}-\mu\gamma_{q,s}a^{q(1-\gamma_{q,s})}\left(\left(\int_{\mathbb{R}^{N}}|u_{\epsilon}|^{2_{s}^{\ast}}dx\right)^{\frac{2}{2_{s}^{\ast}}}\right)^{\frac{2_{s}^{\ast}-q\gamma_{q,s}}{2_{s}^{\ast}-2}}\right].$ Thus, the right-hand side above is positive provided that $\mu\gamma_{q,s}a^{q(1-\gamma_{q,s})}<\left(\frac{||u_{\epsilon}||^{2}_{D_{s}(\mathbb{R}^{N})}}{\left(\int_{\mathbb{R}^{N}}|u_{\epsilon}|^{2_{s}^{\ast}}dx\right)^{\frac{2}{2_{s}^{\ast}}}}\right)^{{\frac{2_{s}^{\ast}-q\gamma_{q,s}}{2_{s}^{\ast}-2}}}=S^{\frac{N}{4s}q(1-\gamma_{q,s})}_{s}+O(\epsilon^{N-2s}).$ Thus, if $N>4s\ \text{and}\ \mu a^{q(1-\gamma_{q,s})}<\frac{S^{\frac{N}{4s}q(1-\gamma_{q,s})}_{s}}{\gamma_{q,s}}$, we have $e^{(2_{s}^{\ast}-2)st_{v_{\epsilon},\mu}}\geq\frac{C\|u_{\epsilon}\|^{2_{s}^{\ast}-2}_{L^{2}(\mathbb{R}^{N})}}{a^{2_{s}^{\ast}-2}}.$ Case 2: $N=4s$. 
Then we have $3<q<4$ and $|\ln\epsilon|\backsim\frac{1}{\epsilon}$ as $\epsilon\rightarrow 0.$ Thus $\epsilon^{N-\frac{N-2s}{2}q-q(1-\gamma_{q,s})}|\ln\epsilon|^{\frac{q(\gamma_{q,s}-1)}{2}}=\epsilon^{(4-q)(s-1)}|\ln\epsilon|^{q-4}\rightarrow 0\ \text{as}\ \epsilon\rightarrow 0.$ Furthermore, $\frac{\int_{\mathbb{R}^{N}}|u_{\epsilon}|^{q}dx}{\|u_{\epsilon}\|^{q(1-\gamma_{q,s})}_{L^{2}(\mathbb{R}^{N})}}\leq C\epsilon^{N-\frac{N-2s}{2}q-q(1-\gamma_{q,s})}|\ln\epsilon|^{\frac{q(\gamma_{q,s}-1)}{2}}=o_{\epsilon}(1).$ So, we have $\displaystyle e^{(2_{s}^{\ast}-2)st_{v_{\epsilon},\mu}}\geq C\frac{\|u_{\epsilon}\|^{2_{s}^{\ast}-2}_{L^{2}(\mathbb{R}^{N})}}{a^{2_{s}^{\ast}-2}}\left[C_{1}-\mu\gamma_{q,s}a^{q(1-\gamma_{q,s})}\frac{C_{3}}{C_{2}}o_{\epsilon}(1)\right]\geq C\frac{\|u_{\epsilon}\|^{2_{s}^{\ast}-2}_{L^{2}(\mathbb{R}^{N})}}{a^{2_{s}^{\ast}-2}}.$ Case 3: $\frac{q}{q-1}2s<N<4s$. By the same arguments as (5.7), we have $N-\frac{N-2s}{2}q-\frac{N-2s}{2}q(1-\gamma_{q,s})>0.$ Thus, $\epsilon^{N-\frac{N-2s}{2}q-\frac{N-2s}{2}q(1-\gamma_{q,s})}\rightarrow 0\ \text{as}\ \epsilon\rightarrow 0.$ Therefore, $\frac{\int_{\mathbb{R}^{N}}|u_{\epsilon}|^{q}dx}{\|u_{\epsilon}\|^{q(1-\gamma_{q,s})}_{L^{2}(\mathbb{R}^{N})}}\leq C\epsilon^{N-\frac{N-2s}{2}q-\frac{N-2s}{2}q(1-\gamma_{q,s})}=o_{\epsilon}(1).$ So, we have $\displaystyle e^{(2_{s}^{\ast}-2)st_{v_{\epsilon},\mu}}\geq C\frac{\|u_{\epsilon}\|^{2_{s}^{\ast}-2}_{L^{2}(\mathbb{R}^{N})}}{a^{2_{s}^{\ast}-2}}\left[C_{1}-\mu\gamma_{q,s}a^{q(1-\gamma_{q,s})}\frac{C_{3}}{C_{2}}o_{\epsilon}(1)\right]\geq C\frac{\|u_{\epsilon}\|^{2_{s}^{\ast}-2}_{L^{2}(\mathbb{R}^{N})}}{a^{2_{s}^{\ast}-2}}.$ Case 4: $N=\frac{q}{q-1}2s$. 
By arguments similar to those in Case 1, we get $C\epsilon^{\frac{N}{2}-\frac{N-2s}{2}q(1-\gamma_{q,s})}|\ln\epsilon|\rightarrow+\infty\ \text{as}\ \epsilon\rightarrow 0.$ Thus, by the same argument as in Case 1, we know that if $N=\frac{q}{q-1}2s\ \text{and}\ \mu a^{q(1-\gamma_{q,s})}<\frac{S^{\frac{N}{4s}q(1-\gamma_{q,s})}_{s}}{\gamma_{q,s}}$, then we have $e^{(2_{s}^{\ast}-2)st_{v_{\epsilon},\mu}}\geq\frac{C\|u_{\epsilon}\|^{2_{s}^{\ast}-2}_{L^{2}(\mathbb{R}^{N})}}{a^{2_{s}^{\ast}-2}}.$ Case 5: $2s<N<\frac{q}{q-1}2s$. It is easy to see that $\frac{\int_{\mathbb{R}^{N}}|u_{\epsilon}|^{q}dx}{\|u_{\epsilon}\|^{q(1-\gamma_{q,s})}_{L^{2}(\mathbb{R}^{N})}}\leq C\epsilon^{\frac{N-2s}{2}q-\frac{N-2s}{2}q(1-\gamma_{q,s})}=o_{\epsilon}(1).$ Then we have $\displaystyle e^{(2_{s}^{\ast}-2)st_{v_{\epsilon},\mu}}\geq C\frac{\|u_{\epsilon}\|^{2_{s}^{\ast}-2}_{L^{2}(\mathbb{R}^{N})}}{a^{2_{s}^{\ast}-2}}\left[C_{1}-\mu\gamma_{q,s}a^{q(1-\gamma_{q,s})}\frac{C_{3}}{C_{2}}o_{\epsilon}(1)\right]\geq C\frac{\|u_{\epsilon}\|^{2_{s}^{\ast}-2}_{L^{2}(\mathbb{R}^{N})}}{a^{2_{s}^{\ast}-2}}.$ Step 2. Estimate on $\max_{t\in\mathbb{R}}\Psi^{\mu}_{v_{\epsilon}}(t)$. 
$\displaystyle\max_{t\in\mathbb{R}}\Psi^{\mu}_{v_{\epsilon}}(t)=\Psi^{\mu}_{v_{\epsilon}}(t_{v_{\epsilon},\mu})=\Psi^{0}_{v_{\epsilon}}(t_{v_{\epsilon},\mu})-\mu\frac{e^{q\gamma_{q,s}st_{v_{\epsilon},\mu}}}{q}\int_{\mathbb{R}^{N}}|v_{\epsilon}|^{q}dx$ $\displaystyle\leq\sup_{\mathbb{R}}\Psi^{0}_{v_{\epsilon}}-\frac{\mu C}{q}\frac{\|u_{\epsilon}\|^{q\gamma_{q,s}}_{L^{2}(\mathbb{R}^{N})}}{a^{q\gamma_{q,s}}}\frac{a^{q}}{\|u_{\epsilon}\|^{q}_{L^{2}(\mathbb{R}^{N})}}\int_{\mathbb{R}^{N}}|u_{\epsilon}|^{q}dx$ $\displaystyle\leq\frac{s}{N}S^{\frac{N}{2s}}_{s}+O(\epsilon^{N-2s})-C\frac{\mu}{q}\gamma_{q,s}a^{q(1-\gamma_{q,s})}\frac{\int_{\mathbb{R}^{N}}|u_{\epsilon}|^{q}dx}{\|u_{\epsilon}\|^{q(1-\gamma_{q,s})}_{L^{2}(\mathbb{R}^{N})}}.$ By (5.6), we know that $\max_{t\in\mathbb{R}}\Psi^{\mu}_{v_{\epsilon}}(t)<\frac{s}{N}S^{\frac{N}{2s}}_{s}$ for $\epsilon$ small enough. This completes the proof. ∎ ## 6 Proof of Theorem 1.1 Let $\\{v_{n}\\}$ be a minimizing sequence for $\inf_{A_{R_{0}}}E_{\mu}(u)$. By Lemma 3.3, for every $n$ we can take $t_{v_{n}}\star v_{n}\in\mathcal{P}^{+}_{a,\mu}$ such that $||t_{v_{n}}\star v_{n}||_{D_{s}(\mathbb{R}^{N})}\leq R_{0}$ and $E_{\mu}(t_{v_{n}}\star v_{n})=\min\\{E_{\mu}(t\star v_{n}):t\in\mathbb{R}\ \text{and}\ ||t\star v_{n}||_{D_{s}(\mathbb{R}^{N})}<R_{0}\\}\leq E_{\mu}(v_{n}).$ Thus, we obtain a new minimizing sequence $\\{w_{n}=t_{v_{n}}\star v_{n}\\}$ with $w_{n}\in S^{r}_{a}\cap\mathcal{P}^{+}_{a,\mu}$ radially decreasing for every $n$. By Lemma 3.4, we have $||w_{n}||_{D_{s}(\mathbb{R}^{N})}<R_{0}-\rho$ for every $n$, and hence, by Ekeland’s variational principle in a standard way, we obtain a new minimizing sequence $\\{u_{n}\\}\subset A_{R_{0}}$ for $m(a,\mu)$ with $\|u_{n}-w_{n}\|\rightarrow 0$ as $n\rightarrow+\infty$, which is also a Palais-Smale sequence for $E_{\mu}$ on $S_{a}$. 
By the boundedness of $\\{w_{n}\\}$, $\|u_{n}-w_{n}\|\rightarrow 0$, the Brézis-Lieb lemma and the Sobolev embedding theorem, we have $\|u_{n}\|^{2}_{D_{s}(\mathbb{R}^{N})}=\|u_{n}-w_{n}\|^{2}_{D_{s}(\mathbb{R}^{N})}+\|w_{n}\|^{2}_{D_{s}(\mathbb{R}^{N})}+o_{n}(1)=\|w_{n}\|^{2}_{D_{s}(\mathbb{R}^{N})}+o_{n}(1),$ $\int_{\mathbb{R}^{N}}|u_{n}|^{p}dx=\int_{\mathbb{R}^{N}}|u_{n}-w_{n}|^{p}dx+\int_{\mathbb{R}^{N}}|w_{n}|^{p}dx+o_{n}(1)=\int_{\mathbb{R}^{N}}|w_{n}|^{p}dx+o_{n}(1),\ \text{for}\ \ p\in[2,2_{s}^{\ast}].$ Thus, $P_{\mu}(u_{n})=P_{\mu}(w_{n})+o_{n}(1)\rightarrow 0\ \text{as}\ n\rightarrow+\infty.$ Therefore, one of the alternatives in Proposition 2.2 holds. We prove that the second alternative in Proposition 2.2 occurs. Assume by contradiction that there exists a sequence $u_{n}\rightharpoonup u$ weakly in $H^{s}(\mathbb{R}^{N})$ but not strongly, where $u\not\equiv 0$ is a solution of (1.1) for some $\lambda<0$, and $E_{\mu}(u)\leq m(a,\mu)-\frac{s}{N}S^{\frac{N}{2s}}_{s}.$ Since $u$ is a solution of (1.1), we have $P_{\mu}(u)=0,$ which implies that $||u||^{2}_{D_{s}(\mathbb{R}^{N})}=\mu\gamma_{q,s}\int_{\mathbb{R}^{N}}|u|^{q}dx+\int_{\mathbb{R}^{N}}|u|^{2_{s}^{\ast}}dx.$ Therefore $\displaystyle m(a,\mu)$ $\displaystyle\geq E_{\mu}(u)+\frac{s}{N}S^{\frac{N}{2s}}_{s}=\frac{s}{N}S^{\frac{N}{2s}}_{s}+\frac{s}{N}||u||^{2}_{D_{s}(\mathbb{R}^{N})}-\frac{\mu}{q}\left(1-\frac{q\gamma_{q,s}}{2_{s}^{\ast}}\right)\int_{\mathbb{R}^{N}}|u|^{q}dx$ $\displaystyle\geq\frac{s}{N}S^{\frac{N}{2s}}_{s}+\frac{s}{N}||u||^{2}_{D_{s}(\mathbb{R}^{N})}-\frac{\mu}{q}\left(1-\frac{q\gamma_{q,s}}{2_{s}^{\ast}}\right)C^{q}_{N,q,s}a^{q(1-\gamma_{q,s})}||u||^{q\gamma_{q,s}}_{D_{s}(\mathbb{R}^{N})}.$ Next, we show that the right-hand side of the above inequality is positive under suitable conditions, so that we reach a contradiction with $m(a,\mu)<0$. 
Let $\vartheta(t)=\frac{s}{N}t^{2}-\frac{\mu}{q}\left(1-\frac{q\gamma_{q,s}}{2_{s}^{\ast}}\right)C^{q}_{N,q,s}a^{q(1-\gamma_{q,s})}t^{q\gamma_{q,s}}.$ Then it is easy to see that the function $\vartheta(t)$ has a unique minimum point $\overline{t}$ and $\vartheta(\overline{t})=-\frac{2-q\gamma_{q,s}}{q}\left[\frac{N\gamma_{q,s}}{s}\right]^{\frac{q\gamma_{q,s}}{2-q\gamma_{q,s}}}\left[\frac{2_{s}^{\star}-q\gamma_{q,s}}{22_{s}^{\star}}C^{q}_{N,q,s}\right]^{\frac{2}{2-q\gamma_{q,s}}}[\mu a^{q(1-\gamma_{q,s})}]^{\frac{2}{2-q\gamma_{q,s}}}<0.$ Note that $\vartheta(\overline{t})>-\frac{s}{N}S^{\frac{N}{2s}}_{s}$ provided that $\mu a^{q(1-\gamma_{q,s})}<\frac{22_{s}^{\star}}{(2_{s}^{\star}-q\gamma_{q,s})C^{q}_{N,q,s}}\left(\frac{Nq\gamma^{2}_{q,s}S^{\frac{N}{2s}}_{s}}{(2-q\gamma_{q,s})s}\right)^{\frac{2-q\gamma_{q,s}}{2}};$ in this case we have $\displaystyle m(a,\mu)\geq\frac{s}{N}S^{\frac{N}{2s}}_{s}+\vartheta(||u||_{D_{s}(\mathbb{R}^{N})})\geq\frac{s}{N}S^{\frac{N}{2s}}_{s}+\vartheta(\overline{t})>0,$ which contradicts the fact that $m(a,\mu)<0$. Thus $u_{n}\rightarrow u$ strongly in $H^{s}(\mathbb{R}^{N}),$ $E_{\mu}(u)=m(a,\mu)$ and $u$ solves (1.1)–(1.2) for some $\lambda<0.$ It remains to show that any ground state is a local minimizer for $E_{\mu}$ on $A_{R_{0}}$. Since $E_{\mu}(u)=m(a,\mu)$, we have $u\in\mathcal{P}_{a,\mu}$ and $E_{\mu}(u)<0$, so by Lemma 3.3 we have that $u\in\mathcal{P}^{+}_{a,\mu}\subset A_{R_{0}}$, $E_{\mu}(u)=m(a,\mu)=\inf_{A_{R_{0}}}E_{\mu}(u)\ \text{and}\ ||u||_{D_{s}(\mathbb{R}^{N})}<R_{0}.$ Therefore, the proof of Theorem 1.1 is complete. ## 7 Proof of Theorems 1.2–1.3 We first list some well-known results, which will be used to prove Theorems 1.2–1.3. For this purpose, we give the following definition. ###### Definition 7.1. (see [21, Definition 3.1]) Let B be a closed subset of X. We shall say that a class $\mathcal{F}$ of compact subsets of X is a homotopy-stable family with boundary B provided that 1. 
$(a)$ every set in $\mathcal{F}$ contains B. 2. $(b)$ for any set A in $\mathcal{F}$ and any $\eta\in C([0,1]\times X;X)$ satisfying $\eta(t,x)=x$ for all $(t,x)\in(\\{0\\}\times X)\cup([0,1]\times B)$, we have $\eta(\\{1\\}\times A)\in\mathcal{F}.$ ###### Theorem 7.1. (see [21, Theorem 3.2]) Let $\varphi$ be a $C^{1}$ function on a complete connected $C^{1}$-Finsler manifold X (without boundary) and consider a homotopy-stable family $\mathcal{F}$ of compact subsets of X with a closed boundary B. Set $c=c(\varphi,\mathcal{F})=\inf\limits_{A\in\mathcal{F}}\max\limits_{x\in A}\varphi(x)$ and suppose that $\sup\varphi(B)<c.$ Then, for any sequence of sets $(A_{n})_{n}$ in $\mathcal{F}$ such that $\lim\limits_{n}\sup\limits_{A_{n}}\varphi=c,$ there exists a sequence $(x_{n})_{n}$ in X such that ${\rm(i)}\ \lim\limits_{n}\varphi(x_{n})=c,\ \ {\rm(ii)}\ \lim\limits_{n}\|d\varphi(x_{n})\|=0,\ \ {\rm(iii)}\ \lim\limits_{n}{\rm dist}(x_{n},A_{n})=0.$ Moreover, if $d\varphi$ is uniformly continuous, then $x_{n}$ can be chosen to be in $A_{n}$ for each n. Now we are in a position to prove Theorems 1.2–1.3. Case 1: $L^{2}$-critical perturbation, i.e., $q=\overline{p}$. Let $k>0$ be defined by Lemma 4.4; following the ideas introduced in [28], we consider the functional $\widetilde{E}_{\mu}:\mathbb{R}\times H^{s}(\mathbb{R}^{N})\rightarrow\mathbb{R}$ defined by $\displaystyle\widetilde{E}_{\mu}(t,u)=E_{\mu}(t\star u)=\left[\frac{1}{2}||u||^{2}_{D_{s}(\mathbb{R}^{N})}-\frac{\mu}{\overline{p}}\int_{\mathbb{R}^{N}}|u|^{\overline{p}}dx\right]e^{2st}-\frac{e^{2_{s}^{\ast}st}}{2_{s}^{\ast}}\int_{\mathbb{R}^{N}}|u|^{2_{s}^{\ast}}dx.$ (7.1) It is easy to see that $\widetilde{E}_{\mu}$ is of class $C^{1}$. Since $\widetilde{E}_{\mu}$ is invariant under rotations applied to $u$, a Palais-Smale sequence for $\widetilde{E}_{\mu}|_{\mathbb{R}\times S^{r}_{a}}$ is a Palais-Smale sequence for $\widetilde{E}_{\mu}|_{\mathbb{R}\times S_{a}}$. 
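For orientation, the scaling identities used throughout (and in (7.1)) pin down the $L^{2}$-norm-preserving dilation $t\star u$. The following sketch records it; the precise normalization is fixed in the preliminaries of the paper and is inferred here from the formulas rather than quoted.

```latex
% Inferred normalization of the fiber map (an assumption consistent with
% every scaling identity in this section, not a verbatim quotation):
(t\star u)(x) := e^{\frac{N}{2}t}\,u(e^{t}x), \qquad
\|t\star u\|_{L^{2}(\mathbb{R}^{N})}=\|u\|_{L^{2}(\mathbb{R}^{N})}.
% Substituting x' = e^{t}x, y' = e^{t}y in the Gagliardo seminorm gives
\|t\star u\|^{2}_{D_{s}(\mathbb{R}^{N})} = e^{2st}\,\|u\|^{2}_{D_{s}(\mathbb{R}^{N})},
\qquad
\int_{\mathbb{R}^{N}}|t\star u|^{p}\,dx = e^{\frac{N(p-2)}{2}t}\int_{\mathbb{R}^{N}}|u|^{p}\,dx ,
% and for p = \overline{p} and p = 2_s^* the exponents N(p-2)/2 equal
% 2s and 2_s^{\ast}s respectively, matching (7.1).
```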
Let $E^{c}$ be the closed sublevel set $\\{u\in S_{a}:E_{\mu}(u)\leq c\\}.$ We introduce the minimax class $\displaystyle\Gamma:=\\{\gamma=(\alpha,\beta)\in C([0,1],\mathbb{R}\times S^{r}_{a}):\gamma(0)\in(0,\overline{A_{k}}),\gamma(1)\in(0,E^{0})\\}$ (7.2) with associated minimax level $\sigma(a,\mu):=\inf_{\gamma\in\Gamma}\max_{(t,u)\in\gamma([0,1])}\widetilde{E}_{\mu}(t,u).$ Let $u\in S^{r}_{a}$. Since $||t\star u||^{2}_{D_{s}(\mathbb{R}^{N})}\rightarrow 0^{+}$ as $t\rightarrow-\infty$ and $E_{\mu}(t\star u)\rightarrow-\infty$ as $t\rightarrow+\infty$, there exist $t_{0}\ll-1$ and $t_{1}\gg 1$ such that $\displaystyle\gamma_{u}:\tau\in[0,1]\rightarrow(0,((1-\tau)t_{0}+\tau t_{1})\star u)\in\mathbb{R}\times S^{r}_{a}$ (7.3) is a path in $\Gamma$. Then $\sigma(a,\mu)$ is a real number. For any $\gamma=(\alpha,\beta)\in\Gamma$, let us consider the function $P_{\gamma}:\tau\in[0,1]\rightarrow P(\alpha(\tau)\star\beta(\tau))\in\mathbb{R}.$ We have $P_{\gamma}(0)=P(\beta(0))>0$, by Lemmas 4.3 and 4.4. Since $\Psi_{\beta(1)}(t)>0$ for every $t\in(-\infty,t_{\beta(1)})$ and $\Psi_{\beta(1)}(0)=E_{\mu}(\beta(1))\leq 0,$ we have that $t_{\beta(1)}<0.$ Thus, by Lemma 4.2, we have $P_{\gamma}(1)=P(\beta(1))<0.$ Moreover, the map $\tau\mapsto\alpha(\tau)\star\beta(\tau)$ is continuous from $[0,1]$ to $H^{s}(\mathbb{R}^{N})$, hence we deduce that there exists $\tau_{\gamma}\in(0,1)$ such that $P_{\gamma}(\tau_{\gamma})=0$, namely $\alpha(\tau_{\gamma})\star\beta(\tau_{\gamma})\in\mathcal{P}_{a,\mu}$, which implies that $\max_{\gamma([0,1])}\widetilde{E}_{\mu}\geq\widetilde{E}_{\mu}(\gamma(\tau_{\gamma}))=E_{\mu}(\alpha(\tau_{\gamma})\star\beta(\tau_{\gamma}))\geq\inf_{\mathcal{P}_{a,\mu}\cap S^{r}_{a}}E_{\mu}=m_{r}(a,\mu).$ Consequently, $\sigma(a,\mu)\geq m_{r}(a,\mu)$. 
On the other hand, if $u\in\mathcal{P}_{a,\mu}\cap S^{r}_{a}$, then $\gamma_{u}$ defined in (7.3) is a path in $\Gamma$ with $E_{\mu}(u)=\max_{\gamma_{u}([0,1])}\widetilde{E}_{\mu}\geq\sigma(a,\mu),$ which implies that $m_{r}(a,\mu)\geq\sigma(a,\mu).$ Combining this with Lemmas 4.3–4.4, we have that $\sigma(a,\mu)=m_{r}(a,\mu)>\sup_{(\overline{A_{k}}\cup E^{0})\cap S^{r}_{a}}E_{\mu}=\sup_{((0,\overline{A_{k}})\cup(0,E^{0}))\cap(\mathbb{R}\times S^{r}_{a})}\widetilde{E}_{\mu}.$ By Definition 7.1, $\\{\gamma([0,1]):\gamma\in\Gamma\\}$ is a homotopy-stable family of compact subsets of $\mathbb{R}\times S^{r}_{a}$ with closed boundary $(0,\overline{A}_{k})\cup(0,E^{0})$, and the superlevel set $\\{\widetilde{E}\geq\sigma(a,\mu)\\}$ is a dual set for $\Gamma$. By Theorem 7.1, taking any minimizing sequence $\\{\gamma_{n}=(\alpha_{n},\beta_{n})\\}\subset\Gamma$ for $\sigma(a,\mu)$ with the property that $\alpha_{n}=0$ and $\beta_{n}(\tau)\geq 0$ a.e. in $\mathbb{R}^{N}$, there exists a Palais-Smale sequence $\\{(t_{n},w_{n})\\}\subset\mathbb{R}\times S^{r}_{a}$ for $\widetilde{E}|_{\mathbb{R}\times S^{r}_{a}}$ at level $\sigma(a,\mu)$, that is, $\displaystyle\partial_{t}\widetilde{E}_{\mu}(t_{n},w_{n})\rightarrow 0\ \text{and}\ \ \|\partial_{u}\widetilde{E}_{\mu}(t_{n},w_{n})\|\rightarrow 0\ \text{as}\ \ n\rightarrow+\infty,$ (7.4) with the additional property that $\displaystyle|t_{n}|+{\rm dist}_{H^{s}}(w_{n},\beta_{n}([0,1]))\rightarrow 0\ \text{as}\ \ n\rightarrow+\infty.$ (7.5) By the definition of $\widetilde{E}_{\mu}$ in (7.1), from (7.4) we know that $P_{\mu}(t_{n}\star w_{n})\rightarrow 0$ and that $\displaystyle dE_{\mu}(t_{n}\star w_{n})[t_{n}\star\varphi]=o(1)\|\varphi\|=o(1)\|t_{n}\star\varphi\|\ \text{as}\ \ n\rightarrow+\infty\ \text{for every}\ \varphi\in T_{w_{n}}S^{r}_{a}.$ (7.6) Let $u_{n}=t_{n}\star w_{n}$. By (7.6), we know that $\\{u_{n}\\}$ is a Palais-Smale sequence for $E_{\mu}|_{S^{r}_{a}}$ at the level 
$\sigma(a,\mu)=m_{r}(a,\mu)$ and $P(u_{n})\rightarrow 0$. Thus, by Lemmas 4.3–4.5, we obtain that $m_{r}(a,\mu)\in(0,\frac{s}{N}S^{\frac{N}{2s}}_{s})$, so by Proposition 2.2, one of the alternatives occurs. Assume that (i) in Proposition 2.2 occurs; then up to a subsequence $u_{n}\rightharpoonup\widetilde{u}$ weakly in $H^{s}(\mathbb{R}^{N})$ but not strongly, where $\widetilde{u}\not\equiv 0$ is a solution of (1.1) for some $\lambda<0$, and $E_{\mu}(\widetilde{u})\leq m_{r}(a,\mu)-\frac{s}{N}S^{\frac{N}{2s}}_{s}<0.$ Hence by the Pohozaev identity, $P(\widetilde{u})=0$ holds, which implies that $||\widetilde{u}||^{2}_{D_{s}(\mathbb{R}^{N})}-\frac{2\mu}{\overline{p}}\int_{\mathbb{R}^{N}}|\widetilde{u}|^{\overline{p}}dx-\int_{\mathbb{R}^{N}}|\widetilde{u}|^{2_{s}^{\ast}}dx=0.$ Thus $\displaystyle E_{\mu}(\widetilde{u})=\frac{1}{2}||\widetilde{u}||^{2}_{D_{s}(\mathbb{R}^{N})}-\frac{\mu}{\overline{p}}\int_{\mathbb{R}^{N}}|\widetilde{u}|^{\overline{p}}dx-\frac{1}{2_{s}^{\ast}}\int_{\mathbb{R}^{N}}|\widetilde{u}|^{2_{s}^{\ast}}dx=\frac{s}{N}\int_{\mathbb{R}^{N}}|\widetilde{u}|^{2_{s}^{\ast}}dx>0,$ which contradicts the fact that $E_{\mu}(\widetilde{u})\leq m_{r}(a,\mu)-\frac{s}{N}S^{\frac{N}{2s}}_{s}<0.$ Therefore, the alternative (ii) in Proposition 2.2 holds: there exists a subsequence $u_{n}\rightarrow\widetilde{u}$ strongly in $H^{s}(\mathbb{R}^{N}),$ $E_{\mu}(\widetilde{u})=m_{r}(a,\mu)$ and $\widetilde{u}$ solves (1.1)–(1.2) for some $\lambda<0.$ Since $\beta_{n}(\tau)\geq 0$ a.e. in $\mathbb{R}^{N}$, (7.5) and the convergence imply that $\widetilde{u}\geq 0$; by the strong maximum principle for the fractional Laplacian (see Proposition 2.17 in [36]), $\widetilde{u}$ is positive. Finally, we prove that $\widetilde{u}$ is a ground state solution. 
Since every normalized solution lies in $\mathcal{P}_{a,\mu}$ and $E_{\mu}(\widetilde{u})=m_{r}(a,\mu)=\inf_{\mathcal{P}_{a,\mu}\cap S^{r}_{a}}E_{\mu},$ it is sufficient to show that $\inf_{\mathcal{P}_{a,\mu}\cap S^{r}_{a}}E_{\mu}=\inf_{\mathcal{P}_{a,\mu}}E_{\mu}=m(a,\mu).$ Assume by contradiction that there is a $u\in\mathcal{P}_{a,\mu}\setminus S^{r}_{a}$ such that $E_{\mu}(u)<\inf_{\mathcal{P}_{a,\mu}\cap S^{r}_{a}}E_{\mu}$, and let $v=|u|^{\ast}$ be the symmetric decreasing rearrangement of $u$. Then by the properties of symmetric decreasing rearrangement, we have $||v||^{2}_{D_{s}(\mathbb{R}^{N})}\leq||u||^{2}_{D_{s}(\mathbb{R}^{N})},\ E_{\mu}(v)\leq E_{\mu}(u)\ \text{and}\ P_{\mu}(v)\leq 0=P_{\mu}(u).$ If $P_{\mu}(v)=0,$ then $v\in\mathcal{P}_{a,\mu}\cap S^{r}_{a}$ and $E_{\mu}(v)\leq E_{\mu}(u)<\inf_{\mathcal{P}_{a,\mu}\cap S^{r}_{a}}E_{\mu},$ which is a contradiction. If $P_{\mu}(v)<0,$ then by Lemma 4.2, we know that $t_{v,\mu}<0$, thus $E_{\mu}(u)\leq E_{\mu}(t_{v,\mu}\star v)=\frac{se^{2_{s}^{\ast}st_{v,\mu}}}{N}\int_{\mathbb{R}^{N}}|v|^{2_{s}^{\ast}}dx=e^{2_{s}^{\ast}st_{v,\mu}}E_{\mu}(u)<E_{\mu}(u),$ which is a contradiction. Thus $m(a,\mu)=m_{r}(a,\mu)$ and hence $\widetilde{u}$ is a ground state solution. Case 2: $L^{2}$-supercritical perturbation, i.e., $2+4s/N<q<2_{s}^{\ast}$. Proceeding exactly as in the case $q=\overline{p}$, we obtain a Palais-Smale sequence $\\{u_{n}\\}\subset S^{r}_{a}$ for $E_{\mu}|_{S^{r}_{a}}$ at the level $\sigma(a,\mu)=m_{r}(a,\mu)$ with $P(u_{n})\rightarrow 0$. Thus, by Lemma 5.5, we obtain that $m_{r}(a,\mu)\in(0,\frac{s}{N}S^{\frac{N}{2s}}_{s})$, so by Proposition 2.2, one of the alternatives occurs.
Assume that (i) occurs in Proposition 2.2; then up to a subsequence $u_{n}\rightharpoonup\widetilde{u}$ weakly in $H^{s}(\mathbb{R}^{N})$ but not strongly, where $\widetilde{u}\not\equiv 0$ is a solution of (1.1) for some $\lambda<0$, and $E_{\mu}(\widetilde{u})\leq m(a,\mu)-\frac{s}{N}S^{\frac{N}{2s}}_{s}<0.$ Hence the Pohozaev identity $P(\widetilde{u})=0$ holds, which implies that $||\widetilde{u}||^{2}_{D_{s}(\mathbb{R}^{N})}-\mu\gamma_{q,s}\int_{\mathbb{R}^{N}}|\widetilde{u}|^{q}dx-\int_{\mathbb{R}^{N}}|\widetilde{u}|^{2_{s}^{\ast}}dx=0.$ Thus, since $q\gamma_{q,s}>2$, we have $\displaystyle E_{\mu}(\widetilde{u})$ $\displaystyle=\frac{1}{2}||\widetilde{u}||^{2}_{D_{s}(\mathbb{R}^{N})}-\frac{\mu}{q}\int_{\mathbb{R}^{N}}|\widetilde{u}|^{q}dx-\frac{1}{2_{s}^{\ast}}\int_{\mathbb{R}^{N}}|\widetilde{u}|^{2_{s}^{\ast}}dx$ $\displaystyle=\frac{\mu}{q}\left(\frac{q\gamma_{q,s}}{2}-1\right)\int_{\mathbb{R}^{N}}|\widetilde{u}|^{q}dx+\frac{s}{N}\int_{\mathbb{R}^{N}}|\widetilde{u}|^{2_{s}^{\ast}}dx>0,$ which contradicts the fact that $E_{\mu}(\widetilde{u})\leq m(a,\mu)-\frac{s}{N}S^{\frac{N}{2s}}_{s}<0.$ Therefore, the alternative (ii) in Proposition 2.2 holds: up to a subsequence, $u_{n}\rightarrow\widetilde{u}$ strongly in $H^{s}(\mathbb{R}^{N}),$ $E_{\mu}(\widetilde{u})=m(a,\mu)$ and $\widetilde{u}$ solves (1.1)–(1.2) for some $\lambda<0.$ Since $\beta_{n}(\tau)\geq 0$ a.e. in $\mathbb{R}^{N}$, (7.5) and the strong convergence imply that $\widetilde{u}\geq 0$; by the strong maximum principle for the fractional Laplacian (see Proposition 2.17 in [36]), $\widetilde{u}$ is positive. The remaining arguments are the same as in Case 1. This completes the proof. ## 8 Proof of Theorem 1.4 ###### Proof of Theorem 1.4. If we focus on the case $\mu=0$, then $\displaystyle E_{0}(u)$ $\displaystyle=\frac{1}{2}||u||^{2}_{D_{s}(\mathbb{R}^{N})}-\frac{1}{2_{s}^{\ast}}\int_{\mathbb{R}^{N}}|u|^{2_{s}^{\ast}}dx$ on $S_{a}$.
The associated Pohozaev manifold is $\mathcal{P}_{a,0}=\bigg{\\{}u\in S_{a}:s||u||^{2}_{D_{s}(\mathbb{R}^{N})}=s\int_{\mathbb{R}^{N}}|u|^{2_{s}^{\ast}}dx\bigg{\\}}=\bigg{\\{}u\in S_{a}:(\Psi^{0}_{u})^{\prime}(0)=0\bigg{\\}},$ where $\Psi^{0}_{u}(t)=\frac{e^{2st}}{2}||u||^{2}_{D_{s}(\mathbb{R}^{N})}-\frac{e^{2_{s}^{\ast}st}}{2_{s}^{\ast}}\int_{\mathbb{R}^{N}}|u|^{2_{s}^{\ast}}dx.$ Recall the decomposition $\mathcal{P}_{a,0}=\mathcal{P}^{+}_{a,0}\cup\mathcal{P}^{0}_{a,0}\cup\mathcal{P}^{-}_{a,0}.$ It is easy to see that for every $u\in S_{a}$, the function $\Psi^{0}_{u}(t)$ has a unique critical point $t_{u,0}$, which is a strict maximum point and is given by $\displaystyle e^{st_{u,0}}=\left(\frac{||u||^{2}_{D_{s}(\mathbb{R}^{N})}}{\int_{\mathbb{R}^{N}}|u|^{2_{s}^{\ast}}dx}\right)^{\frac{1}{2_{s}^{\ast}-2}}.$ (8.1) By the definition of $\mathcal{P}^{+}_{a,0}$, we know that $\mathcal{P}^{+}_{a,0}=\emptyset$. If $u\in\mathcal{P}^{0}_{a,0}$, then $u\in\mathcal{P}_{a,0}$ and $(\Psi^{0}_{u})^{\prime\prime}(0)=0$, which implies that $2||u||^{2}_{D_{s}(\mathbb{R}^{N})}=2_{s}^{\ast}\int_{\mathbb{R}^{N}}|u|^{2_{s}^{\ast}}dx=2_{s}^{\ast}||u||^{2}_{D_{s}(\mathbb{R}^{N})}\Rightarrow||u||_{D_{s}(\mathbb{R}^{N})}=0,$ which is not possible since $u\in S_{a}$. Therefore $\mathcal{P}_{a,0}=\mathcal{P}^{-}_{a,0}$. Next, we show that $\mathcal{P}_{a,0}$ is a smooth manifold of codimension 1 in $S_{a}$.
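For the reader's convenience, (8.1) follows from a one-line computation with the symbols defined above:

```latex
(\Psi^{0}_{u})^{\prime}(t)
  = s\,e^{2st}\,||u||^{2}_{D_{s}(\mathbb{R}^{N})}
  - s\,e^{2_{s}^{\ast}st}\int_{\mathbb{R}^{N}}|u|^{2_{s}^{\ast}}dx = 0
\quad\Longleftrightarrow\quad
e^{(2_{s}^{\ast}-2)st}
  = \frac{||u||^{2}_{D_{s}(\mathbb{R}^{N})}}
         {\int_{\mathbb{R}^{N}}|u|^{2_{s}^{\ast}}dx},
```

and since $2_{s}^{\ast}>2$, the left-hand side is strictly increasing in $t$, so the critical point $t_{u,0}$ is unique; $(\Psi^{0}_{u})^{\prime}$ is positive before $t_{u,0}$ and negative after it, so $t_{u,0}$ is a strict maximum point.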
Since $\mathcal{P}_{a,0}=\bigg{\\{}u\in S_{a}:||u||^{2}_{D_{s}(\mathbb{R}^{N})}=\int_{\mathbb{R}^{N}}|u|^{2_{s}^{\ast}}dx\bigg{\\}},$ we know that $\mathcal{P}_{a,0}$ is defined by $P_{0}(u)=0$ and $G(u)=0$, where $P_{0}(u)=s||u||^{2}_{D_{s}(\mathbb{R}^{N})}-s\int_{\mathbb{R}^{N}}|u|^{2_{s}^{\ast}}dx\ \ \text{and}\ \ G(u)=\int_{\mathbb{R}^{N}}|u|^{2}dx-a^{2}.$ Since $P_{0}(u)$ and $G(u)$ are of class $C^{1}$, we only need to check that $d(P_{0}(u),G(u)):H^{s}(\mathbb{R}^{N})\rightarrow\mathbb{R}^{2}$ is surjective. If this were not true, $dP_{0}(u)$ would be linearly dependent on $dG(u)$, i.e. there would exist $\nu\in\mathbb{R}$ such that $2s\int_{\mathbb{R}^{N}}(-\Delta)^{\frac{s}{2}}u(-\Delta)^{\frac{s}{2}}\varphi dx-s2_{s}^{\ast}\int_{\mathbb{R}^{N}}|u|^{2_{s}^{\ast}-2}u\varphi dx=\nu\int_{\mathbb{R}^{N}}u\varphi dx\ \ \text{for every }\ \ \varphi\in H^{s}(\mathbb{R}^{N}),$ which implies that $2s(-\Delta)^{s}u=\nu u+2_{s}^{\ast}s|u|^{2_{s}^{\ast}-2}u\ \ \text{in}\ \mathbb{R}^{N}.$ By the Pohozaev identity for the above equation, we know that $2s||u||^{2}_{D_{s}(\mathbb{R}^{N})}=2_{s}^{\ast}s\int_{\mathbb{R}^{N}}|u|^{2_{s}^{\ast}}dx,$ that is, $u\in\mathcal{P}^{0}_{a,0}=\emptyset$, a contradiction. Hence $\mathcal{P}_{a,0}$ is a smooth manifold of codimension 1 in $S_{a}$, and it is a natural constraint: if $u\in\mathcal{P}_{a,0}$ is a critical point of $E_{0}|_{\mathcal{P}_{a,0}}$, then $u$ is a critical point of $E_{0}|_{S_{a}}$.
Thus, for every $u\in S_{a}$ there exists a unique $t_{u,0}\in\mathbb{R}$ such that $t_{u,0}\star u\in\mathcal{P}_{a,0}$, and $t_{u,0}$ is a strict maximum point of $\Psi^{0}_{u}(t)$. If $u\in\mathcal{P}_{a,0}$, we have that $t_{u,0}=0$ and $E_{0}(u)=\max_{t\in\mathbb{R}}E_{0}(t\star u)\geq\inf_{v\in S_{a}}\max_{t\in\mathbb{R}}E_{0}(t\star v).$ On the other hand, if $u\in S_{a},$ then $t_{u,0}\star u\in\mathcal{P}_{a,0}$, so $\max_{t\in\mathbb{R}}E_{0}(t\star u)=E_{0}(t_{u,0}\star u)\geq\inf_{v\in\mathcal{P}_{a,0}}E_{0}(v).$ Thus $\inf_{u\in\mathcal{P}_{a,0}}E_{0}(u)=\inf_{u\in S_{a}}\max_{t\in\mathbb{R}}E_{0}(t\star u).$ Now, by (8.1), we have $\displaystyle\inf_{u\in\mathcal{P}_{a,0}}E_{0}(u)$ $\displaystyle=\inf_{u\in S_{a}}\max_{t\in\mathbb{R}}E_{0}(t\star u)$ $\displaystyle=\inf_{u\in S_{a}}\left[\frac{1}{2}\left(\frac{||u||^{2}_{D_{s}(\mathbb{R}^{N})}}{\int_{\mathbb{R}^{N}}|u|^{2_{s}^{\ast}}dx}\right)^{\frac{2}{2_{s}^{\ast}-2}}||u||^{2}_{D_{s}(\mathbb{R}^{N})}-\frac{1}{2_{s}^{\ast}}\left(\frac{||u||^{2}_{D_{s}(\mathbb{R}^{N})}}{\int_{\mathbb{R}^{N}}|u|^{2_{s}^{\ast}}dx}\right)^{\frac{2_{s}^{\ast}}{2_{s}^{\ast}-2}}\int_{\mathbb{R}^{N}}|u|^{2_{s}^{\ast}}dx\right]$ $\displaystyle=\inf_{u\in S_{a}}\frac{s}{N}\left(\frac{||u||^{2}_{D_{s}(\mathbb{R}^{N})}}{\left(\int_{\mathbb{R}^{N}}|u|^{2_{s}^{\ast}}dx\right)^{\frac{2}{2_{s}^{\ast}}}}\right)^{\frac{2_{s}^{\ast}}{2_{s}^{\ast}-2}}=\inf_{u\in H^{s}(\mathbb{R}^{N})\setminus\\{0\\}}\frac{s}{N}\left(\frac{||u||^{2}_{D_{s}(\mathbb{R}^{N})}}{\left(\int_{\mathbb{R}^{N}}|u|^{2_{s}^{\ast}}dx\right)^{\frac{2}{2_{s}^{\ast}}}}\right)^{\frac{N}{2s}}.$ So it follows that $\inf_{u\in H^{s}(\mathbb{R}^{N})\setminus\\{0\\}}\frac{s}{N}\left(\frac{||u||^{2}_{D_{s}(\mathbb{R}^{N})}}{\left(\int_{\mathbb{R}^{N}}|u|^{2_{s}^{\ast}}dx\right)^{\frac{2}{2_{s}^{\ast}}}}\right)^{\frac{N}{2s}}=\frac{s}{N}S^{\frac{N}{2s}}_{s}$ and the infimum is achieved exactly by the extremal functions $U_{\epsilon,y}$ defined in (1.6), which for $N>4s$ lie
in $L^{2}(\mathbb{R}^{N})$. If $2s<N\leq 4s$, we show that the infimum of $E_{0}$ on $\mathcal{P}_{a,0}$ is not achieved. Assume by contradiction that there exists a minimizer $u$, and let $v=|u|^{\ast}$ be the symmetric decreasing rearrangement of $u$. Then by the properties of symmetric decreasing rearrangement, we have $||v||^{2}_{D_{s}(\mathbb{R}^{N})}\leq||u||^{2}_{D_{s}(\mathbb{R}^{N})},\ E_{0}(v)\leq E_{0}(u)\ \text{and}\ P_{0}(v)\leq 0=P_{0}(u).$ If $P_{0}(v)<0,$ then by (8.1), we know that $t_{v,0}<0$, thus $E_{0}(u)\leq E_{0}(t_{v,0}\star v)=\frac{se^{2st_{v,0}}}{N}||v||^{2}_{D_{s}(\mathbb{R}^{N})}\leq\frac{se^{2st_{v,0}}}{N}||u||^{2}_{D_{s}(\mathbb{R}^{N})}=e^{2st_{v,0}}E_{0}(u)<E_{0}(u),$ which is a contradiction. Thus $P_{0}(v)=0$, i.e. $v\in\mathcal{P}_{a,0}$. Since $\mathcal{P}_{a,0}$ is a natural constraint, we obtain $\displaystyle(-\Delta)^{s}v=\lambda v+v^{2_{s}^{\ast}-1},\ v\geq 0\ \text{in}\ \mathbb{R}^{N},$ (8.2) for some $\lambda\in\mathbb{R}$. Since $P_{0}(v)=0$, it follows that $\lambda=0$. By the strong maximum principle, we have $v>0$ in $\mathbb{R}^{N}$. From [22], we know that $v=\alpha U_{\epsilon,0}$ for some $\alpha,\epsilon>0$; this is not possible, since $U_{\epsilon,0}\notin H^{s}(\mathbb{R}^{N})$ for $2s<N\leq 4s.$ The proof is thus complete. ∎ ## 9 Proof of Theorem 1.5 In this section, we prove Theorem 1.5. Before the proof, we give some lemmas. ###### Lemma 9.1. Let $a>0$, $\mu\geq 0$, $\overline{p}\leq q<2_{s}^{\ast}$ and assume that (1.9) holds. Then $\inf_{u\in\mathcal{P}_{a,\mu}}E_{\mu}(u)=\inf_{u\in S_{a}}\max_{t\in\mathbb{R}}E_{\mu}(t\star u).$ ###### Proof. Since $\overline{p}\leq q<2_{s}^{\ast}$ and $\mu\geq 0$, by Lemma 4.2 and Lemma 5.2, we know that $\mathcal{P}_{a,\mu}=\mathcal{P}^{-}_{a,\mu}$. Hence, for every $u\in S_{a}$, there is a unique $t_{u,\mu}\in\mathbb{R}$ such that $t_{u,\mu}\star u\in\mathcal{P}_{a,\mu},$ where $t_{u,\mu}$ is the unique critical point of the function $\Psi^{\mu}_{u}$ (see Proposition 1.4 for $\mu=0$).
So, if $u\in\mathcal{P}_{a,\mu},$ we have that $t_{u,\mu}=0$ and $E_{\mu}(u)=\max_{t\in\mathbb{R}}E_{\mu}(t\star u)\geq\inf_{v\in S_{a}}\max_{t\in\mathbb{R}}E_{\mu}(t\star v).$ On the other hand, if $u\in S_{a},$ then $t_{u,\mu}\star u\in\mathcal{P}_{a,\mu}$ and hence $\max_{t\in\mathbb{R}}E_{\mu}(t\star u)=E_{\mu}(t_{u,\mu}\star u)\geq\inf_{v\in\mathcal{P}_{a,\mu}}E_{\mu}(v).$ This ends the proof. ∎ ###### Lemma 9.2. Let $a>0$, $\overline{p}\leq q<2_{s}^{\ast}$ and let $\widetilde{\mu}\geq 0$ satisfy (1.9). Then the function $\mu\in[0,\widetilde{\mu}]\mapsto m(a,\mu)\in\mathbb{R}$ is monotone non-increasing. ###### Proof. Let $0\leq\mu_{1}\leq\mu_{2}\leq\widetilde{\mu}$. By Lemma 9.1, we know that $\displaystyle m(a,\mu_{2})$ $\displaystyle=\inf_{u\in S_{a}}\max_{t\in\mathbb{R}}E_{\mu_{2}}(t\star u)=\inf_{u\in S_{a}}E_{\mu_{2}}(t_{u,\mu_{2}}\star u)$ $\displaystyle=\inf_{u\in S_{a}}\left[E_{\mu_{1}}(t_{u,\mu_{2}}\star u)+(\mu_{1}-\mu_{2})\frac{e^{q\gamma_{q,s}st_{u,\mu_{2}}}}{q}\int_{\mathbb{R}^{N}}|u|^{q}dx\right]$ $\displaystyle\leq\inf_{u\in S_{a}}\max_{t\in\mathbb{R}}E_{\mu_{1}}(t\star u)=m(a,\mu_{1}).$ As desired. ∎ ###### Proof of Theorem 1.5. We divide the proof into two cases. Case 1: $2<q<\overline{p}$. Recall that $u_{\mu}$ is a positive ground state solution of $E_{\mu}$ on $\\{u\in S_{a}:||u||^{2}_{D_{s}(\mathbb{R}^{N})}<R_{0}\\}$, where $R_{0}=R_{0}(a,\mu)$ is given by Lemma 3.1. Since $R_{0}$ is defined by $h(R_{0})=0$ (see $h$ in (3.2)), one can check that $R_{0}(a,\mu)\rightarrow 0$ as $\mu\rightarrow 0^{+}$; thus $||u_{\mu}||^{2}_{D_{s}(\mathbb{R}^{N})}<R_{0}\rightarrow 0$ as $\mu\rightarrow 0^{+}$.
Then, by the fractional Gagliardo-Nirenberg-Sobolev inequality (2.4) and the Sobolev inequality (1.5), $\displaystyle 0>m(a,\mu)=E_{\mu}(u_{\mu})\geq\frac{1}{2}||u_{\mu}||^{2}_{D_{s}(\mathbb{R}^{N})}-\frac{\mu}{q}C^{q}_{N,q,s}||u_{\mu}||^{q\gamma_{q,s}}_{D_{s}(\mathbb{R}^{N})}a^{q(1-\gamma_{q,s})}-\frac{1}{2_{s}^{\ast}}S^{-\frac{2_{s}^{\ast}}{2}}_{s}||u_{\mu}||^{2_{s}^{\ast}}_{D_{s}(\mathbb{R}^{N})}\rightarrow 0$ as $\mu\rightarrow 0^{+}$, and hence $m(a,\mu)\rightarrow 0$ as $\mu\rightarrow 0^{+}$. Case 2: $\overline{p}\leq q<2_{s}^{\ast}$. Let $\widetilde{\mu}\geq 0$ be such that (1.9) holds. Firstly, we show that the family of positive radial ground states $\\{u_{\mu}:0<\mu<\widetilde{\mu}\\}$ is bounded in $H^{s}(\mathbb{R}^{N})$. If $q=\overline{p}=2+4s/N$, then by Lemma 9.2 and $P_{\mu}(u_{\mu})=0$, we have $\displaystyle m(a,0)\geq m(a,\mu)=E_{\mu}(u_{\mu})$ $\displaystyle=\frac{s}{N}\left(||u_{\mu}||^{2}_{D_{s}(\mathbb{R}^{N})}-\frac{2\mu}{\overline{p}}\int_{\mathbb{R}^{N}}|u_{\mu}|^{\overline{p}}dx\right)$ $\displaystyle\geq\frac{s}{N}\left(1-\frac{2\mu}{\overline{p}}C^{\overline{p}}_{N,\overline{p},s}a^{\frac{4s}{N}}\right)||u_{\mu}||^{2}_{D_{s}(\mathbb{R}^{N})}.$ If $\overline{p}<q<2_{s}^{\ast}$, by similar arguments we have $\displaystyle m(a,0)\geq m(a,\mu)=E_{\mu}(u_{\mu})=\frac{s}{N}\int_{\mathbb{R}^{N}}|u_{\mu}|^{2_{s}^{\ast}}dx+\frac{\mu}{q}\left(\frac{q\gamma_{q,s}}{2}-1\right)\int_{\mathbb{R}^{N}}|u_{\mu}|^{q}dx.$ Thus, $\\{u_{\mu}\\}$ is bounded in $L^{q}(\mathbb{R}^{N})\cap L^{2_{s}^{\ast}}(\mathbb{R}^{N})$. From $P_{\mu}(u_{\mu})=0,$ we also have that $\\{u_{\mu}\\}$ is bounded in $H^{s}(\mathbb{R}^{N}).$ Moreover, $\widetilde{\lambda}_{\mu}a^{2}=||u_{\mu}||^{2}_{D_{s}(\mathbb{R}^{N})}-\mu\int_{\mathbb{R}^{N}}|u_{\mu}|^{q}dx-\int_{\mathbb{R}^{N}}|u_{\mu}|^{2_{s}^{\ast}}dx=\mu(\gamma_{q,s}-1)\int_{\mathbb{R}^{N}}|u_{\mu}|^{q}dx\rightarrow 0$ as $\mu\rightarrow 0^{+}$.
Therefore, up to a subsequence, $u_{\mu}\rightharpoonup u$ weakly in $H^{s}(\mathbb{R}^{N}),\ D_{s}(\mathbb{R}^{N}),\ L^{2_{s}^{\ast}}(\mathbb{R}^{N})$ and $u_{\mu}\rightarrow u$ strongly in $L^{q}(\mathbb{R}^{N})$, while $\widetilde{\lambda}_{\mu}\rightarrow 0$. Let $||u_{\mu}||^{2}_{D_{s}(\mathbb{R}^{N})}\rightarrow\ell\geq 0$. If $\ell=0$, then $u_{\mu}\rightarrow 0$ strongly in $D_{s}(\mathbb{R}^{N})$, so $E_{\mu}(u_{\mu})\rightarrow 0.$ However, by Lemma 9.2, we know that $E_{\mu}(u_{\mu})\geq m(a,\widetilde{\mu})>0$ for every $0<\mu<\widetilde{\mu}$, a contradiction. Thus $\ell>0$. Since $P_{\mu}(u_{\mu})=0$, we have $\int_{\mathbb{R}^{N}}|u_{\mu}|^{2_{s}^{\ast}}dx=||u_{\mu}||^{2}_{D_{s}(\mathbb{R}^{N})}-\mu\gamma_{q,s}\int_{\mathbb{R}^{N}}|u_{\mu}|^{q}dx\rightarrow\ell,\ \ \text{as}\ \mu\rightarrow 0^{+}.$ Therefore, by the Sobolev embedding, $\ell\geq S_{s}\ell^{\frac{2}{2_{s}^{\ast}}}$, which implies that $\ell\geq S^{\frac{N}{2s}}_{s}$. On the other hand, using $P_{\mu}(u_{\mu})=0$ to eliminate $\int_{\mathbb{R}^{N}}|u_{\mu}|^{2_{s}^{\ast}}dx$ from $E_{\mu}(u_{\mu})$, we have $\frac{s\ell}{N}=\lim_{\mu\rightarrow 0^{+}}\left[\frac{s}{N}||u_{\mu}||^{2}_{D_{s}(\mathbb{R}^{N})}-\frac{\mu}{q}\left(1-\frac{q\gamma_{q,s}}{2_{s}^{\ast}}\right)\int_{\mathbb{R}^{N}}|u_{\mu}|^{q}dx\right]=\lim_{\mu\rightarrow 0^{+}}E_{\mu}(u_{\mu})\leq m(a,0)=\frac{s}{N}S^{\frac{N}{2s}}_{s}.$ Thus, $\ell=S^{\frac{N}{2s}}_{s}$ and the desired conclusion follows. ∎ ## Acknowledgements B. Zhang was supported by the National Natural Science Foundation of China (No. 11871199), the Heilongjiang Province Postdoctoral Startup Foundation, PR China (LBH-Q18109), and the Cultivation Project of Young and Innovative Talents in Universities of Shandong Province. ## References * [1] A. Ambrosetti and P.H. Rabinowitz, Dual variational methods in critical point theory and applications, J. Funct. Anal., 14 (1973) 349–381. * [2] T. Bartsch, L. Jeanjean and N. Soave, Normalized solutions for a system of coupled cubic Schrödinger equations on $\mathbb{R}^{3}$, J. Math. Pures Appl., 106 (2016) 583–614. * [3] T. Bartsch and N.
Soave, A natural constraint approach to normalized solutions of nonlinear Schrödinger equations and systems, J. Funct. Anal., 272 (2017) 4304–4333. * [4] T. Bartsch and N. Soave, Multiple normalized solutions for a competing system of Schrödinger equations, Calc. Var. Partial Differential Equations, 58 (2019), 24 pp. * [5] B. Barrios, E. Colorado, A. de Pablo and U. Sánchez, On some critical problems for the fractional Laplacian operator, J. Differential Equations, 252 (2012) 6133–6162. * [6] B. Barrios, E. Colorado, R. Servadei and F. Soria, A critical fractional equation with concave-convex power nonlinearities, Ann. Inst. H. Poincaré Anal. Non Linéaire, 32 (2015) 875–900. * [7] C. Bucur and E. Valdinoci, Nonlocal Diffusion and Applications, Lecture Notes of the Unione Matematica Italiana, 20, Springer, Cham; Unione Matematica Italiana, Bologna, 2016, xii+155 pp. * [8] M. Bhakta and D. Mukherjee, Semilinear nonlocal elliptic equations with critical and supercritical exponents, Commun. Pure Appl. Anal., 16 (2017) 1741–1766. * [9] H. Brézis and L. Nirenberg, Positive solutions of nonlinear elliptic equations involving critical Sobolev exponent, Commun. Pure Appl. Math., 36 (1983) 437–477. * [10] L. Caffarelli and L. Silvestre, An extension problem related to the fractional Laplacian, Comm. Partial Differential Equations, 32 (2007) 1245–1260. * [11] X. Cabré and Y. Sire, Nonlinear equations for fractional Laplacians, I: Regularity, maximum principles, and Hamiltonian estimates, Ann. Inst. H. Poincaré Anal. Non Linéaire, 31 (2014) 23–53. * [12] L. Caffarelli, J. Roquejoffre and Y. Sire, Variational problems with free boundaries for the fractional Laplacian, J. Eur. Math. Soc., 12 (2010) 1151–1179. * [13] X. Chang and Z. Wang, Ground state of scalar field equations involving a fractional Laplacian with general nonlinearity, Nonlinearity, 26 (2013) 479–494. * [14] E. Colorado, A. de Pablo and U. Sánchez, Perturbations of a critical fractional equation, Pacific J.
Math., 271 (2014) 65–85. * [15] A. Cotsiolis and N.K. Tavoularis, Best constants for Sobolev inequalities for higher order fractional derivatives, J. Math. Anal. Appl., 295 (2004) 225–236. * [16] E. Di Nezza, G. Palatucci and E. Valdinoci, Hitchhiker’s guide to the fractional Sobolev spaces, Bull. Sci. Math., 136 (2012) 521–573. * [17] G. Devillanova and G. Carlo Marano, A free fractional viscous oscillator as a forced standard damped vibration, Fract. Calc. Appl. Anal., 19 (2016) 319–356. * [18] P. Felmer, A. Quaas and J. Tan, Positive solutions of the nonlinear Schrödinger equation with the fractional Laplacian, Proc. Roy. Soc. Edinburgh Sect. A, 142 (2012) 1237–1262. * [19] P. Felmer and A. Quaas, Fundamental solutions and Liouville type theorems for nonlinear integral operators, Adv. Math., 226 (2011) 2712–2738. * [20] A. Fiscella and E. Valdinoci, A critical Kirchhoff type problem involving a nonlocal operator, Nonlinear Anal., 94 (2014) 156–170. * [21] N. Ghoussoub, Duality and Perturbation Methods in Critical Point Theory, Cambridge Tracts in Mathematics, vol. 107, Cambridge University Press, 1993. * [22] T. Jin, Y.Y. Li and J. Xiong, On a fractional Nirenberg problem, part I: blow up analysis and compactness of solutions, J. Eur. Math. Soc., 16 (2014) 1111–1171. * [23] R.L. Frank and E. Lenzmann, Uniqueness of non-linear ground states for fractional Laplacians in $\mathbb{R}$, Acta Math., 210 (2013) 261–318. * [24] R.L. Frank, E. Lenzmann and L. Silvestre, Uniqueness of radial solutions for the fractional Laplacian, Comm. Pure Appl. Math., 69 (2016) 1671–1726. * [25] Z. Guo, S. Luo and W. Zou, On critical systems involving fractional Laplacian, J. Math. Anal. Appl., 446 (2017) 681–706. * [26] X. He, M. Squassina and W. Zou, The Nehari manifold for fractional systems involving critical nonlinearities, Commun. Pure Appl. Anal., 15 (2016) 1285–1308. * [27] Q. He, S. Peng and Y.
Peng, Existence, non-degeneracy of proportional positive solutions and least energy solutions for a fractional elliptic system, Adv. Differential Equations, 22 (2017) 867–892. * [28] L. Jeanjean, Existence of solutions with prescribed norm for semilinear elliptic equation, Nonlinear Anal., 28 (1997) 1633–1659. * [29] H. Luo and Z. Zhang, Normalized solutions to the fractional Schrödinger equations with combined nonlinearities, Calc. Var. Partial Differential Equations, 59 (2020) 143. * [30] G. Molica Bisci, V. Rădulescu and R. Servadei, Variational Methods for Nonlocal Fractional Equations, Encyclopedia of Mathematics and its Applications, vol. 162, Cambridge University Press, Cambridge, 2016. * [31] R. Servadei and E. Valdinoci, The Brézis-Nirenberg result for the fractional Laplacian, Trans. Amer. Math. Soc., 367 (2015) 67–102. * [32] N. Soave, Normalized ground states for the NLS equation with combined nonlinearities, J. Differential Equations, 269 (2020) 6941–6987. * [33] N. Soave, Normalized ground states for the NLS equation with combined nonlinearities: The Sobolev critical case, J. Funct. Anal., 279 (2020) 108610. * [34] S. Secchi, On fractional Schrödinger equations in $\mathbb{R}^{N}$ without the Ambrosetti-Rabinowitz condition, Topol. Methods Nonlinear Anal., 47 (2016) 19–41. * [35] R. Servadei and E. Valdinoci, A Brézis-Nirenberg result for non-local critical equations in low dimension, Commun. Pure Appl. Anal., 12 (2013) 2445–2464. * [36] L. Silvestre, Regularity of the obstacle problem for a fractional power of the Laplace operator, Comm. Pure Appl. Math., 60 (2007) 67–112. * [37] E. Valdinoci, From the long jump random walk to the fractional Laplacian, Bol. Soc. Esp. Mat. Apl. SeMA, 49 (2009) 33–44. * [38] M. Willem, Minimax Theorems, Birkhäuser, Boston, 1996. * [39] M. Xiang, B. Zhang and V. Rădulescu, Superlinear Schrödinger-Kirchhoff type problems involving the fractional $p$-Laplacian and critical exponent, Adv. Nonlinear Anal.
9 (2020) 690–709. * [40] M. Zhen and B. Zhang, Complete classification of ground state solutions with different Morse index for critical fractional Laplacian system, Math. Meth. Appl. Sci., doi: 10.1002/mma.6862. * [41] M. Zhen and B. Zhang, A different approach to ground state solutions for $p$-Laplacian system with critical exponent, Appl. Math. Lett., 111 (2021) 106593.
# Phases of Holographic Interfaces Constantin Bachas and Vassilis Papadopoulos ###### Abstract We compute the phase diagram of the simplest holographic bottom-up model of conformal interfaces. The model consists of a thin domain wall between three- dimensional Anti-de Sitter (AdS) vacua, anchored on a boundary circle. We distinguish five phases depending on the existence of a black hole, the intersection of its horizon with the wall, and the fate of inertial observers. We show that, like the Hawking-Page phase transition, the capture of the wall by the horizon is also a first-order transition and comment on its field-theory interpretation. The static solutions of the domain-wall equations include gravitational avatars of the Faraday cage, black holes with negative specific heat, and an intriguing phenomenon of suspended vacuum bubbles corresponding to an exotic interface/anti-interface fusion. Part of our analysis overlaps with recent work by Simidzija and Van Raamsdonk but the interpretation is different. $\,{}^{1}$ Laboratoire de Physique de l’École Normale Supérieure, CNRS, PSL Research University and Sorbonne Universités, 24 rue Lhomond, 75005 Paris, France ###### Contents 1. 1 Introduction 2. 2 Finite-temperature AdS/CFT 1. 2.1 Coordinates for the AdS3 black string 2. 2.2 Hawking-Page transition 3. 3 Topology of slices 4. 4 Solving the wall equations 1. 4.1 Matching conditions 2. 4.2 Solution near the boundary 3. 4.3 Critical tensions 4. 4.4 Turning point and horizon 5. 5 Phases: cold, hot $\&$ warm 6. 6 Equations of state 1. 6.1 High-$T$ phase 2. 6.2 Low-$T$ phase(s) 3. 6.3 Warm phases 7. 7 Phase transitions 1. 7.1 ICFT interpretation 2. 7.2 Sweeping transitions 3. 7.3 Warm-to-hot transitions 8. 8 Exotic fusion and bubbles 9. 9 Phase diagrams 1. 9.1 Defect CFT 2. 9.2 Non-degenerate vacua 3. 9.3 Unstable black holes 10. 10 Outlook 11. A Renormalized on-shell action 12. B Opening arcs as elliptic integrals 13. C Sweeping is continuous 14.
D Bubbles exist ## 1 Introduction Beginning with the classic paper of Coleman and De Luccia [1] there have been many studies of thin gravitating domain walls between vacua with different values of the cosmological constant. Such walls figure in models of localized gravity [2, 3, 4], in holographic duals of conformal interfaces [5, 6, 7], in efforts to embed inflation in string theory by studying dynamical bubbles [8, 9, 10, 11], and more recently, following the ideas in refs. [12, 13, 14], in toy models of black hole evaporation [15, 16, 17, 19, 18, 20, 21, 22, 23]. Besides being a simple form of matter coupled to gravity, domain walls are also a key ingredient [24] in effective descriptions of the string-theory landscape – see [25, 26, 27] for some recent discussions of domain walls in this context.111The above list of references is nowhere near complete. It is only meant as an entry to the vast and growing literature in these subjects. In this paper we study a thin static domain wall between Anti-de Sitter (AdS) vacua, anchored at the conformal boundary of spacetime. If a dual holographic setup were to exist, it would have two conformal field theories, CFT1 and CFT2, separated by a conformal interface [5, 6, 7]. We will calculate the phase diagram of the system as a function of the AdS radii, the tension of the wall and the boundary data. Several parts of this analysis have appeared before (see below) but the complete phase diagram has not, to the best of our knowledge, been worked out. We will be interested in phenomena that are hard to see at weak CFT coupling. A broader motivation, as in much of the AdS/CFT literature, is understanding how the interior geometry is encoded on the boundary and vice versa, but we will only briefly allude to this question in the present work. Our analysis is classical in gravity.
Different phases are distinguished by the presence/absence of a black hole and by the fate of inertial observers, either those moving freely in the bulk or those bound to the wall. Inertial observers are a guiding fixture of the analysis, not emphasized in earlier works. In the high-temperature or ‘hot’ phase all inertial observers eventually cross the black-hole horizon. In intermediate or ‘warm’ phases the wall avoids the horizon, and may also shield bulk observers from falling inside. Such two-center warm solutions are gravitational avatars of the Faraday cage. Finally, what differentiates ‘cold’ horizonless phases is whether or not all timelike geodesics inevitably intersect the wall. Besides the domain wall and the black hole, the third actor in the problem is the center of global AdS where an inertial observer may rest. The rich phase diagram is the result of several competing forces: the attraction of the AdS trap, with or without a black hole in its center, the tension of the wall, and the repulsion between the domain wall and massive particles. In addition to the first-order Hawking-Page transition [28] that signals the formation of a black hole, new phase transitions occur when the wall sweeps an AdS rest point or when part of it enters the horizon, see figure 1. One of our conclusions is that the latter transition is always first-order. Figure 1: A domain wall sweeping the center of the ‘false AdS vacuum’ where an inertial observer could rest (left), or entering the horizon of a black hole (right). We work in 2+1 dimensions because calculations can be performed in closed form. We expect, however, qualitative features of the phase diagram to carry over to higher dimensions. For simplicity we consider a single type of non-intersecting wall, and only comment briefly on extended models that allow junctions of different types of wall.
The capture of the wall by the black hole is related to a transition analyzed in a very interesting recent paper by Simidzija and Van Raamsdonk [29], see also [30, 31]. These authors consider time-dependent spherically-symmetric walls whose intersections with the conformal boundary describe Hamiltonian quenches in the dual field theory. In this setting the boundary is the infinite cylinder, with a stripe describing the evolution of the dual CFT between the quench and ‘unquench’ times. By contrast, we are interested in equilibrium configurations. This means that the domain-wall geometry is static, and the stripes on the conformal boundary point in the time direction. Furthermore, the boundary is not the cylinder but an orthogonal torus, adding an extra parameter to the problem. Although the interpretation is different, many of our formulae are nevertheless related to those of refs. [30, 29, 31] by swapping the roles of boundary space and time (thereby also swapping the BTZ geometry with thermal AdS3, see section 2). This trick is special to 2+1 dimensions and does not carry over to higher dimensions. The gravitational action of the thin-wall model reads $\displaystyle I_{\rm gr}=-\frac{1}{2}\int_{{\mathbb{S}}_{1}}d^{3}x\sqrt{g_{1}}\,\Bigl{(}R_{1}+\frac{2}{\ell_{1}^{2}}\Bigr{)}-\frac{1}{2}\int_{{\mathbb{S}}_{2}}d^{3}x\sqrt{g_{2}}\,\Bigl{(}R_{2}+\frac{2}{\ell_{2}^{2}}\Bigr{)}+\lambda\int_{\mathbb{W}}d^{2}s\sqrt{\hat{g}_{w}}\,+\,{\rm GHY\ terms}\,+\,{\rm ct.}\ ,$ (1.1) where $R_{j}(g_{j})$ are the Ricci scalars of the spacetime slices $\mathbb{S}_{j}$ on either side of the wall, and $\hat{g}_{w}$ is the induced metric on the wall’s worldvolume. The Gibbons-Hawking-York terms and counterterms are given in appendix A. The action $I_{\rm gr}$ depends on three parameters: the two AdS radii $\ell_{1},\ell_{2}$ and the wall tension $\lambda$.
The radii are related to the central charges of the dual CFTs [32], and the tension to the entropy [33, 29] and to the energy-transport coefficient [34] of the dual interface. Static solutions exist for $\displaystyle\lambda_{\rm min}<\lambda<\lambda_{\rm max}\,,\qquad{\rm with}\quad\lambda_{\rm max}={1\over\ell_{1}}+{1\over\ell_{2}}\,;\ \ \lambda_{\rm min}=\bigl{|}{1\over\ell_{1}}-{1\over\ell_{2}}\bigr{|}\ .$ (1.2) The classical phase diagram depends on two dimensionless ratios of the above (e.g. $\ell_{2}/\ell_{1}:=b\,$ and $\lambda\ell_{2}:=\kappa$) and on the two parameters that determine the conformal class of the striped boundary torus, e.g. $\tau_{1}:=TL_{1}$ and $\tau_{2}:=TL_{2}$, see figure 2. Without loss of generality we assume henceforth that $\ell_{2}\geq\ell_{1}$, i.e. that $\mathbb{S}_{1}$ is the true-vacuum slice. An important question is how much of this analysis has a chance to carry over to top-down holographic models, where back-reacting domain walls are not thin.222Many examples of supergravity domain walls have been worked out in the literature; a representative sample is [35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53]. None of these solutions depends, however, on non-trivial (non-Lagrangian) boundary data; indeed, all but one are scale-invariant AdSn fibrations. The size of the horizon and the number of stable rest points are order parameters that can also be defined for thick walls, but a sharp criterion that decides whether a thick domain wall enters or avoids the horizon is hard to imagine. Nevertheless, the field-theory interpretation of the transition suggests that such an order parameter may exist, as we will explain in section 7.1. It is worth stressing that the thin-wall model is a minimal gravity dual of I(nterface) CFT in the same way that pure Einstein theory is a minimal dual for homogeneous CFT.
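As a small self-contained check of the window (1.2) and of the dimensionless parameters $b=\ell_{2}/\ell_{1}$ and $\kappa=\lambda\ell_{2}$, the conditions can be packaged in a few lines of code (a sketch; the function name and return convention are ours, not part of the paper):

```python
def wall_parameters(ell1, ell2, lam):
    """Return (static, b, kappa): whether the tension lam lies in the
    window (1.2) admitting a static wall, together with the dimensionless
    parameters b = ell2/ell1 and kappa = lam*ell2 used in the text."""
    lam_min = abs(1.0 / ell1 - 1.0 / ell2)
    lam_max = 1.0 / ell1 + 1.0 / ell2
    static = lam_min < lam < lam_max
    return static, ell2 / ell1, lam * ell2

# example: ell1 = 1, ell2 = 2 gives the window (1/2, 3/2)
print(wall_parameters(1.0, 2.0, 1.0))   # -> (True, 2.0, 2.0)
```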
The model captures the two universal boundary operators – the energy-momentum tensors on either side of the interface, as well as their combination referred to as the displacement operator [54]. Top-down models have many more operators, some of which correspond to internal excitations of the domain wall. Note also that boundaries, or end-of-the-world branes (EWBs), can be considered as a limit of domain walls when one side becomes the zero-radius AdS spacetime [55]. In this sense holographic B(oundary) CFT [56, 57] can be recovered from holographic ICFT, though the limit is subtle and should be handled with care. A last remark concerns the Ryu-Takayanagi surfaces [58, 59] that delimit the entanglement wedges of boundary subregions [60, 62, 61].333See e.g. [63, 64] for reviews. It is clearly of interest to study whether these surfaces intersect the domain wall, as is done for BCFT in refs. [21, 22]. We hope to return to this question elsewhere. The plan of the paper and a summary of our results follows. In section 2 we review some standard facts about AdS3/CFT2 at finite temperature. The wall separates space in two slices that we color green (true-vacuum side) and pink (false-vacuum side). Each of these comes in one of four topological types described in section 3. In section 4 we solve the matching equations obeyed by a thin static domain wall, which we parametrize conveniently by the blueshift metric factor $g_{tt}$. This section overlaps substantially with ref. [29] via double Wick rotation – a trick specific to 2+1 dimensions as earlier noted. In section 5 we start analyzing the solutions. By studying the turning point of the wall we classify the possible phases, i.e. the topologically distinct solutions. We rule out in particular centerless geometries, in which no inertial observer can avoid the wall, and solutions with two black holes whose merging is prevented by the wall. In section 6 we write down the equations of state that characterize these phases.
They relate the canonical variables $\tau_{1},\tau_{2}$ to microcanonical variables that are natural for describing the interior geometry. We also point out the relevance of a critical tension $\lambda_{0}=\sqrt{\lambda_{\rm max}\lambda_{\rm min}}\,$ below which the hot solution disappears from a region of parameter space. In section 7 we compute the critical lines for sweeping transitions in both the cold and the warm phases, and we show that the warm-to-hot transition is always first-order – the domain wall cannot be lowered continuously to the black horizon. The proof requires a detailed analysis of the region $\mu\approx 1$ with $\lambda\leq\lambda_{0}$, where the hot and a warm solution come arbitrarily close. We also point out some puzzles regarding the ICFT interpretation of these phase transitions. Section 8 presents a striking phenomenon: bubbles of the true vacuum suspended from a point on the conformal boundary of the false vacuum. This is surprising from the perspective of ICFT, since it implies that the fusion of an interface and anti-interface does not produce the trivial (identity) defect, as expected from free-field calculations [65], but an exotic defect that generates spontaneously a new scale. In section 9 we present numerical plots of the complete phase diagram in the canonical ensemble, for different values of the Lagrangian parameters $\lambda,\ell_{j}$. These plots confirm our earlier conclusions. We point out a critical threshold $b=\ell_{2}/\ell_{1}=3$, probably an artifact of the thin-wall approximation, above which black-hole solutions on the false-vacuum side of the wall cease to exist. We also exhibit coexisting black-hole solutions, including black holes with negative specific heat. This parallels the discussion of black holes in deformed JT gravity in ref. [66]. Section 10 contains concluding remarks. In order not to interrupt the flow of the arguments we relegate some detailed calculations to four appendices. 
## 2 Finite-temperature AdS/CFT

For completeness we recall here some standard facts about AdS/CFT at finite temperature in three spacetime dimensions. While doing this we will also be setting notation and conventions.

### 2.1 Coordinates for the AdS3 black string

The metric of the static AdS3 black string in 2+1 dimensions is $\displaystyle ds^{2}_{\rm BS}\,=\,{\ell^{2}dr^{2}\over r^{2}-M\ell^{2}}-({r^{2}}-M\ell^{2})\,dt^{2}+r^{2}dx^{2}\,,$ (2.1) where $M>0$, $\ell$ is the radius of AdS3 and the horizon at $r^{\rm H}=\ell\sqrt{M}$ has temperature $T=\sqrt{M}/2\pi$. Length units on the gravity side are such that $8\pi G=1$. The dual CFT lives on the AdS3 boundary, at $r={1/\epsilon}\to\infty$, with conformal coordinates $x^{\pm}\equiv x\pm t\,\in\mathbb{R}^{2}$. Its central charge is $c=12\pi\ell$ [32]. The holographic dictionary becomes transparent in Fefferman-Graham coordinates, in which any asymptotically-Poincaré AdS3 solution takes the following form [67, 68] $ds^{2}={\ell^{2}dz^{2}\over z^{2}}+{1\over z^{2}}\Bigl{(}dx^{+}+{\ell z^{2}}h_{-}dx^{-}\Bigr{)}\Bigl{(}dx^{-}+{\ell z^{2}}h_{+}dx^{+}\Bigr{)}\ .$ (2.2) Here $h_{\pm}=\langle T_{\pm\pm}\rangle$ are the expectation values of the canonically-normalized energy-momentum tensor of the CFT. Note that it is a special feature of 2+1 dimensions that the Fefferman-Graham expansion stops at order $z^{2}$. For the static black string, $h_{+}=h_{-}=M\ell/4\,$ giving $\langle T_{tt}\rangle=M\ell/2=(c/6)\pi T^{2}$. This is indeed the energy density of finite-temperature CFT in two dimensions.
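As a quick numerical sanity check of the last relation, one can verify that the Fefferman-Graham value $\langle T_{tt}\rangle=M\ell/2$ agrees with the thermal energy density $(c/6)\pi T^{2}$ once $c=12\pi\ell$ and $T=\sqrt{M}/2\pi$ are inserted. This is a minimal sketch; the sample values of $M$ and $\ell$ are arbitrary, in the units $8\pi G=1$ used above:

```python
import math

# Arbitrary sample values (units with 8*pi*G = 1, as in the text)
ell = 1.0   # AdS3 radius
M = 2.5     # black-string mass parameter, M > 0

T = math.sqrt(M) / (2 * math.pi)   # horizon temperature
c = 12 * math.pi * ell             # Brown-Henneaux central charge

# Fefferman-Graham data for the black string: <T_tt> = h_+ + h_- = M*ell/2
energy_density_gravity = M * ell / 2

# Thermal energy density of a 2d CFT at temperature T: (c/6)*pi*T^2
energy_density_cft = (c / 6) * math.pi * T**2

assert abs(energy_density_gravity - energy_density_cft) < 1e-12
```

The agreement is exact: $(c/6)\pi T^{2}=2\pi\ell\cdot\pi\cdot M/(4\pi^{2})=M\ell/2$ for any $M>0$.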
The relation between $z$ and $r$ is $\displaystyle{r}={1\over z}+{M\ell^{2}z\over 4}\ \ \Longleftrightarrow\ \ z={2\over M\ell^{2}}\left(r-\sqrt{{r^{2}}-M\ell^{2}}\,\right)\ ,$ (2.3) and the black-string metric in the $(z,t,x)$ coordinates reads $\displaystyle ds^{2}_{\rm BS}\,=\,\ell^{2}{dz^{2}\over z^{2}}-\Bigl{(}{1\over z}-{M\ell^{2}z\over 4}\Bigr{)}^{2}dt^{2}+\Bigl{(}{1\over z}+{M\ell^{2}z\over 4}\Bigr{)}^{2}dx^{2}\ .$ (2.4) Note that $z$ covers only the region outside the horizon ($r>r^{\rm H}$) and that near the conformal boundary $z\approx r^{-1}$. A last change of coordinates worth recording, even though we will not use it in this paper, is the one that maps $(z,t,x)$ to the standard Poincaré parametrization of AdS3. Such a map is guaranteed to exist because all constant-negative-curvature Einstein manifolds in three dimensions can be obtained from AdS3 by identifications and excisions. For the case at hand 444The general transformation, for an arbitrary (conformally-flat) boundary metric and vacuum expectation value $\langle T_{ab}\rangle$, is given in refs. [69, 70, 71]. the transformation reads $\displaystyle w^{\pm}=\zeta^{\pm}\left({4-M\ell^{2}z^{2}\over 4+M\ell^{2}z^{2}}\right)\,,\quad y={4z(M\zeta^{+}\zeta^{-})^{1/2}\over 4+M\ell^{2}z^{2}}\quad{\rm with}\quad\zeta^{\pm}=e^{\sqrt{M}(x\pm t)}\ .\ \ $ (2.5) The reader can check that in these coordinates the metric (2.4) becomes $\displaystyle ds^{2}_{\rm BS}\,=\,{\ell^{2}dy^{2}+dw^{+}dw^{-}\over y^{2}}\ ,$ (2.6) i.e. the standard Poincaré form of AdS3 as advertised. Outside the black horizon ($M\ell^{2}z^{2}<4$) the coordinates $x^{\pm}\equiv x\pm t$ cover only a Rindler wedge of the $w^{\pm}$ plane. Since we will be referring to this later, let us verify the well-known fact that no inertial observer can avoid crossing the horizon.
In the proper-time parametrization of the trajectory a simple calculation gives $\displaystyle\ell^{2}\,{{\ddot{r}}\over r}=-1-M\ell^{2}{\dot{x}}^{2}$ (2.7) where dots denote derivatives with respect to proper time. Since $M$ is positive there is no centrifugal acceleration, QED. Note that this is a property of the asymptotically AdS black hole, not shared by asymptotically flat black holes in higher dimensions.

### 2.2 Hawking-Page transition

From the perspective of the CFT, the temperature $T$ is the only dimensionful parameter of the infinite-black-string solution. By a scale transformation we can always set it to one. Things get more interesting if the black string is compactified, $x\sim x+L$, thereby converting the solution (2.1) to the non-spinning BTZ black hole [72, 73]. In addition to the central charge $c$, there is now a new dimensionless parameter $LT$. In the Euclidean geometry $\tau=i\,LT$ is the complex-structure modulus of the boundary torus. At the critical temperature $T_{\rm HP}=1/L$ the theory undergoes a Hawking-Page phase transition [28, 74]. This is seen by comparing the action of the two competing saddle points for the interior geometry: 555Thermal AdS3 and Euclidean BTZ are part of an infinite SL(2,$\mathbb{Z}$) orbit of gravitational instantons [74, 75], but they are the only dominant ones for an orthogonal torus. Their regularized Euclidean actions are $I_{\rm TAdS}=-2\pi^{2}\ell/|\tau|$ and $I_{\rm BTZ}=-2\pi^{2}\ell|\tau|$, see below. (i) the Euclidean BTZ black hole, and (ii) thermal AdS3, whose metric is the same as (2.1) but with $M$ replaced by $\tilde{M}=-(2\pi/L)^{2}$. The difference of free energies of these two saddle points reads $\displaystyle F_{\rm BTZ}-F_{\rm TAdS}=-2\pi^{2}\ell\,\bigl{(}LT^{2}-{1\over L}\bigr{)}\ .$ (2.8) Thus thermal AdS3 is the dominant solution when $LT<1$, while the BTZ black hole dominates when $LT>1$.
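The crossover can be made concrete with a small numerical sketch of eq. (2.8), together with the standard thermodynamic identity $E=-T^{2}\partial_{T}(F/T)$ applied to the free energies $F=T\,I_{\rm gr}$ of the two saddles. The values of $\ell$ and $L$ below are arbitrary choices, not taken from the text:

```python
import math

ell, L = 1.0, 1.0      # arbitrary sample values (units 8*pi*G = 1)

# Free energies F = T * I_gr of the two competing saddles, with tau = i*L*T
def F_BTZ(T):  return -2 * math.pi**2 * ell * L * T**2   # from I_BTZ  = -2*pi^2*ell*|tau|
def F_TAdS(T): return -2 * math.pi**2 * ell / L          # from I_TAdS = -2*pi^2*ell/|tau|

# eq. (2.8): thermal AdS3 dominates for L*T < 1, the BTZ black hole for L*T > 1
for T in (0.5, 2.0):
    dF = F_BTZ(T) - F_TAdS(T)
    assert (dF < 0) == (L * T > 1)

# The identity E = -T^2 d(F/T)/dT should reproduce E = (1/2)*ell*M*L
# for each saddle (central finite difference stands in for the derivative)
def energy(F, T, h=1e-6):
    return -T**2 * (F(T + h) / (T + h) - F(T - h) / (T - h)) / (2 * h)

T = 0.8
assert abs(energy(F_BTZ, T) - 0.5 * ell * (2 * math.pi * T)**2 * L) < 1e-5
assert abs(energy(F_TAdS, T) + 0.5 * ell * (2 * math.pi / L)**2 * L) < 1e-5
```

The two final assertions correspond to $M=(2\pi T)^{2}$ for the black hole and $\tilde{M}=-(2\pi/L)^{2}$ for thermal AdS3.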
Thermal AdS3 and the Euclidean BTZ black hole differ in the choice of boundary cycle that becomes contractible in the interior geometry. The periodicity conditions, respectively $x\sim x+2\pi/|\tilde{M}|^{1/2}$ and $t_{E}\sim t_{E}+2\pi/M^{1/2}$, ensure regularity when this contractible cycle degenerates. Below we will encounter situations in which either the center of AdS or the BTZ horizon is excised. In such cases the regularity conditions can be relaxed. One other comment in order here concerns the difference of free energies, eq. (2.8). The renormalized gravitational action $I_{\rm gr}$ (where $I_{\rm gr}=F/T$) is calculated for the general interface model in appendix A. In the case of a homogeneous CFT one can, however, obtain the answer faster. Indeed, from the Fefferman-Graham form of the metric, eq. (2.2), one reads the energy of the CFT state, $\displaystyle E=\,L\langle T_{tt}\rangle\,=\,{1\over 2}\ell ML\ .$ (2.9) For $M=(2\pi T)^{2}$ this is the internal energy of the high-temperature state, as previously noted, and for $M=-(2\pi/L)^{2}$ it is the Casimir energy of the vacuum. The corresponding free energies obey the thermodynamic identity $\displaystyle E=-T^{2}{\partial\over\partial T}\Bigl{(}{F\over T}\Bigr{)}\ .$ (2.10) Eqs. (2.9) and (2.10) determine $F$ up to a term linear in $T$. This can be argued to vanish both at low $T$, since the ground state has no entropy, and at large $L$ since $F$ must be extensive. The final result is eq. (2.8). Let us finally note that since in empty AdS the mass $M$ is negative, there is a centrifugal contribution in eq. (2.7). An inertial observer may thus either rest at, or orbit around the center $r=0$. But in the centerless slices that we are about to discuss, all inertial observers hit the wall.

## 3 Topology of slices

Consider now two conformal field theories, CFT1 and CFT2, coexisting at thermal equilibrium on a circle. This is illustrated in figure 1.
The horizontal and vertical axes parametrize space and Euclidean time. In addition to the central charges $c_{1},c_{2}$, and to the properties of the interfaces between the two CFTs, there are three more parameters in this system: the sizes $L_{1},L_{2}$ of the regions in which each CFT lives, and the equilibrium temperature $T$. This gives two dimensionless parameters, which we can choose for instance to be $\tau_{1}:=TL_{1}$ and $\tau_{2}:=TL_{2}$. Figure 2: The finite-temperature interface CFT at the AdS boundary. Both space and Euclidean time are compact, so the depicted surface is an orthogonal torus. The gravity dual of this ICFT features domain walls, i.e. strings in 2+1 dimensions, 666We reserve the word “interface” for the CFT, and “domain wall” or “string” for gravity. Interfaces are anchor points of domain walls on the AdS boundary. The string of our bottom-up model should not be confused with the black string responsible for the interior horizon. In top-down supergravity embeddings the two types of string may be however interchangeable. anchored at the interfaces on the conformal boundary. We will make the simplifying assumption that the two domain walls differ only in orientation, and can join smoothly in the interior of spacetime. Extended models allowing junctions of different domain walls are very interesting but they are beyond our present scope. We will comment briefly on them in a later section. The green and pink boundary regions of fig. 2, in which CFT1 and CFT2 live, extend in the interior to slices of gravitational solutions that belong to one of several topological types. These are illustrated for the green slice in figure 3. Each slice is either part of thermal AdS3 with the center, marked by a grey flag, included (E1) or excised (E2), or part of the BTZ geometry with the horizon excised (E2′), included (H1) or intersecting the domain wall (H2). 
The same options are available for the pink spacetime slice.777 The Euclidean manifold is a (thermal) circle fibration over the fixed-time slice drawn in our figures. The fiber degenerates at the horizon, when one exists. Figure 3: The different types of space-time slice described in the main text. The actual slice is colored in green, the complementary region is excised. The letters ‘E’ and ‘H’ stand for ‘empty’ and ‘horizon’, and the grey flag denotes the rest point of an inertial observer. Note that since this is excised in E2, a conical singularity in its place is permitted. The centerful slice E1 can act as a gravitational Faraday cage. As was explained in section 2, we may adopt the unified parametrization (2.1) for all types of slice, with $M$ negative for the slices of type E1 and E2 of global AdS3, and positive for the slices of type E2′, H1 and H2 of the BTZ spacetime. We are interested in static configurations which are dual to equilibrium CFT states, so time is globally defined and has fixed imaginary period $t_{E}\sim t_{E}+1/T$. The coordinates $(x,r)$ on the other hand need not be continuous across the wall. We therefore write the spacetime metric in terms of two coordinate charts, $\displaystyle ds^{2}\,=\,{\ell_{j}^{2}dr_{j}^{2}\over r_{j}^{2}-M_{j}\ell_{j}^{2}}-({r_{j}^{2}}-M_{j}\ell_{j}^{2})\,dt^{2}+r_{j}^{2}dx_{j}^{2}\qquad{\rm with}\quad(x_{j},r_{j})\in\Omega_{j}\,,\,\,$ (3.1) where $\Omega_{1}$ is the range of coordinates for the green slice and $\Omega_{2}$ the range of coordinates for the pink slice. These ranges are delimited as follows: * • by the embeddings of the static wall in the two coordinate systems, $\\{x_{j}(\sigma),r_{j}(\sigma)\\}$, where $\sigma$ parametrizes the wall ; * • by the horizon whenever the slice contains one, i.e. in cases H1 and H2 ; * • by the cutoff surface $r_{j}\approx 1/\epsilon\to\infty$ . The mass parameters of the slices, $M_{1}$ and $M_{2}$, are in general different. 
Regularity requires however that $\displaystyle M_{j}=(2\pi T)^{2}\qquad{\text{for\ slices\ \ {\small{H1, H2}}}}$ (3.2) that include a horizon, whereas $M_{j}$ is unconstrained for the other slice types. Furthermore, for a slice of type E1 in which the spatial circle is contractible, interior regularity fixes the periodicity of $x$, $\displaystyle x_{j}\sim x_{j}+2\pi/\sqrt{-M_{j}}\qquad{\text{ in\ case\ {\small E1}}}\ .$ (3.3) For E2, E2′ and H2 the coordinate $x_{j}$ is not periodic, while for H1 its period, proportional to the horizon size, is unconstrained. Since the horizon is a closed surface, a green slice of type H2 can only be paired with a pink slice of the same type. This is the topology that dominates at very high temperature when the black hole eats up most of the bulk spacetime. As the temperature is lowered different pairs of the remaining slice types dominate. The pairs that correspond to actual solutions of the domain-wall equations will be determined in section 5. For the time being let us comment on the differences between the horizonless slices in the top row of fig. 3. What distinguishes E1 from the other two is the existence of the AdS center (or ‘refuge’) where an inertial observer may sit at rest. By contrast, in the slices of type E2 and E2′ all inertial observers will inevitably hit the domain wall as explained in the previous subsection. This discontinuous behavior differentiates the phases on either side of a sweeping transition. Note that there is no topological difference between the slices of type E2 and E2′, which is why we distinguish them only by a prime. These slices differ only in the sign of $M_{j}$, or equivalently the energy density per degree of freedom in the boundary theory. Together E2 and E2′ describe a continuum ($-\infty<M_{j}<\infty$) of horizonless slices with no rest point. 
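The regularity condition (3.3) for slices of type E1 can be illustrated numerically: with the period $P=2\pi/\sqrt{-M_{j}}$, the ratio of the circumference of a small circle around the center to $2\pi$ times its proper radius tends to one as $r\to 0$, i.e. there is no conical deficit. A minimal sketch with arbitrary sample values:

```python
import math

ell, M = 1.0, -4.0                 # a slice of global-AdS type: M < 0
P = 2 * math.pi / math.sqrt(-M)    # period of x fixed by eq. (3.3)

def ratio(r):
    # Proper radial distance from r = 0, from the metric (3.1):
    # integral of ell*dr'/sqrt(r'^2 - M*ell^2) = ell*asinh(r/(ell*sqrt(-M)))
    rho = ell * math.asinh(r / (ell * math.sqrt(-M)))
    return r * P / (2 * math.pi * rho)   # circumference / (2*pi * proper radius)

for r in (1e-2, 1e-4, 1e-6):
    assert abs(ratio(r) - 1.0) < r**2    # smooth center: ratio -> 1 as r -> 0
```

Any other choice of period would leave a conical singularity at $r=0$, which is why (3.3) is mandatory for E1 (but not for E2, where the would-be center is excised).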
## 4 Solving the wall equations

In this section we find the general solution of the domain wall equations in terms of the mass parameters $M_{1},M_{2}$, the AdS radii $\ell_{1},\ell_{2}$, and the tension of the wall $\lambda$. That a solution always exists for arbitrary bulk geometries is a special feature of 2+1 dimensions, as is the double Wick rotation that relates this part of our analysis to ref. [29].

### 4.1 Matching conditions

The matching conditions at a thin domain wall have appeared in numerous studies of cosmology and AdS/CFT. They are especially simple in the case at hand, where the wall/string is static and is characterized only by its tension. Matching the induced worldsheet metric of the two charts (3.1) gives one algebraic and one first-order differential equation for the embedding functions $x_{1}(\sigma),r_{1}(\sigma),x_{2}(\sigma)$ and $r_{2}(\sigma)$: $\displaystyle{r_{1}^{2}}-M_{1}\ell_{1}^{2}\,=\,{r_{2}^{2}}-M_{2}\ell_{2}^{2}\,\,\equiv\,f(\sigma)$ (4.1) $\displaystyle{\rm and}\qquad\ \ f^{-1}{\ell_{1}^{2}\,r_{1}^{\prime\,2}}+r_{1}^{2}\,x_{1}^{\prime\,2}=f^{-1}{\ell_{2}^{2}\,r_{2}^{\prime\,2}}+r_{2}^{2}\,x_{2}^{\prime\,2}\,\equiv\,g(\sigma),$ (4.2) where the prime denotes a derivative with respect to $\sigma$. We have defined the auxiliary functions $f$ and $g$ in terms of which the induced worldsheet metric reads $d{\hat{s}}^{2}|_{\mathbb{W}}=-f(\sigma)dt^{2}+g(\sigma)d\sigma^{2}$. A third matching equation888The Israel-Lanczos matching conditions are matrix equations, $[K_{\alpha\beta}]-[\textrm{tr}K]\hat{g}_{\alpha\beta}=\lambda\,\hat{g}_{\alpha\beta}$, where $K_{\alpha\beta}$ is the extrinsic curvature, $\hat{g}_{\alpha\beta}$ the induced metric, and brackets denote the discontinuity across the wall. Only the trace part of this equation is non-trivial.
The traceless part of $K$ is automatically continuous by virtue of the momentum constraints $D^{\alpha}K_{\alpha\beta}-D_{\beta}K=0\,,$ where $D_{\alpha}$ is the covariant derivative with respect to the induced metric. Equation (4.3) is the $tt$ component of the matrix equation. expresses the discontinuity of the extrinsic curvature in terms of the tension, $\lambda$, of the wall [76, 77]. It can be written as follows: $\displaystyle{r_{1}^{2}x_{1}^{\prime}\over\ell_{1}}+{r_{2}^{2}x_{2}^{\prime}\over\ell_{2}}=\lambda\,\sqrt{fg}\ .$ (4.3) Our convention is that $\sigma$ increases as one circles $\Omega_{j}$ in the $(x_{j},r_{j})$ plane clockwise. Other conventions introduce signs in front of the two terms on the left-hand side of this equation. Eqs. (4.1)–(4.3) are three equations for four unknown functions, but one of these functions can be specified at will using the string-reparametrization freedom. Furthermore the equations only involve first derivatives of $x_{j}$, so the integration constants are irrelevant choices of the origin of the $x_{j}$ axes. For given $\ell_{1},\ell_{2}$ and $\lambda$, the wall embedding functions $x_{j}(r_{j})$ are thus uniquely determined by the parameters $M_{1}$ and $M_{2}$. Different choices of ($M_{1},M_{2}$) may correspond, however, to the same boundary data ($L_{1},L_{2},T$). These are the competing phases of the system.

### 4.2 Solution near the boundary

Near the conformal boundary, $r_{j}\to\infty$, the parameters $M_{j}$ can be neglected and the worldsheet metric asymptotes to AdS2 by virtue of scale invariance. Explicitly the solution reads [78] $\displaystyle r_{1}\approx r_{2}\,,\qquad x_{j}\,\approx\,-\ell_{j}\,(\tan\theta_{j})\,r_{j}^{-1}\ ,$ (4.4) where $\theta_{j}$ is the angle in the $(x_{j}\,,\ell_{j}/r_{j})$ plane between the normal to the boundary and the interface, see figure 4. The matching eqs.
(4.2) and (4.3) relate these angles to the bulk radii $\ell_{j}$ and to the string tension $\lambda$: $\displaystyle{\ell_{1}\over\cos\theta_{1}}={\ell_{2}\over\cos\theta_{2}}\,\equiv\,\ell_{w}\qquad{\rm and}\qquad\tan\theta_{1}+\tan\theta_{2}=\,{\lambda}\,\ell_{w}\ ,$ (4.5) where $\ell_{w}$ is the radius of the AdS2 worldsheet, and $-{\pi/2}<\theta_{j}<{\pi/2}$ . Figure 4: Near the AdS boundary in the $(x_{j},\ell_{j}/r_{j})$ plane the string is a straight line subtending an angle $\theta_{j}$ with the normal. Without loss of generality we assume that $\ell_{1}\leq\ell_{2}$, so that CFT1 has the smaller of the two central charges. Its gravity dual has the lower vacuum energy, i.e. the green slice is the ‘true vacuum’ side of the domain wall and the pink slice is the ‘false-vacuum’ side. The first eq. (4.5) then implies that $|\tan\theta_{1}|\geq|\tan\theta_{2}|$ and, provided that the tension is positive, the second eq. (4.5) implies that $\theta_{1}>0$. The sign of $\theta_{2}$, on the other hand, depends on the precise value of $\lambda$. Expressing the tangents in terms of cosines brings indeed this equation to the form $\displaystyle({1\over\ell_{1}^{\,2}}-{1\over\ell_{w}^{\,2}})^{1/2}\,+\,\varepsilon\,\sqrt{{1\over\ell_{2}^{\,2}}-{1\over\ell_{w}^{\,2}}}\,=\,\lambda\ \qquad{\rm with}\quad\varepsilon={\rm sign}(\theta_{2})\ .$ (4.6) Since $\lambda$ is real we must have $\,\ell_{2}<\ell_{w}<\infty$. Furthermore to each value of the worldsheet radius $\ell_{w}$ there correspond two values of the tension $\lambda$, depending on sign($\theta_{2}$). Explicitly, 999It was argued in ref. [79] that the walls in the $\lambda<\lambda_{0}$ range are unstable. But the radius instability in this reference reduces the action by an amount proportional to the infinite volume of AdS2 and does not correspond to a normalizable mode. 
The only normalizable mode of the wall in the thin- brane model corresponds to the displacement operator which is an irrelevant (dimension = 2) operator [34]. $\displaystyle\lambda_{\rm min}<\lambda<\lambda_{0}\quad\ {\rm for}\ \varepsilon=-\qquad{\rm and}\qquad\lambda_{0}<\lambda<\lambda_{\rm max}\quad{\rm for}\ \varepsilon=+\ ,$ (4.7) where the three critical tensions read $\displaystyle\lambda_{\rm min}={1\over\ell_{1}}-{1\over\ell_{2}}\ ,\qquad\lambda_{\rm max}={1\over\ell_{1}}+{1\over\ell_{2}}\ ,\qquad\lambda_{0}=\sqrt{\lambda_{\rm max}\lambda_{\rm min}}\ .$ (4.8) Let us pause here to discuss the significance of these critical tensions. ### 4.3 Critical tensions The meaning of the critical tensions $\lambda_{\rm min}$ and $\lambda_{\rm max}$ has been understood in the work of Coleman-De Lucia [1] and Randall- Sundrum [2]. Below $\lambda_{\rm min}$ the false vacuum is unstable to nucleation of true-vacuum bubbles, so the two phases cannot coexist in equilibrium.101010Ref. [1] actually computes the critical tension for a domain wall separating Minkowski from AdS spacetime. Their result can be compared to $\lambda_{\rm min}$ in the limit $\ell_{2}\to\infty$. The holographic description of such nucleating bubbles raises fascinating questions in its own right, see e.g. refs.[8, 9, 10]. It has been also advocated that expanding true-vacuum bubbles could realize accelerating cosmologies in string theory [11]. Since our focus here is on equilibrium configurations, we will not discuss these interesting issues any further. The maximal tension $\lambda_{\rm max}$ is a stability bound of a different kind.111111Both the $\lambda=\lambda_{\rm max}$ and the $\lambda=\lambda_{\rm min}$ walls can arise as flat BPS walls in supergravity theories coupled to scalars [80, 81]. 
These two extreme types of flat wall, called type II and type III in [81], differ by the fact that the superpotential avoids, respectively passes through zero as fields extrapolate between the AdS vacua [82]. For $\lambda>\lambda_{\rm max}$ the two phases can coexist, but the large tension of the wall forces this latter to inflate [4]. The phenomenon is familiar for gravitating domain walls in asymptotically-flat spacetime [83], i.e. in the limit $\ell_{1},\ell_{2}\to\infty$. The meaning of $\lambda_{0}$ is less clear, its role will emerge later. For now note that it is the turning point at which the worldsheet radius $\ell_{w}(\lambda)$ reaches its minimal value $\ell_{2}$. Note also that the range $\lambda_{\rm min}<\lambda<\lambda_{0}$ only exists for non-degenerate AdS vacua, that is when $\ell_{1}$ is strictly smaller than $\ell_{2}$. Since the wall in this minimal model is described by a single parameter, its tension $\lambda$, all properties of the dual interface depend on it. These include the interface entropy, and the energy-transport coefficients. The entropy or $g$-factor, computed in [33, 29], reads 121212There are many calculations of the boundary, defect, and interface entropy in a variety of holographic models – a partial list is [84, 56, 57, 85, 86, 87]. The formula for arbitrary left and right central charges, which we rederive below, was found in ref.[29]. $\displaystyle{\rm log}\,g_{\rm I}\ =\ 2\pi\ell_{1}\ell_{2}\left[\lambda_{\rm max}\,{\rm tanh}^{-1}\Bigl{(}{\lambda\over\lambda_{\rm max}}\Bigr{)}-\lambda_{\rm min}\,{\rm tanh}^{-1}\Bigl{(}{\lambda_{\rm min}\over\lambda}\Bigr{)}\right]\ .$ (4.9) It varies monotonically between $-\infty$ and $\infty$ as $\lambda$ varies inside its allowed range (4.7). 
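The structure just described is easy to verify numerically: the two branches of eq. (4.6) fill the two halves of the window (4.7), they meet at $\lambda_0$ when the worldsheet radius approaches its minimum $\ell_w\to\ell_2$, and the interface entropy (4.9) is monotonic and divergent at the endpoints. A sketch with arbitrary sample radii $\ell_1<\ell_2$ (not values from the text):

```python
import math

ell1, ell2 = 1.0, 2.0                      # arbitrary sample AdS radii
lam_min = 1 / ell1 - 1 / ell2
lam_max = 1 / ell1 + 1 / ell2
lam_0 = math.sqrt(lam_max * lam_min)       # eq. (4.8)
assert lam_min < lam_0 < lam_max           # lam_0 lies inside the static window

# eq. (4.6): tension as a function of the worldsheet radius, for each branch
def tension(ell_w, eps):
    return (math.sqrt(1 / ell1**2 - 1 / ell_w**2)
            + eps * math.sqrt(1 / ell2**2 - 1 / ell_w**2))

for ell_w in (2.5, 5.0, 50.0):             # any ell_w > ell2
    assert lam_0 < tension(ell_w, +1) < lam_max    # eps = +1 branch, eq. (4.7)
    assert lam_min < tension(ell_w, -1) < lam_0    # eps = -1 branch
assert abs(tension(ell2 + 1e-9, +1) - lam_0) < 1e-4   # branches meet at lam_0

# eq. (4.9): log g increases monotonically and diverges at the endpoints
def log_g(lam):
    return 2 * math.pi * ell1 * ell2 * (lam_max * math.atanh(lam / lam_max)
                                        - lam_min * math.atanh(lam_min / lam))

lams = [lam_min + k * (lam_max - lam_min) / 100 for k in range(1, 100)]
vals = [log_g(l) for l in lams]
assert all(a < b for a, b in zip(vals, vals[1:]))
assert log_g(lam_min * (1 + 1e-12)) < -50 and log_g(lam_max * (1 - 1e-12)) > 50
```

The divergences come from ${\rm tanh}^{-1}$ approaching its unit argument at $\lambda\to\lambda_{\rm min}$ (second term) and $\lambda\to\lambda_{\rm max}$ (first term).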
The fraction of transmitted energy for waves incident on the interface from the CFT1 side, respectively CFT2 side, was computed in [34] with the result (reexpressed here in terms of critical tensions) $\displaystyle{\cal T}_{1\to 2}={\lambda_{\rm max}+\lambda_{\rm min}\over\lambda_{\rm max}+\lambda}\,,\qquad{\cal T}_{2\to 1}={\lambda_{\rm max}-\lambda_{\rm min}\over\lambda_{\rm max}+\lambda}\ .$ (4.10) Note that using $\lambda_{\rm max}+\lambda_{\rm min}=2/\ell_{1}$ and $\lambda_{\rm max}-\lambda_{\rm min}=2/\ell_{2}$, one can check that these coefficients obey the detailed-balance condition $c_{1}{\cal T}_{1\to 2}=c_{2}{\cal T}_{2\to 1}$. The larger of the two transmission coefficients reaches the unitarity bound when $\lambda=\lambda_{\rm min}$, and both coefficients attain their minimum when $\lambda=\lambda_{\rm max}$. Total reflection (from the false-vacuum to the true-vacuum side) is only possible if $\ell_{1}/\ell_{2}\to 0$, i.e. when the “true-vacuum” CFT1 is almost entirely depleted of degrees of freedom relative to CFT2. Using eqs. (4.8) and the Brown-Henneaux formula one can express the central charges $c_{1,2}$ in terms of the critical tensions $\lambda_{\rm min}$ and $\lambda_{\rm max}$. As we just saw, $\lambda$ parametrizes two key properties of the interface. The triplet ($\lambda_{\rm min},\,\lambda_{\rm max},\,\lambda)$ of parameters in the gravitational action defines therefore the basic data of the putative dual ICFT. ### 4.4 Turning point and horizon We will now derive the general solution of the equations (4.1) - (4.3), and then relate the geometric parameters $M_{j}$ to the data $(T,L_{j})$ of the boundary torus shown in fig. 2. 
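Before turning to that derivation, the transport coefficients (4.10) can be sanity-checked numerically; the sample radii and tensions below are arbitrary, with $c_j=12\pi\ell_j$ as in section 2:

```python
import math

ell1, ell2 = 1.0, 2.0                        # arbitrary sample AdS radii
lam_min = 1 / ell1 - 1 / ell2
lam_max = 1 / ell1 + 1 / ell2
c1, c2 = 12 * math.pi * ell1, 12 * math.pi * ell2   # Brown-Henneaux charges

for lam in (0.6, 1.0, 1.4):                  # tensions inside the static window
    T12 = (lam_max + lam_min) / (lam_max + lam)     # eq. (4.10)
    T21 = (lam_max - lam_min) / (lam_max + lam)
    # detailed balance, using lam_max + lam_min = 2/ell1, lam_max - lam_min = 2/ell2
    assert abs(c1 * T12 - c2 * T21) < 1e-12
    assert 0 < T21 <= T12 <= 1               # within the unitarity bound

# T_{1->2} saturates the unitarity bound at lam = lam_min
assert abs((lam_max + lam_min) / (lam_max + lam_min) - 1) < 1e-15
```

Detailed balance holds identically because both sides reduce to $24\pi/(\lambda_{\rm max}+\lambda)$.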
A convenient parametrization of the string outside any black horizons is in terms of the blueshift factor of the worldsheet metric, eq. (4.1), $\displaystyle f(\sigma)=\sigma\ \ \Longrightarrow\ \ r_{j}=\sqrt{\sigma+M_{j}\ell_{j}^{\,2}}\ .$ (4.11) In this parametrization $d{\hat{s}}^{2}|_{\mathbb{W}}=-\sigma\,dt^{2}+g(\sigma)d\sigma^{2}$. Let $\sigma_{+}$ correspond to the minimal value of the blueshift; this is either zero or positive. If $\sigma_{+}=0$ the string enters the horizon. If on the other hand $\sigma_{+}>0$ then, as we will confirm in a minute, this is the turning point of $r_{j}(\sigma)$ where both $x_{1}^{\prime}$ and $x_{2}^{\prime}$ diverge. A static string has (at most) one turning point, and is symmetric under reflection in the axis that passes through the centers of the boundary arcs, 131313In ref. [29] this corresponds to the time-reflection symmetry of the instanton solutions. as illustrated in figure 5. It follows that the parametrization is one-to-two. Henceforth we focus on the half string with positive $x_{j}$ (at least near the conformal boundary). The other half string is obtained by $x_{j}\to-x_{j}$. Figure 5: Schematic drawing of a low-temperature and a high-temperature solution, corresponding to pairs of type [E1,E2] and [H2,H2]. The broken line is the axis of reflection symmetry. The blueshift parameter $|\sigma|$ decreases monotonically until the string reaches either the turning point or the black-hole horizon. Eqs. (4.11) imply that $2r_{j}r_{j}^{\prime}=1$. Inserting in eq. (4.2) gives $\displaystyle(x_{j}^{\prime})^{2}=r_{j}^{-2}\Bigl{(}g(\sigma)-{\ell_{j}^{2}\over 4\sigma r_{j}^{2}}\Bigr{)}\ .$ (4.12) Squaring eq. (4.3) twice and replacing $(x_{j}^{\prime})^{2}$ from the above expressions leads to a quadratic equation for $g(\sigma)$, the $\sigma\sigma$ component of the worldsheet metric.
This equation has a singular solution $g=0$, and a non-trivial one $g(\sigma)=\lambda^{2}\Biggl{[}\Bigl{(}\frac{2r_{1}r_{2}}{\ell_{1}\ell_{2}}\Bigr{)}^{2}\hskip-1.13809pt-\Bigl{(}\frac{r_{1}^{2}}{\ell_{1}^{\,2}}+\frac{r_{2}^{2}}{\ell_{2}^{\,2}}-\lambda^{2}\sigma\Bigr{)}^{2}\,\Biggr{]}^{-1}\hskip-5.406pt={\lambda^{2}\over A\sigma^{2}+2B\sigma+C}\,,$ (4.13) where in the second equality we used eqs. (4.11), and $A=(\lambda_{\rm max}^{2}-\lambda^{2})(\lambda^{2}-\lambda_{\rm min}^{2})\ ;$ $\displaystyle B=\lambda^{2}(M_{1}+M_{2})-\lambda_{0}^{2}(M_{1}-M_{2})\ ;\quad C=-(M_{1}-M_{2})^{2}\ .$ (4.14) We expressed the quadratic polynomial appearing in the denominator of (4.13) in terms of $M_{j},\lambda$ and the critical tensions, eqs. (4.8), in order to render manifest the fact that for $\lambda$ in the allowed range, $\lambda_{\rm min}<\lambda<\lambda_{\rm max}$, the coefficient $A$ is positive. This is required for $g(\sigma)$ to be positive near the boundary where $\sigma\to\infty$. In addition, $AC\leq 0$ which ensures that the two roots of the denominator in eq. (4.13) $\displaystyle\sigma_{\pm}={-B\pm(B^{2}-AC)^{1/2}\over A}$ (4.15) are real, and that the larger root $\sigma_{+}$ is non-negative. Inserting eq. (4.13) in eq. (4.12) and fixing the sign of the square root near the conformal boundary gives after a little algebra $\displaystyle\frac{x_{1}^{\prime}}{\ell_{1}}\,=\,-\frac{\sigma\,(\lambda^{2}+\lambda_{0}^{2})+M_{1}-M_{2}}{{2(\sigma+M_{1}\ell_{1}^{\,2})}\,\sqrt{A\sigma(\sigma-\sigma_{+})(\sigma-\sigma_{-})}}\ \ ,$ (4.16a) $\displaystyle\frac{x_{2}^{\prime}}{\ell_{2}}\,=\,-\frac{\sigma\,(\lambda^{2}-\lambda_{0}^{2})+M_{2}-M_{1}}{{2(\sigma+M_{2}\ell_{2}^{\,2})}\,\sqrt{A\sigma(\sigma-\sigma_{+})(\sigma-\sigma_{-})}}\ \ .$ (4.16b) We may now confirm our earlier claim that if $\sigma_{+}>0$ then both $x_{1}^{\prime}\propto dx_{1}/dr_{1}$ and $x_{2}^{\prime}\propto dx_{2}/dr_{2}$ diverge at this point. 
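Alongside the analytic argument, the reality of the roots and the non-negativity of $\sigma_+$ are easy to spot-check numerically from eqs. (4.14) and (4.15); the parameter values below are arbitrary samples with $\lambda$ in the allowed window:

```python
import math

ell1, ell2, lam = 1.0, 2.0, 1.0            # arbitrary sample parameters
lam_min, lam_max = 1/ell1 - 1/ell2, 1/ell1 + 1/ell2
lam0_sq = lam_max * lam_min                # lambda_0^2, from eq. (4.8)

for M1, M2 in ((-3.0, -1.0), (4.0, -2.0), (6.0, 6.0)):
    A = (lam_max**2 - lam**2) * (lam**2 - lam_min**2)   # eq. (4.14)
    B = lam**2 * (M1 + M2) - lam0_sq * (M1 - M2)
    C = -(M1 - M2)**2
    assert A > 0 and A * C <= 0            # guarantees real roots

    disc = math.sqrt(B**2 - A * C)
    sigma_p = (-B + disc) / A              # eq. (4.15)
    sigma_m = (-B - disc) / A
    assert sigma_m <= sigma_p and sigma_p >= 0   # larger root is non-negative
```

Note the degenerate case $M_1=M_2$, for which $C=0$ and the larger root collapses to $\sigma_+=0$ whenever $B>0$.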
Furthermore, since $\sigma+M_{j}\ell_{j}^{\,2}=r_{j}^{2}$ is positive,141414Except for the measure-zero set of solutions in which the string passes through the center of global AdS3. the $x_{j}^{\prime}$ are finite at all $\sigma>\sigma_{+}$. Thus $\sigma_{+}$ is the unique turning point of the string, as advertised. Eqs. (4.11) and (4.16) give the general solution of the string equations for arbitrary mass parameters $M_{1},M_{2}$ of the green and pink slices. These must be determined by interior regularity, and by the Dirichlet conditions at the conformal boundary. Explicitly, the boundary conditions for the different slice types of figure 3 read: $L_{j}\ =\,2\int_{\,\sigma_{+}}^{\infty}d\sigma\,x_{j}^{\prime}\,\qquad\qquad{\text{for \ \ {\small E2, E2${}^{\prime}$}}}\ ;$ (4.17a) $L_{j}\ =\ nP_{j}+\,2\int_{\,\sigma_{+}}^{\infty}d\sigma\,x_{j}^{\prime}\,\qquad{\text{ for \ \ {\small E1, H1}}}\ ;$ (4.17b) $L_{j}\ =\ \Delta x_{j}\bigl{|}_{\rm Hor}\,+\,2\int_{\,\sigma_{+}}^{\infty}d\sigma\,x_{j}^{\prime}\,\qquad\quad{\text{ for \ \ {\small H2}}}\ .$ (4.17c) The integrals in these equations are the opening arcs, $\Delta x_{j}$, between the two endpoints of a half string. They can be expressed as complete elliptic integrals of the first, second and third kind, see appendix B. For the slices E1, H1 where $x_{j}$ is a periodic coordinate, we have denoted by $P_{j}>0$ its period, and by $n$ the string winding number. Finally for strings entering the horizon we denote by $\Delta x_{j}|_{\rm Hor}$ the opening arc between the two horizon-entry points in the $j$th coordinate chart. Possible phases of the ICFT for given torus parameters $T,L_{j}$ must be solutions to one pair of conditions (4.17). Apart from interior regularity, we will also require that the string does not self-intersect. In principle, two string bits intersecting at an angle $\not=\pi$ could join into another string.
Such string junctions would be the gravitational counterparts of interface fusion [65], and allowing them would make the holographic model much richer.151515Generically the intersection point in one slice will correspond to two points that must be identified in the other slice; this may impose further conditions. To keep, however, our discussion simple we will only allow a single type of domain wall in this work. The reader can easily convince herself that to avoid string intersections we must have $P_{j}>L_{j}$ and $n=1$ in (4.17b), and $\Delta x_{j}\bigl{|}_{\rm Hor}>0$ in (4.17c). ## 5 Phases: cold, hot $\&$ warm Among the five slice types of figure 3, H2 stands apart because it can only pair with itself. This is because a horizon is a closed surface, so it cannot end on the domain wall.161616Except possibly in the limiting case where the wall is the boundary of space. We will now show that the matching equations actually rule out several other pairs among the remaining slice types. One pair that is easy to exclude is [H1,H1], i.e. solutions that describe two black holes sitting on either side of the wall. Interior regularity would require in this case $M_{1}=M_{2}=(2\pi T)^{2}$. But eqs. (4.14) and (4.15) then imply that $\sigma_{+}=0$, so the wall cannot avoid the horizon leading to a contradiction. This gives our first no-go lemma: Two black holes on either side of a static domain wall are ruled out. Note by contrast that superheavy domain walls ($\lambda>\lambda_{\rm max}$) inflate and could thus prevent the black holes from coalescing.171717Asymptotically-flat domain walls, which have been studied a lot in the context of Grand Unification [83], are automatically in this range. A second class of pairs one can exclude are the ‘centerless geometries’ [E2,E2], [E2,E2′], [E2′,E2] and [E2′,E2′]. We use the word ‘centerless’ for geometries that contain neither a center of global AdS, nor a black hole in its place (see fig. 3). 
If such solutions existed, all inertial observers would necessarily hit the domain wall, since there would be neither a center at which to rest, nor a horizon into which to escape (in the double-Wick-rotated context of Simidzija and Van Raamsdonk, the [E2,E2] geometries give traversable wormholes [29]). The argument excluding such solutions is based on a simple observation: what distinguishes the centerless slices E2 and E2′ from those with an AdS center (E1) or a black hole (H1) is the sign of $x_{j}^{\prime}$ at the turning point, $\displaystyle{\rm sign}\bigl{(}x_{j}^{\prime}\bigl{|}_{\sigma\approx\sigma_{+}}\bigr{)}\ =\begin{cases}+\quad{\rm for\ {\small E2},{\small E2}}^{\prime}\,,\\\ -\quad{\rm for\ {\small E1},{\small H1}}\ .\end{cases}$ (5.1) Now from eqs. (4.16) one has $\displaystyle(\sigma+M_{1}\ell_{1}^{\,2})\,\frac{x_{1}^{\prime}}{\ell_{1}}+(\sigma+M_{2}\ell_{2}^{\,2})\,\frac{x_{2}^{\prime}}{\ell_{2}}\,<\,0\ ,$ (5.2) so both $x_{j}^{\prime}$ cannot be simultaneously positive. This holds for all $\sigma$, and hence also near the turning point, QED. This is our second no-go lemma: ‘Centerless’ static spacetimes in which all inertial observers would inevitably hit the domain wall are ruled out. We can actually exploit this argument further. As is clear from eq. (4.16a), if $M_{1}>M_{2}$ then $x^{\prime}_{1}$ is manifestly negative, i.e. the green slice is of type E1 or H1. The pairs [E2′,E1] and [E2′,E2] for which the above inequality is automatic are thus ruled out. One can also show that $x^{\prime}_{2}|_{\sigma\approx\sigma_{+}}$ is negative if $M_{2}>0>M_{1}$. This is obvious from eq. (4.16b) in the range $\lambda>\lambda_{0}$, and less obvious but also true for $\lambda<\lambda_{0}$, as can be checked by explicit calculation (the tedious algebra is straightforward and not particularly instructive, so we chose not to present it here; we did it with Mathematica and also tested it numerically).
The pairs [E1,E2′] and [E2,E2′] for which the above mass inequality is automatic are thus also excluded. Recall that the energy density of the $j$th CFT reads $\langle T_{tt}\rangle={1\over 2}\ell_{j}M_{j}$. Ruling out all pairs of E2′ with E1 or E2 implies therefore that in the ground state the energy density must be everywhere negative. When one $L_{j}$ is much smaller than the other, the Casimir energy scales like $E_{0}\sim\#/L_{j}$. The fact that the coefficient $\#$ is negative means that the Casimir force is attractive, in agreement with general theorems [88, 89]. This is the third no-go lemma: A slice of global AdS3 cannot be paired with a horizonless BTZ slice. This implies that in the ground state of the putative dual ICFT the energy density is everywhere negative. Figure 6: Phases of the domain-wall spacetime. The type of the green slice labels the rows of the table, and that of the pink slice the columns. In the hot (red) phase the wall enters the black-hole horizon, while in the warm (yellow) phases it avoids it. The cold (blue) phases have no black hole. Geometries in which an inertial observer is attracted to two different centers are indicated by a different shade (light yellow or darker blue). We have collected for convenience all these conclusions in fig. 6. The table shows the eligible slice pairs, or the allowed topologies of static-domain-wall spacetimes. It also defines a color code for phase diagrams. The light yellow phases that feature a wall between the black hole and an AdS restpoint are the gravitational avatars of the Faraday cage. Such solutions are easier to construct for larger $\lambda$. Domain walls lighter than $\lambda_{0}$, in particular, can never shield from a black hole on the ‘true-vacuum’ side. Indeed, as follows easily from eq. (4.16b), for $\lambda<\lambda_{0}$ and $M_{1}>0>M_{2}$, the sign of $x_{2}^{\prime}|_{\sigma\approx\sigma_{+}}$ is positive, so geometries of type [H1,E1] are excluded.
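As a quick sanity check of the sign, note that in the wall-free case (recalled in section 8) the ground state of a single CFT on a circle of size $L$ is dual to global AdS3 with $M=-(2\pi/L)^{2}$, so $E={1\over 2}\ell LM=-2\pi^{2}\ell/L$. A minimal numerical sketch, in units $8\pi G=1$ and with illustrative values only:

```python
import math

def casimir_energy(ell, L):
    """Ground-state energy of a single CFT on a circle of size L,
    dual to global AdS3 of radius ell (units 8*pi*G = 1).
    Global AdS3 has mass parameter M = -(2*pi/L)**2, and E = (1/2)*ell*L*M."""
    M = -(2 * math.pi / L) ** 2
    return 0.5 * ell * L * M   # = -2*pi**2*ell / L

ell = 1.0
E_small, E_large = casimir_energy(ell, 1.0), casimir_energy(ell, 2.0)
print(E_small, E_large)
# E < 0, and |E| grows as L shrinks: the force -dE/dL is attractive.
```

The negative coefficient of the $1/L$ scaling is exactly the statement that the Casimir force is attractive.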
## 6 Equations of state The different colors in figure 6 describe different phases of the system, since the corresponding geometries are topologically distinct. They differ in how the wall, the horizon (if one exists) and inertial observers intersect or avoid each other. Let us now turn to thermodynamics. For fixed Lagrangian parameters $\lambda,\ell_{j}$, the canonical variables that determine the state of the system are the temperature $T$ and the volumes $L_{1},L_{2}$. Because of scale invariance only two dimensionless ratios matter: $\displaystyle\tau_{1}:=TL_{1}\ ,\quad\tau_{2}:=TL_{2}\qquad{\rm or}\quad\quad\gamma:={L_{1}\over L_{2}}={\tau_{1}\over\tau_{2}}\ .$ (6.1) The microcanonical variables, the energy density and the entropy of each subsystem, read (see section 2, and recall that $8\pi G=1$) $\displaystyle{E_{j}\over L_{j}}={\ell_{j}\over 2}M_{j}\qquad{\rm and}\qquad S_{j}={r^{\rm H}_{j}\Delta x_{j}|_{\rm Hor}\over 4G}=2\pi\ell_{j}\sqrt{M_{j}}\,\Delta x_{j}|_{\rm Hor}\ .\ \ $ (6.2) These are the natural parameters of the interior geometry. The entropies are scale invariant. The other key dimensionless variable is the mass ratio, viz. the ratio of energy densities per degree of freedom in the two CFTs $\displaystyle\mu:={M_{2}\over M_{1}}\ .$ (6.3) When several phases coexist the dominant one is the one with the lowest free energy $F=\sum_{j}(E_{j}-TS_{j})$. As a sanity check, we rederive $F$ from the renormalized on-shell gravitational action in appendix A. The Dirichlet conditions, eqs. (4.17), give for each type of geometry two relations among the above variables that play the role of equations of state (in homogeneous systems there is a single equation of state; here we have one equation for each subsystem). They relate the natural interior parameters $S_{j}$ and $\mu$ to the variables $\tau_{j}$ and $\gamma$ of the boundary torus.
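The bookkeeping of eqs. (6.1)-(6.3) and the free-energy comparison can be sketched in a few lines. This is an illustrative helper, not part of the solution procedure; we use units $8\pi G=1$, so $1/4G=2\pi$ and $r^{\rm H}_{j}=\ell_{j}\sqrt{M_{j}}$:

```python
import math

# Thermodynamic bookkeeping of eqs. (6.1)-(6.3), units 8*pi*G = 1.
def state_variables(T, L1, L2, ell1, ell2, M1, M2, dx1_hor=0.0, dx2_hor=0.0):
    tau1, tau2 = T * L1, T * L2                # dimensionless temperatures (6.1)
    gamma = L1 / L2                            # volume ratio (6.1)
    mu = M2 / M1                               # mass ratio (6.3)
    E = 0.5 * (ell1 * L1 * M1 + ell2 * L2 * M2)            # total energy (6.2)
    # Entropy: only slices with a horizon (M_j > 0) contribute, cf. (6.2).
    S = 2 * math.pi * (ell1 * math.sqrt(max(M1, 0.0)) * dx1_hor
                       + ell2 * math.sqrt(max(M2, 0.0)) * dx2_hor)
    F = E - T * S                              # free energy, summed over subsystems
    return dict(tau1=tau1, tau2=tau2, gamma=gamma, mu=mu, E=E, S=S, F=F)

# Illustrative hot-phase call: M1 = M2 = (2*pi*T)**2 at T = 1.
print(state_variables(1.0, 1.0, 1.0, 1.0, 1.0,
                      (2 * math.pi) ** 2, (2 * math.pi) ** 2, 1.0, 1.0))
```

Comparing $F$ across coexisting phases then selects the dominant one.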
Note that in each phase some of these variables are fixed, since for horizonless slices $S_{j}=0$ and for slices with a horizon $M_{j}=(2\pi T)^{2}$. In computing the phase diagram we will have to invert these equations of state. ### 6.1 High-$T$ phase For fixed $L_{j}$ and very high temperature the black hole grows so large that it eats away a piece of the domain wall and the AdS rest points. The dominant solution is thus of type [H2,H2] and regularity fixes the mass parameters in both slices, $M_{1}=M_{2}=(2\pi T)^{2}$. The boundary conditions (4.17c) reduce in this case to simple equations for the opening horizon arcs $\Delta x_{j}|_{\rm Hor}$. Performing the integrals explicitly (see appendix B) gives $\displaystyle L_{1}-\Delta x_{1}\bigl{|}_{\rm Hor}\ =\ -{1\over\pi T}\,{\rm tanh}^{-1}\left({\ell_{1}(\lambda^{2}+\lambda^{2}_{0})\over 2\lambda}\right)\ ,$ (6.4a) $\displaystyle L_{2}-\Delta x_{2}\bigl{|}_{\rm Hor}\ =\ -{1\over\pi T}\,{\rm tanh}^{-1}\left({\ell_{2}(\lambda^{2}-\lambda^{2}_{0})\over 2\lambda}\right)\ .$ (6.4b) For consistency we must have $\Delta x_{j}|_{\rm Hor}>0$, which is automatic if $\lambda>\lambda_{0}$. If $\lambda<\lambda_{0}$, on the other hand, positivity of $\Delta x_{2}|_{\rm Hor}$ puts a lower bound on $\tau_{2}$, $\displaystyle\tau_{2}\,\geq\,{1\over\pi}\,{\rm tanh}^{-1}\left({\ell_{2}(\lambda_{0}^{2}-\lambda^{2})\over 2\lambda}\right)\ :=\,\tau_{2}^{*}\ .$ (6.5) We see here a first interpretation of the critical tension $\lambda_{0}$ encountered in section 4.3. For walls lighter than $\lambda_{0}$ there is a region of parameter space where the hot solution ceases to exist, even as a metastable phase.
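The arcs (6.4) and the bound (6.5) are elementary to evaluate numerically. In the sketch below we take $\lambda_{0}^{2}=1/\ell_{1}^{2}-1/\ell_{2}^{2}$, the value of the critical tension consistent with the worked example of figure 8 (this identification is an assumption here, as $\lambda_{0}$ is defined back in section 4.3); the parameter values are illustrative only:

```python
import math

# Hot-phase horizon arcs, eqs. (6.4), and the bound tau2* of eq. (6.5).
# Assumed: lam0**2 = 1/ell1**2 - 1/ell2**2 (consistent with figure 8).
def hot_arcs(T, L1, L2, ell1, ell2, lam):
    lam0sq = 1 / ell1 ** 2 - 1 / ell2 ** 2
    a1 = ell1 * (lam ** 2 + lam0sq) / (2 * lam)
    a2 = ell2 * (lam ** 2 - lam0sq) / (2 * lam)
    dx1 = L1 + math.atanh(a1) / (math.pi * T)   # from (6.4a)
    dx2 = L2 + math.atanh(a2) / (math.pi * T)   # from (6.4b)
    return dx1, dx2

def tau2_star(ell1, ell2, lam):
    """Lower bound (6.5) on tau2 = T*L2 for lam < lam0."""
    lam0sq = 1 / ell1 ** 2 - 1 / ell2 ** 2
    return math.atanh(ell2 * (lam0sq - lam ** 2) / (2 * lam)) / math.pi

# Parameters of figure 8: 2*ell2 = 3*ell1 and lam = 3/(5*ell2) < lam0.
ell2, lam = 1.0, 0.6
ell1 = 2 * ell2 / 3
print(tau2_star(ell1, ell2, lam))   # the bound below which the hot phase is lost
```

For $\tau_{2}$ above the bound both arcs come out positive, and $\tau_{2}=\tau_{2}^{*}$ is precisely where $\Delta x_{2}|_{\rm Hor}$ closes up.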
The total energy and entropy in the high-$T$ phase read $\displaystyle E_{\rm[hot]}\,=\,{1\over 2}(\ell_{1}L_{1}M_{1}+\ell_{2}L_{2}M_{2})\,=\,2\pi^{2}T^{2}\,(\ell_{1}L_{1}+\ell_{2}L_{2})\ ,$ (6.6) $\displaystyle S_{\rm[hot]}=4\pi^{2}T\bigl{(}\ell_{1}\Delta x_{1}\bigl{|}_{\rm Hor}+\,\ell_{2}\Delta x_{2}\bigl{|}_{\rm Hor}\bigr{)}=4\pi^{2}T^{2}(\ell_{1}L_{1}+\ell_{2}L_{2})+2\log g_{I}\ ,\ \ \ $ (6.7) where $\log g_{\rm I}\,$ is given by eq. (4.9) and the rightmost expression of the entropy follows from eqs. (6.4) and a straightforward reshuffling of the inverse hyperbolic tangents. This is a satisfying result. Indeed, the first term on the right-hand side of (6.7) is the thermal entropy of the two CFTs (being extensive, these entropies cannot depend on the ratio $L_{1}/L_{2}$), while the second term is the entropy of the two interfaces on the circle. The Bekenstein-Hawking formula captures nicely both contributions. Eqs. (6.4) and (6.7) show that shifting the $L_{j}$ at fixed $T$ does not change the entropy if and only if $\ell_{1}\,\delta L_{1}=-\ell_{2}\,\delta L_{2}$. Moving in particular a defect (for which $\ell_{1}=\ell_{2}$) without changing the volume $L_{1}+L_{2}$ is an adiabatic process, while moving a more general interface generates/absorbs entropy by modifying the density of degrees of freedom. ### 6.2 Low-$T$ phase(s) Consider next the ground state of the system, at $T=0$. The dual geometry belongs to one of the three horizonless types: the double-center geometry [E1, E1], or the single-center ones [E1, E2] and [E2, E1] (see fig. 6). Here the entropies $S_{j}=0$, and the only relevant dimensionless variables are the volume and energy-density ratios, $\gamma$ and $\mu$. Note that they are both positive since $L_{j}>0$ and $M_{j}<0$ for both $j$.
The Dirichlet conditions (4.16) for horizonless geometries read $\displaystyle\sqrt{|M_{1}|}\,L_{1}=2\pi\,\delta_{{\mathbb{S}}_{1},{\rm E1}}-f_{1}(\mu)\ ,\qquad\sqrt{|M_{1}|}\,L_{2}={2\pi\over\sqrt{\mu}}\,\delta_{{\mathbb{S}}_{2},{\rm E1}}-f_{2}(\mu)\ ,$ (6.8) where $\delta_{{\mathbb{S}}_{j},{\rm E1}}=1$ if the $j$th slice is of type E1 and $\delta_{{\mathbb{S}}_{j},{\rm E1}}=0$ otherwise, and $\displaystyle f_{1}(\mu)\,=\,{\ell_{1}\over\sqrt{A}}\int_{s_{+}}^{\infty}ds{s(\lambda^{2}+\lambda_{0}^{2})-1+\mu\over(s-\ell_{1}^{\,2})\sqrt{s(s-s_{+})(s-s_{-})}}\ ,\ \ \ $ (6.9a) $\displaystyle f_{2}(\mu)\,=\,{\ell_{2}\over\sqrt{A}}\int_{s_{+}}^{\infty}ds{s(\lambda^{2}-\lambda_{0}^{2})+1-\mu\over(s-\mu\ell_{2}^{\,2})\sqrt{s(s-s_{+})(s-s_{-})}}\ ,\ \ \ $ (6.9b) with $\displaystyle A\,s_{\pm}=\lambda^{2}(1+\mu)-\lambda_{0}^{2}(1-\mu)\pm 2\lambda\sqrt{{1-\mu\over\ell_{2}^{2}}+{\mu^{2}-\mu\over\ell_{1}^{2}}+\mu\lambda^{2}}\ \ .$ (6.10) The dummy integration variable $s$ is the appropriately rescaled blueshift factor of the string worldsheet, $s=\sigma/|M_{1}|$ . Dividing the two sides of eqs. (6.8) gives $\gamma$ as a function of $\mu$ for each of the three possible topologies (the functions $f_{j}(\mu)$ are combinations of complete elliptic integrals of the first, second and third kind, see appendix B). The value $\mu=1$ gives $\gamma=1$, corresponding to the scale-invariant AdS2 string worldsheet. The known supersymmetric top-down solutions live at this special point in phase space. If the ground state of the putative dual quantum-mechanical system is unique, we should find a single slice-pair type and value of $\mu$ for each value of $\gamma$. Numerical plots show that this is indeed the case. Specifically, we found that $\gamma(\mu)$ is a monotonically-increasing function of $\mu$ for any given slice pair, and that it changes continuously from one type of pair to another. We will return to these branch-changing ‘sweeping transitions’ in section 7.
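For reference, eliminating $\sqrt{|M_{1}|}$ between the two relations (6.8) gives explicitly, for the double-center pair [E1,E1], $\displaystyle\gamma(\mu)\,=\,{L_{1}\over L_{2}}\,=\,{2\pi-f_{1}(\mu)\over 2\pi/\sqrt{\mu}-f_{2}(\mu)}\ ,$ and similarly for the single-center pairs [E1,E2] and [E2,E1], with the Kronecker delta of the E2 slice set to zero in the corresponding numerator or denominator.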
Let us stress that the uniqueness of the cold solution did not have to be automatic in classical gravity, nor in the dual large-$N$ quantum mechanics. For most of the $(\ell_{j},\lambda)$ parameter space, as $\gamma$ ranges in $(0,\infty)$ the mass ratio $\mu$ also covers the entire range $(0,\infty)$. However, if $\ell_{1}<\ell_{2}$ (strict inequality) and for sufficiently light domain walls, we found that $\gamma$ vanishes at some positive $\mu=\mu_{0}(\lambda,\ell_{j})$. Below this critical value $\gamma$ becomes negative, signaling that the wall self-intersects and the solution must be discarded. This leads to a striking phenomenon that we discuss in section 8. ### 6.3 Warm phases The last set of solutions of the model are the yellow- or orange-coloured ones in fig. 6. Here the string avoids the horizon, so the slice pair is of type [H1,X] or [X,H1] with X one of the horizonless types: E1, E2 or E2′. Assume first that the black hole is on the green side of the wall, so that $M_{1}=(2\pi T)^{2}$.
In terms of $\mu$ the Dirichlet conditions (4.17a, 4.17b) read: $\displaystyle 2\pi T\Delta x_{1}\bigl{|}_{{\rm Hor}}-\,2\pi\tau_{1}=\tilde{f}_{1}(\mu)\ ,\qquad 2\pi\tau_{2}={2\pi\over\sqrt{-\mu}}\,\delta_{{\mathbb{S}}_{2},{\rm E1}}-\tilde{f}_{2}(\mu)\ ,$ (6.11) where $\displaystyle\tilde{f}_{1}(\mu)\,=\,{\ell_{1}\over\sqrt{A}}\int_{\tilde{s}_{+}}^{\infty}ds{s(\lambda^{2}+\lambda_{0}^{2})+1-\mu\over(s+\ell_{1}^{\,2})\sqrt{s(s-\tilde{s}_{+})(s-\tilde{s}_{-})}}\ ,\ \ \ $ (6.12a) $\displaystyle\tilde{f}_{2}(\mu)\,=\,{\ell_{2}\over\sqrt{A}}\int_{\tilde{s}_{+}}^{\infty}ds{s(\lambda^{2}-\lambda_{0}^{2})-1+\mu\over(s+\mu\ell_{2}^{\,2})\sqrt{s(s-\tilde{s}_{+})(s-\tilde{s}_{-})}}\ ,\ \ \ $ (6.12b) and the roots $\tilde{s}_{\pm}=\sigma_{\pm}/M_{1}$ inside the square root are given by $\displaystyle A\,\tilde{s}_{\pm}=-\lambda^{2}(1+\mu)+\lambda_{0}^{2}(1-\mu)\pm 2\lambda\sqrt{{1-\mu\over\ell_{2}^{2}}+{\mu^{2}-\mu\over\ell_{1}^{2}}+\mu\lambda^{2}}\ \ .$ (6.13) In the first condition (6.11) we have used the fact that the period of the green slice that contains the horizon is $P_{1}=\Delta x_{1}|_{{\rm Hor}}$. 
If the black hole is on the pink side of the wall, the conditions take a similar form in terms of the inverse mass ratio $\hat{\mu}=\mu^{-1}=M_{1}/M_{2}$, $\displaystyle 2\pi T\Delta x_{2}\bigl{|}_{{\rm Hor}}-\,2\pi\tau_{2}={\hat{f}}_{2}(\hat{\mu})\ ,\quad 2\pi\tau_{1}={2\pi\over\sqrt{-\hat{\mu}}}\,\delta_{{\mathbb{S}}_{1},{\rm E1}}-\hat{f}_{1}(\hat{\mu})\ ,$ (6.14) where $\displaystyle\hat{f}_{1}(\hat{\mu})\,=\,{\ell_{1}\over\sqrt{A}}\int_{\hat{s}_{+}}^{\infty}ds{s(\lambda^{2}+\lambda_{0}^{2})+\hat{\mu}-1\over(s+\hat{\mu}\ell_{1}^{\,2})\sqrt{s(s-\hat{s}_{+})(s-\hat{s}_{-})}}\ ,\ \ \ $ (6.15a) $\displaystyle\hat{f}_{2}(\hat{\mu})\,=\,{\ell_{2}\over\sqrt{A}}\int_{\hat{s}_{+}}^{\infty}ds{s(\lambda^{2}-\lambda_{0}^{2})-\hat{\mu}+1\over(s+\ell_{2}^{\,2})\sqrt{s(s-\hat{s}_{+})(s-\hat{s}_{-})}}\ .\ \ \ $ (6.15b) and the roots $\hat{s}_{\pm}=\sigma_{\pm}/M_{2}$ inside the square root are given by $\displaystyle A\,\hat{s}_{\pm}=-\lambda^{2}(\hat{\mu}+1)+\lambda_{0}^{2}(\hat{\mu}-1)\pm 2\lambda\sqrt{{\hat{\mu}^{2}-\hat{\mu}\over\ell_{2}^{2}}+{1-\hat{\mu}\over\ell_{1}^{2}}+\hat{\mu}\lambda^{2}}\ \ .$ (6.16) The functions $\tilde{f}_{j}$ and $\hat{f}_{j}$, as well as the $f_{j}$ of the cold phase, derive from the same basic formulae (4.16a, 4.16b) and differ only by a few signs. We chose to write them out separately because these signs are important. Note also that while in cold solutions $\mu$ is always positive, here $\mu$ and its inverse $\hat{\mu}$ can have either sign. Not all values of $\mu$ and $\hat{\mu}$, however, correspond to admissible solutions. For a pair of type [H1,X] we must demand (i) that the right-hand sides in (6.11) be positive – the non-intersection requirement, and (ii) that $x_{1}^{\prime}|_{\sigma\approx\sigma_{+}}$ be negative – the turning point condition (5.1). Likewise for solutions of type [X, H1] we must demand that the right-hand sides in (6.14) be positive and that $x_{2}^{\prime}|_{\sigma\approx\sigma_{+}}$ be negative.
The turning-point requirement is easy to implement. In the [H1,X] case, $x_{1}^{\prime}|_{\sigma\approx\sigma_{+}}$ is negative when the numerator of the integrand in (6.12a), evaluated at $s=\tilde{s}_{+}$, is positive. Likewise for the [X,H1] pairs, $\,x_{2}^{\prime}|_{\sigma\approx\sigma_{+}}$ is negative when the numerator of the integrand in (6.15b), evaluated at $s=\hat{s}_{+}$, is positive. After a little algebra these conditions take a simple form $\displaystyle{\rm for}\ \ {\rm[H1,X]}\quad\mu\in(-\infty,1]\ ;\qquad{\rm for}\ \ {\rm[X,H1]}\quad\hat{\mu}=\mu^{-1}\in(-\infty,1]\ .\ \ \ $ (6.17) Recalling that $\mu=\hat{\mu}^{-1}=M_{2}/M_{1}$, we conclude that in all the cases the energy density per degree of freedom in the horizonless slice is lower than the corresponding density in the black hole slice. This agrees with physical intuition: the energy density per degree of freedom in the cooler CFT is less than the thermal density $\pi T^{2}/6$ – the interfaces did not let the theory thermalize. When $\mu\to 1$ or $\hat{\mu}\to 1$, the wall enters the horizon and the energy is equipartitioned. This completes our discussion of the equations of state. To summarize, these equations relate the parameters of the interior geometry ($\mu,S_{j}$) to those of the conformal boundary ($\gamma,\tau_{j}$). The relation involves elementary functions in the hot phase, and was reduced to a single function $\gamma(\mu)$, which can be readily plotted, in the cold phases. Furthermore at any given point in parameter space the hot and cold solutions, when they exist, are unique. The excluded regions are $\tau_{2}<\tau_{2}^{*}(\lambda,\ell_{j})$ for the hot solutions, and $\mu>\mu_{0}(\lambda,\ell_{j})$ for the cold solution with $\mu_{0}$ the point where $\gamma=0$. In warm phases the story is richer since more than one solution typically coexists at any given value of $(\gamma,\tau_{j})$. Some solutions have negative specific heat, as we will discuss later.
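The thermal density per degree of freedom quoted above can be checked against the standard holographic dictionary. Assuming the Brown-Henneaux central charge $c_{j}=3\ell_{j}/2G=12\pi\ell_{j}$ in our units $8\pi G=1$ (this value is not restated in this section), the horizon slice saturates exactly $\pi T^{2}/6$ per degree of freedom:

```python
import math

# Energy density per degree of freedom, (E/L)/c, cf. eq. (6.2).
# Assumption: Brown-Henneaux central charge c = 3*ell/(2G) = 12*pi*ell
# in units 8*pi*G = 1 (standard dictionary, not restated in this section).
def energy_density_per_dof(ell, M):
    c = 12 * math.pi * ell
    return (0.5 * ell * M) / c          # = M / (24*pi), independent of ell

T = 0.37                                 # illustrative temperature
thermal = math.pi * T ** 2 / 6           # thermal density per dof quoted in text
bh_slice = energy_density_per_dof(1.0, (2 * math.pi * T) ** 2)
print(thermal, bh_slice)                 # the two agree

# The bound mu <= 1 of eq. (6.17) then says the horizonless slice has
# M_cool <= (2*pi*T)**2, i.e. a per-dof density below the thermal value.
```

Note that the per-dof density depends only on $M$, so the inequality $\mu\leq 1$ translates directly into the statement in the text.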
Finding the parameter regions where different solutions exist requires inverting the relation between ($\gamma,\tau_{j}$) and ($\mu,S_{j}$). We will do this analytically in some limiting cases, and numerically to compute the full phase diagram in section 9. ## 7 Phase transitions The transitions between different phases are of three kinds: * • Hawking-Page transitions describing the formation of a black hole. These transitions from the cold to the hot or warm phases of fig. 6 are always first order; * • Warm-to-hot transitions during which part of the wall is captured by the horizon. We will show that these transitions are also first-order; * • Sweeping transitions where the wall sweeps away a center of global AdS, i.e. a rest point for inertial observers. These are continuous transitions between the one- and two-center phases of fig. 6. It is instructive to picture these transitions by plotting the metric factor $g_{tt}$ while traversing space along the axis of reflection symmetry. The curve changes qualitatively as shown in figure 7, illustrating the topological nature of the transitions on the gravity side. Figure 7: Curves of the blueshift factor $g_{tt}$ as one traverses space along the $Z_{2}$ symmetry axis. The color code is the same as in fig. 6. The wall is located at the turning point $g_{tt}=\sigma_{+}$ where the curve is discontinuous. The grey arrows indicate possible transitions. The blackened parts of the curves are regions behind the horizon. Before embarking on numerical plots, we will first do the following things: (i) Comment on the ICFT interpretation of these transitions; (ii) Compute the sweeping transitions analytically; and (iii) Prove that the warm-to-hot transitions are first order, i.e. that one cannot lower the wall to the horizon continuously by varying the boundary data.
### 7.1 ICFT interpretation When a holographic dual exists, Witten has argued that the appearance of a black hole at the Hawking-Page (HP) transition signals deconfinement in the gauge theory [90]. Assuming this interpretation (there is an extensive literature on the subject, including [91, 92], studies specific to two dimensions [93, 94], and recent discussions in relation with the superconformal index in $N=4$ super Yang-Mills [95, 96, 97, 98]; for an introductory review see [99]) leads to the conclusion that in warm phases a confined theory coexists with a deconfined one. We will see below that such coexistence is easier when the confined theory is CFT2, i.e. the theory with the larger central charge (even though for homogeneous 2-dimensional CFTs the critical temperature, $\tau_{\rm HP}=1$, does not depend on the central charge by virtue of modular invariance). This is natural from the gravitational perspective. Solutions of type [H1, X] are more likely than solutions of type [X, H1] because a black hole forms more readily on the ‘true-vacuum’ side of the wall. We will actually provide some evidence later that if $c_{2}>3c_{1}$ there are no equilibrium phases at all in which CFT2 is deconfined while CFT1 stays confined. The question that jumps to one’s mind is what happens for thick walls, where one expects a warm-to-hot crossover rather than a sharp transition. One possibility is that the coexistence of confined and deconfined phases is impossible in microscopic holographic models. Alternatively, an appropriately defined Polyakov loop [90] could provide a sharp order parameter for this transition. For sweeping transitions the puzzle is the other way around. Here a sharp order parameter exists in classical gravity – it is the number of rest points for inertial observers. This can be defined both for thin- and for thick-wall geometries. The interpretation on the field theory side is however unclear.
The transitions could be related to properties of the low-lying spectrum at infinite $N$, or to the entanglement structure of the ground state. We leave these questions open for future work. ### 7.2 Sweeping transitions Sweeping transitions are continuous transitions that happen at fixed values of the mass ratio $\mu$. We will prove these statements here. Assume for now continuity, and let the $j$th slice go from type E1 to type E2. The transition occurs when the string turning point and the center of the $j$th AdS slice coincide, i.e. when $\displaystyle r_{j}(\sigma_{+})=\sqrt{\sigma_{+}+M_{j}\ell_{j}^{2}}\,=\,0\ .$ (7.1) Clearly this has a solution only if $M_{j}<0$. Inserting in (7.1) the expressions (4.15) - (4.14) for $\sigma_{+}$ gives two equations for the critical values of $\mu$ with the following solutions $\displaystyle\mu_{1}^{*}={1-\ell_{2}^{\,2}\lambda^{2}\over\ell_{2}^{2}/\ell_{1}^{2}}\quad\ {\rm and}\quad\ \mu_{2}^{*}={\ell_{1}^{2}/\ell_{2}^{2}\over 1-\ell_{1}^{\,2}\lambda^{2}}\ \ .$ (7.2) In the low-$T$ phases both $M_{j}$ are negative and $\mu$ is positive. Furthermore, a little algebra shows that for all $\lambda\in(\lambda_{\rm min},\,\lambda_{\rm max})$ the following is true $\displaystyle x_{1}^{\prime}\bigl{|}_{\sigma\approx\sigma_{+}}<0\quad{\rm at}\ \ \mu\gg 1\ \quad{\rm and}\quad x_{2}^{\prime}\bigl{|}_{\sigma\approx\sigma_{+}}<0\quad{\rm at}\ \ \mu\ll 1\,.\ \ $ (7.3) This means that for $\mu\gg 1$ the green slice is of type E1, and for $\mu\ll 1$ the pink slice is of type E1. A sweeping transition can occur if the critical mass ratios (7.2) are in the allowed range. We distinguish three regimes of $\lambda$: * • Heavy ($\lambda>1/\ell_{1}$): None of the $\mu_{j}^{*}\,$ is positive, so the solution is of type [E1,E1] for all $\mu$, i.e. cold solutions are always double-center ; * • Intermediate (${1/\ell_{1}}>\lambda>1/\ell_{2}$): Only $\mu_{2}^{*}\,$ is positive. 
If this is inside the range of non-intersecting walls, the solution goes from [E1,E2] at large $\mu\,$, to [E1,E1] at small $\mu$. Otherwise the geometry is always of the single-center type [E1,E2] ; * • Light ($\lambda<1/\ell_{2}$): Both $\mu_{1}^{*}\,$ and $\mu_{2}^{*}\,$ are positive, so there is the possibility of two sweeping transitions: from [E2,E1] at small $\mu$ to [E1,E2] at large $\mu$ passing through the double-center type [E1,E1] . Note that since $\lambda_{\rm min}=1/\ell_{1}-1/\ell_{2}$, this range of $\lambda$ only exists if $\ell_{2}<2\ell_{1}$, i.e. when CFT2 has no more than twice the number of degrees of freedom of the more depleted CFT1. We can now confirm that sweeping transitions are continuous, not only in terms of the mass ratio $\mu$ but also in terms of the ratio of volumes $\gamma$. To this end we expand the relations (6.8) around the above critical points and show that the $L_{j}$ indeed vary continuously across the transition. The calculations can be found in appendix C . For the warm phases we proceed along similar lines. One of the two $M_{j}$ is now equal to $(2\pi T)^{2}>0$, so sweeping transitions may only occur for negative $\mu$. Consider first warm solutions of type [H1,X] with the black hole in the ‘true vacuum’ side. A little calculation shows that $\,x_{2}^{\prime}|_{\sigma\approx\sigma_{+}}$ is negative, i.e. X=E1, if and only if $\displaystyle\lambda>{1\over\ell_{1}}\qquad{\rm and}\qquad\mu<\mu_{2}^{*}<0\ .$ (7.4) Recall that when X=E1 some inertial observers can be shielded from the black hole by taking refuge at the restpoint of the pink slice. We see that this is only possible for heavy walls ($\lambda>1/\ell_{1}$) and for $\mu<\mu_{2}^{*}$. A sweeping transition [H1,E1] $\to$ [H1,E2] takes place at $\mu=\mu_{2}^{*}$. Consider finally a black hole in the ‘false vacuum’ side, namely warm solutions of type [X,H1]. Here $\,x_{1}^{\prime}|_{\sigma\approx\sigma_{+}}$ is negative, i.e.
$X$ has a rest point, if and only if the following conditions are satisfied $\displaystyle\lambda>{1\over\ell_{2}}\qquad{\rm and}\qquad\hat{\mu}:=\mu^{-1}<(\mu_{1}^{*})^{-1}<0\ .$ (7.5) Shielding from the black hole looks easier here: both heavy and intermediate-tension walls can do it. In reality, however, we have found that solutions with the black hole in the ‘false vacuum’ side are rare, and that the above inequality pushes $\hat{\mu}$ outside the admissible range. The general trend emerging from the analysis is that the heavier the wall the more likely are the two-center geometries. A suggestive calculation actually shows that $\displaystyle{\partial\sigma_{+}\over\partial\lambda}\Bigl{|}_{M_{j}\ {\rm fixed}}\ \ {\rm is}\ \begin{cases}{\textrm{ positive\ \ for \ two-center\ solutions}}\\\ \,{\textrm{negative\ \ for \ single-center\ solutions}}\end{cases}$ (7.6) where the word ‘center’ here includes both an AdS restpoint and a black hole. At fixed energy densities a single center is therefore pulled closer to a heavier wall, while two centers are instead pushed away. It might be interesting to also compute ${\partial\sigma_{+}/\partial\lambda}$ and ${\partial V/\partial\lambda}$ at $L_{j}$ fixed, where $V$ is the regularized volume of the interior space. In the special case of the vacuum solution with an AdS2 wall, the volume (and the associated complexity [100]) can be seen to grow with the tension $\lambda$. ### 7.3 Warm-to-hot transitions In warm-to-hot transitions the thin domain wall enters the black-hole horizon. One may have expected this to happen continuously, i.e. to be able to lower the wall to the horizon smoothly, by slowly varying the boundary data $L_{j},T$. We will now show that, if the tension $\lambda$ is fixed, the transition is actually always first order. Note first that in warm solutions the slice that contains the black hole has $M_{j}=(2\pi T)^{2}$. If the string turning point approaches continuously the horizon, then $\sigma_{+}\to 0$.
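Before turning to the first-order argument, note that the critical ratios (7.2) of the previous subsection and the three tension regimes are simple enough to tabulate numerically; a minimal sketch with illustrative parameters only:

```python
def sweeping_thresholds(ell1, ell2, lam):
    """Critical mass ratios of eq. (7.2) and the resulting regime
    (heavy / intermediate / light), for ell1 < ell2 and lam != 1/ell1."""
    mu1 = (1 - (ell2 * lam) ** 2) * ell1 ** 2 / ell2 ** 2   # mu_1^* of (7.2)
    mu2 = (ell1 ** 2 / ell2 ** 2) / (1 - (ell1 * lam) ** 2) # mu_2^* of (7.2)
    if lam > 1 / ell1:
        regime = "heavy: no positive mu*, cold solutions always [E1,E1]"
    elif lam > 1 / ell2:
        regime = "intermediate: only mu2* > 0"
    else:
        regime = "light: both mu* > 0, up to two sweeping transitions"
    return mu1, mu2, regime

# Example with ell2 < 2*ell1, so that the light range lam < 1/ell2 exists.
for lam in (0.5, 0.9, 1.3):
    print(lam, sweeping_thresholds(1.0, 1.2, lam))
```

The signs of $\mu_{1}^{*},\mu_{2}^{*}$ reproduce the bullet-point classification above.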
From eqs. (4.14, 4.15) we see that this can happen if and only if $(M_{1}-M_{2})\to 0$, which implies in passing that the solution must necessarily be of type [H1,E2′] or [E2′,H1]. Expanding around this putative point where the wall touches the horizon we set $\displaystyle{M_{1}-M_{2}\over M_{1}+M_{2}}:=\delta\quad{\rm with}\quad|\delta|\ll 1\ \ \Longrightarrow\ \ \sigma_{+}\approx\biggl{(}{2\pi T\over\lambda}\biggr{)}^{2}\delta^{2}\,.$ (7.7) Recalling that the horizonless slice has the smaller $M_{j}$ we see that for positive $\delta$ the black hole must be in the green slice and $\mu=1-2\delta+{\cal O}(\delta^{2})$, while for negative $\delta$ the black hole is in the pink slice and $\hat{\mu}=1+2\delta+{\cal O}(\delta^{2})$. The second option can be immediately ruled out since it is impossible to satisfy the boundary conditions (6.14). Indeed, $\hat{f}_{1}(\hat{\mu}\approx 1)$ is manifestly positive, as is clear from eq. (6.15a), and we have assumed that $\mathbb{S}_{1}$ is of type E2′. Thus the second condition (6.14) cannot be obeyed. By the same reasoning we see that for $\delta$ positive, and since now $\mathbb{S}_{2}$ is of type E2′, we need that $\tilde{f}_{2}(\mu\approx 1)$ be negative. As is clear from the expression (6.12b) this implies that $\lambda<\lambda_{0}$. The upshot of the discussion is that a warm solution arbitrarily close to the hot solution may exist only if $\lambda<\lambda_{0}$ and if the black hole is on the true-vacuum side.
It is easy to see that under these conditions the two branches of solution indeed meet at $\mu=1$, $\Delta x_{2}|_{{\rm Hor}}=0$ and hence from (6.4b) $\displaystyle\tau_{2}\,=\,{1\over\pi}\,{\rm tanh}^{-1}\left({\ell_{2}(\lambda_{0}^{2}-\lambda^{2})\over 2\lambda}\right)\,:=\,\tau_{2}^{*}\,.$ (7.8) Recall from section 6.1 that this is the limiting value for the existence of the hot solution – the solution ceases to exist at $\tau_{2}<\tau_{2}^{*}\,.$ The nearby warm solution could in principle take over in this forbidden range, provided that $\tau_{2}(\delta)$ decreases as $\delta$ moves away from zero. It actually turns out that $\tau_{2}(\delta)$ initially increases for small $\delta$, so this last possibility for a continuous warm-to-hot transition is also ruled out. To see why this is so, expand (6.11) and (6.12b) around $\mu=1$, $\tilde{s}_{+}={\delta^{2}\over\lambda^{2}}+{\cal O}(\delta^{3})\,,\quad\tilde{s}_{-}=-{4\lambda^{2}\over A}\Bigl{(}1-\delta(1+{\lambda_{0}^{2}\over\lambda^{2}})\Bigr{)}+{\cal O}(\delta^{2})\,,\quad$ and shift the integration variable $s:=y+\tilde{s}_{+}$ so that (6.12b) reads $\displaystyle 2\pi\,\tau_{2}(\delta)={\ell_{2}\over\sqrt{A}}\int_{0}^{\infty}dy\left[{y(\lambda_{0}^{2}-\lambda^{2})+2\delta\over(y+\mu\ell_{2}^{2})\sqrt{y(y+\tilde{s}_{+})(y-\tilde{s}_{-})}}+{\cal O}(\delta^{2})\right]\ .$ (7.9) We neglected in the integrand all contributions of ${\cal O}(\delta^{2})$ except for the $\tilde{s}_{+}$ in the denominator that regulates the logarithmic divergence of the ${\cal O}(\delta\log\delta)$ correction. Now use the inequalities ${y(\lambda_{0}^{2}-\lambda^{2})+2\delta\over\sqrt{(y+\tilde{s}_{+})(y-\tilde{s}_{-})}}>{y(\lambda_{0}^{2}-\lambda^{2})+2\delta\over\sqrt{(y+\delta^{2}/\lambda^{2})(y+4\lambda^{2}/A)}}>{\sqrt{y}(\lambda_{0}^{2}-\lambda^{2})\over\sqrt{(y+4\lambda^{2}/A)}}\ ,$ where the second one is equivalent to $2\delta>(\lambda_{0}^{2}/\lambda^{2}-1)\delta^{2}$, which is true for small enough $\delta$. 
Plugging in (7.9) shows that $\tau_{2}(\delta)>\tau_{2}(0)$ at the leading order in $\delta$, proving our claim. Figure 8: The function $\tau_{2}(\mu)$ in the [H1,E2] and [H1,E2′] branches of solutions, for $2\ell_{2}=3\ell_{1}$ and $\lambda={3/5\ell_{2}}<\lambda_{0}={\sqrt{5}/2\ell_{2}}$ . The red line indicates the bound $\tau_{2}^{*}$ below which the hot solution ceases to exist. A typical $\tau_{2}(\mu)$ in the [H1,E2] and [H1,E2′] branch of solutions, and for $\lambda<\lambda_{0}$, is plotted in figure 8. The function grows initially as $\mu$ moves away from 1, reaches a maximum value and then turns around and goes to zero as $\mu\to-\infty$. The red line indicates the limiting value $\tau_{2}^{*}$ below which there is no hot solution. For $\tau_{2}$ slightly above $\tau_{2}^{*}$ we see that there are three coexisting black holes, the hot and two warm ones. For $\tau_{2}<\tau_{2}^{*}$, on the other hand, only one warm solution survives, but it describes a wall at a finite distance from the horizon. Whether this is the dominant solution or not, the transition is therefore necessarily first order. ## 8 Exotic fusion and bubbles Before proceeding to the phase diagram, we pause here to discuss the peculiar phenomenon announced earlier, in section 6.2. This arises in the limits $\gamma=L_{1}/L_{2}\to 0$ or $\gamma\to\infty$, with $L_{1}+L_{2}$ and $T$ kept fixed. In these limits the conformal boundary of one slice shrinks to a point. Consider for definiteness the limit $L_{1}\to 0$. In the language of the dual field theory the interface and anti-interface fuse in this limit into a defect of CFT2. The naive expectation, based on free-field calculations [65, 101, 102], is that this is the trivial (or identity) defect. Accordingly, the green interior slice should recede to the conformal boundary, leaving as the only remnant a (divergent) Casimir energy. We have found that this expectation is not always borne out as we will now explain. 
Suppose first that the surviving CFT2 is in its ground state, and that the result of the interface-anti-interface fusion is the expected trivial defect. The geometry should in this case approach global AdS3 of radius $\ell_{2}$, with $M_{2}$ tending to $-(2\pi/L_{2})^{2}$, see section 2. Furthermore, $\sigma_{+}$ should go to infinity in order for the green slice to shrink towards the ultraviolet region. As seen from eqs. (4.15, 4.14) this requires $M_{1}\to-\infty$, so that $\mu\,$ should vanish together with $\gamma$. This is indeed what happens in much of the $(\lambda,\ell_{1},\ell_{2})$ parameter space. One finds $\mu\sim\gamma^{2}\to 0$, a scaling compatible with the expected Casimir energy $\sim\\#/L_{1}$. Nevertheless, sometimes $\gamma$ vanishes at finite $\mu_{0}$. In such cases, as $\mu\to\mu_{0}$ the green slice does not disappear even though its conformal boundary has shrunk to a point. This is illustrated by the left figure 9, which shows a static bubble of ‘true vacuum’ suspended from a point on the boundary of the ‘false vacuum’. (Footnote 24: These are static solutions, not to be confused with ‘bags of gold’, which are cosmologies glued onto the backside of a Schwarzschild-AdS spacetime, see e.g. [103, 30].) The phenomenon is reminiscent of spacetimes that realize ‘wedge’ or codimension-2 holography, like those in refs. [104, 105, 106]. To convince ourselves that the phenomenon is real, we give an analytic proof in appendix D of the existence of such suspended bubbles in at least one region of parameters ($\ell_{2}>\ell_{1}$ and $\lambda\approx\lambda_{\rm min}>0$). Furthermore, since the vacuum solution for a given $\gamma$ is unique, there is no other competing solution. In the example of appendix D, in particular, $\gamma$ is finite and negative at $\mu=0$. Figure 9: Left: A bubble of true vacuum that survives inside the false vacuum despite the fact that its conformal boundary shrinks to a point.
Right: A bubble of false vacuum with $\lambda=\lambda_{0}$ inscribed between the boundary and the horizon of a black hole. In the language of field theory this is a striking phenomenon. It implies that interface and anti-interface do not annihilate, but fuse into an exotic defect, spontaneously generating a new scale in the process. This is the blueshift at the tip of the bubble, $\sigma_{+}(\mu_{0},L_{2})$, or better the corresponding frequency scale $r_{2}(\sigma_{+})$ in the D(efect)CFT. The phenomenon is not symmetric under the exchange $1\leftrightarrow 2$. Static bubbles of the false vacuum (pink) spacetime inside the true (green) vacuum do not seem to exist. We proved this analytically for $\lambda<\lambda_{0}$, and numerically for all other values of the tension. We have also found that the suspended green bubble can be of type E1, i.e. have a center. The redshift factor $g_{tt}$ inside the bubble can even be lower than in the surrounding space, so that the bubble hosts the excitations of lowest energy. We did not show this analytically, but the numerical evidence is compelling. Do suspended bubbles also exist when the surrounding spacetime contains a black hole? The answer is affirmative, as one can show semi-analytically by focussing on the region $\lambda\approx\lambda_{0}$. We have seen in the previous section that near this critical tension there exist warm solutions of type [H1,E2′] with the wall arbitrarily close to the horizon. Let us consider the function $\tau_{2}(\mu,\lambda)$ given in this branch of solutions by eqs. (6.12b) and (6.11) (with $\mathbb{S}_{2}\not=$E1). This is a continuous function in both arguments, so as $\lambda$ increases past $\lambda_{0}$, $\tau_{2}(1)$ goes from positive to negative with the overall shape of the function varying smoothly. This is illustrated in figure 10, where we plot $\tau_{2}(\mu)$ for $\lambda$ slightly below and slightly above $\lambda_{0}$.
It should be clear from these plots that for $\lambda>\lambda_{0}$ (the plot on the right) $\tau_{2}$ vanishes at a finite $\mu\approx 1$. This is a warm bubble solution, as advertised. Figure 10: Plots of the function $\tau_{2}(\mu)$ in the [H1,E2] or [H1,E2′] branch of solutions for $\ell_{2}/\ell_{1}=1.5$. The critical tension is $\lambda_{0}\ell_{2}\approx 1.12$. The curve on the left is for $\lambda\ell_{2}=1.05$, and the curve on the right for $\lambda\ell_{2}=1.35$. We have found more generally that warm bubbles can also be of type E1, thus acting as a suspended Faraday cage that protects inertial observers from falling towards the horizon of the black hole. Contrary, however, to what happened for the ground state, warm bubble solutions are not unique. There is always a competing solution at $\mu\to-\infty$, and it is the dominant one by virtue of its divergent negative Casimir energy. A stability analysis would show if warm bubble solutions can be metastable and long-lived, but this is beyond our present scope. As for warm bubbles of type [X, H1], that is with the black hole in the false-vacuum slice, these also exist but only if $\ell_{2}<3\ell_{1}$. Indeed, as we will see in a moment, when $\ell_{2}>3\ell_{1}$ the wall cannot avoid a horizon located on the false-vacuum side. Finally, simple inspection of fig. 10 shows that by varying the tension, the bubble solutions for $\lambda>\lambda_{0}$ go over smoothly to the hot solution at $\lambda=\lambda_{0}$. At this critical tension the bubble is inscribed between the horizon and the conformal boundary, as in figure 9. This gives another meaning to $\lambda_{0}$: Only walls with this tension may touch the horizon without falling inside. ## 9 Phase diagrams In this last section of the paper we present numerical plots of the phase diagram of the model.
We work in the canonical ensemble, so the variables are the temperature and volumes, or by scale invariance two of the dimensionless ratios defined in (6.1). We choose these to be $\,\tau=\tau_{1}+\tau_{2}=T(L_{1}+L_{2})$ and $\,\gamma=L_{1}/L_{2}$. The color code is as in fig. 6. We plot the phase diagram for different values of the action parameters $\ell_{1},\ell_{2},\lambda$. Since our analysis is classical in gravity, Newton’s constant $G$ plays no role. Only two dimensionless ratios matter (Footnote 25: dimensionless in gravity, not in the dual ICFT), for instance $\displaystyle b:={\ell_{2}\over\ell_{1}}={c_{2}\over c_{1}}\geq 1\qquad{\rm and}\quad\kappa:=\lambda\ell_{2}\in(b-1,b+1)\ .$ (9.1) The value $b=1$ corresponds to a defect CFT, while $b\gg 1$ is the opposite “near void” limit in which the degrees of freedom of CFT2 overwhelm those of CFT1. The true vacuum approaches in this limit the infinite-radius AdS, and/or the false vacuum approaches flat spacetime. The critical tension $\lambda_{0}$ corresponds to $\kappa_{0}=\sqrt{b^{2}-1}$. To plot the phase diagrams we solved numerically for $\mu$ in terms of the boundary data $(\gamma,\tau)$ and for all types of slice pair, and compared their free energies when solutions of different type coexist. As explained in the introduction, although the interpretation is different, our diagrams are related to the ones of Simidzija and Van Raamsdonk [29] by double-Wick rotation (special to 2+1 dimensions). Since time in this reference is non-compact, only the boundaries of our phase diagrams, at $\gamma=0$ or $\gamma=\infty$, can be compared. The roles of thermal AdS and BTZ are also exchanged. ### 9.1 Defect CFT Consider first $b=1$. By symmetry, we may restrict in this case to $\gamma\geq 1$. Figure 11 presents the phase diagram in the $(\gamma,\tau)$ plane for a very light ($\kappa=0.03$) and a very heavy ($\kappa=1.8$) domain wall.
For the light, nearly tensionless, wall the phase diagram approaches that of a homogeneous CFT. The low-$T$ solution is single-center, and the Hawking-Page (HP) transition occurs at $\tau\approx 1$. Light domain walls follow closely geodesic curves, and avoid the horizon in a large region of parameter space. (Footnote 26: One can compute this phase diagram analytically by expanding in powers of $\lambda$.) Figure 11: Phase diagrams of a very light (left) and a very heavy (right) domain wall between degenerate vacua ($b=1$). The horizontal and vertical axes are $\gamma$ and $\tau$. The broken line in the left diagram separates solutions of type [H1, E2′] and [H1,E2] that only differ in the sign of the energy of the horizonless slice. The color code is as in fig. 6. Comparing the left with the right figure 11 shows that heavy walls facilitate the formation of the black hole and have a harder time staying outside. Indeed, in the right figure the HP transition occurs at lower $T$, and the warm phase recedes to $L_{1}\gg L_{2}$. Furthermore, both the cold and the warm solutions now have an additional AdS rest point. This confirms the intuition that heavier walls repel probe masses more strongly, and can shield them from falling inside the black hole. The transition that sweeps away this AdS rest point is shown explicitly in the phase diagrams of figure 12. Recall from the analysis of section 7.2 that in the low-$T$ phase such transitions happen for $\lambda<1/\ell_{1}\Longrightarrow\kappa<b=1$. Furthermore, the transitions take place at the critical mass ratios $\mu_{j}^{*}\,$, given by eq. (7.2). Since in cold solutions the relation between $\mu$ and $\gamma$ is one-to-one, the dark-light blue critical lines are lines of constant $\gamma$. These statements are in perfect agreement with the findings of fig. 12. Figure 12: Phase diagrams for intermediate-tension walls exhibiting sweeping transitions.
On the left a rest point of the vacuum solution is swept away as $\gamma$ increases beyond a critical value. On the right the same happens in the warm solution but for decreasing $\gamma$. In these diagrams, the one-center warm solution is always [H1,E2]. Warm solutions of type [H1,E1], respectively [E1,H1], exist for tensions $\lambda>1/\ell_{1}\Longrightarrow\kappa>b$, respectively $\lambda>1/\ell_{2}\Longrightarrow\kappa>1$. In the case of a defect these two ranges coincide. The stable black hole forms in the larger of the two slices, i.e. for $\gamma>1$ in the $j=1$ slice. The sweeping transition occurs at the critical mass ratio $\mu_{2}^{*}=(b^{2}-\kappa^{2})^{-1}$, which through eqs. (6.11) and (6.12a) corresponds to a fixed value of $\tau_{2}$. Since $\tau=\tau_{2}(1+\gamma)$, the critical orange-yellow line is a straight line in the $(\gamma,\tau)$ plane, in accordance again with the findings of fig. 12. A noteworthy fact is the rapidity of these transitions as a function of $\kappa$. For $\kappa$ a little below or above the critical value the single-center cold, respectively warm, phases almost disappear. Note also that the cold-to-warm transitions are always near $\tau\approx 1$. This is the critical value for Hawking-Page transitions in the homogeneous case, as expected at large $\gamma$ when the $j=1$ slice covers most of space. The critical curves for the cold-to-hot and warm-to-hot transitions also look linear in the above figures, but this is an illusion. Since the transitions are first order we must compare free energies. Equating for example the hot and cold free energies gives after some rearrangements (and with $\ell_{1}=\ell_{2}:=\ell$) $\displaystyle 2\pi^{2}\tau+{2\over\ell}\log g_{I}={1\over 2\tau_{1}}|M_{1}|L_{1}^{2}\,(1+{\mu\over\gamma})\ .$ (9.2) Now $|M_{1}|L_{1}^{2}$ can be expressed in terms of $\mu$ through eq. (6.8, 6.9a), and $\mu$ in the cold phase is a function of $\gamma$.
Furthermore $\log g_{I}/\ell=4\pi{\rm tanh}^{-1}(\kappa/2)$ is constant, see eq. (4.9), and $\tau_{1}=\tau/(1+\gamma^{-1})$. Thus (9.2) can be written as a relation $\tau=\tau_{\rm hc}(\gamma)$, and we have verified numerically that $\tau_{\rm hc}$ is not a linear function of $\gamma$. ### 9.2 Non-degenerate vacua Figure 13 presents the phase diagram in the case of non-degenerate AdS vacua, $b=\ell_{2}/\ell_{1}=c_{2}/c_{1}=3$, and for different values of the tension in the allowed range, $\kappa\in(2,4)$. Since there is no $\gamma\to\gamma^{-1}$ symmetry, $\gamma$ here varies between $0$ and $\infty$. To avoid squeezing the $\gamma\in(0,1)$ region, we use for horizontal axis $\alpha:=\gamma-\gamma^{-1}$. This is almost linear in the larger of $\gamma$ or $\gamma^{-1}$, when either of these is large, but the region $\gamma\approx 1$ is distorted compared to figs. 11 and 12 of the previous section. Figure 13: Phase diagrams for $b=3$, and values of the tension that increase from the top-left figure clockwise. The horizontal and vertical axes are $\alpha:=\gamma-\gamma^{-1}$ and $\tau$. The broken red curve is the bound $\tau=\tau_{2}^{*}(1+\gamma)$ below which the hot solution does not exist (there is no such bound in the lower panels, in which the tension $\lambda>\lambda_{0}$). Note the absence of a warm phase in the left ($\gamma<1$) region of the diagrams. For the heaviest wall all non-hot solutions are double-center. The most notable new feature in these phase diagrams is the absence of a warm phase in the region $\gamma<1$. This shows that it is impossible to keep the wall outside the black hole when the latter forms on the false-vacuum side. From the perspective of the dual ICFT, see section 7.1, the absence of [X,H1]-type solutions means that no interfaces, however heavy, can keep CFT1 in the confined phase if CFT2 (the theory with larger central charge) has already deconfined.
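The numbers quoted above can be checked with a few lines of arithmetic. A minimal Python sketch follows; the sub-critical value $\kappa=2.5$ is an illustrative assumption, and rewriting the bound (7.8) in the variables of eq. (9.1) uses $\ell_{2}(\lambda_{0}^{2}-\lambda^{2})/2\lambda=(\kappa_{0}^{2}-\kappa^{2})/2\kappa$:

```python
import math

# Dimensionless ratios of eq. (9.1) for the b = 3 diagrams of section 9.2.
b = 3.0                               # b = l2/l1 = c2/c1
kappa_window = (b - 1.0, b + 1.0)     # allowed tension range: kappa in (2, 4)
kappa0 = math.sqrt(b**2 - 1.0)        # critical tension kappa0 = lambda0*l2

# Bound of eq. (7.8) below which the hot solution ceases to exist, rewritten
# in kappa variables: tau2* = (1/pi) * atanh((kappa0^2 - kappa^2) / (2*kappa)).
# kappa = 2.5 is an illustrative sub-critical tension, not a value from the text.
kappa = 2.5
tau2_star = math.atanh((kappa0**2 - kappa**2) / (2.0 * kappa)) / math.pi

print(kappa_window, round(kappa0, 4), round(tau2_star, 4))
```

For $\kappa<\kappa_{0}$ the argument of the inverse hyperbolic tangent is positive and $\tau_{2}^{*}$ is finite; past $\kappa_{0}$ it changes sign and the bound disappears, consistent with the lower panels of figure 13.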
We suspect that this is a feature of the thin-brane model, which does not allow interfaces to be perfectly-reflecting [34]. Warm solutions with the horizon in the pink slice appear to altogether disappear above the critical ratio of central charges $b_{c}=3$. (Footnote 27: This critical value was also noticed in ref. [29], where it is also noted that multiple branes can evade the bound, confirming the intuition that it is a feature specific to thin branes.) As a matter of fact, although [X,H1] solutions do exist for $b<3$ as we show below, they have very large $\gamma$, outside the range of our numerical plots, unless $b$ is very close to 1. The boundary conditions corresponding to topologies of type [X,H1] are given by eqs. (6.14). We plotted the right-hand side of the second condition (6.14) for different values of $\lambda$ and $\mu$ in their allowed range, and found no solution with positive $\tau_{1}$ for $b>3$. Analytic evidence for the existence of a strict $b_{c}=3$ bound can be found by considering the limit of a maximally isolating wall, $\lambda\approx\lambda_{\rm max}$, and of a shrinking green slice $\hat{\mu}\to-\infty$. In this limit, the right-hand side of (6.14) can be computed in closed form with the result $\displaystyle\tau_{1}(\hat{\mu})={\pi\over\sqrt{-\hat{\mu}}}\biggl{(}2-\sqrt{1+{\ell_{2}\over\ell_{1}}}\,\,\biggr{)}+{\rm subleading}\ .$ (9.3) We took X=E1 as dictated by the analysis of sweeping transitions, see section 7.2 and in particular eq. (7.5). This limiting $\tau_{1}(\hat{\mu})$ is negative for $b>3$, and positive for $b<3$ where warm [E1,H1] solutions do exist, as claimed. An interesting corollary is that end-of-the-world branes cannot avoid the horizon of a black hole, since the near-void limit, $\ell_{1}\ll\ell_{2}$, is in the range that has no [X,H1] solutions. ### 9.3 Unstable black holes The phase diagrams in figs. 11, 12, 13 show the solution with the lowest free energy in various regions of parameter space.
Typically, this dominant phase coexists with solutions that describe unstable or metastable black holes, which are ubiquitous in the thin-wall model. (Footnote 28: For a similar discussion of deformed JT gravity see ref. [66].) Note that in the absence of a domain wall, the only static black hole solution of pure Einstein gravity in 2+1 dimensions is the non-spinning BTZ black hole. Figure 14 shows the number of black hole solutions in the degenerate case, $b=1$, for small, intermediate and large wall tension, and in different regions of the $(\tau,\gamma)$ parameter space. The axes are the same as in figs. 11 and 12 but the range of $\gamma$ is halved. At sufficiently high temperature the growing horizon captures the wall, and the only solution is the hot solution. Figure 14: The number of independent black hole solutions in the $(\gamma,\tau)$ parameter space for $b=1$, and three values of the tension $(\kappa=0.2;\ 1.1;{\rm and}\ 1.8)$. The darker the shade the larger the number of black holes. We see, however, that in a large region of intermediate temperatures the hot solution coexists with two warm solutions. Finally, at very low temperature the hot solution coexists with four other black-hole solutions, two on either side of the wall. The dominant phase in this region is vacuum, so the black holes play no role in the canonical ensemble. The hot solution exists almost everywhere, except when $\lambda<\lambda_{0}$ and $\tau=\tau_{2}(1+\gamma)<\tau_{2}^{*}(1+\gamma)$ with $\tau_{2}^{*}$ given by eq. (6.5). It has positive specific heat even when it is not the dominant phase. For warm black holes, on the other hand, the specific heat can have either sign. One can see this semi-analytically by focussing once again on our favourite near-critical region $\lambda\approx\lambda_{0}$. Simple inspection of fig. 8 shows that in some range $\tau_{2}^{*}<\tau_{2}<\tau_{2}^{\rm max}$ the hot solution coexists with two nearby warm solutions.
At the maximum $\tau_{2}^{\rm max}$, where $d\tau_{2}/d\mu=0$, the warm solutions merge and then disappear. Since the black hole is in the $j=1$ slice, $M_{1}=(2\pi T)^{2}$ and their energy reads $\displaystyle E_{\rm[warm]}\,=\,{1\over 2}(\ell_{1}M_{1}L_{1}+\ell_{2}M_{2}L_{2})\,=\,2\pi^{2}T^{2}L_{2}\,\bigl{(}\ell_{1}\gamma+\ell_{2}{\mu}\,\bigr{)}\ .$ (9.4) Taking a derivative with respect to $T$ with $L_{1},L_{2}$ kept fixed we obtain $\displaystyle{d\over dT}E_{\rm[warm]}\,=\,{2\over T}E_{\rm[warm]}+2\pi^{2}T^{2}\,L_{2}^{\,2}\,\ell_{2}\,{d\mu\over d\tau_{2}}\ .$ (9.5) Near $\tau_{2}^{\rm max}$ the dominant contribution to this expression comes from the derivative ${d\mu/d\tau_{2}}$ which jumps from $-\infty$ to $+\infty$. It follows that the warm black hole with the higher mass has negative specific heat, and should decay to its companion black hole either classically or in the quantum theory. (Footnote 29: We have verified numerically that the black holes with negative specific heat are never the ones with lowest free energy, a conclusion similar to the one reached in deformed JT gravity in ref. [66].) It would be very interesting to calculate this decay process, but we leave this for future work. One last comment concerns transitions from the double-center vacuum geometries, of type [E1,E1], to warm solutions where the wall avoids the horizon. One can ask on which side of the wall the black hole forms. A natural guess is that it forms in the deeper of the two AdS wells. The relative depth is the ratio of blueshift factors at the two rest points, $\displaystyle{\mathfrak{R}}:=\sqrt{g_{tt}|_{r_{1}=0}\over g_{tt}|_{r_{2}=0}}\,=\,{\ell_{2}\over\ell_{1}\sqrt{\mu(\gamma)}}\,\ .$ (9.6) One expects the black hole to form in the $j=1$ (green) slice if ${\mathfrak{R}}<1$ and in the $j=2$ (red) slice if ${\mathfrak{R}}>1$. Our numerical plots confirmed this expectation in all cases.
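Identities like eq. (9.5) follow from the chain rule at fixed $L_{1},L_{2}$, with $\tau_{2}=TL_{2}$. The finite-difference sketch below exercises it in Python with an arbitrary smooth toy profile $\mu(\tau_{2})$; the profile and all numerical values are assumptions used only for the check, not outputs of the model:

```python
import math

# Finite-difference check of eq. (9.5): with the warm-phase energy of eq. (9.4),
#   E(T) = 2*pi^2*T^2*L2*(l1*gamma + l2*mu(tau2)),   tau2 = T*L2,
# differentiating at fixed L1, L2 must give
#   dE/dT = (2/T)*E + 2*pi^2*T^2*L2^2*l2*dmu/dtau2.
# The profile mu(tau2) and all values below are illustrative assumptions;
# the identity holds for any differentiable mu.

l1, l2 = 1.0, 1.5
L1, L2 = 0.4, 1.0
gamma = L1 / L2

def mu(tau2):
    return 0.3 + 0.2 * math.tanh(tau2)    # toy mass-ratio profile

def dmu(tau2, h=1e-6):
    return (mu(tau2 + h) - mu(tau2 - h)) / (2 * h)

def E(T):
    return 2 * math.pi**2 * T**2 * L2 * (l1 * gamma + l2 * mu(T * L2))

T = 0.8
h = 1e-6
dE_num = (E(T + h) - E(T - h)) / (2 * h)                       # numerical dE/dT
dE_formula = 2 * E(T) / T + 2 * math.pi**2 * T**2 * L2**2 * l2 * dmu(T * L2)
print(dE_num, dE_formula)
```

The sign statement in the text then follows at a glance: near $\tau_{2}^{\rm max}$ the second term, proportional to $d\mu/d\tau_{2}$, overwhelms the first and carries either sign.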
## 10 Outlook One urgent question, already noted in the introduction, is how much of this analysis will survive in top-down interface models, where gravitating domain walls are typically thick. The order parameters of the Hawking-Page and sweeping transitions – the area of the horizon and the number of rest points for inertial observers – do not depend on the assumption of a thin wall and could go through. The warm-to-hot transition, on the other hand, may be replaced by a crossover, since there is no sharp criterion to decide if a thick wall enters or avoids the horizon. As discussed in section 7.1, however, a sharp order parameter, such as a Polyakov loop, may be suggested by the field theory side of the correspondence. One other question left open in the present work is the entanglement structure of the equilibrium states. Indeed, a guiding thread of our paper was the intersection of the domain wall with the black hole horizon and the trajectories of inertial observers. The Ryu-Takayanagi (RT) surfaces [58, 59] are another natural class of curves whose intersection with the wall should be studied, along lines similar to refs. [21, 22] for BCFT. Simple extensions of the minimal model, such as the addition of a Chern-Simons field (see e.g. [107]), might also be worth exploring. Last but not least, the simplicity of the model and its rich spectrum of black holes make it a promising ground on which to try to shed more light on the recent exciting developments related to black hole evaporation, islands and the Page curve [12, 13, 14]. We hope to return to some of these questions in the near future. Acknowledgements We are grateful to Mark Van Raamsdonk for his critical reading of a preliminary draft of this paper and for many useful comments. Many thanks also to Panos Betzios, Shira Chapman, Dongsheng Ge, Elias Kiritsis, Ioannis Lavdas, Bruno Le Floch, Emil Martinec, Olga Papadoulaki and Giuseppe Policastro for discussions during the course of this work.
## Appendix A Renormalized on-shell action The Euclidean action of the holographic-interface model, in units $8\pi G=1$, is the sum of bulk, brane, boundary and corner contributions, see e.g. [108] $\displaystyle I_{\rm gr}=-\frac{1}{2}$ $\displaystyle\hskip-7.11317pt\int_{{\mathbb{S}}_{1}}d^{3}x\sqrt{g_{1}}\,(R_{1}+\frac{2}{\ell_{1}^{2}})-\frac{1}{2}\int_{{\mathbb{S}}_{2}}d^{3}x\sqrt{g_{2}}\,(R_{2}+\frac{2}{\ell_{2}^{2}})+\lambda\int_{\mathbb{W}}d^{2}s\sqrt{\hat{g}_{w}}\ \ \ \ \ \ $ (A.1) $\displaystyle\hskip-17.07164pt+\int_{\partial{\mathbb{S}}_{1}}d^{2}s\sqrt{\hat{g}_{1}}\,K_{1}+\int_{\partial{\mathbb{S}}_{2}}d^{2}s\sqrt{\hat{g}_{2}}\,K_{2}\ +\ \int_{\rm C}(\theta-\pi)\sqrt{\hat{g}_{c}}+{\rm c.t.}\ $ where the counterterms, abbreviated above by c.t., read [109] $\displaystyle{\rm c.t.}=\,{1\over\ell_{1}}\int_{{\rm B}_{1}}\sqrt{\hat{g}_{1}}\,+{1\over\ell_{2}}\int_{{\rm B}_{2}}\sqrt{\hat{g}_{2}}\,-\,\int_{{\rm B}_{1}\cap{\rm B}_{2}}(\theta_{1}+\theta_{2})\sqrt{\hat{g}_{c}}\ .$ (A.2) Here $\mathbb{S}_{j}$ are the spacetime slices whose boundary is the sum of the cutoff surface B${}_{j}$ and of the string worldsheet $\mathbb{W}$, i.e. $\partial\mathbb{S}_{j}=$B${}_{j}\cup\mathbb{W}$. The induced metrics are denoted by hats. The $K_{j}$ are traces of the extrinsic curvatures on each slice, computed with the inward-pointing normal vector. Finally, in addition to the standard Gibbons-Hawking-York boundary terms, one must add the Hayward term [110, 108] at corners of $\partial\mathbb{S}_{j}$ denoted by C. (Footnote 30: These play no role here, but they can be important in the case of string junctions.) There is at least one such corner at the cutoff surface, B${}_{1}\cap$B2, where $\theta-\pi$ is the sum of the angles $\theta_{j}$ defined in figure 4. Let us break the action into an interior and a conformal boundary term, $I_{\rm gr}=I_{\rm int}+I_{\rm B}$, with the former including contributions from the worldsheet $\mathbb{W}$.
Using the field equations $R_{j}=-{6/\ell_{j}^{2}}$ and $K_{1}|_{\rm W}+K_{2}|_{\rm W}=-2\lambda$, and the volume elements that follow from eqs. (3.1) and (4.1, 4.2), $\sqrt{g_{j}}\,d^{3}x=\ell_{j}r_{j}dr_{j}dx_{j}dt\,\quad{\rm and}\quad\,\sqrt{\hat{g}_{w}}\,d^{2}s=\sqrt{fg}\,d\sigma dt\ ,$ we can write the interior on-shell action as follows: $I_{\rm int}\,=\,\frac{2}{\ell_{1}}\int_{{\Omega}_{1}}r_{1}\,dr_{1}dx_{1}dt+\frac{2}{\ell_{2}}\int_{{\Omega}_{2}}r_{2}\,dr_{2}dx_{2}dt-\lambda\int_{\mathbb{W}}\sqrt{fg}\,d\sigma dt\ .$ (A.3) We have been careful to distinguish the spacetime slice $\mathbb{S}_{j}$ from the coordinate chart $\Omega_{j}$, because we will now use Stokes’ theorem treating $\Omega_{j}$ as part of flat Euclidean space, $\displaystyle\sum_{j=1,2}\,{2\over\ell_{j}}\int_{{\Omega}_{j}}r_{j}dr_{j}dx_{j}dt\ =\ \sum_{j=1,2}\,{1\over\ell_{j}}\oint_{{\partial\Omega}_{j}}r_{j}^{2}(\hat{r}_{j}\cdot d\hat{n}_{j})dt\ ,$ (A.4) with $d\hat{n}_{j}dt$ the surface element on the boundary $\partial\Omega_{j}$. Crucially, the boundary of $\Omega_{j}$ may include a horizon, which is a regular interior submanifold of the Euclidean spacetime and is therefore not part of $\partial\mathbb{S}_{j}$. In particular, there is no Gibbons-Hawking-York contribution there. The boundary integral in eq. (A.4) receives contributions from the three pieces of ${\partial\Omega}_{1,2}$: the cutoff surface B${}_{1}\cup$B2, the horizon if there is one, and the worldsheet $\mathbb{W}$. Conveniently, this last term precisely cancels the third term in (A.3) by virtue of the Israel-Lanczos equation (4.3). Thus, after all the dust has settled, the action can be written as the sum of terms evaluated either at the black-hole horizon or at the cutoff. After integrating over periodic time the interior part of the action, eq.
(A.3), reads $\displaystyle I_{\rm int}\,=\ {1\over\ell_{1}T}\,\Bigl{[}r_{1}^{2}\,\Delta x_{1}\Bigr{]}_{\rm Hor}^{{\rm B}_{1}}+\,{1\over\ell_{2}T}\,\Bigl{[}r_{2}^{2}\,\Delta x_{2}\Bigr{]}_{\rm Hor}^{{\rm B}_{2}}$ (A.5) where we employ the shorthand notation $[X]_{a}^{b}=X|_{b}-X|_{a}$, and $X|_{a}$ for $X$ evaluated at $a$. If the slice $\mathbb{S}_{j}$ does not contain a horizon the corresponding contribution is absent. We now turn to the conformal-boundary contributions from the lower line in the action (A.1). For a fixed-$r_{j}$ surface, the inward-pointing unit normal expressed as a 1-form is n${}_{j}=-dr_{j}/\sqrt{r_{j}^{2}-M_{j}\ell_{j}^{2}}$. One finds after a little algebra (we drop the index $j$ here for simplicity) $\displaystyle K_{xx}=K_{tt}=-{r\over\ell}\sqrt{r^{2}-M\ell^{2}}\,\Longrightarrow\,\sqrt{\hat{g}}\,K=\,-{1\over\ell}(2r^{2}-M\ell^{2})\ .$ (A.6) Combining the Gibbons-Hawking-York terms and the counterterms gives $\displaystyle I_{\rm B}={1\over\ell_{1}T}(r_{1}\sqrt{r_{1}^{2}-M_{1}\ell_{1}^{2}}-2r_{1}^{2}+M_{1}\ell_{1}^{2})\,\Delta x_{1}\,\Bigl{|}_{{\rm B}_{1}}\,+\,(1\rightarrow 2)\ .$ (A.7) Expanding for large cutoff radius, $r_{j}|_{{\rm B}_{j}}\to\infty$, and dropping the terms that vanish in the limit we obtain $\displaystyle I_{\rm B}={1\over\ell_{1}T}(-r_{1}^{2}+{1\over 2}M_{1}\ell_{1}^{2})\,\Delta x_{1}\,\Bigl{|}_{{\rm B}_{1}}\,+\,(1\rightarrow 2)\ .$ (A.8) Upon adding up (A.5) and (A.8) the leading divergent term cancels, giving the following result for the renormalized on-shell action: $\displaystyle I_{\rm gr}\ =\ {M_{1}\ell_{1}\over 2T}\bigl{(}L_{1}-2\Delta x_{1}\bigl{|}_{{\rm Hor}}\bigr{)}\,+\,{M_{2}\ell_{2}\over 2T}\bigl{(}L_{2}-2\Delta x_{2}\bigl{|}_{{\rm Hor}}\bigr{)}\,\ .$ (A.9) We used here the fact that $\Delta x_{j}|_{{\rm B}_{j}}=L_{j}$, and that $r_{j}^{2}=M_{j}\ell_{j}^{2}$ at the horizon when one exists.
We also used implicitly the fact that for smooth strings the Hayward term receives no contribution from the interior and is removed by the counterterm at the boundary. As a check of this on-shell action let us compute the entropy. Using our formula for the internal energy $\langle E\rangle={1\over 2}(M_{1}\ell_{1}L_{1}+M_{2}\ell_{2}L_{2})$, see section 2, and $I_{\rm gr}=\langle E\rangle/T-S$ we find $\displaystyle S={1\over T}$ $\displaystyle\hskip-28.45274pt\bigl{(}M_{1}\ell_{1}\Delta x_{1}\bigl{|}_{{\rm Hor}}+M_{2}\ell_{2}\Delta x_{2}\bigl{|}_{{\rm Hor}}\bigr{)}$ (A.10) $\displaystyle=$ $\displaystyle\hskip-5.69054pt4\pi^{2}T\bigl{(}\ell_{1}\Delta x_{1}\bigl{|}_{{\rm Hor}}+\ell_{2}\Delta x_{2}\bigl{|}_{{\rm Hor}}\bigr{)}\,=\,{A({\rm horizon})\over 4G}\ .$ In the lower line we used the fact that $M_{j}=(2\pi T)^{2}$ and $r_{j}^{\rm H}=2\pi T\ell_{j}$ for slices with horizon, plus our choice of units $8\pi G=1$. The calculation thus reproduces correctly the Bekenstein-Hawking entropy. ## Appendix B Opening arcs as elliptic integrals In this appendix we express the opening arcs, eqs. (4.17), in terms of complete elliptic integrals of the first, second and third kind, $\displaystyle{\rm\bf K}(\nu)=\int_{0}^{1}\frac{dy}{\sqrt{(1-y^{2})(1-\nu y^{2})}}$ (B.1) $\displaystyle{\bf E}(\nu)=\int_{0}^{1}\frac{\sqrt{1-\nu y^{2}}\,dy}{\sqrt{1-y^{2}}}\ .$ (B.2) $\displaystyle{\bf\Pi}(u,\nu)=\int_{0}^{1}\frac{dy}{(1-uy^{2})\sqrt{(1-y^{2})(1-\nu y^{2})}}\ .$ (B.3) Consider the boundary conditions (4.17a). The other conditions (4.17b, 4.17c) differ only by the constant periods or horizon arcs, $P_{j}$ or $\Delta x_{j}|_{\rm hor}$. Inserting the expression (4.16) for $x_{1}^{\prime}$ gives $\displaystyle L_{1}=-\int_{\sigma_{+}}^{\infty}\frac{\ell_{1}\,d\sigma}{(\sigma+M_{1}\ell_{1}^{2})}\,\frac{(\lambda^{2}+\lambda_{0}^{2})\,\sigma+M_{1}-M_{2}}{\sqrt{A\sigma(\sigma-\sigma_{+})(\sigma-\sigma_{-})}}\ ,$ (B.4) and likewise for $L_{2}$. The roots $\sigma_{\pm}$ are given by eqs. 
(4.14, 4.15). We assume that we are not in the case [H2, H2] where $M_{1}=M_{2}>0$, nor in the fringe case $\sigma_{+}=-M_{j}\ell_{j}^{2}$ when the string goes through an AdS center. These cases will be treated separately. Separating the integral into two parts, and trading the integration variable $\sigma$ for $y$, with $y^{2}:=\sigma_{+}/\sigma$, we obtain $L_{1}=-\frac{2\ell_{1}}{\sqrt{A\,\sigma_{+}}}\bigg{[}{M_{1}-M_{2}\over M_{1}\ell_{1}^{2}}\int_{0}^{1}\frac{dy}{\sqrt{(1-y^{2})(1-\nu y^{2})}}$ $\displaystyle+\Bigl{(}(\lambda^{2}+\lambda_{0}^{2})-{M_{1}-M_{2}\over M_{1}\ell_{1}^{2}}\Bigr{)}\,\int_{0}^{1}\frac{y^{2}dy}{(1-u_{1}y^{2})\sqrt{(1-y^{2})(1-\nu y^{2})}}\,\bigg{]}\ \ \ $ (B.5) where $\nu=\sigma_{-}/\sigma_{+}$ and $u_{1}=-M_{1}\ell_{1}^{2}/\sigma_{+}$. Identifying the elliptic integrals finally gives $\displaystyle L_{1}=-\frac{2\ell_{1}}{\sqrt{A\,\sigma_{+}}}\bigg{[}\frac{M_{1}-M_{2}}{M_{1}\ell_{1}^{\,2}}\,\Bigl{(}{\rm\bf K}(\nu)-{\bf\Pi}(u_{1},\nu)\Bigr{)}+(\lambda^{2}+\lambda_{0}^{2})\,{\bf\Pi}(u_{1},\nu)\bigg{]}\ ,\ \ \ $ (B.6) and a corresponding expression for $L_{2}$ $\displaystyle L_{2}=-\frac{2\ell_{2}}{\sqrt{A\,\sigma_{+}}}\bigg{[}\frac{M_{2}-M_{1}}{M_{2}\ell_{2}^{\,2}}\,\Bigl{(}{\rm\bf K}(\nu)-{\bf\Pi}(u_{2},\nu)\Bigr{)}+(\lambda^{2}-\lambda_{0}^{2})\,{\bf\Pi}(u_{2},\nu)\bigg{]}\ \ \ $ (B.7) with $u_{2}=-M_{2}\ell_{2}^{2}/\sigma_{+}$. The prefactors in (B.6) diverge when $M_{1}\to 0$ but the singularity is removed by expanding ${\bf\Pi}(u_{1},\nu)$ around $u_{1}=0$.
In this limit $\displaystyle L_{1}(M_{1}=0)=-\frac{2\ell_{1}}{\sqrt{A\,\sigma_{+}}}\bigg{[}\frac{M_{2}}{\sigma_{-}}({\rm\bf E}(\nu)-{\rm\bf K}(\nu))+(\lambda^{2}+\lambda_{0}^{2}){\rm\bf K}(\nu)\bigg{]}$ (B.8a) and similarly $\displaystyle L_{2}(M_{2}=0)=-\frac{2\ell_{2}}{\sqrt{A\,\sigma_{+}}}\bigg{[}\frac{M_{1}}{\sigma_{-}}({\rm\bf E}(\nu)-{\rm\bf K}(\nu))+(\lambda^{2}-\lambda_{0}^{2}){\rm\bf K}(\nu)\bigg{]}$ (B.8b) with ${\rm\bf E}(\nu)$ the complete elliptic integral of the second kind. The $M_{1}=M_{2}>0$ geometries correspond to the high-temperature phase where $M_{j}=(2\pi T)^{2}$, $\sigma_{+}=0$ and $\sigma_{-}=-(4\pi T\lambda)^{2}/A$. The integrals (4.17c) simplify to elementary functions in this case: $L_{1}-\Delta_{1}^{\rm Hor}\ =\ -{\ell_{1}(\lambda^{2}+\lambda_{0}^{2})\over\sqrt{A\,|\sigma_{-}|\,}}\underbrace{\int_{0}^{\infty}{ds\over(s+a)\,\sqrt{s+1}\,}}_{\textstyle\begin{array}[]{c}={2\over\sqrt{1-a}}{\rm arctanh}({\sqrt{1-a}})\end{array}}$ with $a=A\ell_{1}^{2}/4\lambda^{2}$. Using the expression (4.14) for $A$, and going through the same steps for $j=2$, gives after a little algebra $\displaystyle L_{1}-\Delta_{1}^{\rm Hor}\ =\ -{1\over\pi T}\,{\rm tanh}^{-1}\left({\ell_{1}(\lambda^{2}+\lambda^{2}_{0})\over 2\lambda}\right)\ ,$ (B.9a) $\displaystyle L_{2}-\Delta_{2}^{\rm Hor}\ =\ -{1\over\pi T}\,{\rm tanh}^{-1}\left({\ell_{2}(\lambda^{2}-\lambda^{2}_{0})\over 2\lambda}\right)\ .$ (B.9b) Interestingly, since $\Delta_{2}^{\rm Hor}$ must be positive, $TL_{2}$ is bounded from below in the range $\lambda<\lambda_{0}$ as discussed in section 6.2. In the high-temperature phase the on-shell action, eq. 
(A.9), reads $\displaystyle I_{\rm gr}^{\rm(high-T)}=4\pi^{2}T\Bigl{[}-{1\over 2}(\ell_{1}L_{1}+\ell_{2}L_{2})+\ell_{1}(L_{1}-\Delta_{1}^{\rm Hor})+\ell_{2}(L_{2}-\Delta_{2}^{\rm Hor})\Bigr{]}\,.\ \ $ (B.10) Using the expressions (B.9) and rearranging the inverse hyperbolic tangents gives $\displaystyle I_{\rm gr}^{\rm(high-T)}:={E\over T}-S=-2\pi^{2}T(\ell_{1}L_{1}+\ell_{2}L_{2})-\log\,g_{I}$ (B.11) where the interface entropy $\log\,g_{I}$ is given by eq. (4.9). ## Appendix C Sweeping is continuous In this appendix we show that sweeping transitions are continuous. We focus for definiteness on the sweeping of the $j=2$ AdS center at zero temperature (all other cases work out the same). The transition takes place when $\mu$ crosses the critical value $\mu_{2}^{*}$ given by eq. (7.2). Setting $\mu=\mu_{2}^{*}(1-\delta)$ in expression (6.9b) gives $f_{2}(\mu)=\,\frac{\ell_{2}}{\sqrt{A}}\int_{s_{+}}^{\infty}ds\,\frac{(\lambda^{2}-\lambda_{0}^{2})(s-\mu\ell_{2}^{2})\,+\delta}{(s-\mu\ell_{2}^{2})\,\sqrt{As(s-s_{+})(s-s_{-})}}$ $\displaystyle=\,\frac{2\ell_{2}(\lambda^{2}-\lambda_{0}^{2})}{\sqrt{As_{+}}}\,{\bf K}\bigl{(}\frac{s_{-}}{s_{+}}\bigr{)}\,+\,\frac{\ell_{2}\,\delta}{\sqrt{A}}\,\underbrace{\int_{s_{+}}^{\infty}\frac{ds}{(s-\mu\ell_{2}^{2})\sqrt{s(s-s_{+})(s-s_{-})}}}_{J}\ .\ \ $ (C.1) The first term is continuous at $\delta=0$, but the second requires some care because the integral $J$ diverges. This is because for small $\delta$ $\displaystyle s_{+}-\mu\ell_{2}^{2}\,=\,\frac{\delta^{2}}{4\lambda^{2}\mu_{2}^{*}}+\mathcal{O}(\delta^{3})\ ,$ (C.2) as one finds by explicit computation of the expression (6.10). If we set $\delta=0$, $J$ diverges near the lower integration limit.
To expose the singular behavior at $u=0$, we perform the change of variable $u^{2}=s-s_{+}$, so that $\displaystyle J=\int_{0}^{\infty}\frac{2du}{(u^{2}+\delta^{2}/4\lambda^{2}\mu_{2}^{*})\sqrt{(u^{2}+s_{+}^{*})(u^{2}+s_{+}^{*}-s_{-}^{*})}}\ ,$ (C.3) where we kept only the leading order in $\delta$, and $s_{\pm}^{*}$ are the roots at $\mu=\mu_{2}^{*}$. Since $s_{+}^{*}$ and $s_{+}^{*}-s_{-}^{*}$ are positive and finite, the small-$\delta$ behavior of the integral is (after rescaling $u$ appropriately) $\displaystyle J={4\lambda|\mu_{2}^{*}|\over|\delta|\sqrt{s_{+}^{*}(s_{+}^{*}-s_{-}^{*})}}\,\underbrace{\int_{0}^{\infty}{du\over u^{2}+1}}_{\pi/2}\ +\ {\rm finite}\ .$ (C.4) Inserting this into expression (C.1), some tedious algebra finally yields a discontinuity of the function $f_{2}(\mu)$ equal to ${\rm sign}(\delta)\,\pi/\sqrt{\mu_{2}^{*}}$. This is precisely what is required for $L_{2}$, eq. (6.8), to be continuous when the red ($j=2$) slice goes from type E1 at negative $\delta$ to type E2 at positive $\delta$. ## Appendix D Bubbles exist We show here that the bubble phenomenon of section 8 is indeed realized in a region of the parameter space of the holographic model. This is the region of non-degenerate gravitational vacua ($\ell_{2}$ strictly bigger than $\ell_{1}$) and a sufficiently light domain wall. Specifically, we will show that for $\lambda$ close to its minimal value, $\lambda_{\rm min}$, the arc $L_{1}(\mu=0)$ is negative, so the wall self-intersects and $\mu_{0}$ is necessarily finite. Let $\lambda=\lambda_{\rm min}(1+\delta)$ with $\delta\ll 1$. Setting $\mu=0$ and expanding eqs. (6.10) for small $\delta$ gives $A={8\lambda_{\rm min}^{2}\,\delta\over\ell_{1}\ell_{2}}+\mathcal{O}(\delta^{2})\ ,\quad s_{+}={\ell_{2}\over 4\lambda_{\rm min}}+\mathcal{O}(\delta)\ ,\quad s_{-}=-{\ell_{1}\over 2\lambda_{\rm min}\delta}+\mathcal{O}(1)\ .$ Plugging into eq.
(B.6) with $M_{2}=\mu M_{1}\approx 0$ we find: $\sqrt{|M_{1}|}\,L_{1}=-\frac{2}{\ell_{1}\sqrt{As_{+}}}\left[{\bf K}\left({s_{-}\over s_{+}}\right)+(1-\frac{2\ell_{1}}{\ell_{2}}){\bf\Pi}\left({\ell_{1}^{2}\over s_{+}},\,{s_{-}\over s_{+}}\right)\right]$ (D.1) where we have kept only the leading orders in $\delta$. Now we need the asymptotic form of the elliptic integrals when their argument diverges: ${\bf K}\left[-\frac{a}{\delta}\right]\approx{\bf\Pi}\left[u,-\frac{a}{\delta}\right]\approx-\frac{\ln(\delta)\sqrt{\delta}}{2\sqrt{a}}+\mathcal{O}(\sqrt{\delta})$ (D.2) for $\delta\to 0_{+}$ with $u,a$ fixed. Using $a=2\ell_{1}/\ell_{2}$ finally gives $\displaystyle\sqrt{|M_{1}|}\,L_{1}\,\approx\,\bigl{(}\frac{\ell_{2}}{\ell_{1}}-1\bigr{)}^{1/2}\ln(\delta)\,+\,{\rm subleading}\ .$ (D.3) For $\delta\ll 1$ this is negative, proving our claim. Note that we took the green slice to be of type E2, as follows from our analysis of the sweeping transitions for light domain walls – see section 7.2.
# Highly Boosted Higgs Bosons and Unitarity in Vector-Boson Fusion at Future Hadron Colliders

Wolfgang Kilian$^{a}$, Sichun Sun$^{b}$, Qi-Shu Yan$^{c,d}$, Xiaoran Zhao$^{e}$, Zhijie Zhao$^{d}$

$^{a}$Department of Physics, University of Siegen, 57068 Siegen, Germany
$^{b}$Department of Physics and INFN, Sapienza University of Rome, Rome I-00185, Italy
$^{c}$School of Physics Sciences, University of Chinese Academy of Sciences, Beijing 100039, China
$^{d}$Center for Future High Energy Physics, Institute of High Energy Physics, Chinese Academy of Sciences, Beijing 100039, China
$^{e}$Centre for Cosmology, Particle Physics and Phenomenology (CP3), Université catholique de Louvain, Chemin du Cyclotron 2, B-1348 Louvain-la-Neuve, Belgium

<EMAIL_ADDRESS><EMAIL_ADDRESS><EMAIL_ADDRESS><EMAIL_ADDRESS><EMAIL_ADDRESS>

###### Abstract We study the observability of new interactions which modify Higgs-pair production via vector-boson fusion processes at the LHC and at future proton-proton colliders. In an effective-Lagrangian approach, we explore in particular the effect of the operator $h^{2}W_{\mu\nu}^{a}W^{a,\mu\nu}$, which describes the interaction of the Higgs boson with transverse vector-boson polarization modes. By tagging highly boosted Higgs bosons in the final state, we determine projected bounds for the coefficient of this operator at the LHC and at a future 27 TeV or 100 TeV collider. Taking into account unitarity constraints, we estimate the new-physics discovery potential of Higgs pair production in this channel. ††preprint: SI-HEP-2021-05, CP3-21-02

## 1 Introduction

In 2012, the Higgs boson was discovered at the LHC Aad:2012tfa; Chatrchyan:2012ufa. Current experimental data show that its properties agree with the predictions of the Standard Model (SM), but further measurements are still necessary to test the SM and search for new physics.
The SM predicts that the Higgs boson has three kinds of interaction at tree level: (1) the Yukawa interaction with fermions; (2) the interaction with massive vector bosons ($W^{\pm}$ and $Z$); (3) the cubic and quartic Higgs self-interactions. One of the prime targets of the second run and future runs of the LHC is to measure the Higgs self-couplings (3). Higgs pair production in vector-boson fusion (VBF) Jones:1979bq, $VV\to hh$, is sensitive to the last two kinds of interaction. VBF-type processes are possible at both electron-positron and hadron colliders. At a hadron collider, the two incoming vector bosons $V=W^{\pm},Z$ are radiated from two initial quarks. In the final state, in addition to the two Higgs bosons, there are two back-to-back hard jets, which should be tagged in the forward and backward regions of the detector, respectively. This allows us to use VBF cuts to reject the QCD-type backgrounds efficiently. In this paper, we study double Higgs production in the VBF process in an effective-field-theory (EFT) approach.
We follow the conventions given in our previous paper Kilian:2018bhs, and use the following phenomenological effective Lagrangian: $\displaystyle\mathcal{L}_{EFT}=$ $\displaystyle\mathcal{L}_{\overline{SM}}+\mathcal{L}_{VVh}+\mathcal{L}_{Vh},$ (1) $\displaystyle\mathcal{L}_{VVh}=$ $\displaystyle-\left(g_{W,b1}\frac{h}{v}+g_{W,b2}\frac{h^{2}}{2v^{2}}+g_{W,b3}\frac{h^{3}}{6v^{3}}+\cdots\right)W^{+}_{\mu\nu}W^{-\,\mu\nu}$ $\displaystyle-\left(g_{A,b1}\frac{h}{2v}+g_{A,b2}\frac{h^{2}}{4v^{2}}+g_{A,b3}\frac{h^{3}}{12v^{3}}+\cdots\right)F_{\mu\nu}F^{\mu\nu}$ $\displaystyle-\left(g_{X,b1}\frac{h}{v}+g_{X,b2}\frac{h^{2}}{2v^{2}}+g_{X,b3}\frac{h^{3}}{6v^{3}}+\cdots\right)F_{\mu\nu}Z^{\mu\nu}$ $\displaystyle-\left(g_{Z,b1}\frac{h}{2v}+g_{Z,b2}\frac{h^{2}}{4v^{2}}+g_{Z,b3}\frac{h^{3}}{12v^{3}}+\cdots\right)Z_{\mu\nu}Z^{\mu\nu}$ (2) $\displaystyle\mathcal{L}_{Vh}=$ $\displaystyle g_{W,a1}\frac{2m_{W}^{2}}{v}hW^{+,\mu}W^{-}_{\mu}+g_{W,a2}\frac{m_{W}^{2}}{v^{2}}h^{2}W^{+,\mu}W^{-}_{\mu}+g_{W,a3}\frac{m_{W}^{2}}{3v^{3}}h^{3}W^{+,\mu}W^{-}_{\mu}$ $\displaystyle+g_{Z,a1}\frac{m_{Z}^{2}}{v}hZ^{\mu}Z_{\mu}+g_{Z,a2}\frac{m_{Z}^{2}}{2v^{2}}h^{2}Z^{\mu}Z_{\mu}+g_{Z,a3}\frac{m_{Z}^{2}}{6v^{3}}h^{3}Z^{\mu}Z_{\mu}+\cdots\,.$ (3) Dots indicate higher-dimensional interactions which are not relevant for the VBF Higgs production process that we consider. We only include the CP-conserving interactions and omit any CP-violating operators. The relations $g_{W,a1}=g_{W,a2}=g_{Z,a1}=g_{Z,a2}=1$ and $g_{V,b1}=g_{V,b2}=g_{V,b3}=g_{W,a3}=g_{Z,a3}=0$ characterize the SM reference values at tree level, where the subscript letter $V$ denotes $W,A,X,Z$, respectively. The corresponding terms have been removed from the SM Lagrangian, as indicated by the overline notation $\overline{SM}$, such that they are not double-counted.
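The factorial normalization of the towers in eq. (2) is such that the $n$-th derivative with respect to $h$ at $h=0$ returns $-g_{V,bn}/v^{n}$ as the coefficient of the corresponding field-strength bilinear. A quick symbolic check of this bookkeeping (a sketch; the sympy symbol names are ours):

```python
import sympy as sp

h = sp.symbols('h', real=True)
v = sp.symbols('v', positive=True)
gb1, gb2, gb3 = sp.symbols('g_b1 g_b2 g_b3')

# Scalar prefactor multiplying W^+_{mu nu} W^{- mu nu} in L_VVh, eq. (2):
tower = -(gb1 * h / v + gb2 * h**2 / (2 * v**2) + gb3 * h**3 / (6 * v**3))

# n-Higgs vertex coefficients (Feynman-rule factors of i and momenta omitted):
hh_coeff = sp.diff(tower, h, 2).subs(h, 0)
hhh_coeff = sp.diff(tower, h, 3).subs(h, 0)
assert sp.simplify(hh_coeff + gb2 / v**2) == 0    # hhWW piece: -g_b2 / v^2
assert sp.simplify(hhh_coeff + gb3 / v**3) == 0   # hhhWW piece: -g_b3 / v^3
```

The same check applies to the $A$, $X$, and $Z$ towers, whose relative factors of $2$ account for the identical-field symmetry of $F_{\mu\nu}F^{\mu\nu}$ and $Z_{\mu\nu}Z^{\mu\nu}$.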
Introducing gauge degrees of freedom in this phenomenological Lagrangian, the vertices can be rewritten as gauge-invariant operators which are understood as the low-energy effect of short-range structure or new heavy degrees of freedom beyond the SM. Such effects would appear, for instance, in a composite Higgs model; for concrete examples, cf. Qi:2019ocx; Agrawal:2019bpm; Xu:2019xuo; Li:2019ghf. In Sec. 3, we relate the parameterization to the formulation in terms of gauge-invariant operators, adopting a concrete basis and truncating the expansion at dimension six, as commonly done in the literature. The single-Higgs couplings $g_{V,a1}$ can be determined to within $5-10\%$ by measuring the decay fractions $h\to WW^{*}$ and $h\to ZZ^{*}$ at the LHC. Current LHC data exclude deviations from the SM prediction Aaboud:2017vzb; Sirunyan:2017exp; Aaboud:2018jqu; Aad:2019mbh of more than about $15\%$. (This limit, as well as the bounds discussed below, depends on a global assumption about the absence of undetected Higgs decays, which we will adopt for this paper.) For the couplings of type $g_{V,b1}$, the bounds are weaker. For instance, the range $g_{V,b1}\in[0.8,4.5]$ was reported in Ref. Aaboud:2017vzb. The measurement of the double Higgs couplings hhVV ($g_{V,a2}$ and $g_{V,b2}$) is challenging at the LHC. Recently, ATLAS reported a search for double Higgs production in VBF Aad:2020kub, which excludes the ranges $g_{V,a2}<-0.56$ and $g_{V,a2}>2.89$. At the LHC, searches for a double Higgs final state focus on the gluon-gluon fusion process $gg\to hh$. The Higgs decay channels $b\bar{b}b\bar{b}$ Aaboud:2018knk; Sirunyan:2018tki, $b\bar{b}\gamma\gamma$ Aaboud:2018ftw; Sirunyan:2018iwt, $b\bar{b}\tau\tau$ Aaboud:2018sfw; Sirunyan:2017djm and $b\bar{b}VV$ Aaboud:2018zhh; Sirunyan:2017guj have been investigated by both ATLAS and CMS. In addition, results for the channels $WW\gamma\gamma$ Aaboud:2018ewm and $WWWW$ Aaboud:2018ksn were reported by ATLAS.
A combination of these searches can be found in Refs. Sirunyan:2018ayu; Aad:2019uzh. It is shown that the Higgs self-coupling $\lambda_{3}$ can be constrained to $[-5,12]$ by data, while no constraint on $\kappa_{5}$ has been reported. As a complementary process, double Higgs production via VBF at hadron colliders has been extensively studied in the literature Dolan:2013rja; Liu-Sheng:2014gxa; Dolan:2015zja; Bishara:2016kjn; Arganda:2018ftn. The NLO QCD and higher-order corrections for this process have been calculated in Refs. Baglio:2012np; Frederix:2014hta; Dreyer:2018qbw; Dreyer:2018rfu; Dreyer:2020urf; Dreyer:2020xaj; they find an enhancement of around $7\%$, as is natural for a pure electroweak process. For the high-luminosity LHC (HL-LHC), assuming $\mathcal{L}=3$ ab$^{-1}$ at 14 TeV, it is expected that the couplings of type $g_{V,a2}$ can be constrained to $20\%$ Dolan:2015zja, while a precision of around $1\%$ should be achievable at a future 100 TeV hadron collider Bishara:2016kjn. Furthermore, the hhVV coupling is also accessible via $hVV$ or $hhV$ final states. Ref. Englert:2017gdy argues that a measurement of the $W^{\pm}W^{\pm}h$ final state can constrain the $hhWW$ coupling to $\mathcal{O}(100\%)$ at the HL-LHC, and to $20\%$ at a 100 TeV collider, while the determination of this coupling from the $hhV$ final state yields only a weak bound Nordstrom:2018ceg. Possible measurements of the $g_{V,a2}$ couplings, i.e., of the Higgs interacting with the longitudinal components of massive vector bosons, have thus been covered in some detail in previous work. However, without further assumptions it is not evident that couplings of the Higgs to transverse vector bosons play a lesser role. In this work, we aim at filling this gap: we perform a Monte-Carlo study of the sensitivity to couplings of type $g_{V,b2}$, both for the LHC and for future high-energy hadron colliders, and correlate this with the determination of $g_{V,a2}$.
The VBF process $VV\to hh$ receives a contribution from the Higgs cubic self-coupling. It is well known that at hadron colliders the Higgs self-coupling is most accessible in the gluon-gluon fusion process Plehn:1996wb, whose cross section is one order of magnitude larger than that of the VBF process. This fact has received a lot of attention Baur:2002rb; Li:2013flc; Cao:2015oaa; Cao:2016zob; Baur:2002qd; Ren:2017jbg; Baur:2003gp; Yao:2013ika; Kling:2016lay; Chang:2018uwu; Kim:2018uty; He:2015spf; Papaefstathiou:2012qe; Baur:2003gpa; Dolan:2012rv; Barr:2013tda; deLima:2014dta; Behr:2015oqq; Barger:2013jfa; Barr:2014sga; Papaefstathiou:2015iba; Li:2015yia; Zhao:2016tai; Contino:2016spe; Goncalves:2018yva. A measurement of the VBF process will provide additional precision for the Higgs self-coupling. However, for simplicity we will assume here that the couplings which are accessible in gluon-gluon fusion are known, and we fix them at their SM values. This allows us to focus on the couplings which are specific to the VBF class of processes, and lets us more easily estimate the sensitivity potential for those. This paper is organized as follows. In Sec. 2, we briefly introduce the mass-drop method used for tagging highly boosted Higgs bosons, and use it to explore the SM case at the LHC and at a 100 TeV collider. In Sec. 3, we investigate the potential for the discovery of new physics via multi-Higgs production, in the form of the interactions discussed above. In Sec. 4, we provide projections for bounds on $g_{V,b2}$ and $g_{V,a2}$ for different collision energies. We conclude this paper with a discussion of our findings in Sec. 5. ## 2 Higgs pair production in the Standard Model We focus on the signal process $pp\to hhjj\to 4b2j$ because the decay channel $h\to b\bar{b}$ has the largest branching ratio. The partonic signal events are generated using WHIZARD Kilian:2007gr with the cuts listed in Table 1.
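For bookkeeping, the energy-dependent acceptance cuts of Table 1 can be collected in a small configuration table, which keeps the per-energy selections consistent across the analysis chain; the sketch below uses our own key and function names (units: GeV):

```python
# Table 1 acceptance ("VBF") cuts, keyed by collider energy in TeV (GeV units).
VBF_CUTS = {
    14:  dict(pt_min=20.0, dr_min=0.8, eta_max=5.0, deta_min=3.6, mjj_min=500.0),
    27:  dict(pt_min=20.0, dr_min=0.8, eta_max=5.0, deta_min=3.6, mjj_min=500.0),
    100: dict(pt_min=30.0, dr_min=0.8, eta_max=8.0, deta_min=4.0, mjj_min=800.0),
}

def jet_accepted(pt, eta, energy_tev=14):
    """Single-jet acceptance: transverse-momentum and pseudorapidity cuts."""
    c = VBF_CUTS[energy_tev]
    return pt > c["pt_min"] and abs(eta) < c["eta_max"]

def forward_pair_accepted(deta, mjj, energy_tev=14):
    """Tagging-jet pair: rapidity separation and dijet invariant-mass cuts."""
    c = VBF_CUTS[energy_tev]
    return deta > c["deta_min"] and mjj > c["mjj_min"]

# Example: a pt = 25 GeV jet passes at 14 TeV but fails the harder 100 TeV cut.
assert jet_accepted(25.0, 4.5) and not jet_accepted(25.0, 4.5, energy_tev=100)
```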
We take the parton distribution functions from CTEQ6L1 Pumplin:2002vw. In this work, we consider the main backgrounds $pp\to t\bar{t}\to 2b4j$, $pp\to 2b4j$ (QCD) and $pp\to 4b2j$. The background events $pp\to t\bar{t}\to 2b4j$ are generated by WHIZARD. We have cross-checked the results with MadGraph Alwall:2014hca. Pure-QCD partonic events of type $pp\to 2b4j$ and $pp\to 4b2j$ are generated using ALPGEN Mangano:2002ea. For all event samples, parton showering and hadronization are performed by Pythia 8 Sjostrand:2007gs. Jets are reconstructed by FastJet Cacciari:2011ma using the anti-$k_{t}$ algorithm Cacciari:2008gp with a jet radius $R=0.4$ and a transverse-momentum cut $P_{t}>20$ GeV. We do not account for detector effects in detail but insert values for efficiencies and mistagging rates where appropriate. ### 2.1 Analysis method at the 14 TeV LHC In Table 2 (first column) we list the expected numbers of signal and background events in the SM for the LHC with $\sqrt{s}=14$ TeV and luminosity $\mathcal{L}=3$ ab$^{-1}$. To suppress the QCD background, we require four b-tagged jets in the final state ($n_{b}=4$). We assume a b-tagging efficiency $\epsilon_{b}=0.7$ and a mistagging rate $\epsilon_{miss}=0.001$. The numbers of events after applying the b-tagging requirement are listed in the 2nd column of Table 2. To identify the two forward jets of the VBF process, we first select the jet with the highest energy and label it $j_{1}$. If its energy satisfies $E_{j_{1}}>500$ GeV, we scan over all other jets and determine the maximal rapidity difference $\Delta\eta(j_{1},j)$ and invariant mass $m(j_{1},j)$ with respect to the leading jet. If the conditions $\Delta\eta(j_{1},j)_{max}>3.6$ and $m(j_{1},j)_{max}>500$ GeV are met simultaneously, we identify the most energetic of the remaining jets as $j_{2}$ and label the corresponding pair of jets as the tagged forward jets of the VBF process. Otherwise, the event is rejected.
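The forward-jet selection described above can be sketched in a few lines of Python. This is our reading of the procedure (in particular, the choice of $j_{2}$ when the $\Delta\eta$ and mass maxima are realized by different partner jets is not fully specified in the text, so we simply take the most energetic remaining jet); it is not the authors' actual analysis code:

```python
import math
from dataclasses import dataclass

@dataclass
class Jet:
    px: float
    py: float
    pz: float
    E: float

    @property
    def eta(self):
        p = math.sqrt(self.px**2 + self.py**2 + self.pz**2)
        return 0.5 * math.log((p + self.pz) / (p - self.pz))  # assumes |pz| < p

def inv_mass(a, b):
    e = a.E + b.E
    p2 = (a.px + b.px)**2 + (a.py + b.py)**2 + (a.pz + b.pz)**2
    return math.sqrt(max(e * e - p2, 0.0))

def tag_vbf_jets(jets, e_min=500.0, deta_min=3.6, m_min=500.0):
    """Return (j1, j2) if the event passes the forward-jet tagging, else None."""
    if len(jets) < 2:
        return None
    j1 = max(jets, key=lambda j: j.E)          # most energetic jet
    if j1.E <= e_min:
        return None
    others = [j for j in jets if j is not j1]
    max_deta = max(abs(j1.eta - j.eta) for j in others)
    max_m = max(inv_mass(j1, j) for j in others)
    if max_deta > deta_min and max_m > m_min:  # both conditions simultaneously
        j2 = max(others, key=lambda j: j.E)    # our reading: most energetic partner
        return j1, j2
    return None
```

Applied to a typical VBF topology (two energetic jets at large, opposite pseudorapidity), the function returns the tagged pair; soft or central events are rejected.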
The numbers of events after applying this VBF cut are listed in the 3rd column of Table 2. With these b-jet and forward-jet tagging requirements, the backgrounds $pp\to t\bar{t}$ and $pp\to 2b4j$ are greatly reduced. The dominant remaining background originates from the QCD process $pp\to 4b2j$, whose cross section after cuts is still five orders of magnitude larger than that of the signal. To further suppress this huge background, we select events with massive jets formed by highly boosted Higgs bosons and adopt the mass-drop method Butterworth:2008iy .

Cuts | $\sqrt{s}=14$ TeV | $\sqrt{s}=27$ TeV | $\sqrt{s}=100$ TeV
---|---|---|---
$P_{t}(j)$ | $>20$ GeV | $>20$ GeV | $>30$ GeV
$\Delta R(j,j)$ | $>0.8$ | $>0.8$ | $>0.8$
$|\eta(j)|$ | $<5.0$ | $<5.0$ | $<8.0$
$\Delta\eta(j_{1},j_{2})$ | $>3.6$ | $>3.6$ | $>4.0$
$m(j_{1},j_{2})$ | $>500$ GeV | $>500$ GeV | $>800$ GeV

Table 1: Acceptance cuts used for the calculation of VBF Higgs production in $pp$ collisions (VBF cuts), for three different collider energies. Here $j_{1}$ and $j_{2}$ are the tagged forward jets of the VBF process.

Process | $\sigma\times\mathcal{L}$ | $n_{b}=4$ | VBF
---|---|---|---
SM signal | $993$ | $238$ | $171$
$pp\to 4b2j$ | $2.28\times 10^{8}$ | $5.47\times 10^{7}$ | $1.86\times 10^{7}$
$pp\to 2b4j$ (QCD) | $2.38\times 10^{10}$ | $1.14\times 10^{4}$ | $3.85\times 10^{4}$
$pp\to t\bar{t}\to 2b4j$ | $7.89\times 10^{8}$ | $387$ | $58$

Table 2: Expected numbers of events after the b-tagging and VBF cuts at the 14 TeV LHC. The total integrated luminosity is assumed to be $\mathcal{L}=3000$ fb-1. The b-tagging efficiency is $\epsilon_{b}=0.7$, and the mistagging rate of light quarks is $\epsilon_{miss}=0.001$.

#### 2.1.1 The mass-drop method for highly-boosted Higgs boson tagging

A significant fraction (of the order of $10\;\%$) of the VBF event sample in the SM contains highly boosted Higgs bosons.
A highly boosted Higgs boson has a large transverse momentum ($P_{t}>200$ GeV) and can be detected in the central region of the detector. Its decay products typically form a fat jet with a large jet mass if a large cone parameter is used for the analysis. A fraction of the QCD background events also contains massive jets, but the fat jets originating from Higgs pairs can in principle be distinguished by their characteristic jet substructure. In recent years, various methods for jet substructure analysis have been developed: (1) Jet-grooming methods aim at removing soft radiation which is unlikely to originate from the hard process (see, e.g., Butterworth:2008iy ). (2) Radiation-constraint methods impose a cut on jet shapes to separate the signal from the background (see, e.g., Thaler:2010tr ). (3) Prong-finder methods detect a massive boosted object as a fat jet with multiple hard cores by exploiting the recombination history of the jet algorithm. Among prong-finder algorithms, the mass-drop tagger is particularly suited for isolating boosted Higgs bosons, decaying to $b\bar{b}$, from the QCD background Butterworth:2008iy . A detailed review of jet substructure can be found in Ref. Marzani:2019hun . In this work, we adopt the mass-drop tagger Butterworth:2008iy as a means for tagging highly boosted Higgs bosons in the final state of the VBF process. The method consists of the following two steps: (1) We first identify jets by the standard anti-$k_{t}$ algorithm with the cone parameter $R=0.4$. After identifying the forward jets associated with the VBF process, we recluster the remaining jets using the Cambridge/Aachen (CA) algorithm Dokshitzer:1997in ; Wobisch:1998wt with $R=1.2$. (2) If there are 2 to 4 CA jets in an event, and the leading or subleading jet has a transverse momentum satisfying $P_{t}>200$ GeV and a jet mass $m_{j}>100$ GeV, we apply the following procedure to tag candidates for highly boosted Higgs bosons. 1.
Undo the last step of clustering of jet $j$ to get two daughter jets $j_{1}$ and $j_{2}$ with $m_{j1}>m_{j2}$. 2. If the conditions $\displaystyle m_{j1}<\mu m_{j}\qquad\text{and}\qquad\frac{\min(P_{t}(j_{1}),P_{t}(j_{2}))}{m^{2}_{j}}\Delta R^{2}_{j1,j2}>y_{cut}$ (4) are satisfied, we identify $j$ as a fat jet associated with a highly boosted Higgs boson. 3. Otherwise, we redefine $j$ as $j_{1}$ and repeat the above procedure. There are two dimensionless parameters $\mu$ and $y_{cut}$ in this method, as given in Eq. (4). In this work, we fix them as $\mu=0.67$ and $y_{cut}=0.09$. After two subjets are found, we also apply the filtering method to remove soft radiation which originates from the underlying event and contaminates CA jets with a larger cone size. After applying this tagging algorithm, both signal and background events fall into three classes: 2-boosted Higgs (2BH), 1-boosted Higgs (1BH) and 0-boosted Higgs (0BH) candidate events. The numbers of events in each class are listed in Table 3. We observe that the fraction of signal events in the 2BH category is still small ($2-3\%$). Nevertheless, the corresponding background is two orders of magnitude lower than for the 0BH category, where the fraction of signal events is even smaller. The 1BH category falls in between. The tagger significantly improves the chance of finding signal events, but by itself it is clearly not sufficient for a measurement if the rate is SM-like. (a) 2BH (b) 1BH (c) 0BH Figure 1: Distributions of the reconstructed Higgs mass for (a) 2-boosted Higgs events, (b) 1-boosted Higgs events, and (c) 0-boosted Higgs events. In order to construct kinematic observables which improve the discrimination of signal vs. background, we reconstruct the mass peaks of the Higgs bosons for each event category. For each of the 2BH events, the two jet masses $m_{j}$ should peak around the Higgs boson mass $m_{h}$, as shown in Fig. 1.
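The mass-drop recursion of steps 1-3 can be sketched as follows; this is a minimal Python illustration under simplifying assumptions: each jet is a dict with `pt`, `mass`, `eta`, `phi`, and an optional `parents` pair standing in for the CA clustering history (in FastJet this would be obtained from the cluster sequence), and the criterion of Eq. (4) is implemented exactly as printed there.

```python
import math

MU, Y_CUT = 0.67, 0.09  # the two dimensionless tagger parameters

def delta_r2(a, b):
    """Squared angular distance in the (eta, phi) plane."""
    dphi = abs(a["phi"] - b["phi"])
    if dphi > math.pi:
        dphi = 2.0 * math.pi - dphi
    deta = a["eta"] - b["eta"]
    return deta * deta + dphi * dphi

def mass_drop(jet):
    """Recursive mass-drop tagger.  Returns the tagged subjet pair
    (heavier first) or None if the recursion terminates without a tag."""
    j = jet
    while j.get("parents"):
        # undo the last clustering step: daughters ordered by mass
        j1, j2 = sorted(j["parents"], key=lambda p: p["mass"], reverse=True)
        drop = j1["mass"] < MU * j["mass"]
        asym = (min(j1["pt"], j2["pt"]) / j["mass"] ** 2) * delta_r2(j1, j2) > Y_CUT
        if drop and asym:
            return j1, j2   # candidate boosted-Higgs substructure
        j = j1              # otherwise follow the heavier branch and repeat
    return None
```

In the full analysis the tagged subjet pair would subsequently be filtered to remove underlying-event contamination, as described in the text.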
For a 1BH event, the jet mass of the leading fat jet should peak around the mass of the Higgs boson, while the mass of the second reconstructed Higgs candidate is given by the invariant mass of two b jets. For a 0BH event, the two Higgs boson candidates are reconstructed by using the $\chi^{2}$ method. To this end, we define $\chi^{2}$ as follows: $\displaystyle\chi^{2}(m)$ $\displaystyle=$ $\displaystyle\frac{|m(j_{1},j_{2})-m_{h}|^{2}}{\sigma^{2}_{j}}+\frac{|m(j_{3},j_{4})-m_{h}|^{2}}{\sigma^{2}_{j}}\,.$ (5) We assume $m_{h}=125$ GeV, and $m(j_{1},j_{2})$ and $m(j_{3},j_{4})$ are the invariant masses of the two jet pairs in the final state, scanning over each combination. $\sigma_{j}=10$ GeV is used to account for the jet energy resolution. The pairing which minimizes $\chi^{2}$ is selected, and the corresponding invariant masses determined by the pairs of jets are taken as the reconstructed masses of the Higgs bosons. We display the distributions of the reconstructed mass of the leading Higgs boson in Fig. 1. The shapes of the 2BH and 1BH cases are almost identical. We note that the Higgs peak in the 2BH case is narrower than in the 0BH case, since wrong pairings are rejected more efficiently.

| 2-boosted Higgs (2BH) | 1-boosted Higgs (1BH) | 0-boosted Higgs (0BH)
---|---|---|---
SM Signal | $4$ | $21$ | $146$
$pp\to 4b2j$ | $1.17\times 10^{5}$ | $1.56\times 10^{6}$ | $1.69\times 10^{7}$
$pp\to 2b4j$ (QCD) | $28$ | $349$ | $3.81\times 10^{4}$
$pp\to t\bar{t}\to 2b4j$ | $3$ | $13$ | $42$

Table 3: The numbers of events in the 2BH, 1BH and 0BH classes at the 14 TeV LHC.

#### 2.1.2 Multivariate analysis

After applying the VBF cuts and boosted-Higgs tagging, we employ a boosted decision tree (BDT) to further improve the signal-to-background ratio; the BDT exploits correlations among observables in the signal and helps to further suppress the background.
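The $\chi^{2}$ pairing of Eq. (5) amounts to scanning the three ways of grouping four b jets into two pairs and keeping the combination closest to two Higgs masses. A minimal sketch (the matrix-of-pair-masses interface is an illustrative choice, not the paper's code):

```python
M_H, SIGMA_J = 125.0, 10.0  # Higgs mass and jet energy resolution, GeV

# the three distinct pairings of four jets indexed 0..3
PAIRINGS = [((0, 1), (2, 3)), ((0, 2), (1, 3)), ((0, 3), (1, 2))]

def best_pairing(m):
    """Select the pairing of four b jets that minimizes chi^2 of Eq. (5).
    m is a symmetric 4x4 matrix of pair invariant masses m(ji, jj).
    Returns the two reconstructed Higgs masses and chi^2_min."""
    def chi2(p):
        (a, b), (c, d) = p
        return ((m[a][b] - M_H) ** 2 + (m[c][d] - M_H) ** 2) / SIGMA_J ** 2
    best = min(PAIRINGS, key=chi2)
    (a, b), (c, d) = best
    return (m[a][b], m[c][d]), chi2(best)
```

The returned $\chi^{2}_{min}$ is exactly the quantity used later as a BDT input for the 0BH class.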
We select the following observables as the input to our BDT analysis:

* $P_{t}(h_{i})$: the transverse momenta of the two reconstructed Higgs bosons.
* $m(h_{i})$: the invariant masses of the reconstructed Higgs bosons.
* $P_{t}(j_{i})$: the transverse momenta of the two forward jets.
* $E(j_{i})$: the energies of the forward jets.
* $\eta(j_{i})$: the pseudo-rapidities of the forward jets.
* $m(j,j)$: the invariant mass of the forward jets.
* $\Delta\eta(j,j)$: the rapidity difference of the forward jets.
* $P_{t}(j^{sub}_{i})$: the transverse momenta of the two subjets of each highly boosted Higgs boson.
* $m(h,h)$: the invariant mass of the two Higgs boson candidates.
* $\chi^{2}_{min}$ (only for the 0BH case): the minimum value of $\chi^{2}$.

The results of the BDT response are presented in Fig. 2. Clearly, signal and background can be separated best in the 2BH case. For the 1BH and 0BH cases, extracting the signal is challenging even after exploiting the BDT method. We optimize the BDT cut to achieve the maximal significance, defined as $S/\sqrt{S+B}$, where $S$ is the number of signal events and $B$ is the number of total background events. The efficiencies and significances of the optimized BDT cut are listed in Table 4 for all three event classes. Comparing the results given in Table 3 and Table 4, we conclude that the BDT reduces the background by one additional order of magnitude, improving on the sequential cut method. Nevertheless, the final number of SM signal events is still tiny compared to the background in all cases, and the significance only reaches $0.02$ (2BH, 1BH) or $0.04$ (0BH). This is still far from the requirement of a discovery at the LHC. (a) 2-boosted Higgs (b) 1-boosted Higgs (c) 0-boosted Higgs Figure 2: The response of the discriminants to the SM signal and background at the 14 TeV LHC for (a) 2-boosted events, (b) 1-boosted events, and (c) 0-boosted events.
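The quoted significances follow directly from the definition $S/\sqrt{S+B}$ applied to the event counts of Table 4:

```python
import math

def significance(s, b):
    """Expected significance S / sqrt(S + B) for signal and background counts."""
    return s / math.sqrt(s + b)

# Event counts after the optimized BDT cut (Table 4, 14 TeV LHC).
# With the rounded integer signal counts these reproduce the quoted
# values 0.020, 0.024 and 0.043 up to rounding.
sig_2bh = significance(3, 2.06e4)
sig_1bh = significance(13, 3.05e5)
sig_0bh = significance(90, 4.42e6)
```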
| 2-boosted Higgs (2BH) | 1-boosted Higgs (1BH) | 0-boosted Higgs (0BH)
---|---|---|---
SM Signal with BDT cut | $3$ | $13$ | $90$
Background with BDT cut | $2.06\times 10^{4}$ | $3.05\times 10^{5}$ | $4.42\times 10^{6}$
$S/B$ | $1.40\times 10^{-4}$ | $4.30\times 10^{-5}$ | $2.04\times 10^{-5}$
$S/\sqrt{S+B}$ | $0.020$ | $0.024$ | $0.043$

Table 4: Signal and background yields, $S/B$, and significances after the optimized BDT cut at the 14 TeV LHC.

### 2.2 Analysis of $pp\to hhjj$ in the SM at a 100 TeV hadron collider

We now apply the methods described above to the VBF process at a future 100 TeV hadron collider. We assume a high integrated luminosity of $\mathcal{L}=30$ ab-1, and adapt all selection parameters to the different environment as appropriate. We require that the most energetic jet has an energy $E_{j}>800$ GeV. The VBF cuts are adjusted to $\Delta\eta_{jj,max}>4.0$ and $m_{jj,max}>800$ GeV. Table 5 shows the numbers of events after imposing the b-tagging and VBF cuts. Due to the increased cross section at high energy and the high luminosity of the collider, the number of signal events grows by more than two orders of magnitude compared to the high-luminosity LHC. After applying the b-tagging and VBF cuts, the total number of signal events is $8.96\times 10^{4}$. Of course, the number of background events also increases, reaching about $10^{10}$, so background reduction is still essential. The numbers of events for the 2BH, 1BH and 0BH cases are listed in Table 6. As one would expect, more events with highly boosted Higgs bosons can be observed at the 100 TeV collider than at the LHC. This is illustrated by the SM curves of Fig. 4. To enable signal/background discrimination in these three cases, we apply the BDT method as described above. The results are shown in Fig. 3 and Table 7. We can set the BDT cut in a region where the residual background is effectively zero. These results show that it is possible in principle to discover the SM signal at a 100 TeV collider.
The large rate of the VBF process at high collision energy leaves enough signal events after all measures for background reduction have been applied. We note, however, that pileup effects in a high-luminosity run might pose a challenge for analyses such as this one. For a final verdict on the detectability of the SM signal, a more sophisticated full-simulation study would be necessary, which is beyond the scope of this paper.

Process | $\sigma\times\mathcal{L}$ | $n_{b}=4$ | VBF
---|---|---|---
SM signal | $4.28\times 10^{5}$ | $1.03\times 10^{5}$ | $8.96\times 10^{4}$
$pp\to 4b2j$ | $5.02\times 10^{10}$ | $1.21\times 10^{10}$ | $8.51\times 10^{9}$
$pp\to 2b4j$ | $5.04\times 10^{12}$ | $2.47\times 10^{6}$ | $1.83\times 10^{6}$
$pp\to t\bar{t}\to 2b4j$ | $3.93\times 10^{11}$ | $1.93\times 10^{5}$ | $6.20\times 10^{4}$

Table 5: Expected numbers of events after the b-tagging and VBF cuts at the 100 TeV collider. Here, the total integrated luminosity is assumed to be $\mathcal{L}=30$ ab-1. The b-tagging efficiency is $\epsilon_{b}=0.7$, and the mistagging rate is $\epsilon_{miss}=0.001$.

| 2-boosted Higgs (2BH) | 1-boosted Higgs (1BH) | 0-boosted Higgs (0BH)
---|---|---|---
SM Signal | $4265$ | $1.76\times 10^{4}$ | $6.77\times 10^{4}$
$pp\to 4b2j$ | $3.65\times 10^{8}$ | $2.01\times 10^{9}$ | $6.13\times 10^{9}$
$pp\to 2b4j$ | $8.35\times 10^{4}$ | $4.40\times 10^{5}$ | $1.31\times 10^{6}$
$pp\to t\bar{t}\to 2b4j$ | $8244$ | $2.20\times 10^{4}$ | $3.18\times 10^{4}$

Table 6: The numbers of events for the 2BH, 1BH and 0BH cases at the 100 TeV collider.

(a) 2BH (b) 1BH (c) 0BH Figure 3: The response of the discriminants to the SM signal and background at the 100 TeV collider for (a) the 2BH case, (b) the 1BH case, and (c) the 0BH case.
| 2-boosted Higgs (2BH) | 1-boosted Higgs (1BH) | 0-boosted Higgs (0BH)
---|---|---|---
SM Signal with BDT cut | $298$ | $12$ | $90$
Background with BDT cut | $0$ | $0$ | $78$
$S/B$ | - | - | $1.15$
$S/\sqrt{S+B}$ | $17.27$ | $3.43$ | $6.92$

Table 7: Signal and background yields, $S/B$, and significances after the BDT cut at the 100 TeV collider.

## 3 Multi-Higgs production in VBF processes with dimension-6 operators

In this section, we extend our study of the $pp\to hhjj$ process to contributions beyond the SM. Given the phenomenological Lagrangian (3), such effects are parameterized in terms of the coefficients $g_{i}$, where we focus in particular on the $g_{V,b2}$ coupling that describes a non-SM double-Higgs interaction with the transverse polarization components of the vector bosons.

### 3.1 Effective-Theory description

If we introduce the gauge symmetry of the SM in the phenomenological description, any anomalous effects can be re-expressed in terms of higher-dimensional gauge-invariant operators. To avoid redundancy, it is convenient to choose a particular operator basis such as the SILH basis Giudice:2007fh that we adopt for this work. As usual, we truncate the power-series expansion of the gauge-invariant effective theory at dimension six. This truncation allows us to express the phenomenological couplings in (3) in terms of a small number of SILH operator coefficients. Such a simplification allows for relating quantitative results for the VBF Higgs-pair process to the analysis of existing data, as well as to studies of different processes and interactions.
We list the relevant terms in the SILH basis, $\displaystyle\begin{split}{\cal L}_{\text{SILH}}\supset&\frac{ic_{W}g}{2m_{\rho}^{2}}\left(H^{\dagger}\sigma^{i}\overleftrightarrow{D^{\mu}}H\right)(D^{\nu}W_{\mu\nu})^{i}+\frac{ic_{B}g^{\prime}}{2m_{\rho}^{2}}\left(H^{\dagger}\overleftrightarrow{D^{\mu}}H\right)(\partial^{\nu}B_{\mu\nu})\\\ &+\frac{ic_{HW}g}{16\pi^{2}f^{2}}(D^{\mu}H)^{\dagger}\sigma^{i}(D^{\nu}H)W_{\mu\nu}^{i}+\frac{ic_{HB}g^{\prime}}{16\pi^{2}f^{2}}(D^{\mu}H)^{\dagger}(D^{\nu}H)B_{\mu\nu}\\\ &+\frac{c_{\gamma}{g^{\prime}}^{2}}{16\pi^{2}f^{2}}\frac{g^{2}}{g_{\rho}^{2}}H^{\dagger}HB_{\mu\nu}B^{\mu\nu}.\end{split}$ (6) In the following, we make use of the results from Kilian:2017nio ; Kilian:2018bhs in Table 8, which relate the phenomenological coefficients such as $g_{V,b}$ to the parameters of (6). In particular, the couplings $g_{V,b}$ that we are interested in only depend on $c_{HW},c_{HB}$ and $c_{\gamma}$, but not on $c_{W}$ and $c_{B}$, which determine $g_{V,a}$. The SILH parameterization has been evaluated as the low-energy limit of various ultraviolet theories, such as models where the Higgs becomes a pseudo Nambu-Goldstone boson, or holographic completions with extra dimensions; cf. Qi:2019ocx ; Agrawal:2019bpm ; Xu:2019xuo ; Li:2019ghf . We note the correlations between $g_{V,a1}$ and $g_{V,a2}$, and between $g_{V,b1}$ and $g_{V,b2}$, which follow from the dimension-six truncation and should be tested against real data if actual deviations from the SM show up. In the SILH effective Lagrangian, Higgs interactions include further dimension-six operators such as $\displaystyle\frac{c_{H}}{2f^{2}}\partial^{\mu}\left(H^{\dagger}H\right)\partial_{\mu}\left(H^{\dagger}H\right)+\frac{c_{T}}{2f^{2}}\left(H^{\dagger}{\overleftrightarrow{D^{\mu}}}H\right)\left(H^{\dagger}{\overleftrightarrow{D}}_{\mu}H\right).$ (7) The coefficients of these operators are switched off in our analysis because they are to be measured in different processes.
In particular, the first one entails a global shift of all Higgs interactions which is equivalent to a modified Higgs total width, while the latter violates the custodial symmetry of the weak interactions and globally modifies the $ZZ$ vs. $WW$ Higgs couplings. In our current work we assume that no custodial-symmetry violation beyond the SM is present, as detailed below. In the third column of Table 8, we present some typical numerical values for these parameters. Taking $g\sim 0.654$, $g^{\prime}\sim 0.350$, $v\sim 246\ \text{GeV}$, $\tan\theta=g^{\prime}/g=0.535$, $\alpha=\frac{g^{2}v^{2}}{32\pi^{2}f^{2}}=8.2\times 10^{-5}$ with $f=1$ TeV, and $g_{\rho}\sim\frac{4\pi}{\sqrt{3}}$, $m_{\rho}=g_{\rho}f=7.3$ TeV, we can simplify the expressions in this table with $\displaystyle\zeta_{h}\sim 1,\quad\zeta_{A}\sim 1,\quad y_{ZA}\sim\alpha\frac{g^{\prime}}{g}(c_{HW}-c_{HB}),\quad$ (8) $\displaystyle\zeta_{Z}\sim 1-\frac{1}{8}\alpha^{2}\left(\frac{g^{\prime}}{g}\right)^{2}(c_{HW}-c_{HB})^{2},\quad\zeta_{AZ}\sim\alpha\frac{g^{\prime}}{4g}(c_{HW}-c_{HB}),\quad\zeta_{W}\sim 1-\frac{\alpha}{2}c_{HW}.$ (9) (Our notation slightly differs from the definitions for $\bar{c}_{i}$ used in Ref. Ellis:2018gqa .) To simplify our discussion of the future collider sensitivity to $g_{V,b2}$ and $g_{V,a2}$, we assume custodial-symmetry relations, namely $g_{W,b2}=g_{Z,b2}$, $g_{X,b2}=g_{A,b2}=0$, and $g_{W,a2}=g_{Z,a2}$. This roughly renders $c_{\gamma},c_{HB},c_{B}\sim 0$, so we can perform a two-parameter study in $c_{W}$ and $c_{HW}$. The interactions proportional to $g_{W,b2}$ and $g_{W,a2}$ ($g_{Z,b2}$ and $g_{Z,a2}$) account for the dominant contributions to the cross section of the $pp\to hhjj$ process, up to $70\%$ ($30\%$), respectively.
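The numerical inputs quoted above can be checked directly; a short script reproducing the values used in Table 8:

```python
import math

# SM couplings, vev and SILH scale as quoted in the text (GeV)
g, gp, v, f = 0.654, 0.350, 246.0, 1000.0
g_rho = 4.0 * math.pi / math.sqrt(3.0)
m_rho = g_rho * f

alpha = g**2 * v**2 / (32.0 * math.pi**2 * f**2)  # ~8.2e-5
tan_theta = gp / g                                # ~0.535

# alpha is the ~1e-4 prefactor of g_W,b1 in Table 8, and
# tan^2(theta) ~ 0.29 is the relative weight of c_HB vs. c_HW there.
```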
| SILH | numerics with assumptions below
---|---|---
$g_{W,b1}$ | $c_{HW}\frac{g^{2}v^{2}}{32\pi^{2}f^{2}}\zeta_{h}\zeta^{2}_{W}$ | $10^{-4}c_{HW}$
$g_{W,b2}$ | $g_{W,b1}\zeta_{h}$ | $10^{-4}c_{HW}$
$g_{A,b1}$ | $-c_{\gamma}\frac{g^{2}v^{2}}{8\pi^{2}f^{2}}\frac{{g^{\prime}}^{2}}{g^{2}_{\rho}}\cos^{2}{\theta}\zeta_{h}\zeta_{A}^{2}$ | $-10^{-6}c_{\gamma}$
$g_{A,b2}$ | $g_{A,b1}\zeta_{h}$ | $-10^{-6}c_{\gamma}$
$g_{X,b1}$ | $\frac{g{g^{\prime}}v^{2}}{64\pi^{2}f^{2}}\left[(c_{HW}-c_{HB})+8c_{\gamma}\frac{g^{2}}{g^{2}_{\rho}}\sin^{2}\theta\right]\zeta_{h}\zeta_{A}\zeta_{Z}+c_{\gamma}\frac{g^{2}v^{2}}{4\pi^{2}f^{2}}\frac{{g^{\prime}}^{2}}{g^{2}_{\rho}}\cos^{2}{\theta}\zeta_{h}\zeta_{AZ}^{2}$ | $10^{-5}(c_{HW}-c_{HB})$
$g_{X,b2}$ | $g_{X,b1}\zeta_{h}$ | $10^{-5}(c_{HW}-c_{HB})$
$g_{Z,b1}$ | $\frac{g^{2}v^{2}}{32\pi^{2}f^{2}}(c_{HW}+c_{HB}\tan^{2}{\theta})\zeta_{h}\zeta^{2}_{Z}-c_{\gamma}\frac{g^{2}v^{2}}{8\pi^{2}f^{2}}\frac{{g^{\prime}}^{2}}{g^{2}_{\rho}}\cos^{2}{\theta}\zeta_{h}\zeta_{AZ}^{2}-\frac{g{g^{\prime}}v^{2}}{64\pi^{2}f^{2}}\left[(c_{HW}-c_{HB})+8c_{\gamma}\frac{g^{2}}{g^{2}_{\rho}}\sin^{2}\theta\right]\zeta_{h}\zeta_{AZ}\zeta_{Z}$ | $10^{-4}(c_{HW}+0.29c_{HB})$
$g_{Z,b2}$ | $g_{Z,b1}\zeta_{h}$ | $10^{-4}(c_{HW}+0.29c_{HB})$
$g_{W,a1}$ | $\left[1-\left(c_{W}\frac{g^{2}v^{2}}{m^{2}_{\rho}}+c_{HW}\frac{g^{2}v^{2}}{16\pi^{2}f^{2}}\right)\right]\zeta_{h}\zeta^{2}_{W}$ | $1-2\times 10^{-4}(3c_{W}+c_{HW})$
$g_{W,a2}$ | $\left[1-3\left(c_{W}\frac{g^{2}v^{2}}{m^{2}_{\rho}}+c_{HW}\frac{g^{2}v^{2}}{16\pi^{2}f^{2}}\right)\right]\zeta^{2}_{h}\zeta^{2}_{W}$ | $1-6\times 10^{-4}(3c_{W}+c_{HW})$
$g_{Z,a1}$ | $\left[1-\left(c_{W}\frac{g^{2}v^{2}}{m^{2}_{\rho}}+c_{B}\frac{{g^{\prime}}^{2}v^{2}}{m^{2}_{\rho}}+c_{HW}\frac{g^{2}v^{2}}{16\pi^{2}f^{2}}+c_{HB}\frac{{g^{\prime}}^{2}v^{2}}{16\pi^{2}f^{2}}\right)\right]\zeta_{h}\zeta^{2}_{Z}$ | $1-2\times 10^{-4}[3(c_{W}+0.29c_{B})+c_{HW}+0.29c_{HB}]$
$g_{Z,a2}$ | $\left[1-3\left(c_{W}\frac{g^{2}v^{2}}{m^{2}_{\rho}}+c_{B}\frac{{g^{\prime}}^{2}v^{2}}{m^{2}_{\rho}}+c_{HW}\frac{g^{2}v^{2}}{16\pi^{2}f^{2}}+c_{HB}\frac{{g^{\prime}}^{2}v^{2}}{16\pi^{2}f^{2}}\right)\right]\zeta^{2}_{h}\zeta^{2}_{Z}$ | $1-6\times 10^{-4}[3(c_{W}+0.29c_{B})+c_{HW}+0.29c_{HB}]$

Table 8: Relations between the phenomenological Lagrangian parameters in (1-3) (first column), the SILH effective Lagrangian (6) (second column), and numerical estimates (third column). The extra parameters $\zeta^{n}_{h}$, $\zeta^{n}_{W}$, $\zeta^{n}_{Z}$, $\zeta^{n}_{A}$, $\zeta^{n}_{AZ}$ (defined in Kilian:2017nio ; Kilian:2018bhs ) are induced by the Higgs and gauge-boson wave-function normalization, respectively.

### 3.2 Higgs-pair couplings to transverse vector polarizations in VBF

Introducing non-SM effects proportional to the $g_{V,b2}$ couplings, we consider kinematical distributions and their discriminating power. In Fig. 4, we display the distributions of the $P_{t}$ of the leading Higgs boson and the invariant mass of the Higgs-boson pair, respectively. The subfigures (a) and (b) show the distributions at the LHC with collision energy 14 TeV. The new-physics effect is illustrated by the green and blue curves, which correspond to two different values of $g_{V,b2}$, namely $g_{V,b2}=0.09$ and $g_{V,b2}=0.18$, respectively. We observe a huge enhancement at high $P_{t}$ in the curves which include the new interaction of Higgs bosons with transversal gauge bosons.
For the selected parameter values, the fraction of events in the region with $P_{t}>200$ GeV increases to $50\%$ and $70\%$, respectively, while in the SM this fraction is just $18\%$. The corresponding distributions at a 100 TeV hadron collider are shown in Fig. 4 (c) and (d), where we select $g_{V,b2}=0.018$ and $0.024$. In this case, the fraction of events in the region with $P_{t}>200$ GeV increases to $50\%$ and $60\%$, respectively, while in the SM it is $25\%$. Subfigures (b) and (d) contain the Higgs-pair invariant-mass distributions for the LHC and for the 100 TeV collider, which are likewise enhanced in the high-mass region if anomalous effects are included. Figure 4: Distributions of (a) the $P_{t}$ of the leading Higgs and (b) the invariant mass of the Higgs pair at the 14 TeV LHC, with $g_{V,b2}=0.09$ and $g_{V,b2}=0.18$. The corresponding distributions for a 100 TeV collider are shown in (c) and (d), with $g_{V,b2}=0.018$ and $g_{V,b2}=0.024$. As a concrete numerical example for the analysis in the presence of new-physics effects, in Table 9 we demonstrate the cut flow for the value $g_{V,b2}=0.18$ at the 14 TeV LHC. For this parameter value, the total signal cross section ($\sigma\times\mathcal{L}$) is enhanced by a factor of 4 over the SM value. As Fig. 4(a) demonstrates, most of the enhancement occurs in the boosted region. After the b-tagging and VBF cuts have been applied, more than $600$ signal events remain observable. In Table 10 we present the resulting numbers for the three classes of events. In this scenario, the signal accounts for $25\%$ and $35\%$ of the total events in the 2BH and 1BH classes, respectively. Compared with the SM results given in Table 3, the increase in signal events in the 2BH (1BH) case amounts to a factor 35 (10), respectively. When the BDT method is applied, this result is further improved, as shown in Fig. 5 and Table 11.
By comparing this with the results of Table 4, we conclude that isolating the boosted-Higgs region significantly enhances the discovery potential of this process, both for the 2BH and 1BH cases. By contrast, in the 0BH case the signal significance is too low for the chosen collider parameters and benchmark value of $g_{V,b2}$.

Process | $\sigma\times\mathcal{L}$ | $n_{b}=4$ | VBF
---|---|---|---
$g_{V,b2}=0.18$ signal | $4243$ | $1019$ | $617$
$pp\to 4b2j$ | $2.28\times 10^{8}$ | $5.47\times 10^{7}$ | $1.86\times 10^{7}$
$pp\to 2b4j$ | $2.38\times 10^{10}$ | $1.14\times 10^{4}$ | $3.85\times 10^{4}$
$pp\to t\bar{t}\to 2b4j$ | $7.89\times 10^{8}$ | $387$ | $58$

Table 9: Expected numbers of events after the b-tagging and VBF cuts at the 14 TeV LHC for $g_{V,b2}=0.18$. Here, the total integrated luminosity is assumed to be $\mathcal{L}=3$ ab-1. The b-tagging efficiency is $\epsilon_{b}=0.7$, and the mistagging rate is $\epsilon_{miss}=0.001$.

| 2-boosted Higgs (2BH) | 1-boosted Higgs (1BH) | 0-boosted Higgs (0BH)
---|---|---|---
$g_{V,b2}=0.18$ Signal | $153$ | $217$ | $247$
$pp\to 4b2j$ | $1.17\times 10^{5}$ | $1.56\times 10^{6}$ | $1.69\times 10^{7}$
$pp\to 2b4j$ | $28$ | $349$ | $3.81\times 10^{4}$
$pp\to t\bar{t}\to 2b4j$ | $3$ | $13$ | $42$

Table 10: The numbers of events in the 2BH, 1BH and 0BH classes at the 14 TeV LHC for $g_{V,b2}=0.18$.

(a) 2BH (b) 1BH (c) 0BH Figure 5: The response of the discriminants to the $g_{V,b2}$ signal and background at the 14 TeV LHC for (a) 2BH, (b) 1BH, and (c) 0BH.

| 2-boosted Higgs (2BH) | 1-boosted Higgs (1BH) | 0-boosted Higgs (0BH)
---|---|---|---
$g_{V,b2}=0.18$ Signal with BDT cut | $54$ | $30$ | $10$
Background with BDT cut | $0$ | $0$ | $3430$
$S/B$ | - | - | $2.99\times 10^{-3}$
$S/\sqrt{S+B}$ | $7.348$ | $5.477$ | $0.175$

Table 11: Signal and background yields, $S/B$, and significances after the BDT cut at the 14 TeV LHC for $g_{V,b2}=0.18$.
### 3.3 Unitarity limits and discovery reach

The interactions of Higgs bosons with transverse gauge bosons in the Lagrangian (3) involve derivative couplings and are therefore enhanced over the SM interactions at high values of the four-momenta. This is clearly visible in Fig. 4, where the contribution of the new interactions dominates for sufficiently large values of the transverse momentum $P_{t}$ or the Higgs-pair invariant mass $m(h,h)$. If this behavior is naively extrapolated, the computed amplitudes will violate unitarity constraints. In Ref. Kilian:2018bhs , we have derived a generic unitarity bound for the $pp\to hhjj$ process, which relates the value of $g_{V,b2}$ to a UV cutoff $\Lambda_{UV}$: $\displaystyle\frac{\Lambda^{4}_{UV}}{2^{9}\pi^{2}v^{4}}|g_{V,b2}|^{2}\leq\frac{1}{4}.$ (10) For energy-momentum values beyond $\Lambda_{UV}$, the EFT expansion breaks down. We expect higher-order contributions and, eventually, a new structure of underlying interactions, e.g., a resonance, to dampen the rise of the amplitudes. Conversely, for the EFT description to remain useful, the value of $g_{V,b2}$ must be such that the cutoff implied by (10) lies outside the accessible kinematic range; otherwise we would have to apply a cutoff or form factor to the calculated distributions. In Fig. 6, we display the sensitivity to $\Lambda_{UV}$ using (a) the 2BH case and (b) the 1BH case at the 14 TeV LHC, respectively. To this end, we determine the maximally allowed value of $g_{V,b2}$ as a function of $\Lambda_{UV}$ using (10). The y-axis indicates the number of signal events that result for corresponding values of the cutoff $\Lambda_{UV}$ and the maximal $g_{V,b2}$, respectively. In each plot, the dashed blue line marks the $5\sigma$ discovery threshold as it follows from the signal and background event rates derived above. The crossing points of the signal curves and the discovery thresholds in Fig.
6 determine the sensitivity of the analysis to heavy new physics, assuming that the bound (10) for $g_{V,b2}$ is saturated. We conclude that the effect of a heavy resonance, for instance, can be accessible up to $4.4$ TeV in the 2BH case and up to $3.6$ TeV in the 1BH case. The cleaner environment of the 2BH case is clearly preferred. By contrast, the discovery reach of the 0BH case is limited to about $2.4$ TeV. Figure 6: The $5\sigma$ discovery constraints on the new-physics cutoff at the 14 TeV LHC. (a) shows the constraint obtained from the 2BH case, and (b) that obtained from the 1BH case; (c) and (d) show the corresponding constraints on $g_{V,b2}$ in the 2BH and 1BH cases, respectively. We can apply the analogous analysis to a 100 TeV hadron collider. Based on the results obtained for the LHC with $\sqrt{s}=14$ TeV, where the 2BH case has been most useful, below we only present the results for the analysis which requires two highly boosted Higgs bosons. As shown above, at a 100 TeV machine the signal can be discovered already in the SM case where no enhancement is present. Therefore, we define the observability of new physics in terms of the deviation in rate, $N_{2BH}-N^{SM}_{2BH}$, where $N^{SM}_{2BH}$ is the number of events in the 2BH category in the SM case. Using the BDT analysis as before, we derive the $5\sigma$ excess bound for new physics at the 100 TeV collider, as marked in Fig. 7 by the blue dashed line. We read off from this figure that this collider can yield a meaningful constraint on $g_{V,b2}$ at a value which corresponds to a sensitivity to the new-physics scale of $\Lambda_{UV}=27$ TeV. Figure 7: (a) The $5\sigma$ excess constraint on the new-physics cutoff at a 100 TeV hadron collider, obtained from 2BH events; (b) the corresponding constraint on $g_{V,b2}$.
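The connection between the coupling and the cutoff is easy to evaluate explicitly: saturating Eq. (10) gives $\Lambda_{UV}=v\,(2^{7}\pi^{2})^{1/4}/\sqrt{|g_{V,b2}|}$. A short check that this reproduces the scales quoted in the text:

```python
import math

V = 246.0  # Higgs vev, GeV

def lambda_uv(g_vb2):
    """Cutoff saturating the unitarity bound of Eq. (10):
    Lambda_UV^4 = 2^7 pi^2 v^4 / |g_Vb2|^2."""
    return V * (128.0 * math.pi**2) ** 0.25 / math.sqrt(abs(g_vb2))

cutoff_benchmark = lambda_uv(0.18)   # ~3.5 TeV for the 14 TeV benchmark coupling
cutoff_100tev = lambda_uv(0.003)     # ~27 TeV for the 100 TeV-collider bound on g_Vb2
```

The second value matches the quoted sensitivity $\Lambda_{UV}=27$ TeV for the $g_{V,b2}$ bound of order $0.003$ derived below.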
## 4 Two-parameter bounds

So far, we have considered only the dependence of the Higgs-pair VBF process on the couplings $g_{V,b2}$ to the transverse polarization of vector bosons, which dominate the distribution in the highly boosted region. In this section, we complement the discussion by also taking into account the $g_{V,a2}$ couplings which describe the interaction of a Higgs pair with longitudinally polarized vector bosons. This interaction exists in the SM but can receive a correction if dimension-6 operators are included. In order to express the cross sections of $pp\to hhjj$ in terms of the parameters given in Eq. (1), we impose some simplifying assumptions motivated by the phenomenological information expected from data on single-Higgs processes. For instance, the $WWh$ vertex should be strongly constrained by data from the Higgs decay to $WW$ as well as from VBF single-Higgs production. As stated before, we ignore the universal ambiguity in the Higgs-coupling determination due to undetected Higgs decays at the LHC, which can be lifted by model-independent measurements at Higgs factories such as the CEPC, the ILC, or the CLIC collider. Accordingly, we fix $g_{W,a1}$ and $g_{W,b1}$, the single-Higgs couplings to longitudinal and transverse $W$ polarizations, to their SM values. As a further simplification, we assume the custodial-symmetry relations $g_{W,ai}=g_{Z,ai}$ and $g_{W,bi}=g_{Z,bi}$ ($i=1,2$) whenever contributions of Z bosons are considered.
Since the amplitudes are at most linear in the parameters, we can parameterize the cross section of $pp\to hhjj$ in terms of $g_{V,a2}$ as $\displaystyle\sigma(pp\to hhjj)$ $\displaystyle=$ $\displaystyle\sigma^{0}_{a_{2}}+\sigma^{1}_{a_{2}}g_{V,a2}+\sigma^{2}_{a_{2}}g_{V,a2}^{2}\,.$ (11) We compute the coefficients $\sigma^{0}_{a_{2}}$, $\sigma^{1}_{a_{2}}$, and $\sigma^{2}_{a_{2}}$ numerically using Monte-Carlo methods, evaluating the total cross section for a sufficient number of different coupling values. The results are given in Table 12.

| $\sigma^{0}_{a_{2}}$ (fb) | $\sigma^{1}_{a_{2}}$ (fb) | $\sigma^{2}_{a_{2}}$ (fb)
---|---|---|---
14 TeV | $17.71$ | $-29.33$ | $12.68$
27 TeV | $88.3$ | $-152.2$ | $68.1$
100 TeV | $1601.2$ | $-2963.8$ | $1401$

Table 12: Coefficients $\sigma^{0}_{a_{2}}$, $\sigma^{1}_{a_{2}}$, and $\sigma^{2}_{a_{2}}$ in the expression (11) for $pp\to hhjj$ at three different collider energies.

Analogously, we can parameterize the cross section as a function of $g_{V,b2}$ as $\displaystyle\sigma(pp\to hhjj)$ $\displaystyle=$ $\displaystyle\sigma^{0}_{b_{2}}+\sigma^{1}_{b_{2}}g_{V,b2}+\sigma^{2}_{b_{2}}g_{V,b2}^{2}\,.$ (12) The numerical results for $\sigma^{0}_{b_{2}}$, $\sigma^{1}_{b_{2}}$, and $\sigma^{2}_{b_{2}}$ are given in Table 13. When both $g_{V,b2}$ and $g_{V,a2}$ are turned on, we have to include a mixed coefficient proportional to $g_{V,b2}g_{V,a2}$. The corresponding results can be found in Kilian:2018bhs ; the values are $1.7$ fb for 14 TeV, $9.6$ fb for 27 TeV, and $95$ fb for 100 TeV, respectively.

| $\sigma^{0}_{b_{2}}$ (fb) | $\sigma^{1}_{b_{2}}$ (fb) | $\sigma^{2}_{b_{2}}$ (fb)
---|---|---|---
14 TeV | $1.06$ | $1.52$ | $106.8$
27 TeV | $4.2$ | $6.97$ | $1135.2$
100 TeV | $38.4$ | $83.08$ | $54070$

Table 13: Coefficients $\sigma^{0}_{b_{2}}$, $\sigma^{1}_{b_{2}}$, and $\sigma^{2}_{b_{2}}$ in the expression (12) for $pp\to hhjj$ at three different collider energies.
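Because the cross section is an exact quadratic in the coupling, three Monte-Carlo evaluations suffice to determine all coefficients. A minimal sketch using finite differences (the choice of sampling points $g=0,\pm 1$ is illustrative, not the paper's actual procedure):

```python
def quadratic_coeffs(sig0, sig_plus, sig_minus, h):
    """Recover s0, s1, s2 in sigma(g) = s0 + s1*g + s2*g^2 from
    cross sections evaluated at g = 0, +h, -h."""
    s0 = sig0
    s1 = (sig_plus - sig_minus) / (2.0 * h)
    s2 = (sig_plus + sig_minus - 2.0 * sig0) / (2.0 * h * h)
    return s0, s1, s2

# Consistency check against the published 14 TeV coefficients (Table 12):
s0, s1, s2 = 17.71, -29.33, 12.68
sigma = lambda g: s0 + s1 * g + s2 * g * g
coeffs = quadratic_coeffs(sigma(0.0), sigma(1.0), sigma(-1.0), 1.0)

# At the SM point g_Va2 = 1 the large coefficients cancel down to the
# SM cross section 1.06 fb, quoted as sigma0_b2 in Table 13.
sm_xsec = sigma(1.0)
```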
The numerical results given in Table 12 and Table 13 have been obtained with Madgraph5 and Whizard in independent calculations, with very good numerical agreement. It should be pointed out that the results in Table 12 clearly reflect the strong gauge cancellation which occurs between the individual terms of (11) in the SM limit: some of the coefficients $\sigma^{0}_{a_{2}}$, $\sigma^{1}_{a_{2}}$, and $\sigma^{2}_{a_{2}}$ are one order of magnitude larger than the SM cross section itself, given as $\sigma^{0}_{b_{2}}$ in Table 13. Using this parameterization and applying the results of our study, we derive the parameter ranges tabulated in Table 14. Outside the given limits, the deviation from the SM would be detectable as a $5\sigma$ discovery.

| | 14 TeV (3 ab$^{-1}$) | 27 TeV (3 ab$^{-1}$) | 100 TeV (30 ab$^{-1}$) |
|---|---|---|---|
| $\delta g_{V,a2}$ | $(-0.31,0.39)$ | $(-0.11,0.13)$ | $(-0.013,0.047)$ |
| $g_{V,b2}$ | $(-0.10,0.11)$ | $(-0.03,0.02)$ | $(-0.003,0.003)$ |

Table 14: $5\sigma$ discovery (excess) bounds on $g_{V,a2}$ and $g_{V,b2}$ at a 14 TeV, 27 TeV, and 100 TeV hadron collider, assuming the respective total integrated luminosity given in brackets.

We show the projected bounds on $g_{V,a2}$ and $g_{V,b2}$ in Fig. 8. We have compared our results for $g_{V,a2}$ with those given in Ref. Bishara:2016kjn and found good agreement. Regarding the unitarity bounds shown in the plots, the bounds on $g_{V,b2}$ correspond to Eq. (10), while for $g_{V,a2}$ we make use of the result Kilian:2018bhs

$\displaystyle\frac{\Lambda^{4}_{UV}}{2^{9}\pi^{2}v^{4}}|g_{V,a2}-g_{V,a1}^{2}|^{2}\leq\frac{1}{4}\,.$ (13)

(a) 14 TeV (b) 27 TeV (c) 100 TeV (d) 14 TeV (e) 27 TeV (f) 100 TeV

Figure 8: Total cross section after VBF cuts for the process $pp\to hhjj$ as a function of the $WWhh$ couplings $g_{a_{2}}$ (upper row) and $g_{b_{2}}$ (lower row), for three different collider energies. The vertical lines are unitarity bounds, derived from Eq. (13) and Eq. (10).
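The SM-limit cancellation can be checked with a few lines of plain Python using only the coefficients quoted in Tables 12 and 13 (a sketch for illustration; no event generation is involved):

```python
# Quadratic parameterization sigma = s0 + s1*g + s2*g^2 of Eq. (11),
# with the Monte-Carlo coefficients of Table 12 (in fb).
table12 = {
    "14 TeV":  (17.71,  -29.33,  12.68),
    "27 TeV":  (88.3,   -152.2,  68.1),
    "100 TeV": (1601.2, -2963.8, 1401.0),
}
# SM cross sections sigma^0_{b2} from Table 13 (in fb).
sm_xsec = {"14 TeV": 1.06, "27 TeV": 4.2, "100 TeV": 38.4}

def sigma_a2(energy, g):
    """Cross section of pp -> hhjj as a function of g_{V,a2}, Eq. (11)."""
    s0, s1, s2 = table12[energy]
    return s0 + s1 * g + s2 * g * g

# At the SM point g_{V,a2} = 1 the large individual terms cancel down
# to the SM rate sigma^0_{b2}, exhibiting the strong gauge cancellation.
for energy in table12:
    print(energy, sigma_a2(energy, 1.0), "vs SM", sm_xsec[energy])
```

Evaluating Eq. (11) at the SM point $g_{V,a2}=1$ reproduces $\sigma^{0}_{b_{2}}$ of Table 13 at all three energies, even though the individual terms are up to two orders of magnitude larger than their sum.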
It is interesting to explore whether it is possible to disentangle the effects of the operator $h^{2}W^{a}_{\mu\nu}W^{a,\mu\nu}$ from those of the operator $h^{2}W_{\mu}^{a}W^{a,\mu}$. As mentioned in our previous work Kilian:2018bhs , the azimuthal angle between the two forward jets can be a useful observable for this purpose. In Fig. 9, we display the azimuthal angle distributions for the 14 TeV, 27 TeV, and 100 TeV cases. We take into account only events in category 2BH and apply no further cuts beyond the VBF cuts. The relative azimuthal angle is defined as $\Delta\phi=|\phi(j_{1})-\phi(j_{2})|$, where $\phi(j_{1})$ denotes the azimuthal angle of the leading forward jet, and $\phi(j_{2})$ that of the second leading forward jet. We note the main features of the distributions given in Fig. 9:

* In the SM, the azimuthal-angle distribution is flat due to the dominance of longitudinally polarized vector bosons in the process $pp\to hhjj$, as shown by the red curve in each plot. Similarly, the distribution is also flat when the term $h^{2}W_{\mu}^{a}W^{a,\mu}$ is turned on, as shown by the green curve in each plot, where both the SM and new-physics contributions have been included. Although new physics represented by the operator $h^{2}W_{\mu}^{a}W^{a,\mu}$ enhances the total cross section, the azimuthal angle distribution remains similar to that of the SM.
* If the operator $h^{2}W_{\mu\nu}^{a}W^{a,\mu\nu}$ is turned on, the coupling of the Higgs pair to transversely polarized vector bosons leads to distributions that differ significantly from that of the SM. As the blue curves indicate, this type of new interaction produces more events with back-to-back outgoing jets.
* At the 100 TeV collider, the difference in the shape of the angular distribution is much less pronounced.
In fact, the chosen benchmark values of the operator coefficients cause only a small disturbance of the SM signal in this plot, which is nevertheless detectable due to the much larger event rate at the high energy and high luminosity of this machine. To distinguish the two kinds of contributions, a more detailed analysis of this and other distributions becomes necessary.

(a) 14 TeV (b) 27 TeV (c) 100 TeV

Figure 9: The relative azimuthal angle distributions of the two forward jets for 14 TeV, 27 TeV, and 100 TeV.

Figure 10: The fit results with the inclusive cross section (green) and the differential cross section (blue) for three different assumptions on the central values, $(\delta g_{V,a2}^{\textrm{true}},g_{V,b2}^{\textrm{true}})=(0,0)$, $(0.07,0)$, $(0,0.01)$, with $\sqrt{s}=100$ TeV and an integrated luminosity of 30 ab$^{-1}$.

To fully utilise the differential information, we perform a $\chi^{2}$-fit on the distribution of $\Delta\phi(j,j)$, using the 2BH events after the BDT cuts with the collision energy $\sqrt{s}=100$ TeV and an integrated luminosity of 30 ab$^{-1}$. In Fig. 10, we show the $2\sigma$ allowed region obtained with differential information (blue) or from the total cross section alone (green). For the leftmost plot, we assume the SM for the true values of the coefficients. It is evident that the information from the differential distribution significantly improves the precision on both $\delta g_{V,a2}$ and $g_{V,b2}$. The middle plot assumes $(\delta g_{V,a2}^{\rm true},g_{V,b2}^{\rm true})=(0.07,0)$ for the true values, i.e., only $g_{V,a2}$ receives a new-physics contribution. The inclusive cross section confines the region allowed by a measurement to a ring, due to the quadratic dependence on both $\delta g_{V,a2}$ and $g_{V,b2}$, while the differential distribution singles out a small area around the point $(\delta g_{V,a2},g_{V,b2})=(0.07,0)$. Analogously, for the right plot we assume the true values $(\delta g_{V,a2}^{\rm true},g_{V,b2}^{\rm true})=(0,0.07)$.
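The ring-shaped inclusive constraint follows from the quadratic coupling dependence noted above; for $g_{V,b2}$ in particular, the quadratic term of Eq. (12) quickly dominates the linear one. A short numerical illustration with the 100 TeV coefficients of Table 13 (the probe value $g_{V,b2}=0.01$ is chosen for illustration only):

```python
# 100 TeV coefficients of the g_{V,b2} parameterization, Eq. (12) / Table 13 (fb)
s0, s1, s2 = 38.4, 83.08, 54070.0

def sigma_b2(g):
    """Cross section of pp -> hhjj as a function of g_{V,b2}, Eq. (12)."""
    return s0 + s1 * g + s2 * g * g

# Linear and quadratic terms are equal at |g| = s1/s2 ~ 0.0015; beyond that
# the quadratic term dominates and the inclusive rate is nearly sign-blind.
g_probe = 0.01  # illustrative value, not taken from the paper
asymmetry = abs(sigma_b2(g_probe) - sigma_b2(-g_probe)) / sigma_b2(g_probe)
print(f"sigma(+g) = {sigma_b2(g_probe):.2f} fb, "
      f"sigma(-g) = {sigma_b2(-g_probe):.2f} fb, asymmetry = {asymmetry:.1%}")
```

The rate asymmetry between $+g_{V,b2}$ and $-g_{V,b2}$ stays at the few-percent level here, which is the origin of the sign ambiguity discussed next.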
Again, the differential distribution selects a small region, but in this case a two-fold sign ambiguity is left for $g_{V,b2}$. This reflects the fact that the effects of $g_{V,b2}$ are dominated by the squared term, so that the sign of $g_{V,b2}$ cannot be determined. Finally, using the relations given in Table 8, we can map this contour onto the plane spanned by $c_{W}$ and $c_{HW}$. The assumption of a linearly realized symmetry and truncation at the dimension-6 order enforces the constraint $\delta g_{V,a2}\sim 6g_{V,b2}$, if both couplings depend only on the single parameter $c_{HW}$. The two-parameter analysis that we describe in this paper would allow us to test the relation between $c_{W}$ and $c_{HW}$; a violation would point to the presence of contributions that do not follow the simple power-counting assumption underlying the dimension-six truncation of the SILH basis. Currently, the experimental constraints on this parameter set are rather weak, at the level of $c_{V}\sim O(10^{3})$ Ellis:2018gqa . In this study, we conclude that we can reach a sensitivity of up to $c_{V}\sim O(10)$ at future colliders.

## 5 Discussions and Conclusions

We have studied double-Higgs production in vector-boson fusion at a proton-proton collider, $pp\to hhjj\to 4b2j$. We compare the 14 TeV LHC with future high-energy and high-luminosity colliders of 27 TeV and 100 TeV. The analysis of this process benefits greatly from identifying highly boosted Higgs bosons. To this end, we use the mass-drop method to analyse the jet substructure, and we optimize the significance by the boosted-decision-tree method. Our results show that the 2BH case, where two highly boosted Higgs bosons are tagged, provides the cleanest experimental environment to discover new physics in the VBF signal. At the LHC, the number of 2BH events is too small to discover this channel if the SM is valid without new contributions.
Conversely, at a 100 TeV collider with a high luminosity of the order of 30 ab$^{-1}$, the number of 2BH events is large enough to discover the SM signal. To study the effects of new physics, we use a phenomenological effective Lagrangian (3). We explore the effect of interactions of the type $h^{2}V_{\mu}^{a}V^{a,\mu}$ and $h^{2}V_{\mu\nu}^{a}V^{a,\mu\nu}$ and extract bounds on the associated coefficients $g_{V,a2}$ and $g_{V,b2}$. Our results demonstrate that new-physics scales up to $4.4$ TeV at the 14 TeV LHC, and up to $27$ TeV at a 100 TeV hadron collider, are within reach of discovery. Fig. 11 collects the projected bounds on $g_{V,a2}$ and $g_{V,b2}$ that we have determined in this work, cf. also Table 14. The bounds that can be obtained at 27 TeV are one order of magnitude stronger than for the 14 TeV LHC. The 100 TeV machine can further constrain these parameters by another order of magnitude. At both the 27 TeV and 100 TeV machines, this offers a significant indirect sensitivity to new physics in this sector.

Figure 11: Summary of the bounds on $g_{V,a2}$ and $g_{V,b2}$ at a 14 TeV, 27 TeV, and 100 TeV hadron collider, as listed in Table 14.

Our study does not account for underlying-event or pile-up effects. These produce additional soft radiation and may render the extraction of a hard-process signal more difficult. However, the mass-drop method selects two-pronged jets in the final state, which reduces the impact of QCD jets from underlying events or pile-up, which are one-pronged. We also use filtering to remove soft radiation after reconstructing the two sub-jets in the final state, which helps to reject jets from extra radiation produced by the parton shower. Recent studies indicate that modern pile-up mitigation techniques Cacciari:2014gra can minimize the pile-up contamination efficiently for the 4b final state Behr:2015oqq . A detailed pile-up analysis can be done but is beyond the scope of the current paper.
For a further improvement of our result, we may consider color-flow properties Maltoni:2002mq as a tool to further discriminate $h\to b\bar{b}$ decays from b jets in the QCD background. Color-connection information can be quantified by observables such as the pull vector Gallicchio:2010sw . In a recent study of double Higgs production at the LHC Kim:2019wns , it was argued that while the color flow is very different between the double-Higgs signal and the QCD background, this information may be diluted after applying kinematic cuts. The authors of Kim:2019wns proposed to use jet-image and deep-neural-network analysis methods to discriminate the signal from the background, rather than constructing the pull vector. We defer these refinements of our study to future work.

In summary, the analysis of highly boosted Higgs-boson pairs in vector-boson fusion is promising and can be used to significantly improve our knowledge of Higgs-sector interactions. In particular, the method should become important for a future high-energy proton collider, where the sensitivity is sufficient to extract a signal down to the SM rate.

###### Acknowledgements.

W.K. is supported by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under grant 396021762 – TRR 257. S. Sun is supported by MIUR in Italy under Contract (No. PRIN 2015P5SBHT) and ERC Ideas Advanced Grant (No. 267985) “DaMeSyFla”; Q.S. Yan is supported by the Natural Science Foundation of China under grants No. 11475180 and No. 11875260. X.R. Zhao is supported by the “Fonds Spécial de Recherche” (FSR) of the UCLouvain.

## References

* (1) ATLAS collaboration, _Observation of a new particle in the search for the Standard Model Higgs boson with the ATLAS detector at the LHC_ , _Phys. Lett. B_ 716 (2012) 1 [1207.7214]. * (2) CMS collaboration, _Observation of a new boson at a mass of 125 GeV with the CMS experiment at the LHC_ , _Phys. Lett. B_ 716 (2012) 30 [1207.7235]. * (3) D. Jones and S.
Petcov, _Heavy higgs bosons at lep_ , _Phys.Lett.B_ 84 (1979) 440. * (4) W. Kilian, S. Sun, Q.-S. Yan, X. Zhao and Z. Zhao, _Multi-Higgs Production and Unitarity in Vector-Boson Fusion at Future Hadron Colliders_ , 1808.05534. * (5) Y.-H. Qi, J.-H. Yu and S.-H. Zhu, _Effective Field Theory Perspective on Next-to-Minimal Composite Higgs_ , 1912.13058. * (6) P. Agrawal, D. Saha, L.-X. Xu, J.-H. Yu and C. Yuan, _Shape of Higgs Potential at Future Colliders_ , 1907.02078. * (7) L.-X. Xu, J.-H. Yu and S.-H. Zhu, _Holographic Completion of Minimal Neutral Naturalness Model and Deconstruction_ , 1905.12796. * (8) H.-L. Li, L.-X. Xu, J.-H. Yu and S.-H. Zhu, _EFTs meet Higgs Nonlinearity, Compositeness and (Neutral) Naturalness_ , _JHEP_ 09 (2019) 010 [1904.05359]. * (9) ATLAS collaboration, _Measurement of the Higgs boson coupling properties in the $H\rightarrow ZZ^{*}\rightarrow 4\ell$ decay channel at $\sqrt{s}$ = 13 TeV with the ATLAS detector_, _JHEP_ 03 (2018) 095 [1712.02304]. * (10) CMS collaboration, _Measurements of properties of the Higgs boson decaying into the four-lepton final state in pp collisions at $\sqrt{s}=13$ TeV_, _JHEP_ 11 (2017) 047 [1706.09936]. * (11) ATLAS collaboration, _Measurements of gluon-gluon fusion and vector-boson fusion Higgs boson production cross-sections in the $H\to WW^{\ast}\to e\nu\mu\nu$ decay channel in $pp$ collisions at $\sqrt{s}=13$ TeV with the ATLAS detector_, _Phys. Lett._ B789 (2019) 508 [1808.09054]. * (12) ATLAS collaboration, _Combined measurements of Higgs boson production and decay using up to $80$ fb-1 of proton-proton collision data at $\sqrt{s}=$ 13 TeV collected with the ATLAS experiment_, _Phys. Rev. D_ 101 (2020) 012002 [1909.02845]. * (13) ATLAS collaboration, _Search for the $HH\rightarrow b\bar{b}b\bar{b}$ process via vector-boson fusion production using proton-proton collisions at $\sqrt{s}=13$ TeV with the ATLAS detector_, 2001.05178. 
* (14) ATLAS collaboration, _Search for pair production of Higgs bosons in the $b\bar{b}b\bar{b}$ final state using proton-proton collisions at $\sqrt{s}=13$ TeV with the ATLAS detector_, _JHEP_ 01 (2019) 030 [1804.06174]. * (15) CMS collaboration, _Search for nonresonant Higgs boson pair production in the $\mathrm{b\overline{b}b\overline{b}}$ final state at $\sqrt{s}=$ 13 TeV_, _JHEP_ 04 (2019) 112 [1810.11854]. * (16) ATLAS collaboration, _Search for Higgs boson pair production in the $\gamma\gamma b\bar{b}$ final state with 13 TeV $pp$ collision data collected by the ATLAS experiment_, _JHEP_ 11 (2018) 040 [1807.04873]. * (17) CMS collaboration, _Search for Higgs boson pair production in the $\gamma\gamma\mathrm{b\overline{b}}$ final state in pp collisions at $\sqrt{s}=$ 13 TeV_, _Phys. Lett. B_ 788 (2019) 7 [1806.00408]. * (18) ATLAS collaboration, _Search for resonant and non-resonant Higgs boson pair production in the ${b\bar{b}\tau^{+}\tau^{-}}$ decay channel in $pp$ collisions at $\sqrt{s}=13$ TeV with the ATLAS detector_, _Phys. Rev. Lett._ 121 (2018) 191801 [1808.00336]. * (19) CMS collaboration, _Search for Higgs boson pair production in events with two bottom quarks and two tau leptons in proton–proton collisions at $\sqrt{s}$ =13TeV_, _Phys. Lett. B_ 778 (2018) 101 [1707.02909]. * (20) ATLAS collaboration, _Search for Higgs boson pair production in the $b\bar{b}WW^{*}$ decay mode at $\sqrt{s}=13$ TeV with the ATLAS detector_, _JHEP_ 04 (2019) 092 [1811.04671]. * (21) CMS collaboration, _Search for resonant and nonresonant Higgs boson pair production in the $\mathrm{b}\overline{\mathrm{b}}\mathit{\ell\nu\ell\nu}$ final state in proton-proton collisions at $\sqrt{s}=13$ TeV_, _JHEP_ 01 (2018) 054 [1708.04188]. * (22) ATLAS collaboration, _Search for Higgs boson pair production in the $\gamma\gamma WW^{*}$ channel using $pp$ collision data recorded at $\sqrt{s}=13$ TeV with the ATLAS detector_, _Eur. Phys. J. C_ 78 (2018) 1007 [1807.08567]. 
* (23) ATLAS collaboration, _Search for Higgs boson pair production in the $WW^{(*)}WW^{(*)}$ decay channel using ATLAS data recorded at $\sqrt{s}=13$ TeV_, _JHEP_ 05 (2019) 124 [1811.11028]. * (24) CMS collaboration, _Combination of searches for Higgs boson pair production in proton-proton collisions at $\sqrt{s}=$ 13 TeV_, _Phys. Rev. Lett._ 122 (2019) 121803 [1811.09689]. * (25) ATLAS collaboration, _Combination of searches for Higgs boson pairs in $pp$ collisions at $\sqrt{s}=$13 TeV with the ATLAS detector_, _Phys. Lett. B_ 800 (2020) 135103 [1906.02025]. * (26) M. J. Dolan, C. Englert, N. Greiner and M. Spannowsky, _Further on up the road: $hhjj$ production at the LHC_, _Phys. Rev. Lett._ 112 (2014) 101802 [1310.1084]. * (27) L.-S. Ling, R.-Y. Zhang, W.-G. Ma, L. Guo, W.-H. Li and X.-Z. Li, _NNLO QCD corrections to Higgs pair production via vector boson fusion at hadron colliders_ , _Phys. Rev._ D89 (2014) 073001 [1401.7754]. * (28) M. J. Dolan, C. Englert, N. Greiner, K. Nordstrom and M. Spannowsky, _$hhjj$ production at the LHC_, _Eur. Phys. J._ C75 (2015) 387 [1506.08008]. * (29) F. Bishara, R. Contino and J. Rojo, _Higgs pair production in vector-boson fusion at the LHC and beyond_ , _Eur. Phys. J._ C77 (2017) 481 [1611.03860]. * (30) E. Arganda, C. Garcia-Garcia and M. J. Herrero, _Probing the Higgs self-coupling through double Higgs production in vector boson scattering at the LHC_ , _Nucl. Phys._ B945 (2019) 114687 [1807.09736]. * (31) J. Baglio, A. Djouadi, R. Gröber, M. M. Mühlleitner, J. Quevillon and M. Spira, _The measurement of the Higgs self-coupling at the LHC: theoretical status_ , _JHEP_ 04 (2013) 151 [1212.5581]. * (32) R. Frederix, S. Frixione, V. Hirschi, F. Maltoni, O. Mattelaer, P. Torrielli et al., _Higgs pair production at the LHC with NLO and parton-shower effects_ , _Phys. Lett._ B732 (2014) 142 [1401.7340]. * (33) F. A. Dreyer and A. Karlberg, _Vector-Boson Fusion Higgs Pair Production at N 3LO_, _Phys. Rev. 
D_ 98 (2018) 114016 [1811.07906]. * (34) F. A. Dreyer and A. Karlberg, _Fully differential Vector-Boson Fusion Higgs Pair Production at Next-to-Next-to-Leading Order_ , _Phys. Rev. D_ 99 (2019) 074028 [1811.07918]. * (35) F. A. Dreyer, A. Karlberg and L. Tancredi, _On the impact of non-factorisable corrections in VBF single and double Higgs production_ , _JHEP_ 10 (2020) 131 [2005.11334]. * (36) F. A. Dreyer, A. Karlberg, J.-N. Lang and M. Pellen, _Precise predictions for double-Higgs production via vector-boson fusion_ , _Eur. Phys. J. C_ 80 (2020) 1037 [2005.13341]. * (37) C. Englert, Q. Li, M. Spannowsky, M. Wang and L. Wang, _VBS ${\rm W}^{\pm}{\rm W}^{\pm}{\rm H}$ production at the HL-LHC and a 100 TeV $pp$-collider_, _Int. J. Mod. Phys._ A32 (2017) 1750106 [1702.01930]. * (38) K. Nordström and A. Papaefstathiou, _$VHH$ production at the High-Luminosity LHC_, _Eur. Phys. J. Plus_ 134 (2019) 288 [1807.01571]. * (39) T. Plehn, M. Spira and P. M. Zerwas, _Pair production of neutral Higgs particles in gluon-gluon collisions_ , _Nucl. Phys._ B479 (1996) 46 [hep-ph/9603205]. * (40) U. Baur, T. Plehn and D. L. Rainwater, _Measuring the Higgs Boson Self Coupling at the LHC and Finite Top Mass Matrix Elements_ , _Phys. Rev. Lett._ 89 (2002) 151801 [hep-ph/0206024]. * (41) Q. Li, Q.-S. Yan and X. Zhao, _Higgs Pair Production: Improved Description by Matrix Element Matching_ , _Phys. Rev._ D89 (2014) 033015 [1312.3830]. * (42) Q.-H. Cao, B. Yan, D.-M. Zhang and H. Zhang, _Resolving the Degeneracy in Single Higgs Production with Higgs Pair Production_ , _Phys. Lett._ B752 (2016) 285 [1508.06512]. * (43) Q.-H. Cao, G. Li, B. Yan, D.-M. Zhang and H. Zhang, _Double Higgs production at the 14 TeV LHC and a 100 TeV $pp$ collider_, _Phys. Rev._ D96 (2017) 095031 [1611.09336]. * (44) U. Baur, T. Plehn and D. L. Rainwater, _Determining the Higgs Boson Selfcoupling at Hadron Colliders_ , _Phys. Rev._ D67 (2003) 033003 [hep-ph/0211224]. * (45) J. Ren, R.-Q. Xiao, M. Zhou, Y. 
Fang, H.-J. He and W. Yao, _LHC Search of New Higgs Boson via Resonant Di-Higgs Production with Decays into 4W_ , _JHEP_ 06 (2018) 090 [1706.05980]. * (46) U. Baur, T. Plehn and D. L. Rainwater, _Probing the Higgs selfcoupling at hadron colliders using rare decays_ , _Phys. Rev._ D69 (2004) 053004 [hep-ph/0310056]. * (47) W. Yao, _Studies of measuring Higgs self-coupling with $HH\rightarrow b\bar{b}\gamma\gamma$ at the future hadron colliders_, in _Proceedings, 2013 Community Summer Study on the Future of U.S. Particle Physics: Snowmass on the Mississippi (CSS2013): Minneapolis, MN, USA, July 29-August 6, 2013_ , 2013, 1308.6302, http://www.slac.stanford.edu/econf/C1307292/docs/submittedArxivFiles/1308.6302.pdf. * (48) F. Kling, T. Plehn and P. Schichtel, _Maximizing the significance in Higgs boson pair analyses_ , _Phys. Rev._ D95 (2017) 035026 [1607.07441]. * (49) J. Chang, K. Cheung, J. S. Lee, C.-T. Lu and J. Park, _Higgs-boson-pair production $H(\to b\bar{b})H(\to\gamma\gamma)$ from gluon fusion at the HL-LHC and HL-100 TeV hadron collider_, _Phys. Rev._ D100 (2019) 096001 [1804.07130]. * (50) J. H. Kim, Y. Sakaki and M. Son, _Combined analysis of double Higgs production via gluon fusion at the HL-LHC in the effective field theory approach_ , _Phys. Rev._ D98 (2018) 015016 [1801.06093]. * (51) H.-J. He, J. Ren and W. Yao, _Probing new physics of cubic Higgs boson interaction via Higgs pair production at hadron colliders_ , _Phys. Rev._ D93 (2016) 015003 [1506.03302]. * (52) A. Papaefstathiou, L. L. Yang and J. Zurita, _Higgs boson pair production at the LHC in the $b\bar{b}W^{+}W^{-}$ channel_, _Phys. Rev._ D87 (2013) 011301 [1209.1489]. * (53) U. Baur, T. Plehn and D. L. Rainwater, _Examining the Higgs boson potential at lepton and hadron colliders: A Comparative analysis_ , _Phys. Rev._ D68 (2003) 033001 [hep-ph/0304015]. * (54) M. J. Dolan, C. Englert and M. Spannowsky, _Higgs self-coupling measurements at the LHC_ , _JHEP_ 10 (2012) 112 [1206.5001]. 
* (55) A. J. Barr, M. J. Dolan, C. Englert and M. Spannowsky, _Di-Higgs final states augMT2ed – selecting $hh$ events at the high luminosity LHC_, _Phys. Lett._ B728 (2014) 308 [1309.6318]. * (56) D. E. Ferreira de Lima, A. Papaefstathiou and M. Spannowsky, _Standard model Higgs boson pair production in the $(b\bar{b})(b\bar{b})$ final state_, _JHEP_ 08 (2014) 030 [1404.7139]. * (57) J. K. Behr, D. Bortoletto, J. A. Frost, N. P. Hartland, C. Issever and J. Rojo, _Boosting Higgs pair production in the $b\bar{b}b\bar{b}$ final state with multivariate techniques_, _Eur. Phys. J._ C76 (2016) 386 [1512.08928]. * (58) V. Barger, L. L. Everett, C. B. Jackson and G. Shaughnessy, _Higgs-Pair Production and Measurement of the Triscalar Coupling at LHC(8,14)_ , _Phys. Lett._ B728 (2014) 433 [1311.2931]. * (59) A. J. Barr, M. J. Dolan, C. Englert, D. E. Ferreira de Lima and M. Spannowsky, _Higgs Self-Coupling Measurements at a 100 TeV Hadron Collider_ , _JHEP_ 02 (2015) 016 [1412.7154]. * (60) A. Papaefstathiou, _Discovering Higgs boson pair production through rare final states at a 100 TeV collider_ , _Phys. Rev._ D91 (2015) 113016 [1504.04621]. * (61) Q. Li, Z. Li, Q.-S. Yan and X. Zhao, _Probe Higgs boson pair production via the 3 $\ell$2j+$E\\!\\!\\!/$ mode_, _Phys. Rev._ D92 (2015) 014015 [1503.07611]. * (62) X. Zhao, Q. Li, Z. Li and Q.-S. Yan, _Discovery potential of Higgs boson pair production through 4 $\ell$+$E\\!\\!\\!/$ final states at a 100 TeV collider_, _Chin. Phys._ C41 (2017) 023105 [1604.04329]. * (63) R. Contino et al., _Physics at a 100 TeV pp collider: Higgs and EW symmetry breaking studies_ , _CERN Yellow Rep._ (2017) 255 [1606.09408]. * (64) D. Gonçalves, T. Han, F. Kling, T. Plehn and M. Takeuchi, _Higgs boson pair production at future hadron colliders: From kinematics to dynamics_ , _Phys. Rev._ D97 (2018) 113004 [1802.04319]. * (65) W. Kilian, T. Ohl and J. Reuter, _WHIZARD: Simulating Multi-Particle Processes at LHC and ILC_ , _Eur. Phys. 
J._ C71 (2011) 1742 [0708.4233]. * (66) J. Pumplin, D. R. Stump, J. Huston, H. L. Lai, P. M. Nadolsky and W. K. Tung, _New generation of parton distributions with uncertainties from global QCD analysis_ , _JHEP_ 07 (2002) 012 [hep-ph/0201195]. * (67) J. Alwall, R. Frederix, S. Frixione, V. Hirschi, F. Maltoni, O. Mattelaer et al., _The automated computation of tree-level and next-to-leading order differential cross sections, and their matching to parton shower simulations_ , _JHEP_ 07 (2014) 079 [1405.0301]. * (68) M. L. Mangano, M. Moretti, F. Piccinini, R. Pittau and A. D. Polosa, _ALPGEN, a generator for hard multiparton processes in hadronic collisions_ , _JHEP_ 07 (2003) 001 [hep-ph/0206293]. * (69) T. Sjostrand, S. Mrenna and P. Z. Skands, _A Brief Introduction to PYTHIA 8.1_ , _Comput. Phys. Commun._ 178 (2008) 852 [0710.3820]. * (70) M. Cacciari, G. P. Salam and G. Soyez, _FastJet User Manual_ , _Eur. Phys. J._ C72 (2012) 1896 [1111.6097]. * (71) M. Cacciari, G. P. Salam and G. Soyez, _The anti- $k_{t}$ jet clustering algorithm_, _JHEP_ 04 (2008) 063 [0802.1189]. * (72) J. M. Butterworth, A. R. Davison, M. Rubin and G. P. Salam, _Jet substructure as a new Higgs search channel at the LHC_ , _Phys. Rev. Lett._ 100 (2008) 242001 [0802.2470]. * (73) J. Thaler and K. Van Tilburg, _Identifying Boosted Objects with N-subjettiness_ , _JHEP_ 03 (2011) 015 [1011.2268]. * (74) S. Marzani, G. Soyez and M. Spannowsky, _Looking inside jets: an introduction to jet substructure and boosted-object phenomenology_ , vol. 958. Springer, 2019, 10.1007/978-3-030-15709-8, [1901.10342]. * (75) Y. L. Dokshitzer, G. D. Leder, S. Moretti and B. R. Webber, _Better jet clustering algorithms_ , _JHEP_ 08 (1997) 001 [hep-ph/9707323]. * (76) M. Wobisch and T. Wengler, _Hadronization corrections to jet cross-sections in deep inelastic scattering_ , in _Monte Carlo generators for HERA physics. Proceedings, Workshop, Hamburg, Germany, 1998-1999_ , pp. 270–279, 1998, hep-ph/9907280. * (77) G. 
Giudice, C. Grojean, A. Pomarol and R. Rattazzi, _The Strongly-Interacting Light Higgs_ , _JHEP_ 06 (2007) 045 [hep-ph/0703164]. * (78) W. Kilian, S. Sun, Q.-S. Yan, X. Zhao and Z. Zhao, _New Physics in multi-Higgs boson final states_ , _JHEP_ 06 (2017) 145 [1702.03554]. * (79) J. Ellis, C. W. Murphy, V. Sanz and T. You, _Updated Global SMEFT Fit to Higgs, Diboson and Electroweak Data_ , _JHEP_ 06 (2018) 146 [1803.03252]. * (80) M. Cacciari, G. P. Salam and G. Soyez, _SoftKiller, a particle-level pileup removal method_ , _Eur. Phys. J. C_ 75 (2015) 59 [1407.0408]. * (81) F. Maltoni, K. Paul, T. Stelzer and S. Willenbrock, _Color Flow Decomposition of QCD Amplitudes_ , _Phys. Rev. D_ 67 (2003) 014026 [hep-ph/0209271]. * (82) J. Gallicchio and M. D. Schwartz, _Seeing in Color: Jet Superstructure_ , _Phys. Rev. Lett._ 105 (2010) 022001 [1001.5027]. * (83) J. H. Kim, M. Kim, K. Kong, K. T. Matchev and M. Park, _Portraying Double Higgs at the Large Hadron Collider_ , _JHEP_ 09 (2019) 047 [1904.08549].
# Operator lifetime and the force-free electrodynamic limit of magnetised holographic plasma

Napat Poovuttikul<EMAIL_ADDRESS>Department of Mathematical Sciences, Durham University, South Road, Durham DH1 3LE, UK University of Iceland, Science Institute, Dunhaga 3, IS-107, Reykjavik, Iceland Aruna Rajagopal<EMAIL_ADDRESS>University of Iceland, Science Institute, Dunhaga 3, IS-107, Reykjavik, Iceland

###### Abstract

Using the framework of higher-form global symmetries, we examine the regime of validity of force-free electrodynamics by evaluating the lifetime of the electric field operator, which is non-conserved due to screening effects. We focus on a holographic model which has the same global symmetry as that of low energy plasma and obtain the lifetime of (non-conserved) electric flux in a strong magnetic field regime. The lifetime is inversely correlated to the magnetic field strength and thus suppressed in the strong field regime.

###### Contents

1. I Introduction
2. II The Holographic Model
   1. II.1 Linearised solutions in $\omega/T\ll 1$ limit and matching procedure
      1. II.1.1 Perturbation parallel to equilibrium magnetic field
      2. II.1.2 Perturbation perpendicular to equilibrium magnetic field
   2. II.2 Checking $T\gtrsim 0$ limit in $\omega/\sqrt{|{\bf B}|}\ll 1$ regime
      1. II.2.1 Zero temperature
      2. II.2.2 $T\lesssim\omega\ll\sqrt{|{\bf B}|}$ limit
3. III Conclusion
4. A Numerical solution and evaluation of operators lifetime
5. B Frobenius analysis in $AdS_{3}\times\mathbb{R}^{2}$ region

## I Introduction

Hydrodynamics LLfluid is a well-established theoretical framework which universally describes the long wavelength, low frequency behaviour of interacting systems at finite temperature. Essentially, hydrodynamic theory is a description of conserved quantities and the manifestation of the corresponding symmetries in a system in thermal equilibrium. Theories with widely varying microscopics can have the same macroscopic hydrodynamic description.
One possible explanation for why such a universal description exists is that all operators except conserved charges have parametrically short lifetimes compared to the scale of interest and, once the longest-lived non-conserved operator111While this operator language is more familiar in the context of quantum systems, it is also applicable to classical systems via e.g. the memory matrix formalism zwanzig1995 ; forster1995 . A more modern introduction may be found in Hartnoll:2012rj . has decayed away, the hydrodynamic description becomes viable (see Fig. 1).

Figure 1: A cartoon illustration of the lifetimes of operators of a theory that exhibits hydrodynamic behaviour at late times. Here, there is a parametrically large gap between the conserved charges $\rho$ and the rest. The lifetime $\tau_{1}$ of the longest-lived operator, denoted by $\mathcal{O}_{1}$, sets the time scale at which hydrodynamics becomes applicable.

The hydrodynamic framework may be generalised to systems where the conserved charges are those of a higher-form symmetry Gaiotto:2014kfa , which counts the number density of extended objects. A recent exploration of this idea Grozdanov_2017 (see also Schubring:2014iwa ; Hernandez:2017mch ; Armas:2018atq ; Armas:2018zbe ) shows that the resulting hydrodynamics of a one-form $U(1)$ charge reproduces the theory of magnetohydrodynamics (MHD)222For a formulation of MHD that closely resembles the higher-form symmetry formulation, see e.g. dixon1982special ; anile2005relativistic ; komissarov1999 .. This should not come as a big surprise. MHD is, after all, a low-energy effective theory of plasma in which the (dynamical) electric field is screened – the one-form $U(1)$ symmetry associated with the electric flux is explicitly broken.
This implies that, for example, in a plasma at zero magnetic field (where Ohm's law ${\bf j}=\sigma{\bf E}$ is a good approximation) the electric field has a finite lifetime,

$\delta{\bf E}\propto\exp(-t/\tau_{E})\qquad\Longleftrightarrow\qquad\langle E^{i}(-\omega)E^{j}(\omega)\rangle\sim\frac{\delta^{ij}}{\omega+i/\tau_{E}}\,,$ (1)

with the lifetime of the electric field $\tau_{E}=1/\sigma$. The conductivity $\sigma$ can be computed from first principles. For instance, in quantum electrodynamics, it can be written as Arnold_2000 333The fact that this quantity has only been computed at the beginning of this century indicates the difficulty of the required computations.

$\sigma\propto\frac{T}{e^{2}\log e^{-1}}\,,$ (2)

where $e$ is the electromagnetic coupling. The lifetime of the electric field, $\tau_{E}\sim 1/T$, is then much shorter than the time scale $t\gg 1/T$ (or $\omega/T\ll 1$ in Fourier space) where hydrodynamic behaviour is expected. If, in this late-time limit, all other operators except the energy density $T^{tt}$, the momentum density $T^{ti}$, and the magnetic flux density $J^{ti}$ have already decayed away, one can expect the hydrodynamic description of a plasma to be governed by

$\partial_{\mu}T^{\mu\nu}=0\,,\qquad\partial_{\mu}J^{\mu\nu}=0\,.$ (3)

The conserved currents $T^{\mu\nu}$ and $J^{\mu\nu}$ are expressed in terms of the energy, the momentum, the magnetic flux $J^{ti}\equiv B^{i}$, and their conjugates, organised order by order in the gradient expansion. This formulation of MHD only requires macroscopic consistency and does not require the introduction of the gauge field $\star J=F=dA$, which, due to the screening effect, is not a long-lived degree of freedom. This brings us to the central question of the present paper: Is a hydrodynamic description of the form (3) applicable in the limit of low temperature compared to the magnetic flux density, $T^{2}/|{\bf B}|\ll 1$?
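The equivalence of the two statements in Eq. (1) is just the one-sided Fourier transform of an exponential decay, $\int_0^\infty dt\, e^{i\omega t}e^{-t/\tau_{E}}=1/(1/\tau_{E}-i\omega)=i/(\omega+i/\tau_{E})$, whose single pole sits at $\omega=-i/\tau_{E}$. A minimal numerical check (plain Python; the values of $\tau_{E}$ and $\omega$ are arbitrary illustrative choices):

```python
import cmath

# Lifetime and probe frequency; arbitrary illustrative values.
tau_E, omega = 0.7, 1.3

# One-sided Fourier transform int_0^inf dt e^{i omega t} e^{-t/tau_E},
# evaluated with a composite trapezoidal rule; the tail beyond 40 tau_E
# is exponentially negligible.
a = 1j * omega - 1.0 / tau_E
N = 200000
dt = 40.0 * tau_E / N
numeric = sum(cmath.exp(a * n * dt) * (dt if 0 < n < N else 0.5 * dt)
              for n in range(N + 1))

# Analytic result: a Lorentzian 1/(1/tau_E - i omega) = i/(omega + i/tau_E),
# i.e. a single pole at omega = -i/tau_E as in Eq. (1).
analytic = 1.0 / (1.0 / tau_E - 1j * omega)
print(numeric, analytic)
```

The numerical transform reproduces the Lorentzian to high accuracy, making concrete how the decay rate $1/\tau_{E}$ appears as the imaginary part of the pole location.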
This question is important if one wants to apply the MHD description to astrophysical plasmas where the magnetic field is many orders of magnitude larger than the scale set by the temperature. If one were to naively extrapolate (1)-(2), the lifetime of the electric field appears to become arbitrarily long as the temperature decreases. However, there exists a macroscopic description of plasma in this regime that has been applied successfully. This theory is called force-free electrodynamics, or FFE, and has been used extensively in astrophysical setups such as the magnetospheres of black holes Blandford:1977ds ; Komissarov_2004 , neutron stars 1969ApJ…157..869G and the solar corona Wiegelmann_2012 , to name a few. In its conventional form, this theory is applied to a system which is magnetically dominated (i.e. $|{\bf B}|^{2}>|{\bf E}|^{2}$ or, covariantly, $F_{\mu\nu}F^{\mu\nu}>0$) and whose dynamics is governed by $\displaystyle\epsilon^{\mu\nu\rho\sigma}F_{\mu\nu}F_{\rho\sigma}$ $\displaystyle=0\,,$ (4a) $\displaystyle F^{\mu\nu}\nabla_{\lambda}F^{\lambda}_{\;\;\nu}$ $\displaystyle=0\,.$ (4b) Here, the first relation implies that ${\bf E}\cdot{\bf B}=0$ while the second relation implies that the force $j_{el}^{\mu}F_{\mu\nu}$, with $j^{\mu}_{el}:=\nabla_{\nu}F^{\nu\mu}$ via Maxwell’s equations, acting on the plasma vanishes (hence the name force-free electrodynamics). More details on the geometric and effective action viewpoints of FFE can be found in e.g. Gralla_2014 ; Compere:2016xwa and Uchida:1997 ; Thompson:1998ss ; Gralla_2019 ; glorioso2018effective . One should emphasise that the system of equations in Eq. (4) is independent of the microscopic details of the cold plasma, and in this respect strongly resembles a hydrodynamic description.
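The statement that (4a) is equivalent to ${\bf E}\cdot{\bf B}=0$ is a one-line identity that can be verified symbolically. A minimal check, assuming the common convention $F_{0i}=E_{i}$, $F_{ij}=-\epsilon_{ijk}B_{k}$ with $\epsilon^{0123}=+1$ (the overall factor and sign are convention dependent):

```python
import itertools
import sympy as sp

E = sp.symbols('E1:4')
B = sp.symbols('B1:4')

# Field strength with F_{0i} = E_i and F_{ij} = -eps_{ijk} B_k
# (one common convention; overall signs depend on the convention chosen).
F = sp.zeros(4, 4)
for i in range(3):
    F[0, i + 1] = E[i]
    F[i + 1, 0] = -E[i]
    for j in range(3):
        F[i + 1, j + 1] = -sum(sp.LeviCivita(i + 1, j + 1, k + 1) * B[k]
                               for k in range(3))

# eps^{mu nu rho sigma} F_{mu nu} F_{rho sigma} with eps^{0123} = +1
scalar = sum(sp.LeviCivita(*p) * F[p[0], p[1]] * F[p[2], p[3]]
             for p in itertools.permutations(range(4)))

EdotB = sum(E[i] * B[i] for i in range(3))
```

Under this convention the invariant evaluates to $-8\,{\bf E}\cdot{\bf B}$, so setting it to zero is indeed the statement that the electric field has no component along the magnetic field.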
In fact, it turns out that (4) can arise in the special limit $T\ll\sqrt{|{\bf B}|}$ of a hydrodynamic description with one-form $U(1)$ symmetry as in (3), see Grozdanov_2017 ; Gralla_2019 ; glorioso2018effective 444Recasting force-free electrodynamics in the hydrodynamic language also allows for systematic gradient expansions Gralla_2019 ; Benenowski:2019ule . This could serve to classify corrections to FFE in order to account for phenomena such as pulsar radio emission where ${\bf E}\cdot{\bf B}\neq 0$. . The existence of FFE is usually justified by saying that the cold plasma is, on one hand, dense enough to screen the electric field (4a) but, on the other hand, dilute enough that the force-free condition (4b) is applicable. This statement can be made more precise in light of the relations between the equations of FFE and hydrodynamics. Thus, we propose a criterion for testing the validity of FFE using the lifetime of non-conserved operators – FFE or, equivalently, the hydrodynamic description of cold plasma in the $T\ll\sqrt{|{\bf B}|}$ limit, is valid when the lifetime of all non-conserved operators is parametrically shorter than the time scale of interest. A key advantage of this approach is that the operator lifetime can, in principle, be computed explicitly from the microscopic description, which allows one to find the ‘cutoff’ scale where the FFE description should break down. Computing the operator lifetime from a microscopic description is, however, not always an easy task. In fact, we are not aware of a genuine computation directly from quantum electrodynamics (in the sense of Arnold_2000 ) when both $T$ and ${\bf B}$ are turned on. To simplify the computations, we shall demonstrate the validity of FFE in a strongly interacting magnetised plasma with a holographic dual, as proposed in Grozdanov_2019b ; Hofman_2018 , where the one-form $U(1)$ global symmetry is taken into account via a two-form gauge field in the gravity dual.
This provides two key advantages. First, the computation of correlation functions boils down to solving simple linearised differential equations (see e.g. Son:2002sd ). Second, there is strong evidence that charge-neutral operators, apart from energy and momentum, have a parametrically short lifetime in this class of theories 555To be more precise, it has been shown in $\mathcal{N}=4$ supersymmetric Yang-Mills theory, which constitutes the matter sector of the holographic model Grozdanov_2019b ; Hofman_2018 , that there is no long-lived mode besides hydrodynamic modes at any $T\neq 0$ and $|{\bf B}|=0$ Kovtun:2003wp . A similar conclusion was reached for the same theory in the charge-neutral sector at finite non-dynamical magnetic field Fuini:2015hba ; Janiszewski:2015ura . . Therefore, we shall focus on non-conserved operators in the electromagnetic sector of the theory: the electric flux operators, whose lifetime can be extracted via the two-point correlation function as in (1). This will provide strong evidence for the validity of the FFE limit in a strongly interacting holographic plasma. On the technical side, the computations presented in this note show that there are no quasinormal modes present in the vicinity of the hydrodynamic regime $\omega/T\ll 1$ (and $\omega/\sqrt{|{\bf B}|}\ll 1$). A pole of the electric flux correlation function in this regime would imply that the operator has a parametrically long lifetime, which could interfere with the hydrodynamic modes. The presence of such a long-lived mode can be determined analytically in the usual hydrodynamic regime of $\omega/T\ll 1$ for a large class of theories. It is usually difficult to go beyond this regime towards the limit $\omega/T\sim 1,\omega/\sqrt{|{\bf B}|}\ll 1$. Such a computation can, however, be done analytically in the simple model of Grozdanov_2019b thanks to the presence of the BTZ$\times\mathbb{R}^{2}$ bulk geometry in the deep IR DHoker:2009mmn .
We should also note that the treatment of (long-lived) non-hydrodynamic modes has been used extensively to determine the breakdown of hydrodynamic descriptions in the context of QFTs with holographic duals, see e.g. Grozdanov_2019 ; Davison:2014lua ; Chen:2017dsy ; Davison:2018nxm . The remainder of this paper is organised as follows. In section II, we summarise the procedure involved in the computation of the two-point correlation function in the holographic model dual to a theory with one-form global symmetry. In section II.1, we outline the method for exploring the existence of decaying modes in the vicinity of the usual hydrodynamic limit $\omega/T\ll 1$ at $T/\sqrt{|{\bf B}|}\ll 1$. Due to the simplicity of the bulk geometry, we are able to further extend the analysis to arbitrary values of $\omega/T$ with $\omega/\sqrt{|{\bf B}|}\ll 1$ and $T/\sqrt{|{\bf B}|}\ll 1$. This is described in section II.2. Further open questions and future directions are discussed in section III. ## II The Holographic Model A simple holographic dual to a strongly interacting field theory of matter charged under dynamical $U(1)$ electromagnetism (that is, the dynamical plasma described by low energy MHD) and formulated in the language of higher-form symmetry was constructed in Grozdanov_2019b ; Hofman_2018 . We present a brief review here for completeness. The five-dimensional bulk theory consists of Einstein gravity coupled to a two-form bulk gauge field, $B_{\mu\nu}$, and a negative cosmological constant, $S=\int d^{5}X\sqrt{-G}\left(R-2\Lambda-\frac{L^{2}}{3}H_{abc}H^{abc}\right)+S_{bnd}-\frac{1}{\kappa(\Lambda)}\int_{r=\Lambda}d^{4}x\sqrt{-\gamma}(n^{a}H_{a\mu\nu})(n_{b}H^{b\mu\nu}),$ (5) where $H=dB$ and $B_{ab}$ is the bulk 2-form gauge field, $\Lambda$ is the UV cutoff, $n^{a}$ is the unit normal to the boundary, and $S_{bnd}$ denotes the Gibbons-Hawking and gravitational counterterms.
Roughly speaking, the two bulk fields $G_{ab}$ and $B_{ab}$ asymptote to $g_{\mu\nu}$ and $b_{\mu\nu}$ respectively, which then source the currents $T^{\mu\nu}$ and $J^{\mu\nu}$: $\langle T_{\mu\nu}\rangle\equiv\frac{2}{\sqrt{-g}}\frac{\delta S}{\delta g_{\mu\nu}}\,,\qquad\langle J_{\mu\nu}\rangle\equiv\frac{1}{\sqrt{-g}}\frac{\delta S}{\delta b_{\mu\nu}}$ (6) The generating functional takes the form $Z[g_{\mu\nu},b_{\mu\nu}]=\left\langle\exp\left[i\int d^{4}x\sqrt{-g}\left(\frac{1}{2}T^{\mu\nu}g_{\mu\nu}+J^{\mu\nu}b_{\mu\nu}\right)\right]\right\rangle$ (7) and diffeomorphism invariance and gauge symmetry lead to the following equations, $\nabla_{\mu}\langle T^{\mu\nu}\rangle=(db)^{\nu}_{\rho\sigma}\langle J^{\rho\sigma}\rangle\,,\qquad\nabla_{\mu}\langle J^{\mu\nu}\rangle=0.$ (8) where $H=db$ is the three-form field strength of the two-form external source. The equilibrium solution of this holographic model is a domain wall interpolating between an asymptotically $AdS_{5}$ geometry in the UV ($r\to\infty$ in our convention) and $BTZ\times\mathbb{R}^{2}$ in the near-horizon IR ($r=r_{h}$). It is described by the following metric and gauge field $\displaystyle ds^{2}$ $\displaystyle=G_{ab}dX^{a}dX^{b}=-r^{2}f(r)dt^{2}+\frac{dr^{2}}{r^{2}f(r)}+e^{2V(r)}(dx^{2}+dy^{2})+e^{2W(r)}dz^{2}\,,$ (9) $\displaystyle B$ $\displaystyle=h(r)dt\wedge dz\qquad\text{with}\qquad\star_{5}H=\mathcal{B}dx\wedge dy$ Modulo the subtleties due to the mixed boundary conditions, this is nothing but the Hodge dual of the magnetised black brane solution of DHoker:2009mmn . The radial coordinate is chosen such that $r\to\infty$ corresponds to the usual asymptotic $AdS_{5}$ with $f(r)=1\,,\qquad e^{2V(r)}=e^{2W(r)}=r^{2}$ (10) in the $r\to\infty$ limit.
The $BTZ\times\mathbb{R}^{2}$ solution near the horizon can be written as $f(r)=3\left(1-\frac{r_{h}^{2}}{r^{2}}\right)\,,\qquad e^{2V}=\frac{\mathcal{B}}{\sqrt{3}}\,,\qquad e^{2W}=3r^{2}\,.$ (11) The temperature is set by the horizon radius via $4\pi T=r_{h}^{2}|f^{\prime}(r_{h})|=6r_{h}/L^{2}$. We set $L=1$ for simplicity. Note also that $\mathcal{B}$ is related to the $z$-component of the ‘physical’ magnetic field ${\bf B}$, from which it differs by a prefactor of $L$ or of the 2-form gauge field coupling in the bulk (e.g. if one were to define the action with $S\sim\int(1/g^{2})H^{2}$). We will keep using $\mathcal{B}$ to emphasise its holographic origin, but there is no harm in thinking of it as simply ${\bf B}$. One interesting feature of this model is that the leading divergence of $B_{\mu\nu}$ in the Fefferman-Graham expansion is logarithmic. Thus, the definition of the source $b_{\mu\nu}$ requires a mixed boundary condition $b_{\mu\nu}=B_{\mu\nu}(\Lambda)-\frac{1}{\kappa(\Lambda)}\langle J_{\mu\nu}\rangle\,,\qquad\text{with}\qquad\langle J^{\mu\nu}\rangle=-\sqrt{-G}n_{\alpha}H^{\alpha\mu\nu}$ (12) Requiring the source $b_{\mu\nu}$ to be independent of the UV cutoff fixes the form of the ‘coupling constant’ $1/\kappa(\Lambda)$, which turns out to be logarithmically running. This is a common feature for fields with this type of near-boundary behaviour, where the counterterm also plays the role of a double-trace deformation Witten:2001ua ; Berkooz:2002ug , see also Hofman_2018 ; Grozdanov_2019b for a discussion in the present context. Mapping $J^{\mu\nu}$ into a more familiar dynamical field strength via $J^{\mu\nu}=\frac{1}{2}\epsilon^{\mu\nu\rho\sigma}F_{\rho\sigma}$, one can see that the double-trace deformation plays a role similar to the Maxwell term for the dynamical gauge field in the dual QFT, with $1/\kappa(\Lambda)$ as a (logarithmically running) electromagnetic coupling. The finite part of $1/\kappa(\Lambda)$ plays a crucial role in this setup.
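The horizon data quoted below (11) can be verified in one line: with $f(r)=3(1-r_{h}^{2}/r^{2})$ one finds $f^{\prime}(r_{h})=6/r_{h}$, so $4\pi T=r_{h}^{2}|f^{\prime}(r_{h})|=6r_{h}$ (in units $L=1$), i.e. $2\pi T=3r_{h}$, a relation used repeatedly in the matching computations below. A minimal symbolic check:

```python
import sympy as sp

r, rh = sp.symbols('r r_h', positive=True)

# Emblackening factor of the near-horizon BTZ x R^2 solution, Eq. (11)
f = 3 * (1 - rh**2 / r**2)

fp_h = sp.diff(f, r).subs(r, rh)     # f'(r_h); positive, so |f'(r_h)| = f'(r_h)
T = rh**2 * fp_h / (4 * sp.pi)       # from 4 pi T = r_h^2 |f'(r_h)|
```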
While a finite counterterm in the ordinary bulk Maxwell theory simply results in a contact term in the correlation function, the mixed boundary condition for $B_{ab}$ implies the existence of a purely decaying mode $\omega=-i/\tau_{E}$ that can interfere with the gapless hydrodynamic excitations. This is nothing but the lifetime of the electric flux operator $Q_{E}\sim\int dS_{ij}J^{ij}$, which appears in the following correlation function Hofman_2018 ; Grozdanov_2019 $\langle J^{ij}(t)J^{kl}(0)\rangle\sim\exp\left(-t/\tau_{E}\right)\,.$ (13) Note that, due to the anisotropy introduced by the finite equilibrium magnetic field, the value of $\tau_{E}$ depends on the direction of the electric field under consideration. The limit where $\tau_{E}$ is small, but finite, compared to the time scale of interest (set by the temperature or the magnetic flux density) is of particular interest, as it allows one to extract $\tau_{E}$ analytically via a matching procedure that we outline below. As argued in the introduction, the lifetime of the electric flux determines the validity of the MHD and FFE descriptions. ### II.1 Linearised solutions in $\omega/T\ll 1$ limit and matching procedure In this section, we outline the computation required to obtain the relaxation time of the electric field. We focus on the hydrodynamic regime where $\omega/T\ll 1$, and the low temperature limit 666Similar computations for the holographic theory dual to a system with an ordinary (zero-form) $U(1)$ symmetry can be found in e.g. Davison:2013bxa ; Moitra:2020dal . $T/\sqrt{|{\bf B}|}\ll 1$. This allows us to solve the bulk equations of motion analytically via a matching method similar to that employed in Kovtun:2003wp (see also Grozdanov_2019 for a recent review). We consider the decay rate of the electric field both along and perpendicular to the equilibrium magnetic field, denoted by $E^{\parallel}=J^{xy}$ and $E^{\perp}=J^{xz},J^{yz}$ respectively.
Before proceeding, let us summarise the matching procedure for the $\omega/T\ll 1$ expansion. It involves separating the bulk into three suitably defined pieces: the inner region, the intermediate region and the outer region. The inner region is a suitably defined region close to the horizon, while the outer region is defined to be the range of $r$ such that $\omega/r\ll 1$, so that one can drop terms of order $(\omega/r)^{2}$; this includes the near-boundary region. The integration constants of the solution in the outer region are determined by matching to the form of the inner region solution at the intermediate values of $r$ that connect the two regions together. In our case, this is the region of $r$ close to $r_{h}$ where $\frac{\omega}{T}\log f(r)\ll 1$ (14) This intermediate region is also consistent with the outer region assumption $\omega/r\ll 1$, and thus we are able to match the two solutions together. Note that, while this procedure is applicable to any bulk solution with an event horizon, the limit $\omega/T\ll 1$ is crucial. We now present the key equations and the resulting lifetime of the electric flux. #### II.1.1 Perturbation parallel to equilibrium magnetic field As the magnetic field in equilibrium points along the $z$-direction, we are interested in $E^{\parallel}=\frac{1}{2}\varepsilon^{zxy}\langle J_{xy}\rangle$. The corresponding bulk perturbation is $\delta B_{xy}$, which decouples from the metric perturbations in the zero wave vector limit. The bulk equation of motion can be written as $\left(r^{2}fe^{W-2V}\delta B_{xy}^{\prime}\right)^{\prime}+\frac{\omega^{2}}{r^{2}f}e^{W-2V}\delta B_{xy}=0$ (15) where $(...)^{\prime}$ denotes a derivative w.r.t. the radial coordinate $r$.
The inner region solution for $\delta B_{xy}$, obtained by substituting the $BTZ\times\mathbb{R}^{2}$ expressions for $f,V,W$ and imposing the ingoing boundary condition, can be written as $\delta B_{xy}^{inner}=c^{H}\exp\left(-\frac{i\omega}{4\pi T}\log f(r)\right)$ (16) The outer region solution is obtained by working to linear order in $\omega/r$: $\displaystyle\delta B_{xy}^{outer}(r)$ $\displaystyle=c_{1}-c_{2}\left(\log\Lambda-\int^{\Lambda}_{\mathtt{r}=r}d\mathtt{r}\,\frac{e^{2V(\mathtt{r})-W(\mathtt{r})}}{\mathtt{r}^{2}f(\mathtt{r})}\right)$ (17) $\displaystyle=c_{1}-c_{2}\left(\log r-\phi(r)+\frac{e^{2V-W}}{r_{h}^{2}f^{\prime}}\Big{|}_{r=r_{h}}\log f\right)\,,$ where $\phi(r)$ is a function regular everywhere in the bulk, defined as $\phi(r)=\int^{\Lambda}_{\mathtt{r}=r}d\mathtt{r}\,\left[\frac{e^{2V(\mathtt{r})-W(\mathtt{r})}}{\mathtt{r}^{2}f(\mathtt{r})}-\left(\frac{e^{2V(\mathtt{r})-W(\mathtt{r})}}{r_{h}^{2}f^{\prime}(\mathtt{r})}\right)_{\mathtt{r}=r_{h}}\frac{f^{\prime}(\mathtt{r})}{f(\mathtt{r})}-\frac{1}{\mathtt{r}}\right]\,.$ This parametrisation allows us to single out the leading contributions that dominate when considering the solution near $r=\Lambda$, where $\phi(r)$ and $\log(e^{-2V}r^{2}f)$ vanish, as well as near $r\approx r_{h}$, where the $\log f$ term dominates. The integration constants $c_{1},c_{2}$ in (17) are related to the source $b_{\mu\nu}$ and the 2-form current $\langle J^{xy}\rangle$. The precise relations can be obtained via Eq. (12): $\langle J^{xy}\rangle=c_{2}\,,\qquad b_{xy}=c_{1}-\left(\log\,\Lambda+\frac{1}{\kappa(\Lambda)}\right)c_{2}$ (18) Note that, for the source to be independent of the UV cutoff, one requires $\kappa(\Lambda)^{-1}=\text{finite term}-\log\Lambda$. This is the logarithmically running coupling usually found in a double-trace deformed theory and resembles the running of the electromagnetic coupling, as pointed out in Grozdanov_2019b ; Hofman_2018 ; Grozdanov_2019 .
For the outer and inner region solutions to match, we consider both solutions in the intermediate region, where we can write the inner solution as $\exp\left(-\frac{i\omega}{4\pi T}\log f\right)\approx 1-i\frac{\omega}{4\pi T}\log f+\mathcal{O}\left(\frac{\omega}{T}\right)^{2}$ (19) The matching condition $\delta B_{xy}^{inner}=\delta B_{xy}^{outer}$ in this region then yields the following algebraic relations between the boundary quantities $b_{xy},\langle J_{xy}\rangle$: $\displaystyle\frac{i\omega}{4\pi T}c^{H}$ $\displaystyle=\left(\frac{\mathcal{B}/r_{h}}{3r_{h}^{2}f^{\prime}(r_{h})}\right)\langle J_{xy}\rangle\,$ (20) $\displaystyle c^{H}$ $\displaystyle=b_{xy}+\left[\frac{1}{\kappa(\Lambda)}+\log\left(\frac{\Lambda}{r_{h}}\right)+\phi(r_{h})\right]\langle J_{xy}\rangle\,.$ Solving these equations at vanishing source $b_{xy}=0$ yields a spectrum of the form $\omega=-i/\tau_{E^{\parallel}}$, where $\tau_{E^{\parallel}}$ is the lifetime of the electric flux parallel to the equilibrium magnetic field. This is the first key result that we advertised earlier, namely $\tau_{E^{\parallel}}=\frac{2\pi T}{\mathcal{B}}\left(e_{r}^{-2}+\phi(r_{h})\right)\,,$ (21) where we write $e_{r}^{-2}=\log(\Lambda/r_{h})+\kappa(\Lambda)^{-1}$, which plays the role of a renormalised electromagnetic coupling. More details on the $T/\sqrt{\mathcal{B}}$ dependence of $\phi(r_{h})$ can be found in Appendix A. What does this result tell us about the lifetime of the electric flux operator? While the integral $\phi(r_{h})$ can be a dimensionless function of $T$ and $\mathcal{B}$, the renormalised electromagnetic coupling can be chosen in such a way that $e_{r}^{-2}\gg\phi(r_{h})$ and $e_{r}^{-2}T^{2}/\mathcal{B}\gg 1$, so that the resulting mode $\omega=-i/\tau_{E^{\parallel}}$ satisfies $\omega/T\ll 1$. The second limit is essential as the matching procedure assumes that $\omega/T\ll 1$, and any solution outside this regime has to be discarded.
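The step from the matching conditions (20) to the lifetime (21) is straightforward linear algebra, and can be checked symbolically. A minimal sketch, using the background data $f^{\prime}(r_{h})=6/r_{h}$ and $r_{h}=2\pi T/3$ from (11), and keeping $e_{r}^{-2}$ and $\phi(r_{h})$ as abstract positive symbols:

```python
import sympy as sp

# er2 stands for e_r^{-2}, phih for phi(r_h); Bc is the flux density B.
T, Bc, er2, phih = sp.symbols('T B er2 phih', positive=True)
w, J = sp.symbols('omega J')

rh = 2 * sp.pi * T / 3        # from 4 pi T = r_h^2 |f'(r_h)| = 6 r_h
fp = 6 / rh                   # f'(r_h) for f(r) = 3 (1 - r_h^2/r^2)

# Matching conditions (20) at vanishing source b_xy = 0:
cH = (er2 + phih) * J                                   # second line of (20)
eq = sp.Eq(sp.I * w / (4 * sp.pi * T) * cH,
           (Bc / rh) / (3 * rh**2 * fp) * J)            # first line of (20)

w_sol = sp.solve(eq, w)[0]          # a purely decaying mode, omega = -i/tau
tau = sp.simplify(-sp.I / w_sol)    # recovers Eq. (21)
```

The solver output reproduces $\tau_{E^{\parallel}}=\frac{2\pi T}{\mathcal{B}}(e_{r}^{-2}+\phi(r_{h}))$ exactly.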
Taking these factors into account, one concludes that the temperature dependence of the electric flux lifetime is different from that in the high temperature limit $T/\sqrt{\mathcal{B}}\gg 1$, where $\tau_{E}\sim 1/T$ (see Fig 2). Naively taking the limit $T\to 0$ in (21) would result in a vanishing lifetime of the electric flux, in contrast to the result in (2). However, one has to carefully remove the limit $\omega/T\ll 1$ in order to access the lower temperature limit $\omega/T\sim 1,\omega/\sqrt{\mathcal{B}}\ll 1$. Figure 2: A sketch of the decay rate (inverse of the lifetime) of the electric field as a function of $T/\sqrt{\mathcal{B}}$, measured in units of $\sqrt{\mathcal{B}}$. The high temperature regime (red) depicts the result for the decay rate at zero magnetic field found in Hofman_2018 ; Grozdanov_2019 , which has the same temperature dependence as in (1)-(2). In the low temperature regime (blue), however, the operator lifetime becomes that found in (21). #### II.1.2 Perturbation perpendicular to equilibrium magnetic field Unlike the previous case, the perturbation $\delta B_{xz}$ that corresponds to $E^{\perp}=\frac{1}{2}\epsilon^{yzx}\langle J_{zx}\rangle$ is coupled to the metric perturbations. This is manifest in the equations of motion $\displaystyle\frac{d}{dr}\Big{(}r^{2}fe^{-W}\delta B_{xz}^{\prime}+\mathcal{B}(\delta G^{x}_{t})\Big{)}+\frac{\omega^{2}e^{-W}}{r^{2}f}\delta B_{xz}$ $\displaystyle=0\,,$ (22) $\displaystyle\frac{d}{dr}\left(e^{4V+W}(\delta{G^{x}_{t}})^{\prime}+4\mathcal{B}\delta B_{xz}\right)$ $\displaystyle=0\,,$ where $\delta G_{\mu\nu}$ denotes the metric perturbations. Note that the coupled pairs of perturbations $\\{\delta B_{xz},\delta G_{tx}\\}$ and $\\{\delta B_{yz},\delta G_{ty}\\}$ are equivalent due to the $SO(2)$ symmetry in the plane perpendicular to the equilibrium magnetic field. Also, the second equation of motion in (22) can be written in the total derivative form $d\pi_{tx}/dr=0$, where $\pi_{tx}$ is related to the momentum $\langle T^{tx}\rangle$.
Since we are working in the zero wavevector limit, the conservation of momentum implies that $\pi_{tx}=0$ in Fourier space (which can be shown explicitly using the $rx$-component of the Einstein equations). The solutions for $\delta B_{xz},\delta G_{tx}$ in the outer region can be found by using the properties of the background geometry combined with the Wronskian method, as in Grozdanov_2019 . To be more precise, one first notes that the time-independent equations of motion of the magnetised black brane can be written in a total derivative form, which implies the existence of two radially conserved currents: $\displaystyle Q_{1}=r^{2}f(V^{\prime}-W^{\prime})e^{2V+W}+2\mathcal{B}h(r)$ $\displaystyle=0\,,$ (23a) $\displaystyle Q_{2}=e^{4V+W}\frac{d}{dr}\left(e^{-2V}r^{2}f\right)-4\mathcal{B}h(r)$ $\displaystyle=sT\,,$ (23b) where we write the equilibrium ansatz for the gauge field as $B=h(r)\,dt\wedge dz$ with the gauge choice $h(r_{h})=0$ which, together with horizon regularity, sets $Q_{1}=0$. The relation between $h(r)$ and the 3-form field strength is $e^{2V-W}h^{\prime}=\mathcal{B}\,.$ (23c) More details on obtaining these radially conserved quantities can be found in e.g. Gubser:2009cg . With this ansatz, we can compare (22) and (23) and find that one of the solutions of (22) in the $\omega/r\to 0$ limit is $\delta B_{xz}=\Phi_{1}(r)=h(r)+\frac{sT}{4\mathcal{B}}\,,\qquad\delta G^{x}_{t}=\Psi_{1}(r)=-e^{-2V}r^{2}f\,.$ (24) One can then use the Wronskian method to find a pair of solutions of (22) that are linearly independent of $\\{\Phi_{1},\Psi_{1}\\}$.
These solutions are $\displaystyle\Phi_{2}(r)=\frac{1}{4\mathcal{B}}-\int^{\infty}_{r}d\mathtt{r}\left(\frac{\mathcal{B}e^{W(\mathtt{r})}\Psi_{2}(\mathtt{r})}{\mathtt{r}^{2}f(\mathtt{r})}\right)\,,\qquad\Psi_{2}(r)=\Psi_{1}(r)\int^{\infty}_{r}d\mathtt{r}\left(\frac{e^{-W(\mathtt{r})}}{\mathtt{r}^{4}f(\mathtt{r})^{2}}\right)$ (25) As a result, the outer region solution can be written as $\begin{pmatrix}\delta B_{xz}^{outer}\\\ (\delta G^{x}_{t})^{outer}-\frac{1}{\mathcal{B}}\mathcal{J}_{xz}\end{pmatrix}=c_{1}\begin{pmatrix}\Phi_{1}\\\ \Psi_{1}\end{pmatrix}+c_{2}\begin{pmatrix}\Phi_{2}\\\ \Psi_{2}\end{pmatrix}$ (26) where $\mathcal{J}_{xz}:=(r^{2}fe^{-W}\delta B_{xz}^{\prime}+\mathcal{B}\delta G^{t}_{x})$ is an integration constant of (22) at $\omega=0$. One can substitute the $BTZ\times\mathbb{R}^{2}$ ansatz into the solution in (26) to check that $\Phi_{1}$ and $\Psi_{1,2}$ are finite at $r=r_{h}$ while $\Phi_{2}$ is singular. It is convenient to separate out the singular part of $\Phi_{2}$ in the following form $\Phi_{2}(r)=\phi_{2}(r)-\left(\frac{\mathcal{B}e^{W}\Psi_{2}}{r^{2}f^{\prime}}\right)_{r=r_{h}}\log f(r)$ (27) where $\phi_{2}(r)$ is the integral in (25) with the logarithmic divergence subtracted. The boundary condition where the sources for both the metric and the 2-form gauge field fluctuations vanish corresponds to the following values of $c_{1}$ and $c_{2}$ $c_{1}=\frac{\mathcal{J}_{xz}}{\mathcal{B}}\,,\qquad c_{2}=-4\left(\frac{sT}{4\mathcal{B}}+h(\Lambda)+\frac{\mathcal{B}}{\hat{\kappa}(\Lambda)}\right)\mathcal{J}_{xz}$ (28) One can also check that $\mathcal{J}_{xz}$ is identical to the one-point function $\langle\delta J^{xz}\rangle$ via the definition (12). Note also that the ratio $c_{2}/\mathcal{J}_{xz}$ is finite due to the cancellation between the logarithmic divergence of $1/\kappa(\Lambda)$ and that of the near-boundary solution of $h(r)$, obtained via (23c). Let us also point out another way to organise the equations of motion for $\delta B_{xz}$.
It turns out that (22) can be combined into a single equation of motion that reduces to a total derivative form at $\omega=0$. Following the procedure in e.g. Davison:2015taa , after some manipulation we find $\left([e^{4V+W}\left(e^{-2V}r^{2}f\right)^{\prime}]^{2}r^{2}fe^{-W}\delta\tilde{B}_{xz}^{\prime}\right)^{\prime}+\frac{\omega^{2}}{r^{2}fe^{W}}[e^{4V+W}(e^{-2V}r^{2}f)^{\prime}]^{2}\delta\tilde{B}_{xz}=0$ (29) where $\delta\tilde{B}_{xz}=\delta B_{xz}/[e^{4V+W}(e^{-2V}r^{2}f)^{\prime}]$. The outer region solution of (29) is easily obtained and can be shown to be identical to that of (26). We can now proceed to the inner region solution. This can be found by solving Eq.(29), and one finds $\delta B^{inner}_{xz}=c^{H}\exp\left(-\frac{i\omega}{4\pi T}\log f(r)\right)\,.$ (30) In the intermediate region, we apply the expansion in (19). The coefficients $c_{1},c_{2}$ are related to $c^{H}$ via $\displaystyle\left(-\frac{i\omega}{4\pi T}\right)c^{H}$ $\displaystyle=-\left(\frac{\mathcal{B}e^{W}\Psi_{2}}{r^{2}f^{\prime}}\right)_{r=r_{h}}c_{2}\,,$ (31) $\displaystyle c^{H}$ $\displaystyle=\left(\frac{sT}{4\mathcal{B}}\right)c_{1}+\phi_{2}(r_{h})c_{2}\,,$ Substituting the form of $c_{1},c_{2}$ in terms of $\langle\delta J_{xz}\rangle$, we can write the relations in a form similar to those for $\langle\delta J_{xy}\rangle$, namely $\left(-i\omega+\frac{1}{\tau_{E^{\perp}}}\right)\langle\delta J_{xz}\rangle=0\,.$ (32) In the case of vanishing sources, we can write $\frac{c_{2}}{c_{1}}=-4\mathcal{B}\left(\frac{sT}{4\mathcal{B}}+h(\Lambda)+\frac{\mathcal{B}}{\kappa(\Lambda)}\right)$ and the relaxation time of the electric field perpendicular to the equilibrium magnetic field is $\tau_{E^{\perp}}=\frac{\sqrt{3}}{2\pi T\mathcal{B}\Psi_{2}(r_{h})}\left[\frac{sT}{4\mathcal{B}}\frac{c_{1}}{c_{2}}+\phi_{2}(r_{h})\right]$ (33) In contrast to the result at $e_{r}^{-2}\gg 1$ and zero equilibrium magnetic field in Hofman_2018 ; Grozdanov_2019 , the lifetime at strong magnetic field
$\mathcal{B}/T^{2}$ has a very different form. To see this, it is useful to examine the combinations that enter the lifetime: $\Psi_{2}(r_{h})\propto\frac{1}{\mathcal{B}T^{2}}\,,\qquad\phi_{2}(r_{h})\propto\frac{1}{\mathcal{B}}\,,\qquad\frac{c_{1}}{c_{2}}\propto\frac{1}{\mathcal{B}^{2}}\,\quad\text{for large }1/\kappa(\Lambda)$ (34) with proportionality constants given by numbers of order $\mathcal{O}(1)$. In the limit of large electromagnetic coupling $1/\kappa(\Lambda)\gg 1$ and $\mathcal{B}/T^{2}\gg 1$, we find that this gives a short lifetime of the form $\tau_{E^{\perp}}\propto T/\mathcal{B}$. However, the location of this decaying mode $\omega=-i/\tau_{E^{\perp}}$ lies outside the hydrodynamic regime $\omega/T\ll 1$. Thus, one concludes that there are no modes with a long lifetime in this regime777Note also that, if one were to perform this analysis for a perturbation in the holographic dual to a theory with a zero-form $U(1)$ at $T>0,\mu=0$ (as in Kovtun:2003wp , see also Grozdanov_2019 ), one would find a spectrum of the form $\omega\sim T$. This solution is spurious as it lies outside the hydrodynamic regime $\omega/T\ll 1$ and, in fact, is not present in the genuine spectrum obtained numerically at finite $\omega/T$ Kovtun:2005ev . . ### II.2 Checking the $T\gtrsim 0$ limit in the $\omega/\sqrt{|{\bf B}|}\ll 1$ regime While the result in the previous section strongly indicates that the electric flux lifetime becomes very short at extremely low temperature, the simplicity of the holographic model also allows us to extend the analysis beyond the usual hydrodynamic $\omega/T\ll 1$ regime. We will first show that the zero temperature theory does not support a purely decaying mode of the form $\omega=-i/\tau$ in the small $\omega/\sqrt{|{\bf B}|}$ regime. Next, we further extend the regime of validity to that of $\omega/\sqrt{|{\bf B}|}\ll 1$ but for arbitrary $\omega/T$.
The purpose of the latter is to show that $\tau_{E}\propto T/\mathcal{B}$ without relying on the $\omega/T\ll 1$ limit. #### II.2.1 Zero temperature A simple argument for the non-existence of such a slowly decaying mode is the presence of Lorentz symmetry at zero temperature on the $AdS_{3}$ submanifold in the deep infrared. On the other hand, one can also show this using matching methods similar to those in DHoker:2010xwl ; DHoker:2010onp ; Davison:2013bxa . To obtain this result, one first realises that the geometry of the magnetised black brane is an interpolation between $AdS_{3}\times\mathbb{R}^{2}$ in the IR and $AdS_{5}$ in the UV. Roughly speaking, the IR geometry starts to become a good approximation as one starts to probe scales below the magnetic field, i.e. $r\sim\sqrt{|{\bf B}|}$. The inner and outer regions are defined such that they start off from the IR and UV geometry respectively, and extend to cover the overlap region (see Figure 3). This is achievable when $\omega/\sqrt{|{\bf B}|}\ll 1$. Figure 3: A sketch of the bulk geometry at zero temperature. The inner region, whose solutions depend only on the ratio $\omega/r$, extends from the near-horizon limit $r\to 0$ to where $\omega/r\sim\omega/\sqrt{|{\bf B}|}\to 0$, as we are working in the $\omega/\sqrt{|{\bf B}|}\ll 1$ limit. The outer region is defined to be the region where $\omega^{2}/r^{2}$ and higher powers in $\omega/r$ are suppressed, which can be extended towards $r\gg\sqrt{|{\bf B}|}$ as long as the frequency is small. For concreteness, let us demonstrate how this works in the $E^{\parallel}$ channel, which involves the bulk field $\delta B_{xy}$ governed by Eq.(15). The solution can be written in the same form as (17), evaluated at zero temperature (i.e. $r_{h}=0$). It is worth noting that the singular behaviour near $r/\sqrt{\mathcal{B}}\to 0$ is different from that in the earlier section.
Instead, it can be written as $\delta B_{xy}^{outer}(r)=c_{1}-c_{2}\left(\log\Lambda-\bar{\phi}(r)+\frac{\mathcal{B}/3}{6r^{2}}\right)+\mathcal{O}\left(\frac{\omega^{2}}{r^{2}},\frac{\omega^{2}}{r^{2}}\log\left(\frac{\omega}{r}\right)\right)$ (35) where the integration constants can be related to the source and response via (12). It is worth noting that the logarithmic divergence appears at order $\omega^{2}$. This can be confirmed via a Frobenius analysis in the $AdS_{5}$ region (see e.g. Kovtun:2006pf ) and in the $AdS_{3}\times\mathbb{R}^{2}$ region (see appendix B). The prefactor of the $r^{-2}$ divergence is obtained by evaluating $e^{2V(r)-W(r)}/f(r)$ at the horizon $r\to 0$. Here $\bar{\phi}(r)$ is the integral in (17) with the $r^{-2}$ divergent and logarithmically divergent pieces subtracted. The resulting integral, evaluated from $r=r_{0}\sim\sqrt{\mathcal{B}}$ in the overlap region to the UV cutoff $r=\Lambda$, is finite, and its precise value is not particularly relevant for us as long as one keeps $e_{r}^{-2}$ large. Next, we consider the inner region solution, which can be obtained by solving (15) in the $AdS_{3}\times\mathbb{R}^{2}$ region. Upon imposing horizon regularity at $r\to 0$, we find that the inner region solution is $\delta B_{xy}^{inner}=c^{H}\zeta K_{1}(\zeta)\,,\qquad\zeta=\frac{3\omega}{r}$ (36) For these two branches of solutions to match, we extend the inner region solution to the regime where $\zeta\sim\omega/r\ll 1$. We find that the ‘near boundary’ expansion takes the form $\delta B_{xy}^{inner}=c^{H}\left(1+\frac{1}{2}\gamma\zeta^{2}+\frac{1}{2}\zeta^{2}\log\zeta+...\right)$ (37) Matching this solution to the outer region, we find that $c_{2}\propto\omega^{2}$, unlike what happened in the previous section. Carrying out the matching procedure, we find that the polynomial governing the spectrum depends only on $\omega^{2}$, which rules out a purely imaginary mode $\omega=-i/\tau$.
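The small-$\zeta$ expansion (37) can be checked against the standard expansion of the modified Bessel function, $\zeta K_{1}(\zeta)=1+\tfrac{1}{2}\zeta^{2}\log\zeta+\tfrac{1}{2}c\,\zeta^{2}+\mathcal{O}(\zeta^{4}\log\zeta)$, where the constant that (37) denotes $\gamma$ works out, in the standard Bessel normalisation, to $c=\gamma_{E}-\tfrac{1}{2}-\log 2$. A quick numerical check:

```python
from mpmath import mp, mpf, besselk, log, euler

mp.dps = 25
# Constant multiplying zeta^2/2 in (37); in the standard Bessel normalisation
# it equals gamma_E - 1/2 - log 2 (the text simply calls it gamma).
c = euler - mpf(1) / 2 - log(2)

def exact(z):
    return z * besselk(1, z)                       # zeta K_1(zeta), cf. Eq. (36)

def expansion(z):
    return 1 + z**2 / 2 * log(z) + z**2 / 2 * c    # cf. Eq. (37)
```

The residual shrinks like $\zeta^{4}\log\zeta$, confirming that the $\omega$-dependence entering the matching starts at order $\zeta^{2}\sim\omega^{2}$, which is the origin of $c_{2}\propto\omega^{2}$.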
The same argument can also be made for the $E^{\perp}$ channel involving $\delta B_{xz}$. This is because the part that is relevant to the matching procedure depends only on $\zeta^{2}$. See appendix B for more details on the form of $\delta B_{xz}$ in the $AdS_{3}\times\mathbb{R}^{2}$ region. #### II.2.2 $T\lesssim\omega\ll\sqrt{|{\bf B}|}$ limit In this section, we show that the electric flux lifetime can also be obtained in the regime where $\omega/T\gtrsim 1$ and $\omega/\sqrt{|\bf{B}|}\ll 1$, while keeping $\sqrt{|\bf{B}|}/T\gg 1$. The calculation closely resembles that of the zero temperature case, except that the deep IR geometry is now $BTZ\times\mathbb{R}^{2}$ instead of $AdS_{3}\times\mathbb{R}^{2}$. Figure 4 illustrates this geometry, in which the $AdS_{5}$ region is joined to $BTZ\times\mathbb{R}^{2}$ across the ‘boundary’ $AdS_{3}\times\mathbb{R}^{2}$ of the IR geometry. We will only focus on the $E^{\parallel}$ fluctuations, as it is the only channel that contains the decaying modes in the $\omega/T\ll 1$ regime. A similar computation for this type of geometry can also be found in DHoker:2010onp . Figure 4: A sketch of the bulk geometry at low temperature $T\ll\sqrt{|\bf{B}|}$. The inner region, whose solutions depend only on the ratio $\omega/r$, extends from the near-horizon limit $r\to r_{h}\ll\sqrt{\mathbf{B}}$ to the regime where $\omega/r\sim\omega/\sqrt{|{\bf B}|}\ll 1$, which corresponds to the near-boundary region of the $BTZ\times\mathbb{R}^{2}$ geometry, described by $AdS_{3}\times\mathbb{R}^{2}$. The outer region is defined to be the region where $\omega^{2}/r^{2}$ (and higher powers) is negligible and, therefore, can be extended toward $r\sim\sqrt{|\bf{B}|}$ in the $\omega/\sqrt{|\bf{B}|}\ll 1$ limit. The outer region solution, which extends from the UV $AdS_{5}$ to the intermediate $AdS_{3}\times\mathbb{R}^{2}$ region, has the same form as in (35).
This is possible only in the limit where $\sqrt{\mathcal{B}}\gg T$, so that $r/\sqrt{\mathcal{B}}$ is always much greater than $T/\sqrt{\mathcal{B}}\sim r_{h}/\sqrt{\mathcal{B}}$ in this region. The inner region solution in the $BTZ\times\mathbb{R}^{2}$ region can be expressed in terms of a hypergeometric function (upon imposing the ingoing boundary condition) $\delta B_{xy}^{inner}=c^{H}\left(1-\frac{r_{h}^{2}}{r^{2}}\right)^{-i\mathfrak{w}/2}\,{{}_{2}F_{1}}\left(-\frac{i\mathfrak{w}}{2},-\frac{i\mathfrak{w}}{2};1-i\mathfrak{w};1-\frac{r_{h}^{2}}{r^{2}}\right)$ (38) where $\mathfrak{w}=\omega/(2\pi T)=\omega/3r_{h}$. Extending this solution to the $r\gg r_{h}$ limit (which is possible since $r_{h}/r\to 0$ as we approach the limit $\omega/r\to 0$) yields the following expansion abramowitz+stegun $\delta B_{xy}^{inner}\propto c^{H}\left[1+\frac{i\omega r_{h}}{6r^{2}}+\frac{1}{4}\left(\frac{\omega}{3r}\right)^{2}\left(2-2\gamma-2\psi(1-i\mathfrak{w}/2)-\log\left(\frac{r_{h}^{2}}{r^{2}}\right)\right)+\mathcal{O}(\omega^{3})\right]$ (39) where $\psi(x)$ is the digamma function and the constants of proportionality are combinations of gamma functions that can be absorbed in the definition of $c^{H}$. The first two terms in $[...]$ are the ones important for us. Working to leading order in $\omega/r\ll 1$ as one approaches the intermediate $AdS_{3}\times\mathbb{R}^{2}$ region, we find the following matching conditions $c_{1}-c_{2}\log(\Lambda/\sqrt{\mathcal{B}})+\bar{\phi}=c^{H}\,,\qquad\left(\frac{\mathcal{B}}{3}\right)c_{2}=i\omega\left(\frac{2\pi T}{3}\right)c^{H}$ (40) We can convert $c_{1}$ to the source $b_{xy}$ and $c_{2}$ as done in the previous sections. Upon taking $e_{r}^{-2}\gg\bar{\phi}$ (so that the solution lies in the regime of validity $\omega/\sqrt{\mathcal{B}}\ll 1$), we find a solution of the form $\omega=-i/\tau_{E^{\parallel}}$, where $\tau_{E^{\parallel}}$ is the same as in (21).
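The leading large-$r$ behaviour of the inner solution can be probed numerically with mpmath. The sketch below checks that, for small $\mathfrak{w}$, the first correction to the constant asymptote is $i\omega r_{h}/(6r^{2})=i\mathfrak{w}\,r_{h}^{2}/(2r^{2})$, the second term of (39). The hypergeometric parameter assignment ${}_{2}F_{1}(-i\mathfrak{w}/2,-i\mathfrak{w}/2;1-i\mathfrak{w};1-r_{h}^{2}/r^{2})$ used here is an assumption on this author's part, chosen for consistency with the digamma argument and the logarithm appearing in (39); it is not taken verbatim from the source.

```python
# Numerical check of the leading large-r correction in (39) for small frakw.
import mpmath as mp

mp.mp.dps = 30                 # high precision; z is very close to 1
w = mp.mpf("0.001")            # frakw = omega / (2 pi T), small
rh = 1

def delta_B_inner(r):
    # BTZ inner solution (38), with the parameter assignment assumed above;
    # the prefactor (1 - rh^2/r^2)^(-i w/2) enforces the ingoing condition.
    z = 1 - (rh / r)**2
    return z**(-1j * w / 2) * mp.hyp2f1(-1j * w / 2, -1j * w / 2, 1 - 1j * w, z)

r = mp.mpf(100)
asym = delta_B_inner(mp.mpf(10)**6)      # proxy for the r -> infinity value
correction = delta_B_inner(r) / asym - 1
expected = 1j * w * rh**2 / (2 * r**2)   # = i*omega*rh/(6 r^2) with w = omega/(3 rh)
print(correction, expected)
```

At $O(\mathfrak{w})$ the correction comes entirely from the ingoing prefactor; the hypergeometric factor only contributes at $O(\mathfrak{w}^{2})$, consistently with the third term of (39).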
This indicates that the lifetime indeed grows as $T/\sqrt{\mathcal{B}}$ increases, regardless of the ratio $\omega/T$. ## III Conclusion The higher-form symmetry viewpoint of magnetohydrodynamics and its low temperature incarnation, force-free electrodynamics, leads to new insights. The central focus of the present work was to establish the absence of long-lived non-conserved operators. In turn, this indicates the validity of a hydrodynamic description at low temperature and strong magnetic field. The question of whether the only operators that govern the deep IR dynamics are the conserved charges is important and ought to be asked before any quantitative attempt is made to study hydrodynamic properties (such as the shear viscosity). All non-conserved operators must decay much faster than the scale of interest if a hydrodynamic interpretation is to be meaningful. We work with a holographic model which shares the same global symmetry as that of the plasma, namely that only the energy, momentum and magnetic flux commute with the Hamiltonian. The model is simple enough for the lifetime of the electric flux to be determined by classical bulk dynamics, and the precise question is whether or not the electric flux is sufficiently long-lived to interfere with hydrodynamic modes. Due to the anisotropy of the system in the presence of a strong expectation value of the magnetic field, the lifetime of the electric field depends on its orientation. Our results can be summarised as follows * • For electric flux $E_{\parallel}$ parallel to the magnetic field, the lifetime has a strong dependence on the double-trace coupling $\kappa$, which plays a role similar to the renormalised electromagnetic coupling.
In the extreme limit of $e^{-2}_{r}\gg|\bf{B}|/T^{2}$, the lifetime can be large enough to be captured by the analytic computation, both in the ‘usual’ hydrodynamic regime $\omega/T\ll 1$ and in the even lower temperature regime where $\omega/\sqrt{\bf{B}}\ll 1$ while $\omega/T$ may remain finite. We found that the lifetime becomes shorter as one decreases the ratio $T/\sqrt{|\bf{B}|}$. The latter indicates that the lifetime will become extremely short in the extremely strong magnetic field regime $T/\sqrt{|\bf{B}|}\ll 1$ and cannot interfere with the low energy regime of $\omega/\sqrt{|\bf{B}|}\ll 1$ where the FFE limit is thought to be applicable. * • For the component of electric flux $E_{\perp}$ perpendicular to the magnetic field, we find that there is no pole in the vicinity of $\omega/T\ll 1$. The dependence of the lifetime on the renormalised electromagnetic coupling disappears as one approaches the strong magnetic field limit. We also performed a consistency check at $T\to 0$ to ensure that there are no modes in the deep IR limit of $\omega/\sqrt{|\bf{B}|}\ll 1$. In this regime, the modes that indicate a (potentially) long lifetime of $E_{\parallel}$ disappear from the low energy spectrum, as anticipated. These computations are basic checks on the validity of the FFE description. In the holographic context, it would be interesting to check whether all the accessible non-conserved operators truly have a parametrically short lifetime, as well as to confirm the low energy spectrum predicted by force-free electrodynamics (and its subsequent derivative corrections). Extraction of the FFE effective action from gravity akin to Nickel:2010pr ; Glorioso:2018mmw ; deBoer:2018qqm , or of the full constitutive relation as in Bhattacharyya:2008jc ; Banerjee:2008th ; Erdmenger:2008rm , would be desirable as a definitive proof of the FFE description in the dynamically magnetised black brane geometry.
Last but not least, it would be very interesting to investigate operator lifetimes in (weakly coupled) quantum electrodynamics at finite $T$ and ${\bf B}$, to better understand FFE and its limitations in a system more directly connected to astrophysical plasma than the strongly coupled holographic model considered here. ## Acknowledgements We would like to thank Jay Armas, Sašo Grozdanov, Nabil Iqbal, Kieran Macfarlane, Watse Sybesma and Lárus Thorlacius for helpful discussions and comments. We are particularly grateful to S. Grozdanov, N. Iqbal and L. Thorlacius for commenting on the manuscript. The work of N. P. was supported by Icelandic Research Fund grant 163422-052 and STFC grant number ST/T000708/1. The work of A.R was supported in part by the Icelandic Research Fund under grant 195970-052 and by the University of Iceland Research Fund. ## Appendix A Numerical solution and evaluation of the operator lifetime In this appendix, we elaborate on the evaluation of the electric flux lifetime. The numerical background solution for this geometry can be constructed in the same way as in DHoker:2009mmn using a shooting method. The solution is a one-parameter family characterised by $\mathcal{B}/T^{2}$, which allows us the freedom to choose $r_{h}=1,r_{h}^{2}f^{\prime}(r_{h})=1$ (or equivalently $T=1/4\pi$). It is also convenient to set $V(r_{h})=W(r_{h})=0$, which results in a UV boundary metric of the form $\lim_{r\to\infty}ds^{2}=r^{2}\left(-dt^{2}+v(dx^{2}+dy^{2})+wdz^{2}\right)+\frac{dr^{2}}{r^{2}}$ (41) Upon rescaling the spatial coordinates $\\{dx,dy,dz\\}\to\\{dx/\sqrt{v},dy/\sqrt{v},dz/\sqrt{w}\\}$, we recover the desired background solutions. Note also that the physical magnetic flux is related to the input parameter (that produced the metric in (41)) by $\mathcal{B}_{\text{physical}}=\mathcal{B}_{\text{input}}/v$.
A small caveat of this method is that one cannot find a smooth solution beyond $\mathcal{B}_{\text{input}}\gtrsim\sqrt{3}/2$, which corresponds to the temperature $T/\sqrt{\mathcal{B}}=(4\pi\sqrt{\mathcal{B}_{\text{input}}/v})^{-1}\approx 0.05$. This is most likely an artifact of the presented numerical method, as there exists a smooth solution in the zero temperature limit corresponding to the $AdS_{3}\times\mathbb{R}^{2}$ geometry in the deep IR. We should also note that this is a sufficiently low temperature, as the entropy becomes sufficiently close to the $s\propto T$ behaviour obtained from the $BTZ\times\mathbb{R}^{2}$ geometry (c.f. DHoker:2009mmn ; Grozdanov_2019b ). The background is generated for $r$ in $[1+10^{-3},10^{6}]$, and varying the (numerical) cutoffs within this order of magnitude does not change the obtained numerical results. Let us also remark on the numerical value of the renormalised electromagnetic coupling $e_{r}^{-2}=\log(\Lambda/r_{h})+\kappa(\Lambda)^{-1}$. This quantity strongly influences both the thermodynamics and the low energy spectrum Grozdanov_2019b ; Hofman_2018 ; Grozdanov_2019 of the model. In particular, a small value of $e_{r}^{-2}$ would result in the speed of sound becoming imaginary Grozdanov_2019b . Another way to see that this quantity should be large is to write it in terms of a renormalisation group independent scale $M_{*}$ that denotes the energy scale of a Landau pole Hofman_2018 , i.e. $e_{r}^{-2}\sim\log(M_{\star}/T)$ where $M_{\star}\gg T$. We take this to be the largest scale in the problem, much larger than the accessible value of $\sqrt{\mathcal{B}}/T$. Figure 5: Numerical evaluation of $\phi(r_{h})$ in (21) as a function of $T/\sqrt{\mathcal{B}}$. The black dots denote the numerical evaluation, while the red line denotes the fitting function for small $T/\sqrt{\mathcal{B}}$, $\phi\approx-(0.008)\frac{\mathcal{B}}{T^{2}}\log(5.7\mathcal{B}/T^{2})$.
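As an illustration of how the low-temperature fitting function in the Figure 5 caption can be extracted, the sketch below fits the model $-a\,x\log(bx)$ with $x=\mathcal{B}/T^{2}$ using scipy. The data here are synthetic, generated from the quoted fit values $a=0.008$, $b=5.7$; they are a stand-in for the actual numerical output of the shooting procedure, which is not reproduced in this appendix.

```python
# Sketch of the low-temperature fit phi ~ -a*(B/T^2)*log(b*B/T^2).
# The "data" are synthetic (generated from the quoted fit itself), so this
# only demonstrates the fitting procedure, not the numerical background.
import numpy as np
from scipy.optimize import curve_fit

def model(x, a, b):
    # x stands for B/T^2; the fit form quoted in the Figure 5 caption
    return -a * x * np.log(b * x)

x = np.linspace(5.0, 50.0, 40)
phi_data = model(x, 0.008, 5.7)          # stand-in for the numerical phi(r_h)

(a_fit, b_fit), _ = curve_fit(model, x, phi_data, p0=(0.01, 5.0))
print(a_fit, b_fit)   # should recover ~0.008 and ~5.7
```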
For high temperatures, the value of $\phi(r_{h})$ is approximately constant, around $0.69$. The value of $\phi(r_{h})$ at the lowest achievable temperature is $\phi(r_{h})=-23.49$. The numerical value of the integral for $\phi(r_{h})$ in (21) is shown in Figure 5. For larger temperatures (when $\phi(r_{h})\approx\mathcal{O}(1)$), the lifetime can be sensibly approximated as $\tau_{E^{\parallel}}\approx 2\pi(T/\mathcal{B})e_{r}^{-2}$. As $T/\sqrt{\mathcal{B}}$ decreases, the lifetime becomes shorter and, if we extrapolate the fitting function $\phi\sim\frac{\mathcal{B}}{T^{2}}\log\frac{\mathcal{B}}{T^{2}}$ to even lower temperatures where $e_{r}^{-2}\gtrsim\phi$, the mode escapes the regime of validity of the small $\omega/T,\omega/\sqrt{\mathcal{B}}$ expansions. In this scenario, one concludes that there are no long-lived modes that can interfere with the low energy excitations. ## Appendix B Frobenius analysis in $AdS_{3}\times\mathbb{R}^{2}$ region Consider the equation of motion for $\delta B_{xy}$ in the intermediate $AdS_{3}\times\mathbb{R}^{2}$ region: $\delta B^{\prime\prime}_{xy}(r)+\frac{3}{r}\delta B_{xy}^{\prime}(r)-\frac{9\omega^{2}}{r^{4}}\delta B_{xy}(r)=0$ (42) The solution in this region can be obtained via the Frobenius method. More precisely, one can change the radial coordinate to $\zeta=3\omega/r$ and redefine $\delta B_{xy}=\zeta c(\zeta)$. It follows that $c(\zeta)$ is a solution of the modified Bessel equation of order $1$, which has a regular singular point at $\zeta=0$.
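One can verify the inner-region solution directly. The sketch below assumes the modified-Bessel form $\delta B_{xy}^{\prime\prime}+(3/r)\delta B_{xy}^{\prime}-(9\omega^{2}/r^{4})\delta B_{xy}=0$ for the equation of motion (the form compatible with the solutions $\zeta K_{1}(\zeta)$ and $\zeta I_{1}(\zeta)$ quoted in this appendix, with $\zeta=3\omega/r$) and checks by finite differences that $u(r)=\zeta K_{1}(\zeta)$ makes the residual vanish.

```python
# Consistency check: u(r) = zeta*K_1(zeta), zeta = 3*omega/r, solves
# u'' + (3/r) u' - (9 omega^2 / r^4) u = 0, the modified-Bessel form of the
# inner-region equation assumed in the lead-in above.
import numpy as np
from scipy.special import kv  # modified Bessel K_nu

omega = 0.7   # arbitrary test frequency

def u(r):
    zeta = 3 * omega / r
    return zeta * kv(1, zeta)

def residual(r, h=1e-5):
    # central finite differences for u' and u''
    d1 = (u(r + h) - u(r - h)) / (2 * h)
    d2 = (u(r + h) - 2 * u(r) + u(r - h)) / h**2
    return d2 + (3 / r) * d1 - (9 * omega**2 / r**4) * u(r)

for r in (0.5, 1.0, 3.0):
    print(f"r={r}: residual = {residual(r):.3e}")
```

The residual is at the level of finite-difference noise, orders of magnitude below the individual terms of the equation.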
The near-boundary expansion at $r\to\infty$, or equivalently $\zeta\to 0$, akin to the Fefferman-Graham expansion in the usual holographic renormalisation, can be written as $\delta B_{xy}(\zeta)=c_{1}^{M}\mathcal{P}_{1}(\zeta)+\Big{(}c_{2}^{M}+\mathfrak{h}\log\zeta\Big{)}\mathcal{P}_{2}(\zeta)$ (43a) where $c^{M}_{1},c^{M}_{2}$ are integration constants and $\mathcal{P}_{i}(\zeta)$ are regular polynomials of the following form $\mathcal{P}_{1}=1+\sum_{n=1}^{\infty}p_{1}^{[n]}\zeta^{n}\,,\qquad\mathcal{P}_{2}=\zeta^{2}\left(1+\sum_{n=1}^{\infty}p_{2}^{[n]}\zeta^{n}\right)$ (43b) Similar to the usual procedure in holographic renormalisation deHaro:2000vlm , all the coefficients $p_{1}^{[n]},p_{2}^{[n]},\mathfrak{h}$ except $p_{1}^{[2]}$, which can be set to zero without loss of generality Kovtun:2006pf , can be obtained recursively. The important piece of information here is the coefficient $\mathfrak{h}=1$, which can be obtained by recursively solving the equation (42). Another easy way to see this is to recast (42) as the modified Bessel equation of order 1, as pointed out earlier. Then, using the fact that the modified Bessel functions $K_{1}(\zeta)$ and $I_{1}(\zeta)$ are two independent solutions of such an equation and that, for small $\zeta$, they admit the asymptotic expansions (see e.g. §3.3 of bender2013advanced ) $I_{1}(\zeta)=\frac{\zeta}{2}+\frac{\zeta^{3}}{16}+\mathcal{O}(\zeta^{5})\,,\qquad K_{1}(\zeta)=\left(\gamma+\log\frac{\zeta}{2}\right)I_{1}(\zeta)+\frac{1}{\zeta}-\frac{\zeta}{4}+\mathcal{O}(\zeta^{3}\log\zeta)$ (44) one arrives at the series expansion (43a) of the solution in the $AdS_{3}\times\mathbb{R}^{2}$ region. A similar procedure can also be applied to $E^{\perp}$ using Eq.(29). Substituting $\delta\tilde{B}_{xz}=\zeta^{2}c(\zeta)$, one finds that it obeys the modified Bessel equation of order $2$, whose $\zeta\ll 1$ expansion only yields even powers of $\zeta$. ## References * (1) L. D. Landau and E. M. Lifshitz, Fluid Mechanics. Butterworth-Heinemann, 2nd ed., 1987. * (2) R. W.
Zwanzig, Statistical mechanics of irreversibility. Lectures on Theoretical Physics Volume 3, 139 (Interscience, 1961), 1961\. * (3) D. Forster, Hydrodynamic Fluctuations, Broken Symmetry, and Correlation Functions. Perseus Books, 1995. * (4) S. A. Hartnoll and D. M. Hofman, “Locally Critical Resistivities from Umklapp Scattering,” Phys. Rev. Lett. 108 (2012) 241601, arXiv:1201.3917 [hep-th]. * (5) D. Gaiotto, A. Kapustin, N. Seiberg, and B. Willett, “Generalized Global Symmetries,” JHEP 02 (2015) 172, arXiv:1412.5148 [hep-th]. * (6) S. Grozdanov, D. M. Hofman, and N. Iqbal, “Generalized global symmetries and dissipative magnetohydrodynamics,” Phys. Rev. D 95 no. 9, (2017) 096003, arXiv:1610.07392 [hep-th]. * (7) D. Schubring, “Dissipative String Fluids,” Phys. Rev. D 91 no. 4, (2015) 043518, arXiv:1412.3135 [hep-th]. * (8) J. Hernandez and P. Kovtun, “Relativistic magnetohydrodynamics,” JHEP 05 (2017) 001, arXiv:1703.08757 [hep-th]. * (9) J. Armas and A. Jain, “Magnetohydrodynamics as superfluidity,” Phys. Rev. Lett. 122 no. 14, (2019) 141603, arXiv:1808.01939 [hep-th]. * (10) J. Armas and A. Jain, “One-form superfluids & magnetohydrodynamics,” JHEP 01 (2020) 041, arXiv:1811.04913 [hep-th]. * (11) W. G. Dixon, Special relativity: the foundation of macroscopic physics. CUP Archive, 1982. * (12) A. M. Anile, Relativistic fluids and magneto-fluids: With applications in astrophysics and plasma physics. Cambridge University Press, 2005. * (13) S. S. Komissarov, “A Godunov-type scheme for relativistic magnetohydrodynamics,” Monthly Notices of the Royal Astronomical Society 303 no. 2, (02, 1999) 343–366. * (14) P. B. Arnold, G. D. Moore, and L. G. Yaffe, “Transport coefficients in high temperature gauge theories. 1. Leading log results,” JHEP 11 (2000) 001, arXiv:hep-ph/0010177. * (15) R. Blandford and R. Znajek, “Electromagnetic extractions of energy from Kerr black holes,” Mon. Not. Roy. Astron. Soc. 179 (1977) 433–456. * (16) S. 
Komissarov, “Electrodynamics of black hole magnetospheres,” Mon. Not. Roy. Astron. Soc. 350 (2004) 407, arXiv:astro-ph/0402403. * (17) P. Goldreich and W. H. Julian, “Pulsar Electrodynamics,” Astrophys. J. 157 (Aug., 1969) 869. * (18) T. Wiegelmann and T. Sakurai, “Solar Force-free Magnetic Fields,” Living Rev. Sol. Phys. 9 (2012) 5, arXiv:1208.4693 [astro-ph.SR]. * (19) S. E. Gralla and T. Jacobson, “Spacetime approach to force-free magnetospheres,” Mon. Not. Roy. Astron. Soc. 445 no. 3, (2014) 2500–2534, arXiv:1401.6159 [astro-ph.HE]. * (20) G. Compère, S. E. Gralla, and A. Lupsasca, “Force-Free Foliations,” Phys. Rev. D 94 no. 12, (2016) 124012, arXiv:1606.06727 [math-ph]. * (21) T. Uchida, “Theory of force-free electromagnetic fields. i. general theory,” Phys. Rev. E 56 (Aug, 1997) 2181–2197. https://link.aps.org/doi/10.1103/PhysRevE.56.2181. * (22) C. Thompson and O. Blaes, “Magnetohydrodynamics in the extreme relativistic limit,” Phys. Rev. D 57 (1998) 3219–3234. * (23) S. E. Gralla and N. Iqbal, “Effective Field Theory of Force-Free Electrodynamics,” Phys. Rev. D 99 no. 10, (2019) 105004, arXiv:1811.07438 [hep-th]. * (24) P. Glorioso and D. T. Son, “Effective field theory of magnetohydrodynamics from generalized global symmetries,” arXiv:1811.04879 [hep-th]. * (25) B. Benenowski and N. Poovuttikul, “Classification of magnetohydrodynamic transport at strong magnetic field,” arXiv:1911.05554 [hep-th]. * (26) S. Grozdanov and N. Poovuttikul, “Generalised global symmetries in holography: magnetohydrodynamic waves in a strongly interacting plasma,” JHEP 04 (2019) 141, arXiv:1707.04182 [hep-th]. * (27) D. M. Hofman and N. Iqbal, “Generalized global symmetries and holography,” SciPost Phys. 4 no. 1, (2018) 005, arXiv:1707.08577 [hep-th]. * (28) D. T. Son and A. O. Starinets, “Minkowski space correlators in AdS / CFT correspondence: Recipe and applications,” JHEP 09 (2002) 042, arXiv:hep-th/0205051. * (29) P. Kovtun, D. T. Son, and A. O. 
Starinets, “Holography and hydrodynamics: Diffusion on stretched horizons,” JHEP 10 (2003) 064, arXiv:hep-th/0309213. * (30) J. F. Fuini and L. G. Yaffe, “Far-from-equilibrium dynamics of a strongly coupled non-Abelian plasma with non-zero charge density or external magnetic field,” JHEP 07 (2015) 116, arXiv:1503.07148 [hep-th]. * (31) S. Janiszewski and M. Kaminski, “Quasinormal modes of magnetic and electric black branes versus far from equilibrium anisotropic fluids,” Phys. Rev. D 93 no. 2, (2016) 025006, arXiv:1508.06993 [hep-th]. * (32) E. D’Hoker and P. Kraus, “Magnetic Brane Solutions in AdS,” JHEP 10 (2009) 088, arXiv:0908.3875 [hep-th]. * (33) S. Grozdanov, A. Lucas, and N. Poovuttikul, “Holography and hydrodynamics with weakly broken symmetries,” Phys. Rev. D 99 no. 8, (2019) 086012, arXiv:1810.10016 [hep-th]. * (34) R. A. Davison and B. Goutéraux, “Momentum dissipation and effective theories of coherent and incoherent transport,” JHEP 01 (2015) 039, arXiv:1411.1062 [hep-th]. * (35) C.-F. Chen and A. Lucas, “Origin of the Drude peak and of zero sound in probe brane holography,” Phys. Lett. B 774 (2017) 569–574, arXiv:1709.01520 [hep-th]. * (36) R. A. Davison, S. A. Gentle, and B. Goutéraux, “Impact of irrelevant deformations on thermodynamics and transport in holographic quantum critical states,” Phys. Rev. D 100 no. 8, (2019) 086020, arXiv:1812.11060 [hep-th]. * (37) E. Witten, “Multitrace operators, boundary conditions, and AdS / CFT correspondence,” arXiv:hep-th/0112258. * (38) M. Berkooz, A. Sever, and A. Shomer, “’Double trace’ deformations, boundary conditions and space-time singularities,” JHEP 05 (2002) 034, arXiv:hep-th/0112264. * (39) R. A. Davison and A. Parnachev, “Hydrodynamics of cold holographic matter,” JHEP 06 (2013) 100, arXiv:1303.6334 [hep-th]. * (40) U. Moitra, S. K. Sake, and S. P. Trivedi, “Near-Extremal Fluid Mechanics,” arXiv:2005.00016 [hep-th]. * (41) S. S. Gubser and A. 
Nellore, “Ground states of holographic superconductors,” Phys. Rev. D 80 (2009) 105007, arXiv:0908.1972 [hep-th]. * (42) R. A. Davison, B. Goutéraux, and S. A. Hartnoll, “Incoherent transport in clean quantum critical metals,” JHEP 10 (2015) 112, arXiv:1507.07137 [hep-th]. * (43) P. K. Kovtun and A. O. Starinets, “Quasinormal modes and holography,” Phys. Rev. D 72 (2005) 086009, arXiv:hep-th/0506184. * (44) E. D’Hoker, P. Kraus, and A. Shah, “RG Flow of Magnetic Brane Correlators,” JHEP 04 (2011) 039, arXiv:1012.5072 [hep-th]. * (45) E. D’Hoker and P. Kraus, “Magnetic Field Induced Quantum Criticality via new Asymptotically AdS5 Solutions,” Class. Quant. Grav. 27 (2010) 215022, arXiv:1006.2573 [hep-th]. * (46) P. Kovtun and A. Starinets, “Thermal spectral functions of strongly coupled N=4 supersymmetric Yang-Mills theory,” Phys. Rev. Lett. 96 (2006) 131601, arXiv:hep-th/0602059. * (47) M. Abramowitz and I. A. Stegun, Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables. Dover, New York, ninth dover printing, tenth gpo printing ed., 1964. * (48) D. Nickel and D. T. Son, “Deconstructing holographic liquids,” New J. Phys. 13 (2011) 075010, arXiv:1009.3094 [hep-th]. * (49) P. Glorioso, M. Crossley, and H. Liu, “A prescription for holographic Schwinger-Keldysh contour in non-equilibrium systems,” arXiv:1812.08785 [hep-th]. * (50) J. de Boer, M. P. Heller, and N. Pinzani-Fokeeva, “Holographic Schwinger-Keldysh effective field theories,” JHEP 05 (2019) 188, arXiv:1812.06093 [hep-th]. * (51) S. Bhattacharyya, V. E. Hubeny, S. Minwalla, and M. Rangamani, “Nonlinear Fluid Dynamics from Gravity,” JHEP 02 (2008) 045, arXiv:0712.2456 [hep-th]. * (52) N. Banerjee, J. Bhattacharya, S. Bhattacharyya, S. Dutta, R. Loganayagam, and P. Surowka, “Hydrodynamics from charged black branes,” JHEP 01 (2011) 094, arXiv:0809.2596 [hep-th]. * (53) J. Erdmenger, M. Haack, M. Kaminski, and A. 
Yarom, “Fluid dynamics of R-charged black holes,” JHEP 01 (2009) 055, arXiv:0809.2488 [hep-th]. * (54) S. de Haro, S. N. Solodukhin, and K. Skenderis, “Holographic reconstruction of space-time and renormalization in the AdS / CFT correspondence,” Commun. Math. Phys. 217 (2001) 595–622, arXiv:hep-th/0002230. * (55) C. Bender and S. Orszag, Advanced Mathematical Methods for Scientists and Engineers I: Asymptotic Methods and Perturbation Theory. Springer New York, 2013.
# On some efficiency conditions for vector optimization problems with uncertain cone constraints: a robust approach A. Uderzoa CONTACT A. Uderzo. Email<EMAIL_ADDRESS>a Dipartimento di Matematica e Applicazioni, Università di Milano-Bicocca, Milano, Italy ###### Abstract In the present paper, several types of efficiency conditions are established for vector optimization problems with cone constraints affected by uncertainty, but with no information of stochastic nature about the uncertain data. Following a robust optimization approach, data uncertainty is faced by handling set-valued inclusion problems. The employment of recent results about error bounds and tangential approximations of the solution set to the latter enables one to achieve necessary conditions for weak efficiency via a penalization method as well as via the modern revisitation of the Euler-Lagrange method, with or without generalized convexity assumptions. The presented conditions are formulated in terms of various nonsmooth analysis constructions, expressing first-order approximations of mappings and sets, while the metric increase property plays the role of a constraint qualification. ###### keywords: Vector optimization problem; data uncertainty; robust approach; weak efficiency condition; generalized derivative; generalized convexity. ## 1 Introduction Consider a vector optimization problem ${\rm Min}_{K}f(x)\ \hbox{\ subject to $x\in{\mathcal{R}}$},$ $({\mathcal{P}})$ where ${\mathcal{R}}\subseteq\mathbb{X}$ is a decision set defining the feasible region of the problem, $f:\mathbb{X}\longrightarrow\mathbb{Y}$ represents the criterion with respect to which decisions in ${\mathcal{R}}$ are to be optimized, and $K\subseteq\mathbb{Y}$ is a convex cone defining the partial order, according to which the outcomes of decisions are compared in the criteria space.
Throughout the paper $(\mathbb{X},\|\cdot\|)$ and $(\mathbb{Y},\|\cdot\|)$ denote real Banach spaces and it will be assumed that ${\rm int}\,K\neq\varnothing$. For vector optimization problems the concept of solution is not uniquely defined, but several notions, reflecting different aspects of the issue, can be considered. Among others, the notions of efficient and weakly efficient solution are well recognized and largely investigated in the literature devoted to vector optimization (see [9, 14, 15, 17, 24]). Recall that an element $\bar{x}\in{\mathcal{R}}$ is said to be a locally weakly efficient (for short, w-eff.) solution to $({\mathcal{P}})$ if there exists $\delta>0$ such that $f({\mathcal{R}}\cap{\rm B}(\bar{x},\delta))\cap[f(\bar{x})-{\rm int}\,K]=\varnothing;$ an element $\bar{x}\in{\mathcal{R}}$ is said to be a locally efficient (for short, eff.) solution to $({\mathcal{P}})$ if there exists $\delta>0$ such that $f({\mathcal{R}}\cap{\rm B}(\bar{x},\delta))\cap[f(\bar{x})-K]=\\{f(\bar{x})\\}.$ Clearly, any local eff. solution to $({\mathcal{P}})$ is also a local w-eff. one. The present paper deals with conditions of local weak efficiency for vector optimization problems whose decision set ${\mathcal{R}}$ is formalized by uncertain cone constraints, namely problems of the form ${\rm Min}_{K}f(x)\ \hbox{\ with $x\in S$ subject to }\ g(\omega,x)\in C,$ $({\mathcal{P}}_{\omega})$ where $C\subseteq\mathbb{Z}$ is a (proper) closed, convex cone in a real Banach space $(\mathbb{Z},\|\cdot\|)$, with $C\neq\\{\mathbf{0}\\}$, $S\subseteq\mathbb{X}$ is a closed set expressing a geometric constraint free from uncertainty, and $g:\Omega\times\mathbb{X}\longrightarrow\mathbb{Z}$ is a given mapping. Here $\Omega$ represents a given uncertainty set, which allows one to describe a decision environment characterized by a crude knowledge of the data.
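The efficiency notions recalled above can be made concrete on a toy instance with $\mathbb{X}=\mathbb{R}$, $\mathbb{Y}=\mathbb{R}^{2}$ and $K=\mathbb{R}^{2}_{+}$, so that ${\rm int}\,K$ is the open positive orthant: $\bar{x}$ is weakly efficient on a sample of the feasible region if no sampled point improves both objectives strictly. The bi-objective problem below is hypothetical, chosen only to illustrate the definition.

```python
# Toy illustration of (local) weak efficiency with K = R^2_+:
# f(x) in f(xbar) - int K  <=>  f(x) < f(xbar) componentwise (strictly).
import numpy as np

def f(x):
    # two criteria to be minimized simultaneously (hypothetical example)
    return np.array([x**2, (x - 1)**2])

def is_weakly_efficient(xbar, samples):
    fbar = f(xbar)
    # weakly efficient on the sample: no point strictly dominates xbar
    return not any(np.all(f(x) < fbar) for x in samples)

samples = np.linspace(-1.0, 2.0, 3001)
print(is_weakly_efficient(0.5, samples))   # on the Pareto segment [0, 1]: True
print(is_weakly_efficient(1.5, samples))   # strictly dominated, e.g. by x = 0.75: False
```

Sampling can only refute weak efficiency, of course; certifying it in general requires the analytic conditions developed in the paper.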
This means that the constraining mapping $g$, as a structural element of the problem, is affected by uncertainty, but this uncertainty cannot be tackled by handling probability distributions as in stochastic optimization, because such an information is not at disposal. The only information about the data element $\omega$ is that $\omega\in\Omega$. The paper often credited as a first reference in undertaking an aware and systematic study of optimization problems, whose data are affected by this form of uncertainty, is [3]. (To be more precise, in [3] the authors indicate as a forerunner of their approach A.L. Soyster, who in [26] introduced a similar point of view in dealing with uncertainly constrained problems in mathematical programming.) There, reasons for such a crude knowledge of the data are widely discussed. In this circumstance, situations quite common in reality may require that the cone constraint $g(\omega,x)\in C$ is satisfied, whatever the actual realization of $\omega\in\Omega$ is. In other terms, the decision maker is forced to regard as feasible only those elements of $S$ such that $g(\omega,x)\in C$ for every $\omega\in\Omega$. Examples of such situations, emerging especially in engineering applications, are described in [3]. On this basis the authors developed an approach hedging the decision maker against the worst cases that may occur, called the robust approach to uncertain optimization, in analogy with robust control. This ‘pessimistic’ (or ‘ultraconservative’, in Soyster’s words) approach to uncertainty opened a flourishing line of research, in scalar as well as in vector optimization, known as robust optimization (see [3, 4, 5] and references therein).
In the case of vector optimization problems such as $({\mathcal{P}}_{\omega})$, where the objective function is not affected by uncertainty, this approach reduces to considering as a feasible region the set ${\mathcal{R}}=\\{x\in S:\ g(\omega,x)\in C,\ \forall\omega\in\Omega\\}.$ Thus, by introducing the set-valued mapping $G:\mathbb{X}\rightrightarrows\mathbb{Z}$, defined as being $G(x)=g(\Omega,x)=\\{z=g(\omega,x)\in\mathbb{Z}:\ \omega\in\Omega\\},$ (1) the robust counterpart of the feasible region of $({\mathcal{P}}_{\omega})$ leads naturally to consider the so-called set-valued inclusion problem: given a (nonempty) closed set $S\subseteq\mathbb{X}$, a proper, closed and convex cone $C\subseteq\mathbb{Z}$ and a set-valued mapping $G:\mathbb{X}\rightrightarrows\mathbb{Z}$ $\hbox{ find $x\in S$ such that }\ G(x)\subseteq C.$ $(\mathcal{SVI})$ In fact, recalling that the upper inverse image of $C$ through the set-valued mapping $G$ is the set $G^{+1}(C)=\\{x\in\mathbb{X}:\ G(x)\subseteq C\\}$, one has ${\mathcal{R}}=S\cap G^{+1}(C).$ To the best of the author’s knowledge, problem $(\mathcal{SVI})$ began to be investigated independently of robust optimization in [6], which focuses on error bound estimates. Solvability and solution stability issues for $(\mathcal{SVI})$ have been studied more recently in [27, 28]. In the light of the role played by $(\mathcal{SVI})$ in the robust approach to optimization problems with uncertain constraints, it seems natural to assess the impact of the recent achievements about the solution set to $(\mathcal{SVI})$ and its approximations on the theory of optimality/efficiency conditions. Some initial results along this line of research have been obtained in the case of scalar optimization in [27, 29].
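The robust feasible region ${\mathcal{R}}=S\cap G^{+1}(C)$ can be probed numerically by discretizing the uncertainty set. The toy instance below is hypothetical, with all data chosen purely for illustration: $\mathbb{X}=\mathbb{Z}=\mathbb{R}$, $C=[0,+\infty)$, $\Omega=[0,1]$ sampled on a grid, and $g(\omega,x)=x-\omega$.

```python
# Probing x in G^{+1}(C) on a discretized uncertainty set Omega.
import numpy as np

def g(omega, x):
    # hypothetical uncertain constraint mapping
    return x - omega

def in_C(z):
    # C = [0, +inf): the cone constraint g(omega, x) in C reads g(omega, x) >= 0
    return z >= 0

def robust_feasible(x, Omega_grid):
    # x in G^{+1}(C): G(x) = g(Omega, x) must be contained in C,
    # i.e. the constraint must hold for every sampled realization of omega
    return all(in_C(g(om, x)) for om in Omega_grid)

Omega_grid = np.linspace(0.0, 1.0, 101)
print(robust_feasible(1.2, Omega_grid))   # 1.2 - omega >= 0 for all omega in [0,1]: True
print(robust_feasible(0.5, Omega_grid))   # fails at omega = 1: False
```

A finite grid only approximates the quantifier over $\Omega$; it illustrates the worst-case logic of the robust counterpart, not a certification of membership in ${\mathcal{R}}$.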
So, the present analysis can be regarded as a development of ideas and techniques, presented especially in [29], towards the specific context of vector optimization, in considering problems of the form ${\rm Min}_{K}f(x)\ \hbox{\ with $x\in S$ subject to }\ G(x)\subseteq C.$ $({\mathcal{P}}_{G})$ This analysis will be performed here by well-known techniques: in fact, some first-order efficiency conditions are obtained by means of the Clarke penalization principle, through its vector counterpart due to J.J. Ye (see [30]). Some other first-order efficiency conditions are achieved by exploiting tangential approximations of the solution set to $(\mathcal{SVI})$, following a modern revisitation of the celebrated Euler-Lagrange method. In both cases, the main tools employed come from nonsmooth and variational analysis as well as from generalized convexity. Optimality conditions for vector optimization problems with uncertain constraints are a subject intensively investigated in the last years, in particular through the robust approach (see, among others, [7, 8, 16] and references therein). A feature distinguishing the analysis here proposed is the great generality kept on $\Omega$, in the very spirit of robust optimization, owing to the introduction of the set-valued mapping $G$. The presentation of the contents is organized according to the following arrangement. Section 2 collects some basic technical preliminaries of large employment in optimization and related fields. Some more specific constructions needed in the subsequent analysis will be recalled contextually to their use. In Section 3, first-order necessary conditions for the local weak efficiency of solutions to $({\mathcal{P}}_{G})$ are established via a penalization method, with and without generalized convexity assumptions. In Section 4, different Lagrangian-type necessary conditions for local weak efficiency are formulated in terms of outer prederivatives of $G$, with or without smoothness assumptions on $f$.
The notations in use throughout the paper are mainly standard. Quite often, capital letters in bold will denote real Banach spaces. The null vector in a Banach space is denoted by $\mathbf{0}$. In a metric space setting, the closed ball centered at an element $x$, with radius $r\geq 0$, is indicated by ${\rm B}(x,r)$. In particular, in a Banach space, ${\mathbb{B}}={\rm B}(\mathbf{0},1)$. Whenever $A$ is a subset of a metric space, ${\rm B}(A,r)$ indicates the $r$-enlargement of $A$, whereas the distance of a point $x$ from $A$ is denoted by ${\rm dist}\left(x,A\right)$. If $W$ is a subset of the same metric space, ${\rm exc}(A,W)=\sup_{a\in A}{\rm dist}\left(a,W\right)$ indicates the excess of $A$ over $W$. Symbols ${\rm cl}\,A$ and ${\rm int}\,A$ denote the topological closure and the interior of $A$, respectively. If $A$ is a subset of a Banach space, its convex hull is denoted by ${\rm conv}\,A$ and, when $A$ is convex, its relative interior is denoted by ${\rm ri}\,A$. By $\mathcal{L}(\mathbb{X},\mathbb{Y})$ the Banach space of all bounded linear operators acting between $\mathbb{X}$ and $\mathbb{Y}$ is denoted, equipped with the operator norm $\|\cdot\|_{\mathcal{L}}$. In particular, $\mathbb{X}^{*}=\mathcal{L}(\mathbb{X},\mathbb{R})$ stands for the dual space of $\mathbb{X}$, in which case $\|\cdot\|_{\mathcal{L}}$ is simply marked by $\|\cdot\|$. The null vector of a dual space will be marked by $\mathbf{0}^{*}$. The duality pairing between a Banach space and its dual will be denoted by $\langle\cdot,\cdot\rangle$. Given a function $\varphi:\mathbb{X}\longrightarrow\mathbb{R}\cup\\{\pm\infty\\}$, its sublevel set is denoted by $[\varphi\leq 0]=\varphi^{-1}([-\infty,0])$, whereas $[\varphi>0]=\varphi^{-1}((0,+\infty])$ denotes the strict superlevel set of $\varphi$. The acronyms l.s.c., u.s.c. and p.h. stand for lower semicontinuous, upper semicontinuous and positively homogeneous, respectively. 
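The quantities ${\rm dist}$ and ${\rm exc}$ admit direct numerical counterparts on finite point sets. The following sketch (illustrative data and function names, not taken from the paper) computes them in the plane, together with the Hausdorff distance ${\rm haus}$ that appears later in Remark 3; note that the excess is not symmetric, while ${\rm haus}$ is.

```python
# Finite-set approximations (illustrative, not from the paper) of
# dist(x, A) = inf_{a in A} ||x - a||, exc(A, W) = sup_{a in A} dist(a, W),
# haus(A, W) = max{exc(A, W), exc(W, A)}.
import math

def dist(x, A):
    """Distance of the point x from the finite set A."""
    return min(math.dist(x, a) for a in A)

def exc(A, W):
    """Excess of A over W: the largest distance of a point of A from W."""
    return max(dist(a, W) for a in A)

def haus(A, W):
    """Hausdorff distance: symmetrization of the excess."""
    return max(exc(A, W), exc(W, A))

A = [(0.0, 0.0), (1.0, 0.0)]
W = [(0.0, 0.0), (3.0, 0.0)]
print(exc(A, W))   # 1.0: the point (1,0) lies 1 away from W
print(exc(W, A))   # 2.0: the point (3,0) lies 2 away from A
print(haus(A, W))  # 2.0: the excess is not symmetric, haus is
```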
The symbol ${\rm dom}\,\varphi=\varphi^{-1}(\mathbb{R})$ indicates the domain of $\varphi$, whenever $\varphi$ is a functional, whereas if $F:\mathbb{X}\rightrightarrows\mathbb{Y}$ is a set-valued mapping, ${\rm dom}\,F=\\{x\in\mathbb{X}:\ F(x)\neq\varnothing\\}$. ## 2 Basic tools of analysis Let $A\subseteq\mathbb{X}$ be a nonempty closed subset of a Banach space and let $\bar{x}\in A$. Nonsmooth analysis provides a large variety of concepts for the local, first-order conic approximation of $A$ near $\bar{x}$. For the purposes of the present analysis, the following ones are to be mentioned: ${\rm T}(A;\bar{x})=\\{v\in\mathbb{X}:\ \exists(v_{n})_{n}\hbox{ with }v_{n}\to v,\ \exists(t_{n})_{n}\hbox{ with }t_{n}\downarrow 0:\ \bar{x}+t_{n}v_{n}\in A,\ \forall n\in\mathbb{N}\\},$ ${\rm I}(A;\bar{x})=\\{v\in\mathbb{X}:\ \exists\delta>0:\bar{x}+tv\in A,\ \forall t\in(0,\delta)\\},$ and ${\rm I_{w}}(A;\bar{x})=\\{v\in\mathbb{X}:\ \forall\epsilon>0,\ \exists t_{\epsilon}\in(0,\epsilon):\ \bar{x}+t_{\epsilon}v\in A\\},$ called the contingent (or Bouligand tangent) cone, the feasible direction cone and the weak feasible direction cone to $A$ at $\bar{x}$, respectively. They are known to be linked by the inclusion relation of general validity ${\rm I}(A;\bar{x})\subseteq{\rm I_{w}}(A;\bar{x})\subseteq{\rm T}(A;\bar{x}),$ where strict inclusion may hold (see [25]). Whenever $A$ is locally convex around $\bar{x}$, i.e. there exists $r>0$ such that $A\cap{\rm B}(\bar{x},r)$ is a convex set, the above inclusion relation collapses to ${\rm cl}\,{\rm I}(A;\bar{x})={\rm cl}\,{\rm I_{w}}(A;\bar{x})={\rm T}(A;\bar{x})$ (see [25, Proposition 11.1.2(d)]). In such an event, ${\rm T}(A;\bar{x})$ is a closed convex cone, while ${\rm I}(A;\bar{x})$ is a convex cone. Let $Q\subseteq\mathbb{Y}$ be a cone. 
The sets ${Q}^{{}^{\oplus}}=\\{y^{*}\in\mathbb{Y}^{*}:\ \langle y^{*},y\rangle\geq 0,\quad\forall y\in Q\\}\quad\hbox{ and }\quad{Q}^{{}^{\ominus}}=-{Q}^{{}^{\oplus}}$ are called the positive and the negative dual cone of $Q$, respectively. ###### Remark 1. (i) Note that, whenever a set $A$ is locally convex around $\bar{x}$ (so ${\rm T}(A;\bar{x})$ is convex) the negative dual cone operator allows one to represent the normal cone to $A$ in the sense of convex analysis at some element $\bar{x}\in A$ in terms of the contingent cone as follows ${\rm N}(A;\bar{x})=\\{x^{*}\in\mathbb{X}^{*}:\ \langle x^{*},x-\bar{x}\rangle\leq 0,\quad\forall x\in A\\}={{\rm T}(A;\bar{x})}^{{}^{\ominus}}.$ (ii) The interaction of the negative dual cone operator with some set operations is described by the following formula: given $\Lambda\in\mathcal{L}(\mathbb{X},\mathbb{Y})$ and two closed convex cones $Q\subseteq\mathbb{Y}$ and $P\subseteq\mathbb{X}$, it holds ${(P\cap\Lambda^{-1}(Q))}^{{}^{\ominus}}={\rm cl}\,({P}^{{}^{\ominus}}+\Lambda^{*}({Q}^{{}^{\ominus}})),$ where $\Lambda^{*}\in\mathcal{L}(\mathbb{Y}^{*},\mathbb{X}^{*})$ denotes the adjoint operator to $\Lambda$ (see [25, Lemma 2.4.1]). From this formula one can derive, as a special case, the equality ${\left[\Lambda^{-1}(Q)\right]}^{{}^{\ominus}}={\rm cl}\,\Lambda^{*}({Q}^{{}^{\ominus}}),$ (2) and, under the qualification condition ${\rm int}\,P_{1}\cap{\rm int}\,P_{2}\neq\varnothing$, ${(P_{1}\cap P_{2})}^{{}^{\ominus}}={P_{1}}^{{}^{\ominus}}+{P_{2}}^{{}^{\ominus}},$ (3) with $P_{1}$ and $P_{2}$ being closed convex cones in $\mathbb{X}$ (see [1, Table 4.3 (5)b)]). Note that in the equality $(\ref{eq:dconeadj})$ the closure operation can be omitted if $\Lambda^{-1}({\rm ri}\,Q)\neq\varnothing$. Such a condition is evidently satisfied if $\Lambda\mathbb{X}\supseteq Q$ and $Q\neq\\{\mathbf{0}\\}$ (see, for instance, [23, Corollary 16.3.2]). 
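Equality (2) can be checked on a small finite-dimensional instance (the data below are illustrative, not taken from the paper):

```latex
% A worked instance of equality (2), not from the paper.
% Data: X = R^2, Y = R, \Lambda x = x_1 - x_2, Q = [0,+\infty).
\[
\Lambda^{-1}(Q)=\{x\in\mathbb{R}^{2}:\ x_{1}\geq x_{2}\},\qquad
Q^{\ominus}=(-\infty,0],\qquad
\Lambda^{*}s=(s,-s).
\]
% The negative dual cone of the half-plane {x : x_1 >= x_2} is the ray
% spanned by (-1,1), and the same ray is produced by the right-hand side:
\[
\left[\Lambda^{-1}(Q)\right]^{\ominus}
=\{t(-1,1):\ t\geq 0\}
=\Lambda^{*}(Q^{\ominus}),
\]
% already closed, consistently with the final note above: here
% \Lambda\mathbb{X}=\mathbb{R}\supseteq Q and Q \ne {0}, so the closure
% operation can indeed be dropped.
```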
Let $K\subseteq\mathbb{Y}$ be a (proper) convex cone inducing a partial order $\leq_{{}_{K}}$ on $\mathbb{Y}$ and let $f:\mathbb{X}\longrightarrow\mathbb{Y}$ be a mapping between Banach spaces. Then $f$ is said to be $K$-convex on the convex set $A\subseteq\mathbb{X}$ if the set ${\rm epi}_{K}(f)=\\{(x,y)\in\mathbb{X}\times\mathbb{Y}:\ x\in A,\ f(x)\leq_{{}_{K}}y\\}$ is convex. If, in addition, $A$ is a cone and $f$ is also positively homogeneous, then $f$ is said to be $K$-sublinear on $A$. It is well known that if $f$ is $K$-convex on $A$, then $f(A)+K$ is convex, while if $A$ is a cone and $f$ is $K$-sublinear, then $f(A)+K$ is a convex cone. Following [11, Definition 2.3], a mapping $f$ is said to be $K$-convexlike on a set (not necessarily convex) $A$ if the set $f(A)+K$ is convex. ###### Remark 2. In Section 3 use will be made of the fact, readily proved by handling the related definitions, that if $\nu:\mathbb{X}\longrightarrow\mathbb{R}$ is a sublinear function on $\mathbb{X}$ and $e\in K$, then the mapping $\nu e:\mathbb{X}\longrightarrow\mathbb{Y}$, defined by $x\mapsto\nu(x)e$ is $K$-sublinear on $\mathbb{X}$. Generalized convexity notions apply also to set-valued mappings. Following [6], a set-valued mapping $F:\mathbb{X}\rightrightarrows\mathbb{Z}$ between Banach spaces is said to be $C$-concave on $\mathbb{X}$, where $C\subseteq\mathbb{Z}$ is a (proper) convex cone, if $F(tx_{1}+(1-t)x_{2})\subseteq tF(x_{1})+(1-t)F(x_{2})+C,\quad\forall t\in[0,1],\ \forall x_{1},\,x_{2}\in\mathbb{X}.$ Some examples of $C$-concave set-valued mappings of interest in optimization can be found in [28]. For the purposes of the present analysis, the special class of $C$-concave set-valued mappings known as fans is to be mentioned. 
Recall that, after [13], a set-valued mapping $H:\mathbb{X}\rightrightarrows\mathbb{Z}$ is called a fan if it fulfils all the following conditions: (i) it is p.h.; (ii) $\mathbf{0}\in H(\mathbf{0})$; (iii) it is convex-valued; (iv) $H(x_{1}+x_{2})\subseteq H(x_{1})+H(x_{2}),\quad\forall x_{1},\,x_{2}\in\mathbb{X}$. Fans may appear in a variety of forms. In Section 4, only fans which are generated by bundles of linear mappings will actually be employed, i.e. fans $H_{\mathcal{G}}:\mathbb{X}\rightrightarrows\mathbb{Z}$ that can be represented as $H_{\mathcal{G}}(x)=\\{\Lambda x:\ \Lambda\in\mathcal{G}\\},$ where $\mathcal{G}\subseteq\mathcal{L}(\mathbb{X},\mathbb{Z})$ is a (nonempty) convex and weakly closed set. ###### Remark 3. Whenever a fan $H_{\mathcal{G}}$ is generated by a bounded set $\mathcal{G}$, it turns out to be a Lipschitz set-valued mapping, i.e. it holds ${\rm haus}(H_{\mathcal{G}}(x_{1}),H_{\mathcal{G}}(x_{2}))\leq l\|x_{1}-x_{2}\|,\quad\forall x_{1},\,x_{2}\in\mathbb{X},$ with $l\geq\sup\\{\|\Lambda\|_{\mathcal{L}}:\ \Lambda\in\mathcal{G}\\}$, where ${\rm haus}(A,W)=\max\\{{\rm exc}(A,W),{\rm exc}(W,A)\\}$ denotes the Hausdorff distance between two sets $A$ and $W$ (see [29, Remark 2.14(iii)]). ## 3 Weak efficiency conditions via penalization ###### Definition 3.1 ($K$-Lipschitz continuity). Let $f:\mathbb{X}\longrightarrow\mathbb{Y}$ be a mapping between normed spaces and let $K\subseteq\mathbb{Y}$ be a convex cone, with ${\rm int}\,K\neq\varnothing$. $f$ is said to be $K$-Lipschitz on the set $D\subseteq\mathbb{X}$ if there exist a constant $\ell_{f}>0$ and a vector $e\in{\rm int}\,K\cap{\mathbb{B}}$ such that $f(x_{1})\in f(x_{2})-\ell_{f}\|x_{1}-x_{2}\|e+K,\quad\forall x_{1},\,x_{2}\in D.$ If $\bar{x}\in\mathbb{X}$ and $f$ is $K$-Lipschitz on a set $D={\rm B}(\bar{x},\delta)$ for some $\delta>0$, then $f$ is said to be $K$-Lipschitz near $\bar{x}$. 
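A simple instance of Definition 3.1 (illustrative, not taken from the paper): with the ordering cone given by the nonnegative orthant, any mapping with Lipschitz components is $K$-Lipschitz.

```latex
% Illustration of Definition 3.1 (not from the paper).
% Take Y = R^2, K = R^2_+ and e = (1,1)/\sqrt{2}, so e \in int K and ||e||=1.
% If f = (f_1, f_2) and each f_i is Lipschitz on D with constant L, then for
% all x_1, x_2 \in D and i = 1, 2:
\[
f_{i}(x_{1})-f_{i}(x_{2})\geq-L\|x_{1}-x_{2}\|
=-\sqrt{2}\,L\,\|x_{1}-x_{2}\|\,e_{i},
\]
% which, read componentwise, is exactly the defining inclusion
\[
f(x_{1})\in f(x_{2})-\sqrt{2}\,L\,\|x_{1}-x_{2}\|\,e+K,
\]
% i.e. f is K-Lipschitz on D with \ell_f = \sqrt{2}L and this choice of e.
```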
The above notion has been used in [30] as a key concept to extend the Clarke penalization principle from the scalar case to vector optimization problems. This is done here directly through a local error bound function, whose definition is recalled below. ###### Definition 3.2 (Local error bound function). Let $\bar{x}\in{\mathcal{R}}\subseteq S\subseteq\mathbb{X}$. A function $\psi:\mathbb{X}\longrightarrow[0,+\infty]$ is said to be a local error bound function for ${\mathcal{R}}$ near $\bar{x}$ if there exists $\delta>0$ such that both the following conditions are satisfied: (i) ${\rm dist}\left(x,{\mathcal{R}}\right)\leq\psi(x),\quad\forall x\in{\rm B}(\bar{x},\delta)\cap S$; (ii) ${\rm dist}\left(x,{\mathcal{R}}\right)=\psi(x),\quad\forall x\in{\mathcal{R}}$. ###### Proposition 3.3. ([30, Theorem 4.2(i)]) With reference to a problem $({\mathcal{P}})$, let $\bar{x}\in{\mathcal{R}}$ and suppose that: (i) $f$ is $K$-Lipschitz near $\bar{x}$, with constant $\ell_{f}$ and vector $e\in{\rm int}\,K$; (ii) $\psi:\mathbb{X}\longrightarrow[0,+\infty]$ is a local error bound function for ${\mathcal{R}}$ near $\bar{x}$. Then, for any $\ell\geq\ell_{f}$, every local w-eff. solution to $({\mathcal{P}})$ is also a local w-eff. solution of the problem ${\rm Min}_{K}[f(x)+\ell\psi(x)e].$ $({\mathcal{P}}_{\ell})$ Furthermore, if ${\mathcal{R}}$ is closed, for any $\ell>\ell_{f}$, every local eff. solution to $({\mathcal{P}})$ is also a local eff. solution of the problem $({\mathcal{P}}_{\ell})$. Proposition 3.3 enables one to free the original problem from its constraints. Notice indeed that problem $({\mathcal{P}}_{\ell})$ is unconstrained. For problems such as $({\mathcal{P}}_{G})$, where the feasible region is structured as a solution set to $(\mathcal{SVI})$, one has to adapt the local error bound function to the constraint definition. 
In the present analysis, the following merit function $\nu_{G,C}:\mathbb{X}\longrightarrow\mathbb{R}\cup\\{\pm\infty\\}$ for problems $(\mathcal{SVI})$ is exploited to treat the data uncertainty in the constraints: $\nu_{G,C}(x)=\sup_{z\in G(x)}{\rm dist}\left(z,C\right)={\rm exc}(G(x),C).$ Henceforth, the standing assumption ${\rm dom}\,G=\mathbb{X}$ is made. As a consequence, one has $\nu_{G,C}:\mathbb{X}\longrightarrow[0,+\infty]$ and therefore the following characterization of the feasible region of $({\mathcal{P}}_{G})$: ${\mathcal{R}}=S\cap[\nu_{G,C}\leq 0].$ The next lemma singles out a constraint qualification, under which the merit function $\nu_{G,C}$ is shown to actually work as a local error bound function. In order to formulate it, let us recall that, after [10], given a function $\varphi:X\longrightarrow\mathbb{R}\cup\\{\pm\infty\\}$ defined on a metric space $(X,d)$ and $\bar{x}\in\varphi^{-1}(\mathbb{R})$, the strong slope of $\varphi$ at $\bar{x}$ is defined as the quantity $\displaystyle|\nabla\varphi|(\bar{x})=\left\\{\begin{array}[]{ll}0,&\hbox{ if $\bar{x}$ is a local minimizer of $\varphi$},\\\ \displaystyle\limsup_{x\to\bar{x}}{\varphi(\bar{x})-\varphi(x)\over d(x,\bar{x})},&\hbox{ otherwise.}\end{array}\right.$ In view of the formulation of the next lemma, it is useful to observe that, if as a metric space $X$ one takes a closed subset $S\subseteq\mathbb{X}$ containing $\bar{x}$ and as a distance $d$ one takes the distance induced by $\|\cdot\|$, the above definition becomes $\displaystyle|\nabla_{S}\varphi|(\bar{x})=\left\\{\begin{array}[]{ll}0,&\hbox{ if $\bar{x}$ is a local minimizer}\\\ &\hbox{ of $\varphi$ over $S$},\\\ \displaystyle\inf_{r>0}\sup_{x\in{\rm B}(\bar{x},r)\cap S\backslash\\{\bar{x}\\}}{\varphi(\bar{x})-\varphi(x)\over\|x-\bar{x}\|},&\hbox{ otherwise.}\end{array}\right.$ ###### Lemma 3.4. 
Let $G:\mathbb{X}\rightrightarrows\mathbb{Z}$, $S$ and $C$ be as in problem $(\mathcal{SVI})$, and let $\bar{x}\in{\mathcal{R}}$. Suppose that: (i) $G$ is l.s.c. in a neighbourhood of $\bar{x}$; (ii) there exist positive $\sigma$ and $r$ such that $|\nabla_{S}\nu_{G,C}|(x)\geq\sigma,\quad\forall x\in{\rm B}(\bar{x},r)\cap S\cap[\nu_{G,C}>0].$ $(\mathcal{CQ})$ Then the function $\psi=\sigma^{-1}\nu_{G,C}$ is a local error bound function for ${\mathcal{R}}$ near $\bar{x}$. ###### Proof. From [27, Lemma 2.3(i)] it is known that the lower semicontinuity of $G$ (in the sense of set-valued mappings) implies the lower semicontinuity of the functional $\nu_{G,C}$. Thus, by hypothesis (i), for some $\delta_{0}>0$ it is true that $\nu_{G,C}$ is l.s.c. on ${\rm B}(\bar{x},\delta_{0})\cap S$. Notice that, as a closed subset of a Banach space, ${\rm B}(\bar{x},\delta_{0})\cap S$ is a complete metric space, if equipped with the induced metric. Besides, without any loss of generality, it is possible to assume that the $r>0$ appearing in hypothesis (ii) satisfies $r<\delta_{0}$. Notice that the case ${\rm B}(\bar{x},r)\cap S\cap[\nu_{G,C}>0]=\varnothing$ means ${\rm B}(\bar{x},r)\cap S\subseteq G^{+1}(C)\cap S$, so it holds ${\rm dist}\left(x,{\mathcal{R}}\right)=0\leq\psi(x)$ for every $x\in{\rm B}(\bar{x},r)\cap S$ and any $\psi:\mathbb{X}\longrightarrow[0,+\infty]$. Otherwise, it is possible to apply [2, Corollary 3.1] with $X={\rm B}(\bar{x},r)\cap S$, according to which ${\rm dist}\left(x,{\mathcal{R}}\right)={\rm dist}\left(x,S\cap[\nu_{G,C}\leq 0]\right)\leq\frac{\nu_{G,C}(x)}{\sigma},\quad\forall x\in{\rm B}(\bar{x},r/2)\cap S.$ Thus, setting $\delta=r/2$ and $\psi=\sigma^{-1}\nu_{G,C}$, the condition (i) in Definition 3.2 is fulfilled. Since under the above assumptions ${\mathcal{R}}$ is closed, one has ${\rm dist}\left(x,{\mathcal{R}}\right)=0=\psi(x),\quad\forall x\in{\mathcal{R}},$ so also the condition (ii) in Definition 3.2 is readily satisfied. This completes the proof. 
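To make the merit function $\nu_{G,C}$ concrete, the following sketch evaluates it on a toy robust constraint; every choice here ($\mathbb{Z}=\mathbb{R}^{2}$, $C$ the nonnegative orthant, $g(\omega,x)=x-\omega$ over a finite uncertainty set, and the function names) is an illustrative assumption, not taken from the paper.

```python
# Toy evaluation of nu_{G,C}(x) = exc(G(x), C) with C = R^2_+ (illustrative
# assumptions, not from the paper). For C = R^2_+, dist(z, C) is the norm of
# the negative part of z.
import math

def dist_to_C(z):
    """Euclidean distance of z in R^2 from the nonnegative orthant."""
    return math.hypot(min(z[0], 0.0), min(z[1], 0.0))

Omega = [(0.0, 0.0), (1.0, 0.0), (0.0, 2.0)]   # finite uncertainty set

def G(x):
    """G(x) = g(Omega, x), with the uncertain constraint g(omega, x) = x - omega."""
    return [(x[0] - w[0], x[1] - w[1]) for w in Omega]

def merit(x):
    """nu_{G,C}(x) = sup over z in G(x) of dist(z, C): worst-case violation."""
    return max(dist_to_C(z) for z in G(x))

print(merit((1.0, 2.0)))  # 0.0: G(x) is contained in C, x is robustly feasible
print(merit((0.0, 0.0)))  # 2.0: the worst violation comes from omega = (0, 2)
```

A merit value of zero characterizes membership in $G^{+1}(C)$, matching the identity ${\mathcal{R}}=S\cap[\nu_{G,C}\leq 0]$.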
∎ With the above specialization of $\psi$, under the constraint qualification $(\mathcal{CQ})$, the penalization principle for vector optimization takes the following form. ###### Proposition 3.5. With reference to a problem $({\mathcal{P}}_{G})$, let $\bar{x}\in{\mathcal{R}}=S\cap G^{+1}(C)$. Suppose that: (i) $f$ is $K$-Lipschitz near $\bar{x}$, with constant $\ell_{f}$ and $e\in{\rm int}\,K$; (ii) $G$ is l.s.c. in a neighbourhood of $\bar{x}$ and condition $(\mathcal{CQ})$ is satisfied. Then, for any $\ell\geq\ell_{f}$, every local w-eff. solution to $({\mathcal{P}}_{G})$ is also a local w-eff. solution to problem ${\rm Min}_{K}[f(x)+\ell\sigma^{-1}\nu_{G,C}(x)e]\quad\hbox{ subject to $x\in S$ }.$ $({\mathcal{P}}_{G,\ell})$ For any $\ell>\ell_{f}$, every local eff. solution to $({\mathcal{P}}_{G})$ is a local eff. solution to $({\mathcal{P}}_{G,\ell})$. ###### Proof. Since $f$ is $K$-Lipschitz near $\bar{x}$, ${\rm int}\,K$ is open and, under the above assumptions, according to Lemma 3.4, $\sigma^{-1}\nu_{G,C}$ is a local error bound function for ${\mathcal{R}}$, the first assertion in Proposition 3.3 can be invoked. This yields that $\bar{x}$ is a local w-eff. solution to problem $({\mathcal{P}}_{G,\ell})$, for any $\ell\geq\ell_{f}$. Since ${\mathcal{R}}$ is closed, in the case where $\bar{x}\in{\mathcal{R}}$ is a local eff. solution to $({\mathcal{P}}_{G})$, it suffices to apply the second assertion in Proposition 3.3, in order to conclude that $\bar{x}$ is a local eff. solution to problem $({\mathcal{P}}_{G,\ell})$, for any $\ell>\ell_{f}$. ∎ The constraint qualification $(\mathcal{CQ})$ is expressed in terms of $\nu_{G,C}$. This function can be built by means of the problem data. Nevertheless, it would be useful to formulate conditions ensuring the validity of $(\mathcal{CQ})$ directly on $G$. This can be done by exploiting the metric increase property, as introduced in [27]. ###### Definition 3.6 (Metrically $C$-increasing mapping). 
Let $S\subseteq\mathbb{X}$ be a (nonempty) closed set and let $C\subseteq\mathbb{Z}$ be a closed, convex cone, with $C\neq\\{\mathbf{0}\\}$. A set-valued mapping $F:\mathbb{X}\rightrightarrows\mathbb{Z}$ between Banach spaces is said to be metrically $C$-increasing around $\bar{x}\in{\rm dom}\,F$, relative to $S$, if there exist $\delta>0$ and $\alpha>1$ such that $\forall x\in{\rm B}(\bar{x},\delta)\cap S,\ \forall r\in(0,\delta)\quad\exists z\in{\rm B}(x,r)\cap S:\ {\rm B}(F(z),\alpha r)\subseteq{\rm B}(F(x)+C,r).$ (6) The quantity ${\rm inc}_{C}(F;S;\bar{x})=\sup\\{\alpha>1:\ \exists\delta>0\hbox{ for which (\ref{in:mincrSx}) holds}\\}$ is called the exact bound of metric $C$-increase of $F$ around $\bar{x}$, relative to $S$. Several examples of metrically increasing mappings, along with an infinitesimal criterion for detecting the occurrence of this property, are provided in [27]. The next proposition highlights the role of the metric increase property as a constraint qualification condition. ###### Proposition 3.7. Let $G:\mathbb{X}\rightrightarrows\mathbb{Z}$, $S$ and $C$ be as in problem $(\mathcal{SVI})$, and let $\bar{x}\in{\mathcal{R}}$. Suppose that: (i) $G$ is l.s.c. in a neighbourhood of $\bar{x}$; (ii) $G$ is metrically $C$-increasing around $\bar{x}$, relative to $S$. Then condition $(\mathcal{CQ})$ holds true with $\sigma=\alpha-1$ and $r=\delta$, for any $\alpha\in(1,{\rm inc}_{C}(G;S;\bar{x}))$ and $\delta$ as in $(\ref{in:mincrSx})$. ###### Proof. As already seen, by hypothesis (i) the function $\nu_{G,C}$ is l.s.c. in ${\rm B}(\bar{x},\delta_{0})$, for some $\delta_{0}>0$. According to hypothesis (ii), having fixed $\alpha\in(1,{\rm inc}_{C}(G;S;\bar{x}))$, there exists $\delta_{\alpha}>0$ such that $(\ref{in:mincrSx})$ holds. Observe that the nature of the metric $C$-increase property around $\bar{x}$ allows one to assume without loss of generality that $\delta_{\alpha}<\delta_{0}$. 
Now, let us take an arbitrary $x\in{\rm B}(\bar{x},\delta_{\alpha})\cap S\cap[\nu_{G,C}>0]$. Since, under the current assumptions, $\nu_{G,C}$ is l.s.c. at $x\in{\rm B}(\bar{x},\delta_{0})$, there exists $\delta_{x}>0$ such that $\nu_{G,C}(z)>0$ for every $z\in{\rm B}(x,\delta_{x})$. Take any $r>0$ such that $r<\min\\{\delta_{\alpha},\delta_{x}\\}$. According to $(\ref{in:mincrSx})$, there exists $z_{r}\in{\rm B}(x,r)\cap S$ such that ${\rm B}(G(z_{r}),\alpha r)\subseteq{\rm B}(G(x)+C,r).$ (7) Notice that it must be $z_{r}\neq x$. Indeed, since $\nu_{G,C}(x)>0$ (namely, ${\rm exc}(G(x),C)>0$), if it were $z_{r}=x$, then by inclusion $(\ref{in:mincrSzr})$ and [27, Lemma 2.2], one would find $\nu_{G,C}(x)+\alpha r={\rm exc}({\rm B}(G(x),\alpha r),C)={\rm exc}({\rm B}(G(z_{r}),\alpha r),C)\leq{\rm exc}({\rm B}(G(x)+C,r),C)=\nu_{G,C}(x)+r,$ whence $\alpha\leq 1$, contradicting the fact that $\alpha>1$. 
Furthermore, by applying once again inclusion $(\ref{in:mincrSzr})$ and taking into account that $\nu_{G,C}(z_{r})>0$, so [27, Lemma 2.2] still works, one obtains $\nu_{G,C}(z_{r})={\rm exc}({\rm B}(G(z_{r}),\alpha r),C)-\alpha r\leq{\rm exc}({\rm B}(G(x)+C,r),C)-\alpha r={\rm exc}(G(x)+C,C)+r-\alpha r=\nu_{G,C}(x)+(1-\alpha)r.$ As $z_{r}\in{\rm B}(x,r)\cap S$, from the last inequality chain it follows $\nu_{G,C}(x)-\nu_{G,C}(z_{r})\geq(\alpha-1)r\geq(\alpha-1)\|z_{r}-x\|.$ This inequality says that $x$ fails to be a local minimizer for $\nu_{G,C}$ and therefore allows one to get the following estimate $|\nabla_{S}\nu_{G,C}|(x)=\limsup_{z\xrightarrow{S}x}\frac{\nu_{G,C}(x)-\nu_{G,C}(z)}{\|x-z\|}\geq\lim_{r\downarrow 0}\frac{\nu_{G,C}(x)-\nu_{G,C}(z_{r})}{\|x-z_{r}\|}\geq\alpha-1.$ By arbitrariness of $x\in{\rm B}(\bar{x},\delta_{\alpha})\cap S\cap[\nu_{G,C}>0]$, the last inequalities show that condition $(\mathcal{CQ})$ is satisfied with $\sigma=\alpha-1$ and $r=\delta_{\alpha}$, thereby completing the proof. ∎ On the basis of the constraint system analysis presented above, one is now in a position to formulate necessary weak efficiency conditions for problems $({\mathcal{P}}_{G})$. To this aim, it remains to recall some further elements of nonsmooth analysis. Let $\varphi:\mathbb{X}\longrightarrow\mathbb{R}\cup\\{\pm\infty\\}$ be a function which is finite around $\bar{x}\in\varphi^{-1}(\mathbb{R})$. Following [19, Section 1.3.2], the set $\widehat{\partial}^{+}\varphi(\bar{x})=\left\\{x^{*}\in\mathbb{X}^{*}:\ \limsup_{x\to\bar{x}}{\varphi(x)-\varphi(\bar{x})-\langle x^{*},x-\bar{x}\rangle\over\|x-\bar{x}\|}\leq 0\right\\}$ is called the Fréchet upper subdifferential of $\varphi$ at $\bar{x}$. 
It is readily seen that, whenever $\varphi$ is (Fréchet) differentiable at $\bar{x}$, then $\widehat{\partial}^{+}\varphi(\bar{x})=\\{{\rm D}\varphi(\bar{x})\\}$, whereas whenever $\varphi:\mathbb{X}\longrightarrow\mathbb{R}$ is concave, the set $\widehat{\partial}^{+}\varphi(\bar{x})$ reduces to the superdifferential of $\varphi$ at $\bar{x}$, in the sense of convex analysis. ###### Remark 4. The following variational description of the Fréchet upper subdifferential of $\varphi$ at $\bar{x}$ will be exploited in the sequel: for every $x^{*}\in\widehat{\partial}^{+}\varphi(\bar{x})$ there exists a function $\varsigma:\mathbb{X}\longrightarrow\mathbb{R}$, Fréchet differentiable at $\bar{x}$ and with $\varphi(\bar{x})=\varsigma(\bar{x})$, such that $\varphi(x)\leq\varsigma(x)$ for every $x\in\mathbb{X}$ and ${\rm D}\varsigma(\bar{x})=x^{*}$ (to get it, it suffices to remember that $\widehat{\partial}^{+}\varphi(\bar{x})=-\widehat{\partial}(-\varphi)(\bar{x})$, where $\widehat{\partial}$ denotes the Fréchet subdifferential, and then apply [19, Theorem 1.88(i)]). Given a mapping $f:\mathbb{X}\longrightarrow\mathbb{Y}$ between Banach spaces and $\bar{x}\in\mathbb{X}$, $f^{\prime}(\bar{x};v)$ indicates the directional derivative of $f$ at $\bar{x}$, in the direction $v\in\mathbb{X}$. If its directional derivative exists for every $v\in\mathbb{X}$, $f$ is said to be directionally differentiable at $\bar{x}$. A first-order necessary condition for weak efficiency of solutions to $({\mathcal{P}}_{G})$ can be stated as follows. ###### Theorem 3.8 (Weak efficiency condition via penalization). With reference to a problem $({\mathcal{P}}_{G})$, let $\bar{x}\in{\mathcal{R}}=S\cap G^{+1}(C)$ be a local w-eff. solution to $({\mathcal{P}}_{G})$. Suppose that: (i) $f$ is $K$-Lipschitz near $\bar{x}$, with constant $\ell_{f}$ and $e\in{\rm int}\,K$, and is directionally differentiable at $\bar{x}$; (ii) $G$ is l.s.c. 
in a neighbourhood of $\bar{x}$ and metrically $C$-increasing around $\bar{x}$, relative to $S$; (iii) $\widehat{\partial}^{+}\nu_{G,C}(\bar{x})\neq\varnothing$. Then for any $\alpha\in(1,{\rm inc}_{C}(G;S;\bar{x}))$, $\ell\geq\ell_{f}$ and $x^{*}\in\widehat{\partial}^{+}\nu_{G,C}(\bar{x})$ it must be $f^{\prime}(\bar{x};v)+\frac{\ell}{\alpha-1}\langle x^{*},v\rangle e\not\in{\rm int}\,K,\quad\forall v\in{\rm I}(S;\bar{x}).$ (8) ###### Proof. Under the assumptions made, in the light of Proposition 3.7 the condition $(\mathcal{CQ})$ is satisfied. Thus, it is possible to invoke Proposition 3.5, according to which $\bar{x}$ turns out to be a w-eff. solution of problem $({\mathcal{P}}_{G,\ell})$, for any $\alpha\in(1,{\rm inc}_{C}(G;S;\bar{x}))$ and $\ell\geq\ell_{f}$. This means that there exists $\delta>0$ such that $\left(f+\frac{\ell}{\alpha-1}\nu_{G,C}e\right)({\rm B}(\bar{x},\delta)\cap S)\cap[f(\bar{x})-{\rm int}\,K]=\varnothing.$ (9) Take an arbitrary $v\in{\rm I}(S;\bar{x})\cap{\mathbb{B}}$. By reducing the value of $\delta>0$ in $(\ref{eq:weffVOPGlempty})$ if needed, one can assume that $\bar{x}+tv\in S$, for all $t\in(0,\delta)$. Therefore, from the relation in $(\ref{eq:weffVOPGlempty})$ it follows $\frac{f(\bar{x}+tv)-f(\bar{x})}{t}+\frac{\ell\nu_{G,C}(\bar{x}+tv)e}{(\alpha-1)t}\in\mathbb{Y}\backslash(-{\rm int}\,K),\quad\forall t\in(0,\delta).$ (10) Let $x^{*}$ be an arbitrary element of $\widehat{\partial}^{+}\nu_{G,C}(\bar{x})$. According to the characterization of upper Fréchet subgradients mentioned in Remark 4, there exists a Fréchet differentiable function $\varsigma:\mathbb{X}\longrightarrow\mathbb{R}$ such that $\varsigma(\bar{x})=\nu_{G,C}(\bar{x})=0$, $\varsigma(x)\geq\nu_{G,C}(x)$ for every $x\in\mathbb{X}$, and ${\rm D}\varsigma(\bar{x})=x^{*}$. 
Therefore, one has $\varsigma(\bar{x}+tv)-\nu_{G,C}(\bar{x}+tv)\geq 0,\quad\forall t\in(0,+\infty),$ whence $\frac{\ell[\varsigma(\bar{x}+tv)-\nu_{G,C}(\bar{x}+tv)]e}{(\alpha-1)t}\in K,\quad\forall t\in(0,+\infty).$ (11) By combining $(\ref{eq:weffpenal})$ and $(\ref{in:sigmameritposK})$ and observing that, for every $y\in\mathbb{Y}\backslash(-{\rm int}\,K)$ it holds $y+K\subseteq\mathbb{Y}\backslash(-{\rm int}\,K)$, one obtains $\frac{f(\bar{x}+tv)-f(\bar{x})}{t}+\frac{\ell\varsigma(\bar{x}+tv)e}{(\alpha-1)t}\in\mathbb{Y}\backslash(-{\rm int}\,K),\quad\forall t\in(0,\delta).$ By passing to the limit as $t\downarrow 0$ in the last inclusion, while taking into account that the cone $\mathbb{Y}\backslash(-{\rm int}\,K)$ is closed and that $f$ is directionally differentiable at $\bar{x}$, one achieves the relation in $(\ref{notin:weffconpenal})$ for any $v\in{\rm I}(S;\bar{x})\cap{\mathbb{B}}$. Since the mapping $v\mapsto f^{\prime}(\bar{x};v)+\frac{\ell}{\alpha-1}\langle x^{*},v\rangle e$ is positively homogeneous and $\mathbb{Y}\backslash(-{\rm int}\,K)$ is a cone, the validity of relation in $(\ref{notin:weffconpenal})$ can be extended to the whole set ${\rm I}(S;\bar{x})$. By arbitrariness of $x^{*}$, this reasoning completes the proof. ∎ Among the hypotheses of Theorem 3.8, the most involved is (iii), so it deserves some comment. In the next remark, some elements for discussion are provided in order to clarify the meaning of such an assumption. ###### Remark 5. According to its definition, the merit function $\nu_{G,C}$ is nonnegative and, since $\bar{x}\in G^{+1}(C)$, one has $\nu_{G,C}(\bar{x})=0$, so the hypothesis (iii) in Theorem 3.8 is about the nontriviality of the Fréchet upper subdifferential at a (global) minimizer. A systematic study of this tool of nonsmooth analysis (actually, not so often employed as its lower counterpart) and related optimality conditions for constrained minimization problems can be found in [18, 20]. 
In particular, it was shown that, for a given function $\varphi:\mathbb{X}\longrightarrow\mathbb{R}\cup\\{\pm\infty\\}$, defined on an Asplund space and locally Lipschitz around $\bar{x}$, the nonemptiness of $\widehat{\partial}^{+}\varphi(\bar{x})$ is automatic if $\varphi$ is upper regular at $\bar{x}$, i.e. $\widehat{\partial}^{+}\varphi(\bar{x})=\partial^{+}\varphi(\bar{x})$, where $\partial^{+}\varphi(\bar{x})$ denotes the limiting upper subdifferential of $\varphi$ at $\bar{x}$, defined through the basic normals to the hypergraph of $\varphi$ (see [19, Definition 1.78]). In such a circumstance, it holds $\partial_{\rm Cl}\varphi(\bar{x})={\rm cl}^{*}\,\widehat{\partial}^{+}\varphi(\bar{x}),$ where $\partial_{\rm Cl}\varphi(\bar{x})$ denotes the Clarke generalized gradient of $\varphi$ at $\bar{x}$ and ${\rm cl}^{*}\,A$ marks the closure of a set $A$ with respect to the weak${}^{*}\,$ topology (see, for more details, [20, Remark 4.5] and [20, Section 5.5.4]). Note that, as it is possible to check at once, $\nu_{G,C}$ is locally Lipschitz around $\bar{x}$ whenever $G$ is Lipschitz continuous around $\bar{x}$. By introducing proper convexity/concavity assumptions on the problem data $S$, $f$ and $G$, it is possible to establish a first-order necessary weak efficiency condition in a scalarized form. To this aim, the next remark will be useful. ###### Remark 6. (i) It is readily seen that, whenever $G:\mathbb{X}\rightrightarrows\mathbb{Z}$ is $C$-bounded around a point $\bar{x}\in\mathbb{X}$, i.e. there exists $\delta>0$ such that $G(x)\backslash C$ is bounded for every $x\in{\rm B}(\bar{x},\delta)$, then $\bar{x}\in{\rm int}\,({\rm dom}\,\nu_{G,C})$. (ii) Whenever $G:\mathbb{X}\rightrightarrows\mathbb{Z}$ is $C$-concave on $\mathbb{X}$ the function $\nu_{G,C}$ is convex on $\mathbb{X}$ (see, for instance, [28, Remark 4.14]). ###### Theorem 3.9 (Weak efficiency condition via penalization under convexity). 
With reference to a problem $({\mathcal{P}}_{G})$, let $\bar{x}\in{\mathcal{R}}=S\cap G^{+1}(C)$ be a local w-eff. solution to $({\mathcal{P}}_{G})$. Suppose that: (i) $S$ is locally convex around $\bar{x}$; (ii) $f$ is $K$-Lipschitz near $\bar{x}$, with constant $\ell_{f}$ and $e\in{\rm int}\,K$, and is directionally differentiable at $\bar{x}$, with $f^{\prime}(\bar{x};\cdot):\mathbb{X}\longrightarrow\mathbb{Y}$ being $K$-sublinear; (iii) $G$ is l.s.c. in a neighbourhood of $\bar{x}$ and metrically $C$-increasing around $\bar{x}$, relative to $S$; (iv) $G$ is $C$-bounded around $\bar{x}$ and Hausdorff u.s.c. at $\bar{x}$; (v) $G$ is $C$-concave in $\mathbb{X}$. Then there exists $y^{*}\in{K}^{{}^{\oplus}}\backslash\\{\mathbf{0}^{*}\\}$ such that $\langle y^{*},f^{\prime}(\bar{x};v)\rangle+\frac{\ell}{\alpha-1}\langle y^{*},\nu_{G,C}^{\prime}(\bar{x};v)e\rangle\geq 0,\quad\forall v\in{\rm I}(S;\bar{x}).$ (12) ###### Proof. Let us start by observing that, by virtue of the $C$-concavity hypothesis (v), $\nu_{G,C}$ is convex on $\mathbb{X}$. By hypothesis (iv), it is $\bar{x}\in{\rm int}\,({\rm dom}\,\nu_{G,C})$. Moreover, since $G$ is Hausdorff u.s.c. at $\bar{x}$, function $\nu_{G,C}$ is also u.s.c. at $\bar{x}$ (see [27, Lemma 2.3(ii)]). So, remembering that it is also l.s.c. around $\bar{x}$ on the account of hypothesis (iii), $\nu_{G,C}$ turns out to be continuous and hence directionally differentiable at $\bar{x}$ (remember [31, Theorem 2.4.9]). From Remark 2 it follows that the mapping $v\mapsto\nu_{G,C}^{\prime}(\bar{x};v)e$ is $K$-sublinear on $\mathbb{X}$. As a sum of two $K$-sublinear mappings, $f^{\prime}(\bar{x};\cdot)+\frac{\ell}{\alpha-1}\nu_{G,C}^{\prime}(\bar{x};\cdot)e$ is $K$-sublinear on $\mathbb{X}$. On the other hand, since according to hypothesis (i) $S$ is locally convex near $\bar{x}$, the cone ${\rm I}(S;\bar{x})$ is convex. 
It follows that the subset of $\mathbb{Y}$, given by $f^{\prime}(\bar{x};{\rm I}(S;\bar{x}))+\frac{\ell}{\alpha-1}\nu_{G,C}^{\prime}(\bar{x};{\rm I}(S;\bar{x}))e+K,$ is a convex cone as an image of a convex cone through a $K$-sublinear mapping. Since $\bar{x}$ is a local w-eff. solution of $({\mathcal{P}}_{G})$, by arguing as in the proof of Theorem 3.8, it is possible to show that $\left[f^{\prime}(\bar{x};{\rm I}(S;\bar{x}))+\frac{\ell}{\alpha-1}\nu_{G,C}^{\prime}(\bar{x};{\rm I}(S;\bar{x}))e\right]\cap(-{\rm int}\,K)=\varnothing.$ This entails that also $\left[f^{\prime}(\bar{x};{\rm I}(S;\bar{x}))+\frac{\ell}{\alpha-1}\nu_{G,C}^{\prime}(\bar{x};{\rm I}(S;\bar{x}))e+K\right]\cap(-{\rm int}\,K)=\varnothing.$ In such a circumstance one can invoke the Eidelheit theorem (see, for instance, [31, Theorem 1.1.3]). It ensures the existence of $y^{*}\in\mathbb{Y}^{*}\backslash\\{\mathbf{0}^{*}\\}$ and $\gamma\in\mathbb{R}$, such that $\langle y^{*},f^{\prime}(\bar{x};v)+\frac{\ell}{\alpha-1}\nu_{G,C}^{\prime}(\bar{x};v)e\rangle\geq\gamma\geq\langle y^{*},y\rangle,$ (13) $\forall v\in{\rm I}(S;\bar{x}),\ \forall y\in{\rm cl}\,(-{\rm int}\,K)=-K.$ Since, in particular, it holds $\langle y^{*},f^{\prime}(\bar{x};\mathbf{0})+\frac{\ell}{\alpha-1}\nu_{G,C}^{\prime}(\bar{x};\mathbf{0})e\rangle=0\geq\gamma\geq 0=\langle y^{*},\mathbf{0}\rangle,$ it follows that $\gamma$ must be $0$ and, as a consequence, the second inequality in $(\ref{in:linsepdderfmer})$ gives $y^{*}\in{K}^{{}^{\oplus}}$. This completes the proof. ∎ ## 4 Weak efficiency conditions via tangential approximations Throughout this section, the notion of outer prederivative, introduced in [13], will be employed as a first-order approximation of set-valued mappings. ###### Definition 4.1 (Outer prederivative). Let $F:\mathbb{X}\rightrightarrows\mathbb{Z}$ be a set-valued mapping between Banach spaces and let $\bar{x}\in{\rm dom}\,F$. A p.h. 
set-valued mapping $H_{F}(\bar{x};\cdot):\mathbb{X}\rightrightarrows\mathbb{Z}$ is said to be an outer prederivative of $F$ at $\bar{x}$ if for every $\epsilon>0$ there exists $\delta>0$ such that $F(x)\subseteq F(\bar{x})+H_{F}(\bar{x};x-\bar{x})+\epsilon\|x-\bar{x}\|{\mathbb{B}},\quad\forall x\in{\rm B}(\bar{x},\delta).$ Extended discussions about this generalized differentiation concept can be found, for instance, in [13, 12, 21]. For subsequent considerations, it is worth observing that the notion of outer prederivative collapses to the notion of Bouligand derivative (or B-derivative) when both $F$ and $H_{F}(\bar{x};\cdot)$ are single-valued and $H_{F}(\bar{x};\cdot)$ is continuous. More precisely, following [22], a mapping $f:\mathbb{X}\longrightarrow\mathbb{Z}$ between Banach spaces is said to be $B$-differentiable at $\bar{x}\in\mathbb{X}$ if there exists a p.h. and continuous mapping ${\rm D}_{B}f(\bar{x};\cdot):\mathbb{X}\longrightarrow\mathbb{Z}$, called the B-derivative of $f$ at $\bar{x}$, such that $\lim_{x\to\bar{x}}{f(x)-f(\bar{x})-{\rm D}_{B}f(\bar{x};x-\bar{x})\over\|x-\bar{x}\|}=0.$ By exploiting the metric $C$-increase property of $G$ as a constraint qualification, the following inner tangential approximation of ${\mathcal{R}}$ has been established in [29]; it is expressed in terms of outer prederivatives and tangent cones. Its proof was provided in a finite-dimensional setting, but a perusal of the involved arguments reveals that it extends without any modification to a Banach space setting. ###### Proposition 4.2. ([29, Theorem 3.1]) Let $G:\mathbb{X}\rightrightarrows\mathbb{Z}$, $S$ and $C$ be as in problem $(\mathcal{SVI})$, and let $\bar{x}\in{\mathcal{R}}=S\cap G^{+1}(C)$. Suppose that: (i) $G$ is l.s.c. in a neighbourhood of $\bar{x}$; (ii) $G$ is metrically $C$-increasing around $\bar{x}$, relative to $S$; (iii) $G$ admits $H_{G}(\bar{x};\cdot):\mathbb{X}\rightrightarrows\mathbb{Z}$ as an outer prederivative at $\bar{x}$.
Then it holds $H_{G}(\bar{x};\cdot)^{+1}(C)\cap{\rm I_{w}}(S;\bar{x})\subseteq{\rm T}({\mathcal{R}};\bar{x}).$ (14) If, in addition, (iv) $H_{G}(\bar{x};\cdot)$ is Lipschitz, then the stronger inclusion holds $H_{G}(\bar{x};\cdot)^{+1}(C)\cap{\rm T}(S;\bar{x})\subseteq{\rm T}({\mathcal{R}};\bar{x}).$ (15) Following the well-known Euler-Lagrange scheme for deriving necessary optimality conditions in the presence of constraints, from the above tangential approximation of the feasible region of $({\mathcal{P}}_{G})$ it is possible to obtain the following first-order weak efficiency condition. ###### Theorem 4.3. With reference to a problem $({\mathcal{P}}_{G})$, let $\bar{x}\in{\mathcal{R}}$ be a local w-eff. solution. Suppose that: (i) $f$ is $B$-differentiable at $\bar{x}$; (ii) $G$ is l.s.c. in a neighbourhood of $\bar{x}$ and is metrically $C$-increasing around $\bar{x}$, relative to $S$; (iii) $G$ admits $H_{G}(\bar{x};\cdot):\mathbb{X}\rightrightarrows\mathbb{Z}$ as an outer prederivative at $\bar{x}$. Then, ${\rm D}_{B}f(\bar{x};v)\not\in-{\rm int}\,K,\quad\forall v\in H_{G}(\bar{x};\cdot)^{+1}(C)\cap{\rm I_{w}}(S;\bar{x}).$ (16) If, in addition, (iv) ${\rm D}_{B}f(\bar{x};\cdot):\mathbb{X}\longrightarrow\mathbb{Y}$ is $K$-convexlike on the set $H_{G}(\bar{x};\cdot)^{+1}(C)\cap{\rm T}(S;\bar{x})$; (v) $H_{G}(\bar{x};\mathbf{0})\subseteq C$; (vi) $H_{G}(\bar{x};\cdot)$ is Lipschitz, then there exists $y^{*}\in{K}^{{}^{\oplus}}\backslash\\{\mathbf{0}^{*}\\}$ such that $y^{*}\circ{\rm D}_{B}f(\bar{x};v)\geq 0,\quad\forall v\in H_{G}(\bar{x};\cdot)^{+1}(C)\cap{\rm T}(S;\bar{x}).$ (17) In particular, whenever $f$ is Fréchet differentiable at $\bar{x}$, it results in $-{\rm D}f(\bar{x})^{*}y^{*}\in{[H_{G}(\bar{x};\cdot)^{+1}(C)\cap{\rm T}(S;\bar{x})]}^{{}^{\ominus}}.$ (18) ###### Proof. Upon hypotheses (ii) and (iii), the inner tangential approximation given by $(14)$ can be employed.
So, take an arbitrary $v\in H_{G}(\bar{x};\cdot)^{+1}(C)\cap{\rm I_{w}}(S;\bar{x})$. As it is also $v\in{\rm T}({\mathcal{R}};\bar{x})$, there exist sequences $(v_{n})$, with $v_{n}\to v$, and $(t_{n})$, with $t_{n}\downarrow 0$, as $n\to\infty$, such that $\bar{x}+t_{n}v_{n}\in{\mathcal{R}}$. Since $\bar{x}$ is a local w-eff. solution to $({\mathcal{P}}_{G})$, by recalling hypothesis (i), one obtains ${\rm D}_{B}f(\bar{x};v_{n})+\frac{o(\bar{x};t_{n}v_{n})}{t_{n}}=\frac{f(\bar{x}+t_{n}v_{n})-f(\bar{x})}{t_{n}}\in\mathbb{Y}\backslash(-{\rm int}\,K).$ By passing to the limit as $n\to\infty$, taking into account that $\mathbb{Y}\backslash(-{\rm int}\,K)$ is a closed set and the mapping ${\rm D}_{B}f(\bar{x};\cdot)$ is continuous, one achieves the condition in $(16)$. Upon the hypothesis (iv), the set ${\rm D}_{B}f(\bar{x};H_{G}(\bar{x};\cdot)^{+1}(C)\cap{\rm T}(S;\bar{x}))+K$ is a convex subset of $\mathbb{Y}$. By arguing as in the first part of the proof, one can show that ${\rm D}_{B}f(\bar{x};v)\not\in-{\rm int}\,K,\quad\forall v\in H_{G}(\bar{x};\cdot)^{+1}(C)\cap{\rm T}(S;\bar{x}),$ which amounts to saying $\left[{\rm D}_{B}f(\bar{x};H_{G}(\bar{x};\cdot)^{+1}(C)\cap{\rm T}(S;\bar{x}))\right]\cap(-{\rm int}\,K)=\varnothing.$ Notice that this implies $[{\rm D}_{B}f(\bar{x};H_{G}(\bar{x};\cdot)^{+1}(C)\cap{\rm T}(S;\bar{x}))+K]\cap(-{\rm int}\,K)=\varnothing.$ By the Eidelheit theorem there exist $y^{*}\in\mathbb{Y}^{*}\backslash\\{\mathbf{0}^{*}\\}$ and $\gamma\in\mathbb{R}$ such that $\langle y^{*},y\rangle\leq\gamma\leq\langle y^{*},{\rm D}_{B}f(\bar{x};v)\rangle,$ (19) $\quad\forall y\in{\rm cl}\,(-{\rm int}\,K)=-K,\quad\forall v\in H_{G}(\bar{x};\cdot)^{+1}(C)\cap{\rm T}(S;\bar{x}).$ Since owing to hypothesis (v) it is $\mathbf{0}\in-K\cap H_{G}(\bar{x};\cdot)^{+1}(C)\cap{\rm T}(S;\bar{x})$, according to the inequalities in $(19)$ it must be $\gamma=0$.
Consequently, the first inequality in $(19)$ gives $y^{*}\in{K}^{{}^{\oplus}}$, whereas the second one yields $(17)$. In the case of Fréchet differentiability of $f$ at $\bar{x}$, inclusion $(18)$ is a direct consequence of inequality $(17)$. The proof is complete. ∎ ###### Remark 7. The property of a mapping to be $K$-convexlike on a set $D$ depends essentially on the set $D$. Notice that, if $D_{1}\subseteq D$, a mapping $K$-convexlike on $D$ may fail to be $K$-convexlike on $D_{1}$. Thus, hypothesis (iv) crucially links the behaviour of ${\rm D}_{B}f(\bar{x};\cdot)$ with the geometry of the set $H_{G}(\bar{x};\cdot)^{+1}(C)\cap{\rm T}(S;\bar{x})$. On the other hand, the $K$-sublinearity property is stable under convex restrictions, in the sense that if $h:\mathbb{X}\longrightarrow\mathbb{Y}$ is $K$-sublinear on a set $D\subseteq\mathbb{X}$, it still remains so on each convex subset $D_{1}\subseteq D$. This fact makes it convenient to consider the following replacement of hypothesis (iv), with separate (but stricter) requirements on the involved problem data: (iv’) ${\rm D}_{B}f(\bar{x};\cdot)$ is $K$-sublinear on $\mathbb{X}$, $H_{G}(\bar{x};\cdot)$ is $C$-superlinear on $\mathbb{X}$, and $S$ is locally convex near $\bar{x}$. In such a circumstance, ${\rm T}(S;\bar{x})$ is a convex cone, as is $H_{G}(\bar{x};\cdot)^{+1}(C)$. Since the set ${\rm D}_{B}f(\bar{x};H_{G}(\bar{x};\cdot)^{+1}(C)\cap{\rm T}(S;\bar{x}))+K$ is convex, ${\rm D}_{B}f(\bar{x};\cdot)$ turns out to be $K$-convexlike on the set $H_{G}(\bar{x};\cdot)^{+1}(C)\cap{\rm T}(S;\bar{x})$. The next result provides a refinement of Theorem 4.3, which can be established, under proper qualification conditions, by replacing general first-order approximations of the data with the local convexity of $S$ and linear approximations of $f$ and $G$. ###### Theorem 4.4 (Multiplier rule via fans).
Let $\bar{x}\in{\mathcal{R}}$ be a local w-eff. solution to problem $({\mathcal{P}}_{G})$. Suppose that hypotheses (i)-(iii) of Theorem 4.3 are satisfied and, in addition, that: (iv) $S$ is locally convex near $\bar{x}$; (v) $f$ is Fréchet differentiable at $\bar{x}$; (vi) $H_{G}(\bar{x};\cdot)$ is a fan finitely generated by ${\mathcal{G}}={\rm conv}\,\\{\Lambda_{1},\dots,\Lambda_{p}\\}$; (vii) the further qualification condition holds $\left(\bigcap_{i=1}^{p}{\rm int}\,\Lambda_{i}^{-1}(C)\right)\cap{\rm int}\,{\rm T}(S;\bar{x})\neq\varnothing.$ (20) Then there exist $y^{*}\in{K}^{{}^{\oplus}}\backslash\\{\mathbf{0}^{*}\\}$ and, for each $i=1,\dots,p$, $x^{*}_{i}\in\mathbb{X}^{*}$ and sequences $(z^{*}_{i,n})_{n}$ in $\mathbb{Z}^{*}$, with $z^{*}_{i,n}\in{C}^{{}^{\ominus}}$ and $\Lambda_{i}^{*}z^{*}_{i,n}\to x_{i}^{*}$, such that $\mathbf{0}^{*}\in{\rm D}f(\bar{x})^{*}y^{*}+\sum_{i=1}^{p}x_{i}^{*}+{\rm N}(S;\bar{x}).$ (21) ###### Proof. Observe that by hypothesis (vi), it is $H_{G}(\bar{x};\mathbf{0})=\\{\mathbf{0}\\}\subseteq C$, so hypothesis (v) of Theorem 4.3 is fulfilled. As recalled in Remark 3, since the bundle generating $H_{G}(\bar{x};\cdot)$ is bounded according to hypothesis (vi), $H_{G}(\bar{x};\cdot)$ is Lipschitz. Moreover, it is readily seen that if $H_{G}(\bar{x};\cdot)$ is generated by ${\mathcal{G}}={\rm conv}\,\\{\Lambda_{1},\dots,\Lambda_{p}\\}$, it holds $H_{G}(\bar{x};\cdot)^{+1}(C)=\bigcap_{i=1}^{p}\Lambda_{i}^{-1}(C).$ Since each element $\Lambda_{i}^{-1}(C)$ in the above intersection is a convex cone, as is ${\rm T}(S;\bar{x})$ by hypothesis (iv), the set $H_{G}(\bar{x};\cdot)^{+1}(C)\cap{\rm T}(S;\bar{x})$ turns out to be a convex cone. By hypothesis (v) it is ${\rm D}_{B}f(\bar{x};\cdot)={\rm D}f(\bar{x})$, so, as a linear mapping, it is $K$-convexlike on $H_{G}(\bar{x};\cdot)^{+1}(C)\cap{\rm T}(S;\bar{x})$. One is therefore in a position to apply Theorem 4.3.
Thus, there exists $y^{*}\in{K}^{{}^{\oplus}}\backslash\\{\mathbf{0}^{*}\\}$ such that $-{\rm D}f(\bar{x})^{*}y^{*}\in{\left[\bigcap_{i=1}^{p}\Lambda_{i}^{-1}(C)\cap{\rm T}(S;\bar{x})\right]}^{{}^{\ominus}}.$ By virtue of the qualification condition in hypothesis (vii), on account of the relations discussed in Remark 1, the last inclusion implies $-{\rm D}f(\bar{x})^{*}y^{*}\in\sum_{i=1}^{p}{\left(\Lambda_{i}^{-1}(C)\right)}^{{}^{\ominus}}+{{\rm T}(S;\bar{x})}^{{}^{\ominus}}=\sum_{i=1}^{p}{\rm cl}\,\Lambda_{i}^{*}({C}^{{}^{\ominus}})+{\rm N}(S;\bar{x}).$ This means that there must exist $x_{i}^{*}\in{\rm cl}\,\Lambda_{i}^{*}({C}^{{}^{\ominus}})$, for every $i=1,\dots,p$, such that $-{\rm D}f(\bar{x})^{*}y^{*}\in\sum_{i=1}^{p}x_{i}^{*}+{\rm N}(S;\bar{x}),$ which immediately entails the existence of sequences $(z^{*}_{i,n})_{n}$ in $\mathbb{Z}^{*}$ as asserted in the statement, thereby completing the proof. ∎ ###### Remark 8. It is worth noting that, whenever ${\rm int}\,C\neq\varnothing$ and $\bar{x}\in{\rm int}\,S$, the qualification condition in $(20)$ is satisfied provided that the following Slater-type condition holds: $\exists x_{0}\in\mathbb{X}:\ \Lambda_{i}x_{0}\in{\rm int}\,C,\quad\forall i=1,\dots,p.$ (22) As one expects, the formulation of the multiplier rule expressed by $(21)$ simplifies if specialized to a finite-dimensional space setting. This is done in the next result, where the adjoint operation (which can be viewed as a matrix transposition) is now denoted by the symbol $\top$. ###### Corollary 4.5 (Weak Pareto efficiency condition in finite-dimensional spaces). Let $\bar{x}\in{\mathcal{R}}$ be a local w-eff. solution to problem $({\mathcal{P}}_{G})$, with $\mathbb{X}=\mathbb{R}^{n}$, $\mathbb{Y}=\mathbb{R}^{m}$, $\mathbb{Z}=\mathbb{R}^{p}$, $K=\mathbb{R}^{m}_{+}$ and ${\rm int}\,C\neq\varnothing$. Suppose that hypotheses (i)-(vi) are satisfied, whereas (vii) is replaced by condition $(22)$.
Then there exist $v\in\mathbb{R}^{m}_{+}\backslash\\{\mathbf{0}\\}$ and $c_{i}\in{C}^{{}^{\ominus}}$, $i=1,\dots,p$, such that $\mathbf{0}\in{\rm D}f(\bar{x})^{\top}v+\sum_{i=1}^{p}\Lambda_{i}^{\top}c_{i}+{\rm N}(S;\bar{x}).$ (23) If, in particular, $\bar{x}\in{\rm int}\,S$, it results in $\mathbf{0}={\rm D}f(\bar{x})^{\top}v+\sum_{i=1}^{p}\Lambda_{i}^{\top}c_{i}.$ ###### Proof. Under condition $(22)$, hypothesis (vii) of Theorem 4.4 is also fulfilled. So, in applying this result, it suffices to observe that $\Lambda_{i}x_{0}\in{\rm int}\,C$ implies $x_{0}\in\Lambda_{i}^{-1}({\rm int}\,C)=\Lambda_{i}^{-1}({\rm ri}\,C)\neq\varnothing$. Thus, by taking into account what was noted in Remark 1(ii), it is true that ${[\Lambda^{-1}_{i}(C)]}^{{}^{\ominus}}=\Lambda_{i}^{\top}({C}^{{}^{\ominus}}),\quad\forall i=1,\dots,p.$ This completes the proof. ∎ ## References * [1] Aubin J.-P., Frankowska H.: Set-valued analysis. Birkhäuser Boston, Inc., Boston, MA, 1990. * [2] Azé D., Corvellec J.-N.: Nonlinear local error bounds via a change of metric. J. Fixed Point Theory Appl. 16 (2014), no. 1-2, 351–372. * [3] Ben-Tal A., Nemirovski A.: Robust convex optimization. Math. Oper. Res. 23 (1998), no. 4, 769–805. * [4] Ben-Tal A., Nemirovski A.: Robust optimization–methodology and applications. Math. Program. 92 (2002), no. 3, Ser. B, 453–480. * [5] Ben-Tal A., Ghaoui L.E., Nemirovski A.: Robust optimization. Princeton Series in Applied Mathematics. Princeton University Press, Princeton, 2009. * [6] Castellani M.: Error bounds for set-valued maps. Generalized convexity and optimization for economic and financial decisions, 121–135, Pitagora, Bologna, 1999. * [7] Chen J., Köbis E., Yao J.-C.: Optimality conditions and duality for robust nonsmooth multiobjective optimization problems with constraints. J. Optim. Theory Appl. 181 (2019), no. 2, 411–436. * [8] Chuong T.D.: Optimality and duality for robust multiobjective optimization problems. Nonlinear Anal. 134 (2016), 127–143.
* [9] Dauer J.P., Stadler W.: A survey of vector optimization in infinite-dimensional spaces II. J. Optim. Theory Appl. 51 (1986), no. 2, 205–241. * [10] De Giorgi E., Marino A., Tosques M.: Problems of evolution in metric spaces and maximal decreasing curves. Atti Accad. Naz. Lincei Rend. Cl. Sci. Fis. Mat. Natur. (8) 68 (1980), 180–187. * [11] Frenk J.B., Kassay G.: On classes of generalized convex functions, Gordan-Farkas type theorems, and Lagrangian duality. J. Optim. Theory Appl. 102 (1999), no. 2, 315–343. * [12] Gaydu M., Geoffroy M.H., Marcelin Y.: Prederivatives of convex set-valued maps and applications to set optimization problems. J. Global Optim. 64 (2016), no. 1, 141–158. * [13] Ioffe A.D.: Nonsmooth analysis: differential calculus of nondifferentiable mappings. Trans. Amer. Math. Soc. 266 (1981), no. 1, 1–56. * [14] Jahn J.: Vector optimization. Theory, applications, and extensions. Springer–Verlag, Berlin, 2004. * [15] Khan A.A., Tammer C., Zălinescu C.: Set-valued optimization. An introduction with applications. Springer, Heidelberg, 2015. * [16] Kuroiwa D., Lee G.M.: On robust multiobjective optimization. Vietnam J. Math. 40 (2012), no. 2-3, 305–317. * [17] Luc D.T.: Theory of vector optimization. Lecture Notes in Economics and Mathematical Systems, 319. Springer-Verlag, Berlin, 1989. * [18] Mordukhovich B.S.: Necessary conditions in nonsmooth minimization via lower and upper subgradients. Set-Valued Anal. 12 (2004), no. 1-2, 163–193. * [19] Mordukhovich B.S.: Variational analysis and generalized differentiation. I. Basic theory. Springer, Berlin, 2006. * [20] Mordukhovich B.S.: Variational analysis and generalized differentiation. II. Applications. Springer-Verlag, Berlin, 2006. * [21] Pang C.H.J.: Generalized differentiation with positively homogeneous maps: applications in set-valued analysis and metric regularity. Math. Oper. Res. 36 (2011), no. 3, 377–397. * [22] Robinson S.M.: An implicit-function theorem for a class of nonsmooth functions.
Math. Oper. Res. 16 (1991), no. 2, 292–309. * [23] Rockafellar R.T.: Convex analysis. Princeton University Press, Princeton, N.J., 1970. * [24] Sawaragi Y., Nakayama H., Tanino T.: Theory of multiobjective optimization. Mathematics in Science and Engineering, 176. Academic Press, Inc., Orlando, FL, 1985. * [25] Schirotzek W.: Nonsmooth analysis. Universitext. Springer, Berlin, 2007. * [26] Soyster A.L.: Convex programming with set-inclusive constraints and applications to inexact linear programming. Oper. Res. 21 (1973), no. 5, 1154–1157. * [27] Uderzo A.: On some generalized equations with metrically $C$-increasing mappings: solvability and error bounds with applications to optimization. Optimization 68 (2019), no. 1, 227–253. * [28] Uderzo A.: On differential properties of multifunctions defined implicitly by set-valued inclusions, to appear in Pure Appl. Funct. Anal. * [29] Uderzo A.: On tangential approximations of the solution set of set-valued inclusions, to appear in J. Appl. Anal. * [30] Ye J.J.: The exact penalty principle. Nonlinear Anal. 75 (2012), no. 3, 1642–1654. * [31] Zălinescu C.: Convex analysis in general vector spaces. World Scientific Publishing Co., River Edge, NJ, 2002.
Further author information: send correspondence to B.S., E-mail: <EMAIL_ADDRESS>, Phone: +49 36427 863 54 # TAUKAM: A 6k$\,\times\,$6k prime-focus camera for the Tautenburg Schmidt Telescope Bringfried Stecklum Thüringer Landessternwarte Tautenburg, Sternwarte 5, 07778 Tautenburg, Germany Jochen Eislöffel Thüringer Landessternwarte Tautenburg, Sternwarte 5, 07778 Tautenburg, Germany Sylvio Klose Thüringer Landessternwarte Tautenburg, Sternwarte 5, 07778 Tautenburg, Germany Uwe Laux Thüringer Landessternwarte Tautenburg, Sternwarte 5, 07778 Tautenburg, Germany Tom Löwinger Thüringer Landessternwarte Tautenburg, Sternwarte 5, 07778 Tautenburg, Germany Helmut Meusinger Thüringer Landessternwarte Tautenburg, Sternwarte 5, 07778 Tautenburg, Germany Michael Pluto Thüringer Landessternwarte Tautenburg, Sternwarte 5, 07778 Tautenburg, Germany Johannes Winkler Thüringer Landessternwarte Tautenburg, Sternwarte 5, 07778 Tautenburg, Germany Frank Dionies Leibniz Institut für Astrophysik, An der Sternwarte 16, 14482 Potsdam, Germany ###### Abstract TAUKAM stands for “TAUtenburg KAMera”; it will become the new prime-focus imager for the Tautenburg Schmidt telescope. It employs an e2v 6k $\times$ 6k CCD and is being manufactured by Spectral Instruments Inc. We describe the design of the instrument and the auxiliary components, its specifications, as well as the concept for integrating the device into the telescope infrastructure. First light is foreseen in 2017. TAUKAM will boost the observational capabilities of the telescope with regard to optical wide-field surveys. ###### keywords: Schmidt telescope, CCD camera ## 1 INTRODUCTION The 2-m telescope of the Karl Schwarzschild Observatory, Tautenburg (IAU station code 033) – which became the Thüringer Landessternwarte (TLS) after the German re-unification – was built by Carl Zeiss Jena and went into operation in 1960 [1]. It is a versatile device which offers three optical configurations (Coude, Nasmyth, Schmidt).
The Schmidt mode utilizes the 1.34-m correction plate. In this mode the telescope still represents the largest imaging Schmidt system with a vignette-free field of view (FOV) of 3$.\\!\\!^{\degree}$3$\,\times\,$3$.\\!\\!^{\degree}$3, and is regularly used during dark time for wide-field imaging (typically $\pm$1 week around New Moon). The replacement of the photographic plate holder with a prime-focus CCD camera in 1996 resulted in coverage of only a fraction of the FOV. Replacing the outdated 2k $\times$ 2k CCD camera with a more powerful one was urgently needed to foster our research programs, which are tailored to the local observing conditions ($\sim$1000$\pm$200 observing hours per year at a median seeing of $\sim$2′′). These comprise, e.g., variability studies of young stars and quasars, target-of-opportunity observations, and follow-up of near-Earth objects (NEOs). In particular, the slow read-out electronics of the old camera leads to substantial overheads. Moreover, its pixel scale of 1$.\\!\\!^{\prime\prime}$24 implies undersampling of the point spread function (PSF) under good seeing conditions. When the Thuringian Ministry for Education, Science, and Culture issued a call for proposals in 2014 within the framework of their infrastructure development program, we submitted an application for a new instrument, named TAUKAM. For reasons that will become clearer in the following, the 1110S CCD camera from Spectral Instruments Inc. (SI) was used as the reference case in our proposal. The latter was approved, which allowed us to submit a Europe-wide call for tender in 2015. The successful bid was offered by Photon Lines SAS, the French distributor of products from SI. The contract, signed in September 2015, includes the delivery of a 1110S CCD camera equipped with an e2v 6k$\,\times\,$6k detector at the beginning of 2017. It is foreseen to start regular observations at the end of that year.
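The sampling argument above can be checked with a few lines of arithmetic. The sketch below uses the new detector's 15 $\mu$m pixels and 0$.\\!\\!^{\prime\prime}$771 scale (quoted later in Table 1); the focal length is an inferred quantity, derived from these numbers rather than stated in the text.

```python
ARCSEC_PER_RAD = 206265.0

# TAUKAM detector geometry (cf. Table 1): 15 um pixels at 0.771"/px.
PIX_UM, SCALE = 15.0, 0.771
NX, NY = 6144, 6160

# Focal length implied by pixel size and scale (inferred, ~4 m,
# consistent with a 1.34-m aperture at f/3).
focal_mm = ARCSEC_PER_RAD * PIX_UM * 1e-3 / SCALE

# Unvignetted field of view in square degrees.
fov_sqdeg = (NX * SCALE / 3600.0) * (NY * SCALE / 3600.0)

# Pixels per FWHM at the median seeing of ~2"; Nyquist sampling needs >= 2.
px_per_fwhm_old = 2.0 / 1.24    # old 2k x 2k camera, 1.24"/px
px_per_fwhm_new = 2.0 / SCALE   # TAUKAM

# Full-frame read-out time at the fastest 752 kHz rate on 4 ports.
t_read_s = NX * NY / (4 * 752e3)

print(f"focal length ~{focal_mm:.0f} mm, FOV {fov_sqdeg:.2f} sq deg")
print(f"px per FWHM: old {px_per_fwhm_old:.1f}, new {px_per_fwhm_new:.1f}")
print(f"read-out: {t_read_s:.1f} s")
```

Within rounding, these numbers reproduce the quoted 1.73 ${{}^{\square}}\degree$ FOV and 12.5 s read-out time, and make the undersampling of the old camera (fewer than 2 pixels per seeing FWHM) explicit.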
In the following sections we describe the technical and scientific specifications of TAUKAM, the design of the dewar including the entry window, as well as the concepts for integrating the instrument into the telescope infrastructure. While various technical components are already fixed, others are still being developed or under consideration. Thus, the present publication represents a summary of the implementation of TAUKAM rather than a final report. ## 2 Telescope optics and tube Figure 1: Cut of the Schmidt optics with the 1110S camera mounted at the focal plane flange. Light passes the Schmidt plate (left, light brown), gets reflected at the spherical mirror (right), and is focused onto the CCD. Only the tube section close to the focal plane is shown. The Tautenburg telescope represents a classical Schmidt system with a spherical 2-m main mirror and a 1.34-m correction lens (focal ratio f/3, see Fig. 1). Apart from the Schmidt mode, it also offers Nasmyth and Coude configurations, where the latter is used during bright time for high-resolution spectroscopy. Being a classical Schmidt telescope, it features a closed tube of square cross section. The telescope guiding is realized with a separate 0.3-m refractor housed in the tube, which hosts an intensified video camera. The closed tube has to be taken into account with regard to the heat dissipation of TAUKAM. Unlike the former CCD cameras, its read-out electronics will be integrated in the dewar, i.e. located close to the focal plane. The effect of the dissipation of the predicted power of $\sim$35 W on both image quality and tube temperature was tested experimentally using a device of similar power consumption. While no adverse effects were noticed, a preventive countermeasure is to allow for air circulation by opening the lateral tube sliders which were formerly used to insert the photographic plate holder.
Table 1: Detector specifications

CCD format | 6144 (H) $\times$ 6160 (V) pixel
---|---
Image area | 92.2 mm $\times$ 92.4 mm
Pixel size and scale | 15 $\mu$m, 0$.\\!\\!^{\prime\prime}$771
FoV (unvignetted) | 1.73 ${{}^{\square}}\degree$
Read speeds | $\sim$100, $\sim$433, and 752 kHz
Read-out times (using 4 ports) | 94, 21, and 12.5 s
Read-out noise at 100, 752 kHz | goal $<$2.5, 4.1 e-
Full-well capacity | 150,000 e-
Operating temperature | $-110$…$-100\,\degree$C
Dark current | 0.0008 $\rm e^{-}\,pix^{-1}\,s^{-1}$
Flatness | $<$40 $\mu$m (peak to valley)

## 3 CCD detector The detector of TAUKAM will be a 6144 (H) $\times$ 6160 (V) back-illuminated scientific CCD sensor (CCD-231-C6-1-G11) produced by e2v, Chelmsford (UK). It is of deep-depletion silicon type, features four outputs with 16-bit analog-to-digital (AD) conversion, and is operated in non-inverted mode. The pixel size of 15 $\mu$m corresponds to 0$.\\!\\!^{\prime\prime}$771, which ensures proper image sampling for the prevailing seeing conditions. The silicon carbide package provides a compact footprint with guaranteed flatness at cryogenic temperatures. The selected astro-multi-2 AR coating provides more than 50 % quantum efficiency over a wavelength range from 350 to 920 nm. Using this detector, TAUKAM will provide a FOV of 1.73 ${{}^{\square}}\degree$, a more than fourfold increase compared to its predecessor. The full-frame read-out time amounts to about half of that of the present device and may be much shorter still, which implies a large overhead reduction. Table 1 summarizes the major specifications of the detector. ## 4 CCD Dewar ### 4.1 Dewar window and design The dewar window is a plano-convex lens which will correct the field curvature of the Schmidt focal plane to match the flat CCD surface. The optical design was performed using Zemax (a trademark of Zemax LLC) and our own software [2]. The lens diameter amounts to 166 mm and its curvature radius is 1349.2 mm. Fig.
2 (left) shows the beam quality provided by the field lens for the center, edge mid-points, and corners of the detector. The mechanical stress test of the lens design was performed using a finite element method (FEM) analysis (Fig. 2, right). The peak stress of $\sim$10 MPa due to the outside atmospheric pressure is much lower than the critical value of 54 MPa for a substrate made from Corning 7980 fused silica. The lens was produced by Korth Kristalle GmbH, Altenholz (Germany), and shipped to the camera manufacturer for integration in early 2016. --- Figure 2: Left: Encircled energy for center, edge mid-points, and corner positions of the field compared to the diffraction limit for polychromatic light. Right: Stress distribution across a quarter of the window. ### 4.2 Dewar body and mounting flange --- Figure 3: Left: Exploded view of major camera components. The dewar head (blue) was designed to meet the adapter plate requirements. Right: Dewar with attached shutter mounted at the flange and incident rays. While the filters for the old CCD camera are small enough to be housed in a filter wheel which fits behind the focus flange between the dewar and the adapter plate, this is no longer possible with TAUKAM. Even populating two wheels with filters (and an empty placeholder) is no remedy, since their diameter would still exceed that of the focus flange, leading to severe vignetting. Moreover, the presence of four bolts which serve for the movement of the focus flange likewise prevents the use of filter wheels of this size. Fig. 3 shows the major camera components along with a section view of the assembled state. For the new camera the filters have to be put in place from the opposite side of the focus flange (cf. Sect. 6). This will allow us to move the CCD slightly closer to the adapter plate, i.e. into a more advantageous focus range.
Thanks to a re-design of the original dewar head, its reproducible and well-centered insertion into the adapter plate is guaranteed. ## 5 Shutter The 1110S camera incorporates an internal flexible shutter drive to trigger an external shutter. The shutter will be located on the backside of the flange where the dewar is mounted (Fig. 3). It needs to be compact as well to prevent beam vignetting. Its thickness is restricted to 30 mm, and according to the Schmidt optics the free opening has to be 115 mm $\times$ 115 mm. The removal of an obsolete mechanical unit (used to widen photographic objective-prism spectra) provided extra space for the shutter. Among commercially available devices, the dual double-blade compact shutter of Bonn Shutter UG, Bonn (Germany), meets these requirements. The mechanism is based on two carbon fiber or sandwich-type blades moving on a pair of linear ball bearings, driven by two stepper motors and toothed belts. The control electronics hardware consists of two identical micro-controller systems – one for each shutter blade. The Bonn shutters are impact-free, low-acceleration (i.e. low-power) devices. Negotiations with Bonn Shutter concerning the incorporation of this product started in 2015. They have expertise with regard to driving their shutters with cameras from SI. ## 6 Filters and filter unit The filter size amounts to 120 mm $\times$ 120 mm. With TAUKAM we will move away from Johnson-Cousins UBVRI glass filters in favor of the more advantageous Sloan Digital Sky Survey (SDSS) $u\,g\,r\,i\,z$ system. This filter set will be augmented by narrow-band filters (e.g., H$\alpha$, [S ii]) and a few others. A Johnson-Cousins $B$ filter will be added for the consistency of long-term photometric monitoring. Moreover, a broad-band $V$ filter ($VB$) will be utilized as well for the observation of NEOs, where a large band-width is crucial to achieve a high signal-to-noise ratio (SNR).
All filters will be based on multi-layer dielectric coatings and will be of the same thickness, i.e. confocal. As outlined in Sect. 4.2, the use of wheels for housing the filters is not feasible. An alternative concept is a filter box located at the inner tube wall, i.e. out of the beam in Schmidt mode, from which filters will be grabbed and placed into position by a robotic unit (Fig. 4). The unit will hide behind one of the spider arms to prevent additional obscuration. It is intended that the filter exchange will be as fast as the shortest full-frame read-out time. --- Figure 4: Two concepts for the filter unit. Left: A rail mechanism carries a filter from the storage box located at the edge of the tube to the nominal position in front of the shutter. Right: The same task is performed using robotic lever arms in connection with torsion drives. ## 7 Cryogenics Unlike the previous prime-focus CCD cameras, which were cooled using liquid nitrogen (LN2), TAUKAM will be cooled by a cryo compressor, which is part of the contract. It will be installed in the basement to avoid dome seeing due to the nominal power dissipation of 600 W. From there the cooling lines will run to the dewar inside the tube. For that reason three segments of cooling lines were ordered, with a total length of 45.7 m. The effective length change of the lines due to the telescope rotation will be compensated by cable twisters, similar to those for the GROND multi-channel imager [3] mounted at the Max Planck Society 2.2-m telescope on ESO/La Silla, Chile, or the 0.7-m Bigelow Schmidt telescope of the Catalina Sky Survey [4]. The camera will be kept connected to the cooling unit when the telescope is operated in other optical configurations (Coude, Nasmyth). During these periods the dewar is foreseen to be stowed away in one corner of the tube. ## 8 Computing and software The data are transferred in FITS format using a fiber optical cable to the image acquisition host computer via a proprietary PCIe card.
For this purpose the accompanying software package SI image II can be used. However, we will use the software development kit, which is also provided by SI, to integrate the camera control and data acquisition into our software environment. ## 9 Scientific applications In order to illustrate the impact of the TAUKAM imaging capabilities, their relevance for various science fields under investigation at TLS will be mentioned briefly. ### 9.1 High-energy transients In 2015, TLS joined the LIGO-Virgo Collaboration's world-wide network of telescopes for follow-up observations of Gravitational Wave (GW) events. The goal here is the search for optical transients following GW events that appear in the right time window. Since in the first years (2015+) the expected error box sizes are in the 100 to 1000 square degree range, telescopes with large fields of view are required to find potential candidates. In addition, TLS has a long-standing activity in Gamma-ray burst (GRB) afterglow research, including observations with the Schmidt telescope. Error boxes here can be in the square degree range, too (mainly those coming from the Fermi satellite). As such, having a bigger field of view will represent a substantial advantage. ### 9.2 Long-term variability of quasars TLS carries out a unique, long-term variability study of $\sim$350 quasars with an epoch range of more than 50 years. This data base holds the potential to answer crucial questions concerning the time-scales of the quasar variability and its origin [5]. While the collection of Schmidt plates represents the foundation for this project, the monitoring has been continued with the prime-focus CCD camera for more than ten years now. However, the old CCD covers only 4% of one monitoring field, which leads to sparse time sampling. Both the bigger FOV and the shorter read-out time of TAUKAM will improve the efficiency of the monitoring and yield light curves of higher fidelity.
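The overhead gain mentioned above can be illustrated with a rough duty-cycle estimate. The exposure time of 120 s and the old-camera read-out of $\sim$190 s are assumed values for this sketch (the text only states that TAUKAM's slowest mode, 94 s, is about half of the present read-out time); TAUKAM's fastest mode takes 12.5 s.

```python
def duty_cycle(t_exp, t_read):
    """Fraction of wall-clock time spent collecting photons."""
    return t_exp / (t_exp + t_read)

t_exp = 120.0                    # assumed monitoring exposure [s]
old = duty_cycle(t_exp, 190.0)   # old camera, assumed ~190 s read-out
new = duty_cycle(t_exp, 12.5)    # TAUKAM, fastest 752 kHz mode

print(f"duty cycle: old {old:.0%}, new {new:.0%}")
```

Under these assumptions the observing efficiency more than doubles, from roughly 40% to about 90%, even before accounting for the more than fourfold FOV gain.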
### 9.3 Variability and rotation periods of young stars

The removal of angular momentum during proto-stellar growth is one of the key issues of star formation. In order to gain insights into this process, we, for the first time, derived and studied rotation periods over a wide range of masses for objects in star clusters. These include so-called brown dwarfs, which are “failed stars” of very low mass, incapable of nuclear burning. Much to our surprise, these tend to rotate faster than solar-like stars, which indicates a less-efficient braking during their formation[6]. These ongoing investigations will profit from the capabilities of TAUKAM, in particular with respect to sky coverage and imaging duty cycle.

### 9.4 Near-Earth objects

TLS joined the NEO confirmation campaign coordinated by the Minor Planet Center (MPC), Cambridge (USA), on behalf of the IAU in 2010[7]. While more than 3500 positions have been reported to the MPC since then, these are based on sub-frames, which had to be taken to avoid the large read-out overhead of the old camera. This is particularly disadvantageous for the observations of newly-detected bodies (one-nighters), which often have large position errors. TAUKAM will allow us to generally employ the full-frame mode, which implies a substantial increase of the observable NEO target sample and improves the chances for new asteroid discoveries by a large margin.

###### Acknowledgements.

We thank Eric Christensen, University of Arizona, Catalina Sky Survey, for valuable advice. We are grateful for the cooperative exchange of information with Spectral Instruments Inc. and SAS Photon Lines.

## References

* [1] von Kluber, H., “The new reflecting telescope at the Karl-Schwarzschild Observatory Tautenburg,” The Observatory 81, 91–94 (June 1961).
* [2] Laux, U., [Astrooptik ], Verlag Sterne und Weltraum, Hüthig GmbH, second ed. (2001).
* [3] Greiner, J., Bornemann, W., Clemens, C., Deuter, M., Hasinger, G., Honsberg, M., Huber, H., Huber, S., Krauss, M., Krühler, T., Küpcü Yoldaş, A., Mayer-Hasselwander, H., Mican, B., Primak, N., Schrey, F., Steiner, I., Szokoly, G., Thöne, C. C., Yoldaş, A., Klose, S., Laux, U., and Winkler, J., “GROND — a 7-Channel Imager,” PASP 120, 405–424 (Apr. 2008). * [4] Christensen, E. J., Carson Fuls, D., Gibbs, A. R., Grauer, A. D., Hill, R. E., Johnson, J. A., Kowalski, R. A., Larson, S. M., Matheny, R. G., and Shelly, F. C., “Status of The Catalina Sky Survey,” in [AAS/Division for Planetary Sciences Meeting Abstracts ], AAS/Division for Planetary Sciences Meeting Abstracts 47, 308.19 (Nov. 2015). * [5] Meusinger, H., Henze, M., Birkle, K., Pietsch, W., Williams, B., Hatzidimitriou, D., Nesci, R., Mandel, H., Ertel, S., Hinze, A., and Berthold, T., “J004457+4123 (Sharov 21): not a remarkable nova in M 31 but a background quasar with a spectacular UV flare,” A&A 512, A1 (Mar. 2010). * [6] Scholz, A. and Eislöffel, J., “Rotation and variability of very low mass stars and brown dwarfs near $\epsilon$ Ori,” A&A 429, 1007–1023 (Jan. 2005). * [7] Stecklum, B., “Status of NEO confirmation observations at the Thüringer Landessternwarte (033).” IAA Planetary Defense Conference, 2015, http://iaaweb.org/iaa/Scientific%20Activity/conf/pdc2015/IAA-PDC-15-P-30.pdf. (Accessed: 24 May 2016).
The Strongly Coupled Polaron on the Torus: Quantum Corrections to the Pekar Asymptotics

Dario Feliciangeli and Robert Seiringer

IST Austria, Am Campus 1, 3400 Klosterneuburg, Austria

We investigate the Fröhlich polaron model on a three-dimensional torus, and give a proof of the second-order quantum corrections to its ground-state energy in the strong-coupling limit. Compared to previous work in the confined case, the translational symmetry (and its breaking in the Pekar approximation) makes the analysis substantially more challenging.

§ INTRODUCTION

The underlying physical system we are interested in studying is that of a charged particle (e.g., an electron) interacting with the quantized optical modes of a polar crystal (called phonons). In this situation, the electron excites the phonons by inducing a polarization field, which, in turn, interacts with the electron. In the case of a `large polaron' (i.e., when the de Broglie wave-length of the electron is much larger than the lattice spacing in the medium), this system is described by the Fröhlich Hamiltonian [10], which represents a simple and well-studied model of non-relativistic quantum field theory (see [1, 8, 11, 20, 27, 28] for properties, results and further references). A key parameter that appears in the problem is the coupling constant, usually denoted by $\alpha$. We study the strong coupling regime of the model, i.e., its asymptotic behavior as $\alpha \to \infty$. In this limit, the ground state energy of the Fröhlich Hamiltonian agrees to leading order with the prediction of the Pekar approximation [24], which assumes a classical behavior for the phonon field. This was first proved in [4], using a path integral approach (see also [21] and [22], for recent work on the Pekar process [28]). Later, the result was improved in [18], by providing explicit bounds on the leading order correction term.
The object of our study is, precisely, the main correction to the classical (Pekar) approximation of the polaron model, i.e., the leading error term in the aforementioned asymptotics for the ground state energy. This correction is expected to be smaller than the leading term by a factor of order $\alpha^{-2}$, and arises from the quantum fluctuations about the classical limit [2]. This claim was first verified rigorously in [9], where both the electron and the phonon field are confined to a bounded domain (of linear size adjusted to the natural length scale set by the Pekar ansatz) with Dirichlet boundary conditions. This restriction breaks translation invariance and simplifies the structure of the Pekar problem in comparison with the unconfined case, guaranteeing, at least in the case of the domain being a ball [6], uniqueness up to phase of the Pekar minimizers and non-degeneracy of the Hessian of the Pekar functional. We build upon the strategy developed in [9] to treat the ultraviolet singularity of the model, which in turn relies on multiple applications of the Lieb–Yamazaki commutator method [19] and a subsequent use of the Gross transformation [13, 23]. The key novelty of the present work is to deal with a translation invariant setting. We investigate the quantum correction to the Pekar approximation of the polaron model on a torus, and prove the validity of the predictions in [2] also in this setting. As a first step, we analyze the structure of the set of minimizers of the corresponding Pekar functional, proving uniqueness of minimizers up to symmetries, which was so far known to hold only in the unconfined case [16, 15] and on balls with Dirichlet boundary conditions [6]. The translation invariance leads to a degeneracy of the Hessian of the Pekar functional and corresponding zero modes, substantially complicating the analysis of the quantum fluctuations.
In order to `flatten' the surface of minimizers, we introduce a convenient diffeomorphism inspired by formal computations in [14], which effectively allows us to decouple the zero modes.

§ SETTING AND MAIN RESULTS

§.§ The Model

We consider a $3$-dimensional flat torus of side length $L>0$. We denote by $\Delta_L$ the Laplacian on $\TL$ and by $\Tker$ the integral kernel of its `inverse', which we define by \begin{equation} \begin{cases} &(-\Delta_L) \left[(-\Delta_L)^{-1}(\,\cdot\,,y)\right]= \delta_y\\ &\int_{\TL} \Tker dx =0. \end{cases} \end{equation} An explicit formula for $\Tker$ is given by \begin{equation} \label{expltker} \Tker=\sum_{0\neq k\in \frac {2\pi}{L} \mathbb{Z}^3} \frac 1 {|k|^2} \frac{e^{ik\cdot(x-y)}}{L^3}, \end{equation} which, for any $x\in \TL$, yields an $L^2$ function of $y$, its Fourier coefficients being in $\ell^2$. Analogously, we define $(-\Delta_L)^{-s}$ for any $s>0$. In the following, we identify $\TL$ with the box $[-L/2,L/2]^3\subset \Rtre$, and the Laplacian with the corresponding one on $[-L/2,L/2]^3$ with periodic boundary conditions. We further define the electron-phonon coupling function \begin{align} \label{eq:ElPhCouplDef} v_L(y) :=(-\Delta_L)^{-1/2}(0,y)=\sum_{0\neq k\in \frac {2\pi}{L} \mathbb{Z}^3} \frac 1 {|k|}\frac {e^{-ik\cdot y}}{L^3}, \end{align} and $\elecphononcoupl(y):=v_L(y-x)$. The Fröhlich Hamiltonian [10] for the polaron is given by \begin{align} \label{eq:FrHam} \HL&:=(-\Delta_L)\otimes \unit+\unit\otimes\numero -a(\elecphononcoupl)-\ad(\elecphononcoupl)\nonumber\\ &\;=(-\Delta_L)\otimes \unit+\unit\otimes \left(\sum_{ k\in \frac {2\pi}{L} \mathbb{Z}^3}\ad_k a_k\right) -\frac 1 {L^{3/2}}\sum_{0\neq k\in \frac {2\pi} L \mathbb{Z}^3}\frac 1 {|k|}\left(a_ke^{ik\cdot x}+\ad_k e^{-ik\cdot x}\right), \end{align} acting on $L^2(\TL)\otimes \mathcal{F}(L^2(\TL))$, where $\mathcal{F}(L^2(\TL))$ denotes the bosonic Fock space over $L^2(\TL)$.
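With these conventions, the coupling function $v_L$ acts as a square root of the kernel: by Parseval's identity applied to (<ref>), $\int_{\TL} v_L(y-x)\,v_L(y-x')\,dy=(-\Delta_L)^{-1}(x,x')$. The following numerical sanity check of this identity is our own addition (the values of $L$, $x$, $x'$ are arbitrary test parameters); truncating both Fourier series to the same finite set of modes, the identity holds exactly, since the Riemann sum over the grid below reproduces the $L^2(\TL)$ inner product without discretization error.

```python
import cmath
import itertools
import math

L = 3.0  # side length of the torus (arbitrary test value)
N = 8    # grid points per dimension; Riemann sums are exact here, since all
         # mode differences stay strictly below the grid's Nyquist frequency
# truncated mode set: k = (2*pi/L) * m with m in {-1,0,1}^3 \ {0}
modes = [m for m in itertools.product((-1, 0, 1), repeat=3) if m != (0, 0, 0)]

def k_dot(m, y):
    """k . y for the lattice vector k = (2*pi/L) * m."""
    return (2 * math.pi / L) * sum(mi * yi for mi, yi in zip(m, y))

def k_norm(m):
    return (2 * math.pi / L) * math.sqrt(sum(mi * mi for mi in m))

def v(y, x):
    """Truncated v_L^x(y) = sum_{k != 0} |k|^{-1} e^{-i k.(y-x)} / L^3."""
    d = tuple(yi - xi for yi, xi in zip(y, x))
    return sum(cmath.exp(-1j * k_dot(m, d)) / (k_norm(m) * L**3) for m in modes)

def kernel(x, xp):
    """Truncated (-Delta_L)^{-1}(x,x') = sum_{k != 0} |k|^{-2} e^{i k.(x-x')} / L^3."""
    d = tuple(xi - xpi for xi, xpi in zip(x, xp))
    return sum(cmath.exp(1j * k_dot(m, d)) / (k_norm(m)**2 * L**3) for m in modes)

x, xp = (0.3, 0.1, -0.2), (1.0, 0.5, 0.2)
grid = itertools.product([L * j / N for j in range(N)], repeat=3)
# L^2(T_L) inner product of v_L^x and v_L^{x'} as an (exact) Riemann sum
inner = sum(v(y, x) * v(y, xp) for y in grid) * (L / N) ** 3
print(abs(inner - kernel(x, xp)))  # vanishes up to floating-point rounding
```

Note that $v_L$ is real (the modes come in $\pm k$ pairs), so no complex conjugation is needed in the inner product.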
The number operator, denoted by $\numero$, accounts for the field energy, whereas $-\Delta_L$ accounts for the electron kinetic energy. The creation and annihilation operators for a plane wave of momentum $k$ are denoted by $a_k^\dagger$ and $a_k$, respectively, and they are assumed to satisfy the rescaled canonical commutation relations \begin{align} \label{eq:commrel} [a_k,\ad_j]=\alpha^{-2} \delta_{k,j}. \end{align} In light of (<ref>), $\numero$ has spectrum $\sigma(\numero)=\alpha^{-2} \{0,1,2,\dots\}$. We note that the definition (<ref>) is somewhat formal, since $v_L\not \in L^2(\TL)$. It is nevertheless possible to define $\HL$ via the associated quadratic form, and to find a suitable domain on which it is self-adjoint and bounded from below (see [12], or Remark <ref> in Section <ref> below). We shall investigate the ground state energy of $\mathbb{H}_L$, for fixed $L$ and $\alpha \to \infty$. By rescaling all lengths by $\alpha$, $\HL$ is unitarily equivalent to the operator $\alpha^{-2} \widetilde{\mathbb{H}}_L$, where $\widetilde{\mathbb{H}}_L$ can be written compactly as \begin{align} \widetilde{\mathbb{H}}_L=(-\Delta_{\alpha^{-1} L})\otimes \unit-\sqrt{\alpha}\left[\tilde{a}(v_{\alpha^{-1}L}^x)+\tilde{a}^{\dagger}(v_{\alpha^{-1}L}^x)\right]+\unit\otimes\widetilde{\mathbb{N}}, \end{align} with the creation and annihilation operators $\tilde{a}^{\dagger}$ and $\tilde{a}$ now satisfying the (unscaled) canonical commutation relations $[\tilde{a}(f),\tilde{a}^{\dagger}(g)]=\bra{f}\ket{g}$, and $\widetilde{\mathbb{N}}$ the corresponding number operator. Large $\alpha$ hence corresponds to the strong-coupling limit of a polaron confined to a torus of side length $L\alpha^{-1}$. We find it more convenient to work in the variables defined in (<ref>), however.
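The unitary equivalence just stated can be traced through two elementary steps; the following is our own sketch, using only the conventions (<ref>) and (<ref>):

```latex
% Step 1 (field rescaling): with [a_k, a_j^\dagger] = \alpha^{-2}\delta_{k,j},
% the operators \tilde{a}_k := \alpha a_k obey canonical commutation relations, and
\numero = \sum_k \ad_k a_k = \alpha^{-2}\,\widetilde{\mathbb{N}},
\qquad
a(v_L^x) + \ad(v_L^x) = \alpha^{-1}\left[\tilde{a}(v_L^x) + \tilde{a}^\dagger(v_L^x)\right].
% Step 2 (length rescaling): the dilation (U\psi)(x) := \alpha^{3/2}\psi(\alpha x),
% applied to both the electron and the phonon variables, satisfies
U(-\Delta_L)U^* = \alpha^{-2}(-\Delta_{\alpha^{-1}L}),
\qquad
U v_L^{x} = \alpha^{-1/2}\, v_{\alpha^{-1}L}^{\,\alpha^{-1}x},
% the second identity following from v_L(\alpha y) = \alpha^{-2}\, v_{\alpha^{-1}L}(y).
% Combining both steps, the interaction picks up a total factor
% \alpha^{-1}\cdot\alpha^{-1/2} = \alpha^{-2}\sqrt{\alpha}, so that
% \HL \cong \alpha^{-2}\,\widetilde{\mathbb{H}}_L with the stated \sqrt{\alpha} coupling.
```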
The Fröhlich polaron model is typically considered without confinement, i.e., as a model on $L^2(\Rtre)\otimes \mathcal{F}(L^2(\Rtre))$ with electron-phonon coupling function given by $(-\Delta_{\Rtre})^{-1/2}(x,y)= (2\pi^2)^{-1} |x-y|^{-2}$. In the confined case studied in [9], $\R^3$ was replaced by a bounded domain $\Omega$, and thus the electron-phonon coupling function was given by $(-\Delta_{\Omega})^{-1/2}(x,y)$, where $\Delta_{\Omega}$ denotes the Dirichlet Laplacian on $\Omega$. The latter setting, similarly to ours, has the advantage of guaranteeing compactness for the corresponding inverse Laplacian, which is a key technical ingredient both for [9] and our main results. In addition, for generic domains $\Omega$ the Pekar functional has a unique minimizer up to phase (which is proved in [6] for $\Omega$ a ball, and enters the analysis in [9] for general $\Omega$ as an assumption). Compared with [9], setting the problem on the torus (or on $\R^3$) introduces the extra difficulty of having to deal with translation invariance and a whole continuum of Pekar minimizers. Hence the present work can be seen as a first step in the direction of generalizing the results of [9] to the case of $\R^3$. §.§ Pekar Functional(s) For $\psi\in H^1(\TL)$, $\|\psi\|_2=1$, and $\varphi\in L^2_{\R}(\TL)$, we introduce the classical energy functional corresponding to (<ref>) as \begin{align} \label{eq:Gfun} \GL(\psi,\varphi):=\expval{h_{\varphi}}{\psi}+\|\varphi\|_2^2, \end{align} where $h_\varphi$ is the Schrödinger operator \begin{align} \label{eq:hVphi} h_{\varphi}:=-\Delta_L+V_{\varphi}, \quad V_{\varphi}:= -2 (-\Delta_L)^{-1/2} \varphi. \end{align} We define the Pekar energy as \begin{align} \label{eq:pekaren} \eL:=\min_{\psi,\varphi} \GL(\psi,\varphi). \end{align} In the case of $\Rtre$, it was shown in [4] and [18] that the infimum of the spectrum of the Fröhlich Hamiltonian converges to the minimum of the corresponding classical energy functional as $\alpha\to \infty$. 
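The sense in which $\GL$ is the classical energy corresponding to the Hamiltonian can be made explicit by a standard (formal) coherent-state computation; the following sketch is our own and is not part of the source:

```latex
% Let \Omega_\varphi denote the coherent state with a(f)\,\Omega_\varphi
% = \langle f,\varphi\rangle\,\Omega_\varphi for all f, so that
% \langle \Omega_\varphi, \numero\, \Omega_\varphi\rangle = \|\varphi\|_2^2. For real \varphi,
\left\langle \psi\otimes\Omega_\varphi,\ \HL\ \psi\otimes\Omega_\varphi\right\rangle
= \expval{-\Delta_L}{\psi} + \|\varphi\|_2^2
  - 2\int_{\TL} |\psi(x)|^2\, \big[(-\Delta_L)^{-1/2}\varphi\big](x)\,dx,
% where we used \langle v_L^x, \varphi\rangle = [(-\Delta_L)^{-1/2}\varphi](x).
% Since V_\varphi = -2(-\Delta_L)^{-1/2}\varphi, the right-hand side is exactly
% \expval{h_\varphi}{\psi} + \|\varphi\|_2^2 = \GL(\psi,\varphi).
```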
In [9], it was shown that the same holds for the model confined to a bounded domain with Dirichlet boundary conditions and the subleading correction in this asymptotics was computed. Our goal is to extend the results of [9] to the case of $\TL$. We define the two functionals \begin{align} \label{eq:EFfun} \EL(\psi):=\min_{\varphi} \GL(\psi,\varphi),\quad \FL(\varphi):=\min_{\psi} \GL(\psi,\varphi), \end{align} and their respective sets of minimizers \begin{align} \label{eq:minEL} \MinLe:=&\left\{\psi\in H^1(\TL) \,|\, \|\psi\|_2=1, \; \EL(\psi)=\eL\right\},\\ \label{eq:minLf} &\MinLf:=\{\varphi\in L^2_{\R}(\TL) \,\,|\,\, \FL(\varphi)=\eL\}. \end{align} Clearly, $\EL$ is invariant under translations and changes of phase and $\FL$ is invariant under translations. It is thus useful to introduce the notation \begin{align} \label{eq:invariantsurfaceE} \Theta_L(\psi):=\{e^{i\theta}&\psi^y(\,\cdot\,):=e^{i\theta} \psi(\,\cdot\,-y)\,\,|\,\,\theta\in [0,2\pi), \,y\in \TL\},\\ \label{def:invariantsurfaceF} &\Omega_L(\varphi)=\{\varphi^y \,\,|\,\, y\in \TL\}, \end{align} for $\psi \in H^1(\TL)$ and $\varphi \in L^2_{\R}(\TL)$, respectively. Our first result, Theorem <ref> (or, more precisely, Corollary <ref>) is a fundamental ingredient to prove our main result, Theorem <ref>. It concerns the uniqueness of minimizers of $\EL$ up to symmetries and shows the validity of a quadratic lower bound for $\EL$ in terms of the $H^1$-distance from the surface of minimizers. We shall prove these properties for sufficiently large $L$. [Uniqueness of Minimizers and Coercivity for $\EL$] There exist $L_1>0$ and a positive constant $\kappa_1$ independent of $L$, such that for $L>L_1$ there exists $0<\psi_L\in C^{\infty}(\TL)$ such that \begin{align} \label{bigLregime} \eL<0, \quad \MinLe=\Theta_L(\psi_L). 
\end{align} Moreover, $\psi_L^y\neq \psi_L$ for any $0\neq y\in \TL$ and, for any $L^2$-normalized $f\in H^1(\TL)$, \begin{align} \label{globalquadbound} \EL(f)-\eL\geq \kappa_1\dist^2_{H^1}\left(\MinLe,f\right). \end{align} These properties of $\EL$ translate easily to analogous properties for the functional $\FL$, as stated in the following corollary. For $L>L_1$ (where $L_1$ is the same as in Theorem <ref>) there exists $\varphi_L\in C^{\infty}(\TL)$ such that \begin{align} \MinLf=\Omega_L(\varphi_L). \end{align} Moreover, with $\psi_L$ as in Theorem <ref>, we have \begin{align} \label{eq:psilphil} \varphi_L=\sigma_{\psi_L}:=(-\Delta_L)^{-1/2} |\psi_L|^2, \quad \psi_L= \text{unique positive g.s. of } h_{\varphi_L}. \end{align} Finally, there exists $\kappa'>0$ independent of $L$ such that, for all $\varphi\in L^2(\TL)$, \begin{align} \label{eq:Fglobalquadbound} \FL(\varphi)-\eL&\geq \min_{y\in \TL} \expval{\unit -(\unit+\kappa'(-\Delta_L)^{1/2})^{-1}}{\varphi-\varphi_L^y}+\left|L^{-3/2}\int_{\TL}\varphi\right|^2, \end{align} and this implies \begin{align} \label{eq:Fglobalquadbound2} \FL(\varphi)-\eL\geq \tau_L \dist_{L^2}^2(\MinLf, \varphi) \end{align} with $\tau_L:=\frac {\kappa' (2\pi/L)^2}{1+\kappa' (2\pi/L)^2}$. In the case of $\Rtre$, similar results are known to hold. In particular, the analogue of (<ref>) was shown in [16] and the analogue of (<ref>) follows from the results in [15]. In the case of a bounded domain with Dirichlet boundary conditions, an equivalent formulation of Theorem <ref> was taken as a working assumption in [9]. In the case of a ball in $\Rtre$ with Dirichlet boundary conditions, the analogue of Theorem <ref> was proved in [6]. In both the case of $\Rtre$ and that of balls, rotational symmetry plays a key role in the proof of these results. Rotational symmetry is not present in our setting, hence a different approach is required. Our method of proof of Theorem <ref> relies on a comparison of the models on $\TL$ and $\Rtre$, for large $L$.
As a consequence, our analysis does not easily yield quantitative estimates on $L_1$. To state our main result, which also holds in the case $L>L_1$, we need to introduce the Hessian of the functional $\FL$ at its unique (up to translations) minimizer $\varphi_L$, \begin{align} \lim_{\varepsilon\to 0} \frac 1 {\varepsilon^2} \left(\FL(\varphi_L+\varepsilon \phi)-\eL\right)=:\expval{\HF}{\phi} \quad \forall \phi\in L^2_{\R}(\TL). \end{align} An explicit computation gives (see Proposition <ref>) \begin{align} \label{eq:Hessianexpr} \HF=\unit-4(-\Delta_L)^{-1/2} \psi_L &\frac {Q_{\psi_L}} {h_{\varphi_L}-\infspec h_{\varphi_L}}\psi_L (-\Delta_L)^{-1/2}, \end{align} where $h_{\varphi_L}$ is defined in (<ref>), $\psi_L$ is interpreted as a multiplication operator and $Q_{\psi_L}:=\unit-\ket{\psi_L}\bra{\psi_L}$. Clearly, by minimality of $\varphi_L$, $\HF\geq 0$, and it is also easy to see that $\HF\leq1$. We shall show that $\HF$ has a three-dimensional kernel, given by $\spn\{\partial_j \varphi_L\}_{j=1}^3$, corresponding to the invariance under translations of the functional. Note that we could define the Hessian of $\FL$ at any other minimizer $\varphi_L^y$, obtaining a unitarily equivalent operator $H^{\FL}_{\varphi_L^y}$. §.§ Main Result Recall the definition (<ref>) for the Pekar energy $\eL$ as well as (<ref>) for the Hessian of $\FL$ at its minimizers, for $L>L_1$. Our main result is as follows. For any $L>L_1$, as $\alpha \to \infty$ \begin{align} \label{eq:infspecHrough} \infspec \HL=\eL- \frac 1 {2\alpha^2} \Tr\left(\unit-\sqrt{H_{\varphi_L}^{\FL}}\right)+o(\alpha^{-2}). \end{align} More precisely, the bounds \begin{align} \label{eq:infspecHsharp} -C_L\alpha^{-1/7}\leq\alpha^2\infspec \HL -\alpha^2\eL+\frac 1 2 \Tr \left(\unit-\sqrt{\HF}\right)\leq C_L\alpha^{-2/11} \end{align} hold for some $C_L>0$ and $\alpha$ sufficiently large. The trace appearing in (<ref>) and (<ref>) is over $L^2(\TL)$. 
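The bound $\HF\leq\unit$ mentioned above can be verified in one line (our remark, not spelled out in the source): the Hessian (<ref>) can be written in factorized form,

```latex
\HF \;=\; \unit - 4\, A^\dagger A \;\leq\; \unit,
\qquad
A := \left(\frac{Q_{\psi_L}}{h_{\varphi_L}-\infspec h_{\varphi_L}}\right)^{1/2} \psi_L\, (-\Delta_L)^{-1/2},
% which is possible since Q_{\psi_L}/(h_{\varphi_L}-\infspec h_{\varphi_L}) \geq 0,
% while \psi_L and (-\Delta_L)^{-1/2} are self-adjoint. Together with \HF \geq 0,
% this shows 0 \leq \unit - \sqrt{\HF} \leq \unit, so each eigenvalue entering the
% trace in the main result lies in [0,1].
```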
Note that, since $\HF\leq1$, the coefficient of $\alpha^{-2}$ in (<ref>) is negative. In the case of bounded domains with Dirichlet boundary conditions, an analogue of Theorem <ref> was proven in [9] (where logarithmic corrections appear in the bounds that correspond to (<ref>) as a consequence of technical complications due to the boundary). Showing the validity of an analogous result on $\Rtre$ still remains an open problem, however. Indeed, the constant $C_L$ appearing in the lower bound in (<ref>) diverges as $L\to \infty$. This is mainly due to the lack of compactness of the resolvent of the full-space Laplacian (which leads, for instance, to a zero lower bound in (<ref>) and, in particular, a divergence of the effective number of modes in (<ref>)). On the other hand, our method of proof used in Section <ref> to show the upper bound in (<ref>) does apply, with minor modifications, to the full-space case. In any case, both the upper and the lower bound are expected to hold in the case of $\Rtre$ as well [2, 14, 9, 27]. Compared to the results obtained in [9], Theorem <ref> deals with the additional complication of the invariance under translations of the problem, which implies that the set of minimizers of $\FL$ is a three-dimensional manifold. This substantially complicates the proof of the lower bound in (<ref>), as we shall see in Section <ref>. In particular, we need to perform a precise local study around the manifold of minimizers $\Omega_L(\varphi_L)$, which we carry out by introducing a suitable diffeomorphism (inspired by [14]). As we show in Lemma <ref>, there exists $L_0>0$ such that the analogue of Theorem <ref> for $L<L_0$ can be proven with a few-line argument. In this case, $\EL$ is simply non-negative and is therefore minimized by the constant function. In particular, $e_L = 0$ and $\varphi_L=0$.
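In this regime the Hessian of $\FL$ at the minimizer can be diagonalized explicitly; the following computation is ours, obtained by inserting $\varphi_L=0$ and $\psi_L\equiv L^{-3/2}$ into (<ref>). Since then $h_{\varphi_L}=-\Delta_L$, $\infspec h_{\varphi_L}=0$, and $Q_{\psi_L}$ projects onto the nonconstant modes, each factor acts diagonally in the Fourier basis: for $0\neq k\in\frac{2\pi}{L}\mathbb{Z}^3$,

```latex
H^{\FL}_{0}\, e^{ik\cdot y}
= \left(1 - 4\cdot\frac{1}{|k|}\cdot\frac{1}{L^{3/2}}\cdot\frac{1}{|k|^2}\cdot\frac{1}{L^{3/2}}\cdot\frac{1}{|k|}\right) e^{ik\cdot y}
= \left(1 - \frac{4}{L^3|k|^4}\right) e^{ik\cdot y},
% while H^{\FL}_0 acts as the identity on constants, which therefore do not
% contribute to the trace. Hence
\Tr\left(\unit - \sqrt{H^{\FL}_{0}}\right)
= \sum_{0\neq k\in \frac{2\pi}{L}\mathbb{Z}^3}\left(1 - \sqrt{1-\frac{4}{L^3|k|^4}}\right).
```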
An analogue of Theorem <ref> can also be proven in the regime $L< L_0$, i.e., it is possible to show that for $L<L_0$ there exists $C_L>0$ such that \begin{align} -C_L\alpha^{-1/7}\leq\alpha^2\infspec \HL+\frac 1 {2} \sum_{0\neq k\in \frac {2\pi} L \mathbb{Z}^3} \left(1-\sqrt{1-\frac 4 {L^3|k|^4}}\right)\leq C_L\alpha^{-2/11} \end{align} for large $\alpha$. In this case (unlike the regime $L>L_1$, where the set of minimizers $\MinLf$ is a three-dimensional manifold) $\MinLf$ only consists of the $0$ function, and this allows one to follow essentially the same arguments as in [9] (with only small modifications, which are also needed in the regime $L>L_1$ and hence are discussed in this paper). We shall therefore not carry out the details of this analysis here. Whether uniqueness of Pekar minimizers up to symmetries holds for all $L>0$ (i.e., also in the regime $L_0\leq L \leq L_1$) remains an open problem. Throughout the paper, we use the word universal to describe any constant (which is generally denoted by $C$) or property that is independent of all the parameters involved and in particular independent of $L$, for $L\geq L_0$ (for some fixed $L_0>0$). Also, we write $a\lesssim b$ whenever $a\leq Cb$ for some universal and positive $C$. We write $C_L$ whenever a constant depends on $L$ but is otherwise universal with respect to all other parameters. Finally, we write $a\lesssim_L b$ whenever $a\leq C_L b$ for some positive $C_L$.

§.§ Proof Strategy and Structure of the Paper

In Section <ref> we study the properties of the Pekar functionals $\EL$ and $\FL$ defined in (<ref>). We start by recalling the relevant properties of the Pekar functionals on $\R^3$ in Section <ref>. In the long Section <ref> we give the proof of Theorem <ref>. Our method of proof relies on showing the convergence, as $L\to \infty$, of $\EL$ to its full-space counterpart $\mathcal{E}$. Proposition <ref> in Section <ref> formalizes the precise meaning of this convergence.
Then, in Section <ref>, we prove a stronger notion of convergence, namely that the Hessian of $\EL$ at any minimizer converges to the Hessian of $\mathcal{E}$ at a corresponding minimizer (in the sense of Proposition <ref>); in particular, it is strictly positive above its trivial zero modes for large $L$. By combining the results obtained in Sections <ref> and <ref>, we conclude the proof of Theorem <ref> in Section <ref>. Section <ref> is dedicated to the investigation of the properties of $\FL$. First, in Section <ref>, we show the validity of Corollary <ref>. Subsequently, we compute the Hessian of $\FL$ (in Proposition <ref> in Section <ref>) and characterize its kernel (in Proposition <ref> in Section <ref>). Finally, in Section <ref> we introduce a family of weighted norms (see (<ref>)) which is of key importance in Section <ref> and we show, in Lemma <ref>, that the surface of minimizers of $\FL$ locally admits a unique projection w.r.t. any of these norms. In Section <ref> we prove Theorem <ref>. First of all, in Section <ref> we construct a trial state and use it to obtain an upper bound on the ground state energy of $\HL$. This is carried out using the $Q$-space representation of the bosonic Fock space $\mathcal{F}(L^2(\TL))$ (see [25]) and follows ideas contained in [9], with only small modifications. The remaining sections are devoted to the lower bound. In Section <ref>, we show that it is possible to apply an ultraviolet cutoff on momenta of size larger than some $\Lambda$ to $\HL$ at the expense of an error of order $\Lambda^{-5/2}$ (see Proposition <ref>). This is proven by closely following the approach used in [9]: as a first step we apply a triple Lieb–Yamazaki bound [19] (in Section <ref>) and then make use of a Gross transformation [13, 23] (in Section <ref>). In Section <ref> we show the validity of the lower bound in (<ref>), thus completing the proof of Theorem <ref>.
With Proposition <ref> at hand, we have good estimates on the cost of applying an ultraviolet cutoff to $\HL$, and this allows us to reduce the problem to a finite-dimensional one (with dimension $N$ diverging as $\alpha\to \infty$). We adopt a similar strategy to [9], using IMS localization to split the space into an inner region close to the surface of minimizers of $\FL$ and an outer region far away from it. The goal is to extract the relevant quantum correction to the ground state energy from the inner region and to show, using the bound (<ref>), that the outer region contributes only as an error term. Compared to [9], the translation invariance substantially complicates the analysis. In contrast to the case considered in [9], the set of minimizers of $\FL$ is a three-dimensional manifold and does not only consist of a single function. Hence, in order to treat the inner region and decouple the zero-modes of the Hessian of $\FL$, we have to introduce a suitable diffeomorphism (see Definition <ref> in Section <ref>) that `flattens' the manifold of minimizers and the region close to it. It is here that we make use of Lemma <ref>, which allows us to understand the local structure of the tubular neighborhood of the surface of minimizers of $\FL$. Another technical complication relates to the metric used to distinguish between the inner and outer region, as simply considering the $L^2$-norm is not sufficient for our purposes, and we need the weighted norms defined in (<ref>) (in particular, we apply the IMS localization with respect to a metric which depends on $\alpha$).

§ PROPERTIES OF THE PEKAR FUNCTIONALS

In this section we derive important properties of the functionals $\EL$ and $\FL$, introduced in Section <ref> and defined in (<ref>). In Section <ref>, we show the validity of Theorem <ref>, relying on the comparison of the models on $\TL$ and $\Rtre$ for large $L$. In Section <ref>, we study the functional $\FL$.
In particular, we prove Corollary <ref> and compute the Hessian of $\FL$ at its minimizers. Given a function $f\in L^2(\TL)$ and $k\in \frac{2\pi}{L}\mathbb{Z}^3$, we denote by $f_k$ the $k$-th Fourier coefficient of $f$. We also denote \begin{align} \hat{f}:=f-L^{-3}\int_{\TL} f. \end{align} We shall use the following definition of fractional Sobolev semi-norms for functions $f\in L^2(\TL)$, $0\neq s\in \mathbb{R}$: \begin{align} \label{def:fracsobnorm} \|f\|_{\mathring{H}^s(\TL)}^2=\expval{(-\Delta_L)^{s}}{f}=\sum_{0\neq k\in \frac {2\pi}{L} \mathbb{Z}^3} |k|^{2s} |f_k|^2. \end{align} Before moving on with the discussion, we recall in the following subsection the definition and relevant properties of the full-space Pekar functional.

§.§ The Full-Space Pekar Functional

Let $\psi\in H^1(\Rtre)$ be an $L^2(\Rtre)$-normalized function and $\varphi\in L^2_{\R}(\Rtre)$. Then we define \begin{align} \mathcal{G}(\psi,\varphi):=\expval{h^{\Rtre}_\varphi}{\psi} + \|\varphi\|_2^2, \end{align} where $h_{\varphi}^{\Rtre}$ is the Schrödinger operator \begin{align} h_{\varphi}^{\Rtre}:=-\Delta_{\R^{3}}+V_{\varphi}, \quad V_{\varphi}:=-2(-\Delta_{\Rtre})^{-1/2}\varphi. \end{align} Comparing with (<ref>) and (<ref>), we note the analogy between the definitions and observe that we are slightly abusing notation by denoting both potentials with the same symbol (we do this for simplicity and since no ambiguity arises). Analogously to (<ref>), we define \begin{align} \label{eq:EinfF} \Einf(\psi):=\inf_{\varphi} \mathcal{G}(\psi,\varphi), \quad \mathcal{F}(\varphi):=\inf_{\psi} \mathcal{G}(\psi,\varphi). \end{align} In analogy with (<ref>), we denote \begin{align} \label{eq:pekareninf} \einf:=\inf_{\psi,\varphi}\mathcal{G}(\psi,\varphi)=\inf_{\psi}\mathcal{E}(\psi)=\inf_{\varphi}\mathcal{F}(\varphi). \end{align} For our purposes, in the case of $\Rtre$, it is sufficient to focus our discussion on the functional $\Einf$, of which we now recall the main properties.
As shown in [16], $\Einf$ admits a unique positive and radially decreasing minimizer $\Emininf$, which is also smooth; the set of minimizers of $\Einf$ coincides with \begin{align} \label{eq:SpaceSurfaceofMin} \Theta(\Emininf):=\{e^{i\theta}\Emininf^y\,\,|\,\, \theta\in[0,2\pi), \,\, y\in \Rtre\}, \end{align} and $\Emininf$ satisfies the Euler–Lagrange equation \begin{align} \left(-\Delta_{\Rtre} +V_{\sigma_{\Emininf}} -\LagrMinf_{\Emininf}\right)\Emininf=0, \end{align} where \begin{align} \label{eq:fullspaceVmu} \sigma_\Emininf:=(-\Delta_{\Rtre})^{-1/2}|\Emininf|^2, \quad V_{\sigma_\Emininf}= -2 (-\Delta_{\Rtre})^{-1} |\Emininf|^2, \quad \LagrMinf_{\Emininf}=T(\Emininf)-2W(\Emininf), \end{align} and $T$ and $W$ are defined in (<ref>) below. Furthermore, as was shown in [15], the Hessian of $\Einf$ at its minimizers is strictly positive above the trivial zero modes resulting from the invariance under translations and changes of phase. This implies the validity of the following Theorem, which is not stated explicitly in [15] but can be obtained by standard arguments (see, e.g., <cit.> or [7]) as a consequence of the results contained therein. There exists a constant $C>0$ such that, for any $L^2$-normalized $f \in H^1(\Rtre)$, \begin{align} \Einf(f)-\einf\geq C\dist_{H^1}^2\left(\Theta (\Emininf),f\right). \end{align} Our strategy to prove Proposition <ref> relies on Theorem <ref> and on comparing $\TL$ with $\Rtre$ for large $L$.
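The stated value of the Lagrange multiplier can be recovered directly from the Euler–Lagrange equation; the following one-line computation is standard and spelled out here for convenience (it is our addition, not part of the source):

```latex
% Pairing the Euler--Lagrange equation with \Emininf and using
% \expval{V_{\sigma_{\Emininf}}}{\Emininf} = -2W(\Emininf)
% (with T and W as defined in (<ref>) below), one finds
0 = \expval{-\Delta_{\Rtre} + V_{\sigma_{\Emininf}} - \LagrMinf_{\Emininf}}{\Emininf}
  = T(\Emininf) - 2W(\Emininf) - \LagrMinf_{\Emininf},
% where \|\Emininf\|_2 = 1 was used in the last term; this is precisely
% \LagrMinf_{\Emininf} = T(\Emininf) - 2W(\Emininf).
```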
§.§ Study of $\EL$ and Proof of Theorem <ref>

To compare $\EL$ and $\Einf$, we prefer to write both of them in the following form, which can be obtained from (<ref>) and (<ref>), respectively, by a simple completion of the square, \begin{align} \label{eq:FullSpaceE} \Einf(\psi)&=\int_{\Rtre} |\nabla \psi(x)|^2 dx - \int_{\Rtre}\int_{\Rtre} \rho_{\psi}(x)(-\Delta_{\Rtre})^{-1}(x,y)\rho_{\psi}(y)dxdy=: T(\psi)-W(\psi),\\ \label{eq:EFunChar} \EL(\psi)&=\int_{\TL} |\nabla \psi(x)|^2 dx - \int_{\TL}\int_{\TL} \rho_{\psi}(x)(-\Delta_L)^{-1}(x,y)\rho_{\psi}(y)dxdy=: T_L(\psi)-W_L(\psi). \end{align} The next ingredient, needed for the comparison of $\EL$ and $\Einf$, is the following lemma. There exists a universal constant $C$ such that \begin{align} \sup_{x,y\in \TL}\left| \Tker-(4\pi)^{-1} (\dist_{\TL}(x,y))^{-1}\right|\leq \frac C L. \end{align} We define $F_L(x):=(-\Delta_L)^{-1}(x,0)$ and $F(x)=(4\pi)^{-1}|x|^{-1}$ and observe that our statement is equivalent to showing that \begin{align} \label{star} \|F_L-F\|_{L^{\infty}([-L/2,L/2]^3)}\leq \frac C L. \end{align} By definition, we have $F_L(x)=\frac 1 L F_1(\frac x L)$. Hence, (<ref>) is equivalent to \begin{align} \|F_1-F\|_{L^{\infty}([-1/2,1/2]^3)}\leq C. \end{align} Again by definition, $F_1-F$ satisfies $-\Delta(F_1-F)=-1$ (distributionally, and hence also classically) on $\left(\Rtre \setminus \mathbb{Z}^3\right)\cup \{0\}$ (when $F_1$, and only $F_1$, is extended to the whole space by periodicity). Thus, by elliptic regularity, we conclude that $F_1-F$ is in $C^{\infty}\left( (-1,1)^3\right)$ and, in particular, bounded on $[-1/2,1/2]^3$. The analogy between (<ref>) and (<ref>), combined with Lemma <ref>, clearly suggests that $\EL$ formally converges to $\Einf$ as $L\to \infty$. Hence, we set out to show that this convergence can be made rigorous and allows us to infer properties of $\EL$ by comparing it to $\Einf$, in the large $L$ regime. In Section <ref> we derive an important preliminary result, namely Proposition <ref>.
It formalizes in a mathematically useful way the concept of $\EL$ converging to $\Einf$. In Section <ref>, we study the Hessian of $\EL$, showing that it converges (in the sense of Proposition <ref>) to the Hessian of $\Einf$ and therefore is strictly positive above its trivial zero modes for large $L$. Finally, in Section <ref> we use the results obtained in Sections <ref> and <ref> to show the validity of Theorem <ref>. We remark that our approach differs from the one used on $\Rtre$ and on balls to show, for the related $\mathcal{E}$-functional, uniqueness of minimizers and strict positivity of the Hessian (see [16] and [15] for the case of $\Rtre$ and [6] for the case of balls). In those cases, rotational symmetry allows one to first show uniqueness of minimizers and then helps to derive the positivity of the Hessian at the minimizers. We take somewhat the opposite road: comparing $\EL$ to $\Einf$, we first show that minimizers (even if not unique) all localize around the full-space minimizers (see Proposition <ref>) and that the Hessian at each minimizer is universally strictly positive (see Proposition <ref>) for large $L$. We then use these two properties to derive, as a final step, uniqueness of minimizers.

§.§.§ Preliminary Results

The next Lemma proves the existence of minimizers for any $L>0$. Moreover, it shows that there exists $L_0>0$ such that, for $L<L_0$, $\EL$ is strictly positive on any non-constant $L^2$-normalized function, as already mentioned in Remark <ref>. For any $L>0$, $\eL$ in (<ref>) is attained, and there exists a universal constant $C>0$ such that $\eL>-C$. Moreover, there exists $L_0>0$ such that, for $L<L_0$, $\EL(\psi)>0$ for any non-constant $L^2$-normalized $\psi$.
We consider any $L^2$-normalized $\psi \in H^1(\TL)$ and begin by observing that in terms of the Fourier coefficients we have
\begin{align}
&W_L(\psi)=\sum_{0\neq k\in \frac {2\pi}{L} \mathbb{Z}^3} \frac {|(\ropsi)_k|^2} {|k|^2} ,\\
&(\ropsi)_k= \sum_{j\in \frac {2\pi}{L} \mathbb{Z}^3} \frac {\bar{\psi}_j \psi_{j+k}}{L^{3/2}}=(\rho_{\hat{\psi}})_k+\frac{\bar{\psi}_0\psi_k}{L^{3/2}}+\frac{\bar{\psi}_{-k}\psi_0}{L^{3/2}}.
\end{align}
By Parseval's identity, $|\psi_0|\leq 1$ and thus, using the Cauchy–Schwarz inequality, we can deduce that
\begin{align}
\label{robound}
|(\ropsi)_k|^2\leq 3|(\rho_{\hat{\psi}})_k|^2+\frac 3 {L^3} \left(|\psi_k|^2+|\psi_{-k}|^2\right).
\end{align}
Consequently,
\begin{align}
W_L(\psi)&\leq 3 \left(\sum_{0\neq k\in \frac {2\pi}{L} \mathbb{Z}^3} \frac {|(\rho_{\hat{\psi}})_k|^2} {|k|^2}\right) + \frac 6 {L^3} \left(\sum_{0\neq k\in \frac {2\pi}{L} \mathbb{Z}^3} \frac {|\psi_k|^2} {|k|^2}\right) \nonumber\\
&\leq 3W_L(\hat{\psi})+\frac {6}{(2\pi)^2 L}\|\hat{\psi}\|_{L^2(\TL)}^2.
\end{align}
We can bound both terms on the r.h.s. in two different ways, one which is good for small $L$ and one which is good for all the other $L$. Indeed, by applying estimate (<ref>) and using the Poincaré-Sobolev inequality (see [17], chapter 8) on the zero-mean function $\hat{\psi}$, we get
\begin{align}
W_L(\hat{\psi})&\leq \left(\sum_{0\neq k\in \frac {2\pi}{L} \mathbb{Z}^3} \frac{|(\rho_{\hat{\psi}})_k|^2}{|k|^4}\right)^{1/2}\left(\sum_{0\neq k\in \frac {2\pi}{L} \mathbb{Z}^3} |(\rho_{\hat{\psi}})_k|^2\right)^{1/2}\lesssim L^2\|(\rho_{\hat{\psi}})_k\|_{l^{\infty}}\|\hat{\psi}\|_{L^4(\TL)}^2\nonumber\\
&\lesssim L^{1/2}\|\hat{\psi}\|_{L^4(\TL)}^2\lesssim L\|\hat{\psi}\|_{L^6(\TL)}^2\lesssim LT_L(\hat{\psi})=L T_L(\psi),
\end{align}
and, again by the Poincaré inequality,
\begin{align}
L^{-1} \|\hat{\psi}\|_{L^2(\TL)}^2\lesssim L T_L(\hat{\psi})=L T_L(\psi).
\end{align}
Therefore, we can conclude that
\begin{align}
W_L(\psi)\lesssim L T_L(\psi)\;\;\Rightarrow\;\; \EL(\psi)\geq (1-CL)T_L(\psi).
\end{align}
Thus, for $L< L_0:=C^{-1}$, either $\psi\equiv \const$, in which case $\EL(\psi)=0$, or $\EL(\psi)\gtrsim T_L(\psi)>0$. Moreover, this also implies
\begin{align}
\EL(\psi)\gtrsim T_L(\psi)\geq \frac {(2\pi)^2}{2L_0^2}\|\hat\psi\|_2^2+\frac 1 2 T_L(\psi)\gtrsim \dist^2_{H^1}\left(\Theta_L\left(\frac 1 {L^{3/2}}\right), \psi\right),
\end{align}
which is the analogue of (<ref>) from Theorem <ref> in the case $L< L_0$. We now proceed to study the more interesting regime $L\geq L_0$. By Lemma <ref>, splitting $\dist^{-1}_{\TL}(x,\cdot)$ into an $L^{3/2}$ part and the remaining $L^{\infty}$ part (whose norms can be chosen to be proportional to $\varepsilon$ and $\varepsilon^{-1}$, respectively, for any $\varepsilon>0$), and by applying again the Poincaré-Sobolev inequality, we obtain
\begin{align}
W_L(\hat{\psi})\leq \int_{\TL\times \TL} \frac{\rho_{\hat{\psi}}(x) \rho_{\hat{\psi}}(y)} {4\pi \dist_{\TL}(x,y)} dx dy + \frac C L\lesssim \varepsilon\|\hat{\psi}\|_{L^6(\TL)}^2+\varepsilon^{-1}+1\leq \frac{T_L(\psi)} 6+C.
\end{align}
Moreover, since $L\geq L_0$, trivially $ L^{-1} \|\hat{\psi}\|_{L^2(\TL)}^2\lesssim 1$ and we can conclude that for any $L^2$-normalized $\psi\in H^1(\TL)$
\begin{align}
\label{Tbounds}
W_L(\psi)\leq \frac{T_L(\psi)} 2+ C \;\;\Rightarrow\;\; \EL(\psi)\geq \frac {T_L(\psi)} 2 - C.
\end{align}
From this we can infer that $\eL\geq-C$ for any $L$. To show existence of minimizers, we observe that by (<ref>) any minimizing sequence $\psi_n$ on $\TL$ must be bounded in $H^1(\TL)$. Therefore, there exists a subsequence (which we still denote by $\psi_n$ for simplicity) that converges weakly in $H^1(\TL)$ and strongly in $L^p(\TL)$, for any $1\leq p<6$, to some $\psi$ (by the Banach-Alaoglu Theorem and the Rellich-Kondrachov embedding Theorem). The limit function $\psi$ is $L^2$-normalized and
\begin{align}
T_L(\psi)\leq \liminf_{n\to \infty} T_L(\psi_n)
\end{align}
by weak lower semicontinuity of the norm.
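As an illustrative numerical sanity check (not part of the argument), the Fourier-side decomposition of $(\ropsi)_k$ and the elementary bound $|a+b+c|^2\leq 3(|a|^2+|b|^2+|c|^2)$ used at the start of the proof can be verified on randomly chosen, $L^2$-normalized Fourier coefficients. The following Python sketch assumes only NumPy; the cutoff $N$, the torus side $L$, and the random seed are arbitrary choices made for the illustration.

```python
import numpy as np

# Checks, on random normalized coefficients, the identity
#   (rho_psi)_k = (rho_psihat)_k + conj(psi_0) psi_k / L^{3/2}
#                               + conj(psi_{-k}) psi_0 / L^{3/2},  k != 0,
# and the bound |(rho_psi)_k|^2 <= 3|(rho_psihat)_k|^2 + 3/L^3 (|psi_k|^2 + |psi_{-k}|^2).
rng = np.random.default_rng(0)
N, L = 2, 3.7                       # frequency cutoff and torus side (arbitrary)
idx = [(a, b, c) for a in range(-N, N + 1)
       for b in range(-N, N + 1) for c in range(-N, N + 1)]

psi = {j: rng.standard_normal() + 1j * rng.standard_normal() for j in idx}
norm = np.sqrt(sum(abs(v) ** 2 for v in psi.values()))
psi = {j: v / norm for j, v in psi.items()}      # Parseval: sum |psi_j|^2 = 1

def rho_coeff(coeffs, k):
    """(rho)_k = sum_j conj(psi_j) psi_{j+k} / L^{3/2}, over the finite support."""
    return sum(np.conj(v) * coeffs.get((j[0] + k[0], j[1] + k[1], j[2] + k[2]), 0)
               for j, v in coeffs.items()) / L ** 1.5

psi_hat = dict(psi)
psi_hat[(0, 0, 0)] = 0.0                         # zero-mean part of psi

for k in [(1, 0, 0), (2, -1, 1), (0, 0, -2)]:
    mk = (-k[0], -k[1], -k[2])
    lhs = rho_coeff(psi, k)
    rhs = (rho_coeff(psi_hat, k)
           + np.conj(psi[(0, 0, 0)]) * psi[k] / L ** 1.5
           + np.conj(psi[mk]) * psi[(0, 0, 0)] / L ** 1.5)
    assert abs(lhs - rhs) < 1e-12                # exact decomposition
    bound = (3 * abs(rho_coeff(psi_hat, k)) ** 2
             + 3 / L ** 3 * (abs(psi[k]) ** 2 + abs(psi[mk]) ** 2))
    assert abs(lhs) ** 2 <= bound + 1e-12        # uses |psi_0| <= 1
```

The decomposition is an exact algebraic identity (the $j=0$ and $j=-k$ terms of the convolution are isolated), so the check holds to floating-point precision.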
Using the $L^4$-convergence of $\psi_n$ to $\psi$ and the fact that $\|\cdot\|_{\mathring{H}^{-1}(\TL)}\lesssim L\|\cdot\|_{L^2(\TL)}$, we finally obtain
\begin{align}
|W_L(\psi_n)-W_L(\psi)|&\lesssim L\|\rho_{\psi_n}-\rho_{\psi}\|_{\mathring{H}^{-1}(\TL)}\lesssim L^2 \|\rho_{\psi_n}-\rho_{\psi}\|_{L^2(\TL)}\nonumber\\
&\leq L^2\|\psi_n-\psi\|_{L^4(\TL)}\left(\|\psi_n\|_{L^4(\TL)}+\|\psi\|_{L^4(\TL)}\right)\to 0.
\end{align}
This implies that
\begin{align}
\EL(\psi)\leq \liminf_{n\to \infty} \EL(\psi_n)=\eL,
\end{align}
and thus that $\psi$ is a minimizer. Note that, since $\EL(\psi_n)\to \eL=\EL(\psi)$ by definition of $\psi_n$ and, as shown, $W_L(\psi_n)\to W_L(\psi)$, it also holds
\begin{align}
T_L(\psi_n)=\EL(\psi_n)+W_L(\psi_n)\to \EL(\psi)+W_L(\psi)=T_L(\psi)
\end{align}
which implies that $\psi_n$ actually converges to $\psi$ strongly in $H^1(\TL)$. Once we have shown existence of minimizers, we need to investigate more carefully their properties. Some of them are derived in the following Lemma. Recall that
\begin{align}
\label{eq:Vsigmatorus}
V_{\psi}= -2(-\Delta_L)^{-1/2} \psi, \quad \sigma_{\psi}= (- \Delta_L)^{-1/2} |\psi|^2,
\end{align}
and that, as stated above, we call any property universal which does not depend on $L\geq L_0$. Let $\psi\in \MinLe$ (as defined in (<ref>)). Then $\psi$ satisfies the following Euler-Lagrange equation
\begin{align}
\label{eulag}
&(-\Delta_L+V_{\sigma_{\psi}}-\LagrML_{\psi})\psi=0, \quad \text{with} \quad \LagrML_{\psi}= T_L(\psi)-2W_L(\psi).
\end{align}
Moreover, $\psi \in C^{\infty}(\TL)$, is universally bounded in $H^2(\TL)$ (and therefore in $L^{\infty}(\TL)$), has constant phase and never vanishes. Finally, any $L^2$-normalized sequence $f_n\in H^1(\mathbb{T}^3_{L_n})$ such that $\ELn(f_n)$ is universally bounded, is universally bounded in $H^1(\mathbb{T}^3_{L_n})$.
The fact that sequences $f_n\in H^1(\mathbb{T}^3_{L_n})$ of $L^2$-normalized functions for which $\ELn$ is universally bounded are universally bounded in $H^1(\mathbb{T}^3_{L_n})$ follows trivially from estimate (<ref>). This immediately yields a universal bound on the $H^1$-norm of minimizers. The Euler–Lagrange equation (<ref>) for the problem is derived by standard computations omitted here. By Lemma <ref> and by splitting $(\dist_{\TL}(0,\,\cdot\,))^{-1}$ in its $L^{3/2}$ and $L^{\infty}$ parts, we have \begin{align} |V_{\sigma_{\psi}}(x)|\leq 2\int_{\TL} \frac 1 {\dist_{\TL}(x,y)} |\psi(y)|^2 dy +\frac C L\lesssim \left( \|\psi\|_{L^6(\TL)}^2+1\right)\lesssim\left( T_L(\psi)+1\right). \end{align} Therefore, by the universal $H^1$-boundedness of minimizers, $V_{\sigma_{\psi}}$ is universally bounded in $L^{\infty}(\TL)$, for any $\psi \in \MinLe$. This immediately allows to conclude universal $\mathring{H}^2$ (and hence $H^2$) bounds for functions in $\MinLe$, using the Euler–Lagrange equation (<ref>), Lemma <ref> and the universal $H^1$-boundedness of minimizers, which guarantee that \begin{align*} 0\geq \LagrML_{\psi}=2\EL(\psi)-T_L(\psi)\geq -C. \end{align*} Since $L\geq L_0$, universal $H^2$-boundedness also implies universal $L^{\infty}$-boundedness of minimizers by the Sobolev inequality. For any $L>0$, any $\psi\in \MinLe$ satisfies $\eqref{eulag}$, is in $H^1(\TL)$ and is such that $V_{\sigma_{\psi}}\in L^{\infty}(\TL)$. Therefore $\psi$ also satisfies, for any $\lambda>0$ \begin{align} \psi=(-\Delta_L+\lambda)^{-1}(-V_{\sigma_{\psi}}+\LagrML_{\psi}+\lambda)\psi. \end{align} In particular, by a bootstrap argument we can conclude that $\psi\in C^{\infty}(\TL)$. Moreover, picking $\lambda>-\LagrML_{\psi}+\|V_{\sigma_{\psi}}\|_{L^{\infty}(\TL)}$ and using that $(-\Delta_L+\lambda)^{-1}$ is positivity improving, we can also conclude that if $\psi\geq 0$ then $\psi>0$. 
By the convexity properties of the kinetic energy (see [17], Theorem 7.8), we have that $T_L(|\psi|)\leq T_L(\psi)$ which implies that if $\psi\in \MinLe$ then $T_L(\psi)=T_L(|\psi|)$ and also $|\psi|\in \MinLe$. Hence both $\psi$ and $|\psi|$ are eigenfunctions of the least and simple (by positivity of one of the eigenfunctions) eigenvalue $\LagrML_{\psi}=\LagrML_{|\psi|}$ of the Schrödinger operator $-\Delta_L+V_{\sigma_{\psi}}$, which allows us to infer that $\psi$ has constant phase and never vanishes. We now proceed to develop the tools that will allow to show the validity of Theorem <ref>. We begin with a simple Lemma. For $\psi\in H^1(\TL)$, \begin{align} \|\rho_{\psi}\|_{\mathring{H}^{1/8}(\TL)}\lesssim \|\psi\|_{H^{1}(\TL)}^{2}. \end{align} We have \begin{align} \label{eq:rhophibound} &\|\rho_{\psi}\|^2_{\mathring{H}^{1/8}(\TL)}=|\langle \nabla \rho_{\psi} | \nabla ((-\Delta_{L})^{-7/8}\rho_{\psi})\rangle|\nonumber\\ &=2 \left|\int_{\TL}|\psi(x)| \nabla(|\psi(x)|) \cdot \nabla_x \left(\sum_{0\neq k\in \frac {2\pi}{L} \mathbb{Z}^3} \frac{(\rho_{\psi})_k}{|k|^{7/4}}\frac{e^{ik\cdot x}}{L^{3/2}}\right) dx\right|\nonumber\\ &=\left|\sum_{i=1}^3\int_{\TL}|\psi(x)| \partial_i \left(|\psi(x)|\right) \sum_{0\neq k\in \frac {2\pi}{L} \mathbb{Z}^3} \frac{k_i(\rho_{\psi})_{k}}{|k|^{7/4}} \frac {e^{ik\cdot x}}{L^{3/2}}dx\right|. \end{align} We define \begin{align} g_i(x):=\sum_{0\neq k\in \frac {2\pi}{L} \mathbb{Z}^3} \frac{k_i(\rho_{\psi})_{k}}{|k|^{7/4}} \frac {e^{ik\cdot x}}{L^{3/2}}, \end{align} and observe that $(g_i)_0=0$ and $|(g_i)_k|=\frac {|k_i(\rho_{\psi})_{k}|}{|k|^{7/4}}\leq\frac {|(\rho_{\psi})_k|}{|k|^{3/4}}$ for $k\neq 0$. These estimates on the Fourier coefficients of $g_i$ imply that, for $i=1,2,3$, \begin{align} \|g_i\|_{\mathring{H}^{3/4}(\TL)}^2=\sum_{0\neq k\in \frac {2\pi}{L} \mathbb{Z}^3} |k|^{3/2} |(g_i)_k|^2\leq\sum_{0\neq k\in \frac {2\pi}{L} \mathbb{Z}^3} |(\rho_{\psi})_k|^2\leq\|\psi\|_{L^4(\mathbb{T}^3_{L})}^4. 
\end{align}
Moreover, using the fractional Sobolev embeddings (see, for example, [3]) and that $g_i$ has zero mean, we have
\begin{align}
\|g_i\|_{L^4(\TL)}\lesssim \|g_i\|_{\mathring{H}^{3/4}(\TL)}\leq \|\psi\|_{L^4(\TL)}^2.
\end{align}
Applying these results to (<ref>) and using Hölder's inequality twice, the Poincaré-Sobolev inequality and the convexity properties of the kinetic energy (see [17], Theorem 7.8), we conclude that
\begin{align}
\|\rho_{\psi}\|^2_{\mathring{H}^{1/8}(\mathbb{T}^3_{L})}&\lesssim \|\psi\|_{L^4(\TL)}\|g_i\|_{L^4(\mathbb{T}^3_{L})}\|\nabla(|\psi|)\|_{L^2(\TL)}\leq \|\psi\|^3_{L^4(\TL)}\|\psi\|_{\mathring{H}^1(\TL)}\nonumber\\
&\leq \|\psi\|_{L^2(\TL)}^{3/4}\|\psi\|_{L^6(\TL)}^{9/4}\|\psi\|_{\mathring{H}^1(\TL)}\lesssim \|\psi\|_{H^1(\TL)}^{4}.
\end{align}
Our next goal is to show that $\eL\to \einf$ as $L\to \infty$, and that in the large $L$ regime the states that are relevant for the minimization of $\EL$ are necessarily close to the full space minimizer (or any of its translates). This is a key ingredient for the discussion carried out in the following sections, and is stated in a precise way in the next proposition. The coercivity results obtained in [15] are of fundamental importance here as they guarantee that, at least for the full space model, low energy states are close to minimizers. We recall that the full-space Pekar functional, defined in (<ref>), admits a unique positive and radial minimizer $\Emininf$ which is also smooth (see (<ref>)), and we introduce the notation
\begin{align}
\label{eq:PsiL}
\Emininf_L:=\Emininf \chi_{[-L/2,L/2]^3}.
\end{align}
Note that $\Emininf_L\in H^1(\TL)$, by radiality and regularity of $\Emininf$. We have
\begin{align}
\lim_{L\to \infty}\eL= \einf.
\end{align}
Moreover, for any $\varepsilon>0$ there exist $L_{\varepsilon}$ and $\delta_{\varepsilon}$ such that for any $L>L_{\varepsilon}$ and any $L^2$-normalized $\psi \in H^1(\TL)$ with $\EL(\psi)-\eL<\delta_{\varepsilon}$,
\begin{align}
\label{eq:convofmin}
\dist_{H^1}\left(\Theta_L(\psi),\Emininf_L\right)\leq \varepsilon, \quad |\LagrML_{\psi}-\LagrMinf_{\Emininf}|\leq \varepsilon,
\end{align}
where $\Theta_L(\psi)$, $\Emininf_L$, $\LagrML_{\psi}$ and $\LagrMinf_{\Emininf}$ are defined in (<ref>), (<ref>), (<ref>) and (<ref>), respectively. We first show that $\limsup_{L\to \infty} \eL\leq \einf$ by using $\Emininf_L$ as a trial state for $\EL$. Observe that ${\|\Emininf_L\|_{L^2(\TL)}\to 1}$ and $T_L(\Emininf_L)\to T(\Emininf)$ as $L\to \infty$. To estimate the difference of the interaction terms we note that $\Emininf_L(\Emininf-\Emininf_L)=0$ and therefore
\begin{align}
|W_L(\Emininf_L)-W(\Emininf)|\leq |W_L(\Emininf_L)-W(\Emininf_L)|+W(\Emininf-\Emininf_L)+2\bra{(\Emininf-\Emininf_L)^2}\ket{(-\Delta_{\Rtre})^{-1} \Emininf_L^2}.
\end{align}
By dominated convergence, the last two terms converge to zero as $L\to \infty$. On the other hand, by Lemma <ref> and since $\Emininf$ is normalized
\begin{align}
&|W_L(\Emininf_L)-W(\Emininf_L)|\leq \frac C L +\frac 1 {4\pi}\int_{[-L/2,L/2]^6} \Emininf_L(x)^2\Emininf_L(y)^2 \left|\frac 1 {\dist_{\TL}(x,y)}-\frac 1 {|x-y|}\right|dxdy.
\end{align}
Moreover, since $\dist_{\TL}(x,y)=|x-y|$ for $x,y\in [-L/4,L/4]^3$ and using the symmetry and the positivity of the integral kernel and the fact that $\dist_{\TL}(x,y)\leq |x-y|$, we get
\begin{align}
\label{eq:InteractionBounds}
&\int_{[-L/2,L/2]^6} \Emininf_L(x)^2\Emininf_L(y)^2 \left|\frac 1 {\dist_{\TL}(x,y)}-\frac 1 {|x-y|}\right|dxdy\notag\\
&\leq 2\int_{[-L/2,L/2]^3} \Emininf_L^2(x)\left(\int_{[-L/2,L/2]^3}\frac{(\Emininf_L-\Emininf_{L/2})^2(y)} {\dist_{\TL}(x,y)}dy\right)dx.
\end{align}
Finally, by splitting $\dist_{\TL}^{-1}(x,\cdot)$ in its $L^{\infty}$ and $L^1$ parts and using that $\Emininf$ is normalized, we can bound the r.h.s. of (<ref>) by $\left(C_1\|\Emininf_L-\Emininf_{L/2}\|_2^2+C_2\|\Emininf_L-\Emininf_{L/2}\|_{\infty}^2\right)$, which vanishes as $L\to \infty$, since $\Emininf(x)\xrightarrow{|x|\to\infty}0$. Putting the pieces together, we conclude that
\begin{align}
\lim_{L\to \infty} W_L(\Emininf_L)=W(\Emininf).
\end{align}
This shows our first claim, since
\begin{align}
\eL\leq\EL(\Emininf_L/\|\Emininf_L\|_2)=\frac 1 {\|\Emininf_L\|_2^2}\left(T_L(\Emininf_L)-\frac 1 {\|\Emininf_L\|_2^2}W_L(\Emininf_L)\right)\to \einf.
\end{align}
We now proceed to show that
\begin{align}
\label{eq:liminf}
\liminf_{L\to \infty} \eL\geq \einf
\end{align}
and the validity of (<ref>) using IMS localization. We shall show that for any $L^2$-normalized sequence $\psi_n \in H^1(\TLn)$ with $L_n\to \infty$ such that
\begin{align}
\label{eq:lowenergy}
\ELn(\psi_n)-\eLn\to 0,
\end{align}
we have
\begin{align}
\label{eq:sequenceclaims}
\liminf_{n \to \infty}\ELn(\psi_n)\geq \einf, \quad \lim_{n\to \infty}\dist_{H^1}\left(\Theta_{L_n}(\psi_n),\Emininf_{L_n}\right)=0, \quad \lim_{n\to \infty}|\mu^{L_n}_{\psi_n}-\LagrMinf_{\Emininf}|=0,
\end{align}
which implies the claim of the proposition. Pick $\eta\in C^{\infty}(\Rtre)$ with $\text{supp}(\eta)\subset B_1$ and $\|\eta\|_2=1$. We denote by $\eta_R$ the rescaled copy of $\eta$ supported on $B_R$ with $L^2$-norm equal to $1$. As long as $R\leq L/2$, $\eta_R \in C^{\infty}(\TL)$ and we then consider the translates $\eta_R^y$ for any $y\in \TL$. Given $\psi\in H^1(\TL)$, we also define
\begin{align}
\psi_R^y:=\psi \eta_R^y/\|\psi \eta_R^y\|_2.
\end{align}
By standard properties of IMS localization, for any $R\leq L/2$, we have
\begin{align}
\label{Test}
\int_{\TL} T_L(\psi_R^y) \|\psi\eta_R^y\|_2^2dy=\int_{\TL} T_L(\psi\eta_R^y) dy=T_L(\psi)+\frac {\int |\nabla \eta|^2} {R^2}.
\end{align} Moreover, by using that $|\psi|^2=\int_{\TL} |\psi \eta_R^y|^2 dy=\int_{\TL} |\psi_R^y|^2 \|\psi \eta_R^y\|^2 dy$ and completing the square \begin{align} \label{West} W_L(\psi)=\int_{\TL} \left[W_L(\psi_R^y)-\left\||\psi_R^y|^2-|\psi|^2\right\|_{\mathring{H}^{-1}(\TL)}^2\right] \|\psi \eta_R^y\|_2^2dy. \end{align} Combining (<ref>) and (<ref>), we therefore obtain \begin{align} \EL(\psi)+\frac C {R^2}=\int_{\TL} \left[\EL(\psi_R^y)+\left\||\psi_R^y|^2-|\psi|^2\right\|_{\mathring{H}^{-1}(\TL)}^2\right]\|\psi \eta_R^y\|_2^2dy. \end{align} Since the integrand on the r.h.s. is equal to the l.h.s. on average (indeed $\|\psi \eta_R^y\|_2^2dy$ is a probability measure) there exists $\bar y\in \TL$ such that \begin{align} \EL(\psi_R^{\bar y})+\left\||\psi_R^{\bar y}|^2-|\psi|^2\right\|_{\mathring{H}^{-1}(\TL)}^2\leq\EL(\psi)+\frac C {R^2}. \end{align} This fact has several consequences and it is particularly useful if we apply it to our sequence $\psi_n$ with a radius $R=R_n\leq L_n/2$ (we take for simplicity $R=L_n/4$). Indeed, by the above discussion and (<ref>), we obtain that there exists $\bary_n\in \TLn$ such that the $L^2$-normalized functions \begin{align} \bar\psi_n:=\frac {\psi_n\eta^{\bary_n}_{L_n/4}}{\|\psi_n\eta^{\bary_n}_{L_n/4}\|_2} \end{align} are competitors both for the minimization of $\ELn$ and $\Einf$ (indeed, $\bar\psi_n$ can then be thought of as a function in $C^{\infty}_c(\Rtre)$, supported on $B_{L_n/4}$) and satisfy \begin{align} \label{phinprop} \ELn(\bar\psi_n)&\leq \ELn(\psi_n)+\frac C {L_n^2}\leq \eLn+o_{L_n}(1), \nonumber\\ &\|\rho_{\bar\psi_n}-\rho_{\psi_n}\|^2_{\mathring{H}^{-1}(\mathbb{T}^3_{L_n})}\leq \frac C {L_n^2}. \end{align} In other words, we can localize any element of our sequence $\psi_n$ to a ball of radius $R= L_n/4$ with an energy expense of order $L_n^{-2}$, and the localized function is close (in the sense of the second line of (<ref>)) to $\psi_n$ itself, up to an error again of order $L_n^{-2}$. 
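For the reader's convenience, we sketch the standard computation behind the kinetic identity used in the localization above. Since $\eta_R(x)=R^{-3/2}\eta(x/R)$ is $L^2$-normalized, $\|\nabla\eta_R\|_2^2=R^{-2}\|\nabla\eta\|_2^2$, and expanding the gradient and integrating over the translation parameter gives
\begin{align*}
\int_{\TL} T_L(\psi\eta_R^y)\,dy
&=\int_{\TL}|\nabla\psi(x)|^2\left(\int_{\TL}|\eta_R^y(x)|^2dy\right)dx
+\int_{\TL}|\psi(x)|^2\left(\int_{\TL}|\nabla\eta_R^y(x)|^2dy\right)dx\\
&=T_L(\psi)+\frac{\int|\nabla\eta|^2}{R^2},
\end{align*}
the cross term vanishing because $\int_{\TL}\eta_R^y(x)\nabla\eta_R^y(x)\,dy=\frac12\int_{\TL}\nabla\left(\eta_R^2\right)=0$.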
Moreover, $T_{L_n}(\bar \psi_n)=T(\bar\psi_n)$ and, using Lemma <ref> and the fact that $\dist_{\TLn}(x,y)=|x-y|$ for all $x,y \in B_{L_n/4}$, we have
\begin{align}
\label{eq:ApproxInteraction}
|W_{L_n}(\bar\psi_n)-W(\bar\psi_n)|\lesssim \frac 1 {L_n}.
\end{align}
Therefore, using (<ref>)
\begin{align}
\label{energysequence}
\einf\leq \Einf(\bar\psi_n)\leq\ELn(\bar\psi_n)+\frac C {L_n}\leq \eLn+o_{L_n}(1),
\end{align}
which shows the first claim in (<ref>). By Theorem <ref> and (<ref>), it also follows that
\begin{align}
\dist_{H^1} \left( \Theta(\Emininf), \bar\psi_n\right)\xrightarrow{n\to\infty} 0.
\end{align}
Hence, up to an $n$-dependent translation and change of phase (which we can both assume to be zero without loss of generality by suitably redefining $\psi_n$), $\bar\psi_n\xrightarrow{H^1(\Rtre)} \Emininf$, and the convergence also holds in $L^p(\Rtre)$ for any $2\leq p \leq6$. From this and the second line of (<ref>), we would like to deduce that also $\psi_n$ and $\Emininf_{L_n}$ are close. We first note that, by a simple application of Hölder's inequality, it follows that for any $f\in L^2(\TL)$ with zero mean
\begin{align}
\label{l2tofractsob}
\| f\|_{L^2(\TL)}^2&\leq \left(\sum_{0\neq k\in \frac {2\pi}{L} \mathbb{Z}^3} |k|^{1/4} |f_k|^2\right)^{8/9}\left(\sum_{0\neq k\in \frac {2\pi}{L} \mathbb{Z}^3} |k|^{-2} |f_k|^2\right)^{1/9}\nonumber\\
&=\|f\|_{\mathring{H}^{1/8}(\TL)}^{16/9}\,\|f\|_{\mathring{H}^{-1}(\TL)}^{2/9}.
\end{align}
We combine this with (<ref>) and apply it to the zero-mean function $(\rho_{\psi_n}-\rho_{\bar\psi_n})$, obtaining
\begin{align}
\|\rho_{\bar\psi_n}-\rho_{\psi_n}\|_{L^2(\mathbb{T}^3_{L_n})}^2\lesssim \left(\frac{\|\rho_{\psi_n}\|^2_{\mathring{H}^{1/8}(\TLn)}+\|\rho_{\bar\psi_n}\|^2_{\mathring{H}^{1/8}(\mathbb{T}^3_{L_n})}}{L_n^{1/4}}\right)^{8/9}.
\end{align}
Applying Lemma <ref> to $\psi_n$ and $\bar\psi_n$ (which are uniformly bounded in $H^1$ by Lemma <ref>) we conclude that $(\rho_{\psi_n}-\rho_{\bar\psi_n})\xrightarrow{L^2}0$.
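As an illustrative numerical sanity check (not part of the argument), the mode-wise Hölder step $\| f\|_{L^2}^2\leq \big(\sum_k |k|^{1/4}|f_k|^2\big)^{8/9}\big(\sum_k |k|^{-2}|f_k|^2\big)^{1/9}$ for zero-mean $f$ can be tested on random Fourier coefficients. The following Python sketch assumes only NumPy; the mode cutoff, torus side $L$, and seed are arbitrary choices made for the illustration.

```python
import numpy as np

# Hoelder with conjugate exponents 9/8 and 9, applied mode by mode:
#   |f_k|^2 = (|k|^{1/4}|f_k|^2)^{8/9} (|k|^{-2}|f_k|^2)^{1/9},
# since the powers of |k| cancel: (1/4)*(8/9) = 2*(1/9).
rng = np.random.default_rng(1)
L = 5.0
grid = np.arange(-3, 4)
K = np.array([(a, b, c) for a in grid for b in grid for c in grid
              if (a, b, c) != (0, 0, 0)], dtype=float) * (2 * np.pi / L)
kn = np.linalg.norm(K, axis=1)                   # |k| over the nonzero modes
f = rng.standard_normal(len(K)) + 1j * rng.standard_normal(len(K))

l2 = np.sum(np.abs(f) ** 2)                      # ||f||_{L^2}^2 (Parseval)
h18 = np.sum(kn ** 0.25 * np.abs(f) ** 2)        # squared homogeneous H^{1/8} norm
hm1 = np.sum(kn ** -2.0 * np.abs(f) ** 2)        # squared homogeneous H^{-1} norm
assert l2 <= h18 ** (8 / 9) * hm1 ** (1 / 9) * (1 + 1e-12)
```

The inequality holds for every choice of coefficients, so the assertion cannot fail up to floating-point tolerance.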
As a consequence, since $\psi_n$ and $\bar\psi_n$ have the same phase, $\psi_n$ and $\bar\psi_n$ are arbitrarily close in $L^4$. Indeed,
\begin{align}
\|\psi_n-\bar\psi_n\|_{L^4(\mathbb{T}^3_{L_n})}^4= \int_{\mathbb{T}^3_{L_n}} \left||\psi_n|-|\bar\psi_n|\right|^4dx\leq\int_{\mathbb{T}^3_{L_n}} (\rho_{\psi_n}-\rho_{\bar\psi_n})^2 dx\xrightarrow{n\to \infty} 0.
\end{align}
By the identification of $\mathbb{T}^3_{L_n}$ with $[-L_n/2,L_n/2]^3$, we finally get $\|\psi_n-\Emininf\|_{L^4(\Rtre)}\to 0$, if $\psi_n$ is set to be $0$ outside $[-L_n/2,L_n/2]^3$. Moreover, $\psi_n$ converges to $\Emininf$ in $L^p(\Rtre)$ for any $2\leq p <6$, since $\|\psi_n\|_2=1$, $\psi_n\xrightarrow{L^4}\Emininf$, $\|\Emininf\|_2=1$ and $\|\psi_n\|_p$ is uniformly bounded for any $2\leq p \leq 6$. To show the second claim in (<ref>), we need to show that the convergence actually holds in $H^1(\TLn)$, i.e., that $\|\psi_n-\Emininf_{L_n}\|_{H^1(\mathbb{T}^3_{L_n})}\to 0$. First, we show convergence in $H^1(B_R)$ for fixed $R$. Note that
\begin{align}
\label{eq:convergenceofnorms}
\left(\|\psi_n\|_{H^1(\TLn)}-\|\Emininf\|_{H^1(\Rtre)}\right)\to 0,
\end{align}
since
\begin{align}
|T_{L_n}(\psi_n)-T_{L_n}(\bar\psi_n)|&\leq|\ELn(\psi_n)-\ELn(\bar\psi_n)|+|W_{L_n}(\psi_n)-W_{L_n}(\bar\psi_n)|\to 0,
\end{align}
and $T_{L_n}(\bar\psi_n)=T(\bar\psi_n) \to T(\Emininf)$ by $H^1$ convergence. Moreover, given that $\psi_n$ is uniformly bounded in $H^1(B_R)$ and $\psi_n \to \Emininf$ in $L^2(B_R)$, we have $\psi_n\rightharpoonup \Emininf$ in $H^1(B_R)$ for any $R$ and this, together with (<ref>) and weak lower semicontinuity of the norms, implies $\psi_n\to \Emininf$ in $H^1(B_R)$ for any $R$.
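The elementary pointwise bound behind the $L^4$-estimate at the beginning of this step is, for complex numbers $a,b$,
\begin{align*}
\left||a|-|b|\right|^4\leq \left||a|-|b|\right|^2\left(|a|+|b|\right)^2=\left(|a|^2-|b|^2\right)^2,
\end{align*}
applied pointwise with $a=\psi_n(x)$ and $b=\bar\psi_n(x)$, which have the same phase.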
Finally, for any $\varepsilon>0$ there exists $R=R(\varepsilon)$ such that $\|\Emininf\|_{H^1(B_R^c)}\leq \varepsilon$ and, using strong $H^1$-convergence on balls and again (<ref>), we obtain
\begin{align}
\|\psi_n-\Emininf_{L_n}\|_{H^1(\mathbb{T}^3_{L_n})}&\leq \|\psi_n-\Emininf\|_{H^1(B_R)}+\|\psi_n-\Emininf\|_{H^1([-L_n/2,L_n/2]^3\setminus B_R)}\nonumber\\
&\leq \|\psi_n-\Emininf\|_{H^1(B_R)}+\|\psi_n\|_{H^1([-L_n/2,L_n/2]^3\setminus B_R)}+\|\Emininf\|_{H^1([-L_n/2,L_n/2]^3\setminus B_R)}\nonumber\\
&\leq \|\psi_n-\Emininf\|_{H^1(B_R)}+2\varepsilon+o_n(1)\to 2\varepsilon,
\end{align}
which concludes the proof of the second claim in (<ref>). Finally, we show the third claim in (<ref>). This simply follows from the previous bounds, which guarantee that $\ELn(\psi_n)\to \einf$ and $T_{L_n}(\psi_n)\to T(\Emininf)$ and hence
\begin{align}
\LagrML_{\psi_n}=T_{L_n}(\psi_n)-2W_{L_n}(\psi_n)=2\ELn(\psi_n)-T_{L_n}(\psi_n)\to 2\einf-T(\Emininf)=\LagrMinf_{\Emininf}.
\end{align}
We conclude this section with a simple corollary of Proposition <ref>. There exists $L^*$ such that for $L>L^*$ and any $\psi\in\MinLe$ we have $\psi\neq \psi^y$ for $0\neq y\in \TL$. It is clearly sufficient to show the claim for $\psi \in \MinLe$ such that
\begin{align}
\dist_{H^1}(\Theta_L(\psi),\Emininf_L)=\|\psi-\Emininf_L\|_{H^1(\TL)}
\end{align}
and for $y \in \TL$ such that $|y|\geq L/4$ (indeed, if the claim fails for some $y'$ such that $|y'|<L/4$ it also fails for some $y$ such that $|y|\geq L/4$). For any such $\psi$ and $y$, Proposition <ref> and the fact that $\Emininf\neq \Emininf^y$ for any $0\neq y\in \Rtre$ guarantee the existence of $L^*$ such that for any $L>L^*$ we have
\begin{align}
\|\psi-\psi^y\|_{H^1(\TL)}\geq \|\Emininf_L^y-\Emininf_L\|_{H^1(\TL)}-2\|\psi-\Emininf_L\|_{H^1(\TL)}\geq C>0
\end{align}
and this completes the proof.
§.§.§ Study of the Hessian of $\EL$
In this section we study the Hessian of $\EL$ at its minimizers, showing that it is strictly positive, universally, for $L$ large enough.
Positivity is of course understood up to the trivial zero modes resulting from the symmetries of the problem (translations and changes of phase). This is obtained by comparing $\EL$ with $\mathcal{E}$ and exploiting Theorem <ref>. For any minimizer $\psi\in \MinLe$, the Hessian of $\EL$ at $\psi$ is defined by
\begin{align}
\lim_{\varepsilon\to 0} \frac 1 {\varepsilon^2} \left(\EL\left(\frac{\psi+\varepsilon f}{\|\psi+\varepsilon f\|_2}\right)-\eL\right)=H^{\EL}_{\psi}(f)\quad \forall f\in H^1(\TL).
\end{align}
An explicit computation gives
\begin{align}
\label{Hessian}
H^{\EL}_{\psi}(f)=\expval{\LL_{\psi}}{\Im f}+\expval{Q_{\psi}(\LL_{\psi}-4\XL_{\psi}) Q_{\psi}}{\Re f},
\end{align}
with $Q_{\psi}=\unit- |\psi \rangle\langle\psi|$ and
\begin{align}
\LL_{\psi}:= - \Delta_{L} +V_{\sigma_{\psi}}-\LagrML_{\psi} \ , \quad \XL_{\psi}(x,y):= \psi(x)\Tker \psi(y) \label{def:lpm}.
\end{align}
(We use the same notation for the operator $\XL_\psi$ and its integral kernel for simplicity.) We recall that $\LagrML_{\psi}=T_L(\psi)-2W_L(\psi)$ (see (<ref>)) and that $V_{\sigma_{\psi}}= -2(-\Delta_L)^{-1} \rho_{\psi}$ (see (<ref>)) and we note that $\LL_{\psi} \psi=0$ is exactly the Euler–Lagrange equation derived in Lemma <ref>. By minimality of $\psi$, we know that $\infspec \LL_{\psi} = \infspec Q_{\psi} (\LL_{\psi}-4\XL_{\psi}) Q_{\psi} =0$, since both operators are clearly nonnegative and $\psi$ is in the kernel of both of them. Moreover, $\ker \LL_{\psi}= \spn\{\psi\}$, since it is a Schrödinger operator of least (simple) eigenvalue $0$. The situation is more complicated for $Q_{\psi}(\LL_{\psi}-4\XL_{\psi}) Q_{\psi}$, whose kernel contains at least $\psi$ and $\partial_i \psi$ (by the translation invariance of the problem). Since both $\LL_{\psi}$ and $Q_{\psi}(\LL_{\psi}-4\XL_{\psi})Q_{\psi}$ have compact resolvents (they are given by bounded perturbations of $-\Delta_L$), they both have discrete spectrum.
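For the reader's convenience, we sketch why the partial derivatives belong to the kernel of $Q_{\psi}(\LL_{\psi}-4\XL_{\psi})Q_{\psi}$ (recall that $\psi$ may be taken real by Lemma <ref>). Differentiating the Euler–Lagrange equation $\LL_{\psi}\psi=0$ gives
\begin{align*}
0=\partial_i\left(\LL_{\psi}\psi\right)=\LL_{\psi}\,\partial_i\psi+\left(\partial_i V_{\sigma_{\psi}}\right)\psi
=\left(\LL_{\psi}-4\XL_{\psi}\right)\partial_i\psi,
\end{align*}
where we used $V_{\sigma_{\psi}}=-2(-\Delta_L)^{-1}\rho_{\psi}$ and $\psi\,\partial_i\psi=\frac12\partial_i\rho_{\psi}$, so that $\left(\partial_i V_{\sigma_{\psi}}\right)\psi=-2\psi\,(-\Delta_L)^{-1}\partial_i\rho_{\psi}=-4\XL_{\psi}\partial_i\psi$. Moreover, $\bra{\psi}\ket{\partial_i\psi}=\frac12\partial_i\|\psi\|_2^2=0$, whence $Q_{\psi}\partial_i\psi=\partial_i\psi$ and $\partial_i\psi\in\ker Q_{\psi}(\LL_{\psi}-4\XL_{\psi})Q_{\psi}$.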
Our aim is two-fold: first we need to show that the kernel of $Q_{\psi}(\LL_{\psi}-4\XL_{\psi})Q_{\psi}$ is exactly spanned by $\psi$ and its partial derivatives, secondly we want to show that the spectral gap (above the trivial zero modes) of both operators is bounded by a universal positive constant. Before stating the main result of this section, we introduce the relevant full-space objects: let again $\Emininf$ be the unique positive and radial full-space minimizer of the Pekar functional (<ref>) and, analogously to (<ref>), define \begin{align} L_{\Emininf}:= - \Delta_{\R^{3}} +V_{\sigma_{\Emininf}}-\LagrMinf_{\Emininf} \ , \quad X_{\Emininf}(x,y):= \Emininf(x)(-\Delta_{\Rtre})^{-1}(x,y)\Emininf(y). \end{align} We introduce \begin{align} \label{eq:hinfty} &h_{\infty}':=\inf_{f\in H_{\mathbb{R}}^1(\Rtre), \|f\|_2=1\atop f \in (\spn\{\Emininf\})^{\perp}} \expval{L_{\Emininf}}{f},\nonumber\\ &h_{\infty}'':=\inf_{f\in H_{\mathbb{R}}^1(\Rtre), \|f\|_2=1\atop f \in (\spn\{\Emininf, \partial_1 \Emininf, \partial_2 \Emininf, \partial_3 \Emininf\})^{\perp}} \expval{L_{\Emininf}-4X_{\Emininf}}{f}. \end{align} We emphasize that the results contained in [15] imply that $\min \{h_{\infty}',h_{\infty}''\}>0$. Moreover, it is easy to see, using that $V_{\sigma_{\Emininf}}(x)\lesssim -|x|^{-1}$ for large $x$, that $L_{\Emininf}$ has infinitely many eigenvalues between $0$, its least and simple eigenvalue with eigenfunction given by $\Emininf$, and $-\LagrMinf_{\Emininf}$, the bottom of its continuous spectrum. Since furthermore $X_{\Emininf}$ is positive, this implies, in particular, that \begin{align} \label{eq:LowerBoundh2} h_{\infty}'', h_{\infty}'< -\LagrMinf_{\Emininf}, \end{align} which we shall use later. 
For any $L>0$, we define
\begin{align}
\label{Lminusineq}
&\quad\,h_L':=\inf_{\psi\in \MinLe}\inf_{f \in H_{\mathbb{R}}^1(\TL), \|f\|_2=1 \atop f \in(\spn\{\psi\})^{\perp}} \quad\expval{\LL_{\psi}}{f}, \\
\label{Lplusineq}
&h_L'':=\inf_{\psi\in\MinLe}\inf_{f \in H^1_{\mathbb{R}}(\TL), \|f\|_2=1 \atop f \in (\spn\{\psi, \partial_1 \psi, \partial_2 \psi, \partial_3 \psi\})^{\perp}} \expval{\LL_{\psi}-4\XL_{\psi}}{f}.
\end{align}
Then
\begin{align}
\label{eq:claimliminf}
&\liminf_{L\to \infty} h_L'\geq h_{\infty}', \quad\liminf_{L\to \infty} h_L''\geq h_{\infty}''.
\end{align}
It is not difficult to show that
\begin{align}
\label{eq:ConvergenceToFullSpace}
&\limsup_{L\to \infty} h_L'\leq h_{\infty}', \quad\limsup_{L\to \infty} h_L''\leq h_{\infty}'',
\end{align}
simply by considering localizations of the full-space optimizers and using Proposition <ref>. Hence there is actually equality in (<ref>). To prove Proposition <ref> we need the following two Lemmas. For $\psi\in \MinLe$, the operator $Y^L_{\psi}$ with integral kernel $Y^L_{\psi}(x,y):= \Tker\psi(y)$ is universally bounded from $L^2(\TL)$ to $L^{\infty}(\TL)$. This in particular implies that the operators $\XL_{\psi}$, defined in (<ref>), are universally bounded from $L^2(\TL)$ to $L^2(\TL)$. Using Lemma <ref> and the normalization of $\psi$, we have
\begin{align}
|Y^L_{\psi}(f)(x)|&=\left|\int_{\TL} \Tker \psi(y) f(y) dy\right|\lesssim \|f\|_2+\int_{\TL} \frac {|\psi(y)f(y)|}{4\pi \dist_{\TL}(x,y)} dy\nonumber\\
&\lesssim \|f\|_2+\int_{B_{1}(x)}\frac {|\psi(y)f(y)|}{\dist_{\TL}(x,y)} dy\leq (1+C\|\psi\|_{\infty})\|f\|_2\lesssim \|f\|_2.
\end{align}
To conclude, we also made use of the fact that the minimizers are universally bounded in $L^{\infty}$ by Lemma <ref>. Recall the definition of $\Emininf_L$ in (<ref>).
For any $\varepsilon>0$, there exist $R'_{\varepsilon}$ and $L'_{\varepsilon}$ (with $R_{\varepsilon}'\leq L_{\varepsilon}'/2$) such that for any $L>L'_{\varepsilon}$, any normalized $f$ in $L^2(\TL)$ supported on $B_{R'_{\varepsilon}}^c:=[-L/2,L/2]^3\setminus B_{R'_{\varepsilon}}$, and any $\psi\in \MinLe$ such that
\begin{align}
\|\psi-\Emininf_L\|_{H^1(\TL)}=\dist_{H^1}(\Theta_L(\psi),\Emininf_L)
\end{align}
we have
\begin{align}
\expval{\LL_{\psi}-4\XL_{\psi}}{f}\geq -\LagrMinf_{\Emininf}-\varepsilon.
\end{align}
By definition of $\LL_{\psi}$ and $\XL_{\psi}$, we have
\begin{align}
\expval{\LL_{\psi}-4\XL_{\psi}}{f}&=T_L(f)-\LagrML_{\psi} +\expval{V_{\sigma_{\psi}}}{f}-4\expval{\XL_{\psi}}{f}\nonumber\\
&\geq -\LagrML_{\psi}+\expval{V_{\sigma_{\psi}}}{f}-4\expval{\XL_{\psi}}{f}.
\end{align}
By Proposition <ref>, taking $L_{\varepsilon}'$ sufficiently large guarantees that
\begin{align}
|\LagrML_{\psi}-\LagrMinf_{\Emininf}|\leq \varepsilon/2.
\end{align}
Thus we only need to show that $\expval{V_{\sigma_{\psi}}}{f}$ and $\expval{\XL_{\psi}}{f}$ can be made arbitrarily small by taking $L_{\varepsilon}'$ and $R_{\varepsilon}'$ sufficiently large. Since $f$ is normalized and supported on $B_{R'_{\varepsilon}}^c$,
\begin{align}
|\expval{V_{\sigma_{\psi}}}{f}|\leq \|V_{\sigma_{\psi}}\|_{L^{\infty}\left(B_{R'_{\varepsilon}}^c\right)}.
\end{align}
Moreover, using Lemma <ref>, splitting the integral over $B_t(x)$ and $B_t^c(x)$ (for some $t>0$), and assuming $x\in B_{R'_{\varepsilon}}^c$, we find
\begin{align}
|V_{\sigma_{\psi}}(x)|\leq \frac C L +C\int_{\TL} \frac {|\psi(y)|^2}{\dist_{\TL}(x,y)} dy\leq \frac C L+C t \|\psi\|^2_{L^6\left(B_{R'_{\varepsilon}-t}^c\right)} +1/t.
\end{align}
On the other hand, by Lemma <ref>,
\begin{align}
|\expval{\XL_{\psi}}{f}|\leq C\|f\|_{2}\int_{\TL} \psi(y)|f(y)|dy\leq C \|\chi_{B_{R'_{\varepsilon}}^c} \psi\|_2.
\end{align}
Therefore, by applying Proposition <ref>, we can conclude that there exists $L'_{\varepsilon}$ and $R'_{\varepsilon}$ such that, for any $L>L'_{\varepsilon}$ and any $L^2$-normalized $f$ supported on $B_{R'_{\varepsilon}}^c$, we have
\begin{align}
\expval{V_{\sigma_{\psi}}}{f}-4\expval{\XL_{\psi}}{f}\geq -\varepsilon/2,
\end{align}
which concludes our proof. We only show the second inequality in (<ref>), as its proof can easily be modified to also show the first. Moreover, we observe that the second inequality in (<ref>) is equivalent to the statement that for any sequence $\psi_n\in \mathcal{M}_{L_n}$ with $L_n\to \infty$,
\begin{align}
\liminf_{n} \inf_{f \in H^1(\mathbb{T}^3_{L_n}), \|f\|_2=1 \atop f \in \spn\{\psi_n, \partial_1 \psi_n, \partial_2 \psi_n, \partial_3 \psi_n\}^{\perp}} \expval{\LLn_{\psi_n}-4\XLn_{\psi_n}}{f}\geq h_{\infty}'',
\end{align}
which we shall prove in the following. We consider $\psi_n\in \mathcal{M}_{L_n}$, $L_n\to \infty$, and define
\begin{align}
h_n:=\inf_{f \in H^1(\TLn), \|f\|_2=1 \atop f \in \spn\{\psi_n, \partial_1 \psi_n, \partial_2 \psi_n, \partial_3 \psi_n\}^{\perp}} \expval{\LLn_{\psi_n}-4\XLn_{\psi_n}}{f}.
\end{align}
By translation invariance of $\mathcal{E}_{L_n}$ and by Proposition <ref>, we can also restrict to sequences $\psi_n$ converging to $\Emininf$ in $L^2(\Rtre)$ and such that $\|\psi_n-\Emininf_{L_n}\|_{H^1\left(\TLn\right)}\to 0$, where $\Emininf_{L_n}$ is defined in (<ref>). Let now $g_n$ be a normalized function in $L^2(\TLn)$, orthogonal to $\psi_n$ and its partial derivatives, realizing $h_n$ (which exists by compactness, and can be taken to be a real-valued function). We define the following partition of unity $0\leq\eta^1_R,\eta^2_R\leq1$, with $\eta^i_R\in C^{\infty}(\Rtre)$, $\eta^i_R(x)=\eta_i(x/R)$ and
\begin{align}
\eta_1(x)=
\begin{cases}
1 & x\in B_1,\\
0 & x\in B_{2}^c
\end{cases}
\quad \quad \quad \eta_2=\sqrt{1-|\eta_1|^2}.
\end{align} We define $\eta^i_n:=\eta^i_{L_n/8}$ and \begin{align} g_n^i:=\frac{\eta^i_n g_n}{\|\eta^i_n g_n\|_2}. \end{align} Standard properties of IMS localization imply that \begin{align} h_n & = \expval{\LLn_{\psi_n}-4\XLn_{\psi_n}}{g_n}\notag\\ &=\sum_{i=1,2} \|\eta^i_ng_n\|_2^2\expval{\LLn_{\psi_n}-4\XLn_{\psi_n}}{g_n^i} \notag\\ & \quad -\sum_{i=1,2} \Big(\expval{|\nabla \eta^i_n|^2}{g_n}+2\expval{[\eta^i_n,[\eta^i_n,\XLn_{\psi_n}]]}{g_n}\Big). \label{llo} \end{align} Clearly, the first summand in the second sum is of order $O(L_n^{-2})$, by the scaling of $\eta^i_n$. For the second summand, we observe that the double commutators act as integral operators with kernels \begin{align} [\eta^i_n,[\eta^i_n,\XLn_{\psi_n}]](x,y)=\left(\eta^i_n(x)-\eta^i_n(y)\right)^2\XLn_{\psi_n}(x,y), \end{align} and proceed to bound the Hilbert-Schmidt norm of both operators ($i=1,2$), which will then bound the last line of (<ref>). We make use of Lemma <ref> to obtain \begin{align} &\int_{\TLn\times\TLn} |\Delta_{L_n}^{-1}(x,y)|^2 \psi_n(x)^2\psi_n(y)^2 \left(\eta^i_n(x)-\eta^i_n(y)\right)^4 dx dy\nonumber\\ &\lesssim \frac 1 {L_n^2} +\int_{\TLn\times \TLn} \frac{ \left(\eta^i_n(x)-\eta^i_n(y)\right)^4} {d^2_{\TLn}(x,y)}\psi_n(x)^2\psi_n(y)^2 dx dy\leq \frac 1 {L_n^2}+ \|\nabla \eta^i_n\|^2_{\infty}. \end{align} Therefore, also the second summand in the error terms is of order $L_n^{-2}$, which allows us to conclude that \begin{align} \label{approxest} \sum_{i=1,2} \|\eta^i_ng_n\|_2^2\expval{\LLn_{\psi_n}-4\XLn_{\psi_n}}{g_n^i}=h_n+O(L_n^{-2}). \end{align} By Lemma <ref> applied to $g_n^2$ (which is supported on $B^c_{L_n/4}$) and (<ref>), we find \begin{align} \label{eq:boundong2} \expval{\LLn_{\psi_n}-4\XLn_{\psi_n}}{g_n^2}\geq -\LagrMinf_{\Emininf}+o_n(1)> h_{\infty}''+o_n(1). \end{align} Since the l.h.s. 
of (<ref>) is a convex combination and $(\LLn_{\psi_n}-4\XLn_{\psi_n})$ is uniformly bounded from below, (<ref>) allows us to restrict to sequences $\psi_n$ such that \begin{align} \label{eq:LowerBoundNorm} \|\eta^1_n g_n\|_2\geq C \end{align} uniformly in $n$ and \begin{align} \label{g1est} \expval{\LLn_{\psi_n}-4\XLn_{\psi_n}}{g_n^1}\leq h_n+o_n(1), \end{align} since our claim holds on any sequence for which (<ref>) and (<ref>) are not simultaneously satisfied. Using (<ref>), it is easy to see that $g_n^1$ is almost orthogonal to $\psi_n$, in the sense that \begin{align} |\bra{g_n^1}\ket{\psi_n}|=\frac 1 {\|g_n\eta^1_n\|_2}|\bra{g_n(\eta^1_n-1)}\ket{\psi_n}|\leq \frac 1 C \|(1-\eta^1_n)\psi_n\|_2\leq\frac 1 C \|\chi_{B^c_{L_n/8}}\psi_n\|_2 \xrightarrow{n\to\infty} 0. \end{align} Here we used the $L^2$-convergence of $\psi_n$ to $\Emininf$. Clearly, the same computation (together with the $H^1$-convergence of $\psi_n$ to $\Emininf$) shows that $g_n^1$ is also almost orthogonal to the partial derivatives of $\psi_n$. To conclude, we wish to modify $g_n^1$ in order to obtain a function $\tilde{g}_n$ which satisfies the constraints (i.e., is a competitor) of the full-space variational problem introduced in (<ref>). We also wish to have \begin{align} \label{ultimatewish} \expval{L_{\Emininf}-4X_{\Emininf}}{\tilde{g}_n}=\expval{\LLn_{\psi_n}-4\XLn_{\psi_n}}{g_n^1}+o_n(1). \end{align} Indeed, (<ref>) together with (<ref>) and the fact that $\tilde{g}_n$ is a competitor on $\Rtre$, would imply that \begin{align} h_n\geq \expval{\LLn_{\psi_n}-4\XLn_{\psi_n}}{g_n^1}-o_n(1)=\expval{L_{\Emininf}-4X_{\Emininf}}{\tilde{g}_n}-o_n(1)\geq h_{\infty}''-o_n(1), \end{align} which finally yields the proof of the Proposition also for sequences $\psi_n$ satisfying (<ref>) and (<ref>). 
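For the reader's convenience, let us briefly expand on why the restriction to sequences satisfying \eqref{eq:LowerBoundNorm} and \eqref{g1est} is legitimate; the following merely unfolds the brief justification given above. Suppose first that, along a subsequence, $\|\eta^1_n g_n\|_2\to 0$. Since $\|\eta^1_n g_n\|_2^2+\|\eta^2_n g_n\|_2^2=1$ and the quadratic form is uniformly bounded from below, \eqref{approxest} and \eqref{eq:boundong2} give, along this subsequence, \begin{align} h_n= \|\eta^2_n g_n\|_2^2\expval{\LLn_{\psi_n}-4\XLn_{\psi_n}}{g_n^2}+o_n(1)\geq h_{\infty}''+o_n(1). \end{align} If instead $\|\eta^1_n g_n\|_2\geq C>0$ but \eqref{g1est} fails along a subsequence, then $\expval{\LLn_{\psi_n}-4\XLn_{\psi_n}}{g_n^1}\geq h_n+c$ there, for some $c>0$, and inserting this together with \eqref{eq:boundong2} into the convex combination \eqref{approxest} yields \begin{align} \left(1-\|\eta^1_n g_n\|_2^2\right)h_n\geq \|\eta^1_n g_n\|_2^2\, c+\left(1-\|\eta^1_n g_n\|_2^2\right)\left(h_{\infty}''+o_n(1)\right), \end{align} hence again $h_n\geq h_{\infty}''+o_n(1)$.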
We have a natural candidate for $\tilde{g}_n$, which is simply \begin{align} \tilde{g}_n:= \frac{(\unit-\mathcal{P})g_n^1}{\|(\unit-\mathcal{P})g_n^1\|_2}, \end{align} with $\mathcal{P}(g_n^1):=\Emininf \bra{\Emininf}\ket{g_n^1}+\sum_{i=1,2,3}\frac{\partial_i\Emininf}{\|\partial_i \Emininf\|_2} \bra{\frac{\partial_i\Emininf}{\|\partial_i \Emininf\|_2}}\ket{g_n^1}$. Clearly $\tilde{g}_n$ is a competitor for the full space minimization and we are only left with the task of proving that $\tilde{g}_n$ satisfies (<ref>). We observe that, since $g_n^1$ is almost orthogonal to $\psi_n$ and its partial derivatives, and using Proposition <ref>, \begin{align} \label{scalarbound} |\bra{\Emininf}\ket{g_n^1}|&\leq \|\Emininf-\psi_n\|_{L^2(B_{L_n/4})}+|\bra{\psi_n}\ket{g_n^1}|=o_n(1),\nonumber\\ |\bra{\partial_i\Emininf}\ket{g_n^1}|&\leq \|\Emininf-\psi_n\|_{H^1(B_{L_n/4})}+|\bra{\partial_i\psi_n}\ket{g_n^1}|=o_n(1). \end{align} Therefore \begin{align} \|\mathcal{P}(g_n^1)\|_2\to 0 \quad \text{and} \quad \|(\unit-\mathcal{P})g_n^1\|_2\to 1. \end{align} Hence, the normalization factor does not play any role in the proof of (<ref>). Moreover \begin{align} \|(L_{\Emininf}-4X_{\Emininf})\mathcal{P}(g_n^1)\|_2\lesssim \|\mathcal{P}(g_n^1)\|_2\to 0, \end{align} and thus we can conclude that also $\mathcal{P}(g_n^1)$ does not play any role in the proof of (<ref>), since $(L_{\Emininf}-4X_{\Emininf})\mathcal{P}$ is a bounded operator ($\mathcal{P}$ has finite dimensional range contained in the domain of $(L_{\Emininf}-4X_{\Emininf})$), $\mathcal{P}$ is a projection and $\|\mathcal{P}(g_n^1)\|_2\to 0$. With this discussion, we reduced our problem to showing that \begin{align} \label{veryultimatewish} \expval{(L_{\Emininf}-4X_{\Emininf})}{g_n^1}=\expval{(\LLn_{\psi_n}-4\XLn_{\psi_n})}{g_n^1}+o_n(1). \end{align} Clearly the kinetic energy terms coincide for every $n$ and $\LagrMLn_{\psi_n}\to \LagrMinf$, by Proposition <ref>. Therefore we only need to prove that \begin{align} |\expval{V_{\sigma_{\psi_n}}-V_{\sigma_{\Emininf}}}{g_n^1}|,|\expval{\XLn_{\psi_n}-X_{\Emininf}}{g_n^1}|\to 0. 
\end{align} For the first term, using that $g_n^1$ is supported on $B_{L_n/4}$, we have \begin{align} |\expval{V_{\sigma_{\psi_n}}-V_{\sigma_{\Emininf}}}{g_n^1}|\leq \|V_{\sigma_{\Emininf}}-V_{\sigma_{\psi_n}}\|_{L^{\infty}(B_{L_n/4})}. \end{align} If we define $\Emininf_R:=\chi_{B_R} \Emininf$ and $(\psi_n)_R:=\chi_{B_R} \psi_n$ we have $V_{\sigma_{\Emininf}}=V_{\sigma_{\Emininf_R}}+V_{\sigma_{[\Emininf-\Emininf_R]}}$ and $V_{\sigma_{\psi_n}}=V_{\sigma_{(\psi_n)_R}}+V_{\sigma_{[\psi_n-(\psi_n)_R]}}$. We consider $R=R(n)=L_n/8$ and observe that \begin{align} |V_{\sigma_{[\Emininf-\Emininf_R]}}(x)|=2\int_{\Rtre} \Gkerspace (\Emininf-\Emininf_R)^2dy\lesssim\|\Emininf-\Emininf_R\|_6^2+\|\Emininf-\Emininf_R\|_2^2\to 0. \end{align} Similar computations, together with Lemma <ref>, yield similar estimates for $|V_{\sigma_{[\psi_n-(\psi_n)_R]}}(x)|$. Moreover, since $\dist_{\TLn}(x,y)=|x-y|$ for $x,y\in B_{L_n/8}$, we have, for any $x\in B_{L_n/8}$, \begin{align} |(V_{\sigma_{\Emininf_R}}-V_{\sigma_{(\psi_n)_R}})(x)|&\lesssim \left|\int_{B_{L_n/4}} \frac 1 {|x-y|} (\Emininf(y)-\psi_n(y))(\Emininf(y)+\psi_n(y))dy\right| + \frac 1 {L_n}\nonumber\\ &\lesssim \|\Emininf+\psi_n\|_{\infty}\|\Emininf-\psi_n\|_6+\|\Emininf-\psi_n\|_2\|\Emininf+\psi_n\|_2+\frac 1 {L_n}\to 0. \end{align} Here we used again Lemma <ref>, the convergence of $\psi_n$ to $\Emininf$ and the universal $L^{\infty}$-boundedness of minimizers. Putting the pieces together we obtain \begin{align}\nonumber \|V_{\sigma_{\Emininf}}-V_{\sigma_{\psi_n}}\|_{L^{\infty}(B_{L_n/4})} & \leq \|V_{\sigma_{[\Emininf-\Emininf_R]}}\|_{\infty}+\|V_{\sigma_{[\psi_n-(\psi_n)_R]}}\|_{\infty} \\ & \quad +\|V_{\sigma_{\Emininf_R}}-V_{\sigma_{(\psi_n)_R}}\|_{L^{\infty}(B_{R(n)})}\to 0, \end{align} as desired. The argument for $\expval{\XLn_{\psi_n}-X_{\Emininf}}{g_n^1}$ is similar, hence we shall not write it down explicitly. We conclude that (<ref>) holds and, by the discussion above, the proof is complete. 
§.§.§ Proof of Theorem <ref> In this section we first prove universal local bounds for $\EL$ around minimizers. These are a direct consequence of the results on the Hessian in the previous subsection; the proof follows along the lines of [7], <cit.> and <cit.>. Such universal local bounds yield universal local uniqueness of minimizers, i.e., the statement that minimizers that are not equivalent (i.e., not obtained one from the other by translations and changes of phase) must be universally apart (in $H^1(\TL)$). Together with Proposition <ref>, this clearly implies uniqueness of minimizers for $L$ big enough, which is the first part of Theorem <ref>. A little extra effort will then complete the proof of Theorem <ref>. In this section, for any $\psi \in \MinLe$ and any $f\in L^2(\TL)$, we write $e^{i\theta}\psi^y=P^{L^2}_{\Theta_L(\psi)}(f)$, respectively $e^{i\theta}\psi^y=P^{H^1}_{\Theta_L(\psi)}(f)$, to mean that $e^{i\theta} \psi^y$ realizes the $L^2$-distance, respectively the $H^1$-distance, between $f$ and $\Theta_L(\psi)$. Note that by compactness these always exist, but they might not be unique. The possible lack of uniqueness is not a concern for our analysis, however. There exist universal constants $K_1>0$ and $K_2>0$ and $L^{**} >0$ such that, for any $L>L^{**}$, any $\psi \in \MinLe$ and any $L^2$-normalized $f\in H^1(\TL)$ with \begin{align} \label{localrequirement} \dist_{H^1}(\Theta_L(\psi), f)\leq K_1, \end{align} we have \begin{align} \EL(f)-\eL\geq K_2\|P^{L^2}_{\Theta_L(\psi)}(f)-f\|^2_{H^1(\TL)}\geq K_2\dist_{H^1}^2\left(\Theta_L(\psi),f\right). \end{align} We can restrict to positive $\psi\in \MinLe$ and normalized $f$ such that \begin{align} \label{l2proj} \psi=P^{L^2}_{\Theta_L(\psi)}(f), \end{align} which clearly implies \begin{align} \label{eq:minconseq} \bra{\psi}\ket{f}\geq 0, \quad \bra{\Re f}\ket{\partial_i\psi}=0. 
\end{align} Under this assumption, we prove that if (<ref>) holds, then \begin{align} \label{localhessbound} \EL(f)-\eL\geq K_2\|\psi-f\|_{H^1(\TL)}^2\geq K_2\dist^2_{H^1}\left(\Theta_L(\psi),f\right). \end{align} The general result follows immediately by invariance of $\EL$ under translations and changes of phase. We denote $\delta:=f-\psi$ and proceed to expand $\EL$ around $\psi$: \begin{align} \label{hessexpansion} \EL(f)=\EL(\psi+\delta)=\eL+H^{\EL}_{\psi}(\delta)+\Err_{\psi}(\delta). \end{align} We recall that $H^{\EL}_{\psi}$ is simply the quadratic form associated to the Hessian of $\EL$ at $\psi$ and it is defined in (<ref>). We denote $P_{\psi}:=\ket{\psi}\bra{\psi}$. The last term, which we see as an error contribution, is explicitly given by \begin{align} \Err_{\psi}(\delta)=&-8\bra{\Re \delta} \XL_{\psi}\ket{P_{\psi} \Re \delta}+4\expval{\XL_{\psi}}{P_{\psi}\Re \delta}\nonumber\\ &-4\bra{|\delta|^2} (-\Delta_L)^{-1}\ket{\psi \Re \delta}+W_L(\delta). \end{align} Our first goal is to estimate $|\Err_{\psi}(\delta)|$. By (<ref>) and the normalization of both $\psi$ and $f$, we find \begin{align} \label{trick} \|\delta\|_2^2=2-2\bra{\psi}\ket{f}. \end{align} Therefore, also using the positivity of $\psi$, we have \begin{align} P_{\psi} \Re \delta=\psi(\bra{\psi}\ket{f}-1)=-\frac 1 2 \psi \|\delta\|_2^2. \end{align} We now apply Lemma <ref> to obtain \begin{align} \label{eq:FirstErrEst} &|\bra{\Re \delta} \XL_{\psi}\ket{P_{\psi} \Re \delta}|\lesssim\|\Re \delta\|_2 \|P_{\psi} \Re \delta\|_2\lesssim\|\delta\|_2^3,\nonumber\\ &|\expval{\XL_{\psi}}{P_{\psi}\Re \delta}|\lesssim \|P_{\psi}\Re \delta\|_2^2\lesssim \|\delta\|_2^4,\nonumber\\ &|\bra{|\delta|^2} (-\Delta_L)^{-1}\ket{\psi \Re \delta}|\lesssim\|\delta\|_2^2\|\Re\delta\|_2\leq\|\delta\|_2^3. 
\end{align} Finally, by (<ref>), \begin{align} \label{eq:WLest} W_L(\delta)=\|\delta\|_2^4 W_L\left(\frac{\delta}{\|\delta\|_2}\right)\leq \|\delta\|_2^4\left(\frac 1 2 T_L\left(\frac{\delta}{\|\delta\|_2}\right)+C\right)\lesssim\|\delta\|_2^2\|\delta\|_{H^1(\TL)}^2. \end{align} Recalling (<ref>), we can estimate \begin{align} \|\delta\|_2=\dist_{L^2}(f,\Theta_L(\psi))\leq\dist_{H^1}(f,\Theta_L(\psi))\leq K_1, \end{align} and this implies, combined with (<ref>) and (<ref>), that \begin{align} \label{remainderbound} |\Err_{\psi}(\delta)|\lesssim \|\delta\|_{H^1(\TL)}^3. \end{align} We now want to bound $H^{\EL}_{\psi}(\delta)$. We fix $0<\tau<\min\{h_{\infty}',h_{\infty}''\}$, where $h_{\infty}'$ and $h_{\infty}''$ are defined in (<ref>). Proposition <ref> implies that there exists $L^{**}$ such that for $L>L^{**}$ and $\psi \in \MinLe$, we have \begin{align} \LL_{\psi}\geq \tau Q_{\psi}, \quad Q_{\psi}(\LL_{\psi}-4\XL_{\psi})Q_{\psi}\geq \tau Q_{\psi}', \end{align} where we define $Q_{\psi}=\unit-P_{\psi}$ and $Q_{\psi}':=\unit-P_{\psi}-\sum_{i=1,2,3} P_{\partial_i \psi/\|\partial_i \psi\|_2}$. We note that, by (<ref>) and since $\psi$ is orthogonal in $L^2$ to its partial derivatives, we have \begin{equation} Q_{\psi}(\Re f- \psi)=Q_{\psi}'(\Re f- \psi). \end{equation} Therefore, recalling the definition of $H^{\EL}_{\psi}$ given in (<ref>), \begin{align} H^{\EL}_{\psi}(\delta)&=\expval{\LL_{\psi}}{\Im f}+\expval{Q_{\psi}(\LL_{\psi}-4\XL_{\psi} )Q_{\psi}}{\Re f- \psi}\nonumber\\ &\geq \tau (\|Q_{\psi}\Im f\|_{2}^2+\|Q_{\psi}'(\Re f -\psi)\|_{2}^2)=\tau\|Q_{\psi}\delta\|^2_{L^2(\TL)}. \end{align} Moreover, applying (<ref>), \begin{align} \|Q_{\psi}\delta\|_{L^2(\TL)}^2=\|\delta\|_2^2-\bra{\psi}\ket{\delta}^2=\|\delta\|_2^2\left(1-\frac 1 4 \|\delta\|_2^2\right)\geq \frac 1 2 \|\delta\|_2^2, \end{align} and we can thus conclude that \begin{align} \label{HessL2} H^{\EL}_{\psi}(\delta)\geq \frac {\tau} 2 \|\delta\|_{2}^2. 
\end{align} On the other hand, by the universal boundedness of $V_{\sigma_{\psi}}$ in $L^{\infty}(\TL)$ and the universal boundedness of $\LagrML_{\psi}$ (see Proposition <ref>), we have, for some universal $C_1>0$, \begin{align} \LL_{\psi}\geq -\Delta_L - C_1. \end{align} Similarly, also using Lemma <ref>, for some universal $C_2>0$, \begin{align} Q_{\psi}(\LL_{\psi}-4\XL_{\psi})Q_{\psi}\geq -\Delta_L- C_2. \end{align} If we then define $C:=(\max\{C_1,C_2\}+1)$, we can conclude the validity of the universal bound \begin{align} \label{HessH1} H^{\EL}_{\psi}(\delta)\geq \|\delta\|_{H^1(\TL)}^2 - C \|\delta\|_{L^2(\TL)}^2. \end{align} By interpolating between (<ref>) and (<ref>), i.e., taking their convex combination with weights $\frac{2C}{\tau+2C}$ and $\frac{\tau}{\tau+2C}$ respectively, we obtain \begin{align} \label{eq:HboundwrtH1} H^{\EL}_{\psi}(\delta)\geq \frac {\tau}{\tau+2C} \|\delta\|_{H^1(\TL)}^2. \end{align} Using (<ref>) and (<ref>) in (<ref>), we can conclude that there exists a universal constant $C$ such that for any $L>L^{**}$, any $0<\psi \in\MinLe$ and any normalized $f$ satisfying (<ref>), \begin{align} \EL(f)-\eL\geq \frac 1 C\|\delta\|_{H^1(\TL)}^2-C\|\delta\|_{H^1(\TL)}^3. \end{align} In particular, for $K_2$ sufficiently small, we can find a universal constant $c$ such that (<ref>) holds, as long as \begin{align} \label{wronglocalrequirement} \|\delta\|_{H^1(\TL)}=\|P^{L^2}_{\Theta_L(\psi)}(f)-f\|_{H^1(\TL)}\leq c. \end{align} To conclude the proof, it only remains to show that there exists a universal $K_1$ such that (<ref>) holds as long as (<ref>) holds. This can be achieved as follows. 
We have, using that both $\psi$ and $P^{H^1}_{\Theta_L(\psi)}(f)$ are in $\MinLe$ and thus are universally bounded in $H^2(\TL)$ (by Lemma <ref>) and recalling (see (<ref>)) that $\psi=P^{L^2}_{\Theta_L(\psi)}(f)$, \begin{align} \|\psi-P^{H^1}_{\Theta_L(\psi)}(f)\|_{\mathring{H}^1(\TL)}&\leq \|\psi-P^{H^1}_{\Theta_L(\psi)}(f)\|^{1/2}_{L^2(\TL)}\|(-\Delta_L)(\psi-P^{H^1}_{\Theta_L(\psi)}(f))\|^{1/2}_{L^2(\TL)}\nonumber\\ &\lesssim \|\psi-P^{H^1}_{\Theta_L(\psi)}(f)\|_{L^2(\TL)}^{1/2}\nonumber\\ &\leq \left(\dist_{L^2}\left(\Theta_L(\psi),f\right)+\|f-P^{H^1}_{\Theta_L(\psi)}(f)\|_{L^2(\TL)}\right)^{1/2}\nonumber\\ &\lesssim \dist^{1/2}_{H^1}\left(\Theta_L(\psi),f\right). \end{align} Therefore, for some universal $C$ \begin{align} \|f-\psi\|_{H^1(\TL)}\leq \dist_{H^1}\left(\Theta_L(\psi),f\right)+C\dist^{1/2}_{H^1}\left(\Theta_L(\psi),f\right), \end{align} and it suffices to take $K_1\leq \left[(-C+\sqrt{C^2+4c})/2\right]^2$ to conclude our discussion (indeed, $t\mapsto t^2+Ct$ is increasing on $[0,\infty)$ and equals $c$ precisely at $t=(-C+\sqrt{C^2+4c})/2$). We are ready to prove Theorem <ref>. Fix $K_1$ as in Proposition <ref>. Using Proposition <ref>, we know that there exists $L_{K_1/2}$ such that, for any $L>L_{K_1/2}$ and any $\psi\in \MinLe$, we have \begin{align} \dist_{H^1}\left(\Theta_L(\psi),\Emininf_L\right)\leq K_1/2. \end{align} We claim that (<ref>) holds with $L_1:=\max\{L_{K_1/2}, L^*, L^{**}\}$, where $L^*$ is the same as in Corollary <ref> and $L^{**}$ is the same as in Proposition <ref>. Let $L>L_1$ and $\psi\in \MinLe$. Since $L>L_1\geq L^*$, we have $\psi^y\neq\psi$ for any $0\neq y\in \TL$. Moreover, since $L>L_1\geq L_{K_1/2}$ and using the triangle inequality, for any other $\psi_1\in \MinLe$ we have \begin{align} \dist_{H^1}\left(\Theta_L(\psi),\psi_1\right)\leq K_1. \end{align} Since $L>L_1\geq L^{**}$, we can apply Proposition <ref>, finding \begin{align} 0=\EL(\psi_1)-\eL\geq K_2\dist_{H^1}^2\left(\Theta_L(\psi),\psi_1\right), \end{align} i.e., $\psi_1\in \Theta_L(\psi)$, and (<ref>) holds for $L>L_1$. 
For $\psi\in \MinLe=\Theta_L(\psi)$, and $L>L_1$, we now show the quadratic lower bound (<ref>), independently of $L$. Lemma <ref>, which guarantees universal $H^1$-boundedness of minimizers, and estimate (<ref>) ensure, by straightforward computations, that there exists $0<\kappa^*<1/2$ such that, if $f\in L^2(\TL)$ is normalized and satisfies \begin{align} \label{noquadbound} \EL(f)-\eL< \kappa^*\dist_{H^1}^2\left(\Theta_L(\psi),f\right), \end{align} then $f$ is universally bounded in $H^1(\TL)$ and must satisfy \begin{align} \label{noquadboundI} \EL(f)-\eL< \delta_{K_1}, \end{align} where $\delta_{K_1}$ is the $\delta_{\varepsilon}$ from Proposition <ref> with $\varepsilon=K_1$. On the other hand, Proposition <ref> and Proposition <ref>, combined with the fact that we have taken $L_1\geq L_{K_1/2}$ (and that trivially $L_{K_1/2}\geq L_{K_1}$), guarantee that any $L^2$-normalized $f$ satisfying (<ref>) must satisfy \begin{align} \EL(f)-\eL\geq K_2\dist_{H^1}^2(\Theta_L(\psi),f). \end{align} Therefore the bound (<ref>) from Theorem <ref> holds with the universal constant $\kappa_1:=\min\{\kappa^*,K_2\}$ and our proof is complete. This concludes our study of $\EL$. We now move on to the study of the functional $\FL$. §.§ Study of $\FL$ This section is structured as follows. In Section <ref> we prove Corollary <ref>. In Section <ref>, we compute the Hessian of $\FL$ at its minimizers, showing the validity of (<ref>). This allows us to obtain a more precise lower bound for $\FL$ (compared to the bounds (<ref>) and (<ref>) from Corollary <ref>), which holds locally around the $3$-dimensional surface of minimizers $\MinLf=\Omega_L(\varphi_L)$. Finally, in Section <ref>, we investigate more closely the surface of minimizers $\Omega_L(\varphi_L)$ and the behavior of the functional $\FL$ close to it. 
In particular, we show that the Hessian of $\FL$ at its minimizers is strictly positive above its trivial zero modes and derive some key technical tools, which we exploit in Section <ref>. §.§.§ Proof of Corollary <ref> In this section, we show the validity of Corollary <ref>. We need the following Lemma. Recall that in our discussion constants are universal if they are independent of $L$ for $L\geq L_0>0$. For $\psi,\phi \in H^1(\TL)$, $\|\psi\|_2=\|\phi\|_2=1$, \begin{align} \expval{(-\Delta_L)^{-1/2}}{\rho_{\psi}-\rho_{\phi}}\lesssim \||\psi|-|\phi|\|_{H^1(\TL)}^2. \end{align} We define $f(x):=|\psi(x)|+|\phi(x)|$ and $g(x):=|\psi(x)|-|\phi(x)|$. By the Hardy-Littlewood-Sobolev and the Sobolev inequality (see for example [3] for a comprehensive overview of such results on the torus), and using the normalization of $\phi$ and $\psi$ we have \begin{align} \expval{(-\Delta_L)^{-1/2}}{\rho_{\psi}-\rho_{\phi}}&=\|(-\Delta_L)^{-1/4} (fg)\|_2^2\leq C \|fg\|_{3/2}^2\leq C\|f\|_2^2\|g\|_6^2\nonumber\\ &\leq C' \|g\|_{H^1(\TL)}^2 = C'\||\psi|-|\phi|\|_{H^1(\TL)}^2, \end{align} which proves the Lemma. With $\psi_L$ as in Theorem <ref>, let $\varphi_L:=\sigma_{\psi_L}\in C^{\infty}(\TL)$. Observing that \begin{align} \label{eq:GLalternate} \GL(\psi,\varphi)=\EL(\psi)+\|\sigma_{\psi}-\varphi\|_2^2, \end{align} and using Theorem <ref> we can immediately conclude that in the regime $L>L_1$ \begin{align} \MinLf=\Omega_L(\varphi_L). \end{align} It is also immediate, recalling the definition of $\GL$ in (<ref>) and that $\psi_L>0$ (as proven in Theorem <ref>), to conclude that $\psi_L$ must be the unique positive ground state of $h_{\varphi_L}$. To prove (<ref>), we first of all observe that if $\varphi\in L^2(\TL)$, we have \begin{align} \FL(\varphi)=|(\varphi)_0|^2+\FL(\hat{\varphi}). 
\end{align} Therefore, it is sufficient to restrict to $\varphi$ with zero-average and show that in this case \begin{align} \FL(\varphi)-\eL\geq \min_{y\in \TL} \expval{\unit -(\unit+\kappa'(-\Delta_L)^{1/2})^{-1}}{\varphi-\varphi_L^y}. \end{align} Using Theorem <ref>, we obtain \begin{align} \GL(\psi,\varphi)-\eL&=\EL(\psi)-\eL+\|\varphi-\sigma_{\psi}\|_2^2\nonumber\\ &\geq\EL(|\psi|)-\eL+\|\varphi-\sigma_{\psi}\|_2^2\nonumber\\ &\geq \kappa_1\dist_{H^1}^2(|\psi|,\Theta_L(\psi_L))+\|\varphi-\sigma_{\psi}\|_2^2\nonumber\\ &= \kappa_1\||\psi|-\psi_L^y\|_{H^1(\TL)}^2+\|\varphi-\sigma_{\psi}\|_2^2 \end{align} for some $y\in \TL$. We now apply Lemma <ref> and use that $\varphi_L^y=\sigma_{\psi_L^y}$ (see (<ref>)), obtaining with a simple completion of the square \begin{align} \GL(\psi,\varphi)-\eL&\geq\kappa'\expval{(-\Delta_L)^{-1/2}}{\rho_{\psi}-\rho_{\psi_L^y}}+\|\varphi-\sigma_{\psi}\|_2^2\nonumber\\ &=\left\|F^{1/2}\left(\sigma_{\psi}-\varphi_L^y\right)-F^{-1/2}\left(\varphi-\varphi_L^y\right)\right\|_2^2\nonumber\\ &\quad +\expval{\unit-F^{-1}}{\varphi-\varphi_L^y}, \end{align} where $F=\unit+\kappa'(-\Delta_L)^{1/2}$. Dropping the first term and minimizing over $\psi$ yields our claim. Finally, (<ref>) immediately follows from (<ref>) and the spectral gap of the Laplacian, using the fact that $\varphi_L$ and all its translates have zero average since $\varphi_L=\sigma_{\psi_L}$. §.§.§ The Hessian of $\FL$ For any $\varphi \in L^2_{\R}(\TL)$, we introduce the notation \begin{align} e(\varphi):=\infspec h_{\varphi}, \end{align} and observe that $\FL$, defined in (<ref>), can equivalently be written as \begin{align} \label{eq:Ffundef} \FL(\varphi)= \|\varphi\|_2^2+e(\varphi), \quad \varphi\in L^2_{\R}(\TL). \end{align} We compute the Hessian of $\FL$ at its minimizers using standard arguments in perturbation theory, showing the validity of expression (<ref>). We need the following two Lemmas. 
For $L\geq L_0>0$, any $\varphi\in L^2(\TL)$ and any $T>0$ \begin{align} \label{eq:L2infinbddness} \|(-\Delta_L+T)^{-1}\varphi\|=\|\varphi(-\Delta_L+T)^{-1}\|\leq C_T\|\varphi\|_{L^2(\TL)+L^{\infty}(\TL)} \end{align} for some constant $C_T>0$ with $\lim_{T\to \infty} C_T = 0$. Here $\varphi$ is understood as a multiplication operator, $\|\cdot\|$ denotes the operator norm on $L^2(\TL)$, and \begin{align} \|\varphi\|_{L^2(\TL)+L^{\infty}(\TL)}:=\inf_{\varphi_1+\varphi_2=\varphi \atop \varphi_1\in L^2(\TL), \, \varphi_2\in L^{\infty}(\TL)} \left(\|\varphi_1\|_{L^2(\TL)}+\|\varphi_2\|_{L^{\infty}(\TL)}\right). \end{align} Note that \begin{align} \|\varphi\|_{L^2(\TL)}\leq L^{3/2} \|\varphi\|_{L^2(\TL)+L^{\infty}(\TL)}\leq L^{3/2} \|\varphi\|_{L^2(\TL)}, \end{align} which clearly makes the two norms equivalent. Nevertheless, we find it more natural to work with a bound of the form (<ref>), where $C_T$ is independent of $L$. Lemma <ref> implies that, for any $\varphi\in L^2(\TL)+L^{\infty}(\TL)$, the multiplication operator associated with $\varphi$ is infinitesimally relatively bounded with respect to $-\Delta_L$. More precisely, for any $\delta>0$, there exists $C\left(\delta, \|\varphi\|_{L^2(\TL)+L^{\infty}(\TL)}\right)$ depending on $\varphi$ only through $\|\varphi\|_{L^2(\TL)+L^{\infty}(\TL)}$, such that for any $f \in \text{Dom}(-\Delta_L)$ \begin{align} \|\varphi f\|\leq \delta \|\Delta_L f\|+C\left(\delta, \|\varphi\|_{L^2(\TL)+L^{\infty}(\TL)}\right)\|f\|. \end{align} Whenever infinitesimal relative boundedness holds with a constant $C(\delta)$ uniform over a class of operators, we will say that the class is uniformly infinitesimally relatively bounded. In this case, Lemma <ref> ensures that multiplication operators associated to functions in $(L^2+L^{\infty})$-balls are uniformly infinitesimally relatively bounded with respect to $-\Delta_L$. 
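For the reader's convenience, we record the short computation behind this last claim (it is an immediate consequence of (<ref>)): for any $f\in \text{Dom}(-\Delta_L)$, \begin{align} \|\varphi f\|_2=\left\|\varphi(-\Delta_L+T)^{-1}(-\Delta_L+T)f\right\|_2\leq C_T\|\varphi\|_{L^2(\TL)+L^{\infty}(\TL)}\left(\|\Delta_L f\|_2+T\|f\|_2\right), \end{align} so that, given $\delta>0$, it suffices to choose $T$ so large that $C_T\|\varphi\|_{L^2(\TL)+L^{\infty}(\TL)}\leq \delta$; this yields the claim with $C\left(\delta, \|\varphi\|_{L^2(\TL)+L^{\infty}(\TL)}\right)=\delta\, T$, which indeed depends on $\varphi$ only through $\|\varphi\|_{L^2(\TL)+L^{\infty}(\TL)}$.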
We first observe that, by self-adjointness of $(-\Delta_L + T)^{-1}$, it is sufficient to show that the claimed bound holds for $\|\varphi(-\Delta_L+T)^{-1}\|$. For any $f, \varphi\in L^2(\TL)$ and any decomposition of the form $\varphi=\varphi_1+\varphi_2$ with $\varphi_1\in L^2(\TL)$ and $\varphi_2\in L^{\infty}(\TL)$ we have \begin{align} \|\varphi (-\Delta_L+T)^{-1} f\|_2&\leq \|\varphi_1\|_2 \|(-\Delta_L+T)^{-1}f\|_{\infty}+\|\varphi_2\|_{\infty} \|(-\Delta_L+T)^{-1} f\|_2\nonumber\\ &\leq \|\varphi_1\|_2 \|(-\Delta_L+T)^{-1}f\|_{\infty}+T^{-1}\|\varphi_2\|_{\infty}\|f\|_2. \end{align} Moreover, \begin{align} \|(-\Delta_L+T)^{-1} f\|_{\infty}&\leq \sum_{k\in \frac {2\pi}{L} \mathbb{Z}^3} \frac 1 {L^{3/2}(|k|^2+T)} |f_k|\leq \left(\frac 1 {L^3}\sum_{k\in \frac {2\pi}{L} \mathbb{Z}^3} \frac 1 {(|k|^2+T)^2}\right)^{1/2}\|f\|_2\nonumber\\ &\leq C \left(\int_{\Rtre} \frac 1 {(|x|^2+T)^2}\right)^{1/2}\|f\|_2 = C T^{-1/2} \|f\|_2. \end{align} Therefore, picking $C_T:=\max\left\{T^{-1}, C T^{-1/2} \right\}$ yields \begin{align} \|\varphi (-\Delta_L+T)^{-1} f\|_2\leq C_T\left(\|\varphi_1\|_2 +\|\varphi_2\|_{\infty}\right)\|f\|_2, \end{align} and optimizing over $\varphi_1$ and $\varphi_2$ completes the proof. For $\varphi \in L^2(\TL)$ \begin{align} \|(-\Delta_L)^{-1/2}\varphi\|_{L^{\infty}(\TL)+L^2(\TL)}\lesssim \|(-\Delta_L+1)^{-1/2}\varphi\|_{L^2(\TL)}. \end{align} We write $f_1=\chi_{[0,1)}$ and $f_2=\chi_{[1,+\infty)}$ and \begin{align} \varphi_1= f_1\left[(-\Delta_L)^{-1/2}\right] \varphi, \quad \varphi_2=f_2\left[(-\Delta_L)^{-1/2}\right]\varphi. \end{align} Clearly $(-\Delta_L)^{-1/2} \varphi=\varphi_1+\varphi_2$. 
\begin{align} \|(-\Delta_L)^{-1/2} \varphi\|_{L^{\infty}+L^2}&\leq \|\varphi_1\|_{\infty}+\|\varphi_2\|_2\nonumber\\ &\leq \left(\sum_{0\neq k\in \frac {2\pi}{L} \mathbb{Z}^3\atop |k|<1} \frac 1 {L^3|k|^2}\right)^{1/2}\left(\sum_{0\neq k\in \frac {2\pi}{L} \mathbb{Z}^3\atop |k|<1} |\varphi_k|^2\right)^{1/2}+\left(\sum_{k\in \frac {2\pi}{L} \mathbb{Z}^3\atop |k|\geq 1} \frac {|\varphi_k|^2}{|k|^2}\right)^{1/2}\nonumber\\ &\lesssim \left(\sum_{0\neq k\in \frac {2\pi}{L} \mathbb{Z}^3\atop |k|<1} |\varphi_k|^2\right)^{1/2}+\left(\sum_{k\in \frac {2\pi}{L} \mathbb{Z}^3\atop |k|\geq 1} \frac {|\varphi_k|^2}{|k|^2}\right)^{1/2}\nonumber\\ &\lesssim \left(\sum_{k\in \frac {2\pi}{L} \mathbb{Z}^3} \frac 1 {|k|^2+1}|\varphi_k|^2\right)^{1/2}=C\|(-\Delta_L+1)^{-1/2}\varphi\|_{L^2(\TL)}. \end{align} This concludes the proof. Lemmas <ref> and <ref> together yield the following Corollary, whose proof is omitted as it is now straightforward. For any $\varphi$ such that $\|(-\Delta_L+1)^{-1/2}\varphi\|_2$ is finite, the multiplication operator $V_{\varphi}$ (defined in (<ref>)) is infinitesimally relatively bounded with respect to $(-\Delta_L)$. Moreover, for $T> 0$ there exists $C_T$ such that \begin{align} \|(-\Delta_L+T)^{-1} V_{\varphi}\|\leq C_T \|(-\Delta_L+1)^{-1/2}\varphi\|_2, \quad \text{and} \;\;\; C_T\searrow 0\,\,\, \text{as} \,\,\, T\to \infty. \end{align} In particular, Corollary <ref> implies that the family of multiplication operators associated to $\{V_{\varphi} \,|\, \|(-\Delta_L+1)^{-1/2}\varphi\|_2 \leq M\}$ is uniformly infinitesimally relatively bounded with respect to $-\Delta_L$ for any $M$. With these tools at hand we now investigate $\FL$ close to its minimum and, in particular, compute the Hessian of $\FL$ at its minimizers. We follow very closely the analogous analysis carried out in [9]. 
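Although the proof of the Corollary is omitted above as straightforward, let us record the chain of bounds for convenience; here we use that $V_{\varphi}=-2(-\Delta_L)^{-1/2}\varphi$, cf. the identity $\expval{V_{\varphi-\varphi_L}}{\psi_L}=-2\bra{\varphi-\varphi_L}\ket{\varphi_L}$ appearing in the proof of Proposition <ref> below. Combining the two preceding Lemmas, \begin{align} \|(-\Delta_L+T)^{-1}V_{\varphi}\|\leq C_T\|V_{\varphi}\|_{L^2(\TL)+L^{\infty}(\TL)}=2C_T\|(-\Delta_L)^{-1/2}\varphi\|_{L^{\infty}(\TL)+L^2(\TL)}\lesssim C_T\|(-\Delta_L+1)^{-1/2}\varphi\|_{L^2(\TL)}, \end{align} and the infinitesimal relative boundedness then follows exactly as in the remark after the first of the two Lemmas, since $C_T\to 0$ as $T\to\infty$.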
By translation invariance of the problem, it is clearly sufficient to perform the computation with respect to $\varphi_L$, where $\varphi_L$ is the same as in Corollary <ref>. For $L> L_1$ let $\varphi\in L^2_{\mathbb{R}}(\TL)$ be such that \begin{align} \label{eq:FHessianregime} \|(-\Delta_L+1)^{-1/2}(\varphi-\varphi_L)\|_{L^2(\TL)}\leq \varepsilon_L \end{align} for some $\varepsilon_L>0$ small enough. Then \begin{align} \label{eq:claimedHessianF} \left|\FL(\varphi)-\FL(\varphi_L)-\expval{\unit-K_L}{\varphi-\varphi_L}\right| &\lesssim_L \|(-\Delta_L+1)^{-1/2}(\varphi-\varphi_L)\|_{2}\expval{J_L}{\varphi-\varphi_L}, \end{align} where \begin{align} \label{eq:KJdef} &K_L:=4(-\Delta_L)^{-1/2} \psi_L \frac {\GSOrthProj} {h_{\varphi_L}-e(\varphi_L)}\psi_L (-\Delta_L)^{-1/2},\notag\\ &J_L=4(-\Delta_L)^{-1/2}\psi_L (-\Delta_L+1)^{-1} \psi_L (-\Delta_L)^{-1/2}, \end{align} and $\psi_L$, which we recall (see (<ref>)) is the (positive) ground state of $h_{\varphi_L}$, is understood, in the expressions for $K_L$ and $J_L$, as a multiplication operator. Note that this implies that $H^{\FL}_{\varphi_L}=\unit-K_L$, as claimed in (<ref>). In particular, $K_L\leq \unit$ by minimality of $\varphi_L$. It is also clear, by definition, that $K_L\geq0$. We emphasize that $J_L$ is trace class, being the square of $2(-\Delta_L+1)^{-1/2}\psi_L(-\Delta_L)^{-1/2}$, which is Hilbert-Schmidt since $\psi_L$ is in $L^2$, as a function of $x$, and $f(k):=(|k|^2+1)^{-1/2}|k|^{-1}$ is in $L^2$, as a function of $k$. From the trace class property of $J_L$, together with the boundedness of $(-\Delta_L+1)^{1/2}\frac {\GSOrthProj} {h_{\varphi_L}-e(\varphi_L)} (-\Delta_L+1)^{1/2}$ (which follows from Corollary <ref>), we immediately infer the trace class property of $K_L$. We even show in Lemma <ref> that $J_L,K_L\lesssim_L (-\Delta_L+1)^{-2}$. We shall in the following denote by $K_L^y$, respectively $J_L^y$, the unitarily equivalent operators obtained from $K_L$ and $J_L$ by a translation by $y$. 
Note that $K_L^y$ and $J_L^y$ appear if one expands $\FL$ with respect to $\varphi_L^y$ instead of $\varphi_L$. Moreover, the invariance under translations of $\FL$ implies that \begin{align} \spn\{\partial_j \varphi_L\}_{j=1}^3 \subset \ker (\unit-K_L). \end{align} We show in Section <ref> that these two sets coincide. Finally, even though both $\varepsilon_L$ and the estimate (<ref>) in Proposition <ref> depend on $L$, with a little extra work one can show that the bound is actually uniform in $L$ (for large $L$). For simplicity we opt for the current version of Proposition <ref>, as it is sufficient for the purpose of our investigation, which is set on a torus of fixed linear size $L>L_1$. We shall denote $h_0:=h_{\varphi_L}$. By assumption (<ref>) and since $\varphi_L \in L^2(\TL)$, we can apply Corollary <ref> to $\varphi_L$ and to $(\varphi-\varphi_L)$. This way we see that $V_{\varphi-\varphi_L}$ is uniformly infinitesimally relatively bounded with respect to $h_{0}$ for any $\varphi$ satisfying (<ref>). It is clear that $h_{0}$ admits a simple and isolated least eigenvalue $e(\varphi_L)$. Standard results in perturbation theory then imply that there exist $\varepsilon_L>0$ and a contour $\gamma$ around $e(\varphi_L)$ such that for any $\varphi$ satisfying (<ref>) $e(\varphi)$ is the only eigenvalue of $h_{\varphi}=h_0+V_{\varphi-\varphi_L}$ inside $\gamma$. (For fixed $\varphi$, the statement above is a standard result in perturbation theory, see <cit.>; moreover it is also possible to get a $\varphi$-independent $\gamma$ encircling $e(\varphi)$ (see <cit.>) since $V_{\varphi-\varphi_L}$ is uniformly infinitesimally relatively bounded with respect to $h_0$.) We can thus write \begin{align} \label{eq:perturbedev} e(\varphi)=\Trace \int_{\gamma}\frac z {z-(h_0+V_{\varphi-\varphi_L})} \frac {dz}{2\pi i}. 
\end{align} Moreover, by the uniform infinitesimal relative boundedness of $V_{\varphi-\varphi_L}$ with respect to $h_0$, we have \begin{align} \label{eq:resolventrelbddness} \sup_{z\in \gamma} \|V_{\varphi-\varphi_L}(z-h_0)^{-1}\|<1, \end{align} for $\varepsilon_L$ sufficiently small. For any $z\in \gamma$, we can thus use the resolvent identity in the form \begin{align} \label{eq:resexp} \frac 1 {z-h_{0}-V_{\varphi-\varphi_L}}=&\left(\unit-\frac {\GSOrthProj} {z-h_0} V_{\varphi-\varphi_L}\right)^{-1} \frac {\GSOrthProj} {z-h_0}\nonumber\\ &+\left(\unit- \frac {\GSOrthProj} {z-h_0} V_{\varphi-\varphi_L}\right)^{-1} \frac {P_{\psi_L}} {z-h_0} \left(\unit- V_{\varphi-\varphi_L} \frac 1 {z-h_0}\right)^{-1}. \end{align} The first term is analytic inside the contour $\gamma$ and hence it gives zero after integration when inserted in (<ref>). Inserting the second term of (<ref>), which is rank one, in (<ref>) and using Fubini's Theorem to interchange the trace and the integral, we obtain \begin{align} \label{eq:ebvexpansion} e(\varphi)=\int_{\gamma} \frac{z}{z-e(\varphi_L)} \left \langle \psi_L \left|\left(\unit- V_{\varphi-\varphi_L} \frac 1 {z-h_0}\right)^{-1}\left(\unit- \frac {\GSOrthProj} {z-h_0} V_{\varphi-\varphi_L}\right)^{-1}\right| \psi_L \right \rangle \frac {dz}{2\pi i}. \end{align} For simplicity, we introduce the notation \begin{align} A= V_{\varphi-\varphi_L} \frac 1 {z-h_0}, \quad B= \frac {\GSOrthProj} {z-h_0} V_{\varphi-\varphi_L}. \end{align} Because of (<ref>), both $A$ and $B$ are smaller than $1$ in norm, uniformly in $z\in \gamma$. We shall use the identity \begin{align} \frac 1 {\unit-A} \frac 1 {\unit-B}=&\unit+A+A(A+B)+\frac B {\unit-B} \notag\\ &+ \frac {A^3}{\unit-A}+\frac{A^2}{\unit-A} B+\frac A{\unit-A} \frac {B^2}{\unit-B}. \end{align} We insert the various terms in (<ref>) and do the contour integration. The term $\unit$ gives $e(\varphi_L)$. 
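For completeness, we record the one-line verification of this first contribution: since $\psi_L$ is normalized and the integrand has a simple pole at $z=e(\varphi_L)$, the residue theorem gives \begin{align} \int_{\gamma} \frac{z}{z-e(\varphi_L)}\,\expval{\unit}{\psi_L}\, \frac {dz}{2\pi i}=\int_{\gamma} \frac{z}{z-e(\varphi_L)}\, \frac {dz}{2\pi i}=e(\varphi_L). \end{align}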
The term $A$, recalling (see (<ref>)) that $(-\Delta_L)^{-1/2} \rho_{\psi_L}=\varphi_L$, yields \begin{align} \expval{V_{\varphi-\varphi_L}}{\psi_L}=-2\bra{\varphi-\varphi_L}\ket{\varphi_L}. \end{align} A standard calculation shows that the term $A(A+B)$ gives \begin{align} \left\langle \psi_L \left|V_{\varphi-\varphi_L} \frac {\GSOrthProj} {e(\varphi_L)-h_0} V_{\varphi-\varphi_L}\right|\psi_L\right\rangle=-\expval{K_L}{\varphi-\varphi_L}. \end{align} Furthermore, since $\GSOrthProj \psi_L=0$, the term $B(\unit-B)^{-1}$ yields zero. Recalling that $\FL(\varphi)=\|\varphi\|^2+e(\varphi)$, we obtain from (<ref>) \begin{align} \FL(\varphi)-\eL-\expval{\unit-K_L}{\varphi-\varphi_L}&=\int_{\gamma} \frac z {z-e(\varphi_L)}\left \langle \psi_L \left| \frac {A^3}{\unit-A}+A\left(\frac{A}{\unit-A} +\frac 1{\unit-A} \frac {B}{\unit-B}\right) B\right| \psi_L \right\rangle \frac {dz}{2\pi i}. \end{align} We observe that, since $\gamma$ is uniformly bounded and uniformly bounded away from $e(\varphi_L)$, we can get rid of the integration, i.e., it suffices to bound \begin{align} &(I):=\sup_{z\in \gamma} \left| \left \langle \psi_L \left| \frac {A^3}{\unit-A}\right| \psi_L \right\rangle\right|,\notag\\ &(II):=\sup_{z\in \gamma}\left| \left \langle \psi_L \left|A\left(\frac{A}{\unit-A} +\frac 1{\unit-A} \frac {B}{\unit-B}\right) B\right| \psi_L \right\rangle\right|, \end{align} with the r.h.s. of (<ref>) to conclude the proof. We note that \begin{align} \expval{J_L}{\varphi-\varphi_L}=\left\|(-\Delta_L+1)^{1/2} V_{\varphi-\varphi_L} \psi_L\right\|_2^2, \end{align} and that, by infinitesimal relative boundedness of $V_{\varphi_L}$ with respect to $(-\Delta_L)$ and since $\gamma$ is uniformly bounded away from $e(\varphi_L)$, there exists some constant $C_L>0$ such that \begin{align} &\sup_{z\in \gamma} \left\|(-\Delta_L+1)^{1/2} (z-h_0)^{-k}(-\Delta_L+1)^{1/2}\right\|\leq C_L \quad \text{for} \,\, k=1,2.
\end{align} We can therefore estimate \begin{align} (I)&=\sup_{z\in \gamma}\left|(z-e(\varphi_L))^{-1}\expval{(z-h_0)^{-1}A (\unit-A)^{-1}}{V_{\varphi-\varphi_L} \psi_L}\right|\notag\\ &\lesssim_L \sup_{z\in\gamma}\left\|(-\Delta_L+1)^{1/2} (z-h_0)^{-1} \frac A {\unit-A} (-\Delta_L+1)^{1/2}\right\|\expval{J_L}{\varphi-\varphi_L}\notag\\ &\lesssim_L \sup_{z\in\gamma}\left\|(-\Delta_L+1)^{-1/2} \frac A {\unit-A} (-\Delta_L+1)^{1/2}\right\|\expval{J_L}{\varphi-\varphi_L},\\ (II)&\leq \sup_{z\in \gamma}\left\|\frac A {\unit-A} + \frac 1 {\unit-A} \frac B {\unit-B}\right\|\expval{AA^{\dagger}}{\psi_L}^{1/2} \expval{BB^{\dagger}}{\psi_L}^{1/2}\notag\\ &\lesssim_L \sup_{z\in \gamma}\left\|\frac A {\unit-A} + \frac 1 {\unit-A} \frac B {\unit-B}\right\| \expval{J_L}{\varphi-\varphi_L}. \end{align} Since \begin{align} \frac A {\unit-A}=V_{\varphi-\varphi_L}\frac 1 {z-h_{\varphi}}, \end{align} it follows that \begin{align} &\left\|(-\Delta_L+1)^{-1/2} \frac A {\unit-A}(-\Delta_L+1)^{1/2}\right\|\notag\\ &\leq \|(-\Delta_L+1)^{-1/2}V_{\varphi-\varphi_L}(-\Delta_L)^{-1/2}\|\|(-\Delta_L)^{1/2}(z-h_{\varphi})^{-1}(-\Delta_L)^{1/2}\|\notag\\ &\lesssim_L \|(-\Delta_L+1)^{-1} (\varphi-\varphi_L)\|, \end{align} where we used the relative boundedness of $h_{\varphi}$ with respect to $-\Delta_L$ and Corollary <ref>. This yields the right bound for $(I)$. Similar estimates yield the right bounds for $\|A(\unit-A)^{-1}\|$ and $\|(\unit-A)^{-1}B(\unit-B)^{-1}\|\lesssim_L\|B\|$, concluding the proof. As a final result of this subsection, we prove the following Lemma about the operators $K_L$ and $J_L$. Let $K_L$ and $J_L$ be the operators defined in (<ref>). We have \begin{align} K_L, J_L \lesssim_L (-\Delta_L+1)^{-2}. \end{align} We prove the result for $J_L$. By the relative boundedness of $h_{\varphi_L}$ with respect to $-\Delta_L$, the same proof applies to $K_L$. We shall show that $(-\Delta_L+1)(-\Delta_L)^{-1/2}\psi_L (-\Delta_L+1)^{-1/2}$ is bounded as an operator on $L^2(\TL)$.
In fact, for $f\in L^2(\TL)$, \begin{align} &\|(-\Delta_L+1)(-\Delta_L)^{-1/2}\psi_L (-\Delta_L+1)^{-1/2} f\|_2^2\nonumber\\ &=\sum_{0\neq k\in \frac {2\pi} L \mathbb{Z}^3} \left(\frac{|k|^2+1}{|k|}\right)^2 \left|\sum_{\xi \in \frac {2\pi} L \mathbb{Z}^3} (\psi_L)_{k-\xi} \frac{f_{\xi}}{(|\xi|^2+1)^{1/2}}\right|^2\nonumber\\ &\leq \|(-\Delta_L+1)^{3/2}\psi_L\|_2^2\sum_{0\neq k\in \frac {2\pi} L \mathbb{Z}^3} \left(\frac{|k|^2+1}{|k|}\right)^2 \sum_{\xi \in \frac {2\pi} L \mathbb{Z}^3} \frac{|f_{\xi}|^2}{(|k-\xi|^2+1)^{3}(|\xi|^2+1)}\notag\\ &\lesssim_L \sum_{ \xi \in \frac {2\pi} L \mathbb{Z}^3} \frac{|f_{\xi}|^2}{|\xi|^2+1} \sum_{0\neq k\in \frac {2\pi} L \mathbb{Z}^3} \frac{(|k|^2+1)^2}{|k|^2(|k-\xi|^2+1)^{3}}\lesssim_L \|f\|_2^2, \end{align} where we used that $\psi_L\in C^{\infty}(\TL)$ and that $\sum_{0\neq k\in \frac {2\pi} L \mathbb{Z}^3}\frac{(|k|^2+1)^2}{|k|^2(|k-\xi|^2+1)^{3}}\lesssim |\xi|^2+1$. Therefore \begin{equation} J_L \leq \|(-\Delta_L+1)(-\Delta_L)^{-1/2} \psi_L (-\Delta_L+1)^{-1/2}\|^2(-\Delta_L+1)^{-2}\lesssim_L (-\Delta_L+1)^{-2}, \end{equation} as claimed. §.§.§ Local Properties of $\MinLf$ and $\FL$ For $L>L_1$ we introduce the notation \begin{align} \label{eq:projdef} \GradProj:= \text{$L^2$-projection onto} \,\, \spn\{\partial_j \varphi_L\}_{j=1}^3, \end{align} which is going to be used throughout this section and Section <ref>. According to Theorem <ref>, the condition $L> L_1$ guarantees that $\psi_L^y\neq \psi_L$ for any $\psi_L \in \MinLe$ and any $y\neq 0$, which implies that $\ran \GradProj$ is three-dimensional (i.e., that the partial derivatives of $\varphi_L$ are linearly independent); if not, there would exist $\nu \in \mathbb{S}^2$ such that $\partial_{\nu} \varphi_L=0$, and then, by uniqueness of the positive ground state of $h_{\varphi_L}$, also $\partial_{\nu} \psi_L=0$, which would imply $\psi_L=\psi_L^y$ for any $y$ parallel to $\nu$. For technical reasons, we also introduce a family of weighted norms which will be needed in Section <ref>.
For $T\geq0$, we define \begin{align} \label{eq:weightednorm} \|\varphi\|_{W_T}:=\expval{W_T}{\varphi}^{1/2}, \end{align} where $W_T$ acts in $k$-space as multiplication by \begin{align} \label{eq:WTdef} \begin{cases} 1 & |k|\leq T\\ (|k|^2+1)^{-1} & |k|> T. \end{cases} \end{align} Note that $\|\varphi\|^2_{W_0}=\expval{(-\Delta_L+1)^{-1}}{\varphi}$ and $\|\varphi\|_{W_{\infty}}=\|\varphi\|_2$. For the purpose of this section we could formulate the following Lemma only with respect to $\|\cdot\|_2=\|\cdot\|_{W_{\infty}}$, but we opt for this more general version since we shall need it in Section <ref>. For any $L>L_1$, there exists $\varepsilon'_L$ (independent of $T$) such that for any $\varphi\in L^2_{\R}(\TL)$ with $\dist_{W_T}(\varphi,\Omega_L(\varphi_L))\leq \varepsilon'_L$ there exists a unique pair $(y_{\varphi},v_{\varphi})$, depending on $T$, with $y_{\varphi}\in \TL$ and $v_{\varphi}\in (\spn_{i=1,2,3} \{W_T\partial_i \varphi_L\})^{\perp}$, such that \begin{align} \label{eq:diffreq1} \varphi=\varphi_L^{y_{\varphi}}+(v_{\varphi})^{y_{\varphi}} \quad \text{and} \quad \|v_{\varphi}\|_{W_T}\leq \varepsilon'_L. \end{align} As with Proposition <ref> above, we opt for an $L$-dependent version of Lemma <ref> for simplicity, as it is sufficient for our purposes. We nevertheless believe it is possible to prove a corresponding statement that is uniform in $L$. Note that Lemma <ref> is equivalent to the statement that there exists a $T$-independent $\varepsilon'_L$ such that the $W_T$-projection onto $\Omega_L(\varphi_L)$ is uniquely defined in an $\varepsilon_L'$-neighborhood of $\Omega_L(\varphi_L)$ with respect to the $W_T$-norm, and that, for any $\varphi$ therein, $\varphi_L^{y_{\varphi}}$ characterizes the $W_T$-projection of $\varphi$ onto $\Omega_L(\varphi_L)$, so that \begin{align} \dist_{W_T}(\varphi,\Omega_L(\varphi_L))=\|\varphi-\varphi_L^{y_{\varphi}}\|_{W_T}=\|v_{\varphi}\|_{W_T}.
\end{align} We begin by observing that the Lemma is equivalent to showing that for any $\|\cdot\|_{W_T}$-normalized ${v \in (\spn_{i=1,2,3} \{W_T\partial_i \varphi_L\})^{\perp}}$, any $\varepsilon \leq \varepsilon_L'$ and any $0\neq y\in \TL$ we have \begin{align} \label{eq:diffreq2} \varepsilon < \|\varphi_L+\varepsilon v-\varphi_L^y\|_{W_T}. \end{align} Indeed, if the Lemma holds then $\varphi=\varphi_L+\varepsilon v $ does not admit other decompositions of the form (<ref>), which implies that, for any $y\neq 0$, (<ref>) holds (otherwise there would exist $y\neq 0$ minimizing the $W_T$-distance of $\varphi$ from $\Omega_L(\varphi_L)$ and such $y$ would necessarily yield a second decomposition of the form (<ref>)). On the other hand, if the statement (<ref>) holds and the Lemma does not, then there exists $\varphi$ such that $\dist_{W_T}(\varphi,\Omega_L(\varphi_L))\leq \varepsilon'_L$ and also such that $(y_1,v_1)$ and $(y_2,v_2)$ yield two different decompositions of the form (<ref>) for $\varphi$ (note that at least one decomposition of the form (<ref>) always exists, as there exists at least one element of $\Omega_L(\varphi_L)$ realizing the $W_T$-distance of $\varphi$ from $\Omega_L(\varphi_L)$). By considering $\varphi^{-y_1}$ (respectively $\varphi^{-y_2}$) we find $\|v_1\|_{W_T}>\|v_2\|_{W_T}$ (respectively $\|v_2\|_{W_T}>\|v_1\|_{W_T}$), which is clearly a contradiction. We shall hence proceed to prove the statement (<ref>). Taylor's formula and the regularity of $\varphi_L$ imply the existence of a $T$-independent constant $C^1_L$ such that \begin{align} \label{eq:hessianbound} \varphi_L^y=\varphi_L+y\cdot (\nabla \varphi_L)+ g_y, \quad \text{with} \quad \|g_y\|_{W_T}\leq \|g_y\|_2\leq C^1_L |y|^2.
\end{align} As remarked after (<ref>), the range of $\GradProj$ is three-dimensional, hence there exists a constant $C^2_L$ independent of $T$ such that \begin{align} \label{eq:gradlb} \min_{\nu \in \mathbb{S}^2} \|\nu \cdot \nabla \varphi_L\|_{W_T}\geq \min_{\nu \in \mathbb{S}^2} \|\nu \cdot \nabla \varphi_L\|_{W_0}\geq C^2_L. \end{align} Therefore, using that $v\perp_{W_T} \nabla \varphi_L$ in combination with (<ref>) and (<ref>), we find, for \begin{align} \label{eq:firstybound} |y|<((C^2_L)^2-2\varepsilon C^1_L)^{1/2}(C^1_L)^{-1}, \end{align} \begin{align} \|\varphi_L+\varepsilon v-\varphi_L^y\|_{W_T}&=\|\varepsilon v-y\cdot (\nabla \varphi_L)-g_y\|_{W_T}\geq \left(\varepsilon^2+(C^2_L)^2|y|^2\right)^{1/2}-C^1_L |y|^2> \varepsilon, \end{align} i.e., that (<ref>) holds for $y$ satisfying (<ref>). Furthermore, we have \begin{align} \label{eq:initialest} \|\varphi_L+\varepsilon v-\varphi_L^y\|_{W_T}^2\geq \varepsilon^2+\|\varphi_L-\varphi_L^y\|_{W_T}\left(\|\varphi_L-\varphi_L^y\|_{W_T}-2\varepsilon\right), \end{align} and this implies that (<ref>) holds for any $y$ such that \begin{align} \label{eq:firstscan} \|\varphi_L-\varphi_L^y\|_{W_T}> 2 \varepsilon. \end{align} Using again (<ref>) and (<ref>), there exist $C^3_L,c^1_L,c^4_L>0$ independent of $T$ such that \begin{align} \label{eq:localexp} \|\varphi_L-\varphi_L^y\|_{W_T}=\|y\cdot (\nabla \varphi_L)&+g_y\|_{W_T}\geq C^2_L|y|-C^1_L|y|^2\geq C^3_L |y|, \quad \text{for} \quad |y|\leq c^1_L,\nonumber\\ &\|\varphi_L-\varphi_L^y\|_{W_T}>c^4_L \quad \text{for}\quad |y|>c^1_L, \end{align} where the second line simply follows from $\|\cdot\|_{W_T}\geq\|\cdot\|_{W_0}$, the fact that $\varphi_L\neq \varphi_L^y$ for any $0\neq y \in [-L/2,L/2]^3$ and the continuity of $\varphi_L$.
Combining (<ref>) and (<ref>), we conclude that (<ref>) holds if either $|y|>c^1_L$ or \begin{align} \label{eq:secondybound} |y|> 2\varepsilon (C^3_L)^{-1}. \end{align} Picking $\varepsilon_L'$ sufficiently small, the fact that (<ref>) holds both under the conditions (<ref>) and (<ref>) shows that it holds for any $y\in \TL$, and this completes the proof. We conclude our study of the Pekar functional $\FL$ by showing that $\ker (\unit-K_L)=\spn\{\partial_j \varphi_L\}_{j=1}^3=\ran \GradProj$. Since clearly $\ran \GradProj\subset \ker(\unit-K_L)$, this is a consequence of the following Proposition. Recalling the definition of $\tau_L$ from Corollary <ref>, we have \begin{align} \unit-K_L\geq \tau_L (\unit-\GradProj). \end{align} We need to show that for all normalized $v\in \ran (\unit-\GradProj)$ the bound \begin{align} \expval{\unit-K_L}{v}\geq \tau_L \end{align} holds. Using Lemma <ref> in the case $T=\infty$, for any such $v$ and $\varepsilon$ small enough, denoting $\varphi=\varphi_L+\varepsilon v$, we obtain \begin{align} \dist_{L^2}^2(\varphi,\Omega_L(\varphi_L))= \varepsilon^2. \end{align} Moreover, since $\|(-\Delta_L+1)^{-1}(\varphi-\varphi_L)\|\leq \varepsilon\|v\|_2=\varepsilon$, for $\varepsilon$ small enough we can expand $\FL(\varphi)$ with respect to $\varphi_L$ using Proposition <ref>. Combining this with (<ref>), we arrive at \begin{align} \tau_L \varepsilon^2\leq\FL(\varphi_L+\varepsilon v)-\eL \leq \varepsilon^2\expval{\unit-K_L}{v}+\varepsilon^3\expval{J_L}{v}. \end{align} Since $\varepsilon$ can be taken arbitrarily small, the proof is complete. § PROOF OF MAIN RESULTS In this Section we give the proof of Theorem <ref>. In Section <ref> we prove the upper bound in (<ref>). In Section <ref> we estimate the cost of substituting the full Hamiltonian $\HL$ with a cut-off Hamiltonian depending only on finitely many phonon modes, a key step in providing a lower bound for $\infspec \HL$. Finally, in Section <ref>, we show the validity of the lower bound in (<ref>).
The approach used in Sections <ref> and <ref> follows closely the one used in [9], even though, in our setting, minor complications arise in the proof of the upper bound due to the presence of the zero modes of the Hessian. For the lower bound in Section <ref>, however, a substantial amount of additional work is needed to deal with the translation invariance, which complicates the proof significantly. §.§ Upper Bound In this section we construct a trial state, which will be used to obtain an upper bound on the ground state energy of $\HL$ for fixed $L>L_1$. This is carried out using the $Q$-space representation of the bosonic Fock space $\mathcal{F}(L^2(\TL))$ (see [25]). Even though the estimates contained in this section are $L$-dependent, we believe it is possible, with minor modifications to the proof, to obtain the same upper bound with the same error estimates uniformly in $L$. Our trial state depends non-trivially only on finitely many phonon variables, and we proceed to describe it in more detail. If one picks $\Pi$ to be a real finite rank projection on $L^2(\TL)$, then \begin{align} \mathcal{F}(L^2(\TL))\cong \mathcal{F}(\Pi L^2(\TL))\otimes \mathcal{F}((\unit-\Pi)L^2(\TL)). \end{align} The first factor $\mathcal{F}(\Pi L^2(\TL))$ can isomorphically be identified with $L^2(\mathbb{R}^N)$, where $N$ is the complex dimension of $\ran \Pi$. In particular, there is a one-to-one correspondence between any real $\varphi\in \ran \Pi$ and $\lambda=(\lambda_1,\dots,\lambda_N)\in \mathbb{R}^N$, explicitly given by \begin{align} \label{eq:rangeRniso} \varphi=\sum_{i=1}^N \lambda_i \varphi_i\cong(\lambda_1,\dots,\lambda_N)=\lambda, \end{align} where $\{\varphi_i\}_{i=1}^N$ denotes an orthonormal basis of $\ran \Pi$ consisting of real-valued functions.
The trial state we use corresponds to the vacuum in the second factor ${\mathcal{F}((\unit-\Pi)L^2(\TL))}$ and shall hence be written only as a function of $x$ (the electron variable) and $\lambda$ (the finitely many phonon variables selected by $\Pi$). We begin by specifying some properties we wish $\Pi$ to satisfy. Consider $\varphi_L$ from Corollary <ref> and define $\Pi$ to be a projection of the form $\Pi=\Pi'+\GradProj$, where $\GradProj$ is defined in (<ref>) and $\Pi'$ is an $(N-3)$-dimensional projection onto $(\spn\{\partial_j \varphi_L\}_{j=1}^3)^{\perp}=\ran (\unit-\GradProj)$ that will be further specified later but will always be taken so that $\varphi_L \in \ran \Pi$. Our trial state is of the form \begin{align} \Psi(x,\varphi)=G(\varphi)\eta(\varphi) \psi_{\varphi}(x), \end{align} where * $x\in \TL$ and $\varphi$ is a real element of $\ran \Pi$ (identified with $\lambda\in \mathbb{R}^N$ as in (<ref>)), * $G(\varphi)$ is a Gaussian factor explicitly given by \begin{align} G(\varphi)=\exp(-\alpha^2\expval{\left[\Pi(\unit-K_L)\Pi \right]^{1/2}}{\varphi-\varphi_L}), \end{align} * $\eta$ is a `localization factor' given by \begin{align} \label{eq:eta} \eta(\varphi)=\chi\left(\varepsilon^{-1}\|(-\Delta_L+1)^{-1/2}(\varphi-\varphi_L)\|_{L^2(\TL)}\right), \end{align} for some $0<\varepsilon<\varepsilon_L$ (with $\varepsilon_L$ as in Proposition <ref>), where $0\leq \chi\leq 1$ is a smooth cut-off function such that $\chi(t)=1$ for $t\leq 1/2$ and $\chi(t)=0$ for $t\geq 1$, * $\psi_{\varphi}$ is the unique positive ground state of $h_{\varphi}=-\Delta_L+V_{\varphi}$. We note that our state actually depends on two parameters ($N$ and $\varepsilon$) and, of course, on the specific choice of $\Pi'$. We choose $\{\varphi_i\}_{i=1,\dots, N}$ to be a real orthonormal basis of eigenfunctions of $[\Pi(\unit-K_L)\Pi]$ corresponding to eigenvalues $\mu_i=0$ for $i=1,2,3$ and $\mu_i\geq \tau_L>0$ for $i=4,\dots,N$.
Recalling Proposition <ref>, this amounts to choosing $\{\varphi_i\}_{i=1,2,3}$ to be a real orthonormal basis of $\ran \GradProj$ and $\{\varphi_i\}_{i=4,\dots,N}$ to be a real orthonormal basis of eigenfunctions of $[\Pi'(\unit-K_L)\Pi']$. With this choice, we have (with a slight abuse of notation) \begin{align} G(\lambda_4,\dots,\lambda_N)=\exp(-\alpha^2\sum_{i=4}^{N} \mu_i^{1/2}(\lambda_i-\lambda^L_i)^2), \end{align} where $\varphi_L\cong\lambda_L=(0,0,0,\lambda^L_4,\dots,\lambda_N^L)$, since $\varphi_L \in \ran \Pi$ by construction, and the first three coordinates are $0$ since $\varphi_L \in \left(\ran \GradProj\right)^{\perp}$. We first show that even if $G$ does not have finite $L^2(\mathbb{R}^N)$-norm, $\Psi$ does, due to the presence of $\eta$. We define \begin{align} \label{eq:Teps} T_{\varepsilon}:=\{\|(-\Delta_L+1)^{-1/2}(\varphi-\varphi_L)\|\leq \varepsilon\}\subset \R^{N} \end{align} and \begin{align} \gamma_L:=\inf_{\varphi\in \ran \GradProj\atop \|\varphi\|_2=1} \expval{(-\Delta_L+1)^{-1}}{\varphi}>0.
\end{align} Then, on $T_{\varepsilon}$, noting that $\GradProj \varphi_L =0$, we have \begin{align} &\gamma_L^{1/2}\sqrt{\lambda_1^2+\lambda_2^2+\lambda_3^2}=\gamma_L^{1/2}\|\GradProj\varphi\|\leq \|(-\Delta_L+1)^{-1/2} \GradProj (\varphi-\varphi_L)\|_2\nonumber\\ &\leq \|(-\Delta_L+1)^{-1/2} \Pi' (\varphi-\varphi_L)\|_2+\varepsilon \leq \|\Pi'(\varphi-\varphi_L)\|+\varepsilon=\left(\sum_{i=4}^N (\lambda_i-\lambda_i^L)^2\right)^{1/2}+\varepsilon \end{align} and this implies, using the normalization of $\psi_{\varphi}$, that \begin{align} \|\Psi\|^2&=\int_{\mathbb{R}^N} G(\lambda_4,\dots, \lambda_N)^2 \eta(\lambda)^2 d \lambda_1 \dots d \lambda_N\leq \int_{\mathbb{R}^N} G(\lambda_4,\dots, \lambda_N)^2\unit_{T_{\varepsilon}}(\lambda) d \lambda_1 \cdots d \lambda_N\notag\\ &\leq \frac {4\pi} 3 \int_{\mathbb{R}^{N-3}} G(\lambda_4,\dots,\lambda_N)^2 \gamma_L^{-3/2}\left[\left(\sum_{i=4}^N (\lambda_i-\lambda_i^L)^2\right)^{1/2}+\varepsilon\right]^3 d\lambda_4 \cdots d\lambda_N<\infty. \end{align} We spend a few words to motivate our choice of $\Psi$. The absolute value squared of $\Psi$ has to be interpreted as a probability density over the pairs $(\varphi,x)$, with $\varphi$ being a classical state for the phonon field and $x$ the position of the electron. In the electron coordinate, our $\Psi$ corresponds to the ground state of $h_{\varphi}$ for any value of $\varphi$. This implies, by straightforward computations, that the expectation value of the Fröhlich Hamiltonian in $\Psi$ equals that of $e(\varphi)+\numero$, with $e(\varphi)$ the ground state energy of $h_{\varphi}$ and $\numero$ the number operator. Moreover, because of the factor $\eta$, we are localizing our state to the regime where the Hessian expansion of $e(\varphi)$ from Proposition <ref> holds.
To leading order, this effectively makes our system formally correspond to a system of infinitely many harmonic oscillators with frequencies given by the eigenvalues of $(\unit-K_L)^{1/2}$, with a Gaussian ground state. To carry this analysis out rigorously, we need to choose a suitable finite rank projection $\Pi$, as detailed in the remainder of this section. We are now ready to delve into the details of the proof. It is easy to see that the interaction term appearing in the Fröhlich Hamiltonian acts in the $Q$-space representation as multiplication by $V_{\varphi}(x)$. Therefore, since $\Psi$ corresponds to the vacuum on $(\unit-\Pi)L^2(\TL)$ and only depends on $x$ through the factor $\psi_{\varphi}(x)$, the ground state of $h_{\varphi}$, it follows that \begin{align} \expval{\HL}{\Psi}=\expval{e(\varphi)+\numero}{\Psi} \end{align} where $\varphi = \Pi \varphi \cong \lambda\in \R^N$ and the inner product on the r.h.s. is naturally interpreted as the one on $L^2(\TL)\otimes L^2(\mathbb{R}^{N})$. In the $Q$-space representation, the number operator takes the form \begin{align} \numero=\sum_{n=1}^N \left(-\frac 1 {4\alpha^4} \partial_{\lambda_n}^2+\lambda_n^2-\frac 1 {2\alpha^2}\right)= \frac 1 {4\alpha^4} (-\Delta_{\lambda})+|\lambda|^2-\frac N {2\alpha^2}. \end{align} Using the fact that $\eta$ is supported on the set $T_{\varepsilon}$ defined in (<ref>), we can use the Hessian expansion from Proposition <ref> to obtain bounds on $e(\lambda)$.
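As a parenthetical numerical aside (not part of the argument), one can check for a single mode that the Gaussian vacuum $e^{-\alpha^2\lambda^2}$ is annihilated by the $Q$-space number operator displayed above; the grid parameters and the value of $\alpha$ in the sketch below are arbitrary choices.

```python
import numpy as np

alpha = 1.3  # arbitrary coupling value, for illustration only

# one phonon mode: N = -(1/(4 alpha^4)) d^2/d lambda^2 + lambda^2 - 1/(2 alpha^2)
lam = np.linspace(-6.0, 6.0, 4001)
h = lam[1] - lam[0]
g = np.exp(-alpha**2 * lam**2)  # the Gaussian vacuum in Q-space

# central second difference on interior grid points
d2g = (g[2:] - 2.0 * g[1:-1] + g[:-2]) / h**2

# apply the number operator to g
Ng = -d2g / (4.0 * alpha**4) + (lam[1:-1] ** 2 - 1.0 / (2.0 * alpha**2)) * g[1:-1]

print(np.max(np.abs(Ng)))  # numerically zero (up to discretization error)
```

The printed residual is at the level of the finite-difference truncation error, reflecting the exact cancellation between the kinetic, harmonic, and constant terms.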
Consequently, for a suitable positive constant $C_L$, \begin{align} \label{eq:firstevaluationtrial} \expval{\HL}{\Psi}&\leq \expval{\eL+\expval{\unit-K_L+\varepsilon C_L J_L}{\varphi-\varphi_L}}{\Psi}\nonumber\\ &\quad +\left \langle \Psi \left|\frac 1 {4\alpha^4}(-\Delta_{\lambda})-\frac N{2\alpha^2} \right| \Psi\right \rangle\nonumber\\ & =\left(\eL-\frac 1 {2\alpha^2}\Tr(\Pi)\right) \|\Psi\|^2+A+B, \end{align} where \begin{align} A&=\left \langle \Psi \left| \frac 1 {4\alpha^4}(-\Delta_{\lambda})+\sum_{i=4}^{N} \mu_i(\lambda_i-\lambda^L_i)^2 \right| \Psi \right\rangle, \notag\\ B&=\varepsilon C_L\expval{\expval{J_L}{\varphi-\varphi_L}}{\Psi}. \end{align} We shall now proceed to first show that $B$ only contributes as an error term and then to rewrite $A$ as the sum of a leading order energy correction term and an error term. We recall that by Lemma <ref> \begin{align} J_L\lesssim_L (-\Delta_L+1)^{-2}. \end{align} Therefore, since $\eta$ is supported on $T_{\varepsilon}$, we have \begin{align} \label{eq:orderofB} B\lesssim_L \varepsilon^3\|\Psi\|^2. \end{align} To treat $A$ a bit more work is required. A direct calculation shows that \begin{align} \left[\frac 1 {4\alpha^4}(-\Delta_{\lambda})+\sum_{i=4}^{N} \mu_i(\lambda_i-\lambda^L_i)^2\right]G=\frac 1 {2\alpha^2} \Tr(\left[\Pi(\unit-K_L)\Pi\right]^{1/2})G.
\end{align} The previous identity, together with straightforward manipulations involving integration by parts, shows that \begin{align} \label{eq:boundA} A&=\frac 1 {4\alpha^4}\left(\langle \psi_\varphi G \eta| \psi_\varphi (-\Delta_{\lambda} G)\eta \rangle+ \int_{\TL\times \mathbb{R}^{N}} G^2 |\nabla_{\lambda} (\eta \psi_{\varphi})|^2 \right) +\left\langle \Psi \left|\sum_{i=4}^N \mu_i(\lambda-\lambda^L_i)^2\right|\Psi\right \rangle \notag\\ &\leq\frac 1 {2\alpha^2} \Tr(\left[\Pi(\unit-K_L)\Pi\right]^{1/2})\|\Psi\|^2 \nonumber\\ & \quad +\frac 1 {2\alpha^4}\left[ \int_{\TL\times \mathbb{R}^{N}} G^2 \eta^2 |\nabla_{\lambda} \psi_{\varphi}|^2 +\int_{\TL\times \mathbb{R}^{N}} G^2 |\nabla_{\lambda}\eta|^2 |\psi_{\varphi}|^2 \right] \nonumber\\ &=:\frac 1 {2\alpha^2} \Tr(\left[\Pi(\unit-K_L)\Pi\right]^{1/2})\|\Psi\|^2+A_1+A_2, \end{align} where the first term is clearly a leading order energy correction whereas $A_1$ and $A_2$ have to be interpreted as error terms, as we now proceed to show. By standard first order perturbation theory (using that the phase of $\psi_{\varphi}$ is chosen so that it is the unique positive ground state of $h_{\varphi}$) we have \begin{align} \partial_{\lambda_n} \psi_{\varphi}=-\frac {Q_{\psi_{\varphi}}}{h_{\varphi}-e(\varphi)} V_{\varphi_n} \psi_{\varphi}, \end{align} where we recall that $Q_{\psi_{\varphi}}=\unit-\ket{\psi_{\varphi}}\bra{\psi_{\varphi}}$.
This implies that, for fixed $\varphi$, \begin{align} &\int_{\TL} |\nabla_{\lambda} \psi_{\varphi}(x)|^2 dx=\sum_{n=1}^N\left\|\frac {Q_{\psi_{\varphi}}}{h_{\varphi}-e(\varphi)} V_{\varphi_n} \psi_{\varphi}\right\|^2_{L^2(\TL)}\notag\\ &=\sum_{n=1}^N \expval{(-\Delta_L)^{-1/2} \psi_{\varphi} \left(\frac{Q_{\psi_{\varphi}}}{h_{\varphi}-e(\varphi)}\right)^2 \psi_{\varphi}(-\Delta_L)^{-1/2}}{\varphi_n}\notag\\ &=\Tr(\Pi(-\Delta_L)^{-1/2} \psi_{\varphi} \left(\frac{Q_{\psi_{\varphi}}}{h_{\varphi}-e(\varphi)}\right)^2 \psi_{\varphi}(-\Delta_L)^{-1/2}\Pi), \end{align} where $\psi_{\varphi}$ is interpreted as a multiplication operator in the last two expressions. Since $(-\Delta_L+1)^{1/2} \left(\frac{Q_{\psi_{\varphi}}}{h_{\varphi}-e(\varphi)}\right)^2 (-\Delta_L+1)^{1/2}$ is uniformly bounded over the support of $\eta$ (the potential $V_{\varphi}$ being uniformly infinitesimally relatively bounded with respect to $-\Delta_L$ by Corollary <ref>) and recalling that $\psi_{\varphi}$ is normalized by definition, we get \begin{align} &\Tr(\Pi(-\Delta_L)^{-1/2} \psi_{\varphi} \left(\frac{Q_{\psi_{\varphi}}}{h_{\varphi}-e(\varphi)}\right)^2 \psi_{\varphi}(-\Delta_L)^{-1/2}\Pi)\nonumber\\ &\lesssim_L \Tr(\Pi(-\Delta_L)^{-1/2}\psi_{\varphi} (-\Delta_L+1)^{-1} \psi_{\varphi} (-\Delta_L)^{-1/2}\Pi)\lesssim_L 1. \end{align} In summary, we conclude that \begin{align} \label{eq:A1bound} A_1\lesssim_L \frac 1 {\alpha^4} \|\Psi\|^2. \end{align} Finally, we proceed to bound $A_2$. 
Recalling the definition of $\eta$ and $T_{\varepsilon}$, we see that \begin{align} \label{eq:gradetaest} |\nabla_{\lambda} \eta|^2&=\left|\nabla_{\lambda}\left[\chi\left(\varepsilon^{-1}\|(-\Delta_L+1)^{-1/2}(\varphi-\varphi_L)\|_{L^2(\TL)}\right)\right]\right|^2 \nonumber\\ &\lesssim\varepsilon^{-2}\unit_{T_{\varepsilon}}(\varphi)\left|\nabla_{\lambda} \|(-\Delta_L+1)^{-1/2}(\varphi-\varphi_L)\|_{L^2(\TL)}\right|^2\nonumber\\ &\lesssim \varepsilon^{-2}\unit_{T_{\varepsilon}}(\varphi)\frac{\|(-\Delta_L+1)^{-1}(\varphi-\varphi_L)\|^2}{\|(-\Delta_L+1)^{-1/2}(\varphi-\varphi_L)\|^2}\leq \unit_{T_{\varepsilon}}(\varphi)\varepsilon^{-2}, \end{align} where we used that $\chi$ is smooth with $\chi'$ bounded and supported in $[1/2,1]$, so that $\nabla_{\lambda}\eta$ is supported on $T_{\varepsilon}$. Therefore, using also the normalization of $\psi_{\varphi}$, we obtain \begin{align} \label{eq:A2boundfirst} A_2\lesssim \frac 1 {\alpha^4\varepsilon^2} \|\unit_{T_{\varepsilon}}G\|_{L^2(\R^N)}^2. \end{align} We now need to bound $\|\unit_{T_{\varepsilon}}G\|_{L^2(\R^N)}$ in terms of $\|\Psi\|= \|\eta G\|_{L^2(\R^N)}$. We define \begin{align} S_{\nu}:=\{\varphi\in \ran \Pi \,|\, \|\Pi'(\varphi-\varphi_L)\|_2\leq \nu\} \end{align} and observe that on $S_{\nu}\cap T_{\varepsilon}$ we have, by the triangle inequality, \begin{align} \label{eq:insidebound} \|(-\Delta_L+1)^{-1/2}\GradProj \varphi\|_2 \leq \varepsilon+\nu, \end{align} and that on $S_{\nu}^c$ \begin{align} \label{eq:outsidebound} G(\lambda)\leq \exp(-\alpha^2 \tau_L^{1/2} \nu^2), \end{align} where we used that $\left[\Pi(\unit-K_L)\Pi\right]^{1/2}\geq \tau_L^{1/2} \Pi'$ (with $\tau_L$ being the constant appearing in Proposition <ref>).
We then have, using (<ref>), that \begin{align} \|\unit_{T_{\varepsilon}}G\|_2^2&= \|\unit_{T_{\varepsilon}\cap S_{\nu}}G\|_2^2+\|\unit_{T_{\varepsilon}\cap S^c_{\nu}}G\|_2^2\notag\\ &\leq \int_{\{\|(-\Delta_L+1)^{-1/2}\GradProj \varphi\|_2 \leq \varepsilon+\nu\}\cap S_{\nu}} G^2 d\lambda_1 \dots d\lambda_N+\int_{T_{\varepsilon}\cap S^c_{\nu}} G^2 d\lambda_1 \dots d\lambda_N. \end{align} We now perform the change of variables $(\lambda_1,\lambda_2,\lambda_3)=3(\lambda'_1,\lambda'_2,\lambda'_3)$ in the first integral and the change of variables $\lambda-\lambda_L=2(\lambda'-\lambda_L)$ in the second integral and fix $\nu=\varepsilon/8$, obtaining \begin{align} \|\unit_{T_{\varepsilon}}G\|_2^2&\leq 27 \int_{\{\|(-\Delta_L+1)^{-1/2}\GradProj \varphi\|_2 \leq (\varepsilon+\nu)/3\}\cap S_{\nu}} G^2 d\lambda+2^N\int_{T_{\varepsilon/2}\cap S_{\nu/2}^c} G(\lambda')^8 d\lambda'\notag\\ &\leq \left(27+2^N \exp(-6\alpha^2 \tau_L^{1/2} \nu^2/4)\right) \int_{T_{\varepsilon/2}} G^2 d\lambda \nonumber\\ &\leq \left(27+2^N \exp(-6\alpha^2 \tau_L^{1/2} \nu^2/4)\right)\|\Psi\|^2, \end{align} where in the second step we used that $\{\|(-\Delta_L+1)^{-1/2}\GradProj \varphi\|_2 \leq (\varepsilon+\nu)/3\}\cap S_{\nu} \subset T_{\varepsilon/2}$ by the triangle inequality if $\nu=\varepsilon/8$, and (<ref>) to estimate the Gaussian factor on $S_{\nu/2}^c$. Therefore, as long as $\sqrt{N}\leq C_L^1 \alpha \varepsilon$ for a sufficiently small $C_L^1$, we conclude that \begin{align} \label{eq:A2boundsecond} A_2\lesssim \frac 1 {\alpha^4 \varepsilon^2} \|\Psi\|^2. \end{align} Plugging estimates (<ref>), (<ref>), (<ref>), and (<ref>) into (<ref>), we infer, for $\sqrt{N}\leq C_L^1 \alpha\varepsilon$, that for a sufficiently large $C_L^2$ \begin{align} \label{eq:FinalUpEst} &\frac{\expval{\HL}{\Psi}}{\bra{\Psi}\ket{\Psi}}\leq \eL-\frac 1 {2\alpha^2}\Tr(\Pi-\left[\Pi(\unit-K_L)\Pi\right]^{1/2})+ C_L^2(\varepsilon^3+\alpha^{-4}\varepsilon^{-2}). 
\end{align} We now proceed to choose a real orthonormal basis for $\ran \Pi$ which is convenient for bounding the r.h.s. of (<ref>). Let $\{g_j\}_{j\in \mathbb{N}}$ be an orthonormal basis of eigenfunctions of $K_L$ with corresponding eigenvalues $k_j$, ordered such that $k_{j+1}\geq k_j$. By Proposition <ref> we have $k_j=1$ for $j=1,2,3$ and $k_j<1$ for $j>3$. Moreover, $\GradProj$ coincides with the projection onto $\spn\{g_1,g_2,g_3\}$. We pick $\Pi'$ to be the projection onto $\spn\{g_4,\dots, g_{N}\}$ if $\varphi_L \in \spn\{g_1,\dots, g_N\}$ and onto $\spn\{g_4,\dots, g_{N-1}, \varphi_L\}$ otherwise. With this choice the eigenvalues $\mu_i$ of $\Pi(\unit-K_L)\Pi$ appearing in the Gaussian factor $G$ are equal to \begin{align} \mu_j=1-k_j, \,\,\,\, j=1, \dots, N-1, \quad \mu_N=\begin{cases} 1-k_N & \text{if} \,\, \varphi_L \in \spn\{g_1,\dots,g_N\},\\ \expval{\unit-K_L}{\tilde{\varphi}_L} & \text{otherwise}, \end{cases} \end{align} with $\tilde{\varphi}_L:= \frac{\varphi_L-\sum_{j=4}^{N-1} g_j \bra{g_j}\ket{\varphi_L}}{\|\varphi_L-\sum_{j=4}^{N-1} g_j \bra{g_j}\ket{\varphi_L}\|_2}$. In any case \begin{align} \Tr(\Pi-\left[\Pi(\unit-K_L)\Pi\right]^{1/2})=\sum_{j=1}^{N}(1-\mu_j^{1/2})&\geq \sum_{j=1}^{N-1} (1-(1-k_j)^{1/2})\notag\\&=\Tr(\unit-(\unit-K_L)^{1/2})-\sum_{j=N}^{\infty} (1-(1-k_j)^{1/2}). \end{align} In order to estimate $\sum_{j=N}^{\infty} (1-(1-k_j)^{1/2})$, we note that Lemma <ref> implies that $k_j\lesssim_L (l_j+1)^{-2}$, where $l_j$ denotes the ordered eigenvalues of $-\Delta_L$. Since $l_j\sim j^{2/3}$ for $j\gg1$, we have \begin{align} \sum_{j=N}^{\infty} (1-(1-k_j)^{1/2})\lesssim_L N^{-1/3}. \end{align} This allows us to conclude that \begin{align} \frac{\expval{\HL}{\Psi}}{\bra{\Psi}\ket{\Psi}}\leq \eL-\frac 1 {2\alpha^2}\Tr(\unit-(\unit-K_L)^{1/2})+C_L^3(\varepsilon^3+\alpha^{-4}\varepsilon^{-2}+\alpha^{-2}N^{-1/3}), \end{align} as long as $\sqrt{N}\leq C_L^1 \alpha \varepsilon$.
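The Weyl-type asymptotics $l_j \sim j^{2/3}$ invoked above can be illustrated numerically by enumerating the eigenvalues $|k|^2$ of $-\Delta_L$; the sketch below takes $L=2\pi$ for convenience, so that $k\in\mathbb{Z}^3$, and the truncation $M$ is an arbitrary choice.

```python
import numpy as np

# eigenvalues of -Delta_L on a torus of side L = 2*pi are |k|^2 with k in Z^3
M = 20  # arbitrary truncation of the lattice
ks = np.arange(-M, M + 1)
KX, KY, KZ = np.meshgrid(ks, ks, ks, indexing="ij")
evals = np.sort((KX**2 + KY**2 + KZ**2).ravel().astype(float))

# Weyl's law predicts l_j ~ c * j^(2/3) for the ordered eigenvalues l_j
j = np.arange(1, len(evals) + 1)
mask = (evals > 0) & (evals < M**2 / 4)  # stay well inside the cube to avoid truncation
ratio = evals[mask] / j[mask] ** (2.0 / 3.0)
print(ratio[-1])  # approximately (3/(4*pi))^(2/3), i.e. about 0.385, for large j
```

The ratio fluctuates for small $j$ (lattice degeneracies) but settles near the constant predicted by the counting of lattice points inside a ball.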
The error term is minimized, under this constraint, for $\varepsilon\sim\alpha^{-8/11}$ and $N\sim \alpha^2 \varepsilon^2 \sim\alpha^{6/11}$ (with $N\sim\alpha^2\varepsilon^2$ the last error term is of order $\alpha^{-8/3}\varepsilon^{-2/3}$; balancing it against $\varepsilon^3$ gives $\varepsilon\sim\alpha^{-8/11}$, while $\alpha^{-4}\varepsilon^{-2}\sim\alpha^{-28/11}$ is then of lower order), which yields \begin{align} \frac{\expval{\HL}{\Psi}}{\bra{\Psi}\ket{\Psi}}\leq \eL-\frac 1 {2\alpha^2}\Tr(\unit-(\unit-K_L)^{1/2})+C_L \alpha^{-24/11}, \end{align} as claimed in (<ref>). §.§ The Cutoff Hamiltonian As a first step in deriving the lower bound in (<ref>), we show that it is possible to apply an ultraviolet cutoff of size $\Lambda$ to $\HL$ at an expense of order $\Lambda^{-5/2}$ (this is proven in Proposition <ref> in Section <ref>). Our approach follows closely the one in [9]. It relies on an application of a triple Lieb–Yamazaki bound (extending the method of [19]) which we carry out in Section <ref>, and on a subsequent use (in Section <ref>) of a Gross transformation [13, 23]. We shall in the following, for any real-valued $f\in L^2(\TL)$, denote \begin{align} &\Pi(f):= \Phi(if)=i(\ad(f)-a(f)). \end{align} We recall that (see (<ref>)) the interaction term in the Fröhlich Hamiltonian is given by \begin{align} \Phi(v_L^x)=a^{\dagger}(v_L^x)+a(v_L^x), \end{align} where $v_L$ was defined in (<ref>) and $a$ and $a^{\dagger}$ satisfy the rescaled commutation relations (<ref>). We shall apply an ultraviolet cutoff of size $\Lambda$ in $k$-space, which amounts to substituting the interaction term with \begin{align} \Phi(v_{L,\Lambda}^x)=a^{\dagger}(v_{L,\Lambda}^x)+a(v_{L,\Lambda}^x), \end{align} where \begin{align} \label{eq:vxlambda} v_{L,\Lambda}(y):=\sum_{0\neq k\in \frac{2\pi} L \mathbb{Z}^3\atop |k|< \Lambda} \frac 1 {|k|} \frac{e^{-i k \cdot y}}{L^3}. \end{align} To quantify the expense of such a cutoff we clearly need to bound \begin{align} \Phi(\elecphononcouplREST)=a^{\dagger}(\elecphononcouplREST)+a(\elecphononcouplREST), \end{align} where \begin{align} \label{eq:wx} w_{L,\Lambda}(y)=v_{L}(y)-v_{L,\Lambda}(y)=\sum_{k\in \frac{2\pi} L \mathbb{Z}^3\atop |k|\geq \Lambda} \frac 1 {|k|} \frac{e^{-i k \cdot y}}{L^3}. \end{align} §.§.§ Triple Lieb–Yamazaki Bounds Let us introduce the notation $p=(p_1,p_2,p_3)=-i\nabla_x$ for the electron momentum operator.
Note that on any function of the form $f(x,y)=f(y-x)$, such as $\elecphononcouplREST$, the operator $p$ simply acts as multiplication by $k$ in $k$-space and agrees, up to a sign, with $-i\nabla_y$. The purpose of this section is to prove the following proposition. Let $w_{L,\Lambda}$ be defined as in (<ref>) and $\Lambda>1$. Then \begin{align} \label{eq:awxOpBoundI} a^{\dagger}(\elecphononcouplREST)+a(\elecphononcouplREST)=\Phi(\elecphononcouplREST)\lesssim (|p|^2+\mathbb{N}+1)^2(\Lambda^{-5/2}+\alpha^{-1} \Lambda^{-3/2}), \end{align} as quadratic forms on $L^2(\TL)\otimes \mathcal{F}(L^2(\TL))$. We first need the following lemma. Let $w_{L,\Lambda}$ be defined as in (<ref>) and $\Lambda>1$. Then for any $j,l,m\in\{1,2,3\}$ \begin{align} \label{eq:awx1} &a^{\dagger}\left[(\partial_j \partial_l \partial_m (-\Delta_L)^{-3}w_{L,\Lambda})^x\right]a\left[(\partial_j \partial_l \partial_m (-\Delta_L)^{-3}w_{L,\Lambda})^x\right]\lesssim \Lambda^{-5} \mathbb{N},\\ \label{eq:awx2} &\|\partial_j \partial_l(-\Delta_L)^{-2} w_{L,\Lambda}\|^2_{L^2(\TL)} \lesssim \Lambda^{-3},\\ \label{eq:awx3} &a^{\dagger}\left[(\partial_j \partial_l(-\Delta_L)^{-2} w_{L,\Lambda})^x\right]a\left[(\partial_j \partial_l(-\Delta_L)^{-2} w_{L,\Lambda})^x\right]\lesssim \Lambda^{-5} (|p|^2+L^{-3}\Lambda^{-1})\numero, \end{align} as quadratic forms on $L^2(\TL)\otimes \mathcal{F}(L^2(\TL))$. For any $j,l,m\in\{1,2,3\}$, (<ref>) follows from $\ad(g)a(g)\leq \|g\|_2^2 \numero$ for $g\in L^2(\TL)$, proceeding otherwise along the same lines as in the proof of (<ref>). To prove (<ref>) we estimate \begin{equation} \|\partial_j \partial_l(-\Delta_L)^{-2} w_{L,\Lambda}\|^2_{L^2(\TL)} = \frac 1{L^3} \sum_{|k|\geq \Lambda \atop k\in \frac{2\pi} L \mathbb{Z}^3} \frac {k_j^2 k_l^2} {|k|^{10}} \lesssim \int_{B_{\Lambda}^c} \frac 1 {|t|^6} dt =\frac{4\pi}{3} \Lambda^{-3}.
\end{equation} If we denote $f_{j,l}^x:=(-\partial_j \partial_l (-\Delta_L)^{-2} w_{L,\Lambda})^x$, in order to show (<ref>) it suffices to prove that \begin{align} \ket{f_{j,l}^x}\bra{f^x_{j,l}} \lesssim \Lambda^{-5}\left(|p|^2+\Lambda^{-1}\right) \quad \text{on} \quad L^2(\TL)\otimes L^2(\TL), \end{align} where the bracket notation refers to the second factor in the tensor product, i.e., the left side is a rank-one projection on the second factor parametrized by $x$, which acts via multiplication on the first factor. For any $\Psi \in L^2(\TL)\otimes L^2(\TL)$ with Fourier coefficients $\Psi_{q,k}$, we have \begin{align} \label{eq:EstimateSup} &\left\langle \Psi \Big|\ket{f_{j,l}^x}\bra{f^x_{j,l}} \Big| \Psi \right \rangle=\int dx \left|\int dy \overline{f_{j,l}^x(y)} \Psi(x,y) \right|^2 =\sum_{q\in \frac{2\pi} L \mathbb{Z}^3} \left| \sum_{k\in \frac{2\pi} L \mathbb{Z}^3\atop |k|\geq \Lambda} \frac{k_j k_l}{L^{3/2}|k|^5} \Psi_{q-k,k}\right|^2\notag\\ &\leq \sum_{q \in \frac{2\pi} L \mathbb{Z}^3} \left(\sum_{k\in \frac{2\pi} L \mathbb{Z}^3\atop |k|\geq \Lambda, \,\,k\neq q} \frac{1}{L^3|k|^6|q-k|^2}\right)\left(\sum_{k\in \frac{2\pi} L \mathbb{Z}^3} |q-k|^2|\Psi_{q-k,k}|^2\right)+\sum_{q \in \frac{2\pi} L \mathbb{Z}^3\atop |q|\geq \Lambda} \frac{|\Psi_{0,q}|^2}{L^3|q|^6}\notag\\ &\leq \sup_{q\in\frac{2\pi} L \mathbb{Z}^3}\left(\sum_{k\in \frac{2\pi} L \mathbb{Z}^3\atop |k|\geq \Lambda, \, \, k\neq q} \frac{L^{-3}}{|k|^6|q-k|^2}\right) \expval{|p|^2}{\Psi}+L^{-3} \Lambda^{-6} \| \Psi\|^2 \nonumber \\ &\lesssim \expval{\Lambda^{-5} (|p|^2+ L^{-3}\Lambda^{-1})}{\Psi}, \end{align} which shows our claim. We only need to justify the last step, i.e., that the supremum appearing in (<ref>) is bounded by $C\Lambda^{-5}$. 
We have \begin{align} \sum_{0\neq k\in \frac{2\pi} L \mathbb{Z}^3\atop |k|\geq \Lambda, \,\, k\neq q} \frac{L^{-3}}{|k|^6|q-k|^2}&\lesssim \int_{B_{\Lambda}^c} \frac 1 {|x|^6|q-x|^2} dx=\Lambda^{-5}\int_{B_1^c}\frac 1 {|x|^6|\Lambda^{-1}q-x|^2} dx\nonumber\\ &\leq \Lambda^{-5} \left(\int_{B_1(\Lambda^{-1}q)}\frac {dx} {|\Lambda^{-1}q-x|^2}+\int_{B_1^c} |x|^{-6} dx\right)\leq \frac {16\pi} 3 \Lambda^{-5}. \end{align} This concludes the proof. We are now able to prove Proposition <ref>. Following the approach by Lieb and Yamazaki in [19], we have \begin{align} \sum_{j=1}^3 [p_j,a(p_j |p|^{-2}\elecphononcouplREST)]=-a(\elecphononcouplREST). \end{align} Applying this three times, we obtain \begin{align} \sum_{j,k,l =1}^3 [p_j,[p_k,[p_l,a(p_jp_kp_l |p|^{-6}\elecphononcouplREST)]]]=-a(\elecphononcouplREST). \end{align} Similarly, \begin{align} \sum_{j,k,l =1}^3 [p_j,[p_k,[p_l,a^{\dagger}(p_jp_kp_l |p|^{-6}\elecphononcouplREST)]]]=a^{\dagger}(\elecphononcouplREST). \end{align} Therefore, if we define \begin{align} B_{jkl}&:=a^{\dagger}(p_jp_kp_l |p|^{-6}\elecphononcouplREST)-a(p_jp_kp_l |p|^{-6}\elecphononcouplREST)\nonumber\\ &\;=a^{\dagger}\left[(\partial_j \partial_k \partial_l (-\Delta_L)^{-3}w_{L,\Lambda})^x\right]-a\left[(\partial_j \partial_k \partial_l (-\Delta_L)^{-3}w_{L,\Lambda})^x\right], \end{align} we have \begin{align} a^{\dagger}(\elecphononcouplREST)+a(\elecphononcouplREST)=\Phi(\elecphononcouplREST)=\sum_{j,k,l =1}^3 [p_j,[p_k,[p_l,B_{jkl}]]]. \end{align} Using that $B_{jkl}^{\dagger}=-B_{jkl}$ and that $B_{jkl}$ is invariant under exchange of indices, we arrive at \begin{align} \label{eq:lyexpression} \Phi(\elecphononcouplREST)=\sum_{j,k,l =1}^3 \left(p_j p_k [p_l,B_{jkl}]+[B_{jkl}^{\dagger},p_l]p_jp_k\right) - 2\sum_{j,k,l=1}^3 \left(p_j p_k B_{jkl} p_l +p_l B_{jkl}^{\dagger} p_j p_k \right).
\end{align} By the Cauchy–Schwarz inequality, we have for any $\lambda>0$ \begin{align} \label{eq:boundB1} -p_j p_k B_{jkl} p_l -p_l B_{jkl}^{\dagger} p_j p_k \leq \lambda p_j^2 p_k^2+ \lambda^{-1} p_l B_{jkl}^{\dagger} B_{jkl} p_l. \end{align} Moreover, using (<ref>) and the rescaled commutation relations (<ref>) satisfied by $a$ and $a^{\dagger}$, we have \begin{align} \label{eq:boundB2} B_{jkl}^{\dagger} B_{jkl}\leq C\left(4\mathbb{N}+2\alpha^{-2}\right)\Lambda^{-5}. \end{align} Using (<ref>) and (<ref>) and picking $\lambda=C^{1/2}\Lambda^{-5/2}$ we conclude that \begin{align} \label{eq:lzboundII} -2\sum_{j,k,l=1}^3 \left(p_j p_k B_{jkl} p_l +p_l B_{jkl}^{\dagger} p_j p_k \right)\lesssim \Lambda^{-5/2}\left(|p|^4+3 |p|^2(4\mathbb{N}+2\alpha^{-2})\right). \end{align} We now define \begin{align} C_{jk}&:= \sum_{l=1}^3 [p_l,B_{jkl}]=a^{\dagger}(p_jp_k|p|^{-4}\elecphononcouplREST)+a(p_jp_k|p|^{-4}\elecphononcouplREST)\nonumber\\ &\;=a^{\dagger}\left[(\partial_j \partial_k (-\Delta_L)^{-2}w_{L,\Lambda})^x\right]+a\left[(\partial_j \partial_k (-\Delta_L)^{-2}w_{L,\Lambda})^x\right]=C_{jk}^{\dagger}. \end{align} Using (<ref>), (<ref>) and the Cauchy–Schwarz inequality, we have for any $\lambda>0$ \begin{align} p_jp_k C_{jk}+C_{jk}p_j p_k \leq \lambda p_j^2 p_k^2 +\lambda^{-1} C_{jk}^2. \end{align} Moreover, \begin{align} C_{jk}^2&\leq 4 a^{\dagger}(p_jp_k |p|^{-4}\elecphononcouplREST) a(p_jp_k |p|^{-4}\elecphononcouplREST)+2\alpha^{-2} \|p_jp_k |p|^{-4} \elecphononcouplREST\|_2^2\nonumber\\ &\lesssim \Lambda^{-5} (|p|^2+\Lambda^{-1}) \mathbb{N}+\alpha^{-2} \Lambda^{-3}. \end{align} Picking $\lambda=\Lambda^{-5/2}+\alpha^{-1}\Lambda^{-3/2}$, we therefore conclude that \begin{align}\nonumber &\sum_{j,k,l =1}^3 \left(p_j p_k [p_l,B_{jkl}]+[B_{jkl}^{\dagger},p_l]p_jp_k\right) \\ & \lesssim (\Lambda^{-5/2}+\alpha^{-1}\Lambda^{-3/2})[|p|^4+\mathbb{N}(|p|^2+ L^{-3}\Lambda^{-1})+1].
\label{eq:lzboundI} \end{align} Applying (<ref>) and (<ref>) in (<ref>), we finally obtain \begin{align} \Phi(\elecphononcouplREST)&\lesssim (\Lambda^{-5/2}+\alpha^{-1}\Lambda^{-3/2})\left[|p|^4+\mathbb{N}(|p|^2+ L^{-3}\Lambda^{-1})+1\right] \notag\\ & \quad +\Lambda^{-5/2}\left(|p|^4+3 |p|^2(4\mathbb{N}+2\alpha^{-2})\right)\notag\\ &\lesssim (|p|^2+\mathbb{N}+1)^2(\Lambda^{-5/2}+\alpha^{-1} \Lambda^{-3/2}), \end{align} as claimed. §.§.§ Gross Transformation The bound (<ref>), derived in Proposition <ref>, is not immediately useful as it stands. In order to relate the r.h.s. of (<ref>) to the square of the Fröhlich Hamiltonian $\HL$ in (<ref>), we shall apply a Gross transformation [13, 23]. For a real-valued $f\in H^1(\TL)$, recalling that $f^x(\,\cdot\,)=f(\,\cdot\,-x)$, we consider the following unitary transformation on $L^2(\TL)\otimes \mathcal{F}$ \begin{align}\label{def:U} U:=e^{\alpha^2\left(a(f^x)-\ad(f^x)\right)}, \end{align} where $U$ is understood to act as a `multiplication' with respect to the $x$ variable. For any $g\in L^2(\TL)$, we have \begin{align} \label{eq:Ua} Ua(g)U^{\dagger}=a(g)+\bra{g}\ket{f^x} \quad \text{and} \quad U\ad(g)U^{\dagger}=\ad(g)+\bra{f^x}\ket{g}, \end{align} and therefore \begin{align} U\numero U^{\dagger} = \numero+\Phi(f^x)+\|f\|_2^2. \end{align} Moreover, \begin{align} UpU^{\dagger}=p+\alpha^2\Phi(p f^x)=p+\alpha^2\Phi[(i\nabla f)^x]. \end{align} This implies that \begin{align} U p^2 U^{\dagger}=p^2+\alpha^4 (\Phi[(i\nabla f)^x])^2+2\alpha^2 p\cdot a[(i\nabla f)^x]+2\alpha^2 \ad[(i\nabla f)^x]\cdot p+\alpha^2 \Phi[(-\Delta_L f)^x]. \end{align} Therefore, we also have \begin{align} \label{eq:UHUd} U \HL U^{\dagger}& = |p|^2+\alpha^4 (\Phi[(i\nabla f)^x])^2+2\alpha^2 p\cdot a[(i\nabla f)^x]+2\alpha^2 \ad[(i\nabla f)^x]\cdot p\nonumber\\ & \quad + \Phi[(-\alpha^2\Delta_L f +f-v_L)^x]+\numero +\|f\|^2_2 -2\bra{v_L}\ket{f}.
\end{align} We denote \begin{align} \label{eq:gxexpr} g= -\alpha^2\Delta_L f +f-v_L, \end{align} and we shall pick \begin{align} \label{eq:fxexpr} f(y)&=\left[(-\alpha^2 \Delta_L +1)^{-1}(-\Delta_L)^{-1/2} \chi_{B_{K^2}^c}(-\Delta_L)\right](0,y)\nonumber\\ &=\sum_{|k|\geq K \atop k\in \frac {2\pi} L \mathbb{Z}^3} \frac 1 {(\alpha^2 |k|^2 +1)|k|} \frac{e^{-ik\cdot y}}{L^3} \end{align} for some $K>0$. Recalling (<ref>), this implies that \begin{align} \label{gxexplexpr} g(y)= -v_{L,K}(y)=-\sum_{0\neq k\in \frac{2\pi} L \mathbb{Z}^3\atop |k|< K} \frac 1 {|k|} \frac{e^{-i k \cdot y}}{L^3}. \end{align} For simplicity we suppress the dependence on $K$ in the notation for $f$ and $g$, but we will keep track of the parameter $K$ by denoting the operator $U$ related to this choice of $f$ (depending on $\alpha$ and $K$) via (<ref>) by $U^K_{\alpha}$. We shall need the following estimates for norms involving $f$ and $g$. We have \begin{align} \label{eq:estgross1} &\|g\|_2^2=\sum_{0\neq k\in \frac {2\pi}{L} \mathbb{Z}^3\atop |k|<K}\frac 1 {L^3|k|^2}\lesssim K,\\ \label{eq:estgross2} &\|f\|_2^2=\sum_{0\neq k\in \frac {2\pi}{L} \mathbb{Z}^3\atop |k|\geq K} \frac 1 {L^3|k|^2(\alpha^2|k|^2+1)^2}\lesssim \alpha^{-4}\int_{B_K^c} \frac 1 {|t|^6} dt \lesssim \alpha^{-4} K^{-3},\\ \label{eq:estgross3} &\bra{v_L}\ket{f}=\sum_{k\in \frac {2\pi}{L} \mathbb{Z}^3\atop |k|\geq K}\frac 1 {L^3|k|^2(\alpha^2|k|^2+1)}\lesssim \alpha^{-2} \int_{B_K^c} \frac 1 {|t|^4} dt\lesssim \alpha^{-2} K^{-1},\\ \label{eq:estgross4} &\|\nabla f\|_2^2=\sum_{k\in \frac {2\pi}{L} \mathbb{Z}^3\atop |k|\geq K} \frac 1 {L^3(\alpha^2|k|^2+1)^2}\lesssim \alpha^{-4} \int_{B_K^c} \frac 1 {|t|^4} dt\lesssim \alpha^{-4} K^{-1}. \end{align} We now state and prove the main result of this subsection, the proof of which follows the approach used in [12] for the analogous statement on $\Rtre$, and in [9] for the analogous statement on a domain with Dirichlet boundary conditions. 
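Each of these lattice sums is controlled by comparison with the corresponding radial integral over $B_K^c$. As a sanity check of the second estimate, the sketch below evaluates the sum defining $\|f\|_2^2$ numerically and confirms the $K^{-3}$ decay (for concreteness $L=2\pi$, so the lattice is $\mathbb{Z}^3$ and the factor $L^{-3}$ is dropped, and $\alpha=1$; the cutoff and constants are purely illustrative):

```python
import itertools

def f_norm_sq(K, alpha=1.0, cutoff=40):
    # lattice analogue of ||f||_2^2 for L = 2*pi (lattice Z^3, L^{-3} dropped):
    # sum over k in Z^3 with |k| >= K of 1 / (|k|^2 (alpha^2 |k|^2 + 1)^2)
    total = 0.0
    for k in itertools.product(range(-cutoff, cutoff + 1), repeat=3):
        k2 = k[0] ** 2 + k[1] ** 2 + k[2] ** 2
        if k2 >= K * K:
            total += 1.0 / (k2 * (alpha ** 2 * k2 + 1) ** 2)
    return total

s5, s10 = f_norm_sq(5), f_norm_sq(10)
print(s5, s10, s5 / s10)  # the ratio should be close to (10/5)^3 = 8
```

Doubling $K$ shrinks the sum by roughly a factor $2^3=8$, in line with $\|f\|_2^2\lesssim \alpha^{-4}K^{-3}$.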
For any $\varepsilon>0$ there exist $K_{\varepsilon}>0$ and $C_{\varepsilon}>0$ such that, for all $\alpha\gtrsim 1$ and any $\Psi\in L^2(\TL)\otimes \mathcal{F}$ in the domain of $|p|^2+\mathbb{N}$ \begin{align} \label{eq:Grossfinal} (1-\varepsilon)\|(|p|^2+\mathbb{N})\Psi\|-C_{\varepsilon}\|\Psi\|\leq\|U^{K_{\varepsilon}}_{\alpha} \HL (U^{K_{\varepsilon}}_{\alpha})^{\dagger}\Psi\|\leq(1+\varepsilon)\|(|p|^2+\mathbb{N})\Psi\|+C_{\varepsilon}\|\Psi\|. \end{align} We shall use the following standard (given the rescaled commutation relations satisfied by $a$ and $\ad$) properties, which hold for any $\Psi \in \mathcal{F}$, any $f\in L^2(\TL)$ and any function $h: [0,\infty)\to \mathbb{R}$ \begin{align} &\|a(f)\Psi\|\leq \|f\|_2\|\sqrt \numero \Psi\|, \quad \|\ad(f)\Psi\|\leq \|f\|_2\|\sqrt{\numero+\alpha^{-2}} \Psi\|,\\ &h(\numero+\alpha^{-2})a=a h(\numero),\quad h(\numero) \ad=\ad h(\numero+\alpha^{-2}). \end{align} It is then straightforward, with the aid of the estimates (<ref>), (<ref>), (<ref>) and (<ref>), to show, for any $\Psi \in L^2(\TL)\otimes \mathcal{F}$, any $\delta>0$ and any $K>0$, that \begin{align} \label{eq:Grossest1} &\alpha^4\|(\Phi[(i\nabla f)^x])^2 \Psi\|\lesssim \alpha^4 \|\nabla f\|^2 \|(\numero+\alpha^{-2}) \Psi\|\lesssim K^{-1} \|(\numero+\alpha^{-2}) \Psi\|,\\ \label{eq:Grossest2} & \|\Phi(g^x) \Psi\|\lesssim K^{1/2} \|\sqrt{\numero+\alpha^{-2}} \Psi\|\lesssim \delta \|(\numero+\alpha^{-2})\Psi\|+\delta^{-1} K \|\Psi\|,\\ \label{eq:Grossest3} &\alpha^2\|\ad[(i\nabla f)^x]\cdot p\Psi\|\lesssim K^{-1/2}\|\sqrt{\numero+\alpha^{-2}}\sqrt{|p|^2} \Psi\|\lesssim K^{-1/2}\|(|p|^2+\numero+\alpha^{-2})\Psi\|. \end{align} It remains to bound the term \begin{align} \|\alpha^2 p \cdot a[(i\nabla f)^x] \Psi\|\leq \|\alpha^2 a[(i\nabla f)^x] \cdot p \Psi\|+\|a[(-\alpha^2 \Delta_L f)^x] \Psi\|=:(\mathrm{I})+(\mathrm{II}).
\end{align} As in (<ref>), we can easily bound \begin{align} \label{eq:GrosslasttermI} (\mathrm{I})\lesssim K^{-1/2}\|(|p|^2+\numero+\alpha^{-2})\Psi\|. \end{align} By (<ref>) and (<ref>) and recalling (<ref>) and (<ref>), we have \begin{align} a[(-\alpha^2 \Delta_L f)^x]=a[(g-f+v_L)^x]=-a(f^x)+a(w_{L,K}^x). \end{align} With the same arguments used in the proof of Lemma <ref> we obtain \begin{align} \|a(w_{L,K}^x)\Psi\|\lesssim K^{-1/2}\|\sqrt{\numero(|p|^2+K^{-1})}\Psi\|, \end{align} and therefore, using (<ref>) to bound $\|a(f^x) \Psi\|$, we arrive at \begin{align} \label{eq:GrosslasttermII} (\mathrm{II})&\lesssim \alpha^{-2}K^{-3/2}\|\sqrt{\numero} \Psi\|+K^{-1/2}\|\sqrt{\numero (|p|^2+K^{-1})} \Psi\|\nonumber\\ &\lesssim \alpha^{-2}K^{-3/2}(\|(\numero+\alpha^{-2})\Psi\|+\|\Psi\|)+K^{-1/2}\|(|p|^2+\numero+K^{-1})\Psi\|. \end{align} Combining (<ref>)–(<ref>), (<ref>), (<ref>), (<ref>) and (<ref>) with (<ref>), we obtain, for any $K\geq 1$, \begin{align} &\|U^K_{\alpha}\HL (U^{K}_{\alpha})^{\dagger}\Psi\|\leq [1+C(K^{-1/2}+\delta)]\|(|p|^2+\mathbb{N})\Psi\|+C(\delta^{-1} K+3\alpha^{-2}K^{-1})\|\Psi\|,\\ &\|U^K_{\alpha}\HL (U^K_{\alpha})^{\dagger}\Psi\|\geq [1-C(K^{-1/2}+\delta)]\|(|p|^2+\mathbb{N})\Psi\|-C(\delta^{-1} K+3\alpha^{-2}K^{-1})\|\Psi\|, \end{align} which allows us to conclude the proof by picking $K_{\varepsilon}\sim \varepsilon^{-2}$, $\delta \sim \varepsilon$ and $C_{\varepsilon}\sim \varepsilon^{-3}$. An important consequence of Proposition <ref> is that the ground state energy of $\HL$ is uniformly bounded for $\alpha \gtrsim 1$. §.§.§ Final Estimates for Cut-off Hamiltonian With Propositions <ref> and <ref> at hand, we are finally ready to prove the main result of this section. Note that all the estimates performed in this section are actually independent of $L$. Let \begin{align} \label{eq:Hlambda} \HL^{\Lambda}=-\Delta_L-\Phi(\elecphononcouplCUT)+\numero, \end{align} where $v_{L,\Lambda}$ is defined in (<ref>).
Then, for any $\Lambda\gtrsim 1$ and $\alpha \gtrsim 1$, \begin{align} \label{eq:cutofffinal} \infspec{\HL}-\infspec{\HL^{\Lambda}}\gtrsim -(\Lambda^{-5/2}+\alpha^{-1}\Lambda^{-3/2}+\alpha^{-2}\Lambda^{-1}). \end{align} Note that for the error term introduced in (<ref>) to be negligible compared to $\alpha^{-2}$ it suffices to pick $\Lambda \gg \alpha^{4/5}$. We begin by recalling that Proposition <ref> implies that \begin{align} a(\elecphononcouplREST)+\ad(\elecphononcouplREST)=\Phi(\elecphononcouplREST)\lesssim (\Lambda^{-5/2}+\alpha^{-1}\Lambda^{-3/2})(|p|^2+\numero+1)^2. \end{align} Applying the unitary Gross transformation $U^K_{\alpha}$ introduced in the previous subsection (with $f$ defined in (<ref>) and $K$ large enough for Proposition <ref> to hold for some $0<\varepsilon<1$) to both sides of the previous inequality and recalling (<ref>), we obtain \begin{align} \label{eq:phiwU} (U^K_{\alpha})^{\dagger}\Phi(\elecphononcouplREST) U^K_{\alpha}&=\Phi(\elecphononcouplREST)+2 \bra{f}\ket{w_{L,\Lambda}}\nonumber\\ &\lesssim (\Lambda^{-5/2}+\alpha^{-1}\Lambda^{-3/2})(U^K_{\alpha})^{\dagger}(|p|^2+\numero+1)^2U^K_{\alpha}. \end{align} Proposition <ref> implies that \begin{align} \label{eq:hc2} (U^K_{\alpha})^{\dagger}(|p|^2+\numero+1)^2U^K_{\alpha}\lesssim (\HL+C)^2, \end{align} where $C$ is a positive constant (independent of $\alpha$ for $\alpha \gtrsim 1$). Recalling the definitions of $f$ and $w_{L,\Lambda}$ we also have \begin{align} |\bra{f}\ket{w_{L,\Lambda}}|\leq \sum_{0\neq k\in \frac {2\pi}{L} \mathbb{Z}^3\atop |k|>\Lambda} \frac 1{L^3(\alpha^2|k|^2+1)|k|^2}\lesssim \alpha^{-2}\Lambda^{-1}, \end{align} and this allows us to conclude, in combination with (<ref>) and (<ref>), that \begin{align} \Phi(\elecphononcouplREST)\lesssim (\Lambda^{-5/2}+\alpha^{-1}\Lambda^{-3/2}+\alpha^{-2}\Lambda^{-1})(\HL+C)^2. 
\end{align} Therefore, \begin{align} \expval{\HL}{\Psi}\geq\expval{\HL^{\Lambda}}{\Psi}-(\Lambda^{-5/2}+\alpha^{-1}\Lambda^{-3/2}+\alpha^{-2}\Lambda^{-1})\expval{(\HL+C)^2}{\Psi}. \end{align} By Remark <ref>, to compute the ground state energy of $\HL$ it is clearly sufficient to restrict to the spectral subspace relative to $|\HL|\leq C$ for some suitable $C$, which then yields (<ref>). This concludes the proof and the section. §.§ Final Lower Bound In this section we show the validity of the lower bound in (<ref>), thus completing the proof of Theorem <ref>. With Proposition <ref> at hand, we have good estimates on the cost of substituting $\HL$ with $\HL^{\Lambda}$ and, in particular, we know that the difference between the ground state energies of the two is negligible for $\Lambda\gg \alpha^{4/5}$. We are thus left with the task of giving a lower bound on $\infspec \HL^{\Lambda}$. While the previous steps in the lower bound follow closely the analogous strategy in [9], the translation invariance of our model leads to substantial complications in the subsequent steps, and the analysis given in this subsection is the main novel part of our proof. In contrast to the case considered in [9], the set of minimizers $\MinLf=\Omega_L(\varphi_L)$ is a three-dimensional manifold, and in order to decouple the resulting zero-modes of the Hessian of the Pekar functional we find it necessary to introduce a suitable diffeomorphism that 'flattens' the manifold of minimizers and the region close to it. Special attention also has to be paid to the metric in which this closeness is measured, necessitating the introduction of the family of norms in (<ref>). We emphasize that the non-uniformity in $L$ also results from the subsequent analysis, where the compactness of the resolvent of $-\Delta_L$ enters in an essential way.
Let $\Pi$ denote the projection \begin{align} \label{eq:defPI} \ran \Pi=\spn\left\{ L^{-3/2}e^{i k\cdot x}, \,\, k \in \frac {2\pi}L \mathbb{Z}^3, \,\,|k|\leq \Lambda\right\}, \quad N=\dim_{\mathbb{C}} \ran \Pi. \end{align} For later use we note that \begin{align} \label{eq:Nlambdaasymp} N \sim \left(\frac L {2\pi}\right)^3 \Lambda^3 \quad \text{as} \,\, \Lambda \to \infty. \end{align} The Fock space $\mathcal{F}(L^2(\TL))$ naturally factorizes into the tensor product $\mathcal{F}(\Pi L^2(\TL))\otimes \mathcal{F}((\unit-\Pi) L^2(\TL))$ and $\HL^{\Lambda}$ is of the form $\mathbb{A}\otimes \unit + \unit \otimes \mathbb{N}^>$, with $\mathbb{A}$ acting on $L^2(\TL)\otimes \mathcal{F}(\Pi L^2(\TL))$ and $\mathbb{N}^>$ being the number operator on $\mathcal{F}((\unit-\Pi) L^2(\TL))$. In particular, $\infspec \HL^{\Lambda}=\infspec \mathbb{A}$. As in Section <ref>, we can, for any $L^2$-orthonormal basis of real-valued functions $\{f_n\}$ of $\ran \Pi$, identify $\mathcal{F}(\Pi L^2(\TL))$ with $L^2(\mathbb{R}^N)$ through the $Q$-space representation (see [25]). In particular, any real-valued $\varphi \in \ran \Pi$ corresponds to a point $\lambda \in \R^N$ via \begin{align} \label{eq:identification} \varphi=\Pi \varphi= \sum_{n=1}^N \lambda_n f_n\cong(\lambda_1,\dots,\lambda_N)=\lambda. \end{align} Note that, compared to Section <ref>, we are using a different choice of $\Pi$ here for the decomposition $L^2(\TL)=\ran \Pi \oplus (\ran \Pi)^{\perp}$. In the representation given by (<ref>), the operator $\mathbb{A}$ is given by \begin{align} \mathbb{A}=-\Delta_L+V_{\varphi}(x)+\sum_{n=1}^N \left(-\frac 1 {4\alpha^4} \partial_{\lambda_n}^2+\lambda_n^2-\frac 1 {2\alpha^2}\right) \end{align} on $L^2(\TL)\otimes L^2(\mathbb{R}^N)$. 
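Each $\lambda_n$-mode in $\mathbb{A}$ is a harmonic oscillator $-\frac{1}{4\alpha^4}\partial_{\lambda_n}^2+\lambda_n^2$, whose ground state energy is exactly $\frac{1}{2\alpha^2}$ and thus cancels the subtracted constant. This zero-point energy can be checked numerically with a finite-difference discretization (a sketch assuming NumPy is available; grid parameters are illustrative):

```python
import numpy as np

def ground_energy(alpha, half_width=6.0, n=800):
    # smallest eigenvalue of -1/(4 alpha^4) d^2/dlambda^2 + lambda^2 on
    # [-half_width, half_width] (Dirichlet ends), second-order differences
    x = np.linspace(-half_width, half_width, n)
    h = x[1] - x[0]
    a = 1.0 / (4.0 * alpha ** 4)
    H = (np.diag(2.0 * a / h ** 2 + x ** 2)
         - a / h ** 2 * (np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)))
    return np.linalg.eigvalsh(H)[0]

for alpha in (1.0, 2.0):
    # compare the numerical ground state energy with 1/(2 alpha^2)
    print(alpha, ground_energy(alpha), 1.0 / (2.0 * alpha ** 2))
```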
For a lower bound, we can replace $h_{\varphi}=-\Delta_L+V_{\varphi}$ with the infimum of its spectrum $e(\varphi)$, obtaining \begin{align} \infspec \HL^{\Lambda}\geq \infspec \mathbb{K}, \end{align} where $\mathbb{K}$ is the operator on $L^2(\mathbb{R}^N)$ defined as \begin{align} \label{eq:defK} \mathbb{K}=-\frac 1 {4\alpha^4} \sum_{n=1}^N \partial_{\lambda_n}^2- \frac N {2 \alpha^2}+\FL(\varphi)=\frac 1 {4\alpha^4} (-\Delta_{\lambda})- \frac N {2 \alpha^2}+\FL(\lambda), \end{align} where $\FL$, which is understood as a multiplication operator in (<ref>), can be seen as a function of $\varphi\in \spn_{\R}\{f_j\}_{j=1}^N$ or $\lambda\in \R^N$ through the identification (<ref>). Using IMS localization we shall split $\mathbb{R}^N$ into two regions, one localized around the surface of minimizers of $\FL$, i.e., $\MinLf=\Omega_L(\varphi_L)$, and the other localized away from it. On each of these regions we can bound $\FL$ from below with the estimates contained in Proposition <ref> and in Corollary <ref>, respectively. Because of the prefactor $\alpha^{-4}$ in front of $-\Delta_{\lambda}$ the outer region turns out to be negligible compared to the inner one (at least if we define the inner and outer region with respect to an appropriate norm). At the same time, employing an appropriate diffeomorphism, the inner region can be treated as if $\Omega_L(\varphi_L)$ were a flat torus, leading to a system of harmonic oscillators whose ground state energy can be calculated explicitly. We start by specifying the norm with respect to which we measure closeness to $\Omega_L(\varphi_L)$. Recall the definition of the $W_T$-norms given in (<ref>). Note that for $T\geq \Lambda$ the $L^2$-norm coincides with the $W_T$-norm on $\ran \Pi$, which makes $0<T< \Lambda$ the relevant regime for our discussion.
In fact, we shall pick \begin{align} \label{eq:TlambdaRegime} 1\ll T \ll \Lambda^{2/3}, \quad \alpha^{4/5}\ll\Lambda, \end{align} where $T\gg 1$ is needed for the inner region to yield the right contribution, and $T\ll \Lambda^{2/3}$ ensures that the outer region contribution is negligible. We proceed by introducing an IMS-type localization with respect to $\|\cdot\|_{W_T}$. Let $\chi:\mathbb{R}_+\to [0,1]$ be a smooth function such that $\chi(t)=1$ for $t\leq 1/2$ and $\chi(t)=0$ for $t\geq 1$. Let $\varepsilon>0$ and let $j_1$ and $j_2$ denote the multiplication operators on $L^2(\mathbb{R}^N)$ \begin{align} j_1=\chi\left(\varepsilon^{-1} \text{dist}_{W_T}(\varphi,\Omega_L(\varphi_L))\right), \quad j_2=\sqrt{1-j_1^2}. \end{align} Then \begin{align}\label{r1} \mathbb{K}=j_1 \mathbb{K} j_1+j_2\mathbb{K} j_2-\mathbb{E}, \end{align} where $\mathbb{E}$ is the IMS localization error given by \begin{align} \mathbb{E}=\frac 1 {4\alpha^4} \sum_{n=1}^N \left(|\partial_{\lambda_n} j_1|^2+|\partial_{\lambda_n} j_2|^2\right), \end{align} which is estimated in the following lemma. We have \begin{align}\label{r2} \mathbb{E}\lesssim \alpha^{-4} \varepsilon^{-2}. \end{align} To bound $\mathbb{E}$ we apply Lemma <ref>, which states that for $\varepsilon$ sufficiently small, for any $\varphi\in \text{supp} j_1$, there exists a unique $y_{\varphi}\in \TL$ such that \begin{equation} \dist_{W_T}^2(\varphi,\Omega_L(\varphi_L))=\expval{W_T}{\varphi-\varphi_L^{y_{\varphi}}}. \end{equation} Likewise, for any $n\in \{1,\dots,N\}$ and any $h$ sufficiently small there exists a unique $y_{n,h}\in \TL$ such that \begin{equation} \dist_{W_T}^2(\varphi+hf_n,\Omega_L(\varphi_L))=\expval{W_T}{\varphi+hf_n-\varphi_L^{y_{n,h}}}. \end{equation} It is easy to see, using again Lemma <ref>, that $\lim_{h\to 0}y_{n,h}= y_{\varphi}$ for any $n$.
Therefore, using that $\dist_{W_T}(\varphi+hf_n,\Omega_L(\varphi_L))\leq \|\varphi+hf_n-\varphi_L^{y_{\varphi}}\|_{W_T}$ and $\dist_{W_T}(\varphi,\Omega_L(\varphi_L))\leq \|\varphi-\varphi_L^{y_{n,h}}\|_{W_T}$, we arrive at \begin{align} & 2\bra{f_n}W_T\ket{\varphi-\varphi_L^{y_{\varphi}}}=\lim_{h\to 0}2\bra{f_n}W_T\ket{\varphi-\varphi_L^{y_{n,h}}}\notag\\ &\leq\lim_{h\to 0} h^{-1}\left(\dist_{W_T}^2(\varphi+hf_n,\Omega_L(\varphi_L))-\dist_{W_T}^2(\varphi,\Omega_L(\varphi_L))\right)\notag \\ &\leq 2\bra{f_n}W_T\ket{\varphi-\varphi_L^{y_{\varphi}}}, \end{align} which shows that \begin{align} \partial_{\lambda_n} \dist^2_{W_T}(\varphi, \Omega_L(\varphi_L))=2\bra{f_n}W_T\ket{\varphi-\varphi_L^{y_{\varphi}}}. \end{align} Using that $|\chi'|, \left|\left[(1-\chi^2)^{1/2}\right]'\right|\lesssim \unit_{[1/2,1]}$, for $k=1,2$ we obtain \begin{align}\nonumber \left|\left[\partial_{\lambda_n} j_k\right](\varphi)\right|^2&\lesssim \varepsilon^{-4}\left|\partial_{\lambda_n} \dist^2_{W_T}(\varphi, \Omega_L(\varphi_L))\right|^2\unit_{\left\{\dist_{W_T}(\varphi, \Omega_L(\varphi_L))\leq \varepsilon\right\}}\\ &\lesssim \varepsilon^{-4} |\bra{f_n}W_T\ket{\varphi-\varphi_L^{y_{\varphi}}}|^2\unit_{\left\{\dist_{W_T}(\varphi, \Omega_L(\varphi_L))\leq \varepsilon\right\}}. \end{align} Summing over $n$, using that $\|W_T\|\leq 1$ and that $\{f_n\}$ is an orthonormal system, we arrive at (<ref>). Thus, the localization error is negligible as long as $\varepsilon \gg \alpha^{-1}$. Hence, we are left with the task of providing lower bounds for $j_1 \mathbb{K} j_1$ and $j_2 \mathbb{K} j_2$ under the constraint $\varepsilon \gg \alpha^{-1}$. We carry out these estimates in the next two subsections, <ref> and <ref>. Finally, in Section <ref>, we combine these bounds to prove the lower bound in (<ref>).
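The mechanism behind the lemma above is the standard IMS phenomenon: the localization error involves only first derivatives of the cutoff functions, so it stays bounded even when the localized operator itself has very large norm. The following toy computation illustrates this for a discretized one-dimensional Laplacian (a sketch assuming NumPy; the partition of unity and the grid are illustrative):

```python
import numpy as np

n = 400
h = 1.0 / (n - 1)
x = np.linspace(0.0, 1.0, n)

# toy Hamiltonian: discrete Dirichlet Laplacian H = -d^2/dx^2 on [0, 1]
H = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h ** 2

# smooth quadratic partition of unity j1^2 + j2^2 = 1 via an angle profile
t = np.clip((x - 0.4) / 0.2, 0.0, 1.0)
theta = 0.5 * np.pi * (3.0 * t ** 2 - 2.0 * t ** 3)
J1 = np.diag(np.cos(theta))
J2 = np.diag(np.sin(theta))

# IMS residual R: H = J1 H J1 + J2 H J2 + R, with ||R|| of size max |theta'|^2,
# uniformly in the grid spacing, while ||H|| ~ h^{-2} blows up
R = H - J1 @ H @ J1 - J2 @ H @ J2
norm_R = np.linalg.norm(R, 2)
norm_H = np.linalg.norm(H, 2)
print(norm_R, norm_H)
```

The residual norm is tiny compared to $\|H\|$, mirroring the bound $\mathbb{E}\lesssim \alpha^{-4}\varepsilon^{-2}$, which is negligible against the operators being localized.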
§.§.§ Bounds on $j_1 \mathbb{K} j_1$ Let us look closer at the intersection of the $\varepsilon$-neighborhood of $\Omega_L(\varphi_L)$ with respect to the $W_T$-norm with $\ran \Pi$, i.e., the set \begin{align} \Tneigh:=\{\varphi\in \ran \Pi \,|\, \bar{\varphi}=\varphi, \,\, \text{dist}_{W_T}(\varphi,\Omega_L(\varphi_L))\leq \varepsilon\}=\text{supp} j_1 \cap \ran \Pi. \end{align} In the following we shall show that this set is, for $\varepsilon$ small enough, a tubular neighborhood of $\Pi\Omega_L(\varphi_L)$, which can be mapped via a suitable diffeomorphism (given in Definition <ref>) to a tubular neighborhood of a flat torus. Since $\varphi \in \ran \Pi$ and $\Pi$ commutes both with $W_T$ and with the transformation $g\mapsto g^y$ for any $y\in \TL$, we have \begin{align} \dist^2_{W_T}(\varphi,\Omega_L(\varphi_L))=\|(\unit-\Pi)\varphi_L\|^2_{W_T}+\dist^2_{W_T}(\varphi,\Omega_L(\Pi\varphi_L)). \end{align} This implies that $\Tneigh$ is non-empty if and only if \begin{align} \varepsilon\geq \|(\unit-\Pi)\varphi_L\|_{W_T}, \quad \text{and we set} \quad r_{T,\varepsilon}:=\left(\varepsilon^2-\|(\unit-\Pi)\varphi_L\|^2_{W_T}\right)^{1/2}. \end{align} Since $\varphi_L\in C^{\infty}(\TL)$, $r_{T,\varepsilon}>0$ as long as \begin{align} \label{eq:restr1} \varepsilon \gtrsim_L \Lambda^{-h} \end{align} for some $h>0$ and $\Lambda$ sufficiently large. In particular, (<ref>) is satisfied with $h=5/4$ for $\alpha$ large enough since, as discussed above, we need to pick $\varepsilon\gg \alpha^{-1}$ and $\Lambda \gg \alpha^{4/5}$ for the IMS and the cutoff errors to be negligible. Lemma <ref> implies that any $\varphi \in \Tneigh$, for $\varepsilon\leq \varepsilon'_L$ (independently of $T$ and $N$), admits a unique $W_T$-projection $\varphi_L^{y_{\varphi}}$ onto $\Omega_L(\varphi_L)$ and \begin{align} \label{eq:Decomp} \varphi=\varphi_L^{y_{\varphi}}+(v_{\varphi})^{y_{\varphi}}, \quad \text{with} \quad v_{\varphi} \in (\spn \{\Pi W_T \partial_j \varphi_L\}_{j=1}^3)^{\perp_{L^2}}.
\end{align} Since $W_T$ and $\Pi$ commute, $\Omega_L(\varphi_L)$ is `parallel' to $\ran \Pi$ with respect to $\|\cdot\|_{W_T}$, i.e., $\dist_{W_T}(\ran \Pi, \varphi_L^y)$ is independent of $y$ and the $W_T$-projection of $\varphi_L^y$ onto $\Pi$ is simply $\Pi (\varphi_L^y)=(\Pi \varphi_L)^y$. Therefore, for $\varepsilon\leq \varepsilon'_L$, any $\varphi\in \Tneigh$ admits a unique $W_T$-projection $(\Pi \varphi_L)^{y_\varphi}$ onto $\Omega_L(\Pi\varphi_L)$ and (<ref>) induces a unique decomposition of the form \begin{align} \label{eq:ProjDecomp} \varphi=(\Pi \varphi_L)^{y_{\varphi}}+(\eta_{\varphi})^{y_{\varphi}},\quad \text{with} \quad \eta_{\varphi}\in (\spn \{\Pi W_T \partial_j \varphi_L\}_{j=1}^3)^{\perp_{L^2}}, \,\, \|\eta_{\varphi}\|_{W_T}\leq r_{T,\varepsilon}, \end{align} where $\eta_{\varphi}=\Pi v_{\varphi}$ (note that $(\unit-\Pi)v_{\varphi}=-(\unit-\Pi)\varphi_L$). This allows us to introduce the following diffeomorphism, which is a central object in our discussion. It maps $\Tneigh$ onto a tubular neighborhood of a flat torus. We shall call this diffeomorphism Gross coordinates, as it is inspired by an approach introduced in [14]. [Gross coordinates] Setting \begin{align} \label{eq:BTLedef} B^{T,\Lambda}_{\varepsilon}:=\left\{\eta\in (\spn \{\Pi W_T \partial_j \varphi_L\}_{j=1}^3)^{\perp_{L^2}}\cap \ran \Pi\,\,|\,\,\|\eta\|_{W_T}\leq r_{T,\varepsilon}\right\}\subset \ran \Pi, \end{align} we define the Gross coordinates map $u$ as \begin{align} \label{eq:uinversedef} u:\Tneigh& \rightarrow \TL \times B^{T,\Lambda}_{\varepsilon},\nonumber\\ \varphi& \mapsto (y_{\varphi},\eta_{\varphi}), \end{align} where $y_{\varphi}$ and $\eta_{\varphi}$ are defined through the decomposition (<ref>).
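The content of the definition is a tubular-neighborhood change of coordinates: a point near the orbit of minimizers is recorded by its nearest orbit point together with the transversal remainder. A minimal finite-dimensional analogue, with the unit circle in $\mathbb{R}^2$ playing the role of $\Omega_L(\Pi\varphi_L)$ (plain Python; entirely illustrative):

```python
import math

def u_inv(theta, r):
    # analogue of u^{-1}(y, eta) = (Pi phi_L)^y + eta^y:
    # move along the orbit by theta, then add the transversal remainder r
    return ((1.0 + r) * math.cos(theta), (1.0 + r) * math.sin(theta))

def u(point):
    # analogue of u: phi -> (y_phi, eta_phi); the nearest point on the
    # circle yields theta, the signed radial distance yields r
    px, py = point
    return math.atan2(py, px), math.hypot(px, py) - 1.0

# round trip: decomposing u^{-1}(theta, r) recovers the coordinates
theta1, r1 = u(u_inv(0.7, 0.05))
print(theta1, r1)
```

As in the definition above, the round trip is the identity precisely because each point of the (small) neighborhood has a unique nearest orbit point.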
By the discussion above it is clear that $u$ is well-defined and invertible, for $\varepsilon\leq \varepsilon'_L$ (defined in Lemma <ref>), with inverse $u^{-1}$ given by \begin{align} \label{eq:diffeoimpl} u^{-1}: \TL \times B^{T,\Lambda}_{\varepsilon} &\rightarrow \Tneigh \nonumber\\ (y,\eta)&\mapsto (\Pi\varphi_L)^y+\eta^y. \end{align} We emphasize that the whole aim of the discussion above is to show that $u$ is well-defined, since once that has been shown the invertibility of $u$ and the form of $u^{-1}$ are obvious. In other words, the map $u^{-1}$ as defined in (<ref>) is trivially well-defined, but it is injective and surjective with inverse $u$ only thanks to the existence and uniqueness of the decomposition (<ref>). To show that $u$ is a smooth diffeomorphism, we prefer to work with its inverse $u^{-1}$, which we proceed to write down more explicitly. For this purpose, we pick a real $L^2$-orthonormal basis $\{f_k\}_{k=1}^N$ of $\ran \Pi$, such that $f_1$, $f_2$ and $f_3$ are an orthonormal basis of $\spn \{\Pi W_T \partial_j \varphi_L\}_{j=1}^3$ and $f_4=\frac{\Pi \varphi_L}{\|\Pi \varphi_L\|_2}$. Note that $\spn \{\Pi W_T \partial_j \varphi_L\}_{j=1}^3$ is three-dimensional, as remarked after (<ref>), at least for $N$ and $T$ large enough, and that $f_4$ is indeed orthogonal to $f_1$, $f_2$ and $f_3$ since in $k$-space $W_T$ and $\Pi$ are even multiplication operators while the partial derivatives are odd multiplication operators. We denote the projection onto $\spn \{\Pi W_T \partial_j \varphi_L\}_{j=1}^3$ by \begin{align} \GradProjT:=\sum_{k=1}^3 \ket{f_k}\bra{f_k}. \end{align} Having fixed a real orthonormal $L^2$-basis, we can identify any real-valued function in $\ran \Pi$ (and hence also any function in $\Tneigh$) with a point $(\lambda_1,\dots,\lambda_N)$ via (<ref>).
In these coordinates, the orthogonal transformation that acts on functions in $\ran \Pi$ as the translation by $y$, i.e., $\varphi\mapsto\varphi^y$, reads \begin{align}\label{def:Ry} R(y):=\sum_{k=1}^{N} \ket{f_k^y}\bra{f_k}, \end{align} and we can write $B^{T,\Lambda}_{\varepsilon}$ in (<ref>) as \begin{align} B^{T,\Lambda}_{\varepsilon}:=\left\{\eta=(\eta_4,\dots,\eta_N)\in \spn_{\R}\{f_4,\dots, f_N\}\; \Big| \; \left\|\sum_{k=4}^N \eta_k f_k\right\|_{W_T}\leq r_{T,\varepsilon}\right\}. \end{align} In this basis, we can write $u^{-1}$ explicitly as \begin{equation} \label{eq:diffeo} u^{-1}(y,\eta)=(\Pi\varphi_L)^y+\eta^y = R(y)(0,0,0,\|\Pi\varphi_L\|_2+\eta_4,\eta_5,\dots,\eta_N). \end{equation} The following lemma uses this explicit expression for $u^{-1}$ and shows that it is a smooth diffeomorphism (therefore showing that the Gross coordinates map $u$ is as well). Let $u^{-1}$ be the map defined in (<ref>). There exist $\varepsilon^1_L\leq \varepsilon'_L$ (independent of $T$ and $N$) and $N_L>0$ such that for any $\varepsilon\leq \varepsilon_L^1$, any $T>0$ and any $N>N_L$ the map $u^{-1}$ is a $C^{\infty}$-diffeomorphism from $\TL\times B_{\varepsilon}^{T,\Lambda}$ onto $\Tneigh$. Moreover, for $\varepsilon\leq \varepsilon^1_L$, $|\det Du^{-1}|$ and all its derivatives are uniformly bounded independently of $T$ and $N$. We introduce the notation $J(y,\eta)=D u^{-1}(y,\eta)$ and $d(y,\eta):=| \det J(y,\eta)|$. Note that $R(y)$ in (<ref>) satisfies $R(-y)=R(y)^{-1}=R(y)^t$ since $\{f_j^y\}_{j=1}^N$ is an orthonormal basis of $\ran \Pi$ for any $y$. Hence, for $j=1,\dots,N$ we have \begin{align} (u^{-1})_j(y,\eta)=\bra{f_j}\ket{R(y)\left(\Pi \varphi_L+\sum_{l=4}^N\eta_l f_l\right)}=\bra{R(-y)f_j}\ket{\Pi \varphi_L+\sum_{l=4}^N\eta_l f_l}. \end{align} This yields the smoothness of $u^{-1}$ in $\eta$ and in $y$ (noting that $\{f_j\}_{j=1}^N\subset \ran \Pi$ is a set of smooth functions for any $N$). We proceed to compute $J$.
We have, for $4\leq k \leq N$, \begin{align} \partial_{\eta_k} (u^{-1})_j (y,\eta)=\bra{R(-y) f_j}\ket{f_k}=\bra{f_j}\ket{R(y) f_k}, \end{align} and \begin{align}\nonumber \partial_{y_k} (u^{-1})_j (y,\eta) & = \bra{f_j}\ket{\partial_{y_k}R(y)\left(\Pi \varphi_L+\sum_{l=4}^N\eta_l f_l\right)} \\ & = - \bra{f_j}\ket{R(y) \partial_k\left(\Pi \varphi_L+\sum_{l=4}^N\eta_l f_l\right)} \end{align} for $1\leq k \leq 3$. Altogether, \begin{align} J (y,\eta)&= R(y)\left[\sum_{k=1}^3 \ket{v_k}\bra{f_k}+\sum_{k\geq 4} \ket{f_k}\bra{f_k}\right]\nonumber\\ &=R(y)\left(\unit-\GradProjT+ \sum_{k=1}^3 \ket{v_k}\bra{f_k}\right)=:R(y)J_0(\eta), \label{JRJ} \end{align} where $v_k(\eta):= -\partial_k u^{-1}(0,\eta)=-\partial_k \left(\Pi \varphi_L+\sum_{l=4}^N\eta_l f_l\right)$. Since $R(y)$ is orthogonal, we see that $d=|\det J_0|$ (implying, in particular, that $d$ is independent of $y$). Observe that, in the basis $\{f_k\}_{k=1}^N$, \begin{align} \label{eq:Jzero} J_0(\eta)= \begin{pmatrix} A_0 & 0 \\ A_1 & \unit \end{pmatrix}, \end{align} where $A_0$ is the $3\times 3$ matrix given by \begin{align} \label{eq:Adef} (A_0)_{jk}=\bra{f_j}\ket{v_k}=\bra{f_j}\ket{-\partial_k\left(\Pi \varphi_L +\sum_{l=4}^N \eta_l f_l\right)}, \quad j,k\in\{1,2,3\}, \end{align} and $A_1$ is the $(N-3)\times 3$ matrix defined by \begin{align} (A_1)_{jk}=\bra{f_{j+3}}\ket{-\partial_k\left(\Pi \varphi_L+\sum_{l=4}^N\eta_l f_l\right)} \quad j\in \{1,\dots, N-3\},\;\;k\in\{1,2,3\}. \end{align} Since $J_0$ is the identity in the bottom-right $(N-3)\times (N-3)$ corner and $0$ in the top-right $3\times (N-3)$ corner, $d=|\det A_0|$. On $\ran \GradProjT$ the operators $\partial_k$ with $k=1,2,3$ and $W_T^{-1}$ are uniformly bounded in $N$ and $T$. Recall also that $\|\eta\|_{W_T}\leq \varepsilon^1_L$. Hence, for some constant $C_L$ independent of $N$ and $T$, and for any $j,k\in \{1,2,3\}$, we have \begin{align} \label{eq:Abdd} |(A_0)_{jk}|\leq \|\partial_k f_j\|_2 \|\Pi \varphi_L\|_2+\|W_T^{-1}\partial_k f_j\|_{W_T}\|\eta\|_{W_T}\leq C_L. 
\end{align} Moreover, for any $j, k \in \{1,2,3\}$ and any $l, l_1, l_2\in\{4,\dots,N\}$, we also have \begin{align} \label{eq:Ader} \partial_{\eta_l} (A_0)_{jk}=\bra{\partial_k f_j}\ket{f_l}, \quad \partial_{\eta_{l_1}}\partial_{\eta_{l_2}} (A_0)_{jk}=0. \end{align} Clearly, (<ref>) and (<ref>) together with the fact that $d=|\det A_0|$ show that $d$ and all its derivatives are uniformly bounded in $N$ and $T$. To show that there exists $\varepsilon_L^1$ and $N_L$ such that $d\geq C_L>0$ for all $\varepsilon\leq \varepsilon_L^1$, $T>0$ and $N>N_L$, we show that the image of the $3$-dimensional unit sphere under $A_0$ is uniformly bounded away from $0$, which clearly yields our claim. For this purpose, we observe that the $k$-th column of $A_0$ is given by $\GradProjT \left[-\partial_k \left(\Pi \varphi_L+\sum_{l=4}^N\eta_l f_l\right)\right]$ and therefore, for any unit vector $a=(a_1,a_2,a_3)\in \R^3$, \begin{align} A_0 a&=\sum_{k=1}^3 a_k \GradProjT \left[-\partial_k \left(\Pi \varphi_L+\sum_{l=4}^N\eta_l f_l\right)\right] =-\GradProjT\partial_a u^{-1}(0,\eta), \end{align} where we denote $\sum_{k=1}^3 a_k \partial_k = \partial_{a}$. To bound the norm of $A_0 a$ from below, it is then sufficient to test $\partial_a u^{-1}(0,\eta)$ against one normalized element of $\ran \GradProjT$, say $\frac{\Pi W_T \partial_a \varphi_L}{\|\Pi W_T \partial_a \varphi_L\|_2}$. 
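The testing step just described rests on the elementary bound $\|\GradProjT v\|_2\geq |\langle u, v\rangle|$ for any normalized $u$ in the range of the projection, since $\langle u,v\rangle=\langle u,\GradProjT v\rangle$. A quick random sanity check in a finite-dimensional stand-in (toy sizes, ad hoc names):

```python
import numpy as np

rng = np.random.default_rng(1)
dim, rank = 12, 3   # ad hoc toy dimensions

# random rank-3 orthogonal projection P, standing in for GradProjT
Q, _ = np.linalg.qr(rng.standard_normal((dim, rank)))
P = Q @ Q.T
u = Q[:, 0]         # one fixed normalized element of ran P

# ||P v||_2 >= |<u, v>| for every v, since |<u, v>| = |<u, P v>| <= ||P v||_2
vs = rng.standard_normal((200, dim))
margin = min(np.linalg.norm(P @ v) - abs(u @ v) for v in vs)
```

The margin is nonnegative for every sample; equality would require $P v$ to be parallel to $u$.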
We obtain \begin{align} \label{eq:partialaest} \|A_0a\|_2^2&=\|\GradProjT\partial_a u^{-1}(0,\eta)\|_2^2\geq \left|\bra{\frac{\Pi W_T \partial_a \varphi_L}{\|\Pi W_T \partial_a \varphi_L\|_2}}\ket{\partial_a\left(\Pi \varphi_L+\sum_{l=4}^N\eta_l f_l\right)}\right|^2\nonumber\\ &=\|\Pi W_T \partial_a \varphi_L\|_2^{-2}\left|\|\Pi W_T^{1/2}\partial_a \varphi_L\|_2^2-\bra{\Pi \partial_a^2 \varphi_L}\ket{\eta}_{W_T}\right|^2\notag\\ &\geq \|\partial_a \varphi_L\|_2^{-2}\left(\|\Pi W_0^{1/2}\partial_a \varphi_L\|_2^2-\|\Pi \partial_a^2 \varphi_L\|_{W_T}\|\eta\|_{W_T}\right)_+^2\nonumber\\ &\geq \|\partial_a \varphi_L\|_2^{-2}\left(\|\Pi W_0^{1/2}\partial_a \varphi_L\|_2^2-\varepsilon\|\partial_a^2 \varphi_L\|_2\right)_+^2, \end{align} where we used that $\|\eta\|_{W_T}\leq \varepsilon$, $0\leq W_T\leq \unit$ and $\Pi\leq \unit$, and $(\, \cdot\,)_+$ denotes the positive part. As remarked after (<ref>), $\partial_a \varphi_L =(-\Delta_L)^{-1/2} \partial_a |\psi_L|^2\neq 0$ and since $\varphi_L\in C^{\infty}$, $\partial_a \varphi_L$ and $\partial_a^2 \varphi_L$ are uniformly bounded in $a$. We can thus find $N_L>0$ and $\varepsilon_L^1$ such that the r.h.s. of (<ref>) is bounded from below by some constant $C_L>0$ uniformly for $T>0$, $N>N_L$ and $\varepsilon\leq \varepsilon_L^1$. This shows that $A_0$ (and hence $J$) is invertible at every point and that $d\geq C_L>0$ uniformly in $T>0$, $N>N_L$ and $\varepsilon\leq\varepsilon^1_L$, as claimed. This concludes the proof. Since $u$ is a diffeomorphism, we can introduce a unitary operator that lifts $u^{-1}$ to $L^2$, defined by \begin{align} \label{eq:unitary} &U:L^2(\TL \times B^{T,\Lambda}_{\varepsilon}) \longrightarrow L^2(\Tneigh) \nonumber\\ &U(\psi):=|\det \left(D u\right)|^{1/2} \psi \circ u. 
\end{align} Recall that $j_1$ is supported in $\Tneigh$, hence we can apply $U$ to $j_1 \mathbb{K} j_1$, obtaining an operator that acts on functions on $\TL \times \R^{N-3}$ that are supported in $\TL \times B^{T,\Lambda}_{\varepsilon}$. In particular, \begin{align} j_1 \mathbb{K} j_1\geq j_1^2 \infspec_{H^1_0\left(\TL\times B^{T,\Lambda}_{\varepsilon}\right)}[U^*\mathbb{K}U], \end{align} where the subscript indicates that the operator has to be understood as the corresponding quadratic form with form domain $H^1_0(\TL\times B^{T,\Lambda}_{\varepsilon})$ (i.e., with Dirichlet boundary conditions on the boundary of $B^{T,\Lambda}_{\varepsilon}$). We are hence left with the task of giving a lower bound on $\infspec_{H^1_0\left(\TL\times B^{T,\Lambda}_{\varepsilon}\right)}[U^*\mathbb{K}U]$, which will be done in the remainder of this subsection. Recalling the definition of $\mathbb{K}$ given in (<ref>), we proceed to find a convenient lower bound for $U^* \FL U$. Any $(\Pi\varphi_L)^{y_{\varphi}}+(w_{\varphi})^{y_{\varphi}}=\varphi \in \Tneigh$ satisfies (<ref>) with $\varphi_L^{y_{\varphi}}$ in place of $\varphi_L$, and we can therefore expand $\FL(\varphi)$ using Proposition <ref>, obtaining \begin{align} \FL(\varphi)-\eL\geq&\expval{\unit-K_L^{y_{\varphi}}-\varepsilon C_LJ_L^{y_{\varphi}}}{(w_{\varphi})^{y_{\varphi}}-((\unit-\Pi)\varphi_L)^{y_{\varphi}}}\notag\\ =& \expval{(\unit-\Pi)(\unit-K_L-\varepsilon C_LJ_L)(\unit-\Pi)}{\varphi_L}\notag\\ &-2\bra{(\unit-\Pi)\varphi_L} \unit-K_L-\varepsilon C_LJ_L\ket{w_{\varphi}}+\expval{\unit-K_L-\varepsilon C_LJ_L}{w_{\varphi}}. \end{align} Since $K_L$ and $J_L$ are trace class operators, \begin{align} (\unit-\Pi)(\unit-K_L-\varepsilon C_LJ_L)(\unit-\Pi)>0 \end{align} holds for $\Lambda$ sufficiently large and $\varepsilon$ sufficiently small. 
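The positivity mechanism used here, namely that projecting out a large enough subspace leaves $\unit-K_L-\varepsilon C_L J_L$ strictly positive because the eigenvalues of a trace class operator are summable, can be illustrated in a finite-dimensional toy model. In the sketch below all sizes and the eigenvalue profile $1/i^2$ are ad hoc choices, and, for simplicity, the stand-in for $\Pi$ is a spectral projection of the toy $K$ (in the paper $\Pi$ is a momentum cutoff, so this is only an illustration of the mechanism):

```python
import numpy as np

rng = np.random.default_rng(2)
n, m, eps = 40, 6, 0.05   # ad hoc toy sizes

# toy trace-class operators: summable eigenvalue profile 1/i^2 (arbitrary choice)
lam = 1.0 / np.arange(1, n + 1) ** 2
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
K = V @ np.diag(lam) @ V.T
J = K.copy()                      # reuse the same profile for the J_L stand-in

# Pi captures the m leading eigenvectors of K
Pi = V[:, :m] @ V[:, :m].T
Q = np.eye(n) - Pi

# (1 - Pi)(1 - K - eps*J)(1 - Pi), restricted to ran(1 - Pi)
A = Q @ (np.eye(n) - K - eps * J) @ Q
min_eig = np.linalg.eigvalsh(V[:, m:].T @ A @ V[:, m:]).min()
# min_eig = 1 - (1 + eps) * lam[m] > 0: enlarging Pi pushes it toward 1
```

Enlarging the projection (increasing `m`) makes the restricted operator closer to the identity, mirroring the role of taking $\Lambda$ large.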
Moreover, since $\varphi_L\in C^{\infty}(\TL)$ \begin{align}\nonumber & |\bra{(\unit-\Pi)\varphi_L} \unit-K_L-\varepsilon C_LJ_L\ket{w_{\varphi}}| \\ &\leq \|W_T^{-1/2}(\unit-K_L-\varepsilon C_LJ_L)(\unit-\Pi)\varphi_L\|_2\|w_{\varphi}\|_{W_T}=O(\varepsilon \Lambda^{-h}) \end{align} for arbitrary $h>0$ and uniformly in $T$. This implies that, for any $\varphi=(\Pi\varphi_L)^{y_{\varphi}}+(w_{\varphi})^{y_{\varphi}} \in \Tneigh$, any $\Lambda$ sufficiently large, any $\varepsilon$ sufficiently small and an arbitrary $h$ \begin{align} \label{eq:FLlocalbound} \FL(\varphi)=\FL((\Pi\varphi_L)^{y_{\varphi}}+(w_{\varphi})^{y_{\varphi}})\geq \eL -O(\varepsilon \Lambda^{-h})+\expval{\unit-K_L-\varepsilon C_LJ_L}{w_{\varphi}}. \end{align} Therefore, if we define the $[(N-3)\times (N-3)]$-matrix $M$ with coefficients \begin{align} \label{eq:Mdef} M_{k,j}:= \bra{f_{k+3}} \unit-K_L-\varepsilon C_LJ_L \ket{f_{j+3}}, \end{align} then, by (<ref>), the multiplication operator $U^* \FL U$ satisfies \begin{align}\label{p1} (U^* \FL U) (y,\eta)\geq \eL+\expval{M}{\eta}-O(\varepsilon \Lambda^{-h}). \end{align} It is easy to see that $M$ is a positive matrix, at least for $\varepsilon$ sufficiently small and $T$ and $\Lambda$ sufficiently large. Indeed, the positivity of $M$ is equivalent to the positivity of $(\unit-K_L-\varepsilon C_LJ_L)$ on $\ran (\Pi-\GradProjT)$ and, by Proposition <ref>, $(\unit-K_L-\varepsilon C_LJ_L)$ is positive on any vector space with trivial intersection with $\ran \GradProj$. Clearly, since $\GradProjT\to \GradProj$ as $T\to \infty$, the bound \begin{align} \label{eq:Mpos} M\geq c_L>0 \end{align} holds, uniformly in $T$, $\Lambda$ and for $\varepsilon$ sufficiently small. We now proceed to bound $- U^* \Delta_{\lambda} U$ from below. Let $U$ be the unitary transformation defined in (<ref>). 
There exists $C_L>0$, independent of $N$, $T$ and $\varepsilon$, such that, for $\varepsilon \leq \varepsilon^1_L$, $T>0$ and $N>N_L$ \begin{align} \label{eq:LemmaDiffLaplClaim} U^*\left(-\Delta_{\lambda}\right)U\geq -\Delta_{\eta}-C_L. \end{align} Since (<ref>) shows that $J(y,\eta)=R(y)J_0(\eta)$ with $R(y)$ orthogonal, we have \begin{align} \label{eq:diffeomLapl1} U^* \left(-\Delta_{\lambda} \right)U&=-d^{-1/2} \grad \cdot d^{1/2}\left[J^{-1} (J^{-1})^t\right] d^{1/2} \grad d^{-1/2}\nonumber\\ &=-d^{-1/2} \grad \cdot d^{1/2}\left[J_0^{-1} (J_0^{-1})^t\right] d^{1/2} \grad d^{-1/2}, \end{align} with $d(y,\eta)=| \det J(y,\eta)|$ and $\grad$ denoting the gradient with respect to $(y,\eta)\in \R^N$. Recalling the expression (<ref>) for $J_0$, we find \begin{align} J_0^{-1}= \begin{pmatrix} A_0^{-1} & 0 \\ -A_1 A_0^{-1} & \unit \end{pmatrix} = \begin{pmatrix} 0 & 0 \\ 0 & \unit \end{pmatrix} + \begin{pmatrix} A_0^{-1} & 0 \\ -A_1 A_0^{-1} & 0 \end{pmatrix} =:(\unit-\GradProjT)+D. \end{align} Since $D(\unit-\GradProjT)=(\unit-\GradProjT)D^t=0$, we have \begin{align} \label{eq:JzeroJzerot} J_0^{-1} (J_0^{-1})^t=(\unit-\GradProjT)+D D^t\geq \unit-\GradProjT. \end{align} With (<ref>) and (<ref>), we thus obtain \begin{align} U^* \left(-\Delta_{\lambda}\right) U&\geq -d^{-1/2} \grad \cdot d^{1/2}\left(\unit -\GradProjT\right) d^{1/2} \grad d^{-1/2}\nonumber\\ &=-\Delta_{\eta}-(2d)^{-2} |\nabla d|^2+(2d)^{-1}\Delta d. \end{align} Lemma <ref> guarantees that $d$ and all its derivatives are bounded, and $d$ is bounded away from $0$ uniformly in $N>N_L$, $T>0$ and $\varepsilon\leq \varepsilon_L^1$, leading to (<ref>). 
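The two linear-algebra facts used in this proof and in the previous Lemma, namely $d=|\det J_0|=|\det A_0|$ for the block-triangular $J_0$ and the operator inequality $J_0^{-1}(J_0^{-1})^t\geq \unit-\GradProjT$, can be sanity-checked on random matrices (toy dimensions and names are ad hoc):

```python
import numpy as np

rng = np.random.default_rng(3)
N = 10   # toy dimension

A0 = rng.standard_normal((3, 3)) + 3 * np.eye(3)      # comfortably invertible
A1 = rng.standard_normal((N - 3, 3))
J0 = np.block([[A0, np.zeros((3, N - 3))],
               [A1, np.eye(N - 3)]])

# block-triangular structure: det J0 = det A0, so d = |det A0|
det_err = abs(np.linalg.det(J0) - np.linalg.det(A0))

# P plays the role of GradProjT: projection onto the first 3 coordinates
P = np.zeros((N, N))
P[:3, :3] = np.eye(3)

# J0^{-1}(J0^{-1})^t - (1 - P) = D D^t is positive semi-definite
J0inv = np.linalg.inv(J0)
gap = J0inv @ J0inv.T - (np.eye(N) - P)
min_eig = np.linalg.eigvalsh(gap).min()
```

The smallest eigenvalue of `gap` is (numerically) nonnegative, which is exactly the inequality that lets the mixed $(y,\eta)$-Laplacian be bounded below by $-\Delta_\eta$ up to a constant.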
In combination, (<ref>), (<ref>) and the positivity of $M$ imply that \begin{align} j_1 \mathbb{K} j_1&\geq j_1^2 \infspec_{H^1_0\left(\TL\times B^{T,\Lambda}_{\varepsilon}\right)}(U^* \mathbb{K} U)\\ &\geq j_1^2\left(\eL-\frac N {2 \alpha^2}-O(\varepsilon \Lambda^{-h})-O(\alpha^{-4})+\infspec_{L^2(\R^N)} \left[-\frac 1 {4\alpha^4} \Delta_{\eta}+\expval{M}{\eta}\right] \right)\nonumber\\ &=j_1^2\left(\eL-\frac 1 {2 \alpha^2}(N-\Tr (M^{1/2}))-O(\varepsilon \Lambda^{-h})- O(\alpha^{-4})\right). \end{align} Note that since we are taking $\Lambda\gg \alpha^{4/5}$, $\varepsilon\ll 1$ and $h>0$ was arbitrary, picking $h=5$ allows to absorb the error term $O(\varepsilon \Lambda^{-h})$ in the error term $O(\alpha^{-4})$. Recalling the definition of $M$ given in (<ref>), we have \begin{align} \Tr(M^{1/2})=\Tr\left[\sqrt{(\Pi-\GradProjT)(\unit-K_L-\varepsilon C_L J_L)(\Pi-\GradProjT)}\right]. \end{align} With $\{t_j\}_{j=1}^{N-3}$ an orthonormal basis of $\ran(\Pi-\GradProjT)$ of eigenfunctions of $(\Pi-\GradProjT)(\unit-K_L-\varepsilon C_LJ_L)(\Pi-\GradProjT)$, we can write \begin{align} \Tr(M^{1/2})&=\sum_{j=1}^{N-3} \expval{\unit-K_L-\varepsilon C_LJ_L}{t_j}^{1/2}\nonumber\\ &=\sum_{j=1}^{N-3} \left[ \expval{\unit-K_L}{t_j}^{1/2}-\frac{\varepsilon C_L}{2 \xi_j^{1/2}} \expval{J_L}{t_j}\right] \end{align} for some $\{\xi_j\}_{j=1}^{N-3}$ satisfying \begin{align} c_L\leq \expval{\unit-K_L-\varepsilon C_L J_L}{t_j}\leq \xi_j \leq \expval{\unit-K_L}{t_j}\leq 1 \end{align} for $T$ and $\Lambda$ large enough and $\varepsilon$ small enough, where we used (<ref>) for the lower bound. Using the concavity of the square root and the trace class property of $J_L$, we conclude that \begin{align} \Tr(M^{1/2})\geq \sum_{j=1}^{N-3} \expval{\sqrt{\unit-K_L}}{t_j}-\varepsilon C_L \Tr(J_L)=\Tr\left[(\Pi-\GradProjT)\sqrt{\unit-K_L}\right]-\varepsilon C_L. 
\end{align} Since $\varphi_L\in C^{\infty}$ and recalling (<ref>), for an arbitrary $h>0$ we can bound \begin{align} \|\GradProj-\GradProjT\|\lesssim_L \min\{\Lambda,T\}^{-h}=T^{-h}, \end{align} which also implies the same estimate for the trace-norm of the difference of $\GradProj$ and $\GradProjT$, both operators being of rank $3$. Recalling that $\GradProj$ projects onto $\ker (\unit-K_L)$, we finally obtain \begin{align} \Tr(M^{1/2})\geq \Tr [\Pi\sqrt{\unit-K_L}]-O(\varepsilon)-O(T^{-h}). \end{align} The error term $O(T^{-h})$ forces $T\to \infty$ as $\alpha \to \infty$, but allows $T$ to grow with an arbitrarily small power of $\alpha$. By picking $h$ to be sufficiently large we can absorb it in the error term $O(\varepsilon)$. We obtain the final lower bound \begin{align} j_1 \mathbb{K} j_1 &\geq j_1^2 \left[\eL-\frac 1 {2\alpha^2}\Tr[\Pi(\unit-(\unit- K_L )^{1/2})]-O(\varepsilon \alpha^{-2})-O(\alpha^{-4})\right]\nonumber\\ &\geq j_1^2 \left[ \eL-\frac 1 {2\alpha^2}\Tr[(\unit-(\unit-K_L )^{1/2})]-O(\varepsilon \alpha^{-2})-O(\alpha^{-4})\right]. \label{r3} \end{align} §.§.§ Bounds on $j_2 \mathbb{K} j_2$ We recall Corollary <ref>, which implies that, for any $\varphi \in L^2_{\mathbb{R}}(\TL)$, \begin{align} \FL(\varphi)\geq \eL+\inf_{y\in \TL}\expval{B}{\varphi-\varphi_L^y}, \end{align} where $B$ acts in $k$-space as the multiplication by \begin{align} B(k)=\begin{cases} 1 &\text{for} \,\, k=0,\\ 1-(1+\kappa'|k|)^{-1} &\text{for}\,\, k\neq 0. \end{cases} \end{align} Note that $B-\eta W_T>0$ for $\eta>0$ small enough (independently of $T$). Moreover, for any $\varphi$ in the support of $j_2$ and any $y\in \TL$, \begin{align} \expval{W_T}{\varphi-\varphi_L^y}\geq \varepsilon^2/4. \end{align} Therefore, on the support of $j_2$, we have \begin{align} \FL(\varphi) \geq \eL+\inf_{y\in \TL}\expval{B-\eta W_T}{\varphi-\varphi_L^y}+\eta \varepsilon^2/4. 
\end{align} By the Cauchy–Schwarz inequality, using that all the operators involved commute, we have \begin{align} \expval{B-\eta W_T}{\varphi-\varphi_L^y}&\geq \expval{(\unit-W_{\gamma}^{1/2})(B-\eta W_T)}{\varphi}\nonumber\\ &\quad +\expval{(\unit-W_{\gamma}^{-1/2})(B-\eta W_T)}{\varphi_L} \end{align} for any $\gamma>0$. Note that the right hand side is independent of $y$. Since $\varphi_L\in C^{\infty}(\TL)$, the Fourier coefficients of $\varphi_L$ satisfy \begin{align} (1+|k|^2)^{5/2}|(\varphi_L)_k|^2\leq C_{L,t} \gamma^{-t} \quad \text{for} \quad |k|\geq \gamma \end{align} for any $t>0$. Using the positivity of $B-\eta W_T$ we can bound \begin{align} \expval{(\unit-W_{\gamma}^{-1/2})(B-\eta W_T)}{\varphi_L}&\geq-\sum_{k\in \frac {2\pi}{L} \mathbb{Z}^3\atop |k|>\gamma} (B(k)-\eta W_T(k))(1+|k|^2)^{1/2} |(\varphi_L)_k|^2\nonumber\\ &= -\sum_{k\in \frac {2\pi}{L} \mathbb{Z}^3\atop |k|>\gamma} \frac{(B(k)-\eta W_T(k))}{(1+|k|^2)^{2}}(1+|k|^2)^{5/2} |(\varphi_L)_k|^2\nonumber\\ &\geq -C_{L,t} \gamma^{-t} \sum_{k\in \frac {2\pi}{L} \mathbb{Z}^3\atop |k|>\gamma} \frac{1}{(1+|k|^2)^{2}}\gtrsim_L -\gamma^{-t-1} . \end{align} Therefore we conclude, using the positivity of $\unit-W_{\gamma}^{1/2}$ and of $B-\eta W_T$, that \begin{align} \label{eq:j2Kbound1} &j_2\mathbb{K} j_2\notag\\ &\geq j_2^2 \infspec\left[\eL- \frac N {2 \alpha^2}+\frac{\eta\varepsilon^2}4-O(\gamma^{-t-1})-\frac 1 {4\alpha^4} \Delta_{\lambda}+\expval{(\unit-W_{\gamma}^{1/2})(B-\eta W_T)}{\varphi}\right] \notag\\ &=j_2^2\left(\eL+\frac{\eta\varepsilon^2}4-O(\gamma^{-t-1})-\frac 1 {2\alpha^2} \Tr\left[\Pi\left(\unit-\sqrt{(\unit-W_{\gamma}^{1/2})(B-\eta W_T)}\right)\right]\right). 
\end{align} We need to estimate the behavior in $N=\rank \Pi$, $T$ and $\gamma$ of the trace appearing in the last equation, which equals \begin{align} \nonumber &\Tr\left[\Pi\left(\unit-\sqrt{(\unit-W_{\gamma}^{1/2})(B-\eta W_T)}\right)\right] \\ &=\sum_{k\in \frac{2\pi} L \mathbb{Z}^3 \atop |k|\leq \Lambda} \left(1-\sqrt{(1-W_{\gamma}(k)^{1/2})(B(k)-\eta W_T(k))}\right). \label{eq:estimatesTraceOutside} \end{align} The contribution to the sum from $|k|\leq \max\{\gamma,T\}$ can be bounded by $C(L\max\{\gamma,T\})^3$. For $|k|> \max\{\gamma,T\}$, $W_\gamma(k) = W_T(k) = (1+|k|^2)^{-1}$, and the coefficient under the square root in the last line of (<ref>) behaves asymptotically for large momenta as $1 - |k|^{-1}$. Hence, recalling (<ref>), we conclude that \begin{align} \label{eq:tracebound} \Tr\left[\Pi\left(\unit-\sqrt{(\unit-W_{\gamma}^{1/2})(B-\eta W_T)}\right)\right]\leq O\left(\max\{\gamma,T\}^3\right)+O(\Lambda^2). \end{align} Because of (<ref>), the first term on the right hand side is negligible compared to the second if we choose $\gamma$ to equal $\alpha$ to some small enough power. Because $t$ was arbitrary, we thus arrive at \begin{align} j_2\mathbb{K} j_2\geq j_2^2\left(\eL+\frac{\eta\varepsilon^2}4-O(\alpha^{-2}\Lambda^2)\right). \end{align} Therefore, if \begin{align} \label{eq:restr2bis} \varepsilon\geq C_L \alpha^{-1}\Lambda \end{align} for a sufficiently large constant $C_L$, we conclude that for sufficiently large $\alpha$ and $\Lambda$ \begin{align}\label{r4} j_2\mathbb{K} j_2\geq j_2^2 \eL. 
\end{align} §.§.§ Proof of Theorem <ref>, lower bound By combining the results (<ref>) and (<ref>) of the previous two subsections with (<ref>) and (<ref>), we obtain \begin{align} \mathbb{K}& \geq j_1 \mathbb{K} j_1+j_2 \mathbb{K} j_2 +O(\alpha^{-4}\varepsilon^{-2})\nonumber\\ &\geq j_1^2\left[ \eL-\frac 1 {2\alpha^2}\Tr[(\unit-(\unit-K_L )^{1/2})]+O(\varepsilon \alpha^{-2})+O(\alpha^{-4})\right]+j_2^2 \eL+O(\alpha^{-4}\varepsilon^{-2})\nonumber\\ &\geq \eL-\frac 1 {2\alpha^2}\Tr[(\unit-(\unit-K_L )^{1/2})]+O(\varepsilon \alpha^{-2})+O(\alpha^{-4})+O(\alpha^{-4}\varepsilon^{-2}) \end{align} under the constraint (<ref>). With Proposition <ref> we can thus conclude that \begin{align} \infspec \HL&\geq \infspec \HL^{\Lambda}+O(\Lambda^{-5/2})+O(\alpha^{-1} \Lambda^{-3/2})+O(\alpha^{-2}\Lambda^{-1})\nonumber\\ &\geq \infspec \mathbb{K}+O(\Lambda^{-5/2})+O(\alpha^{-1} \Lambda^{-3/2})+O(\alpha^{-2}\Lambda^{-1})\nonumber\\ &\geq \eL-\frac 1 {2\alpha^2}\Tr[(\unit-(\unit-K_L )^{1/2})]+O(\varepsilon \alpha^{-2})+O(\alpha^{-4})+O(\alpha^{-4}\varepsilon^{-2})\nonumber\\ &\quad +O(\Lambda^{-5/2})+O(\alpha^{-1} \Lambda^{-3/2})+O(\alpha^{-2}\Lambda^{-1}). \end{align} To minimize the error terms under the constraint (<ref>), we pick $\varepsilon \sim \alpha^{-1/7}$ and $\Lambda \sim \alpha^{6/7}$, which yields the claimed estimate \begin{align} \infspec \HL\geq \eL-\frac 1 {2\alpha^2}\Tr[(\unit-(\unit-K_L )^{1/2})]+O(\alpha^{-15/7}). \end{align} This concludes the proof of the lower bound, and hence the proof of Theorem <ref>. § ACKNOWLEDGMENTS Funding from the European Union's Horizon 2020 research and innovation programme under the ERC grant agreement No 694227 is gratefully acknowledged. We would also like to thank Rupert Frank for many helpful discussions, especially related to the Gross coordinate transformation defined in Def. <ref>.
On the Meaning of Various Mass Definitions for Asymptotically Flat Spacetimes

Dan N. Vollick
Irving K. Barber Faculty of Science
University of British Columbia Okanagan
3333 University Way
Kelowna, B.C. Canada V1V 1V7

Abstract

The mass contained in an arbitrary spacetime in general relativity is not well defined. However, for asymptotically flat spacetimes various definitions of mass have been proposed. In this paper I consider eight masses and show that some of them correspond to the active gravitational mass while the others correspond to the inertial mass. For example, the ADM mass corresponds to the inertial mass while the M$\o$ller mass corresponds to the active gravitational mass. In general the inertial and active gravitational masses are not equal. If the spacetime is vacuum at large $r$ the Einstein equations force the inertial and active gravitational masses to be the same. The Einstein equations also force the masses to be the same if any matter that extends out to large $r$ satisfies the weak, strong or dominant energy condition. I also examine the contributions of the inertial and active gravitational masses to the gravitational redshift, the deflection of light, the Shapiro time delay, the precession of perihelia and to the motion of test bodies in the spacetime.

## 1 Introduction

In general relativity the mass contained in an arbitrary spacetime is not well defined. However, for asymptotically flat spacetimes various definitions of mass have been proposed. In this paper I consider eight masses: the active gravitational mass, the Einstein mass, the Landau-Lifshitz mass, the ADM mass, the M$\o$ller mass, the Tolman mass, the Komar mass and the $\rho$-mass in asymptotically flat spacetimes. The metric is taken to be $ds^{2}\simeq-\left[1-\frac{2GM_{1}}{r}\right]dt^{2}+\left[1+\frac{2GM_{2}}{r}\right]dr^{2}+r^{2}d\Omega^{2}$ (1) at large $r$, where $M_{1}$ and $M_{2}$ are constants. 
Spacetimes with $M_{1}\neq M_{2}$ occur in string theory [12] and in Brans-Dicke theory [13]. Some of the above masses correspond to $M_{1}$, while others correspond to $M_{2}$. The physical significance of $M_{1}$ is well known from the Newtonian limit. It is the active gravitational mass of the system. But what is the physical significance of $M_{2}$? If the spacetime is vacuum at large distances we must have $M_{1}=M_{2}$, but this won’t be the case if the matter distribution extends out to infinity. In this paper I examine the physical meaning of $M_{2}$ and argue that it corresponds to the inertial mass of the system. This identification is made by considering the mass, associated with the energy-momentum tensor, that flows in from infinity in a spacetime that evolves from a Minkowski spacetime into the spacetime described by the metric (1). The mass that flows in is $M_{2}$, and since the energy-momentum tensor contains the inertial mass, not the gravitational mass, $M_{2}$ is the inertial mass. Some of the masses, therefore, correspond to the inertial mass, while the others correspond to the active gravitational mass. It is interesting to note that if $M_{1}\neq M_{2}$ the inertial and active gravitational masses are not equal. This does not imply a violation of the weak equivalence principle, which requires the equality of inertial and passive gravitational masses. The paper is organized as follows. In section 2 the above mass definitions are discussed in detail. In section 3 the masses for the spacetime with metric (1) are computed and the physical significance of the parameters $M_{1}$ and $M_{2}$ is investigated. I also examine the contributions of $M_{1}$ and $M_{2}$ to the gravitational redshift, the deflection of light, the Shapiro time delay and the precession of perihelia. The results of the paper are summarized in section 4. 
## 2 Discussion of the Various Mass Definitions

In this section I will consider eight definitions of the mass for asymptotically flat spacetimes.

The Active Gravitational Mass. In weak gravitational fields, where $g_{\mu\nu}=\eta_{\mu\nu}+h_{\mu\nu}$ and $|h_{\mu\nu}|\ll 1$, the geodesic equation is given by $\frac{d^{2}\vec{r}}{dt^{2}}\simeq\frac{1}{2}\vec{\nabla}h_{tt}.$ (2) Comparing this to the Newtonian equation gives $h_{tt}=\frac{2GM_{G}}{r}\;,$ (3) where $M_{G}$ is the active gravitational mass. In asymptotically flat spacetimes $g_{tt}\rightarrow-\left(1-\frac{2GM_{G}}{r}\right)\;\;\;\;\;\;\;as\;\;\;\;\;\;\;\;r\rightarrow\infty$ (4) allowing $M_{G}$ to be easily identified.

The $\rho$-mass. Consider a spherically symmetric asymptotically flat static spacetime. The Einstein equation $G_{tt}=-8\pi GT_{tt}$ gives $\frac{1}{r^{2}}\frac{d}{dr}\left[r\left(1-g_{rr}^{-1}\right)\right]=8\pi G\rho\;.$ (5) The solution to this equation is $g_{rr}(r)=\left[1-\frac{2Gm(r)}{r}\right]^{-1}$ (6) where $m(r)=4\pi\int_{0}^{r}\rho(r^{\prime})(r^{\prime})^{2}dr^{\prime}\;.$ (7) The $\rho$-mass is defined to be $M_{\rho}=4\pi\int_{0}^{\infty}\rho(r)r^{2}dr\;.$ (8) Thus, for large $r$ $g_{rr}\simeq 1+\frac{2GM_{\rho}}{r}$ (9) allowing $M_{\rho}$ to be easily identified. Note that the spacetime must be spherically symmetric for $M_{\rho}$ to be defined.

The Einstein, Landau-Lifshitz and ADM Masses. 
Einstein showed that the conservation laws $\nabla_{\nu}T^{\;\;\nu}_{\mu}=0$ can be written in the form $\frac{\partial}{\partial x^{\nu}}\left[\sqrt{-g}\left(T^{\;\;\nu}_{\mu}+t^{\;\;\nu}_{\mu}\right)\right]=0\;,$ (10) where $t^{\;\;\nu}_{\mu}$ is the Einstein or canonical energy-momentum pseudotensor and is given by $t^{\;\;\nu}_{\mu}=\frac{1}{2\kappa\sqrt{-g}}\left[\frac{\partial(\sqrt{-g}L)}{\partial g^{\alpha\beta}_{\;\;\;\;\;,\nu}}g^{\alpha\beta}_{\;\;\;\;\;,\mu}-\delta^{\;\;\nu}_{\mu}\sqrt{-g}L\right]\;,$ (11) where $L=g^{\mu\nu}\left(\Gamma^{\alpha}_{\mu\nu}\Gamma^{\beta}_{\alpha\beta}-\Gamma^{\alpha}_{\mu\beta}\Gamma^{\beta}_{\nu\alpha}\right)\;.$ (12) If the spacetime is foliated into spacelike hypersurfaces the quantity $P_{\mu}=\int\left[\sqrt{-g}\left(T^{\;\;t}_{\mu}+t^{\;\;t}_{\mu}\right)\right]d^{3}x$ (13) is the same on each hypersurface, if the system is closed. It is not possible to think of $t^{\;\;\nu}_{\mu}$ as the energy-momentum tensor of the gravitational field since it is not a tensor. For example, Bauer [2] has shown that $t^{\;\;\nu}_{\mu}$ does not vanish in flat spacetime if spherical coordinates are used and that the total energy of flat spacetime in spherical coordinates diverges. 
Freud [3] has shown that $\sqrt{-g}\left(T^{\;\;\nu}_{\mu}+t^{\;\;\nu}_{\mu}\right)=\frac{\partial(\sqrt{-g}U_{\mu}^{\;\;\nu\alpha})}{\partial x^{\alpha}}$ (14) and M$\o$ller [4] has shown that $U_{\mu}^{\;\;\nu\alpha}$ can be written as $\sqrt{-g}U_{\mu}^{\;\;\nu\alpha}=\frac{g_{\mu\beta}}{2\kappa\sqrt{-g}}\left[(-g)\left(g^{\nu\beta}g^{\alpha\lambda}-g^{\alpha\beta}g^{\nu\lambda}\right)\right]_{,\lambda}\;.$ (15) The quantity $P_{\mu}$ can now be written as a surface integral at infinity $P_{\mu}=\int\sqrt{-g}U_{\mu}^{\;\;tk}dS_{k}\;.$ (16) If the spacetime is asymptotically flat and coordinates are chosen so that $g_{\mu\nu}=\eta_{\mu\nu}+h_{\mu\nu}$ at large distances, with $h_{\mu\nu}$ dropping off as $1/r$ or faster, the integral will give a finite result. It is also easy to see that $P_{\mu}$ is left unchanged by arbitrary coordinate transformations that approach the identity at infinity. $P_{\mu}$ also transforms as a four-vector under Lorentz transformations. These properties of $P_{\mu}$ make it a candidate for the total energy and momentum of the system. Landau and Lifshitz [9] have shown that $(-g)\left(T^{\mu\nu}+t^{\mu\nu}\right)=\frac{\partial h^{\mu\nu\alpha}}{\partial x^{\alpha}}\;,$ (17) where $h^{\mu\nu\alpha}=\frac{1}{16\pi G}\left[(-g)\left(g^{\mu\nu}g^{\alpha\beta}-g^{\mu\alpha}g^{\nu\beta}\right)\right]_{,\beta}\;.$ (18) The antisymmetry of $h^{\mu\nu\alpha}$ in $\nu$ and $\alpha$ implies that $\frac{\partial}{\partial x^{\nu}}\left[(-g)\left(T^{\mu\nu}+t^{\mu\nu}\right)\right]=0$ (19) which in turn implies that $P^{\mu}=\int(-g)\left(T^{\mu\nu}+t^{\mu\nu}\right)dS_{\nu}$ (20) is a conserved quantity. Arnowitt, Deser and Misner [10] constructed the Hamiltonian for general relativity and identified it with the total energy. 
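The step from the antisymmetry of $h^{\mu\nu\alpha}$ in $\nu$ and $\alpha$ to the conservation law (19) is the elementary fact that a double divergence of an object antisymmetric in the two contracted indices vanishes identically, since symmetric second derivatives contract an antisymmetric array. A minimal symbolic check in two dimensions (the generating function and all names are ad hoc):

```python
import sympy as sp

t, x = sp.symbols('t x')
f = sp.sin(3 * t) * sp.exp(sp.cos(x))   # arbitrary smooth generating function

# 2D toy superpotential h[mu][nu][alpha], antisymmetric in its last two indices
coords = (t, x)
h = [[[0, f], [-f, 0]] for _ in range(2)]

# the double divergence d_nu d_alpha h^{mu nu alpha} vanishes identically
div2 = [sp.simplify(sum(sp.diff(h[mu][nu][al], coords[nu], coords[al])
                        for nu in range(2) for al in range(2)))
        for mu in range(2)]
```

The same symmetric-against-antisymmetric contraction argument underlies the conservation of the Freud superpotential in (14).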
In asymptotically flat coordinates the ADM mass is given by $M_{ADM}=\frac{1}{16\pi G}\int\left(g_{ik,k}-g_{kk,i}\right)dS^{i}\;.$ (21) This mass also follows from the energy-momentum pseudo-tensor discussed by Weinberg [11]. In this approach the metric is written as $g_{\mu\nu}=\eta_{\mu\nu}+h_{\mu\nu}\;,$ (22) where $h_{\mu\nu}$ vanishes at infinity, but is not necessarily small everywhere. The Einstein equations are written as $G_{\mu\nu}^{(1)}=-8\pi G\left(T_{\mu\nu}+t_{\mu\nu}\right)\;,$ (23) where $G_{\mu\nu}^{(1)}$ is the part of the Einstein tensor that is linear in $h_{\mu\nu}$ and $t_{\mu\nu}=\frac{1}{8\pi G}\left[G_{\mu\nu}-G_{\mu\nu}^{(1)}\right]\;.$ (24) The mass associated with $t_{\mu\nu}$ is identical to the ADM mass. The pseudo-tensor $t^{\mu\nu}$ can be written as $t^{\mu\nu}=\partial_{\alpha}\tilde{Q}^{\mu\nu\alpha}\;,$ (25) where I have modified Weinberg’s notation ($\tilde{Q}^{\mu\nu\alpha}$ is given below in equation (26)). Consider an asymptotically flat spacetime with $g_{\mu\nu}=\eta_{\mu\nu}+h_{\mu\nu}$, where $h_{\mu\nu}$ vanishes at infinity. In the asymptotic region $\sqrt{-g}U_{\mu}^{\;\;\nu\alpha}=h_{\mu}^{\;\;\nu\alpha}=\tilde{Q}_{\mu}^{\;\;\nu\alpha}=\frac{1}{2\kappa}\left\\{\delta^{\nu}_{\;\;\mu}\left[\partial^{\alpha}h-\partial_{\beta}h^{\beta\alpha}\right]-\delta^{\alpha}_{\;\;\mu}\left[\partial^{\nu}h-\partial_{\beta}h^{\beta\nu}\right]+\partial^{\nu}h^{\alpha}_{\;\;\mu}-\partial^{\alpha}h^{\nu}_{\;\;\mu}\right\\}$ (26) implying that the Einstein, Landau-Lifshitz and ADM masses are the same. The M$\bf{\o}$ller and Tolman Masses. M$\o$ller [4] noted that $P_{\mu}$ is not invariant under coordinate transformations on the $t=$constant hypersurfaces because $\sqrt{-g}\left(T^{\;\;t}_{\mu}+t^{\;\;t}_{\mu}\right)$ is not a 4-vector density. 
To correct this he used the freedom to modify the energy-momentum tensor by adding $S^{\;\;\nu}_{\mu}$ to $\Theta^{\;\;\nu}_{\mu}=\sqrt{-g}\left(T^{\;\;\nu}_{\mu}+t^{\;\;\nu}_{\mu}\right)$, where $\partial_{\nu}S^{\;\;\nu}_{\mu}=0$. M$\o$ller’s energy-momentum pseudo-tensor is given by $J^{\;\;\nu}_{\mu}=\Theta^{\;\;\nu}_{\mu}+S^{\;\;\nu}_{\mu}\;,$ (27) where $S^{\;\;\nu}_{\mu}$ was chosen so that $J^{\;\;t}_{\mu}=\Theta^{\;\;t}_{\mu}+S^{\;\;t}_{\mu}$ is a 4-vector density. In addition M$\o$ller showed that $J^{\;\;\nu}_{\mu}=\frac{\partial\chi^{\;\;\nu\alpha}_{\mu}}{\partial x^{\alpha}}\;,$ (28) where $\chi^{\;\;\nu\alpha}_{\mu}=\frac{\sqrt{-g}}{8\pi G}\left[\frac{\partial g_{\mu\beta}}{\partial x^{\lambda}}-\frac{\partial g_{\mu\lambda}}{\partial x^{\beta}}\right]g^{\nu\lambda}g^{\alpha\beta}\;.$ (29) The energy and momentum that follow from M$\o$ller’s energy-momentum pseudo-tensor are given by $P_{\mu}=\int\chi_{\mu}^{\;\;tk}dS_{k}\;.$ (30) M$\o$ller has shown that the total mass of a static spacetime that follows from $J^{\;\;t}_{\mu}$ can be written as $M_{M}=\int\left(T^{\;\;k}_{k}-T^{\;\;t}_{t}\right)\sqrt{-g}d^{3}x\;.$ (31) This is the same expression that was derived by Tolman [5]. The Tolman and M$\o$ller masses are, therefore, the same in static spacetimes (see [6]). In the asymptotic region $\chi^{\;\;\nu\alpha}_{\mu}$ is given by $\chi^{\;\;\nu\alpha}_{\mu}=\frac{1}{\kappa}\left[\partial^{\nu}h_{\mu}^{\;\;\alpha}-\partial^{\alpha}h_{\mu}^{\;\;\nu}\right]\;.$ (32) The Komar Mass. In an asymptotically flat spacetime with a timelike Killing vector $\xi^{\mu}$ the quantity $J^{\mu}=R^{\mu\nu}\xi_{\nu}$ (33) satisfies $\nabla_{\mu}J^{\mu}=0\;.$ (34) This continuity equation allows a conserved energy $M_{K}=\frac{1}{4\pi G}\int_{\Sigma}\sqrt{\gamma}n_{\mu}J^{\mu}d^{3}x\;,$ (35) to be defined, where $n^{\mu}$ is the normal to the hypersurface $\Sigma$ and $M_{K}$ is the Komar mass [7, 8].
The volume integral can be converted into a surface integral giving $M_{K}=\frac{1}{4\pi G}\int_{\partial\Sigma}\sqrt{\gamma^{(2)}}n_{\mu}\sigma_{\nu}\nabla^{\mu}\xi^{\nu}d^{2}x\;,$ (36) where $\sigma^{\mu}$ is the normal to $\partial\Sigma$. In the asymptotic region $n_{\mu}\sigma_{\nu}\nabla^{\mu}\xi^{\nu}$ is given by $n_{\mu}\sigma_{\nu}\nabla^{\mu}\xi^{\nu}=-\frac{x^{k}}{2r}\partial_{k}h_{tt}\;.$ (37) ## 3 Comparison of the Masses in Asymptotically Flat Spacetimes Consider a static asymptotically flat spacetime with a metric given by $ds^{2}\simeq-\left[1-\frac{2GM_{1}}{r}\right]dt^{2}+\left[1+\frac{2GM_{2}}{r}\right]dr^{2}+r^{2}d\Omega^{2}$ (38) at large $r$, where $M_{1}$ and $M_{2}$ are constants. Note that the spacetime does not have to be spherically symmetric, but only have the above asymptotic form. The energy-momentum tensor can be found from the Einstein equations $T^{\mu}_{\;\;\;\nu}=-\frac{1}{8\pi G}G^{\mu}_{\;\;\;\nu}.$ (39) The non-zero components, at large $r$, are given by $T^{t}_{\;\;\;t}\simeq-\frac{M_{2}^{2}}{4\pi r^{4}}\;,$ (40) $T^{r}_{\;\;\;r}\simeq\frac{(M_{1}-M_{2})}{4\pi r^{3}}\;,$ (41) and $T^{\theta}_{\;\;\;\theta}=T^{\phi}_{\;\;\;\phi}\simeq\frac{(M_{2}-M_{1})}{8\pi r^{3}}\;.$ (42) The expression for $T^{t}_{\;\;\;t}$ is sensitive to higher order terms in $g_{rr}$ (i.e. terms that fall off as $1/r^{2}$ or faster). For example if $g_{rr}=(1-2GM_{2}/r)^{-1}$ then $T^{t}_{\;\;\;t}=0$. It can be shown that $T^{t}_{\;\;\;t}$ will always fall off as $1/r^{4}$ or faster. The leading order terms in $T^{r}_{\;\;\;r},T^{\theta}_{\;\;\;\theta}$ and $T^{\phi}_{\;\;\;\phi}$ are insensitive to higher order terms in $g_{tt}$ and $g_{rr}$. It is interesting to note that this energy-momentum tensor violates the weak, strong and dominant energy conditions if $M_{1}\neq M_{2}$.
This implies that any field, such as the electromagnetic field or the Klein-Gordon scalar field, that satisfies any of the energy conditions, cannot produce a spacetime with $M_{1}\neq M_{2}$. Spacetimes with $M_{1}\neq M_{2}$ do occur in string theory. For example, the string metric for a charged dilaton black hole is given by ($G=1$) [12] $ds_{string}^{2}=-\frac{(1-2Me^{\phi_{0}}/r)}{(1-Q^{2}e^{3\phi_{0}}/Mr)}dt^{2}+\frac{dr^{2}}{(1-2Me^{\phi_{0}}/r)(1-Q^{2}e^{3\phi_{0}}/Mr)}+r^{2}d\Omega^{2}.$ (43) The values of $M_{1}$ and $M_{2}$ are given by $M_{1}=Me^{\phi_{0}}-\frac{Q^{2}e^{3\phi_{0}}}{2M}\;\;\;\;\;\;\;\;\;\;and\;\;\;\;\;\;\;\;\;\;M_{2}=Me^{\phi_{0}}+\frac{Q^{2}e^{3\phi_{0}}}{2M}.$ (44) If $Q\neq 0$ we see that $M_{1}\neq M_{2}$. In fact, for an extremal black hole ($Q^{2}=2M^{2}e^{-2\phi_{0}}$) $M_{1}$ vanishes and $M_{2}=2Me^{\phi_{0}}$. Spacetimes with $M_{1}\neq M_{2}$ also occur in Brans-Dicke theory [13]. The masses discussed in the previous section can be found from (3), (9), (21), (30), (36) and $h_{tt}=\frac{2GM_{1}}{r}\;\;\;\;\;\;\;\;\;\;and\;\;\;\;\;\;\;\;\;\;h_{ij}=\frac{2GM_{2}}{r^{3}}x_{i}x_{j}\;.$ (45) They are given by $M_{G}=M_{M}=M_{T}=M_{K}=M_{1}\;\;\;\;\;\;\;\;\;\;\;and\;\;\;\;\;\;\;\;\;\;\;\;\;M_{\rho}=M_{E}=M_{LL}=M_{ADM}=M_{2}\;,$ (46) where $M_{G}$ is the active gravitational mass, $M_{M}$ is the M$\o$ller mass, $M_{T}$ is the Tolman mass, $M_{K}$ is the Komar mass, $M_{\rho}$ is the $\rho$-mass, $M_{E}$ is the Einstein mass, $M_{LL}$ is the Landau-Lifshitz mass and $M_{ADM}$ is the ADM mass. Recall that the $\rho$-mass is defined only in spherically symmetric spacetimes. From $M_{G}=M_{1}$ it is clear that $M_{1}$ is the active gravitational mass of the system. What then does $M_{2}$ represent?
To answer this question consider a time dependent asymptotically flat spacetime with a metric given by $ds^{2}\simeq-\left[1-\frac{2GM_{1}(t)}{r}\right]dt^{2}+\left[1+\frac{2GM_{2}(t)}{r}\right]dr^{2}+r^{2}d\Omega^{2}$ (47) at large $r$, where $\lim_{t\rightarrow-\infty}M_{1,2}(t)=0$ (48) and $\lim_{t\rightarrow\infty}M_{1,2}(t)=M_{1,2}.$ (49) Alternatively $M_{1}(t)$ and $M_{2}(t)$ can be taken to be $M_{1,2}(t)=\left\\{\begin{array}[]{cc}0\;\;\;\;\;\;\;\;\;\;t\leq t_{1}&\\\ \tilde{M}_{1,2}(t)\;\;\;\;\;t_{1}<t<t_{2}&\\\ M_{1,2}\;\;\;\;\;\;\;\;\;\;t\geq t_{2}\end{array}\right\\}\;,$ (50) where $M_{1,2}(t)\in C^{n}$, with $n\geq 2$. This spacetime starts as a Minkowski spacetime and evolves into the spacetime discussed earlier which has a metric given by (38) at large $r$. The non-zero components of the energy-momentum tensor, at large $r$, are given by $T^{t}_{\;\;\;t}\simeq-\frac{M_{2}(t)^{2}}{4\pi r^{4}}\;,$ (51) $T^{r}_{\;\;\;r}\simeq\frac{\left[M_{1}(t)-M_{2}(t)\right]}{4\pi r^{3}}$ (52) $T^{\theta}_{\;\;\;\theta}=T^{\phi}_{\;\;\;\phi}\simeq-\frac{1}{8\pi r}\frac{d^{2}M_{2}(t)}{dt^{2}}\;,$ (53) and $T^{t}_{\;\;\;r}\simeq-\frac{1}{4\pi r^{2}}\frac{dM_{2}(t)}{dt}\;.$ (54) Once again, $T^{t}_{\;\;\;t}$ is sensitive to higher order terms in $g_{rr}$ while the other components are not sensitive to higher order terms in $g_{tt}$ or $g_{rr}$. Note that $T^{t}_{\;\;\;r}\neq 0$. This implies that mass will flow in from infinity. Consider a sphere of fixed coordinate radius $R$ with $R\rightarrow\infty$. Observers at rest on this sphere will measure an energy flux given by $T^{t}_{\;\;\;r}$ and the total rate at which energy flows in through the sphere is given by $\frac{dE}{dt}=\frac{dM_{2}(t)}{dt}\;.$ (55) The total energy that flows in through the sphere is, therefore, given by $E=M_{2}\;.$ (56) The energy-momentum tensor $T^{\mu\nu}$ contains the inertial mass, not the gravitational mass. To see this consider charged dust coupled to the electromagnetic field.
The energy-momentum tensor for the system is given by $T^{\mu\nu}=\rho_{m}U^{\mu}U^{\nu}+F^{\mu}_{\;\;\alpha}F^{\nu\alpha}-\frac{1}{4}g^{\mu\nu}F_{\alpha\beta}F^{\alpha\beta},$ (57) where $\rho_{m}$ is the mass density of the dust, $U^{\mu}$ is its four- velocity and $F^{\mu\nu}$ is the electromagnetic field tensor. The equations of motion that follow from $\nabla_{\mu}T^{\mu\nu}=0$ and the field equations $\nabla_{\mu}F^{\mu\nu}=-\rho_{c}U^{\nu}\;\;\;\;\;\;\;\;\;\;\;and\;\;\;\;\;\;\;\;\;\;\;\;\nabla_{[\mu}F_{\alpha\beta]}=0$ (58) are $\rho_{m}U^{\nu}\nabla_{\nu}U^{\mu}=\rho_{c}F^{\mu}_{\;\;\;\nu}U^{\nu},$ (59) where $\rho_{c}$ is the charge density of the dust and $[\cdot\cdot\cdot]$ denotes antisymmetrization. The mass density $\rho_{m}$ is obviously the inertial mass density. Therefore, an amount of inertial mass equal to $M_{2}$ flowed into the system. The inertial mass may, of course, contribute to the active gravitational mass, but there is no a priori reason that they must be equal. There are two independent conservation laws: $\partial_{\nu}J^{\;\;\nu}_{\mu}=0$ (60) and $\frac{\partial}{\partial x^{\nu}}\left[\sqrt{-g}\left(T^{\;\;\nu}_{\mu}+t^{\;\;\nu}_{\mu}\right)\right]=0$ (61) where $J^{\;\;\nu}_{\mu}$ is the M$\o$ller pseudotensor and $t^{\;\;\nu}_{\mu}$ is the Einstein pseudotensor. The Landau-Lifshitz or Weinberg pseudotensor could be used instead of the Einstein pseudotensor since they all give the same mass. Since $M_{M}=M_{G}$, I postulate that (60) corresponds to the conservation of the active gravitational mass. Since $M_{E}$ corresponds to the inertial mass that flowed into the system, I postulate that (61) corresponds to the conservation of inertial mass. I therefore identify $M_{2}$ as the inertial mass of the system. Note that the inertial and active gravitational masses do not have to be the same, but both are individually conserved. 
This does not imply a violation of the weak equivalence principle which requires the equality of inertial and passive gravitational masses. The Einstein equations force the inertial and active gravitational masses to be the same if the matter that extends out to large $r$ satisfies the weak, strong or dominant energy condition. If the spacetime has a metric of the form (38) and is vacuum at large $r$ the Einstein equations also force the inertial and active gravitational masses to be the same. Bonnor [14] and Rosen and Cooperstock [15] discuss the equality of inertial, active and passive gravitational masses for bodies composed of ideal fluids. Bonnor claims that the active gravitational mass does not equal the passive gravitational mass (which equals the inertial mass). Rosen and Cooperstock claim that, if the gravitational self energy is included, all three masses are the same. In both articles it is assumed that $M_{1}=M_{2}$. For the remainder of this paper (except in the conclusion) I will denote $M_{1}$ by $M_{G}$ and $M_{2}$ by $M_{ADM}$. I have chosen to denote $M_{2}$ by $M_{ADM}$ instead of $M_{I}$ (the inertial mass) because the ADM mass is commonly used in the literature. Ohanian [16, 17, 18] comes to the same conclusion (i.e. $M_{2}$ equals the inertial mass) by identifying the volume integral over the canonical energy-momentum pseudotensor of the matter and gravitational field as the inertial mass. There can be, however, a problem with using the canonical energy-momentum pseudotensor. Consider, for example, the electromagnetic field in flat spacetime. The canonical energy-momentum pseudotensor $\Theta_{\mu}^{\;\;\nu}$ is related to the standard energy-momentum tensor $T_{\mu}^{\;\;\nu}$ by $\Theta_{\mu}^{\;\;\nu}=T_{\mu}^{\;\;\nu}-\partial_{\alpha}\left(F^{\alpha\nu}A_{\mu}\right)\;.$ (62) Note that $\Theta_{\mu\nu}$ is neither symmetric nor gauge invariant.
Ohanian argues that the total energy derived from $\Theta_{\mu}^{\;\;\nu}$ is the same as the total energy derived from $T_{\mu}^{\;\;\nu}$. This can only be true if the last term in (62) does not contribute to the energy. The contribution of this term to the energy is given by $\int\partial_{k}\left(F^{tk}A_{t}\right)d^{3}x=\int\left(F^{tk}A_{t}\right)dS_{k}=-\int\phi\vec{E}\cdot d\vec{a}\;.$ (63) This term will vanish if the fields drop off sufficiently rapidly at infinity, which Ohanian assumed. This term, however, is not necessarily zero and is not gauge invariant. Consider, for example, the two sets of fields $\vec{E}=\frac{q}{r^{2}}\hat{r},\;\;\;\;\;\;\;\;\;\;\phi=\frac{q}{r}\;\;\;\;\;\;\;\;\;\;\vec{A}=0$ (64) and $\vec{E}=\frac{q}{r^{2}}\hat{r},\;\;\;\;\;\;\;\;\;\;\phi=\frac{q}{r}-\chi(t)\;\;\;\;\;\;\;\;\;\;\vec{A}=0\;$ (65) which are related by a gauge transformation. The integral in (63) vanishes for the first set of fields but is given by $\int\partial_{k}\left(F^{tk}A_{t}\right)d^{3}x=q\chi(t)$ (66) for the second set of fields. This term is neither zero nor gauge invariant. Note that this term is not time independent, which implies that the four-momentum $P_{\mu}=\int\Theta_{\mu}^{\;\;\;t}d^{3}x$ (67) is not conserved. To see why this is the case consider $\frac{dP_{\mu}}{dt}=\int\frac{\partial\Theta_{\mu}^{\;\;\;t}}{\partial t}d^{3}x=-\int\partial_{k}\Theta_{\mu}^{\;\;\;k}d^{3}x=-\int\Theta_{\mu}^{\;\;\;k}dS_{k}\neq 0.$ (68) The canonical energy-momentum tensor, therefore, cannot be used in general. Now consider the motion of a test mass at large $r$. In isotropic coordinates $ds^{2}\simeq-\left[1-\frac{2GM_{G}}{\rho}\right]dt^{2}+\left[1+\frac{2GM_{ADM}}{\rho}\right]\left[dx^{2}+dy^{2}+dz^{2}\right]$ (69) at large $\rho$, where $r=\rho(1+GM_{ADM}/2\rho)^{2}$.
The equations of motion are $\left[1+\frac{2GM_{ADM}}{c^{2}\rho}\right]\frac{d^{2}\vec{\rho}}{d\tau^{2}}=-\left[\frac{GM_{G}}{\rho^{2}}\hat{\rho}\right]\dot{t}^{2}-\frac{GM_{ADM}}{c^{2}\rho^{2}}\left[v^{2}\hat{\rho}-2(\hat{\rho}\cdot\vec{v})\vec{v}\right]$ (70) and $\left[1-\frac{2GM_{G}}{c^{2}\rho}\right]c^{2}\dot{t}^{2}-\left[1+\frac{2GM_{ADM}}{c^{2}\rho}\right]v^{2}=1,$ (71) where $\dot{t}=dt/d\tau$, $\vec{v}=d\vec{\rho}/d\tau$ and $\vec{\rho}=(x,y,z)$. Note that higher order terms in the metric (69) will produce additional terms in the above equations of motion. The gravitational mass gives the expected contribution, but the ADM mass only produces corrections to the Newtonian force. If $M_{G}=0$ a particle at rest will not experience a “force”. This is hidden in Schwarzschild spacetimes because the ADM mass has the same value as the gravitational mass. It is also interesting to examine the contributions of $M_{G}$ and $M_{ADM}$ to the gravitational redshift, the deflection of light, the Shapiro time delay and the precession of perihelia. Consider two observers, one at position $x_{1}^{\mu}$ and the second at $x_{2}^{\mu}$. If the first observer sends a light signal with frequency $\nu_{1}$, the second observer will measure its frequency to be [11] $\nu_{2}=\sqrt{\frac{g_{tt}(x_{2})}{g_{tt}(x_{1})}}\nu_{1}\;.$ (72) To lowest order $\frac{\Delta\nu}{\nu}\simeq\frac{GM_{G}}{r_{1}}-\frac{GM_{G}}{r_{2}},$ (73) which is the result obtained from the equivalence principle or by considering a photon moving in a Newtonian gravitational field. It is interesting to consider the gravitational redshift in isotropic coordinates. Transforming (38) to isotropic coordinates gives $g_{tt}^{(iso)}=-\left(1-\frac{2GM_{G}}{\rho}+\frac{2G^{2}M_{G}M_{ADM}}{\rho^{2}}+\cdot\cdot\cdot\right).$ (74) In isotropic coordinates the gravitational redshift, to lowest order, depends only on $M_{G}$, but higher order corrections depend on $M_{ADM}$.
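The isotropic-coordinate expansion can be checked symbolically. The sketch below (using sympy; not part of the original derivation, with symbol names chosen for illustration) substitutes $r=\rho(1+GM_{ADM}/2\rho)^{2}$ into $g_{tt}=-(1-2GM_{G}/r)$ and expands in powers of $u=1/\rho$.

```python
import sympy as sp

G, Mg, Ma = sp.symbols('G M_G M_ADM', positive=True)
u = sp.symbols('u', positive=True)              # u = 1/rho

# r = rho*(1 + G*M_ADM/(2*rho))**2, written in terms of u = 1/rho
r = (1/u)*(1 + G*Ma*u/2)**2

# g_tt = -(1 - 2*G*M_G/r), expanded to second order in 1/rho
gtt = sp.series(-(1 - 2*G*Mg/r), u, 0, 3).removeO()
print(sp.expand(gtt))
# the u**2 coefficient is -2*G**2*M_G*M_ADM
```

The second-order coefficient, $-2G^{2}M_{G}M_{ADM}$, is the higher order correction through which $M_{ADM}$ enters the redshift.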
This results from the fact that the transformation between $r$ and $\rho$ involves $M_{ADM}$. One could also consider the transformation $r=\frac{M_{G}}{M_{ADM}}\bar{r}$ giving $\bar{g}_{tt}=-(1-2GM_{ADM}/{\bar{r}})$. In this coordinate system the gravitational redshift depends on $M_{ADM}$ not $M_{G}$. However, in this coordinate system the metric does not go to the Minkowski metric, in spherical coordinates, at large $\bar{r}$. If the radial coordinate transformation is restricted to transformations that satisfy $\bar{r}\rightarrow r$ as $r\rightarrow\infty$ the lowest order term will depend only on $M_{G}$. To examine light deflection it is convenient to use the Eddington-Robertson expansion $ds^{2}=-\left(1-\frac{2GM}{r}\right)dt^{2}+\left(1+\frac{2\gamma GM}{r}\right)dr^{2}+r^{2}d\Omega^{2}.$ (75) To match the metric (38) with $M_{1}=M_{G}$ and $M_{2}=M_{ADM}$ requires that $M=M_{G},\;\;\;\;\;\;\;\;\;and\;\;\;\;\;\;\;\;\;M_{ADM}=\gamma M\;.$ (76) In terms of the Eddington-Robertson parameters the bending of light, to lowest order, is given by $\Delta\theta=\frac{4GM}{b}\left(\frac{1+\gamma}{2}\right),$ (77) where $b$ is the impact parameter. In terms of $M_{G}$ and $M_{ADM}$ this becomes $\Delta\theta=\frac{4G}{b}\left(\frac{M_{G}+M_{ADM}}{2}\right).$ (78) Note that for the Schwarzschild metric ($M_{G}=M_{ADM}$) the gravitational mass produces half of the total deflection, which corresponds to the prediction of the equivalence principle. The predictions of the equivalence principle for the deflection of light and gravitational redshift, therefore, correspond to setting the mass equal to $M_{G}$ and setting $M_{ADM}=0$. The Shapiro time delay is given by $\Delta T=4GM\left(\frac{1+\gamma}{2}\right)\left[\ln\left(\frac{4r_{E}r_{p}}{r_{0}^{2}}\right)+1\right],$ (79) where $r_{E}$ is the radius of the Earth’s orbit, $r_{p}$ is the radius of the orbit of the planet used to reflect the radar signal and $r_{0}$ is the minimum distance of the radar beam from the Sun.
In terms of $M_{G}$ and $M_{ADM}$ this becomes $\Delta T=4G\left(\frac{M_{G}+M_{ADM}}{2}\right)\left[\ln\left(\frac{4r_{E}r_{p}}{r_{0}^{2}}\right)+1\right].$ (80) Thus, in a Schwarzschild spacetime the gravitational mass and the ADM mass each contribute half of the time delay. Now consider the perihelion shift. In the Eddington-Robertson approach a term of order $G^{2}M^{2}/r^{2}$ is added to $g_{tt}$. This term does not affect the light deflection or Shapiro time delay (to lowest order), but it does affect the perihelion shift. In this paper I have taken $g_{tt}\simeq-(1-2GM_{G}/r)$ at large distances and not considered higher order terms. It is not possible to deduce what portion of the perihelion shift is due to $M_{G}$ and what portion is due to $M_{ADM}$ without knowing the higher order terms. ## 4 Conclusion In this paper I examined various mass definitions in asymptotically flat spacetimes with a metric given by $ds^{2}\simeq-\left[1-\frac{2GM_{1}}{r}\right]dt^{2}+\left[1+\frac{2GM_{2}}{r}\right]dr^{2}+r^{2}d\Omega^{2}$ (81) at large $r$, where $M_{1}$ and $M_{2}$ are constants. I showed that $M_{G}=M_{M}=M_{T}=M_{K}=M_{1}\;\;\;\;\;\;\;\;\;\;\;and\;\;\;\;\;\;\;\;\;\;\;\;\;M_{\rho}=M_{E}=M_{LL}=M_{ADM}=M_{2}\;,$ (82) where $M_{G}$ is the active gravitational mass, $M_{M}$ is the M$\o$ller mass, $M_{T}$ is the Tolman mass, $M_{K}$ is the Komar mass, $M_{\rho}$ is the $\rho$-mass, $M_{E}$ is the Einstein mass, $M_{LL}$ is the Landau-Lifshitz mass and $M_{ADM}$ is the ADM mass. From $M_{G}=M_{1}$ it is clear that $M_{1}$ is the active gravitational mass of the system. The energy-momentum tensor in such spacetimes violates the weak, strong, and dominant energy conditions and, therefore, cannot be produced by an electromagnetic or a Klein-Gordon scalar field. Such spacetimes do, however, appear in string theory [12], and in Brans-Dicke theory [13].
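As a quick check of the string-theory example, the mass readings (44) can be recovered by expanding the metric (43) at large $r$. The sketch below (sympy, with $G=1$) reads $M_{1}$ from $g_{tt}\simeq-(1-2M_{1}/r)$ and $M_{2}$ from $g_{rr}\simeq 1+2M_{2}/r$.

```python
import sympy as sp

M, Q, phi0 = sp.symbols('M Q phi_0', positive=True)
u = sp.symbols('u', positive=True)          # u = 1/r

a = 2*M*sp.exp(phi0)                        # 2 M e^{phi_0}
b = Q**2*sp.exp(3*phi0)/M                   # Q^2 e^{3 phi_0} / M

gtt = -(1 - a*u)/(1 - b*u)                  # g_tt of (43) with u = 1/r
grr = 1/((1 - a*u)*(1 - b*u))               # g_rr of (43)

# g_tt ~ -(1 - 2 M1/r)  =>  M1 = (coefficient of u in g_tt)/2
M1 = sp.series(gtt, u, 0, 2).removeO().coeff(u, 1)/2
# g_rr ~ 1 + 2 M2/r     =>  M2 = (coefficient of u in g_rr)/2
M2 = sp.series(grr, u, 0, 2).removeO().coeff(u, 1)/2

print(sp.simplify(M1 - (M*sp.exp(phi0) - Q**2*sp.exp(3*phi0)/(2*M))))  # 0
print(sp.simplify(M2 - (M*sp.exp(phi0) + Q**2*sp.exp(3*phi0)/(2*M))))  # 0
```

Both differences vanish, confirming $M_{1}\neq M_{2}$ whenever $Q\neq 0$.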
To determine the physical significance of $M_{2}$ I considered a time dependent spacetime with a metric given by $ds^{2}\simeq-\left[1-\frac{2GM_{1}(t)}{r}\right]dt^{2}+\left[1+\frac{2GM_{2}(t)}{r}\right]dr^{2}+r^{2}d\Omega^{2}$ (83) at large $r$, where $\lim_{t\rightarrow-\infty}M_{1,2}(t)=0$ (84) and $\lim_{t\rightarrow\infty}M_{1,2}(t)=M_{1,2}.$ (85) This spacetime starts as a Minkowski spacetime and evolves into the spacetime with a metric given by (81) at large $r$. The energy-momentum tensor, at large $r$, has a $T^{t}_{\;\;\;r}$ component given by $T^{t}_{\;\;\;r}\simeq-\frac{1}{4\pi r^{2}}\frac{dM_{2}(t)}{dt}\;.$ (86) The rate at which energy flows in through a sphere of radius $R\rightarrow\infty$ is $\frac{dE}{dt}=\frac{dM_{2}(t)}{dt}$ (87) implying that the total energy that flows in through the sphere is given by $E=M_{2}\;.$ (88) Since the energy-momentum tensor contains the inertial mass, not the gravitational mass, $M_{2}$ is the amount of inertial mass that flowed into the system. There are two independent conservation laws: $\partial_{\nu}J^{\;\;\nu}_{\mu}=0$ (89) and $\frac{\partial}{\partial x^{\nu}}\left[\sqrt{-g}\left(T^{\;\;\nu}_{\mu}+t^{\;\;\nu}_{\mu}\right)\right]=0$ (90) where $J^{\;\;\nu}_{\mu}$ is the M$\o$ller pseudotensor and $t^{\;\;\nu}_{\mu}$ is the Einstein pseudotensor. The Landau-Lifshitz or Weinberg pseudotensor could be used instead of the Einstein pseudotensor. Since $M_{M}=M_{G}$, I postulate that (89) corresponds to the conservation of active gravitational mass. Since $M_{E}$ corresponds to the inertial mass that flowed into the system, I postulate that (90) corresponds to the conservation of inertial mass. I therefore identify $M_{2}$ as the inertial mass of the system. Note that the inertial and active gravitational masses do not have to be the same, but both are individually conserved. This does not imply a violation of the weak equivalence principle which requires the equality of inertial and passive gravitational masses.
The Einstein equations force the inertial and active gravitational masses to be the same if the matter that extends out to large $r$ satisfies the weak, strong or dominant energy condition. If the spacetime has a metric of the form (81) and is vacuum at large $r$ the Einstein equations also force the inertial and active gravitational masses to be the same. For the remainder of the conclusion I will denote $M_{1}$ by $M_{G}$ and $M_{2}$ by $M_{ADM}$. The effect of $M_{G}$ and $M_{ADM}$ on the motion of a test particle was determined from the geodesic equation. At large $r$ the gravitational mass gives the expected Newtonian contribution, but the ADM mass only produces corrections to the Newtonian force. If $M_{G}=0$ a particle at rest will not experience a “force”. This is hidden in Schwarzschild spacetimes because the ADM mass has the same value as the gravitational mass. I next examined the contributions of $M_{G}$ and $M_{ADM}$ to the gravitational redshift, the deflection of light, the Shapiro time delay and the precession of perihelia. I showed that, to lowest order, only $M_{G}$ contributes to the gravitational redshift while $M_{G}$ and $M_{ADM}$ contribute equally to the deflection of light. Thus, the predictions of the equivalence principle correspond to setting the mass equal to $M_{G}$ and setting $M_{ADM}=0$. I also showed that $M_{G}$ and $M_{ADM}$ contribute equally to the Shapiro time delay. The precession of perihelia depends on a higher order term in $g_{tt}$, so this term must be chosen before the contributions of $M_{G}$ and $M_{ADM}$ can be determined. ## Acknowledgements This research was supported by the Natural Sciences and Engineering Research Council of Canada. ## References * [1] A. Einstein, Ann. Physik 49, 769 (1916); Sitzungsber. preuss. Acad. Wiss. 1, 448 (1918) * [2] H. Bauer, Phys. Zeits. 19, 163 (1918) * [3] P. Freud, Ann. Math., Princeton 40, 417 (1939) * [4] C. M$\o$ller, Annals of Physics 4, 347 (1958) * [5] R. Tolman, Phys. Rev.
35, 875 (1930) * [6] P. S. Florides, J. Phys.: Conf. Ser. 189 012014 (2009) * [7] A. Komar, Phys. Rev. 113, 934 (1959) * [8] S.M. Carroll, Spacetime and Geometry. An Introduction to General Relativity (Addison-Wesley, 2004) * [9] L.D. Landau and E.M. Lifshitz, The Classical Theory of Fields (Pergamon Press, Oxford, 1975) * [10] R. Arnowitt, S. Deser and C.W. Misner, Gravitation: an introduction to current research, L. Witten, ed. (Wiley, New York, 1962) pp. 227-264 * [11] S. Weinberg, Gravitation and Cosmology: Principles and Applications of the General Theory of Relativity (John Wiley and Sons, New York, Chichester, Brisbane, Toronto, 1972) * [12] D. Garfinkle, G.T. Horowitz and A. Strominger, Phys. Rev. D43, 3140 (1991) * [13] C. Brans and R.H. Dicke, Phys. Rev. 124, 925 (1961); C. Brans, Phys. Rev. 125, 2195 (1962) * [14] W.B. Bonner, Class. Quantum Grav. 9, 269 (1992) * [15] N. Rosen and F.I. Cooperstock, Class. Quantum. Grav. 9, 2657 (1992) * [16] H.C. Ohanian, Ann. Phys. 67, 648 (1971) * [17] H.C. Ohanian, Int. J. Theor. Phys. 4, 273 (1971) * [18] H.C. Ohanian, J. Math. Phys. 14, 1892 (1973)
# Sufficient conditions for the unique solution of a class of new Sylvester-like absolute value equation††thanks: This research was supported by National Natural Science Foundation of China (No. 11961082). Shi-Liang Wu, Cui-Xia Li †School of Mathematics, Yunnan Normal University, Kunming, 650500, PR China wushiliang1999<EMAIL_ADDRESS> ###### Abstract In this paper, a class of new Sylvester-like absolute value equation (AVE) $AXB-|CXD|=E$ with $A,C\in\mathbb{R}^{m\times n}$, $B,D\in\mathbb{R}^{p\times q}$ and $E\in\mathbb{R}^{m\times q}$ is considered, which is quite distinct from the published work by Hashemi [Applied Mathematics Letters, 112 (2021) 106818]. Some sufficient conditions for the unique solution of the Sylvester-like AVE are obtained. Keywords: New Sylvester-like absolute value equation; unique solution; sufficient condition AMS classification: 90C05, 90C30, 65F10 ## 1 Introduction As is known, the standard absolute value equation (AVE) $Ax+|x|=f$ (1.1) and its general version $Ax+C|x|=f$ (1.2) are powerful tools in the field of optimization, including the complementarity problem, linear programming and convex quadratic programming, where $A$ and $C$ may be rectangular matrices of the same order. As a result, the AVE (1.1)/(1.2) has received considerable attention in recent years. The AVE (1.2) was first introduced in [1] by Rohn. Since then, research on it has focused on two aspects: one is the development of numerical methods for obtaining its numerical solution, see [2, 3, 4, 5, 6, 7, 8, 9, 10, 11], and the other is theoretical analysis, including solvability, bounds for the solutions, various equivalent reformulations, and so on, see [12, 13, 14, 15, 16, 17, 18].
Recently, in [19], Hashemi generalized the concept of absolute value equation and considered the following Sylvester-like absolute value equation (AVE) $AXB+C|X|D=F,$ (1.3) where $A,C\in\mathbb{R}^{m\times n}$, $B,D\in\mathbb{R}^{p\times q}$, $F\in\mathbb{R}^{m\times q}$ are given. For the Sylvester-like AVE (1.3), Hashemi in [19] established some sufficient conditions for its unique solution. Further, in [20], Wang and Li considered the Sylvester-like AVE (1.3) with square coefficient matrices. Some new sufficient conditions were gained in [20], which are different from the results in [19]. Further, in [21], Wu introduced a type of new generalized absolute value equation (NGAVE), given by $Ax-|Bx|=d,$ (1.4) with $A,B\in\mathbb{R}^{n\times n}$ and $d\in\mathbb{R}^{n}$. Likewise, Wu in [21] established some necessary and sufficient conditions for the unique solution of the NGAVE (1.4). Clearly, the NGAVE (1.4) is quite different from the AVE (1.2). Inspired by the work in [21], together with the Sylvester-like AVE (1.3), in this paper, we consider a type of new Sylvester-like absolute value equation (AVE) given by $AXB-|CXD|=F,$ (1.5) where $A,C\in\mathbb{R}^{m\times n}$, $B,D\in\mathbb{R}^{p\times q}$, $F\in\mathbb{R}^{m\times q}$ are given. Here, the new Sylvester-like AVE (1.5) not only generalizes the NGAVE (1.4), but also arises in other fields, such as interval matrix equations [22, 23, 24, 25, 26, 27], robust control [28], and so on. Similar to the Sylvester-like AVE (1.3), the theory and numerical treatment of the new Sylvester-like AVE (1.5) remain interesting and challenging because of the nonlinear and nondifferentiable term $|CXD|$ in (1.5). This is our motivation for this paper. At present, to our knowledge, necessary and sufficient conditions for the unique solution of the new Sylvester-like AVE (1.5) are not available.
Based on this, the goal of the present paper is to fill this gap and obtain some sufficient conditions for the unique solution of the new Sylvester-like AVE (1.5). Moreover, some useful necessary and sufficient conditions for the unique solution of the new Sylvester-like AVE (1.5) are obtained for square coefficient matrices. ## 2 Main result In this section, we will present some conditions for the unique solution of the new Sylvester-like AVE (1.5). To achieve this goal, by using the Kronecker product and the vec operator, the new Sylvester-like AVE (1.5) can be expressed as the NGAVE below $Sx-|Tx|=f$ (2.1) with $S=B^{T}\otimes A$, $T=D^{T}\otimes C$, $x=vec(X)$ and $f=vec(F)$, where ‘$\otimes$’, ‘$vec$’ stand for the Kronecker product and the vec operator, respectively. To discuss the sufficient condition for the unique solution of the new Sylvester-like AVE (1.5), Lemmas 2.1, 2.2, 2.3 and 2.4 are required. ###### Lemma 2.1. _[21]_ Let matrix $A$ in (1.4) be nonsingular. If $\rho((I-2D)BA^{-1})<1$ (2.2) for any diagonal matrix $D=\mbox{diag}(d_{i})$ with $d_{i}\in[0,1]$, then the NGAVE (1.4) for any $d\in\mathbb{R}^{n}$ has a unique solution. ###### Lemma 2.2. _[21]_ The NGAVE (1.4) for any $d\in\mathbb{R}^{n}$ has a unique solution if and only if matrix $A+(I-2D)B$ is nonsingular for any diagonal matrix $D=\mbox{diag}(d_{i})$ with $d_{i}\in[0,1]$. ###### Lemma 2.3. _[29]_ Let $A,B\in\mathbb{R}^{n\times n}$. Then $\sigma_{i}(A+B)\geq\sigma_{i}(A)-\sigma_{1}(B),i=1,2\ldots,n,$ where $\sigma_{1}\geq\ldots\geq\sigma_{n}(\geq 0)$ are the singular values of a matrix. Based on Lemmas 2.2 and 2.3, Lemma 2.4 can be obtained. ###### Lemma 2.4. If $\sigma_{1}(B)<\sigma_{n}(A)$ (2.3) where $\sigma_{1}$ and $\sigma_{n}$ denote the largest and smallest singular value, respectively, then the NGAVE (1.4) for any $d\in\mathbb{R}^{n}$ has a unique solution. Proof.
Based on Lemma 2.2, the NGAVE (1.4) has a unique solution for any $d\in\mathbb{R}^{n}$ when the matrix $A+(I-2D)B$ is nonsingular for any diagonal matrix $D=\mbox{diag}(d_{i})$ with $0\leq d_{i}\leq 1$. So, let $\sigma_{n}(A+(I-2D)B)$ stand for the minimal singular value of the matrix $A+(I-2D)B$. Based on Lemma 2.3, we have $\sigma_{n}(A+(I-2D)B)\geq\sigma_{n}(A)-\sigma_{1}((I-2D)B).$ Since $\sigma_{1}((I-2D)B)\leq\sigma_{1}((I-2D))\sigma_{1}(B)\leq\sigma_{1}(B)$, the result in Lemma 2.4 holds under the condition (2.3). $\hfill{}\Box$ Based on the above lemmas, we can present some conditions for the unique solution of the new Sylvester-like AVE (1.5) for any $F$. First, by using Lemma 2.1 indirectly, we can obtain the following result, see Theorem 2.1. ###### Theorem 2.1. Let $A,B$ be square nonsingular matrices in (1.5). If $\rho((I-2\Lambda)((B^{-1}D)^{T}\otimes CA^{-1}))<1$ (2.4) for any diagonal matrix $\Lambda=\mbox{diag}(\lambda_{i})$ with $\lambda_{i}\in[0,1]$, then the new Sylvester-like AVE (1.5) has a unique solution for any $F$. Proof. Since $S^{-1}=(B^{T}\otimes A)^{-1}=B^{-T}\otimes A^{-1},$ we have $TS^{-1}=(D^{T}\otimes C)(B^{-T}\otimes A^{-1})=D^{T}B^{-T}\otimes CA^{-1}=(B^{-1}D)^{T}\otimes CA^{-1}.$ Based on Lemma 2.1, clearly, the result in Theorem 2.1 is right. $\hfill{}\Box$ Clearly, we can use $\rho(((B^{-1}D)^{T}\otimes CA^{-1})(I-2\Lambda))<1$ instead of the condition (2.4) in Theorem 2.1. In fact, the condition (2.4) in Theorem 2.1 is not easy to implement in practice. Even if the condition (2.4) in Theorem 2.1 can be checked, a large number of arithmetic operations is required to compute the spectral radius of the huge matrix. For instance, for $A,C\in\mathbb{R}^{m\times m}$ and $B,D\in\mathbb{R}^{n\times n}$, the number of arithmetic operations is $\mathcal{O}(n^{3}m^{3})$. Therefore, the sum of the powers here is $3+3=6$, i.e., the complexity here is sextic.
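The reduction of (1.5) to the NGAVE (2.1) rests on the identity $vec(AXB)=(B^{T}\otimes A)vec(X)$ for the column-major vec operator, together with $vec(|CXD|)=|(D^{T}\otimes C)vec(X)|$. A quick numerical sketch (numpy; the random matrices are for illustration only):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, p, q = 3, 4, 5, 2
A, C = rng.standard_normal((m, n)), rng.standard_normal((m, n))
B, D = rng.standard_normal((p, q)), rng.standard_normal((p, q))
X = rng.standard_normal((n, p))

# column-major (Fortran-order) vec, matching the Kronecker identity
vec = lambda M: M.reshape(-1, order='F')

lhs = vec(A @ X @ B - np.abs(C @ X @ D))     # left side of (1.5), vectorized
S = np.kron(B.T, A)                          # S = B^T (x) A
T = np.kron(D.T, C)                          # T = D^T (x) C
rhs = S @ vec(X) - np.abs(T @ vec(X))        # left side of (2.1)
print(np.allclose(lhs, rhs))                 # True
```

The agreement holds for any sizes $m,n,p,q$, since the absolute value acts entrywise and therefore commutes with vectorization.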
Faced with this situation, we have to find some conditions that can be easily checked. By a simple entrywise calculation, we have $\displaystyle|(I-2\Lambda)((B^{-1}D)^{T}\otimes CA^{-1})|$ $\displaystyle\leq|I-2\Lambda||(B^{-1}D)^{T}\otimes CA^{-1}|$ $\displaystyle\leq|(B^{-1}D)^{T}\otimes CA^{-1}|$ $\displaystyle=|(B^{-1}D)^{T}|\otimes|CA^{-1}|.$ Note that $\rho(|(B^{-1}D)^{T}|\otimes|CA^{-1}|)=\rho(|(B^{-1}D)^{T}|)\rho(|CA^{-1}|)=\rho(|B^{-1}D|)\rho(|CA^{-1}|),$ and that the spectral radius of a matrix is bounded by the spectral radius of any entrywise upper bound of its absolute value. So, we have the following result, see Theorem 2.2. ###### Theorem 2.2. Let $A,B$ be square nonsingular matrices in (1.5). If $\rho(|B^{-1}D|)\rho(|CA^{-1}|)<1,$ (2.5) then the new Sylvester-like AVE (1.5) has a unique solution for any $F$. In addition, based on Theorem 5.6.10 in [30], we also have $\displaystyle\rho((I-2\Lambda)((B^{-1}D)^{T}\otimes CA^{-1}))$ $\displaystyle\leq\sigma_{1}((I-2\Lambda)((B^{-1}D)^{T}\otimes CA^{-1}))$ $\displaystyle\leq\sigma_{1}(I-2\Lambda)\sigma_{1}((B^{-1}D)^{T}\otimes CA^{-1})$ $\displaystyle\leq\sigma_{1}((B^{-1}D)^{T}\otimes CA^{-1})$ $\displaystyle=\sigma_{1}((B^{-1}D)^{T})\sigma_{1}(CA^{-1})$ $\displaystyle=\sigma_{1}(B^{-1}D)\sigma_{1}(CA^{-1}).$ Based on this, Theorem 2.3 can be obtained. ###### Theorem 2.3. Let $A,B$ be square nonsingular matrices in (1.5). If $\sigma_{1}(B^{-1}D)\sigma_{1}(CA^{-1})<1,$ (2.6) then the new Sylvester-like AVE (1.5) has a unique solution for any $F$. Compared with Theorem 2.1, indeed, the conditions of Theorems 2.2 and 2.3 are easy to check. Here, it is noted that the conditions of Theorems 2.2 and 2.3 only work if $A$ and $B$ are nonsingular. In addition, for a general square matrix $H$, there is no general relation between $\sigma_{1}(H)$ and $\rho(|H|)$ unless one adds additional requirements. Based on this fact, Theorem 2.3 sometimes performs better than Theorem 2.2, and vice versa.
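As an illustration of Theorem 2.3 (a sketch, not part of the theory above), when condition (2.6) holds and $A$, $B$ are well conditioned, the solution of (1.5) can be computed by the Picard iteration $X_{k+1}=A^{-1}(F+|CX_{k}D|)B^{-1}$, which is a contraction for the scaling chosen below (numpy; random data for illustration only):

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 4, 3
# diagonally dominant A, B make condition (2.6) easy to satisfy
A = 10*np.eye(n) + 0.1*rng.standard_normal((n, n))
B = 10*np.eye(p) + 0.1*rng.standard_normal((p, p))
C = rng.standard_normal((n, n))
D = rng.standard_normal((p, p))
F = rng.standard_normal((n, p))

s1 = lambda M: np.linalg.svd(M, compute_uv=False)[0]  # largest singular value
Ai, Bi = np.linalg.inv(A), np.linalg.inv(B)
print(s1(Bi @ D)*s1(C @ Ai) < 1)   # condition (2.6) holds: True

# Picard iteration X_{k+1} = A^{-1} (F + |C X_k D|) B^{-1}
X = np.zeros((n, p))
for _ in range(200):
    X = Ai @ (F + np.abs(C @ X @ D)) @ Bi
residual = np.max(np.abs(A @ X @ B - np.abs(C @ X @ D) - F))
print(residual < 1e-12)            # True: X satisfies (1.5) to machine precision
```

The contraction here follows from $\sigma_{1}(A^{-1})\sigma_{1}(C)\sigma_{1}(D)\sigma_{1}(B^{-1})<1$ for this scaling; the iteration is only a convenient demonstration, not a method proposed in the paper.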
To obtain a more general sufficient condition, based on Lemma 2.4, Theorem 2.4 can be obtained; its proof is omitted. ###### Theorem 2.4. If $\sigma_{1}(C)\sigma_{1}(D)<\sigma_{n}(A)\sigma_{n}(B),$ (2.7) then the new Sylvester-like AVE $(\ref{eq:5})$ has a unique solution for any $F$. Compared with Theorems 2.2 and 2.3, Theorem 2.4 applies not only to square matrices but also to rectangular matrices. This implies that the condition (2.7) in Theorem 2.4 is indeed more general. It is not difficult to see that all the conditions of Theorems 2.2, 2.3 and 2.4 can be checked, and that the computational complexity of each of them is cubic. By the way, for $m=n=p=q$ in (1.5), combining Theorems 3.1 and 3.2 in [21] with Lemma 2.4, we can obtain the following necessary and sufficient conditions for the unique solution of the new Sylvester-like AVE $(\ref{eq:5})$, see Theorem 2.5. ###### Theorem 2.5. Let $S=B^{T}\otimes A$, $T=D^{T}\otimes C$. Then the following statements are equivalent: $1.$ the new Sylvester-like AVE $(\ref{eq:5})$ has a unique solution for any $F\in\mathbb{R}^{n\times n}$; $2.$ $\\{S+T,S-T\\}$ has the row $\mathcal{W}$-property; $3.$ $\det(F_{1}(S+T)+F_{2}(S-T))\neq 0$ for arbitrary nonnegative diagonal matrices $F_{1},F_{2}\in\mathbb{R}^{n\times n}$ with $\mbox{diag}(F_{1}+F_{2})>0$; $4$. the matrix $(S-T)(S+T)^{-1}$ is a $P$-matrix (all its principal minors are positive), provided that the matrix $S+T$ is invertible; $5$. the matrix $S+(I-2\Lambda)T$ is nonsingular for any diagonal matrix $\Lambda=\mbox{diag}(\lambda_{i})$ with $\lambda_{i}\in[0,1]$. Since the order of the matrices $S$ and $T$ in Theorem 2.5 is $n^{2}\times n^{2}$, the number of arithmetic operations required to check any of parts 2, 3, 4 and 5 of Theorem 2.5 is at least $\mathcal{O}(n^{6})$. Therefore, the computational complexity of all the conditions in Theorem 2.5 is at least sextic.
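Since only singular values enter condition (2.7), checking it costs one SVD per input matrix. The sketch below (matrix shapes and entries are invented for illustration) exercises the condition on genuinely rectangular inputs, which Theorems 2.2 and 2.3 cannot handle.

```python
import numpy as np

def thm24_holds(A, B, C, D):
    """Check sigma_1(C)*sigma_1(D) < sigma_min(A)*sigma_min(B), condition
    (2.7); only singular values are needed, so the cost is cubic."""
    sv = lambda M: np.linalg.svd(M, compute_uv=False)
    return sv(C)[0]*sv(D)[0] < sv(A)[-1]*sv(B)[-1]

# rectangular example: A, C are 4x3 and B, D are 3x2
A = np.vstack([3.0*np.eye(3), np.zeros((1, 3))])   # sigma_min(A) = 3
B = np.vstack([2.0*np.eye(2), np.zeros((1, 2))])   # sigma_min(B) = 2
C = 0.2*np.ones((4, 3))                            # sigma_1(C) = 0.2*sqrt(12)
D = 0.1*np.ones((3, 2))                            # sigma_1(D) = 0.1*sqrt(6)
assert thm24_holds(A, B, C, D)
```

Here $\sigma_1(C)\sigma_1(D)\approx 0.17$ while $\sigma_n(A)\sigma_n(B)=6$, so (2.7) holds comfortably.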
## 3 Conclusions In this paper, the unique solution of a new type of Sylvester-like absolute value equation (AVE) $AXB-|CXD|=E$ with $A,C\in\mathbb{R}^{m\times n}$, $B,D\in\mathbb{R}^{p\times q}$ and $E\in\mathbb{R}^{m\times q}$ has been discussed. Some useful sufficient conditions for the unique solution of the new Sylvester-like AVE are obtained. In particular, all the sufficient conditions of Theorems 2.2, 2.3 and 2.4 can be checked with cubic complexity in the order of the input matrices. ## References * [1] J. Rohn, Systems of linear interval equations, Linear Algebra Appl., 126 (1989) 39-78. * [2] L. Caccetta, B. Qu, G.-L. Zhou, A globally and quadratically convergent method for absolute value equations, Comput. Optim. Appl., 48 (2011) 45-58. * [3] O.L. Mangasarian, A generalized Newton method for absolute value equations, Optim. Lett., 3 (2009) 101-108. * [4] J. Rohn, An algorithm for solving the absolute value equations, Electron. J. Linear Algebra, 18 (2009) 589-599. * [5] D.K. Salkuyeh, The Picard-HSS iteration method for absolute value equations, Optim. Lett., 8 (2014) 2191-2202. * [6] S.-L. Wu, C.-X. Li, A special shift-splitting iterative method for the absolute value equations, AIMS Math., 5 (2020) 5171-5183. * [7] C.-X. Li, S.-L. Wu, Modified SOR-like iteration method for absolute value equations, Math. Probl. Eng., 2020 (2020) 9231639. * [8] P. Guo, S.-L. Wu, C.-X. Li, On SOR-like iteration method for solving absolute value equations, Appl. Math. Lett., 97 (2019) 107-113. * [9] Y.-Y. Lian, C.-X. Li, S.-L. Wu, Weaker convergent results of the generalized Newton method for the generalized absolute value equations, J. Comput. Appl. Math., 338 (2018) 221-226. * [10] C.-X. Li, A preconditioned AOR iterative method for the absolute value equations, Inter. J. Comput. Meth., 14 (2017) 1750016. * [11] C.-X. Li, A modified generalized Newton method for the absolute value equations, J. Optim. Theory Appl., 170 (2016) 1055-1059. * [12] J.
Rohn, A theorem of the alternatives for the equation $Ax+B|x|=b$, Linear Multilinear A., 52 (2004) 421-426. * [13] S.-L. Wu, C.-X. Li, A note on unique solvability of the absolute value equation, Optim. Lett., 14 (2020) 1957-1960. * [14] O.L. Mangasarian, R.R. Meyer, Absolute value equations, Linear Algebra Appl., 419 (2006) 359-367. * [15] S.-L. Wu, C.-X. Li, The unique solution of the absolute value equations, Appl. Math. Lett., 76 (2018) 195-200. * [16] O. Prokopyev, On equivalent reformulations for absolute value equations, Comput. Optim. Appl., 44 (2009) 363-372. * [17] M. Hladík, Bounds for the solutions of absolute value equations, Comput. Optim. Appl., 69 (2018) 243-266. * [18] S.-L. Wu, S.-Q. Shen, On the unique solution of the generalized absolute value equation, Optim. Lett., 2020, https://doi.org/10.1007/s11590-020-01672-2. * [19] B. Hashemi, Sufficient conditions for the solvability of a Sylvester-like absolute value matrix equation, Appl. Math. Lett., 112 (2021) 106818. * [20] L.-M. Wang, C.-X. Li, New sufficient conditions for the unique solution of a square Sylvester-like absolute value equation, Appl. Math. Lett., 116 (2021) 106966. * [21] S.-L. Wu, The unique solution of a class of the new generalized absolute value equation, Appl. Math. Lett., 116 (2021) 107029. * [22] A. Neumaier, Interval Methods for Systems of Equations, Cambridge University Press, Cambridge, 1990. * [23] N.P. Seif, S.A. Hussein, A.S. Deif, The interval Sylvester equation, Computing., 52 (1994) 233-244. * [24] B. Hashemi, M. Dehghan, Results concerning interval linear systems with multiple right-hand sides and the interval matrix equation $AX=B$, J. Comput. Appl. Math., 235 (2011) 2969-2978. * [25] B. Hashemi, M. Dehghan, The interval Lyapunov matrix equation: analytical results and an efficient numerical technique for outer estimation of the united solution set, Math. Comput. Model., 55 (2012) 622-633. * [26] M. Dehghani-Madiseh, M. 
Hladík, Efficient approaches for enclosing the united solution set of the interval generalized Sylvester matrix equations, Appl. Numer. Math., 126 (2018) 18-33. * [27] M. Dehghani-Madiseh, M. Dehghan, Generalized solution sets of the interval generalized Sylvester matrix equation $\sum^{p}_{i=1}A_{i}X_{i}+\sum^{q}_{j=1}Y_{j}B_{j}=C$ and some approaches for inner and outer estimations, Comput. Math. Appl., 68 (2014) 1758-1774. * [28] V.N. Shashikhin, Robust assignment of poles in large-scale interval systems, Autom. Rem. Contr., 63 (2002) 200-208. * [29] F.-Z. Zhang, Matrix Theory: Basic results and techniques (Second edition). Springer, New York, 2011. * [30] R.A. Horn, C.R. Johnson, Matrix Analysis. Cambridge University Press, Cambridge, 1986.
Dispersive Analysis of the Low Energy $\gamma^* N\rightarrow \pi N$ Process

Xiong-Hui Cao$^1$, Yao Ma$^1$ aaron<EMAIL_ADDRESS>Han-Qing Zheng$^{1,2}$ [Present Address: College of Physics, Sichuan University, Chengdu, Sichuan 610065, People's Republic of China]

$^1$Department of Physics and State Key Laboratory of Nuclear Physics and Technology, Peking University, Beijing 100871, People's Republic of China
$^2$Collaborative Innovation Center of Quantum Matter, Beijing 100871, People's Republic of China

We use a dispersion representation based on unitarity and analyticity to study the low energy \(\gamma^* N\rightarrow \pi N\) process in the $S_{11}$ channel. Final state interactions in the $\pi N$ system are critical to this analysis. The left-hand part of the partial wave amplitude is imported from the $\mathcal{O}(p^2)$ chiral perturbation theory result. On the right-hand part, the final state interaction is calculated through the Omnès formula in the $S$ wave. It is found that a good numerical fit can be achieved with only one subtraction parameter, and the electroproduction experimental data of the multipole amplitudes \(E_{0+},\ S_{0+}\) in the energy region below the \(\Delta(1232)\) are well described when the photon virtuality $Q^2 \leq 0.1\ \mathrm{GeV}^2$. § INTRODUCTION The electromagnetic interactions of the nucleon have long been recognized as an important source of information for understanding strong interaction physics [1, 2, 3, 4, 5, 6, 7]. The investigation of pion photoproduction started in the 1950s with the seminal work of Chew et al. (CGLN) [1], where the formalism for pion photoproduction on a nucleon target was developed, and fixed-$t$ dispersion relations (DRs) were used as a tool for the analyses of the reaction data. The postulates underlying the DR approach are the analyticity, unitarity, and crossing symmetry of the $S$ matrix. The CGLN formalism was later extended to pion electroproduction [8, 9], and DRs were used in the analyses of the experimental data [10, 9, 11, 12].
Based on the recent low energy experiments, partial wave analyses have been performed to study the underlying structure of the reaction amplitudes and to describe the properties of the nucleon resonances [13, 14, 15, 7]. Since the 1980s, the electroproduction and related processes have been successfully explored using chiral perturbation theory ($\chi$PT) at low energies [16, 17, 18, 19, 20]. For the calculation of loop diagrams, several renormalization schemes exist to solve the power-counting breaking problem, e.g., the heavy-baryon approach in Ref. [16] and the EOMS scheme adopted in Refs. [17, 20]. However, $\chi$PT works well only near threshold and fails at slightly higher energies. Hence unitarization methods are adopted in order to suppress the contributions from higher energies and to restore the unitarity of the amplitude. Several unitarization methods have already been explored (for a recent review, see Ref. [21]). The coupled-channel $N/D$ method was used to unitarize $\chi$PT amplitudes in Ref. [15], and the Jülich model was adopted to study photoproduction and related processes in Ref. [7]. In this paper, our \(\gamma^* N\rightarrow \pi N\) amplitudes are obtained through a dispersive analysis [22], in which we start from the chiral $\mathcal{O}(p^2)$ \(\gamma^* N\rightarrow \pi N\) amplitudes and estimate the \(\pi N\) final state interaction by the Omnès solution [23] in the single channel approximation. To achieve such a dispersive analysis, care has been taken in understanding the complicated analytic structure of the amplitudes. Based on our dispersion representation, the multipole amplitude ($S_{11}$ $E_{0+}$ and $S_{0+}$) data from Refs. [24, 13, 25, 26, 27, 5] below the \(\Delta(1232)\) peak have been fitted.
This work extends our previous analysis of pion photoproduction [28] to the virtual-photon process with a photon virtuality $Q^2$ up to $0.2\ \mathrm{GeV}^2$, and finds a good description of the data for $Q^2\leq 0.1\ \mathrm{GeV}^2$ with only one parameter. Besides, a comparison between the $\mathcal{O}(p^2)$ calculation of this paper and the one up to $\mathcal{O}(p^4)$ from Ref. [17] is performed, and a discrepancy between the two results is noticed in the higher $Q^2$ region. This paper is organized as follows. In Sec. <ref>, a brief introduction to pion electroproduction is given. In Sec. <ref>, we set up the dispersive formalism for the \(\gamma^* N \to \pi N\) process and analyze the singularities that appear in this process. In Sec. <ref>, numerical results for the multipoles are presented. Finally, we give our conclusions in Sec. <ref>. § PION ELECTROPRODUCTION §.§ Basics of single pion electroproduction off the nucleon In this section we give a short introduction to the notation describing the electroproduction of pions. Single pion electroproduction off the nucleon is the process \begin{align} e\left(l_{1}\right)+N\left(p_{1}\right) \rightarrow e\left(l_{2}\right)+N\left(p_{2}\right)+\pi^a(q)\ , \end{align} where \(a\) is the isospin index of the pion and $l_1\ (l_2)$, $p_1\ (p_2)$, and $q$ are the incoming (outgoing) electron, incoming (outgoing) nucleon, and pion momenta, respectively. Because the interaction between the electron and the nucleon is purely electromagnetic, every additional virtual photon exchange brings one more suppression factor of the fine structure constant $\alpha=e^{2} /(4 \pi) \approx 1 / 137$. Hence, we need only consider the lowest-order contribution, the so-called one-photon-exchange approximation; see Fig. <ref>.
Pion electroproduction in the one-photon-exchange approximation. $k=l_1-l_2$ represents the momentum of the single exchanged virtual photon. The shaded circle represents the full hadronic vertex. In this approximation, the invariant amplitude $\mathcal{M}$ is interpreted as the product of the polarization vector $\epsilon_\mu$ of the virtual photon and the hadronic transition current matrix element $\mathcal{M}^\mu$, \begin{align}\label{eps mu} \mathcal{M}=\epsilon_{\mu} \mathcal{M}^{\mu}=e \frac{\bar{u}\left(l_{1}\right) \gamma_{\mu} u\left(l_{2}\right)}{k^{2}} \mathcal{M}^{\mu}\ , \end{align} \begin{align} \mathcal{M}^{\mu}=-i e\left\langle N\left(p_{2}\right), \pi(q)\left|J^{\mu}(0)\right| N\left(p_{1}\right)\right\rangle\ , \end{align} with $J^\mu$ the electromagnetic current operator. Since $k^\mu \epsilon_\mu =0$ in both photoproduction and electroproduction, it is possible to separate the pure electromagnetic part of the process from the hadronic part, which is the process \begin{align} \gamma^{*}(k)+N\left(p_{1}\right) \rightarrow N\left(p_{2}\right)+\pi(q)\ , \end{align} where $\gamma^*$ refers to a (spacelike) virtual photon, so we can define $k^2=-Q^2<0$, with $Q^2$ called the photon virtuality.
Mandelstam variables $s,\ t, \text{ and } u$ are defined as \begin{align} s=\left(p_{1}+k\right)^{2},\quad t=\left(p_{1}-p_{2}\right)^{2},\quad u=\left(p_{1}-q\right)^{2}\ , \end{align} and satisfy $s+t+u=2 m_{N}^{2}+m_{\pi}^{2}-Q^{2}$, where $m_N$ and $m_\pi$ denote the nucleon mass and the pion mass, respectively. In the center-of-mass (cm) frame of the $\pi N$ final state system[In this section, the superscript $*$ refers to the physical quantity in the cm frame.], the energies of the photon, $k_0^*$, the pion, $E^*_\pi$, and the incoming (outgoing) nucleon, $E_1^*\ (E_2^*)$, are given by \begin{align} \begin{aligned} k_{0}^{*} &=\frac{W^{2}-Q^{2}-m_{N}^{2}}{2 W} ,\quad E_\pi^*=\frac{W^{2}+m_{\pi}^{2}-m_{N}^{2}}{2 W}\ , \\ E_1^* &=\frac{W^{2}+m_{N}^{2}+Q^{2}}{2 W} ,\quad E_2^*=\frac{W^{2}+m_{N}^{2}-m_{\pi}^{2}}{2 W}\ , \end{aligned} \end{align} where $W=\sqrt{s}$ is the cm total energy. The magnitudes of the initial and final state momenta in the cm frame are \begin{align} \begin{aligned} \left|\bk^{*}\right| &=\sqrt{\left(\frac{W^{2}-m_{N}^{2}-Q^{2}}{2 W}\right)^{2}+Q^{2}}\ , \\ \left|\bq^{*}\right| &=\sqrt{\left(\frac{W^{2}-m_{N}^{2}+m_{\pi}^{2}}{2 W}\right)^{2}-m_{\pi}^{2}}\ . \end{aligned} \end{align} The real photon equivalent energy in the laboratory frame, $k^\mathrm{lab}$, is given by \begin{align} k^\mathrm{lab}=\frac{W^{2}-m_{N}^{2}}{2 m_{N}}\ , \end{align} and $k^{\mathrm{cm}}=(m_N/W)k^{\mathrm{lab}}$. The cm scattering angle $\theta^*$ between the pion three-momentum and the $z$ axis, defined by the incoming photon direction, is depicted in Fig.
<ref>. Scattering angle $\theta^*$ in the cm frame. The scattering amplitude of pion electroproduction can be parametrized in terms of the Ball amplitudes [10], which are defined in Lorentz-covariant form, \begin{align} -i e\left\langle N^{\prime} \pi\left|J^{\mu}(0)\right| N\right\rangle=\bar{u}\left(p_{2}\right)\left(\sum_{i=1}^{8} B_{i} V_{i}^{\mu}\right) u\left(p_{1}\right)\ , \end{align} where $u(p_1)$ and $\overline{u}(p_2)$ are the Dirac spinors of the nucleon in the initial and final states, respectively. Here we use the notation of Refs. [29, 9, 16], which is slightly different from that of Refs. [2, 17]: \begin{align} \begin{aligned} \label{b8} V_1^\mu&=\gamma_5\gamma^\mu \slashed{k}\ ,\quad V_2^\mu=2\gamma_5 P^\mu\ , \quad V_3^\mu=2\gamma_5q^\mu\ ,\quad V_4^\mu=2\gamma_5 k^\mu\ , \\ V_5^\mu&=\gamma_5\gamma^\mu\ ,\quad V_6^\mu=\gamma_5 P^\mu\slashed{k}\ , \quad V_7^\mu=\gamma_5k^\mu \slashed{k}\ ,\quad V_8^\mu=\gamma_5 q^\mu\slashed{k}\ , \end{aligned} \end{align} where $P=(p_1+p_2)/2$ and $\slashed{k}=\gamma^\mu k_\mu$. Using electromagnetic current conservation, $k_\mu \mathcal{M}^\mu=0$, only six independent amplitudes are required for the description of pion electroproduction. Furthermore, in pion photoproduction ($Q^2 = 0$), only four independent amplitudes survive. The parameterization of Ref.
[16] takes care of current conservation already from the beginning, which contains only six independent amplitudes $A_i$, \begin{align}\label{A_i} \mathcal{M}^{\mu}=\bar{u}\left(p_{2}\right)\left(\sum_{i=1}^{6} A_{i} M_{i}^{\mu}\right) u\left(p_{1}\right) \end{align} \begin{align} \begin{aligned} M_{1}^{\mu} &=\frac{1}{2}\gamma_{5}\left(\gamma^{\mu} \slashed{k}-\slashed{k} \gamma^{\mu}\right)\ , \\ M_{2}^{\mu} &=2\gamma_{5}\left(P^{\mu} k \cdot\left(q-\frac{1}{2} k\right)-\left(q-\frac{1}{2} k\right)^{\mu} k \cdot P\right)\ , \\ M_{3}^{\mu} &=\gamma_{5}\left(\gamma^{\mu} k \cdot q-\slashed{k} q^{\mu}\right)\ , \\ M_{4}^{\mu} &=2\gamma_{5}\left(\gamma^{\mu} k \cdot P-\slashed{k} P^{\mu}\right)-2 m_{N} M_{1}^{\mu}\ , \\ M_{5}^{\mu} &=\gamma_{5}\left(k^{\mu} k \cdot q-k^{2} q^{\mu}\right)\ , \\ M_{6}^{\mu} &=\gamma_{5}\left(k^{\mu}\slashed{k}-k^{2} \gamma^{\mu}\right)\ . \end{aligned} \end{align} Each of them individually satisfies gauge invariance $k_{\mu} M_{i}^{\mu}=0$. The scalar functions $A_i$ and $B_i$ can be linked through \begin{align} \begin{aligned}\label{AB} A_{1} &=B_{1}-m_N B_{6}\ ,\quad A_{2} =\frac{2}{m_{\pi}^{2}-t} B_{2}\ , \quad A_{3} =-B_{8}\ ,\quad A_{4} =-\frac{1}{2} B_{6}\ ,\quad A_{6} =B_{7}\ , \\ A_{5} &=\frac{2}{s+u-2m_N^{2}}\left(B_{1}-\frac{s-u}{2\left(m_{\pi}^{2}-t\right)} B_{2}+2 B_{4}\right)=\frac{1}{k^{2}}\left(\frac{s-u}{t-m_{\pi}^{2}} B_{2}-2 B_{3}\right)\ . \end{aligned} \end{align} The CGLN amplitudes $\mathcal{F}_i$ are another common parameterization [1, 29], which plays an important role in experiments and partial wave analyses. These amplitudes are defined in the cm frame via \begin{align} \epsilon_{\mu} \bar{u}\left(p_{2}\right)\left(\sum_{i=1}^{6} A_{i} M_{i}^{\mu}\right) u\left(p_{1}\right)=\frac{4 \pi W}{m_{N}} \chi_{2}^{\dagger} \mathbf{F} \chi_{1}\ , \end{align} where $\chi_1$ and $\chi_2$ denote initial and final Pauli spinors, respectively. 
Electromagnetic current conservation allows us to work in the gauge where the polarization vector of virtual photon has a vanishing longitudinal component. In terms of the polarization vector of Eq. (<ref>) this is achieved by introducing the vector [16, 30, 31], \begin{align} b_\mu=\epsilon_\mu-\frac{\bep\cdot\hat{\bk}}{|\bk|}k_\mu\ , \end{align} where $b_0 \neq 0$, but $\bb\cdot\hat{\bk}=0$ ($\hat{\bk}=\bk/|\bk|$). $\mathcal{F}$ may be written as [$\bsi=(\sigma_1,\sigma_2,\sigma_3)$], \begin{align} \mathbf{F}=&i\bsi\cdot\bb \mathcal{F}_1+\bsi\cdot\hat{\bq}\bsi\cdot(\hat{\bk}\times\bb)\mathcal{F}_2+i\bsi\cdot\hat{\bk}\hat{\bq}\cdot\bb \mathcal{F}_3+i\bsi\cdot\hat{\bq}\hat{\bq}\cdot\bb \mathcal{F}_4 \nonumber \\ &-i\bsi\cdot\hat{\bq}b_0 \mathcal{F}_7-i\bsi\cdot\hat{\bk}b_0 \mathcal{F}_8\ . \end{align} We can connect $A_i$ and $\mathcal{F}_i$ through algebraic calculations, and the results can be found in Appendix <ref>. The CGLN amplitudes can be expanded into multipole amplitudes[29], \begin{align}\label{F8} \begin{aligned} \mathcal{F}_{1} &=\sum_{l=0}^{\infty}\left\{\left[l M_{l+}+E_{l+}\right] P_{l+1}^{\prime}(x)+\left[(l+1) M_{l-}+E_{l-}\right] P_{l-1}^{\prime}(x)\right\}\ , \\ \mathcal{F}_{2} &=\sum_{l=1}^{\infty}\left\{(l+1) M_{l+}+l M_{l-}\right\} P_{l}^{\prime}(x)\ , \\ \mathcal{F}_{3} &=\sum_{l=1}^{\infty}\left\{\left[E_{l+}-M_{l+}\right] P_{l+1}^{\prime \prime}(x)+\left[E_{l-}+M_{l-}\right] P_{l-1}^{\prime \prime}(x)\right\}\ , \\ \mathcal{F}_{4} &=\sum_{l=2}^{\infty}\left\{M_{l+}-E_{l+}-M_{l-}-E_{l-}\right\} P_{l}^{\prime \prime}(x)\ , \\ \mathcal{F}_{7} &=\sum_{l=1}^{\infty}\left[l S_{l-}-(l+1) S_{l+}\right] P_{l}^{\prime}(x)=\frac{\left|\bk^{*}\right|}{k_{0}^{*}} \mathcal{F}_{6}\ , \\ \mathcal{F}_{8} &=\sum_{l=0}^{\infty}\left[(l+1) S_{l+} P_{l+1}^{\prime}(x)-l S_{l-} P_{l-1}^{\prime}(x)\right]=\frac{\left|\bk^{*}\right|}{k_{0}^{*}} \mathcal{F}_{5}\ , \end{aligned} \end{align} with $x=\cos\theta=\hat{\bq} \cdot \hat{\bk}$, $P_l(x)$ the Legendre 
polynomial of degree $l$, $P^\prime_l=\mathrm{d}P_l/\mathrm{d}x$, and so on. The subscript $l$ denotes the orbital angular momentum of the pion-nucleon system in the final state. The multipoles $E_{l \pm}, M_{l \pm}, \text{ and } S_{l \pm}$ are functions of the cm total energy $W$ and the photon virtuality $Q^2$, and refer to transverse electric, magnetic, and scalar transitions,[Sometimes the longitudinal multipoles are used instead of the scalar multipoles; they satisfy the relation $L_{l\pm}=(k_0/|\bk|)S_{l\pm}$.] respectively. The subscript $l_\pm$ denotes the total angular momentum $j = l \pm 1/2$ of the final state. By inverting the above equations, the multipoles can be projected out completely [9]: \begin{align} \begin{aligned} E_{l+} &= \int_{-1}^{1} \frac{d x}{2(l+1)}\left[P_{l} \mathcal{F}_{1}-P_{l+1} \mathcal{F}_{2}+\frac{l}{2 l+1}\left(P_{l-1}-P_{l+1}\right) \mathcal{F}_{3}+\frac{l+1}{2 l+3}\left(P_{l}-P_{l+2}\right) \mathcal{F}_{4}\right]\ , \\ E_{l-} &= \int_{-1}^{1} \frac{d x}{2 l}\left[P_{l} \mathcal{F}_{1}-P_{l-1} \mathcal{F}_{2}-\frac{l+1}{2 l+1}\left(P_{l-1}-P_{l+1}\right) \mathcal{F}_{3}+\frac{l}{2 l-1}\left(P_{l}-P_{l-2}\right) \mathcal{F}_{4}\right]\ , \\ M_{l+} &=\int_{-1}^{1} \frac{d x}{2(l+1)}\left[P_{l} \mathcal{F}_{1}-P_{l+1} \mathcal{F}_{2}-\frac{1}{2 l+1}\left(P_{l-1}-P_{l+1}\right) \mathcal{F}_{3}\right]\ , \\ M_{l-} &=\int_{-1}^{1} \frac{d x}{2 l}\left[-P_{l} \mathcal{F}_{1}+P_{l-1} \mathcal{F}_{2}+\frac{1}{2 l+1}\left(P_{l-1}-P_{l+1}\right) \mathcal{F}_{3}\right]\ , \\ S_{l+} &=\int_{-1}^{1} \frac{d x}{2(l+1)}\left[P_{l+1} \mathcal{F}_{7}+P_{l} \mathcal{F}_{8}\right]\ , \\ S_{l-} &=\int_{-1}^{1} \frac{d x}{2 l}\left[P_{l-1} \mathcal{F}_{7}+P_{l} \mathcal{F}_{8}\right]\ . \end{aligned} \end{align} Please refer to Appendix <ref> for the connections between the multipoles and the partial wave helicity amplitudes.
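As a sanity check on the multipole expansion and its inversion, one can build the CGLN amplitudes from a pair of toy multipoles and verify that the projection integrals, evaluated by Gauss-Legendre quadrature, return them. The numerical values below are arbitrary and serve only to exercise the Legendre orthogonality; this is a consistency sketch, not part of the fit.

```python
import numpy as np
from numpy.polynomial.legendre import leggauss, Legendre

def P(l, x):    # Legendre polynomial P_l(x)
    return Legendre.basis(l)(x)
def dP(l, x):   # P_l'(x)
    return Legendre.basis(l).deriv()(x) if l >= 1 else 0.0*x
def d2P(l, x):  # P_l''(x)
    return Legendre.basis(l).deriv(2)(x) if l >= 2 else 0.0*x

E0p, M1p = 0.7, -0.3          # toy multipole values, for illustration only
x, w = leggauss(16)           # Gauss-Legendre nodes/weights on [-1, 1]

# CGLN amplitudes built from the expansion, keeping only E_{0+} and M_{1+}
F1 = E0p*dP(1, x) + M1p*dP(2, x)
F2 = 2.0*M1p*dP(1, x)
F3 = -M1p*d2P(2, x)
F4 = 0.0*x

# inversion formulas for l = 0 (E_{0+}) and l = 1 (M_{1+})
E0p_out = np.sum(w*(P(0, x)*F1 - P(1, x)*F2 + (1/3)*(P(0, x) - P(2, x))*F4))/2
M1p_out = np.sum(w*(P(1, x)*F1 - P(2, x)*F2 - (1/3)*(P(0, x) - P(2, x))*F3))/4

assert np.isclose(E0p_out, E0p) and np.isclose(M1p_out, M1p)
```

Since the integrands are low-degree polynomials, a 16-point quadrature reproduces the input multipoles to machine precision.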
The isospin structure of the scattering amplitude can be written as \begin{align} A(\gamma^*+N \to \pi^a+N^\prime)=\chi_2^\dagger \bigg \{ \delta^{a3} A^{(+)} +i \epsilon^{a 3 b} \tau^b A^{(-)} +\tau^a A^{(0)}\bigg \} \chi_1 \ , \end{align} where \(\tau_a\) (\(a=1,2,3\)) are Pauli matrices. The isospin amplitudes corresponding to $A_i$ of Eq. (<ref>) obey a crossing symmetry[9, 16], so \begin{align}\label{iso} \begin{aligned} A_{i}^{(+, 0)}(s, t, u) &=A_{i}^{(+, 0)}(u, t, s) \qquad i=1,2,4 \\ A_{i}^{(+, 0)}(s, t, u) &=-A_{i}^{(+, 0)}(u, t, s) \qquad i=3,5,6 \\ A_{i}^{(-)}(s, t, u) &=A_{i}^{(-)}(u, t, s) \qquad i=3,5,6 \\ A_{i}^{(-)}(s, t, u) &=-A_{i}^{(-)}(u, t, s) \qquad i=1,2,4 \end{aligned} \end{align} We can define the isospin transition amplitudes by $A^{I, I_3}(A^{\frac{3}{2}, \pm \frac{1}{2}},A^{\frac{1}{2}, \pm \frac{1}{2}})$, where $\{I, I_3\}$ denote isospin of the final $\pi N$ system. In the notation $\ket{I, I_3}$, the isospin part of the state vectors for the nucleon and the pion is written as \begin{gather} |p\rangle=\left|\frac{1}{2},+\frac{1}{2}\right\rangle\ , \quad|n\rangle=\left|\frac{1}{2},-\frac{1}{2}\right\rangle\ , \\ \left|\pi^{+}\right\rangle=-|1,+1\rangle\ , \quad\left|\pi^{0}\right\rangle=|1,0\rangle\ , \quad\left|\pi^{-}\right\rangle=|1,-1\rangle\ . \end{gather} So isospin transition amplitudes can be obtained from \(A^{(\pm)}\) and \(A^{(0)}\) via \begin{align} A^{\frac{3}{2}, \frac{1}{2}} &=A^{\frac{3}{2},-\frac{1}{2}}=\sqrt{\frac{2}{3}}\left(A^{(+)}-A^{(-)}\right)\ , \\ A^{\frac{1}{2} , \frac{1}{2}} &=-\sqrt{\frac{1}{3}}\left(A^{(+)}+2 A^{(-)}+3 A^{(0)}\right)\ , \\ A^{\frac{1}{2},-\frac{1}{2}} &=\sqrt{\frac{1}{3}}\left(A^{(+)}+2 A^{(-)}-3 A^{(0)}\right)\ . 
\end{align} In the one-photon-exchange approximation, the differential cross section can be factorized as[3, 4] \begin{align} \frac{\mathrm{d} \sigma}{\mathrm{d} \mathcal{E}_2 \mathrm{d} \Omega_{l} \mathrm{d} \Omega_{\pi}^{*}}=\frac{\alpha}{2 \pi^{2}} \frac{\mathcal{E}_2}{\mathcal{E}_1}\frac{1}{Q^{2}} \frac{k^{\mathrm{lab}}}{1-\epsilon} \frac{\mathrm{d} \sigma_{v}}{\mathrm{d} \Omega_{\pi}^{*}} \equiv \Gamma \frac{\mathrm{d} \sigma_{v}}{\mathrm{d} \Omega_{\pi}^{*}}\ , \end{align} where $\Gamma$ is the flux of the virtual photon, $\mathcal{E}_{1,2}$ denote the energy of the initial and final electrons in the laboratory frame, respectively. The parameter $\epsilon$ expresses the transverse polarization of the virtual photon in the laboratory frame, and it is an invariant under collinear transformations. In terms of laboratory electron variables, it is given by [3] \begin{align} \epsilon= \left(1+2 \frac{\bk^{2}}{Q^{2}} \tan ^{2}\left(\frac{\theta_{l}}{2}\right)\right)^{-1}\ , \end{align} where $\theta_l$ is the scattering angle of the electron in the laboratory frame. 
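To make the role of $\epsilon$ concrete, here is a small sketch evaluating the transverse polarization from laboratory electron variables, using $Q^2=4\mathcal{E}_1\mathcal{E}_2\sin^2(\theta_l/2)$ and $\bk^2=Q^2+(\mathcal{E}_1-\mathcal{E}_2)^2$ for massless electrons; the beam energies and angles are invented for illustration.

```python
import numpy as np

def epsilon(E1, E2, theta_l):
    """Transverse polarization of the virtual photon from lab electron
    variables (massless electrons assumed; energies in GeV, angle in rad)."""
    Q2 = 4.0*E1*E2*np.sin(theta_l/2)**2        # photon virtuality
    k2 = Q2 + (E1 - E2)**2                     # |k_lab|^2 = Q^2 + omega^2
    return 1.0/(1.0 + 2.0*(k2/Q2)*np.tan(theta_l/2)**2)

eps = epsilon(2.0, 1.6, np.radians(20.0))      # invented beam settings
assert 0.0 < eps < 1.0
# epsilon decreases monotonically with the electron scattering angle
assert epsilon(2.0, 1.6, np.radians(5.0)) > epsilon(2.0, 1.6, np.radians(60.0))
```

The forward-angle region thus corresponds to a highly transversely polarized virtual photon, while backward electron scattering suppresses $\epsilon$.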
The virtual photon differential cross section, $\mathrm{d} \sigma_{v} / \mathrm{d} \Omega_{\pi}^*$, for an unpolarized target without recoil polarization can be written in the form [4], \begin{align} \begin{aligned}\label{sigpi} \frac{\mathrm{d} \sigma_{v}}{\mathrm{d} \Omega_{\pi}^*}=& \frac{\mathrm{d} \sigma_{T}}{\mathrm{d} \Omega_{\pi}^*}+\epsilon \frac{\mathrm{d} \sigma_{L}}{\mathrm{d} \Omega_{\pi}^*}+\sqrt{2 \epsilon(1+\epsilon)} \frac{\mathrm{d} \sigma_{L T}}{\mathrm{d} \Omega_{\pi}^*} \cos \phi_{\pi}^*+\epsilon \frac{\mathrm{d} \sigma_{T T}}{\mathrm{d} \Omega_{\pi}^*} \cos 2 \phi_{\pi}^* \\ &+h \sqrt{2 \epsilon(1-\epsilon)} \frac{\mathrm{d} \sigma_{L T^{\prime}}}{\mathrm{d} \Omega_{\pi}^*} \sin \phi_{\pi}^*+h \sqrt{1-\epsilon^{2}} \frac{\mathrm{d} \sigma_{T T^{\prime}}}{\mathrm{d} \Omega_{\pi}^{*}}\ , \end{aligned} \end{align} in which $\phi_\pi^*$ is the azimuthal angle of the pion and $h$ is the helicity of the incoming electron. For further details about Eq. (<ref>), especially concerning polarization observables, we refer to Ref. [4]. Integrating over the azimuthal angle, one finally obtains \begin{align}\label{signoomig} \sigma_v=\sigma_T+\epsilon \sigma_L\ . \end{align} In the following section, we introduce $\chi$PT as an effective field theory which allows us to calculate pion production. The upper limit for the cm total energy $W$, restricted by the fact that we only consider pion and nucleon degrees of freedom, lies below the $\Delta (1232)$ resonance peak. Furthermore, from the experience gained by studying electromagnetic form factors [32, 33], the estimated upper limit of the momentum transfer in $\chi$PT is $Q^2\simeq 0.1\ \mathrm{GeV}^2$ [17, 34]. § PARTIAL WAVE AMPLITUDES §.§ $\chi$PT amplitudes and unitarity method We recalculated the pion electroproduction process close to threshold using $\chi$PT up to \(\mathcal{O}(p^2)\) and confirmed the results of Ref. [16]. The invariant scalar functions can be extracted from the full amplitudes.
The results are listed in Appendix <ref>. For higher-order $\mathcal{O}(p^3)$ contributions and the influence of the $\Delta(1232)$ resonance, readers can refer to Ref. [20]. In the following, the superscripts and subscripts $I,~J$ (isospin, total angular momentum) are omitted for brevity. Considering the final-state theorem [35] and using the dispersion relation, the unitarized $S$ wave amplitude can be written as [22, 36, 37, 38, 39, 40, 41, 42, 28] \begin{align}\label{eq:disRep} \mathcal{M}(s)=\mathcal{M}_L(s)+\Omega(s)\left(-\frac{s}{\pi}\int_{(m_\pi +m_N)^2}^{\infty}\frac{\big({\rm Im}\ \Omega(s^\prime)^{-1}\big)\mathcal{M}_L(s^\prime)}{s'(s'-s)}{\rm d} s'+\mathcal{P}(s)\right)\ , \end{align} where \(\mathcal{P}(s)\) is a subtraction polynomial. The amplitude \(\mathcal{M}_L\) contains only the left-hand cut singularities. Thus, the pion electroproduction amplitude \(\mathcal{M}(s)\) is determined up to a polynomial. \(\Omega(s)\) is the so-called Omnès function [23], \begin{align}\label{eq:omnes} \Omega(s)=\tilde{\mathcal{P}}(s)\exp\bigg[\frac{s}{\pi}\int_{(m_\pi +m_N)^2}^\infty\frac{\delta(s^\prime)}{s^\prime(s^\prime-s)}{\rm d}s^\prime\bigg] \end{align} with \(\tilde{\mathcal{P}}\) a polynomial reflecting the zeros of $\Omega(s)$ in the complex plane and \(\delta(s)\) the elastic \(\pi N\) partial wave phase shift. For our calculation, we use the $\chi$PT result to estimate $\mathcal{M}_L$, so once the function $\Omega(s)$ is known, we can obtain an amplitude with the correct unitarity and analyticity properties. §.§ Singularity structure of partial wave amplitudes The applicability of the Omnès method to the amplitudes of interest relies on the ability to separate the amplitude into a piece having only a left-hand cut and a piece having only a right-hand one. This, a priori, is not the case if the left-hand cuts overlap with the unitarity cut.
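Before turning to the singularity structure, the Omnès integral above can be validated numerically against the one case where it is known in closed form: for a constant phase $\delta(s')=\delta_0$ and $s$ below threshold, it gives $\Omega(s)=\big(s_{\mathrm{th}}/(s_{\mathrm{th}}-s)\big)^{\delta_0/\pi}$ (with $\tilde{\mathcal{P}}=1$). The sketch below is a toy check, not the physical $\pi N$ phase shift; the infinite range is mapped to a finite interval before quadrature.

```python
import numpy as np
from numpy.polynomial.legendre import leggauss

m_N, m_pi = 0.938272, 0.139570            # GeV (assumed mass values)
s_th = (m_N + m_pi)**2                    # pi N threshold

def omnes(s, delta, n=200):
    """Omega(s) for s < s_th, where the integrand is regular (no principal
    value needed). The range [s_th, inf) is mapped to (0, 1] via s' = s_th/u."""
    u, w = leggauss(n)
    u = 0.5*(u + 1.0)                     # nodes rescaled onto (0, 1)
    w = 0.5*w
    sp = s_th/u
    integral = np.sum(w * delta(sp)/(sp*(sp - s)) * s_th/u**2)
    return np.exp(s/np.pi * integral)

delta0, s = 0.8, 0.5*s_th
exact = (s_th/(s_th - s))**(delta0/np.pi)
assert np.isclose(omnes(s, lambda sp: delta0 + 0.0*sp), exact, rtol=1e-6)
```

Because the transformed integrand is smooth, the Gauss-Legendre result agrees with the closed form essentially to machine precision, which makes this a convenient regression test for any numerical Omnès implementation.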
So we review the analytic structure arising in our calculation and find that the singularities of this virtual process are considerably more complicated than those of real photoproduction: there are additional cuts in the complex $s$ plane compared with the photoproduction case. We follow Ref. [43], which relies on the Mandelstam double spectral representation, to illustrate the analytic structure of the partial wave amplitudes. According to crossing symmetry, one amplitude simultaneously describes the three channels $s$, $t$, and $u$, \begin{align} \begin{aligned} &s:\qquad \gamma^*+N \to \pi+N^\prime\ , \quad \sigma_1=M^2,\quad\rho_1=(m+M)^2\ ; \\ &t:\qquad \gamma^*+\pi \to \overline{N}+N^\prime\ , \quad \sigma_2=m^2,\quad\rho_2=4m^2\ ; \\ &u:\qquad \gamma^*+\overline{N}^\prime \to \pi+\overline{N}\ , \quad \sigma_3=M^2,\quad\rho_3=(m+M)^2\ . \end{aligned} \end{align} Here, for brevity, we define $m=m_\pi$ and $M=m_N$; the $\sigma_i$ denote the squared masses of the strongly interacting intermediate bound states, and the continuous spectra begin at the $\rho_i$, the thresholds of the two-particle intermediate states. Note that the Mandelstam variable \(t\) defined in the $s$ plane is related to $z_s=\cos\theta$ via \begin{equation}\label{eq_rhot} \begin{aligned} t=& -Q^{2}+m^{2}-\frac{\left(s-Q^{2}-M^{2}\right)\left(s+m^{2}-M^{2}\right)}{2 s} \\ &+\left\{\left[s^{2}+2\left(Q^{2}-M^{2}\right) s+\left(Q^{2}+M^{2}\right)^{2}\right]\left(s-s_L\right)\left(s-s_R\right)\right\}^{\frac{1}{2}} \frac{z_{s}}{2 s}\ , \end{aligned} \end{equation} where $s_L=(m-M)^2$, $s_R=(m+M)^2$, and we can define $\nu=(s-u)/(4m_N)$ as the crossing symmetric variable. The physical $s$- and $u$-channel regions are shown in the following figure for $Q^2=0.1\mathrm{GeV}^2$.
The threshold for $\pi$ electroproduction lies at \begin{align} \begin{aligned} \nu_{\mathrm{thr}} &=\frac{m_{\pi}\left[\left(2 m_{N}+m_{\pi}\right)^{2}+Q^{2}\right]}{4 m_{N}\left(m_{N}+m_{\pi}\right)}\ , \\ t_{\mathrm{thr}} &=-\frac{m_{N}\left(m_{\pi}^{2}+Q^{2}\right)}{m_{N}+m_{\pi}}\ . \end{aligned} \end{align} The Mandelstam plane for $\pi$ electroproduction off the nucleon: The red line shows the boundary of the $s$ channel physical region for $Q^2=0.1\mathrm{GeV}^2$.
The blue line corresponds to the physical region boundary of the $u$-channel process. The nucleon and $\pi$ pole positions are indicated by the dotted green lines $s=m_N^2$, $u=m_N^2$, and $t=m_\pi^2$. The threshold of $\pi$ electroproduction is represented by a solid black circle. With regard to the dynamical cut positions of the partial wave $T$ matrix, we first take the $t$ channel as an illustration. The full amplitude can be written as a dispersion integral, \begin{align} T(s, t)=\int_{\sigma_{2}}^{\infty} \frac{\mathcal{F}\left(s, t^{\prime}\right)}{t^{\prime}-t} d t^{\prime}\ , \end{align} where $\mathcal{F}$ is a spectral function. The partial wave amplitude is the projection of the full amplitude onto a rotation function $d^J$, \begin{align} T^{J}(s) &=\int_{-1}^{1} \mathrm{d} z_{s} d^{J}\left(z_{s}\right) \int_{\sigma_{2}}^{\infty} \mathrm{d} t^{\prime}\frac{\mathcal{F}\left(s, t^{\prime}\right)}{t^{\prime}-t\left(s, z_{s}\right)} \nonumber\\ &=\int_{\sigma_{2}}^{\infty} \mathrm{d} t^{\prime} \mathcal{F}\left(s, t^{\prime}\right) \int_{-1}^{1} \mathrm{d} z_{s} \frac{d^{J}\left(z_{s}\right)}{\alpha\left(t^{\prime}, s\right)-\beta(s) z_{s}}\ , \end{align} where the integration $\int_{\sigma_{2}}^{\infty}$ denotes the sum of the pole contribution at $t^\prime=\sigma_2$ and the integral $\int_{\rho_{2}}^{\infty}$. It can be shown that the final singularities come only from terms of the logarithmic form \begin{align} \ln (\alpha+\beta)-\ln (\alpha-\beta)\ . \end{align} We classify all cuts as follows:
* unitarity cut: \(s\in[s_R,\infty)\), on account of the \(s\)-channel continuous spectrum;
* \(t\)-channel cut:
1. the arc, lying to the left of $s=s_c$, stems from the \(t\)-channel continuous spectrum for \(4m^2\leq t \leq 4M^2\);
2.
$s\in(-\infty,0]$, corresponding to the $t$-channel continuous spectrum for $t\geq 4M^2$;
* \(u\)-channel cut: \(s\in(-\infty,s_u]\) with \(s_u=\frac{M^{3}-m^{2} M-m\left(M^{2}+Q^{2}\right)}{m+M}\), due to the \(u\)-channel continuous spectrum for \(u\geq(m+M)^2\);
* $t$-channel cut from the pion pole: due to $t$-channel single pion exchange, with branch points located at $0, C_t, C_t^\dagger$;
* $u$-channel cut from the nucleon pole: due to $u$-channel single nucleon exchange, with branch points located at $0, C_u, C_u^\dagger$,
where the branch points in the complex plane are (the other three cases are symmetric about the real axis) \begin{align} C_t&=M^2-\frac{Q^2}{2}+i\sqrt{4M^2Q^2-m^2Q^2-\frac{Q^4}{4}+\frac{M^2}{m^2}Q^4} \ , \\ C_u&=M^2-\frac{1}{2}\frac{m^2}{M^2}Q^2+i\sqrt{4m^2Q^2-\frac{m^2}{M^2}m^2Q^2+\frac{m^2}{M^2}Q^4-\frac{1}{4}\left(\frac{m^2}{M^2}\right)^2Q^4}\ . \end{align} The singularities caused by the pole exchanges of the $t$ and $u$ channels are complicated but well separated from the unitarity cut. Aside from the above dynamical singularities, there exist additional kinematical singularities originating from relativistic kinematics and the polarization spinors of the fermions, especially in an inelastic scattering process. The inelastic kinematics naturally introduces square-root functions into the partial wave amplitudes (or multipole amplitudes), which generate the kinematical singularities; these are among the most characteristic features of the relativistic theory. Kinematical cuts arise when the arguments of the square-root functions from Eq. (<ref>) become negative. All the relevant arguments, together with the domains in which they are negative, are listed in Table <ref>.
\begin{table}[h]
\centering
\caption{Arguments causing singularities.}
\begin{tabular}{cc}
\hline
Argument & Domain \\
\hline
\(s\) & \(\left(-\infty,0\right)\) \\
\(s-s_R\) & \(\left(-\infty,s_R\right)\) \\
\(s-s_L\) & \(\left(-\infty,s_L\right)\) \\
\(s^{2}+2\left(Q^{2}-M^{2}\right) s+\left(Q^{2}+M^{2}\right)^{2}\) & \((M^2-Q^2 \pm 2iMQ,\ M^2-Q^2 \pm i\infty)\) \\
\hline
\end{tabular}
\end{table}

There is some arbitrariness when fixing the cut position [44, 45]. For example, compare $\sqrt{\left(s-s_L\right)\left(s-s_R\right)}$ and $\sqrt{s-s_L} \sqrt{s-s_R}$; they may correspond to different cut structures. The former has an extra cut, perpendicular to the real axis and passing through the midpoint of $s_L$ and $s_R$, so we choose the latter to ensure that the left-hand cuts lie on the real axis. In addition, there is a pole-like singularity at \(s=M^2\), which comes from the fact that Eq. (<ref>) contains a $1/(s-M^2)$ term (see Appendix <ref>) that appears in the partial wave amplitudes. Finally, a pole may arise from the gauge invariant amplitudes: relations (<ref>) introduce the $1/(t-m^2)$ pole singularity, so that the partial wave integral gives, e.g., \begin{align} \int\mathrm{d}z\frac{z}{\left(t(z)-m^2\right)\left(u(z)-M^2\right)} \propto \int\mathrm{d}z\frac{z}{(a+bz)(c-bz)} \propto \frac{1}{a+c} \propto \frac{1}{s-M^2+Q^2}\ , \end{align} where $a=-m^2M^2+M^4-m^2Q^2+M^2Q^2+m^2s-2M^2s+Q^2s+s^2,~c=m^2M^2-M^4+m^2Q^2-M^2 Q^2-m^2s+Q^2s+s^2,~b=\sqrt{s-s_L} \sqrt{s-s_R} \sqrt{s^{2}+2\left(Q^{2}-M^{2}\right) s+\left(Q^{2}+M^{2}\right)^{2}}$. The possible additional singularities in our partial wave analysis are displayed in Fig. <ref>.
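The $1/(s-M^2+Q^2)$ pole claimed above rests on the algebraic identity $a+c=2s\,(s-M^2+Q^2)$ for the coefficients just defined; a small numerical check (our addition) confirms it:

```python
import random

m, M = 0.1396, 0.9383  # m_pi, m_N (GeV), standard values

def a_plus_c(s, Q2):
    """Sum a + c of the coefficients defined in the text."""
    a = (-m**2 * M**2 + M**4 - m**2 * Q2 + M**2 * Q2
         + m**2 * s - 2 * M**2 * s + Q2 * s + s**2)
    c = (m**2 * M**2 - M**4 + m**2 * Q2 - M**2 * Q2
         - m**2 * s + Q2 * s + s**2)
    return a + c

random.seed(0)
for _ in range(100):
    s, Q2 = random.uniform(1.0, 4.0), random.uniform(0.0, 0.5)
    # a + c = 2 s (s - M^2 + Q^2), hence the pole at s = M^2 - Q^2
    assert abs(a_plus_c(s, Q2) - 2 * s * (s - M**2 + Q2)) < 1e-10
```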
[Figure omitted: kinematical singularities in the complex $s$ plane.]

Kinematical singularities, where $\Delta=2MQ$. The red dot represents the nucleon pole, and the two vertical solid rays represent the kinematic cuts from $\sqrt{s^{2}+2\left(Q^{2}-M^{2}\right) s+\left(Q^{2}+M^{2}\right)^{2}}$.

For a given channel under consideration, these singularities may not all appear, owing to cancellations among linear combinations; the situation must therefore be analyzed in detail case by case.

§ NUMERICAL ANALYSES

We are now in a position to compare the unitary representation of the virtual photoproduction amplitude given in Eq. (<ref>) with experimental multipole amplitude data in the \(S_{11}\) channel. Here we use the MAID2007 [24, 13] and DMT2001 [25, 26, 27, 5] results for the fitting. These models provide a good description of multipole amplitudes, differential cross sections, and polarization observables, and can serve as the basis for the prediction and analysis of meson photo- and electroproduction data on proton and neutron targets.

§.§ Fitting procedure

In the fit, the unknown parameters include the low energy constants (LECs) that appear in \(\mathcal{M}_{\chi PT}(s)\), the subtraction constants in the auxiliary function \(\Omega(s)\), and those in the subtraction polynomial \(\mathcal{P}(s)\). However, the parameters of the chiral Lagrangian entering \(\mathcal{M}_{\chi PT}(s)\) up to $\mathcal{O}(p^2)$ are already well determined.
They are \(m_N=0.9383~{\rm GeV}\), \(m_{\pi}=0.1396~{\rm GeV}\), \(e=\sqrt{4\pi\alpha}=0.303\), \(g_A=1.267\), \(F_{\pi}=0.0924~{\rm GeV}\), \(c_6={3.706}/{(4m_N)}\), and \(c_7={-0.12}/{(2m_N)}\) [46][Neglecting $\chi$PT corrections beyond tree level, the two LECs \(c_6\) and \(c_7\) can be related to the anomalous magnetic moments of the nucleon via $c_6=(k_p+k_n)/2m_N,\quad c_7=(k_p-k_n)/4m_N$, with \(k_p\) and \(k_n\) being the anomalous magnetic moments of the proton and neutron, respectively. Since $k_p$ and $k_n$ are precisely determined by experiment, the uncertainties of $c_6$ and $c_7$ are negligible and can hardly change our results.]. Hence, \(\mathcal{M}_{\chi PT}(s)\) is parameter free. Further, we set \(\tilde{\mathcal{P}}(s)=1\) and compute \(\Omega(s)\) using the partial wave phase shift extracted from the \(\pi N\) \(S\) matrix given in Ref. [47]. Note that performing the integrations in Eqs. (<ref>) and (<ref>) up to \(2.1\rm GeV^2\) (below the $\eta N$ threshold) should be a good approximation in the single channel case. Lastly, the subtraction polynomial $\mathcal{P}$ is taken to be a constant, \(\mathcal{P}(s)=a(Q^2)\); i.e., we consider only a single subtraction here.[The influence of a second subtraction has also been examined to test the fit; the major physical outputs are found to be almost unchanged.] We further assume $a$ to be independent of $Q^2$, since there is no nearby resonance involved.[Discussions of a similar issue in the mesonic sector can be found in Ref. [38].] As a reasonable assumption, we do not take $Q^2$-dependent subtraction constants into account in the following. The above fit procedure is applied to the data from both the MAID and DMT models. In the numerical analyses, we fit the multipole amplitudes in the $S_{11}$ channel from the \(\pi N\) threshold to \(1.440~\rm GeV^2\), just below the $\Delta(1232)$ resonance. Since no error bars are given, we assign them according to Refs.
[48, 49], \begin{align} \operatorname{err}(\mathcal{M}_l^I)=\sqrt{\left(e_{s}^{R,I}\right)^2+\left(e_{r}^{R,I}\right)^2 \left(\mathcal{M}_l^I\right)^{2}}\ . \end{align} Here the superscripts $R,I$ denote the real and imaginary parts of the amplitude. We choose $e_{s}^{R}=0.4$ and $e_{s}^{I}=0.1$ (in units of $10^{-3}/m_\pi$), and $e_{r}^{R,I}=10\%$. In this way we take into account, as far as possible, the errors caused by the model dependence of the partial wave data [13, 5]. The fit results for the MAID2007 and DMT2001 data are displayed in Figs. <ref>, <ref> and <ref>, <ref>, respectively. For comparison, we also show the \(\mathcal{O}(p^2)\) chiral results for the multipole amplitudes. As expected, the chiral results describe the data only at low energies close to threshold and at low $Q^2$. The values of the fit parameters are collected in Table <ref>.

$S_{11}E_{0+}$ for the proton (abbreviated $pE$): the `data' are from MAID (two left columns) and DMT (two right columns), respectively. The black lines represent our fit result; the green error band depicts the statistical error from the DR subtraction constant (variation within $2\sigma$ as in Table <ref>). For comparison, the chiral result is shown by the blue lines.

$S_{11}E_{0+}$ for the neutron ($nE$): description the same as in Fig. <ref>.

$S_{11}S_{0+}$ for the proton ($pS$): description the same as in Fig. <ref>.

$S_{11}S_{0+}$ for the neutron ($nS$): description the same as in Fig. <ref>.

Fit results with a single subtraction ($\mathcal{P}=a$). The parameter \(a\) is given in units of $[10^{-3}/m_\pi]$.

\begin{tabular}{ccccc}
\hline
Multipole & Target & Case & Value & \(\chi^2/{\rm d.o.f}\) \\
\hline
$E_{0+}$ & p & MAID & \(-0.12 \pm 0.05\) & \(0.46\) \\
 & & DMT & \(0.21 \pm 0.03\) & \(0.17\) \\
 & n & MAID & \(2.11 \pm 0.07\) & \(0.71\) \\
 & & DMT & \(1.25 \pm 0.06\) & \(0.49\) \\
$S_{0+}$ & p & MAID & \(-1.07 \pm 0.03\) & \(0.49\) \\
 & & DMT & \(-0.46 \pm 0.02\) & \(0.23\) \\
 & n & MAID & \(2.25 \pm 0.06\) & \(1.14\) \\
 & & DMT & \(1.13 \pm 0.05\) & \(0.60\) \\
\hline
\end{tabular}

In Figs.
<ref>, <ref> and <ref>, <ref>, we fit the amplitudes from $Q^2=0$ to $Q^2=0.1 \mathrm{GeV^2}$ in increments of $0.02\mathrm{GeV^2}$. We also show the result for $Q^2=0.2\mathrm{GeV^2}$. It can be seen that, apart from $pS_{0+}$, for which the fit remains rather good, the fit quality does not hold up at $Q^2=0.2\mathrm{GeV^2}$. This is expected, since we did not consider corrections from vector meson exchanges [32, 33]. In general, as Table <ref> shows, our results are in good agreement with the experimental data. The imaginary part is an order of magnitude smaller than the real part, since it is of higher order in the $\chi$PT expansion. Moreover, the agreement is a direct consequence of unitarity and follows automatically from Watson's theorem [35]. Meanwhile, the central value of $a$ is very small in every case, which can be understood from the fact that the multipoles calculated from $\chi$PT together with the unitarity method already describe the experimental data well. We also performed fits with a twice-subtracted polynomial, i.e., $\mathcal{P}=a+bs$; however, the fit parameters \(a\) and \(b\) are found to be highly negatively correlated, so a single subtraction is preferable. In a $\chi$PT calculation, the source of error is the systematic uncertainty of the theory due to the truncation of the chiral expansion at a given $\mathcal{O}(p^n)$. Using the method of Refs. [50, 20], for a calculation at order $\mathcal{O}(p^n)$ we estimate this systematic error as \begin{align} \delta O_{\mathrm{Th}}^{(n)}=\max \left(\left|O^{\left(n_{L O}\right)}\right| B^{n-n_{L O}+1},\left\{\left|O^{(k)}-O^{(l)}\right| B^{n-l}\right\}\right), \quad n_{L O} \leq l \leq k \leq n\ , \end{align} where $B=m_{\pi} / \Lambda_{b}$, and $\Lambda_{b}=4 \pi F_{\pi} \sim 1 \mathrm{GeV}$ is the breakdown scale of the chiral expansion. Here, we set $n_{L O}=1$ and $n=2$. The error is roughly estimated to be less than $5\%$.
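The estimate above can be written out as a few lines of code; this is our own sketch, with $B=m_\pi/\Lambda_b$ built from the standard values quoted earlier and purely hypothetical observable values $O^{(1)}$, $O^{(2)}$:

```python
# Truncation-error estimate of the equation above (following Refs. [50, 20])
# for an O(p^2) calculation, i.e. n_LO = 1, n = 2.
import math

m_pi, F_pi = 0.1396, 0.0924            # GeV
Lambda_b = 4 * math.pi * F_pi          # breakdown scale, ~1 GeV
B = m_pi / Lambda_b                    # expansion parameter, ~0.12

def delta_O(O1, O2):
    """max(|O^(1)| B^2, |O^(2) - O^(1)| B): the only nonzero entries
    of the max for n_LO = 1, n = 2."""
    return max(abs(O1) * B**2, abs(O2 - O1) * B)

# hypothetical LO and NLO values of some observable
err = delta_O(1.0, 1.1)   # relative error at the percent level
```

For observables of natural size this yields errors of one to two percent, consistent with the quoted bound of less than $5\%$.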
Furthermore, the possible errors caused by the truncation of the dispersion integration can be estimated via \begin{align}\label{truncation} I_{\alpha}=-\frac{s}{\pi} \int_{s_R}^{\Lambda} \frac{\left(\operatorname{Im} \Omega\left(s^{\prime}\right)^{-1}\right) \mathcal{M}_{\alpha}\left(s^{\prime}\right)}{s^{\prime}\left(s^{\prime}-s\right)} \mathrm{d} s^{\prime}\ , \end{align} where $\alpha=pE,nE,pS,nS$ labels the multipoles for proton and neutron targets, respectively. We set the upper limit of the dispersion integrals, $\Lambda$, to $2.1\mathrm{GeV}^2$. The cutoff itself has little physical significance; rather, it is an indicator of how well one estimates the remainder of the integral. For comparison, we list the integration results with the cutoff set to $2.2\mathrm{GeV}^2$ and $2.5\mathrm{GeV}^2$. The imaginary part of the dispersion integral $I_\alpha$ is almost insensitive to the truncation. The real part of $I_\alpha$ is somewhat sensitive, showing a weak, roughly linear dependence on the cutoff as a whole, but this dependence can be compensated by the subtraction constant of the dispersion integral. Overall, the physical results are therefore not sensitive to the truncation of the dispersion integral.

Tests of cutoff dependence. $\Lambda$ corresponds to the truncation of the integrand in Eq. (<ref>).

Regardless of which data set is used, the fit results for the multipole amplitudes are very similar. This illustrates that our unitarity method is powerful and effective in the low-energy, low-$Q^2$ region. At high $Q^2$, however, the fits are less satisfactory; in future work, we will consider using resonance $\chi$PT to improve the description of this method at high $Q^2$. Finally, we make a brief comparison of our work with that of Hilt et al. [17]. We find that the $\mathcal{O}(p^4)$ results [17] and our amplitudes for the multipole $S_{11}pE_{0+}$ differ in the higher-$Q^2$ region.
Furthermore, $S_{11}nE_{0+},~S_{11}pS_{0+}$, and $S_{11}nS_{0+}$ differ from our calculations even at lower $Q^2$; this needs to be clarified in the future. For comparison, we show the MAID and DMT models, our results, and those of Ref. [17]. The black solid curves show our calculations at $\mathcal{O}(p^2)$, and the blue long-dashed curves are the outputs of the $\mathcal{O}(p^4)$ results [17]. The red dot-dashed and green short-dashed curves are the predictions of the MAID and DMT models, respectively.

§ SUMMARY

In this paper, we have performed a careful dispersive analysis of single pion electroproduction off the nucleon in the \(S_{11}\) channel of the final $\pi N$ system. In the dispersive representation, the right-hand cut contribution is related to an Omnès solution, which takes the elastic \(\pi N\) phase shifts as input. The left-hand cut contribution is estimated by making use of the \(\mathcal{O}(p^2)\) amplitudes taken from $\chi$PT. A detailed discussion of the virtual photoproduction amplitude at the level of multipoles is presented. Different from Refs. [17, 20], here we go beyond pure $\chi$PT calculations by applying the final state interaction theorem to the partial wave amplitudes. To pin down the free parameters in the dispersive amplitude, we perform fits to the experimental data for the multipole amplitudes \(E_{0+}\) and \(S_{0+}\) at energies ranging from the \(\pi N\) threshold to \(1.440~\rm GeV^2\). It is found that the experimental data can be well described by the dispersive amplitude with only one free subtraction parameter when $Q^2\leq0.1\mathrm{GeV}^2$. As $Q^2$ increases further to $0.2\mathrm{GeV}^2$, the fit fails, similar to what happens in the literature [32, 33, 17]. Admittedly, for the real parts of the multipole amplitudes, our dispersive approach does not always do better than the pure $\chi$PT results.
However, the power of dispersion relations is nicely visible in the imaginary parts of the multipoles: there, even at low energies, the $\mathcal{O}(p^2)$ perturbative calculation is not sufficient, so our method is superior to $\mathcal{O}(p^2)$ perturbation theory in the sense that DRs generate the corresponding imaginary parts. Further, it is hard to compare the $\mathcal{O}(p^4)$ $\chi$PT results with our calculations: we use only the left-hand cut contribution extracted from the $\mathcal{O}(p^2)$ amplitude, and in principle an $\mathcal{O}(p^4)$ calculation is advantageous compared with an $\mathcal{O}(p^2)$ one. In a perturbative calculation, however, unitarization effects are not taken into account, whereas unitarity is automatically fulfilled in our scheme.

§ ACKNOWLEDGMENTS

The authors would like to thank De-Liang Yao, Yu-Fei Wang, and Wen-Qi Niu for helpful discussions. This work is supported in part by the National Natural Science Foundation of China (NSFC) under Contracts No. 11975028 and No. 10925522.
§ INVARIANT AMPLITUDES \begin{align} \begin{aligned} A_{1}^{(+)} &=-\frac{e g_{A} m_{N}}{2 F}\left(\frac{1}{s-m_{N}^{2}}+\frac{1}{u-m_{N}^{2}}\right)-\frac{e g_{A} c_{6}}{F}, \\ A_{2}^{(+)} &=\frac{- e g_{A} m_{N}}{F} \frac{1}{t-m_{\pi}^{2}}\left(\frac{1}{s-m_{N}^{2}}+\frac{1}{u-m_{N}^{2}}\right), \\ A_{3}^{(+)} &=\frac{e g_{A} m_{N} c_{6}}{F}\left(\frac{1}{s-m_{N}^{2}}-\frac{1}{u-m_{N}^{2}}\right), \\ A_{4}^{(+)} &=\frac{e g_{A} m_{N} c_{6}}{F}\left(\frac{1}{s-m_{N}^{2}}+\frac{1}{u-m_{N}^{2}}\right), \\ A_{5}^{(+)} &=-\frac{e g_{A} m_{N}}{2 F} \frac{1}{t-m_{\pi}^{2}}\left(\frac{1}{s-m_{N}^{2}}-\frac{1}{u-m_{N}^{2}}\right), \\ A_{6}^{(+)} &=0, \\ A_{1}^{(-)} &=\frac{- e g_{A} m_{N}}{2 F}\left(\frac{1}{s-m_{N}^{2}}-\frac{1}{u-m_{N}^{2}}\right), \\ A_{2}^{(-)} &=\frac{- e g_{A} m_{N}}{F} \frac{1}{t-m_{\pi}^{2}}\left(\frac{1}{s-m_{N}^{2}}-\frac{1}{u-m_{N}^{2}}\right), \\ A_{3}^{(-)} &=\frac{e g_{A} m_{N} c_{6}}{F}\left(\frac{1}{s-m_{N}^{2}}+\frac{1}{u-m_{N}^{2}}\right), \\ A_{4}^{(-)} &=\frac{e g_{A} m_{N} c_{6}}{F}\left(\frac{1}{s-m_{N}^{2}}-\frac{1}{u-m_{N}^{2}}\right), \\ A_{5}^{(-)} &=-\frac{e g_{A} m_{N}}{2 F} \frac{1}{t-m_{\pi}^{2}}\left(\frac{1}{s-m_{N}^{2}}+\frac{1}{u-m_{N}^{2}}\right), \\ A_{6}^{(-)} &=0, \\ A_{1}^{(0)} &=\frac{- e g_{A} m_{N}}{2 F}\left(\frac{1}{s-m_{N}^{2}}+\frac{1}{u-m_{N}^{2}}\right)-\frac{e g_{A} c_{7}}{2F}, \\ A_{2}^{(0)} &=\frac{- e g_{A} m_{N}}{F} \frac{1}{t-m_{\pi}^{2}}\left(\frac{1}{s-m_{N}^{2}}+\frac{1}{u-m_{N}^{2}}\right), \\ A_{3}^{(0)} &=\frac{e g_{A} m_{N} c_{7}}{2F}\left(\frac{1}{s-m_{N}^{2}}-\frac{1}{u-m_{N}^{2}}\right), \\ A_{4}^{(0)} &=\frac{e g_{A} m_{N} c_{7}}{2F}\left(\frac{1}{s-m_{N}^{2}}+\frac{1}{u-m_{N}^{2}}\right), \\ A_{5}^{(0)} &=-\frac{e g_{A} m_{N}}{2 F} \frac{1}{t-m_{\pi}^{2}}\left(\frac{1}{s-m_{N}^{2}}-\frac{1}{u-m_{N}^{2}}\right), \\ A_{6}^{(0)} &=0\ , \end{aligned} \end{align} where the two LECs $F$ and $g_A$ denote the chiral limit of pion decay constant and the axial-vector coupling constant, respectively. 
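As a small cross-check (our addition, not part of the paper), the tree-level amplitudes listed above obey the expected crossing properties: $A_1^{(+)}$ is symmetric and $A_1^{(-)}$ antisymmetric under $s\leftrightarrow u$. Numerically, with the coupling values quoted in the fitting section and $F$ approximated by $F_\pi$:

```python
import random

e, gA, F, M = 0.303, 1.267, 0.0924, 0.9383   # F approximated by F_pi (GeV)
c6 = 3.706 / (4 * M)

def A1_plus(s, u):
    # A_1^{(+)} from the list above
    return (-e * gA * M / (2 * F) * (1 / (s - M**2) + 1 / (u - M**2))
            - e * gA * c6 / F)

def A1_minus(s, u):
    # A_1^{(-)} from the list above
    return -e * gA * M / (2 * F) * (1 / (s - M**2) - 1 / (u - M**2))

random.seed(1)
for _ in range(20):
    s, u = random.uniform(1.2, 3.0), random.uniform(-2.0, 0.5)
    assert abs(A1_plus(s, u) - A1_plus(u, s)) < 1e-10    # even under s <-> u
    assert abs(A1_minus(s, u) + A1_minus(u, s)) < 1e-10  # odd under s <-> u
```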
Here \(c_{6}\) and \(c_7\) are LECs of the \(\mathcal{O}(p^2)\) chiral Lagrangian. § THE RELATIONS BETWEEN CGLN AMPLITUDES AND INVARIANT AMPLITUDES The functions $A_i$ and $\mathcal{F}_i$ are connected with each other as the following relation [51, 31]: \begin{align} \mathcal{F}_{1} &=\left(\sqrt{s}-m_{N}\right) \frac{N_{1} N_{2}}{8 \pi \sqrt{s}} \nonumber\\ &\times \left[A_{1}+\frac{k \cdot q}{\sqrt{s}-m_{N}} A_{3}+\left(\sqrt{s}-m_{N}-\frac{k \cdot q}{\sqrt{s}-m_{N}}\right) A_{4}-\frac{k^{2}}{\sqrt{s}-m_{N}} A_{6}\right]\ , \\ \mathcal{F}_{2} &=\left(\sqrt{s}+m_{N}\right) \frac{N_{1} N_{2}}{8 \pi \sqrt{s}} \frac{|\mathbf{q}||\mathbf{k}|}{\left(E_{1}+m_{N}\right)\left(E_{2}+m_{N}\right)} \nonumber\\ &\times \left[-A_{1}+\frac{k \cdot q}{\sqrt{s}+m_{N}}A_3+\left(\sqrt{s}+m_{N}-\frac{k \cdot q}{\sqrt{s}+m_{N}}\right) A_{4}-\frac{k^{2}}{\sqrt{s}+m_{N}} A_{6}\right]\ , \\ \mathcal{F}_{3} &=\left(\sqrt{s}+m_{N}\right) \frac{N_{1} N_{2}}{8 \pi \sqrt{s}} \frac{|\mathbf{q}||\mathbf{k}|}{E_{1}+m_{N}}\left[\frac{m_{N}^{2}-s+\frac{1}{2} k^{2}}{\sqrt{s}+m_{N}} A_{2}+A_{3}-A_{4}-\frac{k^{2}}{\sqrt{s}+m_{N}} A_{5}\right]\ , \\ \mathcal{F}_{4} &=\left(\sqrt{s}-m_{N}\right) \frac{N_{1} N_{2}}{8 \pi \sqrt{s}} \frac{|\mathbf{q}|^{2}}{E_{2}+m_{N}}\left[\frac{s-m_{N}^{2}-\frac{1}{2} k^{2}}{\sqrt{s}-m_{N}} A_{2}+A_{3}-A_{4}+\frac{k^{2}}{\sqrt{s}-m_{N}} A_{5}\right]\ , \\ \mathcal{F}_{7} &=\frac{N_{1} N_{2}}{8 \pi \sqrt{s}} \frac{|\mathbf{q}|}{E_{2}+m_{N}}\Bigg[ \left(m_{N}-E_{1}\right) A_{1}-\left(\frac{|\mathbf{k}|^{2}}{2 k_{0}}\left(2k_0\sqrt{s}-3k\cdot q\right)-\frac{\mathbf{q} \cdot \mathbf{k}}{2 k_{0}}\left(2 s-2 m_{N}^{2}-k^{2}\right)\right) A_{2} \nonumber\\ &+\left(q_{0}\left(\sqrt{s}-m_{N}\right)-k \cdot q\right) A_{3}+\left(k \cdot q-q_{0}\left(\sqrt{s}-m_{N}\right)+\left(E_{1}-m_{N}\right)\left(\sqrt{s}+m_{N}\right)\right) A_{4} \nonumber\\ &+\left(q_{0} k^{2}-k_{0} k \cdot q\right) A_{5}-\left(E_{1}-m_{N}\right)\left(\sqrt{s}+m_{N}\right) A_{6} \Bigg]\ , \\ \mathcal{F}_{8} 
&=\frac{N_{1} N_{2}}{8 \pi \sqrt{s}} \frac{|\mathbf{k}|}{E_{2}+m_{N}}\Bigg[ \left(m_{N}+E_{1}\right) A_{1}+\left(\frac{|\mathbf{k}|^{2}}{2 k_{0}}\left(2k_0\sqrt{s}-3k\cdot q\right)-\frac{\mathbf{q} \cdot \mathbf{k}}{2 k_{0}}\left(2 s-2 m_{N}^{2}-k^{2}\right)\right) A_{2} \nonumber\\ &+\left(q_{0}\left(\sqrt{s}+m_{N}\right)-k \cdot q\right) A_{3}+\left(k \cdot q-q_{0}\left(\sqrt{s}+m_{N}\right)+\left(E_{1}+m_{N}\right)\left(\sqrt{s}-m_{N}\right)\right) A_{4} \nonumber\\ &+\left(q_{0} k^{2}-k_{0} k \cdot q\right) A_{5}-\left(E_{1}+m_{N}\right)\left(\sqrt{s}-m_{N}\right) A_{6} \Bigg]\ , \end{align} \begin{align} N_i=\sqrt{E_i+m_N},\quad E_i=\sqrt{\bp_i^2+m_N^2},\quad i=1,2\ . \end{align}

§ PARTIAL WAVE HELICITY AMPLITUDES

In this appendix, we introduce the partial wave helicity amplitude method for pion photo- and electroproduction [52]. It is convenient to perform the partial wave projection using the helicity formalism proposed in Refs. [53, 52]. We define \(\lambda_i\) (\(i=1,2,3,4\)) as the helicities of the photon, initial nucleon, pion, and final nucleon, respectively. For each set of helicity quantum numbers, denoted by \(H_s\equiv \{\lambda_1\lambda_2\lambda_3\lambda_4\} \), there is a helicity amplitude \(\mathcal{A}_{H_s}\), which can be expanded as \begin{align}\label{AHs} A_{H_s}\left(s,t(\theta)\right)=16\pi\sum_{J=M}^\infty(2J+1)A_{H_s}^{J}(s)\,{d}_{\lambda \mu}^J(\theta)\ , \end{align} where \(M=\mathrm{max}\{|\lambda|,|\mu|\} \), \(\lambda\equiv\lambda_1-\lambda_2\) and \(\mu\equiv\lambda_3-\lambda_4=-\lambda_4\), and \(d^J(\theta)\) is the standard Wigner function. By imposing the orthogonality properties of the \(d\) functions, the partial wave helicity amplitudes \(A_{H_s}^{J}(s)\) in the above equation can be projected out; i.e., \begin{align}\label{pwamp} A^{J}_{H_s}(s)=\frac{1}{32\pi}\int_{-1}^{1} \mathrm{d} \cos\theta A_{H_s}(s,t)d_{\lambda,\mu}^{J}(\theta)\ . \end{align} Helicity amplitudes $\{A_{\mu\lambda}(\theta)\}=\{A_{H_s}\}$.
\begin{tabular}{ccccccc}
\hline
 & \multicolumn{2}{c}{$\lambda_1=+1$} & \multicolumn{2}{c}{$\lambda_1=-1$} & \multicolumn{2}{c}{$\lambda_1=0$} \\
$\mu\backslash\lambda$ & $\frac{3}{2}$ & $\frac{1}{2}$ & $-\frac{1}{2}$ & $-\frac{3}{2}$ & $\frac{1}{2}$ & $-\frac{1}{2}$ \\
\hline
$\frac{1}{2}$ & $H_1$ & $H_2$ & $H_4$ & $-H_3$ & $H_5$ & $H_6$ \\
$-\frac{1}{2}$ & $H_3$ & $H_4$ & $-H_2$ & $H_1$ & $H_6$ & $-H_5$ \\
\hline
\end{tabular}

In particular, we use $H_i\ (i=1 \sim 6)$ as symbols for the helicity amplitudes. The relations between $A_{\mu \lambda}$ and $H_i$ are listed in Table <ref> [52]. The differential scattering cross section can be written as \begin{align}\label{Sig} \frac{\mathrm{d}\sigma}{\mathrm{d}\Omega}=\frac{1}{2} \frac{|\bq|}{k^\mathrm{cm}} \sum_{\lambda_i }\left|A_{\mu \lambda}\right|^{2}\ . \end{align} From Eq. (<ref>) and Table <ref>, we can integrate out the angular dependence, \begin{align} \sigma=2 \pi\frac{|\bq|}{k^\mathrm{cm}} \sum_{J} \sum_{i=1}^{6}(2 J+1)\left|H_{i}^{J}\right|^{2}\ . \end{align} $A_{\mu\lambda}^J$ ($H_i^J$) has definite angular momentum but not definite parity. Therefore, we combine final states with opposite helicities $\mu,-\mu$ to obtain the so-called partial wave helicity parity eigenstates, \begin{align} \begin{aligned} A_{l+} &=-\frac{1}{\sqrt{2}}\left(A_{\frac{1}{2},\frac{1}{2}(\lambda_1=1)}^{J}+A_{-\frac{1}{2},\frac{1}{2}(\lambda_1=1)}^{J}\right)\ , \\ A_{(l+1)-} &=\frac{1}{\sqrt{2}}\left(A_{\frac{1}{2},\frac{1}{2}(\lambda_1=1)}^{J}-A_{-\frac{1}{2},\frac{1}{2}(\lambda_1=1)}^{J}\right)\ , \\ B_{l+} &=\sqrt{\frac{2}{l(l+2)}} \left(A_{\frac{1}{2},\frac{3}{2}}^{J}+A_{-\frac{1}{2},\frac{3}{2}}^{J}\right)\quad \ell \geq 1\ , \\ B_{(l+1)-} &=-\sqrt{\frac{2}{l(l+2)}} \left(A_{\frac{1}{2},\frac{3}{2}}^{J}-A_{-\frac{1}{2},\frac{3}{2}}^{J}\right)\quad \ell \geq 1\ , \\ S_{l+} &=-\frac{Q}{2|\bk|}(l+1) \left(A_{\frac{1}{2},\frac{1}{2}(\lambda_1=0)}^{J}+A_{-\frac{1}{2},\frac{1}{2}(\lambda_1=0)}^{J}\right)\ , \\ S_{(l+1)-} &=-\frac{Q}{2|\bk|}(l+1) \left(A_{\frac{1}{2},\frac{1}{2}(\lambda_1=0)}^{J}-A_{-\frac{1}{2},\frac{1}{2}(\lambda_1=0)}^{J}\right)\ .
\end{aligned} \end{align} Notice that the normalization coefficients used here differ from those in Refs. [54, 55], and $J=l+1/2$ for `$+$' amplitudes and $J=l-1/2$ for `$-$' amplitudes. $A,\ B$, and $S$ represent amplitudes with initial helicities $1/2,\ 3/2,\ 1/2$, respectively, so they can also be written as $\mathcal{A}^{1/2}, \mathcal{A}^{3/2}$, and $\mathcal{S}^{1/2}$ up to some normalization factors; see Eq. (<ref>). Furthermore, with the definitions of Eqs. (<ref>) and (<ref>), we obtain \begin{align} \begin{aligned}\label{HAB} H_{1}&=\frac{1}{\sqrt{2}} \sin \theta \cos \frac{\theta}{2} \sum\left(B_{l+}-B_{(l+1)-}\right)\left(P_{l}^{\prime \prime}-P_{l+1}^{\prime \prime}\right)\ , \\ H_{2}&=\sqrt{2} \cos \frac{\theta}{2} \sum\left(A_{l+}-A_{(l+1)-}\right)\left(P_{l}^{\prime}-P_{l+1}^{\prime}\right)\ , \\ H_{3}&=\frac{1}{\sqrt{2}} \sin \theta \sin \frac{\theta}{2} \sum\left(B_{l+}+B_{(l+1)-}\right)\left(P_{l}^{\prime \prime}+P_{l+1}^{\prime \prime}\right)\ , \\ H_{4}&=\sqrt{2} \sin \frac{\theta}{2} \sum\left(A_{l+}+A_{(l+1)-}\right)\left(P_{l}^{\prime}+P_{l+1}^{\prime}\right)\ , \\ H_{5} &=\frac{Q}{|\mathbf{k}|} \cos \frac{\theta}{2} \sum(l+1)\left(S_{l+}+S_{(l+1)-}\right)\left(P_{l}^{\prime}-P_{l+1}^{\prime}\right)\ , \\ H_{6} &=\frac{Q}{|\mathbf{k}|} \sin \frac{\theta}{2} \sum(l+1)\left(S_{l+}-S_{(l+1)-}\right)\left(P_{l}^{\prime}+P_{l+1}^{\prime}\right)\ .
\end{aligned} \end{align} According to the expansion method of CGLN [1], the relationship between the helicity amplitudes and the CGLN multipole amplitudes can also be obtained [1, 56], \begin{align} \begin{aligned}\label{HF} H_{1}&=-\frac{1}{\sqrt{2}}\sin \theta \cos\frac{\theta}{2} \left(\mathcal{F}_{3}+\mathcal{F}_{4}\right)\ , \\ H_{2}&=\sqrt{2} \cos\frac{\theta}{2} \left[\left(\mathcal{F}_{2}-\mathcal{F}_{1}\right)+\frac{1}{2}(1-\cos \theta)(\mathcal{F}_{3}-\mathcal{F}_{4}) \right]\ , \\ H_{3}&=\frac{1}{\sqrt{2}}\sin \theta \sin \frac{\theta}{2} \left(\mathcal{F}_{3}-\mathcal{F}_{4}\right)\ , \\ H_{4}&=\sqrt{2} \sin \frac{\theta}{2} \left[\left(\mathcal{F}_{1}+\mathcal{F}_{2}\right)+\frac{1}{2}(1+\cos \theta)(\mathcal{F}_{3}+\mathcal{F}_{4}) \right]\ , \\ H_{5} &=\cos \frac{\theta}{2}\left(\mathcal{F}_{5}+\mathcal{F}_{6}\right)\ , \\ H_{6} &=-\sin \frac{\theta}{2}\left(\mathcal{F}_{5}-\mathcal{F}_{6}\right)\ . \end{aligned} \end{align} Comparing Eqs. (<ref>) and (<ref>) with the CGLN expansion, we have \begin{align} \begin{aligned} A_{l+} &=\frac{1}{2}\left[(l+2) E_{l+}+l M_{l+}\right]\ , \\ B_{l+} &=E_{l+}-M_{l+}\ , \\ A_{(l+1)-} &=-\frac{1}{2}\left[l E_{(l+1)-}-(l+2) M_{(l+1)-}\right]\ , \\ B_{(l+1)-} &=E_{(l+1)-}+M_{(l+1)-} \ . \end{aligned} \end{align} $\mathcal{A}^h,\mathcal{S}^{1/2}$ can be related to the resonant parts of the corresponding multipole amplitudes at the pole position in the following way: \begin{align} \begin{aligned}\label{AAS} \mathcal{A}^{1/2}_{l+} &=-\frac{1}{2}\left[(l+2) E_{l+}+l M_{l+}\right]\ , \\ \mathcal{A}^{3/2}_{l+} &=\frac{1}{2}\sqrt{l(l+2)} \left(E_{l+}-M_{l+}\right)\ , \\ \mathcal{S}^{1/2}_{l+} &=-\frac{l+1}{\sqrt{2}} S_{l+}\ , \\ \mathcal{A}^{1/2}_{(l+1)-} &=-\frac{1}{2}\left[l E_{(l+1)-}-(l+2) M_{(l+1)-}\right]\ , \\ \mathcal{A}^{3/2}_{(l+1)-} &=-\frac{1}{2}\sqrt{l(l+2)} \left(E_{(l+1)-}+M_{(l+1)-} \right)\ , \\ \mathcal{S}^{1/2}_{(l+1)-} &=-\frac{l+1}{\sqrt{2}} S_{(l+1)-}\ .
\end{aligned} \end{align} The scattering cross section is written in terms of $\mathcal{A}_\alpha^h$ as \begin{align} \begin{aligned} \sigma_{T} &=\left(\sigma_{T}^{1 / 2}+\sigma_{T}^{3 / 2}\right)+\epsilon\sigma_L\ , \\ \sigma_{T}^{h} &=2 \pi \frac{|\bq|}{k^\mathrm{cm}} \sum_{\alpha(\ell, J)}(2 J+1)\left|\mathcal{A}_{\alpha}^{h}\right|^{2}\ , \\ \sigma_{L} &=2 \pi \frac{|\bq|}{k^\mathrm{cm}} \frac{Q^{2}}{k^{2}} \sum_{\alpha(\ell, J)}(2 J+1)\left|\mathcal{S}_{\alpha}^{1 / 2}\right|^{2}\ , \end{aligned} \end{align} where the superscript $h$ stands for helicity. Expanding the above formulas, one obtains \begin{align} \begin{aligned} \sigma_T^{1 / 2} &=2 \pi \frac{|\bq|}{k^\mathrm{cm}} \sum 2(l+1)\left[\left|A_{l+}\right|^{2}+\left|A_{(l+1)-}\right|^{2}\right]\ , \\ \sigma_T^{3 / 2} &=2 \pi \frac{|\bq|}{k^\mathrm{cm}} \sum \frac{l}{2}(l+1)(l+2)\left[\left|B_{l+}\right|^{2}+\left|B_{(l+1)-}\right|^{2}\right]\ , \\ \sigma_{L} &=4 \pi \frac{|\bq|}{k^\mathrm{cm}} \sum \frac{Q^{2}}{\bk^{2}}(l+1)^{3}\left[\left|C_{l+}\right|^{2}+\left|C_{(l+1)-}\right|^{2}\right]\ . \end{aligned} \end{align}

§ ELECTROMAGNETIC COUPLINGS OF THE SUBTHRESHOLD RESONANCE

In Refs. [57, 49, 58], evidence has been found for the possible existence of a subthreshold resonance named \(N^\ast(890)\) in the \(S_{11}\) channel, using the method proposed in Refs. [59, 60, 61, 62, 63], assisted by the chiral amplitudes obtained in Refs. [48, 64, 65, 66]. In this appendix, further results are provided on the $\gamma^* N$ couplings of this resonance for future reference. In the main text, all the parameters involved in the partial wave virtual photoproduction amplitude \(\mathcal{M}_l(s)\) have been determined. Since \(N^\ast(890)\), as a subthreshold resonance, is located on the second Riemann sheet, one needs to perform analytic continuation in order to extract its couplings to the \(\gamma^* N\) and \(\pi N\) systems.
It can be proved that the residue can be extracted from [28] \begin{align}\label{eq:gggp} g_{\gamma N}g_{\pi N} \simeq \frac{\mathcal{M}_l(s_{{\rm p}})}{\mathcal{S}_l^\prime(s_{{\rm p}})}\ , \end{align} where \(\mathcal{S}_l(s)\) is the partial wave \(S\) matrix of elastic \(\pi N\) scattering. The residues \(g_{\gamma N}\) and \(g_{\pi N}\) denote the \(\gamma N\) and \(\pi N\) couplings, respectively. The \(\pi N\) coupling can also be extracted from elastic \(\pi N\) scattering, i.e., $g_{\pi N}^2 \simeq \mathcal{T}_l(s_{{\rm p}})/\mathcal{S}_l^\prime(s_{{\rm p}})$, where \(\mathcal{T}_l\) is the corresponding partial wave \(\pi N\) scattering amplitude. In order to compare with the results of Refs. [67, 68], which are extracted directly from multipole amplitudes parameterized in the \(W\,(=\sqrt{s})\) plane, we write \begin{align} E_{0+}^{\mathrm{II}}(s\rightarrow s_p)\simeq\frac{g_{\gamma N}^E g_{\pi N}}{s-s_p}\simeq\frac{g_{\gamma N}^E g_{\pi N}}{2\sqrt{s_p}(\sqrt{s}-\sqrt{s_p})}=\frac{\left( g_{\gamma N}^E g_{\pi N}/2W_p \right)}{W-W_p}\ , \\ S_{0+}^{\mathrm{II}}(s\rightarrow s_p)\simeq\frac{g_{\gamma N}^S g_{\pi N}}{s-s_p}\simeq\frac{g_{\gamma N}^S g_{\pi N}}{2\sqrt{s_p}(\sqrt{s}-\sqrt{s_p})}=\frac{\left( g_{\gamma N}^S g_{\pi N}/2W_p \right)}{W-W_p}\ , \end{align} where the subscript \(p\) stands for pole parameters. Using the above formulas, we can calculate the virtual-photon decay amplitudes \(A^{\mathrm{pole}}_{h},S_{1/2}^{\mathrm{pole}}\) at the \(S_{11}\) \(N^\ast(890)\) pole position, following Refs. [69, 67, 68]: \begin{align} A_{h}^{\mathrm{pole}} &=C \sqrt{\frac{|\bq_{p}|}{k^\mathrm{cm}_{p}} \frac{2 \pi(2 J+1) W_{p}}{m_{N} \operatorname{Res}{\mathcal{T}_{\pi N}}}} \operatorname{Res} \mathcal{A}_{\alpha}^{h}\ , \\ S_{1/2}^{\mathrm{pole}} &=C \sqrt{\frac{|\bq_{p}|}{k^\mathrm{cm}_{p}} \frac{2 \pi(2 J+1) W_{p}}{m_{N} \operatorname{Res}{\mathcal{T}_{\pi N}}}} \operatorname{Res} \mathcal{S}_{\alpha}^{1/2}\ .
\end{align} The definitions of $\mathcal{A}^h_\alpha$ and $\mathcal{S}^{1/2}_\alpha$ are given in appendix <ref>; here we use $\mathcal{A}^{1/2}_{0+}=-E_{0+},\ \mathcal{S}^{1/2}_{0+}=-(1/\sqrt{2})S_{0+}$. Intuitively, $\mathcal{A}^h_\alpha$ and $\mathcal{S}^{1/2}_\alpha$ characterize the strength of the electromagnetic couplings and the amplitudes of the decay process $N^* \to \gamma^* N$. $|\bq_p|$ is the pion momentum at the pole. The factor $C$ is $\sqrt{2/3}$ for isospin $3/2$ and $-\sqrt{3}$ for isospin $1/2$. Focusing only on the $S_{11}$ channel, the corresponding virtual-photon decay amplitudes are given by \begin{align} A^{\mathrm{pole}}_{1/2}(Q^2) =g_{\gamma N}^E \sqrt{\frac{3 \pi W_p}{m_N k^\mathrm{cm}_p}}\ ,\quad S^{\mathrm{pole}}_{1/2}(Q^2) =g_{\gamma N}^S \sqrt{\frac{3 \pi W_p}{2 m_N k^\mathrm{cm}_p}}\ . \end{align} It can be seen that the amplitudes, $\mathcal{A}_{\alpha}^{1/2}$ and $\mathcal{S}_{\alpha}^{1/2}$, as well as the residues, $A^{\mathrm{pole}}_{1/2}$ and $S^{\mathrm{pole}}_{1/2}$, are functions of the photon virtuality $Q^2$. According to Eqs. (<ref>), the \(N^*(890)\) residues or couplings, \(g_{\gamma N}g_{\pi N}\), can be extracted from the multipole amplitudes $E_{0+},S_{0+}$. In the meantime, \(g_{\pi N}^2\) can be computed using $g_{\pi N}^2 \simeq \mathcal{T}_l(s_{{\rm p}})/\mathcal{S}_l^\prime(s_{{\rm p}})$, as already obtained in Ref. [57]. We employed the MAID solution of the fit (the DMT result is similar), and chose the central value \(\sqrt{s}=0.882-0.190i\) of the pole position to extract the pole residues. \(\mathcal{T}(s_p)\) can be obtained through \(\mathcal{S}(s_p)=1+2i\rho_{\pi N}(s_p)\mathcal{T}(s_p)=0\), and \(\frac{1}{S'(s_p)}\) is just the residue of \(\mathcal{S}^{\rm II}\), as explained in Ref. [57]. The values of the decay amplitudes \(A_{1/2}\) and $S_{1/2}$ at the pole position are shown in Fig. <ref>.
The blue solid line and the red solid line represent real and imaginary parts of virtual-photon decay amplitudes at pole position, respectively. [1] G. F. Chew, M. L. Goldberger, F. E. Low, and Y. Nambu, Phys. Rev. 106, 1345 (1957). [2] S. L. Adler, Annals Phys. 50, 189 (1968). [3] E. Amaldi, S. Fubini, and G. Furlan, Springer Tracts Mod. Phys. 83, 1 (1979). [4] D. Drechsel and L. Tiator, J. Phys. G 18, 449 (1992). [5] V. Pascalutsa, M. Vanderhaeghen, and S. N. Yang, Phys. Rept. 437, 125 (2007). [6] I. Aznauryan and V. Burkert, Prog. Part. Nucl. Phys. 67, 1 (2012). [7] D. Rönchen et al., Eur. Phys. J. A 50, 101 (2014), [Erratum: Eur.Phys.J.A 51, 63 (2015)]. [8] S. Fubini, Y. Nambu, and V. Wataghin, Phys. Rev. 111, 329 (1958). [9] F. A. Berends, A. Donnachie, and D. L. Weaver, Nucl. Phys. B4, 1 (1967). [10] J. S. Ball, Phys. Rev. 124, 2014 (1961). [11] R. Devenish and D. Lyth, Phys. Rev. D 5, 47 (1972), [Erratum: Phys.Rev.D 6, 2067 (1972)]. [12] R. Crawford and W. Morton, Nucl. Phys. B 211, 1 (1983). [13] D. Drechsel, S. S. Kamalov, and L. Tiator, Eur. Phys. J. A34, 69 (2007). [14] M. Doring and K. Nakayama, Eur. Phys. J. A 43, 83 (2010). [15] A. Gasparyan and M. Lutz, Nucl. Phys. A848, 126 (2010). [16] V. Bernard, N. Kaiser, T. S. H. Lee, and U.-G. Meissner, Phys. Rept. 246, 315 (1994). [17] M. Hilt, B. C. Lehnhart, S. Scherer, and L. Tiator, Phys. Rev. C88, 055207 (2013). [18] D. L. Yao, L. Alvarez-Ruso, A. N. Hiller Blin, and M. J. Vicente Vacas, Phys. Rev. D 98, 076004 (2018). [19] D. L. Yao, L. Alvarez-Ruso, and M. Vicente Vacas, Phys. Lett. B 794, 109 (2019). [20] G. H. Guerrero Navarro and M. J. Vicente Vacas, Phys. Rev. D 102, 113016 (2020). [21] D. L. Yao, L. Y. Dai, H. Q. Zheng, and Z. Y. Zhou, (2020), 2009.13495. [22] O. Babelon, J.-L. Basdevant, D. Caillerie, and G. Mennessier, Nucl. Phys. B 113, 445 (1976). [23] R. Omnès, Nuovo Cim. 8, 316 (1958). [24] D. Drechsel, O. Hanstein, S. Kamalov, and L. Tiator, Nucl. Phys. A 645, 145 (1999). [25] S. 
Kamalov and S. N. Yang, Phys. Rev. Lett. 83, 4494 (1999). [26] S. S. Kamalov, S. N. Yang, D. Drechsel, O. Hanstein, and L. Tiator, Phys. Rev. C 64, 032201 (2001). [27] S. Kamalov, G.-Y. Chen, S. N. Yang, D. Drechsel, and L. Tiator, Phys. Lett. B 522, 27 (2001). [28] Y. Ma, W. Q. Niu, D.-L. Yao, and H. Q. Zheng, Chin. Phys. C 45, 014104 (2021). [29] P. Dennery, Phys. Rev. 124, 2000 (1961). [30] R. M. Davidson, Czech. J. Phys. 44, 365 (1995). [31] B. Borasoy, P. C. Bruns, U.-G. Meissner, and R. Nissler, Eur. Phys. J. A34, 161 (2007). [32] B. Kubis and U. G. Meissner, Eur. Phys. J. C 18, 747 (2001). [33] B. Kubis and U.-G. Meissner, Nucl. Phys. A 679, 698 (2001). [34] L. Tiator et al., Phys. Rev. C 94, 065204 (2016). [35] K. M. Watson, Phys. Rev. 95, 228 (1954). [36] Y. Mao, X. G. Wang, O. Zhang, H. Q. Zheng, and Z. Y. Zhou, Phys. Rev. D79, 116008 (2009). [37] R. Garcia-Martin and B. Moussallam, Eur. Phys. J. C 70, 155 (2010). [38] B. Moussallam, Eur. Phys. J. C 73, 2539 (2013). [39] I. V. Danilkin, M. F. M. Lutz, S. Leupold, and C. Terschlusen, Eur. Phys. J. C 73, 2358 (2013). [40] I. Danilkin and M. Vanderhaeghen, Phys. Lett. B 789, 366 (2019). [41] I. Danilkin, O. Deineka, and M. Vanderhaeghen, Phys. Rev. D 101, 054008 (2020). [42] M. Hoferichter and P. Stoffer, JHEP 07, 073 (2019). [43] J. Kennedy and T. Spearman, Phys. Rev. 126, 1596 (1962). [44] M. Doring, C. Hanhart, F. Huang, S. Krewald, and U.-G. Meissner, Nucl. Phys. A 829, 170 (2009). [45] S. Ceci et al., Phys. Rev. C 84, 015205 (2011). [46] M. Tanabashi et al., Particle Data Group, Phys. Rev. D98, 030001 (2018). [47] M. Hoferichter, J. Ruiz de Elvira, B. Kubis, and U.-G. Meißner, Phys. Rept. 625, 1 (2016). [48] Y. H. Chen, D. L. Yao, and H. Q. Zheng, Phys. Rev. D87, 054019 (2013). [49] Y. F. Wang, D. L. Yao, and H. Q. Zheng, Front. Phys. 14, 1 (2019). [50] G. H. Guerrero Navarro, M. J. Vicente Vacas, A. N. Hiller Blin, and D. L. Yao, Phys. Rev. D100, 094021 (2019). [51] B. Pasquini, D. Drechsel, and L. 
Tiator, Eur. Phys. J. A34, 387 (2007). [52] R. Walker, Phys. Rev. 182, 1729 (1969). [53] M. Jacob and G. C. Wick, Annals Phys. 7, 404 (1959), [Annals Phys.281,774(2000)]. [54] F. A. Berends and A. Donnachie, Nucl. Phys. B 136, 317 (1978). [55] R. Arndt, R. Workman, Z. Li, and L. Roper, Phys. Rev. C 42, 1864 (1990). [56] L. Tiator, R. Workman, Y. Wunderlich, and H. Haberzettl, Phys. Rev. C 96, 025210 (2017). [57] Y. F. Wang, D. L. Yao, and H. Q. Zheng, Chin. Phys. C43, 064110 (2019). [58] Y. F. Wang, D. L. Yao, and H. Q. Zheng, Eur. Phys. J. C78, 543 (2018). [59] Z. G. Xiao and H. Q. Zheng, Nucl. Phys. A695, 273 (2001). [60] J. Y. He, Z. G. Xiao, and H. Q. Zheng, Phys. Lett. B536, 59 (2002), [Erratum: Phys. Lett. B549,362 (2002)]. [61] H. Q. Zheng et al., Nucl. Phys. A733, 235 (2004). [62] Z. Y. Zhou et al., JHEP 02, 043 (2005). [63] Z. Y. Zhou and H. Q. Zheng, Nucl. Phys. A775, 212 (2006). [64] J. Alarcon, J. Martin Camalich, and J. Oller, Annals Phys. 336, 413 (2013). [65] D. L. Yao et al., JHEP 05, 038 (2016). [66] D. Siemens et al., Phys. Rev. C96, 055205 (2017). [67] A. Švarc et al., Phys. Rev. C 88, 035206 (2013). [68] A. Švarc et al., Phys. Rev. C 89, 065208 (2014). [69] N. Suzuki, T. Sato, and T.-S. Lee, Phys. Rev. C 82, 045206 (2010).
# On the Baire class of n-Dimensional Boundary Functions Connor Paul Wilson 530 Church Street Ann Arbor, MI 48109<EMAIL_ADDRESS> ###### Abstract. We show an extension of a theorem of Kaczynski to boundary functions in n-dimensional space. Let $H$ denote the upper half-plane, and let $X$ denote its frontier, the $x$-axis. Suppose that $f$ is a function mapping $H$ into some metric space $Y.$ If $E$ is any subset of $X,$ we will say that a function $\varphi:E\rightarrow Y$ is a boundary function for $f$ if and only if for each $x\in E$ there exists an arc $\gamma$ at $x$ such that $\lim_{z\rightarrow x\atop z\in\gamma}f(z)=\varphi(x).$ ## 1\. Introduction ### 1.1. Preliminaries and Notation Let $H$ denote the upper half-plane, and let $X$ denote its frontier, the $x$-axis. If $x\in X,$ then by an arc at $x$ we mean a simple arc $\gamma$ with one endpoint at $x$ such that $\gamma-\\{x\\}\subseteq H.$ Suppose that $f$ is a function mapping $H$ into some metric space $Y.$ If $E$ is any subset of $X,$ we will say that a function $\varphi:E\rightarrow Y$ is a boundary function for $f$ if and only if for each $x\in E$ there exists an arc $\gamma$ at $x$ such that $\lim_{z\rightarrow x\atop z\in\gamma}f(z)=\varphi(x).$ We will also define Baire classes as Kaczynski does in [1], such that a function $f:M\rightarrow Y$ is said to be of Baire class $0(M,Y)$ if and only if it is continuous; if $\xi$ is an ordinal number greater than or equal to $1,$ then $f$ is said to be of Baire class $\xi(M,Y)$ if and only if there exists a sequence of functions $\left\\{f_{n}\right\\}_{n=1}^{\infty}$ mapping $M$ into $Y$, $f_{n}$ being of Baire class $\eta_{n}(M,Y)$ for some $\eta_{n}<\xi$, such that $f_{n}\rightarrow f$ pointwise. ## 2\. Boundary functions for discontinuous functions ###### Theorem 2.1.
Let $Y$ be a separable arc-wise connected metric space, with $f:H\rightarrow Y$ a function of Baire class $\xi(H,Y)$ where $\xi\geq 1,$ $E$ a subset of $X$, and $\varphi:E\rightarrow Y$ a boundary function for $f$. Then $\varphi$ is of Baire class $\xi+1(E,Y).$ ###### Proof. Let $U$ be an open subset of $Y$ such that $V=Y-\operatorname{clos}(U)$. Set $A=\varphi^{-1}(U),\ B=\varphi^{-1}(V),\ C=A\cup B.$ Notice that $A$ and $B$ are clearly disjoint. For each $x\in C$, choose an arc $\gamma_{x}$ at $x$ such that: $\lim_{z\rightarrow x\atop z\in\gamma_{x}}f(z)=\varphi(x)$ with $\gamma_{x}\subseteq\\{z:\mid z-x\mid\leq 1\\}$ where $\begin{cases}\gamma_{x}-\\{x\\}\subseteq f^{-1}(U)&\text{ if }x\in A\\\ \gamma_{x}-\\{x\\}\subseteq f^{-1}(V)&\text{ if }x\in B\end{cases}$ Note once again the empty intersection $\gamma_{x}\cap\gamma_{y}=\emptyset$ for $x\in A$ and $y\in B.$ Let us say that $\gamma_{x}$ meets $\gamma_{y}$ in $\operatorname{clos}(H_{n})$ provided that the two arcs have respective subarcs, $\gamma_{x}'$ and $\gamma_{y}'$, with $x\in\gamma_{x}'\subseteq\operatorname{clos}(H_{n})$ and $y\in\gamma_{y}'\subseteq\operatorname{clos}(H_{n})$, such that $\gamma_{x}'\cap\gamma_{y}'\neq\varnothing.$ Let: $L_{a}:=\\{x\in A:\forall n\ \exists y\text{ such that }y\in C,\ y\neq x,\ \gamma_{y}\text{ meets }\gamma_{x}\text{ in }\operatorname{clos}(H_{n})\\}$ $L_{b}:=\\{x\in B:\forall n\ \exists y\text{ such that }y\in C,\ y\neq x,\ \gamma_{y}\text{ meets }\gamma_{x}\text{ in }\operatorname{clos}(H_{n})\\}$ $M_{a}:=\\{x\in A:\exists n,\gamma_{x}\text{ meets no }\gamma_{y}\text{ in }\operatorname{clos}(H_{n})\\}$ $M_{b}:=\\{x\in B:\exists n,\gamma_{x}\text{ meets no }\gamma_{y}\text{ in }\operatorname{clos}(H_{n})\\}$ $L=L_{a}\cup L_{b}$ $M=M_{a}\cup M_{b}$ $L_{a},L_{b},M_{a},$ and $M_{b}$ are notably pairwise disjoint, with $A=L_{a}\cup M_{a},$ $B=L_{b}\cup M_{b}.$ Let $n(x)\in\mathbb{Z}^{+}$ for each $x\in M$ such that
$\gamma_{x}$ meets no $\gamma_{y}$ in $\operatorname{clos}(H_{n(x)})$ with $y\neq x;$ then for every $n\geq n(x),$ $\gamma_{x}$ clearly meets no $\gamma_{y}$ in $\operatorname{clos}(H_{n})$. Moreover, take $K_{n}:=\\{x\in C:\gamma_{x}\text{ meets }X_{n}\wedge\text{ if }x\in M,n\geq n(x)\\}$ It is clear that for every $n,K_{n}\subseteq K_{n+1}$, as well as $C=\bigcup_{n=1}^{\infty}K_{n}.$ It follows by the work of Kaczynski [1] and the following lemma that we have Theorem $2.1$. ###### Lemma 2.2. Let $Y$ be a separable arc-wise connected metric space, $E$ any metric space, and $\varphi:E\rightarrow Y$ a function such that for every open set $U\subseteq Y$ there exists a set $T\in P^{\xi+1}(E)$ where $\varphi^{-1}(U)\subseteq T\subseteq\varphi^{-1}(\operatorname{clos}(U))$. Then for $\xi\geq 2,$ $\varphi$ is of Baire class $\xi(E,Y).$ ###### Proof. Let $\mathcal{B}$ be a countable base for $Y,$ and suppose $W$ is some open subset of $Y.$ Let: $\mathcal{A}(W)=\\{U\in\mathcal{B}:\operatorname{clos}(U)\subseteq W\\}$ Following Kaczynski, we have: $W=\bigcup_{U\in\mathcal{A}(W)}U=\bigcup_{U\in\mathcal{A}(W)}\operatorname{clos}(U).$ For each $U\in\mathcal{B},$ let $T(U)\in P^{\xi+1}(E)$ be chosen so that $\varphi^{-1}(U)\subseteq T(U)\subseteq\varphi^{-1}(\operatorname{clos}(U)).$ Thus we have: $\displaystyle\varphi^{-1}(W)$ $\displaystyle=\bigcup_{U\in\mathcal{A}(W)}\varphi^{-1}(U)\subseteq\bigcup_{U\in\mathcal{A}(W)}T(U)$ $\displaystyle\subseteq\bigcup_{U\in\mathcal{A}(W)}\varphi^{-1}(\operatorname{clos}(U))=\varphi^{-1}(W).$ Therefore $\varphi^{-1}(W)=\bigcup_{U\in\mathcal{A}(W)}T(U)$, and since $P^{\xi+1}(E)$ is closed under countable unions, we have $\varphi^{-1}(W)\in P^{\xi+1}(E),$ so $\varphi$ is of Baire class $\xi(E,Y)$ ∎ Returning to the proof of Theorem 2.1, we therefore have: $\varphi^{-1}(U)=A\subseteq T\cap E\subseteq E-B=E-\varphi^{-1}(V)=\varphi^{-1}(\operatorname{clos}(U))$ for some $T\in P^{\xi+2}(X)$, and we know $T\cap E\in P^{\xi+2}(E),$ which by the above lemma gives us $\varphi$
of Baire class $\xi+1(E,Y).$ ∎ ## 3\. Sets of curvilinear convergence for $\mathbb{R}^{3}$ Let $f:H\rightarrow Y$ be of Baire class $\xi(H,Y)$, and $\varphi:E\rightarrow Y$ a boundary function of $f.$ Let us define a function to analyse the properties of $M_{a},$ following Kaczynski. Take $\pi:\mathbb{R}^{3}\rightarrow\mathbb{R}^{2}$ such that $\pi(\langle x,y,z\rangle)=\left\|\langle x,y\rangle\right\|_{2}.$ If $\left\|\langle x,y\rangle\right\|_{2}\in M\cap K_{n},$ then let us define $p_{n}(\left\|\langle x,y\rangle\right\|_{2})$ as the first point of $X_{n}$ reached along $\gamma_{x}$ starting from $x.$ It thus follows from Kaczynski's work that the function $\pi(p_{n}(\left\|\langle x,y\rangle\right\|_{2}))$ is strictly increasing on $M\cap K_{n}.$ Thus, by the above reasoning and Lemma $2.2$, we can show the following theorem: ###### Theorem 3.1. Let $Y$ be a separable arc-wise connected metric space in $\mathbb{R}^{n}$, with $f:H\rightarrow Y$ a function of Baire class $\xi+n-1(H,Y)$ where $\xi\geq 1,$ $E$ a subset of $X$, and $\varphi:E\rightarrow Y$ a boundary function for $f$. Then $\varphi$ is of Baire class $\xi+n(E,Y).$ Although this does not resolve the fourth open problem of Kaczynski’s work, it does extend one of his theorems, and is valuable nonetheless to the field. ## References * [1] Kaczynski, Theodore J., Boundary functions, Doctoral Dissertation, University of Michigan (1967).
# Discovering dependencies in complex physical systems using Neural Networks Sachin Kasture<EMAIL_ADDRESS>OptoAI, 2805DL, Gouda, The Netherlands ###### Abstract In today's age of data, discovering relationships between different variables is an interesting and challenging problem. This problem becomes even more critical with regard to complex dynamical systems like weather forecasting and econometric models, which can show highly non-linear behaviour. A method based on mutual information and deep neural networks is proposed as a versatile framework for discovering non-linear relationships ranging from functional dependencies to causality. We demonstrate the application of this method to actual multivariable non-linear dynamical systems. We also show that this method can find relationships even for datasets with a small number of data points, as is often the case with empirical data. ††preprint: APS/123-QED ## I Introduction Finding relationships between different variables in large datasets [1, 2, 3] is an important problem that has ramifications in fields ranging from environmental science to economics and genetic networks. Understanding which variables affect a certain quantity becomes increasingly challenging when these relationships are highly non-linear, like those occurring in dynamical systems with several variables. Quite often in a large dataset with several variables, only a few variables may significantly affect the target variable, and identifying these variables is a vital first step in exploring the dependencies in more detail. Several methods exist which can help find dependencies and correlations between variables. However, most of these methods are good at detecting a certain class of functions while they fail for others.
There are some methods which are quite good at detecting functional dependencies between two variables [1, 4]; they have, however, not been demonstrated in a multi-variable scenario where a target variable depends on several input variables. Finding functional dependencies has been a topic explored extensively in the context of relational databases [5, 6]. However, these methods rely on finding exact functional relationships, by finding all attributes which have a one-to-one or one-to-many relationship with a certain column $Y$. This approach does not work well for small databases which are just a sample of the true distribution, as in these cases one-to-one relations are more likely to occur. Also, in such cases it is difficult to reliably find the smallest subset of variables which are sufficient to describe $Y$. These methods do not offer any control over what kind of functional relationships may intuitively be considered good or interesting candidates. Also, these methods do not provide any kind of score to evaluate functional dependencies. In this paper, we use neural networks as devices to model non-linear behavior and find complex non-linear relationships. In particular, deep neural networks (DNNs), which consist of more than one hidden layer, are excellent candidates for efficiently modelling multi-variable non-linear polynomial functions with a small number of neurons [7, 8]. Additionally, a regularization mechanism allows us to control the complexity of the model we wish to consider [9]. Neural networks have recently been used to discover physical concepts, identify phase transitions, and design quantum experiments [10, 11, 12]. To help find dependencies, we use a DNN-based autoencoder architecture which consists of an encoder-decoder pair. The encoder maps the input space to a latent space, while the decoder maps the latent space to the output space.
This architecture has been used, amongst other applications, for non-linear Principal Component Analysis (PCA), where the goal is to find a compressed representation of data [13]. As such, the input and the output of the autoencoder are conventionally the same. In our method the input will be $X$, the set of input features, and $Y$ is the target feature or set of features. We then use compression of mutual information in the latent space to derive a loss function which can be minimized to find the smallest set of features in $X$ which can be used to reliably reconstruct $Y$. The loss function can be used to assign a score to compare the functional dependencies on different sets of input parameters. We then demonstrate this method to find dependencies in chaotic dynamical systems. We also show that this method can be used to find non-linear causal connections in the Granger sense for chaotic systems [14, 15, 16], even for a small dataset of 100 samples. Figure 1: Plot shows comparison between $x_{i}$ and the corresponding scaled version of $l_{i}$ for (a)-(d) different values of $y_{i}=dx_{i}/dt$ for equation 17. In the plots where $l_{i}$ is essentially noise, information from the corresponding $x_{i}$ is not used to reconstruct $y_{i}$ using the decoder. $fac$ is a scaling factor chosen so that $x_{i}$ and $l_{i}/fac$ are comparable. ## II Theory We now derive a loss function using the information bottleneck method [17], based on the fact that the latent intermediate layer can be used to extract only the relevant information from $X$ needed to reconstruct $Y$. We denote this latent representation by $L$. We also assume a Markov chain $Y\rightarrow X\rightarrow\ L$. This means $P(Y|X,L)=P(Y|X)$. This is because $X,Y$ correspond to observed ground truth data. We now use the fact that we want to extract only the relevant information from $X$ which can reconstruct $Y$. We use Shannon mutual information to quantify this information [17, 18].
We therefore want to maximize the quantity $I(L,Y)-\lambda_{enc}I(L,X)$. The first term and the second term describe the capacity of the encoder and the decoder respectively, with $\lambda_{enc}$ determining the relative weight between the two terms. We can write $I(L,Y)$ as: $\begin{split}&I(L,Y)=\int dydlp(y,l)log\frac{p(y|l)}{p(y)}\\\ &=\int dlp(l)\int dyp(y|l)logp(y|l)+H(Y)\\\ \end{split}$ (1) where $H(Y)$ is the Shannon entropy. We neglect $H(Y)$ since it is fixed by the data. Since it is very difficult to calculate $p(y|l)$, we approximate it by another analytic function $\phi(y|l)$. Using the fact that the KL divergence, which measures the ‘distance’ between two probability distributions, is always non-negative: $\begin{split}&KL(p(y|l),\phi(y|l))\geq 0\\\ &\implies\int dyp(y|l)logp(y|l)\geq\int dyp(y|l)log\phi(y|l)\\\ \end{split}$ (2) we can write $I(L,Y)\geq\int dydlp(y,l)log\phi(y|l)$ (3) We can now choose an appropriate function for $\phi(y|l)$ which allows us to derive a suitable loss function, as well as to tune the complexity of the decoder. The output of the decoder is given by $\theta_{dec}(l)$, which describes the composite function of the decoder neural network acting on the latent variable $l$. To also include an L1 regularization [9] parameter which helps restrict the magnitude of the weights in the decoder neural network, we use the following function for $\phi(y|l)$ $\phi(y|l)=e^{-(\theta_{dec}(l)-y)^{2}/\sigma_{dec}^{2}-\lambda_{dec}(|\theta_{d1}|+|\theta_{d2}|+..)}$ (4) where $\theta_{d1},\theta_{d2},$ etc. are the weights of different neurons in the decoder network. Therefore we can write $\begin{split}I(L,Y)&\geq-\int dydlp(y,l)[\frac{(\theta_{dec}(l)-y)^{2}}{\sigma_{dec}^{2}}\\\ &+\lambda_{dec}(|\theta_{d1}|+|\theta_{d2}|+..)]\\\ \end{split}$ (5) Now we use the fact that $p(y,l)=\int dxp(x,y,l)=\int dxp(l|x,y)p(x,y)$. Using the Markov chain condition, this can be written as $p(y,l)=\int dxp(l|x)p(x,y)$.
Approximating $\int dxdyp(x,y)A(x,y)=(1/M)\sum_{k=1}^{M}A(x^{k},y^{k})$, where $M$ is the number of distinct data points, we can write $\begin{split}I(L,Y)&\geq-(1/M)\sum_{k=1}^{M}\int dlp(l|x)[\frac{(\theta_{dec}(l)-y^{k})^{2}}{\sigma_{dec}^{2}}\\\ &+\lambda_{dec}(|\theta_{d1}|+|\theta_{d2}|+..)]\\\ \end{split}$ (6) Similarly we can define $I(L,X)$ as: $\begin{split}I(L,X)&=\int dldxp(x,l)log\frac{p(l|x)}{p(l)}\\\ &=\int dxdlp(x,l)logp(l|x)-\int dlp(l)logp(l)\\\ \end{split}$ (7) We again use an analytic function $g(l)$ in place of $p(l)$ and use the result on the positivity of the KL divergence to get: $\begin{split}I(L,X)&=\int dldxp(x,l)logp(l|x)-\int p(l)logp(l)\\\ &\leq\int dxdlp(x,l)log\frac{p(l|x)}{g(l)}\\\ \end{split}$ (8) For convenience we use a Gaussian function centred at 0: $g(l)=e^{-\sum_{i}l_{i}^{2}/\sigma_{enc}^{2}}$ (9) where $l=(l_{1},l_{2}..)$ are the different components of $l$ and $\sigma_{enc}$ is an adjustable parameter. For $p(l|x)$ we can use: $p(l|x)=\prod_{i}e^{-(l_{i}-W_{i}x_{i})^{2}/\sigma_{enc}^{2}}$ (10) where $x=(x_{1},x_{2},..)$. This means we apply a linear transformation to $X$ and add independent Gaussian noise with variance $\sigma_{enc}^{2}$ and mean 0 to each component.
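Both variational bounds (equations 3 and 8) rest on the KL-positivity step of equation 2: for samples drawn from $p$, the expected log-likelihood under any surrogate density never exceeds that under $p$ itself. A quick numerical check with toy Gaussian densities (our choice, purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal(100_000)          # samples from the true density p = N(0, 1)

def log_gauss(x, mu, sig):
    # log-density of N(mu, sig^2)
    return -0.5 * ((x - mu) / sig) ** 2 - np.log(sig * np.sqrt(2.0 * np.pi))

true_ll = log_gauss(x, 0.0, 1.0).mean()   # Monte-Carlo estimate of E_p[log p]
surr_ll = log_gauss(x, 0.5, 1.2).mean()   # E_p[log phi] for a mismatched surrogate
print(true_ll - surr_ll)                  # positive: estimates KL(p || phi)
```

The gap equals the KL divergence between the two densities, which vanishes only when the surrogate matches $p$.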
We now plug in definitions 9, 10 into equation 8 and obtain: $I(L,X)\leq\int dxdlp(x,l)loge^{-\sum_{i}W_{i}x_{i}(W_{i}x_{i}-2l_{i})/\sigma_{enc}^{2}}$ (11) Writing $p(x,l)=p(x)p(l|x)$, we can rewrite the above equation as $\begin{split}I(L,X)&\leq-\int dxdlp(x)\prod_{i}e^{-(l_{i}-W_{i}x_{i})^{2}/\sigma_{enc}^{2}}\\\ &[\frac{\sum_{i}W_{i}x_{i}(W_{i}x_{i}-2l_{i})}{\sigma_{enc}^{2}}]\\\ \end{split}$ (12) Using the approximation $\int dxp(x)A(x)=(1/M)\sum_{k=1}^{M}A(x^{k})$, we can write $\begin{split}I(L,X)&\leq-(1/M)\sum_{k=1}^{M}\int dl\prod_{i}e^{-(l_{i}-W_{i}x_{i}^{k})^{2}/\sigma_{enc}^{2}}\\\ &[\frac{\sum_{i}W_{i}x_{i}^{k}(W_{i}x_{i}^{k}-2l_{i})}{\sigma_{enc}^{2}}]\\\ \end{split}$ (13) Similarly, substituting equation 10 into equation 6 and assuming $\sigma_{enc}^{2}$ to be small enough so that $e^{-(l_{i}-W_{i}x_{i})^{2}/\sigma_{enc}^{2}}\approx\delta(l_{i}-W_{i}x_{i})$, we obtain: $\begin{split}I(L,Y)&-\lambda_{enc}I(L,X)\geq-(1/M)\sum_{k=1}^{M}[\frac{(\theta_{dec}(l)-y^{k})^{2}}{\sigma_{dec}^{2}}+\\\ &\lambda_{dec}(|\theta_{d1}|+|\theta_{d2}|+..)+\lambda_{enc}\sum_{i}\frac{(W_{i}x_{i}^{k})^{2}}{\sigma_{enc}^{2}}]\\\ \end{split}$ (14) Figure 2: The fan-in causality pattern for the set of delay equations in equation 18, for the set of $\xi_{ij}$ values used to obtain the results in Figure 3. Figure 3: Plot shows comparison between $Y_{i}$ and the corresponding scaled version of $l_{i}$ for (a)-(c) different values of $y_{i}=Y_{i}$ for the set of delay equations 18.
In the plots where $l_{i}$ is noise, information from the corresponding $x_{i}$ is not used to reconstruct $y_{i}$ using the decoder. Therefore we can define a loss function to be minimized as $\begin{split}\mathcal{L}&=(1/M)\sum_{k=1}^{M}[\frac{(\theta_{dec}(l)-y^{k})^{2}}{\sigma_{dec}^{2}}+\\\ &\lambda_{dec}(|\theta_{d1}|+|\theta_{d2}|+..)+\lambda_{enc}\sum_{i}\frac{(W_{i}x_{i}^{k})^{2}}{\sigma_{enc}^{2}}]\\\ \end{split}$ (15) We observe that the first term tries to minimize the least-squares difference between $\theta_{dec}(l)$ and $y$, and the second term controls the size of the weights of the decoder, which in turn controls the maximum-degree polynomials the decoder NN can approximate. For the third term, we see that as we increase $\lambda_{enc}$, the NN will try to keep $(W_{i}x_{i}^{k})^{2}$ small to keep the total loss function small. Assuming now that we standardize our data so that the $x_{i}$'s on average have similar magnitudes, we absorb this scale into $\lambda_{enc}$. The third term will then be smallest when the only non-zero $W_{i}$'s are those corresponding to the $x_{i}$'s required to reproduce $Y$. Using this intuition, and the fact that the term inside the summation over $i$ in equation 15 is always $\geq 0$, we can further simplify the loss function as $\begin{split}\mathcal{L}&=(1/M)\sum_{k=1}^{M}[\frac{(\theta_{dec}(l)-y^{k})^{2}}{\sigma_{dec}^{2}}]+\\\ &\lambda_{dec}(|\theta_{d1}|+|\theta_{d2}|+..)+\lambda_{enc}\sum_{i}(|W_{i}|)\\\ \end{split}$ (16) where we have merged $\sigma_{enc}^{2}$ with $\lambda_{enc}$. This way we treat both the encoder and decoder weights on equal terms using L1 regularization. From a practical standpoint, L1 is advantageous since it can shrink weights faster. ## III Application For further study we use an NN in which the encoder has 2 linear layers. This gives us a mapping $X\rightarrow L$. We then add Gaussian noise to the latent variables $l_{i}=l_{i}+N(0,\sigma^{2}_{enc})$.
The latent code is then sent through a multilayer decoder network with non-linear activation functions to give the output $\theta_{dec}(l)$. We perform batch normalization between intermediate neural network layers [19]. This layer prevents changes in the data distributions between adjacent layers and allows the neural network to learn at a higher learning rate. We then minimize the loss function in equation 16 using stochastic gradient descent with different batch sizes. We can tune the values of $\lambda_{enc},\lambda_{dec}$ (regularization parameters) to obtain as low a value of the loss function as possible. This choice of regularization parameters may also depend on our prior knowledge about the complexity of the system. The data is split into a training set and a validation set. The training data is used to build the model, and the validation set checks how well the model generalizes. The basic heuristic for tuning these parameters is as follows: after fixing the learning rate for the gradient descent, we first increase the value of $\lambda_{dec}$, which fixes the complexity of the functions the decoder can simulate. We then increase the value of $\lambda_{enc}$ and monitor the mean square error, stopping when the mean square error is as small as possible for both the training and the validation set. We now use this method to infer relationships in well-known non-linear systems. We first consider the Lorenz96 non-linear system, which is defined as: $\frac{dx_{i}}{dt}=(x_{i+1}-x_{i-2})x_{i-1}-x_{i}+F$ (17) where $i$ goes from $1$ to $N$, $N$ is the number of oscillators, and $x_{N+1}=x_{1}$, $x_{-1}=x_{N-1}$, $x_{0}=x_{N}$. $F$ is the driving term, and we choose $F=8$, for which the system behaves in the chaotic regime. Figure 1 shows the results for N=5. We run the model $N=5$ times, each time with $y=\frac{dx_{i}}{dt}$ for $i$ from 1 to 5.
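A minimal sketch of the ingredients of this experiment: generating Lorenz96 trajectories from equation 17 with a Runge-Kutta integrator, then evaluating the loss of equation 16 for a diagonal encoder $l_i=W_ix_i$ (equation 10) with a one-hidden-layer tanh decoder. The integrator, step size, initial condition, and layer sizes are our assumptions, not taken from the text:

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Lorenz96 trajectories (equation 17): N = 5 oscillators, F = 8 ---
def rhs(x, F=8.0):
    # dx_i/dt = (x_{i+1} - x_{i-2}) x_{i-1} - x_i + F, with cyclic indices
    return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + F

def simulate(n=5, steps=3000, dt=0.01):
    x = 8.0 + 0.01 * rng.standard_normal(n)   # perturbed fixed point x_i = F
    out = np.empty((steps, n))
    for t in range(steps):                    # classical 4th-order Runge-Kutta
        k1 = rhs(x)
        k2 = rhs(x + 0.5 * dt * k1)
        k3 = rhs(x + 0.5 * dt * k2)
        k4 = rhs(x + dt * k3)
        x = x + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
        out[t] = x
    return out

X_raw = simulate()
y = np.array([rhs(x) for x in X_raw])[:, 0]   # target: y = dx_1/dt
X = (X_raw - X_raw.mean(0)) / X_raw.std(0)    # standardized inputs

# --- loss of equation 16: diagonal encoder l_i = W_i x_i, tanh decoder ---
W = rng.standard_normal(5)                            # encoder weights W_i
W1, b1 = rng.standard_normal((5, 16)), np.zeros(16)   # decoder hidden layer
W2, b2 = rng.standard_normal(16), 0.0                 # decoder output layer

def loss(lam_enc=0.1, lam_dec=0.005, sigma_dec=1.0, sigma_enc=0.1):
    L = X * W + sigma_enc * rng.standard_normal(X.shape)  # noisy latent code
    pred = np.tanh(L @ W1 + b1) @ W2 + b2                 # decoder output
    mse = np.mean((pred - y) ** 2) / sigma_dec ** 2
    return (mse + lam_dec * (np.abs(W1).sum() + np.abs(W2).sum())
                + lam_enc * np.abs(W).sum())

print(loss())   # scalar objective; SGD would minimize it over W, W1, b1, W2, b2
```

A full run would wrap `loss` in a gradient-descent loop (e.g., with an autodiff framework) and inspect which $W_i$ shrink toward zero.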
We see that the latent representation $l_{i}$ is basically just the added Gaussian noise when the corresponding $y$ has no dependency on $l_{i}$. The number of data points was 3000, the learning rate was 0.0001, and the values of $\lambda_{dec},\lambda_{enc}$ were 0 and 0.1, respectively. The training was run for 1000 epochs with a batch size of 300. Next we apply the NN to infer causal relationships in a set of non-linear delay equations. For this we look at the following set of equations: $Y_{i}(t+1)=(\xi_{ii}-\sum_{j=1,2,3}(\xi_{jj}-\xi_{ij}Y_{j}(t)))Y_{i}(t)$ (18) for i=1,2,3. We choose parameters $\xi_{ij}$ which correspond to the fan-in pattern shown in Figure 2. The values of $\xi$ are as follows: $\xi_{11}=4,\xi_{22}=3,\xi_{33}=2,\xi_{31}=0.6,\xi_{32}=-0.6$. These parameters correspond to a chaotic regime. In this case both $Y_{2}$ and $Y_{3}$ are causally driven by $Y_{1}$. A fan-in pattern is a good test because correlation-based tests would falsely infer a causal relationship between $Y_{2}$ and $Y_{3}$ [2]. To infer the causal relationships, we run the NN with $y=Y_{i}(t+1)$ and input $X=[Y_{1}(t),Y_{2}(t),Y_{3}(t)]$. From Figure 3 we can see that we are able to correctly infer the dependencies, even for a very small dataset of 50 points. The plots were obtained for a learning rate of 0.001 and $\lambda_{enc},\lambda_{dec}$ values of 0.1 and 0.005, respectively. The number of epochs was 1500, with a batch size of 32. Figure 4: Plot shows FD vs MR for different values of $\lambda_{enc}$. The legend also mentions the non-linear system for the plotted data. ‘dde’ stands for the delay difference equations in equation 18. We also summarize the performance of this method using two metrics, false discovery (FD) and miss rate (MR), which are defined as: $\begin{split}&FD=\frac{FP}{FP+TP}\\\ &MR=\frac{FN}{FN+TP}\\\ \end{split}$ (19) where FN, FP, TP are false negatives, false positives, and true positives, respectively.
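For concreteness, the metrics of equation 19 evaluated on hypothetical confusion counts (illustrative numbers only, not the paper's results):

```python
# Hypothetical counts pooled over independent runs (not the paper's data).
TP, FP, FN = 30, 4, 6    # TP/FP: correct/incorrect "independent" calls; FN: missed ones

FD = FP / (FP + TP)      # false discovery: fraction of "independent" calls that are wrong
MR = FN / (FN + TP)      # miss rate: fraction of truly independent variables not found
print(FD, MR)
```

Both metrics lie in [0, 1], with 0 indicating perfect discovery.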
Here a positive means that a variable has been discovered to be independent of the output; a negative means that a variable has been discovered to be related to the output. These data are obtained by aggregating results over 20 independent runs of the model. For the Lorenz96 model, the best result is obtained with $\lambda_{enc}=0.2$, while for the set of equations 18, the best results are obtained for $\lambda_{enc}=0.1$. ## IV Conclusion The proposed NN-based approach is a versatile platform for inferring relationships, especially in complex non-linear systems. This is because NNs are a powerful tool for modelling such non-linear functions. Even though it is difficult to infer the exact functional form using an NN, this method can help locate functional dependencies between variables in a multivariable system. These variables can then be probed more extensively to find the functional (or approximately functional) form of the relationships. Methods based on sparse regression have been used in the past to find functional relationships. However, they rely on prior knowledge of the set of basis functions to use for the regression. The proposed method has no such requirement and, with a large enough NN, can simulate any complex non-linear function. Besides locating functional relationships, it can also help infer causal relationships in non-linear data, as seen in the discussed example, where it correctly inferred the causal relationships even for a small dataset of 50 samples. ## V Acknowledgements The author would like to thank Akshatha Mohan for helpful comments and a critical assessment of the manuscript. ## References * Reshef _et al._ [2011] D. N. Reshef, Y. A. Reshef, H. K. Finucane, S. R. Grossman, G. McVean, P. J. Turnbaugh, E. S. Lander, M. Mitzenmacher, and P. C. Sabeti, Science 334, 1518 (2011). * Marbach _et al._ [2010] D. Marbach, R. J. Prill, T. Schaffter, C. Mattiussi, D. Floreano, and G. Stolovitzky, Proceedings of the National Academy of Sciences 107, 6286 (2010).
* Brunton _et al._ [2016] S. L. Brunton, J. L. Proctor, and J. N. Kutz, Proceedings of the National Academy of Sciences 113, 3932 (2016). * Dembo _et al._ [2001] A. Dembo, A. Kagan, and L. A. Shepp, Bernoulli 7, 343 (2001). * Liu _et al._ [2012] J. Liu, J. Li, C. Liu, and Y. Chen, IEEE Transactions on Knowledge and Data Engineering 24, 251 (2012). * Huhtala [1999] Y. Huhtala, The Computer Journal 42, 100 (1999). * Lin _et al._ [2017] H. W. Lin, M. Tegmark, and D. Rolnick, Journal of Statistical Physics 168, 1223 (2017). * Rolnick and Tegmark [2018] D. Rolnick and M. Tegmark, arXiv:1705.05502 [cs, stat] (2018), arXiv: 1705.05502. * Tibshirani [1996] R. Tibshirani, Journal of the Royal Statistical Society: Series B (Methodological) 58, 267 (1996). * Iten _et al._ [2020] R. Iten, T. Metger, H. Wilming, L. del Rio, and R. Renner, Physical Review Letters 124, 010508 (2020). * Rem _et al._ [2019] B. S. Rem, N. Käming, M. Tarnowski, L. Asteria, N. Fläschner, C. Becker, K. Sengstock, and C. Weitenberg, Nature Physics 15, 917 (2019). * Melnikov _et al._ [2018] A. A. Melnikov, H. Poulsen Nautrup, M. Krenn, V. Dunjko, M. Tiersch, A. Zeilinger, and H. J. Briegel, Proceedings of the National Academy of Sciences 115, 1221 (2018). * Hinton [2006] G. E. Hinton, Science 313, 504 (2006). * Detto _et al._ [2012] M. Detto, A. Molini, G. Katul, P. Stoy, S. Palmroth, and D. Baldocchi, The American Naturalist 179, 524 (2012). * Runge _et al._ [2012] J. Runge, J. Heitzig, V. Petoukhov, and J. Kurths, Physical Review Letters 108, 258701 (2012). * Ma _et al._ [2015] H. Ma, K. Aihara, and L. Chen, Scientific Reports 4, 7464 (2015). * Tishby _et al._ [2000] N. Tishby, F. C. Pereira, and W. Bialek, arXiv:physics/0004057 (2000), arXiv: physics/0004057. * Giannella and Robertson [2004] C. Giannella and E. Robertson, Information Systems 29, 483 (2004). * Ioffe and Szegedy [2015] S. Ioffe and C. Szegedy, arXiv:1502.03167 [cs] (2015), arXiv: 1502.03167.
# No-Regret Caching via Online Mirror Descent

Tareq Si Salem <EMAIL_ADDRESS> (Inria, Université Côte d’Azur, France), Giovanni Neglia <EMAIL_ADDRESS> (Inria, Université Côte d’Azur, France), and Stratis Ioannidis <EMAIL_ADDRESS> (Northeastern University, USA)

###### Abstract. We study an online caching problem in which requests can be served by a local cache to avoid retrieval costs from a remote server. The cache can update its state after a batch of requests and store an arbitrarily small fraction of each file. We study no-regret algorithms based on Online Mirror Descent (OMD) strategies. We show that the optimal OMD strategy depends on the request diversity present in a batch. We also prove that, when the cache must store the entire file, rather than a fraction, OMD strategies can be coupled with a randomized rounding scheme that preserves regret guarantees.

## 1\. Introduction

Caches are deployed at many different levels in computer systems: from CPU hardware caches to operating system memory caches, from application caches at clients to CDN caches deployed as physical servers in the network or as cloud services like Amazon’s ElastiCache (AWS, 2021). They aim to provide a faster service to the user and/or to reduce the computation/communication load on other system elements, like hard disks, file servers, etc. The ubiquity of caches has motivated extensive research on the performance of existing caching policies, as well as on the design of new policies with provable guarantees. To that end, most prior work has assumed that caches serve requests generated according to a stochastic process, ranging from the simple, memory-less independent reference model (Coffman and Denning, 1973) to more complex models trying to capture temporal locality effects and time-varying popularities (e.g., the shot-noise model (Traverso et al., 2013)). An alternative modeling approach is to consider an _adversarial_ setting.
Assuming that the sequence of requests is generated by an adversary, an online caching policy can be compared to the optimal offline policy that views the sequence of requests in advance. Caching was indeed one of the first problems studied by Sleator and Tarjan in the context of the competitive analysis of online algorithms (Sleator and Tarjan, 1985). In competitive analysis, the metric of interest is the _competitive ratio_, i.e., the worst-case ratio between the costs incurred by the online algorithm and the optimal offline _dynamic_ algorithm. This line of work led to the study of metrical task systems (Borodin et al., 1992), a popular research area in the algorithms community (Koutsoupias, 2009). Recently, Paschos et al. (Paschos et al., 2019) proposed studying caching as an online convex optimization (OCO) problem (Hazan, 2016). OCO again considers an adversarial setting, but the metric of interest is the _regret_, i.e., the difference between the costs incurred over a time horizon $T$ by the algorithm and by the optimal offline _static_ solution. Online algorithms whose regret grows sublinearly with $T$ are called _no-regret_ algorithms, as their time-average regret becomes negligible for large $T$. Paschos et al. proposed a no-regret caching policy based on the classic online gradient descent method ($\mathrm{OGD}$), under the assumption that (1) the cache can store arbitrarily small fractions of each file (the so-called fractional setting), and (2) the cache state is updated after each request. In this paper, we extend and generalize the analysis of Paschos et al. in three different directions: (1) We assume the cache can update its state after processing a batch of $R\geq 1$ requests. This is of interest both in high-demand settings and in cases where updates are infrequent because they are costly in terms of computation or communication.
(2) We consider a broad family of caching policies based on online mirror descent ($\mathrm{OMD}$); $\mathrm{OGD}$ can be seen as a special instance of this family. (3) We also depart from the fractional setting, extending our analysis to the case when the cache can only store entire files (the integral setting). Our contributions are summarized as follows. First, we show that caching policies based on $\mathrm{OMD}$ enjoy $\operatorname{\mathcal{O}}\left(\sqrt{T}\right)$ regret in the fractional setting. Most importantly, we show that bounds for the regret crucially depend on the diversity of the request process. In particular, the regret depends on the _diversity ratio_ $R/h$, where $R$ is the size of the batch, and $h$ is the maximum multiplicity of a request in a given batch. Second, we characterize the optimality of OMD caching policies w.r.t. regret under different diversity regimes. We observe that, for a large region of possible values of the diversity ratio, the optimum is either $\mathrm{OGD}$ or $\mathrm{OMD}$ with a neg-entropy mirror map ($\mathrm{OMD}_{\mathrm{NE}}$). In particular, $\mathrm{OGD}$ is optimal in the _low diversity_ regime, while $\mathrm{OMD}_{\mathrm{NE}}$ is optimal in the _high diversity_ regime. Third, $\mathrm{OMD}$ algorithms include a gradient update followed by a projection to guarantee that the new solution is in the feasible set (e.g., that it does not violate the cache capacity constraints). The projection is often the most computationally expensive step of the algorithm. We show that efficient polynomial algorithms exist both for $\mathrm{OGD}$ (slightly improving the algorithm in (Paschos et al., 2019)) and for $\mathrm{OMD}_{\mathrm{NE}}$. Finally, $\mathrm{OMD}$ algorithms work in a continuous space, and are therefore well-suited for the fractional setting originally studied by Paschos et al.
Still, we show that, if coupled with opportune rounding techniques, they can also be used when the cache can only store a file in its entirety, while preserving their regret guarantees. The remainder of this paper is organized as follows. After an overview of the related work in Sec. 2, we introduce our model assumptions in Sec. 3 and provide technical background on gradient algorithms in Sec. 4. Section 4.3 presents our main results on the regret of $\mathrm{OMD}$ caching policies and their computational complexity. Section 5 discusses extending the model to include cache update costs, which is required to introduce the integral setting in Sec. 6. Finally, numerical results are presented in Sec. 7.

## 2\. Related work

The caching problem has been extensively studied in the literature under different assumptions on the request process. When requests occur according to a given stochastic process, the analysis usually leads to complex formulas even in simple settings. For example, even the hit ratio of a single cache managed by the LRU eviction policy under the independent reference model is hard to precisely characterize (King, 1972; Flajolet et al., 1992). The characteristic time approximation (often referred to as Che’s approximation) significantly simplifies this analysis by assuming that a file, in the absence of additional requests for it, stays in the cache for a random time sampled independently from requests for other files. Proposed by Fagin (Fagin, 1977) and rediscovered and popularized by Che et al. (Che et al., 2002), the approximation has been justified formally by several works (Jelenkovic, 1999; Fricker et al., 2012; Jiang et al., 2018) and has allowed the study of a large number of existing (Garetto et al., 2016) and new (Gast and Van Houdt, 2017; Leonardi and Neglia, 2018) caching policies.
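To make the characteristic-time approximation concrete: the characteristic time $T_{C}$ solves $\sum_{i}(1-e^{-\lambda_{i}T_{C}})=k$ for a cache of size $k$, and the hit ratio is then $\sum_{i}p_{i}(1-e^{-\lambda_{i}T_{C}})$. A minimal sketch for an LRU cache under IRM arrivals (the Zipf popularity parameters are illustrative assumptions):

```python
import math

# Hedged sketch of the characteristic-time (Che) approximation for an LRU
# cache of size k under IRM arrivals with per-file rates `lambdas`.
# Tc is found by bisection on the expected occupancy.
def che_hit_ratio(lambdas, k):
    occ = lambda tc: sum(1.0 - math.exp(-l * tc) for l in lambdas)
    lo, hi = 0.0, 1.0
    while occ(hi) < k:          # grow the bracket until occupancy reaches k
        hi *= 2.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if occ(mid) < k:
            lo = mid
        else:
            hi = mid
    tc = 0.5 * (lo + hi)
    total = sum(lambdas)
    return sum((l / total) * (1.0 - math.exp(-l * tc)) for l in lambdas)

# Illustrative assumption: Zipf(0.8) popularities, catalog of 1000, cache of 100.
lam = [i ** -0.8 for i in range(1, 1001)]
hit = che_hit_ratio(lam, 100)
```

As expected for an approximation of this kind, the resulting hit ratio increases monotonically with the cache size $k$.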
It also applies to networked settings (Fofack et al., 2014; Berger et al., 2014; Alouf et al., 2016; Chu et al., 2018) and to more general utilities beyond the hit ratio (Dehghan et al., 2019; Neglia et al., 2018), all under stochastic requests. Online caching policies based on gradient methods have also been studied in the stochastic request setting, leading to Robbins-Monro/stochastic approximation algorithms (see, e.g., (Ioannidis et al., 2010; Ioannidis and Yeh, 2016)). Though related to OCO, these guarantees are very different from the regret metric we study here. Many works have also explored the offline, network-wide static allocation of files, presuming demand is known (Borst et al., 2010; Shanmugam et al., 2013; Poularakis et al., 2016). We differ from the work above, as we consider adversarial requests. Caching under adversarial requests has been studied since Sleator and Tarjan’s seminal paper (Sleator and Tarjan, 1985) through the competitive ratio metric. An algorithm is said to be $\alpha$-competitive when its competitive ratio is bounded by $\alpha$ over all possible input sequences. The problem has been generalized by Manasse et al. (Manasse et al., 1988) under the name _$k$-server problem_, and further generalized by Borodin et al. under the name _metrical task systems (MTS)_ (Borodin et al., 1992). The literature on both the $k$-server and MTS problems is vast. A recent trend is to apply continuous optimization techniques to solve these combinatorial problems. Bansal et al. (Bansal et al., 2012) study the $k$-server problem on a weighted star metric space. In the same spirit, Bubeck et al. (Bubeck et al., 2018) use the framework of continuous online mirror descent to provide an $o(k)$-competitive algorithm for the $k$-server problem on hierarchically separated trees. In this paper, we focus on regret rather than competitive ratio as the main performance metric. Andrew et al.
(Andrew et al., 2013) give a formal comparison between competitive ratio and regret and prove that there is an intrinsic incompatibility between the two: no algorithm can have both sub-linear regret and a constant competitive ratio. At the same time, they propose an algorithm with sub-linear regret and slowly increasing competitive ratio. Online convex optimization (OCO) was first proposed by Zinkevich (Zinkevich, 2003), who showed that projected gradient descent attains sublinear regret bounds in the online setting. OCO generalizes previous online problems like the experts problem (Littlestone and Warmuth, 1994), and has become widely influential in the learning community (Hazan, 2016; Shalev-Shwartz, 2012). To the best of our knowledge, Paschos et al. (Paschos et al., 2019) were the first to apply the OCO framework to caching. Besides proposing OGD for the single cache, they extended it to a simple networked scenario, where users have access to a set of parallel caches that store pseudo-random linear combinations of the files. They proposed no-regret algorithms in both settings. Bhattacharjee et al. (Bhattacharjee et al., 2020) extended this work, proving tighter lower bounds for the regret and proposing new caching policies for the networked setting that do not require file coding; Mukhopadhyay and Sinha (Mukhopadhyay and Sinha, 2021) accounted for switching costs due to file retrievals. We depart from these works in considering OMD algorithms, a more general request process, and allowing for integral cache states obtained through randomized rounding. This work is an extension of our previous work (Si Salem et al., 2021). In particular, (1) we analyse and derive regret bounds for a broader class of OMD algorithms ($q$-norm mirror maps), and (2) we extend our analysis to the integral caching setting. ## 3\.
System description

| Symbol | Description | Symbol | Description |
|---|---|---|---|
| **Notational Conventions** | | $\mathcal{R}_{R,h}$ | Set of possible adversarial requests |
| $[n]$ | Set of integers $\\{1,2,\dots,n\\}$ | $\boldsymbol{r}_{t}$ | Batch of requests at timeslot $t$ |
| **Caching** | | $f_{\boldsymbol{r}_{t}}$ | Cost received at timeslot $t$ |
| $\mathcal{N}$ | Catalog set with size $\left|\mathcal{N}\right|=N$ | $\mathrm{UC}_{\boldsymbol{r}_{t}}$ | Update cost of the cache at timeslot $t$ |
| $k$ | Cache capacity | $\boldsymbol{w}$ / $\boldsymbol{w}^{\prime}$ | Service / update costs in $\mathbb{R}_{+}^{N}$ |
| $\mathcal{X}$ | Set of fractional cache states | **Online Learning** | |
| $\mathcal{X}_{\delta}=\mathcal{X}\cap[\delta,1]^{N}$ | The $\delta$-interior of $\mathcal{X}$ | $T$ | The time horizon |
| $\mathcal{Z}=\mathcal{X}\cap\\{0,1\\}^{N}$ | Set of integral cache states | $\eta$ | Learning rate |
| $\boldsymbol{x}_{t}$ | Fractional cache state at timeslot $t$ | $\mathrm{UC}_{\boldsymbol{r}_{t}}(\boldsymbol{x}_{t},\boldsymbol{x}_{t+1})$ | Update cost at timeslot $t$ |
| $\boldsymbol{\zeta}_{t}$ | Integral cache state at timeslot $t$ | Regret${}_{T}(\mathcal{A})$ | Regret of policy $\mathcal{A}$ over $T$ |
| $\boldsymbol{z}_{t}$ | Random integral cache state at timeslot $t$ | $\mathrm{E\mbox{-}Regret}_{T}(\mathcal{A},\Xi)$ | Extended regret of policy $\mathcal{A}$ over $T$ |
| $\boldsymbol{x}_{*}$ | Optimal cache allocation in hindsight | $\Phi(\boldsymbol{x})$ | Mirror map |
| $R$ | Number of files’ requests in a batch | $D_{\Phi}(\boldsymbol{x},\boldsymbol{y})$ | Bregman divergence associated to $\Phi$ |
| $h$ | Maximum multiplicity of a requested file | $\Pi^{\Phi}_{\mathcal{B}}(\boldsymbol{y})$ | The projection onto $\mathcal{B}$ under $D_{\Phi}$ |

Table 1. Notation Summary

Remote Service and Local Cache. We consider a system in which requests for files are served either remotely or by an intermediate cache of finite capacity; a cache miss incurs a file-dependent remote retrieval cost.
Formally, we consider a sequence of requests for files of equal size from a catalog $\mathcal{N}=\\{1,2,\dots,N\\}$. These requests can be served by a remote server at cost $w_{i}\in\mathbb{R}_{+}$ per request for file $i\in\mathcal{N}$. This cost could be, e.g., an actual monetary cost for using the network infrastructure, or a quality of service cost incurred due to fetching latency. Costs may vary across files, as each file may be stored at a different remote location. We denote by $\boldsymbol{w}=[w_{i}]_{i\in\mathcal{N}}\in\mathbb{R}_{+}^{N}$ the vector of costs and assume that $\boldsymbol{w}$ is known. A local cache of finite capacity is placed in between the source of requests and the remote server(s). The local cache’s role is to reduce the costs incurred by satisfying requests locally. We denote by $k\in\\{1,2,\dots,N\\}$ the capacity of the cache. The cache is allowed to store fractions of files (this assumption will be removed in Sec. 6). We assume that time is slotted, and denote by $x_{t,i}\in[0,1]$ the fraction of file $i\in\mathcal{N}$ stored in the cache at timeslot $t\in\\{1,2,\dots,T\\}$. The cache state is then given by vector $\boldsymbol{x}_{t}=[x_{t,i}]_{i\in\mathcal{N}}\in\mathcal{X}$, where $\mathcal{X}$ is the capped simplex determined by the capacity constraint, i.e., $\mathcal{X}=\left\\{\boldsymbol{x}\in[0,1]^{N}:\sum^{N}_{i=1}x_{i}=k\right\\}$. Requests. We assume that a batch of multiple requests may arrive within a single timeslot. The number of requests (i.e., the batch size) at each timeslot is given by $R\in\mathbb{N}$. A file may be requested multiple times (e.g., by different users, whose aggregated requests form the stream reaching the cache) within a single timeslot. 
We denote by $r_{t,i}\in\mathbb{N}$ the _multiplicity_ of file $i\in\mathcal{N}$, i.e., the number of requests for $i$, at time $t$, and by $\boldsymbol{r}_{t}=[r_{t,i}]_{i\in\mathcal{N}}\in\mathbb{N}^{N}$ the vector of such requests, representing the entire batch. We also assume that the maximum multiplicity of a file in a batch is bounded by $h\in\mathbb{N}$. As a result, $\boldsymbol{r}_{t}$ belongs to set $\mathcal{R}_{R,h}=\left\\{\boldsymbol{r}\in\\{0,\dots,h\\}^{N}:\sum^{N}_{i=1}r_{i}=R\right\\}.$ Intuitively, the ratio $\frac{R}{h}$ defines the diversity of request batches in a timeslot. For example, when $\frac{R}{h}=1$, all $R$ requests are concentrated on a single file. When $\frac{R}{h}=N$, requests are spread evenly across the catalog $\mathcal{N}$. In general, $\frac{R}{h}$ is a lower bound for the number of distinct files requested in the batch. For that reason, we refer to $\frac{R}{h}$ as the _diversity ratio_. (Footnote: this definition of diversity is consistent with other notions of diversity, such as, e.g., the entropy; indeed the diversity ratio provides a lower bound on the entropy of the normalized batch vector $\frac{\boldsymbol{r}_{t}}{R}$, as $E\left(\frac{\boldsymbol{r}_{t}}{R}\right)\geq\log\left(\frac{R}{h}\right)$ (Lin, 2013, Lemma 3), where $E(\boldsymbol{p})=-\sum_{i}p_{i}\log(p_{i})$ is the entropy function.) We note that our request model generalizes the setting by Paschos et al. (Paschos et al., 2019), which can be seen as the case $R=h=1$, i.e., the batch contains only one request per timeslot. We make no additional assumptions on the request arrival process; put differently, we operate in the adversarial online setting, where a potential adversary may select an arbitrary request sequence $\\{\boldsymbol{r}_{t}\\}_{t=1}^{T}$ in $\mathcal{R}_{R,h}$ to increase system costs. Service Cost Objective.
When a request batch $\boldsymbol{r}_{t}$ arrives, the cache incurs the following cost: (1) $\displaystyle\textstyle f_{\boldsymbol{r}_{t}}(\boldsymbol{x}_{t})=\sum^{N}_{i=1}w_{i}r_{t,i}(1-x_{t,i}).$ In other words, for each file $i\in\mathcal{N}$, the system pays a cost proportional to the file fraction $(1-x_{t,i})$ missing from the local cache, weighted by the file cost $w_{i}$ and by the number of times $r_{t,i}$ that file $i$ is requested in the current batch $\boldsymbol{r}_{t}$. The cost objective (1) captures several possible real-life settings. First, it can be interpreted as a QoS cost paid by each user for the additional delay to retrieve part of the file from the server. Second, assuming that the $R$ requests arrive and are served individually (e.g., because they are spread out within a timeslot), Eq. (1) can represent the load on the servers or on the network to provide the missing part of the requested files. Our model also applies when all requests for the same file are aggregated and served simultaneously by a single fetch operation. In this case, $r_{t,i}$ in Eq. (1) should then be interpreted as the indicator variable denoting whether file $i$ was requested; correspondingly, $R$ then indicates the total number of _distinct_ files requested, and $h=1$. Online Caching Algorithms and Regret. Cache files are determined online as follows. The cache has selected a state $\boldsymbol{x}_{t}\in\mathcal{X}$ at the beginning of a timeslot. The request batch $\boldsymbol{r}_{t}$ arrives, and the linear cost $f_{\boldsymbol{r}_{t}}(\boldsymbol{x}_{t})$ is incurred; the state is subsequently updated to $\boldsymbol{x}_{t+1}$.
Formally, the cache state is determined by an online policy $\mathcal{A}$, i.e., a sequence of mappings $\\{\mathcal{A}_{t}\\}^{T-1}_{t=1}$, where for every $t\geq 1$, $\mathcal{A}_{t}:(\mathcal{R}_{R,h}\times\mathcal{X})^{t}\to\mathcal{X}$ maps the sequence of past request batches and decisions $\\{(\boldsymbol{r}_{s},\boldsymbol{x}_{s})\\}^{t}_{s=1}$ to the next state $\boldsymbol{x}_{t+1}\in\mathcal{X}$. We assume that the policy starts from a feasible state $\boldsymbol{x}_{1}\in\mathcal{X}$. We measure the performance of an online algorithm $\mathcal{A}$ in terms of regret, i.e., the difference between the total cost experienced by a policy $\mathcal{A}$ over a time horizon $T$ and that of the best static state $\boldsymbol{x}_{*}$ in hindsight. Formally, (2) $\displaystyle\textstyle\text{Regret}_{T}(\mathcal{A})=\underset{\\{\boldsymbol{r}_{1},\boldsymbol{r}_{2},\dots,\boldsymbol{r}_{T}\\}\in\mathcal{R}^{T}_{R,h}}{\sup}\left\\{\sum^{T}_{t=1}f_{\boldsymbol{r}_{t}}(\boldsymbol{x}_{t})-\sum^{T}_{t=1}f_{\boldsymbol{r}_{t}}(\boldsymbol{x}_{*})\right\\},$ where $\boldsymbol{x}_{*}=\mathop{\arg\min}_{\boldsymbol{x}\in\mathcal{X}}\sum^{T}_{t=1}f_{\boldsymbol{r}_{t}}(\boldsymbol{x})$ is the optimal static cache state (in hindsight). Note that, by taking the supremum in Eq. (2), we indeed measure regret in the adversarial setting, i.e., against an adversary that potentially picks requests in $\mathcal{R}_{R,h}$ trying to jeopardize cache performance. Update Costs. An online algorithm $\mathcal{A}$ updating the cache state at timeslot $t$ may require moving a portion of a file from a remote server to the cache to implement this update. The update cost of the online algorithm is not explicitly modeled in our cost and regret (Eqs. (1) and (2), respectively). We postpone the discussion of this cost to Sec. 5. For the moment we observe that updates come “for free” for files requested in the current timeslot.
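The hindsight-optimal static state $\boldsymbol{x}_{*}$ in Eq. (2) is easy to compute: since the total cost is linear in $\boldsymbol{x}$, it suffices to cache whole files with the $k$ largest cumulative weighted demands $w_{i}\sum_{t}r_{t,i}$. A minimal sketch on a synthetic request trace (the trace and costs below are illustrative assumptions, not data from the paper):

```python
import numpy as np

# Hedged sketch: the best static cache state in hindsight from Eq. (2).
# The cost is linear in x, so a vertex of the capped simplex is optimal:
# cache the k files with the largest cumulative weighted demand.
def best_static_state(requests, w, k):
    """requests: (T, N) array of multiplicities r_t; w: (N,) retrieval costs."""
    gains = w * requests.sum(axis=0)       # w_i * sum_t r_{t,i}
    x_star = np.zeros_like(w)
    x_star[np.argsort(gains)[-k:]] = 1.0   # cache the top-k files entirely
    return x_star

def total_cost(requests, w, x):
    """Sum over t of f_{r_t}(x) as in Eq. (1)."""
    return float((requests * (w * (1.0 - x))).sum())

rng = np.random.default_rng(0)
T, N, k, R = 50, 10, 3, 5
reqs = rng.multinomial(R, np.ones(N) / N, size=T)  # R requests per timeslot
w = rng.uniform(1.0, 2.0, size=N)
x_star = best_static_state(reqs, w, k)
```

Of course, the online policy cannot use this computation directly, since it requires the whole request sequence in advance.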
For example, increasing the fraction $x_{t,i}$ for some file $i$ such that $r_{t,i}>0$ can be performed by recovering the additional part out of the ($1-x_{t,i}$) missing fraction that needs to be retrieved to serve the file; updates can thus “free-ride” on regular traffic, at no additional cost. We prove in Proposition 1 that the main algorithms studied in this paper ($\mathrm{OGD}$ and $\mathrm{OMD}_{\mathrm{NE}}$) are in this regime, as _they only increase current state coordinates corresponding to files requested in the previous timeslot_; as such, their update costs can be considered to be zero.

## 4\. Fractional Caching and Gradient-based Algorithms

Inspired by offline minimization, it is natural to design a policy that, upon seeing $\boldsymbol{r}_{t}$, selects as $\boldsymbol{x}_{t+1}$ the state that would have minimized (in hindsight) the aggregate cost up to time $t$ (i.e., $\sum_{t^{\prime}=1}^{t}f_{\boldsymbol{r}_{t^{\prime}}}(\boldsymbol{x})$). Unfortunately, such a policy has poor regret: ###### Proposition 1. The aggregate cost minimization policy is a policy $\mathcal{A}$ that selects for every timeslot $t\in[T-1]$ the state $\boldsymbol{x}_{t+1}=\mathop{\arg\min}_{\boldsymbol{x}\in\mathcal{X}}\sum_{t^{\prime}=1}^{t}f_{\boldsymbol{r}_{t^{\prime}}}(\boldsymbol{x})$. This policy has linear (worst-case) regret, i.e., $\mathrm{Regret}(\mathcal{A})=\operatorname{\Omega}\left(T\right)$. We provide a proof in Appendix A.1. A more conservative approach, which indeed leads to sublinear regret, is to take gradual steps, moving in the direction of a better decision according to the latest cost; we present algorithms of this nature in this section.

### 4.1. Online Gradient Descent (OGD)

In OGD, introduced by Paschos et al. (Paschos et al., 2019) for online caching, the cache is initialized with a feasible state $\boldsymbol{x}_{1}\in\mathcal{X}$ and updated as follows.
Upon receiving a request batch $\boldsymbol{r}_{t}$, the cost $f_{\boldsymbol{r}_{t}}(\boldsymbol{x}_{t})$ is incurred and the next state becomes: (3) $\displaystyle\boldsymbol{x}_{t+1}=\Pi_{\mathcal{X}}\left(\boldsymbol{x}_{t}-\eta\nabla f_{\boldsymbol{r}_{t}}(\boldsymbol{x}_{t})\right),\quad\text{for all}~{}t\in[T-1],$ where $\Pi_{\mathcal{X}}(\,\cdot\,)$ is the Euclidean projection onto $\mathcal{X}$, which ensures feasibility, and $\eta\in\mathbb{R}_{+}$ is called the learning rate. Note that the state $\boldsymbol{x}_{t+1}$ obtained according to Eq. (3) is indeed a function of $\\{(\boldsymbol{r}_{t},\boldsymbol{x}_{t})\\}\subset\\{(\boldsymbol{r}_{s},\boldsymbol{x}_{s})\\}^{t}_{s=1}$ for every $t\geq 1$; hence, OGD is indeed an online caching policy as defined in Sec. 3. Paschos et al. (Paschos et al., 2019) show that OGD attains sub-linear regret when $R=h=1$; more specifically: ###### Theorem 2. ((Paschos et al., 2019, Theorem 2)) When $R=h=1$, the regret of OGD is bounded as follows: (4) $\mathrm{Regret}_{T}(\mathrm{OGD})\leq\left\lVert\boldsymbol{w}\right\rVert_{\infty}\sqrt{\min(2k,2(N-k))T}.$ In other words, OGD attains an $\operatorname{\mathcal{O}}\left(\sqrt{T}\right)$ regret when $R=h=1$. In this paper, we study a broader class of gradient descent algorithms that include OGD as a special case. As we will see below (see Thm. 8), the regret attained by OGD is not necessarily the tightest possible when $R\neq 1\neq h$; broadening the class of algorithms we consider allows us to improve upon this bound. ### 4.2.
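A minimal sketch of the OGD update in Eq. (3): the Euclidean projection onto the capped simplex $\{\boldsymbol{x}\in[0,1]^{N}:\sum_{i}x_{i}=k\}$ has a water-filling form, and below the threshold is found by bisection (this is our illustrative implementation, not necessarily the exact projection algorithm referenced in the paper):

```python
import numpy as np

# Hedged sketch of the OGD update in Eq. (3). The Euclidean projection onto
# the capped simplex has the form x_i = clip(y_i - tau, 0, 1), where tau is
# chosen so that the result sums to k; tau is found here by bisection.
def project_capped_simplex(y, k):
    lo, hi = y.min() - 1.0, y.max()   # occupancy is N at lo and 0 at hi
    for _ in range(100):
        tau = 0.5 * (lo + hi)
        if np.clip(y - tau, 0.0, 1.0).sum() > k:
            lo = tau                  # tau too small: cache over-full
        else:
            hi = tau
    return np.clip(y - 0.5 * (lo + hi), 0.0, 1.0)

def ogd_step(x, r, w, eta, k):
    grad = -w * r                     # gradient of the linear cost in Eq. (1)
    return project_capped_simplex(x - eta * grad, k)

# Toy run: 5 files, capacity 2, unit costs.
N, k, eta = 5, 2, 0.1
w = np.ones(N)
x = np.full(N, k / N)
for r in ([3, 1, 0, 0, 0], [0, 0, 2, 2, 0]):
    x = ogd_step(x, np.array(r, dtype=float), w, eta, k)
```

After each step the state remains feasible, and mass shifts toward the recently requested files.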
Online Mirror Descent (OMD)

Algorithm 1 Online mirror descent ($\text{OMD}_{\Phi}$)

1: $\boldsymbol{x}_{1}=\underset{\boldsymbol{x}\in\mathcal{X}\cap\mathcal{D}}{\arg\min}\,\Phi(\boldsymbol{x})$, $\eta\in\mathbb{R}_{+}$
2: for $t\leftarrow 1,2,\dots,T$ do $\triangleright$ Incur a cost $f_{\boldsymbol{r}_{t}}(\boldsymbol{x}_{t})$, and receive a gradient $\nabla f_{\boldsymbol{r}_{t}}(\boldsymbol{x})$
3: $\hat{\boldsymbol{x}}_{t}\leftarrow\nabla\Phi(\boldsymbol{x}_{t})$ $\triangleright$ Map primal point to dual point
4: $\hat{\boldsymbol{y}}_{t+1}\leftarrow\hat{\boldsymbol{x}}_{t}-\eta\nabla f_{\boldsymbol{r}_{t}}(\boldsymbol{x}_{t})$ $\triangleright$ Take gradient step in the dual space
5: $\boldsymbol{y}_{t+1}\leftarrow\left(\nabla\Phi\right)^{-1}(\hat{\boldsymbol{y}}_{t+1})$ $\triangleright$ Map dual point to a primal point
6: $\boldsymbol{x}_{t+1}\leftarrow\Pi_{\mathcal{X}\cap\mathcal{D}}^{\Phi}(\boldsymbol{y}_{t+1})$ $\triangleright$ Project new point onto feasible region $\mathcal{X}$
7: end for

OMD (Hazan, 2016, Sec. 5.3) is the online version of the mirror descent (MD) algorithm (Beck and Teboulle, 2003) for convex optimization of a fixed, known function. The main premise behind mirror descent is that variables and gradients live in two distinct spaces: the _primal space_, for variables, and the _dual space_, for gradients. The two are linked via a function known as a _mirror map_. Contrary to standard gradient descent, updates using the gradient occur in the dual space; the mirror map is used to translate this update into a change of the primal variables. For several constrained optimization problems of interest, mirror descent leads to faster convergence compared to gradient descent (Bubeck, 2015, Sec. 4.3). OMD arises by observing that MD is agnostic to whether the gradients are obtained from a _fixed_ function or from a sequence revealed adversarially. OMD for Caching. Applied to our caching problem, OMD takes the form summarized in Algorithm 1.
In our case, both the primal and dual spaces are $\mathbb{R}^{N}$. To disambiguate between the two, we denote primal points by $\boldsymbol{x},\boldsymbol{y}\in\mathbb{R}^{N}$ and dual points by $\hat{\boldsymbol{x}},\hat{\boldsymbol{y}}\in\mathbb{R}^{N}$, respectively. Formally, OMD is parameterized by (1) a fixed learning rate $\eta\in\mathbb{R}_{+}$, and (2) a differentiable map $\Phi:\mathcal{D}\to\mathbb{R}$, strictly convex over $\mathcal{D}$ and $\rho$-strongly convex over $\mathcal{X}\cap\mathcal{D}$, where $\mathcal{X}$ is included in the closure of $\mathcal{D}$; that is (5) $\displaystyle\mathcal{X}\subseteq\mathrm{closure}(\mathcal{D}).$ Function $\Phi$ is called the _mirror map_ , that links the primal to the dual space. Given $\eta$ and $\Phi$, an OMD iteration proceeds as follows. After observing the request batch $\boldsymbol{r}_{t}$ and incurring the cost $f_{\boldsymbol{r}_{t}}(\boldsymbol{x}_{t})$, the current state $\boldsymbol{x}_{t}$ is first mapped from the primal to the dual space via: (6) $\displaystyle\hat{\boldsymbol{x}}_{t}=\nabla\Phi(\boldsymbol{x}_{t}).$ Then, a regular gradient descent step is performed _in the dual space_ to obtain an updated dual point: (7) $\displaystyle\hat{\boldsymbol{y}}_{t+1}=\hat{\boldsymbol{x}}_{t}-\eta\nabla f_{\boldsymbol{r}_{t}}(\boldsymbol{x}_{t}).$ This updated dual point is then mapped back to the primal space using the inverse of mapping $\nabla\Phi$, i.e.: (8) $\displaystyle\boldsymbol{y}_{t+1}=\left(\nabla\Phi\right)^{-1}\\!(\hat{\boldsymbol{y}}_{t+1}).$ The resulting primal point $\boldsymbol{y}_{t+1}$ may lie outside the constraint set $\mathcal{X}$. 
To obtain the final feasible point $\boldsymbol{x}_{t+1}\in\mathcal{X}$, a projection is made using the Bregman divergence associated with the mirror map $\Phi$; that is, instead of the orthogonal projection used in OGD, the final cache state becomes: (9) $\displaystyle\boldsymbol{x}_{t+1}=\Pi_{\mathcal{X}\cap\mathcal{D}}^{\Phi}(\boldsymbol{y}_{t+1}),$ where $\Pi_{\mathcal{X}\cap\mathcal{D}}^{\Phi}(\,\cdot\,)$ is the Bregman projection, which we define formally below, in Definition 3. Together, steps (6)–(9) define OMD. Note that, as was the case for OGD, $\boldsymbol{x}_{t+1}$ is a function of $\\{(\boldsymbol{r}_{t},\boldsymbol{x}_{t})\\}\subset\\{(\boldsymbol{r}_{s},\boldsymbol{x}_{s})\\}^{t}_{s=1}$; hence OMD is indeed an online algorithm. Two additional technical assumptions on $\Phi$ and $\mathcal{D}$ must hold for steps (8) and (9) to be well-defined. (All hold for the algorithms we consider in Sec. 4.3.) First, the gradient of $\Phi$ must diverge at the boundary of $\mathcal{D}$; this, along with strict convexity, ensures the existence and uniqueness of the Bregman projection in (9). Second, the image of $\mathcal{D}$ under the gradient of $\Phi$ should take all possible values, that is, $\nabla\Phi(\mathcal{D})=\mathbb{R}^{N}$; this, again along with strict convexity, ensures that $\nabla\Phi$ is one-to-one and onto, so its inverse exists and Eq. (8) is well-defined. Setting $\Phi(\boldsymbol{x})=\frac{1}{2}\left\lVert\boldsymbol{x}\right\rVert^{2}_{2}$ and $\mathcal{D}=\mathbb{R}^{N}$ yields the identity mapping $\nabla\Phi(\boldsymbol{x})=\boldsymbol{x},$ for all $\boldsymbol{x}\in\mathcal{D}$. Furthermore, the Bregman divergence associated with this map is just the Euclidean distance $D_{\Phi}(\boldsymbol{x},\boldsymbol{y})=\frac{1}{2}\left\lVert\boldsymbol{x}-\boldsymbol{y}\right\rVert^{2}_{2}$. Thus, this Euclidean version of OMD is equivalent to OGD, and OMD can be seen as a generalization of OGD to other mirror maps.
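For the neg-entropy mirror map $\Phi(\boldsymbol{x})=\sum_{i}x_{i}\log x_{i}$ (the $\mathrm{OMD}_{\mathrm{NE}}$ instance referred to above), the OMD steps take a convenient multiplicative form. A hedged sketch: the bisection-based KL projection below is our illustrative implementation, not necessarily the procedure used in the paper.

```python
import numpy as np

# Hedged sketch of OMD with the neg-entropy mirror map Phi(x) = sum_i x_i log x_i.
# The dual step acts on log-coordinates; the Bregman (KL) projection onto the
# capped simplex takes the form x_i = min(1, beta * y_i), with beta found by
# bisection. The additive constant in grad Phi(x) = log x + 1 cancels through
# the projection, so it is dropped here.
def kl_project_capped_simplex(y, k):
    occ = lambda b: np.minimum(1.0, b * y).sum()
    lo, hi = 0.0, 1.0
    while occ(hi) < k:                 # grow the bracket until feasible
        hi *= 2.0
    for _ in range(100):
        b = 0.5 * (lo + hi)
        if occ(b) < k:
            lo = b
        else:
            hi = b
    return np.minimum(1.0, 0.5 * (lo + hi) * y)

def omd_ne_step(x, r, w, eta, k):
    y = np.exp(np.log(x) + eta * w * r)  # dual step; gradient of (1) is -w*r
    return kl_project_capped_simplex(y, k)

# Toy step: 5 files, capacity 2, unit costs.
N, k, eta = 5, 2, 0.1
x = np.full(N, k / N)
x = omd_ne_step(x, np.array([3.0, 1.0, 0.0, 0.0, 0.0]), np.ones(N), eta, k)
```

Because the update is multiplicative, coordinates of unrequested files are only ever rescaled downward, consistent with the "free-riding" discussion in Sec. 3.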
To conclude our description of OMD, we define the Bregman projection (Kiwiel, 1997). ###### Definition 3. The Bregman projection, denoted by $\Pi^{\Phi}_{\mathcal{X}\cap\mathcal{D}}:\mathbb{R}^{N}\to\mathcal{X}\cap\mathcal{D}$, is defined as (10) $\displaystyle\Pi^{\Phi}_{\mathcal{X}\cap\mathcal{D}}(\boldsymbol{y})=\underset{\boldsymbol{x}\in{\mathcal{X}\cap\mathcal{D}}}{\arg\min}\,D_{\Phi}(\boldsymbol{x},\boldsymbol{y}),$ where $\displaystyle D_{\Phi}(\boldsymbol{x},\boldsymbol{y})=\Phi(\boldsymbol{x})-\Phi(\boldsymbol{y})-\nabla{\Phi(\boldsymbol{y})}^{T}(\boldsymbol{x}-\boldsymbol{y})$ is the Bregman divergence associated with the mirror map $\Phi$.

### 4.3. Analysis of Online Mirror Descent Algorithms

We present our main results regarding the application of OMD under several different mirror maps to the online caching problem. We will be concerned both with (1) the regret attained, and (2) computational complexity issues, particularly pertaining to the associated Bregman projection. Our key observation is that _the regret of different algorithms is significantly influenced by demand diversity, as captured by the diversity ratio $\frac{R}{h}$_. In particular, our analysis allows us to characterize regimes of the diversity ratio in which OGD outperforms other mirror maps, and vice versa.

### 4.4. $q$-Norm Mirror Maps

A natural generalization of the OGD algorithm to a broader class of OMD algorithms is via $q$-norm mirror maps, whereby: (11) $\displaystyle\Phi(\boldsymbol{x})=\frac{1}{2}\left\lVert\boldsymbol{x}\right\rVert^{2}_{q},\quad\text{where }q\in(1,2],\quad\text{and}\quad\mathcal{D}=\mathbb{R}^{N}.$ It is easy to verify that $\Phi$ and $\mathcal{D}$, defined as above, satisfy all technical requirements set in Sec. 4.2 on a mirror map and its domain. We define $\mathrm{OMD}_{q\text{-}\mathrm{norm}}$ to be the OMD Algorithm 1 with $\Phi$ and $q$ given by Eq. (11).
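The link functions used in steps 3 and 5 of Algorithm 1 have closed forms for the $q$-norm map in Eq. (11): the gradient of $\frac{1}{2}\lVert\cdot\rVert_{q}^{2}$ and the gradient of the conjugate $\frac{1}{2}\lVert\cdot\rVert_{p}^{2}$ (with $\frac{1}{p}+\frac{1}{q}=1$) are inverses of each other. A minimal numerical sketch:

```python
import numpy as np

# Hedged sketch: primal-dual link functions of the q-norm mirror map
# Phi(x) = (1/2)||x||_q^2 from Eq. (11). The inverse map is the same formula
# with the dual exponent p, which is the identity exploited in Algorithm 1.
def grad_phi(x, q):
    norm_q = np.linalg.norm(x, q)
    return np.sign(x) * np.abs(x) ** (q - 1) * norm_q ** (2 - q)

def grad_phi_inv(y, q):
    p = q / (q - 1.0)   # dual exponent: 1/p + 1/q = 1
    return grad_phi(y, p)
```

For $q=2$ both maps reduce to the identity, recovering the OGD special case discussed above.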
Note that this map generalizes OGD, which corresponds to the special case $q=2$. In what follows, we denote by $\|\cdot\|_{p}$ the dual norm of $\|\cdot\|_{q}$. Then, $p\in[2,\infty)$ is such that $\frac{1}{p}+\frac{1}{q}=1$. Note that $\mathrm{OMD}_{q\text{-}\mathrm{norm}}$ is sometimes referred to as a _$p$ -norm algorithm_ (Shalev-Shwartz, 2012). #### 4.4.1. Regret Analysis We begin by providing a regret bound for $\mathrm{OMD}_{q\text{-}\mathrm{norm}}$ algorithms: ###### Theorem 4. For $\eta=\textstyle\sqrt{\frac{(q-1)k^{2}\left(k^{-\frac{2}{p}}-N^{-\frac{2}{p}}\right)}{\left\lVert\boldsymbol{w}\right\rVert^{2}_{\infty}h^{2}\left(\frac{R}{h}\right)^{\frac{2}{p}}T}}$, the regret of $\mathrm{OMD}_{q\text{-}\mathrm{norm}}$ over $\mathcal{X}$ satisfies: (12) $\textstyle\mathrm{Regret}_{T}(\mathrm{OMD}_{q\text{-}\mathrm{norm}})\leq\textstyle\left\lVert\boldsymbol{w}\right\rVert_{\infty}hk\left(\frac{R}{h}\right)^{\frac{1}{p}}\sqrt{\frac{1}{q-1}\left(k^{-\frac{2}{p}}-N^{-\frac{2}{p}}\right)T}.$ The proof can be found in Appendix A.3. We use an upper bound on the regret of general OMD from (Bubeck, 2015, Theorem 4.2) and relate it to our setting; in doing so, we bound the diameter of $\mathcal{X}$ w.r.t. the Bregman divergence under $\Phi$, as well as the dual norm $\left\lVert\,\cdot\,\right\rVert_{p}$ of the gradients $\nabla f_{\boldsymbol{r}_{t}}(\boldsymbol{x}_{t})$. Comparing Theorem 4 to Theorem 2, we see that both attain an $\operatorname{\mathcal{O}}\left(\sqrt{T}\right)$ regret. A natural question to ask when comparing the two bounds is whether there are cases where $\mathrm{OMD}_{q\text{-}\mathrm{norm}}$ with $q\neq 2$ outperforms OGD (i.e., $\mathrm{OMD}_{2\text{-}\mathrm{norm}}$). The constants on the r.h.s. of Eq. (12) depend on the diversity ratio $\frac{R}{h}$; this, in turn, determines the optimal $q$, i.e., the one that minimizes the bound in Eq. (12). 
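The tradeoff expressed by the bound in Eq. (12) can be explored numerically. The sketch below (our own code; parameter values are illustrative, not from the paper's experiments) evaluates the bound on a grid of $q\in(1,2]$ and picks the minimizer:

```python
import numpy as np

# Evaluate the regret upper bound ub(q) in Eq. (12) and find the q that
# minimizes it, mimicking the procedure behind the paper's Fig. 1.

def ub(q, N, k, R, h, T, w_inf=1.0):
    p = q / (q - 1.0)                      # dual exponent: 1/p + 1/q = 1
    diam = k**(-2.0 / p) - N**(-2.0 / p)   # diameter-related term in Eq. (12)
    return w_inf * h * k * (R / h)**(1.0 / p) * np.sqrt(diam * T / (q - 1.0))

def q_star(N, k, R, h, T, grid=None):
    grid = np.linspace(1.0 + 1e-6, 2.0, 2000) if grid is None else grid
    vals = [ub(q, N, k, R, h, T) for q in grid]
    return grid[int(np.argmin(vals))]

N, k, T = 100, 7, 10**4                    # illustrative values
print(q_star(N, k, R=1, h=1, T=T))         # low diversity: q* = 2 (OGD)
print(q_star(N, k, R=80, h=1, T=T))        # high diversity: q* close to 1
```

Note the two regimes match the thresholds established formally below (Theorems 7 and 8): $q=2$ is best when $R/h\leq k$, while $q\to 1$ wins for large $R/h$.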
Let $q^{*}=\mathop{\arg\inf}_{q\in(1,2]}\mathtt{ub}(q)$ be the optimal $q$, where $\mathtt{ub}:(1,2]\to\mathbb{R}_{+}$ is the upper bound in Eq. (12). Note that $q^{*}\in[1,2]$. Figure 1 shows $q^{*}$ as a function of the diversity ratio, for different values of the cache capacity $k$. We observe that OGD ($q=2$) is optimal in lower-diversity regimes and for larger caches; when the diversity $\frac{R}{h}$ increases or the cache capacity $k$ decreases, values $q<2$ become optimal. The transition from $q^{*}=2$ to $q^{*}=1$ is sharp, and becomes sharper as $k$ increases. Figure 1. Numerical characterization of $q^{*}\in[1,2]$ as a function of the diversity ratio $R/h$, for different cache capacities $k$ expressed as fractions of the catalog size ($N=100$). Given $R/h$, the optimal $q^{*}$ is determined as the value in $[1,2]$ that minimizes the upper bound in Eq. (12). Higher values of $R/h$ represent more diverse requests. Under small diversity, OGD is optimal; as diversity increases, mirror maps with $q<2$ attain a more favorable upper bound than OGD. #### 4.4.2. Optimality Regimes. Motivated by these observations, we turn our attention to formally characterizing the two regimes under which optimality transitions from $q^{*}=2$ to $q^{*}=1$. We first determine the upper bound on the regret for these two regimes. Indeed, by setting $q=2$ in Theorem 4, we obtain the following bound, generalizing Theorem 2 to the case $R/h>1$: ###### Corollary 5. For $\eta=\textstyle\sqrt{\frac{k\left(1-\frac{k}{N}\right)}{\left\lVert\boldsymbol{w}\right\rVert^{2}_{\infty}hRT}}$, the regret of OGD satisfies: (13) $\textstyle\mathrm{Regret}_{T}(\mathrm{OGD})\leq\left\lVert\boldsymbol{w}\right\rVert_{\infty}\sqrt{hRk\left(1-\frac{k}{N}\right)T}.$ This is a direct consequence of Theorem 4, obtained by setting $q=2$ in Eq. (12). We note that, in this result, we tighten the bound of Paschos et al. (Paschos et al., 2019): for $R=h=1$, the bound in Eq. 
(13) is smaller than the one in Theorem 2 by at least a $\sqrt{2}$ factor. We also characterize the limiting behavior of $\mathrm{OMD}_{q\text{-}\mathrm{norm}}$ as $q$ converges to $1$. ###### Corollary 6. As $q$ converges to $1$, the upper bound on the $\mathrm{OMD}_{q\text{-}\mathrm{norm}}$ regret given by Eq. (12) converges to: (14) $\textstyle\left\lVert\boldsymbol{w}\right\rVert_{\infty}hk\sqrt{2\log\left(\frac{N}{k}\right)T}.$ The proof can be found in Appendix A.4. This limit is precisely the bound on the regret attained under the neg-entropy mirror map (see Theorem 9 below). Armed with Corollaries 5 and 6, we can formally characterize the regimes in which either of the two strategies becomes dominant: ###### Theorem 7. The regret bound for $\mathrm{OMD}_{q\text{-}\mathrm{norm}}$ in Eq. (12) is minimized for $q=2$ when $\frac{R}{h}\leq k$. In other words, when the diversity ratio is smaller than the cache size, it is preferable to update the cache via OGD. The proof, in Appendix A.5, establishes that the upper bound in Eq. (12) is monotonically decreasing w.r.t. $q$ in the specified interval $\frac{R}{h}\leq k$. Our next result characterizes when the neg-entropy mirror map (the limit as $q$ converges to $1$) outperforms OGD: ###### Theorem 8. The limit, as $q$ converges to $1$, of the $\mathrm{OMD}_{q\text{-}\mathrm{norm}}$ regret bound, given in Eq. (14), is smaller than the corresponding bound for OGD ($\mathrm{OMD}_{q\text{-}\mathrm{norm}}$ with $q=2$) when $\frac{R}{h}>2\sqrt{Nk}$. The proof is provided in Appendix A.6. We stress that Theorem 8 implies the sub-optimality of OGD in the regime $\frac{R}{h}>2\sqrt{Nk}$. The experiments in Fig. 1 suggest that the bound in Theorem 8 is quite tight: for example, for $k=7$ the bounds suggest that $q=1$ should be optimal when $R/h$ exceeds $2\sqrt{100\times 7}\approx 52.9$, while experiments show that it is optimal when $R/h$ exceeds $45$. 
By contrast, the bound in Theorem 7 appears to be loose: the transitions we observe in Fig. 1 are sharper than what one would predict from the bounds. #### 4.4.3. Dual-Primal Update and Bregman Projection Having characterized the regret of $\mathrm{OMD}_{q\text{-}\mathrm{norm}}$ algorithms, we turn our attention to implementation issues. The maps to the dual space and back in Eq. (6) and Eq. (8) (Lines 2 and 4 in Algorithm 1) have the following expressions (Gentile and Littlestone, 1999), respectively: (15) $\displaystyle\hat{x}_{t,i}=\textstyle\left(\nabla\Phi(\boldsymbol{x}_{t})\right)_{i}=\text{sign}(x_{t,i})\frac{|x_{t,i}|^{q-1}}{\left\lVert\boldsymbol{x}_{t}\right\rVert^{q-2}_{q}},\quad\text{for all }~{}i\in\mathcal{N},$ (16) $\displaystyle{y}_{t+1,i}=\textstyle\left(\left(\nabla\Phi\right)^{-1}(\hat{\boldsymbol{y}}_{t+1})\right)_{i}=\text{sign}(\hat{y}_{t+1,i})\frac{|\hat{y}_{t+1,i}|^{p-1}}{\left\lVert\hat{\boldsymbol{y}}_{t+1}\right\rVert^{p-2}_{p}},\quad\text{for all }~{}i\in\mathcal{N}.$ Finally, for all $q\in(1,2]$ the Bregman projection in Eq. (9) (Line 5 in Algorithm 1) involves, in general, solving a convex optimization problem. For the OGD algorithm ($q=2$), however, the projection is the usual Euclidean projection, and can be performed in $\operatorname{\mathcal{O}}\left(N^{2}\right)$ steps using the projection algorithm by Wang and Lu (Wang and Lu, 2015). Specifically, when $\frac{R}{h}=1$, only a single coefficient is updated through the gradient step (Lines 2–4 in Algorithm 1) per iteration, and Paschos et al. (Paschos et al., 2019) provide an algorithm that performs the projection in $\operatorname{\mathcal{O}}\left(N\right)$ time. To be precise, the projection algorithm as presented in (Paschos et al., 2019) requires at each iteration a preliminary step with complexity $\operatorname{\mathcal{O}}\left(N\log(N)\right)$ to sort a vector of size $N$, followed by $\operatorname{\mathcal{O}}\left(N\right)$ steps. 
However, it is possible to replace the sorting by $\operatorname{\mathcal{O}}\left(\log(N)\right)$ binary search and insertion operations, reducing the complexity to $\operatorname{\mathcal{O}}\left(N\right)$ per iteration. ### 4.5. Neg-Entropy Mirror Map To conclude this section, we turn our attention to the neg-entropy mirror map that, as discussed earlier, attains the same regret performance as $\mathrm{OMD}_{q\text{-}\mathrm{norm}}$ as $q$ converges to $1$. Beyond its improved performance in terms of regret in the high diversity ratio regime, the neg-entropy mirror map comes with an additional computational advantage: the Bregman projection admits a highly efficient implementation. Formally, OMD under the neg-entropy mirror map uses: (17) $\displaystyle\textstyle\Phi(\boldsymbol{x})$ $\displaystyle=\sum^{N}_{i=1}x_{i}\log\left(x_{i}\right),\text{ and }\mathcal{D}=\mathbb{R}_{>0}^{N}.$ Note that, as per the requirements in Sec. 4.2, $\mathcal{X}\subseteq\mathrm{closure}(\mathcal{D})$. Also, $\nabla\Phi$ indeed diverges at the boundary of $\mathcal{D}$, and $\nabla\Phi(\mathcal{D})=\mathbb{R}^{N}$, as (18) $\displaystyle\textstyle\frac{\partial\Phi(\boldsymbol{x})}{\partial x_{i}}=1+\log x_{i},\quad\text{for all }i\in\mathcal{N}.$ We refer to the resulting algorithm as $\mathrm{OMD}_{\mathrm{NE}}$. #### 4.5.1. Regret Analysis We first characterize the regret of $\mathrm{OMD}_{\mathrm{NE}}$: ###### Theorem 9. For $\eta=\sqrt{\frac{2\log(N/k)}{\left\lVert\boldsymbol{w}\right\rVert^{2}_{\infty}h^{2}T}}$, the regret of $\mathrm{OMD}_{\mathrm{NE}}$ satisfies: (19) $\displaystyle\mathrm{Regret}_{T}(\mathrm{OMD}_{\mathrm{NE}})\leq\left\lVert\boldsymbol{w}\right\rVert_{\infty}hk\sqrt{2\log(N/k)T}.$ The proof, in Appendix A.8, is similar to the proof of Theorem 4. Using again the general bound on the regret of OMD algorithms in Bubeck (Bubeck, 2015, Theorem 4.2), we bound the diameter of $\mathcal{X}$ w.r.t. 
the Bregman divergence, as well as the dual norm $\left\lVert\,\cdot\,\right\rVert_{\infty}$ of the gradients $\nabla f_{\boldsymbol{r}_{t}}(\boldsymbol{x}_{t})$. Crucially, we observe that $\mathrm{OMD}_{\mathrm{NE}}$ indeed attains the same regret bound as the one in Corollary 6, namely, the bound on $\mathrm{OMD}_{q\text{-}\mathrm{norm}}$ when $q$ converges to $1$. This immediately implies the advantage of $\mathrm{OMD}_{\mathrm{NE}}$ over OGD in high diversity ratio regimes, as described in Sec. 4.4.2 and Theorem 8. #### 4.5.2. Dual-Primal Update and Bregman Projection As $\nabla\Phi(\boldsymbol{x})$ is given by Eq. (18), the inverse mapping is given by $\left(\left(\nabla\Phi\right)^{-1}(\hat{\boldsymbol{y}}_{t+1})\right)_{i}=\mathrm{exp}(\hat{y}_{t+1,i}-1)$. Hence, the map to the dual space and back in Eq. (6)–Eq. (8) (Lines 2–4 in Algorithm 1) can be concisely written as: (20) $\displaystyle\textstyle y_{t+1,i}$ $\displaystyle\textstyle=\exp\left(\hat{x}_{t,i}-\eta\frac{\partial f_{\boldsymbol{r}_{t}}(\boldsymbol{x}_{t})}{\partial x_{i}}-1\right)=\exp\left(\log({x}_{t,i})-\eta\frac{\partial f_{\boldsymbol{r}_{t}}(\boldsymbol{x}_{t})}{\partial x_{i}}\right)=x_{t,i}\,e^{-\eta\frac{\partial f_{\boldsymbol{r}_{t}}(\boldsymbol{x}_{t})}{\partial x_{i}}},\,\text{for all}~{}i\in\mathcal{N}.$ In other words, OMD under the neg-entropy mirror map adapts the cache state via _a multiplicative rule_ (namely, the one implied by the above equation), as opposed to the additive rule of OGD (see Eq. (3)). In Theorem 2 we prove that $\mathrm{OMD}_{q\text{-}\mathrm{norm}}$, when $q$ converges to $1$, also adapts the cache state via a multiplicative update rule; moreover, it is equivalent to $\mathrm{OMD}_{\mathrm{NE}}$ over the simplex. This justifies why the regret bounds for the two algorithms in Eq. (14) and Eq. (19) are identical. 
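The chain of equalities in Eq. (20) can be checked numerically. The sketch below (our own code) computes the update once via the dual-space route and once via the multiplicative closed form, and verifies that they coincide:

```python
import numpy as np

# Neg-entropy dual-primal update: going to the dual via 1 + log(x) (Eq. (18)),
# taking a gradient step, and mapping back via exp(. - 1) collapses to a
# multiplicative rule, exactly as Eq. (20) states.

def ne_update_via_dual(x, grad, eta):
    x_hat = 1.0 + np.log(x)                  # Eq. (18): gradient of neg-entropy
    y_hat = x_hat - eta * grad               # dual-space gradient step
    return np.exp(y_hat - 1.0)               # inverse map (grad Phi)^{-1}

def ne_update_multiplicative(x, grad, eta):
    return x * np.exp(-eta * grad)           # closed form of Eq. (20)

x = np.array([0.1, 0.6, 0.3])                # illustrative fractional state
g = np.array([-1.0, 0.0, -2.0])              # a (sub)gradient of the cost
print(np.allclose(ne_update_via_dual(x, g, 0.05),
                  ne_update_multiplicative(x, g, 0.05)))  # True
```

Note the multiplicative rule keeps every coordinate strictly positive, which is why $\mathcal{D}=\mathbb{R}_{>0}^{N}$ is the natural domain here.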
Algorithm 2 Neg-Entropy Bregman projection onto the capped simplex
1: Input: $N$; $k$; $\left\lVert\boldsymbol{y}\right\rVert_{1}$; $P$; partially sorted $y_{N}\geq\cdots\geq y_{N-k+1}\geq y_{i},\forall i\leq N-k$
2: $\triangleright$ $\boldsymbol{y}$ is the intermediate cache state, and $P$ is a scaling factor initialized to 1
3: $y_{N+1}\leftarrow+\infty$
4: for $b\in\\{N,\dots,N-k+1\\}$ do
5:  $m_{b}\leftarrow\left({k+b-N}\right)/\left({\left\lVert\boldsymbol{y}\right\rVert_{1}-\sum_{i=b+1}^{N}{y_{i}P}}\right)$
6:  if $y_{b}m_{b}P<1\leq y_{b+1}m_{b}P$ then
7:   $\triangleright$ Appropriate $b$ is found
8:   for $i\geq b+1$ do
9:    $y_{i}\leftarrow{1}/{(m_{b}P)}$
10:   end for
11:   $P\leftarrow m_{b}P$
12:   return $\boldsymbol{y}P$ $\triangleright$ $\boldsymbol{y}P$ is the result of the projection
13:  end if
14: end for
Finally, the projection onto the capped simplex can be implemented in $\operatorname{\mathcal{O}}\left(N+k\log(k)\right)$ time for arbitrary $R$ and $h$ values using a waterfilling-like algorithm. The full procedure is presented in Algorithm 2. The algorithm receives as input the top-$k$ elements of $\boldsymbol{y}$, sorted in descending order. It then identifies via a linear search which elements exceed an appropriate threshold and sets them to one. The other elements are scaled by a constant factor to satisfy the capacity constraint. The following theorem holds: ###### Theorem 10. Algorithm 2 returns the projection $\Pi^{\Phi}_{\mathcal{X}\cap\mathcal{D}}(\boldsymbol{y})$ onto the capped simplex $\mathcal{X}$ under the neg-entropy $\Phi$. It requires $\operatorname{\mathcal{O}}\left(N+k\log(k)\right)$ operations per $\mathrm{OMD}_{\mathrm{NE}}$ iteration for general values of ${R}$ and ${h}$, and only $\operatorname{\mathcal{O}}\left(k\right)$ operations when $\frac{R}{h}=1$. The proof is given in Appendix A.9. To prove this theorem, we characterize the KKT conditions of the minimization problem. 
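For concreteness, the structure exploited by the waterfilling scan — the projected point has the form $x_i=\min(1,m\,y_i)$ for a scalar $m>0$, with the largest entries saturated at one — can be sketched in numpy. This is our own simplified $\operatorname{\mathcal{O}}(N\log N)$ variant, not the paper's partially-sorted $\operatorname{\mathcal{O}}(N+k\log k)$ bookkeeping:

```python
import numpy as np

def ne_projection(y, k):
    """Neg-entropy Bregman projection of y > 0 onto {x in [0,1]^N : sum x = k}.

    Scans b = number of coordinates saturated at 1 over the entries of y
    sorted in descending order, and returns min(1, m * y) for the multiplier
    m that makes the capacity constraint tight. Assumes k < N and y > 0.
    """
    y = np.asarray(y, dtype=float)
    ys = np.sort(y)[::-1]                    # descending
    total = y.sum()
    for b in range(k):                       # try b coordinates clipped at 1
        m = (k - b) / (total - ys[:b].sum())
        if (b == 0 or ys[b - 1] * m >= 1.0) and ys[b] * m <= 1.0:
            return np.minimum(1.0, m * y)
    raise ValueError("no feasible multiplier found (need k < N and y > 0)")

x = ne_projection(np.array([100.0, 1.0, 1.0]), k=2)
print(x)          # the dominant file saturates at 1; the rest share the budget
```

On a feasible input that already sums to $k$ and lies in $[0,1]^{N}$, the projection returns the input unchanged ($m=1$, no saturation).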
Then we show that these conditions can be checked in $\operatorname{\mathcal{O}}\left(k\right)$ time. Finally, we show how maintaining $\boldsymbol{y}$ in a partially sorted list across iterations leads to the reported complexity results. Theorem 10 implies that $\mathrm{OMD}_{\mathrm{NE}}$ _has significant computational savings when compared to OGD_ (cf. Sec. 4.4.3), both when $\frac{R}{h}=1$ and for general values of $R$ and $h$. ## 5\. Update Cost The model presented in Sec. 3 can be extended by adding the cost to update the cache state after the batch of $R$ requests has been served. This cost may quantify the additional load on the server or on the network. This update cost is often called _movement cost_ (Bubeck, 2015) or _switching cost_ (Andrew et al., 2013). As the state changes from $\boldsymbol{x}_{t}$ to $\boldsymbol{x}_{t+1}$, the cache evicts part of file $i$ if $x_{t+1,i}<x_{t,i}$ and stores additional bytes of it if $x_{t+1,i}>x_{t,i}$. We make the following assumptions: (1) evictions do not engender update costs, as the cache can perform them autonomously; (2) insertions of (part of) files which have been requested do not engender update costs, as these files have already been retrieved by the cache in their entirety to satisfy the requests; (3) insertions of (part of) files which have not been requested incur a cost proportional to the fraction of the file retrieved. 
We can then define the update cost at time slot $t$ as (21) $\displaystyle\textstyle\mathrm{UC}_{\boldsymbol{r}_{t}}(\boldsymbol{x}_{t},\boldsymbol{x}_{t+1})=\sum_{i\notin\mathop{\mathrm{supp}}(\boldsymbol{r}_{t})}w_{i}^{\prime}\max\left\\{0,x_{t+1,i}-x_{t,i}\right\\},$ where $\mathop{\mathrm{supp}}(\boldsymbol{r}_{t})=\left\\{i\in\mathcal{N}:r_{t,i}\neq 0\right\\}$ denotes the support of $\boldsymbol{r}_{t}$, i.e., the set of files that have been requested during the $t$-th time slot, and $w^{\prime}_{i}\in\mathbb{R}_{+}$ is the cost to retrieve the whole file $i$, which can in general be different from the cost $w_{i}$ appearing in (1). If the update cost is introduced in the model, the _extended_ regret can be defined as follows: (22) $\displaystyle\text{E-Regret}_{T}({\mathcal{A}})=\textstyle\underset{\\{\boldsymbol{r}_{1},\boldsymbol{r}_{2},\dots,\boldsymbol{r}_{t}\\}\in\mathcal{R}^{T}_{R,h}}{\sup}\left\\{\sum^{T}_{t=1}f_{\boldsymbol{r}_{t}}(\boldsymbol{x}_{t})+\mathrm{UC}_{\boldsymbol{r}_{t}}(\boldsymbol{x}_{t},\boldsymbol{x}_{t+1})-\sum^{T}_{t=1}f_{\boldsymbol{r}_{t}}(\boldsymbol{x_{*}})\right\\}$ (23) $\displaystyle\leq\textstyle\underset{\\{\boldsymbol{r}_{1},\boldsymbol{r}_{2},\dots,\boldsymbol{r}_{t}\\}\in\mathcal{R}^{T}_{R,h}}{\sup}\left\\{\sum^{T}_{t=1}f_{\boldsymbol{r}_{t}}(\boldsymbol{x}_{t})-\sum^{T}_{t=1}f_{\boldsymbol{r}_{t}}(\boldsymbol{x}_{*})\right\\}+\underset{\\{\boldsymbol{r}_{1},\boldsymbol{r}_{2},\dots,\boldsymbol{r}_{t}\\}\in\mathcal{R}^{T}_{R,h}}{\sup}\left\\{\sum^{T}_{t=1}\mathrm{UC}_{\boldsymbol{r}_{t}}(\boldsymbol{x}_{t},\boldsymbol{x}_{t+1})\right\\}.$ Equation (23) shows that the extended regret of an arbitrary online algorithm can be bounded by considering the regret we have derived so far (Eq. (2)), which ignores update costs, and subsequently accounting for an additional term corresponding to the update cost. Note that the optimal static allocation does not incur any update cost. 
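The update cost of Eq. (21) is a direct sum over unrequested files; a minimal transcription in numpy (our own helper, with retrieval costs $w'_i$ defaulting to 1) is:

```python
import numpy as np

# Update cost of Eq. (21): only insertions of files *outside* the support of
# the request batch are charged, proportionally to the newly fetched fraction.
# Evictions and insertions of requested files are free (assumptions (1)-(2)).

def update_cost(x_t, x_next, r_t, w_prime=None):
    x_t, x_next, r_t = map(np.asarray, (x_t, x_next, r_t))
    w_prime = np.ones_like(x_t, dtype=float) if w_prime is None else w_prime
    unrequested = (r_t == 0)                       # i not in supp(r_t)
    inserted = np.maximum(0.0, x_next - x_t)       # positive part only
    return float(np.sum(w_prime * inserted * unrequested))

x_t    = np.array([0.5, 0.5, 0.0])
x_next = np.array([0.3, 0.6, 0.1])
r_t    = np.array([0,   2,   0])                   # only file 1 was requested
print(update_cost(x_t, x_next, r_t))               # only file 2's insertion counts
```

In the example, file 0's eviction and file 1's (requested) insertion contribute nothing; only the 0.1 fetched for the unrequested file 2 is charged.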
Equation (23) implies that any policy with $\operatorname{\mathcal{O}}\left(\sqrt{T}\right)$ regret and $\operatorname{\mathcal{O}}\left(\sqrt{T}\right)$ expected update cost also has $\operatorname{\mathcal{O}}\left(\sqrt{T}\right)$ extended regret. One of the reasons why we did not directly introduce the update cost is that, in the fractional setting, the OMD update cost is zero both for the Euclidean (OGD) and the neg-entropy ($\mathrm{OMD}_{\mathrm{NE}}$) mirror maps. Formally, we have: ###### Proposition 1. For any request batch $\boldsymbol{r}_{t}$ received at time slot $t\in[T]$, the update of the fractional cache state from $\boldsymbol{x}_{t}\in\mathcal{X}$ to $\boldsymbol{x}_{t+1}\in\mathcal{X}$ obtained by $\mathrm{OMD}_{\mathrm{NE}}$ or $\mathrm{OGD}$ has no cost, i.e., $\mathrm{UC}_{\boldsymbol{r}_{t}}(\boldsymbol{x}_{t},\boldsymbol{x}_{t+1})=0$. The proof is provided in Appendix A.10. In fact, the gradient step increases the fraction $x_{t,i}$ only for files $i$ that have been requested, and the projection step reduces the fraction for all other files in order to satisfy the capacity constraint. It follows that $x_{t+1,i}-x_{t,i}>0$ only if $i\in\mathop{\mathrm{supp}}(\boldsymbol{r}_{t})$, and thus $\mathrm{UC}_{\boldsymbol{r}_{t}}(\boldsymbol{x}_{t},\boldsymbol{x}_{t+1})=0$. Hence, the $\operatorname{\mathcal{O}}\left(\sqrt{T}\right)$ regret guarantees we proved in the previous sections for OGD and $\mathrm{OMD}_{\mathrm{NE}}$ extend to the more general definition in (22). In the next section, we show that update costs _cannot be neglected_ when caches are forced to store files in their entirety. ## 6\. Integral Caching In the previous sections, we assumed that the cache can store arbitrarily small chunks of a file, which allowed us to design no-regret policies that employ fractional caching. However, this assumption can be too strong in some applications. 
For example, when the catalog is composed of small-sized files, the discreteness of chunk sizes cannot be neglected; moreover, the metadata needed for each chunk can cause memory and computational overheads. These observations motivate us to study the case where the cache can only store entire files. We refer to this setting as _integral_ caching. Formally, we restrict the cache states to belong to the set $\textstyle\mathcal{Z}=\left\\{\boldsymbol{\zeta}\in\\{0,1\\}^{N}:\sum_{i\in\mathcal{N}}\zeta_{i}=k\right\\}$. Note that the set $\mathcal{Z}$ is the restriction of the set of fractional caching states $\mathcal{X}$ to its corners, i.e., $\mathcal{Z}=\mathcal{X}\cap\\{0,1\\}^{N}$; thus, we maintain the same definition of the requests and of the service cost objective as in Sec. 3. In this setting, we allow policies to be randomized. This extension turns out to be necessary in order to obtain a sublinear regret policy; formally, we have: ###### Proposition 1. Any deterministic policy restricted to selecting integral cache states in $\mathcal{Z}$ has the following lower bound on its regret: (24) $\displaystyle\textstyle\mathrm{Regret}_{T}(\mathcal{A})\geq k\left(1-{k}/{N}\right)T.$ To prove the proposition, we show that an adversary can exploit the deterministic nature of the policy by continuously requesting the files that are not stored in the cache. We provide the proof in Appendix B.1. We thus turn our attention to randomized policies. In particular, we focus on a special class of randomized policies, constructed by combining (1) a fractional online caching policy $\mathcal{A}$, i.e., of the type we have studied so far (see Sec. 3), with (2) a randomized rounding scheme $\Xi$ that maps fractional caching states to integral ones. 
In particular, for every $t\geq 1$ the randomized rounding scheme $\Xi:\mathcal{X}^{t}\times\mathcal{Z}^{t-1}\times[0,1]\to\mathcal{Z}$ maps the previous fractional cache states $\\{\boldsymbol{x}_{s}\\}^{t-1}_{s=1}\in\mathcal{X}^{t-1}$, the current fractional cache state $\boldsymbol{x}_{t}\in\mathcal{X}$, the previous random cache states $\\{\boldsymbol{z}_{s}\\}^{t-1}_{s=1}\in\mathcal{Z}^{t-1}$, and a source of randomness $\xi_{t}\in[0,1]$ to a new random cache state $\boldsymbol{z}_{t}\in\mathcal{Z}$ where (25) $\displaystyle\mathbb{E}[\boldsymbol{z}_{t}]=\boldsymbol{x}_{t}.$ Note that the rounding function takes into account not only the current fractional state $\boldsymbol{x}_{t}$, which determines its expectation, but also the past fractional and integral states ($\\{(\boldsymbol{z}_{s},\boldsymbol{x}_{s})\\}^{t-1}_{s=1}$); this is in fact instrumental in attaining a sublinear extended regret (see Theorems 3 and 1 below). We extend the definitions of the regret and the extended regret as follows: (26) $\displaystyle\textstyle\mathrm{Regret}_{T}(\mathcal{A},\Xi)=\textstyle\underset{\\{\boldsymbol{r}_{1},\boldsymbol{r}_{2},\dots,\boldsymbol{r}_{t}\\}\in\mathcal{R}^{T}_{R,h}}{\sup}\left\\{\mathbb{E}\left[\sum^{T}_{t=1}f_{\boldsymbol{r}_{t}}(\boldsymbol{z}_{t})\right]-\sum^{T}_{t=1}f_{\boldsymbol{r}_{t}}(\boldsymbol{z}_{*})\right\\},$ and (27) $\displaystyle\textstyle\mathrm{E\mbox{-}Regret}_{T}(\mathcal{A},\Xi)$ $\displaystyle=\textstyle\underset{\\{\boldsymbol{r}_{1},\boldsymbol{r}_{2},\dots,\boldsymbol{r}_{t}\\}\in\mathcal{R}^{T}_{R,h}}{\sup}\left\\{\mathbb{E}\left[\sum^{T}_{t=1}f_{\boldsymbol{r}_{t}}(\boldsymbol{z}_{t})+\mathrm{UC}_{\boldsymbol{r}_{t}}(\boldsymbol{z}_{t},\boldsymbol{z}_{t+1})\right]-\sum^{T}_{t=1}f_{\boldsymbol{r}_{t}}(\boldsymbol{z}_{*})\right\\},$ where the expectation is taken over the random choices of the rounding scheme $\Xi$, and (28) 
$\displaystyle\textstyle\boldsymbol{z}_{*}=\mathop{\arg\min}_{\boldsymbol{z}\in\mathcal{Z}}\sum^{T}_{t=1}f_{\boldsymbol{r}_{t}}(\boldsymbol{z})$ is the optimal static integral cache state (in hindsight). By restricting our focus to such randomized policies, we obtain a regret that is equal to that of the fractional caching policy. Formally, we have: ###### Proposition 2. Any randomized caching policy constructed by an online policy $\mathcal{A}$ combined with a randomized rounding scheme $\Xi$ has the same regret as $\mathcal{A}$, i.e., $\mathrm{Regret}_{T}(\mathcal{A},\Xi)=\mathrm{Regret}_{T}(\mathcal{A})$, given by (26) and (2), respectively. The result follows from the linearity of the cost functions and of the expectation operator; moreover, the static optimum can always be selected to be integral, owing to the integrality of the capacity constraint and the linearity of the objective function. The proof is provided in Appendix B.2. Proposition 2 thus implies that regret guarantees for a fractional policy $\mathcal{A}$ readily transfer to the integral regime, when coupled with a rounding scheme $\Xi$. Unfortunately, when considering the extended regret (Eq. (27)) instead, naïve rounding policies can arbitrarily evict and fetch objects, causing large update costs (see Theorem 3). Thus, unless the rounding is carefully designed, we may fail to have sublinear regret guarantees when accounting for update costs. In the next section, we show how a randomized rounding scheme $\Xi$ can be selected to avoid incurring large update costs. ### 6.1. Rounding Schemes and Extended Regret #### 6.1.1. Online Independent Rounding. If we consider a fractional caching state $\boldsymbol{x}_{t}\in\mathcal{X}$, then a random integral caching state $\boldsymbol{z}_{t}\in\mathcal{Z}$ with the marginal $\mathbb{E}[\boldsymbol{z}_{t}]=\boldsymbol{x}_{t}$ exists and can be sampled in polynomial time (see, e.g., (Blaszczyszyn and Giovanidis, 2015; Ioannidis and Yeh, 2016)). 
Thus, a rounding scheme $\Xi$ can be constructed from such a strategy: it takes as input the current fractional cache state $\boldsymbol{x}_{t}$, ignoring the previous fractional cache states $\\{\boldsymbol{x}_{s}\\}^{t-1}_{s=1}\in\mathcal{X}^{t-1}$ and the previous random cache states $\\{\boldsymbol{z}_{s}\\}^{t-1}_{s=1}\in\mathcal{Z}^{t-1}$. We provide pseudocode for this procedure in Algorithm 3. (Algorithm 3 provides a linear-time variant of the algorithms proposed in (Blaszczyszyn and Giovanidis, 2015; Ioannidis and Yeh, 2016): it samples an integral caching state without constructing a distribution and its support.) Because at any time $t$ the random cache states are sampled independently from previous random cache states, we refer to this rounding as _online independent rounding_. Unfortunately, when considering the extended regret (27), any caching policy coupled with this rounding scheme loses its $\operatorname{\mathcal{O}}\left(\sqrt{T}\right)$ regret guarantee. Formally, we have the following: ###### Theorem 3. Any randomized caching policy constructed by an online policy $\mathcal{A}$ combined with online independent rounding as the randomized rounding scheme $\Xi$ has linear (worst-case) extended regret, i.e., $\mathrm{E\mbox{-}Regret}_{T}(\mathcal{A},\Xi)=\Omega(T)$. The proof is provided in Appendix B.3. Online independent rounding causes frequent cache updates, as it samples a new state from $\boldsymbol{z}_{t}$ ignoring the previous state $\boldsymbol{\zeta}_{t-1}$ sampled from $\boldsymbol{z}_{t-1}$. Intuitively, imposing dependence (coupling) between two consecutive random states may significantly reduce the expected update cost. 
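The threshold rule underlying this rounding (formalized in the pseudocode of Algorithm 3) can be sketched in a few lines; this is our own transcription, with illustrative inputs:

```python
import numpy as np

# Threshold rounding: file i is selected when the running fractional mass
# crosses xi plus the number of files already selected. For xi ~ U[0,1] the
# resulting integral state z satisfies E[z] = x componentwise, and exactly
# k = sum(x) files are selected (assuming x in [0,1]^N with integer sum).

def online_rounding(x, xi):
    chosen, csum = [], 0.0
    for i, frac in enumerate(x):
        csum += frac
        if csum >= xi + len(chosen):
            chosen.append(i)
    z = np.zeros(len(x))
    z[chosen] = 1.0
    return z

x = np.array([0.5, 0.5, 0.5, 0.5])          # fractional state with k = 2
print(online_rounding(x, 0.3))              # an integral state caching 2 files
```

Drawing a fresh $\xi$ at every time slot yields the independent variant; reusing the same $\xi$ across slots yields the coupled variant discussed next.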
Algorithm 3 Online Rounding
1: procedure Online Rounding($\boldsymbol{x}\in\mathcal{X}$, $\xi\in[0,1]$)
2:  $\mathcal{I}_{0}=\emptyset$
3:  for $i=1,2,\dots,N$ do
4:   $\mathcal{I}_{i}\leftarrow\begin{cases}\mathcal{I}_{i-1}\cup\\{i\\}&\text{if}\quad\sum^{i}_{j=1}x_{j}\geq\xi+|\mathcal{I}_{i-1}|,\\\ \mathcal{I}_{i-1}&\text{otherwise.}\end{cases}$
5:  end for
6:  return $\boldsymbol{z}\leftarrow\sum_{i\in\mathcal{I}_{N}}\boldsymbol{e}_{i}$
7: end procedure
8: $\triangleright$ In _online independent rounding_, Online Rounding is called with arguments ($\boldsymbol{x}_{t}$, $\xi_{t}$), where $\boldsymbol{x}_{t}$ are provided by algorithm $\mathcal{A}$ and $\\{\xi_{t}\\}_{t=1}^{T-1}$ are i.i.d., sampled u.a.r. from $[0,1]$.
9: $\triangleright$ In _online coupled rounding_, a $\xi$ is sampled once u.a.r. from $[0,1]$; then, Online Rounding is called with arguments ($\boldsymbol{x}_{t}$, $\xi$), i.e., using the same $\xi$ for all $\boldsymbol{x}_{t}$ provided by algorithm $\mathcal{A}$.
10: $\triangleright$ Both variants return an integral r.v. $\boldsymbol{z}_{t}$ s.t. $\mathbb{E}[\boldsymbol{z}_{t}]=\boldsymbol{x}_{t}$, with the expectation being over $\\{\xi_{t}\\}_{t=1}^{T-1}$ and $\xi$, respectively.
#### 6.1.2. Online Coupled Rounding. To address this issue, our proposed _online coupled rounding_ scheme is also described in Algorithm 3, using however the same randomization source across all time slots. In particular, the coupling across states comes from the use of the same uniform random variable $\xi$. A consequence of this coupling is that the next integral state can be computed efficiently and leads to small movement costs. Note that Algorithm 3 does not necessarily find an optimal coupling; still, it yields a sublinear update cost, and thus preserves the sublinearity of the regret. This is formally expressed in the following theorem: ###### Theorem 4. 
Consider a randomized caching policy constructed by an OMD policy $\mathcal{A}$ with sublinear regret (i.e., configured with learning rate $\eta=\Theta{\left({1}/{\sqrt{T}}\right)}$) combined with online coupled rounding $\Xi$ in Algorithm 3 (fixed $\xi_{t}=\xi$ for $t\in[T]$). The expected movement cost of the random integral cache states is $\mathbb{E}\left[\mathrm{UC}_{\boldsymbol{r}_{t}}\left(\boldsymbol{z}_{t},\boldsymbol{z}_{t+1}\right)\right]=\operatorname{\mathcal{O}}\left(\sqrt{T}\right)$. Moreover, the extended regret is sublinear $\mathrm{E\mbox{-}Regret}_{T}(\mathcal{A},\Xi)=\operatorname{\mathcal{O}}\left(\sqrt{T}\right)$. We provide the proof in Appendix B.5. In summary, any OMD policy combined with online coupled rounding yields $\operatorname{\mathcal{O}}\left(\sqrt{T}\right)$ extended regret in the integral caching setting. The computational complexity of online coupled rounding is $\operatorname{\mathcal{O}}\left(N\right)$ (see also Fig. 10). #### 6.1.3. Online Optimally-Coupled Rounding. It is possible in general to reduce the update cost of online coupled rounding. In particular, minimizing the expected update cost over all joint distributions of the random variables $\boldsymbol{z}_{t}$ and $\boldsymbol{z}_{t+1}$ leads to an optimal transport problem (Peyré et al., 2019). For completeness, we describe this rounding scheme here, though (1) it does not reduce the extended regret guarantee attained by online coupled rounding (up to multiplicative constants), and (2) it has an increased computational cost. 
Formally, at each time $t$ the random variable $\boldsymbol{z}_{t}$ with marginal $\boldsymbol{x}_{t}$ can be constructed by sampling from a distribution $\boldsymbol{p}_{t}$ with support $\left\\{\boldsymbol{\zeta}^{1}_{t},\boldsymbol{\zeta}^{2}_{t},\dots,\boldsymbol{\zeta}^{|\boldsymbol{p}_{t}|}_{t}\right\\}$ of size $\operatorname{\mathcal{O}}\left(N\right)$, where $p_{t,i}=\mathbb{P}(\boldsymbol{z}_{t}=\boldsymbol{\zeta}^{i}_{t})$ for $i\in[\left|\boldsymbol{p}_{t}\right|]$. The decomposition can be performed in $\operatorname{\mathcal{O}}\left(kN\log(N)\right)$ steps (Ioannidis and Yeh, 2016). We denote the joint probability $\mathbb{P}\left(\boldsymbol{z}_{t+1}=\boldsymbol{\zeta}^{j}_{t+1},\boldsymbol{z}_{t}=\boldsymbol{\zeta}^{i}_{t}\right)$ by the flow $f_{i,j}$, for all $(i,j)\in[|\boldsymbol{p}_{t}|]\times[|\boldsymbol{p}_{t+1}|]$. The optimal transport problem can be described by the following linear program: $\displaystyle\boldsymbol{f}=$ $\displaystyle\mathop{\arg\min}_{{[f_{i,j}]_{(i,j)\in[|\boldsymbol{p}_{t}|]\times[|\boldsymbol{p}_{t+1}|]}}}\textstyle\mathbb{E}\left[\mathrm{UC}\left(\boldsymbol{z}_{t},\boldsymbol{z}_{t+1}\right)\right]=\sum^{|\boldsymbol{p}_{t}|}_{i=1}\sum^{|\boldsymbol{p}_{t+1}|}_{j=1}\mathrm{UC}_{\boldsymbol{r}_{t}}\left(\boldsymbol{\zeta}^{i}_{t},\boldsymbol{\zeta}^{j}_{t+1}\right)f_{i,j}$ $\displaystyle\text{s.t.}\quad\textstyle\sum^{|\boldsymbol{p}_{t+1}|}_{j=1}f_{i,j}=p_{t,i},\quad\sum^{|\boldsymbol{p}_{t}|}_{i=1}f_{i,j}=p_{t+1,j},\quad f_{i,j}\in[0,1],\forall(i,j)\in[|\boldsymbol{p}_{t}|]\times[|\boldsymbol{p}_{t+1}|].$ We solve the above linear program to obtain a minimum-cost flow $\boldsymbol{f}$. If the random state at time $t$ is $\boldsymbol{\zeta}^{i}_{t}$, then we select the new random state to be $\boldsymbol{\zeta}^{j}_{t+1}$ with (conditional) probability $\mathbb{P}\left(\boldsymbol{z}_{t+1}=\boldsymbol{\zeta}^{j}_{t+1}\;|\;\boldsymbol{z}_{t}=\boldsymbol{\zeta}^{i}_{t}\right)=\frac{f_{i,j}}{p_{t,i}}$. 
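This transport linear program can be handed to an off-the-shelf LP solver. The sketch below is illustrative only (the update costs $\mathrm{UC}(\cdot,\cdot)$ are collapsed into a precomputed matrix `C`, and the instance is a toy one):

```python
import numpy as np
from scipy.optimize import linprog

# Minimum-cost coupling between supports of p_t and p_{t+1}: variables f_ij
# are flattened row-major; marginal constraints are encoded as equalities.

def optimal_coupling(p_t, p_next, C):
    n, m = len(p_t), len(p_next)
    c = np.asarray(C, dtype=float).reshape(n * m)    # objective: sum C_ij f_ij
    A_eq = np.zeros((n + m, n * m))
    for i in range(n):                               # row marginals: sum_j f_ij = p_t[i]
        A_eq[i, i * m:(i + 1) * m] = 1.0
    for j in range(m):                               # column marginals: sum_i f_ij = p_next[j]
        A_eq[n + j, j::m] = 1.0
    b_eq = np.concatenate([p_t, p_next])
    res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=(0, 1), method="highs")
    return res.x.reshape(n, m), res.fun

C = np.array([[0.0, 1.0], [1.0, 0.0]])               # zero cost to keep the same state
flow, cost = optimal_coupling([0.5, 0.5], [0.5, 0.5], C)
print(cost)                                          # optimal coupling keeps both states in place
```

In the toy instance the optimal flow puts all mass on the diagonal, i.e., the cache never moves; the general case is exactly the LP stated above.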
Such a coupling ensures that the expected update cost is minimized. When we combine this rounding scheme with a no-regret fractional policy, we obtain sublinear extended regret (27): ###### Corollary 5. Consider an OMD policy $\mathcal{A}$ configured with learning rate $\eta=\Theta{\left(\frac{1}{\sqrt{T}}\right)}$ combined with online optimally-coupled rounding $\Xi$. The obtained randomized integral caching policy has sublinear extended regret, i.e., $\mathrm{E\mbox{-}Regret}(\mathcal{A},\Xi)=\operatorname{\mathcal{O}}\left(\sqrt{T}\right)$. The corollary follows from Theorem 4: online coupled rounding constructs a feasible transportation flow (see Fig. 11 for an illustration) with sublinear update costs, and the optimal flow can only have lower update costs. The naïve implementation of the optimal transport problem has $\operatorname{\mathcal{O}}\left(N^{3}\right)$ time complexity, but several efficient approximations exist in the literature (Peyré et al., 2019), at the expense of losing the established guarantee. ## 7\. Numerical Experiments ### 7.1. Experimental setup #### 7.1.1. Datasets. Throughout all experiments, we assume equal costs per file, i.e., $w_{i}=w^{\prime}_{i}=1,\forall i\in\mathcal{N}$. We denote by $\eta^{*}$ the learning rate value specified in Corollary 5 and in Theorem 9 for $\mathrm{OGD}$ and $\mathrm{OMD}_{\mathrm{NE}}$, respectively. The learning rate is set to $\eta^{*}$ unless otherwise mentioned. In what follows, we distinguish between the number of batches in the trace ($B$) and the time horizon ($T$). We generate the following synthetic datasets, summarized in Table 2. Fixed Popularity. Requests are i.i.d. and sampled from a catalog of $N=200$ files according to a _Zipf_ distribution with exponent $\alpha=0.8$. Each batch counts a single request ($R=1$). We set the time horizon as $T=10^{5}$. The cache capacity is $k=100$. 
The total number of requests is the product of the number of requests in each batch ($R$) and the number of batches ($B$); both values are reported in Table 2. Batched Fixed Popularity. Requests are generated as above from a Zipf distribution with exponent $\alpha$, but are now grouped in batches of $R=5\times 10^{3}$ requests. We take different exponents $\alpha\in\\{0.1,0.2,0.7\\}$ for traces _Batched Fixed Popularity_ (1), (2), and (3), respectively, in Table 2. The parameter $\alpha$ controls the diversity of the files in the request batches. If $\alpha=0$, then each file is requested with equal probability, corresponding to $\frac{R}{h}\to N$ (high diversity). As we increase $\alpha$, the requests become more concentrated; this corresponds to $\frac{R}{h}\to 1$ (low diversity). Table 2 shows the value of $h$ observed in each trace. In all cases, we select catalog size $N=10^{4}$, cache size $k\in\\{25,125,250\\}$, and time horizon $T=10^{4}$. Transient Popularity. We also generate two non-stationary request traces. In these traces, we reset the popularity distribution periodically. In the first scenario (_Partial Popularity Change_ traces), we still have batches of $R=5\times 10^{3}$ requests sampled from a catalog of $N=10^{4}$ files according to a Zipf distribution with parameter $\alpha\in\\{0.1,0.3,0.4\\}$ for traces (1), (2), and (3), respectively. Now, however, the popularities of a subset of files are modified every $10^{3}$ time slots. In particular, the 5% most popular files become the 5% least popular ones, and vice versa. We want to model a situation where the cache knows the timescale over which the request process changes and which files are affected (but not how their popularity changes). Correspondingly, the time horizon is also set to $T=10^{3}$ and, at the end of each time horizon, the cache uniformly redistributes the cache space currently allocated to those files. The cache size is $k=50$. 
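The diversity ratio $R/h$ that distinguishes the batched traces above can be estimated directly from a batch. Here we assume $h$ is the maximum number of requests addressed to any single file in the batch (our reading of the definition):

```python
from collections import Counter

def diversity_ratio(batch):
    """R/h for one batch: R = |batch|, h = max number of requests
    for any single file in the batch (assumed definition of h)."""
    h = max(Counter(batch).values())
    return len(batch) / h
```

A batch of all-distinct files gives $R/h = R$ (high diversity), while a batch of repeated requests for one file gives $R/h = 1$ (low diversity).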
In the second scenario (_Global Popularity Change_ trace) each batch counts only a single request ($R=1$) sampled from a catalog of $N=10^{4}$ files according to a Zipf distribution with exponent $\alpha=0.8$. Every $5\times 10^{4}$ time slots (or requests, in this case) the popularity of each file changes: file $i\in\\{1,\dots,N\\}$ assumes the popularity of file $j=1+\left((i+N/4)\bmod N\right)$. The cache size is $k=200$. We also generate the _Downscaled Global Popularity Change_ trace as a downscaled version of the _Global Popularity Change_ trace, where the catalog size is reduced to $N=25$, the cache size to $k=4$, and the number of requests to $9\times 10^{3}$. The learning rate is set to $\eta=0.01$. Akamai Trace. We also consider a real file request trace collected from the Akamai Content Delivery Network (CDN) (Neglia et al., 2017). The trace spans 1 week, and we extract from it about $8.5\times 10^{7}$ requests for the $N=10^{4}$ most popular files. We group requests in batches of size $R=5\times 10^{3}$, and we consider a time horizon of $T=100$ time slots, corresponding roughly to 1 hour. The cache size is $k=25$. 
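The popularity rotation in the _Global Popularity Change_ trace amounts to a cyclic shift of the popularity ranking by $N/4$; a sketch, reading the formula as $j=1+\left((i+N/4)\bmod N\right)$:

```python
def rotate_popularity(i, catalog):
    """File i (1-indexed) assumes the popularity of file
    j = 1 + ((i + catalog/4) mod catalog), i.e., a cyclic
    shift of the ranking by a quarter of the catalog."""
    return 1 + ((i + catalog // 4) % catalog)
```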
Trace | $B$ | $T$ | $N$ | $R$ | $h$
---|---|---|---|---|---
Fixed Popularity | $1.5\times 10^{5}$ | $1.5\times 10^{5}$ | $10^{4}$ | 1 | 1
Batched Fixed Popularity (1) | $10^{4}$ | $10^{4}$ | $10^{4}$ | $5\times 10^{3}$ | 2
Batched Fixed Popularity (2) | $10^{4}$ | $10^{4}$ | $10^{4}$ | $5\times 10^{3}$ | 5
Batched Fixed Popularity (3) | $10^{4}$ | $10^{4}$ | $10^{4}$ | $5\times 10^{3}$ | 87
Partial Popularity Change (1) | $5\times 10^{3}$ | $10^{3}$ | $10^{4}$ | $5\times 10^{3}$ | 2
Partial Popularity Change (2) | $5\times 10^{3}$ | $10^{3}$ | $10^{4}$ | $5\times 10^{3}$ | 6
Partial Popularity Change (3) | $5\times 10^{3}$ | $10^{3}$ | $10^{4}$ | $5\times 10^{3}$ | 10
Global Popularity Change | $1.5\times 10^{5}$ | $1.5\times 10^{5}$ | $10^{4}$ | 1 | 1
Downscaled Global Popularity Change | $9\times 10^{3}$ | $9\times 10^{3}$ | $25$ | 1 | 1
Akamai CDN | $1.7\times 10^{4}$ | $10^{2}$ | $10^{3}$ | $5\times 10^{4}$ | $380$

Table 2. Trace summary.

Performance metric | Definition | Range
---|---|---
Normalized Average Cost | $\mathrm{NAC}(\mathcal{A})=\frac{1}{Rt}\sum^{t}_{s=0}f_{\boldsymbol{r}_{s}}(\boldsymbol{x}_{s})$ | $[0,1]$
Normalized Moving Average Cost | $\mathrm{NMAC}(\mathcal{A})=\frac{1}{R\min(\tau,t)}\sum^{t}_{s=t-\min(\tau,t)}f_{\boldsymbol{r}_{s}}(\boldsymbol{x}_{s})$ | $[0,1]$
Time Average Regret | $\mathrm{TAR}(\mathcal{A})=\frac{1}{t}\left(\sum^{t}_{s=1}f_{\boldsymbol{r}_{s}}(\boldsymbol{x}_{s})-\sum^{t}_{s=1}f_{\boldsymbol{r}_{s}}(\boldsymbol{x_{*}})\right)$ | $[0,R]$
Cumulative Update Cost | $\mathrm{CUC}(\mathcal{A})=\sum^{t}_{s=1}\mathrm{UC}_{\boldsymbol{r}_{s}}(\boldsymbol{x}_{s},\boldsymbol{x}_{s+1})$ | $[0,\infty)$

Table 3. Performance metrics (lower is better for all).

#### 7.1.2. Online Algorithms. Starting with the gradient-based algorithms, we implemented $\mathrm{OMD}_{\mathrm{NE}}$ with the projection defined in Algorithm 2. We implemented two different projection algorithms for $\mathrm{OGD}$: the one by Paschos et al. 
(Paschos et al., 2019) for the setting $\frac{R}{h}=1$, and the one by Wang and Lu (Wang and Lu, 2015) for the general setting $\frac{R}{h}>1$. In addition, we implemented four caching eviction policies: LRU, LFU, W-LFU, and FTPL. LRU and LFU evict the least recently used and least frequently used file, respectively. While LFU estimates file popularities considering all requests seen in the past, W-LFU (Karakostas and Serpanos, 2002) is an LFU variant that only considers requests during a recent time window $W$, which we set equal to $T\times R$ in our experiments. The policies LRU, LFU, and W-LFU are allowed to process individual requests. FTPL is a no-regret policy proposed by Mukhopadhyay and Sinha (Mukhopadhyay and Sinha, 2021), which, roughly speaking, behaves as an LFU policy whose request counters are perturbed by some Gaussian noise. Finally, we define _Best Static_ to be the optimal static allocation $\boldsymbol{x}^{*}$, i.e., the configuration storing the $k$ most popular files, since we consider $w_{i}=1,\forall i\in\mathcal{N}$. We also define _Best Dynamic_ to be the caching policy that stores the $k$ most popular files at any time for the synthetic traces (for which the instantaneous popularity is well defined). The optimality of such a policy is formally studied in (Panigrahy et al., 2021). #### 7.1.3. Online Rounding. We also implemented the three rounding schemes described in Sec. 6: (a) the online independent rounding in Algorithm 3, (b) the online coupled rounding in Algorithm 3, and (c) the online optimally-coupled rounding. The rounding schemes are combined with $\mathrm{OGD}$ configured with learning rate $\eta=0.01$ under the _Downscaled Global Popularity Change_ trace. #### 7.1.4. Performance Metrics. We measure performance w.r.t. four metrics defined in Table 3. The Normalized Average Cost $\mathrm{NAC}(\mathcal{A})\in[0,1]$ corresponds to the time-average cost over the first $t$ time slots, normalized by the batch size $R$. 
The Normalized Moving Average Cost $\mathrm{NMAC}(\mathcal{A})\in[0,1]$ is computed similarly, but using a moving average over a time window $\tau>0$; we use $\tau=500$ in our experiments. We also consider the Time Average Regret $\mathrm{TAR}(\mathcal{A})\in[0,R]$, which is precisely the time-average regret over the first $t$ time slots. Finally, when studying rounding algorithms, we also measure and report the Cumulative Update Cost $\mathrm{CUC}(\mathcal{A})\in[0,\infty)$. ### 7.2. Results #### 7.2.1. Stationary Requests Figures 2 (a) and 2 (b) show the performance w.r.t. $\mathrm{NAC}$ of OGD and $\mathrm{OMD}_{\mathrm{NE}}$, respectively, under different learning rates $\eta$ on the _Fixed Popularity_ trace. We observe that both algorithms converge more slowly under small learning rates but reach a lower final cost, while larger learning rates lead to faster convergence, albeit to a higher final cost. This may motivate the adoption of a diminishing learning rate, which combines the best of both options: it starts large to enable fast convergence and then decreases to allow eventual fine-tuning. We show curves corresponding to a diminishing learning rate both for $\mathrm{OGD}$ and $\mathrm{OMD}_{\mathrm{NE}}$, and indeed they achieve the smallest costs. The learning rate $\eta^{*}$ gives the tightest worst-case regrets for $\mathrm{OGD}$ and $\mathrm{OMD}_{\mathrm{NE}}$, as stated in Theorems 5 and 9. While this learning rate is selected to protect against any (adversarial) request sequence, it is not too pessimistic: Figures 2 (a) and 2 (b) show it performs well when compared to other learning rates. Figure 2 (c) shows the time-average regret $\mathrm{TAR}$ of OGD and $\mathrm{OMD}_{\mathrm{NE}}$ over the _Fixed Popularity_ trace. As both algorithms have sub-linear regret, their time-average regret goes to $0$ for $T\to\infty$. Note that LRU, in contrast, exhibits a constant time-average regret. 
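The metrics of Table 3 are simple functions of the per-slot costs; a minimal sketch (the helper names are ours):

```python
def nac(costs, R):
    """Normalized Average Cost: time-averaged cost over t slots, / R."""
    return sum(costs) / (R * len(costs))

def nmac(costs, R, tau=500):
    """Normalized Moving Average Cost over a trailing window tau."""
    window = costs[-min(tau, len(costs)):]
    return sum(window) / (R * len(window))

def tar(costs, static_costs):
    """Time Average Regret against the best static allocation's costs."""
    return (sum(costs) - sum(static_costs)) / len(costs)
```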
Figure 2 (panels: (a) $\mathrm{NAC}$ of OGD, (b) $\mathrm{NAC}$ of $\mathrm{OMD}_{\mathrm{NE}}$, (c) time-average regret). NAC of the different caching policies over the _Fixed Popularity_ trace. Subfigures (a) and (b) show the performance of OGD and $\mathrm{OMD}_{\mathrm{NE}}$, respectively, under different learning rates. For small learning rates both algorithms converge more slowly but more precisely, while for larger learning rates they converge faster, but to a higher cost. Subfigure (c) shows the time-average regret of the two gradient algorithms. When the regret is sub-linear, the time-average regret converges to $0$ as $T\to\infty$. Figure 3 (panels (a)–(i): cache sizes $k\in\\{25,125,250\\}$ from left to right, exponents $\alpha\in\\{0.1,0.2,0.7\\}$ from top to bottom). NAC of $\mathrm{OMD}_{\mathrm{NE}}$ and OGD evaluated under different cache sizes and diversity regimes. We use traces _Batched Fixed Popularity_ (1), (2), and (3) corresponding to different diversity regimes. $\mathrm{OMD}_{\mathrm{NE}}$ outperforms OGD in the more diverse regimes and for small cache sizes, while OGD outperforms it for large cache sizes and concentrated requests. #### 7.2.2. Effect of Diversity. Figure 3 shows the $\mathrm{NAC}$ performance of $\mathrm{OMD}_{\mathrm{NE}}$ and OGD on the traces _Batched Fixed Popularity_ (1), (2), and (3) under different cache capacities $k$ and exponent values $\alpha$. We observe that $\mathrm{OMD}_{\mathrm{NE}}$ outperforms OGD in the more diverse regimes ($\alpha\in\\{0.1,0.2\\}$). This is more apparent for smaller cache sizes $k$. 
In contrast, OGD outperforms $\mathrm{OMD}_{\mathrm{NE}}$ when requests are less diverse ($\alpha=0.7$); again, this is more apparent for larger cache sizes $k$. These observations agree with Theorems 7 and 8 in Sec. 4.4.2: high diversity and small cache sizes indeed favor $\mathrm{OMD}_{\mathrm{NE}}$. #### 7.2.3. Robustness to Transient Requests. Figure 4 shows the normalized average cost of $\mathrm{OMD}_{\mathrm{NE}}$ and OGD over the _Partial Popularity Change_ traces, evaluated under different diversity regimes. Dashed lines indicate the projected performance in the stationary setting (i.e., if request popularities stayed fixed). Across the different diversity regimes, we find that $\mathrm{OMD}_{\mathrm{NE}}$ is more robust to popularity changes. In panels (a), (b), and (c), $\mathrm{OMD}_{\mathrm{NE}}$ outperforms OGD in the non-stationary popularity setting: we observe a wider performance gap compared to the stationary setting. Figures 4 (d) and (e) show the normalized average cost over the _Global Popularity Change_ trace for the policies OGD and $\mathrm{OMD}_{\mathrm{NE}}$, respectively. We observe in Figure 4 (e) that the NAC of $\mathrm{OMD}_{\mathrm{NE}}$ degrades after each popularity change. This is a limitation due to the multiplicative nature of $\mathrm{OMD}_{\mathrm{NE}}$: when the algorithm learns that a file, say $i$, is not important, it can set $x_{t,i}$ arbitrarily close to $0$. If this content suddenly becomes popular, $\mathrm{OMD}_{\mathrm{NE}}$ adapts slowly (recall Eq. (20)). We can overcome this limitation by requiring all state variables to be larger than some small $\delta>0$; $\mathrm{OMD}_{\mathrm{NE}}$ is then limited to $\mathcal{X}_{\delta}$, the $\delta$-interior of the capped simplex $\mathcal{X}$. More precisely, the $\delta$-interior of the capped simplex is defined as $\mathcal{X}_{\delta}\triangleq\mathcal{X}\cap[\delta,1]^{N}$. 
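The multiplicative update and the $\delta$-interior fix can be sketched as follows. This is only an illustration: the exponentiated-gradient step mirrors the multiplicative form of Eq. (20), but the projection is simplified to clipping plus rescaling, a crude stand-in for the exact Bregman projection of Algorithm 2.

```python
import math

def omd_ne_step(x, grad, eta, k, delta=1e-4):
    """One neg-entropy OMD step followed by a rough projection onto
    the delta-interior of the capped simplex: multiplicative update,
    clip to [delta, 1], rescale the mass toward total k, clip again.
    (Simplified stand-in for the exact Bregman projection.)"""
    y = [xi * math.exp(-eta * g) for xi, g in zip(x, grad)]
    y = [min(max(yi, delta), 1.0) for yi in y]
    s = sum(y)
    return [min(max(yi * k / s, delta), 1.0) for yi in y]
```

The floor `delta` is exactly what prevents an allocation from being driven arbitrarily close to $0$: after a popularity reversal, a file kept at `delta` can regrow multiplicatively instead of being stuck near zero.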
In Figure 4 (f), we use $\delta=10^{-4}$. This parameter indeed prevents the algorithm from driving the fractional allocations arbitrarily close to $0$, improving its adaptability. In Figure 4 (g), we show the performance of FTPL (Bhattacharjee et al., 2020); we observe that this policy fails to adapt to popularity changes. Both our mirror descent algorithms outperform the competitors (Fig. 4 (h)). Figure 4 (panels: (a) $\alpha=0.1$, (b) $\alpha=0.3$, (c) $\alpha=0.4$, (d) NAC of OGD, (e) NAC of $\mathrm{OMD}_{\mathrm{NE}}$, (f) NAC of $\delta$-$\mathrm{OMD}_{\mathrm{NE}}$, (g) NAC of FTPL, (h) NAC of the different policies). Subfigures (a)–(c) show the NAC of OGD and $\mathrm{OMD}_{\mathrm{NE}}$ evaluated under different diversity regimes when $10\%$ of the files change popularity over time. We use traces _Partial Popularity Change_ (1), (2), and (3) corresponding to the different diversity regimes. The diversity regimes are selected such that, in the stationary setting (dashed lines): (a) $\mathrm{OMD}_{\mathrm{NE}}$ outperforms OGD, (b) $\mathrm{OMD}_{\mathrm{NE}}$ has similar performance to OGD, and (c) $\mathrm{OMD}_{\mathrm{NE}}$ performs slightly worse than OGD. $\mathrm{OMD}_{\mathrm{NE}}$ is consistently more robust to partial popularity changes than OGD. Subfigures (d)–(h) show the NAC of the different caching policies evaluated on the _Global Popularity Change_ trace, where file popularity changes every $5\times 10^{4}$ iterations. While OGD adapts seamlessly to popularity changes (d), multiplicative updates can make $\mathrm{OMD}_{\mathrm{NE}}$ less reactive (e), unless $\mathrm{OMD}_{\mathrm{NE}}$ is forced to operate over the $\delta$-interior of $\mathcal{X}$ (f) ($\delta=10^{-4}$). Finally, our mirror descent policies outperform the competitors (h). #### 7.2.4. Akamai Trace Figure 5 shows that the two gradient algorithms, $\mathrm{OMD}_{\mathrm{NE}}$ and OGD, perform similarly over the Akamai Trace w.r.t. NMAC; OGD is slightly better in parts of the trace. 
Overall, these algorithms consistently outperform LFU, W-LFU, LRU, and FTPL. Note that these caching policies process requests individually, while $\mathrm{OMD}_{\mathrm{NE}}$ and OGD adapt more slowly, _freezing_ their state for an entire batch ($R=5000$ requests). Nevertheless, $\mathrm{OMD}_{\mathrm{NE}}$ and OGD still perform better. Figure 5. NMAC of the different caching policies evaluated on the _Akamai Trace_. $\mathrm{OMD}_{\mathrm{NE}}$ and OGD consistently provide the best performance compared to W-LFU, LFU, LRU, and FTPL. OGD performs slightly better than $\mathrm{OMD}_{\mathrm{NE}}$ in some parts of the trace. #### 7.2.5. Randomized Rounding Figure 6 shows the cumulative update cost for the online independent rounding, the online coupled rounding, and the online optimally-coupled rounding algorithms over the _Downscaled Global Popularity Change_ trace. All the rounding algorithms exhibit the same service cost in expectation. The update cost of the online coupled rounding and the online optimally-coupled rounding is small, on the order of the learning rate $\eta$; moreover, we observe that the online optimally-coupled rounding yields lower update costs than the online coupled rounding. In contrast, the online independent rounding incurs a significantly larger update cost. Figure 7 shows the fractional and (rounded) integral cache states under the _Downscaled Global Popularity Change_ trace. Online independent rounding indeed leads to more frequent updates than online coupled rounding, while the latter maintains a more stable cache configuration by coupling consecutive states and avoiding unnecessary updates. #### 7.2.6. Computational Cost Figure 8 shows the time taken by the two policies OGD and $\mathrm{OMD}_{\mathrm{NE}}$ to perform 500 iterations over the _Fixed Popularity_ trace (Fig. 8 (a)), and the time taken to perform 50 iterations over the _Batched Fixed Popularity (2)_ trace (Fig. 8 (b)). 
We observe that $\mathrm{OMD}_{\mathrm{NE}}$ is at least 15 times faster in computing cache states on average. Figure 6 (panels: (a) normalized average cost, (b) cumulative update cost). Costs associated with the rounded integral caching over the _Downscaled Global Popularity Change_ trace. The normalized average costs shown in (a) are the same for the online independent rounding, the online coupled rounding, and the online optimally-coupled rounding. The cumulative update cost of the online coupled rounding and the online optimally-coupled rounding in (b) is of the same order as in the fractional setting, while the online independent rounding in (b) incurs a much higher update cost. The reported values are averaged over 20 experiments, and the blue shaded area represents a 10% scaling of the standard deviation. Figure 7 (panels: (a) fractional cache states, (b) online independent rounding, (c) online coupled rounding, (d) online optimally-coupled rounding). Online rounding of fractional caching states. Visually, the online independent rounding performs more frequent updates than the online coupled rounding, leading to large update costs. The online coupled rounding and the online optimally-coupled rounding prevent the cache from performing unnecessary updates. Figure 8 (panels: (a) $R/h=1$, (b) $R/h>1$). Runtime per iteration of $\mathrm{OMD}_{\mathrm{NE}}$ and $\mathrm{OGD}$. Subfigure (a) shows the runtime per 500 iterations over the _Fixed Popularity_ trace. Subfigure (b) shows the runtime per 50 iterations over the _Batched Fixed Popularity (2)_ trace. ## 8\. Conclusions We study no-regret caching algorithms based on OMD with $q$-norm and neg-entropy mirror maps. We find that batch diversity impacts regret performance; a key finding is that OGD is optimal in low-diversity regimes, while $\mathrm{OMD}_{\mathrm{NE}}$ is optimal under high diversity. 
With an appropriately designed rounding scheme, our $\operatorname{\mathcal{O}}\left(\sqrt{T}\right)$ bound on the regret for general OMD algorithms extends to integral caches as well, despite the need to account for update costs in this setting. Our numerical experiments indicate that the gap between the regimes in which OGD and $\mathrm{OMD}_{\mathrm{NE}}$ are optimal, w.r.t. the diversity ratio, is narrow; this suggests that our characterization of the two regimes can be further improved. Moreover, $\mathrm{OMD}_{q\text{-}\mathrm{norm}}$ algorithms for arbitrary values of $q\in(1,2)$ deserve further investigation, to 1) devise strongly polynomial, efficient algorithms for their Bregman projection, 2) characterize their update costs, and 3) compare their performance with $\mathrm{OMD}_{\mathrm{NE}}$. ## References * Alouf et al. [2016] Sara Alouf, Nicaise Choungmo Fofack, and Nedko Nedkov. 2016. Performance Models for Hierarchy of Caches: Application to Modern DNS Caches. _Performance Evaluation_ 97 (2016), 57–82. * Andrew et al. [2013] Lachlan Andrew, Siddharth Barman, Katrina Ligett, Minghong Lin, Adam Meyerson, Alan Roytman, and Adam Wierman. 2013. A Tale of Two Metrics: Simultaneous Bounds on Competitiveness and Regret. _SIGMETRICS Performance Evaluation Review_ 41, 1 (June 2013), 329–330. * AWS [2021] AWS. 2021. Amazon Web Service ElastiCache. https://aws.amazon.com/elasticache/ * Bansal et al. [2012] Nikhil Bansal, Niv Buchbinder, and Joseph (Seffi) Naor. 2012\. A Primal-Dual Randomized Algorithm for Weighted Paging. _Journal of the ACM (JACM)_ 59, 4, Article 19 (Aug. 2012). * Beck and Teboulle [2003] Amir Beck and Marc Teboulle. 2003. Mirror Descent and Nonlinear Projected Subgradient Methods for Convex Optimization. _Operations Research Letters_ 31, 3 (2003), 167–175. * Berger et al. [2014] Daniel S. Berger, Philipp Gland, Sahil Singla, and Florin Ciucu. 2014. Exact Analysis of TTL Cache Networks. _Performance Evaluation_ 79 (2014), 2–23. 
* Bhattacharjee et al. [2020] Rajarshi Bhattacharjee, Subhankar Banerjee, and Abhishek Sinha. 2020. Fundamental Limits on the Regret of Online Network-Caching. _Proceedings of the ACM on Measurement and Analysis of Computing Systems_ 4, 2, Article 25 (June 2020). * Blaszczyszyn and Giovanidis [2015] B. Blaszczyszyn and A. Giovanidis. 2015. Optimal Geographic Caching In Cellular Networks. In _ICC_. 3358–3363. * Borodin et al. [1992] Allan Borodin, Nathan Linial, and Michael E. Saks. 1992\. An Optimal On-Line Algorithm for Metrical Task System. _Journal of the ACM (JACM)_ 39, 4 (Oct. 1992), 745–763. * Borst et al. [2010] Sem Borst, Varun Gupta, and Anwar Walid. 2010. Distributed Caching Algorithms for Content Distribution Networks. In _2010 Proceedings IEEE INFOCOM_. IEEE, 1–9. * Bubeck [2015] Sébastien Bubeck. 2015\. Convex Optimization: Algorithms and Complexity. _Foundations and Trends in Machine Learning_ 8, 3–4 (Nov. 2015), 231–357. * Bubeck et al. [2018] Sébastien Bubeck, Michael B. Cohen, Yin Tat Lee, James R. Lee, and Aleksander Madry. 2018. $K$-Server via Multiscale Entropic Regularization. In _Proceedings of the 50th Annual ACM SIGACT Symposium on Theory of Computing_ (Los Angeles, CA, USA) _(STOC 2018)_. Association for Computing Machinery, New York, NY, USA, 3–16. * Cesa-Bianchi and Lugosi [2006] Nicolo Cesa-Bianchi and Gabor Lugosi. 2006. _Prediction, Learning, and Games_. Cambridge University Press, USA. * Che et al. [2002] Hao Che, Ye Tung, and Z. Wang. 2002. Hierarchical Web Caching Systems: Modeling, Design and Experimental Results. _IEEE Journal on Selected Areas in Communications_ (Sep 2002). * Chesneau and Bagul [2020] Christophe Chesneau and Yogesh J. Bagul. 2020. New Sharp Bounds for the Logarithmic Function. _Electronic Journal of Mathematical Analysis and Applications_ 8, 1 (2020), 140–145. * Chu et al. [2018] Weibo Chu, Mostafa Dehghan, John C.S. Lui, Don Towsley, and Zhi-Li Zhang. 2018. 
Joint Cache Resource Allocation and Request Routing for In-network Caching Services. _Computer Networks_ 131 (2018), 1–14. * Coffman and Denning [1973] Edward Grady Coffman and Peter J. Denning. 1973. _Operating Systems Theory_. Vol. 973. Prentice-Hall, Inc. * Dehghan et al. [2019] Mostafa Dehghan, Laurent Massoulie, Don Towsley, Daniel Sadoc Menasche, and Y. C. Tay. 2019\. A Utility Optimization Approach to Network Cache Design. _IEEE/ACM Transactions on Networking_ 27, 3 (June 2019), 1013–1027. * Fagin [1977] Ronald Fagin. 1977\. Asymptotic Miss Ratios over Independent References. _J. Comput. System Sci._ 14, 2 (1977), 222–250. * Flajolet et al. [1992] Philippe Flajolet, Danièle Gardy, and Loÿs Thimonier. 1992. Birthday Paradox, Coupon Collectors, Caching Algorithms and Self-Organizing Search. _Discrete Applied Mathematics_ 39, 3 (1992), 207–229. * Fofack et al. [2014] Nicaise Choungmo Fofack, Philippe Nain, Giovanni Neglia, and Don Towsley. 2014. Performance Evaluation of Hierarchical TTL-based Cache Networks. _Computer Networks_ 65 (2014), 212–231. * Fricker et al. [2012] Christine Fricker, Philippe Robert, and James Roberts. 2012\. A Versatile and Accurate Approximation for LRU Cache Performance. In _Proceedings of the 24th International Teletraffic Congress_. 8. * Garetto et al. [2016] Michele Garetto, Emilio Leonardi, and Valentina Martina. 2016\. A Unified Approach to the Performance Analysis of Caching Systems. _ACM Transactions on Modeling and Performance Evaluation of Computing Systems_ 1, 3, Article 12 (May 2016), 12:1–12:28 pages. * Gast and Van Houdt [2017] Nicolas Gast and Benny Van Houdt. 2017. TTL Approximations of the Cache Replacement Algorithms LRU (m) and h-LRU. _Performance Evaluation_ 117 (2017), 33–57. * Gentile and Littlestone [1999] Claudio Gentile and Nick Littlestone. 1999. The Robustness of the $p$-Norm Algorithms. 
In _Proceedings of the Twelfth Annual Conference on Computational Learning Theory_ (Santa Cruz, California, USA) _(COLT ’99)_. Association for Computing Machinery, New York, NY, USA, 1–11. * Hazan [2016] Elad Hazan. 2016\. Introduction to Online Convex Optimization. _Foundations and Trends® in Optimization_ 2, 3–4 (Aug. 2016), 157–325. * Ioannidis et al. [2010] Stratis Ioannidis, Laurent Massoulié, and Augustin Chaintreau. 2010\. Distributed Caching over Heterogeneous Mobile Networks. In _Proceedings of the ACM SIGMETRICS_. 311–322. * Ioannidis and Yeh [2016] Stratis Ioannidis and Edmund Yeh. 2016. Adaptive Caching Networks with Optimality Guarantees. _SIGMETRICS Performance Evaluation Review_ 44, 1 (2016), 113–124. * Jelenkovic [1999] Predrag R. Jelenkovic. 1999\. Asymptotic Approximation of the Move-to-Front Search Cost Distribution and Least-Recently Used Caching Fault Probabilities. _The Annals of Applied Probability_ 9, 2 (1999), 430–464. * Jiang et al. [2018] Bo Jiang, Philippe Nain, and Don Towsley. 2018. On the Convergence of the TTL Approximation for an LRU Cache under Independent Stationary Request Processes. _ACM Transactions on Modeling and Performance Evaluation of Computing Systems_ 3, 4 (2018). * Karakostas and Serpanos [2002] George Karakostas and Dimitrios N. Serpanos. 2002. Exploitation of Different Types of Locality for Web Caches. In _Proceedings ISCC 2002 Seventh International Symposium on Computers and Communications_. IEEE, 207–212. * King [1972] W. F. King. 1972\. Analysis of Paging Algorithms. In _Proceedings of the IFIP congress on Information Processing_ , Vol. 71. 485–490. * Kiwiel [1997] Krzysztof C. Kiwiel. 1997\. Proximal Minimization Methods with Generalized Bregman Functions. _SIAM Journal on Control and Optimization_ 35, 4 (1997), 1142–1168. * Koutsoupias [2009] Elias Koutsoupias. 2009\. The $k$-server Problem. _Computer Science Review_ 3, 2 (May 2009), 105–118. * Leonardi and Neglia [2018] Emilio Leonardi and Giovanni Neglia. 
2018. Implicit Coordination of Caches in Small Cell Networks Under Unknown Popularity Profiles. _IEEE Journal on Selected Areas in Communications_ 36, 6 (June 2018), 1276–1285. * Lin [2013] Jun-Lin Lin. 2013\. On the Diversity Constraints for Portfolio Optimization. _Entropy_ 15, 11 (2013), 4607–4621. * Littlestone and Warmuth [1994] N. Littlestone and M. K. Warmuth. 1994. The Weighted Majority Algorithm. _Information and computation_ 108, 2 (1994), 212–261. * Manasse et al. [1988] Mark Manasse, Lyle McGeoch, and Daniel Sleator. 1988\. Competitive Algorithms for On-Line Problems. In _Proceedings of the Twentieth Annual ACM Symposium on Theory of Computing_ (Chicago, Illinois, USA) _(STOC ’88)_. Association for Computing Machinery, New York, NY, USA, 322–333. * Mukhopadhyay and Sinha [2021] Samrat Mukhopadhyay and Abhishek Sinha. 2021. Online Caching with Optimal Switching Regret. In _2021 IEEE International Symposium on Information Theory (ISIT)_. IEEE, 1546–1551. * Neglia et al. [2017] Giovanni Neglia, Damiano Carra, Mingdong Feng, Vaishnav Janardhan, Pietro Michiardi, and Dimitra Tsigkari. 2017. Access-Time-Aware Cache Algorithms. _ACM Transactions on Modeling and Performance Evaluation of Computing Systems_ 2, 4, Article 21 (Nov. 2017). * Neglia et al. [2018] Giovanni Neglia, Damiano Carra, and Pietro Michiardi. 2018\. Cache Policies for Linear Utility Maximization. _IEEE/ACM Transactions on Networking_ 26, 1 (Feb. 2018), 302–313. * Panigrahy et al. [2021] Nitish K. Panigrahy, Philippe Nain, Giovanni Neglia, and Don Towsley. 2021. A New Upper Bound on Cache Hit Probability for Non-Anticipative Caching Policies. _SIGMETRICS Performance Evaluation Review_ 48, 3 (March 2021), 138–143. * Paredes and Navarro [2006] Rodrigo Paredes and Gonzalo Navarro. 2006. Optimal Incremental Sorting. In _2006 Proceedings of the Eighth Workshop on Algorithm Engineering and Experiments (ALENEX)_. SIAM, 171–182. * Paschos et al. [2019] G. S. Paschos, A. Destounis, L. Vigneri, and G. 
Iosifidis. 2019. Learning to Cache With No Regrets. In _IEEE INFOCOM 2019 - IEEE Conference on Computer Communications_. 235–243. * Peyré et al. [2019] Gabriel Peyré, Marco Cuturi, et al. 2019\. Computational Optimal Transport: With Applications to Data Science. _Foundations and Trends® in Machine Learning_ 11, 5-6 (2019), 355–607. * Poularakis et al. [2016] Konstantinos Poularakis, George Iosifidis, Vasilis Sourlas, and Leandros Tassiulas. 2016. Exploiting Caching and Multicast for 5G Wireless Networks. _IEEE Transactions on Wireless Communications_ 15, 4 (2016), 2995–3007. * Shalev-Shwartz [2012] Shai Shalev-Shwartz. 2012\. Online Learning and Online Convex Optimization. _Foundations and Trends in Machine Learning_ 4, 2 (Feb. 2012), 107–194. * Shalev-Shwartz and Singer [2007] Shai Shalev-Shwartz and Yoram Singer. 2007. _Online Learning: Theory, Algorithms, and Applications_. Ph.D. Dissertation. Hebrew University. * Shanmugam et al. [2013] K. Shanmugam, N. Golrezaei, A. G. Dimakis, A. F. Molisch, and G. Caire. 2013\. FemtoCaching: Wireless Content Delivery Through Distributed Caching Helpers. _IEEE Transactions on Information Theory_ 59, 12 (2013), 8402–8413. * Si Salem et al. [2021] Tareq Si Salem, Giovanni Neglia, and Stratis Ioannidis. 2021\. No-Regret Caching via Online Mirror Descent. In _ICC 2021 - IEEE International Conference on Communications_. 1–6. * Sleator and Tarjan [1985] Daniel D. Sleator and Robert E. Tarjan. 1985. Amortized Efficiency of List Update and Paging Rules. _Commun. ACM_ 28, 2 (Feb. 1985), 202–208. * Traverso et al. [2013] Stefano Traverso et al. 2013\. Temporal Locality in Today’s Content Caching: Why It Matters and How to Model It. _ACM SIGCOMM Computer Communication Review_ 43, 5 (Nov. 2013), 5–12. * Wang and Lu [2015] Weiran Wang and Canyi Lu. 2015. Projection onto the Capped Simplex. _preprint arXiv:1503.01002_ (2015). * Zinkevich [2003] Martin Zinkevich. 2003\. 
Online Convex Programming and Generalized Infinitesimal Gradient Ascent. In _Proceedings of the Twentieth International Conference on International Conference on Machine Learning_ (Washington, DC, USA) _(ICML’03)_. AAAI Press, 928–935. ## Appendix A Fractional Caching and Gradient-based algorithms ### A.1. Proof of Proposition 1 ###### Proof. Consider a catalog of files $\mathcal{N}=\\{1,2\\}$, cache size $k=1$, and equal service costs $w_{1}=w_{2}=1$. The aggregate cost minimization policy $\mathcal{A}$ has an arbitrary initial state $\boldsymbol{x}_{1}\in\mathcal{X}$. The adversary picks the following sequence of request batches $\\{\boldsymbol{r}_{t}\\}^{T}_{t=1}=\\{\boldsymbol{e}_{1},2\boldsymbol{e}_{2},2\boldsymbol{e}_{1},2\boldsymbol{e}_{2},\dots\\}$, where $\boldsymbol{e}_{i}=[\mathds{1}_{\\{j=i\\}}]_{j\in\\{1,2\\}}$. The aggregate cost at time slot $t$ for a fixed cache state $\boldsymbol{x}\in\mathcal{X}$ is given by (29) $\displaystyle\sum^{t}_{t^{\prime}=1}f_{\boldsymbol{r}_{t^{\prime}}}(\boldsymbol{x})=\begin{cases}t(1-x_{1})+(t-1)(1-x_{2})&\text{if $t$ is odd,}\\\ (t-1)(1-x_{1})+t(1-x_{2})&\text{if $t$ is even}.\end{cases}$ For any time slot $t>1$ the aggregate cost minimization policy selects the state $\boldsymbol{x}_{t}$ that minimizes $\sum^{t-1}_{t^{\prime}=1}f_{\boldsymbol{r}_{t^{\prime}}}(\boldsymbol{x})$. The policy $\mathcal{A}$ thus selects the following sequence of states $\\{\boldsymbol{x}_{t}\\}^{T}_{t=1}=\\{\boldsymbol{x}_{1},\boldsymbol{e}_{1},\boldsymbol{e}_{2},\boldsymbol{e}_{1},\boldsymbol{e}_{2},\dots\\}$, and it incurs over the time horizon $T$ the total cost $\sum^{T}_{t=1}f_{\boldsymbol{r}_{t}}(\boldsymbol{x}_{t})=f_{\boldsymbol{r}_{1}}(\boldsymbol{x}_{1})+2(T-1)$. Consider instead a policy that permanently selects $\boldsymbol{e}_{1}$; such a policy incurs the total cost $\sum^{T}_{t=1}f_{\boldsymbol{r}_{t}}(\boldsymbol{e}_{1})=0+2+0+2+0+\dots\leq T$. 
Then the optimal static allocation $\boldsymbol{x}_{*}$ has cost at most $T$; therefore, we obtain a lower bound on the regret of $\mathcal{A}$ as $\mathrm{Regret}_{T}(\mathcal{A})\geq f_{\boldsymbol{r}_{1}}(\boldsymbol{x}_{1})+2(T-1)-T\geq T-2$. We conclude that the aggregate cost minimization policy has linear regret. ∎

### A.2. Online Mirror Descent

###### Theorem 1.

([12, Theorem 4.2]) Let (1) the map $\Phi:\mathcal{D}\to\mathbb{R}$ be a mirror map (see Sec. 4.2) $\rho$-strongly convex w.r.t. a norm $\left\lVert\,\cdot\,\right\rVert$ over $\mathcal{S}\cap\mathcal{D}$ ($\mathcal{S}$ is a convex set), (2) the cost functions $f_{t}:\mathcal{S}\to\mathbb{R}$ be convex with bounded gradients (i.e., $\left\lVert\nabla f_{t}(\boldsymbol{x})\right\rVert_{*}\leq L,\forall\boldsymbol{x}\in\mathcal{S}$) for every $t\in[T]$, where $\left\lVert\,\cdot\,\right\rVert_{*}$ is the dual norm of $\left\lVert\,\cdot\,\right\rVert$, (3) and the Bregman divergence $D_{\Phi}(\boldsymbol{x},\boldsymbol{x}_{1})$ be bounded by $D^{2}$ for $\boldsymbol{x}\in\mathcal{S}$, where $\boldsymbol{x}_{1}=\mathop{\arg\min}_{\boldsymbol{x}\in\mathcal{S}\cap\mathcal{D}}\Phi(\boldsymbol{x})$. Then Algorithm 1 satisfies (30) $\textstyle\sum^{T}_{t=1}f_{t}(\boldsymbol{x}_{t})-\sum^{T}_{t=1}f_{t}(\boldsymbol{x})\leq\frac{D^{2}}{\eta}+\frac{\eta L^{2}}{2\rho}T,$ where $\boldsymbol{x}$ is a fixed point in $\mathcal{S}$. (Note that the fixed point $\boldsymbol{x}$ can be selected as the minimizer of the aggregate cost, i.e., $\boldsymbol{x}_{*}\in\mathop{\arg\min}_{\boldsymbol{x}\in\mathcal{S}}\sum^{T}_{t=1}f_{t}(\boldsymbol{x})$.) We remark that Theorem 1 is expressed differently in [12], where $f_{t}=f,\forall t\in[T]$ (fixed cost function). Nonetheless, as observed in [12, Sec. 4.6], the bound in Eq. (30) holds as long as the dual norms of the gradients are bounded by $L$.

### A.3.
Proof of Theorem 4

The map $\Phi(x)=\frac{1}{2}\left\lVert x\right\rVert_{q}^{2},q\in(1,2]$, is $\rho=(q-1)$-strongly convex w.r.t. $\left\lVert\,\cdot\,\right\rVert_{q}$ over $\mathcal{D}=\mathbb{R}^{N}$ (a direct result from [49, Lemma 17]), and the dual norm of $\left\lVert\,\cdot\,\right\rVert_{q}$ is $\left\lVert\,\cdot\,\right\rVert_{p}$ (Hölder’s inequality). Take $\mathcal{S}=\mathcal{X}$. The minimum value of $\Phi(x)$ over $\mathcal{X}$ is achieved when we spread the capacity mass $k$ over the decision variable, i.e., $x_{i}=\frac{k}{N},i\in\mathcal{N}$. If we select $\boldsymbol{x}_{1}$ to be the minimizer of $\Phi(\boldsymbol{x})$, then we have ${\nabla\Phi(\boldsymbol{x}_{1})}^{T}(\boldsymbol{x}-\boldsymbol{x}_{1})\geq 0,\forall\boldsymbol{x}\in\mathcal{X}$ [27, Theorem 2.2], so we obtain $D_{\Phi}(\boldsymbol{x},\boldsymbol{x}_{1})=\Phi(\boldsymbol{x})-\Phi(\boldsymbol{x}_{1})-\nabla{\Phi(\boldsymbol{x}_{1})}^{T}(\boldsymbol{x}-\boldsymbol{x}_{1})\leq\Phi(\boldsymbol{x})-\Phi(\boldsymbol{x}_{1})$. Moreover, it is easy to check that $\Phi$ is maximized at a sparse point $\boldsymbol{x}_{*}\in\mathcal{X}\cap\\{0,1\\}^{N}$; thus, we have $D_{\Phi}(\boldsymbol{x},\boldsymbol{x}_{1})\leq\Phi(\boldsymbol{x}_{*})-\Phi(\boldsymbol{x}_{1})$. By replacing $\boldsymbol{x}_{1}$ and $\boldsymbol{x}_{*}$ with their values in the previous equation, we get $\textstyle\Phi(\boldsymbol{x}_{1})=\frac{1}{2}\left(\left(\frac{k}{N}\right)^{q}N\right)^{\frac{2}{q}}=\frac{1}{2}k^{2}N^{-\frac{2}{p}}$ and $\textstyle\Phi(\boldsymbol{x}_{*})=\frac{1}{2}k^{\frac{2}{q}}=\frac{1}{2}k^{2}k^{-\frac{2}{p}}$.
Thus, we have (31) $D_{\Phi}(\boldsymbol{x},\boldsymbol{x}_{1})\leq\frac{1}{2}k^{2}\left(k^{-\frac{2}{p}}-N^{-\frac{2}{p}}\right)=D^{2}.$ Note that the maximum of $\left\lVert\boldsymbol{r}\right\rVert_{p}$ is achieved when $\frac{R}{h}$ components are set to $h$; then the following bound on the gradients holds: (32) $\displaystyle\underset{r\in\mathcal{R}}{\max}\left\lVert\nabla f_{\boldsymbol{r}}(\boldsymbol{x})\right\rVert_{p}\leq\underset{\boldsymbol{r}\in\mathcal{R}}{\max}\left\lVert\boldsymbol{w}\right\rVert_{\infty}\left\lVert\boldsymbol{r}\right\rVert_{p}=\left\lVert\boldsymbol{w}\right\rVert_{\infty}h\left(\frac{R}{h}\right)^{\frac{1}{p}}=L.$ The gradients are thus bounded in the dual norm: $\left\lVert\nabla f_{\boldsymbol{r}}(\boldsymbol{x}_{t})\right\rVert_{p}\leq L,\forall\boldsymbol{r}\in\mathcal{R}$. The final bound follows by Theorem 1, plugging (32) and (31) in (30), and selecting the learning rate that achieves the tightest bound $\eta=\sqrt{{(q-1)k^{2}\left(k^{-\frac{2}{p}}-N^{-\frac{2}{p}}\right)}/\left({\left\lVert w\right\rVert^{2}_{\infty}h^{2}\left(\frac{R}{h}\right)^{\frac{2}{p}}T}\right)}$. ∎

### A.4. Proof of Corollary 6

Taking $\alpha=\frac{k}{N}$ and $\beta=\frac{Nh}{R}$, we can rewrite (12) as $\text{Regret}_{T}(\text{OMD}_{q\text{-norm}})\leq\left\lVert\boldsymbol{w}\right\rVert_{\infty}R\beta^{\frac{1}{q}}\sqrt{\frac{\alpha^{2/q}-\alpha^{2}}{q-1}T}$.
We take the limit $q\to 1$ to obtain the upper bound $\displaystyle\textstyle\text{Regret}_{T}(\text{OMD}_{1\text{-norm}})\leq\underset{q\to 1}{\text{lim}}\left\lVert\boldsymbol{w}\right\rVert_{\infty}R\beta^{\frac{1}{q}}\sqrt{\frac{\alpha^{2/q}-\alpha^{2}}{q-1}T}=\left\lVert\boldsymbol{w}\right\rVert_{\infty}R\beta\sqrt{\left[\alpha^{2/q}\right]^{\prime}_{q=1}T}$ $\displaystyle=\textstyle\left\lVert\boldsymbol{w}\right\rVert_{\infty}R\beta\sqrt{\left[-2q^{-2}\alpha^{2/q}\log(\alpha)\right]_{q=1}T}=\left\lVert\boldsymbol{w}\right\rVert_{\infty}R\alpha\beta\sqrt{2\log(\alpha^{-1})T}=\left\lVert\boldsymbol{w}\right\rVert_{\infty}hk\sqrt{2\log\left(\frac{N}{k}\right)T}.\qed$

### A.5. Proof of Theorem 7

We take the simplified version of the regret of the general class of $q$-norm mirror maps in Eq. (12) and select $\alpha=\frac{k}{N}$ and $\beta=\frac{Nh}{R}$, so we get $\text{Regret}_{T}(\text{OMD}_{q\text{-norm}})\leq\left\lVert\boldsymbol{w}\right\rVert_{\infty}R\phi(q)\sqrt{T}$, where $\phi(q)\triangleq\beta^{\frac{1}{q}}\sqrt{\frac{\alpha^{2/q}-\alpha^{2}}{q-1}}$. The tightest regret bound is achieved with the $q^{*}$ that minimizes $\phi(q)$.
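The monotone behavior of $\phi(q)$ can be sanity-checked numerically before the analytic argument; a sketch with illustrative parameters (the values of $N$, $k$, $R$, $h$ below are assumptions chosen so that $R/h\leq k$):

```python
import math

N, k, R, h = 100, 10, 5, 1                 # illustrative values, R/h = 5 <= k
alpha, beta = k / N, N * h / R

def phi(q):
    # phi(q) = beta^(1/q) * sqrt((alpha^(2/q) - alpha^2) / (q - 1))
    return beta ** (1 / q) * math.sqrt((alpha ** (2 / q) - alpha ** 2) / (q - 1))

qs = [1.01 + i * (2.0 - 1.01) / 99 for i in range(100)]   # grid over (1, 2]
vals = [phi(q) for q in qs]
# phi decreases on (1, 2] when R/h <= k, so the minimum sits at q = 2.
print(vals[0], vals[-1])
```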
We have (33) $\displaystyle\textstyle\phi^{\prime}(q)=-\alpha^{2}\beta^{\frac{1}{q}}\frac{q^{2}\left(\alpha^{2/q-2}-1\right)+2(q-1)\left(\alpha^{2/q-2}\log(\alpha)+\left(\alpha^{2/q-2}-1\right)\log(\beta)\right)}{2q^{2}(q-1)^{2}\sqrt{\frac{\alpha^{2/q}-\alpha^{2}}{q-1}}}.$ We study the sign of the derivative in Eq. (33), which is $-\text{sign}(J(q))$, where (34) $\displaystyle J(q)=\textstyle q^{2}\left(\alpha^{2/q-2}-1\right)+2(q-1)\left(\alpha^{2/q-2}\log(\alpha)+\left(\alpha^{2/q-2}-1\right)\log(\beta)\right)$ (35) $\displaystyle\geq\textstyle 2q(1-q)\log(\alpha)+2(q-1)\left(2\frac{1-q}{q}\log(\alpha)\log(\alpha\beta)+\log(\alpha)\right)$ (36) $\displaystyle\geq\textstyle 2q(q-1)\left(\left(1-q\right)\log(\alpha)+2\frac{1-q}{q}\log(\alpha)\log(\alpha\beta)\right).$ Note that $\left(1-q\right)\log(\alpha)\geq 0$ and $\frac{1-q}{q}\log(\alpha)\geq 0$. Taking $\frac{R}{h}\leq k$ gives $\alpha\beta\geq 1$ and $J(q)\geq 0$, implying $\text{sign}(\phi^{\prime}(q))=-\text{sign}(J(q))=-1$. We conclude that $\phi(q)$ is a decreasing function of $q\in(1,2]$ when $\frac{R}{h}\leq k$; therefore, the minimum is obtained at $q=2$ for $\frac{R}{h}\leq k$. ∎

### A.6. Proof of Theorem 8

From Corollary 6, we have the following regret upper bound for the $q$-norm mirror map as $q\to 1$: $\text{Regret}_{T}(\text{OMD}_{1\text{-norm}})\leq\left\lVert\boldsymbol{w}\right\rVert_{\infty}hk\sqrt{2\log\left(\frac{N}{k}\right)T}$. In [16], it is proved that the $\log$ function satisfies $\text{log}(u+1)\leq\frac{u}{\sqrt{u+1}},u\geq 0$. We take $u=\frac{N}{k}-1$ and note that $N\geq k>0$, so $u\geq 0$. We obtain $\text{log}\left(\frac{N}{k}\right)\leq\frac{N-k}{\sqrt{Nk}}=\sqrt{\frac{N}{k}}\left(1-\frac{k}{N}\right)$. Thus, the upper bound in Corollary 6 Eq.
(14) can be loosened to obtain (37) $\displaystyle\textstyle\text{Regret}_{T}(\text{OMD}_{1\text{-norm}})\leq\left\lVert\boldsymbol{w}\right\rVert_{\infty}kh\sqrt{2\text{log}\left(\frac{N}{k}\right)T}\leq\left\lVert\boldsymbol{w}\right\rVert_{\infty}\sqrt{2\sqrt{Nk}h^{2}k\left(1-\frac{k}{N}\right)T}.$ If we take $\frac{R}{h}\geq 2\sqrt{Nk}$, then this upper bound is tighter than the upper bound on the regret of OGD in Corollary 5. ∎

### A.7. $\mathrm{OMD}_{\mathrm{NE}}$ and $\mathrm{OMD}$ with $q$-norm Mirror Map Correspondence

###### Theorem 2.

The algorithm $\mathrm{OMD}_{1\text{-}\mathrm{norm}}$, defined as the limiting algorithm obtained by letting $q$ converge to 1 in $\mathrm{OMD}_{q\text{-}\mathrm{norm}}$ with learning rate $\eta_{q}=\eta(q-1)k$, intermediate states $\boldsymbol{y}^{(q)}_{t}$, and fractional states $\boldsymbol{x}^{(q)}_{t}$ for $t\geq 1$, is equivalent to $\mathrm{OMD}_{\mathrm{NE}}$ configured with learning rate $\eta\in\mathbb{R}_{+}$ over the simplex (the capped simplex $\mathcal{X}$ with $k=1$), when both policies are initialized with the same state in $\mathbb{R}^{N}_{>0}\cap\mathcal{X}$. Moreover, $\mathrm{OMD}_{1\text{-}\mathrm{norm}}$ has a multiplicative update rule over the capped simplex.

###### Proof.

Let $\boldsymbol{g}_{t}=\nabla f_{\boldsymbol{r}_{t}}(\boldsymbol{x}_{t})$ be the gradient of the cost function at time slot $t$. From lines 2–3 in Algorithm 1 and Eq. (15), we obtain $\hat{y}^{(q)}_{t,i}=\left(\nabla\Phi(\boldsymbol{x}_{t})\right)_{i}-\eta_{q}g_{t,i}=\text{sign}(x_{t,i})\frac{|x_{t,i}|^{q-1}}{\left\lVert\boldsymbol{x}_{t}\right\rVert^{q-2}_{q}}-\eta_{q}g_{t,i}$ for a given $q\in(1,2]$. The algorithm guarantees that $\boldsymbol{x}_{t}\in\mathcal{X}\cap\mathbb{R}^{N}_{>0}$, so $x_{t,i}>0,\forall i\in\mathcal{N}$. Hence, we get $\textstyle\hat{y}^{(q)}_{t,i}=\frac{(x_{t,i})^{q-1}}{\left\lVert\boldsymbol{x}_{t}\right\rVert^{q-2}_{q}}-\eta_{q}g_{t,i},\forall i\in\mathcal{N}$. Note that $-\eta g_{t,i}$ is non-negative.
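The claimed $q\to 1$ limit can be checked numerically for $k=1$. The sketch below (the state, gradient, and $\eta$ are illustrative assumptions) applies one $q$-norm mirror step for $q$ close to 1 and compares it with the multiplicative rule in Eq. (39):

```python
import numpy as np

x = np.array([0.5, 0.3, 0.2])       # current state on the simplex (k = 1)
g = np.array([-1.0, -2.0, -0.5])    # gradient of the linear cost (non-positive)
eta, k, q = 0.1, 1, 1.001
p = q / (q - 1)                     # conjugate exponent of q
eta_q = eta * (q - 1) * k           # scaled learning rate of OMD_{q-norm}

# Mirror step with Phi = 0.5 * ||.||_q^2: y_hat_i = x_i^{q-1}/||x||_q^{q-2} - eta_q g_i
y_hat = x ** (q - 1) / np.linalg.norm(x, q) ** (q - 2) - eta_q * g
# Inverse map (Eq. (16)): y_i = y_hat_i^{p-1} / ||y_hat||_p^{p-2}
y = y_hat ** (p - 1) / np.linalg.norm(y_hat, p) ** (p - 2)

# Multiplicative limit rule of Eq. (39) with k = 1
target = x * np.exp(-eta * g) / np.sum(x * np.exp(-eta * g))
print(np.max(np.abs(y - target)))   # discrepancy shrinks as q -> 1
```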
The numbers $p$ and $q$ are conjugate exponents (see Sec. 4.4) and satisfy $q-1=\frac{1}{p-1}$. We use Eq. (16) to get the expression of $y^{(q)}_{t+1,i}$ as $\displaystyle\textstyle\frac{\left(\frac{(x_{t,i})^{q-1}}{\left\lVert\boldsymbol{x}_{t}\right\rVert^{q-2}_{q}}-\eta_{q}g_{t,i}\right)^{p-1}}{\left(\sum_{i\in\mathcal{N}}\left(\frac{(x_{t,i})^{q-1}}{\left\lVert\boldsymbol{x}_{t}\right\rVert^{q-2}_{q}}-\eta_{q}g_{t,i}\right)^{p}\right)^{\frac{p-2}{p}}}=\frac{\left(\frac{(x_{t,i})^{q-1}}{\left\lVert\boldsymbol{x}_{t}\right\rVert^{q-2}_{q}}-\eta(q-1)kg_{t,i}\right)^{p-1}}{\left(\sum_{i\in\mathcal{N}}\left(\frac{(x_{t,i})^{q-1}}{\left\lVert\boldsymbol{x}_{t}\right\rVert^{q-2}_{q}}-\eta(q-1)kg_{t,i}\right)^{p}\right)^{\frac{p-2}{p}}}=\textstyle\frac{x_{t,i}\left\lVert\boldsymbol{x}_{t}\right\rVert^{(2-q)}_{q}\left(1-\eta(q-1)kg_{t,i}\frac{\left\lVert\boldsymbol{x}_{t}\right\rVert^{q-2}_{q}}{(x_{t,i})^{q-1}}\right)^{p-1}}{\left(\sum_{i\in\mathcal{N}}(x_{t,i})^{{(q-1)p}}\left(1-\eta(q-1)kg_{t,i}\frac{\left\lVert\boldsymbol{x}_{t}\right\rVert^{q-2}_{q}}{(x_{t,i})^{q-1}}\right)^{p}\right)^{\frac{p-2}{p}}}.$ We rewrite the above expression solely in terms of $p$ to obtain (38) $\displaystyle y^{\left(\frac{p}{p-1}\right)}_{t+1,i}=\textstyle\frac{x_{t,i}\left\lVert\boldsymbol{x}_{t}\right\rVert^{(2-\frac{p}{p-1})}_{\frac{p}{p-1}}\left(1-{\eta g_{t,i}k}/\left({(x_{t,i})^{\frac{1}{p-1}}\left\lVert\boldsymbol{x}_{t}\right\rVert^{(2-\frac{p}{p-1})}_{\frac{p}{p-1}}}\right)\right)^{p-1}}{\left(\sum_{i\in\mathcal{N}}(p-1)(x_{t,i})^{{(\frac{p}{p-1})}}\left(1-{\eta g_{t,i}k}/\left({(p-1)(x_{t,i})^{\frac{1}{p-1}}\left\lVert\boldsymbol{x}_{t}\right\rVert^{(2-\frac{p}{p-1})}_{\frac{p}{p-1}}}\right)\right)^{p}\right)^{\frac{p-2}{p}}}.$ Taking the limit as $q\to 1$ is equivalent to letting $p\to+\infty$, so we have $\displaystyle y_{t+1,i}\triangleq\lim_{p\to+\infty}y^{\left(\frac{p}{p-1}\right)}_{t+1,i}$ $\displaystyle=\textstyle
x_{t,i}k\left(\lim_{p\to+\infty}\frac{\left(1-\frac{\eta g_{t,i}}{p-1}\right)^{p-1}}{\left(\sum_{i\in\mathcal{N}}x_{t,i}\left(1-\frac{\eta g_{t,i}}{p-1}\right)^{p}\right)^{\frac{p-2}{p}}}\right)=x_{t,i}k\left(\lim_{p\to+\infty}\frac{\left(1-\frac{\eta g_{t,i}}{p-1}\right)^{p-1}}{\left(\sum_{i\in\mathcal{N}}x_{t,i}\left(1-\frac{\eta g_{t,i}}{p-1}\right)^{p}\right)}\right)$ (39) $\displaystyle=\textstyle x_{t,i}k\frac{\exp\left(-\eta g_{t,i}\right)}{\sum_{i\in\mathcal{N}}x_{t,i}\exp\left(-\eta g_{t,i}\right)},\forall i\in\mathcal{N}.$ * • The intermediate state of $\mathrm{OMD}_{1\text{-}\mathrm{norm}}$ in Eq. (39) is a multiplicative update rule identical to the update rule of $\mathrm{OMD}_{\mathrm{NE}}$ ($y_{t+1,i}=x_{t,i}\,e^{-\eta g_{t,i}}$ in Eq. (20)) with an additional multiplicative factor $\frac{k}{\sum_{i\in\mathcal{N}}x_{t,i}\exp\left(-\eta g_{t,i}\right)}$. * • For $k=1$, the intermediate state of $\mathrm{OMD}_{1\text{-}\mathrm{norm}}$ in Eq. (39) is feasible (i.e., $\boldsymbol{x}_{t+1}=\boldsymbol{y}_{t+1}$ and the projection has no effect). On the other hand, the neg-entropy projection in the case of $k=1$ is just a normalization of the intermediate states (i.e., $x_{t+1,i}=\frac{x_{t,i}\,e^{-\eta g_{t,i}}}{\sum_{i\in\mathcal{N}}x_{t,i}\exp\left(-\eta g_{t,i}\right)}$ for $i\in\mathcal{N}$). Thus, the states obtained by the two algorithms coincide. ∎ ### A.8. Proof of Theorem 9 The neg-entropy mirror map is $\rho=\frac{1}{k}$-strongly convex w.r.t the norm $\left\lVert\,\cdot\,\right\rVert_{1}$ over $\mathcal{X}\cap\mathcal{D}$ [48, Example 2.5]. The dual norm of $\left\lVert\,\cdot\,\right\rVert_{1}$ is $\left\lVert\,\cdot\,\right\rVert_{\infty}$. By taking $p\to\infty$ in Eq. (32) we can consider as bound for the gradient in Eq. 
(30): (40) $\displaystyle L=\left\lVert\boldsymbol{w}\right\rVert_{\infty}h.$ The initial state $\boldsymbol{x}_{1}$ with $x_{1,i}=k/N,\forall i\in\mathcal{N}$ is the minimizer of $\Phi$, and we have $\Phi(\boldsymbol{x})\leq 0,\forall\boldsymbol{x}\in\mathcal{X}$. Thus, (41) $\textstyle D_{\Phi}(\boldsymbol{x},\boldsymbol{x}_{1})\leq\Phi(\boldsymbol{x})-\Phi(\boldsymbol{x}_{1})\leq-\Phi(\boldsymbol{x}_{1})=-\sum^{N}_{i=1}\frac{k}{N}\log\left(\frac{k}{N}\right)=k\log\left(\frac{N}{k}\right)=D^{2}.$ The bound follows by Theorem 1, plugging (40) and (41) in (30), and selecting the learning rate that gives the tightest upper bound, that is, $\eta=\sqrt{\frac{2\log(\frac{N}{k})}{\left\lVert w\right\rVert^{2}_{\infty}h^{2}T}}$. ∎

### A.9. Proof of Theorem 10

We adapt the Euclidean projection algorithm in [54]. Finding the projection $\mathbf{x}=\Pi^{\Phi}_{\mathcal{X}\cap\mathcal{D}}(\mathbf{y})$ corresponds to solving a convex problem, as $D_{\Phi}(\mathbf{x},\mathbf{y})$ is convex in $\mathbf{x}$ and $\mathcal{X}\cap\mathcal{D}$ is a convex set. Without loss of generality, we assume the components of $\mathbf{x}=\Pi^{\Phi}_{\mathcal{X}\cap\mathcal{D}}(\mathbf{y})$ to be in non-decreasing order. Let $b\in\mathcal{N}$ be the index of the largest component of $\mathbf{x}$ smaller than 1. The KKT conditions lead to the conclusion that if the components of $\mathbf{y}$ are ordered in ascending order, so are the components of $\mathbf{x}$. In particular, the smallest $b$ components of $\mathbf{x}$ can be obtained as $x_{i}=y_{i}e^{\gamma}$ and $y_{b}e^{\gamma}<1\leq y_{b+1}e^{\gamma}$, where $\gamma$ is the Lagrange multiplier associated with the capacity constraint. If $b$ is known, then it follows from the capacity constraint that $\textstyle m_{b}\triangleq e^{\gamma}=\textstyle\frac{k+b-N}{\sum_{i=1}^{b}{y_{i}}}=\textstyle\frac{k+b-N}{\left\lVert\mathbf{y}\right\rVert_{1}-\sum_{i=b+1}^{N}{y_{i}}}$. We observe that necessarily $b\in\\{N-k+1,\dots,N\\}$. In fact, we cannot have $b\leq N-k$.
If $b\leq N-k$, we get $\sum_{i=N-k+1}^{N}x_{i}\geq k$, and the capacity constraint implies that $x_{i}=0,\forall i\leq b$; but we must have $x_{i}>0$, since $\mathbf{x}\in\mathcal{X}\cap\mathcal{D}$ and $\mathcal{D}=\mathbb{R}_{>0}^{N}$. We can then find the value of $b$ by checking which number in $\\{N-k+1,\dots,N\\}$ satisfies $y_{b}e^{\gamma}<1\leq y_{b+1}e^{\gamma}$. Note that this operation only requires the largest $k$ components of $\mathbf{y}$. The projection corresponds to setting the components $y_{b+1},\dots,y_{N}$ to 1 and multiplying the first $b$ components by $m_{b}$. In order to avoid updating all components at each step, we can simply set the components $x_{i}$ for $i>b$ (those that should be set equal to 1) to $\frac{1}{m_{b}}$. Then, at any time $t$, we can recover the value of $x_{t,i}$ by multiplying the $i$-th component of the vector $\mathbf{x}$ by $P=\prod_{s=1}^{t}{m_{b,s}}$, where $m_{b,s}$ is the $m_{b}$ returned by the Bregman projection at time step $s$. For general values of $R$ and $h$, the projection step takes $\operatorname{\mathcal{O}}\left(k\right)$ steps per iteration, and a partial sort is required to maintain the top-$k$ components of $\mathbf{y}_{t}$ sorted; this can be done using partial sorting in $\operatorname{\mathcal{O}}\left(N+k\log(k)\right)$ [44]. When $R=h=1$, Alg. 1 leads to only a single state coordinate update and requires $\operatorname{\mathcal{O}}\left(\log(k)\right)$ steps to maintain the top-$k$ components of $\mathbf{x}_{t}$ sorted online. ∎

### A.10. Proof of Proposition 1

At every time slot $t\in[T]$, we obtain an intermediate cache state $\boldsymbol{y}_{t+1}$ through lines 2–4 in Algorithm 1 as $\boldsymbol{y}_{t+1}=(\nabla\Phi^{-1})(\nabla\Phi(\boldsymbol{x}_{t})-\eta\nabla f_{\boldsymbol{r}_{t}}(\boldsymbol{x}_{t}))$.
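The $m_{b}$-based search described in Appendix A.9 can be sketched as a direct (non-lazy) implementation; the function name and the example input below are illustrative, and the lazy $P=\prod_{s}m_{b,s}$ bookkeeping is omitted:

```python
import numpy as np

def negentropy_projection(y, k):
    """Bregman projection of y > 0 onto {x : 0 < x_i <= 1, ||x||_1 = k}
    under the neg-entropy mirror map, following the m_b search of A.9."""
    y = np.asarray(y, dtype=float)
    ys = np.sort(y)                           # ascending order
    N = len(ys)
    for b in range(N - k + 1, N + 1):         # b must lie in {N-k+1, ..., N}
        m_b = (k + b - N) / ys[:b].sum()      # e^gamma from the capacity constraint
        upper = np.inf if b == N else ys[b] * m_b
        if ys[b - 1] * m_b < 1.0 <= upper:    # y_b e^gamma < 1 <= y_{b+1} e^gamma
            return np.minimum(y * m_b, 1.0)   # scale small components, cap the rest
    raise ValueError("no feasible index b found")

x = negentropy_projection([0.2, 0.5, 1.4, 3.0], k=2)
print(x, x.sum())                             # largest component capped at 1, mass k
```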
Let $\\{\boldsymbol{x}_{t}\\}^{T}_{t=1}$ and $\\{\boldsymbol{x}^{\prime}_{t}\\}^{T}_{t=1}$ be the fractional cache states obtained by $\mathrm{OGD}$ and $\mathrm{OMD}_{\mathrm{NE}}$, respectively, and $\\{\boldsymbol{y}_{t}\\}^{T}_{t=1}$ and $\\{\boldsymbol{y}^{\prime}_{t}\\}^{T}_{t=1}$ be their intermediate fractional cache states. In the case of $\mathrm{OGD}$, or equivalently $\mathrm{OMD}$ configured with mirror map $\Phi(\boldsymbol{x})=\frac{1}{2}\left\lVert\boldsymbol{x}\right\rVert^{2}_{2}$, the intermediate fractional states have the same components as the previous cache state for files that are not requested, i.e., $y_{t,i}=x_{t,i}$ for every $i\not\in\mathop{\mathrm{supp}}(\boldsymbol{r}_{t})$. Similarly, for $\mathrm{OMD}_{\mathrm{NE}}$ we also have $y^{\prime}_{t,i}=x^{\prime}_{t,i}$ for every $i\not\in\mathop{\mathrm{supp}}(\boldsymbol{r}_{t})$. The Euclidean projection algorithm onto the capped simplex [54] can only set a component of the intermediate fractional cache state to one if it exceeds it, and the remaining components are either set to zero or reduced by a constant amount $\Delta=\frac{N-b-k+\sum^{b}_{j=a+1}{y_{t+1,j}}}{b-a},$ where $a$ is the number of components set to zero and $b$ is the number of components strictly less than one; moreover, $\Delta\geq y_{t+1,a}$ (a KKT condition in [54]) and in turn $\Delta\geq 0$, because $y_{t+1,i}\geq 0$ for any $i\in\mathcal{N}$. Therefore, all the components $i\not\in\mathop{\mathrm{supp}}(\boldsymbol{r}_{t})$ of the resulting state are decreased or at most kept unchanged. Similarly, the neg-entropy Bregman projection onto the capped simplex sets some components to one if they exceed it, and the remaining components are scaled by a constant $m_{b}$.
In our caching setting we have $y_{t+1,i}\geq x_{t,i}$ for $i\in\mathcal{N}$, and in turn $\left\lVert\boldsymbol{y}_{t+1}\right\rVert_{1}\geq k$; thus, the equality constraint $\left\lVert\boldsymbol{x}\right\rVert_{1}=k$ in the projection can be replaced by $\left\lVert\boldsymbol{x}\right\rVert_{1}\leq k$. From the KKT dual feasibility condition we obtain $-\gamma\geq 0$ and $m_{b}=e^{\gamma}\leq 1$. Thus, we have $x_{t+1,i}\leq x_{t,i}$ for every $i\not\in\mathop{\mathrm{supp}}(\boldsymbol{r}_{t})$. We conclude that the update cost is zero for both policies, i.e., $\mathrm{UC}_{\boldsymbol{r}_{t}}(\boldsymbol{x}_{t},\boldsymbol{x}_{t+1})=\textstyle\sum_{i\notin\mathop{\mathrm{supp}}(\boldsymbol{r}_{t})}w_{i}^{\prime}\max(0,x_{t+1,i}-x_{t,i})=0$. ∎

## Appendix B Integral Caching

### B.1. Proof of Proposition 1

Consider equal service costs $w_{i}=1$ for any $i$ in $\mathcal{N}$. A deterministic policy $\mathcal{A}$ selects an integral cache state $\boldsymbol{x}_{t}$ from $\mathcal{Z}$ at every time slot $t$, and the adversary can select a request batch $\boldsymbol{r}_{t}$ based on the selected state. Let $\boldsymbol{r}_{t}=[\mathds{1}_{\\{x_{t,i}\neq 1\\}}]_{i\in\mathcal{N}}$ be the request batch selected by the adversary at time $t$; the cost incurred at any time slot $t$ is then $f_{\boldsymbol{r}_{t}}(\boldsymbol{x}_{t})=N-k$, and the total cost incurred by $\mathcal{A}$ over the time horizon $T$ is $\sum^{T}_{t=1}f_{\boldsymbol{r}_{t}}(\boldsymbol{x}_{t})=(N-k)T$.
For a fixed integral cache state $\boldsymbol{x}\in\mathcal{Z}$, (42) $\displaystyle\textstyle\sum^{T}_{t=1}f_{\boldsymbol{r}_{t}}(\boldsymbol{x})$ $\displaystyle=\textstyle\sum^{T}_{t=1}\sum^{N}_{i=1}(1-x_{i})(1-x_{t,i})=T(N-2k)+\sum^{N}_{i=1}x_{i}\sum^{T}_{t=1}x_{t,i}.$ The best static cache state $\boldsymbol{x}_{*}$ is given by $\boldsymbol{x}_{*}=\mathop{\arg\min}_{\boldsymbol{x}\in\mathcal{Z}}\sum^{T}_{t=1}f_{\boldsymbol{r}_{t}}(\boldsymbol{x})=\mathop{\arg\min}_{\boldsymbol{x}\in\mathcal{Z}}\sum^{N}_{i=1}x_{i}\sum^{T}_{t=1}x_{t,i}$. The maximum value of $\sum^{N}_{i=1}x_{*,i}\sum^{T}_{t=1}x_{t,i}$ is achieved when $\sum^{T}_{t=1}x_{t,i}=\sum^{T}_{t=1}x_{t,j}=Tk/N$ for every $i,j\in\mathcal{N}$, and in this case $\boldsymbol{x}_{*}$ can be arbitrary in $\mathcal{Z}$. Thus, the cost incurred by the static optimum is upper bounded by $\sum^{T}_{t=1}f_{\boldsymbol{r}_{t}}(\boldsymbol{x}_{*})=T(N-2k)+\sum^{N}_{i=1}x_{*,i}\sum^{T}_{t=1}x_{t,i}\leq T(N-2k)+\frac{Tk^{2}}{N}$. The regret of $\mathcal{A}$ over time horizon $T$ is lower bounded by (43) $\displaystyle\textstyle\mathrm{Regret}_{T}(\mathcal{A})$ $\displaystyle=\textstyle\sum^{T}_{t=1}f_{\boldsymbol{r}_{t}}(\boldsymbol{x}_{t})-\sum^{T}_{t=1}f_{\boldsymbol{r}_{t}}(\boldsymbol{x}_{*})\geq T(N-k)-T(N-2k)-T\frac{k^{2}}{N}=k\left(1-\frac{k}{N}\right)T.$ We conclude that the regret of any deterministic policy $\mathcal{A}$ is $\Omega(T)$ compared to a static optimum selecting the best state in $\mathcal{Z}$; therefore, it also has $\Omega(T)$ regret compared to a static optimum selecting the best fractional state in $\mathcal{X}$, which includes $\mathcal{Z}$. ∎

### B.2.
Proof of Proposition 2

By the linearity of $f_{\boldsymbol{r}_{t}}$, the expected service cost incurred when sampling the integral caching states $\boldsymbol{z}_{t}$ from $\boldsymbol{x}_{t}$ at each time $t$ is $\mathbb{E}\left[\sum^{T}_{t=1}f_{\boldsymbol{r}_{t}}(\boldsymbol{z}_{t})\right]=\sum^{T}_{t=1}\mathbb{E}\left[f_{\boldsymbol{r}_{t}}(\boldsymbol{z}_{t})\right]=\sum^{T}_{t=1}f_{\boldsymbol{r}_{t}}\left(\mathbb{E}\left[\boldsymbol{z}_{t}\right]\right)=\sum_{t=1}^{T}f_{\boldsymbol{r}_{t}}(\boldsymbol{x}_{t})$. The best static configuration $\boldsymbol{x}_{*}$ in the fractional setting can always be selected to be integral; this is because the objective and constraints are linear, so integrality follows from the fundamental theorem of linear programming. Hence, the expected regret for the service cost coincides with the regret of the fractional caching policy. ∎

### B.3. Proof of Theorem 3

We consider the catalog $\mathcal{N}=\\{1,2\\}$, cache capacity $k=1$, and equal service and update costs $w_{i}=w^{\prime}_{i}=1$ for $i\in\mathcal{N}$. A policy $\mathcal{A}$ selects the states $\\{\boldsymbol{x}_{t}\\}^{T}_{t=1}\in\mathcal{X}^{T}$. The randomized states obtained by $\Xi$ are $\\{\boldsymbol{z}_{t}\\}^{T}_{t=1}$; thus, we have $\boldsymbol{z}_{t}=[1,0]$ w.p. $x_{t,1}$, and $\boldsymbol{z}_{t}=[0,1]$ w.p. $x_{t,2}$.
An adversary selects the request batch as $\boldsymbol{r}_{t}=[\mathds{1}_{\\{i=i_{t}\\}}]_{i\in\mathcal{N}}$ aiming to greedily maximize the cost of the cache, where (44) $\displaystyle\textstyle i_{t}=\mathop{\arg\min}_{i\in\mathcal{N}}\\{x_{t,i}\\}.$ The expected service cost at time $t$ is $\mathbb{E}\left[{f_{\boldsymbol{r}_{t}}(\boldsymbol{z}_{t})}\right]=1-x_{t,i_{t}}$, and the expected update cost is (45) $\displaystyle\mathbb{E}\left[\mathrm{UC}_{\boldsymbol{r}_{t}}(\boldsymbol{z}_{t},\boldsymbol{z}_{t+1})\right]=\mathbb{P}\left(z_{t,i_{t}}=1,z_{t+1,i_{t}}=0\right)=\mathbb{P}\left(z_{t,i_{t}}=1\right)\mathbb{P}\left(z_{t+1,i_{t}}=0\right)=x_{t,i_{t}}(1-x_{t+1,i_{t}}).$ An update cost is incurred when $i_{t}$ is requested and the state changes from $z_{t,i_{t}}=1$ to $z_{t+1,i_{t}}=0$; we pay a unitary cost due to fetching a single file that is not requested with probability $\mathbb{P}\left(z_{t,i_{t}}=1,z_{t+1,i_{t}}=0\right)$, and this gives the first equality. We use independence of the random variables $\boldsymbol{z}_{t}$ and $\boldsymbol{z}_{t+1}$ to obtain the second equality. A fixed state $\boldsymbol{x}=\left[\frac{1}{2},\frac{1}{2}\right]$ incurs a cost of $\frac{1}{2}$ for every timeslot $t\in[T]$. We define the instantaneous extended regret w.r.t. the fixed state $\boldsymbol{x}$ for every timeslot $t\in[T]$ as $a_{t}=\mathbb{E}\left[f_{\boldsymbol{r}_{t}}(\boldsymbol{x}_{t})+\mathrm{UC}_{\boldsymbol{r}_{t}}(\boldsymbol{z}_{t},\boldsymbol{z}_{t+1})\right]-f_{\boldsymbol{r}_{t}}(\boldsymbol{x})=\mathbb{E}\left[f_{\boldsymbol{r}_{t}}(\boldsymbol{x}_{t})+\mathrm{UC}_{\boldsymbol{r}_{t}}(\boldsymbol{z}_{t},\boldsymbol{z}_{t+1})\right]-\frac{1}{2}$. Observe here that looking for a minimizer over $\mathcal{X}$ or over $\mathcal{Z}$ is equivalent. 
Because $\boldsymbol{x}$ is not necessarily the minimizer of the aggregate service cost $\sum^{T}_{t=1}f_{\boldsymbol{r}_{t}}(\boldsymbol{x})$, we can lower bound the extended regret (27) as (46) $\displaystyle\textstyle\mathrm{E\mbox{-}Regret}_{T}(\mathcal{A},\Xi)\geq\sum^{T}_{t=1}a_{t}.$ Without loss of generality, assume that $T$ is even, so that $\sum^{T}_{t=1}a_{t}=\textstyle\sum^{T/2}_{k=1}\left(a_{2k}+a_{2k-1}\right)$, and we have $\displaystyle\textstyle\sum^{T}_{t=1}a_{t}$ $\displaystyle=\sum^{T/2}_{k=1}\left(1-x_{2k-1,i_{2k-1}}-x_{2k,i_{2k}}+x_{2k-1,i_{2k-1}}(1-x_{2k,i_{2k-1}})+x_{2k,i_{2k}}(1-x_{2k+1,i_{2k}})\right)$ $\displaystyle\geq\textstyle\sum^{T/2}_{k=1}\left(1-x_{2k-1,i_{2k-1}}-x_{2k,i_{2k}}+x_{2k-1,i_{2k-1}}(1-x_{2k,i_{2k-1}})\right).$ From the definition of $i_{2k}$ in Eq. (44) we have $x_{2k,i_{2k}}\leq(1-x_{2k,i_{2k-1}})$ for any $k\in[T/2]$. Thus, $\displaystyle\textstyle\sum^{T}_{t=1}a_{t}\geq\textstyle\sum^{T/2}_{k=1}\left(1-x_{2k-1,i_{2k-1}}-x_{2k,i_{2k}}+x_{2k-1,i_{2k-1}}x_{2k,i_{2k}}\right)=\sum^{T/2}_{k=1}\left(1-x_{2k-1,i_{2k-1}}-x_{2k,i_{2k}}\left(1-x_{2k-1,i_{2k-1}}\right)\right)$ $\displaystyle\geq\textstyle\sum^{T/2}_{k=1}\left(1-x_{2k-1,i_{2k-1}}-\frac{1}{2}\left(1-x_{2k-1,i_{2k-1}}\right)\right)=\sum^{T/2}_{k=1}\left(\frac{1}{2}-\frac{1}{2}x_{2k-1,i_{2k-1}}\right)\geq\frac{T}{8}.$ The second and third inequalities are obtained using $x_{t,i_{t}}\leq\frac{1}{2}$ for every timeslot $t\in[T]$, a direct result of the definition of $i_{t}$ in Eq. (44). We combine the above lower bound with Eq. (46) to obtain $\mathrm{E\mbox{-}Regret}_{T}(\mathcal{A},\Xi)\geq\frac{T}{8}$.∎

### B.4. Family of Coupling Schemes with Sublinear Update Cost

The following theorem provides a sufficient condition for the sublinearity of the expected total update cost of the random cache states $\\{\boldsymbol{z}_{t}\\}^{T}_{t=1}$ obtained through a rounding scheme $\Xi$ from the input fractional states $\\{\boldsymbol{x}_{t}\\}^{T}_{t=1}$.

###### Theorem 1.
Consider an OMD algorithm and a joint distribution of $(\boldsymbol{z}_{t},\boldsymbol{z}_{t+1})$ that satisfy (a) $\mathbb{E}[\boldsymbol{z}_{t}]=\boldsymbol{x}_{t}$ and $\mathbb{E}[\boldsymbol{z}_{t+1}]=\boldsymbol{x}_{t+1}$, and (b) $\mathbb{E}\left[\mathrm{UC}_{\boldsymbol{r}_{t}}({\boldsymbol{z}_{t},\boldsymbol{z}_{t+1}})\right]=\operatorname{\mathcal{O}}\left(\left\lVert\boldsymbol{x}_{t+1}-\boldsymbol{x}_{t}\right\rVert_{1}\right)$. This algorithm incurs an expected service cost equal to the service cost of the fractional sequence. Moreover, if $\eta=\Theta\left({\frac{1}{\sqrt{T}}}\right)$, the algorithm also has $\operatorname{\mathcal{O}}\left(\sqrt{T}\right)$ expected update cost and hence $\operatorname{\mathcal{O}}\left(\sqrt{T}\right)$ extended regret.

###### Proof.

Consider that the sequence $\\{\boldsymbol{x}_{t}\\}^{T}_{t=1}$ is generated by an OMD algorithm configured with a mirror map $\Phi$ that is $\rho$-strongly convex w.r.t. a norm $\left\lVert\,\cdot\,\right\rVert$. Assume that we can find a joint distribution of $(\boldsymbol{z}_{t},\boldsymbol{z}_{t+1})$ satisfying $\mathbb{E}\left[\mathrm{UC}_{\boldsymbol{r}_{t}}({\boldsymbol{z}_{t},\boldsymbol{z}_{t+1}})\right]=\operatorname{\mathcal{O}}\left(\left\lVert\boldsymbol{x}_{t+1}-\boldsymbol{x}_{t}\right\rVert_{1}\right)$, where $\mathbb{E}[\boldsymbol{z}_{t}]=\boldsymbol{x}_{t}$ and $\mathbb{E}[\boldsymbol{z}_{t+1}]=\boldsymbol{x}_{t+1}$. Then there exists a constant $\gamma_{1}>0$ such that $\mathbb{E}\left[\mathrm{UC}_{\boldsymbol{r}_{t}}(\boldsymbol{z}_{t},\boldsymbol{z}_{t+1})\right]\leq\gamma_{1}\left\lVert\boldsymbol{x}_{t+1}-\boldsymbol{x}_{t}\right\rVert_{1}$.
Moreover, by the equivalence of norms, there exists $\gamma_{2}>0$ such that $\gamma_{1}\left\lVert\boldsymbol{x}_{t+1}-\boldsymbol{x}_{t}\right\rVert_{1}\leq\gamma_{1}\gamma_{2}\left\lVert\boldsymbol{x}_{t+1}-\boldsymbol{x}_{t}\right\rVert$, and this gives (47) $\displaystyle\mathbb{E}\left[\mathrm{UC}_{\boldsymbol{r}_{t}}(\boldsymbol{z}_{t},\boldsymbol{z}_{t+1})\right]\leq\gamma_{1}\gamma_{2}\left\lVert\boldsymbol{x}_{t+1}-\boldsymbol{x}_{t}\right\rVert.$ As $\Phi$ is $\rho$-strongly convex w.r.t. the norm $\left\lVert\,\cdot\,\right\rVert$, $\displaystyle D_{\Phi}(\boldsymbol{x}_{t},\boldsymbol{y}_{t+1})=\Phi(\boldsymbol{x}_{t})-\Phi(\boldsymbol{y}_{t+1})-{\nabla\Phi(\boldsymbol{y}_{t+1})}^{T}(\boldsymbol{x}_{t}-\boldsymbol{y}_{t+1})$ $\displaystyle=\Phi(\boldsymbol{x}_{t})-\Phi(\boldsymbol{y}_{t+1})+{\nabla\Phi(\boldsymbol{x}_{t})}^{T}(\boldsymbol{y}_{t+1}-\boldsymbol{x}_{t})+{\left(\nabla\Phi(\boldsymbol{x}_{t})-\nabla\Phi(\boldsymbol{y}_{t+1})\right)}^{T}(\boldsymbol{x}_{t}-\boldsymbol{y}_{t+1})$ (48) $\displaystyle\leq-\frac{\rho}{2}\left\lVert\boldsymbol{x}_{t}-\boldsymbol{y}_{t+1}\right\rVert^{2}+\eta g^{T}_{t}(\boldsymbol{x}_{t}-\boldsymbol{y}_{t+1})\leq-\frac{\rho}{2}\left\lVert\boldsymbol{x}_{t}-\boldsymbol{y}_{t+1}\right\rVert^{2}+\eta\left\lVert\boldsymbol{x}_{t}-\boldsymbol{y}_{t+1}\right\rVert L\leq\frac{\eta^{2}L^{2}}{2\rho}.$ The above inequalities are obtained using, respectively, the strong convexity of $\Phi$ together with the update rule, the Cauchy-Schwarz inequality, and the inequality $ax-bx^{2}\leq\max_{x}ax-bx^{2}=a^{2}/4b$ as in the last step of the proof of [12, Theorem 4.2].
We have (49) $\displaystyle\textstyle\left\lVert\boldsymbol{x}_{t+1}-\boldsymbol{x}_{t}\right\rVert$ $\displaystyle\textstyle\leq\sqrt{\frac{2}{\rho}D_{\Phi}(\boldsymbol{x}_{t},\boldsymbol{x}_{t+1})}\leq\sqrt{\frac{2}{\rho}D_{\Phi}(\boldsymbol{x}_{t},\boldsymbol{y}_{t+1})}\stackrel{{\scriptstyle\eqref{eq:bounding_bregman_step}}}{{\leq}}\sqrt{2\eta^{2}\frac{L^{2}}{2\rho^{2}}}\leq\frac{L\eta}{\rho}.$ The first inequality is obtained using the strong convexity of $\Phi$, and the second using the generalized Pythagorean inequality[14, Lemma 11.3]. We combine Eq. (49) and (47) to obtain $\mathbb{E}\left[\mathrm{UC}_{\boldsymbol{r}_{t}}(\boldsymbol{z}_{t},\boldsymbol{z}_{t+1})\right]\leq\gamma_{1}\gamma_{2}\left\lVert\boldsymbol{x}_{t+1}-\boldsymbol{x}_{t}\right\rVert\leq\gamma_{1}\gamma_{2}\frac{L\eta}{\rho}$. The total update cost is $\sum^{T-1}_{t=1}\mathbb{E}\left[\mathrm{UC}_{\boldsymbol{r}_{t}}(\boldsymbol{z}_{t},\boldsymbol{z}_{t+1})\right]\leq\gamma_{1}\gamma_{2}\frac{L\eta}{\rho}T$. When OMD has a fixed learning rate $\eta=\Theta\left(\frac{1}{\sqrt{T}}\right)$, we obtain $\sum^{T-1}_{t=1}\mathbb{E}\left[\mathrm{UC}_{\boldsymbol{r}_{t}}(\boldsymbol{z}_{t},\boldsymbol{z}_{t+1})\right]=\operatorname{\mathcal{O}}\left(\sqrt{T}\right)$. The expected service cost is $\mathbb{E}\left[\sum^{T}_{t=1}f_{t}(\boldsymbol{z}_{t})\right]=\sum^{T}_{t=1}f_{t}(\boldsymbol{x}_{t})=\operatorname{\mathcal{O}}\left(T\right)$; the first equality is obtained from the linearity of the expectation operator and the function $f_{\boldsymbol{r}_{t}}$, and the second equality is obtained using the bound in Eq. (30) with $\eta=\Theta\left(\frac{1}{\sqrt{T}}\right)$. ∎ ### B.5. Proof of Theorem 4 Lemma 2 and Lemma 4 guarantee that Algorithm 3 used with an OMD algorithm satisfies the hypothesis of Theorem 1 and, hence, provides sublinear extended regret. ###### Lemma 0. 
The random integral cache state $\boldsymbol{z}$ obtained by calling Algorithm 3 with fractional cache configuration input $\boldsymbol{x}\in\mathcal{X}$ satisfies $\boldsymbol{z}\in\mathcal{Z}$ and $\mathbb{E}_{\xi}[\boldsymbol{z}]=\boldsymbol{x}$. ###### Proof. We employ the shorthand notation $m_{i}=\sum^{i}_{j=1}x_{j}$ and $m^{-}_{i}=\lfloor m_{i}\rfloor$. The choice of $\xi$ (see Fig. 10) defines $k$ different thresholds $\xi,\xi+1,\dots,\xi+k-1$. For each threshold, we select the first item whose accumulated mass exceeds the threshold. As $m_{N}=k$, we are guaranteed to exceed all $k$ thresholds, and as $x_{i}\leq 1$, we are guaranteed to select one item for each threshold. Therefore $\boldsymbol{z}$ belongs to $\mathcal{Z}$. From Algorithm 3, for any $i\in\mathcal{N}$ we have $\mathbb{P}(z_{i}=1)=\mathbb{P}(\mathcal{I}_{i}\setminus\mathcal{I}_{i-1}=\\{i\\})=\mathbb{P}\left(m_{i}\geq\xi+|\mathcal{I}_{i-1}|\right)$. Figure 9. The support and distribution of the random variable $|\mathcal{I}_{i-1}|$ for $m_{i-1}\geq m_{i}^{-}$ (left) and $m_{i-1}<m_{i}^{-}$ (right). The blue shaded area represents the probability that $|\mathcal{I}_{i-1}|$ takes the value indicated below the corresponding image. This figure can be viewed as a subset of the rows in Fig. 10. Figure 10. The random integral cache configuration obtained by calling Online Rounding given the fractional cache state $\boldsymbol{x}$ and $\xi$ (left). When $\xi$ is kept fixed, the probability (over the initial choice of $\xi$) that the random integral cache configuration rounded from a new fractional state $\boldsymbol{x}^{\prime}=\boldsymbol{x}-\delta\boldsymbol{e}_{3}+\delta\boldsymbol{e}_{7}$ differs from the previous one is illustrated by the dashed areas (right). See Fig. 9 (left).
If $m_{i-1}\geq m^{-}_{i}$, then $|\mathcal{I}_{i-1}|\in\\{m^{-}_{i},m^{-}_{i}+1\\}$ and $\displaystyle\mathbb{P}\left(m_{i}\geq\xi+|\mathcal{I}_{i-1}|\right)$ $\displaystyle=\mathbb{P}\left(m_{i}\geq\xi+|\mathcal{I}_{i-1}|\,\,\big{|}\,\,|\mathcal{I}_{i-1}|=m^{-}_{i}\right)\mathbb{P}(|\mathcal{I}_{i-1}|=m^{-}_{i})$ (50) $\displaystyle+\mathbb{P}\left(m_{i}\geq\xi+|\mathcal{I}_{i-1}|\,\,\big{|}\,\,|\mathcal{I}_{i-1}|=m^{-}_{i}+1\right)\mathbb{P}(|\mathcal{I}_{i-1}|=m^{-}_{i}+1)$ (51) $\displaystyle=\frac{m_{i}-m_{i-1}}{(m^{-}_{i}+1-m_{i-1})}\cdot(m^{-}_{i}+1-m_{i-1})+0\cdot(m_{i-1}-m^{-}_{i})=x_{i}.$ See Fig. 9 (right). If $m_{i-1}<m^{-}_{i}$, then $|\mathcal{I}_{i-1}|\in\\{m^{-}_{i}-1,m^{-}_{i}\\}$ and $\displaystyle\mathbb{P}\left(m_{i}\geq\xi+|\mathcal{I}_{i-1}|\right)$ $\displaystyle=\mathbb{P}\left(m_{i}\geq\xi+|\mathcal{I}_{i-1}|\,\,\big{|}\,\,|\mathcal{I}_{i-1}|=m^{-}_{i}-1\right)\mathbb{P}(|\mathcal{I}_{i-1}|=m^{-}_{i}-1)$ (52) $\displaystyle+\mathbb{P}\left(m_{i}\geq\xi+|\mathcal{I}_{i-1}|\,\,\big{|}\,\,|\mathcal{I}_{i-1}|=m^{-}_{i}\right)\mathbb{P}(|\mathcal{I}_{i-1}|=m^{-}_{i})$ (53) $\displaystyle=1\cdot(m^{-}_{i}-m_{i-1})+\frac{m_{i}-m^{-}_{i}}{m_{i-1}-m^{-}_{i}+1}\cdot(m_{i-1}-m^{-}_{i}+1)=m_{i}-m_{i-1}=x_{i}.$ ∎ ###### Lemma 3. Consider a fractional cache configuration $\boldsymbol{x}^{\prime}$ obtained by an elementary mass movement of $\delta$ from $u\in\mathcal{N}$ to $v\in\mathcal{N}$ for configuration $\boldsymbol{x}\in\mathcal{X}$, i.e., $\boldsymbol{x}^{\prime}=\boldsymbol{x}-\delta\bm{e}_{u}+\delta\bm{e}_{v}$. Algorithm 3 outputs the random integral cache configurations $\boldsymbol{z}$ and $\boldsymbol{z}^{\prime}$, given the input fractional cache states $\boldsymbol{x}$ and $\boldsymbol{x}^{\prime}$, respectively.
The random integral configurations satisfy $\mathbb{E}_{\xi}\left[\left\lVert\boldsymbol{z}^{\prime}-\boldsymbol{z}\right\rVert_{1,\boldsymbol{w}^{\prime}}\right]\leq 2kN\left\lVert\boldsymbol{w}^{\prime}\right\rVert_{\infty}\delta$, where $\left\lVert\boldsymbol{x}\right\rVert_{1,\boldsymbol{w}^{\prime}}\triangleq\sum_{i\in\mathcal{N}}\left|x_{i}\right|w^{\prime}_{i}$, and note that $\mathrm{UC}_{\boldsymbol{r}_{t}}\left(\boldsymbol{z}_{t},\boldsymbol{z}_{t+1}\right)\leq\left\lVert\boldsymbol{z}_{t}-\boldsymbol{z}_{t+1}\right\rVert_{1,\boldsymbol{w}^{\prime}}$. ###### Proof. The probability that a random integral cache state $\boldsymbol{z}^{\prime}$ is changed w.r.t. $\boldsymbol{z}$ can be upper bounded as $\mathbb{P}(\boldsymbol{z}\neq\boldsymbol{z}^{\prime})\leq\sum_{i\in\mathcal{N}}\mathbb{P}(z_{i}\neq z^{\prime}_{i})$. Assume w.l.o.g. that $u<v$; then $\mathbb{P}(z_{i}\neq z^{\prime}_{i})=0$ for $i\in\mathcal{N}\setminus\\{u,u+1,\dots,v\\}$, and $\mathbb{P}(z_{i}\neq z^{\prime}_{i})=\delta$ for $i\in\\{u,u+1,\dots,v\\}$. We obtain $\mathbb{P}(\boldsymbol{z}\neq\boldsymbol{z}^{\prime})\leq\delta(v-u+1)$ (e.g., see Fig. 10). More generally, for $u\neq v$ we have $\mathbb{P}(\boldsymbol{z}\neq\boldsymbol{z}^{\prime})\leq\delta(|v-u|+1)$. We conclude that $\displaystyle\mathbb{E}\left[{\left\lVert\boldsymbol{z}^{\prime}-\boldsymbol{z}\right\rVert_{1,\boldsymbol{w}^{\prime}}}\right]$ $\displaystyle\leq\max_{(\boldsymbol{z},\boldsymbol{z}^{\prime})\in\mathcal{Z}^{2}}\left\lVert\boldsymbol{z}^{\prime}-\boldsymbol{z}\right\rVert_{1,\boldsymbol{w}^{\prime}}\cdot\mathbb{P}(\boldsymbol{z}\neq\boldsymbol{z}^{\prime})\leq 2k\left\lVert\boldsymbol{w}^{\prime}\right\rVert_{\infty}(|v-u|+1)\delta\leq 2kN\left\lVert\boldsymbol{w}^{\prime}\right\rVert_{\infty}\delta.\qed$ Figure 11. Coupling induced by Online Rounding Algorithm 3 when $\xi$ is fixed.
The flow $f_{i,j}$ is the joint probability $\mathbb{P}(\boldsymbol{z}_{t+1}=\zeta_{t+1,j},\boldsymbol{z}_{t}=\zeta_{t,i})$, so that the next state is $\zeta_{t+1,j}$ and the previously selected state is $\zeta_{t,i}$. ###### Lemma 4. The expected movement cost of the random integral cache states generated by Algorithm 3 is $\mathbb{E}_{\xi}\left[\mathrm{UC}_{\boldsymbol{r}_{t}}\left(\boldsymbol{z}_{t},\boldsymbol{z}_{t+1}\right)\right]=\operatorname{\mathcal{O}}\left(\left\lVert\boldsymbol{x}_{t}-\boldsymbol{x}_{t+1}\right\rVert_{1}\right)$, when $\xi$ is sampled once u.a.r. from the interval $[0,1]$ and then fixed for $t\in[T]$. ###### Proof. The general fractional movement caused by a policy $\mathcal{A}$ changes the cache state from fractional state $\boldsymbol{x}_{t}\in\mathcal{X}$ to $\boldsymbol{x}_{t+1}\in\mathcal{X}$, and we denote by $\mathcal{J}=\left\\{i\in\mathcal{N}:x_{t+1,i}-x_{t,i}>0\right\\}$ the set of components that have a fractional increase. We have $\boldsymbol{x}_{t+1}=\boldsymbol{x}_{t}+\sum_{j\in\mathcal{J}}\phi_{j}e_{j}-\sum_{i\in\mathcal{N}\setminus\mathcal{J}}\phi_{i}e_{i}$, where $\phi_{i},i\in\mathcal{N}$ is the absolute fractional change in component $i$ of the cache. Remark that we have $\left\lVert\boldsymbol{x}_{t+1}-\boldsymbol{x}_{t}\right\rVert_{1,\bm{w}^{\prime}}=\sum_{i\in\mathcal{N}}w^{\prime}_{i}\phi_{i}$. From the capacity constraint we know that $\sum_{i\in\mathcal{N}\setminus\mathcal{J}}\phi_{i}=\sum_{j\in\mathcal{J}}\phi_{j}$. If we want to decompose this general fractional change to elementary operations, then we need to find a flow $\left[\delta_{i,j}\right]_{(i,j)\in(\mathcal{N}\setminus\mathcal{J})\times\mathcal{J}}$ that moves $\sum_{j\in\mathcal{J}}\phi_{j}$ mass from the components in $\mathcal{N}\setminus\mathcal{J}$ to those in $\mathcal{J}$. This requires at most $N-1$ elementary operations. We define the map $\nu:\mathcal{N}^{2}\to\mathbb{N}$ that provides an order on the sequence of elementary operations.
Let $\boldsymbol{z}^{\nu(i,j)}$ be the random cache state that could have been sampled after the $\nu(i,j)$-th elementary operation where $\mathbb{E}\left[\boldsymbol{z}^{\nu(i,j)}\right]=\boldsymbol{x}^{\nu(i,j)}$, and the total number of operations is denoted by $|\nu|\leq N-1$. Note that by definition $\boldsymbol{z}^{|\nu|}=\boldsymbol{z}_{t+1}$, and we take $\boldsymbol{z}^{0}=\boldsymbol{z}_{t}$. For each of these operations we pay in expectation at most $2kN\left\lVert\boldsymbol{w}^{\prime}\right\rVert_{\infty}{\delta_{i,j}}$ update cost from Lemma 3. Then the total expected movement cost is: $\displaystyle\textstyle\mathbb{E}_{\xi}\big{[}\mathrm{UC}_{\boldsymbol{r}_{t}}(\boldsymbol{z}_{t}$ $\displaystyle,\boldsymbol{z}_{t+1})\big{]}\leq\textstyle\mathbb{E}_{\xi}[\left\lVert\boldsymbol{z}_{t+1}-\boldsymbol{z}_{t}\right\rVert_{1,\boldsymbol{w}^{\prime}}]=\mathbb{E}_{\xi}\left[\left\lVert\sum^{|\nu|-1}_{l=0}\left(\boldsymbol{z}^{l+1}-\boldsymbol{z}^{l}\right)\right\rVert_{1,\boldsymbol{w}^{\prime}}\right]\leq\sum^{|\nu|-1}_{l=0}\mathbb{E}_{\xi}\left[\left\lVert\boldsymbol{z}^{l+1}-\boldsymbol{z}^{l}\right\rVert_{1,\boldsymbol{w}^{\prime}}\right]$ $\displaystyle\leq\textstyle\sum_{i\in\mathcal{N}\setminus\mathcal{J}}\sum_{j\in\mathcal{J}}2k\left\lVert\boldsymbol{w}^{\prime}\right\rVert_{\infty}N\delta_{i,j}=2kN\left\lVert\boldsymbol{w}^{\prime}\right\rVert_{\infty}\sum_{i\in\mathcal{N}\setminus\mathcal{J}}\phi_{i}\leq kN\left\lVert\boldsymbol{w}^{\prime}\right\rVert_{\infty}\left\lVert\boldsymbol{x}_{t+1}-\boldsymbol{x}_{t}\right\rVert_{1}.$ The update cost is thus $\operatorname{\mathcal{O}}\left(\left\lVert\boldsymbol{x}_{t+1}-\boldsymbol{x}_{t}\right\rVert_{1}\right)$ in expectation. ∎
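To make the threshold rule of Algorithm 3 concrete, here is a minimal Python sketch (the function and variable names are mine, not the paper's): given a fractional state $\boldsymbol{x}$ with $\sum_{i}x_{i}=k$ and an offset $\xi\in[0,1)$, it selects the first item whose accumulated mass reaches each of the thresholds $\xi,\xi+1,\dots,\xi+k-1$.

```python
def online_round(x, xi):
    """Threshold rounding sketch: x is a fractional cache state with
    sum(x) == k and 0 <= x_i <= 1; xi is an offset in [0, 1).  For each
    threshold xi, xi + 1, ..., xi + k - 1, select the first item whose
    accumulated mass m_i = x_1 + ... + x_i reaches it."""
    k = round(sum(x))
    z = [0] * len(x)
    mass = 0.0
    selected = 0  # plays the role of |I_{i-1}| in the proof above
    for i, x_i in enumerate(x):
        mass += x_i
        # since x_i <= 1, each item can cross at most one new threshold
        if selected < k and mass >= xi + selected:
            z[i] = 1
            selected += 1
    return z
```

Averaging the output over $\xi$ recovers the fractional state, i.e., the unbiasedness property $\mathbb{E}_{\xi}[\boldsymbol{z}]=\boldsymbol{x}$ proved above, while every realization places exactly $k$ items.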
# Mating of discrete trees and walks in the quarter-plane Philippe Biane<EMAIL_ADDRESS>Institut Gaspard Monge UMR CNRS - 8049 Université Gustave Eiffel 5 boulevard Descartes, 77454 Champs-Sur-Marne FRANCE ###### Abstract. We give a general construction of triangulations starting from a walk in the quarter plane with small steps, which is a discrete version of the mating of trees. We use a special instance of this construction to give a bijection between maps equipped with a rooted spanning tree and walks in the quarter plane. We also show how the construction allows one to recover several known bijections between such objects in a uniform way. ## 1\. Introduction Mating of polynomials originates in complex dynamics, where one can match two Julia sets in order to build a topological sphere or a surface, see e.g. [2] and https://www.math.univ-toulouse.fr/ cheritat/MatMovies/ for nice pictures and movies. This includes in particular the case of Julia sets which are topologically real trees. This construction has been introduced in probability by Le Gall and Paulin [11] for studying the topology of the Brownian map and then used, under the name “mating of trees”, by Duplantier, Miller and Sheffield [8] in quantum gravity, followed by many others. A discrete version of this construction already appears in Mullin’s bijection (see [13]) and has been further used recently by several authors, see e.g. Gwynne, Holden and Sun [9] for a recent overview. The basic idea is to consider a walk in the quarter plane $x,y\geq 0$, with steps in the set $\\{(1,0),(-1,0),(0,1),(0,-1)\\}$, starting and ending at $0$, and to write the vertical and horizontal coordinates of the walk as two opposite Motzkin paths running vertically.
See below, with one Motzkin path on the left for the horizontal coordinate and one on the right for the vertical coordinate: then mate the two Motzkin paths by drawing horizontal lines between vertices: The Motzkin paths are then contracted into trees in the usual way, while the upper and lower boundaries of the rectangle are identified. The resulting map is a planar triangulation. I will give a more detailed explanation in the following sections. In this note I use a generalization of these ideas to walks with small steps, i.e. with steps taken in the set $\\{(1,0),(-1,0),(0,1),(0,-1),(-1,-1),(1,-1),(-1,1),(1,1)\\}$, which involves constructing a map with faces of degrees three and four and then contracting the faces of degree four. As we shall see, the generality of the construction and the many variants one can produce from it make it a versatile tool for producing bijections between specific classes of walks and of maps. Again, the precise definitions will be given in the next section. Using this I give a bijection between a certain class of walks in the quarter plane (which I call reversed $Y$-walks, see below) and maps equipped with a rooted spanning tree in which the degrees of the vertices are encoded by the length of the steps of the walk. This bijection is quite different from Mullin’s bijection but bears some connection with blossoming bijections [13]. These ideas also allow us to reinterpret several known bijections between walks and triangulations, or more general planar maps. In particular we will show how the following bijections fit into this framework: * • The bijection of Mullin, between walks in the quarter plane with straight steps and triangulations having a Hamiltonian cycle. * • A bijection of Bernardi between Kreweras walks and triangulations with a spanning tree. * • A bijection of Kenyon, Miller, Sheffield and Wilson between walks and maps with a bipolar orientation.
* • A bijection of Li, Sun and Watson between tandem walks satisfying some further conditions and Schnyder woods. Beyond the particular results obtained, we think that the main interest of this paper lies in its general philosophy and its potential to produce a wealth of bijections between walks and maps. This paper is organized as follows. In the next section I give a general algorithm for producing planar triangulations, starting from a walk with small steps in the quarter plane. In order to obtain precise bijections one needs to specify a number of parameters in this construction. In section 3 I introduce reversed $Y$-walks in the quarter plane and I give a new bijection between these walks and planar maps equipped with a spanning tree. Then, in section 4, I show how to recover the bijections listed above using similar considerations. ## 2\. Associating a triangulation to a walk in the quarter plane ### 2.1. Trees and Dyck paths There is a well known way to associate a rooted planar tree to a Dyck path by matching up and down steps: One can think that one cuts out from the plane the striped area under the Dyck path and sews the up and down steps to produce the tree embedded into the plane. One can generalize this construction to Motzkin paths by shrinking each horizontal step to a point. ### 2.2. One can also consider paths which do not start or end at $0$ and build a forest of trees, rooted on a $V$-shaped path: Some down steps (before the minimum of the path is reached) or upward steps (after the minimum) are left unmatched and are indicated in red on the picture. They form the “$V$” on which the forest is rooted. ### 2.3.
The basic construction We consider walks in the quarter plane with small steps: the coordinates of the steps belong to $\\{-1,0,1\\}$, thus we get $8$ distinct nonzero steps: We can distinguish straight steps: $(1,0),(0,1),(-1,0),(0,-1)$ from oblique steps: The problem of enumerating such walks has been considered in many papers, starting from the seminal work [7]. Consider a two-dimensional walk with small steps, starting and ending at the origin, which remains in the quarter plane $x,y\geq 0$. Here is an example with twelve steps, ten of which are straight and two oblique: $(1,0);(0,1);(-1,1);(1,0);(0,-1);(1,0);(0,-1);(0,1);(-1,0);(-1,0);(1,0);(-1,-1).$ The walk is shown below: The projections of this walk on the coordinate axes are two Motzkin paths with respective steps $1;0;-1;1;0;1;0;0;-1;-1;1;-1$ and $0;1;1;0;-1;0;-1;1;0;0;0;-1$: We draw the two paths vertically in an opposite way, with the horizontal coordinate on the left and the vertical coordinate on the right, the paths running from bottom to top. The mating consists in drawing horizontal lines between the vertices of the two paths, as below: Between the two Motzkin paths we then have a succession of quadrilaterals, each one corresponding to a step of the walk. These quadrilaterals have several types: some of them, corresponding to straight steps, have one of their sides vertical, while the others, corresponding to oblique steps, have tilted sides. Here are the straight steps, numbered from bottom to top, and the shaded oblique steps. We now contract the two Motzkin paths into two trees as in section 2.1. Again we can imagine that we cut out the area below each Motzkin path and sew the boundaries.
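The projection step can be made concrete with a short, purely illustrative Python sketch (the function names are mine): it splits a walk into its two Motzkin-path projections and checks that the twelve-step example above stays in the quarter plane and returns to the origin.

```python
def motzkin_projections(walk):
    """Split a small-step walk into its two Motzkin-path projections:
    the sequences of horizontal and vertical displacements of its steps."""
    return [dx for dx, dy in walk], [dy for dx, dy in walk]

def is_quarter_plane_excursion(walk):
    """Check that the walk stays in x, y >= 0 and starts and ends at (0, 0)."""
    x = y = 0
    for dx, dy in walk:
        x, y = x + dx, y + dy
        if x < 0 or y < 0:
            return False
    return (x, y) == (0, 0)

# the twelve-step example walk from the text
example_walk = [(1, 0), (0, 1), (-1, 1), (1, 0), (0, -1), (1, 0),
                (0, -1), (0, 1), (-1, 0), (-1, 0), (1, 0), (-1, -1)]
```

Applied to `example_walk`, the two projections are exactly the Motzkin step sequences listed above.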
In this contraction the quadrilaterals corresponding to straight steps become triangles as, for example: Once we have made these contractions we obtain a planar map whose faces are triangles (corresponding to straight steps), quadrilaterals (corresponding to oblique steps) and an external face with two sides, the two horizontal sides of the initial rectangle. If we identify these two sides by contracting the external face, we obtain a planar map, with a distinguished edge, the one corresponding to the two sides of the rectangle, on which two trees are drawn, namely the images of the two Motzkin paths. Figure 1 shows the map obtained from the above walk, where the triangles are numbered and the two quadrilaterals are shaded, while the two trees are depicted in red and blue and the distinguished edge (corresponding to the upper and lower boundaries of the rectangle) is dashed. Figure 1. Contraction of the Motzkin paths. In order to obtain a planar triangulation we will contract the quadrilaterals corresponding to oblique steps: again imagine that one cuts out one of these quadrilaterals from the plane, then identifies two opposite vertices, pairs the edges correspondingly and sews them. More precisely, consider such a quadrilateral: There are two ways to contract this quadrilateral along one of its diagonals to produce two segments, either like this by identifying opposite vertices $v$ and $w$ and accordingly sewing the edge $uv$ with $uw$ and the edge $vz$ with $wz$: or like this, identifying $u$ and $z$: Let $o$ be the number of oblique steps in the walk. From the map corresponding to the walk we can obtain $2^{o}$ triangulations by removing the quadrilaterals and, for each one, sewing its boundaries according to the two possibilities. Figure 2 shows the result of a choice of such contractions on the map of Figure 1. Figure 2. Contracting the quadrilaterals of Figure 1. The construction presented above is very general, but not one-to-one.
Indeed one can associate to each path $2^{o}$ triangulations; moreover, some triangulations may be obtained by more than one of these constructions. In the following I will consider specific instances of the construction, where one restricts the class of steps which are available for the walk and one gives an explicit algorithm to decide, for each quadrilateral, which of the two contractions is made. In order to visualize the maps obtained by these constructions we will find it useful to depict also the dual map as in Figure 3, where we indicate the way we contract the quadrilateral by drawing a dashed line between the opposite vertices which have been identified, then draw edges connecting the dual vertices. Figure 3. The triangulation and its dual cubic map Before going into the description of the examples, I describe some possible variations on this construction. ### 2.4. Variations #### 2.4.1. Changing the end point It is possible to generalize the construction to paths which do not start or end at zero. Also one can relax the condition that the walk remains in the quarter plane. In this case the two paths are contracted as explained in section 2.2 and there remain edges which are not matched. Together with the upper and lower horizontal sides, they form an external boundary. Here is an example, where we show vertical lines between matched edges of the paths. There remain three unmatched edges on the left and two on the right: At this point, one can either keep the unmatched edges and consider that they bound an exterior face, or identify these edges pairwise, if there is an even number of them. We will see some examples in the following. #### 2.4.2. Changing the set of steps It is also possible to use other types of steps.
If one uses steps of type $(i,j)$ then one can consider the path obtained by replacing the step $(i,j)$ by a sequence of $|i|$ steps of type $(\operatorname{sgn}(i),0)$ and $|j|$ steps of type $(0,\operatorname{sgn}(j))$, and then erase the horizontal lines in the polygon thus formed to get a face with $|i|+|j|+2$ edges on the boundary. For example here, in the case $i=3,j=4$, we get the following picture: When we erase the internal edges we get a face with nine edges. There are four edges on the right, three on the left, and two horizontal ones (remember that vertical edges are contracted into points when we contract the paths). Observe that one can change the order in which the $|i|+|j|$ steps are made without changing the resulting map. One has to be careful that there is a potential conflict between such rules and the contracting rules for oblique steps. We will nevertheless see some interesting examples. #### 2.4.3. Other contractions In this paper I consider only contractions of quadrilaterals, according to one of the two non-crossing pairings of its sides. It would be possible to consider also contractions of larger faces, as constructed in section 2.4.2. Such contractions would be obtained from pairings of faces with an even number of sides. With non-crossing pairings one obtains planar maps, but it would be also possible to construct maps of higher genus by making more general pairings. However, we will not explore such possibilities in this paper. #### 2.4.4. Pattern avoiding If we consider the allowed steps of the walk as forming an alphabet then the walk can be considered as a word on this alphabet. It is possible to consider families of walks that are constrained to avoid certain patterns. Again we will encounter such examples. #### 2.4.5. Symmetries There are some obvious symmetries in this construction. For example one can make a symmetry with respect to a vertical axis in pictures like Figure 3.
This corresponds to exchanging the horizontal and vertical directions of the walks, in other words to make a reflection with respect to the diagonal line $x=y$ in the plane. Symmetry with respect to a horizontal axis corresponds to considering walks with opposite steps, in reverse order. ## 3\. Maps with a spanning tree ### 3.1. In this section we consider walks with steps in the set $\\{(0,1),(-1,-1),(1,-1)\\}$. For convenience we will give a name to these walks and refer to them as “reversed $Y$-walks”, or $rY$-walks (note that, for typographical reasons, the walks with opposite steps deserve the name of $Y$-walks). It is easy to count the number of $rY$-walks in the quarter plane, starting and ending at $0$. Indeed the vertical coordinate of the walk gives a Dyck path, since the vertical coordinates of steps are either $1$ or $-1$. Choose a Dyck path of length $4n$ for the vertical coordinate and let $i_{1},\ldots,i_{2n}$ be the indices of the down steps. The horizontal coordinate moves only when a down step in the vertical direction is made, therefore the walk is specified by choosing another Dyck path of length $2n$, which describes the horizontal moves at times $i_{1},\ldots,i_{2n}$. It follows that the number of such walks is $C_{2n}C_{n}$ where $C_{l}=\frac{1}{l+1}{2l\choose l}$, the number of Dyck paths of length $2l$, a Catalan number. There is a simple model of maps which is counted by this number. Consider a cubic planar map (i.e. a map with vertices of degree three) with $2n$ vertices and thus $3n$ edges. Take $n+1$ of the edges and cut them in two, in order to obtain a tree, with the half-edges being the leaves of the tree. We call this tree a “complete” spanning tree. If we choose one of these leaves to root the tree we get a planar binary tree with $2n$ internal vertices and $2n+1$ leaves, plus the root leaf.
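The count $C_{2n}C_{n}$ obtained above can be checked by brute force for small $n$; the following Python sketch is purely illustrative (the function names are mine, not part of the construction):

```python
from itertools import product
from math import comb

RY_STEPS = [(0, 1), (-1, -1), (1, -1)]  # the reversed-Y step set

def count_quarter_plane_excursions(steps, length):
    """Count walks of the given length, over the given step set, that stay
    in the quarter plane x, y >= 0 and start and end at the origin."""
    count = 0
    for walk in product(steps, repeat=length):
        x = y = 0
        for dx, dy in walk:
            x, y = x + dx, y + dy
            if x < 0 or y < 0:
                break  # left the quarter plane: discard this walk
        else:
            count += (x, y) == (0, 0)
    return count

def catalan(n):
    return comb(2 * n, n) // (n + 1)
```

For $n=1$ this gives $C_{2}C_{1}=2$ walks of length $4$, and for $n=2$ it gives $C_{4}C_{2}=28$ walks of length $8$, in agreement with the formula.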
This construction is reminiscent of so-called “blossoming bijections” (see [13], or [1] for a recent reference), except that we do not orient the leaves of the tree. The map from which we started can be recovered by pairing these $2n+2$ leaves, to reconstruct the edges which have been cut. In Figure 4(a) I show a cubic map, and in Figure 4(b) a complete spanning tree. The root half-edge is in blue, with an arrow pointing at it, and the other half-edges are in red. In Figure 4(c) I show the planar binary tree in standard representation, with the matching of the leaves giving back the original map. Figure 4. A cubic map with a complete spanning tree and a marked leaf Such objects, planar cubic maps on $2n$ vertices, with a complete spanning tree with a marked leaf, are thus in bijection with pairs $(T,P)$ where $T$ is a binary tree with $2n$ internal vertices and a leaf added to the root, and $P$ is a pairing of the $2n+2$ leaves. The leaves can be ordered cyclically from the root by going clockwise around the tree; thus the pairings are in bijection with non-crossing pairings of $[1,2n+2]$. There are $C_{2n}$ planar binary trees. Since there are $C_{n+1}$ non-crossing pairings we get $C_{2n}C_{n+1}$ pairs $(T,P)$. This is not quite $C_{2n}C_{n}$, but if we add the constraint that the root leaf has to pair with its immediate successor then we get the right number $C_{2n}C_{n}$. We call such an object a “special” planar cubic map on $2n$ vertices, with a complete spanning tree with a marked leaf. For example the cubic map depicted in Figure 4 is special since the root leaf is paired with its successor. We will see that our construction provides a geometric realization of this bijection between walks and such maps. We will also extend it to a walk model for “non-special” cubic maps with a complete spanning tree rooted at some half-edge. ### 3.2.
The construction The $rY$-walks have two types of oblique steps, so we must give the rules for contracting the associated quadrilaterals. These rules are simple, we show them below. ###### Contraction Rules 3.1. i.e. in both cases identify the NW and SE corners. ### 3.3. Some bijections There is a simple and well known bijection, which we shall denote by $\mathcal{T}$, between rooted planar binary trees with $n$ vertices and Dyck paths with $2n$ steps, which is best described recursively. Write a Dyck path as a sequence of up $(u)$ and down $(d)$ steps. Then $\mathcal{T}(ud)$ is the unique rooted planar binary tree with one vertex. For any Dyck path $\mathcal{D}$, of length $2n$, write it uniquely as $\mathcal{D}=u\mathcal{D}_{1}d\mathcal{D}_{2}$, where $\mathcal{D}_{1}$ and $\mathcal{D}_{2}$ are Dyck paths of lengths $2k-2$ and $2n-2k$. Here $2k$ is the time of first return to $0$ for $\mathcal{D}$. Then $\mathcal{T}(\mathcal{D})$ is the binary tree whose left branch is $\mathcal{T}(\mathcal{D}_{1})$ and whose right branch is $\mathcal{T}(\mathcal{D}_{2})$. This tree can be obtained easily from our construction as follows: write the Dyck path on the right but leave the left extremities of the quadrilaterals unfinished, so that we do not make the identifications on the left of the picture. This is shown in Figure 5(a). The cut dual map is a tree and it is obvious that its construction satisfies the same recursion as the bijection $\mathcal{T}$. Observe that the leaves of the tree (in red in the picture) appear ordered on the left of the picture (except the root leaf and its immediate successor). There is also a simple and well known bijection between Dyck paths and noncrossing pairings which consists in pairing up and down steps. For example, the non-crossing pairing $(1,8),(2,3),(4,7),(5,6)$ corresponds to the Dyck path $uuduuddd$. Figure 5. $rY$-walks and their associated maps. (a) Construction of the spanning tree using the vertical coordinates.
(b) Matching the leaves using the horizontal coordinates. When we complete the picture on the left, as in Figure 5(b) the Dyck path corresponding to the horizontal coordinate determines the non-crossing pairing of the leaves. It remains to pair the root leaf with the leaf going through the upper side of the rectangle to obtain a special cubic map. Conversely, given a special cubic map with a complete, rooted spanning tree, the spanning tree gives us the vertical coordinate of an $rY$-walk, while the pairing of the leaves gives us the horizontal coordinate. Finally one has the following. ###### Theorem 3.2. The construction with Contraction Rules 3.1 gives a bijection between $rY$-walks in the quarter plane, starting and ending at $0$, with $4n$ steps, and special cubic maps with a complete spanning tree, rooted at some half-edge, with $2n$ vertices. We have seen that, in this construction, the root leaf is paired with its immediate successor. In order to obtain all possible pairings, let us instead consider walks starting at $0$ but ending at the point $(2,0)$. Making the contractions of paths as in section 2.2, we end up with Figure 6(a), where there are four half-edges unmatched in the dual map: the half-edges corresponding to the upper and lower sides of the rectangle and two half-edges coming from the two up steps on the left which are not matched with a down step. We then pair these half-edges so that the upper and lower horizontal sides are not matched, as in Figure 6(b). This gives a cubic map with a complete spanning tree, in which the root is not paired with its immediate successor. Figure 6. Non special maps. (a) Map with four unpaired leaves. (b) Matching the remaining leaves. Putting this together with the preceding construction, we get: ###### Theorem 3.3.
The construction with Contraction Rules 3.1 gives a bijection between $rY$-walks on the quarter plane starting at $0$ and ending at $0$ or $(2,0)$, with $4n$ steps, and cubic maps with a rooted complete spanning tree, with $2n$ vertices. ### 3.4. Variants Instead of the step $(0,1)$ we could consider $(0,2)$. Again it is easy to enumerate all walks with step set $\\{(-1,-1),(1,-1),(0,2)\\}$ which start and end at $0$: in the vertical direction we have a walk with steps $2$ and $-1$ and an application of the cycle lemma gives the number of such walks with $3n$ steps, which is $\frac{1}{2n+1}{3n\choose n}=C^{(2)}_{n}$, a Fuss-Catalan number (see e.g. [5]). On the set of $2n$ negative steps we have to choose a Dyck path of length $2n$, which gives a Catalan number $C_{n}$. The final count is $C^{(2)}_{n}C_{n}$. Again our construction gives a bijection between walks with step set $\\{-1,2\\}$ and ternary trees, as in Figure 7(a), which we can complete as in Figure 7(b). Figure 7. Quartic maps. ###### Theorem 3.4. The construction with Contraction Rules 3.1 gives a bijection between walks on the quarter plane with step set $\\{(0,2),(-1,-1),(1,-1)\\}$ starting and ending at $0$, with $3n$ steps, and special quartic maps with a complete spanning tree, with $n$ vertices. When extending to walks ending at $(2,0)$, we get a bijection with quartic maps with a complete rooted spanning tree. Obviously this can be further extended to a bijection between planar maps equipped with a spanning tree and a marked leaf, and walks whose steps belong to the infinite set $\\{(-1,-1),(1,-1),(0,k);k\geq 0\\}$ starting at $0$ and ending at $0$ or $(2,0)$. Observe that the vertical component of such a walk is a Łukasiewicz path. Applying the same algorithm as above recovers a well known bijection between Łukasiewicz paths and trees (see e.g. [15]), which extends the bijection between Dyck paths and binary trees. ###### Theorem 3.5.
The construction with Contraction Rules 3.1 gives a bijection between walks on the quarter plane with step set $\\{(-1,-1),(1,-1),(0,k),k\geq 0\\}$ starting at $0$ and ending at $0$ or $(2,0)$, and planar maps with vertices of degree $\geq 2$, with a complete spanning tree. In this bijection, steps of type $(0,k)$ of the walk correspond to vertices of degree $k+2$ of the map. Mullin’s construction (as explained in Schaeffer [14]) also provides a bijection between maps equipped with a spanning tree and walks in the quarter plane, but this is a quite different bijection: for example, the set of steps allowed in Mullin’s bijection is the set of straight steps. ## 4\. Recovering other bijections ### 4.1. Triangulations with a Hamiltonian cycle on the faces We consider a planar triangulation equipped with a Hamiltonian cycle of its dual graph. This amounts to choosing an ordering $f_{1},f_{2},\ldots,f_{n}$ of the faces of the triangulation such that, for all $i$, the faces $f_{i}$ and $f_{i+1}$ are adjacent (where $i+1$ is taken modulo $n$) and for each $i$ one chooses an edge $e_{i}$ in the common boundary of $f_{i}$ and $f_{i+1}$. Here is an example where the Hamiltonian cycle is in red, the faces are numbered from $1$ to $8$ and the edges $e_{i}$ are the edges crossed by the red path. The graph formed from the vertices of the triangulation and the edges which do not belong to the set $e_{1},e_{2},\ldots,e_{n}$ is made of two disjoint trees, and the Hamiltonian cycle goes around each of them, one of them being on its right and the other one on its left. Here is the picture: We go around the cycle and record the successive triangles. Each such triangle has exactly one side in one of the trees. If this side is in the left tree we record a $(1,0)$ step if the path goes up in the tree, and a $(-1,0)$ step if it goes down. If the side is in the right tree we record similarly a $(0,\pm 1)$ step.
Here is the final picture, where we draw the Hamiltonian cycle in red but not the whole dual map. It is immediate to see that we obtain in this way a walk in the quarter plane, with straight steps, starting and ending at $0$. This construction yields a bijection between walks with straight steps in the quarter plane, starting and ending at $0$, and planar triangulations equipped with a Hamiltonian cycle of the dual graph. This is well known and occurs in Mullin’s bijection between maps with a spanning tree and walks on the quarter plane, see e.g. [14], section 1.2. See also [9] for a recent overview of applications of this kind of bijection to random maps. In the remaining sections we will obtain examples with more flexibility by looking at classes of walks having oblique steps. ### 4.2. Kreweras walks and Bernardi’s bijection We now consider a bijection of Bernardi [5] (also used in [6]). A Kreweras walk has steps in the set $\\{a,b,c\\}$ where $a=(1,0),$ $b=(0,1)$ and $c=(-1,-1)$. It can be shown that Kreweras walks with $3n$ steps, which remain in the quarter plane, starting and ending at $0$, are enumerated by the formula $\frac{4^{n}}{(n+1)(2n+1)}{3n\choose n}$. This formula is explained in a bijective way in Bernardi [5]. We will see how to recover his bijection using our construction. We thus consider a Kreweras walk which remains in the quarter plane, starting and ending at $0$. When we do the construction of section 2.3, the steps of types $a$ or $b$ give triangles when we contract the Motzkin paths. The steps of type $c$ give rise to quadrilaterals as below: ###### Contraction Rules 4.1. Consider the sides $uw$ and $vz$ of the quadrilateral corresponding to a step of type $c$. In the contraction of the two Motzkin paths, each of these sides is matched with another segment below it; moreover this segment belongs to a step of type $a$ (for the $uw$ segment) and to a step of type $b$ (for the $vz$ segment).
In the enumeration $x_{1},x_{2},\ldots,x_{n}$ of the steps of the walk let $i$ and $j$, respectively, be the indices of these steps, thus $x_{i}=a$ and $x_{j}=b$. If $i<j$ then we identify $u$ and $z$ in the contraction of the quadrilateral $uvwz$. If $i>j$ then we identify $v$ and $w$. Here is the case of the path with sequence of steps $aabbccbac$, with the dual cubic map in green: Observe that the triangulation (and the dual cubic map) are loopless. Indeed a loop in the construction could be obtained only if, just after an $a$ or $b$ step, the $c$ step were contracted so that it has two sides in common with the preceding step: but the rules of contraction prevent this. Each vertex of the dual map has three half-edges, which we orient so that the half-edge corresponding to the base of the triangle is incoming and the ones corresponding to the other sides of the triangle are outgoing: If we keep only the edges of the dual map having half-edges with matching orientations we obtain a spanning tree like this: We will now see that the data of the loopless triangulation and the spanning tree of the dual are exactly the ones obtained from Bernardi’s bijection. The tree is a special kind of tree, called a depth tree in [5], but we will not need to go into this here. Let us sketch Bernardi’s construction and check that it is the same as ours. More details can be found in [5]. The construction is done step by step, by growing the dual cubic map and a spanning tree, using three mappings denoted $\varphi_{a},\varphi_{b},\varphi_{c}$. Observe that in [5] the Kreweras walk has in fact steps opposite to the ones used in this paper, but since Bernardi scans the steps of the walk in reverse order, the result is the same.
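As a quick sanity check, Kreweras walks can be counted by brute force for small $n$; the counts $2,16,192,\ldots$ match Kreweras’ classical formula $\frac{4^{n}}{(n+1)(2n+1)}{3n\choose n}$. The following sketch (added here purely as an illustration, not part of the construction) compares the exhaustive count with the closed formula:

```python
from itertools import product
from math import comb

# Kreweras steps: a = (1,0), b = (0,1), c = (-1,-1)
STEPS = {'a': (1, 0), 'b': (0, 1), 'c': (-1, -1)}

def kreweras_count(n):
    """Count Kreweras walks with 3n steps that stay in the quarter
    plane and start and end at the origin, by exhaustive search."""
    total = 0
    for word in product('abc', repeat=3 * n):
        x = y = 0
        ok = True
        for letter in word:
            dx, dy = STEPS[letter]
            x, y = x + dx, y + dy
            if x < 0 or y < 0:   # left the quarter plane
                ok = False
                break
        if ok and (x, y) == (0, 0):
            total += 1
    return total

def kreweras_formula(n):
    """Kreweras' count 4^n / ((n+1)(2n+1)) * binom(3n, n)."""
    return 4 ** n * comb(3 * n, n) // ((n + 1) * (2 * n + 1))

for n in range(1, 4):
    assert kreweras_count(n) == kreweras_formula(n)
```

The exhaustive search over $3^{3n}$ words is feasible only for tiny $n$, but that is enough to illustrate the enumeration being recovered bijectively in this section.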
The algorithm starts with a vertical arrow pointing outside the root: At each step a growing map (which we depict by a circle) is constructed, with half-edges pointing outside, one of them being endowed with an arrow pointing upwards: The mapping $\varphi_{a}$ consists in transforming the half-edge containing the arrow into an edge and adding on top of this edge a pair of half-edges, with the right half-edge carrying the arrow, like this: In case of a step $b$ the mapping $\varphi_{b}$ is similar but the arrow is on the left. Among all half-edges pointing out of the growing map some are on the right of the arrow and some on the left; in particular, if there is at least one half-edge on each side we can single out the ones which are closest to the arrow. Bernardi shows that in his construction one of them is an ancestor of the other in the growing tree. Let us call this ancestor $s$ and the other half-edge $t$. In case of a step $c$ the mapping $\varphi_{c}$ consists in joining the half-edge containing the arrow to $s$ in order to make an edge of the growing map, and taking $t$ to carry the new arrow. After the last step has been made, the remaining arrow is linked to the root vertex. The algorithm is illustrated below for the Kreweras walk $aabbccbac$. We denote the final step by $f$. It is now a simple matter to check that Bernardi’s bijection corresponds exactly to our construction. Indeed, by going up in the diagram associated to the walk we see that an application of $\varphi_{a}$ corresponds to a step of type $a$, and similarly for $\varphi_{b}$. For steps of type $c$ we have to check that our rule is the same as for Bernardi’s map $\varphi_{c}$. This follows from the fact, proved by Bernardi, that the two half-edge candidates for a pairing with the arrow are ordered in the tree: the one that is closest to the root must then be the one that has the lowest numbering in the walk. ### 4.3. Tandem walks, prographs and maps with a bipolar orientation #### 4.3.1.
A bijection between prographs and tandem walks We consider oriented planar graphs made of “products” with two inputs and one output (from bottom to top) and “coproducts” with one input and two outputs. The terminology comes from the theory of operads, cf. Borie [3]. One can use outputs as new inputs and get in this way configurations which are oriented from bottom to top. Prographs are such planar configurations with one input and one output. For example here is the only configuration with one product and one coproduct. A more complex example is given in Figure 8(a). One can connect the input and the output to $\infty$ and one obtains in this way a planar cubic graph, dual to a triangulation of the sphere. Prographs can be enumerated (see [3]): they are equinumerous with tandem walks, i.e., walks in the quarter plane, starting and ending at $0$, with steps in the set $a=(0,1),\ b=(1,-1),\ c=(-1,0)$. A tandem walk is a word in the alphabet $a,b,c$, and the condition to stay in the quarter plane is that $\sharp a\geq\sharp b\geq\sharp c$ for any prefix. This corresponds to the natural two-dimensional generalization of the ballot problem. There is a simple bijection going from tandem walks in the quarter plane, starting and ending at $0$, to the set of standard Young tableaux with rectangular shape, having three rows of the same length: the numbers in the first row are the positions of the $a$ letters in the word, those in the second row the positions of the $b$ letters, and those in the third row the positions of the $c$ letters. They can thus be counted using the hook formula: there are $\frac{2(3n)!}{n!(n+1)!(n+2)!}$ tandem walks, starting and ending at $0$, with $3n$ steps. Tandem walks can be realized as a subset of $rY$-walks. Indeed, if in a tandem walk we replace each occurrence of the step $c=(-1,0)$ by a step $(0,1)$ immediately followed by a step $(-1,-1)$, we obtain an $rY$-walk.
Thus we can refer to tandem walks as $rY$-walks with a forbidden pattern: no step $(-1,-1)$ is preceded by a step $(1,-1)$ or by another step $(-1,-1)$; equivalently, every step $(-1,-1)$ immediately follows a step $(0,1)$. Consider a tandem walk in the quarter plane, starting and ending at $0$. Using the construction of section 2 there are three types of cells: the quadrilaterals of types $a$ and $c$ give triangles, while in case $b$ we will smash the quadrilateral according to its natural bent: ###### Contraction Rules 4.2. These rules form a subset of the Contraction Rules 3.1; moreover, the other Contraction Rule for $rY$-walks is compatible with the interpretation of a step $c$ as formed by a step $(0,1)$ followed by a step $(-1,-1)$. The left-hand side of the following picture shows the steps $(0,1)$ and $(-1,-1)$, and the right-hand side shows the $c$ step. Clearly the two are equivalent. The construction using Contraction Rules 4.2 associates to a tandem walk in the quarter plane, starting and ending at $0$, a prograph, obtained as the dual of the triangulation. Indeed it is easy to see that in the construction each $a$-step corresponds to a coproduct while a $c$-step corresponds to a product. Here is an example with the word $abacbc$: The corresponding Young tableau is

4 | 6
---|---
2 | 5
1 | 3

This correspondence between tandem walks and prographs is in fact a bijection, and the inverse bijection is readily obtained from the connection, referred to above, with $rY$-walks. It can be explicitly described as follows. Given a prograph, cut the left input of each coproduct. The resulting graph is a complete spanning tree. Let us call a corner the region just above a coproduct (between its two outputs). Make a depth-first search of the tree and order the vertices and corners according to their first appearance. Make an $a$-step the first time you go through a coproduct, a $b$-step the first time you go through a product, and a $c$-step for each corner; this gives the tandem walk corresponding to the prograph. The construction is shown in Figure 8.
Figure 8. From prographs to tandem walks. (a) A prograph. (b) Cutting left inputs and exploring the tree. (c) The resulting path, recovering the prograph as the dual of the triangulation. #### 4.3.2. Connections with bipolar maps and the bijection of Kenyon, Miller, Sheffield and Wilson As in section 2.4, one can extend the previous construction to walks which do not start or end at zero and to walks with a more general step set. Here we will consider walks starting at a point of the type $(0,n)$ and ending at $(m,0)$ for some $m,n\geq 0$. We will also consider the infinite step set $\mathcal{S}=\\{(1,-1),(-i,j);i,j\geq 0,i+j>0\\}$. The construction using Contraction Rules 4.2 provides a map from walks with step set $\mathcal{S}$ to some set of “generalized prographs”, which are made of vertices with $i$ inputs and $j$ outputs, as shown below with 3 inputs and 2 outputs: Instead of looking at these generalized prographs and trying to characterize the ones that we obtain, we will instead look at the dual maps. A vertex of a generalized prograph corresponds to a face in the dual map. The face associated with a step of the form $(-i,j)$ will have $i+j+2$ sides. We will orient the sides of such a face from west to east, e.g. for $i=3,j=2$: For a quadrilateral corresponding to a step $(-1,1)$: Clearly these orientations are compatible with the identifications between sides made when contracting the Motzkin paths or the quadrilaterals. The orientation of the map produced in this way is acyclic, has a unique source (the west-most point) and a unique sink (the east-most point). We claim that the oriented maps that we obtain are exactly the bipolar oriented maps as defined in [10] and that, up to some minor twists, our construction recovers their bijection with walks in the quarter plane. In [10] this bijection proceeds inductively by looking at the steps of a walk and building an associated map by sewing faces.
More precisely, a face with $i+j+2$ sides is associated with every step of the form $(-i,j)$. It is easy to check that the sewing algorithm corresponds exactly to our construction. Instead of giving full proofs we will just check the example of [10], section 2.2, and leave the details to the reader. In Figure 9(a) the picture corresponding to this example is drawn. Figure 9. (a) The example of [10] with steps $\scriptstyle(1,-1);(0,2);(-1,0);(0,1);(1,-1);(1,-1);(-1,1);(0,1);(1,-1);(1,-1);(1,-1);(1,-1);(-1,0);(-2,1);(1,-1)$. (b) After a reflection through the main diagonal. (c) The map drawn as in [10]. In order to recover the map of [10], Figure 4, we make a reflection with respect to the $x=y$ axis, as shown in Figure 9(b). After contractions the result is shown in Figure 9(c). ### 4.4. Schnyder woods We now describe a bijection between walks and Schnyder woods, originating in Li, Sun and Watson [12]. A Schnyder wood is a planar triangulation in which the three vertices of the external face are coloured, in clockwise order, in green, red and blue. The internal edges are also coloured so that they form three trees, one of each colour, rooted on the external vertex of its own colour and containing all internal vertices. These trees are oriented towards their roots. The edges have to satisfy the Schnyder condition at each internal vertex: in clockwise order around the vertex, we have successively the outgoing blue edge, incoming red edges, the outgoing green edge, incoming blue edges, the outgoing red edge and incoming green edges. It will be convenient for us to also colour in blue the external edge between the blue and red vertices. We take as a running example the same one as Bernardi and Bonichon [4], see Figure 10 below. Figure 10. A Schnyder wood. To such a Schnyder wood we will associate a tandem walk in the quarter plane, closely related to the one described in [12].
Since the bijection is studied extensively in [12], we will only sketch the proof and leave the details to the reader. We start from a Schnyder wood and construct a tandem walk. Recall that the steps of a tandem walk belong to the set $\\{a=(1,0),\ b=(-1,1),\ c=(0,-1)\\}$ and the condition to remain in the quarter plane is that $\sharp a\geq\sharp b\geq\sharp c$ for every prefix. We make a contour exploration of the blue tree, from left to right, starting from the root. The first time we encounter a blue edge we make an $a$-step in the walk, and when we go down this blue edge we make a $b$-step. At each vertex we may cross several red edges. Each time we cross an incoming red edge we make a $c$-step in the walk. Finally, after we go up the last blue edge, we do not go back to the origin, so we do not make the last $b$-step. Observe that, by the Schnyder condition, a $c$-step can never occur immediately after a $b$-step. The word obtained in the running example is: $aabbacabaccabacbbaacbbbacccc$ Let us check that the walk that we obtain is a tandem walk, starting from $(0,0)$ and ending at $(1,0)$. Consider the subword of $a$ and $b$ letters. If we replace the $a$ and $b$ letters by $u$ and $v$ respectively, we obtain the Dyck path corresponding to the blue tree (with its last step missing); therefore, for any prefix of the word the number of $a$’s is at least the number of $b$’s. Consider the map obtained by erasing the green edges and the green vertex. In this map construct the tree dual to the red tree and root it on the external face. It is shown in black in Figure 11. One can see that the subword formed by the $b$ and $c$ letters gives, upon substituting $u$ for $b$ and $v$ for $c$, the Dyck path of the exploration of this tree from left to right; therefore, for each prefix of the word, the number of $b$’s is at least the number of $c$’s.
It follows that the word on the letters $a,b,c$ that we obtain from the Schnyder wood gives a tandem walk, from $(0,0)$ to $(1,0)$. Figure 11. A dual tree Conversely, let $p$ be a tandem walk on the quarter plane from $(0,0)$ to $(1,0)$ satisfying the condition on steps $b$ and $c$ that the pattern $bc$ is forbidden. We associate to this walk the mating of trees, as in section 2.1, using Contraction Rules 4.2, but we do not identify the top and bottom sides of the rectangle. These two sides, together with the last $a$-step, which has not been matched with a $b$-step, form the boundary of the external triangle. The blue tree is obtained from the right blue path. It remains to construct the red and green trees. They are obtained by colouring some horizontal edges. We colour red the bottom edge of each triangle of type $c$. Some of the other horizontal edges are sewn to some blue edges and therefore will be coloured in blue. The remaining horizontal edges are coloured in green. We orient the red edges from west to east and the green edges from east to west. Here is the diagram we obtain in our example. We have to check that the coloured triangulation that we have constructed is a Schnyder wood. We can identify the vertices of the triangulation: they are either vertices of the blue tree, the green vertex corresponding to the left side of the picture, or the red vertex corresponding to the last $c$-steps. The three external edges are the blue edge from the path corresponding to the last $a$-step, and the upper and lower sides of the picture, as depicted below: The vertices of the left tree which are not the root are all matched to a vertex of the blue tree by the contraction of a quadrangle. It remains to check the Schnyder condition at each internal vertex. In Figure 12 we consider a typical such vertex.
The blue points all correspond to the vertex, and the successive edges and faces traversed when going clockwise around the vertex are shown on the path with arrows. It is easy to check, using the fact that the pattern $bc$ is forbidden, that these edges follow the Schnyder condition. Figure 12. The Schnyder condition around an internal vertex. Finally, here is the picture of the original Schnyder wood, with the faces numbered as above. ## References * [1] M. Albenque, D. Poulalhon, A generic method for bijections between blossoming trees and planar maps. Electron. J. Combin. 22 (2015), no. 2, Paper 2.38, 44 pp. * [2] X. Buff, A. L. Epstein, S. Koch, D. Meyer, K. Pilgrim, M. Rees, T. Lei, Questions about polynomial matings. Ann. Fac. Sci. Toulouse Math. (6) 21 (2012), no. 5, 1149–1176. * [3] N. Borie, Three-dimensional Catalan numbers and product-coproduct prographs. FPSAC 29, Séminaire Lotharingien de Combinatoire 76B (2017), Article $\sharp 39$. * [4] O. Bernardi, N. Bonichon, Intervals in Catalan lattices and realizers of triangulations. J. Combin. Theory Ser. A 116 (2009), no. 1, 55–75. * [5] O. Bernardi, Bijective counting of Kreweras walks and loopless triangulations. J. Combin. Theory Ser. A 114 (2007), no. 5, 931–956. * [6] O. Bernardi, N. Holden, X. Sun, Bijective path to quantum gravity. https://arxiv.org/abs/1807.01684 * [7] M. Bousquet-Mélou, M. Mishna, Walks with small steps in the quarter plane. Algorithmic probability and combinatorics, 1–39, Contemp. Math., 520, Amer. Math. Soc., Providence, RI, 2010. * [8] B. Duplantier, J. Miller, S. Sheffield, Liouville quantum gravity as a mating of trees. https://arxiv.org/abs/1409.7055 * [9] E. Gwynne, N. Holden, X. Sun, Mating of trees for random planar maps and Liouville quantum gravity: a survey. https://arxiv.org/abs/1910.04713 * [10] R. Kenyon, J. Miller, S. Sheffield, D. Wilson, Bipolar orientations on planar maps and $SLE_{12}$. Ann. Probab. 47 (2019), no. 3, 1240–1269. * [11] J. F. Le Gall, F.
Paulin, Scaling limits of bipartite planar maps are homeomorphic to the 2-sphere. Geom. Funct. Anal. 18 (2008), no. 3, 893–918. * [12] Y. Li, X. Sun, S. S. Watson, Schnyder woods, $SLE_{16}$, and Liouville quantum gravity. https://arxiv.org/abs/1705.03573 * [13] G. Schaeffer, Bijective census and random generation of Eulerian planar maps with prescribed vertex degrees. Electron. J. Combin. 4 (1997), no. 1, Research Paper 20, 14 pp. * [14] G. Schaeffer, Planar maps. Handbook of enumerative combinatorics, 335–395, Discrete Math. Appl. (Boca Raton), CRC Press, Boca Raton, FL, 2015. * [15] R. P. Stanley, Enumerative combinatorics, vol. 2. Cambridge University Press, 1999.
# Rational Hypergeometric Ramanujan Identities for $1/\pi^{c}$: Survey and Generalizations Henri Cohen and Jesús Guillera ###### Abstract We give a simple unified proof for _all_ existing rational hypergeometric Ramanujan identities for $1/\pi$, and give a complete survey (without proof) of several generalizations: rational hypergeometric identities for $1/\pi^{c}$, Taylor expansions, upside-down formulas, and supercongruences. ## 1 Introduction In a famous paper [20], S. Ramanujan gave $17$ formulas for $1/\pi$. These formulas were proved and generalized much later by numerous authors. In the present paper, which is mainly a survey and does not claim originality, we have several goals. First, we want to show that the $36$ rational hypergeometric formulas for $1/\pi$ follow by specialization from a _single_ general formula. Second, we give a list of all known rational hypergeometric formulas for $1/\pi^{c}$ with $c\geq 2$, many unproved. Third, we will give Taylor expansions of which the $1/\pi^{c}$ formulas are only the constant term. Fourth, we give a list of what can be called _upside-down_ formulas. Finally, we give the _supercongruences_ corresponding to all the $1/\pi^{c}$ formulas. A large part of the formulas of this paper, apart from the initial $1/\pi$ formulas, are due to the second author. This survey is meant to be exhaustive, which means that we would appreciate feedback from readers who are aware of formulas that are not in our list (and evidently of errors). Note that we do not list formulas involving algebraic (as opposed to rational) parameters, nor do we list the second author’s two-sided formulas where the sums are over $n\in{\mathbb{Z}}$ instead of $n\geq 0$, or other generalizations. Acknowledgment: We heartily thank Wadim Zudilin for enlightening conversations.
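To fix ideas, the most celebrated of Ramanujan’s original series, $\sum_{n\geq 0}(26390n+1103)\frac{(4n)!}{n!^{4}\,396^{4n}}=\frac{9801}{2\sqrt{2}\,\pi}$, can be checked numerically in a few lines. The sketch below (an illustration only, not a proof) uses exact rational arithmetic for the partial sum; the series gains roughly eight digits per term, so four terms already exceed double precision:

```python
from fractions import Fraction
from math import factorial, isclose, pi, sqrt

def partial_sum(terms):
    """Exact partial sum of sum_{n>=0} (26390 n + 1103) (4n)! / (n!^4 396^(4n))."""
    s = Fraction(0)
    for n in range(terms):
        s += Fraction((26390 * n + 1103) * factorial(4 * n),
                      factorial(n) ** 4 * 396 ** (4 * n))
    return s

approx = float(partial_sum(4))
target = 9801 / (2 * sqrt(2) * pi)
assert isclose(approx, target, rel_tol=1e-13)
```

Truncating after the very first term already gives $\pi$ to about eight correct digits, which is why series of this type are also of computational interest.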
We recall that the _Pochhammer symbol_ $(x)_{n}$ is defined by $(x)_{n}=x(x+1)\cdots(x+n-1)=\Gamma(x+n)/\Gamma(x)$, this last formula allowing us to define it also when $n$ is not an integer. ###### Definition 1.1 1. (1) Let $d\geq 2$ be an integer. For all $n\geq 0$ we define $R_{n}(d)=\prod_{\begin{subarray}{c}1\leq i\leq d\\\ \gcd(i,d)=1\end{subarray}}\dfrac{(i/d)_{n}}{n!}\;.$ 2. (2) A _rational hypergeometric product_ $H$ is a sequence of the form $H_{n}=\prod_{d\in I}R_{n}(d)^{v_{d}}$ for some finite index set $I$ and _positive_ exponents $v_{d}$. We define the _degree_ of $H$ by $\deg(H)=\sum_{d\in I}v_{d}\phi(d)$. 3. (3) A _rational hypergeometric Ramanujan series_ is a series of the form $S(H,a,P)=\sum_{n\geq 0}P(n)\dfrac{H_{n}}{a^{n}}\;,$ with $P\in{\mathbb{Z}}[X]$ and $a\in{\mathbb{Q}}^{*}$. Some comments are in order: 1. (1) We only consider coefficients which are hypergeometric _products_ (as opposed to quotients). We could allow $v_{d}<0$, or more general coefficients, and indeed there is a vast amount of formulas for such general series, but we will restrict to those, although we will later give “upside-down” formulas where _all_ the $v_{d}$ are negative. In the tables that we give below, we will abbreviate $\prod_{d\in I}R_{n}(d)^{v_{d}}$ as $\prod d^{v_{d}}$. 2. (2) In addition, we restrict to such products which are _rational_ in the hypergeometric motive sense, in other words such that if some irreducible fraction $i/d$ occurs, then all irreducible fractions in $]0,1[$ with denominator $d$ occur. 3. (3) We only consider $P\in{\mathbb{Z}}[X]$ (or $P\in{\mathbb{Q}}[X]$, which is the same up to a multiplicative constant). Ramanujan himself gave formulas where $P$ has coefficients in a quadratic extension of ${\mathbb{Q}}$, but we will not consider those. Similarly, we restrict to $a\in{\mathbb{Q}}^{*}$. 4.
(4) It is immediate to see that $H_{n}\sim C/n^{\deg(H)/2}$ for some constant $C$, so the series $S$ converges for $|a|>1$, also for $a=-1$ if $\deg(P)<\deg(H)/2$, and for $a=1$ if $\deg(P)<\deg(H)/2-1$. In fact, in all the examples that we will see we have $\deg(P)=(\deg(H)-1)/2$ (and in particular $\deg(H)$ is odd). 5. (5) The hypergeometric function $\sum_{n\geq 0}H_{n}z^{n}$ satisfies a linear differential equation of order $\deg(H)$, in other words there exists a polynomial $Q(n,z)$ of degree $\deg(H)$ in $n$ such that $\sum_{n\geq 0}Q(n,z)H_{n}z^{n}=0$. We may thus restrict to polynomials $P$ such that $\deg(P)<\deg(H)$, since formulas with $P$ differing by a multiple of $Q$ are trivially equivalent. We will give examples of this below. In fact, the only hypergeometric products that we will consider in this paper are products of the following $R_{n}(d)$: $\displaystyle R_{n}(p)$ $\displaystyle=p^{-pn}\dfrac{(pn)!}{n!^{p}}\text{\quad for $p=2,\ 3,\ 5$\;,}$ $\displaystyle R_{n}(4)$ $\displaystyle=2^{-6n}\dfrac{(4n)!}{(2n)!n!^{2}}\;,\text{\quad and\quad}R_{n}(6)=2^{-4n}3^{-3n}\dfrac{(6n)!}{(3n)!(2n)!n!}\;.$ ###### Definition 1.2 A (convergent) hypergeometric Ramanujan series will be called a $1/\pi^{c}$-formula if its sum $S(H,a,P)$ is equal to an algebraic number divided by $\pi^{c}$. A search through the (abundant) literature (together with additional personal investigations) shows that the only algebraic numbers that occur are of the form $\sqrt{k}$ for some $k\in{\mathbb{Q}}^{*}$, and we have been able to find exactly $36$ series whose sum is of the form $\sqrt{k}/\pi$, $10$ whose sum is of the form $\sqrt{k}/\pi^{2}$, plus a single example for $1/\pi^{3}$ due to B. Gourevitch and two examples for $1/\pi^{4}$, one due to J. Cullen, the other to Y. Zhao. In addition, there are a number of “divergent” hypergeometric series for $1/\pi^{c}$ which we will mention.
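The factorial expressions just given can be checked against the defining Pochhammer products of Definition 1.1 with exact rational arithmetic. The following sketch (added here purely as an illustration) verifies the equality for small $n$:

```python
from fractions import Fraction
from math import factorial, gcd

def pochhammer(x, n):
    """(x)_n = x (x+1) ... (x+n-1) for a Fraction x."""
    r = Fraction(1)
    for k in range(n):
        r *= x + k
    return r

def R(d, n):
    """R_n(d) as the product of (i/d)_n / n! over 1 <= i <= d with gcd(i,d) = 1."""
    r = Fraction(1)
    for i in range(1, d + 1):
        if gcd(i, d) == 1:
            r *= pochhammer(Fraction(i, d), n) / factorial(n)
    return r

def R_factorial(d, n):
    """The factorial forms quoted in the text for d = 2, 3, 4, 5, 6."""
    f = factorial
    if d in (2, 3, 5):
        return Fraction(f(d * n), d ** (d * n) * f(n) ** d)
    if d == 4:
        return Fraction(f(4 * n), 2 ** (6 * n) * f(2 * n) * f(n) ** 2)
    if d == 6:
        return Fraction(f(6 * n),
                        2 ** (4 * n) * 3 ** (3 * n) * f(3 * n) * f(2 * n) * f(n))

for d in (2, 3, 4, 5, 6):
    for n in range(5):
        assert R(d, n) == R_factorial(d, n)
```

These identities are of course classical consequences of the Gauss multiplication formula; the check is only a safeguard against typographical slips in the exponents.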
## 2 Rational Hypergeometric Formulas for $1/\pi$ There are (at least) three methods for proving such formulas. The first, due to Ramanujan, is the use of elliptic functions and generalizations, the second is the use of modular functions, and the third is to use WZ-type summation methods. In the present paper we only use modular functions. In fact, we will show that _all_ the known rational formulas for $1/\pi$ follow from a single general formula giving an identity between complex _functions_, which can then be specialized to any CM point that we want, and in particular to CM points giving rational formulas. ### 2.1 A General Identity An important theorem, which can be found for instance in [21], states that any modular form (or function) $F$ of weight $k$ (say on a congruence subgroup of $\Gamma$), expressed locally in terms of a modular _function_ $h$ (of weight $0$), is a solution of a _linear_ differential equation of order $k+1$ with algebraic coefficients, which can be explicitly constructed; if in addition $h$ is a _Hauptmodul_, the coefficients can be chosen to be polynomials. Equivalently, there exists a sequence $u(n)$ satisfying a polynomial recurrence relation such that for $\Im(\tau)$ sufficiently large we have $F(\tau)=\sum_{n\geq 0}u(n)h(\tau)^{n}$. We then have the following easy result, where from now on we denote by $D$ the differential operator $D=(1/2\pi i)d/d\tau=qd/dq$: ###### Proposition 2.1 As above, let $F$ be a modular form of weight $k$ on some congruence subgroup of $\Gamma$, $h$ a modular function (of weight $0$), write $F(\tau)=\sum_{n\geq 0}u(n)h(\tau)^{n}$ for $\Im(\tau)$ sufficiently large, and finally set $G^{*}(\tau)=\dfrac{D(F)/F}{D(h)/h}(\tau)-\dfrac{k}{4\pi\Im(\tau)D(h)/h}\;,$ which is nonholomorphic but modular of weight $0$. We have the following general formula, which can be considered as a formula for $1/\pi$: $\sum_{n\geq 0}(n-G^{*}(\tau))u(n)h(\tau)^{n}=\dfrac{k}{4\pi\Im(\tau)}\dfrac{F}{D(h)/h}(\tau)\;.$ Proof.
First apply $D$ to the formula expressing $F$ in terms of $h$, so that $\dfrac{D(F(\tau))}{D(h(\tau))/h(\tau)}=\sum_{n\geq 0}nu(n)h(\tau)^{n}\;.$ Thus, if we set $G=(D(F)/F)/(D(h)/h)$, the left hand side is $GF=G\sum_{n\geq 0}u(n)h^{n}$, so we have the identity $\sum_{n\geq 0}(n-G(\tau))u(n)h(\tau)^{n}=0\;.$ Now $D(h)/h$ is a modular function of weight $2$, but $D(F)/F$ is only quasi-modular: $D^{*}(F)/F=D(F)/F-(k/(4\pi\Im(\tau)))$ is truly modular nonholomorphic of weight $2$. Thus, we set $G^{*}=(D^{*}(F)/F)/(D(h)/h)$, and this gives both the formula for $G^{*}$ and the desired identity. $\square$ Now CM theory tells us that if $\tau$ is a CM point and $h$ and $F$ have algebraic Fourier coefficients, then both $h(\tau)$ and $G^{*}(\tau)$ will be algebraic numbers, and $(F/(D(h)/h))(\tau)$, which has weight $k-2$, will be an algebraic number times $\Omega_{\tau}^{k-2}$, where $\Omega_{\tau}$ is a suitable period, for instance $\Omega_{\tau}=\eta(\tau)^{2}$. Thus, it will itself be algebraic if $k=2$, and otherwise be equal to an algebraic number times a product of values of the gamma function at rational arguments by the Lerch–Chowla–Selberg formula. The rest of the work consists simply in specializing the above general argument to specific modular functions $F$ and $h$ and specific CM points $\tau$. Remark. Under this modular interpretation the existence of these $1/\pi$ formulas is due exclusively to the existence of the modularity-preserving nonholomorphic modification $D^{*}$ of the differential operator $D$ seen above, which involves $1/\pi$. ### 2.2 First Special Case: Level $1$ We first consider modular forms on the full modular group. It is known at least since Klein–Fricke that we have the hypergeometric representation $E_{4}^{1/4}={}_{2}F_{1}(1/12,5/12;1;1/J_{1})\;,$ where $E_{k}=1-(2k/B_{k})\sum_{n\geq 1}\sigma_{k-1}(n)q^{n}$ and $J_{1}(\tau)=j(\tau)/1728$.
Thanks to the Clausen identity, we deduce that $E_{4}^{1/2}={}_{3}F_{2}(1/2,1/6,5/6;1,1;1/J_{1})\;.$ This is modular of weight $2$, so we apply the above proposition to $F=E_{4}^{1/2}$ and $h=1/J_{1}$. We compute that $D(h)/h=-D(j)/j=E_{6}/E_{4}$, $D(F)/F=(1/6)(E_{2}-E_{6}/E_{4})$, hence $D^{*}(F)/F=(1/6)(E_{2}^{*}-E_{6}/E_{4})$ where $E_{2}^{*}=E_{2}-3/(\pi\Im(\tau))$, so $G^{*}=-(1/6)(1-E_{2}^{*}E_{4}/E_{6})$, so the general identity specializes to $\sum_{n\geq 0}\left(6n+1-\dfrac{E_{2}^{*}E_{4}}{E_{6}}(\tau)\right)\dfrac{R_{n}(2)R_{n}(6)}{J_{1}(\tau)^{n}}=\dfrac{3}{\pi\Im(\tau)}\dfrac{E_{4}^{3/2}}{E_{6}}(\tau)\;,$ an identity due to the Chudnovsky brothers. For comparison with higher levels, we set $s_{1}=1/6$, so that for instance $E_{4}^{1/4}={}_{2}F_{1}(s_{1}/2,(1-s_{1})/2;1;1/J_{1})$. ### 2.3 Special Cases: Levels $2$ and $3$ There is no difference for higher levels compared to level $1$, apart from the need to give explicitly the modular functions used and the hypergeometric identities. As it happens, levels $2$ and $3$ can be treated together. For $N=2$ and $3$ set $\displaystyle F_{2}(\tau)$ $\displaystyle=\dfrac{NE_{2}(N\tau)-E_{2}(\tau)}{N-1}\;,\quad F_{4}(\tau)=\dfrac{N^{2}E_{4}(N\tau)-E_{4}(\tau)}{N^{2}-1}\;,$ $\displaystyle J_{N}(\tau)$ $\displaystyle=\dfrac{F_{2}^{4}}{F_{2}^{4}-F_{4}^{2}}\;,\text{\quad and\quad}P_{2}(\tau)=\dfrac{NE_{2}(N\tau)+E_{2}(\tau)}{N+1}\;.$ The hypergeometric identity is $F_{2}^{1/2}={}_{2}F_{1}(s_{N}/2,(1-s_{N})/2;1;1/J_{N})\;,\text{\quad with\quad}s_{N}=(N+1)/12\;,$ so by Clausen $F_{2}={}_{3}F_{2}(1/2,s_{N},1-s_{N};1,1;1/J_{N})\;.$ We apply the proposition to $F=F_{2}$ and $h=1/J_{N}$. 
We compute that $D(h)/h=F_{4}/F_{2}$, $D(F)/F=s_{N}(P_{2}-F_{4}/F_{2})$, hence $D^{*}(F)/F=s_{N}(P_{2}^{*}-F_{4}/F_{2})$ with $P_{2}^{*}=P_{2}-6/((N+1)\pi\Im(\tau))$, so $G^{*}=-s_{N}(1-P_{2}^{*}F_{2}/F_{4})$, and since $6/(N+1)=1/(2s_{N})$, the general identity specializes to the two identities $\displaystyle\sum_{n\geq 0}\left(4n+1-\dfrac{P_{2}^{*}F_{2}}{F_{4}}(\tau)\right)\dfrac{R_{n}(2)R_{n}(4)}{J_{2}(\tau)^{n}}$ $\displaystyle=\dfrac{2}{\pi\Im(\tau)}\dfrac{F_{2}^{2}}{F_{4}}(\tau)\;,$ $\displaystyle\sum_{n\geq 0}\left(3n+1-\dfrac{P_{2}^{*}F_{2}}{F_{4}}(\tau)\right)\dfrac{R_{n}(2)R_{n}(3)}{J_{3}(\tau)^{n}}$ $\displaystyle=\dfrac{3}{2\pi\Im(\tau)}\dfrac{F_{2}^{2}}{F_{4}}(\tau)\;.$ ### 2.4 Special Case: Level $4$ Here we set $\displaystyle F_{2}(\tau)$ $\displaystyle=\dfrac{4E_{2}(4\tau)-E_{2}(\tau)}{3}\;,\quad G_{2}(\tau)=4E_{2}(4\tau)-4E_{2}(2\tau)+E_{2}(\tau)$ $\displaystyle J_{4}$ $\displaystyle=\dfrac{F_{2}^{2}}{F_{2}^{2}-G_{2}^{2}}\;,\text{\quad and\quad}P_{2}(\tau)=E_{2}(2\tau)\;.$ The hypergeometric identity is again $F_{2}^{1/2}={}_{2}F_{1}(s_{4}/2,(1-s_{4})/2;1;1/J_{4})\;,\text{\quad with\quad}s_{4}=1/2\;,$ so by Clausen $F_{2}={}_{3}F_{2}(1/2,s_{4},1-s_{4};1,1;1/J_{4})$. We apply the proposition to $F=F_{2}$ and $h=1/J_{4}$. We compute that $D(h)/h=G_{2}$, $D(F)/F=(P_{2}-G_{2})/3$, hence $D^{*}(F)/F=(P_{2}^{*}-G_{2})/3$ with $P_{2}^{*}=P_{2}-3/(2\pi\Im(\tau))$, so $G^{*}=-(1-P_{2}^{*}/G_{2})/3$, hence the general identity specializes to $\sum_{n\geq 0}\left(3n+1-\dfrac{P_{2}^{*}}{G_{2}}(\tau)\right)\dfrac{R_{n}(2)^{3}}{J_{4}(\tau)^{n}}=\dfrac{3}{2\pi\Im(\tau)}\dfrac{F_{2}}{G_{2}}(\tau)\;.$ ## 3 The Basic List of Rational Hypergeometric $1/\pi$ Formulas From the above four specializations it is now immediate to obtain as many hypergeometric $1/\pi$ formulas as we like. To obtain such formulas which are _rational_ in the above sense, we first need $J_{N}(\tau)$ to be rational.
This trivially implies that the coefficients of $1/(\pi\Im(\tau))$ on the right-hand side of the formulas are square roots of rational numbers. Thus, if we want the coefficient of $1/\pi$ to be algebraic, we need both $J_{N}(\tau)$ rational and $\Im(\tau)$ algebraic, and a transcendence theorem (well-known for $N=1$, but proved similarly for $N>1$) implies that $\tau$ is a CM point, i.e., of the form $(a+\sqrt{D})/b$ with $D<0$ and $a$, $b$ integral. In turn this implies (less trivially) that the other coefficients involved will be square roots of a rational number. The following table summarizes the results obtained in this way: each formula is of the form $\sum_{n\geq 0}P(n)H_{N}(n)/a^{n}=\sqrt{k}/\pi$, where the function $H_{N}(n)=(1/2)_{n}(s_{N})_{n}(1-s_{N})_{n}/n!^{3}$ is the coefficient of $x^{n}$ in ${}_{3}F_{2}(1/2,s_{N},1-s_{N};1,1;x)$, so that $H_{1}(n)=R_{n}(2)R_{n}(6)$, $H_{2}(n)=R_{n}(2)R_{n}(4)$, $H_{3}(n)=R_{n}(2)R_{n}(3)$, and $H_{4}(n)=R_{n}(2)^{3}$. For uniqueness, we always choose $P$ with content $1$ and positive leading coefficient, $k$ is given in factored form, and the square root of $k$ is always the positive one. For future reference, we assign a number from 1 to 36 to each formula. 
$\sum_{n\geq 0}P(n)\dfrac{H_{N}(n)}{a^{n}}=\dfrac{\sqrt{k}}{\pi}$

| # | $N$ | $H_{N}$ | $\tau$ | $a=J_{N}(\tau)$ | $P$ | $k$ |
|---|---|---|---|---|---|---|
| 1 | $1$ | $2\cdot 6$ | $(1+\sqrt{-7})/2$ | $-2^{-6}5^{3}$ | $63x+8$ | $3\cdot 5^{3}$ |
| 2 | $1$ | $2\cdot 6$ | $(1+\sqrt{-11})/2$ | $-2^{9}3^{-3}$ | $154x+15$ | $2^{11}$ |
| 3 | $1$ | $2\cdot 6$ | $(1+\sqrt{-19})/2$ | $-2^{9}$ | $342x+25$ | $2^{11}\cdot 3$ |
| 4 | $1$ | $2\cdot 6$ | $(1+\sqrt{-27})/2$ | $-2^{9}3^{-2}5^{3}$ | $506x+31$ | $2^{11}\cdot 3^{-3}\cdot 5^{3}$ |
| 5 | $1$ | $2\cdot 6$ | $(1+\sqrt{-43})/2$ | $-2^{12}5^{3}$ | $5418x+263$ | $2^{14}\cdot 3^{-1}\cdot 5^{3}$ |
| 6 | $1$ | $2\cdot 6$ | $(1+\sqrt{-67})/2$ | $-2^{9}5^{3}11^{3}$ | $261702x+10177$ | $2^{11}\cdot 3\cdot 5^{3}\cdot 11^{3}$ |
| 7 | $1$ | $2\cdot 6$ | $(1+\sqrt{-163})/2$ | $-2^{12}5^{3}23^{3}29^{3}$ | $545140134x+13591409$ | $2^{14}\cdot 3\cdot 5^{3}\cdot 23^{3}\cdot 29^{3}$ |
| 8 | $1$ | $2\cdot 6$ | $\sqrt{-2}$ | $3^{-3}5^{3}$ | $28x+3$ | $5^{3}$ |
| 9 | $1$ | $2\cdot 6$ | $\sqrt{-3}$ | $2^{-2}5^{3}$ | $11x+1$ | $2^{-2}\cdot 3^{-1}\cdot 5^{3}$ |
| 10 | $1$ | $2\cdot 6$ | $\sqrt{-4}$ | $2^{-3}11^{3}$ | $63x+5$ | $2^{-4}\cdot 3\cdot 11^{3}$ |
| 11 | $1$ | $2\cdot 6$ | $\sqrt{-7}$ | $2^{-6}5^{3}17^{3}$ | $133x+8$ | $2^{-2}\cdot 3^{-5}\cdot 5^{3}\cdot 17^{3}$ |
| 12 | $2$ | $2\cdot 4$ | $(1+\sqrt{-5})/2$ | $-2^{2}$ | $20x+3$ | $2^{6}$ |
| 13 | $2$ | $2\cdot 4$ | $(1+\sqrt{-7})/2$ | $-2^{-8}3^{4}7^{2}$ | $65x+8$ | $3^{4}\cdot 7$ |
| 14 | $2$ | $2\cdot 4$ | $(1+\sqrt{-9})/2$ | $-2^{4}3$ | $28x+3$ | $2^{8}\cdot 3^{-1}$ |
| 15 | $2$ | $2\cdot 4$ | $(1+\sqrt{-13})/2$ | $-2^{2}3^{4}$ | $260x+23$ | $2^{6}\cdot 3^{4}$ |
| 16 | $2$ | $2\cdot 4$ | $(1+\sqrt{-25})/2$ | $-2^{6}3^{4}5$ | $644x+41$ | $2^{10}\cdot 3^{4}\cdot 5^{-1}$ |
| 17 | $2$ | $2\cdot 4$ | $(1+\sqrt{-37})/2$ | $-2^{2}3^{4}7^{4}$ | $21460x+1123$ | $2^{6}\cdot 3^{4}\cdot 7^{4}$ |
| 18 | $2$ | $2\cdot 4$ | $\sqrt{-1}$ | $2^{-5}3^{4}$ | $7x+1$ | $2^{-2}\cdot 3^{4}$ |
| 19 | $2$ | $2\cdot 4$ | $\sqrt{-6}/2$ | $3^{2}$ | $8x+1$ | $2^{2}\cdot 3$ |
| 20 | $2$ | $2\cdot 4$ | $\sqrt{-10}/2$ | $3^{4}$ | $10x+1$ | $2^{-3}\cdot 3^{4}$ |
| 21 | $2$ | $2\cdot 4$ | $\sqrt{-18}/2$ | $7^{4}$ | $40x+3$ | $3^{-3}\cdot 7^{4}$ |
| 22 | $2$ | $2\cdot 4$ | $\sqrt{-22}/2$ | $3^{4}11^{2}$ | $280x+19$ | $2^{2}\cdot 3^{4}\cdot 11$ |
| 23 | $2$ | $2\cdot 4$ | $\sqrt{-58}/2$ | $3^{8}11^{4}$ | $26390x+1103$ | $2^{-3}\cdot 3^{8}\cdot 11^{4}$ |
| 24 | $3$ | $2\cdot 3$ | $(3+\sqrt{-27})/6$ | $-2^{4}3^{-2}$ | $5x+1$ | $2^{4}\cdot 3^{-1}$ |
| 25 | $3$ | $2\cdot 3$ | $(3+\sqrt{-51})/6$ | $-2^{4}$ | $51x+7$ | $2^{4}\cdot 3^{3}$ |
| 26 | $3$ | $2\cdot 3$ | $(3+\sqrt{-75})/6$ | $-2^{4}5$ | $9x+1$ | $2^{4}\cdot 3\cdot 5^{-1}$ |
| 27 | $3$ | $2\cdot 3$ | $(3+\sqrt{-123})/6$ | $-2^{10}$ | $615x+53$ | $2^{10}\cdot 3^{3}$ |
| 28 | $3$ | $2\cdot 3$ | $(3+\sqrt{-147})/6$ | $-2^{4}3^{3}7$ | $165x+13$ | $2^{4}\cdot 3^{6}\cdot 7^{-1}$ |
| 29 | $3$ | $2\cdot 3$ | $(3+\sqrt{-267})/6$ | $-2^{4}5^{6}$ | $14151x+827$ | $2^{4}\cdot 3^{3}\cdot 5^{6}$ |
| 30 | $3$ | $2\cdot 3$ | $\sqrt{-6}/3$ | $2$ | $6x+1$ | $3^{3}$ |
| 31 | $3$ | $2\cdot 3$ | $\sqrt{-12}/3$ | $2^{-1}3^{3}$ | $15x+2$ | $2^{-4}\cdot 3^{6}$ |
| 32 | $3$ | $2\cdot 3$ | $\sqrt{-15}/3$ | $2^{-2}5^{3}$ | $33x+4$ | $2^{-2}\cdot 3^{3}\cdot 5^{2}$ |
| 33 | $4$ | $2^{3}$ | $(1+\sqrt{-2})/2$ | $-1$ | $4x+1$ | $2^{2}$ |
| 34 | $4$ | $2^{3}$ | $(1+\sqrt{-4})/2$ | $-2^{3}$ | $6x+1$ | $2^{3}$ |
| 35 | $4$ | $2^{3}$ | $\sqrt{-3}/2$ | $2^{2}$ | $6x+1$ | $2^{4}$ |
| 36 | $4$ | $2^{3}$ | $\sqrt{-7}/2$ | $2^{6}$ | $42x+5$ | $2^{8}$ |

Rational hypergeometric formulas for $1/\pi$

Note that a generalization of the proof of the class number $1$ problem for imaginary quadratic fields _proves_ that the values for $a$ listed above (together with the values for the divergent series that we will give below) are the _only_ rational values of $J_{N}(\tau)$ at CM arguments $\tau$, outside of the values $0$ and $1$ which cannot be used.

Note that we do not claim that we have found all possible rational $1/\pi$ formulas, but only that, as far as we can tell, no other such formula exists in the literature, and a rather long search using linear dependence algorithms did not find any additional ones, except for trivial modifications coming from the fact that ${}_{2}F_{1}$ is a solution of a linear differential equation of order $2$.
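Entries of this table can be spot-checked numerically. A sketch with mpmath for entries 8, 30, and 35 (chosen because their series converge quickly; recall $a=J_{N}(\tau)$, so the series variable is $1/a$, and entry 35 is Ramanujan's classical $\sum(6n+1)(1/2)_{n}^{3}/(n!^{3}4^{n})=4/\pi$):

```python
from math import gcd
from mpmath import mp, mpf, rf, factorial, fsum, sqrt, pi

mp.dps = 30

def R(n, d):
    """R_n(d) = prod over 1<=i<=d with gcd(i,d)=1 of (i/d)_n / (1)_n, as in the text."""
    val = mpf(1)
    for i in range(1, d + 1):
        if gcd(i, d) == 1:
            val *= rf(mpf(i)/d, n)/factorial(n)
    return val

# entry 8:  a = 5^3/3^3;  entry 30: a = 2;  entry 35: a = 2^2
checks = [
    (lambda n: (28*n + 3)*R(n, 2)*R(n, 6)/(mpf(125)/27)**n, sqrt(mpf(125))/pi),
    (lambda n: (6*n + 1)*R(n, 2)*R(n, 3)/mpf(2)**n,         sqrt(mpf(27))/pi),
    (lambda n: (6*n + 1)*R(n, 2)**3/mpf(4)**n,              sqrt(mpf(16))/pi),
]
results = [abs(fsum(term(n) for n in range(300)) - rhs) for term, rhs in checks]
print(results)      # each difference is negligible at working precision
```

The same loop works for any convergent row of the table once $H_{N}$ and $a$ are read off; the alternating rows ($a<0$) converge as well, just with sign changes.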
For instance, we have the following formulas, which are trivially equivalent to the last three formulas of the above list:
$\displaystyle\sum_{n\geq 0}(6n^{3}+n^{2})\dfrac{R_{n}(2)^{3}}{(-8)^{n}}=-\dfrac{\sqrt{2}/6}{\pi}\;,\quad\sum_{n\geq 0}(2n^{3}-n^{2})\dfrac{R_{n}(2)^{3}}{4^{n}}=\dfrac{1/3}{\pi}\;,$
$\displaystyle\sum_{n\geq 0}(210n^{3}-5n^{2}+n)\dfrac{R_{n}(2)^{3}}{64^{n}}=\dfrac{4/3}{\pi}\;.$
Note that the same method also produces _divergent_ series, for which $|a|<1$:

| # | $N$ | $H_{N}$ | $\tau$ | $a=J_{N}(\tau)$ | $P$ | $k$ | sign |
|---|---|---|---|---|---|---|---|
| 37 | $2$ | $2\cdot 4$ | $(-1+\sqrt{-3})/2$ | $-2^{-4}3^{2}$ | $5x+1$ | $3$ | $+$ |
| 38 | $2$ | $2\cdot 4$ | $(1+\sqrt{-7})/4$ | $2^{-8}3^{4}$ | $35x+8$ | $-2^{2}3^{4}$ | $-$ |
| 39 | $3$ | $2\cdot 3$ | $(2+\sqrt{-2})/6$ | $2\cdot 3^{-3}$ | $10x+3$ | $-2^{2}5^{2}$ | $-$ |
| 40 | $3$ | $2\cdot 3$ | $(1+\sqrt{-11})/6$ | $2^{4}3^{-3}$ | $11x+3$ | $-2^{4}3^{2}$ | $+$ |
| 41 | $3$ | $2\cdot 3$ | $(3+\sqrt{-15})/6$ | $-2^{-2}$ | $15x+4$ | $3^{3}$ | $+$ |
| 42 | $4$ | $2^{3}$ | $(1+\sqrt{-1})/2$ | $-2^{-3}$ | $3x+1$ | $1$ | $+$ |
| 43 | $4$ | $2^{3}$ | $(3+\sqrt{-7})/8$ | $2^{-6}$ | $21x+8$ | $-2^{4}$ | $-$ |
| 44 | $4$ | $2^{3}$ | $(1+\sqrt{-3})/4$ | $2^{-2}$ | $3x+1$ | $-2^{2}$ | $-$ |

The values of $k$ given in this table are those coming from the general formula, but they correspond to the values obtained from the analytic continuation of the hypergeometric series only when $a<0$. On the other hand, we will see below that they all lead to supercongruences, as well as to so-called upside-down formulas. Here the sign of the square root of $k$ can vary, so it is indicated in the last column with respect to the principal determination.

## 4 Additional Consequences of Proposition 2.1

In the previous section, we have only used Proposition 2.1 for some very specific pairs $(F,h)$ which lead to rational hypergeometric formulas for $1/\pi$.
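The three "trivially equivalent" formulas above can likewise be checked numerically. A minimal mpmath sketch (all three series converge geometrically, so a few hundred terms far exceed the working precision):

```python
from mpmath import mp, mpf, rf, factorial, fsum, sqrt, pi

mp.dps = 30

def H4(n):
    """H_4(n) = R_n(2)^3 = ((1/2)_n / n!)^3."""
    return (rf(mpf(1)/2, n)/factorial(n))**3

s1 = fsum((6*n**3 + n**2)*H4(n)/mpf(-8)**n for n in range(300))
s2 = fsum((2*n**3 - n**2)*H4(n)/mpf(4)**n for n in range(300))
s3 = fsum((210*n**3 - 5*n**2 + n)*H4(n)/mpf(64)**n for n in range(300))
print(abs(s1 + sqrt(mpf(2))/(6*pi)))   # ~ 0
print(abs(s2 - 1/(3*pi)))              # ~ 0
print(abs(s3 - mpf(4)/(3*pi)))         # ~ 0
```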
It is evidently possible to use it for other pairs: we could use other subgroups of $\Gamma$, in particular the groups $\Gamma_{0}^{*}(N)$ for $N>4$, or else the same subgroups that we have already considered but with a different $F$ (since the subgroups $\Gamma_{0}^{*}(N)$ for $N\leq 4$ all have genus $0$, there is not much point in changing the function $h$, since the resulting formulas could also be obtained by standard hypergeometric identities). Using $\Gamma_{0}^{*}(N)$ for $N>4$ will not lead to identities involving hypergeometric functions, but for instance to more general functions called _Heun functions_. Using different functions $F$ does give additional formulas. We simply give two examples, without proof since they are once again direct applications of Proposition 2.1. These are examples in level $1$, so we keep $h=1/J_{1}=1728/j$.

First, we choose $F=E_{4}^{1/4}$, and we find the general formula
$\sum_{n\geq 0}\left(12n+1-\dfrac{E_{2}^{*}E_{4}}{E_{6}}(\tau)\right)\dfrac{(1/12)_{n}(5/12)_{n}/n!^{2}}{J_{1}(\tau)^{n}}=\dfrac{3}{\pi\Im(\tau)}E_{4}(\tau)^{-1/4}\dfrac{E_{4}^{3/2}}{E_{6}}(\tau)\;.$
Specializing to $\tau=\sqrt{-3}$ and $\tau=\sqrt{-4}$, and using as mentioned above the Chowla–Selberg formula to compute the values of $E_{4}(\tau)$, we obtain the following identities:
$\displaystyle\sum_{n\geq 0}(22n+1)\dfrac{(1/12)_{n}(5/12)_{n}}{n!^{2}}\dfrac{1}{(125/4)^{n}}=\dfrac{(2^{4/3}5^{5/4}/3)\pi}{\Gamma(1/3)^{3}}=\dfrac{2^{1/3}5^{5/4}3^{-1/2}}{B(1/3,1/3)}\;,$
$\displaystyle\sum_{n\geq 0}(126n+5)\dfrac{(1/12)_{n}(5/12)_{n}}{n!^{2}}\dfrac{1}{(11/2)^{3n}}=\dfrac{2^{1/2}3^{1/4}11^{5/4}\pi^{1/2}}{\Gamma(1/4)^{2}}=\dfrac{2^{1/2}3^{1/4}11^{5/4}}{B(1/4,1/4)}\;,$
where $B(a,b)=\Gamma(a)\Gamma(b)/\Gamma(a+b)$ is the beta function.
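Both identities are easy to probe numerically, since mpmath provides the gamma function directly. A sketch:

```python
from mpmath import mp, mpf, rf, factorial, fsum, gamma, pi, sqrt

mp.dps = 30

def t(n, a):
    """(1/12)_n (5/12)_n / n!^2 / a^n."""
    return rf(mpf(1)/12, n)*rf(mpf(5)/12, n)/factorial(n)**2/a**n

lhs1 = fsum((22*n + 1)*t(n, mpf(125)/4) for n in range(200))
rhs1 = (2**(mpf(4)/3)*5**(mpf(5)/4)/3)*pi/gamma(mpf(1)/3)**3
lhs2 = fsum((126*n + 5)*t(n, (mpf(11)/2)**3) for n in range(200))
rhs2 = sqrt(mpf(2))*3**(mpf(1)/4)*11**(mpf(5)/4)*sqrt(pi)/gamma(mpf(1)/4)**2
print(abs(lhs1 - rhs1), abs(lhs2 - rhs2))   # both negligible
```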
Choosing instead $F=E_{6}^{1/6}$ leads to the following general formula:
$\sum_{n\geq 0}\left(12n+1-\dfrac{E_{2}^{*}E_{6}}{E_{4}^{2}}(\tau)\right)\dfrac{(1/12)_{n}(7/12)_{n}/n!^{2}}{(1-J_{1}(\tau))^{n}}=\dfrac{3}{\pi\Im(\tau)}E_{4}(\tau)^{-1/4}\left(\dfrac{E_{4}^{3/2}}{E_{6}}(\tau)\right)^{-7/6}\;,$
and specializing to the same values of $\tau$ gives
$\displaystyle\sum_{n\geq 0}(150n+7)\dfrac{(1/12)_{n}(7/12)_{n}}{n!^{2}}\dfrac{1}{(-121/4)^{n}}=\dfrac{2^{4/3}11^{7/6}\pi}{\Gamma(1/3)^{3}}=\dfrac{2^{1/3}3^{1/2}11^{7/6}}{B(1/3,1/3)}\;,$
$\displaystyle\sum_{n\geq 0}(726n+29)\dfrac{(1/12)_{n}(7/12)_{n}}{n!^{2}}\dfrac{1}{(-1323/8)^{n}}=\dfrac{2^{1/2}3^{5/2}7^{7/6}\pi^{1/2}}{\Gamma(1/4)^{2}}=\dfrac{2^{1/2}3^{5/2}7^{7/6}}{B(1/4,1/4)}\;.$

## 5 Generalization I: Rational Hypergeometric Formulas for $1/\pi^{c}$

_Finding_ rational hypergeometric identities for $1/\pi^{c}$ is easily done using linear dependence algorithms based on the LLL algorithm. _Proving_ them is more difficult: among the methods used is the WZ method, but it is not the only one. In fact some of the identities (including the three known identities for $c\geq 3$) are still conjectural. _Explaining_ them, as we have done above for the $1/\pi$ formulas, has only recently begun, in a paper by Dembelé et al. [6]: the $1/\pi^{2}$ formulas are linked to Asai $L$-functions attached to Hilbert modular forms for real quadratic fields. However, this apparently still does not prove all of them.
In the following table, we list all known formulas, as before coding $H$ as $\prod_{d\in I}d^{v_{d}}$, meaning that $H_{n}=\prod_{d\in I}R_{n}(d)^{v_{d}}$; the formula is of the form
$\sum_{n\geq 0}P(n)\dfrac{H_{n}}{a^{n}}=\dfrac{\sqrt{k}}{\pi^{c}}\;.$

| # | $c$ | $H$ | $a$ | $P$ | $k$ |
|---|---|---|---|---|---|
| 1 | $2$ | $2^{5}$ | $-2^{2}$ | $20x^{2}+8x+1$ | $2^{6}$ |
| 2 | $2$ | $2^{5}$ | $-2^{10}$ | $820x^{2}+180x+13$ | $2^{14}$ |
| 3 | $2$ | $2^{3}\cdot 3$ | $2^{6}3^{-3}$ | $74x^{2}+27x+3$ | $2^{8}\cdot 3^{2}$ |
| 4 | $2$ | $2^{3}\cdot 4$ | $2^{4}$ | $120x^{2}+34x+3$ | $2^{10}$ |
| 5 | $2$ | $2\cdot 3\cdot 4$ | $-2^{4}3$ | $252x^{2}+63x+5$ | $2^{8}\cdot 3^{2}$ |
| 6 | $2$ | $2\cdot 3\cdot 6$ | $-2^{12}3^{-6}$ | $1930x^{2}+549x+45$ | $2^{14}\cdot 3^{2}$ |
| 7 | $2$ | $2\cdot 3\cdot 6$ | $-2^{12}5^{3}$ | $5418x^{2}+693x+29$ | $2^{14}\cdot 5$ |
| 8 | $2$ | $2\cdot 3\cdot 6$ | $3^{-6}5^{6}$ | $532x^{2}+126x+9$ | $2^{-4}\cdot 3^{2}\cdot 5^{6}$ |
| 9 | $2$ | $2\cdot 4\cdot 6$ | $-2^{10}$ | $1640x^{2}+278x+15$ | $2^{16}\cdot 3^{-1}$ |
| 10 | $2$ | $2\cdot 8$ | $7^{4}$ | $1920x^{2}+304x+15$ | $2^{6}\cdot 7^{3}$ |
| 11 | $3$ | $2^{7}$ | $2^{6}$ | $168x^{3}+76x^{2}+14x+1$ | $2^{10}$ |
| 12 | $4$ | $2^{5}\cdot 3\cdot 4$ | $-2^{8}3^{-3}$ | $4528x^{4}+3180x^{3}+972x^{2}+147x+9$ | $2^{16}\cdot 3^{2}$ |
| 13 | $4$ | $2^{7}\cdot 4$ | $2^{12}$ | $43680x^{4}+20632x^{3}+4340x^{2}+466x+21$ | $2^{22}$ |

Rational hypergeometric formulas for $1/\pi^{c}$

Most formulas for $1/\pi^{2}$ were found by the second author, the formula for $1/\pi^{3}$ was found by B. Gourevitch, and the two formulas for $1/\pi^{4}$ were found by Y. Zhao and J. Cullen respectively.
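The first row is Guillera's well-known formula $\sum_{n\geq 0}(20n^{2}+8n+1)R_{n}(2)^{5}/(-4)^{n}=8/\pi^{2}$, and it can be verified numerically just like the $1/\pi$ formulas. A sketch:

```python
from mpmath import mp, mpf, rf, factorial, fsum, pi

mp.dps = 30
R2 = lambda n: rf(mpf(1)/2, n)/factorial(n)    # R_n(2) = (1/2)_n / n!
s = fsum((20*n**2 + 8*n + 1)*R2(n)**5/mpf(-4)**n for n in range(300))
print(abs(s - 8/pi**2))    # ~ 0
```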
We can also find in the literature the following divergent formulas for $1/\pi^{c}$:

| # | $c$ | $H$ | $a$ | $P$ | $k$ | sign |
|---|---|---|---|---|---|---|
| 14 | $2$ | $2^{5}$ | $-2^{-2}$ | $10x^{2}+6x+1$ | $2^{4}$ | $+$ |
| 15 | $2$ | $2^{5}$ | $-2^{-10}$ | $205x^{2}+160x+32$ | $2^{8}$ | $+$ |
| 16 | $2$ | $2^{3}\cdot 3$ | $-3^{-3}$ | $28x^{2}+18x+3$ | $2^{2}3^{2}$ | $+$ |
| 17 | $2$ | $2\cdot 5$ | $-2^{8}5^{-5}$ | $483x^{2}+245x+30$ | $2^{8}5^{2}$ | $+$ |
| 18 | $2$ | $2\cdot 3\cdot 4$ | $-2^{4}3^{-3}$ | $172x^{2}+75x+9$ | $2^{8}3^{2}$ | $+$ |
| 19 | $3$ | $2^{7}$ | $2^{-6}$ | $21x^{3}+22x^{2}+8x+1$ | $-2^{2}3^{2}$ | $-$ |
| 20 | $3$ | $2^{5}\cdot 3$ | $2^{2}3^{-3}$ | $92x^{3}+84x^{2}+27x+3$ | $-2^{8}3^{2}$ | $-$ |
| 21 | $4$ | $2^{5}\cdot 5$ | $-2^{10}5^{-5}$ | $5532x^{4}+5600x^{3}+2275x^{2}+425x+30$ | $2^{16}5^{2}$ | $+$ |

## 6 Generalization II: Taylor Expansions of $1/\pi^{c}$ Formulas

### 6.1 Taylor Expansions of $1/\pi$ Formulas

Following ideas of the second author [11], [14], [15], we are going to generalize the above formulas by considering them as constant terms of Taylor expansions. Note that for any $y$ we can define $(x)_{n+y}=\Gamma(x+n+y)/\Gamma(x)$, hence, since $n!=(1)_{n}$:
$R_{n+x}(d)=\prod_{\begin{subarray}{c}1\leq i\leq d\\ \gcd(i,d)=1\end{subarray}}\dfrac{(i/d)_{n+x}}{(1)_{n+x}}\;.$
Thus $H_{n+x}$ makes sense, so we could define the generalized sum as
$\sum_{n\geq 0}P(n+x)\dfrac{H_{n+x}}{a^{n+x}}\;.$
However, when $a<0$ the factor $a^{x}$ introduces parasitic imaginary terms, so we prefer to define
$S(H,a,P;x)=\sum_{n\geq 0}P(n+x)\dfrac{H_{n+x}}{\operatorname{sign}(a)^{n}|a|^{n+x}}=\sum_{n\geq 0}P(n+x)\dfrac{H_{n+x}}{a^{n}|a|^{x}}\;,$
which is equal to $\operatorname{sign}(a)^{x}$ times the previous one. Note that this is not the only possible normalization. We could also shift all the Pochhammer indices by $x$ instead of shifting $n$. In all cases, this would give the above series multiplied by a quotient of products of gamma functions involving $x$, so the transformation from one to the other is immediate.
It is clear that $S(H,a,P;x+1)=\operatorname{sign}(a)(S(H,a,P;x)-P(x)H_{x}/|a|^{x})$, so we may assume if necessary that $x\in[0,1[$. In view of the existing literature, we can ask at least two questions: first, give (at least the initial terms of) the power series expansion of $S(H,a,P;x)$ around $x=0$. Second, give the value of $S(H,a,P;1/2)$. One observes that if the value of the sum is $a_{0}\sqrt{-D}/\pi$ with $a_{0}\in{\mathbb{Q}}^{*}$ and $D$ a negative fundamental discriminant, the expansion is always of the form $S(H,a,P;x)=a_{0}|D|\left(\dfrac{\sqrt{-D}}{\pi}+0x-a_{2}|D|L(D,1)x^{2}-a_{3}D^{2}L(D,2)x^{3}+O(x^{4})\right)\;,$ with the $a_{i}$ rational, and where we write $L(D,m)$ for $\sum_{n\geq 1}\mbox{$\left(\frac{D}{n}\right)$}/n^{m}$ (of course, since $D<0$ we have $a_{2}|D|L(D,1)=a^{\prime}_{2}\sqrt{-D}\pi$ for some rational $a^{\prime}_{2}$). Note that, as mentioned above, it is in principle possible to compute the coefficient of $x^{4}$, but by laziness we have done so only for cases (33), (35), and (36), see below. 
The following table uses the same numbering of the formulas as that given above; the column $C_{3}$ is related to supercongruences and will be explained below:
$\displaystyle S(H,a,P;x)=a_{0}|D|\left(\dfrac{\sqrt{-D}}{\pi}+0x-a_{2}|D|L(D,1)x^{2}-a_{3}D^{2}L(D,2)x^{3}+O(x^{4})\right)\;,$
$\displaystyle S_{p}(H,a,P)\equiv P(0)\mbox{$\left(\dfrac{D}{p}\right)$}p+C_{3}L(D,3-p)p^{3}\ ({\rm{mod}}\,\,p^{4})\;.$

| # | $D$ | $a_{0}$ | $a_{2}$ | $a_{3}$ | $\pi^{2}S(H,a,P;1/2)/(a_{0}|D|\sqrt{-D})$ | $C_{3}$ |
|---|---|---|---|---|---|---|
| 1 | $-15$ | $1/3$ | $3/4$ | $1/2$ | $\log(3^{3}/5)$ | $20$ |
| 2 | $-8$ | $2$ | $7/2$ | $4$ | $\log(2)$ | $15$ |
| 3 | $-24$ | $2/3$ | $15/4$ | $2$ | $\log(2^{5}/3^{3})$ | $5/2$ |
| 4 | $-120$ | $2/27$ | $23/8$ | $1/3$ | $\log(3^{3}\cdot 5/2^{7})$ | $5/12$ |
| 5 | $-15$ | $128/9$ | $39/4$ | $12$ | $\log(2^{2}3^{9}/5^{7})$ | $5/64$ |
| 6 | $-1320$ | $2/3$ | $63/16$ | $1/26$ | $\log(2^{13}11^{5}/(3^{3}5^{11}))$ | $5/104$ |
| 7 | $-40020$ | $16/3$ | $159/128$ | $1/11560$ | $\log(3^{21}5^{13}29^{5}/(2^{38}23^{11}))$ | $5/36992$ |
| 8 | $-20$ | $1/8$ | $1$ | $1/2$ | $2\operatorname{asin}(3/5)$ | $-15/2$ |
| 9 | $-15$ | $1/18$ | $2$ | $3/2$ | $2\operatorname{asin}(7/5^{2})$ | $-5/8$ |
| 10 | $-132$ | $1/96$ | $3/2$ | $1/8$ | $2\operatorname{asin}(41/(3^{3}11))$ | $-5/4$ |
| 11 | $-255$ | $1/162$ | $1$ | $1/12$ | $2\operatorname{asin}(4207/(5^{4}17^{2}))$ | $-5/81$ |
| 12 | $-4$ | $1$ | $3$ | $4$ | $2\log(2)$ | $6$ |
| 13 | $-7$ | $9/7$ | $5/2$ | $5/2$ | $2\log((88+13\sqrt{7})/3^{4})$ | $20/3$ |
| 14 | $-3$ | $16/9$ | $21/2$ | $20$ | $(3/2)\log(3^{3}/2^{4})$ | $15/8$ |
| 15 | $-4$ | $9$ | $11$ | $20$ | $2\log(3^{2}/2^{3})$ | $10/3$ |
| 16 | $-20$ | $36/25$ | $23/4$ | $4$ | $\log(2^{18}/(3^{4}5^{5}))$ | $1/6$ |
| 17 | $-4$ | $441$ | $35$ | $100$ | $2\log(2\cdot 3^{10}/7^{6})$ | $50/147$ |
| 18 | $-4$ | $9/16$ | $2$ | $5/2$ | $2\operatorname{asin}(7/3^{2})$ | $-10/3$ |
| 19 | $-3$ | $2/3$ | $6$ | $10$ | $\pi/3$ | $-15/8$ |
| 20 | $-8$ | $9/64$ | $4$ | $4$ | $2\operatorname{asin}(17/3^{4})$ | $-1/3$ |
| 21 | $-3$ | $49/27$ | $24$ | $60$ | $2\operatorname{asin}(239/(2\cdot 7^{4}))$ | $-45/392$ |
| 22 | $-11$ | $18/11$ | $10$ | $10$ | $2\operatorname{asin}(353/(2\cdot 3^{8}))$ | $-5/24$ |
| 23 | $-8$ | $9801/64$ | $28$ | $60$ | $2\operatorname{asin}(8668855388657/(3^{8}11^{12}))$ | $-5/1089$ |
| 24 | $-3$ | $4/9$ | $5/2$ | $10/3$ | $\log(3^{3}/2^{2})$ | $5/2$ |
| 25 | $-3$ | $4$ | $13/2$ | $10$ | $3\log(4/3)$ | $15/2$ |
| 26 | $-15$ | $4/75$ | $7/4$ | $1$ | $\log(3^{9}/(2^{2}5^{5}))$ | $1/4$ |
| 27 | $-3$ | $32$ | $37/2$ | $40$ | $3\log(2^{8}/3^{5})$ | $15/4$ |
| 28 | $-7$ | $108/49$ | $15/2$ | $10$ | $\log(7^{7}/(2^{10}3^{6}))$ | $5/18$ |
| 29 | $-3$ | $500$ | $85/2$ | $130$ | $3\log(5^{6}/(2^{6}3^{5}))$ | $39/50$ |
| 30 | $-3$ | $1$ | $2$ | $5/2$ | $(2/3)(\pi-\operatorname{asin}(1633/3^{9}))$ | $-15/4$ |
| 31 | $-4$ | $27/32$ | $4$ | $5$ | $2\operatorname{asin}(329/3^{6})$ | $-20/9$ |
| 32 | $-3$ | $5/2$ | $8$ | $13$ | $2\operatorname{asin}(239/3^{6})$ | $-78/25$ |
| 33 | $-4$ | $1/4$ | $1$ | $1$ | $8L(-4,2)/\pi$ | $2$ |
| 34 | $-8$ | $1/8$ | $3/2$ | $1$ | $4L(-4,2)/\pi$ | $1$ |
| 35 | $-4$ | $1/2$ | $2$ | $2$ | $\pi/2$ | $-2$ |
| 36 | $-4$ | $2$ | $6$ | $8$ | $\pi/6$ | $-2$ |

### 6.2 Observations for $1/\pi$

Concerning the Taylor expansions around $x=0$, note that the coefficient $a_{1}$ of $x^{1}$ always vanishes, and that the numerators of the first seven $a_{2}$ are equal to $|D(\tau)|-4$, where $D(\tau)=-7$, $-11$, $-19$, $-27$, $-43$, $-67$, and $-163$ is the discriminant of the corresponding $\tau$ (not to be confused with the $D$ occurring in the result). In addition, we also notice a common pattern for the value at $x=1/2$:

1. In the value of $S(H,a,P;1/2)$ for $a<0$: with only a few exceptions listed below, we have $S(H,a,P;1/2)=c_{0}|D|\sqrt{-D}\log(c_{1})/\pi^{2}$, where $c_{0}$ and $c_{1}$ are rational. The exceptions are as follows:
   * In cases (33) and (34) we have $S(H,a,P;1/2)=c_{0}|D|\sqrt{-D}L(-4,2)/\pi^{3}$ with $c_{0}$ rational.
   * In case (13) $c_{1}$ is not rational but lies in the quadratic field ${\mathbb{Q}}(\sqrt{7})$.
2. In the value of $S(H,a,P;1/2)$ for $a>0$: in all cases we have $S(H,a,P;1/2)=c_{0}|D|\sqrt{-D}\operatorname{asin}(c_{1})/\pi^{2}$ where $c_{0}$ and $c_{1}$ are rational (note that $c|D|\sqrt{-D}/\pi$ is of course of this form, for instance by choosing $c_{1}=1$).
It is also possible to guess the coefficient of $x^{4}$: for instance in cases (33), (35), and (36), set
$C_{1}=\sum_{n\geq 1}(-1)^{n}\dfrac{H_{2n}}{(2n+1)^{2}}\text{\qquad and\qquad}C_{2}=\sum_{n\geq 1}(-1)^{n}\dfrac{H_{n}}{(2n+1)^{2}}\;,$
where here $H_{n}=\sum_{1\leq j\leq n}1/j$ is the $n$th harmonic sum. Then
$\displaystyle S(H,a,P;x)=a_{0}|D|\left(\dfrac{\sqrt{-D}}{\pi}+0x-a_{2}|D|L(D,1)x^{2}-a_{3}D^{2}L(D,2)x^{3}+a_{4}x^{4}+O(x^{5})\right)\;,$
with $D=-4$ and
$\displaystyle a_{4}=(8/3)(50C_{1}-11C_{2}-22L(-4,2)\log(2))\;,$
$\displaystyle a_{4}=(128/3)(10C_{1}-C_{2}-2L(-4,2)\log(2))\;,$
$\displaystyle a_{4}=128(22C_{1}-C_{2}-2L(-4,2)\log(2))$
for cases (33), (35), and (36) respectively.

### 6.3 Taylor Expansions of $1/\pi^{c}$ Formulas for $c\geq 2$

For $c=2$ one observes that if the value of the sum is $a_{0}\sqrt{D}/\pi^{2}$ with $a_{0}\in{\mathbb{Q}}^{*}$ and $D$ a fundamental discriminant, the expansion is always of the form
$\displaystyle S(H,a,P;x)=a_{0}D\left(\dfrac{\sqrt{D}}{\pi^{2}}+0x-a_{2}D\sqrt{D}x^{2}+0x^{3}+a_{4}D^{3}L(D,2)x^{4}-a_{5}a_{0}D^{4}L(D,3)x^{5}+O(x^{6})\right)\;,$
with the $a_{i}$ rational (of course, when $D>0$ we have $a_{4}D^{3}L(D,2)=a^{\prime}_{4}\pi^{2}D\sqrt{D}$ for some rational $a^{\prime}_{4}$).
$\displaystyle S(H,a,P;x)=a_{0}D\left(\dfrac{\sqrt{D}}{\pi^{2}}+0x-a_{2}D\sqrt{D}x^{2}+0x^{3}+a_{4}D^{3}L(D,2)x^{4}-a_{5}a_{0}D^{4}L(D,3)x^{5}+O(x^{6})\right)\;,$
$\displaystyle S_{p}(H,a,P)\equiv P(0)\mbox{$\left(\dfrac{D}{p}\right)$}p^{2}+C_{5}L(D,4-p)p^{5}\ ({\rm{mod}}\,\,p^{6})\;.$

| # | $D$ | $a_{0}$ | $a_{2}$ | $a_{4}$ | $a_{5}$ | $\pi^{3}S(1/2)/(a_{0}D\sqrt{D})$ | $C_{5}$ |
|---|---|---|---|---|---|---|---|
| 1 | $1$ | $8$ | $1/2$ | $25/4$ | $7$ | $14\zeta(3)/\pi^{2}$ | $-7/2$ |
| 2 | $1$ | $128$ | $5/2$ | $305/4$ | $7$ | $2\zeta(3)/\pi^{2}$ | $-7/2$ |
| 3 | $1$ | $48$ | $1/3$ | $4$ | $7/9$ | $2\pi/3$ | $21$ |
| 4 | $1$ | $32$ | $1$ | $20$ | $7$ | $\pi/3$ | $21/2$ |
| 5 | $1$ | $48$ | $3/2$ | $157/4$ | $91/9$ | $4\log(2^{5}/3^{3})$ | $-91/9$ |
| 6 | $1$ | $384$ | $5/6$ | $85/4$ | $7/9$ | $2\log(2)$ | $-315$ |
| 7 | $5$ | $128/5$ | $3/2$ | $887/32$ | $21/8$ | $\log(2^{74}5^{5}/3^{54})$ | $-35/216$ |
| 8 | $1$ | $375/4$ | $4/3$ | $40$ | $6944/1125$ | $2\operatorname{asin}(164833/5^{8})$ | $1953/50$ |
| 9 | $12$ | $32/9$ | $7/24$ | $757/1152$ | $3/16$ | $\log(3^{9}/2^{14})$ | $-15/2$ |
| 10 | $28$ | $1$ | $1/7$ | $31/336$ | $1/21$ | $2\operatorname{asin}(2241857/2^{25})$ | $35/8$ |

The last three (one for $1/\pi^{3}$ and two for $1/\pi^{4}$) obey completely similar expansions, but we give them one by one:

11:
$\displaystyle S(H,a,P;x)=32\left(\dfrac{1}{\pi^{3}}+0x-\dfrac{1}{\pi}x^{2}+0x^{3}+(16/3)L(-4,1)x^{4}+0x^{5}-(8224/45)L(-4,3)x^{6}+32^{2}\cdot 48L(-4,4)x^{7}+O(x^{8})\right)\;,$
$\displaystyle S(H,a,P;1/2)=\dfrac{8}{\pi^{3}}\;,$
$\displaystyle S_{p}(H,a,P)\equiv\mbox{$\left(\dfrac{-4}{p}\right)$}p^{3}-6L(-4,5-p)p^{7}\ ({\rm{mod}}\,\,p^{8})\;.$

12:
$\displaystyle S(H,a,P;x)=768\left(\dfrac{1}{\pi^{4}}+0x-\dfrac{1/2}{\pi^{2}}x^{2}+0x^{3}+(3/8)x^{4}+0x^{5}-(147/8)\zeta(2)x^{6}+0x^{7}+(471187/1344)\zeta(4)x^{8}-3968\zeta(5)x^{9}+O(x^{10})\right)\;,$
$\displaystyle S(H,a,P;1/2)=\dfrac{9216\zeta(3)}{\pi^{7}}\;,$
$\displaystyle S_{p}(H,a,P)\equiv 9p^{4}-(837/2)\zeta(6-p)p^{9}\ ({\rm{mod}}\,\,p^{10})\;.$

13:
$\displaystyle S(H,a,P;x)=2048\left(\dfrac{1}{\pi^{4}}+0x-\dfrac{2}{\pi^{2}}x^{2}+0x^{3}+(11/3)x^{4}+0x^{5}-(908/15)\zeta(2)x^{6}+0x^{7}+(53932/7)\zeta(4)x^{8}-95232\zeta(5)x^{9}+O(x^{10})\right)\;,$
$\displaystyle S(H,a,P;1/2)=\dfrac{2048/15}{\pi^{4}}\;,$
$\displaystyle S_{p}(H,a,P)\equiv 21p^{4}+(279/4)\zeta(6-p)p^{9}\ ({\rm{mod}}\,\,p^{10})\;.$

The observations for $1/\pi^{c}$ are essentially identical to the case of $1/\pi$; in particular the coefficients of $x^{2j-1}$ for $1\leq j\leq c$ vanish.

## 7 Generalization III: Upside-Down Series

For completeness, we list the upside-down series (i.e., with $H_{n}$ in the denominator) given in [17]. We do not know if all of them have been proved, but probably not all of those with $c>1$. The general recipe is as follows: if $\sum_{n\geq 0}P(n)H_{n}/a^{n}=\sqrt{k}/\pi^{c}$ is a _divergent_ or semi-convergent series (i.e., with $|a|<1$ or $a=-1$), then
$\sum_{n\geq 1}\dfrac{P(-n)}{n^{2c+1}H_{n}(1/a)^{n}}=A\cdot L(D,c+1)\;,$
where $D$ is the fundamental discriminant corresponding to $(-1)^{c}k$ and $A\in{\mathbb{Q}}^{*}$. Thus, the only new value is that of $A$. However, for the reader's convenience, we give the list explicitly, with the numbering corresponding to that of the initial series (which is different for $c=1$ and $c\geq 2$).
$\sum_{n\geq 1}\dfrac{Q(n)}{n^{2c+1}H_{n}b^{n}}=A\cdot L(D,c+1)\;.$

| # | $c$ | $H_{N}$ | $b=1/a$ | $Q$ | $D$ | $A$ |
|---|---|---|---|---|---|---|
| 37 | $1$ | $2\cdot 4$ | $-2^{4}3^{-2}$ | $5x-1$ | $-3$ | $-45/2$ |
| 38 | $1$ | $2\cdot 4$ | $2^{8}3^{-4}$ | $35x-8$ | $1$ | $72$ |
| 39 | $1$ | $2\cdot 3$ | $2^{-1}\cdot 3^{3}$ | $10x-3$ | $1$ | $3$ |
| 40 | $1$ | $2\cdot 3$ | $2^{-4}3^{3}$ | $11x-3$ | $1$ | $48$ |
| 41 | $1$ | $2\cdot 3$ | $-2^{2}$ | $15x-4$ | $-3$ | $-27$ |
| 42 | $1$ | $2^{3}$ | $-2^{3}$ | $3x-1$ | $-4$ | $-2$ |
| 43 | $1$ | $2^{3}$ | $2^{6}$ | $21x-8$ | $1$ | $1$ |
| 44 | $1$ | $2^{3}$ | $2^{2}$ | $3x-1$ | $1$ | $3$ |
| 33 | $1$ | $2^{3}$ | $-1$ | $4x-1$ | $-4$ | $-16$ |
| 14 | $2$ | $2^{5}$ | $-2^{2}$ | $10x^{2}-6x+1$ | $1$ | $-28$ |
| 15 | $2$ | $2^{5}$ | $-2^{10}$ | $205x^{2}-160x+32$ | $1$ | $-2$ |
| 16 | $2$ | $2^{3}\cdot 3$ | $-3^{3}$ | $28x^{2}-18x+3$ | $1$ | $-14$ |
| 17 | $2$ | $2\cdot 5$ | $-2^{-8}5^{5}$ | $483x^{2}-245x+30$ | $1$ | $-896$ |
| 18 | $2$ | $2\cdot 3\cdot 4$ | $-2^{-4}3^{3}$ | $172x^{2}-75x+9$ | $1$ | $-1792$ |
| 19 | $3$ | $2^{7}$ | $2^{6}$ | $21x^{3}-22x^{2}+8x-1$ | $1$ | $45/4$ |
| 20 | $3$ | $2^{5}\cdot 3$ | $2^{-2}3^{3}$ | $92x^{3}-84x^{2}+27x-3$ | $1$ | $720$ |
| 21 | $4$ | $2^{5}\cdot 5$ | $-2^{-10}5^{5}$ | $5532x^{4}-5600x^{3}+2275x^{2}-425x+30$ | $1$ | $-380928$ |

Upside-down series

In particular, note that the last formula gives a series for $\zeta(5)$. We finish by giving an example of a nonrational $1/\pi$ formula, but there exist almost a hundred involving only quadratic irrationals, listed in [1]:
$\sum_{n\geq 0}\dfrac{R_{n}(2)^{3}}{(2+\sqrt{3})^{4n}}(12n+(3-\sqrt{3}))=\dfrac{(2+\sqrt{3})(4/3)^{1/4}}{\pi}\;.$

## 8 Generalization IV: Supercongruences

It has been noted long ago by several authors that to _every_ $1/\pi^{c}$ formula (including divergent ones) corresponds a congruence modulo a higher power of $p$ than could be expected — what is now called a _supercongruence_.
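Entries of this upside-down table are easy to check numerically, since the $H_{n}$ in the denominator makes the series converge geometrically. A sketch for entries 42 and 44 (with $L(-4,2)$ the Catalan constant and $L(1,2)=\zeta(2)=\pi^{2}/6$):

```python
from mpmath import mp, mpf, binomial, fsum, pi, catalan

mp.dps = 30

def H4(n):
    """H_n = R_n(2)^3 = (C(2n,n)/4^n)^3."""
    return (binomial(2*n, n)/mpf(4)**n)**3

# entry 44: b = 2^2,  A*L(1,2) = 3*zeta(2) = pi^2/2
s44 = fsum((3*n - 1)/(n**3*H4(n)*mpf(4)**n) for n in range(1, 200))
# entry 42: b = -2^3, A*L(-4,2) = -2*Catalan
s42 = fsum((3*n - 1)/(n**3*H4(n)*mpf(-8)**n) for n in range(1, 200))
print(abs(s44 - pi**2/2), abs(s42 + 2*catalan))   # both negligible
```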
The main observation is as follows: if there exists a $1/\pi^{c}$ formula of the form $\sum_{n\geq 0}P(n)H_{n}/a^{n}=\sqrt{k}/\pi^{c}$, then for all primes $p$ such that $v_{p}(a)=v_{p}(k)=0$ and not dividing any $d$ occurring in $H$ we should have the following precise supercongruence:
$\sum_{n=0}^{p-1}P(n)\dfrac{H_{n}}{a^{n}}\equiv P(0)\mbox{$\left(\dfrac{(-1)^{c}4k}{p}\right)$}p^{c}\ ({\rm{mod}}\,\,p^{2c+1})\;.$
For instance, we have the following supercongruences:
$\displaystyle\sum_{n=0}^{p-1}(154n+15)\dfrac{R_{n}(2)R_{n}(6)}{(-8/3)^{3n}}\equiv 15\mbox{$\left(\dfrac{-8}{p}\right)$}p\ ({\rm{mod}}\,\,p^{3})\;,$
$\displaystyle\sum_{n=0}^{p-1}(5418n^{2}+693n+29)\dfrac{R_{n}(2)R_{n}(3)R_{n}(6)}{(-80)^{3n}}\equiv 29\mbox{$\left(\dfrac{20}{p}\right)$}p^{2}\ ({\rm{mod}}\,\,p^{5})\;.$
The same phenomenon is valid for the _divergent_ series for $1/\pi^{c}$ that we have given. For instance, we have
$\sum_{n=0}^{p-1}(35n+8)\dfrac{R_{n}(2)R_{n}(4)}{(3/4)^{4n}}\equiv 8p\ ({\rm{mod}}\,\,p^{3})\;.$
However, it has been noticed by several authors that these supercongruences can be refined to a higher power of $p$ (more precisely, to a congruence modulo $p^{2c+2}$ instead of $p^{2c+1}$): the recipe, made precise by the second author, is simply to replace the $L(D,c+1)$ occurring in the coefficient of $x^{2c+1}$ of the Taylor expansions by $L(D,c+2-p)$ times a suitable rational number. In other words,
$\displaystyle S_{p}(H,a,P):=\sum_{n=0}^{p-1}P(n)\dfrac{H_{n}}{a^{n}}\equiv P(0)\mbox{$\left(\dfrac{(-1)^{c}4k}{p}\right)$}p^{c}+C_{2c+1}p^{2c+1}L(D,c+2-p)\ ({\rm{mod}}\,\,p^{2c+2})\;.$
The coefficients $C_{2c+1}$ have been given for all the convergent series in the above tables. The remaining coefficients for the divergent series are as follows: for the divergent $1/\pi$ formulas, $C_{3}=(15/4,0,0,0,12,2,0,0)$ for formulas (37) to (44).
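Such supercongruences can be verified in exact rational arithmetic. A sketch for formula 35 of the basic list ($c=1$, $k=2^{4}$, so the Legendre symbol is $(-64/p)=(-1/p)$, and $H_{n}/4^{n}=C(2n,n)^{3}/256^{n}$ has a denominator that is a power of $2$, hence coprime to any odd prime $p$):

```python
from fractions import Fraction
from math import comb

def legendre(a, p):
    """Legendre symbol (a/p) for an odd prime p, via Euler's criterion."""
    a %= p
    if a == 0:
        return 0
    return 1 if pow(a, (p - 1)//2, p) == 1 else -1

def check(p):
    # Formula 35: sum_{n=0}^{p-1} (6n+1) R_n(2)^3 / 4^n, with R_n(2) = C(2n,n)/4^n
    S = sum(Fraction((6*n + 1)*comb(2*n, n)**3, 256**n) for n in range(p))
    # Predicted: S ≡ P(0)*((-1)^c*4k/p)*p = (-1/p)*p  (mod p^3)
    diff = S - legendre(-1, p)*p
    # denominator of diff is a power of 2, coprime to p, so test p^3 | numerator
    return diff.numerator % p**3 == 0

ok = all(check(p) for p in [5, 7, 11, 13, 17])
print(ok)   # True
```

(In this particular case an even stronger congruence, modulo $p^{4}$ for the half-range sum, is a theorem of Van Hamme/Long, so the test is safe; for other rows of the tables the same loop applies with the appropriate $H$, $a$, $k$.)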
For the divergent $1/\pi^{c}$ formulas with $c\geq 2$, $C_{2c+1}=(-7/2,-64,-21/2,-210,-63,0,0,-1395)$ for formulas (14) to (21). The coefficients $0$ of course mean that the congruence is valid modulo $p^{2c+2}$ with no correction term. We can observe that $C_{3}$ is almost always divisible by $5$, $C_{5}$ is almost always divisible by $7$, and $C_{9}$ is always divisible by $279=3^{2}\cdot 31$. We have no explanation for this phenomenon.

## References

* [1] A. M. Aldawoud, Ramanujan type series for $1/\pi$ with quadratic irrationals, Master's thesis, Massey Univ., Albany, New Zealand (2012).
* [2] G. Almkvist, Some conjectured formulas for $1/\pi$ coming from polytopes, K3-surfaces, and Moonshine, arXiv:1211.6563v1.
* [3] G. Almkvist and J. Guillera, Ramanujan-like series for $1/\pi^{2}$ and string theory, Exp. Math. 21 (2012), 223–234, arXiv:1009.5202v3.
* [4] G. Almkvist and J. Guillera, Ramanujan–Sato-like series, Number theory and related fields, Springer Proc. Math. Stat. 43 (2013), 55–74, arXiv:1201.5233v3.
* [5] A. Aycock, On proving some of Ramanujan's formulas for $1/\pi$ with an elementary method, arXiv:1309.1140v2.
* [6] L. Dembelé, A. Panchishkin, J. Voight, and W. Zudilin, Special hypergeometric motives and their $L$-functions: Asai recognition, Experimental Math., to appear (13p.), arXiv:1906.07384v3.
* [7] J. Guillera, History of the formulas and algorithms for $\pi$, Gems in experimental mathematics, Contemp. Math. 517, Amer. Math. Soc. (2010), 173–188, arXiv:0807.0872.
* [8] J. Guillera, A matrix form of Ramanujan-type series for $1/\pi$, Gems in experimental mathematics, Contemp. Math. 517, Amer. Math. Soc. (2010), 189–206, arXiv:0907.1547v1.
* [9] J. Guillera, A new Ramanujan-like series for $1/\pi^{2}$, Ramanujan J. 26 (2011), 369–374, arXiv:1003.1915v2.
* [10] J. Guillera, WZ-proofs of "divergent" Ramanujan-type series, Advances in combinatorics (2013), 187–195, arXiv:1012.2681v2.
* [11] J. Guillera, More hypergeometric identities related to Ramanujan-type series, Ramanujan J. 32 (2013), 5–22, arXiv:1104.1994v4.
* [12] J. Guillera, Kind of proofs of Ramanujan-like series, arXiv:1203.1255v3.
* [13] J. Guillera, A family of Ramanujan–Orr formulas for $1/\pi$, Integral Transforms Spec. Funct. 26 (2015), 531–538, arXiv:1501.06413v4.
* [14] J. Guillera, Bilateral sums related to Ramanujan-like series, arXiv:1610.04839v2.
* [15] J. Guillera, Ramanujan series with a shift, J. Aust. Math. Soc. 107 (2019), 367–380.
* [16] J. Guillera, Bilateral Ramanujan-like series for $1/\pi^{k}$ and their congruences, Int. J. Number Theory 16 (2020), 1969–1988, arXiv:1908.05123.
* [17] J. Guillera and M. Rogers, Ramanujan series upside-down, J. Aust. Math. Soc. 97 (2014), 78–106, arXiv:1206.3981v1.
* [18] J. Guillera and W. Zudilin, Ramanujan-type formulae for $1/\pi$: the art of translation, The legacy of S. Ramanujan, 181–195, Ramanujan Math. Soc. Lect. Notes Series 20 (2013), arXiv:1302.0548v2.
* [19] T. Piezas III, Pi formulas, Ramanujan and the baby monster group, preprint.
* [20] S. Ramanujan, Modular equations and approximations to $\pi$, Quart. J. Pure Appl. Math. 45 (1914), 350–372.
* [21] D. Zagier, Elliptic modular forms and their applications, in The 1-2-3 of Modular Forms, Lectures at a Summer School in Nordfjordeid, Norway, Springer (2008).
* [22] W. Zudilin, Quadratic transformations and Guillera's formulae for $1/\pi^{2}$, Math. Notes 81 (2007), 297–301, arXiv:math/0509465 (2006).
* [23] W. Zudilin, More Ramanujan-type formulae for $1/\pi^{2}$, Russian Math. Surveys 62 (2007), 634–636.
* [24] W. Zudilin, Ramanujan-type formulae for $1/\pi$: A second wind?, in Modular Forms and String Duality, N. Yui, H. Verrill and C. F. Doran (eds.), Fields Inst. Commun. Ser. 54 (2008), 179–188, arXiv:0712.1332 (2008).

Henri Cohen, Université de Bordeaux, LFANT, IMB, U.M.R. 5251 du C.N.R.S, 351 Cours de la Libération, 33405 Talence Cedex, FRANCE.

Jesús Guillera, Universidad de Zaragoza, Departamento de Matemáticas, 50009 Zaragoza, SPAIN.
# Direct Energy Minimization Based on Exponential Transformation in Density Functional Calculations of Finite and Extended Systems

Aleksei V. Ivanov<EMAIL_ADDRESS>Elvar Ö. Jónsson Tejs Vegge Hannes Jónsson<EMAIL_ADDRESS>

Science Institute and Faculty of Physical Sciences, University of Iceland, VR-III, 107 Reykjavík, Iceland
St. Petersburg State University, 199034, St. Petersburg, Russia
Department of Energy Conversion and Storage, Technical University of Denmark, DK-2800 Kgs. Lyngby, Denmark

###### Abstract

The energy minimization involved in density functional calculations of electronic systems can be carried out using an exponential transformation that preserves the orthonormality of the orbitals. The energy of the system is then represented as a function of the elements of a skew-Hermitian matrix that can be optimized directly using unconstrained minimization methods. An implementation based on the limited memory Broyden-Fletcher-Goldfarb-Shanno approach with inexact line search and a preconditioner is presented and the performance compared with that of the commonly used self-consistent field approach. Results are presented for the G2 set of 148 molecules, liquid water configurations with up to 576 molecules and some insulating crystals. A general preconditioner is presented that is applicable to systems with fractional orbital occupation as is, for example, needed in the k-point sampling for periodic systems. This exponential transformation direct minimization approach is found to outperform the standard implementation of the self-consistent field approach in that all the calculations converge with the same set of parameter values and it requires less computational effort on average. The formulation of the exponential transformation and the gradients of the energy presented here are quite general and can be applied to energy functionals that are not unitary invariant such as self-interaction corrected functionals.
††journal: Computer Physics Communications ## 1 Introduction There are several different approaches for finding optimal orbitals corresponding to the minimum of an energy functional in the context of Kohn–Sham density functional theory (KS-DFT) [1, 2]. The most commonly used method is based on a self-consistent field (SCF) algorithm consisting of two steps. In the first step and for a given density, one finds eigenvalues and eigenfunctions using an iterative algorithm such as the Davidson algorithm [3] or even direct diagonalization of the full Hamiltonian matrix when the size of the basis set is not too large. In the second step, the electron density or Hamiltonian matrix is updated using, for example, the direct inversion in the iterative subspace (DIIS) method [4, 5]. The SCF approach is widely used and has proven to be efficient for both finite (molecules/clusters) and extended systems, but can, nevertheless, suffer from convergence problems. Various density and Hamiltonian mixing schemes have been introduced to address such cases [6, 7]. As a result, the user of typical software developed for KS-DFT calculations is often presented with the task of choosing values of various parameters and selecting among various types of eigensolvers. Systems with similar chemical and physical properties may even call for different choices. A further problem of the SCF method in calculations of ground electronic states is that it may converge on a saddle point of the energy surface rather than a minimum [8]. Another approach to this optimization problem is based on direct minimization of the energy with respect to the electronic degrees of freedom [9, 10, 11, 12, 13, 14, 15, 16, 17, 18]. The challenge then is to incorporate the constraint of orthonormality of the orbitals (the single electron wave functions). One way to approach this is to follow the energy gradient projected on the subspace tangent to the orbitals [10, 11]. 
After such an adjustment of the orbitals within this tangent space, the orthonormality constraints will be violated and, therefore, an explicit orthonormalization of the orbitals needs to be applied after each iteration. This approach is often used in calculations with a plane wave basis set. Alternatively, when the basis set is compact, as in calculations using linear combination of atomic orbitals, a unitary transformation can be applied to a set of orthonormal reference orbitals that includes all occupied and virtual orbitals, and the energy is then minimized by optimizing the elements of the transformation matrix. The orthonormality constraints will then be satisfied, but, due to the constraints imposed by the unitary matrix, the energy is defined on a curved space. As a result, minimization algorithms need to be modified to take the curvature into account. This can be achieved by performing a line search along geodesics [19]. Alternatively, the unitary matrix can be parameterized using an exponential transformation [9, 12, 14], in which case the energy becomes a function of the elements of a skew-Hermitian matrix in linear space. Well-established, unconstrained minimization strategies can then be applied, including inexact line searches that can give robust convergence. We will refer to this approach as exponential transformation direct minimization (ETDM). It has been used in calculations of molecules using KS-DFT [15] and earlier in the context of Hartree-Fock theory [9, 20, 21, 22]. There, the occupation numbers for the orbitals have been restricted to integers so that unitary invariance with respect to rotation within the space of occupied orbitals is ensured. Preconditioners to accelerate convergence have been presented for such systems and found to be important in order to achieve good performance [9, 15]. 
In this article, a generalization and efficient implementation of the ETDM approach is presented as well as applications to both finite and extended systems. The method can be applied to systems with fractional occupation, for example, where k-point sampling of the Brillouin zone (BZ) is carried out. The formulation presented here is also applicable to energy functionals that are not unitary invariant, such as self-interaction corrected functionals [23]. Tests of the performance of this ETDM implementation and comparison with the SCF method including density mixing are carried out for the G2 set (a total of 148 molecules), liquid configurations consisting of up to 576 water molecules and several insulating crystals. The article is organised as follows. In section 2, the ETDM method is formulated in a general way and equations provided for the derivative of the energy with respect to the matrix elements in the exponential transformation. In section 3, an efficient preconditioner is presented, applicable to systems with non-integer occupation numbers, along with methods for evaluating the gradient of the energy and for choosing the search direction and step length in an inexact line-search procedure. In section 4, performance tests are presented with comparison to conventional SCF calculations. Finally, discussion and conclusions are presented in section 5. 
## 2 General formulation In KS-DFT, the energy functional is $\displaystyle E=\sum_{i,{\bf k}}f_{i}({\bf k})\int d^{3}{\bf r}\frac{|\nabla\phi_{i{\bf k}}({\bf r})|^{2}}{2}+\int d^{3}{\bf r}\rho({\bf r})v_{ext}({\bf r})+$ $\displaystyle+\frac{1}{2}\iint d^{3}{\bf r}\,d^{3}{\bf r^{\prime}}\frac{\rho({\bf r})\rho({\bf r}^{\prime})}{|{\bf r}-{\bf r}^{\prime}|}+E_{xc}[\rho({\bf r})],$ (1) where the $\phi$ are orbitals of the non-interacting electron system that has total electron density $\rho({\bf r})=\sum_{i,{\bf k}}f_{i}({\bf k})|\phi_{i{\bf k}}({\bf r})|^{2},$ (2) equal to that of the interacting electron system, the $f_{i}({\bf k})$ are orbital occupation numbers for the $k$-th point of the BZ with $0\leq f_{i}({\bf k})\leq 1$, $v_{ext}({\bf r})$ is the external potential corresponding to the electron-nuclei interaction, and $E_{xc}$ is the exchange-correlation energy. The orbitals are expanded in terms of a possibly non-orthogonal basis set consisting of $M$ basis functions $\phi_{i{\bf k}}({\bf r})=\sum_{\mu=1}^{M}O_{\mu i}({\bf k})\chi_{\mu{\bf k}}({\bf r}),$ (3) and the task is to find optimal values of the coefficients $O_{\mu i}({\bf k})$ that minimize the energy $E[\\{O({\bf k})\\}_{{\bf k}}]$ subject to the orthonormality constraints: $O^{\dagger}({\bf k})S({\bf k})O({\bf k})=I\quad{\bf k}\in BZ,$ (4) with $S_{\mu\nu}({\bf k})=\int\chi^{*}_{\mu{\bf k}}({\bf r})\chi_{\nu{\bf k}}({\bf r})\,d{\bf r}$ being the overlap matrix. The basis functions for periodic systems are Bloch states and in a localised basis set approach they can be written as $\chi_{\mu{\bf k}}({\bf r})=\frac{1}{\sqrt{N}}\sum_{{\bf R}}\exp(i{\bf k}\cdot{\bf R})\eta_{\mu}({\bf r}-{\bf R}-{\bf d}_{\mu})$ (5) where $\eta_{\mu}({\bf r}-{\bf R}-{\bf d}_{\mu})$ is an atomic orbital centered on an atom in the simulated cell. The subscript $\mu$ enumerates the atomic orbitals and ${\bf R}$ belongs to the Bravais lattice. 
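The orthonormality constraint of Eq. (4) with a non-trivial overlap matrix can be satisfied, for instance, by Löwdin orthogonalization. A minimal numpy/scipy sketch (illustrative only; this is not the GPAW code, and the matrix sizes and overlap matrix are made up):

```python
import numpy as np
from scipy.linalg import fractional_matrix_power

rng = np.random.default_rng(0)
M = 5

# Overlap matrix S of a hypothetical non-orthogonal basis:
# symmetric positive definite by construction
B = rng.standard_normal((M, M))
S = B @ B.T + M * np.eye(M)

# Loewdin construction: the columns of O = S^{-1/2} satisfy
# the orthonormality constraint O^dagger S O = I of Eq. (4)
O = fractional_matrix_power(S, -0.5)
err = np.linalg.norm(O.conj().T @ S @ O - np.eye(M))
```

Any further unitary rotation of these columns leaves the constraint intact, which is what the exponential parametrization below exploits.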
An initial guess for the orbitals is expressed as a linear combination of the basis functions $\psi_{m{\bf k}}({\bf r})=\sum_{\mu=1}^{M}C_{\mu m}({\bf k})\chi_{\mu{\bf k}}({\bf r}).$ (6) Given an initial guess for the orbitals, $C_{\mu m}({\bf k})$, which we will refer to as the reference orbitals, the optimal orbital coefficients $O_{\mu m}({\bf k})$ that provide the minimal energy can be found through a unitary transformation as $O({\bf k})=C({\bf k})e^{A({\bf k})}$ (7) where $A({\bf k})$ is a skew-Hermitian matrix, $A({\bf k})^{\dagger}=-A({\bf k})$. For a set of $N_{k}$ vectors used to represent the BZ, a set of matrices $\\{A({\bf k})\\}_{{\bf k}}$ is needed. For a given set of reference orbitals, a set of unitary matrices, $U({\bf k})=\exp(A({\bf k}))$, exists so that the reference orbitals are transformed to the optimal orbitals. Thus, the ground-state energy of the system is a function of the upper triangular elements of a set of matrices $A({\bf k})$, $E[n]=E[\\{a_{11},\ldots,a_{1M},a_{22},\ldots,a_{2M},\ldots,a_{MM}\\}_{{\bf k}}]$ (8) where $a_{ij}=(A)_{ij}$ and ${{\bf k}}$ denotes the set of $N_{k}$ vectors. The real parts of the diagonal elements of the matrices are zero and, therefore, the energy is a function of $N_{k}M^{2}$ variables. There are $M(M-1)/2$ real elements and $M(M+1)/2$ imaginary elements for every k-point. The energy needs to be minimized with respect to the real and imaginary parts of the matrix elements $\\{a_{ij}({\bf k})\\}_{i\leq j}$. 
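Since the exponential of a skew-Hermitian matrix is unitary, the transformation of Eq. (7) preserves the orthonormality of the orbital coefficients automatically. A small numerical check (a sketch independent of the actual implementation; sizes are arbitrary):

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)
M = 6

# Random skew-Hermitian matrix: A^dagger = -A
X = rng.standard_normal((M, M)) + 1j * rng.standard_normal((M, M))
A = 0.5 * (X - X.conj().T)

# Orthonormal reference orbitals C (columns), here from a QR factorization
C, _ = np.linalg.qr(rng.standard_normal((M, M)) + 1j * rng.standard_normal((M, M)))

# Eq. (7): exp(A) is unitary, so O = C exp(A) stays orthonormal
O = C @ expm(A)
err = np.linalg.norm(O.conj().T @ O - np.eye(M))
```

This is the reason the constrained problem of Eq. (4) turns into an unconstrained minimization over the elements of $A$.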
Introducing the derivative $\frac{\partial}{\partial a_{ij}({\bf k})}=\frac{1}{2}\left(\frac{\partial}{\partial{\rm Re}(a_{ij}({\bf k}))}-i\frac{\partial}{\partial{\rm Im}(a_{ij}({\bf k}))}\right)$ (9) the gradient of the energy can be evaluated as $\frac{\partial E}{\partial a_{ij}({\bf k})}=\sum_{\mu\nu}H_{\mu\nu}({\bf k})\frac{\partial\rho_{\mu\nu}({\bf k})}{\partial a_{ij}({\bf k})}$ (10) where the Hamiltonian matrix is $H_{\mu\nu}({\bf k})=\int d{\bf r}\,\chi^{*}_{\mu{\bf k}}({\bf r})\left(-\frac{1}{2}\nabla^{2}+v({\bf r})\right)\chi_{\nu{\bf k}}({\bf r}).$ (11) Here, $v({\bf r})$ is the single electron Kohn-Sham potential, and the density matrix is given in terms of the optimal coefficient matrix as $\rho_{\mu\nu}({\bf k})=\sum_{m}f_{m}({\bf k})O_{\mu m}({\bf k})\overline{O}_{\nu m}({\bf k}).$ (12) By defining the commutator $L_{mk}({\bf k})=\left[F({\bf k}),H({\bf k})\right]_{mk},$ (13) where $H({\bf k})$ is the Hamiltonian matrix represented in terms of the optimal orbitals $H({\bf k})_{mk}=\sum_{\mu\nu}\overline{O}_{\mu m}({\bf k})H_{\mu\nu}({\bf k})O_{\nu k}({\bf k}),$ and $F({\bf k})$ is a diagonal matrix with occupation numbers $f_{m}({\bf k})$ as diagonal elements, the derivatives in Eq. (10) can be written as $\frac{\partial E}{\partial a_{ij}({\bf k})}=\frac{2-\delta_{ij}}{2}\left(\int_{0}^{1}e^{tA({\bf k})}L({\bf k})e^{-tA({\bf k})}\,dt\right)_{ji}.$ (14) For the optimal orbitals, the gradient ${\partial E}/{\partial a_{ij}({\bf k})}$ must be zero so $\int_{0}^{1}e^{tA({\bf k})}L({\bf k})e^{-tA({\bf k})}\,dt=0,\quad{\bf k}\in BZ.$ (15) These non-linear equations can be used to find the skew-Hermitian matrix that provides the energy minimum. For the remainder of this article, the $k$-point index ${\bf k}$ is omitted for simplicity. Eq. (14) is general and can be applied to an objective function that depends explicitly on the orbitals as well as the total density, but then the definition of $L$ needs to be changed accordingly. 
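Since $F$ is diagonal, the commutator of Eq. (13) reduces elementwise to $L_{mk}=(f_{m}-f_{k})H_{mk}$: for integer occupations only the occupied-virtual blocks are nonzero, reflecting the unitary invariance of the energy under rotations among occupied (or among virtual) orbitals. A small sketch with a random Hermitian matrix standing in for the Hamiltonian (illustrative only):

```python
import numpy as np

rng = np.random.default_rng(2)
M, N = 5, 2  # basis size and number of occupied orbitals

# Random Hermitian stand-in for the Hamiltonian in the orbital basis
X = rng.standard_normal((M, M)) + 1j * rng.standard_normal((M, M))
H = 0.5 * (X + X.conj().T)

# Integer occupations: f = 1 for occupied, 0 for virtual orbitals
F = np.diag([1.0] * N + [0.0] * (M - N))

# Commutator of Eq. (13); elementwise L_mk = (f_m - f_k) H_mk
L = F @ H - H @ F

occ_occ = np.linalg.norm(L[:N, :N])              # vanishes (f_m = f_k = 1)
vir_vir = np.linalg.norm(L[N:, N:])              # vanishes (f_m = f_k = 0)
occ_vir = np.linalg.norm(L[:N, N:] - H[:N, N:])  # L equals H there (f_m - f_k = 1)
```

At the minimum, Eq. (15) thus amounts to the vanishing of the occupied-virtual block of the Hamiltonian.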
For example, for the Perdew-Zunger self-interaction correction (PZ-SIC) [23], the matrix $L$ for a single k-point calculation is $L_{mk}=\left[F,H\right]_{mk}+f_{k}\overline{V}_{km}-f_{m}V_{mk},$ (16) where $V_{km}$ is a matrix element of the SIC potential: $\displaystyle V_{mk}=\sum_{\mu\nu}\overline{O}_{\mu m}V^{k}_{\mu\nu}O_{\nu k},$ (17) $\displaystyle V_{\mu\nu}^{k}=\int\chi^{*}_{\mu}({\bf r})\left[\int d^{3}{\bf r^{\prime}}\frac{\rho_{k}({\bf r}^{\prime})}{|{\bf r}-{\bf r}^{\prime}|}+v_{xc}(\rho_{k}({\bf r}))\right]\chi_{\nu}({\bf r})d{\bf r}.$ (18) Equation (15) can be expanded in a series as $\int_{0}^{1}e^{tA}Le^{-tA}\,dt=L+\frac{1}{2!}\left[A,L\right]+\frac{1}{3!}\left[A,\left[A,L\right]\right]+\ldots$ (19) If $\|L\|\gg\frac{1}{2}\|\left[A,L\right]\|$, then the first term on the right hand side can be used to estimate the gradient. This limit of ‘small rotations’ corresponds to the geometric approach used by Van Voorhis and Head-Gordon [15] and has also been used in the context of orbital-density dependent functionals [24, 25, 26]. The higher order terms can also be included to increase the accuracy of the gradient estimate, but each iteration then requires more computational effort. The minimization procedure is performed with respect to the real and imaginary parts of the matrix elements using the energy gradient given by Eq. (14) $\frac{\partial E}{\partial{\rm Re}(a_{ij})}=2{\rm Re}\left(\frac{\partial E}{\partial a_{ij}}\right)$ (20) and $\frac{\partial E}{\partial{\rm Im}(a_{ij})}=-2{\rm Im}\left(\frac{\partial E}{\partial a_{ij}}\right).$ (21) Computational algorithms for the evaluation of the matrix exponential and gradient of the energy are presented in Sec. 3.4. ## 3 Algorithms and Computational Parameters In order to find the optimal orbitals, $O$, corresponding to the minimal energy, the appropriate exponential transformation of the reference orbitals, $C$, $O=Ce^{A}$ (22) needs to be determined. 
The reference orbitals can be chosen to be any set of orthonormal orbitals spanned by the basis set and they are held fixed during the minimization of the energy for a given number of steps while only the matrix $A$ is varied. The closer the reference orbitals are to the optimal orbitals, the faster the iterative procedure will converge. A line search method has been implemented where the $(k+1)$th iteration step is $\vec{a}\,^{(k+1)}=\vec{a}\,^{(k)}+\alpha\,^{(k)}\,\vec{p}\,^{(k)}.$ (23) Here, $\vec{a}\,^{(k)}$ is a vector consisting of the real and imaginary parts of the upper triangular elements of matrix $A$ at the $k$th step of the minimization algorithm, $\begin{split}\vec{a}=(&{\rm Re}(a_{12}),\ldots,{\rm Re}(a_{1M}),\\\ &{\rm Re}(a_{23}),\ldots,{\rm Re}(a_{2M}),\ldots,{\rm Re}(a_{M-1\,M}),\\\ &{\rm Im}(a_{11}),{\rm Im}(a_{12}),\ldots,{\rm Im}(a_{1M}),\\\ &{\rm Im}(a_{22}),{\rm Im}(a_{23}),\ldots,{\rm Im}(a_{2M}),\ldots,{\rm Im}(a_{MM}))^{T},\end{split}$ (24) and $\vec{p}\,^{(k)}$ is the search direction while $\alpha^{(k)}$ is the step length. ### 3.1 Choice of search direction The search direction can be chosen according to the steepest descent method, various quasi-Newton methods, or nonlinear conjugate gradient (CG) methods. The calculation of the search direction involves algebraic operations associated with the particular method plus the evaluation of the energy and gradient for the given energy functional. The dimensionality of the minimization problem scales as $NM$, where $N$ is the number of occupied orbitals and $M$ is the number of basis functions. While quasi-Newton methods such as the Broyden-Fletcher-Goldfarb-Shanno (BFGS) algorithm require fewer iterations than limited-memory BFGS (L-BFGS) or CG, the algebraic operations become a bottleneck even for systems of moderate size (the BFGS algorithm scales as $\mathcal{O}(N^{2}M^{2})$ [27]). 
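The bookkeeping between the skew-Hermitian matrix and the packed real vector of Eq. (24) can be sketched as below (hypothetical helper functions; the names are ours, not those of the actual implementation):

```python
import numpy as np

def pack(A):
    """Flatten a skew-Hermitian A per Eq. (24): real parts of the strict
    upper triangle, then imaginary parts of the upper triangle
    including the diagonal."""
    iu1 = np.triu_indices_from(A, k=1)
    iu0 = np.triu_indices_from(A, k=0)
    return np.concatenate([A[iu1].real, A[iu0].imag])

def unpack(a, M):
    """Rebuild the skew-Hermitian matrix from the packed vector."""
    A = np.zeros((M, M), dtype=complex)
    n_re = M * (M - 1) // 2
    iu1 = np.triu_indices(M, k=1)
    iu0 = np.triu_indices(M, k=0)
    A[iu1] = a[:n_re]
    A[iu0] += 1j * a[n_re:]
    il = np.tril_indices(M, k=-1)
    A[il] = -A.conj().T[il]   # skew-Hermiticity: A_ji = -conj(A_ij)
    return A

# Round trip on a random skew-Hermitian matrix
rng = np.random.default_rng(0)
M = 4
X = rng.standard_normal((M, M)) + 1j * rng.standard_normal((M, M))
A = 0.5 * (X - X.conj().T)
roundtrip_ok = np.allclose(unpack(pack(A), M), A)
```

The packed vector has $M(M-1)/2$ real and $M(M+1)/2$ imaginary entries, i.e. $M^{2}$ in total, matching the count of degrees of freedom given in the text.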
However, every iteration of the L-BFGS algorithm, in which the approximate inverse Hessian matrix is updated, can be computed with a cost of $\mathcal{O}(mNM)$ operations, where $m$ is the number of previous steps stored in memory. In the present implementation, the L-BFGS algorithm as described in Ref. [27] is used with $m=3$ in the benchmark calculations. ### 3.2 Choice of step length The step length $\alpha^{(k)}$ is chosen in such a way that it satisfies the strong Wolfe conditions [28, 29, 27] $\displaystyle E(\vec{a}\,^{(k)}+\alpha\,^{(k)}\vec{p}\,^{(k)})\leq E(\vec{a}\,^{(k)})+c_{1}\alpha\,^{(k)}\nabla_{\vec{a}}E(\vec{a}\,^{(k)})\cdot\vec{p}\,^{(k)}$ (25) and $\displaystyle|\nabla E(\vec{a}\,^{(k)}+\alpha\,^{(k)}\vec{p}\,^{(k)})\cdot\vec{p}\,^{(k)}|\leq c_{2}|\nabla_{\vec{a}}E(\vec{a}\,^{(k)})\cdot\vec{p}\,^{(k)}|$ (26) with $0<c_{1}<c_{2}<1$. A trial step of $\alpha\,^{(k)}=1$ is always used first to test the conditions. After several iterations, a step length of 1 is guaranteed to satisfy the strong Wolfe conditions in the L-BFGS algorithm [27]. This is appealing since it reduces the number of energy and gradient calculations, which are computationally most intensive in KS-DFT calculations. If $\alpha\,^{(k)}=1$ does not satisfy the strong Wolfe conditions, then an inexact line search based on interpolation of the energy along the search direction is used [27]. When the energy of the system is evaluated, the KS-DFT potential needs to be obtained and, as a result, there is little additional effort involved in evaluating the gradient. Therefore, the energy along the search direction is always interpolated by a cubic function using information about the energy values and gradient at the boundaries of the search interval $[a,b]$. 
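The strong Wolfe conditions, Eqs. (25)-(26), can be encoded in a few lines; the helper below is an illustrative sketch (names are ours), tested on a simple quadratic model function where the trial step lands exactly at the minimizer:

```python
import numpy as np

def strong_wolfe(E0, g0, E1, g1, alpha, p, c1=1e-4, c2=0.9):
    """Check the strong Wolfe conditions, Eqs. (25)-(26), for a step of
    length alpha along direction p, given energy/gradient (E0, g0) at the
    starting point and (E1, g1) at the trial point."""
    d0 = np.dot(g0, p)  # directional derivative at the starting point
    sufficient_decrease = E1 <= E0 + c1 * alpha * d0   # Eq. (25)
    curvature = abs(np.dot(g1, p)) <= c2 * abs(d0)     # Eq. (26)
    return sufficient_decrease and curvature

# Quadratic model E(a) = |a|^2; step along the steepest-descent direction
a = np.array([1.0, 0.0])
g = 2 * a
p = -g
alpha = 0.5  # lands exactly at the minimum a + alpha*p = 0
a_new = a + alpha * p
ok = strong_wolfe(np.dot(a, a), g, np.dot(a_new, a_new), 2 * a_new, alpha, p)
```

In the actual algorithm the gradient at the trial point comes for free once the energy is evaluated, which is why the cubic interpolation described in the text can use both values.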
Alongside the strong Wolfe conditions, approximate Wolfe conditions are also checked [30] at the minimum of the interpolated cubic function $(2\delta-1)\nabla_{\vec{a}}E(\vec{a}\,^{(k)})\cdot\vec{p}\,^{(k)}\geq\nabla_{\vec{a}}E(\vec{a}\,^{(k)}+\alpha\,^{(k)}\vec{p}\,^{(k)})\cdot\vec{p}\,^{(k)}\geq\sigma\nabla_{\vec{a}}E(\vec{a}\,^{(k)})\cdot\vec{p}\,^{(k)},$ (27) together with the condition $E(\vec{a}\,^{(k)}+\alpha\,^{(k)}\vec{p}\,^{(k)})\leq E(\vec{a}\,^{(k)})+\epsilon|E(\vec{a}\,^{(k)})|$ (28) where $\delta<{\rm min}\\{0.5,\sigma\\}$, $0<\sigma<1$ and $\epsilon$ is a small fixed number. Thus, the line search algorithm is terminated when either the strong Wolfe conditions of Eqs. (25)-(26) or the approximate Wolfe conditions of Eq. (27) along with the condition in Eq. (28) hold. The parameter values are set to [27, 30] $c_{1}=10^{-4},c_{2}=0.9,\delta=0.1,\sigma=0.9,\epsilon=10^{-6}.$ (29) ### 3.3 Preconditioning A preconditioner speeds up the convergence of this iterative algorithm. It is constructed as the inverse of an approximate Hessian matrix that can be obtained by taking the derivative of a linear expansion of the gradient (Eq. (14)) with respect to the skew-Hermitian matrix, and neglecting first order derivatives of the effective potential. Neglecting the first order derivatives of the effective potential means that all explicit contributions from the Hartree-Exchange-Correlation kernel are neglected. 
For the real valued case, the Hessian can be approximated as $\frac{\partial^{2}E}{\partial a_{ij}\partial a_{lm}}\approx\delta_{il}H_{jm}(f_{l}+f_{i}-f_{j}-f_{m})+\delta_{jl}H_{im}(f_{m}+f_{i}-f_{l}-f_{j})+\delta_{jm}H_{li}(f_{m}-f_{i}-f_{l}+f_{j})+\delta_{im}H_{lj}(f_{l}-f_{m}-f_{i}+f_{j})+\beta_{ij}\delta_{il}\delta_{jm}$ (30) where the matrix $\beta_{ij}$ must be chosen according to the following two principles: (1) the approximate Hessian must be positive definite, and (2) it must provide a good estimate of the true Hessian along the search direction such that a step size of 1 satisfies the strong Wolfe conditions. If the orbitals are chosen as eigenvectors of the Hamiltonian, then the approximate Hessian is diagonal $\frac{\partial^{2}E}{\partial a_{ij}^{2}}=-2(\epsilon_{ii}-\epsilon_{jj})(f_{i}-f_{j})+\beta_{ij}.$ (31) If one keeps the contributions from the Hartree-Exchange-Correlation kernel, then the Hessian matrix is not diagonal and the inversion of this matrix will require considerable computational effort. The first term on the right hand side coincides with the preconditioner that has previously been used for molecular systems with integer occupation numbers [9]. There, an extra term was added in cases of degeneracy, $\epsilon_{ii}=\epsilon_{jj}$, but here the initial approximation of the Hessian in the L-BFGS algorithm [27], $\beta_{ij}$, is used. 
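For Aufbau occupations (occupied orbitals lower in energy than virtual ones), every entry of the diagonal approximation in Eq. (31) is positive provided $\beta_{ij}>0$, which is what makes it usable as a preconditioner. A quick numerical check with made-up eigenvalues and occupations (illustrative only):

```python
import numpy as np

eps = np.array([-2.0, -1.0, 0.5, 1.5])  # orbital eigenvalues, occupied first
f = np.array([1.0, 1.0, 0.0, 0.0])      # integer occupation numbers
beta = 0.1                              # positive L-BFGS-type diagonal term

# Eq. (31): d2E/da_ij^2 ~ -2 (eps_i - eps_j)(f_i - f_j) + beta
hess = -2.0 * (eps[:, None] - eps[None, :]) * (f[:, None] - f[None, :]) + beta

min_entry = hess.min()  # positive: (eps_i - eps_j) and (f_i - f_j) have opposite signs
```

For pairs with equal occupation the first term vanishes, so the $\beta$ contribution alone keeps the approximate Hessian positive definite, handling the degenerate case $\epsilon_{ii}=\epsilon_{jj}$ as well.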
Since the approximate Hessian is diagonal, the preconditioner is simply $P_{ij}=\frac{1}{-2(\epsilon_{ii}-\epsilon_{jj})(f_{i}-f_{j})+\beta_{ij}}.$ (32) In the present implementation, the preconditioner is updated iteratively, and for iteration $k$ it is $P^{(k)}_{ij}=\frac{1}{-2(1-\gamma)(\epsilon_{ii}-\epsilon_{jj})(f_{i}-f_{j})+\gamma\beta^{(k)}},$ (33) where $\beta^{(k)}=\frac{\|\nabla_{\vec{a}}E(\vec{a}\,^{(k)})-\nabla_{\vec{a}}E(\vec{a}\,^{(k-1)})\|^{2}}{(\vec{a}\,^{(k)}-\vec{a}\,^{(k-1)})\cdot(\nabla_{\vec{a}}E(\vec{a}\,^{(k)})-\nabla_{\vec{a}}E(\vec{a}\,^{(k-1)}))}.$ (34) The parameter $\gamma$ in Eq. (33) is a number that determines the mixing of the two approximate Hessians: the one obtained from a linear expansion of the gradient, Eq. (19), and the one based on the L-BFGS estimate, Eq. (34). In the calculations presented here, $\gamma=0.25$ was found empirically to give a good compromise between the rate of convergence and robustness. When k-point sampling is included for periodic systems, $\beta^{(k)}$ needs to be multiplied by the numerical weight of the corresponding k-point. Eq. (33) is used as an initial inverse Hessian at each iteration of the L-BFGS algorithm. With this preconditioner, a step length of 1 is almost always accepted and it works well for both finite and extended systems. It is used for both the real and imaginary parts of the skew-Hermitian matrix. We note that the eigenvalues in Eq. (33) are not updated at every iteration of the minimization algorithm but only at the beginning, thereby avoiding the costly diagonalization of the Hamiltonian matrix at each step. ### 3.4 Evaluation of the matrix exponential and energy gradient The evaluation of the exponential of the skew-Hermitian matrix, $\exp(A)$, is carried out using the eigendecomposition of $iA$. Let $\Omega$ be a diagonal, real-valued matrix with elements corresponding to the eigenvalues of the matrix $iA$ and let $U$ be a column matrix of the eigenvectors of $iA$. 
Then the matrix exponential of $A$ is $\exp(A)=U\exp(-i\Omega)U^{\dagger}.$ (35) This computation requires diagonalization of an $M\times M$ matrix and becomes a computational bottleneck for large systems. However, for unitary invariant energy functionals (such as Kohn-Sham functionals), Hutter et al. [12] have shown that $A$ can be parametrised without loss of generality as $A=\begin{pmatrix}0&A_{ov}\\\ -A_{ov}^{\dagger}&0\end{pmatrix},$ (36) where $A_{ov}$ is an $N\times(M-N)$ matrix ($N$ is the number of occupied states) and the matrix exponential can be calculated as $\exp(A)=\begin{pmatrix}\cos(P^{1/2})&P^{-1/2}\sin(P^{1/2})A_{ov}\\\ -A_{ov}^{\dagger}P^{-1/2}\sin(P^{1/2})&I_{M-N}+A_{ov}^{\dagger}\left(\cos(P^{1/2})-I_{N}\right)P^{-1}A_{ov}\end{pmatrix},$ (37) where $P=A_{ov}A_{ov}^{\dagger}$. In this case the computational effort scales as $\mathcal{O}(N^{2}M)$. An alternative and more general approach is provided by the scaling and squaring algorithm based on the equation $\exp(A)=\exp(A/2^{m})^{2^{m}}$ (38) and on a $[q,q]$ Padé approximant to the matrix $\exp(A/2^{m})$, where $m$ and $q$ are positive integer constants [31]. The algorithm of Al-Mohy and Higham is used here [32, 33]. The two approaches are compared in the benchmark calculations presented below. If the matrix exponential is evaluated using the eigendecomposition of $iA$, then one can calculate the gradient of the energy using the matrices $U$ and $\Omega$ as $G^{T}=U\left(\left(U^{\dagger}LU\right)\otimes D\right)U^{\dagger},$ (39) where the matrix $D$ is $D_{ij}=\frac{e^{-i(\Omega_{ii}-\Omega_{jj})}-1}{i(\Omega_{ii}-\Omega_{jj})}$ (40) and the matrix $G$ is $G_{ij}=\frac{\partial E}{\partial a_{ij}}\frac{2}{(2-\delta_{ij})}.$ (41) However, when the matrix $A$ is sparse and its norm satisfies $\|A\|\ll 1$, the gradients can be evaluated more efficiently using only the first term on the right hand side of Eq. 
(19) $G\approx L^{T}.$ (42) If the norm of the matrix $A$ is larger than 1, then the reference orbitals can be updated, $C\leftarrow C\exp(A)$, in which case $A\leftarrow 0$ and Eq. (42) can again be used. Namely, during the iterative process with $O^{(k)}=C\exp(A^{(k)}),$ (43) one checks whether $\|L^{(k)}\|\geq\epsilon\|\left[A^{(k)},L^{(k)}\right]\|$; if so, the reference orbitals are updated to $C^{\prime}=C\exp(A^{(k)})$ and the iteration continues with $O^{(k+1)}=C^{\prime}\exp(A^{(k+1)}).$ (44) It is found that $\epsilon=3$ provides a reasonable estimate. However, in order to avoid an additional calculation of a commutator between $A$ and $L$, one can instead update the reference orbitals at a regular interval (for example, at every 20th iteration). The change of the reference orbitals should be followed by a transformation of the gradient vectors stored in memory for quasi-Newton algorithms, if they are to be used in the following steps. However, in the implementation used for the numerical tests in this work, the memory of the L-BFGS algorithm is instead erased after an update of the reference orbitals. This can be beneficial in cases where the orbitals are near stationary points which are not the minimum. For small systems, the performance is similar for the various methods for evaluating the matrix exponential and energy gradient, since the calculation of the effective Kohn-Sham potential and the total energy then dominates the computational effort. For larger systems, a difference in performance becomes evident, as illustrated below for configurations of liquid water with up to 576 molecules. ### 3.5 Implementation and parameter values We have implemented the ETDM algorithm using a numerical localized atomic basis set and the projector augmented-wave formalism (PAW) [34] to take into account the frozen, inner electrons of the atoms within the open-source GPAW software [35]. 
An SCF algorithm based on the eigendecomposition of the Hamiltonian in a localised atomic basis set representation is already available there and is frequently used in KS-DFT calculations [36]. To compare the efficiency of the two approaches, single-point ground-state energy calculations are performed for the G2 [37] data set of small molecules, five ionic solids, as well as liquid water configurations including 32, 64, 128, 256, 384 and 576 molecules subject to periodic boundary conditions. The double-zeta polarized basis set (which is the default basis set in GPAW) and the generalized gradient approximation (GGA) parametrized by Perdew-Burke-Ernzerhof [38] are used. An initial guess for the orbitals is taken to be the eigenvectors of the Hamiltonian obtained from a superposition of atomic densities. Convergence is considered achieved for both the SCF and the ETDM methods when the inequality $\frac{1}{N_{e}}\sum_{i=1}^{N_{b}}\int d\,{\bf r}f_{i}|\hat{H}_{KS}\psi_{i}({\bf r})-\sum_{j=1}^{N_{b}}\lambda_{ij}\psi_{j}({\bf r})|^{2}\ <\ 10^{-10}~{}{\rm eV}^{2}$ (45) is satisfied. In the equation above, the $\lambda_{ij}$ are Lagrange multipliers, which for an SCF algorithm form a diagonal matrix, and $N_{b}$ is the number of occupied orbitals. Default values in GPAW are used, for example the Pulay density mixing parameters. We note that in cases where the SCF method fails to converge, it could in principle be made to converge by using, for example, other, non-default values of the density mixing parameter. Failure to reach convergence here means that convergence is not obtained in the default maximum number of iteration steps, which is 333. ## 4 Results ### 4.1 Molecules The average number of energy and gradient evaluations for the ETDM method and the average number of energy and diagonalization calculations for the SCF method are presented in Table 1 and Fig. 1. The ETDM method converges for all the 148 molecules in the G2 set using the parameter values specified in Sec. 
3. The SCF method, however, fails to converge for five of the molecules: CH, SH, ClO, NO, and OH. These five molecules are also challenging for the ETDM method, as it requires more iterations to reach convergence there than the average for the whole G2 set (see Fig. 1). For the molecules where SCF converges, it requires a similar number of iterations as ETDM. On average, 18 and 17 iterations are required by the SCF and ETDM methods, respectively. The reason for the lack of convergence of SCF and the slow convergence of ETDM in the five problematic cases could be the presence of nearby saddle points or near-degenerate higher energy states. In the SCF calculations, the orbitals obtained from the diagonalization of the Hamiltonian matrix at subsequent iterations can ‘jump’ between different energy surfaces or oscillate around a saddle point. Analogous convergence issues for the DIIS method have been reported for the G2 molecular set and transition metal complexes [15]. Table 1: Comparison of the performance of the exponential transform direct minimization, ETDM, and self-consistent field, SCF, methods for the G2 set of molecules (a total of 148 molecules). The average number of energy and gradient evaluations is reported for the former method, but the average number of energy and diagonalization calculations for the latter (in both cases denoted e/g(d)). In the column labeled ETDM∗, the five molecules for which the SCF calculations did not converge are excluded.

| | SCF | ETDM | ETDM∗ |
|---|---|---|---|
| average e/g(d) | 18 | 17 | 16 |
| min e/g(d) | 12 | 6 | 6 |
| max e/g(d) | 26 | 72 | 25 |
| did not converge | 5 | - | - |

For these small molecules, the evaluation of the matrix exponential and energy gradient, i.e. the diagonalization of the Hamiltonian matrix, is not the dominant computational effort. The various algorithms presented in Sec. 3.4 therefore involve similar computational effort. 
Figure 1: (a) Number of SCF iterations and energy/gradient evaluations in the exponential transform direct minimization needed to reach convergence according to the criterion of Eq. (45) for a representative set of 10 molecules from the G2 set. (b) Energy/gradient evaluations in the exponential transform direct minimization for the molecules for which the SCF method failed to converge. ### 4.2 Periodic Systems As examples of extended systems subject to periodic boundary conditions, calculations have been carried out for five crystalline solids: NaCl, NaF, LiCl, LiF and MgO. A cubic unit cell is chosen consisting of 8 atoms, and $\Gamma$-centered $3\times 3\times 3$ Monkhorst-Pack meshes are used for the BZ sampling. The lattice constants are set to the optimal values obtained from PBE calculations [39]. The number of iterations required to reach convergence is presented in Fig. 2. The results show that the ETDM and the SCF algorithms have similar rates of convergence for these systems. This is an important test of the preconditioner given in Eq. (33) and shows that it is suitable for solids as well as molecules. Figure 2: Number of SCF iterations and energy/gradient evaluations in the exponential transform direct minimization needed to reach convergence according to the criterion of Eq. (45) for NaCl, NaF, LiCl, LiF and MgO crystals. Tests were also carried out for another set of extended systems representing snapshots of liquid water. The systems contain 32, 64, 128, 256, 384 and 576 water molecules subject to periodic boundary conditions. The efficiency of the two approaches for evaluating the matrix exponential in the ETDM method discussed in Sec. 3.4 is compared, also in relation to SCF, and reported in Fig. 3. One of the approaches is based on Eq. (37) and makes use of the fact that the energy is invariant with respect to unitary rotations of the occupied orbitals. 
In this case, the computation of the matrix exponential requires diagonalization of an $N\times N$ matrix and involves less computational time as compared to the SCF algorithm, where the first $N$ eigenvectors of an $M\times M$ Hamiltonian matrix need to be calculated. The other approach, the scaling and squaring algorithm of Eq. (38), is more general and does not rely on the parameterization of the skew-Hermitian matrix based on Eq. (36). For dense matrices, this approach is generally slower than the one based on eigendecomposition of the skew-Hermitian matrix, Eq. (35), but for sparse matrices this algorithm can outperform the eigendecomposition approach. The energy gradient is calculated according to Eq. (42). Figure 3: Ratio of the CPU time used by the SCF method and the exponential transform direct minimization, ETDM, method based on either the scaling and squaring algorithm, Eq. (38) (ss, red curve) or the evaluation of the matrix exponential by diagonalization, Eq. (35), (uinv, blue curve), as a function of the number of water molecules in liquid configurations subject to periodic boundary conditions. For the largest system, the direct minimization based on matrix diagonalization outperforms the SCF method by a factor of two while the implementation based on the scaling and squaring algorithm is 20% faster than SCF. The ratio of the CPU time required by calculations using the SCF method and the ETDM method is shown as a function of the number of water molecules in Fig. 3. When the matrix exponential is evaluated using Eq. (37), the ETDM method outperforms SCF by a factor of two if more than 200 water molecules are included in the system. Also, the more general implementation of ETDM using scaling and squaring, Eq. (38), is faster than SCF by 20% for these relatively large systems. It has the advantage of being applicable to energy functionals lacking unitary invariance, unlike the SCF algorithm. 
## 5 Discussion and Conclusion The main advantage of the ETDM implementation presented here, based on a general preconditioner, the L-BFGS algorithm and an inexact line search, is robustness. For small molecules the computational effort is similar to the standard SCF approach when the latter converges, but the ETDM is found to converge for all the molecules in the G2 set with the same set of parameter values, a set that also works for extended liquid configurations and insulating solids. This demonstrates the transferability of the ETDM algorithm as implemented here. For the large systems considered here, liquid water configurations with 200 to 576 molecules, the ETDM outperforms the direct SCF method by up to a factor of two when the special parametrization of the skew-Hermitian matrix is used, and by around 20% when the more general scaling and squaring method is used. The latter can be applied to any type of orbital-dependent energy functional, such as self-interaction corrected functionals [23]. The ETDM method involves minimization of the energy with respect to the elements of a skew-Hermitian matrix and, therefore, the number of degrees of freedom scales as $M^{2}$, where $M$ is the number of basis functions. However, for energy functionals that are unitary invariant with respect to the occupied orbitals, the skew-Hermitian matrices can be parametrized using $N\times(M-N)$ degrees of freedom [12], where $N$ is the number of occupied orbitals. Therefore, taking into account the sparsity of the matrices, the algorithm can be implemented in such a way that the computational effort scales as $\mathcal{O}(N^{2}M)$. The scaling and squaring algorithm for evaluating the matrix exponential is not as efficient but is more generally applicable and can still outperform the SCF method, as was found for the large liquid water configurations. Future work will involve generalization of the ETDM method to finite temperature KS-DFT, i.e.
thermal smearing, where an additional inner loop for variational optimization of the occupation numbers is included [40], analogous to the direct minimization method used in ensemble DFT [13]. This is needed for calculations of metallic systems. A more efficient preconditioner could also likely be developed, especially for orbital-density-dependent functionals. Finally, we point out that the ETDM method is also useful in other types of electronic structure calculations, such as studies of excited states [41, 42]. ## 6 Acknowledgement The authors thank Gianluca Levi for fruitful discussions and valuable comments on the manuscript. This work was supported by the University of Iceland Research Fund and the Icelandic Research Fund (grant no. 174082-053). AVI is supported by a doctoral fellowship from the University of Iceland. HJ, AVI and EÖJ thank the Department of Energy Conversion and Storage at the Technical University of Denmark for hospitality during an extended visit and access to computational resources. ## References * [1] P. Hohenberg, W. Kohn, Inhomogeneous electron gas, Phys. Rev. 136 (1964) B864. * [2] W. Kohn, L. J. Sham, Self-consistent equations including exchange and correlation effects, Phys. Rev. 140 (1965) 1133. * [3] E. Davidson, The iterative calculation of a few of the lowest eigenvalues and corresponding eigenvectors of large real-symmetric matrices, J. Comput. Phys. 17 (1975) 87. * [4] P. Pulay, Convergence acceleration of iterative sequences. The case of SCF iteration, Chem. Phys. Lett. 73 (1980) 393. * [5] P. Pulay, Improved SCF convergence acceleration, J. Comput. Chem. 3 (1982) 556. * [6] G. Kresse, J. Furthmüller, Efficient iterative schemes for ab initio total-energy calculations using a plane-wave basis set, Phys. Rev. B 54 (1996) 11169. * [7] A. J. Garza, G. E. Scuseria, Comparison of self-consistent field convergence acceleration techniques, J. Chem. Phys. 137 (2012) 054110. * [8] A. C. Vaucher, M.
Reiher, Steering orbital optimization out of local minima and saddle points toward lower energy, J. Chem. Theory Comput. 13 (2017) 1219. * [9] M. Head-Gordon, J. Pople, Optimization of wave function and geometry in the finite basis Hartree-Fock method, J. Phys. Chem. 92 (1988) 3063. * [10] M. J. Gillan, Calculation of the vacancy formation energy in aluminium, J. Phys. Condens. Matter 1 (1989) 689. * [11] M. Payne, M. Teter, D. Allan, T. Arias, J. Joannopoulos, Iterative minimization techniques for ab initio total-energy calculations: Molecular dynamics and conjugate gradients, Rev. Mod. Phys. 64 (1992) 1045. * [12] J. Hutter, M. Parrinello, S. Vogel, Exponential transformation of molecular orbitals, J. Chem. Phys. 101 (1994) 3862. * [13] N. Marzari, D. Vanderbilt, M. C. Payne, Ensemble density-functional theory for ab initio molecular dynamics of metals and finite-temperature insulators, Phys. Rev. Lett. 79 (1997) 1337. * [14] S. Ismail-Beigi, T. Arias, New algebraic formulation of density functional calculation, Comput. Phys. Commun. 128 (2000) 1. * [15] T. Van Voorhis, M. Head-Gordon, A geometric approach to direct minimization, Molecular Physics 100 (2002) 1713. * [16] J. VandeVondele, J. Hutter, An efficient orbital transformation method for electronic structure calculations, J. Chem. Phys. 118 (2003) 4365. * [17] V. Weber, J. Vandevondele, J. Hutter, A. Niklasson, Direct energy functional minimization under orthogonality constraints, J. Chem. Phys. 128 (2008) 084113. * [18] C. Freysoldt, S. Boeck, J. Neugebauer, Direct minimization technique for metals in density functional theory, Phys. Rev. B 79 (2009) 241103. * [19] A. Edelman, T. Arias, S. Smith, The geometry of algorithms with orthogonality constraints, SIAM J. Matrix Anal. Appl. 20 (1998) 303. * [20] J. F. Rico, J. M. G. De La Vega, J. I. F. Alonso, P. Fantucci, Restricted Hartree–Fock approximation. I. Techniques for the energy minimization, J. Comput. Chem. 4 (1983) 33. * [21] J. Á. Fernández Rico, M.
Paniagua, J. I. Fernández Alonso, P. Fantucci, Restricted Hartree–Fock approximation. II. Computational aspects of the direct minimization procedure, J. Comput. Chem. 4 (1983) 41. * [22] J. Douady, Y. Ellinger, R. Subra, B. Levy, Exponential transformation of molecular orbitals: A quadratically convergent SCF procedure. I. General formulation and application to closed-shell ground states, J. Chem. Phys. 72 (1980) 1452. * [23] J. P. Perdew, A. Zunger, Self-interaction correction to density-functional approximations for many-electron systems, Phys. Rev. B 23 (1981) 5048. * [24] S. Lehtola, H. Jónsson, Variational, Self-Consistent Implementation of the Perdew–Zunger Self-Interaction Correction with Complex Optimal Orbitals, J. Chem. Theory Comput. 10 (12) (2014) 5324. * [25] G. Borghi, C.-H. Park, N. Nguyen, A. Ferretti, N. Marzari, Variational minimization of orbital-density-dependent functionals, Phys. Rev. B 91 (2015) 155112. * [26] S. Lehtola, M. Head-Gordon, H. Jónsson, Complex orbitals, multiple local minima, and symmetry breaking in Perdew–Zunger self-interaction corrected density functional theory calculations, J. Chem. Theory Comput. 12 (2016) 3195. * [27] J. Nocedal, S. J. Wright, Numerical Optimization, 2nd Edition, Springer, New York, NY, USA, 2006. * [28] P. Wolfe, Convergence conditions for ascent methods, SIAM Review 11 (1969) 226. * [29] P. Wolfe, Convergence conditions for ascent methods. II: Some corrections, SIAM Review 13 (1971) 185. * [30] W. Hager, H. Zhang, A new conjugate gradient method with guaranteed descent and an efficient line search, SIAM J. Optim. 16 (2006) 170. * [31] C. Moler, C. Van Loan, Nineteen dubious ways to compute the exponential of a matrix, twenty-five years later, SIAM Review 45 (2003) 3. * [32] A. Al-Mohy, N. Higham, A new scaling and squaring algorithm for the matrix exponential, SIAM J. Matrix Anal. Appl. 31 (2009) 970. * [33] E. Jones, T. Oliphant, P. Peterson, et al., SciPy: Open source scientific tools for Python (2001–).
* [34] P. E. Blöchl, Projector augmented-wave method, Phys. Rev. B 50 (1994) 17953. * [35] J. Enkovaara, C. Rostgaard, J. J. Mortensen, J. Chen, M. Dułak, L. Ferrighi, J. Gavnholt, C. Glinsvad, V. Haikola, H. A. Hansen, H. H. Kristoffersen, M. Kuisma, A. H. Larsen, L. Lehtovaara, M. Ljungberg, O. Lopez-Acevedo, P. G. Moses, J. Ojanen, T. Olsen, V. Petzold, N. A. Romero, J. Stausholm-Møller, M. Strange, G. A. Tritsaris, M. Vanin, M. Walter, B. Hammer, H. Häkkinen, G. K. H. Madsen, R. M. Nieminen, J. K. Nørskov, M. Puska, T. T. Rantala, J. Schiøtz, K. S. Thygesen, K. W. Jacobsen, Electronic structure calculations with GPAW: a real-space implementation of the projector augmented-wave method, J. Phys. Condens. Matter 22 (2010) 253202. * [36] A. H. Larsen, M. Vanin, J. J. Mortensen, K. S. Thygesen, K. W. Jacobsen, Localized atomic basis set in the projector augmented wave method, Phys. Rev. B 80 (2009) 195112. * [37] L. Curtiss, K. Raghavachari, P. Redfern, J. Pople, Assessment of Gaussian-2 and density functional theories for the computation of enthalpies of formation, J. Chem. Phys. 106 (1997) 1063. * [38] J. P. Perdew, K. Burke, M. Ernzerhof, Generalized gradient approximation made simple, Phys. Rev. Lett. 77 (1996) 3865. * [39] V. N. Staroverov, G. E. Scuseria, J. Tao, J. P. Perdew, Tests of a ladder of density functionals for bulk solids and surfaces, Phys. Rev. B 69 (2004) 075102. * [40] Á. Ruiz-Serrano, C.-K. Skylaris, A variational method for density functional theory calculations on metallic systems with thousands of atoms, J. Chem. Phys. 139 (2013) 054107. * [41] G. Levi, A. V. Ivanov, H. Jónsson, Variational calculations of excited states via direct optimization of the orbitals in DFT, Faraday Discuss. 224 (2020) 448. * [42] G. Levi, A. V. Ivanov, H. Jónsson, Variational Density Functional Calculations of Excited States via Direct Optimization, J. Chem. Theory Comput. 16 (2020) 6968.
# Almost contact structures on manifolds with a $G_{2}$ structure Xenia de la Ossa$^{a}$, Magdalena Larfors$^{b}$, Matthew Magill$^{b}$ <EMAIL_ADDRESS><EMAIL_ADDRESS><EMAIL_ADDRESS>$^{a}$Mathematical Institute, Oxford University, Andrew Wiles Building, Woodstock Road, Oxford OX2 6GG, UK; $^{b}$Department of Physics and Astronomy, Uppsala University, SE-751 20 Uppsala, Sweden ###### Abstract We review the construction of almost contact metric (three-) structures, abbreviated ACM(3)S, on manifolds with a $G_{2}$ structure. These are of interest for certain supersymmetric configurations in string and M-theory. We compute the torsion of the $SU(3)$ structure associated to an ACMS and apply these computations to heterotic $G_{2}$ systems and supersymmetry enhancement. We initiate the study of the space of ACM3Ss, which is an infinite dimensional space with a local product structure and interesting topological features. Tantalising links between ACM3Ss and associative and coassociative submanifolds are observed. ## 1 Introduction Low-dimensional, minimally supersymmetric vacua of string theory and M-theory are of interest for a number of reasons. In four dimensions, such ground states may provide effective theories for the physical world, and give mathematically consistent UV completions of the standard models of particle physics and cosmology. Similarly, the AdS/CFT correspondence provides further motivation to explore supersymmetric vacua in spacetimes with constant negative curvature in both three and four dimensions. More broadly, both four- and three-dimensional solutions can serve as toy models where dualities, deformations and geometric invariants may be explored through the lens of superstring theory. The class of manifolds with $G_{2}$ structure, which includes torsion-free $G_{2}$ manifolds as a subclass, holds an important place in this field of research.
In M-theory they provide four-dimensional $N=1$ Minkowski vacua, and in type II or heterotic string theory they give rise to three-dimensional vacua with non-positive cosmological constant. These topics have been studied for a number of years, and important results have been established. In particular, large classes of torsion-free $G_{2}$ manifolds have been constructed joyce1996:1 ; joyce1996:2 ; kovalev20 ; Corti:2012kd ; Corti:2013 and studied in detail by mathematicians and physicists. However, the topic of $G_{2}$ structures and related superstring compactifications is far from exhausted. For example, one still lacks a deformation theory of different $G_{2}$ geometries, and $G_{2}$ instantons, beyond the infinitesimal level established in Refs. joyce1996:1 ; joyce1996:2 ; Gutowski:2001fm ; Grigorian:2009ge and delaOssa:2016ivz ; delaOssa:2017pqy ; Fiset:2017auc ; delaOssa:2018azc ; Clarke:2016qtg ; Clarke:2020erl ; there are no direct constructions (see, however, e.g. Braun:2017uku for constructions relying on string duality) of the singular, compact $G_{2}$ manifolds that are needed for the existence of chiral families in M-theory vacua Atiyah:2001qf ; Acharya:2001gy ; a method for counting associative submanifolds, important in physics applications due to their contributions to the non-perturbative superpotential, remains to be established Joyce:2016fij ; braun2018infinitely ; Acharya:2018nbo ; Harvey:1999as ; and proofs are lacking for conjectures regarding duality and mirror symmetry among $G_{2}$ vacua gukov2003duality ; Braun:2017uku ; braun2018towards ; braun2017mirror ; Eckhard:2018raj . In this paper we discuss the existence of almost contact (three-) structures on $G_{2}$ structure manifolds. In brief, an almost contact structure (ACS) is related to a nowhere vanishing vector field, and an almost contact three-structure (AC3S) is related to a triple of such vector fields (the formal definition is given below).
Both structures are guaranteed to exist on any $G_{2}$ structure manifold thomas1969 . Our aim with this paper is, in part, to provide a detailed review of these well-established mathematical facts, which, to our knowledge, has been partly lacking (see however friedrich1997nearly and Behrndt:2005im ). This may explain why the ACS perspective has not been emphasized in the recent physics and mathematics literature related to $G_{2}$ structure manifolds. In addition, we hope our paper gives evidence that these almost contact (three-) structures provide useful perspectives on $G_{2}$ geometry, calibrated submanifolds, string vacua and supersymmetry. Almost contact structures are, in some way, odd-dimensional cousins of almost complex structures in even dimensions. Just as almost complex structures do in even dimensions, almost contact structures give rise to projection operators and a decomposition of e.g. differential forms into longitudinal and transverse components. More precisely, almost contact structures induce almost complex structures on the transverse geometry. As mentioned above, ACS are related to nowhere-vanishing vector fields. The existence of nowhere vanishing vector fields, or, more generally, nowhere vanishing differential forms, on a manifold indicates that the tangent bundle structure group can be reduced. In string compactifications, this is intimately tied to the amount of supersymmetry preserved by the vacuum. One may thus suspect that the existence of an AC(3)S may lead to string solutions with extended supersymmetry, and we will demonstrate under which conditions this holds true. More generally, almost contact structures provide additional information that may be used in the classification of supersymmetric string vacua.
Indeed, while their related AC(3)S has not necessarily been emphasised, $SU(2)$ and $SU(3)$ structures have been used in classifications of four-dimensional $N=1$ M-theory vacua Behrndt:2005im , in constructing M-theory lifts of $N=1$ type IIA solutions Andriolo:2018yrz , $N=1$ AdS type IIB vacua Kim:2005ez ; Gran:2007ps ; Passias:2019rga , and have points in common with the Gran–Papadopoulos classification of heterotic supersymmetric vacua Gran:2005wf ; Gran:2007kh ; Gran:2016zxk . Nowhere-vanishing vector fields are always allowed in odd dimensions; it is well known that the obstruction to the existence of a nowhere vanishing vector field on a closed manifold is a non-vanishing Euler characteristic Hopf27 . (In this paper we focus in particular on closed, i.e. compact and boundaryless, manifolds; the results we state also hold for non-compact manifolds and manifolds with boundary.) Odd-dimensional manifolds have vanishing Euler characteristic, and hence admit an almost contact structure. It is less obvious, but nonetheless true, that seven-dimensional manifolds admit three linearly independent, nowhere vanishing vector fields 10.2307/45277146 ; thomas1969 ; kuo1970 . Moreover, as we will explain below, when combined with the positive three-form that defines a $G_{2}$ structure, these vector fields give rise to an almost contact three-structure, which is furthermore compatible with the metric induced by a $G_{2}$ structure friedrich1997nearly ; Arikan:2012acs ; Arikan:2011acs ; Todd:2015era . Importantly, the AC3S vectors will not, in general, be parallel with respect to the $G_{2}$ connection. Thus, while this shows that the structure group of $G_{2}$ structure manifolds is necessarily reduced to $SU(2)$, there is no automatic reduction of the holonomy of the $G_{2}$ connection. A further reduction in the holonomy would be indicative of enhanced supersymmetry and is, therefore, not generically expected.
In the rest of this paper we will provide a more detailed account of almost contact metric three-structures on manifolds with $G_{2}$ structures. This is in part a review of established facts, in part a derivation of new results. We start in section 1.1 by briefly reviewing the background material and setting our notation. In section 2 we review almost contact metric structures and the associated $SU(3)$ structure using a somewhat novel perspective of differential forms. We also add new observations on this topic by elaborating on the foliation associated to the vector field of the ACMS, and the related transverse six dimensional geometry, which indeed carries an $SU(3)$ structure with intrinsic torsion that we may directly determine. With this result we can, in section 3, show how $N=1$ heterotic $G_{2}$ systems may be analysed using ACMS. We then turn, in section 4, to a review of almost contact metric three-structures and their associated $SU(2)$ structure. Building on these classical results, we expand, in section 4.3, upon the existence of three- and four-dimensional foliations related to ACM3Ss and discuss the intriguing relation between such foliations and associative and coassociative submanifolds (which are also calibrated submanifolds if the $G_{2}$ structure is closed and coclosed, respectively). This allows us to initiate a study of the space of ACM3Ss and show that it has a non-trivial structure. In section 5 we give a number of examples which illustrate the concepts we have reviewed. Finally, in section 6 we conclude and point out a number of directions for future studies. ### 1.1 Preliminary notions #### 1.1.1 $G_{2}$ structures A $G_{2}$ structure manifold is a seven dimensional manifold $Y$ along with a reduction of the tangent bundle structure group to $G_{2}$ (see for example FerGray82 ; Bryant:2005mz and joyce2000 for more details on $G_{2}$ structures). A $G_{2}$ structure exists whenever the seven manifold is orientable and spin.
Since the group $G_{2}$ can be identified with the group of automorphisms of the imaginary octonions, reducing the structure group to $G_{2}$ allows us to identify the tangent bundle as a bundle of imaginary octonions, ${\rm Im}\,\mathbb{O}$. Making such an identification endows $Y$ with a metric and a vector cross product, which can in turn be encoded in a stable, positive three-form, $\varphi$. The relation between these data is as follows. Given a metric $g$ and cross product $\times$, the three-form is defined by $\varphi(u,v,w)=g(u\times v,w)\,.$ (1) Locally, one can choose a trivialisation of the tangent bundle in which such a three-form takes a standard form, which in our conventions will be $\varphi_{0}=(e^{12}+e^{34}+e^{56})\wedge e^{7}+e^{135}-e^{146}-e^{236}-e^{245}\,.$ (2) On the other hand, whenever a three-form can be put in such a form, we can extract a metric and cross product. Indeed, once we have a metric, (1) defines a cross product. The metric can be defined, using a choice of orientation, by $6g_{\varphi}(u,v)\,{\rm d}{\rm vol}_{\varphi}=(u\lrcorner\varphi)\wedge(v\lrcorner\varphi)\wedge\varphi~{},$ (3) for all vectors $u$ and $v$ in $\Gamma(TY)$.
In components this means $g_{\varphi\,ab}=\frac{\sqrt{\det g_{\varphi}}}{3!\,4!}\,\varphi_{ac_{1}c_{2}}\,\varphi_{bc_{3}c_{4}}\,\varphi_{c_{5}c_{6}c_{7}}\,\epsilon^{c_{1}\cdots c_{7}}=\frac{1}{4!}\,\varphi_{ac_{1}c_{2}}\,\varphi_{bc_{3}c_{4}}\,\psi^{c_{1}c_{2}c_{3}c_{4}}~{},$ where we have used the metric to define a dual four-form $\psi=*_{\varphi}\,\varphi~{},$ and ${\rm d}x^{a_{1}\cdots a_{7}}=\sqrt{\det g_{\varphi}}\ \epsilon^{a_{1}\cdots a_{7}}\,{\rm d}{\rm vol}_{\varphi}~{}.$ With respect to this metric, the three-form $\varphi$, and hence its Hodge dual $\psi$, are normalised so that $\varphi\wedge*\varphi=||\varphi||^{2}\,{\rm d}{\rm vol}_{\varphi}~{},\qquad||\varphi||^{2}=7~{},$ that is $\varphi\lrcorner\varphi=\psi\lrcorner\psi=7~{}.$ Choosing a local frame where $\varphi$ takes its standard form, one can verify that the metric, $g_{\varphi}$, is the standard Euclidean metric and the four-form will be $\psi_{0}=e^{3456}+e^{1256}+e^{1234}-e^{2467}+e^{2357}+e^{1457}+e^{1367}~{}.$ (4) The exterior derivative of the forms $(\varphi,\psi)$, which give the structure equations for the $G_{2}$ structure, can be decomposed into irreducible representations of $G_{2}$ $\displaystyle{\rm d}_{7}\varphi$ $\displaystyle=\tau_{0}\,\psi+3\,\tau_{1}\wedge\varphi+*\tau_{3}~{},$ (5) $\displaystyle{\rm d}_{7}\psi$ $\displaystyle=4\,\tau_{1}\wedge\psi-\tau_{2}\wedge\varphi~{},$ (6) where the torsion classes $\tau_{k}$ are $k$-forms, $\tau_{3}$ is in the $\bf 27$ irreducible representation of $G_{2}$ and $\tau_{2}$ in the $\bf 14$. #### 1.1.2 Almost contact structures Let $Y$ be an odd dimensional Riemannian manifold with metric $g$. If the manifold $Y$ admits an endomorphism $J$ of the tangent bundle $TY$, a unit vector field $R$ (with respect to the metric $g$), and a one-form $\sigma$ which satisfy $J^{2}=-{\bf 1}+R\otimes\sigma~{},\quad\sigma(R)=1~{},$ $Y$ is said to admit an almost contact structure $(J,R,\sigma)$ sasaki1960 ; sasaki1961 .
The one-form $\sigma$ is called the contact form. The dimension of a manifold admitting an ACS must be odd and the structure group of the tangent space reduces to $U(n)\times{\bf 1}$, where $2n+1$ is the dimension of $Y$. The ACS on the manifold $Y$ is said to be a contact structure if $\sigma\wedge{\rm d}\sigma\wedge\cdots\wedge{\rm d}\sigma\neq 0~{},$ everywhere on $Y$. In this paper we are mostly interested in the existence of almost contact structures on manifolds with a $G_{2}$ structure. A Riemannian manifold $Y$ with an ACS $(J,R,\sigma)$ has an almost contact metric structure $(J,R,\sigma,g)$ (ACMS) if moreover $g(Ju,Jv)=g(u,v)-\sigma(u)\,\sigma(v)~{},\qquad\forall u,v\in\Gamma(TY)~{},$ (7) is satisfied. The fundamental two-form $\omega$ of an almost contact metric manifold is defined by $\omega(u,v)=g(Ju,v)~{},\qquad\forall\,u,v\in\Gamma(TY)~{},$ (8) and satisfies $\sigma\wedge\omega\wedge\cdots\wedge\omega\neq 0~{}.$ (9) An almost contact 3-structure (AC3S) on a manifold $Y$ kuo1970 is defined by three distinct almost contact structures $(J^{\alpha},R^{\alpha},\sigma^{\alpha}),~{}\alpha=1,2,3$ on $Y$ which satisfy the following conditions $\begin{split}J^{\gamma}&=J^{\alpha}\,J^{\beta}-R^{\alpha}\otimes\sigma^{\beta}=-J^{\beta}J^{\alpha}+R^{\beta}\otimes\sigma^{\alpha}~{},\\\ R^{\gamma}&=J^{\alpha}(R^{\beta})=-J^{\beta}(R^{\alpha})~{},\\\ \sigma^{\gamma}&=\sigma^{\alpha}\circ J^{\beta}=-\sigma^{\beta}\circ J^{\alpha}~{},\\\ \quad\sigma^{\alpha}(R^{\beta})&=\sigma^{\beta}(R^{\alpha})=0~{},\end{split}$ (10) where $\\{\alpha,\beta,\gamma\\}$ are a cyclic permutation of $\\{1,2,3\\}$. A manifold admitting an AC3S must have dimension $4n+3$ where $n$ is a non-negative integer and the structure group of the tangent space reduces to $Sp(n)\times{\bf 1}_{3}$.
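To make the definitions above concrete, consider the simplest standard example (added here for illustration; it is not part of the original text): on $Y=\mathbb{R}^{3}$ with coordinates $(x,y,z)$, take

```latex
R=\partial_{z}~{},\qquad
\sigma={\rm d}z-y\,{\rm d}x~{},\qquad
J(\partial_{x})=\partial_{y}~{},\quad
J(\partial_{y})=-\,\partial_{x}-y\,\partial_{z}~{},\quad
J(\partial_{z})=0~{}.
```

A direct computation gives $\sigma(R)=1$ and $J^{2}=-{\bf 1}+R\otimes\sigma$, so $(J,R,\sigma)$ is an ACS; moreover ${\rm d}\sigma={\rm d}x\wedge{\rm d}y$, hence $\sigma\wedge{\rm d}\sigma={\rm d}z\wedge{\rm d}x\wedge{\rm d}y\neq 0$ everywhere, and this ACS is in fact a contact structure.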
An almost contact metric 3-structure on a Riemannian manifold $Y$ with metric $g$ is an AC3S which satisfies $g(J^{\alpha}u,J^{\alpha}v)=g(u,v)-\sigma^{\alpha}(u)\,\sigma^{\alpha}(v)~{},\forall~{}u,v\in\Gamma(TY)~{},$ (11) for each $\alpha\in\\{1,2,3\\}$. An AC3S consisting of three contact structures satisfying (10) is a contact 3-structure and defines a 3-Sasakian geometry kashiwada . ## 2 $SU(3)$ structures on manifolds with a $G_{2}$ structure A manifold $Y$ with a $G_{2}$ structure $\varphi$ has more structure than expected. In this section we review the fact that $(Y,\varphi)$ admits an almost contact metric structure (ACMS) and thereby reduces the structure group to $SU(3)$ Arikan:2012acs ; Arikan:2011acs ; Todd:2015era . (In fact, we will see in section 4 that there are at least three nowhere vanishing vector fields on a manifold with a $G_{2}$ structure thomas1969 , which give $Y$ an almost contact metric 3-structure (ACM3S) inducing an $SU(2)$ structure on $Y$ friedrich1997nearly .) The reason this happens stems from the fact that any such manifold admits a nowhere vanishing vector field $R$ Hopf27 which can be normalized with respect to the $G_{2}$ metric $g_{\varphi}$. In the first two subsections we prove the following proposition. ###### Proposition 1. friedrich1997nearly ; Todd:2015era Let $(Y,\varphi)$ be a seven dimensional manifold $Y$ with a $G_{2}$ structure $\varphi$ and let $g_{\varphi}$ be the metric on $Y$ determined by $\varphi$.
Then $Y$ admits an ACMS $(J,R,\sigma,g_{\varphi})$ determined by a unit vector field $R$, where the endomorphism $J$ is given by $J(u)=R\times_{\varphi}u~{},\quad\forall~{}u\in\Gamma(TY)~{},$ and the one form $\sigma$ dual to $R$ with respect to the $G_{2}$ metric $g_{\varphi}$ $\sigma(R)=1~{}.$ The ACMS determines a foliation ${\cal F}_{R}$ of $Y$ by the one dimensional integral curves of $R$ with $G_{2}$ metric ${\rm d}s_{\varphi}^{2}=\sigma^{2}+{\rm d}s_{\perp}^{2}~{},$ where ${\rm d}s_{\perp}^{2}$ represents the metric on the transverse geometry of ${\cal F}_{R}$ induced by the ACMS on $Y$. Furthermore, the ACMS induces a reduction of the $G_{2}$ structure to an $SU(3)$ structure $(\omega_{\varphi},\Omega)$ on the transverse geometry of the foliation, where $\omega_{\varphi}$ is the fundamental two form on $Y$ $\omega_{\varphi}=i_{R}(\varphi)~{},$ and $\Omega$ is a transverse three form of type $(3,0)$ with respect to $J$. Both forms $(\omega_{\varphi},\Omega)$ are determined uniquely by the ACS decomposition of the $G_{2}$ structure $\varphi$ on $Y$ $\varphi=\sigma\wedge\omega_{\varphi}+\Omega_{+}~{}.$ (12) The coassociative four form $\psi$ dual to $\varphi$ decomposes as $\psi=*_{\varphi}\varphi=-\sigma\wedge\,\Omega_{-}+\frac{1}{2}\,\omega_{\varphi}\wedge\omega_{\varphi}~{}.$ Notice that there is no guarantee that an ACMS will be compatible with a $G_{2}$ connection, i.e. if $\nabla$ is a given connection on $Y$ with ${\rm Hol}(\nabla)\subseteq G_{2}$, it does not follow from a choice of ACMS that ${\rm Hol}(\nabla)\subseteq SU(3)$. These issues are discussed in section 2.3. In appendix A we review the definition of manifolds with an $SU(3)$ structure. ### 2.1 Almost contact metric structures on a manifold with a $G_{2}$ structure Let $Y$ be a seven dimensional compact manifold. It is known that there exists (at least) one nowhere vanishing vector field, $R$, on $Y$ Hopf27 . 
This vector field defines a one dimensional “characteristic foliation” ${\cal F}_{R}$ of $Y$, where the one dimensional leaves are the integral curves of $R$. Given the non-vanishing vector field $R$ on $Y$, we can choose local coordinates on $Y$ adapted to the foliation structure $\\{x^{a},a=1,\ldots 7\\}=\\{(r,x^{m}),m=1\ldots 6\\}~{},$ (13) such that $r$ is the coordinate along the integral curves of $R$ (curves of constant $x^{m}$) and such that the vector $R$ is given by $R=\partial_{r}~{}.$ (14) Suppose now that $Y$ has a $G_{2}$ structure $\varphi$ with metric $g_{\varphi}$. Without loss of generality, we choose the vector $R$ to be of unit length with respect to the $G_{2}$ metric. We define a unique one form $\sigma$ by $\sigma(u)=g_{\varphi}(R,u)~{},\qquad\forall\,u\in\Gamma(TY)~{}.$ (15) Note that $\sigma(R)=1~{},$ (16) is the statement that $R$ has unit length in terms of $\sigma$. The $G_{2}$ structure together with a choice of vector field, $R$, defines an endomorphism, $J$, of $TY$ by $J(u)=R\times_{\varphi}u~{},\qquad\forall\ u\in\Gamma(TY)~{},$ (17) where $\times_{\varphi}$ is the cross product on $Y$ determined by the $G_{2}$ structure $\varphi(u,v,w)=g_{\varphi}(u\times_{\varphi}v,w)~{},\qquad\forall\ u,v,w\in\Gamma(TY)~{}.$ (18) Equivalently, we can consider $J$ to be a vector-valued one form on $Y$, in which case it is given, in local coordinates, by $J^{a}{}_{b}=-\varphi^{a}{}_{bc}\,R^{c}=-i_{R}(\varphi)^{a}{}_{b}~{}.$ (19) Then, one can easily prove that $J^{2}=-{\bf 1}+R\otimes\sigma~{},$ (20) using the cross product identity $u\times_{\varphi}(u\times_{\varphi}v)=-g_{\varphi}(u,u)\,v+g_{\varphi}(u,v)\,u~{},\qquad\forall\ u,v\in\Gamma(TY)~{}.$ (21) Moreover, $J(R)=0~{},\quad{\rm and}\qquad\sigma(J(u))=0~{},\quad\forall\ u\in\Gamma(TY)~{}.$ (22) Therefore Todd:2015era the $G_{2}$ structure on $Y$, together with the existence of a (unit length) nowhere vanishing vector field $R$ on $Y$ determine an almost contact structure $(J,R,\sigma)$ on 
$Y$ (see section 1.1.2). The one form $\sigma$ is the contact form. The existence of the almost contact structure (ACS) on $Y$ means that there is a reduction of the structure group $G_{2}$ to ${\bf 1}\times U(3)$. We will see this explicitly in this section. We remark that this ACS on $Y$ is not, in general, a contact structure, since $\sigma$ need not satisfy everywhere on $Y$ the condition $\sigma\wedge{\rm d}_{7}\sigma\wedge{\rm d}_{7}\sigma\wedge{\rm d}_{7}\sigma\neq 0~{}.$ Furthermore Arikan:2012acs ; Arikan:2011acs ; Todd:2015era , this ACS is compatible with the metric, in the sense that $g_{\varphi}$ satisfies the necessary condition $g_{\varphi}(Ju,Jv)=g_{\varphi}(u,v)-\sigma(u)\,\sigma(v)~{},\qquad\forall~{}u,v\in\Gamma(TY)~{},$ (23) and therefore $(J,R,\sigma,g_{\varphi})$ defines an ACMS. Given the almost contact metric structure $(J,R,\sigma,g_{\varphi})$ on $Y$, one also has the fundamental two form $\omega_{\varphi}$ on $Y$ $\omega_{\varphi}(u,v)=g_{\varphi}(Ju,v)=\varphi(R,u,v)~{},\qquad\forall\ u,v\in\Gamma(TY)~{}.$ (24) Equivalently, $\omega_{\varphi}=i_{R}(\varphi)~{}.$ (25) This two form indeed satisfies $\sigma\wedge\omega_{\varphi}\wedge\omega_{\varphi}\wedge\omega_{\varphi}\neq 0~{},$ because ${\rm d}{\rm vol}_{\varphi}=\frac{1}{6}\,\sigma\wedge\omega_{\varphi}\wedge\omega_{\varphi}\wedge\omega_{\varphi}~{},$ (26) which can be easily proven from (3) and (25). Given the ACS $(J,R,\sigma)$ on $Y$, consider the bundle, ${\rm Ker}(\sigma)$, whose fibres are the vectors which are orthogonal to $R$. This is a codimension one subbundle of $TY$ and induces an orthogonal decomposition of $TY$ as $TY={\rm Span}\\{R\\}\oplus{\rm Ker}(\sigma)~{},\qquad{\rm Ker}(\sigma)={\rm Span}\\{R\\}^{\perp}~{}.$ The endomorphism $J$ maps sections of $TY$ into sections of ${\rm Ker}(\sigma)$, as can be seen from equation (22), and thus it can be used to construct projection operators on $TY$.
Indeed, the operator $-\,J^{2}={\bf 1}-R\otimes\sigma~{},$ is a projection operator mapping $TY$ into ${\rm Ker}(\sigma)$. The action of the projection operators can naturally be extended to the cotangent bundle $T^{*}Y$, which now has the orthogonal decomposition $T^{*}Y={\rm Span}\\{\sigma\\}\oplus{\rm Span}\\{\sigma\\}^{\perp}~{},$ and, therefore, to any tensor on $Y$. We can uniquely decompose any $k$-form $\alpha$ as $\alpha=\sigma\wedge\alpha_{0}+\alpha_{\perp}~{},$ (27) where $\alpha_{0}$ and $\alpha_{\perp}$ are respectively a $(k-1)$-form and a $k$-form on $Y$ such that $i_{R}(\alpha)=\alpha_{0}~{},\qquad i_{R}(\alpha_{0})=0~{},\qquad i_{R}(\alpha_{\perp})=0~{}.$ Furthermore, the endomorphism $J$ induces an orthogonal decomposition of ${\rm Ker}(\sigma)$ over $\mathbb{C}$ ${\rm Ker}(\sigma)\otimes\mathbb{C}={\rm Ker}(\sigma)_{\mathbb{C}}{}^{(1,0)}\oplus{\rm Ker}(\sigma)_{\mathbb{C}}{}^{(0,1)}~{}.$ In fact, there are also projection operators $P$ and $Q$ on ${\rm Ker}(\sigma)\otimes\mathbb{C}$ $\displaystyle P$ $\displaystyle=\frac{1}{2}\,({\bf 1}-i\,J-R\otimes\sigma)=-\frac{i}{2}\,J\,({\bf 1}-iJ)~{},$ $\displaystyle Q$ $\displaystyle=\frac{1}{2}\,({\bf 1}+i\,J-R\otimes\sigma)=+\frac{i}{2}\,J\,({\bf 1}+i\,J)~{},$ which map ${\rm Ker}(\sigma)$ into ${\rm Ker}(\sigma)_{\mathbb{C}}{}^{(1,0)}$ or ${\rm Ker}(\sigma)_{\mathbb{C}}{}^{(0,1)}$ respectively. Again, the action of these operators can naturally be extended to $T^{*}Y$ and to any tensor on $Y$. For the cotangent bundle $T^{*}Y$, ${\rm Span}\\{\sigma\\}^{\perp}\otimes\mathbb{C}$ has the orthogonal decomposition ${\rm Span}\\{\sigma\\}^{\perp}\otimes\mathbb{C}=\left({\rm Span}\\{\sigma\\}^{\perp}\right)^{(1,0)}\oplus\left({\rm Span}\\{\sigma\\}^{\perp}\right)^{(0,1)}~{}.$ We can decompose any $k$-form $\alpha$ with respect to the ACS as in equation (27) where the forms $\alpha_{0}$ and $\alpha_{\perp}$ decompose further into $(p,q)$-type with respect to $J$. 
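The identities above can be checked numerically in the standard frame (2). The following numpy sketch (ours, not from the paper) builds $\varphi_{0}$, takes $R=e_{7}$, and verifies $J^{2}=-{\bf 1}+R\otimes\sigma$, $\omega_{\varphi}=i_{R}(\varphi)=e^{12}+e^{34}+e^{56}$, and the projector identities $P^{2}=P$, $PQ=0$, $P+Q={\bf 1}-R\otimes\sigma$:

```python
import numpy as np
from itertools import permutations

def perm_sign(p):
    """Sign of a permutation of (0, ..., n-1)."""
    sign, p = 1, list(p)
    for i in range(len(p)):
        while p[i] != i:
            j = p[i]
            p[i], p[j] = p[j], p[i]
            sign = -sign
    return sign

# Standard G2 three-form (eq. 2), written 0-indexed:
# phi0 = e^127 + e^347 + e^567 + e^135 - e^146 - e^236 - e^245
terms = {(0, 1, 6): 1, (2, 3, 6): 1, (4, 5, 6): 1,
         (0, 2, 4): 1, (0, 3, 5): -1, (1, 2, 5): -1, (1, 3, 4): -1}
phi = np.zeros((7, 7, 7))
for (a, b, c), val in terms.items():
    for p in permutations(range(3)):            # total antisymmetrization
        phi[(a, b, c)[p[0]], (a, b, c)[p[1]], (a, b, c)[p[2]]] = perm_sign(p) * val

R = np.zeros(7); R[6] = 1.0                     # R = e_7, unit for the Euclidean metric
sigma = R.copy()                                # sigma = g(R, .)

J = -np.einsum('abc,c->ab', phi, R)             # J^a_b = -phi^a_{bc} R^c, eq. (19)
print(np.allclose(J @ J, -np.eye(7) + np.outer(R, sigma)))   # eq. (20): True

omega = np.einsum('c,cab->ab', R, phi)          # omega = i_R(phi), eq. (25)
print(omega[0, 1], omega[2, 3], omega[4, 5])    # 1.0 1.0 1.0, i.e. e^12 + e^34 + e^56

I = np.eye(7)
P = 0.5 * (I - 1j * J - np.outer(R, sigma))
Q = 0.5 * (I + 1j * J - np.outer(R, sigma))
print(np.allclose(P @ P, P), np.allclose(P @ Q, 0),
      np.allclose(P + Q, I - np.outer(R, sigma)))            # True True True
```

The algebraic proof of $P^{2}=P$ and $PQ=0$ uses only $J^{2}=-{\bf 1}+R\otimes\sigma$, $J(R)=0$ and $\sigma\circ J=0$, which is exactly what the numerical check confirms.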
We have arrived at the conclusion that the $G_{2}$ structure on the manifold $Y$ is reduced to ${\bf 1}\times U(3)$. The $U(3)$ structure on $Y$ is determined by the endomorphism $J$ which is effectively an almost complex structure on $Y$. Moreover, we have a fundamental form $\omega_{\varphi}$ satisfying (26). We discuss below in more detail how this works and how the $U(3)$ structure reduces further to an $SU(3)$ structure. ### 2.2 Transverse geometry and $SU(3)$ structures on $Y$ In this section we show explicitly how the structure group $G_{2}$ is reduced to $SU(3)$. The fact that any $G_{2}$ structure manifold admits a reduction to an $SU(3)$ structure group was already shown by friedrich1997nearly . In that paper, the authors argued that a nowhere vanishing vector field $R$ on $(Y,\varphi)$, along with the nowhere vanishing spinor $\eta$ implicit in the choice of $G_{2}$ structure, induce a second spinor, $R\eta$, by Clifford multiplication. These spinors can be used to construct the $SU(3)$ structure on $Y$. In this section we instead show how the structure group is reduced to $SU(3)$ by constructing the $SU(3)$ structure directly from the $G_{2}$ structure three form $\varphi$ and the ACS.444Note that, although there is always a connection $\nabla$ with $G_{2}$ holonomy on $Y$ such that $\nabla\eta=0$, it is not necessarily the case that $\nabla(R\eta)=0$, and therefore the holonomy group of $\nabla$ is not necessarily reduced to $SU(3)$. A summary of the content of this section and the previous one is given in proposition 1 at the beginning of section 2. We begin by reviewing the notion of the transverse geometry of the foliation ${\cal F}_{R}$. Loosely speaking, locally the transverse geometry pretends to be the geometry of a hyperplane $X$ which is transverse to $R$. Note, however, that there is not necessarily a six dimensional manifold $X\subset Y$ whose tangent plane is transverse to $R$.
Another way to say this is that ${\rm Ker}(\sigma)$ is not necessarily integrable. We say that a vector $u\in\Gamma(TY)$ is transverse to the foliation ${\cal F}_{R}$ if $u\in\Gamma({\rm Ker}(\sigma))$. As we saw above, the vector $J(u)$ is transverse, for any $u\in TY$ (see equation (22)). For a $k$-form given as in equation (27), we can think of $\alpha_{0}$ and $\alpha_{\perp}$ as a $(k-1)$-form and a $k$-form respectively on the transverse geometry of the foliation on $Y$. We will call $\alpha_{\perp}$ the transverse component of $\alpha$, and we call a form transverse if $i_{R}(\alpha)=0~{}.$ (28) Clearly, $J(\alpha)$ is transverse for any $k$-form $\alpha$ on $Y$ as $i_{R}(J(\alpha))=0$. The endomorphism $J$ restricts to an almost complex structure, $J_{\perp}$, on the transverse geometry of ${\cal F}_{R}$, that is, $J_{\perp}$ is an endomorphism of ${\rm Ker}(\sigma)$ with $J_{\perp}^{2}=-{\bf 1}~{}$. The action of $J$ on any $k$-form $\alpha$ on $Y$ can be written as $J(\alpha)=J(\alpha_{\perp})=J_{\perp}(\alpha_{\perp})~{}.$ The transverse geometry then carries all the properties of an almost complex structure. For instance, as mentioned before, any transverse form decomposes into $(p,q)$-type with respect to $J_{\perp}$. Furthermore, the fundamental form $\omega_{\varphi}$ is transverse, and one can define a hermitian structure on the transverse geometry. It is not hard to prove that $\omega_{\varphi}$ is type $(1,1)$ with respect to $J_{\perp}$, that is $J(\omega_{\varphi})=\omega_{\varphi}~{}.$ One can define primitive transverse forms on $Y$ and thus define the Lefschetz decomposition, with respect to $\omega_{\varphi}$, of transverse forms. Consider the metric $g_{\varphi}$ on $Y$. Using equation (24) and the endomorphism $J$, one can establish a metric $g_{\perp}$ on the transverse geometry.
In fact, (24) and (20) imply that on $Y$, the $G_{2}$ metric is given by $g_{\varphi}(u,v)=\omega_{\varphi}(u,Jv)+\sigma(u)\,\sigma(v)~{},\qquad u,\,v\,\in\,\Gamma(TY)~{},$ (29) in which the fact that $R$ has unit length is manifest. Also, if $u\in\Gamma({\rm Ker}(\sigma))$, then we have by (15) that $g_{\varphi}(u,R)=\sigma(u)=0~{}.$ Now, let $u\,,v\,\in\Gamma({\rm Ker}(\sigma))$. Then we define a metric $g_{\perp}$ on the transverse geometry by $g_{\varphi}(u,v)=\omega_{\varphi}(u,Jv)=g_{\perp}(u,v)~{}.$ (30) This means we have a “bundle like” metric on $Y$ of the form ${\rm d}s_{\varphi}^{2}=\sigma^{2}+{\rm d}s_{\perp}^{2}~{},$ (31) where ${\rm d}s_{\perp}^{2}$ represents the line element on the transverse geometry and in the coordinate system adapted to the foliation ${\cal F}_{R}$, the one form $\sigma$ can be written as $\sigma={\rm d}r+\Sigma~{},\qquad\Sigma=\Sigma_{m}\,{\rm d}x^{m}~{}.$ (32) The transverse one form $\Sigma$ behaves like a one form connection on the transverse geometry, as can be verified by performing a coordinate transformation on $Y$. Now, recall that the $G_{2}$ structure $\varphi$ determines a metric $g_{\varphi}$ uniquely on $Y$ by equation (3). Of course, this metric needs to be the same as that in equation (31). To compare (31) with (3), we begin by decomposing $\varphi$ with respect to the ACMS $(J,R,\sigma,g_{\varphi})$. Recall that the ACMS determines the fundamental two form on $Y$ $\omega_{\varphi}=i_{R}(\varphi)~{}.$ This means that $\varphi$ can be decomposed uniquely as $\varphi=\sigma\wedge\omega_{\varphi}+\Omega_{+}~{},$ (33) for some well defined transverse real three form $\Omega_{+}$ on $Y$.
Let $u\,,v\,\in\Gamma(TY)$ and consider this decomposition of $\varphi$ together with equation (3) for the metric: $6\,g_{\varphi}(u,v)\,{\rm d}{\rm vol}_{\varphi}=i_{u}(\varphi)\wedge i_{v}(\varphi)\wedge\varphi~{}.$ Taking $u=v=R$, and using $g_{\varphi}(R,R)=1$, we have $6\,{\rm d}{\rm vol}_{\varphi}=i_{R}(\varphi)\wedge i_{R}(\varphi)\wedge\varphi=\omega_{\varphi}\wedge\omega_{\varphi}\wedge(\sigma\wedge\omega_{\varphi}+\Omega_{+})~{}.$ The last term must vanish as it is a transverse seven form, while the first term gives the volume form on $Y$ in terms of $\sigma$ and $\omega_{\varphi}$ ${\rm d}{\rm vol}_{\varphi}=\frac{1}{6}\,\sigma\wedge\omega_{\varphi}\wedge\,\omega_{\varphi}\wedge\omega_{\varphi}~{}.$ (34) We have already seen this before, see (26). Consider now the metric components for $u\in\Gamma({\rm Ker}(\sigma))$ and $v=R$. As in this case $g_{\varphi}(u,R)=\sigma(u)=0$, we have $\displaystyle 0$ $\displaystyle=\sigma\wedge\big{(}i_{u}(\Omega_{+})\wedge\omega_{\varphi}-i_{u}(\omega_{\varphi})\wedge\Omega_{+}\big{)}\wedge\omega_{\varphi}$ $\displaystyle=\sigma\wedge\left(i_{u}(\Omega_{+}\wedge\omega_{\varphi}\wedge\omega_{\varphi})+3\,i_{u}(\omega_{\varphi})\wedge\Omega_{+}\wedge\omega_{\varphi}\right)=\sigma\wedge 3\,i_{u}(\omega_{\varphi})\wedge\Omega_{+}\wedge\omega_{\varphi}~{},$ which must be true for all transverse vectors $u\in\Gamma({\rm Ker}(\sigma))$. Hence $\omega_{\varphi}\wedge\Omega_{+}=0~{}.$ (35) An important consequence of equation (35) is that $\Omega_{+}$ is a primitive form of type $(3,0)+(0,3)$ with respect to $J$ because $\omega_{\varphi}$ is type $(1,1)$. Finally we calculate the components $g_{\varphi}(u,v)$ where $u$ and $v$ are both in $\Gamma({\rm Ker}(\sigma))$.
In this case, $g_{\varphi}(u,v)=g_{\perp}(u,v)$ and we have $\begin{split}6\,g_{\perp}(u,v)\,{\rm d}{\rm vol}_{\varphi}&=\sigma\wedge\Big{(}i_{u}(\Omega_{+})\wedge i_{v}(\Omega_{+})\wedge\omega_{\varphi}\\\ &\qquad\qquad-\big{(}i_{u}(\omega_{\varphi})\wedge i_{v}(\Omega_{+})+i_{v}(\omega_{\varphi})\wedge i_{u}(\Omega_{+})\big{)}\wedge\Omega_{+}\Big{)}\\\ &=3\,\sigma\wedge\Omega_{+}\wedge\,i_{u}(\Omega_{+})\wedge\,i_{v}(\omega_{\varphi})~{},\end{split}$ where we have used the constraint (35). Using the volume form (34) and contracting with $R$, we find $\frac{1}{6}\,g_{\perp}(u,v)\,\omega_{\varphi}\wedge\omega_{\varphi}\wedge\omega_{\varphi}=\frac{1}{2}\,\Omega_{+}\wedge\,i_{u}(\Omega_{+})\wedge\,i_{v}(\omega_{\varphi})~{},\quad u\,,v\,\in\Gamma({\rm Ker}(\sigma))~{}.$ (36) As $i_{u}(\Omega_{+})$ is a transverse two form of type $(2,0)+(0,2)$, for any vector $u$, it is easy to see that $i_{u}(\Omega_{+})=-J_{\perp}\big{(}i_{u}(\Omega_{+})\big{)}=i_{J_{\perp}u}\big{(}J_{\perp}(\Omega_{+})\big{)}~{},\quad\forall\,u\in\Gamma({\rm Ker}(\sigma))~{}.$ Then $\begin{split}\frac{1}{3}\,g_{\perp}(u,v)\,\omega_{\varphi}\wedge\omega_{\varphi}\wedge\omega_{\varphi}&=\Omega_{+}\wedge\,i_{J_{\perp}u}\big{(}J_{\perp}(\Omega_{+})\big{)}\wedge\,i_{v}(\omega_{\varphi})\\\ &=-i_{J_{\perp}u}\Big{(}\Omega_{+}\wedge J_{\perp}(\Omega_{+})\wedge\,i_{v}(\omega_{\varphi})\Big{)}+i_{J_{\perp}u}(\Omega_{+})\wedge J_{\perp}(\Omega_{+})\wedge\,i_{v}(\omega_{\varphi})\\\ &\qquad\qquad+\Omega_{+}\wedge J_{\perp}(\Omega_{+})~{}i_{J_{\perp}u}i_{v}(\omega_{\varphi})\\\ &=g_{\perp}(u,v)\,\Omega_{+}\wedge J_{\perp}(\Omega_{+})+J_{\perp}(\Omega_{+})\wedge i_{J_{\perp}u}(\Omega_{+})\wedge\,i_{v}(\omega_{\varphi})~{},\end{split}$ where, by equation (30) $i_{J_{\perp}u}i_{v}(\omega_{\varphi})=-\omega_{\varphi}(Ju,v)=g_{\perp}(u,v)~{}.$ Consider now the last term.
Using the fact that $J_{\perp}^{2}=-{\bf 1}$ and that the action of $J_{\perp}$ on a transverse six form is the identity, we find $\begin{split}J_{\perp}(\Omega_{+})\wedge i_{J_{\perp}u}(\Omega_{+})\wedge\,i_{v}(\omega_{\varphi})&=-J_{\perp}\Big{(}-\Omega_{+}\wedge\,J_{\perp}\big{(}i_{J_{\perp}u}(\Omega_{+})\big{)}\wedge\,J_{\perp}\big{(}i_{v}(\omega_{\varphi})\big{)}\Big{)}\\\ &=-\Omega_{+}\wedge\,i_{J_{\perp}u}(\Omega_{+})\wedge\,i_{J_{\perp}v}(\omega_{\varphi})\end{split}$ By equations (36) and (23) $J_{\perp}(\Omega_{+})\wedge i_{J_{\perp}u}(\Omega_{+})\wedge\,i_{v}(\omega_{\varphi})=-\frac{1}{3}\,g_{\varphi}(Ju,Jv)\,\omega_{\varphi}\wedge\omega_{\varphi}\wedge\omega_{\varphi}=-\frac{1}{3}\,g_{\varphi}(u,v)\,\omega_{\varphi}\wedge\omega_{\varphi}\wedge\omega_{\varphi}~{}.$ Therefore $\frac{1}{6}\,g_{\perp}(u,v)\,\omega_{\varphi}\wedge\omega_{\varphi}\wedge\omega_{\varphi}=\frac{1}{4}\,g_{\perp}(u,v)\,\Omega_{+}\wedge J_{\perp}(\Omega_{+})~{},$ that is, $\frac{1}{6}\,\omega_{\varphi}\wedge\omega_{\varphi}\wedge\omega_{\varphi}=\frac{1}{4}\,\Omega_{+}\wedge J_{\perp}(\Omega_{+})~{}.$ (37) Let $\Omega$ be a $(3,0)$ form with respect to $J_{\perp}$ such that $\Omega_{+}=\frac{1}{2}\,(\Omega+\overline{\Omega})={\rm Re}\,\Omega~{}.$ (38) Then we have that $J_{\perp}(\Omega_{+})=\frac{1}{2\,i}\,(\Omega-\overline{\Omega})={\rm Im}\,\Omega=\Omega_{-}~{}.$ Then equation (37) becomes $\frac{1}{6}\,\omega_{\varphi}\wedge\omega_{\varphi}\wedge\omega_{\varphi}=\frac{1}{4}\,\Omega_{+}\wedge\Omega_{-}=\frac{i}{8}\,\Omega\wedge\overline{\Omega}~{}.$ (39) One can define a Hodge-dual operator $*_{\perp}$ on the transverse geometry using the transverse metric $g_{\perp}$. 
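The normalisation (39) and the primitivity condition (35) derived above can be verified explicitly for the flat $SU(3)$ structure on $\mathbb{R}^{6}$. The following sketch is an illustration only; the conventions $\omega=e^{01}+e^{23}+e^{45}$ and $\Omega=(e^{0}+ie^{1})\wedge(e^{2}+ie^{3})\wedge(e^{4}+ie^{5})$, with indices $0,\dots,5$, are assumptions of this example. Forms are stored as dictionaries mapping sorted index tuples to coefficients:

```python
def perm_sign(seq):
    """Sign of the permutation sorting seq (parity of the inversion count)."""
    s = 1
    for i in range(len(seq)):
        for j in range(i + 1, len(seq)):
            if seq[i] > seq[j]:
                s = -s
    return s

def wedge(a, b):
    """Wedge product of forms stored as {sorted index tuple: coefficient}."""
    out = {}
    for ia, ca in a.items():
        for ib, cb in b.items():
            idx = ia + ib
            if len(set(idx)) < len(idx):
                continue
            key = tuple(sorted(idx))
            out[key] = out.get(key, 0) + perm_sign(idx) * ca * cb
    return {k: v for k, v in out.items() if v != 0}

def scale(c, a):
    return {k: c * v for k, v in a.items()}

# Flat SU(3) structure on R^6, indices 0..5 (conventions assumed for this check):
omega = {(0, 1): 1, (2, 3): 1, (4, 5): 1}
Omega = wedge(wedge({(0,): 1, (1,): 1j}, {(2,): 1, (3,): 1j}), {(4,): 1, (5,): 1j})
Op = {k: v.real for k, v in Omega.items() if v.real != 0}   # Omega_+
Om = {k: v.imag for k, v in Omega.items() if v.imag != 0}   # Omega_-

vol6 = (0, 1, 2, 3, 4, 5)
w3 = wedge(omega, wedge(omega, omega))
assert w3 == {vol6: 6}                                       # omega^3 = 6 vol

# Normalisation (39): (1/6) omega^3 = (1/4) Omega_+ ^ Omega_- = (i/8) Omega ^ conj(Omega)
assert scale(1 / 4, wedge(Op, Om)) == {vol6: 1.0}
OObar = wedge(Omega, {k: v.conjugate() for k, v in Omega.items()})
assert abs(1j / 8 * OObar[vol6] - 1) < 1e-12

# Primitivity (35): omega ^ Omega_+ = 0
assert wedge(omega, Op) == {}
print("flat SU(3) identities (35) and (39) verified")
```

Any other choice of compatible $(\omega,\Omega)$ related to this one by an $SU(3)$ rotation gives the same normalisations.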
Let $\alpha$ be a $k$-form on $Y$ with decomposition $\alpha=\sigma\wedge\alpha_{0}+\alpha_{\perp}~{},$ then it is not too difficult to show that $*_{\varphi}\,\alpha=(-1)^{k}\,\sigma\wedge\,*_{\perp}\,\alpha_{\perp}+*_{\perp}\,\alpha_{0}~{},$ (40) where one needs $\det{g_{\varphi}}=\sigma_{0}^{2}\,\det{g_{\perp}}~{}.$ The coassociative four form $\psi$ then decomposes as $\psi=*_{\varphi}\varphi=-\sigma\wedge\,\Omega_{-}+\rho_{\varphi}~{},$ (41) where we have defined the transverse four form $\rho_{\varphi}$ by $\rho_{\varphi}=*_{\perp}\,\omega_{\varphi}=\frac{1}{2}\,\omega_{\varphi}\wedge\omega_{\varphi}~{},$ (42) and used the fact that $*_{\perp}\Omega_{+}=\Omega_{-}~{}.$ This ends the proof of proposition 1. It is interesting to ask at this point about the conditions for the transverse geometry to correspond in fact to a submanifold $X\subset Y$. According to the Frobenius Theorem, the foliation ${\cal F}_{R}$ has a global transverse section $X$ when ${\rm Ker}(\sigma)$ is an integrable distribution, that is $[u,v]\in\Gamma({\rm Ker}(\sigma))~{},\quad\forall~{}u\,,v\,\in\Gamma({\rm Ker}(\sigma))~{}.$ (43) A short computation shows that this is the case if and only if $({\rm d}_{7}\sigma)(u,v)=0~{},\quad\forall~{}u\,,v\,\in\Gamma({\rm Ker}(\sigma))~{},$ that is, ${\rm d}_{7}\sigma=\sigma\wedge\alpha~{},$ (44) for some transverse one form $\alpha$, so ${\rm d}_{7}\sigma$ has no transverse component. We will come back to this constraint later when we discuss examples (see section 5.1). ### 2.3 Decomposing the structure equations Let $Y$ be a manifold with a $G_{2}$ structure given by the three form $\varphi$, and let $g_{\varphi}$ be the metric on $Y$ determined by $\varphi$. In this subsection, we decompose the $G_{2}$ structure equations (5) and (6) under the ACMS $(J,R,\sigma,g_{\varphi})$, to obtain the torsion classes $W_{i}$ of the induced transverse $SU(3)$ structure in terms of those $\tau_{i}$ of the $G_{2}$ structure.
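As an aside, the integrability criterion (44) for ${\rm Ker}(\sigma)$ can be illustrated in a lower dimensional analogue. The sketch below is an illustration only; the three dimensional contact form $\sigma={\rm d}z-y\,{\rm d}x$ with $R=\partial_{z}$ is a stand-in for the seven dimensional situation. It computes ${\rm d}\sigma$ symbolically and shows it has a nonvanishing transverse component, so ${\rm Ker}(\sigma)$ is not integrable:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
coords = (x, y, z)

def perm_sign(seq):
    s = 1
    for i in range(len(seq)):
        for j in range(i + 1, len(seq)):
            if seq[i] > seq[j]:
                s = -s
    return s

def d(form):
    """Exterior derivative of a form stored as {sorted index tuple: coeff}."""
    out = {}
    for idx, c in form.items():
        for i, xi in enumerate(coords):
            dc = sp.diff(c, xi)
            if dc == 0 or i in idx:
                continue
            key = tuple(sorted((i,) + idx))
            out[key] = out.get(key, 0) + perm_sign((i,) + idx) * dc
    return {k: sp.simplify(v) for k, v in out.items() if sp.simplify(v) != 0}

# Three dimensional analogue: sigma = dz - y dx, with R = d/dz.
sigma = {(2,): sp.Integer(1), (0,): -y}
dsigma = d(sigma)
assert dsigma == {(0, 1): sp.Integer(1)}     # d sigma = dx ^ dy

# i_R(d sigma) = 0, so d sigma is purely transverse and nonzero: for instance
# (d sigma)(u, v) != 0 with u = d/dx + y d/dz and v = d/dy, both in Ker(sigma),
# so Ker(sigma) fails the analogue of (44) and is not integrable.
print("d sigma =", dsigma, "-> Ker(sigma) is not integrable")
```

In seven dimensions the role of ${\rm d}x\wedge{\rm d}y$ is played by the transverse part of ${\rm d}_{7}\sigma$ in (44).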
We recall that under the ACMS $(J,R,\sigma,g_{\varphi})$ the $G_{2}$ structure is decomposed in terms of the underlying transverse $SU(3)$ structure $(\omega_{\varphi},\Omega_{+})$ as $\displaystyle\varphi$ $\displaystyle=\sigma\wedge\omega+\Omega_{+}~{},$ (45) $\displaystyle\psi$ $\displaystyle=-\sigma\wedge\Omega_{-}+\rho~{},$ (46) $\displaystyle\sigma$ $\displaystyle={\rm d}r+\Sigma~{},$ (47) where from now on we drop the label $\varphi$ in the fundamental two form. In order to decompose the structure equations under the ACMS, we begin with the following lemmas regarding the decomposition of the contraction operator $\lrcorner_{\varphi}$ and the exterior derivative. The proof of both lemmas is straightforward. ###### Lemma 1. Let $\alpha$ be a $p$-form and $\beta$ a $(p+q)$-form on $Y$. Let $\alpha=\sigma\wedge\alpha_{0}+\alpha_{\perp}~{},\qquad\beta=\sigma\wedge\beta_{0}+\beta_{\perp}~{},$ be their decomposition with respect to the ACS. Then $\alpha\lrcorner_{\varphi}\,\beta=(-1)^{p}\,\sigma\wedge\,(\alpha_{\perp}\,\lrcorner\,\beta_{0})+(\alpha_{0}\lrcorner\beta_{0}+\alpha_{\perp}\,\lrcorner\,\beta_{\perp})~{},$ where $\lrcorner_{\varphi}$ and $\lrcorner$ are the contraction operators with respect to the $G_{2}$ metric $g_{\varphi}$ and the transverse metric $g_{\perp}$ respectively. ###### Lemma 2. Let $\alpha$ be a transverse p-form on $Y$, that is, $i_{R}(\alpha)=0$. The decomposition under the ACMS of the exterior derivative ${\rm d}_{7}\alpha$ is given by ${\rm d}_{7}\alpha=\sigma\wedge R(\alpha)+{\rm d}_{\perp}\alpha~{},$ (48) where we have defined ${\rm d}_{\perp}\alpha={\rm d}\alpha-\Sigma\wedge R(\alpha)~{},$ (49) and ${\rm d}$ refers to derivatives with respect to the transverse coordinates in the coordinate system $\\{r,x^{m}\\}$ adapted to the one dimensional foliation of $Y$ by the non-zero vector $R$ (see (13)). 
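Lemma 2 can be checked directly in adapted coordinates. The sketch below is an illustration; the three coordinates $(r,x,y)$ with $R=\partial_{r}$, and the sample forms $\Sigma$ and $\alpha$, are hypothetical choices made only for this check of the decomposition (48)-(49):

```python
import sympy as sp

r, x, y = sp.symbols('r x y')
coords = (r, x, y)                 # index 0 is the leaf coordinate r, R = d/dr

def perm_sign(seq):
    s = 1
    for i in range(len(seq)):
        for j in range(i + 1, len(seq)):
            if seq[i] > seq[j]:
                s = -s
    return s

def wedge(a, b):
    out = {}
    for ia, ca in a.items():
        for ib, cb in b.items():
            idx = ia + ib
            if len(set(idx)) < len(idx):
                continue
            key = tuple(sorted(idx))
            out[key] = out.get(key, 0) + perm_sign(idx) * ca * cb
    return out

def ext_d(form, dirs):
    """Exterior derivative taken only along the coordinate indices in dirs."""
    out = {}
    for idx, c in form.items():
        for i in dirs:
            dc = sp.diff(c, coords[i])
            if dc == 0 or i in idx:
                continue
            key = tuple(sorted((i,) + idx))
            out[key] = out.get(key, 0) + perm_sign((i,) + idx) * dc
    return out

def add(a, b, s=1):
    out = dict(a)
    for k, v in b.items():
        out[k] = out.get(k, 0) + s * v
    return out

def tidy(a):
    return {k: sp.expand(v) for k, v in a.items() if sp.expand(v) != 0}

R_flow = lambda form: tidy({k: sp.diff(v, r) for k, v in form.items()})  # R(.)

Sigma = {(1,): r * y}                        # a sample transverse one form
sigma = add({(0,): sp.Integer(1)}, Sigma)    # sigma = dr + Sigma, eq. (47)
alpha = {(1,): r * x * y, (2,): r * y}       # a sample transverse one form alpha

d7 = ext_d(alpha, (0, 1, 2))                           # full exterior derivative
d_perp = add(ext_d(alpha, (1, 2)), wedge(Sigma, R_flow(alpha)), s=-1)  # eq. (49)
rhs = add(wedge(sigma, R_flow(alpha)), d_perp)         # eq. (48)
assert tidy(d7) == tidy(rhs)
print("Lemma 2 verified on a sample transverse one form")
```

The $\Sigma\wedge R(\alpha)$ term in (49) is essential: it cancels the part of $\sigma\wedge R(\alpha)$ coming from $\Sigma$, leaving exactly ${\rm d}_{7}\alpha$.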
The operator ${\rm d}_{\perp}$ is a derivation, in particular it satisfies the Leibniz rule, and its square ${\rm d}^{2}_{\perp}$ can be understood as a curvature, whose action on transverse forms is given by ${\rm d}_{\perp}^{2}=-{\rm d}_{\perp}\Sigma\wedge R~{}.$ (50) The exterior derivative ${\rm d}_{7}$ on the one-form $\sigma$ is then ${\rm d}_{7}\sigma={\rm d}_{7}\Sigma=\sigma\wedge R(\Sigma)+{\rm d}_{\perp}\Sigma~{}.$ (51) Finally, equations (48)-(51) imply that the exterior derivative ${\rm d}_{7}\Lambda$ of a $p$-form $\Lambda$ on $Y$ with components $\Lambda=\sigma\wedge\Lambda_{0}+\Lambda_{\perp}~{},$ is given by ${\rm d}_{7}\Lambda=\sigma\wedge\big{(}-{\rm d}\Lambda_{0}+R(\Lambda_{\perp}+\Sigma\wedge\Lambda_{0})\big{)}+{\rm d}_{\perp}\Lambda_{\perp}+{\rm d}_{\perp}\Sigma\wedge\Lambda_{0}~{}.$ (52) Using Lemma 2, the decomposition of ${\rm d}_{7}\varphi$ and ${\rm d}_{7}\psi$ is given by $\displaystyle{\rm d}_{7}\varphi$ $\displaystyle=\sigma\wedge\big{(}-{\rm d}\omega+R(\Omega_{+}+\Sigma\wedge\omega)\big{)}+{\rm d}_{\perp}\Omega_{+}+{\rm d}_{\perp}\Sigma\wedge\omega~{},$ (53) $\displaystyle{\rm d}_{7}\psi$ $\displaystyle=\sigma\wedge\big{(}{\rm d}\Omega_{-}+R(\rho-\Sigma\wedge\Omega_{-})\big{)}+{\rm d}_{\perp}\rho-{\rm d}_{\perp}\Sigma\wedge\Omega_{-}~{}.$ (54) Equating these with the decomposition under the ACMS of the right hand side of the $G_{2}$ structure equations (5) and (6), we find the following relations for the transverse $SU(3)$ structure $\displaystyle{\rm d}\Omega_{+}$ $\displaystyle=\tau_{0}\,\rho+3\,\tau_{1\,\perp}\wedge\Omega_{+}-\big{(}J(\tau_{3\,0})+{\rm d}\Sigma\big{)}\wedge\omega+\Sigma\wedge R(\Omega_{+}+\Sigma\wedge\omega)~{},$ $\displaystyle{\rm d}\Omega_{-}$ $\displaystyle=4(\tau_{1\,0}\,\rho+\tau_{1\,\perp}\wedge\Omega_{-})-R(\rho-\Sigma\wedge\Omega_{-})-\tau_{2\,0}\wedge\Omega_{+}-\tau_{2\,\perp}\wedge\omega~{},$ $\displaystyle{\rm d}\omega$
$\displaystyle=\tau_{0}\,\Omega_{-}-3\,\tau_{1\,0}\,\Omega_{+}+3\,\tau_{1\,\perp}\wedge\omega+*\tau_{3\,\perp}+R(\Omega_{+}+\Sigma\wedge\omega)~{},$ $\displaystyle{\rm d}\rho$ $\displaystyle=4\,\tau_{1\,\perp}\wedge\rho+{\rm d}\Sigma\wedge\Omega_{-}+\Sigma\wedge R(\rho-\Sigma\wedge\Omega_{-})-\tau_{2\,\perp}\wedge\Omega_{+}~{},$ where we have used Lemma 1 and decomposed the $G_{2}$ torsion classes with respect to the ACS $\tau_{1}=\sigma\,\tau_{1\,0}+\tau_{1\,\perp}~{},\qquad\qquad\tau_{2}=\sigma\wedge\tau_{2\,0}+\tau_{2\,\perp}~{},\qquad\qquad\tau_{3}=\sigma\wedge\tau_{3\,0}+\tau_{3\,\perp}~{}.$ (55) The condition that $\tau_{3}\in\Omega^{3}_{\bf 27}(Y)$ implies that $\tau_{3\,\perp}$ is type $(2,1)+(1,2)$, $\tau_{3\,0}$ is primitive and $\omega\lrcorner\tau_{3\,\perp}=\tau_{3\,0}\lrcorner\,\Omega_{+}~{}.$ (56) Similarly, the condition that $\tau_{2}\in\Omega^{2}_{\bf 14}(Y)$ implies that $\tau_{2\,\perp}$ is primitive and $\tau_{2\,\perp}\lrcorner\,\Omega_{-}=\tau_{2\,0}~{}.$ (57) We can now compare these relations with the $SU(3)$ structure equations (171) and (172), and deduce formulas for the torsion classes of the ACMS-induced $SU(3)$ structure on $Y$, $W_{i}$, in terms of the $G_{2}$ torsion classes and the flow of the $SU(3)$ structure along $R$.
After a somewhat lengthy computation555These computations are very similar to those in reference delaOssa:2014lma ., we find $\begin{split}{\rm Re}\,W_{0}&=\frac{2}{3}\,\tau_{0}+\frac{1}{6}\,\Omega_{-}\lrcorner R\big{(}\Omega_{+}+\Sigma\wedge\omega\big{)}~{},\\\\[5.0pt] {\rm Im}\,W_{0}&=2\,\tau_{1\,0}-\frac{1}{6}\,\Omega_{+}\lrcorner R\big{(}\Omega_{+}+\Sigma\wedge\omega\big{)}~{},\\\\[5.0pt] 2\,W_{1}&=4\,\tau_{1\,\perp}-J(\tau_{2\,0})+{\rm d}_{\perp}\Sigma\lrcorner\,\Omega_{-}+\omega\lrcorner(\Sigma\wedge R(\omega))~{},\\\\[5.0pt] 2\,{\rm Re}\,\theta&=4\,\tau_{1\,\perp}+\frac{1}{2}\,J(\tau_{2\,0})+\frac{1}{2}\,\omega\lrcorner\,R(\Omega_{+})+\Omega_{-}\lrcorner R(\Sigma\wedge\Omega_{-})\\\\[5.0pt] {\rm Re}\,W_{2}&=-\tau_{3\,0}^{(1,1)}-{\cal P}\Big{(}{\rm d}_{\perp}\Sigma-\omega\lrcorner\big{(}\Sigma\wedge R(\Omega_{+})\big{)}\Big{)}^{(1,1)}~{},\\\\[5.0pt] {\rm Im}\,W_{2}&=-\tau_{2\,\perp}^{(1,1)}-{\cal P}\Big{(}R(\omega)-\omega\lrcorner\big{(}\Sigma\wedge R(\Omega_{-})\big{)}\Big{)}^{(1,1)}~{},\\\\[5.0pt] W_{3}&={\cal P}\Big{(}J(\tau_{3\,\perp})+R\big{(}\Omega_{+}+\Sigma\wedge\omega\big{)}^{(2,1)+(1,2)}\Big{)}~{}.\end{split}$ (58) In these equations, $\cal P$ denotes that the primitive part of the form is taken. Two additional identities appear in this computation: $\displaystyle 2\,\omega\lrcorner{\rm d}_{\perp}\Sigma$ $\displaystyle=-\tau_{0}-\Omega_{-}\lrcorner\,R(\Omega_{+})~{},$ (59) $\displaystyle\big{(}{\rm d}_{\perp}\Sigma\big{)}^{(2,0)+(0,2)}$ $\displaystyle=\tau_{3\,0}{}^{(2,0)+(0,2)}+\left(\tau_{1\,\perp}+\frac{1}{2}\,J(\tau_{2\,0})+R(\Sigma)+\frac{1}{2}\,\omega\lrcorner R(\Omega_{+})\right)\lrcorner\Omega_{-}~{}.$ (60) Let $\nabla$ be a connection compatible with the $G_{2}$ structure with intrinsic torsion classes $\\{\tau_{i},i=0,1,2,3\\}$. Equations (58) imply that one can construct on $Y$ a connection $\hat{\nabla}$ with $SU(3)$ holonomy with intrinsic torsion classes $\\{{\rm Re}(\theta),W_{0},W_{1},W_{2},W_{3}\\}$. 
We remark however that this does not necessarily imply that $\nabla\omega=0$ and $\nabla\Omega=0$. In fact, one can prove, recalling $\omega=i_{R}(\varphi)$, that $\nabla\omega=0$ if and only if $\nabla\sigma=0$. In section 3, we discuss an application of this condition in the context of supersymmetry enhancement in heterotic string theories. ## 3 Heterotic $G_{2}$ systems under the ACMS In this section we explore the effect of the ACMS on a class of minimally supersymmetric compactifications of the heterotic string Gunaydin:1995ku ; Gauntlett:2001ur ; 2001math……2142F ; Gauntlett:2002sc ; Firedrich:2003 ; Gauntlett:2003cy ; Ivanov:2003nd ; Lukas:2010mf ; Gray:2012md that have received attention lately due to, on the one hand, their connection to $G_{2}$ instanton bundles, and on the other hand, their interesting deformation theory delaOssa:2016ivz ; delaOssa:2017pqy ; Fiset:2017auc ; delaOssa:2018azc ; Clarke:2016qtg ; Clarke:2020erl . In particular, we decompose the description of this class of solutions with respect to an ACMS. We will see that this extra structure allows us to make contact with string compactifications with enhanced supersymmetry. We use this to write down the necessary and sufficient constraints on the ACMS for supersymmetry enhancement. ### 3.1 Heterotic $G_{2}$ systems Let $Y$ be a seven dimensional manifold with a $G_{2}$ structure $\varphi$ and let $V$ be a vector bundle on $Y$ with connection $A$. We are interested in the decomposition of ten dimensional heterotic superstring backgrounds on $(Y,V)$ that preserve minimal supersymmetry under the ACMS, i.e. heterotic $G_{2}$ systems. A heterotic $G_{2}$ system is defined to be the quadruple $[(Y,\varphi),(V,A),(TY,\Theta),H]~{},$ (61) where * • $\varphi$ is an integrable $G_{2}$ structure on the seven dimensional manifold $Y$, that is $\tau_{2}=0$.
In this case the structure equations can be written as $\displaystyle{\rm d}_{7}\varphi$ $\displaystyle=\tau_{0}\,\psi+3\,\tau_{1}\wedge\varphi+*\tau_{3}=i_{T}(\varphi)~{},$ (62) $\displaystyle{\rm d}_{7}\psi$ $\displaystyle=4\,\tau_{1}\wedge\psi=i_{T}(\psi)~{},$ (63) where $T$ is the totally antisymmetric torsion given by $T(\varphi)=\frac{1}{6}\,\tau_{0}\,\varphi-\tau_{1}\lrcorner\,\psi-\tau_{3}~{}.$ (64) Note that a $G_{2}$ structure admits a totally antisymmetric torsion if and only if ${\rm d}_{7}\psi\in\Omega^{5}_{{\bf 7}}$, and $T(\varphi)$ is in fact the torsion of the unique metric connection with a totally antisymmetric torsion. * • $V$ is a gauge bundle with connection $A$ that is an instanton, i.e. its curvature $\cal F$ satisfies ${\cal F}\wedge\psi=0~{}.$ (65) * • $\Theta$ is a connection on the tangent bundle $TY$ of $Y$ which is also an instanton ${\cal R}(\Theta)\wedge\psi=0~{},$ (66) where ${\cal R}(\Theta)$ is the curvature of $\Theta$. * • $H$ is a three form defined by $H={\rm d}_{7}B+\frac{\alpha^{\prime}}{4}\,({\cal CS}(A)-{\cal CS}(\Theta))~{},$ (67) where ${\cal CS}(A)$ is the Chern-Simons form of the connection $A$ ${\cal CS}(A)={\rm tr}\left(A\wedge{\rm d}_{7}A+\frac{2}{3}\,A^{3}\right)~{},$ (68) with a similar definition for ${\cal CS}(\Theta)$, and $B$ is the $B$-field. The fields $H$, $A$, $B$ and $\Theta$ are constrained such that $H=T(\varphi)~{},$ (69) where $T(\varphi)$ is given in equation (64) and is the totally antisymmetric torsion of a (unique) connection $\nabla$ compatible with the $G_{2}$ structure. ### 3.2 Decomposition of the $G_{2}$ structure equations with respect to the ACMS In section 2.3 we presented the decomposition of the structure equations of a general $G_{2}$ structure under the ACMS. We do not write these relations here since, for an integrable $G_{2}$ structure, all we need to do is set $\tau_{2}=0$.
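The instanton condition (65) can be probed in the flat model. The sketch below is an illustration only; it assumes $\sigma=e^{6}$, the flat $SU(3)$ structure on the transverse $\mathbb{R}^{6}$, the decomposition $\psi=-\sigma\wedge\Omega_{-}+\tfrac{1}{2}\,\omega\wedge\omega$ of (41)-(42), and an abelian, scalar-valued test curvature. It checks that a primitive $(1,1)$ transverse two form wedges to zero against $\psi$, while a non-primitive one does not:

```python
def perm_sign(seq):
    s = 1
    for i in range(len(seq)):
        for j in range(i + 1, len(seq)):
            if seq[i] > seq[j]:
                s = -s
    return s

def wedge(a, b):
    out = {}
    for ia, ca in a.items():
        for ib, cb in b.items():
            idx = ia + ib
            if len(set(idx)) < len(idx):
                continue
            key = tuple(sorted(idx))
            out[key] = out.get(key, 0) + perm_sign(idx) * ca * cb
    return {k: v for k, v in out.items() if v != 0}

# Flat model on R^7 with sigma = e^6 and transverse indices 0..5 (assumed conventions):
omega = {(0, 1): 1, (2, 3): 1, (4, 5): 1}
Om = {(0, 2, 5): 1, (0, 3, 4): 1, (1, 2, 4): 1, (1, 3, 5): -1}   # Omega_-
rho = {k: v // 2 for k, v in wedge(omega, omega).items()}         # rho = omega^2 / 2
psi = wedge({(6,): -1}, Om)                                       # -sigma ^ Omega_-
psi.update(rho)                                                   # psi as in (41)-(42)

# Abelian test curvature with F_0 = 0: F_perp = e^01 - e^23 is primitive
# (omega . F = 1 - 1 = 0) and type (1,1) (F . Omega_- = 0), so F ^ psi = 0.
F = {(0, 1): 1, (2, 3): -1}
assert wedge(F, psi) == {}

# A non-primitive two form fails the instanton condition:
F_bad = {(0, 1): 1, (2, 3): 1}
assert wedge(F_bad, psi) != {}
print("instanton condition (65) consistent with the decomposed constraints")
```

This is the pointwise linear algebra behind the decomposed conditions derived in section 3.3; the full statement of course requires the constraints to hold at every point of $Y$.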
### 3.3 Instanton conditions We want to discuss how the instanton conditions are decomposed under the ACMS on $Y$. Let $A=\sigma\,a_{0}+a~{},$ (70) be the decomposition under the ACMS of the gauge connection $A$. Then, the curvature $\cal F$ of the connection $A$ decomposes accordingly as ${\cal F}=\sigma\wedge{\cal F}_{0}+{\cal F}_{\perp}~{},$ (71) where $\displaystyle{\cal F}_{0}$ $\displaystyle=-{\rm d}_{a}a_{0}+R(a+\Sigma\,a_{0})~{},$ (72) $\displaystyle{\cal F}_{\perp}$ $\displaystyle={\cal F}(a)+{\rm d}\Sigma\,\,a_{0}-\Sigma\wedge R(a+\Sigma\,a_{0})~{}.$ (73) It is not too difficult to show that the instanton condition ${\cal F}\wedge\psi=0$ for the curvature $\cal F$ is equivalent to the constraints $\displaystyle\omega\lrcorner{\cal F}_{\perp}$ $\displaystyle=0~{},$ (74) $\displaystyle{\cal F}_{\perp}\lrcorner\Omega_{-}$ $\displaystyle={\cal F}_{0}~{}.$ (75) It is instructive to note that in terms of $\hat{a}=a+\Sigma\,a_{0}~{},$ (76) the components of the curvature are ${\cal F}_{0}=-{\rm d}_{\hat{a}}\,a_{0}+R(\hat{a})~{},\qquad\qquad{\cal F}_{\perp}={\cal F}(\hat{a})-\Sigma\wedge{\cal F}_{0}~{},$ (77) and the instanton conditions become $\omega\lrcorner{\cal F}(\hat{a})=-{\cal F}_{0}\lrcorner J(\Sigma)~{},\qquad\qquad{\cal F}(\hat{a})\lrcorner\,\Omega_{-}={\cal F}_{0}+{\cal F}_{0}\lrcorner(\Sigma\lrcorner\,\Omega_{-})~{}.$ (78) These conditions do not correspond to $\hat{a}$ being an instanton on the transverse geometry unless, for example, ${\cal F}_{0}=0$. The component ${\cal F}_{0}$ of the field strength $\cal F$ can be interpreted as a trivial deformation of the gauge connection $A$ under the one parameter group of diffeomorphisms generated by the vector $R$. To see this, recall delaOssa:2017pqy 666In delaOssa:2017pqy it was proven that trivial deformations of the heterotic $G_{2}$ system are exact in the cohomology of a nilpotent operator. We refer the reader to this reference for details.
that a trivial deformation of $A$ due to a gauge transformation with parameter $\epsilon$ together with a diffeomorphism generated by a vector $V$ is given by $(\delta A)_{triv}={\rm d}_{A}\epsilon+i_{V}({\cal F})~{}.$ The second term gives the component ${\cal F}_{0}$ when $V=R$. In other words, ${\cal F}_{0}$ gives the variation of $\hat{a}$ along the integral curves of $R$ up to a gauge transformation of $\hat{a}$ generated by $a_{0}$. Note that if $R$ is a Killing vector then ${\cal F}_{0}=0$ and, in this case, $\hat{a}$ is an $SU(3)$ instanton. We will return to this case in section 3.6. Similar considerations apply to the instanton connection $\Theta$ on the tangent bundle of $Y$. In this case we decompose the connection $\Theta$ under the ACMS, in analogy with (70), as $\Theta=\sigma\,\theta_{0}+\theta~{}.$ The curvature of this connection is thus decomposed as ${\cal R}(\Theta)=\sigma\wedge{\cal R}_{0}+{\cal R}_{\perp}~{},$ where, in terms of $\hat{\theta}=\theta+\Sigma\,\theta_{0}\,$, the instanton conditions become $\omega\lrcorner{\cal R}(\hat{\theta})=-{\cal R}_{0}\lrcorner J(\Sigma)~{},\qquad\qquad{\cal R}(\hat{\theta})\lrcorner\,\Omega_{-}={\cal R}_{0}+{\cal R}_{0}\lrcorner(\Sigma\lrcorner\,\Omega_{-})~{}.$ (79) Just as before, the component ${\cal R}_{0}$ of the curvature ${\cal R}(\Theta)$ is interpreted as a trivial deformation of the instanton connection $\Theta$ on $TY$ under the one parameter group of diffeomorphisms generated by the vector $R$. When $R$ is a Killing vector, this component vanishes and $\hat{\theta}$ is an instanton. ### 3.4 Anomaly cancellation condition Let $H=\sigma\wedge H_{0}+H_{\perp}~{},$ (80) be the decomposition of the flux $H$ with respect to the ACS.
Then, recalling equations (67) and (68), the terms in the flux decomposition are given by $\displaystyle H_{0}$ $\displaystyle=-{\rm d}b_{0}+R(\hat{b})+\frac{\alpha^{\prime}}{4}\,\left({\rm tr}\big{(}a_{0}\,{\rm d}\hat{a}-\hat{a}\wedge{\cal F}_{0}\big{)}-{\rm tr}\big{(}\theta_{0}\,{\rm d}\hat{\theta}-\hat{\theta}\wedge{\cal R}_{0}\big{)}\right)~{},$ (81) $\displaystyle H_{\perp}$ $\displaystyle={\rm d}\hat{b}+\frac{\alpha^{\prime}}{4}\,\Big{(}{\cal CS}(\hat{a})-{\cal CS}(\hat{\theta})\Big{)}-\Sigma\wedge H_{0}~{},$ (82) where $\hat{b}=B_{\perp}+\Sigma\wedge B_{0}~{},\qquad b_{0}=B_{0}~{}.$ We can interpret $H_{0}$ as a trivial deformation of the $B$-field due to the diffeomorphism generated by $R$. In fact, up to an ambiguity of an exact two form, a trivial deformation of the $B$-field due to a diffeomorphism generated by a vector $V$ and a gauge transformation with parameter $\epsilon$ is given by delaOssa:2017pqy $(\delta B)_{triv}=i_{V}\,(H)+\frac{\alpha^{\prime}}{2}\,{{\rm tr}}(\epsilon{\cal F})~{}.$ (83) When $V=R$, the first term is indeed $H_{0}$ as claimed. If $R$ is an isometry, then $H_{0}$ vanishes up to an exact two form. We will see the implications of this result in section 3.6 as well as in an example in section 5.1. The anomaly cancellation condition is the requirement that the flux $H$ equals the torsion $T(\varphi)$ of the connection with $G_{2}$ holonomy determined uniquely by $\varphi$ (see equations (64) and (69)).
Under the ACMS we have then $H_{0}=T_{0}({\varphi})~{},\qquad H_{\perp}=T_{\perp}(\varphi)~{},$ (84) where $T=\sigma\wedge T_{0}+T_{\perp}~{},$ (85) and $\displaystyle T_{0}$ $\displaystyle=\frac{1}{6}\,\tau_{0}\,\omega-\tau_{1\,\perp}\lrcorner\ \Omega_{-}-\tau_{3\,0}~{},$ (86) $\displaystyle T_{\perp}$ $\displaystyle=\frac{1}{6}\,\tau_{0}\,\Omega_{+}+\tau_{1\,0}\,\Omega_{-}+J(\tau_{1\,\perp})\wedge\omega-\tau_{3\,\perp}~{}.$ (87) One can write expressions for $T_{0}$ and $T_{\perp}$ in terms of the torsion classes of the $SU(3)$ structure induced by the ACMS using equations (58) with $\tau_{2}=0$. ### 3.5 $N=1$ superpotential in terms of the ACMS A compactification of the heterotic string on a heterotic $G_{2}$ system leads to a minimally supersymmetric ($N=1$) effective field theory on either $AdS_{3}$ or three dimensional Minkowski spacetime. It has been shown de_Ia_Ossa_2020 that, up to an overall constant, the superpotential $W$ of the $N=1$ effective theory is given by $W=\int_{Y}e^{-2\phi}\ \left((H+h\,\varphi)\wedge\psi-\frac{1}{2}\,{\rm d}_{7}\varphi\wedge\varphi\right)~{},$ (88) where $h$ is the three dimensional flux which is related to the curvature of the three dimensional spacetime and $\phi$ is the dilaton field. It was also shown in de_Ia_Ossa_2020 that this superpotential is a functional of the fields whose critical points give the conditions for preservation of $N=1$ supersymmetry in three dimensions, or equivalently, the conditions defining the $G_{2}$ structure.
Furthermore, it was shown that supersymmetry requires, apart from the constraints described above, that $h$ be proportional to the torsion class $\tau_{0}$, and that $\tau_{1}$ is ${\rm d}$-exact, specifically $h=\frac{1}{3}\,\tau_{0}~{},\qquad\tau_{1}=\frac{1}{2}\,{\rm d}\phi~{}.$ As we have seen in section 2, the ACMS $(J,R,\sigma,g_{\varphi})$ on a manifold $Y$ with a $G_{2}$ structure $\varphi$, implies that the $G_{2}$ structure can be decomposed in terms of an underlying transverse $SU(3)$ structure $(\omega_{\varphi},\Omega_{+})$. This is summarised in Proposition 1. In a similar way, we may use equations (34), (39), (53), and (80), to decompose the various terms in the superpotential (88). A short computation gives $\begin{split}W&=\int_{Y}e^{-2\phi}\,\sigma\wedge{\rm Im}\left(\left[\,\hat{H}+i\,{\rm d}_{\perp}\omega\right.\right.\\\\[5.0pt] &\qquad\left.\left.+\frac{1}{8}\,\left(\omega\lrcorner(H_{0}-{\rm d}_{\perp}\Sigma)+7\,h-\left(\Sigma\wedge H_{0}+\frac{i}{2}\,R(\Omega_{+})\right)\lrcorner\bar{\Omega}\,\right)\,\bar{\Omega}\right]\wedge\Omega\right)~{},\end{split}$ (89) where $\hat{H}={\rm d}\hat{b}+\frac{\alpha^{\prime}}{4}\,\big{(}{\cal CS}(\hat{a})-{\cal CS}(\hat{\theta})\big{)}~{}.$ (90) This decomposition provides links between the heterotic $G_{2}$ system and the six dimensional Strominger–Hull system Strominger:1986uh ; Hull:1986kz .777Such links were discussed from a different perspective in delaOssa:2017gjq . In particular, this is evident from the first two terms in the square bracket in (89), where we recognize the $SU(3)$ superpotential of the latter system.888The remaining terms of $W$ are related to the non-transverse objects. This is consistent with the fact that the systems need not be related by a circle reduction, and that the heterotic $G_{2}$ system need only preserve half of the supersymmetry of the Strominger–Hull system. Indeed, in contrast to Strominger–Hull system, we cannot identify a holomorphic superpotential. 
As a further comment we note that, as is true for any four dimensional $N=1$ theory, the Strominger–Hull superpotential captures the F-term constraints, but not the D-terms. In contrast, for three dimensional $N=1$ theories there is no decomposition into F- and D-terms, and indeed the $G_{2}$ superpotential also captures the D-term constraints of the Strominger–Hull system. ### 3.6 Three dimensional theory with $N=2$ Consider now the following question: what are the constraints on the heterotic $G_{2}$ system for supersymmetry to be enhanced to $N=2$? This question was already addressed in Gran:2005wf ; Gran:2007kh ; Gran:2016zxk . In this section, we approach the question using an almost contact metric structure and show that supersymmetry enhancement is equivalent to the existence of a particular kind of ACMS on $Y$. We know that any manifold $Y$ with a $G_{2}$ structure admits a nowhere vanishing, covariantly constant spinor $\eta$. Given the existence of a nowhere vanishing vector $R$, the manifold $Y$ admits another nowhere vanishing spinor $R\eta$, which is not, however, necessarily covariantly constant. In this section we deduce the geometric conditions under which the spinor $R\eta$ is covariantly constant too. This has interesting applications: for example, to the construction of three dimensional theories with $N=2$ supersymmetry by constraining the heterotic $G_{2}$ system further so that $R\eta$ is indeed covariantly constant. As we will see, with two covariantly constant spinors at hand, the structure group of $Y$ reduces further to a certain type of $SU(3)$ structure. The requirement that $\nabla(R\eta)=0$ is equivalent to the condition that $R$ is itself covariantly constant. 
Equivalently (as the connection $\nabla$ is metric), we require that $\sigma$ is covariantly constant, $\nabla_{a}\sigma_{b}=0~{}.$ (91) As we remarked earlier, we now have $\nabla\omega=0$ and $\nabla\Omega=0$, so the holonomy of the $G_{2}$ compatible connection $\nabla$ is reduced to $SU(3)$. Symmetrising equation (91) with respect to the indices $a,b$, we obtain that $R$ must be a Killing vector. When $R$ is a Killing vector, $\varphi$ becomes independent of the coordinate $r$ and hence the transverse forms $\omega$, $\Omega$, and $\Sigma$ are also independent of $r$. The antisymmetric part of equation (91) gives a further constraint ${\rm d}_{7}\sigma=i_{T}(\sigma)=T_{0}~{}.$ (92) The authors of Gran:2005wf ; Gran:2007kh obtain precisely condition (91) as part of the requirements for supersymmetry preserving backgrounds. In fact, a ten dimensional background on $AdS_{3}\times Y$, where $Y$ is a manifold with a $G_{2}$ structure, demands the existence of an extra covariantly constant one form in order to have an extra supersymmetry, hence reducing the holonomy of the $G_{2}$ connection to $SU(3)$. The relations between the $G_{2}$ torsion classes and the induced $SU(3)$ torsion classes given in equations (93) simplify greatly because we must now set $\tau_{2}=0$ and, since $R$ is a Killing vector, $R(\Sigma)=0$, $R(\omega)=0$ and $R(\Omega)=0$. 
We then find $\begin{split}{\rm Re}\,W_{0}&=\frac{2}{3}\,\tau_{0}=-\frac{4}{3}\,\omega\lrcorner\,{\rm d}\Sigma~{},\qquad\qquad{\rm Im}\,W_{0}=2\,\tau_{1\,0}~{},\\\\[5.0pt] 2\,W_{1}&=4\,\tau_{1\,\perp}+{\rm d}\Sigma\lrcorner\,\Omega_{-}~{},\qquad\qquad 2\,{\rm Re}\,\theta=4\,\tau_{1\,\perp}~{},\\\\[5.0pt] {\rm Re}\,W_{2}&=-\tau_{3\,0}^{(1,1)}-{\cal P}\big{(}{\rm d}\Sigma\big{)}^{(1,1)}~{},\qquad\qquad{\rm Im}\,W_{2}=-\tau_{2\,\perp}^{(1,1)}~{},\\\\[5.0pt] W_{3}&={\cal P}\big{(}J(\tau_{3\,\perp})\big{)}~{},\\\\[5.0pt] \big{(}{\rm d}\Sigma\big{)}^{(2,0)+(0,2)}&=\tau_{3\,0}{}^{(2,0)+(0,2)}+\tau_{1\,\perp}\lrcorner\Omega_{-}~{}.\end{split}$ (93) As discussed above, a Killing vector is necessary but not sufficient for an enhancement of the supersymmetry; we must furthermore impose condition (92). Using (93) in (86), condition (92) becomes ${\rm d}\Sigma=T_{0}=-\frac{1}{3}\,(\omega\lrcorner{\rm d}\Sigma)\,\omega+{\rm Re}W_{2}+{\cal P}({\rm d}\Sigma)^{(1,1)}-({\rm d}\Sigma)^{(2,0)+(0,2)}~{}.$ (94) Therefore ${\rm d}\Sigma$ must be a primitive $(1,1)$ form and ${\rm Re}W_{2}=0$. Moreover, the fact that $\omega\lrcorner{\rm d}\Sigma=0$ means that ${\rm Re}W_{0}=0$, that is, $\tau_{0}=0$. For the heterotic string we need to add the constraint that $2\,\tau_{1}={\rm d}_{7}\phi$. This implies that $\tau_{1\,0}=0~{},\qquad 2\,\tau_{1\,\perp}={\rm d}\phi~{}.$ (95) Therefore ${\rm Im}\,W_{0}=0~{}.$ (96) In summary, the torsion classes of the $SU(3)$ structure on the transverse geometry satisfy $\displaystyle W_{0}$ $\displaystyle=0~{},\qquad W_{2}=0~{},$ $\displaystyle W_{1}$ $\displaystyle={\rm Re}\,\theta={\rm d}\phi=-\tau_{3\,0}\lrcorner\,\Omega_{-}~{},\qquad$ $\displaystyle W_{3}$ $\displaystyle={\cal P}\big{(}J(\tau_{3\,\perp})\big{)}~{},$ and we also have $\tau_{0}=0~{},\qquad{\rm d}\Sigma=-\tau_{3\,0}^{(1,1)}~{},$ where $\tau_{3\,0}$ is primitive. 
As an example, we note that, if the manifold $Y$ has $G_{2}$ holonomy, then the transverse bundle ${\rm Ker}(\sigma)$ is necessarily integrable, so $Y$ carries a codimension one foliation whose leaves are Calabi–Yau three folds. More generally, the vanishing of $W_{0}$ and $W_{2}$ means that the almost complex structure $J$ on the transverse geometry is integrable, and the fact that $W_{1}$ (and hence ${\rm Re}\,\theta$) is exact implies that the transverse geometry is conformally balanced. Therefore the transverse geometry has the $SU(3)$ structure of the Strominger–Hull system. Moreover, the fact that ${\rm d}\Sigma$ is a primitive $(1,1)$ form means that the manifold $Y$ has the structure of a $U(1)$ principal bundle with a holomorphic connection $\Sigma$ over the transverse geometry. Finally, we note that the vanishing of $\tau_{0}$ implies that the three dimensional spacetime is Minkowski space. Consider now the instanton conditions. Following the discussion at the end of section 3.3, we recall that ${\cal F}_{0}$ represents the change of the connection $A$ with respect to the vector $R$. As $R$ is, in our case, a Killing vector, we have ${\cal F}_{0}=0$ and it follows that $\hat{a}$ is a holomorphic instanton on the transverse geometry. Similarly, ${\cal R}_{0}=0$ and $\hat{\theta}$ is also an instanton. For the anomaly cancellation condition, we saw earlier that if $R$ is a Killing vector, then $H_{0}$ must be an exact two form, a conclusion that was arrived at by studying the symmetries of the heterotic system. Interestingly, the condition that $H_{0}$ be an exact form is precisely the content of equation (94), which fixes this exact form to be $H_{0}=T_{0}={\rm d}\Sigma~{},$ where we recall that the second equality comes from the fact that $R$ is not just a Killing vector, but is also covariantly constant. 
For completeness we note that the transverse part of the flux becomes $H_{\perp}=-\Sigma\wedge{\rm d}\Sigma+\hat{H}~{},$ where $\hat{H}$ is defined in equation (90). Note, furthermore, that the Bianchi identity of the anomaly cancellation condition becomes ${\rm d}H_{\perp}=-{\rm d}\Sigma\wedge{\rm d}\Sigma+\frac{\alpha^{\prime}}{4}\Big{(}{\rm tr}({\cal F}(\hat{a})\wedge{\cal F}(\hat{a}))-{\rm tr}({\cal R}(\hat{\theta})\wedge{\cal R}(\hat{\theta}))\Big{)}~{}.$ (97) We remark that it has not been necessary to require the integrability of ${\rm Ker}(\sigma)$. That is, it is not necessary to require that ${\rm d}\Sigma=0$ to have $N=2$ in three dimensions, as one might have expected. ## 4 $SU(2)$ structures on manifolds with a $G_{2}$ structure The $SU(3)$ structure discussed in the previous section is not the only “bonus” restriction on the topology of $G_{2}$ structure manifolds. Indeed, there are _two_ non-vanishing vector fields on a manifold $Y$ with a $G_{2}$ structure 10.2307/1970439 ; thomas1969 , which may be combined with the $G_{2}$ compatible spinor to form two additional nowhere-vanishing spinors. These three spinors reduce the structure group of the manifold to $SU(2)$ friedrich1997nearly . Moreover, the existence of two well-defined vectors implies that there is an almost contact metric 3-structure (ACM3S) on $Y$ (that is, a reduction of the structure group to ${\bf 1}_{3}\times Sp(1)$) kuo1970 (see also Todd:2015era ). In this section, we will expand on this topic, and show how the $Sp(1)\cong SU(2)$ structure is embedded in the $G_{2}$ structure. As in the preceding section, we will describe the ACM3S in terms of differential forms. We will also clarify when the ACM3S leads to an involutive decomposition of the tangent bundle $TY$, and expand on the relation to associative and coassociative submanifolds. Finally, we will study the space of ACM3S. ### 4.1 Almost contact 3-structure It is a classical result by E. 
Thomas that any compact, orientable 7-dimensional manifold $Y$ admits two globally defined, everywhere linearly independent vector fields $R^{1},R^{2}\in\Gamma(TY)$ 10.2307/1970439 ; thomas1969 . Suppose now that $Y$ has a $G_{2}$ structure $\varphi$ with metric $g_{\varphi}$. We may then assume, without loss of generality, that the 2-frame $(R^{1},R^{2})$ consists of vectors that are orthonormal. Indeed, given two non-orthogonal, but linearly independent, vectors $(R^{1},R^{2})$, we can form two orthogonal ones using the $G_{2}$ cross product: $(R^{1},R^{1}\times_{\varphi}R^{2})$. We can also normalise the vectors by dividing by their norms as calculated with the $G_{2}$ metric. Thus any $G_{2}$ structure manifold admits an orthonormal 2-frame of vectors. Each vector $R^{\alpha}$ will be associated to an ACMS, in the sense discussed in section 2.1. We thus have Todd:2015era (see also Arikan:2012acs ; Arikan:2011acs ) two dual one-forms $\sigma^{\alpha}$ so that $(J^{\alpha},R^{\alpha},\sigma^{\alpha},g_{\varphi})$, for $\alpha=1,2$, are two ACMS on $Y$. If the structures furthermore satisfy the relations $\begin{split}&\sigma^{1}(R^{2})=\sigma^{2}(R^{1})=0\\\ &J^{1}(R^{2})=-J^{2}(R^{1})\\\ &\sigma^{1}\circ J^{2}=-\sigma^{2}\circ J^{1}\\\ &J^{1}J^{2}-R^{1}\otimes\sigma^{2}=-J^{2}J^{1}+R^{2}\otimes\sigma^{1}\;,\end{split}$ (98) Kuo kuo1970 has shown that $Y$ admits a third ACMS given by $J^{3}=J^{1}J^{2}-R^{1}\otimes\sigma^{2}\;,\;R^{3}=J^{1}(R^{2})\;,\;\sigma^{3}=\sigma^{1}\circ J^{2}\;.$ (99) Together, these three ACMS then satisfy (10), and hence define an almost contact 3-structure (AC3S) kuo1970 . In fact, one can show that, for two almost contact structures associated to the same metric, the last constraint in (98) implies the other three constraints kuo1970 . 
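The orthogonalisation of a 2-frame by the cross product can be checked numerically. The sketch below is an illustration, not taken from the paper: it assumes the standard form (112) of $\varphi$ on $\mathbb{R}^{7}$ (with 0-based indices, so the text's $e^{5},e^{6},e^{7}$ become basis vectors 4, 5, 6), builds the components of $\varphi_{0}$, and verifies that $u\times_{\varphi}v$ is orthogonal to both $u$ and $v$ and that $|u\times_{\varphi}v|^{2}=|u|^{2}|v|^{2}-\langle u,v\rangle^{2}$, which is what allows the frame to be normalised afterwards.

```python
# Sketch: build the standard G2 three-form phi_0 of eq. (112) on R^7
# (0-based indices: the text's e^1..e^7 are basis vectors 0..6) and check
# the orthogonalisation of a 2-frame via the G2 cross product.
import numpy as np
from itertools import permutations

def perm_sign(p):
    # sign of a permutation given by position tuple p
    s = 1
    for i in range(len(p)):
        for j in range(i + 1, len(p)):
            if p[i] > p[j]:
                s = -s
    return s

# nonzero components of phi_0 = (e12 + e34 + e56) ^ e7 + e135 - e146 - e236 - e245
TERMS = {(0, 1, 6): 1, (2, 3, 6): 1, (4, 5, 6): 1,
         (0, 2, 4): 1, (0, 3, 5): -1, (1, 2, 5): -1, (1, 3, 4): -1}
phi = np.zeros((7, 7, 7))
for base, c in TERMS.items():
    for p in permutations(range(3)):
        phi[base[p[0]], base[p[1]], base[p[2]]] = perm_sign(p) * c

def cross(u, v):
    # G2 cross product: (u x v)^a = phi^a_{bc} u^b v^c (Euclidean metric)
    return np.einsum('abc,b,c->a', phi, u, v)

e = np.eye(7)
r3 = cross(e[4], e[5])   # e^5 x e^6 in the text's labels; should equal e^7

# a linearly independent but non-orthogonal pair, orthogonalised as in the text
rng = np.random.default_rng(0)
u = rng.normal(size=7)
v = u + rng.normal(size=7)
w = cross(u, v)          # (u, w) is then an orthogonal 2-frame
```

Since $\varphi$ is totally antisymmetric, $w$ is automatically orthogonal to both $u$ and $v$; dividing by the norms then produces the orthonormal 2-frame used above.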
From the definition of an AC3S, we have the following useful identities $\sigma^{\alpha}(R^{\beta})=\delta^{\alpha\beta}\;\;,\;\;J^{\alpha}(R^{\beta})=\epsilon^{\alpha\beta\gamma}R^{\gamma}\;\;,\;\;\sigma^{\alpha}\circ J^{\beta}=-\sigma^{\beta}\circ J^{\alpha}\;.$ (100) In addition, $R^{3\,a}=[J^{1}(R^{2})]^{a}=\varphi^{a}{}_{bc}R^{1\,b}R^{2\,c}\;\mbox{ i.e. }\;R^{3}=R^{1}\times_{\varphi}R^{2}$ (101) or, equivalently, $\sigma^{\gamma}=\frac{1}{2}\epsilon^{\alpha\beta\gamma}i_{R^{\beta}}i_{R^{\alpha}}\varphi\;,$ (102) which will be used below when we discuss the $SU(2)$ decomposition of the $G_{2}$ structure. Finally, it was recently proven by Todd Todd:2015era that on any $G_{2}$ structure manifold, two ACMSs $(J^{\alpha},R^{\alpha},\sigma^{\alpha},g_{\varphi})$ will automatically satisfy the last constraint of (98). Thus, we have ###### Theorem 1 (Todd, Todd:2015era ). Let $(Y,\varphi)$ be a compact and boundary-less 7-manifold with $G_{2}$ structure $\varphi$. Then $Y$ admits an almost contact metric 3-structure which is compatible with the $G_{2}$ metric. Note that the three ACMS have different contact forms $\sigma^{\alpha}$, but the fact that they are associated to the same $G_{2}$ metric, $g_{\varphi}(J^{\alpha}u,J^{\alpha}v)=g(u,v)-\sigma^{\alpha}(u)\sigma^{\alpha}(v)\;\;,\;\;{\alpha}=1,2,3\;\;{\rm(no\;sum)}\;,$ (103) implies that the resulting almost contact 3-structure is indeed an almost contact metric 3-structure (ACM3S). We observe, in particular, that the ACM3S involves fixing a distinguished orthonormal three-frame, $(R^{1},R^{2},R^{3})$, which induces a decomposition of the tangent bundle $TY\cong\mathcal{T}\oplus\mathcal{T}^{\perp},$ (104) where $\mathcal{T}$ is a trivial rank three bundle with a distinguished trivialisation induced by the three-frame $(R^{1},R^{2},R^{3})$. The second factor, $\mathcal{T}^{\perp}$, is then the orthogonal complement. 
With this data, we are able to identify $\mathcal{T}$ as a trivial bundle of imaginary quaternions, with a product induced by the $G_{2}$ cross product. This follows immediately from (101). The choice of ACM3S therefore reduces our structure group $G_{2}\rightarrow{\bf 1}_{3}\times H,$ for some $H\subset G_{2}$. In fact, results of Kuo kuo1970 show that $H=Sp(1)\cong SU(2)$. One way to see this, from the $G_{2}$ perspective, is to observe that the subgroup of $G_{2}$ that preserves three orthonormal vectors is, indeed, $SU(2)$, from which the result follows. Note, in particular, that the rank four bundle $\mathcal{T}^{\perp}$ has reduced structure group $SO(4)\rightarrow SU(2)$. A reduction of structure group leads to distinguished differential forms living in irreducible representations of the reduced group, and we will explicitly exhibit the forms induced by the reduction of $\mathcal{T}^{\perp}$’s structure group. These structure forms will be familiar to readers who have studied four dimensional $SU(2)$ structure manifolds, but it is important to bear in mind that $\mathcal{T}^{\perp}$ need not be tangent to any underlying four manifold (the conditions that $\mathcal{T}^{\perp}$ must satisfy in order for such manifolds to exist are reviewed in section 4.2); the decomposition is purely at the level of the bundle. The splitting of the tangent bundle (104) induces an analogous decomposition of the cotangent bundle $T^{*}Y=\mathcal{T}^{*}\oplus\mathcal{T}^{*\,\perp}\equiv{\rm Span}\\{\sigma^{1},\sigma^{2},\sigma^{3}\\}\oplus{\rm Span}\\{\sigma^{1},\sigma^{2},\sigma^{3}\\}^{\perp}\;,$ (105) and, consequently, a decomposition of the differential forms. 
In particular, we will refer to a $k$-form $\lambda$ as being $\alpha$-transverse if it satisfies $i_{R^{\alpha}}(\lambda)=0\;.$ (106) More generally, an arbitrary $k$-form $\lambda$ can be decomposed as $\lambda=\sum_{\alpha}\sigma^{\alpha}\wedge\lambda_{0}^{\alpha}+\lambda_{\perp}~{},$ (107) where $i_{R^{\alpha}}(\lambda)=\lambda_{0}^{\alpha}~{},\qquad i_{R^{\alpha}}(\lambda_{0}^{\alpha})=0~{},\quad\mbox{ and }\;i_{R^{\alpha}}(\lambda_{\perp})=0\,,\;\forall\alpha~{}.$ Recall that each ACS has an associated fundamental two-form, (25), $\omega^{\alpha}=i_{R^{\alpha}}(\varphi)\,.$ (108) Evidently, $\omega^{1}$ is a $1$-transverse two-form, but it is neither $2$- nor $3$-transverse; as a consequence it is not in $\Gamma(\Lambda^{2}\mathcal{T}^{*\perp})$, but is instead a linear combination $\omega^{\alpha}=\sum_{\beta\neq\alpha}\sigma^{\beta}\wedge\omega_{0\,\beta}^{\alpha}+\omega^{\alpha}_{\perp}\;,$ (109) where we recall that $\omega^{\alpha}_{\perp}$ is, in fact, transverse with respect to all three ACMS. Moreover, (102) implies that e.g. $\sigma^{3}=i_{R^{2}}\omega^{1}=-i_{R^{1}}\omega^{2}$ (110) and we thus have that $\omega_{0\,1}^{2}=-\sigma^{3}$, or, in general, $\omega^{\alpha}=\frac{1}{2}\epsilon^{\alpha\beta\gamma}\sigma^{\beta}\wedge\sigma^{\gamma}+\omega^{\alpha}_{\perp}\;.$ (111) Next, we decompose the $G_{2}$ structure form $\varphi$ with respect to the ACM3S. Using that the trivial bundle $\mathcal{T}\subset TY$ can be interpreted, fibrewise, as the imaginary quaternions sitting inside the imaginary octonions, we can quickly deduce that $\varphi_{\perp}=0$. This is because the octonionic product of any two elements in $\operatorname{Im}(\mathbb{H})^{\perp}\subset\operatorname{Im}(\mathbb{O})$ will necessarily land back in $\operatorname{Im}(\mathbb{H})$, while $\varphi_{\perp}$ projects this product back into the complement $\operatorname{Im}(\mathbb{H})^{\perp}$. 
More concretely, we can consider a local frame $e^{i}$ such that $e^{4+\alpha}:=R^{\alpha}$, and $e^{1},e^{2},e^{3},e^{4}$ are chosen such that $\varphi$ takes the standard form $\varphi_{0}=(e^{12}+e^{34}+e^{56})\wedge e^{7}+e^{135}-e^{146}-e^{236}-e^{245}\,.$ (112) In such a local frame, $\varphi_{\perp}$ will be those terms of $\varphi_{0}$ in which none of $e^{5},e^{6},e^{7}$ appear, and one observes that there are no such terms. Furthermore, comparing the terms that appear in (112) with the expansion (111), we can explicitly expand $\varphi$: $\varphi=\frac{1}{3!}\epsilon_{\alpha\beta\gamma}\sigma^{\alpha}\wedge\sigma^{\beta}\wedge\sigma^{\gamma}+\sum_{\alpha}\sigma^{\alpha}\wedge\omega^{\alpha}_{\perp}\;.$ (113) The expansion of $\psi$ can be straightforwardly computed, using either a local frame and its standard form, or applying the Hodge star. Either way, one obtains $\psi={\rm d}{\rm vol}_{\perp}+\frac{1}{2}\epsilon_{\alpha\beta\gamma}\sigma^{\alpha}\wedge\sigma^{\beta}\wedge\omega^{\gamma}_{\perp}\;.$ (114) Note that we use ${\rm d}{\rm vol}_{\perp}$ to refer to the canonical section of $\Lambda^{4}(\mathcal{T}^{\perp,*})$, although there may not be a four dimensional manifold, even locally, for which ${\rm d}{\rm vol}_{\perp}$ is a volume form. Whenever $\mathcal{T}^{\perp}$ is integrable (see the next subsection), then ${\rm d}{\rm vol}_{\perp}$ is indeed the volume form for the leaves of the corresponding foliation. In the preceding, we have come across a triple of real two-forms $\omega^{\alpha}_{\perp}$, $\alpha=1,2,3$. These characterise the reduced $SU(2)$ structure of the rank four bundle $\mathcal{T}^{\perp}$. This will be familiar to readers comfortable with $SU(2)$ structures in dimension four, and we briefly review this setting in Appendix B. 
To be sure that these really are the correct differential forms, we must check that $\omega^{\alpha}_{\perp}\wedge\omega^{\beta}_{\perp}=2\delta^{\alpha\beta}{\rm d}{\rm vol}_{\perp}\;.$ (115) That this holds follows directly from the decompositions (113) and (114). Indeed, on the one hand, we have $0\neq 7{\rm d}{\rm vol}_{\varphi}=\varphi\wedge\psi=\sigma^{1}\wedge\sigma^{2}\wedge\sigma^{3}\wedge({\rm d}{\rm vol}_{\perp}+3\omega^{\alpha}_{\perp}\wedge\omega^{\alpha}_{\perp})\;,$ (116) showing that $\omega^{\alpha}_{\perp}\wedge\omega^{\alpha}_{\perp}=2{\rm d}{\rm vol}_{\perp}$. On the other hand, we have $0=\varphi\wedge\varphi\sim\sigma^{\alpha}\wedge\sigma^{\beta}\wedge\omega^{\alpha}_{\perp}\wedge\omega^{\beta}_{\perp}$ (117) so $\omega^{\alpha}_{\perp}\wedge\omega^{\beta}_{\perp}$ must vanish when $\alpha\neq\beta$. We now return to the role that the unit vector fields $R^{\alpha}$ played in the above computations. In particular, it is important that fixing the splitting (104) does not fix the ACM3S. The extra data required is a global identification of $\mathcal{T}$ with the imaginary quaternions, $\operatorname{Im}(\mathbb{H})$, along with the multiplication induced by $\varphi$. This is equivalent to a choice of three orthonormal vector fields satisfying (101). In particular, after a choice of splitting (104), there remains an interesting space of compatible almost contact metric 3-structures, which we study in Subsection 4.3. To the authors’ knowledge, this has not been explored in the literature and we provide some initial results below. In contrast, a decomposition $TY\cong\mathcal{T}_{1}\oplus\mathcal{T}_{1}^{\perp}$ into a trivial one-dimensional bundle, $\mathcal{T}_{1}$, and its orthogonal complement, $\mathcal{T}^{\perp}_{1}$, has a much less interesting space of almost contact structures. 
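Both checks can also be run numerically in the standard frame (112). The sketch below is an illustration under the assumption $R^{\alpha}=e^{4+\alpha}$ (0-based indices 4, 5, 6): it extracts $\omega^{\alpha}_{\perp}$ from $\varphi$, verifies that $\varphi_{\perp}=0$, reconstructs $\varphi$ via the expansion (113), and checks the relations (115).

```python
# Sketch: verify the decomposition (113) and the relations (115)
# in the standard frame (112); all indices are 0-based.
import numpy as np
from itertools import permutations

def perm_sign(p):
    s = 1
    for i in range(len(p)):
        for j in range(i + 1, len(p)):
            if p[i] > p[j]:
                s = -s
    return s

TERMS = {(0, 1, 6): 1, (2, 3, 6): 1, (4, 5, 6): 1,
         (0, 2, 4): 1, (0, 3, 5): -1, (1, 2, 5): -1, (1, 3, 4): -1}
phi = np.zeros((7, 7, 7))
for base, c in TERMS.items():
    for p in permutations(range(3)):
        phi[base[p[0]], base[p[1]], base[p[2]]] = perm_sign(p) * c

R = [4, 5, 6]      # R^1, R^2, R^3 = e^5, e^6, e^7
T = [0, 1, 2, 3]   # transverse directions

# omega^alpha = i_{R^alpha} phi, keeping only the fully transverse part
omega_perp = np.zeros((3, 7, 7))
for a, r in enumerate(R):
    om = phi[r].copy()
    om[R, :] = 0
    om[:, R] = 0
    omega_perp[a] = om

# phi_perp = 0: no component of phi has all three indices transverse
phi_perp = phi[np.ix_(T, T, T)]

# reconstruct phi from (113): sigma^123 + sum_alpha sigma^alpha ^ omega^alpha_perp
recon = np.zeros((7, 7, 7))
for p in permutations(range(3)):
    recon[R[p[0]], R[p[1]], R[p[2]]] = perm_sign(p)
for a, r in enumerate(R):
    om = omega_perp[a]
    for i in range(7):
        for j in range(7):
            recon[r, i, j] += om[i, j]   # sigma in the first slot
            recon[i, r, j] += om[j, i]   # sigma in the second slot
            recon[i, j, r] += om[i, j]   # sigma in the third slot

def top_wedge(om1, om2):
    # (om1 ^ om2) evaluated on (e^1, e^2, e^3, e^4)
    return sum(perm_sign(p) * om1[T[p[0]], T[p[1]]] * om2[T[p[2]], T[p[3]]]
               for p in permutations(range(4))) / 4

# Gram matrix of the transverse two-forms; (115) predicts 2 * identity
gram = np.array([[top_wedge(omega_perp[a], omega_perp[b])
                  for b in range(3)] for a in range(3)])
```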
Indeed, recall from Section 2 that the data needed for an ACMS on a $G_{2}$ structure manifold is precisely that of a unit vector field (just as an ACM3S allows us to identify $\mathcal{T}$ with the imaginary quaternions, an ACMS allows us to identify $\mathcal{T}_{1}$ with the imaginary complex numbers). The rank one bundle $\mathcal{T}_{1}$ admits precisely two such vector fields, say $R$ and $-R$. Choosing either, say $R$ for concreteness, induces an ACMS such that $\mathcal{T}_{1}={\rm Span}(R)$ and $\mathcal{T}_{1}^{\perp}={\rm Ker}\,\sigma$. To further contrast with the ACMSs of Section 2, the trivial subbundle $\mathcal{T}$ is no longer guaranteed to be tangent to a foliation, i.e. it need not be an integrable distribution. As a consequence, we cannot generally expect to find adapted coordinates in the same vein as (13). In other words, we cannot expect there to be even a local product structure of the geometry. We turn to this question now. ### 4.2 Integrability In this subsection we will investigate the conditions for the distributions $\mathcal{T}$ and $\mathcal{T}^{\perp}$ to be tangent to a foliation of the seven manifold $Y$. We will begin with the trivial bundle $\mathcal{T}$. It is a standard result in the study of foliations that this is true if and only if $\mathcal{T}$ is involutive, for which it suffices that the Lie bracket of the distinguished vector fields $R^{\alpha}$ closes. That is, there are real functions $f^{\alpha\beta}_{\;\;\;\gamma}$, $\alpha,\beta,\gamma\in\\{1,2,3\\}$ such that $[R^{\alpha},R^{\beta}]_{x}=f^{\alpha\beta}_{\;\;\;\gamma}(x)R^{\gamma}_{x}\quad\;\forall\,x\in Y\,.$ (118) Observe that the analogous condition on the rank one bundle induced by an ACMS, ${\rm Span}(R)$, is trivially satisfied since $[R,R]=0$. This is a reflection of the well-known fact that vector fields always admit integral curves or, to put it another way, that ordinary differential equations always have locally unique solutions. 
The possible failure of integrability of a higher rank distribution is related to the comparative subtlety of partial differential equations. Note also that (118) is independent of the choice of distinguished vector fields $R^{\alpha}$, which can be checked by straightforward computation. In particular, integrability is a property of the subbundle, not of its trivialisation. Condition (118) can be equivalently formulated in terms of the kernel $\mathcal{T}^{*,\perp}$. Recalling that a one-form $\xi$ is in $\Gamma(\mathcal{T}^{*,\perp})$ if and only if $i_{R^{\alpha}}\xi=0$ for each $\alpha=1,2,3$, then $\mathcal{T}$ is involutive if and only if ${\rm d}(\mathcal{T}^{*,\perp})\subset\mathcal{T}^{*,\perp}\otimes\Omega^{1}(Y)$, i.e. $i_{R^{\alpha}}i_{R^{\beta}}{\rm d}\xi=0$ for all $\xi\in\Gamma(\mathcal{T}^{*,\perp})$. We can now review the conditions under which $\mathcal{T}^{\perp}$ is integrable. Since the presentation of $\mathcal{T}^{\perp}$ is not as convenient as that of $\mathcal{T}$, it is the second of the above characterisations that is most convenient, viz. $\mathcal{T}^{\perp}$ is integrable if and only if ${\rm d}(\mathcal{T}^{*})\subset\mathcal{T}^{*}\otimes\Omega^{1}(Y)$. Using that $\mathcal{T}^{*}={\rm Span}\\{\sigma^{1},\sigma^{2},\sigma^{3}\\}$, we conclude that $\mathcal{T}^{\perp}$ is involutive if and only if ${\rm d}\sigma^{\alpha}=\sigma^{\beta}\wedge\mu_{\beta}^{\alpha}$, for one-forms $\mu_{\beta}^{\alpha}$. This is the ACM3S analogue of the condition (44) for ACMS’s. We can now ask about the relation between integrability of $\mathcal{T}$, respectively $\mathcal{T}^{\perp}$, and the $G_{2}$ structure on $Y$. Let us begin by assuming that $\mathcal{T}$ is integrable, so that $Y$ has a three dimensional foliation with leaves, say $\mathcal{L}_{x}$, such that $T_{y}\mathcal{L}_{x}=\mathcal{T}_{y}$. 
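As a toy illustration of the bracket test (118) (a generic rank-2 distribution on $\mathbb{R}^{3}$, not an example from the paper), one can let a computer algebra system check whether the brackets stay in the span:

```python
# Sketch: the involutivity test of (118) on a toy rank-2 distribution in R^3.
# The distribution spanned by d/dx and d/dy + x d/dz (a contact distribution)
# fails the test; replacing the second field by d/dy repairs it.
import sympy as sp

x, y, z = sp.symbols('x y z')
coords = (x, y, z)

def lie_bracket(u, v):
    # [u, v]^i = u^j d_j v^i - v^j d_j u^i in coordinate components
    return [sp.simplify(sum(u[j] * sp.diff(v[i], coords[j])
                            - v[j] * sp.diff(u[i], coords[j])
                            for j in range(3))) for i in range(3)]

def involutive(fields):
    # the span closes under brackets iff appending them does not raise the rank
    rank = sp.Matrix(fields).rank()
    brackets = [lie_bracket(u, v) for u in fields for v in fields]
    return sp.Matrix(fields + brackets).rank() == rank

R1 = [sp.Integer(1), sp.Integer(0), sp.Integer(0)]       # d/dx
R2 = [sp.Integer(0), sp.Integer(1), x]                   # d/dy + x d/dz
R2_flat = [sp.Integer(0), sp.Integer(1), sp.Integer(0)]  # d/dy

bad = involutive([R1, R2])        # [R1, R2] = d/dz escapes the span
good = involutive([R1, R2_flat])
```

The failing case is exactly the situation the text warns about: the distribution is perfectly smooth, yet no adapted coordinates (and hence no foliation) exist for it.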
Here, $x$ is any point on the leaf, so there is a huge degeneracy in this labelling and, unless it leads to confusion, we will omit recording this point. We observe that, by construction, $\mathcal{L}$ has a volume form which is, at each point $y\in\mathcal{L}_{x}$, the wedge of the dual one-forms ${\rm d}{\rm vol}_{\mathcal{L},y}=(\sigma^{1}\wedge\sigma^{2}\wedge\sigma^{3})_{y}$. Further, this wedge product is nothing but the $G_{2}$ structure three-form $\varphi$ restricted to $T_{y}\mathcal{L}$, as can be easily seen by choosing a local frame extending $\sigma^{\alpha}$, cf. (113). This is precisely the condition that $\mathcal{L}$ be an associative submanifold harvey1982calibrated ; mclean1998deformations . We have just shown that $\mathcal{T}$ is integrable if and only if $Y$ is foliated by associative submanifolds. In the case that our $G_{2}$ structure is closed, ${\rm d}\varphi=0$, then $\varphi$ is a calibration and the associative submanifolds are the corresponding calibrated cycles, which minimize volume within their homology class. In string theoretic compactification scenarios, calibrated cycles contribute to non-perturbative effects in the effective field theory and it is a long-standing open problem to properly account for these contributions, see Harvey:1999as ; braun2018infinitely ; Acharya:2018nbo ; Braun:2017uku for instance, or Eckhard:2018raj for a brane world-sheet perspective. For more general $G_{2}$ structures, in particular non-closed $G_{2}$ structures, the positive three-form $\varphi$ is no longer a calibration, and the volume-minimizing cycles and corresponding non-perturbative effects are, in general, less understood Joyce:2016fij . The surprising link between three-structures and associatives may offer a tool to be applied to both of these problems, and we comment on this possibility in the conclusions, Section 6. At any rate, the condition that $\mathcal{T}$ be integrable is clearly a strong one. 
As a final comment on this property, we remark that the leaf $\mathcal{L}$ is evidently parallelisable and, once we fix the ACM3S, is equipped with a canonical trivialisation. In three dimensions, every oriented manifold is parallelisable, so this does not add extra conditions on the topology of the leaf. The case for $\mathcal{T}^{\perp}$ is similar. Were $\mathcal{T}^{\perp}$ to be integrable, $Y$ would have a four-dimensional foliation, whose leaves will be denoted $\mathcal{L}^{\perp}$, $T_{x}\mathcal{L}^{\perp}=\mathcal{T}^{\perp}_{x}$. Choosing a local orthonormal frame extending $(R^{1},R^{2},R^{3})$, say by $e^{1},e^{2},e^{3},e^{4}$, we have that in this frame the volume form of $\mathcal{L}^{\perp}$ is given by ${\rm d}{\rm vol}_{x}(\mathcal{L}^{\perp})=e^{1}\wedge e^{2}\wedge e^{3}\wedge e^{4}$, which is precisely the restriction of the coassociative four form. Submanifolds of a $G_{2}$ structure manifold with volume form given by the restriction of the $G_{2}$ structure four form $\psi$ are called coassociative submanifolds. When $\psi$ is closed, it is a calibration of the seven manifold and coassociative submanifolds are the corresponding calibrated manifolds. Therefore, much of what we said on the interest in associative three-cycles can be said, mutatis mutandis, for coassociatives. It is interesting to note that the setting where $\mathcal{T}^{\perp}$ is integrable is reminiscent of the study of $G_{2}$ manifolds fibred by coassociative cycles, which has been promoted by Donaldson 2016arXiv160308391D , Baraglia 2010JGP….60.1903B and Kovalev 2005math…..11150K , among others. K3-fibrations are also relevant in M-theory/heterotic duality and have been utilised in defining a conjectural generalisation of mirror symmetry to the seven dimensional case gukov2003duality ; braun2018towards ; braun2017mirror . It would be interesting to explore how these topics interact with the ACM3Ss. 
### 4.3 Space of ACM3S We now return to the study of the space of ACM3 structures compatible with a given $G_{2}$ structure on a seven manifold $(Y,\varphi)$. We will denote this space by $\mathscr{C}$, or $\mathscr{C}(Y,\varphi)$ if context makes it necessary to emphasise the $G_{2}$ structure manifold. This is an interesting space in its own right, but it is also necessary to understand it from a physics perspective, since it is far from clear that different choices of ACM3S would yield equivalent effective field theory descriptions of physics. In other words, there may be certain choices of ACM3S that provide better descriptions of the low-energy physics. At the moment, the precise meaning of these choices for physics is unclear and we will simply give a mathematical description of this space. It seems likely that physics considerations will refine this study and we will comment on these possibilities in the conclusion, see section 6. Mathematically, this space, $\mathscr{C}$, can be seen as a very coarse invariant of the $G_{2}$ structure, essentially depending only on the topological class of the associated $G_{2}$ bundle. One indication of this interdependence is the way parallelisability is reflected by ACM3Ss. This connection comes about as follows: consider any two splittings $\mathcal{T}\oplus\mathcal{T}^{\perp}\cong TY\cong\mathcal{T}^{\prime}\oplus\mathcal{T}^{\prime,\perp}$. Observe that the sum $\mathcal{T}+\mathcal{T}^{\prime}$ must have at least one fibre (and therefore, also all fibres in an open neighbourhood) of dimension greater than three in order that the splittings be different. On the other hand, if _all_ the fibres have rank greater than three, then the manifold is in fact parallelisable. Indeed, since we assume $\mathcal{T}+\mathcal{T}^{\prime}$ is everywhere at least rank 4, we can conclude that there are at least four orthonormal vector fields, say $R^{\mu},\,\mu=1,2,3,4$. 
Regarding these as unit imaginary octonions with the $G_{2}$ cross product, we then have the basic fact that any four orthogonal, imaginary octonions generate the space of all imaginary octonions. Turning this statement around: whenever the underlying $G_{2}$ structure manifold is _not_ parallelisable, any two splittings will overlap in at least one point. We will now give a concrete expression for the space of all ACM3Ss induced by the $G_{2}$ structure. We have seen that these are in one-to-one correspondence with orthonormal three-frames satisfying (102), i.e. with $R^{3}=R^{1}\times_{\varphi}R^{2}$. Since this implies that the third orthonormal vector is completely fixed by the first two, we conclude that $\mathscr{C}$ is simply the space of all orthonormal, ordered pairs of vector fields. Fibrewise, the space of orthonormal pairs of vectors inside $T_{x}Y\cong\mathbb{R}^{7}$ is a so-called Stiefel manifold, which we denote $V_{2}(T_{x}Y)$. This space has a description as a homogeneous space $G_{2}/SU(2)$ harvey1982calibrated , expressing the fact that $G_{2}$ acts transitively on orthonormal pairs of vectors, with stabiliser $SU(2)$. Globally, there is a fibre bundle associated to the tangent bundle with typical fibre $V_{2}(\mathbb{R}^{7})$. We will denote this bundle by $\mathcal{V}_{2}(TY)$. A section of $\mathcal{V}_{2}(TY)$ is the same as an orthonormal two-frame or, equivalently in our situation, an ACM3S, and therefore the space of ACM3S is the space of sections of $\mathcal{V}_{2}(TY)$, $\mathscr{C}=\Gamma(Y,\mathcal{V}_{2}(TY))$. (Thomas’ proof that any $G_{2}$ structure manifold admits a 2-vector field 10.2307/1970439 ; thomas1969 used obstruction theory to show that this space of sections is non-empty.) This space may have non-trivial topology, including non-trivial homotopy groups, which we will investigate in some examples, see Section 5. 
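Both the generation of $\operatorname{Im}(\mathbb{O})$ by four orthonormal vectors and the dimension count behind $V_{2}(T_{x}Y)\cong G_{2}/SU(2)$ can be sanity-checked. The sketch below (an illustration using the standard form (112) with 0-based indices, not a computation from the paper) closes four orthonormal basis vectors under the $G_{2}$ cross product and confirms that they span all of $\mathbb{R}^{7}$; it also checks $\dim V_{2}(\mathbb{R}^{7})=7\cdot 2-3=11=\dim G_{2}-\dim SU(2)$.

```python
# Sketch: four orthonormal vectors generate Im(O) under the G2 cross product,
# and the Stiefel-manifold dimension count matches dim G2/SU(2).
import numpy as np
from itertools import permutations

def perm_sign(p):
    s = 1
    for i in range(len(p)):
        for j in range(i + 1, len(p)):
            if p[i] > p[j]:
                s = -s
    return s

TERMS = {(0, 1, 6): 1, (2, 3, 6): 1, (4, 5, 6): 1,
         (0, 2, 4): 1, (0, 3, 5): -1, (1, 2, 5): -1, (1, 3, 4): -1}
phi = np.zeros((7, 7, 7))
for base, c in TERMS.items():
    for p in permutations(range(3)):
        phi[base[p[0]], base[p[1]], base[p[2]]] = perm_sign(p) * c

def cross(u, v):
    return np.einsum('abc,b,c->a', phi, u, v)

# start from four orthonormal vectors and close under the cross product once
vecs = [np.eye(7)[i] for i in range(4)]
vecs += [cross(u, v) for u in vecs for v in vecs]
rank = np.linalg.matrix_rank(np.array(vecs))   # should already be 7

# dim V_2(R^7) = 7*2 - 2*3/2 = 11 = dim G2 - dim SU(2) = 14 - 3
dim_stiefel = 7 * 2 - 2 * 3 // 2
dim_quotient = 14 - 3
```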
This space of sections, $\mathscr{C}$, has a locally trivial fibre bundle structure, where the base corresponds to the space of splittings of the form (104) and the fibres are the trivialisations of this bundle. The projection map simply takes the span fibrewise. We will now make this more concrete. The space of decompositions $TY\cong\mathcal{T}\oplus\mathcal{T}^{\perp}$, with $\mathcal{T}$ a trivial bundle with associative fibres, is, fibrewise, equivalent to choosing a three-plane in the seven-dimensional tangent space with a further constraint that enables it to be regarded as a copy of the imaginary quaternions. The space of such choices at a given point is the so-called associative Grassmannian, $G(\varphi_{x})$, and is, like $V_{2}(T_{x}Y)$, a homogeneous space for $G_{2}$, $G(\varphi_{x})=G_{2}/SO(4)$ harvey1982calibrated . Once again, we can consider a fibre bundle associated to $TY$, now with typical fibre $G(\varphi_{0})$. We will call this fibre bundle $\mathcal{G}(\varphi)\rightarrow Y$ and let $\tilde{\mathscr{S}}:=\Gamma(Y,\mathcal{G}(\varphi))$ be the space of sections. By construction, a section of this bundle corresponds to a rank three subbundle of $TY$ with associative fibres, and thus a splitting $TY\cong\mathcal{T}\oplus\mathcal{T}^{\perp}$, but it is not obvious that the bundle $\mathcal{T}$ must be trivial. Thus, $\tilde{\mathscr{S}}$ is not precisely the space we need; it is too big. The subspace of trivial bundles, however, consists of a union of path-connected components of $\tilde{\mathscr{S}}$. Indeed, the bundle associated to a section in the same path component as that of a trivial bundle is, by definition, homotopic to the trivial bundle and consequently isomorphic. Therefore, the relevant subspace of all sections will be a disjoint union of certain path-connected components of the space of sections, which we will denote by $\mathscr{S}\subset\tilde{\mathscr{S}}$. 
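As a quick sanity check on the fibres involved (standard dimension counts, not computations from the paper): the associative Grassmannian $G_{2}/SO(4)$ is 8-dimensional, four dimensions fewer than the full Grassmannian of three-planes in $\mathbb{R}^{7}$, which quantifies how restrictive the associativity constraint is.

```python
# Dimension bookkeeping for the fibres (standard facts, asserted as a check)
dim_G2, dim_SO4, dim_SU2 = 14, 6, 3

dim_assoc_grassmannian = dim_G2 - dim_SO4   # G(phi_x) = G2/SO(4)
dim_grassmannian_3_7 = 3 * (7 - 3)          # Gr(3, R^7), no associativity imposed
dim_stiefel_2_7 = dim_G2 - dim_SU2          # V_2(R^7) = G2/SU(2)
```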
We can now look at the fibres of the projection $\mathscr{C}\rightarrow\mathscr{S}$, which consist of the space of orthonormal, associative trivialisations of the corresponding trivial rank three bundle. Since we assume this space is non-empty, we can fix an initial ACM3S by making an arbitrary choice of orthonormal vector fields, $(R^{1},R^{2},R^{3})$, satisfying (101), i.e. $R^{3}=R^{1}\times_{\varphi}R^{2}$. Any other trivialisation of $\mathcal{T}$ is given by an orthonormal framing, $(S^{1},S^{2},S^{3})$, and it follows that there is a unique $\Theta\in{\rm Maps}(Y,O(3))$ such that each $S^{\alpha}=\Theta^{\alpha}_{\beta}R^{\beta}$. Imposing that $(S^{1},S^{2},S^{3})$ satisfies condition (101) implies that $\Theta$ in fact takes values in $SO(3)$. We see, then, that the space of orthonormal trivialisations of $\mathcal{T}$, compatible with (101), has a free, transitive action of the topological group ${\rm Maps}(Y,SO(3))$; in other words, it is a torsor for this group. After making an arbitrary choice of basepoint in $Y$, this group factorises into a product, with one factor the space of constant $SO(3)$-valued maps and the other the basepoint-preserving maps. The first factor can be seen as an overall rotation of the vector fields and it seems best to regard this as a redundancy, which we will quotient out. More precisely, we make a choice of basepoint, $x_{0}\in Y$, which gives us a canonical homomorphism ${\rm Maps}(Y,SO(3))\cong{\rm Maps}_{*}(Y,SO(3))\times SO(3)$ where ${\rm Maps}_{*}$ denotes those maps which send the basepoint to the identity, $x_{0}\mapsto 1\in SO(3)$. Explicitly, this map is given by $\Theta\mapsto((\Theta\cdot\Theta(x_{0})^{-1}),\Theta(x_{0}))$ and can be interpreted as using a global rotation to ensure that any given ACM3S agrees with that induced by $(R^{1},R^{2},R^{3})$ at $x_{0}$. We will denote this space of trivialisations by $\mathscr{T}$, which is manifestly independent of the underlying splitting, $s\in\mathscr{S}$.
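As a toy illustration of this factorisation (ours, not part of the original discussion; all names are hypothetical), one can sample an $SO(3)$-valued map at finitely many points and split it into its based and constant factors:

```python
import numpy as np

def rot_x(t):
    # rotation about the x-axis by angle t
    c, s = np.cos(t), np.sin(t)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_z(t):
    # rotation about the z-axis by angle t
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

# a toy stand-in for Y: five sample points, with basepoint x0 = 0
points = list(range(5))
x0 = 0

# an arbitrary SO(3)-valued map Theta on the sample points
Theta = {x: rot_x(0.3 * x) @ rot_z(0.7 + 0.2 * x) for x in points}

# factorise as Theta = (Theta . Theta(x0)^{-1}) . Theta(x0);
# for rotations the inverse is the transpose
const_part = Theta[x0]
based_part = {x: Theta[x] @ const_part.T for x in points}

# the based factor sends the basepoint to the identity ...
assert np.allclose(based_part[x0], np.eye(3))
# ... and the pointwise product reconstructs the original map
assert all(np.allclose(based_part[x] @ const_part, Theta[x]) for x in points)
```

The constant factor is exactly the overall rotation that is quotiented out in the text.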
We will now show that the projection $p:\mathscr{C}\rightarrow\mathscr{S}$ is indeed locally trivial. This argument is reminiscent of the standard construction of the universal bundle over the classifying space $BU(n)$, see for instance hatcher2016vector ; Magill1103965 . The topology that we take for both spaces is that induced by the compact-open topology on the space of all maps. Focusing on the total space, $\mathscr{C}$, for concreteness, the compact-open topology has a subbase given by the sets $V(K,U)=\\{f:Y\rightarrow\mathcal{V}_{2}(TY)\,|\,f(K)\subset U\\}$ for $K\subset Y$ compact and $U\subset\mathcal{V}_{2}(TY)$ open. An open set in the space of sections is then the intersection of an open set in the space of all maps with the set of sections. The same definitions apply, mutatis mutandis, for the base space, $\mathscr{S}$. In particular, for an arbitrary section $s\in\mathscr{S}$, we can take an open neighbourhood to be one of the subbasis generators, $V(K,U)$. We need $U$ to be some open neighbourhood such that each $s^{\prime}\in U$ is, fibrewise, a three-plane that forms a graph over $s(x)$. In other words, the plane corresponding to $s^{\prime}(x)$ is just a small rotation of the initial three-plane, $s(x)$, for all $x\in Y$. We can take $K\subset Y$ to be any nonempty, compact subset, possibly $Y$ itself. The claim is that $p^{-1}(V(K,U))\cong V(K,U)\times{\rm Maps}(Y,SO(3))$. Indeed, if we fix a trivialisation over $s$, then by projection and Gram–Schmidt, we are able to continuously assign an orthonormal trivialisation to each section in this neighbourhood and from there extract the isomorphism.
We therefore have a fibre bundle, $\mathscr{T}\rightarrow\mathscr{C}\rightarrow\mathscr{S}$, or more explicitly ${\rm Maps}(Y,SO(3))\rightarrow\Gamma(Y,\mathcal{V}_{2}(TY))\rightarrow\Gamma(Y,\mathcal{G}(\varphi))\,.$ (119) We do not know when, if ever, this space is in fact a trivial product, but we will show an example, 5.4, in which it is non-trivially fibred. More generally, it is natural to expect that ${\rm Maps}(Y,SO(3))$ has infinitely many components, because both $Y$ and $SO(3)$ have freely generated summands in third cohomology; on the other hand, $G_{2}/SU(2)$ has only torsion, so it is plausible that the space of sections of $\mathcal{V}_{2}(TY)$ has only finitely many components. This leads us to expect that the fibre bundle is generically non-trivial, although this question is not settled here. ## 5 Examples In this section we will present example geometries that serve to illustrate concepts related to almost contact structures. The selected examples admit $G_{2}$ structures with different properties, and some have been used in physics for the construction of supersymmetric solutions of string or M-theory. We also explore the connection between ACM3S and associative submanifolds in a class of non-compact examples with $G_{2}$ holonomy. ### 5.1 Heterotic $G_{2}$ systems on nilmanifolds Parallelizable nilmanifolds provide explicit examples of $G_{2}$ structure manifolds that can be analysed in great detail using their left-invariant one-forms Fernandez:2008wla ; delBarco:2020ddt . Here we will recapitulate, in some detail, a particular example from Ref. Fernandez:2008wla : the nilmanifold $N(3,1)$, which solves the heterotic $G_{2}$ system presented in section 3.1 and which appears particularly apt to illustrate that almost contact structures give added insight into the physical properties of string compactifications. Let $H(3,1)$ denote the seven-dimensional generalised Heisenberg group, consisting of nilpotent real matrices of the form (cf.
page 12 of Fernandez:2008wla ) $H(3,1)=\left\\{\left(\begin{array}[]{lllll}1&x_{1}&x_{2}&x_{3}&z\\\ 0&1&0&0&y_{1}\\\ 0&0&1&0&y_{2}\\\ 0&0&0&1&y_{3}\\\ 0&0&0&0&1\\\ \end{array}\right)|x_{i},y_{i},z\in\mathbb{R},1\leq i\leq 3\right\\}\,.$ (120) From this, we may construct a compact nilmanifold $N(3,1)=\Gamma(3,1)\backslash H(3,1)$, where $\Gamma(3,1)\subset H(3,1)$ consists of integer matrices of the above form. Like all nilmanifolds, $N(3,1)$ is parallelizable, and its geometric features can be derived using a basis of left-invariant one-forms $e^{a}$, $a=1,\dots,7$, on $H(3,1)$ which satisfy the structure equations ${\rm d}e^{i}=0\;,\;1\leq i\leq 6\;\;,\;\;{\rm d}e^{7}=ae^{12}+be^{34}+ce^{56}\;,$ (121) where we use the abbreviation $e^{ij}=e^{i}\wedge e^{j}$ and $a,b,c$ are real non-zero constants. Also, in this section, we will not distinguish between ${\rm d}_{7}$ and ${\rm d}$, since they coincide on $N(3,1)$. The structure equations clearly show that $N(3,1)$ can be viewed as a twisted torus $S^{1}\hookrightarrow N(3,1)\rightarrow T^{6}\;,$ and that $e^{7}$ acts as a connection that encodes the twisting of $S^{1}$ over the six-torus $T^{6}$. In Ref. Fernandez:2008wla , Lemma 5.5 shows that $N(3,1)$ admits a three-parameter family of $G_{2}$ structures, defined by $\varphi=(e^{12}+e^{34}+e^{56})\wedge e^{7}+e^{135}-e^{146}-e^{236}-e^{245}$ (122) with associated diagonal metric $g_{\varphi}=e^{1}\otimes e^{1}+...+e^{7}\otimes e^{7}$. An explicit calculation shows that the Hodge dual four-form is $\psi=*_{\varphi}\varphi=e^{3456}+e^{1256}+e^{1234}-e^{2467}+e^{2357}+e^{1457}+e^{1367}\;.$ Moreover, ${\rm d}\psi=0\;,\;{\rm d}\varphi=(a+b)e^{1234}+(a+c)e^{1256}+(b+c)e^{3456}$ and thus the torsion is contained in the classes $\tau_{0}$ and $\tau_{3}$.
Explicitly, we have $\tau_{0}=\frac{2}{7}(a+b+c)\;,\;\text{and}\;\;\tau_{3}=*{\rm d}\varphi-\tau_{0}\varphi\;.$ (123) Note that the case where $c=-(a+b)$ leads to a vanishing $\tau_{0}$ and hence a Minkowski solution to the heterotic $G_{2}$ system of section 3.1. Furthermore, in this case, $T$ simplifies to $T=-\tau_{3}=-(a+b)e^{567}+be^{347}+ae^{127}\;.$ (124) We will restrict to this case in the following. #### 5.1.1 Solving the heterotic $G_{2}$ system In order to solve the heterotic $G_{2}$ system of section 3.1, the nilmanifold $Y$ must admit $G_{2}$ instanton connections on $TY$ and $V$. Ref. Fernandez:2008wla shows that this can be accomplished on $N(3,1)$, provided that $c=-(a+b)$. This is seen as follows: recall that all nilmanifolds have Levi-Civita connection 1-forms completely specified by their structure constants $f^{i}{}_{jk}$: $(\kappa^{LC})^{i}_{j}=\frac{1}{2}(f^{i}{}_{jk}-f^{k}{}_{ij}+f^{j}{}_{ki})e^{k}$ (125) which can be read off from (121) using ${\rm d}e^{i}=f^{i}{}_{jk}e^{jk}$. Moreover, $\nabla^{+}=\nabla^{LC}+\frac{1}{2}T$ is the unique $G_{2}$ compatible connection, and has associated connection one-forms Fernandez:2008wla $(\kappa^{+})^{1}_{2}=-ae^{7}\;\;,\;\;(\kappa^{+})^{3}_{4}=-be^{7}\;\;,\;\;(\kappa^{+})^{5}_{6}=(a+b)e^{7}\;\;.\;\;$ (126) As always, the curvature two-forms are given by $(\Omega)^{i}_{j}={\rm d}(\kappa)^{i}_{j}+(\kappa)^{i}_{k}\wedge(\kappa)^{k}_{j}$, so $(\Omega^{+})^{1}_{2}=-a{\rm d}e^{7}\;,\;(\Omega^{+})^{3}_{4}=-b{\rm d}e^{7}\;,\;(\Omega^{+})^{5}_{6}=(a+b){\rm d}e^{7}\;.$ (127) Now, as discussed in section 3.1, supersymmetry requires that $\nabla^{+}$ is a $G_{2}$ instanton, and so should satisfy (65). That it does follows from ${\rm d}e^{7}\wedge\psi=(a+b+c)e^{123456}=0\;,$ since $c=-(a+b)$. Thus, $a+b+c=0$ implies that $\nabla^{+}$ is a $G_{2}$ instanton. In fact, the same holds for any other connection with one-form components proportional to $e^{7}$.
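The exterior-algebra computations above are mechanical and easy to check by machine. The following is a minimal sketch (ours, not from Ref. Fernandez:2008wla ) that encodes the structure equations (121) in a small wedge-product routine and verifies ${\rm d}\psi=0$, the stated expression for ${\rm d}\varphi$, the torsion (124), and the instanton condition ${\rm d}e^{7}\wedge\psi=0$ once $c=-(a+b)$:

```python
import sympy as sp

a, b, c = sp.symbols('a b c')

def wedge(f, g):
    # wedge product of forms stored as {ascending index tuple: coefficient}
    out = {}
    for I, ci in f.items():
        for J, cj in g.items():
            if set(I) & set(J):
                continue  # a repeated one-form gives zero
            K = I + J
            inv = sum(1 for x in range(len(K)) for y in range(x + 1, len(K))
                      if K[x] > K[y])
            out[tuple(sorted(K))] = out.get(tuple(sorted(K)), 0) \
                + (-1) ** inv * ci * cj
    return {k: sp.simplify(v) for k, v in out.items() if sp.simplify(v) != 0}

# structure equations (121): d e^i = 0 for i <= 6, d e^7 = a e^12 + b e^34 + c e^56
de7 = {(1, 2): a, (3, 4): b, (5, 6): c}

def d(f):
    # exterior derivative on monomials via the Leibniz rule; only e^7 is not closed
    out = {}
    for I, ci in f.items():
        for k, idx in enumerate(I):
            if idx != 7:
                continue
            for key, v in wedge({I[:k] + I[k + 1:]: (-1) ** k * ci}, de7).items():
                out[key] = out.get(key, 0) + v
    return {k: sp.simplify(v) for k, v in out.items() if sp.simplify(v) != 0}

def star(f, n=7):
    # Hodge star on monomials of an orthonormal coframe: complementary indices,
    # weighted by the sign of the permutation (I, I^c) of (1, ..., n)
    out = {}
    for I, ci in f.items():
        J = tuple(i for i in range(1, n + 1) if i not in I)
        K = I + J
        inv = sum(1 for x in range(len(K)) for y in range(x + 1, len(K))
                  if K[x] > K[y])
        out[J] = (-1) ** inv * ci
    return out

phi = {(1, 2, 7): 1, (3, 4, 7): 1, (5, 6, 7): 1,                    # (122)
       (1, 3, 5): 1, (1, 4, 6): -1, (2, 3, 6): -1, (2, 4, 5): -1}
psi = {(3, 4, 5, 6): 1, (1, 2, 5, 6): 1, (1, 2, 3, 4): 1,
       (2, 4, 6, 7): -1, (2, 3, 5, 7): 1, (1, 4, 5, 7): 1, (1, 3, 6, 7): 1}

dphi, dpsi = d(phi), d(psi)
assert dpsi == {}
assert dphi == {(1, 2, 3, 4): a + b, (1, 2, 5, 6): a + c, (3, 4, 5, 6): b + c}

# in the Minkowski case c = -(a+b): tau_0 = 0 and T = -tau_3 = -*dphi, cf. (124)
sub = {c: -(a + b)}
T = {k: sp.simplify(-v.subs(sub)) for k, v in star(dphi).items()}
assert T == {(5, 6, 7): sp.expand(-(a + b)), (3, 4, 7): b, (1, 2, 7): a}

# the G2 instanton condition holds for any curvature proportional to de^7
instanton = {k: sp.simplify(v.subs(sub)) for k, v in wedge(de7, psi).items()}
assert all(v == 0 for v in instanton.values())
```

The dictionary representation of monomials, with signs fixed by counting inversions, is enough here because the coframe is orthonormal and left-invariant.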
In the same manner, we may construct a vector bundle connection $A$ satisfying (65): any assignment of connection one-forms $(\kappa^{A})^{i}_{j}\sim e^{7}$ will do. As shown in Ref. Fernandez:2008wla , the nilmanifold thus admits a 3-parameter family of $G_{2}$ instanton connections $A^{\lambda,\mu,\tau}$. Finally, with this choice of connections, the Bianchi identity associated to the anomaly cancellation condition (67) admits a solution as long as $(\lambda,\mu,\tau)\neq(0,0,0)$ and $\lambda^{2}+\mu^{2}+\tau^{2}<a^{2}+b^{2}+c^{2}=2(a^{2}+b^{2}+ab)$ Fernandez:2008wla . #### 5.1.2 Left-invariant contact structure and N=2 supersymmetry The left-invariant one-forms $e^{i}$ have dual vectors $E_{i}$ defined by $e^{j}(E_{i})=\delta_{i}^{j}\;.$ Having seven globally defined, linearly independent vector fields is the hallmark of a parallelizable seven-dimensional manifold. Picking any of these vectors defines an ACMS on $N(3,1)$, and any three-frame defines an ACM3S. In this subsection, we will determine the properties of the ACMS defined by $R=E_{7}\mbox{ and }\sigma=e^{7}\;.$ By definition, we have $R(\sigma)=1$. Moreover, by (121) we see that ${\rm d}\sigma$ is purely transverse, so $\rm{Ker}(\sigma)$ is non-integrable and, therefore, not tangent to a 6-dimensional foliation (cf. section 2.2). Indeed, we have that $\sigma\wedge{\rm d}\sigma\wedge{\rm d}\sigma\wedge{\rm d}\sigma=-3!a\,b\,(a+b)\,e^{1234567}\neq 0\;.$ (128) Thus, the almost contact structure induced by $\sigma$ is in fact a contact structure. The fundamental two-form is $\omega=i_{R}(\varphi)=ae^{12}+be^{34}-(a+b)e^{56}={\rm d}\sigma\;.$ This contact structure furthermore leads to enhanced supersymmetry. Recall, from section 3.6, that when $R$ is covariantly constant, there exist two covariantly constant spinors on $Y$. We can phrase the covariant constancy of $R$ as a question about the connection one-forms.
Indeed, the connection one-form is defined by $(\kappa^{+})_{ji}(E_{k}):=g(\nabla^{+}_{E_{k}}E_{j},E_{i})$ (129) and, consequently, $\nabla^{+}E_{7}=0$ if and only if $(\kappa^{+})_{7i}\equiv 0$. That this is true follows immediately from the discussion leading to (126). We may go on to check the compatibility of the $G_{2}$ instanton connections and $H$-flux, constructed in the previous subsection, with the possible existence of $N=2$ supersymmetry enhancement induced by the covariantly constant vector field, $E_{7}$. The connection one-forms clearly lack a transverse piece, whereas the associated curvature is completely transverse, i.e. $(\kappa^{A})^{i}_{j}\sim\sigma\;,\;(\Omega^{A})^{i}_{j}\sim ae^{12}+be^{34}-(a+b)e^{56}\;.$ We thus read off that $F_{0}=0$, as we have argued in section 3.6 should be the case for $N=2$ enhanced solutions. Likewise, the anomaly cancellation condition requires, for $N=2$ SUSY, that $H$ is completely transverse up to an exact contribution. Indeed, the torsion (124) for this nilmanifold example lacks a transverse piece, and has $T_{0}={\rm d}\sigma={\rm d}\Sigma\;.$ We thus conclude that the $N(3,1)$ solution to the $N=1$ heterotic $G_{2}$ system, that was constructed in Ref. Fernandez:2008wla , in fact preserves $N=2$ supersymmetry. We will discuss this further in the next subsection. #### 5.1.3 Left-invariant almost contact 3-structures In the preceding subsection we showed that $N(3,1)$ admits a left-invariant CS which is associated to two covariantly constant spinors, leading to $N=2$ supersymmetry. A natural question to pose is whether the ACS associated to the remaining left-invariant one-forms, $e^{i}$ for $i=1,\dots,6$, give a further enhancement of supersymmetry. To answer this question we are led to construct, and explore, the space of left-invariant ACM3S on this nilmanifold.
As discussed in section 4.3, since we are dealing with a parallelizable manifold, we expect several distinguishable ACM3S, corresponding to the ${7\choose 2}$ different ways of choosing 2-frames among the seven left-invariant vectors $E_{i}$. As a first remark, we have ${\rm d}e^{i}=0$ for $i\neq 7$. Consequently, the ACS associated to $e^{i}$ for $i\neq 7$ are clearly not contact structures. Thus $N(3,1)$ does not admit a left-invariant contact 3-structure (see the discussion at the end of section 1.1.2). Let us then start by identifying $R^{1}=E_{7}$. Then, we may choose $R^{2}=E_{1}$, say, after which $R^{3}$ will be determined by (101) to be $R^{3}=E_{2}$. Choosing instead $R^{2}=E_{3}$ (or $E_{5}$) gives $R^{3}=E_{4}$ (respectively $E_{6}$), and if we instead pick $R^{2}=E_{2n}$ we find the same trivialisations up to an overall sign. The upshot is that, once $R^{1}=E_{7}$ is chosen, there are three inequivalent trivialisations $(R^{1},R^{2},R^{3})=(E_{7},E_{1},E_{2})$, $(E_{7},E_{3},E_{4})$, $(E_{7},E_{5},E_{6})$. As discussed in section 4.3, these partially overlapping ACM3S are a consequence of the parallelizability of the nilmanifold. We will see below that these different trivialisations all have the same qualitative properties. For concreteness, let us study the trivialisation $(R^{1},R^{2},R^{3})=(E_{7},E_{1},E_{2})$. We may now determine whether the associated splitting $TY=\mathcal{T}\oplus\mathcal{T}^{\perp}$ leads to involutive $\mathcal{T}$ and $\mathcal{T}^{\perp}$.
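The cross-product relations underlying these trivialisations can be verified numerically from the monomials of (122), since $g(u\times_{\varphi}v,w)=\varphi(u,v,w)$ in an orthonormal frame. A minimal sketch (ours; the helper names are hypothetical):

```python
import numpy as np
from itertools import permutations

# non-zero components of the three-form (122) in the orthonormal frame E_1..E_7
MONOMIALS = {(1, 2, 7): 1, (3, 4, 7): 1, (5, 6, 7): 1,
             (1, 3, 5): 1, (1, 4, 6): -1, (2, 3, 6): -1, (2, 4, 5): -1}

# fill the totally antisymmetric tensor phi_{ijk}
phi = np.zeros((7, 7, 7))
for (i, j, k), coeff in MONOMIALS.items():
    for p in permutations((i, j, k)):
        inv = sum(1 for x in range(3) for y in range(x + 1, 3) if p[x] > p[y])
        phi[p[0] - 1, p[1] - 1, p[2] - 1] = (-1) ** inv * coeff

def E(i):
    v = np.zeros(7)
    v[i - 1] = 1.0
    return v

def cross(u, v):
    # (u x v)^k = phi_{ijk} u^i v^j, indices raised with the flat metric
    return np.einsum('ijk,i,j->k', phi, u, v)

# the three trivialisations found once R^1 = E_7 is chosen
assert np.allclose(cross(E(7), E(1)), E(2))
assert np.allclose(cross(E(7), E(3)), E(4))
assert np.allclose(cross(E(7), E(5)), E(6))
```

The same tensor also confirms $E_{1}\times_{\varphi}E_{3}=E_{5}$, the frame considered below.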
As discussed in Section 4.2, $\mathcal{T}$ is involutive if for any one-form $\xi\in\mathcal{T}^{*,\perp}$ we have $i_{R^{\alpha}}i_{R^{\beta}}{\rm d}\xi=0\;.$ This follows directly from the fact that $\mathcal{T}^{*,\perp}=\text{Span}\\{e^{3},e^{4},e^{5},e^{6}\\}$, and ${\rm d}e^{i}=0\;\;\forall\;\;i\neq 7.$ In contrast, it is evident from the discussion around equation (128) that $\mathcal{T}^{\perp}$ fails to be involutive: for instance, ${\rm d}\sigma^{1}(E_{3},E_{4})=b\neq 0\;,$ so the bracket $[E_{3},E_{4}]$ has a component along $R^{1}$. This conclusion clearly holds also for the ACM3S given by $(R^{1},R^{2},R^{3})=(E_{7},E_{3},E_{4})$ or $(E_{7},E_{5},E_{6})$. We thus conclude that these ACM3S are associated with three-dimensional foliations of $Y$, but no four-dimensional foliation. The leaves of the three-dimensional foliation are associative submanifolds (however, they are not volume-minimizing since we lack a calibrating three-form). Let us now explore an ACM3S that does not include $E_{7}$ in the three-frame $(R^{1},R^{2},R^{3})$. This leads to rather different properties. For example, let us take $(R^{1},R^{2},R^{3})=(E_{1},E_{3},E_{5})$. The associated splitting $TY=\mathcal{T}\oplus\mathcal{T}^{\perp}$ then leads to involutive $\mathcal{T}$ and $\mathcal{T}^{\perp}$. Clearly, since $\mathcal{T}^{*,\perp}=\text{Span}\\{e^{2},e^{4},e^{6},e^{7}\\}$, for any $\xi\in\Gamma(\mathcal{T}^{*,\perp})$ we have either ${\rm d}\xi=0$ or $i_{R^{\alpha}}i_{R^{\beta}}{\rm d}\xi\sim i_{R^{\alpha}}i_{R^{\beta}}{\rm d}e^{7}=0$ showing that $\mathcal{T}$ is involutive. Moreover, $\mathcal{T}^{\perp}$ is involutive, since $(\sigma^{1},\sigma^{2},\sigma^{3})=(e^{1},e^{3},e^{5})$ are all closed. Thus, with this choice of ACM3S, we see that $Y$ admits both a three- and a four-dimensional foliation. The leaves of these foliations are, respectively, associative and coassociative submanifolds. Furthermore, since ${\rm d}\psi=0$, the leaves of the latter foliation correspond to calibrated submanifolds.
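Since the frame is left-invariant, involutivity reduces to finite-dimensional linear algebra on the Lie brackets, which can be read off from the structure equations (121) via $e^{7}([X,Y])=-{\rm d}e^{7}(X,Y)$. A minimal sketch (ours; the bracket sign convention is an assumption fixed by this formula):

```python
import sympy as sp

a, b, c = sp.symbols('a b c')

def bracket(i, j):
    # [E_i, E_j], read off from (121): only [E_1,E_2], [E_3,E_4], [E_5,E_6]
    # are non-zero, each proportional to E_7
    coeffs = {(1, 2): -a, (3, 4): -b, (5, 6): -c}
    v = sp.zeros(7, 1)
    if (i, j) in coeffs:
        v[6] = coeffs[(i, j)]
    elif (j, i) in coeffs:
        v[6] = -coeffs[(j, i)]
    return v

def in_span(v, basis):
    # is the bracket contained in span{E_i : i in basis}?
    return all(sp.simplify(v[i - 1]) == 0 for i in range(1, 8) if i not in basis)

T_frame = [7, 1, 2]     # the distribution spanned by (R^1,R^2,R^3) = (E_7,E_1,E_2)
T_perp = [3, 4, 5, 6]   # its orthogonal complement

# T is involutive: every bracket of its frame vectors stays in the span ...
assert all(in_span(bracket(i, j), T_frame) for i in T_frame for j in T_frame)
# ... while T-perp is not: [E_3, E_4] sticks out along E_7
assert not in_span(bracket(3, 4), T_perp)
# the alternative frame (E_1, E_3, E_5) spans an involutive distribution too
assert all(in_span(bracket(i, j), [1, 3, 5, 7]) for i in (1, 3, 5) for j in (1, 3, 5))
```

The last assertion uses that the only non-vanishing brackets among $E_{1},E_{3},E_{5}$ are zero, so the span closes trivially.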
Supersymmetry enhancement, however, is insensitive to the choice among these different ACM3Ss on $N(3,1)$. Among the left-invariant basis vectors, only $E_{7}$ is covariantly constant. This follows directly from the definition of the connection one-form, (129), in conjunction with (126). Thus, none of the left-invariant ACM3S under study imply an enhancement of supersymmetry to $N=4$. Just as in section 3.6, where we saw that a reduction of structure group need not be compatible with a given connection, the parallelizability of the nilmanifold does not imply that supersymmetry is maximally enhanced. Barring extra covariantly constant vector fields on this nilmanifold, the holonomy group stays $SU(3)$. Finally, before closing this discussion, let us remark that we have not exhausted the possible almost contact structures. Our discussion is limited to left-invariant ACM3Ss, and even here one can imagine constructing ACM3S using position-dependent linear combinations of the left-invariant forms, which may lead to different conclusions. ### 5.2 Barely $G_{2}$ examples In this section we demonstrate the features of an $SU(2)$ structure in a special class of $G_{2}$ structure manifolds known as barely $G_{2}$ manifolds, joyce1996:2 ; Grigorian:2009nx ; Harvey:1999as . These manifolds are constructed from Calabi-Yau threefolds endowed with a real structure. For our purposes, we will focus on the subclass where the real structure is freely acting and where the initial Calabi-Yau manifold has vanishing Euler characteristic. Although this is expected to be an extremely small class (there are only two complete intersection Calabi-Yau manifolds with these properties, Grigorian:2009nx , with Betti numbers (15,15) and (19,19)), the restriction is purely for convenience. We will begin by reviewing the construction and key features of barely $G_{2}$ manifolds, before moving on to $SU(2)$ structures.
The starting point for a barely $G_{2}$ manifold is a Calabi-Yau threefold, $Z$, endowed with a real structure, that is, an antiholomorphic involution, $\zeta$. To avoid having to resolve singularities, we will assume that $\zeta$ is freely acting. This real structure is used to endow $Z\times S^{1}$ with a $\mathbb{Z}_{2}$ action, $\hat{\zeta}=\zeta\times(-1)$, where $(-1)$ acts on $S^{1}\subset\mathbb{C}$ via $e^{i\theta}\mapsto e^{-i\theta}$, i.e. reflection about the real axis. The barely $G_{2}$ manifold, $Y$, is the quotient by this action, $Y=(Z\times S^{1})/(\zeta\times(-1))\,.$ For us, it will be convenient to give an alternative definition for $Y$: $Y=(Z\times[0,1])/\sim$ (130) where we identify $(z,0)\sim(\zeta(z),1)$ and $\partial_{t}|_{(z,0)}\sim-\partial_{t}|_{(\zeta(z),1)}$. Observe that this space has only a single covariantly constant spinor, but nevertheless the holonomy is a proper subgroup of $G_{2}$, being $SU(3)\rtimes\mathbb{Z}_{2}$. The $G_{2}$ structure forms are induced by the Calabi-Yau Kähler form, $\omega$, and holomorphic three-form $\Omega$: $\displaystyle\varphi$ $\displaystyle=\omega\wedge{\rm d}t+\operatorname{Re}\Omega$ (131) $\displaystyle\psi$ $\displaystyle=\tfrac{1}{2}\omega\wedge\omega-{\rm d}t\wedge\operatorname{Im}\Omega\,.$ (132) Observe that these product forms indeed survive the quotient, since: $\displaystyle\zeta^{*}\Omega$ $\displaystyle=\bar{\Omega}$ (133) $\displaystyle\zeta^{*}\omega$ $\displaystyle=-\omega$ (134) $\displaystyle(-1)^{*}{\rm d}t$ $\displaystyle=-{\rm d}t\,.$ (135) Now, by assumption, $Z$ has vanishing Euler characteristic, and consequently the underlying smooth manifold admits a nowhere vanishing vector field. We will assume that we are given such a vector field, say $v$, and, without loss of generality, that it is of unit norm and invariant under the real structure, $\zeta^{*}v=v$. This will guarantee that $v$ induces a vector field on the barely $G_{2}$ manifold, $Y$, say $R^{1}$.
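The sign bookkeeping behind (133)-(135) is easy to verify in the flat local model, where the real structure acts by complex conjugation. The sketch below (ours) assumes the standard flat expressions $\omega=e^{12}+e^{34}+e^{56}$ and $\Omega=(e^{1}+ie^{2})\wedge(e^{3}+ie^{4})\wedge(e^{5}+ie^{6})$, so that conjugation flips $e^{2},e^{4},e^{6}$ while the circle reflection flips $e^{7}={\rm d}t$:

```python
# indices whose coordinate one-forms pick up a minus sign under the involution
FLIP = {2, 4, 6, 7}

def pullback(form):
    # pull back a monomial form under the sign-flip involution
    return {I: coeff * (-1) ** len(set(I) & FLIP) for I, coeff in form.items()}

omega = {(1, 2): 1, (3, 4): 1, (5, 6): 1}
re_om = {(1, 3, 5): 1, (1, 4, 6): -1, (2, 3, 6): -1, (2, 4, 5): -1}
im_om = {(1, 3, 6): 1, (1, 4, 5): 1, (2, 3, 5): 1, (2, 4, 6): -1}
# phi = omega ^ dt + Re(Omega); psi = (1/2) omega^2 - dt ^ Im(Omega)
phi = {(1, 2, 7): 1, (3, 4, 7): 1, (5, 6, 7): 1, **re_om}
psi = {(1, 2, 3, 4): 1, (1, 2, 5, 6): 1, (3, 4, 5, 6): 1,
       (2, 4, 6, 7): -1, (2, 3, 5, 7): 1, (1, 4, 5, 7): 1, (1, 3, 6, 7): 1}

assert pullback(omega) == {I: -v for I, v in omega.items()}    # zeta* omega = -omega
assert pullback(re_om) == re_om                                # zeta* Re(Omega) = Re(Omega)
assert pullback(im_om) == {I: -v for I, v in im_om.items()}    # zeta* Im(Omega) = -Im(Omega)
assert pullback(phi) == phi and pullback(psi) == psi           # phi and psi descend
```

Each monomial of $\varphi$ and $\psi$ contains an even number of flipped indices, which is exactly why the product forms survive the quotient.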
On the other hand, if $I$ denotes the complex structure on $Z$, then $w:=Iv$ is another unit vector field, but it is not invariant under the real structure, being instead acted on by a sign, $\zeta^{*}w=-w$. Similarly, the unit tangent vector field on the interval, $\partial_{t}$, is not invariant, so it is not well-defined on equivalence classes. The three vector fields, $v,w,\partial_{t}$, obviously induce vector fields on the product $Z\times[0,1]$, but as observed above, neither $w$ nor $\partial_{t}$ will survive the quotient individually. It is, however, possible to form invariant combinations using both vector fields, in particular by rotating through the two-frame they span. More precisely, consider $R^{2}=\cos(\pi t)w-\sin(\pi t)\partial_{t}\,,$ (136) which is well-defined on the quotient since $R^{2}_{(z,0)}=w_{z}=R^{2}_{(\zeta(z),1)}\,.$ Note, further, that $R^{1}$ is everywhere orthogonal to $R^{2}$, so the pair is everywhere linearly independent. The third vector field in the three-frame necessary to define an $SU(2)$ structure is obtained with the $G_{2}$ induced cross product: $R^{3}:=R^{1}\times_{\varphi}R^{2}$ (137) though it is in fact easiest to first compute the dual one-form. Indeed, by definition $\sigma^{3}:=g(R^{3},-)$ can be computed to be $\sigma^{3}=i_{R^{2}}i_{R^{1}}\varphi=\cos(\pi t){\rm d}t-\sin(\pi t)g(w,-)\,.$ (138) This is now easy to dualise back to a vector field, and we obtain $R^{3}=-\sin(\pi t)w+\cos(\pi t)\partial_{t}\,.$ (139) We have thus obtained three linearly independent vector fields that determine the $SU(2)$ structure, and we can now directly compute the induced structure forms. Before doing so, we remark that these vector fields cannot be covariantly constant, since we have assumed that the initial Calabi-Yau manifold had full $SU(3)$ holonomy. As a consequence, the $G_{2}$ connection does not descend to a connection of the reduced $SU(2)$ structure.
Each vector induces an $SU(3)$ structure, for which we can construct a two-form $\omega_{i}:=i_{R^{i}}\varphi$ and three-form $\Omega_{-}^{i}=i_{R^{i}}\psi$, a dual one-form $\sigma_{i}=g(R^{i},-)$ and endomorphism $J^{i}$ such that $g(J^{i}-,-)=\omega_{i}$. We will focus on computing $\omega_{i}$ and $\Omega_{-}^{i}$, since we do not need a metric for this data. The computations are straightforward and we obtain: $\displaystyle\omega_{1}$ $\displaystyle=-w^{\\#}\wedge{\rm d}t+i_{v}\operatorname{Re}\Omega$ (140) $\displaystyle\omega_{2}$ $\displaystyle=-\sin(\pi t)\omega+\cos(\pi t)(v^{\\#}\wedge{\rm d}t+i_{v}\operatorname{Im}\Omega)$ (141) $\displaystyle\omega_{3}$ $\displaystyle=\cos(\pi t)\omega-\sin(\pi t)(v^{\\#}\wedge{\rm d}t+i_{v}\operatorname{Im}\Omega)$ (142) $\displaystyle\Omega_{-}^{1}$ $\displaystyle=-w^{\\#}\wedge\omega+{\rm d}t\wedge i_{v}\operatorname{Im}\Omega$ (143) $\displaystyle\Omega_{-}^{2}$ $\displaystyle=\sin(\pi t)\operatorname{Im}\Omega+\cos(\pi t)(v^{\\#}\wedge\omega+{\rm d}t\wedge i_{v}\operatorname{Re}\Omega)$ (144) $\displaystyle\Omega_{-}^{3}$ $\displaystyle=-\cos(\pi t)\operatorname{Im}\Omega-\sin(\pi t)(v^{\\#}\wedge\omega+{\rm d}t\wedge i_{v}\operatorname{Re}\Omega)$ (145) where we have introduced $v^{\\#}=g(v,-)$ and $w^{\\#}=g(w,-)$, and recall that $\omega,\Omega$ are the Calabi-Yau structure forms. The particular vector fields that we give here can be viewed as rotating through the local frame $(v,w,\partial_{t})$, a fact that is also apparent in the structure forms. On a generic barely $G_{2}$ manifold we would not expect to find vector fields analogous to $v$ and $w$, but even so we are guaranteed to find an ACM3S. One would expect that any such three-framing would intertwine the Calabi-Yau and interval directions much more intricately than exhibited here. We can now consider the space of trivialisations that are compatible with the splitting induced by $(R^{1},R^{2},R^{3})$.
We will show that there are at least two connected components, which we distinguish by the map induced on first homology. As is well-known, the first homology group of $SO(3)$ is $H_{1}(SO(3),\mathbb{Z})\cong\mathbb{Z}_{2}$, with generator induced by the generator of $SO(2)\cong S^{1}$ under the inclusion $SO(2)\hookrightarrow SO(3)$, i.e. $\displaystyle\rho:I\rightarrow SO(3)\,,$ $\displaystyle\rho(t)=\left(\begin{array}[]{ccc}1&0&0\\\ 0&\cos(2\pi t)&-\sin(2\pi t)\\\ 0&\sin(2\pi t)&\cos(2\pi t)\end{array}\right).$ (149) Using standard techniques in algebraic topology, it can be shown that $H_{1}(Y,\mathbb{Z})\cong\mathbb{Z}_{2}$ also, with generator induced by the open path on $Z\times S^{1}$ $\chi(t)=\left\\{\begin{array}[]{cc}(\gamma_{0}(2t),1)&\quad t\in[0,1/2]\\\ (\zeta(x_{0}),e^{i\pi(2t-1)})&\quad t\in[1/2,1]\end{array}\right.$ (150) with $\gamma_{0}$ an arbitrary path in $Z$ between a fixed $x_{0}$ and $\zeta(x_{0})$. Note that since $Z$ is simply connected, the homotopy class of $\chi$ is independent of $\gamma_{0}$. Consider, then, the map $\Theta:Y\rightarrow SO(3)$ induced by the map $\tilde{\Theta}$ on $Z\times I$, $\tilde{\Theta}(x,t)=\left(\begin{array}[]{ccc}1&0&0\\\ 0&\cos(2\pi t)&-\sin(2\pi t)\\\ 0&\sin(2\pi t)&\cos(2\pi t)\end{array}\right).$ (151) Then, it is straightforward to check that $\Theta_{*}\chi(t)=\left\\{\begin{array}[]{cc}1&\quad t\in[0,1/2]\\\ \tilde{\Theta}(\zeta(x_{0}),2t-1)&\quad t\in[1/2,1]\end{array}\right.\sim\rho(t).$ Consequently, $\Theta_{*}:H_{1}(Y;\mathbb{Z})\rightarrow H_{1}(SO(3),\mathbb{Z})$ is an isomorphism and $\Theta$ cannot be homotopic to the constant map $1:Y\rightarrow SO(3)$.
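The non-triviality of the class of $\rho$ can be made concrete through the double cover $SU(2)\rightarrow SO(3)$: the generator (149) lifts to a path of unit quaternions that ends at $-1$ rather than $+1$, so the loop is not null-homotopic. A numerical sketch (ours):

```python
import numpy as np

def rot_from_quat(q):
    # SO(3) rotation associated to a unit quaternion (w, x, y, z)
    w, x, y, z = q
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])

def rho(t):
    # the generator (149): rotation by 2*pi*t about the first axis
    c, s = np.cos(2*np.pi*t), np.sin(2*np.pi*t)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def lift(t):
    # the unique continuous lift of rho to the unit quaternions with lift(0) = 1
    return np.array([np.cos(np.pi*t), np.sin(np.pi*t), 0.0, 0.0])

ts = np.linspace(0, 1, 50)
# the lift really projects to rho(t) for all sampled t ...
assert all(np.allclose(rot_from_quat(lift(t)), rho(t)) for t in ts)
# ... but it is an open path in SU(2): it ends at minus its starting point
assert np.allclose(lift(1), -lift(0))
```

The same obstruction is what prevents $\Theta$ from lifting to $SU(2)$, as discussed next.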
The trivialisation corresponding to $\Theta$ is given by the vector fields $(S^{1},S^{2},S^{3})$, which, by direct computation, are: $\displaystyle S^{1}$ $\displaystyle=R^{1}=v$ (152) $\displaystyle S^{2}$ $\displaystyle=\cos(2\pi t)R^{2}-\sin(2\pi t)R^{3}=\cos(\pi t)w-\sin(3\pi t)\partial_{t}$ (153) $\displaystyle S^{3}$ $\displaystyle=\sin(2\pi t)R^{2}+\cos(2\pi t)R^{3}=\sin(\pi t)w+\cos(3\pi t)\partial_{t}\,.$ (154) The fact that $\Theta$ does not induce the zero map on first homology, or equivalently, that it does not induce the zero map on the fundamental group, implies that $\Theta$ cannot be lifted to the universal cover of $SO(3)$, which is $SU(2)$. This indicates that the spin structure that is canonically associated to the trivialised bundle, $\mathcal{T}$, differs between the $R$ and $S$ trivialisations. Although $\Theta$ induces countably many trivialisations via $\Theta^{n},\,n\in\mathbb{Z}$, the first homology groups are only able to distinguish the cases where $n$ is even or odd. The group $SO(3)$ has only torsion homotopy groups in dimensions zero through seven, with the exception of $\pi_{3}$, $\pi_{3}(SO(3))\cong\mathbb{Z}$. Therefore, the third homotopy group is a natural arena in which to attempt to distinguish the components of these maps, though we do not attempt that here. Further, we have only shown that these two trivialisations are non-homotopic inside the fixed trivial 3-bundle, and it does not necessarily follow that they are non-homotopic in the space of all ACM3S’s. This would only follow if the locally trivial product structure that we identified in Subsection 4.3 is in fact trivial. Working with the space of all ACM3S’s is more subtle because they correspond to sections of a possibly non-trivial bundle, with twisting dictated by the initial $G_{2}$ structure, and we have nothing conclusive to say about this.
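The angle-addition arithmetic behind $S^{2}$ in (153), and the fact that it descends to the quotient, can be checked symbolically, treating $w$ and $\partial_{t}$ as formal symbols (a sketch of ours):

```python
import sympy as sp

t = sp.symbols('t', real=True)
w, dt = sp.symbols('w dt')   # formal stand-ins for the frame vectors w, d/dt

R2 = sp.cos(sp.pi*t)*w - sp.sin(sp.pi*t)*dt          # (136)
R3 = -sp.sin(sp.pi*t)*w + sp.cos(sp.pi*t)*dt         # (139)

# the rotation (151) applied to the pair (R^2, R^3)
S2 = sp.cos(2*sp.pi*t)*R2 - sp.sin(2*sp.pi*t)*R3

# angle addition reproduces (153): S^2 = cos(pi t) w - sin(3 pi t) dt
diff = sp.expand(S2 - (sp.cos(sp.pi*t)*w - sp.sin(3*sp.pi*t)*dt))
assert sp.simplify(diff.coeff(w)) == 0
assert sp.simplify(diff.coeff(dt)) == 0

# S^2 descends to the quotient: S^2 = w at t = 0 and S^2 = -w at t = 1,
# matching the identification zeta* w = -w, dt -> -dt
assert sp.simplify(S2.subs(t, 0) - w) == 0
assert sp.simplify(S2.subs(t, 1) + w) == 0
```

The same bookkeeping applies to $S^{3}$, which rotates through the $(w,\partial_{t})$ frame with the complementary trigonometric combination.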
### 5.3 A compact $G_{2}$ holonomy example Let us now turn to manifolds with $G_{2}$ holonomy, and explore almost contact structures on such spaces. In this section, we review an example of a compact seven-manifold with $G_{2}$ holonomy that is due to Joyce. As we will see, this construction allows us to specify a canonical AC(3)S that is suitable to illustrate the topological nature of the space of ACM3S. In the next section, we will explore this topic in more detail in a non-compact setting. The construction of this type of $G_{2}$ holonomy manifolds is modelled on the Kummer construction of $SU(2)$ holonomy metrics on the orbifold $T^{4}/\mathbb{Z}_{2}$. The construction was originally proposed in joyce1996:1 ; joyce1996:2 , has been presented in great detail in the book joyce2000 , and a clear summary of the construction can be found in Ref. joyce:proc . For the sake of completeness, we will recapitulate the construction algorithm here. To begin, we start with the 7-dimensional torus $T^{7}$, with coordinates $x^{a}$, $a=1,\dots,7$, satisfying $x^{a}\sim x^{a}+1$. This space may be equipped with the standard $G_{2}$ structure $\varphi_{0}$ as in (2), by the global identification $e^{a}={\rm d}x^{a}$. The associated metric $g_{\varphi_{0}}$ is flat. By quotienting $T^{7}$ by a finite automorphism group $\Gamma$ that respects $\varphi_{0}$, a new $G_{2}$ space is obtained. In general, $T^{7}/\Gamma$ will be an orbifold, with singular set $S$ specified by the fixed points of $\Gamma$. We will see an example of this momentarily. In order to obtain a smooth $G_{2}$ manifold, one then needs to resolve the singularities. In Joyce’s construction, this is accomplished by noticing that for certain groups $\Gamma$ the singular set $S$ decomposes into connected components that may be resolved using standard procedures in complex algebraic geometry (see below).
This resolution gives a non-singular 7-manifold $Y$, which can be shown to admit a 1-parameter family of $G_{2}$ structures $\varphi_{t}$ with torsion $|\nabla\varphi_{t}|={\mathcal{O}}(t^{4})$. This 1-parameter family of $G_{2}$ structures is obtained by gluing together, using a partition of unity, the flat $G_{2}$ structure $(\varphi_{0},g_{0})$ in the non-singular “bulk” of $Y$ with local $G_{2}$ structures $(\varphi_{i},g_{i})$ valid near the various resolved orbifold singularities. The non-zero torsion is localized in the regions with non-trivial derivatives of the partition of unity, i.e. where the resolved singular spaces adjoin the bulk. It is then possible to prove that for all sufficiently small parameters $t$, one may deform $(\varphi_{t},g_{t})$ to $(\tilde{\varphi},\tilde{g})$ with vanishing torsion. Finally, given the specific choices made in the construction, one may show that the holonomy of $\tilde{g}$ is indeed $G_{2}$, and not a subgroup thereof. Thus $Y$ is a compact $G_{2}$ holonomy manifold. Our purpose in this section is to explore AC(3)S that are compatible with Joyce’s construction. Let us therefore describe in some more detail the singular set $S$, following joyce:proc (see also Kronheimer:1989zs ; kronheimer1989 ; ROAN1996489 ). By careful selection of $\Gamma$, one may ascertain that $S$ decomposes into connected components $S^{\alpha}$ that are locally isomorphic to either $T^{3}\times\mathbb{C}^{2}/G$, for $G\subset SU(2)$ finite, or $S^{1}\times\mathbb{C}^{3}/G$, for $G\subset SU(3)$ finite and freely acting on $\mathbb{C}^{3}\backslash 0$. We may then use that 1. $\mathbb{C}^{2}/G$, for $G\subset SU(2)$ finite, may be resolved to a smooth Asymptotically Locally Euclidean (ALE) space $U_{2}$, with Kähler metric of $SU(2)$ holonomy 2.
$\mathbb{C}^{3}/G$, for $G\subset SU(3)$ as above, may be resolved to a smooth ALE space $U_{3}$, with Kähler metric of $SU(3)$ holonomy. Then, we may resolve the orbifold by locally excising the connected components $S^{\alpha}$ of the singular set $S$, and then gluing in smooth product spaces $\hat{S}^{\alpha}$, which have the form $T^{3}\times U_{2}$ or $S^{1}\times U_{3}$, depending on the nature of the singularity. The product spaces $\hat{S}^{\alpha}$ admit $G_{2}$ structures, which we will discuss in detail for the example below. An obvious effect of this construction is that these local $G_{2}$ structures are reduced to $SU(2)$ and $SU(3)$ structures, respectively, and the holonomy of the local metric will be either $SU(2)$ or $SU(3)$. This provides a first link to the AC(3)S that we have discussed in this paper. Let us explore this in more detail in an example. The following example is taken from joyce1996:1 : as above, we construct an orbifold by quotienting $T^{7}$ by a finite group, which in this example is the $\mathbb{Z}_{2}^{3}$ generated by (there is a change in convention in the definition of the local form of $\varphi$ with respect to joyce1996:1 ; this requires changing a sign in the $\gamma$ action) $\displaystyle\alpha((x_{1},...,x_{7}))$ $\displaystyle=(-x_{1},-x_{2},-x_{3},-x_{4},x_{5},x_{6},x_{7})$ (155) $\displaystyle\beta((x_{1},...,x_{7}))$ $\displaystyle=(-x_{1},\frac{1}{2}-x_{2},x_{3},x_{4},-x_{5},-x_{6},x_{7})$ (156) $\displaystyle\gamma((x_{1},...,x_{7}))$ $\displaystyle=(\frac{1}{2}-x_{1},x_{2},\frac{1}{2}-x_{3},x_{4},x_{5},-x_{6},-x_{7})\;.$ (157) One can readily show that this group preserves the $G_{2}$ three-form $\varphi_{0}$ given in (2). The singular set $S$ of $T^{7}/\mathbb{Z}^{3}_{2}$ is determined by the fixed points of the generators $\alpha,\beta$ and $\gamma$, and a little thought reveals that $S$ decomposes into 12 disjoint components of the form $T^{3}\times\mathbb{C}^{2}/\\{\pm 1\\}$ joyce1996:1 .
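That the generators (155)-(157) preserve $\varphi_{0}$ can be confirmed by tracking the signs by which they act on the coordinate one-forms ${\rm d}x^{a}$ (the translations by $1/2$ act trivially on these). Here we assume the coordinate monomials of the flat three-form, the same combination as (122):

```python
# signs by which each generator acts on dx^1, ..., dx^7
ALPHA = {1: -1, 2: -1, 3: -1, 4: -1, 5: 1, 6: 1, 7: 1}
BETA  = {1: -1, 2: -1, 3: 1, 4: 1, 5: -1, 6: -1, 7: 1}
GAMMA = {1: -1, 2: 1, 3: -1, 4: 1, 5: 1, 6: -1, 7: -1}

# monomials of the flat G2 three-form phi_0 (the same expression as (122))
PHI0 = {(1, 2, 7): 1, (3, 4, 7): 1, (5, 6, 7): 1,
        (1, 3, 5): 1, (1, 4, 6): -1, (2, 3, 6): -1, (2, 4, 5): -1}

def pullback(form, signs):
    # pull back a monomial form under a coordinate sign flip
    out = {}
    for I, coeff in form.items():
        s = 1
        for i in I:
            s *= signs[i]
        out[I] = s * coeff
    return out

# each generator flips an even number of indices in every monomial,
# so the Z_2^3 acts by automorphisms of the G2 structure
for g in (ALPHA, BETA, GAMMA):
    assert pullback(PHI0, g) == PHI0
```

The same sign table also locates the fixed-point sets: e.g. $\alpha$ fixes the points with $x_{1},\dots,x_{4}\in\\{0,\tfrac{1}{2}\\}$, which underlie the $T^{3}\times\mathbb{C}^{2}/\\{\pm 1\\}$ components quoted above.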
We are now in the first situation described in the above list, and for each of the disjoint components of the singular set, we may blow up $\mathbb{C}^{2}/\{\pm 1\}$ to $U^{A}$, where $U^{A}$ is a smooth ALE space that agrees with $\mathbb{C}^{2}/\{\pm 1\}$ on its boundary. We may construct this geometry explicitly. Denoting by $(z^{1},z^{2})$ the coordinates on $\mathbb{C}^{2}$, $U^{A}$ admits a hyper-Kähler triple of two-forms $\{\omega^{\alpha}(t)\}$ satisfying $\omega^{1}(t)=\frac{i}{2}\partial\bar{\partial}f_{t}\;,\;\omega^{2}(t)+i\omega^{3}(t)={\rm d}z^{1}\wedge{\rm d}z^{2}$ (158) where we have introduced a suitable Kähler potential that reduces to $f_{t}\to|z_{1}|^{2}+|z_{2}|^{2}$ at the boundary of $U^{A}$, and hence guarantees that the two-forms $\omega^{\alpha}(t)$ become flat $\hat{\omega}^{\alpha}$ at this boundary. We then find that the smooth manifold $Y$, which results from desingularising $T^{7}/\mathbb{Z}^{3}_{2}$ in the manner just described, admits a nowhere vanishing three-form $\varphi_{t}$. Using the above-defined two-forms, and taking one-forms $\sigma^{\alpha}$ as sections of $T^{*}T^{3}$, this can be written as ${\varphi}_{t}={\varphi}_{0}=\frac{1}{3!}\epsilon_{\alpha\beta\gamma}\sigma^{\alpha}\wedge\sigma^{\beta}\wedge\sigma^{\gamma}+\sum_{\alpha}\sigma^{\alpha}\wedge\hat{\omega}^{\alpha}\;,$ in the non-singular bulk of $T^{7}/\mathbb{Z}^{3}_{2}$, and ${\varphi}_{t}=\frac{1}{3!}\epsilon_{\alpha\beta\gamma}\sigma^{\alpha}\wedge\sigma^{\beta}\wedge\sigma^{\gamma}+\sum_{\alpha}\sigma^{\alpha}\wedge\omega^{\alpha}(t)\;,$ in an open neighbourhood that contains the resolved component $T^{3}\times U^{A}$. This is a smooth three-form, since $\{\omega^{\alpha}(t)\}\to\{\hat{\omega}^{\alpha}\}$ at the boundary of $U^{A}$, and it is closed for all $t$. However, a subtlety of the construction is that it is only for small $t$ that $\varphi_{t}$ is guaranteed to be a positive three-form, and hence to define a $G_{2}$ structure, in the interpolating region joyce1996:1 .
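A concrete candidate for the Kähler potential in (158) is the Eguchi–Hanson potential; the normalisation below is our assumption, matching Joyce's treatment up to conventions, with $r^{2}=|z_{1}|^{2}+|z_{2}|^{2}$:

```latex
f_{t}(r) \;=\; \sqrt{r^{4}+t^{4}} \;+\; t^{2}\log r^{2}
\;-\; t^{2}\log\!\big(t^{2}+\sqrt{r^{4}+t^{4}}\,\big)\,.
```

For large $r$ one has $\sqrt{r^{4}+t^{4}}=r^{2}+\mathcal{O}(t^{4}/r^{2})$ and the two logarithmic terms cancel to the same order, so $f_{t}\to|z_{1}|^{2}+|z_{2}|^{2}$ at the boundary of $U^{A}$, as required, while at $t=0$ one recovers the flat potential $f_{0}=r^{2}$; the resolution parameter $t$ controls the size of the exceptional cycle.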
Finally, while $\varphi_{t}$ is closed for all $t$, it fails to be coclosed in this region. As discussed above, a closed and coclosed $G_{2}$ structure may be constructed, in the $t\to 0$ limit, as $\varphi=\varphi_{t}+{\rm d}\eta_{t}$, where the two-form $\eta_{t}$ satisfies a certain elliptic differential equation that we will not discuss in detail. Thus, in the parametric limit of small $t$, we have constructed a $G_{2}$ structure $\varphi_{t}$ which is of the form (113), and which provides an example of an ACM3S. In particular, there is a set of globally defined two-forms, $\{\omega^{\alpha}(t)\}$, which, when combined with the $G_{2}$ three-form, uniquely determines three one-forms $\{\sigma^{\alpha}\}$, which in turn determine an orthonormal three-frame $\{R^{1},R^{2},R^{3}\}$. So far, we have seen that there are obvious similarities between Joyce’s torsionful $G_{2}$ structure $\varphi_{t}$ and the ACM3S decomposition that we know should exist for any $G_{2}$ structure. A more interesting question to ask is whether Joyce’s construction automatically provides information about the AC(3)S of the torsion-free $G_{2}$ structure $\varphi$. This is true, but only to some extent. Indeed, any nowhere vanishing vector field $R$ will provide an ACS when combined either with $\varphi_{t}$ or $\varphi$. However, the fundamental two-forms given by (24) depend on the three-form, and hence the transverse geometry will differ between these ACS. Clearly, comparing to (58), the $SU(3)$ torsion classes will also depend on whether the torsionful or torsion-free $G_{2}$ structure is selected. For AC3S the situation is similar, but here one also has to take into account that while linear independence of vector fields does not depend on the $G_{2}$ structure, orthonormality of vectors does. Thus, an orthonormal three-frame $\{R^{1},R^{2},R^{3}\}$ associated to $\varphi_{t}$ will not be orthonormal with respect to $\varphi$.
However, the space of AC3S, $\mathscr{C}$, is topological, and hence the same for $\varphi_{t}$ and $\varphi$. Thus, we may hope to determine this space using only the explicit, torsionful $G_{2}$ structure $\varphi_{t}$. This is interesting, because we can then hope to say something new about how to count associatives on a $G_{2}$ manifold. We hope to return to this intriguing problem in a future publication. ### 5.4 A class of non-compact $G_{2}$ holonomy examples In this section we will consider the local neighbourhood of an associative three-cycle in the context of ACM3Ss. Associative cycles are known to be relevant to M-theory compactifications, and we earlier found a surprising relation with ACM3Ss. In particular, the associated trivial bundle, $\mathcal{T}$, can only be tangent to a submanifold if it is an associative submanifold (note that $\mathcal{T}$ may be tangent to a submanifold without everywhere being a foliation; that is, the involutivity condition (118) may be satisfied for $x\in X^{3}$, but not for arbitrary $x\in Y$). The question naturally arises: if we fix an arbitrary associative three-cycle, can we always choose a global ACM3S that restricts to the tangent bundle of this three-cycle? In the simplest case, one would be able to locally deform a given ACM3S in a neighbourhood of the chosen three-cycle to satisfy this condition, and we explore this possibility using our explicit expression for the space of ACM3Ss. First, we will show by construction that there is always an ACM3S, defined in the neighbourhood of an associative three-cycle, which restricts to a trivialisation of the three-cycle’s tangent bundle. This example, however, has boundary behaviour that is manifestly dependent on the chosen trivialisation at $X$. This is undesirable from the perspective of the initial compact manifold. We will therefore fix boundary conditions and study the space of compatible ACM3Ss.
We will see that there are topologically distinct ACM3Ss at the boundary and that, therefore, the possible configurations over the associative three-cycle depend on what happens at the edge of its local neighbourhood. Note that the question of which boundary conditions may arise depends on the global setting. It is an interesting open problem to determine what global topological features may obstruct choosing the ACM3S to be tangent to $X$. We will then fix boundary conditions on $NX$ and study the corresponding space of ACM3Ss, $\mathscr{C}_{\partial}(NX)$, say. Finally, we will study the space of all ACM3Ss, $\mathscr{C}(NX,\varphi)$, and show, in particular, that the bundle structure observed in subsection 4.3 is non-trivial. #### 5.4.1 Constructing an ACM3S Let $(\tilde{Y},\tilde{\varphi})$ be a manifold with $G_{2}$ holonomy, i.e. $\tilde{\varphi}$ is a closed and coclosed stable three-form. Let $X\hookrightarrow\tilde{Y}$ be a smooth associative three-cycle, meaning that the induced volume form on $X$ is precisely the restriction of the stable three-form: $\tilde{\varphi}|_{X}={\rm Vol}_{X}$. $X$ has a normal bundle, $NX$, and by standard results, this bundle is diffeomorphic to a tubular neighbourhood of $X$ in $\tilde{Y}$. By choosing such a diffeomorphism and pulling back $\tilde{\varphi}$, the normal bundle is endowed with a $G_{2}$ structure. In practice, it will be most useful for us to replace $NX$ with a finite-radius disc bundle. This is for technical convenience, and we will tend not to keep explicit track of the difference. For concreteness, we will choose the embedding of $NX\hookrightarrow\tilde{Y}$ to be given, fibrewise, by geodesics. That is, for $n_{x}\in NX$ a small enough normal vector at $x\in X$, the corresponding point in the embedded submanifold is given by $\exp_{x}(n_{x})\in\tilde{Y}$, where $\exp$ is the exponential map of Riemannian geometry.
Pulling back the three-form $\tilde{\varphi}$, we obtain a three-form on $NX$, $\varphi$, which is covariantly constant with respect to the Levi-Civita connection. This has the convenient consequence that the value of $\varphi$ everywhere on $NX$ is fixed by its value on $X$, since we can parallel transport along the fibres: $\varphi_{(n,x)}={\sf P}_{\exp_{x}(tn)}(\varphi),$ (159) where ${\sf P}_{\gamma}$ indicates parallel transport along $\gamma$; our choice of path, $t\mapsto\exp_{x}(tn)$, is not unique, but $\varphi$ is completely independent of this choice. We can now look at ACM3Ss compatible with the $G_{2}$ structure we have introduced. We are particularly interested in ACM3Ss that restrict to the tangent bundle of $X$, so our first task is to ensure that we can extend such a choice over the whole of $NX$. In this example, there is a canonical choice for extending a trivialisation of $X$. Let us fix an oriented, orthonormal framing of $X$, thereby inducing a three-frame on $T\tilde{Y}|_{X}$, say $(R^{1}_{X},R^{2}_{X},R^{3}_{X})$. We need to extend this three-frame over $NX$, whilst preserving $R^{1}_{X}\times_{\varphi}R^{2}_{X}=R^{3}_{X}$. The obvious thing to do is to parallel transport along the geodesics: $R^{i}_{(x,n_{x})}={\sf P}_{\exp_{x}(n_{x})}R^{i}_{X,x}\,.$ (160) This preserves (101), since $\varphi$ is also given by parallel transporting along the geodesics off of $X$. On the other hand, the triple $(R^{1},R^{2},R^{3})$ need no longer be integrable. Indeed, a straightforward computation shows that the Lie bracket of parallel transported vectors is given by: $[{\sf P}R,{\sf P}S]=(\nabla_{{\sf P}R}{\sf P})(S)-(\nabla_{{\sf P}S}{\sf P})(R)+{\sf P}(\nabla_{{\sf P}R}S-\nabla_{{\sf P}S}R)\,,$ (161) showing that parallel transport does not, in general, interact well with the Lie bracket.
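For completeness, the bracket formula (161) follows in one line from the torsion-freeness of the Levi-Civita connection, $[X,Y]=\nabla_{X}Y-\nabla_{Y}X$, together with the Leibniz rule, treating ${\sf P}$ as an operator-valued field:

```latex
[{\sf P}R,{\sf P}S]
= \nabla_{{\sf P}R}({\sf P}S)-\nabla_{{\sf P}S}({\sf P}R)
= (\nabla_{{\sf P}R}{\sf P})(S)-(\nabla_{{\sf P}S}{\sf P})(R)
+ {\sf P}\big(\nabla_{{\sf P}R}S-\nabla_{{\sf P}S}R\big)\,.
```

The first two terms on the right-hand side are the obstruction: they vanish only when ${\sf P}$ is covariantly constant along the transported directions, which there is no reason to expect in general.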
This is not at all surprising: if integrability were preserved along the normal bundle, it would indicate a family of deformations of an associative three-cycle, and finding such deformations is notoriously difficult mclean1998deformations ; Joyce:2016fij . Integrability is not a requirement for us, however, and we are satisfied with an implicit, canonical ACM3S for each framing of the calibrated cycle $X$. #### 5.4.2 Compatibility of boundary conditions We note that the ACM3S constructed in the previous section has a non-trivial relationship between the behaviour at the boundary of $NX$ and the precise framing of $X$. If we want to glue this back into the original, compact manifold, this is not satisfactory. We therefore consider a similar problem, but with fixed boundary conditions. Our approach will use the topological features of $\mathscr{C}$ as opposed to the direct construction used above. To this end, let us fix an ACM3S at the boundary of $NX$, say ${\bf R}_{\partial}\in\mathscr{C}(\partial(NX))$, where $\mathscr{C}(\partial(NX)):=\Gamma\big{(}\partial(NX),\mathcal{V}_{2}(T(NX))|_{\partial(NX)}\big{)}$, by abuse of notation. We will also fix boundary conditions at the zero section, ${\bf R}_{X}\in\mathscr{C}(X)$. Note that we have not actually imposed that the $R^{i}_{X}$ come from the tangent bundle over $X$, despite our motivation for studying this problem. Our conclusions are independent of which boundary conditions we impose at either end: what matters is only that not all conditions are mutually compatible. To impose the boundary conditions over $X$, consider the manifold $Y^{\circ}$, which is obtained from $NX$ by cutting out the zero section. It turns out that $Y^{\circ}$ is very simple. Indeed, the normal bundle $NX$ can be seen as a vector bundle associated to an $SU(2)\times SU(2)$-principal bundle over $X$.
Since $SU(2)$ is simply connected and $X$ is three-dimensional, any such bundle is trivial and $NX$ is diffeomorphic to the product $X\times D^{4}$. Consequently, we can identify $Y^{\circ}$ with $X\times S^{3}\times I$ for an open interval $I$. We recall from Subsection 4.3 that the space of ACM3Ss is the space of sections of a fibre bundle with typical fibre the homogeneous space $V_{2}(\mathbb{R}^{7})=G_{2}/SU(2)$. In our case, the seven-manifold is parallelisable, so that the fibre bundle is trivial and we can view our boundary conditions as maps ${\bf R}_{\partial}:X\times S^{3}\times\{1\}\rightarrow V_{2}(\mathbb{R}^{7})$ and ${\bf R}_{X}:X\times S^{3}\times\{0\}\rightarrow V_{2}(\mathbb{R}^{7})$. Of course, for consistency, ${\bf R}_{X}$ must be constant in the $S^{3}$ fibre direction. An ACM3S that extends these boundary conditions is the same as a map $Y^{\circ}\rightarrow V_{2}(\mathbb{R}^{7})$ that restricts to ${\bf R}_{\partial}$ and ${\bf R}_{X}$. To put it another way, such a map is a homotopy of maps $X\times S^{3}\rightarrow V_{2}(\mathbb{R}^{7})$, so our problem is to determine the connected components of the space ${\rm Maps}(X\times S^{3},V_{2}(\mathbb{R}^{7}))$. We will utilise elementary topological techniques to put very coarse bounds on the number of these components. Readers unfamiliar with these techniques can consult e.g. hatcher2000algebraic for a comprehensive introduction. To make things more concrete, we will assume that our associative three-cycle is the three-sphere $X=S^{3}$. Since this is the simplest compact three-manifold, we would expect that any obstructions appearing in this example would also occur for a generic associative submanifold. The first observation we make is that $V_{2}(\mathbb{R}^{7})$ is simply connected, which we shall show below by chasing a homotopy exact sequence. Given this fact, we know that the set of homotopy classes of maps ${\rm Maps}(X\times S^{3},V_{2}(\mathbb{R}^{7}))$ (i.e.
the connected components of this space) can be identified with the homotopy classes of basepoint-preserving maps ${\rm Maps}_{*}(X\times S^{3},V_{2}(\mathbb{R}^{7}))$, so we will regard both spaces as coming with an arbitrary choice of basepoint. Since maps out of a product space are not very convenient to deal with, we consider the cofibration sequence $S^{5}\rightarrow S^{3}\vee S^{3}\rightarrow S^{3}\times S^{3}\rightarrow S^{6}\rightarrow\Sigma(S^{3}\vee S^{3})\,.$ (162) We have utilised several basic topological constructions here, which we briefly recall. In particular, the operations of wedge sum, $\vee$, smash product, $\wedge$, and suspension, $\Sigma$, have appeared. Briefly, the wedge sum of pointed spaces $(X,x_{0}),(Y,y_{0})$ is defined to be the quotient space $X\vee Y=(X\sqcup Y)/(x_{0}\sim y_{0})$. In other words, the wedge sum is the space formed by gluing the two spaces together at the basepoint. The wedge sum of two circles is, for instance, the figure-eight space. Next, the smash product of two spaces is formed by quotienting the wedge sum out of the Cartesian product, $X\wedge Y=(X\times Y)/(X\vee Y)$. This uses the embedding of the wedge sum $X\vee Y\hookrightarrow X\times Y$, which is induced by the inclusions $X\hookrightarrow X\times\{y_{0}\}$ and $Y\hookrightarrow\{x_{0}\}\times Y$. In the example of two circles, this embeds the wedge sum into the two-torus $T^{2}$ as the union of the longitudinal and meridianal circles, and the smash product is given by crushing these circles to a point. One can check that the resulting space is homeomorphic to the 2-sphere. Finally, the suspension of a space can be given by smashing with a circle, $\Sigma X:=S^{1}\wedge X$. The example with two circles exhibits the general property that the suspension of an $n$-sphere is homeomorphic to an $(n+1)$-sphere, $\Sigma(S^{n})\cong S^{1}\wedge S^{n}\cong S^{n+1}$. Returning to the sequence (162), we can now define the maps.
In particular, the excerpt $S^{3}\vee S^{3}\rightarrow S^{3}\times S^{3}\rightarrow S^{6}$ consists precisely of the defining maps of the smash product: the first map is the canonical inclusion of the wedge sum into the product, and the final map is the quotient $S^{3}\times S^{3}\rightarrow S^{3}\wedge S^{3}\cong S^{6}$. The first map of (162) can be seen as the boundary of the attaching map $\Phi:D^{6}\rightarrow S^{3}\vee S^{3}$ in a CW construction of $S^{3}\times S^{3}$ (see, e.g. felix2012rational , p. 175, for more details on this). Finally, the map $S^{6}\rightarrow\Sigma(S^{3}\vee S^{3})$ is the suspension of this attaching map, $\partial\Phi$. It is, in particular, nullhomotopic, which gives us an excerpt of an exact sequence: $\begin{split}0\rightarrow\pi_{0}\big{(}{\rm Maps}_{*}(S^{6},V_{2}(\mathbb{R}^{7}))\big{)}\rightarrow\pi_{0}\big{(}{\rm Maps}_{*}&(S^{3}\times S^{3},V_{2}(\mathbb{R}^{7}))\big{)}\\ &\rightarrow\pi_{0}\big{(}{\rm Maps}_{*}(S^{3}\vee S^{3},V_{2}(\mathbb{R}^{7}))\big{)}\end{split}$ (163) where the middle term is the set we are interested in. The first term is the homotopy group $\pi_{6}(V_{2}(\mathbb{R}^{7}))$ and the last one is the direct sum $\pi_{3}(V_{2}(\mathbb{R}^{7}))^{\oplus 2}$. These homotopy groups can be calculated using the fibration $SU(2)\rightarrow G_{2}\rightarrow V_{2}(\mathbb{R}^{7})$ and the associated long exact homotopy sequence, as we will now see. Let us first prove the earlier claim that $V_{2}(\mathbb{R}^{7})$ is simply connected. Indeed, as an excerpt from the exact sequence we have: $\pi_{1}(G_{2})\rightarrow\pi_{1}(V_{2}(\mathbb{R}^{7}))\rightarrow\pi_{0}(SU(2))\,,$ (164) and the facts that $G_{2}$ is simply connected and $SU(2)$ is connected show that $\pi_{1}(V_{2}(\mathbb{R}^{7}))$ indeed vanishes. Let us now compute the groups appearing as a consequence of (162), i.e. $\pi_{3}(V_{2})$ and $\pi_{6}(V_{2})$.
First, we use the following extract: $0\rightarrow\pi_{7}(V_{2})\rightarrow\pi_{6}(SU(2))\rightarrow\pi_{6}(G_{2})\rightarrow\pi_{6}(V_{2}(\mathbb{R}^{7}))\rightarrow\pi_{5}(SU(2))\rightarrow 0$ (165) where the fact that $\pi_{5}(G_{2})=0=\pi_{7}(G_{2})$ mimura1967homotopy has been used. Since $\pi_{5}(SU(2))=\mathbb{Z}_{2}$, exactness implies that $\pi_{6}(V_{2}(\mathbb{R}^{7}))\neq 0$. Further, $\pi_{6}(G_{2})=\mathbb{Z}_{3}$ and $\pi_{6}(SU(2))=\mathbb{Z}_{12}$, so both $\pi_{6}(V_{2}(\mathbb{R}^{7}))$ and $\pi_{7}(V_{2}(\mathbb{R}^{7}))$ are certainly torsion, in particular having finite cardinality. Similarly, we can extract the exact sequence $0\rightarrow\pi_{4}(V_{2})\rightarrow\pi_{3}(SU(2))\rightarrow\pi_{3}(G_{2})\rightarrow\pi_{3}(V_{2})\rightarrow 0$ (166) where $\pi_{3}(G_{2})\cong\pi_{3}(SU(2))\cong\mathbb{Z}$. A map $\mathbb{Z}\rightarrow\mathbb{Z}$ is either zero or injective. In the first case, we would find $\pi_{4}(V_{2})\cong\pi_{3}(V_{2})\cong\mathbb{Z}$, while in the second we would conclude that $\pi_{4}(V_{2})=0$ and $\pi_{3}(V_{2})$ is torsion. In fact, this map cannot be zero. This is because $SU(2)\cong S^{3}$, so if the generator of $\pi_{3}(SU(2))$ mapped to zero in $\pi_{3}(G_{2})$, we would have to conclude that the inclusion of $SU(2)$ into $G_{2}$ is nullhomotopic. In particular, all maps $\pi_{i}(SU(2))\rightarrow\pi_{i}(G_{2})$ would have to vanish, which is known not to happen mimura1967homotopy . Therefore, $\pi_{4}(V_{2})=0$ and $\pi_{3}(V_{2})$ is torsion. With these results in hand, we can return to (163). Since $\pi_{6}(V_{2})\neq 0$ injects into $\pi_{0}({\rm Maps}_{*}(S^{3}\times S^{3},V_{2}(\mathbb{R}^{7})))$, it follows that this set must have cardinality at least as large as that of $\pi_{6}(V_{2})$. Further, exactness implies that this map surjects onto a subgroup of $\pi_{3}(V_{2})^{\oplus 2}$, and we have just seen that this group is torsion.
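Collecting the exact-sequence arguments above, the constraints obtained on the homotopy groups of $V_{2}(\mathbb{R}^{7})=G_{2}/SU(2)$ may be summarised as follows (we record only what the sequences determine, not the full group structures):

```latex
\pi_{1}(V_{2}(\mathbb{R}^{7}))=0\,,\quad
\pi_{3}(V_{2}(\mathbb{R}^{7}))\ \text{finite}\,,\quad
\pi_{4}(V_{2}(\mathbb{R}^{7}))=0\,,\quad
\pi_{6}(V_{2}(\mathbb{R}^{7}))\neq 0\ \text{finite}\,,\quad
\pi_{7}(V_{2}(\mathbb{R}^{7}))\ \text{finite}\,.
```

These facts are precisely what enters (163) and the path-space computation that follows.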
As a consequence, the set of connected components, $\pi_{0}({\rm Maps}_{*}(S^{3}\times S^{3},V_{2}(\mathbb{R}^{7})))$, is non-empty but finite. This implies that there are indeed topological obstructions preventing us from extending arbitrary boundary conditions, but that there are only finitely many distinct components, or at least only finitely many that are detected by the induced maps on homotopy groups. Assuming that we have found compatible boundary conditions, we can still ask about the space of ACM3Ss extending them. Our above argument shows that this is the space of paths in ${\rm Maps}(S^{3}\times S^{3},V_{2}(\mathbb{R}^{7}))$ with fixed endpoints. For simplicity, let us assume that our boundary conditions are such that ${\bf R}_{X}={\bf R}_{\partial}\sim\,{\rm constant}$. Such paths are the same as maps on the cylinder $S^{3}\times S^{3}\times I$, constant at the interval endpoints. In fact, we can think of such a map as a map on the suspension $\Sigma(S^{3}\times S^{3})\cong\Sigma(S^{3})\vee\Sigma(S^{3})\vee\Sigma(S^{3}\wedge S^{3})$. Therefore, the distinct components of this space are $\pi_{0}({\rm Maps}(\Sigma(S^{3}\times S^{3}),V_{2}(\mathbb{R}^{7})))\cong\pi_{4}(V_{2}(\mathbb{R}^{7}))^{\oplus 2}\oplus\pi_{7}(V_{2}(\mathbb{R}^{7}))=\pi_{7}(V_{2}(\mathbb{R}^{7}))\,.$ (167) We have used the earlier observation that $\pi_{4}(V_{2})=0$. The fact that $\pi_{7}(V_{2})$ is torsion means that there are finitely many distinct classes of ACM3Ss with these boundary conditions. In summary, we have seen that boundary conditions on $NX$ may obstruct our ability to find an ACM3S that is tangent to the associative three-cycle $X$. It is an interesting question for future work to determine these obstructions more precisely and to see how the global topology of $\tilde{Y}$ influences the ACM3Ss that are obtainable on the boundary of $NX$. #### 5.4.3 All ACM3Ss on $NX$ We will now return to study the space of all ACM3Ss on $NX$, without imposing any boundary conditions.
This is interesting to study because the topological simplicity of $NX$ means that we can easily see that the fibre bundle structure on the space of ACM3Ss cannot be trivial. We will continue to assume that $X$ is a three-sphere to keep the calculations as simple as possible. In this case $NX\cong X\times\mathbb{R}^{4}$, and the space of all ACM3Ss is given by the space of maps into $V_{2}(\mathbb{R}^{7})$, $\mathscr{C}(NX)={\rm Maps}(NX,V_{2}(\mathbb{R}^{7}))$. The connected components of this space are the homotopy classes of these maps, and contractibility of $\mathbb{R}^{4}$ means that this is the same as the set of homotopy classes of maps ${\rm Maps}(X,V_{2}(\mathbb{R}^{7}))$. Conveniently, this is the homotopy group $\pi_{3}(V_{2}(\mathbb{R}^{7}))$, which was argued above to be finite. We can similarly calculate the connected components of ${\rm Maps}(NX,SO(3))$, which is again given by the homotopy group $\pi_{3}(SO(3))\cong\mathbb{Z}$. In particular, this space has infinitely many components. On the other hand, recall that the space of ACM3Ss is the total space of a bundle, (119), $\mathscr{T}\rightarrow\mathscr{C}\rightarrow\mathscr{S}\,,$ (168) where the fibre is the space of trivialisations, $\mathscr{T}={\rm Maps}(NX,SO(3))$, and the base is the space of splittings, $\mathscr{S}=\Gamma(NX,\mathcal{G}_{2}(\varphi))$. We want to know whether this bundle is trivial or not. If we assume that it is a trivial bundle, $\Gamma(NX,\mathcal{V}_{2}(TNX))\cong{\rm Maps}(NX,SO(3))\times\Gamma(NX,\mathcal{G}_{2}(\varphi))\;,$ then we would conclude that $\pi_{0}(\Gamma(NX,\mathcal{V}_{2}(TNX)))\cong\pi_{0}({\rm Maps}(NX,SO(3)))\times\pi_{0}(\Gamma(NX,\mathcal{G}_{2}(\varphi)))\;.$ This cannot be, however, since the left-hand side is a finite set, and the right-hand side is infinite. Therefore, even in this very simple system the topological structure of the space of ACM3Ss is non-trivial.
## 6 Conclusions and outlook In this paper we reviewed established results guaranteeing that $G_{2}$ structure manifolds admit a further reduction of structure group, first to $SU(3)$, then to $SU(2)$. This reduction can be thought of as being topological, in that geometric features of the $G_{2}$ structure, parallel transport for instance, need not respect it. We saw, unsurprisingly, that compatibility of the geometry with this reduction leads to supersymmetry enhancement of the physics and, in the case of heterotic supergravity, we explicitly analysed these compatibility conditions for an $SU(3)$ reduction. For $SU(3)$ reductions, there is an induced foliation of the underlying seven-manifold, and by decomposing the field equations into longitudinal and transverse components with respect to this foliation, we found that the longitudinal components of the fields can be seen as measuring the flow along the foliation _up to gauge transformations_. In other words, looking at the geometric features of the reduction of structure group allows us to interpret the seven-dimensional field equations as non-trivial flows of six-dimensional geometries. Further, this analysis allowed us to precisely identify the conditions for supersymmetry enhancement, from $N=1$ to $N=2$, in terms of the $SU(3)$ structure, in complete agreement with the work in Gran:2005wf ; Gran:2007kh ; Gran:2016zxk , and we used this to observe that an example from the literature Fernandez:2008wla had a hitherto unrecognised enhancement of supersymmetry. This relation between almost contact structures and flows of six-dimensional geometries may have interesting consequences for the deformation theory of both six- and seven-dimensional geometries.
From the six-dimensional perspective, one can consider flows in which the $SU(3)$ structure fails to satisfy the relevant equations of motion (for instance, Strominger-Hull in the heterotic case), but where this failure is cancelled by the change along the flow, leading to well-behaved seven-dimensional solutions. These kinds of constructions have been considered in the context of domain walls delaOssa:2014lma , for instance, but the generality of the ACS may be leveraged to consider more complicated scenarios involving subtle interplay between six- and seven-dimensional geometries, such as intersecting networks of domain walls. Returning to the purely seven-dimensional setting, ACSs may be of use in the study of deformations of a $G_{2}$ structure manifold. In this case, the reduction of structure group can be thought of as an extra redundancy in the system, in other words a symmetry, and deformations can be expected to come in representations of this symmetry. Such considerations have often been used in physics. In the case at hand, we expect this will have non-trivial consequences for the difficult study of finite deformations of compactifications on $G_{2}$ structure manifolds. In fact, looking further ahead, it may be possible to use these structures to study infinite-distance limits in moduli space, through decompactifying the foliation direction, for instance. That is, if we suppose that the foliation has compact leaves (in general, the leaves of a foliation need not be compact, so this procedure will not be sensible for every ACS), then we can consider the limit in the geometry where these leaves go to infinite length. This is reminiscent of Donaldson’s study of adiabatic limits of Kovalev–Lefschetz fibrations 2016arXiv160308391D , and it would be interesting to make this connection more precise.
From a physicist’s perspective, the usual stringy considerations Ooguri:2006in imply that we ought to find a massive tower of states becoming light in this limit, and these states ought to correspond to excitations along the foliation. What this tower of states consists of depends on which theory we study. In this paper, we studied the decomposition of the fields of heterotic supergravity into longitudinal and transverse components, as well as the torsion of the $SU(3)$ structure connection, and we expect these results will be necessary in pursuing this idea in the heterotic context. Moreover, it would be interesting to perform similar studies of, for example, M-theory compactified on manifolds with $G_{2}$ holonomy. A feature of importance for the future prospects of this research is that almost contact structures are abundant; indeed, there is automatically an infinite-dimensional family of them. It is, however, unclear whether each of these should truly be regarded as “different” from the physics perspective. It may perhaps be more fruitful to consider the further reduction of structure group to $SU(2)$ for these questions. We saw in this paper that this further reduction is accomplished by an $SO(3)$ triple of vector fields. This is the generic minimum structure group for a $G_{2}$ structure manifold, since any further reduction implies that the manifold is, in fact, parallelisable, i.e. that the structure group is trivial. Although the existence of an almost contact metric three-structure, an ACM3S (and hence a reduction of structure group to $SU(2)$), has been known since the middle of the last century, it seems the study of the space of such reductions has not been undertaken thus far. In this paper we initiated the study of this space, which we identified as a space of sections of a bundle naturally associated to the principal $G_{2}$ frame bundle.
It is, consequently, infinite-dimensional and has non-trivial topology in its own right, including being a locally trivial fibre bundle. We saw in examples that this may be non-trivially fibred and gave a rough argument for why this might be expected to hold in general. Further detailed analysis of this space, for example for compact $G_{2}$ manifolds, may prove to be interesting. From the mathematical perspective, the space of these ACM3Ss, which we have denoted $\mathscr{C}$, depends only on the isomorphism class of the $G_{2}$ frame bundle. In this sense, it is a rather coarse $G_{2}$ invariant. An immediate question to ask is whether it truly depends on this bundle, or if it, in fact, depends only on the unreduced tangent bundle of the manifold. Perhaps of greater interest would be to find a refinement of this space which somehow encodes geometric features of the $G_{2}$ structure, including the metric and covariant derivative, for instance. Some physics considerations may point the way to constructing such a space. Indeed, what is lacking in our analysis is any consideration of whether there is an ACM3S that is preferred. It is natural, from the physics perspective, that not all structures are created equal, so an important open problem is to discover a principle to differentiate between them. As we have alluded to, there is a connection between ACM(3)S and supersymmetry that one might hope to build on to discover such a principle. Let us make some arguments in favour of this. One line of reasoning is to relate the triplet of vectors to spontaneously broken supersymmetry, comparable to the approach in KashaniPoor:2013en . This is a natural connection to make, because the vector fields induce spinors via Clifford multiplication on the canonical $G_{2}$ spinor.
One might hope that these correspond to massless spinors, in the effective theory of a compactification scenario for instance, but unless supersymmetry is enhanced they will correspond to particles with a mass measured by the $G_{2}$ Dirac operator. This suggests that, in the general case, this mass is the quantity to minimise in order to find the preferred ACM3S. In fact, this proposal has some ambiguities. There is no reason to expect a generic nowhere vanishing vector to induce an eigenspinor. Consequently, a given AC(3)S will induce a linear combination of massive fields that cannot be teased apart using the tools at hand. It may be too naive, then, to think of the triplet in an ACM3S as corresponding to particles of the effective theory. Nevertheless, if we view the ACM3S as part of the data defining an enhanced supersymmetric vacuum, then variations of the ACM3S would correspond to vectors in the space of field configurations. Considering the variations of e.g. a superpotential would lead to Euler–Lagrange equations, and it is the solutions of these that one might expect to single out preferred structures. In fact, it might well be expected that the critical locus would be an interesting invariant of the $G_{2}$ geometry and thus be of independent interest to mathematicians. Indeed, we discovered that integrability of the bundles appearing in the construction of the ACM3S is related to interesting aspects of the $G_{2}$ structure: associative and coassociative cycles. These have been much studied in the context of $G_{2}$ holonomy manifolds, impetus in the physics community coming from BPS states in M-theory compactifications, for instance. When the $G_{2}$ structure is not, in fact, closed and coclosed, then identifying the relevant BPS states is still an open question. There is some hope that the ACM3Ss studied here may help shed light on this, once one has identified the correct action principle.
The relation between ACM3Ss and associative cycles was explored further in examples. In particular, we looked at the local neighbourhood of an associative three-cycle in a $G_{2}$-holonomy manifold. It was here that we were able to explicitly show that the space of ACM3Ss, $\mathscr{C}$, was non-trivially fibred over the space of trivial, associative rank 3 subbundles of the tangent bundle. We also saw that we could always find an ACM3S that was tangent to the associative three-cycle, but that if we fixed the behaviour at the boundary of this manifold, then there are possible obstructions to doing so. Re-inserting this local picture into a compact $G_{2}$-holonomy manifold is therefore non-trivial. It would be very interesting to invert this problem and consider how the global topology of a compact manifold affects the possibility of an ACM3S lying tangent to a given associative cycle (or, dually, of the transverse bundle lying tangent to a coassociative cycle). One way to approach this question is, of course, to study more examples. There are several classes to explore: further nilpotent examples from the list of delBarco:2020ddt , $T^{3}$ bundles over $K3$ Fernandez:2008wla , Clarke:2020erl , and the very recently constructed examples of heterotic $G_{2}$ systems on contact Calabi–Yau seven-manifolds Lotay:2021eog . Of equal import are the different constructions of compact $G_{2}$ holonomy manifolds: Joyce orbifolds joyce1996:1 ; joyce1996:2 and (extra) twisted connected sums kovalev20 ; Corti:2012kd ; Corti:2013 ; 2018arXiv180909083N . Here, we have initiated studies of ACM3Ss on Joyce orbifolds, and shown that they come equipped with a canonical AC(3)S. We expect similar results to hold for TCS manifolds. Non-compact $G_{2}$ manifolds, such as Bryant and Salamon’s seminal examples bryant89 and the more recent constructions Foscolo:2017vzf ; Acharya:2020vmg , also have clear links with $SU(3)$ structures and ACS. 
Determining the space $\mathscr{C}$ on some of these example geometries would be interesting. One final possibility that we would like to highlight is the utility of these structures in applications of localisation techniques in quantum field theories (see Pestun:2016zxk for a review). Indeed, localisation techniques have been successfully applied to classes of 7-manifold admitting particular kinds of contact structure Minahan:2015jta ; Polydorou:2017jha ; Rocen:2018xwo ; Iakovidis:2020znp . Extending these results to a general $G_{2}$ structure manifold would be incredibly interesting, but it thus far remains intractable. It is plausible that the almost contact structures considered here will be of use, in that they are similar to (though weaker than) the contact structures already used in Refs. Minahan:2015jta ; Polydorou:2017jha ; Rocen:2018xwo ; Iakovidis:2020znp , while being ubiquitous. In short, we have found that these almost contact (three) structures have tantalising connections to many active research areas in both physics and mathematics. We think that the tools and results we reviewed and established here will be necessary in fleshing out these surprising relations. ## Acknowledgements The authors would like to thank Eirik Svanes for initial collaboration on this project. We would also like to thank Marc-Antoine Fiset, Mateo Galdeano Solans and Maxim Zabzine for discussions. The research of ML and MM is financed by Vetenskapsrådet under grant numbers 2016-03873, 2016-03503, and 2020-03230. ## Appendix A $SU(3)$ structures Let $X$ be a 6 dimensional Riemannian manifold with metric $g$. An $SU(3)$ structure on $X$ is defined by a triple $(X,\omega,\Omega)$, where $\omega$ is a positive, non-degenerate, globally well defined real two form, and $\Omega$ is a locally decomposable, nowhere vanishing, globally well defined complex three form. 
The forms $\Omega$ and $\omega$ satisfy $\omega\wedge\Omega=0~{}.$ (169) The real part of the form $\Omega$ determines an almost complex structure $J$ on $X$. With respect to $J$, $\Omega$ is a $(3,0)$-form and, by equation (169), $\omega$ is a $(1,1)$-form. In fact, $\omega$ is a hermitian form on $X$. The almost complex structure together with $\omega$ determines a hermitian metric on $X$ $g_{mn}=\omega_{mp}\,J^{p}{}_{n}=-J^{p}{}_{m}\,\omega_{pn}~{}.$ There is a unique, up to a constant, invariant volume form, which can be written as ${\rm d}{\rm vol}=\frac{1}{3!}\,\omega\wedge\omega\wedge\omega=\frac{i}{||\Omega||^{2}}\,\Omega\wedge\overline{\Omega}~{},\qquad||\Omega||^{2}=\overline{\Omega}\lrcorner\Omega~{}.$ (170) As we will see later, the $SU(3)$ structure which is natural to a manifold with a $G_{2}$ structure satisfies $||\Omega||^{2}=8$. This means that the scale invariance of this $SU(3)$ structure is reduced to an invariance under phase changes of $\Omega$. The exterior derivatives of the forms $\omega$ and $\Omega$ can be decomposed into representations of $SU(3)$ as follows $\displaystyle{\rm d}\omega$ $\displaystyle=-\frac{12}{||\Omega||^{2}}\,{\rm Im}(W_{0}\,\overline{\Omega})+W_{1}\wedge\omega+W_{3}~{},$ (171) $\displaystyle{\rm d}\Omega$ $\displaystyle=W_{0}\,\omega\wedge\omega+W_{2}\wedge\omega+\overline{\theta}\wedge\Omega~{},$ (172) where the forms $\\{W_{0},\theta,W_{1},W_{2},W_{3}\\}$ are the torsion classes. $W_{0}$ is a complex function, $W_{1}$ is a real one form, $\theta$ is a $(1,0)$ form, $W_{2}$ is a complex primitive $(1,1)$ form, and $W_{3}$ is a real primitive three form of type $(2,1)+(1,2)$. ## Appendix B $SU(2)$ structures Let $S$ be a four dimensional manifold with metric $g$. An $SU(2)$ structure on $S$ is defined by a triple $(S,\omega,\Omega)$, where $\omega$ is a positive, non-degenerate, globally well defined real two form, and $\Omega$ is a locally decomposable, nowhere vanishing, globally well defined two form. 
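As a concrete check of the algebra above, identities (169) and (170) hold for the standard flat $SU(3)$ structure on $\mathbb{R}^{6}$, with $\omega=\sum_{k}dx^{k}\wedge dy^{k}$, $\Omega=dz^{1}\wedge dz^{2}\wedge dz^{3}$, and $||\Omega||^{2}=8$. The following Python sketch (the function names are ours, not from any library) verifies this with a small dictionary-based exterior algebra:

```python
# Verify the flat-model SU(3) identities on R^6 as a sanity check.
# Coordinates (x1, y1, x2, y2, x3, y3) are indexed 0..5; a k-form is a dict
# mapping a strictly increasing index tuple to its (complex) coefficient.

def wedge(a, b):
    """Wedge product of two forms in the dict representation."""
    out = {}
    for ia, ca in a.items():
        for ib, cb in b.items():
            idx = list(ia + ib)
            if len(set(idx)) < len(idx):
                continue  # repeated index: the term vanishes
            sign = 1
            # bubble sort the indices, tracking the sign of the permutation
            for _ in range(len(idx)):
                for j in range(len(idx) - 1):
                    if idx[j] > idx[j + 1]:
                        idx[j], idx[j + 1] = idx[j + 1], idx[j]
                        sign = -sign
            key = tuple(idx)
            out[key] = out.get(key, 0) + sign * ca * cb
    return {k: v for k, v in out.items() if v != 0}

def scale(c, a):
    return {k: c * v for k, v in a.items() if c * v != 0}

def conj(a):
    return {k: complex(v).conjugate() for k, v in a.items()}

dz = [{(2 * k,): 1, (2 * k + 1,): 1j} for k in range(3)]  # dz^k = dx^k + i dy^k

omega = {(0, 1): 1, (2, 3): 1, (4, 5): 1}                 # sum_k dx^k ^ dy^k
Omega = wedge(dz[0], wedge(dz[1], dz[2]))                 # dz^1 ^ dz^2 ^ dz^3

# (169): omega ^ Omega = 0  (a (1,1)-form wedged with the (3,0)-form)
assert wedge(omega, Omega) == {}

vol6 = wedge(omega, wedge(omega, omega))                  # omega^3 = 3! dvol
rhs8 = wedge(Omega, conj(Omega))                          # Omega ^ bar(Omega)
assert vol6 == {(0, 1, 2, 3, 4, 5): 6}
assert rhs8 == {(0, 1, 2, 3, 4, 5): -8j}                  # = -8i dvol

# (170) with ||Omega||^2 = 8:  (1/3!) omega^3 = (i/8) Omega ^ bar(Omega).
# Cross-multiplied to stay in exact arithmetic: 8 omega^3 = 6i Omega ^ bar(Omega)
assert scale(8, vol6) == scale(6j, rhs8)
```

With these conventions $\Omega\wedge\overline{\Omega}=-8i\,{\rm d}{\rm vol}$, so the normalisation $||\Omega||^{2}=8$ is exactly what makes the two expressions for the volume form in (170) agree.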
The forms $\Omega$ and $\omega$ satisfy $\omega\wedge\Omega=0~{}.$ (173) There is a unique, up to a constant, invariant volume form ${\rm d}{\rm vol}_{S}=\frac{1}{2}\,\omega\wedge\omega=\frac{1}{||\Omega||^{2}}\,\Omega\wedge\overline{\Omega}~{},\qquad||\Omega||^{2}=\overline{\Omega}\lrcorner\Omega~{}.$ (174) In the $G_{2}$ setting we are interested in, we will have $||\Omega||^{2}=1$, while keeping the scaling of $\Omega$ as an invariance of the theory. We will not, in this paper, need the torsion classes of the $SU(2)$ structure, which are found by decomposing the exterior derivatives of the forms $\omega$ and $\Omega$ into representations of $SU(2)$. The reader is referred to Behrndt:2005im , and also Bovy:2005qq ; ReidEdwards:2008rd ; Louis:2009dq ; KashaniPoor:2013en for more details on this point. Just as for the $SU(3)$ structure discussed in the previous appendix, the complex two-form defines an almost complex structure on $TS$. In fact, there are several almost complex structures on an $SU(2)$ structure manifold, as we now explain. We may always, as we do in the main discussion of this paper, trade the complex two-form $\Omega$ for two real two-forms. Let $\omega^{1}={\rm Re}\Omega\;,\;\omega^{2}={\rm Im}\Omega\;,\;\omega^{3}=\omega\;.$ (175) Then $\\{\omega^{1},\omega^{2},\omega^{3}\\}$ transform as an $SU(2)$ triplet, and any complex combination of these forms defines an almost complex structure. Moreover, we may associate an almost complex structure to each $\omega^{\alpha}$: $(J^{({\alpha})})^{a}{}_{b}=g^{ac}\omega^{\alpha}_{bc}\;.$ (176) It can be shown that $J^{(\alpha)}J^{(\beta)}+J^{(\beta)}J^{(\alpha)}=-2\,\delta^{\alpha\beta}\bf{1}$ follows from $2\,\omega^{\alpha}\wedge\omega^{\beta}=\delta^{\alpha\beta}{\rm d}{\rm vol}_{S}$, which in turn follows from (174). The almost complex structure $J^{(\alpha)}$ is integrable if the corresponding two-form is closed. 
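The algebra of the triplet can likewise be checked in the flat model: on $\mathbb{R}^{4}$ with $g$ the identity, the standard hyper-Kähler triple $\omega^{1}=dx^{0}\wedge dx^{1}+dx^{2}\wedge dx^{3}$, $\omega^{2}=dx^{0}\wedge dx^{2}-dx^{1}\wedge dx^{3}$, $\omega^{3}=dx^{0}\wedge dx^{3}+dx^{1}\wedge dx^{2}$ yields, via (176), endomorphisms that each square to $-\bf{1}$ and pairwise anticommute (the product of two distinct ones being the third up to sign). A rough Python check, assuming the flat metric (all names here are ours):

```python
# Check the quaternionic relations of the triplet J^(a) built from the
# standard hyper-Kähler two-forms on flat R^4 via (176), with g = Id.

def antisym(entries, n=4):
    """Build an antisymmetric n x n matrix from {(i, j): v} with i < j."""
    m = [[0] * n for _ in range(n)]
    for (i, j), v in entries.items():
        m[i][j], m[j][i] = v, -v
    return m

def matmul(a, b):
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# omega^1 = dx0^dx1 + dx2^dx3, omega^2 = dx0^dx2 - dx1^dx3,
# omega^3 = dx0^dx3 + dx1^dx2   (the flat hyper-Kähler triple)
w = [antisym({(0, 1): 1, (2, 3): 1}),
     antisym({(0, 2): 1, (1, 3): -1}),
     antisym({(0, 3): 1, (1, 2): 1})]

# With g = Id, (J^(a))^m_n = omega^a_{nm}: the transpose of w[a].
J = [[[w[a][n][m] for n in range(4)] for m in range(4)] for a in range(3)]

Id = [[1 if i == j else 0 for j in range(4)] for i in range(4)]
neg = lambda m: [[-v for v in row] for row in m]

for a in range(3):
    for b in range(3):
        prod = matmul(J[a], J[b])
        if a == b:
            assert prod == neg(Id)                   # each J^(a) squares to -Id
        else:
            assert prod == neg(matmul(J[b], J[a]))   # distinct ones anticommute
```

Together the two assertions are the Clifford-type relation $J^{(\alpha)}J^{(\beta)}+J^{(\beta)}J^{(\alpha)}=-2\,\delta^{\alpha\beta}\bf{1}$ in this flat model.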
In the case that all $\omega^{\alpha}$ are closed, the four- dimensional manifold is hyper-Kähler, and the $SU(2)$ transformations that rotate the $\omega^{\alpha}$ correspond to hyper-Kähler rotations of the complex structure. If the structure group is proper $SU(2)$, and not a subgroup thereof, the closure of all $\omega^{\alpha}$ implies that $X$ is a K3 surface. ## References * (1) D. D. Joyce, _Compact Riemannian 7-manifolds with holonomy $G_{2}$. I_, _J. Differential Geom._ 43 (1996) 291. * (2) D. D. Joyce, _Compact Riemannian 7-manifolds with holonomy $G_{2}$. II_, _J. Differential Geom._ 43 (1996) 329. * (3) A. Kovalev, _Twisted connected sums and special Riemannian holonomy_ , _arXiv Mathematics e-prints_ (2000) math/0012189 [math/0012189]. * (4) A. Corti, M. Haskins, J. Nordström and T. Pacini, _$\mathrm{G}_{2}$ -manifolds and associative submanifolds via semi-Fano $3$-folds_, _Duke Math. J._ 164 (2015) 1971 [1207.4470]. * (5) A. Corti, M. Haskins, J. Nordström and T. Pacini, _Asymptotically cylindrical Calabi-Yau 3-folds from weak Fano 3-folds_ , _Geom.Topol._ 17 (2013) . * (6) J. Gutowski and G. Papadopoulos, _Moduli spaces and brane solitons for M theory compactifications on holonomy G(2) manifolds_ , _Nucl. Phys._ B615 (2001) 237 [hep-th/0104105]. * (7) S. Grigorian, _Moduli spaces of G 2 manifolds_, _Rev. Math. Phys._ 22 (2010) 1061 [0911.2185]. * (8) X. de la Ossa, M. Larfors and E. E. Svanes, _Infinitesimal moduli of G2 holonomy manifolds with instanton bundles_ , _JHEP_ 11 (2016) 016 [1607.03473]. * (9) X. de la Ossa, M. Larfors and E. E. Svanes, _The Infinitesimal Moduli Space of Heterotic G 2 Systems_, _Commun. Math. Phys._ 360 (2018) 727 [1704.08717]. * (10) M.-A. Fiset, C. Quigley and E. E. Svanes, _Marginal deformations of heterotic G 2 sigma models_, _JHEP_ 02 (2018) 052 [1710.06865]. * (11) X. de La Ossa and M.-A. 
Fiset, _$\mathcal{G}$ -structure symmetries and anomalies in $(1,0)$ non-linear $\sigma$-models_, _JHEP_ 01 (2019) 062 [1809.01138]. * (12) A. Clarke, M. Garcia-Fernandez and C. Tipler, _Moduli of $G_{2}$ structures and the Strominger system in dimension 7_, 1607.01219. * (13) A. Clarke, M. Garcia-Fernandez and C. Tipler, _$T$ -Dual solutions and infinitesimal moduli of the $G_{2}$-Strominger system_, 2005.09977. * (14) A. P. Braun and S. Schäfer-Nameki, _Compact, Singular $G_{2}$-Holonomy Manifolds and M/Heterotic/F-Theory Duality_, _JHEP_ 04 (2018) 126 [1708.07215]. * (15) M. Atiyah and E. Witten, _M theory dynamics on a manifold of G(2) holonomy_ , _Adv. Theor. Math. Phys._ 6 (2003) 1 [hep-th/0107177]. * (16) B. S. Acharya and E. Witten, _Chiral fermions from manifolds of G(2) holonomy_ , hep-th/0109152. * (17) D. Joyce, _Conjectures on counting associative 3-folds in $G_{2}$-manifolds_, _arXiv e-prints_ (2016) arXiv:1610.09836 [1610.09836]. * (18) A. P. Braun, M. Del Zotto, J. Halverson, M. Larfors, D. R. Morrison and S. Schäfer-Nameki, _Infinitely many M2-instanton corrections to M-theory on G 2-manifolds_, _JHEP_ 2018 (2018) 77 [1803.02343]. * (19) B. S. Acharya, A. P. Braun, E. E. Svanes and R. Valandro, _Counting associatives in compact $G_{2}$ orbifolds_, _JHEP_ 03 (2019) 138 [1812.04008]. * (20) J. A. Harvey and G. W. Moore, _Superpotentials and membrane instantons_ , hep-th/9907026. * (21) S. Gukov, S.-T. Yau and E. Zaslow, _Duality and Fibrations on $G_{2}$ Manifolds_, _Turkish Journal of Mathematics_ 27 (2003) 61 [hep-th/0203217]. * (22) A. P. Braun and M. Del Zotto, _Towards generalized mirror symmetry for twisted connected sum $G_{2}$ manifolds_, _JHEP_ 2018 (2018) 82 [1712.06571]. * (23) A. P. Braun and M. Del Zotto, _Mirror symmetry for $G_{2}$-manifolds: twisted connected sums and dual tops_, _JHEP_ 2017 (2017) 80 [1701.05202]. * (24) J. Eckhard, S. Schäfer-Nameki and J.-M. 
Wong, _An $\mathcal{N}=1$ 3d-3d Correspondence_, _JHEP_ 07 (2018) 052 [1804.02368]. * (25) E. Thomas, _Vector fields on manifolds_ , _Bull. Amer. Math. Soc._ 75 (1969) 643. * (26) T. Friedrich, I. Kath, A. Moroianu and U. Semmelmann, _On nearly parallel G2-structures_ , _Journal of Geometry and Physics_ 23 (1997) 259. * (27) K. Behrndt, M. Cvetic and T. Liu, _Classification of supersymmetric flux vacua in M theory_ , _Nucl. Phys._ B749 (2006) 25 [hep-th/0512032]. * (28) S. Andriolo, G. Shiu, H. Triendl, T. Van Riet, G. Venken and G. Zoccarato, _Compact G2 holonomy spaces from SU(3) structures_ , _JHEP_ 03 (2019) 059 [1811.00063]. * (29) N. Kim, _AdS(3) solutions of IIB supergravity from D3-branes_ , _JHEP_ 01 (2006) 094 [hep-th/0511029]. * (30) U. Gran, J. Gutowski and G. Papadopoulos, _IIB backgrounds with five-form flux_ , _Nucl. Phys._ B798 (2008) 36 [0705.2208]. * (31) A. Passias and D. Prins, _On AdS 3 solutions of Type IIB_, _JHEP_ 05 (2020) 048 [1910.06326]. * (32) U. Gran, P. Lohrmann and G. Papadopoulos, _The Spinorial geometry of supersymmetric heterotic string backgrounds_ , _JHEP_ 0602 (2006) 063 [hep-th/0510176]. * (33) U. Gran, G. Papadopoulos and D. Roest, _Supersymmetric heterotic string backgrounds_ , _Phys.Lett._ B656 (2007) 119 [0706.4407]. * (34) U. Gran, J. B. Gutowski and G. Papadopoulos, _On supersymmetric Anti-de-Sitter, de-Sitter and Minkowski flux backgrounds_ , 1607.00191. * (35) H. Hopf, _Vectorfelder in n-dimensionalen mannigfaltigkeiten_ , _Math. Ann. 96_ (1927) 225. * (36) E. Thomas, _Real and complex vector fields on manifolds_ , _Journal of Mathematics and Mechanics_ 16 (1967) 1183. * (37) Y.-y. Kuo, _On almost contact $3$-structure_, _Tohoku Math. J. (2)_ 22 (1970) 325. * (38) M. F. Arikan, H. Cho and S. Salur, _Contact Structures on $G_{2}$-Manifolds and Spin 7-Manifolds_, 1207.2046. * (39) M. F. Arikan, H. Cho and S. Salur, _Existence of Compatible Contact Structures on $G_{2}$-manifolds_, _Asian J. 
Math._ 17 (2013) 321 [1112.2951]. * (40) A. J. Todd, _An Almost Contact Structure on $G_{2}$-Manifolds_, 1501.06966. * (41) M. Fernández and A. Gray, _Riemannian manifolds with structure group g2_ , _Ann. Mat. Pura Appl._ 32 (1982) 19. * (42) R. L. Bryant, _Some remarks on G(2)-structures_ , math/0305124. * (43) D. D. Joyce, _Compact manifolds with special holonomy_ , Oxford Mathematical Monographs. Oxford University Press, 2000. * (44) S. Sasaki, _On differentiable manifolds with certain structures which are closely related to almost contact structure, i_ , _Tohoku Math. J. (2)_ 12 (1960) 459. * (45) S. Sasaki and Y. Hatakeyama, _On differentiable manifolds with certain structures which are closely related to almost contact structure, ii_ , _Tohoku Math. J. (2)_ 13 (1961) 281. * (46) T. Kashiwada, _On a contact 3-structure_ , _Mathematische Zeitschrift_ 238 (2001) 829. * (47) X. de la Ossa, M. Larfors and E. E. Svanes, _Exploring $SU(3)$ structure moduli spaces with integrable $G_{2}$ structures_, _Adv. Theor. Math. Phys._ 19 (2015) 837 [1409.7539]. * (48) M. Gunaydin and H. Nicolai, _Seven-dimensional octonionic Yang-Mills instanton and its extension to an heterotic string soliton_ , _Phys. Lett._ B351 (1995) 169 [hep-th/9502009]. * (49) J. P. Gauntlett, N. Kim, D. Martelli and D. Waldram, _Five-branes wrapped on SLAG three cycles and related geometry_ , _JHEP_ 0111 (2001) 018 [hep-th/0110034]. * (50) T. Friedrich and S. Ivanov, _Parallel spinors and connections with skew-symmetric torsion in string theory_ , _ArXiv Mathematics e-prints_ (2001) [math/0102142]. * (51) J. P. Gauntlett, D. Martelli, S. Pakis and D. Waldram, _G structures and wrapped NS5-branes_ , _Commun.Math.Phys._ 247 (2004) 421 [hep-th/0205050]. * (52) T. Friedrich and S. Ivanov, _Killing spinor equations in dimension 7 and geometry of integrable G 2-manifolds_, _Journal of Geometry and Physics_ 48 (2003) 1 [math/0112201]. * (53) J. P. Gauntlett, D. Martelli and D. 
Waldram, _Superstrings with intrinsic torsion_ , _Phys.Rev._ D69 (2004) 086002 [hep-th/0302158]. * (54) P. Ivanov and S. Ivanov, _SU(3) instantons and G(2), spin(7) heterotic string solitons_ , _Commun. Math. Phys._ 259 (2005) 79 [math/0312094]. * (55) A. Lukas and C. Matti, _G-structures and Domain Walls in Heterotic Theories_ , _JHEP_ 1101 (2011) 151 [1005.5302]. * (56) J. Gray, M. Larfors and D. Lüst, _Heterotic domain wall solutions and SU(3) structure manifolds_ , _JHEP_ 1208 (2012) 099 [1205.6208]. * (57) X. de la Ossa, M. Larfors, M. Magill and E. E. Svanes, _Superpotential of three dimensional N=1 heterotic supergravity_ , _JHEP_ 2020 (2020) [1904.01027]. * (58) A. Strominger, _Superstrings with Torsion_ , _Nucl.Phys._ B274 (1986) 253. * (59) C. Hull, _Compactifications of the Heterotic Superstring_ , _Phys.Lett._ B178 (1986) 357. * (60) X. de la Ossa, M. Larfors and E. E. Svanes, _Restrictions of Heterotic $G_{2}$ Structures and Instanton Connections_, in _Nigel Hitchin’s 70th Birthday Conference_ , 9, 2017, 1709.06974. * (61) E. Thomas, _Postnikov invariants and higher order cohomology operations_ , _Annals of Mathematics_ 85 (1967) 184. * (62) R. Harvey and H. B. Lawson, _Calibrated geometries_ , _Acta Mathematica_ 148 (1982) 47. * (63) R. C. McLean, _Deformations of calibrated submanifolds_ , _Communications in Analysis and Geometry_ 6 (1998) 705. * (64) S. Donaldson, _Adiabatic limits of co-associative Kovalev-Lefschetz fibrations_ , _arXiv e-prints_ (2016) arXiv:1603.08391 [1603.08391]. * (65) D. Baraglia, _Moduli of coassociative submanifolds and semi-flat G 2-manifolds_, _Journal of Geometry and Physics_ 60 (2010) 1903 [0902.2135]. * (66) A. Kovalev, _Coassociative K3 fibrations of compact G_2-manifolds_ , _arXiv Mathematics e-prints_ (2005) math/0511150 [math/0511150]. * (67) A. Hatcher, _Vector bundles and k-theory_ , _http://www.math.cornell.edu/~hatcher/VBKT/VBpage.html_ (2016) . * (68) M. 
Magill, _Topological K-theory and Bott Periodicity_ , Master’s thesis, Uppsala University, Algebra and Geometry, http://uu.diva-portal.org/smash/record.jsf?pid=diva2%3A1103965, 2017. * (69) M. Fernandez, S. Ivanov, L. Ugarte and R. Villacampa, _Compact supersymmetric solutions of the heterotic equations of motion in dimensions 7 and 8_ , _Adv. Theor. Math. Phys._ 15 (2011) 245 [0806.4356]. * (70) V. del Barco, A. Moroianu and A. Raffero, _Purely coclosed G 2-structures on 2-step nilpotent Lie groups_, 2006.15925. * (71) S. Grigorian, _Betti numbers of a class of barely G 2 manifolds_, _Commun. Math. Phys._ 301 (2011) 215 [0909.4681]. * (72) D. Joyce, _Compact manifolds with exceptional holonomy_ , _Documenta Mathematica_ II (1998) 361. * (73) P. Kronheimer, _The construction of ALE spaces as hyper-Kähler quotients_ , _J. Diff. Geom._ 29 (1989) 665. * (74) P. B. Kronheimer, _A Torelli-type theorem for gravitational instantons_ , _J. Differential Geom._ 29 (1989) 685. * (75) S. Roan, _Minimal resolutions of Gorenstein orbifolds in dimension three_ , _Topology_ 35 (1996) 489 . * (76) A. Hatcher, C. U. Press and C. U. D. of Mathematics, _Algebraic topology_ , _http://pi.math.cornell.edu/~hatcher/AT/ATpage.html_ (2002) . * (77) Y. Félix, S. Halperin and J.-C. Thomas, _Rational homotopy theory_ , vol. 205. Springer Science & Business Media, 2012. * (78) M. Mimura, _The homotopy groups of lie groups of low rank_ , _Journal of Mathematics of Kyoto University_ 6 (1967) 131. * (79) H. Ooguri and C. Vafa, _On the Geometry of the String Landscape and the Swampland_ , _Nucl. Phys. B_ 766 (2007) 21 [hep-th/0605264]. * (80) A.-K. Kashani-Poor, R. Minasian and H. Triendl, _Enhanced supersymmetry from vanishing Euler number_ , _JHEP_ 04 (2013) 058 [1301.5031]. * (81) J. D. Lotay and H. N. S. Earp, _The heterotic $\rm{G}_{2}$ system on contact Calabi–Yau $7$-manifolds_, 2101.06767. * (82) J. 
Nordström, _Extra-twisted connected sum $G_{2}$-manifolds_, _arXiv e-prints_ (2018) arXiv:1809.09083 [1809.09083]. * (83) R. L. Bryant and S. Salamon, _On construction of some complete metrics with exceptional holonomy_ , _Duke Math. J._ 58 (1989) 829. * (84) L. Foscolo, M. Haskins and J. Nordström, _Complete non-compact G2-manifolds from asymptotically conical Calabi-Yau 3-folds_ , 1709.04904. * (85) B. S. Acharya, L. Foscolo, M. Najjar and E. E. Svanes, _New $G_{2}$-conifolds in $M$-theory and their Field Theory Interpretation_, 2011.06998. * (86) V. Pestun et al., _Localization techniques in quantum field theories_ , _J. Phys. A_ 50 (2017) 440301 [1608.02952]. * (87) J. A. Minahan and M. Zabzine, _Gauge theories with 16 supersymmetries on spheres_ , _JHEP_ 03 (2015) 155 [1502.07154]. * (88) K. Polydorou, A. Rocén and M. Zabzine, _7D supersymmetric Yang-Mills on curved manifolds_ , _JHEP_ 12 (2017) 152 [1710.09653]. * (89) A. Rocén, _7D supersymmetric Yang-Mills on a 3-Sasakian manifold_ , _JHEP_ 11 (2018) 024 [1808.06917]. * (90) N. Iakovidis, J. Qiu, A. Rocén and M. Zabzine, _7D supersymmetric Yang-Mills on hypertoric 3-Sasakian manifolds_ , _JHEP_ 06 (2020) 026 [2003.12461]. * (91) J. Bovy, D. Lust and D. Tsimpis, _N = 1,2 supersymmetric vacua of IIA supergravity and SU(2) structures_ , _JHEP_ 08 (2005) 056 [hep-th/0506160]. * (92) R. A. Reid-Edwards and B. Spanjaard, _N=4 Gauged Supergravity from Duality-Twist Compactifications of String Theory_ , _JHEP_ 12 (2008) 052 [0810.4699]. * (93) J. Louis, D. Martinez-Pedrera and A. Micu, _Heterotic compactifications on SU(2)-structure backgrounds_ , _JHEP_ 09 (2009) 012 [0907.3799].
Software Technology and Artificial Intelligence Research Laboratory, Chiba Institute of Technology, 2-17-1 Tsudanuma, Narashino, Chiba, 275-0016, <EMAIL_ADDRESS>Department of Information Science, Toho University, 2-2-1 Miyama, Funabashi, Chiba, 274-8510<EMAIL_ADDRESS>CCS concepts: Theory of computation → Logic; Theory of computation → Type theory ###### Acknowledgements. The author thanks Yosuke Fukuda, Tasuku Hiraishi, Kentaro Kikuchi, and Takeshi Tsukada for the fruitful discussions, which clarified the contributions of the present paper. # A Symmetric Lambda-Calculus Corresponding to the Negation-Free Bilateral Natural Deduction Tatsuya Abe Daisuke Kimura ###### Abstract Filinski constructed a symmetric lambda-calculus consisting of expressions and continuations, which are symmetric, and functions, which have duality. In his calculus, functions can be encoded to expressions and continuations using primitive operators. That is, the duality of functions is not derived in the calculus but adopted as a principle of the calculus. In this paper, we propose a simple symmetric lambda-calculus corresponding to the negation-free bilateral natural deduction based on bilateralism in proof-theoretic semantics. In our calculus, continuation types are represented not as negations of formulae but as formulae with negative polarity. Function types are represented as the implication and but-not connectives in intuitionistic and paraconsistent logics, respectively. 
Our calculus is not only simple but also powerful, as it includes a call-by-value calculus corresponding to the call-by-value dual calculus invented by Wadler. We show that mutual transformations between expressions and continuations are definable in our calculus to justify the duality of functions. We also show that every typable function has dual types. Thus, the duality of functions is derived from bilateralism. ###### keywords: symmetric lambda-calculus, formulae-as-types, duality, bilateralism, natural deduction, proof-theoretic semantics, but-not connective, continuation, call-by-value ## 1 Introduction A function of the type $A_{0}\to A_{1}$ from expressions of the type $A_{0}$ to expressions of the type $A_{1}$ can be regarded as a function from continuations of the type $A_{1}$ to continuations of the type $A_{0}$. This property of functions is called _duality_. Filinski constructed a symmetric $\lambda$-calculus based on the duality of functions [12, 13]. His calculus consists of expressions $\mathit{E}$, continuations $\mathit{C}$, and functions $\mathit{F}$. Expressions and continuations are symmetric. Functions are neutral; that is, functions can be encoded to expressions and continuations as $\ulcorner\mathit{F}\urcorner$ and $\llcorner\mathit{F}\lrcorner$, respectively. Expressions and continuations can be decoded to functions by the operators $\overline{\mathit{E}}$ and $\underline{\mathit{C}}$. The operators $\ulcorner\cdot\urcorner$, $\llcorner\cdot\lrcorner$, $\overline{\cdot}$, and $\underline{\cdot}$ are _primitive_ , since the duality of functions is adopted as a _principle_ of his calculus. The duality allows the call-with-current-continuation operator (call/cc) to have the type $((A_{0}\to A_{1})\to A_{0})\to A_{0}$. In a traditional interpretation of function types, this type means that call/cc takes an expression of the type $(A_{0}\to A_{1})\to A_{0}$ and returns an expression of the type $A_{0}$. 
However, in the symmetric $\lambda$-calculus, call/cc takes a continuation of the type $A_{0}$ and becomes a function of the type $(A_{0}\to A_{1})\to A_{0}$, which takes an expression of the type $A_{0}\to A_{1}$ and returns an expression of the type $A_{0}$. The duality of functions seems to be one of the most significant reasons why the symmetric $\lambda$-calculus can have the provability of classical logic, because the type $((A_{0}\to A_{1})\to A_{0})\to A_{0}$ corresponds to _the Peirce formula_ under the formulae-as-types notion [4, 22], which strengthens the $\lambda$-calculus corresponding to minimal logic to one having the provability of classical logic [21]. In this paper, we justify the duality of functions in the symmetric $\lambda$-calculus using _bilateralism_ in proof-theoretic semantics. In proof-theoretic semantics there exists an idea that the meanings of logical connectives are given by the contexts in which the logical connectives occur. In this idea, the meaning of a logical connective is considered to be defined by its introduction rule in a natural deduction, and its elimination rule is naturally determined to be _in harmony with_ the introduction rule. Rumfitt suggested that the original natural deduction invented by Gentzen [15, 16] is not harmonious, and constructed a natural deduction based on bilateralism [32]. Within the notion of bilateralism, provability is defined not for a plain formula $A$ but for formulae with polarity $\mathord{\mathop{+}{A}}$ and $\mathord{\mathop{-}{A}}$. Provability of $\mathord{\mathop{+}{A}}$ means that $A$ is accepted, and provability of $\mathord{\mathop{-}{A}}$ means that $A$ is rejected. The traditional formulation, in which provability of $A$ means that $A$ is accepted, is based on the notion of unilateralism rather than bilateralism. Bilateralism does not permit anything neutral and forces everything to have either positive or negative polarity. 
Rumfitt showed that a natural deduction of classical logic that is constructed on unilateralism can be reconstructed on bilateralism. In this paper, we construct a symmetric $\lambda$-calculus corresponding to the negation-free bilateral natural deduction. A distinguishing aspect of our calculus is that we adopt the but-not connective as a constructor for functions between continuations. Another distinguishing aspect is that reductio ad absurdum is a construction of a configuration, also known as a command. In our calculus, continuations and commands are first-class citizens. Our bilateral $\lambda$-calculus contains a computationally consistent call-by-value calculus. The calculus corresponds to the sub-calculus of the call-by-value dual calculus invented by Wadler [38, 39] obtained by adding the but-not connective and removing the negation connective. The equivalence is formally obtained by giving mutual translations between these calculi. In other words, the translation provides a strong relationship between a bilateral natural deduction and a sequent calculus, including proofs, under the formulae-as-types notion. The translations clarify a significant difference between the bilateral natural deduction and the sequent calculus. The negation of the dual calculus is not involutive; that is, $\operatorname{\neg}{\operatorname{\neg}{A}}$ is not isomorphic to $A$. Although the dual calculus also has the involutive duality as a meta-level operation that comes from the left-hand-side and right-hand-side duality of the classical sequent-calculus framework, there exists no inference rule to operate the involutive duality in the calculus. In the bilateral $\lambda$-calculus, the negation is represented using inversions of polarities, and is involutive by definition. A symmetric $\lambda$-calculus constructed by Lovas and Crary is the only similar calculus based on bilateralism [25]. 
However, they adopted the negation connective $\neg$ as a primitive logical connective, and the function type $\to$ is defined as syntactic sugar. In Lovas and Crary’s calculus it is necessary to use reductio ad absurdum, although it is generally easy to define functions between expressions. This means that it is not easy to define a sub-calculus corresponding to minimal logic. Our calculus does not include the negation connective. Our work claims that the negation connective is not necessary; negative polarity is sufficient to define a symmetric $\lambda$-calculus based on bilateralism. Using our calculus, we justify the duality which Filinski adopted as a principle in constructing his calculus. Specifically, the encodings to expressions and continuations are _definable_ in our calculus. More precisely, mutual transformations between expressions and continuations of function types are definable in our calculus. We also show that every typable function has dual types with respect to expressions and continuations. We clarify that bilateralism naturally gives rise to the duality of functions. Finally, we note that one of our goals is to construct a simple and powerful calculus in which the duality of functions is definable. We do not intend to clarify anything unknown in classical logic by assigning $\lambda$-terms to proofs, as seen in existing work in structural proof theory. Actually, our calculus is a sub-calculus of a natural extension of the dual calculus. The remainder of this paper is organized as follows: In Section 2, we introduce bilateral natural deductions. In Section 3, we add proofs to nodes in derivation trees. In Section 4, we construct a symmetric $\lambda$-calculus corresponding to the negation-free bilateral natural deduction. In Section 5, we justify the duality of functions using our calculus. In Section 6, we discuss related work to clarify the contributions of this paper. In Section 7, we conclude the paper by identifying future research directions. 
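As an informal illustration of the duality of functions discussed above, and quite outside the calculus itself, a function between expressions induces a function between continuations by precomposition; in ordinary continuation-passing style this, together with a reified call/cc, can be sketched as follows (a Python toy model with names of our own choosing):

```python
# Informal CPS illustration of the duality of functions: a function
# f : A0 -> A1 on expressions induces dual(f) : C(A1) -> C(A0) on
# continuations, where a continuation of type A is modelled as a
# callable consuming an A.  A sketch, not the symmetric lambda-calculus.

def dual(f):
    """Send f : A0 -> A1 to the map on continuations k |-> k . f."""
    return lambda k: (lambda x: k(f(x)))

def callcc(f):
    """call/cc with the captured continuation reified as an exception.

    Its type ((A0 -> A1) -> A0) -> A0 reads, dually: given a continuation
    of type A0, produce a function of type (A0 -> A1) -> A0.
    """
    class Escape(Exception):
        def __init__(self, value):
            self.value = value
    def throw(x):            # the current continuation, made first-class
        raise Escape(x)
    try:
        return f(throw)
    except Escape as e:
        return e.value

inc = lambda n: n + 1
collect = []
dual(inc)(collect.append)(41)     # runs collect.append(inc(41))
assert collect == [42]

# call/cc used to escape early, and used trivially:
assert callcc(lambda k: 1 + k(41)) == 41
assert callcc(lambda k: 40 + 2) == 42
```

The point of the sketch is only the direction reversal: `dual(f)` consumes a continuation expecting an `A1` and yields one expecting an `A0`, mirroring the contravariance stated in the opening paragraph.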
## 2 Bilateral Natural Deductions In this section, we introduce _bilateralism_ , which was proposed by Rumfitt [32], and define a few variants of Rumfitt’s bilateral natural deduction. The set of formulae is defined as follows: (formulae) $\displaystyle\quad A$ $\displaystyle\Coloneqq o\mid(\operatorname{\neg}{A})\mid(A\to A)\mid(A\wedge A)\mid(A\vee A)$ where $o$ ranges over propositional variables. We note that $\bot$ is not contained by the set of formulae. The connective power of $\neg$ is stronger than that of $\wedge$, $\vee$, and $\to$. The connective powers of $\wedge$ and $\vee$ are stronger than that of $\to$. We omit parentheses when the context renders them obvious. $A$ $\operatorname{\neg}{A}$ $\mathrm{(}\mathord{\bot}\textrm{-}{\mathrm{I}_{\mathord{}\mathord{}}}\mathrm{)}$ $\bot$ $\bot$ $\mathrm{(}\mathord{\bot}\textrm{-}{\mathrm{E}_{\mathord{}\mathord{}}}\mathrm{)}$ $A$ $[A]$ $\smash{\vdots}\rule{0.0pt}{8.61108pt}$ $\bot$ $\mathrm{(}\mathord{\neg}\textrm{-}{\mathrm{I}_{\mathord{}\mathord{}}}\mathrm{)}$ $\operatorname{\neg}{A}$ $[\operatorname{\neg}{A}]$ $\smash{\vdots}\rule{0.0pt}{8.61108pt}$ $\bot$ $\mathrm{(}\mathord{\neg}\textrm{-}{\mathrm{E}_{\mathord{}\mathord{}}}\mathrm{)}$ $A$ $[A_{0}]$ $\smash{\vdots}\rule{0.0pt}{8.61108pt}$ $A_{1}$ $\mathrm{(}\mathord{\to}\textrm{-}{\mathrm{I}_{\mathord{}\mathord{}}}\mathrm{)}$ $A_{0}\to A_{1}$ $A_{0}\to A_{1}$ $A_{0}$ $\mathrm{(}\mathord{\to}\textrm{-}{\mathrm{E}_{\mathord{}\mathord{}}}\mathrm{)}$ $A_{1}$ $A_{0}$ $A_{1}$ $\mathrm{(}\mathord{\wedge}\textrm{-}{\mathrm{I}_{\mathord{}\mathord{}}}\mathrm{)}$ $A_{0}\wedge A_{1}$ $A_{0}\wedge A_{1}$ $\mathrm{(}\mathord{\wedge}\textrm{-}{\mathrm{E}_{\mathord{}\mathord{0}}}\mathrm{)}$ $A_{0}$ $A_{0}\wedge A_{1}$ $\mathrm{(}\mathord{\wedge}\textrm{-}{\mathrm{E}_{\mathord{}\mathord{1}}}\mathrm{)}$ $A_{1}$ $A_{0}$ $\mathrm{(}\mathord{\vee}\textrm{-}{\mathrm{I}_{\mathord{}\mathord{0}}}\mathrm{)}$ $A_{0}\vee A_{1}$ $A_{1}$ 
$\mathrm{(}\mathord{\vee}\textrm{-}{\mathrm{I}_{\mathord{}\mathord{1}}}\mathrm{)}$ $A_{0}\vee A_{1}$ $A_{0}\vee A_{1}$ $[A_{0}]$ $\smash{\vdots}\rule{0.0pt}{8.61108pt}$ $A_{2}$ $[A_{1}]$ $\smash{\vdots}\rule{0.0pt}{8.61108pt}$ $A_{2}$ $\mathrm{(}\mathord{\vee}\textrm{-}{\mathrm{E}_{\mathord{}\mathord{}}}\mathrm{)}$ $A_{2}$ Figure 1: Natural deduction ${\textrm{ND}_{\textrm{prop}}}$. We recall the natural deduction invented by Gentzen [15, 16] and consider its propositional fragment ${\textrm{ND}_{\textrm{prop}}}$, as shown in Figure 1. In each inference rule, the formulae or $\bot$ above the line are the premises, and the formula or $\bot$ below the line is the conclusion. A derivation is a tree that has exactly one root. The symbol $\smash{\vdots}$ denotes a transitive connection between a leaf and a node, and $[A]$ means that $A$ is discharged from the assumptions in the standard manner. Rules $\mathrm{(}\mathord{\bot}\textrm{-}{\mathrm{E}_{\mathord{}\mathord{}}}\mathrm{)}$ and $\mathrm{(}\mathord{\neg}\textrm{-}{\mathrm{E}_{\mathord{}\mathord{}}}\mathrm{)}$ are also known as _explosion_ and _reductio ad absurdum_, respectively. A judgment is defined as $\varGamma\vdash A$ or $\varGamma\vdash\bot$, where $\varGamma$ is a multiset of formulae. In proof-theoretic semantics, there is a view that the meanings of logical connectives are given by their introduction rules, and that their elimination rules should be defined _in harmony with_ those introduction rules. Rumfitt attempted to justify logical connectives and inference rules using the notion of harmony proposed by Dummett [8]. Consider the logical connective $\mathit{tonk}$ proposed by Prior [29].
Its introduction rule $\mathrm{(}\mathord{\mathit{tonk}}\textrm{-}{\mathrm{I}_{\mathord{}\mathord{}}}\mathrm{)}$ and elimination rule $\mathrm{(}\mathord{\mathit{tonk}}\textrm{-}{\mathrm{E}_{\mathord{}\mathord{}}}\mathrm{)}$ are as follows: $A_{0}$ $\mathrm{(}\mathord{\mathit{tonk}}\textrm{-}{\mathrm{I}_{\mathord{}\mathord{}}}\mathrm{)}$ $A_{0}\mathbin{\mathit{tonk}}A_{1}$ $A_{0}\mathbin{\mathit{tonk}}A_{1}$ $\mathrm{(}\mathord{\mathit{tonk}}\textrm{-}{\mathrm{E}_{\mathord{}\mathord{}}}\mathrm{)}$ $A_{1}$ . A pair of contiguous introduction and elimination rules is called _harmonious_ if the residue after removing the pair is also a derivation. Such a procedure is called _normalization_. In this section, we let $\leadsto$ denote the normalization procedure. The pair of $\mathrm{(}\mathord{\mathit{tonk}}\textrm{-}{\mathrm{I}_{\mathord{}\mathord{}}}\mathrm{)}$ and $\mathrm{(}\mathord{\mathit{tonk}}\textrm{-}{\mathrm{E}_{\mathord{}\mathord{}}}\mathrm{)}$ is not harmonious because the right-hand side of the following $\leadsto$ relation is not a derivation: $A_{0}$ $\mathrm{(}\mathord{\mathit{tonk}}\textrm{-}{\mathrm{I}_{\mathord{}\mathord{}}}\mathrm{)}$ $A_{0}\mathbin{\mathit{tonk}}A_{1}$ $\mathrm{(}\mathord{\mathit{tonk}}\textrm{-}{\mathrm{E}_{\mathord{}\mathord{}}}\mathrm{)}$ $A_{1}$ $\;\;\leadsto$ $A_{0}$ $A_{1}$ . Rumfitt suggested that ${\textrm{ND}_{\textrm{prop}}}$ also does not enjoy the harmony condition and proposed a notion of _bilateralism_ to construct a harmonious natural deduction. Bilateralism is based on two notions of _acceptance_ and _rejection_ of formulae. They are also called _verification_ and _falsification_ , respectively, by Wansing [41, 42]. Formulae $A$ with _polarity_ are defined as $\mathord{\mathop{+}{A}}$ and $\mathord{\mathop{-}{A}}$. A derivation of root $\mathord{\mathop{+}{A}}$ means that $A$ is accepted. A derivation of root $\mathord{\mathop{-}{A}}$ means that $A$ is rejected. Let $\mathcal{A}$ be a formula with polarity. 
Conjugates $(\mathord{\mathop{+}{A}})^{\ast}$ and $(\mathord{\mathop{-}{A}})^{\ast}$ are defined as $\mathord{\mathop{-}{A}}$ and $\mathord{\mathop{+}{A}}$, respectively. Rumfitt adopted $\mathrm{(}\textrm{Non-contradiction}\mathrm{)}$ and $\mathrm{(}\textrm{Reductio}\mathrm{)}$, which are called _coordination principles_, and defined inference rules for the logical connectives, as shown in Figure 2; these rules are naturally derived from the standard Boolean semantics. In this paper, we call this logic the bilateral natural deduction $\textrm{Bi-ND}_{\textrm{prop}}$. ${\textrm{ND}_{\textrm{prop}}}$ is based on the notion of _unilateralism_ rather than bilateralism. A derivation of root $A$ in ${\textrm{ND}_{\textrm{prop}}}$ means that $A$ is accepted. The following relation holds between ${\textrm{ND}_{\textrm{prop}}}$ and $\textrm{Bi-ND}_{\textrm{prop}}$: ###### Theorem 2.1 (Rumfitt [32]). For any $n\geq 0$, $A_{0},\ldots,A_{n-1}\vdash A$ is provable in ${\textrm{ND}_{\textrm{prop}}}$ if and only if $\mathord{\mathop{+}{A_{0}}},\ldots,\mathord{\mathop{+}{A_{n-1}}}\vdash\mathord{\mathop{+}{A}}$ is provable in $\textrm{Bi-ND}_{\textrm{prop}}$. _Remark._ It is controversial whether explosion and reductio ad absurdum should be regarded as elimination rules for the logical connectives $\bot$ and $\neg$, respectively. Rumfitt’s bilateralism has also been criticized [24]. In this sense, bilateralism remains a work in progress. However, the subject of this paper is not a justification of bilateralism in proof-theoretic semantics.
$\mathcal{A}$ $\mathcal{A}^{\ast}$ $\mathrm{(}\textrm{Non- contradiction}\mathrm{)}$ $\bot$ $[\mathcal{A}]$ $\smash{\vdots}\rule{0.0pt}{8.61108pt}$ $\bot$ $\mathrm{(}\textrm{Reductio}\mathrm{)}$ $\mathcal{A}^{\ast}$ $\mathord{\mathop{-}{A}}$ $\mathrm{(}\mathord{\neg}\textrm{-}{\mathrm{I}_{\mathord{+}\mathord{}}}\mathrm{)}$ $\mathord{\mathop{+}{\operatorname{\neg}{A}}}$ $\mathord{\mathop{+}{\operatorname{\neg}{A}}}$ $\mathrm{(}\mathord{\neg}\textrm{-}{\mathrm{E}_{\mathord{+}\mathord{}}}\mathrm{)}$ $\mathord{\mathop{-}{A}}$ $[\mathord{\mathop{+}{A_{0}}}]$ $\smash{\vdots}\rule{0.0pt}{8.61108pt}$ $\mathord{\mathop{+}{A_{1}}}$ $\mathrm{(}\mathord{\to}\textrm{-}{\mathrm{I}_{\mathord{+}\mathord{}}}\mathrm{)}$ $\mathord{\mathop{+}{A_{0}\to A_{1}}}$ $\mathord{\mathop{+}{A_{0}\to A_{1}}}$ $\mathord{\mathop{+}{A_{0}}}$ $\mathrm{(}\mathord{\to}\textrm{-}{\mathrm{E}_{\mathord{+}\mathord{}}}\mathrm{)}$ $\mathord{\mathop{+}{A_{1}}}$ $\mathord{\mathop{+}{A_{0}}}$ $\mathord{\mathop{+}{A_{1}}}$ $\mathrm{(}\mathord{\wedge}\textrm{-}{\mathrm{I}_{\mathord{+}\mathord{}}}\mathrm{)}$ $\mathord{\mathop{+}{A_{0}\wedge A_{1}}}$ $\mathord{\mathop{+}{A_{0}\wedge A_{1}}}$ $\mathrm{(}\mathord{\wedge}\textrm{-}{\mathrm{E}_{\mathord{+}\mathord{0}}}\mathrm{)}$ $\mathord{\mathop{+}{A_{0}}}$ $\mathord{\mathop{+}{A_{0}\wedge A_{1}}}$ $\mathrm{(}\mathord{\wedge}\textrm{-}{\mathrm{E}_{\mathord{+}\mathord{1}}}\mathrm{)}$ $\mathord{\mathop{+}{A_{1}}}$ $\mathord{\mathop{+}{A_{0}}}$ $\mathrm{(}\mathord{\vee}\textrm{-}{\mathrm{I}_{\mathord{+}\mathord{0}}}\mathrm{)}$ $\mathord{\mathop{+}{A_{0}\vee A_{1}}}$ $\mathord{\mathop{+}{A_{1}}}$ $\mathrm{(}\mathord{\vee}\textrm{-}{\mathrm{I}_{\mathord{+}\mathord{1}}}\mathrm{)}$ $\mathord{\mathop{+}{A_{0}\vee A_{1}}}$ $\mathord{\mathop{+}{A_{0}\vee A_{1}}}$ $[\mathord{\mathop{+}{A_{0}}}]$ $\smash{\vdots}\rule{0.0pt}{8.61108pt}$ $\mathcal{A}$ $[\mathord{\mathop{+}{A_{1}}}]$ $\smash{\vdots}\rule{0.0pt}{8.61108pt}$ $\mathcal{A}$ 
$\mathrm{(}\mathord{\vee}\textrm{-}{\mathrm{E}_{\mathord{+}\mathord{}}}\mathrm{)}$ $\mathcal{A}$ $\mathord{\mathop{+}{A}}$ $\mathrm{(}\mathord{\neg}\textrm{-}{\mathrm{I}_{\mathord{-}\mathord{}}}\mathrm{)}$ $\mathord{\mathop{-}{\operatorname{\neg}{A}}}$ $\mathord{\mathop{-}{\operatorname{\neg}{A}}}$ $\mathrm{(}\mathord{\neg}\textrm{-}{\mathrm{E}_{\mathord{-}\mathord{}}}\mathrm{)}$ $\mathord{\mathop{+}{A}}$ $\mathord{\mathop{+}{A_{0}}}$ $\mathord{\mathop{-}{A_{1}}}$ $\mathrm{(}\mathord{\to}\textrm{-}{\mathrm{I}_{\mathord{-}\mathord{}}}\mathrm{)}$ $\mathord{\mathop{-}{A_{0}\to A_{1}}}$ $\mathord{\mathop{-}{A_{0}\to A_{1}}}$ $\mathrm{(}\mathord{\to}\textrm{-}{\mathrm{E}_{\mathord{-}\mathord{0}}}\mathrm{)}$ $\mathord{\mathop{+}{A_{0}}}$ $\mathord{\mathop{-}{A_{0}\to A_{1}}}$ $\mathrm{(}\mathord{\to}\textrm{-}{\mathrm{E}_{\mathord{-}\mathord{1}}}\mathrm{)}$ $\mathord{\mathop{-}{A_{1}}}$ $\mathord{\mathop{-}{A_{0}}}$ $\mathrm{(}\mathord{\wedge}\textrm{-}{\mathrm{I}_{\mathord{-}\mathord{0}}}\mathrm{)}$ $\mathord{\mathop{-}{A_{0}\wedge A_{1}}}$ $\mathord{\mathop{-}{A_{1}}}$ $\mathrm{(}\mathord{\wedge}\textrm{-}{\mathrm{I}_{\mathord{-}\mathord{1}}}\mathrm{)}$ $\mathord{\mathop{-}{A_{0}\wedge A_{1}}}$ $\mathord{\mathop{-}{A_{0}\wedge A_{1}}}$ $[\mathord{\mathop{-}{A_{0}}}]$ $\smash{\vdots}\rule{0.0pt}{8.61108pt}$ $\mathcal{A}$ $[\mathord{\mathop{-}{A_{1}}}]$ $\smash{\vdots}\rule{0.0pt}{8.61108pt}$ $\mathcal{A}$ $\mathrm{(}\mathord{\wedge}\textrm{-}{\mathrm{E}_{\mathord{-}\mathord{}}}\mathrm{)}$ $\mathcal{A}$ $\mathord{\mathop{-}{A_{0}}}$ $\mathord{\mathop{-}{A_{1}}}$ $\mathrm{(}\mathord{\vee}\textrm{-}{\mathrm{I}_{\mathord{-}\mathord{}}}\mathrm{)}$ $\mathord{\mathop{-}{A_{0}\vee A_{1}}}$ $\mathord{\mathop{-}{A_{0}\vee A_{1}}}$ $\mathrm{(}\mathord{\vee}\textrm{-}{\mathrm{E}_{\mathord{-}\mathord{0}}}\mathrm{)}$ $\mathord{\mathop{-}{A_{0}}}$ $\mathord{\mathop{-}{A_{0}\vee A_{1}}}$ $\mathrm{(}\mathord{\vee}\textrm{-}{\mathrm{E}_{\mathord{-}\mathord{1}}}\mathrm{)}$
$\mathord{\mathop{-}{A_{1}}}$ Figure 2: Rumfitt’s natural deduction $\textrm{Bi-ND}_{\textrm{prop}}$. The natural deduction $\textrm{Bi-ND}_{\textrm{prop}}$ is not symmetric. We extend the language by adding a logical connective $\leftarrow$: (formulae) $\displaystyle\quad A$ $\displaystyle\Coloneqq o\mid(\operatorname{\neg}{A})\mid(A\to A)\mid(A\leftarrow A)\mid(A\wedge A)\mid(A\vee A)\enspace.$ We will use this connective for the function types of continuations in what follows. The connective $\leftarrow$ is called the _but-not_ connective because $A_{0}\leftarrow A_{1}$ is logically equivalent to $A_{0}\wedge\operatorname{\neg}{A_{1}}$ in classical logic. The but-not connective is also written as pseudo-difference $\overset{\cdot}{-}$ [19, 37], subtraction $-$ [30], difference $-$ [3], and co-implication $\mathrel{\mathord{-}\\!\mbox{\raisebox{0.8pt}{{\scriptsize$<$}}}}$ [20, 42]. The but-not connective is a primitive connective in paraconsistent logic, dually to $\to$ being a primitive connective in intuitionistic logic, where $A_{0}\to A_{1}$ is not logically equivalent to $\operatorname{\neg}{A_{0}}\vee A_{1}$. In paraconsistent logic, the sequent calculus consists of sequents $\varGamma\vdash\varDelta$, where $\varGamma$ is empty or a singleton formula, whereas intuitionistic logic can be defined by sequents $\varGamma\vdash\varDelta$, where $\varDelta$ is empty or a singleton formula.
$\mathord{\mathop{+}{A_{0}}}$ $\mathord{\mathop{-}{A_{1}}}$ $\mathrm{(}\mathord{\leftarrow}\textrm{-}{\mathrm{I}_{\mathord{+}\mathord{}}}\mathrm{)}$ $\mathord{\mathop{+}{A_{0}\leftarrow A_{1}}}$ $\mathord{\mathop{+}{A_{0}\leftarrow A_{1}}}$ $\mathrm{(}\mathord{\leftarrow}\textrm{-}{\mathrm{E}_{\mathord{+}\mathord{0}}}\mathrm{)}$ $\mathord{\mathop{+}{A_{0}}}$ $\mathord{\mathop{+}{A_{0}\leftarrow A_{1}}}$ $\mathrm{(}\mathord{\leftarrow}\textrm{-}{\mathrm{E}_{\mathord{+}\mathord{1}}}\mathrm{)}$ $\mathord{\mathop{-}{A_{1}}}$ $[\mathord{\mathop{-}{A_{1}}}]$ $\smash{\vdots}\rule{0.0pt}{8.61108pt}$ $\mathord{\mathop{-}{A_{0}}}$ $\mathrm{(}\mathord{\leftarrow}\textrm{-}{\mathrm{I}_{\mathord{-}\mathord{}}}\mathrm{)}$ $\mathord{\mathop{-}{A_{0}\leftarrow A_{1}}}$ $\mathord{\mathop{-}{A_{0}\leftarrow A_{1}}}$ $\mathord{\mathop{-}{A_{1}}}$ $\mathrm{(}\mathord{\leftarrow}\textrm{-}{\mathrm{E}_{\mathord{-}\mathord{}}}\mathrm{)}$ $\mathord{\mathop{-}{A_{0}}}$ Figure 3: Inference rules for $\leftarrow$. We define a natural deduction $\textrm{Bi- ND}^{\mathord{\leftarrow}}_{\textrm{prop}}$ by adding inference rules, as shown in Figure 3. The connectives $\to$ and $\leftarrow$ are symmetrically located in $\textrm{Bi-ND}^{\mathord{\leftarrow}}_{\textrm{prop}}$ as follows: ###### Proposition 2.2. $\mathord{\mathop{+}{A_{0}\to A_{1}}}\vdash\mathord{\mathop{-}{A_{0}\leftarrow A_{1}}}$, $\mathord{\mathop{-}{A_{0}\to A_{1}}}\vdash\mathord{\mathop{+}{A_{0}\leftarrow A_{1}}}$, $\mathord{\mathop{+}{A_{0}\leftarrow A_{1}}}\vdash\mathord{\mathop{-}{A_{0}\to A_{1}}}$, and $\mathord{\mathop{-}{A_{0}\leftarrow A_{1}}}\vdash\mathord{\mathop{+}{A_{0}\to A_{1}}}$ are provable in $\textrm{Bi-ND}^{\mathord{\leftarrow}}_{\textrm{prop}}$. ###### Proof 2.3. See Appendix B. 
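As a sanity check on Proposition 2.2 (whose official proofs are in Appendix B), one candidate derivation of the first judgment, $\mathord{\mathop{+}{A_{0}\to A_{1}}}\vdash\mathord{\mathop{-}{A_{0}\leftarrow A_{1}}}$, can be sketched as follows; the rule names in the comments refer to Figures 2 and 3, and this is our own reconstruction, not necessarily the one given in the appendix.

```latex
% Assume +(A0 -> A1).  Under the assumptions [+A0] and [-A1], derive bottom,
% then discharge +A0 by (Reductio) and -A1 by (<- -I_-).
\begin{prooftree}
  \AxiomC{$\mathop{+}{(A_0 \to A_1)}$}
  \AxiomC{$[\mathop{+}{A_0}]$}
  \BinaryInfC{$\mathop{+}{A_1}$}                   % by (-> -E_+)
  \AxiomC{$[\mathop{-}{A_1}]$}
  \BinaryInfC{$\bot$}                              % by (Non-contradiction)
  \UnaryInfC{$\mathop{-}{A_0}$}                    % by (Reductio), discharging +A0
  \UnaryInfC{$\mathop{-}{(A_0 \leftarrow A_1)}$}   % by (<- -I_-), discharging -A1
\end{prooftree}
```

The other three judgments follow the same pattern, with the coordination principles mediating between the two polarities.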
Let $\mathfrak{L}_{0}$ and $\mathfrak{L}_{1}$ be languages such that $\mathfrak{L}_{0}\subseteq\mathfrak{L}_{1}$, and let $\mathfrak{S}_{0}$ and $\mathfrak{S}_{1}$ be logics on the languages $\mathfrak{L}_{0}$ and $\mathfrak{L}_{1}$, respectively. We say that $\mathfrak{S}_{1}$ is an extension of $\mathfrak{S}_{0}$ if every formula $\varphi$ that is provable in $\mathfrak{S}_{0}$ is also provable in $\mathfrak{S}_{1}$. An extension $\mathfrak{S}_{1}$ of $\mathfrak{S}_{0}$ is _conservative_ if every formula $\varphi$ in the language $\mathfrak{L}_{0}$ that is provable in $\mathfrak{S}_{1}$ is also provable in $\mathfrak{S}_{0}$. ###### Proposition 2.4. $\textrm{Bi-ND}^{\mathord{\leftarrow}}_{\textrm{prop}}$ is a conservative extension of $\textrm{Bi-ND}_{\textrm{prop}}$. ###### Proof 2.5. This is immediate because $\textrm{Bi-ND}_{\textrm{prop}}$ is complete with respect to the standard two-valued semantics and $\textrm{Bi-ND}^{\mathord{\leftarrow}}_{\textrm{prop}}$ is sound with respect to it. $\textrm{Bi-ND}_{\textrm{prop}}$ and $\textrm{Bi-ND}^{\mathord{\leftarrow}}_{\textrm{prop}}$ contain the following derivable rules: ###### Proposition 2.6. 1. 1. The inference rules $\mathrm{(}\mathord{\wedge}\textrm{-}{\mathrm{I}_{\mathord{-}\mathord{0}}}\mathrm{)}$, $\mathrm{(}\mathord{\wedge}\textrm{-}{\mathrm{I}_{\mathord{-}\mathord{1}}}\mathrm{)}$, $\mathrm{(}\mathord{\wedge}\textrm{-}{\mathrm{E}_{\mathord{-}\mathord{}}}\mathrm{)}$, $\mathrm{(}\mathord{\vee}\textrm{-}{\mathrm{I}_{\mathord{+}\mathord{}}}\mathrm{)}$, $\mathrm{(}\mathord{\vee}\textrm{-}{\mathrm{E}_{\mathord{+}\mathord{0}}}\mathrm{)}$, and $\mathrm{(}\mathord{\vee}\textrm{-}{\mathrm{E}_{\mathord{+}\mathord{1}}}\mathrm{)}$ are derivable in $\textrm{Bi-ND}_{\textrm{prop}}$, and 2. 2.
The inference rules $\mathrm{(}\mathord{\to}\textrm{-}{\mathrm{I}_{\mathord{-}\mathord{}}}\mathrm{)}$, $\mathrm{(}\mathord{\to}\textrm{-}{\mathrm{E}_{\mathord{-}\mathord{0}}}\mathrm{)}$, $\mathrm{(}\mathord{\to}\textrm{-}{\mathrm{E}_{\mathord{-}\mathord{1}}}\mathrm{)}$, $\mathrm{(}\mathord{\leftarrow}\textrm{-}{\mathrm{I}_{\mathord{+}\mathord{}}}\mathrm{)}$, $\mathrm{(}\mathord{\leftarrow}\textrm{-}{\mathrm{E}_{\mathord{+}\mathord{0}}}\mathrm{)}$, and $\mathrm{(}\mathord{\leftarrow}\textrm{-}{\mathrm{E}_{\mathord{+}\mathord{1}}}\mathrm{)}$ are derivable in $\textrm{Bi-ND}^{\mathord{\leftarrow}}_{\textrm{prop}}$. ###### Proof 2.7. See Appendix B. ## 3 Derivation Trees with Proofs in Their Nodes In this section, we introduce derivation trees with proofs in their nodes to mediate between natural deductions and $\lambda$-calculi introduced in Sections 2 and 4, respectively. We add proofs to polarized formulae in the $\neg$-free fragment of $\textrm{Bi-ND}^{\mathord{\leftarrow}}_{\textrm{prop}}$, that is, the $\mathrm{(}\textrm{Non-contradiction}\mathrm{)}$, $\mathrm{(}\textrm{Reductio}\mathrm{)}$, $\mathrm{(}\mathord{\to}\textrm{-}{\mathrm{I}_{\mathord{+}\mathord{}}}\mathrm{)}$, $\mathrm{(}\mathord{\to}\textrm{-}{\mathrm{E}_{\mathord{+}\mathord{}}}\mathrm{)}$, $\mathrm{(}\mathord{\wedge}\textrm{-}{\mathrm{I}_{\mathord{+}\mathord{}}}\mathrm{)}$, $\mathrm{(}\mathord{\wedge}\textrm{-}{\mathrm{E}_{\mathord{+}\mathord{0}}}\mathrm{)}$, $\mathrm{(}\mathord{\wedge}\textrm{-}{\mathrm{E}_{\mathord{+}\mathord{1}}}\mathrm{)}$, $\mathrm{(}\mathord{\vee}\textrm{-}{\mathrm{I}_{\mathord{-}\mathord{}}}\mathrm{)}$, $\mathrm{(}\mathord{\vee}\textrm{-}{\mathrm{E}_{\mathord{-}\mathord{0}}}\mathrm{)}$, $\mathrm{(}\mathord{\vee}\textrm{-}{\mathrm{E}_{\mathord{-}\mathord{1}}}\mathrm{)}$, $\mathrm{(}\mathord{\leftarrow}\textrm{-}{\mathrm{I}_{\mathord{+}\mathord{}}}\mathrm{)}$, $\mathrm{(}\mathord{\leftarrow}\textrm{-}{\mathrm{E}_{\mathord{+}\mathord{0}}}\mathrm{)}$, 
$\mathrm{(}\mathord{\leftarrow}\textrm{-}{\mathrm{E}_{\mathord{+}\mathord{1}}}\mathrm{)}$, $\mathrm{(}\mathord{\leftarrow}\textrm{-}{\mathrm{I}_{\mathord{-}\mathord{}}}\mathrm{)}$, and $\mathrm{(}\mathord{\leftarrow}\textrm{-}{\mathrm{E}_{\mathord{-}\mathord{}}}\mathrm{)}$ fragment, to construct a symmetric $\lambda$-calculus. We note that the other inference rules are derivable by Proposition 2.6. We assume a set of _proof variables_ and write $\alpha$ for a proof variable. Nodes $t\colon\mathord{\mathop{+}{A}}$ and $t\colon\mathord{\mathop{-}{A}}$ in the natural deduction denote that $t$ is a proof for acceptance and rejection of $A$, respectively. A node $T\colon\bot$ in the natural deduction denotes that $T$ is a proof of contradiction. Node $\lambda\alpha.t\colon\mathord{\mathop{+}{A_{0}\to A_{1}}}$ denotes that $\lambda\alpha.t$ is a proof for acceptance of $A_{0}\to A_{1}$ if $\alpha$ is a proof variable for acceptance of $A_{0}$ and $t$ is a proof for acceptance of $A_{1}$. Node $\lambda\alpha.t\colon\mathord{\mathop{-}{A_{0}\leftarrow A_{1}}}$ denotes that $\lambda\alpha.t$ is a proof for rejection of $A_{0}\leftarrow A_{1}$ if $\alpha$ is a proof variable for rejection of $A_{1}$ and $t$ is a proof for rejection of $A_{0}$. Node $t_{0}t_{1}\colon\mathord{\mathop{+}{A_{1}}}$ denotes that $t_{0}t_{1}$ is a proof for acceptance of $A_{1}$ if $t_{0}$ is a proof for acceptance of $A_{0}\to A_{1}$ and $t_{1}$ is a proof for acceptance of $A_{0}$. Node $t_{0}t_{1}\colon\mathord{\mathop{-}{A_{0}}}$ denotes that $t_{0}t_{1}$ is a proof for rejection of $A_{0}$ if $t_{0}$ is a proof for rejection of $A_{0}\leftarrow A_{1}$ and $t_{1}$ is a proof for rejection of $A_{1}$. Node $\mathord{(t_{0},t_{1})}\colon\mathord{\mathop{+}{A_{0}\wedge A_{1}}}$ denotes that $\mathord{(t_{0},t_{1})}$ is a proof for acceptance of $A_{0}\wedge A_{1}$ if $t_{0}$ is a proof for acceptance of $A_{0}$ and $t_{1}$ is a proof for acceptance of $A_{1}$.
Node $\mathord{(t_{0},t_{1})}\colon\mathord{\mathop{-}{A_{0}\vee A_{1}}}$ denotes that $\mathord{(t_{0},t_{1})}$ is a proof for rejection of $A_{0}\vee A_{1}$ if $t_{0}$ is a proof for rejection of $A_{0}$ and $t_{1}$ is a proof for rejection of $A_{1}$. Node $\pi_{0}(t)\colon\mathord{\mathop{+}{A_{0}}}$ denotes that $\pi_{0}(t)$ is a proof for acceptance of $A_{0}$ if $t$ is a proof for acceptance of $A_{0}\wedge A_{1}$. Node $\pi_{0}(t_{0})\colon\mathord{\mathop{-}{A_{0}}}$ denotes that $\pi_{0}(t_{0})$ is a proof for rejection of $A_{0}$ if $t_{0}$ is a proof for rejection of $A_{0}\vee A_{1}$. Nodes $\pi_{1}(t_{0})\colon\mathord{\mathop{+}{A_{1}}}$ and $\pi_{1}(t_{0})\colon\mathord{\mathop{-}{A_{1}}}$ are similar. Node $\mathord{\langle t_{0}\>|\>t_{1}\rangle}\colon\bot$ denotes that $\mathord{\langle t_{0}\>|\>t_{1}\rangle}$ is a proof of contradiction if $t_{0}$ is a proof for acceptance of $A$ and $t_{1}$ is a proof for rejection of $A$. Node $\mu\alpha.T\colon\mathord{\mathop{+}{A}}$ denotes that $\mu\alpha.T$ is a proof for acceptance of $A$ if $\alpha$ is a proof variable for rejection of $A$ and $T$ is a proof of contradiction. Node $\mu\alpha.T\colon\mathord{\mathop{-}{A}}$ denotes that $\mu\alpha.T$ is a proof for rejection of $A$ if $\alpha$ is a proof variable for acceptance of $A$ and $T$ is a proof of contradiction. We formally define the set of proofs in a Curry-style bilateral $\lambda$-calculus, as shown in Figure 4, where $c$ ranges over constants used to add logical axioms. (proofs) $\displaystyle t$ $\displaystyle\Coloneqq c\mid\alpha\mid\lambda\alpha.t\mid tt\mid\mathord{(t,t)}\mid\pi_{0}(t)\mid\pi_{1}(t)\mid\mu\alpha.T$ $\displaystyle T$ $\displaystyle\Coloneqq\mathord{\langle t\>|\>t\rangle}$ Figure 4: Proofs of the negation-free natural deduction. Let $\varGamma$ be a set of nodes. Judgment $\varGamma\vdash t\colon\mathord{\mathop{+}{A}}$ denotes that $t$ is a proof for acceptance of $A$ under $\varGamma$.
Judgment $\varGamma\vdash t\colon\mathord{\mathop{-}{A}}$ denotes that $t$ is a proof for rejection of $A$ under $\varGamma$. Judgment $\varGamma\vdash T\colon\bot$ denotes that $T$ is a proof for contradiction under $\varGamma$. ## 4 Bilateral Lambda-Calculi In this section, we construct a Church-style symmetric $\lambda$-calculus based on bilateralism and define a call-by-value sub-calculus. ### 4.1 Definition and Basic Properties We respectively call proofs for acceptance and rejection _expressions_ and _continuations_. We distinguish proof variables for acceptance from those for rejection. We construct an alternative symmetric $\lambda$-calculus called a bilateral $\lambda$-calculus (BLC). We define types, polarized types, expressions, continuations, commands, and syntactical objects as shown in Figure 5. (types) $\displaystyle A$ $\displaystyle\Coloneqq o\mid(A\to A)\mid(A\leftarrow A)\mid(A\wedge A)\mid(A\vee A)$ (expressions) $\displaystyle\mathit{E}$ $\displaystyle\Coloneqq\mathit{cst}^{o}\mid\mathit{x}^{A}\mid\lambda\mathit{x}^{A}.\mathit{E}\mid\mathit{E}\mathit{E}\mid\mathord{(\mathit{E},\mathit{E})}\mid\pi_{0}(\mathit{E})\mid\pi_{1}(\mathit{E})\mid\mu a^{A}.N$ (continuations) $\displaystyle\mathit{C}$ $\displaystyle\Coloneqq\bullet^{o}\mid a^{A}\mid\lambda a^{A}.\mathit{C}\mid\mathit{C}\mathit{C}\mid\mathord{(\mathit{C},\mathit{C})}\mid\pi_{0}(\mathit{C})\mid\pi_{1}(\mathit{C})\mid\mu\mathit{x}^{A}.N$ (commands) $\displaystyle N$ $\displaystyle\Coloneqq\mathord{\langle\mathit{E}\>|\>\mathit{C}\rangle}$ (syntactical objects) $\displaystyle D$ $\displaystyle\Coloneqq\mathit{E}\mid\mathit{C}\mid N$ Figure 5: The bilateral lambda-calculus BLC. Expression $\mathit{cst}^{o}$ denotes a constant. Expression $\mathit{x}^{A}$ denotes an expression variable. Expression $\lambda\mathit{x}^{A}.\mathit{E}$ denotes a $\lambda$-abstraction of expression $\mathit{E}$ by $\mathit{x}^{A}$. 
Expression $\mathit{E}_{0}\mathit{E}_{1}$ denotes an application of function $\mathit{E}_{0}$ to expression $\mathit{E}_{1}$. Expression $\mathord{(\mathit{E}_{0},\mathit{E}_{1})}$ denotes a pair of expressions $\mathit{E}_{0}$ and $\mathit{E}_{1}$. Expressions $\pi_{0}(\mathit{E})$ and $\pi_{1}(\mathit{E})$ are projections. Continuations are defined symmetrically to expressions. Continuation $\bullet^{o}$ is the unique constant continuation for $o$. Because the definition is based on bilateralism, the calculus is involutive with respect to polarities. Commands are first-class citizens. A command can be abstracted by an expression variable $\mathit{x}^{A}$ or a continuation variable $a^{A}$. A command $N$ abstracted by $a^{A}$ is the expression $\mu a^{A}.N$; a command abstracted by $\mathit{x}^{A}$ is the continuation $\mu\mathit{x}^{A}.N$. A similar idea can be seen in the $\bar{\lambda}\mu\tilde{\mu}$-calculus proposed by Curien and Herbelin [3]. Expressions, continuations, and commands are collectively called syntactical objects. We assume that applications bind more tightly than $\lambda$-abstractions. We omit superscripts that denote types when the context renders them obvious.
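The involutivity of the syntax can be made concrete with a small executable sketch. The following Python fragment is our own illustration, not part of BLC itself: it models the grammar of Figure 5 with one shared shape for both sides, tagging variables with a polarity, so that the expression/continuation mirror image is literally a tag-flipping involution. All class and function names here are ours.

```python
from dataclasses import dataclass
from typing import Union

# A miniature of the BLC grammar (Figure 5): the same constructors serve
# both polarities; a boolean on variables records whether a node lives on
# the expression (positive) or continuation (negative) side.

@dataclass(frozen=True)
class Var:            # x^A (positive) or a^A (negative)
    name: str
    positive: bool

@dataclass(frozen=True)
class Lam:            # lambda-abstraction over a same-polarity body
    var: Var
    body: "Term"

@dataclass(frozen=True)
class App:            # application E0 E1 / C0 C1
    fn: "Term"
    arg: "Term"

@dataclass(frozen=True)
class Pair:           # (E0, E1) / (C0, C1)
    left: "Term"
    right: "Term"

@dataclass(frozen=True)
class Proj:           # pi_0, pi_1
    index: int
    of: "Term"

@dataclass(frozen=True)
class Mu:             # mu binds an opposite-polarity variable over a command
    var: Var
    cmd: "Command"

Term = Union[Var, Lam, App, Pair, Proj, Mu]

@dataclass(frozen=True)
class Command:        # <E | C>
    expr: Term
    cont: Term

def dual(t: Term) -> Term:
    """Flip every polarity tag, swapping expressions and continuations."""
    if isinstance(t, Var):
        return Var(t.name, not t.positive)
    if isinstance(t, Lam):
        return Lam(dual(t.var), dual(t.body))
    if isinstance(t, App):
        return App(dual(t.fn), dual(t.arg))
    if isinstance(t, Pair):
        return Pair(dual(t.left), dual(t.right))
    if isinstance(t, Proj):
        return Proj(t.index, dual(t.of))
    if isinstance(t, Mu):
        # dual(<E | C>) is read as <dual(C) | dual(E)>
        return Mu(dual(t.var), Command(dual(t.cmd.cont), dual(t.cmd.expr)))
    raise TypeError(t)

# The positive identity lambda x. x and its mirror-image continuation:
identity = Lam(Var("x", True), Var("x", True))
assert dual(identity) == Lam(Var("x", False), Var("x", False))
assert dual(dual(identity)) == identity   # dual is an involution
```

The point of the sketch is only structural: the type system of Figure 6 keeps the two polarities apart, but syntactically each side is the exact mirror of the other, which is what `dual` witnesses.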
$\varGamma\vdash_{+}\mathit{E}\colon A$ $\varGamma\vdash_{-}\mathit{C}\colon A$ $\mathrm{(}\textrm{Non-contradiction}\mathrm{)}$ $\varGamma\vdash_{\hskip 1.5pt\mathrm{o}\hskip 1.5pt}\mathord{\langle\mathit{E}\>|\>\mathit{C}\rangle}$ $\varPi;\varSigma,a\colon A\vdash_{\hskip 1.5pt\mathrm{o}\hskip 1.5pt}N$ $\mathrm{(}\textrm{Reductio}_{+}\mathrm{)}$ $\varPi;\varSigma\vdash_{+}\mu a^{A}.N\colon A$ (Constant+) $\varGamma\vdash_{+}\mathit{cst}^{o}\colon o$ $\mathrm{(}\textrm{Identity}_{+}\mathrm{)}$ $\varGamma,\mathit{x}^{A}\colon A\vdash_{+}\mathit{x}^{A}\colon A$ $\varPi,\mathit{x}\colon A_{0};\varSigma\vdash_{+}\mathit{E}\colon A_{1}$ $\mathrm{(}\mathord{\to}\textrm{-}{\mathrm{I}_{\mathord{+}\mathord{}}}\mathrm{)}$ $\varPi;\varSigma\vdash_{+}\lambda\mathit{x}^{A_{0}}.\mathit{E}\colon A_{0}\to A_{1}$ $\varGamma\vdash_{+}\mathit{E}_{0}\colon A_{0}\to A_{1}$ $\varGamma\vdash_{+}\mathit{E}_{1}\colon A_{0}$ $\mathrm{(}\mathord{\to}\textrm{-}{\mathrm{E}_{\mathord{+}\mathord{}}}\mathrm{)}$ $\varGamma\vdash_{+}\mathit{E}_{0}\mathit{E}_{1}\colon A_{1}$ $\varGamma\vdash_{+}\mathit{E}_{0}\colon A_{0}$ $\varGamma\vdash_{+}\mathit{E}_{1}\colon A_{1}$ $\mathrm{(}\mathord{\wedge}\textrm{-}{\mathrm{I}_{\mathord{+}\mathord{}}}\mathrm{)}$ $\varGamma\vdash_{+}\mathord{(\mathit{E}_{0},\mathit{E}_{1})}\colon A_{0}\wedge A_{1}$ $\varGamma\vdash_{+}\mathit{E}\colon A_{0}\wedge A_{1}$ $\mathrm{(}\mathord{\wedge}\textrm{-}{\mathrm{E}_{\mathord{+}\mathord{0}}}\mathrm{)}$ $\varGamma\vdash_{+}\pi_{0}(\mathit{E})\colon A_{0}$ $\varGamma\vdash_{+}\mathit{E}\colon A_{0}\wedge A_{1}$ $\mathrm{(}\mathord{\wedge}\textrm{-}{\mathrm{E}_{\mathord{+}\mathord{1}}}\mathrm{)}$ $\varGamma\vdash_{+}\pi_{1}(\mathit{E})\colon A_{1}$ $\varPi,\mathit{x}\colon A;\varSigma\vdash_{\hskip 1.5pt\mathrm{o}\hskip 1.5pt}N$ $\mathrm{(}\textrm{Reductio}_{-}\mathrm{)}$ $\varPi;\varSigma\vdash_{-}\mu\mathit{x}^{A}.N\colon A$ (Constant-) $\varGamma\vdash_{-}\bullet^{o}\colon o$
$\mathrm{(}\textrm{Identity}_{-}\mathrm{)}$ $\varGamma,a^{A}\colon A\vdash_{-}a^{A}\colon A$ $\varPi;\varSigma,a\colon A_{1}\vdash_{-}\mathit{C}\colon A_{0}$ $\mathrm{(}\mathord{\leftarrow}\textrm{-}{\mathrm{I}_{\mathord{-}\mathord{}}}\mathrm{)}$ $\varPi;\varSigma\vdash_{-}\lambda a^{A_{1}}.\mathit{C}\colon A_{0}\leftarrow A_{1}$ $\varGamma\vdash_{-}\mathit{C}_{0}\colon A_{0}\leftarrow A_{1}$ $\varGamma\vdash_{-}\mathit{C}_{1}\colon A_{1}$ $\mathrm{(}\mathord{\leftarrow}\textrm{-}{\mathrm{E}_{\mathord{-}\mathord{}}}\mathrm{)}$ $\varGamma\vdash_{-}\mathit{C}_{0}\mathit{C}_{1}\colon A_{0}$ $\varGamma\vdash_{-}\mathit{C}_{0}\colon A_{0}$ $\varGamma\vdash_{-}\mathit{C}_{1}\colon A_{1}$ $\mathrm{(}\mathord{\vee}\textrm{-}{\mathrm{I}_{\mathord{-}\mathord{}}}\mathrm{)}$ $\varGamma\vdash_{-}\mathord{(\mathit{C}_{0},\mathit{C}_{1})}\colon A_{0}\vee A_{1}$ $\varGamma\vdash_{-}\mathit{C}\colon A_{0}\vee A_{1}$ $\mathrm{(}\mathord{\vee}\textrm{-}{\mathrm{E}_{\mathord{-}\mathord{0}}}\mathrm{)}$ $\varGamma\vdash_{-}\pi_{0}(\mathit{C})\colon A_{0}$ $\varGamma\vdash_{-}\mathit{C}\colon A_{0}\vee A_{1}$ $\mathrm{(}\mathord{\vee}\textrm{-}{\mathrm{E}_{\mathord{-}\mathord{1}}}\mathrm{)}$ $\varGamma\vdash_{-}\pi_{1}(\mathit{C})\colon A_{1}$ Figure 6: A type system of BLC. 
Figure 6 shows the type system of BLC consisting of judgments $\varGamma\vdash_{+}\mathit{E}\colon A$, $\varGamma\vdash_{-}\mathit{C}\colon A$, and $\varGamma\vdash_{\hskip 1.5pt\mathrm{o}\hskip 1.5pt}N$, where type environments $\varGamma$ are defined as follows: (type environments) $\displaystyle\quad\varGamma$ $\displaystyle\Coloneqq\varPi;\varSigma$ $\displaystyle\qquad\quad\varPi$ $\displaystyle\Coloneqq\varnothing\mid\varPi,\mathit{x}\colon A$ $\displaystyle\qquad\quad\varSigma$ $\displaystyle\Coloneqq\varnothing\mid\varSigma,a\colon A\enspace.$ Judgments $\varPi;\varSigma\vdash_{+}\mathit{E}\colon A$, $\varPi;\varSigma\vdash_{-}\mathit{C}\colon A$, and $\varPi;\varSigma\vdash_{\hskip 1.5pt\mathrm{o}\hskip 1.5pt}N$ correspond to $\\{\,\mathord{\mathop{+}{A}}\mid A\in\varPi\,\\},\\{\,\mathord{\mathop{-}{A}}\mid A\in\varSigma\,\\}\vdash\mathord{\mathop{+}{A}}$, $\\{\,\mathord{\mathop{+}{A}}\mid A\in\varPi\,\\},\\{\,\mathord{\mathop{-}{A}}\mid A\in\varSigma\,\\}\vdash\mathord{\mathop{-}{A}}$, and $\\{\,\mathord{\mathop{+}{A}}\mid A\in\varPi\,\\},\\{\,\mathord{\mathop{-}{A}}\mid A\in\varSigma\,\\}\vdash\bot$, respectively. The type system contains rules about commands. Rule $\mathrm{(}\textrm{Non-contradiction}\mathrm{)}$ builds a command from an expression and a continuation. Moreover, a command occurring in a derivation does not necessarily end it; the derivation may be continued by $\mathrm{(}\textrm{Reductio}_{+}\mathrm{)}$ or $\mathrm{(}\textrm{Reductio}_{-}\mathrm{)}$. The other inference rules about expressions are defined in a standard manner. The inference rules about continuations are defined symmetrically to those about expressions. Substitutions $[\mathit{E}/\mathit{x}]$ and $[\mathit{C}/a]$ (denoted by $\theta$) are inductively defined in a standard component-wise and capture-avoiding manner. We write $\operatorname{fev}(\mathit{E})$ and $\operatorname{fev}(\mathit{C})$ for free expression variables in $\mathit{E}$ and $\mathit{C}$, respectively.
We also write $\operatorname{fcv}(\mathit{E})$ and $\operatorname{fcv}(\mathit{C})$ for free continuation variables in $\mathit{E}$ and $\mathit{C}$, respectively. The bilateral $\lambda$-calculus is well behaved. The usual weakening property holds: ###### Proposition 4.1. 1. 1. $\varPi;\varSigma\vdash_{+}\mathit{E}\colon A_{0}$ implies $\varPi,\mathit{x}\colon A;\varSigma\vdash_{+}\mathit{E}\colon A_{0}$ and $\varPi;\varSigma,a\colon A\vdash_{+}\mathit{E}\colon A_{0}$, 2. 2. $\varPi;\varSigma\vdash_{-}\mathit{C}\colon A_{0}$ implies $\varPi,\mathit{x}\colon A;\varSigma\vdash_{-}\mathit{C}\colon A_{0}$ and $\varPi;\varSigma,a\colon A\vdash_{-}\mathit{C}\colon A_{0}$, and 3. 3. $\varPi;\varSigma\vdash_{\hskip 1.5pt\mathrm{o}\hskip 1.5pt}N$ implies $\varPi,\mathit{x}\colon A;\varSigma\vdash_{\hskip 1.5pt\mathrm{o}\hskip 1.5pt}N$ and $\varPi;\varSigma,a\colon A\vdash_{\hskip 1.5pt\mathrm{o}\hskip 1.5pt}N$. ###### Proof 4.2. By induction on the derivation. The substitution lemma also holds, as follows: ###### Lemma 4.3. 1. 1. Assume $\varPi,\mathit{x}\colon A_{0};\varSigma\vdash_{+}\mathit{E}^{\prime}\colon A_{1}$ and $\varPi;\varSigma\vdash_{+}\mathit{E}\colon A_{0}$. Then, $\varPi;\varSigma\vdash_{+}[\mathit{E}/\mathit{x}]\mathit{E}^{\prime}\colon A_{1}$ holds. 2. 2. Assume $\varPi,\mathit{x}\colon A_{0};\varSigma\vdash_{-}\mathit{C}\colon A_{1}$ and $\varPi;\varSigma\vdash_{+}\mathit{E}\colon A_{0}$. Then, $\varPi;\varSigma\vdash_{-}[\mathit{E}/\mathit{x}]\mathit{C}\colon A_{1}$ holds. 3. 3. Assume $\varPi,\mathit{x}\colon A;\varSigma\vdash_{\hskip 1.5pt\mathrm{o}\hskip 1.5pt}N$ and $\varPi;\varSigma\vdash_{+}\mathit{E}\colon A$. Then, $\varPi;\varSigma\vdash_{\hskip 1.5pt\mathrm{o}\hskip 1.5pt}[\mathit{E}/\mathit{x}]N$ holds. 4. 4. Assume $\varPi;\varSigma,a\colon A_{0}\vdash_{+}\mathit{E}\colon A_{1}$ and $\varPi;\varSigma\vdash_{-}\mathit{C}\colon A_{0}$. Then, $\varPi;\varSigma\vdash_{+}[\mathit{C}/a]\mathit{E}\colon A_{1}$ holds. 5. 5.
Assume $\varPi;\varSigma,a\colon A_{0}\vdash_{-}\mathit{C}^{\prime}\colon A_{1}$ and $\varPi;\varSigma\vdash_{-}\mathit{C}\colon A_{0}$. Then, $\varPi;\varSigma\vdash_{-}[\mathit{C}/a]\mathit{C}^{\prime}\colon A_{1}$ holds. 6. 6. Assume $\varPi;\varSigma,a\colon A\vdash_{\hskip 1.5pt\mathrm{o}\hskip 1.5pt}N$ and $\varPi;\varSigma\vdash_{-}\mathit{C}\colon A$. Then, $\varPi;\varSigma\vdash_{\hskip 1.5pt\mathrm{o}\hskip 1.5pt}[\mathit{C}/a]N$ holds. ###### Proof 4.4. By induction on derivation. The bilateral $\lambda$-calculus enjoys the type uniqueness property, that is, every expression and continuation has a unique positive and negative type, respectively, as follows: ###### Proposition 4.5. 1. 1. If $\varGamma\vdash_{+}\mathit{E}\colon A_{0}$ and $\varGamma\vdash_{+}\mathit{E}\colon A_{1}$, then $A_{0}$ and $A_{1}$ are the same. 2. 2. If $\varGamma\vdash_{-}\mathit{C}\colon A_{0}$ and $\varGamma\vdash_{-}\mathit{C}\colon A_{1}$, then $A_{0}$ and $A_{1}$ are the same. ###### Proof 4.6. The proposition holds immediately from the definition of the type system. ### 4.2 The Call-by-Value Lambda-Calculus CbV-BLC We define a call-by-value bilateral $\lambda$-calculus CbV-BLC. Types, expressions, continuations, commands, and typing rules are the same as those of BLC. The values and the call-by-value evaluation contexts for expressions of CbV-BLC are defined as shown in Figure 7. (values) $\displaystyle V$ $\displaystyle\Coloneqq\mathit{cst}\mid\mathit{x}\mid\lambda\mathit{x}.\mathit{E}\mid\mathord{(V,V)}\mid\pi_{0}(V)\mid\pi_{1}(V)\mid\mu a.\mathord{\langle V\>|\>\pi_{0}(a)\rangle}\mid\mu a.\mathord{\langle V\>|\>\pi_{1}(a)\rangle}$ (contexts) $\displaystyle\mathcal{E}$ $\displaystyle\Coloneqq\\{-\\}\mid\mathcal{E}\mathit{E}\mid V\mathcal{E}\mid\mathord{(\mathcal{E},\mathit{E})}\mid\mathord{(V,\mathcal{E})}\mid\pi_{0}(\mathcal{E})\mid\pi_{1}(\mathcal{E})\enspace.$ Figure 7: Values and contexts of CbV-BLC. 
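Before turning to the equations of CbV-BLC, the intended reading of commands and $\mu$-abstractions can be illustrated with a shallow embedding in Python. This is our own informal operational reading, not the calculus itself: an expression is modeled as a function awaiting a continuation, a continuation as a function consuming a value, and the command $\mathord{\langle\mathit{E}\>|\>\mathit{C}\rangle}$ as plain application. All helper names (`cst`, `command`, `mu`, `mu_neg`) are ours.

```python
# A hypothetical shallow embedding of commands:
#   expression  E  ~  function taking a continuation k
#   continuation C ~  function taking a value v
#   command <E|C>  ~  E(C)

def cst(n):                  # a constant expression: hands n to its continuation
    return lambda k: k(n)

def command(E, C):           # <E | C>
    return E(C)

def mu(body):                # mu a. N -- body maps the continuation a to a command
    return lambda k: body(k)

def mu_neg(body):            # mu x. N -- symmetric: a continuation built from a command
    return lambda v: body(v)

out = []
collect = out.append         # a continuation that records the value it receives

# <mu a. <3 | a> | collect>: the bound continuation variable a is replaced by
# collect, mirroring the CbV equation  <mu a. N | C> = [C/a]N.
command(mu(lambda a: command(cst(3), a)), collect)

# <4 | mu x. collect(x + 1)>: the value 4 is bound to x, mirroring
# <V | mu x. N> = [V/x]N.
command(cst(4), mu_neg(lambda x: collect(x + 1)))

assert out == [3, 5]
```

Under this reading the two $\mu$-binders are exactly symmetric: one captures the surrounding continuation, the other captures the incoming value, which is the operational content of the two substitution equations for commands in Figure 8.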
An evaluation context $\mathcal{E}$ is an expression with a hole $\\{-\\}$. The expression obtained by filling the hole of $\mathcal{E}$ with an expression $\mathit{E}$ is denoted by $\mathcal{E}\\{\mathit{E}\\}$. The equations of CbV-BLC are shown in Figure 8. $\displaystyle(\lambda\mathit{x}.\mathit{E})V$ $\displaystyle=_{\mathit{v}}[V/\mathit{x}]\mathit{E}$ $\displaystyle\lambda\mathit{x}.V\mathit{x}$ $\displaystyle=_{\mathit{v}}V$ if $\mathit{x}\not\in\operatorname{fev}(V)$ $\displaystyle\pi_{0}(\mathord{(V_{0},V_{1})})$ $\displaystyle=_{\mathit{v}}V_{0}$ $\displaystyle\pi_{1}(\mathord{(V_{0},V_{1})})$ $\displaystyle=_{\mathit{v}}V_{1}$ $\displaystyle\mathord{(\pi_{0}(V),\pi_{1}(V))}$ $\displaystyle=_{\mathit{v}}V$ $\displaystyle\mu a.\mathord{\langle\mathit{E}\>|\>a\rangle}$ $\displaystyle=_{\mathit{v}}\mathit{E}$ if $a\not\in\operatorname{fcv}(\mathit{E})$ $\displaystyle(\lambda a.\mathit{C}_{0})\mathit{C}_{1}$ $\displaystyle=_{\mathit{v}}[\mathit{C}_{1}/a]\mathit{C}_{0}$ $\displaystyle\lambda a.\mathit{C}a$ $\displaystyle=_{\mathit{v}}\mathit{C}$ if $a\not\in\operatorname{fcv}(\mathit{C})$ $\displaystyle\pi_{0}(\mathord{(\mathit{C}_{0},\mathit{C}_{1})})$ $\displaystyle=_{\mathit{v}}\mathit{C}_{0}$ $\displaystyle\pi_{1}(\mathord{(\mathit{C}_{0},\mathit{C}_{1})})$ $\displaystyle=_{\mathit{v}}\mathit{C}_{1}$ $\displaystyle\mathord{(\pi_{0}(\mathit{C}),\pi_{1}(\mathit{C}))}$ $\displaystyle=_{\mathit{v}}\mathit{C}$ $\displaystyle\mu\mathit{x}.\mathord{\langle\mathit{x}\>|\>\mathit{C}\rangle}$ $\displaystyle=_{\mathit{v}}\mathit{C}$ if $\mathit{x}\not\in\operatorname{fev}(\mathit{C})$ $\mathord{\langle V\>|\>\mu\mathit{x}.N\rangle}=_{\mathit{v}}[V/\mathit{x}]N$ $\mathord{\langle\mu a.N\>|\>\mathit{C}\rangle}=_{\mathit{v}}[\mathit{C}/a]N$ $\mathord{\langle\mathcal{E}\\{\mathit{E}\\}\>|\>\mathit{C}\rangle}=_{\mathit{v}}\mathord{\langle\mathit{E}\>|\>\mu\mathit{x}.\mathord{\langle\mathcal{E}\\{\mathit{x}\\}\>|\>\mathit{C}\rangle}\rangle}$ if $\mathit{x}$ is fresh 
Figure 8: The equations of CbV-BLC. Careful readers may wonder why $\pi_{0}(V)$ and $\pi_{1}(V)$ are values; such value forms often appear in $\lambda$-calculi based on categorical semantics (cf. Definition 7.7 in Selinger’s paper [33] and Figure 2 in Wadler’s paper [39]). We also note that $\mu a.\mathord{\langle V\>|\>\pi_{0}(a)\rangle}$ and $\mu a.\mathord{\langle V\>|\>\pi_{1}(a)\rangle}$ are values for $A\vee B$, namely, they denote the left and right injections of $V$, respectively. We can define case expressions using pairs of continuations as follows: $\mathrm{inl}(\mathit{E})\equiv\mu a.\mathord{\langle\mathit{E}\>|\>\pi_{0}(a)\rangle}\qquad\qquad\mathrm{inr}(\mathit{E})\equiv\mu a.\mathord{\langle\mathit{E}\>|\>\pi_{1}(a)\rangle}$ $\mathrm{case}(\mathit{E},\mathit{x}_{0}.\mathit{E}_{0},\mathit{x}_{1}.\mathit{E}_{1})\equiv\mu a.\mathord{\langle\mathit{E}\>|\>\mathord{(\mu\mathit{x}_{0}.\mathord{\langle\mathit{E}_{0}\>|\>a\rangle},\mu\mathit{x}_{1}.\mathord{\langle\mathit{E}_{1}\>|\>a\rangle})}\rangle}$ $\varGamma\vdash_{+}\mathit{E}\colon A_{0}$ $\varGamma\vdash_{+}\mathrm{inl}(\mathit{E})\colon A_{0}\vee A_{1}$ $\varGamma\vdash_{+}\mathit{E}\colon A_{1}$ $\varGamma\vdash_{+}\mathrm{inr}(\mathit{E})\colon A_{0}\vee A_{1}$ $\varPi;\varSigma\vdash_{+}\mathit{E}\colon A_{0}\vee A_{1}$ $\varPi,\mathit{x}_{0}\colon A_{0};\varSigma\vdash_{+}\mathit{E}_{0}\colon A$ $\varPi,\mathit{x}_{1}\colon A_{1};\varSigma\vdash_{+}\mathit{E}_{1}\colon A$ $\varPi;\varSigma\vdash_{+}\mathrm{case}(\mathit{E},\mathit{x}_{0}.\mathit{E}_{0},\mathit{x}_{1}.\mathit{E}_{1})\colon A$ $\displaystyle\mathord{\langle\mathrm{case}(\mathrm{inl}(V),\mathit{x}_{0}.\mathit{E}_{0},\mathit{x}_{1}.\mathit{E}_{1})\>|\>\mathit{C}\rangle}$ $\displaystyle\equiv\mathord{\langle\mu a.\mathord{\langle\mu a_{2}.\mathord{\langle 
V\>|\>\pi_{0}(a_{2})\rangle}\>|\>\mathord{(\mu\mathit{x}_{0}.\mathord{\langle\mathit{E}_{0}\>|\>a\rangle},\mu\mathit{x}_{1}.\mathord{\langle\mathit{E}_{1}\>|\>a\rangle})}\rangle}\>|\>\mathit{C}\rangle}$ $\displaystyle=_{\mathit{v}}\mathord{\langle\mu a_{2}.\mathord{\langle V\>|\>\pi_{0}(a_{2})\rangle}\>|\>\mathord{(\mu\mathit{x}_{0}.\mathord{\langle\mathit{E}_{0}\>|\>\mathit{C}\rangle},\mu\mathit{x}_{1}.\mathord{\langle\mathit{E}_{1}\>|\>\mathit{C}\rangle})}\rangle}$ $\displaystyle=_{\mathit{v}}\mathord{\langle V\>|\>\pi_{0}(\mathord{(\mu\mathit{x}_{0}.\mathord{\langle\mathit{E}_{0}\>|\>\mathit{C}\rangle},\mu\mathit{x}_{1}.\mathord{\langle\mathit{E}_{1}\>|\>\mathit{C}\rangle})})\rangle}=_{\mathit{v}}\mathord{\langle V\>|\>\mu\mathit{x}_{0}.\mathord{\langle\mathit{E}_{0}\>|\>\mathit{C}\rangle}\rangle}=_{\mathit{v}}\mathord{\langle[V/\mathit{x}_{0}]\mathit{E}_{0}\>|\>\mathit{C}\rangle}\enspace.$ (types) $\displaystyle A$ $\displaystyle\Coloneqq\chi\mid(A\land A)\mid(A\vee A)\mid(\operatorname{\neg}{A})$ (terms) $\displaystyle M$ $\displaystyle\Coloneqq x\mid\langle M,M\rangle\mid\langle M\rangle{\tt inl}\mid\langle M\rangle{\tt inr}\mid[K]{\tt not}\mid(S).\alpha$ (coterms) $\displaystyle K$ $\displaystyle\Coloneqq\alpha\mid[K,K]\mid{\tt fst}[K]\mid{\tt snd}[K]\mid{\tt not}\langle M\rangle\mid x.(S)$ (statements) $\displaystyle S$ $\displaystyle\Coloneqq M\mathbin{\bullet}K$ (syntactical objects) $\displaystyle O$ $\displaystyle\Coloneqq M\mid K\mid S$ Figure 9: The syntax of the dual calculus. The calculus CbV-BLC is equivalent to $\textrm{CbV-DC}_{\rightarrow\leftarrow}$, a sub-calculus of the call-by-value dual calculus by Wadler [39] extended with the but-not connective. Types, terms, coterms, statements, and syntactical objects are shown in Figure 9. A key difference from BLC is that the dual calculus adopts $\neg$ as a primitive connective, while function types are syntactic sugar. See Wadler’s paper [39] or Appendix A for the details. 
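The definition of case expressions via pairs of continuations above has a direct continuation-passing reading: a continuation for $A\vee B$ is a pair of continuations, and an injection selects one component. The following Python sketch (with illustrative names `inl`, `inr`, and `case` that are not part of the calculus) informally mirrors the reduction of $\mathord{\langle\mathrm{case}(\mathrm{inl}(V),\mathit{x}_{0}.\mathit{E}_{0},\mathit{x}_{1}.\mathit{E}_{1})\>|\>\mathit{C}\rangle}$ to $\mathord{\langle[V/\mathit{x}_{0}]\mathit{E}_{0}\>|\>\mathit{C}\rangle}$:

```python
# A continuation for a sum type A ∨ B is modeled as a pair of
# continuations: the first consumes an A, the second a B.
def inl(v):
    # Left injection: given the two branch continuations, feed v to the first.
    return lambda k0, k1: k0(v)

def inr(v):
    # Right injection: feed v to the second branch continuation.
    return lambda k0, k1: k1(v)

def case(e, f0, f1):
    # Case analysis hands e the two branch continuations, each of which
    # applies its branch function and passes the result to the outer k.
    return lambda k: e(lambda x0: k(f0(x0)), lambda x1: k(f1(x1)))

# case(inl(v), f0, f1) hands v to the first branch, as in the reduction above.
result = []
case(inl(3), lambda x: x + 1, lambda x: x * 10)(result.append)
# result is now [4]
```

This is only a model of the untyped behavior; the calculus itself fixes the evaluation order through its equations and contexts.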
We can define translations between CbV-BLC and $\textrm{CbV-DC}_{\rightarrow\leftarrow}$. Consequently, the consistency of our call-by-value calculus is obtained from the consistency of the call-by-value dual calculus. Specifically, we can obtain the following: ###### Theorem 4.7. There exist translations $(-)^{\sharp}$ from CbV-BLC into $\textrm{CbV-DC}_{\rightarrow\leftarrow}$ and $(-)^{\flat}$ from $\textrm{CbV-DC}_{\rightarrow\leftarrow}$ into CbV-BLC, which satisfy: * • $D_{0}=_{\mathit{v}}D_{1}$ implies $(D_{0})^{\sharp}=_{\mathit{dcv}}(D_{1})^{\sharp}$, * • $O_{0}=_{\mathit{dcv}}O_{1}$ implies $(O_{0})^{\flat}=_{\mathit{v}}(O_{1})^{\flat}$, * • $((D)^{\sharp})^{\flat}=_{\mathit{v}}D$ holds, and * • $((O)^{\flat})^{\sharp}=_{\mathit{dcv}}O$ holds, where $=_{\mathit{dcv}}$ is the equality relation of $\textrm{CbV-DC}_{\rightarrow\leftarrow}$. ###### Proof 4.8. See Appendix A. The theorem lets us reason about the call-by-value variant of BLC via the call-by-value dual calculus. Furthermore, the theorem shows that the but-not type $A\leftarrow B$ in the call-by-value dual calculus can be regarded as the function type for continuations. The theorem also reveals the difference between the dual calculus, whose negation type $\operatorname{\neg}{A}$ is not involutive, and BLC, whose polarities $\mathord{\mathop{+}{A}}$ and $\mathord{\mathop{-}{A}}$ are involutive. The negation type of the dual calculus can appear anywhere in a type. The negation type enables encoding a coterm, say $K$, of type $A$ as a term $[K]{\tt not}$ of type $\operatorname{\neg}{A}$, and handling the encoded coterms as part of terms. 
For instance, $[[K_{1},K_{2}]]{\tt not}$ of type $\operatorname{\neg}{(A_{1}\vee A_{2})}$ is a term which encodes the pair of coterms $K_{1}$ and $K_{2}$, and functions that handle such terms, such as $\lambda x_{2}.[x_{1}.(x_{2}\mathbin{\bullet}{\tt not}\langle\langle x_{1}\rangle{\tt inl}\rangle)]{\tt not}$ of type $\operatorname{\neg}{(A_{1}\vee A_{2})}\to\operatorname{\neg}{A_{1}}$, are definable in the dual calculus. The expressive power of BLC is strictly weaker than that of the dual calculus, since BLC does not permit defining such functions. The theorem also raises the question of whether BLC offers an adequate theoretical framework for expressing practical control operators. We conjecture that the polarities of BLC are sufficient for this purpose; settling this is future work. ## 5 Justifying the Duality of Functions In this section, we reason about the duality of functions in Filinski’s symmetric $\lambda$-calculus using the bilateral $\lambda$-calculus. ### 5.1 Filinski’s Symmetric Lambda-Calculus A function of the type $A_{0}\to A_{1}$ from expressions of the type $A_{0}$ to expressions of the type $A_{1}$ can be regarded as a function from continuations of the type $A_{1}$ to continuations of the type $A_{0}$, and vice versa. This property of functions is called the duality of functions. Filinski adopted the duality as a principle and constructed a symmetric $\lambda$-calculus [12, 13]. The symmetric $\lambda$-calculus consists of functions $\mathit{F}$, expressions $\mathit{E}$, and continuations $\mathit{C}$. 
Functions consist of $\lambda$-abstractions of expressions, decodings of expressions, $\lambda$-abstractions of continuations, and decodings of continuations as follows: (functions) $\displaystyle\qquad\mathit{F}^{A_{0}}_{A_{1}}$ $\displaystyle\Coloneqq\mathit{X}^{A_{0}}\Rightarrow\mathit{E}_{A_{1}}\mid\overline{\mathit{E}_{[A_{0}\to A_{1}]}}\mid\mathit{Y}_{A_{1}}\Leftarrow\mathit{C}^{A_{0}}\mid\underline{\mathit{C}^{[A_{1}\leftarrow A_{0}]}}\enspace.$ Let $A_{0}\to A_{1}$ be a function type. Filinski defined a function type $[A_{0}\to A_{1}]$ for an expression, which denotes an exponential object ${A_{1}}^{A_{0}}$ in categorical semantics, where $A_{0}$ and $A_{1}$ are objects that correspond to types $A_{0}$ and $A_{1}$. We note that $A_{2}\times A_{0}\to A_{1}$ is in bijection with $A_{2}\to{A_{1}}^{A_{0}}$ in categorical semantics. Similarly, Filinski defined a function type $[A_{1}\leftarrow A_{0}]$ for a continuation, which denotes a coexponential object ${A_{0}}_{A_{1}}$, and $A_{0}\to A_{2}+A_{1}$ is in bijection with ${A_{0}}_{A_{1}}\to A_{2}$. 
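The bijection $A_{2}\times A_{0}\to A_{1}\cong A_{2}\to{A_{1}}^{A_{0}}$ underlying the exponential is ordinary currying. As a quick illustration in Python (a sketch of the exponential side only; the coexponential direction has no such direct rendering in a language without control effects):

```python
def curry(f):
    # One direction of the bijection: (A2 × A0 -> A1) to (A2 -> (A0 -> A1)).
    return lambda a2: lambda a0: f(a2, a0)

def uncurry(g):
    # The inverse direction of the bijection.
    return lambda a2, a0: g(a2)(a0)

add = lambda x, y: x + y
curried_add = curry(add)
```

The two directions compose to the identity on both sides, which is what makes the correspondence a bijection rather than a mere pair of maps.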
Expressions and continuations consist of constants, variables, applications of functions, and encodings of functions as follows: (expressions) $\displaystyle\mathit{E}_{o}$ $\displaystyle\Coloneqq\mathit{cst}_{o}\mid\mathit{x}_{o}\mid\mathit{F}^{A}_{o}\mathit{E}_{A}$ $\displaystyle\quad\mathit{E}_{[A_{0}\to A_{1}]}$ $\displaystyle\Coloneqq\mathit{x}_{[A_{0}\to A_{1}]}\mid\mathit{F}^{A}_{[A_{0}\to A_{1}]}\mathit{E}_{A}\mid\ulcorner\mathit{F}^{A_{0}}_{A_{1}}\urcorner$ (continuations) $\displaystyle\;\;\mathit{C}^{o}$ $\displaystyle\Coloneqq\bullet^{o}\mid a^{o}\mid\mathit{F}^{o}_{A}\mathit{C}^{A}$ $\displaystyle\mathit{C}^{[A_{1}\leftarrow A_{0}]}$ $\displaystyle\Coloneqq a^{[A_{1}\leftarrow A_{0}]}\mid\mathit{F}^{[A_{1}\leftarrow A_{0}]}_{A}\mathit{C}^{A}\mid\llcorner\mathit{F}^{A_{0}}_{A_{1}}\lrcorner\enspace.$ We note that the encodings and decodings are defined to be _primitive_ operators because the duality is adopted as a principle. We now explain the counterpart of commands in Filinski’s symmetric $\lambda$-calculus, which is a triple called a _configuration_: $\vdash\mathit{E}\colon\mathord{\mathop{+}{A_{0}}}$ $\vdash\mathit{F}\colon A_{0}\to A_{1}$ $\vdash\mathit{C}\colon\operatorname{\neg}{A_{1}}$ $\vdash\mathord{\langle\mathit{E}\>|\>\mathit{F}\>|\>\mathit{C}\rangle}$ for the symmetric $\lambda$-calculus, where $\vdash\mathit{E}\colon\mathord{\mathop{+}{A_{0}}}$ and $\vdash\mathit{C}\colon\operatorname{\neg}{A_{1}}$ hold for an expression $\mathit{E}$ of type $A_{0}$ and a continuation $\mathit{C}$ of type $A_{1}$, respectively. The notation was introduced by Ueda and Asai [36]. A difference from commands in the bilateral $\lambda$-calculus is that configurations are not pairs consisting of expressions and continuations, but triples. Another difference is that no configuration can be abstracted over expression or continuation variables. One other difference is that the continuation types are represented using the negation connective in Filinski’s calculus. 
We can see that the configuration notion is also based on the duality principle. If $\mathit{F}$ is regarded as a function from expressions of the type $A_{0}$ to expressions of the type $A_{1}$, then $\mathit{F}$ is applied to $\mathit{E}$, and an expression of the type $A_{1}$ that is consistent with $\mathit{C}$ of the type $A_{1}$ is generated. Similarly, if $\mathit{F}$ is regarded as a function from continuations of the type $A_{1}$ to continuations of the type $A_{0}$, then $\mathit{F}$ is applied to $\mathit{C}$, and a continuation of the type $A_{0}$ that is consistent with $\mathit{E}$ of the type $A_{0}$ is generated. The configuration notion includes both cases. ### 5.2 Mutual Transformations between Functions Let us see how the duality occurs in the bilateral $\lambda$-calculus. The bilateral $\lambda$-calculus does not permit anything neutral, that is, anything that is neither an expression nor a continuation. Even if we want to define a neutral function, we must decide whether its type is $\mathord{\mathop{+}{A_{0}\to A_{1}}}$ or $\mathord{\mathop{-}{A_{0}\leftarrow A_{1}}}$. If we define a function between expressions, it cannot be applied _as-is_ to a continuation, and vice versa. However, we can define encodings $\llcorner\mathit{E}\lrcorner=\lambda a.\mu\mathit{x}.\mathord{\langle\mathit{E}\mathit{x}\>|\>a\rangle}$ and $\ulcorner\mathit{C}\urcorner=\lambda\mathit{x}.\mu a.\mathord{\langle\mathit{x}\>|\>\mathit{C}a\rangle}$ into continuations and expressions, respectively, in the bilateral $\lambda$-calculus, and these encodings are mutual transformations as follows: ###### Theorem 5.1. The following inferences are derivable: $\varGamma\vdash_{+}\mathit{E}\colon A_{0}\to A_{1}$ $\varGamma\vdash_{-}\llcorner\mathit{E}\lrcorner\colon A_{0}\leftarrow A_{1}$ $\varGamma\vdash_{-}\mathit{C}\colon A_{0}\leftarrow A_{1}$ $\varGamma\vdash_{+}\ulcorner\mathit{C}\urcorner\colon A_{0}\to A_{1}$ . ###### Proof 5.2. 
See Appendix B. The mutual transformations enjoy the following property: ###### Theorem 5.3. 1. 1. $\mathord{\langle\ulcorner\mathit{C}_{0}\urcorner V\>|\>\mathit{C}_{1}\rangle}=_{\mathit{v}}\mathord{\langle V\>|\>\mathit{C}_{0}\mathit{C}_{1}\rangle}$ holds, 2. 2. $\mathord{\langle V\>|\>\llcorner\mathit{E}\lrcorner\mathit{C}\rangle}=_{\mathit{v}}\mathord{\langle\mathit{E}V\>|\>\mathit{C}\rangle}$ holds, 3. 3. $\mathord{\langle\ulcorner\llcorner\mathit{E}\lrcorner\urcorner V\>|\>\mathit{C}\rangle}=_{\mathit{v}}\mathord{\langle\mathit{E}V\>|\>\mathit{C}\rangle}$ holds, and 4. 4. $\mathord{\langle V\>|\>\llcorner\ulcorner\mathit{C}_{0}\urcorner\lrcorner\mathit{C}_{1}\rangle}=_{\mathit{v}}\mathord{\langle V\>|\>\mathit{C}_{0}\mathit{C}_{1}\rangle}$ holds. ###### Proof 5.4. The first and second statements hold immediately from the definition of $\leadsto$, $\ulcorner\mathit{C}\urcorner$, and $\llcorner\mathit{E}\lrcorner$ as follows: $\displaystyle\mathord{\langle\ulcorner\mathit{C}_{0}\urcorner V\>|\>\mathit{C}_{1}\rangle}$ $\displaystyle\equiv\mathord{\langle(\lambda\mathit{x}.\mu a.\mathord{\langle\mathit{x}\>|\>\mathit{C}_{0}a\rangle})V\>|\>\mathit{C}_{1}\rangle}=_{\mathit{v}}\mathord{\langle\mu a.\mathord{\langle V\>|\>\mathit{C}_{0}a\rangle}\>|\>\mathit{C}_{1}\rangle}=_{\mathit{v}}\mathord{\langle V\>|\>\mathit{C}_{0}\mathit{C}_{1}\rangle}$ $\displaystyle\mathord{\langle V\>|\>\llcorner\mathit{E}\lrcorner\mathit{C}\rangle}$ $\displaystyle\equiv\mathord{\langle V\>|\>(\lambda a.\mu\mathit{x}.\mathord{\langle\mathit{E}\mathit{x}\>|\>a\rangle})\mathit{C}\rangle}=_{\mathit{v}}\mathord{\langle V\>|\>\mu\mathit{x}.\mathord{\langle\mathit{E}\mathit{x}\>|\>\mathit{C}\rangle}\rangle}=_{\mathit{v}}\mathord{\langle\mathit{E}V\>|\>\mathit{C}\rangle}\enspace.$ The third and fourth statements hold from the first and second statements. 
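The mutual transformations have a familiar continuation-passing shadow: the encoding into continuations acts by precomposition, turning a function on values into a function on continuations, while the reverse direction can be modeled by running with the identity continuation. The Python sketch below is an informal model only (it assumes the answer type is instantiated so that the identity continuation is available) and mirrors the round trip of statement 3 of Theorem 5.3:

```python
def to_cont(f):
    # Precomposition: a function on values induces a function on
    # continuations, in the spirit of the encoding into continuations.
    return lambda k: (lambda x: k(f(x)))

def to_expr(g):
    # Reverse direction, modeled here by supplying the identity continuation.
    return lambda x: g(lambda b: b)(x)

inc = lambda n: n + 1
roundtrip = to_expr(to_cont(inc))
# roundtrip behaves like inc, echoing the round-trip equations.
```

In the calculus proper the round trip is an equality up to $=_{\mathit{v}}$; here it is only extensional agreement of the modeled functions.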
Theorems 5.1 and 5.3 ensure that we can always _recover_ functions between expressions (resp. continuations) from functions between continuations (resp. expressions) using the mutual transformations. Thus, we confirm that the duality of functions is derived from the definability of the mutual transformations in the bilateral $\lambda$-calculus. ### 5.3 Dual Proofs for Functions We also provide an alternative justification of the duality using the derivation trees with proofs in their nodes introduced in Section 3. We define a _polarization_, which is a function that maps proof variables to typed expression or continuation variables and proof constants to expression or continuation constants. A polarization $p$ is extended to proofs by $\displaystyle p(\lambda\alpha.t)$ $\displaystyle=\lambda p(\alpha).p(t)$ $\displaystyle p(t_{0}t_{1})$ $\displaystyle=p(t_{0})p(t_{1})$ $\displaystyle p(\mathord{(t_{0},t_{1})})$ $\displaystyle=\mathord{(p(t_{0}),p(t_{1}))}$ $\displaystyle p(\pi_{0}(t))$ $\displaystyle=\pi_{0}(p(t))$ $\displaystyle p(\pi_{1}(t))$ $\displaystyle=\pi_{1}(p(t))$ $\displaystyle p(\mu\alpha.T)$ $\displaystyle=\mu p(\alpha).p(T)$ $\displaystyle p(\mathord{\langle t_{0}\>|\>t_{1}\rangle})$ $\displaystyle=\mathord{\langle p(t_{0})\>|\>p(t_{1})\rangle}\enspace.$ Let $V$ be a set of proof variables. We define $p(V)$ as the concatenation of the positive and negative type environments obtained from $V$ by $p$. Polarizations $p$ and $p^{\prime}$ are equivalent if * • for any proof variable $\alpha$, $p(\alpha)$ and $p^{\prime}(\alpha)$ have the same polarity, and * • for any proof variables $\alpha$ and $\alpha^{\prime}$, $p(\alpha)\equiv p(\alpha^{\prime})$ implies $p^{\prime}(\alpha)\equiv p^{\prime}(\alpha^{\prime})$, and vice versa. ###### Proposition 5.5. Assume that $p$ and $p^{\prime}$ are equivalent. Then, 1. 1. $p(V)\vdash_{+}p(t)\colon A$ implies $p^{\prime}(V)\vdash_{+}p^{\prime}(t)\colon A$ 2. 2. 
$p(V)\vdash_{-}p(t)\colon A$ implies $p^{\prime}(V)\vdash_{-}p^{\prime}(t)\colon A$, and 3. 3. $p(V)\vdash_{\hskip 1.5pt\mathrm{o}\hskip 1.5pt}p(t)$ implies $p^{\prime}(V)\vdash_{\hskip 1.5pt\mathrm{o}\hskip 1.5pt}p^{\prime}(t)$. A polarization $p$ is a _conjugate_ of a polarization $p^{\prime}$ if, for any variable $\alpha$, whenever $p(\alpha)$ is an expression variable, $p^{\prime}(\alpha)$ is a continuation variable, and vice versa. A derivation tree denoting a function has two proofs, for acceptance and rejection, as follows: ###### Theorem 5.6. 1. 1. $p(V)\vdash_{+}p(t)\colon A$ implies that there exists a conjugate $p^{\prime}$ of $p$ such that $p^{\prime}(V)\vdash_{-}p^{\prime}(t)\colon A$, 2. 2. $p(V)\vdash_{-}p(t)\colon A$ implies that there exists a conjugate $p^{\prime}$ of $p$ such that $p^{\prime}(V)\vdash_{+}p^{\prime}(t)\colon A$, and 3. 3. $p(V)\vdash_{\hskip 1.5pt\mathrm{o}\hskip 1.5pt}p(t)$ implies that there exists a conjugate $p^{\prime}$ of $p$ such that $p^{\prime}(V)\vdash_{\hskip 1.5pt\mathrm{o}\hskip 1.5pt}p^{\prime}(t)$. ###### Proof 5.7. By induction on the derivation. We note that Proposition 5.5 ensures that differences between equivalent polarizations can be ignored. ### 5.4 A Short Remark about the Two Justifications Careful readers might think that * • it is better not to distinguish expression variables from continuation variables in derivation trees with proofs, and * • BLC, which distinguishes expressions and continuations and requires the mutual transformations, is unnecessarily delicate. However, BLC and the mutual transformations have an advantage in cases where functions and arguments share variables. For example, a function $\lambda\alpha_{2}.\mu\alpha_{1}.\mathord{\langle\alpha_{0}\>|\>\alpha_{2}\rangle}$ cannot be applied to an argument $\alpha_{0}$ under any assumption because the function and argument must have opposite polarities. 
Because the mutual transformations, which contain no variables, can transform functions between continuations into functions between expressions and vice versa in BLC, $\ulcorner\lambda a_{2}.\mu\mathit{x}_{1}.\mathord{\langle\mathit{x}_{0}\>|\>a_{2}\rangle}\urcorner\mathit{x}_{0}$ of the type $\mathord{\mathop{+}{A\to A}}$ can be applied to $\mathit{x}_{0}$ of the type $\mathord{\mathop{+}{A}}$, where $\lambda a_{2}.\mu\mathit{x}_{1}.\mathord{\langle\mathit{x}_{0}\>|\>a_{2}\rangle}$ has the type $\mathord{\mathop{-}{A\leftarrow A}}$. ## 6 Related Work and Discussion In this section, we discuss related work from the viewpoints of symmetric $\lambda$-calculi based on the formulae-as-types notion and of approaches in structural proof theory. ### 6.1 Symmetric Lambda-Calculi The first symmetric $\lambda$-calculus was proposed by Filinski [12, 13]. Filinski described functions between continuations as follows: “We can therefore equivalently view a function $f\colon A\to B$ as a continuation accepting a pair consisting of an $A$-type value and a $B$-accepting continuation. Such a pair will be called the context of a function application, and its type written as $[B\leftarrow A]$”. On our bilateralist view, his intuition is reasonable not only computationally but also from the viewpoint of proof-theoretic semantics. The underlying idea in defining our calculus is that Filinski’s $[A_{1}\leftarrow A_{0}]$ is regarded as $\mathord{\mathop{-}{A_{0}\leftarrow A_{1}}}$. We elaborate on his idea in terms of proof-theoretic semantics and carefully use Rumfitt’s polarities and the but-not connective instead of simply using the negation connective as Filinski did. The symmetric $\lambda$-calculus proposed by Barbanera and Berardi was invented to extract programs from classical logic proofs. Their calculus contains the involutive negation $A^{\bot}$ for each type $A$ and has symmetric application similar to commands in BLC. 
The essential difference between their calculus and BLC is polarity, that is, the polarized type $\mathord{\mathop{-}{(A\vee B)}}$ in BLC corresponds to $(A\vee B)^{\bot}$, which is identified with $A^{\bot}\wedge B^{\bot}$ in their calculus. This lack of polarity information makes it difficult to reason about functions of Filinski’s calculus. The calculus proposed by Lovas and Crary [25] is the only symmetric $\lambda$-calculus corresponding to classical logic in which expressions and continuations are symmetric in the sense of bilateralism. They defined $\lambda$-terms similar to those of the dual calculus defined by Wadler [38] and did not analyze the duality of functions in Filinski’s symmetric $\lambda$-calculus. They also adopted not the implication $\to$ but the negation connective $\neg$ as a primitive type constructor. An expression of function type $A_{0}\to A_{1}$ has type $\operatorname{\neg}{(A_{0}\wedge\operatorname{\neg}{A_{1}})}$. Therefore, unlike in our calculus, an inference rule corresponding to reductio ad absurdum in classical logic is needed _just_ to define the $\lambda$-abstractions and applications of expressions of the simply typed $\lambda$-calculus. We have also shown that, on the notion of bilateralism, the negative polarity is more suitable than the negation connective for representing continuations. Ueda and Asai investigated Filinski’s symmetric $\lambda$-calculus, and provided an explicit definition of commands by writing $\operatorname{\neg}{A}$ for a continuation type $A$ [36]. However, they did not attempt to reason about the neutrality of functions in the symmetric $\lambda$-calculus. Also, the use of the negation connective to represent continuations is not reasonable, as we have shown in the present paper. Indeed, they used the negation connective only at the _outermost_ position of formulae. On bilateralism, this outermost operator should be not the negation connective but the negative polarity. 
Curien and Herbelin’s $\bar{\lambda}\mu\tilde{\mu}$-calculus [3] corresponds to Gentzen’s sequent calculus LK, as the dual calculus does. This symmetric infrastructure, namely the duality of LK, exhibits the duality between continuations and programs. Its symmetry corresponds to that of the polarities in BLC and to that of types $A$ and $\operatorname{\neg}{A}$ in Ueda and Asai’s calculus. The calculi based on LK naturally contain the (not involutive) negation type, which provides more expressive power than BLC, as noted in Section 4.2. This observation raises an interesting question: What is the role of the negation type in practical programming languages? ### 6.2 Approaches in Structural Proof Theory Girard and Parigot constructed calculi corresponding to classical logic [18, 28] and analyzed classical logic proof-theoretically. Girard also invented linear logic [17], which is very useful for analyzing classical logic. Danos et al. confirmed that classical logic has well-behaved fragments using the positive and negative polarities [5, 6]. The calculi invented through their approaches are larger than or incomparable to ours because their motivations are different from ours. A goal of our work is not to analyze classical logic but to construct a minimal calculus that justifies the duality of functions and the computations that delimited continuations give rise to. Although analyzing negations is a topic of great interest in proof theory [27, 14, 26, 2, 7, 10, 9, 1, 31, 23, 11], we investigated the negation-free fragment of bilateral natural deduction. Dual intuitionistic logic, which is symmetric to intuitionistic logic, is well known in structural proof theory [19, 37, 34]. A combined logic of intuitionistic and dual intuitionistic logic is classical logic. 
Whereas most of these logics are formulated as sequent calculi, Wansing constructed a natural deduction system in which verification and falsification correspond to proving $\mathord{\mathop{+}{A}}$ and $\mathord{\mathop{-}{A}}$, respectively, in our calculus [42]. However, a series of his works analyzed _refutation_, that is, a proof of falsification, in the context of studying the various negations seen in structural proof theory [40, 41, 42]. This is different from the objective of the present paper. He also neither provided a $\lambda$-calculus based on bilateralism nor described computational aspects, such as continuation control. Tranchini also constructed a natural deduction system for dual intuitionistic logic [35]. Our calculus seems to correspond to a negation-free fragment of his natural deduction. ## 7 Conclusion and Future Work In this paper, we proposed a symmetric $\lambda$-calculus, called the bilateral $\lambda$-calculus, with the but-not connective, based on bilateralism in proof-theoretic semantics. The formulae-as-types notion was extended to cover Rumfitt’s reductio, which corresponds to reductio ad absurdum and appears as a $\mu$-abstraction over a first-class command in our calculus. Its call-by-value calculus can be defined as a sub-calculus of Wadler’s call-by-value dual calculus. We showed that the duality of functions is derived from the definability of the mutual transformations between expressions and continuations in the bilateral $\lambda$-calculus. We also showed that every typable function has dual types. In this paper, we have provided a method, based on bilateralism, to justify several notions in the theory of $\lambda$-calculi. The bilateral analysis in this paper targets the duality of functions in Filinski’s symmetric $\lambda$-calculus. Bilateral analyses of asymmetric calculi constitute our future work. The call-by-value variant of BLC corresponds to a sub-calculus of the call-by-value dual calculus with the but-not connective. 
It is also future work to clarify what practical uses derive from the difference between BLC and the dual calculus. ## References * [1] Arnon Avron. Negation: Two points of view. In What is Negation?, Applied Logic Series, pages 3–22. 1999. * [2] Keith L. Clark. Negation as failure. In Logic and Data Bases, pages 293–322. Plenum Press, 1978. * [3] Pierre-Louis Curien and Hugo Herbelin. The duality of computation. In Proc. ICFP, pages 233–243, 2000. * [4] Haskell B. Curry. Functionality in combinatory logic. In Proceedings of the National Academy of Sciences of the USA, volume 20, pages 584–590, 1934. * [5] Vincent Danos, Jean-Baptiste Joinet, and Harold Schellinx. LKQ and LKT: Sequent calculi for second order logic based upon dual linear decompositions of classical implication. In Proceedings of the Workshop on Advances in Linear Logic, pages 211–224, 1995. * [6] Vincent Danos, Jean-Baptiste Joinet, and Harold Schellinx. A new deconstructive logic: Linear logic. Journal of Symbolic Logic, 62(3):755–807, 1997. * [7] Kosta Došen. Negative modal operators in intuitionistic logic. Publications de l’Institut Mathématique, 35(49):3–14, 1984. * [8] Michael Dummett. The Logical Basis of Metaphysics. Duckworth, 1991. * [9] Michael Dummett. The Seas of Language. Oxford University Press, 1996. * [10] Jon Michael Dunn. Star and perp: Two treatments of negation. Philosophical Perspectives, 5:331–357, 1993. * [11] Jon Michael Dunn and Chunlai Zhou. Negation in the context of gaggle theory. Studia Logica, 80(2–3):235–264, 2005. * [12] Andrzej Filinski. Declarative continuations: An investigation of duality in programming language semantics. In Proc. CTCS, volume 389 of LNCS, pages 224–249, 1989. * [13] Andrzej Filinski. Declarative continuations and categorical duality. Master’s thesis, DIKU Computer Science Department, University of Copenhagen, 1989. * [14] Peter T. Geach. Assertion. The Philosophical Review, 74:449–465, 1965. * [15] Gerhard Karl Erich Gentzen. 
Untersuchungen über das logische Schließen. Mathematische Zeitschrift, 39:176–210, 1934. * [16] Gerhard Karl Erich Gentzen. Untersuchungen über das logische Schließen. Mathematische Zeitschrift, 39:405–431, 1935. * [17] Jean-Yves Girard. Linear logic. Theoretical Computer Science, 50:1–102, 1987. * [18] Jean-Yves Girard. A new constructive logic: classic logic. Mathematical Structures in Computer Science, 1(3):255–296, 1991. * [19] Nicolas D. Goodman. The logic of contradiction. Zeitschrift für mathematische Logik und Grundlagen der Mathematik, 27:119–126, 1981. * [20] Rajeev Goré, Linda Postniece, and Alwen Tiu. Cut-elimination and proof-search for bi-intuitionistic logic using nested sequents. In Advances in Modal Logic, pages 43–66, 2008. * [21] Timothy G. Griffin. A formulae-as-types notion of control. In Proc. POPL, pages 47–58, 1990. * [22] William A. Howard. The formulae-as-types notion of construction. In Essays on Combinatory Logic, Lambda Calculus, and Formalism, pages 479–490. Academic Press, 1980. * [23] Lloyd Humberstone. The revival of rejective negation. Journal of Philosophical Logic, 29(4):331–381, 2000. * [24] Nils Kürbis. Some comments on Ian Rumfitt’s bilateralism. Journal of Philosophical Logic, 45(6):623–644, 2016. * [25] William Lovas and Karl Crary. Structural normalization for classical natural deduction. Manuscript, 2006. URL: http://www.cs.cmu.edu/~wlovas/papers/clnorm.pdf. * [26] Storrs McCall. Contrariety. Notre Dame Journal of Formal Logic, 8:121–138, 1967. * [27] David Nelson. Constructible falsity. Journal of Symbolic Logic, 14(2):16–26, 1949. * [28] Michel Parigot. $\lambda\mu$-calculus: An algorithmic interpretation of classical natural deduction. In Proc. LPAR, volume 624 of LNAI, pages 190–201, 1992. * [29] Arthur N. Prior. The runabout inference-ticket. Analysis, 21(2):38–39, 1960. * [30] Greg Restall. Extending intuitionistic logic with subtraction, 1997. * [31] Greg Restall. An Introduction to Substructural Logics. 
Routledge, 2000. * [32] Ian Rumfitt. “Yes” and “no”. Mind, 109(477):781–823, 2000. * [33] Peter Selinger. Control categories and duality: On the categorical semantics of the lambda-mu calculus. Mathematical Structures in Computer Science, 11(2):207–260, 2001. * [34] Yaroslav Shramko. Dual intuitionistic logic and a variety of negations: The logic of scientific research. Studia Logica, 80(2–3):347–367, 2005. * [35] Luca Tranchini. Natural deduction for dual-intuitionistic logic. Studia Logica, 100(3):631–648, 2012. * [36] Yayoi Ueda and Kenichi Asai. Reinvestigation of symmetric lambda calculus. In Proc. the 4th DIKU-IST Joint Workshop on Foundations of Software, pages 10–26, 2011. * [37] Igor Urbas. Dual-intuitionistic logic. Notre Dame Journal of Formal Logic, 37(3):440–451, 1996. * [38] Philip Wadler. Call-by-value is dual to call-by-name. In Proc. ICFP, pages 189–201, 2003. * [39] Philip Wadler. Call-by-value is dual to call-by-name, reloaded. In Proc. RTA, volume 3467 of LNCS, pages 185–203, 2005. * [40] Heinrich Wansing. Connexive modal logic. In Proc. AIML, pages 367–383, 2004. * [41] Heinrich Wansing. Proofs, disproofs, and their duals. Advances in Modal Logic, 8:483–505, 2010. * [42] Heinrich Wansing. Falsification, natural deduction and bi-intuitionistic logic. Journal of Logic and Computation, 26(1):425–450, 2016. ## Appendix A The Call-by-Value Calculus of the Bilateral Lambda-Calculus We introduce a call-by-value strategy to BLC and define a computationally consistent call-by-value calculus, which is equivalent to a sub-calculus, without negation, of the call-by-value dual calculus by Wadler [39]. Consequently, the consistency of our call-by-value calculus is obtained from the consistency of the call-by-value dual calculus. 
### A.1 The Call-by-Value Dual Calculus $\textrm{CbV-DC}_{\rightarrow\leftarrow}$ This subsection compares CbV-BLC with the dual calculus invented by Wadler [38, 39], which corresponds to the classical sequent calculus under the formulae-as-types notion. The call-by-value calculus of the dual calculus is known as a well-established and computationally consistent system because it has the so-called _CPS-semantics_ [38] and is equivalent to the call-by-value $\lambda\mu$-calculus [39]. We will show that our CbV-BLC is equivalent to a sub-calculus of the call-by-value dual calculus by giving an isomorphism between them. We first recall the dual calculus. Suppose that countable sets of type variables, term variables, and coterm variables are given. Let $\chi$, $x$, and $\alpha$ range over type variables, term variables, and coterm variables, respectively. Types, terms, coterms, statements, and syntactical objects are summarized in Figure 10. The substitution $[M/x]O$, which replaces $x$ with $M$ in an expression $O$, is defined in the standard component-wise, capture-avoiding manner; the substitution $[K/\alpha]O$ is defined similarly. (types) $\displaystyle A$ $\displaystyle\Coloneqq\chi\mid(A\land A)\mid(A\vee A)\mid(\operatorname{\neg}{A})$ (terms) $\displaystyle M$ $\displaystyle\Coloneqq x\mid\langle M,M\rangle\mid\langle M\rangle{\tt inl}\mid\langle M\rangle{\tt inr}\mid[K]{\tt not}\mid(S).\alpha$ (coterms) $\displaystyle K$ $\displaystyle\Coloneqq\alpha\mid[K,K]\mid{\tt fst}[K]\mid{\tt snd}[K]\mid{\tt not}\langle M\rangle\mid x.(S)$ (statements) $\displaystyle S$ $\displaystyle\Coloneqq M\mathbin{\bullet}K$ (syntactical objects) $\displaystyle O$ $\displaystyle\Coloneqq M\mid K\mid S$ Figure 10: The syntax of the dual calculus.
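The grammar of Figure 10 can be transcribed directly into a datatype, which is convenient for experimenting with the calculus. The sketch below is ours and not part of the formal development: the constructor names (`Pair`, `Fst`, `Cut`, ...) are invented, and the substitution is deliberately naive — it respects shadowing but is not capture-avoiding, unlike the substitution assumed in the text.

```python
# Syntax of the dual calculus (Figure 10) as Python dataclasses.
# Constructor names are ours; the paper writes <M,M'>, fst[K], M • K, etc.
from dataclasses import dataclass

# Terms M
@dataclass(frozen=True)
class Var: name: str                      # x
@dataclass(frozen=True)
class Pair: left: object; right: object   # <M, M'>
@dataclass(frozen=True)
class Inl: body: object                   # <M>inl
@dataclass(frozen=True)
class Inr: body: object                   # <M>inr
@dataclass(frozen=True)
class NotT: coterm: object                # [K]not
@dataclass(frozen=True)
class Bind: stmt: object; covar: str      # (S).alpha

# Coterms K
@dataclass(frozen=True)
class Covar: name: str                    # alpha
@dataclass(frozen=True)
class Case: left: object; right: object   # [K, K']
@dataclass(frozen=True)
class Fst: k: object                      # fst[K]
@dataclass(frozen=True)
class Snd: k: object                      # snd[K]
@dataclass(frozen=True)
class NotC: term: object                  # not<M>
@dataclass(frozen=True)
class CoBind: var: str; stmt: object      # x.(S)

# Statements S
@dataclass(frozen=True)
class Cut: term: object; coterm: object   # M • K

def subst(m, x, o):
    """Naive [M/x]O: replace the term variable x by m in the object o.
    Respects shadowing by x.(S) but is NOT capture-avoiding, unlike
    the substitution assumed in the text."""
    if isinstance(o, Var):
        return m if o.name == x else o
    if isinstance(o, Covar):
        return o
    if isinstance(o, CoBind) and o.var == x:  # x is shadowed here
        return o
    # rebuild the node, substituting in every non-string field
    return type(o)(**{f: v if isinstance(v, str) else subst(m, x, v)
                      for f, v in vars(o).items()})
```

For instance, `subst(Var("y"), "x", Cut(Var("x"), Covar("a")))` yields `Cut(Var("y"), Covar("a"))`, mirroring $[y/x](x\mathbin{\bullet}\alpha)=y\mathbin{\bullet}\alpha$.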
A judgment has the form of $\varGamma\vdash\varDelta\mid M\colon A$, $\varGamma\mid S\vdash\varDelta$, or $K\colon A\mid\varGamma\vdash\varDelta$, where $\varGamma$ is a type environment for terms that is a finite set of the form $x\colon A$ and $\varDelta$ is a type environment for coterms that is a finite set of the form $\alpha\colon A$. Figure 11 shows the inference rules. $\varGamma\vdash\varDelta\mid M\colon A$ $K\colon A\mid\varGamma\vdash\varDelta$ $\varGamma\mid M\mathbin{\bullet}K\vdash\varDelta$ $\varGamma,x\colon A\vdash\varDelta\mid x\colon A$ $\alpha\colon A\mid\varGamma\vdash\varDelta,\alpha\colon A$ $\varGamma\vdash\varDelta\mid M_{1}\colon A_{1}$ $\varGamma\vdash\varDelta\mid M_{2}\colon A_{2}$ $\varGamma\vdash\varDelta\mid\langle M_{1},M_{2}\rangle\colon A_{1}\land A_{2}$ $K\colon A_{1}\mid\varGamma\vdash\varDelta$ ${\tt fst}[K]\colon A_{1}\land A_{2}\mid\varGamma\vdash\varDelta$ $K\colon A_{2}\mid\varGamma\vdash\varDelta$ ${\tt snd}[K]\colon A_{1}\land A_{2}\mid\varGamma\vdash\varDelta$ $\varGamma\vdash\varDelta\mid M\colon A_{1}$ $\varGamma\vdash\varDelta\mid\langle M\rangle{\tt inl}\colon A_{1}\vee A_{2}$ $\varGamma\vdash\varDelta\mid M\colon A_{2}$ $\varGamma\vdash\varDelta\mid\langle M\rangle{\tt inr}\colon A_{1}\vee A_{2}$ $K_{1}\colon A_{1}\mid\varGamma\vdash\varDelta$ $K_{2}\colon A_{2}\mid\varGamma\vdash\varDelta$ $[K_{1},K_{2}]\colon A_{1}\vee A_{2}\mid\varGamma\vdash\varDelta$ $K\colon A\mid\varGamma\vdash\varDelta$ $\varGamma\vdash\varDelta\mid[K]{\tt not}\colon\operatorname{\neg}{A}$ $\varGamma\vdash\varDelta\mid M\colon A$ ${\tt not}\langle M\rangle\colon\operatorname{\neg}{A}\mid\varGamma\vdash\varDelta$ $\varGamma\mid S\vdash\varDelta,\alpha\colon A$ $\varGamma\vdash\varDelta\mid(S).\alpha\colon A$ $x\colon A,\varGamma\mid S\vdash\varDelta$ $x.(S)\colon A\mid\varGamma\vdash\varDelta$ Figure 11: The inference rules of the dual calculus. We then recall the call-by-value calculus of the dual calculus. 
The values and the call-by-value evaluation contexts are defined as follows: (values) $\displaystyle W$ $\displaystyle\Coloneqq x\mid\langle W,W\rangle\mid\langle W\rangle{\tt inl}\mid\langle W\rangle{\tt inr}\mid[K]{\tt not}$ $\displaystyle\;\;\mid\;(W\mathbin{\bullet}{\tt fst}[\alpha]).\alpha\mid(W\mathbin{\bullet}{\tt snd}[\alpha]).\alpha$ (contexts) $\displaystyle\mathcal{F}$ $\displaystyle\Coloneqq\\{-\\}\mid\langle\mathcal{F},M\rangle\mid\langle W,\mathcal{F}\rangle\mid\langle\mathcal{F}\rangle{\tt inl}\mid\langle\mathcal{F}\rangle{\tt inr}$ $\displaystyle(\beta\mathord{\land}_{0})$ $\displaystyle\;\;\langle W_{0},W_{1}\rangle\mathbin{\bullet}{\tt fst}[K]$ $\displaystyle=_{\mathit{dcv}}W_{0}\mathbin{\bullet}K$ $\displaystyle(\beta\land_{1})$ $\displaystyle\langle W_{0},W_{1}\rangle\mathbin{\bullet}{\tt snd}[K]$ $\displaystyle=_{\mathit{dcv}}W_{1}\mathbin{\bullet}K$ $\displaystyle(\beta\vee_{0})$ $\displaystyle\langle W\rangle{\tt inl}\mathbin{\bullet}[K_{0},K_{1}]$ $\displaystyle=_{\mathit{dcv}}W\mathbin{\bullet}K_{0}$ $\displaystyle(\beta\vee_{1})$ $\displaystyle\langle W\rangle{\tt inr}\mathbin{\bullet}[K_{0},K_{1}]$ $\displaystyle=_{\mathit{dcv}}W\mathbin{\bullet}K_{1}$ $\displaystyle(\beta\neg)$ $\displaystyle[K]{\tt not}\mathbin{\bullet}{\tt not}\langle M\rangle$ $\displaystyle=_{\mathit{dcv}}M\mathbin{\bullet}K$ $\displaystyle(\beta R)$ $\displaystyle W\mathbin{\bullet}x.(S)$ $\displaystyle=_{\mathit{dcv}}[W/x]S$ $\displaystyle(\beta L)$ $\displaystyle(S).\alpha\mathbin{\bullet}K$ $\displaystyle=_{\mathit{dcv}}[K/\alpha]S$ $\displaystyle(\eta R)$ $\displaystyle\;\;M$ $\displaystyle=_{\mathit{dcv}}(M\mathbin{\bullet}\alpha).\alpha$ if $\alpha$ is fresh $\displaystyle(\eta L)$ $\displaystyle K$ $\displaystyle=_{\mathit{dcv}}x.(x\mathbin{\bullet}K)$ if $x$ is fresh $\displaystyle(\eta\land)$ $\displaystyle W$ $\displaystyle=_{\mathit{dcv}}\langle(W\mathbin{\bullet}{\tt fst}[\alpha]).\alpha,(W\mathbin{\bullet}{\tt snd}[\alpha]).\alpha\rangle$ if $\alpha$ is 
fresh $\displaystyle(\eta\vee)$ $\displaystyle K$ $\displaystyle=_{\mathit{dcv}}[x.(\langle x\rangle{\tt inl}\mathbin{\bullet}K),x.(\langle x\rangle{\tt inr}\mathbin{\bullet}K)]$ if $x$ is fresh $\displaystyle(\eta\neg)$ $\displaystyle W$ $\displaystyle=_{\mathit{dcv}}[x.(W\mathbin{\bullet}{\tt not}\langle x\rangle)]{\tt not}$ if $x$ is fresh $\displaystyle(\zeta)$ $\displaystyle\mathcal{F}\\{M\\}\mathbin{\bullet}K$ $\displaystyle=_{\mathit{dcv}}M\mathbin{\bullet}x.(\mathcal{F}\\{x\\}\mathbin{\bullet}K)$ if $x$ is fresh Figure 12: The equations of the call-by-value dual calculus. Figure 12 presents the call-by-value equation $=_{\mathit{dcv}}$ of the dual calculus. In the call-by-value dual calculus, the implication type $A_{0}\rightarrow A_{1}$, with its term $\lambda x.M$ and coterm $M\mathbin{\texttt{@}}K$, can be defined by using $\land$ and $\neg$ as follows: $\displaystyle A_{0}\rightarrow A_{1}$ $\displaystyle\equiv\operatorname{\neg}{(A_{0}\land\operatorname{\neg}{A_{1}})}$ $\displaystyle\lambda x.M$ $\displaystyle\equiv[x^{\prime}.(x^{\prime}\mathbin{\bullet}{\tt fst}[x.(x^{\prime}\mathbin{\bullet}{\tt snd}[{\tt not}\langle M\rangle])])]{\tt not}$ $\displaystyle M\mathbin{\texttt{@}}K$ $\displaystyle\equiv{\tt not}\langle\langle M,[K]{\tt not}\rangle\rangle$ ###### Proposition A.1. The following inferences are derivable: $\varGamma,x\colon A_{0}\vdash\varDelta\mid M\colon A_{1}$ $\varGamma\vdash\varDelta\mid\lambda x.M\colon A_{0}\rightarrow A_{1}$ $\varGamma\vdash\varDelta\mid M\colon A_{0}$ $K\colon A_{1}\mid\varGamma\vdash\varDelta$ $M\mathbin{\texttt{@}}K\colon A_{0}\rightarrow A_{1}\mid\varGamma\vdash\varDelta$ .
Also, $\lambda x.M$ is a value and the following equations hold: $\displaystyle(\beta\mathord{\rightarrow})\qquad$ $\displaystyle(\lambda x.M_{0})\mathbin{\bullet}(M_{1}\mathbin{\texttt{@}}K)$ $\displaystyle=_{\mathit{dcv}}M_{1}\mathbin{\bullet}x.(M_{0}\mathbin{\bullet}K)$ $\displaystyle(\eta\mathord{\to})$ $\displaystyle W$ $\displaystyle=_{\mathit{dcv}}\lambda x.((W\mathbin{\bullet}(x\mathbin{\texttt{@}}\alpha)).\alpha)\enspace.$ ###### Proof A.2. The inference part is shown immediately by the definition of $\lambda x.M$ and $M\mathbin{\texttt{@}}K$. The term $\lambda x.M$ is a value, since it has the form $[K]{\tt not}$. The equation $(\beta\mathord{\rightarrow})$ is shown by case analysis on $M_{1}$. (a) If $M_{1}$ is not a value, then the claim is obtained by using $(\zeta)$: $\displaystyle(\lambda x.M_{0})\mathbin{\bullet}(M_{1}\mathbin{\texttt{@}}K)$ $\displaystyle\equiv[x^{\prime}.(x^{\prime}\mathbin{\bullet}{\tt fst}[x.(x^{\prime}\mathbin{\bullet}{\tt snd}[{\tt not}\langle M_{0}\rangle])])]{\tt not}\mathbin{\bullet}{\tt not}\langle\langle M_{1},[K]{\tt not}\rangle\rangle$ $\displaystyle=_{\mathit{dcv}}\langle M_{1},[K]{\tt not}\rangle\mathbin{\bullet}x^{\prime}.(x^{\prime}\mathbin{\bullet}{\tt fst}[x.(x^{\prime}\mathbin{\bullet}{\tt snd}[{\tt not}\langle M_{0}\rangle])])$ $\displaystyle=_{\mathit{dcv}}M_{1}\mathbin{\bullet}x^{\prime\prime}.(\langle x^{\prime\prime},[K]{\tt not}\rangle\mathbin{\bullet}x^{\prime}.(x^{\prime}\mathbin{\bullet}{\tt fst}[x.(x^{\prime}\mathbin{\bullet}{\tt snd}[{\tt not}\langle M_{0}\rangle])]))$ $\displaystyle=_{\mathit{dcv}}M_{1}\mathbin{\bullet}x^{\prime\prime}.(\langle x^{\prime\prime},[K]{\tt not}\rangle\mathbin{\bullet}{\tt fst}[x.(\langle x^{\prime\prime},[K]{\tt not}\rangle\mathbin{\bullet}{\tt snd}[{\tt not}\langle M_{0}\rangle])])$ $\displaystyle=_{\mathit{dcv}}M_{1}\mathbin{\bullet}x^{\prime\prime}.(x^{\prime\prime}\mathbin{\bullet}x.([K]{\tt not}\mathbin{\bullet}{\tt not}\langle M_{0}\rangle))$
$\displaystyle=_{\mathit{dcv}}M_{1}\mathbin{\bullet}x.([K]{\tt not}\mathbin{\bullet}{\tt not}\langle M_{0}\rangle)$ $\displaystyle=_{\mathit{dcv}}M_{1}\mathbin{\bullet}x.(M_{0}\mathbin{\bullet}K)\enspace.$ (b) If $M_{1}$ is a value (say $W$), then $\displaystyle(\lambda x.M_{0})\mathbin{\bullet}(W\mathbin{\texttt{@}}K)$ $\displaystyle\equiv[x^{\prime}.(x^{\prime}\mathbin{\bullet}{\tt fst}[x.(x^{\prime}\mathbin{\bullet}{\tt snd}[{\tt not}\langle M_{0}\rangle])])]{\tt not}\mathbin{\bullet}{\tt not}\langle\langle W,[K]{\tt not}\rangle\rangle$ $\displaystyle=_{\mathit{dcv}}\langle W,[K]{\tt not}\rangle\mathbin{\bullet}x^{\prime}.(x^{\prime}\mathbin{\bullet}{\tt fst}[x.(x^{\prime}\mathbin{\bullet}{\tt snd}[{\tt not}\langle M_{0}\rangle])])$ $\displaystyle=_{\mathit{dcv}}\langle W,[K]{\tt not}\rangle\mathbin{\bullet}{\tt fst}[x.(\langle W,[K]{\tt not}\rangle\mathbin{\bullet}{\tt snd}[{\tt not}\langle M_{0}\rangle])]$ $\displaystyle=_{\mathit{dcv}}W\mathbin{\bullet}x.([K]{\tt not}\mathbin{\bullet}{\tt not}\langle M_{0}\rangle)$ $\displaystyle=_{\mathit{dcv}}W\mathbin{\bullet}x.(M_{0}\mathbin{\bullet}K)\enspace.$ The but-not type $A_{0}\leftarrow A_{1}$, with its term $K\mathbin{\$}M$ and coterm $\lambda\alpha.K$, is also defined as $\displaystyle A_{0}\leftarrow A_{1}$ $\displaystyle\equiv A_{0}\land\operatorname{\neg}{A_{1}}$ $\displaystyle K\mathbin{\$}M$ $\displaystyle\equiv\langle M,[K]{\tt not}\rangle$ $\displaystyle\lambda\alpha.K$ $\displaystyle\equiv x.(x\mathbin{\bullet}{\tt snd}[{\tt not}\langle(x\mathbin{\bullet}{\tt fst}[K]).\alpha\rangle])\enspace.$ ###### Proposition A.3. The following inferences are derivable: $\varGamma\vdash\varDelta\mid M\colon A_{0}$ $K\colon A_{1}\mid\varGamma\vdash\varDelta$ $\varGamma\vdash\varDelta\mid K\mathbin{\$}M\colon A_{0}\leftarrow A_{1}$ $K\colon A_{0}\mid\varGamma\vdash\varDelta,\alpha\colon A_{1}$ $\lambda\alpha.K\colon A_{0}\leftarrow A_{1}\mid\varGamma\vdash\varDelta$ .
Also, $K\mathbin{\$}W$ is a value and the following equations hold: $\displaystyle(\beta\mathord{\leftarrow})\quad$ $\displaystyle(K_{0}\mathbin{\$}W)\mathbin{\bullet}(\lambda\alpha.K_{1})$ $\displaystyle=_{\mathit{dcv}}(W\mathbin{\bullet}K_{1}).\alpha\mathbin{\bullet}K_{0}$ $\displaystyle(\eta\mathord{\leftarrow})$ $\displaystyle K$ $\displaystyle=_{\mathit{dcv}}\lambda\alpha.(x.((\alpha\mathbin{\$}x)\mathbin{\bullet}K))$ $\displaystyle(\zeta\mathord{\leftarrow})$ $\displaystyle(K_{0}\mathbin{\$}M)\mathbin{\bullet}K_{1}$ $\displaystyle=_{\mathit{dcv}}M\mathbin{\bullet}x.((K_{0}\mathbin{\$}x)\mathbin{\bullet}K_{1})$ if $x$ is fresh. ###### Proof A.4. The inference part is shown immediately by the definition of $K\mathbin{\$}M$ and $\lambda\alpha.K$. By definition, it is immediately checked that a term of the form $K\mathbin{\$}W$ is a value. The first equation $(\beta\mathord{\leftarrow})$ is shown as follows: $\displaystyle(K_{0}\mathbin{\$}W)\mathbin{\bullet}(\lambda\alpha.K_{1})$ $\displaystyle\equiv\langle W,[K_{0}]{\tt not}\rangle\mathbin{\bullet}x.(x\mathbin{\bullet}{\tt snd}[{\tt not}\langle(x\mathbin{\bullet}{\tt fst}[K_{1}]).\alpha\rangle])$ $\displaystyle=_{\mathit{dcv}}\langle W,[K_{0}]{\tt not}\rangle\mathbin{\bullet}{\tt snd}[{\tt not}\langle(\langle W,[K_{0}]{\tt not}\rangle\mathbin{\bullet}{\tt fst}[K_{1}]).\alpha\rangle]$ $\displaystyle=_{\mathit{dcv}}[K_{0}]{\tt not}\mathbin{\bullet}{\tt not}\langle(W\mathbin{\bullet}K_{1}).\alpha\rangle$ $\displaystyle=_{\mathit{dcv}}(W\mathbin{\bullet}K_{1}).\alpha\mathbin{\bullet}K_{0}$ The equation $(\zeta\mathord{\leftarrow})$ is shown with $(\zeta)$ as follows: $\displaystyle(K_{0}\mathbin{\$}M)\mathbin{\bullet}K_{1}$ $\displaystyle\equiv\langle M,[K_{0}]{\tt not}\rangle\mathbin{\bullet}K_{1}$ $\displaystyle=_{\mathit{dcv}}M\mathbin{\bullet}x.(\langle x,[K_{0}]{\tt not}\rangle\mathbin{\bullet}K_{1})$ $\displaystyle\equiv
M\mathbin{\bullet}x.((K_{0}\mathbin{\$}x)\mathbin{\bullet}K_{1})$ We define a sub-calculus $\textrm{CbV-DC}_{\rightarrow\leftarrow}$ of the call-by-value dual calculus that is obtained by removing the negation type $\operatorname{\neg}{A}$ and adding the implication and but-not types with their syntactical objects, typing rules, and equations as primitives. The calculus $\textrm{CbV-DC}_{\rightarrow\leftarrow}$ can be roughly understood as the sub-calculus of the call-by-value dual calculus that forbids free occurrences of the negation connective and allows only the occurrences necessary to define the implication and but-not connectives. ### A.2 Equivalence between CbV-BLC and $\textrm{CbV-DC}_{\rightarrow\leftarrow}$ We define a translation $(-)^{\sharp}$ from CbV-BLC into $\textrm{CbV-DC}_{\rightarrow\leftarrow}$. The translation is designed so that judgments of the bilateral natural deduction, together with their proofs, are mapped to sequents of the sequent calculus. We assume that there exist variables $x_{cst^{o}}$ and covariables $\alpha_{\bullet^{o}}$ of $\textrm{CbV-DC}_{\rightarrow\leftarrow}$ for any constant expressions $\mathit{cst}^{o}$ and constant continuations $\bullet^{o}$ of CbV-BLC, respectively. Term $(\mathit{E})^{\sharp}$, coterm $(\mathit{C})^{\sharp}$, and statement $(N)^{\sharp}$ are defined inductively as shown in Figure 13. We show that the translation preserves typability. Let ${\rm Cons}^{+}$ and ${\rm Cons}^{-}$ be the sets of constants of the form $\mathit{cst}^{o}$ and $\bullet^{o}$, respectively.
For any $X\subseteq_{\rm{fin}}{\rm Cons}^{+}$ and $Y\subseteq_{\rm{fin}}{\rm Cons}^{-}$, we respectively define $\displaystyle(\varPi)^{\sharp}_{X}=\varPi\cup\\{x_{\mathit{cst}^{o}}\colon o\mid\mathit{cst}^{o}\in X\\}\enspace\mbox{and}\enspace(\varSigma)^{\sharp}_{Y}=\varSigma\cup\\{\alpha_{\bullet^{o}}\colon o\mid\bullet^{o}\in Y\\}\enspace.$ $\displaystyle(\mathit{cst}^{o})^{\sharp}$ $\displaystyle\equiv x_{cst^{o}}$ $\displaystyle(\mathit{x}^{A})^{\sharp}$ $\displaystyle\equiv x$ $\displaystyle(\lambda\mathit{x}^{A}.\mathit{E})^{\sharp}$ $\displaystyle\equiv\lambda x.(\mathit{E})^{\sharp}$ $\displaystyle(\mathit{E}_{0}\mathit{E}_{1})^{\sharp}$ $\displaystyle\equiv((\mathit{E}_{0})^{\sharp}\mathbin{\bullet}((\mathit{E}_{1})^{\sharp}\mathbin{\texttt{@}}\alpha)).\alpha$ $\displaystyle(\mathord{(\mathit{E}_{0},\mathit{E}_{1})})^{\sharp}$ $\displaystyle\equiv\langle(\mathit{E}_{0})^{\sharp},(\mathit{E}_{1})^{\sharp}\rangle$ $\displaystyle(\pi_{0}(\mathit{E}))^{\sharp}$ $\displaystyle\equiv((\mathit{E})^{\sharp}\mathbin{\bullet}{\tt fst}[\alpha]).\alpha$ $\displaystyle(\pi_{1}(\mathit{E}))^{\sharp}$ $\displaystyle\equiv((\mathit{E})^{\sharp}\mathbin{\bullet}{\tt snd}[\alpha]).\alpha$ $\displaystyle(\mu a^{A}.N)^{\sharp}$ $\displaystyle\equiv((N)^{\sharp}).\alpha$ $\displaystyle(\bullet^{o})^{\sharp}$ $\displaystyle\equiv\alpha_{\bullet^{o}}$ $\displaystyle(a^{A})^{\sharp}$ $\displaystyle\equiv\alpha$ $\displaystyle(\lambda a^{A}.\mathit{C})^{\sharp}$ $\displaystyle\equiv\lambda\alpha.(\mathit{C})^{\sharp}$ $\displaystyle(\mathit{C}_{0}\mathit{C}_{1})^{\sharp}$ $\displaystyle\equiv x.(((\mathit{C}_{1})^{\sharp}\mathbin{\$}x)\mathbin{\bullet}(\mathit{C}_{0})^{\sharp})$ $\displaystyle(\mathord{(\mathit{C}_{0},\mathit{C}_{1})})^{\sharp}$ $\displaystyle\equiv[(\mathit{C}_{0})^{\sharp},(\mathit{C}_{1})^{\sharp}]$ $\displaystyle(\pi_{0}(\mathit{C}))^{\sharp}$ $\displaystyle\equiv x.(\langle x\rangle{\tt inl}\mathbin{\bullet}(\mathit{C})^{\sharp})$ 
$\displaystyle(\pi_{1}(\mathit{C}))^{\sharp}$ $\displaystyle\equiv x.(\langle x\rangle{\tt inr}\mathbin{\bullet}(\mathit{C})^{\sharp})$ $\displaystyle(\mu\mathit{x}^{A}.N)^{\sharp}$ $\displaystyle\equiv x.((N)^{\sharp})$ $(\mathord{\langle\mathit{E}\>|\>\mathit{C}\rangle})^{\sharp}\equiv(\mathit{E})^{\sharp}\mathbin{\bullet}(\mathit{C})^{\sharp}$ Figure 13: A translation from CbV-BLC into $\textrm{CbV-DC}_{\rightarrow\leftarrow}$ ###### Proposition A.5. 1. 1. $\varPi;\varSigma\vdash_{+}\mathit{E}\colon A$ implies $(\varPi)^{\sharp}_{X}\vdash(\varSigma)^{\sharp}_{Y}\mid(\mathit{E})^{\sharp}\colon A$ for any ${\rm Cons}^{+}(\mathit{E})\subseteq X$ and ${\rm Cons}^{-}(\mathit{E})\subseteq Y$, 2. 2. $\varPi;\varSigma\vdash_{-}\mathit{C}\colon A$ implies $(\mathit{C})^{\sharp}\colon A\mid(\varPi)^{\sharp}_{X}\vdash(\varSigma)^{\sharp}_{Y}$ for any ${\rm Cons}^{+}(\mathit{C})\subseteq X$ and ${\rm Cons}^{-}(\mathit{C})\subseteq Y$, and 3. 3. $\varPi;\varSigma\vdash_{\hskip 1.5pt\mathrm{o}\hskip 1.5pt}N$ implies $(\varPi)^{\sharp}_{X}\mid(N)^{\sharp}\vdash(\varSigma)^{\sharp}_{Y}$ for any ${\rm Cons}^{+}(N)\subseteq X$ and ${\rm Cons}^{-}(N)\subseteq Y$. ###### Proof A.6. The claims are shown by simultaneous induction on the typing derivations of the bilateral $\lambda$-calculus. ###### Lemma A.7. 1. 1. $\mathit{E}$ is a value of CbV-BLC if and only if $(\mathit{E})^{\sharp}$ is a value of $\textrm{CbV-DC}_{\rightarrow\leftarrow}$ , and 2. 2. $([V/x]D)^{\sharp}\equiv[(V)^{\sharp}/x](D)^{\sharp}$ and $([\mathit{C}/\alpha]D)^{\sharp}\equiv[(\mathit{C})^{\sharp}/\alpha](D)^{\sharp}$ . ###### Proof A.8. The claim (1) can be shown immediately. The former claim of (2) is shown by induction on $D$. Note that, by (1), $(\mathit{E})^{\sharp}$ is a value if and only if $([V/x]\mathit{E})^{\sharp}$ is a value.
The case of $D\equiv\mathit{cst}^{o}$: $\displaystyle([V/x]\mathit{cst}^{o})^{\sharp}\equiv(\mathit{cst}^{o})^{\sharp}\equiv x_{\mathit{cst}^{o}}\equiv[(V)^{\sharp}/x]x_{\mathit{cst}^{o}}\equiv[(V)^{\sharp}/x](\mathit{cst}^{o})^{\sharp}$ The case of $D\equiv x^{A}$: $\displaystyle([V/x]x^{A})^{\sharp}\equiv(V)^{\sharp}\equiv[(V)^{\sharp}/x]x\equiv[(V)^{\sharp}/x](x^{A})^{\sharp}$ The case of $D\equiv x_{0}^{A_{0}}$, where $x^{A}\not\equiv x_{0}^{A_{0}}$: $\displaystyle([V/x]x_{0}^{A_{0}})^{\sharp}\equiv(x_{0}^{A_{0}})^{\sharp}\equiv x_{0}\equiv[(V)^{\sharp}/x]x_{0}\equiv[(V)^{\sharp}/x](x_{0}^{A_{0}})^{\sharp}$ The case of $D\equiv\pi_{0}(\mathit{E})$ is shown by using the induction hypothesis: $\displaystyle([V/x]\pi_{0}(\mathit{E}))^{\sharp}$ $\displaystyle\equiv(\pi_{0}([V/x]\mathit{E}))^{\sharp}$ $\displaystyle\equiv(([V/x]\mathit{E})^{\sharp}\mathbin{\bullet}{\tt fst}[\alpha]).\alpha$ $\displaystyle\equiv([(V)^{\sharp}/x](\mathit{E})^{\sharp}\mathbin{\bullet}{\tt fst}[\alpha]).\alpha$ by induction hypothesis $\displaystyle\equiv[(V)^{\sharp}/x](((\mathit{E})^{\sharp}\mathbin{\bullet}{\tt fst}[\alpha]).\alpha)$ $\displaystyle\equiv[(V)^{\sharp}/x](\pi_{0}(\mathit{E}))^{\sharp}$ The other cases $D\equiv\pi_{1}(\mathit{E})$, $\mathord{(\mathit{E}_{0},\mathit{E}_{1})}$, $\lambda\mathit{x}.\mathit{E}$, $\mathit{E}_{0}\mathit{E}_{1}$, $\mu a.N$, $\bullet^{o}$, $a^{A}$, $\pi_{0}(\mathit{C})$, $\pi_{1}(\mathit{C})$, $\mathord{(\mathit{C}_{0},\mathit{C}_{1})}$, $\lambda a.\mathit{C}$, $\mathit{C}_{0}\mathit{C}_{1}$, $\mu\mathit{x}.N$, and $\mathord{\langle\mathit{E}\>|\>\mathit{C}\rangle}$ are also shown straightforwardly by using the induction hypothesis. The latter claim of (2) is shown by induction on $D$.
The case of $D\equiv\bullet^{o}$: $\displaystyle([\mathit{C}/\alpha]\bullet^{o})^{\sharp}\equiv(\bullet^{o})^{\sharp}\equiv\alpha_{\bullet^{o}}\equiv[(\mathit{C})^{\sharp}/\alpha]\alpha_{\bullet^{o}}\equiv[(\mathit{C})^{\sharp}/\alpha](\bullet^{o})^{\sharp}$ The case of $D\equiv a^{A}$: $\displaystyle([\mathit{C}/\alpha]a^{A})^{\sharp}\equiv(\mathit{C})^{\sharp}\equiv[(\mathit{C})^{\sharp}/\alpha]\alpha\equiv[(\mathit{C})^{\sharp}/\alpha](a^{A})^{\sharp}$ The case of $D\equiv a_{0}^{A_{0}}$, where $a^{A}\not\equiv a_{0}^{A_{0}}$: $\displaystyle([\mathit{C}/\alpha]a_{0}^{A_{0}})^{\sharp}\equiv(a_{0}^{A_{0}})^{\sharp}\equiv a_{0}\equiv[(\mathit{C})^{\sharp}/\alpha]a_{0}\equiv[(\mathit{C})^{\sharp}/\alpha](a_{0}^{A_{0}})^{\sharp}$ The case of $D\equiv\pi_{0}(\mathit{C}_{0})$ is shown by using the induction hypothesis: $\displaystyle([\mathit{C}/\alpha]\pi_{0}(\mathit{C}_{0}))^{\sharp}$ $\displaystyle\equiv(\pi_{0}([\mathit{C}/\alpha]\mathit{C}_{0}))^{\sharp}$ $\displaystyle\equiv x.(\langle x\rangle{\tt inl}\mathbin{\bullet}([\mathit{C}/\alpha]\mathit{C}_{0})^{\sharp})$ $\displaystyle\equiv x.(\langle x\rangle{\tt inl}\mathbin{\bullet}[(\mathit{C})^{\sharp}/\alpha](\mathit{C}_{0})^{\sharp})$ by induction hypothesis $\displaystyle\equiv[(\mathit{C})^{\sharp}/\alpha](x.(\langle x\rangle{\tt inl}\mathbin{\bullet}(\mathit{C}_{0})^{\sharp}))$ $\displaystyle\equiv[(\mathit{C})^{\sharp}/\alpha](\pi_{0}(\mathit{C}_{0}))^{\sharp}$ The other cases $D\equiv\mathit{cst}^{o}$, $x^{A}$, $\pi_{0}(\mathit{E})$, $\pi_{1}(\mathit{E})$, $\mathord{(\mathit{E}_{0},\mathit{E}_{1})}$, $\lambda\mathit{x}.\mathit{E}$, $\mathit{E}_{0}\mathit{E}_{1}$, $\mu a.N$, $\pi_{1}(\mathit{C})$, $\mathord{(\mathit{C}_{0},\mathit{C}_{1})}$, $\lambda a.\mathit{C}$, $\mathit{C}_{0}\mathit{C}_{1}$, $\mu\mathit{x}.N$, and $\mathord{\langle\mathit{E}\>|\>\mathit{C}\rangle}$ are also shown straightforwardly by using the induction hypothesis. ###### Theorem A.9.
$D_{0}=_{\mathit{v}}D_{1}$ implies $(D_{0})^{\sharp}=_{\mathit{dcv}}(D_{1})^{\sharp}$. ###### Proof A.10. First, we define a translation $(-)^{\sharp}_{K}$, indexed by a coterm $K$, from contexts for expressions of CbV-BLC into contexts of $\textrm{CbV-DC}_{\rightarrow\leftarrow}$ as follows: $\displaystyle(\\{-\\})^{\sharp}_{K}$ $\displaystyle\equiv\\{-\\}\mathbin{\bullet}K$ $\displaystyle(V\mathcal{E})^{\sharp}_{K}$ $\displaystyle\equiv(\mathcal{E})^{\sharp}_{x.((V)^{\sharp}\mathbin{\bullet}(x\mathbin{\texttt{@}}K))}$ $\displaystyle(\mathcal{E}\mathit{E})^{\sharp}_{K}$ $\displaystyle\equiv(\mathcal{E})^{\sharp}_{(\mathit{E})^{\sharp}\mathbin{\texttt{@}}K}$ $\displaystyle(\mathord{(V,\mathcal{E})})^{\sharp}_{K}$ $\displaystyle\equiv(\mathcal{E})^{\sharp}_{x.(\langle(V)^{\sharp},x\rangle\mathbin{\bullet}K)}$ $\displaystyle(\mathord{(\mathcal{E},\mathit{E})})^{\sharp}_{K}$ $\displaystyle\equiv(\mathcal{E})^{\sharp}_{x.(\langle x,(\mathit{E})^{\sharp}\rangle\mathbin{\bullet}K)}$ $\displaystyle(\pi_{0}(\mathcal{E}))^{\sharp}_{K}$ $\displaystyle\equiv(\mathcal{E})^{\sharp}_{{\tt fst}[K]}$ $\displaystyle(\pi_{1}(\mathcal{E}))^{\sharp}_{K}$ $\displaystyle\equiv(\mathcal{E})^{\sharp}_{{\tt snd}[K]}\enspace.$ Next, we can immediately confirm $\displaystyle(\mathcal{E}\\{\mathit{E}\\})^{\sharp}\mathbin{\bullet}K=_{\mathit{dcv}}(\mathit{E})^{\sharp}\mathbin{\bullet}x.((\mathcal{E})^{\sharp}_{K}\\{x\\})$ ($\ast$) by induction on $\mathcal{E}$. Finally, the theorem can be shown by case analysis on $=_{\mathit{v}}$, using Lemma A.7. The case of $(\lambda\mathit{x}.\mathit{E})V=_{\mathit{v}}[V/\mathit{x}]\mathit{E}$.
By using Lemma A.7 (1) and (2), we have $\displaystyle((\lambda\mathit{x}.\mathit{E})V)^{\sharp}$ $\displaystyle\equiv(\lambda x.(\mathit{E})^{\sharp}\mathbin{\bullet}((V)^{\sharp}\mathbin{\texttt{@}}\alpha)).\alpha=_{\mathit{dcv}}((V)^{\sharp}\mathbin{\bullet}x.((\mathit{E})^{\sharp}\mathbin{\bullet}\alpha)).\alpha$ $\displaystyle=_{\mathit{dcv}}([(V)^{\sharp}/\mathit{x}](\mathit{E})^{\sharp}\mathbin{\bullet}\alpha).\alpha\equiv(([V/\mathit{x}]\mathit{E})^{\sharp}\mathbin{\bullet}\alpha).\alpha=_{\mathit{dcv}}([V/\mathit{x}]\mathit{E})^{\sharp}\enspace.$ The case of $\lambda\mathit{x}.V\mathit{x}=_{\mathit{v}}V$. By using Lemma A.7 (1), we have $\displaystyle(\lambda\mathit{x}.V\mathit{x})^{\sharp}$ $\displaystyle\equiv\lambda x.(((V)^{\sharp}\mathbin{\bullet}(x\mathbin{\texttt{@}}\alpha)).\alpha)=_{\mathit{dcv}}(V)^{\sharp}\enspace.$ The case of $\pi_{0}(\mathord{(V_{0},V_{1})})=_{\mathit{v}}V_{0}$. By using Lemma A.7 (1), we have $\displaystyle(\pi_{0}(\mathord{(V_{0},V_{1})}))^{\sharp}\equiv(\langle(V_{0})^{\sharp},(V_{1})^{\sharp}\rangle\mathbin{\bullet}{\tt fst}[\alpha]).\alpha=_{\mathit{dcv}}((V_{0})^{\sharp}\mathbin{\bullet}\alpha).\alpha=_{\mathit{dcv}}(V_{0})^{\sharp}\enspace.$ The case of $\pi_{1}(\mathord{(V_{0},V_{1})})=_{\mathit{v}}V_{1}$ is shown similarly. The case of $\mathord{(\pi_{0}(V),\pi_{1}(V))}=_{\mathit{v}}V$. By using Lemma A.7 (1), we have $\displaystyle(\mathord{(\pi_{0}(V),\pi_{1}(V))})^{\sharp}\equiv\langle((V)^{\sharp}\mathbin{\bullet}{\tt fst}[\alpha]).\alpha,((V)^{\sharp}\mathbin{\bullet}{\tt snd}[\alpha]).\alpha\rangle=_{\mathit{dcv}}(V)^{\sharp}\enspace.$ The case of $\mu a.\mathord{\langle\mathit{E}\>|\>a\rangle}=_{\mathit{v}}\mathit{E}$. 
We have $\displaystyle(\mu a.\mathord{\langle\mathit{E}\>|\>a\rangle})^{\sharp}\equiv((\mathit{E})^{\sharp}\mathbin{\bullet}\alpha).\alpha=_{\mathit{dcv}}(\mathit{E})^{\sharp}\enspace.$ The case of $\mathord{\langle V\>|\>\mu\mathit{x}.N\rangle}=_{\mathit{v}}[V/\mathit{x}]N$. By using Lemma A.7 (1) and (2), we have $\displaystyle(\mathord{\langle V\>|\>\mu\mathit{x}.N\rangle})^{\sharp}\equiv(V)^{\sharp}\mathbin{\bullet}x.((N)^{\sharp})=_{\mathit{dcv}}[(V)^{\sharp}/x](N)^{\sharp}\equiv([V/x]N)^{\sharp}\enspace.$ The case of $\mathord{\langle\mathcal{E}\\{\mathit{E}\\}\>|\>\mathit{C}\rangle}=_{\mathit{v}}\mathord{\langle\mathit{E}\>|\>\mu\mathit{x}.\mathord{\langle\mathcal{E}\\{\mathit{x}\\}\>|\>\mathit{C}\rangle}\rangle}$. By using $(\ast)$, we have $\displaystyle(\mathord{\langle\mathcal{E}\\{\mathit{E}\\}\>|\>\mathit{C}\rangle})^{\sharp}$ $\displaystyle\equiv(\mathcal{E}\\{\mathit{E}\\})^{\sharp}\mathbin{\bullet}(\mathit{C})^{\sharp}=_{\mathit{dcv}}(\mathit{E})^{\sharp}\mathbin{\bullet}x.((\mathcal{E})^{\sharp}_{(\mathit{C})^{\sharp}}\\{x\\})$ $\displaystyle=_{\mathit{dcv}}(\mathit{E})^{\sharp}\mathbin{\bullet}x.((x)^{\sharp}\mathbin{\bullet}x.((\mathcal{E})^{\sharp}_{(\mathit{C})^{\sharp}}\\{x\\}))$ $\displaystyle=_{\mathit{dcv}}(\mathit{E})^{\sharp}\mathbin{\bullet}x.((\mathcal{E}\\{\mathit{x}\\})^{\sharp}\mathbin{\bullet}(\mathit{C})^{\sharp})\equiv(\mathord{\langle\mathit{E}\>|\>\mu\mathit{x}.\mathord{\langle\mathcal{E}\\{\mathit{x}\\}\>|\>\mathit{C}\rangle}\rangle})^{\sharp}\enspace.$ The case of $(\lambda a.\mathit{C}_{0})\mathit{C}_{1}=_{\mathit{v}}[\mathit{C}_{1}/a]\mathit{C}_{0}$.
By using Lemma A.7 (2), we have $\displaystyle((\lambda a.\mathit{C}_{0})\mathit{C}_{1})^{\sharp}$ $\displaystyle\equiv x.(((\mathit{C}_{1})^{\sharp}\mathbin{\$}x)\mathbin{\bullet}\lambda\alpha.(\mathit{C}_{0})^{\sharp})=_{\mathit{dcv}}x.((x\mathbin{\bullet}(\mathit{C}_{0})^{\sharp}).\alpha\mathbin{\bullet}(\mathit{C}_{1})^{\sharp})$ $\displaystyle=_{\mathit{dcv}}x.(x\mathbin{\bullet}[(\mathit{C}_{1})^{\sharp}/\alpha](\mathit{C}_{0})^{\sharp})=_{\mathit{dcv}}[(\mathit{C}_{1})^{\sharp}/\alpha](\mathit{C}_{0})^{\sharp}\equiv([\mathit{C}_{1}/\alpha]\mathit{C}_{0})^{\sharp}\enspace.$ The case of $\lambda a.\mathit{C}a=_{\mathit{v}}\mathit{C}$. We have $\displaystyle(\lambda a.\mathit{C}a)^{\sharp}$ $\displaystyle\equiv\lambda\alpha.(x.((\alpha\mathbin{\$}x)\mathbin{\bullet}(\mathit{C})^{\sharp}))=_{\mathit{dcv}}(\mathit{C})^{\sharp}\enspace.$ The case of $\pi_{0}(\mathord{(\mathit{C}_{0},\mathit{C}_{1})})=_{\mathit{v}}\mathit{C}_{0}$. We have $\displaystyle(\pi_{0}(\mathord{(\mathit{C}_{0},\mathit{C}_{1})}))^{\sharp}$ $\displaystyle\equiv x.(\langle x\rangle{\tt inl}\mathbin{\bullet}[(\mathit{C}_{0})^{\sharp},(\mathit{C}_{1})^{\sharp}])=_{\mathit{dcv}}x.(x\mathbin{\bullet}(\mathit{C}_{0})^{\sharp})=_{\mathit{dcv}}(\mathit{C}_{0})^{\sharp}\enspace.$ The case of $\pi_{1}(\mathord{(\mathit{C}_{0},\mathit{C}_{1})})=_{\mathit{v}}\mathit{C}_{1}$ is also shown similarly. The case of $\mathord{(\pi_{0}(\mathit{C}),\pi_{1}(\mathit{C}))}=_{\mathit{v}}\mathit{C}$. We have $\displaystyle(\mathord{(\pi_{0}(\mathit{C}),\pi_{1}(\mathit{C}))})^{\sharp}$ $\displaystyle\equiv[x.(\langle x\rangle{\tt inl}\mathbin{\bullet}(\mathit{C})^{\sharp}),x.(\langle x\rangle{\tt inr}\mathbin{\bullet}(\mathit{C})^{\sharp})]=_{\mathit{dcv}}(\mathit{C})^{\sharp}\enspace.$ The case of $\mu\mathit{x}.\mathord{\langle\mathit{x}\>|\>\mathit{C}\rangle}=_{\mathit{v}}\mathit{C}$.
We have $\displaystyle(\mu\mathit{x}.\mathord{\langle\mathit{x}\>|\>\mathit{C}\rangle})^{\sharp}$ $\displaystyle\equiv x.(x\mathbin{\bullet}(\mathit{C})^{\sharp})=_{\mathit{dcv}}(\mathit{C})^{\sharp}\enspace.$ The case of $\mathord{\langle\mu a.N\>|\>\mathit{C}\rangle}=_{\mathit{v}}[\mathit{C}/a]N$. By using Lemma A.7 (2), we have $\displaystyle(\mathord{\langle\mu a.N\>|\>\mathit{C}\rangle})^{\sharp}$ $\displaystyle\equiv((N)^{\sharp}).\alpha\mathbin{\bullet}(\mathit{C})^{\sharp}=_{\mathit{dcv}}[(\mathit{C})^{\sharp}/\alpha](N)^{\sharp}\equiv([\mathit{C}/\alpha]N)^{\sharp}\enspace.$ We next define a translation $(-)^{\flat}$ from $\textrm{CbV-DC}_{\rightarrow\leftarrow}$ into CbV-BLC. Expression $(M)^{\flat}$, continuation $(K)^{\flat}$, and command $(S)^{\flat}$ for any typable $M$, $K$, and $S$ in $\textrm{CbV-DC}_{\rightarrow\leftarrow}$ are defined inductively as shown in Figure 14. We show that this translation preserves typability. Let $\varGamma$ be $\overrightarrow{x_{cst^{o^{\prime}}}\colon o^{\prime}},\overrightarrow{x\colon A}$. Then we define $(\varGamma)^{\flat}$ by $\overrightarrow{x\colon\mathord{\mathop{+}{A}}}$. Similarly, we also define $(\varDelta)^{\flat}$.
$\displaystyle(x_{cst^{o}})^{\flat}$ $\displaystyle\equiv\mathit{cst}^{o}$ $\displaystyle(x)^{\flat}$ $\displaystyle\equiv\mathit{x}^{A}$ if $x$ has a type $A$ $\displaystyle(\lambda x.M)^{\flat}$ $\displaystyle\equiv\lambda\mathit{x}^{A}.(M)^{\flat}$ if $\lambda x.M$ has a type $A\to A_{1}$ $\displaystyle(K\mathbin{\$}M)^{\flat}$ $\displaystyle\equiv\mu a.\mathord{\langle(M)^{\flat}\>|\>a(K)^{\flat}\rangle}$ $\displaystyle(\langle M_{0},M_{1}\rangle)^{\flat}$ $\displaystyle\equiv\mathord{((M_{0})^{\flat},(M_{1})^{\flat})}$ $\displaystyle(\langle M\rangle{\tt inl})^{\flat}$ $\displaystyle\equiv\mu a.\mathord{\langle(M)^{\flat}\>|\>\pi_{0}(a)\rangle}$ $\displaystyle(\langle M\rangle{\tt inr})^{\flat}$ $\displaystyle\equiv\mu a.\mathord{\langle(M)^{\flat}\>|\>\pi_{1}(a)\rangle}$ $\displaystyle((S).\alpha)^{\flat}$ $\displaystyle\equiv\mu a^{A}.(S)^{\flat}\quad\hbox{if $\alpha\colon A$}$ $\displaystyle(\alpha_{\bullet^{o}})^{\flat}$ $\displaystyle\equiv\bullet^{o}$ $\displaystyle(\alpha)^{\flat}$ $\displaystyle\equiv a^{A}$ if $\alpha$ has a type $A$ $\displaystyle(\lambda\alpha.K)^{\flat}$ $\displaystyle\equiv\lambda a^{A}.(K)^{\flat}$ if $\lambda\alpha.K$ has a type $A_{1}\leftarrow A$ $\displaystyle(M\mathbin{\texttt{@}}K)^{\flat}$ $\displaystyle\equiv\mu\mathit{x}.\mathord{\langle\mathit{x}(M)^{\flat}\>|\>(K)^{\flat}\rangle}$ $\displaystyle([K_{0},K_{1}])^{\flat}$ $\displaystyle\equiv\mathord{((K_{0})^{\flat},(K_{1})^{\flat})}$ $\displaystyle({\tt fst}[K])^{\flat}$ $\displaystyle\equiv\mu\mathit{x}.\mathord{\langle\pi_{0}(\mathit{x})\>|\>(K)^{\flat}\rangle}$ $\displaystyle({\tt snd}[K])^{\flat}$ $\displaystyle\equiv\mu\mathit{x}.\mathord{\langle\pi_{1}(\mathit{x})\>|\>(K)^{\flat}\rangle}$ $\displaystyle(x.(S))^{\flat}$ $\displaystyle\equiv\mu\mathit{x}^{A}.(S)^{\flat}\quad\hbox{if $x\colon A$}$ $(M\mathbin{\bullet}K)^{\flat}\equiv\mathord{\langle(M)^{\flat}\>|\>(K)^{\flat}\rangle}$ Figure 14: A translation from
$\textrm{CbV-DC}_{\rightarrow\leftarrow}$ into CbV-BLC. ###### Proposition A.11. 1. 1. $\varGamma\vdash\varDelta\mid M\colon A$ implies $(\varGamma)^{\flat};(\varDelta)^{\flat}\vdash_{+}(M)^{\flat}\colon A$, 2. 2. $K\colon A\mid\varGamma\vdash\varDelta$ implies $(\varGamma)^{\flat};(\varDelta)^{\flat}\vdash_{-}(K)^{\flat}\colon A$, and 3. 3. $\varGamma\mid S\vdash\varDelta$ implies $(\varGamma)^{\flat};(\varDelta)^{\flat}\vdash_{\hskip 1.5pt\mathrm{o}\hskip 1.5pt}(S)^{\flat}$. ###### Proof A.12. The claims can be shown by simultaneous induction on the derivation of judgments of $\textrm{CbV-DC}_{\rightarrow\leftarrow}$. We show that the translation $(-)^{\flat}$ is an inverse of $(-)^{\sharp}$ up to $=_{\mathit{v}}$ as follows: ###### Theorem A.13. 1. 1. $((D)^{\sharp})^{\flat}=_{\mathit{v}}D$ holds, and 2. 2. $((O)^{\flat})^{\sharp}=_{\mathit{dcv}}O$ holds. ###### Proof A.14. (1) is shown by induction on $D$. (2) is shown by induction on $O$. ###### Lemma A.15. 1. 1. $M$ is a value of $\textrm{CbV-DC}_{\rightarrow\leftarrow}$ if and only if there exists a value $V$ of CbV-BLC such that $V=_{\mathit{v}}(M)^{\flat}$, and 2. 2. $([W/x]O)^{\flat}\equiv[(W)^{\flat}/\mathit{x}](O)^{\flat}$ and $([K/\alpha]O)^{\flat}\equiv[(K)^{\flat}/a](O)^{\flat}$. ###### Proof A.16. The claim (1) is shown immediately. The claims of (2) are shown by induction on $O$. 
For any $\mathcal{F}$ (contexts of $\textrm{CbV-DC}_{\rightarrow\leftarrow}$), we define $(\mathcal{F})^{\flat}$ (contexts for expressions of CbV-BLC) as follows: $\displaystyle(\\{-\\})^{\flat}$ $\displaystyle\equiv\\{-\\}$ $\displaystyle(K\mathbin{\$}\mathcal{F})^{\flat}$ $\displaystyle\equiv\mu a.\mathord{\langle(\mathcal{F})^{\flat}\>|\>a(K)^{\flat}\rangle}$ $\displaystyle(\langle W,\mathcal{F}\rangle)^{\flat}$ $\displaystyle\equiv\mathord{((W)^{\flat},(\mathcal{F})^{\flat})}$ $\displaystyle(\langle\mathcal{F},M\rangle)^{\flat}$ $\displaystyle\equiv\mathord{((\mathcal{F})^{\flat},(M)^{\flat})}$ $\displaystyle(\langle\mathcal{F}\rangle{\tt inl})^{\flat}$ $\displaystyle\equiv\mu a.\mathord{\langle(\mathcal{F})^{\flat}\>|\>\pi_{0}(a)\rangle}$ $\displaystyle(\langle\mathcal{F}\rangle{\tt inr})^{\flat}$ $\displaystyle\equiv\mu a.\mathord{\langle(\mathcal{F})^{\flat}\>|\>\pi_{1}(a)\rangle}\enspace.$ ###### Lemma A.17. 1. 1. $(\mathcal{F}\\{M\\})^{\flat}=_{\mathit{v}}(\mathcal{F})^{\flat}\\{(M)^{\flat}\\}$ holds, and 2. 2. $\mathord{\langle(\mathcal{F})^{\flat}\\{\mathit{E}\\}\>|\>\mathit{C}\rangle}=_{\mathit{v}}\mathord{\langle\mathit{E}\>|\>\mu\mathit{x}.\mathord{\langle(\mathcal{F})^{\flat}\\{\mathit{x}\\}\>|\>\mathit{C}\rangle}\rangle}$ holds. ###### Proof A.18. The claims (1) and (2) are shown by induction on $\mathcal{F}$. ###### Theorem A.19. $O_{0}=_{\mathit{dcv}}O_{1}$ implies $(O_{0})^{\flat}=_{\mathit{v}}(O_{1})^{\flat}$. ###### Proof A.20. The claim is shown by the case analysis of $=_{\mathit{dcv}}$. The case of $(\beta\mathord{\to})$ is shown as follows. 
$\displaystyle((\lambda x.M_{0})\mathbin{\bullet}(M_{1}\mathbin{\texttt{@}}K))^{\flat}$ $\displaystyle\equiv\mathord{\langle\lambda\mathit{x}.(M_{0})^{\flat}\>|\>\mu\mathit{x}_{1}.\mathord{\langle\mathit{x}_{1}(M_{1})^{\flat}\>|\>(K)^{\flat}\rangle}\rangle}$ $\displaystyle=_{\mathit{v}}\mathord{\langle(\lambda\mathit{x}.(M_{0})^{\flat})(M_{1})^{\flat}\>|\>(K)^{\flat}\rangle}$ $\displaystyle=_{\mathit{v}}\mathord{\langle(M_{1})^{\flat}\>|\>\mu\mathit{x}.\mathord{\langle(\lambda\mathit{x}.(M_{0})^{\flat})\mathit{x}\>|\>(K)^{\flat}\rangle}\rangle}$ $\displaystyle=_{\mathit{v}}\mathord{\langle(M_{1})^{\flat}\>|\>\mu\mathit{x}.\mathord{\langle(M_{0})^{\flat}\>|\>(K)^{\flat}\rangle}\rangle}$ $\displaystyle\equiv(M_{1}\mathbin{\bullet}x.(M_{0}\mathbin{\bullet}K))^{\flat}\enspace.$ The case of $(\beta\mathord{\leftarrow})$ is shown as follows. $\displaystyle((K_{0}\mathbin{\$}M)\mathbin{\bullet}(\lambda\alpha.K_{1}))^{\flat}$ $\displaystyle\equiv\mathord{\langle\mu a_{1}.\mathord{\langle(M)^{\flat}\>|\>a_{1}(K_{0})^{\flat}\rangle}\>|\>\lambda a.(K_{1})^{\flat}\rangle}$ $\displaystyle=_{\mathit{v}}\mathord{\langle(M)^{\flat}\>|\>(\lambda a.(K_{1})^{\flat})(K_{0})^{\flat}\rangle}$ $\displaystyle=_{\mathit{v}}\mathord{\langle(M)^{\flat}\>|\>[(K_{0})^{\flat}/a](K_{1})^{\flat}\rangle}$ $\displaystyle\equiv[(K_{0})^{\flat}/a]\mathord{\langle(M)^{\flat}\>|\>(K_{1})^{\flat}\rangle}$ $\displaystyle=_{\mathit{v}}\mathord{\langle\mu a.\mathord{\langle(M)^{\flat}\>|\>(K_{1})^{\flat}\rangle}\>|\>(K_{0})^{\flat}\rangle}$ $\displaystyle\equiv((M\mathbin{\bullet}K_{1}).\alpha\mathbin{\bullet}K_{0})^{\flat}\enspace.$ The case of $(\beta\land_{0})$. 
By using Lemma A.15 (1), we have $\displaystyle(\langle W_{0},W_{1}\rangle\mathbin{\bullet}{\tt fst}[K])^{\flat}$ $\displaystyle\equiv\mathord{\langle\mathord{((W_{0})^{\flat},(W_{1})^{\flat})}\>|\>\mu\mathit{x}.\mathord{\langle\pi_{0}(\mathit{x})\>|\>(K)^{\flat}\rangle}\rangle}$ $\displaystyle=_{\mathit{v}}\mathord{\langle\pi_{0}(\mathord{((W_{0})^{\flat},(W_{1})^{\flat})})\>|\>(K)^{\flat}\rangle}$ $\displaystyle=_{\mathit{v}}\mathord{\langle(W_{0})^{\flat}\>|\>(K)^{\flat}\rangle}\equiv(W_{0}\mathbin{\bullet}K)^{\flat}\enspace.$ The case of $(\beta\land_{1})$ is shown similarly. The case of $(\beta\vee_{0})$. We have $\displaystyle(\langle W\rangle{\tt inl}\mathbin{\bullet}[K_{0},K_{1}])^{\flat}$ $\displaystyle\equiv\mathord{\langle\mu a.\mathord{\langle(W)^{\flat}\>|\>\pi_{0}(a)\rangle}\>|\>\mathord{((K_{0})^{\flat},(K_{1})^{\flat})}\rangle}$ $\displaystyle=_{\mathit{v}}\mathord{\langle(W)^{\flat}\>|\>\pi_{0}(\mathord{((K_{0})^{\flat},(K_{1})^{\flat})})\rangle}$ $\displaystyle=_{\mathit{v}}\mathord{\langle(W)^{\flat}\>|\>(K_{0})^{\flat}\rangle}\equiv(W\mathbin{\bullet}K_{0})^{\flat}\enspace.$ The case of $(\beta\vee_{1})$ is shown similarly. The case of $(\beta R)$. By using Lemma A.15 (1) and (2), we have $\displaystyle(W\mathbin{\bullet}x.(S))^{\flat}$ $\displaystyle\equiv\mathord{\langle(W)^{\flat}\>|\>\mu\mathit{x}.(S)^{\flat}\rangle}=_{\mathit{v}}[(W)^{\flat}/\mathit{x}](S)^{\flat}\equiv([W/x]S)^{\flat}\enspace.$ The case of $(\beta L)$. By using Lemma A.15 (2), we have $\displaystyle((S).\alpha\mathbin{\bullet}K)^{\flat}$ $\displaystyle\equiv\mathord{\langle\mu a.(S)^{\flat}\>|\>(K)^{\flat}\rangle}=_{\mathit{v}}[(K)^{\flat}/a](S)^{\flat}\equiv([K/\alpha]S)^{\flat}\enspace.$ The case of $(\eta\mathord{\to})$ is shown by using Lemma A.15 (1). 
$\displaystyle(\lambda x.((W\mathbin{\bullet}(x\mathbin{\texttt{@}}\alpha)).\alpha))^{\flat}$ $\displaystyle\equiv\lambda\mathit{x}.\mu a.\mathord{\langle(W)^{\flat}\>|\>\mu\mathit{x}_{1}.\mathord{\langle\mathit{x}_{1}\mathit{x}\>|\>a\rangle}\rangle}$ $\displaystyle=_{\mathit{v}}\lambda\mathit{x}.\mu a.\mathord{\langle(W)^{\flat}\mathit{x}\>|\>a\rangle}=_{\mathit{v}}\lambda\mathit{x}.(W)^{\flat}\mathit{x}=_{\mathit{v}}(W)^{\flat}\enspace.$ The case of $(\eta\mathord{\leftarrow})$. $\displaystyle(\lambda\alpha.(x.((\alpha\mathbin{\$}x)\mathbin{\bullet}K)))^{\flat}$ $\displaystyle\equiv\lambda a.\mu\mathit{x}.\mathord{\langle\mu a_{1}.\mathord{\langle a_{1}a\>|\>\mathit{x}\rangle}\>|\>(K)^{\flat}\rangle}$ $\displaystyle=_{\mathit{v}}\lambda a.\mu\mathit{x}.\mathord{\langle(K)^{\flat}a\>|\>\mathit{x}\rangle}=_{\mathit{v}}\lambda a.(K)^{\flat}a=_{\mathit{v}}(K)^{\flat}\enspace.$ The case of $(\eta\land)$. Note that $((W\mathbin{\bullet}{\tt fst}[\alpha]).\alpha)^{\flat}\equiv\pi_{0}((W)^{\flat})$ by Lemma A.15 (1). Hence we have $\displaystyle(\langle(W\mathbin{\bullet}{\tt fst}[\alpha]).\alpha,(W\mathbin{\bullet}{\tt snd}[\alpha]).\alpha\rangle)^{\flat}$ $\displaystyle=_{\mathit{v}}\mathord{(\pi_{0}((W)^{\flat}),\pi_{1}((W)^{\flat}))}=_{\mathit{v}}(W)^{\flat}\enspace.$ The case of $(\eta\vee)$. Note that $(x.(\langle x\rangle{\tt inl}\mathbin{\bullet}K))^{\flat}=_{\mathit{v}}\pi_{0}((K)^{\flat})$. Hence we have $\displaystyle([x.(\langle x\rangle{\tt inl}\mathbin{\bullet}K),x.(\langle x\rangle{\tt inr}\mathbin{\bullet}K)])^{\flat}$ $\displaystyle=_{\mathit{v}}\mathord{(\pi_{0}((K)^{\flat}),\pi_{1}((K)^{\flat}))}=_{\mathit{v}}(K)^{\flat}\enspace.$ The cases of $(\eta R)$ and $(\eta L)$ are shown immediately. 
The case of $(\zeta)$ is shown by using Lemma A.17: $\displaystyle(\mathcal{F}\\{M\\}\mathbin{\bullet}K)^{\flat}$ $\displaystyle\equiv\mathord{\langle(\mathcal{F}\\{M\\})^{\flat}\>|\>(K)^{\flat}\rangle}=_{\mathit{v}}\mathord{\langle(\mathcal{F})^{\flat}\\{(M)^{\flat}\\}\>|\>(K)^{\flat}\rangle}$ $\displaystyle=_{\mathit{v}}\mathord{\langle(M)^{\flat}\>|\>\mu\mathit{x}.\mathord{\langle(\mathcal{F})^{\flat}\\{\mathit{x}\\}\>|\>(K)^{\flat}\rangle}\rangle}$ $\displaystyle=_{\mathit{v}}(M\mathbin{\bullet}x.(\mathcal{F}\\{x\\}\mathbin{\bullet}K))^{\flat}\enspace.$ The equivalence between $\textrm{CbV-DC}_{\rightarrow\leftarrow}$ and CbV-BLC clarifies an essential difference between the _full_ dual calculus and the bilateral $\lambda$-calculus. The negation of the dual calculus is not involutive, since $\operatorname{\neg}{\operatorname{\neg}{A}}$ is not isomorphic to $A$. The dual calculus contains an involutive duality not as the object-level negation but as a meta-level operation, such as the antecedent and succedent duality of the sequent calculus. In the bilateral $\lambda$-calculus, on the other hand, negation is represented by inversion of polarities, and hence is involutive by definition. ## Appendix B Proofs ###### Proof B.1. 
The proposition holds as follows: $\mathord{\mathop{+}{A_{0}\to A_{1}}}$ $[\mathord{\mathop{+}{A_{0}}}]$ $\mathord{\mathop{+}{A_{1}}}$ $[\mathord{\mathop{-}{A_{1}}}]$ $\bot$ $\mathord{\mathop{-}{A_{0}}}$ $\mathord{\mathop{-}{A_{0}\leftarrow A_{1}}}$ $\mathord{\mathop{-}{A_{0}\to A_{1}}}$ $\mathord{\mathop{+}{A_{0}}}$ $\mathord{\mathop{-}{A_{0}\to A_{1}}}$ $\mathord{\mathop{-}{A_{1}}}$ $\mathord{\mathop{+}{A_{0}\leftarrow A_{1}}}$ $\mathord{\mathop{+}{A_{0}\leftarrow A_{1}}}$ $\mathord{\mathop{+}{A_{0}}}$ $\mathord{\mathop{+}{A_{0}\leftarrow A_{1}}}$ $\mathord{\mathop{-}{A_{1}}}$ $\mathord{\mathop{-}{A_{0}\to A_{1}}}$ $[\mathord{\mathop{+}{A_{0}}}]$ $\mathord{\mathop{-}{A_{0}\leftarrow A_{1}}}$ $[\mathord{\mathop{-}{A_{1}}}]$ $\mathord{\mathop{-}{A_{0}}}$ $\bot$ $\mathord{\mathop{+}{A_{1}}}$ $\mathord{\mathop{+}{A_{0}\to A_{1}}}$ ###### Proof B.2. 1) It suffices to use $\mathrm{(}\textrm{Non-contradiction}\mathrm{)}$ and $\mathrm{(}\textrm{Reductio}\mathrm{)}$ with $\mathrm{(}\mathord{\wedge}\textrm{-}{\mathrm{E}_{\mathord{+}\mathord{0}}}\mathrm{)}$, $\mathrm{(}\mathord{\wedge}\textrm{-}{\mathrm{E}_{\mathord{+}\mathord{1}}}\mathrm{)}$, $\mathrm{(}\mathord{\wedge}\textrm{-}{\mathrm{I}_{\mathord{+}\mathord{}}}\mathrm{)}$, $\mathrm{(}\mathord{\vee}\textrm{-}{\mathrm{E}_{\mathord{-}\mathord{}}}\mathrm{)}$, $\mathrm{(}\mathord{\vee}\textrm{-}{\mathrm{I}_{\mathord{-}\mathord{0}}}\mathrm{)}$, and $\mathrm{(}\mathord{\vee}\textrm{-}{\mathrm{I}_{\mathord{-}\mathord{1}}}\mathrm{)}$ as follows: $\mathord{\mathop{-}{A_{i}}}$ $[\mathord{\mathop{+}{A_{0}\wedge A_{1}}}]$ $\mathord{\mathop{+}{A_{i}}}$ $\bot$ $\mathord{\mathop{-}{A_{0}\wedge A_{1}}}$ $\mathord{\mathop{-}{A_{0}\wedge A_{1}}}$ $[\mathord{\mathop{-}{A_{0}}}]$ $\smash{\vdots}\rule{0.0pt}{8.61108pt}$ $\mathcal{A}$ $[\mathcal{A}^{\ast}]$ $\bot$ $\mathord{\mathop{+}{A_{0}}}$ $[\mathord{\mathop{-}{A_{1}}}]$ $\smash{\vdots}\rule{0.0pt}{8.61108pt}$ $\mathcal{A}$ $[\mathcal{A}^{\ast}]$ $\bot$ $\mathord{\mathop{+}{A_{1}}}$ 
$\mathord{\mathop{+}{A_{0}\wedge A_{1}}}$ $\bot$ $\mathcal{A}$ $\mathord{\mathop{+}{A_{0}\vee A_{1}}}$ $[\mathord{\mathop{+}{A_{0}}}]$ $\smash{\vdots}\rule{0.0pt}{8.61108pt}$ $\mathcal{A}$ $[\mathcal{A}^{\ast}]$ $\bot$ $\mathord{\mathop{-}{A_{0}}}$ $[\mathord{\mathop{+}{A_{1}}}]$ $\smash{\vdots}\rule{0.0pt}{8.61108pt}$ $\mathcal{A}$ $[\mathcal{A}^{\ast}]$ $\bot$ $\mathord{\mathop{-}{A_{1}}}$ $\mathord{\mathop{-}{A_{0}\vee A_{1}}}$ $\bot$ $\mathcal{A}$ $\mathord{\mathop{+}{A_{i}}}$ $[\mathord{\mathop{-}{A_{0}\vee A_{1}}}]$ $\mathord{\mathop{-}{A_{i}}}$ $\bot$ $\mathord{\mathop{+}{A_{0}\vee A_{1}}}$ where $i=0,1$. 2) It suffices to use $\mathrm{(}\textrm{Non-contradiction}\mathrm{)}$ and $\mathrm{(}\textrm{Reductio}\mathrm{)}$ with $\mathrm{(}\mathord{\to}\textrm{-}{\mathrm{E}_{\mathord{+}\mathord{}}}\mathrm{)}$, $\mathrm{(}\mathord{\to}\textrm{-}{\mathrm{I}_{\mathord{+}\mathord{}}}\mathrm{)}$, $\mathrm{(}\mathord{\leftarrow}\textrm{-}{\mathrm{E}_{\mathord{-}\mathord{}}}\mathrm{)}$, and $\mathrm{(}\mathord{\leftarrow}\textrm{-}{\mathrm{I}_{\mathord{-}\mathord{}}}\mathrm{)}$ as follows: $[\mathord{\mathop{+}{A_{0}\to A_{1}}}]$ $\mathord{\mathop{+}{A_{0}}}$ $\mathord{\mathop{+}{A_{1}}}$ $\mathord{\mathop{-}{A_{1}}}$ $\bot$ $\mathord{\mathop{-}{A_{0}\to A_{1}}}$ $\mathord{\mathop{-}{A_{0}\to A_{1}}}$ $[\mathord{\mathop{+}{A_{0}}}]$ $[\mathord{\mathop{-}{A_{0}}}]$ $\bot$ $\mathord{\mathop{+}{A_{1}}}$ $\mathord{\mathop{+}{A_{0}\to A_{1}}}$ $\bot$ $\mathord{\mathop{+}{A_{0}}}$ $\mathord{\mathop{-}{A_{0}\to A_{1}}}$ $[\mathord{\mathop{+}{A_{1}}}]$ $\mathord{\mathop{+}{A_{0}\to A_{1}}}$ $\bot$ $\mathord{\mathop{-}{A_{1}}}$ $[\mathord{\mathop{-}{A_{0}\leftarrow A_{1}}}]$ $\mathord{\mathop{-}{A_{1}}}$ $\mathord{\mathop{-}{A_{0}}}$ $\mathord{\mathop{+}{A_{0}}}$ $\bot$ $\mathord{\mathop{+}{A_{0}\leftarrow A_{1}}}$ $\mathord{\mathop{+}{A_{0}\leftarrow A_{1}}}$ $[\mathord{\mathop{-}{A_{0}}}]$ $\mathord{\mathop{-}{A_{0}\leftarrow A_{1}}}$ $\bot$ $\mathord{\mathop{+}{A_{0}}}$ 
$\mathord{\mathop{+}{A_{0}\leftarrow A_{1}}}$ $[\mathord{\mathop{+}{A_{1}}}]$ $[\mathord{\mathop{-}{A_{1}}}]$ $\bot$ $\mathord{\mathop{-}{A_{0}}}$ $\mathord{\mathop{-}{A_{0}\leftarrow A_{1}}}$ $\bot$ $\mathord{\mathop{-}{A_{1}}}$ ###### Proof B.3. The proposition holds, as the following derivations show: $\varPi^{\prime};\varSigma^{\prime}\vdash_{+}\mathit{E}\colon A_{0}\to A_{1}$ $\varPi^{\prime};\varSigma^{\prime}\vdash_{+}\mathit{x}\colon A_{0}$ $\varPi^{\prime};\varSigma^{\prime}\vdash_{+}\mathit{E}\mathit{x}\colon A_{1}$ $\varPi^{\prime};\varSigma^{\prime}\vdash_{-}a\colon A_{1}$ $\varPi,\mathit{x}\colon A_{0};\varSigma,a\colon A_{1}\vdash_{\hskip 1.5pt\mathrm{o}\hskip 1.5pt}\mathord{\langle\mathit{E}\mathit{x}\>|\>a\rangle}$ $\varPi;\varSigma,a\colon A_{1}\vdash_{-}\mu\mathit{x}.\mathord{\langle\mathit{E}\mathit{x}\>|\>a\rangle}\colon A_{0}$ $\varPi;\varSigma\vdash_{-}\lambda a.\mu\mathit{x}.\mathord{\langle\mathit{E}\mathit{x}\>|\>a\rangle}\colon A_{0}\leftarrow A_{1}$ $\varPi^{\prime};\varSigma^{\prime}\vdash_{+}\mathit{x}\colon A_{0}$ $\varPi^{\prime};\varSigma^{\prime}\vdash_{-}\mathit{C}\colon A_{0}\leftarrow A_{1}$ $\varPi^{\prime};\varSigma^{\prime}\vdash_{-}a\colon A_{1}$ $\varPi^{\prime};\varSigma^{\prime}\vdash_{-}\mathit{C}a\colon A_{0}$ $\varPi,\mathit{x}\colon A_{0};\varSigma,a\colon A_{1}\vdash_{\hskip 1.5pt\mathrm{o}\hskip 1.5pt}\mathord{\langle\mathit{x}\>|\>\mathit{C}a\rangle}$ $\varPi,\mathit{x}\colon A_{0};\varSigma\vdash_{+}\mu a.\mathord{\langle\mathit{x}\>|\>\mathit{C}a\rangle}\colon A_{1}$ $\varPi;\varSigma\vdash_{+}\lambda\mathit{x}.\mu a.\mathord{\langle\mathit{x}\>|\>\mathit{C}a\rangle}\colon A_{0}\to A_{1}$ are derived, where $\varPi^{\prime}$ and $\varSigma^{\prime}$ are $\varPi,\mathit{x}\colon A_{0}$ and $\varSigma,a\colon A_{1}$, respectively.
# Morse inequalities at infinity for a resonant mean field equation Mohameden Ahmedou $\&$ Mohamed Ben Ayed _In fond memory of Abbas Bahri_ Abstract In this paper we study the following mean field type equation $(MF)\qquad-\Delta_{g}u\,=\varrho(\frac{Ke^{u}}{\int_{\Sigma}Ke^{u}dV_{g}}\,-\,1)\,\mbox{ in }\Sigma,$ where $(\Sigma,g)$ is a closed oriented surface of unit volume $Vol_{g}(\Sigma)=1$, $K$ is a positive smooth function, and $\varrho=8\pi m$ with $m\in\mathbb{N}$. Building on the critical points at infinity approach initiated in [1] we develop, under generic conditions on the function $K$ and the metric $g$, a full Morse theory by proving Morse inequalities relating the Morse indices of the critical points, the indices of the critical points at infinity, and the Betti numbers of the space of formal barycenters $B_{m}(\Sigma)$. We derive from these _Morse inequalities at infinity_ various new existence as well as multiplicity results for the mean field equation in the resonant case, i.e. $\varrho\in 8\pi\mathbb{N}$. Key Words: Critical points at infinity, Morse theory, Space of formal barycenters. AMS subject classification: 35C60, 58J60, 35J91. ## 1 Introduction and statement of the results Let $(\Sigma,g)$ be a closed and oriented surface of unit volume $Vol_{g}(\Sigma)=1$. For a real number $\varrho\in\mathbb{R}^{+}$ and a positive smooth function $K$, we consider the following mean field equation (1) $(MF)\qquad-\Delta_{g}u\,=\varrho(\frac{Ke^{u}}{\int_{\Sigma}Ke^{u}dV_{g}}\,-\,1)\,\mbox{ in }\Sigma.$ Problem $(MF)$ arises as a limiting equation in some mean field approximation in the study of limit point vortices of Euler flows, in Onsager vortex theory, and also in the description of self-dual condensates in some Chern-Simons-Higgs models. See for example the papers [2, 8, 9, 12, 13, 14, 21, 22, 33, 29], and the monographs of Tarantello [30] and of Yang [32] as well as the references therein. 
Furthermore its study is also motivated by the prescribed Gauss curvature problem in differential geometry. Indeed, if $\Sigma$ is the standard $2$-sphere $\mathbb{S}^{2}$ endowed with its standard metric $g_{stand}$ and $\varrho=8\pi$, then Problem $(MF)$ is the so-called _Nirenberg's problem_ in conformal geometry, which consists of finding a metric $g$ conformally equivalent to $g_{stand}$ whose Gauss curvature is given by the function $K$. See [10, 19, 11] and the references therein. Equation $(MF)$ has a variational structure and the associated Euler-Lagrange functional $J_{\varrho}$ is defined by $J_{\varrho}(u)\,:=\,\frac{1}{2}||u||^{2}\,-\varrho\ln(\int_{\Sigma}Ke^{u}dV_{g})\quad\mbox{ for }u\in\mathring{H^{1}}(\Sigma)$ where $||u||:=||\nabla u||_{L^{2}}$ is the norm used to equip (2) $\mathring{H}^{1}(\Sigma):=\Big{\\{}u\in H^{1}(\Sigma):\int_{\Sigma}udV_{g}\,=0\Big{\\}}.$ It turns out that the analytical aspect of this variational problem depends on the range of values taken by the parameter $\varrho$. Indeed * • If $\varrho\notin 8\pi\mathbb{N}$ then the associated variational problem is compact in the sense that changes of the topology of the sublevel sets of $J_{\varrho}$ are induced only by critical points. We call it the _non-resonant case._ * • If $\varrho\in 8\pi\mathbb{N}$ then the associated variational problem is not compact, i.e. changes in the difference of topology between its sublevel sets might be induced by critical points at infinity; these are non-compact orbits of the gradient flow whose level sets are bounded (see [3] for a related notion for Yamabe-type problems). We call it the _resonant case._ In the non-resonant case, Yanyan Li [22] proved that the Leray-Schauder degree of the solutions of $(MF)$ is well defined and is a topological invariant. He also devised a method to compute it by analyzing the _degree jump_ across the critical values $8m\pi$. C.C. Chen and C.S. 
Lin [12, 13] proved a priori estimates for the solutions, and used the strategy devised by Yanyan Li to compute the Leray-Schauder degree. They then derived that Problem $(MF)$ is solvable provided that the surface has a positive genus. Later Z. Djadli [15], using the topological argument of [16], proved the existence of a solution of $(MF)$ for surfaces of every genus. Furthermore A. Malchiodi [26] developed a Morse theory in this case and F. De Marchis [17] proved full Morse inequalities as well as multiplicity results for generic data $(K,g)$. For the _resonant case_, the main difficulty lies in the fact that to find critical points of $J_{\varrho}$ one has to understand the contributions of the critical points at infinity to the topology of its sublevel sets. This amounts to proving a Morse lemma at infinity in a highly degenerate setting where gradient flow lines may converge only in the sense of measures to a sum of Dirac masses. Based on ideas going back to Lions [24] and Struwe [28], one can often perform a "Lyapunov-Schmidt reduction at infinity" and find that the Dirac deltas are located at critical points of a finite-dimensional reduced function. Such an approach has been developed by the authors in collaboration with M. Lucia in [1]. There, a Morse reduction in the neighborhood of the critical points has been performed and the induced difference of topology between the levels of $J_{\varrho}$ has been computed. In this paper we develop this approach further by proving _Morse inequalities_ involving the _critical points at infinity_ and using them to deduce new existence and multiplicity results. 
To state our main results we introduce the following notation and assumptions: For $m\in\mathbb{N}$, we let $\mathbb{F}_{m}(\Sigma):=\left\\{(a_{1},\cdots,a_{m})\in\Sigma^{m};a_{i}\neq a_{j}\mbox{ if }i\neq j\right\\}$ denote the space of configurations of $\Sigma^{m}$ and we define the following reduced energy functional $\mathcal{F}^{K}_{m}:\,\mathbb{F}_{m}(\Sigma)\,\,\longrightarrow\,\mathbb{R},\quad\mbox{ by }$ (3) $\displaystyle\mathcal{F}^{K}_{1}(a)\,:=\,\ln K(a)\,+4\pi H(a,a)\quad(\mbox{for }m=1;\,a\in\Sigma),$ (4) $\displaystyle\mathcal{F}^{K}_{m}(A)\,:=\,\sum_{i=1}^{m}\bigg{(}\ln K(a_{i})\,+\,4\pi H(a_{i},a_{i})\,+4\pi\sum_{j\neq i}G(a_{i},a_{j})\bigg{)}\,(\mbox{for }m\geq 2;\,A=(a_{1},\cdots,a_{m})),$ where $G$ is the Green's function of $-\Delta_{g}$ and $H$ its regular part. See (12) and (14) for the precise definitions. We notice that the function $\mathcal{F}^{K}_{m}$ achieves its infimum for each $m\geq 1$ and for $m=1$ it also achieves its maximum. Next for $A=(a_{1},\cdots,a_{m})\in\mathbb{F}_{m}$ a critical point of $\mathcal{F}^{K}_{m}$ we define a vector $(\mathcal{F}^{A}_{1},\cdots,\mathcal{F}^{A}_{m})$, where $\mathcal{F}^{A}_{i}:\,\Sigma\longrightarrow\mathbb{R},$ are real-valued functions defined as follows: (5) $\displaystyle\mathcal{F}_{1}^{a}(x)\,:=\,K(x)\exp(\,8\pi H(a,x)),\quad\mbox{ if }m=1,$ (6) $\displaystyle\mathcal{F}^{A}_{i}(x)\,:=\,K(x)\exp\Big{(}\,8\pi H(a_{i},x)\,+\,8\pi\sum_{j\neq i}G(x,a_{j})\Big{)}\quad\mbox{ if }m\geq 2\,\mbox{ for }\,i=1,\cdots,m.$ Moreover we associate to every critical point $A=(a_{1},\cdots,a_{m})$ of $\mathcal{F}^{K}_{m}$ the following quantity (7) $\mathcal{L}(A)\,:=\,\sum_{i=1}^{m}\left(\Delta_{g}\mathcal{F}^{A}_{i}(a_{i})-2K_{g}(a_{i})\,\mathcal{F}^{A}_{i}(a_{i})\right),$ where $K_{g}$ denotes the Gauss curvature of $(\Sigma,g)$. 
Next we define the following set (8) $\mathcal{K}^{-}_{m}\,:=\,\left\\{A\in\mathbb{F}_{m}(\Sigma):\,\nabla\mathcal{F}^{K}_{m}(A)\,=\,0,\,\mathcal{L}(A)<0\right\\}.$ Our first type of results is based on the construction of solutions of sub- and sup-approximations with fixed Morse indices; we show that, under appropriate assumptions on the critical points of the related reduced energy, these give rise to solutions of Equation $(MF)$. ###### Theorem 1.1 Let $m\geq 1$ and $\varrho=8\pi m$ and assume that the reduced energy $\mathcal{F}^{K}_{m}$ has only non degenerate critical points. Then 1. 1) if at every local minimum $A\in\mathbb{F}_{m}(\Sigma)$ of $\mathcal{F}^{K}_{m}$ we have $\mathcal{L}(A)<0$, then Equation $(MF)$ has at least one solution whose generalized Morse index is $3m$, 2. 2) for $m\geq 2$, if at every critical point $A\in\mathbb{F}_{m}(\Sigma)$ of $\mathcal{F}^{K}_{m}$ with Morse index $2$ we have $\mathcal{L}(A)>0$, then Equation $(MF)$ has at least one solution whose generalized Morse index is $3m-3$, where the _generalized Morse index_ of a solution $\omega$ of $(MF)$ is the sum of its Morse index and the dimension of the kernel of the linearized operator $\mathcal{T}_{\omega}(\varphi):=-\Delta_{g}\varphi\,-\,\varrho\frac{Ke^{\omega}\varphi}{\int_{\Sigma}Ke^{\omega}}.$ We observe that, for $m=1$, the functional $J_{8\pi}$ is bounded below and therefore if there exists a local maximum point $A$ of $\mathcal{F}^{K}_{1}$ such that $\mathcal{L}(A)>0$ then $J_{8\pi}$ achieves its minimum (see [1], Theorem 1.1). We point out that there are no such points for Nirenberg's problem on $\mathbb{S}^{2}$, and more generally there are no such points if the surface has positive Gauss curvature. Our next and main result of this paper is to prove _Morse inequalities_. 
To this end we introduce the following non degeneracy assumption: _We say that the pair $(g,K)$ satisfies the condition $(\mathcal{N}_{m})$ if the following conditions are satisfied:_ 1. (i) _The critical points of_ $\mathcal{F}^{K}_{m}$ are non degenerate and for every critical point $A$, we have $\mathcal{L}(A)\neq 0,$ 2. (ii) _All the critical points of_ $J_{8\pi m}$ are non degenerate. ###### Remark 1.1 The non degeneracy condition $(\mathcal{N}_{m})$ is generic. Indeed, denote by $\mathcal{M}$ the set of Riemannian metrics on $\Sigma$ and by $C^{0,1}_{+}(\Sigma)$ the space of positive Lipschitz functions on $\Sigma$. It follows from transversality arguments that there is an open and dense subset $\mathcal{D}\subset\mathcal{M}\times C^{0,1}_{+}(\Sigma)$ such that for $(g,K)\in\mathcal{D}$ the functional $J_{\varrho}$ is a Morse function. See for example [17], pp. 2179-80, for an argument based on some generic properties for nonlinear boundary value problems proved by Saut and Temam [27]. Moreover the condition $(i)$ is also a generic one. Furthermore, under the non degeneracy condition $(i)$ of $(\mathcal{N}_{m})$, we associate to every $A\in\mathcal{K}^{-}_{m}$ an index (9) $\iota_{\infty}:\mathcal{K}^{-}_{m}\to\mathbb{N}\quad\mbox{ defined by }\quad\iota_{\infty}(A)\,:=\,3m-1-Morse(\mathcal{F}^{K}_{m},A),$ where $Morse(\mathcal{F}^{K}_{m},A)$ stands for the Morse index of $\mathcal{F}^{K}_{m}$ at its critical point $A$. Next for $m\in\mathbb{N}$, let $B_{m}(\Sigma):=\left\\{\sum_{i=1}^{m}\gamma_{i}\delta_{a_{i}},\,a_{i}\in\Sigma,\,\gamma_{i}\geq 0;\,\sum_{i=1}^{m}\gamma_{i}\,=\,m\right\\}$ denote the set of formal barycenters of order $m$. For $i\geq 0$, we set $\beta^{m}_{i}:=\,rank\,\,H_{i}(B_{m}(\Sigma),\mathbb{Z}_{2}).$ Then $B_{m}(\Sigma)$ is a stratified set of top dimension $3m-1$ and therefore $\beta_{i}^{m}=0$ for $i\geq 3m$. 
In the next result we provide Morse inequalities relating the Morse indices of the solutions of $(MF)$, the $\iota_{\infty}$-indices of the critical points at infinity and the Betti numbers $\beta_{i}^{m}$ of the set of formal barycenters $B_{m}(\Sigma)$. Before stating these inequalities, we recall that it follows from the blow up analysis of solutions of $(MF)$, see [23, 12, 13], that their Dirichlet energy is uniformly bounded; see statement $(ii)$ in Theorem 3.1, p. 1686 in [13]. Hence it follows from the Moser-Trudinger inequality and elliptic theory that solutions of $(MF)$ with zero mean value integral are uniformly bounded and hence their Morse indices are uniformly bounded. We are now ready to state our Morse inequalities in the case $\varrho=8\pi m;m\geq 2$: ###### Theorem 1.2 Let $\varrho=8\pi m$, $m\geq 2$ and assume that the function $K$ satisfies the non degeneracy condition $(\mathcal{N}_{m})$ and let $\overline{m}$ be the highest Morse index of the critical points of $J_{\varrho}$. Then the following Morse inequalities hold 1. (a) For every $2\leq k\leq\overline{N}:=\max(3m-1,\overline{m})$ there holds (10) $\nu_{k}\,+\nu^{\infty}_{k}\,\geq\,\beta_{k-1}^{m-1},$ where $\nu_{i}$ denotes the number of critical points of $J_{\varrho}$ with Morse index $i$ and (11) $\nu^{\infty}_{q}:=\\#\\{A\in\mathcal{K}^{-}_{m};\iota_{\infty}(A)=q\\}.$ 2. 
(b) Let $\chi(\Sigma)$ be the Euler characteristic of $\Sigma$; then it holds $\sum_{i=0}^{\overline{m}}(-1)^{i}\nu_{i}\,+\sum_{A\in\mathcal{K}^{-}_{m}}(-1)^{\iota_{\infty}(A)}\,=\,\binom{m-1-\chi(\Sigma)}{m-1}.$ In particular the number of critical points of $J_{\varrho}$ is bounded from below by $\left|\binom{m-1-\chi(\Sigma)}{m-1}-\sum_{A\in\mathcal{K}^{-}_{m}}(-1)^{\iota_{\infty}(A)}\right|.$ As a corollary of the above _Morse inequalities at infinity_ we derive the following existence result: ###### Theorem 1.3 Let $m\geq 2$ and $\varrho=8\pi m$ and assume that the function $K$ satisfies the non degeneracy condition $(\mathcal{N}_{m})$. Suppose there exists $2\leq q_{0}\leq 3m-3$ such that * • there is no element in $\mathcal{K}^{-}_{m}$ of index $q_{0}$ * • $\beta_{q_{0}-1}^{m-1}\neq 0$. Then Equation $(MF)$ has at least one solution whose Morse index is $q_{0}$. In particular, since $\beta_{3m-4}^{m-1}=rank\,H_{3m-4}(B_{m-1}(\Sigma),\mathbb{Z}_{2})\neq 0$, if there is no element in $\mathcal{K}^{-}_{m}$ of index $3m-3$, then Equation $(MF)$ has at least one solution whose Morse index is $3m-3$. Furthermore, the number of solutions is bounded from below by $\beta_{3m-4}^{m-1}$. As a complement of the statement made in the above theorem, we prove the following result: ###### Theorem 1.4 Let $m\geq 2$ and $\varrho=8\pi m$ and assume that the function $K$ satisfies the non degeneracy condition $(\mathcal{N}_{m})$. Suppose there exists $q_{0}$ such that 1. 1. there exists $Q_{0}\in\mathcal{K}^{-}_{m}$ with $\iota_{\infty}(Q_{0})=q_{0}$, 2. 2. for each $Q\in\mathcal{K}^{-}_{m}$, we have $\iota_{\infty}(Q)\notin\\{q_{0}-1,q_{0}+1\\}$, 3. 3. $\displaystyle{\beta_{q_{0}-1}^{m-1}\,=\,0}$. Then Equation $(MF)$ has at least one solution. ###### Remark 1.2 We notice that the indices of the critical points at infinity are bounded from below by $m-1$. Hence for $m=4$ suppose 1. 1. there exists $Q_{0}\in\mathcal{K}^{-}_{m}$ such that $\iota_{\infty}(Q_{0})=3$ (i.e. 
$Q_{0}$ is a local maximum point of $\mathcal{F}_{m}^{K}$), 2. 2. for all $Q\in\mathcal{K}^{-}_{m}$, we have $\iota_{\infty}(Q)\neq 4$, then it follows from Theorem 1.4 that Equation $(MF)$ has at least one solution. More generally, any violation of the _Morse inequalities at infinity_ of Theorem 1.2 implies the existence of a solution to Equation $(MF)$. In particular we have the following existence result: ###### Theorem 1.5 Let $m\geq 2$ and $\varrho=8\pi m$ and assume that the function $K$ satisfies the non degeneracy condition $(\mathcal{N}_{m})$. If one of the following inequalities is not satisfied: * • $\nu^{\infty}_{3m-2}\geq\nu^{\infty}_{3m-1}$, * • $\nu^{\infty}_{3m-3}-\beta_{3m-4}^{m-1}\geq\nu^{\infty}_{3m-2}-\nu^{\infty}_{3m-1}$, then Equation $(MF)$ has at least one solution (where $\nu_{q}^{\infty}$ is defined in (11)). The remainder of this paper is organized as follows: we set up some notation and definitions in Section 2, introduce an $\varepsilon-$neighborhood of potential critical points at infinity in Section 3 and provide in Section 4 useful expansions of the gradient of the Euler-Lagrange functional in _the neighborhood at infinity_. Section 5 is devoted to the characterization of such critical points at infinity and we provide the proof of our main results in Section 6. Finally we provide useful estimates of the projected bubble (see (17) for the definition) in the appendix. ## 2 Notation and definitions To state our results we need to fix some notation. For $\xi\in\Sigma$, let $y_{\xi}$ be a local chart defined in a neighborhood of $\xi$ onto $B(0,2\eta_{0})$ such that $y_{\xi}(\xi)=0$. In the sequel, we will denote $B_{\xi}(\eta):=y_{\xi}^{-1}(B(0,\eta))$. 
Furthermore, thanks to the isothermal coordinates, we have that for every $a\in\Sigma$, there exists a function $u_{a}\in C^{\infty}(\Sigma)$, satisfying $u_{a}(a)=0$ and $\nabla u_{a}(a)=0$, such that for the conformal metric $g_{a}:=e^{u_{a}}g$, there hold $g_{a}\,=\,\delta_{ij}\mbox{ in }B_{a}(2\eta_{0}),\,\,\Delta_{g}=e^{u_{a}}\Delta_{g_{a}},\,\,dV_{g}=e^{-u_{a}}dV_{g_{a}},\,\mbox{ and }\,\Delta_{g_{a}}u_{a}=2K_{g}(.)e^{-u_{a}}\mbox{ in }B_{a}(2\eta_{0}),$ where $0<\eta_{0}<\eta_{a}<\frac{1}{4}\,inj_{g_{a}}(\Sigma)$, with $inj$ standing for the injectivity radius. Next for $a\in\Sigma$, let $G(a,.)$ be the Green's function of $-\Delta_{g}$ defined by (12) $\begin{cases}-\Delta_{g}\,G(a,x)\,+\,1&=\,\delta_{a}\,\mbox{ in }\Sigma,\\\ \int_{\Sigma}G(a,x)dV_{g}(x)&=\,0.\end{cases}$ For $0<\eta<\frac{1}{4}\eta_{0}$ and $a\in\Sigma$ we define the smooth cut-off function (13) $\psi_{a}(x):=\psi(|y_{a}(x)|)\quad\mbox{ where }\quad\psi(t):=\begin{cases}t,&\mbox{if }t\in[0,{\eta}],\\\ 2\eta,&\mbox{if }t\geq{2\eta},\\\ \psi(t)\in[\eta,2\eta],\psi\in C^{\infty}&\mbox{otherwise}.\end{cases}$ Next for $\xi\in\Sigma$, as usual we decompose $G(\xi,.)$ as follows (14) $G(\xi,x)=-\frac{1}{2\pi}\ln(\psi_{\xi}(x))+H(\xi,x).$ From the definition of $G(\xi,.)$ we derive that $H(\xi,.)$ has to satisfy (15) $\Delta_{g}H(\xi,.)\,-1\,=\Delta_{g}G(\xi,.)-1+\frac{1}{2\pi}\Delta_{g}\ln(\psi_{\xi}(.))=\begin{cases}0\mbox{ in }(\Sigma\setminus B_{\xi}(2\eta))\cup B_{\xi}(\eta),\\\ \frac{e^{u_{\xi}}}{2\pi}\Delta_{g_{\xi}}\ln(\psi_{\xi}(.))\mbox{ in }B_{\xi}(2\eta)\setminus B_{\xi}(\eta).\end{cases}$ In the next section, we will describe the lack of compactness occurring in the variational problem associated to Equation $(MF)$. To do so we need to introduce some highly concentrated functions, the so-called _bubbles_, and a neighborhood of such bubbles. 
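The decomposition (14) is consistent with (12): near $\xi$ one has $\psi_{\xi}(x)=|y_{\xi}(x)|$, so the singular part $-\frac{1}{2\pi}\ln(\psi_{\xi}(x))$ produces exactly the Dirac mass. The following standard flux computation (spelled out here for the reader's convenience; it is not part of the original argument) verifies this in the flat model:

```latex
% Fundamental solution check in R^2: for r=|x|>0 the function -\frac{1}{2\pi}\ln r
% is harmonic, and integrating its normal derivative over a small circle gives
\begin{align*}
  \int_{B_{\rho}(0)}\Delta\Big(-\tfrac{1}{2\pi}\ln r\Big)\,dx
  \;=\;\int_{\partial B_{\rho}(0)}\partial_{\nu}\Big(-\tfrac{1}{2\pi}\ln r\Big)\,d\sigma
  \;=\;\Big(-\tfrac{1}{2\pi\rho}\Big)\cdot 2\pi\rho\;=\;-1
  \qquad\text{for every }\rho>0,
\end{align*}
% so \Delta(-\frac{1}{2\pi}\ln r) = -\delta_{0}, i.e.
% -\Delta(-\frac{1}{2\pi}\ln r) = \delta_{0} as distributions.
```

The regular part $H(\xi,\cdot)$ then absorbs the smooth corrections recorded in (15).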
First, we recall that the following functions $\widetilde{\delta}_{a,\lambda}(x):=\ln(\frac{8\lambda^{2}}{(1+\lambda^{2}|a-x|^{2})^{2}}),\mbox{ for }a\in\mathbb{R}^{2},\mbox{ and }\lambda>0,$ are the solutions of $-\Delta u=e^{u}\quad\mbox{in }\mathbb{R}^{2}\quad\mbox{ with }\quad\int_{\mathbb{R}^{2}}e^{u}<\infty.$ Next for $a\in\Sigma$ and $\lambda>0$ we define the standard bubble as (16) $\delta_{a,\lambda}(x):=\ln(\frac{8\lambda^{2}}{(1+\lambda^{2}\psi_{a}(x)^{2})^{2}}),$ where $\psi_{a}$ is defined in (13). In order to construct a suitable _approximate solution_ of $(MF)$ we follow the strategy introduced by A. Bahri and J.M. Coron in their study of the Yamabe equation [4] and we introduce the _projected bubble_ $\varphi_{a,\lambda}$ to be the unique solution of (17) $\begin{cases}-\Delta_{g}\varphi_{a,\lambda}\,=\,e^{\delta_{a,\lambda}+u_{a}}\,-\,\int_{\Sigma}e^{\delta_{a,\lambda}+u_{a}}dV_{g}\,\mbox{ in }\Sigma,\\\ \int_{\Sigma}\varphi_{a,\lambda}\,dV_{g}\,=\,0.\end{cases}$ We observe that for $A:=(a_{1},\cdots,a_{m})\in\Sigma^{m}$ and $\Lambda:=(\lambda_{1},\cdots,\lambda_{m})\in(\mathbb{R}^{+})^{m}$ the sequence of functions $(U_{A,\Lambda})_{\Lambda}$ defined by (18) $U_{A,\Lambda}:=\sum_{i=1}^{m}\varphi_{a_{i},\lambda_{i}}$ is a Palais-Smale sequence for Equation $(MF)$ for $\varrho=8\pi m$ if $A$ and $\Lambda$ satisfy the following balancing condition (19) $\forall i\neq j\quad\lambda_{i}^{2}\mathcal{F}^{A}_{i}(a_{i})\,=\,\lambda_{j}^{2}\mathcal{F}^{A}_{j}(a_{j})(1\,+\,o_{\lambda}(1)),$ where $o_{\lambda}(1)\to 0\mbox{ if }\lambda\to+\infty$. See Lemma 7.5 for the equation satisfied by $U_{A,\Lambda}$. Furthermore, we collect in the appendix some useful estimates on $\varphi_{a,\lambda}$. Such estimates are used in the expansion of the Euler-Lagrange functional and its gradient in the neighborhood at infinity $V(m,\varepsilon)$. See (20) for the definition of this set.
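The fact that $\widetilde{\delta}_{a,\lambda}$ solves the Liouville equation can be verified numerically. The following sketch (pure Python; the function names `bubble` and `neg_laplacian` and the sample points are ours, chosen for illustration) checks $-\Delta u=e^{u}$ for the bubble centered at the origin by central finite differences:

```python
import math

def bubble(x, y, lam=1.0):
    # u(x, y) = ln(8 lam^2 / (1 + lam^2 |x|^2)^2), the bubble centered at the origin
    r2 = x * x + y * y
    return math.log(8.0 * lam * lam) - 2.0 * math.log(1.0 + lam * lam * r2)

def neg_laplacian(f, x, y, h=1e-4):
    # -(u_xx + u_yy) via central second differences
    uxx = (f(x + h, y) + f(x - h, y) - 2.0 * f(x, y)) / (h * h)
    uyy = (f(x, y + h) + f(x, y - h) - 2.0 * f(x, y)) / (h * h)
    return -(uxx + uyy)

# -Delta u should coincide with e^u at every point of the plane
for (x, y) in [(0.3, -0.2), (1.0, 0.5), (-0.7, 0.1)]:
    assert abs(neg_laplacian(bubble, x, y) - math.exp(bubble(x, y))) < 1e-4
```

For $\lambda=1$ one has $e^{u}=8/(1+r^{2})^{2}$, which the finite-difference Laplacian reproduces up to the discretization error.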
## 3 Lack of compactness and a neighborhood at infinity Our approach is variational. Therefore, in order to detect critical points for the functional $J_{\varrho}$ with $\varrho=8\pi m$, we have to find all possible obstructions to deforming sublevel sets $J_{\varrho}^{a}:=\\{u:J_{\varrho}(u)\leq a\\}$. To this aim we make use of a pseudogradient for $J_{\varrho}$, constructed in Horak-Lucia [20] (see also [25, 26]), whose flow lines have the following property: for any fixed initial datum $u_{0}$, the flow line $\eta(t,u_{0})$ generated by this pseudogradient satisfies * • either $J_{\varrho}(\eta(t,u_{0}))\to-\infty$ as $t\to+\infty$, * • or $\eta(t,u_{0})$ enters any arbitrarily small neighborhood of the set of solutions $u_{\beta}$ of $(\mathcal{P}_{8m\pi-\beta})$ with $\beta\in[0,\overline{\varepsilon})$ for some $\overline{\varepsilon}>0$. Furthermore, for a solution $u_{\beta}$ with $\beta>0$ we have the following alternative: $(i)$ either $u_{\beta}$ converges to a solution $\overline{u}$ of $(\mathcal{P}_{8m\pi})$ as $\beta\to 0$, $(ii)$ or it has to blow up when $\beta\to 0$. In the latter case, it follows from Proposition 7.6 (see also [12, 13]) that the solutions $(u_{\beta})$ have to belong to some subset ${V}(m,\varepsilon)$, called in the sequel the neighborhood at infinity, which is defined as: (20) $\displaystyle{V}(m,\varepsilon)$ $\displaystyle:=\Big{\\{}u\in\mathring{H}^{1}(\Sigma)\,:\,\|\nabla J_{\varrho}(u)\|<\varepsilon\,;\,\,\exists\,\lambda_{1},\cdots,\lambda_{m}>{\varepsilon^{-1}}\mbox{ with }{\lambda_{i}}<C_{1}{\lambda_{j}}\,\,\forall i\neq j;$ $\displaystyle\quad\exists\,a_{1},\cdots,a_{m}\mbox{ with }d_{g}(a_{i},a_{j})\geq 2\eta\,\,\forall i\neq j\quad\mbox{such that }\|u-\sum_{i=1}^{m}\varphi_{a_{i},\lambda_{i}}\|<\varepsilon\Big{\\}},$ where the space $\mathring{H}^{1}(\Sigma)$ is defined in (2), $\varepsilon$ is a small positive constant and $C_{1}$, $\eta$ are fixed positive constants.
Hence, we are led to study the obstructions to decreasing the functional $J_{\varrho}$ in the set $V(m,\varepsilon)$. A first step consists in finding an appropriate parametrization of this set. To that aim, following the ideas of A. Bahri and J.M. Coron we consider the following minimization problem (21) $\min_{\alpha_{i}>0;a_{i}\in\Sigma;\lambda_{i}>0}\big{\|}u-\sum_{i=1}^{m}\alpha_{i}\varphi_{a_{i},\lambda_{i}}\big{\|}\,.$ We have the following lemma, whose proof is identical to that of Proposition 7 in [5] (see also Chen and Lin [13, Lemma 3.2]). ###### Lemma 3.1 For $\varepsilon$ small and any $u\in{V}(m,\varepsilon)$, Problem (21) has a unique solution (up to permutation of the indices). The variables $\alpha_{i}$’s satisfy $|\alpha_{i}-1|=O(\varepsilon)$. Hence every $u\in{V}(m,\varepsilon)$ can be written as (22) $u\,=\,\sum_{i=1}^{m}\alpha_{i}\varphi_{a_{i},\lambda_{i}}\,+\,w,$ where $\alpha_{i}$, $w$ satisfy (23) $\begin{cases}&|\alpha_{i}-1|\leq c\varepsilon\quad\forall i,\qquad\|w\|\leq c\varepsilon,\\\ &\big{<}w,\varphi_{a_{i},\lambda_{i}}\big{>}_{g}\,=\,\big{<}w,{\partial\varphi_{a_{i},\lambda_{i}}}/{\partial\lambda_{i}}\big{>}_{g}\,=\,0,\,\quad\,\big{<}w,{\partial\varphi_{a_{i},\lambda_{i}}}/{\partial a_{i}}\big{>}_{g}\,=0\quad\forall\,\,i.\end{cases}$ In the following, for ${A}=(a_{1},...,a_{m})$ and $\Lambda=(\lambda_{1},...,\lambda_{m})$, we denote (24) $E_{{A},\Lambda}^{m}:=\\{w\in\mathring{H}^{1}(\Sigma):w\mbox{ satisfies \eqref{f:orthog}}\\}.$ To keep the notation short we will write $\varphi_{i}$ instead of $\varphi_{a_{i},\lambda_{i}}$. The next Proposition shows how to deal with the infinite-dimensional part $w$ in the above representation: ###### Proposition 3.2 [1] Let $u:=\sum_{i=1}^{m}\alpha_{i}\varphi_{i}\in V(m,\varepsilon)$.
Then there exists a unique $\overline{w}:=\overline{w}(u)\in E_{{A},\Lambda}^{m}$ such that: $J_{\varrho}(u+\overline{w})\,=\,\min\\{J_{\varrho}(u+w)\,:\,w\in E_{{A},\Lambda}^{m}\\}.$ Furthermore, there exists a constant $C$ such that (25) $\|\overline{w}\|\,\leq\,C\sum_{i=1}^{m}\Big{(}|\alpha_{i}-1|+\frac{1}{\lambda_{i}}\Big{)}.$ ## 4 The expansion of the gradient in the neighborhood at infinity In this section we provide an asymptotic expansion of the gradient of the Euler Lagrange functional $J_{\varrho}$ in the neighborhood at infinity $V(m,\varepsilon)$. For the sake of simplicity of the notation and since the variables $\lambda_{i}$’s are of the same order, we will write in this section and in the sequel $O(1/\lambda^{\gamma})$ instead of $\sum O(1/\lambda_{k}^{\gamma})$. In the first proposition we expand the gradient with respect to the concentration rates. Namely we have ###### Proposition 4.1 Let $u:=\sum_{i=1}^{m}\alpha_{i}\varphi_{i}+w\in V(m,\varepsilon)$ with $w\in E^{m}_{{A},\Lambda}$ and $\varrho:=8m\pi(1+\mu)$ with $\mu$ a small real number. It holds $\Big{\langle}\nabla J_{\varrho}(u),\lambda_{i}\frac{\partial\varphi_{i}}{\partial\lambda_{i}}\Big{\rangle}_{g}=16\pi\alpha_{i}(\tau_{i}-\mu+\mu\tau_{i})-64\pi^{2}\sum_{j=1}^{m}\frac{\ln\lambda_{j}}{\lambda_{j}^{2}}+O\left(\sum|\alpha_{k}-1|^{2}+\|w\|^{2}\right)+o\left(\frac{\ln\lambda}{\lambda^{2}}\right)$ with (26) $\tau_{i}\,:=\,1\,-\,\frac{m\pi}{2\alpha_{i}-1}\,\frac{\lambda_{i}^{4\alpha_{i}-2}\mathcal{F}^{A}_{i}(a_{i})g^{A}_{i}(a_{i})}{\int_{\Sigma}Ke^{u}dV_{g}},$ where $\mathcal{F}^{A}_{i}$ and $g^{A}_{i}$ are defined in (49). Furthermore, we have the estimate (27) $|\tau_{i}|=O(\varepsilon+|\mu|),\quad\forall i\in\\{1,\cdots,m\\}.$ Proof. 
Recall that (28) $\Big{\langle}\nabla J_{\varrho}(u),h\Big{\rangle}_{g}=\Big{\langle}u,h\Big{\rangle}_{g}-\frac{\varrho}{\int_{\Sigma}Ke^{u}}\int_{\Sigma}Ke^{u}h.$ Using Lemma 7.2 we get that $\Big{\langle}u,\lambda_{i}\frac{\partial\varphi_{i}}{\partial\lambda_{i}}\Big{\rangle}_{g}=-64\pi^{2}\frac{\ln\lambda_{i}}{\lambda_{i}^{2}}\sum_{j=1}^{m}\alpha_{j}+16\pi\alpha_{i}+O(\frac{1}{\lambda^{2}})=-64\pi^{2}m\frac{\ln\lambda_{i}}{\lambda_{i}^{2}}+16\pi\alpha_{i}+o(\frac{\ln\lambda}{\lambda^{2}}).$ For the other term of (28), using Lemma 7.1, we have $\displaystyle\int_{\Sigma}Ke^{u}\lambda_{i}\frac{\partial\varphi_{i}}{\partial\lambda_{i}}dV_{g}$ $\displaystyle=\int_{\Sigma}Ke^{u}\left(\frac{4}{1+\lambda_{i}^{2}\psi_{i}^{2}}-8\pi\frac{\ln\lambda_{i}}{\lambda_{i}^{2}}+O(\frac{1}{\lambda^{2}})\right)dV_{g}$ $\displaystyle=-8\pi\frac{\ln\lambda_{i}}{\lambda_{i}^{2}}\int_{\Sigma}Ke^{u}dV_{g}+\int_{B_{i}}Ke^{u}\frac{4}{1+\lambda_{i}^{2}\psi_{i}^{2}}dV_{g}+O(\frac{1}{\lambda^{2}})\int_{\Sigma}Ke^{u}dV_{g}.$ Now we need to estimate the second integral. 
Letting $\overline{u}:=u-w$ and using Lemma A.4 in [1], we have $\displaystyle\int_{B_{i}}Ke^{u}\frac{4}{1+\lambda_{i}^{2}\psi_{i}^{2}}$ $\displaystyle=\int_{B_{i}}Ke^{\overline{u}}\frac{4}{1+\lambda_{i}^{2}\psi_{i}^{2}}+\int_{B_{i}}Ke^{\overline{u}}w\frac{4}{1+\lambda_{i}^{2}\psi_{i}^{2}}+\int_{B_{i}}Ke^{\overline{u}}(e^{w}-1-w)\frac{4}{1+\lambda_{i}^{2}\psi_{i}^{2}}$ $\displaystyle=\int_{B_{i}}Ke^{\overline{u}}\frac{4}{1+\lambda_{i}^{2}\psi_{i}^{2}}dV_{g}+O\bigg{(}(\sum\lambda_{k}^{4\alpha_{k}-2})\|w\|\Big{(}\|w\|+\frac{1}{\lambda}+|\alpha_{i}-1|\Big{)}\bigg{)}.$ Concerning the last integral, using Lemma 7.3 we get $\displaystyle\int_{B_{i}}Ke^{\overline{u}}\frac{4}{1+\lambda_{i}^{2}\psi_{i}^{2}}dV_{g}$ $\displaystyle=\Big{(}1+4\pi\sum_{j=1}^{m}\frac{\ln\lambda_{j}}{\lambda_{j}^{2}}\Big{)}\int_{B_{i}}\frac{4\lambda_{i}^{4\alpha_{i}}\mathcal{F}_{i}^{A}g_{i}^{A}e^{-u_{a_{i}}}}{(1+\lambda_{i}^{2}|y_{a_{i}}(.)|^{2})^{2\alpha_{i}+1}}dV_{g_{a_{i}}}+o\Big{(}\frac{\ln\lambda}{\lambda^{2}}\Big{)}\int_{B_{i}}Ke^{\overline{u}}$ $\displaystyle=\Big{(}1+4\pi\sum_{j=1}^{m}\frac{\ln\lambda_{j}}{\lambda_{j}^{2}}\Big{)}\frac{2\pi}{\alpha_{i}}\lambda_{i}^{4\alpha_{i}-2}\mathcal{F}_{i}^{A}g_{i}^{A}(a_{i})+o\Big{(}\frac{\ln\lambda}{\lambda^{2}}\Big{)}\sum\lambda_{k}^{4\alpha_{k}-2},$ by using $\int_{0}^{\lambda\eta}\frac{4\,r}{(1+r^{2})^{2\alpha+1}}dr=\frac{1}{\alpha}+O\Big{(}\frac{1}{\lambda^{4\alpha}}\Big{)}\quad\mbox{ and }\quad\int_{0}^{\lambda\eta}\frac{r^{3}}{(1+r^{2})^{2\alpha+1}}dr=O\big{(}1\big{)}.$ Thus we get $\displaystyle\frac{\varrho}{\int_{\Sigma}Ke^{u}}\int_{\Sigma}Ke^{u}\lambda_{i}\frac{\partial\varphi_{i}}{\partial\lambda_{i}}dV_{g}=$ $\displaystyle\Big{(}1+4\pi\sum_{j=1}^{m}\frac{\ln\lambda_{j}}{\lambda_{j}^{2}}\Big{)}16\pi(1+\mu)\frac{2\alpha_{i}-1}{\alpha_{i}}(1-\tau_{i})$ $\displaystyle+o\Big{(}\frac{\ln\lambda}{\lambda^{2}}\Big{)}+O\Big{(}\|w\|^{2}+|\alpha_{i}-1|^{2}\Big{)}.$ The result follows by summing the previous estimates. 
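The elementary radial integrals invoked in this proof, together with the logarithmic variant used later in the proof of Proposition 4.2, follow from explicit antiderivatives (the second one after the substitution $t=1+r^{2}$); we record the computations for the reader's convenience:

```latex
\int_{0}^{\lambda\eta}\frac{4\,r}{(1+r^{2})^{2\alpha+1}}\,dr
=\Big[-\frac{1}{\alpha}(1+r^{2})^{-2\alpha}\Big]_{0}^{\lambda\eta}
=\frac{1}{\alpha}+O\Big(\frac{1}{\lambda^{4\alpha}}\Big),
\qquad
\int_{0}^{\lambda\eta}\frac{2r\,\ln(1+r^{2})}{(1+r^{2})^{2\alpha}}\,dr
=\Big[\frac{t^{1-2\alpha}\ln t}{1-2\alpha}-\frac{t^{1-2\alpha}}{(1-2\alpha)^{2}}\Big]_{t=1}^{t=1+\lambda^{2}\eta^{2}}
=\frac{1}{(2\alpha-1)^{2}}+O\Big(\frac{\ln\lambda}{\lambda^{4\alpha-2}}\Big),
```

while $\int_{0}^{\lambda\eta}r^{3}(1+r^{2})^{-2\alpha-1}\,dr=O(1)$ since the integrand decays like $r^{1-4\alpha}$ for $\alpha$ close to $1$.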
In the next proposition we expand the gradient $\nabla J_{\varrho}$ with respect to the gluing parameters $\alpha_{i}$’s. ###### Proposition 4.2 Let $u:=\sum_{i=1}^{m}\alpha_{i}\varphi_{i}+w\in V(m,\varepsilon)$ with $w\in E^{m}_{{A},\Lambda}$ and $\varrho:=8m\pi(1+\mu)$ with $\mu$ a small real number. It holds $\displaystyle\Big{\langle}\nabla J_{\varrho}(u),\varphi_{i}\Big{\rangle}_{g}=$ $\displaystyle 32\pi\ln\lambda_{i}\big{(}(\alpha_{i}-1)+(\tau_{i}-\mu+\mu\tau_{i})\big{)}+O\left(|\alpha_{i}-1|+\|w\|^{2}\ln\lambda\right)$ $\displaystyle+O\left(\|w\|(1+|\alpha_{i}-1|\ln\lambda)+\sum|\tau_{k}-\mu+\mu\tau_{k}|+\frac{\ln^{2}\lambda}{\lambda^{2}}+\frac{\ln\lambda_{i}}{\lambda_{i}^{4\alpha_{i}-2}}\right).$ Proof. Using Lemma 7.2 we get that $\displaystyle\Big{\langle}u,\varphi_{i}\Big{\rangle}_{g}$ $\displaystyle=\alpha_{i}\Big{(}32\pi\ln\lambda_{i}+64\pi^{2}H(a_{i},a_{i})-16\pi\Big{)}+64\pi^{2}\sum_{j\neq i}\alpha_{j}G(a_{i},a_{j})+O\Big{(}\frac{\ln\lambda}{\lambda^{2}}\Big{)},$ $\displaystyle\int_{\Sigma}Ke^{u}\varphi_{i}dV_{g}$ $\displaystyle=\int_{\Sigma}Ke^{\overline{u}}\varphi_{i}dV_{g}+\int_{\Sigma}Ke^{\overline{u}}w\varphi_{i}dV_{g}+\int_{\Sigma}Ke^{\overline{u}}(e^{w}-1-w)\varphi_{i}dV_{g}$ $\displaystyle=\int_{\Sigma}Ke^{\overline{u}}\varphi_{i}dV_{g}+O\Big{(}\|w\|\Big{(}1+|\alpha_{i}-1|\ln\lambda_{i}+\|w\|\ln\lambda_{i}\Big{)}\sum\lambda_{k}^{4\alpha_{k}-2}\Big{)}.$ For the last integral we notice that it follows from Lemma 7.1 that $\varphi_{i}$ and $e^{\overline{u}}$ are bounded outside the balls $B_{k}$. 
Moreover inside each ball $B_{k}$, for $k\neq i$, using Lemmas 7.1 and 7.3, we derive $\displaystyle\int_{B_{k}}Ke^{\overline{u}}\varphi_{i}dV_{g}$ $\displaystyle=8\pi\int_{B_{k}}\frac{\lambda_{k}^{4\alpha_{k}}\mathcal{F}_{k}^{A}g_{k}^{A}e^{-u_{a_{k}}}}{(1+\lambda_{k}^{2}|y_{a_{k}}(.)|^{2})^{2\alpha_{k}}}G(a_{i},.)dV_{g_{a_{k}}}+O\Big{(}\frac{\ln\lambda}{\lambda^{2}}\Big{)}\int Ke^{\overline{u}}$ $\displaystyle=8\pi\lambda_{k}^{4\alpha_{k}-2}\mathcal{F}_{k}^{A}g_{k}^{A}(a_{k})G(a_{i},a_{k})\frac{\pi}{2\alpha_{k}-1}+O\Big{(}\Big{(}\frac{\ln\lambda}{\lambda^{2}}+\sum\frac{|\alpha_{j}-1|}{\lambda^{3/2}}\Big{)}\sum\lambda_{j}^{4\alpha_{j}-2}\Big{)}$ by using the fact that (using (55)) $\displaystyle\int_{0}^{\lambda\eta}\frac{r^{3}dr}{(1+r^{2})^{2\alpha}}=\int_{0}^{\lambda\eta}\frac{r^{3}\xi(r)dr}{(1+r^{2})^{2}}$ $\displaystyle=\int_{0}^{\lambda\eta}\frac{r^{3}dr}{(1+r^{2})^{2}}+O\Big{(}|\alpha-1|\int_{0}^{\lambda\eta}\frac{r^{3}\sqrt{r}dr}{(1+r^{2})^{2}}\Big{)}$ $\displaystyle=O\Big{(}\ln\lambda+|\alpha-1|\sqrt{\lambda}\Big{)}.$ In the ball $B_{i}$, it holds $\displaystyle\int_{B_{i}}K$ $\displaystyle e^{\overline{u}}\varphi_{i}dV_{g}$ $\displaystyle=\int_{B_{i}}\frac{\lambda_{i}^{4\alpha_{i}}\mathcal{F}_{i}^{A}g_{i}^{A}e^{-u_{a_{i}}}}{(1+\lambda_{i}^{2}|y_{a_{i}}|^{2})^{2\alpha_{i}}}\Big{(}4\ln\lambda_{i}-2\ln(1+\lambda_{i}^{2}|y_{a_{i}}|^{2})+8\pi H(a_{i},.)\Big{)}dV_{g_{a_{i}}}+O\Big{(}\frac{\ln\lambda}{\lambda^{2}}\Big{)}\int Ke^{\overline{u}}$ $\displaystyle=\Big{(}4\ln\lambda_{i}+8\pi H(a_{i},a_{i})\Big{)}\Big{(}\frac{\pi}{2\alpha_{i}-1}+O\Big{(}\frac{1}{\lambda_{i}^{4\alpha_{i}-2}}+\frac{\ln\lambda}{\lambda^{2}}+\frac{|\alpha_{i}-1|}{\lambda^{3/2}}\Big{)}\Big{)}\lambda_{i}^{4\alpha_{i}-2}\mathcal{F}_{i}^{A}g_{i}^{A}(a_{i})$ 
$\displaystyle-2\int_{B_{i}}\frac{\lambda_{i}^{4\alpha_{i}}\mathcal{F}_{i}^{A}g_{i}^{A}e^{-u_{a_{i}}}}{(1+\lambda_{i}^{2}|y_{a_{i}}|^{2})^{2\alpha_{i}}}\ln(1+\lambda_{i}^{2}|y_{a_{i}}|^{2})dV_{g_{a_{i}}}+O\Big{(}\frac{\ln\lambda}{\lambda^{2}}\Big{)}\int Ke^{\overline{u}}.$ Observe that $\displaystyle\int_{0}^{\lambda\eta}\frac{2r}{(1+r^{2})^{2\alpha}}\ln(1+r^{2})dr=\frac{1}{(2\alpha-1)^{2}}+O\Big{(}\frac{\ln\lambda}{\lambda^{4\alpha-2}}\Big{)},$ $\displaystyle\int_{0}^{\lambda\eta}\frac{r^{3}}{(1+r^{2})^{2\alpha}}\ln(1+r^{2})dr\leq c\,\ln\lambda\int_{0}^{\lambda\eta}\frac{r^{3}}{(1+r^{2})^{2\alpha}}dr\leq c\,\ln\lambda\Big{(}\ln\lambda+|\alpha-1|\sqrt{\lambda}\Big{)}.$ Thus we obtain $\displaystyle\int_{B_{i}}Ke^{\overline{u}}\varphi_{i}dV_{g}=$ $\displaystyle\Big{(}4\ln\lambda_{i}+8\pi H(a_{i},a_{i})\Big{)}\frac{\pi}{2\alpha_{i}-1}\lambda_{i}^{4\alpha_{i}-2}\mathcal{F}_{i}^{A}g_{i}^{A}(a_{i})-\frac{2\pi}{(2\alpha_{i}-1)^{2}}\lambda_{i}^{4\alpha_{i}-2}\mathcal{F}_{i}^{A}g_{i}^{A}(a_{i})$ $\displaystyle+O\Big{(}\ln\lambda+\Big{(}\frac{\ln^{2}\lambda}{\lambda^{2}}+|\alpha_{i}-1|\frac{\ln\lambda}{\lambda^{3/2}}\Big{)}\sum\lambda_{j}^{4\alpha_{j}-2}\Big{)}.$ As in the proof of Proposition 4.1, summing the previous estimates, we derive the result. Combining Propositions 4.1 and 4.2, we obtain ###### Corollary 4.3 Let $u:=\sum_{i=1}^{m}\alpha_{i}\varphi_{i}+w\in V(m,\varepsilon)$ with $w\in E^{m}_{{A},\Lambda}$ and $\varrho:=8m\pi(1+\mu)$ with $\mu$ a small real number. 
It holds $\displaystyle\Big{\langle}$ $\displaystyle\nabla J_{\varrho}(u),\frac{\varphi_{i}}{\ln\lambda_{i}}-\frac{2}{\alpha_{i}}\lambda_{i}\frac{\partial\varphi_{i}}{\partial\lambda_{i}}\Big{\rangle}_{g}$ $\displaystyle=32\pi(\alpha_{i}-1)+O\left(\|w\|^{2}+\frac{\|w\|}{\ln\lambda}+\frac{1}{\ln\lambda}\sum(|\tau_{k}-\mu+\mu\tau_{k}|+|\alpha_{k}-1|)+\frac{\ln\lambda}{\lambda^{2}}+\frac{1}{\lambda^{4\alpha_{i}-2}}\right).$ ###### Lemma 4.4 Let $u:=\sum_{i=1}^{m}\alpha_{i}\varphi_{i}+w\in V(m,\varepsilon)$ with $w\in E^{m}_{{A},\Lambda}$ and $\varrho:=8m\pi(1+\mu)$ with $\mu$ a small real number. Let $\tau_{i}$ be defined in (26) and assume that $\sum|\alpha_{i}-1|\ln\lambda_{i}$ is small. Then, it holds $\sum_{i=1}^{m}\tau_{i}=\frac{\pi}{2}\frac{m\ln\lambda_{1}}{\int_{\Sigma}Ke^{u}dV_{g}}\mathcal{L}(A)+4\pi m\sum_{j=1}^{m}\frac{\ln\lambda_{j}}{\lambda_{j}^{2}}+o\Big{(}\frac{\ln\lambda}{\lambda^{2}}\Big{)}+\sum O\Big{(}|\alpha_{k}-1|^{2}\Big{)}.$ Proof. In this case, since we assumed that $\sum|\alpha_{i}-1|\ln\lambda_{i}$ is small, we derive that $\lambda_{i}^{4(\alpha_{i}-1)}=1+O(|\alpha_{i}-1|\ln\lambda_{i})$ for each $i$. Moreover we have (29) $\int_{\Sigma}Ke^{\overline{u}}wdV_{g}+\int_{\Sigma}Ke^{\overline{u}}(e^{w}-1-w)dV_{g}=O\Big{(}\Big{(}\|w\|^{2}+\sum|\alpha_{k}-1|^{2}+\frac{1}{\lambda^{2}}\Big{)}\sum\lambda_{k}^{4\alpha_{k}-2}\Big{)}.$ Hence the result follows from the previous estimate and (51). Lastly arguing as above, we derive the following asymptotic expansion of the gradient with respect to the concentration points $(a_{1},\cdots,a_{m})$. Namely we prove that ###### Proposition 4.5 Let $u:=\sum_{i=1}^{m}\alpha_{i}\varphi_{i}+w\in V(m,\varepsilon)$ with $w\in E^{m}_{{A},\Lambda}$ and $\varrho:=8m\pi(1+\mu)$ with $\mu$ a small real number. 
It holds $\displaystyle\Big{\langle}\nabla J_{\varrho}(u),$ $\displaystyle\frac{1}{\lambda_{i}}\frac{\partial\varphi_{i}}{\partial a_{i}}\Big{\rangle}_{g}\,=\,-8\pi(1+\mu)\frac{\nabla\mathcal{F}^{A}_{i}(a_{i})}{\lambda_{i}}+O\left(\frac{|\mu|}{\lambda}+\frac{|\tau_{i}|}{\lambda}+\frac{\ln\lambda}{\lambda^{2}}+\sum_{k=1}^{m}|\alpha_{k}-1|^{2}+\|w\|^{2}\right).$ ## 5 Critical points at infinity and their indices Critical points at infinity of the functional $J_{\varrho}$ are accumulation points of some orbits of the negative gradient flow $-\nabla J_{\varrho}$ which enter some $V(m,\varepsilon)$ and remain there indefinitely. In this section we recall for the sake of completeness the full characterization of these critical points proved in [1] and restate their contribution to the difference of topology between the level sets of the functional $J_{\varrho}$. We start with an expansion of $J_{\varrho}$ in the neighborhood at infinity $V(m,\varepsilon)$. ###### Proposition 5.1 Let $u:=\sum_{i=1}^{m}\alpha_{i}\varphi_{i}+w\in V(m,\varepsilon)$ with $w\in E^{m}_{{A},\Lambda}$. Assume that $|\alpha_{i}-1|\ln\lambda_{i}$ is small for each $i$ and (25) holds. Then $\displaystyle J_{8m\pi}$ $\displaystyle(u)=-8\pi m(1+\ln(m\pi))-8\pi\mathcal{F}_{m}^{K}(a_{1},...,a_{m})-4\pi\sum_{i=1}^{m}|\tau^{\prime}_{i}|^{2}+16\pi\sum_{i=1}^{m}(\alpha_{i}-1)^{2}\ln\lambda_{i}$ $\displaystyle-4\pi\frac{\ln\lambda_{1}}{\lambda_{1}^{2}\mathcal{F}^{A}_{1}(a_{1})}\sum_{i=1}^{m}\Big{(}\Delta\mathcal{F}^{A}_{i}(a_{i})-2K_{g}(a_{i})\mathcal{F}^{A}_{i}(a_{i})\Big{)}+\sum_{k=1}^{m}\Big{\\{}O\Big{(}(\alpha_{k}-1)^{2}+\frac{1}{\lambda_{k}^{2}}\Big{)}+o(|{\tau}^{\prime}_{k}|^{2})\Big{\\}},$ where $\mathcal{F}^{A}_{i}$ and $g^{A}_{i}$ are defined in (49) and ${\tau}^{\prime}_{i}=1-\frac{{m}\,\lambda_{i}^{4\alpha_{i}-2}\mathcal{F}^{A}_{i}(a_{i})}{\sum_{k=1}^{m}\lambda_{k}^{4\alpha_{k}-2}\mathcal{F}^{A}_{k}(a_{k})}.$ Proof. The proof follows from Lemmas 7.2 and 7.4 and the following computations. 
Let us denote by $\Gamma:={\sum_{i=1}^{m}\frac{\lambda_{i}^{4\alpha_{i}-2}\mathcal{F}^{A}_{i}(a_{i})g_{i}^{A}(a_{i})}{2\alpha_{i}-1}}\qquad\mbox{ and }\qquad\tilde{\tau}_{i}=1-\frac{\frac{m}{2\alpha_{i}-1}\lambda_{i}^{4\alpha_{i}-2}\mathcal{F}^{A}_{i}(a_{i})g_{i}^{A}(a_{i})}{\sum\frac{1}{2\alpha_{k}-1}\lambda_{k}^{4\alpha_{k}-2}\mathcal{F}^{A}_{k}(a_{k})g_{k}^{A}(a_{k})}.$ Since $w\in E^{m}_{{A},\Lambda}$, then (29) holds and using Lemma 7.4, we have $\displaystyle\ln\Big{(}\int_{\Sigma}Ke^{u}\Big{)}$ $\displaystyle=\ln(\pi\Gamma)+\ln\Big{\\{}1+\frac{1}{2\Gamma}\sum_{i=1}^{m}\Big{(}\Delta\mathcal{F}^{A}_{i}(a_{i})-2K_{g}(a_{i})\mathcal{F}^{A}_{i}(a_{i})\Big{)}\ln\lambda_{i}$ $\displaystyle+\frac{4\pi}{\Gamma}\Big{(}\sum_{j=1}^{m}\frac{\ln\lambda_{j}}{\lambda_{j}^{2}}\Big{)}\sum_{i=1}^{m}\lambda_{i}^{4\alpha_{i}-2}\mathcal{F}_{i}^{A}(a_{i})+O\big{(}\sum_{k=1}^{m}\big{\\{}|\alpha_{k}-1|^{2}+\frac{1}{\lambda^{2}_{k}}\big{\\}}\big{)}\Big{\\}}$ $\displaystyle=\ln\pi-\frac{1}{m}\sum_{i=1}^{m}\ln(1-\tilde{\tau}_{i})+\frac{1}{2\Gamma}\sum_{i=1}^{m}\Big{(}\Delta\mathcal{F}^{A}_{i}(a_{i})-2K_{g}(a_{i})\mathcal{F}^{A}_{i}(a_{i})\Big{)}\ln\lambda_{i}$ $\displaystyle+\frac{1}{m}\sum_{i=1}^{m}\ln\Big{(}\frac{m\lambda_{i}^{4\alpha_{i}-2}\mathcal{F}^{A}_{i}(a_{i})g_{i}^{A}(a_{i})}{2\alpha_{i}-1}\Big{)}+\frac{4\pi}{\Gamma}\Big{(}\sum_{j=1}^{m}\frac{\ln\lambda_{j}}{\lambda_{j}^{2}}\Big{)}\sum_{i=1}^{m}\lambda_{i}^{4\alpha_{i}-2}\mathcal{F}_{i}^{A}(a_{i})$ $\displaystyle+O\Big{(}\sum_{k=1}^{m}\big{\\{}|\alpha_{k}-1|^{2}+\frac{1}{\lambda^{2}_{k}}\big{\\}}\Big{)}$ $\displaystyle=\ln\pi+\frac{1}{m}\sum_{i=1}^{m}\Big{\\{}\tilde{\tau}_{i}+\frac{\tilde{\tau}_{i}^{2}}{2}\Big{\\}}+\frac{1}{2\Gamma}\sum_{i=1}^{m}\Big{(}\Delta\mathcal{F}^{A}_{i}(a_{i})-2K_{g}(a_{i})\mathcal{F}^{A}_{i}(a_{i})\Big{)}\ln\lambda_{i}$ $\displaystyle+\ln m+\frac{1}{m}\sum_{i=1}^{m}\\{-\ln(2\alpha_{i}-1)+(4\alpha_{i}-2)\ln\lambda_{i}+\ln(\mathcal{F}^{A}_{i}(a_{i}))+\ln(g_{i}^{A}(a_{i}))\\}$ 
$\displaystyle+\frac{4\pi}{\Gamma}\Big{(}\sum_{j=1}^{m}\frac{\ln\lambda_{j}}{\lambda_{j}^{2}}\Big{)}\sum_{i=1}^{m}\lambda_{i}^{4\alpha_{i}-2}\mathcal{F}_{i}^{A}(a_{i})+O\Big{(}\sum_{k=1}^{m}\Big{\\{}|\alpha_{k}-1|^{2}+|\tilde{\tau}_{k}|^{3}+\frac{1}{\lambda_{k}^{2}}\Big{\\}}\Big{)}.$ We remark that $\sum_{i=1}^{m}\tilde{\tau}_{i}=0$ and for each $i$, we have $\tilde{\tau}_{i}=O(\varepsilon)$ which implies that $\lambda_{i}^{2}\mathcal{F}^{A}_{i}(a_{i})=\lambda_{j}^{2}\mathcal{F}^{A}_{j}(a_{j})(1+O(\sum|\alpha_{k}-1|\ln\lambda_{k}))$ for each $i,j$ and $\ln\lambda_{i}=\ln\lambda_{1}+O(1)$. Furthermore, we have $\displaystyle\ln\big{(}2\alpha_{i}-1\big{)}=\ln\big{(}1+2(\alpha_{i}-1)\big{)}=2(\alpha_{i}-1)+O(|\alpha_{i}-1|^{2}),$ $\displaystyle g_{i}^{A}(a_{i})=1+O\Big{(}\sum_{k=1}^{m}|\alpha_{k}-1|\Big{)}\quad;\quad\Gamma=\sum_{i=1}^{m}\lambda_{i}^{2}\mathcal{F}^{A}_{i}(a_{i})+O\Big{(}\sum_{k=1}^{m}|\alpha_{k}-1|\lambda_{k}^{2}\ln\lambda_{k}\Big{)}$ $\displaystyle\mbox{and}\quad|\tilde{\tau}_{i}|^{2}=|\tau^{\prime}_{i}|^{2}+o(|\tau^{\prime}_{i}|^{2})+\sum O(|\alpha_{k}-1|^{2}).$ Hence, the result follows. In [1] we constructed the following decreasing pseudogradient of the Euler Lagrange functional $J_{\varrho}$ in the neighborhood at Infinity $V(m,\varepsilon)$: ###### Proposition 5.2 [1] Let $\varrho=8\pi m$ with $m\geq 1$ and assume that the function $K$ satisfies the condition $(i)$ of $(\mathcal{N}_{m})$. Then there exists a pseudogradient $W$ defined in $V(m,\varepsilon)$ and satisfying the following properties: There exists a constant $C$ independent of $u=\sum_{i=1}^{m}\alpha_{i}\varphi_{a_{i},\lambda_{i}}+\bar{w}$ such that 1. (1) $\displaystyle{\langle-\nabla J_{\varrho}(u),W\rangle\,\geq\,C\sum_{i=1}^{m}\Big{(}|\alpha_{i}-1|\,+\,|\tau_{i}|\,+\,\frac{|\nabla\mathcal{F}^{A}_{i}(a_{i})|}{\lambda_{i}}\,+\,\frac{\ln\lambda_{i}}{\lambda_{i}^{2}}\Big{)}}$, 2. 
(2) $\displaystyle{\langle-\nabla J_{\varrho}(u),W+\frac{\partial\overline{w}(W)}{\partial(\alpha,\lambda,a)}\rangle\,\geq\,C\sum_{i=1}^{m}\Big{(}|\alpha_{i}-1|\,+\,|\tau_{i}|\,+\,\frac{|\nabla\mathcal{F}^{A}_{i}(a_{i})|}{\lambda_{i}}\,+\,\frac{\ln\lambda_{i}}{\lambda_{i}^{2}}\Big{)}}$, 3. (3) $|W|$ is bounded and the only region where the variables $\lambda_{i}$’s increase along the flow lines of $W$ is the region where $(a_{1},\cdots,a_{m})$ is very close to a critical point $q:=(q_{1},\cdots,q_{m})$ of $\mathcal{F}^{K}_{m}$ with $q\in\mathcal{K}^{-}_{m}$. Following the program developed in [1], we deduce from Proposition 5.2 the following characterization of the critical points at infinity: ###### Proposition 5.3 [1] Let $\varrho=8\pi m$, $m\geq 1$. The critical points at infinity of $J_{\varrho}$ are in one-to-one correspondence with the critical points $Q:=(q_{1},\cdots,q_{m})$ of $\mathcal{F}_{m}^{K}$ satisfying $Q\in\mathcal{K}^{-}_{m}$; they will be denoted by $(Q)_{\infty}$. Furthermore, the energy level of such a critical point at infinity $(Q)_{\infty}$, denoted $C_{\infty}(Q)_{\infty}$, is given by: $C_{\infty}(Q)_{\infty}\,=\,-8\pi m(1+\ln(m\pi))\,-8\pi\mathcal{F}^{K}_{m}(Q).$ Moreover, the Morse index of such a critical point at infinity $(Q)_{\infty}$ is given by: $\iota_{\infty}(Q):=\,3m-1-Morse(\mathcal{F}^{K}_{m},Q).$ We point out that around _a critical point at infinity_ there is a Morse-type reduction.
Indeed, denoting by $\displaystyle V(m,Q,\varepsilon)\,:=\\{u=\sum_{i=1}^{m}\alpha_{i}\varphi_{a_{i},\lambda_{i}}\,+w\,\in V(m,\varepsilon):\,\forall i=1,\cdots,m,\,|a_{i}-q_{i}|<\varepsilon;\,$ (30) $\displaystyle|\alpha_{i}-1|\ln\lambda_{i}<\varepsilon,\mbox{ and }\forall i\neq j\quad|\frac{\lambda_{i}^{2}\mathcal{F}^{A}_{i}(a_{i})}{\lambda_{j}^{2}\mathcal{F}^{A}_{j}(a_{j})}\,-1|<\varepsilon\\},$ where $Q:=(q_{1},\cdots,q_{m})$ is a critical point of $\mathcal{F}^{K}_{m}$ with $\mathcal{L}(Q)<0$, we have the following Morse lemma at infinity, whose proof is inspired by that of a similar statement for Yamabe-type flows written by A. Bahri in [6] (pages 415-417). We state our result as follows: ###### Lemma 5.4 Let $u=\sum_{i=1}^{m}\alpha_{i}\varphi_{a_{i},\lambda_{i}}\,+w\,\in V(m,Q,\varepsilon)$. Then there exists a change of variables $(a_{1},\cdots,a_{m},\alpha_{1},\cdots,\alpha_{m},\lambda_{1},\cdots,\lambda_{m},w)\mapsto(\overline{a}_{1},\cdots,\overline{a}_{m},\beta_{1},\cdots,\beta_{m},\Lambda_{1},y_{2},\cdots,y_{m},V)$ (where $(\overline{a}_{1},\cdots,\overline{a}_{m})$ is close to $Q$, the $\beta_{i}$’s and the $y_{i}$’s are small and $\Lambda_{1}$ is very large) such that $J_{\varrho}(u)=-8\pi m(1+\ln(m\pi))-8\pi\mathcal{F}_{m}^{K}(\overline{a}_{1},...,\overline{a}_{m})+\sum_{i=1}^{m}\beta_{i}^{2}-\sum_{i=2}^{m}y_{i}^{2}+(4\pi-\sigma)\frac{(-\mathcal{L}(Q))}{\mathcal{F}^{Q}_{1}(q_{1})}\frac{\ln{\Lambda}_{1}}{\Lambda_{1}^{2}}+\|V\|^{2}$ where $\sigma$ is a small positive constant. Proof. We recall that for $\sum\alpha_{i}\varphi_{i}\in V(m,\varepsilon)$, we have by Proposition 3.2 that there exists a unique $\overline{w}$ which minimizes $J_{\varrho}(\sum\alpha_{i}\varphi_{i}+w)$ in the space $E_{A,\Lambda}^{m}$.
Hence, by the classical Morse Lemma, there exists a change of variable $w-\overline{w}\to V$ so that (31) $J_{\varrho}(u)=J_{\varrho}(\overline{u})+\|V\|^{2}\quad\mbox{ where }{u}:=\sum\alpha_{i}\varphi_{i}+{w}\quad\mbox{ and }\quad\overline{u}:=\sum\alpha_{i}\varphi_{i}+\overline{w}.$ Furthermore for $\varepsilon^{\prime}>0$ small (with $\varepsilon/\varepsilon^{\prime}$ small) and $W$ the pseudogradient constructed in Proposition 5.2, we have that for $\overline{u}:=\sum\alpha_{i}\varphi_{i}+\overline{w}\in V(m,\varepsilon^{\prime})$ there holds: (32) $\langle\nabla J_{\varrho}(\overline{u}),W\rangle\leq-c\sum\Big{(}|\alpha_{i}-1|+|\tau_{i}|+\frac{|\nabla\mathcal{F}_{i}^{A}(a_{i})|}{\lambda_{i}}+\frac{\ln\lambda_{i}}{\lambda_{i}^{2}}\Big{)}.$ Next let $\sigma>0$ be a small constant and set (33) $\displaystyle I(\overline{u}):=$ $\displaystyle-8\pi m(1+\ln(m\pi))-8\pi\mathcal{F}_{m}^{K}(a_{1},...,a_{m})-(4\pi-\sigma)\sum_{i=1}^{m}|{\tau}^{\prime}_{i}|^{2}$ $\displaystyle+(16\pi+\sigma)\sum_{i=1}^{m}(\alpha_{i}-1)^{2}\ln\lambda_{i}^{2\alpha_{i}-1}-(4\pi-\sigma)\frac{\ln\lambda_{1}^{2\alpha_{1}-1}}{\lambda_{1}^{4\alpha_{1}-2}\mathcal{F}^{Q}_{1}(q_{1})}\mathcal{L}(Q).$ where $Q:=(q_{1},\cdots,q_{m})$. 
Since we assumed that $|\alpha_{i}-1|\ln\lambda_{i}$ is small for each $i$, it is easy to see that (34) $0<I(\overline{u})-J_{\varrho}(\overline{u})\leq 2\sigma\sum\Big{(}(\alpha_{i}-1)^{2}\ln\lambda_{i}+\frac{\ln\lambda_{1}}{\lambda_{1}^{2}}+|{\tau}^{\prime}_{i}|^{2}\Big{)}\quad\mbox{ for each }\overline{u}.$ Furthermore it follows from Proposition 5.2 that (35) $\langle\nabla I(\overline{u}),W\rangle\leq-c\sum\Big{(}|\alpha_{i}-1|+|\tau_{i}|+\frac{|\nabla\mathcal{F}_{i}^{A}(a_{i})|}{\lambda_{i}}+\frac{\ln\lambda_{i}}{\lambda_{i}^{2}}\Big{)}.$ Next for $\overline{u}_{0}\in V(m,\varepsilon)$ we consider the following differential equation (36) $\frac{\partial u}{\partial s}=W(u)\quad;\quad u(0)=\overline{u}_{0},$ whose solution is $h_{s}(\overline{u}_{0})$, where $h_{s}$ is the 1-parameter group generated by $W$. Note that, for $\overline{u}:=\sum\alpha_{i}\varphi_{a_{i},\lambda_{i}}+\overline{w}$, as long as $h_{s}(\overline{u})\in V(m,Q,\varepsilon)$ we have that $h_{s}(\overline{u})=\sum\alpha_{i}(s)\varphi_{a_{i}(s),\lambda_{i}(s)}+\overline{w(s)},$ that is, $\overline{w(s)}$ satisfies the conclusions of Proposition 3.2. Next we claim: CLAIM 1: There exists $\overline{s}>0$ such that $I(h_{\overline{s}}(\overline{u}_{0}))=J_{\varrho}(\overline{u}_{0})$. Observe that $I(h_{s}(\overline{u}_{0}))$ is a decreasing function with respect to $s$. Hence there exists at most one solution to the equation $I(h_{s}(\overline{u}_{0}))=J_{\varrho}(\overline{u}_{0})$. The only cases where there could be no solution are * • either $h_{s}(\overline{u}_{0})$ exits $V(m,\varepsilon^{\prime})$ (outside this set, we lose (32) since $W$ is defined only in $V(m,\varepsilon^{\prime})$) before reaching this level, * • or $h_{s}(\overline{u}_{0})$ will build a critical point at infinity before reaching the level $J_{\varrho}(\overline{u}_{0})$. We will prove that these two cases cannot occur.
In fact, for the first one, since $\overline{u}_{0}\in V(m,\varepsilon)$ then, to exit $V(m,\varepsilon^{\prime})$, the flow line has to travel from $V(m,\varepsilon^{\prime}/2)$ to the boundary of $V(m,\varepsilon^{\prime})$. Note that, using (35), we have that: $\partial I(h_{{s}}(\overline{u}_{0}))/\partial s\leq-c(\varepsilon^{\prime})$ along this path, independent of $\varepsilon$, but depending on $\varepsilon^{\prime}$. Also, by (35), the time to travel from $V(m,\varepsilon^{\prime}/2)$ to the boundary of $V(m,\varepsilon^{\prime})$ is lower-bounded by a constant $c^{\prime}(\varepsilon^{\prime})$, because $|W|$ is bounded and the distance to travel is lower-bounded by a constant $c>0$. Therefore, $I(h_{{s}}(\overline{u}_{0}))$ decreases at least by $c(\varepsilon^{\prime})c^{\prime}(\varepsilon^{\prime})$ during this trip. However, using (34), since $\overline{u}_{0}\in V(m,\varepsilon)$, it follows that $J_{\varrho}(\overline{u}_{0})<I(\overline{u}_{0})\leq J_{\varrho}(\overline{u}_{0})+c(\varepsilon)$. Hence we have to choose $\varepsilon$ small with respect to $\varepsilon^{\prime}$ so that $I(h_{{s}}(\overline{u}_{0}))$ reaches the level $J_{\varrho}(\overline{u}_{0})$ before leaving the set $V(m,\varepsilon^{\prime})$. Concerning the second case, the flow line $h_{{s}}(\overline{u}_{0})$ will enter $V(m,\varepsilon_{1})$ for each $\varepsilon_{1}>0$. Observe that $J_{\varrho}(h_{s}(\overline{u}_{0}))<J_{\varrho}(\overline{u}_{0})\quad;\quad 0<I(h_{{s}}(\overline{u}_{0}))-J_{\varrho}(h_{{s}}(\overline{u}_{0}))\to 0\quad\mbox{ as }s\to\infty\,\,(\mbox{for }\varepsilon_{1}\to 0).$ Hence $I(h_{{s}}(\overline{u}_{0}))$ has to reach the level $J_{\varrho}(\overline{u}_{0})$ and therefore this case cannot occur. Our claim is thereby proved. 
Conversely, taking $\varepsilon^{\prime\prime}>0$ small with respect to $\varepsilon$ and given $\overline{u}^{\prime}_{0}\in V(m,\varepsilon^{\prime\prime})$, arguing in the same way (and using $-W$ as an increasing pseudogradient): there exists $\overline{s}^{\prime}>0$ such that $J_{\varrho}(h_{-\overline{s}^{\prime}}(\overline{u}^{\prime}_{0}))=I(\overline{u}^{\prime}_{0})$. Hence, $h_{s}$ is the required isomorphism. It follows from (33) and the previous claim that $\displaystyle J_{\varrho}(\sum_{i=1}^{m}\alpha_{i}\varphi_{a_{i},\lambda_{i}}\,+\,\overline{w})=$ $\displaystyle-8\pi m(1+\ln(m\pi))-8\pi\mathcal{F}_{m}^{K}(\overline{a}_{1},...,\overline{a}_{m})-(4\pi-\sigma)\sum_{i=1}^{m}|{\overline{\tau}}^{\prime}_{i}|^{2}$ $\displaystyle+(16\pi+\sigma)\sum_{i=1}^{m}(\overline{\alpha}_{i}-1)^{2}\ln(\overline{\lambda}_{i})^{2\overline{\alpha}_{i}-1}-(4\pi-\sigma)\frac{\ln(\overline{\lambda}_{1})^{2\overline{\alpha}_{1}-1}}{(\overline{\lambda}_{1})^{4\overline{\alpha}_{1}-2}\mathcal{F}^{Q}_{1}(q_{1})}\mathcal{L}(Q),$ where $\overline{\tau}^{\prime}_{i}$ has the same definition as $\tau^{\prime}_{i}$ (see Proposition 5.1) using the new variables. In order to achieve the split of variables as claimed in the Lemma we need to perform some changes of variables. We first consider the following change of variables $\displaystyle\psi_{1}:Y:=$ $\displaystyle(\overline{a}_{1},\cdots,\overline{a}_{m},\overline{\alpha}_{1},\cdots,\overline{\alpha}_{m},\overline{\lambda}_{1},\cdots,\overline{\lambda}_{m})\mapsto(\overline{a}_{1},\cdots,\overline{a}_{m},\overline{\alpha}_{1},\cdots,\overline{\alpha}_{m},\Lambda_{1},\cdots,\Lambda_{m})$ $\displaystyle\mbox{ with }\Lambda_{i}:=\overline{\lambda}_{i}^{2\overline{\alpha}_{i}-1}.$ By computing the determinant $\det(D\psi_{1}(Y))$, it is easy to show that $\psi_{1}$ is a diffeomorphism.
With this change of variables, the functional reads as follows: $\displaystyle J_{\varrho}(\sum_{i=1}^{m}\alpha_{i}\varphi_{a_{i},\lambda_{i}}\,+\,\overline{w}):=$ $\displaystyle-8\pi m(1+\ln(m\pi))-8\pi\mathcal{F}_{m}^{K}(\overline{a}_{1},...,\overline{a}_{m})-(4\pi-\sigma)\sum_{i=1}^{m}|{{\tau}}^{\prime\prime}_{i}|^{2}$ $\displaystyle+(16\pi+\sigma)\sum_{i=1}^{m}(\overline{\alpha}_{i}-1)^{2}\ln{\Lambda}_{i}-(4\pi-\sigma)\frac{\ln{\Lambda}_{1}}{{\Lambda}_{1}^{2}\mathcal{F}^{Q}_{1}(q_{1})}\mathcal{L}(Q),$ where ${\tau}^{\prime\prime}_{i}=1-\frac{{m}\,\Lambda_{i}^{2}\mathcal{F}^{\overline{A}}_{i}(\overline{a}_{i})}{\sum_{k=1}^{m}\Lambda_{k}^{2}\mathcal{F}^{\overline{A}}_{k}(\overline{a}_{k})}.$ Furthermore we perform the following change of variables: $\psi_{2}:Y:=(\overline{a}_{1},\cdots,\overline{a}_{m},\overline{\alpha}_{1},\cdots,\overline{\alpha}_{m},{\Lambda}_{1},\cdots,{\Lambda}_{m})\mapsto(\overline{a}_{1},\cdots,\overline{a}_{m},{\beta}_{1},\cdots,{\beta}_{m},x_{1},\cdots,x_{m})$ with $\beta_{i}:=\sqrt{16\pi+\sigma}(\overline{\alpha}_{i}-1)\sqrt{\ln\Lambda_{i}}\,\,\forall\,\,1\leq i\leq m\quad;\quad x_{1}:={\Lambda}_{1}\quad;\quad x_{i}:=\sqrt{4\pi-\sigma}\tau^{\prime\prime}_{i}\,\,\forall\,\,2\leq i\leq m.$ Next we claim: CLAIM 2: $\psi_{2}$ is a diffeomorphism on the set $(\alpha,a,\lambda)$ satisfying the conditions in the definition of $V(m,\varepsilon,Q)$, see (5). Indeed let $H:=(b_{1},\cdots,b_{m},\gamma_{1},\cdots,\gamma_{m},\xi_{1},\cdots,\xi_{m})$. We need to solve $D\psi_{2}(Y)(H)=0$ and to prove that the unique solution is $H=0$. Let $\psi_{2}^{j}$ be the $j$-th component of $\psi_{2}$. 
It holds $D\psi_{2}(Y)(H)=0\Longleftrightarrow D\psi_{2}^{j}(H)=0\,\,\forall\,\,1\leq j\leq 3m\Longleftrightarrow\begin{cases}b_{j}=0\,\,\forall\,\,j=1,\cdots,m,\\\ \sqrt{\ln\Lambda_{i}}\gamma_{i}+\frac{(\overline{\alpha}_{i}-1)}{2\Lambda_{i}\sqrt{\ln\Lambda_{i}}}\xi_{i}=0\,\,\forall\,\,i\leq m,\\\ \xi_{1}=0,\\\ \sum_{j=2}^{m}\frac{\partial x_{i}}{\partial\Lambda_{j}}\xi_{j}=0\,\,\forall\,\,i=2,\cdots,m.\end{cases}$ We will focus on the last equation. Let $\mathcal{F}_{k}:=\mathcal{F}_{k}^{\overline{A}}(\overline{a}_{k})$ and $D:=\sum_{k=1}^{m}\Lambda_{k}^{2}\mathcal{F}_{k}$. Observe that $\Lambda_{i}\frac{\partial x_{i}}{\partial\Lambda_{i}}=-\frac{2m}{D^{2}}\Lambda_{i}^{2}\mathcal{F}_{i}\Big{(}\sum_{k\neq i}\Lambda_{k}^{2}\mathcal{F}_{k}\Big{)}\quad;\quad\Lambda_{j}\frac{\partial x_{i}}{\partial\Lambda_{j}}=\frac{2m}{D^{2}}\Lambda_{i}^{2}\mathcal{F}_{i}\Lambda_{j}^{2}\mathcal{F}_{j}\,\,\forall\,j\neq i.$ Hence the last equation of the system is equivalent to $-\Big{(}\sum_{k\neq i}\Lambda_{k}^{2}\mathcal{F}_{k}\Big{)}\frac{\xi_{i}}{\Lambda_{i}}+\sum_{j\geq 2;j\neq i}\Lambda_{j}^{2}\mathcal{F}_{j}\frac{\xi_{j}}{\Lambda_{j}}=0\,\,\forall\,\,i\geq 2\Longleftrightarrow\sum_{j\geq 2}\Lambda_{j}^{2}\mathcal{F}_{j}\frac{\xi_{j}}{\Lambda_{j}}=\Big{(}\sum_{k=1}^{m}\Lambda_{k}^{2}\mathcal{F}_{k}\Big{)}\frac{\xi_{i}}{\Lambda_{i}}\,\,\forall\,\,i\geq 2.$ Thus we derive that $\frac{\xi_{i}}{\Lambda_{i}}=\frac{\xi_{2}}{\Lambda_{2}}$ for each $i\geq 2$. Putting this information in the last equation implies $\Lambda_{1}^{2}\mathcal{F}_{1}\frac{\xi_{2}}{\Lambda_{2}}=0$, which implies that $\xi_{2}=0$ and therefore $H=0$. This implies that $D\psi_{2}(Y)$ is invertible and the claim follows from the inverse function theorem. 
Next we observe that, using the coordinates provided by the diffeomorphism $\psi_{2}$, the functional $J_{8m\pi}$ reads as follows $\displaystyle J_{\varrho}(\sum_{i=1}^{m}\alpha_{i}\varphi_{a_{i},\lambda_{i}}\,+\,\overline{w})=$ $\displaystyle-8\pi m(1+\ln(m\pi))-8\pi\mathcal{F}_{m}^{K}(\overline{a}_{1},...,\overline{a}_{m})+\sum_{i=1}^{m}\beta_{i}^{2}$ $\displaystyle-\sum_{i=2}^{m}x_{i}^{2}-\Big{(}\sum_{i=2}^{m}x_{i}\Big{)}^{2}+(4\pi-\sigma)\frac{(-\mathcal{L}(Q))}{\mathcal{F}^{Q}_{1}(q_{1})}\frac{\ln{\Lambda}_{1}}{\Lambda_{1}^{2}}.$ Finally, it follows from elementary linear algebra that there exists a change of variables: $(x_{2},\cdots,x_{m})\mapsto(y_{2},\cdots,y_{m})$ such that $J_{8m\pi}$ reads as $J_{\varrho}(\sum_{i=1}^{m}\alpha_{i}\varphi_{a_{i},\lambda_{i}}\,+\,\overline{w})=-8\pi m(1+\ln(m\pi))-8\pi\mathcal{F}_{m}^{K}(\overline{A})+\sum_{i=1}^{m}\beta_{i}^{2}-\sum_{i=2}^{m}y_{i}^{2}+(4\pi-\sigma)\frac{(-\mathcal{L}(Q))}{\mathcal{F}^{Q}_{1}(q_{1})}\frac{\ln{\Lambda}_{1}}{\Lambda_{1}^{2}}$ where $\overline{A}:=(\overline{a}_{1},...,\overline{a}_{m})$. This ends the proof. Next, as a consequence of Proposition 5.3 and the Morse reduction in Lemma 5.4, one derives the topological contribution of the _critical points at infinity_ to the difference of topology between the level sets of the functional $J_{\varrho}$. Namely we have the following corollary ###### Corollary 5.5 Let $(Q)_{\infty}$ be a critical point at infinity corresponding to the critical point $Q$ of $\mathcal{F}^{K}_{m}$ at the level $C_{\infty}(Q)_{\infty}$ with index $\iota_{\infty}(Q)$. Assume that at the level $C_{\infty}(Q)_{\infty}$ there is no other critical point/critical point at infinity. 
Then for $\theta$ a small positive number and a field $\mathbb{F}$, we have that (37) $\displaystyle H_{l}(J_{\varrho}^{C_{\infty}(Q)_{\infty}+\theta},J_{\varrho}^{C_{\infty}(Q)_{\infty}-\theta};\mathbb{F})=\begin{cases}\mathbb{F}&\mbox{ if }\quad l=\iota_{\infty}(Q),\\\ 0,&\mbox{otherwise}.\end{cases}$ where $H_{l}$ denotes the $l-$dimensional homology group with coefficients in the field $\mathbb{F}$. ## 6 Proof of the main results Proof of Theorem 1.1. We start by proving statement $1)$. To this aim we consider the following sup-approximation of $(MF)$ (38) $(MF)_{\mu}\quad-\Delta_{g}u\,=\,8m\pi(1+\mu)(\frac{Ke^{u}}{\int_{\Sigma}Ke^{u}dV_{g}}\,-\,1)\quad\mbox{ in }\Sigma,$ where $\mu>0$ is a small positive real number. Regarding Problem $(MF)_{\mu}$ we prove the following: Claim 1: For a sequence $\mu_{k}\to 0$, Problem $(MF)_{\mu_{k}}$ admits a solution $u_{\mu_{k}}$ whose generalized Morse index $Morse(u_{\mu_{k}})$ is $3m$. Indeed let $J_{8\pi m(1+\mu)}$ be the Euler-Lagrange functional associated to $(MF)_{\mu}$; then it follows from [15, 26, 16] that, for $L$ large, the sublevel set $J_{8\pi m(1+{\mu})}^{-L}$ has the same homotopy type as the set of formal barycenters $B_{m}(\Sigma)$ of order $m$. Note that (see [18, 26]) we have $H_{3m-1}(J_{8\pi m(1+\mu)}^{-L},\mathbb{Z}_{2})=H_{3m-1}(B_{m}(\Sigma),\mathbb{Z}_{2})\,\neq\,0.$ Let $\varrho_{\mu}:=8\pi m(1+\mu)$; using the fact that $J^{L}_{\varrho_{\mu}}$ is a contractible set and the exact sequence of the pair $(J^{L}_{\varrho_{\mu}},J^{-L}_{\varrho_{\mu}})$ we derive that $\begin{CD}0=H_{3m}(J_{\varrho_{\mu}}^{L})\to H_{3m}(J_{\varrho_{\mu}}^{L},J_{\varrho_{\mu}}^{-L})\to H_{3m-1}(J_{\varrho_{\mu}}^{-L})\to H_{3m-1}(J_{\varrho_{\mu}}^{L})=0.\end{CD}$ Hence it follows that $H_{3m}(J^{L}_{8\pi m(1+\mu)},J^{-L}_{8\pi m(1+\mu)})\,\neq 0.$ Therefore $J_{8\pi m(1+\mu)}$ has a critical point whose Morse index is $3m$. 
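The last step is the weak Morse inequality: assuming that $J_{\varrho_{\mu}}$ satisfies the Palais-Smale condition between the levels $-L$ and $L$ (which is the point of taking $\mu>0$, so that $\varrho_{\mu}$ is non-resonant) and that its critical points there are non-degenerate,

```latex
0\;\neq\;\dim H_{3m}\big(J_{\varrho_{\mu}}^{L},J_{\varrho_{\mu}}^{-L}\big)
\;\leq\;\#\big\{u\;:\;\nabla J_{\varrho_{\mu}}(u)=0,\ -L<J_{\varrho_{\mu}}(u)<L,\ \mathrm{ind}(u)=3m\big\},
```

which forces the existence of at least one critical point of index $3m$; in the degenerate case one obtains a critical point of generalized Morse index $3m$, as stated in Claim 1.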
To conclude the proof of the theorem we prove the following claim: Claim 2: $\displaystyle{u_{\mu_{k}}\to u_{\infty}\mbox{ in }C^{2,\alpha}(\Sigma),\,}$ where $u_{\infty}$ is a solution of Equation $(MF)$. To prove the claim it is enough to rule out the blow up of $u_{\mu_{k}}$. Arguing by contradiction, it follows from Proposition 7.6 that $u_{\mu_{k}}\in V(m,\varepsilon)$ for $k$ large. Hence this function can be written as $u_{\mu_{k}}=\sum_{i=1}^{m}\alpha_{i}\varphi_{a_{i},\lambda_{i}}+w,$ where the function $w\in E^{m}_{{A},\Lambda}$ satisfies (39) $\|w\|\leq c\Big{(}\sum|\alpha_{k}-1|+\frac{1}{\lambda}\Big{)}.$ Now, using the fact that $\nabla J_{8\pi m(1+\mu)}(u_{\mu_{k}})=0$ and Corollary 4.3, we get (40) $\sum|\alpha_{i}-1|\leq c\Big{(}\frac{1}{\lambda\ln\lambda}+\frac{1}{\ln\lambda}\sum|\tau_{j}-\mu+\mu\tau_{j}|\Big{)}.$ Now, using Proposition 4.1 we derive that (41) $16\pi\alpha_{i}(\tau_{i}-\mu+\mu\tau_{i})-64\pi^{2}\sum\frac{\ln\lambda_{j}}{\lambda_{j}^{2}}=O\Big{(}\sum|\alpha_{j}-1|^{2}\Big{)}+o\Big{(}\frac{\ln\lambda}{\lambda^{2}}\Big{)}.$ Using (40) and (41), we get that (42) $\sum|\tau_{i}-\mu+\mu\tau_{i}|=O\Big{(}\frac{\ln\lambda}{\lambda^{2}}\Big{)}.$ Hence, using (40), (42) and summing (41) for $i=1,\cdots,m$, we derive that $\sum_{i=1}^{m}\tau_{i}-m\mu+\mu\sum_{i=1}^{m}\tau_{i}=4\pi m\sum\frac{\ln\lambda_{j}}{\lambda_{j}^{2}}+o\Big{(}\frac{\ln\lambda}{\lambda^{2}}\Big{)}.$ Now using Lemma 4.4 we obtain (43) $\mu=\frac{\pi}{2}\,\mathcal{L}({A})\frac{\ln\lambda_{1}}{\int_{\Sigma}Ke^{u}}(1+o(1)),$ which implies that $\mathcal{L}({A})$ has to be positive. Furthermore it follows from Proposition 4.5, (40), (42) and (39) that ${A}:=(a_{1},\cdots,a_{m})$ converges to a critical point of $\mathcal{F}_{m}^{K}$. 
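In particular, combining (40) with (42) gives a quantitative bound on the $\alpha_{i}$'s:

```latex
\sum_{i=1}^{m}|\alpha_{i}-1|
\;\leq\;c\Big(\frac{1}{\lambda\ln\lambda}+\frac{1}{\ln\lambda}\cdot O\Big(\frac{\ln\lambda}{\lambda^{2}}\Big)\Big)
\;=\;O\Big(\frac{1}{\lambda\ln\lambda}\Big),
```

so that the error terms $O\big(\sum|\alpha_{j}-1|^{2}\big)$ appearing in (41) are indeed $o(\ln\lambda/\lambda^{2})$.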
Next expanding the functional $J_{8m\pi(1+\mu_{k})}$ in ${V}(m,\varepsilon)$ (following the proof of Proposition 5.1), we obtain $\begin{split}J_{8m\pi(1+\mu_{k})}(u_{\mu_{k}})\,&=\,C(m,\mu_{k})\,-\,8\pi(1+\mu_{k})\mathcal{F}^{K}_{m}(A)\,\,+16\pi\sum_{i=1}^{m}(\alpha_{i}-1)^{2}\ln\lambda_{i}\\\ &-\,4\pi\,\sum_{i=1}^{m}\widetilde{\tau}_{i}^{2}\,-\,4\pi\frac{\mathcal{L}(A)}{\mathcal{F}^{A}_{1}(a_{1})}\frac{\ln\lambda_{1}}{\lambda_{1}^{2}}\,+\,o(\frac{\ln\lambda}{\lambda^{2}}).\end{split}$ Hence we derive from this expansion, arguing as in Corollary 5.3, that $Morse(u_{\mu_{k}})\,=3m-Morse(\mathcal{F}^{K}_{m},A).$ Since $Morse(u_{\mu_{k}})=3m$, we have that $A$ is a local minimum of $\mathcal{F}_{m}^{K}$ satisfying that $\mathcal{L}(A)>0.$ Hence we reach a contradiction to the assumption of Theorem 1.1. The proof of statement $2)$ follows the same argument as above. The only difference is that we use a sub-approximation: (44) $(MF)_{\mu}\quad-\Delta_{g}u\,=\,8m\pi(1-\mu)\Big{(}\frac{Ke^{u}}{\int_{\Sigma}Ke^{u}dV_{g}}\,-\,1\Big{)}\quad\mbox{ in }\Sigma,$ where $\mu>0$ is a small positive real number. Using the fact that $H_{3m-4}(B_{m-1}(\Sigma),\mathbb{Z}_{2})\neq 0$ (see [18], Lemma 8.7) we have that $H_{3m-4}(J_{8\pi m(1-\mu)}^{-L},\mathbb{Z}_{2})=H_{3m-4}(B_{m-1}(\Sigma),\mathbb{Z}_{2})\,\neq\,0.$ Hence we have that $H_{3m-3}(J^{L}_{8\pi m(1-\mu)},J^{-L}_{8\pi m(1-\mu)})\,\neq 0.$ Therefore $J_{8\pi m(1-\mu)}$ has a critical point whose Morse index is $3m-3$. Next we claim: Claim 3: $\displaystyle{u_{\mu_{k}}\to u_{\infty}\mbox{ in }C^{2,\alpha}(\Sigma),\,}$ where $u_{\infty}$ is a solution of Equation $(MF)$. By elliptic regularity, it is enough to rule out the blow up of $u_{\mu_{k}}$. Arguing by contradiction, we have by Proposition 7.6 that for $k$ large $u_{\mu_{k}}\in V(m,\varepsilon)$. 
It follows then from Proposition 5.3 that $u_{\mu_{k}}\in V(m,Q,\varepsilon)$, where $Q$ is a critical point of $\mathcal{F}^{K}_{m}$ with $\mathcal{L}(Q)<0.$ Moreover we have that $Morse(u_{\mu_{k}})\,=\,3m-1-Morse(\mathcal{F}^{K}_{m},Q).$ That is, $Q$ is a critical point of $\mathcal{F}^{K}_{m}$ whose Morse index is 2 and $\mathcal{L}(Q)<0.$ A contradiction to the assumption $2)$ of the theorem. Hence the proof of statement 2) is complete. Proof of Theorem 1.2 We first observe that since the function $K$ satisfies the non-degeneracy condition $(\mathcal{N}_{m})$, $J_{\varrho}$ is a Morse function. Moreover the Morse indices of its critical points are uniformly bounded, say by $\bar{m}$, and it follows from Corollary 5.3 that the Morse indices of the critical points at infinity of $J_{8\pi m}$ are bounded by $3m-1$. Without loss of generality, we may assume that $J_{\varrho}$ separates its critical points as well as its critical points at infinity. That is, at any critical value there is only one critical point or one critical point at infinity. This can be arranged by perturbing $J_{\varrho}$ slightly in disjoint neighborhoods of its critical points (resp. its critical points at infinity). Next we choose $L$ such that all critical values, respectively, critical values at infinity are contained in the open interval $(-L,L)$ and we order these critical values as $-L<C_{1}<\cdots<C_{p}<L.$ Now choose regular values $a_{0}<\cdots<a_{p}$ such that $a_{0}=-L,a_{p}=L\mbox{ and }\,\,a_{i-1}<C_{i}<a_{i},\,\forall i=1,\cdots,p.$ Moreover we denote by $N_{i}$ the Morse index of the critical point $u_{i}$ such that $J_{\varrho}(u_{i})=C_{i}$, resp. by $N^{\infty}_{i}$ the index $\iota_{\infty}$ of the critical point at infinity $u^{\infty}_{i}$ such that $C_{\infty}(u^{\infty}_{i})=C_{i}$ and set $M_{i}:=J_{\varrho}^{a_{i}}:=\\{u:J_{\varrho}(u)<a_{i}\\}$. 
We notice that it follows from the standard Morse Lemma in the case of usual critical points and from Lemma 5.4 in the case of critical points at infinity, that $M_{i}$ is obtained from $M_{i-1}$ by attaching an $N_{i}$-cell (resp. an $N^{\infty}_{i}$-cell). We set $\theta(i,j):=dim\,H_{i}(M_{j},M_{j-1})\quad;\quad\mu(i,j):=dim\,H_{i}(M_{j},J_{\varrho}^{-L})$ and observe that $\theta(i,j)=\delta_{i,N_{j}}$ (resp. $\theta(i,j)=\delta_{i,N^{\infty}_{j}}$) and $\mu(i,j)=0$ if $i>\overline{N}:=\max(\bar{m},3m-1).$ Next, considering the triple $(M_{j},M_{j-1},J_{\varrho}^{-L})$ we have the exact sequence $\begin{CD}0@>{}>{}>H_{*}(M_{j-1},J_{\varrho}^{-L})@>{}>{}>H_{*}(M_{j},J_{\varrho}^{-L})@>{}>{}>H_{*}(M_{j},M_{j-1})\\\ @>{}>{}>H_{*-1}(M_{j-1},J_{\varrho}^{-L})@>{}>{}>\cdots @>{}>{}>H_{0}(M_{j},M_{j-1})\end{CD}$ We recall that exactness implies the vanishing of the corresponding alternating sum of the dimensions of the vector spaces in the sequence. Denoting by $\mathcal{N}_{q,j}$ the kernel of $H_{q}(M_{j-1},J_{\varrho}^{-L})\to H_{q}(M_{j},J_{\varrho}^{-L})$ and $\nu_{q,j}:=dim\mathcal{N}_{q,j}$ and using the exactness of the triple $(M_{j};M_{j-1},J_{\varrho}^{-L})$ we derive that (by grouping 3 terms at a time): $\nu_{q,j}\,=\,\sum_{i=0}^{q}(-1)^{i+q}\Big{(}\mu(i,j-1)\,-\mu(i,j)\,+\,\theta(i,j)\Big{)}.$ Summing over $j=1,\cdots,p$ yields $\sum_{j=1}^{p}\nu_{q,j}\,=\,\sum_{i=0}^{q}(-1)^{i+q}\Big{(}\mu(i,0)\,-\mu(i,p)\,+\,\sum_{j=1}^{p}\theta(i,j)\Big{)}.$ Note that $\mu(i,0)=0$ and $\mu(i,p)=\mbox{dim}(H_{i}(J_{\varrho}^{L},J_{\varrho}^{-L}))$. 
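The identity for $\nu_{q,j}$ is the standard dimension count for a truncated exact sequence: if $A_{N}\xrightarrow{f}A_{N-1}\to\cdots\to A_{0}\to 0$ is exact at $A_{N-1},\dots,A_{0}$ with all $A_{p}$ finite-dimensional, then

```latex
\dim\ker f\;=\;\sum_{p=0}^{N}(-1)^{N-p}\dim A_{p}.
```

Applying this to the sequence of the triple, the three terms in homological degree $i$ are $H_{i}(M_{j-1},J_{\varrho}^{-L})$, $H_{i}(M_{j},J_{\varrho}^{-L})$ and $H_{i}(M_{j},M_{j-1})$, of dimensions $\mu(i,j-1)$, $\mu(i,j)$ and $\theta(i,j)$; grouping them degree by degree, their signs are $(-1)^{i+q}$, $-(-1)^{i+q}$ and $(-1)^{i+q}$ respectively, which yields the displayed formula for $\nu_{q,j}$.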
Using the exact sequence of the pair $(J_{\varrho}^{L},J_{\varrho}^{-L})$ we derive that (45) $\begin{cases}&H_{0}(J_{\varrho}^{L},J_{\varrho}^{-L})\,\simeq\,H_{1}(J_{\varrho}^{L},J_{\varrho}^{-L})\simeq 0\quad\mbox{ and }\\\ &H_{i}(J_{\varrho}^{L},J_{\varrho}^{-L})\,\simeq\,H_{i-1}(J_{\varrho}^{-L})\simeq H_{i-1}(B_{m-1}(\Sigma)):=\beta_{i-1}^{m-1}\quad\forall i\geq 2.\end{cases}$ Moreover, denoting by $\nu_{i}$ the number of critical points of Morse index $i$ resp. by $\nu^{\infty}_{i}$ the number of critical points at Infinity of index $i$, it follows that $\sum_{j=1}^{p}\theta(i,j)\,=\nu_{i}+\nu^{\infty}_{i}.$ Hence we get (46) $\sum_{j=1}^{p}\nu_{q,j}+\sum_{i=2}^{q}(-1)^{q+i}\beta_{i-1}^{m-1}\,=\,\sum_{(A\in\mathcal{K}^{-}_{m};\iota_{\infty}(A)\leq q)}(-1)^{\iota_{\infty}(A)+q}\,+\,\sum_{i=0}^{q}(-1)^{i+q}\nu_{i}.$ Now, for $2\leq k\leq 3m-1$, summing (46) for $q=k$ and $q=k-1$, we obtain $\nu_{k}\,+\,\nu^{\infty}_{k}\,-\,\beta^{m-1}_{k-1}\,=\sum_{j=1}^{\overline{N}}\nu_{k,j}+\sum_{j=1}^{\overline{N}}\nu_{k-1,j}\geq\,0,$ and the statement $(a)$ is proved. The second claim follows by taking $q=\overline{N}$ in (46) and using $\nu_{\overline{N},j}=0$ for each $j$ and $\sum(-1)^{i}\beta_{i}^{m-1}=\chi(B_{m-1}(\Sigma))=1-\binom{m-1-\chi(\Sigma)}{m-1}.$ The proof of Theorem 1.2 is complete. Proof of Theorem 1.3 Since there are no critical points at infinity of index $q_{0}$, it follows from (10) that: $\nu_{q_{0}}\,\geq\,\beta^{m-1}_{q_{0}-1}.$ Since by assumption $\beta_{q_{0}-1}^{m-1}\neq 0$, we deduce that Equation $(MF)$ has at least $\beta_{q_{0}-1}^{m-1}$ solutions. In particular since $\beta^{m-1}_{3m-4}=H_{3m-4}(B_{m-1}(\Sigma),\mathbb{Z}_{2})\,\neq 0$, if there are no critical points at infinity of index $3m-3$, we obtain a solution of $(MF)$ whose Morse index is $3m-3$. 
Proof of Theorem 1.5 First observe that if there is no solution and there is no critical point at infinity of index $q_{0}$, then there holds: (47) $\nu_{q_{0},j}=0\quad\mbox{ and }\quad\nu_{q_{0}-1,j}=0\quad\forall\,j.$ Indeed, recall that $\nu_{q,j}:=dim\,Ker(i_{q,j})$ where $i_{q,j}$ denotes the map $i_{q,j}:\,H_{q}(M_{j-1},J_{\varrho}^{-L})\to H_{q}(M_{j},J_{\varrho}^{-L}).$ The claim is immediate for $q_{0}$ since there is no critical point or critical point at infinity of index $q_{0}$ which implies that $H_{q_{0}}(M_{j-1},J_{\varrho}^{-L})=H_{q_{0}}(M_{j},J_{\varrho}^{-L})=0$. For $q_{0}-1$, using the exact sequence of the triple $(M_{j},M_{j-1},J_{\varrho}^{-L})$, we get (since we assumed that there is no critical point/critical point at infinity of index $q_{0}$) $0=H_{q_{0}}(M_{j},M_{j-1})\to H_{q_{0}-1}(M_{j-1},J_{\varrho}^{-L})\to H_{q_{0}-1}(M_{j},J_{\varrho}^{-L}),\quad\mbox{ for each }\,j,$ which gives the claim for $q_{0}-1$. Next to prove Theorem 1.5, we assume that there is no solution and observe that $\beta_{i}^{m-1}=0$ for each $i>3m-4$. Furthermore, since there is no critical point at infinity of index $3m$, we get from the above claim (47) that $\nu_{3m-1,j}=0$ for each $j$. Hence, summing (46) for $q=3m-1$ and $q=3m-3$, we get $0\leq\sum_{j=1}^{p}\nu_{3m-3,j}=-(-1)^{3m-3}\sum_{A\in\mathcal{K}_{m}^{-};3m-2\leq\iota_{\infty}(A)\leq 3m-1}(-1)^{\iota_{\infty}(A)}=\nu^{\infty}_{3m-2}-\nu^{\infty}_{3m-1}.$ In the same way, summing (46) for $q=3m-1$ and $q=3m-4$, we get $\sum_{j=1}^{p}\nu_{3m-4,j}+\beta_{3m-4}^{m-1}=-(-1)^{3m-4}\sum_{A\in\mathcal{K}_{m}^{-};3m-3\leq\iota_{\infty}(A)\leq 3m-1}(-1)^{\iota_{\infty}(A)}=\nu^{\infty}_{3m-3}-\nu^{\infty}_{3m-2}+\nu^{\infty}_{3m-1}.$ Hence the result follows. 
Proof of Theorem 1.4 By contradiction, assume that $(MF)$ does not have solutions and observe that, as in (47) it follows from assumption $2)$ that $\forall j,\quad\nu_{q_{0}+1,j}\,=\,\nu_{q_{0},j}\,=\,\nu_{q_{0}-1,j}\,=\,\nu_{q_{0}-2,j}=0.$ Hence summing (46) for $k=q_{0}$ and $k=q_{0}-1$ we obtain $\nu^{\infty}_{q_{0}}\,=\,\beta^{m-1}_{q_{0}-1}.$ A contradiction to the assumptions $1)$ and $3)$. ## 7 Appendix In this section we collect some technical Lemmas used in this paper. ###### Lemma 7.1 Let $\varphi_{a,\lambda}$ and $\delta_{a,\lambda}$ be defined in (17) and (16). The following expansions hold pointwise $\displaystyle\varphi_{a,\lambda}=\delta_{a,\lambda}\,+\,\ln\big{(}\frac{\lambda^{2}}{8}\big{)}\,+\,8\pi\,H(a,.)\,+\,4\pi\frac{\ln\lambda}{\lambda^{2}}\,+\,O\big{(}\frac{1}{\lambda^{2}}\big{)}\quad\mbox{ in }\Sigma,$ $\displaystyle\lambda\frac{\partial\varphi_{a,\lambda}}{\partial\lambda}(x)=\frac{4}{1+\lambda^{2}\psi_{a}^{2}(x)}-8\pi\frac{\ln\lambda}{\lambda^{2}}+O\big{(}\frac{1}{\lambda^{2}}\big{)}\quad\mbox{ for each }x\in\Sigma,$ $\displaystyle\frac{1}{\lambda}\frac{\partial\varphi_{a,\lambda}}{\partial a}(x)=\frac{1}{\lambda}\frac{\partial\delta_{a,\lambda}}{\partial a}(x)+8\pi\frac{1}{\lambda}\frac{\partial H(a,x)}{\partial a}(x)+O\big{(}\frac{\ln\lambda}{\lambda^{2}}\big{)}\quad\mbox{ for each }x\in\Sigma.$ In particular, we have $\displaystyle\varphi_{a,\lambda}=8\pi\,G(a,.)\,+\,4\pi\frac{\ln\lambda}{\lambda^{2}}\,+\,O\big{(}\frac{1}{\lambda^{2}}\big{)}\quad\mbox{ in }\Sigma\setminus B_{a}(\eta),$ $\displaystyle\lambda\frac{\partial\varphi_{a,\lambda}}{\partial\lambda}=-8\pi\frac{\ln\lambda}{\lambda^{2}}+O\big{(}\frac{1}{\lambda^{2}}\big{)}\quad\mbox{ in }\Sigma\setminus B_{a}(\eta),$ $\displaystyle\frac{1}{\lambda}\frac{\partial\varphi_{a,\lambda}}{\partial a}=8\pi\frac{1}{\lambda}\frac{\partial G(a,.)}{\partial a}+O\big{(}\frac{\ln\lambda}{\lambda^{2}}\big{)}\quad\mbox{ in }\Sigma\setminus B_{a}(\eta).$ Proof. 
We remark that the second part of this lemma follows immediately from the first one. Now, we start by proving the first claim. First, observe that (48) $\int_{\Sigma}e^{\delta_{a,\lambda}+u_{a}}dV_{g}=\int_{B_{a}(\eta)}e^{\delta_{a,\lambda}}dV_{g_{a}}+\int_{\Sigma\setminus B_{a}(\eta)}O\big{(}\frac{1}{\lambda^{2}}\big{)}=8\pi+O\big{(}\frac{1}{\lambda^{2}}\big{)}.$ Now, let us define the following function $k_{a,\lambda}:=\varphi_{a,\lambda}-\delta_{a,\lambda}\,-\,\ln\big{(}\frac{\lambda^{2}}{8}\big{)}\,-\,8\pi\,H(a,.)$. We recall that $\delta_{a,\lambda}$ is a constant function in $\Sigma\setminus B_{a}(2\eta)$. Thus, using (15), we obtain $-\Delta_{g}k_{a,\lambda}=\begin{cases}&e^{\delta_{a,\lambda}+u_{a}}-\,\int_{\Sigma}e^{\delta_{a,\lambda}+u_{a}}dV_{g}+8\pi=O\big{(}\frac{1}{\lambda^{2}}\big{)}\quad(\mbox{in }\Sigma\setminus B_{a}(2\eta)).\\\ &e^{\delta_{a,\lambda}+u_{a}}-\,\int_{\Sigma}e^{\delta_{a,\lambda}+u_{a}}dV_{g}+e^{u_{a}}\Delta_{g_{a}}\delta_{a,\lambda}+8\pi=O\big{(}\frac{1}{\lambda^{2}}\big{)}\quad(\mbox{in }B_{a}(\eta)).\end{cases}$ It remains to consider the case of $x\in B_{a}(2\eta)\setminus B_{a}(\eta)$. Using again (15) we get $\displaystyle-\Delta_{g}k_{a,\lambda}$ $\displaystyle=O\big{(}\frac{1}{\lambda^{2}}\big{)}-8\pi-2e^{u_{a}}\Delta_{g_{a}}\ln(1+\lambda^{2}\psi_{a}^{2})+\big{(}8\pi+2e^{u_{a}}\Delta_{g_{a}}\ln(\psi_{a}^{2})\big{)}$ $\displaystyle=-2e^{u_{a}}\Delta_{g_{a}}\ln\big{(}\lambda^{2}+\frac{1}{\psi_{a}^{2}}\big{)}+O\big{(}\frac{1}{\lambda^{2}}\big{)}=O\big{(}\frac{1}{\lambda^{2}}\big{)}\quad(\mbox{in }B_{a}(2\eta)\setminus B_{a}(\eta)).$ Hence we obtain that $\Delta_{g}k_{a,\lambda}=O(1/\lambda^{2})$ in $\Sigma$ and therefore we get that $k_{a,\lambda}(x)-\int_{\Sigma}k_{a,\lambda}(y)dV_{g}=O\big{(}\frac{1}{\lambda^{2}}\big{)}\quad\mbox{in }\Sigma.$ It remains to estimate the previous integral. 
Using (12), (14) and (17), we get $\displaystyle\int_{\Sigma}k_{a,\lambda}dV_{g}$ $\displaystyle=-\int_{\Sigma}\delta_{a,\lambda}\,+\,\ln\big{(}\frac{\lambda^{2}}{8}\big{)}\,+\,8\pi\,H(a,.)dV_{g}\,=\,2\int_{\Sigma}\ln\big{(}1+\frac{1}{\lambda^{2}\psi_{a}^{2}}\big{)}dV_{g}$ $\displaystyle=2\int_{B_{a}(\eta)}\ln\big{(}1+\frac{1}{\lambda^{2}\psi_{a}^{2}}\big{)}dV_{g_{a}}+O\big{(}\int_{B_{a}(\eta)}\ln\big{(}1+\frac{1}{\lambda^{2}\psi_{a}^{2}}\big{)}|e^{-u_{a}}-1|dV_{g_{a}}+\frac{1}{\lambda^{2}}\big{)}$ $\displaystyle=\frac{4\pi}{\lambda^{2}}\int_{0}^{\lambda\eta}\ln \big{(}1+\frac{1}{r^{2}}\big{)}rdr+O\big{(}\frac{1}{\lambda^{2}}\big{)}=4\pi\frac{\ln\lambda}{\lambda^{2}}+O\big{(}\frac{1}{\lambda^{2}}\big{)}.$ Hence the proof of the first claim follows. The other claims can be proved in the same way. ###### Lemma 7.2 Let $\varphi_{a,\lambda}$ be defined in (17). The following expansions hold: $\|\varphi_{a,\lambda}\|^{2}=32\pi\ln\lambda+64\pi^{2}H(a,a)-16\pi+64\pi^{2}\frac{\ln\lambda}{\lambda^{2}}+O\big{(}\frac{1}{\lambda^{2}}\big{)},$ $\langle\varphi_{a,\lambda},\lambda\frac{\partial\varphi_{a,\lambda}}{\partial\lambda}\rangle_{g}=16\pi-64\pi^{2}\frac{\ln\lambda}{\lambda^{2}}+O\big{(}\frac{1}{\lambda^{2}}\big{)},$ $\langle\varphi_{a_{j},\lambda_{j}},\varphi_{a_{i},\lambda_{i}}\rangle_{g}=64\pi^{2}\,G(a_{j},a_{i})\,+\,32\pi^{2}\frac{\ln\lambda_{j}}{\lambda_{j}^{2}}\,+\,32\pi^{2}\frac{\ln\lambda_{i}}{\lambda_{i}^{2}}\,+\,O\big{(}\frac{1}{\lambda_{j}^{2}}+\frac{1}{\lambda_{i}^{2}}\big{)},$ $\langle\varphi_{a_{j},\lambda_{j}},\lambda_{i}\frac{\partial\varphi_{a_{i},\lambda_{i}}}{\partial\lambda_{i}}\rangle_{g}=-64\pi^{2}\frac{\ln\lambda_{i}}{\lambda_{i}^{2}}+O\big{(}\frac{1}{\lambda_{i}^{2}}+\frac{1}{\lambda_{j}^{2}}\big{)}.$ Proof. 
Since $\int_{\Sigma}\varphi_{a,\lambda}dV_{g}=0$, using Lemma 7.1 and (48), we get $\displaystyle\|\varphi_{a,\lambda}\|^{2}$ $\displaystyle=\int_{\Sigma}e^{u_{a}}e^{\delta_{a,\lambda}}\big{(}\delta_{a,\lambda}\,+\,\ln\big{(}\frac{\lambda^{2}}{8}\big{)}\,+\,8\pi\,H(a,.)\,+\,4\pi\frac{\ln\lambda}{\lambda^{2}}\,+\,O\big{(}\frac{1}{\lambda^{2}}\big{)}\big{)}dV_{g}$ $\displaystyle=32\pi^{2}\frac{\ln\lambda}{\lambda^{2}}\,+\,O\big{(}\frac{1}{\lambda^{2}}\big{)}+2\int_{\Sigma}e^{u_{a}}e^{\delta_{a,\lambda}}\ln\big{(}\frac{\lambda^{2}\psi_{a}^{2}}{1+\lambda^{2}\psi_{a}^{2}}\big{)}dV_{g}+8\pi\int_{\Sigma}e^{u_{a}}e^{\delta_{a,\lambda}}G(a,.)dV_{g}$ Note that (using the fact that $\int_{\Sigma}G(a,.)dV_{g}=0$) $8\pi\int_{\Sigma}e^{u_{a}}e^{\delta_{a,\lambda}}G(a,.)dV_{g}=8\pi\varphi_{a,\lambda}(a)=32\pi\ln\lambda+64\pi^{2}H(a,a)+32\pi^{2}\frac{\ln\lambda}{\lambda^{2}}+O\big{(}\frac{1}{\lambda^{2}}\big{)}.$ Furthermore, we have $\displaystyle\int_{\Sigma}e^{u_{a}}e^{\delta_{a,\lambda}}\ln\big{(}\frac{\lambda^{2}\psi_{a}^{2}}{1+\lambda^{2}\psi_{a}^{2}}\big{)}dV_{g}$ $\displaystyle=\int_{B_{a}(\eta)}e^{\delta_{a,\lambda}}\ln\big{(}\frac{\lambda^{2}|x-a|^{2}}{1+\lambda^{2}|x-a|^{2}}\big{)}dV_{g_{a}}+O\big{(}\frac{1}{\lambda^{4}}\big{)}$ $\displaystyle=\int_{B(0,\eta)}\frac{8\lambda^{2}}{(1+\lambda^{2}|x|^{2})^{2}}\ln\big{(}\frac{\lambda^{2}|x|^{2}}{1+\lambda^{2}|x|^{2}}\big{)}dx+O\big{(}\frac{1}{\lambda^{4}}\big{)}$ $\displaystyle=-8\pi+O\big{(}\frac{1}{\lambda^{2}}\big{)}.$ Hence the proof of the first claim follows. 
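The value $-8\pi$ above comes from the substitution $t=r^{2}/(1+r^{2})$ in the radial integral (a sketch, with $r=\lambda|x|$, so that $dt=2r(1+r^{2})^{-2}dr$):

```latex
\int_{B(0,\eta)}\frac{8\lambda^{2}}{(1+\lambda^{2}|x|^{2})^{2}}\ln\Big(\frac{\lambda^{2}|x|^{2}}{1+\lambda^{2}|x|^{2}}\Big)dx
=8\pi\int_{0}^{t_{\eta}}\ln t\,dt
=8\pi\big[t\ln t-t\big]_{0}^{t_{\eta}}
=-8\pi+O\Big(\frac{1}{\lambda^{2}}\Big),
```

where $t_{\eta}=\lambda^{2}\eta^{2}/(1+\lambda^{2}\eta^{2})=1-O(1/\lambda^{2})$.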
Concerning the second one, using the fact that $\int_{\Sigma}\lambda\frac{\partial\varphi_{a,\lambda}}{\partial\lambda}dV_{g}=0$, we have $\displaystyle\langle\varphi_{a,\lambda},\lambda\frac{\partial\varphi_{a,\lambda}}{\partial\lambda}\rangle_{g}$ $\displaystyle=\int_{\Sigma}e^{u_{a}}e^{\delta_{a,\lambda}}\bigg{(}\frac{4}{1+\lambda^{2}\psi_{a}^{2}(x)}-8\pi\frac{\ln\lambda}{\lambda^{2}}+O\big{(}\frac{1}{\lambda^{2}}\big{)}\bigg{)}dV_{g}$ $\displaystyle=-64\pi^{2}\frac{\ln\lambda}{\lambda^{2}}+O\big{(}\frac{1}{\lambda^{2}}\big{)}+\int_{\Sigma}e^{u_{a}}e^{\delta_{a,\lambda}}\frac{4}{1+\lambda^{2}\psi_{a}^{2}(x)}dV_{g}.$ Observe that, we have $\int_{\Sigma}e^{u_{a}}e^{\delta_{a,\lambda}}\frac{4}{1+\lambda^{2}\psi_{a}^{2}(x)}dV_{g}=\int_{B(0,\eta)}\frac{32\lambda^{2}}{(1+\lambda^{2}|x|^{2})^{3}}dx+O\big{(}\frac{1}{\lambda^{4}}\big{)}=16\pi+O\big{(}\frac{1}{\lambda^{4}}\big{)}.$ Thus the proof of the second claim follows. Now, we will focus on the third claim. Using $\int_{\Sigma}|\varphi_{a,\lambda}|=O(1)$ and Lemma 7.1, it holds $\displaystyle\langle\varphi_{a_{j},\lambda_{j}},\varphi_{a_{i},\lambda_{i}}\rangle_{g}$ $\displaystyle=\int_{B_{a_{j}}(\eta)}e^{u_{a_{j}}}e^{\delta_{a_{j},\lambda_{j}}}\bigg{(}8\pi\,G(a_{i},.)\,+\,4\pi\frac{\ln\lambda_{i}}{\lambda_{i}^{2}}\,+\,O\big{(}\frac{1}{\lambda_{i}^{2}}\big{)}\bigg{)}dV_{g}+O\big{(}\frac{1}{\lambda_{j}^{2}}\big{)}$ $\displaystyle=32\pi^{2}\frac{\ln\lambda_{i}}{\lambda_{i}^{2}}+O\big{(}\frac{1}{\lambda_{j}^{2}}+\frac{1}{\lambda_{i}^{2}}\big{)}+8\pi\int_{B_{a_{j}}(\eta)}e^{u_{a_{j}}}e^{\delta_{a_{j},\lambda_{j}}}G(a_{i},.)dV_{g}.$ Observe that (using $\int_{\Sigma}G(a_{i},.)dV_{g}=0$) $\displaystyle\int_{B_{a_{j}}(\eta)}e^{u_{a_{j}}}e^{\delta_{a_{j},\lambda_{j}}}G(a_{i},.)dV_{g}$ $\displaystyle=\int_{\Sigma}e^{u_{a_{j}}}e^{\delta_{a_{j},\lambda_{j}}}G(a_{i},.)dV_{g}+O\big{(}\frac{1}{\lambda_{j}^{2}}\big{)}$ 
$\displaystyle=\varphi_{a_{j},\lambda_{j}}(a_{i})+O\big{(}\frac{1}{\lambda_{j}^{2}}\big{)}=8\pi\,G(a_{j},a_{i})\,+\,4\pi\frac{\ln\lambda_{j}}{\lambda_{j}^{2}}\,+\,O\big{(}\frac{1}{\lambda_{j}^{2}}\big{)}.$ Hence the proof of this claim follows. Concerning the last one, it holds $\displaystyle\langle\varphi_{a_{j},\lambda_{j}},\lambda_{i}\frac{\partial\varphi_{a_{i},\lambda_{i}}}{\partial\lambda_{i}}\rangle_{g}$ $\displaystyle=\int_{B_{a_{j}}(\eta)}e^{u_{a_{j}}}e^{\delta_{a_{j},\lambda_{j}}}\bigg{(}-8\pi\frac{\ln\lambda_{i}}{\lambda_{i}^{2}}+O\big{(}\frac{1}{\lambda_{i}^{2}}\big{)}\bigg{)}dV_{g}+O\big{(}\frac{1}{\lambda_{j}^{2}}\big{)}$ $\displaystyle=-64\pi^{2}\frac{\ln\lambda_{i}}{\lambda_{i}^{2}}+O\big{(}\frac{1}{\lambda_{j}^{2}}+\frac{1}{\lambda_{i}^{2}}\big{)}.$ Thereby the proof of this lemma follows. From Lemma 7.1, it is easy to get the following expansion ###### Lemma 7.3 Let ${u}:=\sum_{j=1}^{m}\alpha_{j}\varphi_{a_{j},\lambda_{j}}$. In $B_{a_{i}}(\eta)$, it holds $\displaystyle Ke^{u}$ $\displaystyle=\frac{\lambda_{i}^{4\alpha_{i}}\mathcal{F}_{i}^{A}g_{i}^{A}}{(1+\lambda_{i}^{2}|y_{a_{i}}(.)|^{2})^{2\alpha_{i}}}\Big{(}1+4\pi\sum_{j=1}^{m}\frac{\ln\lambda_{j}}{\lambda_{j}^{2}}+O\big{(}\frac{1}{\lambda^{2}}+\sum|\alpha_{j}-1|\frac{\ln\lambda_{j}}{\lambda_{j}^{2}}\big{)}\Big{)}$ $\displaystyle=\frac{\lambda_{i}^{4\alpha_{i}}\mathcal{F}_{i}^{A}g_{i}^{A}}{(1+\lambda_{i}^{2}|y_{a_{i}}(.)|^{2})^{2\alpha_{i}}}\Big{(}1+4\pi\sum_{j=1}^{m}\frac{\ln\lambda_{j}}{\lambda_{j}^{2}}\Big{)}+O\bigg{(}\Big{(}\frac{1}{\lambda^{2}}+\sum|\alpha_{j}-1|\frac{\ln\lambda_{j}}{\lambda_{j}^{2}}\Big{)}Ke^{u}\bigg{)},$ where the functions $\mathcal{F}^{A}_{i}$ (with ${A}=(a_{1},\cdots,a_{m})$) and $g_{i}^{A}$ are defined as follows (49) $\left\\{\begin{array}[]{cc}\mathcal{F}^{A}_{i}(x):=K(x)\exp\Big{(}8\pi H(a_{i},x)+8\pi\sum_{j\neq i}G(a_{j},x)\Big{)},\vspace{1mm}\\\ g_{i}^{A}(x):=\exp\Big{(}8\pi(\alpha_{i}-1)H(a_{i},x)+8\pi\sum_{j\neq 
i}(\alpha_{j}-1)G(a_{j},x)\Big{)}.\end{array}\right.$ ###### Lemma 7.4 Let $u:=\sum_{i=1}^{m}\alpha_{i}\varphi_{i}\in V(m,\varepsilon)$. Then, (50) $\displaystyle\int_{\Sigma}Ke^{u}dV_{g}=\pi\sum_{i=1}^{m}\frac{\lambda_{i}^{4\alpha_{i}-2}}{2\alpha_{i}-1}(\mathcal{F}_{i}^{A}g_{i}^{A})(a_{i})+O\Big{(}1+\sum_{k=1}^{m}\lambda_{k}^{4\alpha_{k}-2}\Big{(}|\alpha_{k}-1|^{2}+\frac{\ln\lambda_{k}}{\lambda_{k}^{2}}\Big{)}\Big{)}.$ If $\sum_{i=1}^{m}|\alpha_{i}-1|\ln\lambda_{i}=o_{\varepsilon}(1)$, the expansion (50) can be improved as follows (51) $\displaystyle\int_{\Sigma}Ke^{u}dV_{g}$ $\displaystyle=$ $\displaystyle\pi\sum_{i=1}^{m}\frac{\lambda_{i}^{4\alpha_{i}-2}}{2\alpha_{i}-1}(\mathcal{F}_{i}^{A}g_{i}^{A})(a_{i})+\frac{\pi}{2}\sum_{i=1}^{m}\Big{(}\Delta\mathcal{F}_{i}^{A}(a_{i})-2K_{g}(a_{i})\mathcal{F}_{i}^{A}(a_{i})\Big{)}\ln\lambda_{i}$ $\displaystyle+4\pi^{2}\Big{(}\sum_{j=1}^{m}\frac{\ln\lambda_{j}}{\lambda_{j}^{2}}\Big{)}\sum_{i=1}^{m}\lambda_{i}^{4\alpha_{i}-2}\mathcal{F}_{i}^{A}(a_{i})+O\Big{(}1+\sum_{k=1}^{m}|\alpha_{k}-1|\ln^{2}\lambda_{k}\Big{)}.$ Proof. Let $B_{i}:=B_{a_{i}}(\eta)$. Firstly, note that Lemma 7.1 shows that $e^{u}$ is bounded on $\Sigma\setminus(\cup B_{i})$. Hence the integral in this set is bounded. 
Now, using Lemma 7.3, we have $\displaystyle\int_{B_{i}}Ke^{u}dV_{g}$ $\displaystyle=\int_{B_{i}}\frac{\lambda_{i}^{4\alpha_{i}}\mathcal{F}^{A}_{i}g_{i}^{A}}{(1+\lambda_{i}^{2}|y_{a_{i}}|^{2})^{2\alpha_{i}}}\Big{(}1+4\pi\sum_{j=1}^{m}\frac{\ln\lambda_{j}}{\lambda_{j}^{2}}\Big{)}dV_{g}+O\bigg{(}\big{(}\frac{1}{\lambda^{2}}+\sum|\alpha_{j}-1|\frac{\ln\lambda_{j}}{\lambda_{j}^{2}}\big{)}\int_{B_{i}}Ke^{u}dV_{g}\bigg{)}$ $\displaystyle=\Big{(}1+4\pi\sum_{j=1}^{m}\frac{\ln\lambda_{j}}{\lambda_{j}^{2}}\Big{)}\int_{B_{i}}\frac{\lambda_{i}^{4\alpha_{i}}\mathcal{F}^{A}_{i}g_{i}^{A}e^{-u_{a_{i}}}}{(1+\lambda_{i}^{2}|y_{a_{i}}(x)|^{2})^{2\alpha_{i}}}dV_{g_{a_{i}}}+O\bigg{(}\big{(}\frac{1}{\lambda^{2}}+\sum|\alpha_{j}-1|\frac{\ln\lambda_{j}}{\lambda_{j}^{2}}\big{)}\int_{\Sigma}Ke^{u}dV_{g}\bigg{)}.$ Finally, we have (52) $\displaystyle\int_{B_{i}}\frac{\lambda_{i}^{4\alpha_{i}}\mathcal{F}^{A}_{i}g_{i}^{A}e^{-u_{a_{i}}}}{(1+\lambda_{i}^{2}|y_{a_{i}}(x)|^{2})^{2\alpha_{i}}}dV_{g_{a_{i}}}$ $\displaystyle=\lambda_{i}^{4\alpha_{i}-2}(\mathcal{F}_{i}^{A}g_{i}^{A}e^{-u_{a_{i}}})(a_{i})\int_{B_{i}}\frac{\lambda_{i}^{2}}{(1+\lambda_{i}^{2}|y_{a_{i}}(x)|^{2})^{2\alpha_{i}}}dV_{g_{a_{i}}}$ $\displaystyle+\lambda_{i}^{4\alpha_{i}-2}O\left(\int_{B_{i}}\frac{\lambda_{i}^{2}|y_{a_{i}}(x)|^{2}}{(1+\lambda_{i}^{2}|y_{a_{i}}(x)|^{2})^{2\alpha_{i}}}dV_{g_{a_{i}}}\right).$ Note that ${u_{a_{i}}}(a_{i})=0$. Furthermore, easy computations imply (53) $\int_{B(0,\eta)}\frac{\lambda_{i}^{2}}{(1+\lambda_{i}^{2}|x|^{2})^{2\alpha_{i}}}dx=\frac{\pi}{2\alpha_{i}-1}+O\Big{(}\frac{1}{\lambda_{i}^{4\alpha_{i}-2}}\Big{)}.$ To estimate the other integral, we introduce the following function (54) $\xi(x)=\frac{1}{(1+\lambda^{2}|x|^{2})^{2(\alpha-1)}}.$ If $(\alpha-1)\ln\lambda$ is small, one obtains that $\xi=1+o(1)$ uniformly on $B(0,\eta)$. 
In general, we have (55) $|\xi(x)-1|=|\xi(x)-\xi(0)|\,=\,\Big{|}\int_{0}^{1}\frac{4(1-\alpha)\lambda^{2}t|x|^{2}}{(1+\lambda^{2}t^{2}|x|^{2})^{2\alpha-1}}dt\Big{|}\,\leq\,c|\alpha-1|\sqrt{\lambda|x|}.$ Now, using (55), we obtain $\displaystyle\int_{B(0,\eta)}\frac{\lambda_{i}^{2}|x|^{2}}{(1+\lambda_{i}^{2}|x|^{2})^{2\alpha_{i}}}dx$ $\displaystyle=\int_{B(0,\eta)}\frac{\lambda_{i}^{2}|x|^{2}}{(1+\lambda_{i}^{2}|x|^{2})^{2}}dx+\int_{B(0,\eta)}\frac{\lambda_{i}^{2}|x|^{2}}{(1+\lambda_{i}^{2}|x|^{2})^{2}}(\xi_{i}-1)dx$ $\displaystyle=O\Big{(}\frac{\ln\lambda_{i}}{\lambda_{i}^{2}}\Big{)}+O\Big{(}\frac{|\alpha_{i}-1|}{\lambda_{i}^{3/2}}\Big{)}.$ Hence, the proof of (50) follows. Now, we will focus on (51). In this case, we have $|\alpha_{i}-1|\ln\lambda_{i}$ is small for each $i$ and we need to improve the estimate of (52). Using (53), we obtain $\displaystyle\int_{B_{i}}$ $\displaystyle\frac{\lambda_{i}^{4\alpha_{i}}\mathcal{F}^{A}_{i}g_{i}^{A}e^{-u_{a_{i}}}}{(1+\lambda_{i}^{2}|y_{a_{i}}(x)|^{2})^{2\alpha_{i}}}dV_{g_{a_{i}}}=\frac{\pi}{2\alpha_{i}-1}\lambda_{i}^{4\alpha_{i}-2}(\mathcal{F}_{i}^{A}g_{i}^{A})(a_{i})+O(1)$ $\displaystyle+\frac{1}{4}\Delta_{g_{a_{i}}}(\mathcal{F}_{i}^{A}g_{i}^{A}e^{-u_{a_{i}}})(a_{i})\int_{B(0,\eta)}\frac{\lambda_{i}^{4\alpha_{i}}|x|^{2}}{(1+\lambda_{i}^{2}|x|^{2})^{2\alpha_{i}}}dx+O\Big{(}\int_{B(0,\eta)}\frac{\lambda_{i}^{4\alpha_{i}}|x|^{3}}{(1+\lambda_{i}^{2}|x|^{2})^{2\alpha_{i}}}dx\Big{)}.$ Observe that $\displaystyle\int_{B(0,\eta)}\frac{\lambda_{i}^{4\alpha_{i}}|x|^{3}}{(1+\lambda_{i}^{2}|x|^{2})^{2\alpha_{i}}}dx=\lambda_{i}^{4\alpha_{i}-5}2\pi\int_{0}^{\lambda_{i}\eta}.\frac{r^{4}}{(1+r^{2})^{2\alpha_{i}}}dr=O(1),$ $\displaystyle\int_{B(0,\eta)}\frac{\lambda_{i}^{4\alpha_{i}}|x|^{2}}{(1+\lambda_{i}^{2}|x|^{2})^{2\alpha_{i}}}dx=2\pi\ln\lambda_{i}+O(1+|\alpha_{i}-1|\ln^{2}\lambda_{i})\quad(\mbox{by using }|\alpha_{i}-1|\ln\lambda_{i}\mbox{ is small}).$ To conclude, we need the following information. 
We know that the function $\widetilde{u}_{a}(x):=u_{a}(y_{a}(x))$ (for $x\in B(a,\eta)\subset\mathbb{R}^{2}$) satisfies $-\Delta\widetilde{u}_{a}=-2K_{g}(y_{a}^{-1}(.))e^{-\widetilde{u}_{a}}\quad\mbox{ in }B(a,\eta)$ and therefore we derive that $\Delta_{g_{a_{i}}}(\mathcal{F}_{i}^{A}g_{i}^{A}e^{-u_{a_{i}}})(a_{i})=\Delta_{g_{a_{i}}}\mathcal{F}_{i}^{A}(a_{i})-2\mathcal{F}_{i}^{A}(a_{i})K_{g}(a_{i})+O(\sum|\alpha_{j}-1|),$ by using the fact that $g_{i}^{A}(a_{i})=1+O\Big{(}\sum|\alpha_{j}-1|\Big{)},\quad|\nabla g_{i}^{A}(a_{i})|=O\Big{(}\sum|\alpha_{j}-1|\Big{)},\quad|\Delta g_{i}^{A}(a_{i})|=O\Big{(}\sum|\alpha_{j}-1|\Big{)}.$ Summing the above estimates, the result follows. ###### Lemma 7.5 Let $A$ and $\Lambda$ satisfy the balancing condition (19). Then the approximate solution $U_{A,\Lambda}$ defined in (18) satisfies the following equation $-\Delta_{g}U_{A,\Lambda}\,+\,8\pi m\,=\,8\pi m\frac{Ke^{U_{A,\Lambda}}}{\int_{\Sigma}Ke^{U_{A,\Lambda}}}\,+\,f_{A,\Lambda},\quad\mbox{ where }$ $f_{A,\Lambda}\rightharpoonup 0\mbox{ weakly in }H^{1}(\Sigma)\mbox{ as }\lambda_{i}\to\infty\,\forall\,i=1,\cdots,m.$ Proof. 
We first notice that according to Equation (17) satisfied by $\varphi_{a,\lambda}$, using (48), the function $U_{A,\Lambda}$ satisfies the equation (56) $-\Delta_{g}U_{A,\Lambda}\,+\,8\pi m\,+\,O(\sum_{i=1}^{m}\frac{1}{\lambda_{i}^{2}})=\,\sum_{i=1}^{m}e^{\delta_{a_{i},\lambda_{i}}+u_{a_{i}}}.$ Moreover it follows from Lemmas 7.1 and 7.4 that in $\Sigma\setminus\bigcup_{i=1}^{m}B_{\eta}(a_{i})$ we have that $8\pi m\frac{Ke^{U_{A,\Lambda}}}{\int_{\Sigma}Ke^{U_{A,\Lambda}}}\,=\,O(\frac{\ln\lambda}{\lambda^{2}}).$ Furthermore it follows from Lemmas 7.3 and 7.4 that in $B_{\eta}(a_{i})$ there hold $Ke^{U_{A,\Lambda}}\,=\,\frac{1}{8}\lambda_{i}^{2}\mathcal{F}_{i}^{A}(a_{i})e^{\delta_{a_{i},\lambda_{i}}}(1+O(\frac{\ln\lambda}{\lambda^{2}}))\quad\mbox{ and }\quad\int_{\Sigma}Ke^{U_{A,\Lambda}}\,=\,\pi\sum_{i=1}^{m}\lambda_{i}^{2}\mathcal{F}_{i}^{A}(a_{i})(1+O(\frac{\ln\lambda}{\lambda^{2}})).$ Therefore in $B_{\eta}(a_{i})$ we have that $8\pi m\frac{Ke^{U_{A,\Lambda}}}{\int_{\Sigma}Ke^{U_{A,\Lambda}}}\,=\,\frac{m\lambda_{i}^{2}\mathcal{F}^{A}_{i}(a_{i})e^{\delta_{a_{i},\lambda_{i}}}}{\sum_{j=1}^{m}\lambda_{j}^{2}\mathcal{F}^{A}_{j}(a_{j})}(1+O(\frac{\ln\lambda}{\lambda^{2}})).$ Hence we derive that $\sum_{j=1}^{m}e^{\delta_{a_{j},\lambda_{j}}+u_{a_{j}}}\,-\,8\pi m\frac{Ke^{U_{A,\Lambda}}}{\int_{\Sigma}Ke^{U_{A,\Lambda}}}\,=e^{\delta_{a_{i},\lambda_{i}}}(\tilde{\tau}_{i}+O(|x-a_{i}|^{2}\frac{\ln\lambda}{\lambda^{2}}))\quad\mbox{ in }B_{\eta}(a_{i}),$ where $\tilde{\tau}_{i}$ is defined in Proposition 5.1 by taking $\alpha_{k}=1$ for each $k$. We notice that it follows from the balancing condition (19) that $\forall i=1,\cdots,m$ we have that $|\tilde{\tau}_{i}|=o_{\lambda}(1)$. 
Summarizing we have that $U_{A,\Lambda}$ satisfies the equation $\displaystyle-\Delta_{g}U_{A,\Lambda}\,+\,8\pi m\,=\,8\pi m\frac{Ke^{U_{A,\Lambda}}}{\int_{\Sigma}Ke^{U_{A,\Lambda}}}\,+\,f_{A,\Lambda},\quad\mbox{ where }$ $\displaystyle f_{A,\Lambda}\,=\,\begin{cases}O({\ln\lambda}/{\lambda^{2}})\mbox{ in }\Sigma\setminus\bigcup_{i=1}^{m}B_{\eta}(a_{i})\\\ e^{\delta_{a_{i},\lambda_{i}}}(\tilde{\tau}_{i}+O({|x-a_{i}|^{2}+\ln\lambda_{i}}/{\lambda_{i}^{2}}))\mbox{ in }B_{\eta}(a_{i}).\end{cases}$ Hence the Lemma follows. Lastly for the sake of completeness we provide the following characterization of approximate blowing up solutions of Equation $(MF)$. Namely we prove: ###### Proposition 7.6 Let $(\Sigma,g)$ be a closed surface of unit volume and $u_{k}$ be a blowing up solution of $-\Delta_{g}u_{k}\,=\,\varrho_{k}(\frac{Ke^{u_{k}}}{\int_{\Sigma}Ke^{u_{k}}}\,-1)\quad\mbox{ in }\Sigma;\qquad\int_{\Sigma}u_{k}dV_{g}=0,$ where $\varrho_{k}\to 8\pi m,\,m\in\mathbb{N}$. Then for $\varepsilon$ small and $k$ large we have that $u_{k}\in V(m,\varepsilon)$. Proof. We set $\tilde{u}_{k}:=u_{k}\,-\,\ln(\int_{\Sigma}Ke^{u_{k}})$ and observe that $\tilde{u}_{k}$ satisfies the equation $-\Delta_{g}\tilde{u}_{k}\,=\,\varrho_{k}(Ke^{\tilde{u}_{k}}-1)\,\mbox{ in }\Sigma.$ Now it follows from the refined blow up analysis performed in [23, 12, 13] that $\tilde{u}_{k}$ blows up at $m$-points $a_{1},\cdots,a_{m}\in\Sigma$ with comparable blow up rates $\lambda_{i}:=e^{u_{k}(a_{i})}$ such that $d_{g}(a_{i},a_{j})\geq 2\eta$ for some $\eta>0$. 
Moreover we have that $e^{\tilde{u}_{k}}\,=\,O(\sum_{i=1}^{m}\frac{1}{\lambda_{i}^{2}})\,\mbox{ in }\Sigma\setminus\bigcup_{i=1}^{m}B_{\eta}(a_{i})$ and inside the balls $B_{i}:=B_{\eta}(a_{i})$ there holds (See Theorem 1.4 in [12]) $\tilde{u}_{k}\,+\,\ln(\varrho_{k}K(a_{i}))\,=\,{\delta_{a_{i},\lambda_{i}}}\,+O(d_{g}(a_{i},x)).$ Next we set $\tilde{v}_{k}:=\tilde{u}_{k}\,-\,\sum_{i=1}^{m}\varphi_{a_{i},\lambda_{i}}$ and we notice that $\tilde{v}_{k}$ satisfies the equation (57) $-\Delta_{g}\tilde{v}_{k}\,+\,\varrho_{k}-8m\pi\,=\varrho_{k}Ke^{\tilde{u}_{k}}-\sum_{i=1}^{m}e^{\delta_{a_{i},\lambda_{i}}+u_{a_{i}}}+O(\frac{1}{\lambda^{2}}):=f_{k}.$ Next we claim that: Claim: For $1<p<2$ there exists a constant $C>0$ such that $\|f_{k}\|_{L^{p}}\,\leq\,C\sum_{i=1}^{m}\frac{1}{\lambda_{i}^{(2/p)-1}}.$ Indeed we have that $\int_{\Sigma}|f_{k}|^{p}dV_{g}\,=\,\sum_{i=1}^{m}\int_{B_{i}}|f_{k}|^{p}dV_{g}\,+\,O(\sum_{i=1}^{m}\frac{1}{\lambda_{i}^{2p}}).$ Furthermore in $B_{i}$ we have, in geodesic local coordinates around $a_{i}$, that $\displaystyle f_{k}$ $\displaystyle=e^{\tilde{u}_{k}+\ln(\varrho_{k}K)}\,-\sum_{i=1}^{m}e^{\delta_{a_{i},\lambda_{i}}+u_{a_{i}}}+O(\frac{1}{\lambda^{2}})$ $\displaystyle=e^{\delta_{a_{i},\lambda_{i}}+\ln(\varrho_{k}K)-\ln(\varrho_{k}K(a_{i})+O(|x-a_{i}|))}-e^{\delta_{a_{i},\lambda_{i}}+u_{a_{i}}}+O(\frac{1}{\lambda^{2}})$ (58) $\displaystyle=O(|x-a_{i}|e^{\delta_{a_{i},\lambda_{i}}}+\frac{1}{\lambda^{2}}).$ Hence for $1<p<2$ and a constant $C>0$ there holds $\int_{\Sigma}|f_{k}|^{p}dV_{g}\,\leq\,C\,\sum_{i=1}^{m}\frac{1}{\lambda_{i}^{2-p}}.$ Next setting $v_{k}:=\tilde{v}_{k}\,+\,\ln(\int_{\Sigma}Ke^{u_{k}})\,=\,u_{k}-\sum_{i=1}^{m}\varphi_{a_{i},\lambda_{i}}$ we have that $v_{k}$ satisfies the equation $-\Delta_{g}\,v_{k}+\,\varrho_{k}-8\pi m\,=\,f_{k}\,\mbox{ in }\Sigma;\,\int_{\Sigma}v_{k}dV_{g}\,=\,0,$ where $f_{k}\in L^{p}(\Sigma)$ for $p\in(1,2)$. 
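The passage from the pointwise bound (58) to the claimed $L^{p}$ estimate is the usual scaling computation. Assuming the standard bubble normalization $e^{\delta_{a_{i},\lambda_{i}}}\leq C\lambda_{i}^{2}(1+\lambda_{i}^{2}|x-a_{i}|^{2})^{-2}$ (the precise definition of $\delta_{a,\lambda}$ is given earlier in the paper), one gets, for $1<p<2$,

```latex
\int_{B_{i}}\big(|x-a_{i}|\,e^{\delta_{a_{i},\lambda_{i}}}\big)^{p}dx
\,\leq\,C\,\lambda_{i}^{2p}\int_{0}^{\eta}\frac{r^{p+1}}{(1+\lambda_{i}^{2}r^{2})^{2p}}\,dr
\,=\,C\,\lambda_{i}^{p-2}\int_{0}^{\lambda_{i}\eta}\frac{s^{p+1}}{(1+s^{2})^{2p}}\,ds
\,=\,O\Big(\frac{1}{\lambda_{i}^{2-p}}\Big),
```

since the last integral converges: the integrand decays like $s^{1-3p}$ and $3p-1>2$ for $p>1$.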
Hence it follows from the Calderón-Zygmund a priori estimate that $\|v_{k}\|_{W^{2,p}}\,\leq\,C\,\|f_{k}\|_{L^{p}}\,\leq\,C\,\sum_{i=1}^{m}\frac{1}{\lambda_{i}^{(2/p)-1}}.$ Therefore it follows from the Sobolev embedding $W^{2,p}(\Sigma)\hookrightarrow W^{1,2}(\Sigma)$ that $\|v_{k}\|_{H^{1}}\,\leq\,C\,\sum_{i=1}^{m}\frac{1}{\lambda_{i}^{(2/p)-1}}.$ Hence choosing $p=\frac{4}{3}$ we obtain that $\|u_{k}\,-\,\sum_{i=1}^{m}\varphi_{a_{i},\lambda_{i}}\|\,\leq\,C\,\sum_{i=1}^{m}\frac{1}{\sqrt{\lambda_{i}}}.$ Therefore the proposition is fully proven. ## References * [1] Ahmedou, M.; Ben Ayed, M.; Lucia, M. _On a resonant mean field type equation: a "critical point at infinity" approach,_ Discrete Contin. Dyn. Syst. 37 (2017), no. 4, 1789–1818. * [2] Ahmedou, M.; Pistoia, A. _On the supercritical mean field equation on pierced domains._ Proc. Amer. Math. Soc. 143 (2015), 3969–3984. * [3] Bahri, A. _Critical points at infinity in some variational problems._ Research Notes in Mathematics, 182, Longman-Pitman, London, 1989. * [4] Bahri, A.; Coron, J.-M. _On a nonlinear elliptic equation involving the critical Sobolev exponent: the effect of the topology of the domain_, Comm. Pure Appl. Math. 41 (1988), 253–294. * [5] Bahri, A.; Coron, J.-M. _The scalar-curvature problem on the standard three-dimensional sphere_, J. Funct. Anal. 95 (1991), 106–172. * [6] Bahri, A. _An invariant for Yamabe-type flows with applications to scalar-curvature problems in high dimension_, Duke Math. J. 81 (1996), 323–466. * [7] Brézis, H.; Merle, F. _Uniform estimates and blow-up behavior for solutions of $-\Delta u=V(x)e^{u}$ in two dimensions_, Comm. Partial Differential Equations 16 (1991), 1223–1253. * [8] Caglioti, E.; Lions, P.-L.; Marchioro, C.; Pulvirenti, M. _A special class of stationary flows for two dimensional Euler equations: a statistical mechanics description_, Comm. Math. Phys. 143 (1992), 501–525. * [9] Caglioti, E.; Lions, P.-L.; Marchioro, C.; Pulvirenti, M. 
_A special class of stationary flows for two dimensional Euler equations: a statistical mechanics description, Part II_, Comm. Math. Phys. 174 (1995), 229–260. * [10] Chang, A.; Yang, P. _A perturbation result in prescribing scalar curvature on $\mathbb{S}^{n}$._ Duke Math. J. 64 (1991), 27–69. * [11] Chang, A.; Gursky, M.; Yang, P. _The scalar curvature equation on $2$- and $3$-spheres._ Calc. Var. Partial Differential Equations 1 (1993), no. 2, 205–229. * [12] Chen, C.-C.; Lin, C.-S. _Sharp estimates for solutions of multi-bubbles in compact Riemann surfaces_, Comm. Pure Appl. Math. 55 (2002), 728–771. * [13] Chen, C.-C.; Lin, C.-S. _Topological degree for a mean field equation on Riemann surfaces_, Comm. Pure Appl. Math. 56 (2003), 1667–1727. * [14] Ding, W.; Jost, J.; Li, J.; Wang, G. _Existence results for mean field equations_, Ann. Inst. Henri Poincaré Anal. Non Linéaire 16 (1999), 653–666. * [15] Djadli, Z. _Existence result for the mean field problem on Riemann surfaces of all genus_, Commun. Contemp. Math. 10 (2008), 205–220. * [16] Djadli, Z.; Malchiodi, A. _Existence of conformal metrics with constant $Q$-curvature_, Ann. of Math. (2) 168 (2008), 813–858. * [17] De Marchis, F. _Generic multiplicity for a scalar field equation on compact surfaces,_ J. Funct. Anal. 259 (2010), 2165–2192. * [18] Kallel, S.; Karoui, R. _Symmetric joins and weighted barycenters_, Adv. Nonlinear Stud. 11 (2011), no. 1, 117–143. * [19] Han, Z.-C. _Prescribing Gaussian curvature on $\mathbb{S}^{2}$._ Duke Math. J. 61 (1990), 679–703. * [20] Horák, J.; Lucia, M. _A minimax theorem in the presence of unbounded Palais-Smale sequences._ Israel J. Math. 172 (2009), 125–143. * [21] Kiessling, M. K. H. _Statistical mechanics of classical particles with logarithmic interactions_, Comm. Pure Appl. Math. 46 (1993), 27–56. * [22] Li, Y. Y. _Harnack type inequality: the method of moving planes_, Comm. Math. Phys. 200 (1999), 421–444. * [23] Li, Y. Y.; Shafrir, I. 
_Blow-up analysis for solutions of $-\Delta u=Ve^{u}$ in dimension two_, Indiana Univ. Math. J. 43 (1994), 1255–1270. * [24] Lions, P.-L. _The concentration-compactness principle in the calculus of variations. The limit case, Part I._ Rev. Mat. Iberoamericana 1 (1985), 145–201. * [25] Lucia, M. _A deformation lemma with an application to a mean field equation._ Topol. Methods Nonlinear Anal. 30 (2007), no. 1, 113–138. * [26] Malchiodi, A. _Morse theory and a scalar field equation on compact surfaces_, Adv. Differential Equations 13 (2008), 1109–1129. * [27] Saut, J.-M.; Temam, R. _Generic properties of nonlinear boundary value problems,_ Comm. Partial Differential Equations 4 (1979), 293–319. * [28] Struwe, M. _A global compactness result for elliptic boundary value problems involving limiting nonlinearities._ Math. Z. 187 (1984), 511–517. * [29] Struwe, M.; Tarantello, G. _On multivortex solutions in Chern-Simons gauge theory_, Boll. Unione Mat. Ital. Sez. B Artic. Ric. Mat. (8) 1 (1998), 109–121. * [30] Tarantello, G. _Self-dual gauge field vortices. An analytical approach._ Progress in Nonlinear Differential Equations and their Applications, 72. Birkhäuser Boston Inc., 2008. * [31] Tarantello, G. _Analytical, geometrical and topological aspects of a class of mean field equations on surfaces,_ Discrete Contin. Dyn. Syst. 28 (2010). * [32] Yang, Y. _Solitons in field theory and nonlinear analysis._ Springer Monographs in Mathematics. Springer-Verlag, New York, 2001. * [33] Zhang, L. _Blow up solutions of some nonlinear elliptic equation involving exponential nonlinearities,_ Comm. Math. Phys. 268 (2006), 105–133. * [34] Zhang, L. _A priori estimates for a family of semi-linear elliptic equation involving exponential nonlinearities,_ J. Diff. Equations 247 (2009), 105–133. 
Mohameden Ahmedou
Mathematisches Institut der Justus-Liebig-Universität Giessen
Arndtsrasse 2, D-35392 Giessen, Germany
<EMAIL_ADDRESS>

Mohamed Ben Ayed
Université de Sfax, Faculté des Sciences de Sfax
Département de Mathématiques
Route de Soukra, Sfax, Tunisia
<EMAIL_ADDRESS>
# DigitalExposome: Quantifying the Urban Environment Influence on Wellbeing based on Real-Time Multi-Sensor Fusion and Deep Belief Network

Thomas Johnson <EMAIL_ADDRESS>, Eiman Kanjo <EMAIL_ADDRESS>, Kieran Woodward <EMAIL_ADDRESS>

###### Abstract

In this paper, we define the term ’DigitalExposome’ as a conceptual framework that takes us closer towards understanding the relationship between environment, personal characteristics, behaviour and wellbeing using multimodal mobile sensing technology. Specifically, we simultaneously collected (for the first time) multi-sensor data including urban environmental factors (air pollution including PM1, PM2.5, PM10, oxidised and reduced gases, NH3 and noise, plus people count in the vicinity), body reactions (physiological signals including EDA, HR, HRV, body temperature, BVP and movement) and individuals’ perceived responses (e.g. self-reported valence) in urban settings. Our users followed a pre-specified urban path and collected the data using a comprehensive sensing edge device. The data is instantly fused, time-stamped and geo-tagged at the point of collection. A range of multivariate statistical analysis techniques have been applied, including Principal Component Analysis, regression and spatial visualisations, to unravel the relationship between the variables. Results showed that EDA and Heart Rate Variability (HRV) are noticeably impacted by the level of Particulate Matter (PM) in the environment and correlate well with the environmental variables. Furthermore, we adopted a Deep Belief Network to extract features from the multimodal data feed, which outperformed a Convolutional Neural Network and achieved up to 80.8% accuracy ($\sigma$=0.001).

###### keywords: Sensor-fusion, Environment, Exposome, DigitalExposome, Machine Learning, Deep Belief Network, Wellbeing. 
††journal: Information Fusion

## 1 Introduction

Long-term exposure to urban environmental stressors such as particulate matter, gases and noise has been found to significantly impact an individual’s behaviour and psychological health [1], [2], [3]. The World Health Organisation (WHO) found that 91% of people are living in places where the air quality guidelines are not met, and that the use of non-clean fuels and household emissions into the atmosphere are causing over 4.2 million deaths each year [4]. In addition, those living in some locations in the UK have a higher risk of developing serious health conditions such as higher heart rate [1], asthma and cardio-cerebrovascular disease [5], where a lifetime of exposure to high levels of pollution can result in reduced life expectancy [6]. Recent developments in urban sensing and the Internet of Things (IoT) have created the possibility of utilising environmental and on-body sensing tools to monitor the environment and its impact on individuals [1], [5]. Sensor-based technologies are becoming increasingly popular due to their ability to collect data in real time, their affordability and their small size [7], [8], [9], [10]. These advances continue to enable more opportunities for capturing environmental signatures in urban settings by providing the mechanisms to collect and analyse objective data [1], physiological changes [11] and behaviour markers of mental wellbeing [12], [10] in real time. In addition, major advances and recent developments within data science have created greater opportunities to understand large multimodal datasets through machine learning [1], deep learning [11] and spatial visualisations [13]. In this paper, we define ’DigitalExposome’ as the quantification step in understanding the relationship between the environment and the body, along with the perceived environmental responses, which could potentially help in designing our cities with wellbeing in mind. 
In utilising the ’DigitalExposome’ concept, this paper leads us to explore, discuss and address the following research question: “How can we monitor, fuse, model and understand the person-environment interaction to help determine what makes an urban environment potentially healthy?” To answer this question we set up a data collection study in a busy urban area. Our participants collected (for the first time) multi-sensor data simultaneously including:

1. Urban environmental attributes and air quality: PM1, PM2.5, PM10, Oxidised, Reduced, NH3 and Environmental Noise
2. Body physiological reactions including: Heart rate (HR), Electrodermal activity (EDA), Heart Rate Variability (HRV), Body temperature (TEMP)
3. Body movement via accelerometer
4. People count via wireless proximity detection
5. Individuals’ perceived responses: Self-reported valence

The data collection tools were built by the project team, including a sophisticated sensing edge (Enviro-Edge) with 10 embedded air quality sensors, and a smartphone app (EnvBodySens2) that collects accelerometer data, Bluetooth Low Energy (BLE) signals for people count, self-report labels, noise, date/time and GPS traces. On-body data was collected using the E4 Empatica. The data is instantly fused, time-stamped and geo-tagged at the point of collection. Collecting the data in the “wild”, out of the lab, paves the way for a more realistic approach that can generalise to real-life settings. To the best of our knowledge, this combinatory and systematic data collection approach, including a comprehensive list of on-body, contextual and environmental sensors along with the user responses, has not been attempted before. 
A range of multivariate statistical analysis techniques have been applied, including Principal Component Analysis, regression and spatial visualisations (including heat maps and geometrical tessellation), to explore correlated patterns in the data and unravel the associations between the attributes which might suggest a causal relationship. Consequently, we identify common patterns arising from a group of people and map the patterns of sensor variance in a place to its stimuli. As a result, we can visualise the spatiality of wellbeing on three different levels: individual wellbeing (the wellbeing of one individual in the same environment - temporal), accumulated wellbeing (the wellbeing of one individual in many environments - spatial), and collective wellbeing (the wellbeing of a group of individuals in many environments). Furthermore, to delve into the predictive ability of the heterogeneous multivariate attributes, several machine learning models were built and evaluated, ranging from standard machine learning methods such as K-Nearest Neighbor, Decision Trees and Support Vector Machines to more sophisticated deep neural network-based techniques such as Convolutional Neural Networks (CNN). We also adopted a Deep Belief Network (DBN) to extract features from the multimodal data feed, which were then fed into the machine learning algorithms. The performance of the on-body modality and environment modality is compared with and without the DBN, assessing the possibility for a number of machine learning algorithms to infer affect quality (mental wellbeing) from the data.

## 2 Related work

Repeated and continuous human exposure to the environment and highly concentrated air pollutants has been found to increase the risk of developing serious conditions such as respiratory and cardiovascular diseases [14], [15], or even death [16]. 
Research has recently begun focusing on how the environment can impact physical health, but it is also necessary to explore how the environment can impact mental wellbeing. Pollution within the urban environment is a continual problem contributing to rising health and mental wellbeing challenges. The ability to monitor air pollutants, physiology and mental wellbeing will enable the relationship between repeated environmental exposures and mental wellbeing to be established. ExpoApp [17] used a sensor-fusion approach (environmental and on-body) to model the short-term health impact of high air pollution. Their analysis showed that those who did not have access to green spaces inhaled a higher rate of air pollution. A similar study monitored the environmental impact on an individual, indicating a positive correlation between the environment, body temperature, ElectroDermal Activity (EDA), motion and Heart Rate (HR) [17]. In addition, ’Project Helix’ [18] studied the environmental impact on individuals living in urban environments. Increased levels of blood pressure, asthma, allergy-related illnesses and behavioural issues were found among those living in urban environments. Mobile technology coupled with sensors has, in previous research, aimed to provide a deeper understanding of the impact of exposure on an individual in a particular location. This highlights the potential of recent technological advances, whereby an individual’s exposure to the environment can be accurately assessed and calculated [5]. Furthermore, particular areas have been found to carry an increased risk of individuals developing serious health conditions such as higher heart rate [1], asthma and cardio-cerebrovascular disease [5]. A study in 2018 used mobile technologies to develop the methods of assessing exposure to an individual. This involved using an activity sensor and a GPS sensor to predict an individual’s location. 
Overall, the investigation demonstrated the capability of using sensors to accurately assess an individual’s exposure [5]. Personal sensors that measure individual exposure, such as air pollution, noise, outdoor temperature, physical activity and blood pressure, have been a positive way forward in monitoring due to their ability to collect data continually and in real time [8], helping to reveal early health conditions [19]. By combining these sensor data streams, and given the possibility for an individual to wear sensors continuously, the data can show the exposures an individual encounters as well as predict early health conditions. Developed in 2005, the exposome concept encompasses every exposure a human is subjected to from birth to death [20]. In recent years, the concept has been actively used in research communities as an alternative method of measuring the impact of the environment. Highly polluted environments have shown increased risk of developing conditions like asthma and cardio-cerebrovascular diseases [21]. Figure 1 presents the exposome concept in its simplest form and highlights the large amount of data that is required in order to calculate exposure impact across an individual’s lifetime.

Figure 1: Demonstrating how the three stages of the exposome concept play a part in calculating the health risk [22]

There are three stages associated with the exposome: internal, general external and specific external. The first stage of calculating the exposome is the ‘internal’, which measures the body’s biological response to exposures, such as ageing and stress. The second stage, the ‘general external’, considers the wider impact on our lives and influences on the individual, such as their educational background and financial situation. Finally, the ‘specific external’ examines effects outside of the body such as air pollution, radiation and diet. Once all three stages have been measured, the exposome can be exactly calculated. 
In its current form, researchers are able to calculate some of the impact of environmental exposure; however, some challenges remain. Most studies have found it challenging to address and understand the exposome fully because of its scale, the quantity of data involved [23] and the overall quality of the data produced [21]. The environment and its impact have been widely researched; however, there has been little consideration of implementing a sensor-fusion approach using physiological and environmental sensors to study the impact of the environment on mental wellbeing.

## 3 DigitalExposome

We introduce the term ’DigitalExposome’ as a framework to quantify an individual’s exposure to the environment by utilising a range of technological, mobile-sensing and digital devices, as shown in Figure 2. This concept aims to measure multiple environmental factors using mobile technologies and then quantify them in real-life settings. Combining multiple data collection methods helps to support DigitalExposome and gain a better understanding of how exposures to the environment can impact mental wellbeing.

Figure 2: Data Collection methods to support DigitalExposome.

This concept further builds on the exposome by digitally providing a better understanding of the direct impact of exposure on an individual. Through DigitalExposome, we aim to examine the opportunities that we foresee with this concept in exploring the link between pollution and wellbeing. To support this work, we have developed a range of sensing devices and applications. DigitalExposome is primarily made up of two parts: data collection and data analysis. Both aspects make use of technological advances in order to calculate the exposome. In order to quantify the process, we propose the utilisation of data from sensors that show how an individual has been exposed to pollutants. 
We see this as a key part of the exposome concept, where both terms are clearly connected through their vision of capturing the true exposure that an individual has experienced. Data generated through the use of technology, such as sensors, is ideal for monitoring various exposures and enables the possibility of linking this to health.

## 4 Methodology

### 4.1 System Architecture

Figure 3 presents the conceptual system architecture of DigitalExposome with four key layers. Firstly, the conceptual layer covers the four main areas that can impact mental wellbeing: environmental, biological, social and cultural factors [24]. The sensing layer contains the physical devices (e.g. smartphone and wristband) and physiological sensing systems to monitor HR, EDA and body temperature, along with environmental factors such as air quality. The computing layer lists several core data science techniques that enable processing and analysis of the data, including machine learning, deep learning, statistical analysis and data visualisation. Finally, the application layer presents potential applications of DigitalExposome.

Figure 3: Conceptual and System Architecture of DigitalExposome for Wellbeing.

### 4.2 Experimental Setup

Following approval from Nottingham Trent University’s Ethics Committee, we recruited a total of 20 participants (15 males and 5 females) and carried out the study between September and October 2020. Due to the lockdown and COVID-19 restrictions, it proved difficult to recruit further users. Participants were each provided with three devices: an environmental monitoring device (Enviro-Edge), an Empatica E4 wristband and a Samsung phone pre-installed with the EnvBodySens app. Each participant walked a pre-specified route around Nottingham Trent University (Clifton Campus). Whilst walking, the three devices continually collected sensor data on environmental pollutants and physiological changes. 
In addition, participants self-reported their wellbeing continuously during the walk. The information acquired from each device is shown in Figure 4. The specified route took participants around 25 minutes to complete. The route was limited to 25 minutes due to user preference and to avoid exhausting participants, which could impact their body responses.

Figure 4: List of the fused variables collected by each device.

The experimental data collection tools are depicted in Figure 5; they include the Enviro-IoT, the E4 Empatica and a smartphone application. The Enviro-IoT edge device, equipped with a Raspberry Pi 4, records environmental data continually once every 20 seconds, while the E4 Empatica sensors’ data is sampled at different rates, with HR at 1Hz and EDA, BVP, HRV and body temperature at 64Hz. Each participant used the custom-built, pre-installed “EnvBodySens” smartphone app to record their perceived wellbeing. We have adopted the ’Personal Wellbeing Index for adults’, which asks the user how satisfied they are with their life as a whole [25]. This has been adapted in the form of a five-point Likert SAM scale [26] to provide a proven method for self-reporting subjective wellbeing. In our pre-installed mobile app the user is presented with five well-known emojis, from 1 (negative/low) to 5 (positive/high), selected via buttons throughout the walk. The participant is regularly prompted by the researcher to ascertain how they are feeling. Unfortunately, the application did not successfully log data for 8 of the participants, resulting in only data from 12 participants being labelled. However, the unlabelled data can still be utilised for unsupervised statistical analysis.

Figure 5: (left) Screenshot of smartphone application, (middle) E4 Empatica, (right) Environment monitoring kit.

### 4.3 Pre-processing

Following the data collection, the data was cleaned and pre-processed. 
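This cleaning stage can be sketched in a few lines of pandas. The snippet below is illustrative only (the timestamps, sensor values and helper names are stand-ins; the paper does not publish its exact pipeline): it up-samples a slow environmental stream to a common 1Hz rate by linear interpolation and then min-max normalises it.

```python
import numpy as np
import pandas as pd

def lerp(x, x1, y1, x2, y2):
    """Two-point linear interpolant between (x1, y1) and (x2, y2)."""
    return y1 + (x - x1) * (y2 - y1) / (x2 - x1)

# A slow environmental stream: one reading every 20 seconds (illustrative values)
t_env = pd.date_range("2020-09-01 10:00:00", periods=4, freq="20s")
pm25 = pd.Series([8.0, 12.0, 10.0, 14.0], index=t_env)

# Up-sample to 1 Hz with linear interpolation between the known readings
pm25_1hz = pm25.resample("1s").interpolate("linear")

# Min-max normalisation brings every fused channel into the same [0, 1] range
pm25_norm = (pm25_1hz - pm25_1hz.min()) / (pm25_1hz.max() - pm25_1hz.min())
print(len(pm25_1hz), float(pm25_norm.min()), float(pm25_norm.max()))
```

The same pattern applies in reverse for down-sampling the faster physiological channels to 1Hz.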
Due to the varying sample rates, the physiological data collected (EDA, BVP, HRV and body temperature) were down-sampled to a rate of 1Hz to match the sample rate of HR collected by the device. In addition, the collected environmental sensor data had to be up-sampled to match the sample rate of the physiological data at 1Hz, due to the low sample rate produced by the environmental device. Finally, the labelled data from the smartphone was extracted and up-sampled to 1Hz to remain consistent with the environmental and physiological data. To resample the data we have used linear interpolation [27]. If the two known points are given by the coordinates $(x_{1},y_{1})$ and $(x_{2},y_{2})$, the linear interpolant is the straight line between these points. For a value $x$ in the interval $(x_{1},x_{2})$, the value $y$ along the straight line is given from the equation of slopes as shown below:

$y=y_{1}+\left(x-x_{1}\right)\frac{\left(y_{2}-y_{1}\right)}{\left(x_{2}-x_{1}\right)}$ (1)

Following this, all signals were normalised to bring all variables within the same range for both the data analysis and machine learning. Finally, all the resampled sensor data was fused together. Whilst cleaning the data, two variables were excluded from the experiment, Carbon Dioxide and Volatile Organic Compounds, because of data-logging issues that resulted in no change in value during the experiment. In total, the number of samples after cleaning was 13,658.

## 5 Results

### 5.1 Statistical Factorial Analysis

We have employed mathematical and statistical approaches for the exploratory analysis stage, including variable correlations, PCA factor maps, variable importance and Pearson’s r correlation coefficient to measure the association between pairs of variables. A correlation matrix is depicted in Figure 6 to further understand the relationship between the different variables. 
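A correlation matrix like the one in Figure 6 can be computed directly with pandas. The data below is synthetic (the channel names and the coupling between them are illustrative, not the study's measurements):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)
n = 500
pm10 = rng.normal(20.0, 5.0, n)
frame = pd.DataFrame({
    "pm10":  pm10,
    "hrv":   0.6 * pm10 + rng.normal(0.0, 3.0, n),  # channel coupled to pm10
    "noise": rng.normal(60.0, 8.0, n),              # independent channel
})
# Pearson's r for every pair of channels in one call
corr = frame.corr(method="pearson")
print(corr.round(2))
```

The coupled pair shows a strong positive entry while the independent channel stays near zero, which is the kind of structure read off Figure 6.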
From the matrix, it is clear that some variables are highly correlated. Analysing the individual cells shows that HRV correlates well with PM10 and NH3. In addition, EDA demonstrates a correlation with PM10, oxidised and reduced gases, and NH3.

Figure 6: Correlation Matrix of the Environmental and Physiological Variables.

PCA factor maps are an effective method for large datasets to help understand the relational impact between different variables while reducing information loss [28]. Using PCA maps also provides a visual method of presenting data and observing correlations between different variables [1]. PCA factor maps give a view of all the variables projected onto a plane spanned by the first two principal components. This method demonstrates the structural relationship between the different variables. We present two PCA factor map plots based on the importance of the many variables we have collected.

Figure 7: PCA Analysis - (left) Variance between the different variables, (right) Variance between the different variables without EDA.

Figure 7 (left) presents the captured environmental and physiological variables depicted on a PCA map. It is worth noting that most of the body attributes (EDA, HR and HRV) are at the top of the figure, whilst the environmental variables (PM1, PM2.5, PM10 and reducing gases) are located in the middle. From the diagram we can see that the first principal component explains about 25.9% of the total variance and the second principal component an additional 19.2%. Therefore, the first two principal components explain 45.1% of the total variance. The most important (or contributing) variables are highlighted using the colour gradient. 
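The explained-variance figures quoted above come from a standard PCA on standardised data; a minimal scikit-learn sketch on synthetic stand-in data (the channel construction is illustrative, not the study's dataset):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(7)
latent = rng.normal(size=(300, 1))  # one shared latent factor
# Four channels driven by the latent factor plus two independent channels,
# mimicking a block of correlated sensor readings among unrelated ones.
X = np.hstack([latent + 0.1 * rng.normal(size=(300, 1)) for _ in range(4)]
              + [rng.normal(size=(300, 2))])

pca = PCA(n_components=2)
scores = pca.fit_transform(StandardScaler().fit_transform(X))
print(pca.explained_variance_ratio_)  # variance share of the first two PCs
print(pca.components_)                # loadings: how each channel projects
```

Plotting the rows of `components_` for the first two components reproduces the factor-map view: correlated channels land close together on the plane.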
By removing the least important variables and keeping the most important (or relevant) ones which are positioned close to each other (including PM1, PM10, PM2.5, EDA, HR and IBI), we notice that the first two components explain a large percentage of the total variance (the first component alone explains around 50% of the variance), as shown in Figure 7 (right). The close grouping and proximity of the independent variables suggests that HRV, HR and PM10 are correlated, and that HRV, HR and PM2.5 are also positively correlated. Furthermore, Figure 8 depicts an analysis of the environmental and physiological data indicating that where participants labelled their wellbeing as negative or very negative, there were high levels of PM2.5 within the environment. This is similarly true for PM1.0 and PM10, again demonstrating the direct correlation between human physiology and environmental pollutants. Figure 8: Depicts the relationship between the self-reported participant wellbeing (Label) and PM2.5. ### 5.2 Multivariate Regression Analysis Using PCA analysis and correlation matrices allows us to explore the relationships unfolding between the different variables. We continue this process by using multivariate regression to better understand the importance and impact each variable has on the others. This work aims to study the variable dependency across two different modalities using Multivariate Regression and Principal Component Analysis (PCA). For each of the dependent variables (physiological data) we used Multiple Linear Regression against the independent variables (environmental data). The aim is to see which dependent variable can be predicted using the environmental data as independent variables. 
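Such a multiple linear regression can be sketched with ordinary least squares; synthetic data replaces the real environmental and physiological readings here, so the coefficient values are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic stand-ins: 6 environmental predictors, one physiological target.
X = rng.standard_normal((500, 6))            # nh3, noise, pm1, pm2.5, pm10, reduced
true_beta = np.array([0.3, 0.5, -0.2, 0.1, 0.4, -0.6])
y = 2.0 + X @ true_beta                      # noiseless for the sketch

# Ordinary least squares with an intercept column, as in Tables 1 and 2.
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, rcond=None, b=y)
intercept, betas = coef[0], coef[1:]
```

On real data the fitted coefficients, their standard errors and p-values are what Tables 1 and 2 report for EDA and HR.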
Multiple Regression Model for EDA: Firstly, we used a multiple linear regression model for EDA to understand the impact of the independent environmental variables, including NH3, Noise, PM1, PM2.5, PM10 and Reduced gases, on this physiological on-body signal. Table 1 shows the multiple regression results for EDA:

Table 1: Multiple Regression Analysis between EDA and Environmental variables.

| | Coefficients | Standard Error | t Stat | P-value |
|---|---|---|---|---|
| Intercept | -0.02381894 | 0.02225209 | -1.07041 | 0.284452 |
| nh3 | 0.000291595 | $1.14608\mathrm{E}-05$ | 25.44285 | $1.5\mathrm{E}-139$ |
| noise | 0.004050864 | 0.000221511 | 18.2874 | $7.92\mathrm{E}-74$ |
| oxidised | -0.00590754 | 0.000143065 | -41.2928 | 0 |
| pm1 | -0.00768185 | 0.00081832 | -9.38735 | $7.11\mathrm{E}-21$ |
| pm10 | 0.000939923 | 0.000285371 | 3.29369 | 0.000991 |
| pm25 | 0.003698711 | 0.000800215 | 4.622149 | $3.83\mathrm{E}-06$ |
| reduced | -0.00058528 | $4.59985\mathrm{E}-05$ | -12.7239 | $7.06\mathrm{E}-37$ |

The data in Table 1 was then evaluated using the regression curves shown in Figure 9, which plot the calculated residual values versus the fitted values. Figure 9: EDA regression curves. (Left) Residuals curve. (Right) Presents the Q-Q curve. The right panel of Figure 9 shows the Normal Q-Q plot for EDA, constructed from bi-modal data. In many Q-Q plots, the data takes the shape of a twist, as seen in this plot [1], [29]. The lower part of the plot is almost linear, suggesting a normal distribution for one mode of the data. The upper part of the Q-Q plot is also approximately linear, suggesting an approximately normal distribution for the other mode. The steep line between the upper and lower curves suggests that the empirical distribution is more dispersed than the reference distribution. Multiple Regression Model for HR: Below we present the multiple linear regression model for HR using the independent environmental variables NH3, Noise, PM1, PM2.5, PM10 and Reduced gases. 
Table 2 shows the multiple regression results for HR:

Table 2: Multiple Regression Analysis between HR and Environmental variables.

| | Coefficients | Standard Error | t Stat | P-value |
|---|---|---|---|---|
| Intercept | 128.8420806 | 1.721454748 | 74.84488381 | 0 |
| nh3 | 0.007322924 | 0.000886623 | 8.25933903 | $1.59864\mathrm{E}-16$ |
| noise | -0.083286817 | 0.017136432 | -4.860219147 | $1.1856\mathrm{E}-06$ |
| oxidised | -0.051833849 | 0.011067698 | -4.683344891 | $2.84951\mathrm{E}-06$ |
| pm1 | 0.118538171 | 0.063306454 | 1.872450026 | 0.061165731 |
| pm10 | 0.112184632 | 0.022076708 | 5.081583292 | $3.79248\mathrm{E}-07$ |
| pm25 | -0.232804742 | 0.061905795 | -3.760629225 | 0.000170194 |
| reduced | -0.072042617 | 0.003558515 | -20.24513451 | $8.12597\mathrm{E}-90$ |

These initial findings agree with previous research showing that PM2.5 can directly impact HR [30]. In addition, research has shown how differing levels of irregular environmental noise can affect a regular heartbeat; in particular, recent studies find that noise levels between 55 and 75 Decibels (dB) are linked to a higher risk of developing heart-related diseases [31]. Figure 10: HR regression curves. (Left) Residuals curve. (Right) Presents the Q-Q curve. Similar to the EDA Q-Q plot, the HR Q-Q plot shown in Figure 10 (right) demonstrates a twist at either end, and the data shows a clear bi-modal distribution. The lower part of the plot is almost linear, suggesting an approximately normal distribution. The line in the middle, between the upper and lower parts, follows a more linear (y=x) trend, meaning that the distribution is less dispersed. It is worth noting that there were three outliers in the HR distribution due to erroneous sensor readings. ## 6 Visualisations To summarise the dynamic sensing patterns and act upon the findings using visualisation, the geographical study area needs to be divided into smaller areas. One common way of looking at patterns is to use heat maps of the sensor data. 
For example, Figure 11 presents several heat maps plotting environmental and physiological sensor data, showing the changes whilst the participants travelled along the route. In particular (upper right of the map), we can observe that when moving from the campus towards the main road, each participant was subjected to an increase in PM2.5 and Noise, which was met with an increase in HRV and EDA. This approach further demonstrates the impact of the environment on mental wellbeing states. Figure 11: A heat map demonstrating environmental and physiological sensor data along the specified route. It can be seen that the sensor data hot-spots are scattered along the path. While the heat maps show the level of intensity based on GPS trace coordinates, the sensor data on these heat maps indicate the real distribution of the sensor data. One option is to divide the study area into grid cells [32]; however, it is difficult to allocate a cell to each sensor reading, and moreover it is not possible to decide on a single cell size, since the sensor mobility traces can have very different density distributions. To address these issues, we utilise a combinatorial computational geometry construct called the Voronoi diagram, which partitions a plane into regions based on distance to points in a specific subset of the plane [33]. Voronoi visualisation is a computational geometry method which allows the visualisation of large data sets [33]. The concept works by defining a set of polygonal regions called cells, whereby the size of each cell gives an indication of the local density of objects in an area, or of the size of the object itself [34]. Figure 12: (left) Voronoi overlay from one participant's data. Each polygon represents one location trace tagged with a wellbeing label collected along the specified route (map layer from Microsoft Bing); (right) collected label data from start to end. 
The Voronoi diagram divides the space into a set of regions called Voronoi cells, each containing the space that is closest to a given object (a route location, in our case). The size of these cells gives an indication of the density of the area a certain object is in, or of the size of the object [34]. The cell structure is dual to the Delaunay triangulation, which easily allows calculating an object's immediate set of neighbours. The definition of a Voronoi cell is given by the following equation, where $x$ ranges over a planar metric space; $p_i$ are the generator points in the metric space; and $d$ is the distance between a point $x$ and a specific generator point (where the distance can be defined using any distance definition such as Euclidean, Manhattan, or road-network distance): $Vor_{i}=\left\{x\mid d(x,p_{i})\leq d(x,p_{j}),\ \forall j\neq i\right\}$ (2) Thus, the Voronoi diagram is composed of the collection of tessellations (i.e. polygons) $Vor_i$, where: $Vor=\left\{Vor_{1},Vor_{2},\ldots,Vor_{n}\right\}$ (3) The creation of a Voronoi tessellation is a dynamic procedure that continues until all the points are represented in adjacent polygons. If a sufficient number of points does not satisfy Equation (2), the Voronoi diagram is only partially filled; in this case, the data is redistributed. By giving each polygon a class value $C_i$ that corresponds to the sensor value collected at a particular GPS coordinate, it is then possible to divide the space into adjacent polygons with different sensor readings, represented by colours. Figure 12 presents the self-reported wellbeing data collected using the app on the specified route for this experiment. The colour of the polygons represents the wellbeing data from low negative to high positive. 
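The nearest-generator rule in Equation (2) can be sketched directly: each sample point is assigned to the cell of its closest generator (Euclidean distance here; the coordinates are illustrative, not real GPS traces):

```python
import numpy as np

def voronoi_assign(points, generators):
    """Assign each point x to the Voronoi cell Vor_i of its nearest
    generator p_i, i.e. the i minimising d(x, p_i) (Equation 2)."""
    # pairwise Euclidean distances, shape (n_points, n_generators)
    d = np.linalg.norm(points[:, None, :] - generators[None, :, :], axis=2)
    return d.argmin(axis=1)

generators = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])  # route locations
points = np.array([[1.0, 1.0], [9.0, 1.0], [2.0, 9.0]])
cells = voronoi_assign(points, generators)   # one cell index per point
```

Colouring each cell by its class value $C_i$ (a wellbeing label or sensor reading) then yields the overlay of Figure 12.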
The visualisation demonstrates that poor wellbeing (lighter colour) was most reported along the main road, where high levels of pollution were also experienced, whereas more positive states of wellbeing were recorded in less polluted areas such as fields and open spaces. ## 7 Deep Learning Classification The use of machine learning and deep learning networks has been explored to classify the five self-reported states of wellbeing using the pollution and physiological data from the 12 participants who successfully labelled their wellbeing. ### 7.1 Deep Belief Network Deep learning presents many opportunities to classify raw sensor data using classification models such as Convolutional Neural Networks (CNNs), and it is particularly effective for deep feature extraction. To enable deep feature extraction from the fused environmental and physiological data, we employ an unsupervised one-dimensional deep belief network (DBN) [35], [36], [37]. Unsupervised DBNs are beneficial as they learn to extract a deep hierarchical representation of the training data, which can then be used as features within a supervised machine learning classifier. DBNs are generative models composed of stacked Restricted Boltzmann Machines (RBMs) and Sigmoid Belief Networks [38], [39]. The RBMs are stacked and trained greedily and sequentially, feeding the lower layers' results to the upper layers to form the DBN. 
They model the joint distribution between the observed vector $x$ and the $\ell$ hidden layers $h^{k}$, where $x=h^{0}$, $P\left(h^{k-1}\mid h^{k}\right)$ is the distribution of the units conditioned on the hidden units of the RBM at level $k$, and $P\left(h^{\ell-1},h^{\ell}\right)$ is the visible-hidden joint distribution in the top RBM: $P\left(x,h^{1},\ldots,h^{\ell}\right)=\left(\prod_{k=0}^{\ell-2}P\left(h^{k}\mid h^{k+1}\right)\right)P\left(h^{\ell-1},h^{\ell}\right)$ (4) Unsupervised learning is used to train the RBMs of the DBN to automatically construct features and reconstruct inputs. The Gibbs-sampling-based contrastive divergence method used to train each RBM is as follows: 1. The fused physiological and pollution sensor data is fed into the RBM as the input $x=h^{(0)}$ of the first layer. 2. The activation probabilities of the hidden units are calculated using (5): $P\left(h_{j}\mid X\right)=\sigma\left(b_{j}+\sum_{i=1}^{m}W_{ij}X_{i}\right)$ (5) 3. The activation probabilities of the input units are calculated using (6): $P\left(X_{i}\mid h\right)=\sigma\left(a_{i}+\sum_{j=1}^{n}W_{ij}h_{j}\right)$ (6) 4. The edge weights are updated, where $\alpha$ is the learning rate, using (7): $W_{ij}=W_{ij}+\alpha\left(P\left(h_{j}\mid X\right)-P\left(X_{i}\mid h\right)\right)$ (7) After training the first RBM, its edge weights are frozen and the remaining RBMs are trained using the same contrastive divergence method, with the output of the previously trained RBM used as the input of the next RBM. After training has completed, the DBN features are extracted from the top hidden layer, and the hidden units of the learned network structure are used as the input layer for supervised ML models. The DBN is essentially used as a feature selection mechanism for the machine learning models: it acts as a representation learner, compressing the original input vector for the ML models to use. 
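A compact numpy sketch of one RBM trained this way, following the activation probabilities of Equations (5) and (6); the weight update here is the standard CD-1 gradient, a slight variant of the simplified rule in Equation (7), and the layer sizes and data are arbitrary:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class RBM:
    def __init__(self, n_visible, n_hidden, lr=0.1, seed=0):
        rng = np.random.default_rng(seed)
        self.W = 0.01 * rng.standard_normal((n_visible, n_hidden))
        self.a = np.zeros(n_visible)      # visible biases
        self.b = np.zeros(n_hidden)       # hidden biases
        self.lr = lr

    def hidden_probs(self, X):            # Equation (5)
        return sigmoid(self.b + X @ self.W)

    def visible_probs(self, H):           # Equation (6)
        return sigmoid(self.a + H @ self.W.T)

    def cd1_step(self, X):
        """One contrastive-divergence (CD-1) weight update."""
        ph = self.hidden_probs(X)         # positive phase
        Xr = self.visible_probs(ph)       # Gibbs reconstruction
        ph_r = self.hidden_probs(Xr)      # negative phase
        self.W += self.lr * (X.T @ ph - Xr.T @ ph_r) / len(X)

rbm = RBM(n_visible=12, n_hidden=6)       # e.g. 12 fused sensor channels
X = np.random.default_rng(1).random((64, 12))
for _ in range(10):
    rbm.cd1_step(X)
features = rbm.hidden_probs(X)            # top-layer features for a classifier
```

Stacking such RBMs, freezing each trained layer and feeding its hidden activations to the next, yields the DBN feature extractor described above.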
### 7.2 Results The extracted features from the DBN were combined with Random Forest, Support Vector Machine (SVM), Decision Tree, Gaussian Naive Bayes, Logistic Regression and Gradient Boosted supervised machine learning models to classify the five self-reported states of wellbeing using the pollution (PM1, PM2.5, PM10, Oxidised, Reduced, NH3 and Noise) and physiological (BVP, EDA, HR, HRV and body temperature) data. These machine learning models were selected due to their high popularity, and were also tested using only common statistical features: mean, median, max, min, max-min, standard deviation and quartiles [40]. Additionally, a Convolutional Neural Network (CNN) was trained using the same raw data to enable comparison with the DBN models. The models were trained over 20 epochs with a batch size of 128 and tested using 10-fold cross-validation. Figure 13: Comparison of classification models trained using statistical features and features extracted from DBN. Figure 13 shows the accuracy of each classification model trained using standard statistical features and using features extracted by the DBN. The results demonstrate that the models trained using DBN features outperformed the models trained with statistical features for three out of the five classifiers, achieving on average 3.2% higher accuracy. Random Forest combined with the DBN was the best performing model, achieving 80.83% accuracy and outperforming, by 5.54%, all statistical models as well as the CNN, which is frequently used for wellbeing classification. To further explore the impact pollution has on wellbeing, the best performing model (Random Forest) and the CNN were each additionally trained using only the pollution data or only the physiological data, as shown in Figure 14. 
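The common statistical features used for the baseline models (mean, median, max, min, max-min, standard deviation and quartiles) can be computed per signal window as follows; the window here is an arbitrary illustrative array, not study data:

```python
import numpy as np

def statistical_features(window):
    """Common statistical features for one signal window [40]."""
    q1, q3 = np.percentile(window, [25, 75])
    return {
        "mean": np.mean(window),
        "median": np.median(window),
        "max": np.max(window),
        "min": np.min(window),
        "max_min": np.max(window) - np.min(window),
        "std": np.std(window),
        "q1": q1,
        "q3": q3,
    }

feats = statistical_features(np.array([1.0, 2.0, 3.0, 4.0, 5.0]))
```

Each sensor channel contributes one such feature vector per window to the baseline classifiers.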
Figure 14: Comparison of Random Forest combined with DBN and CNN when trained using only pollution or physiological data. The results show that wellbeing can be inferred using pollution data alone with 73% accuracy, while wellbeing can be inferred from physiological data with 79.1% accuracy. It was expected that physiological data would accurately classify wellbeing due to its high correlation with the sympathetic nervous system [41]. Notably, pollution data combined with physiological data outperformed the models trained using either data source alone. Furthermore, the CNN trained using only pollution data outperformed the CNN trained using physiological data, suggesting pollutants have a considerable impact on wellbeing. ## 8 Discussion and Limitations The 'DigitalExposome' concept helps to observe changes in the environment and their impact on the human body at the same time. To the best of our knowledge, this is the first real-world study to quantify the link between the environment and its impact on physiology and mental wellbeing. Continually collecting and fusing real-world environmental and physiological sensor data helped us learn about our surroundings and how we interact and behave in different environmental conditions. This goes beyond previous work in this area, which typically only observes how noise can impact wellbeing [32] and does not consider other environmental pollutants. The PCA analysis suggests that when all collected variables are combined they describe the variability of the data as a whole. In particular, on the PCA map the physiological sensors (EDA, HR and HRV) point towards a different location from the environmental variables. From our analysis we can conclude that a range of environmental factors (PM1.0, PM2.5, PM10) impact physiological changes (HRV, HR). Voronoi visualisations have given an indication of how changes within the environment can impact mental wellbeing. 
Typically, it was found that where air pollution (PM1, PM2.5, PM10) and Noise increased, participants labelled their wellbeing as very negative. This is consistent with previous studies in this area [32], [13]. This form of spatial analysis greatly helps in understanding the degree to which a place is similar to other nearby places. The ability to classify the collected data presents many possibilities for real-world inference of wellbeing using pollution data. The results show that DBNs successfully improve the accuracy with which wellbeing can be inferred, compared with CNNs and machine learning models trained using statistical features. Combining physiological with pollution data achieved up to 80.8% accuracy, compared with 79.1% when trained using only physiological data and 73% when trained using only pollution data. The ability of pollution data to increase overall accuracy demonstrates its impact on wellbeing and shows that pollution should continue to be considered as a factor that influences changes in wellbeing. During this study some limitations were encountered. Early analysis of the collected sensor data found that the Empatica E4 was not reliably collecting participants' EDA. While the EDA sensor worked successfully for some participants, for others no variation in EDA was recorded throughout the experiment. At the point of fusing the collected sensor data, both CO2 and VOC were found to have recorded data for some participants but not all, resulting in their exclusion. As this study was conducted during the COVID-19 pandemic, collecting data in a participant-heavy experiment was a challenging task. In the future, we aim to recruit more participants to further investigate and generalise the relational impact of the environment on mental wellbeing. ## 9 Conclusion and Future Work Although very appealing, the exposome approach is complex, both in terms of observing each factor, quantifying it, and analysing its relationship to health. 
This novel research challenges existing approaches to understanding the 'Exposome' concept, providing a new perspective on how to quantify the relationship between the environment and mental wellbeing. The 'Exposome' concept was first proposed to encompass the totality of human environmental exposures from conception onward. The concept has since drawn attention to the need for more multivariate data, enabled by the rise of mobile and sensor technologies. This enables us to study how people experience place, the impact of place, and the role that different environmental factors play on people. In this paper, we proposed the new concept of the 'DigitalExposome', which demonstrates the potential of employing a multi-modal mobile sensing approach to further understand the relationship between the environment and its impact on mental wellbeing. To achieve this, a real-world experiment was conducted around Nottingham Trent University, Clifton Campus, where participants walked a specified route reporting their responses while environmental, behavioural and on-body sensor data were collected. Statistical analyses, including PCA, multivariate linear regression, Voronoi diagrams and spatial data visualisations, were employed to explore the variation in the data and the importance of each factor. We found that physiological (on-body) sensor data is directly correlated with pollution (PM in particular) within the environment. In addition, DBNs helped successfully classify five states of wellbeing with up to 80.8% accuracy using the fused physiological and pollution data. In the future, we hope to consider additional environmental sensors to observe further changes that may improve our sense of places and characterise the relationship between people and spatial settings, which in turn might influence the future design of urban spaces. 
Although the distance walked by participants was sufficient, greater distances would offer deeper insight into the effect that longer periods of time spent within an urban environment have on changes in mental wellbeing states. In addition, we want to apply our approach to a busier city-centre environment to further understand the impact of heightened pollution on wellbeing. It is important to understand when and where people feel better, so that they can appreciate their surroundings and city planners can create places where people feel rejuvenated. Sensing technology can shed light on how people breathe, feel and interact with their environment in different surroundings. This can help offer a better sense of security for city dwellers and create a bond with their environments. ## References * [1] E. Kanjo, E. M. Younis, N. Sherkat, Towards unravelling the relationship between on-body, environmental and emotion data using sensor information fusion approach, Information Fusion 40 (May) (2018) 18–31. doi:10.1016/j.inffus.2017.05.005. * [2] H. F. Guite, C. Clark, G. Ackrill, The impact of the physical and urban environment on mental well-being, Public Health 120 (12) (2006) 1117–1126. doi:10.1016/j.puhe.2006.10.005. * [3] R. H. Bhat, Environmental Stressors and Its Impact on Human Being, International Journal of Humanities and Social Sciences. 5 (1) (2017) 37–40. * [4] H. I. Ji, Ambient air quality and health (2018). URL http://www.who.int/mediacentre/factsheets/fs313/en/ * [5] A. Stamatelopoulou, D. Chapizanis, S. Karakitsios, P. Kontoroupis, D. N. Asimakopoulos, T. Maggos, D. Sarigiannis, Assessing and enhancing the utility of low-cost activity and location sensors for exposure studies, Environmental Monitoring and Assessment 190 (3) (mar 2018). doi:10.1007/s10661-018-6537-2. * [6] Clean Air Strategy Plan, Tech. rep. (2011). * [7] A. Larkin, P. Hystad, Towards Personal Exposures: How Technology Is Changing Air Pollution and Health Research (dec 2017). 
doi:10.1007/s40572-017-0163-y. * [8] D. G. DeBord, T. Carreón, T. J. Lentz, P. J. Middendorf, M. D. Hoover, P. A. Schulte, Use of the ”exposome” in the Practice of Epidemiology: A Primer on -Omic Technologies, American Journal of Epidemiology 184 (4) (2016) 302–314. doi:10.1093/aje/kwv325. * [9] D. Dai, A. J. Prussin, L. C. Marr, P. J. Vikesland, M. A. Edwards, A. Pruden, Factors Shaping the Human Exposome in the Built Environment: Opportunities for Engineering Control, Environmental Science and Technology 51 (14) (2017) 7759–7774. doi:10.1021/acs.est.7b01097. URL https://pubs.acs.org/sharingguidelines * [10] M. Ueberham, U. Schlink, Wearable sensors for multifactorial personal exposure measurements – A ranking study, Environment International 121 (Part 1) (2018) 130–138. doi:10.1016/j.envint.2018.08.057. * [11] E. Kanjo, E. M. Younis, C. S. Ang, Deep learning analysis of mobile physiological, environmental and location sensor data for emotion detection, Tech. rep. (2019). doi:10.1016/j.inffus.2018.09.001. * [12] K. Woodward, E. Kanjo, D. Brown, T. M. McGinnity, B. Inkster, D. J. Macintyre, A. Tsanas, Beyond mobile apps: A survey of technologies for mental well-being, arXiv (2019). arXiv:1905.00288. * [13] T. Johnson, E. Kanjo, K. Woodward, Sensor Data and the City: Urban Visualisation and Aggregation of Well-Being Data, arXiv (July) (2020). arXiv:2007.02674. URL http://arxiv.org/abs/2007.02674 * [14] J. Lelieveld, A. Pozzer, U. Pöschl, M. Fnais, A. Haines, T. Münzel, Loss of life expectancy from air pollution compared to other risk factors: A worldwide perspective, Cardiovascular Research 116 (11) (2020) 1910–1917. doi:10.1093/cvr/cvaa025. URL https://academic.oup.com/cardiovascres/article/116/11/1910/5770885 * [15] B.-J. Lee, B. Kim, K. Lee, Air Pollution Exposure and Cardiovascular Disease, Toxicological Research 30 (2) (2014) 71–75. doi:10.5487/TR.2014.30.2.71. 
* [16] Sandra Laville, Air pollution a cause in girl’s death, coroner rules in landmark case — London — The Guardian (2020). URL https://www.theguardian.com/environment/2020/dec/16/girls-death- contributed-to-by-air-pollution-coroner-rules-in-landmark-case * [17] D. Donaire-Gonzalez, A. Valentín, E. van Nunen, A. Curto, A. Rodriguez, M. Fernandez-Nieto, A. Naccarati, S. Tarallo, M. Y. Tsai, N. Probst-Hensch, R. Vermeulen, G. Hoek, P. Vineis, J. Gulliver, M. J. Nieuwenhuijsen, ExpoApp: An integrated system to assess multiple personal environmental exposures, Environment International 126 (2019) 494–503. doi:10.1016/j.envint.2019.02.054. * [18] L. Maitre, J. De Bont, M. Casas, O. Robinson, G. M. Aasvang, L. Agier, S. Andrušaitytė, F. Ballester, X. Basagaña, E. Borràs, C. Brochot, M. Bustamante, A. Carracedo, M. De Castro, A. Dedele, D. Donaire-Gonzalez, X. Estivill, J. Evandt, S. Fossati, L. Giorgis-Allemand, J. R. Gonzalez, B. Granum, R. Grazuleviciene, K. B. Gützkow, L. S. Haug, C. Hernandez-Ferrer, B. Heude, J. Ibarluzea, J. Julvez, M. Karachaliou, H. C. Keun, N. H. Krog, C. H. E. Lau, V. Leventakou, S. Lyon-Caen, C. Manzano, D. Mason, R. McEachan, H. M. Meltzer, I. Petraviciene, J. Quentin, T. Roumeliotaki, E. Sabido, P. J. Saulnier, A. P. Siskos, V. Siroux, J. Sunyer, I. Tamayo, J. Urquiza, M. Vafeiadi, D. Van Gent, M. Vives-Usano, D. Waiblinger, C. Warembourg, L. Chatzi, M. Coen, P. Van Den Hazel, M. J. Nieuwenhuijsen, R. Slama, C. Thomsen, J. Wright, M. Vrijheid, Human Early Life Exposome (HELIX) study: A European population-based exposome cohort, in: BMJ Open, Vol. 8, BMJ Publishing Group, 2018, p. e021311. doi:10.1136/bmjopen-2017-021311. * [19] M. J. Nieuwenhuijsen, D. Donaire-Gonzalez, M. Foraster, D. Martinez, A. Cisneros, Using personal sensors to assess the exposome and acute health effects, International Journal of Environmental Research and Public Health 11 (8) (2014) 7805–7819. doi:10.3390/ijerph110807805. * [20] C. P. 
Wild, The exposome: From concept to utility, International Journal of Epidemiology 41 (1) (2012) 24–32. doi:10.1093/ije/dyr236. * [21] M. Loh, D. Sarigiannis, A. Gotti, S. Karakitsios, A. Pronk, E. Kuijpers, I. Annesi-Maesano, N. Baiz, J. Madureira, E. O. Fernandes, M. Jerrett, J. W. Cherrie, How sensors might help define the external exposome, International Journal of Environmental Research and Public Health 14 (4) (2017) 343. doi:10.3390/ijerph14040434. * [22] M. Vrijheid, The exposome: A new paradigm to study the impact of environment on health, Thorax 69 (9) (2014) 876–878. doi:10.1136/thoraxjnl-2013-204949. * [23] V. Siroux, L. Agier, R. Slama, The exposome concept: A challenge and a potential driver for environmental health research, European Respiratory Review 25 (140) (2016) 124–129. doi:10.1183/16000617.0034-2016. * [24] Y. Liang, X. Zheng, D. D. Zeng, A survey on big data-driven digital phenotyping of mental health, Information Fusion 52 (2019) 290–307. doi:10.1016/j.inffus.2019.04.001. * [25] R. A. Cummins, F. A. S. Ps, Personal Wellbeing Index-Adult (PWI-A) (English), 5th Edition, The International Wellbeing Group Manual 2013, Personal Wellbeing Index-Adult. URL http://www.acqol.com.au/instruments#measures and http://www.australianunity.com.au/ * [26] M. M. Bradley, P. J. Lang, Measuring emotion: The self-assessment manikin and the semantic differential, Journal of Behavior Therapy and Experimental Psychiatry 25 (1) (1994) 49–59. doi:10.1016/0005-7916(94)90063-9. * [27] J. Needham, Science and Civilisation in China: Mathematics and the Sciences of the Heavens and the Earth, Cambridge University Press 3 (1959) 147. * [28] I. T. Jollife, J. Cadima, Principal component analysis: A review and recent developments (apr 2016). doi:10.1098/rsta.2015.0202. * [29] David Scott, Q-Q plots. URL http://onlinestatbook.com/2/advanced_graphs/q-q_plots.html * [30] K. Paoin, K. Ueda, X. T. Seposo, J. Hayano, K. Kiyono, N. Ueda, T. Kawamura, A. Honda, H. 
Takano, Association between PM2.5 exposure and heart rate variability for the patients with cardiac problems in Japan, Air Quality, Atmosphere and Health 13 (3) (2020) 339–347. doi:10.1007/s11869-020-00797-8. URL https://doi.org/10.1007/s11869-020-00797-8 * [31] T. Münzel, T. Gori, W. Babisch, M. Basner, Cardiovascular effects of environmental noise exposure (2014). doi:10.1093/eurheartj/ehu030. URL /pmc/articles/PMC3971384/?report=abstracthttps://www.ncbi.nlm.nih.gov/pmc/articles/PMC3971384/ * [32] E. Kanjo, NoiseSPY: A real-time mobile phone platform for urban noise monitoring and mapping, Mobile Networks and Applications 15 (4) (2010) 562–574. doi:10.1007/s11036-009-0217-y. * [33] A. Dobrin, A Review of Properties and Variations of Voronoi Diagrams, Tech. rep. (2005). URL http://www.whitman.edu/mathematics/SeniorProjectArchive/2005/dobrinat.pdf * [34] W. Pokojski, P. Pokojska, Voronoi diagrams – inventor, method, applications, Polish Cartographical Review 50 (3) (2018) 141–150. doi:10.2478/pcr-2018-0009. * [35] G. E. Hinton, S. Osindero, Y. W. Teh, A fast learning algorithm for deep belief nets, Neural Computation 18 (7) (2006) 1527–1554. doi:10.1162/neco.2006.18.7.1527. URL https://www.mitpressjournals.org/doi/abs/10.1162/neco.2006.18.7.1527 * [36] A. Fischer, C. Igel, Training restricted Boltzmann machines: An introduction, Pattern Recognition 47 (1) (2014) 25–39. doi:10.1016/j.patcog.2013.05.025. * [37] M. M. Hassan, M. G. R. Alam, M. Z. Uddin, S. Huda, A. Almogren, G. Fortino, Human emotion recognition using deep belief network architecture, Information Fusion 51 (2019) 10–18. doi:10.1016/j.inffus.2018.10.009. * [38] E. Mocanu, P. H. Nguyen, M. Gibescu, Deep Learning for Power System Data Analysis, in: Big Data Application in Power Systems, Elsevier, 2018, pp. 125–158. doi:10.1016/B978-0-12-811968-6.00007-3. * [39] J. Smolander, M. Dehmer, F. 
Emmert-Streib, Comparing deep belief networks with support vector machines for classifying gene expression data from complex disorders, FEBS Open Bio 9 (7) (2019) 1232–1248. doi:10.1002/2211-5463.12652. URL https://onlinelibrary.wiley.com/doi/abs/10.1002/2211-5463.12652 * [40] C. L. Lisetti, F. Nasoz, Using noninvasive wearable computers to recognize human emotions from physiological signals (sep 2004). doi:10.1155/S1110865704406192. * [41] N. Sharma, T. Gedeon, Objective measures, sensors and computational techniques for stress recognition and classification: A survey, Computer Methods and Programs in Biomedicine 108 (3) (2012) 1287–1301. doi:10.1016/j.cmpb.2012.07.003. URL https://pubmed.ncbi.nlm.nih.gov/22921417/
Symplectic embeddings, homotopy algebras and almost Poisson gauge symmetry Vladislav G. Kupriyanov${}^{1}$ and Richard J. Szabo${}^{2}$ ${}^1$ Centro de Matemática, Computação e Cognição, Universidade Federal do ABC, Santo André, SP, and Tomsk State University, Tomsk, Russia Email<EMAIL_ADDRESS> ${}^2$ Department of Mathematics, Heriot-Watt University, Colin Maclaurin Building, Riccarton, Edinburgh EH14 4AS, U.K. and Maxwell Institute for Mathematical Sciences, Edinburgh, U.K. and Higgs Centre for Theoretical Physics, Edinburgh, U.K. Email<EMAIL_ADDRESS> We formulate general definitions of semi-classical gauge transformations for noncommutative gauge theories in general backgrounds of string theory, and give novel explicit constructions using techniques based on symplectic embeddings of almost Poisson structures. In the absence of fluxes the gauge symmetries close a Poisson gauge algebra and their action is governed by a $P_\infty$-algebra which we construct explicitly from the symplectic embedding. In curved backgrounds they close a field-dependent gauge algebra governed by an $L_\infty$-algebra which is not a $P_\infty$-algebra. Our technique produces new all orders constructions which are significantly simpler compared to previous approaches, and we illustrate its applicability in several examples of interest in noncommutative field theory and gravity. We further show that our symplectic embeddings naturally define a $P_\infty$-structure on the exterior algebra of differential forms on a generic almost Poisson manifold, which generalizes earlier constructions of differential graded Poisson algebras, and suggests a new approach to defining noncommutative gauge theories beyond the gauge sector and the semi-classical limit based on $A_\infty$-algebras. The construction of noncommutative gauge theories on manifolds with non-trivial tensor fields is an important problem for understanding the low-energy physics of D-branes in general backgrounds of string theory. 
The problem is of course not new, and has been studied for around 20 years now. But despite its relatively long history of investigation, the construction is still not completely understood in full generality. In this paper we propose a new approach to this old problem, where the main mathematical tool employed is known as a `symplectic embedding'. In this section we start by providing some background and motivation from string theory, and then proceed to an informal description of our goals and main results, putting them into context with earlier physics literature on the subject. More precise statements and proofs will be given in subsequent sections, with a detailed technical analysis of the issues discussed here. Symplectic embeddings. Consider a D-brane wrapping a submanifold $M$ in a flat background. Then the string equations imply the vanishing of the three-form $H$-flux for the $B$-field: $H=\dd B=0$; when the closed two-form $B$ is non-degenerate its inverse defines a Poisson bivector $\theta$ on $M$. Quantizing an open string with its ends on the D-brane shows that, in a suitable low-energy scaling limit, its worldvolume algebra of functions undergoes a deformation in the direction of $\theta$. The problem of constructing a noncommutative gauge theory on $M$ then starts with a deformation quantization of the Poisson manifold $(M,\theta)$. Conversely, the semi-classical limit of an associative noncommutative algebra of functions on a manifold $M$ which is a flat deformation defines a Poisson bracket \begin{align*} \{x^i,x^j\}_\theta = \theta^{ij} \ , \end{align*} on local coordinate functions $x^i$; in other words, the Poisson bracket is the first order semi-classical approximation to the star-product in deformation quantization. 
It is this semi-classical limit that we shall mostly work with in this paper, where all constructions are entirely geometric and are regarded as the classical infinitesimal data whose quantization yields the required ingredients of a noncommutative gauge theory. In the case of a flat D-brane, the Poisson bivector $\theta$ is constant, the worldvolume deformation is provided by the usual Moyal-Weyl star-product [62], and the construction of noncommutative gauge theories is standard and well-known [63]. For a curved D-brane in flat space, $\theta$ is not constant, and the worldvolume deformation is provided by the Kontsevich star-product [22]. The first problem one then encounters is how to define derivatives in the field theory: the usual differential is no longer a derivation of the Poisson algebra, and this obstructs the naively defined gauge transformations from closing a Lie algebra. The problem can be formulated and solved by embedding the Poisson bracket in a `noncommutative phase space' \begin{align}\label{eq:NCphasespace} \{x^i,x^j\} &= \theta^{ij} \ , \notag \\[4pt] \{p_i,x^j\} &= \delta_i^j - \tfrac12\,\partial_i\theta^{jk}\,p_k + \cdots \ , \notag \\[4pt] \{p_i,p_j\} &= 0 \ . \end{align} The auxiliary `momentum' coordinates $p_i$ are regarded as `derivatives' when acting on functions $f$ on $M$, since when $\theta$ is constant these brackets give $\{p_i,f\}=\partial_if$; the ellipsis denotes higher order monomials in the momenta $p_k$ which accompany higher orders in the bivector $\theta$ and its derivatives. In general, the extra derivative terms in these brackets ensure that they fulfill the Jacobi identity order by order in $p_i$ when $\theta$ is non-constant. Now the action of the `twisted' derivatives $\{p_i,\,\cdot\,\}$ obeys the Leibniz rule, as a consequence of the Jacobi identity for the bracket. Noncommutative gauge theories have been constructed in this manner in e.g. [6]; see e.g. 
[66] for a review of this and other approaches, along with further references. The Poisson brackets (<ref>) also offer an alternative approach to quantization of the Poisson manifold $(M,\theta)$. We can map the coordinates $(x,p)$ to canonically conjugate variables $(X,P)$ by means of a generalized Bopp shift; the existence of such a diffeomorphism is guaranteed by Darboux's theorem, at least locally or in the case where $M$ is covered by a single Darboux chart. The canonical coordinates $(X,P)$ can be quantized geometrically in the usual way via a Schrödinger polarization, and mapped back to provide a polydifferential representation of the quantum version of the brackets (<ref>) on the space of functions on $M$. By formally expanding functions of the polydifferential operators in Taylor series, this also constructs a star-product and provides a deformation quantization of the algebra of functions on $M$. This construction of a noncommutative phase space is called a symplectic embedding of the Poisson structure $\theta$; in Section <ref> we will see some explicit all orders examples which are given by closed analytic expressions. The brackets can be derived as the symplectic structure arising from open string quantization on the D-brane in a low-energy scaling limit [21]. In particular, it enables one to define semi-classical gauge transformations that consistently close a gauge algebra defined by the Poisson brackets: we use `twisted covariant derivatives' of gauge parameters defined by the $p_i$ and suitably restrict them by imposing constraints on the phase space that eliminate the auxiliary momentum coordinates. This is explained in detail in Section <ref>. Mathematically, symplectic embeddings generalize the notion of `symplectic realizations' in Poisson geometry [71, 36]. We discuss the precise relation between the two notions in Section <ref>. 
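The claim that the momentum correction in the brackets (<ref>) restores the Jacobi identity order by order can be checked symbolically. The sketch below is our own illustration (not from the text), using sympy with an su(2)-type linear Poisson structure: it treats the coefficient of the first-order correction as an unknown $c$ and solves for the value that cancels the $O(p^0)$ part of the Jacobiator. The sign of $c$ flips with the orientation conventions for the phase-space bracket, so only its magnitude $\tfrac12$ is convention-independent.

```python
import sympy as sp

x = sp.symbols('x1:4')
p = sp.symbols('p1:4')
c = sp.Symbol('c')              # unknown coefficient of the momentum correction
z = list(x) + list(p)

# linear (su(2)-type) Poisson structure theta^{ij} = eps^{ijk} x_k
theta = sp.Matrix(3, 3, lambda i, j: sum(sp.LeviCivita(i, j, k) * x[k]
                                         for k in range(3)))

# phase-space bivector: {x^i,x^j}=theta^{ij}, {p_i,x^j}=delta_i^j + c d_i theta^{jk} p_k
P = sp.zeros(6, 6)
for i in range(3):
    for j in range(3):
        P[i, j] = theta[i, j]
        corr = sp.KroneckerDelta(i, j) + c * sum(sp.diff(theta[j, k], x[i]) * p[k]
                                                 for k in range(3))
        P[3 + i, j] = corr          # {p_i, x^j}
        P[j, 3 + i] = -corr         # antisymmetry

pb = lambda f, g: sum(P[a, b] * sp.diff(f, z[a]) * sp.diff(g, z[b])
                      for a in range(6) for b in range(6))
jac = lambda f, g, h: sp.expand(pb(f, pb(g, h)) + pb(g, pb(h, f)) + pb(h, pb(f, g)))

# momenta act as derivatives at leading order: {p_1, f}|_{p=0} = d_1 f
f = x[0]**2 * x[1]
assert sp.expand(pb(p[0], f).subs(dict.fromkeys(p, 0)) - sp.diff(f, x[0])) == 0

# the O(p^0) part of the Jacobiator vanishes for exactly one value of c
residual = jac(p[0], x[1], x[2]).subs(dict.fromkeys(p, 0))
assert sp.solve(residual, c) == [sp.Rational(1, 2)]
```

Higher orders of the Jacobiator are only cancelled by the higher monomials indicated by the ellipsis in (<ref>); the check above isolates the leading order, where the single correction term suffices.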
The global structure which integrates a Poisson manifold is called a symplectic groupoid [70], whose source maps always define symplectic realizations. They were introduced as part of the program of quantizing Poisson manifolds. Symplectic groupoids capture the semi-classical limit of deformation quantization, as inferred by our physical arguments above, and as shown precisely in [16] where the symplectic groupoid is constructed as the classical phase space of the open string sigma-model. Quantized differential forms. The problem of finding suitable derivative operators in a noncommutative field theory on a Poisson manifold $(M,\theta)$ can also be formulated dually as the problem of constructing a suitable noncommutative differential calculus on $M$. In the semi-classical limit, this amounts to finding an extension of the Poisson bracket to the exterior algebra of differential forms. The problem of constructing a differential graded Poisson algebra in this way has been addressed from several points of view, see e.g. [20, 30, 5, 54, 37, 53], while from the perspective of deformation quantization it is discussed in e.g. [9, 2, 25]. These constructions all depend on the choice of an auxiliary connection on $M$ and typically require the Poisson bivector $\theta$ to be invertible. Noncommutative gauge theories are treated using this formalism in [32]. Their definition of gauge transformations is similar in structure to ours in Section <ref>, but differs in important details. A crucial distinction is that our formulation is based on symplectic embeddings, which always exist locally without any further data and apply to arbitrary Poisson structures, while the formulation of [32] requires the extra data of a symplectic connection on $M$. Nonassociative deformation quantization. The problem is more involved in the case of a curved background, wherein the $H$-flux is non-zero: $H=\dd B\neq0$. 
In this case the Jacobi identity for the semi-classical bracket is violated, and the bivector $\theta$ defines an $H$-twisted Poisson structure [58, 38]. The worldvolume deformations of D-branes in this setting are still given by the Kontsevich expansion [22], which now leads to a nonassociative star-product. In this case, the symplectic embedding is described by a noncommutative (but associative) phase space of the form \begin{align*} \{x^i,x^j\} &= \theta^{ij} - \Pim^{ijk}\, p_k + \cdots \ , \\[4pt] \{p_i,x^j\} &= \delta_i^j - \tfrac12\,\partial_i\theta^{jk}\,p_k + \cdots \ , \\[4pt] \{p_i,p_j\} &= 0 \ , \end{align*} where \begin{align*} \Pim^{ijk} = \tfrac13\,\big(\theta^{il}\partial_l\theta^{jk}+{\rm cyclic}\big)= \theta^{il}\,\theta^{jm}\,\theta^{kn}\,H_{lmn} \end{align*} is the Jacobiator $\{\{x^i,x^j\}_\theta,x^k\}_\theta+{\rm cyclic}$, the trivector encoding the violation of the Jacobi identity for the original twisted Poisson brackets $\{x^i,x^j\}_\theta=\theta^{ij}$; its inclusion is now necessary to ensure that these extended brackets fulfill the Jacobi identity. These brackets agree precisely with the symplectic structure found in [33, 31] from quantizing an open string with its ends on a D-brane in a constant $H$-flux background. Regarding again the momenta as `derivatives', now the associative phase space structure is that of an algebra of pseudo-differential operators, whereby even the bracket of two functions on $M$ is generally a differential operator. This was explained by [46], who show that the symplectic embedding in this case captures the semi-classical limit of the associative composition algebra of differential operators on $M$ corresponding to a twisted Poisson structure [56]. 
In D-brane physics this poses a serious problem in the definition of gauge theories, as the gauge transformation of a gauge field will generically produce not another gauge field but a pseudo-differential operator; a resolution was proposed by [31] which involves promoting gauge parameters to functions of both coordinates $x^i$ and momenta $p_i$. In the present paper we propose a different way of resolving this issue, which is similar in spirit, but involves a suitable constraint on the phase space as in the case of a Poisson structure as well as field dependent gauge transformations. The precise definitions and constructions are explained in Section <ref>. This extended symplectic embedding formalism was initiated in [46] to describe the (associative) classical and quantum mechanics of an electric charge in smooth distributions of magnetic monopoles, whose phase space is described by a twisted Poisson structure. The main idea behind our construction of the “nonassociative” gauge algebra consists in starting with a given almost Poisson structure, and aiming to get the violation of the Jacobi identity under control. For this, we introduce auxiliary variables $p_i$ and construct its symplectic embedding. In the larger space the Jacobi identity is now satisfied, but one is then faced with the problem of interpreting the auxiliary (unphysical) degrees of freedom, which cannot be removed in the nonassociative case [46]. The most non-trivial point of our construction is getting rid of the auxiliary variables by introducing constraints in a consistent way, that is, in such a way that, in the “nonassociative” gauge algebra, the commutator of two gauge transformations is again a gauge transformation, but with a field dependent gauge parameter. The global objects corresponding to symplectic embeddings of twisted Poisson or more generally almost Poisson structures are not presently understood. From a string theory perspective, they can be described as follows. 
A fundamental closed string in a constant $H$-flux background can be lifted to M-theory as an open membrane ending on an M5-brane in a three-form $C$-field background with constant flux. This gives rise to a noncommutative algebra of functions on the loop space of the M5-brane worldvolume [8], whose deformation quantization has as semi-classical limit a symplectic groupoid on the loop space obtained through transgression of the three-form $H$ [59, 60]. In this sense symplectic embeddings play a similar role in the semi-classical limit of deformation quantization of almost Poisson structures. We give a more precise account in Section <ref>. Strong homotopy algebras. Recent advances in nonassociative deformation quantization have produced some explicit constructions of star-products which quantize twisted Poisson and even almost Poisson structures. A star-product which quantizes the phase space of a closed string in a constant $R$-flux background, or equivalently an electric charge moving in a uniform magnetic monopole density, was constructed in [55]; the phase space defines a twisted Poisson structure and the star-product is, in a sense, a nonassociative analog of the Moyal-Weyl star-product. In [41] a star-product was proposed which quantizes the linear almost Poisson structure of the imaginary octonion algebra, which is an example of an almost Poisson structure that is not twisted Poisson; in [45] this was applied to the quantization of the phase space of a membrane in a non-geometric $R$-flux background of M-theory, or equivalently an M-wave in a non-geometric Kaluza-Klein monopole density, where it was shown that the contraction of the octonionic star-product reproduces the monopole star-product. See [68] for a review of these developments and further references. These examples have nice physical features. 
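The statement that the Jacobiator of the monopole phase-space brackets measures the magnetic charge distribution is easy to make concrete. The sympy sketch below is our own illustration, with conventions $\{x^i,p_j\}=\delta^i_j$ and $\{p_i,p_j\}=\varepsilon_{ijk}\,B^k(x)$; the overall sign of the Jacobiator depends on the ordering conventions.

```python
import sympy as sp

x = sp.symbols('x1:4')
p = sp.symbols('p1:4')
z = list(x) + list(p)

def bivector(B):
    """Phase-space bivector with {x^i,p_j}=delta^i_j, {p_i,p_j}=eps_{ijk}B^k(x)."""
    P = sp.zeros(6, 6)
    for i in range(3):
        P[i, 3 + i] = 1
        P[3 + i, i] = -1
        for j in range(3):
            P[3 + i, 3 + j] = sum(sp.LeviCivita(i, j, k) * B[k] for k in range(3))
    return P

def jac(P, f, g, h):
    pb = lambda u, v: sum(P[a, b] * sp.diff(u, z[a]) * sp.diff(v, z[b])
                          for a in range(6) for b in range(6))
    return sp.simplify(pb(f, pb(g, h)) + pb(g, pb(h, f)) + pb(h, pb(f, g)))

# uniform monopole density: B = x has div B = 3; the Jacobiator measures it
P_mon = bivector(sp.Matrix(x))
assert jac(P_mon, p[0], p[1], p[2]) == -3     # = -div B in these conventions
assert jac(P_mon, x[0], p[1], p[2]) == 0      # all other Jacobiators vanish

# a divergence-free (source-free) field gives an honest Poisson bracket
P_free = bivector(sp.Matrix([-x[1], x[0], sp.Integer(0)]))
assert jac(P_free, p[0], p[1], p[2]) == 0
```

Only the totally momentum-directed Jacobiator is non-zero, and it is proportional to $\vec\nabla\cdot\vec B$, vanishing precisely when the bracket is twisted by a closed three-form.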
In particular, nonassociativity vanishes on-shell, as it should since string theory is based on a two-dimensional quantum field theory that involves strictly associative structures. Precisely, the integrated associator vanishes for these star-products, as the star-associator of three functions is a `total derivative'; this is also true for the nonassociative star-products of [22], provided one chooses a suitable density to integrate against (see e.g. [66]). However, this property fails in general for arbitrary almost Poisson structures, and the question arises as to what structure should be used to control the violation of associativity, or the Jacobi identity in the semi-classical limit, in a consistent way. A good candidate is Stasheff's $A_\infty$-algebras [65], and also $L_\infty$-algebras [51] where the violation of the Jacobi identity is proportional to a higher coherent homotopy or a `total derivative' of some higher bracket, together with their extension to $P_\infty$-algebras [69]. This suggests that strong homotopy algebras might provide the right setting for the quantization of generic twisted Poisson and almost Poisson structures; their role in deformation quantization of twisted Poisson structures was already indicated in [22, 55]. In particular, the semi-classical limit of an $A_\infty$-algebra defines a $P_\infty$-algebra, and it is natural to look for $P_\infty$-algebras as the structure controlling the violation of the Jacobi identity. The deformation quantization of $P_\infty$-structures via the Kontsevich formality theorem was first considered in [52], and a bit later in [17]. As we discuss in Section <ref>, this expectation is indeed correct from the following perspective. We will show that our symplectic embeddings, which always exist locally, naturally endow the exterior algebra of differential forms on an arbitrary almost Poisson manifold $(M,\theta)$ with the structure of a $P_\infty$-algebra which contains the almost Poisson bracket. 
The $P_\infty$-structure controls not only the potential violation of the Jacobi identity for the almost Poisson bracket, but also the violation of the Leibniz rule for the usual differential, in a way which is compatible with the derivation properties of the original bracket with respect to the classical pointwise multiplication of functions, extended to the exterior product of differential forms. We subsequently sketch an alternative new approach to the quantization of differential forms which does not require the choice of an auxiliary connection on $M$ and quantizes this $P_\infty$-algebra to an $A_\infty$-algebra. The details are beyond the scope of the present paper and are left for future investigation, but they illustrate another novel application of our symplectic embedding formalism, as well as its intimate relation with homotopy algebras. The use of $L_\infty$-algebras as an alternative new construction of noncommutative gauge theories on arbitrary almost Poisson manifolds $(M,\theta)$ was pioneered by [11] and called the `$L_\infty$-bootstrap'. In Section <ref> we provide the dictionary between our symplectic embedding formalism and the formulation in terms of $L_\infty$-algebras for noncommutative gauge transformations, finding agreement with the lower degree brackets that were constructed explicitly by [11, 43]. However, our approach is much better adapted to obtaining explicit closed all orders expressions for all brackets of the gauge $L_\infty$-algebra. We illustrate this explicitly on several concrete non-trivial examples of relevance to physics in Section <ref>, wherein we obtain the complete sets of $L_\infty$-brackets to all orders. 
In particular, for the twisted Poisson structure on the phase space of an electric charge in a constant magnetic monopole density, we obtain, for the first time, the complete gauge $L_\infty$-structure in closed form, see Section <ref>; this should aid in understanding the symmetries of a nonassociative theory of gravity underlying the low-energy sector of closed non-geometric strings (see e.g. [68]). Technically, our approach based on symplectic embeddings is much simpler than the $L_\infty$-bootstrap approach of [11], particularly for explicit calculations. In this language we also find a significant distinction between Poisson and almost Poisson gauge symmetries. For a Poisson manifold $(M,\theta)$ the gauge $L_\infty$-algebra is itself a $P_\infty$-algebra which is obtained by a truncation of the $P_\infty$-structure underlying the semi-classical limit of quantization of differential forms. This suggests a path towards the construction of noncommutative gauge transformations beyond the semi-classical limit, in terms of $A_\infty$-algebras. However, this is not the case for the homotopy algebras which govern the closure of “nonassociative" gauge transformations: our construction of gauge transformations defines an $L_\infty$-algebra which is not a $P_\infty$-algebra, unless the Jacobi identity is satisfied, that is, only “associative" gauge algebras are controlled by $P_\infty$-algebras. The reasons for this are explained in detail in Section <ref>: the essential feature is that for an almost Poisson structure the $P_\infty$-structure on differential forms cannot be truncated to a subalgebra containing the gauge $L_\infty$-algebra. An alternative approach to the construction of noncommutative gauge theories using homotopy algebras has been put forth recently by [24], whereby the underlying $L_\infty$-structure is also quantized. 
In contrast to our approach, the gauge theory $L_\infty$-algebra involves only finitely many brackets as in the classical case, and compatibility with the Leibniz rule is manifest from the outset. It would be interesting to understand better how this more algebraic approach is related to the geometric approach of the present paper. Structure of the paper. In this paper we work solely with the kinematical sector of a complete noncommutative gauge theory, and only with the first order semi-classical approximations to proper noncommutative gauge transformations. This will elucidate, in a model independent way, the classical geometric structures underlying the full gauge algebras which quantize them. In the absence of any dynamics, we will require that our gauge transformations close to a Lie algebra. One of the main technical achievements of this paper is the ability to do this in the nonassociative case: the gauge transformations obtained by the naive passage from the Poisson to the general almost Poisson case do not immediately close, even when one modifies the requirement of closure to include field dependent gauge transformations. We show explicitly how closure can be accomplished using symplectic embeddings by the addition of terms to the gauge variations which compensate the undesired contributions to the closure formulas; we demonstrate that these extra terms are explicitly calculable. When the dynamical sector of a particular gauge theory is included, it may be possible to avoid adding these additional terms in the gauge variations and work instead with a weaker notion of closure, whereby the almost Poisson gauge algebra also involves the field equations, that is, gauge transformations only close on-shell, see Remark <ref>. Barring these model dependent extensions, it is remarkable that we are able to describe the closure of “nonassociative” gauge transformations in terms of strictly associative Lie algebras, albeit field dependent ones. 
The majority of this paper can be read by taking $M$ to be a coordinate manifold $\real^d$ (or an open subset thereof), but in several places we have written tensor quantities in a more covariant form that we hope makes some of our calculations amenable to global generalizations in future work. Our main purpose here is to unravel the geometric and algebraic structures that govern the first steps to constructing noncommutative gauge theories on D-branes in general string backgrounds. This turns out to be a technically formidable task even with our restriction to local coordinate charts. Therefore, throughout we shall carefully formulate our geometric framework and then carry out all calculations via tensor calculus in local coordinates, leaving the corresponding description of the covariant tensor calculus within our framework for future study. The outline of the remainder of this paper is as follows. In Section <ref> we give a precise description of our symplectic embeddings which we motivated from physical arguments above, and compare it to the existing notions of symplectic realizations and symplectic groupoids from the mathematical literature on Poisson geometry. In Section <ref> we give a precise general definition of what we mean by semi-classical gauge transformations on a Poisson manifold, and provide an explicit construction using symplectic embeddings of the Poisson structure. In Section <ref> we extend these general definitions and constructions to the general case of almost Poisson manifolds. The closure condition on the semi-classical gauge algebra now requires an extension of the notion of gauge transformations using field dependent gauge parameters, which we formulate precisely in terms of $1$-jets of the cotangent bundle of the underlying manifold $M$, and an extension of their construction by symplectic embeddings using horizontal vector fields on the jet space. 
In Section <ref> we show how our symplectic embedding construction of gauge algebras can be cast into the standard framework of $L_\infty$-algebras [27, 34, 35], contrasting our approach with the approach based on the $L_\infty$-bootstrap. We further show how our symplectic embeddings define $P_\infty$-structures on differential forms and discuss their relation with the $L_\infty$-structures on our gauge algebras. Finally, in Section <ref> we work out several explicit examples as illustrations of our formalism for both Poisson and almost Poisson gauge transformations. Two appendices at the end of the paper collect some technical details which are used in the main text. We are grateful to Alex Schenkel, Jim Stasheff, Dima Vassilevich and Alan Weinstein for helpful discussions and correspondence. V.G.K. acknowledges the support from the São Paulo Research Foundation (FAPESP), grant 2021/09313-8. The work of R.J.S. was supported in part by the Consolidated Grant ST/P000363/1 “Particle Theory at the Higgs Centre” from the UK Science and Technology Facilities Council (STFC). Symplectic embeddings of almost Poisson structures In this section we develop our notion of symplectic embeddings, which encompasses earlier constructions from the physics literature in a rigorous and precise way. We provide a detailed comparison between our symplectic embeddings and the more common concept of symplectic realizations in Poisson geometry, and discuss their relation to deformation quantization. In Section <ref> we will give detailed applications to some non-trivial explicit examples. §.§ Formal deformations of cotangent bundles We begin with some definitions that will permeate this paper. Smooth functions and tensors on a manifold $M$, as well as all vector spaces, are always considered with the ground field $\real$ or $\complex$. Let $M$ be a manifold. An almost Poisson structure on $M$ is a bivector field $\theta\in\mathfrak{X}^2(M)$. 
The bivector field is equivalently encoded by the skew-symmetric almost Poisson bracket $\{\,\cdot\,,\,\cdot\,\}_\theta$ on $C^\infty(M)$ given by \{f,g\}_\theta=\theta(\dd f,\dd g) \ . The Schouten-Nijenhuis bracket of the bivector $\theta$ with itself, the Jacobiator, is the trivector field $\Pim=[\theta,\theta]\in\mathfrak{X}^3(M)$, which satisfies \begin{align*} \tfrac12\,\Pim(\dd f,\dd g,\dd h)={\sf Cyc}_{f,g,h}\,\{f,\{g,h\}_\theta\}_\theta \ , \end{align*} where ${\sf Cyc}_{f,g,h}$ indicates the cyclic sum over the functions $f,g,h\in C^\infty(M)$. The Jacobi identity $[[\theta,\theta],\theta]=0$ for the Schouten bracket implies that the Jacobiator obeys the integrability condition \begin{align}\label{eq:Piint} [\Pim,\theta] = 0 \ . \end{align} A bivector $\theta$ on $M$ gives rise to a linear map \theta^\sharp : \Omega^1(M) \longrightarrow \mathfrak{X}(M) \ , \quad \alpha \longmapsto \theta(\alpha,\,\cdot\,) \ . If \Pim=\tfrac12\,\midwedge^3\,\theta^\sharp H for a closed three-form $H\in\Omega^3(M)$, then $\theta$ is an $H$-twisted Poisson structure on $M$ and $\{\,\cdot\,,\,\cdot\,\}_\theta$ is the corresponding $H$-twisted Poisson bracket. The bracket $\{\,\cdot\,,\,\cdot\,\}_\theta$ obeys the Jacobi identity if and only if $\Pim=0$. In this case $\theta$ is a Poisson structure and $\{\,\cdot\,,\,\cdot\,\}_\theta$ is the corresponding Poisson bracket. By definition, any almost Poisson structure defines a derivation $\{f,\,\cdot\,\}_\theta$ of the commutative algebra of functions $C^\infty(M)$ for any fixed $f\in C^\infty(M)$. When $\theta$ is a Poisson bivector, this property makes $(C^\infty(M),\{\,\cdot\,,\,\cdot\,\}_\theta)$ into a Poisson algebra. If an almost Poisson structure $\theta$ is non-degenerate, then it is automatically a twisted Poisson structure: in this case the map (<ref>) is invertible, and the two-form $\theta^{-1}\in\Omega^2(M)$, defined by $\theta^{-1}(X,Y)=\big((\theta^\sharp)^{-1}X\big)(Y)$ for $X,Y\in\mfX(M)$, is an almost symplectic structure on $M$ and $H=\dd\theta^{-1}$ is the twisting three-form. 
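These definitions are straightforward to implement symbolically. The following sympy sketch (our own illustration) evaluates the cyclic sum ${\sf Cyc}_{f,g,h}\,\{f,\{g,h\}_\theta\}_\theta$ for a Poisson bivector and for a genuinely almost Poisson one; we do not fix an overall normalization for $\Pim$, only the vanishing or not of the cyclic sum.

```python
import sympy as sp

x = sp.symbols('x1:4')

def bracket(theta, f, g):
    """Almost Poisson bracket {f,g}_theta = theta^{ij} d_i f d_j g."""
    return sp.expand(sum(theta[i, j] * sp.diff(f, x[i]) * sp.diff(g, x[j])
                         for i in range(3) for j in range(3)))

def cyc(theta, f, g, h):
    """Cyclic sum Cyc_{f,g,h} {f,{g,h}}, proportional to the Jacobiator."""
    return sp.simplify(bracket(theta, f, bracket(theta, g, h))
                       + bracket(theta, g, bracket(theta, h, f))
                       + bracket(theta, h, bracket(theta, f, g)))

# su(2)-type linear bivector theta^{ij} = eps^{ijk} x_k: a Poisson structure
su2 = sp.Matrix(3, 3, lambda i, j: sum(sp.LeviCivita(i, j, k) * x[k]
                                       for k in range(3)))
assert cyc(su2, x[0], x[1], x[2]) == 0
assert cyc(su2, x[0]**2, x[1], x[2]) == 0     # Jacobi holds on all functions

# a deformed bivector (theta^{12} = x1, theta^{23} = x1, theta^{31} = x2)
# is only almost Poisson: the cyclic sum no longer vanishes
apb = sp.Matrix([[0, x[0], -x[1]], [-x[0], 0, x[0]], [x[1], -x[0], 0]])
assert cyc(apb, x[0], x[1], x[2]) == x[1]
```

On coordinate functions the cyclic sum reduces to $\theta^{il}\,\partial_l\theta^{jk}+{\rm cyclic}$, which is the quantity entering the local coordinate expression for $\Pim$ below, up to normalization conventions.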
In particular, if $\theta$ is a non-degenerate Poisson bivector then $\theta^{-1}$ is a symplectic structure on $M$. We shall often work on an open subset $M=U\subset\real^d$ with local coordinates $(x^i)$. Then the bivector $\theta$ has the local coordinate expression $\theta=\frac12\,\theta^{ij}(x)\,\partial_i\wedge\partial_j$, where $\partial_i=\partial/\partial x^i$, and the almost Poisson bracket can be expressed on $U$ as \{f,g\}_\theta = \theta^{ij}\,\partial_if\,\partial_jg \ . Throughout we use the Einstein convention for implicit summation over repeated upper and lower indices. Similarly, the trivector $\Pim=[\theta,\theta]$ has the local coordinate expression \Pim^{ijk} = \tfrac13\,\big( \theta^{il}\, \partial_l\theta^{jk}+\theta^{kl}\, \partial_l\theta^{ij}+\theta^{jl}\, \partial_l\theta^{ki} \big) \ . In this paper it will be convenient to rescale the bivector $\theta$ by a formal parameter $t$, and regard it as a deformation of the zero bivector $\theta_0=0$. If $V$ is a real or complex vector space, we denote by $V[[t]]$ the space of formal power series in $t$ with coefficients in $V$; it can be regarded as a module over $\real[[t]]$ or $\complex[[t]]$. In this paper we will work in the setting of formal Poisson structures and formal symplectic structures, see e.g. [15]. Let $\theta$ be an almost Poisson structure on a manifold $M$. Write $\pi:T^*M\to M$ for the cotangent bundle of $M$. 
A symplectic embedding of $(M,\theta)$ is a formal symplectic structure $\omega\in\Omega_{\rm pol}^2(T^*M)[[t]]$ such that: (a) $\omega$ is a deformation of the canonical symplectic structure $\omega_0$ on $T^*M$, that is, $\omega|_{t=0}=\omega_0$; (b) The corresponding Poisson bracket $\{\,\cdot\,,\,\cdot\,\}_{\omega^{-1}}$ on the ring $C_{\rm pol}^\infty(T^*M)[[t]]$ restricts to the zero section $M\subset T^*M$ as \{\pi^*f,\pi^*g\}_{\omega^{-1}}\big|_M = t\,\{f,g\}_\theta for all functions $f,g\in C^\infty(M)$; and (c) The zero section is a Lagrangian section of $(T^*M,\omega)$. Here and in the following we use the subscript ${}_{\rm pol}$ to denote spaces of smooth tensor fields on the cotangent bundle $T^*M$ which are polynomial on the fibres. In general, the formal symplectic structure $\omega$ is a formal power series of closed two-forms on $T^*M$ (which are polynomial on the fibres) given as $\omega=\omega_0+\sum_{n=1}^\infty\, t^n\,\omega_n$. The inverse is a formal Poisson structure $\omega^{-1}\in\mfX_{\rm pol}^2(T^*M)[[t]]$ which is a formal power series of bivectors on $T^*M$ given as \omega^{-1} = \Thetam_0 + \sum_{n=1}^\infty \, t^n\,\Thetam_n \ , where $\Thetam_0=\omega_0^{-1}$ is the Poisson bivector field on $T^*M$ corresponding to $\omega_0$. The Poisson integrability condition $[\omega^{-1},\omega^{-1}]=0$ implies \sum_{k=0}^n\,[\Thetam_k,\Thetam_{n-k}] = 0 for all $n\geq0$, and the condition (<ref>) implies $\omega^{-1}(\pi^*\alpha,\pi^*\beta)\big|_M = t\,\theta(\alpha,\beta)$ which leads to \Thetam_1(\pi^*\alpha,\pi^*\beta)\big|_M=\theta(\alpha,\beta) \qquad \mbox{and} \qquad \Thetam_n(\pi^*\alpha,\pi^*\beta)\big|_M=0 \ , for all $\alpha,\beta\in\Omega^1(M)$ and $n\geq2$. The Lagrangian section condition further imposes $\omega(X,Y)=0$ for all $X,Y\in\mfX(M)$. 
In this sense a symplectic embedding can be thought of as a formal deformation of an almost Poisson structure $\theta$ on $M$, for which $[\theta,\theta]\neq0$, to a nondegenerate Poisson structure $\omega^{-1}$ on $T^*M$, for which $[\omega^{-1},\omega^{-1}]=0$; in particular, the canonical symplectic structure $\omega_0$ yields a symplectic embedding for the trivial bivector $\theta_0=0$. Definition <ref> is an adapted version, to the almost Poisson case, of a symplectic realization of a Poisson structure $\theta$ on $M$, which more generally involves a surjective submersion $\pi:S\to M$ of a symplectic manifold $S$ that is a Poisson morphism admitting a Lagrangian section $M\to S$. When $S=T^*M$ with $\pi$ the cotangent bundle projection and $M\to S$ the zero section, a symplectic realization is a symplectic embedding, i.e. the condition (<ref>) holds, but the converse is not generally true. Similarly, one defines an almost symplectic realization of a twisted Poisson structure. These can all be constructed globally in terms of integrating symplectic groupoids for $(M,\theta)$ [70, 36, 18]. In this paper we deal only with local constructions, where the existence of symplectic embeddings is always guaranteed as a formal deformation of the trivial symplectic groupoid $(T^*M,\omega_0)$ near the identity elements. We shall now establish the existence of local symplectic embeddings by treating the three cases in Definition <ref> individually, in order of increasing complexity. In the following we work on the open neighbourhood $T^*U$ of the zero section of $M=U\subseteq\real^d$, and denote local coordinates thereon by $(x^i,p_i)$, where $(p_i)$ are coordinates in the normal directions to $U\subset T^*U$; in particular, $U$ is given by the equations $p_i=0$ in $T^*U$. We also write $\tilde\partial{}^i=\partial/\partial p_i$. 
§.§ Local symplectic embedding of Poisson manifolds The simplest case is the original local construction of a symplectic realization of a Poisson manifold, which is due to Weinstein [71]. Let $\lambda_0$ be the Liouville one-form, i.e. the unique one-form on $T^*M$ with the property that $s_\alpha^*\lambda_0=\alpha$ for all one-forms $\alpha\in\Omega^1(M)$, regarded as smooth sections $s_\alpha:M\to T^*M$ of the cotangent bundle of $M$ under the isomorphism $\Omega^1(M)\simeq\Gamma(T^*M)$. It is the tautological primitive for the canonical symplectic structure; in local coordinates, $\lambda_0 =p_i\,\dd x^i$ and $\omega_0=\dd\lambda_0=\dd p_i\wedge\dd x^i$ on $T^*U$. Contracting the pushforward of the Poisson bivector field $\theta$ by the zero section with the Liouville one-form defines a vector field $X^\theta\in\mfX(T^*U)$: X^\theta = \theta(\lambda_0,\,\cdot\,) \ , which in local coordinates reads X^\theta = \theta^{ij}(x)\,p_i\,\partial_j \ . The vector field $X^\theta$ defines a Poisson spray, see e.g. [23]. Let $\varphi^{\theta}_u:T^*U\to T^*U$ be the flow of $t\,X^\theta$ for $u\in[0,1]$, which is the diffeomorphism defined by \frac{\dd\varphi_u^{\theta}}{\dd u} = t\,X^{\theta}\circ\varphi_u^{\theta} \ . Then the symplectic structure $\omega\in\Omega^2(T^*U)$ constructed by <cit.> is given by the integrated pullback of the canonical symplectic structure $\omega_0$ by this flow: \omega := \int_0^1 (\varphi_u^{\theta})^* \omega_0 \,\dd u \ . Since $\varphi^\theta_u\big|_{t=0}$ is the identity for all $u\in[0,1]$, this symplectic structure is indeed a deformation of $\omega_0$. Note that the zero section $U\subset T^*U$ is a Lagrangian submanifold of $(T^*U,\omega)$, and that $\omega=\dd\lambda$ where $\lambda = \phi^i(x,p)\,\dd p_i$ with \phi^i(x,p) = \int_0^1 x^i\circ\varphi_u^{\theta} \,\dd u \ . The Jacobian matrix ${\sf J}_\phi(x,p)=\big(\frac{\partial\phi^i}{\partial x^j}\big)$ is formally invertible (because $\varphi_u^\theta\big|_{t=0}$ is the identity for all $u\in[0,1]$), and we denote its inverse by $\one+t\,\gamma(x,p)$. 
The corresponding cosymplectic structure then assumes the local form \begin{align}\label{PB1} \omega^{-1} = \tfrac t2\,\theta^{ij}(x)\, \partial_i\wedge\partial_j + \tfrac12\,\big(\delta^i_j+t\,\gamma^i_j(x,p)\big)\, \big(\partial_i\wedge\tilde\partial{}^j+\tilde\partial{}^j\wedge\partial_i\big) \ . \end{align} For later use, and in particular for comparison with the generic cases of almost Poisson structures, it is useful to cast the symplectic embedding described by (<ref>) into the setting of Definition <ref> by developing its asymptotic series in $t$. For this, we expand the bivector $\gamma$ as a formal power series \gamma_i^j(x,p) = \sum_{n=1}^\infty\, t^{n-1}\, \gamma_i^{j|i_1\cdots i_n}(x)\, p_{i_1}\cdots p_{i_n} using (<ref>), where the local functions $\gamma_i^{j|i_1\cdots i_n}(x)$ are proportional to the components of the bivector $\theta$ and their derivatives. Alternatively, they can be found by solving the Poisson integrability condition $[\omega^{-1},\omega^{-1}]=0$, which using $[\theta,\theta]=0$ yields local first order differential equations \tilde\partial{}^l\gamma_i^k - \tilde\partial{}^k\gamma_i^l + t\,\big(\gamma_j^l\,\tilde\partial{}^j\gamma_i^k - \gamma_j^k\,\tilde\partial{}^j\gamma_i^l\big) = \partial_i\theta^{lk} + t\,\big(\gamma_i^j\,\partial_j\theta^{lk} + \theta^{kj}\,\partial_j\gamma_i^l - \theta^{lj}\,\partial_j\gamma_i^k\big) for the bivector $\gamma$ in terms of the given bivector $\theta$. Substituting the formal power series (<ref>) in (<ref>) then yields an infinite system of recursive differential equations given by \begin{align} \label{gtrec} \gamma_i^{k|l} - \gamma_i^{l|k} &= \partial_i\theta^{lk} \ , \end{align} \begin{align} & (n+1)\,\big(\gamma_i^{k|li_2\cdots i_{n+1}} -\gamma_i^{l|ki_2\cdots i_{n+1}}\big) \notag \\ & \hspace{3cm} +\sum_{m=1}^n\,(n-m+1)\,\big(\gamma_j^{l|i_2\cdots i_{m+1}}\,\gamma_i^{k|ji_{m+2}\cdots i_{n+1}} - \gamma_j^{k|i_2\cdots i_{m+1}}\,\gamma_i^{l|ji_{m+2}\cdots i_{n+1}} \big) \notag \\[4pt] & \hspace{5cm} = \gamma_i^{j|i_2\cdots i_{n+1}}\,\partial_j\theta^{lk} + \theta^{kj}\,\partial_j\gamma_i^{l|i_2\cdots i_{n+1}} - \theta^{lj}\,\partial_j\gamma_i^{k|i_2\cdots i_{n+1}} \ , \label{h5} \end{align} for $n\geq1$. 
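The leading order of this recursion can be made concrete. As a hedged illustration (the choice of the linear so(3) Poisson structure $\theta^{jk}=\varepsilon^{jkm}x_m$ is an assumption made only for this check), the following sketch verifies the first recursive relation with the leading-order solution $\gamma_i^{j|k}=-\tfrac12\,\partial_i\theta^{jk}$ quoted in the sequel.

```python
# Illustrative check (assumption: so(3) linear Poisson structure theta^{jk} =
# eps^{jkm} x_m): the leading-order coefficients gamma_i^{j|k} = -1/2 d_i theta^{jk}
# satisfy gamma_i^{k|l} - gamma_i^{l|k} = d_i theta^{lk} for all index triples.
import sympy as sp
from sympy import Rational

x = sp.symbols('x1:4')
eps = lambda i, j, k: sp.LeviCivita(i + 1, j + 1, k + 1)
theta = [[sum(eps(i, j, m)*x[m] for m in range(3)) for j in range(3)] for i in range(3)]

gamma = lambda i, j, k: -Rational(1, 2)*sp.diff(theta[j][k], x[i])

ok = all(sp.simplify(gamma(i, k, l) - gamma(i, l, k) - sp.diff(theta[l][k], x[i])) == 0
         for i in range(3) for k in range(3) for l in range(3))
assert ok
print("leading-order recursion check passed")
```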
A formal power series solution of (<ref>) was constructed in this way by [40], with the first two leading orders given by \begin{align} \gamma_i^{j|k} &= -\tfrac12\,\partial_{i}\theta^{jk} \ , \nonumber \\[4pt] \gamma_i^{j|kl} &= \tfrac1{12}\,\big(\theta^{km}\,\partial_i\partial_m\theta^{jl} + \theta^{lm}\,\partial_i\partial_m\theta^{jk}\big) - \tfrac1{24}\,\big(\partial_i\theta^{lm}\,\partial_m\theta^{jk} + \partial_i\theta^{km}\,\partial_m\theta^{jl} \big) \ . \label{gt}\end{align} The solution (<ref>) is not unique. There is no general notion of equivalence of symplectic embeddings for a general Poisson manifold. If $(M,\theta)$ is integrable, there is a notion of Morita self-equivalence, see e.g. [15]. We will return to the question of uniqueness, as well as the meaning of the symplectic structure on $T^*M$ away from the zero section, later on where we will find that they have natural interpretations in terms of gauge algebras (see Remark <ref> and Proposition <ref>). Let $M=\real^d$ with a constant Poisson structure $\theta$. This is the only case in which the solution $\gamma_i^j=0$ to (<ref>) is possible; this is also the solution that follows from (<ref>) for constant $\theta$. With this choice, the symplectic embedding of $(\real^d,\theta)$ is given by the strict deformation $(T^*\real^d,\omega_0+t\,\theta^*)$ of the cotangent symplectic groupoid for $\real^d$, where $\theta^*$ is the vertical two-form on $T^*\real^d$ induced by the linear dual of the bivector $\theta$ on the vector space $\real^d$, that is, $\theta^*(\tilde\partial^i,\tilde\partial^j)=\theta^{ij}$ and $\theta^*(\partial_i,\partial_j)=0= \theta^*(\partial_i,\tilde\partial^j)$. The integrating symplectic groupoid is the direct product of the pair groupoid $\real^r\times\real^r$ and the cotangent groupoid $T^*\real^{d-r}$ for $\real^{d-r}$ where $r$ is the rank of $\theta$, see e.g. [16]. However, there are also non-zero solutions of (<ref>) in this case. An explicit global extension of the local symplectic embedding (<ref>) to an open neighbourhood $N\subset T^*M$ of the zero section is given in [23, 12]. 
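The constant-$\theta$ case admits a direct matrix check: inverting $\omega_0+t\,\theta^*$ in the coordinate ordering $(x^1,x^2,p_1,p_2)$ reproduces the brackets $\{x^i,x^j\}=t\,\theta^{ij}$, $\{x^i,p_j\}=\delta^i_j$ and $\{p_i,p_j\}=0$. The following sketch (the restriction to $d=2$ is an assumption for illustration) carries this out symbolically.

```python
# Illustrative matrix check (assumption: d = 2, constant theta): with coordinates
# ordered (x^1, x^2, p_1, p_2) and omega_0 = dp_i ^ dx^i, the two-form
# omega_0 + t theta^* inverts to a Poisson matrix whose x-x block is exactly t theta.
import sympy as sp

t, th = sp.symbols('t theta')
theta = sp.Matrix([[0, th], [-th, 0]])
# Omega[a,b] = omega(e_a, e_b); omega_0 pairs x^i with p_i, theta^* lives on the p's
Omega = sp.Matrix([[0, 0, -1, 0],
                   [0, 0, 0, -1],
                   [1, 0, 0, t*th],
                   [0, 1, -t*th, 0]])
P = Omega.inv()                                      # cosymplectic (Poisson) matrix
assert sp.simplify(P[0:2, 0:2] - t*theta) == sp.zeros(2, 2)   # {x^i,x^j} = t theta^{ij}
assert sp.simplify(P[0:2, 2:4] - sp.eye(2)) == sp.zeros(2, 2)  # {x^i,p_j} = delta^i_j
assert sp.simplify(P[2:4, 2:4]) == sp.zeros(2, 2)              # {p_i,p_j} = 0
print("constant-theta symplectic embedding check passed")
```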
The construction depends on the choice of an affine connection $\nabla$ on $M$. It amounts to replacing the vector field (<ref>) with X^{\theta,\nabla} = \theta^{ij}(x)\,p_i\,\partial_j + p_k\,p_l\,\theta^{ki}(x)\,\Gamma_{ij}^l(x)\,\tilde\partial{}^j \ , where $\nabla_{\partial_i}\partial_j = \Gamma_{ij}^l(x)\, \partial_l$, and replacing $\varphi_u^{\theta}$ with the corresponding flow $\varphi_u^{\theta,\nabla}$ of $t\,X^{\theta,\nabla}$ in (<ref>). The connection $\nabla$ induces a Lie algebroid connection on the cotangent Lie algebroid $(T^*M,\theta^\sharp,[\,\cdot\,,\,\cdot\,]_\theta)$ associated to an integrable Poisson manifold $(M,\theta)$, where the bracket of this Lie algebroid extends the natural Lie bracket on exact one-forms, given by $[\dd f,\dd g]_\theta=\dd\{f,g\}_\theta$, in a unique way to the Koszul bracket [\alpha,\beta]_\theta := \LL_{\theta^\sharp\alpha}\beta-\LL_{\theta^\sharp\beta}\alpha - \dd \theta(\alpha,\beta) where $\LL$ denotes the Lie derivative. This construction makes $N$ into a local symplectic groupoid $N\rightrightarrows M$ (cf. [36]), with source map $\pi$ and target map $\pi\circ\varphi_1^{\theta,\nabla}$, which integrates the cotangent Lie algebroid. §.§ Local symplectic embedding of twisted Poisson manifolds The definition (<ref>) makes sense for any bivector $\theta$ and defines a symplectic structure, but it yields a symplectic embedding only when $\theta$ is a Poisson structure. However, when $\theta$ is an $H$-twisted Poisson structure, it is possible to modify $\omega$ accordingly [18] and turn it into an almost symplectic embedding of $(M,\theta)$: the two-form \omega_H = \int_0^1\, (\varphi_u^\theta)^*\big(\omega_0 + t\, (\pi^*H)(X^\theta,\,\cdot\,,\,\cdot\,)\big) \, \dd u is an almost symplectic structure with $\dd\omega_H=t\,\pi^*H$ which satisfies (<ref>). 
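The defining property $[\dd f,\dd g]_\theta=\dd\{f,g\}_\theta$ of the Koszul bracket can be verified directly in coordinates. The sketch below is an illustrative assumption-laden check (the so(3) structure $\theta^{ij}=\varepsilon^{ijk}x_k$ and the sample functions are chosen purely for illustration), implementing the bracket componentwise.

```python
# Illustrative coordinate check (assumptions: so(3) Poisson structure and sample
# functions f, g): the Koszul bracket of exact one-forms reproduces d{f,g}_theta.
import sympy as sp

x = sp.symbols('x1:4')
eps = lambda i, j, k: sp.LeviCivita(i + 1, j + 1, k + 1)
theta = [[sum(eps(i, j, m)*x[m] for m in range(3)) for j in range(3)] for i in range(3)]

def sharp(alpha):                # (theta^# alpha)^j = theta^{ij} alpha_i
    return [sum(theta[i][j]*alpha[i] for i in range(3)) for j in range(3)]

def lie_1form(X, beta):          # (L_X beta)_i = X^j d_j beta_i + beta_j d_i X^j
    return [sum(X[j]*sp.diff(beta[i], x[j]) + beta[j]*sp.diff(X[j], x[i])
                for j in range(3)) for i in range(3)]

def koszul(alpha, beta):         # L_{#alpha} beta - L_{#beta} alpha - d theta(alpha,beta)
    pairing = sum(theta[i][j]*alpha[i]*beta[j] for i in range(3) for j in range(3))
    La, Lb = lie_1form(sharp(alpha), beta), lie_1form(sharp(beta), alpha)
    return [sp.simplify(La[i] - Lb[i] - sp.diff(pairing, x[i])) for i in range(3)]

f, g = x[0]**2*x[1], x[1] + x[2]**3
df = [sp.diff(f, xi) for xi in x]
dg = [sp.diff(g, xi) for xi in x]
fg = sum(theta[i][j]*df[i]*dg[j] for i in range(3) for j in range(3))   # {f,g}_theta
assert all(sp.simplify(koszul(df, dg)[i] - sp.diff(fg, x[i])) == 0 for i in range(3))
print("Koszul bracket check passed")
```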
Similarly to the untwisted case (see Remark <ref>), a twisted Poisson structure makes the cotangent bundle into a Lie algebroid $(T^*M,\theta^\sharp,[\,\cdot\,,\,\cdot\,]_{\theta,H})$ with Lie bracket [\alpha,\beta]_{\theta,H} := [\alpha,\beta]_\theta +H(\theta^\sharp\alpha,\theta^\sharp\beta,\,\cdot\,) \ . In particular, for $f,g\in C^\infty(M)$ this gives [\dd f,\dd g]_{\theta,H} = \dd\{f,g\}_\theta + H(X_f,X_g,\,\cdot\,) with $X_f=\theta^\sharp\dd f$. If $\theta$ is integrable, the corresponding integrating Lie groupoid is called a twisted symplectic groupoid in [18]; it is provided by extending this local almost symplectic embedding construction by replacing $X^\theta$ with $X^{\theta,\nabla}$ and $\varphi_u^{\theta}$ with $\varphi_u^{\theta,\nabla}$ in (<ref>) [23], as explained in Remark <ref>. In this local picture, it is possible to locally `untwist' the almost symplectic embedding to a bona fide symplectic embedding by interpreting the closed three-form $H\in\Omega^3(M)$ as the curvature of a gerbe connection and following the approach of [4, 55]. For this, we choose a suitable covering of the manifold $M$ by good open subsets $U_a$. We can write $H$ in terms of local two-forms $B_a\in\Omega^2(U_a)$ as $H=\dd B_a$. On intersections $U_{ab}:=U_a\cap U_b$, the two-form $F_{ab}:=B_b-B_a$ is closed and hence exact, so it can be expressed in terms of one-form fields $A_{ab}\in\Omega^1(U_{ab})$ as $F_{ab}=\dd A_{ab}$; triple intersections involve local gauge transformations by functions $f_{abc}\in C^\infty(U_a\cap U_b\cap U_c)$ satisfying a suitable integrability condition. Then the two-form $\omega_a\in\Omega^2(U_a)$ defined by \omega_a = \omega_H-t\,\pi^*B_a is a symplectic structure on $T^*U_a$, i.e. $\dd\omega_a=0$. 
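The gerbe data entering this untwisting can be written down explicitly in a toy example. The following sketch is an editorial illustration under stated assumptions (the choice $M=\real^3$, $H=\dd x\wedge\dd y\wedge\dd z$, patches $B_a=x\,\dd y\wedge\dd z$, $B_b=z\,\dd x\wedge\dd y$ and potential $A_{ab}=xz\,\dd y$ are not from the text): it checks $H=\dd B_a=\dd B_b$ and $F_{ab}=\dd A_{ab}$ componentwise.

```python
# Illustrative gerbe-data check (assumed toy example on R^3): for
# H = dx^dy^dz take B_a = x dy^dz and B_b = z dx^dy; then F_ab = B_b - B_a
# is exact with potential A_ab = xz dy.
import sympy as sp

x, y, z = sp.symbols('x y z')
X = (x, y, z)

def d_of_1form(A):      # A = [A_1, A_2, A_3]  ->  (dA)_{ij} = d_i A_j - d_j A_i
    return [[sp.diff(A[j], X[i]) - sp.diff(A[i], X[j]) for j in range(3)]
            for i in range(3)]

def d_of_2form(B):      # component (dB)_{123} = d_1 B_{23} + d_2 B_{31} + d_3 B_{12}
    return sum(sp.diff(B[(k + 1) % 3][(k + 2) % 3], X[k]) for k in range(3))

Ba = [[0, 0, 0], [0, 0, x], [0, -x, 0]]      # B_a = x dy^dz
Bb = [[0, z, 0], [-z, 0, 0], [0, 0, 0]]      # B_b = z dx^dy
assert d_of_2form(Ba) == 1 and d_of_2form(Bb) == 1    # dB_a = dB_b = dx^dy^dz

F = [[Bb[i][j] - Ba[i][j] for j in range(3)] for i in range(3)]
A = [0, x*z, 0]                              # A_ab = xz dy
dA = d_of_1form(A)
assert all(sp.simplify(dA[i][j] - F[i][j]) == 0 for i in range(3) for j in range(3))
print("gerbe patching check passed")
```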
The local symplectic two-forms $\omega_a$ are related by the flows $\varphi_u^{ab}$ at $u=1$ generated by the vector fields $t\,\theta(A_{ab},\,\cdot\,)\in\mfX(U_{ab})$, such that \big(\varphi_1^{ab}\big)^*\{F,G\}_{\omega^{-1}_b} = \big\{\big(\varphi_1^{ab}\big)^*F,\big(\varphi_1^{ab}\big)^*G\big\}_{\omega^{-1}_a} for $F,G\in C^\infty(T^*U_{ab})$. This defines a twisted sheaf (or stack) of Poisson algebras on $T^*M$ [64]. §.§ Local symplectic embedding of almost Poisson manifolds The existence of local symplectic embeddings for arbitrary almost Poisson structures is established in [42]. The construction of Section <ref> is spoilt for a non-zero Jacobiator $\Pim\neq0$, as then $(C^\infty(M),t\,\{\,\cdot\,,\,\cdot\,\}_{\theta})$ cannot be embedded as a Poisson subalgebra of the cosymplectic algebra $(C_{\rm pol}^\infty(T^*M)[[t]],\{\,\cdot\,,\,\cdot\,\}_{\omega^{-1}})$. Nevertheless, we can glean the general form of the symplectic embedding from the two special cases (<ref>) and (<ref>). For this, we take the zero section $U\subset T^*U$ to be a Lagrangian submanifold of $(T^*U,\omega)$ and account for the additional terms involving the Jacobiator $\Pim$. This means that the formal Poisson bivector $\omega^{-1}$ can be written as \begin{align}\label{PBq} \omega^{-1} = \tfrac t2\,\underline{\theta}^{ij}(x,p)\, \partial_i\wedge\partial_j + \tfrac12\,\big(\delta^i_j+t\,\gamma^i_j(x,p)\big)\, \big(\partial_i\wedge\tilde\partial{}^j+\tilde\partial{}^j\wedge\partial_i\big) \ , \end{align} where the bivector $\gamma$ has a formal power series expansion as in (<ref>), with leading order term given in (<ref>), while \underline{\theta}^{ij}(x,p) = \theta^{ij}(x) - t\,\Pim^{ijk}(x)\, p_k + \sum_{n=2}^\infty\, t^n\, \theta^{ij|i_1\cdots i_n}(x)\, p_{i_1}\cdots p_{i_n} and the local functions $\theta^{ij|i_1\cdots i_n}(x)$ for $n\geq2$ are proportional to the components of the trivector $\Pim$ and their derivatives. 
The various functions satisfy local first order differential equations determined from the Poisson integrability condition (<ref>) with \begin{align} \Thetam_0 &= \tfrac12\,\big(\partial_i\wedge\tilde\partial^i + \tilde\partial^i\wedge\partial_i\big) \ , \notag \\[4pt] \Thetam_1 &= \tfrac12\,\theta^{ij}(x)\,\partial_i\wedge\partial_j + \tfrac12\,\gamma_j^{i|k}(x)\,p_k\,\big(\partial_i\wedge\tilde\partial^j + \tilde\partial^j\wedge\partial_i\big) \ , \notag \\[4pt] \Thetam_2 &= -\tfrac12\,\Pim^{ijk}(x)\,p_k\,\partial_i\wedge\partial_j + \tfrac12\,\gamma_j^{i|kl}(x)\,p_k\,p_l\,\big(\partial_i\wedge\tilde\partial^j + \tilde\partial^j\wedge\partial_i\big) \ , \label{eq:Thetanalmost}\\[4pt] \Thetam_{n} &= \tfrac12\,\theta^{ij|i_1\cdots i_{n-1}}(x)\,p_{i_1}\cdots p_{i_{n-1}}\,\partial_i\wedge\partial_j + \tfrac12\,\gamma_j^{i|i_1\cdots i_{n}}(x)\,p_{i_1}\cdots p_{i_{n}}\,\big(\partial_i\wedge\tilde\partial^j + \tilde\partial^j\wedge\partial_i\big) \ , \notag \end{align} for $n\geq3$. This cosymplectic structure indeed satisfies the two requisite properties: (a) it coincides with the original almost Poisson structure $\theta$ along the zero section of $T^*U$, as in (<ref>); and (b) in the case of a Poisson bivector $\theta$, the symplectic embedding (<ref>) restores the symplectic embedding (<ref>) of a Poisson structure. For example, an explicit form for the order $t^2$ term according to [42] reads \begin{align} \theta^{ij|kl}=\tfrac{3}{16}\,\big(\Pim^{jlm}\,\partial_m\theta^{ki}+\Pim^{jkm}\,\partial_m\theta^{li}- \Pim^{ilm}\,\partial_m\theta^{kj}-\Pim^{ikm}\,\partial_m\theta^{lj} \big) + \cdots \ . \label{Theta4}\end{align} The formalism here covers as well the special case of twisted Poisson structures from Section <ref>, and it explicitly realises the local `untwisting' of the almost symplectic realization (<ref>) to a symplectic embedding $\omega$. 
For example, consider the case of a topologically trivial closed three-form $H\in\Omega^3(M)$, that is, $H=\dd B$ for a globally defined two-form $B\in\Omega^2(M)$, with associated linear map $B^\flat:\mfX(M)\to\Omega^1(M)$. In this case we can choose the trivial one-form $A=0$ (up to gauge equivalence), for which the associated flows are identity maps of $M$ and the corresponding map \begin{align*} \big(\omega^{-1}\big)^\sharp = \big(\omega_{\dd B}^{-1}\big)^\sharp\,\big(\one-t\,(\pi^*B)^\flat\,(\omega_{\dd B}^{-1})^\sharp\big)^{-1} : \Omega_{\rm pol}^1(T^*M)[[t]]\longrightarrow\mfX_{\rm pol}(T^*M)[[t]] \end{align*} defines a $B$-transformation of the almost cosymplectic structure $\omega_{\dd B}^{-1}$. §.§ Semi-classical limit of deformation quantization In the context of deformation quantization, a Poisson bracket may be regarded as the semi-classical limit of the commutator bracket of an associative noncommutative star-product which quantizes a Poisson manifold $(M,\theta)$; the existence of such star-products is provided by the famous Kontsevich formality theorem [39]. More precisely, a star-product on $M$ is a product on $C^\infty(M)[[\hbar]]$ (regarded as a $\complex[[\hbar]]$-module for a formal deformation parameter $\hbar$) of the form \begin{align*} f\star g = f\,g+\sum_{n=1}^\infty \,\hbar^n\,{\rm B}_n(f,g) \end{align*} for smooth functions $f,g\in C^\infty(M)$, where each ${\rm B}_n:C^\infty(M)\times C^\infty(M)\to C^\infty(M)$ for $n\geq1$ is a bidifferential operator, and the Poisson structure is recovered through $\{f,g\}_\theta = \frac1\hbar\,[f,g]_\star\big|_{\hbar=0}$ where $[f,g]_\star=f\star g-g\star f$. The relation between symplectic embeddings and the semi-classical limit of deformation quantization of a Poisson structure $\theta$ is well-known (at least implicitly) in both the physics and mathematics literature; after all, symplectic realizations were originally introduced with the quantization problem for Poisson manifolds in mind. 
On $M=U\subseteq \real^d$, the Fourier integral representation of the Kontsevich star-product on $(U,t\,\theta)$ can be brought to the form \begin{align}\label{eq:Spxdef} f\star g(x) = \int_{(\real^d)^*\times(\real^d)^*} \, \hat f(p)\,\hat g(p') \, a_\hbar(p,p',x) \, \e^{\frac\ii\hbar \, \Sigma(p,p',x)} \, \frac{\dd p \ \dd p'}{(2\pi\,\hbar)^d} \ , \end{align} where $\hat f$ and $\hat g$ are the asymptotic Fourier transforms of $f,g\in C^\infty(U)$, and the function $a_\hbar(p,p',x)$ is regular at $\hbar=0$. In the semi-classical limit $\hbar\to0$, the leading contribution is given by the oscillatory phase $\Sigma(p,p',x)$, which is a formal power series \begin{align*} \Sigma(p,p',x) = \langle p+p',x\rangle + \sum_{n=1}^\infty \, t^n \, \Sigma_n(p,p',x) \ , \end{align*} where $\langle \,\cdot\,,\,\cdot\,\rangle$ is the dual pairing between $\real^d$ and $(\real^d)^*$, and each $\Sigma_n(p,p',x)$ for $n\geq1$ is a homogeneous polynomial in $p,p'\in(\real^d)^*$ of degree $n+1$, with $\Sigma_n(p,0,x)=\Sigma_n(0,p,x)$, whose homogeneous part $\tilde\Sigma_n(p,p',x)$ in $p$ satisfies $\tilde \Sigma_n(p,p,x)=0$. It formally generates a local symplectic groupoid structure on $(T^*U,\omega_0)$ [19], and in this manner the formal symplectic groupoid can be regarded as a semi-classical version of the full noncommutative algebra of functions $(C^\infty(U)[[\hbar]],\star)$. This determines a formal Poisson submersion $\pi_\theta:(T^*U,\omega_0^{-1})\to (U,t\,\theta)$ with Lagrangian zero section which in components is defined by \begin{align}\label{eq:Boppshift} \pi_\theta(x,p)^i = \tilde\partial'{}^i\Sigma(p,p',x)\big|_{p'=0} = x^i + \sum_{n=1}^\infty \, t^n \, \Sigma^{i|i_1\cdots i_n}(x) \, p_{i_1}\cdots p_{i_n} \ , \end{align} where $\tilde\partial'{}^i=\partial/\partial p_i'$ and $\Sigma^{i|i_1\cdots i_n}(x)\,p_{i_1}\cdots p_{i_n} = \tilde\partial'{}^i\Sigma_n(p,p',x)\big|_{p'=0}$. This map is sometimes called a `generalized Bopp shift' in the physics literature, see e.g. 
[47, 40]. In [42] it was shown how to construct the (not unique) local functions $\Sigma^{i|i_1\cdots i_n}(x)$ by solving the (formal) Poisson map equation \begin{align*} \{\pi_\theta^*f,\pi_\theta^*g\}_{\omega_0^{-1}} = t\,\pi_\theta^*\{f,g\}_\theta \end{align*} order by order in $t$, with the first three leading orders compatible with quantization given by \begin{align*} \Sigma^{i|j} &= -\tfrac12\,\theta^{ij} \ , \\[4pt] \Sigma^{i|jk} &= \tfrac1{24}\,\theta^{kl}\,\partial_l\theta^{ij} + \tfrac1{24} \, \theta^{jl}\,\partial_l\theta^{ik} \ , \\[4pt] \Sigma^{i|jmn} &= -\tfrac1{12} \, \big(2\,\Sigma^{l|mn}\,\partial_l\theta^{ij} + 2\,\Sigma^{l|jn}\,\partial_l\theta^{im} + 2\,\Sigma^{l|jm}\,\partial_l\theta^{in} \\ & \quad \hspace{1cm} + \tfrac1{12}\,\theta^{lm}\,\theta^{kn}\,\partial_l\partial_k\theta^{ij} + \tfrac1{12}\,\theta^{lj}\,\theta^{kn}\,\partial_l\partial_k\theta^{im} + \tfrac1{12}\,\theta^{lj}\,\theta^{km}\,\partial_l\partial_k\theta^{in}\big) \ . \end{align*} Writing $\tilde\pi(x,p)=p$ for the projection to the normal directions to $U\subset T^*U$, the inverse of the map $\pi_\theta\times\tilde\pi:T^*U\to T^*U$ then replaces (<ref>) with the canonical cotangent bundle projection $\pi(x,p)=x$ and yields the local symplectic embedding constructed in Section <ref>; this is the sense in which the symplectic embedding `integrates' the Poisson manifold $(M,t\,\theta)$. The existence of invertible generalized Bopp shifts is always guaranteed, at least locally, by virtue of Darboux's theorem. If $\theta=\theta_0=0$, then $\Sigma_n=0$ for all $n\geq1$ and $\pi_0(x,p)=x$. More generally, if $M=\real^d$ with a constant Poisson structure $\theta$, then \begin{align*} \Sigma(p,p',x) = \langle p+p',x\rangle + \tfrac t2\,\theta^{ij}\,p_i\,p_j' \ , \end{align*} and the function $a_\hbar$ in (<ref>) is identically equal to $1$. 
In this case the generalized Bopp shift \begin{align*} \pi_\theta(x,p)^i = x^i -\tfrac t2\,\theta^{ij}\,p_j \end{align*} reproduces the symplectic embedding of Example <ref>. In the general case, we may again formulate deformation quantization in the direction of an almost Poisson bracket through a suitable nonassociative star-product. For twisted Poisson manifolds these star-products can be constructed through Kontsevich's formalism (see e.g. [22, 64, 4, 55]). For generic almost Poisson manifolds the existence and uniqueness of nonassociative Weyl star-products is established by [48]. The relation between symplectic embeddings and the semi-classical limit of deformation quantization of a generic almost Poisson structure $\theta$ is much more involved. For twisted Poisson manifolds, the twisted symplectic groupoids of [18] capture the semi-classical limit of the nonassociative algebra of functions, but these are not directly induced by our symplectic embedding formalism, which deals with strictly associative structures. Instead, in <cit.> it was shown that symplectic embeddings capture the semi-classical limit of the associative composition algebra of differential operators on $M$ induced by the star-product [56], which coincides with the noncommutative algebra of functions precisely when the Jacobiator $\Pim$ vanishes. In this setting, the semi-classical limit of a nonassociative star-product quantizing a generic almost Poisson structure $\theta$ was formulated as a generalized Bopp shift by [42] as the submersion $\pi_\theta:T^*U\to U$ satisfying \begin{align*} \{\pi_\theta^*f,\pi_\theta^*g\}_{\omega_0^{-1}} = t\,(\pi_\theta\times\tilde\pi)^*\{\pi^*f,\pi^*g\}_{\underline{\theta}} \ , \end{align*} where $\underline{\theta}=\frac12\,\underline{\theta}^{ij}(x,p)\,\partial_i\wedge\partial_j$ is the bivector introduced in Section <ref> with the formal power series expansion (<ref>). 
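The constant-$\theta$ Bopp shift above can be checked to be a Poisson map exactly, not merely order by order. The following sketch (with the canonical bracket normalized so that $\{x^i,p_j\}_{\omega_0^{-1}}=\delta^i_j$, and with the sample functions chosen as assumptions for illustration) verifies $\{\pi_\theta^*f,\pi_\theta^*g\}_{\omega_0^{-1}}=t\,\pi_\theta^*\{f,g\}_\theta$ symbolically.

```python
# Illustrative verification (assumptions: constant theta on R^2, canonical bracket
# normalized with {x^i, p_j} = delta^i_j): the Bopp shift
# pi_theta(x,p)^i = x^i - (t/2) theta^{ij} p_j is a Poisson map onto (R^2, t theta).
import sympy as sp

t, th = sp.symbols('t theta')
x = sp.Matrix(sp.symbols('x1 x2'))
p = sp.Matrix(sp.symbols('p1 p2'))
theta = sp.Matrix([[0, th], [-th, 0]])

def canonical(F, G):    # {F,G} = dF/dx^i dG/dp_i - dF/dp_i dG/dx^i
    return sum(sp.diff(F, x[i])*sp.diff(G, p[i]) - sp.diff(F, p[i])*sp.diff(G, x[i])
               for i in range(2))

pi = x - sp.Rational(1, 2)*t*theta*p         # pi_theta(x,p)

def pullback(f):        # pi_theta^* f for f = f(x^1, x^2)
    return f.subs({x[0]: pi[0], x[1]: pi[1]}, simultaneous=True)

f = x[0]**2*x[1]
g = x[0] + x[1]**3
fg = sum(theta[i, j]*sp.diff(f, x[i])*sp.diff(g, x[j])
         for i in range(2) for j in range(2))          # {f,g}_theta
lhs = canonical(pullback(f), pullback(g))
rhs = t*pullback(fg)
assert sp.simplify(lhs - rhs) == 0
print("Bopp shift Poisson map check passed")
```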
The map $\pi_\theta$ has precisely the same expansion (<ref>) as in the case of a Poisson structure, and inverting $\pi_\theta\times\tilde\pi$ then gives a local symplectic embedding in the weaker sense of Definition <ref>, as constructed explicitly in Section <ref>. § Poisson gauge transformations In this section we focus on the case of Poisson structures, i.e. $\Pim=0$. Recalling our discussion of deformation quantization from Section <ref>, a noncommutative gauge theory is a formal deformation of an ordinary gauge theory, obtained by replacing pointwise products of gauge fields with star-products. A Poisson gauge theory is a limit of a noncommutative gauge theory whose gauge algebra is the semi-classical limit of the noncommutative gauge algebra with closure defined by the commutators $[f,g]_\star$, in the sense defined below; we return to this point of view in Section <ref> where we will make this notion somewhat more precise in the context of homotopy Poisson algebras. In this paper we deal only with the kinematical data of a Poisson gauge theory. We also discuss only the case of gauge theories with structure group $U(1)$ for simplicity. A conventional local $U(1)$ gauge transformation is specified by a pair (f,A) \ \in \ C^\infty(U)\times\Omega^1(U) on an open subset $U\subseteq M$ consisting of a gauge parameter $f$ and a gauge field $A$. The commutative ring of functions $C^\infty(U)$ is an enveloping algebra for an abelian Lie algebra which acts on $\Omega^1(U)$ as $(f,A)\mapsto A+\delta_f^0A$ via the gauge variation \delta_f^0A=\dd f \ . The closure condition for the gauge variations defines an abelian Lie algebra: \big[\one+\delta_f^0,\one+\delta_g^0\big]A:=\big((\one+\delta_f^0)\circ(\one+\delta_g^0) - (\one+\delta_g^0)\circ(\one+\delta_f^0)\big)A = 0 for $f,g\in C^\infty(U)$. In a Poisson gauge theory, this construction is deformed in the following way. 
The idea is then to mimic the representation of the Poisson algebra as infinitesimal diffeomorphisms of $M$: The map $f\mapsto X_f:=\theta^\sharp\dd f$ is a Lie algebra homomorphism from $(C^\infty(M),\{\,\cdot\,,\,\cdot\}_\theta)$ to the Lie algebra of vector fields $(\mfX(M),[\,\cdot\,,\,\cdot\,])$: [X_f,X_g] = X_{\{f,g\}_\theta} \ . For later reference, we note that this construction makes the trivial line bundle $M\times\real$ into a Lie algebroid over $M$. Let $(M,\theta)$ be a Poisson manifold. A (local) Poisson gauge transformation on an open subset $U\subseteq M$ is an action of the Lie algebra $(C^\infty(U),t\,\{\,\cdot\,,\,\cdot\}_\theta)$ on the affine space $\Omega^1(U)[[t]]$ of the form (f,A)\longmapsto A+\delta^\theta_fA for $(f,A)\in C^\infty(U)\times\Omega^1(U)$, which is a deformation of an abelian gauge transformation: \delta_f^\theta A\big|_{t=0} = \delta_f^0A = \dd f \ , and satisfies the derivation property \begin{align*} \delta_{f\,g}^\theta A = g\,\delta_f^\theta A + f\,\delta_g^\theta A \end{align*} over the algebra $C^\infty(U)$. The closure condition for the Lie algebra of gauge variations is the Poisson gauge algebra \big[\one+\delta^\theta_f,\one+\delta^\theta_g\big]A = \delta^\theta_{t\,\{f,g\}_\theta} A \ . In noncommutative gauge theory, the space $\Omega^1(U)[[\hbar]]$ is naturally acted upon by the vector space $C^\infty(U)[[\hbar]]$ under the left and right actions by multiplication with the star-product extended over one-forms by replacing the bidifferential operators ${\rm B}_n$ with Lie bidifferential operators. In the semi-classical limit, this naturally defines a skew-symmetric $\Omega^1(U)$-valued bracket between gauge parameters and gauge fields that we denote by $\{f,A\}_\theta=-\{A,f\}_\theta$. Defining $\DD A\in\Omega^1(U)\otimes \Omega^1(U)$ in local coordinates by $\DD A:=\LL_{\partial_i}A\otimes \dd x^i$, this bracket reads as \{f,A\}_\theta = (\theta\otimes \one)(\dd f,\DD A) \ , which can be expressed as \{f,A\}_\theta = \theta^{ij}\, \partial_if \, \LL_{\partial_j}A in local coordinates. 
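The homomorphism property $[X_f,X_g]=X_{\{f,g\}_\theta}$ can be tested directly in coordinates; in the sketch below, the so(3) Poisson structure and the test functions are illustrative assumptions.

```python
# Illustrative check (assumptions: so(3) linear Poisson structure and sample
# functions): the map f -> X_f, with X_f^j = theta^{ij} d_i f, satisfies
# [X_f, X_g] = X_{{f,g}_theta}.
import sympy as sp

x = sp.symbols('x1:4')
eps = lambda i, j, k: sp.LeviCivita(i + 1, j + 1, k + 1)
theta = [[sum(eps(i, j, m)*x[m] for m in range(3)) for j in range(3)] for i in range(3)]

def ham(f):             # X_f^j = theta^{ij} d_i f
    return [sum(theta[i][j]*sp.diff(f, x[i]) for i in range(3)) for j in range(3)]

def commutator(X, Y):   # [X,Y]^k = X^l d_l Y^k - Y^l d_l X^k
    return [sum(X[l]*sp.diff(Y[k], x[l]) - Y[l]*sp.diff(X[k], x[l]) for l in range(3))
            for k in range(3)]

f, g = x[0]*x[1], x[2]**2 + x[0]
fg = sum(theta[i][j]*sp.diff(f, x[i])*sp.diff(g, x[j])
         for i in range(3) for j in range(3))          # {f,g}_theta
assert all(sp.simplify(a - b) == 0 for a, b in zip(commutator(ham(f), ham(g)), ham(fg)))
print("Hamiltonian vector field homomorphism check passed")
```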
In the following we will use the abbreviation $\theta^\otimes:=\theta\otimes \one$. This bracket obeys the usual derivation property over the algebra $C^\infty(U)$ in its first entry, as well as with respect to the $C^\infty(U)$-bimodule $\Omega^1(U)$ in its second entry since \begin{align*} \DD(g\,A)=\dd g\otimes A+g\,\DD A \end{align*} \begin{align*} \{f,g\,A\}_\theta=\{f,g\}_\theta\,A + g\,\{f,A\}_\theta \ , \end{align*} for all $g\in C^\infty(U)$. In later sections we will also use an extension of this bracket to differential forms of arbitrary degree. For $\alpha,\beta\in\Omega^1(U)$, we define their symmetric bracket $\{\alpha,\beta\}_\theta\in\Omega^2(U)$ by \begin{align*} \{\alpha,\beta\}_\theta := \theta^\otimes(\DD\alpha,\DD\beta) \ . \end{align*} This is then extended to a graded skew-symmetric bracket $\{\,\cdot\,,\,\cdot\,\}_\theta$ on the entire exterior algebra $\Omega^\bullet(U)$ as a graded biderivation of degree $0$. The simplest example (apart from the zero bivector) of a Poisson gauge transformation comes from taking $M=\real^d$ and a constant skew-symmetric $d{\times}d$ matrix $\theta=(\theta^{ij})$, regarded as a bivector on $\real^d$. Then the gauge variations \delta_f^\theta A = \dd f + t\,\{A,f\}_\theta fulfill the requirements of Definition <ref>. In the following we extend this result to generic Poisson manifolds $(M,\theta)$, which will generally require the use of formal power series. By the discussion of Section <ref>, it is natural to expect that symplectic embeddings should play a role in determining the semi-classical limit of a noncommutative gauge theory, that is, in Poisson gauge theories. The purpose of this section is to demonstrate that this is indeed the case, focusing in detail on the realization of gauge symmetries. 
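The closure of the gauge variations in the constant-$\theta$ example can be verified componentwise, varying the field-dependent part of a gauge variation by shifting $A\mapsto A+\delta^\theta_fA$. The following sketch (the sample $A$, $f$, $g$ on $\real^2$ are assumptions for illustration) confirms $\big[\one+\delta^\theta_f,\one+\delta^\theta_g\big]A=\delta^\theta_{t\,\{f,g\}_\theta}A$.

```python
# Illustrative check (assumptions: constant theta on R^2, sample gauge field and
# parameters): delta_f A_k = d_k f + t {A_k, f}_theta closes the Poisson gauge
# algebra, with the commutator computed by varying the A-dependence.
import sympy as sp

t, th = sp.symbols('t theta')
x = sp.symbols('x1 x2')

def pb(a, b):           # {a,b}_theta = theta (d1 a d2 b - d2 a d1 b)
    return th*(sp.diff(a, x[0])*sp.diff(b, x[1]) - sp.diff(a, x[1])*sp.diff(b, x[0]))

def delta(f, A):        # delta_f A_k = d_k f + t {A_k, f}_theta
    return [sp.diff(f, x[k]) + t*pb(A[k], f) for k in range(2)]

def vary(g, A, dA):     # (delta_g A) evaluated at A + dA, minus its value at A
    Ashift = [A[k] + dA[k] for k in range(2)]
    return [delta(g, Ashift)[k] - delta(g, A)[k] for k in range(2)]

A = [x[0]*x[1], x[1]**2]
f, g = x[0]**2, x[0]*x[1]

lhs = [vary(g, A, delta(f, A))[k] - vary(f, A, delta(g, A))[k] for k in range(2)]
rhs = delta(t*pb(f, g), A)      # delta with parameter t {f,g}_theta
assert all(sp.simplify(lhs[k] - rhs[k]) == 0 for k in range(2))
print("Poisson gauge closure check passed")
```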
Our main observation here is that a Poisson gauge algebra is canonically defined by the restriction of a symplectic embedding $(T^*M,\omega)$ of $(M,\theta)$ to a constraint submanifold defined by the gauge fields. For this, we regard a gauge field $A\in\Omega^1(U)$ as a section $s_A:U\to T^*U$, whose image is a submanifold $\im(s_A)\subset T^*U$; in local coordinates where $A=A_i(x)\,\dd x^i$, $s_A(x)=(x,A(x))\in T^*U$ for $x\in U$. Define the local one-form $\Phi_A\in\Omega^1(T^*U)$ by \Phi_A=\lambda_0 - \pi^*A where we recall that $\lambda_0$ is the Liouville one-form on $T^*U$. The one-form $\Phi_A$ vanishes precisely on the submanifold $\im(s_A)\subset T^*U$. In local coordinates where $\Phi_A=(\Phi_A)_i\,\dd x^i$ with \begin{align*} (\Phi_A)_i=p_i-A_i(x) \ , \end{align*} this is the submanifold of $T^*U$ defined by the constraints $(\Phi_A)_i=0$. Since $s_A^*\lambda_0=A$, this is equivalently presented as $s_A^*\Phi_A=0$ in $\Omega^1(U)$. Note that $s_A$ is a Lagrangian section of $(T^*U,\omega_0)$ if and only if $A$ is a flat connection, i.e. $\dd A=0$. Thus in general, $\im(s_A)$ is not a Lagrangian submanifold of $(T^*U,\omega)$. In physics parlance, the local equations $(\Phi_A)_i=0$ do not define first class constraints, as expected since otherwise they would eliminate all local degrees of freedom. In the present context, the consistent elimination of the `auxiliary' variables $p_i$ which are adjoined in symplectic embeddings of Poisson manifolds [46] acquires a natural meaning through the following result. Let $(T^*M,\omega)$ be a local symplectic embedding of a Poisson manifold $(M,\theta)$, and let $U\subseteq M$ be an open subset. For $(f,A)\in C^\infty(U)\times\Omega^1(U)$, the gauge variation \delta^\theta_fA := s_A^*\{\pi^*f,\Phi_A\}_{\omega^{-1}} is a Poisson gauge transformation. 
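For a constant $\theta$ this gauge variation can be evaluated in closed form: with the embedding brackets $\{x^i,x^j\}=t\,\theta^{ij}$, $\{x^i,p_j\}=\delta^i_j$ and $\{p_i,p_j\}=0$ (i.e. $\gamma=0$), one recovers $\delta^\theta_fA=\dd f+t\,\{A,f\}_\theta$. The sketch below performs this evaluation (the sample $f$ and $A$ on $\real^2$ are assumptions for illustration).

```python
# Illustrative evaluation (assumptions: constant theta on R^2, gamma = 0, sample
# f and A): delta_f A = s_A^* {pi^* f, Phi_A}_{omega^{-1}} reproduces
# d f + t {A, f}_theta componentwise.
import sympy as sp

t, th = sp.symbols('t theta')
x = sp.Matrix(sp.symbols('x1 x2'))
p = sp.Matrix(sp.symbols('p1 p2'))
theta = sp.Matrix([[0, th], [-th, 0]])

def embed_bracket(F, G):    # {F,G}_{omega^{-1}} on T^*R^2 for constant theta
    xx = t*sum(theta[i, j]*sp.diff(F, x[i])*sp.diff(G, x[j])
               for i in range(2) for j in range(2))
    xp = sum(sp.diff(F, x[i])*sp.diff(G, p[i]) - sp.diff(F, p[i])*sp.diff(G, x[i])
             for i in range(2))
    return xx + xp

f = x[0]**2*x[1]
A = [x[0]*x[1], x[1]**3]

# s_A^* pulls back along p = A(x)
delta_fA = [embed_bracket(f, p[j] - A[j]).subs({p[0]: A[0], p[1]: A[1]})
            for j in range(2)]
expected = [sp.diff(f, x[j])
            + t*th*(sp.diff(A[j], x[0])*sp.diff(f, x[1])
                    - sp.diff(A[j], x[1])*sp.diff(f, x[0]))
            for j in range(2)]      # d_j f + t {A_j, f}_theta
assert all(sp.simplify(delta_fA[j] - expected[j]) == 0 for j in range(2))
print("symplectic embedding gauge variation check passed")
```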
Using (<ref>) the gauge variations (<ref>) read \begin{equation}\label{gtA} \delta^\theta_f A =\dd f + t\, s_A^*\gamma(\dd f,\,\cdot\,)+t\,\{A,f\}_{\theta} \ , \end{equation} where the one-form $s_A^*\gamma(\dd f,\,\cdot\,)\in \Omega^1(U)[[t]]$ is given by s_A^*\gamma(\dd f,\,\cdot\,) = \gamma_i^j\big(x,A(x)\big)\,\partial_j f(x)\, \dd x^i in local coordinates on $U$. We need to check the closure condition (<ref>) for this definition of gauge transformations. We begin with some preliminary definitions. For a one-form which is a functional $\FF(A)\in\Omega^1(U)$ of the gauge field $A$, we define the gauge variation \begin{equation}\label{eq:FFAgt} \delta^\theta_f \FF(A):=\FF\big(A+\delta^\theta_f A\big)-\FF(A) \ . \end{equation} In particular, \begin{equation*} \delta^\theta_g \{A, f\}_{\theta}=\big\{\delta^\theta_g A,f\big\}_{\theta} \ . \end{equation*} If $\underline{\FF}\in\Omega^1_{\rm pol}(T^*U)$, then we define \begin{equation*} \delta^\theta_f \big(s_A^*\underline\FF\big):=s_A^*\big(\tilde\partial{}^j\underline{\FF}\big) \, \delta^\theta_f A_j \ , \end{equation*} where $\delta^\theta_fA:=\delta^\theta_fA_i\,\dd x^i$. We are now ready to calculate the composition of two gauge variations. We obtain \begin{align*} \delta^\theta_f \big(\delta^\theta_g A\big) &= s_A^*\big(\tilde\partial{}^j\{\pi^*g,\Phi_A\}_{\omega^{-1}}\big)\,\delta^\theta_f A_j + t\,\big\{\delta^\theta_fA,g\big\}_\theta \\[4pt] &= s_A^*\big(\tilde\partial{}^j\{\pi^*g,\Phi_A\}_{\omega^{-1}}\big)\,s_A^*\{\pi^*f,(\Phi_A)_j\}_{\omega^{-1}} +t\,\{s_A^*\{\pi^*f,\Phi_A\}_{\omega^{-1}},g\}_\theta \ . \end{align*} Thus for the left-hand side of the closure condition (<ref>) one finds \begin{align} & \delta^\theta_f \big(\delta^\theta_g A\big)-\delta^\theta_g \big(\delta^\theta_f A\big) \notag \\[4pt] & \hspace{1cm}= s_A^*\big(\tilde\partial{}^j\{\pi^*g,\Phi_A\}_{\omega^{-1}}\big)\, s_A^*\{\pi^*f,(\Phi_A)_j\}_{\omega^{-1}} - s_A^*\big(\tilde\partial{}^j\{\pi^*f,\Phi_A\}_{\omega^{-1}}\big)\, s_A^*\{\pi^*g,(\Phi_A)_j\}_{\omega^{-1}} \notag \\ & \hspace{2cm} + s_A^*\{\pi^*s_A^*\{\pi^*f,\Phi_A\}_{\omega^{-1}},\pi^*g\}_{\omega^{-1}} - s_A^*\{\pi^*s_A^*\{\pi^*g,\Phi_A\}_{\omega^{-1}},\pi^*f\}_{\omega^{-1}} \ . 
\label{t1}\end{align} We now apply the relation (<ref>) from Appendix <ref> to the right-hand side of (<ref>) to get \begin{align} \delta^\theta_f \big(\delta^\theta_g A\big)-\delta^\theta_g \big(\delta^\theta_f A\big) &= s_A^*\{\{\pi^*f,\Phi_A\}_{\omega^{-1}},\pi^*g\}_{\omega^{-1}} - s_A^*\{\{\pi^*g,\Phi_A\}_{\omega^{-1}},\pi^*f\}_{\omega^{-1}} \ . \label{t3}\end{align} Using the Jacobi identity in the right-hand side of (<ref>), we finally end up with \begin{align}\label{t4} \delta^\theta_f \big(\delta^\theta_g A\big)-\delta^\theta_g \big(\delta^\theta_f A\big) = s_A^*\{\{\pi^*f,\pi^*g\}_{\omega^{-1}},\Phi_A\}_{\omega^{-1}} \ . \end{align} Since \{\pi^*f,\pi^*g\}_{\omega^{-1}}=t\,\pi^*\{f,g\}_{\theta} \ , we conclude that the gauge transformations (<ref>) close the Lie algebra \begin{equation*} \big[\one+\delta^\theta_f,\one+\delta^\theta_g\big] A=\delta^\theta_{t\,\{f,g\}_\theta} A \ , \end{equation*} as required. The field dependent term $\dd f + t\,s_A^*\gamma(\dd f,\,\cdot\,)$ in (<ref>) can be thought of as defining a `twisted exterior derivative' of the gauge parameter $f$ required to close the Poisson gauge algebra when the bivector $\theta$ is no longer constant, cf. Examples <ref> and <ref>. Indeed, the gauge closure condition (<ref>) for (<ref>) implies the local symplectic embedding equations (<ref>) [44]. The precise algebraic meaning of this term will be explained in Section <ref>, and we shall see some explicit examples in Section <ref>. § Almost Poisson gauge transformations In this section we consider the case of a generic almost Poisson bivector $\theta$ on $M$ with non-vanishing Jacobiator, i.e. $\Pim\neq0$. In this case, we may again formulate the notion of an almost Poisson gauge theory as an appropriate semi-classical limit of a noncommutative gauge theory, which is constructed by nonassociative deformation quantization. 
The inherent associativity of the symplectic embeddings discussed in Section <ref> will be needed to ensure that the gauge transformations in an almost Poisson gauge theory close a (strict) Lie algebra, despite their origin in a nonassociative algebra. The aim of this section is to generalize Proposition <ref> to the case of generic almost Poisson manifolds. The construction of gauge transformations and gauge algebras in these instances is considerably more involved than in the case of Poisson structures from Section <ref>. We proceed in two steps: We first set up the problem of defining a suitable notion of almost Poisson gauge transformations and their closure condition, and subsequently prove that solutions to this problem exist and are explicitly constructible. §.§ Formulation of the gauge algebra It is clear that for a non-trivial Jacobiator $\Pim\neq0$, a direct generalization of Definition <ref> is not possible, because $(C^\infty(M),\{\,\cdot\,,\,\cdot\,\}_\theta)$ is no longer a Lie algebra. This is already manifested in a violation of the Lie algebra homomorphism property (<ref>), which for an $H$-twisted Poisson structure is modified to [X_f,X_g] = X_{\{f,g\}_\theta} + \theta^\sharp H(X_f,X_g,\,\cdot\,) \ . In particular, the gauge closure condition must change substantially. Drawing on results from the $L_\infty$-algebra approach to noncommutative gauge theories [43], it is clear what needs to be done: the space of gauge parameters should be enlarged to include `field dependent' gauge parameters, such that the commutator of two gauge transformations is again a gauge transformation but with a field dependent gauge parameter. We shall discuss the precise relation between our approach here based on symplectic embeddings and the approach based on $L_\infty$-algebras in Section <ref> below. Let us first explain precisely what we mean by a `field dependent' gauge parameter in our setting. We will work with $1$-jets of sections of vector bundles, see e.g. [61]. 
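A concrete twisted Poisson structure illustrating the modified homomorphism can be obtained from any nondegenerate bivector: if $\theta=B^{-1}$ for a two-form $B$, then $\theta$ is $\dd B$-twisted, with Jacobiator $\Pim^{ijk}=\theta^{ia}\,\theta^{jb}\,\theta^{kc}\,(\dd B)_{abc}$ in the index conventions spelled out in the code comments. The following sketch (the specific choice of $B$ on $\real^4$ is an assumption for illustration) verifies this componentwise.

```python
# Illustrative construction (assumed toy model on R^4): for a nondegenerate
# bivector theta = B^{-1}, with theta^{ik} B_{kj} = delta^i_j, the Jacobiator
# Pi^{ijk} = theta^{il} d_l theta^{jk} + cyclic(i,j,k) equals
# theta^{ia} theta^{jb} theta^{kc} (dB)_{abc}, i.e. theta is dB-twisted Poisson.
import sympy as sp

x = sp.symbols('x1:5')
B = sp.zeros(4, 4)
B[0, 1], B[1, 0] = 1, -1
B[2, 3], B[3, 2] = sp.exp(x[0]), -sp.exp(x[0])   # B = dx1^dx2 + e^{x1} dx3^dx4
theta = B.inv()

def Pi(i, j, k):        # Jacobiator: cyclic sum of theta^{il} d_l theta^{jk}
    return sum(theta[a, l]*sp.diff(theta[b, c], x[l]) for l in range(4)
               for a, b, c in ((i, j, k), (j, k, i), (k, i, j)))

def H(a, b, c):         # (dB)_{abc} = d_a B_{bc} + d_b B_{ca} + d_c B_{ab}
    return (sp.diff(B[b, c], x[a]) + sp.diff(B[c, a], x[b]) + sp.diff(B[a, b], x[c]))

ok = all(sp.simplify(Pi(i, j, k)
                     - sum(theta[i, a]*theta[j, b]*theta[k, c]*H(a, b, c)
                           for a in range(4) for b in range(4) for c in range(4))) == 0
         for i in range(4) for j in range(4) for k in range(4))
assert ok
print("twisted Poisson Jacobiator check passed")
```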
Let $M$ be a manifold and $U\subseteq M$ an open subset. Let $J^1T^*U$ be the first order jet space of the cotangent bundle over $U$. A field dependent gauge parameter is a smooth function in $C^\infty(J^1T^*U)$. Concretely, in local coordinates a field dependent gauge parameter is specified by a local function $f[x,A]=f(x,A(x),\partial A(x))$ depending on a gauge field $A\in\Omega^1(U)$, viewed as a section $s_A:U\to T^*U$, and its first order derivatives. Via pullback along the vector jet bundle $J^1T^*U\to U$, this contains the usual space $C^\infty(U)$ of (field independent) gauge parameters from Section <ref>. In what follows we will also use pullbacks along the affine jet bundle $J^1T^*U\to T^*U$. In particular, this enables us to view the space of gauge fields $\Omega^1(U)$ as a subspace of the sections $\Gamma(J^1T^*U)$ of the vector jet bundle via prolongation; in other words, gauge fields are precisely the integrable sections of $J^1T^*U\to U$. With this notion in place, we are now in a position to generalize Definition <ref>. The idea is to enlarge the Lie algebra action $C^\infty(U)\times\Omega^1(U)\to\Omega^1(U)$ from Section <ref> to a suitable affine action. Let $(M,\theta)$ be an almost Poisson manifold.
A (local) almost Poisson gauge transformation on an open subset $U\subseteq M$ is an affine action of $C_{\rm pol}^\infty(J^1T^*U)[[t]]$ on the space $\Gamma_{\rm pol}(J^1T^*U)[[t]]$ of the form \begin{equation*} (f,A)\longmapsto A+\delta^\theta_fA \end{equation*} for $(f,A)\in C^\infty(U)\times\Omega^1(U)$, such that: (a) The assignment $f\mapsto\delta^\theta_f$ is $\complex$-linear and defines a deformation of an abelian gauge transformation: \begin{equation*} \delta_f^\theta A\big|_{t=0} = \delta_f^0A=\dd f \ , \end{equation*} which satisfies the derivation property \begin{align*} \delta_{f\,g}^\theta A = g\,\delta_f^\theta A + f\,\delta_g^\theta A \end{align*} over the algebra $C^\infty(U)$, and closes the almost Poisson gauge algebra \begin{equation*} \big[\one+\delta^\theta_f,\one+\delta^\theta_g\big] A = \delta^\theta_{t\,[\![f,g]\!]_\theta(A)} A \ , \end{equation*} for some $[\![f,g]\!]_\theta \in C_{\rm pol}^\infty(J^1T^*U)[[t]]$; (b) The assignment $(f,g)\mapsto[\![f,g]\!]_\theta$ is $\complex$-bilinear, skew-symmetric and defines a deformation of the almost Poisson bracket: \begin{equation*} [\![f,g]\!]_\theta\big|_{t=0}=\{f,g\}_\theta \ , \end{equation*} which satisfies the relation \begin{align}\label{j3} {\sf Cyc}_{f,g,h}\big(t\,\big[\!\big[h,[\![f,g]\!]_\theta(A)\big]\!\big]_\theta(A) + \delta_h^\theta[\![f,g]\!]_\theta(A) \big) = 0 \end{align} for consistency with the Jacobi identity for the commutator algebra (<ref>), where we use the definition (<ref>) for gauge variations; and (c) For any fixed $f\in C^\infty(U)$, the map $[\![f,\,\cdot\,]\!]_\theta$ is a derivation of the commutative algebra $C_{\rm pol}^\infty(J^1T^*U)[[t]]$. When $\theta$ is a Poisson structure, i.e. the Jacobi identity for the bracket $\{\,\cdot\,,\,\cdot\,\}_\theta$ holds, Definition <ref> is a special case of Definition <ref>, but the latter allows more freedom to accommodate situations where the Jacobi identity is violated.
Nevertheless, a direct generalization of Proposition <ref> is still not possible for generic almost Poisson bivectors $\theta$; in the language of [46], the auxiliary variables $p_i$ can no longer be consistently eliminated by the constraints $\Phi_A=0$ on a symplectic embedding of an almost Poisson manifold. One can still repeat most of the proof of Proposition <ref> in the present case to arrive again at the expression (<ref>). However, due to (<ref>), the key difference from the case of a Poisson bivector is that the cosymplectic bracket \begin{equation*} \{\pi^*f,\pi^*g\}_{\omega^{-1}} = t\, \underline{\theta}(\dd\pi^*f,\dd\pi^*g) \neq t\,\pi^*\{f,g\}_{\theta} \end{equation*} is not a gauge parameter since it depends on the coordinates in the normal directions to the zero section $U\subset T^*U$ through its local dependence on $\underline{\theta}^{ij}(x,p)$. Similarly, the restriction of \begin{equation*} \{\pi^*A,\pi^*f\}_{\omega^{-1}} = t\,\underline{\theta}^\otimes(\DD\pi^*A, \dd\pi^*f) \neq t\,\pi^*\{A,f\}_{\theta} \end{equation*} to $\im(s_A)$ involves a field dependent gauge transformation through its local dependence on $\underline{\theta}^{ij}(x,A(x))$. Using the relation (<ref>) from Appendix <ref> in the right-hand side of (<ref>), we can represent the commutator of two gauge transformations defined by (<ref>) in the form \begin{align} \delta^\theta_f\big(\delta^\theta_gA\big) - \delta^\theta_g\big(\delta^\theta_fA\big) &= s_A^*\{\pi^*s_A^*\{\pi^*f,\pi^*g\}_{\omega^{-1}} , \Phi_A\}_{\omega^{-1}} \notag \\ & \quad + s_A^*\big(\tilde\partial{}^j\{\pi^*f,\pi^*g\}_{\omega^{-1}}\big) \, s_A^*\{(\Phi_A)_j,\Phi_A\}_{\omega^{-1}} \ . \label{t7}\end{align} While the expression $s_A^*\{\pi^*f,\pi^*g\}_{\omega^{-1}}$ defines a field dependent gauge parameter, the second line of (<ref>) prevents the gauge transformations defined by (<ref>) from closing a Lie algebra.
To overcome this problem we need to `correct' the gauge variation (<ref>) in order to absorb this term, which is the content of the following. Let $(T^*M,\omega)$ be a local symplectic embedding of an almost Poisson manifold $(M,\theta)$, and let $U\subseteq M$ be an open subset. For $f,g\in C^\infty(U)$ and $A\in\Omega^1(U)$, define $[\![f,g]\!]_\theta\in C_{\rm pol}^\infty(J^1T^*U)[[t]]$ by \begin{equation*} t\,[\![f,g]\!]_\theta(A) = s_A^*\{\pi^*f,\pi^*g\}_{\omega^{-1}} - s_A^*\{\Phi_A(L_f),\Phi_A(L_g)\}_{\omega^{-1}} \ , \end{equation*} where $ L_f\in \mfX_{\rm pol}^{\tt h}(J^1T^*U)[[t]]\subset \mfX_{\rm pol}^{\tt h}(T^*U)[[t]]$ are horizontal vector fields $L_f=L_f^i\,\partial_i$ on the jet space of $T^*U$ satisfying the recursion relations \begin{align} L^i_{t\,[\![f,g]\!]_\theta(A)} &= \delta_f^\theta L^i_g - \delta_g^\theta L^i_f + s_A^*\big(\tilde\partial{}^i\{\pi^*f,\pi^*g\}_{\omega^{-1}}\big) \notag \\ & \quad + s_A^*\{\pi^*f,L_g^i\}_{\omega^{-1}} - s_A^*\{\pi^*g,L_f^i\}_{\omega^{-1}} \notag \\ & \quad + s_A^*\{L_f^i,\Phi_A(L_g)\}_{\omega^{-1}} - s_A^*\{L_g^i,\Phi_A(L_f)\}_{\omega^{-1}} \notag \\ & \quad + L_g^j\,s_A^*\big(\tilde\partial{}^i\{\pi^*f,(\Phi_A)_j\}_{\omega^{-1}}\big) - L_f^j\,s_A^*\big(\tilde\partial{}^i\{\pi^*g,(\Phi_A)_j\}_{\omega^{-1}}\big) \ , \label{c4}\end{align} where \begin{equation*} \delta^\theta_fL^i_g(A):=L^i_g\big(A+\delta_f^\theta A\big) - L^i_g(A) \qquad \mbox{and} \qquad \delta_f^\theta A := s_A^*\{\pi^*f + \Phi_A(L_f),\Phi_A\}_{\omega^{-1}} \ . \end{equation*} Then the gauge variations (<ref>) close the almost Poisson gauge algebra \begin{equation*} \big[\one+\delta_f^\theta,\one+\delta_g^\theta\big]A = \delta^\theta_{t\,[\![f,g]\!]_\theta(A)}A \end{equation*} and $(f,A)\mapsto A+\delta_f^\theta A$ is an almost Poisson gauge transformation. The proof of (<ref>) utilizes the derivation property of the almost Poisson bracket together with the fact that, since $\Phi_A=0$ on the constraint locus $\im(s_A)\subset T^*U$, the gauge variations (<ref>) can be expressed as \begin{equation*} \delta_f^\theta A = s_A^*\{\pi^*f,\Phi_A\}_{\omega^{-1}} + L_f^i\, s_A^*\{(\Phi_A)_i,\Phi_A\}_{\omega^{-1}} \ , \end{equation*} while the field dependent gauge parameter $[\![f,g]\!]_\theta\in C_{\rm pol}^\infty(J^1T^*U)[[t]]$ from (<ref>) can be determined from \begin{equation*} t\,[\![f,g]\!]_\theta(A) = s_A^*\{\pi^*f,\pi^*g\}_{\omega^{-1}} - L_f^i\, L_g^j\, s_A^*\{(\Phi_A)_i,(\Phi_A)_j\}_{\omega^{-1}} \ . \end{equation*} The details are deferred to Appendix <ref>.
To show that the Jacobi identity (<ref>) is satisfied by (<ref>), we note that the left-hand side of (<ref>) in this case reads \begin{align} &{\sf Cyc}_{f,g,h}\Big( s_A^*\{\pi^*h,\pi^*s_A^*\{\pi^*f,\pi^*g\}_{\omega^{-1}}\}_{\omega^{-1}}-s_A^*\{\pi^*h,\pi^*s_A^*\{\Phi_A(L_f),\Phi_A(L_g)\}_{\omega^{-1}}\}_{\omega^{-1}}\notag\\ & \hspace{14mm}-s_A^*\{\Phi_A(L_h),\Phi_A(L_{t\,[\![f,g]\!]_\theta(A)})\}_{\omega^{-1}} +s_A^*\big(\tilde\partial^i\{\pi^*f,\pi^*g\}_{\omega^{-1}}\big)\,s_A^*\{\pi^*h+\Phi_A(L_h),(\Phi_A)_i\}_{\omega^{-1}}\notag\\ & \hspace{23mm} -\big(\delta_h^\theta L^i_f\big)\,s_A^*\{(\Phi_A)_i,\Phi_A(L_g)\}_{\omega^{-1}} -\big(\delta^\theta_h L^i_g\big)\,s_A^*\{\Phi_A(L_f),(\Phi_A)_i\}_{\omega^{-1}}\notag\\ & \hspace{32mm}+s_A^*\{\pi^*s_A^*\{\pi^*h+\Phi_A(L_h),\Phi_A(L_f)\}_{\omega^{-1}},\Phi_A(L_g)\}_{\omega^{-1}}\notag\\ & \hspace{41mm} +s_A^*\{\Phi_A(L_f),\pi^*s_A^*\{\pi^*h+\Phi_A(L_h),\Phi_A(L_g)\}_{\omega^{-1}}\}_{\omega^{-1}} \Big) \ . \label{j4} \end{align} Now we use the expression for $L_{t\,[\![f,g]\!]_\theta(A)}$ from (<ref>) and take into account the cyclic sum over $f$, $g$ and $h$. Many terms cancel, while the remaining terms combine using the formulas (<ref>), (<ref>), and (<ref>) from Appendix <ref>. After these simplifications the expression (<ref>) becomes \begin{align*} &{\sf Cyc}_{f,g,h}\big( s_A^*\{\pi^*h,\{\pi^*f,\pi^*g\}_{\omega^{-1}}\}_{\omega^{-1}}+s_A^*\{\{\Phi_A(L_f),\Phi_A(L_g)\}_{\omega^{-1}},\pi^*h\}_{\omega^{-1}}\\ & \hspace{2cm} +s_A^*\{\{\Phi_A(L_g),\pi^*h\}_{\omega^{-1}},\Phi_A(L_f)\}_{\omega^{-1}}+s_A^*\{\{\pi^*h,\Phi_A(L_f)\}_{\omega^{-1}},\Phi_A(L_g)\}_{\omega^{-1}}\\ & \hspace{4cm} +s_A^*\{\{\Phi_A(L_f),\Phi_A(L_g)\}_{\omega^{-1}},\Phi_A(L_h)\}_{\omega^{-1}}\big) \ . \end{align*} This expression now vanishes identically as a consequence of the Jacobi identity for the cosymplectic bracket. Thus the gauge transformations (<ref>) form a Lie algebra.
Finally, the linearity and derivation requirements of Definition <ref> will follow from the corresponding properties of the cosymplectic bracket and by constructing the vector fields $L_f$ as $C^\infty(J^1T^*U)$-linear duals of the one-forms $\dd f$, so that \begin{align*} L_{f\,g} = g\,L_f + f\,L_g \end{align*} and linearity in $f$ is immediate. By virtue of their role in cancelling the unwanted terms in the commutator (<ref>), we shall refer to the horizontal vector fields $L_f$ as Lagrangian multipliers. One can compare with the Poisson gauge transformations (<ref>), and in particular see the effects of the modifications by the Lagrangian multiplier vector fields, by using (<ref>) together with the notation of Remark <ref> to write the almost Poisson gauge transformations (<ref>) as \begin{align*} \delta_f^\theta A &= \dd f+ t\,s_A^*\gamma\big(\dd A(L_f^\otimes),\pi^*\DD A\big) + \dd A(L_f,\,\cdot\,) \\[4pt] & \quad \, +t\,s_A^*\gamma^\otimes(\DD A,L_f) - t\, s_A^*\gamma\big(\DD A(L_f^\otimes),\,\cdot\,\big) \ , \end{align*} for $f\in C^\infty(U)$ and $A\in\Omega^1(U)$, where $L_f^\otimes:=L_f\otimes \,\cdot\,$. Similarly, the modification of the almost Poisson bracket $\{f,g\}_\theta$ to the field dependent bracket (<ref>) can be written as \begin{align*} [\![f,g]\!]_\theta(A) &= s_A^*\{\pi^*f,\pi^*g\}_{\underline{\theta}} - A(L_g^\otimes) - t^{-1}\, \dd A(L_f,L_g) \\[4pt] & \quad \, +s_A^*\gamma\big(\DD A(L_f^\otimes),L_g\big) - s_A^*\gamma\big(\DD A(L_g^\otimes),L_f\big) \ , \end{align*} for $f,g\in C^\infty(U)$ and $A\in\Omega^1(U)$.

§.§ Existence of Lagrangian multipliers

Proposition <ref>, which establishes sufficient conditions under which local symplectic embeddings lead to almost Poisson gauge transformations, is of course only meaningful if we can establish the existence of the vector fields $L_f$, i.e. the existence of solutions to the defining recursion equations determined by (<ref>)–(<ref>).
We shall now demonstrate that they can be constructed recursively for $f\in C^\infty(U)$ to any order in $t$. We can immediately compute the leading order contribution to $L_f$ by using (<ref>) to write the leading contribution to the second line of (<ref>) as \begin{equation*} s_A^*\big(\tilde\partial{}^j\{\pi^*f,\pi^*g\}_{\omega^{-1}}\big)\, s_A^*\{(\Phi_A)_j,\Phi_A\}_{\omega^{-1}} = t^2\, \dd A\big(\Pim(\dd f,\dd g,\,\cdot\,),\,\cdot\,\big) + O(t^3) \ . \end{equation*} At this order, this term can be cancelled by taking \begin{equation}\label{lambda2} L_f=-\tfrac{t^2}2 \, \Pim(\dd f,A,\,\cdot\,) + O(t^3) \end{equation} in (<ref>), which from (<ref>) can be straightforwardly seen to yield \begin{equation}\nonumber [\![f,g]\!]_\theta(A) = \{f,g\}_\theta - t\, \Pim(\dd f,\dd g,A) + O(t^2) \ . \end{equation} To find the higher order contributions, we will work locally. The Lagrangian multipliers $L_f=L_f^i\,\partial_i\in \mfX_{\rm pol}^{\tt h}(J^1T^*U)[[t]]$ should be linear in the gauge parameter $f\in C^\infty(U)$, and will generally depend on the gauge fields $A\in\Omega^1(U)$ and also on their derivatives $\partial A$, so we write their local form as \begin{equation*} L_f^i=t^2\,L^{ij}\big(x,A(x), \partial A(x)\big)\,\partial_j f(x) \ , \end{equation*} for $x\in U$ and functions $L^{ij}\in C_{\rm pol}^\infty(J^1T^*U)[[t]]$. In the following we write $\partial_A^l=\partial/\partial A_l$ and $\partial_A^{k;l} =\partial /\partial(\partial_k A_l)$ for derivatives in local coordinates on the jet space.
We can then formulate the following result. The solution to the recursion relations (<ref>) is given by \begin{align} L^{ij}(A,\partial A)=\Lambdam^{ij}(A)+\sum_{n=1}^{\infty}\, (-t^2)^n\, & \Lambdam^{ik_1}(A)\, \partial_{k_1} A_{m_1}\, \Lambdam^{m_1k_2}(A) \notag \\ & \times \ \cdots \Lambdam^{m_{n-1}k_n}(A)\, \partial_{k_n}A_{m_n}\, \Lambdam^{m_nj}(A) \ , \label{m5} \end{align} where the function $\Lambdam^{ij}\in s_A^*C_{\rm pol}^\infty(T^*U)[[t]] \subset C_{\rm pol}^\infty(J^1T^*U)[[t]]$ does not depend on the derivatives $\partial A$ and satisfies local first order differential equations on the jet space given by \begin{align} & t\,\big(\partial_A^i\Lambdam^{kj}-\partial_A^j\Lambdam^{ki}\big) \notag \\[4pt] & \hspace{1cm} = -\partial_A^k\underline{\theta}^{ij}(A) + t^2\,\big(\Lambdam^{kl}\,\partial_l\underline{\theta}^{ij}(A) + \Lambdam^{li}\,\partial_A^k\gamma_l^j (A)-\Lambdam^{lj}\,\partial_A^k\gamma_l^i (A) \label{m6} \\ & \hspace{5cm} +\underline{\theta}^{jl}(A)\,\partial_l\Lambdam^{ki} - \underline{\theta}^{il}(A)\,\partial_l\Lambdam^{kj} + \gamma_l^j (A)\,\partial_A^l\Lambdam^{ki} - \gamma_l^i (A)\,\partial_A^l\Lambdam^{kj}\big) \notag \\ & \hspace{8cm} +t^4\,\big(\gamma_m^l (A)\,\Lambdam^{mi}\,\partial_l\Lambdam^{kj} - \gamma_m^l (A)\,\Lambdam^{mj}\,\partial_l\Lambdam^{ki}\big) \ , \notag \end{align} with $\underline{\theta}^{ij}(A):=s_{A}^*\underline{\theta}^{ij}$ and $\gamma_l^j(A):=s_A^*\gamma_l^j$. First we introduce \begin{equation}\label{m7} L^{ij}(A,\partial A)=\Lambdam^{ij}(A)+\bar\Lambdam^{ij}(A,\partial A) \ , \end{equation} where $\Lambdam^{ij}(A)$ is a function of the gauge fields $A$ only, whereas $\bar\Lambdam^{ij}(A,\partial A)$ depends on both gauge fields $A$ and their first order derivatives $\partial A$.
To obtain an equation for the function $\Lambdam^{ij}(A)$, we substitute (<ref>) in (<ref>) and collect the terms which contain only first order derivatives of gauge parameters $\partial_i f\,\partial_j g$ but do not contain derivatives of the gauge fields $\partial A$. The left-hand side of (<ref>) reads \begin{align}\label{m8} L^i_{t\,[\![f,g]\!]_\theta(A)} = t^2\,L^{im}\,\partial_m\big(s_A^*\{\pi^*f,\pi^*g\}_{\omega^{-1}} - L_f^j\,L_g^k\,s_A^*\{(\Phi_A)_j,(\Phi_A)_k\}_{\omega^{-1}} \big) \ , \end{align} and it contributes to (<ref>). The contributions on the right-hand side of (<ref>) from \begin{align} \delta^\theta_fL_g^i&=t^2\,\partial_A^l L^{ij} \big(s_A^*\{\pi^*f,(\Phi_A)_l\}_{\omega^{-1}}+t^2\,L^{km}\,s_A^*\{(\Phi_A)_k,(\Phi_A)_l\}_{\omega^{-1}}\, \partial_kf\big)\,\partial_j g \notag \\ & \quad \, + t^2\,\partial_A^{p;l} L^{ij}\,\partial_p \big(s_A^*\{\pi^*f,(\Phi_A)_l\}_{\omega^{-1}}+t^2\,L^{km}\,s_A^*\{(\Phi_A)_k,(\Phi_A)_l\}_{\omega^{-1}}\, \partial_kf\big)\,\partial_j g \ ,\label{m9} \end{align} and from \begin{align*} s_A^*\big(\tilde\partial{}^i\{\pi^*f,\pi^*g\}_{\omega^{-1}}\big) = t\,\partial_A^i\underline{\theta}^{mn}(A)\,\partial_m f\,\partial_n g \ , \end{align*} together yield \begin{align*} t\,\big(\delta^i_l+t\,\gamma^i_l(A)\big)\partial_A^l\Lambdam^{kj} - t\,\big(\delta^j_l+t\,\gamma^j_l(A)\big)\partial_A^l\Lambdam^{ki} + \partial_A^k\underline{\theta}^{ij}(A) \ . \end{align*} The terms coming from \begin{align*} L_g^l\,s_A^*\big(\tilde\partial{}^i\{\pi^*f,(\Phi_A)_l\}_{\omega^{-1}}\big) = t^3\,L^{lj}\,\partial^i_A\gamma^k_l(A)\,\partial_kf\,\partial_jg-t^3\,L^{lj} \,\partial_A^i\underline{\theta}^{km}(A) \,\partial_k f\,\partial_jg\,\partial_mA_l \end{align*} are given by \begin{align*} t^2\,\Lambdam^{lj}\,\partial_A^k\gamma_l^i(A) - t^2\,\Lambdam^{li}\,\partial_A^k\gamma_l^j(A) \ . \end{align*} There is no further contribution on the right-hand side of (<ref>), since all of the remaining terms contain derivatives of the gauge fields $\partial A$.
Finally, there are the contributions from the second and third lines of (<ref>), involving $s_A^*\{\pi^*f,L_g^i\}_{\omega^{-1}}$ and $s_A^*\{L_f^i,\Phi_A(L_g)\}_{\omega^{-1}}$. Altogether the contributions from such terms result in the differential equation (<ref>). The contributions to (<ref>) containing second order derivatives of gauge parameters $\partial_i\partial_kf\,\partial_j g$ and $\partial_i f\,\partial_j\partial_kg$ should vanish separately. Let us analyse the contribution with $\partial_i\partial_kf\,\partial_j g$. The contribution from the left-hand side of (<ref>) is given by (<ref>) and reads \begin{equation*} t^3\,\big( L^{mk}\,\underline{\theta}^{ij}(A)-t^3\,L^{mk}\,L^{pi}\,L^{lj}\,s_A^*\{(\Phi_A)_p,(\Phi_A)_l\}_{\omega^{-1}}\big)\,\partial_i\partial_kf\,\partial_jg \ . \end{equation*} On the right-hand side of (<ref>), the contribution from $\delta^\theta_f(L_g^i)-\delta^\theta_g(L_f^i)$ comes from the second line of (<ref>). The only remaining terms on the right-hand side of (<ref>) which contain second order derivatives of $f$ and $g$ come from the second and third lines of (<ref>). Thus demanding that (<ref>) hold for such contributions leads to the equations \begin{align*} &\big(\partial_A^{k;l}L^{mj}+t^2\,L^{mk}\,L^{lj}\big)\,\big(\delta^i_l+t\,\gamma^i_l(A)-t\,\underline{\theta}^{in}(A)\,\partial_nA_l+t^2\,L^{pi}\,s_A^*\{(\Phi_A)_p,(\Phi_A)_l\}_{\omega^{-1}}\big) \\ & \qquad + \big(\partial_A^{i;l}L^{mj}+t^2\,L^{mi}\,L^{lj}\big)\,\big(\delta^k_l+t\,\gamma^k_l(A)-t\,\underline{\theta}^{kn}(A)\,\partial_nA_l+t^2\,L^{pk}\,s_A^*\{(\Phi_A)_p,(\Phi_A)_l\}_{\omega^{-1}}\big)=0 \ . 
\end{align*} This is satisfied if \begin{equation}\label{eqlambda} \partial_A^{k;l}L^{ij}+t^2\,L^{ik}\,L^{lj}=0 \ , \end{equation} or taking into account (<ref>) we may rewrite it as \begin{equation*} \partial_A^{k;l} \bar\Lambdam^{ij}+t^2\,\big(\Lambdam^{ik}+\bar\Lambdam^{ik}\big)\, \big(\Lambdam^{lj}+\bar\Lambdam^{lj}\big)=0 \ . \end{equation*} This equation is solved by \begin{equation*} \bar\Lambdam^{ij}(A,\partial A)=\sum_{n=1}^{\infty}\, (-t^2)^n\,\Lambdam^{ik_1}(A)\,\Lambdam^{m_1k_2}(A) \cdots \Lambdam^{m_{n-1}k_n}(A)\,\Lambdam^{m_nj}(A)\, \partial_{k_1} A_{m_1}\cdots \partial_{k_n}A_{m_n} \ , \end{equation*} which yields the expansion (<ref>). To complete the proof we need to verify that the remaining terms in (<ref>) cancel due to (<ref>) and (<ref>). This is tedious but completely straightforward: the cross terms coming from (<ref>) cancel pairwise against the corresponding terms coming from (<ref>).
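The constraint (<ref>) on the coefficient functions can be checked symbolically. Writing $P_{kl}:=\partial_kA_l$ and using that $\Lambdam$ does not depend on $\partial A$, the series (<ref>) resums to the matrix expression $L=\Lambdam\,(\one+t^2\,P\,\Lambdam)^{-1}$. The sympy sketch below (ours, not from the text) verifies that this resummed $L$ satisfies $\partial_A^{k;l}L^{ij}+t^2\,L^{ik}\,L^{lj}=0$ exactly in two dimensions.

```python
import sympy as sp

t = sp.symbols('t')
d = 2
# Lam^{ij}: symbols independent of dA; P[k,l] stands for the jet coordinate d_k A_l
Lam = sp.Matrix(d, d, lambda i, j: sp.Symbol(f'Lam{i}{j}'))
P = sp.Matrix(d, d, lambda k, l: sp.Symbol(f'P{k}{l}'))

# Resummed geometric series: L = Lam - t^2 Lam P Lam + t^4 Lam P Lam P Lam - ...
L = Lam * (sp.eye(d) + t**2 * P * Lam).inv()

# Check d L^{ij} / d P_{kl} + t^2 L^{ik} L^{lj} = 0 for all index values
for i in range(d):
    for j in range(d):
        for k in range(d):
            for l in range(d):
                expr = sp.diff(L[i, j], P[k, l]) + t**2 * L[i, k] * L[l, j]
                assert sp.simplify(expr) == 0
print("resummed L solves the constraint")
```

The check rests on the matrix calculus identity $\partial M^{-1} = -M^{-1}\,(\partial M)\,M^{-1}$ applied to $M=\one+t^2\,P\,\Lambdam$.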
It is now straightforward to construct the functions $\Lambdam^{ij}$ recursively from (<ref>), and we have the following. The functions $\Lambdam^{ij}\in s_A^*C_{\rm pol}^\infty(T^*U)[[t]]$ can be calculated order by order in $t$ as the formal power series \begin{align*} \Lambdam^{ij}(A) = -\frac12\,\Pim^{ijk}\, A_k-\sum_{n=1}^\infty\, \frac{t^{n}}{n+2} \, \Upm^{ijl|k_1\cdots k_{n}} \, A_l\,A_{k_1}\cdots A_{k_{n}} \ , \end{align*} where the functions $\Upm^{ijl|k_1\cdots k_{n}}\in C^\infty(U)$ are skew-symmetric in their indices $jl$, and are given by explicit expressions involving the components of the almost Poisson bivector $\theta$, its Jacobiator $\Pim$, and their derivatives. We recall the expansions (<ref>) (satisfying the local symplectic embedding equations (<ref>) and (<ref>)) and (<ref>), and write $\Lambdam^{ij}(A)$ as a formal power series of the form \begin{align*} \Lambdam^{ij}= \sum_{n=1}^\infty\, t^{n-1} \, \Lambdam_n^{ij} \qquad \mbox{with} \quad \Lambdam^{ij}_n=\Lambdam^{ij|k_1\cdots k_n}\,A_{k_1}\cdots A_{k_n} \ , \end{align*} where $\Lambdam^{ij|k_1\cdots k_n}\in C^\infty(U)$ for each $n\geq1$. At first non-trivial order the differential equation (<ref>) reads \begin{equation*} \partial^i_A\Lambdam^{kj}_1-\partial^j_A\Lambdam^{ki}_1=\Pim^{ijk} \ , \end{equation*} which has solution $ \Lambdam^{ij}_1=-\tfrac12\, \Pim^{ijk}\, A_k $ as expected from (<ref>). At higher orders $t^{n-1}$ for $n\geq2$, the differential equation (<ref>) can be schematically represented as \begin{equation}\label{m17} \partial^i_A\Lambdam^{kj}_n-\partial^j_A\Lambdam^{ki}_n=\Upm_n^{kij} \qquad \mbox{with} \quad \Upm_n^{kij}=\Upm^{kij|k_1\cdots k_{n-1}}\,A_{k_1}\cdots A_{k_{n-1}} \ , \end{equation} where on the right-hand side $\Upm_n^{kij}$ is constructed from the previously determined lower order solutions $\Lambdam^{kj}_m$ with $m<n$, and $\Upm^{kij|k_1\cdots k_{n-1}}\in C^\infty(U)$.
The integrability condition for the equation (<ref>) reads \begin{equation}\label{m18} \partial^k_A\Upm_n^{lij}+\partial^i_A\Upm_n^{ljk}+\partial^j_A\Upm_n^{lki}=0 \ . \end{equation} At second order, after careful calculation and simplification one obtains \begin{align} \Upm^{ilj|k}&= \Pim^{ilm}\,\partial_m\theta^{jk}-\tfrac14\, \Pim^{ijm}\,\partial_m\theta^{lk} \notag \\ & \quad +\tfrac14\, \Pim^{jkm}\,\partial_m\theta^{li}-\tfrac14\, \Pim^{lkm}\,\partial_m\theta^{ji}+\tfrac12\, \theta^{lm}\,\partial_m\Pim^{ijk}-\tfrac12\,\theta^{jm}\,\partial_m\Pim^{ilk} \ , \label{l7} \end{align} where the function $\theta^{lj|ki}$ determines the order $t^2$ contribution to the expansion (<ref>). The integrability condition (<ref>) reads \begin{equation}\label{l8} \Upm^{ilj|k}+\Upm^{ikl|j}+\Upm^{ijk|l}=0 \ . \end{equation} Substituting the explicit form of $\Upm^{ilj|k}$ from (<ref>) into (<ref>) we arrive at \begin{equation}\label{l9} 2\,\big( \theta^{lj|ki}+\theta^{kl|ji}+\theta^{jk|li}\big)-F^{ljki}=0 \ . \end{equation} The expression (<ref>) is exactly the equation from which the function $\theta^{lj|ki}$ was determined in [42], as given explicitly in (<ref>), whose integrability condition is precisely the integrability identity (<ref>) for the Jacobiator $\Pim$. So the integrability condition (<ref>) is indeed satisfied and \begin{align*} \Lambdam_2^{il}=-\tfrac13\, \Upm^{ilj|k}\,A_j\,A_k \ . \end{align*} This recursive construction can be extended in exactly the same way to higher order calculations. The integrability condition (<ref>) is always satisfied as a consequence of the definition of the functions $\Upm_n^{kij}$ from (<ref>) and the construction of the symplectic embedding for the almost Poisson structure $\theta$ from [42].
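The lowest order of this construction is easy to test independently. The sketch below (ours, not from the text) confirms that, for a totally antisymmetric Jacobiator $\Pim^{ijk}=c\,\varepsilon^{ijk}$ in three dimensions, the first order ansatz $\Lambdam_1^{ij}=-\tfrac12\,\Pim^{ijk}\,A_k$ does solve $\partial^i_A\Lambdam^{kj}_1-\partial^j_A\Lambdam^{ki}_1=\Pim^{ijk}$.

```python
import sympy as sp
from sympy import LeviCivita

c = sp.Symbol('c')
A = sp.symbols('A1:4')

# Totally antisymmetric Jacobiator Pi^{ijk} = c eps^{ijk} in d = 3
def Pi(i, j, k):
    return c * LeviCivita(i, j, k)

# First order ansatz Lam_1^{ij} = -(1/2) Pi^{ijk} A_k
Lam1 = [[-sp.Rational(1, 2) * sum(Pi(i, j, k) * A[k] for k in range(3))
         for j in range(3)] for i in range(3)]

# Verify d_A^i Lam_1^{kj} - d_A^j Lam_1^{ki} = Pi^{ijk} for all index values
for i in range(3):
    for j in range(3):
        for k in range(3):
            lhs = sp.diff(Lam1[k][j], A[i]) - sp.diff(Lam1[k][i], A[j])
            assert sp.expand(lhs - Pi(i, j, k)) == 0
print("first order Lagrangian multiplier verified")
```

The factor $\tfrac12$ is forced: each of the two derivative terms contributes one copy of $\Pim^{ijk}$ by total antisymmetry.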
§ Homotopy algebra actions

The reader versed in the BV formalism will have already noticed a strong resemblance between our approach to almost Poisson gauge theories based on symplectic embeddings and the approach to generalized gauge symmetries based on $L_\infty$-algebras. In this framework, the classical BRST construction amounts to quotienting a Poisson algebra of smooth functions by the ideal of functions which vanish on a constraint locus through a process of homological reduction to achieve the gauge closure condition. In this language, the concept of field dependent gauge transformation that we introduced in Section <ref> is very natural. In this section we will show that our almost Poisson gauge symmetries, which do not arise from any Lie algebra action, in fact arise from actions of $L_\infty$-algebras. For this, we provide a dictionary between the symplectic embeddings proposed in the present paper and the corresponding $L_\infty$-algebras of generalized gauge transformations [27]. This can be achieved by identifying the gauge variations defined by (<ref>), and the closure bracket which is determined in (<ref>), with the corresponding objects defined in terms of $L_\infty$-algebras. As an interesting application of these constructions, we obtain a new perspective on the deformation quantization of the exterior algebra of differential forms on an arbitrary almost Poisson manifold, whose semi-classical limit is described by a $P_\infty$-algebra.

§.§ $L_\infty$-algebras and generalized gauge symmetries

Let us start by fixing definitions and conventions. A (flat) $L_\infty$-algebra (also known as a strong homotopy Lie algebra) is a graded vector space $V$ over a field of characteristic zero together with a sequence of linear maps $\ell_n:V^{\otimes n}\to V$ of degree $2-n$, for $n=1,2,\dots$, which satisfy two properties.
Firstly, $\ell_n$ are graded skew-symmetric, \begin{align*} \ell_n(\dots, v_i, v_{i+1},\dots )= -(-1)^{|v_i|\,|v_{i+1}|} \, \ell_n(\dots, v_{i+1}, v_i,\dots) \end{align*} for all $v_1,\dots,v_n\in V$, where $|v|$ denotes the degree of a homogeneous element $v\in V$. Secondly, the linear map defined by \begin{align}\label{eq:altmap} \Lie_n(v_1\otimes\cdots\otimes v_n) := \sum_{k=1}^n\, \frac{(-1)^{k\,(n-k)}}{k!\,(n-k)!} \, \ell_{n-k+1}\big(\ell_k(v_1,\dots, v_k), v_{k+1},\dots, v_n\big) \end{align} vanishes on the image of the alternatization map \begin{align*} {\sf Alt}_n(v_1\otimes\cdots\otimes v_n) = \sum_{\sigma\in S_n} \, {\rm sgn}(\sigma) \, v_{\sigma(1)}\otimes \cdots \otimes v_{\sigma(n)} \ , \end{align*} where the sum runs over permutations $\sigma$ of degree $n$ with sign ${\rm sgn}(\sigma)$. We denote the alternatization of the map (<ref>) by $\CJ_n=\Lie_n\circ{\sf Alt}_n:V^{\otimes n}\to V$. Then the relations $\CJ_n=0$ are called homotopy Jacobi identities; when evaluated explicitly on elements they involve cumbersome sign factors which are determined by the usual Koszul sign rules for skew-symmetric maps of graded vector spaces. In particular, the first relation $\CJ_1=0$ implies that $\ell_1$ is a differential making $(V,\ell_1)$ into a cochain complex, while $\CJ_2=0$ implies that $\ell_1$ is a (graded) derivation of the bracket $\ell_2$, i.e. $\ell_2$ is a cochain map. The third relation $\CJ_3=0$ then implies that the bracket $\ell_2$ obeys the Jacobi identity up to exact terms, i.e. $\ell_2$ induces a (graded) Lie bracket on the cohomology of $\ell_1$. Differential graded Lie algebras can be regarded as $L_\infty$-algebras where $\ell_n=0$ for all $n\geq3$. Generalized gauge symmetries of classical field theories on a manifold $M$ which are irreducible can be encoded in $2$-term $L_\infty$-algebras [27, 34]. 
For the purposes of the present paper, the underlying graded vector space encoding the kinematical data on an open subset $U\subseteq M$ has non-zero homogeneous subspaces only in degrees $0$ and $1$, and is given by \begin{align} \label{eq:2term} V = C^\infty(U) \oplus \Omega^1(U) \ , \end{align} with the grading provided by differential form degree. The gauge variations are then encoded by an $L_\infty$-structure $(\ell_n)_{n=1}^\infty$ on $V$ with $\ell_{n+1}(f, A^{\otimes n})\in\Omega^1(U)$, for $f\in C^\infty(U)$ and $A\in\Omega^1(U)$, as the formal power series \begin{align} \delta_f A &= \sum_{n=0}^\infty\, \frac{t^n}{n!} \, (-1)^{\frac{n\,(n-1)}2} \, \ell_{n+1}(f, A^{\otimes n}) \nonumber \\[4pt] &= \ell_1(f) + t\,\ell_2(f, A)-\tfrac{t^2}2\,\ell_3(f, A, A) + O(t^3) \label{eq:Linftygt} \end{align} in $\Gamma_{\rm pol}(J^1T^*U)[[t]]$. For “standard” gauge symmetries which only involve a finite number of brackets, one can set $t=1$ and avoid the use of formal power series as well as jet coordinates altogether, but the constructions of this paper necessitate their usage in general. For the graded vector space (<ref>), since $\ell_n$ is of degree $2-n$, it vanishes unless its entries contain at least one and at most two gauge parameters. Hence the only non-trivial homotopy Jacobi identities for the $L_\infty$-structure on $V$ involve two and three gauge parameters, that is, $\CJ_{n+2}(f, g, A^{\otimes n} )=0$ and $\CJ_{n+3}(f,g,h,A^{\otimes n})=0$.
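The Koszul sign bookkeeping in these expansions can be spot-checked mechanically. The short script below (ours, not from the text) merely confirms that the general coefficient $(-1)^{n\,(n-1)/2}\,t^n/n!$ reproduces the displayed low orders $\ell_1 + t\,\ell_2 - \tfrac{t^2}{2}\,\ell_3 + O(t^3)$, and likewise for the closure bracket discussed next.

```python
from fractions import Fraction
from math import factorial

def coeff(n):
    # coefficient of ell_{n+1}(f, A^{x n}) in delta_f A, with the power t^n stripped
    return Fraction((-1) ** ((n * (n - 1)) // 2), factorial(n))

# delta_f A = ell_1(f) + t ell_2(f, A) - (t^2/2) ell_3(f, A, A) + O(t^3)
assert [coeff(n) for n in range(3)] == [1, 1, Fraction(-1, 2)]

# [[f,g]](A) = -ell_2(f, g) - t ell_3(f, g, A) + (t^2/2) ell_4(f, g, A, A) + O(t^3)
assert [-coeff(n) for n in range(3)] == [-1, -1, Fraction(1, 2)]
print("low order coefficients match the displayed expansions")
```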
It was shown in [7, 27, 34] that the homotopy Jacobi identities involving two gauge parameters imply the closure of the symmetry variations \begin{equation*} [\one+\delta_{f},\one+\delta_g] A =\delta_{t\,[\![f,g]\!]( A)} A \ , \end{equation*} with the formal power series \begin{align} [\![f,g]\!]( A) &=-\sum_{n= 0}^\infty \, \frac{t^n}{ n!} \, (-1)^{\frac{n\,(n-1)}{ 2}} \, \ell_{n+2}(f, g, A^{\otimes n} ) \nonumber \\[4pt] &= -\ell_2(f, g) - t\, \ell_3(f, g, A) + \tfrac{t^2}2\,\ell_4(f, g, A, A) + O(t^3) \label{eq:Linftyclosure} \end{align} in $C^\infty_{\rm pol}(J^1T^*U)[[t]]$. The homotopy Jacobi identities involving three gauge parameters then guarantee that the strict Jacobi identities \begin{align*} {\sf Cyc}_{f,g,h}\,\big[\one+\delta_h,[\one+\delta_f,\one+\delta_g]\big]A=0 \end{align*} hold for any triple of gauge variations. The following explicit formulas are useful for concrete checks of the homotopy Jacobi identities for gauge transformations. For $f,g,h\in C^\infty(U)$ and $A\in\Omega^1(U)$, the $L_\infty$-relations $\CJ_{n+2}(f, g, A^{\otimes n} )=0$ and $\CJ_{n+3}(f,g,h,A^{\otimes n})=0$ for $n\geq1$ are given explicitly by \begin{align} \CJ_{n+2}\big(f,g,A^{\otimes n}\big) &= \ell_1\big(\ell_{n+2}(f,g,A^{\otimes n})\big) - \ell_{n+2}\big(\ell_1(f),g,A^{\otimes n}\big) - \ell_{n+2}\big(f,\ell_1(g),A^{\otimes n}\big) \notag\\ & \quad \, +\sum_{i=1}^{n}\,(-1)^{(i+1)\,(n-i+1)} \, \bigg[\binom{n}{i-1}\,\ell_{n-i+2}\big(\ell_{i+1}(f,g,A^{\otimes i-1}),A^{\otimes n-i+1}\big) \notag\\ & \quad \, \hspace{4cm} +(-1)^{i} \, \binom{n}{i}\,\ell_{n-i+2}\big(\ell_{i+1}(f,A^{\otimes i}),g,A^{\otimes n-i}\big) \label{Jnfg}\\ & \quad \, \hspace{4.5cm} -(-1)^{i}\,\binom{n}{i}\,\ell_{n-i+2}\big(\ell_{i+1}(g,A^{\otimes i}),f,A^{\otimes n-i}\big)\bigg] \ , \notag \end{align} \begin{align} & \CJ_{n+3}\big(f,g,h,A^{\otimes n}\big) \notag \\[4pt] & \hspace{0.5cm} = {\sf Cyc}_{f,g,h}\bigg( (-1)^n\,\ell_{2}\big(\ell_{n+2}(f,g,A^{\otimes n}),h\big) \notag\\ & \hspace{2.5cm} +\sum_{i=1}^{n}\,(-1)^{(i+1)\,(n-i)}\, \bigg[\binom{n}{i}\,\ell_{n-i+3}\big(\ell_{i+1}(f,A^{\otimes 
i}),g,h,A^{\otimes n-i}\big) \label{Jnfgh}\\ & \quad \, \hspace{5.5cm} -\binom{n}{i-1}\,\ell_{n-i+3}\big(\ell_{i+1}(f,g,A^{\otimes i-1}),h,A^{\otimes n-i+1}\big) \bigg] \bigg) \ . \notag \end{align} The alternatization of the sum $\Lie_{n+2}(f,g,A^{\otimes n})$ in (<ref>) involves, for fixed $n$ and $k$, a sum over $\binom{n+2}{k}=(n+2)!/k!\,(n+2-k)!$ inequivalent splittings of permutations $\sigma\in S_{n+2}$ [34]. If $k=1$ the contribution to (<ref>) is $\ell_{n+2}(\ell_1(f),g,A^{\otimes n})+\ell_{n+2}(f,\ell_1(g),A^{\otimes n})$, and when $k=n+2$ the contribution is $\ell_1(\ell_{n+2}(f,g,A^{\otimes n}))$; note that terms such as $\ell_{n+2}(f,g,\ell_1(A),A^{\otimes n-1})$ do not appear as $\ell_1(A)=0$ for degree reasons. If $k\neq 1,n+2$ then, taking into account that there are $n$ identical entries $A$, the sum over inequivalent splittings of permutations $\sigma\in S_{n+2}$ will contain $\binom{n}{k-2}$ contributions of the type $\ell_{n-k+3}(\ell_k(f,g,A^{\otimes k-2}),A^{\otimes n-k+2})$, $\binom{n}{k-1}$ contributions of each type $\ell_{n-k+3}(\ell_k(f,A^{\otimes k-1}),g,A^{\otimes n-k+1})$ and $\ell_{n-k+3}(\ell_k(g,A^{\otimes k-1}),f,A^{\otimes n-k+1})$, and also $\binom{n}{k}$ elements $\ell_{n-k+3}(\ell_k(A^{\otimes k}),f,g,A^{\otimes n-k})$; these latter contributions however vanish as $\ell_k(A^{\otimes k})=0$ for degree reasons. Altogether, the total number of such contributions is \begin{equation*} \binom{n}{k-2}+\binom{n}{k-1}+\binom{n}{k-1}+\binom{n}{k}=\binom{n+2}{k} \ , \end{equation*} which is exactly the number of all contributions to the sum over $\sigma\in S_{n+2}$. This results in (<ref>), where the sign factor $(-1)^i$ in the last two lines appears from moving $f$ and $g$ from the second argument to the $(i{+}1)$-th argument of the $(n-i+2)$-bracket.
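The counting argument above boils down to a double application of Pascal's rule, which is trivial to confirm numerically (our check, not from the text):

```python
from math import comb

def c(n, k):
    # binomial coefficient with the convention c(n, k) = 0 outside 0 <= k <= n
    return comb(n, k) if 0 <= k <= n else 0

# C(n, k-2) + 2 C(n, k-1) + C(n, k) = C(n+2, k) for all admissible n, k
for n in range(0, 20):
    for k in range(0, n + 3):
        assert c(n, k - 2) + 2 * c(n, k - 1) + c(n, k) == comb(n + 2, k)
print("binomial counting identity verified")
```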
Following the same line of argument we find (<ref>).

§.§ $L_\infty$-structures on Poisson gauge algebroids

It is instructive to start with the simpler case of Poisson gauge transformations from Section <ref>, where we know $[\![f,g]\!]( A) =\{f,g\}_\theta$ to all orders and a Poisson gauge symmetry corresponds to a Lie algebra structure on the graded vector space $V$. Geometrically, this structure is encoded in the action algebroid \begin{align*} C^\infty(U)\ltimes \Omega^1(U) \longrightarrow \Omega^1(U) \end{align*} corresponding to the Lie algebra $(C^\infty(U),\{\,\cdot\,,\,\cdot\,\}_\theta)$ and the Lie module $\Omega^1(U)$ over $C^\infty(U)$; for the moment we drop the formal power series extension to streamline the presentation. This is the Lie algebroid with trivial vector bundle $C^\infty(U)\times\Omega^1(U)$ over $\Omega^1(U)$ (regarded as a Fréchet space with the weak Whitney $C^\infty$-topology), whose anchor map $\rho:C^\infty(U)\times\Omega^1(U)\to T\Omega^1(U)$ sends a pair $(f,A)$ to the gauge variation $\delta^\theta_fA$ of Definition <ref>, and whose Lie bracket is induced by the Poisson bracket $\{\,\cdot\,,\,\cdot\,\}_\theta$ and the gauge variations. Generally, any Lie algebroid gives rise to a cochain complex (its Chevalley-Eilenberg algebra), and in the case of an action algebroid this yields the BRST complex of a classical field theory, which is dual to a gauge $L_\infty$-algebra (see e.g. [35]). In fact, the proper reinstatement of the formal power series extension, as required by the symplectic embedding approach, into this Lie algebroid perspective necessitates the use of $L_\infty$-algebras. For this, let us note the following useful geometric way of thinking about the `Taylor expansion' of the twisting one-forms $s_A^*\gamma(\dd f,\,\cdot\,)$ in (<ref>) which are induced by (<ref>).
Without loss of generality we can assume that the functions $\gamma_i^{j|i_1\cdots i_n} \in C^\infty(U)$ are symmetric in their last $n$ indices, and introduce the $(n+1,1)$-tensor fields $\gamma^{(n)}\in\Omega^1\big(U,\mfX(U)\otimes\mfX(U)^{\odot n}\big)$ which in local coordinates read as \begin{align*} \gamma^{(n)} = n!\,\gamma_i^{j|i_1\cdots i_n}(x) \, \partial_j\otimes(\partial_{i_1}\odot\cdots \odot\partial_{i_n})\otimes \dd x^i \ . \end{align*} Then \begin{align}\label{eq:sAgammaformal} s_A^*\gamma(\dd f,\,\cdot\,) = \sum_{n=1}^\infty\, \frac{t^{n-1}}{n!} \, \gamma^{(n)}\big(\dd f,A^{\otimes n}\big) \end{align} as a formal power series in $\Omega^1(U)[[t]]$. Let $(T^*M,\omega)$ be a local symplectic embedding of a Poisson manifold $(M,\theta)$, and let $U\subseteq M$ be an open subset. Then there is an $L_\infty$-structure $(\ell_n^{\,\theta})_{n=1}^\infty$ on $C^\infty(U)\oplus\Omega^1(U)$, unique up to $L_\infty$-quasi-isomorphism, which induces the Poisson gauge algebroid constructed by Proposition <ref> with non-vanishing maps on coincident degree $1$ entries given by \begin{align*} \ell^{\,\theta}_1(f) & = \dd f \ , \\[4pt] \ell^{\,\theta}_2(f, g)& = -\{f,g\}_\theta \ , \\[4pt] \ell^{\,\theta}_2(f, A) & = \{A,f\}_\theta +\gamma^{(1)}(\dd f,A) \ , \\[4pt] \ell_{n+1}^{\,\theta}\big(f, A^{\otimes n}\big) & = (-1)^{\frac{n\,(n-1)}2} \, \gamma^{(n)}\big(\dd f,A^{\otimes n}\big) \ , \end{align*} and skew-symmetry imposed by definition, for $n\geq2$, $f,g\in C^\infty(U)$, and $A\in\Omega^1(U)$. The brackets $\ell_n^{\,\theta}$ follow from comparing (<ref>) with (<ref>) and (<ref>) with (<ref>) order by order in $t$ using (<ref>). The fact that our symplectic embeddings define Poisson gauge algebras in the sense of Definition <ref>, as we proved in Proposition <ref>, then implies that the statement is a special instance of <cit.> for the case of gauge transformations which are generated by Lie algebra actions (see <cit.>).
The uniqueness statement follows from noting that different completions of the brackets to non-coincident gauge field entries, using the homotopy Jacobi identities, are related by invertible field redefinitions $\chi:\Gamma_{\rm pol}(J^1T^*U)[[t]]\to\Gamma_{\rm pol}(J^1T^*U)[[t]]$, called Seiberg-Witten maps, which leave invariant the gauge parameters $f$ and define deformations of the gauge fields: $\chi(A)\big|_{t=0} = A$ for $A\in\Omega^1(U)$. We define the new gauge transformations $\chi(A)\mapsto \chi(A)+\hat\delta_f^{\theta}\chi(A)$ by setting \begin{align*} \hat\delta_f^\theta := \chi\circ\delta_f^\theta\circ\chi^{-1} \ . \end{align*} Then the field redefinition maps gauge orbits onto gauge orbits: \begin{align*} \chi\big(A+\delta_f^\theta A\big) = \chi(A) + \hat\delta_f^\theta\chi(A) \ , \end{align*} and it preserves the Poisson gauge algebra: \begin{align*} \big[\one+\hat\delta_f^\theta,\one+\hat\delta_g^\theta\big] \chi(A) = \chi\circ\big[\one+\delta_f^\theta,\one+\delta_g^\theta\big]A = \chi\circ\delta_{t\,\{f,g\}_\theta}^\theta A = \hat\delta_{t\,\{f,g\}_\theta}^\theta \chi(A) \ . \end{align*} In [10] it was shown that the Seiberg-Witten maps $\chi$ correspond to $L_\infty$-quasi-isomorphisms which describe the arbitrariness in the definition of the related $L_\infty$-algebras in the $L_\infty$-bootstrap approach [11]. Let $M=\real^d$ with a constant Poisson structure $\theta$. By Example <ref>, in this case we can take $\gamma^{(n)}=0$ for all $n\geq1$. Then the only non-vanishing maps are \begin{align*} \ell_1^{\,\theta}(f) = \dd f \ , \quad \ell_2^{\,\theta}(f,g) = -\{f,g\}_\theta \qquad \mbox{and} \qquad \ell_2^{\,\theta}(f,A) = \{A,f\}_\theta \ . \end{align*} Thus in this case the symplectic embedding of $(\real^d,\theta)$ in $(T^*\real^d,\omega_0+t\,\theta^*)$ makes $C^\infty(\real^d)\oplus\Omega^1(\real^d)$ into a differential graded Lie algebra, with Lie bracket given by the Poisson bracket $\{\,\cdot\,,\,\cdot\,\}_\theta$.
This describes the algebroid of Poisson gauge transformations from Example <ref>. The brackets $\ell_2^{\,\theta}(f,A)$ in Proposition <ref> encode the failure of the exterior derivative $\dd$ to be a derivation of the Poisson algebra in general. The homotopy Jacobi identity $\CJ^{\,\theta}_2(f,g)=0$ reads \begin{align*} \dd\{f,g\}_\theta = \{\dd f,g\}_\theta + \{f,\dd g\}_\theta +\gamma^{(1)}(\dd g,\dd f) - \gamma^{(1)}(\dd f,\dd g) \ , \end{align*} which using (<ref>) can be written locally in components as \begin{align*} \partial_i\{f,g\}_\theta = \{\partial_if,g\}_\theta + \{f,\partial_ig\}_\theta + \partial_i\theta^{jk}\,\partial_jf\,\partial_kg \ . \end{align*} This is just the familiar violation of the Leibniz rule for non-constant Poisson bivectors $\theta$. Using $\ell^{\,\theta}_{n+2}(f,g,A^{\otimes n})=0$ for $n\geq1$, we can demonstrate explicitly that the higher Jacobi identities are equivalent to the local equations (<ref>) for a symplectic embedding of a Poisson manifold, as an illustration of the tight relationship between our symplectic embeddings and the corresponding $L_\infty$-structures. For this, we use Lemma <ref> and write (<ref>) as \begin{align} \CJ^{\,\theta}_{n+2}\big(f,g,A^{\otimes n}\big) &= \ell^{\,\theta}_{n+1}\big(\ell^{\,\theta}_2(f,g),A^{\otimes n}\big) - \ell^{\,\theta}_{n+2}\big(\ell^{\,\theta}_1(f),g,A^{\otimes n}\big) - \ell^{\,\theta}_{n+2}\big(f,\ell^{\,\theta}_1(g),A^{\otimes n}\big) \notag \\ & \quad \, - n\,\Big(\ell^{\,\theta}_{n+1}\big(\ell^{\,\theta}_2(f,A),g,A^{\otimes n-1}\big)-\ell^{\,\theta}_{n+1}\big(\ell^{\,\theta}_2(g,A),f,A^{\otimes n-1}\big) \notag\\ & \quad \, \hspace{1cm} + \ell^{\,\theta}_{2}\big(\ell^{\,\theta}_{n+1}(f,A^{\otimes n}),g\big)-\ell^{\,\theta}_{2}\big(\ell^{\,\theta}_{n+1}(g,A^{\otimes n}),f\big)\Big) \label{Jnfg1}\\ & \quad \, - \sum_{i=3}^{n}\,(-1)^{(i+1)\,(n-i)} \, \binom{n}{i-1} \, \Big(\ell^{\,\theta}_{n-i+3}\big(\ell^{\,\theta}_i(f,A^{\otimes i-1}),g,A^{\otimes n-i+1}\big) \notag\\ & \quad \, \hspace{6cm} - \ell^{\,\theta}_{n-i+3}\big(\ell^{\,\theta}_i(g,A^{\otimes i-1}),f,A^{\otimes n-i+1}\big)\Big) \ .
\notag \end{align} We substitute the brackets from Proposition <ref> in (<ref>), and after some careful simplification in local coordinates its components become \begin{align*} \CJ^{\,\theta}_{n+2}\big(f,g,A^{\otimes n}\big)_i &= (-1)^{\frac{n\,(n-1)}2} \\ & \quad \, \times \, \Big((n+1)\,\big(\gamma_i^{k|li_2\cdots i_{n+1}} - \gamma_i^{l|ki_2\cdots i_{n+1}}\big) \\ & \quad \, \hspace{1cm} +\sum_{m=1}^n\,(n-m+1)\,\big(\gamma_j^{l|i_2\cdots i_{m+1}}\,\gamma_i^{k|ji_{m+2}\cdots i_{n+1}} - \gamma_j^{k|i_2\cdots i_{m+1}}\,\gamma_i^{l|ji_{m+2}\cdots i_{n+1}} \big) \\ & \quad \, \hspace{2cm} -\theta^{kj}\,\partial_j\gamma_i^{l|i_2\cdots i_{n+1}} + \theta^{lj}\,\partial_j\gamma_i^{k|i_2\cdots i_{n+1}} - \gamma_i^{j|i_2\cdots i_{n+1}}\,\partial_j\theta^{lk}\Big) \\ & \quad \, \hspace{8cm} \times \, \partial_kf\,\partial_lg\,A_{i_2}\cdots A_{i_{n+1}} \ . \end{align*} This vanishes as a consequence of (<ref>), and thus the closure condition for the gauge algebra implies that the brackets $\ell_n^{\,\theta}$ indeed define an $L_\infty$-algebra. In other words, the homotopy Jacobi identities are equivalent to the Poisson integrability condition $[\omega^{-1},\omega^{-1}]=0$, as written in (<ref>) and (<ref>) (with $\underline{\theta}^{ij}(x,p)=\theta^{ij}(x)$); algebraically, the brackets follow from a standard higher derived bracket construction [69], as we shall see in Section <ref>. This discussion illustrates the necessity of the $L_\infty$-algebra formulation even in the case of Poisson gauge transformations for non-constant bivectors $\theta$, despite the fact that they are still defined by an underlying Lie algebra action. We can now give a homotopy algebraic answer to the question of uniqueness of symplectic embeddings for a given Poisson manifold $(M,\theta)$. Similarly to the uniqueness statement of Proposition <ref>, different local symplectic embeddings $(T^*M,\omega)$ correspond to field redefinitions of the brackets of the corresponding $L_\infty$-algebras, which are related to one another through $L_\infty$-quasi-isomorphisms [43, 10].
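A minimal concrete instance of the Leibniz rule violation discussed above may be helpful; this simple illustration is ours and is chosen purely for transparency. On $M=\real^2$ take $\theta^{12}=x^1$, so that
\begin{align*}
\{f,g\}_\theta = x^1\,\big(\partial_1f\,\partial_2g-\partial_2f\,\partial_1g\big) \qquad \mbox{and} \qquad \partial_1\{x^1,x^2\}_\theta = \partial_1x^1 = 1 \ ,
\end{align*}
whereas $\{\partial_1x^1,x^2\}_\theta+\{x^1,\partial_1x^2\}_\theta=0$; the discrepancy is accounted for precisely by the correction term $\partial_1\theta^{jk}\,\partial_jx^1\,\partial_kx^2=\partial_1\theta^{12}=1$.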
§.§ $L_\infty$-structures on almost Poisson gauge algebroids The general case of almost Poisson gauge transformations from Section <ref> is markedly different from the Poisson case, as now there is no underlying Lie algebra structure on $V$ and hence no corresponding action algebroid. However, suppressing momentarily the formal power series extension again as well as the jet coordinate dependence, there is still an underlying Lie algebroid with vector bundle \begin{align*} C^\infty(U) \longrightarrow \CCE(U) \longrightarrow \Omega^1(U) \end{align*} whose fibre over a gauge field $A\in \Omega^1(U)$ is $s_A^* C^\infty(T^*U) \simeq C^\infty(U)$. A field dependent gauge parameter can then be identified with a section of this bundle. The bracket and anchor map of this gauge algebroid are induced by the bracket $[\![\,\cdot\,,\,\cdot\,]\!]_\theta$ and the gauge variation $(f,A)\mapsto\delta^\theta_fA$ of Definition <ref>. The corresponding $L_\infty$-structure making this Lie algebroid picture precise is induced by the symplectic embedding construction of Section <ref> and may be presented in the following geometric way. For this, we introduce tensor fields whose local components are the coefficients of the formal power series expansions of Sections <ref> and <ref>, in order to write the gauge variations and brackets of Remark <ref> as formal power series analogously to (<ref>). Assuming without loss of generality that the functions $\theta^{ij|i_1\cdots i_n}\in C^\infty(U)$ for $n\geq 2$ are symmetric in their last $n$ indices, we introduce $n{+}2$-tensors $\theta^{(n)}\in\mfX^2\big(U,\mfX(U)^{\odot n}\big)$ in local coordinates by \begin{align*} \theta^{(n)} := n! \, \theta^{ij|i_1\cdots i_n}(x) \, (\partial_i\wedge\partial_j)\otimes(\partial_{i_1} \odot\cdots\odot \partial_{i_n}) \ . \end{align*} We set $\theta^{(0)}:=\theta$ and $\theta^{(1)}:=-\Pim$. 
Then \begin{align}\label{eq:sAthetaformal} s_{A}^*\underline{\theta}(\pi^*\,\cdot\,,\pi^*\,\cdot\,) = \sum_{n=0}^\infty \, \frac{t^n}{n!} \ \theta^{(n)}\big(\,\cdot\,,\,\cdot\,,A^{\otimes n}\big) \end{align} as a formal power series in $\mfX^2(U)[[t]]$. Similarly, taking $\Upm^{ijk|i_1\cdots i_{n-1}}\in C^\infty(U)$ for $n\geq 2$ to be symmetric in their last $n$ indices, we introduce $n{+}2$-tensors $\Upm^{(n)}\in\mfX\big(U, \mfX(U)\otimes\mfX(U)^{\odot n}\big)$ in local coordinates by \begin{align*} \Upm^{(n)} := n! \, \Upm^{ijk|i_1\cdots i_{n-1}}(x) \, \partial_i\otimes\partial_j \otimes (\partial_k\odot\partial_{i_1}\odot\cdots\odot \partial_{i_{n-1}}) \ . \end{align*} We set $\Upm^{(1)}:=\Pim$. By the construction of Section <ref>, the vector fields $\Upm^{(n)}(\dd f,A^{\otimes n})\in\mfX(U)$ completely determine the formal power series expansions of the Lagrangian multipliers $L_f\in \mfX_{\rm pol}^{\tt h}(J^1T^*U)[[t]]$ through \begin{align}\label{eq:Lfformal} L_f = -\sum_{n=1}^\infty\, \frac{t^{n+1}}{(n+1)!} \, L^{(n)}\big(\dd f,A^{\otimes n}\big) \ , \end{align} where \begin{align*} & L^{(n)}\big(\dd f,A^{\otimes n}\big) := \Upm^{(n)}\big(\dd f,A^{\otimes n}\big) + \sum_{k=1}^{\lfloor \frac{n-1}2 \rfloor} \ \sum_{\stackrel{\scriptstyle l_1,\dots,l_{k+1}\geq 1}{\scriptstyle l_1+\cdots+ l_{k+1}=n-k}} \, \frac{(n+1)!}{(l_1+1)! \cdots (l_{k+1}+1)!} \\ & \hspace{2cm}\ \times \Upm^{(l_1)}\Big(\Upm^{(l_2)\otimes}\big(\DD A,\cdots \Upm^{(l_k)\otimes}(\DD A,\Upm^{(l_{k+1})\otimes}(\DD A,\dd f,A^{\otimes l_{k+1}}),A^{\otimes l_k}), \cdots A^{\otimes l_2}\big), A^{\otimes l_1}\Big) \ , \end{align*} with the sum omitted for $n=1,2$. Let $(T^*M,\omega)$ be a local symplectic embedding of an almost Poisson manifold $(M,\theta)$ with Jacobiator $\Pim$, and let $U\subseteq M$ be an open subset.
Then there is an $L_\infty$-structure $(\ell_n^{\,\theta})_{n=1}^\infty$ on $C^\infty(U)\oplus\Omega^1(U)$, unique up to $L_\infty$-quasi-isomorphism, which induces the almost Poisson gauge algebroid constructed by Proposition <ref> with non-vanishing maps on coincident degree $1$ entries given as follows. The brackets involving a single gauge parameter are given by \begin{align*} \ell^{\,\theta}_1(f) & = \dd f \ , \\[4pt] \ell^{\,\theta}_2(f, A) & = \{A,f\}_\theta +\gamma^{(1)}(\dd f,A) \ , \\[4pt] \ell_3^{\,\theta}\big(f, A^{\otimes 2}\big) & = -\gamma^{(2)}\big(\dd f,A^{\otimes 2}\big) + 2\,\Pim^\otimes(\dd f,A,\DD A) +\dd A\big(\Pim(\dd f,A,\,\cdot\,),\,\cdot\,\big) \ , \\[4pt] \ell_{n+1}^{\,\theta}\big(f,A^{\otimes n}\big) &= (-1)^{\frac{n\,(n-1)}2} \, \bigg[\gamma^{(n)}\big(\dd f,A^{\otimes n}\big) - n\, \theta^{(n-1)\otimes}\big(\dd f,\DD A,A^{\otimes n-1}\big) \\ & \hspace{4cm} - \dd A\big(L^{(n-1)}(\dd f,A^{\otimes n-1}),\,\cdot\,\big) \\ & \quad \, + \sum_{k=2}^{n-1} \, \binom nk \bigg( \gamma^{(n-k)}\Big(\DD A\big(L^{(k-1) \otimes}(\dd f,A^{\otimes k-1})\big),A^{\otimes n-k}\Big) \\ & \quad \hspace{2cm} -\,\theta^{(n-k)\otimes}\big(\DD A,A^{\otimes n-k},L^{(k-1)}(\dd f,A^{\otimes k-1})\big) \\ & \quad \hspace{2cm} -\,(n-k)\,\theta^{(n-k-1)}\Big(\DD A\big(L^{(k-1) \otimes}(\dd f,A^{\otimes k-1})\big),\DD A,A^{\otimes n-k-1}\Big)\bigg)\bigg] \ , \end{align*} for $n\geq3$.
The brackets involving two gauge parameters are given by \begin{align*} \ell_2^{\,\theta}(f,g) &= -\{f,g\}_\theta \ , \\[4pt] \ell_3^{\,\theta}(f,g,A) &= \Pim(\dd f,\dd g,A) \ , \\[4pt] \ell_4^{\,\theta}\big(f,g,A^{\otimes 2}\big) &= \theta^{(2)}\big(\dd f,\dd g,A^{\otimes 2}\big) \ , \\[4pt] \ell_5^{\,\theta}\big(f,g,A^{\otimes 3}\big) &= \theta^{(3)}\big(\dd f,\dd g,A^{\otimes 3}\big)-\tfrac32\, \dd A\big(\Pim(\dd f,A,\,\cdot\,),\Pim(\dd g,A,\,\cdot\,)\big) \ , \\[4pt] \ell_{n+2}^{\,\theta}\big(f,g,A^{\otimes n}\big) &= (-1)^{\frac{n\,(n-1)}2} \\ & \hspace{-2.7cm} \times \bigg[\theta^{(n)}\big(\dd f,\dd g,A^{\otimes n}\big) - \sum_{k=1}^{n-2}\,\frac1{k+1}\,\binom nk \, \dd A\big(L^{(k)}(\dd f,A^{\otimes k}),L^{(n-k-1)}(\dd g,A^{\otimes n-k-1})\big) \\ & \hspace{-2.2cm} + \sum_{k=3}^{n-1}\,\frac1{k+1}\,\binom nk \ \sum_{l=1}^{k-2}\, \binom{k+1}{l+1} \, \bigg(\gamma^{(n-k)}\Big(\DD A\big(L^{(l)\otimes}(\dd f,A^{\otimes l})\big),A^{\otimes n-k},L^{(k-l-1)}(\dd g,A^{\otimes k-l-1})\Big) \\ & \quad \hspace{2.7cm} - \gamma^{(n-k)}\Big(\DD A\big(L^{(l)\otimes}(\dd g,A^{\otimes l})\big),A^{\otimes n-k},L^{(k-l-1)}(\dd f,A^{\otimes k-l-1})\Big) \\ & \hspace{-1cm} -(n-k)\,\theta^{(n-k-1)}\Big(\DD A\big(L^{(l)\otimes}(\dd f,A^{\otimes l})\big),\DD A\big(L^{(k-l-1)\otimes}(\dd g,A^{\otimes k-l-1})\big),A^{\otimes n-k-1}\Big) \bigg) \bigg] \ , \end{align*} for $n\geq 4$. In these expressions skew-symmetry between gauge parameters and gauge fields is imposed by definition, for $f,g\in C^\infty(U)$ and $A\in\Omega^1(U)$. The proof is completely analogous to the proof of Proposition <ref>. The brackets $\ell_n^{\,\theta}$ now follow from a straightforward but laborious comparison of the gauge variation and bracket of Remark <ref> with (<ref>) and (<ref>), respectively, order by order in $t$ using (<ref>), (<ref>) and (<ref>). The fact that Proposition <ref> constructs an almost Poisson gauge algebra, in the sense of Definition <ref>, then implies that the statement is a special instance of <cit.> for the general case of field dependent gauge transformations.
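A quick illustration of the lowest brackets may be useful here; the following example is ours and is chosen purely for simplicity. On $M=\real^3$ take the almost Poisson structure with $\theta^{12}=x^3$, $\theta^{23}=x^2$ and $\theta^{13}=0$. Then
\begin{align*}
\{x^1,\{x^2,x^3\}_\theta\}_\theta + \{x^2,\{x^3,x^1\}_\theta\}_\theta + \{x^3,\{x^1,x^2\}_\theta\}_\theta = \{x^1,x^2\}_\theta = x^3 \neq 0 \ ,
\end{align*}
so $\theta$ is not Poisson, and the ternary bracket $\ell_3^{\,\theta}(f,g,A)=\Pim(\dd f,\dd g,A)$ is genuinely non-vanishing in this case.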
When $\theta$ is a Poisson structure, then the tensors $\Pim$, $\theta^{(n)}$ for $n\geq2$ and $L^{(k)}$ all vanish, and the $L_\infty$-structure of Proposition <ref> reduces to the $L_\infty$-structure of Proposition <ref>. The brackets $\ell_3^{\,\theta}$ in Proposition <ref> coincide exactly with the brackets defined in [11] (after taking into account that the Jacobiator $\Pim$ in [11] differs from ours by a factor $\frac13$). Moreover, the brackets $\ell_{n+2}^{\,\theta}(f,g,A^{\otimes n})$ for $n=0,1,2$ coincide with the brackets obtained in [43]. In particular, the homotopy Jacobi identity $\CJ^{\,\theta}_3(f,g,h)=0$ for $f,g,h\in C^\infty(U)$ reads \begin{align*} {\sf Cyc}_{f,g,h}\,\{f,\{g,h\}_\theta\}_\theta = \tfrac12\,\Pim(\dd f,\dd g,\dd h) \ , \end{align*} which is the familiar violation of the strict Jacobi identity for an almost Poisson bracket with non-vanishing Jacobiator $\Pim$. However, for $n>2$ the brackets $\theta^{(n)}(\dd f,\dd g,A^{\otimes n})$ are corrected by terms involving contributions from the Lagrangian multiplier vector fields and do not on their own constitute the correct $L_\infty$-structure required by the gauge closure condition (<ref>) at higher orders, contrary to the conjecture of [42]. We shall consider an explicit example in Section <ref> below. §.§ $P_\infty$-algebras of exterior differential forms Let us now discuss some potential applications of our constructions to deformation quantization. A central problem in the formulation of noncommutative gauge theories is to find an extension of a star-product, which quantizes an almost Poisson manifold $(M,\theta)$, to the de Rham complex $(\Omega^\bullet(M),\dd)$. Even at the purely kinematical level this problem is non-trivial, as the vector space action mentioned in Remark <ref> does not generally make $\Omega^1(U)[[\hbar]]$ into a $C^\infty(U)[[\hbar]]$-bimodule.
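For orientation, we recall the prototypical quantization underlying this discussion; this is standard material included here only for context. On $\real^d$ with constant $\theta$, the Moyal-Weyl star-product
\begin{align*}
f\star g = f\,g + \frac{i\,\hbar}{2}\,\theta^{ij}\,\partial_if\,\partial_jg + O(\hbar^2)
\end{align*}
quantizes $\theta$, with star-commutator $[f,g]_\star = i\,\hbar\,\{f,g\}_\theta + O(\hbar^3)$, so that the Poisson algebra $(C^\infty(\real^d),\{\,\cdot\,,\,\cdot\,\}_\theta)$ arises as its semi-classical limit; the issue raised above is whether such a deformation can be coherently extended from functions to all differential forms.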
The problem can be traced back to the semi-classical limit: the differential graded algebra $\Omega^\bullet(M)$ is not generally a Poisson algebra, even when $\theta$ is a Poisson bivector field. In the case of symplectic manifolds, the problem of endowing $\Omega^\bullet(M)$ with the structure of a differential graded Poisson algebra is discussed in e.g. [20, 30, 5, 54]; the general construction depends on the choice of an almost symplectic connection or of a contravariant connection. Here we shall present an alternative treatment based on the constructions of this paper which is more general: it does not require the auxiliary data of a connection and works for any almost Poisson bivector field $\theta$. Our main observation is that a symplectic embedding, which always exists locally, naturally induces a $P_\infty$-structure on the de Rham complex of the underlying almost Poisson manifold. A $P_\infty$-algebra is a graded commutative algebra $\CCA$ together with an $L_\infty$-structure $\{\ell_n\}_{n=1}^\infty$ such that the differential $\ell_1:\CCA\to\CCA$ is a derivation of degree $1$ and, for each $n\geq2$ and fixed elements $a_1,\dots,a_{n-1}\in\CCA$, the map $a\mapsto \ell_n(a_1,\dots,a_{n-1},a)$ is a derivation of degree $2-n+\sum_{i=1}^{n-1}\,|a_i|$; this is a strong homotopy version of a Poisson algebra. $P_\infty$-algebras were introduced by Cattaneo and Felder in their approach to quantization of coisotropic submanifolds of Poisson manifolds [17]. Just as a Poisson algebra can be regarded as the semi-classical limit of an associative algebra in deformation quantization, $P_\infty$-algebras arise as semi-classical limits of $A_\infty$-algebras. 
Since a symplectic embedding $(T^*M,\omega)$ of a generic almost Poisson structure $\theta$ similarly involves a Lagrangian submanifold of a symplectic manifold, we can offer a homotopy algebraic explanation for the meaning of the symplectic structure $\omega$ on $T^*M$ away from the zero section, as well as a new perspective on the relation between our symplectic embeddings and the semi-classical limit of a deformation quantization of $\theta$. Let $C$ be any manifold. The general result of <cit.> then constructs a $P_\infty$-structure on the graded commutative algebra $\Gamma\big(C,\midwedge^\bullet E\big)$ of sections of the exterior algebra of any vector bundle $E\to C$ whose total space is a Poisson manifold. Applying this result to $E=T^*M$ with a local symplectic embedding $\omega$ of the almost Poisson structure $\theta$, we find that the almost Poisson bracket on $C^\infty(M)$ can be viewed as part of a $P_\infty$-structure on the de Rham complex of the manifold $M$, induced by the Poisson structure $\omega^{-1}$ on $T^*M$. Let $(T^*M,\omega)$ be a local symplectic embedding of an almost Poisson manifold $(M,\theta)$ with Jacobiator $\Pim$, and let $U\subseteq M$ be an open subset. Then there is a $P_\infty$-structure $\{\varrho_n^{\,\theta}\}_{n=1}^\infty$ on the exterior algebra $\Omega^\bullet(U)$ of differential forms on $U$ defined as follows.
On generators of $\Omega^\bullet(U)$, the non-vanishing brackets are defined by \begin{align} \varrho_1^{\,\theta}(f) &= \dd f \ , \qquad \varrho_1^{\,\theta}(\alpha) = \dd\alpha \ , \nonumber \\[4pt] \varrho_2^{\,\theta}(f,g) &= \{f,g\}_\theta \ , \qquad \varrho_2^{\,\theta}(\alpha,\beta) = \{\alpha,\beta\}_\theta -\gamma^{(1)\otimes}(\DD\alpha,\beta) - \gamma^{(1)\otimes}(\DD\beta,\alpha) \ , \nonumber \\[4pt] \varrho_2^{\,\theta}(f,\alpha) &= \{f,\alpha\}_\theta - \gamma^{(1)}(\dd f,\alpha) \ , \qquad \varrho_3^{\,\theta}(f,g,\alpha) = \Pim(\dd f,\dd g,\alpha) \ , \nonumber \\[4pt] \varrho_n^{\,\theta}(\alpha_1,\dots,\alpha_n) &= \frac{(-1)^n}{(n-1)!} \, \Big( \sum_{i=1}^n\, \gamma^{(n-1)\otimes}\big(\DD\alpha_i,\alpha_1,\dots,\widehat{\alpha_i},\dots,\alpha_n\big) \nonumber \\ & \quad \, \hspace{1cm} - 2\,(n-1) \, \sum_{i<j} \, \theta^{(n-2)\otimes}\big(\DD\alpha_i,\DD\alpha_j,\alpha_1,\dots,\widehat{\alpha_i},\dots,\widehat{\alpha_j},\dots,\alpha_n\big)\Big) \ , \nonumber \\[4pt] \varrho_{n+1}^{\,\theta}(f,\alpha_1,\dots,\alpha_n) &= \frac{1}{n!} \, \gamma^{(n)}(\dd f,\alpha_1,\dots,\alpha_n) \notag \\ & \quad \, + \frac1{(n-1)!} \, \sum_{i=1}^n \, \theta^{(n-1)\otimes}\big(\dd f,\DD\alpha_i,\alpha_1,\dots,\widehat{\alpha_i},\dots,\alpha_n\big) \ , \notag \\[4pt] \varrho_{n+2}^{\,\theta}(f,g,\alpha_1,\dots,\alpha_n) &= \frac{(-1)^n}{n!} \, \theta^{(n)}(\dd f,\dd g,\alpha_1,\dots,\alpha_n) \ , \label{eq:PinftydeRham} \end{align} for $n\geq2$, $f,g\in C^\infty(U)$, and $\alpha,\beta,\alpha_1,\dots,\alpha_n\in\Omega^1(U)$, where a hat indicates omission of the corresponding entry. The brackets are then defined on higher degree forms by uniquely extending (<ref>) to linear maps $\varrho_n^{\,\theta}:\Omega^\bullet(U)^{\otimes n}\to\Omega^\bullet(U)$ as polyderivations. Since $\varrho_n^{\,\theta}$ is of degree $2-n$, it vanishes on elements of degree $0$ or $1$ except in the cases given in (<ref>). Its structural form is merely a translation of the statement of <cit.> to this situation.
That statement tells us that the components of the $P_\infty$-structure on $\Omega^\bullet(U)$ are the Taylor series expansion coefficients, in the transverse coordinates to the zero section $U\subset T^*U$, of the cosymplectic bivector field (<ref>). In local coordinates where $\alpha=\alpha_i\,\dd x^i\in\Omega^1(U)$, and vector fields $X=X^i\,\partial_i \in \mfX(U)$ are regarded as fibre-linear functions $X^i\,p_i$ on $T^*U$, these are given by \begin{align*} \varrho_{n+2}^{\,\theta}(f,g,\alpha_1,\dots,\alpha_n) &= (-1)^n \, \alpha_{1\,i_1}\cdots\alpha_{n\,i_n}\,\tilde\partial^{i_1}\cdots\tilde\partial^{i_n}\{\pi^*f,\pi^*g\}_{\omega^{-1}}\big|_{p=0} \ , \\[4pt] \varrho_{n+1}^{\,\theta}(f,\alpha_1,\dots,\alpha_n)(X) &= \alpha_{1\,i_1}\cdots\alpha_{n\,i_n}\,\tilde\partial^{i_1}\cdots\tilde\partial^{i_n}\{\pi^*f,X^i\,p_i\}_{\omega^{-1}}\big|_{p=0} \ , \\[4pt] \varrho_n^{\,\theta}(\alpha_1,\dots,\alpha_n)(X,Y) &= (-1)^n\,\alpha_{1\,i_1}\cdots\alpha_{n\,i_n}\,\tilde\partial^{i_1}\cdots\tilde\partial^{i_n}\{X^i\,p_i,Y^j\,p_j\}_{\omega^{-1}}\big|_{p=0} \ . \end{align*} Substituting the series expansions (<ref>) and (<ref>) at $t=1$, and using the Koszul formula for the de Rham differential, after some calculation we arrive at the formulas (<ref>). One can check directly that these brackets extend to a $P_\infty$-structure: the homotopy Jacobi identities are equivalent to the Poisson integrability condition $[\omega^{-1},\omega^{-1}]=0$, as written in (<ref>) and (<ref>), and indeed the brackets follow algebraically from a standard higher derived bracket construction (see <cit.>). Let $M=\real^d$ with a constant Poisson structure $\theta$.
In this case all tensors $\Pim$, $\gamma^{(n)}$ and $\theta^{(n)}$ for $n\geq1$ vanish in Proposition <ref>, and the only non-zero brackets are given by \begin{align*} \varrho^{\,\theta}_1(\xi) = \dd \xi \qquad \mbox{and} \qquad \varrho_2^{\,\theta}(\xi,\zeta) = \{\xi,\zeta\}_\theta \ , \end{align*} for all $\xi,\zeta\in\Omega^\bullet(\real^d)$. In this case we recover the well-known realization of $\Omega^\bullet(\real^d)$ as a differential graded Poisson algebra. In general, however, the $P_\infty$-algebra of Proposition <ref> involves infinitely-many non-zero brackets on the exterior algebra of differential forms, even for Poisson bivectors. An $L_\infty$-algebroid is a Lie algebroid together with an $L_\infty$-structure on its Chevalley-Eilenberg algebra whose differential is the Lie algebroid differential; it is a $P_\infty$-algebroid if the brackets of the $L_\infty$-structure are polyderivations. Hence an equivalent way of stating Proposition <ref> is that a symplectic embedding makes the tangent Lie algebroid over an almost Poisson manifold into a $P_\infty$-algebroid. In this language, Proposition <ref> is essentially a special case of <cit.> when $(M,\theta)$ is a Poisson manifold. Proposition <ref> implies a new homotopy algebraic construction of a deformation quantization of the exterior algebra $\Omega^\bullet(U)$ in the direction of a generic almost Poisson bracket. Since $M$ is a Lagrangian submanifold of the local symplectic embedding $(T^*M,\omega)$, and since for our local symplectic embeddings the cohomological obstructions of <cit.> are trivial, we may apply the result of <cit.> to quantize the $P_\infty$-algebra of Proposition <ref> to an $A_\infty$-structure on $\Omega^\bullet(U)[[\hbar]]$ over $\real[[\hbar]]$, which is a deformation of the exterior product on $\Omega^\bullet(U)$ and whose semi-classical limit induces the $P_\infty$-structure $\{\varrho_n^{\,\theta}\}_{n=1}^\infty$ in the following sense. 
The alternatization of the structure maps of this $A_\infty$-algebra, which are polydifferential operators, defines an $L_\infty$-structure $\{\varrho_n^{\,\star}\}_{n=1}^\infty$ on $\Omega^\bullet(U)[[\hbar]]$. Then $\varrho_n^{\,\theta} = \frac1\hbar\,\varrho_n^{\,\star}\big|_{\hbar=0}$, and in this sense we may regard the structure maps $\{\varrho_n^{\,\theta}\}_{n=1}^\infty$ as a semi-classical limit of $\{\varrho_n^{\,\star}\}_{n=1}^\infty$. These considerations are based on a version of Kontsevich's formality theorem for the case of Lagrangian submanifolds of a symplectic manifold. The role of homotopy algebras in nonassociative deformation quantization of twisted Poisson structures was anticipated by [55], and the present observation makes this precise for arbitrary almost Poisson manifolds. The details are beyond the scope of this paper and will be explored elsewhere. Poisson gauge algebras. In the case of a Poisson structure $\theta$, the structure maps of Proposition <ref>, truncated to degrees $0$ and $1$, with $\theta^{(n)}=0$ for all $n\geq1$ and $\alpha=\beta=\alpha_1=\dots=\alpha_n=A$, essentially agree with those of Proposition <ref> up to numerical factors. This suggests that the $L_\infty$-structure of Proposition <ref> is compatible with a graded commutative algebra structure on $V=C^\infty(U)\oplus\Omega^1(U)$, such that the Poisson bracket on $C^\infty(U)$ is part of a $P_\infty$-structure on the $2$-term cochain complex $(V,\dd)$. This expectation turns out to be correct and is the content of the following result. Let $(M,\theta)$ be a Poisson manifold and $U\subseteq M$ an open subset. Let $\CCA=C^\infty(U)\oplus\Omega^1(U)$ be the graded commutative algebra with multiplication defined by truncating the product on the exterior algebra $\Omega^\bullet(U)$ at degree $1$, that is, \begin{align*} f\cdot g = f\,g \ , \quad f\cdot A = f\, A \qquad \mbox{and} \qquad A\cdot B=0 \end{align*} for all $f,g\in C^\infty(U)$ and $A,B\in\Omega^1(U)$.
Then the $L_\infty$-structure of Proposition <ref> turns $\CCA$ into a $P_\infty$-algebra. This follows from the fact that the subalgebra of Proposition <ref> obtained by truncation to degrees $0$ and $1$, and subsequent replacement of $\theta$ with $-\theta$, is precisely the stated $L_\infty$-structure on $\CCA$. It is also an easy direct check using the Leibniz rules for the exterior derivative $\dd$, the Poisson bracket on $C^\infty(U)$, and the $\Omega^1(U)$-valued bracket of Remark <ref>. The significance of this observation is that it offers a natural path towards a homotopy algebraic construction of noncommutative gauge transformations beyond the semi-classical level. Similarly to the discussion of Remark <ref>, the $P_\infty$-algebra of Proposition <ref> can be quantized to an $A_\infty$-structure on $\CCA[[\hbar]]$ over $\real[[\hbar]]$ which is a deformation of the algebra structure on $\CCA$ and whose semi-classical limit induces the $P_\infty$-structure $\{\ell_n^{\,\theta}\}_{n=1}^\infty$ through the alternatization $\{\ell_n^{\,\star}\}_{n=1}^\infty$ of the structure maps of this $A_\infty$-algebra: $\ell_n^{\,\theta} = \frac1\hbar\,\ell_n^{\,\star}\big|_{\hbar=0}$. Then the Poisson gauge algebra is a semi-classical limit of a noncommutative gauge algebra which is organized by the “quantum” $L_\infty$-structure $\{\ell_n^{\,\star}\}_{n=1}^\infty$, making the discussion at the beginning of Section <ref> somewhat more precise. While this is an interesting approach to the construction of noncommutative gauge symmetries, it also lies beyond the scope of the present paper and we leave it for future investigation. Almost Poisson gauge algebras. The situation is much more complicated in the case of a general almost Poisson bivector $\theta$. 
Now, because of the presence of the non-vanishing brackets $\varrho_{n+2}^{\,\theta}$ in Proposition <ref> involving two functions and forms of higher degree, the corresponding truncation does not determine a subalgebra, as this would violate the homotopy Jacobi identities. The difference between the $L_\infty$-structure maps of Propositions <ref> and <ref> (aside from numerical factors) lies entirely in the terms involving the Lagrangian multipliers of the almost Poisson gauge algebroid, whose inclusion restores the homotopy Jacobi identities. As we will now show, these terms violate the derivation properties of the original $P_\infty$-algebra from Proposition <ref>. For this, let us study in detail the derivation properties of the brackets $\ell^{\,\theta}_{n+2}(f,g,A^{\otimes n})$ and $\ell^{\,\theta}_{n+1}(f,A^{\otimes n})$ from Proposition <ref> with the same graded commutative algebra $\CCA$ as in Proposition <ref>. First of all, for the multiplication of gauge parameters it is clear that the derivation property holds: \begin{align*} \ell^{\,\theta}_{n+2}(f \cdot h,g,A^{\otimes n})&=f\cdot \ell^{\,\theta}_{n+2}(h,g,A^{\otimes n}) + \ell^{\,\theta}_{n+2}(f,g,A^{\otimes n})\cdot h \ , \\[4pt] \ell^{\,\theta}_{n+1}(f \cdot h,A^{\otimes n})&=f\cdot \ell^{\,\theta}_{n+1}(h,A^{\otimes n}) + \ell^{\,\theta}_{n+1}(f,A^{\otimes n})\cdot h \ . \end{align*} For the multiplication of gauge fields by gauge parameters, consider first the brackets involving two gauge parameters. Since brackets with three gauge parameters vanish for degree reasons, the desired derivation property is just $C^\infty(U)$-linearity: \begin{equation}\label{ph2} \ell^{\,\theta}_{n+2}(f ,g, h\cdot A,A^{\otimes n-1})=h\cdot \ell^{\,\theta}_{n+2}(f ,g,A^{\otimes n}) \ . \end{equation} This essentially means that the field dependent gauge parameter $[\![f,g]\!]_\theta$ from Proposition <ref>, when evaluated on the argument $h\cdot A$, should not depend on the derivatives of $h$.
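At the lowest order this is immediate; we make the check explicit for illustration. Since $\Pim$ is a tensor field,
\begin{align*}
\ell^{\,\theta}_3(f,g,h\cdot A) = \Pim(\dd f,\dd g,h\,A) = h\cdot \Pim(\dd f,\dd g,A) = h\cdot \ell^{\,\theta}_3(f,g,A) \ ,
\end{align*}
with no derivatives of $h$ appearing.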
The potentially problematic terms are the ones involving the Lagrangian multiplier vector fields. However, the functions $\Lambdam^{ij}(A)$ constructed in Corollary <ref> satisfy \begin{equation} \Lambdam^{ij}(A)\,A_i=0 \ , \label{tr1} \end{equation} and together with Proposition <ref> it now easily follows that \begin{equation*} L^{ij}\big(A,\partial(h\,A)\big)=L^{ij}(A,h\,\partial A) \ . \end{equation*} Moreover, from (<ref>) the identity (<ref>) also implies \begin{equation*} L^{ij}(A,\partial A)\,A_i=0 \ , \end{equation*} and as a consequence the terms with derivatives of $h$ disappear from $s_{h\,A}^*\{\Phi_{h\,A}(L_f),\Phi_{h\,A}(L_g)\}_{\omega^{-1}}$. It follows that all terms involving derivatives of $h$ also disappear from $[\![f,g]\!]_\theta (h\,A)$ and this implies (<ref>). In other words, the map $A\mapsto\ell^{\,\theta}_{n+2}(f ,g, \alpha_1,\dots,\alpha_{n-1},A)$ is a derivation, for all $n\geq1$ and $\alpha_1,\dots,\alpha_{n-1}\in\Omega^1(U)$. For the brackets with a single gauge parameter the desired derivation property reads \begin{equation}\label{tr3} \ell^{\,\theta}_{n+1}(f,h\cdot A, A^{\otimes n-1})=h\cdot\ell^{\,\theta}_{n+1}(f,A^{\otimes n}) +(-1)^{n-1} \ell^{\,\theta}_{n+1}(f,h,A^{\otimes n-1})\cdot A \ . \end{equation} This relation is more complicated since it involves brackets of different nature and different graded symmetry. In particular, since the bracket with two gauge parameters is skew-symmetric in $f$ and $h$, the relation (<ref>) implies the consistency condition \begin{equation}\label{tr4} \ell^{\,\theta}_{n+1}(f,h\cdot A, A^{\otimes n-1})+\ell^{\,\theta}_{n+1}(h,f\cdot A, A^{\otimes n-1}) = h\cdot\ell^{\,\theta}_{n+1}(f,A^{\otimes n})+f\cdot\ell^{\,\theta}_{n+1}(h,A^{\otimes n}) \ .
\end{equation} For $n=1$ it is easy to see that the bracket $\ell_2^{\,\theta}$ from Proposition <ref> satisfies (<ref>), while for $n=2$ the relevant brackets are \begin{align*} \ell^{\,\theta}_3(f,h,A)&=\Pim(\dd f,\dd h,A) \ , \\[4pt] \ell^{\,\theta}_3(f,A,B)&=-\gamma^{(2)}(\dd f,A,B) + \Pim^\otimes(\dd f,A,\DD B) + \Pim^\otimes(\dd f,B,\DD A) \\ & \quad \, + \tfrac12\,\dd A\big(\Pim(\dd f,B,\,\cdot\,),\,\cdot\,\big) + \tfrac12\,\dd B\big(\Pim(\dd f,A,\,\cdot\,),\,\cdot\,\big) \ , \end{align*} for $f,h\in C^\infty(U)$ and $A,B\in\Omega^1(U)$. One then calculates explicitly \begin{align*} \ell^{\,\theta}_3(f,h\cdot A,A) &= -h\cdot\gamma^{(2)}(\dd f,A,A) + h\cdot\Pim^\otimes(\dd f,A,\DD A) + \Pim^\otimes(\dd f,A, \dd h\otimes A + h\,\DD A) \\ & \quad \, +\tfrac12\,(\dd h\wedge A)\big(\Pim(\dd f,A,\,\cdot\,),\,\cdot\,\big) + \tfrac12\, h\cdot\dd A\big(\Pim(\dd f,A,\,\cdot\,),\,\cdot\,\big) \\ & \quad \, + \tfrac12\, h\cdot\dd A\big(\Pim(\dd f,A,\,\cdot\,),\,\cdot\,\big) \\[4pt] &= h\cdot\big(-\gamma^{(2)}(\dd f,A,A) + 2\,\Pim^\otimes(\dd f,A,\DD A)+\dd A(\Pim(\dd f,A,\,\cdot\,),\,\cdot\,) \big) \\ & \quad \, +\Pim(\dd f,A,\dd h)\cdot A +\tfrac12\,\Pim(\dd f, A,\dd h)\cdot A - \tfrac12\,\dd h\cdot\Pim(\dd f,A,A) \ , \end{align*} showing that already for $n=2$ the derivation property (<ref>) is violated. Starting from the next order $n=3$, even the consistency condition (<ref>) is violated. In the case of a Poisson structure the higher brackets all satisfy (<ref>) because of their $\CCA$-linearity, but for generic almost Poisson structures the derivation property imposes severe restrictions on the $L_\infty$-algebra. We summarise the present discussion in the following statement. Let $(M,\theta)$ be an almost Poisson manifold with non-zero Jacobiator $\Pim$, and let $U\subseteq M$ be an open subset.
Then the $L_\infty$-structure of Proposition <ref> and the graded commutative algebra structure of Proposition <ref> do not combine into a compatible $P_\infty$-structure on $\CCA$. We do not solve the problem of finding a $P_\infty$-structure on the almost Poisson gauge algebroid in this paper, but let us briefly mention some possible ways that one may proceed. As we have shown above, the derivation properties are violated by precisely the terms involving the Lagrangian multipliers $L_f$, which were introduced to cancel the second term in the commutator of two gauge transformations in (<ref>). Instead of including $L_f$, one could try to cure the problem by identifying this term as the contribution from a non-zero homogeneous subspace $V_2$ in degree $2$. This would lead to a $3$-term $L_\infty$-algebra, which is likely a $P_\infty$-algebra under a suitable extension of the truncated exterior product on $C^\infty(U)\oplus \Omega^1(U)$. However, the gauge theory interpretation of the space $V_2$ is not clear, and this appears to be a general feature of physical systems based on almost Poisson algebras: to maintain all desirable features one inevitably needs to introduce some auxiliary (unphysical) degrees of freedom, such that the elimination of these auxiliary variables comes at the price of losing some of the desired properties (see [46] for an example of this). In the present situation, we lose the Leibniz rule for the $L_\infty$-structure, but still retain a well-defined gauge algebra. Another possibility would be to work instead with a curved $L_\infty$-algebra. A curving of an $L_\infty$-structure $\{\ell_n\}_{n=1}^\infty$ on a graded vector space $V$ is an additional map $\ell_0$ of degree $2$ from the ground field into $V$, which intertwines with the higher brackets through the homotopy Jacobi identities extended to $\{\ell_n\}_{n=0}^\infty$; then $\ell_1$ is no longer a differential and one loses the underlying cochain complex.
In our situation, it may be possible to redefine the brackets $\ell_n^{\,\theta}$ (via an $L_\infty$-quasi-isomorphism) to absorb the violation of the Leibniz rule, which may then violate the standard $L_\infty$-relations, but may instead form a curved $L_\infty$-algebra. However, a curved $L_\infty$-algebra also necessarily contains a non-zero homogeneous subspace $V_2$ in degree $2$, whose meaning at the purely kinematic level of gauge symmetries is not clear. Finally, one could work with a weaker notion of $P_\infty$-structure, where the graded commutative product is also replaced by a sequence of higher products such that the commutativity of the product and the Leibniz rule hold only up to homotopy. It would be interesting to explore all of these modifications of the $L_\infty$-structure exhibited in Proposition <ref> in order to fully elucidate the structure of the almost Poisson gauge algebroid, and its extension to noncommutative and nonassociative gauge transformations. Our constructions of $P_\infty$-structures above differ in several ways from the treatments of <cit.> and <cit.>, where an $L_\infty$-structure on the de Rham complex, truncated at degree $2$, was constructed for generic almost Poisson structures. Firstly, the $L_\infty$-structure of Proposition <ref> differs for brackets involving forms of degree higher than one: for example, the $2$-bracket of $E\in\Omega^2(U)$ and $f\in C^\infty(U)$ is simply the almost Poisson bracket in [11, 44], while in our case the $2$-bracket is \begin{align*} \varrho^{\,\theta}_2(f,E) = \{f,E\}_\theta - 2\,\gamma^{(1)\otimes}(\dd f,E) \ . \end{align*} On the one hand the brackets of [11, 44] do not define a $P_\infty$-structure, while on the other hand the brackets of Proposition <ref> are not designed to close an arbitrary gauge algebra.
For a Poisson bivector $\theta$, the $P_\infty$-algebra of Proposition <ref> contains the Poisson gauge algebra of Proposition <ref>, and following the standard $L_\infty$-algebra formulation of field theories [34, 35], it also contains the higher degree spaces needed to formulate the dynamics of a particular Poisson gauge theory. For an almost Poisson structure, one can use the $P_\infty$-algebra in this way to formulate a gauge algebra without the need of Lagrangian multipliers, at the price of obtaining a closure condition that involves the field equations in addition to the field dependent gauge transformations [34, 35]. For example, in $d=3$ dimensions the brackets appear to define an almost Poisson Chern-Simons theory which is different from that of [11, 44]; it would be interesting to further develop this field theory, which in our symplectic embedding approach can be written down concisely and explicitly to all orders using the brackets (<ref>). Secondly, an $A_\infty$-structure on $\CCA[[\hbar]]$ is sketched in <cit.>, where the first few multiplication maps are deduced up to order $\hbar^2$. However, this structure is different from what is proposed above, as in their case the classical limit reduces the $A_\infty$-algebra to the differential graded commutative algebra $(\CCA,\dd)$, whereas here we propose that the classical limit should simply be the algebra $\CCA$ without further structure. In other words, even the differential of the $A_\infty$-structure should be considered as part of the deformation quantization of the graded commutative algebra $\CCA$. 
§.§ Comparison with the $L_\infty$-bootstrap According to the prescription of the $L_\infty$-bootstrap approach to constructing almost Poisson gauge algebroids [11], one starts with the natural structure maps \begin{align*} \ell_1(f)=\dd f \qquad \mbox{and} \qquad \ell_2(f,g)=-\{f,g\}_\theta \ , \end{align*} and attempts to construct the rest of the $L_\infty$-structure by consistently solving the homotopy Jacobi identities order by order. Let us briefly comment on the benefits of our approach to deformations of gauge transformations, based on symplectic embeddings, over the approach based on the $L_\infty$-bootstrap, which was used in [43] to propose recursion relations for the construction of the gauge $L_\infty$-algebras: * In the $L_\infty$-bootstrap approach, one can define contributions to the gauge transformations order by order in the formal deformation parameter $t$. Symplectic embeddings are more appropriate for computing explicit all-orders expressions, which are sometimes asymptotic expansions of analytic functions known in closed form. We will illustrate this in several examples in Section <ref> below. * Following the method proposed in [43], one can recursively construct the brackets of the form $\ell_{n+1}(f,\ell_1(g), \ell_1(h),\dots)$ from previously defined brackets at lower orders. In the case of Poisson deformations of gauge theories, there is no problem in restoring $\ell_{n+1}(f,A^{\otimes n})$ from the given brackets $\ell_{n+1}(f,\ell_1(g), \ell_1(h),\dots)$. However, in the case of almost Poisson deformations the situation is much more complicated, as the passage from the brackets $\ell_{n+1}(f,\ell_1(g), \ell_1(h),\dots)$ to $\ell_{n+1}(f,A^{\otimes n})$ is extremely non-trivial. This problem is circumvented when working with symplectic embeddings. * From purely technical and calculational standpoints, the approach of this paper based on symplectic embeddings is significantly simpler than the bootstrap approach proposed in [43].
* The $L_\infty$-bootstrap approach describes a linearization of the complete $A_\infty$-structure of noncommutative gauge variations, which loses part of the information required to completely determine the semi-classical limit of the full noncommutative gauge transformations. The missing information is a derivation property, which requires a $P_\infty$-algebra to describe the semi-classical limit, and this is naturally captured by our symplectic embedding construction. In previous sections we already looked at the two simplest examples of Poisson bivector fields: the case of a manifold $M$ with the trivial Poisson structure $\theta_0=0$, for which the symplectic embedding is given by the associated symplectic groupoid $(T^*M,\omega_0)$ for $M$, and $M=\real^d$ with a constant Poisson structure $\theta(x)=\theta$, whose symplectic embedding is given by the strict deformation of the cotangent bundle $T^*\real^d$ with symplectic form $\omega=\omega_0+t\,\theta^*$; the corresponding Poisson gauge transformations were described in Example <ref> and the associated $L_\infty$-structure on the gauge algebroid in Example <ref>. The purpose of this final section is to extend these basic examples to some more complicated examples, and in particular to consider an example of an almost Poisson structure. These cases will generally involve symplectic embeddings of $(M,\theta)$ given by a formal deformation of the cotangent bundle of $M$, while the (almost) Poisson gauge algebroid is correspondingly described by an $L_\infty$-algebra which in general is no longer simply a differential graded Lie algebra and involves infinitely many brackets. §.§ Linear Poisson structures We consider first a large class of Poisson structures whereby the symplectic embedding can be described as a strict deformation of $(T^*M,\omega_0)$. Let $\mfg$ be a Lie algebra of dimension $d$ with structure constants in a given basis denoted by $f^{ij}_k$. 
On $M=\real^d$ we can define the linear Poisson bivector field [1] \begin{equation}\label{e1} \theta^{ij}(x)=f^{ij}_k\,x^k \ , \end{equation} which, by regarding $\real^d$ as the dual of the Lie algebra $\mfg$, is the Kirillov-Kostant Poisson structure on $\mfg^*$. In this case the function $\Sigma(p,p',x)$ in (<ref>) is a generating function for the Dynkin series for the Baker-Campbell-Hausdorff formula for the Lie algebra $\mfg=(\real^d)^*$; if $G$ is any Lie group whose Lie algebra is $\mfg$, then an integrating symplectic groupoid is $T^*G\simeq G \ltimes\mfg^*$, regarded as the action groupoid with respect to the coadjoint action, see e.g. [16]. Using the polydifferential representation constructed in [29, 26, 49] one finds, in the notation of Section <ref>, that a local symplectic embedding of the Poisson structure (<ref>) can be written as \begin{equation}\label{e3} \gamma^i_j(p)= \sum_{n=1}^\infty \, \frac{t^{n-1}\,B_n}{n!} \, f^{ij_1}_{k_1}\,f^{k_1j_2}_{k_2}\cdots f^{k_{n-1}j_n}_{j}\, p_{j_1}\cdots p_{j_n} \ , \end{equation} where $B_n$ are the Bernoulli numbers ($B_n=-\frac12,\frac16,0,-\frac1{30},0,\dots$ for $n=1,2,3,4,5,\dots$). Suppose that the structure constants of $\mfg$ are chosen such that the $d{\times}d$ matrix $\sf M$, with elements ${\sf M}^{i}_l(p):={f}^{ij_1}_k\, {f}^{kj_2}_l\,p_{j_1}\,p_{j_2}$, is diagonalizable for all transverse coordinates $p$. Following [49], one may then construct an analytic function $\gamma^i_j(p)$ whose Taylor expansion around $t=0$ coincides with the asymptotic series (<ref>). For this, we observe that (<ref>) can be rewritten as \begin{equation}\label{e4} \gamma^i_j(p)=-\tfrac12\,f^{il}_j\,p_{l}+\tfrac1t\,\mathcal{X}\big(-t^2\,{\sf M}/{2}\big)^i_j \ , \end{equation} where $\mathcal{X}({\sf M})^i_j$ is the matrix-valued function with \begin{equation*} \mathcal{X}(u)=\sqrt{\tfrac{u}{2}}\cot\sqrt{\tfrac{u}{2}}-1=\sum_{n=1}^\infty\, \frac{(-2)^n\,B_{2n}\,u^{n}}{(2n)!} \ , \end{equation*} and the power series converges for $u\in\complex$ with $|u|<2\pi^2$.
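As a sanity check on the stated series for $\mathcal{X}(u)$ (our own verification, not part of the paper), the Bernoulli-number expansion can be compared numerically against the closed form $\sqrt{u/2}\cot\sqrt{u/2}-1$ in plain Python; the Bernoulli recurrence and the tolerance used here are our choices:

```python
from fractions import Fraction
from math import comb, factorial, sqrt, tan

def bernoulli(n_max):
    """Bernoulli numbers B_0..B_n_max (convention B_1 = -1/2),
    from the recurrence sum_{k=0}^{m} C(m+1,k) B_k = 0 for m >= 1."""
    B = [Fraction(1)]
    for m in range(1, n_max + 1):
        s = sum(comb(m + 1, k) * B[k] for k in range(m))
        B.append(-s / (m + 1))
    return B

B = bernoulli(24)
assert (B[1], B[2], B[4]) == (Fraction(-1, 2), Fraction(1, 6), Fraction(-1, 30))

def X_series(u, N=12):
    # partial sum of sum_{n>=1} (-2)^n B_{2n} u^n / (2n)!
    return sum((-2) ** n * float(B[2 * n]) * u ** n / factorial(2 * n)
               for n in range(1, N + 1))

def X_closed(u):
    # sqrt(u/2) * cot(sqrt(u/2)) - 1; the nearest pole of cot is at u = 2*pi^2
    x = sqrt(u / 2)
    return x / tan(x) - 1.0

for u in (0.3, 0.7, 1.2):
    assert abs(X_series(u) - X_closed(u)) < 1e-12
```

The rapid agreement reflects the growth $|B_{2n}|\sim 2\,(2n)!/(2\pi)^{2n}$, which is also what fixes the radius of convergence $2\pi^2$.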
Since ${\sf M}$ is diagonalizable, there exists a non-degenerate $d{\times}d$ matrix $\sf S$ such that \begin{equation*} {\sf M}={\sf S}\, {\sf D}\, {\sf S}^{-1} \ , \end{equation*} where $\sf D$ is the diagonal matrix whose entries are the eigenvalues $\lambda_1(p),\dots,\lambda_d(p)$ of $\sf M$ on the diagonal. Thus (<ref>) becomes \begin{equation*} \gamma^i_j(p)=-\tfrac12\,f^{il}_j\,p_{l}+\tfrac1t\,\big[{\sf S}\, \mathcal{X}\big(-t^2\,{\sf D}/{2}\big)\, {\sf S}^{-1}\big]^{i}_j \ , \end{equation*} where \begin{equation*} \mathcal{X}({\sf D})=\begin{pmatrix} \mathcal{X}(\lambda_1) & & \\ & \ddots & \\ & & \mathcal{X}(\lambda_d) \end{pmatrix} \ . \end{equation*} Let us now consider two particular examples. Let $\mfg=\mathfrak{su}(2)$ with the Lie-Poisson structure \begin{align*} \theta^{ij}(x) = 2\,\varepsilon^{ij}{}_k\,x^k \end{align*} on $\mathfrak{su}(2)^*=\real^3$, where $\varepsilon^{ijk}$ is the Levi-Civita symbol in three dimensions, and we used the standard Euclidean inner product on $\real^3$ to raise and lower indices: $\varepsilon^{ij}{}_k:=\varepsilon^{ijl}\,\delta_{lk}$; the factor of $2$ is just for convenience. In this case the generalized Bopp shift (<ref>) is given by [49] \begin{align*} \pi_\theta(x,p)^i = x^i - t\,\varepsilon^{ij}{}_k\,p_j\,x^k + t^2\,\chi\big(t^2\,|p|^2\big)\,\big(x^i\,|p|^2-p^i\,p_j\,x^j\big) \ , \end{align*} from which one calculates \begin{equation*} \gamma^i_j(p)= - \varepsilon_j{}^{ik}\,p_k + t\, \chi\big(t^2\,|p|^2\big) \,\big( |p|^2\, \delta^i_j- p_j\,p^i\big) \ , \end{equation*} where \begin{equation*} \chi(u)=\tfrac1u\,\big(\sqrt{u}\cot\sqrt{u}-1\big) \qquad \mbox{with} \quad \chi(0) = -\tfrac13 \ , \end{equation*} and $|p|^2:=\delta^{ij}\,p_i\,p_j$.
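A quick numerical check (ours, not from the paper) that $\chi$ as defined extends smoothly to $u=0$ with $\chi(0)=-\tfrac13$, using the known Taylor expansion $\sqrt{u}\cot\sqrt{u}=1-\tfrac u3-\tfrac{u^2}{45}-\dots$:

```python
from math import sqrt, tan

def chi(u):
    # chi(u) = (sqrt(u)*cot(sqrt(u)) - 1) / u, continued by chi(0) = -1/3
    if u == 0.0:
        return -1.0 / 3.0
    x = sqrt(u)
    return (x / tan(x) - 1.0) / u

# smooth at u = 0 with value -1/3:
assert abs(chi(1e-8) + 1.0 / 3.0) < 1e-6
# next Taylor coefficient of chi is -1/45:
for u in (1e-3, 1e-4):
    assert abs((chi(u) + 1.0 / 3.0) / u + 1.0 / 45.0) < 1e-3
```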
According to (<ref>), the corresponding deformation of abelian gauge transformations yields the Poisson gauge transformations [43] \begin{equation*} \delta_{f}^\theta A=\dd f+t\, x\cdot (\nabla A\times\nabla f)+t\,A\times\nabla f+t^2\, \chi\big(t^2\,|A|^2\big)\,\big(|A|^2\,\dd f- (A\cdot\nabla f)\,A\big) \ , \end{equation*} where $A\times\nabla f:=\ast(A\wedge\dd f)$, with $\ast$ the Hodge duality operator on the Euclidean vector space $\real^3$, while $|A|^2:=\delta^{ij}\,A_i\,A_j$ and $A\cdot\nabla f:=\delta^{ij}\,A_i\,\partial_j f$; we also abbreviated $\{A,f\}_\theta=: x\cdot (\nabla A\times\nabla f)$ at $x\in\real^3$. This may be verified directly to close the Poisson gauge algebra (<ref>) and to have the correct deformation property (<ref>). It encodes the gauge symmetry of rotationally invariant Poisson gauge theories, which by Proposition <ref> is generated by the action of the $P_\infty$-algebra with infinitely many non-vanishing brackets in coincident gauge field entries given by \begin{align*} \ell^{\,\theta}_1(f) &= \dd f \ , \\[4pt] \ell^{\,\theta}_2(f,g) &= -x\cdot(\nabla f \times \nabla g) \ , \\[4pt] \ell^{\,\theta}_2(f,A) &= x\cdot (\nabla A\times\nabla f)+ A\times\nabla f \ , \\[4pt] \ell^{\,\theta}_{2n+1}\big(f,A^{\otimes 2n}\big) &= B_{2n}\,\big(|A|^2\big)^{n-1} \, \big(|A|^2\,\dd f - (A\cdot\nabla f)\, A\big) \ , \end{align*} for $n\geq1$. Let $\mfg$ be the $d$-dimensional $\kappa$-Minkowski algebra which yields the Kirillov-Kostant structure \begin{equation*} \theta_a^{ij}(x)=2\,\big(a^i\,x^j-a^j\,x^i\big) \ , \end{equation*} parameterized by a fixed constant vector $(a^i)\in\real^d$. For this Poisson structure one finds [50] \begin{equation*} \gamma^i_j(p)= \big(\tfrac1t\,\sqrt{1+t^2\,\langle a, p\rangle^2}-\tfrac1t+\langle a, p\rangle \big) \, \delta^i_j -a^i\,p_j \ , \end{equation*} where $\langle a, p\rangle :=a^i\,p_i$ is the usual pairing between $\real^d=\mfg^*$ and $\mfg$. 
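The square root $\sqrt{1+t^2\,\langle a,p\rangle^2}$ appearing in $\gamma^i_j(p)$ is what generates the all-orders brackets of this example, via the Catalan-number (binomial) expansion $\sqrt{1+u^2}=1-\sum_{n\geq1}\frac2n\binom{2n-2}{n-1}\big({-\tfrac{u^2}4}\big)^n$, valid for $|u|<1$. A plain-Python spot check of this identity (ours, not from the paper):

```python
from math import comb, sqrt

def sqrt_series(u, N=60):
    # 1 - sum_{n>=1} (2/n) * C(2n-2, n-1) * (-u^2/4)^n, convergent for |u| < 1
    return 1.0 - sum((2.0 / n) * comb(2 * n - 2, n - 1) * (-u * u / 4.0) ** n
                     for n in range(1, N + 1))

for u in (0.2, 0.5, 0.8):
    assert abs(sqrt_series(u) - sqrt(1.0 + u * u)) < 1e-10
```

The coefficients $\frac1n\binom{2n-2}{n-1}$ are the Catalan numbers $C_{n-1}$, which is where the factorial ratio in the higher brackets below originates.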
The corresponding deformation of abelian gauge transformations becomes the Poisson gauge transformations \begin{equation*} \delta_{f}^{\theta_a} A=\big(\sqrt{1+t^2\,(\iota_aA)^2}+t\,\iota_a A\big) \, \dd f + t\,\{A,f\}_{\theta_a} - t\,(\iota_a\dd f)\, A \ , \end{equation*} where $\iota_a$ denotes interior multiplication with the constant vector field $a^i\,\partial_i$ on $\real^d$. By expanding the square root in its Taylor series around $t=0$ using the binomial series \begin{align*} \sqrt{1+u^2} = 1 - \sum_{n=1}^\infty \, \frac2n \, \binom{2n-2}{n-1} \, \Big(-\frac{u^2}4\Big)^n \ , \end{align*} we can calculate the non-zero coincident gauge field brackets of the corresponding $P_\infty$-algebra from Proposition <ref> to get \begin{align*} \ell^{\,\theta_a}_1(f) &= \dd f \ , \\[4pt] \ell^{\,\theta_a}_2(f,g) &= -\{f,g\}_{\theta_a} \ , \\[4pt] \ell^{\,\theta_a}_2(f,A) &= \{A,f\}_{\theta_a} + \iota_a(A\wedge\dd f) \ , \\[4pt] \ell^{\,\theta_a}_{2n+1}\big(f,A^{\otimes 2n}\big) &= -\frac1{2^{2n-1}} \, \frac{(2n-2)!\,(2n)!}{(n-1)!\,n!} \ (\iota_aA)^{2n} \, \dd f \ , \end{align*} for $n\geq1$. §.§ Poisson families We will now show that a rather large class of non-linear Poisson structures on $M=\real^2$ admit symplectic embeddings that yield Poisson gauge algebras whose homotopy algebraic structures are given by differential graded Poisson algebras, though in a different form than what we have encountered thus far in this paper. The basic idea is to start from the outset with smooth families of bivectors $\theta_t\in\mfX^2(\real^2)$, parameterized smoothly by $t\in[0,1]$. Any such bivector defines a family of Poisson structures on $\real^2$ for dimensional reasons and can be written as \begin{align}\label{e24} \theta_t^{ij}(x) = \vartheta_t(x)\,\varepsilon^{ij} \ , \end{align} where $\varepsilon^{ij}$ is the Levi-Civita symbol in two dimensions.
We assume that the smooth functions $\vartheta_t(x)$ satisfy two properties: (a) $\vartheta_t(x)\neq0$ for all $x\in\real^2$ and $t\in[0,1]$; and (b) $\vartheta_0(x)=1$ for all $x\in\real^2$. From this we can construct a symplectic embedding of $(\real^2,\theta_t)$ by finding a one-form $b=b_i(x)\,\dd x^i$ on $\real^2$ whose components solve the first order differential equation \begin{equation}\label{e25} \ , \end{equation} together with the Jacobian equation \begin{equation}\label{e26} \det(\partial_{i}b_{j}) = 0 \ . \end{equation} For given $\vartheta_t(x)$ the solution of (<ref>) and (<ref>) was constructed in [28], where a first order classical mechanics on the cotangent bundle of $\real^2$ was proposed whose canonical quantization gives a quantization of Poisson algebras with non-constant bivectors. The action functional of this one-dimensional topological sigma-model is given by \begin{align*} {\mathcal S}(X,P) = \int_\real \, \langle P,\dd X\rangle + \frac t2\,(P+X^*b)\wedge\dd(P+X^*b) \ , \end{align*} where $(X,P):\real\to T^*\real^2=\real^2\times(\real^2)^*$ are (suitably supported) smooth maps, with $\langle\,\cdot\,,\,\cdot\,\rangle$ the pairing between $(\real^2)^*$ and $\real^2$. 
Hamiltonian reduction of the phase space of this sigma-model by its rank $2$ second class constraints defines Dirac brackets which, in our language, yield a symplectic embedding of the family (<ref>) with cosymplectic structure \begin{align}\label{e29} \omega_t^{-1} = \tfrac t2\,\vartheta_t(x)\, \varepsilon^{ij}\, \partial_i\wedge\partial_j + \tfrac12\,\gamma_t{}^i_j(x)\, \big(\partial_i\wedge\tilde\partial{}^j+\tilde\partial{}^j\wedge\partial_i\big) \ , \end{align} where \begin{align}\label{eq:gammaijx} \gamma_t{}^i_j(x) = \vartheta_t(x)\,\big(\delta_j^i-t\,\varepsilon^{ik}\, \partial_kb_j(x)\big) \ , \end{align} and the Jacobian condition (<ref>) ensures the Lagrangian section condition $\{p_i,p_j\}_{\omega_t^{-1}}=0$. By condition (a) above this is indeed non-degenerate for all $t\in[0,1]$, and by condition (b) it coincides with the canonical cosymplectic structure on $T^*\real^2$ at $t=0$. In particular, it defines a strict deformation of the cotangent symplectic groupoid $(T^*\real^2,\omega_0)$ for $\real^2$. The important new feature here, compared to our previous formulation in (<ref>), is that the matrix (<ref>) does not depend on the transverse coordinates $p$. Regarding it as a $(1,1)$-tensor field on $\real^2$, according to (<ref>) the deformation of abelian gauge transformations by the Poisson family (<ref>) is given by the Poisson gauge transformations \begin{align}\label{e30} \delta_f^{\theta_t}A = \gamma_t(\dd f,\,\cdot\,) + t\,\{A,f\}_{\theta_t} \ .
\end{align} The $L_\infty$-algebra formulation of these gauge symmetries has the remarkable property given by the following. The Poisson gauge transformations (<ref>) are generated by the action of the family of differential graded Poisson algebras on $C^\infty(\real^2)\oplus\Omega^1(\real^2)$ with non-vanishing brackets \begin{align*} \ell_1^{\,\theta_t}(f) = \gamma_t(\dd f,\,\cdot\,) \ , \quad \ell_2^{\,\theta_t}(f,g) = -\{f,g\}_{\theta_t} \qquad \mbox{and} \qquad \ell_2^{\,\theta_t}(f,A) = \{A,f\}_{\theta_t} \ , \end{align*} for $f,g\in C^\infty(\real^2)$ and $A\in\Omega^1(\real^2)$. This is a simple consequence of the fact that the components of the Poisson bivector (<ref>) do not depend on $p$, the Jacobi identities for the Poisson brackets, and the Lagrangian zero section condition $\{p_i,p_j\}_{\omega_t^{-1}}=0$. In coordinates, \begin{align*} \ell^{\,\theta_t}_1(f) = \{f,p_i\}_{\omega_t^{-1}}\,\dd x^i \ , \end{align*} and the differential condition $\big(\ell_1^{\,\theta_t}\big)^2=0$ follows from the Jacobi identity for the Poisson bracket $\{\,\cdot\,,\,\cdot\,\}_{\omega_t^{-1}}$ along with $\{p_i,p_j\}_{\omega_t^{-1}}=0$. Similarly, the derivation property \begin{align*} \ell_1^{\,\theta_t}\{f,g\}_{\theta_t} = \big\{\ell_1^{\,\theta_t}(f),g\big\}_{\theta_t} + \big\{f,\ell^{\,\theta_t}_1(g)\big\}_{\theta_t} \end{align*} holds as a consequence of the Jacobi identity for the bracket $\{\,\cdot\,,\,\cdot\,\}_{\omega_t^{-1}}$. The homotopy Jacobi identity $\CJ_3=0$ is simply the graded Jacobi identity for the Lie bracket $\{\,\cdot\,,\,\cdot\,\}_{\theta_t}$, and the derivation properties of the brackets are clear.
Proposition <ref> implies that there is no need to introduce higher brackets in the $L_\infty$-structure on the gauge algebra in this case, provided that one “twists” the differential, which in our previous treatments always coincided with the exterior derivative $\dd$, in such a way that the new differential is a derivation of the Poisson algebra corresponding to (<ref>). This generalizes Example <ref> for constant Poisson structures, recovered here in the case that $\vartheta_t(x)=1$ for all $x\in\real^2$ and $t\in[0,1]$, for which $b=0$. The “twisting” here is captured automatically by the symplectic embedding approach to the deformation of gauge transformations that we developed in the present paper. Consider the family of rotationally symmetric Poisson structures (<ref>) with \begin{align*} \vartheta_t(x) = \frac1{1+t\,|x|^2} \ , \end{align*} where $|\cdot|$ is the standard Euclidean norm. In this case, one finds \begin{align*} b_i(x) = -\tfrac14\,|x|^2\,\varepsilon_{ij}\,x^j \end{align*} as a solution to (<ref>) and (<ref>). The twisted differential from Proposition <ref> then has components \begin{align*} \ell_1^{\,\theta_t}(f)_i = \frac1{1+t\,|x|^2} \, \Big( \big(1+\tfrac t4\,|x|^2\big)\,\partial_if + \tfrac t2\,\varepsilon_{ik}\,x^k\,x^l\,\varepsilon_l{}^j\,\partial_jf \Big) \ , \end{align*} for $i=1,2$. §.§ Magnetic Poisson structures Let $M=T^*Q$ be the cotangent bundle of a $d$-dimensional manifold $Q$, with bundle projection $\varpi:M\to Q$. The manifold $M$ is a symplectic manifold with canonical symplectic two-form which we denote by $\sigma_0$. We write local coordinates on $M$ as $(x^i)=(q^a,q^*_a)$, with $a=1,\dots,d$ and $i=1,\dots,2d$, where $(q^a)$ are local coordinates on $Q$ and $(q^*_a)$ are canonically conjugate coordinates in the normal directions to the zero section $Q\subset M$. We denote the corresponding derivatives by $(\partial_i)=(\partial_a,\partial_*^a)$, where $\partial_a=\partial/\partial q^a$ and $\partial_*^a=\partial/\partial q^*_a$.
Let $B\in\Omega^2(Q)$ be an arbitrary two-form on the base manifold $Q$. Its pullback to $M$ deforms the symplectic structure $\sigma_0$ to an almost symplectic form \begin{align*} \sigma_B = \sigma_0-\varpi^*B \end{align*} which is closed if and only if $B$ is a closed two-form on $Q$. The inverse $\theta_B=\sigma_B^{-1}$ is an $H$-twisted Poisson structure on $M$, with twisting three-form $H\in\Omega^3(M)$ given by \begin{align*} H = \varpi^*\dd B \ . \end{align*} The bivector $\theta_B$ defines a magnetic Poisson structure on $M$; the terminology comes from monopole physics where the two-form $B$ plays the role of a magnetic field. In local coordinates, where $B=\frac12\,B_{ab}(q) \, \dd q^a\wedge\dd q^b$ and $H=\frac1{3!}\,H_{abc}(q)\,\dd q^a\wedge\dd q^b\wedge\dd q^c$, the bivector $\theta_B$ reads \begin{align*} \theta_B = \tfrac12\,(\partial_a\wedge \partial_*^a+\partial_*^a\wedge\partial_a) + \tfrac12\,B_{ab}(q)\, \partial_*^a\wedge\partial_*^b \ , \end{align*} and the Jacobiator \begin{align*} \Pim_B=[\theta_B,\theta_B]=\tfrac1{3!}\,H_{abc}(q)\,\partial_*^a\wedge\partial_*^b\wedge\partial_*^c \end{align*} has non-vanishing components only in the normal directions to the zero section $Q\subset M$. Deformation quantization of such twisted Poisson manifolds was originally considered in [55] using Kontsevich's formalism. Their higher geometric quantization was developed in [13] by regarding the three-form $H$ as the curvature of a trivial gerbe on $M$, and the extension to non-trivial gerbes is discussed in [14]; see [67] for a review of the different perspectives to quantization of magnetic Poisson structures.
In the language of Remark <ref>, the almost cosymplectic structure $\sigma_B^{-1}$ is equivalent to the canonical cosymplectic structure $\sigma_0^{-1}$ by means of a $B$-transformation, which leads to many simplifications in its symplectic embedding construction: many components of the tensors $\gamma^{(n)}$ and $\theta^{(n)}$ of the symplectic embedding vanish as various combinations of derivatives acting on the components of $\theta_B$ and $\Pim_B$, which are functions on the base manifold $Q$, are identically zero; in particular $\Pim_B^{ijk}\,\partial_k\theta_B^{lm}=0$, see e.g. [45]. Linear magnetic Poisson structures. We specialise now to $Q=\real^d$ with the linear magnetic Poisson structure defined by the two-form \begin{align*} B_{ab}(q) = \tfrac12\,H_{abc}\,q^c \ , \end{align*} where $H$ is a constant three-form on $\real^d$. This is the twisted Poisson analog of a constant Poisson structure: in this case the components of the Jacobiator $\Pim_B$ are constant and the generalized Bopp shift (<ref>) reads \begin{align*} \pi_{\theta_B}(x,p)^i = x^i-\tfrac t2\,\theta_B^{ij}(x)\,p_j \ , \end{align*} so $\gamma^{(n)}=0$ and $\theta^{(n)}=0$ for all $n\geq2$ [46]. We denote the fibre coordinates of $T^*M$ by $(p_i)=(\xi_a,\xi_*^a)$. In the notation of Sections <ref> and <ref>, the non-zero components of the symplectic embedding are then given by \begin{align*} \gamma_{ab}(\xi_*) = -\tfrac12\,H_{abc}\,\xi_*^c \ , \quad \underline{\theta}\,^a_b=-\underline{\theta}\,_{a}^b = \delta^a_b \qquad \mbox{and} \qquad \underline{\theta}_{\,ab}(q,\xi_*) = H_{abc}\,(q^c-t\,\xi_*^c) \ . \end{align*} For the corresponding deformation of abelian gauge transformations, we note that the tensors $\Upm^{(n)}$ determined from Corollary <ref> also vanish for all $n\geq2$, so that in this case $\Lambdam^{ij}(A) = -\tfrac12\,\Pim_B^{ijk}\,A_k$.
This depends only on the transverse components of the gauge fields $A\in\Omega^1(M)$ to the zero section $Q\subset M$, so we decompose gauge fields as \begin{align}\label{eq:Aqdecomp} A = A_i(x)\, \dd x^i = \alpha_a(q,q^*)\, \dd q^a + \alpha^a_*(q,q^*)\, \dd q_a^* \end{align} and obtain explicitly \begin{align*} \Lambdam_{ab}(\alpha_*) = -\tfrac12\,H_{abc}\,\alpha_*^c \end{align*} as the only non-zero components of $\Lambdam^{ij}(A)$. The Lagrangian multipliers $L_f$ are constructed from Proposition <ref> as formal power series which are now given explicitly to all orders by the constant Jacobiator $\Pim_B$ on $M$. We can write them in terms of an analytic function on the jet space $J^1T^*M$ whose Taylor series around $t=0$ coincides with the asymptotic series (<ref>). For this, we introduce the $2d{\times}2d$ matrix ${\sf M}$ with elements \begin{align*} {\sf M}^{a}_b(\alpha_*,\partial_* \alpha_*)=-H_{bce}\,\alpha_*^c\,\partial_*^a\alpha_*^e \end{align*} and observe that the non-vanishing components in (<ref>) can be rewritten as \begin{align}\label{eq:Labsum} L_{ab}(\alpha_*,\partial_* \alpha_*) = -\frac12\,H_{ace}\,\alpha_*^e \ \sum_{n=0}^\infty \, \Big(\frac{t^2}2\Big)^n \, \big({\sf M}^n\big)^{c}_{b} = -\frac12\,H_{ace}\,\alpha_*^e\,\big[\big(\one - \tfrac{t^2}2 \,{\sf M}\big)^{-1}\big]_{b}^{c} \ . \end{align} The Lagrangian multipliers \begin{align*} L_f={L_f}_a\,\partial_*^a \qquad \mbox{with} \quad {L_f}_a = t^2\, L_{ab}(\alpha_*,\partial_*\alpha_*) \, \partial_*^bf \end{align*} are then transverse to $Q\subset M$, depend only on the transverse components of the gauge fields and their normal derivatives, and are determined entirely by the normal derivatives of gauge parameters $f\in C^\infty(M)$. 
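The resummation in (<ref>) is just the matrix geometric series $\sum_{n\geq0}\big(\tfrac{t^2}2\big)^n\,{\sf M}^n=\big(\one-\tfrac{t^2}2\,{\sf M}\big)^{-1}$, valid whenever the spectral radius of $\tfrac{t^2}2\,{\sf M}$ is less than one. A small plain-Python check of this resummation (ours; the numerical $2{\times}2$ matrix below is a generic stand-in, not the $H$-dependent matrix of the text):

```python
def matmul(A, B):
    # naive matrix product of nested-list matrices
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

M = [[0.2, -0.3], [0.1, 0.4]]   # generic stand-in with small spectral radius
t = 1.0
c = t * t / 2.0

# partial sums of sum_{n>=0} c^n M^n
S = [[1.0, 0.0], [0.0, 1.0]]
T = [[1.0, 0.0], [0.0, 1.0]]
for _ in range(200):
    T = [[c * x for x in row] for row in matmul(T, M)]
    S = [[S[i][j] + T[i][j] for j in range(2)] for i in range(2)]

# direct inverse of (1 - c*M) in the 2x2 case
A = [[1.0 - c * M[0][0], -c * M[0][1]], [-c * M[1][0], 1.0 - c * M[1][1]]]
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
inv = [[A[1][1] / det, -A[0][1] / det], [-A[1][0] / det, A[0][0] / det]]

assert all(abs(S[i][j] - inv[i][j]) < 1e-12 for i in range(2) for j in range(2))
```

This is why the Lagrangian multipliers, defined order by order as formal power series, assemble here into a single closed-form rational expression in the gauge field data.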
From Remark <ref>, the corresponding deformation of abelian gauge transformations is given by the almost Poisson gauge transformations \begin{align*} \delta_f^{\theta_B} A = \delta_f^{\theta_B}\alpha_a \, \dd q^a + \delta_f^{\theta_B}\alpha^a_* \, \dd q^*_a \ , \end{align*} where \begin{align} \delta_f^{\theta_B}\alpha_a &= \partial_af+t\,\{\alpha_a,f\}_{\theta_B} + \tfrac t2\, H_{abc}\,\alpha_*^c\,\partial_*^bf + \nonumber \\ & \quad \, \hspace{1cm} +\tfrac{t^2}2\, H_{bln}\,\alpha_*^n\,\partial_*^kf\, \big[\big(\one - \tfrac{t^2}2 \,{\sf M}\big)^{-1}\big]_{k}^{l} \, \big(\partial_a\alpha_*^b-\partial_*^b\alpha_a + \tfrac t2\, +t\,\{\alpha_a,\alpha_*^b\}_{\theta_B} \nonumber \\ & \quad \, \hspace{7cm} + t^2\, \big) \ , \nonumber \\[4pt] \delta_f^{\theta_B}\alpha_*^a &= \partial_*^af + t\,\{\alpha_*^a,f\}_{\theta_B} + \nonumber \\ & \quad \, \hspace{1cm}+\tfrac{t^2}2\, H_{bln}\,\alpha_*^n\,\partial_*^kf\, \big[\big(\one - \tfrac{t^2}2 \,{\sf M}\big)^{-1}\big]_{k}^{l} \, \big(\partial_*^a\alpha_*^b-\partial_*^b\alpha_*^a +t\,\{\alpha_*^a,\alpha_*^b\}_{\theta_B} \nonumber\\ & \quad \, \hspace{7cm} + \big) \ . \label{eq:deltaalpha} \end{align} These satisfy the almost Poisson gauge algebra (<ref>) with the field dependent gauge parameter \begin{align} [\![f,g]\!]_{\theta_B}(\alpha_*) &= \{f,g\}_{\theta_B} - \nonumber \\ & \quad \, -\tfrac{t^3}4\,H_{bsn}\,H_{crm}\,\alpha_*^n\,\alpha_*^m\, \partial_*^kf\,\partial_*^pg\, \big[\big(\one - \tfrac{t^2}2 \,{\sf M}\big)^{-1}\big]_{k}^{s} \, \big[\big(\one - \tfrac{t^2}2 \,{\sf M}\big)^{-1}\big]_{p}^{r} \label{eq:fgthetaBA} \\ & \quad \, \hspace{4cm} \times \big(\partial_*^b\alpha_*^c-\partial_*^c\alpha_*^b + t\, \{\alpha_*^b,\alpha_*^c\}_{\theta_B} \big) \ .
\nonumber \end{align} Notice that, apart from the terms involving the almost Poisson brackets $\{\,\cdot\,,\,\cdot\,\}_{\theta_B}$, the gauge variations of the transverse components $\alpha_*^a$ in (<ref>) as well as the brackets (<ref>) are determined completely by normal components to the embedding $Q\subset M$. The $L_\infty$-algebra of these almost Poisson gauge symmetries is given by Proposition <ref>, with infinitely many non-vanishing brackets which in this example greatly simplify due to the vanishing of most tensors in the symplectic embedding and in the Lagrangian multipliers. They can be read off directly from (<ref>) and (<ref>) by reinstating the formal power series expansion (<ref>). For the coincident gauge field brackets involving a single gauge parameter, we use the decomposition (<ref>) to similarly write $\ell_{n+1}^{\,\theta_B}\big(f,A^{\otimes n}\big)\in\Omega^1(M)$ in component form \begin{align*} \ell_{n+1}^{\,\theta_B}\big(f,A^{\otimes n}\big) = \lambda^{\theta_B}_{n+1}\big(f,A^{\otimes n}\big)_a \, \dd q^a + \lambda^{\theta_B}_{n+1}\big(f,\alpha_*^{\otimes n}\big)^a_* \, \dd q^*_a \ , \end{align*} with \begin{align*} \lambda_1^{\theta_B}(f)_a &= \partial_af \ , \\[4pt] \lambda_2^{\theta_B}(f,A)_a &= \{\alpha_a,f\}_{\theta_B} + \tfrac12\,H_{abc}\,\alpha_*^c\,\partial_*^bf \ , \\[4pt] \lambda_3^{\theta_B}\big(f,A^{\otimes2}\big)_a &= \, \big(\partial_a\alpha_*^b-3\,\partial_*^b\alpha_a\big) \ , \\[4pt] \lambda_4^{\theta_B}\big(f,A^{\otimes3}\big)_a &= 3\, \, \big(\{\alpha_a,\alpha_*^b\}_{\theta_B} + \tfrac12\,H_{ace}\,\alpha_*^e\,\partial_*^c\alpha_*^b\big) \ , \\[4pt] \lambda_{2n-1}^{\theta_B}\big(f,A^{\otimes2n-2}\big)_a &= \, H_{blm}\, H_{kb_1c_1}\,H_{a_1b_2c_2}\cdots H_{a_{n-3}b_{n-2}c_{n-2}}\,\alpha_*^m\, \alpha_*^{b_1}\cdots\alpha_*^{b_{n-2}}\,\partial_*^kf \\ & \quad \, \hspace{1cm} \times \, \partial_*^{a_1}\alpha_*^{c_1}\cdots \partial_*^{a_{n-3}}\alpha_*^{c_{n-3}} \,
\partial_*^l\alpha_*^{c_{n-2}}\,\big(\partial_a\alpha_*^b-3\,\partial_*^b\alpha_a\big) \ , \\[4pt] \lambda_{2n}^{\theta_B}\big(f,A^{\otimes2n-1}\big)_a &= \, H_{blm}\, H_{kb_1c_1}\,H_{a_1b_2c_2}\cdots H_{a_{n-3}b_{n-2}c_{n-2}}\,\alpha_*^m\, \alpha_*^{b_1}\cdots\alpha_*^{b_{n-2}}\,\partial_*^kf\, \\ & \quad \, \hspace{1cm} \times \, \partial_*^{a_1}\alpha_*^{c_1}\cdots\partial_*^{a_{n-3}}\alpha_*^{c_{n-3}}\,\partial_*^l\alpha_*^{c_{n-2}} \, \big(\{\alpha_a,\alpha_*^b\}_{\theta_B} + \tfrac12\,H_{ace}\,\alpha_*^e\,\partial_*^c\alpha_*^b \big) \ , \end{align*} \begin{align} \lambda_1^{\theta_B}(f)_*^a &= \partial_*^af \ , \nonumber \\[4pt] \lambda_2^{\theta_B}(f,\alpha_*)_*^a &= \{\alpha_*^a,f\}_{\theta_B} \ , \nonumber \\[4pt] \lambda_3^{\theta_B}\big(f,\alpha_*^{\otimes2}\big)_*^a &= \, \big(\partial_*^a\alpha_*^b-3\,\partial_*^b\alpha_*^a\big) \ , \nonumber \\[4pt] \lambda_4^{\theta_B}\big(f,\alpha_*^{\otimes3}\big)_*^a &= \, \{\alpha_*^a,\alpha_*^b\}_{\theta_B} \ , \label{eq:lambdamagnetic}\\[4pt] \lambda_{2n-1}^{\theta_B}\big(f,\alpha_*^{\otimes2n-2}\big)_*^a &= \, H_{blm}\, H_{kb_1c_1}\,H_{a_1b_2c_2}\cdots H_{a_{n-3}b_{n-2}c_{n-2}}\,\alpha_*^m\, \alpha_*^{b_1}\cdots\alpha_*^{b_{n-2}}\,\partial_*^kf \nonumber \\ & \quad \, \hspace{1cm} \times \, \partial_*^{a_1}\alpha_*^{c_1}\cdots \partial_*^{a_{n-3}}\alpha_*^{c_{n-3}} \, \partial_*^l\alpha_*^{c_{n-2}}\,\big(\partial_*^a\alpha_*^b-3\,\partial_*^b\alpha_*^a\big) \ , \nonumber \\[4pt] \lambda_{2n}^{\theta_B}\big(f,\alpha_*^{\otimes2n-1}\big)_*^a &= \, H_{blm}\, H_{kb_1c_1}\,H_{a_1b_2c_2}\cdots H_{a_{n-3}b_{n-2}c_{n-2}}\,\alpha_*^m\, \alpha_*^{b_1}\cdots\alpha_*^{b_{n-2}}\,\partial_*^kf\, \nonumber \\ & \quad \, \hspace{1cm} \times \, \partial_*^{a_1}\alpha_*^{c_1}\cdots\partial_*^{a_{n-3}}\alpha_*^{c_{n-3}}\,\partial_*^l\alpha_*^{c_{n-2}} \, \{\alpha_*^a,\alpha_*^b\}_{\theta_B} \ , \nonumber \end{align} for $n\geq3$. 
For the coincident gauge field brackets involving two gauge parameters, after some further tedious calculation and simplification we obtain for the first few brackets \begin{align} \ell_2^{\,\theta_B}(f,g) &= -\{f,g\}_{\theta_B} \ , \nonumber \\[4pt] \ell_3^{\,\theta_B}(f,g,\alpha_*) &= \ , \nonumber \\[4pt] \ell_4^{\,\theta_B}\big(f,g,\alpha_*^{\otimes2}\big) &= 0 \ , \nonumber \\[4pt] \ell_5^{\,\theta_B}\big(f,g,\alpha_*^{\otimes 3}\big) &= -\tfrac32\, H_{bke}\,H_{cpm}\,\alpha_*^e\,\alpha_*^m\, \partial_*^kf\,\partial_*^pg\, \big(\partial_*^b\alpha_*^c-\partial_*^c\alpha_*^b\big) \ , \nonumber \\[4pt] \ell_6^{\,\theta_B}\big(f,g,\alpha_*^{\otimes 4}\big) &= -6\,H_{bke}\,H_{cpm}\,\alpha_*^e\,\alpha_*^m\, \partial_*^kf\,\partial_*^pg\, \{\alpha_*^b,\alpha_*^c\}_{\theta_B} \ , \nonumber \\[4pt] \ell_7^{\,\theta_B}\big(f,g,\alpha_*^{\otimes5}\big) &= \nonumber \\ & \quad \, \qquad \times \, \big(\partial_*^a\alpha_*^b\,\partial_*^r\alpha_*^c + \tfrac12\, \partial_*^b\alpha_*^a\,\partial_*^r\alpha_*^c + \tfrac12\, \partial_*^a\alpha_*^b\,\partial_*^c\alpha_*^r\big) \ , \label{eq:ellfgmagnetic1} \end{align} together with the higher order brackets \begin{align} \ell_{2n}^{\,\theta_B}\big(f,g,\alpha_*^{\otimes2n-2}\big) &= \frac{(2n-2)!}{2^{n-1}} \, \cdots \alpha_*^{b_{n-3}}\,\partial_*^kf\,\partial_*^pg\,\{\alpha_*^b,\alpha_*^c\}_{\theta_B} \nonumber \\ & \quad \, \times \,\Big(H_{a_1b_2c_2}\cdots \nonumber \\ & \quad \, \hspace{2cm} \times \, \big(\delta_k^s \, H_{pb_1c_1}\,\partial_*^r\alpha_*^{c_{n-3}} + \delta_p^r\, H_{kb_1c_1}\,\partial_*^s\alpha_*^{c_{n-3}}\big) \nonumber \\ & \quad \, \qquad + \sum_{l=1}^{n-4} \, H_{a_{l-1}b_lc_l} \nonumber \\ & \quad \, \qquad \hspace{1.5cm} \times \, H_{pb_{l+1}c_{l+1}}\,H_{a_{l+1}b_{l+2}c_{l+2}}\cdots H_{a_{n-4}b_{n-3}c_{n-3}} \nonumber \\ & \quad \, \qquad \hspace{2cm} \times \, \partial_*^{a_1}\alpha_*^{c_1}\cdots\partial_*^{a_{l-1}}\alpha_*^{c_{l-1}}\,\partial_*^s\alpha_{*}^{c_l} \nonumber \\ & \quad \, \qquad 
\hspace{2.5cm} \times \, \partial_*^{a_{l+1}}\alpha_*^{c_{l+1}}\cdots\partial_*^{a_{n-4}}\alpha_*^{c_{n-4}}\,\partial_*^r\alpha_*^{c_{n-3}} \Big) \ , \label{eq:ellfgmagnetic2} \end{align} \begin{align} \ell_{2n+1}^{\,\theta_B}\big(f,g,\alpha_*^{\otimes2n-1}\big) &= \, \nonumber \\ & \quad \, \times \, \bigg(H_{a_1b_2c_2}\cdots \nonumber \\ & \quad \, \qquad \times \, \Big(\frac12\, - \partial_*^c\alpha_*^b\big) \nonumber \\ & \quad \qquad \hspace{2cm} \times \, \big(\delta_k^s\, H_{pb_1c_1}\,\partial_*^r\alpha_*^{c_{n-2}} + \delta_p^r\, H_{kb_1c_1}\,\partial_*^s\alpha_*^{c_{n-2}}\big) \nonumber \\ & \quad \, \qquad \qquad + H_{pb_1c_1}\,\partial_*^r\alpha_*^{c_{n-3}} + \delta_p^r\, H_{kb_1c_1}\,\partial_*^s\alpha_*^{c_{n-3}}\big) \Big) \nonumber \\ & \quad \, \qquad + \Big(\frac12\,H_{a_{n-3}b_{n-2}c_{n-2}}\,\partial_*^{a_{n-3}}\alpha_*^{c_{n-3}}\,\partial_*^r\alpha_*^{c_{n-2}}\,\big(\partial_*^b\alpha_*^c - \partial_*^c\alpha_*^b\big) \nonumber \\ & \quad \, \qquad \hspace{2cm} + \nonumber \\ & \quad \, \qquad \qquad \times \, \sum_{l=1}^{n-4} \, H_{a_{l-1}b_lc_l} \nonumber \\ & \quad \qquad \qquad \hspace{1.5cm} \times \, H_{pb_{l+1}c_{l+1}}\,H_{a_{l+1}b_{l+2}c_{l+2}}\cdots H_{a_{n-4}b_{n-3}c_{n-3}} \nonumber \\ & \quad \qquad \qquad \hspace{1.5cm} \times \partial_*^{a_1}\alpha_*^{c_1}\cdots\partial_*^{a_{l-1}}\alpha_*^{c_{l-1}}\,\partial_*^s\alpha_*^{c_l}\,\partial_*^{a_{l+1}}\alpha_*^{c_{l+1}}\cdots\partial_*^{a_{n-4}}\alpha_*^{c_{n-4}} \nonumber \\ & \quad \, \qquad + \frac12\,H_{kb_1c_1}\,H_{a_1b_2c_2}\cdots H_{a_{n-4}b_{n-3}c_{n-3}}\,H_{pb_{n-2}c_{n-2}} \nonumber \\ & \quad \, \qquad \hspace{2cm} \times \, \partial_*^{a_1}\alpha_*^{c_1}\cdots\partial_*^{a_{n-4}}\alpha_*^{a_{n-4}c_{n-4}}\,\partial_*^s\alpha_*^{c_{n-3}}\,\partial_*^r\alpha_*^{c_{n-2}}\bigg) \ , \label{eq:ellfgmagnetic3} \end{align} for $n\geq4$. 
The forms of the transverse gauge transformations in (<ref>) and the brackets (<ref>), as well as their corresponding $L_\infty$-structures (<ref>) and (<ref>)–(<ref>), suggest a natural truncation of almost Poisson gauge transformations in this case to gauge parameters and gauge fields along the normal directions to the zero section $Q\subset M$. Let $Q^*$ be the submanifold defined by the equations $q_a=0$ in $M$. Given $f,g\in C^\infty(Q^*)$ and $\alpha_*\in\Omega^1(Q^*)$, the pullbacks $\delta_f^{\theta_B}\alpha_*\big|_{Q^*}$ and $[\![f,g]\!]_{\theta_B}(\alpha_*)\big|_{Q^*}$ eliminate only the terms involving almost Poisson brackets $\{\,\cdot\,,\,\cdot\,\}_{\theta_B}$, leaving a non-trivial dependence on the three-form $H$ which deforms the standard abelian gauge transformations on the manifold $Q^*$. In [3] it was shown that this pullback operation turns the nonassociative star-product on $C^\infty(M)[[\hbar]]$, which quantizes the twisted Poisson structure $\theta_B$, into a sequence of `triproducts' on $C^\infty(Q^*)[[\hbar]]$, which quantizes the trivector $\Pi_B\in\mathfrak{X}^3(Q^*)$ that for $d=3$ defines a Nambu-Poisson structure (of degree $3$) on $Q^*=\mathbb{R}^3$. This pullback operation can thus be thought of as defining a `Nambu-Poisson gauge symmetry' which ought to be related to a 3-Lie algebra (or equivalently a related Lie 2-algebra) action on $\Omega^1(Q^*)$. Indeed, the pullback of the $L_\infty$-structure to $Q^*$ does not involve any $2$-brackets, as $\ell_2^{\,\theta_B}(f,\alpha_*)\big|_{Q^*}=0$ and $\ell_2^{\,\theta_B}(f,g)\big|_{Q^*}=0$, and should be regarded as the homotopy algebra action underlying this higher Lie algebra gauge symmetry. It would be interesting to work out the details and understand the underlying structures better.
Cosymplectic brackets on the constraint locus Let $(T^*M,\omega)$ be a local symplectic embedding of an almost Poisson manifold $(M,\theta)$, let $U\subseteq M$ be an open subset, and let $\lambda_0=p_i\,\dd x^i$ be the Liouville one-form on $T^*U$. Let $f,g,h\in C^\infty(U)$ and $A\in\Omega^1(U)$. In this appendix we derive some useful identities for the cosymplectic brackets on the constraint locus ${\sf im}(s_A)\subset T^*U$, where $\Phi_A=(p_i-A_i(x))\,\dd x^i=0$, which we use in the main text. We will need the relation \begin{align} & s_A^*\{\pi^*f,\{\pi^*g,\lambda_0\}_{\omega^{-1}}\}_{\omega^{-1}} - \nonumber \\[4pt] & \hspace{5cm} = t\,s_A^*\{\pi^*f,\gamma(\pi^*\dd g,\,\cdot\,)\}_{\omega^{-1}} - t\, g,\,\cdot\,)\}_{\omega^{-1}} \nonumber \\[4pt] & \hspace{5cm} = \nonumber \\[4pt] & \hspace{5cm} = \, s_A^*\{\pi^*f,(\Phi_A)_i\}_{\omega^{-1}} \ . \label{r1}\end{align} We also need \begin{align} & s_A^*\{\pi^*f,\{\pi^*g,\pi^*h\}_{\omega^{-1}}\}_{\omega^{-1}} - \notag \\[4pt] & \hspace{5cm} = - t\, \notag \\[4pt] & \hspace{5cm} = \, \big(s_A^*\{\pi^*f,p_i\}_{\omega^{-1}} \nonumber \\[4pt] & \hspace{5cm} = \, s_A^*\{\pi^*f,(\Phi_A)_i\}_{\omega^{-1}} \ . \label{r2}\end{align} Now the combination of (<ref>) and (<ref>) implies \begin{align} & s_A^*\{\pi^*f,\{\pi^*g,\Phi_A\}_{\omega^{-1}}\}_{\omega^{-1}} - \notag \\[4pt] & \hspace{5cm} = s_A^*\big(\tilde\partial^i\{\pi^*g,\Phi_A\}_{\omega^{-1}}\big) \, s_A^*\{\pi^*f,(\Phi_A)_i\}_{\omega^{-1}} \ . \label{r3}\end{align} Next we calculate \begin{align} & s_A^*\{\{\pi^*g,\pi^*f\}_{\omega^{-1}},\Phi_A\}_{\omega^{-1}} - \notag \\[4pt] & \hspace{1.5cm} = \, \big(-s_A^*\{\pi^*A_i,\lambda_0\}_{\omega^{-1}} - s_A^*\{p_i,\pi^*A\}_{\omega^{-1}} + \notag \\[4pt] & \hspace{1.5cm} = \, \ . 
\label{r4}\end{align} One may also check \begin{align} & s_A^*\{\{\pi^*f,(\Phi_A)_j\}_{\omega^{-1}},\Phi_A\}_{\omega^{-1}} - \notag \\[4pt] & \hspace{5cm} = \, s_A^*\{(\Phi_A)_i,\Phi_A\}_{\omega^{-1}} \ , \label{r5}\end{align} \begin{align} \notag \\[4pt] & \hspace{5cm} = \, \ . \label{r6}\end{align} Closure of almost Poisson gauge transformations In this appendix we prove that the gauge transformations defined in Proposition <ref> close the almost Poisson gauge algebra (<ref>). For this, we use (<ref>) to calculate the left-hand side of (<ref>), and after rearranging the terms we find \begin{align} \delta^\theta_f\big(s_A^*\{\pi^*g,\Phi_A\}_{\omega^{-1}}+L_g^j\,s_A^*\{(\Phi_A)_j,\Phi_A\}_{\omega^{-1}}\big)-\delta^\theta_g\big(s_A^*\{\pi^*f,\Phi_A\}_{\omega^{-1}}+L_f^i\,s_A^*\{(\Phi_A)_i,\Phi_A\}_{\omega^{-1}}\big) \notag \\[4pt] & = \\ & \hspace{0.5cm} \notag \\ & \hspace{1cm} \notag \\ & \hspace{1.5cm} & \hspace{2cm} \notag \\ & \hspace{2.5cm} +L_g^j\,s_A^*\big(\tilde\partial{}^k\{(\Phi_A)_j,\Phi_A\}_{\omega^{-1}}\big)\,\big(s_A^*\{\pi^*f,(\Phi_A)_k\}_{\omega^{-1}}+L_f^i\,s_A^*\{(\Phi_A)_i,(\Phi_A)_k\}_{\omega^{-1}}\big) \notag\\ & \hspace{3cm} -L_g^j\,s_A^*\{\pi^*s_A^*\{\pi^*f,(\Phi_A)_j\}_{\omega^{-1}} + \notag \\ & \hspace{3.5cm} +L_f^i\,s_A^*\{\pi^*s_A^*\{\pi^*g,(\Phi_A)_i\}_{\omega^{-1}} + \notag \\ & \hspace{4cm} +\big(\delta_f^\theta L_g^i - \delta_g^\theta L_f^i\big)\,s_A^*\{(\Phi_A)_i,\Phi_A\}_{\omega^{-1}} \ . 
\label{c6}\end{align} By explicit calculation and using the formulas (<ref>), (<ref>) and (<ref>) from Appendix <ref> one may check \begin{align*} & s_A^*\{\pi^*f + L_f^i\,(\Phi_A)_i,\{\pi^*g + L_g^j\,(\Phi_A)_j,\Phi_A\}_{\omega^{-1}}\}_{\omega^{-1}} \\ & \hspace{6cm} - s_A^*\{\pi^*f + L_f^i\,(\Phi_A)_i,\pi^*s_A^*\{\pi^*g + L_g^j\,(\Phi_A)_j,\Phi_A\}_{\omega^{-1}}\}_{\omega^{-1}} \\[4pt] & \hspace{1cm} = \big(s_A^*(\tilde\partial{}^k\{\pi^*g,\Phi_A\}_{\omega^{-1}}) \big) \\ & \hspace{5cm} \times \ \big(s_A^*\{\pi^*f,(\Phi_A)_k\}_{\omega^{-1}}+L_f^i\,s_A^*\{(\Phi_A)_i,(\Phi_A)_k\}_{\omega^{-1}}\big) \ . \end{align*} Using this expression we rewrite (<ref>) as \begin{align*} \\ & \hspace{6cm} \\ & \hspace{1cm} + s_A^*\{L_f^i,\Phi_A\}_{\omega^{-1}} \, s_A^*\{\pi^*g,(\Phi_A)_i\}_{\omega^{-1}} - s_A^*\{L_g^i,\Phi_A\}_{\omega^{-1}} \, s_A^*\{\pi^*f,(\Phi_A)_i\}_{\omega^{-1}} \\ & \hspace{1cm} + \\ & \hspace{1cm} + 2\, \big( L_f^i\,s_A^*\{L_g^j,\Phi_A\}_{\omega^{-1}} - L_g^i\,s_A^*\{L_f^j,\Phi_A\}_{\omega^{-1}} \big) \, s_A^*\{(\Phi_A)_j,(\Phi_A)_i\}_{\omega^{-1}} \\ & \hspace{1cm} + 2\, +\big(\delta_f^\theta L_g^i - \delta_g^\theta \ . \end{align*} Using the Jacobi identity in the first two lines and simplifying the remaining lines, we rewrite this expression as \begin{align*} & s_A^*\{\{\pi^*f+L_f^i\,(\Phi_A)_i,\pi^*g+L_g^j\,(\Phi_A)_j\}_{\omega^{-1}},\Phi_A\}_{\omega^{-1}} +\big(\delta_f^\theta L_g^i - \delta_g^\theta \\ & \hspace{2cm} - s_A^*\{L_g^j\,\pi^*s_A^*\{\pi^*f,(\Phi_A)_j\}_{\omega^{-1}} + L_f^i\,\pi^*s_A^*\{(\Phi_A)_i,\pi^*g\}_{\omega^{-1}},\Phi_A\}_{\omega^{-1}} \\ & \hspace{4cm} + \ . 
\end{align*} Using $\Phi_A=0$ on the constraint locus ${\sf im}(s_A)$, we rewrite the first line so that this expression becomes \begin{align*} & s_A^*\{\{\pi^*f,\pi^*g\}_{\omega^{-1}} {+} L_g^j\,\{\pi^*f,(\Phi_A)_j\}_{\omega^{-1}} {+} L_f^i\,\{(\Phi_A)_i,\pi^*g\}_{\omega^{-1}} {+} \\ & \quad +\big(s_A^*\{L_f^k,\pi^*g\}_{\omega^{-1}} + s_A^*\{\pi^*f,L_g^k\}_{\omega^{-1}} + L_f^i\,s_A^*\{(\Phi_A)_i,L_g^k\}_{\omega^{-1}} \\ & \hspace{2cm} + L_g^j\,s_A^*\{L_f^k,(\Phi_A)_j\}_{\omega^{-1}}\big) \, s_A^*\{(\Phi_A)_k,\Phi_A\}_{\omega^{-1}} +\big(\delta_f^\theta L_g^i - \delta_g^\theta \\ & \hspace{4cm} - s_A^*\{L_g^j\,\pi^*s_A^*\{\pi^*f,(\Phi_A)_j\}_{\omega^{-1}} + L_f^i\,\pi^*s_A^*\{(\Phi_A)_i,\pi^*g\}_{\omega^{-1}},\Phi_A\}_{\omega^{-1}} \\ & \hspace{6cm} + \ . \end{align*} Applying again the formulas (<ref>)–(<ref>) from Appendix <ref> in the first line of this expression, after simplification we end up with \begin{align*} & s_A^*\{\pi^*s_A^*\{\pi^*f,\pi^*g\}_{\omega^{-1}} - \\ & \quad + \Big( \delta_f^\theta L_g^k - \delta_g^\theta L_f^k + s_A^*\big(\tilde\partial^k\{\pi^*f,\pi^*g\}_{\omega^{-1}}\big) + \\ & \hspace{1cm} + + s_A^*\{L_f^k,\pi^*g\}_{\omega^{-1}} \\ & \hspace{1.5cm} + s_A^*\{\pi^*f,L_g^k\}_{\omega^{-1}} + L_f^i\,s_A^*\{(\Phi_A)_i,L_g^k\}_{\omega^{-1}} + \Big) \, \ . \end{align*} By (<ref>) and (<ref>) this is equal to $s_A^*\{t\,[\![f,g]\!]_\theta(A) + L^k_{t\, which proves the closure formula (<ref>). [1] A. Yu. Alekseev and A. Z. Malkin, “Symplectic structures associated to Lie-Poisson groups,” Commun. Math. Phys. 162 (1994) 147–174 [2] M. Ammar, V. Chloup and S. Gutt, “Universal star products,” Lett. Math. Phys. 84 (2008) 199–215 [arXiv:0804.1300 [math.SG]]. [3] P. Aschieri and R. J. Szabo, “Triproducts, nonassociative star products and geometry of $R$-flux string compactifications,” J. Phys. Conf. Ser. 634 (2015) 012004 [arXiv:1504.03915 [hep-th]]. [4] P. Aschieri, I. Baković, B. Jurčo and P. 
Schupp, “Noncommutative gerbes and deformation quantization,” J. Geom. Phys. 60 (2010) 1754–1761 [5] E. J. Beggs and S. Majid, “Semi-classical differential structures,” Pacific J. Math. 224 (2006) 1–44 [6] W. Behr and A. Sykora, “Construction of gauge theories on curved noncommutative space-time,” Nucl. Phys. B 698 (2004) 473–502 [7] F. A. Berends, G. J. H. Burgers and H. van Dam, “On the theoretical problems in constructing interactions involving higher spin massless particles,” Nucl. Phys. B 260 (1985) 295–322. [8] E. Bergshoeff, D. S. Berman, J. P. van der Schaar and P. Sundell, “A noncommutative M-theory five-brane,” Nucl. Phys. B 590 (2000) 173–197 [9] P. Bieliavsky, M. Cahen, S. Gutt, J. Rawnsley and L. Schwachhöfer, “Symplectic connections,” Int. J. Geom. Meth. Mod. Phys. 03 (2006) 375–420 [10] R. Blumenhagen, M. Brinkmann, V. G. Kupriyanov and M. Traube, “On the uniqueness of $L_\infty$-bootstrap: Quasi-isomorphisms are Seiberg-Witten maps,” J. Math. Phys. 59 (2018) 123505 [arXiv:1806.10314 [hep-th]]. [11] R. Blumenhagen, I. Brunner, V. G. Kupriyanov and D. Lüst, “Bootstrapping noncommutative gauge theories from $L_\infty$-algebras,” JHEP 05 (2018) 097 [arXiv:1803.00732 [hep-th]]. [12] D. Broka and P. Xu, “Symplectic realizations of holomorphic Poisson manifolds,” arXiv:1512.08847 [math.DG]. [13] S. Bunk, L. Müller and R. J. Szabo, “Geometry and 2-Hilbert space for nonassociative magnetic translations,” Lett. Math. Phys. 109 (2019) 1827–1866 [arXiv:1804.08953 [hep-th]]. [14] S. Bunk, L. Müller and R. J. Szabo, “Smooth 2-group extensions and symmetries of bundle gerbes,” Commun. Math. Phys. 384 (2021) 1829–1911 [arXiv:2004.13395 [math.DG]]. [15] H. Bursztyn, I. Ortiz and S. Waldmann, “Morita equivalence of formal Poisson structures,” Int. Math. Res. Not. rnab096 (2021) [arXiv:2006.10240 [math.SG]]. [16] A. S. Cattaneo and G. Felder, “Poisson sigma-models and symplectic groupoids,” Prog. Math. 198 (2001) 61–93 [17] A. S. Cattaneo and G. 
Felder, “Relative formality theorem and quantization of coisotropic submanifolds,” Adv. Math. 108 (2007) 521–548 [18] A. S. Cattaneo and P. Xu, “Integration of twisted Poisson structures,” J. Geom. Phys. 49 (2004) 187–196 [19] A. S. Cattaneo, B. Dherin and G. Felder, “Formal symplectic groupoid,” Commun. Math. Phys. 253 (2005) 645–674 [20] C. S. Chu and P. M. Ho, “Poisson algebra of differential forms,” Int. J. Mod. Phys. A 12 (1997) 5573–5587 [21] C. S. Chu, P. M. Ho and Y. C. Kao, “Worldvolume uncertainty relations for D-branes,” Phys. Rev. D 60 (1999) 126003 [22] L. Cornalba and R. Schiappa, “Nonassociative star product deformations for D-brane worldvolumes in curved backgrounds,” Commun. Math. Phys. 225 (2002) 33–66 [23] M. Crainic and I. Mărcuţ, “On the existence of symplectic realizations,” J. Sympl. Geom. 9 (2011) 435–444 [arXiv:1009.2058 [math.DG]]. [24] M. Dimitrijević Ćirić, G. Giotopoulos, V. Radovanović and R. J. Szabo, “Homotopy Lie algebras of gravity and their braided deformations,” Proc. Sci. 376 (2020) 198 [arXiv:2005.00454 [hep-th]]. [25] V. A. Dolgushev, S. L. Lyakhovich and A. A. Sharapov, “Wick type deformation quantization of Fedosov manifolds,” Nucl. Phys. B 606 (2001) 647–672 [arXiv:hep-th/0101032]. [26] N. Durov, S. Meljanac, A. Samsarov and Z. Skoda, “A universal formula for representing Lie algebra generators as formal power series with coefficients in the Weyl algebra,” J. Algebra 309 (2007) 318–359 [27] R. Fulp, T. Lada and J. Stasheff, “sh-Lie algebras induced by gauge transformations,” Commun. Math. Phys. 231 (2002) 25–43. [28] M. Gomes and V. G. Kupriyanov, “Position-dependent noncommutativity in quantum mechanics,” Phys. Rev. D 79 (2009) 125011 [arXiv:0902.3252 [math-ph]]. [29] S. Gutt, “An explicit star-product on the cotangent bundle of a Lie group,” Lett. Math. Phys. 7 (1983) 249–258. [30] E. Hawkins, “Noncommutative rigidity,” Commun. Math. Phys. 246 (2004) 211–235 [31] P. M. Ho, “Making nonassociative algebra associative,” JHEP 11 (2001) 026 [32] P.
M. Ho and S. P. Miao, “Noncommutative differential calculus for D-brane in non-constant $B$-field background,” Phys. Rev. D 64 (2001) 126002 [33] P. M. Ho and Y. T. Yeh, “Noncommutative D-brane in non-constant NS–NS $B$-field background,” Phys. Rev. Lett. 85 (2000) 5523–5526 [34] O. Hohm and B. Zwiebach, “$L_{\infty}$-algebras and field theory,” Fortsch. Phys. 65 (2017) 1700014 [arXiv:1701.08824 [hep-th]]. [35] B. Jurčo, L. Raspollini, C. Sämann and M. Wolf, “$L_\infty$-algebras of classical field theories and the Batalin-Vilkovisky formalism,” Fortsch. Phys. 67 (2019) 1900025 [arXiv:1809.09899 [hep-th]]. [36] M. V. Karasev, “Analogues of objects of the theory of Lie groups for nonlinear Poisson brackets,” Math. USSR-Izv. 28 (1987) 497–527. [37] H. M. Khudaverdian and Th. Voronov, “Higher Poisson brackets and differential forms,” AIP Conf. Proc. 1079 (2008) 203–215 [arXiv:0808.3406 [math-ph]], [38] C. Klimčík and T. Strobl, “WZW–Poisson manifolds,” J. Geom. Phys. 43 (2002) 341–344 [arXiv:math.SG/0104189] . [39] M. Kontsevich, “Deformation quantization of Poisson manifolds,” Lett. Math. Phys. 66 (2003) 157–216 [40] V. G. Kupriyanov, “Quantum mechanics with coordinate dependent noncommutativity,” J. Math. Phys. 54 (2013) 112105 [arXiv:1204.4823 [math-ph]]. [41] V. G. Kupriyanov, “Weak associativity and deformation quantization,” Nucl. Phys. B 910 (2016) 240–258 [arXiv:1606.01409 [hep-th]]. [42] V. G. Kupriyanov, “Recurrence relations for symplectic realization of (quasi-)Poisson structures,” J. Phys. A 52 (2019) 225204 [arXiv:1805.12040 [math-ph]]. [43] V. G. Kupriyanov, “$L_\infty$-bootstrap approach to noncommutative gauge theories,” Fortsch. Phys. 67 (2019) 1910010 [arXiv:1903.02867 [hep-th]]. [44] V. G. Kupriyanov, “Noncommutative deformation of Chern-Simons theory,” Eur. Phys. J. C 80 (2020) 42 [arXiv:1905.08753 [hep-th]]. [45] V. G. Kupriyanov and R. J. 
Szabo, “$G_{2}$-structures and quantization of non-geometric M-theory backgrounds,” JHEP 02 (2017) 099 [arXiv:1701.02574 [hep-th]]. [46] V. G. Kupriyanov and R. J. Szabo, “Symplectic realization of electric charge in fields of monopole distributions,” Phys. Rev. D 98 (2018) 045005 [arXiv:1803.00405 [hep-th]]. [47] V. G. Kupriyanov and D. V. Vassilevich, “Star products made (somewhat) easier,” Eur. Phys. J. C 58 (2008) 627–637 [arXiv:0806.4615 [hep-th]]. [48] V. G. Kupriyanov and D. V. Vassilevich, “Nonassociative Weyl star-products,” JHEP 09 (2015) 103 [arXiv:1506.02329 [hep-th]]. [49] V. G. Kupriyanov and P. Vitale, “Noncommutative $\mathbb{R}^d $ via closed star product,” JHEP 08 (2015) 024 [arXiv:1502.06544 [hep-th]]. [50] V. G. Kupriyanov, M. Kurkov and P. Vitale, “$\kappa$-Minkowski deformation of $U(1)$ gauge theory,” JHEP 01 (2021) 102 [arXiv:2010.09863 [hep-th]]. [51] T. Lada and J. Stasheff, “Introduction to sh-Lie algebras for physicists,” Int. J. Theor. Phys. 32 (1993) 1087–1104 [52] S. L. Lyakhovich and A. A. Sharapov, “BRST theory without Hamiltonian and Lagrangian,” JHEP 03 (2005) 011 [53] S. L. Lyakhovich, M. Peddie and A. A. Sharapov, “Lifting a weak Poisson bracket to the algebra of forms,” J. Geom. Phys. 116 (2017) 330–344 [arXiv:1511.05731 [math-ph]]. [54] S. McCurdy and B. Zumino, “Covariant star product for exterior differential forms on symplectic manifolds,” AIP Conf. Proc. 1200 (2010) 204–214 [arXiv:0910.0459 [hep-th]]. [55] D. Mylonas, P. Schupp and R. J. Szabo, “Membrane sigma-models and quantization of non-geometric flux backgrounds,” JHEP 09 (2012) 012 [arXiv:1207.0926 [hep-th]]. [56] D. Mylonas, P. Schupp and R. J. Szabo, “Non-geometric fluxes, quasi-Hopf twist deformations and nonassociative quantum mechanics,” J. Math. Phys. 55 (2014) 122301 [arXiv:1312.1621 [hep-th]]. [57] Y. G. Oh and J. S. Park, “Deformations of coisotropic submanifolds and strongly homotopy Lie algebroids,” Invent. Math. 161 (2005) 287–360 [58] J.-S. 
Park, “Topological open $p$-branes,” in: Symplectic geometry and mirror symmetry, eds. K. Fukaya, Y. G. Oh, K. Ono and G. Tian (World Scientific Publishing, River Edge, NJ, 2001) 311–384 [arXiv:hep-th/0012141]. [59] C. Sämann and R. J. Szabo, “Groupoid quantization of loop spaces,” Proc. Sci. 155 (2012) 046 [arXiv:1203.5921 [hep-th]]. [60] C. Sämann and R. J. Szabo, “Groupoids, loop spaces and quantization of 2-plectic manifolds,” Rev. Math. Phys. 25 (2013) 1330005 [arXiv:1211.0395 [hep-th]]. [61] G. Sardanashvily, Fiber Bundles, Jet Manifolds and Lagrangian Theory (Lambert Academic Publishing, 2013) [arXiv:0908.1886 [math-ph]]. [62] V. Schomerus, “D-branes and deformation quantization,” JHEP 06 (1999) 030 [63] N. Seiberg and E. Witten, “String theory and noncommutative geometry,” JHEP 09 (1999) 032 [64] P. Ševera, “Quantization of Poisson families and of twisted Poisson structures,” Lett. Math. Phys. 63 (2003) 105–113 [65] J. Stasheff, “Homotopy associativity of H-spaces I,II," Trans. Amer. Math. Soc. 108 (1963) 275–312. [66] R. J. Szabo, “Symmetry, gravity and noncommutativity,” Class. Quant. Grav. 23 (2006) R199–R242 [67] R. J. Szabo, “Quantization of magnetic Poisson structures,” Fortsch. Phys. 67 (2019) 1910022 [arXiv:1903.02845 [hep-th]]. [68] R. J. Szabo, “An introduction to nonassociative physics,” Proc. Sci. 347 (2019) 100 [arXiv:1903.05673 [hep-th]]. [69] Th. Voronov, “Higher derived brackets and homotopy algebras," J. Pure Appl. Alg. 202 (2005) 133–153 [70] A. Weinstein, “Symplectic groupoids and Poisson manifolds,” Bull. Amer. Math. Soc. 16 (1987) 101–104. [71] A. Weinstein, “The local structure of Poisson manifolds,” J. Diff. Geom. 18 (1983) 523–557.
# EphemeriShield - defence against cyber-antisatellite weapons Rafal Graczyk1, Marcus Voelp1, Paulo Esteves-Verissimo1 Email: <EMAIL_ADDRESS> CritiX Lab - Critical and Extreme Security and Dependability 1 SnT - Interdisciplinary Centre for Security, Reliability and Trust, University of Luxembourg

Satellites are both crucial and, despite common misbelief, very fragile parts of our civilian and military critical infrastructure. While many efforts focus on securing the ground and space segments, especially when national security or large business interests are affected, the small-sat, new-space revolution democratizes access to, and exploitation of, near-Earth orbits. This brings new players to the market, typically in the form of small to medium-sized companies offering new or more affordable services. Despite the necessity and inevitability of this process, it also opens potential new avenues for targeted attacks against space-related infrastructure. Smaller organizations, with less established revenue models, have a natural incentive to cut corners in search of cost optimization and shorter time-to-market [1]. While there are many classical anti-satellite (ASAT) weapons using various kinds of effectors (kinetic, RF, laser), we recently observed a proof of concept for a further, smarter attack of this kind [2], which follows a cybernetic approach, attacking the information sphere rather than causing direct energy transfer in orbit. In most cases, satellite operators know very well where their own space assets are located and with what orbital parameters they fly (e.g., from GNSS receivers on board their spacecraft or from their own tracking and ranging facilities).
What these operators typically cannot derive just by their own means of observation are the locations and orbital parameters of other objects, such as active and inactive satellites, but more importantly, fields of space debris on a potential collision course with the operator’s space assets [3, 4]. Instead, they have to obtain information about such objects by querying two-line element (TLE) debris files provided by CelesTrak, Space-Track, or any other source. The consequences of orbital collisions do not have to be discussed here, as they are already widely known [5]. To avoid them, orbital conjunction assessment aims at foreseeing possible close encounters that threaten the well-being of satellites, monitoring changes in the orbital situation, and engaging in collision avoidance maneuvers, to the degree that propulsion or attitude manipulation allows [6]. Cyber-ASAT [2] describes a method for altering debris TLEs to orchestrate fake alarms and unnecessary collision avoidance maneuvers, targeting satellites to exhaust their fuel or jeopardize their availability during collision avoidance. The spectrum of TLE attack possibilities (altering or spoofing) is wide:

1. intentional, by the Space Surveillance and Tracking (SST) system operator, altering the TLE at the source;
2. intentional, by external groups, altering the TLE provided to the satellite operator (hacking into the SST user front-end, man-in-the-middle attacks);
3. unintentional, by error.

The consequences of such TLE spoofing attacks are diverse. They may include fake collision avoidance alarms, spanning from seemingly low-criticality but unnecessary maneuvers and propellant loss (shortening mission lifetime or degrading system performance) to potentially orchestrated activity, launching organized attack campaigns on some third party by tricking several space-asset operators into undertaking actions that increase the collision likelihood with that third party’s space assets.
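As an aside on the unintentional case (error), the TLE format itself carries a basic integrity check: each 69-character TLE line ends with a modulo-10 checksum over the preceding 68 characters, where digits count at face value, each minus sign counts as 1, and all other characters count as 0. The Python sketch below verifies this on a widely reproduced historical ISS element line. Note that a checksum only catches accidental corruption; a spoofing attacker can simply recompute it, which is precisely why stronger, distributed validation is needed.

```python
def tle_checksum(line: str) -> int:
    """Modulo-10 checksum over the first 68 characters of a TLE line.

    Digits count at face value, each '-' counts as 1, and all other
    characters (letters, blanks, periods, '+') count as 0.
    """
    total = 0
    for ch in line[:68]:
        if ch.isdigit():
            total += int(ch)
        elif ch == "-":
            total += 1
    return total % 10

def tle_line_valid(line: str) -> bool:
    """A TLE line is 69 characters long; the 69th is the checksum digit."""
    return len(line) == 69 and line[-1].isdigit() and tle_checksum(line) == int(line[-1])

# Historical ISS TLE line 1 (a widely reproduced example)
iss_l1 = "1 25544U 98067A   08264.51782528 -.00002182  00000-0 -11606-4 0  2927"
tampered = iss_l1.replace("25544", "25545")  # flip one digit of the catalog number
```

Here `tle_line_valid(iss_l1)` holds, while the tampered line fails the check, since changing a single digit shifts the checksum.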
In this work, we propose a countermeasure to the presented problem in the form of a distributed solution with no central authority responsible for storing and disseminating TLE information. Instead, each peer participating in the system has full access to all records stored in the system, and the data is distributed in a consensual manner, ensuring information replication at each peer node. This way, the single points of failure of classic systems, which currently exist due to the direct ephemerides distribution mechanism, are removed. Our proposed solution is to build data dissemination systems using permissioned, private ledgers where peers have strong and verifiable identities, which also allows for redundancy in SST data sourcing. Each partner providing object localization data can be held responsible (or at least identified) for low-quality information and excluded in the future. In our proposed solution, object data (a unique identifier and ephemerides) is stored in a blockchain. The chaincode (i.e., the operations associated with the designed blockchain system) runs in the system and defines assets (in this case, the orbital elements of objects under tracking) and transactions (in this case, instructions on how to modify an asset, i.e., algorithmic criteria deciding whether to update an object's orbital elements, or requests for conjunction analysis and warning distribution). All information injected into the system has to pass sanity checks, validating the feasibility envelope of the provided parameters by numerical propagation with respect to the last inputs, or with respect to ephemerides provided by other SST data sources. An example algorithm is outlined in Figure 1. After the values are checked, sanitized and accepted by the chaincode governing data entry, a transaction is added to the blockchain and made visible to all participants in the system, who update their ephemerides catalogs.
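The feasibility-envelope check described above (outlined in Figure 1) can be sketched in a few lines of Python. The propagation step here is deliberately reduced to a two-body mean-anomaly advance and only a single orbital element is compared; a real deployment would propagate the full element set with SGP4 or a numerical propagator. All names and the tolerance value are illustrative assumptions, not part of the proposal's specification.

```python
from dataclasses import dataclass

@dataclass
class Ephemeris:
    epoch: float         # seconds since an arbitrary reference time
    mean_anomaly: float  # degrees; stand-in for the full element set OE
    mean_motion: float   # degrees per second

def propagate(old: Ephemeris, new_epoch: float) -> Ephemeris:
    """Advance the mean anomaly linearly (two-body approximation)."""
    dt = new_epoch - old.epoch
    ma = (old.mean_anomaly + old.mean_motion * dt) % 360.0
    return Ephemeris(new_epoch, ma, old.mean_motion)

def entry_feasible(last: Ephemeris, new: Ephemeris, eps_deg: float = 1.0) -> bool:
    """Entry check: propagate the last accepted entry to the new epoch and
    accept the new entry only if it lies within the feasibility envelope."""
    predicted = propagate(last, new.epoch)
    diff = abs(new.mean_anomaly - predicted.mean_anomaly)
    diff = min(diff, 360.0 - diff)  # shortest angular distance on the circle
    return diff < eps_deg

# ISS-like orbit: ~15.5 revolutions/day, i.e. about 0.0646 deg/s of mean motion
last = Ephemeris(epoch=0.0, mean_anomaly=10.0, mean_motion=0.0646)
consistent = Ephemeris(epoch=600.0, mean_anomaly=48.76, mean_motion=0.0646)
spoofed = Ephemeris(epoch=600.0, mean_anomaly=200.0, mean_motion=0.0646)
```

In the proposed system this check would run inside the chaincode, so every peer independently validates an update before it is appended to the ledger; an entry like `spoofed` above would instead trigger the warning path.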
In case of discrepancy or detection of non-feasible SST data entries, a warning is raised for manual intervention. This way, discrepancies reveal measurement errors, attempts at intentional information modification, or significant orbital maneuvers performed without prior notice. Clearly, at least three types of peer nodes in the system can be identified:

* SST data providers, attempting to update the database with new orbital elements as soon as they are detected by physical measurements;
* SST data users, who are interested in obtaining up-to-date orbital element information on the objects they control; and,
* a third type, which commonly could be the same as the second, looking for conjunction analysis results, which will raise alerts on future proximity events.

on new ephemerides entry (object_i, epoch_new, OE_new):
    (OE_old, epoch_old) := retrieve_last_ephemeris(object_i)
    OE_prop := propagate(OE_old, epoch_old, epoch_new)
    if |OE_new - OE_prop| < epsilon:
        append_ephemerides(object_i, epoch_new, OE_new)
    else:
        raise_warning(object_i, epoch_new, OE_new)

Figure 1: Orbital Elements entry check algorithm

The third type requires significant data processing capabilities, as conjunction candidate objects need to be selected (whose number could be substantial, especially in Low Earth Orbit (LEO)). The orbits of such candidates need to be propagated for some time into the future (depending on the quality of the models used, usually no longer than a few days ahead) to check for possible close proximity passes. However, with today’s high data processing capabilities, this should not pose organizational or technical problems in the large majority of ground-segment facilities. In order to keep the blockchain size reasonable, only recent system snapshots (i.e., the state of all objects in the catalog) and the transactions relating to them need to be kept.
This is possible due to the fact that ephemerides older than three days become useless for predicting the current position and velocity of an object in orbit, as numerical propagators, especially the popular SGP4 used with TLEs, cannot take into account all the perturbations affecting the orbital motion of bodies, which can be significant in LEO. The resulting propagation errors would be too large. As all operations on the surveyed-objects catalog are immutable and traceable (thanks to the permissioned-ledger-based blockchain), it is possible to assign financial value to the use of the system, as an incentive for participants to provide correct and high-quality data. Eventually, such an additional mechanism would lead to the quite desirable outcome where SST data and analysis providers could have (at least to some extent) their costs covered by the users of their data. The incentives embedded in the proposed mechanism, along with independence from a central authority and redundancy in data sources and data processing, may lead to a democratization of access to SST information, which further contributes to the stability of the system, providing trust without the need for central enforcement.

## References

* [1] B. Lal, A. Balakrishnan, B. M. Caldwell, R. S. Buenconsejo, and S. A. Carioscia, “Global trends in space situational awareness (SSA) and space traffic management (STM),” _IDA Science & Technology Policy Institute_, 04 2018.
* [2] J. Pavur and I. Martinovic, “The Cyber-ASAT: On the impact of cyber weapons in outer space,” in _2019 11th International Conference on Cyber Conflict_, 05 2019, pp. 1–18.
* [3] T. Kelso and S. Alfano, “Satellite orbital conjunction reports assessing threatening encounters in space (SOCRATES),” _Proceedings of SPIE - The International Society for Optical Engineering_, 05 2006.
* [4] 18th Space Control Squadron, Joint Force Space Component Command, “Spaceflight safety handbook for satellite operators,” 02 2019.
* [5] D. Kessler, N. Johnson, J.-C.
Liou, and M. Matney, “The kessler syndrome: Implications to future space operations,” _Advances in the Astronautical Sciences_ , vol. 137, 01 2010. * [6] M. Nicolls, V. Vittaldev, D. Ceperley, J. Creus-Costa, C. Foster, N. Griffith, E. Lu, J. Mason, I. Park, C. Rosner, and L. Stepan, “Conjunction assessment for commercial satellite constellations using commercial radar data sources,” in _Advanced Maui Optical and Space Surveillance (AMOS) Technologies Conference_ , Jan 2017, p. 18.
§ INTRODUCTION

A planetary nebula (PN) forms when a sun-like star ejects its envelope at the end of its life. The ejected envelope forms an expanding nebula around the remnant core of the star, which ionizes it. After some $10^4$ years, the PN fades from view, both because of the expansion and dilution of the nebula and because of the fading of the ionizing star. Around 3000 PNe are known in the Galaxy. PNe show up as compact nebulosity on images of the sky, with typical spectra dominated by emission lines. They are commonly identified by comparing images taken at different wavelengths. However, they can be confused with other types of astronomical objects: confirmation that a nebula is indeed a PN requires follow-up spectroscopy, and a significant fraction of cataloged PNe were later found to be misidentified. An overview of PN discovery surveys can be found in Parker, 2020. The most up-to-date catalog of PNe in our Milky Way Galaxy is the Hong Kong/Australian Astronomical Observatory/Strasbourg Observatory H-alpha Planetary Nebula research platform database (HASH DB) [Parker et al., 2016]. It contains over 3600 Galactic objects classified as confirmed ('True') PNe, Likely PNe or Possible PNe, in decreasing order of confidence. The database also holds about 5000 objects that were originally suggested as PNe but were rejected and re-classified as a variety of different object types. A notable aspect of PNe is their axisymmetric structure. There is a wide variety of structures, seen especially well in high-resolution observations (e.g., from the Hubble Space Telescope), but they tend to fall into a few distinct groupings, namely Round, Elliptical and Bipolar morphologies. These morphologies are thought to originate in the envelope ejection by the progenitor star, where in particular a binary companion may contribute to the deviations from sphericity [Balick and Frank, 2002].
PN morphology has been studied since the 19th century [Shaw, 2011], and it has grown in importance with the advances in sensitivity and resolution arising from new detector technologies and observation techniques. The morphological classification assigned to a PN can be affected by the quality of the image. The outer regions are often faint and require a high dynamic range; many PNe that had earlier been classified as Elliptical or Round are now seen to be Bipolar [Kwok, 2018]. However, for many PNe only images from wide-field or all-sky surveys are available, and these have limited resolution and sensitivity. For PNe close to the plane of the Galaxy, confusion by the many field stars seen near to or superposed on the nebula can further complicate the analysis of the PN image. The morphological classifications are therefore still being studied and improved upon. In this paper, we investigate the efficacy of Deep Learning (DL) for deciding whether an object is a PN and for determining its morphological classification. We make use of the PN images available in the HASH DB and in the Panoramic Survey Telescope and Rapid Response System Data Release 2 (Pan-STARRS) [Flewelling et al., 2016, Chambers et al., 2019]. The main objective is to leverage knowledge from pre-trained DL models and use it to automatically distinguish True from Rejected PNe and to obtain their morphological classification. We compare several current DL models and assess their success in identifying True PNe and determining their morphology [Kwok, 2018]. This is a challenging problem when using typical survey images rather than the highest-quality images, which are available for only a subset of PNe. Several related works on classifying PNe have used different methods. Faundez-Abans et al., 1996 performed a cluster analysis on PN chemical compositions and then trained an Artificial Neural Network on the classified compositions to recognize and assign PNe to their respective types.
Recently, Akras et al., 2019 used a Machine Learning (ML) technique together with infrared photometric data to distinguish compact PNe from their mimics. Deep learning has been used for galaxy morphology classification [Fluke and Jacobs, 2020], mostly utilizing the Galaxy Zoo dataset [Barchi et al., 2020]. Galaxy morphologies are easier to determine, and the objects are little affected by foreground stars, as depicted in Figure <ref>. In contrast, PNe are more complex and are often located in dense star fields. This makes PNe a good test case for determining the accuracy and limitations of the technique. The results can be generalized to other datasets, such as deep-field images, where the most distant galaxies also present extended objects in highly confused fields [Beckwith et al., 2006]. Example images of Elliptical objects. From left to right: an Elliptical galaxy from the Galaxy Zoo dataset [Barchi et al., 2020]; Elliptical PNe in Optical images, H$\alpha$ "Quotient" images and infrared ("WISE432") images; and a high-resolution Optical Pan-STARRS image. Deep Learning is an emerging subdomain of ML in Artificial Intelligence (AI). The algorithms consist of deep Artificial Neural Network (ANN) layers that mimic the information-processing mechanism of the human brain. DL is the state of the art in computer vision and an effective image-classification approach [Gavali and Banu, 2019], as it is capable of processing large numbers of input images without pre-processing (feature extraction, mining or engineering), and it can learn to solve complex problems without human intervention. Inspired by how humans learn by transferring and leveraging previously obtained knowledge, we exploit the transfer learning approach. The advantage of transfer learning is that it achieves faster learning while requiring less training time and data.
The formal definition of transfer learning used in this work is [Pan and Yang, 2009]: given a source domain $D_s$ and its learning task $T_s$, and a target domain $D_t$ and its learning task $T_t$, transfer learning aims to improve the learning of the target predictive function $f_T(\cdot)$ in $D_t$ using the knowledge in $D_s$ and $T_s$, where $D_s \neq D_t$ or $T_s \neq T_t$. Here $D_s$ consists of the source domain data, $D_s = \{(x_{s_1}, y_{s_1}), ..., (x_{s_n}, y_{s_n})\}$, where $x_{s_i}$ is an image data instance and $y_{s_i}$ is its corresponding class label. Likewise, the target domain data are $D_t = \{(x_{t_1}, y_{t_1}), ..., (x_{t_m}, y_{t_m})\}$, where $x_{t_i}$ is an input image instance and $y_{t_i}$ is its corresponding output class label. Most often, the number of target instances is much smaller than the number of source instances, $0 < m \le n$. In this work, the term Deep Transfer Learning (DTL) refers to the application of the transfer learning approach during the training of the DL algorithms for PN classification, as shown in Figure <ref>. The overall framework of this work is depicted in Figure <ref>. We first create a dataset from HASH DB and Pan-STARRS, then select suitable modern DL architectures to perform the transfer learning, and save the best model built during training. The model is then used to classify the test images. Finally, we analyze and evaluate the results. The framework components are elaborated further in the following section. PNe classification as used in this work: True PNe versus Rejected, and the three allowed morphologies of the nebulae. The framework for deep transfer learning for True PNe, Rejected and morphological classifications. The images shown are from the HASH DB.

§ MATERIALS AND METHODS

§.§ Dataset Creation and Pre-Processing: HASH DB

To obtain the images of PNe, we used two databases: HASH DB [Parker et al., 2016] and the recent Pan-STARRS [Flewelling et al., 2016]. HASH DB contains a wide range of images taken with different instruments and telescopes.
We selected images from wide-area surveys, mostly taken from the IPHAS/VPHAS CCD surveys of the Galactic plane and the SHS/SSS photographic-plate surveys at optical wavelengths, and from the Wide-field Infrared Survey Explorer (WISE) all-sky survey at infrared wavelengths. The Optical images detect the emission from the nebular gas, whilst the infrared wavelengths detect emission from small solid particles (dust) in the nebulae. Traditionally, PNe have been discovered by a combination of Optical images showing their extended morphological nature and spectroscopy detecting the bright emission lines of the nebulae (normally dominated by H$\alpha$ emission near 656.3 nm). The wide-area surveys provide uniform data quality and properties for the PNe; this uniformity is a significant advantage for the DTL. The Optical images used here are taken in several filters. The SSS and SHS surveys are photographic SuperCOSMOS Sky Surveys. SSS is a three-band survey with broad filters, which in HASH DB are combined into a three-color image. SHS provides images in a narrow H$\alpha$ filter and a broader short-red filter [Parker et al., 2005]. HASH DB uses these to obtain a quotient image by dividing the H$\alpha$ image by the continuum image; this brings out the PN while minimizing the field stars, which are bright in the continuum. The INT Photometric H$\alpha$ Survey of the Northern Galactic Plane (IPHAS) and the VST/OmegaCAM Photometric H$\alpha$ Survey (VPHAS) are CCD H$\alpha$ surveys of, respectively, the northern and southern halves of the Galaxy [Drew et al., 2014]. HASH DB uses the three filters employed by these surveys (r, i and H$\alpha$) to make three-color images, and uses the H$\alpha$ and r filters for a quotient image. For the WISE data [Wright et al., 2010], we used the '432' HASH DB image created by combining the filters at 22, 11 and 4.6 $\upmu$m. IPHAS and VPHAS cover the areas within 5 degrees of the Galactic plane, where most PNe are found.
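The quotient construction described above is simple to state in code. A minimal NumPy sketch, assuming calibrated, aligned H-alpha and continuum frames (`quotient_image` is a hypothetical name for illustration, not HASH DB's pipeline):

```python
import numpy as np

def quotient_image(h_alpha, continuum, floor=1e-6):
    """Divide an H-alpha frame by a continuum frame. Field stars, bright
    in both bands, divide out to ~1, while nebular line emission, present
    only in H-alpha, is strongly enhanced."""
    cont = np.clip(continuum, floor, None)  # guard against division by zero
    return h_alpha / cont
```

The `floor` guard is an assumption of this sketch; real pipelines also handle sky subtraction and registration before dividing.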
SHS extends to areas further from the plane; SSS is all-sky. The three-band images are hereafter called 'Optical'. Where both IPHAS/VPHAS and SSS are available, we used the former, as those images have better spatial resolution and dynamic range. For this research, we selected image resources that are available for the large majority of PNe. An alternative approach would have been to select images from targeted observations optimized for PNe, including observations taken with the Hubble Space Telescope. This would have given much better quality images for the DL attempted here, but with less scope for application, as such data are typically only available for already well-studied objects. We concentrate on general surveys to test whether these methods can be used to classify less well-studied astronomical objects. Images from the HASH DB are retrieved as PNG images that include the RA (J2000) vs. Dec (J2000) coordinate axes and labels. We automatically cropped the images to remove the white regions where the axes are located. No further image manipulation was performed, and the input images of the PNe are generally visually similar to the ones in Figure <ref>. We divided the total number of images into Training (80%), Validation (10%) and Test (10%) sets. The images for the Training and Validation sets were randomly selected from all images, whereas the Test set was built by selecting PNe and using all images associated with each selected PN. Because the Test set covers a minority of the objects, randomly selecting individual images for it would leave most PNe in the test sample represented by a single image resource only, which would not allow us to test various combinations of different image resources. No PN, and no image, appears in more than one of the three sets. The Training set is used to build the DTL models.
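The object-level split described above can be sketched as follows. This is an illustrative re-creation, not the authors' code: `split_by_object` is a hypothetical name, and selecting ~10% of the *objects* for the Test set is an assumed approximation of a 10% image fraction.

```python
import random
from collections import defaultdict

def split_by_object(images, test_frac=0.1, val_frac=0.1, seed=0):
    """Split (object_id, image_id) pairs: the Test set keeps *all* images
    of each selected PN together, while Training/Validation are drawn
    image-by-image from the remaining objects."""
    rng = random.Random(seed)
    by_obj = defaultdict(list)
    for obj, img in images:
        by_obj[obj].append(img)
    objects = sorted(by_obj)
    rng.shuffle(objects)
    # Reserve ~test_frac of the objects, with every image of each one.
    n_test = max(1, round(test_frac * len(objects)))
    test_objs = set(objects[:n_test])
    test = [(o, im) for o in sorted(test_objs) for im in by_obj[o]]
    rest = [(o, im) for o, im in images if o not in test_objs]
    rng.shuffle(rest)
    n_val = round(val_frac * len(images))
    return rest[n_val:], rest[:n_val], test  # train, val, test
```

Keeping all images of a test PN together is what allows different image-resource combinations to be evaluated per object.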
The Validation set is set aside to provide an unbiased evaluation of a model built using the Training set and to fine-tune the model parameters. The Test set is used to provide an unbiased evaluation of the best model built on the Training set. In this work, we use the DL algorithms without tuning any parameters; the discussion of results therefore focuses on the outcomes from the Training and Test sets, with the Validation set only used as an intermediate check. It is worth pointing out that information about the position of the PNe in the Galaxy (which determines the density of confusing field stars) and the distances of the PNe was not used in the DL training and classification; the PNe were considered as a uniform set.

§.§ Dataset Creation and Pre-Processing: Pan-STARRS

The HASH DB contains pre-processed images of PNe, designed to act as visual cues for human researchers. These images come from several different programs with different characteristics, and include some comparatively low-resolution photographic datasets. Ideally, a comprehensive dataset should provide uniform resolution; cover a large portion of the sky; be sufficiently deep that faint, extended emission from the outer regions of nebulae is recovered; and contain a sufficiently wide set of color information that an emission spectrum can be distinguished from blackbody emission in the color data. These criteria are currently best met by the Pan-STARRS survey, which contains five-color ($grizy$-band) images of roughly three-quarters of the sky at arcsecond resolution, where the $r$ band includes the H$\alpha$ emission line. Its main drawback is that it lacks a narrow-band H$\alpha$ filter, which would have increased the sensitivity to PNe. Pan-STARRS images are currently not included in the HASH DB; we added these data separately to our image resources.
To obtain a uniform dataset of PNe, we used the Pan-STARRS image cutout API [<https://ps1images.stsci.edu/ps1image.html>] to extract FITS images in each filter, centred on the coordinates of the object as listed in HASH. Of the 3617 objects in the HASH DB, 2356 have a complete set of $grizy$ observations. To produce color images from these, each FITS image was clipped to remove the brightest 2.5% of pixels (set as white) and combined so that the blue, green and red channels ($B, G, R$) of the final image were represented by
\begin{eqnarray} B &=& g + r/2, \nonumber\\ G &=& r/2 + i + z/2, \nonumber\\ R &=& z/2 + y. \end{eqnarray}
Each of these three channels was then normalized on an 8-bit scale (0–255) to produce a color image. This was cropped to 512 $\times$ 512 pixels and then scaled to the stated input size of the relevant DL algorithm. These are hereafter referred to as the 'plain' set of images. Many known or suspected PNe are located in the Galactic bulge and Galactic plane, where stellar densities are high. Frequently, they are among the fainter objects in the surrounding projected field and are often lost in the glare of many brighter stars. To try to circumvent this, two further sets of images were produced: one where an effort was made to remove foreground and background stars from the images (referred to as 'No-star' images), and one where a mask was generated to remove emission from any sources other than the PN (hereafter 'Mask'). The additional processing steps to create these alternative images were performed on the original FITS images before clipping. Best practice for removal of stars from images generally involves calculating and removing a point-spread function (PSF) for each star (e.g., [Feder et al., 2020]).
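The clip-mix-normalize steps above can be sketched in NumPy. This is an illustrative implementation of the channel mixing in the equations, not the authors' pipeline; `grizy_to_rgb` is a hypothetical name, and clipping by saturating at the 97.5th-percentile value is an assumed reading of "set as white".

```python
import numpy as np

def grizy_to_rgb(g, r, i, z, y, clip_pct=2.5):
    """Clip the brightest `clip_pct` percent of each band, mix the five
    grizy frames into B, G, R channels per the equations above, and
    normalize each channel independently to an 8-bit (0-255) scale."""
    def clip(band):
        return np.minimum(band, np.percentile(band, 100.0 - clip_pct))
    g, r, i, z, y = (clip(b) for b in (g, r, i, z, y))
    B = g + r / 2
    G = r / 2 + i + z / 2
    R = z / 2 + y
    def to8bit(ch):  # independent per-channel normalization to 0..255
        ch = ch - ch.min()
        peak = ch.max()
        return (255 * ch / peak).astype(np.uint8) if peak > 0 else ch.astype(np.uint8)
    return np.dstack([to8bit(R), to8bit(G), to8bit(B)])  # RGB order
```

Clipped pixels saturate at the channel peak and so map to white after normalization, matching the intent of the description.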
However, characterizing an accurate PSF for each observation and in each filter, and ensuring its creation and subtraction while accounting for non-linearity, saturation and background correction, is too complex an endeavor to be attempted here. Instead, we take the approach of treating stellar PSFs as bad data and masking them from the image, either using a median filter for the 'No-Star' images or an image mask. Stars were identified for removal by searching for local maxima in the images within a certain neighborhood radius. The data were then median-filtered on the same radius, and this median-filtered image was subtracted, leaving a high-pass-filtered image showing small-scale structure. If the brightness of a star in this high-pass image exceeded a threshold, it was flagged for removal. Stars within $\sqrt{2}$ times the neighborhood radius of the PN centre were ignored, in order to avoid masking the central star of the nebula. Many stars lie within the nebula emission itself, so it is important to mask out no more of the image than necessary. For the 'No-Star' images, annuli were drawn around each star in the original image at one-pixel intervals, and the median value in each annulus was calculated. The reduction in median flux between one annulus and the next was calculated, and annuli were flagged for replacement if that reduction exceeded a tolerance (out to a certain maximum radius). The replacement value used was the median of the next annulus out from the star (i.e., the median flux of the first annulus not to show a substantial reduction in median flux with radius, $r$, denoted $M$ in the following). However, this had a tendency to produce faint, round 'ghost' stars in the images (Figure <ref>).
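The star-flagging step (local maxima in a median high-pass image, with the nebula centre protected) can be sketched as below. This is a simplified re-creation under stated assumptions, not the authors' code: `flag_stars` is a hypothetical name, the sliding-window median is deliberately naive, and applying the brightest-pixel threshold to the high-pass image is an assumed reading of the description.

```python
import numpy as np

def flag_stars(image, radius=7, threshold=0.01):
    """Flag probable stars: local maxima whose high-pass (image minus
    running median) brightness is large, skipping the region within
    sqrt(2)*radius of the image centre to protect the PN's central star."""
    h, w = image.shape
    med = np.empty_like(image, dtype=float)
    for yy in range(h):                      # naive sliding-window median
        for xx in range(w):
            y0, y1 = max(0, yy - radius), min(h, yy + radius + 1)
            x0, x1 = max(0, xx - radius), min(w, xx + radius + 1)
            med[yy, xx] = np.median(image[y0:y1, x0:x1])
    highpass = image - med                   # small-scale structure only
    # Cutoff from the brightest `threshold` fraction of high-pass pixels
    cutoff = max(float(np.quantile(highpass, 1.0 - threshold)), 0.0)
    cy, cx = (h - 1) / 2, (w - 1) / 2
    flagged = []
    for yy in range(h):
        for xx in range(w):
            if highpass[yy, xx] <= cutoff:
                continue
            if np.hypot(yy - cy, xx - cx) <= np.sqrt(2) * radius:
                continue                     # protect the central star
            y0, y1 = max(0, yy - radius), min(h, yy + radius + 1)
            x0, x1 = max(0, xx - radius), min(w, xx + radius + 1)
            if image[yy, xx] >= image[y0:y1, x0:x1].max():
                flagged.append((yy, xx))
    return flagged
```

A production version would use a vectorized median filter, but the logic (high-pass, threshold, local-maximum test, central exclusion zone) follows the text.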
Consequently, a hardness parameter ($h$) was introduced, which allows a weighted mean of this median and the original data ($D$) to generate the replacement dataset ($D^\prime$), namely
\begin{equation} D^\prime = f D + (1 - f) M , \end{equation}
where
\begin{equation} f = \left( \frac{r}{R} \right)^h \end{equation}
and $R$ is the maximum allowed radius for removal. This both avoids hard edges to the removed regions and allows $R$ to be expanded to larger radii without greatly affecting the nebula. This procedure was repeated four times to remove progressively fainter stars, using different parameters in each iteration. Through trial and error, we determined an appropriate set of parameters for the Pan-STARRS dataset: neighborhood sizes of 15, 13, 11 and 9 pixels; $R$ = 25, 20, 15 and 10 pixels; tolerances of factors of 1.01, 1.02, 1.03 and 1.04; thresholds of 0.3%, 0.7%, 1.0% and 2.5% of the image's brightest pixels; and $h$ = 7, 5, 3 and 1, for the four iterations, respectively. One pixel in Pan-STARRS corresponds to 0.26 arcsec. Visual inspection of the images with stars removed in this manner showed that the procedure was effective in giving the fainter, diffuse emission from the nebula more prominence. However, the removal of stars was still imperfect, and a large number of objects classified as True PNe remained as relatively faint, unresolved sources in the centre of the images. An attempt was then made to generate a mask around the emission from the central source. This began by filtering the star-subtracted data with a 21-pixel-radius median filter. The median flux of this filtered image was subtracted to leave an image showing only large-scale structure and with an overall median flux of zero. We ordered the pixels in this image by flux and calculated the 2.5th-percentile flux as a benchmark.
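The soft replacement defined by the two equations above can be written directly. A minimal sketch for a single star, assuming the clean-annulus median $M$ has already been measured (`soften_star` is a hypothetical name for illustration):

```python
import numpy as np

def soften_star(data, center, R, h, M):
    """Apply D' = f*D + (1-f)*M with f = (r/R)**h inside radius R of
    `center`: purely M at the star's centre (f=0), blending smoothly
    back into the original data D as r approaches R (f -> 1)."""
    out = data.astype(float).copy()
    cy, cx = center
    H, W = data.shape
    for yy in range(max(0, cy - R), min(H, cy + R + 1)):
        for xx in range(max(0, cx - R), min(W, cx + R + 1)):
            r = np.hypot(yy - cy, xx - cx)
            if r < R:
                f = (r / R) ** h
                out[yy, xx] = f * data[yy, xx] + (1.0 - f) * M
    return out
```

A larger $h$ keeps $f$ small over more of the radius, replacing the star more aggressively, which is why the brighter early iterations use $h = 7$ and the final one $h = 1$.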
Working on the assumption that the majority of the image remains Gaussian-distributed noise, the negative of this benchmark should approximate the 2$\sigma$ upper bound (97.5th percentile) of the noise in the data, and any greater flux should represent emission from the PN (or surrounding stars). A mask was generated such that any areas of emission contiguous with that of the central star remained in the image, and the remainder of the image was set to black. The mask was not applied to each filter separately but to the overall image, with areas passed by the mask if at least three of the five bands showed emission. In practice, this masking process was not very effective for many PNe. As shown in Figure <ref>, it proved difficult to identify an appropriate cut-off percentile that satisfied both the need to remove overlapping PSFs of unrelated stars and the need to retain faint emission from the edges of the PNe. Examples of PNe from the Pan-STARRS survey (left), showing (top) successful and (bottom) unsuccessful algorithmic removal (middle) and masking (right) of contaminating stars. The bottom example shows the difficulties in isolating the faint nebular emission (the diffuse red glow in the bottom-centre panel) from the dense field of background stars. To reduce the confusion generated by foreground and background stars, as well as by the large number of PNe that remained as point sources in the images, a subset of objects was selected from the HASH DB that are at least 2$^{\prime\prime}$ in diameter along their major axis and lie at least 2$^\circ$ from the Galactic plane.

§.§.§ Sample Selection for True PNe and Rejected Classification

We queried the HASH DB using the 'select sample' option provided in its combined search user interface. We submitted the query listed in Table <ref> to retrieve the True PNe, the Rejected PNe and other objects. The True PNe have been confirmed as PNe, while the Rejected objects are considered not to be PNe.
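The noise benchmark and contiguity requirement described above can be sketched for a single-band image. This is a simplified single-channel illustration (the paper combines five bands and requires three of them to show emission); `central_emission_mask` is a hypothetical name, and 4-connectivity is an assumption of this sketch.

```python
import numpy as np

def central_emission_mask(image, benchmark_pct=2.5):
    """Keep only emission 4-connected to the central pixel. The negative
    of the 2.5th-percentile flux of the zero-median image approximates
    the 2-sigma noise bound; fainter or detached pixels are masked out."""
    img = image - np.median(image)              # zero overall median flux
    cutoff = -np.percentile(img, benchmark_pct) # ~97.5th-percentile noise bound
    above = img > cutoff
    mask = np.zeros_like(above)
    h, w = img.shape
    start = (h // 2, w // 2)
    if not above[start]:
        return mask                             # no central emission detected
    stack = [start]
    while stack:                                # flood fill from the centre
        yy, xx = stack.pop()
        if mask[yy, xx]:
            continue
        mask[yy, xx] = True
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = yy + dy, xx + dx
            if 0 <= ny < h and 0 <= nx < w and above[ny, nx] and not mask[ny, nx]:
                stack.append((ny, nx))
    return mask
```

The flood fill is what enforces contiguity with the central star: a bright but detached field star exceeds the cutoff yet never enters the mask.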
The HASH DB contains separate lists of objects suspected to be PNe but not yet confirmed as such: these are listed as Likely PNe or Possible PNe, depending on the degree of confidence. 'Likely' PNe have a higher degree of confidence, with PN the most likely classification, but lack confirmation from spectra or images; 'Possible' PNe have inconclusive spectra and images, and PN classification is one of several possibilities [Parker et al., 2016]. We also retrieved these. To select the Rejected PNe and other objects, we searched for the following object types in the HASH DB: AGB star, AGB star candidate, artifact, Be star, cataclysmic variable star, circumstellar matter, cluster of stars, cometary globule, emission-line star, emission object, galaxy, Herbig–Haro Object, H ii region, interesting object, ionized ISM, object of unknown nature, objects to check, PAGB/Pre-PN (post-AGB stars), possible Be star, possible emission-line star, possible galaxy, possible Herbig–Haro Object, possible pre-PN, possible transient event, RCrB/eHe/LTP (post-PNe objects), reflection nebula, RV Tau, star, supernova remnant, supernova-remnant candidate, symbiotic star, symbiotic star candidate, test object, transient event, transition object, white dwarf/hot sub-dwarf, young stellar object and young stellar object candidate. During the period of our data collection (April 2020), the total number of True PNe returned by HASH DB was 2450. The distribution of the True PNe according to their image resources is listed in Table <ref>. Based on the total number of images for each type of image resource, we decided to use 2100 images from the HASH DB as samples for the True PNe and Rejected classes. This is because of two factors: first, the number of Quotient images available; and second, our approach of performing DTL on a balanced dataset. All of the Possible and Likely PNe images were used to predict whether they fall into the True PNe or Rejected class.
For this test, we only use the Plain images from the Pan-STARRS survey: the total number of Pan-STARRS images of True PNe is 1508 and that of the Rejected class is 1768, and we used 1500 images from each class for the DL algorithms. The details of the distribution of the True PNe and Rejected classes from HASH DB and Pan-STARRS used for the DL algorithms are given in Table <ref>.

Distribution of True PNe, Rejected PNe/Other Objects, Possible PNe and Likely PNe alongside their respective image resources from the HASH DB.

Class | Total # PNe | Total # Images | Optical | Quotient | WISE432 | Pan-STARRS
True PNe | 2450 | 17,612 | 2443 | 2101 | 2441 | 1508
Rejected PNe/Other Objects | 2741 | 18,507 | 2696 | 2159 | 2694 | 1768
Possible PNe | 368 | 2630 | 367 | 330 | 368 | 216
Likely PNe | 313 | 2287 | 311 | 282 | 312 | 242
Grand Total | 5872 | 41,036 | 5817 | 4872 | 5815 | 3734

Dataset distribution for True and Rejected PNe from the HASH DB and Pan-STARRS.

Dataset | Percentage (HASH DB / Pan-STARRS) | HASH DB # Images | Pan-STARRS # Images
Training | 80% / 77% | 1680 | 1200
Validation | 10% / 10% | 210 | 150
Test | 10% / 13% | 210 | 210
Total number of images for each image resource | | 2100 | 1560
Total number of images for each PNe class | | 6300 | 1560
Total number of images used for True and Rejected PNe classification | | 12,600 | 3120

§.§.§ Sample Selection for PNe Morphological Classification

The morphological classification of the PNe in HASH DB is based on Corradi and Schwarz [Ritter and Parker, 2020, Corradi and Schwarz, 1995]. The retrieved images returned by the True PNe query (in Section <ref>) were downloaded and consolidated into a collection of PNe based on their morphologies and the type of image resource. Distribution details of the True PNe morphologies and image resources from HASH DB and Pan-STARRS are given in Table <ref>. The number of images for each of the different types of Pan-STARRS image resources (Plain, Quotient, No-star and Mask images) is the same as the number of PNe.
Since our approach is to create a DL model from a balanced image distribution, we selected the three most frequent morphologies (Bipolar, Elliptical/Oval and Round), which have a reasonable number of examples to learn from. The Asymmetric and Irregular classes have too few objects, and the Quasi-Stellar class refers to unresolved PNe for which no morphological information is available. In total, 280 images from each type of HASH DB image resource were randomly selected as examples for the three morphologies (hence a total of 840 images). Choosing 280 images allows us to later build a model that comprises True PNe, Likely PNe and Possible PNe. For the images from Pan-STARRS, we used 160 images for each type of morphology, set by the limit of the available Bipolar images. Details of the distribution are tabulated in Table <ref>.

Distribution of morphology and image resources from HASH DB.

Morphology | Total # PNe | Total # Images | Optical | Quotient | WISE432 | Pan-STARRS
Asymmetric | 9 | 69 | 9 | 8 | 9 | N/A
Bipolar | 543 | 3857 | 542 | 464 | 540 | 161
Elliptical/Oval | 1017 | 9764 | 1010 | 861 | 1012 | 390
Irregular | 18 | 135 | 18 | 15 | 18 | N/A
Quasi-Stellar | 374 | 2829 | 370 | 350 | 372 | N/A
Round | 489 | 3408 | 489 | 397 | 487 | 200
Grand Total | 2450 | 20,062 | 2438 | 2095 | 2438 | 751

Dataset distribution for each type of morphology from HASH DB and Pan-STARRS.

Dataset | Percentage | HASH DB # Images | Pan-STARRS # Images
Training | 80% | 224 | 128
Validation | 10% | 28 | 16
Test | 10% | 28 | 16
Total number of images for each morphology | | 280 | 160
Total number of images for each image resource | | 840 | 640
Total number of images used for PNe morphology classification | | 2520 | 1920

§.§ Deep Transfer Learning Algorithm Selection

Instead of initiating a new DL process to learn the PN classification and morphological structures from scratch, we applied transfer learning from existing popular DL algorithms.
These algorithms were trained on a large-scale image-classification task over the visual categories of the ImageNet dataset, which contains 14 million images in 22 thousand visual categories [Russakovsky et al., 2015]. For an initial study, we selected eight DL algorithms whose classification effectiveness was validated on ImageNet by Keras [23]. We found that the three selected DL algorithms in Table <ref> were the most effective at classifying PNe. AlexNet [Krizhevsky et al., 2012], VGG-16 [Simonyan and Zisserman, 2015], VGG-19 [Simonyan and Zisserman, 2015], ResNet50 [He et al., 2016] and NASNetMobile [Zoph et al., 2018] were also tested but were found to be less effective and were dropped from further consideration. The algorithms and their effectiveness from [23] are listed in Table <ref>.

List of DL algorithms considered in this work for PNe True versus Rejected and for the morphological classification.

DL Algorithm | Top-1 Accuracy [23]
InceptionResNetV2 (2016) [Szegedy et al., 2017] | 0.803
DenseNet201 (2017) [Huang et al., 2017] | 0.773
ResNet50 (2015) [He et al., 2016] | 0.749
NASNetMobile (2017) [Zoph et al., 2018] | 0.744
VGG-16 (2015) [Simonyan and Zisserman, 2015] | 0.713
VGG-19 (2015) [Simonyan and Zisserman, 2015] | 0.713
MobileNetV2 (2018) [Howard et al., 2017] | 0.713
AlexNet (2012) [Krizhevsky et al., 2012] | 0.633

Many efforts have been made to improve the original architectural design of convolutional networks (ConvNets) to achieve better accuracy. One line of improvement concerns the depth of the ConvNets, where the other parameters of the architecture are fixed while the depth of the network is steadily increased by adding more convolutional layers. An example is the Inception architecture, which achieved very good performance at a relatively low computational cost. Residual networks (ResNets) [He et al., 2016] were introduced to address the computational time required to train very deep ConvNets.
Recently, the introduction of residual connections alongside traditional architectures has yielded state-of-the-art performance. Inception-ResNet-v2 is the combination of the Inception architecture and residual connections. This combination was studied by Szegedy et al., 2017, and the experimental results clearly show that training inception networks with residual connections (Inception-ResNet-v2) is significantly faster. Another DL architecture inspired by ResNets is DenseNet [Huang et al., 2017]. DenseNet uses significantly fewer parameters and less computation by introducing shorter connections between the early layers and the later layers. As a result, input feature maps are reusable, accessible to all layers in the network, and closer to the output layer. DenseNet comes in various network configurations; DenseNet with 201 layers (known as DenseNet-201) has been shown to be the most effective among the variations. MobileNetV2 was introduced by Google [Howard et al., 2017]. In its original version, MobileNetV1, a depthwise separable convolution was introduced, which dramatically reduced the complexity cost and model size of the network. As the name implies, MobileNet is suitable for applications on mobile devices, or any device with low computational power. The network architecture consists of two main layers. The first layer, known as a depthwise convolution, performs lightweight filtering by applying a single convolutional filter per input channel. The second layer is a $1\times 1$ convolution, known as a pointwise convolution, which builds new features by computing linear combinations of the input channels. ReLU6 is used due to its robustness in low-precision computation.
In the second version of MobileNet (MobileNetV2), a better module with an inverted residual structure was introduced, and the non-linearities in the narrow layers were removed. MobileNetV2 is one of the best DL algorithms for feature extraction; it has achieved state-of-the-art performance on ImageNet and is also widely used for object detection and semantic segmentation. There are two popular DTL strategies. The first is to use a pre-trained model as a feature extractor, leveraging the pre-trained model's weighted layers to extract features but not updating the weights of those layers during training with new data. The second strategy is to fine-tune the pre-trained model by updating the hyper-parameters and model parameters of selected layers in the network. A DL training process involves tuning hyper-parameters and model parameters. Hyper-parameters are the properties of the DL architecture that govern the entire training process and are set before training starts, such as the number of epochs, the learning rate, the numbers of hidden layers and hidden units, and the activation functions. The model parameters are values estimated from the training data: the weights and biases in the DL architecture. The process of finding the best hyper-parameters and model parameters requires expertise and extensive trial and error. Frozen parameters are parameter values that are not changed or updated during training. As this is an initial study of how DL can be used to classify PNe, we focus the evaluation on the effectiveness of different DL models. Nevertheless, we executed several preliminary experiments that automatically update the model weights based on the training data, and the results are better than those of the models with frozen parameters. Taking advantage of the learned feature maps of the selected pre-trained DL algorithms, we adopted the first DTL strategy to extract meaningful features from the PN images.
Transfer learning was done for all the selected pre-trained algorithms using the same approach: A new DTL model was composed by loading the selected pre-trained DL algorithms (as the convolutional base) with ImageNet weights as the initial starting weights and stacking a classification layer on top to represent the PNe class and morphologies as output (depicted in Figure <ref>). All of the selected pre-trained DL algorithms were used without their original classification layers, and the convolutional parameters were frozen and used as the feature extractor. As the PNe class and morphology classifier, we used a global average pooling layer and fed its output directly into the softmax-activated layer. For algorithms where the output is a raw prediction value (logit), we omitted the softmax activation layer and replaced it with a Dense layer.

[Figure: Conceptual view of the Deep Transfer Learning (DTL) architecture used in this work. Images from the Training and Validation set feed into the selected DL algorithm (frozen), whose output passes to the trainable PNe classification layer: True, Rejected and morphologies.]

In this work, our initial experimental strategy was executed in Python 3.7 using Google Colab's GPU [Carneiro et al., 2018]. The DTL model was implemented using TensorFlow version 2.0 [Abadi et al., 2015] and the Keras applications module [23]. The Keras (TensorFlow backend) evaluate() function was used to evaluate the fit performance of the built model, and the predict() function was used to predict the images in the Test set. However, the evaluate() and predict() functions produced inconsistent results: the output of the predict() function could not be reconciled with the success rate returned by the evaluate() function and was considered suspect. To address the issue, we used our own local GPU server (Tesla V100, 16 GB, 5120 CUDA cores and 640 Tensor cores used for DL computations) and MATLAB to execute the same DTL models and evaluation. This produced internally consistent results. All of the DTL models were trained and built using the same hyper-parameters: a batch size of 64; Root Mean Square Propagation (RMSprop) with a learning rate of 0.0001 as the loss optimizer; and 100 epochs. The model-parameters remained unchanged except for the last layer, which was used to classify the PNe class. The model-parameter details for the DTL are given in Table <ref>. STEM is the number of parameters when a model is trained without the original classification layers.

Model and parameter details.
Model              Image Size   Total parameters   Trainable parameters   Non-trainable parameters
InceptionResNetV2  299          66,920,163         66,859,619             60,544
DenseNet201        224          30,364,739         30,135,683             229,056
MobileNetV2        224          2,260,546          2,562                  2,257,984

§.§ Evaluation Metrics

A binary classification was used for True versus Rejected PNe. In contrast, multi-class classification was used to classify the PNe into their respective morphologies, where an object is assigned to exactly one of $n$ distinct classes [Sokolova and Lapalme, 2009], in this case Bipolar, Elliptical or Round. We evaluated the effectiveness of the DTL models using accuracy, the F1 score and two of the most commonly used evaluation metrics based on relevance judgement, namely precision and recall [Baeza-Yates and Ribeiro-Neto, 1999]. Accuracy is how often the DTL model classifies the PNe into the correct class. The F1 score is the harmonic mean of precision and recall. Based on the confusion matrix in Figure <ref>, the formal definitions of the evaluation metrics are:
\begin{align}
{\rm Accuracy} & = \frac{T_p+T_n}{T_p+F_p+F_n+T_n} \\
{\rm Precision} & = \frac{T_p}{T_p+F_p} \\
{\rm Recall} & = \frac{T_p}{T_p+F_n} \\
{\rm F1\ Score} & = \frac{2 \times ({\rm Recall} \times {\rm Precision})}{{\rm Recall} + {\rm Precision}}
\end{align}

Confusion matrix for the evaluation measures.

§ RESULTS

In this section, we provide the experimental findings of the DL algorithms in classifying PNe as True versus Rejected and by PNe morphology. The highlights of these results are the effectiveness of the three DL algorithms in classifying PNe into their categories and the outcome of predicting whether Possible and Likely PNe are True or Rejected. We describe the results obtained using the Training and Test sets.
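In code, the four evaluation metrics defined above reduce to a few arithmetic operations on the confusion-matrix counts (a minimal sketch; the counts in the example are illustrative, not taken from our experiments):

```python
def classification_metrics(tp, fp, fn, tn):
    """Accuracy, precision, recall and F1 from a binary confusion matrix."""
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * (recall * precision) / (recall + precision)
    return accuracy, precision, recall, f1

# Illustrative counts only.
acc, prec, rec, f1 = classification_metrics(tp=90, fp=10, fn=15, tn=85)
print(f"accuracy={acc:.3f} precision={prec:.3f} recall={rec:.3f} F1={f1:.3f}")
```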
As InceptionResNetV2 requires a very high computational load and is time consuming, we did not manage to execute it on our local GPU server. Hence, the results presented for this model are from the initial experiments using Google Colab's GPU. For DenseNet201 and MobileNetV2, the results are from our local GPU server. The results are still comparable, since the presented results for InceptionResNetV2 use the evaluate() function, which in our evaluation worked correctly. All the tabulated results are interpreted in the following manner: the most effective result among all image resources and DTL models is shown in bold, and the best result among the DTL models for each type of image resource is underlined. Note that in some cases the differences are within the statistical (stochastic) noise.

§.§ Planetary Nebulae True vs. Rejected Classification

The evaluation outcome of the expected performance of the DTL model built during the training process is shown in Table <ref>. Comparing the four image resources and the three DTL models for classifying True versus Rejected PNe, the highest accuracy was achieved by DenseNet201 with the Quotient images, using the Training and Test sets. DenseNet201 was also the best DTL model when using Optical images (highest accuracy, precision and recall) and WISE432 images (highest accuracy and recall). Using Pan-STARRS Plain images, MobileNetV2 achieved the highest accuracy and precision, and both DenseNet201 and MobileNetV2 achieved the highest recall. Averaged over all categories, DenseNet201 achieves a slightly higher score (82%) than MobileNetV2 (81%), but the difference is not significant. A further investigation was conducted to analyze the expected classification effectiveness for each of the classes using the Test set, shown in Figure <ref>. We found that the average F1 score of the Optical images for True PNe and Rejected classification was 81%.
The classification of the True PNe and Rejected classes using Quotient images was consistently high across all DTL models. For WISE432 and Pan-STARRS, InceptionResNetV2 returned F1 scores that were higher for the Rejected class than for True PNe. For Optical and Quotient images, the F1 scores are similar for the two classes across all DTL models. The DenseNet201 model yielded the most effective algorithm, with an average F1 score of 82% for both classes.

[Figure: The trained-model evaluation F1 score for PNe True (TPN) and Rejected (RPN) classification using images from the HASH DB and Pan-STARRS Test set. The plotted F1 scores are:

Image resource       InceptionResNetV2  DenseNet201  MobileNetV2
HASH Optical TPN     0.81               0.84         0.83
HASH Optical RPN     0.81               0.84         0.82
HASH Quotient TPN    0.81               0.86         0.82
HASH Quotient RPN    0.78               0.86         0.84
HASH WISE432 TPN     0.47               0.83         0.82
HASH WISE432 RPN     0.70               0.82         0.80
Pan-STARRS TPN       0.57               0.75         0.76
Pan-STARRS RPN       0.72               0.73         0.77]

The trained model evaluation results for True and Rejected PNe classification from HASH DB and Pan-STARRS. The values are the average accuracy, precision and recall for both the True PNe and Rejected classes.
[Table: Average accuracy, precision and recall of each DTL model on the Training Set and Test Set.]

§.§ Prediction

As DenseNet201 was evaluated to be the top-scoring DTL model, we focused on this implementation for the next step. We predicted whether a particular object is a PN using the Test set for each of the available image resources (Optical, Quotient, WISE432 and Pan-STARRS Plain). The four predictions were combined with equal weights to produce a final predicted class for a particular planetary nebula. A similar method of using several diagnostics and averaging the DL outcomes was used by Zhu et al., 2014. The results are presented as a confusion matrix. When a particular planetary nebula fell into both classes with equal classification probability, we excluded it from the confusion matrix. Figure <ref> shows the combined classification probability of 210 planetary nebulae and 210 other objects in the Test set, which are then used to derive the confusion matrix. The total number of planetary nebulae/other objects that can be confidently classified (combined classification probability $\not=$ 50%) is 347. The results show that True PNe are correctly classified in 94% of cases (precision = 0.94). The Matthews correlation coefficient of the confusion matrix, $\phi$, provides an unbiased metric for the performance when the categories are of unequal size. The metric runs from $-1$ to $+1$, where 0 indicates a random result and 1 a perfect classification. We found $\phi = 0.90$, indicating a good performance. For further evaluation, we reduced the classification weight of Pan-STARRS (as it has the lowest accuracy). This removes the inconclusive 50% probability outcome, and thus allows all the PNe to be classified as either True PN or Rejected. The number of PNe correctly classified as True PNe increased from 179 to 190, leaving 20 True PNe classified as Rejected.
On the other hand, the number of PNe correctly classified as Rejected increased from 168 to 195, leaving 15 Rejected PNe classified as True PNe. This reduces $\phi$ to 0.83. Down-weighting Pan-STARRS for the inconclusive objects therefore does not improve the classification confidence.

[Figure: Combined predictions for the True PNe and Rejected classes using the DenseNet201 DTL model: (a) probability distribution histogram for the True PNe prediction (the 0%, 25%, 50%, 75% and 100% bins contain 2, 9, 20, 75 and 107 objects); (b) probability distribution histogram for the Rejected prediction (2, 6, 34, 69 and 99 objects); and (c) confusion matrix of the combined predictions derived from (a,b).]

Figure <ref> shows the classification probability for the resources separately. The black lines show the correctly classified objects and the red lines those where the assigned classification disagrees with that in the catalog. Optical, Quotient and WISE432 all show a high degree of confidence, with the probability for the correctly classified objects peaking at over 90% for both the True and Rejected PNe. Pan-STARRS also shows a good result, but the probabilities are not quite as high.
This is understandable because Pan-STARRS lacks a filter dedicated to the emission lines that are characteristic of PNe.

[Figure: The histograms of the probabilities assigned by the DenseNet201 DTL model to the PNe in the HASH Optical Test set. The x-axis shows the probability score assigning a PN to each class; the y-axis shows the number of objects per bin. The plots on the left show the True PNe and those on the right the Rejected PNe. Black lines show correctly classified objects (true positives on the right, true negatives on the left) and red lines show the misclassified objects (false negatives on the left, false positives on the right).]

§.§ Possible and Likely Planetary Nebulae Classification

The applicability of these DTL models was then tested on the Possible and Likely PNe. The same approach was used to create the confusion matrix: each image resource was used separately to classify each PN as True PNe or Rejected, and subsequently all image resources for each PN were combined to arrive at a classification. From a total of 681 Possible and Likely PNe, we were able to classify 578 PNe as either True PNe or Rejected PNe. As depicted in Figure <ref>, the Likely PNe are classified as True in 64% of 260 classifiable cases (out of 313 in total), while for Possible PNe this fraction is 41% of 318 classifiable objects (out of 368 in total).

[Figure: The confusion matrix of the combined DenseNet201 DTL model predictions for Possible and Likely PNe.]

The higher success rate for Likely PNe agrees with the original level of confidence, which is higher for `Likely PN' than for `Possible PN'. The DTL indicates that the majority of Likely PNe are indeed PNe, but that the majority of Possible PNe are not.

§.§ Planetary Nebulae Morphology Classification

In this section, we discuss the evaluation outcome of the built models for PNe morphology classification. We start with the overall results of InceptionResNetV2, DenseNet201 and MobileNetV2 for images from HASH DB and Pan-STARRS.
The results are compared between the Training and Test sets. Then, we present our findings for the classification of the Bipolar, Elliptical and Round morphologies. The PNe morphology classification was carried out using images of the True PNe from HASH and Pan-STARRS. We experimented with seven types of image resource: Optical, Quotient and WISE432 from HASH DB, and the Plain, Quotient, No-star and Mask images derived from Pan-STARRS. Based on Table <ref>, training the DTL models with InceptionResNetV2 produced models with 100% accuracy. However, the Test set does not achieve this: comparing the training results to the test results, the highest average accuracy, precision and recall in classifying the three types of PNe morphology was 71%, obtained using MobileNetV2 with the Pan-STARRS Plain images. We acknowledge the possibility of overfitting when 100% accuracy was obtained using InceptionResNetV2 (executed using TensorFlow and Keras); the Test set gives a better indication of the success rate. Comparing the model evaluation outcomes obtained using Pan-STARRS images, we found that MobileNetV2 was the best overall performing DTL model. As Pan-STARRS Plain images can be considered the same type of image resource as HASH Optical, the results confirm that images from Pan-STARRS can be a good alternative. However, HASH Quotient images performed better than Pan-STARRS Quotient images, possibly because the Pan-STARRS filters are not optimized for the PN emission lines.

Average accuracy, precision and recall for planetary nebula morphology classification using image resources from HASH DB and Pan-STARRS.

[Table: Average accuracy, precision and recall of each DTL model on the Training Set and Test Set.]

We used four different Pan-STARRS resources. The three additional resources experimented with different ways to address the star-nebula confusion. However, this did not result in a notable improvement.
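Since the morphology task is multi-class, the F1 scores reported per morphology are one-vs-rest quantities: each class is treated in turn as the positive class. A minimal sketch with toy labels (B = Bipolar, E = Elliptical, R = Round; not real data):

```python
def per_class_f1(y_true, y_pred, classes):
    """One-vs-rest F1 score for each class of a multi-class problem."""
    scores = {}
    for c in classes:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        denom = precision + recall
        scores[c] = 2 * precision * recall / denom if denom else 0.0
    return scores

# Toy labels: both Bipolar objects are recovered, while one Elliptical
# and one Round object are confused with each other.
y_true = ["B", "B", "E", "E", "R", "R"]
y_pred = ["B", "B", "E", "R", "E", "R"]
print(per_class_f1(y_true, y_pred, ["B", "E", "R"]))
```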
§.§.§ Classification Accuracy of Bipolar, Round and Elliptical Planetary Nebulae

From this section onward, we focus the discussion on the classification of PNe morphologies based on the F1 scores for the images from the HASH DB and Pan-STARRS Test set. The results for Bipolar PNe classification, depicted in Figure <ref>, demonstrate that InceptionResNetV2 was the most effective DTL model for classifying the Bipolar PNe, with an average F1 score of 56%, followed by MobileNetV2 with an average F1 score of 49% and DenseNet201 with an average F1 score of 47%. This hides large variations. All three models did reasonably well on the HASH Optical images. DenseNet201 was best for HASH Optical but worst for Pan-STARRS Plain. MobileNetV2 was more consistent but failed on the Pan-STARRS resources except for Pan-STARRS Plain and Mask. InceptionResNetV2 was the most consistent, and it was notably the only routine able to handle the Pan-STARRS No-star images. DenseNet201 was not able to classify the Pan-STARRS Plain Bipolar PNe images: out of the 16 test images, only three were correctly classified, and the majority of the remainder were classified as Round PNe. Figure <ref> shows the results for classifying the Elliptical PNe. This morphological type was challenging: the cumulative average F1 score was the lowest among all of the PNe morphologies. Among the three DTL models, DenseNet201 was superior in classifying the Elliptical PNe, with an average F1 score of 38%. MobileNetV2 gave an average F1 score of 36% and InceptionResNetV2 of 27%. For InceptionResNetV2, the HASH DB images were not an effective image resource for classifying Elliptical PNe. The Pan-STARRS resources gave better results, with the Mask images being the most consistent between the three models. The other Pan-STARRS resources gave mixed results. InceptionResNetV2 wrongly classified most of the Pan-STARRS No-star Elliptical PNe images as Bipolar PNe.
MobileNetV2 did a bit better here, but failed on the Pan-STARRS Quotient resource, where most Elliptical PNe images were classified as Bipolar PNe.

[Figure: Bipolar planetary nebula morphology classification F1 scores using the Test set from HASH DB and Pan-STARRS. The plotted F1 scores are:

Image resource       InceptionResNetV2  DenseNet201  MobileNetV2
HASH Optical         0.40               0.64         0.54
HASH Quotient        0.61               0.50         0.43
HASH WISE432         0.49               0.55         0.43
Pan-STARRS Plain     0.58               0.25         0.76
Pan-STARRS Quotient  0.52               0.35         0.39
Pan-STARRS No-star   0.57               0.47         0.36
Pan-STARRS Mask      0.42               0.56         0.52]
[Figure: Elliptical planetary nebula morphology classification F1 scores using the Test set from HASH DB and Pan-STARRS. The plotted F1 scores are:

Image resource       InceptionResNetV2  DenseNet201  MobileNetV2
HASH Optical         0.11               0.47         0.44
HASH Quotient        0.21               0.45         0.16
HASH WISE432         0.26               0.42         0.28
Pan-STARRS Plain     0.50               0.29         0.52
Pan-STARRS Quotient  0.28               0.22         0.31
Pan-STARRS No-star   0.10               0.36         0.41
Pan-STARRS Mask      0.43               0.47         0.41]

In classifying the Round PNe (Figure <ref>), the DenseNet201 DTL model was the best, with an average F1 score of 43% and a reasonably consistent performance. The second best model was MobileNetV2, with an average F1 score of 37%, and lastly InceptionResNetV2 with an average F1 score of only 29%. All three models gave the best result for the HASH Optical images.
[Figure: Round planetary nebula morphology classification F1 scores using the Test set from HASH DB and Pan-STARRS. The plotted F1 scores are:

Image resource       InceptionResNetV2  DenseNet201  MobileNetV2
HASH Optical         0.31               0.52         0.63
HASH Quotient        0.50               0.37         0.27
HASH WISE432         0.00               0.40         0.42
Pan-STARRS Plain     0.31               0.50         0.44
Pan-STARRS Quotient  0.24               0.40         0.21
Pan-STARRS No-star   0.39               0.47         0.47
Pan-STARRS Mask      0.29               0.36         0.21]

Pan-STARRS Mask was particularly poor for this morphological class, perhaps because the masks were themselves round. For this resource, both InceptionResNetV2 and MobileNetV2 classified most of the Round PNe images as Elliptical PNe. The DenseNet201 DTL model did best for this resource. The InceptionResNetV2 model had a problem with the WISE432 images: visual inspection of the classification shows that none of the HASH WISE432 Round PNe images were correctly classified, with most of the classifications falling into the Bipolar PNe category. Here, it should be noted that the dust emission measured by WISE may have a different morphological distribution than the gas emission measured by the other resources.
§.§ Prediction of Morphologies

Figure <ref> shows an example confusion matrix, for the particular case of the DenseNet201 model and the HASH Optical images. This combination was chosen for having the highest F1 score in Table <ref> and Figures <ref>–<ref>. The confusion matrix indicates a reasonable result, with about half of the objects correctly classified. The misclassified objects do not show a strong bias. The best-identified category is that of the Bipolar nebulae, where two thirds are correctly classified. For Round and Elliptical nebulae about half are correct, with most of the confusion between these two categories. The conclusion is that the DTL has good success at separating Bipolar PNe from the other two categories, but it is less successful at distinguishing Round from Elliptical nebulae. If we combine Round and Elliptical into one group, then the Matthews correlation coefficient becomes $\phi = 0.45$. More accurate morphological classification may require higher-quality image resources, especially to better separate Round from Elliptical PNe. Additional training of the DL model may also improve the results. However, the current data show that the DTL models are able to perform morphological classification using the Optical images.

[Figure: The confusion matrix of PNe morphology using HASH Optical with the DenseNet201 DTL model predictions.]

§ DISCUSSION

Only a few studies have attempted ML aimed at PNe [Faundez-Abans et al., 1996, Akras et al., 2019]. PN classification is a difficult problem, as PNe have a large variety of appearances, can easily be confused with other types of objects (HII regions and galaxies, for instance) and are often faint objects located in dense star fields. We also did not use the highest quality data available, but used general-purpose surveys not optimized for PNe. In addition, we used transfer learning and did not fine-tune the parameters. The high success rate, with a Matthews correlation coefficient of 90%, was therefore not necessarily expected.
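The Matthews correlation coefficient quoted here follows directly from the binary confusion-matrix counts (a minimal sketch; the counts in the example are illustrative round numbers chosen to give $\phi = 0.90$, not our actual results):

```python
from math import sqrt

def matthews_phi(tp, fp, fn, tn):
    """Matthews correlation coefficient of a binary confusion matrix."""
    denom = sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom else 0.0

# Symmetric illustrative example.
print(matthews_phi(tp=95, fp=5, fn=5, tn=95))
```

Unlike accuracy, $\phi$ stays informative when the two classes have very different sizes, which is why we report it alongside the other metrics.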
Classifying the morphologies was a more difficult task, but the results are promising. Of the architectures that were tested, DenseNet201 was found to be the most consistent performer. InceptionResNetV2 also worked well, in some cases better than DenseNet201, but with variable results and at a high computational cost, while MobileNetV2 was also acceptable but fell short in some tests on the morphology. Five other architectures were also tested (AlexNet, VGG-16, VGG-19, ResNet50 and NASNetMobile) but were found not to be optimal for this particular problem. Previous studies of ImageNet by Keras [23] have indicated that InceptionResNetV2, DenseNet201 and MobileNetV2 are among the best DL algorithms for image classification. We found that this also holds for astronomical images. For the implementation, we found that the Keras routine predict() produced results significantly discrepant from those of evaluate(), for unclear reasons, and it could not be used. The MATLAB implementation produced consistent results. The architectures were originally trained on the large ImageNet dataset, and we did not optimize the parameters for our images. This transfer of learning is a limitation: it is plausible that results will improve with a future training step that optimizes the feature extraction. Each algorithm also requires images of a specific maximum size, which is much smaller than typical astronomical images. This required some loss of resolution in some cases. Even with these drawbacks, we found strong results for the current sample. Combining four different diagnostic image resources, the Matthews correlation coefficient is an impressive 90%. As a check, we inspected the images of objects that are cataloged as Rejected PNe, but for which all four resources returned a classification as True PN. The objects are listed in Table <ref>. Of the eight objects in the table, in five cases the available images do not suggest the target to be a PN.
The fields are crowded, with multiple stars, infrared sources and, in a few cases, some extended emission, but the different tracers do not appear to centre on the same source. In three cases, the objects could be PNe, in two cases also with an indication from a spectrum. The third target shows an extended nebula. These three targets are worth further investigation. We also used the DL algorithm to classify the samples of Possible PNe and Likely PNe. About half were classified as True PN. The ratio was twice as high among Likely PNe as among Possible PNe. This agrees with expectations, as the level of confidence is higher for Likely PNe. This result is a good indication that the DTL is producing reasonable results.

Rejected objects classified here as True PN. In the last column, `p' indicates a potential PN while `n' suggests a negative classification.

PNG Number     Name                       Visual Inspection
359.0+02.8     Al 2-G                     p
001.0$-$02.6   Sa 3-104                   p
002.5$-$02.6   MPA 1802$-$2803            n
001.8$-$05.3   PM 1-216                   n
002.4+01.4     [DSH2001] 520-9            n
018.6$-$02.7   PN PM 1-243                n
003.0$-$02.8   PHR J1803$-$2748           p
140.0+01.7     IPHASX J031434.2+594856    n

The second part of this work focused on the morphological classification of the True PNe. The results are best illustrated using Figure <ref>, albeit for only one resource. The correct classification was found in half of the cases (for three possible classifications). The success rate was similar for Bipolar, Elliptical and Round. However, it was more difficult to distinguish Round from Elliptical nebulae. Although this is a reasonable result, morphological classification would benefit from better images than those available from the surveys we used. For future research, there are several aspects that could improve on the current result. The sample of non-PNe could be improved, with clearly identified object types. This would allow classifying Rejected PNe into separate groups of objects, rather than a mixed bag of `rejects'.
A feedback step to optimize the learning for the specific images is also likely to improve the success rate. This includes K-fold cross-validation and fine-tuning of the related hyper-parameters and model-parameters. Finally, a method that combines the diagnostics into a single training set, rather than analyzing them separately, may give even better results. This is also closer to how PNe are normally classified [Cohen et al., 2011, Fragkou et al., 2018]. This research has shown that DL can identify and classify PNe. This first investigation is very promising and provides clear pathways for future research. PNe are among the most difficult problems for automated classification. This is therefore an important step in the application of DL to complex, wide-field astronomical images. Conceptualization, D.N.F.A.I. and A.A.Z.; methodology, D.N.F.A.I., A.A.Z., R.A. and G.A.F.; software, D.N.F.A.I., I.M., A.H.F. and J.A.; validation, D.N.F.A.I., I.M. and A.A.Z.; formal analysis, D.N.F.A.I., A.A.Z. and I.M.; investigation, D.N.F.A.I., A.A.Z. and I.M.; resources, D.N.F.A.I. and A.A.Z.; data curation, D.N.F.A.I. and I.M.; writing—original draft preparation, D.N.F.A.I., A.A.Z. and I.M.; writing—review and editing, D.N.F.A.I., A.A.Z., I.M., R.A., G.A.F., A.H.F. and J.A.; visualization, D.N.F.A.I., A.A.Z. and I.M.; supervision, A.Z.; project administration, A.A.Z. and D.N.F.A.I.; and funding acquisition, A.A.Z., D.N.F.A.I., R.A. and G.A.F. All authors have read and agreed to the published version of the manuscript. This research was funded under the Newton program for the project entitled "Deep Learning for Classification of Astronomical Archives" under grant UK Science and Technology Facilities Council: ST/R006768/1 and the Newton-Ungku Omar Fund: F08/STFC/1792/2018.
This research would not have been possible without the exceptional support from the Ministry of Higher Education Malaysia, UK Science and Technology Facilities Council, Universiti Malaysia Sarawak, Universiti Sains Malaysia and the University of Manchester. This research has made use of the HASH PN database at hashpn.space and the Pan-STARRS1 Surveys (PS1). The PS1 surveys and the PS1 public science archive have been made possible through contributions by the Institute for Astronomy, the University of Hawaii, the Pan-STARRS Project Office, the Max-Planck Society and its participating institutes (the Max Planck Institute for Astronomy, Heidelberg and the Max Planck Institute for Extraterrestrial Physics), Garching, The Johns Hopkins University, Durham University, the University of Edinburgh, the Queen's University Belfast, the Harvard-Smithsonian Center for Astrophysics, the Las Cumbres Observatory Global Telescope Network Incorporated, the National Central University of Taiwan, the Space Telescope Science Institute, the National Aeronautics and Space Administration under Grant No. NNX08AR22G issued through the Planetary Science Division of the NASA Science Mission Directorate, the National Science Foundation Grant No. AST-1238877, the University of Maryland, Eotvos Lorand University (ELTE), the Los Alamos National Laboratory and the Gordon and Betty Moore Foundation. The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results. § HASH DB QUERY Query submitted to obtain True PNe and Rejected PNe alongside other objects from HASH DB. 
Query options by sample:

| Option | True PNe | Rejected PNe and Other Objects |
|---|---|---|
| Status | True PN | Check all except True PN, Likely PN, Possible PN and New Candidates |
| Morphology | Check all | Uncheck all |
| Galaxy | Galactic PNe | Check all except Galactic PNe |
| Catalogs | Uncheck all | Uncheck all |
| Origin | Uncheck all | Uncheck all |
| Spectra | Uncheck all | Uncheck all |
| Checks | Uncheck all | Uncheck all |
| User Samples | Uncheck all | Uncheck all |

[Parker, 2020] Parker, Q.A. Planetary Nebulae and How to Find Them: A Review. arXiv 2020, arXiv:2012.05621. [Parker et al., 2016] Parker, Q.A.; Bojičić, I.S.; Frew, D.J. HASH: The Hong Kong/AAO/Strasbourg Hα Planetary Nebula Database. J. Phys. Conf. Ser. 2016, 728, 032008. [Balick and Frank, 2002] Balick, B.; Frank, A. Shapes and Shaping of Planetary Nebulae. Annu. Rev. Astron. Astrophys. 2002, 40, 439–486, doi:10.1146/annurev.astro.40.060401.093849. [Shaw, 2011] Shaw, R.A. Shape, Structure, and Morphology in Planetary Nebulae. Proc. Int. Astron. Union 2011, 7, 156–163, doi:10.1017/S1743921312010873. [Kwok, 2018] Kwok, S. On the Origin of Morphological Structures of Planetary Nebulae. Galaxies 2018, 6, 66, doi:10.3390/galaxies6030066. [Flewelling et al., 2016] Flewelling, H.A.; Magnier, E.A.; Chambers, K.C.; Heasley, J.N.; Holmberg, C.; Huber, M.E.; Sweeney, W.; Waters, C.Z.; Calamida, A.; Casertano, S.; et al. The Pan-STARRS1 Database and Data Products. arXiv 2016, arXiv:1612.05243. [Chambers et al., 2019] Chambers, K.C.; Magnier, E.A.; Metcalfe, N.; Flewelling, H.A.; Huber, M.E.; Waters, C.Z.; Denneau, L.; Draper, P.W.; Farrow, D.; Finkbeiner, D.P.; et al. The Pan-STARRS1 Surveys. arXiv 2019, arXiv:astro-ph.IM/1612.05560. [Faundez-Abans et al., 1996] Faundez-Abans, M.; Ormeno, M.I.; de Oliveira-Abans, M. Classification of Planetary Nebulae by Cluster Analysis and Artificial Neural Networks. AAPS 1996, 116, 395–402. [Akras et al., 2019] Akras, S.; Guzman-Ramirez, L.; Gonçalves, D.R. Compact Planetary Nebulae: Improved IR Diagnostic Criteria Based on Classification Tree Modelling.
Mon. Not. R. Astron. Soc. 2019, 488, 3238–3250, doi:10.1093/mnras/stz1911. [Fluke and Jacobs, 2020] Fluke, C.J.; Jacobs, C. Surveying the Reach and Maturity of Machine Learning and Artificial Intelligence in Astronomy. Wiley Interdiscip. Rev. Data Min. Knowl. Discov. 2020, 10, e1349. [Barchi et al., 2020] Barchi, P.; de Carvalho, R.; Rosa, R.; Sautter, R.; Soares-Santos, M.; Marques, B.; Clua, E.; Gonçalves, T.; de Sá-Freitas, C.; Moura, T. Machine and Deep Learning Applied to Galaxy Morphology-A Comparative Study. Astron. Comput. 2020, 30, 100334, doi:10.1016/j.ascom.2019.100334. [Beckwith et al., 2006] Beckwith, S.V.W.; Stiavelli, M.; Koekemoer, A.M.; Caldwell, J.A.R.; Ferguson, H.C.; Hook, R.; Lucas, R.A.; Bergeron, L.E.; Corbin, M.; Jogee, S.; et al. The Hubble Ultra Deep Field. Astron. J. 2006, 132, 1729–1755, doi:10.1086/507302. [Gavali and Banu, 2019] Gavali, P.; Banu, J.S. Chapter 6 - Deep Convolutional Neural Network for Image Classification on CUDA Platform. In Deep Learning and Parallel Computing Environment for Bioengineering Systems; Arun, K.S., Ed.; Academic Press: Cambridge, MA, USA, 2019; pp. 99–122. [Pan and Yang, 2009] Pan, S.J.; Yang, Q. A Survey on Transfer Learning. IEEE Trans. Knowl. Data Eng. 2009, 22, 1345–1359, doi:10.1109/tkde.2009.191. [Hambly et al., 2001] Hambly, N.C.; MacGillivray, H.T.; Read, M.A.; Tritton, S.B.; Thomson, E.B.; Kelly, B.D.; Morgan, D.H.; Smith, R.E.; Driver, S.P.; Williamson, J.; et al. The SuperCOSMOS Sky Survey-I. Introduction and description. Mon. Not. R. Astron. Soc. 2001, 326, 1279–1294, doi:10.1111/j.1365-2966.2001.04660.x. [Parker et al., 2005] Parker, Q.A.; Phillipps, S.; Pierce, M.J.; Hartley, M.; Hambly, N.C.; Read, M.A.; MacGillivray, H.T.; Tritton, S.B.; Cass, C.P.; Cannon, R.D.; et al. The AAO/UKST SuperCOSMOS Hα Survey. Mon. Not. R. Astron. Soc. 2005, 362, 689–710, doi:10.1111/j.1365-2966.2005.09350.x.
[Drew et al., 2014] Drew, J.E.; Gonzalez-Solares, E.; Greimel, R.; Irwin, M.J.; Küpcü Yoldas, A.; Lewis, J.; Barentsen, G.; Eislöffel, J.; Farnhill, H.J.; Martin, W.E.; et al. The VST Photometric Hα Survey of the Southern Galactic Plane and Bulge (VPHAS+). Mon. Not. R. Astron. Soc. 2014, 440, 2036–2058, doi:10.1093/mnras/stu394. [Wright et al., 2010] Wright, E.L.; Eisenhardt, P.R.M.; Mainzer, A.K.; Ressler, M.E.; Cutri, R.M.; Jarrett, T.; Kirkpatrick, J.D.; Padgett, D.; McMillan, R.S.; Skrutskie, M.; et al. The Wide-field Infrared Survey Explorer (WISE): Mission Description and Initial On-orbit Performance. Astron. J. 2010, 140, 1868–1881, doi:10.1088/0004-6256/140/6/1868. [Feder et al., 2020] Feder, R.M.; Portillo, S.K.N.; Daylan, T.; Finkbeiner, D. Multiband Probabilistic Cataloging: A Joint Fitting Approach to Point-source Detection and Deblending. Astron. J. 2020, 159, 163, doi:10.3847/1538-3881/ab74cf. [Ritter and Parker, 2020] Ritter, A.; Parker, Q.A. A Preferred Orientation Angle for Bipolar Planetary Nebulae. Galaxies 2020, 8, 34, doi:10.3390/galaxies8020034. [Corradi and Schwarz, 1995] Corradi, R.L.M.; Schwarz, H.E. Morphological Populations of Planetary Nebulae: Which Progenitors? I. Comparative properties of bipolar nebulae. Astron. Astrophys. 1995, 293, 871–888. [Russakovsky et al., 2015] Russakovsky, O.; Deng, J.; Su, H.; Krause, J.; Satheesh, S.; Ma, S.; Huang, Z.; Karpathy, A.; Khosla, A.; Bernstein, M.; et al. ImageNet Large Scale Visual Recognition Challenge. Int. J. Comput. Vis. 2015, 115, 211–252, doi:10.1007/s11263-015-0816-y. [Keras, 2020] Keras Applications. Available online: <https://keras.io/api/applications/> (accessed on 20 May 2020). [Krizhevsky et al., 2012] Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet Classification with Deep Convolutional Neural Networks.
In Proceedings of the 25th International Conference on Neural Information Processing Systems-Volume 1, Lake Tahoe, Nevada, USA, 3-8 Dec 2012; Curran Associates Inc.: Red Hook, NY, USA, 2012; pp. 1097–1105. [Simonyan and Zisserman, 2015] Simonyan, K.; Zisserman, A. Very Deep Convolutional Networks for Large-Scale Image Recognition. In Proceedings of the International Conference on Learning Representations, San Diego, CA, USA, 7-9 May 2015. [He et al., 2016] He, K.; Zhang, X.; Ren, S.; Sun, J. Deep Residual Learning for Image Recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, Nevada, USA, 26 Jun - 1 Jul 2016; pp. 770–778. [Zoph et al., 2018] Zoph, B.; Vasudevan, V.; Shlens, J.; Le, Q.V. Learning Transferable Architectures for Scalable Image Recognition. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, Utah, USA, 18-22 Jun 2018; pp. 8697–8710. [Szegedy et al., 2017] Szegedy, C.; Ioffe, S.; Vanhoucke, V.; Alemi, A.A. Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning. In Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence, San Francisco, CA, USA, 4-9 Feb 2017; AAAI Press: Palo Alto, CA, USA, 2017; pp. 4278–4284. [Huang et al., 2017] Huang, G.; Liu, Z.; Van Der Maaten, L.; Weinberger, K.Q. Densely Connected Convolutional Networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21-26 Jul 2017; pp. 2261–2269. [Howard et al., 2017] Howard, A.G.; Zhu, M.; Chen, B.; Kalenichenko, D.; Wang, W.; Weyand, T.; Andreetto, M.; Adam, H. MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications. arXiv 2017, arXiv:1704.04861. [Carneiro et al., 2018] Carneiro, T.; Medeiros Da NóBrega, R.V.; Nepomuceno, T.; Bian, G.; De Albuquerque, V.H.C.; Filho, P.P.R. Performance Analysis of Google Colaboratory as a Tool for Accelerating Deep Learning Applications.
IEEE Access 2018, 6, 61677–61685. [Abadi et al., 2015] Abadi, M.; Agarwal, A.; Barham, P.; Brevdo, E.; Chen, Z.; Citro, C.; Corrado, G.S.; Davis, A.; Dean, J.; Devin, M.; et al. TensorFlow: Large-Scale Machine Learning on Heterogeneous Systems, 2015. Software available from tensorflow.org. Available online: <https://arxiv.org/pdf/1603.04467.pdf> (accessed on 15 Jan 2020). [Sokolova and Lapalme, 2009] Sokolova, M.; Lapalme, G. A systematic analysis of performance measures for classification tasks. Inf. Process. Manag. 2009, 45, 427–437, doi:10.1016/j.ipm.2009.03.002. [Baeza-Yates and Ribeiro-Neto, 1999] Baeza-Yates, R.A.; Ribeiro-Neto, B. Modern Information Retrieval; Addison-Wesley Longman Publishing Co., Inc.: Boston, MA, USA, 1999. [Zhu et al., 2014] Zhu, W.W.; Berndsen, A.; Madsen, E.C.; Tan, M.; Stairs, I.H.; Brazier, A.; Lazarus, P.; Lynch, R.; Scholz, P.; Stovall, K.; et al. Searching for Pulsars Using Image Pattern Recognition. Astrophys. J. 2014, 781, 117, doi:10.1088/0004-637X/781/2/117. [Cohen et al., 2011] Cohen, M.; Parker, Q.A.; Green, A.J.; Miszalski, B.; Frew, D.; Murphy, T. Multiwavelength diagnostic properties of Galactic planetary nebulae detected by the GLIMPSE-I. Mon. Not. R. Astron. Soc. 2011, 413, 514–542, doi:10.1111/j.1365-2966.2010.18157.x. [Fragkou et al., 2018] Fragkou, V.; Parker, Q.A.; Bojičić, I.S.; Aksaker, N. New Galactic Planetary nebulae selected by radio and multiwavelength Mon. Not. R. Astron. Soc. 2018, 480, 2916–2928, doi:10.1093/mnras/sty1977.
# Unveiling the three-dimensional spin texture of skyrmion tubes Daniel Wolf Leibniz Institute for Solid State and Materials Research, IFW Dresden, Helmholtzstr. 20, 01069 Dresden, Germany Sebastian Schneider Dresden Center for Nanoanalysis, cfaed, Technische Universität Dresden, 01069 Dresden, Germany Leibniz Institute for Solid State and Materials Research, IFW Dresden, Helmholtzstr. 20, 01069 Dresden, Germany Ulrich K. Rößler Leibniz Institute for Solid State and Materials Research, IFW Dresden, Helmholtzstr. 20, 01069 Dresden, Germany András Kovács Ernst Ruska-Centre for Microscopy and Spectroscopy with Electrons and Peter Grünberg Institute, Forschungszentrum Jülich, 52425 Jülich, Germany Marcus Schmidt Department Chemical Metal Science, Max Planck Institute for Chemical Physics of Solids, Nöthnitzer Str. 40, 01187 Dresden, Germany Rafal E. Dunin-Borkowski Ernst Ruska-Centre for Microscopy and Spectroscopy with Electrons and Peter Grünberg Institute, Forschungszentrum Jülich, 52425 Jülich, Germany Bernd Büchner Leibniz Institute for Solid State and Materials Research, IFW Dresden, Helmholtzstr. 20, 01069 Dresden, Germany Institute of Solid State and Materials Physics, TU Dresden, 01069 Dresden, Germany Bernd Rellinghaus Dresden Center for Nanoanalysis, cfaed, Technische Universität Dresden, 01069 Dresden, Germany Axel Lubk Leibniz Institute for Solid State and Materials Research, IFW Dresden, Helmholtzstr. 20, 01069 Dresden, Germany Institute of Solid State and Materials Physics, TU Dresden, 01069 Dresden, Germany Corresponding author. Magnetic skyrmions 1 are stable topological solitons with complex non-coplanar spin structures. Their nanoscopic size and the low electric currents required to initiate and control their motion have opened a new field of research, skyrmionics, which aims at using skyrmions as information carriers for data storage and manipulation 2, 3, 4, 5.
Recent advances in skyrmionics call for a thorough understanding of the detailed three-dimensional spin texture of a skyrmion 6, 7, 8, 9, 10, 11, including skyrmion-skyrmion interactions and their coupling to surfaces and interfaces. These properties crucially affect application-related aspects such as the stability and mobility of skyrmions in confined structures. To date, however, experimental techniques to measure the three-dimensional (3D) spin texture with nanometer resolution are largely missing. We therefore adapt holographic vector field electron tomography 12 to the problem and report on the first quantitative reconstruction of the 3D spin texture of skyrmions with sub-10 nanometer resolution. The reconstructed textures reveal a variety of previously unseen local deviations from a homogeneous Bloch character within the skyrmion tubes (SkTs), details of the collapse of the skyrmion texture at surfaces, and a correlated modulation of the SkTs in FeGe along their tube axes. The quantitative 3D data of the magnetic induction also make it possible to experimentally confirm some principles of skyrmion formation by deriving spatially resolved maps of the magnetic energy density across these magnetic solitons.

## Introduction

The unique features of magnetic skyrmions such as their competing magnetic interactions, topological structure, or dynamics are of great interest in both fundamental and applied physics. As multidimensional solitons, these particle-like states are localized in two dimensions, which requires a definite stabilization mechanism provided by additional frustrating magnetic couplings 13. As a consequence of their solitonic character, they can condense into thermodynamically stable phases, in particular densely packed lattices under applied fields 1. The stabilization mechanism of these phases and their formation principles are ruled by effective skyrmion-skyrmion interactions 13.
However, the morphology of these phases in the phase diagrams of real materials is dictated by the problem of condensation of two-dimensional periodic arrays, as in vortex lattices of type-II superconductors 1. In particular, the field-temperature phase diagram may hold various transitions between different condensed phases of skyrmions 14, 15. Very recently, some studies have addressed this problem for skyrmionic phases theoretically 16 and in experiments 17. In three-dimensional bulk materials or thicker films, the skyrmions are extended string-like objects; in the simplest configuration they continue homogeneously as skyrmion tubes (SkTs), preserving translational invariance along their axis. In magnetic nanoobjects, however, the influence of surfaces will affect the formation, shape, and interaction of skyrmions and the stabilization of condensed skyrmionic phases. Already from the earliest observations of skyrmionic phases in films of chiral helimagnets 18, 19, it is known that their phase diagrams deviate markedly from those of bulk materials. It has been shown that 3D surface twists can stabilize SkTs in thin films 6, 7, 20, 21 and that 3D modulations of SkTs embedded in a conical host phase may introduce an attractive interaction between these tubes 22. 3D SkT modulations also affect emergent electric and magnetic fields acting on spin-polarized electrons and magnons 23, which results in unusual transport phenomena 24 on top of the "normal" topological Hall effect in static and current-driven skyrmion crystals 25, 26, 27, 28.

Figure 1: a, Holographic vector-field electron tomography (VFET) of skyrmions in FeGe. A needle-shaped sample is placed above an out-of-plane magnetized $\mathrm{Sm_{2}Co_{17}}$ ring in a liquid-nitrogen-cooled TEM holder. The low temperature and the remanent stray field of the ring stabilize skyrmion tubes and their orientation with respect to the holder.
A tilt series of 2D phase images of the transmitted electron wave is obtained from off-axis electron holograms recorded around the $x$ (left) and $y$ (right) axes. Subsequently, the $x$ and $y$ components of the magnetic induction $\boldsymbol{B}$ are tomographically reconstructed from the corresponding phase tilt series. Solving $\mathrm{div}\boldsymbol{B}=0$ finally yields the $z$ component, hence the full 3D vector of $\boldsymbol{B}(x,y,z)$. b, 3D map of the resulting magnetic induction. For clarity, only the experimental $B_{x}$ and $B_{y}$ components are shown here.

Similarly, e.g., in hybrid chiral ferromagnet/superconductor systems 29, the functionalization of skyrmions is modified by 3D modulations of the SkT at the interface. Finally, the observation of unusually strong topological quantum Hall effects 30 may indicate the presence of abrupt magnetization changes such as in magnetic bobbers (incorporating a Bloch point) in surface regions 31. Notwithstanding the importance of 3D effects, neither exact 3D models of SkTs in realistic confined geometries nor high-resolution experimental mappings of their spin texture are currently available, although effects of confinement 19, 6, 20 and of anisotropies 32 have received attention. This lack of data prevents a deeper understanding of skyrmion lattice defects 11, 33, the influence of surface anisotropies, curvatures 34, and real-structure effects in the modulation of 3D skyrmionic spin textures. Among the various high-resolution magnetic imaging techniques, transmission electron microscopy (TEM) based electron holography (EH) 35, 36, 37, 12 and X-ray magnetic circular dichroism (XMCD) 38, 39, 40 can be conducted in a tomographic way to determine the 3D magnetic induction, $\boldsymbol{B}$, or magnetization, $\boldsymbol{M}$, of a sample, respectively.
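The completion step named in the Fig. 1 caption — solving $\mathrm{div}\,\boldsymbol{B}=0$ for the $z$ component once $B_x$ and $B_y$ are known — can be sketched with finite differences: $\partial_z B_z = -(\partial_x B_x + \partial_y B_y)$ is integrated along $z$. The uniform grid and the choice $B_z=0$ on the bottom face are assumptions of this illustration, not stated in the text:

```python
import numpy as np

def complete_bz(bx, by, dx=1.0, dy=1.0, dz=1.0):
    """Recover B_z on a regular (nx, ny, nz) grid from div B = 0 by
    integrating -(dBx/dx + dBy/dy) along z, with B_z(z=0) = 0 assumed."""
    div_xy = np.gradient(bx, dx, axis=0) + np.gradient(by, dy, axis=1)
    bz = np.zeros_like(bx)
    # cumulative trapezoidal integration along the z axis
    bz[:, :, 1:] = -np.cumsum(
        0.5 * (div_xy[:, :, 1:] + div_xy[:, :, :-1]) * dz, axis=2)
    return bz

# Check on an analytically divergence-free field:
# B = (x, y, -2z) has div B = 0, so the completion should return B_z = -2z.
n = 16
x, y, z = np.meshgrid(np.arange(n), np.arange(n), np.arange(n), indexing="ij")
bz = complete_bz(x.astype(float), y.astype(float))
print(np.abs(bz - (-2.0 * z)).max())  # exact here, since the field is linear
```

Note that, exactly as remarked later in the text, the differentiation amplifies noise in measured data, so real reconstructions need regularization that this sketch omits.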
In this work, we employ holographic vector-field electron tomography (VFET) as it provides a higher spatial resolution (below 10 nm) than X-ray based methods, which is crucial for resolving the details of magnetic textures in nanomagnetic structures such as vortices 12 or skyrmions. The limited space in a high-resolution TEM instrument, however, has so far prevented any in-situ application of rotatable (out-of-plane) magnetic fields to a cryogenically cooled sample, which is essential for the acquisition of tomographic tilt series of electron holograms from a sample that needs to be magnetically stabilized. This limitation impedes the measurement and 3D reconstruction of spin textures for a large class of materials with a metastable skyrmion phase at non-zero applied fields below room temperature (e.g., many isotropic helimagnets). For the present experiments, we have therefore devised a setup that overcomes these obstacles.

## High-resolution vector-field tomography in an external magnetic field

Figure 2: Spin texture of skyrmion tubes in a FeGe needle: a, Volume rendering (colored) of the in-plane ($x$, $y$) components of the reconstructed magnetic induction $\boldsymbol{B}$ and iso-surface of the mean inner potential highlighting the sample shape (grey, bottom half only). Rectangles indicate cross-sections, whose details are shown in c and d. b, Planar $x$-$y$ cross-section. Color and size of the arrows indicate the direction and magnitude of the in-plane component of $\boldsymbol{B}$, respectively. c and d, direction of $\boldsymbol{B}$ (arrow orientation) and magnitude (color) of $B_{x}$ and $B_{y}$ in $y$-$z$ and $x$-$z$ cross-sections through SkT 3. Here, the SkT was artificially aligned along its $z$ axis (see text for details).

The tomographic investigation of the magnetic texture of skyrmions was conducted on a sample of the isotropic helimagnet FeGe with $\mathrm{P2_{1}3}$ structure (B20 phase).
The material was chosen since FeGe is a well-studied, archetypal skyrmion host with a rather large skyrmion phase pocket in the phase diagram spanned by temperature and external field 18, 41. A needle-shaped sample (cf. Supplementary Sect. LABEL:suppl:note:geometry) was cut from a FeGe single crystal by focused ion beam (FIB) milling, including ion polishing to restrict the ion-beam damage to a surface layer of some nanometers (see Methods). The dimensions and shape of the needle ensure that, even at high tilt angles, the sample is fully electron-transparent and the obtained holographic projections cover the same sample region. Additionally, the elongated shape has some technological significance for anticipated spintronic devices such as racetrack memories 42. In order to (i) adjust the skyrmion phase below the Curie temperature and (ii) stabilize the orientation of the skyrmion lattice with respect to the TEM holder, the FeGe needle was steadily exposed to an out-of-plane magnetic field of $\mu_{0}H_{\mathrm{ext}}\approx 170\,\mathrm{mT}$. The field was provided by the remanent stray field of a ring-shaped $\rm Sm_{2}Co_{17}$ hard magnet that was placed under the sample in a tomography-adapted liquid-nitrogen TEM cooling holder (cf. Fig. 1a). The field is virtually homogeneous across the micron-sized sample (cf. Supplementary Sect. LABEL:suppl:note:ring). Using this special setup, we have recorded three holographic tilt series as required for VFET 36, 12 (see Methods and Supplementary Sect. LABEL:suppl:note:geometry for details of the imaging conditions). The first series of holograms was acquired by tilting the sample around the $x$ axis at room temperature, since above the Curie temperature of $\rm T_{C}=278.7\,K$ 43, the phase $\varphi_{e}$ reconstructed from the holograms is of pure electrostatic origin.
The (scalar) electrostatic potential $\rm\Phi$ was then determined by inverting the Radon transformation (i.e., the linear projection law) linking $\rm\Phi$ and $\varphi_{e}$. The resulting 3D mean inner potential distribution is nearly homogeneous, as discussed in Supplementary Sect. LABEL:suppl:note:MIP (see Methods for the tomographic reconstruction details). In the following two series, the sample was tilted around the $x$ and $y$ axes (cf. Fig. 1a) at $T=95\,\mathrm{K}$. At this temperature below $T_{C}$, the magnetic field imposes an additional Aharonov-Bohm phase $\varphi_{m}$ on the imaging electrons. After subtracting the pre-determined electrostatic contribution from the total phase shift, the in-plane components of the magnetic induction, $B_{x}(x,y,z)$ and $B_{y}(x,y,z)$ (see Fig. 1b), were reconstructed in 3D from the remaining $\varphi_{m}$ by inverse Radon transformation of another linear projection law linking the gradient of $\varphi_{m}$ and $B_{x,y}$ (see Methods for details). The spatial resolution of the reconstructed $B_{x}$ and $B_{y}$ components was below $10\,\mathrm{nm}$ in directions outside of the missing tilt range (cf. Supplementary Sect. LABEL:suppl:note:resolution). In order to change the tilt axis from $x$ to $y$, the sample had to be warmed up to room temperature and rotated in-plane by $90^{\circ}$ in the sample holder. The associated heating and (field) cooling of the sample followed precisely the same protocol as prior to tilting it around $x$. As a result, at the most confined tip region of the FeGe needle investigated here, the skyrmion patterns obtained after cooling prior to acquiring the $x$ and $y$ tilt series were almost perfectly identical, while at the less confined broader end of the needle, the skyrmion pattern had changed considerably (see Suppl. Fig. LABEL:suppl:Fig_Specimen for details).
Based on $B_{x,y}$, the remaining third component $B_{z}(x,y,z)$ was finally determined by solving $\mathrm{div}\,\boldsymbol{B}=0$, thereby yielding the full 3D vector field of the magnetic induction $\boldsymbol{B}(x,y,z)$ (see Methods for details and Supplementary Movie 1 for 3D animations of the tomograms). Since the calculation of $B_{z}$ is, however, based on the differentiation of $B_{x}$ and $B_{y}$, it suffers from noise amplification and corresponding artefacts, which need to be taken into account in the following analysis.

## Spin texture of Skyrmion tubes

In the following, we analyze this comprehensive 3D set of $\boldsymbol{B}(x,y,z)$ data in order to extract characteristic magnetic features and quantities of the SkTs in FeGe. Fig. 1b reveals that the tip of the needle hosts a single row of SkTs that are elliptically distorted towards the sideward surfaces of the needle, i.e., perpendicular to both the needle axis and the stabilizing external field. Towards the broader back of the needle (top region of Fig. 1b), these elongated SkTs develop into a zig-zag chain of Bloch SkTs when the width surpasses a critical value of roughly 150 nm. This width corresponds to about twice the characteristic helical modulation length $L_{D}$ and the next-nearest neighbour distance in a close-packed skyrmion lattice in FeGe 33. An evaluation of the out-of-plane component of $\boldsymbol{B}$ (cf. Supplementary Sect. LABEL:suppl:note:Lattice_Packing) reveals a ratio of areas with positive and negative $B_{z}$ of 0.85, which also points to a close packing of the SkTs in this region 16.

Figure 3: Axial modulations of the SkTs. a, 3D rendering of the ($x$, $y$) components of $\boldsymbol{B}$ for eight close-packed SkTs in the FeGe needle. The red lines represent the SkT axes, and $\boldsymbol{q_{1}}$, $\boldsymbol{q_{2}}$, $\boldsymbol{q_{3}}$ indicate the three NN directions of the SkT lattice.
b and c (f and g), $z$-dependent positions of the cores of SkTs 1-5 (SkTs 6-8) along $\boldsymbol{q_{2}}$ and $\boldsymbol{q_{1}}$. The data points are colored according to the labels in a. d and e, $x$-$z$ slices through SkTs 3 (d) and 8 (e).

Fig. 2 and Supplementary Movies 2, 3 present an in-depth view of the spin texture within these SkTs. Fig. 2b shows a planar cross-section of the spin texture through the vertical center of the needle. Here, the color of the arrows indicates the direction of the in-plane ($x$, $y$) components of $\boldsymbol{B}$ according to the color wheel in Fig. 2a. While all four SkTs in this section feature radial Bloch walls, details of the spin texture exhibit subtle discrepancies from that of an undisturbed, perfect Bloch skyrmion: (i) The skyrmions may exhibit significant distortions and (partially) lose their axial symmetry. Neither does the direction of the in-plane component of $\boldsymbol{B}$ remain tangential, nor is its magnitude constant for a given radius. (ii) Frequently, the maximum out-of-plane orientation (as indicated by a virtually vanishing size of the arrows) is not located in the center of the skyrmions. (iii) Unlike expected for isolated magnetic solitons, some distortions of the skyrmionic spin textures seem to go along with magnetic flux "leaking" between neighboring SkTs, as highlighted by the dashed region "I" between SkTs 4 and 7. This appearance of a confined helical "band" resembles the evolution of a metastable isolated skyrmion towards a helical modulation ("strip-out") discussed in bi-layer thin films 44. Both scenarios, however, violate the topology of a skyrmion and necessitate the occurrence of singular Bloch points. Features indicating the occurrence of Bloch points are in fact frequently observed: Figs. 2c, d show two orthogonal vertical slices through SkT 3. Since the SkT axes are found to be bent and (partially) twisted (see below and Fig.
3), SkT 3 was artificially aligned along the $z$ axis for this presentation. To this end, each $x$-$y$ slice of the tube was laterally shifted such that the minima of the in-plane component $B_{\|}=\sqrt{B_{x}^{2}+B_{y}^{2}}$ of all slices are aligned along the $z$ axis. The dashed circles labeled "II" and "III" highlight sections that are indeed reminiscent of a Bloch point. In addition, both cross-sections confirm the lack of axial symmetry and substantiate the overall inhomogeneity of the spin texture in the SkT already seen in the planar cross-section. In contrast to a "pure" Bloch SkT, we frequently (but not systematically) observe a partially radial orientation of the local magnetic induction, i.e., a small Néel character. These imperfections grow upon approaching the surface and finally lead to a total collapse of the skyrmion structure. This becomes most apparent in the $x$-$z$ cross-section in Fig. 2d, where the thickness of the needle decreases. This region should be understood as a result of surface symmetry breaking and concomitant effects, such as pinning by surface anisotropies, modified magnetic properties due to FIB surface damage, and demagnetization fields.

Figure 4: Planar maps of the predominant magnetic energy density contributions $DB_{z}(\nabla\times\boldsymbol{B})_{z}$ (DM interaction, (a and b)) and $A\|\left(\nabla\times\boldsymbol{B}\right)_{z}\|^{2}$ (exchange interaction, (c and d)) and their sum (e and f), together with radial averages as function of the radius (g, h). Left: Simulations. Right: Reconstructed from the central part of SkT 3. Overlaid arrows indicate the in-plane magnetic induction.

In the cubic helimagnet FeGe, a twisting in the third direction, i.e., along the axis of the SkT, could result in a gain of energy through the Dzyaloshinskii-Moriya (DM) exchange.
However, such an effect will not create triply twisted skyrmion structures, as this is geometrically impossible: the ferromagnetic vector can be rotated only in two directions in the cutting plane perpendicular to the skyrmion axis. Hence, the chiral twist 6 could only affect the shape of the SkTs. For example, a modulation could arise as a tertiary conformational deviation from a straight cylindrical SkT shape. Confinement or surface pinning may promote such morphology changes. Indeed, Fig. 3a illustrates that the axes of the SkTs (red lines) are bent and twisted rather than extending as straight cylindrical objects along the $z$ axis in the close-packed region of the needle (similar rendering of $B_{\|}$ as in Fig. 2a). In order to study possible correlations between these deformations, we have analyzed the in-plane positions of the SkT axes along the nearest neighbour (NN) directions $\boldsymbol{q_{1}}$, $\boldsymbol{q_{2}}$, $\boldsymbol{q_{3}}$ (indicated by white arrows). The resulting dependencies of the deviations from an average axial position along $\boldsymbol{q_{2}}$ and $\boldsymbol{q_{1}}$, i.e., in directions that are largely affected by the lateral confinement, are shown in Fig. 3b, c for the bottom row of SkTs (nos. 1-5) and in Fig. 3f, g for SkTs 6-8 in the top row. Except for SkT 1 (small blue circles in b and c), which is least close-packed and rather has two elliptically elongated SkT neighbours, and SkT 7 (small pink circles in b and c), which is additionally distorted due to an unusual magnetic coupling to SkT 4 (see above), all SkT axes exhibit pronounced sideward deformations. As indicated by grey bands (guides to the eye only), these lateral modulations are correlated among SkTs in the same row.
These deformations are harmonically modulated with a modulation length of approximately $\rm 80\,nm$, which is close to the helical period $L_{D}\simeq 70\,\mathrm{nm}$ in FeGe 18, 45, pointing to the DM interaction as a possible origin of the deformations. Note, however, that comparisons with $y$-$z$ cross-sections through SkTs 3 and 8 in Figs. 3d, e reveal that these modulations correlate with the occurrence of uniformly polarized edge states 46. These edge states reside at the sidelong rims of the FeGe needle (cf. left and right surfaces in Figs. 1b and 2a) and are separated from the SkTs by very narrow magnetic transition regions (resembling domain walls) some 10 nm in width. The correlation of the deformation of the SkTs with these edge states is corroborated by the facts that (i) the central deformations are directed towards the center on both sides of the needle and (ii) the magnetic orientations of the edge states and the outer rims of the SkTs’ spin textures are concurrently reversed between the right (SkTs 1-5) and left side (SkTs 6-8) of the needle, respectively. This results in qualitatively identical interactions between the SkTs and the edge states on either side. In contrast, the deformations of the SkTs along the largely unconfined direction $\boldsymbol{q_{3}}$ do not exhibit any obvious correlations (not shown). The availability of 3D vector data of the magnetic induction enables us, for the first time, to experimentally derive spatial maps of the free energy density contributions from magnetic exchange and DM interactions from the volume of a sample. These energetic contributions are most essential for the formation and stabilization of skyrmions and SkTs, as they are expected to reduce the free energy in the centers of the SkTs, while the interstitial regions in a SkT lattice may be considered as domain walls of increased energy 47.
We have calculated from $\boldsymbol{B}(x,y,z)$ the solenoidal part of the magnetic exchange energy density $\mathit{w}_{\mathrm{ex}}=A\|(\nabla\times\boldsymbol{B})\|^{2}$ and the volume contribution of the DM energy density $\mathit{w}_{\mathrm{DM}}=D\boldsymbol{B}\cdot(\nabla\times\boldsymbol{B})$. Here, $A=8.78\ \mathrm{\frac{pJ}{m}}$ and $D=1.58\ \mathrm{\frac{mJ}{m^{2}}}$ denote the exchange stiffness and the DM interaction strength of FeGe, respectively 48. Due to the vanishing magnetic charge density $\rho_{m}\approx 0$, the conservative part of the exchange energy $\|\nabla\cdot\boldsymbol{M}\|^{2}$ is small in Bloch skyrmions, and the remaining contributions are divergence terms that can be collapsed into surface terms; both are therefore neglected here. As the spin texture of the SkTs is highly disturbed in the near-surface region (cf. Fig. 2c, d), and in order to account for the axial deformation of the SkTs, only magnetic induction data from the central part of the SkT (cf. grey shaded boxes in Figs. 2c, d) were used and projected onto the $x$-$y$ plane to calculate the planar distribution of energy densities. For comparison, such energy density maps were also calculated for a simplified skyrmion lattice model employing the circular cell approximation 49, taking into account the shape of the needle (Supplementary Sect. LABEL:suppl:note:Magnetostatics). Fig. 4 shows the resulting simulated (left column) and experimentally determined energy density maps (right column) for the contributions arising from the DM and exchange interactions, and their sum. Here, we only plot energy densities dominated by the in-plane components of the magnetic induction to suppress some of the artefacts afflicting the $B_{z}$ component; this is, however, sufficient and consistent with calculations that take the full $\boldsymbol{B}(x,y,z)$ into account (cf. Supplementary Sect. LABEL:suppl:note:mag-energies and Fig. LABEL:Suppl_Fig_energetics_simulation).
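The two densities defined above can be evaluated directly on gridded induction data with finite differences. A minimal numpy sketch, with the quoted $A$ and $D$ values converted to SI units; the toy field and unit grid spacing are illustrative assumptions:

```python
import numpy as np

A = 8.78e-12   # exchange stiffness of FeGe, J/m (value quoted in the text)
D = 1.58e-3    # DM interaction strength of FeGe, J/m^2

def curl(b, d=1.0):
    """Curl of a vector field b with shape (3, nx, ny, nz) on a uniform grid."""
    bx, by, bz = b
    return np.stack([
        np.gradient(bz, d, axis=1) - np.gradient(by, d, axis=2),
        np.gradient(bx, d, axis=2) - np.gradient(bz, d, axis=0),
        np.gradient(by, d, axis=0) - np.gradient(bx, d, axis=1),
    ])

def energy_densities(b, d=1.0):
    c = curl(b, d)
    w_ex = A * (c ** 2).sum(axis=0)   # A ||curl B||^2
    w_dm = D * (b * c).sum(axis=0)    # D B . (curl B)
    return w_ex, w_dm

# Toy texture: B = (-y, x, 0) has curl B = (0, 0, 2), so w_ex = 4A
# everywhere and w_dm = 0 (B is perpendicular to its curl).
n = 8
x, y, z = np.meshgrid(*(np.arange(n, dtype=float),) * 3, indexing="ij")
b = np.stack([-y, x, np.zeros_like(x)])
w_ex, w_dm = energy_densities(b)
```

On real data the same routine is applied only to the central part of the tube, as described above, before projecting onto the $x$-$y$ plane.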
Given the apparent real-structure effects and noise in the data, the experimental results are in excellent qualitative agreement with the simulations. In particular, the course of the radially averaged contribution (g, h) confirms experimentally the prediction that the reduction in the free energy density due to the DM interaction overcompensates the energetic costs of the exchange in the core of the skyrmion tube, which results in the overall energetic stabilization of the SkT (lattice). In comparison with the simulations, the experimental energy density landscapes are slightly compressed in the $x$ direction, which is attributed to the interaction with the edge state and contributes, besides the noise, to the quantitative reduction of the radial averages in Fig. 4h. Notably, the energetically least stable part of the SkT is the center of the circumferential Bloch wall, which is the region with the highest in-plane orientation of the magnetic induction in the SkT. It is remarkable that this is precisely the region where we had found indications of ”leaking flux” between SkTs 4 and 7 (see discussion above and Fig. 2b). Apparently, the center of the Bloch wall is the most ”forgiving” zone of the SkT, which in turn may explain its overall stability against the variety of observed magnetic defects. All in all, the energy maps confirm the heterogeneous particle-like nature of skyrmions. Unlike the energetically homogeneous spiral states of a chiral helimagnet like FeGe, the skyrmions have a definite shape and size that is caused by the frustration between the different exchange energies, which can be lifted only partially through the doubly twisted core.
## Summary

Low temperature holographic vector-field electron tomography was used in combination with the spatial stabilization of the specimen’s magnetic state by an external magnetic field to reconstruct the full vector-field $\boldsymbol{B}$ of the skyrmionic spin texture in FeGe in all three dimensions at nanometer resolution. The unrivaled resolution of this 3D magnetic microscopy of a volume sample provides unprecedented insight into the details of the 3D spin texture of skyrmions. Besides a characterization of the complicated breakdown of the skyrmion texture upon approaching surfaces in axial directions, we observe a variety of imperfections in the spatial extension of skyrmion tubes. Among them are axial and planar distortions of the SkTs, local losses of axial symmetry, and the occurrence of unexpected radial (rather than purely tangential) tilts of the magnetic induction in the circumferential Bloch walls. Even indications of in-plane magnetic flux leaking among neighboring SkTs in close-packed regions and abrupt changes of the magnetic induction that may be indicative of the occurrence of Bloch points are found. Also, the 3D course of the SkT axes was investigated in great detail. Here, we observe a substantial bending and twisting of these axes that is locally correlated with the occurrence of pronounced edge states, specifically in directions that are affected by confinements. Notably, these deformations appear at length scales where harmonic modulations are promoted by the DM interaction. Planar energy density maps across the SkTs were derived from the volume data of the magnetic induction and confirm for the first time experimentally the anticipated formation and stabilization mechanism of skyrmions for a volume sample.
The results reveal a substantial energetic gain due to the DM interaction that overcompensates the energetic effort associated with the magnetic exchange interaction in the core of the SkT, thereby stabilizing the SkT lattice as a whole. We anticipate that this novel experimental approach will pave the way to a better understanding of spin textures in a large variety of complex topologically protected and non-topological magnetization patterns, including other members of the skyrmion family, thereby moving the fields of both nanomagnetism and spintronics significantly forward.

## Methods

### Sample preparation.

Based on the results of crystal growth by chemical vapour transport in the system Fe/Ge 50, 51, single crystals of FeGe in the B20 structure were grown via chemical transport reaction using iodine as transport agent. Starting from a homogeneous mixture of the element powders iron (Alfa Aesar 99.995%) and germanium (Alfa Aesar 99.999%), the cubic modification of FeGe crystallized by a chemical transport reaction very slowly in a temperature gradient from 850 K (source) to 810 K (sink), with a transport agent concentration of $0.2\,\mathrm{\frac{mg}{cm^{3}}}$ iodine (Alfa Aesar 99.998%). The chemical vapour transport was carried out perpendicular to the tube axis over a diffusion distance of 38 mm. Selected crystals were characterized by EDXS, WDXS, and especially X-ray single crystal diffraction to verify the present modification. The preparation of the FeGe needle was carried out via the focused ion beam (FIB) technique on a Thermo Scientific Helios 660. A rough cut of the needle geometry $(700\times 700\,\mathrm{nm})$ was performed with currents of 790 and $430\,\mathrm{pA}$, respectively. For further fine shaping $(300\times 300\,\mathrm{nm})$ the current was reduced to 80 and $40\,\mathrm{pA}$. The final polishing was carried out at $24\,\mathrm{pA}$.
In order to remove preparation residue, the needle was finally cleaned in a Fischione Model 1070 NanoClean for $1\,\mathrm{min}$. High-resolution TEM images indicate that the crystalline core of the needle is surrounded by an amorphous surface layer of 4 nm thickness. STEM-EDX measurements reveal that this layer consists of iron oxide (cf. Supplementary Sect. LABEL:suppl:note:surface_layer). The skyrmion phase within the needle was stabilized by the stray field of a ring-shaped $\mathrm{Sm_{2}Co_{17}}$ permanent magnet fitted into a GATAN 636 double tilt liquid nitrogen sample holder. The ring was prepared from a bulk magnet by sinker spark eroding and mechanical grinding and provided a magnetic field in $z$-direction of approximately $170\,\mathrm{mT}$ (cf. Supplementary Sect. LABEL:suppl:note:ring).

### Acquisition and reconstruction of the holographic tilt series.

Holographic tilt series were recorded at an FEI Titan G2 60-300 HOLO 52 in Lorentz mode (conventional objective lens switched off) operated at $300\,\mathrm{kV}$. The voltage of the electrostatic Möllenstedt biprism was set to 120 V, leading to a fringe spacing of $2.3\,\mathrm{nm}$ in the electron hologram (cf. Supplementary Sect. LABEL:suppl:note:solitonic_skyrmions). For the acquisition of the latter, a GATAN K2 Summit direct detection camera in counting mode was used, yielding a holographic fringe contrast of $40\,\%$. The acquisition process was performed semi-automatically with an in-house developed software package 53 to collect three holographic tilt series consisting of object and object-free empty holograms, two at $95\,\mathrm{K}$ and one at room temperature. For the first tilt series at $95\,\mathrm{K}$, the angle between the needle and tilt axis amounts to $30^{\circ}$. For the second tilt series, the specimen was manually rotated outside the microscope in-plane by $70^{\circ}$ (ideal is $90^{\circ}$), resulting in an angle between the needle and tilt axis of $-40^{\circ}$ (cf. Supplementary Sect.
LABEL:suppl:note:geometry for the details). The tilt range of each tilt series was from $-66^{\circ}$ to $+65^{\circ}$ in $3^{\circ}$ steps. To obtain the full phase shift ($>2\pi$), the phase images were unwrapped automatically by the Flynn algorithm 54 and manually at regions where the phase signal is too noisy or undersampled, by using prior knowledge of the phase shift (e.g., from adjacent projections) 55. Potential phase wedges in vacuum caused by the magnetic stray-field of the ring were corrected in all three tilt series. An analysis of these stray-field contributions is presented in the Supplementary Sect. LABEL:suppl:note:el_mag_fringing_fields.

### Tomographic reconstruction.

All three phase image tilt series were aligned, i.e., corrected for image displacements with respect to their common tilt axis by cross-correlation, the center-of-mass method, and the common-line approach 12. The aligned datasets obtained in this way correspond to the following linear projection laws (Radon transformations): $\varphi_{e}\left(p,\theta,z\right)=C_{E}\iint_{\boldsymbol{e}\cdot\boldsymbol{r}}\Phi\left(x,y,z\right)\mathrm{d}x\mathrm{d}y$ (1) and $\frac{\partial\varphi_{m}(p,\theta,z)}{\partial z}=\frac{e}{\hbar}\iint_{\boldsymbol{e}\cdot\boldsymbol{r}}B_{p=y,x}\left(x,y,z\right)\mathrm{d}x\mathrm{d}y.$ (2) Here, $C_{E}$ is a kinetic constant depending solely on the acceleration voltage, $p$ and $z$ are the 2D detector coordinates, $\theta$ the tilt angle, $\boldsymbol{r}=(x,y)^{T}$, and $\boldsymbol{e}=(\cos{\theta},\sin{\theta})^{T}$. The index to the integral indicates a collapse of the 2D integral to the projection line defined by $\boldsymbol{e}\cdot\boldsymbol{r}$. The subsequent tomographic 3D reconstruction of the aligned phase tilt series (i.e., the inverse Radon transformation) was numerically carried out using the weighted simultaneous iterative reconstruction technique (W-SIRT) 56.
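The SIRT family of algorithms admits a compact description: the plain (unweighted) update is $x_{k+1}=x_{k}+C A^{T}R\,(b-Ax_{k})$, with $R$ and $C$ the inverse row and column sums of the projection matrix $A$; the weighted variant of Ref. 56 modifies the back-projection step and is not reproduced here. The toy sketch below, with assumed names and data, only illustrates the unweighted iteration:

```python
import numpy as np

def sirt(A, b, n_iter=200):
    """Plain SIRT for a linear projection model A x = b (A nonnegative).
    W-SIRT (Ref. 56) replaces this simple back-projection by a weighted one."""
    R = 1.0 / np.maximum(A.sum(axis=1), 1e-12)  # inverse row sums
    C = 1.0 / np.maximum(A.sum(axis=0), 1e-12)  # inverse column sums
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        # back-project the normalized residual onto the voxels
        x += C * (A.T @ (R * (b - A @ x)))
    return x

# Toy parallel projections of a two-voxel "object" x_true = (2, 3)
A = np.array([[1.0, 1.0],
              [1.0, 0.0],
              [0.0, 1.0]])
x_true = np.array([2.0, 3.0])
x_rec = sirt(A, A @ x_true)
```

For this small consistent system the iteration contracts the error by a constant factor per step, so `x_rec` converges to `x_true`.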
The three resulting tomograms represent the incremental 3D phase shift per voxel that we refer to as 3D phase maps. The two 3D phase maps obtained at $95\,\mathrm{K}$ were released from their electrostatic (MIP) contribution by superposition and subtraction of the 3D phase map obtained at room temperature. Then, differentiation of each of the two resulting magnetic 3D phase maps in the direction perpendicular to the experimental tilt axis and multiplication with the factor $\hbar/e$ leads to one component of the magnetic induction in the respective direction. Since the specimen was rotated only by $70^{\circ}$ in the underlying tomographic experiment for the reconstruction of these two $\boldsymbol{B}$-field components, one of them was projected on the orthogonal direction of the other to finally obtain the 3D $B_{x}$ and $B_{y}$ components. A verification of the experimental workflow repeated on simulated data is provided in Supplementary Sect. LABEL:suppl:note:Rec-of-Sim.

### Calculation of the third magnetic B field component.

The third $\boldsymbol{B}$ field component $B_{z}$ is obtained by solving Gauss’s law for magnetism $\mathrm{div}\,\boldsymbol{B}=0$ with appropriate boundary conditions on the surface of the reconstruction volume. Here, we employed periodic boundary conditions for solving this differential equation in Fourier space endowed with coordinates $\boldsymbol{k}$, i.e., $B_{z}\left(\mathbf{k}\right)=-\frac{k_{x}B_{x}\left(\mathbf{k}\right)+k_{y}B_{y}\left(\mathbf{k}\right)}{k_{z}}.$ (3) The zero frequency component (integration constant) was fixed by setting the average of $B_{z}$ to zero on the boundary of the reconstruction volume. To suppress noise amplification by this procedure, a Butterworth-type low-pass filter was applied.

### Magnetic energy densities.

Following Ref.
12, the exchange energy density may be split into contributions from magnetic charges, currents and surface terms $A\left(\left(\nabla\cdot\boldsymbol{M}\right)^{2}+\left|\nabla\times\boldsymbol{M}\right|^{2}-\mathit{w}_{\mathrm{surf}}\right)$. In the magnetostatic limit considered here, the magnetization in the second term may be replaced by $\boldsymbol{B}/\mu_{0}$ and can be reconstructed from the tomographic data. In the case of the DM interaction we have the following identities: $\displaystyle E_{\text{DM}}\left[\boldsymbol{M}\right]$ $\displaystyle=D\int\boldsymbol{M}\cdot\left(\nabla\times\boldsymbol{M}\right)dV$ (4) $\displaystyle=\frac{D}{\mu_{0}}\int\boldsymbol{B}\cdot\left(\nabla\times\boldsymbol{B}\right)dV+D\int\nabla\cdot\left(\Phi\boldsymbol{j}_{b}\right)dV$ $\displaystyle=\frac{D}{\mu_{0}}\int\boldsymbol{B}\cdot\left(\nabla\times\boldsymbol{B}\right)dV+D\varoiint\boldsymbol{w}_{\mathrm{surf}}\cdot d\boldsymbol{S}$ Here, $\boldsymbol{j}_{b}$ denotes the bound current and $\Phi$ the scalar magnetic potential. The last line shows that the part of the DM energy density which can be derived solely from the $\boldsymbol{B}$-field may be identified as a volume contribution, which can be reconstructed from tomographic data. The remainder can be collapsed to a surface term.

#### Acknowledgements

We thank D. Pohl for helpful discussions in the process of planning the experimental setup and the data analysis. We furthermore acknowledge A. Tahn and T. Walter for the preparation of the FIB needle and magnetic ring, respectively. The authors are indebted to Vacuumschmelze GmbH & Co. KG for providing the $\mathrm{Sm_{2}Co_{17}}$ magnet. AL, BR, and SS gratefully acknowledge financial support through the Priority Program SPP2137 of the German Research Foundation (DFG) within projects LU-2261/2-1 and RE-1164/6-1.
DW and AL have received funding from the European Research Council (ERC) under the Horizon 2020 research and innovation program of the European Union (grant agreement number 715620). AK and RD received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (Grant No. 856538, project “3D MAGiC”), from the European Union’s Horizon 2020 Research and Innovation Programme (Grant No. 823717, project “ESTEEM3”) and from the European Union’s Horizon 2020 Research and Innovation Programme (Grant No. 766970, project “Q-SORT”).

### Author Contributions

SS devised the experimental setup for the magnetic field stabilization. DW conducted the holographic VFET experiments with active support of SS and AK and performed the holographic and tomographic reconstructions. SS, DW, BR and AL analyzed the data. AL performed the magnetic simulations. AL, SS, DW, BR and UR wrote the manuscript. The FeGe single crystal was grown by MS. All authors contributed to the critical discussion and revision of the manuscript.

### Supplementary Information

Supplementary Information is available for this paper.

### Author Information

Correspondence and requests for materials should be addressed to AL (a.lubk@ifw-dresden.de).

## References

* 1 Bogdanov, A. & Yablonskii, D. Thermodynamically stable ”vortices” in magnetically ordered crystals. The mixed state of magnets. _Zh. Eksp. Teor. Fiz_ 95, 178 (1989). * 2 Kiselev, N. S., Bogdanov, A. N., Schäfer, R. & Rößler, U. K. Chiral skyrmions in thin magnetic films: New objects for magnetic storage technologies? _Journal of Physics D: Applied Physics_ 44, 392001 (2011). * 3 Sampaio, J., Cros, V., Rohart, S., Thiaville, A. & Fert, A. Nucleation, stability and current-induced motion of isolated magnetic skyrmions in nanostructures. _Nature Nanotechnology_ 8, 839–844 (2013). * 4 Tomasello, R. _et al._ A strategy for the design of skyrmion racetrack memories. _Scientific Reports_ 4, 1–7 (2014).
* 5 Fert, A., Reyren, N. & Cros, V. Magnetic skyrmions: Advances in physics and potential applications (2017). * 6 Rybakov, F. N., Borisov, A. B. & Bogdanov, A. N. Three-dimensional skyrmion states in thin films of cubic helimagnets. _Physical Review B - Condensed Matter and Materials Physics_ 87, 1–4 (2012). * 7 Meynell, S. A., Wilson, M. N., Fritzsche, H., Bogdanov, A. N. & Monchesky, T. L. Surface twist instabilities and skyrmion states in chiral ferromagnets. _Physical Review B - Condensed Matter and Materials Physics_ 90, 014406 (2014). * 8 Leonov, A. O., Loudon, J. C. & Bogdanov, A. N. Spintronics via non-axisymmetric chiral skyrmions. _Applied Physics Letters_ 109, 1–5 (2016). * 9 Schneider, S. _et al._ Induction Mapping of the 3D-Modulated Spin Texture of Skyrmions in Thin Helimagnets. _Physical Review Letters_ 120, 217201 (2018). * 10 Birch, M. T. _et al._ Real-space imaging of confined magnetic skyrmion tubes. _Nature Communications_ 11, 1–8 (2020). * 11 Yu, X. _et al._ Real-Space Observation of Topological Defects in Extended Skyrmion-Strings. _Nano Letters_ 20, 7313–7320 (2020). * 12 Wolf, D. _et al._ Holographic vector field electron tomography of three-dimensional nanomagnets. _Communications Physics_ 2, 1–9 (2019). * 13 Rößler, U. K., Bogdanov, A. N. & Pfleiderer, C. Spontaneous skyrmion ground states in magnetic metals. _Nature_ 442, 797–801 (2006). * 14 Rößler, U. K., Leonov, A. A. & Bogdanov, A. N. Chiral skyrmionic matter in non-centrosymmetric magnets. In _Journal of Physics: Conference Series_ , vol. 303, 12105 (Institute of Physics Publishing, 2011). * 15 Wilhelm, H. _et al._ Confinement of chiral magnetic modulations in the precursor region of FeGe. _Journal of Physics Condensed Matter_ 24, 294204 (2012). * 16 Balkind, E., Isidori, A. & Eschrig, M. Magnetic skyrmion lattice by the Fourier transform method. _Physical Review B_ 99, 134446 (2019). * 17 Huang, P. _et al._ Melting of a skyrmion lattice to a skyrmion liquid via a hexatic phase. 
_Nature Nanotechnology_ 15, 761–767 (2020). * 18 Yu, X. Z. _et al._ Near room-temperature formation of a skyrmion crystal in thin-films of the helimagnet FeGe. _Nature Materials_ 10, 106–109 (2011). * 19 Wilson, M. N. _et al._ Extended elliptic skyrmion gratings in epitaxial MnSi thin films. _Phys. Rev. B_ 86, 144420 (2012). * 20 Rybakov, F. N., Borisov, A. B., Blügel, S. & Kiselev, N. S. New spiral state and skyrmion lattice in 3D model of chiral magnets. _New Journal of Physics_ 18, 045002 (2016). * 21 Leonov, A. O. _et al._ Chiral Surface Twists and Skyrmion Stability in Nanolayers of Cubic Helimagnets. _Physical Review Letters_ 117, 087202 (2016). * 22 Du, H. _et al._ Interaction of Individual Skyrmions in a Nanostructured Cubic Chiral Magnet. _Physical Review Letters_ 120, 197203 (2018). * 23 Nagaosa, N. & Tokura, Y. Topological properties and dynamics of magnetic skyrmions. _Nature Nanotechnology_ 8, 899–911 (2013). * 24 Leonov, A. O. & Mostovoy, M. Edge states and skyrmion dynamics in nanostripes of frustrated magnets. _Nature Communications_ 8, 1–7 (2017). * 25 Lee, M., Kang, W., Onose, Y., Tokura, Y. & Ong, N. P. Unusual hall effect anomaly in MnSi under pressure. _Physical Review Letters_ 102, 186601 (2009). * 26 Neubauer, A. _et al._ Topological hall effect in the a phase of MnSi. _Physical Review Letters_ 102, 186602 (2009). * 27 Zang, J., Mostovoy, M., Han, J. H. & Nagaosa, N. Dynamics of Skyrmion crystals in metallic thin films. _Physical Review Letters_ 107, 136804 (2011). * 28 Schulz, T. _et al._ Emergent electrodynamics of skyrmions in a chiral magnet. _Nature Physics_ 8, 301–304 (2012). * 29 Dahir, S. M., Volkov, A. F. & Eremin, I. M. Interaction of Skyrmions and Pearl Vortices in Superconductor-Chiral Ferromagnet Heterostructures. _Physical Review Letters_ 122, 097001 (2019). * 30 Huang, S. X. & Chien, C. L. Extended skyrmion phase in epitaxial FeGe(111) thin films. _Physical Review Letters_ 108, 267201 (2012). * 31 Zheng, F. 
_et al._ Experimental observation of chiral magnetic bobbers in B20-type FeGe. _Nature Nanotechnology_ 13, 451–455 (2018). * 32 Leonov, A. O., Tambovtcev, I. M., Lobanov, I. S. & Uzdin, V. M. Stability of in-plane and out-of-plane chiral skyrmions in epitaxial MnSi(111)/Si(111) thin films: Surface twists versus easy-plane anisotropy. _Phys. Rev. B_ 102, 174415 (2020). * 33 Jin, C. _et al._ Control of morphology and formation of highly geometrically confined magnetic skyrmions. _Nature Communications_ 8, 15569 (2017). * 34 Kravchuk, V. P. _et al._ Multiplet of Skyrmion States on a Curvilinear Defect: Reconfigurable Skyrmion Lattices. _Phys. Rev. Lett._ 120, 67201 (2018). * 35 Phatak, C., Petford-Long, A. K. & De Graef, M. Three-dimensional study of the vector potential of magnetic structures. _Physical Review Letters_ 104, 1–4 (2010). * 36 Tanigaki, T. _et al._ Three-dimensional observation of magnetic vortex cores in stacked ferromagnetic discs. _Nano Letters_ 15, 1309–1314 (2015). * 37 Wolf, D. _et al._ 3D Magnetic Induction Maps of Nanoscale Materials Revealed by Electron Holographic Tomography. _Chemistry of Materials_ 27, 6771–6778 (2015). * 38 Streubel, R. _et al._ Retrieving spin textures on curved magnetic thin films with full-field soft X-ray microscopies. _Nature Communications_ 6, 1–11 (2015). * 39 Donnelly, C. _et al._ Three-dimensional magnetization structures revealed with X-ray vector nanotomography. _Nature_ 547, 328–331 (2017). * 40 Hierro-Rodriguez, A. _et al._ Revealing 3D magnetization of thin films with soft X-ray tomography: magnetic singularities and topological charges. _Nature Communications_ 11, 6382 (2020). * 41 Stolt, M. J. _et al._ Electrical Detection and Magnetic Imaging of Stabilized Magnetic Skyrmions in Fe1-xCoxGe (x< 0.1) Microplates. _Advanced Functional Materials_ 29, 1805418 (2019). * 42 Parkin, S. S., Hayashi, M. & Thomas, L. Magnetic domain-wall racetrack memory (2008). * 43 Kovács, A. 
_et al._ Mapping the magnetization fine structure of a lattice of Bloch-type skyrmions in an FeGe thin film. _Applied Physics Letters_ 111, 192410 (2017). * 44 Leonov, A. O. _et al._ The properties of isolated chiral skyrmions in thin magnetic films. _New Journal of Physics_ 18, 065003 (2016). * 45 Lebech, B., Bernhard, J. & Freltoft, T. Magnetic structures of cubic FeGe studied by small-angle neutron scattering. _Journal of Physics: Condensed Matter_ 1, 6105–6122 (1989). * 46 Song, D. _et al._ Quantification of Magnetic Surface and Edge States in an FeGe Nanostripe by Off-Axis Electron Holography. _Physical Review Letters_ 120, 167204 (2018). * 47 Butenko, A. B., Leonov, A. A., Rößler, U. K. & Bogdanov, A. N. Stabilization of skyrmion textures by uniaxial distortions in noncentrosymmetric cubic helimagnets. _Physical Review B - Condensed Matter and Materials Physics_ 82, 052403 (2010). * 48 Beg, M. _et al._ Ground state search, hysteretic behaviour, and reversal mechanism of skyrmionic textures in confined helimagnetic nanostructures. _Scientific Reports_ 5, 1–14 (2015). * 49 Bogdanov, A. & Hubert, A. Thermodynamically stable magnetic vortex states in magnetic crystals. _Journal of Magnetism and Magnetic Materials_ 138, 255–269 (1994). * 50 Richardson, M., Ingri, N., Salomaa, P., Bloom, G. & Hagen, G. The Partial Equilibrium Diagram of the Fe-Ge System in the Range 40-72 at. % Ge, and the Crystallisation of some Iron Germanides by Chemical Transport Reactions. _Acta Chemica Scandinavica_ 21, 2305–2317 (1967). * 51 Bosholm, O., Oppermann, H. & Däbritz, S. Chemischer Transport intermetallischer Phasen IV: Das System Fe-Ge: Chemical Vapour Transport of Intermetallic Phases IV: The System Fe - Ge. _Zeitschrift fur Naturforschung - Section B Journal of Chemical Sciences_ 56, 329–336 (2001). * 52 Boothroyd, C., Kovács, A. & Tillmann, K. FEI Titan G2 60-300 HOLO. _Journal of large-scale research facilities JLSRF_ 2, 44 (2016). * 53 Wolf, D., Lubk, A., Lichte, H. 
& Friedrich, H. Towards automated electron holographic tomography for 3D mapping of electrostatic potentials. _Ultramicroscopy_ 110, 390–399 (2010). * 54 Ghiglia, D. C., Ghiglia, D. C., Pritt, M. D. & Pritt, M. D. _Two-Dimensional Phase Unwrapping - Theory, Algorithms, and Software_ (Wiley, New York, 1998). * 55 Lubk, A. _et al._ Nanometer-scale tomographic reconstruction of three-dimensional electrostatic potentials in GaAs/AlGaAs core-shell nanowires. _Physical Review B - Condensed Matter and Materials Physics_ 90 (2014). * 56 Wolf, D., Lubk, A. & Lichte, H. Weighted simultaneous iterative reconstruction technique for single-axis tomography. _Ultramicroscopy_ 136, 15–25 (2014).
# Morphology of relaxed and merging galaxy clusters. Analytical models for monolithic Minkowski functionals

C. Schimd,1 M. Sereno2,3 1 Aix Marseille Univ, CNRS, CNES, LAM, Marseille, France 2 INAF – Osservatorio di Astrofisica e Scienza dello Spazio di Bologna, via Piero Gobetti 93/3, I-40129 Bologna, Italy 3 INFN, Sezione di Bologna, viale Berti Pichat 6/2, 40127 Bologna, Italy E-mail<EMAIL_ADDRESS> (Accepted 2021 January 25. Received 2021 January 19; in original form 2020 July 23)

###### Abstract

Galaxy clusters exhibit a rich morphology during the early and intermediate stages of mass assembly, especially beyond their boundary. A classification scheme based on shapefinders deduced from the Minkowski functionals is examined to fully account for the morphological diversity of galaxy clusters, including relaxed and merging clusters, clusters fed by filamentary structures, and cluster-pair bridges. These configurations are conveniently treated with idealised geometric models and analytical formulae, some of which are novel. Examples from CLASH and LC2 clusters and observed cluster-pair bridges are discussed.

###### keywords: galaxies: clusters: general – cosmology: observations ††pubyear: 2019††pagerange: Morphology of relaxed and merging galaxy clusters. Analytical models for monolithic Minkowski functionals–D

## 1 Introduction

The morphology of galaxy clusters is an indicator of their state of relaxation and can be used to infer their formation history and evolution. As a result of the gravitational dynamics of dark and luminous matter, relaxed galaxy clusters and their hosting dark matter haloes have a triaxial shape (Limousin et al., 2013), with a tendency to prolateness over oblateness especially in their final stage of evolution as assessed by high-resolution $N$-body simulations (e.g.
Bett et al., 2007; Macciò et al., 2007; Despali et al., 2014; Bonamigo et al., 2015) and confirmed by X-ray, optical, Sunyaev-Zel’dovich (SZ), and weak-lensing measurements (Cooray, 2000; De Filippis et al., 2005; Sereno et al., 2006; Sereno et al., 2018b). The persistence of this trend in the outskirts of clusters depends on their mass (Prada et al., 2006), mass accretion rate (Diemer & Kravtsov, 2014), and assembly history (Dalal et al., 2008; Faltenbacher & White, 2010; More et al., 2016). The three-dimensional shape of these structures is normally described by the eigenvalues of the mass distribution or inertia tensors and related parameters such as sphericity, elongation, ellipticity, prolateness, and triaxiality (Springel et al., 2004). These statistics are well-suited for dynamically evolved or poorly resolved clusters; however, they cannot account for the rich morphology of unrelaxed structures or beyond the virial radius shown by high-quality imaging and spectroscopy. New instruments are indeed opening a golden age for a multi-wavelength study of protoclusters, merging clusters and their filamentary environment at both low and high redshift. The clearest example in the local universe is the Virgo cluster with its different substructures identified using GUViCS (Boselli et al., 2014) and HyperLeda (Kim et al., 2016) data.
At intermediate and high redshift, some spectacular illustrations of rich structures are: the outskirts of Abell 2744 probed by XMM-Newton X-ray data (Eckert et al., 2015); the proto-clusters revealed by Herschel-SPIRE from Planck candidates (Greenslade et al., 2018) or combining VUDS and zCOSMOS-Deep data (Cucciati et al., 2018); the filaments bridging the cluster systems A399-A401 and A3016-A3017, detected combining Planck data with ROSAT (Planck Collaboration et al., 2013) or Chandra (Chon et al., 2019); the gaseous and dusty bridge IRDC G333.73+0.37 (Veena et al., 2018); the molecular filamentary structures around Centaurus, Abell S1101, and RXJ1539.5 probed by ALMA and MUSE (Olivares et al., 2019); the multiple filaments within the SSA22 protocluster (Umehata et al., 2019). Weak gravitational lensing analyses have been successful in detecting the dense environment and the correlated dark matter around the main cluster halo (Sereno et al., 2018a). The increasingly large samples of haloes detected in optical (Rykoff et al., 2014; Oguri et al., 2018; Maturi et al., 2019), X-ray (Pierre et al., 2016), or SZ surveys (Bleem et al., 2015; Planck Collaboration et al., 2016) demand flexible and reliable indicators of morphology that can be applied to the full zoo of galaxy clusters. A number of statistics, such as halo concentration, peak-centroid shift, power ratio, axial ratio, and position angle, have been considered to quantify the degree of regularity and symmetry of these structures (Donahue et al., 2016; Lovisari et al., 2017). However, these indicators can fail for very irregular systems. A cluster progenitor experiences very different shapes during the merger history, and the configuration of satellite haloes and local environment dramatically changes. Major mergers can be followed by slow accretion along filaments until the cluster ends up in a relatively virialised final phase with a nearly regular and spherical shape.
We aim at finding a small set of morphological parameters that can in principle describe all the different phases of the merging accretion history. In this paper, we propose to use the three non-trivial Minkowski functionals to fully characterise the morphology of spatial structures (Mecke et al., 1994; Mecke, 2000). We show that very different morphologies, namely major mergers, multiple mergers, and filamentary structures, can be suitably described by a single set of geometrically motivated parameters. We calculate analytical expressions for the triaxial ellipsoid, an $n$-fused-balls model accounting for non-relaxed clusters undergoing merging, a spiky model with $n$ cylindrical branches radially connected to a central ball possibly accounting for filaments of matter feeding a central halo, and a dumbbell model describing axially-symmetric cluster-pair bridges (§2; details of the calculations are reported in the Appendices). These systems are then classified using the so-called shapefinders deduced from the Minkowski functionals (§3). Conclusions are drawn in §4.

## 2 Morphology by Minkowski functionals: models

The Minkowski functionals are a complete set of morphological descriptors that characterise the geometry and topology of a continuous body. In three dimensions they are its volume ($V$), surface area ($A$), integral mean curvature ($H$) and integral Gaussian curvature ($G$) of the surface, the latter being linearly related to the Euler characteristic $\chi$ that counts the number of connected components minus the number of tunnels plus the number of cavities (Mecke, 2000). According to a characterisation theorem, the Minkowski functionals are the only valuations invariant under rotations and translations and preserving additivity and continuity (Hadwiger, 1957).
These properties along with the Steiner formula allow the calculation of $V_{0}\equiv V$, $V_{1}\equiv A/6$, and $V_{2}\equiv H/3\pi$, the fourth functional $V_{3}\equiv\chi=1$ being trivial for isolated bodies with no tunnels and cavities as here.

### 2.1 Ellipsoidal model: relaxed clusters

Nearly virialised clusters can be conveniently described as ellipsoidal haloes. For a triaxial ellipsoid $\mathcal{E}$ with principal semi-axes $a\geqslant b\geqslant c$, with $a$ defining the polar axis and $(b,c)$ the equatorial plane, namely with $q\equiv b/a$ and $s\equiv c/a$ respectively the intermediate-to-major and minor-to-major axis ratio, the non-trivial Minkowski functionals are $\displaystyle V_{0}^{\mathcal{E}}$ $\displaystyle=$ $\displaystyle\frac{4\pi}{3}a^{3}qs,$ (1a) $\displaystyle V_{1}^{\mathcal{E}}$ $\displaystyle=$ $\displaystyle\frac{\pi}{3}a^{2}s^{2}\left[1+\frac{q}{e}F(\varphi,m)+\frac{eq}{s^{2}}E(\varphi,m)\right],$ (1b) $\displaystyle V_{2}^{\mathcal{E}}$ $\displaystyle=$ $\displaystyle\frac{aqs}{3\pi}(I_{1}+I_{2}),$ (1c) in which $e=\sqrt{1-s^{2}}$, $m=e^{-2}[1-(s/q)^{2}]$, $F(\varphi,m)$ and $E(\varphi,m)$ are elliptic integrals of first and second kind with $\sin\varphi=e$ (Abramowitz & Stegun, 1970), and $I_{1,2}$ are dimensionless integrals that we evaluate numerically in the general case; see equations (14) in Appendix A. Analytic limits of the previous equations exist for prolate ellipsoids of revolution about axis $a$ ($a\geqslant b=c$, so that $q=s$, $m=0$, $F=E=\arcsin e$), which could account for virialised clusters, and for oblate ellipsoids of revolution about axes $c$ ($a=b\geqslant c$, so that $q=1$, $m=1$, $F=\mathrm{arctanh}~{}e$, $E=e$), which could account for an intermediate stage of merging. Following the notation in Schmalzing et al.
(1999), one has $\displaystyle V_{0}^{\mathcal{E}_{*}}$ $\displaystyle=$ $\displaystyle\frac{4\pi}{3}r^{3}\lambda,$ (2a) $\displaystyle V_{1}^{\mathcal{E}_{*}}$ $\displaystyle=$ $\displaystyle\frac{\pi}{3}r^{2}\left[1+\lambda f\left(\frac{1}{\lambda}\right)\right],$ (2b) $\displaystyle V_{2}^{\mathcal{E}_{*}}$ $\displaystyle=$ $\displaystyle\frac{2r}{3}\left[\lambda+g(\lambda)\right],$ (2c) where $f(x)=(\arccos x)/\sqrt{1-x^{2}}$, and $\\{r=as,\lambda=1/s,g(\lambda)=f(\lambda)\\}$ for prolate ellipsoids ($\mathcal{E}_{*}=\mathcal{E}_{\mathcal{P}}$), $\\{r=a,\lambda=s,g(\lambda)=\lambda^{-1}f(\lambda^{-1})\\}$ for oblate ellipsoids ($\mathcal{E}_{*}=\mathcal{E}_{\mathcal{O}}$).111Our results slightly differ from Schmalzing et al. (1999). Equations (1-2) reduce to the familiar expressions for a sphere $\mathcal{S}$ ($a=b=c$), viz. $V_{0}^{\mathcal{S}}=4\pi a^{3}/3$, $V_{1}^{\mathcal{S}}=2\pi a^{2}/3$, and $V_{2}^{\mathcal{S}}=4a/3$.

Figure 1: Minkowski functional iso-contours for an ellipsoid, $V_{\mu}^{\mathcal{E}}$, as a function of the intermediate-to-major and minor-to-major axis ratios, $q=b/a$ and $s=c/a$. Volume (solid lines; $\mu=0$), surface (dashed; $\mu=1$), and integrated mean curvature (dotted; $\mu=2$) are shown for values ranging from (0.05,0.7,0.1) and increasing in steps $\Delta V_{\mu}=\\{0.5,0.25,0.01\\}~{}h^{\mu-3}\,\mathrm{Mpc}^{3-\mu}$. The underlying density plot represents the triaxiality $T$. Points with error bars are CLASH clusters from Sereno et al. (2018b, colour coded by redshift; point size proportional to the mass) and Chiu et al. (2018, black), softly following the median prolateness-ellipticity relation of Despali et al. (2014, white long-dashed line).

As illustrated in Figure 1, the surface area $V_{1}$ and the integrated mean curvature $V_{2}$ are nearly degenerate with the volume $V_{0}$ for nearly prolate shapes or orthogonal for nearly oblate structures.
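The prolate and oblate limits in Equations (2) are simple enough to evaluate directly. The sketch below is ours (function and variable names are illustrative, not from the paper); it implements Equations (2), continuing $f(x)=(\arccos x)/\sqrt{1-x^{2}}$ analytically to $x>1$, where it equals $\mathrm{arccosh}(x)/\sqrt{x^{2}-1}$, and recovers the sphere values when $s\to 1$.

```python
import math

def f(x):
    # f(x) = arccos(x)/sqrt(1 - x^2), analytically continued to x > 1,
    # where it equals arccosh(x)/sqrt(x^2 - 1); f(1) = 1 by continuity.
    if abs(x - 1.0) < 1e-12:
        return 1.0
    if x < 1.0:
        return math.acos(x) / math.sqrt(1.0 - x * x)
    return math.acosh(x) / math.sqrt(x * x - 1.0)

def minkowski_prolate(a, s):
    # Equations (2) with r = a*s, lambda = 1/s, g = f   (a >= b = c, s = c/a)
    r, lam = a * s, 1.0 / s
    v0 = 4.0 * math.pi / 3.0 * r ** 3 * lam
    v1 = math.pi / 3.0 * r ** 2 * (1.0 + lam * f(1.0 / lam))
    v2 = 2.0 * r / 3.0 * (lam + f(lam))
    return v0, v1, v2

def minkowski_oblate(a, s):
    # Equations (2) with r = a, lambda = s, g(l) = f(1/l)/l   (a = b >= c)
    lam = s
    v0 = 4.0 * math.pi / 3.0 * a ** 3 * lam
    v1 = math.pi / 3.0 * a ** 2 * (1.0 + lam * f(1.0 / lam))
    v2 = 2.0 * a / 3.0 * (lam + f(1.0 / lam) / lam)
    return v0, v1, v2
```

At $s=1$ both branches return the sphere values $(4\pi a^{3}/3,\,2\pi a^{2}/3,\,4a/3)$, and the surface functionals agree with the closed-form spheroid areas divided by 6.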
They follow the trend of the triaxiality parameter $T=(1-q^{2})/(1-s^{2})$, which distinguishes oblate ($T\gtrsim 0$) from prolate ($T\lesssim 1$) structures and can be used to define three broad morphological classes (Chua et al., 2019, long dashed lines). Coloured points (size proportional to the mass, colour-coded by redshift) have been obtained for the Cluster Lensing and Supernova Survey with Hubble (CLASH) clusters (Sereno et al., 2018b). Their 3D shapes are constrained with a multi-wavelength analysis combining the surface mass density as determined by gravitational lensing, which probes the size in the plane of the sky, and X-ray and SZ data, to infer the radial extent (Sereno, 2007). With convenient priors, weaker constraints on the 3D shape can still be determined based on lensing alone (Chiu et al., 2018, black points). These points softly follow the median prolateness-ellipticity relation of Despali et al. (2014, white long-dashed line) that fits $\Lambda$CDM $N$-body simulations.

### 2.2 Fused-balls model: merging clusters

Clusters are usually neither relaxed nor isolated. The complex morphology of merging clusters, or of a central cluster with satellite haloes, can be conveniently pictured as a group of partially overlapping balls. In this subsection, we consider first the case of a major merger ($N=2$ balls) and then the case of satellite haloes ($N>2$ balls).

#### Major mergers.

The Minkowski functionals of two merged balls 222We denote $B_{i}\equiv B[\boldsymbol{x}_{i},R]$ the $i$-th ball centred in $\boldsymbol{x}_{i}$ with radius $R$, omitted for clarity. $\mathcal{M}=B_{1}\cup B_{2}$ cannot be calculated like for the ellipsoid because the surface is not regular enough to uniquely define the fundamental form. Instead, they can be calculated using additivity, $V_{\mu}(B_{1}\cup B_{2})=V_{\mu}(B_{1})+V_{\mu}(B_{2})-V_{\mu}(B_{1}\cap B_{2})$.
For two merged balls with unequal radii $R$ and $r\leqslant R$ and centres at distance $d\leqslant R+r$, the volume and surface area are trivial (see also Gibson & Scheraga, 1987), while the integrated mean curvature can be calculated using the Steiner formula; see Appendix B. One finally obtains $\displaystyle V_{0}^{\mathcal{M}}$ $\displaystyle=\frac{2\pi}{3}\left(R^{3}+r^{3}-\frac{1}{8}d^{3}\right)+\frac{\pi}{2}(R^{2}+r^{2})d+\frac{\pi}{4d}(R^{2}-r^{2})^{2},$ (3a) $\displaystyle V_{1}^{\mathcal{M}}$ $\displaystyle=\frac{\pi}{3}(R^{2}+r^{2})+\frac{\pi}{6}(R+r)d+\frac{\pi}{6d}(R-r)(R^{2}-r^{2}),$ (3b) $\displaystyle V_{2}^{\mathcal{M}}$ $\displaystyle=\frac{2}{3}(R+r+d)-\frac{\psi}{3}d\sqrt{2\frac{R^{2}+r^{2}}{d^{2}}-1-\left(\frac{R^{2}-r^{2}}{d^{2}}\right)^{2}},$ (3c) with $\cos\psi=(R^{2}+r^{2}-d^{2})/2Rr$. These equations are defined for non-trivial merging, i.e. as long as the two spheres overlap with no total embedding ($B_{1}\cap B_{2}\neq\emptyset$ and $B_{2}\nsubseteq B_{1}$, i.e. $R-r\leqslant d$); for non-overlapping spheres the correct expression is recovered by setting $d=R+r$. The results are illustrated in Figure 2 as a function of the radius of the smaller ball and of the separation between the centres, both normalised to the radius of the larger ball. Note that for major ($r\lesssim R$) and advanced ($d\ll R$) mergers, $V_{0}$ and $V_{1}$ are nearly degenerate.
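Equations (3) lend themselves to a quick numerical sanity check. The sketch below is ours (names are illustrative); it evaluates the three functionals and verifies the two limits just discussed: at $d=R+r$ the functionals reduce to the sum over two disjoint balls, and at $d=R-r$ (total embedding) to those of the larger ball alone.

```python
import math

def minkowski_merger(R, r, d):
    # Equations (3): two fused balls of radii R >= r, centres a distance
    # d apart, valid for R - r <= d <= R + r.
    v0 = (2.0 * math.pi / 3.0 * (R**3 + r**3 - d**3 / 8.0)
          + math.pi / 2.0 * (R**2 + r**2) * d
          + math.pi / (4.0 * d) * (R**2 - r**2) ** 2)
    v1 = (math.pi / 3.0 * (R**2 + r**2)
          + math.pi / 6.0 * (R + r) * d
          + math.pi / (6.0 * d) * (R - r) * (R**2 - r**2))
    cpsi = (R**2 + r**2 - d**2) / (2.0 * R * r)
    psi = math.acos(max(-1.0, min(1.0, cpsi)))  # clip rounding noise
    arg = 2.0 * (R**2 + r**2) / d**2 - 1.0 - ((R**2 - r**2) / d**2) ** 2
    v2 = 2.0 / 3.0 * (R + r + d) - psi / 3.0 * d * math.sqrt(max(0.0, arg))
    return v0, v1, v2
```

At $d=R+r$ the square root vanishes and $\psi=\pi$, so $V_{2}$ correctly reduces to $\frac{4}{3}(R+r)$; at $d=R-r$ one recovers $(\frac{4\pi}{3}R^{3},\frac{2\pi}{3}R^{2},\frac{4}{3}R)$.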
For reference, the values for the major mergers ($r\sim R$) from the LC2 catalogue (Sereno, 2015), calculated assuming a flat $\Lambda$CDM cosmology with $\Omega_{\mathrm{m}}=0.3$ and $h=0.7$ and $R\equiv R_{200\mathrm{c}}$, are shown, along with the characteristic splashback (Diemer et al., 2017) and pericenter values estimated for binary systems at redshift $z=0.3$ with main halo mass $M_{200\mathrm{c}}=10^{14}h^{-1}\mathrm{M}_{\sun}$ and a secondary halo with 3 or 10 times smaller mass;333$M_{200\mathrm{c}}$ denotes the mass enclosed within a sphere of radius $R_{200\mathrm{c}}$ with mean over-density 200 times the critical density. see Table 1.

Figure 2: Minkowski functional iso-contours for major mergers, $V_{\mu}^{\mathcal{M}}$ (Eqs. 3), as a function of the smaller ball radius $r$ and distance $d$ from the major ball with radius $R$. Volume (solid lines), surface (dashed), and integrated mean curvature (dotted) levels increase by 10% moving top-right from the lower values attained for a single ball, $V_{\mu}^{\mathcal{S}}$ (for $V_{2}^{\mathcal{M}}$, also the contours corresponding to 93% and 96% of $V_{2}^{\mathcal{S}}$ are shown). The lower (‘total embedding’) and upper (‘no overlap’) triangular regions account for trivial morphologies of one and two isolated balls, respectively. Points indicate major mergers from the LC2 cluster catalogue (Sereno, 2015, colour-coded by redshift as in Figure 1, size proportional to $R_{200\mathrm{c}}$). Filled (empty) symbols designate a merging subclump at the pericenter (splashback) for a system at $z=0.3$; see Table 1.
For balls with equal radii ($R=r$) the Minkowski functionals of the resulting body $\mathcal{M}_{\mathcal{P}}$ are well-known, $\displaystyle V_{0}^{\mathcal{M}_{\mathcal{P}}}$ $\displaystyle=\frac{4\pi}{3}R^{3}\left(1+\frac{3d}{4R}-\frac{1}{16}\frac{d^{3}}{R^{3}}\right),$ (4a) $\displaystyle V_{1}^{\mathcal{M}_{\mathcal{P}}}$ $\displaystyle=\frac{2\pi}{3}R^{2}\left(1+\frac{d}{2R}\right),$ (4b) $\displaystyle V_{2}^{\mathcal{M}_{\mathcal{P}}}$ $\displaystyle=\frac{4R}{3}\left(1+\frac{d}{2R}\right)-\frac{2R}{3}\sqrt{1-\frac{d^{2}}{4R^{2}}}\arccos\left(1-\frac{d^{2}}{2R^{2}}\right).$ (4c) with limits $V_{0}^{\mathcal{S}}=4\pi R^{3}/3$, $V_{1}^{\mathcal{S}}=2\pi R^{2}/3$, and $V_{2}^{\mathcal{S}}=4R/3$ when $d=0$, i.e. when $\mathcal{M}_{\mathcal{P}}$ becomes a sphere, $\mathcal{S}$.

Table 1: LC2 merging clusters (Sereno, 2015): parameters of the two-balls model ($R\equiv R_{200\mathrm{c}}$, flat $\Lambda$CDM cosmology). For comparison (lower part of the table), indicative values for the splashback and pericenter phases of some major mergers. Lengths in $h^{-1}$Mpc.
Name | redshift | $R$ | $r$ | $d$
---|---|---|---|---
Abell 1750 | 0.0678 | $0.98\pm 0.19$ | $0.85\pm 0.20$ | $0.57\pm 0.06$
Abell 901 | 0.16 | $0.88\pm 0.15$ | $0.77\pm 0.19$ | $0.85\pm 0.08$
Abell 115 | 0.197 | $1.13\pm 0.10$ | $1.07\pm 0.11$ | $0.63\pm 0.06$
Zw Cl2341 | 0.27 | $0.87\pm 0.15$ | $0.86\pm 0.15$ | $0.73\pm 0.07$
Abell 1758 | 0.28 | $0.95\pm 0.26$ | $0.68\pm 0.17$ | $1.46\pm 0.14$
Bullet Cluster | 0.296 | $1.91\pm 0.09$ | $1.66\pm 0.09$ | $0.50\pm 0.05$
MACS J0025 | 0.5842 | $1.15\pm 0.28$ | $1.06\pm 0.23$ | $0.45\pm 0.05$
CLJ0102-4915 | 0.87 | $1.10\pm 0.05$ | $0.96\pm 0.05$ | $0.52\pm 0.05$
$M_{1}=10^{14}h^{-1}\mathrm{M}_{\sun}$, $M_{2}=M_{1}/10$: | | | |
splashback $\triangledown$ | 0.3 | 1.47 | 0.68 | 2.03
pericenter $\blacktriangledown$ | 0.3 | 1.47 | 0.68 | 0.20
$M_{1}=10^{14}h^{-1}\mathrm{M}_{\sun}$, $M_{2}=M_{1}/3$: | | | |
splashback $\vartriangle$ | 0.3 | 1.47 | 0.68 | 2.03
pericenter $\blacktriangle$ | 0.3 | 1.47 | 0.68 | 0.20

#### Multiple mergers.

Equations (3) can be extended to the simplest configuration for multiple merging, i.e. a central ball (halo) $B$ of radius $R$ intersecting $n$ smaller balls (satellites) $B_{i}$ of radius $r_{i}$, with centres at distance $d_{i}$ from the centre of $B$ and not mutually intersecting ($B_{i}\cap B_{j}=\emptyset$; $i,j=1,\dots,n$; $N=n+1$). The Minkowski functionals of the simply-connected resulting body $\mathcal{M}_{n}=B\cup\bigcup_{i=1}^{n}B_{i}$ (so $\mathcal{M}_{1}\equiv\mathcal{M}$) are trivially obtained by additivity; see Appendix C. The two top-line panels in Figure 3 illustrate the results for $n=3$ and 5.

Figure 3: Minkowski functionals of merging models $\mathcal{M}_{n}$ with $n=3,5$ satellites (rows 1-2 from the top) and spiky models $\mathcal{S}_{n}$ with $n=1,2,4$ filaments (rows 3-4-5), illustrated by topologically equivalent bodies in the top left corner of the right panels. Left-to-right: volume, surface, and integrated curvature, normalised to the values of the central ball.
In $\mathcal{M}_{n}$ the satellite balls $B_{i}$ have different radii $r_{i}$ and are at distances $d_{i}\propto d$ from the central ball $B$ (see legends); lines with increasing thickness would represent subsequent stages of merging, with satellites going closer to $B$. In $\mathcal{S}_{n}$ the cylindric filaments have bases of radius $r_{i}$ and lengths $\ell_{i}\propto\ell$ (see legends); thicker lines represent later stages of gravitational evolution. For the $\mathcal{S}_{n}$ models $V_{2}$ does not depend on the radius of the filaments but only on their length. Lengths are in units of the central ball radius $R$.

### 2.3 Spiky model: filaments feeding clusters

Massive haloes form at the highest density nodes of the cosmic web. Even in the absence of major mergers, dark matter is continuously accreting along filaments connecting the nodes (e.g. Eckert et al., 2015; Connor et al., 2018). We can approximate such a spiky geometry by a ball $B$ of radius $R$ attached to $n$ distinct, i.e. not mutually intersecting, cylinders $C_{i}$ $(i=1,\dots,n)$ radially joined to $B$, each with length $\ell_{i}$ and a base of radius $r_{i}$ lying on the surface of $B$. Using additivity, the Steiner formula, and Equation (16) the Minkowski functionals of the resulting body $\mathcal{\bar{S}}_{n}=B\cup\bigcup_{i=1}^{n}C_{i}$ are $\displaystyle~{}~{}V_{0}^{\mathcal{\bar{S}}_{n}}$ $\displaystyle=\frac{\pi}{3}(2-n)R^{3}+\pi\sum_{i}\left[r_{i}^{2}\ell_{i}+\frac{p_{i}}{3}(3R^{2}-2Rp_{i}-p_{i}^{2})\right],$ (5a) $\displaystyle V_{1}^{\mathcal{\bar{S}}_{n}}$ $\displaystyle=\frac{\pi}{3}(2-n)R^{2}+\frac{\pi}{6}\sum_{i}(r_{i}^{2}+2r_{i}\ell_{i}-2Rp_{i}),$ (5b) $\displaystyle V_{2}^{\mathcal{\bar{S}}_{n}}$ $\displaystyle=\frac{2}{3}(2-n)R+\frac{1}{3}\sum_{i}\left(\ell_{i}+2p_{i}+r_{i}\arcsin\frac{r_{i}}{R}\right),$ (5c) in which $p_{i}=(R^{2}-r_{i}^{2})^{1/2}$ is the distance from the centre of $B$ to the $i$-th spherical cap bounded by the cylinder $C_{i}$.
The condition $C_{i}\cap C_{j}=\emptyset$ can be satisfied provided, approximately, $\sum_{i}r_{i}^{2}\lesssim 4R^{2}$. Equations (5) become simpler, while still keeping the essential information, if the free heads of the cylinders are not flat but spherical caps with the same curvature radius as the central ball, i.e. $\mathcal{L}_{i}=B\cap C_{i}$, so that $V_{\mu}(\mathcal{S}_{n})=V_{\mu}(B)+\sum_{i}V_{\mu}(C_{i})$. The Minkowski functionals are then $\displaystyle~{}~{}V_{0}^{\mathcal{S}_{n}}$ $\displaystyle=\frac{4\pi}{3}R^{3}+\pi\sum_{i}r_{i}^{2}\ell_{i}\,,$ (6a) $\displaystyle V_{1}^{\mathcal{S}_{n}}$ $\displaystyle=\frac{2\pi}{3}R^{2}+\frac{\pi}{3}\sum_{i}r_{i}\ell_{i}\,,$ (6b) $\displaystyle V_{2}^{\mathcal{S}_{n}}$ $\displaystyle=\frac{2}{3}R+\frac{1}{3}\sum_{i}\ell_{i}\,.$ (6c) The bottom rows of Figure 3 illustrate the results for $n=1,2,4$.

Figure 4: Minkowski functional iso-contours for the dumbbell model $\mathcal{D}$ with balls of the same radius $R$ as a function of the distance $D$ between the balls’ centres and of the radius $\rho$ of the cylindric bridge. Volume (solid lines; $\mu=0$), surface (dashed; $\mu=1$), and integrated mean curvature (dotted; $\mu=2$) are shown for values ranging from the smallest, as indicated, in steps $\Delta V_{\mu}=\\{2,0.5,0.5\\}~{}\mathrm{Mpc}^{3-\mu}$ moving rightward. Data points are described in Table 2.

### 2.4 Dumbbell model: cluster-pair bridge

Table 2: Observed cluster-pair bridges of single (upper table) or stacked (lower table) systems. The compilation is restricted to cluster pairs with similar radius $R\approx r=r_{200}$, separated by $D$ and with filament radius $\rho$. Lengths in $h^{-1}$Mpc.
Cluster/data set | redshift | $R$ | $D$ | $\rho$
---|---|---|---|---
A222 - A223 ($\bullet$) | 0.21 | 1.2 | $15\pm 3$ | 0.6
A399 - A401 ($\blacktriangle$) | 0.073 | $1.70$ | 3 | $1.52\pm 0.09$
A21 - PSZ2 G114.9 ($\blacktriangledown$) | 0.094 | $1.36$ | 4.2 | 0.92
SDSS/DR17 (LRG) | 0.2–0.5 | 0.5–1 | 6–14 | $\gtrsim 1$
BOSS + CFHTLenS | 0.3–0.6 | 1.25 | $7.1\pm 1$ | 1.25
SDSS/DR12 + Planck | 0–0.4 | 1.35 | 6–10 | $\leqslant 0.5$
CMASS + Planck | $\sim 0.55$ | 0.5–1 | 6–14 | $\leqslant 2.5$

Clusters of galaxies may reside in superclusters that are not yet in equilibrium. In the simplest configuration, major haloes are connected through thick filaments (e.g. Werner et al., 2008; Dietrich et al., 2012; Planck Collaboration et al., 2013; Bonjean et al., 2018, Table 2, rows 1-3), also detected by stacking techniques (Clampitt et al., 2016; Epps & Hudson, 2017; Tanimura et al., 2019; de Graaff et al., 2019, Table 2, rows 4-7).

Figure 5: Shapefinder iso-contours for the merger model $\mathcal{M}$, where two balls with radii $R$ and $r$ are separated by $d$. Left: $H_{1},H_{2},H_{3}$ iso-contours in units of the values attained for a unit ball $\mathcal{S}$, i.e. $H_{1}^{\mathcal{S}}=H_{2}^{\mathcal{S}}=H_{3}^{\mathcal{S}}=1$, increasing by $\Delta H_{i}/H_{i}^{\mathcal{S}}=(0.025,0.1,0.05)$ rightward. $H_{1}$ and $H_{2}$ are minimum when the two balls do not overlap, respectively for $r/R\sim 0.89$ and 0.41; $H_{3}$ is minimum for non-trivial overlap, at $(r/R,d/R)\approx(0.54,0.84)$. Right: planarity (filamentarity) iso-contours range in $[-0.025,0.125]$ ($[-0.15,0.3]$) in steps of 0.025 (0.05), crossing the vanishing value valid for a ball or through total embedding. Symbols as in Figure 2.
The morphology of an axially-symmetric body defined by two balls connected by a cylinder, $\mathcal{D}=B_{1}\cup B_{2}\cup C$, can be deduced from the previous equations using additivity and noting that the sum of the Minkowski functionals of the two spherical caps chopped by the cylinder bases, $\mathcal{L}_{1,2}=B_{1,2}\cap C$, is equivalent to the Minkowski functionals of the lens $\mathcal{L}=\mathcal{L}_{1}\cup\mathcal{L}_{2}$, as reported in Appendix B. After some algebra and recognising the two-fused-balls model, one obtains $V_{\mu}(\mathcal{D})=V_{\mu}^{\mathcal{M}}+V_{\mu}(C)$. The exact though cumbersome mathematical expression combines Equations (3) for two balls of radii $R$ and $r$ separated by an effective distance $d=(R^{2}-\rho^{2})^{1/2}+(r^{2}-\rho^{2})^{1/2}$, and the well-known Minkowski functionals of a cylinder with circular base of radius $\rho$ and height $D-d$, with $D$ the actual distance between the centres of $B_{1}$ and $B_{2}$. An illustrative example of Minkowski functional iso-contours for balls with the same radius $R$ is shown in Figure 4 as a function of the length and radius of the cylindric bridging filament. For relatively small bridge lengths ($D\sim 2R$), the functionals are quite degenerate. For larger radii, the degeneracy is broken. A compilation of systems that can be approximated by this geometry is reported for comparison; see Table 2. A more advanced configuration is obtained by replacing the cylinder by a truncated cone $P$ with circular bases of radii $\rho_{1}$ and $\rho_{2}$ and height $h=D-(R^{2}-\rho_{1}^{2})^{1/2}-(r^{2}-\rho_{2}^{2})^{1/2}$; see Appendix D. The Minkowski functionals are $V_{\mu}(\mathcal{D}_{P})=V_{\mu}(B_{1})+V_{\mu}(B_{2})+V_{\mu}(P)-V_{\mu}(\mathcal{L}_{1})-V_{\mu}(\mathcal{L}_{2})$, with the functionals for $P$ obtained from Equations (21-23). It is not difficult to further generalise this model by adding two additional cylindric filaments that protrude from the two haloes in opposite directions.
These two haloes can be regarded as local clumps of matter embedded in a single, bent cosmic filament similar to the A3016-A3017 system (Chon et al., 2019). Finally, note that a pile of truncated cones with matching bases can describe axially-symmetric filaments with varying thickness, well-suited for systems such as the one recently reported by Umehata et al. (2019) and Herenz et al. (2020).

Figure 6: Classification by shapefinders $(H_{1},H_{2},H_{3})$ of ellipsoidal triaxial, merging, spiky, and dumbbell models in two projections (left and right panels; see § 3 for values). Top: merging, spiky, and dumbbell models have central or major ball with radius $R=1$ or $2h^{-1}$Mpc. Correspondingly, they have $H_{1}\approx 1$ or $2h^{-1}$Mpc and $H_{2}\gtrsim 1$ or $2h^{-1}$Mpc. Bottom: zoom on models with major ball with $R=2h^{-1}$Mpc.

### 2.5 Mass assembly history and morphology

During the late stage of evolution before virialisation, satellite haloes are closer to the main halo. The main halo accretes mass at a rate and on a time-scale that depend on the epoch, on the initial mass and statistics of the primordial density field (Bond et al., 1991), on the mass and kinematics of sub-haloes (e.g. Zhao et al., 2003), and on tidal forces (Lapi & Cavaliere, 2011), which are possibly conditioned by dark energy (Pace et al., 2019). The filamentary structures feeding clusters tend to become shorter and thinner (e.g. Cautun et al., 2014), and the connectivity of the more massive, hence largest and latest-formed, haloes decreases over time (Choi et al., 2010; Codis et al., 2018; Kraljic et al., 2020). Since the Minkowski functionals account for the non-trivial geometrical and topological content of fused bodies regardless of their evolutionary stage, relaxed or not, one expects that they correlate with the dynamical state of galaxy clusters. This claim is supported by the results we obtained with idealised models.
As shown in Figure 3 (top panels), while the volume of merging models $\mathcal{M}_{n}$ is mainly sensitive to the relative size (radius) of the satellites, the area and integrated mean curvature strongly depend also on their relative distance from the main halo, lifting the degeneracies. Overall, late-time structures are more compact, i.e. they occupy a smaller volume, cover a smaller area, and have smaller intrinsic curvature than at early times (later stages of the gravitational evolution are represented by thicker lines and by the solid-dashed-dotted sequence). The morphology captured by the Minkowski functionals for spiky models $\mathcal{S}_{n}$ (Figure 3, bottom rows) is similar to that of the merging models: regardless of the number of filaments attached to the central ball, the volume primarily depends on the thickness (radius) of the filaments, the area is likewise sensitive to the relative length of the filaments, $\ell_{i}$, while the integrated mean curvature only depends on $\ell_{i}$. Again, the overall amplitude of the Minkowski functionals decreases for late-time morphologies, converging towards the values of the central main cluster. A numerical study based on $N$-body simulations is needed to quantitatively assess the correlation between the full set of Minkowski functionals and the relaxation state of these structures. It will be of interest to evaluate the ability of these statistics to distinguish between ‘stalled’ and ‘accreting’ haloes, which are located at the nodes of a network of respectively thin and thick filaments feeding them (Borzyszkowski et al., 2017).

## 3 Classification by shapefinders

Sahni et al. (1998) introduced the thickness $H_{1}=V_{0}/2V_{1}$, width $H_{2}=2V_{1}/\pi V_{2}$, and length $H_{3}=3V_{2}/4$ of isodensity contours, dubbed shapefinders, to investigate in a non-parametric way the size and shape of the matter density field above or below a given threshold on large scales. Shandarin et al. (2004) used these statistics for voids and superclusters.
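In code, the shapefinders are one line each. The sketch below is ours (names are illustrative): it computes $(H_{1},H_{2},H_{3})$ from $(V_{0},V_{1},V_{2})$, together with the derived planarity and filamentarity ratios, and checks that a ball of radius $R$ yields $H_{1}=H_{2}=H_{3}=R$, for which both ratios vanish.

```python
import math

def shapefinders(v0, v1, v2):
    # Sahni et al. (1998): thickness, width, and length.
    h1 = v0 / (2.0 * v1)             # H1 = V0 / 2V1
    h2 = 2.0 * v1 / (math.pi * v2)   # H2 = 2V1 / (pi V2)
    h3 = 3.0 * v2 / 4.0              # H3 = 3V2 / 4
    return h1, h2, h3

def planarity_filamentarity(h1, h2, h3):
    # P = (H2 - H1)/(H2 + H1), F = (H3 - H2)/(H3 + H2)
    return (h2 - h1) / (h2 + h1), (h3 - h2) / (h3 + h2)

def ball_functionals(R):
    # A ball of radius R has V0 = 4 pi R^3/3, V1 = 2 pi R^2/3, V2 = 4R/3.
    return 4.0 * math.pi / 3.0 * R**3, 2.0 * math.pi / 3.0 * R**2, 4.0 * R / 3.0
```

Because all three shapefinders equal $R$ for a ball, any departure of $(H_{1},H_{2},H_{3})$ from a common value signals a non-spherical morphology.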
We employ the shapefinders to attempt a classification of the morphology of galaxy clusters. The shapefinders are geometrically and physically motivated and can characterise all the different phases of the merging accretion history, or, in a complementary view, all the halo configurations that populate the universe at a single cosmic time. The dependence of the shapefinders on the model parameters is illustrated only for the two-fused-balls model, $\mathcal{M}$, as a prototype for major mergers, fully accounted for by only two parameters, i.e. the ratio of the balls’ radii $r/R$ and the distance between the balls in units of the major ball radius, $d/R$. Figure 5 (left panel) suggests that major-merging clusters from the LC2 catalogue, which share similar geometric scales $r$ or $d$, can be distinguished by the values of $H_{1}$, $H_{2}$, and $H_{3}$, whose iso-contours are markedly orthogonal in different parts of the $(r,d)$ parameter space. For reference, the two major-merging systems with secondary haloes orbiting at the splashback radius of the main halo and having 3 and 10 times smaller mass (upper and lower empty triangles, respectively) differ by about 16, 12, and 6 percent in volume, surface, and integrated mean curvature, corresponding to a 4, 16, and 6 percent difference in $H_{1}$, $H_{2}$, and $H_{3}$, respectively. The so-called planarity $P=(H_{2}-H_{1})/(H_{2}+H_{1})$ and filamentarity $F=(H_{3}-H_{2})/(H_{3}+H_{2})$ are less suitable shapefinders to classify the systems considered in this study, especially the ‘stellar’ models $\mathcal{M}_{n}$ and $\mathcal{S}_{n}$, for which the words ‘planarity’ and ‘filamentarity’ are equivocal. Nonetheless, for the two-fused-balls model $P$ and $F$ can differ by about $\pm 0.15$ from zero, which is the value for a ball. A Blaschke diagram based on $(P,F)$ (see e.g. Schmalzing et al., 1999) could therefore be an interesting alternative diagnostic to classify more realistic systems.
Figure 7: Example of shapefinder classification of observed systems (projections as in Figure 6): triaxial haloes (Sereno et al., 2018b, disks, colour-coded by redshift as in Figure 1), major mergers (LC2 catalogue by Sereno, 2015, squares, colour-coded by redshift as in Figure 2), dumbbell systems (black symbols and shaded regions, see Table 2 and Figure 4). For reference, major merging models $\mathcal{M}$ are shown, all having a main halo with radius $R=0.1-3h^{-1}$Mpc and a secondary halo with radius $r$ located at distance $d$ such that $(r,d)=(R,0.3R)$ (i.e. close haloes with the same radius; solid line), $(0.5R,R)$ (dashed), and $(R,1.5R)$ (i.e. distant haloes with the same radius; dot-dashed). Triaxial haloes by Sereno et al. (2018b) nicely fit the ellipsoidal prolate model with $(a,b,c)=(1~{}h^{-1}\mathrm{Mpc},b,b)$, $b\in[0.1,1]~{}h^{-1}$Mpc in the $(H_{2},H_{3})$ plane (dotted line) but not in the $(H_{1},H_{2})$ plane.

The geometrical models presented in Section 2 illustrate the potential of the classification scheme based on $(H_{1},H_{2},H_{3})$, which can be applied to clusters of galaxies or to any astrophysical or physical system with non-trivial geometry.
Figure 6 shows the three projections of the $(H_{1},H_{2},H_{3})$ parameter space populated with triaxial ellipsoids $\mathcal{E}$ (with axes $a,b,c\in[1,5]~{}h^{-1}$Mpc), merging models $\mathcal{M}_{n}$ with $n=1,2,3$ satellites (balls with radius $r_{i}\in[R/2,R]$ at distance $d\in[R,2R]$ from the central ball with radius $R=1,2h^{-1}$Mpc), spiky models $\mathcal{S}_{n}$ with $n=1,2,3$ filaments (cylinders with radius $r_{i}\in[R/4,3R/4]$ and length $\ell_{i}\in[1,5]h^{-1}$Mpc, feeding a central ball with radius $R=1,2~{}h^{-1}$Mpc), and dumbbell models $\mathcal{D}_{h}$ (major ball with radius $R=1,2~{}h^{-1}$Mpc, minor ball with radius $r\in[R/2,R]$, cylindric bridge with radius $\rho\in[R/4,3R/4]$ and height $h=5,10~{}h^{-1}$Mpc).444Length units are here irrelevant since only ratios do matter; in Figure 6 we adopt $h^{-1}$Mpc as common practice in cosmology. The triaxial ellipsoids $\mathcal{E}$ (shown only in the left panels) extend over the broadest region of the parameter space, quite well-separated from the other models. For fixed width $H_{2}$, the maximum thickness $H_{1}$ and the minimum length $H_{3}$ are achieved for prolate and oblate ellipsoids, which are almost superposed. Instead, as shown in Figure 1, the same value of the Minkowski functionals corresponds to different values of the triaxiality parameter, viz. the $V_{\mu}$ are orthogonal to $T$, thus bringing less discriminating power than the shapefinders. All but the $\mathcal{E}$ models are approximately centred around a value of $H_{1}$ that is equal to the radius $R$ of the central or major ball. Disregarding the unavoidable degeneracies between models, for a fixed value of $R$ (see right panels, showing only models with $R=2h^{-1}$Mpc) there is a clear trend: $H_{2}$ increases with the number $n$ of satellites in merging models $\mathcal{M}_{n}$, while it only mildly decreases with the number $n$ of cylindric filaments (connectivity) of spiky models $\mathcal{S}_{n}$.
The connectivity is instead more evident in the $(H_{2},H_{3})$ plane, increasing on average with $H_{3}$. A similar trend occurs for the $\mathcal{M}_{n}$ models, with larger integrated mean curvature or $H_{3}$ occurring for systems with more satellites. The Minkowski functionals support these conclusions. As suggested by Figure 3 (rows 1-2), while the volume ($V_{0}$) and, to a smaller extent, the surface ($V_{1}$) of merging systems $\mathcal{M}_{n}$ mainly inform about the size of the central ball and that of the largest satellite, the integrated mean curvature ($V_{2}$) is sensitive also to the smaller satellites even when $n$ is small, catching both their size and distance from the central ball. For the $\mathcal{S}_{n}$ models (rows 3-5), $V_{0}$ and $V_{1}$ equally respond to the thickness and length of the filaments, while the slope of $V_{2}$ as a function of the typical length increases on average with the connectivity $n$; in this case the classification in the $(H_{2},H_{3})$ plane seems more selective. Dumbbell models $\mathcal{D}_{h}$ attain the largest values of $H_{1}$, which increases with the length $h$ of the bridge. Consistently with $V_{\mu}$ (see Figure 4), the width $H_{2}$ is mainly sensitive to the radius of the smaller ball, while the length $H_{3}$ is strongly responsive to the bridge length, almost regardless of the other scales of the dumbbell. Figure 7 illustrates the ability of the shapefinders to separate the observed systems presented in the preceding sections. Triaxial ellipsoids by Sereno et al. (2018b, disks), major mergers of the LC2 catalogue (Sereno, 2015, squares), and cluster-pair bridges (see Table 2, symbols and large squares) nicely occupy different positions in the parameter space.
## 4 Discussion and Conclusions

The forthcoming generation of imaging and spectroscopic surveys carried out with DESI, WEAVE, 4MOST, Rubin Observatory, Euclid, Roman Space Telescope, eROSITA, or SKA will collect thousands of new galaxy clusters and proto-clusters at low and high redshift and with considerable spatial resolution, allowing us to establish a firm relationship between their complex morphology and the mass assembly history. The recent exquisite observations obtained with CFHT/Megacam, VLT/MUSE or ALMA already support the introduction of new spatial statistics besides the traditional ones calculated from the mass or inertia tensors (ellipticity, triaxiality, etc.), which are well-suited for relaxed or poorly resolved systems but less appropriate to describe merging clusters or their filamentary environment. The usual morphological parameters can be impractical for diverse samples. Axial ratios and inertia eigenvectors provide a very accurate and physically motivated description of regular and approximately triaxial haloes, but they can fail to properly describe major mergers or bridges and filaments. In some sense, the classic morphological schemes usually adopted in cluster astronomy can be properly used only after the shape of the halo has already been assessed. One first determines the class to which the cluster belongs and then adopts the relevant shape classifier. The shapefinders based on the Minkowski functionals, introduced by Sahni et al. (1998) to investigate the morphology of the large scale structure and dubbed thickness ($H_{1}$), width ($H_{2}$), and length ($H_{3}$), provide instead a small set of parameters that can properly describe very diverse morphologies, possibly correlating with the entire accretion history of the halo. This study assesses the capability of Minkowski functionals and shapefinders to discriminate between ellipsoidal, merging, spiky, and dumbbell morphologies, providing explicit formulae for simplified geometries.
Equations (1) for triaxial ellipsoids $\mathcal{E}$ and (3) for two fused balls $\mathcal{M}$ are the main analytical results of this study; to our knowledge the formulae for their integrated mean curvature, $H_{\mathcal{E}}$ and $H_{\mathcal{M}}$, are new in the literature. Using the additivity of the Minkowski functionals and the Steiner formula, one can generalise the model to $n$ merging balls (satellites), $\mathcal{M}_{n}$. Analytical expressions for axially-symmetric filaments with varying thickness, Equations (21-23), are pivotal to build spiky geometries $\mathcal{S}_{n}$ accounting for filaments feeding a central halo, or cluster-pair bridges $\mathcal{D}$. It is important to note that the (scalar) Minkowski functionals for the merger and spiky models, $\mathcal{M}_{n}$ and $\mathcal{S}_{n}$, in which the different satellites or branches do not mutually overlap, do not supply any information about the relative orientation of the substructures. The morphology of anisotropic bodies can instead be distinguished by the vector- and tensor-valued Minkowski functionals (e.g. Beisbart et al., 2001, 2002), which can be interpreted as generalisations of the moment of inertia of the body. Consistently, the so-called planarity and filamentarity shapefinders deduced from $(H_{1},H_{2},H_{3})$ would be misleading for the simplified models considered here, and are thus not used for the classification. Not surprisingly, as shown in Figure 3, the Minkowski functionals respond to the distance and size of satellites, to the connectivity of central haloes, and to the thickness of feeding filaments. These geometrical and topological properties trace the growth of structures and have an impact on the physical properties of galaxies in the nodes of the cosmic web (Choi et al., 2010; Codis et al., 2018; Kraljic et al., 2018, 2020).
Reasonably enough, Minkowski functionals and shapefinders are therefore correlated with the mass assembly history both in the dark and gaseous components and could serve as diagnostics to investigate the relationship between local morphology and global dynamics. Figures 6 and 7 summarise this study. They show how a simple three-dimensional parameter space is adequate to describe the full variety of clusters. Though not fully lifting the degeneracies that necessarily occur between very different morphologies, $(H_{1},H_{2},H_{3})$ can be effectively used as classifiers provided at least some effective radius of the major structure is estimated. This study is a proof-of-concept to illustrate the potential of Minkowski functionals and shapefinders for cluster studies. The full practical potential of this approach in cluster morphological analysis has still to be assessed. The toy models we considered can capture some of the main features of a diverse sample of clusters but likely fail to describe the more complex configurations that show up in the observed sky or in numerical simulations. In these cases, alternative computational techniques should be used. Depending on the discrete or continuous nature of the mass tracers, the underlying Minkowski functionals can be estimated using the so-called germ-grain or excursion-set models (Mecke et al., 1994; Schmalzing et al., 1999), i.e. by dressing the point processes (e.g. galaxies or subhaloes) with balls of fixed radius, whose union forms the continuous body, or by considering the iso-contours of a suitably smoothed random field (e.g. the density or temperature of the cluster), respectively using the radius of the balls or the threshold value defining the iso-contours as the diagnostic parameter. We showed the potential of the shapefinders as a morphological classifier in 3D.
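As a minimal illustration of the germ-grain route, the sketch below (ours; the parameter choices and function names are illustrative) dresses two points with balls, estimates the volume functional $V_{0}$ of their union by brute-force voxel counting, and compares it with the closed form of Equation (3a).

```python
import math

def v0_merger(R, r, d):
    # Volume functional V0 of two fused balls, Equation (3a).
    return (2 * math.pi / 3 * (R**3 + r**3 - d**3 / 8)
            + math.pi / 2 * (R**2 + r**2) * d
            + math.pi / (4 * d) * (R**2 - r**2) ** 2)

def frange(lo, hi, h):
    n = int((hi - lo) / h) + 1
    return [lo + i * h for i in range(n)]

def v0_germ_grain(R, r, d, h=0.04):
    # Voxel-counting estimate of V0 for the union of two balls with
    # centres at x = 0 and x = d on the x axis: count voxel centres
    # inside the union and multiply by the voxel volume h^3.
    xs = frange(-R - h, d + r + h, h)
    ys = frange(-R - h, R + h, h)
    count = 0
    for x in xs:
        for y in ys:
            for z in ys:
                if (x * x + y * y + z * z <= R * R
                        or (x - d) ** 2 + y * y + z * z <= r * r):
                    count += 1
    return count * h**3

R, r, d = 1.0, 0.8, 1.2
exact = v0_merger(R, r, d)          # equals 1.875*pi for these parameters
estimate = v0_germ_grain(R, r, d)   # agrees to better than ~2% at this h
```

The same counting strategy, refined with marching-cubes-like surface and curvature estimators, underlies practical germ-grain codes; here only the volume is cross-checked.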
The three-dimensional shape of galaxy clusters can be constrained with joint multi-wavelength analyses combining lensing, X-ray, and SZ (Sereno et al., 2018b) or deep spectroscopic campaigns (Rosati et al., 2014; Finoguenov et al., 2019; Kuchner et al., 2020), which unveil the third dimension orthogonal to the projected sky. The data sets required by these analyses can be very expensive and the full 3D analysis of galaxy clusters is usually not feasible for most of the known halos. Even though this situation may change with the next generation of surveys and instruments, it might be useful to consider 2D shapefinder classes in projected space, which would be especially valuable in the context of large surveys. Finally, it is worth stressing that morphology alone cannot unambiguously determine the degree of equilibrium of a halo. Apparently regular clusters can in fact be unrelaxed (Meneghetti et al., 2014). Any possible correlation between shapefinders and the dynamical state of clusters will require more accurate investigation, also based on $N$-body simulations. ## Acknowledgments We thank G. Covone, K. Kraljic, and E. Sarpa for discussions and a critical reading of the manuscript, and the anonymous referee for the fruitful suggestions that largely improved the illustration of our results. This work has been partially supported by the Programme National Cosmology et Galaxies (PNCG) of CNRS/INSU with INP and IN2P3, co-funded by CEA and CNES, and Labex OCEVU (ANR-11-LABX-0060). MS acknowledges financial contribution from contracts ASI-INAF n.2017-14-H.0 and INAF mainstream project 1.05.01.86.10. MS acknowledges LAM for hospitality. 
Figure A1: Local mean curvature $H_{\mathrm{loc}}$ of triaxial ellipsoids (inset) with axes ratio $(b/a,c/a)=(0.6,0.6)$ (left), $(0.6,0.4)$ (centre), and $(0.6,0.2)$ (right) as function of the angles measured from the centre of the body and spanning a quarter of the equatorial plane (solid line) and of the two perpendicular meridian planes (dashed, dotted; same ). For the axially symmetric, prolate ellipsoid (left) two directions are equivalent. Note the different range of values of $H_{\mathrm{loc}}$. ## Data Availability The data underlying this article are available in the article and in its online supplementary material. ## References * Abramowitz & Stegun (1970) Abramowitz M., Stegun I. A., 1970, Handbook of mathematical functions : with formulas, graphs, and mathematical tables * Beisbart et al. (2001) Beisbart C., Valdarnini R., Buchert T., 2001, A&A, 379, 412 * Beisbart et al. (2002) Beisbart C., Dahlke R., Mecke K., Wagner H., 2002, in Mecke K., Stoyan D., eds, Lecture Notes in Physics, Berlin Springer Verlag Vol. 600, Morphology of Condensed Matter. pp 238–260 (arXiv:physics/0203072) * Bett et al. (2007) Bett P., Eke V., Frenk C. S., Jenkins A., Helly J., Navarro J., 2007, MNRAS, 376, 215 * Bleem et al. (2015) Bleem L. E., et al., 2015, ApJS, 216, 27 * Bonamigo et al. (2015) Bonamigo M., Despali G., Limousin M., Angulo R., Giocoli C., Soucail G., 2015, MNRAS, 449, 3171 * Bond et al. (1991) Bond J. R., Cole S., Efstathiou G., Kaiser N., 1991, ApJ, 379, 440 * Bonjean et al. (2018) Bonjean V., Aghanim N., Salomé P., Douspis M., Beelen A., 2018, A&A, 609, A49 * Borzyszkowski et al. (2017) Borzyszkowski M., Porciani C., Romano-Díaz E., Garaldi E., 2017, MNRAS, 469, 594 * Boselli et al. (2014) Boselli A., et al., 2014, A&A, 570, A69 * Cautun et al. (2014) Cautun M., van de Weygaert R., Jones B. J. T., Frenk C. S., 2014, MNRAS, 441, 2923 * Chiu et al. 
(2018) Chiu I.-N., Umetsu K., Sereno M., Ettori S., Meneghetti M., Merten J., Sayers J., Zitrin A., 2018, ApJ, 860, 126 * Choi et al. (2010) Choi E., Bond N. A., Strauss M. A., Coil A. L., Davis M., Willmer C. N. A., 2010, MNRAS, 406, 320 * Chon et al. (2019) Chon G., Böhringer H., Dasadia S., Kluge M., Sun M., Forman W. R., Jones C., 2019, A&A, 621, A77 * Chua et al. (2019) Chua K. T. E., Pillepich A., Vogelsberger M., Hernquist L., 2019, MNRAS, 484, 476 * Clampitt et al. (2016) Clampitt J., Miyatake H., Jain B., Takada M., 2016, MNRAS, 457, 2391 * Codis et al. (2018) Codis S., Pogosyan D., Pichon C., 2018, MNRAS, 479, 973 * Connor et al. (2018) Connor T., et al., 2018, ApJ, 867, 25 * Cooray (2000) Cooray A. R., 2000, MNRAS, 313, 783 * Cucciati et al. (2018) Cucciati O., et al., 2018, A&A, 619, A49 * Dalal et al. (2008) Dalal N., White M., Bond J. R., Shirokov A., 2008, ApJ, 687, 12 * De Filippis et al. (2005) De Filippis E., Sereno M., Bautz M. W., Longo G., 2005, ApJ, 625, 108 * Despali et al. (2014) Despali G., Giocoli C., Tormen G., 2014, MNRAS, 443, 3208 * Diemer & Kravtsov (2014) Diemer B., Kravtsov A. V., 2014, ApJ, 789, 1 * Diemer et al. (2017) Diemer B., Mansfield P., Kravtsov A. V., More S., 2017, ApJ, 843, 140 * Dietrich et al. (2012) Dietrich J. P., Werner N., Clowe D., Finoguenov A., Kitching T., Miller L., Simionescu A., 2012, Nature, 487, 202 * Donahue et al. (2016) Donahue M., et al., 2016, ApJ, 819, 36 * Eckert et al. (2015) Eckert D., et al., 2015, Nature, 528, 105 * Epps & Hudson (2017) Epps S. D., Hudson M. J., 2017, MNRAS, 468, 2605 * Faltenbacher & White (2010) Faltenbacher A., White S. D. M., 2010, ApJ, 708, 469 * Finoguenov et al. (2019) Finoguenov A., et al., 2019, The Messenger, 175, 39 * Gibson & Scheraga (1987) Gibson K. D., Scheraga H. A., 1987, Mol. Phys., 62, 1247 * Greenslade et al. (2018) Greenslade J., et al., 2018, MNRAS, 476, 3336 * Hadwiger (1957) Hadwiger H., 1957, Vorlesungen über Inhalt, Oberflache und Isoperimetrie. 
Die Grundlehren der mathematischen Wissenschaften, Springer * Herenz et al. (2020) Herenz E. C., Hayes M., Scarlata C., 2020, arXiv e-prints, p. arXiv:2001.03699 * Kim et al. (2016) Kim S., et al., 2016, ApJ, 833, 207 * Kraljic et al. (2018) Kraljic K., et al., 2018, MNRAS, 474, 547 * Kraljic et al. (2020) Kraljic K., et al., 2020, MNRAS, 491, 4294 * Kuchner et al. (2020) Kuchner U., et al., 2020, MNRAS, 494, 5473 * Lapi & Cavaliere (2011) Lapi A., Cavaliere A., 2011, ApJ, 743, 127 * Limousin et al. (2013) Limousin M., Morandi A., Sereno M., Meneghetti M., Ettori S., Bartelmann M., Verdugo T., 2013, Space Sci. Rev., 177, 155 * Lovisari et al. (2017) Lovisari L., et al., 2017, ApJ, 846, 51 * Macciò et al. (2007) Macciò A. V., Dutton A. A., van den Bosch F. C., Moore B., Potter D., Stadel J., 2007, MNRAS, 378, 55 * Maturi et al. (2019) Maturi M., Bellagamba F., Radovich M., Roncarelli M., Sereno M., Moscardini L., et al. 2019, MNRAS, 485, 498 * Mecke (2000) Mecke K. R., 2000, in Mecke K. R., Stoyan D., eds, Lecture Notes in Physics, Berlin Springer Verlag Vol. 554, Statistical Physics and Spatial Statistics. The Art of Analyzing and Modeling Spatial Structures and Pattern Formation. p. 111 * Mecke et al. (1994) Mecke K. R., Buchert T., Wagner H., 1994, A&A, 288, 697 * Meneghetti et al. (2014) Meneghetti M., et al., 2014, ApJ, 797, 34 * More et al. (2016) More S., et al., 2016, ApJ, 825, 39 * Oguri et al. (2018) Oguri M., et al., 2018, PASJ, 70, S20 * Olivares et al. (2019) Olivares V., et al., 2019, A&A, 631, A22 * Pace et al. (2019) Pace F., Schimd C., Mota D. F., Del Popolo A., 2019, J. Cosmology Astropart. Phys., 2019, 060 * Pierre et al. (2016) Pierre M., et al., 2016, A&A, 592, A1 * Planck Collaboration et al. (2013) Planck Collaboration et al., 2013, A&A, 550, A134 * Planck Collaboration et al. (2016) Planck Collaboration et al., 2016, A&A, 594, A27 * Poelaert et al. (2011) Poelaert D., Schniewind J., Janssens F., 2011, arXiv e-prints, p. 
arXiv:1104.5145 * Prada et al. (2006) Prada F., Klypin A. A., Simonneau E., Betancort-Rijo J., Patiri S., Gottlöber S., Sanchez-Conde M. A., 2006, ApJ, 645, 1001 * Rosati et al. (2014) Rosati P., et al., 2014, The Messenger, 158, 48 * Rykoff et al. (2014) Rykoff E. S., et al., 2014, ApJ, 785, 104 * Sahni et al. (1998) Sahni V., Sathyaprakash B. S., Shandarin S., 1998, ApJ, 495, L5 * Schmalzing et al. (1999) Schmalzing J., Buchert T., Melott A. L., Sahni V., Sathyaprakash B. S., Shandarin S. F., 1999, ApJ, 526, 568 * Sereno (2007) Sereno M., 2007, MNRAS, 380, 1207 * Sereno (2015) Sereno M., 2015, MNRAS, 450, 3665 * Sereno et al. (2006) Sereno M., De Filippis E., Longo G., Bautz M. W., 2006, ApJ, 645, 170 * Sereno et al. (2018a) Sereno M., et al., 2018a, Nature Astronomy, 2, 744 * Sereno et al. (2018b) Sereno M., Umetsu K., Ettori S., Sayers J., Chiu I.-N., Meneghetti M., Vega-Ferrero J., Zitrin A., 2018b, ApJ, 860, L4 * Shandarin et al. (2004) Shandarin S. F., Sheth J. V., Sahni V., 2004, MNRAS, 353, 162 * Springel et al. (2004) Springel V., White S. D. M., Hernquist L., 2004, in Ryder S., Pisano D., Walker M., Freeman K., eds, IAU Symposium Vol. 220, Dark Matter in Galaxies. p. 421 * Tanimura et al. (2019) Tanimura H., et al., 2019, MNRAS, 483, 223 * Umehata et al. (2019) Umehata H., et al., 2019, Science, 366, 97 * Veena et al. (2018) Veena V. S., Vig S., Mookerjea B., Sánchez-Monge Á., Tej A., Ishwara-Chandra C. H., 2018, ApJ, 852, 93 * Werner et al. (2008) Werner N., Finoguenov A., Kaastra J. S., Simionescu A., Dietrich J. P., Vink J., Böhringer H., 2008, A&A, 482, L29 * Zhao et al. (2003) Zhao D. H., Mo H. J., Jing Y. P., Börner G., 2003, MNRAS, 339, 12 * de Graaff et al. (2019) de Graaff A., Cai Y.-C., Heymans C., Peacock J. A., 2019, A&A, 624, A48 ## Appendix A Integrated mean curvature of an ellipsoid Following Poelaert et al. 
(2011), an ellipsoid described by $\frac{X^{2}}{a^{2}}+\frac{Y^{2}}{b^{2}}+\frac{Z^{2}}{c^{2}}=1,$ (7) with principal semi-axes $a\geqslant b\geqslant c$ and central Cartesian coordinates $\displaystyle X$ $\displaystyle=a\cos\theta,$ (8) $\displaystyle Y$ $\displaystyle=b\sin\theta\cos\phi,$ (9) $\displaystyle Z$ $\displaystyle=c\sin\theta\sin\phi,$ (10) written in terms of the eccentric anomalies $\theta$ and $\phi$, has local mean curvature $H_{\mathrm{loc}}(\theta,\phi)=\frac{h^{3}(a^{2}+b^{2}+c^{2}-R^{2})}{2a^{2}b^{2}c^{2}},$ (11) with $h=\frac{abc}{\sqrt{b^{2}c^{2}\cos^{2}\theta+a^{2}(c^{2}\cos^{2}\phi+b^{2}\sin^{2}\phi)\sin^{2}\theta}}$ (12) being the shortest distance (‘height’) from the centre to the tangent plane to the ellipsoid at the point considered and $R=\sqrt{X^{2}+Y^{2}+Z^{2}}$ the radius to this point upon the ellipsoid surface (see Figure A1). The local Gaussian curvature is $G_{\mathrm{loc}}=h^{4}/a^{2}b^{2}c^{2}$. The mean curvature integrated over the surface is $H\equiv abc\int_{0}^{2\pi}\mathrm{d}\phi\int_{0}^{\pi}\mathrm{d}\theta\,\frac{\sin\theta}{h}H_{\mathrm{loc}}=\frac{abc}{a^{2}}\,(I_{1}+I_{2}),$ (13) with $\displaystyle I_{1}$ $\displaystyle=\int_{0}^{2\pi}\frac{a^{2}+k^{2}}{k^{2}}\frac{\mathrm{arctanh}~{}U}{U}\mathrm{d}\phi,$ (14a) $\displaystyle I_{2}$ $\displaystyle=\int_{0}^{2\pi}\frac{a^{2}-b^{2}-c^{2}-k^{2}}{k^{2}}\left(\frac{1}{U^{2}}-\frac{\mathrm{arctanh}~{}U}{U^{3}}\right)\mathrm{d}\phi,$ (14b) where $U^{2}=1-\frac{b^{2}c^{2}}{a^{2}k^{2}}\leq 1$ and $k^{2}=b^{2}\sin^{2}\phi+c^{2}\cos^{2}\phi$. The dimensionless integrals (14) are evaluated numerically. Analytic limits exist for prolate and oblate ellipsoids (see main text). ## Appendix B Integrated mean curvature of two merged balls Figure A2: Section of two fused balls $B_{1}$ and $B_{2}$ with centres separated by a distance $r$ and radii $R_{1}$ and $R_{2}$. Their intersection $\mathcal{L}\equiv B_{1}\cap B_{2}$ forms a “bi-concave lens” (light grey). 
Covering $\mathcal{L}$ with balls of radius $\epsilon$ (only two are shown, dotted) one obtains the parallel lens $\mathcal{L}_{\epsilon}$ (dashed), which results in the union of two axially-symmetric dihedra $D_{1}$ and $D_{2}$ glued to a wedged torus (sections $T$ and $T^{\prime}$ in dark grey). The integrated mean curvature of $\mathcal{M}=B_{1}\cup B_{2}$ is calculated using $V_{2}(B_{1}\cup B_{2})=V_{2}(B_{1})+V_{2}(B_{2})-V_{2}(B_{1}\cap B_{2})$. The last term is derived from the Steiner formula applied to the parallel lens $\mathcal{L}_{\epsilon}$, namely the uniform coverage of the lens $\mathcal{L}\equiv B_{1}\cap B_{2}$ by balls of radius $\epsilon$. As illustrated in Figure A2, $\mathcal{L}_{\epsilon}$ results in the union of $\mathcal{L}$ with two dihedra $D_{1}$ and $D_{2}$ of thickness $\epsilon$ and opening angles respectively $\theta_{1}$ and $\theta_{2}$, and a wedged torus $T$ centred on $O$ and with cross-section $(\pi-\theta_{1}-\theta_{2})\epsilon^{2}$. According to the Steiner formula, its volume $V(\mathcal{L}_{\epsilon})\equiv V(\mathcal{L})+V(D_{1})+V(D_{2})+V(T)$ is equal to $V(\mathcal{L})+A(\mathcal{L})\epsilon+H(\mathcal{L})\epsilon^{2}+\frac{4\pi}{3}\epsilon^{3}$ (Mecke, 2000), i.e. a cubic polynomial in $\epsilon$ with coefficients proportional to the Minkowski functionals; the integrated mean curvature $H(\mathcal{L})$ corresponds to the sum of the terms of $V(\mathcal{L}_{\epsilon})$ proportional to $\epsilon^{2}$. The volume of the spherical dihedron $D_{1}$ is $V(D_{1})=2\pi\left(1-\frac{p}{R_{1}}\right)\left(R_{1}^{2}\epsilon+R_{1}\epsilon^{2}+\frac{\epsilon^{3}}{3}\right),$ (15) with $p\equiv\overline{OO}_{1}=(R_{1}^{2}-R_{2}^{2}+r^{2})/2r$ the distance between the centre of $B_{1}$ and the centroid of the lens and $r\equiv\overline{O_{1}O}_{2}$. A similar expression holds for $V(D_{2})$ replacing $R_{1}$ by $R_{2}$ and $p$ by $r-p$. 
The volume of the wedged torus $T$ is $V(T)=\pi(\pi-\theta_{1}-\theta_{2})\rho\epsilon^{2}+\frac{2\pi}{3}(\cos\theta_{1}+\cos\theta_{2})\epsilon^{3},$ (16) with $\rho=(R_{1}^{2}-p^{2})^{1/2}$ its major radius, $\cos\theta_{1}=p/R_{1}$, and $\cos\theta_{2}=(r-p)/R_{2}$. The Steiner formula finally yields $\displaystyle V(\mathcal{L})$ $\displaystyle=\frac{\pi}{12}r^{3}-\frac{\pi}{4}\frac{\Delta^{4}}{r}-\frac{\pi}{2}r\Sigma^{2}+\frac{2\pi}{3}(R_{1}^{3}+R_{2}^{3})\;,$ (17) $\displaystyle A(\mathcal{L})$ $\displaystyle=2\pi\Sigma^{2}-\pi r(R_{1}+R_{2})-\frac{\pi}{r}(R_{1}-R_{2})\Delta^{2}\;,$ (18) $\displaystyle H(\mathcal{L})$ $\displaystyle=2\pi(R_{1}+R_{2}-r)+\pi\psi\sqrt{2\Sigma^{2}-r^{2}-\frac{\Delta^{4}}{r^{2}}}\;,$ (19) in which $\cos\psi\equiv\cos(\pi-\theta_{1}-\theta_{2})=(\Sigma^{2}-r^{2})/(2R_{1}R_{2})$, $\Sigma^{2}=R_{1}^{2}+R_{2}^{2}$, and $\Delta^{2}=R_{1}^{2}-R_{2}^{2}$. All these equations are valid for a non-trivial intersection, i.e. overlap with no embedding or $|r-R_{1}|\leqslant R_{2}$. 
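The mean-curvature part of these results lends itself to a quick numerical sanity check, assembling the merger by additivity. This is a minimal sketch assuming only numpy; the tangent ($r\to R_{1}+R_{2}$) and embedded ($r\to|R_{1}-R_{2}|$) limits must recover the values for two separate balls and for the larger ball alone, respectively:

```python
import numpy as np

def H_lens(R1, R2, r):
    """Integrated mean curvature of the lens L = B1 ∩ B2, Equation (19)."""
    Sigma2, Delta2 = R1**2 + R2**2, R1**2 - R2**2
    # cos(psi) = (Sigma^2 - r^2) / (2 R1 R2); clip guards rounding at the limits
    psi = np.arccos(np.clip((Sigma2 - r**2) / (2 * R1 * R2), -1.0, 1.0))
    return (2 * np.pi * (R1 + R2 - r)
            + np.pi * psi * np.sqrt(max(2 * Sigma2 - r**2 - Delta2**2 / r**2, 0.0)))

def H_merger(R1, R2, r):
    """H of M = B1 ∪ B2 by additivity, valid for |r - R1| <= R2:
    H(M) = H(B1) + H(B2) - H(L), with H(ball) = 4*pi*R."""
    return 4 * np.pi * (R1 + R2) - H_lens(R1, R2, r)

def H_multi_merger(r, satellites):
    """Same additivity for a central ball of radius r with n satellites,
    given as (r_i, d_i) pairs assumed not to overlap one another."""
    return (4 * np.pi * r
            + sum(4 * np.pi * ri - H_lens(r, ri, di) for ri, di in satellites))
```

For two equal unit balls with centres one radius apart, `H_merger(1, 1, 1)` returns $6\pi-\pi^{2}/\sqrt{3}$, as follows directly from Equation (19) with $\psi=\pi/3$.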
## Appendix C Minkowski functionals of the multiple-mergers model $\boldsymbol{\mathcal{M}^{*}_{n}}$ The Minkowski functionals of $\mathcal{M}^{*}_{n}=B\cup B_{1}\cup\dots\cup B_{n}$ for $i=1,\dots,n$ satellites with radii $r_{i}<r$, not mutually overlapping, and avoiding trivial embedding ($|d_{i}-r|\leqslant r_{i}$) are $\displaystyle V_{0}^{\mathcal{M}^{*}_{n}}$ $\displaystyle=\frac{2\pi}{3}(2-n)r^{3}+\frac{2\pi}{3}\sum_{i}\left(r_{i}^{3}-\frac{1}{8}d_{i}^{3}\right)$ $\displaystyle\;+\frac{\pi}{2}\sum_{i}d_{i}(r^{2}+r_{i}^{2})+\frac{\pi}{4}\sum_{i}\frac{(r^{2}-r_{i}^{2})^{2}}{d_{i}},$ (20a) $\displaystyle V_{1}^{\mathcal{M}^{*}_{n}}$ $\displaystyle=\frac{\pi}{3}(2-n)r^{2}+\frac{\pi}{3}\sum_{i}r_{i}^{2}$ $\displaystyle+\frac{\pi}{6}\sum_{i}d_{i}(r+r_{i})+\frac{\pi}{6}\sum_{i}\frac{(r-r_{i})(r^{2}-r_{i}^{2})}{d_{i}},$ (20b) $\displaystyle V_{2}^{\mathcal{M}^{*}_{n}}$ $\displaystyle=\frac{2}{3}(2-n)r+\frac{2}{3}\sum_{i}(r_{i}+d_{i})$ $\displaystyle-\frac{1}{3}\sum_{i}\psi_{i}d_{i}\sqrt{2\frac{r^{2}+r_{i}^{2}}{d_{i}^{2}}-1-\left(\frac{r^{2}-r_{i}^{2}}{d_{i}^{2}}\right)^{2}},$ (20c) with $\cos\psi_{i}=(r^{2}+r_{i}^{2}-d_{i}^{2})/2rr_{i}$. The condition $B_{i}\cap B_{j}=\emptyset$ is approximately realised if the surface covered by the bases of the $n$ balls does not exceed the surface of the central sphere, $\sum_{i}[(r^{2}-r_{i}^{2}+d_{i}^{2})/(2d_{i})]^{2}\lesssim 4r^{2}$. ## Appendix D Minkowski functionals of the dumbbell model $\boldsymbol{\mathcal{D}_{P}}$ Figure A3: Section of the dumbbell model with axially-symmetric conic filament $P$. The intersection of the two balls $B_{1}$ and $B_{2}$ with $P$ yields two “flat-concave lenses” $\mathcal{L}_{1}$ and $\mathcal{L}_{2}$ (light grey). The dumbbell $\mathcal{D}_{P}=B_{1}\cup P\cup B_{2}$ resulting from the union of two balls $B_{i}$ ($i=1,2$) bridged by a truncated cone $P$ (Figure A3) has $V_{\mu}(\mathcal{D}_{P})=V_{\mu}(B_{1})+V_{\mu}(B_{2})+V_{\mu}(P)-V_{\mu}(\mathcal{L}_{1})-V_{\mu}(\mathcal{L}_{2})$. 
The Minkowski functionals of $\mathcal{L}_{i}\equiv B_{i}\cap P$ and $P$ are calculated from the terms proportional to $\epsilon^{\mu}$ in the Steiner formula, using Equations (15, 16). The non-trivial results for $P$ are $\displaystyle V(P)$ $\displaystyle=\pi\rho_{2}^{2}h-\pi\rho_{2}h^{2}\tan\theta+\frac{\pi}{3}h^{3}\tan^{2}\theta,$ (21) $\displaystyle A(P)$ $\displaystyle=\pi(\rho_{1}^{2}+\rho_{2}^{2})+\pi h(2\rho_{1}\cos\theta+h\sin\theta)$ (22) $\displaystyle+\pi(\rho_{2}^{2}-\rho_{1}^{2})\sin\theta,$ $\displaystyle H(P)$ $\displaystyle=\pi h+\frac{\pi}{2}(\pi-2\theta)\rho_{1}+\frac{\pi}{2}(\pi+2\theta)\rho_{2},$ (23) where $\rho_{1}$ and $\rho_{2}$ are the radii of the minor and major circular bases of $P$, $h$ its height, and $\tan\theta=(\rho_{2}-\rho_{1})/h$. For $\theta=0$ one recovers the known Minkowski functionals for a cylinder $C$ with basis $\rho\equiv\rho_{1}=\rho_{2}$ and height $h$, i.e. $V(C)=\pi\rho^{2}h$, $A(C)=2\pi\rho(\rho+h)$, $H(C)=\pi(h+\pi\rho)$. For $h=0$ the second and third Minkowski functionals further yield the area and the integrated mean curvature (or $\pi/2~{}\times$ perimeter) of a two-faced two-dimensional disk embedded in a three-dimensional space.
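The cylinder limit quoted above offers a quick consistency test. The sketch below assembles the frustum functionals from elementary solid geometry (the closed-form frustum volume and lateral area, plus the Steiner edge terms for the integrated mean curvature); it is an independent rederivation under stated conventions rather than a transcription of Equations (21)-(23), and assumes only numpy:

```python
import numpy as np

def frustum_minkowski(rho1, rho2, h):
    """(V, A, H) of a truncated cone with basis radii rho1 <= rho2 and
    height h. V and A are textbook frustum formulas; H is the smooth
    lateral term plus Steiner edge terms with exterior dihedral angles
    pi/2 - theta (minor rim) and pi/2 + theta (major rim)."""
    theta = np.arctan2(rho2 - rho1, h)       # slant angle from the axis
    slant = np.hypot(h, rho2 - rho1)         # slant length
    V = np.pi * h * (rho1**2 + rho1 * rho2 + rho2**2) / 3
    A = np.pi * (rho1**2 + rho2**2) + np.pi * (rho1 + rho2) * slant
    H = (np.pi * h                           # lateral: integral of kappa/2 dA
         + np.pi * (np.pi / 2 - theta) * rho1    # minor-rim edge term
         + np.pi * (np.pi / 2 + theta) * rho2)   # major-rim edge term
    return V, A, H
```

Setting `rho1 == rho2` reproduces the cylinder values $V=\pi\rho^{2}h$, $A=2\pi\rho(\rho+h)$, $H=\pi(h+\pi\rho)$ stated in the text.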
# CD2CR: Co-reference Resolution Across Documents and Domains James Ravenscroft Centre for Scientific Computing, University of Warwick, CV4 7AL, United Kingdom Alan Turing Institute, 96 Euston Rd, London, NW1 2DB, United Kingdom Filament AI, 1 King William St, London, EC4N 7BJ, United Kingdom Arie Cattan Computer Science Department, Bar-Ilan University, Ramat-Gan, Israel Amanda Clare Department of Computer Science, Aberystwyth University, SY23 3DB, United Kingdom Ido Dagan Computer Science Department, Bar-Ilan University, Ramat-Gan, Israel Maria Liakata Centre for Scientific Computing, University of Warwick, CV4 7AL, United Kingdom Alan Turing Institute, 96 Euston Rd, London, NW1 2DB, United Kingdom ###### Abstract Cross-document co-reference resolution (CDCR) is the task of identifying and linking mentions to entities and concepts across many text documents. Current state-of-the-art models for this task assume that all documents are of the same type (e.g. news articles) or fall under the same theme. However, it is also desirable to perform CDCR across different domains (type or theme). A particular use case we focus on in this paper is the resolution of entities mentioned across scientific work and newspaper articles that discuss them. Identifying the same entities and corresponding concepts in both scientific articles and news can help scientists understand how their work is represented in mainstream media. We propose a new task and English language dataset for cross-document cross-domain co-reference resolution (CD2CR). The task aims to identify links between entities across heterogeneous document types. We show that in this cross-domain, cross-document setting, existing CDCR models do not perform well and we provide a baseline model that outperforms current state-of-the-art CDCR models on CD2CR. Our data set, annotation tool and guidelines as well as our model for cross-document cross-domain co-reference are all supplied as open access open source resources. 
## 1 Introduction Cross-document co-reference resolution (CDCR) is the task of recognising when multiple documents mention and refer to the same real-world entity or concept. CDCR is a useful NLP process that has many downstream applications. For example, CDCR carried out on separate news articles that refer to the same politician can facilitate inter-document sentence alignment required for stance detection and natural language inference models. Furthermore, CDCR can improve information retrieval and multi-document summarisation by grouping documents based on the entities that are mentioned within them. Recent CDCR work Dutta and Weikum (2015); Barhom et al. (2019); Cattan et al. (2020) has primarily focused on resolution of entity mentions across news articles. Despite differences in tone and political alignment, most news articles are relatively similar in terms of grammatical and lexical structure. Models based on modern transformer networks such as BERT Devlin et al. (2019) and ElMo Peters et al. (2018) have been pre-trained on large news corpora and are therefore well suited to news-based CDCR Barhom et al. (2019). However, there are cases where CDCR across documents from different domains (i.e. that differ much more significantly in style, vocabulary and structure) is useful. One such example is the task of resolving references to concepts across scientific papers and related news articles. This can help scientists understand how their work is being presented to the public by mainstream media or facilitate fact checking of journalists’ work Wadden et al. (2020). A chatbot or recommender that is able to resolve references to current affairs in both news articles and user input could be more effective at suggesting topics that interest the user. Finally, it may be helpful for e-commerce companies to know when product reviews gathered from third party websites refer to one of their own listings. 
The work we present here focuses on the first cross-document, cross-domain co-reference resolution (CD2CR) use case, namely co-reference resolution between news articles and scientific papers. The objective of CD2CR is to identify co-referring entities from documents belonging to different domains. In this case co-reference resolution is made more challenging by the differences in language use (lexical but also syntactic) across the different domains. Specifically, authors of scientific papers aim to communicate novel scientific work in an accurate and unambiguous way by using precise scientific terminology. Whilst scientific journalists also aim to accurately communicate novel scientific work, their work is primarily funded by newspaper sales and thus they also aim to captivate as large an audience as possible. Therefore journalists tend to use simplified vocabulary and structure, creative and unexpected writing style, slang, simile, metaphor and exaggeration to make their work accessible, informative and entertaining in order to maximise readership Louis and Nenkova (2013). Success at the CD2CR task in this setting is dependent on context-sensitive understanding of how the accessible but imprecise writing of journalists maps on to precise terminology used in scientific writing. For example, a recent study has found that “convalescent plasma derived from donors who have recovered from COVID-19 can be used to treat patients sick with the disease” 111DOI: 10.1101/2020.03.16.20036145. A news article222https://tinyurl.com/ycnq9xg7 discussing this work says that “…blood from recovered Covid-19 patients in the hope that transfusions…[can help to treat severely ill patients]”. In this example the task is to link ‘blood’ to ‘convalescent plasma’ and ‘recovered Covid-19 patients’ to ‘donors’. 
These cross-document, cross-domain co-reference chains can be used as contextual anchors for downstream analysis of the two document settings via tasks such as natural language inference, stance detection and frame analysis. The contributions in this paper are the following: * • A novel task setting for CDCR that is more challenging than those that already exist due to linguistic variation between different domains and document types (we call this CD2CR). * • An open source English language CD2CR dataset with 7602 co-reference pair annotations over 528 documents and detailed 11-page annotation guidelines (Section 3.1). * • A novel annotation tool to support ongoing data collection and annotation for CD2CR including a novel sampling mechanism for calculating inter-annotator agreement (Section 3.4). * • A series of experiments on our dataset using different baseline models and an in-depth capability-based evaluation of the best-performing baseline (Section 5). ## 2 Related Work ### 2.1 Co-reference Resolution Intra-document co-reference resolution is a well understood task with mature training data sets Weischedel et al. (2013) and academic tasks Recasens et al. (2010). The current state-of-the-art model by Joshi et al. (2020) is based on Lee et al. (2017, 2018) and uses a modern BERT-based Devlin et al. (2019) architecture. Comparatively, CDCR, which involves co-reference resolution across multiple documents, has received less attention in recent years Bagga and Baldwin (1998); Rao et al. (2010); Dutta and Weikum (2015); Barhom et al. (2019). Cattan et al. (2020) jointly learns both entity and event co-reference tasks, achieving current state-of-the-art performance for CDCR, and as such provides a strong baseline for experiments in CD2CR. Both Cattan et al. (2020) and Barhom et al. (2019) models are trained and evaluated using the ECB+ corpus Cybulska and Vossen (2014) which contains news articles annotated with both entity and event mentions. 
### 2.2 Entity Linking Entity Linking (EL) focuses on alignment of mentions in documents to resources in an external knowledge resource Ji et al. (2010) such as SNOMED CT333https://tinyurl.com/yy7g4ttz or DBPedia444https://wiki.dbpedia.org/. EL is challenging due to the large number of pairwise comparisons between document mentions and knowledge resource entities that may need to be carried out. Raiman and Raiman (2018) provide state of the art performance by building on Ling et al. (2015)’s work in which an entity type system is used to limit the number of required pairwise comparisons to related types. Yin et al. (2019) achieved comparable results using a graph-traversal method to similarly constrain the problem space to candidates within a similar graph neighbourhood. EL can be considered a narrow sub-task of CDCR since it cannot resolve novel and rare entities or pronouns Shen et al. (2015). Moreover EL’s dependency on expensive-to-maintain external knowledge graphs is also problematic when limited human expertise is available. Given these limitations, EL is inappropriate within our task setting, hence our CDCR-based approach. ### 2.3 Semantic Specialisation Like earlier static vector language models, contextual language models such as BERT Devlin et al. (2019) and ElMo Peters et al. (2018) use distributional knowledge Harris (1954) inherent in large text corpora to learn context-aware word embeddings that can be used for downstream NLP tasks. However, these models do not learn about formal lexical constraints, often conflating different types of semantic relatedness Ponti et al. (2018); Lauscher et al. (2020). This is a weakness of all distributional language models that is particularly problematic in the context of CD2CR for entity mentions that are related but not co-referent (e.g. “Mars” and “Jupiter”) as shown in section 5. A number of solutions have been proposed for adding lexical knowledge to static word embeddings Yu and Dredze (2014); Wieting et al. 
(2015); Ponti et al. (2018) but contextual language models have received comparatively less attention. Lauscher et al. (2020) propose adding a lexical relation classification step to BERT’s language model pre-training phase to allow the model to integrate both lexical and distributional knowledge. Their model, LIBERT, has been shown to facilitate statistically-significant performance boosts on a variety of downstream NLP tasks. ## 3 Dataset creation Our dataset is composed of pairs of news articles and scientific papers gathered automatically (Section 3.1). Our annotation process begins by obtaining summaries of the news and science document pairs (extractive news summaries and scientific abstracts, respectively) (Section 3.2). Candidate co-reference pairs from each summary-abstract pair are identified and scored automatically (Section 3.3). Candidate co-reference pairs are then presented to human annotators via a bespoke annotation interface for scoring (Section 3.4). Annotation quality is measured on an ongoing basis as new candidates are added to the system (Section 3.5). ### 3.1 Data Collection We have developed a novel data set that allows us to train and evaluate a CD2CR model. The corpus is approximately 50% the size of the ECB+ corpus (918 documents) Cybulska and Vossen (2014) and is split into training, development and test sets (statistics for each subset are provided in Table 1). Each pair of documents consists of a scientific paper and a newspaper article that discusses the scientific work. In order to detect pairs of documents, we follow the approach of Ravenscroft et al. (2018), using approximate matching of author name and affiliation metadata, date of publishing and exact DOI matching where available to connect news articles to scientific publications. 
Subset | Documents | Mentions | Clusters
---|---|---|---
Train | 300 | 4,604 | 426
Dev | 142 | 1,821 | 199
Test | 86 | 1,177 | 101

Table 1: Total individual documents, mentions, co-reference clusters of each subset excluding singletons. We built a web scraper that scans for new articles from the ‘Science’ and ‘Technology’ sections of 3 well-known online news outlets (BBC, The Guardian, New York Times) and press releases from Eurekalert, a widely popular scientific press release aggregator. Once a newspaper article and related scientific paper are detected, the full text from the news article and the scientific paper abstract and metadata are stored. Where available the full scientific paper content is also collected. We ran the scraper between April and June 2020 collecting news articles and scientific papers including preprints discussing a range of topics such as astronomy, computer science and biology (incl. coverage of COVID-19). New relevant content is downloaded and ingested into our annotation tool (see Section 3.4) on an ongoing basis as it becomes available. ### 3.2 Article Summarisation Newspaper articles and scientific papers are long and often complex documents, usually spanning multiple pages, particularly the latter. Moreover the two document types differ significantly in length. Comparing documents of such uneven length is a difficult task for human annotators. We also assume that asking human annotators to read the documents in their entirety to identify co-references would be particularly hard with a very low chance for good inter-annotator agreement (IAA). We therefore decided to simplify the task by asking annotators to compare summaries of the newspaper article (5-10 sentences long) and the scientific paper (abstract). For each document pair, we ask the annotators to identify co-referent mentions between the scientific paper abstract and a summary of the news article that is of similar length (e.g. 5-10 sentences). 
Scientific paper abstracts act as a natural summary of a scientific work and have been used as a strong baseline or even a gold-standard in scientific summarisation tasks Liakata et al. (2013). Furthermore, abstracts are almost always available rather than behind paywalls like full text articles. For news summarisation, we used a state-of-the-art extractive model Grenander et al. (2019) to extract sentences forming a summary of the original text. This model provides a summary de-biasing mechanism preventing it from focusing on specific parts of the full article, preserving the summary’s informational authenticity as much as possible. The difference in style between the two documents is preserved by both types of summary since abstracts are written in the same scientific style as full papers and the extractive summaries use verbatim excerpts of the original news articles. ### 3.3 Generation of pairs for annotation Figure 1: Illustration of the generation process for pairs of potentially co-referring expressions, left boxes represent related news summary (top) and abstract (bottom), co-referent entity pairs in middle boxes shown with same formatting (underline, italic). To populate our annotation tool, we generate pairs of candidate cross-document mentions to be evaluated by the user. Candidate mentions are identified by using spaCy Honnibal and Montani (2017) for the recognition of noun phrases and named entities from each input document pair (abstract-news summary). For each pair of documents, pairs of all possible mention combinations are generated and stored for annotation. In any given pair of documents, the majority of mention pairs ($M_{0}$, $M_{1}$) generated automatically in this way will not co-refer thus resulting in a vastly imbalanced dataset and also running the risk of demotivating annotators. 
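The generation-and-ranking step of Section 3.3 can be sketched as follows. The span vectors here are toy stand-ins for the $\text{BERT}_{\text{large}}$ contextual embeddings used in the real pipeline, and the function names and data layout are illustrative assumptions rather than the released implementation:

```python
import numpy as np
from itertools import product

def rank_candidate_pairs(summary_spans, abstract_spans):
    """Generate all cross-document mention pairs and rank them by the
    cosine similarity of their mean span vectors. Each argument maps a
    mention string to a (tokens, dim) array of word vectors, as would be
    produced by running a contextual encoder over the concatenated
    "summary [SEP] abstract" input."""
    def span_vec(tokens):
        # mean-pool the token vectors for the span, then L2-normalise
        v = np.asarray(tokens, dtype=float).mean(axis=0)
        return v / np.linalg.norm(v)

    scored = [(float(span_vec(va) @ span_vec(vb)), ma, mb)
              for (ma, va), (mb, vb) in product(summary_spans.items(),
                                                abstract_spans.items())]
    return sorted(scored, key=lambda t: t[0], reverse=True)  # best first
```

With toy two-dimensional vectors in which "blood" lies close to "convalescent plasma", that pair is ranked ahead of unrelated combinations, which is the behaviour the annotation queue relies on.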
To ensure that annotators are exposed to both positive and negative examples, we use a similarity score to rank examples based on how likely they are to co-refer. The first step in generating a similarity score $s$ is to concatenate each news-summary-abstract pair as “summary [SEP] abstract” and feed it into a pre-trained $\text{BERT}_{\text{large}}$ model. Then we take the mean of the word vectors that correspond to the mention spans within the documents and calculate the cosine similarity of these vectors. We find that this BERT-based similarity score performs well in practice. We also use it in combination with a thresholding policy as one of our baseline models in Section 4.

### 3.4 Annotation Tool & Interface

We developed an open source annotation tool (https://github.com/ravenscroftj/cdcrtool) that allows humans to identify cross-document co-reference between each pair of related documents. Whilst designing this tool, we made a number of decisions to simplify the task and provide clear instructions for the human annotators in order to encourage consistent annotation behaviour. To maximise the quality and consistency of annotations in our corpus, we simplified the task as much as possible for the end user. Annotation tasks were framed as a single yes or no question: “Are x and y mentions of the same entity?”. Mentions in context were shown in bold font, whereas mentions already flagged as co-referent were shown in green. This enabled annotators to understand the implications for existing co-reference chains before responding (see Figure 3). Questions were generated and ranked via our task generation pipeline (see Section 3.3 above). We added two additional features to our annotation interface to improve annotators’ experience and to speed up the annotation process. Firstly, if the candidate pair is marked as co-referent, the user is allowed to add more mentions to the coreference cluster at once. Secondly, inspired by Li et al.
(2020), if the automatically shown mention pair is not co-referent, the user can select a different mention that is co-referent. The upstream automated mention detection mechanism can sometimes introduce incomplete or erroneous mentions, leading to comparisons that do not make sense or that are particularly difficult. Therefore, annotators can also move or resize the mention spans they are annotating. We use the string offsets of mention spans to check that the spans in a pair do not overlap with each other, in order to prevent the creation of duplicates. Figure 1 shows an illustrated example of the generation pipeline for mention pairs.

### 3.5 Annotation Protocol

We recruited three university-educated human annotators and provided them with detailed annotation guidelines for the resolution of yes/no questions on potentially co-referring entities in pairs from the ordered queue described above. By default, each entity pair resolution is carried out once, allowing us to quickly expand our data set. However, we pseudo-randomly sample 5% of mention pairs in order to calculate inter-annotator agreement (IAA) and make sure that data collected from the tool is consistent and suitable for modelling. New entity pairs for IAA are continually sampled as new document pairs and mention tuples are added to the corpus by the web scraper (Section 3.1). The annotation system puts mention pairs flagged for IAA first in the annotation queue. Thus, all annotators are required to complete IAA comparisons before moving on to novel mention pairs. This allows us to ensure that all annotators are well represented in the IAA exercise. To avoid annotators being faced with a huge backlog of IAA comparisons before being able to proceed with novel annotations, we also limited the number of comparisons for IAA required by each user to a maximum of 150 per week.
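Agreement over this shared IAA subset can then be scored with chance-corrected statistics; a minimal Fleiss' Kappa implementation is sketched below. This is the standard formulation, shown for illustration, not code taken from the authors' analysis.

```python
import numpy as np


def fleiss_kappa(ratings):
    """Fleiss' kappa for an (items x categories) matrix of rating counts.

    ratings[i, j] = number of annotators who assigned item i to category j;
    every row must sum to the same number of raters n.
    """
    ratings = np.asarray(ratings, dtype=float)
    n = ratings.sum(axis=1)[0]  # raters per item
    # Per-item agreement P_i and its mean over items
    p_i = ((ratings ** 2).sum(axis=1) - n) / (n * (n - 1))
    p_bar = p_i.mean()
    # Chance agreement from the marginal category proportions
    p_j = ratings.sum(axis=0) / ratings.sum()
    p_e = (p_j ** 2).sum()
    return (p_bar - p_e) / (1 - p_e)
```

For three annotators answering the yes/no IAA questions, each row of the input matrix holds the counts of "yes" and "no" responses for one mention pair.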
### 3.6 Task Difficulty and Annotator Agreement

We anticipated that annotation of the CD2CR corpus would be difficult in nature due to its dependencies on context and lexical style. We invited users to provide feedback regularly to help us refine and clarify our guidelines and annotation tool in an iterative fashion. Users could alert us to examples they found challenging by flagging them as difficult in the tool. Qualitative analysis of the subset of ‘difficult’ cases showed that the resolution of mention pairs is often perceived by annotators as difficult when:

* • Deep subject-matter expertise is required to understand the mentions, e.g. is “jasmonic acid” the same as “regulator cis-(+)-12-oxophytodienoic acid”?
* • Mentions involve non-commutable set membership ambiguity, e.g. “Diplodocidae” and “the dinosaurs”.
* • Mentions are context dependent, e.g. “the struggling insect” and “the monarch butterfly”.

This feedback prompted the introduction of highlighting for existing co-reference chains in the user interface (as described in Section 3.4 above) to make it easier to tell when non-commutable set membership would likely introduce inconsistencies into the dataset. For mention pairs requiring subject-matter expertise, annotators were encouraged to research the terms online. For context-sensitive mention pairs, annotators were encouraged to read the full news article and full scientific paper in order to make a decision. In our 11-page annotation guidelines document (appendix) we describe the use of our annotation tool and illustrate some challenging CD2CR tasks and resolution strategies. For example, precise entities mentioned in the scientific document may be referenced using ambiguous exophoric mentions in the news article (e.g. ‘a mountain breed of sheep’ vs ‘eight ovis aries’). Our guidelines require resolving these cases based on the journalist’s intent (e.g. ‘a mountain breed’ refers to the ‘ovis aries’ sheep involved in the experiment).
We evaluated the final pairwise agreement between annotators using Cohen’s Kappa Cohen (1960) ($\kappa_{\text{cohen}}$) and an aggregate ‘n-way’ agreement score using Fleiss’ Kappa Fleiss (1971) ($\kappa_{\text{fleiss}}$). Pairwise $\kappa_{\text{cohen}}$ is shown in Table 2 along with the total number of tasks each annotator completed. Annotator 3 (A3) shows the most consistent agreement with the other two annotators. Our Fleiss’ Kappa analysis of tasks common across the three annotators gave $\kappa_{\text{fleiss}}=0.554$. We note that Fleiss’ Kappa is a relatively harsh metric, and values, like ours, between 0.41 and 0.60 are considered to demonstrate ‘moderate agreement’ Landis and Koch (1977). We also carried out Fleiss’ Kappa analysis on the subset of mention pairs that were completed by all annotators and were also marked as difficult by at least one user (180 mention pairs in total). We found that for this subset of pairs, $\kappa_{\text{fleiss}}=0.399$, which is considered to be fair agreement Landis and Koch (1977).

 | # Annotations | A1 | A2 | A3
---|---|---|---|---
A1 | 10,685 | - | 0.492 | 0.600
A2 | 3,051 | 0.492 | - | 0.500
A3 | 9,847 | 0.600 | 0.500 | -

Table 2: Number of annotations and pairwise Cohen’s Kappa scores $\kappa_{\text{cohen}}$ between annotators.

## 4 Model

Below we describe several baseline models, including state-of-the-art CDCR models, that we used to evaluate how well current approaches can be used in our CD2CR task setting.

### 4.1 BERT Cosine Similarity (BCOS) Baseline

In this model we calculate the cosine similarity between embeddings of the two mentions in context ($M_{0},M_{1}$) encoded using a pre-trained BERT model, as discussed above in Section 3.3.
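Concretely, the similarity used by this baseline can be sketched as mean pooling over each mention's contextual token vectors followed by cosine similarity. The NumPy snippet below is a hedged illustration: the token vectors are assumed to come from a BERT encoding of "summary [SEP] abstract", and the function names are ours.

```python
import numpy as np


def span_embedding(token_vectors, span):
    """Mean-pool the contextual token vectors covering a mention span.

    token_vectors: (seq_len, hidden) array, e.g. BERT's last hidden states;
    span: (start, end) token indices, end exclusive.
    """
    start, end = span
    return token_vectors[start:end].mean(axis=0)


def cossim(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))


def bcos_score(token_vectors, span_m0, span_m1):
    """BERT cosine similarity between two mentions in the same encoded context."""
    return cossim(span_embedding(token_vectors, span_m0),
                  span_embedding(token_vectors, span_m1))
```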
We define a thresholding function $f$ to decide if $M_{0}$ and $M_{1}$ are co-referent ($f=1$) or not ($f=0$):

$f(M_{0},M_{1})=\begin{dcases}1,&\text{if }\mathrm{COSSIM}(M_{0},M_{1})\geq t\\ 0,&\text{otherwise}\end{dcases}$

During inference, we pass this function over all pairs $M_{0},M_{1}$ and infer missing links such that if $f(A,B)=1$ and $f(B,C)=1$ then $f(A,C)=1$. Based on Figure 2, we test values in increments of 0.01 between 0.3 and 0.8 inclusive for the threshold cut-off $t$. We evaluated the baseline by measuring its accuracy at predicting co-reference for each mention pair in the $CD^{2}CR$ development set. The best performance was attained when $t=0.65$. A visualisation of the BERT cosine similarity distributions of co-referent and non-co-referent annotated mention pairs can be seen in Figure 2.

Figure 2: BERT cosine similarity frequency distribution for co-referent (Yes) and non-co-referent (No) mention pairs in the CD2CR corpus. Co-referent mention pairs tend to have a slightly higher BERT cosine similarity than non-co-referent mention pairs, but there is significant overlap of the two distributions, suggesting that in many cases BERT similarity is too simplistic a measure.

Figure 3: An example of a cross-document co-reference task presented within our annotation tool.

### 4.2 Entities Only Baseline (CA)

We use a state-of-the-art model Cattan et al. (2020) (CA) for cross-document co-reference resolution. In this model, each document is separately encoded using a RoBERTa encoder (without fine-tuning) to get contextualized representations for each token. Then, similarly to the within-document co-reference model by Lee et al. (2017), the mention spans are represented by the concatenation of four vectors: the vectors of the first and last token in the span, an attention-weighted sum of the span token vectors, and a feature vector to encode the span width.
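The four-vector span representation can be sketched as follows. This is a simplified NumPy illustration of the Lee et al. (2017) construction, not the CA model's actual code; the learned per-token attention scores and the width-embedding lookup table are assumed inputs.

```python
import numpy as np


def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()


def span_representation(token_vecs, attn_scores, width_embeds, start, end):
    """Span representation as [first token; last token; attention-weighted
    sum of span tokens; span-width feature vector].

    token_vecs: (seq_len, d) contextual token vectors;
    attn_scores: (seq_len,) learned per-token attention scores;
    width_embeds: (max_width + 1, w) lookup table of width feature vectors;
    start, end: token indices of the span, end exclusive.
    """
    weights = softmax(attn_scores[start:end])
    head = weights @ token_vecs[start:end]  # attention-weighted sum
    width_feat = width_embeds[end - start]
    return np.concatenate([token_vecs[start], token_vecs[end - 1],
                           head, width_feat])
```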
Two mention representations are then concatenated and fed to a feed-forward network to learn a likelihood score for whether two mentions co-refer. At inference time, agglomerative clustering is used on the pairwise scores to form coreference clusters. The CA model is trained to perform both event and entity recognition on the ECB+ corpus Cybulska and Vossen (2014). In our setting there is no event detection subtask so, for fair comparison, we pre-train the CA model on ECB+ entity annotations only and evaluate it on our new CD2CR task to see how well it generalises to our task setting.

### 4.3 CA + Fine-Tuned (CA-FT) Baseline

Here we aim to evaluate whether fine-tuning the CA model from Section 4.2 using the CD2CR corpus can improve its performance in the new task setting. The CA model is first trained on the ECB+ corpus in the manner described above. We then further fine-tune the feed-forward model (without affecting the RoBERTa encoder) on the CD2CR corpus for 10 epochs with early stopping. Pseudo-random sub-sampling is carried out on the training set to ensure a balance of co-referent and non-co-referent mention pairs.

### 4.4 CA - Vanilla (CA-V) Baseline

Here we aim to evaluate whether training the CA model on the CD2CR dataset from the RoBERTa baseline, without first training on the ECB+ corpus, allows it to fit well to the new task setting. We re-initialise the CA encoder (Section 4.2) using weights from RoBERTa Liu et al. (2019) and randomly initialise the remaining model parameters. We then train the model on the CD2CR corpus for up to 20 epochs with early stopping, with pseudo-random sub-sampling as above.

### 4.5 CA - SciBERT (CA-S) Baseline

This model is the same as CA-V, but we replace the RoBERTa encoder with SciBERT Beltagy et al. (2019), a version of BERT pre-trained on scientific literature, in order to test whether the scientific terms and context captured by SciBERT improve performance at the CD2CR task compared to RoBERTa.
Similarly to CA-V in Section 4.4, we initialise the BERT model with weights from $\text{SciBERT}_{\text{scivocab-uncased}}$ Beltagy et al. (2019) and randomly initialise the remaining model parameters, training on the CD2CR corpus for up to 20 epochs with early stopping.

## 5 Results and Discussion

Model | MUC P | MUC R | MUC F1 | $B^{3}$ P | $B^{3}$ R | $B^{3}$ F1
---|---|---|---|---|---|---
BCOS | 0.42 | 0.94 | 0.58 | 0.01 | 0.45 | 0.00
CA | 0.41 | 0.51 | 0.46 | 0.39 | 0.33 | 0.35
CA-V | 0.50 | 0.69 | 0.58 | 0.35 | 0.57 | 0.44
CA-FT | 0.47 | 0.71 | 0.52 | 0.30 | 0.62 | 0.41
CA-S | 0.58 | 0.46 | 0.51 | 0.32 | 0.53 | 0.39

Table 3: MUC and $B^{3}$ results from running baseline models on the CD2CR test subset (BCOS threshold = 0.65).

We evaluate each of the model baselines described in Section 4 above on the test subset of our CD2CR corpus. Results are shown in Table 3. For the purposes of evaluation, we use named entity spans from the manually annotated CD2CR as the “gold standard” in all experiments, rather than using the end-to-end Named Entity Recognition capabilities provided by some of the models. We evaluate the models using the metrics described by Vilain et al. (1995) (henceforth MUC) and Bagga and Baldwin (1998) (henceforth $B^{3}$). MUC F1, precision and recall are defined in terms of pairwise co-reference relationships between each mention. $B^{3}$ F1, precision and recall are defined in terms of the presence or absence of specific entities in the cluster. When measuring $B^{3}$, we remove entities with no co-references (singletons) from the evaluation to avoid inflation of results Cattan et al. (2020).

Test Type | Co-referent? | Pass Rate & Total Tests | Example test case and outcome
---|---|---|---
Anaphora and exophora resolution | Yes | 47.1% (16/34) | M1: “…to boost the struggling insect’s numbers…”; M2: “the annual migration of the monarch butterfly…” [PASS]
Anaphora and exophora resolution | No | 76.5% (26/34) | M1: “…monarchs raised in captivity…”; M2: “…rearing wild-caught monarchs in an indoor environment…” [FAIL]
Subset relationship resolution | Yes | 24.3% (9/37) | M1: “…it was in fact a hive of human activity…”; M2: “…this region for Pre-Columbian cultural developments…” [FAIL]
Subset relationship resolution | No | 60.0% (18/30) | M1: “…the carnivore’s skull…”; M2: “…the gigantic extinct Agriotherium africanum” [FAIL]
Paraphrase resolution | Yes | 33.3% (13/39) | M1: “…a giant short-faced bear…”; M2: “…the gigantic extinct Agriotherium africanum…” [PASS]
Paraphrase resolution | No | 80.5% (29/36) | M1: “…half the energy that existing techniques require”; M2: “…the lack of efficient catalysts for ammonia synthesis” [FAIL]

Table 4: A breakdown of specific tests carried out on the CA-V model against three challenging types of relationships found in the $CD^{2}CR$ corpus. [PASS] or [FAIL] indicates CA-V model correctness. Pass Rate is mathematically equivalent to recall over each test set.

The threshold baseline (BCOS) gives the highest MUC recall but also poor MUC precision and the poorest $B^{3}$ precision. The $B^{3}$ metric is highly specific with respect to false-positive entity mentions and strongly penalises BCOS for linking all non-co-referent pairs with $\mathrm{COSSIM}(M_{0},M_{1})\geq 0.65$. Furthermore, Fig. 2 shows that a thresholding strategy is clearly sub-optimal given that there is a significant overlap of co-referent and non-co-referent pairs, with only a small minority of pairs at the top and bottom of the distribution that do not overlap. Therefore, despite its promising MUC F1 score, it is clear that BCOS is not useful in practical terms. Whilst our thresholding baseline above uses BERT, RoBERTa is used by Cattan et al.
(2020) as the basis for their state-of-the-art model and thus for our models based on their work. Although the two models have the same architecture, RoBERTa has been shown to outperform BERT at a range of tasks Liu et al. (2019). However, as shown in Figure 4, the cosine similarity distribution of mention pair embeddings produced by RoBERTa is compressed to use a smaller area of the potential distribution space compared to that of BERT (Figure 2). This compression of similarities may imply a reduction in RoBERTa’s ability to discriminate in our task setting. Liu et al. (2019) explain that their byte-pair-encoding (BPE) mechanism, which expands RoBERTa’s sub-word vocabulary and simplifies pre-processing, can reduce model performance for some tasks, although this is not further explored in their work. We leave further exploration of RoBERTa’s BPE scheme and its effects on the CD2CR task setting to future work. Figure 4: RoBERTa Cosine Similarity frequency distribution for co-referent (Yes) and non-co-referent (No) mention pairs in the CD2CR corpus. Distribution is compressed between 0.8 and 1.0. All of the models specifically trained on the CD2CR corpus (CA-V, CA-FT, CA-S) outperform the CA model by a large margin. Furthermore, the CA-V model (without pre-training on ECB+ corpus) outperforms the CA-FT model (with ECB+ pre-training) by 6% MUC and 3% B3. These results suggest that the CD2CR task setting is distinct from the CDCR and ECB+ task setting and that this distinction is not solvable with fine-tuning. In terms of both MUC and B3, CA-S performs much worse than CA-V suggesting that SciBERT embeddings are less effective than RoBERTa embeddings in this task setting. We hypothesise that SciBERT’s specialisation towards scientific embeddings may come at the cost of significantly worse news summary embeddings when compared to those produced by RoBERTa. 
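For reference, the per-mention $B^{3}$ metric used in Table 3 can be sketched as follows. This is the standard definition, shown as our own compact illustration rather than the authors' evaluation code; for simplicity it averages over the mentions present in both clusterings.

```python
def b_cubed(gold_clusters, pred_clusters):
    """B^3 precision, recall and F1 over two mention clusterings.

    Each clustering is a list of sets of mention ids; singleton clusters
    should be removed beforehand, as in the paper's evaluation protocol.
    """
    gold_of = {m: c for c in gold_clusters for m in c}
    pred_of = {m: c for c in pred_clusters for m in c}
    mentions = set(gold_of) & set(pred_of)
    # Per-mention overlap between the gold and predicted cluster of m
    precision = sum(len(gold_of[m] & pred_of[m]) / len(pred_of[m])
                    for m in mentions) / len(mentions)
    recall = sum(len(gold_of[m] & pred_of[m]) / len(gold_of[m])
                 for m in mentions) / len(mentions)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1
```

Merging two gold clusters into one predicted cluster leaves recall perfect but halves per-mention precision, which is exactly why BCOS's over-linking is punished so heavily by $B^{3}$.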
We next evaluate our best performing CD2CR baseline model (CA-V) at the entity resolution CDCR task using the ECB+ test corpus, to see how well it generalises to the original CDCR task. Results are presented in Table 5 alongside Cattan et al.’s original model results (CA). The CA-V model still shows good performance, despite a small drop, when compared to the original CA model. The drop in $B^{3}$ F1 is more pronounced than MUC but is still broadly in line with other contemporary CDCR systems Cattan et al. (2020). The CA-V model demonstrates a promising ability to generalise beyond our corpus to other tasks and reveals an interesting correspondence between CDCR and CD2CR settings.

Model | MUC P | MUC R | MUC F1 | $B^{3}$ P | $B^{3}$ R | $B^{3}$ F1
---|---|---|---|---|---|---
CA | 0.86 | 0.82 | 0.84 | 0.63 | 0.68 | 0.65
CA-V | 0.82 | 0.81 | 0.81 | 0.56 | 0.53 | 0.55

Table 5: MUC and $B^{3}$ results from running the CD2CR baseline model (CA-V) on the ECB+ dataset, compared with the original Cattan et al. (2020) model (CA).

Finally, the best model (CA-V) is analysed using a series of challenging test cases inspired by Ribeiro et al. (2020). These test cases were created using 210 manually annotated mention pairs found in the test subset of the $CD^{2}CR$ corpus, according to the type of relationship illustrated (anaphora & exophora, subset relationships, paraphrases). We collected a balanced set of 30-40 examples of both co-referent and non-co-referent-but-challenging pairs for each type of relationship (exact numbers in Table 4). We then recorded whether the model correctly predicted co-reference for these pairs. The results, along with illustrative examples of each relationship type, are shown in Table 4. The results suggest that the model is better at identifying non-co-referent pairs than co-referent pairs and that it struggles with positive co-referent mentions for all three types of relationship.
The model struggles to relate general reader-friendly descriptions of entities from news articles to precise and clinical descriptions found in scientific papers. The model often successfully identifies related concepts such as ‘the carnivore’s skull’ and ‘Agriotherium africanum’. However, it is unable to deal with the complexity of these relationships and appears to conflate ‘related’ with ‘co-referent’, which is likely due to a lack of lexical knowledge, as we discussed in Section 2.3. Figure 5 shows significant overlap between co-referent and non-co-referent RoBERTa-based cosine similarities, which can also be observed for the wider corpus in Figure 4, but is especially bad for these test examples. This overlap suggests that disentangling these pairs is likely to be a challenging task for the downstream classification layer in the CA-V model. These challenges are less likely to occur in homogeneous corpora like ECB+, where descriptions and relationships remain consistent in detail and complexity.

Figure 5: RoBERTa-based mention pair similarity frequency distribution for test examples from Table 4; ‘yes’ and ‘no’ denote ‘co-referent’ and ‘not-co-referent’ respectively.

## 6 Conclusion

We have defined cross-document, cross-domain co-reference resolution (CD2CR), a special and challenging case of cross-document co-reference resolution for comparing mentions across documents of different types and/or themes. We have constructed a specialised CD2CR annotated dataset, available, along with our annotation guidelines and tool, as a free and open resource for future research. We have shown that state-of-the-art CDCR models do not perform well on the CD2CR dataset without specific training. Furthermore, even with task-specific training, models perform modestly and leave room for further research and improvement.
Finally, we show that the understanding of semantic relatedness offered by current generation transformer-based language models may not be precise enough to reliably resolve complex linguistic relationships such as those found in CD2CR as well as other types of co-reference resolution and relationship extraction tasks. The use of semantic enrichment techniques (such as those discussed in Section 2.3) to improve model performance in the CD2CR task should be investigated as future work. ## Acknowledgments This work was supported by The Alan Turing Institute under the EPSRC grant EP/N510129/1 and the University of Warwick’s CDT in Urban Science under EPSRC grant EP/L016400/1. ## References * Bagga and Baldwin (1998) Amit Bagga and Breck Baldwin. 1998. Entity-based cross-document coreferencing using the vector space model. In _Proceedings of ACL ’98/COLING ’98_ , page 79–85, USA. Association for Computational Linguistics. * Barhom et al. (2019) Shany Barhom, Vered Shwartz, Alon Eirew, Michael Bugert, Nils Reimers, and Ido Dagan. 2019. Revisiting Joint Modeling of Cross-document Entity and Event Coreference Resolution. In _Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics_ , pages 4179–4189. * Beltagy et al. (2019) Iz Beltagy, Kyle Lo, and Arman Cohan. 2019. SciBERT: A pretrained language model for scientific text. In _Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)_ , pages 3615–3620. * Cattan et al. (2020) Arie Cattan, Alon Eirew, Gabriel Stanovsky, Mandar Joshi, and Ido Dagan. 2020. Streamlining cross-document coreference resolution: Evaluation and modeling. arXiv:2009.11032. * Cohen (1960) Jacob Cohen. 1960. A coefficient of agreement for nominal scales. _Educational and Psychological Measurement_ , 20:37–46. * Cybulska and Vossen (2014) Agata Cybulska and Piek Vossen. 2014. 
Using a sledgehammer to crack a nut? Lexical diversity and event coreference resolution. In _Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC’14)_ , pages 4545–4552. * Devlin et al. (2019) Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In _Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies_ , pages 4171–4186. Association for Computational Linguistics. * Dutta and Weikum (2015) Sourav Dutta and Gerhard Weikum. 2015. Cross-Document Co-Reference Resolution using Sample-Based Clustering with Knowledge Enrichment. _Transactions of the Association for Computational Linguistics_ , 3:15–28. * Fleiss (1971) Joseph L. Fleiss. 1971. Measuring nominal scale agreement among many raters. _Psychological Bulletin_ , 76(5):378–382. * Grenander et al. (2019) Matt Grenander, Yue Dong, Jackie Chi Kit Cheung, and Annie Louis. 2019. Countering the Effects of Lead Bias in News Summarization via Multi-Stage Training and Auxiliary Losses. In _Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)_ , pages 6019–6024. * Harris (1954) Zellig S. Harris. 1954. Distributional structure. _WORD_ , 10(2-3):146–162. * Honnibal and Montani (2017) Matthew Honnibal and Ines Montani. 2017. spaCy 2: Natural language understanding with Bloom embeddings, convolutional neural networks and incremental parsing. To appear. * Ji et al. (2010) Heng Ji, Ralph Grishman, Hoa Trang Dang, Kira Griffitt, and Joe Ellis. 2010. Overview of the TAC 2010 knowledge base population track. In _Proceedings of the 2010 Text Analysis Conference_. * Joshi et al. (2020) Mandar Joshi, Danqi Chen, Yinhan Liu, Daniel S. Weld, Luke Zettlemoyer, and Omer Levy. 2020. 
SpanBERT: Improving Pre-training by Representing and Predicting Spans. _Transactions of the Association for Computational Linguistics_ , 8:64–77. * Landis and Koch (1977) JR Landis and GG Koch. 1977. The measurement of observer agreement for categorical data. _Biometrics_ , 33(1):159—174. * Lauscher et al. (2020) Anne Lauscher, Ivan Vulić, Edoardo Maria Ponti, Anna Korhonen, and Goran Glavaš. 2020. Specializing unsupervised pretraining models for word-level semantic similarity. _Computing Research Repository_ , arXiv:1909.02339. * Lee et al. (2017) Kenton Lee, Luheng He, Mike Lewis, and Luke Zettlemoyer. 2017. End-to-end neural coreference resolution. In _Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing_ , pages 188–197, Copenhagen, Denmark. Association for Computational Linguistics. * Lee et al. (2018) Kenton Lee, Luheng He, and Luke Zettlemoyer. 2018. Higher-order coreference resolution with coarse-to-fine inference. In _Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers)_ , pages 687–692, New Orleans, Louisiana. Association for Computational Linguistics. * Li et al. (2020) Belinda Z. Li, Gabriel Stanovsky, and Luke Zettlemoyer. 2020. Active learning for coreference resolution using discrete annotation. In _Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics_ , pages 8320–8331. * Liakata et al. (2013) Maria Liakata, Simon Dobnik, Shyamasree Saha, Colin Batchelor, and Dietrich Rebholz-Schuhmann. 2013. A discourse-driven content model for summarising scientific articles evaluated in a complex question answering task. In _Proceedings of Conference on Empirical Methods in Natural Language Processing, EMNLP 2013_ , pages 747–757. Association for Computational Linguistics. * Ling et al. (2015) Xiao Ling, Sameer Singh, and Daniel S. Weld. 2015. Design challenges for entity linking. 
_Transactions of the Association for Computational Linguistics_ , 3:315–328. * Liu et al. (2019) Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A robustly optimized BERT pretraining approach. _Computing Research Repository_ , arXiv:1907.11692. * Louis and Nenkova (2013) Annie Louis and Ani Nenkova. 2013. A corpus of science journalism for analyzing writing quality. _Dialogue and Discourse_ , 4(2):87–117. * Peters et al. (2018) Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In _Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies_ , pages 2227–2237. * Ponti et al. (2018) Edoardo Maria Ponti, Ivan Vulić, Goran Glavaš, Nikola Mrkšić, and Anna Korhonen. 2018. Adversarial propagation and zero-shot cross-lingual transfer of word vector specialization. In _Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing_ , pages 282–293. * Raiman and Raiman (2018) Jonathan Raiman and Olivier Raiman. 2018. DeepType: Multilingual entity linking by neural type system evolution. In _AAAI Conference on Artificial Intelligence_. * Rao et al. (2010) Delip Rao, Paul McNamee, and Mark Dredze. 2010. Streaming cross document entity coreference resolution. In _Coling 2010: Posters_ , pages 1050–1058, Beijing, China. Coling 2010 Organizing Committee. * Ravenscroft et al. (2018) James Ravenscroft, Amanda Clare, and Maria Liakata. 2018. HarriGT: A tool for linking news to science. In _Proceedings of ACL 2018, System Demonstrations_ , pages 19–24. * Recasens et al. (2010) Marta Recasens, Lluís Màrquez, Emili Sapena, M. Antònia Martí, Mariona Taulé, Véronique Hoste, Massimo Poesio, and Yannick Versley. 2010. 
SemEval-2010 task 1: Coreference resolution in multiple languages. In _Proceedings of the 5th International Workshop on Semantic Evaluation_ , pages 1–8. Association for Computational Linguistics. * Ribeiro et al. (2020) Marco Tulio Ribeiro, Tongshuang Wu, Carlos Guestrin, and Sameer Singh. 2020. Beyond accuracy: Behavioral testing of NLP models with CheckList. In _Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics_ , pages 4902–4912, Online. Association for Computational Linguistics. * Shen et al. (2015) W. Shen, J. Wang, and J. Han. 2015. Entity linking with a knowledge base: Issues, techniques, and solutions. _IEEE Transactions on Knowledge and Data Engineering_ , 27(2):443–460. * Vilain et al. (1995) Marc Vilain, John Burger, John Aberdeen, Dennis Connolly, and Lynette Hirschman. 1995. A model-theoretic coreference scoring scheme. In _MUC6 ’95: Proceedings of the 6th conference on Message understanding_ , pages 45–52. * Wadden et al. (2020) David Wadden, Shanchuan Lin, Kyle Lo, Lucy Lu Wang, Madeleine van Zuylen, Arman Cohan, and Hannaneh Hajishirzi. 2020. Fact or fiction: Verifying scientific claims. _Computing Research Repository_ , arXiv:2004.14974. * Weischedel et al. (2013) Ralph Weischedel, Martha Palmer, Mitchell Marcus, Eduard Hovy, Sameer Pradhan, Lance Ramshaw, Nianwen Xue, Ann Taylor, Jeff Kaufman, Michelle Franchini, Mohammed El-Bachouti, Robert Belvin, and Ann Houston. 2013. Ontonotes release 5.0 LDC2013T19. Linguistic Data Consortium, Philadelphia, PA. * Wieting et al. (2015) John Wieting, Mohit Bansal, Kevin Gimpel, and Karen Livescu. 2015. From paraphrase database to compositional paraphrase model and back. _Transactions of the Association for Computational Linguistics_ , 3:345–358. * Yin et al. (2019) Xiaoyao Yin, Yangchen Huang, Bin Zhou, Aiping Li, Long Lan, and Yan Jia. 2019. Deep entity linking via eliminating semantic ambiguity with BERT. _IEEE Access_ , 7:169434–169445. 
* Yu and Dredze (2014) Mo Yu and Mark Dredze. 2014. Improving lexical embeddings with semantic knowledge. In _Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics_ , pages 545–550.
# On foci of ellipses inscribed in cyclic polygons Markus Hunziker Department of Mathematics, Baylor University, Waco TX, USA <EMAIL_ADDRESS>, Andrei Martínez-Finkelshtein Department of Mathematics, Baylor University, Waco TX, USA, and Department of Mathematics, University of Almería, Almería, Spain<EMAIL_ADDRESS>, Taylor Poe Department of Mathematics, Baylor University, Waco TX, USA <EMAIL_ADDRESS>and Brian Simanek Department of Mathematics, Baylor University, Waco TX, USA<EMAIL_ADDRESS> ###### Abstract. Given a natural number $n\geq 3$ and two points $a$ and $b$ in the unit disk $\mathbb{D}$ in the complex plane, it is known that there exists a unique elliptical disk having $a$ and $b$ as foci that can also be realized as the intersection of a collection of convex cyclic $n$-gons whose vertices fill the whole unit circle $\mathbb{T}$. What is less clear is how to find a convenient formula or expression for such an elliptical disk. Our main results reveal how orthogonal polynomials on the unit circle provide a useful tool for finding such a formula for some values of $n$. The main idea is to realize the elliptical disk as the numerical range of a matrix and the problem reduces to finding the eigenvalues of that matrix. ###### Key words and phrases: Orthogonal polynomials, Poncelet Ellipses, Blaschke products ###### 2010 Mathematics Subject Classification: Primary: 30J10, 42C05; Secondary: 14N15, 14H50, 47A12 ## 1\. Introduction Suppose $n\geq 3$ is a natural number and $E$ is an ellipse in the open unit disk $\mathbb{D}$ in the complex plane. A classical result known as Poncelet’s Theorem asserts that if there is an $n$-gon $P$ inscribed in the unit circle $\mathbb{T}$ with every side of $P$ tangent to $E$, then there are in fact infinitely many such $n$-gons and the union of the vertices of these $n$-gons fills $\mathbb{T}$. In this case, the ellipse $E$ is said to be a Poncelet $n$-ellipse. 
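For $n=3$, an explicit description is classical and worth recalling for orientation (it can be derived via degree-3 Blaschke products, as in the work of Daepp, Gorkin, and Mortini; we state it here as background rather than as a result of this paper): the boundary of the unique Poncelet 3-ellipse with foci $a,b\in\mathbb{D}$ is

```latex
\left\{\, z \in \mathbb{C} \;:\; |z-a| + |z-b| = |1 - \overline{a}\,b| \,\right\}.
```

Note that this ellipse indeed lies in $\mathbb{D}$, since $|1-\overline{a}b|^{2}-|a-b|^{2}=(1-|a|^{2})(1-|b|^{2})>0$, so the defining sum of distances exceeds $|a-b|$.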
A simple argument shows that if $a,b\in\mathbb{D}$ and $n\geq 3$ is a natural number, then there exists a unique Poncelet $n$-ellipse with foci at $a$ and $b$. However, if $n>3$, then it is not obvious how to write down a formula for this ellipse or deduce any properties of its size (such as area, eccentricity, etc.). Some early relevant formulas for this purpose were found by Cayley [5, Chapter 5], but they are not easy formulas to use. Some of the main results in this paper will show how to write an explicit expression for Poncelet $n$-ellipses when $n=4$ or $n=6$. Figure 1. A Poncelet 3-ellipse and its two foci. To accomplish this task, we must frame this problem in a broader context. The main tool we will use is numerical ranges of a special class of $(n-1)\times(n-1)$ matrices called completely non-unitary contractions of defect index $1$ (denoted $S_{n-1}$). The specifics of these objects are provided in Section 2 below, but for now it is enough for us to say that the numerical range $W(A)$ of a matrix $A\in S_{n-1}$ is a strictly convex subset of the unit disk with a smooth boundary (see [14]). Furthermore, the boundary of this set $\partial W(A)$ has the Poncelet property, meaning that every point on $\mathbb{T}$ is the vertex of an $n$-gon that is inscribed in $\mathbb{T}$ having every side tangent to $\partial W(A)$ and every point on $\partial W(A)$ is a point of tangency for such an $n$-gon (see [4]). Our approach to the problem described in the previous paragraph has its roots in work of Gau and Wu [10] and aims to realize the desired ellipse as the numerical range of an appropriate matrix in $S_{n-1}$. Theorem 2 below assures us that this problem has a solution. One simplification of our problem comes from the fact that instead of finding a matrix in $S_{n-1}$ with the desired properties, it suffices to find just the eigenvalues of that matrix. 
This is because numerical ranges are preserved by unitary conjugation and every matrix in $S_{n-1}$ is unitarily equivalent to a canonical form called a cutoff CMV matrix (see [13, 14]). Such matrices are an important part of the theory of orthogonal polynomials on the unit circle and will play an essential role in our analysis. Further details are presented in Section 2. Returning to our original problem, notice that Theorem 2 states that both $a$ and $b$ must be eigenvalues of the desired matrix. Thus, one really only needs to determine the remaining $n-3$ eigenvalues, which is why the problem becomes trivial when $n=3$. When $n=4$, a concise formula for the one remaining eigenvalue appears in [12], and we give another proof of that formula in Section 3. In [15], Mirman presented a collection of algebraic relationships that must be satisfied by the eigenvalues we seek. While this finding is significant, it falls short of presenting a complete solution to our problem because the algebraic relationships admit multiple solutions (even in $\mathbb{D}^{n-3}$). In Section 4, we will present a different collection of algebraic relationships that applies in the case $n=6$ and admits a unique solution in $\mathbb{D}^{3}$, which means that the unique solution to this system is the collection of $3$ eigenvalues that we seek. In Section 5, we will examine some additional properties of all of the solutions to Mirman’s system of equations in the case $n=5$. The next section is a brief review of the background, notation, and terminology that will be relevant to the remainder of the paper. Many of these topics are discussed in much greater detail in [3, 13], and a thorough introduction to orthogonal polynomials on the unit circle can be found in [19, 20]. ## 2\. Background $\&$ Notation Our primary objects of study will be numerical ranges of matrices. 
The _numerical range_ of a matrix $A\in\mathbb{C}^{n\times n}$ is the subset of the complex plane $\mathbb{C}$ given by $W(A)=\\{\langle x,Ax\rangle:\,x\in\mathbb{C}^{n},\;\|x\|=1\\}.$ For any matrix $A$, the set $W(A)$ is a compact and convex subset of $\mathbb{C}$ (a fact known as the Toeplitz-Hausdorff Theorem) that contains the eigenvalues of $A$. If $W(A)$ is bounded by an ellipse, then we will say that $W(A)$ is an elliptical disk. The matrices $A$ that we are most interested in have the following properties: 1. (i) $\|A\|=1$; 2. (ii) all eigenvalues of $A$ are in $\mathbb{D}$; 3. (iii) $\operatorname{\textrm{rank}}(I-AA^{*})=\operatorname{\textrm{rank}}(I-A^{*}A)=1.$ The set of all $n\times n$ matrices satisfying properties (i-iii) is precisely the set $S_{n}$ that we described in the Introduction. As we stated there, it is known that the numerical range of a matrix in $S_{n}$ is the convex hull of an algebraic curve of class $n$ and has the $(n+1)$-Poncelet property, meaning that every point on $\partial W(A)$ is a point of tangency for a convex $(n+1)$-gon that is circumscribed about $W(A)$ and inscribed in $\mathbb{T}$ (see [13] for details). There are several canonical forms of matrices from the class $S_{n}$ (see [13, Section 2.3]) and the one that we will use is that of a cutoff CMV matrix. To define a CMV matrix, first define a sequence of $2\times 2$ matrices $\\{\Theta_{j}\\}_{j=0}^{\infty}$ by $\Theta_{j}=\begin{pmatrix}\bar{\alpha}_{j}&\sqrt{1-|\alpha_{j}|^{2}}\\\ \sqrt{1-|\alpha_{j}|^{2}}&-\alpha_{j}\end{pmatrix},$ where $\alpha_{j}\in\mathbb{D}$. One then defines the operators $\mathcal{L}$ and $\mathcal{M}$ by $\mathcal{L}=\Theta_{0}\oplus\Theta_{2}\oplus\Theta_{4}\oplus\cdots,\qquad\mathcal{M}=1\oplus\Theta_{1}\oplus\Theta_{3}\oplus\cdots$ where the initial $1$ in the definition of $\mathcal{M}$ is a $1\times 1$ identity matrix. 
The CMV matrix corresponding to the sequence $\\{\alpha_{n}\\}_{n=0}^{\infty}$ is $\bm{\mathcal{G}}:=\mathcal{L}\mathcal{M}=\begin{pmatrix}\overline{\alpha_{0}}&\overline{\alpha_{1}}\rho_{0}&\rho_{1}\rho_{0}&0&0&\dots\\\ \rho_{0}&-\overline{\alpha_{1}}\alpha_{0}&-\rho_{1}\alpha_{0}&0&0&\dots\\\ 0&\overline{\alpha_{2}}\rho_{1}&-\overline{\alpha_{2}}\alpha_{1}&\overline{\alpha_{3}}\rho_{2}&\rho_{3}\rho_{2}&\dots\\\ 0&\rho_{2}\rho_{1}&-\rho_{2}\alpha_{1}&-\overline{\alpha_{3}}\alpha_{2}&-\rho_{3}\alpha_{2}&\dots\\\ 0&0&0&\overline{\alpha_{4}}\rho_{3}&-\overline{\alpha_{4}}\alpha_{3}&\dots\\\ \dots&\dots&\dots&\dots&\dots&\dots\end{pmatrix},\quad\rho_{n}=\sqrt{1-|\alpha_{n}|^{2}}$ (2.1) (see [19, Section 4.2]). Since each of $\mathcal{L}$ and $\mathcal{M}$ is a direct sum of unitary matrices, each of $\mathcal{L}$ and $\mathcal{M}$ is unitary and hence $\mathcal{G}$ is unitary as an operator on $\ell^{2}(\mathbb{N})$. The principal $n\times n$ submatrix of $\mathcal{G}$ will also be called the $n\times n$ cut-off CMV matrix, which we will denote by $\mathcal{G}^{(n)}$. It is easy to see that $\mathcal{G}^{(n)}\in S_{n}$. CMV matrices are intimately connected with the theory of orthogonal polynomials on the unit circle (OPUC). Indeed, if one defines $\Phi_{n}(z):=\det(zI_{n}-\mathcal{G}^{(n)}),$ (2.2) then the polynomial $\Phi_{n}(z)$ is the degree $n$ monic orthogonal polynomial with respect to the measure $\mu$ that is the spectral measure of $\mathcal{G}$ and the vector $\vec{e}_{1}$. Since $\mathcal{G}$ is unitary, the formula (2.2) implies that all zeros of $\Phi_{n}$ are in $\mathbb{D}$. 
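The block construction above is easy to exercise numerically. The following sketch (Python with NumPy; the function names and sample Verblunsky coefficients are our own illustrative choices, not from the text) assembles the principal $n\times n$ submatrix of $\mathcal{G}=\mathcal{L}\mathcal{M}$ and checks the three defining properties of the class $S_{n}$ listed earlier.

```python
import numpy as np

def theta(a):
    """2x2 block Theta_j for a Verblunsky coefficient a with |a| < 1."""
    rho = np.sqrt(1 - abs(a) ** 2)
    return np.array([[np.conj(a), rho], [rho, -a]], dtype=complex)

def cutoff_cmv(alphas):
    """Principal n x n submatrix of G = L M for alpha_0, ..., alpha_{n-1}.
    Coefficients beyond alpha_{n-1} do not enter this corner, so we pad with 0."""
    n = len(alphas)
    m = n + 3                                   # number of 2x2 blocks to build
    a = list(alphas) + [0.0] * (2 * m)          # padding with zero coefficients
    L = np.zeros((2 * m, 2 * m), dtype=complex)
    M = np.zeros((2 * m, 2 * m), dtype=complex)
    M[0, 0] = 1.0                               # the initial 1x1 identity in M
    for j in range(m):                          # L = Theta_0 + Theta_2 + ...
        L[2 * j:2 * j + 2, 2 * j:2 * j + 2] = theta(a[2 * j])
    for j in range(m - 1):                      # M = 1 + Theta_1 + Theta_3 + ...
        M[2 * j + 1:2 * j + 3, 2 * j + 1:2 * j + 3] = theta(a[2 * j + 1])
    return (L @ M)[:n, :n]

G = cutoff_cmv([0.5, 0.3j, -0.2])                                        # a matrix in S_3
assert np.all(np.abs(np.linalg.eigvals(G)) < 1)                          # eigenvalues in D
assert abs(np.linalg.norm(G, 2) - 1) < 1e-12                             # ||G|| = 1
assert np.linalg.matrix_rank(np.eye(3) - G @ G.conj().T, tol=1e-8) == 1  # defect index 1
```

As a further consistency check with (2.2), one finds $\det\mathcal{G}^{(n)}=\overline{\alpha}_{n-1}$, reflecting $\Phi_{n}(0)=-\overline{\alpha}_{n-1}$.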
Furthermore, the coefficients $\\{\alpha_{n}\\}_{n=0}^{\infty}$ that are used to define $\mathcal{G}$ are related to $\\{\Phi_{n}\\}_{n=0}^{\infty}$ by the Szegő recursion: $\begin{pmatrix}\Phi_{k+1}(z)\\\ \Phi_{k+1}^{*}(z)\end{pmatrix}=\begin{pmatrix}z&-\overline{\alpha_{k}}\\\ -\alpha_{k}z&1\end{pmatrix}\,\begin{pmatrix}\Phi_{k}(z)\\\ \Phi_{k}^{*}(z)\end{pmatrix},$ (2.3) where if $\Phi_{n}(z)=\sum_{j=0}^{n}c_{j}z^{j},$ then $\Phi^{*}_{n}(z)=\sum_{j=0}^{n}\overline{c_{j}}z^{n-j}=z^{n}\,\overline{\Phi_{n}\left(1/\overline{z}\right)}.$ (2.4) $\Phi_{n}^{*}$ is called the reversed polynomial of $\Phi_{n}$. Observe that $\Phi^{*}_{n}$ can be of degree strictly less than $n$. It follows from the Szegő recursion that $\alpha_{n}=-\overline{\Phi_{n+1}(0)}$ and the sequence $\\{\alpha_{n}\\}_{n=0}^{\infty}$ is often called the sequence of Verblunsky coefficients for the measure $\mu$. For future use, let us define the notation $\Phi_{n}(z)=\mathcal{S}_{\alpha_{n-1}}(\Phi_{n-1}(z)),\qquad\Phi_{n}^{*}(z)=\mathcal{T}_{\alpha_{n-1}}(\Phi_{n-1}^{*}(z)).$ to say that $\Phi_{n}$ is related to $\Phi_{n-1}$ by the Szegő recursion and the parameter $\alpha_{n-1}$. Some of the most important theorems in the study of OPUC come from establishing the following bijections (see [19, Chapter 1]): 1. $\bullet$ All of the zeros of $\Phi_{n,\mu}(z)$ are in $\mathbb{D}$ and any collection $\\{z_{j}\\}_{j=1}^{n}\in\mathbb{D}^{n}$ is the zero set of $\Phi_{n,\nu}(z)$ for some $\nu$ supported on $\mathbb{T}$. 2. $\bullet$ The sequence $\\{\Phi_{n,\mu}(0)\\}_{n\in\mathbb{N}}$ is a sequence in $\mathbb{D}$ and every sequence $\\{\gamma_{j}\\}_{j\in\mathbb{N}}\in\mathbb{D}^{\mathbb{N}}$ satisfies $\Phi_{n,\nu}(0)=\gamma_{n}$ for all $n\in\mathbb{N}$ and some measure $\nu$ supported on $\mathbb{T}$. This last fact (known as Verblunsky’s Theorem [19, Section 1.7]) tells us that the sequence $\\{\alpha_{n}\\}_{n=0}^{\infty}$ completely characterizes the measure $\mu$ on $\mathbb{T}$. 
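In computational form, the Szegő recursion (2.3) and its inverse look as follows (a Python/NumPy sketch; polynomials are stored as coefficient arrays with the constant term first, and all names and sample values are our own). The inverse direction recovers each coefficient from $\alpha_{k}=-\overline{\Phi_{k+1}(0)}$.

```python
import numpy as np

def szego_step(Phi, alpha):
    """Phi_{k+1}(z) = z*Phi_k(z) - conj(alpha)*Phi_k^*(z), per (2.3)."""
    Phi = np.asarray(Phi, dtype=complex)
    star = np.conj(Phi[::-1])                    # reversed polynomial, per (2.4)
    return np.concatenate(([0], Phi)) - np.conj(alpha) * np.concatenate((star, [0]))

def inverse_szego_step(Phi):
    """Recover (Phi_k, alpha_k) from Phi_{k+1} by inverting the 2x2 system in (2.3)."""
    Phi = np.asarray(Phi, dtype=complex)
    alpha = -np.conj(Phi[0])                     # alpha_k = -conj(Phi_{k+1}(0))
    star = np.conj(Phi[::-1])
    prev = (Phi + np.conj(alpha) * star)[1:] / (1 - abs(alpha) ** 2)
    return prev, alpha

alphas = [0.5, 0.3j, -0.2 + 0.1j]                # sample Verblunsky coefficients
Phi = np.array([1.0 + 0j])                       # Phi_0 = 1
for a in alphas:
    Phi = szego_step(Phi, a)
assert np.all(np.abs(np.roots(Phi[::-1])) < 1)   # zeros of an OPUC lie in D

recovered = []
for _ in alphas:
    Phi, a = inverse_szego_step(Phi)
    recovered.append(a)
assert np.allclose(recovered[::-1], alphas)      # coefficients recovered in reverse order
```

The round trip is the bijection of Verblunsky's Theorem in miniature: the polynomial determines its coefficient sequence and vice versa.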
If it is necessary to make the sequence of Verblunsky coefficients explicit, we will write $\Phi_{n}(z;\alpha_{0},\ldots,\alpha_{n-1})$. We also note here that the Szegő recursion is invertible. This means that if we know $\Phi_{n}(z;\alpha_{0},\ldots,\alpha_{n-1})$, then we can recover $\Phi_{j}(z;\alpha_{0},\ldots,\alpha_{j-1})$ for all $j<n$ and hence we can recover $\alpha_{j}$ for all $j=0,\ldots,n-1$ (by evaluation at $0$). Given a family of OPUC, replacing the last Verblunsky coefficient $\alpha_{n-1}$ in the Szegő recursion with $\lambda\in\mathbb{T}$ yields a degree-$n$ paraorthogonal polynomial on the unit circle (POPUC) $\Phi_{n}(z;\alpha_{0},...,\alpha_{n-2},\lambda)=z\Phi_{n-1}(z;\alpha_{0},...,\alpha_{n-2})-\overline{\lambda}\Phi^{*}_{n-1}(z;\alpha_{0},...,\alpha_{n-2}).$ In contrast to OPUC, the zeros of POPUC are on $\mathbb{T}$. Given $\Phi_{n-1}(z)$, we can define $\\{\Phi_{n}^{(\lambda)}\\}$ as the set of all degree-$n$ POPUC for $\Phi_{n-1}$ as $\lambda$ varies around $\mathbb{T}$. We have already noted that the OPUC $\Phi_{n}$ is the characteristic polynomial of a cut-off CMV matrix $\mathcal{G}^{(n)}$. Using this parameter $\lambda\in\mathbb{T}$, we can characterize a family of rank one unitary dilations of $\mathcal{G}^{(n)}$. By adding one row and one column to $\mathcal{G}^{(n)}$, we define a unitary $(n+1)\times(n+1)$ matrix whose characteristic polynomial is $\Phi_{n+1}^{(\lambda)}(z)$. While the numerical range of $\mathcal{G}^{(n)}$ has the $(n+1)$-Poncelet property, the numerical ranges of its unitary dilations are bounded by $(n+1)$-gons inscribed in $\mathbb{T}$ and circumscribed around $W(\mathcal{G}^{(n)})$. The vertices of these $(n+1)$-gons are the eigenvalues of the matrices, or equivalently, the zeros of the POPUC. We have already mentioned that the boundary of the numerical range of $\mathcal{G}^{(n)}$ has the $(n+1)$-Poncelet property. 
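A quick numerical sketch of the POPUC construction (Python/NumPy; the sample polynomial and the values of $\lambda$ are illustrative choices of our own, not from the text):

```python
import numpy as np

def popuc(Phi, lam):
    """z*Phi(z) - conj(lam)*Phi^*(z) for |lam| = 1 (coefficients constant term first)."""
    Phi = np.asarray(Phi, dtype=complex)
    star = np.conj(Phi[::-1])
    return np.concatenate(([0], Phi)) - np.conj(lam) * np.concatenate((star, [0]))

Phi2 = np.poly([0.3, -0.2 + 0.4j])[::-1]         # a monic Phi_2 with zeros in D
for lam in np.exp(1j * np.array([0.0, 0.7, 2.1])):
    vertices = np.roots(popuc(Phi2, lam)[::-1])  # eigenvalues of the unitary dilation
    assert np.allclose(np.abs(vertices), 1.0)    # all zeros lie on T
```

Each choice of $\lambda$ gives a different set of unimodular zeros, that is, a different polygon inscribed in $\mathbb{T}$ and circumscribing the numerical range, consistent with the Poncelet property just described.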
This phenomenon can be reformulated as saying that $\partial W(\mathcal{G}^{(n)})$ is the envelope of the family of circumscribing $(n+1)$-gons. The precise definition of an envelope and its relationship to numerical ranges is complicated, so we will restrict our attention only to the most relevant facts and refer the reader to [13] for details. The envelope is most easily understood by means of a dual curve. If the matrix is $\mathcal{G}^{(n)}$, then the dual curve is an algebraic curve of degree exactly $n$ and the dual of that dual is an algebraic curve of class $n$ with multiple components. The largest component is the boundary of the numerical range of $\mathcal{G}^{(n)}$ and we denote this component by $C_{1}$ (to be consistent with notation in [13]). The other components can be numbered $C_{2},\ldots,C_{\lceil n/2\rceil}$ and they too have an interpretation in terms of the $(n+1)$-gons that circumscribe $\partial W(\mathcal{G}^{(n)})$. To understand this interpretation, let us consider the component $C_{2}$, which we call the Pentagram curve after the pentagram map from [18]. For each $(n+1)$-gon $P$ that circumscribes $\partial W(\mathcal{G}^{(n)})$, let us order the vertices cyclically on $\mathbb{T}$ by $P_{1},P_{2},\ldots,P_{n+1}$. Consider now the “polygon” obtained by joining $P_{j}$ to $P_{j+2}$ for every $j=1,\ldots,n+1$ (where arithmetic is done modulo $n+1$). The resulting shape will be a non-convex $(n+1)$-gon if $n$ is even and it will be two $(n+1)/2$-gons with interlacing vertices if $n$ is odd. In either case, one can define the envelope of this collection of polygons and that will be the curve $C_{2}$. A similar construction yields the other curves $C_{j}$. When $n=6$, we will refer to the curve $C_{3}$ as the Brianchon curve (after Brianchon’s Theorem [3, Theorem 5.4]) and we note that the curve obtained from this procedure can be a single point (see Figures 4 and 5). 
It was Darboux who first proved that if the curve $C_{1}$ is an ellipse, then all of the other curves $C_{j}$ are ellipses, where we consider a single point to be a degenerate ellipse (see [13]). These ellipses all happen to be from the same package (see [15]). In the context of our motivating problem from the opening paragraph, if $C_{1}$ is an ellipse, then the foci of $C_{1}$ are eigenvalues of $\mathcal{G}^{(n)}$. Darboux’s result implies that the other $C_{j}$ curves are ellipses, and it turns out that their foci are also eigenvalues of $\mathcal{G}^{(n)}$. It turns out that this property persists even if $C_{1}$ is not an ellipse. More precisely, if any curve $C_{j}$ is an ellipse, then the foci of that ellipse are eigenvalues of $\mathcal{G}^{(n)}$. For this reason, finding matrices $\mathcal{G}^{(n)}$ for which some components $C_{j}$ are ellipses is an interesting problem related to our primary objective, and we will present results of this kind in later sections (see Theorem 1, Theorem 13, and Theorem 16 below). We will also work with Blaschke products $B_{n}(z):=\frac{\Phi_{n}(z)}{\Phi_{n}^{*}(z)},$ (2.5) where $\Phi_{n}(z)=\prod_{j=1}^{n}\left(z-z_{j}\right),\qquad\qquad|z_{j}|<1.$ With this notation, we will say that the Blaschke product $B_{n}(z)$ has degree $n$. If we need to make the dependence on the zeros explicit, then we will write $B_{n}(z;z_{1},\ldots,z_{n})$. We will say that a Blaschke product $B_{n}(z)$ is regular if $B_{n}(0)=0$. Our discussion so far shows that the following sets are in bijection with one another: 1. (i) equivalence classes of matrices in $S_{n-1}$ (where equivalence is defined by unitary conjugation) 2. (ii) monic polynomials of degree $n-1$ with all of their zeros in $\mathbb{D}$ 3. (iii) regular degree $n$ Blaschke products 4. 
(iv) $\mathbb{D}^{n-1}$ (thought of as collections of Verblunsky coefficients) Much of what we will do in Sections 3 and 4 relates properties of the numerical range of a cutoff CMV matrix $\mathcal{G}^{(n-1)}$ to properties of the corresponding Blaschke product $z\Phi_{n-1}(z)/\Phi_{n-1}^{*}(z)$. The next result will be very helpful in that regard. It is essentially an OPUC version of a result from [2]. ###### Theorem 1. Let $n=jk$ and $B_{n}(z)$ be a regular Blaschke product, i.e. $\displaystyle B_{n}(z)=\frac{z\Phi_{n-1}(z)}{\Phi_{n-1}^{*}(z)}$. Then $B_{n}(z)$ can be expressed as a composition of two regular Blaschke products $B_{j}(B_{k}(z))$ (with the degree of $B_{m}$ equal to $m$) if and only if $\Phi_{n-1}(z)$ factors as $\Phi_{n-1}(z)=\Phi_{k-1}(z)\prod_{m=1}^{j-1}\mathcal{S}_{\bar{a}_{m}}(\Phi_{k-1}(z))$ for some $\Phi_{k-1}$ having all of its zeros in $\mathbb{D}$ and some $\\{a_{1},\ldots,a_{j-1}\\}\in\mathbb{D}^{j-1}$. If this factorization holds, then the zeros of $\Phi_{k-1}$ are the zeros of $B_{k}(z)/z$ and $\\{a_{1},\ldots,a_{j-1}\\}$ is the zero set of $B_{j}(z)/z$. ###### Proof. Let $B_{n}(z)=B_{j}(B_{k}(z))$. Then $B_{j}=\frac{z(z-a_{1})...(z-a_{j-1})}{(1-\overline{a_{1}}z)...(1-\overline{a_{j-1}}z)}$ for some $\\{a_{1},\ldots,a_{j-1}\\}\in\mathbb{D}^{j-1}$. 
If $B_{k}=\frac{z\Phi_{k-1}(z)}{\Phi_{k-1}^{*}(z)},$ then $\displaystyle B_{n}(z)$ $\displaystyle=B_{j}\left(\frac{z\Phi_{k-1}(z)}{\Phi_{k-1}^{*}(z)}\right)$ $\displaystyle=\frac{z\Phi_{k-1}(z)(z\Phi_{k-1}(z)-a_{1}\Phi_{k-1}^{*}(z))\cdots(z\Phi_{k-1}(z)-a_{j-1}\Phi_{k-1}^{*}(z))}{\Phi_{k-1}^{*}(z)(\Phi_{k-1}^{*}(z)-\overline{a_{1}}z\Phi_{k-1}(z))\cdots(\Phi_{k-1}^{*}(z)-\overline{a_{j-1}}z\Phi_{k-1}(z))}$ It follows that $B_{n}(z)=\frac{z\Phi_{k-1}(z)\mathcal{S}_{\bar{a}_{1}}(\Phi_{k-1}(z))\cdots\mathcal{S}_{\bar{a}_{j-1}}(\Phi_{k-1}(z))}{\Phi_{k-1}^{*}(z)\mathcal{T}_{\bar{a}_{1}}(\Phi_{k-1}^{*}(z))\cdots\mathcal{T}_{\bar{a}_{j-1}}(\Phi_{k-1}^{*}(z))}$ and hence $\Phi_{n-1}$ has the desired factorization. Reversing this reasoning shows the converse statement. ∎ Our focus is on finding those $A\in S_{n-1}$ whose numerical range is bounded not just by an $n$-Poncelet curve but by an $n$-Poncelet ellipse. The relationship between numerical ranges of $A\in S_{n-1}$ and Poncelet ellipses is a subject of significant ongoing research (see [2, 3, 4, 7, 8, 12, 13, 15, 17]). The following theorem (from [17]) is the starting point of our investigation and shows that the ellipses that we are looking for do in fact exist. We state it using the terminology that we have defined so far. ###### Theorem 2. [17] Suppose $f_{1},f_{2}\in\mathbb{D}$. There exists a Poncelet $n$-ellipse with foci at $f_{1}$ and $f_{2}$. Furthermore, this ellipse forms the boundary of the numerical range of a matrix $A\in S_{n-1}$ and $f_{1},f_{2}$ are eigenvalues of $A$. Our path forward is now clear. Given $f_{1},f_{2}\in\mathbb{D}$, we want to find a matrix $A\in S_{n-1}$ such that $\partial W(A)$ is an ellipse with foci at $f_{1}$ and $f_{2}$. Theorem 2 tells us that such an $A$ exists, and we know that we can realize it as a cutoff CMV matrix. Such a matrix has $n-1$ eigenvalues in $\mathbb{D}$, two of which must be $f_{1}$ and $f_{2}$. 
A priori, there are no other restrictions on the remaining eigenvalues of $A$ other than they must be in $\mathbb{D}$. In Section 3, we will show how to locate the third eigenvalue of $A$ when $n=4$ and in Section 4 we will consider the case when $n=6$ and find a set of algebraic equations in three variables whose unique solution marks the locations of the other three eigenvalues that we seek. The most significant results known for general $n$ come from [15, 16, 17]. The following theorem is a restatement of a result from [15] using the terminology of matrices from $S_{n}$. ###### Theorem 3. Suppose $E_{n}$ is a Poncelet $n$-ellipse in $\mathbb{D}$ that is also the boundary of the numerical range of a matrix $A\in S_{n-1}$. Suppose the foci of $E_{n}$ are $f_{1}$ and $f_{2}$. Then the eigenvalues of $A$ can be labeled $\\{w_{1},\ldots,w_{n-1}\\}$ so that $w_{1}=f_{1}$, $w_{n-1}=f_{2}$, and $w_{j-1}w_{j+1}=B_{2}(w_{j};f_{1},f_{2}),\qquad\qquad j=2,3,\ldots,n-2$ (2.6) We will refer to the system of equations (2.6) as the “Mirman system”. It gives us a necessary but not sufficient condition for a set of eigenvalues to be the spectrum of a matrix in $S_{n-1}$ whose numerical range is bounded by an ellipse with foci at $f_{1}$ and $f_{2}$. We will see that this system of equations has many solutions and only one of them has the desired interpretation. In Section 5, we will consider matrices in $S_{4}$ and discover some properties of all of the solutions to the Mirman system when $n=5$. ## 3\. The quadrilateral case In this section, we will consider matrices in $S_{3}$ to show how our approach to Poncelet ellipses using OPUC allows us to reformulate, and in some cases strengthen, existing results in the literature. A classification of those matrices $A\in S_{3}$ for which $W(A)$ is an elliptical disk is given in [7]. Recall that a Blaschke product is regular if it maps $0$ to $0$. 
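Before turning to that classification, the non-uniqueness of solutions to the Mirman system is easy to see numerically in the smallest case. For $n=4$, (2.6) reduces to the single relation $f_{1}f_{2}=B_{2}(w;f_{1},f_{2})$ for the middle eigenvalue $w$, and clearing denominators leaves a quadratic in $w$ with $w=0$ as one of its roots (a Python/NumPy sketch; the foci are illustrative values of our own):

```python
import numpy as np

def B2(w, f1, f2):
    """Degree-2 Blaschke product with zeros f1, f2, as in (2.5)."""
    return (w - f1) * (w - f2) / ((1 - np.conj(f1) * w) * (1 - np.conj(f2) * w))

f1, f2 = 0.5, -0.3j                                   # illustrative foci in D
# f1*f2 = B2(w) clears to w*((1 - |f1*f2|^2)*w + c1) = 0, so there are two solutions:
c1 = -(f1 + f2) + f1 * f2 * (np.conj(f1) + np.conj(f2))
w = -c1 / (1 - abs(f1 * f2) ** 2)                     # the nonzero root
assert abs(B2(0, f1, f2) - f1 * f2) < 1e-12           # w = 0 always solves the relation
assert abs(B2(w, f1, f2) - f1 * f2) < 1e-12           # and so does the nonzero root
assert abs(w) < 1                                     # which also lies in D
```

Only the nonzero root has the desired interpretation, as the results below make precise.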
Fujimura [7] showed that $A\in S_{3}$ has a numerical range that is an elliptical disk if and only if there are regular Blaschke products $B_{2},C_{2}$ of degree $2$ such that $B_{A}(z):=\frac{z\det(zI-A)}{\det(zI-A)^{*}}=B_{2}(C_{2}(z))$ (3.1) (see also [2, 12]). By Theorem 1, we can state the following theorem. ###### Theorem 4. Suppose $A\in S_{3}$. The numerical range of $A$ is an elliptical disk if and only if there exist $a,b\in\mathbb{D}$ so that $\det(zI-A)=\Phi_{1}(z;a)\Phi_{2}(z;a,b).$ If this condition holds, then the pentagram curve is the single point $\bar{a}$ and the foci of $\partial W(A)$ are the zeros of $\Phi_{2}(z;a,b)$. Remark. Theorem 4 implies that to any Poncelet $4$-ellipse $E\subseteq\mathbb{D}$, one can associate a well-defined point in $\mathbb{D}$ that will be the pentagram curve of the matrix $A\in S_{3}$ that satisfies $\partial W(A)=E$. We will call this point the pentagram point of the ellipse $E$. Figure 2. The Poncelet 4-ellipse with foci $-0.28+0.12i$, $0.6+0.24i$ and pentagram point $0.4+0.3i$. Theorem 4 gives us a new interpretation of the algorithm for finding a matrix in $S_{3}$ with elliptic numerical range and two prescribed foci (such an algorithm can be found in [12]). Indeed, given $\\{f_{1},f_{2}\\}\in\mathbb{D}^{2}$, consider the polynomial $\Phi_{2}(z)=(z-f_{1})(z-f_{2})$. Perform the Inverse Szegő recursion to obtain a degree $1$ monic polynomial $\Phi_{1}(z;\bar{f}_{3})$ whose zero $f_{3}$ is in $\mathbb{D}$. The $3\times 3$ cutoff CMV matrix with eigenvalues at $\\{f_{1},f_{2},f_{3}\\}$ has the desired property. We can even generalize this algorithm to find an $A\in S_{3}$ with elliptic numerical range having one prescribed focus and a pentagram curve that is a prescribed point. ###### Theorem 5. 

Given $\\{f_{1},f_{2}\\}\in\mathbb{D}^{2}$, there exists a unique $3\times 3$ cutoff CMV matrix whose numerical range is bounded by an ellipse with one focus at $f_{1}$ and such that the pentagram curve is the single point $\\{f_{2}\\}$. ###### Proof. By Theorem 4, this amounts to showing that we can find $b\in\mathbb{D}$ such that $\Phi_{2}(z;\bar{f}_{2},b)$ vanishes at $f_{1}$. It is easy to see that this can be achieved precisely by setting $\bar{b}=\frac{f_{1}\Phi_{1}(f_{1};\bar{f}_{2})}{\Phi_{1}^{*}(f_{1};\bar{f}_{2})}=\frac{f_{1}(f_{1}-f_{2})}{1-\bar{f}_{2}f_{1}}$ (see also [21]). ∎ One can also find an $A_{\alpha}\in S_{3}$ with $\partial W(A_{\alpha})$ an ellipse and whose pentagram curve is a specified point. Since this is a weaker set of conditions than was used in Theorem 5, one expects that we will have many solutions to this problem. Rather than a unique matrix as in Theorem 5, fixing the pentagram point yields a family of matrices in $S_{3}$ parametrized by the Verblunsky coefficient of $\Phi_{2}(z;a,b)$. ###### Proposition 6. Given $\alpha_{0}\in\mathbb{D}$, there exists a one-parameter family $A_{\alpha}\in S_{3},\hskip 2.84526pt\alpha\in\mathbb{D}$ such that for each $A_{\alpha}$, $\partial W(A_{\alpha})$ is an ellipse and the pentagram point of $A_{\alpha}$ is $\alpha_{0}$. ###### Proof. Suppose $\alpha_{0}$ is given and consider the polynomial $\Phi_{1}(z;\bar{\alpha}_{0})$. For any $\alpha\in\mathbb{D}$, consider $\Phi_{2}(z;\bar{\alpha}_{0},\alpha)$ and define $\phi_{3}(z)=\Phi_{1}(z;\bar{\alpha}_{0})\Phi_{2}(z;\bar{\alpha}_{0},\alpha).$ Thinking of $\phi_{3}(z)$ as an OPUC and applying the inverse Szegő recursion allows us to recover the Verblunsky coefficients of $\phi_{3}(z)$ and thus define a cutoff CMV matrix, $A_{\alpha}\in S_{3}$, whose characteristic polynomial is $\phi_{3}(z)$. 
As $\phi_{3}(z)$ factors into a degree one and degree two OPUC related by the Szegő recursion, Theorem 4 implies that $\partial W(A_{\alpha})$ is an ellipse and the pentagram point of $A_{\alpha}$ is $\alpha_{0}$. ∎ Figure 3. Two Poncelet 4-ellipses with the same pentagram point, $0.4+0.3i$. Now suppose we have an $A\in S_{3}$ with numerical range given by an elliptical disk. Suppose that the foci of the ellipse bounding that elliptical disk are $\\{f_{1},f_{2}\\}$ and the pentagram point of that ellipse is $\\{f_{3}\\}$. Then $\det(zI-A)=(z-f_{1})(z-f_{2})(z-f_{3}).$ By Theorem 4, we also have $\det(zI-A)=(z-f_{3})(z(z-f_{3})-b(1-\bar{f}_{3}z))$ for some $b\in\mathbb{D}$. Evaluating both of these expressions at $0$ and equating them shows $b=-f_{1}f_{2}$. It follows that $(z-f_{1})(z-f_{2})=z(z-f_{3})+f_{1}f_{2}(1-\bar{f}_{3}z)=z^{2}+z(-f_{3}-f_{1}f_{2}\bar{f}_{3})+f_{1}f_{2}.$ (3.2) If we replace $z$ by $f_{3}$ in (3.2), we get $(f_{3}-f_{1})(f_{3}-f_{2})=f_{1}f_{2}(1-|f_{3}|^{2}).$ (3.3) If we look at the reversed polynomials in (3.2) and replace $z$ by $f_{3}$, we get $(1-\bar{f}_{1}f_{3})(1-\bar{f}_{2}f_{3})=1-|f_{3}|^{2}.$ (3.4) If we divide (3.3) by (3.4), we recover the Mirman system: $f_{1}f_{2}=B_{2}(f_{3};f_{1},f_{2})$. Solving for $f_{3}$ yields the familiar formula for $f_{3}$ in terms of $f_{1},f_{2}$: $f_{3}=\frac{f_{1}+f_{2}-f_{2}|f_{1}|^{2}-f_{1}|f_{2}|^{2}}{1-|f_{1}f_{2}|^{2}}.$ Notice that we have recovered something that the Mirman system does not give us. One can verify that $f_{3}=0$ is a solution to the Mirman system, but this does not (in general) give us the matrix in $S_{3}$ with elliptical numerical range. Our calculations using OPUC eliminate this extraneous solution. ## 4\. The hexagon case In this section we will consider curves that are realized as the envelope of line segments joining vertices of hexagons inscribed in the unit circle (see [13, Section 3] for a rigorous discussion of envelopes of cyclic polygons).
We know that such curves have three components: the largest one (outer) formed by connecting adjacent eigenvalues of the unitary dilations is the Poncelet curve, the middle component formed by joining alternate eigenvalues is the pentagram curve, and the smallest component formed by joining opposite eigenvalues is the Brianchon curve. Our first results are the following two theorems. ###### Theorem 7. Let $\mathcal{G}^{(5)}$ be a cut-off CMV matrix and $\Phi_{5}(z):=\det(zI_{5}-\mathcal{G}^{(5)}).$ The following are equivalent. 1. (i) the pentagram curve of $\mathcal{G}^{(5)}$ is an ellipse; 2. (ii) there exist regular Blaschke products $\\{B_{j}\\}_{j=2}^{3}$ with $\deg(B_{j})=j$ such that $\frac{z\Phi_{5}(z)}{\Phi_{5}^{*}(z)}=B_{2}(B_{3}(z))$ 3. (iii) There exist $\alpha_{0},\alpha_{1},\alpha_{2}\in\mathbb{D}$ so that $\Phi_{5}(z)=\Phi_{2}(z;\alpha_{0},\alpha_{1})\Phi_{3}(z;\alpha_{0},\alpha_{1},\alpha_{2}).$ If any of these conditions hold, then the foci of the pentagram curve are the zeros of $B_{3}(z)/z$ or equivalently, the zeros of $\Phi_{2}(z;\alpha_{0},\alpha_{1})$. Figure 4. A $6$-Poncelet curve such that the pentagram curve is an ellipse but the Brianchon curve is not a single point. ###### Theorem 8. If we retain the notation from Theorem 7, then the following are equivalent: 1. (i) the Brianchon curve of $\mathcal{G}^{(5)}$ is a single point; 2. (ii) there exist regular Blaschke products $\\{B_{j}\\}_{j=2}^{3}$ with $\deg(B_{j})=j$ such that $\frac{z\Phi_{5}(z)}{\Phi_{5}^{*}(z)}=B_{3}(B_{2}(z))$ 3. (iii) There exist $\alpha_{0},\alpha_{1},\gamma_{1}\in\mathbb{D}$ so that $\Phi_{5}(z)=\Phi_{1}(z;\alpha_{0})\Phi_{2}(z;\alpha_{0},\alpha_{1})\Phi_{2}(z;\alpha_{0},\gamma_{1})$ If any of these conditions hold, then the Brianchon point is the zero of $B_{2}(z)/z$ and the zero of $\Phi_{1}(z;\alpha_{0})$. The equivalence of (i) and (ii) in both theorems is proven in [2]. The equivalence of (ii) and (iii) in both theorems follows from Theorem 1. 
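The equivalence of (ii) and (iii) in Theorem 7 can also be exercised numerically: build $\Phi_{2}(z;\alpha_{0},\alpha_{1})$ and $\Phi_{3}(z;\alpha_{0},\alpha_{1},\alpha_{2})$ from shared Verblunsky coefficients, form $\Phi_{5}=\Phi_{2}\Phi_{3}$, and compare $z\Phi_{5}(z)/\Phi_{5}^{*}(z)$ against the composition $B_{2}(B_{3}(z))$, where the inner factor is $B_{3}=z\Phi_{2}/\Phi_{2}^{*}$ and the outer factor's nonzero zero sits at $\overline{\alpha_{2}}$, as Theorem 1 predicts (a Python/NumPy sketch; the coefficient values are illustrative choices of our own):

```python
import numpy as np

def szego(Phi, alpha):
    """One Szegő step: z*Phi(z) - conj(alpha)*Phi^*(z), constant term first."""
    star = np.conj(Phi[::-1])
    return np.concatenate(([0], Phi)) - np.conj(alpha) * np.concatenate((star, [0]))

def ev(Phi, z):
    """Evaluate a constant-term-first coefficient array at z."""
    return np.polyval(Phi[::-1], z)

a0, a1, a2 = 0.3, -0.2 + 0.4j, 0.1 - 0.5j            # illustrative Verblunsky coefficients
Phi2 = szego(szego(np.array([1.0 + 0j]), a0), a1)    # Phi_2(z; a0, a1)
Phi3 = szego(Phi2, a2)                               # Phi_3(z; a0, a1, a2)
Phi5 = np.polymul(Phi3[::-1], Phi2[::-1])[::-1]      # Phi_5 = Phi_2 * Phi_3

for z in [0.3 + 0.2j, -0.5j, 0.7]:
    B3 = z * ev(Phi2, z) / ev(np.conj(Phi2[::-1]), z)    # inner B_3 = z Phi_2 / Phi_2^*
    comp = B3 * (B3 - np.conj(a2)) / (1 - a2 * B3)       # outer B_2, zero at conj(a2)
    B6 = z * ev(Phi5, z) / ev(np.conj(Phi5[::-1]), z)    # z Phi_5 / Phi_5^*
    assert abs(B6 - comp) < 1e-10                        # the two expressions agree
```

The agreement at generic points reflects the exact identity $z\Phi_{2}\Phi_{3}/(\Phi_{2}^{*}\Phi_{3}^{*})=B_{2}(B_{3}(z))$ that drives the proof of Theorem 1.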
By comparing Theorem 8 with Theorem 4 (and the remark after it), one arrives at the following result. ###### Corollary 9. Let $A\in S_{5}$ have eigenvalues $\\{f_{j}\\}_{j=1}^{5}$. Suppose the Brianchon curve of $A$ is the point $f_{5}$. Then $\\{f_{j}\\}_{j=1}^{4}$ can be labelled in such a way that both of the following conditions hold: 1. (i) $f_{5}$ is the pentagram point of the Poncelet $4$-ellipse with foci at $f_{1}$ and $f_{4}$; 2. (ii) $f_{5}$ is the pentagram point of the Poncelet $4$-ellipse with foci at $f_{2}$ and $f_{3}$. Using ideas from [13], we can prove the following. ###### Theorem 10. If we retain the notation from Theorem 7, then the following are equivalent 1. (i) The Poncelet curve associated with $\mathcal{G}^{(5)}$ is an ellipse. 2. (ii) There exist regular Blaschke products $\\{B_{j}\\}_{j=2}^{3}$ with $\deg(B_{j})=j$ and regular Blaschke products $\\{C_{j}\\}_{j=2}^{3}$ with $\deg(C_{j})=j$ such that $\frac{z\Phi_{5}(z)}{\Phi_{5}^{*}(z)}=B_{3}(B_{2}(z))=C_{2}(C_{3}(z)).$ 3. (iii) The pentagram curve of $\mathcal{G}^{(5)}$ is an ellipse and the Brianchon curve of $\mathcal{G}^{(5)}$ is a single point. Figure 5. A Poncelet 6-ellipse along with the pentagram ellipse and Brianchon point. ###### Proof. The equivalence of (ii) and (iii) is an immediate consequence of Theorems 7 and 8. The fact that (i) implies (iii) is a result of Darboux (see [13, Theorem B]). To see that (iii) implies (i), we will make use of the dual curve described in Section 2. Notice that [13, Section 3] implies that if $C_{1}$, $C_{2}$, or $C_{3}$ is an algebraic curve of degree $1$ or $2$, then the same is true of the corresponding component of the dual curve and vice versa (counting a single point as having degree $1$). The dual curve in this case has degree $5$ (see [9, Section 3]). Thus, if the Brianchon curve has degree $1$ and the pentagram curve has degree $2$, then the Poncelet curve has degree $2$, which means it is an ellipse. 
∎ Theorem 8 reveals an interesting phenomenon. Suppose we are given an $A\in S_{5}$ whose Brianchon curve is a point and whose pentagram curve is not an ellipse (it is easy to find such $A$). Take a rank one unitary dilation $U_{1}$ of $A$ and look at the hexagon with vertices at the eigenvalues of $U_{1}$. The diagonals of this hexagon meet in a single point (which is the Brianchon curve of $A$), so by Brianchon’s Theorem there exists an ellipse $E_{1}$ inscribed in this hexagon. By construction, the ellipse $E_{1}$ is a Poncelet $6$-ellipse, so there is in fact an infinite family of hexagons $\\{H_{\lambda}\\}$ inscribed in $\partial\mathbb{D}$ and circumscribed about $E_{1}$. On the other hand, one can look at any other rank one unitary dilation of $A$ and repeat this process. This gives a second infinite family of hexagons, each of which is inscribed in $\partial\mathbb{D}$ and has its diagonals meeting at the point that is the Brianchon curve of $A$. But these two families of hexagons cannot be the same, for that would imply that the numerical range of $A$ is bounded by an ellipse and we know it is not. Figure 6. A $6$-Poncelet curve that is not an ellipse but whose Brianchon curve is a single point (the Brianchon point). The picture on the right shows the ellipse that is inscribed in one of the Poncelet hexagons. This ellipse exists by Brianchon’s theorem. Notice that the $6$-Poncelet curve contains points in the exterior as well as points in the interior of this ellipse. The next step in our analysis will be very much analogous to that performed in Section 3. If we are given $\\{f_{1},f_{2}\\}\in\mathbb{D}^{2}$, we want to find $A\in S_{5}$ whose numerical range is bounded by an ellipse with foci at $\\{f_{1},f_{2}\\}$. Theorem 2 tells us that such an ellipse exists and is the unique Poncelet $6$-ellipse with foci at $f_{1}$ and $f_{2}$.
We will find a cutoff CMV matrix $\mathcal{G}^{(5)}$ whose numerical range is bounded by an ellipse with foci at $\\{f_{1},f_{2}\\}$, whose pentagram curve is an ellipse (whose foci will be called $\\{f_{3},f_{4}\\}$), and whose Brianchon curve is a single point (that we will call $f_{5}$). ###### Theorem 11. A matrix $A\in S_{5}$ with eigenvalues $\\{f_{j}\\}_{j=1}^{5}$ satisfies all of the following conditions 1. (i) $W(A)$ is an elliptical disk 2. (ii) the foci of $\partial W(A)$ are $\\{f_{1},f_{2}\\}$ 3. (iii) the foci of the pentagram curve are $\\{f_{3},f_{4}\\}$ 4. (iv) the Brianchon curve is the single point $f_{5}$ if and only if the complex numbers $\\{f_{j}\\}_{j=1}^{5}$ satisfy $\displaystyle\\{f_{3},f_{4}\\}=\left\\{\frac{f_{5}-f_{2}}{1-f_{2}\bar{f}_{5}},\frac{f_{5}-f_{1}}{1-f_{1}\bar{f}_{5}}\right\\}$ (4.1) as sets and $\displaystyle f_{3}+f_{4}+f_{1}f_{2}f_{5}\bar{f}_{4}\bar{f}_{3}$ $\displaystyle=f_{1}+f_{2}+f_{5}$ (4.2) $\displaystyle f_{3}f_{4}+f_{1}f_{2}f_{5}(\bar{f}_{4}+\bar{f}_{3})$ $\displaystyle=f_{1}f_{2}+f_{2}f_{5}+f_{1}f_{5}$ Recall that Theorem 8 and Theorem 10 show that if $W(A)$ is bounded by an ellipse, then the characteristic polynomial factors in a certain way (in terms of OPUC) and the degree $1$ polynomial in that factorization vanishes at the Brianchon point. We require the following refinement of that result, which tells us about the zeros of the remaining polynomials in that factorization. ###### Lemma 12. Suppose $A\in S_{5}$ is such that $W(A)$ is an elliptical disk and the eigenvalues of $A$ are $\\{f_{j}\\}_{j=1}^{5}$. Suppose the foci of $\partial W(A)$ are $\\{f_{1},f_{2}\\}$, the foci of the pentagram curve are $\\{f_{3},f_{4}\\}$, and the Brianchon point is $f_{5}$. Write $\det(zI_{5}-A)=\Phi_{1}(z;\alpha_{0})\Phi_{2}(z;\alpha_{0},\alpha_{1})\Phi_{2}(z;\alpha_{0},\gamma_{1})$ as in Theorem 8. If $\Phi_{2}(f_{1};\alpha_{0},\alpha_{1})=0$, then $\Phi_{2}(f_{2};\alpha_{0},\gamma_{1})=0$. ###### Proof. 
Suppose $\Phi_{2}(f_{1};\alpha_{0},\alpha_{1})=0$ and $\Phi_{2}(f_{2};\alpha_{0},\gamma_{1})\neq 0$. Then $\Phi_{2}(f_{2};\alpha_{0},\alpha_{1})=0$ and hence $\Phi_{2}(f_{3};\alpha_{0},\gamma_{1})=\Phi_{2}(f_{4};\alpha_{0},\gamma_{1})=0$. Since $\Phi_{2}(f_{2};\alpha_{0},\alpha_{1})=0$, we can write $(z-f_{5})(z-f_{1})(z-f_{2})=\Phi_{1}(z;\alpha_{0})\Phi_{2}(z;\alpha_{0},\alpha_{1}).$ This means $\\{f_{1},f_{2},f_{5}\\}$ are the eigenvalues of some $A\in S_{3}$ that satisfies the hypotheses of Theorem 4. Applying the Mirman system in the $N=4$ case shows $f_{1}f_{2}=B_{2}(f_{5};f_{1},f_{2})$. The Mirman system in the $N=6$ case shows $f_{3}f_{4}=B_{2}(f_{5};f_{1},f_{2})$ and hence $f_{1}f_{2}=f_{3}f_{4}$. Since $f_{3},f_{4}$ are the zeros of $\Phi_{2}(z;\alpha_{0},\gamma_{1})$ and $\gamma_{1}=-\overline{\Phi_{2}(0;\alpha_{0},\gamma_{1})}$, we conclude that $\gamma_{1}=\alpha_{1}$, which implies $\Phi_{2}(z;\alpha_{0},\gamma_{1})=\Phi_{2}(z;\alpha_{0},\alpha_{1})$. It follows that $\Phi_{2}(f_{2};\alpha_{0},\gamma_{1})=0$, which gives us a contradiction. ∎ Proof of Theorem 11 In Theorem 7 it is stated that the foci of the pentagram ellipse will be the zeros of $\Phi_{2}(z;\alpha_{0},\alpha_{1})$. Thus, the foci of the Poncelet curve and the Brianchon point must be the zeros of $\Phi_{3}(z;\alpha_{0},\alpha_{1},\alpha_{2})$. The product of these zeros is then $\bar{\alpha}_{2}$ and hence we have $z(z-f_{3})(z-f_{4})-f_{1}f_{2}f_{5}(1-\bar{f}_{3}z)(1-\bar{f}_{4}z)=(z-f_{1})(z-f_{2})(z-f_{5})$ (4.3) Equating coefficients of $z$ and $z^{2}$ in (4.3) tells us that (4.2) must hold. One can perform a similar calculation invoking Theorem 8. By equating coefficients of the appropriate polynomials, we find that (4.1) must hold. For the converse statement, the above calculations show that if (4.1) and (4.2) hold, then the conditions (iii) in Theorems 7 and 8 are satisfied (by equating coefficients of polynomials).
Theorem 10 then implies that the cutoff CMV matrix $\mathcal{G}^{(5)}$ with eigenvalues $\\{f_{j}\\}_{j=1}^{5}$ has numerical range that is bounded by an ellipse, has pentagram curve that is an ellipse with foci $\\{f_{3},f_{4}\\}$, and has Brianchon point $f_{5}$. The foci of $\partial W(\mathcal{G}^{(5)})$ are eigenvalues of $\mathcal{G}^{(5)}$ (see [13, Section 5]). By [15, Corollary 4], we know that one can partition the eigenvalues of $\mathcal{G}^{(5)}$ into the Brianchon point, the foci of the pentagram curve, and the foci of the boundary of the numerical range. Thus, by elimination it must be that the foci of $\partial W(\mathcal{G}^{(5)})$ are $\\{f_{1},f_{2}\\}$. $\square$ Recall that the Mirman system in the $N=6$ case is $f_{1}f_{5}=B_{2}(f_{3};f_{1},f_{2}),\qquad\qquad f_{3}f_{4}=B_{2}(f_{5};f_{1},f_{2}),\qquad\qquad f_{2}f_{5}=B_{2}(f_{4};f_{1},f_{2}).$ (4.4) The conditions (4.1) and (4.2) allow us to recover these relations. Substituting $f_{3}$ for $z$ in (4.3), we obtain $(f_{3}-f_{1})(f_{3}-f_{2})=\frac{f_{1}f_{2}f_{5}(1-|f_{3}|^{2})(1-f_{3}\bar{f}_{4})}{f_{5}-f_{3}}$ (4.5) If we replace $z$ by $f_{3}$ in the reversed polynomials from (4.3), then we obtain $(1-\bar{f}_{1}f_{3})(1-\bar{f}_{2}f_{3})=\frac{(1-|f_{3}|^{2})(1-f_{3}\bar{f}_{4})}{1-\bar{f}_{5}f_{3}}$ (4.6) If we divide (4.5) by (4.6), and use (4.1), we find $B_{2}(f_{3};f_{1},f_{2})=f_{1}f_{5}$. Similar reasoning can be used to derive $B_{2}(f_{4};f_{1},f_{2})=f_{2}f_{5}$. Replacing $z$ by $f_{5}$ in (4.3) tells us that $\frac{f_{5}(f_{5}-f_{4})(f_{5}-f_{3})}{(1-\bar{f}_{4}f_{5})(1-\bar{f}_{3}f_{5})}=f_{1}f_{2}f_{5}$ If we assume $f_{1}f_{2}f_{5}\neq 0$ and we substitute the relations (4.1) for $f_{3}$ and $f_{4}$ (for an appropriate choice of which to call $f_{3}$ and which to call $f_{4}$), then we find $(1-\bar{f}_{2}f_{5})(1-\bar{f}_{1}f_{5})=(1-\bar{f}_{5}f_{2})(1-\bar{f}_{5}f_{1})$ In other words $(1-\bar{f}_{2}f_{5})(1-\bar{f}_{1}f_{5})\in\mathbb{R}$.
If we use this fact, then multiplying the expressions in (4.1) gives $B_{2}(f_{5};f_{1},f_{2})=f_{3}f_{4}$. If one is given $(f_{1},f_{2})\in\mathbb{D}^{2}$, the construction above shows how to find an $A\in S_{5}$ so that $\partial W(A)$ is an ellipse with foci at $f_{1}$ and $f_{2}$. Our next result shows that one can similarly find $A\in S_{5}$ with $\partial W(A)$ an ellipse and whose pentagram curve has prescribed foci (Theorem 10 assures us that if the Poncelet curve of $A$ is an ellipse, then so is its pentagram curve). ###### Theorem 13. Given $(f_{3},f_{4})\in\mathbb{D}^{2}$, there exists a unique (up to unitary conjugation) $A\in S_{5}$ so that $\partial W(A)$ is an ellipse and the pentagram curve of $A$ is an ellipse with foci at $f_{3}$ and $f_{4}$. Our proof of Theorem 13 requires the following two lemmas. ###### Lemma 14. Given two triangles inscribed in $\partial\mathbb{D}$ with interlacing vertices, there is a unique ellipse that is inscribed in both of them. ###### Proof. Any such ellipse would be a Poncelet $3$-ellipse. The lemma is a consequence of Wendroff’s Theorem for POPUC, which includes a uniqueness statement (see [11, Theorem 3.1], [14, Theorem 8], or [1]). ∎ ###### Lemma 15. Let $\\{\Phi_{3}^{(\lambda)}\\}_{\lambda\in\partial\mathbb{D}}$ be the collection of degree $3$ POPUC for the same degree $2$ OPUC. Label the zeros of $\Phi_{3}^{(\lambda)}$ as $\\{z_{j}^{(\lambda)}\\}_{j=1}^{3}$. For each $\lambda\in\partial\mathbb{D}$ there exists a unique $\tau\in\partial\mathbb{D}$ so that the line segments joining $z_{j}^{(\lambda)}$ to $z_{j}^{(\tau)}$ all meet in a single point independent of $j$ (this assumes an appropriate labeling of the zeros $\\{z_{j}^{(\lambda)}\\}_{j=1}^{3}$). ###### Proof. 
For each $\tau$, let $z_{1}^{(\tau)}$ be the zero that lies between $z_{2}^{(\lambda)}$ and $z_{3}^{(\lambda)}$, let $z_{2}^{(\tau)}$ be the zero that lies between $z_{3}^{(\lambda)}$ and $z_{1}^{(\lambda)}$, and let $z_{3}^{(\tau)}$ be the zero that lies between $z_{1}^{(\lambda)}$ and $z_{2}^{(\lambda)}$. Let $L_{j}^{(\tau)}$ be the line segment that joins $z_{j}^{(\lambda)}$ to $z_{j}^{(\tau)}$ for $j=1,2,3$. Given $\\{z_{j}^{(\lambda)}\\}_{j=1}^{3}$, start with $\tau=\lambda$ and move $\tau$ around $\mathbb{T}$ counterclockwise. As this happens, consider $\eta_{j}(\tau):=L_{1}^{(\tau)}\cap L_{j}^{(\tau)}$ for $j=2,3$ and notice that both of these points are in $L_{1}^{(\tau)}$. Define $d_{j}(\tau)=\frac{|z_{1}^{(\lambda)}-\eta_{j}(\tau)|}{|L_{1}^{(\tau)}|},\qquad\qquad j=2,3.$ Initially, $d_{3}(\tau)$ is close to $0$ and $d_{2}(\tau)$ is close to $1$. As $\tau$ nears the end of its trip around $\mathbb{T}$, it holds that $d_{2}(\tau)$ is close to $0$ and $d_{3}(\tau)$ is close to $1$. Thus, by the Intermediate Value Theorem, there must be a value of $\tau$ such that $d_{2}(\tau)=d_{3}(\tau)$ as desired. One can see by inspection that this choice of $\tau$ is unique. ∎ Proof of Theorem 13 Suppose $(f_{3},f_{4})\in\mathbb{D}^{2}$ are given. Consider the Poncelet $3$-ellipse with foci at $f_{3}$ and $f_{4}$ (call it $E$). Pick any triangle $T^{(\lambda)}$ that is inscribed in $\partial\mathbb{D}$ and circumscribed about $E$. By Lemma 15, there exists a unique second such triangle $T^{(\tau)}$ such that the line segments joining opposite vertices meet in a single point. By Brianchon’s Theorem, there is an ellipse inscribed in the hexagon whose vertices are the vertices of $T^{(\tau)}$ and $T^{(\lambda)}$. Call this ellipse $E^{\prime}$ and suppose its foci are $f_{1}$ and $f_{2}$. 
From what we already know, the ellipse $E^{\prime}$ is the unique Poncelet $6$-ellipse with foci at $f_{1}$ and $f_{2}$, and Theorem 2 tells us that it is associated to a matrix in $S_{5}$. For this matrix, the associated pentagram curve must be an ellipse and the associated Brianchon curve must be a single point. This pentagram ellipse is a Poncelet $3$-ellipse and must be inscribed in the triangles $T^{(\lambda)}$ and $T^{(\tau)}$. By Lemma 14, that ellipse must be $E$. $\square$ Our next result is an analog of Theorem 13 for the Brianchon curve. Specifically, we will show that one can find $A\in S_{5}$ so that $\partial W(A)$ is an ellipse and the Brianchon curve is a single predetermined point. The main difference between Theorem 16 and Theorem 13 is the lack of uniqueness. ###### Theorem 16. Given $f_{5}\in\mathbb{D}$, there exists a $5\times 5$ cutoff CMV matrix $A$ so that $\partial W(A)$ is an ellipse and the Brianchon curve of $A$ is the single point $f_{5}$. Furthermore, the set of all possible $5\times 5$ cutoff CMV matrices with this property is naturally parametrized by an open triangle. ###### Proof. Suppose $f_{5}\in\mathbb{D}$ is given. Consider the set of all lines passing through $f_{5}$. Each line intersects $\mathbb{T}$ in two places. One can choose three distinct lines, thus specifying 6 distinct points of $\mathbb{T}$, labeled cyclically as $v_{j}\text{ for }j=1,2,...,6$. By Brianchon’s Theorem, there is an ellipse inscribed in the hexagon whose vertices are $\\{v_{j}\\}_{j=1}^{6}$. Call this ellipse $E$ and its foci $f_{1}\text{ and }f_{2}$. By Theorem 11, $E$ is the boundary of $W(A)$ for some $A\in S_{5}$. Thus, the Poncelet curve and pentagram curve of $A$ are ellipses and the Brianchon curve of $A$ is a single point, which we see must be $f_{5}$ as desired. Figure 7.
Given $f_{5}$ and the line through 1,$f_{5}$, varying $x,y$ such that $0<x<y<t$ parametrizes the set of all $A\in S_{5}$ with elliptic numerical range and Brianchon curve $f_{5}$. Note that the above procedure yields a well-defined map from collections of distinct triples of lines through $f_{5}$ to matrices in $S_{5}$ whose numerical range is an elliptical disk and whose Brianchon point is $f_{5}$. To see this, suppose one starts with three distinct lines through $f_{5}$. This produces six points on the unit circle by the above procedure. Connecting alternate points forms two triangles that must be tangent to the pentagram ellipse of the matrix we seek. By Lemma 14, there is only one possible choice for such an ellipse, so its foci $f_{3},f_{4}$ are a well-defined output. By Theorem 11 and equation (4.1), we can calculate the foci $f_{1},f_{2}$ of the Poncelet curve of this matrix. Thus, the eigenvalues $\\{f_{j}\\}_{j=1}^{5}$ of the matrix we seek are a computable quantity from any three distinct lines through $f_{5}$. Since the eigenvalues determine the matrix in $S_{5}$, this means the map is well-defined. To make this map a bijection, we restrict it to triples of lines through $f_{5}$ for which one of them also passes through $1$. Given any $A\in S_{5}$ with $W(A)$ an elliptical disk and Brianchon point $f_{5}$, there exists a hexagon that circumscribes $\partial W(A)$ for which $1$ is a vertex, so the restricted map is onto. To show that it is injective, suppose $H$ is a hexagon inscribed in $\mathbb{T}$ with one vertex at $1$ and $E$ is an ellipse that is tangent to every edge of $H$. From any given point on $\mathbb{T}$ (in particular, the point $1$), there are only two tangents to $E$ through that point. This implies $H$ is the unique hexagon that includes $1$ and has the required tangency properties. Thus, our restricted map uniquely determines the numerical range of the matrix and hence uniquely determines the matrix itself. 
Suppose that the line through $1$ and $f_{5}$ also intersects $\mathbb{T}$ at $e^{it}$. We have shown that the space of all $5\times 5$ CMV matrices with the desired property is naturally parametrized by $\\{(x,y):0<x<y<t\\}$, which is an open triangle. ∎ ## 5. The pentagon case To find Poncelet $5$-ellipses, it will not be possible to consider compositions of Blaschke products as in the previous sections, which partially explains why this case has been studied less often in the literature. Instead, we will revisit the Mirman system and prove a structure theorem about the set of possible solutions. In this setting, the system of equations has multiple solutions and our next result describes their relative placement in the plane. ###### Theorem 17. Let $f_{1},f_{2}\in\mathbb{D}$, $f_{1}f_{2}\neq 0$ and set $\Phi_{2}(z)=(z-f_{1})(z-f_{2})$. The system $\begin{split}\frac{\Phi_{2}(z)}{\Phi_{2}^{*}(z)}&=wf_{1},\\\ \frac{\Phi_{2}(w)}{\Phi_{2}^{*}(w)}&=zf_{2}\end{split}$ (5.1) has exactly 5 distinct solution pairs $(z,w)\in\mathbb{C}^{2}$: four of them are in $\mathbb{D}^{2}$: $(0,f_{2})$, $(f_{1},0)$, $(z_{1},w_{1})$, $(z_{2},w_{2})$, and exactly one solution $(z_{3},w_{3})$ satisfies $|z_{3}|>1$, $|w_{3}|>1$. Moreover, the points $z_{1}$, $z_{2}$ and $z_{3}$ are collinear, as are $w_{1}$, $w_{2}$ and $w_{3}$. More precisely, 1. i) It holds that $\frac{z_{1}-f_{2}}{w_{1}-f_{1}}=\frac{z_{2}-f_{2}}{w_{2}-f_{1}}=\frac{z_{3}-f_{2}}{w_{3}-f_{1}}=\frac{f_{1}\Phi_{2}^{*}(f_{2})}{f_{2}\Phi_{2}^{*}(f_{1})},$ 2. ii) The points $f_{1}$, $w_{1}$, $w_{2}$ and $w_{3}$ are collinear, i.e. $\frac{w_{i}-f_{1}}{w_{j}-f_{1}}\in\mathbb{R},\quad i,j\in\\{1,2,3\\},\quad i\neq j,$ and the points $f_{2}$, $z_{1}$, $z_{2}$ and $z_{3}$ are collinear, i.e. $\frac{z_{i}-f_{2}}{z_{j}-f_{2}}\in\mathbb{R},\quad i,j\in\\{1,2,3\\},\quad i\neq j.$ Figure 8.
Collinearity of the points $f_{1}$, $f_{2}$, $z_{j}$’s and $w_{j}$’s, as explained in Theorem 17. ###### Proof. Let us denote $g_{1}(z)=\frac{1}{f_{1}}\frac{(z-f_{1})(z-f_{2})}{(1-z\bar{f}_{1})(1-z\bar{f}_{2})},\quad g_{2}(z)=\frac{1}{f_{2}}\frac{(z-f_{1})(z-f_{2})}{(1-z\bar{f}_{1})(1-z\bar{f}_{2})},$ so that (5.1) takes the form $g_{1}(z)=w$, $g_{2}(w)=z$. From here, we get that (5.1) implies that $\begin{split}g_{2}\circ g_{1}(z)&=z,\\\ g_{1}\circ g_{2}(w)&=w.\end{split}$ (5.2) Both $g_{2}\circ g_{1}$ and $g_{1}\circ g_{2}$ have 4 fixed points inside $\mathbb{D}$. Indeed, consider $g_{2}\circ g_{1}$ (the same analysis applies to $g_{1}\circ g_{2}$). Recall that a Blaschke product is analytic in $\mathbb{D}$, maps $\mathbb{T}\to\mathbb{T}$, $\mathbb{D}\to\mathbb{D}$ and the exterior of $\mathbb{D}$ onto its exterior. So, $|z|=1\quad\Rightarrow\quad|g_{1}(z)|>1\quad\Rightarrow\quad|g_{2}(g_{1}(z))|>1.$ By Rouché’s theorem, $g_{2}\circ g_{1}(z)-z$ and $g_{2}\circ g_{1}(z)$ have the same number of zeros in $\mathbb{D}$; it is straightforward to check that $g_{2}\circ g_{1}$ vanishes at 4 points in $\mathbb{D}$. Finally, we claim that the statement of the proposition is equivalent to the just established fact that both $g_{2}\circ g_{1}$ and $g_{1}\circ g_{2}$ have 4 fixed points inside $\mathbb{D}$. Indeed, the 4 pairs of solutions of (5.1) satisfy (5.2). Conversely, let $z$ be a fixed point of $g_{2}\circ g_{1}$ and denote $\tau=g_{1}(z)$. Then $g_{2}(\tau)=g_{2}(g_{1}(z))=z$, and in consequence, $g_{1}(g_{2}(\tau))=g_{1}(z)=\tau$, meaning that $\tau=g_{1}(z)$ is a fixed point of $g_{1}\circ g_{2}$. This shows that the fixed points $z$ and $w$ of $g_{2}\circ g_{1}$ and $g_{1}\circ g_{2}$ can be paired $(z,w)$ in such a way that (5.1) holds. Now we prove the statement about the remaining solution, this time in $(\mathbb{C}\setminus\overline{\mathbb{D}})^{2}$.
The identity $\overline{\frac{(1/\bar{z}-f_{1})(1/\bar{z}-f_{2})}{(1-\bar{f}_{1}/\bar{z})(1-\bar{f}_{2}/\bar{z})}}=\frac{1}{\frac{(z-f_{1})(z-f_{2})}{(1-z\bar{f}_{1})(1-z\bar{f}_{2})}}$ (5.3) allows us to reduce the analysis of (5.1) in $(\mathbb{C}\setminus\overline{\mathbb{D}})^{2}$ to the equivalent system $\begin{split}\frac{(z-f_{1})(z-f_{2})}{(1-z\bar{f}_{1})(1-z\bar{f}_{2})}&=\frac{w}{\overline{f_{1}}},\\\ \frac{(w-f_{1})(w-f_{2})}{(1-w\bar{f}_{1})(1-w\bar{f}_{2})}&=\frac{z}{\overline{f_{2}}}\end{split}$ (5.4) for $(z,w)\in\mathbb{D}^{2}$ (we return to the actual solutions outside by the mapping $z\mapsto 1/\overline{z}$, $w\mapsto 1/\overline{w}$). The advantage of working in $\mathbb{D}$ is that again we can use the fixed point argument and Rouché’s Theorem. Indeed, as before, define $h_{1}(z)=\overline{f_{1}}\,\frac{(z-f_{1})(z-f_{2})}{(1-z\bar{f}_{1})(1-z\bar{f}_{2})},\qquad\qquad h_{2}(z)=\overline{f_{2}}\,\frac{(z-f_{1})(z-f_{2})}{(1-z\bar{f}_{1})(1-z\bar{f}_{2})},$ and look for fixed points of $h_{1}\circ h_{2}$ and $h_{2}\circ h_{1}$. This time $|z|=1\quad\Rightarrow\quad|h_{1}(z)|<1\quad\Rightarrow\quad|h_{2}(h_{1}(z))|<1,$ and by Rouché’s Theorem, $h_{2}\circ h_{1}(z)-z$ and $-z$ have the same number of zeros in $\mathbb{D}$, that is, exactly one. To prove the statements about collinearity, define the linear functions $\begin{split}z(t)&=\frac{f_{2}-f_{1}}{1-f_{1}\bar{f}_{2}}+\frac{4f_{1}}{(1-|f_{1}|^{2})(1-f_{1}\bar{f}_{2})}t\\\ w(t)&=\frac{f_{1}-f_{2}}{1-f_{2}\bar{f}_{1}}+\frac{4f_{2}}{(1-|f_{2}|^{2})(1-f_{2}\bar{f}_{1})}t\end{split}$ (5.5) It is a straightforward calculation to verify that the two polynomials (in the variable $t$) $\Phi_{2}(z(t))-w(t)f_{1}\Phi_{2}^{*}(z(t)),\qquad\qquad\Phi_{2}(w(t))-z(t)f_{2}\Phi_{2}^{*}(w(t))$ are scalar multiples of each other so any zero of one is a zero of the other. Notice that both of these polynomials have degree $3$.
One can also check by hand that if $t_{z}$ is such that $z(t_{z})=0$, then $w(t_{z})\neq f_{2}$. Similarly, if $t_{w}$ is such that $w(t_{w})=0$, then $z(t_{w})\neq f_{1}$. Thus, the three zeros $\\{t_{j}\\}_{j=1}^{3}$ of $Y(t):=\Phi_{2}(z(t))-w(t)f_{1}\Phi_{2}^{*}(z(t))$ (5.6) will be such that $(z(t_{j}),w(t_{j}))$ is a solution to the system (5.1) other than $(0,f_{2})$ and $(f_{1},0)$. Thus we can say $(z_{j},w_{j})=(z(t_{j}),w(t_{j}))$. If we set $t_{0}=\frac{1}{4}(1-|f_{1}|^{2})(1-|f_{2}|^{2})$ and observe that $z(t_{0})=f_{2}$ and $w(t_{0})=f_{1}$, then a short calculation shows $\frac{z_{j}-f_{2}}{w_{j}-f_{1}}=\frac{z(t_{j})-z(t_{0})}{w(t_{j})-w(t_{0})}=\frac{f_{1}\Phi_{2}^{*}(f_{2})}{f_{2}\Phi_{2}^{*}(f_{1})}.$ This proves claim (i) of the theorem. To prove claim (ii), it suffices to show that each $t_{j}\in\mathbb{R}$. To this end, a calculation reveals that if we divide the polynomial $Y(t)$ in (5.6) by its leading coefficient, then we obtain a monic degree $3$ polynomial with real coefficients. Define $q(x,y;t):=\left(y+\frac{x-f_{1}}{1-\overline{f_{1}}x}\right)\left(y+\frac{x-f_{2}}{1-\overline{f_{2}}x}\right)-\frac{4txy}{(1-\overline{f_{1}}x)(1-\overline{f_{2}}x)}$ There exist two distinguished matrices $A_{1},A_{2}\in S_{4}$ such that $A_{1}$ has numerical range bounded by an ellipse with foci at $\\{f_{1},f_{2}\\}$ and the pentagram curve of $A_{2}$ is an ellipse with foci at $\\{f_{1},f_{2}\\}$. Let us denote the eigenvalues of $A_{j}$ by $\\{f_{1},f_{2},z_{j},w_{j}\\}$ for $j=1,2$. Then from [13, Section 5] (and also [15, Equations 29–32]) we know that there exist positive real numbers $b_{1}$ and $b_{2}$ so that $q(f_{2},z_{j};b_{j}^{2})=q(w_{j},f_{1};b_{j}^{2})=q(z_{j},w_{j};b_{j}^{2})=0$ for $j=1,2$. In fact, $b_{j}$ is the length of the minor semiaxis of the ellipse associated with $A_{j}$ whose foci are $\\{f_{1},f_{2}\\}$ (see [15]). 
Using $q(f_{2},z_{j};b_{j}^{2})=q(w_{j},f_{1};b_{j}^{2})=0$ and the formulas (5.5), we find that $z_{j}=z(b_{j}^{2})$ and $w_{j}=w(b_{j}^{2})$. A short calculation shows (recall $t_{0}=(1-|f_{1}|^{2})(1-|f_{2}|^{2})/4$) $q(z(t),w(t);t)=\frac{Y(t)(t-t_{0})}{C_{f_{1},f_{2}}\Phi_{2}^{*}(z(t))},$ where $\displaystyle C_{f_{1},f_{2}}=\frac{-f_{1}t_{0}}{f_{2}}$. Thus the zeros of $Y(t)$ are also zeros of $q(z(t),w(t);t)$. If $t_{0}$ were a zero of $Y(t)$, then $z=z(t_{0})$ and $w=w(t_{0})$ would satisfy (5.1). However, we have seen that $z(t_{0})=f_{2}$ and $w(t_{0})=f_{1}$, which would give $f_{1}^{2}=\Phi_{2}(f_{2})/\Phi_{2}^{*}(f_{2})=0$, contradicting the assumption $f_{1}f_{2}\neq 0$; hence $Y(t_{0})\neq 0$. Thus the zeros of $Y(t)$ are zeros of $q(z(t),w(t);t)$ distinct from $t_{0}$. We know that $q(z(t),w(t);t)$ has zeros at $t=b_{1}^{2}$ and $t=b_{2}^{2}$ so these must be zeros of $Y(t)$ as well. This gives us two real zeros of $Y(t)$. Our earlier observation implies that $Y(t)$ has either one or three real zeros, so all zeros of $Y(t)$ are real as desired. ∎ Figure 9. Pairs $(z_{1},w_{1})$ and $(z_{2},w_{2})$ as foci of the pentagram and the Poncelet ellipses, respectively. The solutions to the Mirman system in this setting have a geometric interpretation. Given $\\{f_{1},f_{2}\\}\in\mathbb{D}^{2}$, we have just seen that there are three solution pairs $(z,w)$ to the Mirman system and we denote them by $(z_{1},w_{1})$, $(z_{2},w_{2})$, and $(z_{3},w_{3})$.
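These claims are easy to probe numerically. The sketch below (a sanity check, not part of the paper; it assumes numpy, with arbitrarily chosen foci) recovers the cubic $Y(t)$ of (5.6) by interpolation and checks claims (i) and (ii) of Theorem 17 for the three nontrivial pairs:

```python
import numpy as np

# Arbitrarily chosen foci (any f1, f2 in the open unit disk with f1*f2 != 0).
f1, f2 = 0.3 + 0.2j, -0.4 + 0.1j

Phi2 = lambda z: (z - f1) * (z - f2)                              # Phi_2
Phi2s = lambda z: (1 - np.conj(f1) * z) * (1 - np.conj(f2) * z)   # Phi_2^*

# Linear parametrizations z(t), w(t) from (5.5).
z = lambda t: (f2 - f1) / (1 - f1 * np.conj(f2)) \
    + 4 * f1 * t / ((1 - abs(f1) ** 2) * (1 - f1 * np.conj(f2)))
w = lambda t: (f1 - f2) / (1 - f2 * np.conj(f1)) \
    + 4 * f2 * t / ((1 - abs(f2) ** 2) * (1 - f2 * np.conj(f1)))

# Y(t) from (5.6) is a cubic in t; recover its coefficients by interpolation.
Y = lambda t: Phi2(z(t)) - w(t) * f1 * Phi2s(z(t))
ts = np.array([0.0, 1.0, 2.0, 3.0])
coef = np.linalg.solve(np.vander(ts, 4).astype(complex), Y(ts))
roots = np.roots(coef)

# Claim (ii): the three zeros t_j are real.
assert np.all(np.abs(roots.imag) < 1e-8)

for t_j in roots:
    # Each (z(t_j), w(t_j)) solves both equations of the system (5.1).
    assert abs(Phi2(z(t_j)) - w(t_j) * f1 * Phi2s(z(t_j))) < 1e-8
    assert abs(Phi2(w(t_j)) - z(t_j) * f2 * Phi2s(w(t_j))) < 1e-8

# Claim (i): (z_j - f2)/(w_j - f1) takes the same value for all three pairs.
ratios = [(z(t_j) - f2) / (w(t_j) - f1) for t_j in roots]
assert np.allclose(ratios, f1 * Phi2s(f2) / (f2 * Phi2s(f1)))
```

Since $z(t)$ and $w(t)$ are linear in $t$ with $z(t_{0})=f_{2}$ and $w(t_{0})=f_{1}$, the ratio in claim (i) is just the quotient of the two slopes, which the last assertion confirms.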
One of the solution pairs in $\mathbb{D}^{2}$ (say $(z_{1},w_{1})$) will be such that the $4\times 4$ cutoff CMV matrix with eigenvalues $\\{f_{1},f_{2},z_{1},w_{1}\\}$ will have numerical range bounded by an ellipse with foci at $f_{1}$ and $f_{2}$ and the pentagram curve of this matrix will be an ellipse with foci at $z_{1}$ and $w_{1}$ (see Figure 9, left). For the other solution pair in $\mathbb{D}^{2}$ (say $(z_{2},w_{2})$), it will be true that the $4\times 4$ cutoff CMV matrix with eigenvalues $\\{f_{1},f_{2},z_{2},w_{2}\\}$ has numerical range bounded by an ellipse with foci at $z_{2}$ and $w_{2}$ and the pentagram curve of this matrix will be an ellipse with foci at $f_{1}$ and $f_{2}$, see Figure 9, right. The geometric interpretation of the solution $(z_{3},w_{3})\in(\mathbb{C}\setminus\bar{\mathbb{D}})^{2}$ is not clear. Notice that Theorem 17 implies that each line $\ell_{j}$ passing through $\\{z_{j},w_{j}\\}$ for $j=1,2,3$ is parallel to the line through $\\{f_{1},f_{2}\\}$. For $j=1,2$, one can obtain this same conclusion from the fact that the ellipses $C_{1}$ and $C_{2}$ corresponding to a matrix $A\in S_{4}$ are in the same package (see [17]). The fact that this same conclusion applies to $\ell_{3}$ is a new result. ## Acknowledgments The second author was partially supported by Simons Foundation Collaboration Grants for Mathematicians (grant 710499) and by the Spanish Government–European Regional Development Fund (grant MTM2017-89941-P), Junta de Andalucía (research group FQM-229 and Instituto Interuniversitario Carlos I de Física Teórica y Computacional), and by the University of Almería (Campus de Excelencia Internacional del Mar CEIMAR). The fourth author graciously acknowledges support from Simons Foundation Collaboration Grant 707882. ## References * [1] Y. Arlinskii, L. Golinskii, and E. Tsekanovskii, Contractions with rank one defect operators and truncated CMV matrices, J. Funct. Anal. 254 (2008), no. 1, 154–195. * [2] U. Daepp, P. 
Gorkin, A. Shaffer, B. Sokolowsky, and K. Voss, Decomposing finite Blaschke products, J. Math. Anal. Appl. 426 (2015), no. 2, 1201–1216. * [3] U. Daepp, P. Gorkin, A. Shaffer, and K. Voss, Finding Ellipses: What Blaschke products, Poncelet’s theorem, and the numerical range know about each other, Carus Mathematical Monographs, 34. MAA Press, Providence, RI, 2018. * [4] U. Daepp, P. Gorkin, and K. Voss, Poncelet’s theorem, Sendov’s conjecture, and Blaschke products, J. Math. Anal. Appl. 365 (2010), no. 1, 93–102. * [5] V. Dragović and M. Radnović, Poncelet porisms and beyond. Integrable billiards, hyperelliptic Jacobians and pencils of quadrics, Frontiers in Mathematics. Birkhäuser/Springer Basel AG, Basel, 2011. * [6] L. Flatto, Poncelet’s Theorem, American Mathematical Society, Providence, RI, 2009, Chapter 15 by S. Tabachnikov. * [7] M. Fujimura, Inscribed ellipses and Blaschke products, Computational Methods and Function Theory 13 (2013), no. 4, 557–573. * [8] M. Fujimura, Blaschke products and circumscribed conics, Computational Methods and Function Theory 17 (2017), no. 4, 635–652. * [9] M. Fujimura, Interior and exterior curves of finite Blaschke products, J. Math. Anal. Appl. 467 (2018), no. 1, 711–722. * [10] H.-L. Gau and P. Wu, Numerical range of $S(\phi)$, Linear and Multilinear Algebra 45 (1998), no. 1, 49–73. * [11] H.-L. Gau and P. Wu, Numerical range circumscribed by two polygons, Linear Alg. Appl. 382 (2004), 155–170. * [12] P. Gorkin and N. Wagner, Ellipses and compositions of finite Blaschke products, J. Math. Anal. Appl. 445 (2017), no. 2, 1354–1366. * [13] M. Hunziker, A. Martinez-Finkelshtein, T. Poe, and B. Simanek, Poncelet–Darboux, Kippenhahn, and Szegő: Interactions between projective geometry, matrices, and orthogonal polynomials, preprint, arXiv:2101.12165. * [14] A. Martinez-Finkelshtein, B. Simanek, and B.
Simon, Poncelet’s Theorem, Paraorthogonal polynomials and the numerical range of compressed multiplication operators, Advances in Mathematics 349 (2019), 992–1035. * [15] B. Mirman, $UB$-matrices and conditions for Poncelet polygon to be closed, Linear Alg. Appl. 360 (2003), 123–150. * [16] B. Mirman, Sufficient conditions for Poncelet polygons not to close, Amer. Math. Monthly 112 (2005), no. 4, 351–356. * [17] B. Mirman and P. Shukla, A characterization of complex plane Poncelet curves, Linear Algebra Appl. 408 (2005), 86–119. * [18] R. E. Schwartz, The pentagram map, Experimental Mathematics 1 (1992), 71–81. * [19] B. Simon, Orthogonal Polynomials on the Unit Circle, Part One: Classical Theory, American Mathematical Society, Providence, RI, 2005. * [20] B. Simon, Orthogonal Polynomials on the Unit Circle, Part Two: Spectral Theory, American Mathematical Society, Providence, RI, 2005. * [21] B. Simon and V. Totik, Limits of zeros of orthogonal polynomials on the circle, Math. Nachr. 278 (2005), no. 12-13, 1615–1620.
# Optimizing $\alpha\mu$ Tristan Cazenave1, Swann Legras2, Véronique Ventos2 ###### Abstract $\alpha\mu$ is a search algorithm which repairs two flaws of Perfect Information Monte Carlo search: strategy fusion and non-locality. In this paper we optimize $\alpha\mu$ for the game of Bridge, avoiding useless computations. The proposed optimizations are general and apply to other imperfect information turn-based games. We define multiple optimizations involving Pareto fronts, and show that these optimizations speed up the search. Some of these optimizations are cuts that stop the search at a node, while others keep track of which possible worlds have become redundant, avoiding unnecessary, costly evaluations. We also measure the benefits of parallelizing the double dummy searches at the leaves of the $\alpha\mu$ search tree. ## Introduction The state of the art for imperfect information card games is Perfect Information Monte Carlo sampling (PIMC). It was first proposed by Levy (Levy 1989) for Bridge, and used in the popular program GIB (Ginsberg 2001). PIMC can be used in other trick-taking card games such as Skat (Buro et al. 2009; Kupferschmid and Helmert 2006), Spades and Hearts (Sturtevant and White 2006). Long analyzed the reasons why PIMC is successful in these games (Long et al. 2010). The principle of PIMC is to use determinization to generate possible worlds. For each possible move, PIMC plays the move, samples from a set of possible worlds, and then solves each world exactly as if it was a perfect information game. It then computes the average of the results among the possible worlds to evaluate each move. Other approaches to imperfect information games are Information Set Monte Carlo Tree Search (Cowling, Powley, and Whitehouse 2012), counterfactual regret minimization that solved Poker (Zinkevich et al. 2008; Brown and Sandholm 2019), and Exploitability Descent (Lockhart et al. 2019).
Various Reinforcement Learning algorithms for incomplete information games are also available in the OpenSpiel framework (Lanctot et al. 2019). Recursive Monte Carlo Search (Furtak and Buro 2013) is the adaptation of Nested Monte Carlo Search (Cazenave 2009) to multi-player and incomplete information games. The basic principle is the same: it uses Monte Carlo Search to improve Monte Carlo Search made at a higher level of recursion. Recursive Monte Carlo Search improves on PIMC for Skat and Bridge (Bouzy, Rimbaud, and Ventos 2020); however, it takes much more time to complete than PIMC. PIMC plays sub-optimally due to two main problems: strategy fusion and non-locality (Frank and Basin 2001). $\alpha\mu$ (Cazenave and Ventos 2020) is an anytime heuristic search algorithm for incomplete information games that assumes perfect information for the opponents. $\alpha\mu$ addresses the strategy fusion and non-locality problems encountered by PIMC. Other programs also address the strategy fusion problem in endgame play, for example, GIB (Ginsberg 2001) in Bridge (using a single dummy solver) and the Skat endgame solver of Stefan Edelkamp (Edelkamp 2020). In this paper we propose improvements to speed up the $\alpha\mu$ algorithm without affecting its output. The paper starts with a section on previous work in Bridge and on the $\alpha\mu$ algorithm. The next section describes the optimizations and the modified algorithm. The last section gives experimental results. ## Prerequisites and Previous Work In this section, we introduce some definitions used in the paper. We then briefly explain the game of Bridge, Computer Bridge, previous work on dealing with strategy fusion and non-locality, and the early cut and the root cut that were defined in the original $\alpha\mu$ paper (Cazenave and Ventos 2020). ### Definitions We define general naming conventions used throughout the article.
#### Vectors Given $n$ different possible worlds, a boolean vector of size $n$ keeps the status of the game for each possible world: a zero at index $w$ means that the game is lost for world number $w$; a one means the game is won. Associated to the vector there is another vector of booleans indicating whether the world is possible in the current state. At the root of the search all worlds are possible, but when an opponent makes a move, the move is usually only valid in some of the worlds and the set of valid worlds is reduced. #### Pareto Fronts A Pareto front is a set of vectors none of which is dominated by another vector of the set. We say a Pareto front $f_{1}$ dominates $f_{2}$ iff: $\forall x_{2}\in f_{2},\exists x_{1}\in f_{1},x_{1}$ dominates $x_{2}$ or $x_{1}=x_{2}$. We say a vector $x_{1}\in\\{0,1\\}^{n}$ dominates $x_{2}$ iff: $\forall i\in[1,n]$: $x_{1}[i]\geq x_{2}[i]$ and $\exists i\in[1,n]$ such that $x_{1}[i]>x_{2}[i].$ The score of a vector is the average among all possible worlds of the values contained in the vector. #### Impossible worlds Worlds that have been excluded because the moves played to reach the current node are not possible in them. These worlds are represented as ”x” in figures in this article. New impossible worlds appear at min nodes when we consider a move from the defender possible in only a subset of all worlds. #### Useless worlds Worlds for which we know for sure that nothing is lost by labelling them 0. These worlds are noted ”-” on figures. #### Comparison of Pareto Fronts When comparing two Pareto fronts from different depths in the tree, or at a min node between two actions that involve different impossible worlds, one must carefully handle impossible worlds when testing for dominance, as they can still be a win higher in the tree. Such impossible worlds must be evaluated as 1 when checking the dominance relation, unless proven useless higher in the tree.
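These comparison rules can be sketched as follows (an illustrative fragment, not the authors' code; `effective` encodes the convention that an impossible world counts as a win and a useless world as a loss when testing dominance):

```python
# World status in a result vector: 1 = won, 0 = lost,
# 'x' = impossible world, '-' = useless world.

def effective(v):
    # For dominance tests, an impossible world counts as a win (it may still
    # matter higher in the tree) unless it was proven useless.
    return [1 if s == 'x' else 0 if s == '-' else s for s in v]

def dominates(x1, x2, strict=True):
    # Componentwise comparison of two vectors after applying the convention.
    a, b = effective(x1), effective(x2)
    if any(i < j for i, j in zip(a, b)):
        return False
    return any(i > j for i, j in zip(a, b)) if strict else True

def front_dominates(front1, front2):
    # front1 dominates front2 iff every vector of front2 is dominated by,
    # or equal to, some vector of front1.
    return all(any(dominates(x1, x2, strict=False) for x1 in front1)
               for x2 in front2)
```

With this convention, the worked comparison below ([1 x 0] against [1 0 0], [1 1 0] and [1 x -]) follows directly from `dominates`.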
As such, [1 x 0] is not dominated by [1 0 0] but is dominated by [1 1 0] or even by [1 x -]. ### Bridge in short The interested reader can refer for instance to (Mahmood, Grant, and Sharif 2014) for a more complete presentation of the game of Bridge. Bridge is a trick-taking card game with four players (denoted by West, North, East and South or W,N,E,S) divided in two partnerships (East-West and North- South). A standard 52 card deck is shuffled and each player receives a hand of 13 cards that is only visible to them. A Bridge deal (or board) is divided into two major phases: the bidding (out of the scope of the paper) and the card play. The goal of the bidding is to reach a contract which determines the minimum number of tricks the pair commits to win during the card play, either with no trump (NT) or with a determined suit as trump. In the following, we assume that the North-South pair reached the contract of 3NT (resp. 7NT) and that South is the agent who plays the board (i.e the declarer). During the card play, the goal is to fulfill (for the declarer) or to defeat (for the defenders) the contract reached during the bidding phase. For the contract of 3NT (resp. 7NT), the minimum number of tricks required to win the board is 9 (resp. 13). The player on the left of the declarer exposes the first card of the game. The declarer’s partner (called the Dummy) then lays their cards face up on the table. When playing in a NT contract, there is only one simple rule : each player is required to follow suit if possible and can play any card of the suit. When the four players have played a card, the player who played the highest- ranked card in the suit (2$<$3$<$…$<$10,J,Q,K,A) wins the trick and they will be on lead at the following trick. The board is over when all the cards have been played. ### Computer Bridge Since 1996, the best bridge programs can annually participate in the World Computer-Bridge Championship (WCBC). 
The bridge AI used in our experiments was developed by Yves Costel for the bridge program Wbridge5 (http://www.wbridge5.com). The boosted version of Wbridge5 (Ventos et al. 2017) won the WCBC in 2016, 2017 and 2018. The best Bridge bots use PIMC, with a double-dummy solver (DDS) to evaluate each simulated world. They also use some additional heuristics, for example to prefer the less revealing of equivalent actions. DDS gives the number of tricks won by each side for Spade, Heart, Diamond, Club or No Trump contracts when all four players know the placement of the 52 cards and each player plays optimally. A tree search can then be used in this simplified game, as well as standard alpha-beta pruning methods. Values of the leaves are computed using the double-dummy solver. The growing number of tricks won during the play is used as upper and lower bounds in the alpha-beta process. A very efficient DDS has been written (and made public) by Bo Haglund (Haglund 2010). Currently, the average level of the best Bridge AIs is still far from that of professional players, and closer to that of good amateurs. This is partly due to the aforementioned problems of PIMC: namely strategy fusion and non-locality. The strategy fusion problem of PIMC comes from the fact that DDS plays different Max moves in the different worlds of an information set. In the real game, Max has to play the same move in all the worlds of an information set. $\alpha\mu$ addresses this problem by evaluating every Max move jointly for all the possible worlds in the information set during its search. The second problem of PIMC is non-locality: a move that is optimal at a node may not be the best move to back up from a global perspective. In some cases it is better to keep a locally suboptimal move that gives a better result at the root of the search tree than the locally optimal one. $\alpha\mu$ addresses this problem by backing up Pareto fronts instead of vectors.
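The two backup operations over Pareto fronts can be sketched as follows (a minimal sketch over plain 0/1 vectors, ignoring the impossible- and useless-world markers; the function names are ours):

```python
# Sketch of Pareto-front backup: at a Max node the children's
# fronts are unioned and dominated vectors pruned; at a Min node
# the opponent takes, per world, the worse of the two outcomes.
def dominated(a, b):
    """a is strictly dominated by b."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def prune(front):
    """Remove duplicates and dominated vectors, preserving order."""
    out = []
    for v in front:
        if v not in out and not any(dominated(v, w) for w in front):
            out.append(v)
    return out

def max_backup(fronts):
    """Max node: union of the children's fronts, then prune."""
    return prune([v for f in fronts for v in f])

def min_backup(f, g):
    """Min node: per-world minimum over every pair of vectors."""
    return prune([[min(x, y) for x, y in zip(u, v)] for u in f for v in g])
```

For instance, a Max node whose two moves return the fronts {[1 1 0]} and {[0 1 1]} backs up {[1 1 0], [0 1 1]}: neither vector dominates the other, so Max keeps both options.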
### Previous Work: the $\alpha\mu$ Algorithm The $\alpha\mu$ algorithm repairs the strategy fusion and non-locality problems of PIMC by searching multiple moves ahead and by manipulating Pareto fronts at Max and Min nodes. The $\alpha\mu$ algorithm assumes that the defence has perfect information whereas the declarer has incomplete information. At Max nodes, each possible move returns a Pareto front. The final Pareto front for a node is the union of all the Pareto fronts returned by the search beginning with each of the different possible moves. The idea is to keep all the possible options for Max, i.e. Max has the choice between all the vectors of the overall Pareto front. In order to save computation and memory, vectors that are dominated by another vector in the same Pareto front are removed. The Min players can choose different moves in different possible worlds, so for each possible world they take the minimum outcome over all possible moves. When they can choose between two vectors, they take for each index the minimum of the two values at this index. The $\alpha\mu$ algorithm is related to the GIB single dummy solver (Ginsberg 2001). However, the single dummy solver of Ginsberg is limited to the endgame, and its complexity explodes rapidly for earlier states, which $\alpha\mu$ handles easily at the price of incompleteness. Given enough time and using all possible worlds, $\alpha\mu$ converges to the single dummy solver results. ### Generation of Possible Worlds The possible worlds are generated using random generation followed by verification of the constraints. We currently use three types of constraints: the constraints coming from the bidding phase, the constraints on the West hand due to the rules followed for the opening lead, and the constraints due to sluffs. ### Early Cut If a Pareto front at a Min node is dominated by the Pareto front of the upper Max node, it can safely be cut, since the evaluation is optimistic for the Max player.
A deeper search will always return a worse (or the same) result for the Max player due to strategy fusion. Figure 1 gives an example of an early cut at a Min node. The root node $a$ is a Max node, and the first move played at $a$ returned $\\{[1~{}1~{}0],[0~{}1~{}1]\\}$. The second move is then tried, leading to node $c$, and the initial Pareto front calculated with double dummy searches at node $c$ is [1 1 0]. It is dominated by the Pareto front of node $a$, so node $c$ can be cut. Figure 1: Example of an early cut at node c. ### Root Cut If a move at the root of $\alpha\mu$ for $M$ Max moves gives the same probability of winning as the best move of the previous iteration of iterative deepening with $M-1$ Max moves, the search can safely be stopped, since it is not possible to find a better move. ## Optimizations In this section we present different improvements of the $\alpha\mu$ algorithm that make it faster but do not change the result of a search. ### Maintaining Useful Worlds As we descend the tree it is possible to remove the worlds that are useless to evaluate, allowing us to run fewer DDS evaluations at the leaves and enabling cuts. A transposition table records the results of previous searches at each node. For example, if the DDS result stored in the transposition table for a world at a Min node is zero, it will also be zero in the whole subtree. It is therefore useless to calculate it again and the world can be marked as useless. If at a Min node the maximum value of a world over the vectors of the current Pareto front is zero, the world can be marked as useless, as it will always have a zero value in the Pareto front returned by the node. At the leaves only useful worlds are solved by DDS. At Min nodes it is useless to include the possible moves of the useless worlds in the union; we thus only take the union of the possible moves in useful worlds.
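The second rule can be sketched as follows (a minimal sketch, assuming a non-empty Pareto front of 0/1 vectors indexed by world number; the transposition-table rule would be handled analogously):

```python
# Sketch of updateUsefulWorlds: at a Min node, a world whose value
# is 0 in every vector of the current Pareto front can never
# contribute a win, so it is dropped from the useful worlds.
def update_useful_worlds(front, worlds):
    return {w for w in worlds if any(vector[w] == 1 for vector in front)}
```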
### World Cuts Cuts due to having only zero or one useful world are called world cuts. If there are no useful worlds left, the search can safely be cut, as the Pareto front is known to be the all-zero vector. Search can also be cut if there is only one useful world left. The reason is that all of the useless worlds will eventually evaluate to zero at the root, so we only need to compute the DDS result associated with the single useful world and can return a single vector containing the result for that world. Figure 2 gives an example of a world cut with no useful worlds and Figure 3 gives an example of a world cut with one useful world. Figure 2: Example of a world cut with no useful worlds at node c. Figure 3: Example of a world cut with one useful world at node c. ### Calculating Vectors at Interior Nodes Some moves may not be explored, for example due to a root cut. However, when the evaluation of the best move at the root gets worse with more search, these moves are then explored to a depth greater than one. In order to verify whether an early cut is possible at a depth smaller than the search depth for these moves, the algorithm has to calculate the Pareto fronts at smaller depths. If it can cut with the resulting front, much search effort is saved. We call this optimization the Empty Entry optimization. ### Cut on Win If at a Max node a move is found for which all worlds are won for the current search depth, the search can be cut, as more search cannot improve on this result. ### $\alpha$ Cut Figure 4 gives an example of an $\alpha$ cut at a Min node.
The root node $a$ is a Max node, and the first move played at $a$ returned $\\{[1~{}1~{}0],[0~{}1~{}1]\\}$. The second move is then tried, leading to node $c$, and the initial Pareto front calculated with double dummy searches at node $c$ is [1 1 1]. The first Min move at $c$ returns the [x 1 0] Pareto front. The x means that the Min move is not possible in the first world and thus there is no evaluation associated with this world for the first Min move. The current Pareto front at node $c$ is updated by taking the minimum of the outcomes in all the possible and evaluated worlds, leading to a [1 1 0] Pareto front. This Pareto front is dominated by the Pareto front of the left move at node $a$, and therefore the search can be cut. Figure 5 gives an example of a deep $\alpha$ cut at a Min node. For each Max node earlier in the path, if there exists one whose Pareto front dominates the currently evaluated front, further search can be avoided. Algorithm 1 shows how to go up in the tree from a candidate node to check whether the candidate node can be cut. The principle of the algorithm is to check all upper Max nodes to see if one of them dominates the current candidate node. We first save the current node to be evaluated against upper nodes (line 2), then crawl the tree upward until reaching a Max node (lines 4 to 7). If we do not find any Max node, it means that we reached the tree root and can return false; otherwise we test for Pareto dominance (line 11). If the upper Max node does not dominate the candidate node, we search for the next Max node in the tree (line 3).
1: $\alpha$cut(node)
2: candidate $\leftarrow$ node
3: while node exists do
4:   node $\leftarrow$ parent(node)
5:   while node exists and Min parent(node) do
6:     node $\leftarrow$ parent(node)
7:   end while
8:   if node doesn’t exist then
9:     return false
10:  end if
11:  if candidate.front $\leq$ node.front then
12:    return true
13:  end if
14: end while
15: return false

Algorithm 1: The deep $\alpha$ cut.

Figure 4: Example of an $\alpha$ cut at node c.

Figure 5: Example of a deep $\alpha$ cut at node d.

### The $\alpha\mu$ Algorithm with Cuts

Algorithm 2 gives the $\alpha\mu$ algorithm with cuts. $M$ is the number of Max moves to search. If this number is equal to zero, or if the state is terminal, or if there is at most one useful world left, the search is stopped (lines 2-5). At a Min node, if there is an early cut, the search stops (lines 9-11). Otherwise the union of the sets of legal moves of the useful worlds is calculated (lines 12-16). All moves are tried, maintaining the possible worlds (line 21), making a recursive call (line 22), updating the Pareto front (line 23), updating the useful worlds (line 24) and making a cut if possible (lines 25-27). At a Max node similar operations are performed, and the root cut is also tested (lines 43-47).
1: Function $\alpha\mu$ ($state,M,Worlds,\alpha$)
2: if $stop(state,M,Worlds,result)$ then
3:   update the transposition table
4:   return $result$
5: end if
6: $t\leftarrow$ entry in the transposition table
7: if Min node then
8:   $mini\leftarrow\emptyset$
9:   if $t.front\leq\alpha$ then
10:    return $mini$
11:  end if
12:  $Worlds$ = updateUsefulWorlds($t.front,Worlds$)
13:  $allMoves\leftarrow\emptyset$
14:  for $w\in Worlds$ do
15:    $l\leftarrow$ legalMoves ($w$)
16:    $allMoves=allMoves\cup l$
17:  end for
18:  move $t.move$ in front of $allMoves$
19:  for $move\in allMoves$ do
20:    $s\leftarrow$ play ($move,state$)
21:    $W_{1}\leftarrow\\{w\in Worlds:move\in w\\}$
22:    $f\leftarrow\alpha\mu$ ($s,M,W_{1},\emptyset$)
23:    $mini\leftarrow$ min($mini,f$)
24:    $Worlds$ = updateUsefulWorlds($mini,Worlds$)
25:    if $mini\leq$ to an upper front then
26:      break
27:    end if
28:  end for
29:  update the transposition table
30:  return $mini$
31: else
32:  $front\leftarrow\emptyset$
33:  for $w\in Worlds$ do
34:    $l\leftarrow$ legalMoves ($w$)
35:    $allMoves=allMoves\cup l$
36:  end for
37:  move $t.move$ in front of $allMoves$
38:  for $move\in allMoves$ do
39:    $s\leftarrow$ play ($move,state$)
40:    $W_{1}\leftarrow\\{w\in Worlds:move\in w\\}$
41:    $f\leftarrow\alpha\mu$ ($s,M-1,W_{1},front$)
42:    $front\leftarrow$ max($front,f$)
43:    if root node then
44:      if $\mu(front)=\mu$ of previous search then
45:        break
46:      end if
47:    end if
48:  end for
49:  update the transposition table
50:  return $front$
51: end if

Algorithm 2: The $\alpha\mu$ search algorithm with cuts.

## Experimental Results

In the experiments we have $\alpha\mu$ play either the 3NT contract or the 7NT contract against PIMC with 20 simulated worlds at each decision point, or against WBridge5. ### Search Times with and without Optimizations We fix a number of cards for the initial state and we play a game from this state using $\alpha\mu$, recording the average time per move. The initial number of cards is either 32 or 52.
We play 100 games, and thus 3200 searches for 32 cards and 5200 searches for 52 cards. We run experiments with both 20 and 40 simulated worlds at each decision point. In all experiments the early cut and the root cut are enabled. Table 1 gives the average times used by the program with and without optimizations for deals with 52 cards, 20 worlds and three Max moves. Cards is the number of cards in the initial state of each game. M is the number of Max moves allowed during the search. Worlds is the number of worlds used by the search. U is for maintaining useful worlds. E is the use of the Empty Entry optimization. $\alpha$ is the use of the $\alpha$ cut. W is the use of the World cuts. Win is the use of Cut on Win. Table 2 gives the times for deals with 32 cards, 20 worlds and three Max moves. Table 3 gives the times for deals with 32 cards, 40 worlds and three Max moves. We see that the speedup factor carries through to the case with twice as many simulated worlds.

Table 1: Comparison of the average time per move of different configurations of $\alpha\mu$ with three Max moves and 20 worlds on deals with 52 cards.

Cards | M | Worlds | U | E | $\alpha$ | W | Win | Time
---|---|---|---|---|---|---|---|---
52 | 3 | 20 | n | n | n | n | n | 3.020
52 | 3 | 20 | y | n | n | n | n | 2.916
52 | 3 | 20 | n | y | n | n | n | 3.321
52 | 3 | 20 | n | n | y | n | n | 3.342
52 | 3 | 20 | n | n | n | y | n | 3.648
52 | 3 | 20 | n | n | n | n | y | 2.837
52 | 3 | 20 | y | y | y | y | y | 1.032
52 | 3 | 20 | n | y | y | y | y | 3.230
52 | 3 | 20 | y | n | y | y | y | 1.438
52 | 3 | 20 | y | y | n | y | y | 1.469
52 | 3 | 20 | y | y | y | n | y | 1.365
52 | 3 | 20 | y | y | y | y | n | 3.566

Table 2: Comparison of the average time per move of different configurations of $\alpha\mu$ with three Max moves and 20 worlds on deals with 32 cards.
Cards | M | Worlds | U | E | $\alpha$ | W | Win | Time
---|---|---|---|---|---|---|---|---
32 | 3 | 20 | n | n | n | n | n | 2.016
32 | 3 | 20 | y | n | n | n | n | 1.465
32 | 3 | 20 | n | y | n | n | n | 1.739
32 | 3 | 20 | n | n | y | n | n | 1.704
32 | 3 | 20 | n | n | n | y | n | 1.972
32 | 3 | 20 | n | n | n | n | y | 1.259
32 | 3 | 20 | y | y | y | y | y | 0.605
32 | 3 | 20 | n | y | y | y | y | 0.989
32 | 3 | 20 | y | n | y | y | y | 0.648
32 | 3 | 20 | y | y | n | y | y | 0.647
32 | 3 | 20 | y | y | y | n | y | 0.629
32 | 3 | 20 | y | y | y | y | n | 1.207

Table 3: Comparison of the average time per move of different configurations of $\alpha\mu$ with three Max moves and 40 worlds on deals with 32 cards.

Cards | M | Worlds | U | E | $\alpha$ | W | Win | Time
---|---|---|---|---|---|---|---|---
32 | 3 | 40 | n | n | n | n | n | 4.381
32 | 3 | 40 | y | n | n | n | n | 2.891
32 | 3 | 40 | n | y | n | n | n | 3.939
32 | 3 | 40 | n | n | y | n | n | 3.852
32 | 3 | 40 | n | n | n | y | n | 4.338
32 | 3 | 40 | n | n | n | n | y | 2.989
32 | 3 | 40 | y | y | y | y | y | 1.297
32 | 3 | 40 | n | y | y | y | y | 2.420
32 | 3 | 40 | y | n | y | y | y | 1.382
32 | 3 | 40 | y | y | n | y | y | 1.410
32 | 3 | 40 | y | y | y | n | y | 1.329
32 | 3 | 40 | y | y | y | y | n | 2.443

### Games Against WBridge5

Table 4: Results of duplicate games against WBridge5 and PIMC on 10 000 deals at 7 No Trump.

P1 | P2 | D | $\neq$ | Winrate | $\sigma$
---|---|---|---|---|---
$\alpha\mu$ | WB5 | PIMC | 812 | 0.567 | 0.0174
$\alpha\mu$ | WB5 | WB5 | 755 | 0.428 | 0.0180
$\alpha\mu$ | WB5 | DDS | 567 | 0.697 | 0.0193
$\alpha\mu$ | PIMC | PIMC | 757 | 0.551 | 0.0181
$\alpha\mu$ | PIMC | WB5 | 687 | 0.569 | 0.0189
$\alpha\mu$ | PIMC | DDS | 467 | 0.647 | 0.0221

Figure 6: The evolution of the winrate with the number of worlds and the depth of the search.

Figure 7: The evolution of the thinking time with the number of worlds and the depth of the search.
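The σ column of Table 4 is consistent with the standard error of a binomial proportion computed over the differing deals; a sketch, assuming this is how the column was obtained:

```python
import math

# Standard error of a binomial proportion over the n_diff deals
# where the two compared players obtained different outcomes.
# Whether Table 4 was computed exactly this way is our assumption.
def std_error(winrate, n_diff):
    return math.sqrt(winrate * (1 - winrate) / n_diff)

print(round(std_error(0.567, 812), 4))  # → 0.0174, first row of Table 4
```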
In order to compare two algorithms we use duplicate games: the two algorithms play the same 10 000 deals against the same algorithm for the defense. Table 4 shows the results of duplicate games against different algorithms. The P1 column shows the Max player associated with the winrate. The P2 column shows the other player to which it is compared. The D column indicates the algorithm used for playing the defense. P1 and P2 play the declarer for a 7 No Trump contract. The $\neq$ column gives the number of games where P1 has a different outcome than P2. The Winrate column shows the percentage of these different games that were won by P1 and lost by P2. The standard deviation is given in column $\sigma$. The players are $\alpha\mu$, WBridge5 (WB5), the winner of the 2016, 2017 and 2018 World Computer-Bridge Championships, Perfect Information Monte Carlo (PIMC), and the Double Dummy Solver (DDS), which has complete information and plays knowing the hands of all players. We can see on the first line that $\alpha\mu$ outperforms WBridge5 against PIMC as the defense, but on the second line that WBridge5 performs better against itself playing as the defense. According to the author of WBridge5, this may be due to the fact that WBridge5 as a declarer partly models itself as the defense during its search. The third line shows that $\alpha\mu$ performs much better than WBridge5 against a perfect complete information defense (DDS). The next three lines use PIMC as the second player of the duplicate games. $\alpha\mu$ performs better than PIMC against the three defenders, and especially against DDS (as in the duplicate games with WBridge5). Figure 6 shows the evolution of the winning percentage of different versions of $\alpha\mu$ against WBridge5. The x-axis is the number of worlds used by $\alpha\mu$. It starts at 20 worlds and doubles the number of worlds up to 320 worlds. There are curves for each number of Max moves between 1 and 4.
Note that $\alpha\mu$ with 1 Max move is PIMC. We can observe that the curves are asymptotic and that the asymptote of PIMC is below the asymptote of $\alpha\mu$ with numbers of Max moves greater than 1. Going from 2 Max moves to 3 or 4 does not improve the results. Figure 7 gives the average time per move of the different versions of $\alpha\mu$. We can observe that $\alpha\mu$ with two Max moves is close to PIMC. Moreover, referring to figure 6, for equivalent thinking times $\alpha\mu$ with two Max moves obtains better results than PIMC with 320 worlds. ### Leaf Parallelization Previous experiments did not use parallelization. We now present a simple parallelization of the algorithms with cuts. In Monte Carlo Tree Search there are three kinds of parallelization: root, leaf and tree parallelization (Cazenave and Jouandeau 2007; Chaslot, Winands, and van den Herik 2008; Cazenave and Jouandeau 2008). Root parallelization runs independent searches in parallel and sums the results to choose the most simulated move. Tree parallelization has multiple threads sharing a common tree. Leaf parallelization parallelizes the playouts at the leaves of the tree. In order to speed up $\alpha\mu$, we are interested in the simplest form of parallelization, namely leaf parallelization. When reaching a leaf, $\alpha\mu$ performs an $\alpha\beta$ complete information double dummy search on each possible world compatible with the cards played so far. These $\alpha\beta$ searches are independent and can safely be run in parallel. This is what we call the leaf parallelization of $\alpha\mu$. In order to optimize leaf parallelization, each thread is assigned the next available world as soon as it becomes free. There is a mutual exclusion on the assignment of a world to a thread.
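The assignment scheme just described can be sketched as follows (an illustrative Python analogue of the paper's OpenMP implementation; solve_world stands in for the double dummy search on one world, and in CPython true parallelism would additionally require the DDS calls to release the GIL):

```python
import threading

# Sketch of the leaf parallelization described above: the per-world
# double dummy searches are independent, so worker threads grab the
# next unsolved world under mutual exclusion.
def solve_leaf(worlds, solve_world, n_threads=6):
    results = [None] * len(worlds)
    next_world = [0]
    lock = threading.Lock()

    def worker():
        while True:
            with lock:                    # mutual exclusion on assignment
                i = next_world[0]
                if i >= len(worlds):
                    return
                next_world[0] += 1
            results[i] = solve_world(worlds[i])  # independent per-world search

    threads = [threading.Thread(target=worker) for _ in range(n_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results
```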
Table 5 gives the comparison of the times spent per move for $\alpha\mu$ with and without parallelization, for different numbers of Max moves and worlds. Leaf parallelization was done with OpenMP, with mutual exclusion on the assignment of worlds to threads. We can see that there is little to gain for PIMC with 20 worlds, going from 0.019 seconds for 1 thread to 0.015 seconds for 6 threads. For PIMC with 80 worlds, going from 1 to 6 threads halves the average time per move. Similar speedups occur for $\alpha\mu$ with two Max moves: the speedup factor is approximately two for 80 worlds. Leaf parallelized $\alpha\mu$ with two Max moves and 40 worlds has approximately the same thinking time as PIMC with 160 worlds, and we can see in figure 6 that it has a better winrate.

Table 5: Comparison of the average time per move with Leaf Parallelization using 1 thread and 6 threads.

Cards | M | Worlds | 1 thread | 6 threads
---|---|---|---|---
52 | 1 | 20 | 0.019 | 0.015
52 | 1 | 40 | 0.041 | 0.024
52 | 1 | 80 | 0.083 | 0.041
52 | 1 | 160 | 0.160 | 0.078
52 | 2 | 20 | 0.067 | 0.042
52 | 2 | 40 | 0.145 | 0.081
52 | 2 | 80 | 0.333 | 0.152

## Conclusion We presented five different optimizations that speed up search for the $\alpha\mu$ algorithm, and showed how each optimization contributes to the final speedup when combined. In our experiments, the algorithm with the optimizations was shown to be three times faster than without them. The optimized algorithm has also been parallelized with leaf parallelization, which gave a speedup factor of 2. The evolution of the winrate and the thinking time with the number of worlds shows an asymptotic behavior both for PIMC and for $\alpha\mu$ with different numbers of Max moves. The asymptote of $\alpha\mu$ is higher than the asymptote of PIMC, meaning that for usual thinking times and numbers of worlds, optimized $\alpha\mu$ performs better than PIMC.
Moreover, optimized $\alpha\mu$ outperforms the former computer Bridge world champion WBridge5 in the 7NT contract when the defense is played by PIMC or DDS. ## Acknowledgment Thanks to Dan Braun for proofreading. ## References * Bouzy, Rimbaud, and Ventos (2020) Bouzy, B.; Rimbaud, A.; and Ventos, V. 2020. Recursive Monte Carlo Search for Bridge Card Play. In _2020 IEEE Conference on Games (CoG)_ , 1–8. * Brown and Sandholm (2019) Brown, N.; and Sandholm, T. 2019. Superhuman AI for multiplayer poker. _Science_ 365(6456): 885–890. * Buro et al. (2009) Buro, M.; Long, J. R.; Furtak, T.; and Sturtevant, N. 2009. Improving state evaluation, inference, and search in trick-based card games. In _Twenty-First International Joint Conference on Artificial Intelligence_. * Cazenave (2009) Cazenave, T. 2009. Nested Monte-Carlo Search. In Boutilier, C., ed., _IJCAI_ , 456–461. * Cazenave and Jouandeau (2007) Cazenave, T.; and Jouandeau, N. 2007. On the parallelization of UCT. In _proceedings of the Computer Games Workshop_ , 93–101. * Cazenave and Jouandeau (2008) Cazenave, T.; and Jouandeau, N. 2008. A Parallel Monte-Carlo Tree Search Algorithm. In _Computers and Games, 6th International Conference, CG 2008, Beijing, China, September 29 - October 1, 2008. Proceedings_ , 72–80. * Cazenave and Ventos (2020) Cazenave, T.; and Ventos, V. 2020. The $\alpha$$\mu$ Search Algorithm for the Game of Bridge. In _Monte Carlo Search at IJCAI_. * Chaslot, Winands, and van den Herik (2008) Chaslot, G. M.-B.; Winands, M. H.; and van den Herik, H. J. 2008. Parallel monte-carlo tree search. In _International Conference on Computers and Games_ , 60–71. Springer. * Cowling, Powley, and Whitehouse (2012) Cowling, P. I.; Powley, E. J.; and Whitehouse, D. 2012. Information set monte carlo tree search. _IEEE Transactions on Computational Intelligence and AI in Games_ 4(2): 120–143. * Edelkamp (2020) Edelkamp, S. 2020. 
Representing and Reducing Uncertainty for Enumerating the Belief Space to Improve Endgame Play in Skat. In _ECAI 2020_. * Frank and Basin (2001) Frank, I.; and Basin, D. 2001. A theoretical and empirical investigation of search in imperfect information games. _Theoretical Computer Science_ 252(1-2): 217–256. * Furtak and Buro (2013) Furtak, T.; and Buro, M. 2013. Recursive Monte Carlo search for imperfect information games. In _2013 IEEE Conference on Computational Inteligence in Games (CIG), Niagara Falls, ON, Canada, August 11-13, 2013_ , 1–8. * Ginsberg (2001) Ginsberg, M. L. 2001. GIB: Imperfect Information in a Computationally Challenging Game. _J. Artif. Intell. Res._ 14: 303–358. * Haglund (2010) Haglund, B. 2010. Search algorithms for a bridge double dummy solver. * Kupferschmid and Helmert (2006) Kupferschmid, S.; and Helmert, M. 2006. A skat player based on Monte-Carlo simulation. In _International Conference on Computers and Games_ , 135–147. Springer. * Lanctot et al. (2019) Lanctot, M.; Lockhart, E.; Lespiau, J.-B.; Zambaldi, V.; Upadhyay, S.; Pérolat, J.; Srinivasan, S.; Timbers, F.; Tuyls, K.; Omidshafiei, S.; et al. 2019. OpenSpiel: A framework for reinforcement learning in games. _arXiv preprint arXiv:1908.09453_ . * Levy (1989) Levy, D. N. 1989. The million pound bridge program. _Heuristic Programming in Artificial Intelligence The First Computer Olympiad_ 95–103. * Lockhart et al. (2019) Lockhart, E.; Lanctot, M.; Pérolat, J.; Lespiau, J.-B.; Morrill, D.; Timbers, F.; and Tuyls, K. 2019. Computing Approximate Equilibria in Sequential Adversarial Games by Exploitability Descent. _arXiv preprint arXiv:1903.05614_ . * Long et al. (2010) Long, J. R.; Sturtevant, N. R.; Buro, M.; and Furtak, T. 2010. Understanding the success of perfect information monte carlo sampling in game tree search. In _Twenty-Fourth AAAI Conference on Artificial Intelligence_. * Mahmood, Grant, and Sharif (2014) Mahmood, Z.; Grant, A.; and Sharif, O. 2014. 
_Bridge for Beginners: A Complete Course_. Pavilion Books. * Sturtevant and White (2006) Sturtevant, N. R.; and White, A. M. 2006. Feature construction for reinforcement learning in hearts. In _International Conference on Computers and Games_ , 122–134. Springer. * Ventos et al. (2017) Ventos, V.; Costel, Y.; Teytaud, O.; and Thépaut Ventos, S. 2017. Boosting a Bridge Artificial Intelligence. In _Proc. International Conference on Tools with Artificial Intelligence (ICTAI)_ , 1280–1287. IEEE. * Zinkevich et al. (2008) Zinkevich, M.; Johanson, M.; Bowling, M.; and Piccione, C. 2008. Regret minimization in games with incomplete information. In _Advances in neural information processing systems_ , 1729–1736.
# Enhancing the Transformer Decoder with Transition-based Syntax Leshem Choshen Department of Computer Science Hebrew University of Jerusalem <EMAIL_ADDRESS> Omri Abend Department of Computer Science Hebrew University of Jerusalem <EMAIL_ADDRESS> ###### Abstract Notwithstanding recent advances, syntactic generalization remains a challenge for text decoders. While some studies showed gains from incorporating source-side symbolic syntactic and semantic structure into text generation Transformers, very little work has addressed the decoding of such structure. We propose a general approach for tree decoding using a transition-based approach. Examining the challenging test case of incorporating Universal Dependencies syntax into machine translation, we present substantial improvements on test sets that focus on syntactic generalization, while presenting improved or comparable performance on standard MT benchmarks. Further qualitative analysis addresses cases where syntactic generalization in the vanilla Transformer decoder is inadequate and demonstrates the advantages afforded by integrating syntactic information. Code can be found at https://github.com/borgr/nematus/tree/generation ## 1 Introduction In parallel to the impressive achievements of large neural networks in a variety of NLP fields, more and more work emphasizes the importance of the inductive biases models possess and the types of generalizations they make (Welleck et al., 2021; Csordás et al., 2021; Ontanón et al., 2021). Syntactic generalization has been repeatedly identified as a problem in text generation (Linzen and Baroni, 2020; Hu et al., 2020), an issue that we address here. Importantly, language models may fail, sometimes unexpectedly, on constructions that can be reliably parsed using standard syntactic parsers. In this work, we propose a method for incorporating syntax into the decoder to assist in mitigating these challenges, focusing on NMT as a test case.
The use of (mostly syntactic) structure in machine translation dates back to the early days of the field (Lopez, 2008). While focus has shifted to string-to-string methods since the introduction of neural methods, considerable work has shown gains from integrating linguistic structure into NMT and text generation technologies. We briefly survey such methods in §7. Incorporating target-side syntax has been less frequently addressed than source-side syntax, possibly due to the additional conceptual and technical complexity it entails, as it requires jointly generating the translation and its syntactic structure. Besides linearizing the structure into a string, which allows source and target structure to be easily incorporated (Aharoni and Goldberg, 2017b; Nadejde et al., 2017), several works generated the nodes of the syntactic tree using RNNs (Gū et al., 2018; Wang et al., 2018; Wu et al., 2017). Others have shown gains from multi-task training of a decoder with a syntactic parser (Eriguchi et al., 2016). However, we are not aware of any Transformer-based architecture that supports the integration of target-side structure in the form of a tree or a graph. Addressing this gap, we propose a flexible architecture for integrating graphs into a Transformer decoder. Our approach is based on predicting the output tree as a sequence of transitions (§3), following the transition-based tradition in parsing (Nivre, 2003, and much subsequent work). The method (presented in §4) is based on generating the structure incrementally, as a sequence of transitions, as is customary in transition-based parsers. However, unlike standard linearization approaches, our proposed decoder re-encodes the intermediate graph (and not only the generated tokens), thus allowing the decoder to take advantage of the hitherto produced structure in its further predictions.
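To make the transition-based view concrete, a dependency tree can be produced as a sequence of SHIFT/LEFT-ARC/RIGHT-ARC actions. The following is a generic arc-standard-style system in the spirit of Nivre (2003), shown as an illustrative toy rather than the paper's own transition set:

```python
# Toy arc-standard transition executor (not necessarily the paper's
# exact transition system). Arcs are (head, dependent) word indices.
def run_transitions(words, actions):
    stack, buffer, arcs = [], list(range(len(words))), []
    for act in actions:
        if act == "SHIFT":
            stack.append(buffer.pop(0))
        elif act == "LEFT-ARC":
            dep = stack.pop(-2)           # second-from-top depends on top
            arcs.append((stack[-1], dep))
        elif act == "RIGHT-ARC":
            dep = stack.pop()             # top depends on second-from-top
            arcs.append((stack[-1], dep))
    return arcs

words = ["John", "put", "the", "coals", "out"]
actions = ["SHIFT", "SHIFT", "LEFT-ARC",   # put -> John
           "SHIFT", "SHIFT", "LEFT-ARC",   # coals -> the
           "RIGHT-ARC",                    # put -> coals
           "SHIFT", "RIGHT-ARC"]           # put -> out
print(run_transitions(words, actions))
# → [(1, 0), (3, 2), (1, 3), (1, 4)]
```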
In §2, we discuss the possibilities offered by such decoders, which do not only auto-regress on their previous outputs, but also on (symbolic) structures defined by those outputs. Indeed, a decoder thus built can condition both on information it did not predict (e.g., external knowledge bases) and on information predicted later on. We introduce _bidirectional attention_ into the decoder, which allows token representations to encode following tokens that have already been predicted. This is similar to the bidirectional attention in the encoder, where any token can attend to any token, and not only to preceding ones. Our architecture is flexible, supporting decoding not only into trees, but into any graph structure for which a transition system exists. We test two architectures for incorporating the syntactic graph. One inputs the graph into a Graph Convolutional Network (GCN; Kipf and Welling, 2016), and another dedicates an attention head to point at the syntactic parent of each token, which does not yield any increase in the number of parameters. We assess in §6 the impact of the proposed architecture on syntactically challenging translation cases (Choshen and Abend, 2019) and in general. We experiment with a 4-layer model in three target languages, and a 6-layer model on En-De. Due to the high computational cost, we experiment with the larger model on a single language pair only. We find that on the syntactic challenge sets proposed by Choshen and Abend (2019), the proposed decoder achieves substantial improvements over the vanilla decoder, which do not diminish (and even slightly improve) when increasing the size of the model.
In addition, evaluating on the standard MT benchmarks, we find that the syntactic decoders outperform the vanilla Transformer at the smaller model size on all examined language pairs: on the English-German (En-De) and German-English (De-En) challenge sets and on the En-De, De-En and English-Russian (En-Ru) test sets; they obtain results comparable to the vanilla when experimenting with a larger model on En-De. Finally, we analyse the different modifications in isolation, finding that the ablated versions' performance resides between the full model and the vanilla decoder.

Figure 1: Illustration of the information fed into the decoder with each method. Left: Vanilla. Center: Bidirectional Decoder. Right: Structural Decoder. At a given step, the Bidirectional Decoder attends to all predicted words, while the Structural Decoder predicts edges and receives both edges and words as input.

## 2 Decoding Approach

Jo@@ hn &nbsp;&nbsp; put &nbsp;&nbsp; the &nbsp;&nbsp; coals &nbsp;&nbsp; out
(Dependency tree: root → put; put →nsubj John; coals →det the; put →obj coals; put →compound:prt out)

Example 1: Target-side structure reduces the ambiguity of “put”. De source: “John löschte die Kohlen” (lit. John put-out the coals).

Disambiguating and connecting distant words is a known challenge in NMT (Avramidis et al., 2020). In Example 1, to disambiguate “put” as having the sense “extinguish” rather than “lay”, “out” must be considered. To achieve this from the autoregressed output, the decoder's representation may need to be re-computed after predicting “out”. We note that while source-side information can potentially be used to disambiguate “put”, it may still be beneficial to enhance the auto-regressive decoder with disambiguating information. Current implementations impose an architectural bias, namely, a decoded token's representation may not attend to future tokens.
Transformer models mask attention in the following manner (we are not aware of implementations that do otherwise): token embeddings attend only to previously generated tokens, even when the following tokens are already known. This practice “ensures that the predictions for position $i$ can depend only on the known outputs at positions less than $i$” (Vaswani et al., 2017). We propose to allow attending to any known token (Fig. 1), as is done on the encoder side. Due to its conceptual resemblance to the Bidirectional RNN, we name this the Bidirectional Transformer, or BiTran. Formally, let $o_{1}\ldots o_{n}$ be the hitherto predicted sequence and $d$ the maximum sentence length. Attention is $\mathrm{softmax}\left(L+M\right)$, where $L\in\mathbb{R}^{d\times d}$ are the logits and $M\in\mathbb{R}^{d\times d}$ is a mask; $M(i,j)=-\infty$ masks token $j$ from representation $i$. The vanilla mask is

$M_{v}(i,j)=\begin{cases}0&j<i\\ -\infty&\text{otherwise}\end{cases}$

while the bidirectional attention mask is

$M_{bi}(i,j)=\begin{cases}0&j\leq n\\ -\infty&\text{otherwise}\end{cases}$

This change does not introduce any new parameters or hyperparameters, yet it increases the expressivity of the model. We note, however, that this modification does prevent some commonly implemented speed-ups that rely on unidirectionality (e.g., in NEMATUS; Sennrich et al., 2017). Apart from the technical contribution, we emphasize that this and the following approaches take advantage of attention-based models being stateless. Transformers can therefore be viewed as conditional language models, namely as models producing a distribution for the next word given the generated prefix and the source sentence. Viewing them as such opens possibilities that were not native to RNNs, such as predicting only partial outputs and conditioning on per-token or non-autoregressed context (see App. A).

## 3 Transition-based Structure Generation

We turn to describe how we represent structure within the proposed decoder.
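As a concrete illustration, the two masks can be sketched in a few lines of NumPy (function names are ours; in practice $M$ is added to the attention logits $L$ before the softmax):

```python
import numpy as np

NEG_INF = float("-inf")

def vanilla_mask(d):
    # M_v(i, j) = 0 if j < i, -inf otherwise: position i attends
    # only to strictly earlier positions.
    i, j = np.indices((d, d))
    return np.where(j < i, 0.0, NEG_INF)

def bidirectional_mask(d, n):
    # M_bi(i, j) = 0 if token j is among the n already-predicted
    # tokens, -inf otherwise: every position may attend to any
    # known token, regardless of its own position (0-indexed here).
    j = np.indices((d, d))[1]
    return np.where(j < n, 0.0, NEG_INF)
```

With $n=2$ and $d=4$, every row of the bidirectional mask exposes the first two tokens, whereas the vanilla mask exposes them only to later rows.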
We generate the target-side structure with a transition-based approach, motivated by the practical strength of such methods, as well as by their sequential nature, which fits neural decoders well. We therefore augment the vocabulary with transitions. Our work is inspired by RNNG (Dyer et al., 2016), a conceptually similar architecture developed for RNNs. At each step, the input to the decoder includes the tokens and the parse graph generated thus far. As edges and their tokens are not generated simultaneously (but rather by different transitions; see below), we rely on bidirectional attention to update the past embeddings when a new edge connects previously generated tokens. In this section, we present the syntactic transitions; in the next (§4), the ways we incorporate them back into the model. In this work, we represent syntax through Universal Dependencies (UD; Nivre et al., 2016), but note that other syntactic and semantic formalisms that have transition-based parsers (Hershcovich et al., 2018; Stanojević and Steedman, 2020; Oepen et al., 2020) fit the framework as well. We select UD due to its support for over 100 languages and its status as the de facto standard for syntactic representation. We base our transition system on arc-standard (Nivre, 2003), which can produce any projective tree. Both systems contain transitions connecting two words by a labeled edge. However, we replace Shift, which reads the next word, with Subword$_t$, which generates a new sub-word $t$. Sub-words are generated successively until a full word is formed. To avoid a suboptimal representation of the transition tokens, we add the edges going through them to the graph (e.g., the edge Left-Arc:det $\xrightarrow{\text{det}}$ the). We denote with $f$ the transition functions updating a word stack $\Sigma$ and the labeled graph $G$.
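Before $f$ is defined formally, here is a minimal runnable sketch of the system (naming is ours; for brevity it omits the extra edges through the transition tokens themselves). Applied to the transition sequence for the tree in Example 1, it recovers the words and the labeled edges:

```python
def apply_transitions(tokens):
    # Word stack and labeled edge list of (head, label, dependent).
    stack, edges = [], []
    open_word = False  # is the stack top an unfinished word?
    for tok in tokens:
        if tok.startswith(("Left-Arc:", "Right-Arc:")):
            label = tok.split(":", 1)[1]
            a, b = stack[-1], stack[-2]  # top and second words
            if tok.startswith("Left-Arc:"):
                edges.append((a, label, b))  # a --label--> b
                stack = stack[:-2] + [a]
            else:
                edges.append((b, label, a))  # b --label--> a
                stack = stack[:-2] + [b]
            open_word = False
        else:  # a Subword_t transition; "@@" marks a continued word
            piece, cont = (tok[:-2], True) if tok.endswith("@@") else (tok, False)
            if open_word:
                stack[-1] += piece  # extend the unfinished word
            else:
                stack.append(piece)
            open_word = cont
    return stack, edges

sequence = ["Jo@@", "hn", "put", "Left-Arc:nsubj", "the", "coals",
            "Left-Arc:det", "Right-Arc:obj", "out", "Right-Arc:compound:prt"]
stack, edges = apply_transitions(sequence)
```

After the full sequence, the stack holds only the root word "put", and the edges match the dependency tree of Example 1.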
If $a,b$ are the top and second words in $\Sigma$ respectively, and $x$ a transition, then $f(x;\Sigma)$ is defined as:

$x$ (token) | $\Sigma$ | Edges Added
---|---|---
Subword$_t$ | $t,a,b$ | $\emptyset$
Left-Arc:$l$ | $a$ | $a\xrightarrow{\text{l}}b$, $x\xrightarrow{\text{l}}b$, $a\xrightarrow{\text{l}}x$
Right-Arc:$l$ | $b$ | $b\xrightarrow{\text{l}}a$, $x\xrightarrow{\text{l}}a$, $b\xrightarrow{\text{l}}x$

For brevity, we denote an edge from/to every subword of $a$ as an edge from/to $a$. Overall, the transition sequence to create the graph in Example 1 is: Jo@@ hn put Left-Arc:nsubj the coals Left-Arc:det Right-Arc:obj out Right-Arc:compound:prt (more details in App. B).

## 4 Regressing on Generated Structure

As discussed in §2, the stateless nature of the Transformer allows re-encoding not only the previous predictions, but any information that can be computed based on them. So far, we proposed to autoregress on the syntactic structure, token by token. However, as $f$ is deterministic, learning to emulate it is pointless. Instead, we can autoregress on the generated graph itself, $G=f\left(o_{1}\ldots o_{n}\right)$, as well as on the generated tokens, $o_{1}\ldots o_{n}$. Our approach is modular and works with any graph encoding method. We experiment with two prominent methods for source-side graph encoding.

#### GCN Encoder.

Graph Convolutional Networks (GCN; Kipf and Welling, 2016) are a type of graph neural network. GCNs were used successfully by previous work to encode source-side syntactic and semantic structure for NMT (Bastings et al., 2017; Marcheggiani et al., 2018). The GCN layers are stacked immediately above the embedding layer. The GCN contains weights per edge type and label, as well as gates, which allow placing less emphasis on the syntactic cue if the network so chooses. Gating is assumed to help against noisy structure, which machine-generated output is expected to be. See the ablation experiments in §6.3 for an assessment of the impact of gating.
Following Kipf and Welling (2016), we introduce three edge types: Self, from a token to itself; Left, to the parent tokens; and Right, from the parents. A GCN layer over an input layer $h$, a node $v$ and a graph $G$ containing nodes of size $d$, with activation $\rho$, edge directions $\operatorname{dir}$, labels $\operatorname{lab}$, and a function $\mathcal{N}$ mapping a node in $G$ to its neighbors, is

$\mathbf{gcn}(h,v,G)=\rho\Big{(}\sum_{u\in\mathcal{N}(v)}g_{u,v}\cdot f_{u,v}\Big{)}$

where $f_{u,v}$ is a graph-weighted embedding:

$f_{u,v}=W_{\operatorname{dir}(u,v)}\,\mathbf{h}_{u}+\mathbf{b}_{\operatorname{lab}(u,v)}$

and $g_{u,v}$ is the applied gate:

$g_{u,v}=\sigma\Big{(}\mathbf{h}_{u}\cdot\mathbf{\hat{w}}_{\operatorname{dir}(u,v)}+\hat{b}_{\operatorname{lab}(u,v)}\Big{)}$

where $\sigma$ is the logistic sigmoid function, and $\mathbf{\hat{w}}_{\operatorname{dir}(u,v)}\in\mathbb{R}^{d}$, $W\in\mathbb{R}^{d\times d}$, $\hat{b}_{\operatorname{lab}(u,v)}\in\mathbb{R}$, $\mathbf{b}\in\mathbb{R}^{d}$ are the learned GCN parameters.

#### Attending to Parent Token.

The second re-encoding method we test, Parent, dedicates an attention head only to the parent(s) of the given token. Commonly, the parent is given by an external parser (Hao et al., 2019) or learned locally in each layer to focus the attention (Strubell et al., 2018). Unlike such approaches, we define the parents by the self-generated graph. To allow ignoring it when preferable, or when no parent has been generated, we also allow attending to the current token. To recap, for a token $o_{i}$, we mask all but $o_{i}$ and its parents. Parent differs from GCN considerably. On the one hand, Parent requires minimal architectural changes and no additional hyperparameters. It also affects a different part of the network (some of the attention heads) rather than adding an embedding component. On the other hand, only GCN represents the labels and the whole graph, specifically the children.
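A minimal NumPy sketch of the gated, labeled GCN layer defined by the equations above (the dict-based parameter layout and the toy values are ours; a practical implementation would batch nodes and use sparse operations):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gcn_layer(h, edges, params, rho=np.tanh):
    # h: (num_nodes, d) input embeddings.
    # edges: (u, v, direction, label) tuples; each node should also
    # carry a "self" edge so it retains its own representation.
    out = np.zeros_like(h)
    for u, v, dirn, lab in edges:
        f_uv = params["W"][dirn] @ h[u] + params["b"][lab]                   # message
        g_uv = sigmoid(h[u] @ params["w_hat"][dirn] + params["b_hat"][lab])  # scalar gate
        out[v] += g_uv * f_uv
    return rho(out)

# Toy parameters: identity weights and zero biases, so every gate is 0.5.
d = 4
dirs, labs = ("self", "left", "right"), ("none", "nsubj")
params = {
    "W":     {k: np.eye(d) for k in dirs},
    "w_hat": {k: np.zeros(d) for k in dirs},
    "b":     {k: np.zeros(d) for k in labs},
    "b_hat": {k: 0.0 for k in labs},
}
h = np.ones((2, d))
edges = [(0, 0, "self", "none"), (1, 1, "self", "none"), (0, 1, "left", "nsubj")]
out = gcn_layer(h, edges, params)
```

Note how node 1 aggregates both its self message and the gated message from its dependent, exactly as in the summation over $\mathcal{N}(v)$ above.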
By considering both architectures, we show that graph methods for the encoder (Bastings et al., 2017) may be easily adapted to the decoder, demonstrating the flexibility of the proposed framework.

## 5 Experimental Setup

#### Metrics.

We report both BLEU (Papineni et al., 2002) and chrF+ (Popović, 2017), and note that chrF+ has been deemed more reliable for current technology (Ma et al., 2019).

#### Model.

Medium (large) models are trained with batch size 128, embedding size 256 (512), 4 (6) decoder and encoder blocks, and 8 attention heads (Parent replaces one of them). We train for 90K (150K) steps, where empirically some saturation is reached, allowing a fair system comparison (Popel and Bojar, 2018). The GCN architecture includes 2 layers with residual connections. Parses are extracted by UDPipe (Straka, 2018), with UD 2.0 models for English and German and the UD 2.5 SynTagRus model for Russian. Unable to identify a preexisting implementation, we implemented labeled sparse GCNs with gating in TensorFlow. The implementation mostly focused on memory considerations, and was optimized for runtime where possible. More on implementation details, filtering and preprocessing in App. B.

#### Language Pairs.

We experiment on 3 language pairs with 3 target languages: English (De-En), German (En-De) and Russian (En-Ru). We use the WMT16 data (Bojar et al., 2016) for En-De, and either the clean News Commentary data or the full noisy WMT20 data (Barrault et al., 2020) for En-Ru.

#### Test sets.

Newstest 2012 served as a development set. To measure overall system performance we used newstest 2013-15. To test syntactic generalization, we used the challenge sets of Choshen and Abend (2019). Those are sub-sets of the books and newstest corpora in En$\leftrightarrow$De, automatically filtered by a syntactic parser to contain lexical long-distance dependencies, i.e., sentences where two or more non-consecutive words correspond to a single word.
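For intuition about the chrF+ metric reported throughout, here is a simplified sketch of chrF: an F-score over character n-grams, here with $\beta=2$ (the commonly used chrF2 setting). It omits the word-unigram component that distinguishes chrF+ and other details of the official implementation, so scores are illustrative only:

```python
from collections import Counter

def char_ngrams(text, n):
    # Character n-grams, ignoring spaces (as chrF does by default).
    s = text.replace(" ", "")
    return Counter(s[i:i + n] for i in range(len(s) - n + 1))

def chrf(hyp, ref, max_n=6, beta=2.0):
    precs, recs = [], []
    for n in range(1, max_n + 1):
        h, r = char_ngrams(hyp, n), char_ngrams(ref, n)
        if not h or not r:
            continue  # strings shorter than n contribute nothing
        overlap = sum((h & r).values())  # clipped n-gram matches
        precs.append(overlap / sum(h.values()))
        recs.append(overlap / sum(r.values()))
    if not precs:
        return 0.0
    p, r = sum(precs) / len(precs), sum(recs) / len(recs)
    if p + r == 0.0:
        return 0.0
    return (1 + beta ** 2) * p * r / (beta ** 2 * p + r)  # F-beta
```

With $\beta=2$, recall is weighted more heavily than precision, which is part of why chrF correlates well with human judgments.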
E.g., “put … out” in Example 1 corresponds to the German “löschte” (see also Example 2). Previous work has shown such phenomena to be challenging for present-day NMT systems. Improving the automatic measures on one such challenge set indicates better performance on a specific phenomenon, while better overall challenge set performance implies better handling of lexical long-distance dependencies. The various challenge set settings are represented as a 3-tuple $(dir,p,dom)$, corresponding to the direction, inspected phenomenon and domain. The direction can be either “source” or “target”, indicating whether the long-distance dependency is in the source or in the target (i.e., in the reference). A more effective representation of the target-side syntax should improve the target challenges, and potentially also the source-side ones, by increasing the model's “awareness” of syntactic structure. By phenomenon, we refer to the syntactic phenomenon in question. There are three test cases for English phenomena and two for German. By domain we refer to the origin of the examples, which can be either the sizable books corpus (Tiedemann, 2012) or a smaller news corpus (Barrault et al., 2020).
## 6 Results

(a) Target challenge sets for En-De, large models (Preposition Stranding is omitted as it is not present in German):

Model | Particle (Books) | Particle (News) | Reflexive (Books) | Reflexive (News)
---|---|---|---|---
Vanilla | 4.14 / 20.72 | 20.31 / 49.04 | 8.08 / 32.38 | 20.65 / 49.09
Parent | 8.37 / 33.78 | 20.54 / 49.99 | 8.60 / 33.49 | 21.39 / 50.01

(b) Source challenge sets for En-De, large models:

Model | Prep. Stranding (Books) | Prep. Stranding (News) | Particle (Books) | Particle (News) | Reflexive (Books) | Reflexive (News)
---|---|---|---|---|---|---
Vanilla | 8.70 / 33.58 | 13.82 / 43.41 | 8.59 / 32.66 | 15.28 / 44.28 | 8.54 / 32.85 | 18.90 / 45.82
Parent | 9.03 / 34.83 | 11.53 / 45.12 | 8.59 / 33.71 | 14.99 / 45.90 | 9.05 / 34.11 | 20.79 / 46.73

(c) Source challenge sets for En-De, medium models:

Model | Prep. Stranding (Books) | Prep. Stranding (News) | Particle (Books) | Particle (News) | Reflexive (Books) | Reflexive (News)
---|---|---|---|---|---|---
Vanilla | 5.95 / 25.88 | 9.96 / 36.96 | 5.37 / 24.69 | 9.39 / 39.19 | 5.32 / 24.71 | 16.48 / 42.04
Parent | 6.21 / 28.12 | 11.17 / 41.13 | 5.47 / 25.74 | 11.93 / 41.24 | 5.71 / 26.22 | 15.56 / 42.76
GCN | 6.21 / 27.27 | 11.31 / 40.48 | 5.51 / 25.53 | 10.35 / 39.83 | 5.46 / 25.70 | 16.45 / 43.03

(d) Target challenge sets for De-En, medium models:

Model | Prep. Stranding (Books) | Prep. Stranding (News) | Particle (Books) | Particle (News) | Reflexive (Books) | Reflexive (News)
---|---|---|---|---|---|---
Vanilla | 6.38 / 27.30 | 9.18 / 38.22 | 6.53 / 25.70 | 10.54 / 38.28 | 6.15 / 25.94 | 17.20 / 43.12
Parent | 7.59 / 27.87 | 10.81 / 39.22 | 7.07 / 26.50 | 9.72 / 39.57 | 6.82 / 26.58 | 17.56 / 44.00
GCN | 6.33 / 26.60 | 10.14 / 41.00 | 6.69 / 26.16 | 10.60 / 39.81 | 6.33 / 25.83 | 20.16 / 44.19

Table 1: Results on the syntactic challenge sets, both on the larger sets from books and the smaller ones from news. Each cell shows BLEU / chrF+. Models include Vanilla and the GCN and Parent UD-based decoders. Models can be large or medium in size and trained on En-De or De-En. Challenges are either in the source or target translation. See also App. E.
Source | der gruppe, an die sich der Plan richtet
---|---
Gloss | the group to which himself the plan aims
Ref. | the group to whom the plan is aimed
Parent | the group to which the plan is aimed
Vanilla | the group aimed at the plan

Example 2: A part of a sentence with a long-distance German reflexive verb from the challenge set.

We compare the syntactic generalization abilities of the different decoders in §6.1, and continue by examining their overall performance (§6.2). We then assess the contribution of the components of the system through ablation experiments (§6.3) and evaluate the effects of noisy training data (§6.4).

### 6.1 Syntactic Generalization

We evaluate the syntactic generalization abilities of the models using the syntactic challenge sets. Results (Table 1) show that the medium Parent (GCN) improves over the Vanilla in 18 (20) of the 20 target challenge settings and in 19 (19) of the 20 source challenges. The large model improves in 18/20 of the challenges, and the gains seem similar or even larger. The latter results suggest that simply using larger models is unlikely to address these gaps in syntactic generalization. See also App. E.
### 6.2 Overall Performance

(a) Overall performance for En-De, medium models:

Model | 2013 | 2014 | 2015
---|---|---|---
Vanilla | 17.61 / 45.54 | 18.23 / 47.29 | 19.57 / 47.50
Parent | 18.11 / 46.75 | 18.60 / 48.46 | 20.55 / 49.20
GCN | 18.03 / 46.43 | 18.86 / 48.46 | 20.32 / 48.90
BiTran | 17.64 / 45.66 | 18.34 / 47.53 | 19.33 / 47.61
Linearized | 17.71 / 46.07 | 18.39 / 47.69 | 19.81 / 48.36
-Gates | 17.81 / 46.12 | 18.43 / 48.08 | 20.06 / 48.62
-Labels | 17.98 / 46.40 | 18.77 / 48.29 | 19.96 / 48.73

(b) Overall performance for En-De translation, large models:

Model | 2013 | 2014 | 2015
---|---|---|---
Vanilla | 23.64 / 53.44 | 21.94 / 53.13 | 21.60 / 50.84
Parent | 23.56 / 54.08 | 22.11 / 53.77 | 20.69 / 49.16

(c) Overall performance for De-En translation, medium models:

Model | 2013 | 2014 | 2015
---|---|---|---
Vanilla | 21.51 / 48.20 | 21.40 / 48.46 | 21.44 / 48.13
Parent | 22.46 / 49.24 | 21.75 / 49.41 | 22.14 / 49.31
GCN | 22.33 / 49.27 | 21.76 / 49.71 | 22.43 / 49.73
BiTran | 21.63 / 48.48 | 21.42 / 48.86 | 21.38 / 48.54
Linearized | 21.95 / 49.27 | 21.83 / 49.79 | 22.20 / 49.70
-Gates | 22.28 / 49.33 | 21.89 / 49.68 | 22.04 / 49.39
-Labels | 22.21 / 49.46 | 21.75 / 49.73 | 22.26 / 49.57

(d) Overall performance for En-Ru:

Model | 2013 | 2014 | 2015
---|---|---|---
Vanilla | 13.20 / 38.72 | 17.17 / 43.69 | 14.19 / 40.87
Parent | 13.61 / 40.67 | 18.53 / 46.44 | 15.75 / 43.57
GCN | 13.25 / 40.31 | 17.86 / 46.09 | 15.38 / 43.09

Table 2: Overall performance in different settings (newstest 2013-15). Each cell shows BLEU / chrF+. Ablated models (where applicable) appear in the bottom part of each sub-table and include the Bidirectional Transformer (BiTran), linearized syntax (Linearized), GCN without labels or gating (-Gates), and GCN without labels (-Labels). The syntactic variants consistently outperform the vanilla and ablated variants in the medium-size setting, and are comparable to it in the large one. The Bidirectional Transformer (BiTran) slightly outperforms the vanilla Transformer.
Table 2 presents the overall test performance for all models. For medium-sized models, the UD-based decoders (GCN and Parent rows) show better performance than the vanilla decoder in all settings, with average improvements of 0.7-1.1 BLEU and 1-2.4 chrF+. We see a slight advantage to the GCN decoder on De-En, and an advantage to Parent on En-De and En-Ru. We apply a sign test on all medium-size test sets and separately on the challenge sets. GCN and Parent are significantly ($p<0.01$) better than BiTran, which is significantly better than the vanilla Transformer. With the large models, Parent performs comparably to the vanilla (Table 2(b)), despite the superior results it obtains on syntactic generalization.

### 6.3 Ablation Experiments

In order to better understand the contribution of the different parts of the architecture and to compare them, we consider ablated versions (see Table 2 and App. E). The differences are small but consistent. In one, Linearized, we train the vanilla Transformer over the transitions, linearized to a string, without encoding the graph through GCN or attention. This is reminiscent of the approaches taken by Aharoni and Goldberg (2017b) and Nadejde et al. (2017), albeit with a different form of linearization. The results place Linearized in a clear position: consistently better than the structure-unaware models, but not as good as the structure-aware ones. We turn to experiment with ablated versions of the GCN decoder. Unlabeled ignores the labels and relies only on the graph structure, while Ungated also removes the gate $g$. Gating was hypothesized to be important to avoid over-reliance on erroneous edges (Bastings et al., 2017; Hao et al., 2019). As our graphs are generated by the network, rather than fed into it by an external parser, this is a good place to test this hypothesis. Comparing GCN with and without labels, we find their contribution to be limited. Despite some improvement in overall BLEU, as often as not, Unlabeled is better on the challenges.
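The sign test applied above treats each test set as a paired trial; since the exact variant is not specified, the following is a minimal sketch of a standard two-sided paired sign test (our assumption; ties dropped, as is conventional):

```python
from math import comb

def sign_test(scores_a, scores_b):
    # Paired, two-sided sign test: under H0, each non-tied pair is a
    # fair coin flip between system A and system B.
    wins = sum(a > b for a, b in zip(scores_a, scores_b))
    losses = sum(a < b for a, b in zip(scores_a, scores_b))
    n, k = wins + losses, min(wins, losses)
    # Two-sided p-value: 2 * P(X <= k) for X ~ Binomial(n, 0.5),
    # capped at 1 (the cap matters only when wins == losses).
    tail = sum(comb(n, i) for i in range(k + 1)) / 2 ** n
    return min(1.0, 2 * tail)
```

For example, a system that wins on all 10 paired test sets yields $p = 2/2^{10} \approx 0.002$, while an even split yields $p = 1$.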
We advise caution, however, in interpreting these results, as they may not necessarily indicate that syntactic labels are redundant. There are two technical points to consider. First, the labels' role in GCNs is small: they contribute many parameters while only affecting a bias term. Presumably, this is an inefficient use that should be addressed in future work. Second, the labels are also incorporated through the transitions, and hence have token embeddings. These could compensate for the disregard of labels. Unlike labels, gating appears to be crucial. The Ungated scores are lower than the Unlabeled variant's in 34/40 challenges. This might indirectly support the hypothesis that gating aids with erroneous parses. It also hints that introducing similar mechanisms into Parent may be beneficial. Even BiTran provides a small (up to 0.28 BLEU and 0.42 chrF+) but consistent improvement. Indeed, it outperforms the vanilla on average and in 10/12 scores in each pair. We observe a similar trend in the challenge sets (Table 1): BiTran improves scores in 26/40 syntactic challenge sets. In conclusion, bidirectionality is in itself somewhat beneficial, both in general and specifically for aggregating the syntactically correct context tokens. As a next step, we compare the GCN ablations to Parent. Like unlabeled GCNs, Parent does not rely on the labels, and provides a different way to incorporate the graph structure, which is still shown to be successful. We note that while labels are not incorporated, they appear as transition inputs and can be attended to. Comparing the two architectures, Parent shows significant gains over Unlabeled GCN. Despite being easier to implement and much lighter in terms of memory, time and hyperparameters, Parent generally outperforms Unlabeled GCN both in overall performance and on the specific challenges. Parent is slightly better than the unablated GCN on En-De and slightly worse on De-En.
It is better on 3 of the 5 De-En phenomena and on one of the En-De phenomena, when compared to the GCN variant.

### 6.4 Noise Robustness

Preliminary experiments indicated that syntactic architectures may be more sensitive to noisy training data than the vanilla Transformer, possibly amplifying parser errors. To test this, we trained on the full WMT data for En-Ru, which is mostly crawled data. Results show that the improvement in chrF+ is smaller, 1 point instead of the 1.5-2.5 observed in other settings, and BLEU scores are somewhat worse (see App. §E.1). It seems then that, overall, the inclusion of noisy data diminishes the relative improvement. An alternative explanation of these results may be that our methods contribute less in the presence of more training data. Our positive results on En-De and De-En, which use relatively large amounts of data (4.5M sentence pairs), show that if this is indeed the case, saturation is slow.

### 6.5 Qualitative Analysis

To complement the automatic challenges, we compile a set of 99 simple subject-verb-object sentences where the German object and subject can swap locations without affecting the meaning. We created three sets of sentences, where the case marking of the subject and object may or may not be ambiguous. For example, Das Pferd bringt der Vater and Der Vater bringt das Pferd both translate to the father brings the horse. Such examples are of particular interest here, as the case of the first noun phrase is ambiguous (“Das Pferd” could be either a subject or an object) and is only disambiguated by the case marking of the second one. These cases require some understanding of the syntax to translate correctly. See App. §C. A native-speaking German annotator, fluent in English, then evaluated the medium-size Parent and Vanilla outputs on these sentences. The examples were found to be challenging for both systems, especially those with ambiguous case markings. However, overall, Parent is more robust to the changes in order.
Interestingly, both models (Parent more consistently) translate some sentences into passive voice, keeping both the (changed) order and the meaning.

## 7 Related Work

While there are indications that Transformers implicitly learn some syntactic structure when trained as language models or for NMT (e.g., Jawahar et al., 2019; Manning et al., 2020; Don-Yehiya et al., 2022), it is not at all clear whether such information obviates the utility of incorporating syntactic structure. Indeed, a considerable body of work suggests the contrary. Much previous work tested RNN-based and attention-based systems for their ability to make structural generalizations (Welleck et al., 2021; Csordás et al., 2021; Ontanón et al., 2021). Syntactic generalizations seem to pose a particularly difficult challenge (Ravfogel et al., 2019; McCoy et al., 2019). Moreover, while NMT often succeeds in translating inter-dependent, linearly distant words, its performance is unstable: the same systems may well fail on other “obvious” cases of the same phenomena (Belinkov and Bisk, 2017; Choshen and Abend, 2019). This evidence motivates efforts such as ours to incorporate linguistic knowledge into the architecture. Syntactic structure has been used to improve various tasks, including code generation (Chakraborty et al., 2018), question answering (Bogin et al., 2020), automatic proof generation (Gontier et al., 2020), language modelling (Wilcox et al., 2020) and grammatical error correction (Harer et al., 2019). Such approaches, however, are task-specific; e.g., the latter makes strong conditional independence assumptions, and is less suitable for MT, where the source and target syntax may diverge considerably. In NMT, some works used structural cues via reinforcement learning (Wieting et al., 2019; Yehudai et al., 2022), but the gain from such methods seems to be constrained by the performance of the pre-trained model (Choshen et al., 2020).
Aharoni and Goldberg (2017a) proposed to replace the source and target tokens with a linearized constituency graph. Nadejde et al. (2017) proposed a similar approach using CCG parses. Eriguchi et al. (2016) proposed an RNN to encode the source syntax. Other works suggested modifications to the RNN to encode source-side syntax (Chen et al., 2017, 2018; Li et al., 2017). Song et al. (2019) used a graph recurrent network to encode source-side AMR structures. A few works suggested changes to the Transformer to incorporate source-side syntax: Nguyen et al. (2020) and Bugliarello and Okazaki (2020) proposed tree-based attention mechanisms to encode source syntax; Zhang et al. (2019) incorporated the first layers of a parser in addition to the source-side token embeddings. Relatedly, previous work showed gains from using syntactic information for preprocessing (Ponti et al., 2018; Zhou et al., 2019a). Much less work has focused on structure-based decoding. Eriguchi et al. (2017), building on Dyer et al. (2016), trained a decoder in a multi-task setting of translation and parsing. Notably, unlike in the method we propose, their generated translation is not constrained by the parse during decoding. A few works proposed alternating between two connected RNNs, one translating and one creating a linearized graph, using a tree-based RNN (Wang et al., 2018) or transition-based parsing (Wu et al., 2017). Gū et al. (2018) both parse and generate, using a recursive RNN representation. Other work changed RNNs (Tai et al., 2015) or Transformers to include structural inductive biases, but without explicit syntactic information. Wang et al. (2019) suggested an unsupervised way to train Transformers that learn tree-like structures, following the intuition that such representations are more similar to syntax. Shiv and Quirk (2019) encoded tree-structured data in the positional embeddings.

## 8 Discussion

The work we presented is motivated from several angles.
First, we note that Transformers are trained in the same way that earlier sequence-to-sequence models (e.g., RNNs) are trained, and to many, they are just a better architecture for the same task. Instead, our work emphasizes the possibility of conditional training using Transformers; namely, Transformers should be able to predict the third token given the first two, even without previously predicting them. Although generally not implemented this way, Transformers are already conditional networks, and allow for flexibility not found in RNNs. The finding that MT quality changes between the beginnings and ends of predicted sentences, both in RNNs and in Transformers (Liu et al., 2016; Zhou et al., 2019b), further motivates conditional translation. This is often explained by a lack of context and disregard for future tokens. Such future context is used by humans (Xia et al., 2017) and can potentially improve NMT (Tu et al., 2016; Mi et al., 2016). Moreover, as the encoded input is constant throughout the prediction, the varying performance is likely due to the decoder. Attending to all predictions from lower layers, as we propose here, aims to provide more of this required information. (Admittedly, for the very first generated tokens, bidirectionality will not help, as there is nothing to attend to.) Finally, previous work investigated the reasons why incorporating source syntax helps RNNs (Shi et al., 2018) and Transformers (Pham et al., 2019; Sachan et al., 2020). These works show evidence that similar gains can be obtained when incorporating either syntactic trees or non-syntactic, syntactically uninformative ones. A hypothesis followed that graph-like architectures are helpful, but that syntactic information is redundant. While GCN creates such an architecture, linearized syntax, arguably Parent, and to some extent the labels component of GCN do not. Still, they yield gains over the vanilla decoder, which challenges this hypothesis.
## 9 Conclusion

We presented a flexible method for constructing decoders capable of outputting trees and graphs. We show that the improved decoder achieves notable gains in syntactic generalization, and in some settings improves overall performance as well. Our proposal is based on two main modifications to the standard Transformer decoder: (1) autoregression on structure; (2) bidirectional attention in the decoder, which allows recomputing token embeddings in light of newly decoded tokens. Testing two variants of the decoder, we find that they both show superior syntactic generalization abilities over the vanilla Transformer, and that the gap does not diminish with model size. The method is flexible enough to allow decoding into a wide variety of graph and tree structures. Our work opens many avenues for future work. One direction would be to focus on conditional networks, training with (intentionally) noisy prefixes, randomly masking “predicted” spans during training (as done in masked language models; Devlin et al., 2019), and data augmentation through hard words or phrases rather than full sentences. Another direction might enhance bidirectionality by allowing “regretting” and changing past predictions. Finally, the work opens possibilities for better incorporating structure into language generators, for incorporating semantic structure, and for enforcing meaning preservation (thus targeting hallucinations; Wang and Sennrich, 2020), by incorporating source and target structure together.

## 10 Acknowledgements

We thank Daniel Lehmann, who helped with some of the analysis. The work was done with the support of the Israel Science Foundation (grant no. 929/17) and the Kamin project.

## References

* Aharoni and Goldberg (2017a) Roee Aharoni and Yoav Goldberg. 2017a. Morphological inflection generation with hard monotonic attention. In _Proc. of ACL_, pages 2004–2015. * Aharoni and Goldberg (2017b) Roee Aharoni and Yoav Goldberg. 2017b.
Towards string-to-tree neural machine translation. In _ACL_. * Avramidis et al. (2020) Eleftherios Avramidis, Vivien Macketanz, Ursula Strohriegel, Aljoscha Burchardt, and Sebastian Möller. 2020. Fine-grained linguistic evaluation for state-of-the-art machine translation. In _Proceedings of the Fifth Conference on Machine Translation_ , pages 346–356. * Barrault et al. (2020) Loïc Barrault, Magdalena Biesialska, Ondrej Bojar, Marta R. Costa-jussà, C. Federmann, Yvette Graham, Roman Grundkiewicz, B. Haddow, Matthias Huck, E. Joanis, Tom Kocmi, Philipp Koehn, Chi kiu Lo, Nikola Ljubesic, Christof Monz, Makoto Morishita, M. Nagata, T. Nakazawa, Santanu Pal, Matt Post, and Marcos Zampieri. 2020. Findings of the 2020 conference on machine translation (wmt20). In _WMT_. * Bastings et al. (2017) Jasmijn Bastings, Ivan Titov, Wilker Aziz, Diego Marcheggiani, and Khalil Simaán. 2017. Graph convolutional encoders for syntax-aware neural machine translation. In _Proc. of EMNLP_. * Belinkov and Bisk (2017) Yonatan Belinkov and Yonatan Bisk. 2017. Synthetic and natural noise both break neural machine translation. _ICLR_ , abs/1711.02173. * Bisazza et al. (2021) Arianna Bisazza, A. Ustun, and Stephan Sportel. 2021. On the difficulty of translating free-order case-marking languages. _ArXiv_ , abs/2107.06055. * Bogin et al. (2020) Ben Bogin, Sanjay Subramanian, Matt Gardner, and Jonathan Berant. 2020. Latent compositional representations improve systematic generalization in grounded question answering. _arXiv preprint arXiv:2007.00266_. * Bojar et al. (2016) Ondřej Bojar, Rajen Chatterjee, Christian Federmann, Yvette Graham, Barry Haddow, Matthias Huck, Antonio Jimeno Yepes, Philipp Koehn, Varvara Logacheva, Christof Monz, et al. 2016. Findings of the 2016 conference on machine translation. In _Proceedings of the First Conference on Machine Translation: Volume 2, Shared Task Papers_ , pages 131–198. * Bugliarello and Okazaki (2020) Emanuele Bugliarello and N. Okazaki. 2020. 
Enhancing machine translation with dependency-aware self-attention. In _ACL_. * Chakraborty et al. (2018) Saikat Chakraborty, Miltiadis Allamanis, and Baishakhi Ray. 2018. Tree2tree neural translation model for learning source code changes. _ArXiv_ , abs/1810.00314. * Chen et al. (2017) Kehai Chen, Rui Wang, Masao Utiyama, Lemao Liu, Akihiro Tamura, Eiichiro Sumita, and Tiejun Zhao. 2017. Neural machine translation with source dependency representation. In _Proc. of EMNLP_. * Chen et al. (2018) Kehai Chen, Rui Wang, Masao Utiyama, Eiichiro Sumita, and Tiejun Zhao. 2018. Syntax-directed attention for neural machine translation. In _Proc. of AAAI_. * Choshen and Abend (2019) Leshem Choshen and Omri Abend. 2019. Automatically extracting challenge sets for non-local phenomena in neural machine translation. In _Proceedings of the 23rd Conference on Computational Natural Language Learning (CoNLL)_ , pages 291–303, Hong Kong, China. Association for Computational Linguistics. * Choshen et al. (2020) Leshem Choshen, Lior Fox, Zohar Aizenbud, and Omri Abend. 2020. On the weaknesses of reinforcement learning for neural machine translation. _ArXiv_ , abs/1907.01752. * Csordás et al. (2021) Róbert Csordás, Kazuki Irie, and Jürgen Schmidhuber. 2021. The devil is in the detail: Simple tricks improve systematic generalization of transformers. _ArXiv_ , abs/2108.12284. * Devlin et al. (2019) Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In _Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)_ , pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics. * Ding et al. (2019) Shuoyang Ding, Adithya Renduchintala, and Kevin Duh. 2019. A call for prudent choice of subword merge operations in neural machine translation. 
In _Proceedings of Machine Translation Summit XVII Volume 1: Research Track_ , pages 204–213, Dublin, Ireland. European Association for Machine Translation. * Don-Yehiya et al. (2022) Shachar Don-Yehiya, Leshem Choshen, and Omri Abend. 2022. Prequel: Quality estimation of machine translation outputs in advance. _arXiv preprint arXiv:2205.09178_. * Dyer et al. (2013) Chris Dyer, Victor Chahuneau, and Noah A. Smith. 2013. A simple, fast, and effective reparameterization of ibm model 2. In _HLT-NAACL_. * Dyer et al. (2016) Chris Dyer, Adhiguna Kuncoro, Miguel Ballesteros, and Noah A. Smith. 2016. Recurrent neural network grammars. In _HLT-NAACL_. * Eriguchi et al. (2016) Akiko Eriguchi, Kazuma Hashimoto, and Yoshimasa Tsuruoka. 2016. Tree-to-sequence attentional neural machine translation. In _Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)_ , pages 823–833, Berlin, Germany. Association for Computational Linguistics. * Eriguchi et al. (2017) Akiko Eriguchi, Yoshimasa Tsuruoka, and Kyunghyun Cho. 2017. Learning to parse and translate improves neural machine translation. _ArXiv_ , abs/1702.03525. * Fernández-González and Gómez-Rodríguez (2018) Daniel Fernández-González and Carlos Gómez-Rodríguez. 2018. Non-projective dependency parsing with non-local transitions. In _Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers)_ , pages 693–700, New Orleans, Louisiana. Association for Computational Linguistics. * Gontier et al. (2020) Nicolas Gontier, Koustuv Sinha, Siva Reddy, and Christopher Pal. 2020. Measuring systematic generalization in neural proof generation with transformers. _arXiv preprint arXiv:2009.14786_. * Gū et al. (2018) Jetic Gū, Hassan S. Shavarani, and Anoop Sarkar. 2018. Top-down tree structured decoding with syntactic connections for neural machine translation and parsing. 
In _Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing_ , pages 401–413, Brussels, Belgium. Association for Computational Linguistics. * Hao et al. (2019) Jie Hao, Xing Wang, Shuming Shi, Jinfeng Zhang, and Zhaopeng Tu. 2019. Multi-granularity self-attention for neural machine translation. In _Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)_ , pages 887–897, Hong Kong, China. Association for Computational Linguistics. * Harer et al. (2019) Jacob Harer, C. Reale, and P. Chin. 2019. Tree-transformer: A transformer-based method for correction of tree-structured data. _ArXiv_ , abs/1908.00449. * Hershcovich et al. (2018) Daniel Hershcovich, Omri Abend, and Ari Rappoport. 2018. Multitask parsing across semantic representations. In _Proc. of ACL_ , pages 373–385. * Hu et al. (2020) Jennifer Hu, Jon Gauthier, Peng Qian, Ethan Wilcox, and Roger Levy. 2020. A systematic assessment of syntactic generalization in neural language models. In _Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics_ , pages 1725–1744. * Jawahar et al. (2019) Ganesh Jawahar, Benoît Sagot, and Djamé Seddah. 2019. What does BERT learn about the structure of language? In _Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics_ , pages 3651–3657, Florence, Italy. Association for Computational Linguistics. * Kingma and Ba (2015) Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. _CoRR_ , abs/1412.6980. * Kipf and Welling (2016) Thomas N. Kipf and Max Welling. 2016. Semi-supervised classification with graph convolutional networks. _CoRR_ , abs/1609.02907. * Koehn et al. (2007) Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, M. Federico, N. Bertoldi, B. Cowan, Wade Shen, C. Moran, R. Zens, Chris Dyer, Ondrej Bojar, A. 
Constantin, and E. Herbst. 2007. Moses: Open source toolkit for statistical machine translation. In _ACL_. * Li et al. (2017) Junhui Li, Deyi Xiong, Zhaopeng Tu, Muhua Zhu, Min Zhang, and Guodong Zhou. 2017\. Modeling source syntax for neural machine translation. In _Proc. of ACL_. * Linzen and Baroni (2020) Tal Linzen and Marco Baroni. 2020. Syntactic structure from deep learning. _Annual Review of Linguistics_ , 7. * Liu et al. (2016) L. Liu, M. Utiyama, A. Finch, and Eiichiro Sumita. 2016. Agreement on target-bidirectional neural machine translation. In _HLT-NAACL_. * Lopez (2008) Adam Lopez. 2008. Statistical machine translation. _ACM Computing Surveys (CSUR)_ , 40:8. * Lui and Baldwin (2012) Marco Lui and Timothy Baldwin. 2012. langid. py: An off-the-shelf language identification tool. In _Proceedings of the ACL 2012 system demonstrations_ , pages 25–30. * Ma et al. (2019) Qingsong Ma, Johnny Wei, Ondřej Bojar, and Yvette Graham. 2019. Results of the WMT19 metrics shared task: Segment-level and strong MT systems pose big challenges. In _Proceedings of the Fourth Conference on Machine Translation (Volume 2: Shared Task Papers, Day 1)_ , pages 62–90, Florence, Italy. Association for Computational Linguistics. * Manning et al. (2020) Christopher D. Manning, Kevin Clark, John Hewitt, Urvashi Khandelwal, and Omer Levy. 2020. Emergent linguistic structure in artificial neural networks trained by self-supervision. _PNAS_. * Marcheggiani et al. (2018) Diego Marcheggiani, Jasmijn Bastings, and Ivan Titov. 2018. Exploiting semantics in neural machine translation with graph convolutional networks. In _Proc. of NAACL_. * McCoy et al. (2019) Tom McCoy, Ellie Pavlick, and Tal Linzen. 2019. Right for the wrong reasons: Diagnosing syntactic heuristics in natural language inference. In _Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics_ , pages 3428–3448. * Mi et al. (2016) Haitao Mi, B. Sankaran, Z. Wang, and Abe Ittycheriah. 2016. 
Coverage embedding models for neural machine translation. In _EMNLP_. * Nadejde et al. (2017) Maria Nadejde, Siva Reddy, Rico Sennrich, Tomasz Dwojak, Marcin Junczys-Dowmunt, P. Koehn, and Alexandra Birch. 2017. Predicting target language ccg supertags improves neural machine translation. In _WMT_. * Nguyen et al. (2020) Xuan-Phi Nguyen, Shafiq R. Joty, S. Hoi, and R. Socher. 2020. Tree-structured attention with hierarchical accumulation. _ArXiv_ , abs/2002.08046. * Nivre (2003) Joakim Nivre. 2003. An efficient algorithm for projective dependency parsing. In _IWPT_. * Nivre et al. (2016) Joakim Nivre, Marie-Catherine de Marneffe, Filip Ginter, Yoav Goldberg, Jan Hajic, Christopher D. Manning, Ryan McDonald, Slav Petrov, Sampo Pyysalo, Natalia Silveira, Reut Tsarfaty, and Daniel Zeman. 2016. Universal dependencies v1: A multilingual treebank collection. In _Proc. of LREC_ , pages 1659–1666. * Oepen et al. (2020) Stephan Oepen, Omri Abend, Lasha Abzianidze, Johan Bos, Jan Hajic, Daniel Hershcovich, Bin Li, Tim O’Gorman, Nianwen Xue, and Daniel Zeman. 2020. MRP 2020: The second shared task on cross-framework and cross-lingual meaning representation parsing. In _Proceedings of the CoNLL 2020 Shared Task: Cross-Framework Meaning Representation Parsing_ , pages 1–22, Online. Association for Computational Linguistics. * Ontanón et al. (2021) Santiago Ontanón, Joshua Ainslie, V. Cvicek, and Zachary Kenneth Fisher. 2021\. Making transformers solve compositional tasks. _ArXiv_ , abs/2108.04378. * Papineni et al. (2002) Kishore Papineni, S. Roukos, T. Ward, and Wei-Jing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In _ACL_. * Pham et al. (2019) Thuong-Hai Pham, Dominik Machácek, and Ondrej Bojar. 2019. Promoting the knowledge of source syntax in transformer nmt is not needed. _Computación y Sistemas_ , 23. * Ponti et al. (2018) Edoardo Maria Ponti, Roi Reichart, Anna Korhonen, and Ivan Vulić. 2018. 
Isomorphic transfer of syntactic structures in cross-lingual nlp. In _Proc. of ACL_ , volume 1. * Popel and Bojar (2018) Martin Popel and Ondřej Bojar. 2018. Training tips for the transformer model. _The Prague Bulletin of Mathematical Linguistics_ , 110(1):43–70. * Popovic (2017) Maja Popovic. 2017. chrf++: words helping character n-grams. In _WMT_. * Ravfogel et al. (2019) Shauli Ravfogel, Y. Goldberg, and Tal Linzen. 2019. Studying the inductive biases of rnns with synthetic variations of natural languages. _ArXiv_ , abs/1903.06400. * Sachan et al. (2020) D. Sachan, Yuhao Zhang, Peng Qi, and W. Hamilton. 2020. Do syntax trees help pre-trained transformers extract information? _ArXiv_ , abs/2008.09084. * Sennrich et al. (2017) Rico Sennrich, Orhan Firat, K. Cho, Alexandra Birch, B. Haddow, Julian Hitschler, Marcin Junczys-Dowmunt, Samuel Läubli, A. Barone, Jozef Mokry, and Maria Nadejde. 2017. Nematus: a toolkit for neural machine translation. In _EACL_. * Sennrich et al. (2016) Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In _Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)_ , pages 1715–1725, Berlin, Germany. Association for Computational Linguistics. * Shi et al. (2018) Haoyue Shi, Hao Zhou, J. Chen, and Lei Li. 2018. On tree-based neural sentence modeling. In _EMNLP_. * Shiv and Quirk (2019) Vighnesh Leonardo Shiv and Chris Quirk. 2019. Novel positional encodings to enable tree-based transformers. In _NeurIPS_. * Song et al. (2019) Linfeng Song, Daniel Gildea, Yue Zhang, Zhiguo Wang, and Jinsong Su. 2019. Semantic neural machine translation using AMR. _TACL_ , 7. * Stanojević and Steedman (2020) Miloš Stanojević and Mark Steedman. 2020. Max-margin incremental CCG parsing. In _Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics_ , pages 4111–4122, Online. 
Association for Computational Linguistics. * Straka (2018) Milan Straka. 2018. Udpipe 2.0 prototype at conll 2018 ud shared task. In _Proceedings of the CoNLL 2018 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies_ , pages 197–207. * Strubell et al. (2018) Emma Strubell, Patrick Verga, Daniel Andor, David Weiss, and Andrew McCallum. 2018\. Linguistically-informed self-attention for semantic role labeling. In _Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing_ , pages 5027–5038, Brussels, Belgium. Association for Computational Linguistics. * Tai et al. (2015) Kai Sheng Tai, R. Socher, and Christopher D. Manning. 2015. Improved semantic representations from tree-structured long short-term memory networks. In _ACL_. * Tiedemann (2012) J. Tiedemann. 2012. Parallel data, tools and interfaces in opus. In _LREC_. * Tu et al. (2016) Zhaopeng Tu, Z. Lu, Y. Liu, X. Liu, and Hang Li. 2016. Modeling coverage for neural machine translation. _arXiv: Computation and Language_. * Vaswani et al. (2017) Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In _Advances in Neural Information Processing Systems_ , pages 5998–6008. * Wang and Sennrich (2020) Chaojun Wang and Rico Sennrich. 2020. On exposure bias, hallucination and domain shift in neural machine translation. In _Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics_ , pages 3544–3552, Online. Association for Computational Linguistics. * Wang et al. (2018) Xinyi Wang, Hieu Pham, Pengcheng Yin, and Graham Neubig. 2018. A tree-based decoder for neural machine translation. _arXiv preprint arXiv:1808.09374_. * Wang et al. (2019) Yau-Shian Wang, Hung yi Lee, and Yun-Nung Chen. 2019. Tree transformer: Integrating tree structures into self-attention. In _EMNLP/IJCNLP_. * Welleck et al. 
(2021) Sean Welleck, Peter West, Jize Cao, and Yejin Choi. 2021. Symbolic brittleness in sequence models: on systematic generalization in symbolic mathematics. _arXiv preprint arXiv:2109.13986_. * Wieting et al. (2019) J. Wieting, Taylor Berg-Kirkpatrick, Kevin Gimpel, and Graham Neubig. 2019. Beyond bleu: Training neural machine translation with semantic similarity. In _ACL_. * Wilcox et al. (2020) Ethan Wilcox, Peng Qian, Richard Futrell, Ryosuke Kohita, Roger Levy, and Miguel Ballesteros. 2020. Structural supervision improves few-shot learning and syntactic generalization in neural language models. _arXiv preprint arXiv:2010.05725_. * Wu et al. (2017) Shuangzhi Wu, Dongdong Zhang, Nan Yang, Mu Li, and Ming Zhou. 2017. Sequence-to-dependency neural machine translation. In _Proc. of ACL_. * Xia et al. (2017) Yingce Xia, Fei Tian, Lijun Wu, Jianxin Lin, T. Qin, N. Yu, and T. Liu. 2017. Deliberation networks: Sequence generation beyond one-pass decoding. In _NIPS_. * Yehudai et al. (2022) Asaf Yehudai, Leshem Choshen, Lior Fox, and Omri Abend. 2022. Reinforcement learning with large action spaces for neural machine translation. In _COLING_. * Zhang et al. (2019) Meishan Zhang, Zhenghua Li, Guohong Fu, and Min Zhang. 2019. Syntax-enhanced neural machine translation with syntax-aware word representations. In _Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)_ , pages 1151–1161, Minneapolis, Minnesota. Association for Computational Linguistics. * Zhang et al. (2018) Xiangwen Zhang, Jinsong Su, Yue Qin, Y. Liu, R. Ji, and Hongji Wang. 2018. Asynchronous bidirectional decoding for neural machine translation. _ArXiv_ , abs/1801.05122. * Zhou et al. (2019a) Chunting Zhou, Xuezhe Ma, Junjie Hu, and Graham Neubig. 2019a. Handling syntactic divergence in low-resource machine translation. In _Proc. of EMNLP-IJCNLP_ , pages 1388–1394. * Zhou et al. 
(2019b) Long Zhou, Jiajun Zhang, and Chengqing Zong. 2019b. Synchronous bidirectional neural machine translation. _Transactions of the Association for Computational Linguistics_ , 7:91–105.

## Appendix A From sequence-to-sequence to conditional

Attention-based models are characterized by being state-less. They can, therefore, be viewed as conditional language models, namely as models for producing a distribution over the next word, given the generated prefix and the source sentence. It is possible to re-encode other information (not only the decoded output) into the decoder at each step, or to predict only tokens of interest, rather than the complete sequence. It is also possible to change the source sentence partially or completely (e.g., adding noise to increase robustness), to condition on additional information (§4), and to adjust this information during prediction (e.g., to force predicted word characteristics). Nevertheless, the standard practice is to only re-encode past predictions. (This is true even in cases of bidirectional generation; e.g., Zhang et al., 2018.)

Unlike RNNs, attention-based models do not inherently rely on past predictions in terms of inputs, weights and gradients. The only connection to past predictions is mediated through their re-encoding back into the decoder. RNNs receive past states as inputs. Backpropagation through time sees the current network as connected to the previous networks supplying the state input. Thus, the gradients take into account past predictions as well. In contrast, Transformers have gradients over representations of past words only if they are fed into the network. Unlike backpropagation through time, the preceding tokens can be changed, or even omitted (e.g., in a limited window size scenario). Specifically, in our case, preceding tokens may have different representations at each generation step.
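The re-encoding described above can be sketched as a decoding loop in which no state is carried across steps. This is a minimal illustration only (not the paper's actual NEMATUS-based implementation); `decoder` stands in for a Transformer decoder with bidirectional self-attention, returning one representation per prefix token, and `score_fn` maps the last representation to vocabulary scores:

```python
def greedy_decode(decoder, score_fn, bos, eos, max_len=10):
    """Greedy decoding where the whole prefix is re-encoded at every step.

    Because nothing is carried over between steps, earlier tokens can
    receive fresh representations in light of later ones (bidirectional
    decoder self-attention), unlike an RNN's fixed recurrent state.
    """
    prefix = [bos]
    for _ in range(max_len):
        # Re-encode the entire prefix from scratch at this step.
        states = decoder(prefix)
        scores = score_fn(states[-1])      # scores over the vocabulary
        next_token = max(scores, key=scores.get)
        prefix.append(next_token)
        if next_token == eos:
            break
    return prefix
```

A toy `decoder` that simply echoes the prefix suffices to exercise the loop; in a real model both callables would be neural modules.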
To sum up, the representation is updated to provide a good representation for the current step, but it is not calculated over the actual network of the previous step. It is often the case, though, that the previously decoded words are auto-regressed and hence updated. This architecture, therefore, allows more flexibility than RNNs. Still, Transformers are often thought of as an extension of RNNs, i.e., as sequence-to-sequence models. For that reason it is rare to find changes to the training schedule that incorporate more knowledge, change "past" information, or translate only parts of a sentence with a network. With such methods, for example, one can dynamically force features of the next prediction (by a changing input) or augment learning by teaching the network only over hard cases. Such an approach may choose augmented data in the usual way, but stop the prediction at the part of the sentence one wishes the network to learn, or even teach it several alternatives with the same prefix.

## Appendix B Experimental Setup

The code is adapted from the NEMATUS code repository (Sennrich et al., 2017) and will be released upon publication. All hyperparameters are either taken from the original suggestions or optimized for the vanilla Transformer and used as is for our suggested models. Networks are all trained with batch size 128, embedding size 256, 4 decoder and encoder blocks, 8 attention heads (one of which might be a parent head, §4), 90K steps (where some saturation is empirically reached; this makes for a relatively fair comparison, Popel and Bojar, 2018), learning rate $1e^{-4}$, 4K warm-up steps, and the Adam (Kingma and Ba, 2015) optimizer with betas of 0.9 and 0.999 for the first and second moments and an epsilon of $1e^{-8}$. We use the standard (structure-unaware) Transformer encoder in all our experiments. Each model was trained on 4 NVIDIA Tesla M60 or RTX 2080Ti GPUs for approximately a week (2 for the GCN architecture), and large models on RTX6000 GPUs.
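Only the peak learning rate ($1e^{-4}$) and the warm-up length (4K steps) are stated above; the shape of the schedule is not. A common choice, assumed here purely for illustration, is linear warm-up to the peak rate followed by inverse-square-root decay:

```python
def lr_schedule(step, peak_lr=1e-4, warmup_steps=4000):
    """Assumed schedule: linear warm-up to peak_lr, then inverse-sqrt decay.

    The paper only specifies the peak rate and the number of warm-up
    steps; the decay shape here is a conventional guess, not a quote.
    """
    if step < warmup_steps:
        return peak_lr * step / warmup_steps   # linear ramp-up
    return peak_lr * (warmup_steps / step) ** 0.5
```

For example, the rate is half the peak both midway through warm-up (step 2000) and at four times the warm-up length (step 16000).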
Preprocessing includes truecasing, tokenization as implemented by Moses (Koehn et al., 2007), and byte pair encoding (Sennrich et al., 2016) without tying. Empty source or target sentences were dropped. In training, the maximum target sentence length is 40 non-transition (BPE) tokens. We used UDPipe models for English and German trained on UD 2.0, and for Russian on UD 2.5 (SynTagRus). In unreported trials, we found that whenever noisy, crawled data is used, filtering is crucial for even the baselines to show reasonable results. On full En-Ru (see §6.2), we filter unexpected languages with langID (Lui and Baldwin, 2012) and improbable alignments ($p<-180$) with FastAlign (Dyer et al., 2013). Overall, about half the sentences were filtered by those measures or by length. There were 4,066,323 En-De training sentences after filtering and 4,468,840 before. In En-Ru, there were 19,557,568 after and 37,948,456 before. The English challenge set sizes on books and news are, respectively, 1,188 and 11 for reflexive, 3,953 and 17 for particle, and 191 and 8 for preposition stranding; the German ones are 2,628 and 261 for reflexive and 7,584 and 232 for particle. WMT dev and test sets are always about 3K sentences in size.

We use chrF++.py with 1 word and a beta of 3 to obtain the chrF+ (Popovic, 2017) score as in WMT19 (Ma et al., 2019), and detokenized BLEU (Papineni et al., 2002) as implemented in Moses. We use two automatic metrics: BLEU as the standard measure and chrF+ as it was shown to better correlate with human judgments, while still being simple and understandable (Ma et al., 2019). Both metrics rely on n-gram overlap between the system output and the reference, where BLEU focuses on word precision, and chrF+ balances precision and recall and includes character as well as word n-grams.

#### Transitions.

We made two practical choices when creating the transition graph. First, we deleted the root edge, as the root is not a word in the translation. Second, we train only on projective parses.
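A dependency parse is projective exactly when no two arcs cross when drawn above the sentence. A minimal check (hypothetical helper, not from the released code; `heads[i]` gives the 0-indexed head of token `i`, with `-1` marking the root):

```python
def is_projective(heads):
    """Return True iff the dependency parse has no crossing arcs."""
    # Each arc is stored as (left endpoint, right endpoint).
    arcs = [(min(i, h), max(i, h))
            for i, h in enumerate(heads) if h >= 0]
    for (l1, r1) in arcs:
        for (l2, r2) in arcs:
            # Two arcs cross if exactly one endpoint of the second
            # lies strictly inside the first.
            if l1 < l2 < r1 < r2:
                return False
    return True
```

The quadratic scan is fine at filtering time, since sentences are short relative to corpus size.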
This choice reduces noise due to the low reliability of current non-projective parsers (Fernández-González and Gómez-Rodríguez, 2018), while not losing many training sentences. We do note, however, that this choice is not without its risks: it might be less fitting for some languages in which non-projective sentences are common.

The transitions serve as part of the NMT vocabulary. There are 45 labels and two directions of connection, summing up to 90 new tokens. This hardly affects the standard vocabulary size, which usually consists of tens of thousands of tokens (Ding et al., 2019). We treat token and transition predictions in the same way, and do not rescale their scores as done in Stanojević and Steedman (2020). If anything, the need to memorize more should hurt performance, so any increased performance comes despite enlarging the vocabulary, not because of it. It is possible instead to split the tokens into directions and labels (summing to 47 new tokens), but this comes at the cost of lengthy sentences, which increase training time and memory consumption. We did not experiment with other methods for encoding the transitions (e.g., embedding labels and edges separately).

## Appendix C Mixup challenge

We follow the finding of Bisazza et al. (2021) that Transformers are able to learn free-order languages, given case markings. Given those findings, we test whether Transformers are indeed robust to mixing the word order where case marking exists. To do that, we take lists of nouns and verbs from which to create simple sentences. We create three types of sentences, validated by an in-house annotator who is a native German speaker to be correct and to convey the same meaning in both orders: ones with both arguments case-marked, such as den Ball bringt der Hund (lit. the dog brings the ball); ones with only the subject marked: das Pferd drängt der Hund (the dog urges the horse); and ones with only the object.
The three lists of sentences are:

* Den {Ball, Stein, Tisch, Hamster} {bringt, wirft, drückt} {das Kind, die Mutter, das Mädchen}
* Das {Pferd, Kind, Mädchen} {drängt, drückt, zieht} der {Vater, Hund, Student}
* Den {Ball, Stein, Tisch, Hamster} {bringt, wirft, drückt} der {Vater, Hund, Student}

Then, we switch the object and subject and calculate how often the translation is still correct in terms of argument placement. We disregard other errors, such as the choice of verb in English. Interestingly, as seen in the results section (§E), both networks are quite bad at it (although the syntactic variant is better).

| | Vanilla | Parent |
|---|---|---|
| Object | 6 | 6 |
| Subject | 5 | 8 |
| Both | 10 | 13 |

Table 3: Number of sentences where the rare German order (OVS) was still translated correctly. Rows indicate which argument had unambiguous case marking.

## Appendix D Results with the Large Models

We include the full results over the two larger models, Parent and Vanilla. While overall results are comparable, Parent consistently performs better on the challenge sets, often with large margins.

| | Prep. Stranding (Books) | Prep. Stranding (News) | Particle (Books) | Particle (News) | Reflexive (Books) | Reflexive (News) |
|---|---|---|---|---|---|---|
| Vanilla | 8.70 / 33.58 | 13.82 / 43.41 | 8.59 / 32.66 | 15.28 / 44.28 | 8.54 / 32.85 | 18.90 / 45.82 |
| Parent | 9.03 / 34.83 | 11.53 / 45.12 | 8.59 / 33.71 | 14.99 / 45.90 | 9.05 / 34.11 | 20.79 / 46.73 |

Table 4: Source challenge sets for En-De translation of large models (each cell: BLEU / chrF+). Parent outperforms the Vanilla.

| | 2013 | 2014 | 2015 | Average |
|---|---|---|---|---|
| Vanilla | 23.64 / 53.44 | 21.94 / 53.13 | 21.60 / 50.84 | 22.39 / 52.47 |
| Parent | 23.56 / 54.08 | 22.11 / 53.77 | 20.69 / 49.16 | 22.12 / 52.34 |

Table 5: Test sets for En-De translation of large models (each cell: BLEU / chrF+).
| | Particle (Books) | Particle (News) | Reflexive (Books) | Reflexive (News) |
|---|---|---|---|---|
| Vanilla | 4.14 / 20.72 | 20.31 / 49.04 | 8.08 / 32.38 | 20.65 / 49.09 |
| Parent | 8.37 / 33.78 | 20.54 / 49.99 | 8.60 / 33.49 | 21.39 / 50.01 |

Table 6: Target challenge sets for En-De translation of large models (each cell: BLEU / chrF+). Parent outperforms the Vanilla.

## Appendix E Additional Results

We include here the full results, including ablations that were omitted from the paper due to space considerations. For ease of comparison we also split them by challenge direction (source, Table 7; target, Table 8). Note that improvements in the syntactic aspect could also be seen in the ablations (not reported in the main paper). Moreover, BiTrans improves over the Vanilla even as a standalone architecture.

| | Particle (Books) | Particle (News) | Reflexive (Books) | Reflexive (News) |
|---|---|---|---|---|
| Vanilla | 7.15 / 27.66 | 17.79 / 44.91 | 6.83 / 26.84 | 19.68 / 45.06 |
| Parent | 7.82 / 28.43 | 19.66 / 46.32 | 7.49 / 27.70 | 20.97 / 47.07 |
| GCN | 7.32 / 27.67 | 20.13 / 46.77 | 7.11 / 27.16 | 20.68 / 47.15 |
| BiTrans | 7.02 / 27.60 | 18.58 / 45.09 | 6.80 / 26.90 | 19.87 / 45.89 |
| Linearized | 7.44 / 28.05 | 19.20 / 46.21 | 7.27 / 27.43 | 20.25 / 46.92 |
| - Gates | 7.62 / 28.23 | 19.71 / 46.36 | 7.38 / 27.65 | 20.74 / 47.19 |
| - Labels | 7.75 / 28.60 | 19.01 / 46.51 | 7.44 / 27.90 | 20.81 / 47.32 |

(a) Syntactic source challenge sets for De-En

| | Prep. Stranding (Books) | Prep. Stranding (News) | Particle (Books) | Particle (News) | Reflexive (Books) | Reflexive (News) |
|---|---|---|---|---|---|---|
| Vanilla | 5.95 / 25.88 | 9.96 / 36.96 | 5.37 / 24.69 | 9.39 / 39.19 | 5.32 / 24.71 | 16.48 / 42.04 |
| Parent | 6.21 / 28.12 | 11.17 / 41.13 | 5.47 / 25.74 | 11.93 / 41.24 | 5.71 / 26.22 | 15.56 / 42.76 |
| GCN | 6.21 / 27.27 | 11.31 / 40.48 | 5.51 / 25.53 | 10.35 / 39.83 | 5.46 / 25.70 | 16.45 / 43.03 |
| BiTrans | 5.30 / 26.38 | 10.56 / 38.05 | 6.07 / 26.08 | 10.21 / 39.48 | 5.77 / 26.01 | 13.91 / 37.74 |
| Linearized | 5.99 / 26.90 | 8.86 / 39.12 | 5.24 / 25.42 | 10.45 / 39.56 | 5.48 / 25.47 | 14.94 / 42.15 |
| - Gates | 5.29 / 25.86 | 11.64 / 40.51 | 5.30 / 25.03 | 10.01 / 38.64 | 5.31 / 25.41 | 12.08 / 37.00 |
| - Labels | 5.83 / 27.05 | 8.62 / 38.33 | 5.41 / 25.62 | 11.98 / 41.79 | 5.42 / 25.67 | 16.55 / 41.65 |

(b) Syntactic source challenge sets for En-De

Table 7: Results on the syntactic source challenge sets (each cell: BLEU / chrF+), both on the large challenges from the book domain and the smaller ones from news. Models include the Vanilla and Bidirectional Transformer (BiTrans) baselines and the GCN and Parent syntactic variants. Ablated models include the Vanilla with linearized syntax (Linearized), GCN without labels or gating (-Gates) and GCN without labels (-Labels). Among the baselines, BiTrans is better. It is inconclusive which syntactic method is best, but both are significantly superior to the baselines.

| | Prep. Stranding (Books) | Prep. Stranding (News) | Particle (Books) | Particle (News) | Reflexive (Books) | Reflexive (News) |
|---|---|---|---|---|---|---|
| Vanilla | 6.38 / 27.30 | 9.18 / 38.22 | 6.53 / 25.70 | 10.54 / 38.28 | 6.15 / 25.94 | 17.20 / 43.12 |
| Parent | 7.59 / 27.87 | 10.81 / 39.22 | 7.07 / 26.50 | 9.72 / 39.57 | 6.82 / 26.58 | 17.56 / 44.00 |
| GCN | 6.33 / 26.60 | 10.14 / 41.00 | 6.69 / 26.16 | 10.60 / 39.81 | 6.33 / 25.83 | 20.16 / 44.19 |
| BiTrans | 6.75 / 27.44 | 8.92 / 37.76 | 6.29 / 25.69 | 10.77 / 39.15 | 6.24 / 25.93 | 17.22 / 43.96 |
| Linearized | 6.79 / 27.46 | 7.79 / 39.62 | 6.55 / 25.96 | 12.95 / 40.78 | 6.56 / 26.28 | 16.38 / 43.76 |
| - Gates | 6.89 / 27.31 | 10.46 / 40.80 | 6.53 / 26.26 | 12.45 / 40.70 | 6.62 / 26.50 | 15.97 / 43.10 |
| - Labels | 7.05 / 27.51 | 9.89 / 38.24 | 6.98 / 26.42 | 12.83 / 40.18 | 6.62 / 26.65 | 18.90 / 46.59 |

(a) Syntactic target challenge sets for De-En

| | Particle (Books) | Particle (News) | Reflexive (Books) | Reflexive (News) |
|---|---|---|---|---|
| Vanilla | 5.40 / 25.84 | 16.24 / 43.22 | 5.12 / 24.94 | 16.47 / 42.71 |
| Parent | 5.52 / 26.96 | 16.19 / 44.83 | 5.37 / 26.31 | 16.86 / 44.30 |
| GCN | 5.60 / 26.74 | 15.57 / 43.23 | 5.34 / 25.91 | 16.44 / 43.52 |
| BiTrans | 5.81 / 26.79 | 15.84 / 43.25 | 5.43 / 25.88 | 16.33 / 42.44 |
| Linearized | 5.32 / 26.30 | 15.69 / 43.77 | 5.07 / 25.57 | 16.19 / 43.07 |
| - Gates | 5.31 / 26.21 | 15.49 / 43.45 | 5.01 / 25.30 | 15.67 / 43.13 |
| - Labels | 5.56 / 26.55 | 15.78 / 43.96 | 5.24 / 25.67 | 16.80 / 43.65 |

(b) Syntactic target challenge sets for En-De

Table 8: Results on the syntactic target challenge sets (each cell: BLEU / chrF+), both on the large challenges from the book domain and the smaller ones from news. Models include the Vanilla and Bidirectional Transformer (BiTrans) baselines and the GCN and Parent syntactic variants. Ablated models include the Vanilla with linearized syntax (Linearized), GCN without labels or gating (-Gates) and GCN without labels (-Labels). Among the baselines, BiTrans is better. It is inconclusive which syntactic method is best, but both are significantly superior to the baselines.
| | 2013 | 2014 | 2015 | Average |
|---|---|---|---|---|
| Vanilla | 17.61 / 45.54 | 18.23 / 47.29 | 19.57 / 47.50 | 18.47 / 46.78 |
| BiTrans | 17.64 / 45.66 | 18.34 / 47.53 | 19.33 / 47.61 | 18.44 / 46.93 |
| Parent | 18.11 / 46.75 | 18.60 / 48.46 | 20.55 / 49.20 | 19.09 / 48.14 |
| GCN | 18.03 / 46.43 | 18.86 / 48.46 | 20.32 / 48.90 | 19.07 / 47.93 |
| Linearized | 17.71 / 46.07 | 18.39 / 47.69 | 19.81 / 48.36 | 18.64 / 47.37 |
| - Gates | 17.81 / 46.12 | 18.43 / 48.08 | 20.06 / 48.62 | 18.77 / 47.61 |
| - Labels | 17.98 / 46.40 | 18.77 / 48.29 | 19.96 / 48.73 | 18.90 / 47.80 |

(a) Test sets for En-De translation

| | 2013 | 2014 | 2015 | Average |
|---|---|---|---|---|
| Vanilla | 21.51 / 48.20 | 21.40 / 48.46 | 21.44 / 48.13 | 21.45 / 48.26 |
| BiTrans | 21.63 / 48.48 | 21.42 / 48.86 | 21.38 / 48.54 | 21.48 / 48.63 |
| Parent | 22.46 / 49.24 | 21.75 / 49.41 | 22.14 / 49.31 | 22.12 / 49.32 |
| GCN | 22.33 / 49.27 | 21.76 / 49.71 | 22.43 / 49.73 | 22.17 / 49.57 |
| Linearized | 21.95 / 49.27 | 21.83 / 49.79 | 22.20 / 49.70 | 21.99 / 49.59 |
| - Gates | 22.28 / 49.33 | 21.89 / 49.68 | 22.04 / 49.39 | 22.07 / 49.46 |
| - Labels | 22.21 / 49.46 | 21.75 / 49.73 | 22.26 / 49.57 | 22.07 / 49.59 |

(b) Test sets for De-En translation

Table 9: En-De and De-En results on newstest 2013-15 (each cell: BLEU / chrF+). Ablated models include the Transformer decoder with linearized syntax (Linearized), GCN without labels or gating (-Gates) and GCN without labels (-Labels). The syntactic variants consistently outperform the vanilla and ablated variants, and the Bidirectional Transformer (BiTrans) slightly outperforms the Vanilla Transformer.

### E.1 Noisy data

Table 10 presents the two En-Ru result tables side by side for ease of comparison: one trained on the larger, noisy Russian training set and one on the cleaner news set.
| | 2013 | | 2014 | | 2015 | | Average | |
|---|---|---|---|---|---|---|---|---|
| | BLEU | chrF+ | BLEU | chrF+ | BLEU | chrF+ | BLEU | chrF+ |
| Vanilla | 13.20 | 38.72 | 17.17 | 43.69 | 14.19 | 40.87 | 14.85 | 41.09 |
| BiTrans | 13.13 | 39.10 | 17.63 | 44.63 | 14.59 | 41.52 | 15.12 | 41.75 |
| GCN | 13.25 | 40.31 | 17.86 | 46.09 | 15.38 | 43.09 | 15.50 | 43.16 |
| Parent | 13.61 | 40.67 | 18.53 | 46.44 | 15.75 | 43.57 | 15.96 | 43.56 |

(a) Test sets for En-Ru translation trained on news data

| | 2013 | | 2014 | | 2015 | | Average | |
|---|---|---|---|---|---|---|---|---|
| | BLEU | chrF+ | BLEU | chrF+ | BLEU | chrF+ | BLEU | chrF+ |
| Vanilla | 16.84 | 44.28 | 20.12 | 47.7 | 14.74 | 40.92 | 17.23 | 44.30 |
| BiTrans | 16.84 | 44.46 | 20.61 | 48.17 | 14.79 | 41.05 | 17.41 | 44.56 |
| GCN | 17.11 | 45.55 | 20.29 | 48.67 | 14.6 | 41.63 | 17.33 | 45.28 |
| Parent | 16.8 | 45.42 | 20.2 | 48.95 | 14.59 | 41.73 | 17.20 | 45.37 |

(b) Test sets for En-Ru translation trained on noisy data

Table 10: En-Ru results on newstest 2013-15 trained on clean (top) or noisy (bottom) data. Models include the Vanilla Transformer, the Bidirectional Transformer (BiTrans) and the syntactic variants. The syntactic variants improve over all datasets and on average.
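The "Average" columns in these tables are plain means of the three yearly scores. As a quick sanity check (not part of the original experiments; the scores below are transcribed from Table 10a), the reported averages can be reproduced:

```python
# Verify that the reported "Average" BLEU values in Table 10(a) are the
# plain means of the 2013-2015 scores (values transcribed from the table).
bleu = {
    "Vanilla": [13.20, 17.17, 14.19],
    "BiTrans": [13.13, 17.63, 14.59],
    "GCN":     [13.25, 17.86, 15.38],
    "Parent":  [13.61, 18.53, 15.75],
}
reported = {"Vanilla": 14.85, "BiTrans": 15.12, "GCN": 15.50, "Parent": 15.96}

for model, scores in bleu.items():
    avg = round(sum(scores) / len(scores), 2)
    assert abs(avg - reported[model]) <= 0.01, (model, avg)
```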
# Gas flow in Martian spider formation Nicholas Attree (corresponding author:<EMAIL_ADDRESS>Erika Kaufmann, Axel Hagermann. Faculty of Natural Sciences, University of Stirling, UK; now at Institute for Space Research Graz, Austrian Academy of Sciences, Austria; Luleå University of Technology, Space Campus Kiruna, Sweden. ###### Abstract Martian araneiform terrain, located in the Southern polar regions, consists of features with central pits and radial troughs which are thought to be associated with the solid state greenhouse effect under a CO2 ice sheet. Sublimation at the base of this ice leads to gas buildup, fracturing of the ice and the flow of gas and entrained regolith out of vents and onto the surface. There are two possible pathways for the gas: through the gap between the ice slab and the underlying regolith, as proposed by Kieffer [2007], or through the pores of a permeable regolith layer, which would imply that regolith properties can control the spacing between adjacent spiders, as suggested by Hao et al. [2019]. We test this hypothesis quantitatively in order to place constraints on the regolith properties. Based on previously estimated flow rates and thermophysical arguments, we suggest that there is insufficient depth of porous regolith to support the full gas flow through the regolith. By contrast, free gas flow through a regolith–ice gap is capable of supplying the likely flow rates for gap sizes on the order of a centimetre. This size of gap can be opened in the centre of a spider feature by gas pressure bending the overlying ice slab upwards, or by levitating it entirely as suggested in the original Kieffer [2007] model. Our calculations therefore support at least some of the gas flowing through a gap opened between the regolith and ice. Regolith properties most likely still play a role in the evolution of spider morphology, by regolith cohesion controlling the erosion of the central pit and troughs, for example. 
###### keywords: Mars; Mars, surface; Mars, polar geology: ices ††journal: Icarus ## 1 Introduction Araneiform terrain consists of groups of so–called spiders; radial, dendritic arrangements of troughs connecting to central pits, found at southern high latitudes of Mars [Kieffer, 2000]. Their occurrence was thought to be limited to the South Polar Layered Deposits (SPLD). Spiders were also discovered outside the area of the SPLD, although the extent of the SPLD was subsequently re–defined. Tanaka et al. [2014] or Schwamb et al. [2018] provide more insight into the geography of spiders and the SPLD. These features are perennial, visible all year round, scoured up to several metres deep into the regolith, and can be tens to hundreds of metres in size [Hansen et al., 2010]. A slab of CO2 ice begins to form in the autumn and covers this terrain during the southern winter and spring [Kieffer, 2000]. Based on this hypothesis, Piqueux et al. [2003] were the first to define and map the distribution of the spider features, confirming the correlation of spiders with the existence of highly transparent CO2 slab ice, which overlies the largely unconsolidated particulate regolith of the South Polar Layered Deposits. Over the years, what is now commonly referred to as the Kieffer model has undergone extensive extensions and refinements, but there is general consensus that spiders are formed by gas flowing under the CO2 ice [Piqueux et al., 2003, Kieffer et al., 2006, Kieffer, 2007, Piqueux and Christensen, 2008, Portyankina et al., 2010, Thomas et al., 2010, Pilorget et al., 2011, Thomas et al., 2011b, Martínez et al., 2012, Pilorget and Forget, 2016]. Sub–ice gas production is enabled by the transparency of $CO_{2}$ ice (see e.g. Hansen [2005] and references therein). The ice slab allows spring–time insolation to penetrate to its base and warm it in a solid-state greenhouse, a phenomenon similar to the atmospheric greenhouse effect [Brown and Matson, 1987, Fanale et al., 1990]. 
This means that sunlight can penetrate the CO2 ice layer down to a dust deposit at depth within or below the ice, where the radiation is absorbed. The temperature increase here leads to sublimation of the CO2 ice on the boundary between the ice layer and the underlying material. Sublimating gas then builds up until the pressure fractures the ice, opening a vent to the surface, followed by gas flow from the surrounding area into the vent and ejection. Regolith particles become entrained in the flow, being ejected along with the gas, whilst also eroding the troughs and pits [Thomas et al., 2010]. Larger and heavier grains end up forming dark fans on top of the ice. These can be more radial or unidirectional, depending on local surface winds carrying the material over the top of the ice sheet, whilst the smallest and lightest grains are ejected into the atmosphere [Portyankina et al., 2010]. This process is thought to repeat seasonally, so spiders play an important role in Mars’ global dust cycle and therefore the planet’s climate (see also [Kieffer, 2000, Piqueux et al., 2003]). Over time, spiders can effect substantial morphological change, and they represent a type of $CO_{2}$–sublimation–associated erosion characteristic of Mars. In fact, most morphological changes on Mars are considered to be $CO_{2}$ related today [Piqueux and Christensen, 2008, Hansen, 2005]. As for the gas trapped underneath the ice slab, there are several escape routes to consider. In the commonly accepted Kieffer [2007] model, gas pressure physically lifts the ice slab, levitating at least part of the overlying slab and allowing gas to flow through an ice–regolith gap. This allows a radially converging flow to a central vent to drain a large area of sublimating ice through only a small ($\sim$cm thick) gap. However, we need to bear in mind the porous nature of Mars’ regolith, which might permit gas to flow within the substrate underlying the ice. 
The importance of the latter gas escape route has recently been stressed by Hao et al. [2019], who established a link between spider spatial density and regolith morphology. They observed that spiders are grouped at mutual distances smaller than that of random spacing, suggesting a controlling length–scale whereby a spider inhibits the formation of new spiders out to some distance from itself. Since this distance was seen to vary between different classes of spider morphology, Hao et al. [2019] proposed that it is controlled by the regolith properties, e.g. that different regolith porosities/permeabilities constrain the gas flow by different amounts, and therefore the radius around each spider where gas can build up. Evidence for the relevance of regolith morphology was observed by Hao et al. [2020]. These recent results highlight that the potential effectiveness of gas flow through the regolith itself — rather than through a regolith–ice gap — has to be addressed because it seems vital for our understanding of spiders. We approach the problem by providing suitable limits on the effectiveness of the underground flow of gas, based on constraints on regolith properties such as porosity/permeability combined with some known gas physics. Regolith porosity is a function of depth and therefore our treatment of the problem has to involve some stratigraphic considerations. We derive the depth of porous regolith needed to support the magnitudes of flow rate previously calculated in hydrodynamics simulations [Thomas et al., 2011a, b] and energy balance models [Kieffer, 2007, Pilorget et al., 2011, Pilorget and Forget, 2016]. We then compare this to the required gap size to support laminar free–flow between the ice and regolith, and to the expected bending of the ice sheet in response to the rise in gas pressure beneath. We present the calculations in Section 2, before discussing the results and their implications in Section 3, and concluding in Section 4. 
## 2 Modelling The most common model for spider formation is illustrated in Figure 1; CO2 slab ice condenses on top of the regolith over the Martian winter. When insolation begins to penetrate this transparent ice layer in spring, CO2 ice at the base of the slab and top of the regolith starts to sublimate, raising the gas pressure to the point where it exceeds the strength of the ice slab and fracturing occurs. In the model of Kieffer [2007], this gas pressure levitates the slab above the regolith surface by a distance of $d_{gap}$, estimated at around one or two centimetres. Gas can also diffuse into the permeable part of the regolith [Hao et al., 2019], which extends down to some depth $d$ (also on the order of centimetres [Kieffer, 2007, Portyankina et al., 2010]), below which the regolith is made impermeable by water ice filling the pore spaces. See Mellon et al. [2004] for a detailed discussion of the geography of ground ice on Mars. Irrespective of whether the gas flows above or through the regolith to begin with, once fracturing has opened a vent to the surface, a pressure gradient across the radius of the spider drives gas flow to it, either as free–flow through the gap or viscous flow through the porous regolith. In the latter case, flow is limited by the pressure gradient and the regolith permeability, and we estimate these below. Figure 1: Conceptual model for spider formation based on a layer of transparent CO2 ice of thickness $z$, overlying permeable regolith of thickness $d$, above an impermeable ice–rich regolith layer (shaded dark). Gas pressure in the regolith increases from near–atmospheric pressure around the spider vent to a maximum value at the edge of the spider. Some models, such as those by Hao et al. [2019] and Pilorget et al. [2011], assume no gap between ice and regolith, $d_{gap}\to 0$. See section 2.1 for a detailed description of parameters used. 
### 2.1 Gas pressure The sublimation rate at the bottom of the ice slab controls both the pressure build-up and the flow rate into the spider vent. Upon initial fracturing there may be a very energetic flow, driven by the large initial pressure gradient, followed by a reduction to a steady state balance between gas production and ejection. Both these cases were simulated with a commercial hydrodynamics code in Thomas et al. [2011a] and Thomas et al. [2011b]. The gas production rate is estimated by comparing the amount of energy deposited by insolation at the ice-regolith boundary with the latent heat of sublimation of CO2. Kieffer [2007] assumed a perfectly transparent ice slab and calculated an area gas production rate of $\dot{m}=1.157\times 10^{-4}$ kg m-2 s-1. Thomas et al. [2011b] meanwhile assumed $25\%$ of the solar energy penetrates a $z=20$ cm thick slab (based on Pilorget et al. [2011]) and arrived at $\dot{m}=2.21\times 10^{-5}$ kg m-2 s-1. This constant gas production during spring–time daylight hours cannot escape and swiftly fills up all available sub-ice volume. Gas pressure will at this point be equal to the cryostatic pressure of $P=P_{surf}+\rho_{ice}~{}gz$, where $g=3.72$ m s-2 is the Martian gravity and $\rho_{ice}=1606$ kg m-3 is the solid CO2 ice density. Taking $z$ as 0.7 m, as in Pilorget et al. [2011], and $P_{surf}\approx 600$ Pa, results in sub-ice pressures of $P_{cryo}\approx 4700$ Pa. This pressure will continue building until the stress in the ice slab above exceeds its rupture strength and it fractures. During the initial rupture, Thomas et al. [2011a] simulated a total flow rate of $Q_{i}=0.05-0.5$ kg s-1 draining a 400 square metre area into a central vent. Subsequently, ejection of gas reduces the pressure gradient and, in a steady state, the 400 square metre area will produce a total flow some $\sim 5$ times smaller: $Q_{s}=0.00884$ kg s-1 for $\dot{m}=2.21\times 10^{-5}$ kg m-2 s-1. 
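The cryostatic pressure and the steady-state flow rate quoted above follow directly from the stated constants; a minimal sketch (not the authors' code, using only values given in the text):

```python
# Cryostatic pressure under the CO2 slab and the steady-state mass flow
# from a 400 m^2 gas production area (all constants from the text).
g = 3.72            # Martian gravity, m s^-2
rho_ice = 1606.0    # solid CO2 ice density, kg m^-3
z = 0.7             # slab thickness, m
P_surf = 600.0      # surface pressure, Pa

P_cryo = P_surf + rho_ice * g * z   # cryostatic pressure, Pa
mdot = 2.21e-5                      # area gas production rate, kg m^-2 s^-1
Q_s = mdot * 400.0                  # steady-state total flow, kg s^-1

print(round(P_cryo), round(Q_s, 5))  # -> 4782 0.00884
```

The printed values reproduce the $P_{cryo}\approx 4700$ Pa and $Q_{s}=0.00884$ kg s-1 quoted in the text.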
Hydrodynamics simulations of these magnitudes of flow rate produce gas and dust ejection velocities of $\sim 10$ ms-1, consistent with the erosion of regolith and the spread of fine material in fans on the surface [Thomas et al., 2011a, b]. A gas production area of 400 square metres corresponds to a circular gas production radius of only $r_{P}=11.3$ m, somewhat smaller than the 55 m separation (spider radius $R\sim 25$ m) between densely packed spiders found in Hao et al. [2019]. Gas flows here therefore represent lower limits; larger spiders will have larger mass flow rates, but the ice breaking pressure will remain the same. The critical breaking pressure can be estimated using the fracturing model of Portyankina et al. [2010], which is based on an engineering model of the bending of a thin sheet, as shown in Figure 2. Portyankina et al. [2010] give the maximum stress at the middle of a sheet of radius $R$, and thickness $z$, when subjected to a constant pressure $P$ over a radius of $r_{P}$. Recently, Kaufmann et al. [2020] have measured the yield strength of CO2 ice at the relevant temperature as $\sigma=12.3$ MPa, and so we can set the stress equal to this value and solve for the critical gas pressure needed to cause fracturing $P_{crit}=\frac{8\sigma z^{2}}{3r_{P}^{2}}\left[4-(1-\nu)\frac{r_{P}^{2}}{R^{2}}+4(1+\nu)\ln\frac{R}{r_{P}}\right]^{-1}.$ (1) Here, $\nu=0.544$ is the Poisson’s ratio of CO2 ice [Yamashita and Kato, 1997]. It is assumed that the pressure builds up and is constant over a length–scale $r_{P}$, comparable to the spider radius $R$ (see Fig. 2). We favour a ratio of $r_{P}/R$ equal to one, in contrast to the value of 0.25 as used by Portyankina et al. [2010]. We think this is more realistic because gas pressure should build up over the whole radius of the spider, right up to its edge where the maximum sub-ice pressure exists, and beyond which pressure begins to decrease again with decreasing distance towards the next spider (Fig. 1). 
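Eqn. 1 is straightforward to evaluate numerically; a hedged sketch (not the authors' code) for the two $r_{P}/R$ choices discussed above, using the constants given in the text:

```python
import math

# Critical fracturing pressure, Eqn. 1, for a simply supported CO2 ice
# sheet (constants from the text: Kaufmann et al. 2020 yield strength,
# Yamashita & Kato 1997 Poisson's ratio).
sigma = 12.3e6   # CO2 ice yield strength, Pa
nu = 0.544       # Poisson's ratio of CO2 ice
z = 0.7          # slab thickness, m

def P_crit(R, rP):
    bracket = 4 - (1 - nu) * rP**2 / R**2 + 4 * (1 + nu) * math.log(R / rP)
    return 8 * sigma * z**2 / (3 * rP**2) / bracket

print(P_crit(25.0, 0.25 * 25.0))   # ~33 kPa for r_P = 0.25 R
print(P_crit(25.0, 25.0))          # ~7 kPa for the favoured r_P = R case
```

The $r_{P}=R$ case uses the fact that the bracket then reduces to $3+\nu$, consistent with the simplification used later in Eqn. 10.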
Nevertheless, we plot $P_{crit}$ for a range of ratios and for two different overall spider sizes in Figure 3. Figure 2: Model for the breaking of a thin plate as in Portyankina et al. [2010]. Pressure $P$ builds up in a circular region of radius $r_{P}$ under a plate of thickness $z$ and radius $R$ with simply supported edges. Figure 3: Gas pressure required to fracture a sheet of CO2 ice of thickness $z=0.7$ m and radius $R$ for different scales of gas buildup area $r_{P}$. It can be seen from Figure 3 that large pressures, exceeding the cryostatic pressure of $P_{cryo}\sim 5$ kPa, can develop before fracturing occurs. This is especially the case for the smaller spiders (small R) and for small gas production radii (small $r_{P}/R$), both of which can be understood as reducing the total area that the pressure is applied to, meaning more gas pressure must be added to achieve the same total stress in the ice. For the densely packed spider separations observed in Hao et al. [2019] and Hao et al. [2020] and a gas production radius of $r_{P}=0.25R\sim 6.5$ m (even smaller in size than the case simulated by Thomas et al. [2011b] above), a total pressure of $P_{crit}\approx 33$ kPa can build up before fracturing occurs. Thus, in this case our maximum pressure gradient driving gas flow at the initial breach would be $(P_{crit}-P_{surf})/r_{P}\approx 33$ kPa$/6.5$ m. ### 2.2 Permeability The permeability of a porous medium can be expressed as (see for example Balme and Hagermann [2006]) $K=\frac{D^{2}\phi^{3}}{72\tau(1-\phi)^{2}},$ (2) where $D$ is the constituent particle size, $\phi$ is porosity and $\tau$ is tortuosity. Typical values used for Martian regolith are $\phi=40\%$, $\tau=25/12$ [Balme and Hagermann, 2006, Demidov et al., 2015], and particle sizes between $\sim$microns and a millimetre (e.g. Morgan et al., 2018). Over this range of particle sizes $K$ changes by several orders of magnitude, exceeding any uncertainty in $\phi$ and $\tau$. 
For gases at a low pressure $P$ such as we are dealing with (rather than liquid flows), $K$ must be corrected by the ‘Klinkenberg correction’ [Klinkenberg, 1941] $k_{G}=k+\frac{6.9}{P}k^{0.64},$ (3) where pressure is expressed in psi and permeability in mdarcies. Conversion back to SI units is performed using $K=ck$ and $K_{G}=ck_{G}$, with $c=0.986923\times 10^{-3}\mu$m2 per mdarcy. With the above permeability, Darcy’s law for the total mass flow, $Q$, through a cross-sectional area $A$ of porous medium, driven by a pressure gradient of $dP/dr$, is [Ahmed, 2010] $Q=-\frac{\rho K_{G}A}{\eta}\frac{dP}{dr},$ (4) with the gas density $\rho$ and viscosity $\eta$ given by $\rho=\frac{Pm_{CO2}}{k_{B}T},$ (5) and $\eta=\rho\lambda\sqrt{\frac{2k_{B}T}{\pi m_{CO2}}},$ (6) respectively. Here $k_{B}$ is Boltzmann’s constant, $m_{CO2}=7.3\times 10^{-26}$ kg and $d_{CO2}=330\times 10^{-12}$ m are the molecular mass and diameter of CO2, and the mean free path is given by $\lambda=\frac{k_{B}T}{\sqrt{2}\pi Pd_{CO2}^{2}}.$ (7) It is assumed that gas temperature, $T$, is equal to the sublimation temperature at the sub-ice pressure $T=\frac{-b}{\ln(P/100)-a},$ (8) where we use the constants $a=23.3494$ and $b=3182.48$, as in Thomas et al. [2011a]. Under the 0.7 m deep slab these values are $T=163$ K (around 10 K higher than at the surface), $\rho=0.155$ kg m-3, $\lambda=9.75\times 10^{-7}$ m, and $\eta=2.12\times 10^{-5}$ Pa s. Assuming a cylindrical geometry with a thickness of permeable regolith $d$ metres, the cross-sectional area through which gas can flow, $A$, at a distance $r$ from the central vent, is $A=2\pi rd$. The pressure gradient, $\frac{dP}{dr}$, can be approximated as $\Delta P/r_{P}$, where $\Delta P$ is the pressure drop from the high-pressure, sub-ice environment to the low-pressure vent over a length-scale $r_{P}$. Close to the central vent (i.e. $r\ll R$), almost the entire gas flow, $Q$ from above, must pass through an area of permeable regolith. 
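The quoted sub-ice gas properties follow from Eqns. 5-8; a minimal sketch (not the authors' code), evaluated at the cryostatic pressure:

```python
import math

# Sub-ice gas properties from Eqns. 5-8 (constants from the text).
kB = 1.380649e-23      # Boltzmann's constant, J K^-1
m_co2 = 7.3e-26        # CO2 molecular mass, kg
d_co2 = 330e-12        # CO2 molecular diameter, m
a, b = 23.3494, 3182.48

P = 600.0 + 1606.0 * 3.72 * 0.7             # cryostatic pressure, ~4782 Pa
T = -b / (math.log(P / 100.0) - a)          # Eqn. 8, sublimation temperature
rho = P * m_co2 / (kB * T)                  # Eqn. 5, gas density
lam = kB * T / (math.sqrt(2) * math.pi * P * d_co2**2)        # Eqn. 7
eta = rho * lam * math.sqrt(2 * kB * T / (math.pi * m_co2))   # Eqn. 6

# Reproduces the quoted values: T ~ 163 K, rho ~ 0.155 kg m^-3,
# lam ~ 9.7e-7 m, eta ~ 2.1e-5 Pa s.
```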
We therefore set $r=1$ m and obtain an expression from Eqn. 4 for the required depth of regolith in terms of this flow as $d=\frac{Q\eta}{2\pi\rho K_{G}}\frac{r_{P}}{\Delta P}.$ (9) Note that we have assumed a constant permeability for the regolith layer while, in reality, porosity, and therefore permeability, is generally reckoned to decrease with depth (see for example Morgan et al. [2018]). Decreasing permeability will restrict gas flow rates meaning that, again, our estimate here is a ‘strongest-case’ for flow through porous regolith. The length–scale $r_{P}$ is the gas buildup radius from above and should be similar in size to the spider radius $R$. Indeed, Hao et al. [2019] suggest that the distance between adjacent spiders is determined by the pressure fall-off, so that new fractures open (forming a new spider) at a distance from the existing vent where pressure increases to its maximum sub-ice value, $r_{P}\approx R$. In the steady state case, therefore, a further simplification can be made. Total flow into the vent is the per-unit-area gas production rate integrated over the area of the spider, $Q=\dot{m}\pi R^{2}$, and, having made the assumption $r_{P}=R$, Eqn. 1 can be simplified and combined with Eqn. 9 to make the dependence on spider size $R$ explicit. We also make the additional assumption that $\Delta P\approx P_{crit}$ (neglecting the comparatively small surface pressure) so that we are using the maximum pressure gradient possible: the ice breaking pressure, rather than the cryostatic pressure. Then $R$ can be expressed as $R=\left[\frac{16d\rho K_{G}\sigma z^{2}}{3\eta\dot{m}(3+\nu)}\right]^{1/5}.$ (10) This is the maximum radius of a cylindrical volume of porous regolith which can be effectively drained into a central vent at a steady state production $\dot{m}$ before gas pressure at the edge exceeds ice strength and a new vent opens. 
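Eqns. 2, 3 and 9 can be chained together numerically. The sketch below (not the authors' code) uses the cryostatic pressure drop over a 25 m length-scale; the exact result depends on the pressure gradient and unit conventions chosen for the Klinkenberg correction, so it lands at roughly 20 m rather than the ~13 m quoted later — in either case, metres to tens of metres of permeable regolith:

```python
import math

# Order-of-magnitude estimate of the permeable-regolith depth d (Eqn. 9)
# needed to carry the steady-state flow, via Eqns. 2 and 3.
D = 200e-6                  # regolith particle size, m
phi, tau = 0.40, 25.0 / 12.0
K = D**2 * phi**3 / (72.0 * tau * (1 - phi)**2)   # Eqn. 2, intrinsic, m^2

darcy = 0.986923e-12        # m^2 per darcy
k_md = K / darcy * 1e3      # permeability in millidarcies
P_psi = 4782.0 / 6894.76    # sub-ice pressure in psi
kG_md = k_md + 6.9 / P_psi * k_md**0.64           # Eqn. 3, Klinkenberg
K_G = kG_md * 1e-3 * darcy                        # back to m^2

Q, eta, rho = 0.00884, 2.12e-5, 0.155             # steady-state values
rP, dP = 25.0, 4782.0 - 600.0                     # length-scale and drop
d = Q * eta / (2 * math.pi * rho * K_G) * rP / dP # Eqn. 9 (r = 1 m)
# d comes out at tens of metres for D = 200 micron particles.
```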
## 3 Results and discussion We compute the required regolith depth $d$ from Equation 9 for a range of regolith particle sizes and two scenarios: steady state flow and just after the initial rupture. In the steady state, we use the minimum flow rate from above, $Q_{s}=0.00884$ kg s-1, and cryostatic pressure over a length-scale equal to the smaller spider separations, i.e. $\Delta P/r_{P}=(P_{cryo}-P_{surf})/r_{P}\approx 5$kPa$/25$ m. For the rupture case we use the strongest case of $(P_{crit}-P_{surf})/r_{P}\approx 33$ kPa$/6.5$ m, from above, as well as the slightly higher Thomas et al. [2011a] flow rate ($Q_{i}=0.05$ kg s-1). Figure 4: Required depth of the permeable layer, $d$, to sustain the minimum mass flow, $Q$, for various particle diameters and pressure gradients. Figure 4 shows the results: in the steady state, a permeable regolith layer of thickness at least $d\approx 13.4$ m is needed to support the flow rate for the likely particle size of up to $D\approx 200~{}\mu$m [Kieffer, 2007]. This is rather large, given the presence of ground ice and a decreasing regolith porosity with depth. Even in the case of the strongest possible pressure gradient just after rupture, the required thickness is still $d\approx 5.4$ m. Only in the case of very large regolith particles ($\sim 1$ mm) and the strongest pressure gradients does the required layer thickness fall below one metre. An impermeable layer of water-ice bonded regolith is generally assumed to be present in the Martian polar regions at a depth of between several centimetres [Portyankina et al., 2010, Pilorget et al., 2011] and the seasonal skin depth of $\sim 0.8$ m [Kieffer, 2007]. This is supported by thermal inertia [Bandfield and Feldman, 2008] and gamma-ray and neutron spectrometry measurements [Boynton et al., 2002, Diez et al., 2008], as well as the recent detection of shallow-buried water ice cliffs at mid-latitudes [Dundas et al., 2018]. 
Given the above, it seems unlikely that the total gas flow into a spider vent can occur purely through a porous regolith. What then is the largest size of a feature that can be supported this way? Figure 5 shows the results of Equation 10, where we assume a steady state flow with the minimum gas production and a fixed geometry, and solve for $R$ for a set of permeable regolith depths. This is the length-scale over which a spider can effectively drain gas to a central vent before pressure exceeds the overlying ice strength at the edge. Even in the case of large regolith depths, the length-scales ($\sim 15$ m) are below the observed minimum spider separation distances ($\sim 25$ m) for a particle size below $200~{}\mu$m. For more realistic porous layer thicknesses, only areas of a few metres in radius can be drained. Gas flow through porous regolith therefore seems incompatible with the expected polar stratigraphy. Even in the most amenable case of the smallest distances between araneiforms, with the highest pressure gradients and the lowest gas flow rates, there is insufficient depth of permeable regolith to support the flow associated with the whole spider area venting through a single central vent. Small areas of a few metres in radius around local vents (such as the so-called ‘baby spiders’ identified by Schwamb et al. 2018) may be drained by viscous flow through the pores, but it seems likely that the larger (tens to hundreds of metres across) spiders must involve the original Kieffer [2007] model of ice slab levitation and free gas flow. Figure 5: Maximum radius $R$, over which steady-state gas production can be drained before pressure build–up exceeds the overlying ice strength at the edge, for different regolith depths. Slab levitation is likely when considering the large gas pressures, in excess of cryostatic pressure, calculated above, as well as the small gap size needed for relatively large flows. This required gap size $d_{gap}$ can be computed. 
Assuming laminar Poiseuille flow between two parallel plates (an approximation that likely holds away from the energetic flow at the vent itself) and a pressure gradient of $\Delta P/R$, the mass flow rate per unit circumference is given by $q=\frac{\rho d_{gap}^{3}}{12\eta}\frac{\Delta P}{R}$. The total mass flow into an annulus at radius $r$ can also be expressed in terms of the steady state area production rate as $Q(r)=\dot{m}\pi(R^{2}-r^{2})$, so that per unit of circumference it is $\frac{\dot{m}}{2}(R^{2}/r-r)$. The two can then be equated and solved for $d_{gap}$, the required distance between the two plates to accommodate the flow, as $d_{gap}=\left[\frac{6\dot{m}\eta}{\rho\Delta P}\left(\frac{R^{3}}{r}-rR\right)\right]^{1/3}.$ (11) Figure 6 shows the results of equation 11 for the two spider sizes from above. The required depth is greatest for the large mass flows of the bigger spiders, and increases towards the central vent as the gas is concentrated together in the converging flow (it is undefined at $r=0$, the vent itself). Nonetheless, the required gap between the regolith and ice, $d_{gap}$ is much smaller than the depth of permeable regolith $d$ calculated above and rarely exceeds one centimetre. Thus the whole flow can be supported by a gap of a few centimetres, as suggested by Kieffer [2007]. Figure 6: Required gap size, $d_{gap}$, to support free-flow of the total gas production, between the ice and regolith, across the radii of two different spider sizes. Such a regolith–ice gap may occur due to levitation of the ice slab, but also important is the fact that the slab will bend upwards before fracturing, providing an additional gap for gas to flow through. Portyankina et al. [2010] give an expression for the maximum displacement at the centre of the ice sheet (see Fig. 
2), which we can evaluate just before rupture by using the pressure $P_{crit}$ calculated above: $w_{crit}=\frac{3(1-\nu)r_{P}^{2}P_{crit}}{16Ez^{3}}\left[4(3+\nu)R^{2}-(7+3\nu)r_{P}^{2}-4(1+\nu)r_{P}^{2}\ln\frac{R}{r_{P}}\right],$ (12) with the CO2 ice Young’s modulus of $E=11.5$ GPa [Yamashita and Kato, 1997]. The resulting displacement is shown in Figure 7, for the same spider sizes and $r_{P}/R$ ratios as before. The ice in the centre of 25 m and 150 m radius spiders is bent upwards by $\sim 20$ cm and $\sim 10$ m, respectively, before fracturing occurs. These values greatly exceed the required gap sizes for the gas flow shown in Fig. 6. Therefore, even though the upwards displacement is smaller towards the edge of the feature, it seems likely that upwards bending of the ice sheet can contribute significantly to the opening of a regolith–ice gap, aiding the flow of gas towards the spider vent. Figure 7: Maximum upwards displacement in the centre of an ice sheet just before fracturing, for different sizes $R$, and gas buildup areas $r_{P}$. With viscous flow through regolith pores unable to deliver the full gas production of a reasonable sized spider, and large gas pressure enough to bend or levitate the ice slab, it may be that some spiders undergo both types of flow. For example, low volumes of gas moving through a large cross-sectional area of porous regolith at the outskirts of a feature or around multiple vents, transitioning to free-flow nearer the vent where the converging gas has bent or levitated the overlying ice slab upwards, opening up a gap. Meanwhile, flow through and just above a loose regolith will certainly begin to erode material, digging troughs downwards into it. In Hao et al. [2019]’s conceptual model, regolith properties are cited as explaining the different appearances and spacing of spider sub-types. 
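The gap-size and bending expressions above (Eqns. 11 and 12) can be checked numerically. A hedged sketch (not the authors' code) for the $R=25$ m spider with $r_{P}=0.25R$, taking $\Delta P$ in Eqn. 11 as the cryostatic pressure drop:

```python
import math

# Numerical check of Eqns. 11 (required regolith-ice gap) and 12 (slab
# bending just before fracture); all constants are taken from the text.
mdot = 2.21e-5               # steady gas production, kg m^-2 s^-1
eta, rho = 2.12e-5, 0.155    # sub-ice gas viscosity and density
E, nu, z, sigma = 11.5e9, 0.544, 0.7, 12.3e6   # CO2 ice properties
R, rP = 25.0, 0.25 * 25.0
dP = 4782.0 - 600.0          # cryostatic pressure drop, Pa

def d_gap(r):
    """Eqn. 11: gap needed for Poiseuille flow at radius r from the vent."""
    return (6 * mdot * eta / (rho * dP) * (R**3 / r - r * R)) ** (1.0 / 3.0)

# Eqn. 1 gives the critical fracturing pressure for this geometry ...
bracket1 = 4 - (1 - nu) * rP**2 / R**2 + 4 * (1 + nu) * math.log(R / rP)
P_crit = 8 * sigma * z**2 / (3 * rP**2) / bracket1

# ... which feeds Eqn. 12 for the central displacement just before rupture.
bracket2 = (4 * (3 + nu) * R**2 - (7 + 3 * nu) * rP**2
            - 4 * (1 + nu) * rP**2 * math.log(R / rP))
w_crit = 3 * (1 - nu) * rP**2 * P_crit / (16 * E * z**3) * bracket2

# d_gap(1.0) is a few millimetres and w_crit ~0.23 m, matching the
# "under a centimetre" and "~20 cm" figures quoted in the text.
```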
Although the above calculations demonstrate that significant gas flow through the pores is unlikely, regolith properties, such as cohesion, could still have an important effect on morphology. So-called ‘fat spiders’ could still be examples of low-cohesion, more easily eroded regoliths, as suggested by Hao et al. [2019], for example. Once these troughs and pits are eroded into the regolith, gas will preferentially flow down and further erode them, as described in Hao et al. [2019]. This will provide additional flow paths for the gas but, since slab ice is thought to conformally cover the terrain in winter, at least some levitation or bending is required to open up a gap again in the spring. The physics of mixed gas flow through the regolith pores, eroded troughs, and a levitated or bent-open ice gap will be complicated, and may involve turbulent as well as viscous flow. Alternatively, local variations in the depth of the impermeable water-ice layer, below the resolution of orbital observations, may allow increased porous gas flow in some localised areas. It seems unlikely, however, that water-ice depths could vary from $\sim$cm to $\sim$metres over a few metres lateral distance in otherwise identical terrain. Finally, it should also be noted that some recent work [Chinnery et al., 2018] suggests a much lower translucence of CO2 ice when compared to previous experiments [Hansen, 1999], which would require much thinner ice sheets (on the order of 10 cm) in order to trigger the solid state greenhouse effect. We note that in this case of a reduced $z$, cryostatic pressure and the critical breaking pressure would be reduced (see Eqn. 1), further lowering the pressure gradient and making permeable gas flow even less effective, even for small gas flow rates. 
## 4 Conclusion In the formation of the so–called araneiform features, seen in Mars’ southern polar regions, subsurface flow of CO2 gas below an ice slab can occur in a gap between ice and regolith, and/or in the substrate itself. The latter possibility has recently received increased attention because spider spatial density scales seem to be related to properties of the underlying regolith. We investigate the effectiveness of subsurface flow of CO2 and compare it to gas flow through the gap between ice and regolith. Based on previously estimated flow rates and thermophysical arguments, we suggest that there is insufficient depth of porous regolith to support the full gas flow of all but the very smallest observed spiders through the regolith. On the other hand, free gas flow through a regolith–ice gap is capable of supplying the likely flow rates for gap sizes on the order of a centimetre. This size of gap can be opened in the centre of a spider feature by gas pressure bending the overlying ice slab upwards, or by levitating it entirely as suggested in the original Kieffer [2007] model. Our calculations therefore support at least some of the gas flowing through a gap opened between the regolith and ice. Regolith properties most likely still play a role in the evolution of spider morphology, by regolith cohesion controlling the erosion of the central pit and troughs, for example. ## Acknowledgements The authors would like to thank Jingyan Hao and an anonymous reviewer for their constructive and helpful comments which helped improve this manuscript substantially. NA and AH acknowledge support from the UK Space Agency, grant no. ST/R001375/2. EK was supported by STFC grant no. ST/S001271/1. ## References * Ahmed [2010] Ahmed, T., 2010. Reservoir Engineering Handbook. Fourth ed., Gulf Professional Publishing, Boston. * Balme and Hagermann [2006] Balme, M., Hagermann, A., 2006\. 
Particle lifting at the soil-air interface by atmospheric pressure excursions in dust devils. Geophysical Research Letters 33, L19S01. doi:10.1029/2006GL026819.
* Bandfield and Feldman [2008] Bandfield, J., Feldman, W., 2008. Martian high latitude permafrost depth and surface cover thermal inertia distributions. Journal of Geophysical Research: Planets 113, 0148–0227.
* Boynton et al. [2002] Boynton, W.V., Feldman, W.C., Squyres, S.W., Prettyman, T.H., Brückner, J., Evans, L.G., Reedy, R.C., Starr, R., Arnold, J.R., Drake, D.M., Englert, P.A.J., Metzger, A.E., Mitrofanov, I., Trombka, J.I., d’Uston, C., Wänke, H., Gasnault, O., Hamara, D.K., Janes, D.M., Marcialis, R.L., Maurice, S., Mikheeva, I., Taylor, G.J., Tokar, R., Shinohara, C., 2002. Distribution of hydrogen in the near surface of Mars: Evidence for subsurface ice deposits. Science 297, 81–85. doi:10.1126/science.1073722.
* Brown and Matson [1987] Brown, R.H., Matson, D.L., 1987. Thermal effects of insolation propagation into the regoliths of airless bodies. Icarus 72, 84–94. doi:10.1016/0019-1035(87)90122-9.
* Chinnery et al. [2018] Chinnery, H.E., Hagermann, A., Kaufmann, E., Lewis, S.R., 2018. The Penetration of Solar Radiation Into Carbon Dioxide Ice. Journal of Geophysical Research (Planets) 123, 864–871. doi:10.1002/2018JE005539.
* Demidov et al. [2015] Demidov, N.E., Bazilevskii, A.T., Kuz’min, R.O., 2015. Martian soils: Varieties, structure, composition, physical properties, drillability, and risks for landers. Solar System Research 49, 209–225. doi:10.1134/S0038094615040024.
* Diez et al. [2008] Diez, B., Feldman, W., Maurice, S., Gasnault, O., Prettyman, T., Mellon, M., Aharonson, O., Schorghofer, N., 2008. H layering in the top meter of Mars. Icarus 196, 409–421. doi:10.1016/j.icarus.2008.02.006.
* Dundas et al. [2018] Dundas, C.M., Bramson, A.M., Ojha, L., Wray, J.J., Mellon, M.T., Byrne, S., McEwen, A.S., Putzig, N.E., Viola, D., Sutton, S., Clark, E., Holt, J.W., 2018.
Exposed subsurface ice sheets in the martian mid-latitudes. Science 359, 199–201. doi:10.1126/science.aao1619.
* Fanale et al. [1990] Fanale, F., Salvail, J., Matson, D., Brown, R., 1990. The effect of volume phase changes, mass transport, sunlight penetration, and densification on the thermal regime of icy regoliths. Icarus 88, 193–204. doi:10.1016/0019-1035(90)90185-C.
* Hansen et al. [2010] Hansen, C.J., Thomas, N., Portyankina, G., McEwen, A., Becker, T., Byrne, S., Herkenhoff, K., Kieffer, H., Mellon, M., 2010. HiRISE observations of gas sublimation-driven activity in Mars’ southern polar regions: I. Erosion of the surface. Icarus 205, 283–295. doi:10.1016/j.icarus.2009.07.021.
* Hansen [1999] Hansen, G.B., 1999. Control of the radiative behavior of the Martian polar caps by surface CO2 ice: Evidence from Mars Global Surveyor measurements. JGR Planets 104, 16471–16486. doi:10.1029/1998JE000626.
* Hansen [2005] Hansen, G.B., 2005. Ultraviolet to near-infrared absorption spectrum of carbon dioxide ice from 0.174 to 1.8 $\mu$m. Journal of Geophysical Research (Planets) 110, E11003. doi:10.1029/2005JE002531.
* Hao et al. [2019] Hao, J., Michael, G.G., Adeli, S., Jaumann, R., 2019. Araneiform terrain formation in Angustus Labyrinthus, Mars. Icarus 317, 479–490. doi:10.1016/j.icarus.2018.07.026.
* Hao et al. [2020] Hao, J., Michael, G.G., Adeli, S., Jaumann, R., Portyankina, G., Hauber, E., Millot, C., Zuschneid, W., 2020. Variability of spider spatial configuration at the Martian south pole. Planet. Space Sci. 185, 104848. doi:10.1016/j.pss.2020.104848.
* Kaufmann et al. [2020] Kaufmann, E., Attree, N., Bradwell, T., Hagermann, A., 2020. Hardness and Yield Strength of CO2 Ice Under Martian Temperature Conditions. JGR Planets 125, e06217. doi:10.1029/2019JE006217.
* Kieffer [2000] Kieffer, H.H., 2000. Annual Punctuated CO2 Slab-Ice and Jets on Mars, in: Second International Conference on Mars Polar Science and Exploration, p. 93.
* Kieffer [2007] Kieffer, H.H., 2007. Cold jets in the Martian polar caps. Journal of Geophysical Research (Planets) 112, E08005. doi:10.1029/2006JE002816.
* Kieffer et al. [2006] Kieffer, H.H., Christensen, P.R., Titus, T.N., 2006. CO2 jets formed by sublimation beneath translucent slab ice in Mars’ seasonal south polar ice cap. Nature 442, 793–796. doi:10.1038/nature04945.
* Klinkenberg [1941] Klinkenberg, L.J., 1941. The permeability of porous media to liquids and gases. Drilling and Production Practice.
* Martínez et al. [2012] Martínez, G.M., Renno, N.O., Elliott, H.M., 2012. The evolution of the albedo of dark spots observed on Mars polar region. Icarus 221, 816–830. doi:10.1016/j.icarus.2012.09.008.
* Mellon et al. [2004] Mellon, M.T., Feldman, W.C., Prettyman, T.H., 2004. The presence and stability of ground ice in the southern hemisphere of Mars. Icarus 169, 324–340. doi:10.1016/j.icarus.2003.10.022.
* Morgan et al. [2018] Morgan, P., Grott, M., Knapmeyer-Endrun, B., Golombek, M., Delage, P., Lognonné, P., Piqueux, S., Daubar, I., Murdoch, N., Charalambous, C., Pike, W.T., Müller, N., Hagermann, A., Siegler, M., Lichtenheldt, R., Teanby, N., Kedar, S., 2018. A Pre-Landing Assessment of Regolith Properties at the InSight Landing Site. Space Science Reviews 214, 104. doi:10.1007/s11214-018-0537-y.
* Pilorget and Forget [2016] Pilorget, C., Forget, F., 2016. Formation of gullies on Mars by debris flows triggered by CO2 sublimation. Nature Geoscience 9, 65–69. doi:10.1038/ngeo2619.
* Pilorget et al. [2011] Pilorget, C., Forget, F., Millour, E., Vincendon, M., Madeleine, J.B., 2011. Dark spots and cold jets in the polar regions of Mars: New clues from a thermal model of surface CO2 ice. Icarus 213, 131–149. doi:10.1016/j.icarus.2011.01.031.
* Piqueux et al. [2003] Piqueux, S., Byrne, S., Richardson, M.I., 2003. Sublimation of Mars’s southern seasonal CO2 ice cap and the formation of spiders. JGR Planets 104, 5084. doi:10.1029/2002JE002007.
* Piqueux and Christensen [2008] Piqueux, S., Christensen, P.R., 2008. North and south subice gas flow and venting of the seasonal caps of Mars: A major geomorphological agent. JGR Planets 113, E06005. doi:10.1029/2007JE003009.
* Portyankina et al. [2010] Portyankina, G., Markiewicz, W.J., Thomas, N., Hansen, C.J., Milazzo, M., 2010. HiRISE observations of gas sublimation-driven activity in Mars’ southern polar regions: III. Models of processes involving translucent ice. Icarus 205, 311–320. doi:10.1016/j.icarus.2009.08.029.
* Schwamb et al. [2018] Schwamb, M.E., Aye, K.M., Portyankina, G., Hansen, C.J., Allen, C., Allen, S., Calef, F.J., Duca, S., McMaster, A., Miller, G.R.M., 2018. Planet Four: Terrains - Discovery of araneiforms outside of the South Polar layered deposits. Icarus 308, 148–187. doi:10.1016/j.icarus.2017.06.017, arXiv:1708.07858.
* Tanaka et al. [2014] Tanaka, K.L., Skinner, J.A., Jr., Dohm, J.M., Irwin, R.P., III, Kolb, E.J., Fortezzo, C.M., Platz, T., Michael, G.G., Hare, T.M., 2014. Geologic map of Mars. U.S. Geological Survey Report 3292. doi:10.3133/sim3292.
* Thomas et al. [2010] Thomas, N., Hansen, C.J., Portyankina, G., Russell, P.S., 2010. HiRISE observations of gas sublimation-driven activity in Mars’ southern polar regions: II. Surficial deposits and their origins. Icarus 205, 296–310. doi:10.1016/j.icarus.2009.05.030.
* Thomas et al. [2011a] Thomas, N., Portyankina, G., Hansen, C.J., Pommerol, A., 2011a. HiRISE observations of gas sublimation-driven activity in Mars’ southern polar regions: IV. Fluid dynamics models of CO2 jets. Icarus 212, 66–85. doi:10.1016/j.icarus.2010.12.016.
* Thomas et al. [2011b] Thomas, N., Portyankina, G., Hansen, C.J., Pommerol, A., 2011b. Sub-surface CO2 gas flow in Mars’ polar regions: Gas transport under constant production rate conditions. Geophysical Research Letters 38, L08203. doi:10.1029/2011GL046797.
* Yamashita and Kato [1997] Yamashita, Y., Kato, M., 1997.
Viscoelastic properties of polycrystalline solid methane and carbon dioxide. Geophys. Res. Lett. 24, 1327–1330. doi:10.1029/97GL01205.
# Monitoring SEIRD model parameters using MEWMA for the COVID-19 pandemic with application to the State of Qatar.

E. L. Boonea, Abdel-Salam G. Abdel-Salamb, Indranil Sahooa, Ryad Ghanamc, Xi Chend, and Aiman Hanifa CONTACT E. L. Boone. Email<EMAIL_ADDRESS>aDepartment of Statistical Sciences and Operations Research, Virginia Commonwealth University, Richmond, Virginia, USA; bDepartment of Mathematics, Statistics and Physics, College of Arts and Sciences, Qatar University, Doha, Qatar<EMAIL_ADDRESS>cDepartment of Liberal Arts and Science, Virginia Commonwealth University in Qatar, Doha<EMAIL_ADDRESS>dGrado Department of Industrial and Systems Engineering, Virginia Tech, Blacksburg, Virginia, USA.

###### Abstract

During the current COVID-19 pandemic, decision makers are tasked with implementing and evaluating strategies for both treatment and disease prevention. In order to make effective decisions, they need to simultaneously monitor various attributes of the pandemic, such as the transmission rate and infection rate for disease prevention, the recovery rate, which indicates treatment effectiveness, as well as the mortality rate and others. This work presents a technique for monitoring the pandemic by employing a Susceptible, Exposed, Infected, Recovered, Death model regularly estimated by an augmented particle Markov chain Monte Carlo scheme, in which the posterior distribution samples are monitored via Multivariate Exponentially Weighted Moving Average process monitoring. This is illustrated on the COVID-19 data for the State of Qatar.

###### keywords: Epidemiology; Augmented particle Markov chain Monte Carlo; Multivariate Exponentially Weighted Moving Average; process monitoring; COVID-19

††articletype: article

## 1 Introduction

Coronavirus Disease 2019 (COVID-19) [27, 22] is a severe pandemic affecting the whole world with a fast spreading regime, requiring strict precautions to keep it under control.
As cures and targeted treatments are limited at the moment, establishing those precautions becomes inevitable. These precautions [10] include social distancing, closure of businesses and schools, and travel prohibitions [4]. The coronavirus is a new human Betacoronavirus that uses a densely glycosylated spike protein to penetrate host cells. The virus that causes COVID-19 belongs to the order Nidovirales, viruses that use a nested set of mRNAs to replicate, and it further falls under the coronavirus subfamily comprising the alpha, beta, gamma and delta genera. It belongs to the Betacoronavirus 2B lineage and has a close relationship with SARS species. It is a novel virus, since monoclonal antibodies do not exhibit a high degree of binding to SARS-CoV-2. Replication of the viral RNA occurs when RNA polymerase binds and re-attaches to multiple locations [18, 8]. Cases of COVID-19 started in December 2019, when a strange condition was reported among a cluster of patients in Wuhan, China. Within a few weeks, the COVID-19 virus had spread to different parts of the world. On January 20, 2020, the first case of COVID-19 was recorded in the United States; Italy reported its first confirmed case on January 31, 2020. With COVID-19 cases rising across the world, governments were soon seen intervening in financial and healthcare sectors. In late January 2020, the first U.S. travel restrictions were imposed on travel from China. Weeks later, additional travel bans were imposed on countries in Europe and the United Kingdom. The World Health Organization (WHO) declared COVID-19 a pandemic on March 11, 2020, with a total of more than 100,000 cases globally. As of January 26, 2021, the worldwide total number of confirmed COVID-19 cases exceeds 100 million with over 2.15 million deaths. This virus has a global mortality rate of 3.4%, which makes it more severe than the flu.
Elderly people with other pre-existing illnesses are succumbing more to COVID-19. People with only mild symptoms recover within 3 to 7 days, while those with conditions such as pneumonia or severe diseases take weeks to recover. As of January 26, 2021, the recovery percentage of patients, for example, in China stands at 95%. The global recovery rate of COVID-19 is somewhere around 97% [26]. The main efforts in the literature are focused on model estimation and forecasting of the dynamic nature of the COVID-19 pandemic, rather than on monitoring the process. The Susceptible, Exposed, Infected, Recovered, Death (SEIRD) model is a common compartmental model used for modeling disease spread through a population, and variants of it have been used in modeling the COVID-19 pandemic, such as [15] for Italy, [2] for India and [9] for the State of Qatar. For more on disease modeling in general, see [17, 6, 13, 24]. Other modeling approaches include a time-series model to analyze the outbreak of the pandemic [7] and a time-varying Bayesian semi-parametric model to look at short-term projections of the pandemic [23]. [12] studies the dynamic pattern of COVID-19 deaths over time. The impact of government interventions such as lockdowns has been studied for China in [25] and for India in [2, 3]. Monitoring of the mathematical process has been done for Ukraine’s COVID-19 outbreak [14]; however, the parameter estimates are not generated directly from the data and no clear monitoring scheme is presented. As the vaccine is beginning to be distributed to the public, monitoring the pandemic is more important to decision makers, as they can determine how the pandemic is progressing as well as any shocks to the system that may be problematic. The monitoring approach needs to be able to react quickly to any large shifts in the system.
The goal of this work is to develop a method of monitoring a pandemic using a base mathematical model, such as SEIRD, that can be quickly updated as soon as new information comes in and can “signal” if there is a change in the parameters of the mathematical model. The literature does not seem to provide any approaches that meet this goal. The big challenge in this problem is updating the parameters in an automated way and converting those parameters to something that can be monitored. Here a Bayesian approach is taken for the parameter estimation, via a sampling algorithm that allows for quick updating, avoids particle depletion, and produces samples that can be monitored using a standard process control regime. This work is organized in the following manner. Section 2 introduces the SEIRD model specific to the State of Qatar, the mean model, and the likelihood used. Section 3 introduces the basic reproduction number and illustrates its deficiency in this case. This is followed by the Markov chain Monte Carlo sampling algorithm with particle augmentation to update the parameters at each time step in Section 4. The Multivariate Exponentially Weighted Moving Average (MEWMA) monitoring approach is presented in Section 5. The method is illustrated on real data from the State of Qatar in Section 6, which shows how the monitoring can be employed to identify critical changes in the pandemic. Finally, Section 7 provides a discussion of the method and provides some suggestions for implementation as well as possible areas for improvement.
## 2 SEIRD Model

Let $t$ be a time index that is the number of days from the first recorded case of COVID-19 in the population of interest, $S(t)$ be the number of subjects Susceptible at time $t$, $E(t)$ be the number of subjects Exposed at time $t$, $I(t)$ be the number of Infected (symptomatic) subjects at time $t$, $R(t)$ be the cumulative number of Recovered subjects at time $t$ and $D(t)$ be the cumulative number of subject Deaths at time $t$. This can be modeled with the following system of ordinary differential equations: $\displaystyle\frac{dS(t)}{dt}$ $\displaystyle=$ $\displaystyle-\alpha S(t)E(t)$ (1) $\displaystyle\frac{dE(t)}{dt}$ $\displaystyle=$ $\displaystyle\alpha S(t)E(t)-\beta E(t)-\gamma E(t)$ (2) $\displaystyle\frac{dI(t)}{dt}$ $\displaystyle=$ $\displaystyle\beta E(t)-\gamma I(t)-\eta I(t)$ (3) $\displaystyle\frac{dR(t)}{dt}$ $\displaystyle=$ $\displaystyle\gamma I(t)$ (4) $\displaystyle\frac{dD(t)}{dt}$ $\displaystyle=$ $\displaystyle\eta I(t),$ (5) where the parameters are explained as follows: $\alpha$ is the transmission rate (per day per individual) from Susceptible to Exposed, the rate (per day) at which Exposed become Infected (symptomatic) is denoted by $\beta$, $\gamma$ is the rate (per day) at which Infected become Recovered and the mortality rate (per day) for those Infected is denoted by $\eta$. Notice that this model formulation makes several key assumptions, which are as follows. Immigration, emigration, natural mortality and births are negligible over the time frame and hence are not in the model. Once a person is in the Infected group, they are quarantined and hence do not mix with the Susceptible population. The Recovered and Deaths compartments are for those who were first Infected. Those who are Exposed and asymptomatic recover at the same rate $\gamma$ as those who become sick and recover.
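For concreteness, the system (1)–(5) can be integrated numerically. The sketch below uses SciPy; the parameter values and initial conditions are illustrative assumptions only, not fitted to any data:

```python
import numpy as np
from scipy.integrate import solve_ivp

def seird_rhs(t, y, alpha, beta, gamma, eta):
    """Right-hand side of the SEIRD system, Eqs. (1)-(5)."""
    S, E, I, R, D = y
    dS = -alpha * S * E
    dE = alpha * S * E - beta * E - gamma * E  # gamma*E: asymptomatic leave E
    dI = beta * E - gamma * I - eta * I
    dR = gamma * I
    dD = eta * I
    return [dS, dE, dI, dR, dD]

# Illustrative values only; the paper estimates these from data.
params = (2e-7, 0.2, 1 / 14, 1e-3)        # alpha, beta, gamma, eta
y0 = [2.8e6, 10.0, 1.0, 0.0, 0.0]         # S, E, I, R, D at t = 0
sol = solve_ivp(seird_rhs, (0, 120), y0, args=params,
                t_eval=np.arange(121), rtol=1e-8)
S, E, I, R, D = sol.y
```

Note that the total population is not conserved here: consistent with the stated assumptions, the $\gamma E(t)$ term removes asymptomatic recoveries from the system without routing them through $R(t)$.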
The SEIRD model presented here is the same as the one presented in [9] and matches the assumptions needed for the example provided in Section 6; however, the estimation and monitoring method is not specific to this particular model. Due to the dynamic nature of how the pandemic has developed, assuming the system is in “steady state” is invalid, as governments have intervened in the system in an effort to influence various parameters, and medical treatment of the disease has changed across the time frame. Hence the parameters are also functions of time, denoted as $\alpha(t),~{}\beta(t),~{}\gamma(t)$ and $\eta(t)$. Specifically, $\displaystyle\frac{d\lambda_{S}(t)}{dt}$ $\displaystyle=$ $\displaystyle-\alpha(t)\lambda_{S}(t)\lambda_{E}(t)$ (6) $\displaystyle\frac{d\lambda_{E}(t)}{dt}$ $\displaystyle=$ $\displaystyle\alpha(t)\lambda_{S}(t)\lambda_{E}(t)-\beta(t)\lambda_{E}(t)-\gamma(t)\lambda_{E}(t)$ (7) $\displaystyle\frac{d\lambda_{I}(t)}{dt}$ $\displaystyle=$ $\displaystyle\beta(t)\lambda_{E}(t)-\gamma(t)\lambda_{I}(t)-\eta(t)\lambda_{I}(t)$ (8) $\displaystyle\frac{d\lambda_{R}(t)}{dt}$ $\displaystyle=$ $\displaystyle\gamma(t)\lambda_{I}(t)$ (9) $\displaystyle\frac{d\lambda_{D}(t)}{dt}$ $\displaystyle=$ $\displaystyle\eta(t)\lambda_{I}(t),$ (10) where $\lambda_{S}(t)$, $\lambda_{E}(t)$, $\lambda_{I}(t)$, $\lambda_{R}(t)$ and $\lambda_{D}(t)$ denote the respective mean parameters. At each time point $t$ the parameters must have a prior distribution. For this work the prior distribution specification will be the same for all $t$; however, this is not necessary if one has information that needs to be included at a specific time.
Since a Bayesian methodology is being employed, the likelihood is specified to be: $\displaystyle I(t)$ $\displaystyle\sim$ $\displaystyle Poisson\left(\lambda_{I}(t)\right)$ (11) $\displaystyle R(t)$ $\displaystyle\sim$ $\displaystyle Poisson\left(\lambda_{R}(t)\right)$ (12) $\displaystyle D(t)$ $\displaystyle\sim$ $\displaystyle Poisson\left(\lambda_{D}(t)\right).$ (13) Notice that $S(t)$ and $E(t)$ are not in the likelihood, as they are latent states that are not directly observed. The true likelihood for $\\{S(t),E(t),I(t),R(t),D(t)\\}$ should be Multinomial. However, with two latent states, one of which is the largest state, the Multinomial approach is challenging to apply; thus this work adopts a Poisson likelihood as an approximation.

## 3 The Basic Reproduction Number $R_{0}$

The basic reproduction number, $R_{0}$, is defined as the expected number of secondary cases produced by a single infection in a completely susceptible population. It is dimensionless and can be calculated as a product of the transmissibility, the average rate of contact between susceptible and infected individuals, and the duration of the infectiousness [5]. In model (1), the last two equations do not contribute to $R_{0}$ and so $R_{0}=\frac{\alpha}{\beta+\gamma}.$ Since our model is time varying, it follows that $R_{0}(t)=\frac{\alpha(t)}{\beta(t)+\gamma(t)}.$ In this work the reproduction number is not considered, since it does not account for all the parameters in the model. One of the goals of this work is to ensure the monitoring process accounts for all of the parameters.

## 4 Sequential Sampling with Particle Augmentation

From a monitoring perspective, we need to essentially estimate the parameters, $\theta_{t}=\\{\alpha(t),\beta(t),\gamma(t),\eta(t)\\}$, at each time step, dependent primarily on the latest data.
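The Poisson likelihood (11)–(13) is what supplies the particle weights in the sampler described next. A minimal sketch of the weight computation, under the assumption that each particle carries a mean trajectory $(\lambda_I,\lambda_R,\lambda_D)$ (the function names here are our own, not from the paper):

```python
import numpy as np
from scipy.stats import poisson

def log_weight(obs, lam):
    """Unnormalized log-weight of one particle: the Poisson likelihood
    (11)-(13) of the observed (I, R, D) counts given the particle's mean
    trajectory (lam_I, lam_R, lam_D).  S and E are latent, so absent."""
    (I_obs, R_obs, D_obs), (lam_I, lam_R, lam_D) = obs, lam
    return (poisson.logpmf(I_obs, lam_I)
            + poisson.logpmf(R_obs, lam_R)
            + poisson.logpmf(D_obs, lam_D))

def normalize(log_w):
    """Convert log-weights to resampling probabilities (stabilized)."""
    w = np.exp(log_w - np.max(log_w))
    return w / w.sum()
```

Working in log space and subtracting the maximum before exponentiating avoids the underflow that raw Poisson likelihoods of large counts would cause.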
If at time $t$, we want to estimate the change in the parameters from $t-1$ to $t$, we need an estimation algorithm that can update the parameters so that changes can be detected. In a dynamic system where the parameters are changing due to interventions into the system, such as vaccines or quarantines, the estimation algorithm needs to also take into account the previous changes that have taken place in the system. Traditional approaches to modeling varying coefficient models from a Bayesian perspective have difficulties, because the sampling methods required often suffer from particle depletion as the process proceeds through time. The proposed Algorithm 1 is a variant of the Sampling Importance Resampling algorithm applied at each time step. To ensure there are enough particles to work through the sampling process, the basic idea is to augment each accepted sample by some random perturbations to generate new particles to move through to the next step. Let us fix the notation first. Denote $\mathbf{D}(k)=\\{S(k),E(k),I(k),R(k),D(k)\\}$ as the actual state vector of the system at time step $k$, where the first two components are latent, $\forall k\in\\{0,1,\ldots,T\\}$, and $T$ denotes the total number of time steps considered. Denote $g(\theta_{0})$ and $g(\theta_{k}|\theta_{k-1})$, respectively, as the candidate distributions to sample $\theta$ from at time step $0$ and at time step $k$ ($\forall k=1,2,\ldots,T$). Let $\tilde{\theta}_{k}$ denote the set of accepted samples of $\theta$ at time $k$ that will be passed on to the next time step ($\forall k\in\\{0,1,\ldots,T-1\\}$). Denote the set that contains all the accepted values of $\theta$ up to time $k$ by $\tilde{\Theta}(k)$. We elaborate on the key steps of Algorithm 1 below. At time step $0$, we draw $n_{c}$ candidate samples of $\theta$ from $g(\theta_{0})$, denoted by $\\{\theta_{0,j}^{*}\\}_{j=1}^{n_{c}}$.
We then evaluate the posterior distribution using the data at time 0 (i.e., $\mathscr{D}(0)$) and the candidate particles to obtain the unnormalized weights $\\{w_{j}^{*},j=1,2,\ldots,n_{c}\\}$ at time step $0$, which are subsequently normalized to the $\widehat{w}_{j}^{*}$. We then obtain $n_{p}$ posterior samples by selecting from $\\{\theta_{0,j}^{*},j=1,2,\ldots,n_{c}\\}$ with the corresponding probabilities given by $\\{\widehat{w}_{j}^{*}\\}_{j=1}^{n_{c}}$; denote this set of $n_{p}$ samples by $\tilde{\theta}_{0}=\\{\tilde{\theta}_{0,1},\tilde{\theta}_{0,2},\ldots,\tilde{\theta}_{0,n_{p}}\\}$. This set of accepted values of $\theta$ will be passed on to time step 1. The sampling for time step $k$ is similar to that for time step $0$ but enhanced with sample augmentation. Specifically, at time step $k$ ($\forall k\in\\{1,2,\ldots,T\\}$), using an appropriate candidate density $g(\theta_{k}|\tilde{\theta}_{k-1,\ell})$ which is conditioned on each accepted sample from time step $k-1$, we generate a batch of $n_{b}$ candidate samples in the neighborhood of $\tilde{\theta}_{k-1,\ell}$; denote the batch by $\\{\theta_{k,\ell,j}^{*}\\}_{j=1}^{n_{b}}$ for $\ell=1,2,\ldots,n_{p}$. We then evaluate the posterior distribution using the data up to time $k$ (i.e., $\mathscr{D}(k)$), the accepted samples up to time step $k-1$ (i.e., $\tilde{\Theta}(k-1)$) and the candidate particles, to obtain the unnormalized weights $\\{w_{\ell,j}^{*},j=1,2,\ldots,n_{b},\ell=1,2,\ldots,n_{p}\\}$ at time step $k$, which are subsequently normalized to the $\widehat{w}_{\ell,j}^{*}$. We then obtain $n_{p}$ posterior samples by selecting from $\\{\theta_{k,\ell,j}^{*},j=1,2,\ldots,n_{b},\ell=1,2,\ldots,n_{p}\\}$ with the corresponding probabilities given by $\\{\widehat{w}_{\ell,j}^{*},j=1,2,\ldots,n_{b},\ell=1,2,\ldots,n_{p}\\}$; denote this set of $n_{p}$ accepted samples by $\tilde{\theta}_{k}=\\{\tilde{\theta}_{k,1},\tilde{\theta}_{k,2},\ldots,\tilde{\theta}_{k,n_{p}}\\}$. 
Then by the end of time step $k$, we update the set of accepted values of $\theta$ to $\tilde{\Theta}(k)=\tilde{\Theta}(k-1)\cup\tilde{\theta}_{k}$. Algorithm 1 Enhanced sampling importance resampling algorithm 1: Specify a prior distribution $p_{0}(\theta_{0})$, and define $\mathbf{D}(0)=\\{S(0),E(0),I(0),R(0),D(0)\\}$. Let $\mathscr{D}(0)=\mathbf{D}(0).$ 2: Draw $n_{c}$ candidate samples from a candidate distribution $g(\theta_{0})$, i.e., $\theta_{0,j}^{*}\sim g(\theta_{0})$, $j=1,2,\ldots,n_{c}$, then $\mathbf{D}(1)$ is observed. 3: Evaluate $w_{j}^{*}=p(\mathbf{D}(1)|\mathscr{D}(0),\theta_{0,j}^{*})p_{0}(\theta_{0,j}^{*})/g(\theta_{0,j}^{*})$, $j=1,2,\ldots,n_{c}$. 4: Obtain normalized weights $\widehat{w}_{j}^{*}=w_{j}^{*}/\left(\sum_{\ell=1}^{n_{c}}w_{\ell}^{*}\right)$, $j=1,2,\ldots,n_{c}$. 5: Resample from $\\{\theta_{0,j}^{*}\\}_{j=1}^{n_{c}}$ using the set of normalized weights, $\\{\widehat{w}_{j}^{*}\\}_{j=1}^{n_{c}}$, and obtain a set of $n_{p}$ samples, $\tilde{\theta}_{0}=\\{\tilde{\theta}_{0,1},\tilde{\theta}_{0,2},\ldots,\tilde{\theta}_{0,n_{p}}\\}$. Let $\tilde{\Theta}(0)=\tilde{\theta}_{0}$. 6: for $k=1,2,\ldots,T$ do 7: Update the total dataset with new observations, $\mathscr{D}(k)=\mathbf{D}(k)\cup\mathscr{D}(k-1)$, where $\mathbf{D}(k)=\\{S(k),E(k),I(k),R(k),D(k)\\}$. 8: Specify a prior distribution $p_{k}(\theta_{k})$, then $\mathbf{D}(k+1)$ is observed. 9: For the $\ell$th sample in the collection $\tilde{\theta}_{k-1}$ from step $k-1$, $\tilde{\theta}_{k-1,\ell}$, draw $n_{b}$ samples from the candidate distribution $g(\theta_{k}|\tilde{\theta}_{k-1,\ell})$, i.e., $\theta_{k,\ell,j}^{*}\sim g(\theta_{k}|\tilde{\theta}_{k-1,\ell})$, $j=1,2,\ldots,n_{b}$. 10: Evaluate $w_{\ell,j}^{*}=p(\mathbf{D}(k+1)|\mathscr{D}(k),\theta_{k,\ell,j}^{*},\tilde{\Theta}(k-1))p_{k}(\theta_{k,\ell,j}^{*})/g(\theta_{k,\ell,j}^{*}|\tilde{\theta}_{k-1,\ell})$, $\ell=1,2,\ldots,n_{p}$, $j=1,2,\ldots,n_{b}$. 
11: Normalize $\widehat{w}_{\ell,j}^{*}=w_{\ell,j}^{*}/\left(\sum_{m=1}^{n_{p}}\sum_{h=1}^{n_{b}}w_{m,h}^{*}\right)$, $\ell=1,2,\ldots,n_{p}$, $j=1,2,\ldots,n_{b}$. 12: Resample from $\\{\theta_{k,\ell,j}^{*},\ell=1,2,\ldots,n_{p};j=1,2,\ldots,n_{b}\\}$ using the set of normalized weights, $\\{\widehat{w}_{\ell,j}^{*},\ell=1,2,\ldots,n_{p};j=1,2,\ldots,n_{b}\\}$, and obtain a set of $n_{p}$ samples, $\tilde{\theta}_{k}=\\{\tilde{\theta}_{k,1},\tilde{\theta}_{k,2},\ldots,\tilde{\theta}_{k,n_{p}}\\}$. Let $\tilde{\Theta}(k)=\tilde{\Theta}(k-1)\cup\tilde{\theta}_{k}$. 13: end for

## 5 Monitoring

The presented method for estimating the SEIRD model parameters through time is quite responsive to changes in the system. Hence, these parameters can be monitored to look for changes that will be manifested in the data. Since there are four parameters in the SEIRD model that need to be simultaneously monitored, the Multivariate Exponentially Weighted Moving Average (MEWMA) approach was chosen as an appropriate monitoring method. [16] developed the multivariate EWMA (MEWMA) control chart, which is an extension of the univariate EWMA. First, the parameter samples were differenced using a single backward lag of one: $\Delta\alpha(t)_{i}=\alpha(t)_{i}-\alpha(t-1)_{i}$, $\Delta\gamma(t)_{i}=\gamma(t)_{i}-\gamma(t-1)_{i}$, $\Delta\beta(t)_{i}=\beta(t)_{i}-\beta(t-1)_{i}$ and $\Delta\eta(t)_{i}=\eta(t)_{i}-\eta(t-1)_{i}$. These form a vector $\left(\Delta\alpha(t)_{i},~{}\Delta\gamma(t)_{i},~{}\Delta\beta(t)_{i},~{}\Delta\eta(t)_{i}\right)^{T}$ to be monitored for significant deviations from zero, which would correspond to a significant change in the set of parameters.
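One time step of Algorithm 1 — augmentation around each accepted draw, weighting, and resampling — can be sketched as below; the differenced draws for monitoring then follow directly as `accepted_k - accepted_prev`. The multiplicative log-normal jitter used for the candidate density $g(\cdot|\tilde{\theta}_{k-1,\ell})$ and the `log_posterior_weight` callback standing in for steps 9–10 are our assumptions; the paper does not fix a particular candidate form:

```python
import numpy as np

rng = np.random.default_rng(0)

def sir_step(accepted_prev, log_posterior_weight, n_b, scale=0.05):
    """One augmented sampling-importance-resampling step (Algorithm 1).
    accepted_prev: (n_p, 4) accepted (alpha, beta, gamma, eta) draws from
    time k-1.  Each is perturbed into n_b nearby candidates (step 9),
    weighted (steps 10-11), and resampled back to n_p draws (step 12)."""
    n_p, dim = accepted_prev.shape
    # Multiplicative log-normal jitter keeps the rate parameters positive.
    cand = accepted_prev[:, None, :] * np.exp(
        scale * rng.standard_normal((n_p, n_b, dim)))
    cand = cand.reshape(n_p * n_b, dim)
    log_w = np.array([log_posterior_weight(th) for th in cand])
    w = np.exp(log_w - log_w.max())
    w /= w.sum()
    idx = rng.choice(n_p * n_b, size=n_p, p=w)
    return cand[idx]
```

Because every accepted draw spawns `n_b` fresh candidates before resampling, the particle cloud cannot collapse onto a handful of repeated values, which is the depletion problem the augmentation is designed to avoid.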
The multivariate parameters are given by the mean: $\Delta\bar{\boldsymbol{\theta}}(t)=\frac{1}{n_{p}}\mathbf{1}_{n_{p}}^{T}\left(\Delta\boldsymbol{\alpha}(t),\Delta\boldsymbol{\gamma}(t),\Delta\boldsymbol{\beta}(t),\Delta\boldsymbol{\eta}(t)\right)$ and variance: $Cov(\Delta\boldsymbol{\theta}(t))=\frac{1}{n_{p}-1}\begin{pmatrix}\Delta\boldsymbol{\alpha}(t)^{T}\\\ \Delta\boldsymbol{\gamma}(t)^{T}\\\ \Delta\boldsymbol{\beta}(t)^{T}\\\ \Delta\boldsymbol{\eta}(t)^{T}\end{pmatrix}\left(\Delta\boldsymbol{\alpha}(t),\Delta\boldsymbol{\gamma}(t),\Delta\boldsymbol{\beta}(t),\Delta\boldsymbol{\eta}(t)\right).$ In order to have a monitoring process that is not overly sensitive, the MEWMA is employed, which is given by $MEWMA(t)=\Lambda\Delta\bar{\boldsymbol{\theta}}(t)+(1-\Lambda)MEWMA(t-1),$ (14) with the moving covariance matrix: $V(t)=\Lambda^{2}Cov(\Delta\boldsymbol{\theta}(t))+(1-\Lambda)^{2}V(t-1),$ where $\Lambda$ is a smoothing coefficient that controls how much a new observation can influence the overall mean. Lower values of $\Lambda$ are more conservative, in that new observations do not have much influence on the mean, while higher values allow new observations to have a greater influence on the mean. This is used to control false signals, as any process being monitored will have some natural variation that might otherwise cause a signal. Typical values of $\Lambda$ include $0.1,~{}0.15,~{}0.2,~{}0.25$ and $0.3$. Since we are looking at the differences in parameter values, the target value for the process mean should be $(0,0,0,0)$, indicating no shift in the process. In the multivariate case, a test statistic based on Hotelling’s $T^{2}$ can be formed as $T^{2}(t)=MEWMA(t)^{T}V(t)^{-1}MEWMA(t).$ (15) When $n_{p}$ is large, $T^{2}(t)$ in Equation (15) is approximately $\chi^{2}(4)$, which has a 0.95-quantile equal to 9.48.
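The recursions (14)–(15) transcribe directly into code. The zero-mean, identity-covariance initialization of the state below is an assumption, since the paper does not specify one:

```python
import numpy as np

CHI2_4_Q95 = 9.488  # 0.95-quantile of chi-square with 4 degrees of freedom

def mewma_update(delta_mean, delta_cov, mewma_prev, V_prev, lam=0.2):
    """One MEWMA step, Eqs. (14)-(15).
    delta_mean: length-4 mean of the differenced parameter draws.
    delta_cov:  4x4 covariance of the differenced draws.
    Returns the updated (MEWMA, V) state and Hotelling's T^2."""
    mewma = lam * delta_mean + (1 - lam) * mewma_prev
    V = lam ** 2 * delta_cov + (1 - lam) ** 2 * V_prev
    t2 = float(mewma @ np.linalg.solve(V, mewma))
    return mewma, V, t2

# Assumed initialization: zero mean, identity covariance.
mewma, V = np.zeros(4), np.eye(4)
mewma, V, t2 = mewma_update(np.zeros(4), np.eye(4), mewma, V)
signal = t2 > CHI2_4_Q95  # flag a significant parameter change
```

Using `np.linalg.solve` rather than forming $V(t)^{-1}$ explicitly is the numerically preferable way to evaluate the quadratic form in (15).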
Hence any $T^{2}(t)>9.48$ would be deemed a significant change in the parameter differences and thus a significant change in the SEIRD process. Note that in our case $n_{p}=1,000$, which is deemed large enough for the $T^{2}$ approximation.

## 6 Data Analysis: State of Qatar

### 6.1 Data Description

The World Health Organization, Johns Hopkins University and other agencies maintain data sets on the daily number of confirmed infected cases, deaths, and recoveries for every country. In this work, we study the evolution of the pandemic in the State of Qatar. All data for Qatar were obtained from Johns Hopkins University and are freely accessible via the Johns Hopkins COVID-19 GitHub repository [19]. The GitHub site includes the daily cumulative number of confirmed infections, cumulative number of recovered and cumulative number of deaths starting 22 January 2020. The goal of the data analysis is to demonstrate and assess the proposed modeling approach and its use in monitoring the pandemic, as the varying coefficient approach allows the model parameters to adjust quickly to changes in the data generation process. In model (1), the Recovered and Death states are cumulative with no outgoing transitions, whereas the Infected state has transitions from Exposed and to the Recovered and Death states. Hence the data for confirmed infections are cumulative and include the numbers from both the Recovered and Death states. As such, if $CI(t)$ denotes the confirmed infections at time $t$, then the number of Infected subjects at time $t$ is defined as $I(t)=CI(t)-R(t)-D(t).$ From here on, the term “Active Infections” will be used to denote this derived variable versus the cumulative Infected provided in the data. Figure 1 shows the plots of daily Active Infections, Recovered and Deaths data for the State of Qatar since 29 February 2020. The Active Infections start very low but encounter a large jump around day 12 due to increased testing.
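The Active Infections series is a one-line transformation of the cumulative counts. The arrays below are hypothetical placeholders; the real series come from the Johns Hopkins repository [19]:

```python
import numpy as np

# Hypothetical cumulative counts (confirmed infections, recovered, deaths).
CI = np.array([10, 25, 60, 140, 300])
R = np.array([0, 2, 10, 40, 120])
D = np.array([0, 0, 1, 3, 6])

# Active Infections: I(t) = CI(t) - R(t) - D(t)
I = CI - R - D
```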
The Active Infections then seem to plateau until day 30, after which there is extreme growth in Active Infections. The Active Infections start to decline after day 90. The patterns in the Recovered and Deaths are similar and reflect the time of infection before recovery or death. Both graphs show a very slight increase in the number of recovered or dead subjects until about day 90, after which a steady increase is noticed.

Figure 1: Plot of Active Infections (a), Recovered (b) and Deaths (c) for the State of Qatar data.

### 6.2 Evolution of the pandemic in Qatar

The State of Qatar is one of the countries heavily affected by the COVID-19 pandemic. Since its first infected case on February 29, 2020, Qatar has become one of the most heavily infected countries in the Middle East, with the total number of confirmed cases standing at 148,258 as of January 26, 2021. The total number of deaths in Qatar so far stands at 248 cases, which is low relative to the total number of infected cases and is an indication of the country’s highly effective healthcare system. Qatar prepared a flexible plan for risk management, grounded in a national risk assessment, taking into account the global risk assessment done by the WHO and focusing on reinforcing capacities to reduce or eliminate health risks from COVID-19. Along with its well-organized healthcare system, the country was very quick to respond to the global pandemic. The country implemented many preventive measures very early on in the pandemic, including border control for early detection of cases. These included, but were not limited to, installing thermal screening for passengers entering the country at Hamad International Airport and at seaports as early as January 2020, with the first quarantine facilities opening on February 1 [1].
On March 9, 2020 (day 10), Qatar closed all universities and schools and placed a travel ban on 15 countries: Bangladesh, China, Egypt, India, Iran, Iraq, Italy, Lebanon, Nepal, Pakistan, the Philippines, South Korea, Sri Lanka, Syria and Thailand. On March 14, 2020 (day 15), Qatar expanded its travel ban to include three new countries: Germany, Spain and France [11, 20]. The Ministry of Municipality and Environment closed all parks and public beaches on March 21, 2020, to curb the spread of the coronavirus. On March 23, 2020 (day 24), the Ministry of Commerce and Industry decided to temporarily close all restaurants, cafes, food outlets, and food trucks for the public. The Ministry of Commerce and Industry also decided to close all unnecessary businesses on March 27, 2020 (day 28) [11, 20]. As the number of infected cases continued to rise, on April 8, 2020 (day 40), the Ministry of Public Health (MoPH) announced that the Primary Health Care Corporation would be designating two health centers, one in Umm-Salal and one in Gharrafat Al-Rayyan, for screening, testing, and quarantining COVID-19 patients. MoPH also announced a hotline for psychological aid on April 9, 2020 (day 41). These interventions made by the government changed the dynamics of the pandemic and hence need to be considered while setting up a real-time monitoring system of the infection, recovery and death rates. In the next subsection, we illustrate the proposed model as a data-driven forecasting model for use by stakeholders in the State of Qatar to monitor the COVID-19 pandemic.

### 6.3 Data analysis results

For Qatar, the prior distribution specification is: $\alpha(t)\sim Exp(2/4450000)$, $\beta(t)\sim Exp(1/105)$, $\gamma(t)\sim Exp(1/14)$ and $\eta(t)\sim Exp(1/9500)$. These priors reflect available information; for example, for $\gamma(t)$ the mean is $1/14$, which corresponds to a 14-day infection duration.
The SEIRD model was run with the following initial values for Qatar: $S(0)=2,782,000$, $E(0)=3$, $I(0)=1$, $R(0)=0$ and $D(0)=0$. The SEIRD model and the sampling process given in Algorithm 1 were coded in MATLAB R2020a and were run on a PC with an Intel Core i7-7700 CPU at 3.60GHz with 8GB of RAM. At each time step the sampler was run with $n_{c}=10,000$, $n_{p}=1,000$ and $n_{b}=10$. Hence, each of the $n_{p}$ samples had a batch of $n_{b}$ samples generated in its neighborhood, resulting in $n_{c}=n_{p}\times n_{b}$ candidate particles at the beginning of the next time step. The model computation time is about 60 minutes for $T=135$ time steps. Note that, owing to Qatar's relatively small population size, the number of individuals in each compartment of the model is comparatively small. This means that many of the computations are faster, especially when dealing with the large factorials associated with the Poisson distribution. Figure 2 shows the coefficient estimates across time, $\alpha(t)$ (panel a), $\beta(t)$ (panel b), $\gamma(t)$ (panel c) and $\eta(t)$ (panel d) for the Qatar data set. Of particular interest is the time frame from day 90 to day 95, in which the active infections $I(t)$ exhibited a large drop (see Figure 1). Notice that the distribution for $\alpha(t)$ becomes extremely concentrated in this time frame, as evidenced by the narrow credible intervals. This is further exhibited in $\beta(t)$ and $\eta(t)$. Examining $\gamma(t)$ during this time frame, one sees a large spike in the recovery rate, also with very narrow credible intervals. After about 80 days the recovery rate $\gamma(t)$ begins to increase dramatically, with another spike around day 115.

Figure 2: Plots of $\alpha(t)$ (a), $\beta(t)$ (b), $\gamma(t)$ (c) and $\eta(t)$ (d) across time with associated 95% credible intervals for the State of Qatar data.
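The batch-augmented resampling step described above can be sketched as follows. The jitter distribution, its scale, and the log-likelihood interface are illustrative assumptions; the paper's Algorithm 1 (not reproduced in this section) fixes the actual choices.

```python
import numpy as np

def augmented_sir_step(particles, loglik, n_b=10, scale=0.1, rng=None):
    """One batch-augmented sampling-importance-resampling update: each of
    the n_p particles spawns n_b jittered copies (multiplicative
    log-normal jitter -- an assumed choice), the n_p * n_b candidates are
    weighted by the log-likelihood of the new day's data, and n_p
    particles are resampled in proportion to those weights."""
    rng = np.random.default_rng() if rng is None else rng
    n_p, _ = particles.shape
    cand = np.repeat(particles, n_b, axis=0)          # n_p * n_b candidates
    cand = cand * np.exp(rng.normal(0.0, scale, cand.shape))
    logw = np.array([loglik(c) for c in cand])
    w = np.exp(logw - logw.max())                     # stabilised weights
    idx = rng.choice(len(cand), size=n_p, p=w / w.sum())
    return cand[idx]
```

Because every particle contributes a fresh batch of candidates, the resampled population retains diversity even when the likelihood is sharply peaked, which is what mitigates particle depletion.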
Figure 3 shows the model fitted to the data with 95% posterior predictive bounds for Active Infections (panel a), Recovered (panel b) and Deaths (panel c). First, note that all three fitted models appear to fit the data extremely well based on visual inspection. In particular, notice the dramatic drop in Active Infections around day 90, which can be contrasted with Figure 2.

Figure 3: Plots of the data, fitted model with 95% posterior prediction bounds for Active Infections (a), Recovered (b) and Deaths (c) for the State of Qatar data.

To assess the fit of the model a Pseudo-$R^{2}$ was calculated as: $Pseudo\text{-}R^{2}=1-\frac{\sum_{t=1}^{n}\left[I(t)-\widehat{I}(t)\right]^{2}+\sum_{t=1}^{n}\left[R(t)-\widehat{R}(t)\right]^{2}+\sum_{t=1}^{n}\left[D(t)-\widehat{D}(t)\right]^{2}}{\sum_{t=1}^{n}\left[I(t)-\overline{I(t)}\right]^{2}+\sum_{t=1}^{n}\left[R(t)-\overline{R(t)}\right]^{2}+\sum_{t=1}^{n}\left[D(t)-\overline{D(t)}\right]^{2}}$ where $\overline{I(t)}$, $\overline{R(t)}$ and $\overline{D(t)}$ are the sample means of $I$, $R$ and $D$, respectively, across time (and hence are not functions of time), and $\widehat{I}(t)$, $\widehat{R}(t)$ and $\widehat{D}(t)$ are the medians of the posterior predictive distributions for $I$, $R$ and $D$, respectively, at each time (and hence are functions of time). Since uncertainty quantification is important, the proportion of observations that fall into the predictive bands was calculated as follows: $\widehat{P}_{\text{fit}}=\frac{\sum_{t=1}^{n}I_{\\{I(t)\in\widehat{C}_{I(t)}\\}}+\sum_{t=1}^{n}I_{\\{R(t)\in\widehat{C}_{R(t)}\\}}+\sum_{t=1}^{n}I_{\\{D(t)\in\widehat{C}_{D(t)}\\}}}{3n}$ where $\widehat{C}_{I(t)}$, $\widehat{C}_{R(t)}$ and $\widehat{C}_{D(t)}$ denote the 95% predictive intervals for $I(t)$, $R(t)$ and $D(t)$, respectively, and $I_{\\{\mathcal{A}\\}}$ is an indicator function taking the value one if event $\mathcal{A}$ is true.
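Both goodness-of-fit summaries above can be computed in a few lines; a sketch pooling the three series exactly as in the two formulas:

```python
import numpy as np

def pseudo_r2(obs, fit):
    """Pooled Pseudo-R^2: one minus the ratio of summed squared residuals
    to summed squared deviations of each observed series (I, R, D) from
    its own time-averaged mean."""
    sse = sum(np.sum((np.asarray(o) - np.asarray(f)) ** 2)
              for o, f in zip(obs, fit))
    sst = sum(np.sum((np.asarray(o) - np.mean(o)) ** 2) for o in obs)
    return 1.0 - sse / sst

def coverage(obs, lower, upper):
    """P-hat_fit: fraction of observations falling inside their pointwise
    95% predictive intervals, pooled over all series."""
    hits = sum(np.sum((np.asarray(o) >= np.asarray(lo))
                      & (np.asarray(o) <= np.asarray(up)))
               for o, lo, up in zip(obs, lower, upper))
    return hits / sum(len(o) for o in obs)
```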
The Pseudo-$R^{2}=0.9999$, which shows remarkable agreement between the median fitted values and their corresponding data values. The proportion of observations that fall into the 95% predictive bands is $\widehat{P}_{\text{fit}}=0.8394$, which indicates that the predictive intervals are somewhat narrower than they should be. However, the value is still quite high, with approximately 84% of observations being captured by the intervals. Figure 4 shows the plot of $T^{2}(t)$ for the Qatar data with a control limit set at 9.48 (red dashed line) and a smoothing parameter $\Lambda=0.2$. Notice that until day 40 the process seems fairly stable, as indicated by $T^{2}(t)$ remaining below the control limit. After day 40 there are several time points signalling a change in the process, on days: 40, 44, 47, 64, 65, 67, 68, 69, 71, 77, 95, and 123. The early days (40, 44, 47) can easily be seen to agree with the change in both the infection rate $\alpha(t)$ and the death rate $\eta(t)$ in Figure 2 (a) and (d). Notice in these plots the high volatility with spikes in $\alpha(t)$ and a clear shift upwards in $\eta(t)$ at the same times. During days 64 to 77 there appears to be a very large amount of volatility in the infection rate $\alpha(t)$, which is also evidenced in Figure 3 (a) for active infections. Notice how the active infections have large increases one day and smaller increases the next. The method also picks up the spike in infection rates as well as the dramatic increase in the recovery rate $\gamma(t)$ at day 95. At day 123, Figure 2 shows a spike in the rate of exposed to actively infected, $\beta(t)$ (panel b), and another shift in the recovery rate $\gamma(t)$ (panel c).

Figure 4: Plot of the Hotelling’s $T^{2}$ statistic through time.
The horizontal line corresponds to the 95% control limit. To study the sensitivity of the signalled days to the smoothing parameter in the MEWMA chart, several other analyses were performed with $\Lambda=0.1,\Lambda=0.15,\Lambda=0.25$, and $\Lambda=0.3$, which are commonly chosen values for this parameter. When $\Lambda=0.1$ the monitoring process signaled at days: 44, 47, 69, 77 and 123. Since the smoothing parameter puts more weight on the previous mean than on the new observations, it is expected that a smaller number of days would be signalled. When $\Lambda=0.15$ the monitoring process signaled at days: 40, 44, 47, 64, 68, 69, 71, 77 and 123. Notice that for both $\Lambda=0.1$ and $\Lambda=0.15$ day 95 is not signaled, even though inspection of almost all the charts shows a shift in the process at that time. Hence, these parameter choices would be considered too conservative in this case. When $\Lambda=0.25$ the following days were signalled: 40, 44, 47, 64, 65, 67, 68, 69, 71, 77, 95 and 123. This is the same result as when $\Lambda=0.2$. When $\Lambda=0.3$ the monitoring process signalled the following days: 40, 44, 47, 64, 65, 67, 68, 69, 71, 72, 74, 77, 80, 95 and 123. Here days 72, 74 and 80 are added to the list of signalled days. This reflects the volatility in $\alpha(t)$ across this time frame. Overall, when less conservative values for $\Lambda$ are chosen, the days signalled are quite reasonable. It could be argued that in a pandemic situation a more sensitive monitoring process would be beneficial to public policy makers, as it can signal when an effective intervention has been introduced.

## 7 Discussion

This work provides a novel tool for monitoring and capturing changes in a pandemic evolution process via monitoring changes in the parameters of mathematical epidemiological models, such as the Susceptible, Exposed, Infected, Recovered, Death (SEIRD) model, using the Multivariate Exponentially Weighted Moving Average (MEWMA) process monitoring technique.
A Bayesian approach is taken for the parameter estimation, with a sampling algorithm that allows for quick updating of the SEIRD model and also provides samples that can be monitored by the MEWMA regime. This sampling algorithm uses the notion of Sampling Importance Resampling, but augments the particles at each step to avoid particle depletion. This quick updating allows the process monitoring scheme to “signal” quickly if there is a change in the model parameters. The method is then used to monitor the evolution of the COVID-19 pandemic in the State of Qatar. Despite the proliferation of forecasting models for the evolution of the COVID-19 pandemic, the accuracy achieved can be compromised and comparisons can be complicated due to numerous factors, e.g., their construction methods, distinct healthcare systems adopted by different countries/regions, different political decisions or policies made, and distinct testing and reporting mechanisms [21]. Hence, using the forecasts given by a particular forecasting model for critical decision making is challenging. The proposed approach takes a different perspective and enables decision-makers to work with a tailored SEIRD model, assess the effectiveness of the policies/decisions made, and adopt interventions and/or prevention strategies consistently over time. The State of Qatar example illustrates the proposed method’s ability to perform daily monitoring of a pandemic. The proposed model fits the data very well, with a pseudo-$R^{2}=0.9999$. In the model definition, immigration, emigration, natural births and natural mortality have not been included; however, based on the high pseudo-$R^{2}$, they would have a negligible effect on the fit. Furthermore, the model does not contain compartments for subjects who recovered without being confirmed infections. Since this is not observed, one can only speculate on the impact that such additional data would have on the model fit; however, it is expected to be very small.
As seen in Figures 2 and 3, the proposed method successfully picks up the day-to-day fluctuations in the pandemic evolution process in Qatar via the estimated time-varying model parameters. Note that the pandemic’s overall state can also be monitored by tracking the $T^{2}$ statistic over time (see Figure 4). For Qatar, the method signals the first change in the process around day 40. This change can be attributed to several government interventions, such as closing parks and public beaches on day 24, closing all unnecessary businesses on day 28 and announcing two major health centers catered towards COVID-19 patients on day 40. The method also signals multiple days beyond day 40, all of which seem reasonable upon further inspection. Thus, the proposed method gives decision-makers the ability to evaluate planned interventions as well as discover new changes to the process and respond accordingly. This method can also be extended for monitoring a process at the state/county level by incorporating a spatial covariance and using the mixed-model approach. Since the augmented sampling regime allows posterior samples to be saved from the previous day, updating is performed on a daily basis and only requires the new data and the previous day’s samples. Thus the entire SEIRD model need not be refit from the beginning of the series. Furthermore, the MEWMA is quickly calculated from the posterior samples and can quickly signal those managing the pandemic. Note that the method is not tied to the SEIRD model given in Equation (1), as the augmented sampler and MEWMA monitoring protocol are generic. Our motivation here has been a system where the reproduction number fails to include all the relevant parameters. In systems where the reproduction number depends on all parameters, the reproduction number could be added as a dimension to the monitoring protocol as well.
In situations where the reproduction number is meaningful, it could be another dimension that “signals” serious changes in long-term process outcomes.

## References

* [1] Al Khal, A., Al-Kaabi, S. and Checketts, R.J., _Qatar’s response to COVID-19 pandemic_ , Heart Views, 21 (2020), pp. 129-132.
* [2] Bagal, D. K., Rath, A., Barua, A., & Patnaik, D., _Estimating the parameters of susceptible-infected-recovered model of COVID-19 cases in India during lockdown periods_ , Chaos, Solitons & Fractals, 140 (2020), 110154.
* [3] Basu, D., Salvatore, M., Ray, D., Kleinsasser, M., Purkayastha, S., Bhattacharyya, R., & Mukherjee, B., _A comprehensive public health evaluation of lockdown as a non-pharmaceutical intervention on COVID-19 spread in India: National trends masking state level variations_ , (2020). medRxiv.
* [4] Chinazzi, M., Davis, J.T., Ajelli, M., Gioannini, C., Litvinova, M., Merler, S., y Piontti, A.P., Mu, K., Rossi, L., Sun, K. and Viboud, C., _The effect of travel restrictions on the spread of the 2019 novel coronavirus (COVID-19) outbreak_ , Science (2020).
* [5] Chowell, G. and Brauer, F., _The basic reproduction number of infectious diseases: Computation and estimation using compartmental epidemic models_ , Mathematical and statistical estimation approaches in epidemiology (2009), Springer.
* [6] Clancy, D., & O’Neill, P. D., _Bayesian estimation of the basic reproduction number in stochastic epidemic models_ , Bayesian Analysis, 3(4), 737-757 (2008).
* [7] Deb, S., & Majumdar, M., _A time series method to analyze incidence pattern and estimate reproduction number of COVID-19_ , arXiv preprint arXiv:2003.10655 (2020).
* [8] Fisher, D. and Heymann, D., _Q &A: The novel coronavirus outbreak causing COVID-19_, BMC Medicine. doi: 10.1186/s12916-020-01533-w (2020).
* [9] Ghanam, R., Boone, E. and Abdel-Salam, A.-S., _COVID-19: SEIRD Model for Qatar COVID-19 Outbreak_ , Letters in Biomathematics, June (2020).
https://lettersinbiomath.journals.publicknowledgeproject.org/index.php/lib/article/view/323
* [10] Giuliani, D., Dickson, M.M., Espa, G. and Santi, F., _Modelling and predicting the spread of Coronavirus (COVID-19) infection in NUTS-3 Italian regions_ , arXiv preprint arXiv:2003.06664 (2020).
* [11] Hamad Medical Corporation. (2020). Major Risks to Business Continuity. Doha, Qatar.
* [12] Han, Z., Li, T., & You, J., _These Unprecedented Times: The Dynamic Pattern Of COVID-19 Deaths Around The World_ , (2020). arXiv preprint arXiv:2011.02824.
* [13] Jewell, C. P., Kypraios, T., Neal, P., & Roberts, G. O., _Bayesian analysis for emerging infectious diseases_ , Bayesian analysis, 4(3), 465-496 (2009).
* [14] Kyrychko, Y.N., Blyuss, K.B. & Brovchenko, I., _Mathematical modelling of the dynamics and containment of COVID-19 in Ukraine_ , Scientific Reports 10 (2020), 19662. https://doi.org/10.1038/s41598-020-76710-1
* [15] Loli Piccolomini, E. and Zama, F., _Monitoring Italian COVID-19 spread by a forced SEIRD model_ , PLoS ONE 15 (2020), e0237417. https://doi.org/10.1371/journal.pone.0237417
* [16] Lowry, C.A., Woodall, W.H., Champ, C.W. and Rigdon, S.E., _A multivariate exponentially weighted moving average control chart_ , Technometrics, 34 (1992), pp. 46-53.
* [17] May, Robert M.; Anderson, Roy M., _Infectious diseases of humans: dynamics and control_ , Oxford: Oxford University Press, 1991. ISBN 0-19-854040-X.
* [18] McIntosh, K., _Coronavirus disease 2019 (COVID-19), Up-To-Date_ , https://www.uptodate.com/contents/coronaviruses. Accessed May 8 (2020).
* [19] Miller, M., _2019 Novel Coronavirus COVID-19 (2019-nCoV) Data Repository_ , Bulletin-Association of Canadian Map Libraries and Archives (ACMLA), 164 (2020), pp. 47-51.
* [20] Ministry of Public Health. (2019). Qatar National Preparedness and Response Plan for Communicable Diseases. Doha, Qatar.
* [21] Nikolopoulos, K., Punia, S., Schäfers, A., Tsinopoulos, C.
and Vasilakis, C., _Forecasting and planning during a pandemic: COVID-19 growth rates, supply chain disruptions, and governmental decisions_ , European Journal of Operational Research 290 (2021), pp. 99-115.
* [22] Rezabakhsh, A., Ala, A. and Khodaei, S.H., _Novel Coronavirus (COVID-19): A New Emerging Pandemic Threat_ , Journal of Research in Clinical Medicine, 8(1), pp. 5-6 (2020).
* [23] Roy, A., & Karmakar, S., _Bayesian semiparametric time varying model for count data to study the spread of the COVID-19 cases_ , (2020). arXiv preprint arXiv:2004.02281.
* [24] Vynnycky, E.; White, R. G., _An Introduction to Infectious Disease Modelling_ , Oxford: Oxford University Press, eds. (2010). ISBN 978-0-19-856576-5.
* [25] Wang, L., Zhou, Y., He, J., Zhu, B., Wang, F., Tang, L., … & Song, P. X., _An epidemiological forecast model and software assessing interventions on the COVID-19 epidemic in China_ , Journal of Data Science, 18(3), 409-432 (2020).
* [26] World Health Organization, _Novel Coronavirus (2019-nCoV) situation reports_.
* [27] Wu, F., Zhao, S., Yu, B., Chen, Y.M., Wang, W., Song, Z.G., Hu, Y., Tao, Z.W., Tian, J.H., Pei, Y.Y. and Yuan, M.L., _A new coronavirus associated with human respiratory disease in China_ , Nature (2020), pp. 265-269.
# Manipulation of an elongated internal Josephson junction of bosonic atoms

A. Farolfi<EMAIL_ADDRESS>A. Zenesini <EMAIL_ADDRESS>R. Cominotti D. Trypogeorgos Current address: CNR Nanotec, Institute of Nanotechnology, via Monteroni, 73100, Lecce, Italy A. Recati G. Lamporesi G. Ferrari INO-CNR BEC Center, Dipartimento di Fisica, Università di Trento and TIFPA-INFN, 38123 Povo, Italy

###### Abstract

We report on the experimental characterization of a spatially extended Josephson junction realized with a coherently-coupled two-spin-component Bose-Einstein condensate. The cloud is trapped in an elongated potential such that transverse spin excitations are frozen. We extract the non-linear parameter with three different manipulation protocols. The outcomes are all consistent with a simple local density approximation of the spin hydrodynamics, i.e., of the so-called Bose-Josephson junction equations. We also identify a method to produce states with a well-defined uniform magnetization.

## I Introduction

One of the macroscopic quantum effects observed in superconducting circuits and superfluid helium is the Josephson effect [1, 2], arising when two superconducting leads are coupled via tunneling through a thin insulating layer. An analogous effect has also been observed in atomic Bose-Einstein condensates (BECs). In this context, the coupling has been experimentally realized mainly in two different ways. In the first case (external coupling), two BECs are spatially separated by a thin potential barrier that allows for tunneling [3]. In the second case (internal coupling), two internal states are Rabi-coupled by resonant radiation [4]. As in superconducting circuits, in the most studied configurations with BECs the spatial extension does not play any relevant role; the dynamics is described by the bosonic Josephson junction (BJJ) model [5, 6] and allows the study of nonlinear dynamics [3, 7] and non-classical states [8, 9].
Spatially extended systems, i.e., systems where the BJJ dynamics depends on position, require a more general theoretical description, including gradients of the population imbalance and the relative phase. From the experimental point of view, double-well systems can be extended at most to 1D or 2D geometries, because the third dimension is already used to spatially separate the two quantum states. Instead, mixtures of different hyperfine states offer the possibility of studying also 3D extended systems. So far, extended systems have not been fully investigated in experiments. Pioneering works have been reported on 1D systems studying the dynamics in the large-coupling regime [10, 11], dephasing–rephasing effects [12, 13], local squeezing [14], and phase-transition dynamics [15]. In this context, our group recently observed far-from-equilibrium spin dynamics dominated by the quantum-torque effect [16]. The control and manipulation of the full spatially extended system is experimentally more challenging than for single-mode systems, because distant parts of the system can react differently depending on local properties such as, for example, atomic density or external field inhomogeneities. Coherently-coupled systems offer flexible control thanks to the possibility of manipulating the internal state using external radio-frequency fields. However, the techniques used to prepare the system in a desired state usually act globally, hence the simultaneous control of the internal state at all spatial positions is not trivial. A strong external drive can overcome this issue, but it requires very strong uniform fields that may not be possible to implement due to technical limitations, unwanted couplings or losses to other atomic states. In this work, we present different protocols for the manipulation of an elongated, inhomogeneous internal Josephson junction realized by coherently coupling two spin states of a sodium BEC.
The effect of the density inhomogeneity is well described within a local density approximation. In particular, our protocols allow us to determine the parameters of the Josephson dynamics and to prepare the whole system in an internally homogeneous state. The paper is organised as follows: Section II introduces the experimental system and Sec. III sets the theoretical frame for an effectively 1D system. In Sec. IV we show that the spin dynamics in our system is one dimensional. In Sec. V, we study the response of the system under a homogeneous pulse of the coupling. In Sec. VI we report on an adiabatic method to produce a homogeneous state of magnetization. Finally, we present a high-accuracy characterization method for the non-linearity of the system (Sec. VII).

## II The experimental system

In our apparatus we start with a thermal cloud of 23Na atoms in a hybrid trap [17, 18] in the $|F,m_{F}\rangle=\ket{1,-1}$ state (later referred to as $\ket{\downarrow}$), where $F$ is the total atomic angular momentum and $m_{F}$ its projection on the quantization axis. The atoms are then transferred into a crossed optical trap where a uniform magnetic field is applied along the $z$-axis, with a Larmor frequency of 913.9(1) kHz. The shot-to-shot stability [19] of the magnetic field is at the level of a few µG over tens of minutes of continuous experimental cycling. Evaporative cooling by lowering the depth of the optical trap leads to a BEC with up to $N=3\times 10^{6}$ atoms with negligible thermal component. The atom number is known with an uncertainty of about 20%, inferred from the calibration of the imaging system. The final trap frequencies are tuned to different values by adiabatically increasing the depth of the optical trap.
The final trap geometry is elongated, with axial and radial trap frequencies of $\omega_{x}/2\pi\approx 10$ Hz and $\omega_{\rho}/2\pi$ between 500(10) and 1000(10) Hz. The density of the sample follows the Thomas-Fermi distribution $n_{3D}=n_{3D,0}(1-\frac{\rho^{2}}{R_{\rho}^{2}}-\frac{x^{2}}{R_{x}^{2}})$ and the radial and axial sizes are given by the Thomas-Fermi radii $R_{\rho}$ and $R_{x}$ [see Fig. 1(a)].

Figure 1: (a) The trapped cloud presents an elongated and cylindrically symmetric shape, with radial and axial sizes $R_{\rho}$ and $R_{x}$. (b) Level scheme and microwave radiations used to couple the two states $\ket{1,\pm 1}$. $\delta$ represents the detuning between the two-photon coupling and the $\ket{1,\pm 1}$ energy difference. $\Delta$ is the detuning from the virtual state $\ket{2,0}$. (c) The non-linear strength $\kappa n$ of the cloud in the $x$ direction follows the Thomas-Fermi inverted parabola.

A two-photon Raman microwave transition to the $\ket{1,+1}$ state (later referred to as $\ket{\uparrow}$) is suddenly introduced [see Fig. 1(b)]. The two microwave frequencies are detuned by $\Delta$ from the state $\ket{2,0}$. The effective Rabi coupling $\Omega$ between $\ket{\downarrow}$ and $\ket{\uparrow}$ is inversely proportional to $\Delta$, and we use the latter to tune $\Omega$ while keeping the single-photon Rabi frequencies fixed to 5.0(1) kHz. The two-photon coupling can be detuned from the $\ket{1,\pm 1}$ transition by $\delta$, which we tune by varying the magnetic field. An additional microwave radiation (20 kHz blue-detuned from the $\ket{1,0}\rightarrow\ket{2,0}$ transition and with a Rabi frequency of 7.9(1) kHz) introduces a quadratic Zeeman shift on the $\ket{1,0}$ state to suppress spin-changing collisions [20].
The two-photon coupling and the dressing are generated by two out-of-vacuum half-dipole antennas fed by 100 W amplifiers. We routinely calibrate the magnetic field and the Rabi coupling by driving Rabi dynamics in a very dilute thermal cloud. By driving Rabi oscillations on such a thermal cloud, we observe coherence times of 370 ms, presumably limited by residual collisional effects and technical noise; we therefore consider the dynamics fully coherent, since all the measurements are performed with less than 100 ms of evolution time. After applying the coherent coupling for a given time $t$, the atoms are released from the optical trap. After a short time of flight, the states $\ket{1,\pm 1}$ are separately transferred by microwave pulses to the stretched states $\ket{2,\pm 2}$ and independently imaged by absorption imaging.

## III Theoretical model

As mentioned in the Introduction, we are interested in describing our system in terms of an inhomogeneous, elongated BJJ. In a standard two-level approximation, it is common to use the relative population of the two states and their relative phase as the degrees of freedom of a BJJ (see, e.g., [21]). However, in order to reduce the full description of our system to that of a (local) BJJ, it is convenient to describe the BEC in terms of its (position-dependent) total density $n_{3D}$ and its spin-density $\mathbf{s}=(\sqrt{n^{2}_{3D}-s_{z}^{2}}\cos{\phi},\sqrt{n^{2}_{3D}-s_{z}^{2}}\sin{\phi},s_{z})$ on the Bloch sphere, where $s_{z}$ is the population difference and $\phi$ is the relative phase of the $\ket{\uparrow}$ and $\ket{\downarrow}$ states. The spin density has the property that $|\mathbf{s}|=n_{3D}$. For sodium atoms, the states $\ket{\uparrow}$ and $\ket{\downarrow}$ have equal intrastate coupling constants $g_{-1}=g_{+1}=g$ and a smaller interstate coupling constant $g_{-1,+1}$, with a positive difference $\delta g=g-g_{-1,+1}$.
This leads to full miscibility of the spin mixture [22, 23, 20] and a separation of timescales between the density and spin dynamics. Neglecting both density and spin currents, the total density is constant and the spin dynamics is described by the nonlinear precession equation [24, 16] $\dot{\mathbf{s}}({\mathbf{r}})=\mathbf{H}(\mathbf{s})\times\mathbf{s}({\mathbf{r}}),$ (1) where $\mathbf{H}(\mathbf{s})=\left(\Omega,0,\delta+\frac{\delta g}{\hbar}s_{z}\right)^{T}$ is the effective magnetic field. The effective magnetic field is due to the presence of $SU(2)$ symmetry-breaking terms: the homogeneous transverse microwave Rabi coupling $\Omega$, the linear detuning $\delta$ and the nonlinear detuning $\frac{\delta g}{\hbar}s_{z}$. The latter term arises from the difference between the intra- and interspecies interaction constants $\delta g$. In the case of a strongly-elongated cylindrically-symmetric Thomas-Fermi profile (also referred to later as the 1D regime, see Sec. IV), spin dynamics occurs only in the axial direction. By integrating in the radial plane, we can describe the dynamics of the spin along the axial direction $x$ by introducing the 1D spin-density $\mathbf{s}(x)$, such that $|\mathbf{s}(x)|=n(x)=n_{0}(1-x^{2}/R_{x}^{2}).$ (2) The spin-density obeys the following 1D version of Eq. (1): $\dot{\mathbf{s}}(x)=\begin{pmatrix}\Omega\\ 0\\ \delta+\kappa s_{z}(x)\end{pmatrix}\times\mathbf{s}(x),$ (3) where the nonlinear coupling strength is $\kappa=\frac{5}{6\hbar}\frac{\delta g}{\pi R_{\rho}^{2}},$ (4) and is related to the 3D density [see Fig. 1(c)] through $\kappa n_{0}=\frac{2}{3\hbar}n_{3D,0}\delta g.$ (5) The equations for the spin of the system, Eqs. (1) and (3), are equivalent to a local version of the BJJ equations [21], which are written in terms of the normalized magnetization $Z(x)=s_{z}(x)/n$ and of the relative phase $\phi(x)$. In such a context, it has been realized that the BJJ equations have different dynamical regimes. In the particular case of $\delta=0$, for $\Omega>|\kappa n|$ the dynamics resembles Rabi oscillations for any initial state. For $\Omega<|\kappa n|$, instead, a self-trapped regime characterized by a fixed-sign magnetization appears for initial states such that $\frac{Z^{2}}{\sqrt{1-Z^{2}}}>\frac{2\Omega}{\kappa n}\cos\phi$. For $\Omega<|\kappa n|/2$, the initial states $Z=\pm 1$ are also self-trapped. The nonlinear term in $\mathbf{H}$ is referred to as magnetic anisotropy in the context of ferromagnetism and as a capacitive term in the context of Bose-Josephson dynamics (see also below). Equation (1) takes into account neither density nor spin currents. The effects of these currents can be included by means of a full hydrodynamic description of the system [25, Chap. 21], and the evolution equation becomes equivalent to the Landau-Lifshitz equation [26]. For the measurements presented here, their contribution is negligible, as the applied protocols do not excite strong spin gradients. The nonlinear term $\kappa n_{0}$ can be calculated from the experimental parameters (atom number and trap frequencies), but the accuracy remains poor. In Sec. V, Sec. VI and Sec. VII, we show how we extract it from the spin dynamics in different ways.

## IV Dimensionality reduction

The dynamics of the density and of the pseudo-spin in an elongated two-component Bose-Einstein condensate can be either effectively one- or three-dimensional depending on the characteristic lengths of spin and density excitations in comparison to the radial size of the condensate.
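The pointwise precession equation (3) above is easy to integrate numerically; the following sketch (classical fourth-order Runge-Kutta with dimensionless, illustrative parameters rather than the measured ones) reproduces both the Rabi-like regime and the self-trapping of the $Z=-1$ state for $\Omega<|\kappa n|/2$, and checks that the spin norm $|\mathbf{s}|$ is conserved.

```python
import numpy as np

def precess(s0, omega, delta, kappa, t_end, dt=1e-3):
    """Integrate ds/dt = H(s) x s (Eq. 3) at one spatial point with
    H = (Omega, 0, delta + kappa * s_z), using fourth-order Runge-Kutta.
    The norm |s| is an exact invariant of the motion and is preserved by
    RK4 up to the integration error."""
    def f(s):
        h = np.array([omega, 0.0, delta + kappa * s[2]])
        return np.cross(h, s)
    s = np.array(s0, dtype=float)
    for _ in range(int(round(t_end / dt))):
        k1 = f(s)
        k2 = f(s + 0.5 * dt * k1)
        k3 = f(s + 0.5 * dt * k2)
        k4 = f(s + dt * k3)
        s = s + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return s
```

With $\kappa n=0$ a resonant pulse of duration $\pi/\Omega$ fully inverts $s_{z}$ (a $\pi$-pulse), while for $\Omega<|\kappa n|/2$ a pulse of the same duration leaves the $Z=-1$ state self-trapped.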
In an equally populated uniform sample with total density $n_{3D,0}$, the density and spin excitations are characterized by the healing length $\xi=\hbar/\sqrt{2mn_{3D,0}g}$ and by the spin healing length $\xi_{s}=\hbar/\sqrt{2mn_{3D,0}\delta g}$, respectively. The ratios $R_{\rho}/\xi$ and $R_{\rho}/\xi_{s}$, evaluated in the center of the sample, depend on the choice of the trap parameters and the peak density $n_{3D,0}$ as follows: $\frac{R_{\rho}}{\xi}=\frac{2n_{3D,0}g}{\hbar\omega_{\rho}},$ (6) $\frac{R_{\rho}}{\xi_{s}}=\frac{2n_{3D,0}g}{\hbar\omega_{\rho}}\sqrt{\frac{\delta g}{g}}.$ (7) In our case, $R_{x}$ is always much larger than $\xi$ and $\xi_{s}$. Figure 2: Spatial distribution of the two components ($\ket{\uparrow}$ in blue and $\ket{\downarrow}$ in red) for an effectively 1D (a) and 3D (b) sample. The values of $R_{\rho}/\xi_{s}$ are 1.6 and 4.9, respectively. (c) Magnetization along $y$ at $x=0$, integrated along $z$, after a $\pi$-pulse for different values of $R_{\rho}/\xi_{s}$ (values are reported above the plots). The confidence interval of one standard deviation is indicated as a shaded region. Since $\sqrt{\delta g/g}=0.26$ for our sodium mixture, we can tune the experimental conditions to effectively realize a 1D system for the spin dynamics ($\xi_{s}\sim R_{\rho}$), while the total density of the sample is still well described by the Thomas-Fermi approximation ($\xi\ll R_{\rho}$) and the relevant quantity characterizing the radial size is simply the 3D Thomas-Fermi radius $R_{\rho}$. The following two-step protocol is used in order to discriminate clouds with 1D spin dynamics from those with 3D spin dynamics. First we tune $R_{\rho}/\xi_{s}$ by changing the final trap parameters or the total atom number. Next we apply a resonant coherent coupling pulse ($\delta=0$) to the initial condensate with all atoms in $\ket{\downarrow}$ for a time $t=\pi/\Omega$.
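The outcome of this protocol can be anticipated by integrating the precession equation (3) at a single point. The sketch below (illustrative units with $n=1$; the parameter values are not the experimental ones) contrasts the two regimes that the $\pi$-pulse probes: full transfer in the Rabi regime, blocked transfer when the spin is self-trapped.

```python
import numpy as np

def evolve_sz(omega, delta, kappa, s0, t_final, dt=1e-3):
    """Integrate ds/dt = H(s) x s, H = (omega, 0, delta + kappa*s_z),
    with a fourth-order Runge-Kutta step; returns the s_z trajectory."""
    def rhs(s):
        H = np.array([omega, 0.0, delta + kappa * s[2]])
        return np.cross(H, s)
    s = np.array(s0, dtype=float)
    sz = [s[2]]
    for _ in range(int(round(t_final / dt))):
        k1 = rhs(s)
        k2 = rhs(s + 0.5 * dt * k1)
        k3 = rhs(s + 0.5 * dt * k2)
        k4 = rhs(s + dt * k3)
        s = s + dt * (k1 + 2.0 * k2 + 2.0 * k3 + k4) / 6.0
        sz.append(s[2])
    return np.array(sz)

# Rabi regime (omega > |kappa n|): starting from Z = -1 the spin reaches Z = +1.
sz_rabi = evolve_sz(omega=5.0, delta=0.0, kappa=1.0, s0=(0, 0, -1), t_final=2.0)
# Self-trapped regime (omega < |kappa n|/2): the magnetization keeps its sign.
sz_trap = evolve_sz(omega=1.0, delta=0.0, kappa=5.0, s0=(0, 0, -1), t_final=2.0)
```

Conservation of $E=\Omega s_{x}+\frac{\kappa}{2}s_{z}^{2}$ confines the trapped trajectory to $s_{z}\leq-\sqrt{21}/5\approx-0.92$ for these parameters, while the Rabi-like orbit passes through both poles.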
For the low-density regions of the cloud, the applied pulse corresponds to the well-known Rabi $\pi$-pulse and one expects to observe a full population transfer into the $\ket{\uparrow}$ state. In the denser part, if the nonlinear term exceeds the coupling strength $\Omega$, the population remains trapped in the $\ket{\downarrow}$ state, according to BJJ dynamics. For the data in Fig. 2, we set $\hbar\Omega\approx 0.3n_{3D,0}\delta g$. Evidence of a 1D regime will emerge when the low-density region in the radial direction follows the dynamics of the denser part, remaining self-trapped in $\ket{\downarrow}$. Since $R_{\rho}$ is comparable to our imaging resolution, we let the system expand for a short time, prior to imaging, in order to magnify the radial distribution of the population. After releasing the atoms from the trap, we let them freely expand for 2 ms (3 ms) before the state $\ket{\downarrow}$ ($\ket{\uparrow}$) is imaged. Due to the different expansion times, the observed clouds have different radial dimensions. We rescale the second image along $y$ by considering that the radial size expands according to $R_{\rho}(t)=R_{\rho}(0)\sqrt{1+\omega_{\rho}^{2}t^{2}}$ [27]. While this relation is strictly correct only for an expanding single-component condensate, we observe that it is a good approximation also for the total density of a two-component system, even in the presence of magnetic excitations. Indeed, the large energy difference between density and spin excitations allows the former to dominate the expansion of the condensate. Moreover, since the expansion times are much shorter than $1/\omega_{x}$, the radial expansion proceeds with negligible axial motion, allowing for direct imaging of the radial distribution of the population. Figures 2(a) and 2(b) highlight the differences between the 1D and 3D regimes. In an effectively 1D system, radial features in the magnetization are absent and the population in the center of the cloud remains in $\ket{\downarrow}$, as shown in Fig. 2(a), for which $R_{\rho}/\xi_{s}=1.6$. When the sample is more 3D, radial excitations lead to a nonuniform radial distribution, as can be seen in Fig. 2(b), where $R_{\rho}/\xi_{s}=4.9$. In Fig. 2(c) we average the density along the $x$-axis over the central 100-$\mu$m region for different values of the ratio $R_{\rho}/\xi_{s}$. Note that integration along one of the radial directions happens naturally through the absorption imaging technique. We observe that the transition between radially uniform and inhomogeneous takes place at $R_{\rho}/\xi_{s}\approx 3$. For comparison, single-component condensates in elongated traps admit stable topological structures in the transverse direction for $R_{\rho}/\xi>6$ [28, 29]. In the experiments reported in the next Sections we choose $R_{\rho}/\xi_{s}=2.5$; therefore, in the following, we consider only the 1D axial dynamics. Figure 3: (a, e, i) Spin dynamics represented on the Bloch sphere in the presence (blue, thick) and in the absence (orange, thin) of the nonlinear contribution for the protocols described in Sec. V, Sec. VI and Sec. VII, respectively. Local magnetization for the different procedures as a function of the detuning $\delta$ (b), the final detuning $\delta_{f}$ (f) and time (j). The center of the cloud is at $x=0$ and extends to the Thomas-Fermi radius $R_{x}$. The dashed lines indicate $x=0$ and $x=0.8R_{x}$.
(c, g, k) Vertical cuts along the dashed lines in (b, f, j) for the high- (blue circles, $x\approx 0$) and low-density (orange dots, $x=0.8R_{x}$) regions. Thick and thin lines are fits to the data for the high- and low-density regions, respectively. In (c), the lines correspond to the solution of Eq. (1) with the density as a fitting parameter. A significant asymmetry of the resonance peak is observed in the high-density region. In (g), the line is a sigmoidal function fitted to the data. In (k), the line is a sinusoidal function fitted to the data. (d, h, l) Local nonlinear parameter at different positions (points) extracted from each protocol. The shaded area refers to the prediction of $\kappa n(x)$ obtained from the trap frequencies and atom number. ## V Density-dependent shift In dense atomic clouds, transitions between energy levels are modified by the presence of interactions, whose effects can be introduced by means of mean-field corrections. These are commonly known as collisional shifts and are of great importance in metrology [30]. In a Josephson system, collisional shifts are dominant when the nonlinear mean-field contributions are of the same order of magnitude as (or larger than) the linear coupling strength. Starting from a fully polarized sample in $\ket{\downarrow}$, a Rabi pulse with $t=\pi/\Omega$ and $\Omega=2\pi\times 68.5(5)$ Hz is applied to transfer part of the population to $\ket{\uparrow}$. Depending on the (global) detuning and on the (local) nonlinear contribution, the final magnetization will change locally [see Fig. 3(a,b)]. The measurement is repeated for different values of the detuning $\delta$ of the coupling from the transition frequency, and the final magnetization is plotted in Fig. 3(b).
On the thermal tails of the cloud the density is low enough that the system can be considered a pure two-level system (deep Rabi regime). In this case, the amount of transferred population depends on the detuning $\delta$ according to the well-known sinc-like spectroscopic curve [orange data and curve in Fig. 3(c)]. When the nonlinear term is no longer negligible compared to $\Omega$, the dynamics follows the Josephson equations [blue data and curve in Fig. 3(c)]. The spectroscopic curve becomes asymmetric, with a shifted peak. The direction and magnitude of the shift depend on the sign and magnitude of $\kappa n$, respectively. We fit the data at each position $x$ with the numerical solution of the Josephson equation, with $\kappa n$ as the only free parameter. Figure 3(d) shows the local value of $\kappa n(x)$, where the error bars include the statistical error on the fit of the spectroscopy data (due to shot-to-shot magnetic field and atom number fluctuations) and systematic uncertainties coming from the determination of $\delta=0$ ($\approx 10$ Hz). With this method we obtain $\kappa n_{0}/2\pi=192(11)$ Hz. The light blue area shown in Fig. 3(d,h,l) refers to the prediction of $\kappa n(x)$ obtained from Eq. (5), with the trap frequencies and atom number averaged over the full data acquisition. The expected value is $\kappa n_{0}/2\pi=173(20)$ Hz, and the main source of uncertainty is related to the determination of the atom number.
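The asymmetric line and the shifted peak can be reproduced with the same single-point precession model. A sketch with illustrative parameters in the Josephson regime (unit density, not the experimental values): sweeping $\delta$ at a fixed pulse duration $t=\pi/\Omega$ suppresses the transfer at bare resonance (self-trapping) and pushes the maximum transfer to $\delta>0$ for $\kappa n>0$.

```python
import numpy as np

def final_sz(omega, delta, kappa, t_pulse, dt=1e-3):
    """s_z after a pulse of duration t_pulse, starting from s = (0, 0, -1),
    for ds/dt = H x s with H = (omega, 0, delta + kappa*s_z) (RK4)."""
    s = np.array([0.0, 0.0, -1.0])
    def rhs(s):
        H = np.array([omega, 0.0, delta + kappa * s[2]])
        return np.cross(H, s)
    for _ in range(int(round(t_pulse / dt))):
        k1 = rhs(s)
        k2 = rhs(s + 0.5 * dt * k1)
        k3 = rhs(s + 0.5 * dt * k2)
        k4 = rhs(s + dt * k3)
        s = s + dt * (k1 + 2.0 * k2 + 2.0 * k3 + k4) / 6.0
    return s[2]

omega, kappa = 1.0, 3.0                      # Josephson regime, kappa*n > omega
deltas = np.linspace(-4.0, 4.0, 41)          # deltas[20] is the bare resonance
line = np.array([final_sz(omega, d, kappa, t_pulse=np.pi / omega)
                 for d in deltas])
delta_peak = deltas[np.argmax(line)]         # peak shifted to positive delta
```

At $\delta=0$ the initial state is self-trapped ($\Omega<\kappa n/2$ here), so the transfer stays blocked at bare resonance; the instantaneous resonance condition $\delta+\kappa s_{z}=0$ with $s_{z}<0$ pushes the peak toward positive detunings.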
## VI Density-dependent Adiabatic Rapid Passage Different proposals in the fields of nonlinear spin waves [31, 32], quantum computation and squeezing require the full cloud to be prepared in a single state, with a uniform magnetization $Z=0$. For this task, the procedure presented in the previous Section can be used only if the regime $\Omega\gg\kappa n$ is experimentally reachable. In the case of $\Omega\sim\kappa n$, the magnetization of the cloud after a pulse of duration $t=\pi/\Omega$ is not uniform, as only the regions of the cloud with a certain nonlinearity are transferred for a fixed $\delta$, as Fig. 3(b) clearly shows. A different approach is based on the Adiabatic Rapid Passage (ARP). This can be used, for instance, to generate number-squeezed states [6]. In the ARP, the coupling is applied to a polarized state with an initially large detuning, so that the system is in the state of minimum energy. The detuning is then adiabatically swept to a final value $\delta_{f}$ close to resonance. During the ramp, the local magnetization and $\delta$ are connected through the following relation [6]: $\delta=\Omega\frac{Z}{\sqrt{1-Z^{2}}}+\kappa nZ,$ (8) while $\phi=0$ during the whole passage. Note that, deep in the Rabi regime, the magnetization depends only on $\Omega/\delta$, while in the Josephson regime an additional density-dependent term is present. At the beginning of the ramp, all parts of the cloud are close to the south pole of the Bloch sphere. Due to the inhomogeneous nonlinear interaction, the magnetization has a position-dependent evolution. However, if $\delta$ is adiabatically reduced to zero, at the end of the ARP the whole system reaches $Z=0$ simultaneously, independent of the value of the local nonlinear parameter, as sketched in Fig. 3(e).
In our experiment, we start from a polarized sample in $\ket{\downarrow}$ and turn on a coupling with $\Omega=2\pi\times 273(1)$ Hz at an initial detuning $\delta\approx 2\pi\times 3$ kHz. For experimental convenience, and taking advantage of the dependence of $\delta$ on the magnetic field $B$, the sweep of the detuning is performed by keeping the microwave frequencies constant and varying the strength of the magnetic field in 50 ms with a nonlinear ramp. The ramp is stopped at a variable final $\delta_{f}$, and in Fig. 3(f) we plot the magnetization of the sample as a function of the coordinate $x$ and $\delta_{f}$. The magnetization at $\delta=0$ obtained with the ARP procedure is less sensitive to magnetic field fluctuations since, expanding Eq. (8) near $Z=0$, one gets $\frac{\partial Z}{\partial\delta}=\frac{1}{\Omega+\kappa n},$ (9) which is lowered by the nonlinear term. Figure 3(g) shows how the final value of the magnetization is sensitive to the final detuning, with a smaller sensitivity in the central part of the system (blue points) than at the edges (orange). Remarkably, this method allows for a clean preparation of the extended system in a uniform $Z=0$ state at the expected value $\delta_{f}=0$, thanks to the symmetric interaction constants of 23Na. This result is not trivial, since the magnetization indeed varies with a different velocity at each spatial coordinate. However, the symmetric dynamics on the Bloch sphere leads the magnetization to reach zero at the same time for the whole cloud. Note that the efficiency of the full rotation is increased by the nonlinear term.
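Relation (8) can be inverted numerically to picture the passage: for any value of the nonlinearity the magnetization at $\delta=0$ is exactly $Z=0$, and the slope there reproduces Eq. (9). A sketch in illustrative units (the right-hand side of Eq. (8) is strictly increasing in $Z$, so bisection applies):

```python
import numpy as np

def arp_magnetization(delta, omega, kappa_n, tol=1e-12):
    """Solve Eq. (8), delta = omega*Z/sqrt(1 - Z^2) + kappa_n*Z, for Z.
    The right-hand side is strictly increasing on (-1, 1), so bisect."""
    f = lambda Z: omega * Z / np.sqrt(1.0 - Z * Z) + kappa_n * Z - delta
    lo, hi = -1.0 + 1e-12, 1.0 - 1e-12
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) < 0 else (lo, mid)
    return 0.5 * (lo + hi)

omega, kappa_n = 1.0, 2.0
Z0 = arp_magnetization(0.0, omega, kappa_n)   # Z = 0 regardless of kappa_n
eps = 1e-4                                    # numerical slope at delta = 0
slope = (arp_magnetization(eps, omega, kappa_n)
         - arp_magnetization(-eps, omega, kappa_n)) / (2.0 * eps)
# slope matches 1/(omega + kappa_n), Eq. (9)
```

The reduced slope is exactly the reduced sensitivity to magnetic-field fluctuations discussed above: the larger the nonlinearity, the flatter $Z(\delta)$ near resonance.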
By fitting the dynamics of the magnetization at each position $x$ with a sigmoidal function, we can extract the slope of the magnetization as a function of $\delta$ and hence $\kappa n$, applying Eq. (9) [see Fig. 3(h)]. With such a procedure, we obtain $\kappa n_{0}/2\pi=200(15)$ Hz. The error bars include the statistical error on the fit and systematic uncertainties coming from the imaging procedure (uncertainty on the state population) and from a non-perfect adiabaticity of the process. These systematic contributions strongly enhance the uncertainty on the value of $\kappa n$ compared to the one obtained in Sec. V. ## VII Plasma oscillations In the presence of coherent coupling and at $\delta=0$, the ground state of the system is uniformly $Z=0$, $\phi=0$. For small deviations from the ground state, the Josephson dynamics predicts small oscillations around $Z=0$ and $\phi=0$, which are known as plasma oscillations [see Fig. 3(i)]. Their frequency follows $\omega_{p}=\sqrt{\Omega(\Omega+\kappa n)},$ (10) allowing us to determine $\kappa n$ from independent measurements of $\Omega$ and $\omega_{p}$. Figure 4: Observed oscillation frequency $\omega_{p}$ as a function of the Rabi frequency (points). Error bars are smaller than the marker size. The line is a fit of Eq. (10) with $\kappa n$ as a free parameter, yielding the value $\kappa n_{0}/2\pi=164(3)$ Hz. The sample is prepared in $Z=0,\phi=0$ with the previously described ARP procedure. Then, the phase of the coupling is suddenly changed from $\phi=0$ to $\phi=0.1\pi$, starting the oscillatory dynamics. We extract the frequency of oscillation $\omega_{p}$ by fitting a sinusoid to the local magnetization. According to Eq. (10), we then determine $\kappa n$ at different positions $x$.
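Equation (10) is the linearization of the single-point precession equation around the coupled ground state. A sketch in illustrative units ($n=1$, $\delta=0$; with the sign conventions used here the stable equilibrium lies along $-x$) that extracts $\omega_{p}$ from the zero crossings of $s_{z}(t)$ after a small phase kick:

```python
import numpy as np

omega, kappa = 1.0, 2.0                 # illustrative values, unit density
theta = 0.1 * np.pi                     # small phase kick, as in the text
s = np.array([-np.cos(theta), np.sin(theta), 0.0])   # near equilibrium (-1,0,0)

def rhs(s):
    H = np.array([omega, 0.0, kappa * s[2]])         # Eq. (3) with delta = 0
    return np.cross(H, s)

dt, n_steps = 1e-3, 40000
ts = dt * np.arange(n_steps)
sz = np.empty(n_steps)
for i in range(n_steps):                # RK4 integration of the precession
    sz[i] = s[2]
    k1 = rhs(s)
    k2 = rhs(s + 0.5 * dt * k1)
    k3 = rhs(s + 0.5 * dt * k2)
    k4 = rhs(s + dt * k3)
    s = s + dt * (k1 + 2.0 * k2 + 2.0 * k3 + k4) / 6.0

# Plasma frequency from the mean spacing of the zero crossings of s_z(t)
idx = np.where(np.sign(sz[:-1]) * np.sign(sz[1:]) < 0)[0]
omega_p = np.pi / np.mean(np.diff(ts[idx]))
# Eq. (10) predicts omega_p = sqrt(omega * (omega + kappa * n))
```

Linearizing around the equilibrium gives $\ddot{s}_{z}=-\Omega(\Omega+\kappa n)s_{z}$, so the extracted frequency should agree with Eq. (10) up to a small anharmonic correction of order $\theta^{2}$.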
For each fit, we determine the initial guess for the frequency from the peak of the Fourier transform of the data. In this case we obtain $\kappa n_{0}/2\pi=161(3)$ Hz at the center of the cloud [Fig. 3(k), blue points and line]. In the low-density regions of the sample the noise is larger due to the low atom number; however, the observed dynamics is compatible with the independently calibrated Rabi frequency [Fig. 3(k), orange line]. The reason for the high precision of the determination of $\kappa n(x)$ from plasma oscillations compared to the previous methods is twofold. First, fluctuations of the magnetic field, which enter as an uncertainty on $\delta$, result in an uncertainty on $\kappa n$ below 1%. Second, uncertainties on the observed magnetization affect the amplitude of the oscillation, but hardly its frequency. We repeat the procedure for different $\Omega$. After preparation of the sample in the $Z=0,\phi=0$ state, the detuning $\Delta$ is suddenly modified, changing the Rabi frequency. The phase is changed to $\phi=0.1\pi$ as well. We extract the oscillation frequency at the center of the cloud as a function of the Rabi frequency [Fig. 4], and by fitting Eq. (10) to the data we determine $\kappa n$ over a range of Rabi frequencies with low statistical uncertainties. ## VIII Conclusions and outlook We have characterized the properties of an elongated Josephson junction based on two coherently coupled atomic spin states of 23Na.
After identifying the regime where the dynamics is 1D-like, we demonstrate the capability to calibrate the nonlinear term of the BJJ dynamics with different protocols. We adiabatically manipulate the internal state on the Bloch sphere to produce a sample with uniform magnetization. In addition to the presented ARP procedure, future investigations can focus on the search for shortcuts to adiabaticity, based on different ramps of the driving detuning and amplitude, in order to decrease losses and decoherence of the system during the state preparation [33]. The full control of the quantum state of an elongated Josephson junction represents a cornerstone for future investigations in the field of nonlinear dynamics and towards new metrological tools. The system can be driven to points of the Bloch sphere that are far from the equilibrium position but present a locally different evolution due to the non-uniform nonlinearity, leading to localized and propagating instabilities [34, 16]. In an elongated cloud, the interplay between a spatially non-uniform squeezing and the long-range entanglement requires further theoretical and experimental investigations, with particular focus on local and global correlations [14]. While the presented results focus on the dynamics in a one-dimensional system, the possibility of tuning the effective dimensionality of the system could allow the experimental investigation of topological excitations in the transverse directions, such as domain walls and vortices [35, 36, 37]. ## IX Acknowledgements We thank I. Carusotto and P. Hauke for fruitful discussions. We acknowledge funding from INFN through the FISH project, from the European Union’s Horizon 2020 Programme through the NAQUAS project of QuantERA ERA-NET Cofund in Quantum Technologies (Grant Agreement No. 731473), from the Italian MIUR under the PRIN2017 project CEnTraL (Protocol Number 20172H2SC4) and from Provincia Autonoma di Trento.
We thank the BEC Center in Trento, the Q@TN initiative and QuTip. ## References * Josephson [1962] B. D. Josephson, Possible new effects in superconductive tunnelling, Phys. Lett. 1, 252 (1962). * Sato _et al._ [2019] Y. Sato, E. Hoskinson, and R. E. Packard, Josephson effects in superfluid Helium, in _Fundamentals and Frontiers of the Josephson Effect_, edited by F. Tafuri (Springer International Publishing, Cham, 2019) pp. 765–810. * Albiez _et al._ [2005] M. Albiez, R. Gati, J. Fölling, S. Hunsmann, M. Cristiani, and M. K. Oberthaler, Direct observation of tunneling and nonlinear self-trapping in a single bosonic Josephson junction, Phys. Rev. Lett. 95, 010402 (2005). * Zibold _et al._ [2010] T. Zibold, E. Nicklas, C. Gross, and M. K. Oberthaler, Classical bifurcation at the transition from Rabi to Josephson dynamics, Phys. Rev. Lett. 105, 204101 (2010). * Smerzi _et al._ [1997] A. Smerzi, S. Fantoni, S. Giovanazzi, and S. R. Shenoy, Quantum coherent atomic tunneling between two trapped Bose-Einstein condensates, Phys. Rev. Lett. 79, 4950 (1997). * Steel and Collett [1998] M. J. Steel and M. J. Collett, Quantum state of two trapped Bose-Einstein condensates with a Josephson coupling, Phys. Rev. A 57, 2920 (1998). * Spagnolli _et al._ [2017] G. Spagnolli, G. Semeghini, L. Masi, G. Ferioli, A. Trenkwalder, S. Coop, M. Landini, L. Pezzè, G. Modugno, M. Inguscio, A. Smerzi, and M. Fattori, Crossing over from attractive to repulsive interactions in a tunneling bosonic Josephson junction, Phys. Rev. Lett. 118, 230403 (2017). * Estève _et al._ [2008] J. Estève, C. Gross, A. Weller, S. Giovanazzi, and M. K. Oberthaler, Squeezing and entanglement in a Bose–Einstein condensate, Nature 455, 1216 (2008). * Gross _et al._ [2010] C. Gross, T. Zibold, E. Nicklas, J. Estève, and M. K. Oberthaler, Nonlinear atom interferometer surpasses classical precision limit, Nature 464, 1165 (2010). * Nicklas _et al._ [2011] E. Nicklas, H. Strobel, T. Zibold, C. Gross, B. A. Malomed, P. G. 
Kevrekidis, and M. K. Oberthaler, Rabi flopping induces spatial demixing dynamics, Phys. Rev. Lett. 107, 193001 (2011). * Nicklas _et al._ [2015a] E. Nicklas, W. Muessel, H. Strobel, P. G. Kevrekidis, and M. K. Oberthaler, Nonlinear dressed states at the miscibility-immiscibility threshold, Phys. Rev. A 92, 053614 (2015a). * Pigneur _et al._ [2018] M. Pigneur, T. Berrada, M. Bonneau, T. Schumm, E. Demler, and J. Schmiedmayer, Relaxation to a phase-locked equilibrium state in a one-dimensional bosonic Josephson junction, Phys. Rev. Lett. 120, 173601 (2018). * Tononi _et al._ [2020] A. Tononi, F. Toigo, S. Wimberger, A. Cappellaro, and L. Salasnich, Dephasing–rephasing dynamics of one-dimensional tunneling quasicondensates, New Journal of Physics 22, 073020 (2020). * Latz [2019] B. M. Latz, _Master thesis: Multipartite Entanglement from Quench Dynamics in Spinor Bose Gases using Bogoliubov Theory_ (Heidelberg University, 2019). * Nicklas _et al._ [2015b] E. Nicklas, M. Karl, M. Höfer, A. Johnson, W. Muessel, H. Strobel, J. Tomkovič, T. Gasenzer, and M. K. Oberthaler, Observation of scaling in the dynamics of a strongly quenched quantum gas, Phys. Rev. Lett. 115, 245301 (2015b). * Farolfi _et al._ [2020] A. Farolfi, A. Zenesini, D. Trypogeorgos, C. Mordini, A. Gallemì, A. Roy, A. Recati, G. Lamporesi, and G. Ferrari, Quantum-torque-induced breaking of magnetic domain walls in ultracold gases (2020), arXiv:2011.04271 [cond-mat.quant-gas] . * Colzi _et al._ [2016] G. Colzi, G. Durastante, E. Fava, S. Serafini, G. Lamporesi, and G. Ferrari, Sub-doppler cooling of sodium atoms in gray molasses, Phys. Rev. A 93, 023421 (2016). * Colzi _et al._ [2018] G. Colzi, E. Fava, M. Barbiero, C. Mordini, G. Lamporesi, and G. Ferrari, Production of large Bose-Einstein condensates in a magnetic-shield-compatible hybrid trap, Phys. Rev. A 97, 053625 (2018). * Farolfi _et al._ [2019] A. Farolfi, D. Trypogeorgos, G. Colzi, E. Fava, G. Lamporesi, and G. 
Ferrari, Design and characterization of a compact magnetic shield for ultracold atomic gas experiments, Review of Scientific Instruments 90, 115114 (2019). * Fava _et al._ [2018] E. Fava, T. Bienaimé, C. Mordini, G. Colzi, C. Qu, S. Stringari, G. Lamporesi, and G. Ferrari, Observation of spin superfluidity in a bose gas mixture, Phys. Rev. Lett. 120, 170401 (2018). * Raghavan _et al._ [1999] S. Raghavan, A. Smerzi, S. Fantoni, and S. R. Shenoy, Coherent oscillations between two weakly coupled Bose-Einstein condensates: Josephson effects, $\pi$ oscillations, and macroscopic quantum self-trapping, Phys. Rev. A 59, 620 (1999). * Knoop _et al._ [2011] S. Knoop, T. Schuster, R. Scelle, A. Trautmann, J. Appmeier, M. K. Oberthaler, E. Tiesinga, and E. Tiemann, Feshbach spectroscopy and analysis of the interaction potentials of ultracold sodium, Phys. Rev. A 83, 042704 (2011). * Bienaimé _et al._ [2016] T. Bienaimé, E. Fava, G. Colzi, C. Mordini, S. Serafini, C. Qu, S. Stringari, G. Lamporesi, and G. Ferrari, Spin-dipole oscillation and polarizability of a binary Bose-Einstein condensate near the miscible-immiscible phase transition, Phys. Rev. A 94, 063652 (2016). * Nikuni and Williams [2003] T. Nikuni and J. E. Williams, Kinetic theory of a spin-1/2 Bose-condensed gas, Journal of Low Temperature Physics 133, 323 (2003). * Pitaevskii and Stringari [2016] L. Pitaevskii and S. Stringari, _Bose-Einstein condensation and superfluidity_, International series of monographs on physics (Oxford University Press, Oxford, 2016). * [26] L. Landau and E. Lifshitz, On the theory of the dispersion of magnetic permeability in ferromagnetic bodies., Phys. Z. Sowjetunion 8, 153. * Castin and Dum [1996] Y. Castin and R. Dum, Bose-Einstein condensates in time dependent traps, Phys. Rev. Lett. 77, 5315 (1996). * Brand and Reinhardt [2002] J. Brand and W. P. 
Reinhardt, Solitonic vortices and the fundamental modes of the “snake instability”: Possibility of observation in the gaseous Bose-Einstein condensate, Phys. Rev. A 65, 043612 (2002). * Muñoz Mateo and Brand [2014] A. Muñoz Mateo and J. Brand, Chladni solitons and the onset of the snaking instability for dark solitons in confined superfluids, Phys. Rev. Lett. 113, 255302 (2014). * Harber _et al._ [2002] D. M. Harber, H. J. Lewandowski, J. M. McGuirk, and E. A. Cornell, Effect of cold collisions on spin coherence and resonance shifts in a magnetically trapped ultracold gas, Phys. Rev. A 66, 053616 (2002). * Qu _et al._ [2016] C. Qu, L. P. Pitaevskii, and S. Stringari, Magnetic solitons in a binary Bose-Einstein condensate, Phys. Rev. Lett. 116, 160402 (2016). * Qu _et al._ [2017] C. Qu, M. Tylutki, S. Stringari, and L. P. Pitaevskii, Magnetic solitons in Rabi-coupled Bose-Einstein condensates, Phys. Rev. A 95, 033614 (2017). * Guéry-Odelin _et al._ [2019] D. Guéry-Odelin, A. Ruschhaupt, A. Kiely, E. Torrontegui, S. Martínez-Garaot, and J. G. Muga, Shortcuts to adiabaticity: Concepts, methods, and applications, Rev. Mod. Phys. 91, 045001 (2019). * Bernier _et al._ [2014] N. R. Bernier, E. G. Dalla Torre, and E. Demler, Unstable avoided crossing in coupled spinor condensates, Phys. Rev. Lett. 113, 065303 (2014). * Son and Stephanov [2002] D. T. Son and M. A. Stephanov, Domain walls of relative phase in two-component Bose-Einstein condensates, Phys. Rev. A 65, 063621 (2002). * Tylutki _et al._ [2016] M. Tylutki, L. P. Pitaevskii, A. Recati, and S. Stringari, Confinement and precession of vortex pairs in coherently coupled Bose-Einstein condensates, Phys. Rev. A 93, 043623 (2016). * Calderaro _et al._ [2017] L. Calderaro, A. L. Fetter, P. Massignan, and P. Wittek, Vortex dynamics in coherently coupled Bose-Einstein condensates, Phys. Rev. A 95, 023605 (2017).
# One dimensional martingale rearrangement couplings B. Jourdain Université Paris-Est, Cermics (ENPC), INRIA F-77455 Marne-la-Vallée, France. E-mails<EMAIL_ADDRESS><EMAIL_ADDRESS>- This research benefited from the support of the “Chaire Risques Financiers”, Fondation du Risque. W. Margheriti11footnotemark: 1 ###### Abstract We are interested in martingale rearrangement couplings. As introduced by Wiesel [37] in order to prove the stability of Martingale Optimal Transport problems, these are projections in adapted Wasserstein distance of couplings between two probability measures on the real line in the convex order onto the set of martingale couplings between these two marginals. Owing to the lack of relative compactness of the set of couplings with given marginals for the adapted Wasserstein topology, the existence of such a projection is not clear at all. Under a barycentre dispersion assumption on the original coupling, which is in particular satisfied by the Hoeffding-Fréchet or comonotone coupling, Wiesel gives a clear algorithmic construction of a martingale rearrangement when the marginals are finitely supported, and then gets rid of the finite support assumption by relying on a rather messy limiting procedure to overcome the lack of relative compactness. Here, we give a direct general construction of a martingale rearrangement coupling under the barycentre dispersion assumption. This martingale rearrangement is obtained from the original coupling by an approach similar to the construction we gave in [24] of the inverse transform martingale coupling, a member of a family of martingale couplings close to the Hoeffding-Fréchet coupling, but for a slightly different injection in the set of extended couplings introduced by Beiglböck and Juillet [9], which involve the uniform distribution on $[0,1]$ in addition to the two marginals.
We finally discuss the stability in adapted Wasserstein distance of the inverse transform martingale coupling with respect to the marginal distributions. Keywords: Martingale couplings, Martingale Optimal Transport, Adapted Wasserstein distance, Robust finance, Convex order. ## 1 Introduction Let $\rho\geq 1$ and $\mu,\nu$ be in the set $\mathcal{P}_{\rho}(\mathbb{R})$ of probability measures on the real line with a finite moment of order $\rho$. We denote by $\Pi(\mu,\nu)$ the set of couplings between $\mu$ and $\nu$, that is, $\pi\in\Pi(\mu,\nu)$ iff $\pi$ is a measure on $\mathbb{R}\times\mathbb{R}$ with first marginal $\mu$ and second marginal $\nu$. We denote by $\Pi^{\mathrm{M}}(\mu,\nu)$ the set of martingale couplings between $\mu$ and $\nu$: $\Pi^{\mathrm{M}}(\mu,\nu)=\left\\{M\in\Pi(\mu,\nu)\mid\mu(dx)\textrm{-a.e.},\ \int_{\mathbb{R}}y\,M_{x}(dy)=x\right\\},$ (1.1) where for any coupling $\pi\in\Pi(\mu,\nu)$ we denote by $(\pi_{x})_{x\in\mathbb{R}}$ its disintegration with respect to its first marginal, that is $\pi(dx,dy)=\mu(dx)\,\pi_{x}(dy)$, or with a slight abuse of notation $\pi=\mu\times\pi_{x}$. The celebrated Strassen theorem [35] ensures that $\Pi^{\mathrm{M}}(\mu,\nu)\neq\emptyset$ iff $\mu$ and $\nu$ are in the convex order, which we denote by $\mu\leq_{cx}\nu$, that is, iff $\int_{\mathbb{R}}f(x)\,\mu(dx)\leq\int_{\mathbb{R}}f(y)\,\nu(dy)$ for any convex function $f:\mathbb{R}\to\mathbb{R}$. Fix $\pi\in\Pi(\mu,\nu)$. We are interested in finding a projection of $\pi$ on the set $\Pi^{\mathrm{M}}(\mu,\nu)$ for the adapted Wasserstein distance $\mathcal{AW}_{\rho}$ (defined in (1.5) below), that is, finding a martingale coupling $M$ between $\mu$ and $\nu$ such that $\mathcal{AW}_{\rho}(\pi,M)=\inf_{M^{\prime}\in\Pi^{\mathrm{M}}(\mu,\nu)}\mathcal{AW}_{\rho}(\pi,M^{\prime}).$ (1.2) This problem aroused our interest when Wiesel [37] highlighted its connection for $\rho=1$ with the stability of the Martingale Optimal Transport (MOT) problem.
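For finitely supported measures, the convex order appearing in Strassen's theorem can be tested directly through call functions: with equal means, $\mu\leq_{cx}\nu$ iff $\int(x-k)_{+}\,\mu(dx)\leq\int(x-k)_{+}\,\nu(dx)$ for every $k$, and for atomic measures it suffices to check $k$ at the atoms, since the difference of the two call prices is piecewise affine with kinks there and vanishes at $\pm\infty$. A sketch, with the (atom, weight) list encoding chosen here purely for illustration:

```python
def call(points, k):
    """E[(X - k)_+] for a discrete measure given as (atom, weight) pairs."""
    return sum(w * max(x - k, 0.0) for x, w in points)

def convex_order(mu, nu, tol=1e-9):
    """Check mu <=_cx nu for finitely supported probability measures:
    equal means plus ordered call prices at every atom."""
    mean = lambda pts: sum(w * x for x, w in pts)
    if abs(mean(mu) - mean(nu)) > tol:
        return False
    atoms = {x for x, _ in mu} | {x for x, _ in nu}
    return all(call(mu, k) <= call(nu, k) + tol for k in atoms)

mu = [(0.0, 1.0)]                      # Dirac mass at 0
nu = [(-1.0, 0.5), (1.0, 0.5)]         # fair coin on {-1, 1}
# mu <=_cx nu holds, so a martingale coupling exists; the converse fails
```

Here the unique martingale coupling sends the mass at $0$ to $\pm 1$ with probability $1/2$ each, while no martingale coupling can transport $\nu$ back to $\mu$.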
The MOT problem was introduced in discrete time by Beiglböck, Henry-Labordère and Penkner [6] and in continuous time by Galichon, Henry-Labordère and Touzi [17] in order to get model-free bounds on an option price. It consists in the classical Optimal Transport problem, which was formulated by Gaspard Monge [27] in 1781 and modernised by Kantorovich [25] in 1942, to which an additional martingale constraint is added in order to reflect the arbitrage-free condition of the market. In our setting the MOT problem consists in the minimisation $\operatorname{MOT}(\mu,\nu):=\inf_{M\in\Pi^{\mathrm{M}}(\mu,\nu)}\int_{\mathbb{R}\times\mathbb{R}}C(x,y)\,M(dx,dy),$ (MOT) where $C:\mathbb{R}\times\mathbb{R}\to\mathbb{R}_{+}$ is a nonnegative measurable payoff function. The study of its stability, that is of the continuity of the map $(\mu,\nu)\mapsto\operatorname{MOT}(\mu,\nu)$, is of major importance, since it guarantees the robustness of model-free bounds on an option price. Backhoff-Veraguas and Pammer [5] gave a positive answer under mild regularity assumptions by showing the stability of the so-called martingale $C$-monotonicity property, which is proved sufficient for optimality. Independently, Wiesel [37] also gave a positive answer. More recently, Beiglböck, Pammer and the two authors generalised those stability results to the weak MOT problem [8]. For adaptations of celebrated results of classical optimal transport theory to the MOT problem, we refer to Beiglböck and Juillet [10], Henry-Labordère, Tan and Touzi [21] and Henry-Labordère and Touzi [22]. On duality, we refer to Beiglböck, Nutz and Touzi [12], Beiglböck, Lim and Obłój [11] and De March [15]. We also refer to De March [14] and De March and Touzi [16] for the multi-dimensional case.
We recall that the Wasserstein distance with index $\rho$ between $\mu$ and $\nu$ is defined by $\mathcal{W}_{\rho}(\mu,\nu)=\inf_{\pi\in\Pi(\mu,\nu)}\left(\int_{\mathbb{R}\times\mathbb{R}}|x-y|^{\rho}\,\pi(dx,dy)\right)^{1/\rho}.$ (1.3) The infimum is attained by the comonotonic or Hoeffding-Fréchet coupling $\pi^{HF}$ between $\mu$ and $\nu$, that is the image of the Lebesgue measure on $(0,1)$ by $u\mapsto(F_{\mu}^{-1}(u),F_{\nu}^{-1}(u))$, where $F_{\eta}^{-1}(u)=\inf\\{x\in\mathbb{R}:\eta((-\infty,x])\geq u\\}$ denotes the quantile function of a probability measure $\eta$ on $\mathbb{R}$. As a consequence, $\mathcal{W}_{\rho}(\mu,\nu)=\left(\int_{(0,1)}|F_{\mu}^{-1}(u)-F_{\nu}^{-1}(u)|^{\rho}du\right)^{1/\rho}.$ (1.4) The topology induced by the Wasserstein distance is not always well suited to every setting, especially in mathematical finance. Indeed, this distance does not take into account the temporal structure of martingales. One is easily convinced that two stochastic processes which are very close in Wasserstein distance can carry radically different information, as [3, Figure 1] illustrates very well. Therefore, one needs to strengthen, or adapt, this usual topology. This can be done in many different ways, such as the adapted weak topology (see below), Hellwig’s information topology [20], Aldous’s extended weak topology [1] or the optimal stopping topology [4]. Strikingly, all those apparently independent topologies are actually equal, at least in discrete time [4, Theorem 1.1]. Hence there is no loss of generality in focusing on the so-called adapted Wasserstein distance.
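Formula (1.4) makes $\mathcal{W}_{\rho}$ directly computable from the two quantile functions. A minimal numerical sketch (our names, not from the paper; a uniform grid on $(0,1)$ replaces the $du$-integral):

```python
import numpy as np

def quantile(atoms, masses, u):
    """F_eta^{-1}(u) = inf{x : F_eta(x) >= u} for a finitely supported eta."""
    atoms, masses = np.asarray(atoms, float), np.asarray(masses, float)
    o = np.argsort(atoms)
    return atoms[o][np.searchsorted(np.cumsum(masses[o]), u)]

def wasserstein(mu, nu, rho=1, n=100_000):
    """Approximate (1.4) on the grid u_k = (k + 1/2)/n; mu and nu are
    (atoms, masses) pairs.  This is exactly the cost of the Hoeffding-Frechet
    coupling u -> (F_mu^{-1}(u), F_nu^{-1}(u))."""
    u = (np.arange(n) + 0.5) / n
    gaps = np.abs(quantile(*mu, u) - quantile(*nu, u))
    return float(np.mean(gaps ** rho) ** (1 / rho))

# mu = 1/2(d_{-1} + d_1), nu = 1/2(d_{-2} + d_2): the comonotonic
# coupling pairs -1 with -2 and 1 with 2, so W_1(mu, nu) = 1.
w1 = wasserstein(([-1, 1], [0.5, 0.5]), ([-2, 2], [0.5, 0.5]))
```

On this example every quantile gap equals $1$, so the grid approximation is exact.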
For an extensive background, we refer to [28, 29, 30, 31, 26, 13]. For all $\mu^{\prime},\nu^{\prime}\in\mathcal{P}_{\rho}(\mathbb{R})$ and $\pi^{\prime}\in\Pi(\mu^{\prime},\nu^{\prime})$, the adapted Wasserstein distance with index $\rho$ between $\pi$ and $\pi^{\prime}$ is defined by $\mathcal{AW}_{\rho}(\pi,\pi^{\prime})=\inf_{\chi\in\Pi(\mu,\mu^{\prime})}\left(\int_{\mathbb{R}\times\mathbb{R}}\left(|x-x^{\prime}|^{\rho}+\mathcal{W}_{\rho}^{\rho}(\pi_{x},\pi^{\prime}_{x^{\prime}})\right)\,\chi(dx,dx^{\prime})\right)^{1/\rho}.$ (1.5) Note that by Lemma 5.1 below there always exists a coupling $\chi\in\Pi(\mu,\mu^{\prime})$ optimal for $\mathcal{AW}_{\rho}(\pi,\pi^{\prime})$. Moreover it is easy to check that $\mathcal{W}_{\rho}\leq\mathcal{AW}_{\rho}$, so that $\mathcal{AW}_{\rho}$ induces a finer topology than $\mathcal{W}_{\rho}$. Wiesel [37] studies Problem (1.2) for $\rho=1$ and introduces the notion of martingale rearrangement: a martingale coupling $M\in\Pi^{\mathrm{M}}(\mu,\nu)$ is called a martingale rearrangement coupling of $\pi$ if $\mathcal{AW}_{1}(\pi,M)=\inf_{M^{\prime}\in\Pi^{\mathrm{M}}(\mu,\nu)}\mathcal{AW}_{1}(\pi,M^{\prime}).$ (1.6) Actually, he works with the nested Wasserstein distance, which, as shown in the appendix, coincides with the adapted Wasserstein distance. In the present paper, even though we mainly concentrate on martingale rearrangements, we will also consider a slight extension of the latter definition: a martingale coupling $M\in\Pi^{\mathrm{M}}(\mu,\nu)$ is called an $\mathcal{AW}_{\rho}$-minimal martingale rearrangement coupling of $\pi$ if $\mathcal{AW}_{\rho}(\pi,M)=\inf_{M^{\prime}\in\Pi^{\mathrm{M}}(\mu,\nu)}\mathcal{AW}_{\rho}(\pi,M^{\prime}).$ (1.7) Note that the existence of an $\mathcal{AW}_{\rho}$-minimal martingale rearrangement coupling is not clear in the general case.
Indeed, let $(M_{n})_{n\in\mathbb{N}}$ be a sequence of martingale couplings between $\mu$ and $\nu$ such that $(\mathcal{AW}_{\rho}(\pi,M_{n}))_{n\in\mathbb{N}}$ converges to $\inf_{M^{\prime}\in\Pi^{\mathrm{M}}(\mu,\nu)}\mathcal{AW}_{\rho}(\pi,M^{\prime})$. The tightness of the marginals $\mu$ and $\nu$ guarantees tightness and therefore relative compactness of $(M_{n})_{n\in\mathbb{N}}$ for the $\mathcal{W}_{\rho}$-distance, but not necessarily for the $\mathcal{AW}_{\rho}$-distance. To compensate for this lack of relative compactness, Wiesel [37] introduces a new assumption: the coupling $\pi\in\Pi(\mu,\nu)$ is said to satisfy the barycentre dispersion assumption iff $\forall a\in\mathbb{R},\quad\int_{\mathbb{R}}\mathds{1}_{[a,+\infty)}(x)\left(x-\int_{\mathbb{R}}y\,\pi_{x}(dy)\right)\,\mu(dx)\leq 0.$ (1.8) The latter assumption is important in this context since it provides a sufficient condition for a coupling $\pi$ between $\mu$ and $\nu$ to admit a martingale rearrangement coupling. More precisely, Wiesel shows [37, Lemma 2.1] that in the general case, $\inf_{M^{\prime}\in\Pi^{\mathrm{M}}(\mu,\nu)}\mathcal{AW}_{1}(\pi,M^{\prime})\geq\int_{\mathbb{R}}\left|\int_{\mathbb{R}}y\,\pi_{x}(dy)-x\right|\,\mu(dx),$ (1.9) and that there exists $M\in\Pi^{\mathrm{M}}(\mu,\nu)$ such that $\mathcal{AW}_{1}(\pi,M)=\int_{\mathbb{R}}\left|\int_{\mathbb{R}}y\,\pi_{x}(dy)-x\right|\,\mu(dx)$ when $\pi$ satisfies the barycentre dispersion assumption (1.8) [37, Proposition 2.4]. Problem (1.2) was in a certain way already considered by Rüschendorf [34], who looked for a projection of a probability measure on a set of probability measures with given linear constraints. Since the martingale constraint is linear, his study encompasses our problem. Yet he considered the projection with respect to the Kullback-Leibler distance, also known as relative entropy, in place of $\mathcal{AW}_{\rho}$, and this does not suit our purpose.
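When the first marginals are uniform on $n$ points, the infimum over $\chi$ in (1.5) may be restricted to permutation couplings (by Birkhoff's theorem, since the cost is linear in $\chi$), so $\mathcal{AW}_{1}$ between small discrete couplings can be computed by brute force. The following sketch (our names; a numerical illustration only) checks that the lower bound (1.9) is attained with equality on the Hoeffding-Fréchet coupling of $\mu=\frac12(\delta_{-1}+\delta_{1})$ and $\nu=\frac12(\delta_{-2}+\delta_{2})$, for the martingale coupling with kernels $m_{-1}=\frac34\delta_{-2}+\frac14\delta_{2}$ and $m_{1}=\frac14\delta_{-2}+\frac34\delta_{2}$.

```python
import itertools
import numpy as np

def w1(mu, nu, n=20_000):
    """W_1 between (atoms, masses) pairs via the quantile formula (1.4)."""
    u = (np.arange(n) + 0.5) / n
    def q(atoms, masses):
        atoms, masses = np.asarray(atoms, float), np.asarray(masses, float)
        o = np.argsort(atoms)
        return atoms[o][np.searchsorted(np.cumsum(masses[o]), u)]
    return float(np.mean(np.abs(q(*mu) - q(*nu))))

def aw1(xs, kernels, xs2, kernels2):
    """AW_1 as in (1.5) for couplings whose first marginals are uniform
    on len(xs) points; the optimal chi is some permutation coupling."""
    n = len(xs)
    cost = [[abs(xs[i] - xs2[j]) + w1(kernels[i], kernels2[j])
             for j in range(n)] for i in range(n)]
    return min(sum(cost[i][s[i]] for i in range(n))
               for s in itertools.permutations(range(n))) / n

xs = [-1.0, 1.0]
pi_hf = [([-2.0], [1.0]), ([2.0], [1.0])]             # pi_{-1} = d_{-2}, pi_1 = d_2
m = [([-2.0, 2.0], [0.75, 0.25]), ([-2.0, 2.0], [0.25, 0.75])]
lower_bound = 0.5 * abs(-2.0 - (-1.0)) + 0.5 * abs(2.0 - 1.0)   # rhs of (1.9)
aw = aw1(xs, pi_hf, xs, m)
```

Here the identity permutation is optimal and the computed distance matches the right-hand side of (1.9), consistently with [37, Proposition 2.4].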
More recently, Gerhold and Gülüm looked at a very similar problem [18, Problem 2.4] for the infinity Wasserstein distance, the Prokhorov distance, the stop-loss distance, the Lévy distance or modified versions of them. Once again, despite being of great interest in their setting, in particular for their application to the existence of a market model which is consistent with a finite set of European call option prices on a single underlying asset [19], their choice of distance is still inadequate for a connection with the stability of (MOT). In Section 2 we briefly recall Wiesel’s construction [37] of a martingale rearrangement coupling of any coupling $\pi$ which satisfies the barycentre dispersion assumption (1.8). Then we design our own construction of a martingale rearrangement coupling of $\pi$. This construction is actually done through lifted couplings, in the sense of Beiglböck and Juillet [9], that is probability measures on the enlarged space $(0,1)\times\mathbb{R}\times\mathbb{R}$ in which the spatial domain $\mathbb{R}\times\mathbb{R}$ of regular couplings is embedded. Our construction in Section 2 is highly inspired by the one we carried out in [24], where we designed a family $(M^{Q})_{Q\in\mathcal{Q}}$ of martingale couplings between $\mu$ and $\nu$ parametrised by a set $\mathcal{Q}$ of probability measures on $(0,1)^{2}$. This family was meant to be as close as possible to the Hoeffding-Fréchet coupling $\pi^{HF}$ between $\mu$ and $\nu$. As proved by Wiesel [37, Lemma 2.3], $\pi^{HF}$ satisfies the barycentre dispersion assumption (1.8). We show in Section 3 that the lifted coupling associated with any element of $(M^{Q})_{Q\in\mathcal{Q}}$ is, in a very natural sense, a lifted martingale rearrangement coupling of a lift of $\pi^{HF}$.
At the level of regular couplings on $\mathbb{R}\times\mathbb{R}$, we can conclude the same as soon as the sign of $F_{\nu}^{-1}-F_{\mu}^{-1}$ is constant on the jumps of $F_{\mu}$, which holds in particular when $F_{\nu}^{-1}-F_{\mu}^{-1}$ is constant on these jumps, in which case $\pi^{HF}$ is concentrated on the graph of the Monge transport map $T=F_{\nu}^{-1}\circ F_{\mu}$. We showed in [24] that a particular element stands out from the family $(M^{Q})_{Q\in\mathcal{Q}}$, the so-called inverse transform martingale coupling. We finally show in Section 4 the stability of the inverse transform martingale coupling for the $\mathcal{AW}_{\rho}$-distance with respect to its marginals. The latter stability holds in full generality at the lifted level, but a condition on the first marginals is needed at the level of regular couplings. Let us now recall some standard results about cumulative distribution functions and quantile functions since they will prove very handy one-dimensional tools. Proofs can be found for instance in [24, Appendix]. For any probability measure $\eta$ on $\mathbb{R}$, denoting by $F_{\eta}(x)=\eta((-\infty,x])$ and $F_{\eta}^{-1}(u)=\inf\\{x\in\mathbb{R}:F_{\eta}(x)\geq u\\}$ the cumulative distribution function and the quantile function of $\eta$, we have 1. (1) $F_{\eta}$, resp. $F_{\eta}^{-1}$, is right continuous, resp. left continuous, and nondecreasing; 2. (2) For all $(x,u)\in\mathbb{R}\times(0,1)$, $F_{\eta}^{-1}(u)\leq x\iff u\leq F_{\eta}(x),$ (1.10) which implies $\displaystyle F_{\eta}(x-)<u\leq F_{\eta}(x)\implies x=F_{\eta}^{-1}(u),$ (1.11) and $\displaystyle F_{\eta}(F_{\eta}^{-1}(u)-)\leq u\leq F_{\eta}(F_{\eta}^{-1}(u));$ (1.12) 3. (3) For $\eta(dx)$-almost every $x\in\mathbb{R}$, $0<F_{\eta}(x),\quad F_{\eta}(x-)<1\quad\text{and}\quad F_{\eta}^{-1}(F_{\eta}(x))=x;$ (1.13) 4. (4) The image of the Lebesgue measure on $(0,1)$ by $F_{\eta}^{-1}$ is $\eta$. This property is referred to as inverse transform sampling. 5. 
(5) Denoting by $\lambda_{(0,1)}$, resp. $\lambda_{(0,1)^{2}}$, the Lebesgue measure on $(0,1)$, resp. $(0,1)^{2}$, and setting $\theta(x,v)=F_{\mu}(x-)+v\mu(\\{x\\})\quad\textrm{for}\quad(x,v)\in\mathbb{R}\times[0,1],$ (1.14) we have $\left((u,v)\mapsto\theta(F_{\mu}^{-1}(u),v)\right)_{\sharp}\lambda_{(0,1)^{2}}=\lambda_{(0,1)},$ (1.15) where $\sharp$ denotes the pushforward operation. Coupled with the inverse transform sampling we also have the equivalent formulation $\theta_{\sharp}(\mu\times\lambda_{(0,1)})=\lambda_{(0,1)}.$ (1.16) ## 2 Martingale rearrangements of couplings which satisfy the barycentre dispersion assumption ### 2.1 Regular and lifted martingale rearrangement couplings By (1.4) for the first equality and the inverse transform sampling for the second one, we have for $\eta,\eta^{\prime}\in\mathcal{P}_{1}(\mathbb{R})$, $\displaystyle\mathcal{W}_{1}(\eta,\eta^{\prime})=\int_{(0,1)}\left|F_{\eta}^{-1}(u)-F_{\eta^{\prime}}^{-1}(u)\right|du\geq\left|\int_{(0,1)}F_{\eta}^{-1}(u)du-\int_{(0,1)}F_{\eta^{\prime}}^{-1}(u)du\right|=\left|\int_{\mathbb{R}}x\,\eta(dx)-\int_{\mathbb{R}}x\,\eta^{\prime}(dx)\right|.$ (2.1) The inequality is an equality iff either $\forall u\in(0,1)$, $F_{\eta}^{-1}(u)\leq F_{\eta^{\prime}}^{-1}(u)$, i.e. $\eta$ is smaller than $\eta^{\prime}$ for the stochastic order, which we denote $\eta\leq_{st}\eta^{\prime}$, or $\forall u\in(0,1)$, $F_{\eta}^{-1}(u)\geq F_{\eta^{\prime}}^{-1}(u)$, i.e. $\eta\geq_{st}\eta^{\prime}$. Let $\mu,\nu\in\mathcal{P}_{1}(\mathbb{R})$ be such that $\mu\leq_{cx}\nu$. We are now ready to reproduce the proof of [37, Lemma 2.1] to check (1.9).
For $M\in\Pi^{\mathrm{M}}(\mu,\nu)$ and $\chi\in\Pi(\mu,\mu)$ we have, using (2.1) then the triangle inequality, $\displaystyle\begin{split}\int_{\mathbb{R}\times\mathbb{R}}\left(|x-x^{\prime}|+\mathcal{W}_{1}(\pi_{x},M_{x^{\prime}})\right)\,\chi(dx,dx^{\prime})&\geq\int_{\mathbb{R}\times\mathbb{R}}\left(|x-x^{\prime}|+\left|\int_{\mathbb{R}}y\,\pi_{x}(dy)-x^{\prime}\right|\right)\,\chi(dx,dx^{\prime})\\\ &\geq\int_{\mathbb{R}}\left|\int_{\mathbb{R}}y\,\pi_{x}(dy)-x\right|\,\mu(dx).\end{split}$ (2.2) When $\pi$ satisfies the barycentre dispersion assumption (1.8), finding a martingale rearrangement coupling of $\pi$ amounts to finding a martingale coupling such that the inequalities in (2.2) are equalities. This observation leads to the following lemma. ###### Lemma 2.1. Let $\mu,\nu\in\mathcal{P}_{1}(\mathbb{R})$ and $\pi\in\Pi(\mu,\nu)$ satisfy the barycentre dispersion assumption (1.8). Then $M\in\Pi^{\mathrm{M}}(\mu,\nu)$ is a martingale rearrangement coupling of $\pi$ iff there exists $\chi\in\Pi(\mu,\mu)$ such that $\chi(dx,dx^{\prime})$-almost everywhere, $x<x^{\prime}\implies\pi_{x}\geq_{st}M_{x^{\prime}},\quad x>x^{\prime}\implies\pi_{x}\leq_{st}M_{x^{\prime}}\quad\textrm{and}\quad x=x^{\prime}\implies\pi_{x}\leq_{st}M_{x}\textrm{ or }\pi_{x}\geq_{st}M_{x},$ (2.3) in which case $\chi$ is optimal for $\mathcal{AW}_{1}(\pi,M)$. ###### Proof. Suppose that $M$ is a martingale rearrangement coupling of $\pi$ and $\chi$ is optimal for $\mathcal{AW}_{1}(\pi,M)$. Since $\pi$ satisfies the barycentre dispersion assumption, we know by [37, Proposition 2.4] that $\mathcal{AW}_{1}(M,\pi)=\int_{\mathbb{R}}\left|\int_{\mathbb{R}}y\,\pi_{x}(dy)-x\right|\,\mu(dx)$. Then the first inequality in (2.2) is an equality, hence $\chi(dx,dx^{\prime})$-almost everywhere, $\mathcal{W}_{1}(\pi_{x},M_{x^{\prime}})=\left|\int_{\mathbb{R}}y\,\pi_{x}(dy)-x^{\prime}\right|$, or equivalently $\pi_{x}$ and $M_{x^{\prime}}$ are comparable in the stochastic order.
Moreover, the second inequality in (2.2) is an equality as well, hence $\chi(dx,dx^{\prime})$-almost everywhere, $x^{\prime}$ lies between $x$ and $\int_{\mathbb{R}}y\,\pi_{x}(dy)$. We deduce that $\chi(dx,dx^{\prime})$-almost everywhere, $(x-x^{\prime})\left(x^{\prime}-\int_{\mathbb{R}}y\,\pi_{x}(dy)\right)\geq 0,$ (2.4) and $\pi_{x}\leq_{st}M_{x^{\prime}}$ or $\pi_{x}\geq_{st}M_{x^{\prime}}$. Then (2.3) is easily deduced from the fact that the map $\eta\mapsto\int_{\mathbb{R}}z\,\eta(dz)$ is increasing for the stochastic order. Conversely, suppose that (2.3), and therefore (2.4), holds for some $\chi\in\Pi(\mu,\mu)$. Then the inequalities in (2.2) are equalities, hence $\chi$ is optimal for $\mathcal{AW}_{1}(\pi,M)$ and $M$ is a martingale rearrangement coupling of $\pi$. ∎ To construct a martingale rearrangement coupling of $\pi$ when $\pi$ satisfies the barycentre dispersion assumption (1.8), we will define a probability kernel $(m_{u})_{u\in(0,1)}$ such that $\int_{\mathbb{R}}y\,m_{u}(dy)=F_{\mu}^{-1}(u)$ $du$-a.e. and deduce that the probability measure $M(dx,dy)=\int_{0}^{1}\delta_{F_{\mu}^{-1}(u)}(dx)\,m_{u}(dy)\,du$ (2.5) is a martingale coupling between $\mu$ and $\nu$. Yet the probability kernel $(m_{u})_{u\in(0,1)}$ is not uniquely determined from the knowledge of $M$. Hence the definition (2.5) induces a loss of information. In order to keep this information, one can consider like Beiglböck and Juillet [9], instead of $M$, its lifted martingale coupling $\widehat{M}(du,dx,dy)=\lambda_{(0,1)}(du)\,\delta_{F_{\mu}^{-1}(u)}(dx)\,m_{u}(dy)\in\Pi(\lambda_{(0,1)},\mu,\nu),$ (2.6) where $\lambda_{(0,1)}$ denotes the Lebesgue measure on $(0,1)$.
More generally, for any $\pi\in\Pi(\mu,\nu)$, we call lifted coupling of $\pi$ any coupling $\widehat{\pi}\in\Pi(\lambda_{(0,1)},\mu,\nu)$ such that there exists a probability kernel $(p_{u})_{u\in(0,1)}$ which satisfies $\widehat{\pi}(du,dx,dy)=\lambda_{(0,1)}(du)\,\delta_{F_{\mu}^{-1}(u)}(dx)\,p_{u}(dy)\quad\textrm{and}\quad\int_{u\in(0,1)}\widehat{\pi}(du,dx,dy)=\pi(dx,dy).$ We denote by $\widehat{\Pi}(\mu,\nu)$ the set of all lifted couplings between $\mu$ and $\nu$. Notice that there exists an easy embedding $\iota:\Pi(\mu,\nu)\to\widehat{\Pi}(\mu,\nu),\quad\pi\mapsto\lambda_{(0,1)}(du)\,\delta_{F_{\mu}^{-1}(u)}(dx)\,\pi_{F_{\mu}^{-1}(u)}(dy).$ (2.7) For $\widehat{\pi}=\lambda_{(0,1)}\times\widehat{\pi}_{u}=\lambda_{(0,1)}\times\delta_{F_{\mu}^{-1}(u)}\times p_{u}$ and $\widehat{\pi}^{\prime}=\lambda_{(0,1)}\times\widehat{\pi}^{\prime}_{u}=\lambda_{(0,1)}\times\delta_{F_{\mu^{\prime}}^{-1}(u)}\times p^{\prime}_{u}$ two lifted couplings of $\pi\in\Pi(\mu,\nu)$ and $\pi^{\prime}\in\Pi(\mu^{\prime},\nu^{\prime})$ respectively, we define their lifted adapted Wasserstein distance of order $\rho$ by $\displaystyle\widehat{\mathcal{AW}}_{\rho}(\widehat{\pi},\widehat{\pi}^{\prime})$ $\displaystyle=\inf_{\chi\in\Pi(\lambda_{(0,1)},\lambda_{(0,1)})}\left(\int_{(0,1)\times(0,1)}\left(|u-u^{\prime}|^{\rho}+\mathcal{AW}_{\rho}^{\rho}(\widehat{\pi}_{u},\widehat{\pi}^{\prime}_{u^{\prime}})\right)\,\chi(du,du^{\prime})\right)^{1/\rho}$ $\displaystyle=\inf_{\chi\in\Pi(\lambda_{(0,1)},\lambda_{(0,1)})}\left(\int_{(0,1)\times(0,1)}\left(|u-u^{\prime}|^{\rho}+|F_{\mu}^{-1}(u)-F_{\mu^{\prime}}^{-1}(u^{\prime})|^{\rho}+\mathcal{W}_{\rho}^{\rho}(p_{u},p^{\prime}_{u^{\prime}})\right)\,\chi(du,du^{\prime})\right)^{1/\rho}.$ Note that by Remark 5.2 below there always exists a coupling $\chi\in\Pi(\lambda_{(0,1)},\lambda_{(0,1)})$ optimal for $\widehat{\mathcal{AW}}_{\rho}(\widehat{\pi},\widehat{\pi}^{\prime})$.
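The embedding $\iota$ of (2.7) and the collapse back to a regular coupling can be mimicked on a grid in $u$. The sketch below (our names; a numerical illustration only) lifts a discrete coupling to the kernels $u\mapsto p_{u}=\pi_{F_{\mu}^{-1}(u)}$ and recovers the original coupling by integrating over $u$.

```python
import numpy as np

xs, mu = np.array([-1.0, 1.0]), np.array([0.5, 0.5])
ys = np.array([-2.0, 2.0])
pi = np.array([[0.5, 0.0],                    # joint masses pi(x_i, y_j)
               [0.0, 0.5]])

n = 10_000
u = (np.arange(n) + 0.5) / n                  # grid on (0, 1)
idx = np.searchsorted(np.cumsum(mu), u)       # i(u) with F_mu^{-1}(u) = xs[i(u)]
kernels = pi[idx] / mu[idx, None]             # p_u = pi_{F_mu^{-1}(u)}

# collapse the lifted coupling: integrate delta_{F_mu^{-1}(u)} x p_u over u
collapsed = np.zeros_like(pi)
for i in range(len(xs)):
    collapsed[i] = kernels[idx == i].sum(axis=0) / n
```

Each `kernels[k]` is a probability vector, and summing them over the grid recovers `pi`, which is the second identity defining a lifted coupling.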
We denote by $\widehat{\Pi}^{\mathrm{M}}(\mu,\nu)$ the set of all lifted martingale couplings between $\mu$ and $\nu$, that is the set of all lifted couplings $\lambda_{(0,1)}\times\delta_{F_{\mu}^{-1}(u)}\times m_{u}\in\widehat{\Pi}(\mu,\nu)$ such that $\int_{\mathbb{R}}y\,m_{u}(dy)=F_{\mu}^{-1}(u)$ for $du$-almost all $u\in(0,1)$. For $\rho\geq 1$, we then call lifted $\widehat{\mathcal{AW}}_{\rho}$-minimal martingale rearrangement coupling (or simply lifted martingale rearrangement coupling when $\rho=1$) of $\widehat{\pi}\in\widehat{\Pi}(\mu,\nu)$ any lifted martingale coupling $\widehat{M}\in\widehat{\Pi}^{\mathrm{M}}(\mu,\nu)$ such that $\widehat{\mathcal{AW}}_{\rho}(\widehat{\pi},\widehat{M})=\inf_{\widehat{M}^{\prime}\in\widehat{\Pi}^{\mathrm{M}}(\mu,\nu)}\widehat{\mathcal{AW}}_{\rho}(\widehat{\pi},\widehat{M}^{\prime}).$ Ignoring the non-negative contribution of $|u-u^{\prime}|$ in the definition of $\widehat{\mathcal{AW}}_{1}$ and reasoning like in (2.2), we easily check the following lower bound analogous, at the lifted level, to (1.9). ###### Lemma 2.2. Let $\mu,\nu\in\mathcal{P}_{1}(\mathbb{R})$ be such that $\mu\leq_{cx}\nu$. Then for all $\widehat{\pi}=\lambda_{(0,1)}\times\delta_{F_{\mu}^{-1}(u)}\times p_{u}\in\widehat{\Pi}(\mu,\nu)$, $\inf_{\widehat{M}\in\widehat{\Pi}^{\mathrm{M}}(\mu,\nu)}\widehat{\mathcal{AW}}_{1}(\widehat{\pi},\widehat{M})\geq\int_{(0,1)}\left|\int_{\mathbb{R}}y\,p_{u}(dy)-F_{\mu}^{-1}(u)\right|\,du.$ The next proposition gives a sufficient condition for the collapse through (2.5) of a lifted martingale coupling to be a martingale rearrangement. ###### Proposition 2.3. Let $\mu,\nu\in\mathcal{P}_{1}(\mathbb{R})$ be such that $\mu\leq_{cx}\nu$. 
Let $\widehat{\pi}=\lambda_{(0,1)}\times\delta_{F_{\mu}^{-1}(u)}\times p_{u}\in\widehat{\Pi}(\mu,\nu)$ be such that $u\mapsto p_{u}$ is constant on the jumps of $F_{\mu}$, that is constant on the intervals $(F_{\mu}(x-),F_{\mu}(x)]$, $x\in\mathbb{R}$, which is trivially satisfied when $\mu$ is atomless. Suppose that $\widehat{M}=\lambda_{(0,1)}\times\delta_{F_{\mu}^{-1}(u)}\times m_{u}\in\widehat{\Pi}^{\mathrm{M}}(\mu,\nu)$ is such that $\int_{(0,1)}\mathcal{W}_{1}(p_{u},m_{u})\,du\leq\int_{(0,1)}\left|\int_{\mathbb{R}}y\,p_{u}(dy)-F_{\mu}^{-1}(u)\right|\,du.$ Then the martingale coupling $M(dx,dy)=\int_{u\in(0,1)}\delta_{F_{\mu}^{-1}(u)}(dx)\,m_{u}(dy)\,du$ is a martingale rearrangement coupling of $\pi=\int_{u\in(0,1)}\delta_{F_{\mu}^{-1}(u)}(dx)\,p_{u}(dy)\,du$ which satisfies $\mathcal{AW}_{1}(\pi,M)=\int_{\mathbb{R}}\mathcal{W}_{1}(\pi_{x},M_{x})\,\mu(dx)=\int_{\mathbb{R}}\left|\int_{\mathbb{R}}y\,\pi_{x}(dy)-x\right|\,\mu(dx).$ Of course, under the hypotheses, $\widehat{\mathcal{AW}}_{1}(\widehat{\pi},\widehat{M})\leq\int_{(0,1)}\mathcal{W}_{1}(p_{u},m_{u})\,du\leq\int_{(0,1)}\left|\int_{\mathbb{R}}y\,p_{u}(dy)-F_{\mu}^{-1}(u)\right|\,du$ so that, by Lemma 2.2, these inequalities are equalities and $\widehat{M}$ is a lifted martingale rearrangement of $\widehat{\pi}$. ###### Proof. By (1.9) it suffices to show that $\int_{\mathbb{R}}\mathcal{W}_{1}(\pi_{x},M_{x})\,\mu(dx)\leq\int_{\mathbb{R}}\left|\int_{\mathbb{R}}y\,\pi_{x}(dy)-x\right|\,\mu(dx).$ (2.8) For $(x,v)\in\mathbb{R}\times(0,1)$, let $\theta(x,v)=F_{\mu}(x-)+v\mu(\\{x\\})$. 
Using (1.16) and the fact that $F_{\mu}^{-1}(\theta(x^{\prime},v))=x^{\prime}$ for all $(x^{\prime},v)\in\mathbb{R}\times(0,1)$ , we get $\displaystyle\begin{split}\pi(dx,dy)&=\int_{u\in(0,1)}\delta_{F_{\mu}^{-1}(u)}(dx)\,p_{u}(dy)\,du=\int_{(x^{\prime},v)\in\mathbb{R}\times(0,1)}\delta_{x^{\prime}}(dx)\,p_{\theta(x^{\prime},v)}(dy)\,\mu(dx^{\prime})\,dv\\\ &=\int_{v\in(0,1)}\mu(dx)\,p_{\theta(x,v)}(dy)\,dv.\end{split}$ (2.9) Hence we have $\mu(dx)$-almost everywhere $\pi_{x}(dy)=\int_{0}^{1}p_{\theta(x,v)}(dy)\,dv$, and similarly we find $M_{x}(dy)=\int_{0}^{1}m_{\theta(x,v)}(dy)\,dv$. Using (1.16) for the first and last equality, we deduce that $\displaystyle\int_{\mathbb{R}}\mathcal{W}_{1}(\pi_{x},M_{x})\,\mu(dx)$ $\displaystyle\leq\int_{\mathbb{R}\times(0,1)}\mathcal{W}_{1}(m_{\theta(x,v)},p_{\theta(x,v)})\,\mu(dx)\,dv$ $\displaystyle=\int_{(0,1)}\mathcal{W}_{1}(m_{u},p_{u})\,du$ $\displaystyle\leq\int_{(0,1)}\left|\int_{\mathbb{R}}y\,p_{u}(dy)-F_{\mu}^{-1}(u)\right|\,du$ $\displaystyle=\int_{\mathbb{R}\times(0,1)}\left|\int_{\mathbb{R}}y\,p_{\theta(x,v)}(dy)-F_{\mu}^{-1}(\theta(x,v))\right|\,\mu(dx)\,dv.$ For $(x,v)\in\mathbb{R}\times(0,1)$, $F_{\mu}^{-1}(\theta(x,v))=x$, and since $u\mapsto p_{u}$ is constant on the jumps of $F_{\mu}$, the map $v\mapsto p_{\theta(x,v)}$ is constant on $(0,1)$, hence $\displaystyle\int_{(0,1)}\left|\int_{\mathbb{R}}y\,p_{\theta(x,v)}(dy)-F_{\mu}^{-1}(\theta(x,v))\right|\,dv$ $\displaystyle=\left|\int_{\mathbb{R}\times(0,1)}y\,p_{\theta(x,v)}(dy)\,dv-x\right|.$ We deduce that $\int_{\mathbb{R}}\mathcal{W}_{1}(\pi_{x},M_{x})\,\mu(dx)\leq\int_{\mathbb{R}}\left|\int_{\mathbb{R}\times(0,1)}y\,p_{\theta(x,v)}(dy)\,dv-x\right|\,\mu(dx)=\int_{\mathbb{R}}\left|\int_{\mathbb{R}}y\,\pi_{x}(dy)-x\right|\,\mu(dx),$ which proves (2.8) and concludes the proof. 
∎ ### 2.2 Construction of an explicit martingale rearrangement coupling We recall that a coupling $\pi\in\Pi(\mu,\nu)$ between two probability measures $\mu,\nu\in\mathcal{P}_{1}(\mathbb{R})$ in the convex order satisfies the barycentre dispersion assumption formulated by Wiesel [37] iff $\forall a\in\mathbb{R},\quad\int_{\mathbb{R}}\mathds{1}_{[a,+\infty)}(x)\left(x-\int_{\mathbb{R}}y\,\pi_{x}(dy)\right)\,\mu(dx)\leq 0.$ (2.10) First we briefly recall Wiesel’s construction [37] of a martingale rearrangement coupling of a coupling $\pi$ which satisfies (2.10); this construction is easy to follow as soon as $\pi$ has finite support but becomes rather implicit in the general case. Then we design our own construction of such a martingale rearrangement coupling, whose intelligibility does not depend on the finiteness of the support of $\pi$. Since the Hoeffding-Fréchet coupling satisfies (2.10) [37, Lemma 2.3], this construction extends the study made in Section 3. Let $\mu,\nu\in\mathcal{P}_{1}(\mathbb{R})$ be such that $\mu\leq_{cx}\nu$ and $\mu\neq\nu$, and let $\pi\in\Pi(\mu,\nu)$ be a coupling between $\mu$ and $\nu$ which satisfies the barycentre dispersion assumption (2.10). Suppose first that $\pi$ has finite support and is not a martingale coupling between $\mu$ and $\nu$. Denoting by $S$ the finite support of $\mu$ and, for all $x\in S$, by $S_{x}$ the finite support of $\pi_{x}$, the latter condition is equivalent to saying that there exists $x\in S$ such that $\int_{\mathbb{R}}y\,\pi_{x}(dy)\neq x$. As Wiesel [37] points out, the barycentre dispersion assumption (2.10) and the convex order between $\mu$ and $\nu$ imply the existence of $x^{-},x^{+}\in S$, $y^{-}\in S_{x^{-}}$ and $y^{+}\in S_{x^{+}}$ such that $\int_{\mathbb{R}}y\,\pi_{x^{-}}(dy)<x^{-},\quad x^{+}<\int_{\mathbb{R}}y\,\pi_{x^{+}}(dy)\quad\text{and}\quad y^{-}<y^{+}.$ He then switches as much mass as possible between $y^{-}$ and $y^{+}$ in $\pi_{x^{-}}$ and $\pi_{x^{+}}$ in order to rectify the barycentres.
More precisely, he defines for all $x\in S$ $\pi^{(1)}_{x}=\mathds{1}_{\\{x\notin\\{x^{-},x^{+}\\}\\}}\,\pi_{x}+\mathds{1}_{\\{x=x^{-}\\}}\left(\pi_{x^{-}}+\frac{\lambda}{\mu(\\{x^{-}\\})}(\delta_{y^{+}}-\delta_{y^{-}})\right)+\mathds{1}_{\\{x=x^{+}\\}}\left(\pi_{x^{+}}+\frac{\lambda}{\mu(\\{x^{+}\\})}(\delta_{y^{-}}-\delta_{y^{+}})\right),$ where $\lambda\geq 0$ is taken as large as possible, so that $\text{either}\quad\pi^{(1)}_{x^{-}}(\\{y^{-}\\})=0,\quad\pi^{(1)}_{x^{+}}(\\{y^{+}\\})=0,\quad\int_{\mathbb{R}}y\,\pi^{(1)}_{x^{+}}(dy)=x^{+}\quad\text{or}\quad\int_{\mathbb{R}}y\,\pi^{(1)}_{x^{-}}(dy)=x^{-}.$ Then the measure $\pi^{(1)}(dx,dy)=\mu(dx)\,\pi^{(1)}_{x}(dy)$ is a coupling between $\mu$ and $\nu$ which satisfies the barycentre dispersion assumption (2.10). Repeating this process inductively yields a sequence $(\pi^{(n)})_{n\in\mathbb{N}}$ of couplings between $\mu$ and $\nu$ which satisfy (2.10). By finiteness of $S$, the latter sequence is constant for $n$ large enough and the limit is precisely a martingale rearrangement coupling of $\pi$. In the general case, there exists by [37, Lemma 4.1] a sequence $(\pi^{n})_{n\in\mathbb{N}^{*}}$ of finitely supported measures such that $\mathcal{W}_{1}^{nd}(\pi^{n},\pi)\leq 1/n$ for all $n\in\mathbb{N}^{*}$.
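The finite-support switching step above can be turned into a small algorithm. The sketch below is our own implementation, with a naive selection rule for $x^{\pm}$ and $y^{\pm}$ that suffices for this toy example; it rearranges the Hoeffding-Fréchet coupling of $\mu=\frac12(\delta_{-1}+\delta_{1})$ and $\nu=\frac12(\delta_{-2}+\delta_{2})$ into a martingale coupling in a single switch.

```python
import numpy as np

def rearrange(xs, mu, ys, pi, max_iter=1000, tol=1e-12):
    """Wiesel-style switching on a finitely supported coupling.
    pi[i, j] is the joint mass on (xs[i], ys[j]); ys must be sorted.
    Each step moves mass lambda between (x^-, y^-), (x^-, y^+),
    (x^+, y^+), (x^+, y^-) so as to push both barycentres towards x."""
    pi = np.array(pi, float)
    xs, mu, ys = (np.asarray(a, float) for a in (xs, mu, ys))
    for _ in range(max_iter):
        bary = pi @ ys / mu
        minus, plus = np.where(bary < xs - tol)[0], np.where(bary > xs + tol)[0]
        if len(minus) == 0 and len(plus) == 0:
            return pi                                  # martingale coupling
        i_m, i_p = minus[0], plus[0]
        j_m = np.where(pi[i_m] > tol)[0][0]            # smallest y^- in S_{x^-}
        j_p = np.where(pi[i_p] > tol)[0][-1]           # largest  y^+ in S_{x^+}
        gap = ys[j_p] - ys[j_m]                        # requires y^- < y^+
        lam = min(pi[i_m, j_m], pi[i_p, j_p],
                  mu[i_m] * (xs[i_m] - bary[i_m]) / gap,
                  mu[i_p] * (bary[i_p] - xs[i_p]) / gap)
        pi[i_m, j_m] -= lam; pi[i_m, j_p] += lam       # raise barycentre at x^-
        pi[i_p, j_p] -= lam; pi[i_p, j_m] += lam       # lower barycentre at x^+
    raise RuntimeError("no convergence within max_iter steps")

xs, mu, ys = [-1.0, 1.0], [0.5, 0.5], [-2.0, 2.0]
pi_hf = [[0.5, 0.0], [0.0, 0.5]]                       # Hoeffding-Frechet coupling
M = rearrange(xs, mu, ys, pi_hf)
```

Both marginals are preserved by each switch, and on this example a single step with $\lambda=1/8$ already rectifies both barycentres.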
The marginals $\mu_{n}$ and $\nu_{n}$ of $\pi^{n}$ are not in the convex order, but a mere adaptation of the previous reasoning yields the existence of a coupling $\pi^{n}_{mr}$ between $\mu_{n}$ and $\nu_{n}$ which is almost a martingale rearrangement coupling of $\pi^{n}$, in the sense that $\int_{\mathbb{R}}\left|x-\int_{\mathbb{R}}y\,(\pi^{n}_{mr})_{x}(dy)\right|\,\mu_{n}(dx)\leq\frac{1}{n}\quad\text{and}\quad\mathcal{W}_{1}^{nd}(\pi^{n}_{mr},\pi^{n})\leq\int_{\mathbb{R}}\left|x-\int_{\mathbb{R}}y\,\pi^{n}_{x}(dy)\right|\,\mu_{n}(dx).$ (2.11) Then Wiesel shows the existence of a coupling $\pi_{mr}$ between $\mu$ and $\nu$ such that $\mathcal{W}_{1}^{nd}\left(\frac{1}{n}\sum_{k=1}^{n}\pi^{k}_{mr},\pi_{mr}\right)$ vanishes as $n$ goes to $+\infty$. By (2.11) taken to the limit $n\to+\infty$ and (1.9) he deduces that $\pi_{mr}$ is a martingale rearrangement coupling of $\pi$. We now propose an alternative construction of a martingale rearrangement coupling of $\pi$, regardless of the finiteness of its support, deduced from a lifted martingale coupling. For all $u\in[0,1]$, let $G(u)=\int_{\mathbb{R}}y\,\pi_{F_{\mu}^{-1}(u)}(dy),\quad\Delta_{+}(u)=\int_{0}^{u}(F_{\mu}^{-1}-G)^{+}(v)\,dv\quad\text{and}\quad\Delta_{-}(u)=\int_{0}^{u}(F_{\mu}^{-1}-G)^{-}(v)\,dv.$ (2.12) Let us show that (2.10) is equivalent to $\forall u\in[0,1],\quad\Delta_{+}(u)\geq\Delta_{-}(u).$ (2.13) Using (1.10) we see that for all $u\in(0,1)$ and $a\in\mathbb{R}$, $u>F_{\mu}(a-)\implies F_{\mu}^{-1}(u)\geq a\implies u\geq F_{\mu}(a-)$.
By the latter implications and the inverse transform sampling we deduce that (2.10) is equivalent to $\forall a\in\mathbb{R},\quad\int_{F_{\mu}(a-)}^{1}(F_{\mu}^{-1}(u)-G(u))\,du\leq 0.$ Since $\Delta_{+}(1)=\Delta_{-}(1)$, a consequence of the equality of the respective means of $\mu$ and $\nu$, we deduce that it is equivalent to $\forall a\in\mathbb{R},\quad\Delta_{+}(F_{\mu}(a-))\geq\Delta_{-}(F_{\mu}(a-)).$ By right continuity of $F_{\mu}$, for all $a\in\mathbb{R}$ we have $F_{\mu}(a)=\lim_{h\to 0,h>0}F_{\mu}((a+h)-)$, so by continuity of $\Delta_{+}$ and $\Delta_{-}$ we also have $\Delta_{+}(F_{\mu}(a))\geq\Delta_{-}(F_{\mu}(a))$ for all $a\in\mathbb{R}$. Moreover, for all $a\in\mathbb{R}$ such that $\mu(\\{a\\})>0$ and $u\in(F_{\mu}(a-),F_{\mu}(a)]$, we have by (1.11) that $F_{\mu}^{-1}(u)=a$, so $\Delta_{+}$ and $\Delta_{-}$ are affine on $(F_{\mu}(a-),F_{\mu}(a)]$. We deduce that we also have $\Delta_{+}\geq\Delta_{-}$ on $(F_{\mu}(a-),F_{\mu}(a)]$, hence the equivalence with (2.13). We define $\displaystyle\mathcal{U}^{+}=\\{u\in(0,1)\mid F_{\mu}^{-1}(u)>G(u)\\},\quad\mathcal{U}^{-}=\\{u\in(0,1)\mid F_{\mu}^{-1}(u)<G(u)\\},$ $\displaystyle\text{and}\quad\mathcal{U}^{0}=\\{u\in(0,1)\mid F_{\mu}^{-1}(u)=G(u)\\},$ and thanks to the equality $\Delta_{+}(1)=\Delta_{-}(1)$ we can set for all $u\in[0,1]$ $\phi(u)=\left\\{\begin{array}[]{rcl}\Delta_{-}^{-1}(\Delta_{+}(u))&\text{if}&u\in\mathcal{U}^{+};\\\ \Delta_{+}^{-1}(\Delta_{-}(u))&\text{if}&u\in\mathcal{U}^{-};\\\ u&\text{if}&u\in\mathcal{U}^{0}.\end{array}\right.$ Applying [24, Lemma 6.1] again with $f_{1}=(F_{\mu}^{-1}-G)^{+}$, $f_{2}=(F_{\mu}^{-1}-G)^{-}$, $u_{0}=1$ and $h:u\mapsto\mathds{1}_{\\{G(\phi(u))\leq F_{\mu}^{-1}(\phi(u))\\}}$ yields $\int_{0}^{1}\mathds{1}_{\\{G(\phi(u))\leq F_{\mu}^{-1}(\phi(u))\\}}\,d\Delta_{+}(u)=\int_{0}^{1}\mathds{1}_{\\{G(u)\leq F_{\mu}^{-1}(u)\\}}\,d\Delta_{-}(u)=0.$ Similarly, we get $\int_{0}^{1}\mathds{1}_{\\{G(\phi(u))\geq F_{\mu}^{-1}(\phi(u))\\}}\,d\Delta_{-}(u)=0$.
We deduce that $\phi(u)\in\mathcal{U}^{-},\quad\textrm{resp. }\phi(u)\in\mathcal{U}^{+},\quad\text{for $du$-almost all }u\in\mathcal{U}^{+},\quad\text{resp. }\mathcal{U}^{-}.$ (2.14) This allows us to define for $du$-almost all $u\in\mathcal{U}^{+}\cup\mathcal{U}^{-}$ $p(u)=\frac{G(\phi(u))-F_{\mu}^{-1}(\phi(u))}{F_{\mu}^{-1}(u)-G(u)+G(\phi(u))-F_{\mu}^{-1}(\phi(u))}\cdot$ (2.15) Notice that (2.14) implies that for $du$-almost all $u\in\mathcal{U}^{+}$, $\phi(\phi(u))=\Delta_{+}^{-1}(\Delta_{-}(\Delta_{-}^{-1}(\Delta_{+}(u))))$. Since $\Delta_{-}$ is continuous we have $\Delta_{-}(\Delta_{-}^{-1}(v))=v$ for all $v\in[0,\Delta_{-}(1)]$, and using (1.13) after an appropriate normalisation we get $\Delta_{+}^{-1}(\Delta_{+}(v))=v$ for $dv$-almost all $v\in\mathcal{U}^{+}$. We deduce that $u=\phi(\phi(u)),\quad\text{$du$-almost everywhere on $\mathcal{U}^{+}$}.$ (2.16) Similarly, $\phi(\phi(u))=u$ for $du$-almost all $u\in\mathcal{U}^{-}$. We deduce that $\text{for $du$-almost all $u\in\mathcal{U}^{+}\cup\mathcal{U}^{-}$},\quad\phi(\phi(u))=u,$ (2.17) and $\text{for $du$-almost all $u\in\mathcal{U}^{+}\cup\mathcal{U}^{-}$},\quad p(\phi(u))=\frac{G(u)-F_{\mu}^{-1}(u)}{F_{\mu}^{-1}(\phi(u))-G(\phi(u))+G(u)-F_{\mu}^{-1}(u)}=1-p(u).$ (2.18) In order to define the appropriate martingale kernel, we rely on the following lemma, which allows us to inject some stochastic order in the construction, a convenient tool for the computation of Wasserstein distances. We recall that two probability measures $\mu$ and $\nu$ on the real line are said to be in the stochastic order, denoted $\mu\leq_{st}\nu$, iff $F_{\mu}^{-1}(u)\leq F_{\nu}^{-1}(u)$ for all $u\in(0,1)$. In that case, since the Hoeffding-Fréchet coupling between $\mu$ and $\nu$ is optimal for $\mathcal{W}_{1}(\mu,\nu)$, the inverse transform sampling implies that $\mathcal{W}_{1}(\mu,\nu)=\int_{\mathbb{R}}y\,\nu(dy)-\int_{\mathbb{R}}x\,\mu(dx)$. ###### Lemma 2.4.
Let $\mathfrak{B}$ be the set of all quadruples $(y,\widetilde{y},\mu,\widetilde{\mu})\in\mathbb{R}\times\mathbb{R}\times\mathcal{P}_{1}(\mathbb{R})\times\mathcal{P}_{1}(\mathbb{R})$ such that $\mu$ and $\widetilde{\mu}$ have respective means $x$ and $\widetilde{x}$ and $x<y\leq\widetilde{y}<\widetilde{x}$. Endow $\mathcal{P}_{1}(\mathbb{R})$ with the Borel $\sigma$-algebra of the weak convergence topology and $\mathfrak{B}$ with the trace of the product $\sigma$-algebra on $\mathbb{R}\times\mathbb{R}\times\mathcal{P}_{1}(\mathbb{R})\times\mathcal{P}_{1}(\mathbb{R})$. Then there exist two measurable maps $\beta,\widetilde{\beta}:\mathfrak{B}\to\mathcal{P}_{1}(\mathbb{R})$ such that for all $(y,\widetilde{y},\mu,\widetilde{\mu})\in\mathfrak{B}$, denoting $\nu=\beta(y,\widetilde{y},\mu,\widetilde{\mu})$, $\widetilde{\nu}=\widetilde{\beta}(y,\widetilde{y},\mu,\widetilde{\mu})$ and $p=\frac{\widetilde{x}-\widetilde{y}}{y-x+\widetilde{x}-\widetilde{y}}$ where $x$ and $\widetilde{x}$ are the respective means of $\mu$ and $\widetilde{\mu}$, we have $\int_{\mathbb{R}}w\,\nu(dw)=y,\quad\int_{\mathbb{R}}w\,\widetilde{\nu}(dw)=\widetilde{y},\quad\mu\leq_{st}\nu,\quad\widetilde{\nu}\leq_{st}\widetilde{\mu}\quad\text{and}\quad p\nu+(1-p)\widetilde{\nu}=p\mu+(1-p)\widetilde{\mu}.$ (2.19) In particular, $p\,\delta_{y}(dz)\,\nu(dw)+(1-p)\,\delta_{\widetilde{y}}(dz)\,\widetilde{\nu}(dw)$ is a martingale coupling between $p\delta_{y}(dz)+(1-p)\delta_{\widetilde{y}}(dz)$ and $p\mu(dw)+(1-p)\widetilde{\mu}(dw)$, and $\mathcal{W}_{1}(\mu,\nu)=y-x$, $\mathcal{W}_{1}(\widetilde{\mu},\widetilde{\nu})=\widetilde{x}-\widetilde{y}$.
The proof, which consists in exhibiting particular maps $\beta$ and $\tilde{\beta}$, is moved to the end of the present section. In order to use Lemma 2.4 we need to compare $\phi$ to the identity function. The inequality (2.13) is equivalent by appropriate normalisation of (1.10) to $u\geq\Delta_{+}^{-1}(\Delta_{-}(u))$ for all $u\in[0,1]$, hence $\forall u\in\mathcal{U}^{-},\quad\phi(u)\leq u.$ (2.20) Moreover, by (2.17), [24, Lemma 6.1] applied with $f_{1}=(F_{\mu}^{-1}-G)^{+}$, $f_{2}=(F_{\mu}^{-1}-G)^{-}$, $u_{0}=1$ and $h:u\mapsto\mathds{1}_{\\{u<\phi(u)\\}}$ we have $\int_{0}^{1}\mathds{1}_{\\{\phi(u)<u\\}}\,d\Delta_{+}(u)=\int_{0}^{1}\mathds{1}_{\\{\phi(u)<\phi(\phi(u))\\}}\,d\Delta_{+}(u)=\int_{0}^{1}\mathds{1}_{\\{u<\phi(u)\\}}\,d\Delta_{-}(u).$ By (2.20) the right-hand side is $0$, hence $\text{for $du$-almost all $u\in\mathcal{U}^{+}$},\quad\phi(u)\geq u.$ (2.21) Let $\displaystyle A^{+}=\\{u\in\mathcal{U}^{+}\mid F_{\mu}^{-1}(\phi(u))<G(\phi(u)),\ \phi(\phi(u))=u\text{ and }p(\phi(u))=1-p(u)\\}$ and $\displaystyle A^{-}=\\{u\in\mathcal{U}^{-}\mid F_{\mu}^{-1}(\phi(u))>G(\phi(u)),\ \phi(\phi(u))=u\text{ and }p(\phi(u))=1-p(u)\\}.$ For all $u\in A^{+}$, we have by definition $\displaystyle\phi(u)\in\mathcal{U}^{-},\quad F_{\mu}^{-1}(\phi(\phi(u)))=F_{\mu}^{-1}(u)>G(u)=G(\phi(\phi(u))),$ $\displaystyle\phi(\phi(\phi(u)))=\phi(u)\quad\text{and}\quad p(\phi(\phi(u)))=p(u)=1-p(\phi(u)),$ hence $\phi(u)\in A^{-}$. Similarly, for all $u\in A^{-}$, $\phi(u)\in A^{+}$. 
By (2.14), (2.17), (2.18), (2.21) and the monotonicity of $F_{\mu}^{-1}$, we deduce that $A^{+}$ and $A^{-}$ are two disjoint Borel sets such that the Lebesgue measure of $(\mathcal{U}^{+}\backslash A^{+})\cup(\mathcal{U}^{-}\backslash A^{-})$ is $0$ and $\displaystyle\begin{split}&\forall u\in A^{+},\quad G(u)<F_{\mu}^{-1}(u)\leq F_{\mu}^{-1}(\phi(u))<G(\phi(u)).\end{split}$ (2.22) For all $u\in A^{+}$, $\pi_{F_{\mu}^{-1}(u)}$ and $\pi_{F_{\mu}^{-1}(\phi(u))}$ have by definition respective means $G(u)$ and $G(\phi(u))$, so by (2.22) we can apply Lemma 2.4 with $(y,\widetilde{y},\mu,\widetilde{\mu})=(F_{\mu}^{-1}(u),F_{\mu}^{-1}(\phi(u)),\pi_{F_{\mu}^{-1}(u)},\pi_{F_{\mu}^{-1}(\phi(u))}).$ Hence there exist two probability measures $m_{u},\widetilde{m}_{u}\in\mathcal{P}_{1}(\mathbb{R})$ with respective means $F_{\mu}^{-1}(u)$, $F_{\mu}^{-1}(\phi(u))$ and such that $\displaystyle\begin{split}&\pi_{F_{\mu}^{-1}(u)}\leq_{st}m_{u},\quad\widetilde{m}_{u}\leq_{st}\pi_{F_{\mu}^{-1}(\phi(u))},\\\ \text{and}\quad&p(u)m_{u}+(1-p(u))\widetilde{m}_{u}=p(u)\pi_{F_{\mu}^{-1}(u)}+(1-p(u))\pi_{F_{\mu}^{-1}(\phi(u))}.\end{split}$ (2.23) Since $A^{+}=\phi(A^{-})$ and $A^{-}=\phi(A^{+})$, for all $u\in A^{-}$ we can set $m_{u}=\widetilde{m}_{\phi(u)}$, so that $\forall u\in A^{+},\quad\pi_{F_{\mu}^{-1}(u)}\leq_{st}m_{u}\quad\text{and}\quad\forall u\in A^{-},\quad m_{u}\leq_{st}\pi_{F_{\mu}^{-1}(u)},$ (2.24) and $\forall u\in A^{+}\cup A^{-},\;p(u)m_{u}+p(\phi(u))m_{\phi(u)}=p(u)\pi_{F_{\mu}^{-1}(u)}+p(\phi(u))\pi_{F_{\mu}^{-1}(\phi(u))}.$ (2.25) Finally, for all $u\in\mathcal{U}^{0}\cup(\mathcal{U}^{+}\backslash A^{+})\cup(\mathcal{U}^{-}\backslash A^{-})$ set $m_{u}=\pi_{F_{\mu}^{-1}(u)}$. By composition of the measurable map $u\mapsto(F_{\mu}^{-1}(u),F_{\mu}^{-1}(\phi(u)),\pi_{F_{\mu}^{-1}(u)},\pi_{F_{\mu}^{-1}(\phi(u))})$ and the measurable map $\beta$ defined in Lemma 2.4, the map $u\mapsto m_{u}$ is measurable.
By [2, Theorem 19.12] this is equivalent to saying that $(m_{u})_{u\in(0,1)}$ is a probability kernel, hence we can define $\widehat{M}(du,dx,dy)=\lambda_{(0,1)}(du)\,\delta_{F_{\mu}^{-1}(u)}(dx)\,m_{u}(dy),$ (2.26) and $M(dx,dy)=\int_{0}^{1}\delta_{F_{\mu}^{-1}(u)}(dx)\,m_{u}(dy)\,du.$ (2.27) We now state that $\widehat{M}$ is a lifted martingale rearrangement coupling of $\widehat{\pi}=\iota(\pi)$. ###### Proposition 2.5. Let $\mu,\nu\in\mathcal{P}_{1}(\mathbb{R})$ be such that $\mu\leq_{cx}\nu$ and $\mu\neq\nu$ and $\pi\in\Pi(\mu,\nu)$ be a coupling between $\mu$ and $\nu$ which satisfies the barycentre dispersion assumption (2.10). Then the measure $\widehat{M}$ defined by (2.26) is a lifted martingale rearrangement coupling of the lifted coupling $\widehat{\pi}=\iota(\pi)$: $\inf_{\widehat{M}^{\prime}\in\widehat{\Pi}^{\mathrm{M}}(\mu,\nu)}\widehat{\mathcal{AW}}_{1}(\widehat{\pi},\widehat{M}^{\prime})=\widehat{\mathcal{AW}}_{1}(\widehat{\pi},\widehat{M})=\int_{(0,1)}\mathcal{W}_{1}(\pi_{F_{\mu}^{-1}(u)},m_{u})\,du=\int_{(0,1)}\left|G(u)-F_{\mu}^{-1}(u)\right|\,du.$ Since $u\mapsto\pi_{F_{\mu}^{-1}(u)}$ is constant on the jumps of $F_{\mu}$ by (1.11), we immediately deduce by Proposition 2.3 that $M$ is a martingale rearrangement coupling of $\pi$. ###### Corollary 2.6. Let $\mu,\nu\in\mathcal{P}_{1}(\mathbb{R})$ be such that $\mu\leq_{cx}\nu$ and $\mu\neq\nu$ and $\pi\in\Pi(\mu,\nu)$ be a coupling between $\mu$ and $\nu$ which satisfies the barycentre dispersion assumption (2.10). Then the measure $M$ defined by (2.27) is a martingale rearrangement coupling of $\pi$: $\inf_{M^{\prime}\in\Pi^{\mathrm{M}}(\mu,\nu)}\mathcal{AW}_{1}(\pi,M^{\prime})=\mathcal{AW}_{1}(\pi,M)=\int_{\mathbb{R}}\mathcal{W}_{1}(\pi_{x},M_{x})\,\mu(dx)=\int_{\mathbb{R}}\left|\int_{\mathbb{R}}y\,\pi_{x}(dy)-x\right|\,\mu(dx).$ ###### Remark 2.7.
As seen from the proof of Proposition 2.5 just below, for $\widehat{M}$ defined by (2.26) to be a lifted martingale rearrangement coupling of the lifted coupling $\widehat{\pi}=\iota(\pi)$ and therefore $M$ defined by (2.27) to be a martingale rearrangement coupling of $\pi$, it is enough that $u\mapsto m_{u}$ is measurable, satisfies (2.24), (2.25) and $m_{u}=\pi_{F_{\mu}^{-1}(u)}$ for all $u\in\mathcal{U}^{0}\cup(\mathcal{U}^{+}\backslash A^{+})\cup(\mathcal{U}^{-}\backslash A^{-})$. ###### Proof of Proposition 2.5. Assume for a moment that $\widehat{M}\in\widehat{\Pi}^{\mathrm{M}}(\mu,\nu)$. Then we have by (2.24) that for all $u\in(0,1)$, $\pi_{F_{\mu}^{-1}(u)}\leq_{st}m_{u}$ or $m_{u}\leq_{st}\pi_{F_{\mu}^{-1}(u)}$, hence $\mathcal{W}_{1}(\pi_{F_{\mu}^{-1}(u)},m_{u})=|G(u)-F_{\mu}^{-1}(u)|$ and $\widehat{\mathcal{AW}}_{1}(\widehat{\pi},\widehat{M})\leq\int_{(0,1)}\mathcal{W}_{1}(\pi_{F_{\mu}^{-1}(u)},m_{u})\,du=\int_{(0,1)}|G(u)-F_{\mu}^{-1}(u)|\,du,$ which proves the claim by Lemma 2.2. It remains to show that $\widehat{M}\in\widehat{\Pi}^{\mathrm{M}}(\mu,\nu)$. By the inverse transform sampling and the fact that $m_{u}$ has mean $F_{\mu}^{-1}(u)$ for all $u\in(0,1)$, it is clear that $\widehat{M}$ is a lifted martingale coupling between $\mu$ and $\int_{u\in(0,1)}m_{u}(dy)\,du$. To conclude it is therefore sufficient to check that $\int_{u\in(0,1)}m_{u}(dy)\,du=\nu.$ (2.28) To this end, let $H:[0,1]\to\mathbb{R}$ be measurable and bounded. 
Using (2.15), (2.17) and [24, Lemma 6.1] applied with $f_{1}=(F_{\mu}^{-1}-G)^{+}$, $f_{2}=(F_{\mu}^{-1}-G)^{-}$, $u_{0}=1$ and $h:u\mapsto\frac{H(\phi(u))}{F_{\mu}^{-1}(\phi(u))-G(\phi(u))+G(u)-F_{\mu}^{-1}(u)}$ for the third equality, we get $\displaystyle\int_{\mathcal{U}^{+}}(1-p(u))H(u)\,du$ $\displaystyle=\int_{0}^{1}\frac{(F_{\mu}^{-1}-G)^{+}(u)}{F_{\mu}^{-1}(u)-G(u)+G(\phi(u))-F_{\mu}^{-1}(\phi(u))}H(u)\,du$ $\displaystyle=\int_{0}^{1}h(\phi(u))\,d\Delta_{+}(u)$ $\displaystyle=\int_{0}^{1}h(v)\,d\Delta_{-}(v)$ $\displaystyle=\int_{0}^{1}\frac{(F_{\mu}^{-1}-G)^{-}(v)}{F_{\mu}^{-1}(\phi(v))-G(\phi(v))+G(v)-F_{\mu}^{-1}(v)}H(\phi(v))\,dv$ $\displaystyle=\int_{\mathcal{U}^{-}}p(\phi(v))H(\phi(v))\,dv.$ Similarly, we have $\int_{\mathcal{U}^{-}}(1-p(u))H(u)\,du=\int_{\mathcal{U}^{+}}p(\phi(u))H(\phi(u))\,du$. We deduce that $\displaystyle\begin{split}\int_{0}^{1}H(u)\,du&=\int_{\mathcal{U}^{0}}H(u)\,du+\int_{\mathcal{U}^{+}}p(u)H(u)\,du+\int_{\mathcal{U}^{+}}(1-p(u))H(u)\,du\\\ &\phantom{=}+\int_{\mathcal{U}^{-}}p(u)H(u)\,du+\int_{\mathcal{U}^{-}}(1-p(u))H(u)\,du\\\ &=\int_{\mathcal{U}^{0}}H(u)\,du+\int_{\mathcal{U}^{+}\cup\mathcal{U}^{-}}(p(u)H(u)+p(\phi(u))H(\phi(u)))\,du.\end{split}$ (2.29) Let $f:\mathbb{R}\to\mathbb{R}$ be measurable and bounded. 
Using (2.29) applied with $H:u\mapsto\int_{\mathbb{R}}f(y)\,m_{u}(dy)$ for the first equality, the fact that $m_{u}=\pi_{F_{\mu}^{-1}(u)}$ for all $u\in\mathcal{U}^{0}$ and (2.25) for the second equality, (2.29) again applied with $H:u\mapsto\int_{\mathbb{R}}f(y)\,\pi_{F_{\mu}^{-1}(u)}(dy)$ for the third equality and the inverse transform sampling for the last equality, we get $\displaystyle\int_{0}^{1}\int_{\mathbb{R}}f(y)\,m_{u}(dy)\,du$ $\displaystyle=\int_{\mathcal{U}^{0}}\int_{\mathbb{R}}f(y)\,m_{u}(dy)\,du+\int_{\mathcal{U}^{+}\cup\mathcal{U}^{-}}\int_{\mathbb{R}}f(y)\,(p(u)\,m_{u}(dy)+p(\phi(u))\,m_{\phi(u)}(dy))\,du$ $\displaystyle=\int_{\mathcal{U}^{0}}\int_{\mathbb{R}}f(y)\,\pi_{F_{\mu}^{-1}(u)}(dy)\,du$ $\displaystyle\phantom{=}+\int_{\mathcal{U}^{+}\cup\mathcal{U}^{-}}\int_{\mathbb{R}}f(y)\,(p(u)\,\pi_{F_{\mu}^{-1}(u)}(dy)+p(\phi(u))\,\pi_{F_{\mu}^{-1}(\phi(u))}(dy))\,du$ $\displaystyle=\int_{0}^{1}\int_{\mathbb{R}}f(y)\,\pi_{F_{\mu}^{-1}(u)}(dy)\,du$ $\displaystyle=\int_{\mathbb{R}}f(y)\,\nu(dy),$ which shows (2.28) and concludes the proof. ∎ ###### Proof of Lemma 2.4. Let $(y,\widetilde{y},\mu,\widetilde{\mu})\in\mathfrak{B}$, $x$ and $\widetilde{x}$ be the respective means of $\mu$ and $\widetilde{\mu}$ and $p=\frac{\stackrel{{\scriptstyle\sim}}{{\smash{x}\rule{0.0pt}{1.5pt}}}-\stackrel{{\scriptstyle\sim}}{{\smash{y}\rule{0.0pt}{1.5pt}}}}{y-x+\stackrel{{\scriptstyle\sim}}{{\smash{x}\rule{0.0pt}{1.5pt}}}-\stackrel{{\scriptstyle\sim}}{{\smash{y}\rule{0.0pt}{1.5pt}}}}$. First we construct two measures $\nu,\widetilde{\nu}\in\mathcal{P}_{1}(\mathbb{R})$ which satisfy (2.19). Then we show that $\nu$ and $\widetilde{\nu}$ are measurable in $(y,\widetilde{y},\mu,\widetilde{\mu})$.
We set for $q\in[0,p\wedge(1-p)]$ $\displaystyle\begin{split}&J(q):=\int^{\frac{q}{p}}_{0}F_{\stackrel{{\scriptstyle\sim}}{{\smash{\mu}\rule{0.0pt}{1.5pt}}}}^{-1}\left(\frac{1-pu-p}{1-p}\right)\,du+\int_{\frac{q}{p}}^{1}F_{\mu}^{-1}\left(u\right)\,du,\\\ \text{and}\quad&\widetilde{J}(q):=\int_{0}^{\frac{q}{1-p}}F_{\mu}^{-1}\left(\frac{(1-p)u}{p}\right)\,du+\int_{0}^{\frac{1-q-p}{1-p}}F_{\stackrel{{\scriptstyle\sim}}{{\smash{\mu}\rule{0.0pt}{1.5pt}}}}^{-1}(u)\,du.\end{split}$ (2.30) Since the quantile functions are non-decreasing, the function $J$ is continuous and concave as the sum of two concave functions and the function $\widetilde{J}$ is continuous and convex as the sum of two convex functions. We have $J(0)=\int_{0}^{1}F_{\mu}^{-1}(u)\,du=x$ and $\widetilde{J}(0)=\int_{0}^{1}F_{\stackrel{{\scriptstyle\sim}}{{\smash{\mu}\rule{0.0pt}{1.5pt}}}}^{-1}(u)\,du=\widetilde{x}$ by the inverse transform sampling. If $p\leq\frac{1}{2}$, then by the change of variables $v=\frac{1-pu-p}{1-p}$ and since $F_{\stackrel{{\scriptstyle\sim}}{{\smash{\mu}\rule{0.0pt}{1.5pt}}}}^{-1}$ is non-decreasing, we have $\displaystyle J(p)$ $\displaystyle=\frac{1-p}{p}\int_{\frac{1-2p}{1-p}}^{1}F_{\stackrel{{\scriptstyle\sim}}{{\smash{\mu}\rule{0.0pt}{1.5pt}}}}^{-1}\left(v\right)\,dv$ $\displaystyle=\int_{0}^{1}F_{\stackrel{{\scriptstyle\sim}}{{\smash{\mu}\rule{0.0pt}{1.5pt}}}}^{-1}\left(v\right)\,dv+\frac{1-2p}{1-p}\left(\frac{1}{1-\frac{1-2p}{1-p}}\int_{\frac{1-2p}{1-p}}^{1}F_{\stackrel{{\scriptstyle\sim}}{{\smash{\mu}\rule{0.0pt}{1.5pt}}}}^{-1}\left(v\right)\,dv-\frac{1-p}{1-2p}\int_{0}^{\frac{1-2p}{1-p}}F_{\stackrel{{\scriptstyle\sim}}{{\smash{\mu}\rule{0.0pt}{1.5pt}}}}^{-1}\left(v\right)\,dv\right)$ $\displaystyle\geq\int_{0}^{1}F_{\stackrel{{\scriptstyle\sim}}{{\smash{\mu}\rule{0.0pt}{1.5pt}}}}^{-1}\left(v\right)\,dv=\widetilde{x}.$ Since $x<y<\widetilde{x}$ and the function $J$ is concave, there is a unique $q_{\star}\in(0,p)$ such that $J(q_{\star})=y$. 
Moreover, the left-hand derivative of $J$ at $q_{\star}$ is positive which writes $F_{\stackrel{{\scriptstyle\sim}}{{\smash{\mu}\rule{0.0pt}{1.5pt}}}}^{-1}\left(\frac{1-q_{\star}-p}{1-p}+\right)>F_{\mu}^{-1}\left(\frac{q_{\star}}{p}\right).$ (2.31) If $p>\frac{1}{2}$, we have $\widetilde{J}(1-p)=\int_{0}^{1}F_{\mu}^{-1}\left(\frac{(1-p)u}{p}\right)\,du\leq\int_{0}^{1}F_{\mu}^{-1}(v)\,dv=x$ and by convexity of $\widetilde{J}$ there is a unique $q_{\star}\in(0,1-p)$ such that $\widetilde{J}(q_{\star})=\widetilde{y}$. Moreover, the left-hand derivative of $\widetilde{J}$ at $q_{\star}$ is negative so that (2.31) still holds. Let $\nu$ and $\widetilde{\nu}$ be the respective images of the Lebesgue measure on $[0,1]$ by $\displaystyle\begin{split}&u\mapsto\mathds{1}_{\\{u<\frac{q_{\star}}{p}\\}}F_{\stackrel{{\scriptstyle\sim}}{{\smash{\mu}\rule{0.0pt}{1.5pt}}}}^{-1}\left(\frac{1-pu-p}{1-p}\right)+\mathds{1}_{\\{u\geq\frac{q_{\star}}{p}\\}}F_{\mu}^{-1}(u)\\\ \text{and}\quad&u\mapsto\mathds{1}_{\\{u<\frac{q_{\star}}{1-p}\\}}F_{\mu}^{-1}\left(\frac{(1-p)u}{p}\right)+\mathds{1}_{\\{u\geq\frac{q_{\star}}{1-p}\\}}F_{\stackrel{{\scriptstyle\sim}}{{\smash{\mu}\rule{0.0pt}{1.5pt}}}}^{-1}\left(u-\frac{q_{\star}}{1-p}\right).\end{split}$ (2.32) With the definition of $J$ and $\widetilde{J}$, we easily check that $\int_{\mathbb{R}}z\,\nu(dz)=J(q_{\star})$ and $\int_{\mathbb{R}}z\,\widetilde{\nu}(dz)=\widetilde{J}(q_{\star})$. Moreover, the inequality (2.31) implies that for each $u\in(0,\frac{q_{\star}}{p})$, $F_{\stackrel{{\scriptstyle\sim}}{{\smash{\mu}\rule{0.0pt}{1.5pt}}}}^{-1}\left(\frac{1-pu-p}{1-p}\right)\geq F_{\stackrel{{\scriptstyle\sim}}{{\smash{\mu}\rule{0.0pt}{1.5pt}}}}^{-1}\left(\frac{1-q_{\star}-p}{1-p}+\right)>F_{\mu}^{-1}\left(\frac{q_{\star}}{p}\right)\geq F_{\mu}^{-1}(u),$ so that $\nu$ dominates for the stochastic order the image $\mu$ of the Lebesgue measure on $(0,1)$ by $F_{\mu}^{-1}$. In a symmetric way, $\widetilde{\nu}\leq_{st}\widetilde{\mu}$. 
For $h:\mathbb{R}\to\mathbb{R}$ measurable and bounded, we have using the changes of variables $v=\frac{1-pu-p}{1-p}$, $v=\frac{(1-p)u}{p}$ and $v=u-\frac{q_{\star}}{1-p}$ then the inverse transform sampling for the last equality $\displaystyle p\int_{\mathbb{R}}h(z)\,\nu(dz)+(1-p)\int_{\mathbb{R}}h(z)\,\widetilde{\nu}(dz)$ $\displaystyle=p\left(\int_{0}^{\frac{q_{\star}}{p}}h\left(F_{\stackrel{{\scriptstyle\sim}}{{\smash{\mu}\rule{0.0pt}{1.5pt}}}}^{-1}\left(\frac{1-pu-p}{1-p}\right)\right)\,du+\int_{\frac{q_{\star}}{p}}^{1}h(F_{\mu}^{-1}(u))\,du\right)$ $\displaystyle\phantom{=}+(1-p)\left(\int_{0}^{\frac{q_{\star}}{1-p}}h\left(F_{\mu}^{-1}\left(\frac{(1-p)u}{p}\right)\right)\,du+\int^{1}_{\frac{q_{\star}}{1-p}}h\left(F_{\stackrel{{\scriptstyle\sim}}{{\smash{\mu}\rule{0.0pt}{1.5pt}}}}^{-1}\left(u-\frac{q_{\star}}{1-p}\right)\right)\,du\right)$ $\displaystyle=(1-p)\int^{1}_{\frac{1-q_{\star}-p}{1-p}}h\left(F_{\stackrel{{\scriptstyle\sim}}{{\smash{\mu}\rule{0.0pt}{1.5pt}}}}^{-1}(v)\right)\,dv+p\int_{\frac{q_{\star}}{p}}^{1}h(F_{\mu}^{-1}(u))\,du+p\int_{0}^{\frac{q_{\star}}{p}}h\left(F_{\mu}^{-1}\left(v\right)\right)dv$ $\displaystyle\phantom{=}+(1-p)\int_{0}^{\frac{1-q_{\star}-p}{1-p}}h\left(F_{\stackrel{{\scriptstyle\sim}}{{\smash{\mu}\rule{0.0pt}{1.5pt}}}}^{-1}\left(v\right)\right)\,dv$ $\displaystyle=p\int_{0}^{1}h(F_{\mu}^{-1}(u))\,du+(1-p)\int_{0}^{1}h\left(F_{\stackrel{{\scriptstyle\sim}}{{\smash{\mu}\rule{0.0pt}{1.5pt}}}}^{-1}\left(v\right)\right)\,dv$ $\displaystyle=p\int_{\mathbb{R}}h(z)\,\mu(dz)+(1-p)\int_{\mathbb{R}}h(z)\,\widetilde{\mu}(dz).$ Hence $p\nu+(1-p)\tilde{\nu}=p\mu+(1-p)\widetilde{\mu}$. Taking expectations then using the definition of $p$, we get $pJ(q_{\star})+(1-p)\widetilde{J}(q_{\star})=px+(1-p)\widetilde{x}=py+(1-p)\widetilde{y}.$ When $p\leq\frac{1}{2}$ (resp. $p>\frac{1}{2}$), $J(q_{\star})=y$ (resp. $\widetilde{J}(q_{\star})=\widetilde{y}$) and we deduce that $\widetilde{J}(q_{\star})=\widetilde{y}$ (resp. $J(q_{\star})=y$). 
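Since the construction of $\nu$ and $\widetilde{\nu}$ through (2.30)–(2.32) is fully explicit, it can be checked numerically on a small discrete instance. The sketch below is ours: the measures $\mu=\frac12(\delta_{-1}+\delta_{1})$, $\widetilde{\mu}=\frac12(\delta_{2}+\delta_{4})$ and the targets $y=1$, $\widetilde{y}=2$ are arbitrary choices satisfying $x<y\leq\widetilde{y}<\widetilde{x}$; the quantile functions are evaluated on a midpoint grid, and $q_{\star}$ is located by bisection using the monotonicity of $J$ on $[0,p]$.

```python
import numpy as np

# Sanity check of the construction (2.30)-(2.32) on a discrete example.
# Our choices: mu = (delta_{-1}+delta_1)/2 (mean x=0), mu~ = (delta_2+delta_4)/2
# (mean x~=3), and targets y=1, y~=2, so that x < y <= y~ < x~.

def make_quantile(atoms, weights):
    """Left-continuous generalised inverse F^{-1}(u) = inf{z : F(z) >= u}."""
    atoms = np.asarray(atoms, float)
    cw = np.cumsum(weights)
    return lambda v: atoms[np.searchsorted(cw, v, side="left")]

F_mu = make_quantile([-1.0, 1.0], [0.5, 0.5])
F_mt = make_quantile([2.0, 4.0], [0.5, 0.5])
x, xt, y, yt = 0.0, 3.0, 1.0, 2.0
p = (xt - yt) / (y - x + xt - yt)          # here p = 1/2

N = 200_000
u = (np.arange(N) + 0.5) / N               # midpoint grid on (0,1)

def J(q):                                  # first function in (2.30), case p <= 1/2
    return np.mean(np.where(u < q / p, F_mt((1 - p * u - p) / (1 - p)), F_mu(u)))

# J(0) = x < y <= J(p), so bisection locates the unique root q_star of J = y.
lo, hi = 0.0, p
for _ in range(60):
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if J(mid) < y else (lo, mid)
q_star = (lo + hi) / 2                     # exact value 0.1 for this example

# Samples of nu and nu~ via the two maps in (2.32).
nu = np.where(u < q_star / p, F_mt((1 - p * u - p) / (1 - p)), F_mu(u))
nut = np.where(u < q_star / (1 - p), F_mu((1 - p) * u / p),
               F_mt(u - q_star / (1 - p)))

assert abs(nu.mean() - y) < 1e-2 and abs(nut.mean() - yt) < 1e-2
# Mixture identity p*nu + (1-p)*nu~ = p*mu + (1-p)*mu~ (here p = 1/2, so the
# pooled, equally weighted samples of both sides must match after sorting).
mix_lhs = np.sort(np.concatenate([nu, nut]))
mix_rhs = np.sort(np.concatenate([F_mu(u), F_mt(u)]))
assert np.mean(np.abs(mix_lhs - mix_rhs)) < 1e-2
```

The assertions verify the three properties of (2.19): $\nu$ and $\widetilde{\nu}$ have means $y$ and $\widetilde{y}$, and the two mixtures coincide; the stochastic dominations $\mu\leq_{st}\nu$ and $\widetilde{\nu}\leq_{st}\widetilde{\mu}$ can be read off the sorted samples in the same way.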
It remains to show that $\nu$ and $\widetilde{\nu}$ are measurable in $(y,\widetilde{y},\mu,\widetilde{\mu})$. It is clear that $p$ is a measurable function of $(y,\widetilde{y},\mu,\widetilde{\mu})$. Moreover we always have by definition $p\in(0,1)$, so the relation $p\nu+(1-p)\widetilde{\nu}=p\mu+(1-p)\widetilde{\mu}$ implies that it suffices to show that $\nu$ is measurable in $(y,\widetilde{y},\mu,\widetilde{\mu})$. Any quantile function is an element of the set $\mathcal{D}$ of the real- valued càglàd functions on $(0,1)$. Analogously to the Skorokhod space of the real-valued càdlàg functions on $(0,1)$, we endow $\mathcal{D}$ with the $\sigma$-field generated by the projection maps $\alpha_{u}:\mathcal{D}\ni f\mapsto f(u)$, $u\in(0,1)$, which coincides with the $\sigma$-field $\sigma(\alpha_{u},u\in T)$ for any dense subset $T\subset(0,1)$. Let $(\mu_{n})_{n\in\mathbb{N}}\in\mathcal{P}_{1}(\mathbb{R})^{\mathbb{N}}$ converge to $\mu$ for the weak convergence topology and $T$ be the complement of the at most countable set of discontinuities of $F_{\mu}^{-1}$. Then for all $u\in T$, $F_{\mu_{n}}^{-1}(u)=\alpha_{u}(F_{\mu_{n}}^{-1})$ converges to $F_{\mu}^{-1}(u)=\alpha_{u}(F_{\mu}^{-1})$. We deduce that $F_{\mu}^{-1}$ and $F_{\stackrel{{\scriptstyle\sim}}{{\smash{\mu}\rule{0.0pt}{1.5pt}}}}^{-1}$ are respectively measurable in $\mu$ and $\widetilde{\mu}$. By (2.32), $\nu$ is the image of the Lebesgue measure on $[0,1]$ by a measurable function of $p,F_{\mu}^{-1},F_{\stackrel{{\scriptstyle\sim}}{{\smash{\mu}\rule{0.0pt}{1.5pt}}}}^{-1}$ and $q_{\star}$, hence it remains to prove the measurability of $q_{\star}$ in $(y,\widetilde{y},\mu,\widetilde{\mu})$. If $p\leq\frac{1}{2}$ we saw that $J(0)=x<y<\widetilde{x}\leq J(p)$ and $q_{\star}$ is the only real number in $[0,p]$ such that $J(q_{\star})=y$. By concavity of $J$ we necessarily have for all $q\in[0,p]$ that $q_{\star}\leq q$ iff $J(q)\geq y$. 
Similarly, if $p>\frac{1}{2}$ then for all $q\in[0,1-p]$, $q_{\star}\leq q$ iff $\widetilde{J}(q)\leq\widetilde{y}$. Fix then $q\in[0,p\wedge(1-p)]$. To conclude, it suffices to show that $J(q)$ and $\widetilde{J}(q)$ are measurable in $(y,\widetilde{y},\mu,\widetilde{\mu})$. By [23, Lemma 4.5], the Borel $\sigma$-algebras of the weak convergence topology and the $\mathcal{W}_{1}$-distance topology coincide on $\mathcal{P}_{1}(\mathbb{R})$. Moreover, since $(\mathcal{P}_{1}(\mathbb{R}),\mathcal{W}_{1})$ is separable [36, Theorem 6.18], we deduce from [2, Theorem 4.44] that the $\sigma$-field on $\mathfrak{B}$ coincides with the trace of the Borel $\sigma$-algebra of $\mathbb{R}\times\mathbb{R}\times\mathcal{P}_{1}(\mathbb{R})\times\mathcal{P}_{1}(\mathbb{R})$ endowed with the metric $((y,\widetilde{y},\mu,\widetilde{\mu}),(y^{\prime},\widetilde{y}^{\prime},\mu^{\prime},\widetilde{\mu}^{\prime}))\mapsto|y-y^{\prime}|+|\widetilde{y}-\widetilde{y}^{\prime}|+\mathcal{W}_{1}(\mu,\mu^{\prime})+\mathcal{W}_{1}(\widetilde{\mu},\widetilde{\mu}^{\prime})$. 
Then the measurability of $J(q)$ and $\widetilde{J}(q)$ follows from their continuity in $(y,\widetilde{y},\mu,\widetilde{\mu})$ with respect to the latter metric, which is clear in view of their definition (2.30), which implies by the changes of variables $v=\frac{1-pu-p}{1-p}$ and $v=\frac{(1-p)u}{p}$ that $\displaystyle J(q)$ $\displaystyle=\frac{1-p}{p}\int_{\frac{1-q-p}{1-p}}^{1}F_{\stackrel{{\scriptstyle\sim}}{{\smash{\mu}\rule{0.0pt}{1.5pt}}}}^{-1}(v)\,dv+\int_{\frac{q}{p}}^{1}F_{\mu}^{-1}(u)\,du,\quad\widetilde{J}(q)=\frac{p}{1-p}\int_{0}^{\frac{q}{p}}F_{\mu}^{-1}(v)\,dv+\int_{0}^{\frac{1-q-p}{1-p}}F_{\stackrel{{\scriptstyle\sim}}{{\smash{\mu}\rule{0.0pt}{1.5pt}}}}^{-1}(u)\,du,$ and the easy fact that if $(a_{n})_{n\in\mathbb{N}}\in[0,1]^{\mathbb{N}}$ converges to $a\in[0,1]$ and $(f_{n})_{n\in\mathbb{N}}\in L^{1}([0,1])^{\mathbb{N}}$ converges to $f\in L^{1}([0,1])$ in $L^{1}$, then $\int_{0}^{a_{n}}f_{n}(u)\,du$ converges to $\int_{0}^{a}f(u)\,du$ as $n\to+\infty$. ∎ ## 3 Martingale rearrangement couplings of the Hoeffding-Fréchet coupling ### 3.1 The inverse transform martingale coupling We come back to the inverse transform martingale coupling and the family parametrised by $\mathcal{Q}$ introduced in [24], since they will have particular significance in the remainder of the present paper. We briefly recall the construction and main properties and refer to [24] for an extensive study. Let $\mu,\nu\in\mathcal{P}_{1}(\mathbb{R})$ be such that $\mu\leq_{cx}\nu$ and $\mu\neq\nu$. For $u\in[0,1]$ we define $\Psi_{+}(u)=\int_{0}^{u}(F_{\mu}^{-1}-F_{\nu}^{-1})^{+}(v)\,dv\quad\text{and}\quad\Psi_{-}(u)=\int_{0}^{u}(F_{\mu}^{-1}-F_{\nu}^{-1})^{-}(v)\,dv,$ (3.1) with respective left-continuous generalised inverses $\Psi_{+}^{-1}$ and $\Psi_{-}^{-1}$.
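The curves $\Psi_{\pm}$ are straightforward to evaluate numerically. Below is a minimal sketch with measures of our own choosing, $\mu=\delta_{0}$ and $\nu=\frac{1}{2}(\delta_{-1}+\delta_{1})$, for which $\mu\leq_{cx}\nu$; in that case the cumulative integrals in (3.1) reduce to $\Psi_{+}(u)=\min(u,\frac{1}{2})$ and $\Psi_{-}(u)=\max(u-\frac{1}{2},0)$, and $\Psi_{+}(1)=\Psi_{-}(1)$ because $\mu$ and $\nu$ share the same mean.

```python
import numpy as np

# Our own illustration of (3.1) for mu = delta_0 and nu = (delta_{-1}+delta_1)/2.
N = 100_000
u = (np.arange(N) + 0.5) / N                       # midpoint grid on (0,1)
F_mu_inv = np.zeros_like(u)                        # quantile function of delta_0
F_nu_inv = np.where(u <= 0.5, -1.0, 1.0)           # quantile function of nu

diff = F_mu_inv - F_nu_inv
psi_plus = np.cumsum(np.maximum(diff, 0.0)) / N    # Psi_+(u_i), cumulative sums
psi_minus = np.cumsum(np.maximum(-diff, 0.0)) / N  # Psi_-(u_i)

assert abs(psi_plus[-1] - 0.5) < 1e-3              # Psi_+(1) = 1/2 here
assert abs(psi_plus[-1] - psi_minus[-1]) < 1e-3    # equal means => Psi_+(1) = Psi_-(1)
assert np.allclose(psi_plus, np.minimum(u, 0.5), atol=1e-4)
assert np.allclose(psi_minus, np.maximum(u - 0.5, 0.0), atol=1e-4)
```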
We then define $\mathcal{Q}$ as the set of probability measures on $(0,1)^{2}$ with first marginal $\frac{1}{\Psi_{+}(1)}d\Psi_{+}$, second marginal $\frac{1}{\Psi_{+}(1)}d\Psi_{-}$ and such that $u<v$ for $Q(du,dv)$-almost every $(u,v)\in(0,1)^{2}$. Since $d\,\Psi_{+}$ and $d\,\Psi_{-}$ are concentrated on two disjoint Borel sets, there exists for each $Q\in\mathcal{Q}$ a probability kernel $(\pi^{Q}_{u})_{u\in(0,1)}$ such that $Q(du,dv)=\frac{1}{\Psi_{+}(1)}d\Psi_{+}(u)\,\pi^{Q}_{u}(dv)=\frac{1}{\Psi_{+}(1)}d\Psi_{-}(v)\,\pi^{Q}_{v}(du),$ (3.2) and we exhibit a probability kernel $(\widetilde{m}^{Q}_{u})_{u\in(0,1)}$ which satisfies for $du$-almost all $u\in(0,1)$ such that $F_{\mu}^{-1}(u)\neq F_{\nu}^{-1}(u)$ $\widetilde{m}^{Q}_{u}(dy)=\int_{v\in(0,1)}\left(\frac{F_{\mu}^{-1}(u)-F_{\nu}^{-1}(u)}{F_{\nu}^{-1}(v)-F_{\nu}^{-1}(u)}\delta_{F_{\nu}^{-1}(v)}(dy)+\frac{F_{\nu}^{-1}(v)-F_{\mu}^{-1}(u)}{F_{\nu}^{-1}(v)-F_{\nu}^{-1}(u)}\delta_{F_{\nu}^{-1}(u)}(dy)\right)\,\pi^{Q}_{u}(dv),$ (3.3) and $\widetilde{m}^{Q}_{u}(dy)=\delta_{F_{\nu}^{-1}(u)}(dy)$ for all $u\in(0,1)$ such that $F_{\mu}^{-1}(u)=F_{\nu}^{-1}(u)$. Then the measure $\widehat{M}^{Q}(du,dx,dy)=\lambda_{(0,1)}(du)\,\delta_{F_{\mu}^{-1}(u)}(dx)\,\widetilde{m}^{Q}_{u}(dy)$ (3.4) is a lifted martingale coupling between $\mu$ and $\nu$. Moreover it was shown by [24, Proposition 2.18] and its proof that for $du$-almost all $u\in(0,1)$, $\int_{\mathbb{R}}|y-F_{\nu}^{-1}(u)|\,\widetilde{m}^{Q}_{u}(dy)=|F_{\mu}^{-1}(u)-F_{\nu}^{-1}(u)|,$ (3.5) from which we deduce that the measure $M^{Q}(dx,dy)=\int_{0}^{1}\delta_{F_{\mu}^{-1}(u)}(dx)\,\widetilde{m}^{Q}_{u}(dy)\,du$ (3.6) is a martingale coupling between $\mu$ and $\nu$ which satisfies $\int_{\mathbb{R}\times\mathbb{R}}|y-x|\,M^{Q}(dx,dy)\leq 2\mathcal{W}_{1}(\mu,\nu)$. 
Let also $\displaystyle\mathcal{U}^{+}=\\{u\in(0,1)\mid F_{\mu}^{-1}(u)>F_{\nu}^{-1}(u)\\},\quad\mathcal{U}^{-}=\\{u\in(0,1)\mid F_{\mu}^{-1}(u)<F_{\nu}^{-1}(u)\\},$ (3.7) and $\displaystyle\mathcal{U}^{0}=\\{u\in(0,1)\mid F_{\mu}^{-1}(u)=F_{\nu}^{-1}(u)\\}.$ (3.8) Thanks to the equality $\Psi_{+}(1)=\Psi_{-}(1)$, consequence of the equality of the respective means of $\mu$ and $\nu$, we can set for all $u\in[0,1]$ $\displaystyle\begin{split}\varphi(u)=\left\\{\begin{array}[]{rcl}\Psi_{-}^{-1}(\Psi_{+}(u))&\text{if}&u\in\mathcal{U}^{+};\\\ \Psi_{+}^{-1}(\Psi_{-}(u))&\text{if}&u\in\mathcal{U}^{-};\\\ u&\text{if}&u\in\mathcal{U}^{0}.\end{array}\right.\end{split}$ (3.9) Then the measure $Q^{IT}(du,dv)=\frac{1}{\Psi_{+}(1)}d\Psi_{+}(u)\mathds{1}_{\\{0<\varphi(u)<1\\}}\,\delta_{\varphi(u)}(dv)$ belongs to $\mathcal{Q}$. The martingale coupling $M^{IT}=M^{Q^{IT}}$ is the so called inverse transform martingale coupling, associated to the probability kernel $\widetilde{m}^{IT}=\widetilde{m}^{Q^{IT}}$ which satisfies for $du$-almost all $u\in(0,1)$ $\widetilde{m}^{IT}(u,dy)=p(u)\,\delta_{F_{\nu}^{-1}(\varphi(u))}(dy)+\left(1-p(u)\right)\,\delta_{F_{\nu}^{-1}(u)}(dy),$ (3.10) where $p(u)=\mathds{1}_{\\{F_{\mu}^{-1}(u)\neq F_{\nu}^{-1}(u)\\}}\frac{F_{\mu}^{-1}(u)-F_{\nu}^{-1}(u)}{F_{\nu}^{-1}(\varphi(u))-F_{\nu}^{-1}(u)}$. ### 3.2 The Hoeffding-Fréchet coupling Let $\mu$ and $\nu$ be two probability measures on the real line with finite first moment. We recall that the Hoeffding-Fréchet coupling between $\mu$ and $\nu$, denoted $\pi^{HF}$, is by definition the comonotonic coupling between $\mu$ and $\nu$, that is the image of the Lebesgue measure on $(0,1)$ by the map $u\mapsto(F_{\mu}^{-1}(u),F_{\nu}^{-1}(u))$. 
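As an illustration of the comonotonic pairing of quantiles just defined, the following sketch (our own toy example, with $\mu=\frac12(\delta_{0}+\delta_{2})$ and $\nu=\frac12(\delta_{1}+\delta_{3})$) evaluates the cost $\int_{0}^{1}|F_{\nu}^{-1}(u)-F_{\mu}^{-1}(u)|\,du$ of $\pi^{HF}$ on a grid and compares it with the antitone pairing $u\mapsto(F_{\mu}^{-1}(u),F_{\nu}^{-1}(1-u))$, consistently with the $\mathcal{W}_{1}$-optimality of $\pi^{HF}$ recalled below.

```python
import numpy as np

# Our own toy example: mu = (delta_0 + delta_2)/2 and nu = (delta_1 + delta_3)/2.
# The Hoeffding-Frechet coupling pairs the u-quantiles of mu and nu; its cost
# int_0^1 |F_nu^{-1}(u) - F_mu^{-1}(u)| du equals W_1(mu, nu).
N = 100_000
u = (np.arange(N) + 0.5) / N                # midpoint grid on (0,1)
F_mu_inv = np.where(u <= 0.5, 0.0, 2.0)     # quantile function of mu
F_nu_inv = np.where(u <= 0.5, 1.0, 3.0)     # quantile function of nu

comonotone_cost = np.mean(np.abs(F_nu_inv - F_mu_inv))
# Another admissible coupling costs at least as much; e.g. the antitone pairing
# u -> (F_mu^{-1}(u), F_nu^{-1}(1-u)), realised here by reversing the grid.
antitone_cost = np.mean(np.abs(F_nu_inv[::-1] - F_mu_inv))

assert abs(comonotone_cost - 1.0) < 1e-6    # W_1(mu, nu) = 1 for this example
assert comonotone_cost <= antitone_cost
print(comonotone_cost, antitone_cost)       # 1.0 2.0
```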
Equivalently, we can write $\pi^{HF}(dx,dy)=\int_{(0,1)}\delta_{(F_{\mu}^{-1}(u),F_{\nu}^{-1}(u))}(dx,dy)\,du.$ This coupling is of paramount importance in the classical optimal transport theory in dimension $1$ since it attains the infimum in the minimisation problem $\inf_{P\in\Pi(\mu,\nu)}\int_{\mathbb{R}\times\mathbb{R}}c(x,y)\,P(dx,dy)$ as soon as $c$ satisfies the so-called Monge condition; see [32, Theorem 3.1.2]. Since the latter condition is satisfied by any function $(x,y)\mapsto h(|y-x|)$ with $h:\mathbb{R}\to\mathbb{R}$ convex, we deduce that $\pi^{HF}$ is optimal for $\mathcal{W}_{\rho}(\mu,\nu)$ for all $\rho\geq 1$. By strict convexity, it is even the only coupling optimal for $\mathcal{W}_{\rho}(\mu,\nu)$ for $\rho>1$. Reasoning as in (2.9), we get that for $\mu(dx)$-almost all $x\in\mathbb{R}$, $\pi^{HF}_{x}(dy)=\int_{(0,1)}\delta_{F_{\nu}^{-1}(\theta(x,v))}(dy)\,dv.$ (3.11) By (3.11) and monotonicity and left continuity of $F_{\nu}^{-1}$ we recover the well-known fact that $\pi^{HF}$ is given by a measurable map, i.e. is the image of $\mu$ by $x\mapsto(x,T(x))$ where $T:\mathbb{R}\to\mathbb{R}$ is measurable, iff for all $x\in\mathbb{R}$ such that $\mu(\\{x\\})>0$, $F_{\nu}^{-1}$ is constant on $(F_{\mu}(x-),F_{\mu}(x)]$. In that case, we have $T=F_{\nu}^{-1}\circ F_{\mu}$, referred to as the Monge transport map. ### 3.3 Martingale rearrangement couplings Our family $(M^{Q})_{Q\in\mathcal{Q}}$ of martingale couplings mentioned above was meant to contain the closest martingale couplings to the Hoeffding-Fréchet coupling, the latter being well-known for minimising the Wasserstein distance. Thanks to Wiesel’s definition of martingale rearrangement couplings we can now rephrase the latter sentence in a more formal way. Let $\pi^{HF}$ be the Hoeffding-Fréchet coupling between $\mu$ and $\nu$.
We will consider the following lifted coupling of $\pi^{HF}$: $\widehat{\pi}^{HF}(du,dx,dy)=\lambda_{(0,1)}(du)\,\delta_{F_{\mu}^{-1}(u)}(dx)\,\delta_{F_{\nu}^{-1}(u)}(dy).$ (3.12) Recall the embedding $\iota$ defined by (2.7) and the definition of the map $\theta$ given by (1.14). Then $\iota(\pi^{HF})(du,dx,dy)=\lambda_{(0,1)}(du)\,\delta_{F_{\mu}^{-1}(u)}(dx)\,\int_{0}^{1}\delta_{F_{\nu}^{-1}(\theta(F_{\mu}^{-1}(u),v))}(dy)\,dv,$ which is different from $\widehat{\pi}^{HF}$ when $F_{\nu}^{-1}$ is not constant on the jumps of $F_{\mu}$. We can actually see that $\widehat{\pi}^{HF}=\iota^{\prime}(\pi^{HF})$, where $\iota^{\prime}$ is another embedding of $\Pi(\mu,\nu)$ into $\widehat{\Pi}(\mu,\nu)$, such that for all $\pi\in\Pi(\mu,\nu)$, $\iota^{\prime}(\pi)$ is defined by $\lambda_{(0,1)}(du)\,\delta_{F_{\mu}^{-1}(u)}(dx)\,\left(\mathds{1}_{\\{\mu(\\{F_{\mu}^{-1}(u)\\})>0\\}}\delta_{\left(F_{\pi_{F_{\mu}^{-1}(u)}}\right)^{-1}\left(\frac{u-F_{\mu}(F_{\mu}^{-1}(u)-)}{\mu(\\{F_{\mu}^{-1}(u)\\})}\right)}(dy)+\mathds{1}_{\\{\mu(\\{F_{\mu}^{-1}(u)\\})=0\\}}\pi_{F_{\mu}^{-1}(u)}(dy)\right).$ Although $\widehat{\pi}^{HF}$ is a very natural lifted coupling of $\pi^{HF}$, the embedding $\iota$ used in Section 2.1 appears to be in general simpler than $\iota^{\prime}$. ###### Proposition 3.1. Let $\mu,\nu\in\mathcal{P}_{1}(\mathbb{R})$ be such that $\mu\leq_{cx}\nu$. Then for all $Q\in\mathcal{Q}$, the lifted martingale coupling $\widehat{M}^{Q}$ defined by (3.4) is a lifted martingale rearrangement coupling of the lifted Hoeffding-Fréchet coupling $\widehat{\pi}^{HF}$ defined by (3.12): $\forall Q\in\mathcal{Q},\quad\widehat{\mathcal{AW}}_{1}(\widehat{\pi}^{HF},\widehat{M}^{Q})=\inf_{\widehat{M}\in\widehat{\Pi}^{\mathrm{M}}(\mu,\nu)}\widehat{\mathcal{AW}}_{1}(\widehat{\pi}^{HF},\widehat{M}).$ ###### Proof. Let $Q\in\mathcal{Q}$. The fact that $\widehat{M}^{Q}\in\widehat{\Pi}^{\mathrm{M}}(\mu,\nu)$ is clear.
By (3.5) we have $\displaystyle\widehat{\mathcal{AW}}_{1}(\widehat{\pi}^{HF},\widehat{M}^{Q})$ $\displaystyle\leq\int_{(0,1)}\mathcal{W}_{1}(\delta_{F_{\nu}^{-1}(u)},\widetilde{m}^{Q}_{u})\,du=\int_{(0,1)}\int_{\mathbb{R}}|y-F_{\nu}^{-1}(u)|\,\widetilde{m}^{Q}_{u}(dy)\,du=\int_{(0,1)}|F_{\mu}^{-1}(u)-F_{\nu}^{-1}(u)|\,du,$ which proves the claim by Lemma 2.2. ∎ We can also easily show that any lifted martingale coupling is a lifted quadratic martingale rearrangement coupling of the lifted Hoeffding-Fréchet coupling. ###### Proposition 3.2. Let $\mu,\nu\in\mathcal{P}_{2}(\mathbb{R})$ be such that $\mu\leq_{cx}\nu$. Then any lifted martingale coupling between $\mu$ and $\nu$ is a $\widehat{\mathcal{AW}}_{2}$-minimal lifted martingale rearrangement coupling of the lifted Hoeffding-Fréchet coupling $\widehat{\pi}^{HF}$ defined by (3.12): $\forall\widehat{M},\widehat{M}^{\prime}\in\widehat{\Pi}^{\mathrm{M}}(\mu,\nu),\quad\widehat{\mathcal{AW}}_{2}(\widehat{M},\widehat{\pi}^{HF})=\widehat{\mathcal{AW}}_{2}(\widehat{M}^{\prime},\widehat{\pi}^{HF}).$ ###### Proof.
Let $\widehat{M}=\lambda_{(0,1)}\times\delta_{F_{\mu}^{-1}(u)}\times m_{u}\in\widehat{\Pi}^{\mathrm{M}}(\mu,\nu)$ and $\chi\in\Pi(\lambda_{(0,1)},\lambda_{(0,1)})$ be optimal for $\widehat{\mathcal{AW}}_{2}(\widehat{M},\widehat{\pi}^{HF})$, so that $\displaystyle\widehat{\mathcal{AW}}_{2}^{2}(\widehat{M},\widehat{\pi}^{HF})$ $\displaystyle=\int_{(0,1)\times(0,1)}\left(|u-u^{\prime}|^{2}+|F_{\mu}^{-1}(u)-F_{\mu}^{-1}(u^{\prime})|^{2}+\mathcal{W}_{2}^{2}(m_{u},\delta_{F_{\nu}^{-1}(u^{\prime})})\right)\,\chi(du,du^{\prime})$ $\displaystyle\geq\int_{(0,1)\times(0,1)}\mathcal{W}_{2}^{2}(m_{u},\delta_{F_{\nu}^{-1}(u^{\prime})})\,\chi(du,du^{\prime}).$ By bias-variance decomposition for the first equality, the fact that the image of $\lambda_{(0,1)}$ by $u\mapsto(F_{\mu}^{-1}(u),F_{\nu}^{-1}(u))$ is optimal for $\mathcal{W}_{2}^{2}(\mu,\nu)$ for the inequality, and by bias-variance decomposition again for the second equality, we have that $\displaystyle\begin{split}&\int_{(0,1)\times(0,1)}\mathcal{W}_{2}^{2}(m_{u},\delta_{F_{\nu}^{-1}(u^{\prime})})\,\chi(du,du^{\prime})\\\ &=\int_{(0,1)\times(0,1)}\left(|F_{\nu}^{-1}(u^{\prime})-F_{\mu}^{-1}(u)|^{2}+\int_{\mathbb{R}}|F_{\mu}^{-1}(u)-y|^{2}\,m_{u}(dy)\right)\,\chi(du,du^{\prime})\\\ &\geq\int_{(0,1)}\left(|F_{\nu}^{-1}(u)-F_{\mu}^{-1}(u)|^{2}+\int_{\mathbb{R}}|F_{\mu}^{-1}(u)-y|^{2}\,m_{u}(dy)\right)\,du\\\ &=\int_{(0,1)}\int_{\mathbb{R}}|F_{\nu}^{-1}(u)-y|^{2}\,m_{u}(dy)\,du\\\ &=\int_{(0,1)}\mathcal{W}_{2}^{2}(m_{u},\delta_{F_{\nu}^{-1}(u)})\,du\geq\widehat{\mathcal{AW}}_{2}^{2}(\widehat{M},\widehat{\pi}^{HF}).\end{split}$ (3.13) Using the fact that $\int_{(0,1)}\int_{\mathbb{R}}|F_{\mu}^{-1}(u)-y|^{2}\,m_{u}(dy)\,du=\int_{\mathbb{R}}|y|^{2}\,\nu(dy)-\int_{\mathbb{R}}|x|^{2}\,\mu(dx)$, we deduce that 
$\widehat{\mathcal{AW}}_{2}^{2}(\widehat{M},\widehat{\pi}^{HF})=\int_{(0,1)}\mathcal{W}_{2}^{2}(m_{u},\delta_{F_{\nu}^{-1}(u)})\,du=\mathcal{W}_{2}^{2}(\mu,\nu)+\int_{\mathbb{R}}|y|^{2}\,\nu(dy)-\int_{\mathbb{R}}|x|^{2}\,\mu(dx),$ hence $\widehat{\mathcal{AW}}_{2}^{2}(\widehat{M},\widehat{\pi}^{HF})$ does not depend on the choice of $\widehat{M}$. ∎ A similar conclusion holds for regular couplings. Just this once, we provide a proof valid in any dimension. In the following statement, $d\in\mathbb{N}^{*}$. The definitions (1.1), (1.3), (1.5), (1.7) given in $\mathbb{R}$ have straightforward extensions to $\mathbb{R}^{d}$ endowed with the Euclidean norm $|\cdot|$. ###### Proposition 3.3. Let $\mu,\nu\in\mathcal{P}_{2}(\mathbb{R}^{d})$ be such that $\mu\leq_{cx}\nu$ and $\pi\in\Pi(\mu,\nu)$ be optimal for $\mathcal{W}_{2}(\mu,\nu)$ and concentrated on the graph of a measurable map $T:\mathbb{R}^{d}\to\mathbb{R}^{d}$. Then any $M\in\Pi^{\mathrm{M}}(\mu,\nu)$ is an $\mathcal{AW}_{2}$-minimal martingale rearrangement coupling of $\pi$. ###### Proof.
Let $M\in\Pi^{\mathrm{M}}(\mu,\nu)$ and $\chi\in\Pi(\mu,\mu)$ be optimal for $\mathcal{AW}_{2}(M,\pi)$, so that $\mathcal{AW}_{2}^{2}(M,\pi)=\int_{\mathbb{R}^{d}\times\mathbb{R}^{d}}\left(|x-x^{\prime}|^{2}+\mathcal{W}_{2}^{2}(M_{x},\delta_{T(x^{\prime})})\right)\,\chi(dx,dx^{\prime})\geq\int_{\mathbb{R}^{d}\times\mathbb{R}^{d}}\mathcal{W}_{2}^{2}(M_{x},\delta_{T(x^{\prime})})\,\chi(dx,dx^{\prime}).$ By bias-variance decomposition for the first equality and the fact that the image of $\chi$ by $(x,x^{\prime})\mapsto(x,T(x^{\prime}))$ is a coupling between $\mu$ and $\nu$ for the first inequality, and by bias-variance decomposition again for the second equality, we have that $\displaystyle\begin{split}\int_{\mathbb{R}^{d}\times\mathbb{R}^{d}}\mathcal{W}_{2}^{2}(M_{x},\delta_{T(x^{\prime})})\,\chi(dx,dx^{\prime})&=\int_{\mathbb{R}^{d}\times\mathbb{R}^{d}}\left(|T(x^{\prime})-x|^{2}+\int_{\mathbb{R}^{d}}|x-y|^{2}\,M_{x}(dy)\right)\,\chi(dx,dx^{\prime})\\\ &\geq\mathcal{W}_{2}^{2}(\mu,\nu)+\int_{\mathbb{R}^{d}\times\mathbb{R}^{d}}|x-y|^{2}\,M(dx,dy)\\\ &=\int_{\mathbb{R}^{d}}\left(|x-T(x)|^{2}+\int_{\mathbb{R}^{d}}|x-y|^{2}\,M_{x}(dy)\right)\,\mu(dx)\\\ &=\int_{\mathbb{R}^{d}\times\mathbb{R}^{d}}|y-T(x)|^{2}\,M(dx,dy)\\\ &=\int_{\mathbb{R}^{d}}\mathcal{W}_{2}^{2}(M_{x},\delta_{T(x)})\,\mu(dx)\geq\mathcal{AW}_{2}^{2}(M,\pi).\end{split}$ (3.14) Using the fact that $\int_{\mathbb{R}^{d}\times\mathbb{R}^{d}}|x-y|^{2}\,M(dx,dy)=\int_{\mathbb{R}^{d}}|y|^{2}\,\nu(dy)-\int_{\mathbb{R}^{d}}|x|^{2}\,\mu(dx)$, we deduce that $\mathcal{AW}_{2}^{2}(M,\pi)=\int_{\mathbb{R}^{d}}\mathcal{W}_{2}^{2}(M_{x},\delta_{T(x)})\,\mu(dx)=\mathcal{W}_{2}^{2}(\mu,\nu)+\int_{\mathbb{R}^{d}}|y|^{2}\,\nu(dy)-\int_{\mathbb{R}^{d}}|x|^{2}\,\mu(dx),$ hence any martingale coupling $M\in\Pi^{\mathrm{M}}(\mu,\nu)$ is an $\mathcal{AW}_{2}$-minimal martingale rearrangement coupling of $\pi$.
∎ The use of Lemma 2.1 allows us to easily prove that the analogue of Proposition 3.1 holds for regular couplings as soon as on each interval $(F_{\mu}(x-),F_{\mu}(x)]$, where $x\in\mathbb{R}$, the sign of $u\mapsto F_{\mu}^{-1}(u)-F_{\nu}^{-1}(u)$ is constant. Of course this includes the case where $F_{\nu}^{-1}$ is constant on the intervals of the form $(F_{\mu}(x-),F_{\mu}(x)]$ for $x\in\mathbb{R}$, or equivalently the Hoeffding-Fréchet coupling $\pi^{HF}$ between $\mu$ and $\nu$ is concentrated on the graph of the Monge transport map $T=F_{\nu}^{-1}\circ F_{\mu}$. In the latter case, the conclusion of Proposition 3.4 below can also be seen as an immediate consequence of Proposition 2.3 and the proof of Proposition 3.1. ###### Proposition 3.4. Let $\mu,\nu\in\mathcal{P}_{1}(\mathbb{R})$ be such that $\mu\leq_{cx}\nu$ and on each interval $(F_{\mu}(x-),F_{\mu}(x)]$, where $x\in\mathbb{R}$, the sign of $u\mapsto F_{\mu}^{-1}(u)-F_{\nu}^{-1}(u)$ is constant. Then for all $Q\in\mathcal{Q}$, the martingale coupling $M^{Q}$ defined by (3.6) is a martingale rearrangement coupling of the Hoeffding-Fréchet coupling $\pi^{HF}$: $\forall Q\in\mathcal{Q},\quad\mathcal{AW}_{1}(\pi^{HF},M^{Q})=\inf_{M\in\Pi^{\mathrm{M}}(\mu,\nu)}\mathcal{AW}_{1}(\pi^{HF},M).$ ###### Proof. By Lemma 2.1 applied with $\chi=(x\mapsto(x,x))_{\sharp}\mu$, it suffices to show that $\mu(dx)$-almost everywhere, $\pi^{HF}_{x}\leq_{st}M^{Q}_{x}\quad\textrm{or}\quad\pi^{HF}_{x}\geq_{st}M^{Q}_{x}.$ (3.15) Reasoning like in (2.9) we get that $\mu(dx)$-almost everywhere, $\pi^{HF}_{x}(dy)=\int_{(0,1)}\delta_{F_{\nu}^{-1}(\theta(x,v))}(dy)\,dv\quad\textrm{and}\quad M^{Q}_{x}(dy)=\int_{(0,1)}\widetilde{m}^{Q}_{\theta(x,v)}(dy)\,dv.$ We deduce from [24, Lemma 2.5] that for $du$-almost all $u\in(0,1)$ such that $F_{\mu}^{-1}(u)\geq F_{\nu}^{-1}(u)$ (resp $F_{\mu}^{-1}(u)\leq F_{\nu}^{-1}(u)$), $\delta_{F_{\nu}^{-1}(u)}\leq_{st}\widetilde{m}^{Q}_{u}$ (resp. 
$\delta_{F_{\nu}^{-1}(u)}\geq_{st}\widetilde{m}^{Q}_{u}$). This implies that for $du$-almost all $u\in(0,1)$ such that $\mu(\\{F_{\mu}^{-1}(u)\\})=0$, $\pi^{HF}_{F_{\mu}^{-1}(u)}=\delta_{F_{\nu}^{-1}(u)}$ and $M^{Q}_{F_{\mu}^{-1}(u)}=\widetilde{m}^{Q}_{u}$ are comparable under the stochastic order. Moreover, the assumption made on the sign of the map $F_{\mu}^{-1}-F_{\nu}^{-1}$ on the jumps of $F_{\mu}$ implies that for $du$-almost all $u\in(0,1)$ such that $\mu(\\{F_{\mu}^{-1}(u)\\})>0$, we have either $(\theta(F_{\mu}^{-1}(u),0),\theta(F_{\mu}^{-1}(u),1)]\subset\\{F_{\mu}^{-1}\geq F_{\nu}^{-1}\\}$ so that, using the characterization of the stochastic order in terms of the cumulative distribution functions, $\pi^{HF}_{F^{-1}_{\mu}(u)}(dy)=\int_{(0,1)}\delta_{F_{\nu}^{-1}(\theta(F^{-1}_{\mu}(u),v))}(dy)\,dv\leq_{st}\int_{(0,1)}\widetilde{m}^{Q}_{\theta(F^{-1}_{\mu}(u),v)}(dy)\,dv=M^{Q}_{F^{-1}_{\mu}(u)}(dy),$ or $(\theta(F_{\mu}^{-1}(u),0),\theta(F_{\mu}^{-1}(u),1)]\subset\\{F_{\mu}^{-1}\leq F_{\nu}^{-1}\\}$ so that $\pi^{HF}_{F^{-1}_{\mu}(u)}\geq_{st}M^{Q}_{F^{-1}_{\mu}(u)}$. By the inverse transform sampling, this shows (3.15) and completes the proof. ∎ In the next example where the above constant sign condition fails, the inverse transform martingale coupling between $\mu$ and $\nu$ is not a martingale rearrangement coupling of $\pi^{HF}$. Therefore, in general, we cannot say that every element of our family $(M^{Q})_{Q\in\mathcal{Q}}$ is a martingale rearrangement coupling of the Hoeffding-Fréchet coupling. However, we show in the next proposition that we can always find a specific parameter $Q\in\mathcal{Q}$ such that the martingale coupling $M^{Q}$ is a martingale rearrangement coupling of $\pi^{HF}$. ###### Example 3.5. Let $\mu=\frac{1}{4}(\delta_{-1}+2\delta_{0}+\delta_{1})$ and $\nu=\frac{1}{4}(\delta_{-2}+\delta_{-1}+\delta_{1}+\delta_{2})$.
The Hoeffding-Fréchet coupling $\pi^{HF}$ between $\mu$ and $\nu$ is given by $\pi^{HF}=\frac{1}{4}\left(\delta_{(-1,-2)}+\delta_{(0,-1)}+\delta_{(0,1)}+\delta_{(1,2)}\right).$ To see that the inverse transform martingale coupling $M^{IT}=\frac{1}{6}\delta_{(-1,-2)}+\frac{1}{12}\delta_{(-1,1)}+\frac{1}{12}\delta_{(0,-2)}+\frac{1}{6}\delta_{(0,-1)}+\frac{1}{6}\delta_{(0,1)}+\frac{1}{12}\delta_{(0,2)}+\frac{1}{12}\delta_{(1,-1)}+\frac{1}{6}\delta_{(1,2)}$ is not a martingale rearrangement coupling of $\pi^{HF}$, we rely on the equivalent condition provided by Lemma 2.1. One can readily compute $M^{IT}_{0}=\frac{1}{6}\delta_{-2}+\frac{1}{3}\delta_{-1}+\frac{1}{3}\delta_{1}+\frac{1}{6}\delta_{2}$, $\pi^{HF}_{-1}=\delta_{-2}$, $\pi^{HF}_{0}=\frac{1}{2}(\delta_{-1}+\delta_{1})$ and $\pi^{HF}_{1}=\delta_{2}$. Then $-1<0$, $1>0$ and $0=0$, but we have neither $\pi^{HF}_{-1}\geq_{st}M^{IT}_{0}$, $\pi^{HF}_{1}\leq_{st}M^{IT}_{0}$, $\pi^{HF}_{0}\leq_{st}M^{IT}_{0}$ nor $\pi^{HF}_{0}\geq_{st}M^{IT}_{0}$. We deduce by Lemma 2.1 that $M^{IT}$ is not a martingale rearrangement coupling of $\pi^{HF}$. Note that the martingale rearrangement constructed in Section 2.2 is $\frac{3}{16}\delta_{(-1,-2)}+\frac{1}{16}\delta_{(-1,2)}+\frac{1}{4}\delta_{(0,-1)}+\frac{1}{4}\delta_{(0,1)}+\frac{1}{16}\delta_{(1,-2)}+\frac{3}{16}\delta_{(1,2)}.$ ###### Proposition 3.6. Let $\mu,\nu\in\mathcal{P}_{1}(\mathbb{R})$ be such that $\mu\leq_{cx}\nu$. Let $\pi^{HF}$ be the Hoeffding-Fréchet coupling between $\mu$ and $\nu$. Then there exists $Q\in\mathcal{Q}$ such that the martingale coupling $M^{Q}$ defined by (3.6) is a martingale rearrangement coupling of $\pi^{HF}$: $\mathcal{AW}_{1}(\pi^{HF},M^{Q})=\inf_{M\in\Pi^{\mathrm{M}}(\mu,\nu)}\mathcal{AW}_{1}(\pi^{HF},M).$ ###### Remark 3.7. 
We show in the proof that as soon as on each interval $(F_{\mu}(x-),F_{\mu}(x)]$ for $x\in\mathbb{R}$ the sign of $u\mapsto F_{\mu}^{-1}(u)-F_{\nu}^{-1}(u)$ is constant, the martingale rearrangement coupling $M^{Q}$ is the inverse transform martingale coupling, in coherence with Proposition 3.4. ###### Proof of Proposition 3.6. By (1.9), we have $\inf_{M\in\Pi^{\mathrm{M}}(\mu,\nu)}\mathcal{AW}_{1}(\pi^{HF},M)\geq\int_{\mathbb{R}}\left|x-\int_{\mathbb{R}}y\,\pi^{HF}_{x}(dy)\right|\,\mu(dx),$ (3.16) hence it is sufficient to show that there exists $Q\in\mathcal{Q}$ such that $\mathcal{AW}_{1}(\pi^{HF},M^{Q})\leq\int_{\mathbb{R}}\left|x-\int_{\mathbb{R}}y\,\pi^{HF}_{x}(dy)\right|\,\mu(dx).$ (3.17) If $\mu=\nu$ then the statement is straightforward, hence we suppose $\mu\neq\nu$. The proof is achieved in four steps. First we exhibit an appropriate subdivision of the interval $(0,1)$ in order to define a measure $Q$ on $(0,1)^{2}$. Second we show that $Q$ belongs to $\mathcal{Q}$ and is therefore associated to the martingale coupling $M^{Q}$ between $\mu$ and $\nu$. Then we find for $\mu(dx)$-almost all $x\in\mathbb{R}$ a coupling $\eta_{x}\in\Pi(\pi^{HF}_{x},M^{Q}_{x})$, so that $\mathcal{AW}_{1}(\pi^{HF},M^{Q})\leq\int_{\mathbb{R}}\mathcal{W}_{1}(\pi^{HF}_{x},M^{Q}_{x})\,\mu(dx)\leq\int_{\mathbb{R}}\left(\int_{\mathbb{R}}|y-y^{\prime}|\,\eta_{x}(dy,dy^{\prime})\right)\,\mu(dx).$ (3.18) Last, we show that for $\mu(dx)$-almost all $x\in\mathbb{R}$, $\int_{\mathbb{R}\times\mathbb{R}}|y-y^{\prime}|\,\eta_{x}(dy,dy^{\prime})=\left|x-\int_{\mathbb{R}}y\,\pi^{HF}_{x}(dy)\right|,$ (3.19) which implies (3.17) and completes the proof. Step 1. Recall the definitions of $\Psi_{+}$, $\Psi_{-}$, $\mathcal{U}^{+}$ and $\mathcal{U}^{-}$ from (3.1) and (3.7). Let $A_{\mu}=\\{x\in\mathbb{R}\mid\mu(\\{x\\})>0\\}$.
For all $x\in A_{\mu}$, let $\mathcal{U}_{x}=(F_{\mu}(x-),F_{\mu}(x)]$, $\mathcal{U}^{+}_{x}=\mathcal{U}^{+}\cap\mathcal{U}_{x}$, $\mathcal{U}^{-}_{x}=\mathcal{U}^{-}\cap\mathcal{U}_{x}$ and $\displaystyle a_{x}$ $\displaystyle=\inf\left\\{a\in[F_{\mu}(x-),F_{\mu}(x)]\mid\int_{F_{\mu}(x-)}^{a}(x-F_{\nu}^{-1}(u))^{+}\,du=\left(\int_{F_{\mu}(x-)}^{F_{\mu}(x)}(x-F_{\nu}^{-1}(u))^{+}\,du\right)\right.$ $\displaystyle\phantom{=\inf\left\\{a\in[F_{\mu}(x-),F_{\mu}(x)]\mid\int_{F_{\mu}(x-)}^{a}(x-F_{\nu}^{-1}(u))^{+}\,du=\right.\ }\left.\wedge\left(\int_{F_{\mu}(x-)}^{F_{\mu}(x)}(x-F_{\nu}^{-1}(u))^{-}\,du\right)\right\\},$ $\displaystyle b_{x}$ $\displaystyle=\sup\left\\{b\in[F_{\mu}(x-),F_{\mu}(x)]\mid\int_{b}^{F_{\mu}(x)}(x-F_{\nu}^{-1}(u))^{-}\,du=\left(\int_{F_{\mu}(x-)}^{F_{\mu}(x)}(x-F_{\nu}^{-1}(u))^{+}\,du\right)\right.$ $\displaystyle\phantom{=\sup\left\\{b\in[F_{\mu}(x-),F_{\mu}(x)]\mid\int_{b}^{F_{\mu}(x)}(x-F_{\nu}^{-1}(u))^{-}\,du=\right.\ }\left.\wedge\left(\int_{F_{\mu}(x-)}^{F_{\mu}(x)}(x-F_{\nu}^{-1}(u))^{-}\,du\right)\right\\},$ $\displaystyle\mathcal{V}^{+}_{x}$ $\displaystyle=(F_{\mu}(x-),a_{x}],\quad\mathcal{V}^{-}_{x}=(b_{x},F_{\mu}(x)],\quad\widetilde{\mathcal{U}}^{+}=\mathcal{U}^{+}\backslash(\bigcup_{x\in A_{\mu}}\mathcal{V}^{+}_{x}),\quad\text{and}\quad\widetilde{\mathcal{U}}^{-}=\mathcal{U}^{-}\backslash(\bigcup_{x\in A_{\mu}}\mathcal{V}^{-}_{x}).$ By monotonicity of $u\mapsto x-F_{\nu}^{-1}(u)$ and by definition of $a_{x}$ and $b_{x}$, we have $F_{\mu}(x-)\leq a_{x}\leq b_{x}\leq F_{\mu}(x)$, $\displaystyle\begin{split}\int_{\mathcal{V}^{+}_{x}}(x-F_{\nu}^{-1}(u))\,du&=\int_{\mathcal{V}^{-}_{x}}(x-F_{\nu}^{-1}(u))\,du\\\ &=\left(\int_{F_{\mu}(x-)}^{F_{\mu}(x)}(x-F_{\nu}^{-1}(u))^{+}\,du\right)\wedge\left(\int_{F_{\mu}(x-)}^{F_{\mu}(x)}(x-F_{\nu}^{-1}(u))^{-}\,du\right),\end{split}$ (3.20) $\mathcal{V}^{+}_{x}\subset\mathcal{U}^{+}_{x}$ and $\mathcal{V}^{-}_{x}\subset\mathcal{U}^{-}_{x}$. 
Moreover, we have $\displaystyle\begin{split}\int_{F_{\mu}(x-)}^{F_{\mu}(x)}(x-F_{\nu}^{-1}(u))^{+}\,du\geq\int_{F_{\mu}(x-)}^{F_{\mu}(x)}(x-F_{\nu}^{-1}(u))^{-}\,du&\iff(F_{\mu}(x-),b_{x}]\cap\mathcal{U}^{-}_{x}=\emptyset\iff\mathcal{V}^{-}_{x}=\mathcal{U}^{-}_{x},\\\ \int_{F_{\mu}(x-)}^{F_{\mu}(x)}(x-F_{\nu}^{-1}(u))^{+}\,du\leq\int_{F_{\mu}(x-)}^{F_{\mu}(x)}(x-F_{\nu}^{-1}(u))^{-}\,du&\iff(a_{x},F_{\mu}(x)]\cap\mathcal{U}^{+}_{x}=\emptyset\iff\mathcal{V}^{+}_{x}=\mathcal{U}^{+}_{x}.\end{split}$ (3.21) Figure 1: Points and intervals involved in the proof in the case where $\int_{\mathcal{U}_{x}}(x-F_{\nu}^{-1}(u))^{+}\,du>\int_{\mathcal{U}_{x}}(x-F_{\nu}^{-1}(u))^{-}\,du$. For $x\in A_{\mu}$, let $Q_{x}$ be any measure on $(0,1)^{2}$ such that its first and second marginals are respectively $\frac{1}{\Psi_{+}(1)}\mathds{1}_{\mathcal{V}^{+}_{x}}(u)\,(x-F_{\nu}^{-1}(u))^{+}\,du\quad\text{and}\quad\frac{1}{\Psi_{+}(1)}\mathds{1}_{\mathcal{V}^{-}_{x}}(v)\,(x-F_{\nu}^{-1}(v))^{-}\,dv.$ (3.22) Notice that $a_{x}\leq b_{x}$ implies $Q_{x}(\\{(u,v)\in(0,1)^{2}\mid u<v\\})=Q_{x}((0,1)^{2}).$ (3.23) Let now $\chi_{+},\chi_{-}:[0,1]\to\mathbb{R}$ be defined for all $u\in[0,1]$ by $\chi_{+}(u)=\int_{0}^{u}(F_{\mu}^{-1}-F_{\nu}^{-1})^{+}(v)\mathds{1}_{\widetilde{\mathcal{U}}^{+}}(v)\,dv\quad\text{and}\quad\chi_{-}(u)=\int_{0}^{u}(F_{\mu}^{-1}-F_{\nu}^{-1})^{-}(v)\mathds{1}_{\widetilde{\mathcal{U}}^{-}}(v)\,dv.$ (3.24) For all $u\in[0,1]$, $[0,u]\cap\mathcal{U}^{+}$, resp. $[0,u]\cap\mathcal{U}^{-}$, is the disjoint union of $[0,u]\cap\widetilde{\mathcal{U}}^{+}$ and $[0,u]\cap\left(\bigcup_{x\in A_{\mu}}\mathcal{V}^{+}_{x}\right)$, resp. $[0,u]\cap\widetilde{\mathcal{U}}^{-}$ and $[0,u]\cap\left(\bigcup_{x\in A_{\mu}}\mathcal{V}^{-}_{x}\right)$.
Therefore, for all $u\in[0,1]$, $\displaystyle\begin{split}\Psi_{+}(u)&=\chi_{+}(u)+\sum_{x\in A_{\mu}}\int_{[0,u]\cap\mathcal{V}^{+}_{x}}(F_{\mu}^{-1}-F_{\nu}^{-1})^{+}(v)\,dv,\\\ \text{and}\quad\Psi_{-}(u)&=\chi_{-}(u)+\sum_{x\in A_{\mu}}\int_{[0,u]\cap\mathcal{V}^{-}_{x}}(F_{\mu}^{-1}-F_{\nu}^{-1})^{-}(v)\,dv.\end{split}$ (3.25) Applying (3.25) with $u=1$, (3.20) and the equality $\Psi_{+}(1)=\Psi_{-}(1)$, we get $\chi_{+}(1)=\chi_{-}(1)$, hence we can define the map $\Gamma:[0,1]\to[0,1]$ for all $u\in[0,1]$ by $\displaystyle\begin{split}\Gamma(u)=\left\\{\begin{array}[]{rl}\chi_{-}^{-1}(\chi_{+}(u))&\text{if}\ u\in\widetilde{\mathcal{U}}^{+};\\\ \chi_{+}^{-1}(\chi_{-}(u))&\text{if}\ u\in\widetilde{\mathcal{U}}^{-};\\\ u&\text{otherwise.}\end{array}\right.\end{split}$ Let then $\widetilde{Q}$ be the measure on $(0,1)^{2}$ defined by $\widetilde{Q}(du,dv)=\frac{1}{\Psi_{+}(1)}\,d\chi_{+}(u)\,\delta_{\Gamma(u)}(dv).$ (3.26) Step 2. We now show that the measure $Q$ on $(0,1)^{2}$ defined by $Q=\widetilde{Q}+\sum_{x\in A_{\mu}}Q_{x}$ is an element of $\mathcal{Q}$. Notice that if for all $x\in\mathbb{R}$, $u\mapsto F_{\mu}^{-1}(u)-F_{\nu}^{-1}(u)$ does not change sign on $\mathcal{U}_{x}$, then $\mathcal{V}^{+}_{x}=\mathcal{V}^{-}_{x}=\emptyset$. In that case, $\widetilde{\mathcal{U}}^{+}=\mathcal{U}^{+}$ and $\widetilde{\mathcal{U}}^{-}=\mathcal{U}^{-}$, hence $Q=\widetilde{Q}=Q^{IT}$, which proves the statement of Remark 3.7, provided that we complete the present proof. To prove that $Q\in\mathcal{Q}$ we begin by showing that $Q(\\{(u,v)\in(0,1)^{2}\mid u<v\\})=Q((0,1)^{2}).$ (3.27) In view of (3.23) it suffices to show that $\widetilde{Q}(\\{(u,v)\in(0,1)^{2}\mid u<v\\})=\widetilde{Q}((0,1)^{2}),$ which by (3.26) is equivalent to $\Gamma(u)>u,\quad d\chi_{+}(u)\text{-almost everywhere}.$ (3.28) Let $u\in[0,1]$. Suppose first that $u\notin B_{\mu}:=\bigcup_{x\in A_{\mu}}(F_{\mu}(x-),F_{\mu}(x))$.
Then for all $x\in A_{\mu}$, we have either $u\leq F_{\mu}(x-)$ or $F_{\mu}(x)\leq u$, and since $\mathcal{V}^{+}_{x},\mathcal{V}^{-}_{x}\subset\mathcal{U}_{x}$, either $\mathcal{V}^{+}_{x},\mathcal{V}^{-}_{x}\subset(u,1)$ or $\mathcal{V}^{+}_{x},\mathcal{V}^{-}_{x}\subset[0,u]$. Equivalently, $[0,u]\cap\mathcal{V}^{+}_{x}=\mathcal{V}^{+}_{x}\quad\text{and}\quad[0,u]\cap\mathcal{V}^{-}_{x}=\mathcal{V}^{-}_{x},\quad\text{or}\quad[0,u]\cap\mathcal{V}^{+}_{x}=[0,u]\cap\mathcal{V}^{-}_{x}=\emptyset.$ If $[0,u]\cap\mathcal{V}^{+}_{x}=\mathcal{V}^{+}_{x}$ and $[0,u]\cap\mathcal{V}^{-}_{x}=\mathcal{V}^{-}_{x}$, (3.20) yields $\displaystyle\int_{[0,u]\cap\mathcal{V}^{+}_{x}}(F_{\mu}^{-1}-F_{\nu}^{-1})^{+}(v)\,dv$ $\displaystyle=\int_{\mathcal{V}^{+}_{x}}(x-F_{\nu}^{-1}(v))\,dv=\int_{\mathcal{V}^{-}_{x}}(x-F_{\nu}^{-1}(v))\,dv$ $\displaystyle=\int_{[0,u]\cap\mathcal{V}^{-}_{x}}(F_{\mu}^{-1}-F_{\nu}^{-1})^{-}(v)\,dv.$ Else if $[0,u]\cap\mathcal{V}^{+}_{x}=[0,u]\cap\mathcal{V}^{-}_{x}=\emptyset$, then we clearly have $\int_{[0,u]\cap\mathcal{V}^{+}_{x}}(F_{\mu}^{-1}-F_{\nu}^{-1})^{+}(v)\,dv=\int_{[0,u]\cap\mathcal{V}^{-}_{x}}(F_{\mu}^{-1}-F_{\nu}^{-1})^{-}(v)\,dv$ too. We then deduce from (3.25) that $\chi_{+}(u)-\chi_{-}(u)=\Psi_{+}(u)-\Psi_{-}(u)$. By [24, (3.8)], $\Psi_{+}(u)>\Psi_{-}(u)$ for $du$-almost every $u\in\mathcal{U}^{+}$ and therefore $u\in\widetilde{\mathcal{U}}^{+}$. We deduce that $\chi_{+}(u)\geq\chi_{-}(u),\quad\text{for all $u\in[0,1]\backslash B_{\mu}$, and the inequality is strict for $du$-almost every }u\in\widetilde{\mathcal{U}}^{+}\backslash B_{\mu}.$ (3.29) Suppose now that $u\in\widetilde{\mathcal{U}}^{+}\cap B_{\mu}$. Then there exists $x\in A_{\mu}$ such that $u\in(F_{\mu}(x-),F_{\mu}(x))$, or equivalently $u\in(a_{x},F_{\mu}(x))\cap\mathcal{U}^{+}$. 
Since $F_{\mu}^{-1}$ and $F_{\nu}^{-1}$ are left continuous, we have for $\varepsilon>0$ small enough $[u-\varepsilon,u]\subset\mathcal{U}^{+}$ and of course $[u-\varepsilon,u]\subset(a_{x},F_{\mu}(x))$, hence $[u-\varepsilon,u]\subset\widetilde{\mathcal{U}}^{+}$ and $\chi_{+}(u)>\chi_{+}(u-\varepsilon)\geq\chi_{+}(F_{\mu}(x-)).$ Using (3.29) with $F_{\mu}(x-)\in[0,1]\backslash B_{\mu}$, we get $\chi_{+}(F_{\mu}(x-))\geq\chi_{-}(F_{\mu}(x-))$. Moreover, the existence of $u$ implies that $\mathcal{V}^{+}_{x}\neq\mathcal{U}^{+}_{x}$ and therefore $\mathcal{V}^{-}_{x}=\mathcal{U}^{-}_{x}$ by (3.21), hence $\chi_{-}(u)=\chi_{-}(F_{\mu}(x-))$. We deduce that $\chi_{+}(u)>\chi_{-}(u),\quad\text{for all }u\in\widetilde{\mathcal{U}}^{+}\cap B_{\mu}.$ (3.30) Since for all $u\in(0,1)$, $\chi_{+}(u)>\chi_{-}(u)\iff\Gamma(u)>u$, (3.28) follows directly from (3.29) and (3.30), which proves (3.27). We now show that $Q$ has the right marginals. On the one hand, $\mathcal{U}^{+}=\widetilde{\mathcal{U}}^{+}\cup\left(\bigcup_{x\in A_{\mu}}\mathcal{V}^{+}_{x}\right)$ where the unions are disjoint, hence its first marginal is $\displaystyle\begin{split}&\frac{1}{\Psi_{+}(1)}d\chi_{+}(u)+\sum_{x\in A_{\mu}}\frac{1}{\Psi_{+}(1)}\mathds{1}_{\mathcal{V}^{+}_{x}}(u)(x-F_{\nu}^{-1}(u))^{+}\,du\\\ &=\frac{1}{\Psi_{+}(1)}\left((F_{\mu}^{-1}-F_{\nu}^{-1})^{+}(u)\mathds{1}_{\widetilde{\mathcal{U}}^{+}}(u)\,du+\sum_{x\in A_{\mu}}(F_{\mu}^{-1}-F_{\nu}^{-1})^{+}(u)\mathds{1}_{\mathcal{V}^{+}_{x}}(u)\,du\right)\\\ &=\frac{1}{\Psi_{+}(1)}(F_{\mu}^{-1}-F_{\nu}^{-1})^{+}(u)\mathds{1}_{\mathcal{U}^{+}}(u)\,du=\frac{1}{\Psi_{+}(1)}d\Psi_{+}(u).\end{split}$ (3.31) On the other hand, by [24, Lemma 6.1] applied with $f_{1}=(F_{\mu}^{-1}-F_{\nu}^{-1})^{+}\mathds{1}_{\widetilde{\mathcal{U}}^{+}}$, $f_{2}=(F_{\mu}^{-1}-F_{\nu}^{-1})^{-}\mathds{1}_{\widetilde{\mathcal{U}}^{-}}$, $u_{0}=1$ and $h:u\mapsto\mathds{1}_{\\{u\notin\widetilde{\mathcal{U}}^{-}\\}}$, we get 
$\int_{0}^{1}\mathds{1}_{\\{\Gamma(u)\notin\widetilde{\mathcal{U}}^{-}\\}}\,d\chi_{+}(u)=\int_{0}^{1}\mathds{1}_{\\{v\notin\widetilde{\mathcal{U}}^{-}\\}}\,d\chi_{-}(v)=0,$ hence $\Gamma(u)\in\widetilde{\mathcal{U}}^{-}$ for $du$-almost all $u\in\widetilde{\mathcal{U}}^{+}$. Reasoning like in the derivation of (2.16) with $(\Gamma,\chi_{+},\chi_{-},\widetilde{\mathcal{U}}^{+},\widetilde{\mathcal{U}}^{-})$ replacing $(\varphi,\Psi_{+},\Psi_{-},\mathcal{U}^{+},\mathcal{U}^{-})$, we get that $u=\Gamma(\Gamma(u)),\quad\text{$du$-almost everywhere on $\widetilde{\mathcal{U}}^{+}$}.$ (3.32) Let $H:(0,1)^{2}\to\mathbb{R}$ be a measurable and bounded map. Applying [24, Lemma 6.1] with $f_{1}=(F_{\mu}^{-1}-F_{\nu}^{-1})^{+}\mathds{1}_{\widetilde{\mathcal{U}}^{+}}$, $f_{2}=(F_{\mu}^{-1}-F_{\nu}^{-1})^{-}\mathds{1}_{\widetilde{\mathcal{U}}^{-}}$, $u_{0}=1$ and $h:u\mapsto H(\Gamma(u),u)$ then yields $\displaystyle\int_{(0,1)^{2}}H(u,v)\,\widetilde{Q}(du,dv)$ $\displaystyle=\frac{1}{\Psi_{+}(1)}\int_{0}^{1}H(u,\Gamma(u))\,d\chi_{+}(u)$ $\displaystyle=\frac{1}{\Psi_{+}(1)}\int_{0}^{1}H(\Gamma(\Gamma(u)),\Gamma(u))(F_{\mu}^{-1}-F_{\nu}^{-1})^{+}(u)\mathds{1}_{\widetilde{\mathcal{U}}^{+}}(u)\,du$ $\displaystyle=\frac{1}{\Psi_{+}(1)}\int_{0}^{1}H(\Gamma(v),v)(F_{\mu}^{-1}-F_{\nu}^{-1})^{-}(v)\mathds{1}_{\widetilde{\mathcal{U}}^{-}}(v)\,dv$ $\displaystyle=\frac{1}{\Psi_{-}(1)}\int_{0}^{1}H(\Gamma(v),v)\,d\chi_{-}(v).$ We deduce that $\widetilde{Q}(du,dv)=\frac{1}{\Psi_{+}(1)}d\chi_{-}(v)\,\delta_{\Gamma(v)}(du),$ (3.33) and we show with a calculation similar to (3.31) that its second marginal is $\frac{1}{\Psi_{+}(1)}d\Psi_{-}$. We conclude that $Q\in\mathcal{Q}$. Step 3. 
For all $x\in\mathbb{R}$, let $(\eta_{x}(dy,dy^{\prime}))_{x\in\mathbb{R}}$ be the probability kernel defined by $\left\\{\begin{array}[]{r}\displaystyle\delta_{x}(dy)\,\delta_{x}(dy^{\prime})\\\ \displaystyle\textrm{if}\ F_{\mu}(x)=0\ \textrm{or}\ F_{\mu}(x-)=1;\\\ \\\ \displaystyle\frac{1}{\mu(\\{x\\})}\left(\int_{u\in\mathcal{V}^{+}_{x}\cup\mathcal{V}^{-}_{x}}\widetilde{m}^{Q}_{u}(dy^{\prime})\,\delta_{y^{\prime}}(dy)\,du+\int_{u\in(a_{x},b_{x})}\widetilde{m}^{Q}_{u}(dy^{\prime})\,\delta_{F_{\nu}^{-1}(u)}(dy)\,du\right)\\\ \displaystyle\textrm{if}\ \mu(\\{x\\})>0;\\\ \\\ \displaystyle\delta_{F_{\nu}^{-1}(F_{\mu}(x))}(dy)\,\widetilde{m}^{Q}_{F_{\mu}(x)}(dy^{\prime})\\\ \textrm{otherwise},\end{array}\right.$ where the probability kernel $(\widetilde{m}^{Q}_{u})_{u\in(0,1)}$ is given by (3.3). Notice that in view of the definition (3.6) of $M^{Q}$, one can check that for $\mu(dx)$-almost all $x\in\mathbb{R}$, $M^{Q}_{x}(dy)=\left\\{\begin{array}[]{cr}\delta_{x}(dy)&\text{if}\ F_{\mu}(x)=0\ \text{or}\ F_{\mu}(x_{-})=1;\\\ \\\ \displaystyle\frac{1}{\mu(\\{x\\})}\int_{u=F_{\mu}(x-)}^{F_{\mu}(x)}\widetilde{m}^{Q}_{u}(dy)\,du&\text{if}\ \mu(\\{x\\})>0;\\\ \\\ \displaystyle\widetilde{m}^{Q}_{F_{\mu}(x)}(dy)&\text{otherwise}.\end{array}\right.$ (3.34) Let us show that for $\mu(dx)$-almost all $x\in\mathbb{R}$, $\eta_{x}(dy,dy^{\prime})$ is a coupling between $\pi^{HF}_{x}$ and $M^{Q}_{x}$. Let $x\in\mathbb{R}$. By (1.13) we may suppose without loss of generality that $F_{\mu}(x)>0$ and $F_{\mu}(x-)<1$. Let $h:\mathbb{R}\to\mathbb{R}$ be a measurable and bounded map. Suppose first that $\mu(\\{x\\})=0$. 
Then by (3.11) for the second equality, $\displaystyle\int_{\mathbb{R}\times\mathbb{R}}h(y)\,\eta_{x}(dy,dy^{\prime})$ $\displaystyle=h(F_{\nu}^{-1}(F_{\mu}(x)))=\int_{\mathbb{R}}h(y)\,\pi^{HF}_{x}(dy),$ $\displaystyle\text{and}\quad\int_{\mathbb{R}\times\mathbb{R}}h(y^{\prime})\,\eta_{x}(dy,dy^{\prime})$ $\displaystyle=\int_{\mathbb{R}}h(y^{\prime})\,\widetilde{m}^{Q}_{F_{\mu}(x)}(dy^{\prime})=\int_{\mathbb{R}}h(y^{\prime})\,M^{Q}_{x}(dy^{\prime}).$ Suppose now that $\mu(\\{x\\})>0$. On the one hand, using the fact that $\mathcal{V}^{+}_{x}\cup\mathcal{V}^{-}_{x}\cup(a_{x},b_{x}]=(F_{\mu}(x-),F_{\mu}(x)]$ for the second equality, we have $\displaystyle\int_{\mathbb{R}\times\mathbb{R}}h(y^{\prime})\,\eta_{x}(dy,dy^{\prime})$ $\displaystyle=\frac{1}{\mu(\\{x\\})}\left(\int_{\mathcal{V}^{+}_{x}\cup\mathcal{V}^{-}_{x}}\int_{\mathbb{R}}h(y^{\prime})\,\widetilde{m}^{Q}_{u}(dy^{\prime})\,du+\int_{(a_{x},b_{x})}\int_{\mathbb{R}}h(y^{\prime})\,\widetilde{m}^{Q}_{u}(dy^{\prime})\,du\right)$ $\displaystyle=\frac{1}{\mu(\\{x\\})}\int_{(F_{\mu}(x-),F_{\mu}(x)]}\int_{\mathbb{R}}h(y^{\prime})\,\widetilde{m}^{Q}_{u}(dy^{\prime})\,du=\int_{\mathbb{R}}h(y^{\prime})\,M^{Q}_{x}(dy^{\prime}).$ On the other hand, $\int_{\mathbb{R}\times\mathbb{R}}h(y)\,\eta_{x}(dy,dy^{\prime})=\frac{1}{\mu(\\{x\\})}\left(\int_{\mathcal{V}^{+}_{x}\cup\mathcal{V}^{-}_{x}}\int_{\mathbb{R}}h(y^{\prime})\,\widetilde{m}^{Q}_{u}(dy^{\prime})\,du+\int_{(a_{x},b_{x})}h(F_{\nu}^{-1}(u))\,du\right).$ (3.35) Assume for a moment that $\int_{\mathcal{V}^{+}_{x}\cup\mathcal{V}^{-}_{x}}\int_{\mathbb{R}}h(y^{\prime})\,\widetilde{m}^{Q}_{u}(dy^{\prime})\,du=\int_{\mathcal{V}^{+}_{x}\cup\mathcal{V}^{-}_{x}}h(F_{\nu}^{-1}(u))\,du.$ (3.36) We then deduce with (3.35) and (3.11) that $\displaystyle\int_{\mathbb{R}\times\mathbb{R}}h(y)\,\eta_{x}(dy,dy^{\prime})$
$\displaystyle=\frac{1}{\mu(\\{x\\})}\left(\int_{\mathcal{V}^{+}_{x}\cup\mathcal{V}^{-}_{x}}h(F_{\nu}^{-1}(u))\,du+\int_{(a_{x},b_{x})}h(F_{\nu}^{-1}(u))\,du\right)$ $\displaystyle=\frac{1}{\mu(\\{x\\})}\int_{(F_{\mu}(x-),F_{\mu}(x)]}h(F_{\nu}^{-1}(u))\,du=\int_{\mathbb{R}}h(y)\,\pi^{HF}_{x}(dy).$ This proves that we indeed have $\eta_{x}(dy,dy^{\prime})\in\Pi(\pi^{HF}_{x},M^{Q}_{x})$ for $\mu(dx)$-almost all $x\in\mathbb{R}$, hence (3.18) holds. Let us then prove (3.36). By (3.2) there exists a probability kernel $(\pi^{Q}_{u})_{u\in(0,1)}$ such that $Q(du,dv)=\frac{1}{\Psi_{+}(1)}\,d\Psi_{+}(u)\,\pi^{Q}_{u}(dv)=\frac{1}{\Psi_{+}(1)}\,d\Psi_{-}(v)\,\pi^{Q}_{v}(du).$ Since the first, resp. second marginals of $Q-Q_{x}$ and $Q_{x}$ are singular, i.e. supported by disjoint measurable subsets of $(0,1)$, we have $Q_{x}(du,dv)=\frac{1}{\Psi_{+}(1)}\mathds{1}_{\mathcal{V}^{+}_{x}}(u)\,d\Psi_{+}(u)\,\pi^{Q}_{u}(dv)=\frac{1}{\Psi_{+}(1)}\mathds{1}_{\mathcal{V}^{-}_{x}}(v)\,d\Psi_{-}(v)\,\pi^{Q}_{v}(du).$ (3.37) For $u,v\in(0,1)$ such that $F_{\nu}^{-1}(u)\neq F_{\nu}^{-1}(v)$, let $\Delta(u,v)=\frac{h(F_{\nu}^{-1}(v))-h(F_{\nu}^{-1}(u))}{F_{\nu}^{-1}(v)-F_{\nu}^{-1}(u)}$. By [24, Lemma 2.5], for $du$-almost all $u\in(0,1)$ we have $\displaystyle\begin{split}\int_{\mathbb{R}}h(y)\,\widetilde{m}^{Q}_{u}(dy)&=h(F_{\nu}^{-1}(u))+\int_{(0,1)}\Delta(u,v)(F_{\mu}^{-1}-F_{\nu}^{-1})^{+}(u)\,\pi^{Q}_{u}(dv)\\\ &\phantom{=\ }-\int_{(0,1)}\Delta(u,v)(F_{\mu}^{-1}-F_{\nu}^{-1})^{-}(u)\,\pi^{Q}_{u}(dv),\end{split}$ (3.38) where the integrals are well defined. 
Using the facts that $\mathcal{V}^{+}_{x}\subset\mathcal{U}^{+}$, $\mathcal{V}^{-}_{x}\subset\mathcal{U}^{-}$ and the symmetry of the function $\Delta$, we get $\displaystyle\int_{\mathcal{V}^{+}_{x}\cup\mathcal{V}^{-}_{x}}\int_{(0,1)}\Delta(u,v)(F_{\mu}^{-1}-F_{\nu}^{-1})^{+}(u)\,\pi^{Q}_{u}(dv)\,du$ $\displaystyle=\int_{(0,1)^{2}}\Delta(u,v)(F_{\mu}^{-1}-F_{\nu}^{-1})^{+}(u)\mathds{1}_{\mathcal{V}^{+}_{x}}(u)\,\pi^{Q}_{u}(dv)\,du$ $\displaystyle=\Psi_{+}(1)\int_{(0,1)^{2}}\Delta(u,v)\,Q_{x}(du,dv)$ $\displaystyle=\int_{(0,1)^{2}}\Delta(u,v)(F_{\mu}^{-1}-F_{\nu}^{-1})^{-}(v)\mathds{1}_{\mathcal{V}^{-}_{x}}(v)\,\pi^{Q}_{v}(du)\,dv$ $\displaystyle=\int_{\mathcal{V}^{+}_{x}\cup\mathcal{V}^{-}_{x}}\int_{(0,1)}\Delta(u,v)(F_{\mu}^{-1}-F_{\nu}^{-1})^{-}(u)\,\pi^{Q}_{u}(dv)\,du.$ Then (3.36) is a direct consequence of (3.38) integrated on $\mathcal{V}^{+}_{x}\cup\mathcal{V}^{-}_{x}$ with respect to the Lebesgue measure. Step 4. As mentioned at the beginning of the proof, it remains only to show that for $\mu(dx)$-almost all $x\in\mathbb{R}$, (3.19) is satisfied. By (3.5) and (1.16) we have for $\mu(dx)$-almost all $x\in\mathbb{R}$ and $dv$-almost all $v\in(0,1)$ $\int_{\mathbb{R}}|y^{\prime}-F_{\nu}^{-1}(F_{\mu}(x-)+v\mu(\\{x\\}))|\,\widetilde{m}^{Q}_{F_{\mu}(x-)+v\mu(\\{x\\})}(dy^{\prime})=|x-F_{\nu}^{-1}(F_{\mu}(x-)+v\mu(\\{x\\}))|.$ The latter equality implies that for $\mu(dx)$-almost all $x\in\mathbb{R}$ such that $\mu(\\{x\\})=0$, $\displaystyle\int_{\mathbb{R}\times\mathbb{R}}|y-y^{\prime}|\eta_{x}(dy,dy^{\prime})$ $\displaystyle=\int_{\mathbb{R}}|y^{\prime}-F_{\nu}^{-1}(F_{\mu}(x))|\,\widetilde{m}^{Q}_{F_{\mu}(x)}(dy^{\prime})=|x-F_{\nu}^{-1}(F_{\mu}(x))|$ $\displaystyle=\left|x-\int_{\mathbb{R}}y\,\pi^{HF}_{x}(dy)\right|.$ It remains to show (3.19) for $x\in\mathbb{R}$ such that $\mu(\\{x\\})>0$.
For such an element $x$, we have by (3.21) either $\mathcal{V}^{+}_{x}=\mathcal{U}^{+}_{x}$, which implies $(a_{x},b_{x})\cap\mathcal{U}^{+}_{x}=\emptyset$, or $\mathcal{V}^{-}_{x}=\mathcal{U}^{-}_{x}$, which implies $(a_{x},b_{x})\cap\mathcal{U}^{-}_{x}=\emptyset$. In both cases we have that $u\mapsto F_{\mu}^{-1}(u)-F_{\nu}^{-1}(u)$ does not change sign on $(a_{x},b_{x})$. Added to (3.5) and (3.20), we deduce that $\displaystyle\int_{\mathbb{R}\times\mathbb{R}}|y-y^{\prime}|\,\eta_{x}(dy,dy^{\prime})$ $\displaystyle=\frac{1}{\mu(\\{x\\})}\int_{(a_{x},b_{x})}\left(\int_{\mathbb{R}}|y^{\prime}-F_{\nu}^{-1}(u)|\,\widetilde{m}^{Q}_{u}(dy^{\prime})\right)\,du$ $\displaystyle=\frac{1}{\mu(\\{x\\})}\int_{(a_{x},b_{x})}\left|F_{\mu}^{-1}(u)-F_{\nu}^{-1}(u)\right|\,du$ $\displaystyle=\frac{1}{\mu(\\{x\\})}\left|\int_{(a_{x},b_{x})}(F_{\mu}^{-1}(u)-F_{\nu}^{-1}(u))\,du\right|$ $\displaystyle=\frac{1}{\mu(\\{x\\})}\left|\int_{\mathcal{V}^{+}_{x}}(F_{\mu}^{-1}(u)-F_{\nu}^{-1}(u))\,du+\int_{(a_{x},b_{x})}(F_{\mu}^{-1}(u)-F_{\nu}^{-1}(u))\,du+\int_{\mathcal{V}^{-}_{x}}(F_{\mu}^{-1}(u)-F_{\nu}^{-1}(u))\,du\right|$ $\displaystyle=\frac{1}{\mu(\\{x\\})}\left|\int_{(F_{\mu}(x-),F_{\mu}(x)]}(x-F_{\nu}^{-1}(u))\,du\right|$ $\displaystyle=\left|x-\frac{1}{\mu(\\{x\\})}\int_{(F_{\mu}(x-),F_{\mu}(x)]}F_{\nu}^{-1}(u)\,du\right|$ $\displaystyle=\left|x-\int_{\mathbb{R}}y\,\pi^{HF}_{x}(dy)\right|,$ which shows (3.19) and completes the proof. ∎ ###### Remark 3.8. 1. (i) Despite appearances, the martingale coupling $M^{Q}$ constructed in the proof of Proposition 3.6 does not depend on the choice of the measures $Q_{x}$, $x\in A_{\mu}$, whose marginals are given by (3.22). Informally, we see that $(Q_{x})_{x\in A_{\mu}}$ does not affect $(M^{Q}_{x})_{x\in\mathbb{R}}$ outside the jumps of $F_{\mu}$. 
Moreover, for all $x\in A_{\mu}$, $Q_{x}$ describes the way the elements of $\mathcal{V}^{+}_{x}$ are matched with the elements of $\mathcal{V}^{-}_{x}$, but this level of detail is not seen by $M^{Q}_{x}$, which only retains the contribution $\frac{1}{\mu(\\{x\\})}\int_{\mathcal{V}^{+}_{x}\cup\mathcal{V}^{-}_{x}}\delta_{F_{\nu}^{-1}(u)}(dy)\,du$, as shown below. Formally, since for all $x\in A_{\mu}$, the first, resp. second marginals of $\widetilde{Q}$ and $Q_{x}$ are singular, i.e. supported by disjoint measurable subsets of $(0,1)$, we have $\pi^{Q}_{u}=\delta_{\Gamma(u)}$ for $du$-almost all $u\in\widetilde{\mathcal{U}}^{+}\cup\widetilde{\mathcal{U}}^{-}$. In view of the definition (3.3), we deduce that for $du$-almost all $u\in\widetilde{\mathcal{U}}^{+}\cup\widetilde{\mathcal{U}}^{-}$, $\widetilde{m}^{Q}_{u}$ does not depend on $(Q_{x})_{x\in A_{\mu}}$, nor does it for all $u\in(0,1)$ such that $F_{\mu}^{-1}(u)=F_{\nu}^{-1}(u)$. Moreover, the image of the continuous part $\mu-\sum_{x\in A_{\mu}}\mu(\\{x\\})\delta_{x}$ of $\mu$ by $F_{\mu}^{-1}$ is absolutely continuous with respect to the Lebesgue measure on $(0,1)$. This implies that for $\mu(dx)$-almost all $x\in\mathbb{R}$ such that $\mu(\\{x\\})=0$, $M^{Q}_{x}=\widetilde{m}^{Q}_{F_{\mu}(x)}$ does not depend on $(Q_{x^{\prime}})_{x^{\prime}\in A_{\mu}}$. Let now $x\in A_{\mu}$.
Since $(F_{\mu}(x-),F_{\mu}(x)]=\mathcal{V}^{+}_{x}\cup\mathcal{V}^{-}_{x}\cup(a_{x},b_{x}]$, we have $\displaystyle M^{Q}_{x}(dy)$ $\displaystyle=\frac{1}{\mu(\\{x\\})}\int_{F_{\mu}(x-)}^{F_{\mu}(x)}\widetilde{m}^{Q}_{u}(dy)\,du$ $\displaystyle=\frac{1}{\mu(\\{x\\})}\left(\int_{\mathcal{V}^{+}_{x}}\widetilde{m}^{Q}_{u}(dy)\,du+\int_{\mathcal{V}^{-}_{x}}\widetilde{m}^{Q}_{u}(dy)\,du+\int_{(a_{x},b_{x}]}\widetilde{m}^{Q}_{u}(dy)\,du\right).$ We can check that $\int_{\mathcal{V}^{+}_{x}}\widetilde{m}^{Q}_{u}(dy)\,du+\int_{\mathcal{V}^{-}_{x}}\widetilde{m}^{Q}_{u}(dy)\,du=\int_{\mathcal{V}^{+}_{x}\cup\mathcal{V}^{-}_{x}}\delta_{F_{\nu}^{-1}(u)}(dy)\,du$ and deduce that $M^{Q}_{x}(dy)=\frac{1}{\mu(\\{x\\})}\left(\int_{\mathcal{V}^{+}_{x}\cup\mathcal{V}^{-}_{x}}\delta_{F_{\nu}^{-1}(u)}(dy)\,du+\int_{(a_{x},b_{x}]}\widetilde{m}^{Q}_{u}(dy)\,du\right)$. Since $(a_{x},b_{x}]\subset\widetilde{\mathcal{U}}^{+}\cup\widetilde{\mathcal{U}}^{-}$, we have from the foregoing that $\pi^{Q}_{u}$ and therefore $\widetilde{m}^{Q}_{u}$ does not depend on $(Q_{x})_{x\in A_{\mu}}$ for $du$-almost all $u\in(a_{x},b_{x}]$, hence $M^{Q}_{x}$ is independent of $(Q_{x^{\prime}})_{x^{\prime}\in A_{\mu}}$. 2. (ii) The continuous functions $\Delta_{+}$ and $\Delta_{-}$ defined in (2.12) respectively coincide with $\chi_{+}$ and $\chi_{-}$ outside the jumps of $F_{\mu}$. Let $u\notin\bigcup_{x\in A_{\mu}}[F_{\mu}(x-),F_{\mu}(x)]$. We have $\displaystyle\\{v\in(0,u)\mid\mu(\\{F_{\mu}^{-1}(v)\\})=0\\}\cup\bigcup_{x\in A_{\mu}\cap(-\infty,F_{\mu}^{-1}(u))}(F_{\mu}(x-),F_{\mu}(x)]$ $\displaystyle\subset(0,u)$ $\displaystyle\subset\\{v\in(0,u)\mid\mu(\\{F_{\mu}^{-1}(v)\\})=0\\}\cup\bigcup_{x\in A_{\mu}\cap(-\infty,F_{\mu}^{-1}(u))}[F_{\mu}(x-),F_{\mu}(x)],$ where the sets in the first union are disjoint. 
Hence $\displaystyle\Delta_{\pm}(u)$ $\displaystyle=\int_{0}^{u}(F_{\mu}^{-1}-G)^{\pm}(v)\mathds{1}_{\\{\mu(\\{F_{\mu}^{-1}(v)\\})=0\\}}\,dv+\sum_{x\in A_{\mu}\cap(-\infty,F_{\mu}^{-1}(u))}\int_{F_{\mu}(x-)}^{F_{\mu}(x)}(F_{\mu}^{-1}-G)^{\pm}(v)\,dv,$ $\displaystyle\chi_{\pm}(u)$ $\displaystyle=\int_{0}^{u}(F_{\mu}^{-1}-F_{\nu}^{-1})^{\pm}(v)\mathds{1}_{\\{\mu(\\{F_{\mu}^{-1}(v)\\})=0\\}}\,dv+\sum_{x\in A_{\mu}\cap(-\infty,F_{\mu}^{-1}(u))}\int_{F_{\mu}(x-)}^{F_{\mu}(x)}(F_{\mu}^{-1}-F_{\nu}^{-1})^{\pm}(v)\mathds{1}_{\widetilde{\mathcal{U}}^{\pm}}(v)\,dv.$ For all $v\in(0,1)$ such that $\mu(\\{F_{\mu}^{-1}(v)\\})=0$, $G(v)=F_{\nu}^{-1}(v)$. Moreover, for $x\in A_{\mu}$, $F_{\mu}^{-1}$ and $G$ are constant and respectively equal to $x$ and $\frac{1}{F_{\mu}(x)-F_{\mu}(x-)}\int_{F_{\mu}(x-)}^{F_{\mu}(x)}F_{\nu}^{-1}(v)\,dv$ on $(F_{\mu}(x-),F_{\mu}(x)]$, so that, using the definition of $\widetilde{\mathcal{U}}^{\pm}$ for the second equality, we obtain $\displaystyle\int_{F_{\mu}(x-)}^{F_{\mu}(x)}(F_{\mu}^{-1}-F_{\nu}^{-1})^{\pm}(v)\mathds{1}_{\widetilde{\mathcal{U}}^{\pm}}(v)\,dv$ $\displaystyle=\int_{F_{\mu}(x-)}^{F_{\mu}(x)}(x-F_{\nu}^{-1}(v))^{\pm}\mathds{1}_{\widetilde{\mathcal{U}}^{\pm}}(v)\,dv=\left(\int_{F_{\mu}(x-)}^{F_{\mu}(x)}(x-F_{\nu}^{-1}(v))\,dv\right)^{\pm}$ $\displaystyle=\left(\int_{F_{\mu}(x-)}^{F_{\mu}(x)}(x-G(v))\,dv\right)^{\pm}=\int_{F_{\mu}(x-)}^{F_{\mu}(x)}(F_{\mu}^{-1}-G)^{\pm}(v)\,dv.$ We deduce that $\Delta_{\pm}(u)=\chi_{\pm}(u)$. For all $x\in A_{\mu}$, $u\mapsto\Delta_{\pm}$ is affine on $[F_{\mu}(x-),F_{\mu}(x)]$, whereas $\chi_{\pm}$ realises an a priori different continuous interpolation. However the fact that $\Delta_{\pm}$ and $\chi_{\pm}$ coincide outside the jumps of $F_{\mu}$ indicates that the constructions of the martingale couplings $M$ defined in Section 2.2 and $M^{Q}$ defined in the proof of Proposition 3.6 are very close. 
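The discrete computations in Example 3.5 above lend themselves to a direct numerical check. The following sketch is not part of the paper: it is plain Python using exact rational arithmetic, with couplings encoded as dictionaries of atoms, and it verifies that $M^{IT}$ has marginals $\mu$ and $\nu$, satisfies the martingale constraint, and that $\pi^{HF}_{0}$ and $M^{IT}_{0}$ are not comparable for the stochastic order $\leq_{st}$.

```python
from fractions import Fraction as Fr

# Example 3.5: mu = (1/4)(d_{-1} + 2 d_0 + d_1), nu = (1/4)(d_{-2} + d_{-1} + d_1 + d_2)
mu = {-1: Fr(1, 4), 0: Fr(1, 2), 1: Fr(1, 4)}
nu = {-2: Fr(1, 4), -1: Fr(1, 4), 1: Fr(1, 4), 2: Fr(1, 4)}

# Inverse transform martingale coupling M^IT, stored as {(x, y): mass}
MIT = {(-1, -2): Fr(1, 6), (-1, 1): Fr(1, 12), (0, -2): Fr(1, 12),
       (0, -1): Fr(1, 6), (0, 1): Fr(1, 6), (0, 2): Fr(1, 12),
       (1, -1): Fr(1, 12), (1, 2): Fr(1, 6)}

def marginals(pi):
    first, second = {}, {}
    for (x, y), m in pi.items():
        first[x] = first.get(x, 0) + m
        second[y] = second.get(y, 0) + m
    return first, second

def is_martingale(pi):
    # the conditional mean of y given x must equal x: sum_y y pi(x,y) = x sum_y pi(x,y)
    num, den = {}, {}
    for (x, y), m in pi.items():
        num[x] = num.get(x, 0) + m * y
        den[x] = den.get(x, 0) + m
    return all(num[x] == x * den[x] for x in den)

assert marginals(MIT) == (mu, nu) and is_martingale(MIT)

def cdf(p, t):
    return sum(m for y, m in p.items() if y <= t)

def st_leq(p, q):
    # p <=_st q iff the cdf of p dominates the cdf of q everywhere;
    # for discrete laws it suffices to test at the atoms of p and q
    return all(cdf(p, t) >= cdf(q, t) for t in set(p) | set(q))

# conditional laws at x = 0
MIT0 = {-2: Fr(1, 6), -1: Fr(1, 3), 1: Fr(1, 3), 2: Fr(1, 6)}
piHF0 = {-1: Fr(1, 2), 1: Fr(1, 2)}
# neither comparison holds, as claimed in Example 3.5
assert not st_leq(piHF0, MIT0) and not st_leq(MIT0, piHF0)
print("Example 3.5 checks pass")
```

The same two helpers applied to the coupling displayed at the end of Example 3.5 confirm that it is also a martingale coupling between $\mu$ and $\nu$.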
Let us now illustrate with the following example that the martingale coupling $M^{Q}$ constructed in the proof of Proposition 3.6 is in general different from the martingale coupling $M$ defined in Section 2.2. ###### Example 3.9. Let $\mu=\frac{1}{2}(\delta_{-2}+\delta_{2})$ and $\nu=\frac{1}{3}\delta_{-4}+\frac{1}{6}\delta_{-1}+\frac{1}{6}\delta_{1}+\frac{1}{3}\delta_{4}$. Let $Q\in\mathcal{Q}$, $(\widetilde{m}^{Q}_{u})_{u\in(0,1)}$ be defined by (3.3) and $M^{Q}$ be defined by (3.6). With the definition (3.7) in mind, we have $\mathcal{U}^{+}=\left(0,\frac{1}{3}\right]\cup\left(\frac{1}{2},\frac{2}{3}\right]\quad\textrm{and}\quad\mathcal{U}^{-}=\left(\frac{1}{3},\frac{1}{2}\right]\cup\left(\frac{2}{3},1\right).$ Let $u\in(0,1)$ be such that $F_{\mu}^{-1}(u)=-2$, or equivalently $u\leq\frac{1}{2}$. If $u\leq\frac{1}{3}$ then $u\in\mathcal{U}^{+}$. For all $v\in(0,1)$, $F_{\nu}^{-1}(v)=1\iff\frac{1}{2}<v\leq\frac{2}{3}\implies v\in\mathcal{U}^{+}$, so $\widetilde{m}^{Q}_{u}(\\{1\\})=0$. Else if $u>\frac{1}{3}$, then $F_{\nu}^{-1}(u)=-1$ and, since $\widetilde{m}^{Q}_{u}$ must have mean $-2$, $\widetilde{m}^{Q}_{u}(\\{1,4\\})=0$. We deduce that $M^{Q}(\\{(-2,1)\\})=0$. Similarly we find $M^{Q}(\\{(2,-1)\\})=0$. Since $\nu(\\{-1\\})=\nu(\\{1\\})=\frac{1}{6}$, we deduce that $M^{Q}(\\{(-2,-1)\\})=M^{Q}(\\{(2,1)\\})=\frac{1}{6}$. Then the martingale constraint imposes $M^{Q}=\frac{13}{48}\delta_{(-2,-4)}+\frac{1}{6}\delta_{(-2,-1)}+\frac{1}{16}\delta_{(-2,4)}+\frac{1}{16}\delta_{(2,-4)}+\frac{1}{6}\delta_{(2,1)}+\frac{13}{48}\delta_{(2,4)},$ which therefore does not depend on the choice of $Q\in\mathcal{Q}$. We easily find that the Hoeffding-Fréchet coupling $\pi^{HF}$ between $\mu$ and $\nu$ is given by $\pi^{HF}=\frac{1}{3}\delta_{(-2,-4)}+\frac{1}{6}\delta_{(-2,-1)}+\frac{1}{6}\delta_{(2,1)}+\frac{1}{3}\delta_{(2,4)},$ and for all $u\in(0,\frac{1}{2})$, resp.
$u\in(\frac{1}{2},1)$, we have $\varphi(u)=u+\frac{1}{2}$, resp. $\varphi(u)=u-\frac{1}{2}$. Then one can easily check that the probability kernel $(m_{u})_{u\in(0,1)}$ defined for all $u\in(0,\frac{1}{2})$ by $m_{u}=\frac{5}{9}\delta_{-4}+\frac{5}{18}\delta_{-1}+\frac{1}{18}\delta_{1}+\frac{1}{9}\delta_{4},$ and for all $u\in(\frac{1}{2},1)$ by $m_{u}=\frac{1}{9}\delta_{-4}+\frac{1}{18}\delta_{-1}+\frac{5}{18}\delta_{1}+\frac{5}{9}\delta_{4},$ satisfies (2.23) with $\pi=\pi^{HF}$. Then the martingale coupling $M$ defined by (2.27) is $M=\frac{5}{18}\delta_{(-2,-4)}+\frac{5}{36}\delta_{(-2,-1)}+\frac{1}{36}\delta_{(-2,1)}+\frac{1}{18}\delta_{(-2,4)}+\frac{1}{18}\delta_{(2,-4)}+\frac{1}{36}\delta_{(2,-1)}+\frac{5}{36}\delta_{(2,1)}+\frac{5}{18}\delta_{(2,4)},$ hence $M^{Q}\neq M$. ### 3.4 An example of $\mathcal{AW}_{\rho}$-minimal martingale rearrangement for $\rho>2$ Let $f:\mathbb{R}\to\mathbb{R}$ and $q:\mathbb{R}\to[0,1]$ be defined for all $y\in\mathbb{R}$ by $\displaystyle f(y)$ $\displaystyle=\frac{1+\textrm{e}}{6}\left(\textrm{e}^{-|y|}\mathds{1}_{\\{|y|\geq 1\\}}+\frac{\textrm{e}^{-|y|}+1}{1+\textrm{e}}\mathds{1}_{\\{|y|<1\\}}\right);$ $\displaystyle q(y)$ $\displaystyle=\frac{\textrm{e}}{1+\textrm{e}}\mathds{1}_{\\{y\leq-1\\}}+\frac{1}{1+\textrm{e}^{y}}\mathds{1}_{\\{-1<y<1\\}}+\frac{1}{1+\textrm{e}}\mathds{1}_{\\{y\geq 1\\}}.$ Let $T:\mathbb{R}\to\mathbb{R}$ be the inverse of the continuous increasing map $y\mapsto y+2q(y)-1$, so that for all $y\in\mathbb{R}$, $q(y)=\frac{1+T^{-1}(y)-y}{2}$. Let $\nu(dy)=f(y)\,dy$ and $\mu=(T^{-1})_{\sharp}\nu$. 
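Before proceeding, the pair $(f,q)$ just defined lends itself to a quick numerical sanity check (an illustrative sketch only, not part of the argument; the grid sizes and tolerances below are ad hoc choices): $f$ integrates to $1$, the transition identity $q(y-1)f(y-1)+(1-q(y+1))f(y+1)=f(y)$ holds, and $\sup_{y\in\mathbb{R}}|2q(y)-1|=(\textrm{e}-1)/(\textrm{e}+1)$.

```python
import math

E = math.e

def f(y):
    """Density of nu from Section 3.4."""
    a = abs(y)
    if a >= 1:
        return (1 + E) / 6 * math.exp(-a)
    return (1 + E) / 6 * (math.exp(-a) + 1) / (1 + E)

def q(y):
    """Upward-jump probability from Section 3.4."""
    if y <= -1:
        return E / (1 + E)
    if y < 1:
        return 1 / (1 + math.exp(y))
    return 1 / (1 + E)

# 1) f is a probability density (trapezoidal rule; the tail beyond |y| = 30
#    contributes less than e^{-30}).
L, N = 30.0, 120_000
h = 2 * L / N
mass = h * (sum(f(-L + k * h) for k in range(1, N)) + 0.5 * (f(-L) + f(L)))

# 2) the transition identity q(y-1)f(y-1) + (1-q(y+1))f(y+1) = f(y),
#    checked on a grid covering all the piecewise cases
grid = [-5 + 0.01 * k for k in range(1001)]
gap = max(abs(q(y - 1) * f(y - 1) + (1 - q(y + 1)) * f(y + 1) - f(y)) for y in grid)

# 3) sup_y |2q(y) - 1| = (e-1)/(e+1) < 1, attained for |y| >= 1
sup_dev = max(abs(2 * q(y) - 1) for y in grid)
```

Here `mass`, `gap` and `sup_dev` confirm, up to floating-point error, the normalisation of $f$, the transition identity and the value of the supremum.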
We can easily compute $\sup_{x\in\mathbb{R}}|x-T(x)|=\sup_{y\in\mathbb{R}}|T^{-1}(y)-y|=\sup_{y\in\mathbb{R}}|2q(y)-1|=\frac{\textrm{e}-1}{\textrm{e}+1}<1.$ By considering the cases $y\leq-2$, $-2<y\leq-1$, $-1<y\leq 0$, $0<y\leq 1$, $1<y\leq 2$ and $2<y$, it is easy to check that $\forall y\in\mathbb{R},\quad q(y-1)f(y-1)+(1-q(y+1))f(y+1)=f(y).$ (3.39) Let $m^{0}_{u}=q(F_{\nu}^{-1}(u))\,\delta_{F_{\nu}^{-1}(u)+1}+(1-q(F_{\nu}^{-1}(u)))\,\delta_{F_{\nu}^{-1}(u)-1}$ (3.40) and $h:\mathbb{R}\to\mathbb{R}$ be measurable and bounded. Then $\displaystyle\int_{(0,1)\times\mathbb{R}}h(y)\,du\,m^{0}_{u}(dy)$ $\displaystyle=\int_{(0,1)}\left(q(F_{\nu}^{-1}(u))h(F_{\nu}^{-1}(u)+1)+(1-q(F_{\nu}^{-1}(u)))h(F_{\nu}^{-1}(u)-1)\right)\,du$ $\displaystyle=\int_{\mathbb{R}}\left(q(y)h(y+1)+(1-q(y))h(y-1)\right)\,\nu(dy)$ $\displaystyle=\int_{\mathbb{R}}q(y)h(y+1)f(y)\,dy+\int_{\mathbb{R}}(1-q(y))h(y-1)f(y)\,dy$ $\displaystyle=\int_{\mathbb{R}}\left(q(y-1)f(y-1)+(1-q(y+1))f(y+1)\right)h(y)\,dy$ $\displaystyle=\int_{\mathbb{R}}f(y)h(y)\,dy,$ where we used (3.39) for the last equality. We deduce that $\int_{u\in(0,1)}m^{0}_{u}(dy)\,du=\nu(dy)$. Moreover, for all $u\in(0,1)$, the mean of $m^{0}_{u}$ is $F_{\nu}^{-1}(u)+2q(F_{\nu}^{-1}(u))-1=T^{-1}(F_{\nu}^{-1}(u))=F_{\mu}^{-1}(u)$, where the last equality comes from $\mu=(T^{-1})_{\sharp}\nu$ with $T^{-1}$ continuous and increasing. Hence $\widehat{M}^{0}=\lambda_{(0,1)}\times\delta_{F_{\mu}^{-1}(u)}\times m^{0}_{u}\in\widehat{\Pi}^{\mathrm{M}}(\mu,\nu)$. Let us now show that $\widehat{M}^{0}$ is the only $\widehat{\mathcal{AW}}_{\rho}$-minimal martingale rearrangement coupling of $\widehat{\pi}^{HF}$ for $\rho>2$.
Since $|y-F_{\nu}^{-1}(u)|$ is $du\,m^{0}_{u}(dy)$-almost everywhere constant, we have $\displaystyle\begin{split}\left(\int_{(0,1)}\mathcal{W}_{2}^{2}(m^{0}_{u},\delta_{F_{\nu}^{-1}(u)})\,du\right)^{\rho/2}&=\left(\int_{(0,1)}\int_{\mathbb{R}}|F_{\nu}^{-1}(u)-y|^{2}\,m^{0}_{u}(dy)\,du\right)^{\rho/2}\\\ &=\int_{(0,1)}\int_{\mathbb{R}}|F_{\nu}^{-1}(u)-y|^{\rho}\,m^{0}_{u}(dy)\,du\geq\widehat{\mathcal{AW}}_{\rho}^{\rho}(\widehat{M}^{0},\widehat{\pi}^{HF}).\end{split}$ (3.41) Since by Proposition 3.2 and its proof, $\int_{(0,1)}\mathcal{W}_{2}^{2}(m_{u},\delta_{F_{\nu}^{-1}(u)})\,du$ does not depend on $\widehat{M}=\lambda_{(0,1)}\times\delta_{F_{\mu}^{-1}(u)}\times m_{u}\in\widehat{\Pi}^{\mathrm{M}}(\mu,\nu)$, to conclude it is enough to show that for $\widehat{M}\neq\widehat{M}^{0}$, $\widehat{\mathcal{AW}}_{\rho}^{\rho}(\widehat{M},\widehat{\pi}^{HF})>\left(\int_{(0,1)}\mathcal{W}_{2}^{2}(m_{u},\delta_{F_{\nu}^{-1}(u)})\,du\right)^{\rho/2}.$ (3.42) Let $\chi\in\Pi(\lambda_{(0,1)},\lambda_{(0,1)})$ be optimal for $\widehat{\mathcal{AW}}_{\rho}(\widehat{M},\widehat{\pi}^{HF})$. Suppose first that $\chi(du,du^{\prime})=\lambda_{(0,1)}(du)\,\delta_{u}(du^{\prime})$. 
Since $\int_{(0,1)}\int_{\mathbb{R}}|y-F_{\nu}^{-1}(u)|^{2}\,m_{u}(dy)\,du=\int_{(0,1)}\mathcal{W}_{2}^{2}(m_{u},\delta_{F_{\nu}^{-1}(u)})\,du=\int_{(0,1)}\mathcal{W}_{2}^{2}(m^{0}_{u},\delta_{F_{\nu}^{-1}(u)})\,du=1,$ and $\widehat{M}\neq\widehat{M}^{0}$, $|y-F_{\nu}^{-1}(u)|$ is not $du\,m_{u}(dy)$-almost everywhere constant, so by Jensen’s strict inequality we have $\displaystyle\widehat{\mathcal{AW}}_{\rho}^{\rho}(\widehat{M},\widehat{\pi}^{HF})$ $\displaystyle=\int_{(0,1)}\mathcal{W}_{\rho}^{\rho}(m_{u},\delta_{F_{\nu}^{-1}(u)})\,du=\int_{\mathbb{R}\times(0,1)}|y-F_{\nu}^{-1}(u)|^{\rho}\,m_{u}(dy)\,du$ $\displaystyle>\left(\int_{\mathbb{R}\times(0,1)}|y-F_{\nu}^{-1}(u)|^{2}\,m_{u}(dy)\,du\right)^{\rho/2}=\left(\int_{(0,1)}\mathcal{W}_{2}^{2}(m_{u},\delta_{F_{\nu}^{-1}(u)})\,du\right)^{\rho/2}.$ Else if $\chi(du,du^{\prime})\neq\lambda_{(0,1)}(du)\,\delta_{u}(du^{\prime})$, then using Jensen’s inequality for the third inequality and (3.13) for the fourth, we have $\displaystyle\begin{split}\widehat{\mathcal{AW}}_{\rho}^{\rho}(\widehat{M},\widehat{\pi}^{HF})&>\int_{(0,1)\times(0,1)}\mathcal{W}_{\rho}^{\rho}(m_{u},\delta_{F_{\nu}^{-1}(u^{\prime})})\,\chi(du,du^{\prime})\geq\int_{(0,1)\times(0,1)}\mathcal{W}_{2}^{\rho}(m_{u},\delta_{F_{\nu}^{-1}(u^{\prime})})\,\chi(du,du^{\prime})\\\ &\geq\left(\int_{(0,1)\times(0,1)}\mathcal{W}_{2}^{2}(m_{u},\delta_{F_{\nu}^{-1}(u^{\prime})})\,\chi(du,du^{\prime})\right)^{\rho/2}\geq\left(\int_{(0,1)}\mathcal{W}_{2}^{2}(m_{u},\delta_{F_{\nu}^{-1}(u)})\,du\right)^{\rho/2},\end{split}$ (3.43) which proves (3.42) and therefore that $\widehat{M}^{0}$ is the only $\widehat{\mathcal{AW}}_{\rho}$-minimal martingale rearrangement coupling of $\widehat{\pi}^{HF}$. 
Note that (3.43) is valid for $\widehat{M}=\widehat{M}^{0}$, which in view of (3.41) shows that $\lambda_{(0,1)}(du)\,\delta_{u}(du^{\prime})$ is the only coupling between $\lambda_{(0,1)}$ and $\lambda_{(0,1)}$ optimal for $\widehat{\mathcal{AW}}_{\rho}(\widehat{M}^{0},\widehat{\pi}^{HF})$. With similar arguments we prove that $M^{0}(dx,dy)=\mu(dx)\left(q(T(x))\,\delta_{T(x)+1}(dy)+(1-q(T(x)))\,\delta_{T(x)-1}(dy)\right)$ is the only $\mathcal{AW}_{\rho}$-minimal martingale rearrangement coupling of $\pi^{HF}$, and $\mu(dx)\,\delta_{x}(dx^{\prime})$ is the only coupling between $\mu$ and $\mu$ optimal for $\mathcal{AW}_{\rho}(M^{0},\pi^{HF})$. ###### Remark 3.10. According to [24, Proposition 2.11], any element of our family $(M^{Q})_{Q\in\mathcal{Q}}$ of martingale couplings minimises $\int_{\mathbb{R}\times\mathbb{R}}|y-T(x)|\,M(dx,dy)$ among all martingale couplings $M$ between $\mu$ and $\nu$ and satisfies $\int_{\mathbb{R}\times\mathbb{R}}|y-T(x)|\,M^{Q}(dx,dy)=\mathcal{W}_{1}(\mu,\nu)$. According to [24, Proposition 3.5], since $\rho>2$, the inverse transform martingale coupling $M^{IT}$ minimises $\int_{\mathbb{R}\times\mathbb{R}}|y-T(x)|^{\rho}\,M^{Q}(dx,dy)$ (it maximises this integral for $1<\rho<2$) among all martingale couplings $M^{Q}$ parametrised by $Q\in\mathcal{Q}$. Yet the optimum over the whole set of martingale couplings between $\mu$ and $\nu$ is not $M^{IT}$ but $M^{0}$. Indeed, by construction we have $M^{IT}_{x}(\\{T(x)\\})>0$, $\mu(dx)$-almost everywhere, hence $M^{IT}\neq M^{0}$ and $|y-T(x)|$ is not $M^{IT}(dx,dy)$-almost everywhere constant. 
Then by Jensen’s strict inequality and the fact that $\int_{\mathbb{R}\times\mathbb{R}}|y-T(x)|^{2}\,M(dx,dy)$ does not depend on the choice of $M\in\Pi^{\mathrm{M}}(\mu,\nu)$, we get $\displaystyle\begin{split}\left(\int_{\mathbb{R}\times\mathbb{R}}|y-T(x)|^{\rho}\,M^{IT}(dx,dy)\right)^{1/\rho}&>\left(\int_{\mathbb{R}\times\mathbb{R}}|y-T(x)|^{2}\,M^{0}(dx,dy)\right)^{1/2}\\\ &=\left(\int_{\mathbb{R}\times\mathbb{R}}|y-T(x)|^{\rho}\,M^{0}(dx,dy)\right)^{1/\rho}.\end{split}$ (3.44) Note that when $1<\rho<2$, one can readily adapt the above arguments to show that $M^{0}$ is the only $\mathcal{AW}_{\rho}$-maximal martingale rearrangement coupling of $\pi^{HF}$, i.e. it maximises $\mathcal{AW}_{\rho}(\pi^{HF},M)$ over $M\in\Pi^{\mathrm{M}}(\mu,\nu)$, and reasoning similarly to (3.44) yields $\left(\int_{\mathbb{R}\times\mathbb{R}}|y-T(x)|^{\rho}\,M^{IT}(dx,dy)\right)^{1/\rho}<\left(\int_{\mathbb{R}\times\mathbb{R}}|y-T(x)|^{\rho}\,M^{0}(dx,dy)\right)^{1/\rho}.$ ## 4 Stability of the inverse transform martingale coupling In the next proposition we prove the stability in $\mathcal{AW}_{\rho}$, for $\rho\geq 1$, of the lifted inverse transform martingale coupling, defined for all $\mu,\nu\in\mathcal{P}_{1}(\mathbb{R})$ in the convex order by $\widehat{M}^{IT}(du,dx,dy)=\lambda_{(0,1)}(du)\,\delta_{F_{\mu}^{-1}(u)}(dx)\,\widetilde{m}^{IT}_{u}(dy),$ where $(\widetilde{m}^{IT}_{u})_{u\in(0,1)}$ is defined by (3.10). In another proposition, we give a condition on the first marginals ensuring that the inverse transform martingale coupling is stable in $\mathcal{AW}_{\rho}$. ###### Proposition 4.1. Let $\rho\geq 1$ and $\mu_{n},\nu_{n}\in\mathcal{P}_{\rho}(\mathbb{R})$, $n\in\mathbb{N}$, be in convex order and respectively converge to $\mu$ and $\nu$ in $\mathcal{W}_{\rho}$ as $n\to+\infty$. 
Then $\widehat{\mathcal{AW}}^{\rho}_{\rho}(\widehat{M}^{IT}_{n},\widehat{M}^{IT})\leq\int_{(0,1)}\mathcal{W}_{\rho}^{\rho}((\widetilde{m}^{IT}_{n})_{u},\widetilde{m}^{IT}_{u})\,du\underset{n\to+\infty}{\longrightarrow}0,$ (4.1) where $\widehat{M}^{IT}_{n}=\lambda_{(0,1)}\times\delta_{F_{\mu_{n}}^{-1}(u)}\times(\widetilde{m}^{IT}_{n})_{u}$, resp. $\widehat{M}^{IT}=\lambda_{(0,1)}\times\delta_{F_{\mu}^{-1}(u)}\times\widetilde{m}^{IT}_{u}$, denotes the lifted inverse transform martingale coupling between $\mu_{n}$ and $\nu_{n}$, resp. $\mu$ and $\nu$. ###### Proof. Since $\widehat{\mathcal{AW}}^{\rho}_{\rho}(\widehat{M}^{IT}_{n},\widehat{M}^{IT})\leq\int_{(0,1)}\mathcal{W}_{\rho}^{\rho}((\widetilde{m}^{IT}_{n})_{u},\widetilde{m}^{IT}_{u})\,du,$ it suffices to show that the right-hand side vanishes as $n$ goes to $+\infty$. This is achieved in two steps. First, we prove that, on the probability space $(0,1)$ endowed with the Lebesgue measure, the family of random variables $\left(\mathcal{W}_{\rho}^{\rho}\left((\widetilde{m}^{IT}_{n})_{u},\widetilde{m}^{IT}_{u}\right)\right)_{n\in\mathbb{N}}$ is uniformly integrable. Second, we show for $du$-almost all $u\in(0,1)$ that $\mathcal{W}_{\rho}^{\rho}\left((\widetilde{m}^{IT}_{n})_{u},\widetilde{m}^{IT}_{u}\right)\underset{n\to+\infty}{\longrightarrow}0.$ (4.2) Let us begin with the uniform integrability. For $u\in(0,1)$, we can estimate $\mathcal{W}_{\rho}^{\rho}\left((\widetilde{m}^{IT}_{n})_{u},\widetilde{m}^{IT}_{u}\right)\leq 2^{\rho-1}\int_{\mathbb{R}}|y|^{\rho}\,\left((\widetilde{m}^{IT}_{n})_{u}(dy)+\widetilde{m}^{IT}_{u}(dy)\right).$ (4.3) According to [24, Lemma 2.6], $M^{IT}$ is the image of $\mathds{1}_{(0,1)}(u)\,du\,\widetilde{m}^{IT}_{u}(dy)$ by $(u,y)\mapsto(F_{\mu}^{-1}(u),y)$ so that the second marginal of this measure is $\nu(dy)$.
Therefore $\int_{(0,1)}\int_{\mathbb{R}}|y|^{\rho}\,\widetilde{m}^{IT}_{u}(dy)\,du=\int_{\mathbb{R}}|y|^{\rho}\,\nu(dy)<+\infty.$ Hence it is enough to check the uniform integrability of $\left(\int_{\mathbb{R}}|y|^{\rho}\,(\widetilde{m}^{IT}_{n})_{u}(dy)\right)_{n\in\mathbb{N}}$ to ensure that of $\left(\mathcal{W}_{\rho}^{\rho}\left((\widetilde{m}^{IT}_{n})_{u},\widetilde{m}^{IT}_{u}\right)\right)_{n\in\mathbb{N}}$. Since the second marginal of the measure $\mathds{1}_{(0,1)}(u)\,du\,(\widetilde{m}^{IT}_{n})_{u}(dy)$ is $\nu_{n}(dy)$, this measure can also be written $\nu_{n}(dy)k^{n}_{y}(du)$ for some probability kernel $k^{n}$ on $\mathbb{R}\times(0,1)$. Let $\varepsilon>0$ and $A$ be a measurable subset of $(0,1)$ such that $\lambda(A)<\varepsilon$. For all $n\in\mathbb{N}$, we have $J_{n}(A):=\int_{A}\int_{\mathbb{R}}|y|^{\rho}\,(\widetilde{m}^{IT}_{n})_{u}(dy)\,du=\int_{\mathbb{R}}|y|^{\rho}\,\tau_{n}(dy),$ where $\tau_{n}(dy)=\int_{u=0}^{1}\mathds{1}_{A}(u)\,k^{n}_{y}(du)\,\nu_{n}(dy)$ is such that $\tau_{n}\leq\nu_{n}$ and $\tau_{n}(\mathbb{R})=\lambda(A)$. Hence $\sup_{A\in\mathcal{B}((0,1)),\ \lambda(A)\leq\varepsilon}J_{n}(A)\leq I_{\varepsilon}^{\rho}(\nu_{n}),$ where $I^{\rho}_{\varepsilon}(\zeta)$ is defined for all $\zeta\in\mathcal{P}_{\rho}(\mathbb{R})$ as the supremum of $\int_{\mathbb{R}}|y|^{\rho}\,\tau(dy)$ over all finite measures $\tau$ on $\mathbb{R}$ such that $\tau\leq\zeta$ and $\tau(\mathbb{R})\leq\varepsilon$. Let $\eta>0$. By [7, Lemma 3.1 (b)], since $\nu\in{\mathcal{P}}_{\rho}(\mathbb{R})$, there exists $\varepsilon^{\prime}>0$ such that $I^{\rho}_{\varepsilon^{\prime}}(\nu)<\eta$. Then let $N\in\mathbb{N}$ be such that for all $n>N$, $\mathcal{W}_{\rho}^{\rho}(\nu_{n},\nu)<\eta$, so that by [7, Lemma 3.1 (c)], $I^{\rho}_{\varepsilon^{\prime}}(\nu_{n})\leq 2^{\rho-1}(\mathcal{W}_{\rho}^{\rho}(\nu_{n},\nu)+I^{\rho}_{\varepsilon^{\prime}}(\nu))<2^{\rho}\eta$.
By [7, Lemma 3.1 (b)] again there exists $\varepsilon^{\prime\prime}>0$ such that for all $n\leq N$, $I^{\rho}_{\varepsilon^{\prime\prime}}(\nu_{n})<2^{\rho}\eta$. We deduce that for all $\varepsilon\in(0,\varepsilon^{\prime}\wedge\varepsilon^{\prime\prime})$, $\sup_{n\in\mathbb{N}}\sup_{A\in\mathcal{B}((0,1)),\ \lambda(A)\leq\varepsilon}J_{n}(A)\leq 2^{\rho}\eta,$ which yields uniform integrability of $\left(\int_{\mathbb{R}}|y|^{\rho}\,(\widetilde{m}^{IT}_{n})_{u}(dy)\right)_{n\in\mathbb{N}}$. Next, we show the $du$-almost everywhere pointwise convergence of (4.2). Since, by monotonicity, $u\mapsto(F_{\mu}^{-1}(u),F_{\nu}^{-1}(u))$ is continuous $du$-almost everywhere on $(0,1)$ and, then, the weak convergence implies that $(F_{\mu_{n}}^{-1}(u),F_{\nu_{n}}^{-1}(u))\underset{n\to+\infty}{\longrightarrow}(F_{\mu}^{-1}(u),F_{\nu}^{-1}(u)),$ (4.4) we suppose without loss of generality that this convergence holds. Let $n\in\mathbb{N}$. Let $\Psi_{n+}$, resp. $\Psi_{n-}$, be the map defined by the left-hand, resp. right-hand side of (3.1), with $(\mu_{n},\nu_{n})$ replacing $(\mu,\nu)$. By (3.10), $(\widetilde{m}_{n}^{IT})_{u}=p_{n}(u)\delta_{F_{\nu_{n}}^{-1}(\varphi_{n}(u))}+(1-p_{n}(u))\delta_{F_{\nu_{n}}^{-1}(u)}\quad\mbox{ with }\quad p_{n}(u)=\mathds{1}_{\\{F_{\mu_{n}}^{-1}(u)\neq F_{\nu_{n}}^{-1}(u)\\}}\frac{F_{\mu_{n}}^{-1}(u)-F_{\nu_{n}}^{-1}(u)}{F_{\nu_{n}}^{-1}(\varphi_{n}(u))-F_{\nu_{n}}^{-1}(u)}\in[0,1]$ and $\varphi_{n}(u)=\Psi_{n-}^{-1}(\Psi_{n+}(u))$. Suppose first that $u\in\mathcal{U}_{0}$ i.e. $F_{\mu}^{-1}(u)=F_{\nu}^{-1}(u)$, so that $\widetilde{m}^{IT}_{u}=\delta_{F_{\nu}^{-1}(u)}$. 
We have $\displaystyle\begin{split}\mathcal{W}_{\rho}^{\rho}((\widetilde{m}_{n}^{IT})_{u},\widetilde{m}^{IT}_{u})&=p_{n}(u)|F_{\nu_{n}}^{-1}(\varphi_{n}(u))-F_{\nu}^{-1}(u)|^{\rho}+(1-p_{n}(u))|F_{\nu_{n}}^{-1}(u)-F_{\nu}^{-1}(u)|^{\rho}\\\ &\leq 2^{\rho-1}p_{n}(u)|F_{\nu_{n}}^{-1}(\varphi_{n}(u))-F_{\nu_{n}}^{-1}(u)|^{\rho}+(2^{\rho-1}p_{n}(u)+1-p_{n}(u))|F_{\nu_{n}}^{-1}(u)-F_{\nu}^{-1}(u)|^{\rho}\\\ &\leq 2^{\rho-1}\mathds{1}_{\\{F_{\mu_{n}}^{-1}(u)\neq F_{\nu_{n}}^{-1}(u)\\}}|F_{\mu_{n}}^{-1}(u)-F_{\nu_{n}}^{-1}(u)|^{\rho}+2^{\rho-1}|F_{\nu_{n}}^{-1}(u)-F_{\nu}^{-1}(u)|^{\rho}\\\ &\leq 2^{2\rho-2}|F_{\mu_{n}}^{-1}(u)-F_{\mu}^{-1}(u)|^{\rho}+2^{\rho-1}(2^{\rho-1}+1)|F_{\nu_{n}}^{-1}(u)-F_{\nu}^{-1}(u)|^{\rho},\end{split}$ (4.5) where the right-hand side goes to $0$ as $n\to+\infty$ by (4.4). Suppose next that $u\in\mathcal{U}_{+}$ i.e. $F_{\mu}^{-1}(u)>F_{\nu}^{-1}(u)$, the case $u\in\mathcal{U}_{-}$ being treated in a similar way. Then without loss of generality $\widetilde{m}^{IT}_{u}=p(u)\delta_{F_{\nu}^{-1}(\varphi(u))}+(1-p(u))\delta_{F_{\nu}^{-1}(u)}\mbox{ with }p(u)=\frac{F_{\mu}^{-1}(u)-F_{\nu}^{-1}(u)}{F_{\nu}^{-1}(\varphi(u))-F_{\nu}^{-1}(u)}$ and $\varphi(u)=\Psi_{-}^{-1}(\Psi_{+}(u))$. By (4.4), for $n$ large enough, $u\in\mathcal{U}_{n+}$ so that without loss of generality, $p_{n}(u)=\frac{F_{\mu_{n}}^{-1}(u)-F_{\nu_{n}}^{-1}(u)}{F_{\nu_{n}}^{-1}(\varphi_{n}(u))-F_{\nu_{n}}^{-1}(u)}$ and checking (4.2) amounts to showing that $F_{\nu_{n}}^{-1}(\varphi_{n}(u))\underset{n\to+\infty}{\longrightarrow}F_{\nu}^{-1}(\varphi(u)).$ (4.6) It was shown in the proof of [24, Proposition 5.10] that $\Psi_{n+}$ converges uniformly to $\Psi_{+}$ on $[0,1]$ and for $dv$-almost every $v\in(0,1)$, $F_{\nu_{n}}^{-1}(\Psi_{n-}^{-1}(\Psi_{n+}(1)v))\underset{n\to+\infty}{\longrightarrow}F_{\nu}^{-1}(\Psi_{-}^{-1}(\Psi_{+}(1)v)).$ (4.7) Let $\mathcal{D}$ be the set of discontinuities of $F_{\nu}^{-1}\circ\Psi_{-}^{-1}$, which is at most countable by monotonicity.
Then [33, Proposition 4.10, Chapter 0] yields $0=\int_{\Psi_{+}(0)}^{\Psi_{+}(1)}\mathds{1}_{\mathcal{D}}(v)\,dv=\int_{0}^{1}\mathds{1}_{\\{\Psi_{+}(u)\in\mathcal{D}\\}}\,d\Psi_{+}(u).$ We deduce that for $du$-almost all $u\in\mathcal{U}_{+}$, $F_{\nu}^{-1}\circ\Psi_{-}^{-1}$ is continuous at $\Psi_{+}(u)$, which we suppose from now on. According to (4.7), there exists $\varepsilon>0$ arbitrarily small such that $\displaystyle F_{\nu_{n}}^{-1}\left(\Psi_{n-}^{-1}\left(\Psi_{n+}(1)\frac{\Psi_{+}(u)-\varepsilon}{\Psi_{+}(1)}\right)\right)$ $\displaystyle\underset{n\to+\infty}{\longrightarrow}F_{\nu}^{-1}\left(\Psi_{-}^{-1}\left(\Psi_{+}(u)-\varepsilon\right)\right)$ $\displaystyle\text{and}\quad F_{\nu_{n}}^{-1}\left(\Psi_{n-}^{-1}\left(\Psi_{n+}(1)\frac{\Psi_{+}(u)+\varepsilon}{\Psi_{+}(1)}\right)\right)$ $\displaystyle\underset{n\to+\infty}{\longrightarrow}F_{\nu}^{-1}\left(\Psi_{-}^{-1}\left(\Psi_{+}(u)+\varepsilon\right)\right).$ For $n$ large enough, we have $\Psi_{n+}(u)\in\left[\Psi_{n+}(1)\frac{\Psi_{+}(u)-\varepsilon}{\Psi_{+}(1)},\Psi_{n+}(1)\frac{\Psi_{+}(u)+\varepsilon}{\Psi_{+}(1)}\right]$.
Therefore, by monotonicity, we have $\displaystyle F_{\nu}^{-1}\left(\Psi_{-}^{-1}\left(\Psi_{+}(u)-\varepsilon\right)\right)$ $\displaystyle=\liminf_{n\to+\infty}F_{\nu_{n}}^{-1}\left(\Psi_{n-}^{-1}\left(\Psi_{n+}(1)\frac{\Psi_{+}(u)-\varepsilon}{\Psi_{+}(1)}\right)\right)$ $\displaystyle\leq\liminf_{n\to+\infty}F_{\nu_{n}}^{-1}(\Psi_{n-}^{-1}(\Psi_{n+}(u)))$ $\displaystyle\leq\limsup_{n\to+\infty}F_{\nu_{n}}^{-1}(\Psi_{n-}^{-1}(\Psi_{n+}(u)))$ $\displaystyle\leq\limsup_{n\to+\infty}F_{\nu_{n}}^{-1}\left(\Psi_{n-}^{-1}\left(\Psi_{n+}(1)\frac{\Psi_{+}(u)+\varepsilon}{\Psi_{+}(1)}\right)\right)$ $\displaystyle=F_{\nu}^{-1}\left(\Psi_{-}^{-1}\left(\Psi_{+}(u)+\varepsilon\right)\right).$ Since $F_{\nu}^{-1}\circ\Psi_{-}^{-1}$ is continuous at $\Psi_{+}(u)$, letting $\varepsilon$ vanish we obtain the convergence (4.6), which concludes the proof of (4.2) and therefore of (4.1). ∎ ###### Proposition 4.2. Let $\rho\geq 1$ and $\mu_{n},\nu_{n}\in\mathcal{P}_{\rho}(\mathbb{R})$, $n\in\mathbb{N}$, be in convex order and respectively converge to $\mu$ and $\nu$ in $\mathcal{W}_{\rho}$ as $n\to+\infty$. Suppose that asymptotically, any jump of $F_{\mu}$ is included in a jump of $F_{\mu_{n}}$, that is $\forall x\in\mathbb{R},\quad\mu(\\{x\\})>0\implies\exists(x_{n})_{n\in\mathbb{N}}\in\mathbb{R}^{\mathbb{N}},\quad F_{\mu_{n}}(x_{n})\wedge F_{\mu}(x)-F_{\mu_{n}}(x_{n}-)\vee F_{\mu}(x-)\underset{n\to+\infty}{\longrightarrow}\mu(\\{x\\}),$ (4.8) which is for instance satisfied if $\mu$ is non-atomic. Then $\mathcal{AW}_{\rho}(M_{n}^{IT},M^{IT})\underset{n\to+\infty}{\longrightarrow}0,$ (4.9) where $M_{n}^{IT}$, resp. $M^{IT}$, denotes the inverse transform martingale coupling between $\mu_{n}$ and $\nu_{n}$, resp. $\mu$ and $\nu$. ###### Remark 4.3. If (4.8) is not satisfied, then (4.9) may not hold. Indeed, for $n\in\mathbb{N}^{*}$, let $\mu_{n}=\mathcal{U}((-1/n,1/n))$, $\mu=\delta_{0}$ and $\nu_{n}=\nu=\mathcal{U}((-1,1))$.
We trivially have $M^{IT}(dx,dy)=\mu(dx)\,\nu(dy)$, so $\mathcal{AW}_{1}(M_{n}^{IT},M^{IT})\geq\int_{x\in\mathbb{R}}\mathcal{W}_{1}((M_{n}^{IT})_{x},\nu)\,\mu_{n}(dx)$. However, for $n\in\mathbb{N}^{*}$, since $F_{\mu_{n}}$ is continuous, we have that for all $x\in\mathbb{R}$, $(M_{n}^{IT})_{x}=(\widetilde{m}^{IT}_{n})_{F_{\mu_{n}}(x)}$, where according to (3.10), $((\widetilde{m}^{IT}_{n})_{u}(dy))_{u\in(0,1)}$ is a probability kernel such that for all $u\in(0,1)$, there exist $a,b\in[-1,1]$ and $p\in[0,1]$ which satisfy $(\widetilde{m}^{IT}_{n})_{u}=p\delta_{a}+(1-p)\delta_{b}$. Using the fact that the comonotonic coupling is optimal for the $\mathcal{W}_{1}$-distance, we get $\mathcal{W}_{1}(p\delta_{a}+(1-p)\delta_{b},\nu)=\int_{0}^{p}|a+1-2u|\,du+\int_{p}^{1}|b+1-2u|\,du.$ It is easy to show that $\int_{0}^{p}|a+1-2u|\,du$ is equal to $p(a+1-p)\geq p^{2}$ if $(a+1)/2>p$, and equal to $(a+1)^{2}/2-p(a+1)+p^{2}\leq p^{2}$ if $(a+1)/2\leq p$. Therefore, one can readily show that $\int_{0}^{p}|a+1-2u|\,du\geq p^{2}/2$, attained for $a=p-1$. Similarly, we have $\int_{p}^{1}|b+1-2u|\,du\geq(1-p)^{2}/2$, attained for $b=p$. We deduce that for all $(a,b,p)\in\mathbb{R}^{2}\times[0,1]$, $\mathcal{W}_{1}(p\delta_{a}+(1-p)\delta_{b},\nu)\geq(p^{2}+(1-p)^{2})/2\geq 1/4$, attained for $p=1/2$, hence $\int_{x\in\mathbb{R}}\mathcal{W}_{1}((M^{IT}_{n})_{x},\nu)\,\mu_{n}(dx)\geq 1/4$, which proves that (4.9) is not satisfied. ###### Proof of Proposition 4.2. By Lemma 5.3 below we may suppose without loss of generality that $\rho=1$.
We have $\displaystyle\mathcal{AW}_{1}(M^{IT}_{n},M^{IT})$ $\displaystyle\leq\int_{0}^{1}\left(|F_{\mu_{n}}^{-1}(u)-F_{\mu}^{-1}(u)|+\mathcal{W}_{1}\left((M^{IT}_{n})_{F_{\mu_{n}}^{-1}(u)},M^{IT}_{F_{\mu}^{-1}(u)}\right)\right)\,du$ $\displaystyle=\mathcal{W}_{1}(\mu_{n},\mu)+\int_{0}^{1}\mathcal{W}_{1}\left((M^{IT}_{n})_{F_{\mu_{n}}^{-1}(u)},M^{IT}_{F_{\mu}^{-1}(u)}\right)\,du.$ For $(x,v)\in\mathbb{R}\times[0,1]$ and $n\in\mathbb{N}$, let $\theta(x,v)=F_{\mu}(x-)+v\mu(\\{x\\})$, $\theta_{n}(x,v)=F_{\mu_{n}}(x-)+v\mu_{n}(\\{x\\})$ and $(M_{n})_{x}(dy)=\int_{v=0}^{1}\widetilde{m}^{IT}_{\theta_{n}(x,v)}(dy)\,dv.$ Then (1.15) and the triangle inequality yield $\displaystyle\int_{(0,1)}\mathcal{W}_{1}\left((M^{IT}_{n})_{F_{\mu_{n}}^{-1}(u)},M^{IT}_{F_{\mu}^{-1}(u)}\right)\,du$ $\displaystyle\leq$ $\displaystyle\int_{(0,1)}\left(\mathcal{W}_{1}\left((M^{IT}_{n})_{F_{\mu_{n}}^{-1}(u)},(M_{n})_{F_{\mu_{n}}^{-1}(u)}\right)+\mathcal{W}_{1}\left((M_{n})_{F_{\mu_{n}}^{-1}(u)},M^{IT}_{F_{\mu}^{-1}(u)}\right)\right)\,du$ $\displaystyle\leq$ $\displaystyle\int_{(0,1)^{2}}\left(\mathcal{W}_{1}\left((\widetilde{m}^{IT}_{n})_{\theta_{n}(F_{\mu_{n}}^{-1}(u),v)},\widetilde{m}^{IT}_{\theta_{n}(F_{\mu_{n}}^{-1}(u),v)}\right)+\mathcal{W}_{1}\left(\widetilde{m}^{IT}_{\theta_{n}(F_{\mu_{n}}^{-1}(u),v)},\widetilde{m}^{IT}_{\theta(F_{\mu}^{-1}(u),v)}\right)\right)\,du\,dv$ $\displaystyle=$ $\displaystyle\int_{(0,1)}\mathcal{W}_{1}\left((\widetilde{m}^{IT}_{n})_{u},\widetilde{m}^{IT}_{u}\right)\,du+\int_{(0,1)^{2}}\mathcal{W}_{1}\left(\widetilde{m}^{IT}_{\theta_{n}(F_{\mu_{n}}^{-1}(u),v)},\widetilde{m}^{IT}_{\theta(F_{\mu}^{-1}(u),v)}\right)\,du\,dv.$ In order to show (4.9), it is therefore sufficient by (4.1) to prove that the second summand in the right-hand side vanishes when $n$ goes to $+\infty$. This is achieved in two steps.
First, we prove that, on the probability space $(0,1)^{2}$ endowed with the Lebesgue measure, the family of random variables $\left(\mathcal{W}_{1}\left(\widetilde{m}^{IT}_{\theta_{n}(F_{\mu_{n}}^{-1}(u),v)},\widetilde{m}^{IT}_{\theta(F_{\mu}^{-1}(u),v)}\right)\right)_{n\in\mathbb{N}}$ is uniformly integrable. Second, we show for $du\,dv$-almost every $(u,v)\in(0,1)^{2}$ that $\mathcal{W}_{1}\left(\widetilde{m}^{IT}_{\theta_{n}(F_{\mu_{n}}^{-1}(u),v)},\widetilde{m}^{IT}_{\theta(F_{\mu}^{-1}(u),v)}\right)\underset{n\to+\infty}{\longrightarrow}0.$ (4.10) Let us begin with the uniform integrability. For $(u,v)\in(0,1)^{2}$, we can estimate $\mathcal{W}_{1}\left(\widetilde{m}^{IT}_{\theta_{n}(F_{\mu_{n}}^{-1}(u),v)},\widetilde{m}^{IT}_{\theta(F_{\mu}^{-1}(u),v)}\right)\leq\int_{\mathbb{R}}|y|\,\left(\widetilde{m}^{IT}_{\theta_{n}(F_{\mu_{n}}^{-1}(u),v)}(dy)+\widetilde{m}^{IT}_{\theta(F_{\mu}^{-1}(u),v)}(dy)\right).$ For each nonnegative measurable function $f:\mathbb{R}\to\mathbb{R}$, we have by (1.15) $\displaystyle\int_{(0,1)^{2}}f\left(\int_{\mathbb{R}}|y|\,\widetilde{m}^{IT}_{\theta_{n}(F_{\mu_{n}}^{-1}(u),v)}(dy)\right)\,du\,dv$ $\displaystyle=\int_{(0,1)^{2}}f\left(\int_{\mathbb{R}}|y|\,\widetilde{m}^{IT}_{\theta(F_{\mu}^{-1}(u),v)}(dy)\right)\,du\,dv$ $\displaystyle=\int_{(0,1)}f\left(\int_{\mathbb{R}}|y|\,\widetilde{m}^{IT}_{u}(dy)\right)\,du.$ According to [24, Lemma 2.6], $M^{IT}$ is the image of $\mathds{1}_{(0,1)}(u)\,du\,\widetilde{m}^{IT}_{u}(dy)$ by $(u,y)\mapsto(F_{\mu}^{-1}(u),y)$ so that the second marginal of this measure is $\nu(dy)$, hence the random variables $\left(\mathcal{W}_{1}\left(\widetilde{m}^{IT}_{\theta_{n}(F_{\mu_{n}}^{-1}(u),v)},\widetilde{m}^{IT}_{\theta(F_{\mu}^{-1}(u),v)}\right)\right)_{n\in\mathbb{N}}$ are uniformly integrable. Next, we show the $du\,dv$-almost everywhere pointwise convergence of (4.10). 
Let $w\in(0,1)$ be in the set of continuity points of $F_{\mu}^{-1}$, $F_{\nu}^{-1}$, $F_{\nu}^{-1}\circ\Psi_{-}^{-1}\circ\Psi_{+}$ and $F_{\nu}^{-1}\circ\Psi_{+}^{-1}\circ\Psi_{-}$. Recall that we have $\widetilde{m}^{IT}_{w}=p(w)\delta_{F_{\nu}^{-1}(\varphi(w))}+(1-p(w))\delta_{F_{\nu}^{-1}(w)}\quad\mbox{ with }\quad p(w)=\mathds{1}_{\\{F_{\mu}^{-1}(w)\neq F_{\nu}^{-1}(w)\\}}\frac{F_{\mu}^{-1}(w)-F_{\nu}^{-1}(w)}{F_{\nu}^{-1}(\varphi(w))-F_{\nu}^{-1}(w)}\in[0,1].$ Let $(w_{n})_{n\in\mathbb{N}}$ be a sequence with values in $(0,1)$ converging to $w$ and let us show that $\mathcal{W}_{1}(\widetilde{m}^{IT}_{w_{n}},\widetilde{m}^{IT}_{w})\underset{n\to+\infty}{\longrightarrow}0.$ (4.11) Suppose first that $w\in\mathcal{U}_{0}$ i.e. $F_{\mu}^{-1}(w)=F_{\nu}^{-1}(w)$. Then a computation similar to (4.5) yields $\mathcal{W}_{1}(\widetilde{m}^{IT}_{w_{n}},\widetilde{m}^{IT}_{w})\leq|F_{\mu}^{-1}(w_{n})-F_{\mu}^{-1}(w)|+2|F_{\nu}^{-1}(w_{n})-F_{\nu}^{-1}(w)|,$ where the right-hand side goes to $0$ as $n\to+\infty$ by continuity of $F_{\mu}^{-1}$ and $F_{\nu}^{-1}$ at $w$. Suppose next that $w\in\mathcal{U}_{+}$ i.e. $F_{\mu}^{-1}(w)>F_{\nu}^{-1}(w)$, the case $w\in\mathcal{U}_{-}$ being treated in a similar way. Then by continuity of $F_{\mu}^{-1}$ and $F_{\nu}^{-1}$ at $w$, $w_{n}\in\mathcal{U}_{+}$ for $n$ large enough so that without loss of generality $p(w)=\frac{F_{\mu}^{-1}(w)-F_{\nu}^{-1}(w)}{F_{\nu}^{-1}(\varphi(w))-F_{\nu}^{-1}(w)},\quad p(w_{n})=\frac{F_{\mu}^{-1}(w_{n})-F_{\nu}^{-1}(w_{n})}{F_{\nu}^{-1}(\varphi(w_{n}))-F_{\nu}^{-1}(w_{n})},$ $\varphi(w)=\Psi_{-}^{-1}(\Psi_{+}(w))$, and $\varphi(w_{n})=\Psi_{-}^{-1}(\Psi_{+}(w_{n}))$, hence (4.11) follows from the continuity at $w$ of $F_{\mu}^{-1}$, $F_{\nu}^{-1}$ and $F_{\nu}^{-1}\circ\Psi_{-}^{-1}\circ\Psi_{+}$.
Since the sets of discontinuity points of the non-decreasing functions $F_{\mu}^{-1}$, $F_{\nu}^{-1}$, $F_{\nu}^{-1}\circ\Psi_{-}^{-1}\circ\Psi_{+}$ and $F_{\nu}^{-1}\circ\Psi_{+}^{-1}\circ\Psi_{-}$ are at most countable, we deduce by (1.15) and (4.11) that it is sufficient to show for $du\,dv$-almost every $(u,v)\in(0,1)^{2}$ that $\theta_{n}(F_{\mu_{n}}^{-1}(u),v)\underset{n\to+\infty}{\longrightarrow}\theta(F_{\mu}^{-1}(u),v),$ or equivalently $(F_{\mu_{n}}(x^{n}_{u}-),F_{\mu_{n}}(x^{n}_{u}))\underset{n\to+\infty}{\longrightarrow}(F_{\mu}(x_{u}-),F_{\mu}(x_{u}))$ (4.12) for $du$-almost every $u\in(0,1)$, where $x_{u}:=F_{\mu}^{-1}(u)$ and $x^{n}_{u}:=F_{\mu_{n}}^{-1}(u)$. Let $u\in(0,1)$. Since, by monotonicity, $u\mapsto(F_{\mu}^{-1}(u),F_{\nu}^{-1}(u))$ is continuous $du$-almost everywhere on $(0,1)$ and, then, the weak convergence implies that $(F_{\mu_{n}}^{-1}(u),F_{\nu_{n}}^{-1}(u))\underset{n\to+\infty}{\longrightarrow}(F_{\mu}^{-1}(u),F_{\nu}^{-1}(u)),$ (4.13) we suppose without loss of generality that this convergence holds. For $n\in\mathbb{N}$, define $l_{n}=\inf_{k\geq n}x^{k}_{u}$ and $r_{n}=\sup_{k\geq n}x^{k}_{u}$. Since (4.13) holds, we find that $(l_{n})_{n\in\mathbb{N}}$, resp. $(r_{n})_{n\in\mathbb{N}}$, is a nondecreasing, resp. nonincreasing, sequence converging to $x_{u}$.
Due to right continuity of $F_{\mu}$ and left continuity of $x\mapsto F_{\mu}(x-)$ we have $F_{\mu}(x_{u}-)=\lim_{p\to+\infty}F_{\mu}(l_{p}-)\quad\text{and}\quad\lim_{p\to+\infty}F_{\mu}(r_{p})=F_{\mu}(x_{u}).$ By Portmanteau’s theorem and monotonicity of cumulative distribution functions we have $F_{\mu}(l_{p}-)\leq\liminf_{n\to+\infty}F_{\mu_{n}}(l_{p}-)\leq\liminf_{n\to+\infty}F_{\mu_{n}}(x^{n}_{u}-)\leq\limsup_{n\to+\infty}F_{\mu_{n}}(x^{n}_{u})\leq\limsup_{n\to+\infty}F_{\mu_{n}}(r_{p})\leq F_{\mu}(r_{p}).$ By taking the limit $p\to+\infty$, we find $F_{\mu}(x_{u}-)\leq\liminf_{n\to+\infty}F_{\mu_{n}}(x^{n}_{u}-)\leq\limsup_{n\to+\infty}F_{\mu_{n}}(x^{n}_{u})\leq F_{\mu}(x_{u}).$ This implies (4.12) as soon as $F_{\mu}$ is continuous at $x_{u}$. Suppose now that $F_{\mu}$ is discontinuous at $x_{u}$. Since $\mu$ has countably many atoms, we may suppose without loss of generality that $u\in(F_{\mu}(x_{u}-),F_{\mu}(x_{u}))$. Let $(x_{n})_{n\in\mathbb{N}}\in\mathbb{R}^{\mathbb{N}}$ be the sequence associated to $x=x_{u}$ by (4.8). For $n$ large enough, we have $u\in(F_{\mu_{n}}(x_{n}-),F_{\mu_{n}}(x_{n}))$, hence $x_{n}=x^{n}_{u}$. Using the assumption made in (4.8), we get $\displaystyle\liminf_{n\to+\infty}F_{\mu_{n}}(x^{n}_{u})$ $\displaystyle=\liminf_{n\to+\infty}(F_{\mu_{n}}(x^{n}_{u})\wedge F_{\mu}(x_{u}))$ $\displaystyle=\liminf_{n\to+\infty}(F_{\mu_{n}}(x^{n}_{u})\wedge F_{\mu}(x_{u})-F_{\mu_{n}}(x^{n}_{u}-)\vee F_{\mu}(x_{u}-)+F_{\mu_{n}}(x^{n}_{u}-)\vee F_{\mu}(x_{u}-))$ $\displaystyle=\mu(\\{x_{u}\\})+\liminf_{n\to+\infty}(F_{\mu_{n}}(x^{n}_{u}-)\vee F_{\mu}(x_{u}-))\geq F_{\mu}(x_{u}),$ hence $F_{\mu_{n}}(x^{n}_{u})\underset{n\to+\infty}{\longrightarrow}F_{\mu}(x_{u})$. Similarly, $F_{\mu_{n}}(x^{n}_{u}-)\underset{n\to+\infty}{\longrightarrow}F_{\mu}(x_{u}-)$, which shows (4.12) and concludes the proof. 
∎ ## 5 Appendix: adapted Wasserstein distances A useful point of view is the following: for all $\mu,\nu\in\mathcal{P}_{\rho}(\mathbb{R})$ and $\pi\in\Pi(\mu,\nu)$, let $J(\pi)$ be the probability measure on $\mathbb{R}\times\mathcal{P}_{\rho}(\mathbb{R})$ defined by $J(\pi)(dx,dp)=\mu(dx)\,\delta_{\pi_{x}}(dp).$ (5.1) Then one can readily show that for any $\mu^{\prime},\nu^{\prime}\in\mathcal{P}_{\rho}(\mathbb{R})$ and $\pi^{\prime}\in\Pi(\mu^{\prime},\nu^{\prime})$, $\mathcal{AW}_{\rho}(\pi,\pi^{\prime})=\mathcal{W}_{\rho}(J(\pi),J(\pi^{\prime})),$ (5.2) where $\mathbb{R}\times\mathcal{P}_{\rho}(\mathbb{R})$ is of course endowed with the product metric $((x,p),(x^{\prime},p^{\prime}))\mapsto(|x-x^{\prime}|^{\rho}+\mathcal{W}_{\rho}^{\rho}(p,p^{\prime}))^{1/\rho}$. Therefore, the topology induced by $\mathcal{AW}_{\rho}$ coincides with the initial topology with respect to $J$. This allows us to easily derive the two following lemmas. ###### Lemma 5.1. Let $\rho\geq 1$, $\mu,\nu,\mu^{\prime},\nu^{\prime}\in\mathcal{P}_{\rho}(\mathbb{R})$ and $\pi\in\Pi(\mu,\nu),\pi^{\prime}\in\Pi(\mu^{\prime},\nu^{\prime})$. Then there exists a coupling $\chi\in\Pi(\mu,\mu^{\prime})$ optimal for $\mathcal{AW}_{\rho}(\pi,\pi^{\prime})$, i.e. such that $\mathcal{AW}_{\rho}^{\rho}(\pi,\pi^{\prime})=\int_{\mathbb{R}\times\mathbb{R}}\left(|x-x^{\prime}|^{\rho}+\mathcal{W}_{\rho}^{\rho}(\pi_{x},\pi^{\prime}_{x^{\prime}})\right)\,\chi(dx,dx^{\prime}).$ ###### Remark 5.2. A similar statement holds when $\pi,\pi^{\prime}$ have three marginals.
In that case, writing $\pi(dx,dy,dz)=\mu(dx)\,\pi_{x}(dy,dz)$ and $\pi^{\prime}(dx^{\prime},dy^{\prime},dz^{\prime})=\mu^{\prime}(dx^{\prime})\,\pi^{\prime}_{x^{\prime}}(dy^{\prime},dz^{\prime})$ we define $\mathcal{AW}_{\rho}^{\rho}(\pi,\pi^{\prime})=\inf_{\chi\in\Pi(\mu,\mu^{\prime})}\int_{\mathbb{R}\times\mathbb{R}}\left(|x-x^{\prime}|^{\rho}+\mathcal{AW}_{\rho}^{\rho}(\pi_{x},\pi^{\prime}_{x^{\prime}})\right)\,\chi(dx,dx^{\prime}).$ Let $K(\pi)$ be the probability measure on $\mathbb{R}\times\mathcal{P}_{\rho}(\mathbb{R}\times\mathcal{P}_{\rho}(\mathbb{R}))$ defined by $K(\pi)(dx,dp)=\mu(dx)\,\delta_{J(\pi_{x})}(dp).$ Then one can readily show that $\mathcal{AW}_{\rho}(\pi,\pi^{\prime})=\mathcal{W}_{\rho}(K(\pi),K(\pi^{\prime})),$ where $\mathbb{R}\times\mathcal{P}_{\rho}(\mathbb{R}\times\mathcal{P}_{\rho}(\mathbb{R}))$ is of course endowed with the product metric $((x,p),(x^{\prime},p^{\prime}))\mapsto\left(|x-x^{\prime}|^{\rho}+\mathcal{W}_{\rho}^{\rho}(p,p^{\prime})\right)^{1/\rho}$. Similarly to Lemma 5.1, the latter characterisation allows us to easily see that there exists a coupling $\chi\in\Pi(\mu,\mu^{\prime})$ optimal for $\mathcal{AW}_{\rho}(\pi,\pi^{\prime})$. ###### Proof of Lemma 5.1. Since $\mathbb{R}$ is Polish, so are the set $\mathcal{P}_{\rho}(\mathbb{R})$ and the set of probability measures on $\mathbb{R}\times\mathcal{P}_{\rho}(\mathbb{R})$. Hence there exists a coupling $P\in\Pi(J(\pi),J(\pi^{\prime}))$ optimal for $\mathcal{W}_{\rho}(J(\pi),J(\pi^{\prime}))$, i.e.
$\mathcal{W}_{\rho}^{\rho}(J(\pi),J(\pi^{\prime}))=\int_{\mathbb{R}\times\mathcal{P}_{\rho}(\mathbb{R})\times\mathbb{R}\times\mathcal{P}_{\rho}(\mathbb{R})}\left(|x-x^{\prime}|^{\rho}+\mathcal{W}_{\rho}^{\rho}(p,p^{\prime})\right)\,P(dx,dp,dx^{\prime},dp^{\prime}).$ Since $J(\pi)$ and $J(\pi^{\prime})$ are concentrated on graphs of measurable maps, it is clear that $P(dx,dp,dx^{\prime},dp^{\prime})=\chi(dx,dx^{\prime})\,\delta_{\pi_{x}}(dp)\,\delta_{\pi^{\prime}_{x^{\prime}}}(dp^{\prime})$ for $\chi(dx,dx^{\prime})=\int_{(p,p^{\prime})\in\mathcal{P}_{\rho}(\mathbb{R})\times\mathcal{P}_{\rho}(\mathbb{R})}P(dx,dp,dx^{\prime},dp^{\prime})\in\Pi(\mu,\mu^{\prime})$. Then $\displaystyle\mathcal{AW}_{\rho}^{\rho}(\pi,\pi^{\prime})$ $\displaystyle=\mathcal{W}_{\rho}^{\rho}(J(\pi),J(\pi^{\prime}))$ $\displaystyle=\int_{\mathbb{R}\times\mathcal{P}_{\rho}(\mathbb{R})\times\mathbb{R}\times\mathcal{P}_{\rho}(\mathbb{R})}\left(|x-x^{\prime}|^{\rho}+\mathcal{W}_{\rho}^{\rho}(p,p^{\prime})\right)\,\chi(dx,dx^{\prime})\,\delta_{\pi_{x}}(dp)\,\delta_{\pi^{\prime}_{x^{\prime}}}(dp^{\prime})$ $\displaystyle=\int_{\mathbb{R}\times\mathbb{R}}\left(|x-x^{\prime}|^{\rho}+\mathcal{W}_{\rho}^{\rho}(\pi_{x},\pi^{\prime}_{x^{\prime}})\right)\,\chi(dx,dx^{\prime}),$ hence $\chi$ is optimal for $\mathcal{AW}_{\rho}(\pi,\pi^{\prime})$. ∎ ###### Lemma 5.3. Let $\rho\geq 1$, $\mu,\nu\in\mathcal{P}_{\rho}(\mathbb{R})$, $(\mu_{n})_{n\in\mathbb{N}},(\nu_{n})_{n\in\mathbb{N}}\in\mathcal{P}_{\rho}(\mathbb{R})^{\mathbb{N}}$, $\pi\in\Pi(\mu,\nu)$ and $(\pi_{n})_{n\in\mathbb{N}}\in\prod_{n\in\mathbb{N}}\Pi(\mu_{n},\nu_{n})$. Then $\mathcal{AW}_{\rho}(\pi_{n},\pi)\underset{n\to+\infty}{\longrightarrow}0\iff\mathcal{AW}_{1}(\pi_{n},\pi)+\mathcal{W}_{\rho}(\mu_{n},\mu)+\mathcal{W}_{\rho}(\nu_{n},\nu)\underset{n\to+\infty}{\longrightarrow}0.$ (5.3) ###### Proof.
Clearly, $\int_{\mathbb{R}\times\mathcal{P}_{\rho}(\mathbb{R})}|x|^{\rho}\,J(\pi)(dx,dp)=\int_{\mathbb{R}}|x|^{\rho}\,\mu(dx)$, and $\int_{\mathbb{R}\times\mathcal{P}_{\rho}(\mathbb{R})}\mathcal{W}_{\rho}^{\rho}(p,\delta_{0})\,J(\pi)(dx,dp)=\int_{\mathbb{R}}\int_{\mathbb{R}}|y|^{\rho}\,\pi_{x}(dy)\,\mu(dx)=\int_{\mathbb{R}}|y|^{\rho}\,\nu(dy),$ so $\pi$ and $J(\pi)$ have equal $\rho$-th moments. Since convergence in $\mathcal{W}_{\rho}$ is equivalent to convergence in $\mathcal{W}_{1}$ coupled with convergence of the $\rho$-th moments, we deduce from (5.2) that $\mathcal{AW}_{\rho}(\pi_{n},\pi)\underset{n\to+\infty}{\longrightarrow}0\iff\mathcal{AW}_{1}(\pi_{n},\pi)+\left|\int_{\mathbb{R}}|x|^{\rho}\,\mu_{n}(dx)-\int_{\mathbb{R}}|x|^{\rho}\,\mu(dx)\right|+\left|\int_{\mathbb{R}}|y|^{\rho}\,\nu_{n}(dy)-\int_{\mathbb{R}}|y|^{\rho}\,\nu(dy)\right|\underset{n\to+\infty}{\longrightarrow}0.$ Since $\mathcal{W}_{1}\leq\mathcal{AW}_{1}$ and $\mathcal{W}_{1}$-convergence of the couplings implies that of their respective marginals, using again the fact that convergence in $\mathcal{W}_{\rho}$ is equivalent to convergence in $\mathcal{W}_{1}$ coupled with convergence of the $\rho$-th moments, we conclude that the right-hand side is equivalent to $\mathcal{AW}_{1}(\pi_{n},\pi)+\mathcal{W}_{\rho}(\mu_{n},\mu)+\mathcal{W}_{\rho}(\nu_{n},\nu)\underset{n\to+\infty}{\longrightarrow}0,$ which proves (5.3).
∎ For $\pi=\mu\times\pi_{x},\pi^{\prime}=\mu^{\prime}\times\pi^{\prime}_{x^{\prime}}\in\mathcal{P}_{\rho}(\mathbb{R}\times\mathbb{R})$, their nested Wasserstein distance of order $\rho$ is defined by $\mathcal{W}_{\rho}^{nd}(\pi,\pi^{\prime}):=\inf_{\eta\in\Pi_{bc}(\pi,\pi^{\prime})}\left(\int_{\mathbb{R}\times\mathbb{R}\times\mathbb{R}\times\mathbb{R}}\left(|x-x^{\prime}|^{\rho}+|y-y^{\prime}|^{\rho}\right)\,\eta(dx,dy,dx^{\prime},dy^{\prime})\right)^{1/\rho},$ where $\Pi_{bc}(\pi,\pi^{\prime})$ denotes the set of bicausal couplings between $\pi$ and $\pi^{\prime}$: a coupling $\eta\in\Pi(\pi,\pi^{\prime})$ is called bicausal iff $\int_{y\in\mathbb{R}}\eta(dx,dy,dx^{\prime},dy^{\prime})=\pi^{\prime}(dx^{\prime},dy^{\prime})\,\chi_{x^{\prime}}(dx)\quad\textrm{and}\quad\int_{y^{\prime}\in\mathbb{R}}\eta(dx,dy,dx^{\prime},dy^{\prime})=\pi(dx,dy)\,\chi_{x}(dx^{\prime}),$ (5.4) where with a slight abuse of notation, $\chi(dx,dx^{\prime})=\int_{(y,y^{\prime})\in\mathbb{R}\times\mathbb{R}}\eta(dx,dy,dx^{\prime},dy^{\prime})=\mu(dx)\,\chi_{x}(dx^{\prime})=\mu^{\prime}(dx^{\prime})\,\chi_{x^{\prime}}(dx).$ (5.5) ###### Lemma 5.4. Let $\pi,\pi^{\prime}$ be two probability measures on $\mathbb{R}\times\mathbb{R}$ with respective first marginals $\mu$ and $\mu^{\prime}$. Let $\eta\in\Pi(\pi,\pi^{\prime})$, $\chi$ be defined by (5.5) and $(\gamma_{(x,x^{\prime})}(dy,dy^{\prime}))_{(x,x^{\prime})\in\mathbb{R}\times\mathbb{R}}$ be a probability kernel such that $\eta(dx,dy,dx^{\prime},dy^{\prime})=\chi(dx,dx^{\prime})\,\gamma_{(x,x^{\prime})}(dy,dy^{\prime})$.
Then $\eta\in\Pi_{bc}(\pi,\pi^{\prime})$ iff $\chi(dx,dx^{\prime})\textrm{-almost everywhere},\quad\gamma_{(x,x^{\prime})}\in\Pi(\pi_{x},\pi^{\prime}_{x^{\prime}}).$ (5.6) We deduce from Lemma 5.4 that $\displaystyle\begin{split}\left(\mathcal{W}_{\rho}^{nd}(\pi,\pi^{\prime})\right)^{\rho}&=\inf_{\chi\in\Pi(\mu,\mu^{\prime})}\int_{\mathbb{R}\times\mathbb{R}}\left(|x-x^{\prime}|^{\rho}+\inf_{\gamma_{(x,x^{\prime})}\in\Pi(\pi_{x},\pi^{\prime}_{x^{\prime}})}\int_{\mathbb{R}\times\mathbb{R}}|y-y^{\prime}|^{\rho}\,\gamma_{(x,x^{\prime})}(dy,dy^{\prime})\right)\,\chi(dx,dx^{\prime})\\\ &=\inf_{\chi\in\Pi(\mu,\mu^{\prime})}\int_{\mathbb{R}\times\mathbb{R}}\left(|x-x^{\prime}|^{\rho}+\mathcal{W}_{\rho}^{\rho}(\pi_{x},\pi^{\prime}_{x^{\prime}})\right)\,\chi(dx,dx^{\prime})=\mathcal{AW}_{\rho}^{\rho}(\pi,\pi^{\prime}),\end{split}$ (5.7) hence nested and adapted Wasserstein distances coincide. ###### Proof of Lemma 5.4. We can rephrase (5.6) as $\chi(dx,dx^{\prime})\int_{y^{\prime}\in\mathbb{R}}\gamma_{(x,x^{\prime})}(dy,dy^{\prime})=\chi(dx,dx^{\prime})\,\pi_{x}(dy)\quad\textrm{and}\quad\chi(dx,dx^{\prime})\int_{y\in\mathbb{R}}\gamma_{(x,x^{\prime})}(dy,dy^{\prime})=\chi(dx,dx^{\prime})\,\pi^{\prime}_{x^{\prime}}(dy^{\prime}).$ (5.8) Since $\chi(dx,dx^{\prime})\int_{y^{\prime}\in\mathbb{R}}\gamma_{(x,x^{\prime})}(dy,dy^{\prime})=\int_{y^{\prime}\in\mathbb{R}}\eta(dx,dy,dx^{\prime},dy^{\prime})$, $\chi(dx,dx^{\prime})\int_{y\in\mathbb{R}}\gamma_{(x,x^{\prime})}(dy,dy^{\prime})=\int_{y\in\mathbb{R}}\eta(dx,dy,dx^{\prime},dy^{\prime})$, $\chi(dx,dx^{\prime})\,\pi_{x}(dy)=\pi(dx,dy)\,\chi_{x}(dx^{\prime})$ and $\chi(dx,dx^{\prime})\,\pi^{\prime}_{x^{\prime}}(dy^{\prime})=\pi^{\prime}(dx^{\prime},dy^{\prime})\,\chi_{x^{\prime}}(dx)$, we deduce that (5.8) and therefore (5.6) is equivalent to (5.4), that is $\eta$ is bicausal. ∎
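The identification of the adapted and nested distances in 5.7 also makes $\mathcal{AW}_{\rho}$ easy to evaluate for finitely supported couplings: when both first marginals are uniform over $n$ atoms, the outer infimum in 5.7 is a finite linear program whose optimum is attained at a permutation (Birkhoff–von Neumann). The following minimal Python sketch (an illustration, not part of the paper) evaluates $\mathcal{AW}_{1}$ on the classical example of couplings that converge in $\mathcal{W}_{1}$ but not in $\mathcal{AW}_{1}$.

```python
import itertools

def w1(p, q):
    # W_1 on the line between uniform empirical measures of equal size:
    # the optimal coupling is the monotone (sorted) pairing
    return sum(abs(a - b) for a, b in zip(sorted(p), sorted(q))) / len(p)

def aw1(pi, pi2):
    # pi, pi2: lists of (x_atom, conditional support), each x-atom of mass 1/n;
    # nested formula 5.7: the outer transport over Pi(mu, mu') is attained at a
    # permutation because both first marginals are uniform (Birkhoff)
    n = len(pi)
    return min(
        sum(abs(pi[i][0] - pi2[s[i]][0]) + w1(pi[i][1], pi2[s[i]][1])
            for i in range(n)) / n
        for s in itertools.permutations(range(n)))

# pi_n couples mu_n = (delta_{1/m} + delta_{-1/m})/2 with nu = (delta_1 + delta_{-1})/2
# through the kernel delta_{sign(x)}; pi = delta_0 x nu (the atom at 0 is
# duplicated to keep uniform weights).  W_1(pi_n, pi) <= 1/m, yet:
m = 10
pi_n = [(1 / m, [1, 1]), (-1 / m, [-1, -1])]
pi = [(0, [-1, 1]), (0, [-1, 1])]
print(aw1(pi_n, pi))  # -> 1.1, i.e. 1 + 1/m: bounded away from 0
```

Every outer pairing must send an atom of $\mu_{n}$ to $0$ and $\mathcal{W}_{1}(\delta_{\pm 1},\tfrac{1}{2}(\delta_{-1}+\delta_{1}))=1$, so $\mathcal{AW}_{1}(\pi_{n},\pi)=1+1/m$ while the plain Wasserstein distance vanishes as $m\to\infty$, consistently with $\mathcal{W}_{1}\leq\mathcal{AW}_{1}$.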
# On the number of critical points of stable solutions in bounded strip-like domains Fabio De Regibus Dipartimento di Matematica, Università di Roma “La Sapienza”, P.le A. Moro 2 - 00185 Roma, Italy, e-mail: <EMAIL_ADDRESS> and Massimo Grossi Dipartimento di Matematica, Università di Roma “La Sapienza”, P.le A. Moro 2 - 00185 Roma, Italy, e-mail: <EMAIL_ADDRESS> ###### Abstract. In this paper we show that there exists a family of domains $\Omega_{\varepsilon}\subseteq\mathbb{R}^{N}$ with $N\geq 2$, such that the _stable_ solution of the problem $\begin{cases}-\Delta u=g(u)&\hbox{in }\Omega_{\varepsilon}\\\ u>0&\hbox{in }\Omega_{\varepsilon}\\\ u=0&\hbox{on }\partial\Omega_{\varepsilon}\end{cases}$ admits $k$ critical points with $k\geq 2$. Moreover the sets $\Omega_{\varepsilon}$ are star-shaped and “close” to a strip as $\varepsilon\to 0$. Next, if $g(u)\equiv 1$ and $N\geq 3$ we exhibit a family of domains $\Omega_{\varepsilon}$ with positive mean curvature and solutions $u_{\varepsilon}$ which have $k$ critical points with $k\geq 2$. In this case, the domains $\Omega_{\varepsilon}$ turn out to be “close” to a cylinder as $\varepsilon\to 0$. This work was supported by INDAM-GNAMPA. ## 1\. Introduction and main results In this paper we investigate the number of critical points of solutions $u$ to the following problem (1.1) $\begin{cases}-\Delta u=g(u)&\hbox{in }\Omega\\\ u>0&\hbox{in }\Omega\\\ u=0&\hbox{on }\partial\Omega\end{cases}$ where $\Omega$ is a smooth bounded domain in $\mathbb{R}^{N}$, $N\geq 2$ and $g$ is a smooth nonlinearity. It is known that this problem strongly depends on the geometry of $\Omega$. A much studied case is when $\Omega$ is _convex_: in this case uniqueness of the critical point is expected. One of the first results in this direction is the one in [ML71], where the strict convexity of the level sets is proved for $N=2$ and $g\equiv 1$. Of course this property implies the uniqueness of the critical point.
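For instance, in the unit disk ($N=2$, $g\equiv 1$) the solution is the explicit radial torsion function, and its critical point is unique. A quick symbolic check with sympy (a sketch of ours, not taken from the paper):

```python
import sympy as sp

x, y = sp.symbols('x y')
u = (1 - x**2 - y**2) / 4  # torsion function of the unit disk
# u solves -Delta u = 1 and vanishes on the boundary x^2 + y^2 = 1
assert sp.simplify(-(sp.diff(u, x, 2) + sp.diff(u, y, 2)) - 1) == 0
# the gradient vanishes only at the center of the disk
crit = sp.solve([sp.diff(u, x), sp.diff(u, y)], [x, y], dict=True)
assert crit == [{x: 0, y: 0}]
```

Its level sets $\{u=c\}$ are concentric circles, in accordance with the convexity of level sets proved in [ML71].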
Another classical problem concerns the first eigenfunction of the Laplacian with zero Dirichlet boundary condition. In this case the uniqueness of the critical point was proved in [BL76] (see also [APP81]). A very general result on the uniqueness of the critical point of solutions of 1.1 is given in the seminal paper [GNN79], where it is only assumed that $g$ is a locally Lipschitz function and $\Omega$ is a symmetric domain in $\mathbb{R}^{N}$ which is convex in any direction. Some conjectures claim that the symmetry assumption in Gidas, Ni, Nirenberg's Theorem can be removed. An interesting contribution in this direction is the result in [CC98] where the uniqueness of the critical point is proved for _semi-stable_ solutions in planar domains with strictly positive boundary curvature. We recall that a solution $u$ of problem 1.1 is said to be _stable_ (or _semi-stable_) if the linearized operator at $u$ is positive definite, i.e. if for all $\xi\in\mathcal{C}^{\infty}_{0}(\Omega)\setminus\\{0\\}$ one has $\int_{\Omega}|\nabla\xi|^{2}-\int_{\Omega}g^{\prime}(u)|\xi|^{2}>0\ (\geq 0),$ or equivalently if the first eigenvalue of the linearized operator $-\Delta-g^{\prime}(u)$ in $\Omega$ is positive (non-negative). The result in [CC98] was recently extended by allowing $\partial\Omega$ to have points with zero curvature, see [DRGM21]. Next we are going to discuss what happens if $\partial\Omega$ contains points with _negative_ curvature. We will see that not only is the uniqueness of the critical point lost, but it is not even possible to have any bound on the number of critical points. Indeed, in [GG19] it was proved that there exists a family of bounded domains $\Omega_{\varepsilon}$ in $\mathbb{R}^{2}$ and a solution $u_{\varepsilon}$ to $\begin{cases}-\Delta u_{\varepsilon}=1&\hbox{in }\Omega_{\varepsilon}\\\ u_{\varepsilon}>0&\hbox{in }\Omega_{\varepsilon}\\\ u_{\varepsilon}=0&\hbox{on }\partial\Omega_{\varepsilon}\end{cases}$ such that
(i) $\Omega_{\varepsilon}$ is star-shaped with respect to an interior point; (ii) $\Omega_{\varepsilon}$ locally converges to a strip $\mathcal{S}=\set{(x,y)\in\mathbb{R}^{2}}{-1<y<1}$ for $\varepsilon\to 0$, i.e. for all compact sets $K\subseteq\mathbb{R}^{2}$ it holds $\lvert K\cap(\mathcal{S}\Delta\Omega_{\varepsilon})\rvert\to 0$ as $\varepsilon\to 0$; (iii) the curvature of $\partial\Omega_{\varepsilon}$ changes sign once and $\min\limits_{(x,y)\in\partial\Omega_{\varepsilon}}Curv(x,y)\to 0$ as $\varepsilon\to 0$; (iv) $u_{\varepsilon}$ has at least $k$ maximum points with $k\geq 2$. In some sense, for $\varepsilon>0$ small, the domains $\Omega_{\varepsilon}$ are “close” to being convex and the minimum negative value of the curvature of $\partial\Omega_{\varepsilon}$ is as close to zero as we want. The aim of this paper is twofold: first we want to extend the result of [GG19] to more general nonlinearities. On the other hand we want to investigate the role of the curvature of $\partial\Omega$ in higher dimensions. Concerning the first point, let us assume that the nonlinearity has the form $g=\lambda f$ where $f$ is smooth and satisfies (1.2) $f:\mathbb{R}\to\mathbb{R}\,\hbox{ is increasing and convex},$ (1.3) $f(0)>0.$ In this setting it is well known that there exists $\lambda^{*}(\Omega)>0$ such that for all $\lambda\in(0,\lambda^{*}(\Omega))$ the problem (1.4) $\begin{cases}-\Delta u=\lambda f(u)&\hbox{in }\Omega\\\ u>0&\hbox{in }\Omega\\\ u=0&\hbox{on }\partial\Omega\end{cases}$ admits a positive stable solution, see for instance [Ban80], [CR75] and [MP80] and the references therein. Finally let us denote by $\mathcal{S}$ the strip $\mathcal{S}=\set{(x,y)\in\mathbb{R}^{N}\times\mathbb{R}}{-1<y<1}$. Our first result claims that if $f$ satisfies 1.2 and 1.3, then there exists a family of bounded smooth domains $\Omega_{\varepsilon}$ “close” to the strip $\mathcal{S}$ and a solution $u_{\varepsilon}$ to 1.4 with $k$ maximum points, $k\geq 2$.
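The existence of positive stable solutions of 1.4 for $\lambda$ below the extremal value is easy to observe numerically in one dimension. The sketch below is an illustration only: the choice $f(u)=e^{u}$ (which satisfies 1.2 and 1.3) and the value $\lambda=0.5$ are ours. It solves the one-dimensional version of 1.4 on $(-1,1)$ by finite differences and Newton's method, then checks stability through the smallest eigenvalue of the discretized linearized operator $-\mathrm{d}^{2}/\mathrm{d}y^{2}-\lambda f^{\prime}(u)$.

```python
import numpy as np

lam, n = 0.5, 200
h = 2.0 / (n + 1)  # interior grid on (-1, 1), with u(-1) = u(1) = 0
A = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h**2  # discrete -d^2/dy^2, Dirichlet

u = np.zeros(n)
for _ in range(30):  # Newton iteration for A u = lam * exp(u)
    F = A @ u - lam * np.exp(u)
    J = A - lam * np.diag(np.exp(u))
    u -= np.linalg.solve(J, F)

assert np.all(u > 0)  # positive solution on the whole interval
# stability: smallest eigenvalue of the linearized operator is positive
mu0 = np.linalg.eigvalsh(A - lam * np.diag(np.exp(u))).min()
assert mu0 > 0
```

Starting Newton from $u\equiv 0$ selects the minimal (stable) branch; pushing $\lambda$ towards the extremal value one observes the smallest eigenvalue approach $0$.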
The precise statement follows. ###### Theorem 1.1. Assume that $f$ satisfies 1.2 and 1.3. Then for any $\lambda\in(0,\lambda^{*}(-1,1))$ and for all $k\in\mathbb{N}$ there exists a family of smooth and bounded domains $\Omega_{\varepsilon}\subseteq\mathbb{R}^{N+1}$ such that (i) $\Omega_{\varepsilon}$ is star-shaped with respect to the origin and symmetric with respect to the hyperplanes $x_{j}=0$ for $j=1,\dots,N$ and $y=0$; (ii) $\Omega_{\varepsilon}$ locally converges to the strip $\mathcal{S}$ for $\varepsilon\to 0$, i.e. for all compact sets $K\subseteq\mathbb{R}^{N+1}$ it holds $\lvert K\cap(\mathcal{S}\Delta\Omega_{\varepsilon})\rvert\to 0$ as $\varepsilon\to 0$; (iii) $\lambda^{*}(\Omega_{\varepsilon})\geq\lambda^{*}(-1,1)$ for $\varepsilon$ small enough; (iv) if $u_{\varepsilon}$ is the stable solution of problem 1.4 in $\Omega_{\varepsilon}$ for some $0<\lambda<\lambda^{*}(\Omega_{\varepsilon})$ then $u_{\varepsilon}$ has at least $k$ maximum points. Let us give an idea of the proof of Theorem 1.1. The assumptions on $f$ imply that there exists a _stable_ solution $u_{0}$ of the following ODE $\begin{cases}-u^{\prime\prime}=\lambda f(u)&\hbox{in }(-1,1)\\\ u>0&\hbox{in }(-1,1)\\\ u(\pm 1)=0.\end{cases}$ Next, for a small $\sigma>0$ let us extend $u_{0}$ to a slightly larger interval $(-1-\sigma,1+\sigma)$ and denote by $\varphi:\mathbb{R}^{N+1}\to\mathbb{R}$ a suitable solution of the following PDE (1.5) $-\Delta v=\lambda f^{\prime}(u_{0}(y))v,\quad\text{in }\mathbb{R}^{N}\times\left(-1-\sigma,1+\sigma\right).$ Of course 1.5 can be solved using the classical separation of variables method. Our domain $\Omega_{\varepsilon}$ will be the connected component of $\set{u_{0}+\varepsilon\varphi>0}$ containing the origin and the solution $u_{\varepsilon}$ the _stable_ solution to 1.4 with $\Omega=\Omega_{\varepsilon}$.
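The separation of variables behind 1.5 can be verified symbolically: $v(t,y)=\cosh(\sqrt{\mu}\,t)\,\omega(y)$ solves the linearized equation (in one horizontal variable) precisely when $\omega$ solves the eigenvalue ODE $-\omega^{\prime\prime}-\lambda f^{\prime}(u_{0})\omega=\mu\omega$. A minimal sympy check, where the abstract symbols `w` and `fp` stand for $\omega$ and $f^{\prime}(u_{0})$:

```python
import sympy as sp

t, y, mu, lam = sp.symbols('t y mu lam', positive=True)
w = sp.Function('w')    # plays the role of omega
fp = sp.Function('fp')  # plays the role of f'(u_0(y))
v = sp.cosh(sp.sqrt(mu) * t) * w(y)
residual = -sp.diff(v, t, 2) - sp.diff(v, y, 2) - lam * fp(y) * v
# impose the eigenvalue ODE: w'' = -(lam * fp + mu) * w
residual = residual.subs(sp.diff(w(y), y, 2), -(lam * fp(y) + mu) * w(y))
assert sp.simplify(residual) == 0  # v solves the linearized PDE
```

Summing such products over several values of $\mu$ and over the $N$ horizontal variables is exactly how the function $\varphi$ is built in Section 2.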
Finally we show that $u_{\varepsilon}$ is close to $u_{0}+\varepsilon\varphi$ on compact subsets of $\Omega_{\varepsilon}$; since this last function will be proved to admit $k$ nondegenerate critical points, $(iv)$ follows. The proof of Theorem 1.1 will be given in Section 2. We point out that it is possible to prove a slightly more general result for problem 1.1 without assuming 1.3, see Remark 2.10. It is important to remark that our construction only works for stable solutions to 1.1. Indeed, even for the case of the first eigenfunction of the Laplacian (where the first eigenvalue of the linearized problem is _zero_), we are not able to construct a domain $\Omega_{\varepsilon}$ as in Theorem 1.1. This will be discussed in Remark 2.11. We do not know if in this case there exists a pair $(\Omega_{\varepsilon},u_{\varepsilon})$ as in Theorem 1.1. Next let us discuss the role of the curvature of $\partial\Omega$ for solutions to 1.1 in higher dimensions. We will focus on the particular case of the torsion problem, i.e. $g\equiv 1$ in 1.1. By Makar-Limanov's result, if $N=2$ and the curvature of $\partial\Omega$ is positive, then the solution $u$ admits exactly one critical point (see [DRGM21] if the curvature vanishes somewhere). So a question naturally arises: if $N\geq 3$, what is a sufficient condition on $\partial\Omega$ which implies the uniqueness of the critical point? We point out that even for the torsion problem this is an open problem. In the second part of this paper we give a contribution to this question, showing that positivity of the mean curvature of $\partial\Omega$ is not the correct extension to higher dimensions. Indeed, for any $k\geq 2$, we will construct a domain $\Omega_{\varepsilon}\subseteq\mathbb{R}^{N}$ with $N\geq 3$ and positive mean curvature and a solution $u_{\varepsilon}$ of the torsion problem in $\Omega_{\varepsilon}$ such that $u_{\varepsilon}$ has at least $k$ critical points.
Actually we suspect that the correct condition which implies the uniqueness of the critical point for the solution of the torsion problem is that all principal curvatures are positive. However we have no result to support this idea. The construction of the pair $(\Omega_{\varepsilon},u_{\varepsilon})$ is similar to the one in Theorem 1.1, but $\Omega_{\varepsilon}$ turns out to be a suitable perturbation of a cylinder $\mathcal{C}=\set{(x,y)\in\mathbb{R}\times\mathbb{R}^{N}}{\lvert y\rvert^{2}<1}$ for $N\geq 2$. The result is the following. ###### Theorem 1.2. Let $N\geq 2$. For any $k\in\mathbb{N}$ there exists a family of smooth and bounded domains $\Omega_{\varepsilon}\subseteq\mathbb{R}^{N+1}$ and smooth positive functions $u_{\varepsilon}:\Omega_{\varepsilon}\to\mathbb{R}$ such that (i) $\Omega_{\varepsilon}$ is star-shaped with respect to an interior point; (ii) $\Omega_{\varepsilon}$ locally converges to the cylinder $\mathcal{C}$ for $\varepsilon\to 0$, i.e. for all compact sets $K\subseteq\mathbb{R}^{N+1}$ it holds $\lvert K\cap(\mathcal{C}\Delta\Omega_{\varepsilon})\rvert\to 0$ as $\varepsilon\to 0$; (iii) the mean curvature of $\partial\Omega_{\varepsilon}$ is positive; (iv) $u_{\varepsilon}$ solves the torsion problem $\begin{cases}-\Delta u=1&\hbox{in }\Omega_{\varepsilon}\\\ u=0&\hbox{on }\partial\Omega_{\varepsilon};\end{cases}$ (v) $u_{\varepsilon}$ has at least $k$ nondegenerate maximum points. Figure 1. The domain $\Omega_{\varepsilon}$ in Theorem 1.2 for $N=2$ and $k=2$. As in Theorem 1.1 we have that $u_{\varepsilon}=u_{0}+\varepsilon\varphi$ where $u_{0}=\frac{1}{2N}\left(1-\lvert y\rvert^{2}\right)$ is a solution of the torsion problem in the unit ball in $\mathbb{R}^{N}$ and $\varphi$ turns out to be a harmonic function on the whole of $\mathbb{R}^{N+1}$. Then we take $\Omega_{\varepsilon}$ as in Theorem 1.1, while our solution will directly be $u_{\varepsilon}=u_{0}+\varepsilon\varphi$.
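In the simplest strip analogue (one transverse variable, i.e. formally $N=1$, which falls outside Theorem 1.2 but avoids the Bessel functions that appear for $N\geq 2$) the same mechanism is fully explicit: $u_{0}(y)=(1-y^{2})/2$ is the one-dimensional torsion function, $\varphi(x,y)=-\cosh(kx)\cos(ky)$ is harmonic for any $k$, and $u_{0}+\varepsilon\varphi$ still solves the torsion equation exactly. A symbolic check (the particular $\varphi$ here is our illustrative choice):

```python
import sympy as sp

x, y, eps, k = sp.symbols('x y eps k', positive=True)
u0 = (1 - y**2) / 2                    # 1-d torsion function on (-1, 1)
phi = -sp.cosh(k * x) * sp.cos(k * y)  # harmonic in the (x, y) plane
u = u0 + eps * phi
# the harmonic perturbation does not change the torsion equation: -Delta u = 1
assert sp.simplify(sp.diff(u, x, 2) + sp.diff(u, y, 2) + 1) == 0
```

For the cylinder of Theorem 1.2 one naturally replaces the factor $\cos(ky)$ by a radial solution of the Helmholtz equation in the cross-section, which is what makes $\varphi$ harmonic in $\mathbb{R}^{N+1}$.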
Since the set $\Omega_{\varepsilon}$ turns out to be a small perturbation of the cylinder $\mathcal{C}$, whose boundary has positive mean curvature, $(iii)$ of Theorem 1.2 follows. Note that, unlike in Theorem 1.1, here the pair $(\Omega_{\varepsilon},u_{\varepsilon})$ is explicitly computed. Theorem 1.2 will be proved in Section 3. Finally the Appendix is devoted to the detailed proof of some claims in Section 2 and Section 3. ## 2\. Proof of Theorem 1.1 In this section we take $x=(x_{1},\dots,x_{N})\in\mathbb{R}^{N}$ and $y\in\mathbb{R}$ and we assume the hypotheses of Theorem 1.1. The proof works as follows: we first construct a suitable domain $\Omega_{\varepsilon}$ which satisfies the claims of Theorem 1.1 and next we prove that the stable solution of 1.4 satisfies claim $(iv)$ in Theorem 1.1. The first step in the construction of the domain $\Omega_{\varepsilon}$ is to introduce a solution $u_{0}$ of the $1$-dimensional problem (2.1) $\begin{cases}-u^{\prime\prime}=\lambda f(u)&\hbox{in }(-1,1)\\\ u>0&\hbox{in }(-1,1)\\\ u(\pm 1)=0.\end{cases}$ By the assumptions on $f$ such a solution exists and by elementary arguments it can be extended to verify $\begin{cases}-u^{\prime\prime}=\lambda f(u)&\hbox{in }(-1-\sigma,1+\sigma)\\\ u>0&\hbox{in }(-1,1)\\\ u(\pm 1)=0\\\ u<0&\hbox{in }[-1-\sigma,-1)\cup(1,1+\sigma]\\\ \end{cases}$ for $\sigma>0$ and small. We again denote by $u_{0}$ this extension. Since $u_{0}$ is a stable solution we have that the first eigenvalue of the linearized operator (2.2) $-\frac{\mathrm{d}^{2}}{\mathrm{d}y^{2}}-\lambda f^{\prime}(u_{0}(y)),$ in $(-1,1)$ with Dirichlet boundary conditions is strictly positive. Then, up to choosing a smaller $\sigma$, the first eigenvalue of 2.2 in $(-1-\sigma,1+\sigma)$ is strictly positive as well. We denote it by $\mu_{0}$. The next ingredient in the construction of $\Omega_{\varepsilon}$ involves a solution of a suitable linearized problem in the strip $\mathbb{R}^{N}\times(-1-\sigma,1+\sigma)$.
To do this we need to study the following ODE. ###### Lemma 2.1. For $\mu\in(0,\mu_{0})$ there exists a solution $\omega_{\mu}$ of the ordinary differential equation $\begin{cases}-\omega_{\mu}^{\prime\prime}-\lambda f^{\prime}(u_{0}(y))\omega_{\mu}=\mu\omega_{\mu}\quad\text{in }(-1-\sigma,1+\sigma)\\\ \omega_{\mu}(0)=1\end{cases}$ such that (i) $\omega_{\mu}>0$ in $[-1-\sigma,1+\sigma]$, (ii) $\omega_{\mu}$ is symmetric with respect to $0$, (iii) $y\omega_{\mu}^{\prime}(y)<0$ for all $y\not=0$. ###### Proof. Fix $\mu\in(0,\mu_{0})$ and let $\omega$ be the solution of $\begin{cases}-\omega^{\prime\prime}-\lambda f^{\prime}(u_{0}(y))\omega=\mu\omega\quad\text{in }(-1-\sigma,1+\sigma),\\\ \omega(\pm(1+\sigma))=1.\end{cases}$ Since $\mu<\mu_{0}$, by the maximum principle we know that $\omega>0$ in $(-1-\sigma,1+\sigma)$. Taking into account the symmetry of $u_{0}$ and the maximum principle we get that $\omega(y)=\omega(-y)$ and then $(ii)$ follows. Moreover, from $f^{\prime}\geq 0$ we deduce $\omega^{\prime\prime}<0$ in $[-1-\sigma,1+\sigma]$, and then $0$ turns out to be a maximum point. The strict concavity of $\omega$ also tells us that $\omega^{\prime}(y)<0$ for $y>0$ and, together with the symmetry of the function, this yields $(iii)$. To conclude the proof set $\omega_{\mu}=\omega/\omega(0)$. ∎ ### 2.1. Construction of the domain $\Omega_{\varepsilon}$ Now, for some $n=n(k)\in\mathbb{N}$, let $\mu_{1},\dots,\mu_{n}\in\mathbb{R}$ be such that (2.3) $\frac{\mu_{0}}{4}>\mu_{1}>\dots>\mu_{n}>0,$ and for $i=1,\dots,n$ $\omega_{i}(y)=\omega_{\mu_{i}}(y),\quad y\in(-1-\sigma,1+\sigma),$ the function given by Lemma 2.1. From now on, we consider $\sigma$ fixed. Given $(t,y)\in\mathbb{R}\times\left(-1-\sigma,1+\sigma\right)$, we define $\tilde{\varphi}(t,y)=\sum_{i=1}^{n}\alpha_{i}\cosh(\sqrt{\mu_{i}}t)\omega_{i}(y),$ for some $\alpha_{i}\in\mathbb{R}$ which will be fixed later.
A straightforward computation shows that $\tilde{\varphi}$ is a solution of the linearized problem $-\Delta v=\lambda f^{\prime}(u_{0}(y))v,\quad\text{in }\mathbb{R}\times\left(-1-\sigma,1+\sigma\right).$ We set $\alpha_{1}=-1$ while we choose $\alpha_{2},\dots,\alpha_{n}$ in such a way that the function $\tilde{\varphi}(t,0)=\sum_{i=1}^{n}\alpha_{i}\cosh(\sqrt{\mu_{i}}t)$ has $k$ nondegenerate maximum points $t_{1},\dots,t_{k}$. We point out that it is always possible to do this, see Lemma A.1 in the Appendix for the details. Finally, for $(x_{1},\dots,x_{N},y)\in\mathbb{R}^{N}\times\left(-1-\sigma,1+\sigma\right)$ we define (2.4) $\boxed{\varphi(x_{1},\dots,x_{N},y)=\sum_{j=1}^{N}\tilde{\varphi}(x_{j},y)=\sum_{j=1}^{N}\sum_{i=1}^{n}\alpha_{i}\cosh(\sqrt{\mu_{i}}x_{j})\omega_{i}(y)}$ which solves $-\Delta v=\lambda f^{\prime}(u_{0}(y))v,\quad\text{in }\mathbb{R}^{N}\times\left(-1-\sigma,1+\sigma\right).$ We point out that, for $\varepsilon$ small enough, $u_{0}(0)+\varepsilon\varphi(0,\dots,0,0)>0,$ and we denote by (2.5) $\boxed{\Omega_{\varepsilon}\hbox{ the connected component of $\set{u_{0}+\varepsilon\varphi>0}$ containing the origin.}}$ The following lemma proves some properties of the set $\Omega_{\varepsilon}$. The proof follows [GG19]. ###### Lemma 2.2. The set $\Omega_{\varepsilon}$ satisfies the following properties. (i) $\Omega_{\varepsilon}\subseteq R_{\varepsilon}$ for $\varepsilon$ small enough, with $R_{\varepsilon}=[-M_{\varepsilon},M_{\varepsilon}]^{N}\times[-1-\eta,1+\eta]$ where $M_{\varepsilon}=\frac{1}{\sqrt{\mu_{1}}}\log\left(\frac{3\left\|u_{0}\right\|_{L^{\infty}(-1-\eta,1+\eta)}}{\varepsilon\omega_{1}(1+\eta)}\right),$ and $\eta\in(0,\sigma)$ as small as we want. (ii) $\Omega_{\varepsilon}\supseteq[t_{1},t_{k}]^{N}\times\\{0\\}$. (iii) Let $(x^{\varepsilon},y^{\varepsilon})\in\partial\Omega_{\varepsilon}$ for $\varepsilon$ small enough.
Then, if $\lvert x^{\varepsilon}\rvert\leq C$ we have (2.6) $y^{\varepsilon}=\pm 1+o(1),$ and if $\lvert x^{\varepsilon}\rvert\to+\infty$ we have (2.7) $\sum_{j=1}^{N}\cosh(\sqrt{\mu_{1}}x_{j}^{\varepsilon})=\frac{u_{0}(y^{\varepsilon})}{\varepsilon\omega_{1}(y^{\varepsilon})}(1+o(1)).$ In particular $\Omega_{\varepsilon}$ locally converges to the strip $\mathcal{S}=\mathbb{R}^{N}\times(-1,1)$ for $\varepsilon\to 0$. (iv) $\Omega_{\varepsilon}$ is symmetric with respect to the hyperplanes $x_{j}=0$ for $j=1,\dots,N$ and $y=0$. Moreover, it is a smooth and star-shaped domain with respect to the origin for $\varepsilon$ small enough. ###### Proof. In order to prove $(i)$ we show that $u_{0}+\varepsilon\varphi<0$ on $\partial R_{\varepsilon}$. First let us consider the case where $x=(x_{1},\dots,x_{N})\in[-M_{\varepsilon},M_{\varepsilon}]^{N}$ is such that $x_{j}=\pm M_{\varepsilon}$ for some $j=1,\dots,N$ and $y\in[-1-\eta,1+\eta]$. Hence, recalling 2.4 and using $\cosh s\geq e^{s}/2$, one has $\displaystyle u_{0}(y)+\varepsilon\varphi(x,y)$ $\displaystyle\leq\left\|u_{0}\right\|_{L^{\infty}(-1-\eta,1+\eta)}-\varepsilon\frac{3\left\|u_{0}\right\|_{L^{\infty}(-1-\eta,1+\eta)}}{2\varepsilon}(1+o(1))$ $\displaystyle\leq-\frac{1}{4}\left\|u_{0}\right\|_{L^{\infty}(-1-\eta,1+\eta)}<0$ as $\varepsilon\to 0$. Next let $(x,y)\in\set{(x,y)\in\mathbb{R}^{N+1}}{x\in[-M_{\varepsilon},M_{\varepsilon}]^{N},\,\,y=\pm(1+\eta)}$ and observe that since $\omega_{i}>0$ for $y\in[-1-\eta,1+\eta]$ for all $i=1,\dots,n$ and $\alpha_{1}=-1$ we get $\sup_{\mathbb{R}^{N}\times[-1-\eta,1+\eta]}\varphi=C\in\mathbb{R}.$ Finally, we have $u_{0}(y)+\varepsilon\varphi(x,y)\leq u_{0}(\pm(1+\eta))+C\varepsilon<\frac{u_{0}(\pm(1+\eta))}{2}<0,$ for $\varepsilon$ small enough. Then $(i)$ follows.
Concerning $(ii)$, if $\varepsilon$ satisfies $\varepsilon N\max_{t\in[t_{1},t_{k}]}\left[\sum_{i=1}^{n}\alpha_{i}\cosh(\sqrt{\mu_{i}}t)\right]^{-}<\frac{u_{0}(0)}{2},$ where $[\,\,\cdot\,\,]^{-}$ denotes the negative part, then we get $u_{0}+\varepsilon\varphi\geq u_{0}-\varepsilon\varphi^{-}>\frac{u_{0}(0)}{2},$ and so $[t_{1},t_{k}]^{N}\times\\{0\\}\subseteq\Omega_{\varepsilon}$. To prove $(iii)$ note that from $(u_{0}+\varepsilon\varphi)(x^{\varepsilon},y^{\varepsilon})=0$ on $\partial\Omega_{\varepsilon}$ we have $u_{0}(y^{\varepsilon})=-\varepsilon\varphi(x^{\varepsilon},y^{\varepsilon}).$ If $\lvert x^{\varepsilon}\rvert\leq C$ then $\varphi$ is uniformly bounded with respect to $\varepsilon\to 0$ and then $u_{0}(y^{\varepsilon})\to 0$, which yields $y^{\varepsilon}\to\pm 1$. On the other hand, if $\lvert x^{\varepsilon}\rvert\to+\infty$ we have (recall that $\alpha_{1}=-1$) (2.8) $-u_{0}(y^{\varepsilon})=-\varepsilon\sum_{j=1}^{N}\cosh(\sqrt{\mu_{1}}x_{j}^{\varepsilon})\omega_{1}(y^{\varepsilon})(1+o(1)),$ which gives 2.7. Moreover, since the right-hand side of equation 2.8 is strictly negative we get $u_{0}(y^{\varepsilon})>0$, which implies $\lvert y^{\varepsilon}\rvert\leq 1$. The symmetry properties of the domain immediately follow from the ones of $\varphi$ and $u_{0}$. 
To show the star-shapedness with respect to the origin, it is enough to prove that there exists $\alpha>0$ such that $y\partial_{y}u_{0}(y)+\varepsilon\sum_{j=1}^{N}x_{j}\partial_{x_{j}}\varphi(x,y)+\varepsilon y\partial_{y}\varphi(x,y)\leq-\alpha<0,\quad\text{for all }(x,y)\in\partial\Omega_{\varepsilon}.$ Since $u_{0}$ solves 2.1 we have that $y\partial_{y}u_{0}(y)<0,\quad\text{in }R_{\varepsilon}\setminus\set{y=0}.$ If $(x^{\varepsilon},y^{\varepsilon})\in\partial\Omega_{\varepsilon}$ is such that $\lvert x^{\varepsilon}\rvert\leq C$ as $\varepsilon\to 0$, then from 2.6 it follows that $y^{\varepsilon}\partial_{y}u_{0}(y^{\varepsilon})=\pm\partial_{y}u_{0}(\pm 1)(1+o(1))=\partial_{y}u_{0}(1)(1+o(1))<0.$ In this case, since the derivatives of $\varphi$ are uniformly bounded with respect to $\varepsilon$, it easily follows $\displaystyle y^{\varepsilon}\partial_{y}u_{0}(y^{\varepsilon})+\varepsilon\sum_{j=1}^{N}x^{\varepsilon}_{j}\partial_{x_{j}}\varphi(x^{\varepsilon},y^{\varepsilon})+\varepsilon y^{\varepsilon}\partial_{y}\varphi(x^{\varepsilon},y^{\varepsilon})$ $\displaystyle=\partial_{y}u_{0}(1)(1+o(1))+O(\varepsilon)$ $\displaystyle\leq\frac{1}{2}\partial_{y}u_{0}(1)<0,$ for $\varepsilon$ small enough. On the other hand, if $\lvert x^{\varepsilon}\rvert\to+\infty$, let $\\{j_{1},\dots,j_{m}\\}\subseteq\\{1,\dots,N\\}$ be such that $\lvert x_{j}\rvert\to+\infty$ if and only if $j=j_{h}$ for some $h=1,\dots,m$. 
Then one gets $\displaystyle y^{\varepsilon}\partial_{y}u_{0}(y^{\varepsilon})+\varepsilon\sum_{j=1}^{N}x^{\varepsilon}_{j}\partial_{x_{j}}\varphi(x^{\varepsilon},y^{\varepsilon})+\varepsilon y^{\varepsilon}\partial_{y}\varphi(x^{\varepsilon},y^{\varepsilon})$ $\displaystyle=\\!y^{\varepsilon}\partial_{y}u_{0}(y^{\varepsilon})\\!+\\!\varepsilon\\!\sum_{j=1}^{N}\sum_{i=1}^{n}\\!\alpha_{i}\\!\left(\sqrt{\mu_{i}}x_{j}^{\varepsilon}\sinh(\sqrt{\mu_{i}}x_{j}^{\varepsilon})\omega_{i}(y^{\varepsilon})\\!+\\!\cosh(\sqrt{\mu_{i}}x_{j}^{\varepsilon})y^{\varepsilon}\partial_{y}\omega_{i}(y^{\varepsilon})\right)$ (2.9) $\displaystyle\leq y^{\varepsilon}\partial_{y}u_{0}(y^{\varepsilon})-\frac{\varepsilon}{2}\sum_{h=1}^{m}\sqrt{\mu_{1}}x_{j_{h}}^{\varepsilon}\sinh(\sqrt{\mu_{1}}x_{j_{h}}^{\varepsilon})\omega_{1}(y^{\varepsilon})\big{(}1+o(1)\big{)}.$ For $h=1,\dots,m$ we have that $-x_{j_{h}}\sinh(\sqrt{\mu_{1}}x_{j_{h}})\leq-\cosh(\sqrt{\mu_{1}}x_{j_{h}})$ and then $\displaystyle-\sum_{h=1}^{m}x_{j_{h}}^{\varepsilon}\sinh(\sqrt{\mu_{1}}x_{j_{h}}^{\varepsilon})\big{(}1+o(1)\big{)}$ $\displaystyle\leq-\sum_{h=1}^{m}\cosh(\sqrt{\mu_{1}}x_{j_{h}}^{\varepsilon})\big{(}1+o(1)\big{)}$ $\displaystyle=-\sum_{j=1}^{N}\cosh(\sqrt{\mu_{1}}x_{j}^{\varepsilon})\big{(}1+o(1)\big{)}.$ So we have that 2.9 becomes $\displaystyle y^{\varepsilon}\partial_{y}u_{0}(y^{\varepsilon})$ $\displaystyle+\varepsilon\sum_{j=1}^{N}x^{\varepsilon}_{j}\partial_{x_{j}}\varphi(x^{\varepsilon},y^{\varepsilon})+\varepsilon y^{\varepsilon}\partial_{y}\varphi(x^{\varepsilon},y^{\varepsilon})$ $\displaystyle\leq\,\,y^{\varepsilon}\partial_{y}u_{0}(y^{\varepsilon})-\frac{\varepsilon}{2}\sqrt{\mu_{1}}\omega_{1}(y^{\varepsilon})\sum_{j=1}^{N}\cosh(\sqrt{\mu_{1}}x_{j}^{\varepsilon})(1+o(1))$ $\displaystyle\\!\\!\\!\\!\overset{2.7}{\leq}\\!y^{\varepsilon}\partial_{y}u_{0}(y^{\varepsilon})-\frac{\sqrt{\mu_{1}}}{2}u_{0}(y^{\varepsilon})(1+o(1))$ 
$\displaystyle\leq\,\,y^{\varepsilon}\partial_{y}u_{0}(y^{\varepsilon})-\frac{\sqrt{\mu_{1}}}{4}u_{0}(y^{\varepsilon}),$ and if $y^{\varepsilon}\partial_{y}u_{0}(y^{\varepsilon})-\frac{\sqrt{\mu_{1}}}{4}u_{0}(y^{\varepsilon})\to 0$, since both terms are nonpositive, then they both go to $0$. This implies $y^{\varepsilon}\to 0$ from the first term and $\lvert y^{\varepsilon}\rvert\to 1$ from the second one, a contradiction. Hence $y^{\varepsilon}\partial_{y}u_{0}(y^{\varepsilon})-\frac{\sqrt{\mu_{1}}}{4}u_{0}(y^{\varepsilon})\leq-\tilde{\alpha}$ for some $\tilde{\alpha}>0$. Finally, for $\alpha=\min\left\\{-\frac{1}{2}\partial_{y}u_{0}(1),\tilde{\alpha}\right\\},$ we have the claim. Of course $y\partial_{y}u_{0}(y)+\varepsilon\sum_{j=1}^{N}x_{j}\partial_{x_{j}}\varphi(x,y)+\varepsilon y\partial_{y}\varphi(x,y)\not=0$ on $\partial\Omega_{\varepsilon}$ implies that $\partial\Omega_{\varepsilon}$ is a smooth set. ∎ The next lemma tells us that the function $u_{0}+\varepsilon\varphi$ has many critical points. ###### Lemma 2.3. The function $u_{0}+\varepsilon\varphi$ has at least $k$ different nondegenerate local maxima in $\Omega_{\varepsilon}$ for $\varepsilon$ small enough. ###### Proof. Set $U=u_{0}+\varepsilon\varphi$ and let $t_{1}<\dots<t_{k}$ be local, nondegenerate maxima for $\tilde{\varphi}(t,0)=\sum_{i=1}^{n}\alpha_{i}\cosh(\sqrt{\mu_{i}}t)$. Then a straightforward computation gives $\nabla U(t_{m},\dots,t_{m},0)=0.$ Next, observing that $\partial_{yy}u_{0}(0)=-\lambda f(u_{0}(0))<0$ we have $\displaystyle\partial_{yy}U(t_{m},\dots,t_{m},0)$ $\displaystyle=\partial_{yy}u_{0}(0)+\varepsilon\sum_{j=1}^{N}\sum_{i=1}^{n}\alpha_{i}\cosh(\sqrt{\mu_{i}}t_{m})\partial_{yy}\omega_{i}(0)$ (2.10) $\displaystyle<-\frac{\lambda}{2}f(u_{0}(0))<0,$ for $\varepsilon$ small enough and for all $m=1,\dots,k$. 
Finally in $(t_{m},\dots,t_{m},0)$ one has $\displaystyle\partial_{x_{j}x_{j}}U$ $\displaystyle=\varepsilon\sum_{i=1}^{n}\alpha_{i}{\mu_{i}}\cosh(\sqrt{\mu_{i}}t_{m})<0,$ $\displaystyle\partial_{x_{\ell}x_{j}}U$ $\displaystyle=0,\qquad\forall\ell\not=j,$ $\displaystyle\partial_{x_{j}y}U$ $\displaystyle=\varepsilon\sum_{i=1}^{n}\alpha_{i}\sqrt{\mu_{i}}\sinh(\sqrt{\mu_{i}}t_{m})\partial_{y}\omega_{i}(0)=0,$ which, together with 2.10, show that the Hessian matrix of $U$ is negative definite in $(t_{m},\dots,t_{m},0)$ for all $m=1,\dots,k$ and the proof is complete. ∎ Now we prove that problem 1.4 admits a stable solution in the domain $\Omega_{\varepsilon}$ for many values of $\lambda$. ###### Lemma 2.4. For $\varepsilon$ small enough, it holds $\lambda^{*}(\Omega_{\varepsilon})\geq\lambda^{*}(-1,1).$ ###### Proof. Let us write $\lambda^{*}=\lambda^{*}(-1,1)$ for simplicity. For $\eta>0$ small enough we have $\lambda_{\eta}^{*}=\lambda^{*}(-1-\eta,1+\eta)=\frac{\lambda^{*}}{(1+\eta)^{2}}>\lambda,$ and we denote by $u_{\eta}^{*}$ the solution of $\begin{cases}-u^{\prime\prime}=\lambda_{\eta}^{*}f(u)&\hbox{in }(-1-\eta,1+\eta)\\\ u>0&\hbox{in }(-1-\eta,1+\eta)\\\ u(\pm(1+\eta))=0.\end{cases}$ Now, let $\varepsilon$ be so small that $\Omega_{\varepsilon}\subseteq\mathbb{R}^{N}\times(-1-\eta,1+\eta)$, then $u_{\eta}^{*}$ is a _supersolution_ of problem $\begin{cases}-\Delta u=\lambda_{\eta}^{*}f(u)&\hbox{in }\Omega_{\varepsilon}\\\ u>0&\hbox{in }\Omega_{\varepsilon}\\\ u=0&\hbox{on }\partial\Omega_{\varepsilon}\end{cases}$ that is $-\Delta u_{\eta}^{*}\geq\lambda_{\eta}^{*}f(u_{\eta}^{*})$ in $\Omega_{\varepsilon}$ and $u_{\eta}^{*}\geq 0$ on $\partial\Omega_{\varepsilon}$ (here we follow the notation of [Ban80]). Then [Ban80, Theorem 4.7] ensures that $\lambda^{*}(\Omega_{\varepsilon})\geq\lambda_{\eta}^{*}>\lambda$. 
∎ Finally, for $\varepsilon>0$, we define (2.11) $\boxed{u_{\varepsilon}\hbox{ as a stable solution of problem 1.4 in }\Omega_{\varepsilon}.}$ ### 2.2. Properties of the function $u_{\varepsilon}$ Before stating the main properties of the solution $u_{\varepsilon}$, we compute the eigenvalues of a related operator. The proof uses the classical separation of variables. ###### Lemma 2.5. Denote by $\mu_{1,0}(R)$ the first eigenvalue of the operator $-\Delta-\lambda f^{\prime}(u_{0}(y))$ in the rectangle $R=\prod_{j}^{N}(a_{j},b_{j})\times(-1-\sigma,1+\sigma),$ with $u_{|\partial R}=0$, where $a_{j}<b_{j}$ for all $j=1,\dots,N$. Then $\mu_{1,0}(R)=\mu_{0}+\sum_{j=1}^{N}\left(\frac{\pi}{b_{j}-a_{j}}\right)^{2}>\mu_{0}.$ ###### Proof. Fix $\mu\in\mathbb{R}$ and let $A_{j}$ and $B$ be positive solutions of (2.12) $\begin{cases}A_{j}^{\prime\prime}(t)=c_{j}A_{j}(t)\quad&\text{in }(a_{j},b_{j})\\\ A_{j}(a_{j})=A_{j}(b_{j})=0\end{cases}$ and (2.13) $\begin{cases}-B^{\prime\prime}(y)-\left(\lambda f^{\prime}\left(u_{0}(y)\right)+\mu\right)B(y)=\sum_{j=1}^{N}c_{j}B(y)\quad\text{in }(-1-\sigma,1+\sigma)\\\ B(\pm(1+\sigma))=0\end{cases}$ for some $c_{j}\in\mathbb{R}$. We have that the solution of 2.12 is given by $A_{j}(t)=\alpha\sin\left(\sqrt{-c_{j}}(t-a_{j})\right)$ with $\alpha\in\mathbb{R}$ and $c_{j}=-\left(\frac{\pi}{b_{j}-a_{j}}\right)^{2}<0$ and from 2.13 it follows $\sum_{j=1}^{N}c_{j}+\mu=\mu_{0}.$ Finally, since $v(x,y)=B(y)\prod_{j}^{N}A_{j}(x_{j}),$ solves $\begin{cases}-\Delta v-\lambda f^{\prime}(u_{0}(y))v=\mu v&\hbox{in }R\\\ v=0&\hbox{on }\partial R\end{cases}$ and $v>0$ we conclude that $\mu_{1,0}(R)=\mu=\mu_{0}-\sum_{j=1}^{N}c_{j}=\mu_{0}+\sum_{j=1}^{N}\left(\frac{\pi}{b_{j}-a_{j}}\right)^{2}>\mu_{0}.\qed$ ###### Remark 2.6. 
From $(i)$ of Lemma 2.2 and the previous lemma, one has that the first eigenvalue of the operator $-\Delta-\lambda f^{\prime}(u_{0}(y))$ with Dirichlet boundary conditions in $\Omega_{\varepsilon}$ is strictly positive. The rest of the section is devoted to showing that the solution $u_{\varepsilon}$ defined in 2.11 is close to $u_{0}+\varepsilon\varphi$ as $\varepsilon\to 0$. Then $(iv)$ of Theorem 1.1 follows by Lemma 2.3. Let us start with the following bound for $u_{\varepsilon}$. ###### Lemma 2.7. There exists a function $h:(0,+\infty)\to(0,+\infty)$ such that $h(\varepsilon)\to 0$ as $\varepsilon\to 0$ and $u_{\varepsilon}-u_{0}\leq h(\varepsilon)$ uniformly with respect to $(x,y)\in\Omega_{\varepsilon}$. ###### Proof. For $\eta>0$, let $u_{\eta}$ be the stable solution of $\begin{cases}-u^{\prime\prime}=\lambda f(u)&\hbox{in }(-1-\eta,1+\eta)\\\ u>0&\hbox{in }(-1-\eta,1+\eta)\\\ u(\pm(1+\eta))=0.\end{cases}$ For $\varepsilon$ small enough such that $\Omega_{\varepsilon}\subseteq\mathbb{R}^{N}\times(-1-\eta,1+\eta)$, from the convexity of $f$ we have $\begin{cases}-\Delta(u_{\varepsilon}-u_{\eta})=\lambda\left(f(u_{\varepsilon})-f(u_{\eta})\right)\leq\lambda f^{\prime}(u_{\varepsilon})(u_{\varepsilon}-u_{\eta})&\hbox{in}\ \Omega_{\varepsilon}\\\ u_{\varepsilon}-u_{\eta}<0&\hbox{on}\ \partial\Omega_{\varepsilon}\end{cases}$ and then from the stability of $u_{\varepsilon}$ we can apply the maximum principle to deduce $u_{\varepsilon}\leq u_{\eta}$ in $\Omega_{\varepsilon}$. For $(x,y)\in\Omega_{\varepsilon}$, by the maximum principle applied to $u_{\eta}-u_{0}$ we get $u_{\varepsilon}(x,y)-u_{0}(y)\leq u_{\eta}(y)-u_{0}(y)\leq\max(u_{\eta}-u_{0})_{|y=\pm(1+\eta)}=-u_{0}(1+\eta).$ Next let us define the function $h(\varepsilon)$ as follows: for any $\varepsilon>0$ let $\eta(\varepsilon)$ be the smallest positive number such that $\Omega_{\varepsilon}\subseteq\mathbb{R}^{N}\times(-1-\eta(\varepsilon),1+\eta(\varepsilon))$. 
By the properties of $\Omega_{\varepsilon}$ we have that $\eta(\varepsilon)\to 0$ as $\varepsilon\to 0$. Finally, as $\varepsilon\to 0$ $h(\varepsilon)=-u_{0}\big{(}1+\eta(\varepsilon)\big{)}\to 0,$ which gives the claim. ∎ The next lemma gives a first estimate of how close $u_{\varepsilon}$ is to $u_{0}+\varepsilon\varphi$; it will be improved later. ###### Lemma 2.8. Given $\psi_{\varepsilon}=\frac{u_{\varepsilon}-u_{0}-\varepsilon\varphi}{\varepsilon}$ one has $0\leq\psi_{\varepsilon}<\bar{\psi}$ in $\Omega_{\varepsilon}$ for $\varepsilon$ small enough, where $\bar{\psi}(x,y)=\sum_{j=1}^{N}\sum_{i=1}^{n}\lvert\alpha_{i}\rvert\left(\omega_{i}(y)-C_{i}\right)\cosh(\sqrt{\mu_{i}}x_{j}),$ with $0<C_{i}<\inf\limits_{(-1-\eta,1+\eta)}\omega_{i}$ for all $i=1,\dots,n$ and $0<\eta<\sigma$ small, fixed. ###### Proof. Using the convexity of $f$ we have $-\Delta\psi_{\varepsilon}-\lambda f^{\prime}(u_{0})\psi_{\varepsilon}\geq 0.$ Moreover, $\psi_{\varepsilon}=0$ on $\partial\Omega_{\varepsilon}$ and taking into account Remark 2.6 we can apply the maximum principle to get $\psi_{\varepsilon}>0$ in $\Omega_{\varepsilon}$. Again from the convexity of $f$ we have $\displaystyle-\Delta\psi_{\varepsilon}-\lambda f^{\prime}(u_{\varepsilon})\psi_{\varepsilon}$ $\displaystyle\leq\lambda\left(f^{\prime}(u_{\varepsilon})-f^{\prime}(u_{0})\right)\varphi$ (2.14) $\displaystyle=\lambda\sum_{j=1}^{N}\sum_{i=1}^{n}\alpha_{i}\left(f^{\prime}(u_{\varepsilon})-f^{\prime}(u_{0})\right)\cosh(\sqrt{\mu_{i}}x_{j})\omega_{i}(y).$ From the definition of $C_{i}$ it holds $\bar{\psi}>0$ on $\overline{\Omega}_{\varepsilon}$. 
Furthermore, in $\Omega_{\varepsilon}$ we have that $\bar{\psi}$ verifies $-\Delta\bar{\psi}=\sum_{j=1}^{N}\sum_{i=1}^{n}\lvert\alpha_{i}\rvert\left(\lambda f^{\prime}(u_{0})\omega_{i}(y)+\mu_{i}C_{i}\right)\cosh(\sqrt{\mu_{i}}x_{j}),$ and then $\displaystyle-\Delta$ $\displaystyle\bar{\psi}-\lambda f^{\prime}(u_{\varepsilon})\bar{\psi}$ (2.15) $\displaystyle=\sum_{j=1}^{N}\sum_{i=1}^{n}\lvert\alpha_{i}\rvert\left[\lambda\left(f^{\prime}(u_{0})-f^{\prime}(u_{\varepsilon})\right)\omega_{i}(y)+(\lambda f^{\prime}(u_{\varepsilon})+\mu_{i})C_{i}\right]\cosh(\sqrt{\mu_{i}}x_{j}).$ Moreover $f^{\prime}(u_{\varepsilon})-f^{\prime}(u_{0})=f^{\prime\prime}\left(t_{\varepsilon}u_{\varepsilon}+(1-t_{\varepsilon})u_{0}\right)(u_{\varepsilon}-u_{0}),$ with $t_{\varepsilon}=t_{\varepsilon}(x,y)\in(0,1)$ . From Lemma 2.7 we have $u_{\varepsilon}-u_{0}\leq h(\varepsilon)$ with $h>0$ and $h\to 0$ as $\varepsilon\to 0$. Since $f^{\prime\prime}$ is positive and $t_{\varepsilon}u_{\varepsilon}+(1-t_{\varepsilon})u_{0}$ is bounded uniformly with respect to $\varepsilon$ we get $\lambda\left(f^{\prime}(u_{\varepsilon})-f^{\prime}(u_{0})\right)\leq Ch(\varepsilon),$ for some $C>0$. 
Finally from 2.14 and 2.15 we deduce that $\displaystyle-\Delta$ $\displaystyle(\psi_{\varepsilon}-\bar{\psi})-\lambda f^{\prime}(u_{\varepsilon})(\psi_{\varepsilon}-\bar{\psi})$ $\displaystyle\leq\sum_{j=1}^{N}\sum_{i=1}^{n}\left[(\lvert\alpha_{i}\rvert+\alpha_{i})\lambda(f^{\prime}(u_{\varepsilon})-f^{\prime}(u_{0}))\omega_{i}(y)-\lvert\alpha_{i}\rvert(\lambda f^{\prime}(u_{\varepsilon})+\mu_{i})C_{i}\right]\cosh(\sqrt{\mu_{i}}x_{j})$ $\displaystyle\leq\sum_{j=1}^{N}\sum_{i=1}^{n}\big{[}(\lvert\alpha_{i}\rvert+\alpha_{i})Ch(\varepsilon)-\underbrace{\lvert\alpha_{i}\rvert(\lambda f^{\prime}(u_{\varepsilon})+\mu_{i})C_{i}}_{\leq-\lvert\alpha_{i}\rvert\mu_{i}C_{i}}\big{]}\cosh(\sqrt{\mu_{i}}x_{j})\leq 0,$ for $\varepsilon$ small enough, which gives $\begin{cases}-\Delta(\psi_{\varepsilon}-\bar{\psi})-\lambda f^{\prime}(u_{\varepsilon})(\psi_{\varepsilon}-\bar{\psi})\leq 0&\text{in}\ \Omega_{\varepsilon}\\\ \psi_{\varepsilon}-\bar{\psi}<0&\text{on}\ \partial\Omega_{\varepsilon}\end{cases}$ and the maximum principle provides $\psi_{\varepsilon}-\bar{\psi}<0$ in $\Omega_{\varepsilon}$. ∎ Next lemma gives us the final estimate. Here it will be crucial to choose the coefficients $\mu_{i}$ as in 2.3. ###### Lemma 2.9. Let $\Psi_{\varepsilon}=\frac{u_{\varepsilon}-u_{0}-\varepsilon\varphi}{\varepsilon^{2}}.$ Then in every $K\subset\\!\subset\Omega_{\varepsilon}$ one has $\lvert\Psi_{\varepsilon}\rvert\leq C$, for some $C=C(K)>0$ and $\varepsilon$ small enough. ###### Proof. Let us denote by $C$ any positive constant which does not depend on $\varepsilon$. Consider the function $F(\varepsilon)=f(u_{0}+\varepsilon\varphi+\varepsilon^{2}\Psi_{\varepsilon})$. 
Then for $\varepsilon$ small there exists $t_{\varepsilon}=t_{\varepsilon}(x,y)\in(0,1)$ such that $\displaystyle f(u_{\varepsilon})=F(\varepsilon)=f(u_{0})$ $\displaystyle+\varepsilon f^{\prime}(u_{0})\varphi+\frac{\varepsilon^{2}}{2}f^{\prime\prime}(u_{0})\varphi^{2}+\varepsilon^{2}f^{\prime}(u_{0})\Psi_{\varepsilon}+$ $\displaystyle+\frac{\varepsilon^{3}}{6}f^{\prime\prime\prime}(u_{0}+t_{\varepsilon}\varepsilon\varphi+t_{\varepsilon}^{2}\varepsilon^{2}\Psi_{\varepsilon})(\varphi+2t_{\varepsilon}\varepsilon\Psi_{\varepsilon})^{2}+$ (2.16) $\displaystyle+\varepsilon^{3}f^{\prime\prime}(u_{0}+t_{\varepsilon}\varepsilon\varphi+t_{\varepsilon}^{2}\varepsilon^{2}\Psi_{\varepsilon})(\varphi+2t_{\varepsilon}\varepsilon\Psi_{\varepsilon})\Psi_{\varepsilon}.$ From the previous lemma we have that $0\leq\varepsilon\Psi_{\varepsilon}\leq\bar{\psi}\leq C\sum_{j=1}^{N}\cosh(\sqrt{\mu_{1}}x_{j})$. From Lemma 2.2, $\lvert x_{j}\rvert\leq C\log(1/\varepsilon)$ for all $j=1,\dots,N$ and then $\left|u_{0}+t_{\varepsilon}\varepsilon\varphi+t_{\varepsilon}^{2}\varepsilon^{2}\Psi_{\varepsilon}\right|\leq C,\quad\text{in }\Omega_{\varepsilon}.$ In $\Omega_{\varepsilon}$, taking into account 2.2, we have the following inequality $\displaystyle f(u_{\varepsilon})-f(u_{0})-\varepsilon f^{\prime}(u_{0})\varphi$ $\displaystyle\leq C\varepsilon^{2}\left(\varphi^{2}+\varepsilon(\varphi+2\bar{\psi})^{2}+(\varphi+2\bar{\psi})\bar{\psi}\right)+\varepsilon^{2}f^{\prime}(u_{0})\Psi_{\varepsilon}$ $\displaystyle\leq\frac{C_{\infty}}{\lambda}\varepsilon^{2}\sum_{j=1}^{N}\cosh(2\sqrt{\mu_{1}}x_{j})+\varepsilon^{2}f^{\prime}(u_{0})\Psi_{\varepsilon},$ for some $C_{\infty}>0$, that implies (2.17) $-\Delta\Psi_{\varepsilon}-\lambda f^{\prime}(u_{0})\Psi_{\varepsilon}\leq C_{\infty}\sum_{j=1}^{N}\cosh(2\sqrt{\mu_{1}}x_{j}).$ Fix $\mu_{\infty}=4\mu_{1}$. Note that $\mu_{\infty}<\mu_{0}$ thanks to 2.3. 
Then, taking into account Lemma 2.1, set $\omega_{\infty}=\omega_{\mu_{\infty}}$ and for $(x,y)\in\mathbb{R}^{N}\times(-1-\sigma,1+\sigma)$ consider $\psi_{\infty}(x,y)=\frac{C_{\infty}}{c_{\infty}\mu_{\infty}}\sum_{j=1}^{N}\left(\omega_{\infty}(y)-c_{\infty}\right)\cosh(\sqrt{\mu_{\infty}}x_{j}),$ where $0<c_{\infty}<\inf\limits_{(-1-\sigma,1+\sigma)}\omega_{\infty}$. Clearly $\psi_{\infty}>0$ in $\overline{\Omega}_{\varepsilon}$ and $\psi_{\infty}$ satisfies the following inequality $\displaystyle-\Delta\psi_{\infty}-\lambda f^{\prime}(u_{0})\psi_{\infty}$ $\displaystyle=\frac{C_{\infty}}{c_{\infty}\mu_{\infty}}\sum_{j=1}^{N}c_{\infty}\left(\mu_{\infty}+\lambda f^{\prime}(u_{0})\right)\cosh(\sqrt{\mu_{\infty}}x_{j})$ $\displaystyle\geq C_{\infty}\sum_{j=1}^{N}\cosh(2\sqrt{\mu_{1}}x_{j}),$ which together with 2.17 gives $\begin{cases}-\Delta(\Psi_{\varepsilon}-\psi_{\infty})-\lambda f^{\prime}(u_{0})(\Psi_{\varepsilon}-\psi_{\infty})\leq 0&\text{in}\ \Omega_{\varepsilon}\\\ \Psi_{\varepsilon}-\psi_{\infty}<0&\text{on}\ \partial\Omega_{\varepsilon}\end{cases}$ and again the maximum principle provides $\Psi_{\varepsilon}-\psi_{\infty}<0$ in $\Omega_{\varepsilon}$. For $C(K)=\max_{K}\psi_{\infty}$ the proof is complete. ∎ ### 2.3. Proof of Theorem 1.1 ###### Proof. We have that $(i)$ and $(ii)$ follow by $(iv)$ and $(iii)$ of Lemma 2.2 respectively. The proof of $(iii)$ is given in Lemma 2.4. Let us prove $(iv)$. By Lemma 2.3 we have that $u_{0}+\varepsilon\varphi$ admits $k$ strict maximum points. Fix a compact set $K\subset\\!\subset\Omega_{\varepsilon}$ containing such points. On the other hand Lemma 2.9 implies $u_{\varepsilon}=u_{0}+\varepsilon\varphi+O(\varepsilon^{2})$ in $K$ and so the claim follows. ∎ ###### Remark 2.10. 
We can prove a slightly more general version of Theorem 1.1: indeed assumption 1.3 can be dropped and we can simply ask that there exists a stable solution $u_{0}$ of $\begin{cases}-u^{\prime\prime}=g(u)&\hbox{in }(-1,1)\\\ u>0&\hbox{in }(-1,1)\\\ u(\pm 1)=0.\end{cases}$ Finally we build $\Omega_{\varepsilon}$ as before and then ask for the existence of a stable solution $u_{\varepsilon}$ of problem 1.1 in $\Omega_{\varepsilon}$. ###### Remark 2.11. Let us show that the assumption that $u_{\varepsilon}$ is a _stable_ solution is crucial in our construction. To do this let us assume $N=1$ for simplicity and consider $f(t)=\lambda_{1}t$, where $\lambda_{1}$ is the first eigenvalue of the Dirichlet problem. In this case the first eigenvalue of the linearized problem at the first eigenfunction is $0$. Let us see that it is not possible to construct a domain $\Omega_{\varepsilon}$ as in the previous section. Indeed if we argue as before we have that $u_{0}(y)=\cos\left(\frac{\pi}{2}y\right)$ is the solution of $\begin{cases}-u^{\prime\prime}=\frac{\pi^{2}}{4}u&\hbox{in }(-1,1)\\\ u>0&\hbox{in }(-1,1)\\\ u(\pm 1)=0.\end{cases}$ Now, for $n\in\mathbb{N}$, $\alpha_{i}\in\mathbb{R}$ (again with $\alpha_{1}=-1$) and $\mu_{i}>0$ for $i=1,\dots,n$, we have that $\varphi(x,y)=\sum_{i=1}^{n}\alpha_{i}\cosh(\sqrt{\mu_{i}}x)\cos\left(\sqrt{\pi^{2}/4+\mu_{i}}y\right),$ solves the linearized problem, i.e. $-\Delta\varphi=\frac{\pi^{2}}{4}\varphi\quad\hbox{in }\mathbb{R}^{2}.$ As in the general case we observe that $u_{0}(0)+\varepsilon\varphi(0,0)>0$ for $\varepsilon$ small enough and then we set $\Omega_{\varepsilon}=\set{u_{0}+\varepsilon\varphi>0}$. Now for any $\mu_{1}>0$ set $\bar{y}=\frac{\frac{\pi}{2}}{\sqrt{\pi^{2}/4+\mu_{1}}}\in(0,1),$ and then we can find $\delta>0$ sufficiently small such that if $\varepsilon$ is small enough it holds $\mathbb{R}\times\set{y=\bar{y}+\delta}\subseteq\Omega_{\varepsilon},$ showing that the domain $\Omega_{\varepsilon}$ is not bounded. 
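The inclusion $\mathbb{R}\times\set{y=\bar{y}+\delta}\subseteq\Omega_{\varepsilon}$ can be checked numerically (a sketch with the illustrative choice $n=1$, $\mu_{1}=1$; the parameter values are only for display):

```python
import numpy as np

# Numerical check of the unboundedness claim (sketch, n = 1, mu_1 = 1).
# With u0(y) = cos(pi*y/2) and phi(x, y) = -cosh(x)*cos(beta*y),
# beta = sqrt(pi^2/4 + 1), take y = ybar + delta slightly above
# ybar = (pi/2)/beta: there the cosine factor is negative, so
# u0 + eps*phi stays positive for ALL x on that horizontal line.
mu1 = 1.0
beta = np.sqrt(np.pi**2 / 4 + mu1)
ybar = (np.pi / 2) / beta
delta = 0.05
y = ybar + delta
eps = 1e-3

x = np.linspace(-50.0, 50.0, 100001)
u = np.cos(np.pi * y / 2) - eps * np.cosh(x) * np.cos(beta * y)
print(u.min())  # strictly positive on the whole line
```

Since the minimum over a long range of $x$ stays bounded away from $0$, the superlevel set cannot be bounded.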
This shows that our construction fails. ## 3\. The torsion problem: proof of Theorem 1.2 In this section we take $x\in\mathbb{R}$ and $y=(y_{1},\dots,y_{N})\in\mathbb{R}^{N}$ and we assume the hypotheses of Theorem 1.2. We construct a solution $u_{\varepsilon}$ of the torsion problem ($g(u)=\mathrm{Const.}$) with $k$ maximum points in a domain $\Omega_{\varepsilon}$ whose boundary has positive mean curvature. Here the domain $\Omega_{\varepsilon}$ and the function $u_{\varepsilon}$ are similar to the ones defined in Section 2. Let us start by introducing the following function $u_{\varepsilon}:\mathbb{R}^{N+1}\to\mathbb{R}$, given by $u_{\varepsilon}(x,y)=u_{0}(y)+\varepsilon\varphi(x,y)\quad x\in\mathbb{R},\ y\in\mathbb{R}^{N},$ where $u_{0}(y)=\frac{1}{2}\sum_{j=1}^{N}\left(1-y_{j}^{2}\right)=\frac{1}{2}\left(N-\lvert y\rvert^{2}\right),$ which solves (3.1) $\begin{cases}-\Delta u=N&\hbox{in }\mathcal{C}\\\ u=0&\hbox{on }\partial\mathcal{C}\end{cases}$ in the cylinder $\mathcal{C}=\\{(x,y)\in\mathbb{R}^{N+1}|\lvert y\rvert^{2}<N\\}$. Finally $\varphi$ is a harmonic function in the whole $\mathbb{R}^{N+1}$ defined by $\varphi(x,y)=\sum_{j=1}^{N}v(x,y_{j}),$ where $v(t,s)=\Re(F_{k}(t+is))$, for $t,s\in\mathbb{R}$ with $\displaystyle F_{k}(t+is)$ $\displaystyle=-\prod_{\ell=1}^{k}\left[(t-t_{\ell}+is)(t+t_{\ell}+is)\right]$ $\displaystyle=-\prod_{\ell=1}^{k}\left(t^{2}-s^{2}-t_{\ell}^{2}+2its\right),\qquad\quad\text{for }0<t_{1}<\dots<t_{k},$ and $\Re(\cdot)$ stands for the real part of a complex function. 
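Before listing the properties of $v$, here is a quick numerical sketch (illustration only) for $k=2$ with $t_{1}=1$, $t_{2}=2$: it checks that $v$ is harmonic, as it must be for the real part of a holomorphic function, that $v(\pm t_{\ell},0)=0$, and that $q(t)=v(t,0)$ has its local maxima strictly between consecutive roots, as used later in the proof of Lemma 3.3.

```python
import numpy as np

# Numerical sketch for k = 2 with roots t_1 = 1, t_2 = 2:
#   F_2(z) = -(z^2 - 1)(z^2 - 4),   v(t, s) = Re F_2(t + i*s).
def v(t, s):
    z = t + 1j * s
    return np.real(-(z**2 - 1.0) * (z**2 - 4.0))

# (a) v is harmonic: the five-point discrete Laplacian is ~ 0
h = 1e-3
laplacians = [
    (v(t0 + h, s0) + v(t0 - h, s0) + v(t0, s0 + h) + v(t0, s0 - h)
     - 4.0 * v(t0, s0)) / h**2
    for t0, s0 in [(0.3, 0.7), (-1.2, 0.4), (2.5, -0.9)]
]

# (b) q(t) = v(t, 0) = -(t^2-1)(t^2-4) vanishes at +-1, +-2 and has
# its two local maxima at t = +-sqrt(5/2), inside (1, 2) and (-2, -1)
t = np.linspace(-3.0, 3.0, 60001)
q = v(t, 0.0)
idx = np.where((q[1:-1] > q[:-2]) & (q[1:-1] > q[2:]))[0] + 1
maxima = t[idx]
print(laplacians, maxima)
```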
Note that $v$ is symmetric with respect to both $\\{t=0\\}$ and $\\{s=0\\}$ and it can be written as (3.2) $v(t,s)=-\sum_{h=0}^{2k}a_{h}P_{h}(t,s),$ where $P_{h}$ is a harmonic polynomial of degree $h$, $a_{2k}=1$ and (3.3) $P_{2k}(t,s)=\sum_{\ell=0}^{k}b_{\ell}t^{2k-2\ell}s^{2\ell},\quad b_{0}=b_{k}=1.$ Summing up, for $x\in\mathbb{R}$ and $y\in\mathbb{R}^{N}$ we have $\boxed{\begin{split}u_{\varepsilon}(x,y)&=u_{0}(y)+\varepsilon\varphi(x,y)\\\ &=\frac{1}{2}\left(N-\lvert y\rvert^{2}\right)+\varepsilon\sum_{j=1}^{N}v(x,y_{j})\\\ &=\frac{1}{2}\sum_{j=1}^{N}\left(1-y_{j}^{2}\right)-\varepsilon\sum_{j=1}^{N}\sum_{h=0}^{2k}a_{h}P_{h}(x,y_{j}).\end{split}}$ Since $F_{k}:\mathbb{C}\to\mathbb{C}$ is holomorphic, it easily follows that $\varphi$ is harmonic and then $u_{\varepsilon}$ satisfies $-\Delta u_{\varepsilon}=N$. Finally, we point out that $\partial_{y_{i}y_{j}}u_{\varepsilon}=0$ for all $i\not=j$. ### 3.1. Preliminary results In this section we show some properties of the function $u_{\varepsilon}$ and of the domain $\Omega_{\varepsilon}$ that we are going to define. As in Section 2 we point out that $u_{\varepsilon}(0,0,\dots,0)=\frac{N}{2}+\varepsilon\sum_{j=1}^{N}v(0,0)\geq\frac{N}{4}>0,$ for $\varepsilon$ small enough and we denote by $\Omega_{\varepsilon}$ the connected component of $\set{u_{0}+\varepsilon\varphi>0}$ containing the origin. The following lemma proves some properties of the set $\Omega_{\varepsilon}$. ###### Lemma 3.1. The set $\Omega_{\varepsilon}$ satisfies the following properties. 1. (i) $\Omega_{\varepsilon}\subseteq C_{\varepsilon}$ for $\varepsilon$ small enough, where $C_{\varepsilon}=\Set{(x,y)\in\mathbb{R}^{N+1}}{x\in(-M_{\varepsilon},M_{\varepsilon}),\,\lvert y\rvert^{2}<N(1+\eta)^{2}},$ for some $0<\eta<1$, and $M_{\varepsilon}=\varepsilon^{-\frac{1}{2k}}$. 2. (ii) $\Omega_{\varepsilon}\supseteq[-t_{k},t_{k}]\times\\{0\\}^{N}$. 3. (iii) Let $(x^{\varepsilon},y^{\varepsilon})\in\partial\Omega_{\varepsilon}$. 
If $\lvert y^{\varepsilon}\rvert\to 0$ then we have (3.4) $\lvert x^{\varepsilon}\rvert=\left(2\varepsilon\right)^{-\frac{1}{2k}}(1+o(1))\to+\infty.$ On the other hand, if $\lvert x^{\varepsilon}\rvert\leq C$, then $\lvert y^{\varepsilon}\rvert^{2}\to N.$ 4. (iv) $\Omega_{\varepsilon}$ is symmetric with respect to the hyperplanes $x=0$ and $y_{j}=0$ for $j=1,\dots,N$. Moreover, it is a smooth and star-shaped domain with respect to the origin for $\varepsilon$ small enough. ###### Proof. To prove $(i)$ we first show that (3.5) $u_{\varepsilon}\leq-1/2,\quad\text{on }\Set{(x,y)\in\mathbb{R}^{N+1}}{x=\pm M_{\varepsilon},\,\lvert y\rvert^{2}<N(1+\eta)^{2}},$ for $\varepsilon$ small enough. Indeed by 3.3 we get $\varepsilon P_{2k}(\pm M_{\varepsilon},s)=\varepsilon\sum_{\ell=0}^{k}b_{\ell}\left(\varepsilon^{-\frac{1}{2k}}\right)^{2k-2\ell}s^{2\ell}=1+o(1),\quad\text{as }\varepsilon\to 0,$ uniformly with respect to $\lvert s\rvert<\sqrt{N}(1+\eta)$. Similarly we have $\varepsilon P_{h}(\pm M_{\varepsilon},s)=o(1),\quad\text{for all }0\leq h\leq 2k-1.$ Finally, for $x=\pm M_{\varepsilon}$ and $\lvert y\rvert^{2}\leq N(1+\eta)^{2}$ we have $\displaystyle u_{\varepsilon}(x,y)\leq\frac{N}{2}+\varepsilon\sum_{j=1}^{N}v(\pm M_{\varepsilon},y_{j})(1+o(1))=\frac{N}{2}-N+o(1)\leq-\frac{1}{2}.$ On the other hand by 3.2 and since $a_{2k}=1$ we get $\sup_{t\in\mathbb{R}}\max_{s\in[-\sqrt{N}(1+\eta),\sqrt{N}(1+\eta)]}v(t,s)=C\in\mathbb{R}.$ Then for all $(x,y)\in\overline{C}_{\varepsilon}$ with $\lvert y\rvert^{2}=N(1+\eta)^{2}$ we obtain $u_{\varepsilon}(x,y)=-\frac{N}{2}\eta^{2}-N\eta+\varepsilon\sum_{j=1}^{N}v(x,y_{j})<-\frac{N}{2}\eta^{2}<0,$ for $\varepsilon$ small enough, which together with 3.5 proves $(i)$. Concerning $(ii)$, we know that the origin belongs to $\Omega_{\varepsilon}$, and since $u_{\varepsilon}$ is continuous, $\Omega_{\varepsilon}$ is an open and connected set. 
Finally if $\varepsilon$ satisfies $\varepsilon<\frac{u_{0}(0,\dots,0)}{\max_{x\in[-t_{k},t_{k}]}(-\varphi(x,0,\dots,0))},$ then $[-t_{k},t_{k}]\times\\{0\\}^{N}\subseteq\Omega_{\varepsilon}$. In order to prove $(iii)$, let $(x^{\varepsilon},y^{\varepsilon})\in\partial\Omega_{\varepsilon}$. Then one has (3.6) $\frac{1}{2}\left(N-\lvert y^{\varepsilon}\rvert^{2}\right)=-\varepsilon\sum_{j=1}^{N}v(x^{\varepsilon},y_{j}^{\varepsilon}).$ If $\lvert x^{\varepsilon}\rvert\leq C$, $v(x^{\varepsilon},y_{j}^{\varepsilon})$ is bounded and then we easily get $\lvert y^{\varepsilon}\rvert^{2}\to N$. Then we can assume $\lvert x^{\varepsilon}\rvert\to+\infty$. In particular, for all $j=1,\dots,N$, it holds $v(x^{\varepsilon},y_{j}^{\varepsilon})=-(x^{\varepsilon})^{2k}(1+o(1))$ and from 3.6 we get $(x^{\varepsilon})^{2k}=\frac{1}{2}\left(1-\frac{\lvert y^{\varepsilon}\rvert^{2}}{N}\right)\varepsilon^{-1}(1+o(1))=\frac{1}{2}\varepsilon^{-1}(1+o(1)),$ and in particular 3.4 holds. The symmetry properties of the domain immediately follow from the ones of $u_{\varepsilon}$. 
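As a numerical illustration of 3.4 (a sketch for $N=2$, $k=2$, $t_{1}=1$, $t_{2}=2$; the parameters are only for display): on the axis $y=0$ one has $u_{\varepsilon}(x,0)=\frac{N}{2}+\varepsilon N q(x)$ with $q(x)=-(x^{2}-1)(x^{2}-4)$, so the positive boundary abscissa solves $q(x)=-\frac{1}{2\varepsilon}$ and should match $(2\varepsilon)^{-\frac{1}{2k}}$ as $\varepsilon\to 0$:

```python
# Numerical illustration (sketch for N = 2, k = 2, roots 1 and 2).
def q(x):
    return -(x**2 - 1.0) * (x**2 - 4.0)

def boundary_x(eps):
    # bisection for q(x) = -1/(2*eps) on [3, 1e6], where q is
    # negative and strictly decreasing
    target = -1.0 / (2.0 * eps)
    lo, hi = 3.0, 1e6
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if q(mid) > target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

k = 2
for eps in (1e-4, 1e-6, 1e-8):
    ratio = boundary_x(eps) / (2.0 * eps) ** (-1.0 / (2 * k))
    print(eps, ratio)  # ratio -> 1 as eps -> 0
```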
Then to finish the proof it is enough to prove that there exists $\alpha>0$ such that $x\partial_{x}u_{\varepsilon}+\sum_{j=1}^{N}y_{j}\partial_{y_{j}}u_{\varepsilon}\leq-\alpha<0,\quad\text{for all }(x,y)\in\partial\Omega_{\varepsilon}.$ We have $x\partial_{x}u_{\varepsilon}+\sum_{j=1}^{N}y_{j}\partial_{y_{j}}u_{\varepsilon}=-\sum_{j=1}^{N}y_{j}^{2}+\varepsilon\sum_{j=1}^{N}\left(xv_{t}(x,y_{j})+y_{j}v_{s}(x,y_{j})\right).$ On the other hand since $u_{\varepsilon}(x,y)=0$ on $\partial\Omega_{\varepsilon}$ we have $\sum_{j=1}^{N}y_{j}^{2}=N+2\varepsilon\sum_{j=1}^{N}v(x,y_{j}),$ and then $x\partial_{x}u_{\varepsilon}+\sum_{j=1}^{N}y_{j}\partial_{y_{j}}u_{\varepsilon}=-N+\varepsilon\sum_{j=1}^{N}\big{(}xv_{t}(x,y_{j})+y_{j}v_{s}(x,y_{j})-2v(x,y_{j})\big{)}.$ We have that $\displaystyle tv_{t}(t,s)+sv_{s}(t,s)-2v(t,s)$ $\displaystyle=-\sum_{h=0}^{2k}a_{h}\left(t\partial_{t}P_{h}(t,s)+s\partial_{s}P_{h}(t,s)-2P_{h}(t,s)\right)$ $\displaystyle=-\sum_{h=0}^{2k}(h-2)a_{h}P_{h}(t,s)\to-\infty,$ for $\lvert t\rvert\to+\infty$ uniformly with respect to $\lvert s\rvert<\sqrt{N}(1+\eta)$. Hence $\sup_{(t,s)\in\mathbb{R}\times[-\sqrt{N}(1+\eta),\sqrt{N}(1+\eta)]}tv_{t}(t,s)+sv_{s}(t,s)-2v(t,s)=d<+\infty,$ and then $\sum_{j=1}^{N}\left(xv_{t}(x,y_{j})+y_{j}v_{s}(x,y_{j})-2v(x,y_{j})\right)\leq Nd<+\infty.$ Finally $\sup_{\partial\Omega_{\varepsilon}}\left(x\partial_{x}u_{\varepsilon}+\sum_{j=1}^{N}y_{j}\partial_{y_{j}}u_{\varepsilon}\right)\leq-N+o(1)\leq-\frac{N}{2},$ for $\varepsilon$ small enough. Of course $x\partial_{x}u_{\varepsilon}+\sum_{j=1}^{N}y_{j}\partial_{y_{j}}u_{\varepsilon}\not=0$ on $\partial\Omega_{\varepsilon}$ implies that $\partial\Omega_{\varepsilon}$ is a smooth hypersurface. ∎ ###### Remark 3.2. In particular from $(iii)$ of Lemma 3.1 we deduce that $\Omega_{\varepsilon}$ locally converges to the cylinder $\mathcal{C}=\set{(x,y)\in\mathbb{R}^{N+1}}{\lvert y\rvert^{2}<N}$. 
Equation 3.4 will be useful in the computation of the curvature of $\partial\Omega_{\varepsilon}$ in the next subsection. ###### Lemma 3.3. The function $u_{\varepsilon}$ has at least $k$ different nondegenerate local maxima in $\Omega_{\varepsilon}$ for $\varepsilon$ small enough. ###### Proof. The proof is similar to the one of Lemma 2.3. For $q(t)=\Re\left(F_{k}(t+i0)\right)=-\prod_{\ell=1}^{k}(t-t_{\ell})(t+t_{\ell})=v(t,0),$ we have $q(t)=0$ if and only if $t=\pm t_{\ell}$ for some $\ell=1,\dots,k$ and $q(t)\to-\infty$ as $\lvert t\rvert\to+\infty$. Now assume $k$ is even; the case $k$ odd follows with minor changes. Then there exist $\bar{t}_{\ell}\in(t_{2\ell+1},t_{2\ell+2})$ with $\ell=0,\dots,k/2-1$ such that $q^{\prime}(\bar{t}_{\ell})=0,\quad\text{and}\quad q^{\prime\prime}(\bar{t}_{\ell})<0\quad\forall\ell=0,\dots,k/2-1,$ see also Lemma A.2. Moreover, from the definition of $v$, since only even powers of $s$ appear, we get that $\partial_{s}v(t,0)=\partial_{ts}v(t,0)=0$ for all $t\in\mathbb{R}$. Then a straightforward computation gives $\nabla u_{\varepsilon}(\bar{t}_{\ell},0,\dots,0)=0.$ Next, for all $j=1,\dots,N$ and for all $\ell=0,\dots,k/2-1$, we have (3.7) $\partial_{y_{j}y_{j}}u_{\varepsilon}(\bar{t}_{\ell},0,\dots,0)=-1+\varepsilon\partial_{ss}v(\bar{t}_{\ell},0)<0,$ for $\varepsilon$ small enough. Finally in $(\bar{t}_{\ell},0,\dots,0)$ one has $\displaystyle\partial_{xx}u_{\varepsilon}$ $\displaystyle=\varepsilon Nq^{\prime\prime}(\bar{t}_{\ell})<0,$ $\displaystyle\partial_{y_{i}y_{j}}u_{\varepsilon}$ $\displaystyle=0,\qquad\forall i\not=j,$ $\displaystyle\partial_{xy_{j}}u_{\varepsilon}$ $\displaystyle=\varepsilon\partial_{ts}v(\bar{t}_{\ell},0)=0,$ which, together with 3.7, show that the Hessian matrix of $u_{\varepsilon}$ is negative definite in $(\bar{t}_{\ell},0,\dots,0)$ for all $\ell=0,\dots,k/2-1$ and the proof is complete since $u_{\varepsilon}$ is even in the $x$ variable. ∎ ###### Remark 3.4. 
We point out that $\Omega_{\varepsilon}$ is not convex. Indeed, we know from Lemma 3.1 that the domain is symmetric with respect to $\\{x=0\\}$ and $\\{y_{j}=0\\}$ for all $j=1,\dots,N$, and by the well-known result of [GNN79] the domain cannot be convex: otherwise every solution of problem 1.1 would have exactly one critical point, in contradiction with Lemma 3.3. ### 3.2. Curvature of the domain In this section we prove that the domain $\Omega_{\varepsilon}$ previously defined has positive mean curvature. Let us start with a technical lemma that gives an explicit formula for the mean curvature of hypersurfaces which arise as the preimage of a regular value of a real-valued function. The proof is postponed to the Appendix. ###### Lemma 3.5. Let $\Sigma=F^{-1}(0)$, for some $F\in\mathcal{C}^{2}(\mathbb{R}\times\mathbb{R}^{N},\mathbb{R})$. Assume $0$ is a regular value for $F$ and $F_{y_{i}y_{j}}=0$ for all $i\not=j$. Then the _mean curvature_ of $\Sigma$ is given by $K_{m}=-\frac{1}{N\lvert\nabla F\rvert^{3}}\left[\sum_{j=1}^{N}\left(F_{x}^{2}F_{y_{j}y_{j}}-2F_{x}F_{y_{j}}F_{xy_{j}}+F_{y_{j}}^{2}F_{xx}\right)+\sum_{j=1}^{N}F_{y_{j}}^{2}\sum_{\begin{subarray}{c}\ell=1\\\ \ell\not=j\end{subarray}}^{N}F_{y_{\ell}y_{\ell}}\right].$ Finally, we are able to compute the mean curvature of the boundary of the domain. ###### Lemma 3.6. The mean curvature of the boundary of $\Omega_{\varepsilon}$ is strictly positive everywhere. ###### Proof. We will apply the previous lemma to $F(x,y)=u_{\varepsilon}(x,y)$. Note that $\nabla u_{\varepsilon}\not=0$ on $\partial\Omega_{\varepsilon}$ from $(iv)$ of Lemma 3.1. 
Let $(x^{\varepsilon},y^{\varepsilon})\in\partial\Omega_{\varepsilon}$. From the asymptotic behavior of the derivatives of $v(t,s)$ as $\lvert t\rvert\to\infty$ we have $\displaystyle v_{t}$ $\displaystyle=-2kt^{2k-1}(1+o(1)),$ $\displaystyle v_{s}$ $\displaystyle=c_{k}t^{2k-2}s(1+o(1)),$ $\displaystyle v_{tt}$ $\displaystyle=-2k(2k-1)t^{2k-2}(1+o(1)),$ $\displaystyle v_{ts}$ $\displaystyle=c^{\prime}_{k}t^{2k-3}s(1+o(1)),$ $\displaystyle v_{ss}$ $\displaystyle=c_{k}t^{2k-2}(1+o(1)),$ and from the estimate $\lvert x^{\varepsilon}\rvert\leq\varepsilon^{-\frac{1}{2k}}$ we get that for all $j=1,\dots,N$ the following quantities $\varepsilon v_{t}(x^{\varepsilon},y_{j}^{\varepsilon}),\quad\varepsilon v_{s}(x^{\varepsilon},y_{j}^{\varepsilon}),\quad\varepsilon v_{tt}(x^{\varepsilon},y_{j}^{\varepsilon}),\quad\varepsilon v_{ts}(x^{\varepsilon},y_{j}^{\varepsilon}),\quad\varepsilon v_{ss}(x^{\varepsilon},y_{j}^{\varepsilon}),$ go to $0$ as $\varepsilon\to 0$. Then we proceed by considering the cases $\lvert y^{\varepsilon}\rvert\not\to 0$ and $\lvert y^{\varepsilon}\rvert\to 0$. _Case $\lvert y^{\varepsilon}\rvert\not\to 0$._ We point out that for $\varepsilon$ small enough there exists $j\in\set{1,\dots,N}$ such that $\partial_{y_{j}}u_{\varepsilon}\not=0$, otherwise $\lvert y^{\varepsilon}\rvert\to 0$. Then from Lemma 3.5 we have $K_{m}=-\frac{-(N-1)\lvert y^{\varepsilon}\rvert^{2}(1+o(1))}{N\left(\lvert y^{\varepsilon}\rvert^{2}(1+o(1))\right)^{\frac{3}{2}}}=\frac{N-1}{N\lvert y^{\varepsilon}\rvert}(1+o(1))>0.$ Note that the assumption $N\geq 2$ is crucial. Indeed, if $N=1$ the curvature changes sign; see [GG19]. _Case $\lvert y^{\varepsilon}\rvert\to 0$._ In this case, by 3.4 we have that $x^{\varepsilon}\to+\infty$ and for all $j=1,\dots,N$ fixed $\partial_{y_{j}}u_{\varepsilon}=o(1)$. 
Recalling 3.4 again, the following estimates hold true $\displaystyle(\partial_{y_{j}}u_{\varepsilon})^{2}\partial_{xx}u_{\varepsilon}$ $\displaystyle=o(\varepsilon^{1-\frac{2k-2}{2k}})=o(\varepsilon^{\frac{1}{k}}),$ $\displaystyle\partial_{x}u_{\varepsilon}\partial_{y_{j}}u_{\varepsilon}\partial_{xy_{j}}u_{\varepsilon}$ $\displaystyle=o\left(\varepsilon^{1-\frac{2k-1}{2k}}\varepsilon^{1-\frac{2k-3}{2k}}\right)=o(\varepsilon^{\frac{2}{k}})=o(\varepsilon^{\frac{1}{k}}),$ $\displaystyle(\partial_{x}u_{\varepsilon})^{2}\partial_{y_{j}y_{j}}u_{\varepsilon}$ $\displaystyle=-\left(-2Nk\varepsilon(x^{\varepsilon})^{2k-1}\right)^{2}(1+o(1))$ $\displaystyle=-2^{\frac{1}{k}}N^{2}k^{2}\varepsilon^{\frac{1}{k}}(1+o(1)).$ This yields (3.8) $(\partial_{y_{j}}u_{\varepsilon})^{2}\partial_{xx}u_{\varepsilon}-2\partial_{x}u_{\varepsilon}\partial_{y_{j}}u_{\varepsilon}\partial_{xy_{j}}u_{\varepsilon}+(\partial_{x}u_{\varepsilon})^{2}\partial_{y_{j}y_{j}}u_{\varepsilon}=-2^{\frac{1}{k}}N^{2}k^{2}\varepsilon^{\frac{1}{k}}(1+o(1)).$ Moreover by similar computations (3.9) $\sum_{j=1}^{N}(\partial_{y_{j}}u_{\varepsilon})^{2}\sum_{\begin{subarray}{c}\iota=1\\\ \iota\not=j\end{subarray}}^{N}\partial_{y_{\iota}y_{\iota}}u_{\varepsilon}=-(N-1)(1+o(1))\sum_{j=1}^{N}(\partial_{y_{j}}u_{\varepsilon})^{2}\leq 0.$ Finally, we can apply Lemma 3.5, and putting together 3.8 and 3.9 we have $\displaystyle-N\lvert\nabla u_{\varepsilon}\rvert^{3}K_{m}$ $\displaystyle\leq\sum_{j=1}^{N}\left((\partial_{y_{j}}u_{\varepsilon})^{2}\partial_{xx}u_{\varepsilon}-2\partial_{x}u_{\varepsilon}\partial_{y_{j}}u_{\varepsilon}\partial_{xy_{j}}u_{\varepsilon}+(\partial_{x}u_{\varepsilon})^{2}\partial_{y_{j}y_{j}}u_{\varepsilon}\right)$ $\displaystyle=-2^{\frac{1}{k}}N^{3}k^{2}\varepsilon^{\frac{1}{k}}(1+o(1))<0,$ that is $K_{m}>0$. ∎ ### 3.3. Proof of Theorem 1.2 ###### Proof. The claims follow from Lemma 3.1, Lemma 3.3 and Lemma 3.6 considering $u_{\varepsilon}/N$. ∎ ###### Remark 3.7. 
It is also possible to treat the case $x=(x_{1},\dots,x_{M})\in\mathbb{R}^{M}$, with $M>1$, in such a way that the domain $\Omega_{\varepsilon}$ grows in $M$ directions. The proof works after replacing the function $u_{\varepsilon}$ with the following one: $\tilde{u}_{\varepsilon}(x,y)=\frac{1}{2}\sum_{j=1}^{N}\left(1-y_{j}^{2}\right)+\varepsilon\sum_{i=1}^{M}\sum_{j=1}^{N}v(x_{i},y_{j}).$ The computations are very similar to the case $M=1$. It is not difficult to generalize Lemma 3.5 taking into account that $\partial_{x_{i}x_{h}}u_{\varepsilon}=0$ for all $i\not=h$. ## Appendix A Here we show that there exist coefficients $\alpha_{i}\in\mathbb{R}$ such that the function introduced in subsection 2.1 $F(t)=\sum_{i=1}^{n}\alpha_{i}\cosh(\sqrt{\mu_{i}}t),$ admits $k$ nondegenerate maximum points. ###### Lemma A.1. For $k\in\mathbb{N}$ fixed, there exists $n=n(k)\in\mathbb{N}$ and $\alpha_{1},\dots,\alpha_{n}\in\mathbb{R}$ such that the function $F(t)=\sum_{i=1}^{n}\alpha_{i}\cosh(\sqrt{\mu_{i}}t),$ admits $k$ nondegenerate maximum points for $\alpha_{1}=-1$. ###### Proof. Let $1<\tau_{1}<\dots<\tau_{k}$. For some $n=n(k)\in\mathbb{N}$ consider a polynomial $P(t)=\sum_{j=1}^{n}a_{j}t^{j}$ such that $\displaystyle a_{n}=-1$ $\displaystyle P^{\prime}(\tau_{i})=0,\quad\forall i=1,\dots,k,$ $\displaystyle P^{\prime\prime}(\tau_{i})<0,\quad\forall i=1,\dots,k.$ Let $0<t_{1}<\dots<t_{k}$ be such that $\cosh(t_{i})=\tau_{i}$ for all $i=1,\dots,k$ and define $h(t)=P(\cosh(t))$. Then we have $h^{\prime}(t_{i})=0,\qquad h^{\prime\prime}(t_{i})<0,$ that is, $t_{1},\dots,t_{k}$ are nondegenerate maximum points for $h$. Up to a constant, from the binomial formula it is easy to see that for all $m\in\mathbb{N}$ $(\cosh(t))^{m}=\sum_{\ell=1}^{m}c(m,\ell)\cosh(\ell t),$ for suitable $c(m,\ell)>0$, with $c(m,m)=1$. Finally, for $\delta=\frac{\mu_{0}}{8n}$ the function $F(t)=\sum_{j=1}^{n}a_{j}\sum_{\ell=1}^{j}c(j,\ell)\cosh(\delta\ell t)$ is the function we were looking for. 
We point out that from the choice of $\delta$, 2.3 is satisfied. ∎ Now we prove that the critical points of the function $q(t)=-\prod_{\ell=1}^{k}(t^{2}-t_{\ell}^{2}),\quad\text{with }k\in\mathbb{N},\,\,\,k\geq 2\text{ and }0<t_{1}<\dots<t_{k},$ are nondegenerate. ###### Lemma A.2. Let $q(t)=-\prod_{\ell=1}^{k}(t^{2}-t_{\ell}^{2})$ with $k\in\mathbb{N}$, $k\geq 2$ and $0<t_{1}<\dots<t_{k}$. Then the critical points of $q$ are nondegenerate. ###### Proof. Let $k>2$ (the case $k=2$ is left to the reader). A straightforward computation shows that $q^{\prime}(0)=0$ and $q^{\prime\prime}(0)\not=0$. Now let $\tau\neq 0$ be such that $q^{\prime}(\tau)=0$. Of course $q(\tau)\not=0$ and $0=q^{\prime}(\tau)=-2\tau\sum_{\ell=1}^{k}\prod_{\begin{subarray}{c}h=1\\\ h\not=\ell\end{subarray}}^{k}(\tau^{2}-t_{h}^{2}).$ Finally, one has $\displaystyle q^{\prime\prime}(\tau)$ $\displaystyle=-4\tau^{2}\sum_{\ell=1}^{k}\sum_{\begin{subarray}{c}h=1\\\ h\not=\ell\end{subarray}}^{k}\prod_{\begin{subarray}{c}m=1\\\ m\not=\ell\\\ m\not=h\end{subarray}}^{k}(\tau^{2}-t_{m}^{2})$ $\displaystyle=-4\tau^{2}\sum_{\ell=1}^{k}\frac{1}{(\tau^{2}-t_{\ell}^{2})}\sum_{\begin{subarray}{c}h=1\\\ h\not=\ell\end{subarray}}^{k}\prod_{\begin{subarray}{c}m=1\\\ m\not=h\end{subarray}}^{k}(\tau^{2}-t_{m}^{2})$ $\displaystyle=-4\tau^{2}\sum_{\ell=1}^{k}\frac{1}{(\tau^{2}-t_{\ell}^{2})}\left[\underbrace{\sum_{h=1}^{k}\prod_{\begin{subarray}{c}m=1\\\ m\not=h\end{subarray}}^{k}(\tau^{2}-t_{m}^{2})}_{=0\hbox{ since }q^{\prime}(\tau)=0}-\prod_{\begin{subarray}{c}m=1\\\ m\not=\ell\end{subarray}}^{k}(\tau^{2}-t_{m}^{2})\right]$ $\displaystyle=4\tau^{2}\sum_{\ell=1}^{k}\frac{1}{(\tau^{2}-t_{\ell}^{2})}\prod_{\begin{subarray}{c}m=1\\\ m\not=\ell\end{subarray}}^{k}(\tau^{2}-t_{m}^{2})$ $\displaystyle=-4\tau^{2}q(\tau)\sum_{\ell=1}^{k}\frac{1}{(\tau^{2}-t_{\ell}^{2})^{2}}\not=0.\qed$ The following is the proof of Lemma 3.5 from Section 3. ###### Proof of Lemma 3.5. 
Let $\Phi=\frac{1}{\lvert\nabla F\rvert}$ and consider the normal field $\mathbf{N}=-\Phi\cdot(F_{x},F_{y_{1}},\dots,F_{y_{N}}).$ Then the mean curvature of $\Sigma$ is given by $K_{m}(p)=\frac{1}{N}\mathrm{tr}(d\mathbf{N}_{p}).$ Taking into account that $\displaystyle\Phi_{x}$ $\displaystyle=-\Phi^{3}\left(F_{x}F_{xx}+\sum_{j=1}^{N}F_{y_{j}}F_{xy_{j}}\right),$ $\displaystyle\Phi_{y_{j}}$ $\displaystyle=-\Phi^{3}\left(F_{x}F_{xy_{j}}+F_{y_{j}}F_{y_{j}y_{j}}\right),$ one has $\displaystyle-\mathrm{tr}(d\mathbf{N}_{p})=$ $\displaystyle\,\,\Phi\Delta F+\Phi_{x}F_{x}+\sum_{j=1}^{N}\Phi_{y_{j}}F_{y_{j}}$ $\displaystyle=$ $\displaystyle\,\,\Phi^{3}\left[\lvert\nabla F\rvert^{2}\left(F_{xx}+\sum_{j=1}^{N}F_{y_{j}y_{j}}\right)-\left(F_{x}F_{xx}+\sum_{j=1}^{N}F_{y_{j}}F_{xy_{j}}\right)F_{x}\right.$ $\displaystyle-\left.\sum_{j=1}^{N}\left(F_{x}F_{xy_{j}}+F_{y_{j}}F_{y_{j}y_{j}}\right)F_{y_{j}}\right]$ $\displaystyle=$ $\displaystyle\,\,\Phi^{3}\left[\sum_{j=1}^{N}\left(F_{x}^{2}F_{y_{j}y_{j}}-2F_{x}F_{y_{j}}F_{xy_{j}}+F_{y_{j}}^{2}F_{xx}\right)+\sum_{j=1}^{N}F_{y_{j}}^{2}\sum_{\begin{subarray}{c}\ell=1\\\ \ell\not=j\end{subarray}}^{N}F_{y_{\ell}y_{\ell}}\right]$ which yields the claim. ∎ ## References * [APP81] A. Acker, L. E. Payne, and G. Philippin. On the convexity of level lines of the fundamental mode in the clamped membrane problem, and the existence of convex solutions in a related free boundary problem. Z. Angew. Math. Phys., 32(6):683–694, 1981. * [Ban80] C. Bandle. Isoperimetric inequalities and applications. Pitman Boston, 1980. * [BL76] H. J. Brascamp and E. H. Lieb. On extensions of the Brunn-Minkowski and Prékopa-Leindler theorems, including inequalities for log concave functions, and with an application to the diffusion equation. J. Functional Analysis, 22(4):366–389, 1976. * [CC98] X. Cabré and S. Chanillo. Stable solutions of semilinear elliptic problems in convex domains. Selecta Math. (N.S.), 4(1):1–10, 1998. * [CR75] M. G. Crandall and P. H. Rabinowitz. 
Some continuation and variational methods for positive solutions of nonlinear elliptic eigenvalue problems. Arch. Rational Mech. Anal., 58(3):207–218, 1975. * [DRGM21] F. De Regibus, M. Grossi, and D. Mukherjee. Uniqueness of the critical point for semi-stable solutions in $\mathbb{R}^{2}$. Calc. Var. Partial Differential Equations, 60(1):25, 2021. * [GG19] F. Gladiali and M. Grossi. On the number of critical points of solutions of semilinear equations in $\mathbb{R}^{2}$. Preprint, arXiv:1907.09895, 2019. * [GNN79] B. Gidas, W. M. Ni, and L. Nirenberg. Symmetry and related properties via the maximum principle. Comm. Math. Phys., 68(3):209–243, 1979. * [ML71] L. G. Makar-Limanov. The solution of the Dirichlet problem for the equation $\Delta u=-1$ in a convex region. Mat. Zametki, 9:89–92, 1971. * [MP80] F. Mignot and J. P. Puel. Sur une classe de problèmes non linéaires avec non linéairité positive, croissante, convexe. Comm. Partial Differential Equations, 5(8):791–836, 1980.
# Large area-constrained Willmore surfaces in asymptotically Schwarzschild $3$-manifolds Michael Eichmair University of Vienna, Oskar-Morgenstern-Platz 1, 1090 Vienna, Austria <EMAIL_ADDRESS>and Thomas Koerber University of Vienna, Oskar-Morgenstern-Platz 1, 1090 Vienna, Austria <EMAIL_ADDRESS> (Date: August 27, 2024) ###### Abstract. We apply the method of Lyapunov-Schmidt reduction to study large area-constrained Willmore surfaces in Riemannian $3$-manifolds asymptotic to Schwarzschild. In particular, we prove that the end of such a manifold is foliated by distinguished area-constrained Willmore spheres. The leaves are the unique area-constrained Willmore spheres with large area, non-negative Hawking mass, and distance to the center of the manifold at least a small multiple of the area radius. Unlike previous related work, we only require that the scalar curvature satisfies mild asymptotic conditions. We also give explicit examples to show that these conditions on the scalar curvature are necessary. ## 1. Introduction Let $(M,g)$ be an asymptotically flat Riemannian $3$-manifold with non-negative scalar curvature. Such manifolds arise as maximal initial data sets for the Einstein field equations and thus play an important role in general relativity. Let $\Sigma\subset M$ be a sphere with unit normal $\nu$, mean curvature vector $-H\,\nu$, area measure $\text{d}\mu$, and area $|\Sigma|$. The Hawking mass $m_{H}(\Sigma)=\sqrt{\frac{|\Sigma|}{16\,\pi}}\bigg{(}1-\frac{1}{16\,\pi}\int_{\Sigma}H^{2}\,\text{d}\mu\bigg{)}$ of $\Sigma$ has been used to probe the gravitational field in the domain bounded by $\Sigma$; see e.g. [19, 13]. R. Geroch [18, p. 115] has noted that the Hawking mass does not increase if $\Sigma$ flows in the direction of the unit normal $\nu$ at a speed equal to $H^{-1}$, provided $H>0$. 
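As a consistency check on this definition (a standard computation, stated here in terms of the area radius $r$ defined by $|\Sigma|=4\pi r^{2}$), the Hawking mass of a centered sphere of symmetry in the Schwarzschild manifold of mass $m$ recovers $m$ exactly: such a sphere has mean curvature $H=\frac{2}{r}\sqrt{1-\frac{2m}{r}}$, so

```latex
m_{H}(\Sigma)
  =\sqrt{\frac{4\pi r^{2}}{16\pi}}
   \left(1-\frac{1}{16\pi}\cdot 4\pi r^{2}H^{2}\right)
  =\frac{r}{2}\left(1-\Bigl(1-\frac{2m}{r}\Bigr)\right)
  =m.
```

In flat $\mathbb{R}^{3}$, where $H=2/r$, the same formula gives $m_{H}(\Sigma)=0$ for round spheres.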
Moreover, he has proposed a proof of the positive energy theorem based on evolving by inverse mean curvature flow a small geodesic sphere in $(M,g)$ with Hawking mass close to zero into a large, centered sphere in the asymptotically flat end whose Hawking mass is close to the ADM-mass of $(M,g)$. Expanding upon Geroch’s idea, P. S. Jang and R. Wald [23, p. 43] have sketched a proof of the Riemannian Penrose inequality in the special case where the apparent horizon is connected. These programs have been completed in the paper [20] by G. Huisken and T. Ilmanen, where a suitable, necessarily non-smooth notion of inverse mean curvature flow is developed. H. Bray has proven the Riemannian Penrose inequality with no restriction on the number of boundary components in [3] using a different method. D. Christodoulou and S.-T. Yau [13] have noted that the Hawking mass of stable constant mean curvature spheres is non-negative. Note that $m_{H}(\Sigma)\leq 0$ in flat $\mathbb{R}^{3}$ with equality if and only if $\Sigma$ is a round sphere. The apparent tension between these results is indicative of the potential role of the Hawking mass as a measure of the gravitational field. In this connection, note that stable constant mean curvature surfaces abound in every initial data set. Indeed, as discussed in Appendix K of [7], there exist isoperimetric regions of every volume. To describe our contributions here, we say that $(M,g)$ is $C^{k}$-asymptotic to Schwarzschild with mass $m>0$ if there is a non-empty compact set $K\subset M$ such that the end $M\setminus K$ is diffeomorphic to $\\{x\in\mathbb{R}^{3}:|x|>1\\}$ and such that, in this so-called chart at infinity, there holds $g=\bigg{(}1+\frac{m}{2\,|x|}\bigg{)}^{4}\bar{g}+\sigma.$ Here, $x$ is the Euclidean position vector and $\bar{g}$ is the Euclidean metric on $\mathbb{R}^{3}$, while $\sigma$ is a symmetric two-tensor that satisfies $\partial_{J}\sigma=O(|x|^{-2-|J|})$ for every multi-index $J$ with $|J|\leq k$. 
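In particular, the Schwarzschild part of such a metric is scalar flat; this is a consequence of the standard conformal transformation law for the scalar curvature in three dimensions and the harmonicity of $|x|^{-1}$. A short verification:

```latex
% For g_S = u^4 \bar{g} on a domain in \mathbb{R}^3, the scalar curvature transforms as
% R_{g_S} = -8\, u^{-5}\, \bar{\Delta} u.
u(x)=1+\frac{m}{2\,|x|},
\qquad
\bar{\Delta}\,\frac{1}{|x|}=0
\quad\text{on }\mathbb{R}^{3}\setminus\{0\},
\qquad\text{hence}\qquad
R_{g_S}=-8\,u^{-5}\,\bar{\Delta}u=0.
```

The scalar curvature of $g$ is therefore governed entirely by the perturbation $\sigma$, which is why the hypotheses in the results below are phrased as decay and symmetry conditions on $R$.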
Note that $(M,g)$ is modeled upon the initial data of a Schwarzschild black hole given by (1) $\displaystyle\bigg{(}\mathbb{R}^{3}\setminus B_{\frac{m}{2}}(0),\left(1+\frac{m}{2\,|x|}\right)^{4}\bar{g}\bigg{)}.$ We say that a surface $\Sigma\subset M$ is on-center if it bounds a compact region that contains $K$. If $\Sigma$ bounds a compact region disjoint from $K$, it will be called outlying. In pioneering work [21], G. Huisken and S.-T. Yau have shown that an end that is $C^{4}$-asymptotic to Schwarzschild with positive mass is foliated by stable constant mean curvature spheres. This foliation detects fundamental physical quantities associated with the initial data set such as the ADM mass and the ADM center of mass. Moreover, they have shown that the leaves of the foliation are the only stable constant mean curvature spheres of their respective mean curvature within large classes of surfaces. The original characterization of the leaves in [21] has been sharpened by J. Qing and G. Tian in [35], by S. Brendle and the first-named author in [6], and by A. Carlotto, O. Chodosh, and the first-named author in [7]. The optimal uniqueness result for large stable constant mean curvature spheres in asymptotically Schwarzschild initial data sets has recently been obtained by O. Chodosh and the first-named author [10, 11]. The characterization of the leaves of the foliation as the unique solutions of the isoperimetric problem for large volumes has been established by H. Bray in [4] for exact Schwarzschild (1) and by J. Metzger and the first-named author in [16, 17] for initial data asymptotic to Schwarzschild. In fact, these optimal global uniqueness results for large isoperimetric surfaces hold for asymptotically flat manifolds with positive mass, in particular for the examples constructed by A. Carlotto and R. Schoen in [8], as has recently been shown by O. Chodosh, Y. Shi, H. Yu, and the first-named author in [12] and by H. Yu in [39]. 
A different approach to obtain surfaces that are well adapted to the ambient geometry is to maximize the Hawking mass under a suitable geometric constraint. Here, fixing the area is a natural choice. Area-constrained critical points of the Hawking mass are also area-constrained critical points of the Willmore energy $\displaystyle\mathcal{W}(\Sigma)=\frac{1}{4}\int_{\Sigma}H^{2}\,\text{d}\mu.$ We refer to such surfaces as area-constrained Willmore surfaces. Note that in e.g. [28], such surfaces are said to be of Willmore type. Critical points of the Willmore energy, known as Willmore surfaces, satisfy the Euler-Lagrange equation $-W=0$ where (2) $\displaystyle W=\Delta H+(|\accentset{\circ}{h}|^{2}+\operatorname{Ric}(\nu,\nu))\,H.$ Here, $\Delta$ is the non-positive Laplace-Beltrami operator, $\accentset{\circ}{h}$ the traceless part of the second fundamental form $h$, and $\operatorname{Ric}$ the Ricci curvature of $(M,g)$. Likewise, area-constrained Willmore surfaces satisfy the constrained Willmore equation (3) $\displaystyle-W=\kappa\,H,$ where $\kappa\in\mathbb{R}$ is a Lagrange multiplier. (Note that $\kappa$ is denoted by $\lambda$ in [28].) The linearization of the Willmore operator is denoted by $Q$. It measures how $-W$ changes along a normal variation of the surface $\Sigma$. We refer to Appendix A for more details, including a discussion of the notion of stability of such surfaces. The cross-sections of rotationally symmetric Riemannian manifolds are easily seen to form a foliation by area-constrained Willmore surfaces. This observation applies in particular to the spheres of symmetry in the spatial Schwarzschild manifold (1). In [28], T. Lamm, J. Metzger, and F. Schulze have used a delicate singular perturbation analysis to prove the existence of such a foliation also in the case of small perturbations of the Schwarzschild manifold. 
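The assertion about cross-sections can be verified directly from (2) and (3). For a warped-product metric $dr^{2}+f(r)^{2}g_{S^{2}}$ (a convenient model of rotational symmetry), each sphere $\\{r=\text{const}\\}$ is totally umbilic with $H$ and $\operatorname{Ric}(\nu,\nu)$ constant along it, so

```latex
\accentset{\circ}{h}=0,\qquad \Delta H=0,
\qquad\Longrightarrow\qquad
-W=-\Delta H-\bigl(|\accentset{\circ}{h}|^{2}+\operatorname{Ric}(\nu,\nu)\bigr)H
  =-\operatorname{Ric}(\nu,\nu)\,H,
```

which is the constrained Willmore equation (3) with $\kappa=-\operatorname{Ric}(\nu,\nu)$. In flat $\mathbb{R}^{3}$ one gets $\kappa=0$, recovering the fact that round spheres are (unconstrained) Willmore surfaces.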
To state their result, we define the area radius $\lambda(\Sigma)>0$ of a surface $\Sigma\subset M\setminus K$ by $4\,\pi\,\lambda^{2}(\Sigma)=|\Sigma|$ and its inner radius $\rho(\Sigma)$ by $\rho(\Sigma)=\min_{x\in\Sigma}|x|.$ Moreover, we use $R$ to denote the scalar curvature of $(M,g)$. Below, we summarize Theorem 1 and Theorem 2 in [28]. ###### Theorem 1 ([28]). Given $m>0$, there is a constant $\eta>0$ with the following property. Suppose that $(M,g)$ is $C^{3}$-asymptotic to Schwarzschild with mass $m>0$ such that (4) $\displaystyle\limsup_{|x|\to\infty}\bigg{(}|\sigma|\,|x|^{2}+|D\sigma|\,|x|^{3}+|D^{2}\sigma|\,|x|^{4}+|D^{3}\sigma|\,|x|^{5}\bigg{)}<\eta$ and $\limsup_{|x|\to\infty}|R(x)|\,|x|^{5}<\eta.$ There is a compact set $K\subset M$, a number $\kappa_{0}>0$, and spheres $\\{\Sigma(\kappa):\kappa\in(0,\kappa_{0})\\}$ such that the following hold: * • $\Sigma(\kappa)$ is a stable area-constrained Willmore surface that satisfies (3) with parameter $\kappa$, * • $M\setminus K$ is smoothly foliated by the family $\\{\Sigma(\kappa):\kappa\in(0,\kappa_{0})\\}$. Moreover, there is a constant $\epsilon_{0}>0$ such that every on-center, strictly mean convex area-constrained Willmore sphere $\Sigma\subset M\setminus K$ with (5) $\displaystyle\bigg{|}\frac{\lambda(\Sigma)}{\rho(\Sigma)}-1\bigg{|}<\epsilon_{0}\quad\text{and}\quad\int_{\Sigma}|\accentset{\circ}{h}|^{2}\,\text{d}\mu<\epsilon_{0}$ is part of this foliation. ###### Remark 2. In [28], the uniqueness result is stated in terms of smallness conditions on the rescaled barycenter $\displaystyle\frac{1}{\lambda(\Sigma)\,|\Sigma|}\int_{\Sigma}x\,\text{d}\mu$ and the quotient $\rho(\Sigma)^{-2}\,\lambda(\Sigma)$. These conditions are implied by (5) and Theorem 1.1 in [15]. The stability of the leaves $\Sigma(\kappa)$ suggests that each contains a maximal amount of Hawking mass given their surface area. Locally, this has been confirmed by the second-named author; see Theorem 1.2 in [24]. 
###### Theorem 3 ([24]). Assumptions as in Theorem 1. Let $\Sigma\subset M\setminus K$ be a closed, on-center sphere with $\bigg{|}\frac{\lambda(\Sigma)}{\rho(\Sigma)}-1\bigg{|}<\epsilon_{0}$ and $|\Sigma|=|\Sigma(\kappa)|$ for some $\kappa\in(0,\kappa_{0})$. Then $m_{H}(\Sigma)\leq m_{H}(\Sigma(\kappa))$ with equality if and only if $\Sigma=\Sigma(\kappa)$. Moreover, every on-center area-constrained Willmore sphere $\Sigma\subset M\setminus K$ with $\bigg{|}\frac{\lambda(\Sigma)}{\rho(\Sigma)}-1\bigg{|}<\epsilon_{0}\quad\text{and}\quad\int_{\Sigma}|\accentset{\circ}{h}|^{2}\,\text{d}\mu<\epsilon_{0}$ is part of the foliation from Theorem 1. ###### Remark 4. Unlike in Theorem 1, there is no assumption on the sign of the mean curvature in the uniqueness statement in Theorem 3. Compared with the results available for stable constant mean curvature surfaces, the assumptions of Theorem 1 and Theorem 3 are quite restrictive. Yet, there has been no subsequent result which either establishes the existence of a foliation by area-constrained Willmore surfaces in a more general setting or characterizes the leaves of such a foliation more globally. Even in exact Schwarzschild initial data (1), our variational understanding of the Willmore energy is limited. In fact, it is not known whether an area-constrained maximizer of the Hawking mass exists unless the prescribed area is either very small or an integer multiple of the area of the horizon; see [38, Theorem 1.6 and Remark 1.7]. By contrast, S. Brendle has shown in [5] that the spheres of symmetry are the only closed, embedded constant mean curvature surfaces in exact Schwarzschild (1). Previously, it had been known from the work of H. Bray in [4] that these spheres are the only solutions of the isoperimetric problem for the volume they enclose. 
Note that such a result fails for the Willmore energy, as one can construct surfaces of arbitrarily large area and Hawking mass by gluing small catenoidal necks between spheres of symmetry that are close to the horizon; see the remark below Corollary 5.4 in [24]. Consequently, any reasonable characterization of such surfaces can only possibly hold outside a compact set or under a small energy assumption. In his habilitation thesis [30], P. Laurain conjectures the existence of a foliation by area-constrained Willmore surfaces if the metric $g$ satisfies the so-called Regge-Teitelboim condition (see [36]) and that all area-constrained Willmore surfaces with small energy that enclose a sufficiently large bounded set are part of this foliation; cf. "Theorem 49 (In progress)" in [30]. Note that, being non-linear and of fourth order, the constrained Willmore equation (3) poses hard analytical challenges and is not as accessible geometrically as the constant mean curvature equation. What is more, Willmore stability does not appear to be as useful a condition as the stability of a constant mean curvature surface. For instance, every closed minimal surface is a stable Willmore surface. In this work, we establish the existence and uniqueness of foliations by area-constrained Willmore surfaces in a generality analogous to the optimal results for stable constant mean curvature surfaces in [10, 11]. In summary, we discover optimal conditions on the scalar curvature under which the end of every asymptotically Schwarzschild manifold is foliated by large stable area-constrained Willmore spheres. These surfaces are unique among all large area-constrained Willmore spheres with non-negative Hawking mass whose inner radius is at least a small multiple of the area radius. 
Our results differ from those in Theorem 1 in that we do not require smallness of the perturbation $\sigma$ off Schwarzschild or of the centering quantity $\frac{\lambda(\Sigma)}{\rho(\Sigma)}-1$ such as (4) or (5). More precisely, we first establish the existence of a foliation by area-constrained Willmore surfaces assuming that $(M,g)$ is $C^{4}$-asymptotic to Schwarzschild. We also assume that the scalar curvature is asymptotically even and satisfies a certain growth condition. ###### Theorem 5. Let $(M,g)$ be $C^{4}$-asymptotic to Schwarzschild with mass $m>0$ and suppose that the scalar curvature $R$ satisfies (6) $\displaystyle x^{i}\,\partial_{i}(|x|^{2}\,R)$ $\displaystyle\leq o(|x|^{-2}),$ (7) $\displaystyle R(x)-R(-x)$ $\displaystyle=o(|x|^{-4}).$ There exists a compact set $K\subset M$, a number $\kappa_{0}>0$, and on-center stable area-constrained Willmore spheres $\Sigma(\kappa)$, $\kappa\in(0,\kappa_{0})$, satisfying (3) with parameter $\kappa$ such that $M\setminus K$ is foliated by the family $\\{\Sigma(\kappa):\kappa\in(0,\kappa_{0})\\}$. Moreover, there holds $\displaystyle\lim_{\kappa\to 0}\frac{\lambda({\Sigma({\kappa})})}{\rho({\Sigma({\kappa})})}=1.$ ###### Remark 6. Note that the assumptions of the theorem are satisfied if $(M,g)$ is $C^{4}$-asymptotic to Schwarzschild and $R=o(|x|^{-4})$. The $C^{4}$-decay gives $DR=o(|x|^{-5})$ in this case, which implies (6). ###### Remark 7. In Theorem 5 and Theorem 8, it would be sufficient to require appropriate $C^{3,\alpha}$-decay of the metric for some $\alpha\in(0,1)$. We use the slightly stronger assumption for the sake of readability. Next, we focus on the geometric characterization of the foliation $\\{\Sigma(\kappa):\kappa\in(0,\kappa_{0})\\}$. 
Continuing to assume the same asymptotic conditions on the scalar curvature, we show that the leaves of the foliation are the unique large area-constrained Willmore surfaces whose inner radius and area radius are comparable and with traceless second fundamental form small in $L^{2}$. The conclusion of Theorem 8 below is illustrated in Figure 1. ###### Theorem 8. Assumptions as in Theorem 5. There exist a small constant $\epsilon_{0}>0$ and a compact set $K\subset M$ which only depend on $(M,g)$ such that the following holds. For every $\delta>0$, there exists a large constant $\lambda_{0}>1$ such that every area-constrained Willmore sphere $\Sigma\subset M\setminus K$ with $|\Sigma|>4\,\pi\,\lambda_{0}^{2},\qquad\qquad\delta\,\lambda(\Sigma)<\rho(\Sigma),\qquad\qquad\delta\,\rho(\Sigma)<\lambda(\Sigma),$ and (8) $\displaystyle\int_{\Sigma}|\accentset{\circ}{h}|^{2}\,\text{d}\mu<\epsilon_{0}$ belongs to the foliation from Theorem 5. ###### Remark 9. The small energy assumption (8) and the assumption that $\Sigma$ be spherical may be replaced by requiring a lower bound on the Hawking mass and an upper bound on the genus of $\Sigma$; see Proposition 26. ###### Remark 10. For outlying area-constrained Willmore surfaces, we may drop the assumption of asymptotic evenness of the scalar curvature (7). This relaxed condition is the one discovered by O. Chodosh and the first-named author in [11, Theorem 1.4]. It is thus sufficient to rule out sequences of large outlying stable constant mean curvature spheres whose area radius and inner radius are comparable. Finally, we consider large area-constrained Willmore surfaces whose inner radius dominates the area radius. In this regime, the contribution of the Schwarzschild metric to the Hawking mass is so weak that a stronger assumption on the scalar curvature is needed to preclude the existence of such surfaces. ###### Theorem 11. 
Suppose that $(M,g)$ is $C^{5}$-asymptotic to Schwarzschild with mass $m>0$ and that its scalar curvature $R$ satisfies (9) $\displaystyle x^{i}\,\partial_{i}(|x|^{2}\,R)\leq 0.$ There is no sequence $\\{\Sigma_{j}\\}_{j=1}^{\infty}$ of area-constrained Willmore spheres $\Sigma_{j}\subset M\setminus K$ such that $\lim_{j\to\infty}|\Sigma_{j}|=\infty,\qquad\limsup_{j\to\infty}\int_{\Sigma_{j}}|\accentset{\circ}{h}|^{2}\,\text{d}\mu<\epsilon_{0},\qquad\lim_{j\to\infty}\frac{\rho({\Sigma_{j}})}{\lambda({\Sigma_{j}})}=\infty,$ where $\epsilon_{0}>0$ is as in Theorem 8. ###### Remark 12. Condition (9) is stronger than (6) and, for instance, satisfied if the scalar curvature of $(M,g)$ vanishes. In any case, the assumptions of Theorem 11 are weaker than those for far-outlying stable constant mean curvature surfaces discovered in [11], where stronger decay of the metric is required and the scalar curvature is assumed either to vanish or to be radially convex. This improvement owes to a conservation law for the Einstein tensor known as the Pohozaev identity. In the generality required here, this law has been observed by R. Schoen, see [37, Proposition 1.4] and [34], and applied by T. Lamm, J. Metzger, and F. Schulze in [28] in a similar context. This identity precisely brings out the contribution of the scalar curvature to the Willmore energy, as we explain in Lemma 42. It turns out that the assumptions on the scalar curvature required in Theorem 11 are sufficient to rule out large far-outlying stable constant mean curvature spheres as well. We include a proof of this fact in Appendix E. Figure 1. An illustration of the situation in Theorem 8. The dashed, gray lines indicate the leaves of the foliation from Theorem 5 while the solid, black line indicates the surface $\Sigma$. On the left, $\Sigma$ belongs to the foliation. In the middle, $\Sigma$ is on-center and the area radius dominates the inner radius. 
On the right, $\Sigma$ is outlying and the inner radius dominates the area radius. $\Sigma$ violates the assumptions of the Theorem in the latter two scenarios. The assumptions on the scalar curvature in Theorem 5, Theorem 8, and Theorem 11 are surprisingly sharp. We show that the growth condition (6) cannot be relaxed to requiring the scalar curvature to be non-negative and even. In fact, these weaker conditions are not sufficient to preclude large area-constrained Willmore spheres, on-center or outlying, that satisfy (8) but violate all three alternatives in Theorem 8. In particular, the assumptions conjecturally proposed in [30] are not quite sufficient to conclude that large area-constrained Willmore surfaces are unique. ###### Theorem 13. There exist rotationally symmetric metrics $g_{1}$ and $g_{2}$ on $M=\\{x\in\mathbb{R}^{3}:|x|>1\\}$ both smoothly asymptotic to Schwarzschild with mass $m=2$ and with non-negative scalar curvature such that the following holds. There exist sequences of stable area-constrained Willmore spheres $\\{\Sigma^{1}_{j}\\}_{j=1}^{\infty}$ and $\\{\Sigma^{2}_{j}\\}_{j=1}^{\infty}$ that are on-center in $(M,g_{1})$ and outlying in $(M,g_{2})$, respectively, such that $\lim_{j\to\infty}|\Sigma^{1}_{j}|=\lim_{j\to\infty}|\Sigma^{2}_{j}|=\infty,\quad\lim_{j\to\infty}m_{H}(\Sigma^{1}_{j})=2,\quad\lim_{j\to\infty}m_{H}(\Sigma^{2}_{j})=0,$ while $\frac{1}{4}<\frac{\rho({\Sigma^{1}_{j}})}{\lambda({\Sigma^{1}_{j}})}<\frac{7}{8}\qquad\text{ and }\qquad 2\sqrt{2}<\frac{\rho({\Sigma^{2}_{j}})}{\lambda({\Sigma^{2}_{j}})}<5$ for all $j$. Conversely, centering of the foliation from Theorem 5 may fail if the assumption that the scalar curvature be asymptotically even is dropped. We refer to the work [9] by C. Cederbaum and C. Nerz for a thorough investigation of various divergent notions of center of mass. ###### Theorem 14. 
There exists a metric $g_{3}$ on $\\{x\in\mathbb{R}^{3}:|x|>1\\}$ smoothly asymptotic to Schwarzschild with mass $m=2$ and non-negative scalar curvature satisfying (6) such that the following holds. There exists a number $\kappa_{0}>0$ and a smooth asymptotic foliation $\\{\Sigma(\kappa):\kappa\in(0,\kappa_{0})\\}$ by on-center stable area-constrained Willmore spheres such that $\limsup_{\kappa\to 0}\frac{\lambda({\Sigma({\kappa})})}{\rho({\Sigma({\kappa})})}>1.$ Finally, the following result shows that if we relax the growth condition on the scalar curvature only slightly, large far-outlying area-constrained Willmore surfaces may exist. ###### Theorem 15. There exists a rotationally symmetric metric $g_{4}=\left(1+|x|^{-1}\right)^{4}\bar{g}+\sigma_{4}$ on $\\{x\in\mathbb{R}^{3}:|x|>1\\}$ with non-negative scalar curvature and $\partial_{J}\sigma_{4}=O(|x|^{-3-|J|})$ for every multi-index $J$, with the following property. There exists a sequence of stable area-constrained Willmore spheres $\\{\Sigma_{j}\\}_{j=1}^{\infty}$ such that $\lim_{j\to\infty}|\Sigma_{j}|=\infty,\qquad\lim_{j\to\infty}m_{H}(\Sigma_{j})=0,\qquad\lim_{j\to\infty}\frac{\rho({\Sigma_{j}})}{\lambda({\Sigma_{j}})}=\infty.$ The study of large area-constrained Willmore surfaces $\Sigma$ with $\lambda(\Sigma)\gg\rho(\Sigma)$ is challenging. This is on account of the loss of analytic control in the part of the surface where the area radius $\lambda(\Sigma)$ is much larger than $|x|$. In particular, the non-linearities owing to the Schwarzschild background dominate, so the method in this paper loses its grip. We remark that large coordinate spheres cease to be mean convex in this regime and exhibit a first-order defect in their Hawking mass; see Remark 43. This suggests that the comparability assumptions on the area radius and inner radius in Theorem 8 are not necessary.
In order to prove Theorem 5 and Theorem 8, we use a strategy modeled upon the Lyapunov-Schmidt reduction developed for stable constant mean curvature surfaces in [6, 11]. We will follow by and large the notation of [6, 11] throughout this paper. By scaling, we may assume that $m=2$, that is, $g=(1+|x|^{-1})^{4}\,\bar{g}+\sigma.$ We use a bar to indicate that a geometric quantity has been computed with respect to the Euclidean background metric $\bar{g}$. When the Schwarzschild metric $g_{S}=(1+|x|^{-1})^{4}\,\bar{g}$ with mass $m=2$ has been used in the computation, we use the subscript $S$. For every $\xi\in\mathbb{R}^{3}$ and $\lambda>1$ large, depending on $|1-|\xi||^{-1}$, we use the implicit function theorem to perturb the sphere ${S}_{\lambda}(\lambda\,\xi)$ to a surface $\Sigma_{\xi,\lambda}$ with area $4\,\pi\,\lambda^{2}$ and which satisfies the constrained Willmore equation (3) up to a sum of first spherical harmonics. On the one hand, we show that $\Sigma_{\xi,\lambda}$ is an area-constrained Willmore surface if and only if $\xi$ is a critical point of the function $G_{\lambda}$ given by $G_{\lambda}(\xi)=\begin{dcases}&\lambda^{2}\bigg{(}\int_{\Sigma_{\xi,\lambda}}H^{2}\,\text{d}\mu-16\,\pi+64\,\pi\,\lambda^{-1}\bigg{)}\,\text{ if }|\xi|<1,\\\ &\lambda^{2}\bigg{(}\int_{\Sigma_{\xi,\lambda}}H^{2}\,\text{d}\mu-16\,\pi\bigg{)}\hskip 49.79231pt\,\,\text{ if }|\xi|>1.\end{dcases}$ The absence of the term $64\,\pi\,\lambda^{-1}$ in the case $|\xi|>1$, which would equal $32\,\pi\,\lambda^{-1}\,m$ without the normalization $m=2$, owes to the fact that the Hawking mass of an outlying coordinate sphere does not detect the mass of $(M,g)$; see Lemma 42. 
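As a rough consistency check on this normalization (a sketch of our own, using only the standard definition of the Hawking mass; it is not part of the authors' argument), consider the exact Schwarzschild metric with $m=2$, that is, $\sigma=0$. A centered coordinate sphere $\Sigma$ with $|\Sigma|=4\,\pi\,\lambda^{2}$ has Hawking mass exactly $2$, so that

```latex
% Hawking mass of a centered sphere in exact Schwarzschild with m = 2:
%   m_H(\Sigma) = \sqrt{|\Sigma|/(16\,\pi)} \, (16\,\pi)^{-1}
%                 \big(16\,\pi - \int_\Sigma H^2 \,\text{d}\mu\big) = 2.
% With \sqrt{|\Sigma|/(16\,\pi)} = \lambda/2, this rearranges to
\int_{\Sigma}H^{2}\,\text{d}\mu=16\,\pi-64\,\pi\,\lambda^{-1},
\qquad\text{hence}\qquad
\lambda^{2}\bigg{(}\int_{\Sigma}H^{2}\,\text{d}\mu-16\,\pi+64\,\pi\,\lambda^{-1}\bigg{)}=0.
```

This is consistent with the on-center expansion of $G_{\lambda}$ at $\xi=0$, where $64\,\pi+32\,\pi-96\,\pi=0$ and the scalar curvature term drops out since $R=0$ for Schwarzschild.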
On the other hand, we show that $\displaystyle G_{\lambda}(\xi)=64\,\pi+\frac{32\,\pi}{1-|\xi|^{2}}-48\,\pi\,|\xi|^{-1}\log\frac{1+|\xi|}{1-|\xi|}-128\,\pi\log(1-|\xi|^{2})+2\,\lambda\int_{\mathbb{R}^{3}\setminus{B_{\lambda}(\lambda\,\xi)}}R\,\text{d}\bar{v}+o(1)$ if $|\xi|<1$ and (10) $\displaystyle G_{\lambda}(\xi)=\frac{32\,\pi}{1-|\xi|^{2}}-48\,\pi\,|\xi|^{-1}\log\frac{|\xi|+1}{|\xi|-1}-128\,\pi\log(1-|\xi|^{-2})-2\,\lambda\int_{{B_{\lambda}(\lambda\,\xi)}}R\,\text{d}\bar{v}+o(1)$ if $|\xi|>1$. Assuming that the scalar curvature is asymptotically even (7) and satisfies the growth condition (6), we show that $G_{\lambda}$ has a unique minimum near the origin for every $\lambda>1$ large. Moreover, the corresponding surfaces $\Sigma_{\xi,\lambda}$ form a foliation. We show that these are in fact the only critical points of $G_{\lambda}$ if $\lambda>1$ is large. Theorem 8 follows from this and a compactness argument. The strategy for the proof of Theorem 11 is formally similar. However, a key difference is that the first three terms in (10) become very small when $\xi$ is large. In a more precise analysis, we verify that (10) remains true up to lower-order error terms that decay sufficiently fast as $|\xi|\to\infty$. Finally, we give explicit examples that show that $G_{\lambda}$ may have other critical points for suitable choices of $\sigma$ that violate the particular assumptions on the scalar curvature. Unlike large area-constrained Willmore surfaces, small area-constrained Willmore surfaces in closed manifolds are well-understood. It is shown in the work of T. Lamm and J. Metzger [27] and the work of A. Mondino and T. Riviere [33] that minimizers of the Willmore energy with small prescribed area exist in closed manifolds. Moreover, T. Lamm, J. Metzger, and F. Schulze [29] as well as N. Ikoma, A. Mondino, and A. 
Malchiodi [22] have shown that a neighborhood of a non-degenerate critical point of the scalar curvature is foliated by small area-constrained Willmore surfaces. This extends previous work of T. Lamm and J. Metzger [26] as well as of A. Mondino and P. Laurain [31]. We also mention the work [1] of R. Alessandroni and E. Kuwert on small area-constrained Willmore surfaces with free boundary and the work by A. Mondino [32], where unconstrained Willmore surfaces are studied in a semi-perturbative setting. The method of Lyapunov-Schmidt reduction is used in [22, 1, 32]. Acknowledgments. The authors would like to thank Simon Brendle, Otis Chodosh, Jan Metzger, and Felix Schulze for helpful discussions. The authors acknowledge the support of the START-Project Y963-N35 of the Austrian Science Fund (FWF). ## 2\. Proof of Theorem 5 and Theorem 8 Let $M=\mathbb{R}^{3}\setminus\\{0\\}$ and $g=(1+|x|^{-1})^{4}\,\bar{g}+\sigma$ where $\sigma$ is a symmetric, covariant two-tensor with $\partial_{J}\sigma=O(|x|^{-2-|J|})$ for every multi-index $J$ with $|J|\leq 4$ as $|x|\to\infty$. Applying a strategy similar to that in [6, 11], we perform a Lyapunov-Schmidt reduction to analyze sequences of area-constrained Willmore surfaces $\\{\Sigma_{j}\\}_{j=1}^{\infty}$ with $|\Sigma_{j}|\to\infty$ and comparable area radius $\lambda({\Sigma_{j}})$ and inner radius $\rho({\Sigma_{j}})$. Given $\xi\in\mathbb{R}^{3}$ and $\lambda>0$, we abbreviate ${S}_{\xi,\lambda}=S_{\lambda}(\lambda\,\xi)=\\{x\in\mathbb{R}^{3}:|x-\lambda\,\xi|=\lambda\\}.$ Given $u\in C^{\infty}(S_{\xi,\lambda})$, we define the map (11) $\displaystyle\Phi^{u}_{\xi,\lambda}:S_{\xi,\lambda}\to M\qquad\text{given by}\qquad\Phi^{u}_{\xi,\lambda}(x)=x+u(x)\,(\lambda^{-1}\,x-\xi).$ We denote by $\Sigma_{\xi,\lambda}(u)=\Phi^{u}_{\xi,\lambda}(S_{\xi,\lambda})$ the Euclidean graph of $u$ over $S_{\xi,\lambda}$.
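As a quick illustration of (11) (an example of our own, not needed in the sequel): for a constant function $u\equiv c$, write $x=\lambda\,\xi+\lambda\,\omega$ with $|\omega|=1$, so that $\lambda^{-1}\,x-\xi=\omega$ is the Euclidean unit normal of $S_{\xi,\lambda}$. Then

```latex
% Constant graph functions simply shift the Euclidean radius:
\Phi^{c}_{\xi,\lambda}(x)=\lambda\,\xi+(\lambda+c)\,\omega,
\qquad\text{and therefore}\qquad
\Sigma_{\xi,\lambda}(c)=S_{\lambda+c}(\lambda\,\xi).
```

In particular, the sign convention in (11) is such that positive $u$ moves the surface outward along the Euclidean normal.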
Throughout, for instance in (12) below, we tacitly identify functions defined on $\Sigma_{\xi,\lambda}(u)$ with functions defined on $S_{\xi,\lambda}$ by precomposition with $\Phi^{u}_{\xi,\lambda}$. When there is no chance of ambiguity, we abbreviate $\Phi^{u}_{\xi,\lambda}$ by $\Phi^{u}$ and $\Sigma_{\xi,\lambda}(u)$ by $\Sigma(u)$. Let $\alpha\in(0,1)$ and $\gamma>1$. We denote by $\mathcal{G}$ the space of $C^{3,\alpha}$-Riemannian metrics on $\\{y\in\mathbb{R}^{3}:\gamma^{-1}\leq|y|\leq\gamma\\}$ with the $C^{3,\alpha}$-topology. Let $\Lambda_{0}(S_{1}(0))$ and $\Lambda_{1}(S_{1}(0))$ be the constants and first spherical harmonics viewed as subspaces of $C^{4,\alpha}(S_{1}(0))$, respectively. We use the symbol $\perp$ to denote the orthogonal complements of these spaces. ###### Lemma 16. There exist open neighborhoods $\mathcal{U}$ of $\bar{g}\in\mathcal{G}$, $\mathcal{V}$ of $0\in\Lambda_{1}(S_{1}(0))^{\perp}$, and $I$ of $0\in\mathbb{R}$ as well as smooth maps $u:\mathcal{U}\to\mathcal{V}$ and $\kappa:\mathcal{U}\to I$ such that the surface $\Sigma(u(g))$ has area equal to $4\,\pi$ and satisfies (12) $\displaystyle\Delta H+(|\accentset{\circ}{h}|^{2}+\operatorname{Ric}(\nu,\nu)+\kappa(g))\,H\in\Lambda_{1}(S_{1}(0)).$ Here, all geometric quantities are computed with respect to the surface $\Sigma(u(g))$ and the metric $g$. Moreover, if $g_{0}\in\mathcal{U}$, $u_{0}\in\mathcal{V}$, and $\kappa_{0}\in I$ are such that $\Sigma(u_{0})$ satisfies (12) with respect to $g_{0}$ and has area equal to $4\,\pi$, then $u_{0}=u(g_{0})$ and $\kappa_{0}=\kappa(g_{0})$. ###### Proof. Let $\Lambda_{0,0}(S_{1}(0))$ and $\Lambda_{1,0}(S_{1}(0))$ be the constants and first spherical harmonics viewed as subspaces of $C^{0,\alpha}(S_{1}(0))$, respectively.
We define the smooth map $T:\Lambda_{1}(S_{1}(0))^{\perp}\times\mathbb{R}\times\mathcal{G}\to\Lambda_{1,0}(S_{1}(0))^{\perp}\times\mathbb{R}$ to be $T(u,\,\kappa,\,g)=\left(\operatorname{proj}_{\Lambda_{1}(S_{1}(0))^{\perp}}\left[\Delta H+(|\accentset{\circ}{h}|^{2}+\operatorname{Ric}(\nu,\nu)+\kappa)\,H\right],\,|\Sigma|\right)$ where all geometric quantities are with respect to $\Sigma(u)$ and the metric $g$. Specifying equation (47) to $S_{1}(0)$ in flat $\mathbb{R}^{3}$, we find $(DT)|_{(0,\,0,\,\bar{g})}(u,\,0,\,0)=\left(-\bar{\Delta}^{2}u-2\,\bar{\Delta}u,\,8\,\pi\,\operatorname{proj}_{\Lambda_{0}(S_{1}(0))}u\right).$ Moreover, there holds $(DT)|_{(0,\,0,\,\bar{g})}(0,\,\kappa,\,0)=(2\,\kappa,\,0).$ As discussed in Corollary 33, the kernel of the operator $-\bar{\Delta}^{2}-2\,\bar{\Delta}:C^{4,\alpha}(S_{1}(0))\to C^{0,\alpha}(S_{1}(0))$ is given by $\Lambda_{0}(S_{1}(0))\oplus\Lambda_{1}(S_{1}(0))$. It follows from the Fredholm alternative and elliptic regularity that $-\bar{\Delta}^{2}-2\,\bar{\Delta}:[\Lambda_{0}(S_{1}(0))\oplus\Lambda_{1}(S_{1}(0))]^{\perp}\to[\Lambda_{0,0}(S_{1}(0))\oplus\Lambda_{1,0}(S_{1}(0))]^{\perp}$ is an isomorphism. Thus, $(DT)|_{(0,\,0,\,\bar{g})}(\,\cdot\,,\,\cdot\,,\,0):\Lambda_{1}(S_{1}(0))^{\perp}\times\mathbb{R}\to\Lambda_{1,0}(S_{1}(0))^{\perp}\times\mathbb{R}$ is an isomorphism, too. The assertions follow from this and the implicit function theorem. ∎ We consider the map $\Theta_{\xi,\lambda}:\mathbb{R}^{3}\to\mathbb{R}^{3}\qquad\text{ given by }\qquad\Theta_{\xi,\lambda}(y)=\lambda\,(\xi+y).$ Note that $\Theta_{\xi,\lambda}(S_{1}(0))=S_{\xi,\lambda}$. The rescaled metric (13) $\displaystyle g_{\xi,\lambda}=\lambda^{-2}\,\Theta_{\xi,\lambda}^{*}\,g$ satisfies $||g_{\xi,\lambda}-\bar{g}||_{\mathcal{G}}=O(\lambda^{-1}\,|1-|\xi||^{-1})$ as $\lambda\to\infty$. Let $\delta\in(0,1/2)$. The following proposition follows from Lemma 16 and scaling. 
We let $\Lambda_{0}(S_{\xi,\lambda})$ and $\Lambda_{1}(S_{\xi,\lambda})$ be the constants and first spherical harmonics viewed as subspaces of $C^{4,\alpha}(S_{\xi,\lambda})$, respectively. ###### Proposition 17. There are constants $\lambda_{0}>1$, $c>1$, and $\epsilon>0$ depending on $(M,g)$ and $\delta\in(0,1/2)$ such that for every $\xi\in\mathbb{R}^{3}$ with $|\xi|<1-\delta$ or $|\xi|>1+\delta$ and every $\lambda>\lambda_{0}$ there exist $u_{\xi,\lambda}\in C^{\infty}(S_{\xi,\lambda})$ and $\kappa_{\xi,\lambda}\in\mathbb{R}$ such that the following hold. The surface $\Sigma_{\xi,\lambda}=\Sigma_{\xi,\lambda}(u_{\xi,\lambda})$ has the properties * • $\Delta H+(|\accentset{\circ}{h}|^{2}+\operatorname{Ric}(\nu,\nu)+\kappa_{\xi,\lambda})\,H\in\Lambda_{1}(S_{\xi,\lambda})$, * • $|\Sigma_{\xi,\lambda}|=4\,\pi\,\lambda^{2}$. There holds $u_{\xi,\lambda}\perp\Lambda_{1}(S_{\xi,\lambda})$ and $\displaystyle|u_{\xi,\lambda}|+\lambda\,|\nabla u_{\xi,\lambda}|+\lambda^{2}\,|\nabla^{2}u_{\xi,\lambda}|+\lambda^{3}\,|\nabla^{3}u_{\xi,\lambda}|+\lambda^{4}\,|\nabla^{4}u_{\xi,\lambda}|$ $\displaystyle<c,$ $\displaystyle\lambda^{3}\,|\kappa_{\xi,\lambda}|$ $\displaystyle<c.$ Moreover, if $\kappa\in\mathbb{R}$ and $\Sigma_{\xi,\lambda}(u)$ with $u\perp\Lambda_{1}(S_{\xi,\lambda})$ are such that * • $\Delta H+(|\accentset{\circ}{h}|^{2}+\operatorname{Ric}(\nu,\nu)+\kappa)\,H\in\Lambda_{1}(S_{\xi,\lambda}),$ * • $|\Sigma_{\xi,\lambda}(u)|=4\,\pi\,\lambda^{2},$ and $\displaystyle|u|+\lambda\,|\nabla u|+\lambda^{2}\,|\nabla^{2}u|+\lambda^{3}\,|\nabla^{3}u|+\lambda^{4}\,|\nabla^{4}u|$ $\displaystyle<\epsilon\,\lambda,$ $\displaystyle\lambda^{3}\,|\kappa|$ $\displaystyle<\epsilon\,\lambda,$ then $u=u_{\xi,\lambda}$ and $\kappa=\kappa_{\xi,\lambda}$. ###### Remark 18. 
By the implicit function theorem and scaling, $\displaystyle(\bar{D}u)|_{(\xi,\lambda)}=O(\lambda^{-1})\qquad\text{and}\qquad u^{\prime}|_{(\xi,\lambda)}=O(\lambda^{-2})$ where $\bar{D}$ and the dash indicate differentiation with respect to the parameters $\xi$ and $\lambda$, respectively. We abbreviate $u_{\xi,\lambda}$ by $u$, $\kappa_{\xi,\lambda}$ by $\kappa$, and $\Lambda_{\ell}(S_{\xi,\lambda})$ by $\Lambda_{\ell}$ for $\ell=0,\,1$. To obtain more precise information about the perturbation, we expand $u$ in terms of spherical harmonics. ###### Lemma 19. There holds $\displaystyle\operatorname{proj}_{\Lambda_{0}}u=\begin{dcases}&-2+{O}(\lambda^{-1})\,\hskip 22.76228pt\quad\text{ if }|\xi|<1-\delta,\\\ &-2\,|\xi|^{-1}+{O}(\lambda^{-1})\quad\text{ if }|\xi|>1+\delta.\end{dcases}$ ###### Proof. On the one hand, we have $|\Sigma_{\xi,\lambda}|=4\,\pi\,\lambda^{2}$ by construction. By Lemma 40, $|S_{\xi,\lambda}|=\begin{dcases}&4\,\pi\,\lambda^{2}+16\,\pi\,\lambda+{O}(1)\hskip 34.14322pt\,\text{ if }|\xi|<1-\delta,\\\ &4\,\pi\,\lambda^{2}+16\,\pi\,\lambda\,|\xi|^{-1}+{O}(1)\quad\text{ if }|\xi|>1+\delta.\end{dcases}$ On the other hand, by the first variation of area formula, $|\Sigma_{\xi,\lambda}|-|S_{\xi,\lambda}|=\int_{S_{\xi,\lambda}}H\,u\,\text{d}\mu+{O}(1)=\int_{S_{\xi,\lambda}}\bar{H}\,u\,\text{d}\bar{\mu}+{O}(1).$ In the second equation, we have used Lemma 39 and Lemma 41. Since $\frac{1}{4\,\pi\,\lambda^{2}}\int_{S_{\xi,\lambda}}u\,\text{d}\bar{\mu}=\operatorname{proj}_{\Lambda_{0}}u,$ the assertion follows. ∎ For the statement of the next lemma, recall the definition of the Legendre polynomials $P_{\ell}$ from Appendix B. ###### Lemma 20. 
If $|\xi|<1-\delta$, there holds $\displaystyle\kappa$ $\displaystyle=4\,\lambda^{-3}+O(\lambda^{-4}),$ $\displaystyle W({\Sigma_{\xi,\lambda}})+\kappa\,H(\Sigma_{\xi,\lambda})$ $\displaystyle={O}(\lambda^{-5}),$ $\displaystyle u$ $\displaystyle=-2+4\sum_{\ell=2}^{\infty}\frac{|\xi|^{\ell}}{\ell}\,P_{\ell}\left(-|\xi|^{-1}\,\bar{g}(y,\xi)\right)+O(\lambda^{-1}).$ If $|\xi|>1+\delta$, there holds $\displaystyle\kappa$ $\displaystyle=O(\lambda^{-4}),$ $\displaystyle W({\Sigma_{\xi,\lambda}})+\kappa\,H(\Sigma_{\xi,\lambda})$ $\displaystyle={O}(\lambda^{-5}),$ $\displaystyle u$ $\displaystyle=-2\,|\xi|^{-1}-4\sum_{\ell=2}^{\infty}\frac{|\xi|^{-\ell-1}}{\ell+1}\,P_{\ell}\left(-|\xi|^{-1}\,\bar{g}(y,\xi)\right)+O(\lambda^{-1}).$ ###### Proof. We abbreviate $\Delta=\Delta_{S_{\xi,\lambda}}$. It follows from (48) and Lemma 47 that ${W}({\Sigma_{\xi,\lambda}})-{W}({{S}_{\xi,\lambda}})=-Q_{S_{\xi,\lambda}}u+O(\lambda^{-5})=-\bar{\Delta}^{2}u-2\,\lambda^{-2}\,\bar{\Delta}u+{O}(\lambda^{-5}).$ Likewise, (45), Proposition 17, Lemma 39, and Lemma 41 imply that $H(\Sigma_{\xi,\lambda})=2\,\lambda^{-1}+O(\lambda^{{-2}}).$ Conversely, by Proposition 17, ${W}({\Sigma_{\xi,\lambda}})+H(\Sigma_{\xi,\lambda})\,\kappa=Y_{1}\in\Lambda_{1}.$ Thus, (14) $\displaystyle\Delta^{2}u+2\,\lambda^{-2}\,\Delta u-2\,\lambda^{-1}\,\kappa=W({{S}_{\xi,\lambda}})-Y_{1}+{O}(\lambda^{-5}).$ By Corollary 45, there holds $W({{S}_{\xi,\lambda}})=\begin{dcases}&\,\,4\,\lambda^{-4}\,\sum_{\ell=0}^{\infty}(\ell-1)\,(\ell+1)\,(\ell+2)\,|\xi|^{\ell}\,P_{\ell}\left(-|\xi|^{-1}\,\bar{g}(y,\xi)\right)+{O}(\lambda^{-5})\quad\text{ if }|\xi|<1-\delta,\\\ &-\,4\,\lambda^{-4}\sum_{\ell=0}^{\infty}(\ell-1)\,\ell\,(\ell+2)\,|\xi|^{-\ell-1}\,P_{\ell}\left(-|\xi|^{-1}\,\bar{g}(y,\xi)\right)+{O}(\lambda^{-5})\,\,\quad\,\text{ if }|\xi|>1+\delta.\end{dcases}$ Projecting (14) onto $\Lambda_{1}$, we find $Y_{1}=O(\lambda^{-5})$. The assertions follow from Corollary 33. 
∎ To relate the variational structure of the area-constrained Willmore equation on the families of surfaces $\\{\Sigma_{\xi,\lambda}:|\xi|<1-\delta\\}$ and $\\{\Sigma_{\xi,\lambda}:|\xi|>1+\delta\\}$ to a 3-dimensional problem, we introduce the functional (15) $\displaystyle F_{\lambda}(\Sigma)=\begin{dcases}&\lambda^{2}\,\bigg{(}\int_{\Sigma}H^{2}\,\text{d}\mu-16\,\pi+64\,\pi\,\lambda^{-1}\bigg{)}\quad\text{ if }\Sigma\text{ is on-center},\\\ &\lambda^{2}\,\bigg{(}\int_{\Sigma}H^{2}\,\text{d}\mu-16\,\pi\bigg{)}\,\hskip 51.21504pt\quad\text{ if }\Sigma\text{ is outlying},\end{dcases}$ for closed, two-sided surfaces $\Sigma\subset M$. Essentially, $F_{\lambda}$ measures the Willmore energy on the relevant scales for on-center and outlying surfaces, respectively. We then define the function (16) $\displaystyle G_{\lambda}:\\{\xi\in\mathbb{R}^{3}:|\xi|<1-\delta\text{ or }|\xi|>1+\delta\\}\to\mathbb{R}\qquad\text{given by}\qquad G_{\lambda}(\xi)=F_{\lambda}(\Sigma_{\xi,\lambda}).$ ###### Lemma 21. There is $\lambda_{0}>1$ depending on $(M,g)$ and $\delta\in(0,1/2)$ with the following property. Let $\lambda>\lambda_{0}$. Then $\Sigma_{\xi,\lambda}$ is an area-constrained Willmore surface if and only if $\xi$ is a critical point of $G_{\lambda}$. ###### Proof. Fix $\xi\in\mathbb{R}^{3}$ with either $|\xi|<1-\delta$ or $|\xi|>1+\delta$. Let $a\in\mathbb{R}^{3}$ with $|a|=1$ and $\epsilon>0$ be small. Note that the normal speed $f$ of the area-preserving variation $\\{\Sigma_{\xi+s\hskip 0.56917pta,\lambda}:|s|<\epsilon\\}$ of $\Sigma$ at $s=0$ is given by $f=g\bigg{(}\nu,\frac{d}{ds}\bigg{|}_{s=0}\Phi^{u_{\xi+s\,a,\lambda}}_{\xi+s\,a,\lambda}\bigg{)}=\lambda\,\bar{g}(a,\bar{\nu})+O(1).$ Assume that $\xi$ is a critical point of $G_{\lambda}$. 
Using (46), we find $\int_{\Sigma_{\xi,\lambda}}W(\Sigma_{\xi,\lambda})\,f\,\text{d}\mu=0,\qquad\int_{\Sigma_{\xi,\lambda}}H(\Sigma_{\xi,\lambda})\,f\,\text{d}\mu=0.$ In particular, $\int_{\Sigma_{\xi,\lambda}}(W(\Sigma_{\xi,\lambda})+\kappa\,H(\Sigma_{\xi,\lambda}))\,(\bar{g}(a,\bar{\nu})+O(\lambda^{-1}))\,\text{d}\mu=0$ for every choice of $a\in\mathbb{R}^{3}$ with $|a|=1$. Since $W(\Sigma_{\xi,\lambda})+\kappa\,H(\Sigma_{\xi,\lambda})\in\Lambda_{1}$ by Proposition 17, it follows that $W(\Sigma_{\xi,\lambda})+\kappa\,H(\Sigma_{\xi,\lambda})=0$ provided $\lambda_{0}>1$ is large enough. Conversely, if $\Sigma_{\xi,\lambda}$ is an area-constrained Willmore surface, then $\int_{\Sigma_{\xi,\lambda}}W(\Sigma_{\xi,\lambda})\,f\,\text{d}\mu=-\kappa\int_{\Sigma_{\xi,\lambda}}H(\Sigma_{\xi,\lambda})\,f\,\text{d}\mu=0.$ In conjunction with Lemma 31, we see that $\xi$ is a critical point of $G_{\lambda}$. ∎ In the next step, we compute the asymptotic expansions of $G_{\lambda}$ as $\lambda\to\infty$. ###### Lemma 22. If $|\xi|<1-\delta$, there holds $\displaystyle G_{\lambda}(\xi)=64\,\pi+\frac{32\,\pi}{1-|\xi|^{2}}-48\,\pi\,|\xi|^{-1}\log\frac{1+|\xi|}{1-|\xi|}-128\,\pi\log(1-|\xi|^{2})+2\,\lambda\int_{\mathbb{R}^{3}\setminus{B_{\lambda}(\lambda\,\xi)}}R\,\text{d}\bar{v}+O(\lambda^{-1}).$ If $|\xi|>1+\delta$, there holds $\displaystyle G_{\lambda}(\xi)=-\frac{32\,\pi}{|\xi|^{2}-1}-48\,\pi\,|\xi|^{-1}\,\log\frac{|\xi|+1}{|\xi|-1}-128\,\pi\log(1-|\xi|^{-2})-2\,\lambda\int_{B_{\lambda}(\lambda\,\xi)}R\,\text{d}\bar{v}+O(\lambda^{-1}).$ ###### Proof. 
Using Lemma 31, we compute $\displaystyle\int_{\Sigma_{\xi,\lambda}}H^{2}(\Sigma_{\xi,\lambda})\,\text{d}\mu=$ $\displaystyle\,\int_{{S}_{\xi,\lambda}}H^{2}\,\text{d}\mu-2\int_{{S}_{\xi,\lambda}}{W}\,u\,\text{d}\mu+\int_{{S}_{\xi,\lambda}}\left[u\,Qu-{W}\,H\,u^{2}\right]\text{d}\mu+O(\lambda^{-3})$ $\displaystyle=$ $\displaystyle\,\int_{{S}_{\xi,\lambda}}H^{2}\,\text{d}\mu-2\int_{{S}_{\xi,\lambda}}{W}\,u\,\text{d}\mu+\int_{{S}_{\xi,\lambda}}u\,Qu\,\text{d}\mu+O(\lambda^{-3}),$ where we have abbreviated $H=H(S_{\xi,\lambda})$, $W=W({S_{\xi,\lambda}})$, $\Delta=\Delta_{S_{\xi,\lambda}}$, and $Q=Q_{S_{\xi,\lambda}}$. Using (57) and (44), we find that $\int_{{S}_{\xi,\lambda}}u\,Qu\,\text{d}\mu=\int_{{S}_{\xi,\lambda}}\left[(\bar{\Delta}u)^{2}+2\,\lambda^{-2}\,u\,\bar{\Delta}u\right]\text{d}\bar{\mu}+O(\lambda^{-3}).$ Conversely, Lemma 20 and (14) imply $\int_{{S}_{\xi,\lambda}}{W}\,u\,\text{d}\mu=\int_{{S}_{\xi,\lambda}}\left[(\bar{\Delta}u)^{2}+2\,\lambda^{-2}\,u\,\bar{\Delta}u-2\,\lambda^{-1}\,\kappa\,u\right]\text{d}\bar{\mu}+O(\lambda^{-3}).$ Using Lemma 20 again, we obtain $\displaystyle 2\,\lambda^{-1}\,\kappa\int_{{S}_{\xi,\lambda}}u\,\text{d}\bar{\mu}=\begin{dcases}&-64\,\pi\,\lambda^{-2}+{O}(\lambda^{-3})\quad\text{ if }|\xi|<1-\delta,\\\ &\,{O}(\lambda^{-3})\hskip 69.70915pt\text{ if }|\xi|>1+\delta.\end{dcases}$ Using Corollary 33, Lemma 20, (50), and Lemma 36 in the case where $|\xi|<1-\delta$, we compute $\displaystyle\,-\int_{{S}_{\xi,\lambda}}\left[(\bar{\Delta}u)^{2}+2\,\lambda^{-2}\,u\,\bar{\Delta}u\right]\text{d}\bar{\mu}$ $\displaystyle=$ $\displaystyle\,-16\,\lambda^{-4}\sum_{\ell=2}^{\infty}\frac{(\ell-1)\,(\ell+1)\,(\ell+2)}{\ell}\,|\xi|^{2\ell}\int_{{S}_{\xi,\lambda}}P_{\ell}^{2}\left(-|\xi|^{-1}\,\bar{g}(y,\xi)\right)\text{d}\bar{\mu}$ $\displaystyle=$ $\displaystyle\,-64\,\pi\,\lambda^{-2}\sum_{\ell=2}^{\infty}\frac{(\ell-1)\,(\ell+1)\,(\ell+2)}{\ell\,(2\,\ell+1)}\,|\xi|^{2\ell}$ $\displaystyle=$ 
$\displaystyle\,-16\,\pi\,\lambda^{-2}\bigg{[}\frac{9}{2}\,|\xi|^{-1}\log\frac{1+|\xi|}{1-|\xi|}+8\,\log(1-|\xi|^{2})+\frac{23\,|\xi|^{2}-12\,|\xi|^{4}-9}{(1-|\xi|^{2})^{2}}\bigg{]}.$ Similarly, if $|\xi|>1+\delta$, we obtain $\displaystyle-\int_{{S}_{\xi,\lambda}}\left[(\bar{\Delta}u)^{2}+2\,\lambda^{-2}\,u\,\bar{\Delta}u\right]\text{d}\bar{\mu}=$ $\displaystyle\,-64\,\pi\,\lambda^{-2}\sum_{\ell=2}^{\infty}\frac{(\ell-1)\,\ell\,(\ell+2)}{(\ell+1)\,(2\,\ell+1)}\,|\xi|^{-2\ell-2}$ $\displaystyle=$ $\displaystyle\,-16\,\pi\,\lambda^{-2}\bigg{[}\frac{9}{2}\,|\xi|^{-1}\log\frac{|\xi|+1}{|\xi|-1}+8\,\log(1-|\xi|^{-2})+\frac{3-|\xi|^{2}}{(|\xi|^{2}-1)^{2}}\bigg{]}.$ We have used Lemma 38 in the last equation. The assertions follow from this and Lemma 42. ∎ In order to proceed, we note the following technical result. ###### Lemma 23. There are $c>0$ and $\lambda_{0}>1$ which only depend on $(M,g)$ and $\delta\in(0,1/2)$ such that $||G_{\lambda}||_{C^{3}(\\{\xi\in\mathbb{R}^{3}:|\xi|\leq 1-\delta\text{ or }|\xi|\geq 1+\delta\\})}<c$ for every $\lambda>\lambda_{0}$. ###### Proof. This estimate follows from a straightforward computation using the regularity properties of the implicit function theorem, the fact that $(M,g)$ is $C^{4}$-asymptotic to Schwarzschild, and the variational formulae for the Willmore energy in the same way as in the proof of Proposition 6 in [6]. ∎ We now investigate the qualitative behavior of $G_{\lambda}$ for large values of $\lambda$. In this analysis, the assumptions (17) $\displaystyle x^{i}\,\partial_{i}(|x|^{2}\,R)$ $\displaystyle\leq o(|x|^{-2}),$ (18) $\displaystyle R(x)-R(-x)$ $\displaystyle=o(|x|^{-4}),$ are used. Note that (17) integrates to (19) $\displaystyle R\geq-o(|x|^{-4}).$ We remark that the weaker decay (20) $\displaystyle R=O(|x|^{-4})$ is implied by the fact that $(M,g)$ is $C^{4}$-asymptotic to Schwarzschild. ###### Lemma 24. Suppose that the scalar curvature $R$ of $(M,g)$ satisfies (17) and (18).
There exist $\tau>0$, $\delta_{0}\in(0,1/2)$, and $\lambda_{0}>1$ depending only on $(M,g)$ such that, provided $\lambda>\lambda_{0}$, $\bar{D}^{2}G_{\lambda}\geq\tau\,\operatorname{Id}$ holds on $\\{\xi\in\mathbb{R}^{3}:|\xi|<\delta_{0}\\}$. Moreover, given $\delta\in(0,1/2)$ and $\delta_{1}\in(0,1-\delta)$, there is $\lambda_{1}>\lambda_{0}$ such that $G_{\lambda}$ is strictly increasing in radial directions on $\\{\xi\in\mathbb{R}^{3}:\delta_{1}<|\xi|<1-\delta\\}$, provided $\lambda>\lambda_{1}$. ###### Proof. We write (21) $\displaystyle G_{\lambda}=G_{1}+G_{\lambda,2}+O(\lambda^{-1})$ where (22) $\displaystyle G_{1}(\xi)=64\,\pi+\frac{32\,\pi}{1-|\xi|^{2}}-48\,\pi\,|\xi|^{-1}\log\frac{1+|\xi|}{1-|\xi|}-128\,\pi\log(1-|\xi|^{2})$ and $G_{\lambda,2}(\xi)=2\,\lambda\int_{\mathbb{R}^{3}\setminus{B_{\lambda}(\lambda\,\xi)}}R\,\text{d}\bar{v}.$ The $C^{4}$-decay of the metric implies that the family of functions $\\{G_{\lambda,2}:\lambda>\lambda_{0}\\}$ is uniformly bounded in $C^{3}(\\{\xi\in\mathbb{R}^{3}:|\xi|\leq 1-\delta\\})$, provided $\lambda_{0}>1$ is sufficiently large. Hence, by Lemma 23 and interpolation, the error term in (21) converges to $0$ in $C^{2}(\\{\xi\in\mathbb{R}^{3}:|\xi|\leq 1-\delta\\})$. We compute $|\xi|^{-1}\,\xi^{i}\,(\partial_{i}G_{\lambda,2})(\xi)=-2\,\lambda^{2}\int_{S_{\xi,\lambda}}|\xi|^{-1}\,\bar{g}(\xi,\bar{\nu})\,R\,\text{d}\bar{\mu}.$ For ease of notation, we will assume that $\xi=(0,\,0,\,\xi_{3})$ and $\xi_{3}>0$. Consider the subsets $S_{+}=\\{x\in S_{\xi,\lambda}:\bar{g}(x,e_{3})\leq 0\\},\quad S_{-}=\\{x\in S_{\xi,\lambda}:\bar{g}(\xi,\bar{\nu}(x))\geq 0\\},\quad-S_{+}=\\{-x:x\in S_{+}\\}.$ Figure 2. An illustration of the proof of Lemma 24. The scalar curvature is compared along the lines connecting $S_{-}$ and $-S_{+}$. The cross marks the origin in the chart at infinity.
Using (18) and (19), we obtain $-2\,\lambda^{2}\int_{S_{\xi,\lambda}}|\xi|^{-1}\,\bar{g}(\xi,\bar{\nu})\,R\,\text{d}\bar{\mu}\geq 2\,\lambda^{2}\int_{-S_{+}}|\xi|^{-1}\,\bar{g}(\xi,\bar{\nu})\,R\,\text{d}\bar{\mu}-2\,\lambda^{2}\int_{S_{-}}|\xi|^{-1}\,\bar{g}(\xi,\bar{\nu})\,R\,\text{d}\bar{\mu}-o(1).$ We parametrize almost all of $S_{-}$ via $\Psi:(0,\pi)\times(0,2\pi)\to S_{-}\qquad\text{ given by }\qquad\Psi(\zeta,\,\varphi)=\lambda\,(\sin\zeta\,\sin\varphi,\,\sin\zeta\,\cos\varphi,\,\cos\zeta+\xi_{3}).$ Likewise, we parametrize almost all of $-S_{+}$ via $(0,\arccos(\xi_{3}))\times(0,2\,\pi)\to- S_{+},\qquad(\theta,\,\varphi)\mapsto\lambda\,(\sin\theta\,\sin\varphi,\,\sin\theta\,\cos\varphi,\,\cos\theta-\xi_{3}).$ As shown in Figure 2, given $\zeta\in(0,\pi)$, there exists a unique angle $\theta=\theta(\zeta)\in(0,\arccos(\xi_{3}))$ and a number $t=t(\zeta)>1$ such that $t\,(\sin\theta\,\sin\varphi,\,\sin\theta\,\cos\varphi,\,\cos\theta-\xi_{3})=(\sin\zeta\,\sin\varphi,\,\sin\zeta\,\cos\varphi,\,\cos\zeta+\xi_{3}).$ Moreover, $t=\frac{\sin\zeta}{\sin\theta}=\frac{\cos\zeta+\xi_{3}}{\cos\theta-\xi_{3}}\qquad\text{and}\qquad\frac{\sin\theta}{\cos\theta-\xi_{3}}=\frac{\sin\zeta}{\cos\zeta+\xi_{3}}.$ Differentiating the last equation, we find $\dot{\theta}=\frac{1+\xi_{3}\,\cos\zeta}{1-\xi_{3}\,\cos\theta}\,\frac{(\cos\theta-\xi_{3})^{2}}{(\cos\zeta+\xi_{3})^{2}}.$ It is elementary to check that $\frac{1+\xi_{3}\,\cos\zeta}{1-\xi_{3}\,\cos\theta}\geq t\,\frac{\cos\zeta}{\cos\theta}$ from which it follows that $\dot{\theta}\,\sin\theta\,\cos\theta\geq t^{-2}\,\sin\zeta\,\cos\zeta.$ Using (19) again, we obtain $\displaystyle 2\,\lambda^{2}\int_{-S_{+}}|\xi|^{-1}\,\bar{g}(\xi,\bar{\nu})\,R\,\text{d}\bar{\mu}-2\,\lambda^{2}\int_{S_{-}}|\xi|^{-1}\,\bar{g}(\xi,\bar{\nu})\,R\,\text{d}\bar{\mu}$ $\displaystyle\geq$ $\displaystyle 
2\,\lambda^{4}\int_{0}^{2\pi}\int_{0}^{\pi/2}\left[t^{-2}\,R(t^{-1}\,\Psi(\zeta,\,\varphi))-R(\Psi(\zeta,\,\varphi))\right]\sin\zeta\,\cos\zeta\,\text{d}\zeta\,\text{d}\varphi-o(1).$ By (17), $\lambda^{4}\,(t^{-2}\,R(t^{-1}\,\Psi(\zeta,\,\varphi))-R(\Psi(\zeta,\,\varphi)))\geq-o(1).$ In particular, $|\xi|^{-1}\,\xi^{i}\,(\partial_{i}G_{\lambda,2})(\xi)\geq-o(1).$ Conversely, it is elementary to check that the function $G_{1}$ defined in (22) is strictly increasing in radial directions on $\\{\xi\in\mathbb{R}^{3}:0<|\xi|<1\\}$. Given $\delta_{1}\in(0,1-\delta)$, it follows that $G_{\lambda}$ is strictly increasing in radial directions on $\\{\xi\in\mathbb{R}^{3}:\delta_{1}<|\xi|<1-\delta\\}$, provided $\lambda>1$ is sufficiently large. It remains to show that $G_{\lambda}$ is strictly convex near the origin. Again, it is elementary to check that $G_{1}$ is strictly convex on $\\{\xi\in\mathbb{R}^{3}:|\xi|<1\\}$. Moreover, given $a\in\mathbb{R}^{3}$ with $|a|=1$, we compute $\displaystyle(\bar{D}^{2}G_{\lambda,2})|_{\xi}(a,\,a)=-2\,\lambda^{2}\int_{S_{\xi,\lambda}}\bigg{(}\lambda\,\bar{g}(a,\bar{\nu})^{2}\,\bar{D}_{\bar{\nu}}R+3\,\bar{g}(a,\bar{\nu})^{2}\,R-R\bigg{)}\,\text{d}\bar{\mu}.$ If $\xi=0$, the growth condition (17) implies that $-\lambda\,\bar{D}_{\bar{\nu}}R\geq 2\,R-o(\lambda^{-4}).$ Combined with (19), this gives $(\bar{D}^{2}G_{\lambda,2})|_{(0,\,0,\,0)}\geq-o(1)\,\operatorname{Id}.$ Using the $C^{4}$-decay of the metric $g$, we conclude that there are $c>0$ and $\delta_{0}\in(0,1-\delta)$, both independent of $\lambda$, such that $(\bar{D}^{2}G_{\lambda,2})|_{\xi}\geq-(o(1)+c\,|\xi|)\operatorname{Id}$ for every $\xi\in\mathbb{R}^{3}$ with $|\xi|<\delta_{0}$, provided $\lambda>1$ is sufficiently large. The assertions of the lemma follow. ∎ ###### Lemma 25. Suppose that the scalar curvature $R$ of $(M,g)$ satisfies (17). 
There is a constant $\lambda_{2}>1$ depending only on $(M,g)$ and $\delta\in(0,1/2)$ such that $G_{\lambda}$ is strictly increasing in radial directions on $\\{\xi\in\mathbb{R}^{3}:1+\delta<|\xi|<1+\delta^{-1}\\}$, provided $\lambda>\lambda_{2}$. ###### Proof. The argument is almost the same as the proof of Lemma 24 except that we do not need to consider the reflection $-S_{+}$. In particular, the assumption (18) is not required. ∎ ###### Proof of Theorem 5. First, we show that for every $\lambda>1$ sufficiently large, there exists a stable area-constrained Willmore surface with area radius $\lambda$. To see this, we decompose $\displaystyle G_{\lambda}(\xi)=G_{1}(\xi)+2\,\lambda\int_{\mathbb{R}^{3}\setminus{B_{\lambda}(\lambda\,\xi)}}R\,\text{d}\bar{v}+O(\lambda^{-1})$ as in (21). Note that $G_{1}(0)=0$ and $G_{1}(\xi)\to\infty$ as $|\xi|\nearrow 1$. Using (20), we find $2\,\lambda\int_{\mathbb{R}^{3}\setminus{B_{\lambda}(0)}}R\,\text{d}\bar{v}=O(1).$ Conversely, (19) implies $2\,\lambda\int_{\mathbb{R}^{3}\setminus{B_{\lambda}(\lambda\,\xi)}}R\,\text{d}\bar{v}\geq-2\,\lambda\int_{\mathbb{R}^{3}\setminus{B_{\lambda\,(1-|\xi|)}(0)}}o(|x|^{-4})\,\text{d}\bar{v}\geq-o(|1-|\xi||^{-1})$ as $\lambda\to\infty$. It follows that there is a number $z\in(1/2,1)$ such that $G_{\lambda}(0)<G_{\lambda}(\xi)$ for every $\xi$ with $|\xi|=z$ and every sufficiently large $\lambda>1$. Consequently, there is a local minimum $\xi(\lambda)$ of $G_{\lambda}$ with $|\xi(\lambda)|<z.$ Lemma 21 shows that $\Sigma(\lambda)=\Sigma_{\xi(\lambda),\lambda}$ is a stable area-constrained Willmore surface. We claim that $\xi(\lambda)=o(1).$ Otherwise, there exists a suitable sequence $\\{\lambda_{j}\\}_{j=1}^{\infty}$ with $\lambda_{j}\to\infty$ as $j\to\infty$ such that * • $\xi({\lambda_{j}})\to\xi_{0}$ with $\xi_{0}\in\\{\xi\in\mathbb{R}^{3}:0<|\xi|<1\\}$, * • $(DG_{\lambda_{j}})|_{\xi(\lambda_{j})}=0$. This is incompatible with Lemma 24.
Lemma 20 implies that $\kappa=\kappa({\Sigma({\lambda})})$ is decreasing and approaches $0$ as $\lambda\to\infty$. By Proposition 48, the surfaces $\\{\Sigma({\lambda}):\lambda>\lambda_{0}\\}$ form a smooth foliation. This finishes the proof of Theorem 5. ∎ ###### Proof of Theorem 8. Suppose, for a contradiction, that there is a sequence of area-constrained Willmore surfaces $\\{\Sigma_{j}\\}_{j=0}^{\infty}$ with $\liminf_{j\to\infty}|\Sigma_{j}|=\infty,\quad\limsup_{j\to\infty}\int_{\Sigma_{j}}|\accentset{\circ}{h}|^{2}\,\text{d}\mu<\epsilon_{0},\quad 0<\liminf_{j\to\infty}\frac{\rho({\Sigma_{j}})}{\lambda({\Sigma_{j}})}\leq\limsup_{j\to\infty}\frac{\rho({\Sigma_{j}})}{\lambda({\Sigma_{j}})}<\infty,$ and $\Sigma_{j}\neq\Sigma({\lambda_{j}})$ where $\lambda_{j}$ is the area radius of $\Sigma_{j}$. For $\epsilon_{0}>0$ small enough, Lemma 4.2 in [24] implies the uniform, scale invariant curvature estimate $||h||_{L^{\infty}(\Sigma_{j})}=O(\lambda_{j}^{-1})$ with corresponding higher-order estimates as well as the estimate $\kappa({\Sigma_{j}})=O(\lambda_{j}^{-3}).$ After passing to a subsequence, the rescaled surfaces $\tilde{\Sigma}_{j}=\lambda_{j}^{-1}\,\Sigma_{j}$ converge smoothly to a Euclidean Willmore surface $\tilde{\Sigma}\subset\mathbb{R}^{3}$ satisfying $\int_{\tilde{\Sigma}}|\accentset{\circ}{\bar{h}}|^{2}\,\text{d}\bar{\mu}<\epsilon_{0},\quad|\tilde{\Sigma}|=4\,\pi,\quad\frac{1}{4\,\pi}\int_{\tilde{\Sigma}}y\,\text{d}\bar{\mu}=\xi_{0}$ where $|\xi_{0}|\neq 1$. Now, the gap theorem [25, Theorem 2.7] for Euclidean Willmore surfaces due to E. Kuwert and R. Schätzle implies that $\tilde{\Sigma}={S}_{1}(\xi_{0})$. It follows that $\Sigma_{j}$ is a perturbation of a coordinate sphere for large $j$. By Proposition 17, it is captured in our Lyapunov-Schmidt reduction in the sense that $\Sigma_{j}=\Sigma_{\xi_{j},\lambda_{j}}$ where $\xi_{j}$ is a critical point of $G_{\lambda_{j}}$ and $\xi_{j}\to\xi_{0}$. If $|\xi_{0}|<1$, Lemma 24 implies that $\xi_{0}=0$. 
However, since $G_{\lambda_{j}}$ is strictly convex near the origin, it follows that $\xi_{j}=\xi({\lambda_{j}})$, a contradiction. If $|\xi_{0}|>1$, we use Lemma 25 instead to obtain a contradiction in a similar way. ∎ ###### Proposition 26. Assumptions as in Theorem 5. There is no sequence $\\{\Sigma_{j}\\}_{j=1}^{\infty}$ of connected, closed area-constrained Willmore surfaces with $\displaystyle\liminf_{j\to\infty}|\Sigma_{j}|=\infty,\quad\liminf_{j\to\infty}m_{H}(\Sigma_{j})$ $\displaystyle>-\infty,\quad\limsup_{j\to\infty}\operatorname{genus}(\Sigma_{j})<\infty$ $\displaystyle 0<\liminf_{j\to\infty}\frac{\rho({\Sigma_{j}})}{\lambda({\Sigma_{j}})}$ $\displaystyle\leq\limsup_{j\to\infty}\frac{\rho({\Sigma_{j}})}{\lambda({\Sigma_{j}})}<\infty$ that are not part of the foliation from Theorem 5. ###### Proof. According to Theorem 8, it suffices to verify that (23) $\displaystyle\limsup_{j\to\infty}\int_{\Sigma_{j}}|\accentset{\circ}{h}|^{2}\,\text{d}\mu=0.$ Let $K$ denote the Gauss curvature of $\Sigma_{j}$. 
Using the Gauss equation in the form $4\,K=2\,R-4\,\operatorname{Ric}(\nu,\nu)+H^{2}-2\,|\accentset{\circ}{h}|^{2}$ and the Gauss-Bonnet theorem $\int_{\Sigma_{j}}K\,\text{d}\mu=4\,\pi\,(1-\operatorname{genus}(\Sigma_{j})),$ we find that (24) $\displaystyle\int_{\Sigma_{j}}H^{2}\,\text{d}\mu=16\,\pi(1-\operatorname{genus}(\Sigma_{j}))+2\int_{\Sigma_{j}}|\accentset{\circ}{h}|^{2}\,\text{d}\mu+2\int_{\Sigma_{j}}\left(2\,\operatorname{Ric}(\nu,\nu)-R\right)\text{d}\mu.$ In particular, (25) $\displaystyle m_{H}(\Sigma_{j})=\sqrt{\frac{|\Sigma_{j}|}{(16\,\pi)^{3}}}\bigg{(}16\,\pi\,\operatorname{genus}(\Sigma_{j})-2\int_{\Sigma_{j}}|\accentset{\circ}{h}|^{2}\,\text{d}\mu-2\int_{\Sigma_{j}}\left(2\,\operatorname{Ric}(\nu,\nu)-R\right)\text{d}\mu\bigg{)}.$ The decay assumptions on $g$ imply that $-\int_{\Sigma_{j}}\left(2\,\operatorname{Ric}(\nu,\nu)-R\right)\text{d}\mu=O(\lambda^{2}(\Sigma_{j})\,\rho^{-3}(\Sigma_{j}))=O(\lambda^{-1}(\Sigma_{j})).$ In particular, $\int_{\Sigma_{j}}|\accentset{\circ}{h}|^{2}\,\text{d}\mu=O(1).$ As in Lemma 41, we compute that $\accentset{\circ}{\bar{h}}=(1+|x|^{-1})^{2}\,\accentset{\circ}{h}+O(|x|^{-2}\,|h|)+O(|x|^{-3})$ and consequently $\int_{\Sigma_{j}}|\accentset{\circ}{\bar{h}}|^{2}\,\text{d}\bar{\mu}=\int_{\Sigma_{j}}|\accentset{\circ}{h}|^{2}\,\text{d}\mu+O(\lambda^{-2}(\Sigma_{j})).$ Now, using the Gauss equation for the Euclidean surface $\Sigma_{j}\subset\mathbb{R}^{3}$ and the Gauss-Bonnet theorem, we find that $16\,\pi-\int_{\Sigma_{j}}\bar{H}^{2}\,\text{d}\bar{\mu}=O(\lambda^{-1}(\Sigma_{j})).$ According to the result [2, Theorem 1.2] of M. Bauer and E. Kuwert, for any fixed genus, there exists an embedded surface which attains the infimum of the Euclidean Willmore energy. Since the round spheres are the only compact surfaces with Euclidean Willmore energy equal to $4\,\pi$, it follows that $\operatorname{genus}(\Sigma_{j})=0$ for $j$ large. Thus, (23) follows directly from (25). ∎ ## 3\.
Proof of Theorem 11 In this section, we study large, far-outlying area-constrained Willmore surfaces with ($L^{2}$-)small traceless second fundamental form. Throughout this section, we will assume that $g$ is $C^{5}$-asymptotic to Schwarzschild. Let $|\xi|>2$ and $\lambda>\lambda_{0}$ for some $\lambda_{0}>1$ large. As before, given $\ell\in\\{0,\,1,\,2,\dots\\}$, we use $\Lambda_{\ell}$ to denote the space of the $\ell$-th spherical harmonics on $S_{\xi,\lambda}$. Likewise, we define $\Lambda_{>2}$, $\Lambda_{>1}$, and $\Lambda_{>0}$ to be the orthogonal complements of $\Lambda_{0}\oplus\Lambda_{1}\oplus\Lambda_{2}$, $\Lambda_{0}\oplus\Lambda_{1}$, and $\Lambda_{0}$ in $C^{4,\alpha}(S_{\xi,\lambda})$, respectively. We suppress the dependence on $\xi$ and $\lambda$ to relax the notation. We recall the definition of the rescaled metric $g_{\xi,\lambda}$ in (13). Note that $||g_{\xi,\lambda}-\bar{g}||_{\mathcal{G}}=O(\lambda^{-1}\,|\xi|^{-1}).$ Consequently, Lemma 16 leads to the following proposition. ###### Proposition 27. There are constants $\lambda_{0}>1$, $c>1$, and $\epsilon>0$ depending on $(M,g)$ such that for every $|\xi|>2$ and $\lambda>\lambda_{0}$ there exist $u_{\xi,\lambda}\in C^{\infty}(S_{\xi,\lambda})$ and $\kappa_{\xi,\lambda}\in\mathbb{R}$ such that the following hold. The surface $\Sigma_{\xi,\lambda}=\Sigma_{\xi,\lambda}(u_{\xi,\lambda})$ has the properties * • $W(\Sigma_{\xi,\lambda})+\kappa_{\xi,\lambda}\,H(\Sigma_{\xi,\lambda})\in\Lambda_{1}$, * • $|\Sigma_{\xi,\lambda}|=4\,\pi\,\lambda^{2}$. 
There holds $u_{\xi,\lambda}\perp\Lambda_{1}$ and (26) $\displaystyle|u_{\xi,\lambda}|+\lambda\,|\nabla u_{\xi,\lambda}|+\lambda^{2}\,|\nabla^{2}u_{\xi,\lambda}|+\lambda^{3}\,|\nabla^{3}u_{\xi,\lambda}|+\lambda^{4}\,|\nabla^{4}u_{\xi,\lambda}|$ $\displaystyle<c\,|\xi|^{-1},$ $\displaystyle\lambda^{3}\,|\kappa_{\xi,\lambda}|$ $\displaystyle<c\,|\xi|^{-1}.$ Moreover, if $\kappa\in\mathbb{R}$ and $\Sigma_{\xi,\lambda}(u)$ with $u\perp\Lambda_{1}(S_{\xi,\lambda})$ are such that * • $\Delta H+(|\accentset{\circ}{h}|^{2}+\operatorname{Ric}(\nu,\nu)+\kappa)\,H\in\Lambda_{1},$ * • $|\Sigma_{\xi,\lambda}(u)|=4\,\pi\,\lambda^{2},$ and $\displaystyle|u|+\lambda\,|\nabla u|+\lambda^{2}\,|\nabla^{2}u|+\lambda^{3}\,|\nabla^{3}u|+\lambda^{4}\,|\nabla^{4}u|$ $\displaystyle<\epsilon\,\lambda,$ $\displaystyle\lambda^{3}\,|\kappa|$ $\displaystyle<\epsilon\,\lambda,$ then $u=u_{\xi,\lambda}$ and $\kappa=\kappa_{\xi,\lambda}$. As in the previous section, we abbreviate $u=u_{\xi,\lambda}$ and $\kappa=\kappa_{\xi,\lambda}$. Lemma 20 suggests that a stronger estimate for $u$ than (26) might hold. However, more care is required since error terms of order $O(\lambda^{-5})$ may be larger than any inverse power of $|\xi|$. First, we introduce some notation. We define $\phi(|x|)=1+|x|^{-1}$ to be the conformal factor of the Schwarzschild metric with mass $m=2$. As in [11], we use a bar underneath a quantity to indicate evaluation at $\lambda\,\xi$. If the quantity includes derivatives, these are taken first before we evaluate. For instance, we have $\underaccent{\bar}{\sigma}=\sigma(\lambda\,\xi),\qquad\bar{D}\underaccent{\bar}{\sigma}=(\bar{D}\sigma)(\lambda\,\xi).$ We note that the decay assumptions and Taylor’s theorem imply for example that $\sigma(\lambda\,\bar{\nu}+\lambda\,\xi)=\underaccent{\bar}{\sigma}+\lambda\,D_{\bar{\nu}}\underaccent{\bar}{\sigma}+O(\lambda^{-2}\,|\xi|^{-4}).$ We now adapt Lemma 20 to the current setting.
In the statement of the following lemma, we let $Y^{ij}_{2}=\frac{1}{2}\,\bigg{[}3\,\bar{g}(\bar{\nu},e_{i})\,\bar{g}(\bar{\nu},e_{j})-\delta_{ij}\bigg{]}\in\Lambda_{2}$ where $\\{e_{1},\,e_{2},\,e_{3}\\}$ denotes the standard basis of $\mathbb{R}^{3}$. ###### Lemma 28. There holds $\displaystyle\kappa$ $\displaystyle=O(\lambda^{-4}\,|\xi|^{-4}),$ $\displaystyle W(\Sigma_{\xi,\lambda})+\kappa\,H(\Sigma_{\xi,\lambda})$ $\displaystyle=O(\lambda^{-5}\,|\xi|^{-3}),$ $\displaystyle u$ $\displaystyle=-2\,|\xi|^{-1}+O(\lambda^{-1}\,|\xi|^{-2})+O(|\xi|^{-3}).$ More precisely, $\displaystyle\operatorname{proj}_{\Lambda_{2}}u$ $\displaystyle=-\frac{1}{3}\,\bigg{[}4\,|\xi|^{-5}\,\xi^{i}\,\xi^{j}+\lambda\,\underaccent{\bar}{\phi}^{-6}\,\underaccent{\bar}{\sigma}(e_{i},e_{j})\bigg{]}Y_{2}^{ij}+O(|\xi|^{-4}),$ $\displaystyle\operatorname{proj}_{\Lambda_{>2}}u$ $\displaystyle=O(|\xi|^{-4})+O(\lambda^{-1}\,|\xi|^{-3}).$ These identities may be differentiated once with respect to $\xi$. ###### Proof. From Corollary 45 we obtain (27) $\displaystyle\operatorname{proj}_{\Lambda_{0}}W({S_{\xi,\lambda}})=$ $\displaystyle\,O(\lambda^{-5}\,|\xi|^{-4}),$ $\displaystyle\operatorname{proj}_{\Lambda_{1}}W({S_{\xi,\lambda}})=$ $\displaystyle\,O(\lambda^{-5}\,|\xi|^{-3}),$ $\displaystyle\operatorname{proj}_{\Lambda_{2}}W({S_{\xi,\lambda}})=$ $\displaystyle\,O(\lambda^{-4}\,|\xi|^{-3})+O(\lambda^{-5}\,|\xi|^{-2}),$ $\displaystyle\operatorname{proj}_{\Lambda_{>2}}W({S_{\xi,\lambda}})=$ $\displaystyle\,O(\lambda^{-4}\,|\xi|^{-4})+O(\lambda^{-5}\,|\xi|^{-3}).$ We consider the family of surfaces $\\{\Phi^{t\,u}_{\xi,\lambda}(S_{\xi,\lambda}):t\in[0,1]\\},$ where $\Phi^{t\,u}_{\xi,\lambda}:S_{\xi,\lambda}\to M$ is as in (11).
The initial velocity of this variation with respect to the metric $g$ is given by $\displaystyle w=u\,g(\bar{\nu},\nu).$ Note that (28) $\displaystyle g(\bar{\nu},\nu)=\phi^{2}+O(\lambda^{-2}\,|\xi|^{-2}).$ By (26) and Taylor’s theorem, (29) $\displaystyle Q_{S_{\xi,\lambda}}w=W({S_{\xi,\lambda}})-W({\Sigma_{\xi,\lambda}})+O(\lambda^{-5}\,|\xi|^{-2}).$ Using (57), we find that $\displaystyle Q_{S_{\xi,\lambda}}w$ $\displaystyle=\Delta^{2}_{S_{\xi,\lambda}}w+2\,\lambda^{-2}\,\phi^{-4}\,\Delta_{S_{\xi,\lambda}}w+O(\lambda^{-5}\,|\xi|^{-2})$ $\displaystyle=\phi^{-6}\,(\bar{\Delta}^{2}_{S_{\xi,\lambda}}u+2\,\lambda^{-2}\,\bar{\Delta}_{S_{\xi,\lambda}}u)+O(\lambda^{-5}\,|\xi|^{-2}).$ By Proposition 27, we have $W({\Sigma_{\xi,\lambda}})+\kappa\,H({\Sigma_{\xi,\lambda}})=Y_{1}\in\Lambda_{1}.$ Moreover, (45), Proposition 27, Lemma 39, and Lemma 41 imply that $\operatorname{proj}_{\Lambda_{0}}H({\Sigma_{\xi,\lambda}})=2\,\lambda^{-1}+O(\lambda^{-2}\,|\xi|^{-1}).$ Thus, projecting (29) onto $\Lambda_{0}$, we find $\kappa=O(\lambda^{-4}\,|\xi|^{-2}).$ Next, we observe that $\operatorname{proj}_{\Lambda_{>0}}H({\Sigma_{\xi,\lambda}})=O(\lambda^{-2}\,|\xi|^{-1}).$ Projecting (29) onto $\Lambda_{1}$, we conclude that $Y_{1}=O(\lambda^{-5}\,|\xi|^{-2}).$ Similarly, we obtain (30) $\displaystyle\operatorname{proj}_{\Lambda_{>1}}u=O(\lambda^{-1}\,|\xi|^{-2})+O(|\xi|^{-3}).$ Finally, Lemma 40, (26), and the formula for the first variation of area imply $\int_{S_{\xi,\lambda}}H({S_{\xi,\lambda}})\,w\,\text{d}\mu=-16\,\pi\,\lambda\,|\xi|^{-1}+O(|\xi|^{-2}).$ Using the improved estimates (30) for $\operatorname{proj}_{\Lambda_{>1}}u$ and arguing as in the proof of Lemma 19, we obtain $\operatorname{proj}_{\Lambda_{0}}u=-2\,|\xi|^{-1}+O(\lambda^{-1}\,|\xi|^{-2}).$ To remove this scaling effect of the perturbation, we define (31) $\displaystyle\tilde{u}=u+2\,|\xi|^{-1},\qquad\qquad\tilde{\lambda}=\lambda-2\,|\xi|^{-1},\qquad\qquad\tilde{\xi}=\lambda\,(\lambda-2\,|\xi|^{-1})^{-1}\,\xi,$ 
and note that $\lambda\,\xi=\tilde{\lambda}\,\tilde{\xi}$. Then, $\Sigma_{\xi,\lambda}=\Phi^{\tilde{u}}_{\tilde{\xi},\tilde{\lambda}}(S_{\tilde{\xi},\tilde{\lambda}})$ and there holds (32) $\displaystyle\tilde{u}=O(\lambda^{-1}\,|\xi|^{-2})+O(|\xi|^{-3}).$ We now repeat the above argument to obtain an improved estimate for $\tilde{u}$. As before, we consider the family of surfaces $\\{\Phi^{t\,\tilde{u}}_{\tilde{\xi},\tilde{\lambda}}(S_{\tilde{\xi},\tilde{\lambda}}):t\in[0,1]\\}.$ The initial velocity of this family with respect to the metric $g$ is given by $\tilde{w}=\tilde{u}\,g(\bar{\nu},\nu).$ Note that (32) implies the improved estimate (33) $\displaystyle Q_{S_{\tilde{\xi},\tilde{\lambda}}}\tilde{w}=W({S_{\tilde{\xi},\tilde{\lambda}}})-W({\Sigma_{\xi,\lambda}})+O(\lambda^{-6}\,|\xi|^{-4})+O(\lambda^{-5}\,|\xi|^{-6}).$ Revisiting equation (57) and recalling (28), we find $\displaystyle Q_{S_{\tilde{\xi},\tilde{\lambda}}}\tilde{w}$ $\displaystyle=\Delta^{2}_{S_{\tilde{\xi},\tilde{\lambda}}}\tilde{w}+2\,\tilde{\lambda}^{-2}\,\phi^{-4}\Delta_{S_{\tilde{\xi},\tilde{\lambda}}}\tilde{w}+O(\lambda^{-6}\,|\xi|^{-4})+O(\lambda^{-5}\,|\xi|^{-6})$ $\displaystyle=\phi^{-6}\,(\bar{\Delta}^{2}_{S_{\tilde{\xi},\tilde{\lambda}}}\tilde{u}+2\,\tilde{\lambda}^{-2}\,\bar{\Delta}_{S_{\tilde{\xi},\tilde{\lambda}}}\tilde{u})+O(\lambda^{-6}\,|\xi|^{-4})+O(\lambda^{-5}\,|\xi|^{-6}).$ Arguing as before, this improved estimate yields $\kappa=O(\lambda^{-4}\,|\xi|^{-4}),\qquad Y_{1}=O(\lambda^{-5}\,|\xi|^{-3}),\qquad\operatorname{proj}_{\Lambda_{>2}}\tilde{u}=O(|\xi|^{-4})+O(\lambda^{-1}\,|\xi|^{-3}).$ Finally, we project (33) onto $\Lambda_{2}$.
Recalling Corollary 45, we find (34) $\displaystyle\bar{\Delta}^{2}_{S_{\tilde{\xi},\tilde{\lambda}}}\operatorname{proj}_{\Lambda_{2}}\tilde{u}$ $\displaystyle\,+2\,\tilde{\lambda}^{-2}\,\bar{\Delta}_{S_{\tilde{\xi},\tilde{\lambda}}}\operatorname{proj}_{\Lambda_{2}}\tilde{u}$ $\displaystyle=$ $\displaystyle\,\operatorname{proj}_{\Lambda_{2}}(\phi^{6}\,W({S_{\tilde{\xi},\tilde{\lambda}}}))+O(\lambda^{-6}\,|\xi|^{-4})+O(\lambda^{-5}\,|\xi|^{-6})$ $\displaystyle=$ $\displaystyle\,-32\,\tilde{\lambda}^{-4}\,|\xi|^{-3}\,P_{2}\left(-|\xi|^{-1}\,\bar{g}(\bar{\nu},\xi)\right)-4\,\tilde{\lambda}^{-3}\,\underaccent{\bar}{\phi}^{-4}\,(3\,\underaccent{\bar}{\sigma}(\bar{\nu},\bar{\nu})-\operatorname{tr}\underaccent{\bar}{\sigma})+O(\lambda^{-4}\,|\xi|^{-4}).$ To conclude, we use Corollary 33 and the estimate (35) $\displaystyle\tilde{\lambda}=\underaccent{\bar}{\phi}^{-2}\,\lambda+O(\lambda^{-1}\,|\xi|^{-2}),$ which is immediate from the definition (31). ∎ We recall from (16) that the function $G_{\lambda}:\\{\xi\in\mathbb{R}^{3}:|\xi|>2\\}\to\mathbb{R}$ is given by $G_{\lambda}(\xi)=\lambda^{2}\,\bigg{(}\int_{\Sigma_{\xi,\lambda}}H^{2}\,\text{d}\mu-16\,\pi\bigg{)}.$ Using the improved estimates for $u$ obtained in Lemma 28, we now show that the expansion in Lemma 22 holds with better error control. This is the key step in the proof of Theorem 11. ###### Lemma 29. There holds $\displaystyle G_{\lambda}(\xi)=-\frac{128\,\pi}{15}\,|\xi|^{-6}-2\,\lambda\int_{B_{\lambda}(\lambda\,\xi)}R\,\text{d}\bar{v}+O(\lambda^{-1}\,|\xi|^{-6})+O(|\xi|^{-7}).$ This identity may be differentiated once with respect to $\xi$. ###### Proof. 
We recall from the proof of Lemma 28 that $\Sigma_{\xi,\lambda}=\Phi^{\tilde{u}}_{\tilde{\xi},\tilde{\lambda}}(S_{\tilde{\xi},\tilde{\lambda}})$, where $\tilde{u}=u+2\,|\xi|^{-1},\qquad\qquad\tilde{\lambda}=\lambda-2\,|\xi|^{-1},\qquad\qquad\tilde{\xi}=\lambda\,(\lambda-2\,|\xi|^{-1})^{-1}\,\xi.$ As in the proof of Lemma 28, we consider the family of surfaces $\\{\Phi^{t\,\tilde{u}}_{\tilde{\xi},\tilde{\lambda}}(S_{\tilde{\xi},\tilde{\lambda}}):t\in[0,1]\\}$ connecting $S_{\tilde{\xi},\tilde{\lambda}}$ and $\Sigma_{\xi,\lambda}$. Recall the functional $F_{\lambda}$ defined in (15). We compute the Taylor expansion of the function $[0,1]\to\mathbb{R},\qquad t\mapsto F_{\lambda}(\Phi^{t\,\tilde{u}}_{\tilde{\xi},\tilde{\lambda}}(S_{\tilde{\xi},\tilde{\lambda}}))$ at $t=0$. To this end, we abbreviate $W=W(S_{\tilde{\xi},\tilde{\lambda}})$ and $Q=Q_{S_{\tilde{\xi},\tilde{\lambda}}}$. Since the initial velocity is given by $\phi^{2}\,\tilde{u}$, we find, using Lemma 31, Lemma 28, as well as (27), that $\displaystyle F_{\lambda}(\Sigma_{\xi,\lambda})$ $\displaystyle=F_{\lambda}(S_{\tilde{\xi},\tilde{\lambda}})-2\,\lambda^{2}\int_{S_{\tilde{\xi},\tilde{\lambda}}}\phi^{2}\,W\,\tilde{u}\,\text{d}\mu+\lambda^{2}\int_{S_{\tilde{\xi},\tilde{\lambda}}}\phi^{4}\,\left[\tilde{u}\,Q\tilde{u}-W\,H\,\tilde{u}^{2}\right]\text{d}\mu+O(\lambda^{-1}\,|\xi|^{-6})$ $\displaystyle=F_{\lambda}(S_{\tilde{\xi},\tilde{\lambda}})-2\,\lambda^{2}\int_{S_{\tilde{\xi},\tilde{\lambda}}}\phi^{2}\,W\,\tilde{u}\,\text{d}\mu+\lambda^{2}\int_{S_{\tilde{\xi},\tilde{\lambda}}}\phi^{4}\,\tilde{u}\,Q\tilde{u}\,\text{d}\mu+O(\lambda^{-1}\,|\xi|^{-6})$ $\displaystyle=F_{\lambda}(S_{\tilde{\xi},\tilde{\lambda}})-2\,\lambda^{2}\int_{S_{\tilde{\xi},\tilde{\lambda}}}\operatorname{proj}_{\Lambda_{2}}(\phi^{6}\,W)\,\operatorname{proj}_{\Lambda_{2}}\tilde{u}\,\text{d}\bar{\mu}+\lambda^{2}\int_{S_{\tilde{\xi},\tilde{\lambda}}}\phi^{4}\,\tilde{u}\,Q\tilde{u}\,\text{d}\mu+O(\lambda^{-1}\,|\xi|^{-6}).$ Revisiting
equation (57) again and using (34), we find $\displaystyle\lambda^{2}\int_{S_{\tilde{\xi},\tilde{\lambda}}}\phi^{4}\,\tilde{u}\,Q\tilde{u}\,\text{d}\mu$ $\displaystyle=\lambda^{2}\int_{S_{\tilde{\xi},\tilde{\lambda}}}\left[(\bar{\Delta}\tilde{u})^{2}+2\,\tilde{\lambda}^{-2}\,\tilde{u}\,\bar{\Delta}\tilde{u}\right]\text{d}\bar{\mu}+O(\lambda^{-1}\,|\xi|^{-6})$ $\displaystyle=\lambda^{2}\int_{S_{\tilde{\xi},\tilde{\lambda}}}\operatorname{proj}_{\Lambda_{2}}(\phi^{6}\,W)\,\operatorname{proj}_{\Lambda_{2}}\tilde{u}\,\text{d}\bar{\mu}+O(\lambda^{-1}\,|\xi|^{-6})$ $\displaystyle=24\,\lambda^{2}\,\tilde{\lambda}^{-4}\int_{S_{\tilde{\xi},\tilde{\lambda}}}(\operatorname{proj}_{\Lambda_{2}}\tilde{u})^{2}\,\text{d}\bar{\mu}+O(\lambda^{-1}\,|\xi|^{-6}).$ It follows that (36) $\displaystyle F_{\lambda}(\Sigma_{\xi,\lambda})=F_{\lambda}(S_{\tilde{\xi},\tilde{\lambda}})-24\,\lambda^{2}\,\tilde{\lambda}^{-4}\int_{S_{\tilde{\xi},\tilde{\lambda}}}(\operatorname{proj}_{\Lambda_{2}}\tilde{u})^{2}\,\text{d}\bar{\mu}+O(\lambda^{-1}|\xi|^{-6}).$ For ease of notation, we assume that $\xi$ is a multiple of $e_{3}$. 
Using Lemma 28 and Lemma 35, we compute $\displaystyle-24\,\lambda^{2}\,\tilde{\lambda}^{-4}$ $\displaystyle\,\int_{S_{\tilde{\xi},\tilde{\lambda}}}(\operatorname{proj}_{\Lambda_{2}}\tilde{u})^{2}\,\text{d}\bar{\mu}$ $\displaystyle=$ $\displaystyle\,-\frac{8}{3}\,\lambda^{2}\,\tilde{\lambda}^{-2}\int_{S_{1}(0)}\bigg{(}16\,|\xi|^{-6}\,Y^{3,3}_{2}\,Y^{3,3}_{2}+8\,\lambda\,|\xi|^{-3}\,\underaccent{\bar}{\phi}^{-6}\,\underaccent{\bar}{\sigma}(e_{i},e_{j})\,Y^{3,3}_{2}\,Y^{ij}_{2}$ $\displaystyle\,\qquad\qquad\qquad\qquad+\lambda^{2}\,\underaccent{\bar}{\phi}^{-12}\,\underaccent{\bar}{\sigma}(e_{i},e_{j})\,\underaccent{\bar}{\sigma}(e_{k},e_{\ell})\,Y_{2}^{ij}\,Y^{k\ell}_{2}\bigg{)}\,\text{d}\bar{\mu}$ (37) $\displaystyle=$ $\displaystyle\,-\frac{512\,\pi}{15}\,\lambda^{2}\,\tilde{\lambda}^{-2}\,|\xi|^{-6}-\frac{128\,\pi}{15}\,\lambda^{3}\,\tilde{\lambda}^{-2}\,|\xi|^{-3}\,\left(3\,|\xi|^{-2}\,\underaccent{\bar}{\sigma}(\xi,\xi)-\operatorname{tr}\underaccent{\bar}{\sigma}\right)-\frac{48\,\pi}{15}\,\lambda^{4}\,\tilde{\lambda}^{-2}\,\underaccent{\bar}{\phi}^{-12}\,|\underaccent{\bar}{\sigmacirc}|^{2}$ $\displaystyle\,+O(\lambda^{-1}\,|\xi|^{-6})$ $\displaystyle=$ $\displaystyle\,-\frac{512\,\pi}{15}\,|\xi|^{-6}-\frac{128\,\pi}{15}\,\lambda\,|\xi|^{-3}\,\left(3\,|\xi|^{-2}\,\underaccent{\bar}{\sigma}(\xi,\xi)-\operatorname{tr}\underaccent{\bar}{\sigma}\right)-\frac{48\,\pi}{15}\,\lambda^{2}\,\underaccent{\bar}{\phi}^{-8}\,|\underaccent{\bar}{\sigmacirc}|^{2}$ $\displaystyle\,+O(\lambda^{-1}\,|\xi|^{-6}).$ In the last equation, we have used (35). 
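The leading constant in (37) can be sanity-checked against Lemma 35: $\int_{S_{1}(0)}(Y^{3,3}_{2})^{2}\,\text{d}\bar{\mu}=\frac{\pi}{5}\,(3+3-2)=\frac{4\,\pi}{5}$, and $\frac{8}{3}\cdot 16\cdot\frac{4\,\pi}{5}=\frac{512\,\pi}{15}$. The following symbolic verification is our own check, not part of the proof:

```python
import sympy as sp

theta, phi = sp.symbols('theta phi')

# y^3 = cos(theta) on the unit sphere; Y_2^{33} = (3 (y^3)^2 - 1)/2, cf. Lemma 34.
Y233 = (3 * sp.cos(theta)**2 - 1) / 2

# Surface measure on S_1(0): sin(theta) dtheta dphi.
integral = sp.integrate(
    Y233**2 * sp.sin(theta), (theta, 0, sp.pi), (phi, 0, 2 * sp.pi)
)

assert integral == sp.Rational(4, 5) * sp.pi  # Lemma 35 with i = j = k = l = 3
assert sp.Rational(8, 3) * 16 * integral == sp.Rational(512, 15) * sp.pi
```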
On the other hand, Lemma 42 and Taylor’s theorem give that $\displaystyle F_{\lambda}(S_{\tilde{\xi},\tilde{\lambda}})=$ $\displaystyle\,\lambda^{2}\,\tilde{\lambda}^{-2}\,F_{\tilde{\lambda}}(S_{\tilde{\xi},\tilde{\lambda}})$ (38) $\displaystyle=$ $\displaystyle\,\frac{128\,\pi}{5}\,|\xi|^{-6}-2\,\lambda^{2}\,\tilde{\lambda}^{-1}\,\underaccent{\bar}{\phi}^{4}\int_{{B_{\tilde{\lambda}}(\tilde{\lambda}\,\tilde{\xi})}}R\,\text{d}\bar{v}$ $\displaystyle\,+\frac{48\,\pi}{15}\,\lambda^{2}\,\underaccent{\bar}{\phi}^{-8}\,|\underaccent{\bar}{\sigmacirc}|^{2}+\frac{128\,\pi}{15}\,\lambda\,|\xi|^{-3}\,\left(3\,|\xi|^{-2}\,\underaccent{\bar}{\sigma}(\xi,\xi)-\operatorname{tr}\underaccent{\bar}{\sigma}\right)+O(\lambda^{-1}\,|\xi|^{-6})+O(|\xi|^{-7}).$ Finally, using the decay of the metric as well as (35), we find (39) $\displaystyle\lambda^{2}\,\tilde{\lambda}^{-1}\,\underaccent{\bar}{\phi}^{4}\int_{{B_{\tilde{\lambda}}(\tilde{\lambda}\,\tilde{\xi})}}R\,\text{d}\bar{v}=$ $\displaystyle\,\lambda\,\underaccent{\bar}{\phi}^{6}\int_{{B_{\underaccent{\bar}{\phi}^{-2}\,\lambda}(\lambda\,\xi)}}R\,\text{d}\bar{v}+O(\lambda^{-1}\,|\xi|^{-6})$ $\displaystyle=$ $\displaystyle\,\lambda\int_{B_{\lambda}(\lambda\,\xi)}R\,\text{d}\bar{v}+O(\lambda^{-1}\,|\xi|^{-6}).$ Assembling (36), (37), (38), and (39), the assertion follows. ∎ ###### Proof of Theorem 11.
Suppose, for a contradiction, that there exists a sequence of outlying area-constrained Willmore surfaces $\\{\Sigma_{j}\\}_{j=1}^{\infty}$ with $\lim_{j\to\infty}|\Sigma_{j}|=\infty,\qquad\limsup_{j\to\infty}\int_{\Sigma_{j}}|\accentset{\circ}{h}|^{2}\,\text{d}\mu<\epsilon_{0},\qquad\lim_{j\to\infty}\frac{\rho({\Sigma_{j}})}{\lambda({\Sigma_{j}})}=\infty.$ As in the proof of Theorem 8, we may assume that $\Sigma_{j}=\Sigma_{\xi_{j},\lambda_{j}}$ for suitable $\xi_{j}\in\mathbb{R}^{3}$ and $\lambda_{j}\in(\lambda_{0},\infty)$ where $|\xi_{j}|\to\infty\qquad\text{and}\qquad\lambda_{j}\to\infty.$ Arguing as in the proof of Lemma 25, but this time using the exact growth condition $x^{i}\,\partial_{i}(|x|^{2}\,R)(x)\leq 0$ (note that this condition integrates to $R\geq 0$), we find that $\xi^{i}\,\partial_{i}\bigg{(}-2\,\lambda\int_{B_{\lambda}(\lambda\,\xi)}R\,\text{d}\bar{v}\bigg{)}\geq 0.$ In conjunction with Lemma 29, this gives $\xi^{i}\,(\partial_{i}G_{\lambda})(\xi)\geq\frac{256\,\pi}{5}\,|\xi|^{-6}+O(\lambda^{-1}\,|\xi|^{-6})+O(|\xi|^{-7}).$ In particular, $\xi_{j}^{i}\,(\partial_{i}G_{\lambda_{j}})(\xi_{j})>0.$ This is incompatible with Lemma 21. ∎

## 4\. Proof of Theorems 13, 14 and 15

We first prove Theorem 13 and Theorem 15. To this end, we adapt a construction from [11, §3 and §6]. The metrics in this construction are rotationally symmetric. Their scalar curvature has a pulse. We briefly recall some steps from [11]. Given a function $S:(0,\infty)\to(-\infty,0]$ with $S^{(\ell)}=O(s^{-4-\ell})$ for every integer $\ell\geq 0$, we define the function $\Psi:(0,\infty)\to(-\infty,0]$ by (40) $\displaystyle\Psi(s)=s^{-1}\int_{s}^{\infty}(t-s)\,t\,S(t)\,\text{d}t.$ Note that $\Psi^{(\ell)}=O(s^{-2-\ell})$ for every integer $\ell\geq 0$. The metric $g=(1+|x|^{-1}+\Psi(|x|))^{4}\,\bar{g}$ on $\mathbb{R}^{3}\setminus\\{0\\}$ is smoothly asymptotic to Schwarzschild with mass $m=2$.
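The scalar curvature of this conformally flat metric is governed by the identity $\bar{\Delta}\Psi=S$ for the radial potential $\Psi$ in (40): since $\bar{\Delta}(1+|x|^{-1})=0$ away from the origin and $R=-8\,\varphi^{-5}\,\bar{\Delta}\varphi$ for $g=\varphi^{4}\,\bar{g}$, one gets $R=-8\,(1+O(|x|^{-1}))\,S$. As a quick symbolic sanity check (ours, not from the text), the following snippet verifies $\bar{\Delta}\Psi=S$ for the sample profile $S(t)=-t^{-4}$, which satisfies the stated decay:

```python
import sympy as sp

s, t = sp.symbols('s t', positive=True)

# Sample profile S(t) <= 0 with S^(l) = O(t^(-4-l)), as required in the text.
S = -t**-4

# Psi from (40): Psi(s) = s^-1 * int_s^oo (t - s) t S(t) dt.
Psi = sp.integrate((t - s) * t * S, (t, s, sp.oo)) / s

# Radial Laplacian on R^3: Psi'' + (2/s) Psi' should reproduce S.
lap = sp.diff(Psi, s, 2) + 2 * sp.diff(Psi, s) / s

assert sp.simplify(lap - S.subs(t, s)) == 0
assert sp.simplify(Psi + 1 / (2 * s**2)) == 0  # here Psi = -1/(2 s^2) = O(s^-2)
```

The same identity holds for any admissible $S$, since $\text{d}^{2}(s\,\Psi)/\text{d}s^{2}=s\,S(s)$ by differentiating (40) twice.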
Its scalar curvature $R$ is given by (41) $\displaystyle R(x)=-8\,(1+O(|x|^{-1}))\,S(|x|).$ In particular, $R\geq 0$ outside of a compact set. ###### Proof of Theorem 13. First, as shown in Figure 3, we construct a metric $g_{2}$ which admits large outlying area-constrained Willmore surfaces $\\{\Sigma_{j}\\}_{j=1}^{\infty}$ with $2\sqrt{2}<\frac{\rho({\Sigma_{j}})}{\lambda({\Sigma_{j}})}<5\qquad\text{and}\qquad m_{H}(\Sigma_{j})>-o(1).$ Let $\chi\in C^{\infty}(\mathbb{R})$ be such that $\chi(t)>0$ for all $t\in(3,4)$ and $\operatorname{supp}\chi\subset[3,4]$. Let $S(s)=-B\sum_{k=0}^{\infty}10^{-4\,k}\,\chi(10^{-k}\,s).$ The constant $B>0$ will be chosen (large) later. Let $j\geq 1$ be a large integer and $\lambda_{j}=10^{j}$. Recall the definition of $G_{\lambda}$ in (16). Note that $G_{\lambda_{j}}$ is rotationally symmetric on $\\{\xi\in\mathbb{R}^{3}:|\xi|>2\\}$. As in the proof of Lemma 24, we have $G_{\lambda_{j}}=G_{1}+G_{2,\lambda_{j}}+o(1).$ This expansion may be differentiated twice. Here, $G_{1}$ is strictly increasing in radial directions and independent of both $\lambda$ and $B$, while $\displaystyle\xi^{i}\,(\partial_{i}G_{2,\lambda_{j}})(\xi)=-2\,\lambda_{j}^{2}\int_{S_{\xi,\lambda_{j}}}\bar{g}(\xi,\bar{\nu})\,R\,\text{d}\bar{\mu}=-16\,B\int_{S_{1}(\xi)}\bar{g}(\xi,\bar{\nu})\,\chi\,\text{d}\bar{\mu}+o(1).$ The integral on the right hand side vanishes if $|\xi|=5$ and is negative if $|\xi|=2\sqrt{2}$. Thus, using that $G_{\lambda_{j}}$ is rotationally symmetric and that $G_{1}$ is strictly increasing in radial directions, we may increase $B>0$ appropriately so that $G_{\lambda_{j}}$ attains a local minimum at some $\xi_{j}\in\mathbb{R}^{3}$ with $2\,\sqrt{2}<|\xi_{j}|<5$ for every sufficiently large $j$. Figure 3. An illustration of the construction for the proof of Theorem 13. The scalar curvature is positive in the shaded region and vanishes elsewhere. On the left, the surface $\Sigma_{\xi,\lambda_{j}}$ with $|\xi|=5$ is shown.
It does not overlap with the shaded region. In particular, the radial derivative of $G_{2,\lambda_{j}}$ vanishes. On the right, the surface corresponds to the choice $|\xi|=2\,\sqrt{2}$. If this surface is moved upwards, the overlap with the shaded region increases. The radial derivative of $G_{2,\lambda_{j}}$ is negative. Next, we construct a metric $g_{1}$ which admits large on-center area-constrained Willmore surfaces that are not part of the foliation from Theorem 5. This time, we choose a smooth function $\chi$ such that $\chi(t)>0$ for all $t\in(9/8,11/8)$ and $\operatorname{supp}\chi\subset[9/8,11/8]$. We write $G_{\lambda_{j}}=G_{1}+G_{2,\lambda_{j}}+o(1).$ As before, $G_{1}$ is strictly increasing in radial directions and independent of both $\lambda$ and $B$, while $\displaystyle\xi^{i}\,(\partial_{i}G_{2,\lambda_{j}})(\xi)=-16\,B\int_{S_{1}(\xi)}\bar{g}(\xi,\bar{\nu})\,\chi\,\text{d}\bar{\mu}+o(1).$ The integral on the right hand side is negative if $|\xi|=1/4$ and positive if $|\xi|=7/8$. Thus, we may again increase $B$ appropriately such that for every $j$ large, $G_{\lambda_{j}}$ attains a local minimum at some $\xi_{j}\in\mathbb{R}^{3}$ with $1/4<|\xi_{j}|<7/8.$ This completes the proof of Theorem 13. ∎ ###### Proof of Theorem 15. To construct $g_{4}$, we choose a suitable function $\Psi$ in (40) which satisfies (42) $\displaystyle\Psi^{(\ell)}=O(s^{-3-\ell}).$ In particular, $\Psi$ decays one order faster than the perturbations used to construct $g_{1}$ and $g_{2}$. Due to the fast decay of the Schwarzschild contribution, this perturbation will still be strong enough to admit large far-outlying area-constrained Willmore surfaces with Hawking masses bounded from below. More precisely, we choose $\chi\in C^{\infty}(\mathbb{R})$ with $\chi(t)>0$ for all $t\in(4,6)$, $\operatorname{supp}\chi\subset[4,6]$, and $\chi^{\prime}(5)=1$.
Let $S(s)=-\sum_{k=0}^{\infty}10^{-5\,k}\,\chi(10^{-k}\,s)$ and note that $S^{(\ell)}=O(s^{-5-\ell})$ for every integer $\ell\geq 0$. This ensures that (42) holds. Now, let $j\geq 1$ be large, $\lambda_{j}=10^{j}$, and $\xi_{t}=t\,10^{j}\,a$ where $t\in[3,7]$ and $a\in\mathbb{R}^{3}$ is such that $|a|=1$. From (41), we see that $R=O(|x|^{-5}).$ Using Taylor’s theorem, we find that $-2\,\lambda\int_{B_{\lambda_{j}}(\lambda_{j}\,\xi_{t})}R\,\text{d}\bar{v}=-\frac{8\,\pi}{3}\,\lambda_{j}^{4}\,\underaccent{\bar}{R}+O(\lambda^{-1}\,|\xi_{t}|^{-6})+O(|\xi_{t}|^{-7}).$ Lemma 29 implies that $\displaystyle G_{\lambda_{j}}(\xi_{t})$ $\displaystyle=-\frac{128\,\pi}{15}\,|\xi_{t}|^{-6}-\frac{8\,\pi}{3}\,\lambda_{j}^{4}\,\underaccent{\bar}{R}+O(\lambda^{-1}\,|\xi_{t}|^{-6})+O(|\xi_{t}|^{-7})$ $\displaystyle=-\frac{128\,\pi}{15}\,t^{-6}\,10^{-6j}-\frac{64\,\pi}{3}\,\chi(t)\,10^{-6j}+O(10^{-7j}).$ The derivative of the quantity on the right hand side with respect to $t$ is positive if $t=7$ and negative if $t=5$ provided $j$ is large. Since $G_{\lambda_{j}}$ is rotationally symmetric, it follows that, for every $j$ large, there is a number $t_{j}\in[5,7]$ with $(\bar{D}G_{\lambda_{j}})(\xi_{j})=0$ where $\xi_{j}=t_{j}\,10^{j}\,a$. In particular, $\\{\Sigma_{\xi_{j},\lambda_{j}}\\}_{j=1}^{\infty}$ is a sequence of far-outlying stable area-constrained Willmore surfaces with diverging area and Hawking mass bounded from below. ∎ ###### Remark 30. The proofs of Theorem 13 and Theorem 15 follow the proofs of Theorem 1.3 and 1.8 in [11] closely. Note that the analysis of the function $G_{\lambda}$ differs from the analysis of the reduced area functional in [11]. The construction of the metric $g_{1}$ has features not considered in [11]. Figure 4. An illustration of the construction for the proof of Theorem 14. The odd part of the scalar curvature is positive in the shaded region and negative in the hatched region. 
On the left and right, the surfaces $\Sigma_{\xi,\lambda_{j}}$ corresponding to the choices $\xi=0$ and $\xi=t\,e_{1}$ for some small $t>0$, respectively, are shown. The latter has larger overlap with the shaded region, causing the derivative of $G_{2,\lambda_{j}}$ in direction $e_{1}$ to be negative at $\xi=0$. ###### Proof of Theorem 14. We construct a metric $g_{3}$ on $\mathbb{R}^{3}\setminus\\{0\\}$ that admits a foliation by area-constrained Willmore spheres whose leaves are not centered at the origin; see Figure 4. To this end, we let $\chi:\mathbb{R}^{3}\to[0,\infty)$ be a standard bump function $\chi(y)=\begin{cases}e^{-(1-|y|^{2})^{-1}}&\text{ if }|y|<1,\\ 0&\text{ if }|y|\geq 1.\end{cases}$ If $y\in B_{1}(0)$, we compute $(\bar{\Delta}\chi)(y)=2\,\chi(y)\,\frac{4\,|y|^{2}+|y|^{4}-3}{(1-|y|^{2})^{4}}.$ In particular, $\chi$ is strictly subharmonic on $\\{y\in\mathbb{R}^{3}:\sqrt{3}/2<|y|<1\\}$. Let $\psi(x)=\sum_{k=0}^{\infty}10^{-2\,k}\,\chi\left(2\cdot 10^{-k}\,(x-10^{k}\,e_{1})\right)$ and define the conformally flat metric $g_{3}=[1+|x|^{-1}-\epsilon\,(|x|^{-2}+\delta\,\psi(x))]^{4}\,\bar{g}$ on $\mathbb{R}^{3}\setminus\\{0\\}$ where $\epsilon,\,\delta>0$ will be chosen (small) later. Note that $g_{3}$ is smoothly asymptotic to Schwarzschild with mass $m=2$. The scalar curvature of $g_{3}$ is given by $\displaystyle R$ $\displaystyle=8\,\epsilon\,\bar{\Delta}(|x|^{-2}+\delta\,\psi(x))+O(|x|^{-5})$ $\displaystyle=8\,\epsilon\,\bigg{(}2\,|x|^{-4}+4\,\delta\sum_{k=0}^{\infty}10^{-4\,k}\,(\bar{\Delta}\chi)\left(2\cdot 10^{-k}\,(x-10^{k}\,e_{1})\right)\bigg{)}+O(|x|^{-5}).$ In particular, $R\geq 0$ outside a bounded set, provided $\delta>0$ is sufficiently small. A similar computation shows that, taking $\delta>0$ smaller if necessary, $x^{i}\,\partial_{i}(|x|^{2}\,R)\leq 0$ outside a bounded set.
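The closed form for $\bar{\Delta}\chi$ and the subharmonicity range can be checked symbolically. The following snippet (our own verification, not part of the proof) confirms the formula via the radial Laplacian and checks that $(\sqrt{3}/2)^{2}=3/4$ exceeds $\sqrt{7}-2$, the positive root of $4\,r^{2}+r^{4}-3$ in the variable $r^{2}$:

```python
import sympy as sp

r = sp.symbols('r', positive=True)

# Radial profile of the bump function chi(y) = exp(-(1 - |y|^2)^-1) for |y| < 1.
chi = sp.exp(-1 / (1 - r**2))

# Laplacian of a radial function on R^3: f'' + (2/r) f'.
lap = sp.diff(chi, r, 2) + 2 * sp.diff(chi, r) / r

# Closed form claimed in the text.
claimed = 2 * chi * (4 * r**2 + r**4 - 3) / (1 - r**2)**4
assert sp.simplify(lap - claimed) == 0

# chi is strictly subharmonic where 4 r^2 + r^4 - 3 > 0, i.e. for r^2 > sqrt(7) - 2;
# since 3/4 > sqrt(7) - 2, the region sqrt(3)/2 < |y| < 1 qualifies.
x0 = sp.sqrt(7) - 2  # positive root of x^2 + 4 x - 3 = 0 with x = r^2
assert sp.expand(x0**2 + 4 * x0 - 3) == 0
assert float(x0) < 0.75
```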
Arguing as in the proof of Lemma 24, we may choose $\epsilon>0$ small such that $G_{\lambda}$ is strictly convex in $\\{\xi\in\mathbb{R}^{3}:|\xi|<1/2\\}$ and strictly radially increasing near $\\{\xi\in\mathbb{R}^{3}:|\xi|=1/2\\}$ provided $\lambda>1$ is sufficiently large. It follows that $G_{\lambda}$ has a unique critical point $\xi(\lambda)$ with $|\xi(\lambda)|<1/2$. Let $\lambda_{j}=\frac{9}{16}\,10^{j}.$ As in (21), we consider the decomposition $G_{\lambda_{j}}=G_{1}+G_{2,\lambda_{j}}+o(1).$ Note that $(\bar{D}G_{1})(0)=0$. Using that $\chi$ is strictly subharmonic on $\\{y\in\mathbb{R}^{3}:\sqrt{3}/2<|y|<1\\}$ and that odd functions integrate to zero on a sphere, we obtain $(\partial_{1}G_{2,\lambda_{j}})(0)=-4\,\bigg{(}\frac{9}{16}\bigg{)}^{4}\,\epsilon\,\delta\int_{S_{1}(0)}\bar{g}(e_{1},\bar{\nu})\,(\bar{\Delta}\chi)\big{(}\tfrac{1}{8}\,(9\,x-16\,e_{1})\big{)}\,\text{d}\bar{\mu}+o(1)=-c\,\epsilon\,\delta+o(1)$ where $c>0$ is independent of $j$. By Lemma 23, the $C^{2}$-norm of $G_{\lambda}$ is bounded. It follows that there is $z\in(0,1/2)$ with $|\xi({\lambda_{j}})|\geq z$ provided $j$ is sufficiently large. To conclude, note that the same argument as in the proof of Proposition 48 shows that the family of surfaces $\\{\Sigma_{\xi(\lambda),\lambda}:\lambda>\lambda_{0}\\}$ forms a smooth asymptotic foliation of $\mathbb{R}^{3}$. ∎

## Appendix A The Willmore energy

In this section, we provide some background material on the Willmore energy. We refer to [28, §3] for proofs and further information. Let $(M,g)$ be a Riemannian 3-manifold without boundary. Let $\Sigma\subset M$ be a closed, two-sided surface with unit normal $\nu$. The Willmore energy of $\Sigma$ is the quantity (43) $\displaystyle\mathcal{W}(\Sigma)=\frac{1}{4}\int_{\Sigma}H^{2}\,\text{d}\mu,$ where $H$ is the mean curvature scalar computed as the divergence of $\nu$ along $\Sigma$. Let $\epsilon>0$ and $U\in C^{\infty}(\Sigma\times(-\epsilon,\epsilon))$ with $U(\,\cdot\,,\,0)=0$.
Decreasing $\epsilon>0$ if necessary, we obtain a smooth variation $\\{\Sigma_{s}:s\in(-\epsilon,\epsilon)\\}$ of embedded surfaces $\Sigma_{s}=\Phi_{s}(\Sigma)$ where $\Phi_{s}:\Sigma\to M\qquad\text{ is given by }\qquad\Phi_{s}(x)=\operatorname{exp}_{x}(U(x,s)\,\nu(x)).$ We denote the initial velocity and initial acceleration of the variation by $u(x)=\dot{U}(x,\,0)\qquad\text{ and }\qquad v(x)=\ddot{U}(x,\,0).$ In the following lemma, we recall the formulae for the first and the second variation of the Willmore energy (43). To this end, we recall that (44) $\displaystyle Lf=-\Delta f-(|h|^{2}+\operatorname{Ric}(\nu,\nu))\,f$ denotes the linearization of the mean curvature operator. In particular, (45) $\displaystyle Lu=\frac{d}{ds}\bigg{|}_{s=0}(H(\Sigma_{s})\circ\Phi_{s}).$ ###### Lemma 31 ([28, §3]). There holds (46) $\displaystyle\frac{d}{ds}\bigg{|}_{s=0}\int_{\Sigma_{s}}H^{2}\,\text{d}\mu=-2\int_{\Sigma}W\,u\,\text{d}\mu$ where $W=\Delta H+(|\accentset{\circ}{h}|^{2}+\operatorname{Ric}(\nu,\nu))\,H.$ Moreover, $\displaystyle\frac{d^{2}}{ds^{2}}\bigg{|}_{s=0}\int_{\Sigma_{s}}H^{2}\,\text{d}\mu=-2\int_{\Sigma}\left[u\,Qu+H\,W\,u^{2}+W\,v\right]\text{d}\mu$ where (47) $\displaystyle Qu=$ $\displaystyle\,L(Lu)+\frac{1}{2}\,H^{2}\,Lu+2\,H\,g(\accentset{\circ}{h},\nabla^{2}u)+2\,H\,\operatorname{Ric}(\nu,\nabla u)+2\,\accentset{\circ}{h}(\nabla H,\nabla u)$ $\displaystyle\,+u\,\bigg{[}|\nabla H|^{2}+2\,\operatorname{Ric}(\nu,\nabla H)+H\,\Delta H+2\,g(\accentset{\circ}{h},\nabla^{2}H)+2\,H^{2}\,|\accentset{\circ}{h}|^{2}$ $\displaystyle\,\hskip 22.76228pt+2\,H\,g(\operatorname{Ric},\accentset{\circ}{h})-H\,(D_{\nu}\operatorname{Ric})(\nu,\nu)\bigg{]}$ is the linearization of the Willmore operator. The operator $Q$ measures how $W$ changes along a normal variation of $\Sigma$. More precisely, (48) $\displaystyle Qu=-\frac{d}{ds}\bigg{|}_{s=0}(W(\Sigma_{s})\circ\Phi_{s}).$ Surfaces that are critical for the Willmore energy are called Willmore surfaces.
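As a basic example (our own, consistent with the role of round spheres elsewhere in the text): the Euclidean round sphere of radius $\rho$ has $H=2/\rho$, so (43) gives $\mathcal{W}=\frac{1}{4}\cdot\frac{4}{\rho^{2}}\cdot 4\,\pi\,\rho^{2}=4\,\pi$, independently of $\rho$. A short sympy check:

```python
import sympy as sp

rho, theta, phi = sp.symbols('rho theta phi', positive=True)

# Round sphere of radius rho in R^3: H = 2/rho, area element rho^2 sin(theta).
H = 2 / rho
area_element = rho**2 * sp.sin(theta)

# Willmore energy (43): (1/4) * int_Sigma H^2 dmu.
energy = sp.Rational(1, 4) * sp.integrate(
    H**2 * area_element, (theta, 0, sp.pi), (phi, 0, 2 * sp.pi)
)

assert energy == 4 * sp.pi  # independent of rho: the energy is scale invariant
```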
The corresponding Euler-Lagrange equation is $-W=0.$ Surfaces that are critical for the Willmore energy among so-called area-preserving variations are called area-constrained Willmore surfaces. They satisfy the constrained Willmore equation $-W=\kappa\,H$ where $\kappa\in\mathbb{R}$ is a Lagrange parameter. A variation $\\{\Sigma_{s}:|s|<\epsilon\\}$ is called area-preserving if $|\Sigma_{s}|=|\Sigma|$ for all $s\in(-\epsilon,\epsilon)$. An area-constrained Willmore surface $\Sigma$ is stable if it passes the second derivative test for the Willmore energy among all area-preserving variations. Note that this is always satisfied if $\Sigma$ is a minimal surface. If $\Sigma$ is not a minimal surface, we define $u^{\perp}=u+s\,H$ where $s=-\frac{\int_{\Sigma}H\,u\,\text{d}\mu}{\int_{\Sigma}H^{2}\,\text{d}\mu}$ is chosen such that $\int_{\Sigma}u^{\perp}\,H\,\text{d}\mu=0.$ It can be seen that $\Sigma$ is stable if and only if $\kappa\int_{\Sigma}u^{\perp}\,Lu^{\perp}\,\text{d}\mu\leq\int_{\Sigma}u^{\perp}\,Qu^{\perp}\,\text{d}\mu$ for every $u\in C^{\infty}(\Sigma)$; see [28, (41) on p. 16]. ## Appendix B Spherical harmonics and Legendre polynomials We collect some standard facts about the Laplace operator on the unit sphere. ###### Lemma 32. The eigenvalues of the operator $-\bar{\Delta}:H^{2}(S_{1}(0))\to L^{2}(S_{1}(0))$ are given by $\\{\ell\,(\ell+1):\ell=0\,,1\,,2,\dots\\}.$ We denote the eigenspace corresponding to the eigenvalue $\ell\,(\ell+1)$ by $\Lambda_{\ell}(S_{1}(0))=\\{f\in C^{\infty}(S_{1}(0)):-\bar{\Delta}f=\ell\,(\ell+1)f\\}.$ Recall that these eigenspaces are finite dimensional and that $L^{2}(S_{1}(0))=\bigoplus_{\ell=0}^{\infty}\Lambda_{\ell}(S_{1}(0)).$ ###### Corollary 33. The eigenvalues of the operator $\bar{\Delta}^{2}+2\,\bar{\Delta}:H^{4}(S_{1}(0))\to L^{2}(S_{1}(0))$ are given by $\\{(\ell-1)\,\ell\,(\ell+1)\,(\ell+2):\ell=0\,,1\,,2,\dots\\}.$ ###### Lemma 34.
There holds $\displaystyle\Lambda_{0}(S_{1}(0))$ $\displaystyle=\operatorname{span}\\{1\\},$ $\displaystyle\Lambda_{1}(S_{1}(0))$ $\displaystyle=\operatorname{span}\\{y^{1},\,y^{2},\,y^{3}\\},$ $\displaystyle\Lambda_{2}(S_{1}(0))$ $\displaystyle=\operatorname{span}\\{Y_{2}^{11},\,Y_{2}^{22},\,Y_{2}^{12},\,Y_{2}^{13},\,Y_{2}^{23}\\}.$ Here $y^{1},\,y^{2},\,y^{3}$ are the coordinate functions and $Y_{2}^{ij}=\frac{1}{2}\,(3\,y^{i}\,y^{j}-\delta^{ij}).$ We also record the following useful orthogonality relations. ###### Lemma 35. Let $i,j,k,\ell\in\\{1,\,2,\,3\\}$. There holds $\displaystyle\int_{S_{1}(0)}y^{i}\,y^{j}\,\text{d}\bar{\mu}$ $\displaystyle=\frac{4\,\pi}{3}\,\delta_{ij},$ $\displaystyle\int_{S_{1}(0)}y^{i}\,y^{j}\,y^{k}\,y^{\ell}\,\text{d}\bar{\mu}$ $\displaystyle=\frac{4\,\pi}{15}\,(\delta_{ij}\,\delta_{k\ell}+\delta_{ik}\,\delta_{j\ell}+\delta_{i\ell}\,\delta_{jk}),$ $\displaystyle\int_{S_{1}(0)}Y_{2}^{ij}\,Y_{2}^{k\ell}\,\text{d}\bar{\mu}$ $\displaystyle=\frac{\pi}{5}\,(3\,\delta_{ik}\,\delta_{j\ell}+3\,\delta_{i\ell}\,\delta_{jk}-2\,\delta_{ij}\,\delta_{k\ell}),$ $\displaystyle\int_{B_{1}(0)}y^{i}\,y^{j}\,\text{d}\bar{v}$ $\displaystyle=\frac{4\,\pi}{15}\,\delta_{ij}.$ The Legendre polynomials $P_{0},\,P_{1},\,P_{2},\dots$ may be defined via a generating function. More precisely, given $s\in[-1,1]$ and $t\in[0,1)$, there holds (49) $\displaystyle(1-2\,s\,t+t^{2})^{-\frac{1}{2}}=\sum_{\ell=0}^{\infty}P_{\ell}(s)\,t^{\ell}.$ A proof of the following lemma can be found in [14, §8]. ###### Lemma 36. Let $i\in\\{1,\,2,\,3\\}$ and $\ell,\,\ell_{1},\,\ell_{2}\in\\{0,\,1,\,2,\dots\\}$. There holds (50) $\displaystyle P_{\ell}(-y^{i})\in\Lambda_{\ell}(S_{1}(0)).$ Moreover, $\int_{S_{1}(0)}P_{\ell_{1}}(-y^{i})\,P_{\ell_{2}}(-y^{i})\,\text{d}\bar{\mu}=\frac{4\,\pi}{2\,\ell_{1}+1}\,\delta_{\ell_{1}\ell_{2}}.$ The next lemma extends the calculation of inverse powers of $|y+\xi|$ in [6, p. 668]. ###### Lemma 37.
Let $\xi\in\mathbb{R}^{3}$ with $|\xi|\neq 1$ and $k\in\\{0,\,1\,,2\,,3\\}$. There holds $\displaystyle|y+\xi|^{-2k-1}=\begin{dcases}&\sum_{\ell=0}^{\infty}\,a_{k,\ell}(\xi)\,|\xi|^{\ell}\,P_{\ell}(-|\xi|^{-1}\,\bar{g}(y,\xi))\quad\hskip 17.07182pt\text{ if }|\xi|<1,\\\ &\sum_{\ell=0}^{\infty}\,\tilde{a}_{k,\ell}(\xi)\,|\xi|^{-\ell-1}\,P_{\ell}(-|\xi|^{-1}\,\bar{g}(y,\xi))\quad\text{ if }|\xi|>1\end{dcases}$ for all $y\in S_{1}(0)$. Here, $\displaystyle a_{0,\ell}(\xi)$ $\displaystyle=1,$ $\displaystyle a_{1,\ell}(\xi)$ $\displaystyle=(2\,\ell+1)\,\frac{1}{1-|\xi|^{2}},$ $\displaystyle a_{2,\ell}(\xi)$ $\displaystyle=(2\ell+1)\,\frac{(2\,\ell+3)-(2\,\ell-1)\,|\xi|^{2}}{3\,(1-|\xi|^{2})^{3}},$ $\displaystyle a_{3,\ell}(\xi)$ $\displaystyle=(2\ell+1)\,\frac{(2\,\ell+3)\,(2\,\ell+5)-2\,(2\,\ell-3)\,(2\,\ell+5)\,|\xi|^{2}+(2\,\ell-3)\,(2\,\ell-1)\,|\xi|^{4}}{15\,(1-|\xi|^{2})^{5}},$ and $\tilde{a}_{k,\ell}(\xi)=(-1)^{k}\,|\xi|^{-2k}\,a_{k,\ell}(|\xi|^{-2}\,\xi).$ ###### Proof. If $k=0$, the expansions follow from (49) with $s=-|\xi|^{-1}\,\bar{g}(y,\xi)$ and the choice $t=\begin{dcases}&|\xi|\qquad\,\,\,\,\,\,\text{if }|\xi|<1,\\\ &|\xi|^{-1}\qquad\text{if }|\xi|>1.\end{dcases}$ For $k\geq 1$, the asserted formula follows by induction from the recursive relation $|y+\xi|^{-2k-1}=\frac{1}{1-|\xi|^{2}}\,\bigg{(}\frac{2}{2\,k-1}\,\xi^{i}\,\partial_{i}(|y+\xi|^{-2k+1})+|y+\xi|^{-2k+1}\bigg{)},$ where the partial derivative is with respect to $\xi$. ∎ ###### Lemma 38. Let $t\in(-1,1)$. There holds $\log(1+t)=\sum_{\ell=1}^{\infty}\frac{(-1)^{\ell-1}}{\ell}\,t^{\ell}\qquad\text{ and }\qquad\frac{1}{(1-t^{2})^{2}}=\sum_{\ell=0}^{\infty}(1+\ell)\,t^{2\,\ell}.$ ## Appendix C Some geometric expansions We compute several geometric expansions needed in this paper. Recall that $S_{\xi,\lambda}=S_{\lambda}(\lambda\,\xi)$. We distinguish between the cases $|\xi|<1-\delta$ and $|\xi|>1+\delta$ where $\delta\in(0,1/2)$. We will assume that $\lambda>\lambda_{0}$ where $\lambda_{0}>1$ is large.
We note that $\displaystyle\delta$ $\displaystyle\leq\lambda^{-1}\,|x|\leq 2-\delta\qquad\hskip 5.69046pt\text{if }|\xi|<1-\delta,$ $\displaystyle|\xi|-1$ $\displaystyle\leq\lambda^{-1}\,|x|\leq|\xi|+1\qquad\text{if }|\xi|>1+\delta,$ for every $x\in S_{\xi,\lambda}$. Throughout this section, we assume that $(M,g)$ is at least $C^{4}$-asymptotic to Schwarzschild with mass $m=2$. We point out additional assumptions on $(M,g)$ where required. The estimates below depend on $\lambda_{0}>1,\,\delta>0$, and $(M,g)$. They are otherwise independent of $\xi$ and $\lambda$. For outlying surfaces, where $|\xi|>1+\delta$, we recall that a bar underneath a quantity indicates evaluation at $\lambda\,\xi$, possibly after taking derivatives. For example, $\sigma(\lambda\,\bar{\nu}+\lambda\,\xi)=\underaccent{\bar}{\sigma}+\lambda\,D_{\bar{\nu}}\underaccent{\bar}{\sigma}+O(\lambda^{-2}\,|\xi|^{-4})$ is shorthand for $\sigma(\lambda\,\bar{\nu}+\lambda\,\xi)={\sigma}(\lambda\,\xi)+\lambda\,(D_{\bar{\nu}}{\sigma})(\lambda\,\xi)+O(\lambda^{-2}\,|\xi|^{-4}).$ When stating that an error term such as $\mathcal{E}=O(\lambda^{-\ell_{1}}\,|\xi|^{-\ell_{2}})$ may be differentiated with respect to $\xi$ with $\ell_{1},\,\ell_{2}\in\mathbb{Z}$, we mean that $\bar{D}\mathcal{E}=O(\lambda^{-\ell_{1}}\,|\xi|^{-\ell_{2}-1}),$ where differentiation is with respect to $\xi$. For the statements below, recall that $\phi(x)=1+|x|^{-1}$ denotes the conformal factor of the Schwarzschild metric. ###### Lemma 39. We have (51) $\displaystyle(\operatorname{Ric}_{S})_{ij}=4\,\phi^{-2}\,|x|^{-3}(\delta_{ij}-3\,|x|^{-2}\,x^{i}\,x^{j}).$ Moreover, there holds $\nu_{S}(S_{\xi,\lambda})=\phi^{-2}\,\bar{\nu}\qquad\text{ and }\qquad H_{S}(S_{\xi,\lambda})=2\,\phi^{-2}\,\lambda^{-1}-4\,\phi^{-3}\,|x|^{-3}\,\bar{g}(x,\bar{\nu}).$ A more precise version of the expansion in the following lemma was computed in [6, p. 670]. ###### Lemma 40 ([6]). 
There holds $\displaystyle|S_{\xi,\lambda}|=\begin{dcases}&4\,\pi\,\lambda^{2}+16\,\pi\,\lambda+O(1)\hskip 82.51282pt\text{ if }|\xi|<1-\delta,\\\ &4\,\pi\,\lambda^{2}+16\,\pi\,\lambda\,|\xi|^{-1}+O(|\xi|^{-2})\hskip 42.67912pt\text{ if }|\xi|>1+\delta.\end{dcases}$ We need a more precise expansion of the Willmore energy of $S_{\xi,\lambda}$ than that computed in [10, §4]. To this end, we first compute the dependence of certain geometric quantities on the perturbation $\sigma$ away from $g_{S}$. ###### Lemma 41. Let $\\{e_{1},\,e_{2}\\}$ be a local Euclidean orthonormal frame for $TS_{\xi,\lambda}$. There holds $\displaystyle\nu-\nu_{S}=$ $\displaystyle\,-\frac{1}{2}\,\phi^{-6}\,\sigma(\bar{\nu},\bar{\nu})\,\bar{\nu}-\phi^{-6}\sum_{\alpha=1}^{2}\sigma(\bar{\nu},e_{\alpha})\,e_{\alpha}+O(\lambda^{-4}\,(1+|\xi|)^{-4}),$ $\displaystyle\accentset{\circ}{h}^{\alpha}_{\beta}=$ $\displaystyle\,-\frac{1}{2}\,\lambda^{-1}\,\phi^{-6}\,\left[2\,\sigma(e_{\alpha},e_{\beta})-(\operatorname{tr}\sigma-\sigma(\bar{\nu},\bar{\nu}))\,\delta_{\alpha\beta}\right]$ $\displaystyle-\frac{1}{2}\,\left[(\bar{D}_{e_{\alpha}}\sigma)(\bar{\nu},e_{\beta})+(\bar{D}_{e_{\beta}}\sigma)(\bar{\nu},e_{\alpha})-(\bar{D}_{\bar{\nu}}\sigma)(e_{\alpha},e_{\beta})\right]$ $\displaystyle\,+\frac{1}{4}\,\left[2\,(\bar{\operatorname{div}}\sigma)(\bar{\nu})-\bar{D}_{\bar{\nu}}\operatorname{tr}\sigma-(\bar{D}_{\bar{\nu}}\sigma)(\bar{\nu},\bar{\nu})\right]\delta_{\alpha\beta}+O(\lambda^{-4}\,(1+|\xi|)^{-4}),$ $\displaystyle H-H_{S}=$ $\displaystyle\,\lambda^{-1}\,\phi^{-6}\,\left[2\,\sigma(\bar{\nu},\bar{\nu})-\operatorname{tr}\sigma\right]+\frac{1}{2}\,\left[\bar{D}_{\bar{\nu}}\operatorname{tr}\sigma+(\bar{D}_{\bar{\nu}}\sigma)(\bar{\nu},\bar{\nu})-2\,(\bar{\operatorname{div}}\sigma)(\bar{\nu})\right]$ $\displaystyle\,+O(\lambda^{-4}\,(1+|\xi|)^{-4}),$ $\displaystyle\bar{\Delta}(H-H_{S})=$ 
$\displaystyle\,4\,\lambda^{-3}\,\underaccent{\bar}{\phi}^{-10}\,\left[\operatorname{tr}\sigma-3\,\sigma(\bar{\nu},\bar{\nu})\right]+Y_{1}+Y_{3}+O(\lambda^{-5}\,(1+|\xi|)^{-4}).$ Here, $Y_{1}$ and $Y_{3}$ are respectively first and third spherical harmonics with $Y_{1}=O(\lambda^{-5}\,(1+|\xi|)^{-3})\qquad\text{ and }\qquad Y_{3}=O(\lambda^{-5}\,(1+|\xi|)^{-3}).$ If $(M,g)$ is $C^{5}$-asymptotic to Schwarzschild, these identities may be differentiated once with respect to $\xi$. ###### Proof. Given $t\in[0,1]$, we define the family of metrics $g_{t}=g_{S}+t\,\sigma$ such that $g_{0}=g_{S}$ and $g_{1}=g$. The identities can be obtained upon linearizing the respective quantities at $t=0$. ∎ ###### Lemma 42. If $|\xi|<1-\delta$, there holds $\displaystyle\int_{{S}_{\xi,\lambda}}H^{2}\,\text{d}\mu=$ $\displaystyle\,16\,\pi-64\,\pi\,{\lambda}^{-1}+8\,\pi\,\lambda^{-2}\bigg{[}\frac{10-6\,|\xi|^{2}}{(1-|\xi|^{2})^{2}}+3\,|\xi|^{-1}\log\frac{1+|\xi|}{1-|\xi|}\bigg{]}$ $\displaystyle\,+2\,\lambda^{-1}\int_{\mathbb{R}^{3}\setminus{B_{\lambda}(\lambda\,\xi)}}R\,\text{d}\bar{v}+O(\lambda^{-3}).$ If $|\xi|>1+\delta$, there holds $\displaystyle\int_{{{S}_{\xi,\lambda}}}H^{2}\,\text{d}\mu=\,$ $\displaystyle 16\,\pi+8\,\pi\,\lambda^{-2}\bigg{[}\frac{10-6\,|\xi|^{2}}{(|\xi|^{2}-1)^{2}}+3\,|\xi|^{-1}\log\frac{|\xi|+1}{|\xi|-1}\bigg{]}-2\,\lambda^{-1}\,\underaccent{\bar}{\phi}^{4}\int_{{B_{\lambda}(\lambda\,\xi)}}R\,\text{d}\bar{v}$ $\displaystyle+\frac{48\,\pi}{15}\,\underaccent{\bar}{\phi}^{-8}\,|\underaccent{\bar}{\sigmacirc}|^{2}+\frac{128\,\pi}{15}\,\lambda^{-1}\,|\xi|^{-3}\,(3\,|\xi|^{-2}\,\underaccent{\bar}{ \sigma}(\xi,\xi)-\operatorname{tr}\underaccent{\bar}{\sigma})+O(\lambda^{-3}\,|\xi|^{-6}).$ Both expressions may be differentiated twice with respect to $\xi$. ###### Proof. We first compute the Schwarzschild contribution. 
Note that (52) $\displaystyle 2\,\bar{g}(x,\bar{\nu})=\lambda\,(1-|\xi|^{2})+\lambda^{-1}\,|x|^{2}.$ In the case where $|\xi|<1-\delta$, we use Lemma 39 to compute $\displaystyle\int_{S_{\xi,\lambda}}H_{S}^{2}\,\text{d}\mu_{S}$ $\displaystyle=\int_{S_{\xi,\lambda}}\left[4\,\lambda^{-2}-16\,\lambda^{-1}\,(|x|^{-3}-|x|^{-4})\,\bar{g}(x,\bar{\nu})+16\,|x|^{-6}\,\bar{g}(x,\bar{\nu})^{2}\right]\,\text{d}\bar{\mu}+O(\lambda^{-3})$ $\displaystyle=16\,\pi-64\,\pi\,{\lambda}^{-1}+16\,\pi\,\lambda^{-2}\frac{5-3\,|\xi|^{2}}{(1-|\xi|^{2})^{2}}+24\,\pi\,\lambda^{-2}\,|\xi|^{-1}\log\frac{1+|\xi|}{1-|\xi|}+O(\lambda^{-3}).$ In the case where $|\xi|>1+\delta$, we compute that $\displaystyle\int_{S_{\xi,\lambda}}H_{S}^{2}\,\text{d}\mu_{S}=16\,\pi+16\,\pi\,\lambda^{-2}\frac{5-3\,|\xi|^{2}}{(|\xi|^{2}-1)^{2}}+24\,\pi\,\lambda^{-2}|\xi|^{-1}\log\frac{|\xi|+1}{|\xi|-1}+O(\lambda^{-3}\,|\xi|^{-6}).$ To compute the contribution from the perturbation $\sigma$ off Schwarzschild, we depart from the identity (53) $\displaystyle\int_{S_{\xi,\lambda}}H^{2}\,\text{d}\mu=16\,\pi+2\int_{S_{\xi,\lambda}}|\accentset{\circ}{h}|^{2}\,\text{d}\mu+2\int_{S_{\xi,\lambda}}\left(2\,\operatorname{Ric}(\nu,\nu)-R\right)\,\text{d}\mu,$ cf. (24). The first integral on the right vanishes if $\sigma=0$. 
If $|\xi|<1-\delta$ and $\sigma\neq 0$, we estimate $\int_{S_{\xi,\lambda}}|\accentset{\circ}{h}|^{2}\,\text{d}\mu=O(\lambda^{-4}).$ Conversely, if $|\xi|>1+\delta$, using Lemma 41, Taylor expansion, and cancellations due to symmetry, we find that $\displaystyle\int_{S_{\xi,\lambda}}|\accentset{\circ}{h}|^{2}\,\text{d}\mu=\,$ $\displaystyle\underaccent{\bar}{\phi}^{-8}\,\lambda^{-2}\int_{S_{\xi,\lambda}}\left[\bar{g}(\underaccent{\bar}{\sigma}_{|_{S_{\xi,\lambda}}},\underaccent{\bar}{\sigma}_{|_{S_{\xi,\lambda}}})-\frac{1}{2}\,(\operatorname{tr}_{S_{\xi,\lambda}}\underaccent{\bar}{\sigma})^{2}\right]\text{d}\bar{\mu}+O(\lambda^{-4}\,|\xi|^{-6})$ $\displaystyle=\,$ $\displaystyle\underaccent{\bar}{\phi}^{-8}\,\lambda^{-2}\int_{S_{\xi,\lambda}}\bigg{[}|\underaccent{\bar}{\sigma}|^{2}-\frac{1}{2}\,(\operatorname{tr}\underaccent{\bar}{\sigma})^{2}+\frac{1}{2}\,\underaccent{\bar}{\sigma}(\bar{\nu},\bar{\nu})\,\underaccent{\bar}{\sigma}(\bar{\nu},\bar{\nu})+\underaccent{\bar}{\sigma}(\bar{\nu},\bar{\nu})\,\operatorname{tr}\underaccent{\bar}{\sigma}$ $\displaystyle\qquad\quad\quad\qquad-2\sum_{i=1}^{3}\underaccent{\bar}{\sigma}(\bar{\nu},e_{i})\,\underaccent{\bar}{\sigma}(\bar{\nu},e_{i})\bigg{]}\text{d}\bar{\mu}+O(\lambda^{-4}\,|\xi|^{-6})$ $\displaystyle=\,$ $\displaystyle\frac{24\,\pi}{15}\,\underaccent{\bar}{\phi}^{-8}\,|\underaccent{\bar}{\sigmacirc}|^{2}+O(\lambda^{-4}\,|\xi|^{-6}).$ In the last step, we have used Lemma 35. To compute the second integral in (53), first recall that the Einstein tensor $E=\operatorname{Ric}-\frac{1}{2}\,R\,g$ is divergence free. If $|\xi|<1-\delta$, this leads to the following form of the Pohozaev identity (54) $\displaystyle\int_{S_{\xi,\lambda}}E(Z,\nu)\,\text{d}\mu=-\int_{\mathbb{R}^{3}\setminus B_{\lambda}(\lambda\,\xi)}\left[\frac{1}{2}\,g(E,\mathcal{D}Z)-\frac{1}{6}\,(\operatorname{div}Z)\,R\right]\text{d}v,$ valid for vector fields $Z$ with $\bar{D}Z=O(|x|^{-1})$. 
Similarly, if $|\xi|>1+\delta$, we have (55) $\displaystyle\int_{S_{\xi,\lambda}}E(Z,\nu)\,\text{d}\mu=\int_{B_{\lambda}(\lambda\,\xi)}\left[\frac{1}{2}\,g(E,\mathcal{D}Z)-\frac{1}{6}\,(\operatorname{div}Z)\,R\right]\text{d}v$ for every vector field $Z$. Here, $\mathcal{D}Z=\mathcal{L}_{Z}g-\frac{1}{3}\,\operatorname{tr}(\mathcal{L}_{Z}g)\,g$ is the conformal Killing operator. We refer to [28, §6.4] for a discussion of the Pohozaev identity and a related application. Let $U_{\xi,\lambda}=\begin{dcases}&\mathbb{R}^{3}\setminus B_{\lambda}(\lambda\,\xi)\quad\text{ if }|\xi|<1-\delta,\\\ &B_{\lambda}(\lambda\,\xi)\hskip 37.55785pt\text{if }|\xi|>1+\delta.\end{dcases}$ We apply (54) and (55), respectively, with $Z=\phi^{-2}\lambda^{-1}(x-\lambda\,\xi).$ Note that $Z=\nu_{S}$ on $S_{\xi,\lambda}$. Using that $\mathcal{D}Z=O(|x|^{-2})$ and $R_{S}=0$, we obtain $\displaystyle\int_{S_{\xi,\lambda}}\operatorname{Ric}_{S}(\nu_{S},\nu_{S})\,\text{d}\mu_{S}=\,$ $\displaystyle\frac{1}{2}\,\operatorname{sign}(|\xi|-1)\int_{U_{\xi,\lambda}}g_{S}(\operatorname{Ric}_{S},\mathcal{D}_{S}Z)\,\text{d}v_{S}$ $\displaystyle=\,$ $\displaystyle\frac{1}{2}\,\operatorname{sign}(|\xi|-1)\int_{U_{\xi,\lambda}}g(E,\mathcal{D}Z)\,\text{d}v+O(\lambda^{-3}\,(1+|\xi|)^{-6}).$ The relevant contribution of the perturbation $\sigma$ is therefore given by $\int_{S_{\xi,\lambda}}\operatorname{Ric}(\nu-\nu_{S},\nu)\,\text{d}\mu-\frac{1}{6}\,\operatorname{sign}(|\xi|-1)\int_{U_{\xi,\lambda}}(\operatorname{div}Z)\,R\,\text{d}v.$ Note that $\operatorname{div}Z=3\,\phi^{-2}\,\lambda^{-1}+O(|x|^{-2}).$ In the case where $|\xi|<1-\delta$, we use the coarse estimate $\int_{S_{\xi,\lambda}}\operatorname{Ric}(\nu-\nu_{S},\nu)\,\text{d}\mu+\frac{1}{6}\int_{\mathbb{R}^{3}\setminus B_{\lambda}(\lambda\,\xi)}(\operatorname{div}Z)\,R\,\text{d}v=\frac{1}{2}\,\lambda^{-1}\int_{\mathbb{R}^{3}\setminus B_{\lambda}(\lambda\,\xi)}R\,\text{d}\bar{v}+O(\lambda^{-3}).$ In the case where $|\xi|>1+\delta$, we first compute
$-\frac{1}{6}\int_{B_{\lambda}(\lambda\,\xi)}(\operatorname{div}Z)\,R\,\text{d}v=-\frac{1}{2}\,\underaccent{\bar}{\phi}^{4}\,\lambda^{-1}\int_{B_{\lambda}(\lambda\,\xi)}R\,\text{d}\bar{v}+O(\lambda^{-3}\,|\xi|^{-6}).$ Using Lemma 39 and the expansion for $\nu-\nu_{S}$ from Lemma 41, we obtain $\displaystyle\int_{S_{\xi,\lambda}}\operatorname{Ric}$ $\displaystyle(\nu-\nu_{S},\nu)\,\text{d}\mu$ $\displaystyle=$ $\displaystyle-\int_{S_{\xi,\lambda}}|x|^{-3}\,\left[\underaccent{\bar}{\sigma}(\bar{\nu},\bar{\nu})\,\big{(}1+3\,|x|^{-2}\,\bar{g}(x,\bar{\nu})^{2}\big{)}-6\,|x|^{-2}\,\underaccent{\bar}{\sigma}(x,\bar{\nu})\,g(x,\bar{\nu})\right]\text{d}\bar{\mu}+O(\lambda^{-3}\,|\xi|^{-6})$ $\displaystyle=$ $\displaystyle-\int_{S_{\xi,\lambda}}\lambda^{-3}\,|\xi|^{-3}\,\left[\underaccent{\bar}{\sigma}(\bar{\nu},\bar{\nu})\,\big{(}1+3\,|\xi|^{-2}\,\bar{g}(\xi,\bar{\nu})^{2}\big{)}-6\,|\xi|^{-2}\,\underaccent{\bar}{\sigma}(\xi,\bar{\nu})\,\bar{g}(\xi,\bar{\nu})\right]\text{d}\bar{\mu}+O(\lambda^{-3}\,|\xi|^{-6})$ $\displaystyle=$ $\displaystyle\,\frac{32\,\pi}{15}\,\lambda^{-1}\,|\xi|^{-3}\,\left[3\,|\xi|^{-2}\,\underaccent{\bar}{ \sigma}(\xi,\xi)-\operatorname{tr}\underaccent{\bar}{\sigma}\right]+O(\lambda^{-3}\,|\xi|^{-6}).$ We have used Lemma 35 in the third equality. This completes the proof. ∎ ###### Remark 43. Let $\\{S_{j}\\}_{j=1}^{\infty}$ be a sequence of coordinate spheres $S_{j}=S_{\lambda_{j}}(\lambda_{j}\,\xi_{j})$ with $\lambda_{j}>1$ and $\xi_{j}\in\mathbb{R}^{3}$ that are slowly divergent in the sense that $\rho_{j}^{-1}=o(1)$ and $\rho_{j}=o(\lambda_{j})$ as $j\to\infty$, where $\rho_{j}=\rho(S_{j})$. 
If the spheres are on-center, we compute $\displaystyle\int_{S_{j}}H_{S}^{2}\,\text{d}\mu_{S}=16\,\pi-32\,\pi\,\lambda_{j}^{-1}(2-\rho_{j}^{-1})+8\,\pi\,\rho_{j}^{-2}+O(\lambda_{j}^{-2}\,\log\lambda_{j})+O(\lambda_{j}^{-1}\,\rho_{j}^{-2})+O(\rho_{j}^{-3}).$ If the spheres are outlying, we have $\displaystyle\int_{S_{j}}H_{S}^{2}\,\text{d}\mu_{S}=16\,\pi+32\,\pi\,\lambda_{j}^{-1}\,\rho_{j}^{-1}+8\,\pi\,\rho_{j}^{-2}+O(\lambda_{j}^{-2}\,\log\lambda_{j})+O(\lambda_{j}^{-1}\,\rho_{j}^{-2})+O(\rho_{j}^{-3}).$ Using Lemma 39, we compute that, in either case, $\min_{x\in S_{j}}(\phi^{2}\,H_{S})=2\,\lambda_{j}^{-1}-4\,\rho_{j}^{-2}+O(\rho_{j}^{-3}).$ Thus, if $\rho^{2}_{j}=o(\lambda_{j})$, it follows that $\operatorname{min}_{x\in S_{j}}H_{S}<0$ and $m_{H}(S_{j})<0$ for all $j$ large. Next, we express the Willmore operator $-W({S_{\xi,\lambda}})$ in terms of spherical harmonics. ###### Lemma 44. There holds $\displaystyle{W}({{S}_{\xi,\lambda}})=$ $\displaystyle\,\frac{1}{2}\,\phi^{-8}\big{[}-9\,\lambda^{-3}\,|x|^{-1}+(3\,|\xi|^{2}-7)\,\lambda^{-1}\,|x|^{-3}-3\,(1-|\xi|^{2})\,(7\,|\xi|^{2}+5)\,\lambda\,|x|^{-5}$ $\displaystyle\hskip 36.98866pt+15\,(1-|\xi|^{2})^{3}\,\lambda^{3}\,|x|^{-7}\big{]}$ $\displaystyle\,+4\,\underaccent{\bar}{\phi}^{-10}\,\lambda^{-3}\,[\operatorname{tr}\underaccent{\bar}{\sigma}-3\,\underaccent{\bar}{\sigma}(\bar{\nu},\bar{\nu})]+Y_{1}+Y_{3}+O(\lambda^{-5}\,(1+|\xi|)^{-4}).$ Here $Y_{1}$ and $Y_{3}$ are first and third spherical harmonics, respectively. They satisfy $Y_{1}=O(\lambda^{-5}\,(1+|\xi|)^{-3})\qquad\text{ and }\qquad Y_{3}=O(\lambda^{-5}\,(1+|\xi|)^{-3}).$ If $(M,g)$ is $C^{5}$-asymptotic to Schwarzschild, this identity may be differentiated once with respect to $\xi$. ###### Proof. 
Using Lemma 39 and (52), we find $\displaystyle H\,\operatorname{Ric}(\nu,\nu)$ $\displaystyle=\phi^{-8}\big{[}-3\,\lambda^{-3}\,|x|^{-1}+2\,(3\,|\xi|^{2}-1)\,\lambda^{-1}\,|x|^{-3}-3\,(1-|\xi|^{2})^{2}\,\lambda\,|x|^{-5}\big{]}$ $\displaystyle\qquad+{O}(\lambda^{-5}\,(1+|\xi|)^{-4}).$ Next, using the transformation of the Laplacian under a conformal change of the metric, we find $\displaystyle\Delta H=\phi^{-4}\,\bar{\Delta}H_{S}+\phi^{-4}\,\bar{\Delta}(H-H_{S})+{O}(\lambda^{-6}\,(1+|\xi|)^{-4}).$ From this, we compute that $\displaystyle\bar{\Delta}H_{S}=\bigg{[}$ $\displaystyle-\left[|x|^{-2}\,\bar{g}(x,\bar{\nu})^{2}-1\right]\,|x|^{-1}\,x^{i}\,\partial_{i}(|x|^{-1}\,x^{j}\,\partial_{j}H_{S})$ $\displaystyle+\left[|x|^{-3}\,\bar{g}(x,\bar{\nu})^{2}+|x|^{-1}-2\,\lambda^{-1}\,|x|^{-1}\,\bar{g}(x,\bar{\nu})\right]\,|x|^{-1}\,x^{i}\,\partial_{i}H_{S}\bigg{]}$ $\displaystyle=$ $\displaystyle\,\frac{1}{4}\,\bigg{[}\left[2\,(1+|\xi|^{2})-(1-|\xi|^{2})^{2}\,\lambda^{2}\,|x|^{-2}-\lambda^{-2}\,|x|^{2}\right]\,|x|^{-1}\,x^{i}\,\partial_{i}(|x|^{-1}\,x^{j}\,\partial_{j}H_{S})$ $\displaystyle\quad\,\,\,-\left[3\,\lambda^{-2}\,|x|-2\,(1+|\xi|^{2})\,|x|^{-1}-(1-|\xi|^{2})^{2}\,\lambda^{2}\,|x|^{-3}\right]\,|x|^{-1}\,x^{i}\,\partial_{i}H_{S}\bigg{]}.$ Note that $\left|\left[|x|^{-2}\,\bar{g}(x,\bar{\nu})^{2}-1\right]\,|x|^{-1}\,x\right|\leq 2$ Using Lemma 39 and (52), we find $|x|^{-1}\,x^{i}\,\partial_{i}H_{S}=6\,\phi^{-4}\,[\lambda^{-1}\,|x|^{-2}+(1-|\xi|^{2})\,\lambda\,|x|^{-4}]$ and $\displaystyle|x|^{-1}\,x^{i}\,\partial_{i}(|x|^{-1}\,x^{j}\,\partial_{j}H_{S})=$ $\displaystyle-12\,\phi^{-4}[\lambda^{-1}\,|x|^{-3}+2\,(1-|\xi|^{2})\,\lambda\,|x|^{-5}]$ $\displaystyle+24\,\phi^{-5}\,[\lambda^{-1}\,|x|^{-4}+(1-|\xi|^{2})\,\lambda\,|x|^{-6}].$ Note that $\displaystyle\lambda^{-1}\,|x|^{-4}+(1-|\xi|^{2})\,\lambda\,|x|^{-6}=2\,|x|^{-6}\,\bar{g}(x,\bar{\nu})=O(\lambda^{-5}\,(1+|\xi|)^{-5}).$ The assertion follows from this and Lemma 41. 
∎ The following corollary is an immediate consequence of Lemma 37 and Lemma 44. ###### Corollary 45. If $|\xi|<1-\delta$, there holds ${W}({{S}_{\xi,\lambda}})=4\,\lambda^{-4}\sum_{\ell=0}^{\infty}(\ell-1)\,(\ell+1)\,(\ell+2)\,|\xi|^{\ell}\,P_{\ell}(-|\xi|^{-1}\,\bar{g}(\bar{\nu},\xi))+{O}(\lambda^{-5}).$ If $|\xi|>1+\delta$, there holds (56) $\displaystyle{W}({{S}_{\xi,\lambda}})=$ $\displaystyle-4\,\lambda^{-4}\sum_{\ell=0}^{\infty}(\ell-1)\,\ell\,(\ell+2)\,|\xi|^{-\ell-1}\,P_{\ell}(-|\xi|^{-1}\,\bar{g}(\bar{\nu},\xi))$ $\displaystyle-4\,\lambda^{-3}\,\underaccent{\bar}{\phi}^{-10}\,(3\,\underaccent{\bar}{\sigma}(\bar{\nu},\bar{\nu})-\operatorname{tr}\underaccent{\bar}{\sigma})+Y_{1}+Y_{3}+{O}(\lambda^{-5}\,|\xi|^{-4}).$ Here, $Y_{1}$ and $Y_{3}$ are respectively first and third spherical harmonics with $Y_{1}=O(\lambda^{-5}\,|\xi|^{-3})\qquad\text{ and }\qquad Y_{3}=O(\lambda^{-5}\,|\xi|^{-3}).$ If $(M,g)$ is $C^{5}$-asymptotic to Schwarzschild, (56) may be differentiated once with respect to $\xi$. ###### Remark 46. Note that $3\,\underaccent{\bar}{\sigma}(\bar{\nu},\bar{\nu})-\operatorname{tr}\underaccent{\bar}{\sigma}=\sum_{i,j=1}^{3}\underaccent{\bar}{\sigma}(e_{i},e_{j})\,(3\,\bar{g}(\bar{\nu},e_{i})\,\bar{g}(\bar{\nu},e_{j})-\delta_{ij})\in\Lambda_{2}(S_{\xi,\lambda}).$ In the next lemma, we specify the formula for the linearization of the Willmore operator (47) to a sphere. ###### Lemma 47. For every $u\in C^{\infty}(S_{\xi,\lambda})$ there holds (57) $\displaystyle Q_{S_{\xi,\lambda}}u=$ $\displaystyle\,L(Lu)+\frac{1}{2}\,H^{2}\,Lu+(\nabla^{2}u)*O(\lambda^{-4}\,(1+|\xi|)^{-2})+(\nabla u)*O(\lambda^{-4}\,(1+|\xi|)^{-3})$ $\displaystyle\,+u*O(\lambda^{-5}\,(1+|\xi|)^{-3}).$ If $(M,g)$ is $C^{5}$-asymptotic to Schwarzschild, (57) may be differentiated once with respect to $\xi$. ###### Proof. 
This follows from (47), the decay of the metric, Lemma 41, the estimates $\nabla H=O(\lambda^{-3}\,(1+|\xi|)^{-2}),\qquad\nabla^{2}H=O(\lambda^{-4}\,(1+|\xi|)^{-2}),$ and the estimate $\Delta H=W+O(\lambda^{-4}\,(1+|\xi|)^{-3})=O(\lambda^{-4}\,(1+|\xi|)^{-3}).$ We have used Corollary 45 in the last equation. ∎ ## Appendix D The foliation property Recall from the proof of Theorem 5 that $\Sigma(\lambda)$ is the surface $\Sigma(\lambda)=\Sigma_{\xi(\lambda),\lambda}=\Sigma_{\xi(\lambda),\lambda}(u_{\xi(\lambda),\lambda})$ where $\xi(\lambda)\in\mathbb{R}^{3}$ is the unique local minimum near the origin of the function $G_{\lambda}$ defined in (16). In particular, by Lemma 21, $\Sigma(\lambda)$ is a stable area-constrained Willmore surface. Moreover, we have seen in the proof of Theorem 5 that (58) $\displaystyle\xi(\lambda)=o(1)$ as $\lambda\to\infty$. By Proposition 17 and Remark 18, we have (59) $\displaystyle u_{\xi(\lambda),\lambda}=O(1),\qquad(\bar{D}u)|_{(\xi(\lambda),\lambda)}=O(\lambda^{-1}),\qquad u^{\prime}|_{(\xi(\lambda),\lambda)}=O(\lambda^{-2}),$ where we recall that $\bar{D}$ and the dash indicate differentiation with respect to the parameters $\xi$ and $\lambda$, respectively. We now verify that the family of surfaces $\\{\Sigma(\lambda):\lambda>\lambda_{0}\\}$ forms a smooth foliation, provided $\lambda_{0}>1$ is sufficiently large. ###### Proposition 48. Suppose that $(M,g)$ is $C^{4}$-asymptotic to Schwarzschild with mass $m>0$ and that the scalar curvature $R$ satisfies (17) and (18). Then the family $\\{\Sigma(\lambda):\lambda>\lambda_{0}\\}$ of stable area-constrained Willmore surfaces is a smooth foliation of the complement of a compact subset of $M$, provided $\lambda_{0}>1$ is sufficiently large. ###### Proof. 
By Lemma 24, there are constants $\tau>0$ and $\delta_{0}>0$ such that (60) $\displaystyle\bar{D}^{2}G_{\lambda}\geq\tau\,\operatorname{Id}$ on $\\{\xi\in\mathbb{R}^{3}:|\xi|<\delta_{0}\\}$, provided $\lambda>\lambda_{0}$ and $\lambda_{0}>1$ is sufficiently large. In particular, by the implicit function theorem, the dependence of $\xi(\lambda)$ on $\lambda$ is smooth. It follows that the map $\Psi:{S}_{1}(0)\times(\lambda_{0},\infty)\to M\qquad\text{ given by }\qquad\Psi(y,\,\lambda)=\Phi_{\xi(\lambda),\lambda}^{u_{\xi(\lambda),\lambda}}({\lambda\,y+\xi})$ is smooth. Using (58) and (59), we find that $\Sigma(\lambda)$ encloses every given compact set, provided $\lambda>0$ is sufficiently large. We claim that (61) $\displaystyle\xi^{\prime}(\lambda)=o(\lambda^{-1}).$ To see this, let $a\in\mathbb{R}^{3}$. Differentiating the identity $(\bar{D}G_{\lambda})|_{\xi(\lambda)}(a)=0$ with respect to $\lambda$, we obtain (62) $\displaystyle(\bar{D}^{2}G_{\lambda})|_{\xi(\lambda)}(a,\,\xi^{\prime}(\lambda))+(\bar{D}G_{\lambda}^{\prime})|_{\xi(\lambda)}(a)=0.$ The argument presented in the proof of Lemma 24 also shows that we may differentiate the error terms in Lemma 22 with respect to $\lambda$. Applying Lemma 22 and using (58), we thus find $(\bar{D}G^{\prime}_{\lambda})|_{\xi(\lambda)}(a)=-4\,\lambda\int_{S_{\xi(\lambda),\lambda}}\bar{g}(a,\bar{\nu})\,R\,\text{d}\bar{\mu}-2\,\lambda^{2}\int_{S_{\xi(\lambda),\lambda}}g(a,\bar{\nu})\,(\bar{D}_{\bar{\nu}}R)\,\text{d}\bar{\mu}+o(\lambda^{-1}).$ Since $(M,g)$ is $C^{4}$-asymptotic to Schwarzschild, we obtain from (18) that (63) $\displaystyle x^{i}(\partial_{i}R)(x)+x^{i}(\partial_{i}R)(-x)=o(|x|^{-4}).$ Indeed, if (63) failed, integration along radial lines would yield that (18) must be violated, too. 
From this, we find that $(\bar{D}G^{\prime}_{\lambda})|_{\xi(\lambda)}(a)=\xi(\lambda)\,O(\lambda^{-1})+o(\lambda^{-1})=o(\lambda^{-1}).$ Choosing $a=\xi^{\prime}(\lambda)$ and using (62) as well as (60), we obtain the asserted estimate (61). Note that $\Psi(\,\cdot\,,\,\lambda)$ parametrizes $\Sigma(\lambda)$ and that $\bar{\nu}=y+O(\lambda^{-1})$. Using (59) and (61), we compute that $\displaystyle\bar{g}(\Psi^{\prime},y)=$ $\displaystyle\,1+\bar{g}(\xi(\lambda),y)+\lambda\,\bar{g}(\xi^{\prime}(\lambda),y)+(\bar{D}_{\xi^{\prime}(\lambda)}u)|_{({\xi(\lambda),\lambda})}+u^{\prime}|_{(\xi(\lambda),\lambda)}$ $\displaystyle=$ $\displaystyle\,1+o(1).$ In particular, $\bar{g}(\Psi^{\prime},\bar{\nu})>0$. This finishes the proof. ∎ ## Appendix E Remark on far-outlying stable constant mean curvature surfaces We show that the assumptions of Theorem 11 are sufficient to preclude large far-outlying stable constant mean curvature spheres in $(M,g)$ as well. In the statement of the following result, $\operatorname{vol}(\Sigma)$ denotes the volume of the compact domain bounded by $\Sigma$. ###### Theorem 49. Suppose that $(M,g)$ is $C^{5}$-asymptotic to Schwarzschild with mass $m>0$ and that its scalar curvature $R$ satisfies (64) $\displaystyle x^{i}\,\partial_{i}(|x|^{2}R)\leq 0.$ There is no sequence $\\{\Sigma_{j}\\}^{\infty}_{j=1}$ of outlying stable constant mean curvature spheres $\Sigma_{j}\subset M$ with $\lim_{j\to\infty}\operatorname{vol}(\Sigma_{j})=\infty\qquad\text{ and }\qquad\lim_{j\to\infty}\rho({\Sigma_{j}})\,H(\Sigma_{j})=\infty.$ ###### Remark 50. The hypotheses of Theorem 49 are weaker than those of Corollary 1.7 in [11]. First, we only require $C^{5}$-decay of the metric, while $C^{7}$-decay of the metric is assumed in [11]. Second, the growth condition (64) is weaker than the radial convexity assumption $x^{i}\,x^{j}\,\partial_{i}\partial_{j}R\geq 0$ in [11]. The proof of Theorem 49 is a small variation of the proof of Corollary 1.7 in [11].
The asserted improvement is obtained by first taking the radial derivative of the area functional of a coordinate sphere and then estimating the resulting terms, rather than first estimating the area functional and then taking the radial derivative. This approach brings out the contribution of the scalar curvature in a more precise way. We only point out the necessary modifications of the proof. We recall that $\phi=1+|x|^{-1}$ denotes the conformal factor of the Schwarzschild metric with mass $m=2$. Moreover, continuing the notation introduced on p. 3, we use a bar underneath a quantity to indicate evaluation at $\lambda\,\xi$. ###### Proof of Theorem 49. As in the proof of Theorem 11, there exists a constant $\lambda_{0}>1$ which only depends on $(M,g)$ such that for every $\lambda>\lambda_{0}$ and $\xi\in\mathbb{R}^{3}$ with $|\xi|>2$, there is a surface $\Sigma_{\xi,\lambda}$ with the following properties: * • $\Sigma_{\xi,\lambda}$ is a perturbation of the sphere $S_{\tilde{\xi},\tilde{\lambda}}$, where $\tilde{\lambda}\,\tilde{\xi}=\lambda\,\xi$ (note that $\tilde{\lambda}$ is denoted by $r$ in [11]). Moreover, we have (65) $\displaystyle\tilde{\lambda}=\lambda\,\underaccent{\bar}{\phi}^{-2}+O(\lambda^{-1}\,|\xi|^{-2}).$ * • The mean curvature of $\Sigma_{\xi,\lambda}$ is constant up to first spherical harmonics. * • There holds $\operatorname{vol}(\Sigma_{\xi,\lambda})=\frac{4\,\pi}{3}\,\lambda^{3}.$ * • $\Sigma_{\xi,\lambda}$ has constant mean curvature if and only if $\xi$ is a critical point of the function $A_{\lambda}:\\{\xi\in\mathbb{R}^{3}:|\xi|>2\\}\to\mathbb{R}\qquad\text{ given by }\qquad A_{\lambda}(\xi)=|\Sigma_{\xi,\lambda}|.$ In [11, p.
25], the authors obtain the expansion (66) $\displaystyle A_{\lambda}(\xi)=4\,\pi\,\lambda^{2}-\frac{2\,\pi}{15}\,\lambda^{4}\,\underaccent{\bar}{R}-\frac{\pi}{105}\,\lambda^{6}\,\bar{\Delta}\underaccent{\bar}{R}-\frac{8\,\pi}{35}\,|\xi|^{-6}+O(\lambda^{-1}\,|\xi|^{-6})+O(|\xi|^{-7}).$ This identity may be differentiated once with respect to $\xi$. In the derivation of this identity and its differentiability in [11, §4], $C^{6}$-decay of the metric rather than $C^{5}$-decay is used to analyze the contribution of the term (67) $\displaystyle\frac{1}{2}\int_{S_{\tilde{\xi},\tilde{\lambda}}}[\operatorname{tr}\sigma-\sigma(\bar{\nu},\bar{\nu})]\,\text{d}\bar{\mu}-\tilde{\lambda}^{-1}\int_{B_{\tilde{\lambda}}(\lambda\,\xi)}\operatorname{tr}\sigma\,\text{d}\bar{v}$ to (66). In [11, §4.1 and §4.2], (67) was computed to be $-\frac{2\,\pi}{15}\,\lambda^{4}\,\underaccent{\bar}{R}-\frac{\pi}{105}\,\lambda^{6}\,\bar{\Delta}\underaccent{\bar}{ R}-\frac{8\,\pi}{15}\,\lambda|\xi|^{-3}\,\left[\operatorname{tr}\underaccent{\bar}{\sigma}-3|\xi|^{-2}\sigma(\xi,\xi)+\lambda\,\bar{D}_{\xi}\operatorname{tr}\underaccent{\bar}{\sigma}\right]+O(\lambda^{-1}\,|\xi|^{-6})+O(|\xi|^{-7}).$ It follows that $\displaystyle A_{\lambda}(\xi)=$ $\displaystyle\,4\,\pi\,\lambda^{2}-\frac{8\,\pi}{35}\,|\xi|^{-6}+\frac{8\,\pi}{15}\,\lambda\,|\xi|^{-3}\,(\operatorname{tr}\underaccent{\bar}{\sigma}-3|\xi|^{-2}\,\sigma(\xi,\xi)+\lambda\,\bar{D}_{\xi}\operatorname{tr}\underaccent{\bar}{\sigma})$ $\displaystyle\,+\frac{1}{2}\int_{S_{\tilde{\xi},\tilde{\lambda}}}[\operatorname{tr}\sigma-\sigma(\bar{\nu},\bar{\nu})]\,\text{d}\bar{\mu}-\tilde{\lambda}^{-1}\int_{B_{\tilde{\lambda}}(\lambda\,\xi)}\operatorname{tr}\sigma\,\text{d}\bar{v}+O(\lambda^{-1}\,|\xi|^{-6})+O(|\xi|^{-7}).$ This expansion may be differentiated once with respect to $\xi$ provided $(M,g)$ is $C^{5}$-asymptotic to Schwarzschild. We proceed by computing the radial derivative of $A_{\lambda}$. 
Using Taylor’s theorem and cancellations due to symmetry, we find (68) $\displaystyle\,\xi^{i}\,\partial_{i}\bigg{(}\frac{8\,\pi}{15}\,\lambda|\xi|^{-3}\,(\operatorname{tr}\underaccent{\bar}{\sigma}-3\,|\xi|^{-2}\,\sigma(\xi,\xi)+\lambda\,\bar{D}_{\xi}\operatorname{tr}\underaccent{\bar}{\sigma})\bigg{)}$ $\displaystyle=$ $\displaystyle\,-2\int_{B_{\lambda}(\lambda\,\xi)}\left[|x|^{-3}\operatorname{tr}{\sigma}-3\,|x|^{-5}\,\sigma(x,x)+|x|^{-3}\,\bar{D}_{x}\operatorname{tr}\sigma\right]\,\bar{g}(\lambda\,\xi-x,\xi)\,\text{d}\bar{v}+O(\lambda^{-1}\,|\xi|^{-6}).$ Next, we compute (69) $\displaystyle\,\xi^{i}\,\partial_{i}\bigg{(}\frac{1}{2}\int_{S_{\tilde{\xi},\tilde{\lambda}}}[\operatorname{tr}\sigma-\sigma(\bar{\nu},\bar{\nu})]\,\text{d}\bar{\mu}-\tilde{\lambda}^{-1}\int_{B_{\tilde{\lambda}}(\lambda\,\xi)}\operatorname{tr}\sigma\,\text{d}\bar{v}\bigg{)}$ $\displaystyle=$ $\displaystyle\,\frac{1}{2}\,\lambda\int_{S_{\tilde{\xi},\tilde{\lambda}}}\left[\bar{D}_{\xi}\operatorname{tr}\sigma-\bar{D}_{\xi}\sigma(\bar{\nu},\bar{\nu})-2\,\tilde{\lambda}^{-1}\,\operatorname{tr}\sigma\,\bar{g}(\xi,\bar{\nu})\right]\,\text{d}\bar{\mu}$ $\displaystyle\,+\frac{1}{2}\,\xi^{i}\,\partial_{i}\tilde{\lambda}\,\bigg{(}\int_{S_{\tilde{\xi},\tilde{\lambda}}}\left[\bar{D}_{\bar{\nu}}\operatorname{tr}\sigma-\bar{D}_{\bar{\nu}}\sigma(\bar{\nu},\bar{\nu})-2\,\tilde{\lambda}^{-1}\,\sigma(\bar{\nu},\bar{\nu})\right]\text{d}\bar{\mu}+2\,\tilde{\lambda}^{-2}\int_{B_{\tilde{\lambda}}(\lambda\,\xi)}\operatorname{tr}\sigma\,\text{d}\bar{v}\bigg{)}.$ From (65), we find that $\xi^{i}\,\partial_{i}\tilde{\lambda}=2\,|\xi|^{-1}+O(\lambda^{-1}\,|\xi|^{-2}).$ Using cancellations due to symmetry, we compute, using Taylor’s theorem to expand all terms up to second derivatives of $\sigma$ and Lemma 35, that the last line of (69) equals (70) $\displaystyle\,-\frac{16\,\pi}{15}\,\lambda^{3}\,|\xi|^{-1}$ 
$\displaystyle(\operatorname{div}\operatorname{div}\underaccent{\bar}{\sigma}-\bar{\Delta}\operatorname{tr}\underaccent{\bar}{\sigma})+O(\lambda^{-1}\,|\xi|^{-6})$ $\displaystyle=$ $\displaystyle\,4\int_{B_{\lambda}(\lambda\,\xi)}(\operatorname{div}\operatorname{div}\underaccent{\bar}{\sigma}-\bar{\Delta}\operatorname{tr}\underaccent{\bar}{\sigma})\,(|\xi|^{-1}-|x|^{-1})\,\bar{g}(\xi,\lambda\,\xi-x)\,\text{d}\bar{v}+O(\lambda^{-1}\,|\xi|^{-6}).$ Here, we have also used that $|\xi|^{-1}=|x|^{-1}-\lambda^{-2}\,|\xi|^{-3}\,\bar{g}(\xi,\lambda\,\xi-x)+O(\lambda^{-1}|\xi|^{-3}).$ Finally, using $\lambda\,\xi=\tilde{\lambda}\,\tilde{\xi}$, we can argue exactly as in [11, §2.1] to show that the second line of (69) equals (71) $\displaystyle\,\frac{1}{2}\int_{B_{\tilde{\lambda}}(\lambda\,\xi)}(\operatorname{div}\operatorname{div}$ $\displaystyle{\sigma}-\bar{\Delta}\operatorname{tr}{\sigma})\,\bar{g}(\tilde{\xi},\lambda\,\xi-x)\,\text{d}\bar{\mu}$ $\displaystyle=$ $\displaystyle\,-\frac{2\,\pi}{15}\,\tilde{\lambda}^{5}\,\bar{D}_{\tilde{\xi}}(\operatorname{div}\operatorname{div}\underaccent{\bar}{\sigma}-\bar{\Delta}\operatorname{tr}\underaccent{\bar}{\sigma})+O(\lambda^{-1}\,|\xi|^{-6})$ $\displaystyle=$ $\displaystyle\,-\frac{2\,\pi}{15}\,\underaccent{\bar}{\phi}^{-8}\,\lambda^{5}\,\bar{D}_{\xi}(\operatorname{div}\operatorname{div}\underaccent{\bar}{\sigma}-\bar{\Delta}\operatorname{tr}\underaccent{\bar}{\sigma})+O(\lambda^{-1}\,|\xi|^{-6})$ $\displaystyle=$ $\displaystyle\,\frac{1}{2}\,\,\underaccent{\bar}{\phi}^{-8}\int_{B_{\lambda}(\lambda\,\xi)}(\operatorname{div}\operatorname{div}{\sigma}-\bar{\Delta}\operatorname{tr}{\sigma})\,\bar{g}(\xi,\lambda\,\xi-x)\,\text{d}\bar{\mu}+O(\lambda^{-1}\,|\xi|^{-6}).$ In the first and third equality, we have used Taylor’s theorem to expand the integrand up to fourth derivatives of $\sigma$, the $C^{5}$-decay of the metric, cancellations due to symmetry, and Lemma 35. In the second equality, we have used (65). 
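For the reader's convenience, the expansion $\xi^{i}\,\partial_{i}\tilde{\lambda}=2\,|\xi|^{-1}+O(\lambda^{-1}\,|\xi|^{-2})$ obtained from (65) can be checked directly; the following short sketch uses only $\underaccent{\bar}{\phi}=1+\lambda^{-1}\,|\xi|^{-1}$ and the stated decay.

```latex
% Sketch of the radial derivative of \tilde{\lambda}, starting from (65).
\underaccent{\bar}{\phi}^{-2}
  =\bigl(1+\lambda^{-1}\,|\xi|^{-1}\bigr)^{-2}
  =1-2\,\lambda^{-1}\,|\xi|^{-1}+O(\lambda^{-2}\,|\xi|^{-2}),
\qquad\text{hence}\qquad
\tilde{\lambda}=\lambda-2\,|\xi|^{-1}+O(\lambda^{-1}\,|\xi|^{-2}).
% Since \xi^{i}\,\partial_{i}|\xi|^{-1}=-|\xi|^{-1}, differentiating radially gives
\xi^{i}\,\partial_{i}\tilde{\lambda}=2\,|\xi|^{-1}+O(\lambda^{-1}\,|\xi|^{-2}).
```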
According to [11, §4.9], there holds $R=\phi^{-8}\,(\operatorname{div}\operatorname{div}\sigma-\bar{\Delta}\operatorname{tr}\sigma)-4\,\left[|x|^{-3}\,\operatorname{tr}{\sigma}-3\,|x|^{-5}\,\sigma(x,x)+|x|^{-3}\,\bar{D}_{x}\operatorname{tr}\sigma\right]+O(\lambda^{-1}\,|\xi|^{-6})$ while $\underaccent{\bar}{\phi}^{-8}=\phi^{-8}+8\,(|x|^{-1}-|\xi|^{-1})+O(\lambda^{-1}\,|\xi|^{-2}).$ Combining this with (68), (69), (70), and (71), we conclude that $\xi^{i}\,(\partial_{i}A_{\lambda})(\xi)=\frac{48\,\pi}{35}\,|\xi|^{-6}+\frac{1}{2}\int_{B_{\lambda}(\lambda\,\xi)}\bar{g}(\xi,\lambda\,\xi-x)\,R\,\text{d}\bar{v}+O(\lambda^{-1}\,|\xi|^{-6})+O(|\xi|^{-7}).$ In [11, §2.2], it has been shown that this integral is non-negative provided that (64) holds. In particular, $\xi^{i}\,(\partial_{i}A_{\lambda})(\xi)>0$ provided both $|\xi|$ and $\lambda>1$ are sufficiently large. We may now conclude the proof as in [11]. ∎ ## References * [1] Roberta Alessandroni and Ernst Kuwert. Local solutions to a free boundary problem for the Willmore functional. Calc. Var. Partial Differential Equations, 55(2):Art. 24, 29, 2016. * [2] Matthias Bauer and Ernst Kuwert. Existence of minimizing Willmore surfaces of prescribed genus. Int. Math. Res. Not., (10):553–576, 2003. * [3] Hubert L. Bray. Proof of the Riemannian Penrose inequality using the positive mass theorem. J. Differential Geom., 59(2):177–267, 2001. * [4] Hubert Lewis Bray. The Penrose inequality in general relativity and volume comparison theorems involving scalar curvature. ProQuest LLC, Ann Arbor, MI, 1997. Thesis (Ph.D.)–Stanford University. * [5] Simon Brendle. Constant mean curvature surfaces in warped product manifolds. Publ. Math. Inst. Hautes Études Sci., 117:247–269, 2013. * [6] Simon Brendle and Michael Eichmair. Large outlying stable constant mean curvature spheres in initial data sets. Invent. Math., 197(3):663–682, 2014.
* [7] Alessandro Carlotto, Otis Chodosh, and Michael Eichmair. Effective versions of the positive mass theorem. Invent. Math., 206(3):975–1016, 2016. * [8] Alessandro Carlotto and Richard Schoen. Localizing solutions of the Einstein constraint equations. Invent. Math., 205(3):559–615, 2016. * [9] Carla Cederbaum and Christopher Nerz. Explicit Riemannian manifolds with unexpectedly behaving center of mass. Ann. Henri Poincaré, 16(7):1609–1631, 2015. * [10] Otis Chodosh and Michael Eichmair. Global uniqueness of large stable CMC surfaces in asymptotically flat 3-manifolds. arXiv preprint arXiv:1703.02494, 2017. to appear in Duke Mathematical Journal. * [11] Otis Chodosh and Michael Eichmair. On far-outlying constant mean curvature spheres in asymptotically flat Riemannian 3-manifolds. J. Reine Angew. Math., 767:161–191, 2020. * [12] Otis Chodosh, Michael Eichmair, Yuguang Shi, and Haobin Yu. Isoperimetry, scalar curvature, and mass in asymptotically flat Riemannian $3$-manifolds. arXiv preprint arXiv:1606.04626, 2016. to appear in Comm. Pure Appl. Math. * [13] Demetrios Christodoulou and Shing-Tung Yau. Some remarks on the quasi-local mass. In Mathematics and general relativity (Santa Cruz, CA, 1986), volume 71 of Contemp. Math., pages 9–14. Amer. Math. Soc., Providence, RI, 1988. * [14] R. Courant and D. Hilbert. Methods of mathematical physics. Vol. I. Interscience Publishers, Inc., New York, N.Y., 1953. * [15] Camillo De Lellis and Stefan Müller. Optimal rigidity estimates for nearly umbilical surfaces. J. Differential Geom., 69(1):75–110, 2005. * [16] Michael Eichmair and Jan Metzger. Large isoperimetric surfaces in initial data sets. J. Differential Geom., 94(1):159–186, 2013. * [17] Michael Eichmair and Jan Metzger. Unique isoperimetric foliations of asymptotically flat manifolds in all dimensions. Invent. Math., 194(3):591–630, 2013. * [18] Robert Geroch. Energy extraction. Annals of the New York Academy of Sciences, 224(1):108–117, 1973\. 
* [19] Stephen Hawking. Gravitational radiation in an expanding universe. J. Mathematical Phys., 9(4):598–604, 1968. * [20] Gerhard Huisken and Tom Ilmanen. The inverse mean curvature flow and the Riemannian Penrose inequality. J. Differential Geom., 59(3):353–437, 2001. * [21] Gerhard Huisken and Shing-Tung Yau. Definition of center of mass for isolated physical systems and unique foliations by stable spheres with constant mean curvature. Invent. Math., 124(1-3):281–311, 1996. * [22] Norihisa Ikoma, Andrea Malchiodi, and Andrea Mondino. Embedded area-constrained Willmore tori of small area in Riemannian three-manifolds I: minimization. Proc. Lond. Math. Soc. (3), 115(3):502–544, 2017. * [23] Pong Soo Jang and Robert M. Wald. The positive energy conjecture and the cosmic censor hypothesis. Journal of Mathematical Physics, 18(1):41–44, 1977. * [24] Thomas Koerber. The area preserving Willmore flow and local maximizers of the Hawking mass in asymptotically Schwarzschild manifolds. The Journal of Geometric Analysis, 2020. * [25] Ernst Kuwert and Reiner Schätzle. The Willmore flow with small initial energy. J. Differential Geom., 57(3):409–441, 2001. * [26] Tobias Lamm and Jan Metzger. Small surfaces of Willmore type in Riemannian manifolds. Int. Math. Res. Not. IMRN, (19):3786–3813, 2010. * [27] Tobias Lamm and Jan Metzger. Minimizers of the Willmore functional with a small area constraint. Ann. Inst. H. Poincaré Anal. Non Linéaire, 30(3):497–518, 2013. * [28] Tobias Lamm, Jan Metzger, and Felix Schulze. Foliations of asymptotically flat manifolds by surfaces of Willmore type. Math. Ann., 350(1):1–78, 2011. * [29] Tobias Lamm, Jan Metzger, and Felix Schulze. Local foliation of manifolds by surfaces of willmore type. 2019\. to appear in Ann. Inst. Fourier (Grenoble). * [30] Paul Laurain. Sur l’analyse de quelques problemes invariants conformes. Habilitation, Université de Paris, 2019. http://webusers.imj-prg.fr/~paul.laurain/main.pdf. 
* [31] Paul Laurain and Andrea Mondino. Concentration of small Willmore spheres in Riemannian 3-manifolds. Anal. PDE, 7(8):1901–1921, 2014. * [32] Andrea Mondino. The conformal Willmore functional: a perturbative approach. J. Geom. Anal., 23(2):764–811, 2013. * [33] Andrea Mondino and Tristan Rivière. Willmore spheres in compact Riemannian manifolds. Adv. Math., 232:608–676, 2013. * [34] S. I. Pohožaev. On the eigenfunctions of the equation $\Delta u+\lambda f(u)=0$. Dokl. Akad. Nauk SSSR, 165:36–39, 1965. * [35] Jie Qing and Gang Tian. On the uniqueness of the foliation of spheres of constant mean curvature in asymptotically flat 3-manifolds. J. Amer. Math. Soc., 20(4):1091–1110, 2007. * [36] Tullio Regge and Claudio Teitelboim. Role of surface integrals in the Hamiltonian formulation of general relativity. Ann. Physics, 88:286–318, 1974. * [37] Richard M. Schoen. The existence of weak solutions with prescribed singular behavior for a conformally invariant scalar equation. Comm. Pure Appl. Math., 41(3):317–392, 1988. * [38] Guodong Wei. On the minimizers of curvature functionals in asymptotically flat manifolds. The Journal of Geometric Analysis, 2020. * [39] Haobin Yu. Isoperimetry for asymptotically flat 3-manifolds with positive ADM mass. arXiv preprint arXiv:2008.13307, 2020.
# DOA Estimation for Transmit Beamspace MIMO Radar via Tensor Decomposition with Vandermonde Factor Matrix Feng Xu, Matthew W. Morency, and Sergiy A. Vorobyov This work was supported in part by the Academy of Finland under Grant 319822, in part by Huawei, and in part by the China Scholarship Council. This work was conducted while Feng Xu was a visiting doctoral student with the Department of Signal Processing and Acoustics, Aalto University. (Corresponding author: Sergiy A. Vorobyov.) Feng Xu is with the School of Information and Electronics, Beijing Institute of Technology, Beijing 100081, China, and also with the Department of Signal Processing and Acoustics, Aalto University, Espoo 02150, Finland. (e-mail: <EMAIL_ADDRESS>; feng.xu@aalto.fi). Matthew W. Morency is with the Dept. Microelectronics, School of Electrical Engineering, Mathematics, and Computer Science, Delft University of Technology, Mekelweg 4, 2628 CD Delft, The Netherlands. (e-mail: M.W.Morency@tudelft.nl). Sergiy A. Vorobyov is with the Department of Signal Processing and Acoustics, Aalto University, Espoo 02150, Finland. (e-mail: svor@ieee.org). ###### Abstract We address the problem of tensor decomposition in application to direction-of-arrival (DOA) estimation for transmit beamspace (TB) multiple-input multiple-output (MIMO) radar. A general 4-order tensor model that enables computationally efficient DOA estimation is designed. Whereas other tensor decomposition-based methods treat all factor matrices as arbitrary, the essence of the proposed DOA estimation method is to fully exploit the Vandermonde structure of the factor matrices to take advantage of the shift-invariance between and within different subarrays. Specifically, the received signal of TB MIMO radar is expressed as a 4-order tensor. Depending on the target Doppler shifts, the constructed tensor is reshaped into two distinct 3-order tensors.
A computationally efficient tensor decomposition method is proposed to decompose the Vandermonde factor matrices. The generators of the Vandermonde factor matrices are computed to estimate the phase rotations between subarrays, which can be utilized as a look-up table for finding target DOA. It is further shown that our proposed method can be used in a more general scenario where the subarray structures can be arbitrary but identical. The proposed DOA estimation method requires no prior information about the tensor rank and is guaranteed to achieve precise decomposition results. Simulation results illustrate the performance improvement of the proposed DOA estimation method as compared to conventional DOA estimation techniques for TB MIMO radar. ###### Index Terms: DOA estimation, Shift-invariance, TB MIMO radar, Tensor decomposition, Vandermonde factor matrix ## I Introduction The development of multiple-input multiple-output (MIMO) radar has been the focus of intensive research [1, 2, 3, 4, 5] over the last decade, and has opened new opportunities in target detection and parameter estimation. Many works have been reported in the literature showing the applications of MIMO radar with widely separated antennas [1] or collocated antennas [2]. Among these applications, direction-of-arrival (DOA) estimation [3, 6, 7, 8, 9, 10, 11, 12, 13] is one of the most fundamental research topics. In this paper, we mainly focus on the DOA estimation problem for MIMO radar with collocated antennas. By ensuring that the transmitted waveforms are orthogonal [14], MIMO radar enables increasing the system’s degrees of freedom (DoF), improving the spatial resolution and enhancing the parameter identifiability. The essence behind these advantages is the construction of a virtual array (VA), which can be regarded as a new array with larger aperture and more elements [4, 5].
However, the omnidirectional transmit beampattern in MIMO radar, resulting from the orthogonal waveforms, deteriorates the parameter estimation performance since most of the emitted energy is wasted as compared to its phased-array counterpart. To tackle this problem, the transmit beamspace (TB) technique has been introduced [3, 6, 15]. In TB MIMO radar, the transmitted energy can be focused on a fixed region [3, 6] by using a number of linear combinations of the transmitted waveforms via a TB matrix. This benefit becomes more evident when the number of elements in MIMO radar is large [15]. Specifically, beyond a certain number of waveforms, using more waveforms begins to degrade the estimation performance. The trade-off between waveform diversity and spatial diversity implies that the performance of DOA estimation in TB MIMO radar can be further improved with a carefully designed TB matrix. Meanwhile, many algorithms for DOA estimation in MIMO radar have been proposed. These algorithms can be divided into two categories, signal covariance matrix-based algorithms [7, 8, 9, 3, 6, 10] and signal tensor decomposition-based algorithms [12, 11, 13, 16, 17, 18, 19, 20, 21]. For example, the estimation of target spatial angles can be conducted by multiple signal classification (MUSIC). The generalization of MUSIC to a planar array requires a two-dimensional (2-D) spectrum searching [7], and thus suffers from high computational complexity. By exploiting the rotational invariance property (RIP) of the signal subspace, estimation of signal parameters via rotational invariance techniques (ESPRIT) [3, 6, 8] can be applied to estimate the target angles without a spectrum searching. The RIP can be enforced in many ways, e.g., uniformly spaced antennas [8] and the design of the TB matrix [3, 6]. To further reduce the computational complexity and increase the number of snapshots, unitary-ESPRIT (U-ESPRIT) has been proposed [10].
Some algorithms like the propagator method (PM) have been studied [9] to avoid the singular value decomposition (SVD) of the signal covariance matrix. The aforementioned DOA estimation algorithms are mostly conducted on a per-pulse basis to update the result from pulse to pulse. They ignore the multi-linear structure of the received signal in MIMO radar and, therefore, lead to poor performance in the low signal-to-noise ratio (SNR) region. The second category, signal tensor decomposition-based algorithms, has been proposed to address the problem of poor performance at low SNR. In particular, a 3-order tensor is introduced to store the whole received signal for MIMO radar in a single coherent processing interval (CPI). Methods like higher-order SVD (HOSVD) [13, 22] and parallel factor (PARAFAC) analysis [12, 11] can be applied to decompose the factor matrices. The DOA estimation can be conducted by exploiting the factor matrix with the target angular information. For example, the widely used alternating least squares (ALS) algorithm is a common way of computing the approximate low-rank factors of a tensor. These factor matrices can be used to locate multiple targets simultaneously [12, 16]. Although the application of the conventional ALS algorithm improves the DOA estimation performance for MIMO radar, it usually requires the tensor rank as prior information, and the computational complexity can be extremely high as the convergence is unstable. Moreover, conventional tensor decomposition methods are developed for tensors with arbitrary factor matrices. In array signal processing, special matrix structures such as Toeplitz, Hankel, Vandermonde, and columnwise orthonormal [23, 17] may exist in the factor matrices when a tensor model is applied to collect the received signal.
The Vandermonde structure, as the most common one, can be generated from the application of carrier frequency offset, e.g., frequency diversity array (FDA) [24] and orthogonal frequency-division multiplexing (OFDM) waveform [25], or uniformly spaced antennas, e.g., uniform linear array (ULA) and uniform rectangular array (URA). While conventional tensor decomposition methods are usually designed for tensors with arbitrary factor matrices, the decomposition of a tensor with structured factor matrices deserves further study as the structured factor matrix may point to a novel decomposition method and better uniqueness conditions. This is called constrained tensor decomposition [23, 17]. Moreover, transmit array interpolation is introduced for MIMO radar with arbitrary array structure [13]. By solving the minimax optimization problem regarding interpolation matrix design, the original transmit array is mapped to a virtual array with a desired structure. The DOA estimation bias caused by interpolation errors has also been analyzed in [13]. However, the interpolation technique deteriorates the parameter identifiability, which makes it inappropriate for TB MIMO radar with arbitrary but identical subarrays. In this paper, we consider the problem of tensor decomposition in application to DOA estimation for TB MIMO radar with multiple transmit subarrays. (Some preliminary ideas that have been extended and developed in this paper were published in [26, 27].) A general 4-order tensor model that enables computationally efficient DOA estimation is designed. Whereas other tensor decomposition-based methods treat all factor matrices as arbitrary, the proposed DOA estimation method fully exploits the Vandermonde structure of the factor matrix to take advantage of the shift-invariance between and within different subarrays. In particular, the received signal of TB MIMO radar is expressed as a 4-order tensor.
Depending on the target Doppler shifts, the constructed tensor is reshaped into two distinct 3-order tensors. A computationally efficient tensor decomposition method, which can be conducted via linear algebra with no iterations, is proposed to decompose the factor matrices of the reshaped tensors. Then, the Vandermonde structure of the factor matrices is utilized to estimate the phase rotations between transmit subarrays, which can be applied as a look-up table for finding target DOA. It is further shown that our proposed method can be used in a more general scenario where the subarray configurations are arbitrary but identical. By exploiting the shift-invariance, the proposed method improves the DOA estimation performance over conventional methods, and it requires no prior information about the tensor rank. Simulation results verify that the proposed DOA estimation method has better accuracy and higher resolution. The rest of this paper is organized as follows. Some algebra preliminaries about tensors and matrices are introduced at the end of Section I. A 4-order tensor model for TB MIMO radar with uniformly spaced subarrays is designed in Section II. In Section III, the proposed tensor model is reshaped properly to achieve the uniqueness condition of tensor decomposition. The DOA estimation is conducted by exploiting the shift-invariance between and within different subarrays. The parameter identifiability is also analyzed. Section IV generalizes the proposed DOA estimation method to TB MIMO radar with non-uniformly spaced subarrays, where multiple scales of shift-invariance can be found. Section V presents simulation examples, while conclusions are drawn in Section VI. Notation: Scalars, vectors, matrices and tensors are denoted by lower-case, boldface lower-case, boldface uppercase, and calligraphic letters, e.g., $y$, $\bf y$, $\bf Y$, and $\cal Y$, respectively.
The transposition, Hermitian transposition, inversion, pseudo-inversion, Hadamard product, outer product, Kronecker product and Khatri-Rao (KR) product operations are denoted by ${\left(\cdot\right)^{T}},{\left(\cdot\right)^{H}},{\left(\cdot\right)^{-1}},{\left(\cdot\right)^{{\dagger}}},*,\circ,\otimes$, and $\odot$, respectively, while $vec\left(\cdot\right)$ stands for the operator which stacks the elements of a matrix/tensor one by one into a column vector. The notation $diag({\bf{y}})$ represents a diagonal matrix with its elements being the elements of ${\bf{y}}$, while $\left\|{\bf{Y}}\right\|_{F}$ and $\left\|{\bf{Y}}\right\|$ are the Frobenius norm and Euclidean norm of ${\bf{Y}}$, respectively. Moreover, ${{\bf{1}}_{M\times N}}$ and ${{\bf{0}}_{M\times N}}$ denote an all-one matrix of dimension $M\times N$ and an all-zero matrix of size $M\times N$, respectively, and ${{\bf{I}}_{M}}$ stands for the identity matrix of size $M\times M$. For ${\bf B}\in{{\mathbb{C}}^{M\times N}}$, the $n$-th column vector and $(m,n)$-th element are denoted by ${\bf b}_{n}$ and $B_{mn}$, respectively, while the $m$-th element of ${\bf b}\in{{\mathbb{C}^{M\times 1}}}$ is given by $b(m)$. The estimates of $\bf B$ and $\bf b$ are given by $\bf\hat{B}$ and $\bf\hat{b}$, while the rank and Kruskal-rank of ${\bf B}$ are denoted by $r({\bf B})$ and $k_{\bf B}$, respectively. The submatrices of $\bf B$ obtained by deleting its first row and its last row are denoted by ${\bf\underline{B}}$ and ${\bf\overline{B}}$, respectively. If $\bf B$ can be written as ${\bf B}\triangleq[{\bm{\beta}}_{1},{\bm{\beta}}_{2},\cdots,{\bm{\beta}}_{N}]$, where ${\bm{\beta}}_{n}\triangleq[1,z_{n},z_{n}^{2},\cdots,z_{n}^{M-1}]^{T}$, then $\bf B$ is a Vandermonde matrix and ${\bf z}\triangleq[z_{1},z_{2},\cdots,z_{N}]^{T}\in{{\mathbb{C}}^{N\times 1}}$ is its vector of generators. When each element of ${\bf z}$ is unique, ${\bf z}$ is said to be distinct.
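To make the Vandermonde notation concrete, here is a minimal numpy sketch; the generators $z_{n}=e^{-j\pi\sin\theta_{n}}$ and the three angles are hypothetical illustration values, not taken from the paper.

```python
import numpy as np

def vandermonde(z, M):
    """Build the M x N Vandermonde matrix B with columns [1, z_n, ..., z_n^(M-1)]^T."""
    z = np.asarray(z)
    return z[np.newaxis, :] ** np.arange(M)[:, np.newaxis]

# Distinct generators on the unit circle (hypothetical angles).
z = np.exp(-1j * np.pi * np.sin(np.deg2rad([-10.0, 5.0, 20.0])))
B = vandermonde(z, M=6)
print(B.shape)                   # (6, 3)
print(np.linalg.matrix_rank(B))  # 3: full column rank since the generators are distinct
```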
### I-A Algebra Preliminaries for Tensors and Matrices For an $N$-th order tensor ${{\cal Y}\in{{\mathbb{C}}^{{I_{1}}\times{I_{2}}\times\cdots\times{I_{N}}}}}$, the following facts are introduced [16, 28]. ###### Fact 1. (PARAFAC decomposition): The PARAFAC decomposition of an $N$-th order tensor is a linear combination of the minimum number of rank-one tensors, given by $\displaystyle{\cal Y}=\sum\limits_{l=1}^{L}{{{\bm{\alpha}}_{l}^{(1)}}\circ{{\bm{\alpha}}_{l}^{(2)}}\circ\cdots\circ{{\bm{\alpha}}_{l}^{(N)}}}\triangleq[[{\bf A}^{(1)},{\bf A}^{(2)},\cdots,{\bf A}^{(N)}]]$ (1) where ${{\bm{\alpha}}_{l}^{(n)}}$ is the $l$-th column of ${\bf{A}}^{(n)}$ with ${\bf{A}}^{(n)}$ being the $n$-th factor matrix of size $I_{n}\times L$, and $L$ is the tensor rank. ###### Fact 2. (Uniqueness of PARAFAC decomposition): The PARAFAC decomposition is unique if all potential factor matrices satisfying (1) also match with ${{\bf{\tilde{A}}}^{(n)}}={{\bf{A}}^{(n)}}{{\bf{\Pi}}^{(n)}}{{\bf{\Delta}}^{(n)}}$ (2) where ${{\bf{\Pi}}^{(n)}}$ is a permutation matrix and ${{\bf{\Delta}}^{(n)}}$ is a diagonal matrix. The product of ${{\bf{\Delta}}^{(n)}},n=1,2,\cdots,N$ is an $L\times L$ identity matrix. Usually, the generic uniqueness condition is given by [28]: $\sum\limits_{n=1}^{N}{{k_{{{\bf{A}}^{(n)}}}}}\geq 2L+(N-1)$. ###### Fact 3. (Mode-$n$ unfolding of tensor): The mode-$n$ unfolding of a tensor ${{\cal Y}\in{{\mathbb{C}}^{{I_{1}}\times{I_{2}}\times\cdots\times{I_{N}}}}}$ is denoted by ${\bf Y}_{(n)}$, which is a matrix of size ${{I_{1}}\cdots{I_{n-1}}{I_{n+1}}\cdots{I_{N}}}\times{I_{n}}$ ${\bf Y}_{(n)}=\left({{{\bf{A}}^{(1)}}\odot\cdots\odot{{\bf{A}}^{(n-1)}}\odot{{\bf{A}}^{(n+1)}}\odot\cdots\odot{{\bf{A}}^{(N)}}}\right)\left({{{{\bf{A}}^{(n)}}}}\right)^{T}.$ (3) ###### Fact 4.
(Tensor reshape): The reshape operator for an $N$-th order tensor ${{\cal Y}\in{{\mathbb{C}}^{{I_{1}}\times{I_{2}}\times\cdots\times{I_{N}}}}}$ returns a new $M$-th order tensor ${{\cal X}\in{{\mathbb{C}}^{{J_{1}}\times{J_{2}}\times\cdots\times{J_{M}}}}}$ ($M\leq N$) with $\prod\limits_{n=1}^{N}{{I_{n}}}=\prod\limits_{m=1}^{M}{{J_{m}}}$ and $vec({\cal Y})=vec({\cal X})$, e.g., if ${J_{m}}={I_{m}},m=1,2,\cdots,M-1$ and ${J_{M}}=\prod\limits_{n=M}^{N}{{I_{n}}}$, the mode-$M$ unfolding of the reshaped $\cal X$ is ${{\bf{X}}_{(M)}}=\left({{{\bf{A}}^{(1)}}\odot\cdots\odot{{\bf{A}}^{(M-1)}}}\right){\left({{{\bf{A}}^{(M)}}\odot\cdots\odot{{\bf{A}}^{(N)}}}\right)^{T}}.$ (4) ###### Lemma 1. : Consider a 3-order tensor ${\cal Y}\triangleq[[{\bf A}^{(1)},{\bf A}^{(2)},{\bf A}^{(3)}]]$, where ${\bf A}^{(1)}$ is a Vandermonde matrix or the KR product of two Vandermonde matrices. The decomposition of $\cal Y$ is generically unique if the generators of ${\bf A}^{(1)}$ are distinct and ${\bf A}^{(3)}$ is column full rank. ###### Proof. The proof is purely technical and is given in the supplemental material as Appendix A. ###### Lemma 2. : The following equalities hold true $\displaystyle{\bf A}{\bf B}={\bf A}\odot{\bf b}^{T}={\bf b}^{T}\odot{\bf A}$ (5) $\displaystyle{\bf A}\odot{\bf b}^{T}\odot{\bf C}={\bf b}^{T}\odot{\bf A}\odot{\bf C}={\bf A}\odot{\bf C}\odot{\bf b}^{T}$ $\displaystyle({\bf A}\odot{\bf B})\odot{\bf C}={\bf A}\odot({\bf B}\odot{\bf C})$ $\displaystyle vec\left({{\bf{ABD}}}\right)=\left({{{\bf{D}}^{T}}\odot{\bf{A}}}\right){\bf{b}}$ $\displaystyle\left({{\bf{A}}\otimes{\bf{C}}}\right)\left({{\bf{D}}\otimes{\bf{E}}}\right)=\left({{\bf{AD}}}\right)\otimes\left({{\bf{CE}}}\right)$ where ${\bf{A}}\in{\mathbb{C}^{M\times N}}$, ${\bf{C}}\in{\mathbb{C}^{Q\times N}}$, ${\bf{D}}\in{\mathbb{C}^{N\times P}}$, ${\bf{E}}\in{\mathbb{C}^{N\times L}}$ and ${\bf{B}}=diag({\bf{b}})\in{\mathbb{C}^{N\times N}}$.
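The KR identities in Lemma 2 are easy to verify numerically. The sketch below checks $vec({\bf{ABD}})=({\bf D}^{T}\odot{\bf A}){\bf b}$ for random complex matrices, using a column-major (Fortran-order) $vec$, which is the stacking convention under which the identity holds; the `khatri_rao` helper and all dimensions are illustrative.

```python
import numpy as np

def khatri_rao(A, B):
    """Columnwise Kronecker (Khatri-Rao) product: column n is kron(a_n, b_n)."""
    return np.einsum('in,jn->ijn', A, B).reshape(A.shape[0] * B.shape[0], -1)

rng = np.random.default_rng(0)
M, N, P = 4, 3, 5
A = rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))
D = rng.standard_normal((N, P)) + 1j * rng.standard_normal((N, P))
b = rng.standard_normal(N) + 1j * rng.standard_normal(N)

lhs = (A @ np.diag(b) @ D).flatten(order='F')  # vec(ABD), column-major stacking
rhs = khatri_rao(D.T, A) @ b                   # (D^T ⊙ A) b
print(np.allclose(lhs, rhs))  # True
```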
## II TB MIMO Radar Tensor model ### II-A TB MIMO Radar with Linear Array Consider a collocated MIMO radar with $M$ transmit elements and $N$ receive elements. The transmit array is a ULA with its elements spaced at half the working wavelength away from each other. The receive elements are randomly placed within a fixed aperture. Assuming $S$ subarrays are uniformly spaced at the transmit side and also assuming that each subarray contains $M_{0}$ elements, the indices of first elements in those subarrays are denoted by $m_{s},s=1,2,\cdots,S$. Without loss of generality, $m_{s}$ rises uniformly. The steering vectors of the entire transmit array and the first transmit subarray at direction $\theta$ can be given by ${\bm{\alpha}}(\theta)\triangleq{\left[{1,{e^{-j\pi\sin\theta}},\cdots,{e^{-j(M-1)\pi\sin\theta}}}\right]^{T}}$ and ${{\bm{\alpha}}_{0}}(\theta)\triangleq{\left[{1,{e^{-j\pi\sin\theta}},\cdots,{e^{-j(M_{0}-1)\pi\sin\theta}}}\right]^{T}}$, respectively. The steering vector of the receive array can be written as ${\bm{\beta}}(\theta)\triangleq{\left[1,{{e^{-j\frac{{2\pi}}{\lambda}{x_{2}}\sin\theta}},\cdots,{e^{-j\frac{{2\pi}}{\lambda}{x_{N}}\sin\theta}}}\right]^{T}}$, where $\left\\{{\left.x_{n}\right|{\rm{0}}<{x_{n}}\leq{D},n=1,\cdots,N}\right\\}$ and $D$ is the aperture of the receive array. Accordingly, the transmit and receive steering matrices for $L$ targets in $\left\\{{{\theta_{l}}}\right\\}_{l=1}^{L}$ can be denoted by ${\bf{A}}\triangleq\left[{{\bm{\alpha}}({\theta_{1}}),{\bm{\alpha}}({\theta_{2}}),\cdots,{\bm{\alpha}}({\theta_{L}})}\right]$ and ${\bf{B}}\triangleq\left[{{\bm{\beta}}({\theta_{1}}),{\bm{\beta}}({\theta_{2}}),\cdots,{\bm{\beta}}({\theta_{L}})}\right]$, respectively, while the steering matrix for the first transmit subarray can be given as ${\bf{A}}_{0}\triangleq\left[{{\bm{\alpha}}_{0}({\theta_{1}}),{\bm{\alpha}}_{0}({\theta_{2}}),\cdots,{\bm{\alpha}}_{0}({\theta_{L}})}\right]$. 
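A minimal numerical sketch of the steering matrices just defined, assuming a hypothetical half-wavelength ULA with $M=8$, $M_{0}=4$, two illustrative target angles, and random receive positions within an aperture $D$ measured in wavelengths (none of these values come from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
M, M0, N, D = 8, 4, 6, 3.0              # hypothetical array sizes; aperture in wavelengths
thetas = np.deg2rad([-10.0, 25.0])      # hypothetical target directions

# Transmit steering matrix A (M x L); A0 is its submatrix of first M0 rows.
A = np.exp(-1j * np.pi * np.outer(np.arange(M), np.sin(thetas)))
A0 = A[:M0, :]

# Receive steering matrix B (N x L) for randomly placed elements, first element at the origin.
x = np.concatenate(([0.0], rng.uniform(0.0, D, N - 1)))
B = np.exp(-2j * np.pi * np.outer(x, np.sin(thetas)))
print(A.shape, A0.shape, B.shape)  # (8, 2) (4, 2) (6, 2)
```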
Note that ${\bf{A}}_{0}$ can also be regarded as the submatrix of ${\bf{A}}$ with first $M_{0}$ rows. In conventional MIMO radar, the received signal at the output of the receive array after matched-filtering in matrix form can be modelled as [12]: ${\bf{Y}}={\bf{B\Sigma}}{\bf A}^{T}+{\bf N}$, where ${\bf{\Sigma}}=diag({\bm{\sigma}})$, ${\bm{\sigma}}\triangleq\left[{\sigma_{1}^{2},\sigma_{2}^{2},\cdots,\sigma_{L}^{2}}\right]^{T}$ represents the vector of target radar cross section (RCS) fading coefficients obeying Swerling I model, and ${\bf N}$ is the noise residue of size $N\times M$. When the TB technique is introduced [3, 6], the received signal model after matched-filtering of $K$ orthogonal waveforms ($K\leq M$) can be generalized as ${\bf{Y}}={\bf{B\Sigma}}({{\bf W}^{H}\bf A})^{T}+{\bf N}$, where ${\bf{W}}\triangleq{\left[{{{\bf{w}}_{1}},{{\bf{w}}_{2}},\cdots,{{\bf{w}}_{K}}}\right]_{M\times K}}$ denotes the TB matrix. Hence, the received signal for the $s$-th transmit subarray, $s=1,2,\cdots,S$, and the whole receive array can be written as ${\bf{Y}}_{s}={\bf{B\Sigma}}({{\bf W}_{s}^{H}{\bf A}_{s}})^{T}+{\bf N}_{s}$ (6) where ${\bf W}_{s}$ and ${\bf A}_{s}$ represent the TB matrix and steering matrix for the $s$-th transmit subarray, respectively, and ${\bf N}_{s}$ is the noise residue of size $N\times{M_{0}}$. Assume that the TB matrix for each subarray is identical, denoted by ${\bf W}_{0}\in{\mathbb{C}^{{M_{0}}\times K}}$. Note that since $m_{s}$ rises uniformly, the steering matrix for the $s$-th transmit subarray can also be expressed as ${\bf A}_{s}={\bf A}_{0}{\bf\Gamma}_{s}$, where ${\bf\Gamma}_{s}=diag({{\bf{k}}_{s}})$ and ${{\bf{k}}_{s}}\triangleq\left[{{e^{-j\pi({m_{s}}-1)\sin{\theta_{1}}}},\cdots,{e^{-j\pi({m_{s}}-1)\sin{\theta_{L}}}}}\right]^{T}$. 
Substituting this relationship into (6) and vectorizing it, we have ${{\bf{y}}_{s}}=\left[{\left({{\bf{W}}_{0}^{H}{{\bf{A}}_{0}}}\right)\odot{\bf{B}}}\right]{\bf\Gamma}_{s}{{\bm{\sigma}}}+{{\bf{n}}_{s}}$, where ${\bf n}_{s}$ is the vectorized noise residue. Considering the Doppler effect, the received signal during $q$-th pulse in a single CPI, $q=1,2,\cdots,Q$, can be written as ${{\bf{y}}_{s}^{(q)}}=\left[{\left({{\bf{W}}_{0}^{H}{{\bf{A}}_{0}}}\right)\odot{\bf{B}}}\right]{\bf\Gamma}_{s}{{\bf c}_{q}}+{{\bf{n}}_{s}^{(q)}}$ (7) where ${{\bf{c}}_{q}}={\bm{\sigma}}*{{\bf{\bar{c}}}_{{q}}}$, ${{\bf{\bar{c}}}_{{q}}}\triangleq\left[{{e^{i2\pi{f_{1}}{q}T}},{e^{i2\pi{f_{2}}{q}T}},\cdots,{e^{i2\pi{f_{L}}{q}T}}}\right]^{T}$, $f_{l}$ denotes the Doppler shift, $T$ is the radar pulse duration, and ${{\bf{n}}_{s}^{(q)}}$ is the vectorized noise residue. Concatenate the received signal of $S$ subarrays in $q$-th pulse, i.e., ${{\bf{Y}}^{(q)}}\triangleq\left[{{{\bf{y}}_{1}^{(q)}},{{\bf{y}}_{2}^{(q)}},\cdots,{{\bf{y}}_{S}^{(q)}}}\right]_{KN\times S}$. The compact form can be written as ${{\bf{Y}}^{(q)}}=\left[{\left({{\bf{W}}_{0}^{H}{{\bf{A}}_{0}}}\right)\odot{\bf{B}}}\right]{\left[{{{\bf{c}}_{q}^{T}}\odot{\bf{K}}}\right]^{T}}+{{\bf{N}}^{(q)}}$ (8) where ${\bf{K}}\triangleq{\left[{{\bf{k}}_{1},{\bf{k}}_{2},\cdots,{\bf{k}}_{S}}\right]^{T}}_{S\times L}$ and ${{\bf{N}}^{(q)}}\triangleq\left[{{{\bf{n}}_{1}^{(q)}},{{\bf{n}}_{2}^{(q)}},\cdots,{{\bf{n}}_{S}^{(q)}}}\right]$. Note that the $l$-th column of $\bf K$ represents the phase rotations of $l$-th target for $S$ subarrays. $\bf K$ can be named as transmit subarray steering matrix. 
Vectorizing (8), the $KNS\times 1$ vector can be given as $\displaystyle{{\bf{z}}_{q}}$ $\displaystyle=\left\\{{\left[{{{\bf{c}}_{q}^{T}}\odot{\bf{K}}}\right]\odot\left[{\left({{\bf{W}}_{0}^{H}{{\bf{A}}_{0}}}\right)\odot{\bf{B}}}\right]}\right\\}{{\bf{1}}_{L\times 1}}+{\bf r}_{q}$ (9) $\displaystyle=\left[{{\bf{K}}\odot\left({{\bf{W}}_{0}^{H}{{\bf{A}}_{0}}}\right)\odot{\bf{B}}}\right]{\bf{c}}_{q}+{{\bf{r}}_{q}}$ where ${{\bf{r}}_{q}}$ is the vectorized noise residue of ${{\bf{N}}^{(q)}}$. Then, concatenate the received signal of $Q$ pulses, i.e., ${{\bf{Z}}}\triangleq\left[{{{\bf{z}}_{1}},{{\bf{z}}_{2}},\cdots,{{\bf{z}}_{Q}}}\right]_{KNS\times Q}$. The compact form can be formulated as ${{\bf{Z}}}=\left[{{\bf{K}}\odot\left({{\bf{W}}_{0}^{H}{{\bf{A}}_{0}}}\right)\odot{\bf{B}}}\right]{{\bf{C}}^{T}}+{{\bf{R}}}$ (10) where ${{\bf{C}}}\triangleq{\left[{{\bf{c}}_{1},{\bf{c}}_{2},\cdots,{\bf{c}}_{Q}}\right]^{T}}_{Q\times L}$ and ${{\bf{R}}}\triangleq\left[{{{\bf{r}}_{1}},{{\bf{r}}_{2}},\cdots,{{\bf{r}}_{Q}}}\right]$. Similarly, $\bf C$ can be referred to as the Doppler steering matrix since each column denotes the Doppler steering vector for one target (with additional RCS information). According to Fact 3, a 4-order tensor ${\cal Z}\in{\mathbb{C}^{{S}\times K\times N\times{Q}}}$ whose matricized version is ${{\bf{Z}}}$ in (10) can be constructed. Denoting ${\bf X}\triangleq{{\bf{W}}_{0}^{H}{{\bf{A}}_{0}}}$, this tensor can be written as ${\cal Z}=\sum\limits_{l=1}^{L}{{{\bm{\kappa}}_{l}}\circ{{\bm{\chi}}_{l}}\circ{{\bm{\beta}}_{l}}\circ{{\bm{\gamma}}_{l}}}+{\cal R}\triangleq[[{{\bf{K}},{\bf{X}},{\bf{B}},{\bf{C}}}]]+{\cal R}$ (11) where ${{{\bm{\kappa}}_{l}},{{\bm{\chi}}_{l}},{{\bm{\beta}}_{l}},{{\bm{\gamma}}_{l}}}$ are the $l$-th columns of ${\bf{K}},{\bf{X}},{\bf{B}},{\bf{C}}$, respectively, $L$ is the tensor rank, and $\cal R$ is the noise tensor of the same size.
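The equivalence between the matrix model (10) and the CP tensor (11) can be checked numerically; the sketch below (assumed sizes, random factors) builds the rank-$L$ tensor from outer products and compares its matricization with $({\bf K}\odot{\bf X}\odot{\bf B}){\bf C}^{T}$:

```python
import numpy as np

rng = np.random.default_rng(1)

def khatri_rao(*mats):
    """Khatri-Rao product of several factor matrices with equal column count."""
    out = mats[0]
    for M in mats[1:]:
        out = np.einsum('il,jl->ijl', out, M).reshape(-1, M.shape[1])
    return out

# Hypothetical sizes: S subarrays, K waveforms, N receivers, Q pulses, L targets.
S, K, N, Q, L = 3, 2, 4, 5, 2
Kmat, X, B, C = (rng.standard_normal((d, L)) + 1j * rng.standard_normal((d, L))
                 for d in (S, K, N, Q))
Z = khatri_rao(Kmat, X, B) @ C.T           # matrix model, Eq. (10)

# CP tensor [[K, X, B, C]] built from rank-1 outer products, Eq. (11)
T = sum(np.einsum('s,k,n,q->sknq', Kmat[:, l], X[:, l], B[:, l], C[:, l])
        for l in range(L))
assert np.allclose(T.reshape(S * K * N, Q), Z)  # matricization matches Eq. (10)
```

The row index of the matricization runs over $(s,k,n)$ in that order, matching the $KNS\times Q$ stacking in (10).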
### II-B TB MIMO Radar with Planar Array Figure 1: Transmit array configuration for TB MIMO radar with planar array. Consider the planar array case, as shown in Fig. 1. A URA with $M={M_{x}}\cdot{M_{y}}$ elements spaced at half the working wavelength in both directions and a planar array with $N$ randomly spaced elements are applied as the transmit and receive array, respectively. The transmit steering vector can be given as ${\bm{\alpha}}(\theta,\varphi)={\bf{u}}(\theta,\varphi)\otimes{\bf{v}}(\theta,\varphi)$, where ${\bf{u}}(\theta,\varphi)\triangleq{\left[{1,{e^{-j\pi u}},\cdots,{e^{-j({M_{y}}-1)\pi u}}}\right]^{T}}$, ${\bf{v}}(\theta,\varphi)\triangleq{\left[{1,{e^{-j\pi v}},\cdots,{e^{-j({M_{x}}-1)\pi v}}}\right]^{T}}$, $u\triangleq\sin\varphi\sin\theta$, $v\triangleq\sin\varphi\cos\theta$, and $(\theta,\varphi)$ is the pair of azimuth and elevation of a target. The steering vector of the receive array can be written as ${\bm{\beta}}(\theta,\varphi)\triangleq{\left[{1,{e^{-j\frac{{2\pi}}{\lambda}({x_{2}}v+{y_{2}}u)}},\cdots,{e^{-j\frac{{2\pi}}{\lambda}({x_{N}}v+{y_{N}}u)}}}\right]^{T}}$, where $\left\\{{\left.{({x_{n}},{y_{n}})}\right|{\rm{0}}<{x_{n}}\leq{D_{x}},{\rm{0}}<{y_{n}}\leq{D_{y}}}\right\\}$ are the coordinates of the receive elements, and $D_{x},D_{y}$ denote the apertures in two directions, respectively. Accordingly, assume $S=I\cdot J$ transmit subarrays are uniformly spaced at the transmit side, which can be overlapped or not. Each of them contains $M_{0}={M_{x_{0}}}\cdot{M_{y_{0}}}$ elements. The first subarray is selected as the reference subarray. 
For $L$ targets in $\left\\{\left({{\theta_{l}}},{\varphi_{l}}\right)\right\\}_{l=1}^{L}$, the transmit and receive steering matrices can be generalized as ${\bf{A}}\triangleq\left[{{\bm{\alpha}}({\theta_{1}},{\varphi_{1}}),{\bm{\alpha}}({\theta_{2}},{\varphi_{2}}),\cdots,{\bm{\alpha}}({\theta_{L}},{\varphi_{L}})}\right]$ and ${\bf{B}}\triangleq\left[{{\bm{\beta}}({\theta_{1}},{\varphi_{1}}),{\bm{\beta}}({\theta_{2}},{\varphi_{2}}),\cdots,{\bm{\beta}}({\theta_{L}},{\varphi_{L}})}\right]$, respectively. Since the transmit array is a URA, we have ${\bf{A}}={{\bf{U}}}\odot{{\bf{V}}}$, where ${{\bf{U}}}\triangleq\left[{{\bf{u}}({\theta_{1}},{\varphi_{1}}),{\bf{u}}({\theta_{2}},{\varphi_{2}}),\cdots,{\bf{u}}({\theta_{L}},{\varphi_{L}})}\right]$ and ${{\bf{V}}}\triangleq\left[{{\bf{v}}({\theta_{1}},{\varphi_{1}}),{\bf{v}}({\theta_{2}},{\varphi_{2}}),\cdots,{\bf{v}}({\theta_{L}},{\varphi_{L}})}\right]$. Similarly, the steering vector of the reference transmit subarray can be written as ${\bm{\alpha}}_{0}(\theta,\varphi)={\bf{u}}_{0}(\theta,\varphi)\otimes{\bf{v}}_{0}(\theta,\varphi)$, where ${\bf{u}}_{0}(\theta,\varphi)$ and ${\bf{v}}_{0}(\theta,\varphi)$ contain the first $M_{y_{0}}$ and $M_{x_{0}}$ elements in ${\bf{u}}(\theta,\varphi)$ and ${\bf{v}}(\theta,\varphi)$, respectively. The steering matrix for the reference transmit subarray can be denoted by ${\bf{A}}_{0}={{\bf{U}}_{0}}\odot{{\bf{V}}_{0}}$, where ${{\bf{U}}_{0}}$ and ${{\bf{V}}_{0}}$ are the submatrices of ${{\bf{U}}}$ and ${{\bf{V}}}$ that consist of the first $M_{y_{0}}$ and $M_{x_{0}}$ rows, respectively. For the $(i,j)$-th subarray (or equivalently, for the $s$-th transmit subarray where $s=(j-1)I+i$), the index of the first element is denoted by $(m_{i},m_{j}),\ i=1,2,\cdots,I,\ {j}=1,2,\cdots,J$. Both $m_{i}$ and $m_{j}$ rise uniformly.
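The Kronecker factorization of the URA steering vector, ${\bm\alpha}(\theta,\varphi)={\bf u}(\theta,\varphi)\otimes{\bf v}(\theta,\varphi)$, can be verified directly against the per-element phases (angles and array sizes assumed for illustration):

```python
import numpy as np

# Sketch: the URA transmit steering vector factorizes as α(θ, φ) = u ⊗ v,
# so the (my, mx)-th element carries phase -π(my·u + mx·v).
Mx, My = 3, 4
theta, phi = np.deg2rad(25.0), np.deg2rad(40.0)
u_dir = np.sin(phi) * np.sin(theta)   # u = sin φ sin θ
v_dir = np.sin(phi) * np.cos(theta)   # v = sin φ cos θ
u = np.exp(-1j * np.pi * np.arange(My) * u_dir)
v = np.exp(-1j * np.pi * np.arange(Mx) * v_dir)
alpha = np.kron(u, v)                 # full URA steering vector, length Mx*My

# direct per-element construction, row-major over (my, mx)
direct = np.exp(-1j * np.pi * np.add.outer(np.arange(My) * u_dir,
                                           np.arange(Mx) * v_dir)).ravel()
assert np.allclose(alpha, direct)
```

The same factorization carries over column by column, which is why ${\bf A}={\bf U}\odot{\bf V}$.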
The steering matrix for the $(i,j)$-th subarray can be given as ${\bf{A}}_{ij}={{\bf{U}}_{j}}\odot{{\bf{V}}_{i}}$, where ${{\bf{U}}_{j}}={{\bf{U}}_{0}}{{\bf\Gamma}_{j}}$, ${{\bf{V}}_{i}}={{\bf{V}}_{0}}{{\bf\Gamma}_{i}}$, ${{\bf\Gamma}_{j}}=diag({\bf h}_{j}),{{\bf\Gamma}_{i}}=diag({\bf d}_{i})$, vectors ${\bf h}_{j}\triangleq{\left[{{e^{-j\pi{(m_{j}-1)}u_{1}}},\cdots,{e^{-j\pi{(m_{j}-1)}u_{L}}}}\right]^{T}}$ and ${\bf d}_{i}\triangleq{\left[{{e^{-j\pi{(m_{i}-1)}v_{1}}},\cdots,{e^{-j\pi{(m_{i}-1)}v_{L}}}}\right]^{T}}$ indicate the phase rotations for $L$ targets in two directions, respectively. Generalizing (7), the received signal after matched-filtering for the $s$-th transmit subarray and the whole receive array in $q$-th pulse can be written as ${{\bf{y}}^{(q)}_{s}}=\left[{\left({{\bf{W}}_{0}^{H}{{\bf{A}}_{0}}}\right)\odot{\bf{B}}}\right]{{\bf\Gamma}_{i}}{{\bf\Gamma}_{j}}{{\bf c}_{q}}+{{\bf{n}}^{(q)}_{s}}.$ (12) Similarly, the concatenation of the received signal ${{\bf{y}}^{(q)}_{s}}$ for all $S$ subarrays in $q$-th pulse can be expressed as ${\bf Y}^{(q)}=\left[{\left({{\bf{W}}_{0}^{H}{{\bf{A}}_{0}}}\right)\odot{\bf{B}}}\right]{\left({{{\bf{c}}_{q}^{T}}\odot{\bf{H}}\odot{\bf{\Delta}}}\right)^{T}}+{\bf N}^{(q)}$ (13) where ${\bf{H}}\triangleq{\left[{{\bf{h}}_{1},{\bf{h}}_{2},\cdots,{\bf{h}}_{J}}\right]^{T}}_{{J}\times L}$ and ${\bf{\Delta}}\triangleq{\left[{{\bf{d}}_{1},{\bf{d}}_{2},\cdots,{\bf{d}}_{I}}\right]^{T}}_{{I}\times L}$. Proof of (13) is purely technical and is given in supplemental material as Appendix B. 
Then ${{\bf{z}}_{q}}=vec({\bf Y}^{(q)})$ can be formulated as ${{\bf{z}}_{q}}=\left[{{\bf{H}}\odot{\bf\Delta}\odot\left({{\bf{W}}_{0}^{H}{{\bf{A}}_{0}}}\right)\odot{\bf{B}}}\right]{\bf{c}}_{q}+{{\bf{r}}_{q}}.$ (14) After concatenating ${{\bf{z}}_{q}}$ in the same way as (10), the received signal of $Q$ pulses in the URA case can be written as ${{\bf{Z}}}=\left[{{\bf{H}}\odot{\bf{\Delta}}\odot\left({{\bf{W}}_{0}^{H}{{\bf{A}}_{0}}}\right)\odot{\bf{B}}}\right]{{\bf{C}}^{T}}+{{\bf{R}}}.$ (15) Interestingly, (15) can be directly obtained from (10) by replacing $\bf K$ with ${\bf H}\odot{\bf\Delta}$. Hence, ${\bf H}\odot{\bf\Delta}$ can be regarded as the transmit subarray steering matrix for URA. Using Fact 3, a 5-order tensor $\cal Z$ whose matricized version is ${{\bf{Z}}}$ in (15) can be constructed as ${\cal Z}=\sum\limits_{l=1}^{L}{{{\bm{\eta}}_{l}}\circ{{\bm{\delta}}_{l}}\circ{{\bm{\chi}}_{l}}\circ{{\bm{\beta}}_{l}}\circ{{\bm{\gamma}}_{l}}}+{\cal R}\triangleq[[{{\bf{H}},{\bf{\Delta}},{\bf{X}},{\bf{B}},{\bf{C}}}]]+{\cal R}$ (16) where ${{\bm{\eta}}_{l}}$ and ${{\bm{\delta}}_{l}}$ are the $l$-th columns of ${\bf{H}}$ and ${\bf{\Delta}}$, respectively. Note that since all subarrays are uniformly spaced, ${\bf{K}}$, ${\bf{\Delta}}$ and ${\bf{H}}$ are Vandermonde matrices and their vectors of generators can be respectively denoted by $\displaystyle{{\bm{\omega}}}\triangleq{\left[{{e^{-j\pi{\Delta_{m}}\sin{\theta_{1}}}},\cdots,{e^{-j\pi{\Delta_{m}}\sin{\theta_{L}}}}}\right]^{T}}$ (17) $\displaystyle{{\bm{\omega}}_{x}}\triangleq{\left[{{e^{-j\pi{\Delta_{m_{x}}}v_{1}}},\cdots,{e^{-j\pi{\Delta_{m_{x}}}v_{L}}}}\right]^{T}}$ $\displaystyle{{\bm{\omega}}_{y}}\triangleq{\left[{{e^{-j\pi{\Delta_{m_{y}}}u_{1}}},\cdots,{e^{-j\pi{\Delta_{m_{y}}}u_{L}}}}\right]^{T}}$ where the step sizes ${\Delta}_{m}=m_{s+1}-m_{s}$, ${\Delta_{m_{x}}}=m_{{i}+1}-m_{i}$, and ${\Delta_{m_{y}}}=m_{{j}+1}-m_{j}$.
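The shift-invariance induced by these generators, ${\bf{\underline{K}}}={\bf{\overline{K}}}\,diag({\bm\omega})$, is the property exploited in the next section; a minimal check with assumed angles:

```python
import numpy as np

# Sketch: a Vandermonde matrix built from generators ω satisfies the
# shift-invariance K_under = K_over · diag(ω) used for the ESPRIT-style steps.
S, L = 5, 3
theta = np.deg2rad([-30.0, 5.0, 40.0])   # target angles (assumed)
dm = 1                                    # step size Δ_m between subarrays
omega = np.exp(-1j * np.pi * dm * np.sin(theta))   # generators, Eq. (17)
Kmat = omega[None, :] ** np.arange(S)[:, None]     # K[s, l] = ω_l^s

# rows 2..S equal rows 1..S-1 rotated by diag(ω)
assert np.allclose(Kmat[1:, :], Kmat[:-1, :] @ np.diag(omega))
```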
We assume the entries of ${{\bm{\omega}}}$, ${{\bm{\omega}}}_{x}$ and ${{\bm{\omega}}}_{y}$ are distinct, which means that the targets are spatially distinct. ## III DOA Estimation via Tensor Decomposition with Vandermonde Factor Matrix We have shown that the received signal of TB MIMO radar with transmit subarrays can be formulated as a high-order tensor. It is useful to point out that (11) and (16) are identical if the idea of tensor reshape is applied and ${\bf K}$ is replaced by ${\bf H}\odot{\bf\Delta}$. Hence, a general 4-order tensor model can be used to express the received signal for TB MIMO radar with uniformly spaced subarrays, given by ${\cal Z}\triangleq\left[[{{\bf{G}},{\bf{X}},{\bf{B}},{\bf{C}}}\right]]+{\cal R}$ (18) where ${\bf{G}}\in{\mathbb{C}^{{S}\times{L}}}$ is the transmit subarray steering matrix. Essentially, $\bf G$ can be interpreted as the result of element-wise spatial smoothing between the transmit elements. A new dimension is extended to express the phase rotations between transmit subarrays in the tensor model, which matches the derivations in (11) and (16). The tensor decomposition of ${\cal Z}$ can be regarded as a constrained tensor decomposition, since one of the factor matrices is structured by the regular array configuration. Generally, the ALS algorithm can be applied to decompose such a tensor. However, the convergence of the ALS algorithm relies heavily on the determination of the tensor rank, which is an NP-hard problem, and the Vandermonde structure of the factor matrix is ignored. The number of iterations in the ALS algorithm is also uncertain, which may lead to high computational complexity. In the literature [23, 17, 18], the uniqueness condition of the tensor decomposition with special-structured factor matrices, e.g., Toeplitz, Hankel, Vandermonde and column-wise orthonormal, has been investigated. The structured factor matrix may change the uniqueness condition and, therefore, point to some new tensor decomposition methods.
In this section, we mainly focus on the tensor decomposition with Vandermonde factor matrix in application to DOA estimation for TB MIMO radar with uniformly spaced subarrays. A computationally efficient DOA estimation method is proposed, and its application to both linear and planar arrays is discussed. To begin with, a 3-order tensor ${\cal F}\triangleq[[{{\bf{G}},({\bf{X}}\odot{\bf{B}}),{\bf{C}}}]]$ can be reshaped from (18) (see Fact 4), whose mode-3 unfolding is ${\bf F}_{(3)}=({\bf G}\odot{\bf{X}}\odot{\bf{B}}){\bf C}^{T}$. Considering only the $q$-th pulse, ${\bf F}_{(3)}$ is generically identical to that in (9) for the linear array or (14) for the planar array. In other words, the signal covariance matrix-based DOA estimation methods such as MUSIC and ESPRIT can be conducted by using ${\bf R}=1/Q{\bf F}_{(3)}{\bf F}^{H}_{(3)}$ as the signal covariance matrix. Meanwhile, note that ${\bf{G}}$ is either a Vandermonde matrix or the KR product of a pair of Vandermonde matrices. Thus, Lemma 1 can be applied to conduct a tensor decomposition-based DOA estimation if the second precondition is satisfied, i.e., $r({\bf C})=L$. Taking a ULA as an example, let ${\bf G}={\bf K}$. The SVD of ${{\bf{F}}_{(3)}}$ is denoted by ${{\bf{F}}_{(3)}}={\bf{U}}{\bf\Lambda}{{\bf{V}}^{H}}$, where ${\bf{U}}\in{\mathbb{C}^{{SKN}\times L}}$, ${\bf{\Lambda}}\in{\mathbb{C}^{L\times L}}$, and ${\bf{V}}\in{\mathbb{C}^{{Q}\times L}}$. According to Lemma 1, there must exist a nonsingular matrix $\bf M$ of size $L\times L$ such that ${\bf{UM}}={\bf{K}}\odot{\bf{X}}\odot{\bf B}$ (19) or equivalently, ${{\bf{U}}_{1}}{\bf{M}}={\bf{\overline{K}}}\odot{\bf{X}}\odot{\bf B},\qquad{{\bf{U}}_{2}}{\bf{M}}={\bf{\underline{K}}}\odot{\bf{X}}\odot{\bf B}$ (20) where submatrices ${{\bf{U}}_{1}}=\left[{{{\bf{I}}_{KN(S-1)}},{{\bf{0}}_{KN(S-1)\times KN}}}\right]{\bf{U}}$ and ${{\bf{U}}_{2}}=\left[{{{\bf{0}}_{KN(S-1)\times KN}},{{\bf{I}}_{KN(S-1)}}}\right]{\bf{U}}$ are truncated from rows of $\bf U$.
Since $\bf K$ is a Vandermonde matrix, ${\bf{\underline{K}}}={\bf{\overline{K}}}{{\bf\Omega}}$, where ${{\bf\Omega}}=diag({\bm{\omega}})$. Substitute it into (20) to obtain ${{\bf{U}}_{2}}{\bf{M}}={{\bf{U}}_{1}}{\bf{M}}{{\bf\Omega}}.$ (21) Since $\bf M$ and ${{\bf\Omega}}$ are both full rank, ${{\bf{U}}_{2}}={{\bf{U}}_{1}}\left({{\bf{M}}{{\bf{\Omega}}}{{\bf{M}}^{-1}}}\right)$. After the eigenvalue decomposition (EVD) of the matrix ${\bf{U}}_{1}^{\dagger}{{\bf{U}}_{2}}$, ${{\bm{\omega}}}$ can be estimated as the vector of eigenvalues and $\bf M$ is the matrix of the corresponding eigenvectors. Then, the target DOA can be computed by $\displaystyle{\hat{\omega}}(l)={e^{-j\pi{\Delta_{m}}\sin{\bar{\theta}_{l}}}}$ (22) $\displaystyle{\Delta_{m}}\sin{\bar{\theta}_{l}}-{\Delta_{m}}\sin{{\bar{\theta}}^{{}^{\prime}}_{l}}=\pm 2k$ where $k\in\left({-\frac{{\Delta_{m}}}{2},\frac{{\Delta_{m}}}{2}}\right)$ is an integer, ${\bar{\theta}_{l}}$ is the true direction, and ${{\bar{\theta}}^{{}^{\prime}}_{l}}$ denotes the potential grating lobes when $\Delta_{m}\geq 2$. The estimation of ${\bm{\omega}}_{x}$ and ${\bm{\omega}}_{y}$ for the planar array is straightforward, which can also be found in the proof of Lemma 1.
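The chain (19)-(22) can be rehearsed numerically in the noise-free case. The sketch below (assumed sizes and angles, $\Delta_{m}=1$ so no grating lobes appear) recovers the generators from the EVD of ${\bf U}_{1}^{\dagger}{\bf U}_{2}$:

```python
import numpy as np

rng = np.random.default_rng(2)

def khatri_rao(*mats):
    out = mats[0]
    for M in mats[1:]:
        out = np.einsum('il,jl->ijl', out, M).reshape(-1, M.shape[1])
    return out

# Hypothetical sizes: S subarrays, K waveforms, N receivers, Q pulses, L targets.
S, K, N, Q, L = 4, 2, 3, 6, 2
theta = np.deg2rad([-18.0, 27.0])
omega = np.exp(-1j * np.pi * np.sin(theta))          # generators, Δ_m = 1
Kmat = omega[None, :] ** np.arange(S)[:, None]
X = rng.standard_normal((K, L)) + 1j * rng.standard_normal((K, L))
B = rng.standard_normal((N, L)) + 1j * rng.standard_normal((N, L))
C = rng.standard_normal((Q, L)) + 1j * rng.standard_normal((Q, L))
F3 = khatri_rao(Kmat, X, B) @ C.T                    # mode-3 unfolding, noise-free

U = np.linalg.svd(F3, full_matrices=False)[0][:, :L]  # signal subspace
U1, U2 = U[:-K * N, :], U[K * N:, :]                  # drop last / first KN rows
evals = np.linalg.eigvals(np.linalg.pinv(U1) @ U2)    # eigenvalues ≈ ω, Eq. (21)
est = np.sort(np.rad2deg(np.arcsin(-np.angle(evals) / np.pi)))  # Eq. (22)
assert np.allclose(est, [-18.0, 27.0], atol=1e-6)
```

With noise, the same steps apply to the estimated subspace; only the equality in the final assertion becomes approximate.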
Consequently, $\hat{u}_{l}$ and $\hat{v}_{l}$ can be determined by $\left\\{\begin{array}[]{l}{{\hat{\omega}}_{y}}(l)={e^{-j\pi{\Delta_{{m_{y}}}}{{\bar{u}}_{l}}}}\\\ {\Delta_{{m_{y}}}}{{\bar{u}}_{l}}-{\Delta_{{m_{y}}}}{{\bar{u}^{\prime}}_{l}}=\pm 2{k_{y}}\end{array}\right.,\quad\left\\{\begin{array}[]{l}{{\hat{\omega}}_{x}}(l)={e^{-j\pi{\Delta_{{m_{x}}}}{{\bar{v}}_{l}}}}\\\ {\Delta_{{m_{x}}}}{{\bar{v}}_{l}}-{\Delta_{{m_{x}}}}{{\bar{v}^{\prime}}_{l}}=\pm 2{k_{x}}\end{array}\right.$ (23) where $k_{y}\in\left({-\frac{{\Delta_{m_{y}}}}{2},\frac{{\Delta_{m_{y}}}}{2}}\right)$ and $k_{x}\in\left({-\frac{{\Delta_{m_{x}}}}{2},\frac{{\Delta_{m_{x}}}}{2}}\right)$ are integers, respectively, $\bar{u}_{l}$ and $\bar{v}_{l}$ indicate the DOA information of the $l$-th target, while ${{\bar{u}}^{{}^{\prime}}_{l}}$ and ${{\bar{v}}^{{}^{\prime}}_{l}}$ correspond to the potential grating lobes. Since $u_{l}\triangleq\sin\varphi_{l}\sin\theta_{l}$ and $v_{l}\triangleq\sin\varphi_{l}\cos\theta_{l}$, the pair of $(\hat{\theta}_{l},\hat{\varphi}_{l})$ can be denoted by ${\hat{\theta}_{l}}=\arctan\left(\frac{{{\bar{u}_{l}}}}{{{\bar{v}_{l}}}}\right),\qquad{\hat{\varphi}_{l}}=\arcsin\left(\sqrt{\bar{u}_{l}^{2}+\bar{v}_{l}^{2}}\right).$ (24) The process in (19)-(21) can be regarded as the generalized ESPRIT method [29]. Compared to other tensor-decomposition-based methods such as PARAFAC, the Vandermonde structure of the factor matrix is exploited and the computational complexity is reduced significantly. No iterations are required and the convergence is guaranteed. However, the precondition $r({\bf C})=L$ must be satisfied. In some applications regarding target detection, it may happen that two targets with similar Doppler shifts exist. Under this circumstance, two column vectors in $\bf C$ are considered to be linearly dependent. The rank deficiency problem limits the application of this computationally efficient DOA estimation method.
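The back-mapping (24) from direction cosines to angles is a two-line computation; a minimal check with assumed angles:

```python
import numpy as np

# Sketch of Eq. (24): recover azimuth/elevation from the direction cosines
# u = sin φ sin θ and v = sin φ cos θ (angles assumed for illustration).
theta, phi = np.deg2rad(32.0), np.deg2rad(55.0)
u = np.sin(phi) * np.sin(theta)
v = np.sin(phi) * np.cos(theta)
theta_hat = np.arctan2(u, v)             # θ = arctan(u / v), quadrant-safe
phi_hat = np.arcsin(np.hypot(u, v))      # φ = arcsin(√(u² + v²))
assert np.isclose(np.rad2deg(theta_hat), 32.0)
assert np.isclose(np.rad2deg(phi_hat), 55.0)
```

Using `arctan2` rather than a plain `arctan` keeps the azimuth in the correct quadrant when $v<0$.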
Besides, the spatial ambiguity problem further restricts the placement of transmit elements: the distance between the phase centers of two adjacent subarrays should be no more than half the working wavelength, which limits the array aperture. To tackle the problem of rank deficiency and obtain a higher spatial resolution, (18) is reshaped by squeezing $\bf B$ and $\bf C$ into one dimension. The third factor matrix ${\bf B}\odot{\bf C}$, as the KR product of a Vandermonde matrix and an arbitrary matrix, generically has rank $\min(QN,L)$. (Although there exists no deterministic formula for the rank of the KR product of a Vandermonde matrix and an arbitrary matrix, it is generically full rank; see Appendix A.) Two targets with identical Doppler shift can be resolved, while the grating lobes can be eliminated by comparing the estimation result originating from ${\bf G}$ to the distinct target angular information obtained by ${\bf X}$ [21, 27]. ### III-A Proposed Computationally Efficient DOA Estimation Method for TB MIMO Radar with Uniformly Spaced Transmit Subarrays Considering the noise-free version of (18), a 3-order tensor ${\cal T}\triangleq[[{{\bf{G}},{\bf{X}},({\bf{B}}\odot{\bf{C}})}]]$ can be reshaped. The mode-3 unfolding of $\cal T$ is given by ${{\bf{T}}_{(3)}}=\left({{\bf{G}}\odot{\bf{X}}}\right){\left({{\bf{B}}\odot{\bf{C}}}\right)^{T}}$ (25) where $\bf G$, $\bf X$, and ${\bf{B}}\odot{\bf{C}}$ are the three factor matrices, respectively. The receive steering matrix and Doppler steering matrix are squeezed into one dimension. Since the generators of $\bf G$ are distinct, the directions of all targets are unique with or without the existence of grating lobes. Hence, the third factor matrix ${\bf{B}}\odot{\bf{C}}$ has full column rank [17]. Lemma 1 holds for tensor $\cal T$. In the following, we develop methods for DOA estimation in TB MIMO radar with uniformly spaced subarrays via the decomposition of $\cal T$ for the linear and planar arrays sequentially.
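The rank argument for the squeezed factor matrix ${\bf B}\odot{\bf C}$ can be illustrated numerically: even when two Doppler columns of ${\bf C}$ coincide, the KR product is generically full column rank (random matrices, assumed sizes):

```python
import numpy as np

rng = np.random.default_rng(3)

def khatri_rao(B, C):
    """Column-wise Kronecker product of B and C."""
    return np.einsum('nl,ql->nql', B, C).reshape(B.shape[0] * C.shape[0], -1)

# Sketch: two targets share the same Doppler column, so C alone is rank
# deficient, but squeezing the receive and Doppler dimensions restores rank L.
N, Q, L = 4, 5, 3
B = rng.standard_normal((N, L)) + 1j * rng.standard_normal((N, L))
C = rng.standard_normal((Q, L)) + 1j * rng.standard_normal((Q, L))
C[:, 1] = C[:, 0]                        # identical Doppler steering columns

assert np.linalg.matrix_rank(C) < L      # C alone is rank deficient
assert np.linalg.matrix_rank(khatri_rao(B, C)) == L  # B ⊙ C recovers rank L
```

Intuitively, the two coinciding Doppler columns are separated by the distinct receive-steering columns they multiply.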
#### III-A1 ULA Let $\bf G=\bf K$. According to Lemma 1, the decomposition of $\cal T$ is unique. To obtain the factor matrices with target DOA information, denote the SVD of ${{\bf{T}}_{(3)}}$ as ${{\bf{T}}_{(3)}}={\bf{U}}{\bf\Lambda}{{\bf{V}}^{H}}$, where ${\bf{U}}\in{\mathbb{C}^{{SK}\times L}}$, ${\bf{\Lambda}}\in{\mathbb{C}^{L\times L}}$, and ${\bf{V}}\in{\mathbb{C}^{{NQ}\times L}}$. A nonsingular matrix $\bf E$ of size $L\times L$ satisfies ${\bf{UE}}={\bf{K}}\odot{\bf{X}}.$ (26) Owing to the structure of the KR product, we can write ${{\bf{U}}_{1}}{\bf{E}}={\bf{\overline{K}}}\odot{\bf{X}},\qquad{{\bf{U}}_{2}}{\bf{E}}={\bf{\underline{K}}}\odot{\bf{X}}$ (27) where ${{\bf{U}}_{1}}=\left[{{{\bf{I}}_{K(S-1)}},{{\bf{0}}_{K(S-1)\times K}}}\right]{\bf{U}}$ and ${{\bf{U}}_{2}}=\left[{{{\bf{0}}_{K(S-1)\times K}},{{\bf{I}}_{K(S-1)}}}\right]{\bf{U}}$ are truncated from rows of $\bf U$, respectively. Substitute ${\bf{\underline{K}}}={\bf{\overline{K}}}{{\bf\Omega}}$ into (27) to obtain ${{\bf{U}}_{2}}{\bf{E}}={{\bf{U}}_{1}}{\bf{E}}{{\bf\Omega}}.$ (28) Since $\bf E$ and ${{\bf\Omega}}$ are both full rank, ${{\bf{U}}_{2}}={{\bf{U}}_{1}}\left({{\bf{E}}{{\bf{\Omega}}}{{\bf{E}}^{-1}}}\right)$, which reveals the connection between ${\bf{U}}_{1}^{\dagger}{{\bf{U}}_{2}}$ and ${{\bf{E}}{{\bf{\Omega}}}{{\bf{E}}^{-1}}}$. From the EVD of ${\bf{U}}_{1}^{\dagger}{{\bf{U}}_{2}}$, the generators ${{\bm{\omega}}}$ can be estimated with $\bf E$ being the matrix of the corresponding eigenvectors. Then, $\left\\{{{\hat{\theta}_{l}}}\right\\}_{l=1}^{L}$ can be computed by (22).
Noting that $\left({\frac{{{\bm{\kappa}}_{l}^{H}}}{{{\bm{\kappa}}_{l}^{H}{{\bm{\kappa}}_{l}}}}\otimes{{\bf{I}}_{K}}}\right)\left({{{\bm{\kappa}}_{l}}\otimes{{\bm{\chi}}_{l}}}\right)={{\bm{\chi}}_{l}}$ and ${{\bm{\kappa}}_{l}^{H}}{{\bm{\kappa}}_{l}}=S$, the compact form of ${{\bm{\chi}}_{l}}$ is given as ${{\bm{\chi}}_{l}}=1/S\left({{{\bm{\kappa}}_{l}^{H}}\otimes{{\bf{I}}_{K}}}\right){\bf{U}}{{\bf{e}}_{l}}.$ (29) Equation (29) provides an estimate of each column vector of $\bf X$. Given ${\bf W}_{0}$ as prior information, a polynomial rooting method [21] can be applied to estimate the unambiguous $\left\\{{{\theta_{l}}}\right\\}_{l=1}^{L}$ in ${\bf A}_{0}$ independently. Instead of exploiting the signal subspace shift-invariance of the transmit subarray steering matrix, the method in [21] focuses on the Vandermonde structure of ${\bf{A}}_{0}$ within a single subarray and reveals the relationship between the TB MIMO radar transmit beampattern and the generalized sidelobe canceller (GSC). Consequently, the estimation results originating from ${\bf K}$ and ${\bf X}$ both provide the target angular information. The grating lobes can be eliminated by comparing the results to each other. Since (29) is conducted column by column, the angles are paired automatically before comparison. An outline of the proposed method for the DOA estimation in TB MIMO radar with linear array is given as Algorithm 1.
Algorithm 1 DOA Estimation for 1-D TB MIMO Radar with Uniformly Spaced Transmit Subarrays 0: Signal tensor ${\cal{Z}}\in{\mathbb{C}^{S\times K\times N\times Q}}$ from (11) 0: Targets DOA information $\left\\{{{\theta_{l}}}\right\\}_{l=1}^{L}$ 1: Reshape ${\cal{Z}}$ into a 3-order tensor ${\cal{T}}\in{\mathbb{C}^{{S}\times K\times{NQ}}}$, where the mode-3 unfolding of ${\cal{T}}$ is given by (25); 2: Compute the SVD of the matrix ${{\bf{T}}_{(3)}}={\bf{U}}{\bf\Lambda}{{\bf{V}}^{H}}$; 3: Formulate two submatrices ${{\bf{U}}_{1}},{{\bf{U}}_{2}}$ satisfying (27); 4: Calculate the EVD of the matrix ${\bf{U}}_{1}^{\dagger}{{\bf{U}}_{2}}$; 5: Estimate ${\hat{\theta}}_{l}$ via (22), which contains grating lobes; 6: Construct ${{\bm{\chi}}_{l}}$ via (29); 7: Define ${\bf{\tilde{W}}_{0}}\triangleq{\bf{W}}_{0}-{\bf W}^{\prime}_{0}$, ${{\bf{W}}^{\prime}_{0}}\triangleq{\left[{{{\bm{\chi}}_{l}},{{\bf{0}}_{K\times\left({{\rm{{M_{0}}-1}}}\right)}}}\right]^{T}}$; 8: Build a polynomial via $F({z_{l}})\triangleq{{\bf{p}}^{H}}({z_{l}}){\bf{\tilde{W}}}_{0}{{{\bf{\tilde{W}}}}_{0}^{H}}{\bf{p}}({z_{l}})$, where ${\bf{p}}(z_{l})\triangleq{\left[{1,z_{l},\cdots,{{z_{l}}^{{M_{0}}-1}}}\right]^{T}}$ and $z_{l}\triangleq e^{-j\pi\sin\theta_{l}}$; 9: Compute the roots of the polynomial $F({z_{l}})$ and select the one closest to the unit circle as $\hat{z}_{l}$; 10: Estimate ${\theta}_{l}$ via ${\hat{\theta}}_{l}=\arcsin\left(\frac{{j\ln({\hat{z}_{l}})}}{\pi}\right)$; 11: Compare the results in step 5 and step 10; 12: return $\left\\{{{\theta_{l}}}\right\\}_{l=1}^{L}$. #### III-A2 URA First, substitute ${\bf G}={\bf H}\odot{\bf\Delta}$ into (25). 
Similar to (26), the SVD of ${{\bf{T}}_{(3)}}$ is ${{\bf{T}}_{(3)}}={\bf{U}}{\bf\Lambda}{{\bf{V}}^{H}}$, and there is a nonsingular matrix ${\bf E}\in{\mathbb{C}^{L\times L}}$ such that ${\bf{UE}}={\bf H}\odot{\bf\Delta}\odot{\bf{X}}.$ (30) Considering the KR product, the Vandermonde structure of both ${\bf H}$ and ${\bf\Delta}$ is exploited via $\displaystyle{{\bf{U}}_{\rm{2}}}{\bf{E}}={\bf{\underline{H}}}\odot{\bf{\Delta}}\odot{\bf{X}}=\left({{\bf{\overline{H}}}\odot{\bf{\Delta}}\odot{\bf{X}}}\right){{\bf{\Omega}}_{y}}={{\bf{U}}_{1}}{\bf{E}}{{\bf{\Omega}}_{y}}$ (31) $\displaystyle{{\bf{U}}_{\rm{4}}}{\bf{E}}={\bf{H}}\odot{\bf{\underline{\Delta}}}\odot{\bf{X}}=\left({{\bf{H}}\odot{\bf{\overline{\Delta}}}\odot{\bf{X}}}\right){{\bf{\Omega}}_{x}}={{\bf{U}}_{3}}{\bf{E}}{{\bf{\Omega}}_{x}}$ where ${{\bf\Omega}_{y}}=diag({{\bm{\omega}}_{y}})$, ${{\bm{\Omega}}_{x}}=diag({{\bm{\omega}}_{x}})$, ${{\bf{U}}_{\rm{1}}}$, ${{\bf{U}}_{\rm{2}}}$, ${{\bf{U}}_{\rm{3}}}$ and ${{\bf{U}}_{\rm{4}}}$ are the submatrices truncated from rows of ${\bf{U}}$, i.e., $\displaystyle{{\bf{U}}_{1}}{\rm{=}}\left[{{{\bf{I}}_{{I}K({J}-1)}},{{\bf{0}}_{{I}K({J}-1)\times{I}K}}}\right]{\bf{U}}$ (32) $\displaystyle{{\bf{U}}_{2}}{\rm{=}}\left[{{{\bf{0}}_{{I}K({J}-1)\times{I}K}},{{\bf{I}}_{{I}K({J}-1)}}}\right]{\bf{U}}$ $\displaystyle{{\bf{U}}_{3}}=\left({{{\bf{I}}_{{J}}}\otimes\left[{{{\bf{I}}_{K({I}-1)}},{{\bf{0}}_{K({I}-1)\times K}}}\right]}\right){\bf{U}}$ $\displaystyle{{\bf{U}}_{4}}=\left({{{\bf{I}}_{{J}}}\otimes\left[{{{\bf{0}}_{K({I}-1)\times K}},{{\bf{I}}_{K({I}-1)}}}\right]}\right){\bf{U}}.$ Like (28), the vectors ${{\bm{\omega}}_{y}}$ and ${{\bm{\omega}}_{x}}$ can be estimated as the collections of eigenvalues of ${\bf{U}}_{1}^{\dagger}{{\bf{U}}_{2}}$ and ${\bf{U}}_{3}^{\dagger}{{\bf{U}}_{4}}$, respectively, and $\bf E$ is the matrix of the corresponding eigenvectors. Then, the possible pairs of $(\theta_{l},\varphi_{l})$ can be computed by (23)-(24). 
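The double shift-invariance (30)-(32) can be rehearsed in the noise-free case; the sketch below (assumed sizes and direction cosines) recovers ${\bm\omega}_{y}$ and ${\bm\omega}_{x}$ from two EVDs on the same signal subspace:

```python
import numpy as np

rng = np.random.default_rng(5)

def khatri_rao(*mats):
    out = mats[0]
    for M in mats[1:]:
        out = np.einsum('il,jl->ijl', out, M).reshape(-1, M.shape[1])
    return out

# Hypothetical sizes: J x I subarray grid, K waveforms, NQ squeezed dimension.
J, I, K, NQ, L = 3, 4, 2, 8, 2
u = np.array([0.3, -0.2])               # direction cosines u_l (assumed)
v = np.array([0.1, 0.45])               # direction cosines v_l (assumed)
H = np.exp(-1j * np.pi * np.outer(np.arange(J), u))  # Vandermonde in ω_y
D = np.exp(-1j * np.pi * np.outer(np.arange(I), v))  # Vandermonde in ω_x (Δ)
X = rng.standard_normal((K, L)) + 1j * rng.standard_normal((K, L))
BC = rng.standard_normal((NQ, L)) + 1j * rng.standard_normal((NQ, L))
T3 = khatri_rao(H, D, X) @ BC.T

U = np.linalg.svd(T3, full_matrices=False)[0][:, :L]
U1, U2 = U[:-I * K, :], U[I * K:, :]    # shift along the j (row) index
sel3 = np.kron(np.eye(J), np.hstack([np.eye(K * (I - 1)),
                                     np.zeros((K * (I - 1), K))]))
sel4 = np.kron(np.eye(J), np.hstack([np.zeros((K * (I - 1), K)),
                                     np.eye(K * (I - 1))]))
U3, U4 = sel3 @ U, sel4 @ U             # shift along the i (column) index
wy = np.linalg.eigvals(np.linalg.pinv(U1) @ U2)   # generators ω_y
wx = np.linalg.eigvals(np.linalg.pinv(U3) @ U4)   # generators ω_x
assert np.allclose(np.sort(-np.angle(wy) / np.pi), np.sort(u), atol=1e-8)
assert np.allclose(np.sort(-np.angle(wx) / np.pi), np.sort(v), atol=1e-8)
```

In the sketch the two sets of eigenvalues are sorted independently; in practice the pairing follows from the shared eigenvector matrix $\bf E$, as stated in the text.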
To eliminate the grating lobes, the relationship between the TB MIMO radar transmit beampattern and the GSC is applied again to estimate the target DOA in the 2-D case [27]. Specifically, $\left({\frac{{{\bm{\kappa}}_{l}^{H}}}{{{\bm{\kappa}}_{l}^{H}{{\bm{\kappa}}_{l}}}}\otimes{{\bf{I}}_{K}}}\right)\left({{{\bm{\kappa}}_{l}}\otimes{{\bm{\chi}}_{l}}}\right)={{\bm{\chi}}_{l}}$ and ${{\bm{\kappa}}_{l}^{H}}{{\bm{\kappa}}_{l}}=S$ still hold when ${{\bm{\kappa}}_{l}}$ is replaced by ${{\bm{\eta}}_{l}}\odot{{\bm{\delta}}_{l}}$. Hence, each column of $\bf X$ can be restored by ${{\bm{\chi}}_{l}}=1/S\left[{({{\bm{\eta}}_{l}}\odot{{\bm{\delta}}_{l}})^{H}\otimes{{\bf{I}}_{K}}}\right]{\bf{U}}{{\bf{e}}_{l}}.$ (33) Since the TB matrix ${\bf{W}}_{0}$ is given as prior information, ${{\bm{\chi}}_{l}}={\bf{W}}_{0}^{H}{\bm{\alpha}}_{0}(\theta_{l},\varphi_{l})$ can be rewritten as $K$ different linear equations ${{{\chi}}_{l}}(k)={\bf{w}}_{k}^{H}{\bm{\alpha}}_{0}(\theta_{l},\varphi_{l}),\quad k=1,2,\cdots,K$ (34) or equivalently, ${\bf{p}}_{k}^{H}{\bm{\alpha}}_{0}(\theta_{l},\varphi_{l})=0$, where ${\bf{p}}_{k}\triangleq{\bf{w}}_{k}-[{{{\chi}}_{l}}(k),{\bf 0}_{1\times(M_{0}-1)}]^{T}$. It can be seen that the linear equations in (34) hold if and only if ${{{\left|\left|{{\bf{P}}^{H}{\bm{\alpha}}_{0}({\hat{\theta}_{l}},{\hat{\varphi}_{l}})}\right|\right|^{2}}}}=0$, where ${\bf P}={\bf W}_{0}-{\bf W}^{\prime}_{0}$, ${\bf W}^{\prime}_{0}\triangleq[{{{\bm{\chi}}}_{l}},{\bf 0}_{K\times(M_{0}-1)}]^{T}$. Therefore, the estimate of a pair $(\theta_{l},\varphi_{l})$ can be found by solving the following convex optimization problem [27] $\displaystyle\min\limits_{(\hat{\theta}_{l},\hat{\varphi}_{l})}{{{\left|\left|{{\bf{P}}^{H}({\bf{u}}_{0}(\hat{\theta}_{l},\hat{\varphi}_{l})\otimes{\bf{v}}_{0}(\hat{\theta}_{l},\hat{\varphi}_{l}))}\right|\right|^{2}}}}$ (35) whose structure is similar to the TB MIMO radar transmit beampattern. After obtaining the pair $(\hat{u}_{l},\hat{v}_{l})$, the distinct target DOA can be computed by (24).
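The two ingredients of this step, the column recovery (33) and the residual minimization behind (34)-(35), can be sketched as follows. Sizes, ${\bf W}_{0}$, and the subarray phases are assumed, and the cost is evaluated in an equivalent residual form $\|{\bf W}_{0}^{H}{\bm\alpha}_{0}(\theta,\varphi)-{\bm\chi}_{l}\|^{2}$ over a coarse grid rather than by convex optimization:

```python
import numpy as np

rng = np.random.default_rng(6)

# Hypothetical reference-subarray sizes and TB matrix.
Mx0, My0, K, S = 3, 3, 4, 5
W0 = rng.standard_normal((Mx0 * My0, K)) + 1j * rng.standard_normal((Mx0 * My0, K))

def alpha0(theta, phi):
    """Reference-subarray steering vector u0 ⊗ v0 for direction (theta, phi)."""
    u, v = np.sin(phi) * np.sin(theta), np.sin(phi) * np.cos(theta)
    return np.kron(np.exp(-1j * np.pi * np.arange(My0) * u),
                   np.exp(-1j * np.pi * np.arange(Mx0) * v))

th, ph = np.deg2rad(20.0), np.deg2rad(35.0)           # true direction (assumed)
chi = W0.conj().T @ alpha0(th, ph)                    # χ_l = W0^H α0(θ_l, φ_l)

# Eq. (33)-style averaging over S unit-modulus subarray phases recovers χ_l.
kappa = np.exp(-1j * np.pi * np.arange(S) * 0.3)      # assumed subarray phases
chi_rec = np.kron(kappa.conj(), np.eye(K)) @ np.kron(kappa, chi) / S
assert np.allclose(chi_rec, chi)

# Coarse grid search: the residual vanishes only at the true direction.
grid = np.deg2rad(np.arange(-60.0, 61.0, 5.0))
cost = [(np.linalg.norm(W0.conj().T @ alpha0(t, p) - chi_rec), t, p)
        for t in grid for p in grid if p > 0]
best = min(cost)
assert np.isclose(np.rad2deg(best[1]), 20.0)
assert np.isclose(np.rad2deg(best[2]), 35.0)
```

A grid search is used here only to keep the sketch self-contained; [27] solves the corresponding minimization as stated in the text.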
Since the computation in (33) is conducted column by column, the independent estimates of target DOA from $\bf G$ and $\bf X$ are paired automatically. By comparing them, the grating lobes can be mitigated. The primary procedures for the DOA estimation in TB MIMO radar with planar array are summarized as Algorithm 2. Algorithm 2 DOA Estimation for 2-D TB MIMO radar with Uniformly Spaced Transmit Subarrays 0: Signal Tensor ${\cal{Z}}\in{\mathbb{C}^{{J}\times{I}\times K\times N\times Q}}$ from (16) 0: Targets DOA information $\left\\{{{\theta_{l}}}\right\\}_{l=1}^{L}$ and $\left\\{{{\varphi_{l}}}\right\\}_{l=1}^{L}$ 1: Reshape ${\cal{Z}}$ into a 3-order tensor ${\cal{T}}\in{\mathbb{C}^{{S}\times K\times{NQ}}}$, where the mode-3 unfolding of ${\cal{T}}$ is given by (25); 2: Compute the SVD of the matrix ${{\bf{T}}_{(3)}}={\bf{U}}{\bf\Lambda}{{\bf{V}}^{H}}$; 3: Formulate four submatrices ${{\bf{U}}_{1}},{{\bf{U}}_{2}},{{\bf{U}}_{3}},{{\bf{U}}_{4}}$ via (32); 4: Calculate the EVD of the matrices ${\bf{U}}_{1}^{\dagger}{{\bf{U}}_{2}}$ and ${\bf{U}}_{3}^{\dagger}{{\bf{U}}_{4}}$; 5: Estimate the pair $({\hat{u}}_{l},\hat{v}_{l})$ via (23) and compute the pair $(\hat{\theta}_{l},\hat{\varphi}_{l})$ via (24); 6: Construct ${{\bm{\chi}}_{l}}$ via (33), where $\bf E$ is the matrix of the corresponding eigenvectors in step 4; 7: Build the matrix $\bf P$ and solve the minimization problem (35) to obtain the pair $(\hat{u}_{l},\hat{v}_{l})$; 8: Compute the unambiguous pair $(\hat{\theta}_{l},\hat{\varphi}_{l})$ via (24); 9: Compare the results in step 5 and step 8; 10: return $\left\\{{{\theta_{l}}}\right\\}_{l=1}^{L}$ and $\left\\{{{\varphi_{l}}}\right\\}_{l=1}^{L}$. ### III-B Parameter Identifiability As mentioned in Fact 2, the generic uniqueness condition of tensor decomposition for a high-order tensor is given as $\sum\limits_{n=1}^{N}{{k_{{{\bf{A}}^{(n)}}}}}\geq 2L+(N-1)$, where $N$ is the tensor order and $L$ is the tensor rank.
The upper bound of the tensor rank, i.e., the maximum number of targets that can be resolved, rises with the increase of the tensor order at the level of the Kruskal-rank of a matrix. However, if the factor matrix has special structure, the uniqueness condition is changed. An example of tensor decomposition with Vandermonde factor matrix is described in Lemma 1. It can be observed that the maximum number of targets that can be resolved by (18) is determined by the preconditions. The first precondition is that the first factor matrix of the tensor, as a Vandermonde matrix or the KR product of two Vandermonde matrices, must have distinct vector of generators. In this paper, we assume this condition holds since it means that each of the targets possesses a unique direction, which is reasonable for the DOA estimation problem. The second precondition requires that the third factor matrix has rank $L$. Note that the Doppler steering vectors of any two targets with the same Doppler shift are linearly dependent with a scale difference determined by the target RCS. Two scenarios are discussed next. When the target Doppler shifts are unique, Lemma 1 can be applied directly and the tensor $\cal F$ can be used. To ensure the uniqueness of the decomposition, it is required that [17] $\min((S-1)KN,Q)\geq L.$ (36) In MIMO radar, the number of pulses during a single CPI is usually large. Thus, the maximum number of targets that can be resolved is generically $(S-1)KN$, which is better than that in Fact 2 [17]. However, the size of the transmit array is confined since the distance between phase centers of two adjacent subarrays must be no more than half the working wavelength to avoid the spatial ambiguity. This restriction degrades the spatial resolution and also raises the difficulty of the physical implementation of the array. When there are at least two targets that have identical Doppler shifts, tensor $\cal T$ is used to ensure the second precondition.
The receive steering matrix is squeezed together with the Doppler steering matrix to distinguish targets with identical velocity. Although the rank of a specific tensor remains the same when it is reshaped, it was proved that different reshape manners are not equivalent from the performance point of view [30]. In our case, it means that the identifiability is changed and, therefore, the uniqueness condition of decomposition for $\cal T$ requires $\min\left((S-1)K,NQ\right)\geq L.$ (37) The usage of $\cal T$ is more appropriate for the general case, since the rank deficiency problem caused by identical target Doppler shifts is solved. Additionally, by reshaping tensor $\cal Z$ into tensor $\cal T$, the angular information can be estimated independently from $\bf G$ and $\bf X$. The unambiguous estimation result from $\bf X$ provides a second estimate of the target DOA and can be used to eliminate the grating lobes. Thus, it can be concluded that the use of the tensor model $\cal T$ has at least the above two advantages. The maximum number of targets that can be resolved is reduced to $(S-1)\cdot K$. To improve the parameter identifiability, increasing the number of transmit subarrays or transmit waveforms is worth considering. ## IV Arbitrary but Identical Subarrays with Multiple Scales of Shift-invariances In the previous section, we have assumed that the transmit subarrays are uniformly spaced to obtain a Vandermonde structure in the factor matrix of the designed tensor. However, such a constraint on the subarray structure can be relaxed. The placement of all subarrays need not be uniform, while the configuration within a single subarray can be arbitrary. The tensor model in (18) is applicable to TB MIMO radar with any arbitrary but identical subarrays, since the extended factor matrix that represents the phase rotations between transmit subarrays is merely determined by the coordinates of the transmit subarray phase centers.
The difference is that the array configuration varies the structure of the factor matrix, which may require extra steps to recover the target DOAs. A typical example has been given earlier, where the unambiguous spatial information in $\bf X$ is exploited to eliminate the cyclic ambiguity in $\bf G$. In the following, we discuss two general cases in which a transmit array with multiple scales of shift-invariances is placed on a lattice, and explain the use of the proposed computationally efficient DOA estimation method in both scenarios. ### IV-A Generalized Vandermonde Matrix Note that the Vandermonde structure of the steering matrix ${\bf G}$ is linked to the phase rotations between the transmit subarrays, and it is exploited in a look-up table for finding target DOAs. The Vandermonde structure is only a special case leading to phase rotation. Indeed, take, for example, a linear array with its elements placed on a lattice, where all lattice cells are enumerated sequentially. The indices of the elements form a counted set of increasing positive integers. It can be shown that $m_{s}$ must be a subset of this set, since the first element of each subarray corresponds to a unique lattice cell. Hence, $m_{s}$ may increase uniformly or non-uniformly. From (17), it can be observed that $m_{s}$ determines ${\bf K}$. When it rises uniformly, ${\bf K}$ should be a Vandermonde matrix (as derived in Section II, where the shift-invariance between different subarrays is related to the step size of $m_{s}$, or more specifically, the coordinates of the transmit subarray phase centers). Determined by the step size of $m_{s}$, i.e., $\Delta_{m}$, the configurations of adjacent subarrays can be partly overlapped ($\Delta_{m}<M_{0}$) or non-overlapped ($\Delta_{m}\geq M_{0}$). The proposed DOA estimation method in the last section can be used directly. 
In the case when $m_{s}$ rises non-uniformly, let us consider as an example the tensor model (11) with $S=7$ subarrays and $m_{s}=\\{1,2,3,5,6,7,9\\}$. Each subarray contains three elements; therefore, the original transmit array is a ULA with $M=11$ elements. Then, $\bf K$ is a generalized Vandermonde matrix [31], which can be written as ${\bf K}\triangleq[{\bf z}_{1},\cdots,{\bf z}_{L}]$, where ${\bf z}_{l}\triangleq[1,z_{l},z_{l}^{2},z_{l}^{4},z_{l}^{5},z_{l}^{6},z_{l}^{8}]^{T}$ and ${z_{l}}\triangleq{e^{-j\pi\sin{\theta_{l}}}}$. The idea of multiple invariance ESPRIT [32] is introduced to conduct the DOA estimation with non-uniformly spaced transmit subarrays. Consequently, $\bf K$ can be interpreted as the combination of a set of submatrices ${\bf K}^{(sub)}$ denoting different sub-ULAs associated with various shift-invariances, i.e., $\displaystyle{\bf K}^{(sub)}\triangleq\left[\left({\bf K}^{(1,1)}\right)^{T},\left({\bf K}^{(2,1)}\right)^{T},\left({\bf K}^{(1,2)}\right)^{T}\right]^{T}$ (38) $\displaystyle{\bf K}^{(1,1)}\triangleq\left[{\bf z}^{(1,1)}_{1},\cdots,{\bf z}^{(1,1)}_{L}\right]$ $\displaystyle{\bf K}^{(2,1)}\triangleq\left[{\bf z}^{(2,1)}_{1},\cdots,{\bf z}^{(2,1)}_{L}\right]$ $\displaystyle{\bf K}^{(1,2)}\triangleq\left[{\bf z}^{(1,2)}_{1},\cdots,{\bf z}^{(1,2)}_{L}\right]$ where ${\bf z}^{(1,1)}_{l}$ is selected from ${\bf z}_{l}$ with $m_{s}=\\{1,2,3\\}$, ${\bf z}^{(2,1)}_{l}$ is selected from ${\bf z}_{l}$ with $m_{s}=\\{5,6,7\\}$, and ${\bf z}^{(1,2)}_{l}$ is selected from ${\bf z}_{l}$ with $m_{s}=\\{1,3,5,7,9\\}$. In other words, ${\bf K}^{(1,1)}$ is a submatrix of $\bf K$ that consists of the first three rows with shift-invariance $\Delta_{m}=1$. The other two submatrices are analogous. Note that (38) is not the only subarray construction method, but it contains all transmit subarrays with a minimal distinct shift-invariance set $\Delta=\\{\Delta_{m}|\Delta_{m}=1,2\\}$. 
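The submatrix construction of (38) can be sketched directly; the DOAs below are illustrative, and the row-index sets follow the example $m_{s}=\{1,2,3,5,6,7,9\}$:

```python
import numpy as np

m_s = np.array([1, 2, 3, 5, 6, 7, 9])        # phase-center indices of the subarrays
theta = np.deg2rad([-5.0, 10.0, 18.0])       # illustrative DOAs
z = np.exp(-1j * np.pi * np.sin(theta))      # generators z_l
K = z[None, :] ** (m_s - 1)[:, None]         # generalized Vandermonde matrix

# Sub-ULAs of (38): rows with m_s = {1,2,3}, {5,6,7}, and {1,3,5,7,9}
K11, K21, K12 = K[[0, 1, 2]], K[[3, 4, 5]], K[[0, 2, 3, 5, 6]]
# Within each sub-ULA, consecutive rows differ by z**Delta_m
print(np.allclose(K11[1:], K11[:-1] * z))     # True (Delta_m = 1)
print(np.allclose(K21[1:], K21[:-1] * z))     # True (Delta_m = 1)
print(np.allclose(K12[1:], K12[:-1] * z**2))  # True (Delta_m = 2)
```

Each submatrix is an ordinary Vandermonde matrix in $z_l^{\Delta_m}$, which is what the row-selection step in the next paragraph exploits.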
Substituting (38) into (25), we can write ${{\bf{T}}^{(sub)}_{(3)}}=\left({{\bf{K}}^{(sub)}\odot{\bf{X}}}\right){\left({{\bf{B}}\odot{\bf{C}}}\right)^{T}}.$ (39) Its SVD is given as ${{\bf{T}}^{(sub)}_{(3)}}={\bf U}^{(sub)}{\bf\Lambda}^{(sub)}\left({\bf V}^{(sub)}\right)^{H}$. It can be observed that Lemma 1 holds for (39). By constructing ${\bf K}^{(sub)}$, a new transmit subarray steering matrix that consists of several Vandermonde submatrices can be introduced. To exploit the Vandermonde structure, an extra row selection must be applied. Taking ${\bf{K}}^{(1,1)}$ as an example, we can generalize (26) to obtain ${{\bf{K}}^{(1,1)}\odot{\bf{X}}}={\bf U}^{(1,1)}{\bf E}$ (40) where ${\bf U}^{(1,1)}$ is truncated from ${\bf U}^{(sub)}$ in the same way as ${\bf K}^{(1,1)}$ from ${\bf K}^{(sub)}$. Thus, each column of ${\bf K}^{(1,1)}$ can be estimated. The estimates of ${\bf K}^{(1,2)}$ and ${\bf K}^{(2,1)}$ can be obtained similarly. It is worth noting that if $\Delta_{m}>1$, the problem of grating lobes may still occur when recovering $\theta_{l}$ from ${\bf U}^{\left({N_{\Delta_{m}}},\Delta_{m}\right)}$, where $N_{\Delta_{m}}$ represents the number of subarrays whose shift-invariance is determined by $\Delta_{m}$. Usually, the unambiguous spatial information in $\bf X$ can be exploited to eliminate the potential grating lobes. However, this requires each subarray to be a dense ULA, which restricts the aperture of the transmit subarray and, therefore, the spatial resolution. Note that the generators of the Vandermonde submatrices in ${\bf K}^{(sub)}$ provide the target DOA information at different exponential levels, i.e., $z_{l}^{\Delta_{m}}$. Based on this, a polynomial function is designed to estimate the target DOA without using the second factor matrix ${\bf X}$. 
For every possible shift-invariance $\Delta_{m}$, denote $\displaystyle{\bf a}_{l}^{(\Delta_{m})}\triangleq\left[{\overline{\bm{\kappa}}}_{l}^{(1,\Delta_{m})T},\cdots,{\overline{\bm{\kappa}}}_{l}^{({N_{\Delta_{m}}},\Delta_{m})T}\right]^{T}$ (41) $\displaystyle{\bf b}_{l}^{(\Delta_{m})}\triangleq\left[{\underline{\bm{\kappa}}}_{l}^{(1,\Delta_{m})T},\cdots,{\underline{\bm{\kappa}}}_{l}^{({N_{\Delta_{m}}},\Delta_{m})T}\right]^{T}.$ To illustrate (41), consider the array structure in (38). When $\Delta_{m}=1$, there are two different submatrices/sub-ULAs, i.e., ${N_{1}}=2$, ${\bf a}_{l}^{(1)}=\left[{\overline{\bm{\kappa}}}_{l}^{(1,1)T},{\overline{\bm{\kappa}}}_{l}^{(2,1)T}\right]^{T}=\left[1,z_{l},z_{l}^{4},z_{l}^{5}\right]^{T}$, and ${\bf b}_{l}^{(1)}=\left[{\underline{\bm{\kappa}}}_{l}^{(1,1)T},{\underline{\bm{\kappa}}}_{l}^{(2,1)T}\right]^{T}=\left[z_{l},z^{2}_{l},z_{l}^{5},z_{l}^{6}\right]^{T}$. When $\Delta_{m}=2$, only one submatrix/sub-ULA exists, i.e., ${N_{2}}=1$, ${\bf a}_{l}^{(2)}={\overline{\bm{\kappa}}}_{l}^{(1,2)}=\left[1,z_{l}^{2},z_{l}^{4},z_{l}^{6}\right]^{T}$ and ${\bf b}_{l}^{(2)}={\underline{\bm{\kappa}}}_{l}^{(1,2)}=\left[z_{l}^{2},z_{l}^{4},z_{l}^{6},z_{l}^{8}\right]^{T}$. Also, the following constraint should be satisfied ${\bf a}_{l}^{(\Delta_{m})}{{z}_{l}^{\Delta_{m}}}={\bf b}_{l}^{(\Delta_{m})}.$ (42) It is proved in [31] that (42) can be achieved by rooting the polynomial function $f({z_{l}})\triangleq\sum\limits_{{\Delta_{m}}\in\Delta}{\left|{\left|{{\bf{a}}_{l}^{({\Delta_{m}})}z_{l}^{{\Delta_{m}}}-{\bf{b}}_{l}^{({\Delta_{m}})}}\right|}\right|_{F}^{2}}$ (43) as long as two coprime numbers can be found in the shift-invariance set $\Delta$. By definition of $z_{l}$, the root nearest to the unit circle should be chosen as $\hat{z}_{l}$, which finally estimates the target DOA as ${\hat{\theta}}_{l}=\arcsin\left(\frac{{j\ln({\hat{z}_{l}})}}{\pi}\right)$. The construction of ${\bf K}^{(sub)}$ enables the use of $\cal T$ in a more general scenario. 
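As a sketch of how (43) resolves the spatial ambiguity, the snippet below evaluates the cost on a dense grid of candidate angles instead of explicitly rooting the polynomial as in [31] (a deliberate simplification); the vectors follow the $\Delta=\{1,2\}$ example of (38), and the true DOA is illustrative:

```python
import numpy as np

theta_true = 20.0                                  # illustrative DOA (degrees)
z = np.exp(-1j * np.pi * np.sin(np.deg2rad(theta_true)))
# a_l^(Delta) and b_l^(Delta) from (41) for the array in (38), Delta = {1, 2}
a = {1: z ** np.array([0, 1, 4, 5]), 2: z ** np.array([0, 2, 4, 6])}
b = {1: z ** np.array([1, 2, 5, 6]), 2: z ** np.array([2, 4, 6, 8])}

def f(theta_deg):
    # Cost of (43) evaluated at a candidate angle
    zc = np.exp(-1j * np.pi * np.sin(np.deg2rad(theta_deg)))
    return sum(np.linalg.norm(a[d] * zc ** d - b[d]) ** 2 for d in (1, 2))

grid = np.linspace(-90.0, 90.0, 3601)              # 0.05 degree steps
theta_hat = grid[np.argmin([f(t) for t in grid])]
print(round(theta_hat, 2))  # 20.0
```

Because the shift-invariance set $\{1,2\}$ contains coprime integers, the cost has a single minimum on the unit circle; using only $\Delta_m=2$ would leave a grating-lobe ambiguity.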
The transmit subarrays can be organized in a non-uniform way. If the shift-invariance set $\Delta$ contains a pair of coprime integers, the problem of spatial ambiguity can be solved with no limitation on the transmit subarray structure. Hence, the structures of the transmit subarrays can be arbitrary but identical. An outline of the proposed DOA estimation method for TB MIMO radar with non-uniformly spaced arbitrary but identical subarrays is summarized in Algorithm 3. ###### Remarks: Multiple scales of shift-invariances can also be found in a Vandermonde matrix ${\bf K}$ [32]. A simple way to build ${\bf K}^{(sub)}$ is to concatenate the submatrices of ${\bf K}$ that, respectively, consist of the odd rows, the even rows, and all rows. Hence, Algorithm 3 is also applicable for TB MIMO radar with uniformly spaced transmit subarrays. Since the subarray manifold need not be a dense ULA, it is possible to place the transmit array on a larger lattice to obtain a higher spatial resolution. If some elements in a transmit subarray are broken, a useful solution is to disable the corresponding elements in the other subarrays to keep the manifolds identical. Moreover, we can select part of the elements in all subarrays to fulfill other purposes, such as communication in a joint radar-communication system [33]. These remarks can be extended to the case of a planar array. 
Algorithm 3 DOA Estimation for TB MIMO radar with Non-Uniformly Spaced Arbitrary but Identical Subarrays 0: Signal Tensor ${\cal{Z}}\in{\mathbb{C}^{S\times K\times N\times Q}}$ from (18) 0: Target DOA information $\left\\{{{\theta_{l}}}\right\\}_{l=1}^{L}$ 1: Construct a new matrix ${\bf K}^{(sub)}$ in (38), which can be divided into several Vandermonde submatrices and contains all transmit subarrays; 2: Update the transmit subarray steering matrix and reshape ${\cal{Z}}$ into a 3-order tensor ${\cal{T}}\in{\mathbb{C}^{{S^{\prime}}\times K\times{NQ}}}$; 3: Compute the SVD of the matrix ${{\bf{T}}^{(sub)}_{(3)}}={\bf U}^{(sub)}{\bf\Lambda}^{(sub)}{\bf V}^{(sub)H}$; 4: Estimate each Vandermonde submatrix in ${\bf K}^{(sub)}$ sequentially via (26)-(28); 5: Build two vectors ${\bf a}_{l}^{(\Delta_{m})}$ and ${\bf b}_{l}^{(\Delta_{m})}$ from (41) for each column of the estimated ${\bf K}^{(sub)}$; 6: Compute the roots of the polynomial function in (43) and select the root nearest to the unit circle as $\hat{z}_{l}$; 7: Estimate ${\hat{\theta}}_{l}=\arcsin\left(\frac{{j\ln({\hat{z}_{l}})}}{\pi}\right)$; 8: return $\left\\{{{\theta_{l}}}\right\\}_{l=1}^{L}$. ### IV-B Multiscale Sensor Array Another case where multiple scales of shift-invariances can be found is called a multiscale sensor array [34, 35, 36]. Generally, a URA can be regarded as a 2-level multiscale sensor array with different scales of shift-invariances. As shown in Fig. 1, the generation process of such a URA contains two steps. First, consider a single subarray composed of $M_{0}$ elements as the reference subarray. Let $I$ replica subarrays be placed uniformly across the $x$-axis, which form a larger subarray at a higher level. Then, $J$ copies of this higher-level subarray are organized uniformly across the $y$-axis. Combining them, a URA is constructed. Note that in this specific case, the $I$ subarrays at level-1 are non-overlapped, while their $J$ counterparts at level-2 are partly overlapped. 
From (16), it is clear that the transmit subarray steering matrices for subarrays at level-1 and level-2 are $\bf\Delta$ and $\bf H$, respectively. If the URA itself is repeated and spatially moved to other arbitrary but known locations, a new array that yields a larger spatial aperture is created. In this way, an $R$-level multiscale sensor array can be constituted. For any $S_{r}$ subarrays at level-$r$, $r=1,2,\cdots,R$, define ${\bf G}^{(r)}\in{\mathbb{C}^{{S_{r}}\times L}}$ as the transmit subarray steering matrix. The overall transmit subarray steering matrix is given by ${\bf G}={\bf G}^{(R)}\odot{\bf G}^{(R-1)}\odot\cdots\odot{\bf G}^{(1)}\triangleq\mathop{\odot}\limits_{r=1}^{R}{{\bf{G}}^{(r)}}.$ (44) Substituting (44) into (18), the tensor model for TB MIMO radar with an $R$-level multiscale sensor array at the transmit side is obtained. Note that this is an $(R+3)$-order tensor. Its reshaping and the use of Lemma 1 in this case are quite flexible. The DOA estimation can be conducted via Algorithm 2 or Algorithm 3 analogously. Take a cubic transmit array (the URA in Fig. 1 repeated $D$ times across the $z$-axis with coordinates $(0,0,m_{d})$, $d=1,\cdots,D$), for example. It is a 3-level multiscale sensor array, and we can immediately write the three transmit subarray steering matrices as ${\bf G}^{(1)}={\bf\Delta}$, ${\bf G}^{(2)}={\bf H}$ and ${\bf G}^{(3)}={\bf\Xi}$, respectively, where ${\bf\Xi}\triangleq{\left[{\bm{\tau}}_{1},{\bm{\tau}}_{2},\cdots,{\bm{\tau}}_{D}\right]^{T}}_{D\times L}$ and ${\bm{\tau}}_{d}\triangleq\left[e^{-j\pi(m_{d}-1)\cos\varphi_{1}},e^{-j\pi(m_{d}-1)\cos\varphi_{2}},\cdots,e^{-j\pi(m_{d}-1)\cos\varphi_{L}}\right]^{T}$. 
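The KR composition in (44) can be sketched as follows; the level sizes are illustrative, and random complex matrices stand in for the per-level steering matrices:

```python
import numpy as np

def khatri_rao(A, B):
    # Columnwise Kronecker product: column l is a_l ⊗ b_l
    return np.einsum('il,jl->ijl', A, B).reshape(A.shape[0] * B.shape[0], -1)

rng = np.random.default_rng(0)
L, S1, S2, S3 = 3, 4, 3, 2               # illustrative level sizes
G1, G2, G3 = (rng.standard_normal((s, L)) + 1j * rng.standard_normal((s, L))
              for s in (S1, S2, S3))
G = khatri_rao(G3, khatri_rao(G2, G1))    # overall steering matrix, as in (44)
print(G.shape)                            # (24, 3) = (S3*S2*S1, L)
# Column l of G equals g3_l ⊗ g2_l ⊗ g1_l
l = 1
print(np.allclose(G[:, l], np.kron(G3[:, l], np.kron(G2[:, l], G1[:, l]))))  # True
```

The overall matrix therefore has $\prod_r S_r$ rows, one per combination of subarray indices across the levels, matching the $(R+3)$-order tensor structure.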
The parameter identifiability for different reshaped 3-order tensors ${\cal T}\in{\mathbb{C}^{{I_{1}}\times{I_{2}}\times{I_{3}}}}$ varies, as determined by the uniqueness condition of tensor decomposition, i.e., $\min((I_{1}-1)I_{2},I_{3})\geq L$ (45) where $I_{1}$, $I_{2}$ and $I_{3}$ can be regarded as permutations of the set $\\{S_{1},S_{2},\cdots,S_{R},K,N,Q\\}$, and $I_{1}I_{2}I_{3}=KNQ\prod\limits_{r=1}^{R}{{S_{r}}}$. From (45), it is possible to estimate the target DOAs using only a single pulse. The transmit array with multiple scales of shift-invariances is exploited via the tensor reshape to compensate for the small number of snapshots. This property can also be used to distinguish coherent sources. An example of two targets with identical Doppler shift has been discussed in Section III-A. See [37] for more discussions about the partial identifiability of the tensor decomposition, where specific conditions for coherent or collocated sources are investigated. ## V Simulation Results In this section, we investigate the DOA estimation performance of the proposed method in terms of the root mean square error (RMSE) and the probability of resolution of closely spaced targets for TB MIMO radar. Throughout the simulations, there are $Q=50$ pulses in a single CPI. We assume that $L=3$ targets lie along the spatial directions determined by $\left\\{{{\theta_{l}}}\right\\}_{l=1}^{L}$ for the linear array and $\left\\{\left({{\theta_{l}}},{\varphi_{l}}\right)\right\\}_{l=1}^{L}$ for the planar array, and the normalized Doppler shifts are $f_{1}=-0.1,f_{2}=0.2$ and $f_{3}=0.2$. The number of Monte Carlo trials is $P=200$. The RCS of every target is drawn from a standard Gaussian distribution and obeys the Swerling I model. Note that the last two targets share an identical Doppler shift, which causes $\bf C$ to drop rank. The noise signals are assumed to be Gaussian, zero-mean and white both temporally and spatially. 
The $K$ orthogonal waveforms are ${S_{k}}(t)=\sqrt{\frac{1}{{{T}}}}{e^{j2\pi\frac{k}{{{T}}}t}},\,k=1,\cdots,K$. For both the linear and planar arrays, the tensor model in (18) is used and the TB matrix is pre-designed [6, 26]. For the linear array, we assume a transmit ULA with $S=8$ subarrays. Each transmit subarray has $M_{0}=10$ elements spaced at half the wavelength. The placement of the transmit subarrays can vary from the totally overlapped case to the non-overlapped case. The number of transmit elements is computed by $M={M_{0}}+{{\Delta}_{m}}(S-1)$. The receive array has $N=12$ elements, which are randomly selected from the transmit array. For the planar array, the reference transmit subarray is a $7\times 7$ URA. The number of subarrays is $S=6$, where $J=2$ and $I=3$. The distances between subarrays in both directions are fixed at the working wavelength, which means that $\Delta_{m_{x}}=2$ and $\Delta_{m_{y}}=2$. A number of $N=12$ elements in the transmit array are randomly chosen as the receive array. For comparison, the ESPRIT-based algorithm [8] that exploits the phase rotations between transmit subarrays and the U-ESPRIT algorithm [10] that utilizes the conjugate symmetric property of the array manifold are used as signal covariance matrix-based DOA estimation methods, while the conventional ALS algorithm [12, 16] that decomposes the factor matrices iteratively is utilized as the signal tensor decomposition-based DOA estimation method. The Cramer-Rao lower bound (CRLB) for MIMO radar is also provided. For target DOAs estimated via the factor matrix $\bf X$, if applicable, we use a postfix to distinguish them, e.g., ALS-sub (Proposed-sub) refers to the estimation result computed from $\bf X$, while ALS (Proposed) denotes the estimation result originating from $\bf G$ after tensor decomposition. ### V-A Example 1: RMSE and Probability of Resolution for Linear Array with Non-overlapped Subarrays Three targets are placed at $\theta_{l}=[-15^{\circ},5^{\circ},15^{\circ}]$. 
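The orthogonality of the waveforms $S_k(t)$ above over one pulse can be checked numerically; the pulse duration $T=1$ and the sample count are illustrative choices for this discrete approximation:

```python
import numpy as np

T, K = 1.0, 10                    # pulse duration and number of waveforms
t = np.linspace(0.0, T, 10000, endpoint=False)
S = np.stack([np.sqrt(1.0 / T) * np.exp(2j * np.pi * k * t / T)
              for k in range(1, K + 1)])
# Discrete approximation of the inner products ∫ S_k(t) S_k'(t)^* dt
gram = (S * (T / t.size)) @ S.conj().T
print(np.allclose(gram, np.eye(K), atol=1e-8))  # True: orthonormal waveforms
```

The Gram matrix is the identity, confirming that the complex exponentials with integer frequency spacing $1/T$ are orthonormal over the pulse.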
Consider the matricized form of $\cal Z$ in (18). The goal is to estimate $\theta_{l}$ from ${\bf Z}={\bf T}_{(3)}+{\tau}{\bf R}$, where ${\bf T}_{(3)}$ is given by (25) and ${\bf G}={\bf K}$. The SNR is measured as $\mathrm{SNR}\,[\mathrm{dB}]=10\log\left(\left\|{{{\bf{T}}_{(3)}}}\right\|_{F}^{2}/\left\|{\tau{\bf{R}}}\right\|_{F}^{2}\right)$. The RMSE is computed by $RMSE=\sqrt{\frac{1}{{2PL}}\sum\limits_{l=1}^{L}{\sum\limits_{p=1}^{P}{{{\left({{{\hat{\theta}}_{l}}(p)-{\theta_{l}}(p)}\right)}^{2}}}}}.$ As shown in Fig. 2a, the RMSE results decline gradually with the rise of SNR for all methods. The ESPRIT-based algorithm merely exploits the phase rotations between transmit subarrays, and therefore its performance is quite poor. The U-ESPRIT algorithm performs better since the number of snapshots is doubled. For the conventional ALS algorithm and our proposed method, target angular information can be obtained from both factor matrices $\bf K$ and $\bf X$, which are compared with each other to eliminate the potential grating lobes. The proposed method approaches the CRLB with a lower threshold as compared to the ALS method, since the Vandermonde structure of the factor matrix is exploited. Therefore, the proposed method performs better at low SNR. Note that the complexity of our proposed method is reduced significantly: it requires approximately the same number of flops as the ALS method uses in a single iteration. Also, the comparison of the estimation results between $\bf G$ and $\bf X$ shows a reasonable difference. This is mainly caused by the different apertures of the subarray and the whole transmit array. For the probability of resolution, we assume only two closely spaced targets located at $\theta_{l}=[-5^{\circ},-6^{\circ}]$. 
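The two metrics above translate directly into code; the helper names and the example inputs are illustrative:

```python
import numpy as np

def snr_db(T3, noise):
    # SNR[dB] = 10*log10(||T3||_F^2 / ||noise||_F^2)
    return 10.0 * np.log10(np.linalg.norm(T3) ** 2 / np.linalg.norm(noise) ** 2)

def rmse_deg(theta_hat, theta):
    # sqrt( (1/(2PL)) * sum_l sum_p (theta_hat - theta)^2 ), inputs shaped (P, L)
    err = np.asarray(theta_hat) - np.asarray(theta)
    return np.sqrt(np.mean(err ** 2) / 2.0)

print(round(snr_db(np.ones((4, 4)), 0.1 * np.ones((4, 4))), 9))  # 20.0
print(round(rmse_deg([[1.1, 2.0]], [[1.0, 2.0]]), 9))            # 0.05
```

Note the factor $1/2$ inside the square root of the RMSE definition, which is kept here to match the formula above.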
These two targets are considered to be resolved when $\left\|{{{\hat{\theta}}_{l}}-{\theta_{l}}}\right\|\leq\left\|{{\theta_{1}}-{\theta_{2}}}\right\|/2,l=1,2$. The Doppler shifts are both $f=0.2$ and the other parameters are the same as before. In Fig. 2b, the probability of resolution results for all methods tested are shown, and they are consistent with those in Fig. 2a. All methods achieve absolute resolution in the high SNR region, and the resolution declines with the decrease of SNR. The ESPRIT method presents the worst performance, while the U-ESPRIT performs slightly better. The results of the ALS-sub method and the Proposed-sub method are almost the same. A gap of approximately 3 dB SNR can be observed between the proposed method and the ALS method, which means that our proposed method enables the lowest SNR threshold. The performance of both accuracy and resolution for our proposed method surpasses the other methods since the shift-invariances between and within different transmit subarrays are fully exploited. Figure 2: DOA estimation performance for TB MIMO radar with uniformly spaced subarrays for linear array (a)-(b) and planar array (c)-(d), 200 trials; (a) RMSE versus SNR; (b) resolution versus SNR; (c) RMSE versus SNR; (d) resolution versus SNR. ### V-B Example 2: RMSE and Probability of Resolution for Planar Array with Partly Overlapped Subarrays In this example, three targets are placed at ${\theta_{l}}=[-40^{\circ},-30^{\circ},-20^{\circ}]$ and ${\varphi_{l}}=[25^{\circ},35^{\circ},45^{\circ}]$. The signal model ${\bf Z}={\bf T}_{(3)}+{\tau}{\bf R}$ is applied, where ${\bf T}_{(3)}$ is given by (25) with ${\bf G}={\bf H}\odot{\bf\Delta}$. The SNR is measured in the same way as for the linear array. 
The RMSE for the planar array is computed by $RMSE=\sqrt{\frac{1}{{2PL}}\sum\limits_{l=1}^{L}{\sum\limits_{p=1}^{P}{\left[{{{\left({{{\hat{\theta}}_{l}}(p)-{\theta_{l}}(p)}\right)}^{2}}+{{\left({{{\hat{\varphi}}_{l}}(p)-{\varphi_{l}}(p)}\right)}^{2}}}\right]}}}.$ In Fig. 2c, the RMSEs of ESPRIT, U-ESPRIT, ALS and the proposed method are given. The CRLB is also provided. The performance of the ESPRIT and U-ESPRIT methods is relatively poor because the shift-invariance of the received signal within a single subarray is ignored. It can be observed that the Proposed-sub and the ALS-sub successfully estimate the target DOAs via $\bf X$ in the case of a planar array, which proves the validity of (35). The results can be used to mitigate the spatial ambiguity in the subsequent estimations. Like their counterparts in the linear array case, the RMSEs of the proposed method and the ALS method are almost the same for SNR above 0 dB, while the performance of the proposed method at low SNR is better. The ALS method ignores the Vandermonde structure during tensor decomposition. Compared to (35), the DOA estimation result in ${\bf G}$ takes advantage of a larger aperture and therefore achieves a better RMSE performance. To evaluate the resolution performance, only two targets are retained, and the spatial directions are ${\theta_{l}}=[-10^{\circ},-11^{\circ}]$ and ${\varphi_{l}}=[15^{\circ},16^{\circ}]$. The resolution is considered successful if $\left\|{{{\hat{\theta}}_{l}}-{\theta_{l}}}\right\|\leq\left\|{{\theta_{1}}-{\theta_{2}}}\right\|/2,\left\|{{{\hat{\varphi}}_{l}}-{\varphi_{l}}}\right\|\leq\left\|{{\varphi_{1}}-{\varphi_{2}}}\right\|/2,l=1,2$. The target Doppler shifts are the same, given as $f=0.2$. The other parameters are unchanged. Fig. 2d shows the results for all methods with respect to the probability of resolution. 
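The 2-D resolution criterion above can be sketched as a small helper; the function name and the test values are illustrative:

```python
def resolved_2d(theta_hat, phi_hat, theta, phi):
    # Both targets must satisfy the half-separation criterion in
    # elevation and azimuth simultaneously
    dt, dp = abs(theta[0] - theta[1]) / 2.0, abs(phi[0] - phi[1]) / 2.0
    return all(abs(th - t) <= dt and abs(ph - p) <= dp
               for th, t, ph, p in zip(theta_hat, theta, phi_hat, phi))

print(resolved_2d([-10.2, -10.9], [15.1, 15.8], [-10, -11], [15, 16]))  # True
print(resolved_2d([-10.6, -10.9], [15.1, 15.8], [-10, -11], [15, 16]))  # False
```

For the one-degree separations used in this example, each estimate must land within half a degree of its target in both angles for the trial to count as resolved.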
The proposed method achieves the lowest SNR threshold, which benefits from the full exploitation of the shift-invariance and the Vandermonde structure during tensor decomposition. Note that the convergence of the ALS method is unstable and can be influenced by the tensor size. It can be observed that the resolution performance of the ALS method deteriorates compared with its counterpart in Fig. 2b. This implies that the proposed method is more robust for 2-D DOA estimation, since no iterations are required. ### V-C Example 3: RMSE Performance for Linear Array with Different ${\Delta}_{m}$ In this example, we mainly consider the RMSE performance when $\Delta_{m}$ changes from one to at most $M_{0}$, so that the aperture is increased gradually. The SNR is assumed to be 10 dB. All other parameters are the same as those in Example 1. Given the number of subarrays and the structure of a single subarray, the aperture of the overall transmit ULA rises with the increase of $\Delta_{m}$, while the number of elements shared by two adjacent subarrays declines. When $\Delta_{m}=0$, this model is identical to that for the conventional ESPRIT method [6, 26], and there is no transmit subarray. When $\Delta_{m}$ rises, the distance between the phase centers of two adjacent subarrays becomes larger than half the working wavelength and grating lobes are generated. The locations of these grating lobes are determined by (22), and they can be eliminated. Meanwhile, the transmit array aperture is increased and the DOA estimation performance should be improved. To investigate the improvement, the RMSEs of three targets are computed versus the rise of $\Delta_{m}$. It can be seen in Fig. 3a that the RMSE results decrease steadily with the increase of $\Delta_{m}$. The ESPRIT method and the U-ESPRIT method suffer from grating lobes, and the received signal within a single subarray is not fully exploited; hence, they perform poorly. 
The RMSEs of the Proposed-sub and the ALS-sub are almost unchanged since the estimation is only based on the subarray, which is fixed during the simulation. Meanwhile, the proposed method and the ALS method achieve better accuracy than their counterparts originating from $\bf X$ when $\Delta_{m}>3$. It can be noted in Fig. 2a that the ALS method and our proposed method converge when the SNR is above 10 dB. Consequently, the RMSEs of the proposed method and the ALS method are nearly coincident. To evaluate the RMSE performance versus $\Delta_{m_{x}}$ or $\Delta_{m_{y}}$ for a planar array, it is necessary to separately add a new subarray in one direction while keeping the array structure in the other direction unchanged. This can be fulfilled by constructing an L-shaped transmit array, where each element is replaced by a URA subarray. However, this analysis would be beyond the scope of this paper. In general, it can be concluded that the proposed method can estimate the target DOAs via the phase rotations between transmit subarrays. If the placement of two adjacent subarrays satisfies some conditions, e.g., $\Delta_{m}>3$ for the linear array, the RMSE performance is better than that computed by a single subarray. Note that the received signal of two adjacent subarrays can be obtained by spatial smoothing [31]; thus, a proper spatial smoothing of the received signal can improve the DOA estimation performance. Figure 3: RMSE results for TB MIMO radar with different subarray configurations; (a) RMSE versus ${\Delta}_{m}$; (b) generalized Vandermonde matrix; (c) elevation RMSE versus SNR; (d) azimuth RMSE versus SNR. ### V-D Example 4: Generalized Vandermonde Factor Matrix for Linear Array with $m_{s}=\\{1,2,3,5,6,7,9\\}$ Here we evaluate the proposed DOA estimation method for TB MIMO radar with non-uniformly spaced transmit subarrays. The transmit linear array has $S=7$ subarrays with $m_{s}=\\{1,2,3,5,6,7,9\\}$. Each subarray contains $M_{0}=10$ elements. 
The $N=12$ elements are randomly chosen from the transmit array to form the receive array. Three targets are placed at ${\theta_{l}}=[-5^{\circ},10^{\circ},18^{\circ}]$ with normalized Doppler shifts $f_{l}=[0.3,-0.15,-0.15]$. To simplify the signal model, each subarray is a ULA, although this structure is not exploited during the DOA estimation in this example. Equations (38)-(43) can be applied directly, since the subarray structure stays identical. Two different transmit arrays are introduced for comparison to illustrate the improved performance provided by constructing ${\bf K}^{(sub)}$. Both of them can be regarded as linear arrays with uniformly spaced subarrays ($\Delta_{m}=1$). The first one has $S=7$ subarrays, while the second one has $S=9$ subarrays to achieve the same aperture. The DOA estimation for these two transmit arrays can be conducted by Algorithm 1. Meanwhile, the conventional ALS method can be applied to decompose the factor matrix ${\bf K}^{(sub)}$, which is then used to estimate the target DOAs by solving (43). The generalized-ESPRIT (G-ESPRIT) method in [36] is also used for comparison. The CRLBs of the three different transmit arrays are also shown. From Fig. 3b, it can be observed that the formulation of ${\bf K}^{(sub)}$ exploits the multiple scales of shift-invariances in the generalized Vandermonde matrix. By solving (43), the grating lobes are eliminated efficiently. Hence, the structures of the transmit subarrays can be arbitrary but identical, which provides more flexibility for array design. The RMSE of the proposed method surpasses those of the G-ESPRIT and ALS methods. Also, the performance of the non-uniformly spaced transmit subarrays is better than that of the uniformly spaced transmit subarrays ($S=7$). This is expected since the aperture is increased due to sparsity. Compared to the fully spaced transmit subarray case ($S=9$), the performance of the proposed method deteriorates slightly. 
However, the fully spaced array can be extremely expensive if the array aperture is further increased. By using the generalized Vandermonde matrix, the proposed method enables sparsity in the transmit array, which achieves higher resolution with fewer elements. ### V-E Example 5: Multiscale Sensor Array with Arbitrary but Identical Subarrays In the final example, we illustrate the performance of the proposed DOA estimation method for TB MIMO radar with arbitrary but identical subarrays. Specifically, a planar array with $S=4\times 4$ subarrays is considered, whose phase centers form a uniform rectangular grid with a distance of half the working wavelength. For each subarray, $M_{0}=4$ elements are randomly placed in a circle centered on the phase center with a radius of a quarter of the wavelength. The structures of all subarrays are identical; hence, the transmit array can be regarded as a 3-level multiscale sensor array, and the transmit subarray steering matrix can be obtained from (44). The $N=12$ receive elements are randomly selected from the transmit array. Three targets are placed at ${\theta_{l}}=[-26^{\circ},-19^{\circ},-12^{\circ}]$ and ${\varphi_{l}}=[11^{\circ},21^{\circ},31^{\circ}]$. The other parameters are the same as those in Example 2. Since the subarray geometry is arbitrary, the DOA information can only be estimated from the phase rotations between the transmit subarrays. Alternatively, the transmit array interpolation technique [13] can be introduced to map the original transmit array into a $4\times 4$ URA to enable ESPRIT-like DOA estimation, which is referred to as Inter-TEV in Fig. 3c and Fig. 3d. It can be observed that, by carefully designing the mapping matrix, the RMSEs of the Inter-TEV method are better than those of the ESPRIT and U-ESPRIT methods for both elevation and azimuth estimation. However, the proposed method surpasses the other methods with a lower RMSE. 
This is because the shift-invariances between and within different transmit subarrays are fully used. ## VI Conclusion The problem of tensor decomposition with a Vandermonde factor matrix in application to DOA estimation for TB MIMO radar with arbitrary but identical transmit subarrays has been considered. A general 4-order tensor that can express the TB MIMO radar received signal in a variety of scenarios, e.g., linear and planar arrays, uniformly and non-uniformly spaced subarrays, regular and irregular subarrays, has been designed. The shift-invariances of the received signal between and within different transmit subarrays have been used to conduct DOA estimation. Specifically, a computationally efficient tensor decomposition method has been proposed to estimate the generators of the Vandermonde factor matrices, which can be used as a look-up table for finding target DOAs. The proposed method fully exploits the shift-invariance of the received signal between and within different subarrays, and it can be regarded as a generalized ESPRIT method. Compared with conventional signal tensor decomposition-based techniques, our proposed method takes advantage of the Vandermonde structure of the factor matrices, and it requires no iterations or any prior information about the tensor rank. The parameter identifiability of our tensor model has also been studied via the discussion of the uniqueness condition of tensor decomposition. Simulation results have verified that the proposed DOA estimation method has better accuracy and higher resolution as compared to existing techniques. ## Appendix A Proof of Lemma 1 First, let ${\bf A}^{(1)}\in{\mathbb{C}^{{I_{1}}\times L}}$ be a Vandermonde matrix with distinct generators; then $r\left({\bf A}^{(1)}\odot{\bf A}^{(2)}\right)=\min(I_{1}I_{2},L)$, since it is the KR product of a Vandermonde matrix and an arbitrary matrix [17]. 
Assuming $I_{3}\geq L,I_{1}I_{2}\geq L$, the following results $r\left({\bf A}^{(1)}\odot{\bf A}^{(2)}\right)=L$ and $r({\bf A}^{(3)})=L$ hold, since ${\bf A}^{(3)}$ has full column rank. The proof of Lemma 1 in this case is identical to that of Proposition III.2 in [17]. Next, let ${\bf A}^{(1)}={\bf B}\odot{\bf C}$, where ${\bf B}\in{\mathbb{C}^{{J}\times L}}$ and ${\bf C}\in{\mathbb{C}^{{I}\times L}}$ are both Vandermonde matrices with distinct generators. Consider the rank of matrix ${\bf A}^{(1)}\odot{\bf A}^{(2)}={\bf B}\odot{\bf C}\odot{\bf A}^{(2)}={\bf\Pi}\left({\bf B}\odot{\bf A}^{(2)}\odot{\bf C}\right)$, where ${\bf\Pi}$ is an exchange matrix. Again, the rank of ${\bf B}\odot{\bf A}^{(2)}$ is $\min(JI_{2},L)$ while $r\left({\bf B}\odot{\bf A}^{(2)}\odot{\bf C}\right)=\min(IJI_{2},L)$. Since ${\bf\Pi}$ is nonsingular, $r\left({\bf A}^{(1)}\odot{\bf A}^{(2)}\right)=L$. The mode-3 unfolding of $\cal Y$ is ${\bf Y}_{(3)}=\left({\bf A}^{(1)}\odot{\bf A}^{(2)}\right){\bf A}^{(3)T}$. The SVD of this matrix representation is denoted by ${\bf Y}_{(3)}={\bf U}{\bf\Lambda}{\bf V}^{H}$, where ${\bf{U}}\in{\mathbb{C}^{{I_{1}I_{2}}\times L}}$, ${\bf{\Lambda}}\in{\mathbb{C}^{L\times L}}$, and ${\bf{V}}\in{\mathbb{C}^{{I_{3}}\times L}}$. 
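This rank property is easy to check numerically; the sketch below uses illustrative dimensions where the Vandermonde factor alone is rank deficient, yet its KR product with a generic matrix recovers full column rank:

```python
import numpy as np

def khatri_rao(A, B):
    # Columnwise Kronecker product
    return np.einsum('il,jl->ijl', A, B).reshape(A.shape[0] * B.shape[0], -1)

rng = np.random.default_rng(1)
L, I1, I2 = 4, 3, 2
gen = np.exp(-1j * np.pi * np.sin(np.array([-0.9, -0.2, 0.4, 1.0])))  # distinct
A1 = gen[None, :] ** np.arange(I1)[:, None]   # Vandermonde, I1 x L with I1 < L
A2 = rng.standard_normal((I2, L)) + 1j * rng.standard_normal((I2, L))
print(np.linalg.matrix_rank(A1))                    # 3 = I1 < L
print(np.linalg.matrix_rank(khatri_rao(A1, A2)))    # 4 = min(I1*I2, L)
```

This is the generic behavior behind $r\left({\bf A}^{(1)}\odot{\bf A}^{(2)}\right)=\min(I_{1}I_{2},L)$ for a Vandermonde factor with distinct generators.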
Since $r\left({\bf A}^{(1)}\odot{\bf A}^{(2)}\right)=L$ and $r\left({\bf A}^{(3)}\right)=L$, there exists a nonsingular matrix ${\bf E}\in{\mathbb{C}^{{L}\times L}}$ such that ${\bf U}{\bf E}={\bf A}^{(1)}\odot{\bf A}^{(2)}.$ (46) The Vandermonde structure of both ${\bf B}$ and ${\bf C}$ can be exploited via $\displaystyle{{\bf{U}}_{\rm{2}}}{\bf{E}}={\bf{\underline{B}}}\odot{\bf{C}}\odot{\bf A}^{(2)}=\left({{\bf{\overline{B}}}\odot{\bf{C}}\odot{\bf A}^{(2)}}\right){{\bf{\Omega}}_{b}}={{\bf{U}}_{1}}{\bf{E}}{{\bf{\Omega}}_{b}}$ (47) $\displaystyle{{\bf{U}}_{\rm{4}}}{\bf{E}}={\bf{B}}\odot{\bf{\underline{C}}}\odot{\bf A}^{(2)}=\left({{\bf{B}}\odot{\bf{\overline{C}}}\odot{\bf A}^{(2)}}\right){{\bf{\Omega}}_{c}}={{\bf{U}}_{3}}{\bf{E}}{{\bf{\Omega}}_{c}}$ where ${{\bf{\Omega}}_{b}}=diag({\bm{\omega}}_{b})$ and ${{\bf{\Omega}}_{c}}=diag({\bm{\omega}}_{c})$, with ${\bm{\omega}}_{b}$ and ${\bm{\omega}}_{c}$ denoting the vectors of generators of $\bf B$ and $\bf C$, respectively. The submatrices ${{\bf{U}}_{\rm{1}}}$, ${{\bf{U}}_{\rm{2}}}$, ${{\bf{U}}_{\rm{3}}}$ and ${{\bf{U}}_{\rm{4}}}$ are formed by selecting rows of ${\bf{U}}$ according to the structure of the KR product, i.e., $\displaystyle{{\bf{U}}_{1}}{\rm{=}}\left[{{{\bf{I}}_{{I}K({J}-1)}},{{\bf{0}}_{{I}K({J}-1)\times{I}K}}}\right]{\bf{U}}$ (48) $\displaystyle{{\bf{U}}_{2}}{\rm{=}}\left[{{{\bf{0}}_{{I}K({J}-1)\times{I}K}},{{\bf{I}}_{{I}K({J}-1)}}}\right]{\bf{U}}$ $\displaystyle{{\bf{U}}_{3}}=\left({{{\bf{I}}_{{J}}}\otimes\left[{{{\bf{I}}_{K({I}-1)}},{{\bf{0}}_{K({I}-1)\times K}}}\right]}\right){\bf{U}}$ $\displaystyle{{\bf{U}}_{4}}=\left({{{\bf{I}}_{{J}}}\otimes\left[{{{\bf{0}}_{K({I}-1)\times K}},{{\bf{I}}_{K({I}-1)}}}\right]}\right){\bf{U}}.$ Note that $\bf E$, ${\bf\Omega}_{b}$ and ${\bf\Omega}_{c}$ are full rank. We have ${\bf{U}}_{1}^{\dagger}{{\bf{U}}_{2}}={\bf{E}}{{\bf{\Omega}}_{b}}{\bf E}^{-1}$ and ${\bf{U}}_{3}^{\dagger}{{\bf{U}}_{4}}={\bf{E}}{{\bf{\Omega}}_{c}}{\bf E}^{-1}$.
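In the noiseless case, this generator-extraction step, together with the subsequent recovery of ${\bf A}^{(2)}$ and ${\bf A}^{(3)}$ via (50) and (51), reduces to a few lines of linear algebra. The following NumPy sketch is purely illustrative: the dimensions and generators, and the trick of pairing the generators of $\bf B$ and $\bf C$ through the shared eigenvector matrix $\bf E$, are our own choices, not prescribed by the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
J, I, K, L = 4, 5, 3, 2                       # B: JxL, C: IxL, A^(2): KxL, L sources

def vander_cols(gens, rows):
    # Vandermonde matrix whose l-th column is [1, z_l, z_l^2, ...]^T
    return np.vander(gens, N=rows, increasing=True).T

def kr(X, Y):
    # column-wise Khatri-Rao product
    return np.einsum('ir,jr->ijr', X, Y).reshape(X.shape[0] * Y.shape[0], -1)

zb = np.exp(1j * np.array([0.4, -0.9]))       # distinct generators of B
zc = np.exp(1j * np.array([1.1, 0.3]))        # distinct generators of C
B, C = vander_cols(zb, J), vander_cols(zc, I)
A2 = rng.standard_normal((K, L)) + 1j * rng.standard_normal((K, L))
A3 = rng.standard_normal((L + 2, L)) + 1j * rng.standard_normal((L + 2, L))
A1 = kr(B, C)                                 # A^(1) = B (Khatri-Rao) C

Y3 = kr(A1, A2) @ A3.T                        # noise-free mode-3 unfolding
U = np.linalg.svd(Y3, full_matrices=False)[0][:, :L]   # basis of the column space

# Row selections of (48): rows of U are indexed by the triple (j, i, k)
idx = np.arange(J * I * K).reshape(J, I, K)
U1, U2 = U[idx[:-1].ravel()], U[idx[1:].ravel()]        # shift along j (B)
U3, U4 = U[idx[:, :-1].ravel()], U[idx[:, 1:].ravel()]  # shift along i (C)

# Generators = eigenvalues of the ESPRIT-like matrices U1^+ U2 and U3^+ U4;
# reusing the eigenvector matrix E of the first pencil pairs the generators.
wb, E = np.linalg.eig(np.linalg.pinv(U1) @ U2)
wc = np.diag(np.linalg.inv(E) @ np.linalg.pinv(U3) @ U4 @ E)

# Recover A^(2) per (49)-(50), up to the usual per-column scaling,
# then A^(3) by least squares, the pseudo-inverse form of (51).
UE = U @ E                                    # = A^(1) (KR) A^(2), columns rescaled
A1_hat = kr(vander_cols(wb, J), vander_cols(wc, I))
A2_hat = np.stack([(np.kron(A1_hat[:, l].conj(), np.eye(K)) @ UE[:, l])
                   / np.vdot(A1_hat[:, l], A1_hat[:, l]) for l in range(L)], axis=1)
G = kr(A1_hat, A2_hat)
A3_hat_T = np.linalg.pinv(G) @ Y3             # reconstructs Y3 exactly
```

The per-column scaling ambiguities of `A1_hat` and `A2_hat` cancel when the unfolding is reassembled, so `G @ A3_hat_T` reproduces `Y3` to machine precision; with noise, the SVD truncation to $L$ components provides the usual subspace denoising.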
Hence, the vectors ${\bm{\omega}}_{b}$ and ${\bm{\omega}}_{c}$ can be computed as the eigenvalues of ${\bf{U}}_{1}^{\dagger}{{\bf{U}}_{2}}$ and ${\bf{U}}_{3}^{\dagger}{{\bf{U}}_{4}}$, respectively, while $\bf E$ collects the corresponding eigenvectors. From the generators of $\bf B$ and $\bf C$, the first factor matrix ${\bf A}^{(1)}$ can be reconstructed. Meanwhile, it can be observed that $\left({\frac{{{\bm{\alpha}}_{l}^{(1)H}}}{{{\bm{\alpha}}_{l}^{(1)H}{{\bm{\alpha}}_{l}^{(1)}}}}\otimes{{\bf{I}}_{I_{2}}}}\right)\left({{{\bm{\alpha}}_{l}^{(1)}}\otimes{{\bm{\alpha}}_{l}^{(2)}}}\right)={{\bm{\alpha}}_{l}^{(2)}}.$ (49) Assuming that the columns of ${\bf A}^{(1)}$ have unit norm, ${\bm{\alpha}}_{l}^{(2)}$ can be written as ${\bm{\alpha}}_{l}^{(2)}=({\bm{\alpha}}_{l}^{(1)H}\otimes{\bf I}_{I_{2}}){\bf U}{\bf e}_{l},\quad l=1,2,\cdots,L$ (50) where ${\bf e}_{l}$ is the $l$-th column of $\bf E$. Given ${\bf A}^{(1)}$ and ${\bf A}^{(2)}$, the third factor matrix can be computed by $\displaystyle{\bf A}^{(3)T}={\left({\left({{{\bf{A}}^{(1)H}}{{\bf{A}}^{(1)}}}\right)*\left({{{\bf{A}}^{(2)H}}{{\bf{A}}^{(2)}}}\right)}\right)^{-1}}$ (51) $\displaystyle\times{\left({{{\bf{A}}^{(1)}}\odot{{\bf{A}}^{(2)}}}\right)^{H}}{{\bf{Y}}_{(3)}}.$ Therefore, the tensor decomposition of $\cal Y$ is generically unique when ${\bf A}^{(1)}$ is a Vandermonde matrix or the KR product of two Vandermonde matrices with distinct generators, and ${\bf A}^{(3)}$ has full column rank.

## Appendix B Proof of (13)

Given ${{\bf{y}}^{(q)}_{s}}$ in (12), $s=1,2,\cdots,I,I+1,\cdots,IJ$, concatenate every $I$ consecutive vectors to form $J$ matrices, each of dimension $KN\times I$.
These matrices are denoted by ${\bf\bar{Y}}^{(q)}_{j}$, and are given as ${\bf\bar{Y}}^{(q)}_{j}=\left[{\left({{\bf{W}}_{0}^{H}{{\bf{A}}_{0}}}\right)\odot{\bf{B}}}\right]{\left({{\bf{c}}_{q}^{T}\odot{\bf{h}}^{T}_{{j}}\odot{\bf{\Delta}}}\right)^{T}}+{{\bf{\bar{N}}}^{(q)}_{j}}$ (52) where ${{\bf{\bar{N}}}_{j}^{(q)}}\triangleq\left[{{{\bf{n}}_{(j-1)I+1}^{(q)}},{{\bf{n}}_{(j-1)I+2}^{(q)}},\cdots,{{\bf{n}}_{(jI)}^{(q)}}}\right]$. The noise-free version of (52) can be rewritten as ${\bf\bar{Y}}^{(q)}_{j}=\left[{\left({{\bf{W}}_{0}^{H}{{\bf{A}}_{0}}}\right)\odot{\bf{B}}}\right]{\bf{\Gamma}}_{q}{\left({{{\bf{h}}^{T}_{{j}}}\odot{\bf{\Delta}}}\right)^{T}}$ (53) where ${\bf{\Gamma}}_{q}=diag({\bf c}_{q})$. Since $\left[{\left({{\bf{W}}_{0}^{H}{{\bf{A}}_{0}}}\right)\odot{\bf{B}}}\right]{\bf{\Gamma}}_{q}$ is fixed, the concatenation of the $J$ matrices depends only on the concatenation of ${\left({{{\bf{h}}^{T}_{{j}}}\odot{\bf{\Delta}}}\right)^{T}}$. Define a matrix ${\bf\Theta}$ ${\bf{\Theta}}\triangleq{\left[{\begin{array}[]{*{20}{c}}{{\bf{h}}_{1}^{T}\odot{\bf{\Delta}}}\\\ {{\bf{h}}_{2}^{T}\odot{\bf{\Delta}}}\\\ \vdots\\\ {{\bf{h}}_{J}^{T}\odot{\bf{\Delta}}}\end{array}}\right]_{S\times L}}$ (54) such that ${\bf{\Theta}}$ performs this concatenation. From the definition of the KR product, it can be observed that ${\bf{\Theta}}={\bf H}\odot{\bf\Delta}$. Therefore, the concatenation of the ${\bf\bar{Y}}^{(q)}_{j}$ is given by $\displaystyle{\bf\bar{Y}}^{(q)}$ $\displaystyle=\left[{\left({{\bf{W}}_{0}^{H}{{\bf{A}}_{0}}}\right)\odot{\bf{B}}}\right]{\bf{\Gamma}}_{q}{\bf{\Theta}}^{T}$ (55) $\displaystyle=\left[{\left({{\bf{W}}_{0}^{H}{{\bf{A}}_{0}}}\right)\odot{\bf{B}}}\right]{\left({{{\bf{c}}_{q}^{T}}\odot{\bf{H}}\odot{\bf{\Delta}}}\right)^{T}}.$ Adding back the noise term then yields equation (13).

## References

* [1] A. M. Haimovich, R. S. Blum, and L. J. Cimini, “MIMO radar with widely separated antennas,” _IEEE Signal Process. Mag._ , vol. 25, no. 1, pp. 116–129, Jan. 2008. * [2] J.
Li and P. Stoica, “MIMO radar with colocated antennas,” _IEEE Signal Process. Mag._ , vol. 24, no. 5, pp. 106–114, Sep. 2007. * [3] A. Hassanien and S. A. Vorobyov, “Transmit energy focusing for DOA estimation in MIMO radar with colocated antennas,” _IEEE Trans. Signal Process._ , vol. 59, no. 6, pp. 2669–2682, Jun. 2011. * [4] A. Hassanien and S. A. Vorobyov, “Phased-MIMO radar: A tradeoff between phased-array and MIMO radars,” _IEEE Trans. Signal Process._ , vol. 58, no. 6, pp. 3137–3151, Jun. 2010. * [5] D. R. Fuhrmann, J. P. Browning, and M. Rangaswamy, “Signaling strategies for the hybrid MIMO phased-array radar,” _IEEE J. Sel. Topics Signal Process._ , vol. 4, no. 1, pp. 66–78, Feb. 2010. * [6] A. Khabbazibasmenj, A. Hassanien, S. A. Vorobyov, and M. W. Morency, “Efficient transmit beamspace design for search-free based DOA estimation in MIMO radar,” _IEEE Trans. Signal Process._ , vol. 62, no. 6, pp. 1490–1500, Mar. 2014. * [7] Z. Guo, X. Wang, and W. Heng, “Millimeter-wave channel estimation based on 2-D beamspace MUSIC method,” _IEEE Trans. Wireless Commun._ , vol. 16, no. 8, pp. 5384–5394, Aug. 2017. * [8] A. Hu, T. Lv, H. Gao, Z. Zhang, and S. Yang, “An ESPRIT-based approach for 2-D localization of incoherently distributed sources in massive MIMO systems,” _IEEE J. Sel. Topics Signal Process._ , vol. 8, no. 5, pp. 996–1011, Oct. 2014. * [9] N. Tayem and H. M. Kwon, “L-shape 2-dimensional arrival angle estimation with propagator method,” _IEEE Trans. Antennas Propag._ , vol. 53, no. 5, pp. 1622–1630, May 2005. * [10] M. D. Zoltowski, M. Haardt, and C. P. Mathews, “Closed-form 2-D angle estimation with rectangular arrays in element space or beamspace via unitary ESPRIT,” _IEEE Trans. Signal Process._ , vol. 44, no. 2, pp. 316–328, Feb. 1996. * [11] B. Xu, Y. Zhao, Z. Cheng, and H. Li, “A novel unitary PARAFAC method for DOD and DOA estimation in bistatic MIMO radar,” _Signal Processing_ , vol. 138, pp. 273 – 279, Sep. 2017. * [12] D. Nion and N. D. 
Sidiropoulos, “Tensor algebra and multidimensional harmonic retrieval in signal processing for MIMO radar,” _IEEE Trans. Signal Process._ , vol. 58, no. 11, pp. 5693–5705, Nov. 2010. * [13] M. Cao, S. A. Vorobyov, and A. Hassanien, “Transmit array interpolation for DOA estimation via tensor decomposition in 2-D MIMO radar,” _IEEE Trans. Signal Process._ , vol. 65, no. 19, pp. 5225–5239, Oct. 2017. * [14] S. D. Blunt and E. L. Mokole, “Overview of radar waveform diversity,” _IEEE Trans. Aerosp. Electron. Syst._ , vol. 31, no. 11, pp. 2–42, Nov. 2016. * [15] A. Hassanien, M. W. Morency, A. Khabbazibasmenj, S. A. Vorobyov, J. Park, and S. Kim, “Two-dimensional transmit beamforming for MIMO radar with sparse symmetric arrays,” in _Proc. IEEE Radar Conf._ , Ottawa, ON, Canada, Apr. 2013, pp. 1–6. * [16] N. D. Sidiropoulos, L. De Lathauwer _et al._ , “Tensor decomposition for signal processing and machine learning,” _IEEE Trans. Signal Process._ , vol. 65, no. 13, pp. 3551–3582, Jul. 2017. * [17] M. Sørensen and L. De Lathauwer, “Blind signal separation via tensor decomposition with Vandermonde factor: Canonical polyadic decomposition,” _IEEE Trans. Signal Process._ , vol. 61, no. 22, pp. 5507–5519, Nov. 2013. * [18] T. Jiang, N. D. Sidiropoulos, and J. M. F. ten Berge, “Almost-sure identifiability of multidimensional harmonic retrieval,” _IEEE Trans. Signal Process._ , vol. 49, no. 9, pp. 1849–1859, Sep. 2001. * [19] F. Xu, S. A. Vorobyov, and X. Yang, “Joint DOD and DOA estimation in slow-time MIMO radar via PARAFAC decomposition,” _IEEE Signal Process. Lett._ , vol. 27, pp. 1495–1499, Aug. 2020. * [20] A. Cichocki, D. Mandic, L. De Lathauwer, G. Zhou, Q. Zhao, C. Caiafa, and H. A. Phan, “Tensor decompositions for signal processing applications: From two-way to multiway component analysis,” _IEEE Signal Process. Mag._ , vol. 32, no. 2, pp. 145–163, Mar. 2015. * [21] F. Xu, X. Yang, and T.
Lan, “Search-free DOA estimation method based on tensor decomposition and polynomial rooting for transmit beamspace MIMO radar,” _arXiv:2010.03296_ , Oct. 2020. * [22] L. De Lathauwer, B. De Moor, and J. Vandewalle, “A multilinear singular value decomposition,” _SIAM J. Matrix Anal. Appl._ , vol. 21, no. 4, pp. 1253–1278, 2000. * [23] J. H. de M. Goulart, M. Boizard, R. Boyer, G. Favier, and P. Comon, “Tensor CP decomposition with structured factor matrices: Algorithms and performance,” _IEEE J. Sel. Topics Signal Process._ , vol. 10, no. 4, pp. 757–769, Jun. 2016. * [24] W. Wang, H. C. So, and A. Farina, “An overview on time/frequency modulated array processing,” _IEEE J. Sel. Topics Signal Process._ , vol. 11, no. 2, pp. 228–246, Mar. 2017. * [25] L. Lu, G. Y. Li, A. L. Swindlehurst, A. Ashikhmin, and R. Zhang, “An overview of massive MIMO: Benefits and challenges,” _IEEE J. Sel. Topics Signal Process._ , vol. 8, no. 5, pp. 742–758, Oct. 2014. * [26] M. W. Morency and S. A. Vorobyov, “Partially adaptive transmit beamforming for search free 2D DOA estimation in MIMO radar,” in _Proc. 23rd Eur. Signal Process. Conf._ , Nice, France, Aug. 2015, pp. 2631–2635. * [27] F. Xu and S. A. Vorobyov, “Constrained tensor decomposition for 2D DOA estimation in transmit beamspace MIMO radar with subarrays,” submitted to _Proc. 46th Int. Conf. Acoust., Speech, Signal Process. (ICASSP)_ , Jun. 2021. * [28] T. G. Kolda and B. W. Bader, “Tensor decompositions and applications,” _SIAM Rev._ , vol. 51, no. 3, pp. 455–500, 2009. * [29] R. Roy and T. Kailath, “ESPRIT-estimation of signal parameters via rotational invariance techniques,” _IEEE Trans. Acoust., Speech, Signal Process._ , vol. 37, no. 7, pp. 984–995, Jul. 1989. * [30] A. Phan, P. Tichavský, and A. Cichocki, “CANDECOMP/PARAFAC decomposition of high-order tensors through tensor reshaping,” _IEEE Trans. Signal Process._ , vol. 61, no. 19, pp. 4847–4860, Oct. 2013. * [31] M. Sørensen and L.
De Lathauwer, “Multiple invariance ESPRIT for nonuniform linear arrays: A coupled canonical polyadic decomposition approach,” _IEEE Trans. Signal Process._ , vol. 64, no. 14, pp. 3693–3704, Jul. 2016. * [32] A. L. Swindlehurst, B. Ottersten, R. Roy, and T. Kailath, “Multiple invariance ESPRIT,” _IEEE Trans. Signal Process._ , vol. 40, no. 4, pp. 867–881, Apr. 1992. * [33] K. V. Mishra, M. R. Bhavani Shankar, V. Koivunen, B. Ottersten, and S. A. Vorobyov, “Toward millimeter-wave joint radar communications: A signal processing perspective,” _IEEE Signal Process. Mag._ , vol. 36, no. 5, pp. 100–114, Sep. 2019. * [34] S. Miron, Y. Song, D. Brie, and K. T. Wong, “Multilinear direction finding for sensor-array with multiple scales of invariance,” _IEEE Trans. Aerosp. Electron. Syst._ , vol. 51, no. 3, pp. 2057–2070, Jul. 2015. * [35] M. D. Zoltowski and K. T. Wong, “Closed-form eigenstructure-based direction finding using arbitrary but identical subarrays on a sparse uniform cartesian array grid,” _IEEE Trans. Signal Process._ , vol. 48, no. 8, pp. 2205–2210, Aug. 2000. * [36] B. Liao and S. Chan, “Direction-of-arrival estimation in subarrays-based linear sparse arrays with gain/phase uncertainties,” _IEEE Trans. Aerosp. Electron. Syst._ , vol. 49, no. 4, pp. 2268–2280, Oct. 2013. * [37] X. Guo, S. Miron, D. Brie, and A. Stegeman, “Uni-mode and partial uniqueness conditions for CANDECOMP/PARAFAC of three-way arrays with linearly dependent loadings,” _SIAM J. Matrix Anal. Appl._ , vol. 33, no. 1, pp. 111–129, 2012.
# The Census of Exoplanets in Visual Binaries: population trends from a volume-limited Gaia DR2 and literature search

Clémence Fontanive Center for Space and Habitability, University of Bern, Bern, Switzerland Daniella Bardalez Gagliuffi American Museum of Natural History, New York, NY, USA

(Received November 2, 2020; Accepted January 28, 2021)

###### Abstract

We present results from an extensive search in the literature and Gaia DR2 for visual co-moving binary companions to stars hosting exoplanets and brown dwarfs within 200 pc. We found 218 planet hosts out of the 938 in our sample to be part of multiple-star systems, with 10 newly discovered binaries and 2 new tertiary stellar components. This represents an overall raw multiplicity rate of $23.2\pm 1.6\,\%$ for hosts to exoplanets across all spectral types, with multi-planet systems found to have a lower stellar duplicity frequency at the 2.2-$\sigma$ level. We found that more massive hosts are more often in binary configurations, and that planet-bearing stars in multiple systems are predominantly observed to be the most massive component of stellar binaries. Investigations of the multiplicity of planetary systems as a function of planet mass and separation revealed that giant planets with masses above 0.1 MJup are more frequently seen in stellar binaries than small sub-Jovian planets with a 3.6-$\sigma$ difference, a trend enhanced for the most massive ($>$7 MJup) short-period ($<$0.5 AU) planets and brown dwarf companions. Binarity was however found to have no significant effect on the demographics of low-mass planets ($<$0.1 MJup) or warm and cool gas giants ($>$0.5 AU). While stellar companion mass appears to have no impact on planet properties, binary separation seems to be an important factor in the resulting structure of planetary systems.
Stellar companions on separations $<$1000 AU can play a role in the formation or evolution of massive, close-in planets, while planets in wider binaries show similar properties to planets orbiting single stars. Finally, our analyses indicate that numerous stellar companions on separations smaller than 1–3 arcsec likely remain undiscovered to date. Continuous efforts to complete our knowledge of stellar multiplicity on separations of tens to hundreds of AU are essential to confirm the reported trends and further our understanding of the roles played by multiplicity on exoplanets.

exoplanets, multiplicity, visual, binaries, companions, formation, demographics, statistics ††journal: Frontiers in Astronomy and Space Sciences - Exoplanets: The Effect of Stellar Multiplicity on Exoplanetary Systems

## 1 Introduction

The architectures of stellar, sub-stellar, and planetary systems are relics of their formation and evolutionary processes. By studying the orbital parameters and configurations of hierarchical systems as an ensemble, we can in principle trace back to the formation mechanisms that originated them. Planet formation is a direct consequence of star formation, yet it can be severely influenced by the presence of a stellar companion. The existence of planets in orbit around one or both components of binary systems is a stringent probe of the planet formation process. Radial velocity measurements estimate that $18\pm 1\,\%$ of FGK stars host a giant planet within 20 AU (Cumming et al., 2008). About $44\,\%$ of FGK stars are found in multiple systems, with $33\,\%$ in binary systems, and $11\,\%$ in higher-order architectures (Raghavan et al., 2010). Hence roughly half of potential planet hosts are in multiple-star systems, suggesting that the fraction of giant planets orbiting a stellar component of a binary system is likely not negligible.
A number of campaigns have thus searched for planets in and around stellar binaries, including radial velocity programs (e.g., Konacki et al., 2009), transit discoveries (e.g., Doyle et al., 2011) and direct imaging surveys (e.g., Asensio-Torres et al., 2018; Hagelberg et al., 2020), leading to the detection of a number of circumstellar (orbiting one star) and circumbinary (orbiting two stars) planets. Despite these efforts, most exoplanet searches routinely exclude stars in binary or multiple systems to avoid systematic errors in planet detection, and the first systems of this type were identified serendipitously (see e.g., Patience et al., 2002; Mugrauer et al., 2006). The distinct demographics of the first planets discovered in binary star systems hinted at the possibility that binary companions could dramatically reorient the orbital configuration of planetary systems (Zucker & Mazeh, 2002). Approaching the question from the opposite end, numerous high-resolution imaging studies have also searched for stellar companions to known planetary systems, either to validate or refine the nature of identified planets (Everett et al., 2015; Furlan et al., 2017; Hirsch et al., 2017), or to purposely investigate the effect of stellar duplicity on planetary populations (Colton et al., 2021; Horch et al., 2014; Matson et al., 2018; Wang et al., 2014). Dedicated studies of circumstellar planets in binary systems rapidly revealed a lack of stellar companions within 20–50 AU (e.g., Bergfors et al., 2013; Kraus et al., 2016). Close stellar companions in this separation range are generally accepted to prevent planet formation, although early examples of giant planets in $<$20 AU binaries (Queloz et al., 2000; Hatzes et al., 2003) demonstrated that such systems do exist.
Consistent with the observed shortfall of planets in tight binaries, theoretical models predict that the presence of a very close binary companion can truncate a protoplanetary disk (Kraus et al., 2012; Pichardo et al., 2005), hence obstructing the formation of a planet by core accretion, or ejecting the planet in unstable systems (Kaib et al., 2013). Binary companions at large separations (beyond several hundreds to thousands of AU) from planet hosts, on the other hand, have been argued to have no impact on the formation and evolution of planets (Desidera & Barbieri, 2007; White & Ghez, 2001). Meanwhile, the effects of binary companions at intermediate (around $\sim$100–300 AU) separations are more debated. Such companions could truncate circumprimary disks by opening large gaps, hence redirecting the material to the primary stars’ circumstellar disks and leaving the secondaries with no or depleted disks (Artymowicz & Lubow, 1994; Bate & Bonnell, 1997), consistent with observations of binary systems among T Tauri stars (Jensen & Akeson, 2003). Theoretical simulations also showed that perturbations from secondary stars may assist the formation and evolution of giant planets by enhancing mass accretion and orbital migration rates in circumstellar disks (Kley, 2001). The Kozai-Lidov mechanism (Kozai, 1962; Lidov, 1962) could also play a role in the inward migration and final orbital properties of planets through secular interactions induced by an outer stellar companion on such separations. This process has indeed been invoked to explain the formation of hot Jupiters (Fabrycky & Tremaine, 2007; Winn et al., 2010). In this study, we present an overview of the current census of circumstellar exoplanets in visual binaries, with separations from tens of AU out to 20 000 AU, within a volume limited to 200 pc.
The goal of this compilation is to gather information on stellar multiplicity for a large sample of exoplanets, which will hopefully serve in future investigations, rather than to perform a detailed statistical analysis of these populations. In particular, this work extends previous such studies of exoplanets orbiting one component of a binary system to all stellar spectral types and all types of extra-solar planets and brown dwarf companions. In Section 2, we describe the construction of our exoplanet sample (Section 2.1), followed by a search for co-moving companions to exoplanet hosts (Section 2.2) in the literature and in Gaia. Section 3 presents our results, in which we explore differences between the demographics of planets in binaries and around single stars (Section 3.2), as well as potential trends in planet properties based on binary separation and mass, for the population of planets in multiple-star systems (Section 3.3). Section 4 discusses the completeness of our sample and the observed effects of stellar duplicity on various exoplanetary populations. Our conclusions are presented in Section 5.

## 2 Materials and Methods

We describe in this Section the construction of our studied exoplanet sample (Section 2.1) and the searches performed for wide binary companions to all selected planet hosts (Section 2.2). In the context of this work, we consider brown dwarfs and extra-solar planets orbiting stars as a unique population of sub-stellar companions. We thus make no distinction between companions below and above the deuterium burning limit (13 MJup), and will use the term sub-stellar companion to denote planetary and brown dwarf companions in general, unless otherwise specified. Similarly, double and multiple stellar systems will often be referred to as binaries throughout most of this work for conciseness.
Finally, given the exoplanet-oriented approach of this study, the term host will always refer to the planet-bearing star in a system, regardless of whether or not it is the higher-mass component of a multiple-star system.

### 2.1 Exoplanet Compilation

We gathered a sample of extra-solar planets and brown dwarfs from the NASA Exoplanet Archive (https://exoplanetarchive.ipac.caltech.edu), the Extrasolar Planets Encyclopaedia (http://exoplanet.eu; Schneider et al., 2011), the Exoplanet Orbit Database (http://exoplanets.org; Han et al., 2014) and the Open Exoplanet Catalogue (http://www.openexoplanetcatalogue.com). The data from these libraries were collected on June 23, 2020, and cross-matched to identify all systems with at least one planet or brown dwarf companion reported as confirmed in at least one of these databases. We gathered from these catalogs all available information about the sub-stellar companions and stellar hosts, and only kept systems with robust companion mass (or minimum mass) and semi-major axis measurements. We imposed a cut of 0.1 M⊙ on the minimum mass of the host, based on primary masses supplied in the considered databases, in order to focus our study on stellar hosts only. We also removed all circumbinary (P-type) systems, orbiting both stars from a binary system, as our study concentrates on circumstellar (S-type) planets and brown dwarfs, found around a single component of a binary system. We cross-matched the resulting sample with the Gaia Data Release 2 (DR2; Gaia Collaboration et al., 2016, 2018) catalog, obtaining positions, parallaxes, proper motions, Gaia magnitudes and effective temperatures for all hosts found in Gaia DR2. Astrometric information was taken from the SIMBAD Astronomical Database (Wenger et al., 2000) for the few targets that are not part of Gaia DR2 or do not have full astrometric solutions from the Gaia mission.
Based on the obtained stellar parallaxes, we restricted our sample to systems with parallax measurements larger than 5 mas, corresponding to a maximum distance of 200 pc for our volume-limited investigation. This cut allows us to focus on relatively nearby stars, thus limiting the range of probed inner working angles around different targets when searching for stellar companions, while keeping a sufficiently large sample for a statistically significant study. The final sample consists of 938 host stars, harboring a total of 1316 exoplanets and brown dwarfs, and contains 693 single-planet systems and 245 multi-planetary systems. Stellar hosts have masses ranging from 0.1 to 3.09 M⊙, with a median of 0.95 M⊙. Most primaries are along the main sequence, covering spectral types from B to M, with 171 giants or sub-giants and 8 white dwarfs. We show in Figure 1 the Gaia Hertzsprung-Russell diagrams for all primaries in our sample with Gaia DR2 parallaxes and G, BP, and RP magnitudes (926 stars), with the color scale indicating the host mass. Tables for the final samples are provided as supplementary material and are available online, with separate tables for the stellar hosts and sub-stellar companions.

Figure 1: Gaia color-magnitude diagrams of planet host stars, showing absolute G magnitudes against BP-RP colors (left) and G-RP (right). Symbols plotted with black rings represent planet hosts found to be part of multiple-star systems. The colorbar indicates the mass of each planet host, using a logarithmic scale. The gray background population shows the 200-pc volume-limited cleaned sample from Gaia DR2.

### 2.2 Binary Search

In this work, we focus on co-moving visual binaries or higher-order hierarchical stellar systems, that is, systems with two or more stars confirmed to be moving together in the sky. The co-moving nature of two gravitationally-bound objects can be determined in two ways.
The first approach is via proper motion (and parallax) measurements, where the components of a multiple system will show astrometric parameters consistent with one another. The second method consists of comparing images taken over a sufficiently long time baseline to demonstrate that two sources show the same displacement over time compared to fixed background objects. Both approaches require the two (or more) stars to be spatially resolved in imaging observations, and such visual binaries are thus typically widely-separated systems. We place an outer limit of 20 000 AU on the projected separation in our search for multiple systems. In the following sections, we describe the searches we performed for wide, co-moving visual companions to all stellar hosts from our gathered sample of exoplanetary systems. The full compilation of binary systems is provided as online supplementary material.

#### 2.2.1 Binaries in Surveys and the Literature

The catalogs used to compile our studied sample contain some information about stellar binarity. We complemented the multiplicity data from these databases with the Catalogue of Exoplanets in Binary Star Systems (https://www.univie.ac.at/adg/schwarz/multiple.html; Schwarz et al., 2016).
We added to this all systems from published surveys searching for visual stellar companions to circumprimary planetary systems (Adams et al., 2012, 2013; Bergfors et al., 2013; Bohn et al., 2020; Coker et al., 2018; Daemgen et al., 2009; Deacon et al., 2016; Dietrich & Ginski, 2018; Eggenberger et al., 2007, 2011; Faedi et al., 2013; Fontanive et al., 2019; Furlan et al., 2017; Ginski et al., 2012, 2016, 2020; Kraus et al., 2016; Lillo-Box et al., 2012; Lodieu et al., 2014; Luhman & Jayawardhana, 2002; Moutou et al., 2017; Mugrauer et al., 2006, 2007a, 2007b; Mugrauer & Ginski, 2015; Mugrauer & Neuhäuser, 2009; Mugrauer, 2019; Ngo et al., 2016, 2017; Patience et al., 2002; Raghavan et al., 2006; Southworth et al., 2020; Udry et al., 2004; Wang et al., 2014; Wöllert et al., 2015; Ziegler et al., 2018) or reviews of planets in binaries (Bonavita & Desidera, 2007, 2020; Desidera & Barbieri, 2007; Eggenberger & Udry, 2007; Eggenberger, 2010; Roell et al., 2012; Thebault & Haghighipour, 2015) that we could find, and finally any other serendipitous discovery we were aware of that may have been missing from the above compilations. In parallel, we cross-matched our host star sample with large-scale catalogs of stellar multiplicity like the Washington Double Star Catalog (WDS; Mason et al., 2001), the Catalog of Components of Double & Multiple stars (CCDM; Dommanget & Nys, 2002), the Tycho Double Star Catalogue (TDSC; Fabricius et al., 2001) and the Updated Multiple Star Catalog (MSC; Tokovinin, 2018), as well as surveys for wide stellar binaries conducted with direct imaging (Deacon et al., 2014; Janson et al., 2012, 2014, 2017; Raghavan et al., 2010; Tokovinin & Lépine, 2012; Tokovinin, 2014a, b; Ward-Duong et al., 2015; Winters et al., 2019). 
Each reported multiple system was then checked individually in the literature to ensure the S-type nature of the planets and brown dwarfs, and to confirm that the binary or multiple system was indeed visual (with resolved components) and astrometrically confirmed to be co-moving, either via consistent relative astrometry in multi-epoch observations or through similar kinematics (as opposed to optical binaries with a probabilistic bound nature from the chance of alignment). A total of 184 stars in our sample were mentioned in the considered surveys to have at least one companion satisfying these criteria (excluding the recent Gaia search performed by Mugrauer, 2019; see Section 2.2.2). For all identified systems, we gathered, when available, binary separations, companion masses, and companion spectral types.

#### 2.2.2 Companions in Gaia DR2

To complement the literature search performed above, we searched for bright companions in the Gaia DR2 catalog to all stars in our compilation. Using the collected positions, proper motions and parallaxes for the stellar hosts, we searched for Gaia sources within angular distances corresponding to separations of 20 000 AU from our primaries, and displaying consistent kinematics. Following the approach from Fontanive et al. (2019), we used thresholds of $20\,\%$ disparity in parallax, and offsets of $<20\,\%$ of the total proper motion in one direction and $<50\,\%$ in the other coordinate. These cuts account for the fact that the short-term astrometric measurements from Gaia DR2 may capture the reflex motion of binary systems, or may have spurious solutions for unresolved binaries (see Fontanive et al., 2019 for details). For systems part of young moving groups, other members of the same association may appear nearby in the sky and display similar proper motions and parallaxes, consistent with the average moving group kinematics.
To avoid the inclusion of unassociated close-by group members in our binary list, we checked that no more than one other astrometric match was found on angular separations up to 20 times the identified binary radius. We consider that a co-moving source within 20 000 AU projected separation is statistically unlikely to be an unrelated member of the same group if no other members are found within a 400-fold sky area. We thus regard such sources as bona fide bound companions for the purpose of this study. When one other match was found, we applied the same procedure centered on this outer source, with the same search radius, to establish whether other group members were found nearby, in which case all sources were taken to be unrelated moving group members. If no additional sources with consistent kinematics were found, we considered the outer source to be the tertiary component of a triple system. Finally, we checked that identified binary companions were different from the sub-stellar companions in our exoplanet list, as some young and bright brown dwarfs discovered with direct imaging on wide separations may be detected at the low-mass end of the Gaia DR2 completeness (Reylé, 2018). This analysis yielded 175 companions around 172 host stars. For all identified systems, we measured the binary separation from the respective Gaia DR2 positions of co-moving components, and collected Gaia photometry for the companions. The majority (139) of identified Gaia companions were already known from the literature (excluding findings from Mugrauer, 2019) and were included in our compilation from Section 2.2.1. Based on our literature findings, 19 of the detected Gaia companions were in fact tight binaries themselves, unresolved in Gaia. Mugrauer (2019) recently performed a very similar search for wide companions to exoplanet host stars in Gaia DR2, which presents a useful comparison survey to validate our approach.
We found that all binaries reported in that work and present in our sample list (121 systems) were also retrieved in our Gaia analysis. Of these, 23 systems and a tertiary companion to the WASP-11 system (unresolved in Gaia) had never been reported prior to that study. We nonetheless retrieved 51 additional Gaia systems missing from the Mugrauer (2019) compilation, which is likely due to the different target samples between that study and ours. Finally, 10 of our identified Gaia co-moving systems were not found to have been previously reported in the literature (up to early September 2020): CoRoT-7, HD 13167, HD 23472, HIP 73990, K2-228, L2 Pup, TOI-132, WASP-189, WASP-29, WASP-59. In addition, new tertiary components were discovered around HIP 65A and V 1298 Tau. These companions are presented in Table 1, with additional information about the systems provided within the full catalogs available online.

Table 1: New stellar companions to planet host stars identified in this work using the Gaia DR2 catalog. The new system components are marked in bold in the System column, which is only indicative of the overall system architecture and does not represent proposed naming conventions. Following the approach in the main tables, component A systematically denotes the planet host irrespective of the relative component masses, unless already named differently in the literature. Indices 1 in the stellar mass and spectral type columns refer to the planet hosts, and indices 2 refer to the considered stellar companions. This is a highlight of the full tables available as supplementary material, which provide additional information about these systems, together with the rest of our compilation.
System | Parallax [mas] | Separation [arcsec] | Separation [AU] | SpT1 | Mass1 [M⊙] | SpT2 | Mass2 [M⊙]
---|---|---|---|---|---|---|---
New binary companions | | | | | | |
CoRoT-7 Abc + B | 6.23 | 75.7 | 12160 | K0V | 0.93 | M4 | 0.23
HD 13167 Ab + B | 6.69 | 20.1 | 3001 | G3V | 1.35 | M4 | 0.21
HD 23472 Abc + B | 25.59 | 9.6 | 374 | K3.5V | 0.75 | M6 | 0.14
HIP 73990 Abc + B | 9.03 | 47.3 | 5234 | F2IV | 1.72 | M2 | 0.50
K2-228 Ab + B | 7.71 | 22.6 | 2933 | K6 | 0.71 | G3 | 1.08
L2 Pup Ab + B | 15.61 | 32.8 | 1998 | M5IIIe | 0.66 | M5 | 0.20
TOI-132 Ab + B | 6.08 | 19.6 | 3231 | G8V | 0.97 | M4 | 0.17
WASP-189 Ab + B | 10.00 | 9.4 | 942 | A4/5IV/V | 1.89 | M2 | 0.45
WASP-29 Ab + B | 11.39 | 125.2 | 10994 | K4V | 0.82 | M3 | 0.38
WASP-59 Ab + B | 8.60 | 81.8 | 9512 | K5V | 0.72 | K7 | 0.62
New tertiary companions | | | | | | |
HIP 65 (Ab + B) + C | 16.16 | 73.6 | 4557 | K4V | 0.78 | M7.5 | 0.11
V 1298 Tau Ab-e + (BC) | 9.21 | 117.0 | 14795 | K1 | 1.10 | K7 | 0.66

For all new Gaia systems, we checked whether the wide stellar companions were known (seemingly single) objects in SIMBAD, and gathered additional mass and spectral type information from the literature for these components when available. In addition, for all components known from the literature but missing from our Gaia binary list, we searched for these wide companions in Gaia DR2 in case these stars were in the catalog but had no astrometric solutions, and were thus not detectable as co-moving sources in our Gaia search. From these, 14 companions were recovered without Gaia DR2 astrometry, and available Gaia magnitudes and relative positions of components were added to our catalog for these additional companions.
#### 2.2.3 Properties of Stellar Companions

As a number of literature systems (mostly from WDS) and Gaia binaries had no existing stellar classification or measured mass, we estimated these characteristics for all identified Gaia components based on their positions in Gaia color-magnitude diagrams. Hertzsprung-Russell diagrams were made for all binary companions with measured magnitudes in the G, BP and RP bands, using the sources' parallaxes when available, and the astrometry of the associated planet hosts otherwise. From these, 6 sources populated the white dwarf part of the parameter space, and all were found in the Gaia DR2 white dwarf study by Gentile Fusillo et al. (2019). The white dwarf classifications and masses were thus taken from that work for these companions. All other companions appeared to fall along the main sequence. For these systems, we used the TESS Input Catalog (TIC; Stassun et al., 2018) to map the parameter space of the Gaia color-magnitude diagrams to stellar masses and spectral types, based on TIC stellar masses and queried SIMBAD spectral types for all sources from the catalog out to 200 pc. As clear and continuous trends in mass and spectral type were seen along the Gaia main sequences of the TIC sample (similar to our host sample in Figure 1), the masses and spectral types of wide companions could be inferred directly from their location along these Gaia main sequences. Quantities were interpolated using the mean mass and spectral type of the TIC sample in a box of size 0.2 mag centered on the companion's absolute magnitude and color, provided that at least 10 sources were found in that box. For each detected Gaia companion, masses (rounded to 0.01 M⊙) and spectral types (to 1 sub-type) were obtained from the TIC BP–RP and G–RP parameter spaces, and averaged for more robust final values.
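The box interpolation described above can be sketched as follows. All names and the toy TIC slice are illustrative; the actual analysis averages estimates from both the BP–RP and G–RP parameter spaces.

```python
import numpy as np

def mass_from_cmd(abs_g, color, tic_abs_g, tic_color, tic_mass,
                  box=0.2, min_sources=10):
    """Sketch of the color-magnitude interpolation: average the TIC masses
    of sources falling in a box of size `box` mag centered on the
    companion's absolute G magnitude and color, requiring at least
    `min_sources` matches. Returns None for sparsely populated boxes."""
    in_box = ((np.abs(tic_abs_g - abs_g) < box / 2)
              & (np.abs(tic_color - color) < box / 2))
    if np.count_nonzero(in_box) < min_sources:
        return None
    return round(float(np.mean(tic_mass[in_box])), 2)  # rounded to 0.01 Msun

# toy TIC slice: a dozen mid-M dwarfs clustered around (G, color) = (10.5, 2.8)
rng = np.random.default_rng(0)
g = 10.5 + rng.uniform(-0.05, 0.05, 12)
col = 2.8 + rng.uniform(-0.05, 0.05, 12)
mass = np.full(12, 0.30)
estimate = mass_from_cmd(10.5, 2.8, g, col, mass)   # -> 0.3
```

The minimum-source requirement guards against extrapolating in sparse corners of the diagram, where the mean would be dominated by a handful of possibly peculiar stars.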
For sources characterized in this way from their colors and magnitudes, the average offset and scatter between literature values and our Gaia-derived estimates were $+0.3\pm 1.5$ sub-types in spectral type, and $-0.01\pm 0.05$ M⊙ in mass (removing known unresolved sources). We also validated this method by applying it to our main sequence host star sample, and observed comparably negligible offsets with respect to the values collected in the planet-host catalog. For companions that fell outside the TIC main sequence due to unusual Gaia colors (14 objects), or for sources with no BP and RP magnitudes (16 objects), we used the median intersection of the absolute G magnitude with the TIC main sequences instead, assuming that these objects were single, main sequence stars, similar to the approach followed in the recent Gaia study by Mugrauer (2019). For companions classified from their absolute magnitudes alone, the average offset and scatter were $-0.4\pm 2.4$ sub-types in spectral type and $-0.2\pm 0.07$ M⊙ in mass for sources truly on the main sequence, with very similar results in mass to Mugrauer (2019) for overlapping systems. Larger offsets were seen for known white dwarf companions with no Gaia colors, which were hence assimilated to M dwarfs on the main sequence based on their absolute G-band magnitudes. Based on these results, we consider that our Gaia-inferred quantities are robust measurements for main sequence components. We adopt these as final values when no previous mass and spectral type estimates were available for the retrieved companions, and use existing literature estimates otherwise. The literature, Gaia, and final adopted values are all reported in our tables.

## 3 Results

Many surveys looking for extra-solar planets, in particular with the radial velocity method, are affected by or biased against binaries with separations $\leq 2$–6 arcsec, excluding known multiple systems in their target selection processes (see e.g., Eggenberger, 2010; Ngo et al., 2017).
As a result, measurements of multiplicity rates for exoplanetary systems are particularly challenging, as these selection biases are not trivial to quantify and correct for (see e.g., Moe & Kratter, 2019). This typically means that studies like ours, investigating the binarity of planetary systems discovered partly by such surveys, cannot be used to derive the true frequency of planets in binaries, nor to probe the existence of planets in very tight binaries. With this in mind, the goal of this work is thus to provide an overview of the current census of sub-stellar companions in wide visual binaries, rather than to achieve robust statistical results, and we therefore do not attempt to account for these biases here. Our studied sample of planetary systems was nonetheless compiled independently of the binary nature (known or unknown) of the systems. The gathered compilation should thus not be biased toward or against the existence of binary star systems beyond the intrinsic biases of exoplanet detection campaigns. Our Gaia search for wide companions is also homogeneous across the host star sample, limited only in inner working angle by the distance to each star, and by the inherent completeness of Gaia DR2. Our ability to recover stellar companions in Gaia is therefore, in principle, independent of the architecture of the planetary systems themselves. Similarly, the existing literature surveys considered spanned a large range of planet host stars and probed various distinct planetary populations. We thus consider that while strong biases remain in our binary list, which should be taken into account for detailed statistics and the derivation of absolute occurrence rates, our compilation does not strongly discriminate between different types of sub-stellar companions (i.e., planet or brown dwarf masses, separations or detection methods) in the potential to detect wide visual companions.
Our compilation can hence be used to search for raw trends within the obtained sample of binaries and highlight potential correlations between multiplicity and the properties of planetary and sub-stellar companions.

### 3.1 Overall Compilation

From the compilation gathered in Section 2.2.1, combining an extensive literature search and a Gaia DR2 investigation, 218 planet hosts were found to have at least one visual co-moving stellar companion: 186 host stars were found to be in binary systems, and 32 host stars in higher-order hierarchical systems. Of these, 4 binaries and 1 triple system are composed of 2 planet-hosting stars, organizing the 218 planet hosts into 213 unique multiple systems. The architecture of each planet-bearing multiple-star system is presented in Figure LABEL:f:architectures, which illustrates the relative separations and masses of sub-stellar and stellar companions within each system. Figure 2 presents the distribution of spectral types among the planet host stars, showing the relative numbers of single stars and planet hosts in multiple systems for each spectral type, compared to the sample of detected stellar companions. Companion masses range from 2.37 M⊙ down to the hydrogen-burning limit ($\sim$0.07 M⊙). Binary projected separations extend from 0.85 AU (GJ 682) out to our 20 000 AU search limit, with a median value of 678 AU. A total of 19 binaries were found in the range 10–50 AU, and 27 systems were identified at separations shorter than 100 AU.

Figure 2: Distribution of spectral types from B through M, plus white dwarfs, for single-star planet hosts (blue), multiple-star planet hosts (magenta) and stellar companions (yellow). Hatched sections of the plotted bars represent giants and sub-giants, with the remaining systems being on the main sequence. Each color-coded histogram is independently normalized so that the sum of the bars within each individual group adds up to 1.
Table 2 summarizes the numbers and raw fractions of sub-stellar companions in single and multiple-star systems, where binaries and triples are counted together as hierarchical multiple systems. We emphasize once again that no completeness or selection bias corrections have been performed, and the quoted numbers simply provide an overview of the collected catalogs. Of the 1316 exoplanets and brown dwarfs in our compilation, 286 were found to orbit one component of a multiple-star system ($21.7\pm 1.3\,\%$). In terms of individual planetary systems, 218 out of 938 planet host stars ($23.2\pm 1.6\,\%$) are part of multiple-star systems. Interestingly, a marginally higher fraction (2.2-$\sigma$) of single-planet systems are in hierarchical stellar systems ($25.1\pm 1.9\,\%$) compared to multi-planet systems ($18.0\pm 2.7\,\%$).

Table 2: Summary of results, providing the number of single and multiple (binary or higher-order) systems hosting various planetary sub-populations. Raw occurrence rates are given in parentheses, with uncertainties computed as Poisson noise.
Planetary population | Total | Single-star systems | Multiple-star systems
---|---|---|---
All Planets | 1316 | 1030 ($78.3\pm 2.4\,\%$) | 286 ($21.7\pm 1.3\,\%$)
All Planetary Systems | 938 | 720 ($76.8\pm 2.9\,\%$) | 218 ($23.2\pm 1.6\,\%$)
Single-Planet Systems | 693 | 519 ($74.9\pm 3.3\,\%$) | 174 ($25.1\pm 1.9\,\%$)
Multi-Planet Systems | 245 | 201 ($82.0\pm 5.8\,\%$) | 44 ($18.0\pm 2.7\,\%$)
M${}_{\mathrm{pl}}<0.1$ MJup | 554 | 462 ($83.4\pm 3.9\,\%$) | 92 ($16.6\pm 1.7\,\%$)
M${}_{\mathrm{pl}}=0.1-7$ MJup | 597 | 444 ($74.4\pm 3.5\,\%$) | 153 ($25.6\pm 2.1\,\%$)
M${}_{\mathrm{pl}}>7$ MJup | 165 | 124 ($75.2\pm 6.7\,\%$) | 41 ($24.8\pm 3.9\,\%$)
a${}_{\mathrm{pl}}<0.5$ AU | 766 | 603 ($78.7\pm 3.2\,\%$) | 163 ($21.3\pm 1.7\,\%$)
a${}_{\mathrm{pl}}=0.5-10$ AU | 476 | 365 ($76.7\pm 4.0\,\%$) | 111 ($23.3\pm 2.2\,\%$)
a${}_{\mathrm{pl}}>10$ AU | 74 | 62 ($83.8\pm 10.6\,\%$) | 12 ($16.2\pm 4.7\,\%$)
M${}_{\mathrm{pl}}\geq 0.1$ MJup, a${}_{\mathrm{pl}}\leq 10$ AU | 688 | 506 ($73.5\pm 3.3\,\%$) | 182 ($26.5\pm 2.0\,\%$)
M${}_{\mathrm{pl}}\geq 0.1$ MJup, a${}_{\mathrm{pl}}\leq 0.5$ AU | 236 | 164 ($69.5\pm 5.4\,\%$) | 72 ($30.5\pm 3.6\,\%$)
M${}_{\mathrm{pl}}\geq 7$ MJup, a${}_{\mathrm{pl}}\leq 10$ AU | 106 | 73 ($68.9\pm 8.1\,\%$) | 33 ($31.1\pm 5.4\,\%$)
M${}_{\mathrm{pl}}\geq 7$ MJup, a${}_{\mathrm{pl}}\leq 0.5$ AU | 28 | 19 ($67.9\pm 15.6\,\%$) | 9 ($32.1\pm 10.7\,\%$)

### 3.2 Multiplicity as a Function of Planet Properties

In this section, we explore the multiplicity of our planet host star sample as a function of planetary mass and separation. Unfortunately, other orbital elements (eccentricity, inclination) are not available for the full exoplanet sample. Investigations involving these parameters would thus be limited to planetary systems detected with specific methods and are not explored here.
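The raw occurrence rates and uncertainties quoted in Table 2 follow directly from Poisson counting statistics, where a count $N$ carries an uncertainty of $\sqrt{N}$. A minimal sketch:

```python
import math

def raw_fraction(n_sub, n_total):
    """Raw occurrence rate (in percent) with a Poisson counting
    uncertainty of sqrt(N) on the sub-sample count, as in Table 2."""
    return 100.0 * n_sub / n_total, 100.0 * math.sqrt(n_sub) / n_total

# 286 of the 1316 sub-stellar companions orbit a component of a multiple system:
frac, err = raw_fraction(286, 1316)   # -> 21.7 +/- 1.3 %
```

Applying the same function to, e.g., the 218 of 938 planet hosts in multiple systems reproduces the quoted $23.2\pm 1.6\,\%$.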
In Figure 3, we show the masses and semi-major axes of all planets and brown dwarfs in our compilation, with systems found to be in visual stellar binaries marked in magenta, and apparently single stars in blue.

Figure 3: Planet mass against semi-major axis for all sub-stellar companions in our exoplanet compilation. Planets identified to be part of multiple-star systems are shown in magenta, while planets orbiting single stars are plotted in blue. The dashed lines divide the parameter space into several bins detailed in the text.

Some previously-known trends associated with specific sub-populations of planets are visually apparent in Figure 3. One such feature is the lack of blue scatter points (single-star systems) for sub-stellar companions with semi-major axes in the range $\sim$0.01–0.10 AU (orbital periods of $\sim$0.5–10 days around a Sun-like star) and masses larger than $\sim$3 MJup. This part of the parameter space, representing massive hot Jupiters and brown dwarfs, is entirely filled with multiple-star systems (magenta scatter points), consistent with early observations that these planets and brown dwarfs are almost exclusively observed in binary stars (Zucker & Mazeh, 2002). A second notable feature of Figure 3 is the small group of brown dwarfs with even shorter orbital separations ($<$0.01 AU) identified around single stars (top left corner). These sub-stellar companions are all found to orbit white dwarfs, and correspond to most of the white dwarf hosts in our compilation. Such extreme systems are thought to result from the considerable mass loss stars undergo as they become white dwarfs. This post-main sequence process drastically changes the star-planet mass ratios, thus altering the dynamics and stability of brown dwarfs and planets, in particular in multi-planet systems (e.g., Maldonado et al., 2020).
In order to investigate the effect of stellar multiplicity as a function of sub-stellar companion mass and separation, we divide the planetary parameter space into 3 bins in semi-major axis (apl) and 3 bins in mass (Mpl), delimited by the dashed lines in Figure 3. We chose arbitrary limits of 0.5 and 10 AU in semi-major axis, and 0.1 and 7 MJup in mass. The boundary at 0.5 AU corresponds to the observed dearth between two distinct peaks in the distribution of exoplanet orbital periods, representing the pile-up of hot planets, and the bulk population near the snow line ($\sim$1–3 AU), respectively (Udry et al., 2003). The 10-AU threshold corresponds roughly to the outer detection limit for the radial velocity method, and only massive, directly-imaged companions are typically identified beyond 10 AU. The 0.1-MJup mass bound was adopted as the lower limit for the mass of Jovian planets (Mordasini, 2018), while 7 MJup was taken as the median transition between core accretion and gravitational instability giant planets (4–10 MJup; Schlaufman, 2018), a limit also advocated by Moe & Kratter (2019) (see also Santos et al., 2017). Table 2 reports the relative numbers of sub-stellar companions in single and binary systems in each planetary semi-major axis and mass bin. Stars harboring low-mass, sub-Jovian planets (M${}_{\mathrm{pl}}<0.1$ MJup) appear to have a substantially lower stellar binary rate, with $16.6\pm 1.7\,\%$ of such planets being found in multiple-star systems. This compares to $25.5\pm 1.8\,\%$ for higher-mass planets and brown dwarfs, with a 3.6-$\sigma$ difference in raw multiplicity frequency between planetary and sub-stellar companions below and above 0.1 MJup. A similar trend is seen with planet orbital distance, where sub-stellar companions with a${}_{\mathrm{pl}}>10$-AU are less frequently found in stellar binaries, although the smaller number of such planetary companions reduces the significance of this tendency. 
This effect is most likely the result of an enhanced bias against the existence of wide binaries within 20 000 AU for systems with sub-stellar companions at large orbital distances. Indeed, the presence of a planet or brown dwarf precludes finding a binary companion at separations comparable to, or marginally larger than, the sub-stellar companion's semi-major axis, and binaries with separations of hundreds to thousands of AU are thus dynamically impossible for a sizable fraction of these planetary systems. Given these results, we also report values at the end of Table 2 focusing exclusively on the close-in (a${}_{\mathrm{pl}}<10$ AU and $<$0.5 AU) giant planet and brown dwarf populations. While the lower number of systems associated with these subsets again decreases the significance of the observed trends, raw multiplicity rates seem to increase up to around $30\,\%$ for the very shortest-separation and most massive sub-stellar companions. We also note that the vast majority of sub-Jovian planets, with masses below $0.1$ MJup, are found in orbits with semi-major axes shorter than 0.5 AU. To better understand these tendencies and the effect of multiplicity on planet and brown dwarf properties, we explore the distributions of sub-stellar companions around single and binary stars in the various mass and separation bins considered. In Figure 4, we show kernel density estimates (KDEs) of the distributions of planet semi-major axis (left panels) and mass (right panels), for the different regions of the parameter space described above. Planets and brown dwarfs in single-star systems are shown in blue, and those in hierarchical stellar systems in magenta. We use KDE bandwidths of 0.3 in all cases, and consider that such estimates of the probability density functions should provide good insights into potential underlying trends.
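A fixed-bandwidth Gaussian KDE of the kind used for Figure 4 can be sketched as follows. This assumes, as a guess, that the bandwidth of 0.3 applies in log10 of the semi-major axis in AU; the sample values and function name are hypothetical, and the original plotting tooling is not specified in the text.

```python
import numpy as np

def gaussian_kde_fixed_bw(samples, grid, bw=0.3):
    """Gaussian kernel density estimate with a fixed bandwidth `bw`,
    evaluated on `grid`. Each sample contributes a unit-area Gaussian,
    so the returned density integrates to 1."""
    z = (grid[:, None] - samples[None, :]) / bw
    kernels = np.exp(-0.5 * z**2) / (bw * np.sqrt(2.0 * np.pi))
    return kernels.mean(axis=1)

# hypothetical log10 semi-major axes [AU] for two sub-samples
single = np.log10(np.array([0.05, 0.09, 1.1, 2.0, 2.8]))
binary = np.log10(np.array([0.02, 0.04, 0.07, 1.5]))
grid = np.linspace(-4.0, 2.0, 600)
kde_single = gaussian_kde_fixed_bw(single, grid)
kde_binary = gaussian_kde_fixed_bw(binary, grid)
```

Fixing the bandwidth (rather than using a rule-of-thumb scaling per sample) keeps the smoothing identical between the single-star and binary sub-samples, so differences between the curves reflect the data rather than the estimator.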
Figure 4: KDEs of planet properties comparing planets in binaries (magenta) and planets around single stars (blue), with the full planetary population shown as dotted black lines. Panel A shows the distribution of planetary semi-major axis, divided between massive giant planets and brown dwarfs (top), lower-mass giants (middle) and sub-Jovian planets (bottom), following the cuts in parameter space shown in Figure 3. Panel B shows the distribution of planetary mass for close-in planets (top), intermediate-separation planets (middle) and wide-orbit giant planets (bottom).

In terms of planet semi-major axis (Figure 4A), multiplicity appears to have no effect on the orbital separation of sub-Jovian planets (M${}_{\mathrm{pl}}<0.1$ MJup), illustrated by the perfectly consistent distributions for single and binary hosts in the bottom panel, both showing the same narrow peak in the semi-major axis distribution around 0.1 AU. As we enter the giant planet regime (M${}_{\mathrm{pl}}=0.1$–7 MJup; middle panel), the bulk of the planetary population shifts to separations of 1–3 AU, with a secondary peak at tighter separations (a${}_{\mathrm{pl}}<0.1$ AU). The relative density of planets in this secondary sub-population seems to be marginally higher for binary-star systems. Looking at the most massive giant planets and brown dwarf companions (M${}_{\mathrm{pl}}>7$ MJup; top panel), a number of new features emerge in the plotted KDEs. While the core of this exoplanet population still lies at separations of a few AU, comparable to the lower-mass Jovian planets, a strong over-density of closer-in planets and brown dwarfs (a${}_{\mathrm{pl}}\sim 0.01$–0.1 AU) is seen among the sample of multiple-star systems (magenta), corresponding to the population of massive, small-separation sub-stellar companions in binaries highlighted previously in Figure 3.
The minor peak at even tighter separations around single hosts corresponds to the sample of extremely short-period brown dwarfs found around white dwarfs discussed previously. At larger orbital distances, the directly-imaged population is subdued in the binary-star sample relative to closer-in planets and brown dwarfs, due to the effect explained above for systems with wide sub-stellar companions. Regarding the distribution of planet masses (Figure 4B), stellar binarity again seems to have no significant effect on the resulting masses of giant planets and brown dwarfs with separations larger than 0.5 AU (middle and bottom panels). At small semi-major axes (top panel), two sub-populations are observed, composed of the sub-Jovian planets with masses below 0.1 MJup forming the primary peak in the mass distribution, and a broader secondary population of giant planets and brown dwarfs. Again, we observe a relative over-abundance of binaries among the more massive planetary population at small semi-major axes, consistent with the findings deduced from our analysis as a function of planet orbital separation, and with the values reported in Table 2.

### 3.3 Planet Properties as a Function of Binary Properties

Based on our results from Section 3.2, suggesting that stellar multiplicity impacts the existence or properties of Jovian giant planets and brown dwarfs (M${}_{\mathrm{pl}}>0.1$ MJup) on semi-major axes within 0.5 AU, we further investigate the properties of these sub-stellar companions as a function of binary properties, and the statistical significance of these results. We will not look in more detail at other planetary systems, as the previous analyses revealed no significant effect of binarity on those planetary populations. We assess the effect of binary separation by comparing the distribution of properties for close-in giant planets and brown dwarfs in binaries as a function of the orbital distance to outer stellar companions.
Based on the size of this subset (66 sub-stellar companions), we arbitrarily define ranges of $<$250 AU, 250–1000 AU and $>$1000 AU in binary separation $\rho_{\mathrm{bin}}$, dividing this sample into roughly evenly-populated bins with 22, 24 and 20 systems, respectively. For hierarchical triple systems in which the planetary host star is in an inner tight binary, we only consider the close binary companion, as the outer tertiary component is unlikely to have a significant effect on the planetary system compared to the nearby stellar component. For triple systems with a planet host star widely separated from a closer binary, we count this outer binary as a single companion, using the mean separation between the planet host and the distant sub-system. Individual binary systems may be counted more than once, however, if several sub-stellar companions with masses larger than 0.1 MJup are found within 0.5 AU of the same star. Figure 5 shows KDEs of the planet semi-major axes (Figure 5A) and masses (Figure 5B), comparing planets and brown dwarfs in the short (yellow), intermediate (magenta) and wide (blue) binary separation ranges to those around single stars (dashed black line). Despite the small sample size available for this restricted planetary population, clear trends are visible in these figures. In particular, the subset of sub-stellar companions in extremely widely-separated binaries ($\rho_{\mathrm{bin}}>1000$ AU) shows very similar distributions in planetary semi-major axis and mass to planets and brown dwarfs found in single-star systems. In contrast, sub-stellar companions found in tighter stellar binary systems appear to have smaller semi-major axes and higher masses. The previously-noted overabundance of massive, close-in giant planets and brown dwarfs in binaries is hence primarily found in $<$1000 AU binary systems.
We highlight, in particular, that of the 9 massive (M${}_{\mathrm{pl}}>7$ MJup), close-in (a${}_{\mathrm{pl}}<0.5$ AU) giant planets and brown dwarfs found in binaries, 8 are in binaries with separations $<$1000 AU, of which 6 have binary separations $<$250 AU. While these rare sub-stellar companions only represent $\sim$2$\,\%$ of the full exoplanet sample, these systems make up about $10\,\%$ of the 64 binaries with separations under 250 AU identified in the full catalog of planet hosts. We further assess the significance of these results by performing two-sided Kolmogorov-Smirnov tests comparing each sub-population of planets in binaries to the sample of planets around single stars (dashed black lines). We are thus testing the null hypothesis that the samples are drawn from the same distribution, and use a threshold of 0.05 on the resulting p-values. We found that the null hypothesis could be rejected for the distributions of planet masses and semi-major axes in short and intermediate-separation binaries ($\rho_{\mathrm{bin}}<1000$ AU; yellow and magenta curves), but not for sub-stellar companions in very wide binaries (blue), confirming that the above findings are statistically significant (p-values of 0.027 and 0.0003 for the planet semi-major axes in short and intermediate-separation binaries, respectively, compared to 0.842 for wider binaries; p-values of 0.005 and 0.013 for the planet masses in short and intermediate-separation binaries, and 0.470 for wide binaries). This result further suggests that close and intermediate-separation ($<1000$ AU) binary companions have strong effects on the final semi-major axes of massive planets and brown dwarfs, whereas planetary systems in very wide ($>1000$ AU) binaries are more likely to evolve as independent stars.
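The hypothesis test used here can be reproduced with SciPy's two-sample Kolmogorov-Smirnov implementation. This is a sketch with illustrative names and synthetic samples, not the actual data:

```python
import numpy as np
from scipy.stats import ks_2samp

def differs_from_single_hosts(planets_in_binaries, planets_single, alpha=0.05):
    """Two-sided two-sample KS test: reject the null hypothesis that the
    two planet samples share a parent distribution when the p-value falls
    below `alpha` (0.05, as in the text). Returns (reject_null, p_value)."""
    stat, p_value = ks_2samp(planets_in_binaries, planets_single)
    return bool(p_value < alpha), p_value

# identical samples can never be told apart ...
same = np.linspace(0.01, 0.5, 30)
reject_same, _ = differs_from_single_hosts(same, same)
# ... while fully disjoint semi-major axis samples clearly differ
reject_disjoint, _ = differs_from_single_hosts(same, same + 10.0)
```

The KS test is well suited here because it is non-parametric and sensitive to both location and shape differences, at the cost of reduced power for the small sub-samples involved.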
Figure 5: Planet properties as a function of binary separation ($\rho_{\mathrm{bin}}$) for all planets with masses above 0.1 MJup and semi-major axes within 0.5 AU, corresponding to the binary systems plotted in the magenta distribution in Figure 9. KDEs of planetary semi-major axis are shown in panel A, and distributions of planet masses are shown in panel B. The dashed black lines show the distributions for planets in the same mass and semi-major axis ranges found to be orbiting single stars.

Figure 6: Same as Figure 5, but dividing the sample of binary stars by companion mass (Mc). For triple systems with a tight binary on a wide separation from the planet host, the total mass of the outer sub-system is considered. The dashed black lines show the distributions for planets in the same mass and semi-major axis ranges found to be orbiting single stars.

We also investigate potential trends of planet and brown dwarf properties as a function of binary companion mass, Mc. As for the binary separation, we divide the available sample into bins of $<$0.3 M⊙, 0.3–0.6 M⊙ and $>$0.6 M⊙. Triple systems are treated similarly to the previous analysis, using the total mass of the outer components in the case of tight binaries on wider separations from the planet hosts. Figure 6 shows the resulting distributions of planet semi-major axis (Figure 6A) and mass (Figure 6B) for the various stellar companion mass bins, together with the overall distributions of single-star planetary systems (dashed black line). Unlike Figure 5, no clear trend is observed with binary companion mass. The only marginal tendency is a rather comparable distribution between the planet orbital distances of single-star systems and the binaries with the most massive companions (yellow).
Kolmogorov-Smirnov tests performed on these sub-samples confirmed that the planet separation distribution was statistically different from the single-star planetary population for binary systems with companion masses below 0.6 M⊙ (p-values of 0.001 and 0.021 for binary companions in the intermediate and low mass bins, respectively; p-value of 0.859 for high-mass binary companions). However, this effect is mostly due to the fact that most stellar companions in the high-mass bin are in fact very distant, two-component companions from triple systems, thus increasing the adopted companion mass, and correspond for the most part to the systems with separations $>$1000 AU that were found to match the single-star planetary population. Kolmogorov-Smirnov tests could not reject the null hypothesis when comparing the masses of planets and brown dwarfs in the various types of binaries to single-star systems, nor was any evidence found that sub-stellar companions in binaries with different stellar companion masses come from distinct populations (p-values $>$ 0.15 in all cases). Overall, the excess of smaller-separation and higher-mass giant planets and brown dwarfs in binaries appears to be distributed across the different binary mass bins defined, with no robust trend with stellar companion mass.

## 4 Discussion

In this work, we performed analyses of planetary populations as a function of multiplicity over all spectral types for hosts of exoplanets and brown dwarf companions. This section similarly presents discussions of our results across all types of stars, without distinguishing between massive stars, Sun-like stars and M dwarfs, or main sequence and evolved stars (sub-giants, giants or white dwarfs), unless explicitly stated otherwise. We note however that only 28 of our stellar hosts (out of 938, i.e. $<3\,\%$) are massive BA stars or white dwarfs, of which only 5 A stars were found to be in multiple systems (i.e. $\sim$2$\,\%$ of the binary sample).
Excluding these systems would thus make little difference to the observed results and trends. While giants and sub-giants represent a more substantial fraction of the sample of host stars ($\sim$25$\,\%$ of the FGK hosts), a sizable number of our host stars have no luminosity class (giant/sub-giant vs. main sequence) in the spectral types gathered from the considered exoplanet catalogs or SIMBAD (e.g. numerous Kepler/TESS/WASP targets). We are therefore not able to strictly discuss main sequence stars separately, and our conclusions include a range of stellar masses and a mixture of stellar evolutionary stages.

### 4.1 Stellar Mass Function and Multiplicity

Figure 2 shows the distribution of spectral types in our planet host sample, divided between those identified in visual binaries or multiples (magenta) and seemingly single stars (blue), and compared to the identified stellar companions (yellow). Absolute numbers are provided at the top of each bar. In addition, each color-coded histogram is normalized so that the bars in a given color add up to 1, i.e. the height of each bar on the y-axis gives the relative contribution of that spectral type to the full considered sub-sample. Comparing the single and binary stars among our planet hosts, the subset of binary hosts contains a larger relative fraction of massive A, F and G stars, with a smaller contribution from lower-mass K and M dwarfs, as demonstrated by the turnover in the relative heights of the magenta and blue bars from G to K spectral types. This is consistent with the well-known trend of decreasing binary rate with decreasing stellar mass, dropping from $\sim$70$\,\%$ for B and A stars (Kouwenhoven et al., 2007) to around $50\,\%$ for Sun-like stars (Raghavan et al., 2010), and about 30$\,\%$ for M dwarfs (Janson et al., 2012; Winters et al., 2019; Ward-Duong et al., 2015).
While our survey results were not corrected for incompleteness and additional binaries may be missing from our compilation (see Section 4.2), our ability to retrieve wide stellar companions for our sample can be assumed to be rather independent of the host spectral types. With raw binary fractions of $37.8\pm 6.5\,\%$, $24.3\pm 2.6\,\%$, $22.4\pm 2.7\,\%$ and $15.7\pm 3.1\,\%$ for F, G, K and M stars, respectively, these results thus suggest that the population of planet-bearing stars is representative of the relative multiplicity output of stellar formation across the stellar spectral sequence. However, without robust completeness corrections, we are not able to determine whether the differences between our observed raw fractions and overall stellar multiplicity rates are due to missing binaries in our samples or to the fact that stars hosting planets and brown dwarfs are truly less commonly found in binary-star systems. The distribution of spectral types from companions, on the other hand, peaks strongly towards low-mass M dwarfs (yellow), which represent over 65$\,\%$ of our sample of stellar companions. In fact, this resembles closely the stellar initial mass function, with M dwarfs being the most abundant types of stars (Chabrier, 2003; Bochanski et al., 2010). This indicates that planet hosts in multiple systems are more often the most massive component of stellar binaries. The feature is partly due to a selection effect, as lower-mass stars are often too faint to be included in target samples for exoplanet campaigns (Eggenberger, 2010). Nonetheless, although Earth to Neptune-sized planets are more abundant around M dwarfs (Mulders et al., 2015), giant planet formation is thought to be more efficient around more massive stars (Mordasini, 2018), and giant planets are indeed observed to be more frequent around higher-mass stars (Bonfils et al., 2013; Vigan et al., 2020). 
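The raw fractions quoted above can be cross-checked with simple counting statistics. Below is a minimal sketch assuming a plain binomial standard error, $\sqrt{p(1-p)/N}$ (the paper's quoted uncertainties may be derived with a different estimator), applied to the overall tally of 218 multiple systems among 938 hosts given in the summary:

```python
import math

def raw_fraction(n_binary, n_total):
    """Raw multiplicity fraction with a simple binomial standard error
    sqrt(p(1-p)/N); only an approximation to the paper's quoted errors."""
    p = n_binary / n_total
    return p, math.sqrt(p * (1.0 - p) / n_total)

# 218 of the 938 planet hosts were identified in multiple-star systems.
p, err = raw_fraction(218, 938)
print(f"{100 * p:.1f} +/- {100 * err:.1f} %")  # 23.2 +/- 1.4 %
```

Note that this simple estimate (±1.4$\,\%$) is slightly tighter than the quoted ±1.6$\,\%$, so the paper's uncertainties presumably come from a somewhat more conservative estimator.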
Given that binary systems seem to preferentially host giant planets based on our results, it is not surprising that most planet hosts in multiple systems would be the most massive stellar component in these hierarchical systems. ### 4.2 Completeness and Survey Limitations As mentioned previously, the (in)completeness of our multiplicity search was not accounted for in the results presented in Section 3, as corrections of observational biases are beyond the scope of this work. We may nonetheless look at the properties of our detected systems to understand what biases might lie in our gathered sample. Figure 7 shows the angular separation and Gaia G-band magnitude difference for every visual companion, relative to the planet host star it is bound to. Blue circles represent companions successfully retrieved in Gaia DR2. Binary components which are themselves known to be unresolved binaries are marked with black rings. Magenta triangles correspond to companions known from the literature but undetected in Gaia. As a $\Delta G$ magnitude difference is unavailable for these systems, the plotted magnitudes correspond to contrasts in various visual or infrared filters, and thus correspond to lower limits compared to the expected magnitude difference values in the G-band. Our observed recoverability for binaries is consistent with the estimated Gaia completeness to close binaries (Ziegler et al., 2018): near equal-brightness binaries ($\Delta G<2$ mag) are consistently retrieved from separations of 1 arcsec (dashed line), binaries down to around $\Delta G=6$ mag are typically recovered at separations of $\sim$3 arcsec (dotted line), and wider systems are subject to Gaia DR2 completeness down to the limiting magnitude of $G\sim 21$ mag of the Gaia DR2 survey. Figure 7: Completeness of the Gaia DR2 binary search showing G-band magnitude differences against angular separations for all companions retrieved in Gaia (blue circles). 
Detected companions known to be themselves close binaries unresolved in Gaia are marked with black circles. Known companions not recovered in the Gaia DR2 catalog are shown as the magenta triangles, with plotted magnitude differences corresponding to lower limits in the Gaia G-band. The dashed and dotted gray lines show inner working angles of 1 arcsec and 3 arcsec, respectively.

Figure 8: Physical projected separations against distance for all identified stellar companions, with the same symbols and color-codes as in Figure 7, highlighting the completeness of our Gaia binary search as a function of distance to the Sun. The dashed and dotted gray lines show inner working angles of 1 arcsec and 3 arcsec, respectively.

A significant number of known tight binaries with angular separation $<$1 arcsec (magenta triangles) were not recovered in Gaia, and are only known thanks to high angular-resolution imaging campaigns. As only a small fraction of our host stars have been targeted by such dedicated imaging programs, these results indicate that additional unresolved sub-arcsecond systems may still be hidden among our exoplanet host sample. In particular, the 27 binaries in our sample with projected separations $<$100 AU have a median angular separation of 0.7 arcsec, and such systems are therefore for the most part not recoverable in Gaia. Studies like Kraus et al. (2016) or Furlan et al. (2017) have identified numerous optical candidate companions to Kepler host stars at small angular separations, but additional observational epochs are required to confirm or refute the bound nature of most of these candidates. A shortfall of close binaries ($<$50–100 AU) among planet hosts has been widely reported in observational surveys (Bergfors et al., 2013; Bonavita & Desidera, 2020; Kraus et al., 2016; Moe & Kratter, 2019; Roell et al., 2012; Wang et al., 2014).
This feature is generally attributed to a hindrance of planet formation in very tight binaries, and is also predicted in theoretical models (Thebault & Haghighipour, 2015). However, our survey is highly incomplete out to separations of hundreds of AU and thus cannot be used to probe this feature. Indeed, the resolving limit of $\sim$1–3 arcsec in our Gaia search corresponds to projected separations of 200–600 AU for the most distant stars in our study (200 pc). This effect is illustrated in Figure 8, which plots the physical projected separation of all identified binaries as a function of distance from the Sun. Detection limits corresponding to inner working angles of 1 arcsec and 3 arcsec are marked with dashed and dotted lines, respectively. The figure clearly demonstrates that the range of probed binary separations is strongly affected by the distance to each star. Our compilation is only sensitive in Gaia to binary separations below 100 AU for targets out to 30 pc ($\sim 20\,\%$ of the sample), and only data from heterogeneous high-angular resolution programs have allowed the detection of such systems beyond 100 pc.

### 4.3 Impacts of Multiplicity on Exoplanets

#### 4.3.1 No Influence on Low-Mass Planets

We found that small planets with masses below 0.1 MJup have a significantly lower raw binary rate ($16.6\pm 1.7\,\%$) than more massive Jovian planets ($25.5\pm 1.8\,\%$ for planets above 0.1 MJup throughout the brown dwarf mass range), an offset with a 3.6-$\sigma$ significance. While these numbers certainly suffer from inherent and observational biases as discussed in Section 4.2, it is reasonable to assume that these biases do not affect hosts to different types of planets differently. Indeed, the transit and radial velocity surveys that yield the detection of these planets are partially subject to the same inherent selection biases as campaigns discovering more massive planets with the same methods.
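The quoted 3.6-$\sigma$ offset between these two raw fractions can be recovered by combining their uncertainties in quadrature; a minimal sketch, assuming independent Gaussian errors on each fraction:

```python
import math

def offset_sigma(p1, e1, p2, e2):
    """Significance of the difference between two measured fractions,
    with independent Gaussian uncertainties added in quadrature."""
    return abs(p1 - p2) / math.hypot(e1, e2)

# Raw binary rates quoted in the text: sub-Jovian vs. giant-planet hosts.
sigma = offset_sigma(0.166, 0.017, 0.255, 0.018)
print(f"{sigma:.1f} sigma")  # 3.6 sigma
```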
As a result, we consider that the observed trend of lower multiplicity fraction for sub-Jovian planets is a real feature. Furthermore, terrestrial and Neptunian planets are often found in tightly-packed multiple-planet systems (Mayor et al., 2011). The fact that such planets are less frequently seen in hierarchical star systems is thus also consistent with the observation that multi-planet systems are less commonly found in stellar binaries. These results, however, may be a direct consequence of the lower binary frequency of M dwarfs compared to more massive stars. Since low-mass M stars host $\sim$2–3 times more close, small planets than Sun-like stars (Howard et al., 2012; Mulders et al., 2015), and rarely harbor giant planets (Bonfils et al., 2013), the majority of small sub-Jovian planets and high-order multi-planet systems are found around M dwarfs. The intrinsically lower stellar multiplicity rate of M dwarfs could hence be responsible, at least partly, for the observed trends. Nonetheless, Moe & Kratter (2019) found that the biases from stellar companions against the detection of planets are higher for F and G stars than M dwarfs. This trend is rooted in the suppression of planet formation in close binaries and in bright stellar companions preventing transit detections. This suggests that the observed differences in raw binary fractions between sub-Jovian and giant planet systems would likely be increased after accounting for these biases, and we conclude that low-mass planets and tightly-packed systems with multiple small planets are truly less commonly found in hierarchical stellar systems.

#### 4.3.2 The Excess of Massive Close-in Planets and Brown Dwarfs in Binaries

The substantial prevalence of short-orbit massive planets and brown dwarfs around members of binary stars was first noted by Zucker & Mazeh (2002), and later confirmed by numerous observational studies (Eggenberger et al., 2004; Desidera & Barbieri, 2007; Mugrauer et al., 2007b).
More recently, the Friends of Hot Jupiters survey reported an enhancement of binary frequency for stars hosting hot Jupiters, with a binary rate 3 times higher than that for field stars over the separation range 50–2000 AU (Ngo et al., 2016). Fontanive et al. (2019) established the continuity of this trend to the most massive giant planets and brown dwarfs ($>$7 MJup) found within $\sim$1 AU, constraining the binary frequency of such systems to be around $80\,\%$ between 20 and 10 000 AU, a result further validated statistically in Moe & Kratter (2019). Results from these studies demonstrate that stellar companions play an important role in the formation and/or evolution of these rare planetary systems. These findings also suggest that the influence of binary companions is strengthened for higher-mass close-in exoplanets and sub-stellar companions, and that this effect may be magnified for sub-stellar companions on even tighter orbits. While the work presented here did not allow us to place any such frequency constraints, the intrinsic tendencies with planet mass and separation observed in previous studies are confirmed in our compilation. Indeed, we observed a larger relative fraction of Jovian planets in binaries within 0.5 AU than for the bulk of the Jovian planet population around the snow line ($\sim$1–5 AU). This relative frequency was found to further increase when focusing exclusively on the most massive planets and brown dwarfs. These trends suggest that stellar multiplicity affects the orbital separation of massive giant planets. The presence of an outer wide companion would hence allow the inner sub-stellar companion to reach closer-in semi-major axes than planets of similar masses orbiting single stars, into an orbital separation regime where essentially no planets around single stars are observed (Fontanive et al., 2019). The influence from outer stellar companions shows a possible dependence on binary separations.
Stellar companions on separations of the order of thousands of AU seem to have no significant effect on the demographics of planetary systems, with similar distributions observed between the masses and semi-major axes of planets and brown dwarfs in such binaries and around single stars. In contrast, the most massive, close-in giant planets and brown dwarfs in binaries, in the most extreme planetary configurations, are all in rather tight binaries, with separations of tens to a few hundreds of AU compared to a mean of $\sim$600 AU for the full binary sample (likely a direct consequence of uncorrected incompleteness biases as discussed in Sections 4.2 and 4.3.4). This is consistent with the observed peak in binary separation from Fontanive et al. (2019) for such systems ($\sim$250 AU), and further supports the idea that additional binaries may remain undiscovered in our probed sample on this separation range. On the other hand, no robust dependence of binary influence on stellar companion mass was seen in our results. This is not surprising since the gravitational pull from a companion scales with M${}_{\mathrm{c}}/\rho_{\mathrm{bin}}^{2}$ (where M${}_{\mathrm{c}}$ is the companion mass and $\rho_{\mathrm{bin}}$ the binary separation). As the companion masses span a range of about one order of magnitude, compared to over three orders of magnitude for the separation (which is then squared), binary separation is thus expected from physical arguments to have a larger impact on the circumstellar planetary system.

#### 4.3.3 Very Close Binaries in Triple Systems

The vast majority of main sequence spectroscopic binaries are known to be the inner binaries of hierarchical triple systems. Tokovinin et al. (2006) demonstrated that 96% of binaries with orbital periods below $\sim$3 days have tertiary stellar companions.
The occurrence of outer components for these systems is found to steadily decrease with inner binary period, falling to a 34% rate of triple systems for spectroscopic binaries with periods of 12–30 days. The excess of tertiary companions has been argued (Fabrycky & Tremaine, 2007; Naoz & Fabrycky, 2014) to allow for the migration of the inner companions via Kozai-Lidov oscillations in misaligned triples (Kozai, 1962; Lidov, 1962). Alternatively, these close binary companions have been suggested to form via disk fragmentation and migration within the circumstellar disk of the primary star (Moe & Kratter, 2018). The substantial mass required to form and drive inward such massive inner companions can simultaneously form additional tertiary companions, leading to such systems often being in triple-star configurations. These outer components could then allow for more extreme migrations of the inner companions, leading to the observed negative correlation between inner binary period and triple architecture frequency (Moe & Kratter, 2019). Fontanive et al. (2019) studied hosts to close giant planets and brown dwarf companions with masses of 7–60 MJup, inferring a tertiary companion fraction comparably high to that of the spectroscopic binaries from Tokovinin et al. (2006). This population of sub-stellar companions corresponds to the most massive, short-separation systems found to be predominantly in hierarchical stellar structures in this work. Moe & Kratter (2019) further confirmed this excess of triple occurrence rate to be a real, statistically significant feature, as well as to be measurably higher than for genuine hot Jupiters ($<$4 MJup) surveyed by Ngo et al. (2016). The similar demographics between these brown dwarf desert systems and stellar spectroscopic binaries argue for a common origin for the inner companions from Tokovinin et al. (2006) and Fontanive et al.
(2019), indicating that these inner giant planet and brown dwarf companions extend the population of triple stellar systems to sub-stellar masses for the secondary components of the inner binaries.

Figure 9: Binary separation distribution comparing the full sample of planet-hosting binaries (thick blue line) to stellar binaries in the Solar neighborhood (dashed black line) from Raghavan et al. (2010). The sample of planet-bearing multiples is further divided between systems hosting a giant planet of mass $>$0.1 MJup within 0.5 AU (magenta) and all other systems (yellow). Multi-planet systems with planets falling into the two planetary categories are counted towards the close-in giant planet subset. Planet host stars in close binaries with an outer tertiary companion are plotted as the inner binary only. Triple systems composed of a planet host and a wide, tighter binary are counted as a binary system using the mean separation to the distant sub-system. The dashed vertical gray lines show the projected separations probed for the closest 20$\,\%$, 50$\,\%$ and 100$\,\%$ of our sample for angular separations of 3 arcsec.

#### 4.3.4 The Effect of Binary Separation

In Figure 9, we compare the distributions in projected separation of the planet-hosting wide binaries (solid yellow line) gathered in this work, and the Solar-type field binaries (dashed black line) from Raghavan et al. (2010). For multi-planet systems with planets or brown dwarfs falling into the two planetary categories considered (55 Cancri, HD 38529 A, Upsilon Andromedae A, WASP-8), we count the binaries only once, towards the close-in giant planet subset. Triple systems are accounted for in the same way as in Section 3.3. The binary separations of the planet-bearing systems appear to peak at significantly larger values, with a peak around 600 AU compared to $\sim$50 AU for field binaries.
Field binaries also show a much broader distribution, with a log-width of 1.70 compared to 0.75 for planet-hosting multiples. A Kolmogorov-Smirnov test confirms that the raw observed distributions are indeed statistically different, with a p-value for the null hypothesis that they are drawn from the same distribution of $<10^{-5}$. These differences are primarily due to the incompleteness of our compilation on short binary separations, as discussed in Section 4.2. Furthermore, the field binaries from Raghavan et al. (2010) include unresolved companions detected by spectroscopic techniques or proper motion accelerations. Such systems are not detectable with visual detection methods, and a number of such tighter binaries could remain undetected in our studied sample. The dashed lines in Figure 9 show the projected separations probed for the closest 20$\,\%$, 50$\,\%$ and 100$\,\%$ of our sample with angular separations of 3 arcsec, our adopted completeness limit for Gaia. Our observed peak in the separation distribution (600 AU) roughly coincides with our inner completeness limit for the full sample (see Section 4.2). This strongly suggests that a number of undiscovered binaries with separations of tens to hundreds of AU may still lie in our sample. For example, the planet host stars DMPP-3 A and HD 59686 from our exoplanet compilation were both found, through significant radial velocity trends, to have close stellar companions at 1.22 AU (Barnes et al., 2020) and 13.56 AU (Ortiz et al., 2016), respectively. These companions have never been resolved to this date, and these systems were thus counted as single in the context of this work, which only considered visual, astrometrically-confirmed systems.
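The comparison above can be sketched with synthetic samples: two log-normal separation distributions mimicking the quoted log-widths (1.70 vs. 0.75 dex) and peaks ($\sim$50 vs. $\sim$600 AU), compared with a two-sample Kolmogorov-Smirnov test. All numbers here are illustrative, not the paper's actual samples:

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(1)

# Synthetic projected separations (AU), drawn log-normally to mimic the
# quoted distributions; these are illustrative, not the paper's data.
field = 10.0 ** rng.normal(np.log10(50.0), 1.70, size=1000)   # field binaries
hosts = 10.0 ** rng.normal(np.log10(600.0), 0.75, size=200)   # planet hosts

for name, sample in (("field", field), ("hosts", hosts)):
    logs = np.log10(sample)
    print(f"{name}: log-mean = {logs.mean():.2f} dex, "
          f"log-width = {logs.std():.2f} dex")

# A tiny p-value rejects the hypothesis of a common parent distribution.
stat, p_value = ks_2samp(np.log10(field), np.log10(hosts))
print(f"KS p-value = {p_value:.2e}")
```

Since the KS statistic is built from empirical cumulative distributions, it is unchanged under the monotonic log transform; working in dex is purely for readability.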
Current high-angular resolution efforts and complementary detection methods, probing smaller binary separation ranges, must thus be pursued to obtain a more complete picture of the multiplicity of exoplanet host stars, and to understand the true effect of tight binary companions on the formation and evolution of extra-solar planets and brown dwarf companions. Figure 9 also shows the distribution of binary separation, dividing the sample between binaries hosting a giant planet or brown dwarf within 0.5 AU (magenta) and all other systems (blue). The sample of binaries hosting a short-period gas giant appears to lie at somewhat smaller binary separations than the remaining planet-hosting multiples, with logarithmic means shifting from $\sim$500 AU to $\sim$700 AU between the two samples, and a slightly tighter distribution for the former subset. This is representative of our results from Section 3.3, which showed an enhanced relative fraction of shorter-separation binaries for systems with close-in planets, an effect that is further magnified for the most massive planets and brown dwarfs on very tight orbits. These results are in agreement with previous observations (Desidera & Barbieri, 2007) that found massive planets on short-period orbits to be in most cases around the components of rather tight binaries. Finally, the larger relative number of very wide binaries ($>$1000 AU) for hosts to lower-mass and larger-separation planets is also consistent with the rest of our results. We indeed found that such widely-separated binaries generally do not impact the planet properties (see Figure 5), and observed small and wide-orbit planets not to be significantly affected by the presence of stellar companions, compatible with the idea that most such planets are only found in very wide binaries or around single stars.
Desidera & Barbieri (2007) similarly concluded that the properties of exoplanets orbiting components of very wide binaries are compatible with those of planets orbiting single stars.

### 4.4 Implications for Formation Mechanisms

The final architectures of planetary systems around members of binary stars strongly depend on how the presence of a close massive body impacts standard formation and migration processes, through its efficiency to alter the local disk environment, accretion rates, or tidal interactions between planets and the host star. The population trends highlighted throughout this study might provide new clues and insights into the effects of stellar companions on planet and brown dwarf formation and evolution. Our findings show that binarity has little effect on the distributions of planet mass and semi-major axis for the population of sub-Jovian planets found inside $\sim$1 AU. The impact of stellar duplicity at short binary separations ($<$50 AU) remains to be fully understood theoretically and better constrained observationally, as such very tight binaries would be more likely to influence the inner disk regions crucial for small planet formation and stable orbital behaviors. Nonetheless, our results demonstrate that sub-Jovian planets that form in binaries with separations of hundreds of AU are consistent with the population of single-star planets. This suggests that stellar multiplicity does not need to be extensively accounted for in order to reproduce the core of this population of small planets, with stellar companions either completely inhibiting the formation or survival of such planets, or having no visible effect on the demographics of successfully formed planets. In contrast, we found that the populations of intermediate-mass giant planets (M${}_{\mathrm{pl}}=0.1$–7 MJup) and high-mass sub-stellar companions (M${}_{\mathrm{pl}}>7$ MJup) show different statistical properties between single-star systems and hosts with stellar companions.
Small-separation planets and brown dwarf companions within the snow line ($<$1–3 AU) were found to have somewhat larger masses and/or tighter separations when in binary stars. This trend is enhanced for the most massive and closest-separation sub-stellar companions, which also have inflated raw stellar multiplicity rates compared to lower-mass and wider planets, consistent with previous studies (Fontanive et al., 2019; Moe & Kratter, 2019). This strongly indicates that the identified stellar binary companions likely affect the formation and/or migration of these massive sub-stellar objects, either allowing more massive planets to exist at separations similar to those of planets around single stars, or enabling similar-mass planets to form or migrate to shorter semi-major axes in stellar binaries. Understanding the true nature and extent of these effects is a challenging task. Unfortunately, available data provide little insight at this stage into the details of the possible underlying processes, with few prospects to disentangle between planetary formation and evolution, and missing information about most orbital elements for binary orbits. Likewise, modeling the formation and evolution of planets in binaries requires exploring a very wide parameter space, including binary separations, mass ratios, inclinations and eccentricities. The large variety of possible binary configurations likely impacts the existence and properties of planetary systems differently for each combination of these key binary parameters, from detrimental to perturbing or even favorable effects. For circumstellar planets, which represent the focus of this study, a nearby stellar companion is expected to primarily affect the outer parts of typical planet formation locations, where the gravitational influence from the stellar companions will be enhanced.
The outskirts of protoplanetary disks are believed to predominantly harbor more massive planets, with mostly rare, cold Jovian planets predicted beyond a few tens of AU in the core accretion paradigm (Emsenhuber et al., 2020), and the formation of massive planets and brown dwarfs by gravitational disk fragmentation occurring preferentially in the cool outer regions of disks, from separations of several tens to hundreds of AU (Rafikov, 2005; Hall et al., 2017). Following this reasoning, giant planets and brown dwarfs forming at large orbital separations are thus more likely to be affected by the presence of an outer star in the system than small planets forming and accreting within a few AU from the host star. The observed population of wide-orbit ($>$10 AU) planets and brown dwarfs was found to have a lower raw binary rate than similar-mass sub-stellar companions on shorter orbits, and no significant differences were observed in planetary properties between single and multiple-star systems. The effect from outer companion stars would thus likely be in facilitating inward migration processes, bringing massive giant planets and brown dwarfs onto extremely tight orbits typically unreachable in single-star environments, via e.g. the Kozai-Lidov mechanism (Naoz & Fabrycky, 2014; Winn et al., 2010) or other triggered dynamical perturbations. Alternatively, binarity could impact separate planet formation channels differently, i.e., influencing the conditions for gravitational disk instability, but with little effect on the results of core accretion mechanisms if they proceed, thus affecting only the very most massive planets and brown dwarfs. For example, the presence of a nearby companion star within $\sim$100–300 AU could tidally truncate protoplanetary disks (Kraus et al., 2012) and lead to faster disk dissipation rates (Müller & Kley, 2012).
This effect would be particularly problematic for giant planet formation by core accretion, which requires significantly longer timescales to operate than disk fragmentation. Formation by core accretion would therefore only take place if the outer companion has little effect on the disk and forming planetary system, thus not significantly impacting the final planet properties compared to single-star conditions. Similarly, binary companions have been suggested to be able to trigger instabilities in otherwise stable disks (Boss, 2006), hence favorably modifying formation environments for in-situ disk fragmentation, while remaining inconsequential for core accretion. This idea is reconcilable with the high masses of the outlying population of sub-stellar companions observed to be predominantly in binaries, which seemingly formed differently from the population of lower-mass planets on similar orbits, most likely through gravitational disk fragmentation (Moe & Kratter, 2019). Finally, our results regarding the separation distribution of binaries might help to narrow down the effect of at least one binary parameter. As mentioned previously, there is reliable observational evidence that close binarity ($<$50 AU) hinders planet formation around a host star (Fontanive et al., 2019; Kraus et al., 2016; Wang et al., 2014), although this feature could not be robustly investigated in the present work. We also found stellar multiplicity at very large separations (thousands of AU) to have no significant impact on observed planetary populations, suggesting that planet formation and evolutionary patterns in such systems behave similarly as around single stars. Intermediate separations, from several tens to a few hundreds of AU, therefore appear to be a key region of the parameter space to explore in order to further our understanding of exoplanets in stellar binaries.
Examinations of physical quantities in these systems such as binding energy may be especially interesting to study for a better physical understanding of the processes in play, and we particularly advocate for investigations to be conducted in the theoretical context of gravitational disk instability based on our results.

## 5 Summary

In this work, we have compiled a sample of 938 stars hosting a total of 1316 extra-solar planets and brown dwarf companions, out to 200 pc. We searched for visual co-moving companions to these systems via an extensive search in the literature and using the Gaia DR2 catalog to identify common proper motion sources. This analysis yielded a total of 218 planet hosts in multiple-star systems, including 186 binaries and 32 hierarchical triple systems, with 10 newly-discovered binary companions and 2 new tertiary components. From these, 4 binaries and 1 triple system contain 2 planet-bearing stars. Stellar companions have masses ranging from the brown dwarf/star boundary at 0.07 M⊙ up to 2.27 M⊙, with separations ranging from $<$1 AU to 20 000 AU with a median of $\sim$600 AU. Investigating our gathered sample of binaries, we found that:

$\bullet$ More massive planet hosts are more often part of multiple-star systems, consistent with the population of planet-bearing stars following the overall relative multiplicity outcome of stellar formation.

$\bullet$ Planet hosts in multiple systems were also predominantly observed to be the most massive component of stellar binaries.

$\bullet$ A total of 27 binary systems have separations $<$100 AU, from which 20 have binary separations smaller than 50 AU, with 1 system in an extreme $<$1 AU configuration. Most of these close binaries, however, were only identified thanks to dedicated high-angular resolution campaigns, and could not, for the most part, be retrieved with the resolving limit of Gaia (1–3 arcsec), in particular for the most distant targets in our sample.
$\bullet$ As only a small fraction of planet hosts have been targeted by such imaging programs, a significant number of sub-arcsecond binaries and companions on separations of a few arcseconds could still be missing from our catalog, as further supported by the concurrence of our measured peak in binary separation and our estimated Gaia completeness limit.

Assuming that the selection and observational biases lying in and limiting our gathered compilation of stellar binaries affect various subsets of planetary populations and planet hosts in a reasonably homogeneous way, we investigated possible correlations between planet properties and the existence and properties of outer stellar companions. Our main results are:

$\bullet$ From our identified sample of binary companions, we measured a raw multiplicity rate of $23.2\pm 1.6\,\%$ for planet hosts.

$\bullet$ Multi-planet systems were found to have a somewhat lower stellar duplicity frequency ($18.0\pm 2.7\,\%$) compared to single-planet systems ($25.1\pm 1.9\,\%$), with a 2.2-$\sigma$ significance.

$\bullet$ Dividing the planet parameter space into various sub-populations, we found that giant planets and brown dwarfs with masses above 0.1 MJup have a substantially larger (3.6-$\sigma$) raw stellar multiplicity fraction ($25.5\pm 1.8\,\%$) than lower-mass planets ($16.6\pm 1.7\,\%$), consistent with the fact that these small sub-Jovian planets are typically organized in tightly-packed multi-planet systems.

$\bullet$ This trend appears to further increase up to $\sim$30$\,\%$ for massive planets and brown dwarfs (M${}_{\mathrm{pl}}>7$ MJup) on very short orbital separations (a${}_{\mathrm{pl}}<0.5$ AU), with the most massive and shortest-period sub-stellar companions almost exclusively observed in multiple-star systems.
These results are consistent with previous studies of these populations (Fontanive et al., 2019; Moe & Kratter, 2019), which appear to follow the architectures of stellar spectroscopic binaries, systematically observed as part of hierarchical triple systems (Tokovinin et al., 2006). In terms of planet properties, our results suggest that:

1. Stellar duplicity has no significant effect on the demographics of low-mass planets (M${}_{\mathrm{pl}}<0.1$ MJup) or the core population of warm giant exoplanets at separations neighboring the snow line (a${}_{\mathrm{pl}}>0.5$ AU).
2. Only high-mass, small-separation planets were observed to have different distributions of planet properties between the subsets of planets in binaries and in single-star systems, with an over-density of planets and brown dwarfs of several Jupiter masses on semi-major axes of $\sim$0.01–0.1 AU identified in multiple-star systems.
3. These extreme planetary systems with few or no single-star analogues were predominantly found in rather tight binary configurations $<$1000 AU, mostly at separations $<$250 AU for sub-stellar companions with masses $>$7 MJup. Such systems represent a sizable fraction of these tight binaries in our compilation ($\sim$10$\,\%$), despite the rarity of these planets and brown dwarfs in our overall exoplanet catalog ($\sim$2$\,\%$).
4. In contrast, the subset of these planets in binaries with separations $>$1000 AU showed distributions in mass and semi-major axis similar to those of planets and brown dwarfs orbiting single stars. This indicates that short- ($<$250 AU) or intermediate-separation ($<$1000 AU) binaries play a role in the formation or evolution of these massive planets and brown dwarfs, but that very wide binaries do not influence the architectures of planetary systems.
5. Binary companion mass, on the other hand, was found to have no significant effect on planetary properties.
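The multiplicity rates quoted above are simple count ratios with counting uncertainties. A minimal sketch of how the raw rate can be recomputed from the reported counts (assuming Poisson counting errors $\sqrt{k}/n$, an assumption on our part that happens to reproduce the quoted $23.2\pm 1.6\,\%$; the paper's exact uncertainty treatment is not restated here):

```python
import math

def fraction_with_error(k, n):
    """Raw fraction k/n with an assumed Poisson counting error sqrt(k)/n."""
    p = k / n
    err = math.sqrt(k) / n
    return p, err

# 218 planet hosts found in multiple-star systems, out of 938 hosts searched
p, err = fraction_with_error(218, 938)
print(f"raw multiplicity rate = {100 * p:.1f} +/- {100 * err:.1f} %")  # 23.2 +/- 1.6 %
```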
Between the upcoming generation of telescopes and future Gaia data releases, the next decade promises unprecedented discoveries and new characterisation possibilities. These findings will arguably yield unparalleled information and robust new constraints on system architectures and population demographics, which will in turn provide key probes into formation histories and dynamical evolution processes. We hope that the gathered compilation of exoplanets in visual binaries will be useful to future studies in this constantly-growing research area, and will motivate the continued pursuit of existing campaigns searching for small-separation binary companions to known planetary systems. With a more comprehensive picture of stellar multiplicity on the separation ranges demonstrated here to remain highly incomplete, we will be able to confirm and better understand the tentative trends highlighted in this paper, and improve our fundamental understanding of stellar, sub-stellar and planetary formation and evolution.

## Conflict of Interest Statement

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

## Author Contributions

C.F. led this study, performing the catalog compilations and binary searches and leading the data analyses. D.B.G. contributed to the search for trends within the obtained catalogs, the scientific interpretation of results, and the writing of the paper.

## Acknowledgments

C.F. acknowledges support from the Center for Space and Habitability (CSH). This work has been carried out within the framework of the NCCR PlanetS supported by the Swiss National Science Foundation. D.B.G. acknowledges support from NASA ADAP award No. 80NSSC19K0532.
This research has made use of the NASA Exoplanet Archive, which is operated by the California Institute of Technology, under contract with the National Aeronautics and Space Administration under the Exoplanet Exploration Program. This research has made use of the Exoplanet Orbit Database and the Exoplanet Data Explorer at exoplanets.org. This work has made use of data from the European Space Agency (ESA) mission Gaia (https://www.cosmos.esa.int/gaia), processed by the Gaia Data Processing and Analysis Consortium (DPAC, https://www.cosmos.esa.int/web/gaia/dpac/consortium). This research has made use of the SIMBAD database and the VizieR catalogue access tool, operated at CDS, Strasbourg, France.

## Data Availability Statement

The catalogs compiled in this work and used throughout this study are provided as supplementary online material. The compilations are also made publicly available online at https://docs.google.com/spreadsheets/d/11b29RREm_rTWcpUGvh7M_-wOQgH8aTiyoxDPHieqXBo/edit?usp=sharing, in a catalog that we plan to update regularly.

## References

* Adams et al. (2012) Adams, E. R., Ciardi, D. R., Dupree, A. K., et al. 2012, AJ, 144, 42, doi: 10.1088/0004-6256/144/2/42
* Adams et al. (2013) Adams, E. R., Dupree, A. K., Kulesa, C., & McCarthy, D. 2013, AJ, 146, 9, doi: 10.1088/0004-6256/146/1/9
* Artymowicz & Lubow (1994) Artymowicz, P., & Lubow, S. H. 1994, ApJ, 421, 651, doi: 10.1086/173679
* Asensio-Torres et al. (2018) Asensio-Torres, R., Janson, M., Bonavita, M., et al. 2018, A&A, 619, A43, doi: 10.1051/0004-6361/201833349
* Barnes et al. (2020) Barnes, J. R., Haswell, C. A., Staab, D., et al. 2020, Nature Astronomy, 4, 419, doi: 10.1038/s41550-019-0972-z
* Bate & Bonnell (1997) Bate, M. R., & Bonnell, I. A. 1997, MNRAS, 285, 33, doi: 10.1093/mnras/285.1.33
* Bergfors et al. (2013) Bergfors, C., Brandner, W., Daemgen, S., et al. 2013, MNRAS, 428, 182, doi: 10.1093/mnras/sts019
* Bochanski et al. (2010) Bochanski, J. J., Hawley, S. L., Covey, K.
R., et al. 2010, AJ, 139, 2679, doi: 10.1088/0004-6256/139/6/2679
* Bohn et al. (2020) Bohn, A. J., Southworth, J., Ginski, C., et al. 2020, A&A, 635, A73, doi: 10.1051/0004-6361/201937127
* Bonavita & Desidera (2007) Bonavita, M., & Desidera, S. 2007, A&A, 468, 721, doi: 10.1051/0004-6361:20066671
* Bonavita & Desidera (2020) —. 2020, Galaxies, 8, 16, doi: 10.3390/galaxies8010016
* Bonfils et al. (2013) Bonfils, X., Delfosse, X., Udry, S., et al. 2013, A&A, 549, A109, doi: 10.1051/0004-6361/201014704
* Boss (2006) Boss, A. P. 2006, ApJ, 641, 1148, doi: 10.1086/500530
* Chabrier (2003) Chabrier, G. 2003, PASP, 115, 763, doi: 10.1086/376392
* Coker et al. (2018) Coker, C. T., Gaudi, B. S., Pogge, R. W., & Horch, E. 2018, AJ, 155, 27, doi: 10.3847/1538-3881/aa9f0e
* Colton et al. (2021) Colton, N. M., Horch, E. P., Everett, M. E., et al. 2021, AJ, 161, 21, doi: 10.3847/1538-3881/abc9af
* Cumming et al. (2008) Cumming, A., Butler, R. P., Marcy, G. W., et al. 2008, PASP, 120, 531, doi: 10.1086/588487
* Daemgen et al. (2009) Daemgen, S., Hormuth, F., Brandner, W., et al. 2009, A&A, 498, 567, doi: 10.1051/0004-6361/200810988
* Deacon et al. (2014) Deacon, N. R., Liu, M. C., Magnier, E. A., et al. 2014, ApJ, 792, 119, doi: 10.1088/0004-637X/792/2/119
* Deacon et al. (2016) Deacon, N. R., Kraus, A. L., Mann, A. W., et al. 2016, MNRAS, 455, 4212, doi: 10.1093/mnras/stv2132
* Desidera & Barbieri (2007) Desidera, S., & Barbieri, M. 2007, A&A, 462, 345, doi: 10.1051/0004-6361:20066319
* Dietrich & Ginski (2018) Dietrich, J., & Ginski, C. 2018, A&A, 620, A102, doi: 10.1051/0004-6361/201731341
* Dommanget & Nys (2002) Dommanget, J., & Nys, O. 2002, VizieR Online Data Catalog, I/274
* Doyle et al. (2011) Doyle, L. R., Carter, J. A., Fabrycky, D. C., et al. 2011, Science, 333, 1602, doi: 10.1126/science.1210923
* Eggenberger (2010) Eggenberger, A. 2010, in EAS Publications Series, Vol. 42, EAS Publications Series, ed. K. Gożdziewski, A. Niedzielski, & J.
Schneider, 19–37, doi: 10.1051/eas/1042002
* Eggenberger & Udry (2007) Eggenberger, A., & Udry, S. 2007, arXiv e-prints, arXiv:0705.3173. https://arxiv.org/abs/0705.3173
* Eggenberger et al. (2007) Eggenberger, A., Udry, S., Chauvin, G., et al. 2007, A&A, 474, 273, doi: 10.1051/0004-6361:20077447
* Eggenberger et al. (2011) Eggenberger, A., Udry, S., Chauvin, G., et al. 2011, in IAU Symposium, Vol. 276, The Astrophysics of Planetary Systems: Formation, Structure, and Dynamical Evolution, ed. A. Sozzetti, M. G. Lattanzi, & A. P. Boss, 409–410, doi: 10.1017/S1743921311020564
* Eggenberger et al. (2004) Eggenberger, A., Udry, S., & Mayor, M. 2004, A&A, 417, 353, doi: 10.1051/0004-6361:20034164
* Emsenhuber et al. (2020) Emsenhuber, A., Mordasini, C., Burn, R., et al. 2020, arXiv e-prints, arXiv:2007.05562. https://arxiv.org/abs/2007.05562
* Everett et al. (2015) Everett, M. E., Barclay, T., Ciardi, D. R., et al. 2015, AJ, 149, 55, doi: 10.1088/0004-6256/149/2/55
* Fabricius et al. (2001) Fabricius, C., Hog, E., Makarov, V. V., et al. 2001, VizieR Online Data Catalog, I/276
* Fabrycky & Tremaine (2007) Fabrycky, D., & Tremaine, S. 2007, ApJ, 669, 1298, doi: 10.1086/521702
* Faedi et al. (2013) Faedi, F., Staley, T., Gómez Maqueo Chew, Y., et al. 2013, MNRAS, 433, 2097, doi: 10.1093/mnras/stt885
* Fontanive et al. (2019) Fontanive, C., Rice, K., Bonavita, M., et al. 2019, MNRAS, 485, 4967, doi: 10.1093/mnras/stz671
* Furlan et al. (2017) Furlan, E., Ciardi, D. R., Everett, M. E., et al. 2017, AJ, 153, 71, doi: 10.3847/1538-3881/153/2/71
* Gaia Collaboration et al. (2016) Gaia Collaboration, Prusti, T., de Bruijne, J. H. J., et al. 2016, A&A, 595, A1, doi: 10.1051/0004-6361/201629272
* Gaia Collaboration et al. (2018) Gaia Collaboration, Brown, A. G. A., Vallenari, A., et al. 2018, A&A, 616, A1, doi: 10.1051/0004-6361/201833051
* Gentile Fusillo et al. (2019) Gentile Fusillo, N. P., Tremblay, P.-E., Gänsicke, B. T., et al.
2019, MNRAS, 482, 4570, doi: 10.1093/mnras/sty3016
* Ginski et al. (2020) Ginski, C., Mugrauer, M., Adam, C., Vogt, N., & van Holstein, R. 2020, arXiv e-prints, arXiv:2009.10363. https://arxiv.org/abs/2009.10363
* Ginski et al. (2012) Ginski, C., Mugrauer, M., Seeliger, M., & Eisenbeiss, T. 2012, MNRAS, 421, 2498, doi: 10.1111/j.1365-2966.2012.20485.x
* Ginski et al. (2016) Ginski, C., Mugrauer, M., Seeliger, M., et al. 2016, MNRAS, 457, 2173, doi: 10.1093/mnras/stw049
* Hagelberg et al. (2020) Hagelberg, J., Engler, N., Fontanive, C., et al. 2020, A&A, 643, A98, doi: 10.1051/0004-6361/202039173
* Hall et al. (2017) Hall, C., Forgan, D., & Rice, K. 2017, MNRAS, 470, 2517, doi: 10.1093/mnras/stx1244
* Han et al. (2014) Han, E., Wang, S. X., Wright, J. T., et al. 2014, PASP, 126, 827, doi: 10.1086/678447
* Hatzes et al. (2003) Hatzes, A. P., Cochran, W. D., Endl, M., et al. 2003, ApJ, 599, 1383, doi: 10.1086/379281
* Hirsch et al. (2017) Hirsch, L. A., Ciardi, D. R., Howard, A. W., et al. 2017, AJ, 153, 117, doi: 10.3847/1538-3881/153/3/117
* Horch et al. (2014) Horch, E. P., Howell, S. B., Everett, M. E., & Ciardi, D. R. 2014, ApJ, 795, 60, doi: 10.1088/0004-637X/795/1/60
* Howard et al. (2012) Howard, A. W., Marcy, G. W., Bryson, S. T., et al. 2012, ApJS, 201, 15, doi: 10.1088/0067-0049/201/2/15
* Janson et al. (2014) Janson, M., Bergfors, C., Brandner, W., et al. 2014, ApJ, 789, 102, doi: 10.1088/0004-637X/789/2/102
* Janson et al. (2017) Janson, M., Durkan, S., Hippler, S., et al. 2017, A&A, 599, A70, doi: 10.1051/0004-6361/201629945
* Janson et al. (2012) Janson, M., Hormuth, F., Bergfors, C., et al. 2012, ApJ, 754, 44, doi: 10.1088/0004-637X/754/1/44
* Jensen & Akeson (2003) Jensen, E. L. N., & Akeson, R. L. 2003, ApJ, 584, 875, doi: 10.1086/345719
* Kaib et al. (2013) Kaib, N. A., Raymond, S. N., & Duncan, M. 2013, Nature, 493, 381, doi: 10.1038/nature11780
* Kley (2001) Kley, W. 2001, in The Formation of Binary Stars, ed. H. Zinnecker & R. Mathieu, Vol.
200, 511
* Konacki et al. (2009) Konacki, M., Muterspaugh, M. W., Kulkarni, S. R., & Hełminiak, K. G. 2009, ApJ, 704, 513, doi: 10.1088/0004-637X/704/1/513
* Kouwenhoven et al. (2007) Kouwenhoven, M. B. N., Brown, A. G. A., Portegies Zwart, S. F., & Kaper, L. 2007, A&A, 474, 77, doi: 10.1051/0004-6361:20077719
* Kozai (1962) Kozai, Y. 1962, AJ, 67, 591, doi: 10.1086/108790
* Kraus et al. (2012) Kraus, A. L., Ireland, M. J., Hillenbrand, L. A., & Martinache, F. 2012, ApJ, 745, 19, doi: 10.1088/0004-637X/745/1/19
* Kraus et al. (2016) Kraus, A. L., Ireland, M. J., Huber, D., Mann, A. W., & Dupuy, T. J. 2016, AJ, 152, 8, doi: 10.3847/0004-6256/152/1/8
* Lidov (1962) Lidov, M. L. 1962, Planet. Space Sci., 9, 719, doi: 10.1016/0032-0633(62)90129-0
* Lillo-Box et al. (2012) Lillo-Box, J., Barrado, D., & Bouy, H. 2012, A&A, 546, A10, doi: 10.1051/0004-6361/201219631
* Lodieu et al. (2014) Lodieu, N., Pérez-Garrido, A., Béjar, V. J. S., et al. 2014, A&A, 569, A120, doi: 10.1051/0004-6361/201424210
* Luhman & Jayawardhana (2002) Luhman, K. L., & Jayawardhana, R. 2002, ApJ, 566, 1132, doi: 10.1086/338338
* Maldonado et al. (2020) Maldonado, R. F., Villaver, E., Mustill, A. J., Chavez, M., & Bertone, E. 2020, MNRAS, 499, 1854, doi: 10.1093/mnras/staa2946
* Mason et al. (2001) Mason, B. D., Wycoff, G. L., Hartkopf, W. I., Douglass, G. G., & Worley, C. E. 2001, AJ, 122, 3466, doi: 10.1086/323920
* Matson et al. (2018) Matson, R. A., Howell, S. B., Horch, E. P., & Everett, M. E. 2018, AJ, 156, 31, doi: 10.3847/1538-3881/aac778
* Mayor et al. (2011) Mayor, M., Marmier, M., Lovis, C., et al. 2011, arXiv e-prints, arXiv:1109.2497. https://arxiv.org/abs/1109.2497
* Moe & Kratter (2018) Moe, M., & Kratter, K. M. 2018, ApJ, 854, 44, doi: 10.3847/1538-4357/aaa6d2
* Moe & Kratter (2019) —. 2019, arXiv e-prints, arXiv:1912.01699. https://arxiv.org/abs/1912.01699
* Mordasini (2018) Mordasini, C. 2018, Planetary Population Synthesis, ed. H. J. Deeg & J. A.
Belmonte, 143, doi: 10.1007/978-3-319-55333-7_143
* Moutou et al. (2017) Moutou, C., Vigan, A., Mesa, D., et al. 2017, A&A, 602, A87, doi: 10.1051/0004-6361/201630173
* Mugrauer (2019) Mugrauer, M. 2019, MNRAS, 490, 5088, doi: 10.1093/mnras/stz2673
* Mugrauer & Ginski (2015) Mugrauer, M., & Ginski, C. 2015, MNRAS, 450, 3127, doi: 10.1093/mnras/stv771
* Mugrauer & Neuhäuser (2009) Mugrauer, M., & Neuhäuser, R. 2009, A&A, 494, 373, doi: 10.1051/0004-6361:200810639
* Mugrauer et al. (2007a) Mugrauer, M., Neuhäuser, R., & Mazeh, T. 2007a, A&A, 469, 755, doi: 10.1051/0004-6361:20065883
* Mugrauer et al. (2006) Mugrauer, M., Neuhäuser, R., Mazeh, T., et al. 2006, Astronomische Nachrichten, 327, 321, doi: 10.1002/asna.200510528
* Mugrauer et al. (2007b) Mugrauer, M., Seifahrt, A., & Neuhäuser, R. 2007b, MNRAS, 378, 1328, doi: 10.1111/j.1365-2966.2007.11858.x
* Mulders et al. (2015) Mulders, G. D., Pascucci, I., & Apai, D. 2015, ApJ, 798, 112, doi: 10.1088/0004-637X/798/2/112
* Müller & Kley (2012) Müller, T. W. A., & Kley, W. 2012, A&A, 539, A18, doi: 10.1051/0004-6361/201118202
* Naoz & Fabrycky (2014) Naoz, S., & Fabrycky, D. C. 2014, ApJ, 793, 137, doi: 10.1088/0004-637X/793/2/137
* Ngo et al. (2016) Ngo, H., Knutson, H. A., Hinkley, S., et al. 2016, ApJ, 827, 8, doi: 10.3847/0004-637X/827/1/8
* Ngo et al. (2017) Ngo, H., Knutson, H. A., Bryan, M. L., et al. 2017, AJ, 153, 242, doi: 10.3847/1538-3881/aa6cac
* Ortiz et al. (2016) Ortiz, M., Reffert, S., Trifonov, T., et al. 2016, A&A, 595, A55, doi: 10.1051/0004-6361/201628791
* Patience et al. (2002) Patience, J., White, R. J., Ghez, A. M., et al. 2002, ApJ, 581, 654, doi: 10.1086/342982
* Pichardo et al. (2005) Pichardo, B., Sparke, L. S., & Aguilar, L. A. 2005, MNRAS, 359, 521, doi: 10.1111/j.1365-2966.2005.08905.x
* Queloz et al. (2000) Queloz, D., Mayor, M., Weber, L., et al. 2000, A&A, 354, 99
* Rafikov (2005) Rafikov, R. R. 2005, ApJ, 621, L69, doi: 10.1086/428899
* Raghavan et al. (2006) Raghavan, D., Henry, T.
J., Mason, B. D., et al. 2006, ApJ, 646, 523, doi: 10.1086/504823
* Raghavan et al. (2010) Raghavan, D., McAlister, H. A., Henry, T. J., et al. 2010, ApJS, 190, 1, doi: 10.1088/0067-0049/190/1/1
* Reylé (2018) Reylé, C. 2018, A&A, 619, L8, doi: 10.1051/0004-6361/201834082
* Roell et al. (2012) Roell, T., Neuhäuser, R., Seifahrt, A., & Mugrauer, M. 2012, A&A, 542, A92, doi: 10.1051/0004-6361/201118051
* Santos et al. (2017) Santos, N. C., Adibekyan, V., Figueira, P., et al. 2017, A&A, 603, A30, doi: 10.1051/0004-6361/201730761
* Schlaufman (2018) Schlaufman, K. C. 2018, ApJ, 853, 37, doi: 10.3847/1538-4357/aa961c
* Schneider et al. (2011) Schneider, J., Dedieu, C., Le Sidaner, P., Savalle, R., & Zolotukhin, I. 2011, A&A, 532, A79, doi: 10.1051/0004-6361/201116713
* Schwarz et al. (2016) Schwarz, R., Funk, B., Zechner, R., & Bazsó, Á. 2016, MNRAS, 460, 3598, doi: 10.1093/mnras/stw1218
* Southworth et al. (2020) Southworth, J., Bohn, A. J., Kenworthy, M. A., Ginski, C., & Mancini, L. 2020, A&A, 635, A74, doi: 10.1051/0004-6361/201937334
* Stassun et al. (2018) Stassun, K. G., Oelkers, R. J., Pepper, J., et al. 2018, AJ, 156, 102, doi: 10.3847/1538-3881/aad050
* Thebault & Haghighipour (2015) Thebault, P., & Haghighipour, N. 2015, Planet Formation in Binaries, 309–340, doi: 10.1007/978-3-662-45052-9_13
* Tokovinin (2014a) Tokovinin, A. 2014a, AJ, 147, 86, doi: 10.1088/0004-6256/147/4/86
* Tokovinin (2014b) —. 2014b, AJ, 147, 87, doi: 10.1088/0004-6256/147/4/87
* Tokovinin (2018) —. 2018, ApJS, 235, 6, doi: 10.3847/1538-4365/aaa1a5
* Tokovinin & Lépine (2012) Tokovinin, A., & Lépine, S. 2012, AJ, 144, 102, doi: 10.1088/0004-6256/144/4/102
* Tokovinin et al. (2006) Tokovinin, A., Thomas, S., Sterzik, M., & Udry, S. 2006, A&A, 450, 681, doi: 10.1051/0004-6361:20054427
* Udry et al. (2004) Udry, S., Eggenberger, A., Beuzit, J. L., et al. 2004, in Revista Mexicana de Astronomia y Astrofisica Conference Series, Vol.
21, Revista Mexicana de Astronomia y Astrofisica Conference Series, ed. C. Allen & C. Scarfe, 215–216
* Udry et al. (2003) Udry, S., Mayor, M., & Santos, N. C. 2003, A&A, 407, 369, doi: 10.1051/0004-6361:20030843
* Vigan et al. (2020) Vigan, A., Fontanive, C., Meyer, M., et al. 2020, arXiv e-prints, arXiv:2007.06573. https://arxiv.org/abs/2007.06573
* Wang et al. (2014) Wang, J., Xie, J.-W., Barclay, T., & Fischer, D. A. 2014, ApJ, 783, 4, doi: 10.1088/0004-637X/783/1/4
* Ward-Duong et al. (2015) Ward-Duong, K., Patience, J., De Rosa, R. J., et al. 2015, MNRAS, 449, 2618, doi: 10.1093/mnras/stv384
* Wenger et al. (2000) Wenger, M., Ochsenbein, F., Egret, D., et al. 2000, A&AS, 143, 9, doi: 10.1051/aas:2000332
* White & Ghez (2001) White, R. J., & Ghez, A. M. 2001, ApJ, 556, 265, doi: 10.1086/321542
* Winn et al. (2010) Winn, J. N., Fabrycky, D., Albrecht, S., & Johnson, J. A. 2010, ApJ, 718, L145, doi: 10.1088/2041-8205/718/2/L145
* Winters et al. (2019) Winters, J. G., Henry, T. J., Jao, W.-C., et al. 2019, AJ, 157, 216, doi: 10.3847/1538-3881/ab05dc
* Wöllert et al. (2015) Wöllert, M., Brandner, W., Bergfors, C., & Henning, T. 2015, A&A, 575, A23, doi: 10.1051/0004-6361/201424091
* Ziegler et al. (2018) Ziegler, C., Law, N. M., Baranec, C., et al. 2018, AJ, 156, 83, doi: 10.3847/1538-3881/aace59
* Zucker & Mazeh (2002) Zucker, S., & Mazeh, T. 2002, ApJ, 568, L113, doi: 10.1086/340373
# Path integral contour deformations for observables in $SU(N)$ gauge theory

William Detmold (Center for Theoretical Physics, Massachusetts Institute of Technology, Cambridge, MA 02139, USA; The NSF AI Institute for Artificial Intelligence and Fundamental Interactions)
Gurtej Kanwar (Center for Theoretical Physics, Massachusetts Institute of Technology, Cambridge, MA 02139, USA; The NSF AI Institute for Artificial Intelligence and Fundamental Interactions)
Henry Lamm (Fermi National Accelerator Laboratory, Batavia, IL 60510, USA)
Michael L. Wagman (Fermi National Accelerator Laboratory, Batavia, IL 60510, USA)
Neill C. Warrington (Institute for Nuclear Theory, University of Washington, Seattle, Washington 98195-1550)

###### Abstract

Path integral contour deformations have been shown to mitigate sign and signal-to-noise problems associated with phase fluctuations in lattice field theories. We define a family of contour deformations applicable to $SU(N)$ lattice gauge theory that can reduce sign and signal-to-noise problems associated with complex actions and complex observables. For observables, these contours can be used to define deformed observables with identical expectation value but different variance. As a proof of principle, we apply machine learning techniques to optimize the deformed observables associated with Wilson loops in two-dimensional $SU(2)$ and $SU(3)$ gauge theory. We study loops consisting of up to 64 plaquettes and achieve variance reduction of up to 4 orders of magnitude.

Preprints: FERMILAB-PUB-21-014-T, INT-PUB-21-002, MIT-CTP/5270

## I Introduction

In order to test the Standard Model and search for new physics in experiments involving hadrons and nuclei, precision calculations of Standard Model observables are required.
To achieve this, Monte Carlo (MC) calculations of lattice-regularized path integrals for quantum chromodynamics (QCD) have been used to make precise predictions for many phenomenologically relevant quantities in the meson and nucleon sectors; for recent reviews see Refs. Aoki _et al._ (2020); Detmold _et al._ (2019); Lehner _et al._ (2019); Cirigliano _et al._ (2019); Aoyama _et al._ (2020). Lattice QCD calculations using MC methods are performed in Euclidean spacetime, where the action $S(U)$ is typically a real function of the discretized gauge field $U_{x,\mu}\in SU(3)$ and an ensemble of gauge fields can be generated with probability distribution proportional to $e^{-S(U)}$. Predictions for the expectation values of observables $\mathcal{O}$ are then obtained by averaging the results for $\mathcal{O}(U)$ obtained on each gauge field configuration. For correlation functions describing nucleons, nuclei, and most mesons, $\mathcal{O}(U)$ is complex and includes a gauge-field-dependent phase Wagman and Savage (2017); Wagman (2017). Phase fluctuations become more rapid with increasing Euclidean time separation and lead to a “signal-to-noise (StN) problem” in which the StN for a fixed-size statistical ensemble decreases exponentially with increasing time separation Parisi (1984); Lepage (1989); Beane _et al._ (2009, 2015); Wagman and Savage (2017); Wagman (2017); Davoudi _et al._ (2020). Alternatively, multi-nucleon systems can be probed by including a non-zero quark chemical potential in the action.
In this case, $e^{-S(U)}$ is not real and positive-definite, cannot be interpreted as a probability distribution, and the theory is said to have a “sign problem.” If $e^{-\operatorname{Re}S(U)}$ is used to define a probability distribution in this case, the factor $e^{-i\operatorname{Im}S(U)}$ leads to rapid phase fluctuations of path integrands and the appearance of a StN problem that is exponential in the spacetime volume Gibbs (1986); Cohen (2003a, b); Splittorff and Verbaarschot (2007a, b); de Forcrand (2009); Alexandru _et al._ (2015). A generic method for taming sign and StN problems in path integrals has recently emerged. This method involves deforming the manifold of integration of the path integral into a complexified field space. If the path integrand is a holomorphic function of the field variables, then a multi-dimensional version of Cauchy’s integral theorem ensures that the expectation value of the corresponding observable is unchanged by the manifold deformation. Manifold deformation may, however, change the values of observables on individual field configurations and therefore modify the severity of phase fluctuations and the associated sign/StN problems. Several methods for finding useful manifolds have been proposed and successfully applied in lattice field theories as well as in non-relativistic quantum mechanical theories relevant for condensed matter physics Cristoforetti _et al._ (2012); Aarts (2013); Cristoforetti _et al._ (2013); Mukherjee _et al._ (2013); Aarts _et al._ (2014); Cristoforetti _et al._ (2014); Alexandru _et al._ (2016a, b, c); Fujii _et al._ (2015); Tanizaki _et al._ (2016); Alexandru _et al._ (2017a, b); Mori _et al._ (2018); Tanizaki _et al._ (2017); Alexandru _et al._ (2018a, b, c, d); Kashiwa _et al._ (2019a); Fukuma _et al._ (2019a, b); Kashiwa _et al._ (2019b); Mou _et al._ (2019); Ulybyshev _et al._ (2020); Lawrence (2020); Lawrence and Yamauchi (2021). For a recent review see Ref. Alexandru _et al._ (2020).
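A toy numerical illustration of this volume scaling (a hypothetical Gaussian model of the phase $\operatorname{Im}S$, not a lattice action): when $V$ independent sites each contribute an independent random phase, the reweighting factor $\langle e^{-i\operatorname{Im}S}\rangle$ decays exponentially with $V$ while its statistical error stays roughly constant.

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = 0.4          # per-site phase fluctuation (arbitrary toy value)
n_samples = 100_000

for V in (1, 4, 16, 64):
    # Im S is a sum of V independent per-site phases, so Var[Im S] grows linearly with V.
    im_S = rng.normal(0.0, sigma, size=(n_samples, V)).sum(axis=1)
    avg_phase = np.exp(-1j * im_S).mean()
    # For Gaussian phases, |<exp(-i Im S)>| = exp(-V * sigma^2 / 2): exponential decay in V.
    print(V, abs(avg_phase), np.exp(-V * sigma**2 / 2))
```

The shrinking signal against constant sample noise is precisely the exponential StN degradation that reweighting induces.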
Although most applications have focused on improving sign problems in theories with complex actions, an analogous method for improving StN problems for complex observables in theories with real actions was introduced in Ref. Detmold _et al._ (2020). The focus of this work is on addressing sign/StN problems in QCD-like theories; in this setting, path integral contour deformations have so far been restricted to solving sign problems in simple contexts. Thermodynamic observables at non-zero quark chemical potential have been computed in $(0+1)$D QCD, a theory with a single link variable, using Lefschetz thimble methods Di Renzo and Eruzzi (2018) and the generalized thimble method Schmidt and Ziesché (2017), while preliminary results using neural-network manifolds were obtained in Ohnishi _et al._ (2019). In higher dimensions, Lefschetz thimbles were used to analyze finite-density observables for one- and two-site systems in the heavy-dense limit in Ref. Zambello and Renzo (2018), and this study predicted that the number of relevant Lefschetz thimbles would grow exponentially with the number of lattice sites for larger systems. The task of computing noisy observables in non-Abelian lattice gauge theories for larger spacetime volumes and higher spacetime dimensions has remained challenging. This paper introduces a simple yet expressive family of complexified manifolds for taming sign/StN problems in $SU(N)$ gauge theory using path integral contour deformations. The Jacobians required for calculations using this family of deformations are shown to be triangular matrices whose determinants can be computed with a cost that scales linearly with the spacetime volume. This family of manifolds can be applied to all theories involving $SU(N)$ gauge fields, including gauge theories coupled to matter fields, although their practical utility for improving sign/StN problems is only explored here for pure gauge theory. The deformed observable method introduced in Ref. 
Detmold _et al._ (2020) relates path integrals over deformed contours to path integrals written in terms of modified observables on undeformed contours, enabling improvement in the StN of observables without the need to modify MC sampling. We apply the method here to calculations of Wilson loops in $SU(2)$ and $SU(3)$ gauge theory, in which Wilson loops are known to have an exponentially severe StN problem and have been used to study other StN improvement methods Lüscher and Weisz (2001). Calculations are performed in $(1+1)$D as a proof of concept, as it is possible to compare with exact StN results derived analytically and to use specialized approaches for efficient Monte Carlo ensemble generation for $(1+1)$D gauge theories. Results are obtained for a range of Wilson loop areas and lattice spacings including areas of up to 64 lattice units at the finest lattice spacing. The variances of Wilson loops with largest areas are reduced by factors of $10^{3}$–$10^{4}$, demonstrating that deformed observables can dramatically improve StN problems in $SU(N)$ lattice gauge theory. The linear scaling with spacetime volume of these contour deformations suggests that it should be computationally feasible to explore the application of analogous contour deformations to $(3+1)$D lattice gauge theory in future work. The remainder of this paper is organized as follows. Sec. II describes our approach to contour deformations for $SU(N)$ variables, including a family of complex manifolds for integration over sets of $SU(N)$ variables, and reviews the deformed observables method introduced in Ref. Detmold _et al._ (2020). Sec. III presents analytical results for expectation values and variances of observables in $(1+1)$D $SU(N)$ lattice gauge theory. Results for MC calculations of deformed observables for Wilson loops are presented for $SU(2)$ gauge theory in Sec. IV and for $SU(3)$ gauge theory in Sec. V. A summary of results and consideration of future work is found in Sec. VI. 
## II General formalism

Cauchy’s integral theorem implies that the contour of a complex line integral can be deformed without changing the value of the integral if the integrand is holomorphic in the intervening region and the endpoints are held fixed. (For periodic functions the condition on the endpoints can be relaxed, as discussed in Sec. II.1.) When multidimensional integration is performed, the full theorem can be generalized if the integral can be described as iterated complex line integrals or by a technical extension to the full multivariate setting Range (2013). For the purpose of contour deformations, however, only a weaker form of the theorem (equivalent to Stokes’ theorem) is required. Specifically, a contour deformation from manifold $\mathcal{M}_{A}$ to $\mathcal{M}_{B}$ leaves the integral value unchanged if $\mathcal{M}_{A}\cup\mathcal{M}_{B}$ bounds a region in which the integrand is holomorphic; see Ref. Alexandru _et al._ (2020) for a simple proof. To implement such contour deformations and confirm holomorphy of an integrand throughout the relevant region of configuration space, a coordinate parameterization is useful. We discuss such parameterizations and contour deformations for $SU(N)$ groups and $SU(N)$ gauge theory in the following sections.

### II.1 Contour deformations of angular parameters

A general formalism for applying path integral contour deformations to $SU(N)$ group integrals can be obtained by using manifold coordinates that map subsets of $\mathbb{R}^{N^{2}-1}$ to $SU(N)$. For any $N$, the group manifold can be given explicit global coordinates using $N^{2}-1$ angular variables Bronzan (1988). These variables can be divided into _azimuthal_ angles $\phi_{1},\dots,\phi_{J}\in[0,2\pi]$ and _zenith_ angles $\theta_{1},\dots,\theta_{K}\in[0,\pi/2]$, where $J=(N^{2}+N-2)/2$ and $K=(N^{2}-N)/2$. (This is not the only possible assignment of angular coordinates to the manifold.)
For example, Appendix B explores an alternative parameterization for $SU(2)$. The azimuthal angles are periodic, such that $\phi_{i}=0$ is identified with $\phi_{i}=2\pi$, while the zenith angles have distinct endpoints. We define the combined coordinate $\Omega\equiv(\phi_{1},\dots,\phi_{J},\theta_{1},\dots,\theta_{K})$.

Figure 1: Left: schematic depiction of valid and invalid contour deformations, defined by the mapping $\widetilde{\theta}(\theta)$ from base coordinates to the manifold, when the original domain is a finite interval. Right: schematic depiction of additional allowed deformations (shifts) when endpoints are identified; these shifts are applicable to $U(1)$ variables or azimuthal angles $\phi$ in $SU(N)$ manifolds.

A generic integral over a group-valued variable $U\in SU(N)$ can be written as

$\mathcal{I}=\int dU\,f(U),$ (1)

where the Haar measure $dU$ is defined to be the unique measure that satisfies $d(VU)=d(UV)=dU$ for $V\in SU(N)$. We choose the conventional normalization condition $\int dU=1$. Using the coordinates introduced above, the integral can immediately be cast as integration over a sub-domain of $\mathbb{R}^{N^{2}-1}$,

$\mathcal{I}=\prod_{j=1}^{J}\left[\int_{0}^{2\pi}d\phi_{j}\right]\prod_{k=1}^{K}\left[\int_{0}^{\pi/2}d\theta_{k}\right]h(\Omega)f(U(\Omega)),$ (2)

where $h(\Omega)$ is the Jacobian factor associated with the change of measure from $dU$ to $d\Omega=\prod_{j}d\phi_{j}\prod_{k}d\theta_{k}$, and $U(\Omega)$ is the group element at the manifold coordinate $\Omega$. The specific forms of $h(\Omega)$ and $U(\Omega)$ for the groups $SU(2)$ and $SU(3)$ are presented in Secs. IV.1 and V.1. From Eq. (2), it is clear that each $\theta_{k}$ can be deformed into the complex plane holding the endpoints $0$ and $\pi/2$ fixed.
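This fixed-endpoint condition is easy to check numerically in one dimension. A small sketch (the integrand below is a toy holomorphic function of our own choosing, standing in for $h(\Omega)f(U(\Omega))$): integrating along the real segment $[0,\pi/2]$ and along a bent contour with the same endpoints gives the same result up to quadrature error.

```python
import numpy as np

def contour_integral(f, gamma, dgamma, a, b, n=4000):
    """Approximate int_a^b f(gamma(t)) * gamma'(t) dt with the midpoint rule."""
    t = a + (np.arange(n) + 0.5) * (b - a) / n
    return np.sum(f(gamma(t)) * dgamma(t)) * (b - a) / n

f = lambda z: np.exp(np.sin(z))  # entire (holomorphic everywhere) toy integrand

# Straight contour along [0, pi/2], playing the role of a zenith angle theta_k.
straight = contour_integral(f, lambda t: t + 0j, lambda t: np.ones_like(t),
                            0.0, np.pi / 2)
# Bent contour theta -> theta + 0.3i * sin(2*theta), which vanishes at both endpoints.
bent = contour_integral(f, lambda t: t + 0.3j * np.sin(2 * t),
                        lambda t: 1 + 0.6j * np.cos(2 * t), 0.0, np.pi / 2)
print(abs(straight - bent))  # ~0: the deformation does not change the integral
```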
Each $\phi_{j}$ can be deformed under the weaker constraint that the endpoints remain identified, as shown in Fig. 1. This can be seen by noting that the endpoints of the shifted contour can be connected to the original endpoints using a pair of segments parallel to the imaginary axis; these segments differ by a real $2\pi$ shift and a change of orientation so that integrals of periodic functions along these segments exactly cancel. This approach to deforming periodic variables with identified endpoints has previously been applied to path integrals involving $U(1)$ variables Alexandru _et al._ (2018e); Detmold _et al._ (2020); Kashiwa and Mori (2020); Alexandru _et al._ (2017c). If the integrand $h(\Omega)f(U(\Omega))$ is a holomorphic function of all components of $\Omega$, the value of the integral is unchanged by the deformations described above. We can define a deformed integration contour by the maps $\widetilde{\phi}_{j}=\widetilde{\phi}_{j}(\Omega)\in\mathbb{C}$ and $\widetilde{\theta}_{i}=\widetilde{\theta}_{i}(\Omega)\in\mathbb{C}$; for conciseness the collective set of deformed coordinates can be written as a function of original coordinates $\widetilde{\Omega}\equiv(\widetilde{\phi}_{1},\dots,\widetilde{\phi}_{J},\widetilde{\theta}_{1},\dots,\widetilde{\theta}_{K})=\widetilde{\Omega}(\Omega)$. 
The value of the integral is unchanged by this deformation, $\displaystyle\mathcal{I}$ $\displaystyle=\prod_{j=1}^{J}\left[\int_{0}^{2\pi}d\phi_{j}\right]\prod_{k=1}^{K}\left[\int_{0}^{\pi/2}d\theta_{k}\right]J(\Omega)\,h(\widetilde{\Omega})\,f(U(\widetilde{\Omega}))$ (3) $\displaystyle=\prod_{j=1}^{J}\left[\int_{0}^{2\pi}d\phi_{j}\right]\prod_{k=1}^{K}\left[\int_{0}^{\pi/2}d\theta_{k}\right]h(\Omega)\left\{J(\Omega)\frac{h(\widetilde{\Omega})}{h(\Omega)}f(U(\widetilde{\Omega}))\right\}$ $\displaystyle=\int dU\,J(U)\,f(\widetilde{U}).$ Above, $\widetilde{U}\equiv U(\widetilde{\Omega})$ is the deformed group element, $J(\Omega)\equiv\det\frac{\partial\widetilde{\Omega}_{\alpha}}{\partial\Omega_{\beta}}$ is the (complex) Jacobian factor relating the measure $d\Omega$ of the base contour to the measure $d\widetilde{\Omega}$ of the deformed contour, and the Jacobian factor relating the Haar measure $dU$ between the original and deformed contours is given by $J(U)\equiv J(\Omega)h(\widetilde{\Omega})/h(\Omega).$ (4) For any concrete map $U(\Omega)$, the deformed group element is given by simply applying the map to the complexified coordinate $\widetilde{\Omega}$. Any real Lie group has a unique complexification and the space in which $\widetilde{U}$ lives is well understood. In particular, the complexification of $SU(N)$ is $SL(N,\mathbb{C})$ Bump (2013). This is easy to see on intuitive grounds, as $SU(N)$ matrices are specified by unit-norm eigenvalues with determinant $1$ and the effect of complexifying the group is to allow arbitrary non-zero eigenvalues while preserving the determinant constraint, resulting in the group $SL(N,\mathbb{C})$. Though the deformation is defined in terms of the manifold coordinates, we can see in the last line of Eq. (3) that this new integral can be written independently of the coordinates, using the modified integrand $J(U)f(\widetilde{U}(U))$.
This form suggests a coordinate-independent definition of a holomorphic integrand and contour deformation. Such a general approach is beyond the scope of this work, but we note that a deformation can be applied in any coordinate system so long as the integrand can be shown to be holomorphic in _some_ coordinate system. The angular coordinates for $SU(N)$ are sufficient to show that all components of the matrix representation of $U$ are holomorphic (i.e. the Wirtinger derivatives $\partial U(\Omega)/\partial\bar{\Omega}_{i}$ are zero Range (2013)). The components of $U^{-1}$ are holomorphic whenever $U$ is invertible, as can be seen by starting with the definition of the inverse, $U^{-1}U=1$, applying Wirtinger derivatives to both sides, $\partial(U^{-1}U)/\partial\bar{\Omega}_{i}=0$, and using holomorphy of $U$ to obtain $\partial U^{-1}/\partial\bar{\Omega}_{i}=0$. The general construction so far is valid for collections of $SU(N)$ group variables, and is therefore applicable to $SU(N)$ lattice gauge theory. Standard pure-gauge actions for $SU(N)$ lattice gauge theory can be written as holomorphic functions of the variables $U$ and their inverses, if we replace instances of $U^{\dagger}$ with $U^{-1}$, and are thus holomorphic throughout the $SL(N,\mathbb{C})$ domain of each complexified group variable. This fact has previously been recognized in complex Langevin approaches to extensive sign problems in lattice gauge theory Aarts and Stamatescu (2008); Sexty (2014); Seiler (2018). Similarly, (unrooted) fermionic determinants can be written as polynomials in components of gauge links and are therefore holomorphic Alexandru _et al._ (2018b), and many observables can be analyzed in the same way. ### II.2 Vertical contour deformations We define a family of deformed manifolds for an $SU(N)$ variable in terms of the angular parameterization by $\widetilde{\Omega}=\Omega+if(\Omega;\lambda,\chi),$ (5) where $f(\Omega;\lambda,\chi)\in\mathbb{R}^{N^{2}-1}$.
This family of _vertical deformations_, inspired by Refs. Alexandru _et al._ (2018e, f), is not fully general, as it only includes contour deformations describable by vertically shifting each point purely in the imaginary direction, and in particular it does not include any deformations with $\operatorname{Re}{\widetilde{\Omega}(\Omega)}=\operatorname{Re}{\widetilde{\Omega}(\Omega^{\prime})}$ for some $\Omega\neq\Omega^{\prime}$. The Jacobian of any deformation in this family is straightforward to compute as $J(\Omega)=\det\frac{\partial\widetilde{\Omega}_{\alpha}}{\partial\Omega_{\beta}}=\det\left[\delta_{\alpha\beta}+i\frac{\partial f_{\alpha}(\Omega;\lambda,\chi)}{\partial\Omega_{\beta}}\right],$ (6) where $\alpha,\beta=1,\ldots,N^{2}-1$ index the angular parameters $\Omega=(\phi_{1},\ldots,\phi_{J},\theta_{1},\ldots,\theta_{K})$. The function $f$ can further be expanded in terms of Fourier modes, $f(\Omega;\lambda,\chi)=\sum_{\{n_{i}\}=0}^{\Lambda}\sum_{\{m_{j}\}=1}^{\Lambda}\lambda_{I}T_{I}(\Omega;\chi_{I}^{1},\dots,\chi_{I}^{a}),$ (7) where $I\equiv(n_{1}\dots n_{a},m_{1}\dots m_{b})$, and $T_{I}(\Omega;\chi^{1}\dots\chi^{a})\equiv\prod_{i=1}^{a}\sin(\phi_{i}n_{i}+\chi^{i})\prod_{j=1}^{b}\sin(2\theta_{j}m_{j}),$ (8) which provides a complete basis for vertical contour deformations in the limit $\Lambda\rightarrow\infty$. Including successively more Fourier modes by increasing the Fourier cutoff $\Lambda$ systematically improves the flexibility of the function $f$. In our applications to $SU(N)$ gauge theories in $(1+1)$D below, Fourier cutoffs $\Lambda\in\{0,1,2\}$ are explored. It is noteworthy that the sum over azimuthal Fourier modes includes the constant mode as well as phase offsets $\chi_{i}$ because azimuthal angles can be deformed without fixing their endpoints as discussed above. These constant modes are essential for the sign/StN problem reduction achieved in $(1+1)$D examples below.
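The Jacobian in Eq. (6) can be evaluated numerically for any explicit choice of $f$. The sketch below uses a single Fourier mode of the form in Eq. (8), with arbitrarily chosen illustrative values of $\lambda$ and $\chi$ (not taken from any optimization), applied identically to three coordinates mimicking an $SU(2)$ parameter set $(\phi_{1},\phi_{2},\theta)$, and computes $J(\Omega)$ by central finite differences:

```python
import numpy as np

LAM, CHI = 0.2, 0.7   # illustrative deformation parameters (not from the paper)

def f_vec(om):
    # single Fourier mode sin(phi1 + chi) sin(phi2 + chi) sin(2 theta),
    # applied with the same coefficient to all three coordinates
    phi1, phi2, theta = om
    t = np.sin(phi1 + CHI) * np.sin(phi2 + CHI) * np.sin(2 * theta)
    return LAM * np.array([t, t, t])

def jacobian(om, eps=1e-6):
    # J(Omega) = det(delta_ab + i * d f_a / d Omega_b), via central differences
    n = len(om)
    M = np.eye(n, dtype=complex)
    for b in range(n):
        e = np.zeros(n)
        e[b] = eps
        M[:, b] += 1j * (f_vec(om + e) - f_vec(om - e)) / (2 * eps)
    return np.linalg.det(M)

om = np.array([0.4, 1.1, 0.6])
print(jacobian(om))
```

Because this $f$ applies the same mode to every coordinate, the matrix $\partial f_{\alpha}/\partial\Omega_{\beta}$ is rank one, and the matrix determinant lemma gives $J=1+i\lambda\sum_{\beta}\partial T/\partial\Omega_{\beta}$, which provides an independent check of the numerical value.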
### II.3 Path integral deformations for noisy observables We next review the deformed observable approach presented in Ref. Detmold _et al._ (2020), in which contour deformations of lattice field theory path integrals are used to define modified observables with improved noise properties and unchanged expectation value. We focus here on $SU(N)$ lattice gauge theory path integrals, which are high-dimensional integrals over a collection of group-valued degrees of freedom $U_{x,\mu}\in SU(N)$. Here, $x$ specifies a site on the discrete spacetime lattice and $\mu\in\{1,\dots,N_{d}\}$ is any of the $N_{d}$ spacetime directions on the lattice. The integrals under study take the form $\begin{split}\left\langle\mathcal{O}\right\rangle\equiv\frac{1}{Z}\int\mathcal{D}U\ \mathcal{O}(U)\ e^{-S(U)},\end{split}$ (9) where $\begin{split}Z=\int\mathcal{D}U\ e^{-S(U)}\end{split}$ (10) and the action $S(U)\in\mathbb{R}$ is a function of all gauge links. Details on the construction of lattice gauge theory are presented in Sec. III. It is possible to deform the integration contour of an $SU(N)$ lattice gauge theory path integral by individually deforming each group-valued variable $U_{x,\mu}$ using the formalism presented above. In principle the deformed link, $\widetilde{U}_{x,\mu}$, could be a function of all other links on the lattice. However, evaluating the Jacobian factor arising from such an arbitrary deformation would require $O(V^{3})$ operations, where $V$ is the number of sites of the lattice. For state-of-the-art lattice field theory calculations, this is intractable. A similar obstacle is encountered in the application of normalizing flows to sampling probability distributions in image or lattice data analysis, see e.g. Ref. Papamakarios _et al._ (2019) for a review, where it is avoided by explicitly restricting to triangular Jacobians for which the determinant factor is efficiently calculable from the diagonal elements.
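The triangular-Jacobian shortcut is simple to state in code: if each variable's deformation depends only on earlier variables in a fixed ordering, the Jacobian matrix is triangular and its determinant reduces to a product of diagonal entries. A minimal illustration, with a random triangular matrix standing in for a deformation Jacobian:

```python
import numpy as np

# A triangular Jacobian's determinant is the product of its diagonal
# entries: O(V) work instead of the O(V^3) cost of a generic determinant.
rng = np.random.default_rng(0)
n = 6
M = np.tril(rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)))
det_full = np.linalg.det(M)        # generic O(n^3) evaluation
det_diag = np.prod(np.diag(M))     # triangular shortcut
print(det_full, det_diag)
```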
In this work, we similarly restrict to triangular Jacobians by allowing deformations of each variable to depend only on previous variables in a canonical ordering, described in detail for our particular studies of $SU(2)$ and $SU(3)$ in the following sections. Exploring other options is an interesting possibility for future work; for example, transformations built from a composition of multiple triangular transformations may allow more general spacetime dependence without significant increase in cost, whereas other alternatives based on convolutions may scale supra-linearly as the volume is increased. Given a deformed manifold $\mathcal{M}$ with tractable Jacobian factor, a deformed integral can be constructed to compute the same expectation value defined by Eq. (9), $\left\langle\mathcal{O}\right\rangle=\frac{1}{Z}\int_{\mathcal{M}}\mathcal{D}\widetilde{U}\ \mathcal{O}(\widetilde{U})\ e^{-S(\widetilde{U})}.$ (11) The above equality holds if the original manifold can be deformed to $\mathcal{M}$ without encountering any non-holomorphic regions of the integrand, i.e. if the integrand is holomorphic in the region bounded by the union of $\mathcal{M}$ and the original manifold. The abstract manifold $\mathcal{M}$ can be specified by a map $\widetilde{U}(U)$, which then gives a concrete prescription for computing Eq. (11), $\begin{split}\left\langle\mathcal{O}\right\rangle&=\frac{1}{Z}\int\mathcal{D}U\,J(U)\mathcal{O}(\widetilde{U}(U))\,e^{-S(\widetilde{U}(U))}.\end{split}$ (12) In contrast to Eq. (9), the path integral in Eq. (12) involves a generically complex-valued action, $S(\widetilde{U}(U))\in\mathbb{C}$, and a different observable $J(U)\mathcal{O}(\widetilde{U}(U))$ written in terms of the (efficiently computed) Jacobian factor $J(U)$. Cauchy’s theorem implies that these modifications conspire to cancel and result in an identical expectation value. It is useful to further rewrite Eq. 
(12) as a path integral with respect to the original action, $\begin{split}\left\langle\mathcal{O}\right\rangle&=\frac{1}{Z}\int\mathcal{D}U\left\{J(U)\mathcal{O}(\widetilde{U}(U))e^{-S(\widetilde{U}(U))+S(U)}\right\}e^{-S(U)}\\ &\equiv\frac{1}{Z}\int\mathcal{D}U\,\mathcal{Q}(U)\,e^{-S(U)}=\left\langle\mathcal{Q}\right\rangle.\end{split}$ (13) In this rewriting, it is clear that the deformed path integral is still accessible by performing Monte Carlo sampling with respect to the original action $S(U)$ and estimating the sample mean of $\mathcal{Q}(U)$. These methods can therefore be applied at the measurement step, after an ensemble of field configurations has been sampled. In essence, the deformed path integral defines a new observable $\mathcal{Q}(U)$ that has provably identical expectation value to the original observable $\mathcal{O}(U)$. Notably, this new observable generically has very different structure from $\mathcal{O}$, as it may be non-local and depends on the structure of the action. The variance of $\mathcal{Q}(U)$ can, however, be vastly different from the variance of $\mathcal{O}(U)$.
For most observables, samples of $\mathcal{O}(U)$ are complex-valued, and the variances of the real and imaginary components are given by $\begin{split}\operatorname{Var}[\operatorname{Re}{\mathcal{O}}]&=\frac{1}{2}\left\langle|\mathcal{O}^{2}|\right\rangle+\frac{1}{2}\left\langle\mathcal{O}^{2}\right\rangle-[\operatorname{Re}\left\langle\mathcal{O}\right\rangle]^{2},\\ \operatorname{Var}[\operatorname{Im}{\mathcal{O}}]&=\frac{1}{2}\left\langle|\mathcal{O}^{2}|\right\rangle-\frac{1}{2}\left\langle\mathcal{O}^{2}\right\rangle-[\operatorname{Im}\left\langle\mathcal{O}\right\rangle]^{2}.\end{split}$ (14) These are not generically identical to the variances of $\operatorname{Re}{\mathcal{Q}}$ and $\operatorname{Im}{\mathcal{Q}}$, $\begin{split}\operatorname{Var}[\operatorname{Re}{\mathcal{Q}}]&=\frac{1}{2}\left\langle|\mathcal{Q}^{2}|\right\rangle+\frac{1}{2}\left\langle\mathcal{Q}^{2}\right\rangle-[\operatorname{Re}\left\langle\mathcal{Q}\right\rangle]^{2},\\ \operatorname{Var}[\operatorname{Im}{\mathcal{Q}}]&=\frac{1}{2}\left\langle|\mathcal{Q}^{2}|\right\rangle-\frac{1}{2}\left\langle\mathcal{Q}^{2}\right\rangle-[\operatorname{Im}\left\langle\mathcal{Q}\right\rangle]^{2}.\end{split}$ (15) The final term is unchanged because $\left\langle\mathcal{O}\right\rangle=\left\langle\mathcal{Q}\right\rangle$, but the first two terms in both lines of Eq. (15) are not generically equal to the corresponding terms in Eq. (14). Explicitly, those terms are given by $\begin{split}\left\langle|\mathcal{Q}^{2}|\right\rangle&=\frac{1}{Z}\int\mathcal{D}U\left|J(U)\mathcal{O}(\widetilde{U}(U))\right|^{2}e^{-2\operatorname{Re}{S(\widetilde{U}(U))}+S(U)},\\ \left\langle\mathcal{Q}^{2}\right\rangle&=\frac{1}{Z}\int\mathcal{D}UJ(U)^{2}\mathcal{O}(\widetilde{U}(U))^{2}e^{-2S(\widetilde{U}(U))+S(U)}.\end{split}$ (16) Extra factors of $S(U)$ persist in both expressions in Eq.
(16), and a non-holomorphic absolute value appears for $\left\langle|\mathcal{Q}^{2}|\right\rangle$, preventing identification with the terms in Eq. (14). It can thus be fruitful to look for a modified observable $\mathcal{Q}$ for which the terms in Eq. (16) are minimized and the statistical noise is less than that of the original observable. Given an explicit parameterization of the deformed contour, standard gradient-based optimization methods can be applied to find the parameters that minimize the terms in Eq. (16). Since the parameters only affect the observable itself (the sampling weight is always $e^{-S(U)}$), the gradient of the variance with respect to the parameters can be written as an expectation value under the original Monte Carlo sampling. In this work, minimization of Eq. (16) is performed using stochastic gradient descent over the deformed contour parameters, which approximately converges to a local minimum of the variance under mild assumptions Robbins and Monro (1951); Borkar (2009); Chee and Toulis (2018); Bottou _et al._ (2018), and starting at the original manifold ensures that the result improves relative to (or at worst is equivalent to) the original variance. A potential obstacle to the deformed observable method is that some contour deformations could lead to a severe overlap problem between probability distributions proportional to $e^{-S(U)}$ and $e^{-\text{Re}[S(\widetilde{U}(U))]}$ that could make estimates of deformed observable variances from finite Monte Carlo ensembles unreliable. The possibility of constructing deformed observables with severe overlap problems can be mitigated, however, by constructing deformed observables that minimize the variance of $\mathcal{Q}$ since large fluctuations of $e^{-S(\widetilde{U}(U))+S(U)}$ will tend to increase the variance of $\mathcal{Q}$.
Overlap problems could still arise due to overfitting to the sample variance of a Monte Carlo ensemble that does not sample the field configurations associated with large fluctuations of $e^{-S(\widetilde{U}(U))+S(U)}$; practical strategies to avoid such overfitting problems are discussed in Sec. IV.3 below. The terms to minimize in Eq. (16) are specific to a particular observable $\mathcal{O}$. There is no reason to expect a single manifold deformation to be optimal for all possible observables, but one does expect similar observables to be highly correlated, and thus to receive similar variance improvements from the same deformation. This suggests two useful practical improvements: 1. Optimal manifolds can be found for a few representative observables and can be reused for other similar observables. 2. When optimizing manifold parameters, the optimal parameters for a similar observable can be used to initialize the search. In our studies of $SU(2)$ and $SU(3)$ lattice gauge theory in Sec. IV and Sec. V, the initialization approach was found to significantly reduce the number of steps required for optimization. ### II.4 Rewriting observables before deformation It is often the case that the expectation value of an observable $\mathcal{O}$ has multiple equivalent path integral representations, for example when a theory possesses a gauge or global symmetry that modifies $\mathcal{O}$ but leaves the action and expectation values of observables invariant. In addition to the freedom of choosing the parameterization of contour deformations for a given path integral discussed above, the application of contour deformations to observables includes the freedom of choosing which path integral representation to deform.
Figure 2: Ratios of the variance of observable $\operatorname{Re}(e^{i\phi})=\cos(\phi)$ to the variance of deformed observables obtained by applying the transformation $\phi\rightarrow\phi+if$ to the integrand $e^{i\phi}$ (left) or $\cos(\phi)$ (right) in the one-dimensional path integral with Euclidean action $-\beta\cos\phi$ discussed in the main text. Gray hatching indicates the region in which the variance of the deformed observable is higher than the original (i.e. there is no improvement). Simple examples of this freedom arise in two-dimensional $U(1)$ Euclidean lattice gauge theory. For instance, path integral representations of Wilson loops with unit area in this theory are proportional to $\int_{-\pi}^{\pi}\frac{d\phi}{2\pi}\ e^{i\phi}\ e^{\beta\cos\phi}=I_{1}(\beta),$ (17) which is an integral representation of the modified Bessel function of the first kind $I_{n}(\beta)$, with $n=1$, written in terms of the field variable $\phi$. The integrand can be interpreted as a product of an observable $\mathcal{O}=e^{i\phi}$ and an $e^{-S}$ factor for the Euclidean action $S=-\beta\cos\phi$. This theory has a charge conjugation symmetry that acts on the integrand of Eq. (17) by $\phi\rightarrow-\phi$, which leaves the action invariant but modifies the observable $e^{i\phi}\rightarrow e^{-i\phi}$. An equally valid integral representation of $I_{1}(\beta)$ is obtained by averaging the observable and its charge conjugate as $I_{1}(\beta)=\int_{-\pi}^{\pi}\frac{d\phi}{2\pi}\ \cos(\phi)\ e^{\beta\cos\phi}.$ (18) The choice of integrand in Eq. (17) or Eq. (18) is irrelevant for Monte Carlo calculations using the integration contours shown because the Monte Carlo estimator used in the first case is $\operatorname{Re}{e^{i\phi}}=\cos{\phi}$. However, the ability of path integral contour deformations to reduce the variance of a Monte Carlo calculation does depend on the representation. In particular, the variance of a Monte Carlo evaluation of Eq.
(17) can be significantly reduced by contour deformations while the variance of a Monte Carlo evaluation of Eq. (18) cannot, as discussed below and shown in Fig. 2. Ref. Detmold _et al._ (2020) demonstrated that the variance of two-dimensional $U(1)$ Wilson loops can be significantly reduced using contour deformations of the representation given in Eq. (17); we review the analytical derivation here. Denote averaging of $\mathcal{O}(\phi)$ with respect to the path integral of the theory with action $-\beta\cos{\phi}$ by $\left<\mathcal{O}(\phi)\right>_{\beta}\equiv\int_{-\pi}^{\pi}\frac{d\phi}{2\pi I_{0}(\beta)}\mathcal{O}(\phi)e^{\beta\cos{\phi}},$ (19) where the modified Bessel function of order $n=0$ is used to normalize the distribution. In general, the modified Bessel functions of the first kind are given by $I_{n}(\beta)=\int_{-\pi}^{\pi}\frac{d\phi}{2\pi}e^{in\phi}e^{\beta\cos\phi}$ and will be used throughout the following derivation. Further denoting the corresponding variance of $\mathcal{O}$ given $\beta$ by $\text{Var}_{\beta}[\mathcal{O}]$, the variance of $\operatorname{Re}{e^{i\phi}}=\cos\phi$ can be computed to be $\begin{split}\text{Var}_{\beta}[\operatorname{Re}{e^{i\phi}}]&=\left<\cos^{2}(\phi)\right>_{\beta}-\left<e^{i\phi}\right>_{\beta}^{2}\\ &=\frac{1}{2}\left[1+\frac{I_{2}(\beta)}{I_{0}(\beta)}\right]-\left[\frac{I_{1}(\beta)}{I_{0}(\beta)}\right]^{2}.\end{split}$ (20) The constant vertical deformation $\phi\;\rightarrow\;\widetilde{\phi}(\phi)=\phi+if$ (21) leads to a deformed observable $\mathcal{Q}_{e}(\phi)=e^{i\widetilde{\phi}(\phi)}e^{\beta\cos(\widetilde{\phi}(\phi))-\beta\cos(\phi)}$ (22) satisfying $\left<\mathcal{Q}_{e}\right>_{\beta}=\left<e^{i\phi}\right>_{\beta}$.
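This construction can be checked numerically. The following sketch samples $\phi$ from $e^{\beta\cos\phi}$ with a simple Metropolis chain at $\beta=4$ and compares the estimators $\operatorname{Re}{e^{i\phi}}$ and $\operatorname{Re}{\mathcal{Q}_{e}}$ for an illustrative, non-optimized shift $f=0.3$; both sample means agree with $I_{1}(\beta)/I_{0}(\beta)\approx 0.863$, while the deformed observable has substantially smaller variance:

```python
import numpy as np

beta, f = 4.0, 0.3          # f = 0.3 is an illustrative shift, not the optimum
rng = np.random.default_rng(0)

# reference value <e^{i phi}> = I_1(beta)/I_0(beta), by periodic quadrature
grid = np.linspace(-np.pi, np.pi, 4096, endpoint=False)
w_ref = np.exp(beta * np.cos(grid))
ref = (np.cos(grid) * w_ref).sum() / w_ref.sum()

# Metropolis sampling of p(phi) proportional to exp(beta cos phi)
n_therm, n_samp = 10_000, 200_000
phi, samples = 0.0, np.empty(n_samp)
for i in range(n_therm + n_samp):
    prop = phi + rng.uniform(-1.0, 1.0)
    if rng.random() < np.exp(beta * (np.cos(prop) - np.cos(phi))):
        phi = prop
    if i >= n_therm:
        samples[i - n_therm] = phi

orig = np.cos(samples)                  # Re e^{i phi}
tphi = samples + 1j * f                 # constant vertical deformation
Q = np.exp(1j * tphi + beta * (np.cos(tphi) - np.cos(samples)))   # Q_e
print(f"exact {ref:.4f}  original {orig.mean():.4f}  deformed {Q.real.mean():.4f}")
print(f"variances: original {orig.var():.4f}  deformed {Q.real.var():.4f}")
```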
Using $\operatorname{Re}{\mathcal{Q}_{e}}$ as a Monte Carlo estimator instead of $\operatorname{Re}{e^{i\phi}}$ results in variance $\begin{split}\text{Var}_{\beta}[\operatorname{Re}{\mathcal{Q}_{e}}]&=e^{-2f}\left[\frac{R(\beta,f)}{2}+V(\beta,f)\right]-\left[\frac{I_{1}(\beta)}{I_{0}(\beta)}\right]^{2},\end{split}$ (23) where $\begin{split}V(\beta,f)&=\left(\frac{e^{f}-\frac{1}{2}}{e^{-f}-\frac{1}{2}}\right)\frac{I_{2}(\beta\sqrt{5-4\cosh(f)})}{2I_{0}(\beta)},\\ R(\beta,f)&=\frac{I_{0}(\beta(2\cosh(f)-1))}{I_{0}(\beta)}.\end{split}$ (24) For positive $\beta$, $\text{Var}_{\beta}[\operatorname{Re}{\mathcal{Q}_{e}}]$ generically has a minimum at some strictly positive $f$ that defines the optimal contour for reducing the variance of $\mathcal{Q}_{e}$. Using deformed observables associated with this optimal contour rather than the original contour leads to variance reduction of more than an order of magnitude for $\beta\gtrsim 4$ as shown in Fig. 2. Larger variance reductions can be obtained for multi-dimensional generalizations of Eq. (17) associated with larger $U(1)$ Wilson loops Detmold _et al._ (2020). Applying the same constant vertical deformation to the representation in Eq. (18) leads to an alternative deformed observable $\mathcal{Q}_{c}(\phi)=\cos(\widetilde{\phi}(\phi))\ e^{\beta\cos(\widetilde{\phi}(\phi))-\beta\cos(\phi)}$ (25) satisfying $\left<\mathcal{Q}_{c}\right>_{\beta}=\left<e^{i\phi}\right>_{\beta}$. The real part of this alternative deformed observable has variance $\begin{split}\text{Var}_{\beta}[\operatorname{Re}{\mathcal{Q}_{c}}]&=\frac{1}{2}e^{-2f}V(\beta,f)+\frac{1}{2}e^{2f}V(\beta,-f)\\ &\hskip 10.0pt+\frac{1}{2}R(\beta,f)-\left[\frac{I_{1}(\beta)}{I_{0}(\beta)}\right]^{2}.\end{split}$ (26) This expression is symmetric under $f\rightarrow-f$ and its gradient with respect to $f$ therefore vanishes at $f=0$, which can be verified to be a minimum with respect to $f$.
The deformed observable $\mathcal{Q}_{c}$ associated with integrating along the real axis is thus the choice with lowest variance, while increasing $|f|$ away from zero always leads to increased variance as illustrated for several choices of $\beta$ in Fig. 2. The qualitative features of these results can be understood from the behaviors of magnitude and phase fluctuations of deformed observables. The magnitude of $e^{i\widetilde{\phi}}$ is reduced for all $\phi$ by a constant vertical deformation with $f>0$. To preserve the results of the holomorphic integral in Eq. (17) under this deformation, cancellations from phase fluctuations must correspondingly be less severe and sign/StN problems should therefore be improved. On the other hand, the magnitude of $\cos(\widetilde{\phi})$ always satisfies $|\cos(\widetilde{\phi})|\geq|\cos(\phi)|$ and therefore can only be increased by applying a vertical contour deformation. This suggests that phase fluctuations must become more severe to preserve deformation-invariant integral results and that sign/StN problems will be worsened. This argument applies to non-constant vertical deformations and suggests that it is generically difficult to construct a contour deformation of Eq. (18) that leads to a deformed observable with variance comparable to the observable $\mathcal{Q}_{e}$ obtained by deforming Eq. (17). In the non-Abelian gauge theories that are the focus of this work, the traces of Wilson loops define gauge invariant observables related to the potential energies of static quark configurations analogous to $e^{i\phi}$ in $U(1)$ gauge theory. The eigenvalues of Wilson loops in $U(N)$ or $SU(N)$ gauge theories can be expressed as $e^{i\phi_{j}}$, where $\phi_{j}\in\mathbb{R}$ for $j=1,\ldots,N$ and for the case of $SU(N)$ the phases satisfy $\sum_{j=1}^{N}\phi_{j}=0\mod 2\pi$. The trace of the Wilson loop is given by $\sum_{j}e^{i\phi_{j}}$.
For $SU(N)$ gauge theory, the unit determinant condition therefore results in analogous obstacles to improving the variance of the trace of Wilson loops if the observable is not first rewritten using symmetries. For the case of $SU(2)$, there is a precise correspondence between Wilson loop traces and Eq. (18). The unit determinant condition requires the Wilson loop eigenvalues to be of the form $e^{i\phi}$ and $e^{-i\phi}$, and the trace appearing in both the observable and the Wilson action Wilson (1974) (discussed below in Sec. IV) is proportional to $\cos(\phi)$. It is therefore similarly impossible to improve the variance of a unit area $SU(2)$ Wilson loop trace using constant vertical deformations, and the previous analysis suggests it is difficult to make significant variance improvements to traces of $SU(2)$ Wilson loops of general area using vertical contour deformations. However, this analysis also indicates that $e^{i\phi}$ could provide a suitable starting point for defining deformed observables. The gauge invariant component of the Wilson loop eigenvalue $e^{i\phi}$ is $\cos(\phi)$, making this a suitable rewriting of the observable that does not affect the expectation value but enables variance improvements by contour deformation. Alternatively, we could use linearity of the expectation value to start from an explicitly gauge-invariant observable and rewrite $\left<\cos(\phi)\right>=\frac{1}{2}\left<e^{i\phi}\right>+\frac{1}{2}\left<e^{-i\phi}\right>$. Most generally, we could then apply distinct deformations to each expectation value on the right-hand-side of this expression, and in particular applying constant imaginary shifts with opposite signs will result in the same StN improvement for both terms.
In this example, the two expectation values can be equated using charge conjugation symmetry, which allows the exact rewriting $\left<\cos(\phi)\right>=\left<e^{i\phi}\right>$, but the technique of splitting the expectation value using linearity and applying independent deformations may be useful where such a symmetry is not present. In the case of $SU(N)$ with $N\geq 3$, complex eigenvalues are not guaranteed to come in complex conjugate pairs, but the unit determinant condition still guarantees that any vertical deformation of the eigenvalue phases will increase the magnitude of at least one $e^{i\phi_{j}}$ phase factor. In the sum over eigenvalues defining the trace of a Wilson loop, the largest eigenvalue magnitude will set the typical observable magnitude on generic gauge fields in the integration domain. While it is possible for stronger cancellation between eigenvalues to occur on particular gauge fields, this larger typical magnitude throughout the majority of the domain of integration suggests that cancellations from phase fluctuations with similar severity to those of the original observable will be required for such deformed observables to achieve identical expectation values, and significant sign/StN problems may be expected. If one instead chooses the non-gauge-invariant integrand $e^{i\phi_{1}}$ to measure the same expectation value, the determinant constraint does not prevent exponentially decreasing the magnitude throughout the integration domain using contour deformations. For example, one can choose the shift $\displaystyle\phi_{1}$ $\displaystyle\rightarrow\;\widetilde{\phi}_{1}=\phi_{1}+if$ (27) $\displaystyle\phi_{2}$ $\displaystyle\rightarrow\;\widetilde{\phi}_{2}=\phi_{2}-if/2$ $\displaystyle\phi_{3}$ $\displaystyle\rightarrow\;\widetilde{\phi}_{3}=\phi_{3}-if/2$ which has the effect of reducing the observable magnitude by $e^{-f}$ on every gauge field in the integration domain.
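A minimal numerical check of this shift: the sketch below builds a random $SU(3)$ matrix (via QR decomposition of a complex Gaussian matrix, an illustrative construction not taken from the paper), applies the shift $(if,-if/2,-if/2)$ to its eigenphases, and confirms that the determinant magnitude stays at $1$ while the first eigenvalue's magnitude shrinks by exactly $e^{-f}$:

```python
import numpy as np

rng = np.random.default_rng(1)
f = 0.4   # illustrative shift parameter

# random SU(3) matrix from the QR decomposition of a complex Gaussian matrix
A = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
Q, R = np.linalg.qr(A)
Q = Q @ np.diag(np.diag(R) / np.abs(np.diag(R)))   # a U(3) matrix
U = Q / np.linalg.det(Q) ** (1 / 3)                # unit determinant: SU(3)

# deform the eigenphases by (i f, -i f/2, -i f/2)
w, V = np.linalg.eig(U)
phi = np.angle(w)
shift = np.array([f, -f / 2, -f / 2])
w_def = np.exp(1j * phi - shift)                   # e^{i (phi_j + i shift_j)}
U_def = V @ np.diag(w_def) @ np.linalg.inv(V)      # an SL(3, C) element

print(abs(np.linalg.det(U_def)))                   # |det| stays 1
print(abs(w_def[0]) / abs(w[0]))                   # reduced by e^{-f} ~ 0.670
```

Because the three shifts sum to zero, the product of the deformed eigenvalues keeps unit magnitude, exactly as the unit determinant condition requires.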
Rewriting Wilson loop observables based on eigenvalues requires diagonalization, and a more practical alternative is to use the $(1,1)$ component of the (matrix-valued) Wilson loop as another non-gauge-invariant function with the same expectation value as the trace divided by $N$. The phase of this (or any other) single color component of the Wilson loop is not constrained by the unit determinant condition and therefore one expects that a suitable parameterization can be found in which vertical deformations can be applied to the phase of the $(1,1)$ component of the Wilson loop analogously to $e^{i\phi}$. Such parameterizations are given for $SU(2)$ in Sec. IV and for $SU(3)$ in Sec. V following Ref. Bronzan (1988) and are used as starting points for defining deformed observables with reduced variance in calculations of Wilson loop expectation values. An alternative parameterization of $SU(2)$ in which the real part of the $(1,1)$ component of the Wilson loop is expressed as $\cos(\alpha/2)$ is explored in Appendix B; as expected, vertical contour deformations do not improve the variance of unit area Wilson loops and less (though still significant) variance reduction is found for larger area Wilson loops with this alternative parameterization. ## III Noise problems in $SU(N)$ lattice gauge theory A simple setting for analyzing $SU(N)$ lattice gauge theory is obtained by considering $(1+1)$D Euclidean spacetime with open boundary conditions. In this spacetime geometry, much like in $(3+1)$ dimensions, the theory features confinement of static test charges and an exponentially severe StN problem associated with static quark correlation functions, which can be identified with Wilson loops Wilson (1974). 
Numerical calculations of Wilson loop expectation values can be performed at much lower computational cost in $(1+1)$D than in $(3+1)$D, facilitating a first exploration of path integral contour deformations applied to non-Abelian gauge theory observables on non-trivial lattices. Analytic results for $(1+1)$D observables such as Wilson loops are also known Wadia (2012); Gross and Witten (1980) and can be used to verify the correctness of numerical results. These results are extended to analytic results for the variances of $(1+1)$D Wilson loops below, which are then used in Secs. IV–V to verify the correctness and study the effectiveness of contour deformations applied to Wilson loops. In particular, analytic results can be used to determine the StN gains obtained by using deformed observables even when the corresponding undeformed observables are too noisy to be determined reliably. ### III.1 $SU(N)$ lattice gauge theory in $(1+1)$D Lattice gauge theory in $(1+1)$D is defined on a set $\mathcal{V}$ of Euclidean spacetime points $x$ arranged in a discrete two-dimensional lattice, with vectors $\hat{1}$ and $\hat{2}$ giving the displacement in lattice units between neighboring lattice sites along the two Euclidean spacetime axes. The discretized gauge field is represented by group-valued variables on each link of the lattice, with $U_{x,\mu}$ denoting the variable associated with link $(x,x+\hat{\mu})$. The physical content of the theory is encoded in the (discretized) action.
We consider the Wilson action for $SU(N)$ lattice gauge theory Wilson (1974), given for a $(1+1)$D Euclidean spacetime volume by $S\equiv-\frac{1}{g^{2}}\sum_{x\in\mathcal{V}}\operatorname{tr}\left(P_{x}+P_{x}^{-1}\right),$ (28) where $g$ is the bare gauge coupling and each _plaquette_ $P_{x}\in SU(N)$ is defined as $P_{x}\equiv U_{x,1}U_{x+\hat{1},2}U^{-1}_{x+\hat{2},1}U^{-1}_{x,2}.$ (29) Writing the action and plaquettes using inversion rather than Hermitian conjugation allows the relevant integrands to be interpreted in the following sections as holomorphic functions of integration variables throughout the complexified domain. For $SU(N)$ elements these operations are equivalent, but analytically continuing the action to $SL(N,\mathbb{C})$ requires the use of the inverse Aarts and Stamatescu (2008); Sexty (2014); Seiler (2018). Expectation values of operators $\mathcal{O}(U)$ in the lattice regularized theory are defined by specializing Eq. (9)–(10) to the particular case of $SU(N)$ lattice gauge theory $\begin{split}\left\langle\mathcal{O}\right\rangle=\frac{1}{Z}\int\mathcal{D}U\ \mathcal{O}(U)\ e^{-S(U)},\end{split}$ (30) where the Euclidean partition function $Z$ is defined by $\begin{split}Z=\int\mathcal{D}U\ e^{-S(U)}\end{split}$ (31) and $\mathcal{D}U=\prod_{x,\mu}dU_{x,\mu}$ in terms of the Haar measure $dU_{x,\mu}$ of $SU(N)$. With open boundary conditions in $(1+1)$D, the partition function defined by this action factorizes into a product of independent integrals over each $P_{x}$. To exploit this factorization in $(1+1)$D, a gauge fixing prescription can be applied in which $U_{x,2}=1$ for all $x$ and $U_{x,1}=1$ for sites with $x_{2}=0$ (a maximal tree gauge). In this gauge, $P_{x}=U_{x,1}U_{x+\hat{2},1}^{-1},$ (32) which can be easily inverted to obtain $U_{x,1}=\left[\prod_{k=0}^{x_{2}-1}P_{x+k\hat{2}}\right]^{-1}.$ (33) The variables $P_{x}$ are therefore in one-to-one correspondence with the remaining non-gauge-fixed $U_{x,1}$. 
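As a quick numerical illustration of the structure above (illustrative code, not from the text): under a gauge transformation $U_{x,\mu}\rightarrow g_{x}U_{x,\mu}g_{x+\hat{\mu}}^{-1}$ the plaquette of Eq. (29) transforms by conjugation, $P_{x}\rightarrow g_{x}P_{x}g_{x}^{-1}$, so its trace is gauge invariant. A minimal $SU(2)$ sketch:

```python
import numpy as np

def random_su2(rng):
    """Haar-random SU(2) matrix from a normalized quaternion."""
    a = rng.normal(size=4)
    a /= np.linalg.norm(a)
    return np.array([[a[0] + 1j * a[3], a[2] + 1j * a[1]],
                     [-a[2] + 1j * a[1], a[0] - 1j * a[3]]])

rng = np.random.default_rng(0)
# Links of one elementary cell: U1 = U_{x,1}, U2 = U_{x+1,2}, U3 = U_{x+2,1}, U4 = U_{x,2}.
U1, U2, U3, U4 = (random_su2(rng) for _ in range(4))
P = U1 @ U2 @ np.linalg.inv(U3) @ np.linalg.inv(U4)   # plaquette, Eq. (29)

# Gauge transform with independent group elements at the four corners.
g0, g1, g2, g12 = (random_su2(rng) for _ in range(4))
V1 = g0 @ U1 @ np.linalg.inv(g1)     # link (x, x+1)
V2 = g1 @ U2 @ np.linalg.inv(g12)    # link (x+1, x+1+2)
V3 = g2 @ U3 @ np.linalg.inv(g12)    # link (x+2, x+2+1)
V4 = g0 @ U4 @ np.linalg.inv(g2)     # link (x, x+2)
Pg = V1 @ V2 @ np.linalg.inv(V3) @ np.linalg.inv(V4)  # equals g0 P g0^{-1}

assert abs(np.linalg.det(P) - 1) < 1e-10
assert abs(np.trace(Pg) - np.trace(P)) < 1e-10
```

This conjugation property is what allows the maximal tree gauge above to eliminate redundant links without changing traced observables.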
The Haar measure is invariant under this change of variables, and the path integral defining the partition function factorizes as $\begin{split}Z=\prod_{x\in\mathcal{V}^{\prime}}z=z^{|\mathcal{V}^{\prime}|},\end{split}$ (34) where $\mathcal{V}^{\prime}\subset\mathcal{V}$ is the subset of lattice points with unconstrained $U_{x,1}$ in this gauge (those for which $x_{2}\neq 0$) and $z$ is the contribution to the partition function from a single plaquette, $\begin{split}z\equiv\int dP\ e^{\frac{1}{g^{2}}\operatorname{tr}\left(P+P^{-1}\right)}.\end{split}$ (35) The calculations of $z$ and similar single-variable $SU(N)$ integrals are presented in Appendix A. Wilson loops are defined by the matrix-valued quantity $\begin{split}W_{\mathcal{A}}\equiv\prod_{x,\mu\in\partial\mathcal{A}}U_{x,\mu},\end{split}$ (36) where $\prod_{x,\mu\in\partial\mathcal{A}}U_{x,\mu}$ represents an ordered product of links along the boundary $\partial\mathcal{A}$ of the two-dimensional region $\mathcal{A}$ with area $A$. The expectation value of the gauge-invariant observable $\frac{1}{N}\operatorname{tr}\left(W_{\mathcal{A}}\right)$ probes the interaction between a pair of static quarks if the region $\mathcal{A}$ is taken to be rectangular. Inserting Eq. (33) into Eq. (36) gives (for simplicity we restrict to rectangular Wilson loops with one corner at the origin) $\begin{split}\frac{1}{N}\operatorname{tr}\left(W_{\mathcal{A}}\right)=\frac{1}{N}\operatorname{tr}\left(\prod_{x\in\mathcal{A}}P_{x}\right).\end{split}$ (37) Using linearity of expectation values and factorization of path integrals analogous to Eq.
(34), the expectation values of Wilson loops can be related to products of (matrix-valued) single-variable expectation values, $\left<\frac{1}{N}\operatorname{tr}\left(W_{\mathcal{A}}\right)\right>=\frac{1}{N}\operatorname{tr}\left(\prod_{x\in\mathcal{A}}\left\langle P_{x}\right\rangle\right).$ (38) Each single-variable expectation value is given by $\left<P_{x}^{ab}\right>=\left\langle\chi_{1}\right\rangle\delta^{ab}$, allowing the traced Wilson loop to be written as a product of scalars, $\begin{split}\left\langle\frac{1}{N}\operatorname{tr}\left(W_{\mathcal{A}}\right)\right\rangle&=\prod_{x\in\mathcal{A}}\left\langle\chi_{1}\right\rangle=\left\langle\chi_{1}\right\rangle^{A},\\\ \end{split}$ (39) where we have introduced the single-variable normalized expectation value of the group character function $\chi_{1}(P)=\operatorname{tr}(P)$, $\begin{split}\left\langle\chi_{1}\right\rangle&\equiv\frac{1}{z}\int dP\ \frac{1}{N}\ \operatorname{tr}(P)\ e^{\frac{1}{g^{2}}\operatorname{tr}\left(P+P^{-1}\right)},\end{split}$ (40) whose value is computed in Appendix A. Eq. (39) implies that Wilson loop expectation values follow area law scaling, $\left\langle\operatorname{tr}(W_{\mathcal{A}})/N\right\rangle\sim e^{-\sigma A}$, and $SU(N)$ gauge theory in $(1+1)$D confines for all values of the coupling, with a separation-independent force between static test charges given by the string tension $\sigma\equiv-\lim_{A\rightarrow\infty}\partial_{A}\ln\left\langle\operatorname{tr}(W_{\mathcal{A}})/N\right\rangle=-\ln\left\langle\chi_{1}\right\rangle.$ (41) Although $\left\langle\chi_{1}\right\rangle$ is in general given by a convergent infinite series in Eq. (77), in the case of $SU(2)$ a simpler form can be found in terms of modified Bessel functions, $\begin{split}\sigma^{SU(2)}=\ln\left(\frac{I_{1}(4/g^{2})}{I_{2}(4/g^{2})}\right),\end{split}$ (42) which goes to zero as $g^{2}\rightarrow 0$.
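The single-plaquette quantities entering Eqs. (35), (40), and (42) can be checked numerically for $SU(2)$. The sketch below (our own cross-check, not code from the text) uses the fact that in a standard angular parameterization of $SU(2)$ one has $\operatorname{tr}P=2\sin\theta\cos\phi^{1}$ with normalized Haar measure $\sin(2\theta)\,d\theta\,d\phi^{1}\,d\phi^{2}/(4\pi^{2})$, together with the standard one-link result $z=(2/\beta)I_{1}(\beta)$ for $\beta=4/g^{2}$:

```python
import numpy as np
from scipy.special import iv

g = 0.98
beta = 4.0 / g**2        # (1/g^2) tr(P + P^{-1}) = beta sin(theta) cos(phi1) for SU(2)

# Midpoint quadrature grids; the phi2 integral is trivial and contributes 2*pi.
nt, nphi = 800, 800
th = (np.arange(nt) + 0.5) * (np.pi / 2) / nt
p1 = (np.arange(nphi) + 0.5) * (2 * np.pi) / nphi
TH, P1 = np.meshgrid(th, p1, indexing="ij")
dV = (np.pi / 2) / nt * (2 * np.pi) / nphi * (2 * np.pi)

haar = np.sin(2 * TH) / (4 * np.pi**2)
boltz = np.exp(beta * np.sin(TH) * np.cos(P1))

z_numeric = np.sum(haar * boltz) * dV                       # Eq. (35)
chi1_numeric = np.sum(haar * boltz * np.sin(TH) * np.cos(P1)) * dV / z_numeric  # Eq. (40)

z_exact = (2 / beta) * iv(1, beta)       # standard SU(2) one-link integral
chi1_exact = iv(2, beta) / iv(1, beta)   # <chi_1>, so sigma = ln(I1/I2), Eq. (42)
sigma = -np.log(chi1_numeric)
```

For $g=0.98$ this gives $\sigma\approx 0.4$, matching the first row of Table 1.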
This observation generalizes to all $SU(N)$ groups: the lattice-units string tension goes to zero and the static quark correlation length grows to infinity in the limit $g^{2}\rightarrow 0$ in all cases. We can consider this to be the naive continuum limit of the theory, though the correlation lengths of dynamical quantities such as plaquettes or localized Wilson loops remain finite by the factorization of the path integral. When investigating the approach to the continuum in Secs. IV and V, the coupling is decreased while fixing the dimensionless quantity $\sigma V$, where $V$ is the total number of plaquettes; the particular choices of couplings and $V$ used in our numerical studies are reported in Table 1. Results are plotted versus $\sigma A$ when comparing quantities at fixed physical separation is important.

| $\sigma$ | $V$ | $SU(2)$: $g$ | $SU(2)$: $\beta$ | $SU(3)$: $g$ | $SU(3)$: $\beta$ |
|---|---|---|---|---|---|
| 0.4 | $16$ | 0.98 | 4.2 | 0.72 | 11.7 |
| 0.2 | $32$ | 0.71 | 8.0 | 0.53 | 21.7 |
| 0.1 | $64$ | 0.51 | 15.5 | 0.38 | 41.8 |

Table 1: The couplings used in our numerical studies of $SU(2)$ and $SU(3)$ lattice gauge theory. The dimensionless quantity $\sigma V$ is fixed to $6.4$ while $\sigma$ and $V$ are individually varied. The conventional Wilson action parameter $\beta=2N/g^{2}$ is also reported.

### III.2 Noise and sign problems in the Wilson loop

Although the expectation value $\left\langle\operatorname{tr}(W_{\mathcal{A}})/N\right\rangle$ is real, the integrand $\operatorname{tr}(W_{\mathcal{A}})/N$ has fluctuating signs (for $N=2$) or fluctuating complex phases (for $N\geq 3$) across the domain of integration. These fluctuations result in a sign/StN problem for this observable.
The sample mean of $\operatorname{Re}\operatorname{tr}(W_{\mathcal{A}})/N$ gives an estimator for $\left\langle\operatorname{tr}(W_{\mathcal{A}})/N\right\rangle$, and the variance of this estimator can be directly computed, $\begin{split}&\text{Var}[\operatorname{Re}\operatorname{tr}(W_{\mathcal{A}})/N]\\\ &\quad=\frac{1}{N^{2}}\left<\left(\operatorname{Re}\operatorname{tr}(W_{\mathcal{A}})\right)^{2}\right>-\frac{1}{N^{2}}\left(\operatorname{Re}\left<\operatorname{tr}(W_{\mathcal{A}})\right>\right)^{2}\\\ &\quad=\frac{1}{2N^{2}}\left<\left|\operatorname{tr}(W_{\mathcal{A}})\right|^{2}\right>+\frac{1}{2N^{2}}\operatorname{Re}\left<\operatorname{tr}(W_{\mathcal{A}})^{2}\right>\\\ &\qquad-\frac{1}{N^{2}}\left(\operatorname{Re}\left<\operatorname{tr}(W_{\mathcal{A}})\right>\right)^{2}.\end{split}$ (43) The expectation values in the first and second terms in the variance can be factorized analogously to the Wilson loop expectation value, and are shown in Appendix A to be $\displaystyle\left<\left|\operatorname{tr}(W_{\mathcal{A}})\right|^{2}\right>$ $\displaystyle=1+(N^{2}-1)\langle\chi_{1,-1}\rangle^{A}$ (44) $\displaystyle\left<\operatorname{tr}(W_{\mathcal{A}})^{2}\right>$ $\displaystyle=\frac{N(N+1)}{2}\langle\chi_{2}\rangle^{A}+\frac{N(N-1)}{2}\langle\chi_{1,1}\rangle^{A},$ in terms of the single-site integrals $\left\langle\chi_{1,-1}\right\rangle$, $\left\langle\chi_{2}\right\rangle$, and $\left\langle\chi_{1,1}\right\rangle$, defined in Eqs. (80) and (84). In total, the variance is $\begin{split}\operatorname{Var}[\operatorname{Re}\operatorname{tr}(W_{\mathcal{A}})/N]&=\frac{1}{2N^{2}}+\frac{O(e^{-cA})}{2N^{2}}-e^{-2\sigma A},\end{split}$ (45) where $c$ is a constant.
The fact that $\left<\chi_{r}\right><1$ for non-trivial irreps $r$ (assuming that $g^{2}$ is finite) Drouffe and Zuber (1983) implies that $c>0$ and therefore that the variance is asymptotically constant as $A\rightarrow\infty$, $\operatorname{Var}[\operatorname{Re}\operatorname{tr}(W_{\mathcal{A}})/N]\sim\frac{1}{2N^{2}}.$ (46) The signal-to-noise ratio for $n$ samples can be written exactly in terms of Eqs. (43), (44), and (39), but for this analysis it is sufficient to identify the asymptotic behavior from Eqs. (45) and (39), giving $\displaystyle\text{StN}[\operatorname{Re}\operatorname{tr}(W_{\mathcal{A}})/N]$ $\displaystyle=\frac{\left\langle\frac{1}{N}\operatorname{tr}\left(W_{\mathcal{A}}\right)\right\rangle}{\sqrt{\frac{1}{n}\text{Var}[\operatorname{Re}\frac{1}{N}\operatorname{tr}(W_{\mathcal{A}})]}}$ (47) $\displaystyle\sim N\sqrt{2n}e^{-\sigma A},$ which degrades exponentially with area $A$. For the estimator $\operatorname{Re}\operatorname{tr}(W_{\mathcal{A}})/N$, the analysis above shows that this degradation can only be counteracted by exponentially increasing the number of samples $n$. Eqs. (44) and (45) also make clear that the leading asymptotic behavior of the variance is due to the typical magnitude-squared of the observable, $\left<|\operatorname{tr}(W_{\mathcal{A}})|^{2}/N^{2}\right>$, which remains $O(1)$ for all areas. Cancellations due to phase fluctuations are required to reproduce the exponentially small Wilson loop expectation values predicted for large areas, confirming that the StN problem can be related to a sign problem in the Wilson loop observable. Attributing the StN problem to $O(1)$ magnitudes for individual samples of the Wilson loop observable at all areas also inspires our deformations of the Wilson loop observable in the following sections.
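For $SU(2)$, the single-site integrals reduce to modified Bessel functions, so the variance and StN can be evaluated in closed form (Eqs. (48)–(49) below). A small helper, with our own naming:

```python
import numpy as np
from scipy.special import iv

def su2_var_stn(g, A, n=1):
    """Closed-form SU(2) variance and StN of Re tr(W_A)/2 in (1+1)D,
    built from modified Bessel functions (Eqs. (48)-(49) below)."""
    x = 4.0 / g**2
    sigma = np.log(iv(1, x) / iv(2, x))                  # string tension, Eq. (42)
    r = iv(3, x) / iv(1, x)                              # < 1, controls the O(e^{-cA}) term
    var = 0.25 + 0.75 * r**A - np.exp(-2 * sigma * A)
    stn = (2 * np.sqrt(n) * np.exp(-sigma * A)
           / np.sqrt(1 + 3 * r**A - 4 * np.exp(-2 * sigma * A)))
    return var, stn

# The variance plateaus at 1/(2 N^2) = 1/4 while the StN decays like e^{-sigma A}.
var1, stn1 = su2_var_stn(0.71, 1)
var200, stn200 = su2_var_stn(0.71, 200)
```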
The quantity $\left<|\operatorname{tr}(W_{\mathcal{A}})|^{2}/N^{2}\right>$ can be written as an integral of a non-holomorphic integrand which will generically be modified by contour deformations of the path integral. If we choose contour deformations that reduce the average magnitude of the observable, this quantity, and thus the leading term of the variance, will be reduced. The observable mean is unchanged and the StN ratio will thus increase under such a deformation. For $SU(2)$, the single-site integrals can be evaluated straightforwardly (see Appendix A) and the $SU(2)$ variance is $\begin{split}\text{Var}&[\operatorname{Re}\operatorname{tr}(W_{\mathcal{A}}^{SU(2)})/N]\\\ &=\frac{1}{4}+\frac{3}{4}\left(\frac{I_{3}(4/g^{2})}{I_{1}(4/g^{2})}\right)^{A}-e^{-2\sigma^{SU(2)}A},\end{split}$ (48) where $\sigma^{SU(2)}$ is given in Eq. (42). The StN of $SU(2)$ Wilson loops in $(1+1)$D can therefore be explicitly calculated, $\begin{split}\text{StN}&[\operatorname{Re}\operatorname{tr}(W_{\mathcal{A}}^{SU(2)})/N]\\\ &=\frac{2\sqrt{n}e^{-\sigma^{SU(2)}A}}{\sqrt{1+3\left(\frac{I_{3}(4/g^{2})}{I_{1}(4/g^{2})}\right)^{A}-4e^{-2\sigma^{SU(2)}A}}}.\end{split}$ (49) Using numerical evaluation of the corresponding single-site integrals for $SU(N)$ Wilson loops yields theoretical curves for the variance and signal-to-noise for general $N$. In the studies below, we choose to deform the $(1,1)$ component of the Wilson loop, $W_{\mathcal{A}}^{11}$, instead of $\operatorname{tr}{W_{\mathcal{A}}}/N$ following the reasoning of Sec. II.4. The variance of $W_{\mathcal{A}}^{11}$ can be related to the variance of $\operatorname{tr}(W_{\mathcal{A}})/N$ and is compared to Monte Carlo results in the following sections.

## IV $SU(2)$ path integral contour deformations

As a proof-of-principle, we apply path integral contour deformations to Wilson loop calculations in $SU(2)$ lattice gauge theory in $(1+1)$D with open boundary conditions.
An identical setting with gauge group $SU(3)$ is investigated in the following section.

### IV.1 Gauge field parameterization

There are many possible parameterizations of the $SU(2)$ group manifold, any of which can be used to define valid path integral contour deformations. We argue above that it is advantageous to consider a single component of the Wilson loop, taken without loss of generality to be $W_{\mathcal{A}}^{11}$, as the observable whose path integral contour is deformed in order to calculate $\left<W_{\mathcal{A}}^{11}\right>=\left<\operatorname{tr}(W_{\mathcal{A}})/N\right>$. Contour deformations that reduce the magnitude of $W_{\mathcal{A}}^{11}$ in generic gauge field configurations while preserving $\left<W_{\mathcal{A}}^{11}\right>$ can be expected to reduce phase fluctuations and therefore the variance of $W_{\mathcal{A}}^{11}$. The angular parameterization of each plaquette $P_{x}\in SU(2)$ is useful for this purpose, and is explicitly defined by $\displaystyle P_{x}^{11}$ $\displaystyle=\sin\theta_{x}\,e^{i\phi^{1}_{x}},$ (50) $\displaystyle P_{x}^{12}$ $\displaystyle=\cos\theta_{x}\,e^{i\phi^{2}_{x}},$ $\displaystyle P_{x}^{21}$ $\displaystyle=-\cos\theta_{x}\,e^{-i\phi^{2}_{x}},$ $\displaystyle P_{x}^{22}$ $\displaystyle=\sin\theta_{x}\,e^{-i\phi^{1}_{x}},$ following the generalized $SU(N)$ angular parameterization given in Ref. Bronzan (1988). The azimuthal angles satisfy $\phi^{1}_{x},\phi^{2}_{x}\in[0,2\pi]$, with endpoints identified, while the angle $\theta_{x}$ spans the finite interval $[0,\pi/2]$. The normalized Haar measure can be written in these coordinates as $\displaystyle dP_{x}$ $\displaystyle=h(\Omega_{x})\ d\Omega_{x}\equiv\frac{1}{4\pi^{2}}\sin(2\theta_{x})\ d\theta_{x}\ d\phi^{1}_{x}\ d\phi^{2}_{x}.$ (51) We begin by considering the effects of simple deformations using these parameters.
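The parameterization can be sanity-checked directly; the sketch below (illustrative code, not from the text) verifies that Eq. (50) produces special unitary matrices and that the density in Eq. (51) is normalized:

```python
import numpy as np

def su2_from_angles(theta, phi1, phi2):
    """Angular parameterization of SU(2), Eq. (50)."""
    return np.array([
        [np.sin(theta) * np.exp(1j * phi1),   np.cos(theta) * np.exp(1j * phi2)],
        [-np.cos(theta) * np.exp(-1j * phi2), np.sin(theta) * np.exp(-1j * phi1)],
    ])

rng = np.random.default_rng(0)
P = su2_from_angles(rng.uniform(0, np.pi / 2),
                    rng.uniform(0, 2 * np.pi),
                    rng.uniform(0, 2 * np.pi))
assert abs(np.linalg.det(P) - 1) < 1e-12                    # special
assert np.allclose(P @ P.conj().T, np.eye(2), atol=1e-12)   # unitary

# Eq. (51) is normalized: the integral of sin(2 theta)/(4 pi^2) over
# theta in [0, pi/2] and phi1, phi2 in [0, 2 pi] equals 1.
nt = 2000
th = (np.arange(nt) + 0.5) * (np.pi / 2) / nt
norm = np.sum(np.sin(2 * th)) * (np.pi / 2) / nt * (2 * np.pi)**2 / (4 * np.pi**2)
assert abs(norm - 1) < 1e-6
```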
In the simplest case of a region $\mathcal{A}$ with area $A=1$, the Wilson loop consists of a single plaquette, $W_{\mathcal{A}}^{11}=P_{x}^{11}$, where the loop starts and ends at site $x$. The magnitude of $W_{\mathcal{A}}^{11}$ can be reduced by $e^{-\lambda}$ by deforming $\phi^{1}_{x}\rightarrow\phi^{1}_{x}+i\lambda$ analogously to the approach described for $e^{i\phi}$ integrals above. In the case of $A=2$, the Wilson loop can be written in terms of the product of two plaquettes, $W_{\mathcal{A}}^{11}=(P_{x}P_{x^{\prime}})^{11}$. In the angular parameterization, the Wilson loop is a sum of two terms $(P_{x}P_{x^{\prime}})^{11}=\sin\theta_{x}\sin\theta_{x^{\prime}}e^{i\phi_{x}^{1}+i\phi^{1}_{x^{\prime}}}-\cos\theta_{x}\cos\theta_{x^{\prime}}e^{i\phi_{x}^{2}-i\phi^{2}_{x^{\prime}}}.$ (52) The first term involves products of diagonal entries whose magnitude can be reduced by $e^{-\lambda}$ by taking $\phi_{x}^{1}\rightarrow\phi_{x}^{1}+i\lambda$ or $\phi^{1}_{x^{\prime}}\rightarrow\phi^{1}_{x^{\prime}}+i\lambda$ and the second term involves off-diagonal components whose magnitude can be reduced analogously by taking $\phi_{x}^{2}-\phi^{2}_{x^{\prime}}\rightarrow(\phi_{x}^{2}-\phi^{2}_{x^{\prime}})+i\lambda$. For $A>2$, it can be seen similarly that shifting $\phi^{1}_{x}\rightarrow\phi^{1}_{x}+i\lambda$ and $(\phi^{2}_{x}-\phi^{2}_{x+1})\rightarrow(\phi^{2}_{x}-\phi^{2}_{x+1})+i\lambda$ for all $x$ leads to suppression of the magnitudes of all terms appearing in $W_{\mathcal{A}}^{11}$. A family of contour deformations capable of expressing these constant imaginary shifts to the phases of all elements of $P_{x}$ can therefore be expected to reduce phase fluctuations and the variance of $W_{\mathcal{A}}^{11}$. Such a family of contour deformations is parameterized below as a subset of the vertical deformation expanded in a Fourier series in Eqs. (5)–(8).
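These identities are easy to verify numerically. In the sketch below (our own illustration), the two-term decomposition of $(P_{x}P_{x^{\prime}})^{11}$ is checked against direct matrix multiplication (note the relative minus sign that follows from Eq. (50)), and the constant vertical shift $\phi^{1}_{x}\rightarrow\phi^{1}_{x}+i\lambda$, $\phi^{2}_{x}\rightarrow\phi^{2}_{x}+i\lambda$ is seen to rescale the first row of $P_{x}$, and hence the whole $(1,1)$ component, by exactly $e^{-\lambda}$:

```python
import numpy as np

def su2_from_angles(theta, phi1, phi2):
    """Eq. (50); for complex angle shifts this lands in SL(2, C)."""
    return np.array([
        [np.sin(theta) * np.exp(1j * phi1),   np.cos(theta) * np.exp(1j * phi2)],
        [-np.cos(theta) * np.exp(-1j * phi2), np.sin(theta) * np.exp(-1j * phi1)],
    ])

rng = np.random.default_rng(2)
th, p1, p2 = rng.uniform(0, np.pi / 2), rng.uniform(0, 2 * np.pi), rng.uniform(0, 2 * np.pi)
thp, p1p, p2p = rng.uniform(0, np.pi / 2), rng.uniform(0, 2 * np.pi), rng.uniform(0, 2 * np.pi)
P, Pp = su2_from_angles(th, p1, p2), su2_from_angles(thp, p1p, p2p)

# Two-term decomposition of the A = 2 Wilson loop component.
lhs = (P @ Pp)[0, 0]
rhs = (np.sin(th) * np.sin(thp) * np.exp(1j * (p1 + p1p))
       - np.cos(th) * np.cos(thp) * np.exp(1j * (p2 - p2p)))
assert abs(lhs - rhs) < 1e-12

# Constant vertical shift on the first plaquette suppresses both terms by e^{-lam}.
lam = 0.7
P_def = su2_from_angles(th, p1 + 1j * lam, p2 + 1j * lam)
assert abs((P_def @ Pp)[0, 0] - np.exp(-lam) * lhs) < 1e-12
```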
An alternative parameterization of $SU(2)$ is explored in Appendix B, in which it is found that imaginary shifts along these lines are more difficult to express and orders of magnitude less variance reduction is achieved when applying the same optimization methods. This exploration suggests that a choice of parameterization that allows the observable to be expressed in the form $e^{i\phi}$ is important for variance reduction in observables afflicted with a sign problem. It is also possible to directly parameterize the gauge field $U_{x,\mu}\in SU(2)$ using Eq. (50). This alternative parameterization may be useful in more than two spacetime dimensions, where Gauss’ Law constraints imply that not all plaquettes are independent and a path integral change of variables from $U_{x,\mu}$ to $P^{\mu\nu}_{x}$ cannot be performed straightforwardly.

### IV.2 Fourier deformation basis

In our study of $SU(2)$ gauge theory, we optimize over a family of vertical contour deformations expressed in terms of a Fourier series truncated above a specific cutoff mode. To avoid a costly Jacobian calculation, each plaquette variable $P_{x}$ is deformed conditioned on plaquettes $P_{y}$ at sites earlier in the product defining $W_{\mathcal{A}}$ in Eq. (37), which we write as $y\leq x$.
This family of deformations is given by $\begin{split}\tilde{\theta}_{x}&\equiv\theta_{x}+i\sum_{y\leq x}f_{\theta}(\theta_{y},\phi^{1}_{y},\phi^{2}_{y};\kappa^{xy},\lambda^{xy},\eta^{xy},\chi^{xy},\zeta^{xy}),\\\ \tilde{\phi}^{1}_{x}&\equiv\phi^{1}_{x}+i\kappa_{0}^{x;\phi^{1}}\\\ &\hskip 20.0pt+i\sum_{y\leq x}f_{\phi^{1}}(\theta_{y},\phi^{1}_{y},\phi^{2}_{y};\kappa^{xy},\lambda^{xy},\eta^{xy},\chi^{xy},\zeta^{xy}),\\\ \tilde{\phi}^{2}_{x}&\equiv\phi^{2}_{x}+i\kappa_{0}^{x;\phi^{2}}\\\ &\hskip 20.0pt+i\sum_{y\leq x}f_{\phi^{2}}(\theta_{y},\phi^{1}_{y},\phi^{2}_{y};\kappa^{xy},\lambda^{xy},\eta^{xy},\chi^{xy},\zeta^{xy}),\end{split}$ (53) in terms of parameters $\kappa^{xy}$, $\lambda^{xy}$, $\eta^{xy}$, $\chi^{xy}$, and $\zeta^{xy}$. The functions $f_{\theta}$, $f_{\phi^{1}}$, and $f_{\phi^{2}}$ compute the shift in the imaginary direction of the angular parameters of $P_{x}$ conditioned on $P_{y}$, and their decomposition in terms of Fourier modes is detailed below. For this ordered dependence on previous sites, the Jacobian determinant factorizes into a product of block determinants per lattice site $\begin{split}J&=\prod_{x}j_{x}(\theta_{x},\phi^{1}_{x},\phi^{2}_{x}),\end{split}$ (54) where $j_{x}(\theta_{x},\phi^{1}_{x},\phi^{2}_{x})=\text{det}\begin{pmatrix}\frac{\partial\tilde{\theta}_{x}}{\partial\theta_{x}}&\frac{\partial\tilde{\theta}_{x}}{\partial\phi^{1}_{x}}&\frac{\partial\tilde{\theta}_{x}}{\partial\phi^{2}_{x}}\\\ \frac{\partial\tilde{\phi}^{1}_{x}}{\partial\theta_{x}}&\frac{\partial\tilde{\phi}^{1}_{x}}{\partial\phi^{1}_{x}}&\frac{\partial\tilde{\phi}^{1}_{x}}{\partial\phi^{2}_{x}}\\\ \frac{\partial\tilde{\phi}^{2}_{x}}{\partial\theta_{x}}&\frac{\partial\tilde{\phi}^{2}_{x}}{\partial\phi^{1}_{x}}&\frac{\partial\tilde{\phi}^{2}_{x}}{\partial\phi^{2}_{x}}\end{pmatrix}.$ (55) The structure of the deformation in Eq.
(53) therefore bypasses the need for expensive Jacobian determinant calculations involving matrices whose rank grows with the spacetime volume and is inspired by analogous methods to simplify Jacobian determinant calculations in normalizing flows Papamakarios _et al._ (2019). Note that an absolute value is not taken over the determinant in Eq. (55). The vertical deformation in Eq. (53) can be expanded in a multi-parameter Fourier series as $\begin{split}f_{\theta}&=\sum_{m=1}^{\Lambda}\kappa_{m}^{xy;\theta}\sin\left(2m\theta_{y}\right)\left\\{1+\sum_{n=1}^{\Lambda}\left[\eta_{mn}^{xy;\theta,\phi^{1}}\sin(n\phi^{1}_{y}+\chi_{mn}^{xy;\theta,\phi^{1}})+\eta_{mn}^{xy;\theta,\phi^{2}}\sin(n\phi^{2}_{y}+\chi_{mn}^{xy;\theta,\phi^{2}})\right]\right\\},\\\ f_{\phi^{1}}&=\sum_{m=1}^{\Lambda}\kappa_{m}^{xy;\phi^{1}}\sin(m\phi^{1}_{y}+\zeta_{m}^{xy;\phi^{1}})\left\\{1+\sum_{n=1}^{\Lambda}\left[\lambda_{mn}^{xy;\phi^{1},\theta}\sin(2n\theta_{y})+\eta_{mn}^{xy;\phi^{1},\phi^{2}}\sin(n\phi^{2}_{y}+\chi_{mn}^{xy;\phi^{1},\phi^{2}})\right]\right\\},\\\ f_{\phi^{2}}&=\sum_{m=1}^{\Lambda}\kappa_{m}^{xy;\phi^{2}}\sin(m\phi^{2}_{y}+\zeta_{m}^{xy;\phi^{2}})\left\\{1+\sum_{n=1}^{\Lambda}\left[\lambda_{mn}^{xy;\phi^{2},\theta}\sin(2n\theta_{y})+\eta_{mn}^{xy;\phi^{2},\phi^{1}}\sin(n\phi^{1}_{y}+\chi_{mn}^{xy;\phi^{2},\phi^{1}})\right]\right\\},\end{split}$ (56) where $\Lambda$ is a hyperparameter that sets the maximum Fourier mode to include and controls the total number of free parameters. As the zero modes have trivial $y$ dependence, we have collected them in Eq. (53) into the $y$-independent terms $\kappa_{0}^{x;\phi^{1}}$ and $\kappa_{0}^{x;\phi^{2}}$. 
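The block structure behind Eq. (54) can be checked on a toy example: when site $x$ is deformed using only sites $y\leq x$, the full Jacobian is block lower triangular, so its determinant equals the product of the per-site $3\times 3$ blocks. The sketch below uses an arbitrary smooth shift function and finite differences (the function form and parameter values are ours, not the text's):

```python
import numpy as np

def f(omega, a):
    """A smooth R^3 -> R^3 shift built from Fourier-type terms; coefficients a
    are arbitrary illustrative parameters."""
    th, p1, p2 = omega
    return np.array([
        a[0] * np.sin(2 * th) * (1 + a[1] * np.sin(p1)),
        a[2] * np.sin(p1 + a[3]) * (1 + a[4] * np.sin(2 * th)),
        a[5] * np.sin(p2 + a[6]),
    ])

def deform(omegas, params):
    """Ordered vertical deformation: site x is shifted by i * sum_{y<=x} f."""
    out = []
    for x, om_x in enumerate(omegas):
        shift = sum(f(omegas[y], params[(x, y)]) for y in range(x + 1))
        out.append(om_x + 1j * shift)
    return np.concatenate(out)

rng = np.random.default_rng(1)
params = {(x, y): rng.normal(scale=0.3, size=7) for x in range(2) for y in range(x + 1)}
omegas = [rng.uniform(0.1, 1.0, size=3) for _ in range(2)]
flat = np.concatenate(omegas)

# Full 6x6 complex Jacobian d(Omega~)/d(Omega) by central finite differences.
eps = 1e-6
J = np.zeros((6, 6), dtype=complex)
for j in range(6):
    e = np.zeros(6)
    e[j] = eps
    plus = deform([(flat + e)[:3], (flat + e)[3:]], params)
    minus = deform([(flat - e)[:3], (flat - e)[3:]], params)
    J[:, j] = (plus - minus) / (2 * eps)

# Block lower triangular => determinant factorizes over per-site blocks.
det_full = np.linalg.det(J)
det_blocks = np.linalg.det(J[:3, :3]) * np.linalg.det(J[3:, 3:])
assert abs(det_full - det_blocks) < 1e-6
```

With the shift functions set to zero, each block reduces to the identity and $J=1$, recovering the undeformed manifold.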
The included Fourier terms are defined to satisfy the constraints $\widetilde{\theta}_{x}(0)=0$, $\widetilde{\theta}_{x}(\pi/2)=\pi/2$, $\widetilde{\phi}^{1}_{x}(0)=\widetilde{\phi}^{1}_{x}(2\pi)$, and $\widetilde{\phi}^{2}_{x}(0)=\widetilde{\phi}^{2}_{x}(2\pi)$, which together ensure that the endpoints of both the zenith and azimuthal integration domains are appropriately handled as described in Sec. II.1. The derivatives needed for the Jacobian in Eqs. (54)–(55) can be calculated straightforwardly by differentiating Eq. (56). The additional factor describing the change in the Haar measure needed to compute the Jacobian of the group measure is given in these coordinates as $\prod_{x}\frac{h(\widetilde{\Omega}_{x})}{h(\Omega_{x})}=\prod_{x}\left[\frac{\sin(2\tilde{\theta}_{x})}{\sin(2\theta_{x})}\right].$ (57) Combining the results of Eq. (50) and Eqs. (53)–(57), the expectation value of any holomorphic observable $\mathcal{O}(\\{P_{x}\\})$ is equal to the expectation value of the deformed observable $Q(\\{P_{x}\\})\equiv\mathcal{O}(\\{\widetilde{P}_{x}\\})\frac{e^{-S(\\{\widetilde{P}_{x}\\})}}{e^{-S(\\{P_{x}\\})}}\prod_{x}j_{x}\left[\frac{\sin(2\tilde{\theta}_{x})}{\sin(2\theta_{x})}\right],$ (58) where $\widetilde{P}_{x}=\left(\begin{matrix}\sin\widetilde{\theta}_{x}e^{i\widetilde{\phi}^{1}_{x}}&\cos\widetilde{\theta}_{x}e^{i\widetilde{\phi}^{2}_{x}}\\\ -\cos\widetilde{\theta}_{x}e^{-i\widetilde{\phi}^{2}_{x}}&\sin\widetilde{\theta}_{x}e^{-i\widetilde{\phi}^{1}_{x}}\end{matrix}\right)\in SL(2,\mathbb{C}).$ (59) If the plaquettes are sampled in the matrix representation for Monte Carlo evaluation, computing the observable $Q$ in Eq. (58) requires converting to the angular representation before deforming and evaluating. 
This conversion is given by $\displaystyle\theta_{x}$ $\displaystyle=\arcsin(|P^{11}_{x}|),$ (60) $\displaystyle\phi^{1}_{x}$ $\displaystyle=\arg(P^{11}_{x}),$ $\displaystyle\phi^{2}_{x}$ $\displaystyle=\arg(P^{12}_{x}),$ and can be done when evaluating the observable $Q(\\{P_{x}\\})$. Though these functions are not entire, they are applied only to undeformed configurations with real angular parameters; holomorphy is required only of the integrand as a function of those angular parameters, not of this conversion.

### IV.3 Optimization procedure

This contour deformation expanded in a Fourier series provides a means of exploring deformed observables with potentially reduced variance. It is shown above that simple deformations within this family are possible to construct by hand and are already sufficient to reduce the average magnitude of Wilson loop observables. However, these deformations are restricted to zero modes of the Fourier expansion and rely on construction based on intuition. To maximize the variance reduction, we explore numerical optimization of the deformation parameters $\kappa^{xy}$, $\lambda^{xy}$, $\eta^{xy}$, $\chi^{xy}$, and $\zeta^{xy}$ as discussed in Sec. II.3. We are interested in $\operatorname{Re}W_{\mathcal{A}}^{11}$, for which the terms of Eq. (43) that can be modified by contour deformation are $\mathcal{L}\equiv\left<(\operatorname{Re}Q_{\mathcal{A}})^{2}\right>=\frac{1}{2}\left<|Q_{\mathcal{A}}|^{2}\right>+\frac{1}{2}\operatorname{Re}\left<Q_{\mathcal{A}}^{2}\right>,$ (61) where $Q_{\mathcal{A}}$ is the deformed observable associated with $W_{\mathcal{A}}^{11}$. The first term in Eq. (61) is manifestly non-holomorphic due to the absolute value over a complex-valued observable, while the second term includes squared reweighting factors of the original and deformed action which prevent identification as a deformation of $\left<(W_{\mathcal{A}}^{11})^{2}\right>$. These terms together define the _loss function_ $\mathcal{L}$ that we aim to minimize as a function of the deformation parameters.
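Before turning to optimization, the defining property of deformed observables is worth checking numerically: the mean is exactly preserved while magnitude-sensitive quantities like $\mathcal{L}$ generally change. For an $A=1$ loop with only a constant shift of $\phi^{1}$ active (so the Jacobian and Haar-measure factors in Eq. (58) are trivial), a quadrature cross-check at $g=0.71$ (our own sketch, not code from the text):

```python
import numpy as np
from scipy.special import iv

g = 0.71
beta = 4.0 / g**2   # (1/g^2) tr(P + P^{-1}) = beta sin(theta) cos(phi1) for SU(2)

# Midpoint grids; phi2 appears in neither the weight nor the observable.
nt, nphi = 400, 400
th = (np.arange(nt) + 0.5) * (np.pi / 2) / nt
p1 = (np.arange(nphi) + 0.5) * (2 * np.pi) / nphi
TH, P1 = np.meshgrid(th, p1, indexing="ij")
w = np.sin(2 * TH) * np.exp(beta * np.sin(TH) * np.cos(P1) - beta)  # Haar x e^{-S}, stabilized

def expval(obs):
    return np.sum(w * obs) / np.sum(w)

def deformed_observable(kappa):
    """Q for the A = 1 loop under the constant shift phi1 -> phi1 + i*kappa;
    theta and phi2 are untouched, so only the action reweighting survives."""
    p1_def = P1 + 1j * kappa
    reweight = np.exp(beta * np.sin(TH) * (np.cos(p1_def) - np.cos(P1)))
    return np.sin(TH) * np.exp(1j * p1_def) * reweight

exact = iv(2, beta) / iv(1, beta)            # <P^11> = <chi_1> = e^{-sigma}
mean0 = expval(deformed_observable(0.0))
mean1 = expval(deformed_observable(0.4))
loss0 = expval(np.real(deformed_observable(0.0)) ** 2)   # the kappa = 0 loss
loss1 = expval(np.real(deformed_observable(0.4)) ** 2)   # generally differs
```

Both `mean0` and `mean1` agree with the exact value, as guaranteed by Cauchy's theorem for this periodic, entire integrand, while `loss0` and `loss1` generally differ; exploiting that difference with many more parameters is exactly what the optimization below does.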
This loss function is written as an expectation value in terms of sampling from the original Monte Carlo weights $e^{-S(\\{P_{x}\\})}$, and its gradient can similarly be written as an expectation value, $\nabla\mathcal{L}=\left<2\operatorname{Re}Q_{\mathcal{A}}\nabla\operatorname{Re}Q_{\mathcal{A}}\right>.$ (62) The term $\nabla\operatorname{Re}Q_{\mathcal{A}}$ can be expanded using the explicit form of $Q_{\mathcal{A}}$ given in Eq. (58), as well as the forms of the observable $W_{\mathcal{A}}^{11}$ and the action $S$ in terms of $\\{P_{x}\\}$ in Sec. III.1. For this study, the gradient $\nabla\operatorname{Re}Q_{\mathcal{A}}$ was computed explicitly and cross-checked using automatic differentiation available in the JAX library Bradbury _et al._ (2018). Eq. (62) can be stochastically estimated using an (undeformed) ensemble of $n$ configurations $\\{P^{i}_{x}\\}$, $i\in[1,\dots,n]$, drawn proportional to the weight $e^{-S(\\{P_{x}\\})}$, $\nabla\mathcal{L}\approx\frac{1}{n}\sum_{i=1}^{n}\left[2\operatorname{Re}Q_{\mathcal{A}}(\\{P^{i}_{x}\\})\nabla\operatorname{Re}Q_{\mathcal{A}}(\\{P^{i}_{x}\\})\right].$ (63) In all experiments below, we used the Adam optimizer Kingma and Ba (2017) to iteratively update parameters based on these stochastic gradient estimates. Each gradient estimate was computed using $1/100$th of the generated ensemble; empirically, this small subset of the data was sufficient to learn useful manifold parameters with significant variance reduction. The optimizer was configured with default hyperparameters, except for a dynamically scheduled step size. Stochastic noise on gradient estimates combined with a large optimizer step size can slow convergence or produce unstable training dynamics, while step sizes that are too small waste computation as parameters fail to move quickly along precisely estimated gradients. We thus used a dynamic schedule that reduced the step size over time.
In particular, our step size schedule started with an initial step size $s_{0}$ and then permanently reduced the step size by a factor of $F$ (i.e. $s_{i+1}=s_{i}/F$) each time the loss function failed to improve over a window of $W$ steps. The schedule halted training after the step size was reduced $N_{r}$ times. We used the parameters $F=10$, $W=50$, $N_{r}=2$, and $s_{0}=10^{-2}$ for both $SU(2)$ and $SU(3)$ gauge theory.

Figure 3: $SU(2)$ Wilson loop loss $\mathcal{L}$ plotted with respect to training iterations for optimization of Wilson loops with $g=0.71$ and area $A$ starting with the undeformed manifold (left) or starting with the deformed manifold calculated for area $A-1$ (right). The loss curves are averaged over blocks of 25 iterations for clarity. In each case, the training is halted based on the plateau criterion described in the main text, resulting in traces of different lengths. For Wilson loops with larger areas, the final loss value is substantially lower when initialized from optimal manifold parameters at area $A-1$, despite using nearly four times fewer training iterations.

In preliminary investigations, we found that naive manifold optimization resulted in overtraining, i.e. overfitting parameters to the specific training data available Hardt _et al._ (2016); Lin _et al._ (2016); Zhang _et al._ (2018). In the context of manifold optimization, this corresponds to minimizing a finite-ensemble variance estimator rather than minimizing the true variance of $\operatorname{Re}Q_{\mathcal{A}}$. In practice this produced deformed observables with higher variance when measured on a different ensemble than the training set. To mitigate overtraining in the final results, we reserved a “test set” of configurations, independent of the training data, on which the loss function was periodically measured Raschka (2020); the reserved test set of configurations was chosen to have the same size as the training set.
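The plateau criterion described above can be sketched as follows (the class name and structure are ours; the text's implementation is not shown):

```python
class PlateauSchedule:
    """Step-size schedule: cut the step size by a factor F whenever the loss
    fails to improve for W consecutive updates; halt after N_r reductions."""

    def __init__(self, s0=1e-2, F=10.0, W=50, N_r=2):
        self.step_size = s0
        self.F, self.W, self.N_r = F, W, N_r
        self.best = float("inf")
        self.stall = 0
        self.reductions = 0
        self.halted = False

    def update(self, loss):
        """Record one (test-set) loss measurement; return the current step size."""
        if loss < self.best:
            self.best = loss
            self.stall = 0
        else:
            self.stall += 1
            if self.stall >= self.W:
                self.step_size /= self.F
                self.reductions += 1
                self.stall = 0
                if self.reductions >= self.N_r:
                    self.halted = True
        return self.step_size
```

With $F=10$, $W=50$, $N_{r}=2$, and $s_{0}=10^{-2}$ as in the text, training halts after the second reduction, i.e. once the step size reaches $10^{-4}$.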
The step size schedule was configured to use loss measurements based on this test set, ensuring that training was slowed and halted before significantly overfitting. We further used a mini-batching technique, in which a set of $m\leq n$ configurations is resampled from the training set to estimate Eq. (63), as a means of avoiding overtraining Bottou _et al._ (2018). The mini-batch size was chosen to be equal to the size of the full training set (i.e. $m=n$), so mini-batch evaluation in this context was just a resampling operation, giving variation in gradient estimates over multiple evaluations. The fluctuations in these gradient estimates are on the order of the uncertainty of the variance estimator, preventing overfitting below this uncertainty. For each observable $W_{\mathcal{A}}^{11}$ we also found it important to restrict to deforming only the plaquettes within the region $\mathcal{A}$. Though including extra degrees of freedom cannot make the optimal variance any higher (the optimization steps could always leave those plaquettes undeformed), in practice we found that including such degrees of freedom allowed the deformed manifold to rapidly overtrain, resulting in worse performance overall. Appendix C details further possible approaches to avoiding overfitting and overlap problems using a regularizing term added to the loss function. These approaches were found to be unnecessary for our final results. Finally, making a good choice of initial manifold parameters yielded practical improvement in training time and quality. On one hand, initializing the manifold parameters to zero ensures that gradient descent starts from the original manifold, and the optimization procedure should find a local minimum with variance no higher than the original manifold (up to noise from stochastic gradient estimates).
However, correlations in sign and magnitude fluctuations of observables with similar structure, such as Wilson loops $W_{\mathcal{A}}^{11}$ and $W_{\mathcal{A}^{\prime}}^{11}$ with overlapping areas $\mathcal{A}$ and $\mathcal{A}^{\prime}$, suggest that the variance reduction from contour deformations will be correlated as well. Though the optimal manifold for one observable will not generically be optimal for the other, it can serve as a better starting point than the original manifold. In our study of Wilson loops, we initialized manifold parameters for each Wilson loop of area $A$ using the optimal parameters for the Wilson loop of area $A-1$, when the Fourier cutoff $\Lambda=0$, or using the optimal parameters for the Wilson loop of area $A$ and cutoff $\Lambda-1$, when $\Lambda\neq 0$. Figure 3 shows the improvement in optimization time and quality using this method for manifold deformations with $\Lambda=0$. While this approach sacrifices the guarantee that the local minimum obtained corresponds to a deformed observable with variance no higher than the original observable (in the limit of infinitely precise gradient estimates), in practice we find that this property is not violated. We also note that this property can be easily checked after optimization, and if the variance were found to increase with respect to the original observable, training could instead be started from the original manifold to recover the guarantee.

### IV.4 Monte Carlo calculations

We investigated the practical performance of these contour deformations on the three sets of $SU(2)$ parameters detailed in Table 1. These parameters correspond to variation in the string tension by a factor of 4.
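Because the $(1+1)$D path integral factorizes, ensembles can be built one plaquette at a time from the single-plaquette Boltzmann weight $e^{\frac{1}{g^{2}}\operatorname{tr}(P+P^{-1})}$. As an illustrative cross-check of this setup (self-normalized importance sampling rather than the Hybrid Monte Carlo used for the results below), the one-plaquette average $\left\langle\chi_{1}\right\rangle$ can be estimated for $SU(2)$:

```python
import numpy as np
from scipy.special import iv

g = 0.98
beta = 4.0 / g**2          # single-plaquette weight: exp((beta/2) tr P) for SU(2)
n = 200_000
rng = np.random.default_rng(0)

# Draw (theta, phi1) uniformly and reweight by the Haar density times the
# Boltzmann factor; phi2 drops out of both the weight and the observable.
theta = rng.uniform(0, np.pi / 2, n)
phi1 = rng.uniform(0, 2 * np.pi, n)
w = np.sin(2 * theta) * np.exp(beta * (np.sin(theta) * np.cos(phi1) - 1.0))
obs = np.sin(theta) * np.cos(phi1)      # Re P^11 = (1/2) tr P

estimate = np.sum(w * obs) / np.sum(w)  # self-normalized importance sampling
exact = iv(2, beta) / iv(1, beta)       # <chi_1> from Appendix A
```

The estimate agrees with the exact one-plaquette result to within the statistical precision of the sample.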
At each choice of parameters, $n=32000$ configurations were generated, exploiting the factorization of the path integral to draw samples of each plaquette $P_{x}$ independently from the marginal distribution, $p(P_{x})\propto e^{\frac{1}{g^{2}}\operatorname{tr}(P_{x}+P_{x}^{-1})}.$ (64) Plaquette samples were generated using Hybrid Monte Carlo Duane _et al._ (1987) with parameters tuned to maintain autocorrelation times of order 1–2 (for $SU(2)$ gauge theory it is also possible to apply a heat bath algorithm to directly draw samples; however, more complicated variants are required for $SU(3)$ Creutz (1980); Pietarinen (1981); de Forcrand and Jahn (2005), and Hybrid Monte Carlo was selected for simplicity), and these individual plaquettes were then arranged into lattices consisting of $V$ sites each. A random shuffle was applied to the collection of plaquettes prior to this assignment to avoid spurious spatial correlations, ensuring an asymptotically vanishing bias in the limit of infinite ensemble size. On each ensemble, we performed a study of deformations with Fourier cutoff $\Lambda$ fixed to zero. For all three choices of parameters, training on a subset of $320$ configurations was sufficient to converge to manifolds with variance reduction up to four orders of magnitude when comparing the largest deformed observables to the original Wilson loop observables. An additional subset of $320$ configurations was reserved as the test set during optimization, and the remaining $31360$ configurations were used to measure results. Measurements of $Q_{\mathcal{A}}$ were found to be consistent with the exact results given in Sec. III.1 and with the Monte Carlo results for $W_{\mathcal{A}}^{11}$ in the region where the estimates of $W_{\mathcal{A}}^{11}$ were reliable. The variance of $\operatorname{Re}W_{\mathcal{A}}^{11}$ is dominated by $\left<(\operatorname{Re}W_{\mathcal{A}}^{11})^{2}\right>$, which is positive-definite for all gauge field configurations. 
It was therefore possible to precisely measure the variance without a sign/StN problem, and the expected agreement with analytical results was reproduced. In particular, $O(1)$ scaling at large area can be clearly seen. In contrast, we found that the variance of $Q_{\mathcal{A}}$ for manifolds optimized as above falls exponentially at large area, giving exponential improvements in the signal-to-noise ratio. These results are presented for all three ensembles in Fig. 4. Figure 4: $SU(2)$ Wilson loop expectation values and variances for ensembles with three different values of the gauge coupling $g=0.98,\ 0.71,\ 0.51$ (top to bottom). Red lines indicate analytical results for $\left<W_{\mathcal{A}}^{11}\right>=\left<\operatorname{tr}(W_{\mathcal{A}})/2\right>$ (left plots) and for $\operatorname{Var}(\operatorname{Re}W_{\mathcal{A}}^{11})$ (right plots). Improvements from contour deformation were also found to be similar for Wilson loops with fixed physical area $\sigma A$ across the three values of the lattice spacing. Fig. 5 compares the improvement in the Wilson loop variance versus the dimensionless scale $\sigma A$ across all three ensembles, and the three curves can be seen to nearly collapse at small areas, though there are differences of roughly a factor of 4 at the largest areas. Despite this variation, the variance of the Wilson loop with the largest area is reduced by more than $10^{3}$ even for the finest ensemble. Analogous results were observed for $(1+1)$D $U(1)$ gauge theory in Ref. Detmold _et al._ (2020). For $\Lambda=0$, there are few enough parameters that it is possible to investigate the optimal parameters found by the optimization procedure. Fig. 6 depicts the values of $\kappa_{0}^{x;\phi^{1}}$ and $\kappa_{0}^{x;\phi^{2}}$ when optimized to reduce the variance of $Q_{\mathcal{A}}$ at three choices of the loop area. It is argued in Sec. 
IV.1 that the magnitude of $Q_{\mathcal{A}}$ can be reduced on each sample if $\phi^{1}_{x}$ and the differences $\phi^{2}_{x}-\phi^{2}_{x+1}$ are shifted by a positive imaginary constant. This manifold is approximately discovered by the optimization procedure: the final parameters are a positive, nearly constant $\kappa_{0}^{x;\phi^{1}}$, corresponding to a positive imaginary shift applied to $\phi^{1}_{x}$, and a decreasing $\kappa_{0}^{x;\phi^{2}}$, corresponding to a positive imaginary shift applied to each difference $\phi^{2}_{x}-\phi^{2}_{x+1}$. Only these relative differences between neighboring $\phi^{2}_{x}$ have an effect on the value of $Q_{\mathcal{A}}$, thus the overall shift on the collection of $\kappa_{0}^{x;\phi^{2}}$ is irrelevant. Figure 5: $SU(2)$ Wilson loop variance ratios of standard observables to deformed observables for ensembles with three different values of the gauge coupling $g$, corresponding to three different values of lattice spacing. Figure 6: The manifold parameters found by optimizing the variance of the deformed Wilson loop observable $Q_{\mathcal{A}}$ at three different choices of area $A$ on the ensemble with total volume $V=32$ and $\beta=8.0$. Optimization at each $A$ was initialized using the parameters found for the observable with area $A-1$, as detailed in the main text. We further optimized manifold parameters using Fourier cutoffs $\Lambda=1,2$ on the ensemble with coupling $g=0.71$ to investigate gains from including higher Fourier modes. Including Fourier modes larger than the constant term enables more complex deformations of each angular parameter, and introduces possible dependence on plaquettes at sites $y\leq x$ when deforming $P_{x}$. Despite this increased expressivity, these additional parameters did not provide significantly larger StN improvements compared to using only constant terms, as shown in Fig. 7. 
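The effect of such a constant imaginary shift can be isolated in a single periodic variable. The sketch below uses an illustrative one-dimensional density $p(\phi)\propto e^{\beta\cos\phi}$ (not taken from this work) and checks by quadrature that a deformed observable built from the Boltzmann-weight ratio, analogous to Eq. (58), retains the expectation value of $e^{i\phi}$ while its real part has lower variance:

```python
import numpy as np

def deformed_stats(beta, delta, npts=20000):
    """Compare O(phi) = exp(i phi) with the deformed observable
    Q(phi) = exp(i (phi + i delta)) * exp(beta cos(phi + i delta) - beta cos(phi))
    under p(phi) ~ exp(beta cos(phi)).  The Boltzmann-weight ratio compensates
    for the shifted contour, so <Q> = <O> by Cauchy's theorem, while the
    overall factor exp(-delta) suppresses the magnitude of Q."""
    phi = (np.arange(npts) + 0.5) * (2 * np.pi / npts)
    p = np.exp(beta * np.cos(phi))
    p /= p.sum()
    O = np.exp(1j * phi)
    Q = np.exp(1j * (phi + 1j * delta)) * np.exp(beta * (np.cos(phi + 1j * delta) - np.cos(phi)))
    var = lambda f: float((p * f.real**2).sum() - (p * f.real).sum() ** 2)
    return (p * O).sum(), (p * Q).sum(), var(O), var(Q)
```

Because the shifted integrand is holomorphic and $2\pi$-periodic, the two means agree to machine precision while the variance drops.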
In some cases, the optimized manifold with larger cutoff resulted in higher variance (lower variance ratio) than the optimized manifold with cutoff $\Lambda=0$. The manifolds with larger cutoffs include all possible manifolds with smaller cutoffs, thus this is necessarily a training effect, likely due to noisier gradients and less stable training dynamics. We did not pursue noise reduction and alternative approaches to training (such as iteratively including higher $\Lambda$) as these manifolds with higher cutoffs did not produce significant improvements at any value of the area. Figure 7: $SU(2)$ Wilson loop variance ratios of standard observables to deformed observables for ensembles with $g=0.71$ and three different values of the manifold parameterization cutoff. ## V $SU(3)$ path integral contour deformations We further investigated the ability of contour deformations to reduce the variance of Wilson loops in $SU(3)$ gauge theory in $(1+1)$D with open boundary conditions. This setting is identical to the previous section, except for the use of $SU(3)$ rather than $SU(2)$ gauge field variables. Suitable parameterizations for contour deformations of $SU(3)$ gauge fields are discussed below. ### V.1 Gauge field parameterization and contour deformation For the $SU(3)$ gauge group, we use the angular parameterization constructed in Ref. Bronzan (1988). 
The components of a single plaquette $P_{x}\in SU(3)$ are parameterized as $\begin{split}P_{x}^{11}&=\cos\theta_{x}^{1}\cos\theta_{x}^{2}e^{i\phi_{x}^{1}},\\\ P_{x}^{12}&=\sin\theta_{x}^{1}e^{i\phi_{x}^{3}},\\\ P_{x}^{13}&=\cos\theta_{x}^{1}\sin\theta_{x}^{2}e^{i\phi_{x}^{4}}\,,\\\ P_{x}^{21}&=\sin\theta_{x}^{2}\sin\theta_{x}^{3}e^{-i(\phi_{x}^{4}+\phi_{x}^{5})}\\\ &\hskip 20.0pt-\sin\theta_{x}^{1}\cos\theta_{x}^{2}\cos\theta_{x}^{3}e^{i(\phi_{x}^{1}+\phi_{x}^{2}-\phi_{x}^{3})}\,,\\\ P_{x}^{22}&=\cos\theta_{x}^{1}\cos\theta_{x}^{3}e^{i\phi_{x}^{2}}\,,\\\ P_{x}^{23}&=-\cos\theta_{x}^{2}\sin\theta_{x}^{3}e^{-i(\phi_{x}^{1}+\phi_{x}^{5})}\\\ &\hskip 20.0pt-\sin\theta_{x}^{1}\sin\theta_{x}^{2}\cos\theta_{x}^{3}e^{i(\phi_{x}^{2}-\phi_{x}^{3}+\phi_{x}^{4})}\,,\\\ P_{x}^{31}&=-\sin\theta_{x}^{1}\cos\theta_{x}^{2}\sin\theta_{x}^{3}e^{i(\phi_{x}^{1}-\phi_{x}^{3}+\phi_{x}^{5})}\\\ &\hskip 20.0pt-\sin\theta_{x}^{2}\cos\theta_{x}^{3}e^{-i(\phi_{x}^{2}+\phi_{x}^{4})}\,,\\\ P_{x}^{32}&=\cos\theta_{x}^{1}\sin\theta_{x}^{3}e^{i\phi_{x}^{5}}\,,\\\ P_{x}^{33}&=\cos\theta_{x}^{2}\cos\theta_{x}^{3}e^{-i(\phi_{x}^{1}+\phi_{x}^{2})}\\\ &\hskip 20.0pt-\sin\theta_{x}^{1}\sin\theta_{x}^{2}\sin\theta_{x}^{3}e^{-i(\phi_{x}^{3}-\phi_{x}^{4}-\phi_{x}^{5})}\,,\end{split}$ (65) in terms of the three zenith angles $0\leq\theta^{1}_{x},\theta^{2}_{x},\theta^{3}_{x}\leq\pi/2$ and the five azimuthal angles $0\leq\phi^{1}_{x},\ldots,\phi^{5}_{x}\leq 2\pi$ for each plaquette. We collect these angles into a variable $\Omega_{x}=(\theta^{1}_{x},..,\theta^{3}_{x},\phi^{1}_{x},...,\phi^{5}_{x})$ for ease of notation. 
The Haar measure is related to the measure of $\Omega$ by Bronzan (1988) $\displaystyle dP_{x}=\frac{1}{2\pi^{5}}$ $\displaystyle\sin\theta_{x}^{1}(\cos\theta_{x}^{1})^{3}\sin\theta_{x}^{2}\cos\theta_{x}^{2}\sin\theta_{x}^{3}\cos\theta_{x}^{3}$ (66) $\displaystyle\times d\theta_{x}^{1}\,d\theta_{x}^{2}\,d\theta_{x}^{3}\,d\phi_{x}^{1}\ldots d\phi_{x}^{5}\,.$ To compute deformed observables from Monte Carlo samples in the matrix representation, an inverse map of Eq. (65) is needed, which can for example be specified by $\displaystyle\theta_{x}^{1}$ $\displaystyle=\text{arcsin}(|P_{x}^{12}|),$ (67) $\displaystyle\theta_{x}^{2}$ $\displaystyle=\text{arccos}\left(|P_{x}^{11}|/\text{cos}(\theta_{x}^{1})\right),$ $\displaystyle\theta_{x}^{3}$ $\displaystyle=\text{arccos}\left(|P_{x}^{22}|/\text{cos}(\theta_{x}^{1})\right),$ $\displaystyle\phi_{x}^{1}$ $\displaystyle=\arg(P_{x}^{11}),$ $\displaystyle\phi_{x}^{2}$ $\displaystyle=\arg(P_{x}^{22}),$ $\displaystyle\phi_{x}^{3}$ $\displaystyle=\arg(P_{x}^{12}),$ $\displaystyle\phi_{x}^{4}$ $\displaystyle=\arg(P_{x}^{13}),$ $\displaystyle\phi_{x}^{5}$ $\displaystyle=\arg(P_{x}^{32}).$ An $SU(3)$ field configuration in $(1+1)$D with open boundary conditions is defined by a collection of angular variables $\Omega_{x}$ associated with all plaquettes $P_{x}$ on the lattice. 
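Equations (65) and (67) can be checked mechanically. The sketch below (helper names are our own) constructs a matrix from the eight Bronzan angles and verifies that it is special unitary and that the inverse map recovers the angles, provided the azimuthal angles lie in $(0,\pi)$, where $\arg$ returns the principal value directly:

```python
import numpy as np

def su3_from_angles(t1, t2, t3, p1, p2, p3, p4, p5):
    """SU(3) matrix from the Bronzan angles, transcribing Eq. (65)."""
    c1, c2, c3 = np.cos(t1), np.cos(t2), np.cos(t3)
    s1, s2, s3 = np.sin(t1), np.sin(t2), np.sin(t3)
    e = lambda x: np.exp(1j * x)
    return np.array([
        [c1 * c2 * e(p1), s1 * e(p3), c1 * s2 * e(p4)],
        [s2 * s3 * e(-(p4 + p5)) - s1 * c2 * c3 * e(p1 + p2 - p3),
         c1 * c3 * e(p2),
         -c2 * s3 * e(-(p1 + p5)) - s1 * s2 * c3 * e(p2 - p3 + p4)],
        [-s1 * c2 * s3 * e(p1 - p3 + p5) - s2 * c3 * e(-(p2 + p4)),
         c1 * s3 * e(p5),
         c2 * c3 * e(-(p1 + p2)) - s1 * s2 * s3 * e(-(p3 - p4 - p5))],
    ])

def angles_from_su3(P):
    """Inverse map of Eq. (67)."""
    t1 = np.arcsin(abs(P[0, 1]))
    t2 = np.arccos(abs(P[0, 0]) / np.cos(t1))
    t3 = np.arccos(abs(P[1, 1]) / np.cos(t1))
    return (t1, t2, t3, np.angle(P[0, 0]), np.angle(P[1, 1]),
            np.angle(P[0, 1]), np.angle(P[0, 2]), np.angle(P[2, 1]))
```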
The $a^{\text{th}}$ component of the deformed angles at site $x$ is denoted by $(\widetilde{\Omega}_{x})_{a}$, and for vertical deformations expanded in Fourier modes is specified by $\displaystyle\tilde{\phi}^{a}_{x}$ $\displaystyle=\phi^{a}_{x}+i\kappa_{0}^{x;\phi^{a}}+i\sum_{y\leq x}f_{\phi^{a}}(\Omega_{y};\kappa^{xy},\lambda^{xy},\chi^{xy},\zeta^{xy}),$ (68) $\displaystyle\tilde{\theta}^{a}_{x}$ $\displaystyle=\theta^{a}_{x}+i\sum_{y\leq x}f_{\theta^{a}}(\Omega_{y};\kappa^{xy},\lambda^{xy},\chi^{xy}).$ where $\displaystyle f_{\theta^{a}}$ $\displaystyle=\sum_{m=1}^{\Lambda}{\kappa_{m}^{xy;a}\text{sin}(2m\theta^{a}_{y})\bigg{\\{}1+\sum_{n=1}^{\Lambda}\big{[}\sum_{\begin{subarray}{c}r\neq a\\\ r=1\end{subarray}}^{3}{\lambda_{mn}^{xy;ar}\text{sin}(2n\theta^{r}_{y})}+\sum_{s=1}^{5}{\eta_{mn}^{xy;as}\text{sin}(n\phi_{y}^{s}+\chi_{mn}^{xy;as})}\big{]}\bigg{\\}}},$ (69) $\displaystyle f_{\phi^{a}}$ $\displaystyle=\sum_{m=1}^{\Lambda}{{{\kappa}}_{m}^{xy;a}\text{sin}(m\phi^{a}_{y}+{\zeta}_{m}^{xy;a})\bigg{\\{}1+\sum_{n=1}^{\Lambda}\big{[}\sum_{r=1}^{3}{{\lambda}_{mn}^{xy;ar}\text{sin}(2n\theta^{r}_{y})}+\sum_{\begin{subarray}{c}s\neq a\\\ s=1\end{subarray}}^{5}{{\eta}_{mn}^{xy;as}\text{sin}(n\phi^{s}_{y}+{\chi}_{mn}^{xy;as})}\big{]}\bigg{\\}}}.$ Deformed observables analogous to Eq. (58) can be constructed for the $SU(3)$ case using this parameterization and ratios of the deformed and undeformed Haar measure factors obtained from Eq. (66). ### V.2 Results Figure 8: $SU(3)$ Wilson loop expectation values and variances for ensembles with three different values of the gauge coupling that from top to bottom correspond to $g=0.72,\ 0.53,\ 0.38$. Red lines correspond to analytical results for $\left<W_{\mathcal{A}}^{11}\right>=\left<\operatorname{tr}(W_{\mathcal{A}})/3\right>$ (left plots) and $\operatorname{Var}(\operatorname{Re}W_{\mathcal{A}}^{11})$ (right plots). 
Practical performance of these deformations was investigated by optimizing Wilson loop variance using the three sets of $SU(3)$ parameters detailed in Table 1 as in the $SU(2)$ case. The couplings were tuned to match the string tensions used for $SU(2)$ gauge theory and correspond to lattice spacings varying by a factor of 4. For each choice of parameters, an ensemble of $n=32000$ configurations was generated using the factorized HMC method discussed in Sec. IV.4. Fig. 8 shows variance reduction for Wilson loops of all possible sizes for the three lattice spacings studied. At the largest loop areas, we found variance reduction of greater than three orders of magnitude. Across all three ensembles, analytical results for the Wilson loop expectation values and variances were reproduced by the undeformed Monte Carlo estimates. The expectation value of the deformed observable is consistent with the analytical and original Monte Carlo results, while the variance of the deformed observable exponentially decreases with increasing $\sigma A$. Figure 9: $SU(3)$ Wilson loop variance ratios of standard observables to deformed observables for ensembles with three different values of the gauge coupling that correspond to $g=0.72,\ 0.53,\ 0.38$ (top to bottom). Fig. 9 compares the variance reduction achieved at all three lattice spacings versus physical loop area $\sigma A$. We found approximately equivalent improvement in the variance at the two coarser lattice spacings ($g=0.72$ and $g=0.53$), and a small, yet significant, decrease in the variance improvement achieved at the finest lattice spacing ($g=0.38$). Despite this, the variance was reduced by three orders of magnitude at the largest area on the finest lattice by using an optimized deformed observable, and at all three couplings the variance improvements are consistent with exponential scaling in the physical loop area. 
The number of parameters to be optimized grows with the volume in lattice units, and the analogous results observed for $SU(2)$ gauge theory suggest that the results at finer lattice spacings could be partially explained by increased difficulty in training the larger number of parameters. Figure 10: The parameters of the optimal manifold for Wilson loop $Q_{\mathcal{A}}$ at three different areas $A$, as determined on the ensemble with total volume $V=32$ and $g=0.53$. Optimization for each $Q_{\mathcal{A}}$ was initialized using the parameters for optimal $Q_{\mathcal{A}^{\prime}}$ with region $\mathcal{A}^{\prime}$ of area $A-1$, as detailed in the main text. The $\Lambda=0$ manifold parameterization involves few enough parameters that it is possible to investigate the structure of the optimal parameters similarly to the case of $SU(2)$ gauge theory. As shown in Fig. 10, we found that the optimized values of $\kappa_{0}^{x;3}$ and $\kappa_{0}^{x;4}$ decrease with $x$, while $\kappa_{0}^{x;1}$ and $\kappa_{0}^{x;2}$ appear to converge towards approximately constant positive and negative values, respectively. The final parameter $\kappa_{0}^{x;5}$ fluctuates in both the positive and negative directions. These results can be qualitatively explained by expanding the $(1,1)$ component of the Wilson loop for small area. For $A=1$ the Wilson loop is equivalent to $P_{x}$, for which the $(1,1)$ component is $P_{x}^{11}=\cos\theta^{1}_{x}\cos\theta^{2}_{x}e^{i\phi^{1}_{x}}.$ (70) The magnitude of this quantity can be reduced by shifting $\phi^{1}\rightarrow\widetilde{\phi}_{x}^{1}=\phi_{x}^{1}+i\lambda$, which is consistent with the positive $\kappa_{0}^{x;1}$ values obtained after optimization shown in Fig. 10. 
Extending the analysis to the $A=2$ Wilson loop, the $(1,1)$ component is given by $\displaystyle(P_{x}P_{x^{\prime}})^{11}=$ (71) $\displaystyle\quad e^{i\phi^{1}_{x}+i\phi^{1}_{x^{\prime}}}\cos\theta^{1}_{x}\cos\theta^{1}_{x^{\prime}}\cos\theta^{2}_{x}\cos\theta^{2}_{x^{\prime}}$ $\displaystyle\quad-e^{i\phi^{1}_{x^{\prime}}+i\phi^{2}_{x^{\prime}}+i(\phi^{3}_{x}-\phi^{3}_{x^{\prime}})}\cos\theta^{2}_{x^{\prime}}\cos\theta^{3}_{x^{\prime}}\sin\theta^{1}_{x}\sin\theta^{1}_{x^{\prime}}$ $\displaystyle\quad-e^{-i\phi^{2}_{x^{\prime}}+i(\phi^{4}_{x}-\phi^{4}_{x^{\prime}})}\cos\theta^{1}_{x}\cos\theta^{3}_{x^{\prime}}\sin\theta^{2}_{x}\sin\theta^{2}_{x^{\prime}}$ $\displaystyle\quad-e^{i\phi^{4}_{x}+i\phi^{1}_{x^{\prime}}-i\phi^{3}_{x^{\prime}}+i\phi^{5}_{x^{\prime}}}\cos\theta^{1}_{x}\cos\theta^{2}_{x^{\prime}}\sin\theta^{1}_{x^{\prime}}\sin\theta^{2}_{x}\sin\theta^{3}_{x^{\prime}}$ $\displaystyle\quad+e^{i\phi^{3}_{x}-i\phi^{4}_{x^{\prime}}-i\phi^{5}_{x^{\prime}}}\sin\theta^{1}_{x}\sin\theta^{2}_{x^{\prime}}\sin\theta^{3}_{x^{\prime}}\,.$ The magnitude of the first, second, and fourth terms are reduced by shifting $\phi^{1}_{x}\rightarrow\phi^{1}_{x}+i\lambda$ and $\phi^{1}_{x^{\prime}}\rightarrow\phi^{1}_{x^{\prime}}+i\lambda$ with $\lambda>0$. The magnitude of the second and third terms can be reduced by shifting $\phi^{3}_{x}-\phi^{3}_{x^{\prime}}\rightarrow\phi^{3}_{x}-\phi^{3}_{x^{\prime}}+i\delta$ and $\phi^{4}_{x}-\phi^{4}_{x^{\prime}}\rightarrow\phi^{4}_{x}-\phi^{4}_{x^{\prime}}+i\delta$ with $\delta>0$; this is also consistent with a positive imaginary shift of $i\delta$ in $\phi^{3}_{x}-\phi^{4}_{x^{\prime}}$ and $\phi^{4}_{x}-\phi^{3}_{x^{\prime}}$, reducing the magnitude of the fourth and fifth terms. These deformations result in reduced magnitude and correspondingly lower variance. Deformations with these qualitative features are reproduced in the optimized manifolds found for $\Lambda=0$. 
Finally, we note that $\phi^{2}_{x^{\prime}}$ appears in the exponent with opposite signs in the second and third terms, and similarly for $\phi^{5}_{x^{\prime}}$ in the fourth and fifth terms, so there is no constant vertical deformation of these terms that will reduce the overall magnitude. Figure 11: $SU(3)$ Wilson loop variance ratios of standard observables to deformed observables for ensembles with $g=0.53$ and three different values of the manifold parameterization cutoff. Analogously to the case of $SU(2)$ gauge theory, no significant StN improvements were obtained by including higher Fourier modes. Fig. 11 directly compares optimized manifolds for $\Lambda\leq 2$ at the coarsest lattice spacing, showing that the variance improvement was unchanged by including higher Fourier modes. Constant vertical deformations therefore appear to saturate the StN benefits achieved by general Fourier series parameterizations for deformed Wilson loops in $(1+1)$D. More complicated parameterizations of contour deformations may prove more useful in $(3+1)$D; however, numerical studies of deformed observables in $(3+1)$D lattice gauge theory are left to future work. ## VI Conclusions In this work, we have defined a family of complex manifolds for path integral deformations in $SU(N)$ lattice field theories. The manifolds introduced here are described in terms of an angular parameterization of each $SU(N)$ variable, with dependence between variables at differing spacetime sites restricted to enforce a triangular Jacobian. We find that choosing a parameterization in which the observable of interest can be written as $\mathcal{O}=e^{i\theta}X$ is a useful practical choice, allowing constant shift deformations in the parameter $\theta$ to make substantial progress in reducing noise. 
Choosing a spacetime dependence of the deformation that gives rise to a triangular Jacobian is key to ensuring the deformed integral can be computed efficiently, as the Jacobian determinant can then be evaluated with cost linear in the number of spacetime lattice sites. This manifold parameterization can be combined with the method of deformed observables introduced in Ref. Detmold _et al._ (2020) to reduce noise in observables. This approach is applicable when the action is real and the Boltzmann weight $e^{-S}$ can be treated as a probability measure. We stress that this method of deformed observables does not require changing the Monte Carlo sampling, despite being based on an analysis of contour deformation of the entire path integral, and can be thought of as an approach to analytically relate observables with identical expectation values and different variance. Keeping the Monte Carlo weights unchanged allows manifold optimization using estimates of the deformed observable variance computed with respect to a fixed Monte Carlo ensemble. There is a tradeoff between the cost of optimizing manifold parameters and the statistical precision gained. In practice, we find that initializing manifold parameters from optimal parameters for similar observables significantly reduces the associated cost. This method was shown in Sec. IV and V to improve the variance of Wilson loop observables in $(1+1)$D $SU(2)$ and $SU(3)$ lattice gauge theory by orders of magnitude. For the original Wilson loop observables, the signal-to-noise ratio decreases exponentially with area. The deformed observables mitigate this StN problem, and in particular we find that the improvements are consistent with an exponential in the physical Wilson loop area, with the most significant reduction in noise for Wilson loops of the largest area. 
The improvement in variance was empirically found to be similar at three different lattice spacings, though less of an improvement was seen at the finest lattice spacing for $SU(3)$; the achieved improvements in the continuum limit and for other theories are thus an interesting subject of future investigation. However, we stress that making any gains at finite lattice spacing is still a significant step forward due to the convenience of the method: optimizing a deformation on a fixed ensemble quickly gives new observables that encode the same physical content while having significantly reduced noise. In demonstrating the method, we focused here on a particular deformation of the angular parameters based on a Fourier series expansion and shift in the imaginary direction only. Writing the observable phase fluctuations in terms of these periodic angular parameters that can be shifted by a constant in the imaginary direction led to deformed observables with significant StN improvement. The surprising result that zero-mode terms alone significantly reduce noise, with neither dependence between plaquettes at different spacetime sites nor dependence on the values of the angular parameters themselves, suggests that the majority of the StN problem in these $(1+1)$D theories arises from independent local fluctuations of $SU(N)$ angular parameters. Complications are expected in higher dimensions, as Gauss’ Law implies that plaquettes at differing spacetime locations and orientations must satisfy many independent constraints. Deformations thus cannot independently address fluctuations in each plaquette included in a Wilson loop, or more generally in each fundamental degree of freedom included in an observable. It is therefore an interesting line of future work to determine how best to incorporate this spacetime dependence in higher dimensional applications of path integral contour deformations. 
The approach employed here is one of many possible approaches to creating expressive transformations with efficiently computable Jacobians. This issue has been explored in some depth in normalizing flows for sampling in many contexts, including image generation and ensemble generation for lattice field theory; see Refs. Papamakarios _et al._ (2019); Kobyzev _et al._ (2020) for recent reviews. Similar techniques may prove more fruitful in future applications of path integral contour deformations to observables of phenomenological interest in $(3+1)$D lattice gauge theories. ###### Acknowledgements. W.D. and G.K. are supported in part by the U.S. DOE grant No. DE-SC0011090. W.D. is also supported by the SciDAC4 award DE-SC0018121. H.L. is supported by a Department of Energy QuantiSED grant. N.C.W. is supported by the U.S. DOE under Grant No. DE-FG02-00ER41132. This manuscript has been authored by Fermi Research Alliance, LLC under Contract No. DE-AC02-07CH11359 with the U.S. Department of Energy, Office of Science, Office of High Energy Physics. ## Appendix A Single-variable $SU(N)$ integrals The calculation of $z$ for $SU(N)$ can be performed analogously to the $U(N)$ case considered in Refs. Gross and Witten (1980); Wadia (2012) with further details for the $SU(N)$ case given in Ref. Drouffe and Zuber (1983). In the eigenbasis of $P$, the Haar measure is given by $\begin{split}dP&=\frac{1}{N!}\prod_{I=1}^{N}\left[\frac{d\theta_{I}}{2\pi}\prod_{J<I}\left|e^{i\theta_{I}}-e^{i\theta_{J}}\right|^{2}\right.\\\ &\hskip 20.0pt\left.\times\sum_{n=-\infty}^{\infty}2\pi\delta\left(\sum_{K=1}^{N}\theta_{K}+2\pi n\right)\right],\end{split}$ (72) where the $\delta$-function enforces the unit determinant condition of $SU(N)$. 
Using the Fourier series representation of the $\delta$-function for the compact variable $\sum_{K=1}^{N}\theta_{K}$, this can be expressed as $\begin{split}dP=\frac{1}{N!}\prod_{I=1}^{N}\left[\frac{d\theta_{I}}{2\pi}\prod_{J<I}\left|e^{i\theta_{I}}-e^{i\theta_{J}}\right|^{2}\sum_{q=-\infty}^{\infty}e^{iq\theta_{I}}\right].\end{split}$ (73) The product of $e^{i\theta_{I}}-e^{i\theta_{J}}$ factors can be expressed in terms of the determinant of a Vandermonde matrix Gross and Witten (1980); Wadia (2012). The $SU(N)$ Haar measure is given by a sum of similar determinants, and $z$ can be expressed as Drouffe and Zuber (1983) $\begin{split}z=\sum_{q=-\infty}^{\infty}\det(\mathcal{Z}^{q}),\end{split}$ (74) where the entries of the matrix $\mathcal{Z}^{q}$ are given by $\begin{split}\mathcal{Z}^{q}_{IJ}&\equiv\int\frac{d\theta}{2\pi}e^{i[q+I-J]\theta}e^{\frac{2}{g^{2}}\cos(\theta)}\\\ &=I_{q+I-J}\left(\frac{2}{g^{2}}\right),\end{split}$ (75) where $I_{n}(x)$ is a modified Bessel function. For example, in the $SU(2)$ case $z$ is given explicitly by $\begin{split}z^{SU(2)}&=\sum_{q=-\infty}^{\infty}\left[I_{q}\left(\frac{2}{g^{2}}\right)\right]^{2}-I_{q+1}\left(\frac{2}{g^{2}}\right)I_{q-1}\left(\frac{2}{g^{2}}\right).\end{split}$ (76) A simpler but equivalent form $z^{SU(2)}=\frac{g^{2}}{2}I_{1}(4/g^{2})$ can also be derived using the parameterization introduced in Section IV. 
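The equivalence of Eq. (76) and the closed form $z^{SU(2)}=\frac{g^{2}}{2}I_{1}(4/g^{2})$ is straightforward to confirm numerically. The sketch below evaluates the modified Bessel functions through the integral representation $I_{n}(x)=\frac{1}{\pi}\int_{0}^{\pi}e^{x\cos t}\cos(nt)\,dt$ so that only numpy is required (function names and the truncation of the $q$ sum are our own choices):

```python
import numpy as np

def bessel_i(n, x, npts=8192):
    """Modified Bessel function I_n(x), integer n, via its integral representation."""
    t = (np.arange(npts) + 0.5) * (np.pi / npts)
    return (np.exp(x * np.cos(t)) * np.cos(n * t)).mean()

def z_su2_sum(g, qmax=40):
    """Single-plaquette SU(2) partition function z from the sum in Eq. (76)."""
    x = 2.0 / g**2
    return sum(bessel_i(q, x) ** 2 - bessel_i(q + 1, x) * bessel_i(q - 1, x)
               for q in range(-qmax, qmax + 1))

def z_su2_closed(g):
    """Equivalent closed form z = (g^2/2) I_1(4/g^2)."""
    return 0.5 * g**2 * bessel_i(1, 4.0 / g**2)
```

The terms of the $q$ sum decay factorially, so a modest cutoff already reproduces the closed form to high precision at the couplings used in Sec. IV.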
From this, we can derive an expression for $\left\langle\chi_{1}\right\rangle$ by taking derivatives of $z$: $\frac{\partial}{\partial(2N/g^{2})}\text{log}\,z=-\frac{g^{3}}{4N}\frac{\partial}{\partial g}\text{log}\,z=\left\langle\chi_{1}\right\rangle=e^{-\sigma}.$ (77)
$r$ | $d_{r}$ | $\chi_{r}$
---|---|---
$\\{1\\}$ | $N$ | $\operatorname{tr}U$
$\\{2\\}$ | $\frac{N(N+1)}{2}$ | $\frac{1}{2}\left(\operatorname{tr}^{2}U+\operatorname{tr}U^{2}\right)$
$\\{1,1\\}$ | $\frac{N(N-1)}{2}$ | $\frac{1}{2}\left(\operatorname{tr}^{2}U-\operatorname{tr}U^{2}\right)$
$\\{1,-1\\}$ | $N^{2}-1$ | $\left|\operatorname{tr}U\right|^{2}-1$
Table 2: Properties of group representations: the dimension $d_{r}$ and the character $\chi_{r}$. We have followed the normalizations in Table 14 of Ref. Drouffe and Zuber (1983). Similar to $\left<\operatorname{tr}(W_{\mathcal{A}})/N\right>$, we can factorize $\left<\left|\operatorname{tr}(W_{\mathcal{A}})^{2}\right|/N^{2}\right>$ and $\left<\operatorname{tr}(W_{\mathcal{A}})^{2}/N^{2}\right>$ into integrals of character functions over single elements of $SU(N)$ using an identity Creutz (1978) $\begin{split}&\int d\Omega\ \Omega_{i_{1}j_{1}}\Omega^{\dagger}_{k_{1}l_{1}}\Omega_{i_{2}j_{2}}\Omega^{\dagger}_{k_{2}l_{2}}\\\ &=\frac{1}{N^{2}-1}\left(\delta_{i_{1}l_{1}}\delta_{j_{1}k_{1}}\delta_{i_{2}l_{2}}\delta_{j_{2}k_{2}}+\delta_{i_{1}l_{2}}\delta_{j_{1}k_{2}}\delta_{i_{2}l_{1}}\delta_{j_{2}k_{1}}\right)\\\ &\hskip 10.0pt-\frac{1}{N(N^{2}-1)}\left(\delta_{i_{1}l_{2}}\delta_{j_{1}k_{1}}\delta_{i_{2}l_{1}}\delta_{j_{2}k_{2}}+\delta_{i_{1}l_{1}}\delta_{j_{1}k_{2}}\delta_{i_{2}l_{2}}\delta_{j_{2}k_{1}}\right).\end{split}$ (78) From Table 2, we recognize that $\left<\left|\operatorname{tr}(W_{\mathcal{A}})^{2}/N^{2}\right|\right>$ is related to $\chi_{1,-1}(P_{x})$. Thus, applying Eq. 
(78) we can derive that $\begin{split}&\int d\Omega\ \chi_{1,-1}(A\Omega P\Omega^{\dagger}B)\\\ &=\operatorname{tr}(AB)\operatorname{tr}(B^{\dagger}A^{\dagger})\langle\chi_{1,-1}\rangle\\\ &\qquad+\operatorname{tr}(AA^{\dagger}B^{\dagger}B)\frac{1}{N}\left(1-\langle\chi_{1,-1}\rangle\right)-1\\\ &=\left[\operatorname{tr}(AB)\operatorname{tr}(B^{\dagger}A^{\dagger})-1\right]\langle\chi_{1,-1}\rangle\\\ &=\chi_{1,-1}(AB)\langle\chi_{1,-1}\rangle\end{split}$ (79) where $\begin{split}\langle\chi_{1,-1}\rangle&\equiv\frac{1}{z}\int dP\ \frac{1}{N^{2}-1}\ \chi_{1,-1}(P)\ e^{\frac{1}{g^{2}}\operatorname{tr}\left(P+P^{\dagger}\right)}.\end{split}$ (80) Iterating this identity within $\left<\left|\operatorname{tr}(W_{\mathcal{A}})^{2}\right|\right>$ gives $\begin{split}&\left<\left|\operatorname{tr}(W_{\mathcal{A}})^{2}\right|\right>\\\ &\quad=N^{2}\langle\chi_{1,-1}\rangle^{A}+(1-\langle\chi_{1,-1}\rangle)\sum_{k=0}^{A-1}\left(\frac{\langle\chi_{1,-1}\rangle-1}{N^{2}}\right)^{k}\\\ &\quad=1+(N^{2}-1)\langle\chi_{1,-1}\rangle^{A}.\end{split}$ (81) Since $\operatorname{tr}(W_{\mathcal{A}})^{2}$ is not a character $\chi_{r}$, attempting to factorize it generates new terms. Specifically, in addition to $\operatorname{tr}(W_{\mathcal{A}^{\prime}})^{2}$ one finds $\operatorname{tr}(W_{\mathcal{A}^{\prime}}^{2})$ for $\mathcal{A}^{\prime}\subset\mathcal{A}$. Instead, a basis involving $\chi_{2}(P)$ and $\chi_{1,-1}(P)$ can be constructed from linear combinations of these traces and satisfies $\int d\Omega\ \chi_{2}(A\Omega P\Omega^{\dagger}B)=\frac{2}{N(N+1)}\chi_{2}(AB)\left\langle\chi_{2}\right\rangle,\\\ $ (82) and $\int d\Omega\ \chi_{1,1}(A\Omega P\Omega^{\dagger}B)=\frac{2}{N(N-1)}\chi_{1,1}(AB)\left\langle\chi_{1,1}\right\rangle,\\\ $ (83) where as in Eq. 
(80) we have factorized using the expectation values of the characters $\begin{split}\left\langle\chi_{2}\right\rangle&\equiv\frac{1}{z}\int dP\ \frac{2}{N(N+1)}\ \chi_{2}(P)\ e^{\frac{1}{g^{2}}\operatorname{tr}\left(P+P^{\dagger}\right)},\\\ \left\langle\chi_{1,1}\right\rangle&\equiv\frac{1}{z}\int dP\ \frac{2}{N(N-1)}\ \chi_{1,1}(P)\ e^{\frac{1}{g^{2}}\operatorname{tr}\left(P+P^{\dagger}\right)}.\end{split}$ (84) Putting these together and iterating gives $\begin{split}\left\langle\operatorname{tr}(W_{\mathcal{A}})^{2}\right\rangle&=\frac{N(N+1)}{2}\left\langle\chi_{2}\right\rangle^{A}+\frac{N(N-1)}{2}\left\langle\chi_{1,1}\right\rangle^{A}.\end{split}$ (85) Combining Eqs. (81) and (85) gives the general expression for the variance of the $SU(N)$ Wilson loop $\begin{split}&\operatorname{Var}[\operatorname{Re}\operatorname{tr}(W_{\mathcal{A}}^{SU(N)})/N]\\\ &=\frac{1}{2N^{2}}\left<\left|\operatorname{tr}(W_{\mathcal{A}})^{2}\right|\right>+\frac{1}{2N^{2}}\left<\operatorname{tr}(W_{\mathcal{A}})^{2}\right>\\\ &\qquad\qquad-\frac{1}{N^{2}}\left<\operatorname{tr}(W_{\mathcal{A}})\right>^{2}\\\ &=\frac{1}{2N^{2}}\bigg{[}1+(N^{2}-1)\langle\chi_{1,-1}\rangle^{A}+\frac{N(N+1)}{2}\left\langle\chi_{2}\right\rangle^{A}\\\ &\qquad\qquad+\frac{N(N-1)}{2}\left\langle\chi_{1,1}\right\rangle^{A}\bigg{]}-e^{-2\sigma^{SU(N)}A}.\end{split}$ (86) The character expectation values can be obtained for general $SU(N)$ gauge groups by numerically evaluating the integrals in Eq. (84). For $SU(2)$, it is straightforward to analytically evaluate the character expectation values appearing in Eq. (84). Not all representations of $SU(2)$ are independent using the enumeration in Table 2, and in particular $\chi_{2}(P)=\chi_{1,-1}(P)$ and $\chi_{1,1}(P)=\chi_{0}(P)=1$. 
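For $SU(2)$, Eq. (73) reduces the character expectation values of Eq. (84) to one-dimensional integrals over a single eigenvalue angle $\theta$, with weight $\propto\sin^{2}\theta\,e^{(4/g^{2})\cos\theta}$ and $\operatorname{tr}U=2\cos\theta$. A quadrature sketch (function names are our own) confirms the redundancies just noted and that the $A=1$ case of Eq. (86) reproduces the directly computed single-plaquette variance:

```python
import numpy as np

def su2_char_expectations(g, npts=200000):
    """Normalized SU(2) character expectations of Eq. (84) by midpoint quadrature."""
    th = (np.arange(npts) + 0.5) * (np.pi / npts)
    w = np.sin(th) ** 2 * np.exp((4.0 / g**2) * np.cos(th))
    w /= w.sum()
    tr1 = 2 * np.cos(th)        # tr U
    tr2 = 2 * np.cos(2 * th)    # tr U^2
    avg = lambda f: float((w * f).sum())
    chi1 = avg(tr1) / 2                   # <chi_1>/N = e^{-sigma}
    chi1m1 = avg(tr1**2 - 1) / 3          # <chi_{1,-1}> / (N^2 - 1)
    chi2 = avg((tr1**2 + tr2) / 2) / 3    # (2/(N(N+1))) <chi_2>
    chi11 = avg((tr1**2 - tr2) / 2)       # (2/(N(N-1))) <chi_{1,1}>
    return chi1, chi1m1, chi2, chi11

def su2_wilson_variance(g, A):
    """Variance of Re tr(W_A)/2 from the SU(2) specialization of Eq. (86)."""
    chi1, chi1m1, chi2, chi11 = su2_char_expectations(g)
    return (1 + 3 * chi1m1**A + 3 * chi2**A + chi11**A) / 8 - chi1 ** (2 * A)
```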
After removing the redundant characters, the remaining integrals $\chi_{j}(P)$ can be solved using the expression of the characters in terms of angles Drouffe and Zuber (1983) $\chi_{j}=\frac{\sin([j+1/2]\alpha)}{\sin(\alpha/2)}$ (87) where $j=0,\frac{1}{2},1\cdots$ index the unique characters of $SU(2)$ and the representation $r=\\{1,-1\\}$ corresponds to $j=1$. With these, one has $\frac{1}{z}\int DP\frac{1}{d_{j}}\chi_{j}(P)\,e^{\frac{1}{g^{2}}\operatorname{tr}(P+P^{\dagger})}=\frac{I_{2j+1}(4/g^{2})}{I_{1}(4/g^{2})}.$ (88) From this we derive $\begin{split}\text{Var}&[\operatorname{Re}\operatorname{tr}(W_{\mathcal{A}}^{SU(2)})/N]\\\ &=\frac{1}{4}+\frac{3}{4}\left(\frac{I_{3}(4/g^{2})}{I_{1}(4/g^{2})}\right)^{A}-e^{-2\sigma^{SU(2)}A}.\end{split}$ (89) ## Appendix B Alternative $SU(2)$ coordinates Figure 12: Colored points show $SU(2)$ Wilson loop variance ratios using the alternative gauge field parameterization defined in Eq. (90). Gray points show analogous variance ratios using the parameterization defined in Sec. IV.1 for comparison and are identical to the results in Fig. 5. As an alternative to the parameterization presented in Sec. 
IV.1, plaquettes $P_{x}\in SU(2)$ can be represented as $\begin{split}P_{x}&=\exp\left(\frac{i\alpha_{x}}{2}\hat{n}_{x}\cdot\vec{\sigma}\right)=\cos\left(\frac{\alpha_{x}}{2}\right)+i\sin\left(\frac{\alpha_{x}}{2}\right)\hat{n}_{x}\cdot\vec{\sigma}.\end{split}$ (90) The $\mathfrak{su}(2)$ unit vector $\hat{n}_{x}$ can be further parameterized as $\hat{n}_{x}=(\cos\phi_{x}\sin\theta_{x},\ \sin\phi_{x}\sin\theta_{x},\ \cos\theta_{x}),$ (91) and a general $SU(2)$ group element $P_{x}$ can be parameterized in terms of the three angles $\begin{split}0\leq\alpha_{x}<2\pi,\hskip 20.0pt0\leq\phi_{x}<2\pi,\hskip 20.0pt0\leq\theta_{x}<\pi.\end{split}$ (92) The Haar measure is given in these coordinates as $\begin{split}dP_{x}&=\frac{1}{4\pi^{2}}\sin^{2}\left(\frac{\alpha_{x}}{2}\right)d\alpha_{x}\sin\theta_{x}d\theta_{x}d\phi_{x}.\end{split}$ (93) The inverse map needed to obtain these angular parameters for an $SU(2)$ matrix $P_{x}$ is given by $\begin{split}\alpha_{x}&=2\ \text{arccos}\left[\frac{1}{2}\left(P_{x}^{11}+P_{x}^{22}\right)\right]\\\ \theta_{x}&=\text{arccos}\left[\frac{P_{x}^{11}-P_{x}^{22}}{2i\sin(\alpha_{x}/2)}\right]\\\ \phi_{x}&=\frac{1}{2}\text{arg}\left[\frac{P_{x}^{21}}{P_{x}^{12}}\right].\end{split}$ (94) As with Eq. (60), these are not entire functions of $P_{x}$, but this is not an obstacle for contour deformation because it is only the parameterization given by Eqs. (90) and (91) that determines whether path integrands can be interpreted as holomorphic functions of the angles $\\{\alpha_{x},\ \theta_{x},\ \phi_{x}\\}$ associated with $P_{x}$. Deformed observables starting with the $(1,1)$ component of $SU(2)$ Wilson loops can be defined using this parameterization. A family of vertical deformations for $\\{\alpha_{x},\ \theta_{x},\ \phi_{x}\\}$ can be defined analogously to the deformation described in Sec. IV.2.
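A quick numerical sanity check of this inverse map: composing Eqs. (90)-(91) with Eq. (94) returns the original angles, up to the expected branch ambiguity in $\phi_{x}$ (the $\tfrac{1}{2}\arg$ in Eq. (94) only fixes $\phi_{x}$ modulo $\pi$, so the check below uses $\phi_{x}\in(-\pi/2,\pi/2]$). A sketch with hypothetical helper names:

```python
import numpy as np

# Pauli matrices
sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]

def su2_from_angles(alpha, theta, phi):
    """Eqs. (90)-(91): P = cos(alpha/2) 1 + i sin(alpha/2) n.sigma."""
    n = (np.cos(phi) * np.sin(theta), np.sin(phi) * np.sin(theta), np.cos(theta))
    n_dot_sigma = sum(ni * si for ni, si in zip(n, sigma))
    return np.cos(alpha / 2) * np.eye(2) + 1j * np.sin(alpha / 2) * n_dot_sigma

def angles_from_su2(P):
    """Inverse map, Eq. (94)."""
    alpha = 2 * np.arccos(0.5 * np.real(P[0, 0] + P[1, 1]))
    theta = np.arccos(np.real((P[0, 0] - P[1, 1]) / (2j * np.sin(alpha / 2))))
    phi = 0.5 * np.angle(P[1, 0] / P[0, 1])
    return alpha, theta, phi

P = su2_from_angles(1.3, 0.7, 0.9)
assert np.allclose(P @ P.conj().T, np.eye(2))   # unitary
assert np.isclose(np.linalg.det(P), 1)          # unit determinant
assert np.allclose(angles_from_su2(P), (1.3, 0.7, 0.9))
```

Round-trip consistency of this kind is what allows the vertical deformations below to be defined directly on the angles rather than on the matrix entries.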
Since $\alpha_{x}$ and $\theta_{x}$ have fixed (non-identified) integration contour endpoints, a constant vertical deformation can only be applied to $\phi_{x}$. For $A=1$ in particular, $\operatorname{Tr}(P_{x})=\cos(\alpha_{x}/2)$, and the trace is independent of the only constant vertical deformation that can be applied. Neither this constant vertical deformation nor non-constant vertical deformations corresponding to Fourier basis cutoffs $\Lambda=1,2$ lead to statistically significant variance reduction with $A=1$. As shown in Fig. 12, for $A>1$ deformed observable results using this parameterization with $\Lambda=0$ do lead to significant variance reduction when compared to undeformed contour results. However, orders of magnitude less variance reduction is obtained for large area Wilson loops using optimized deformed observables with this parameterization when compared to results using the parameterization explored in Sec. IV.1. The fact that the parameterization in Eq. (90) leads to less variance reduction than the parameterization in Sec. IV.1 can be intuitively explained by the inability of constant vertical deformations to decrease the magnitudes of the $(1,1)$ components of (products of) $SU(2)$ matrices using the parameterization Eq. (90). The significance of the difference between the results demonstrates the utility of rewriting observables before deformation in achieving practical StN improvements, as discussed in Sec. II.4. ## Appendix C Regularization terms to avoid overtraining and overlap problems When $\operatorname{Re}{\tilde{S}}$ is significantly different from $S$, we can encounter an overlap problem for training and evaluation; both processes involve factors of $e^{-\operatorname{Re}{\tilde{S}}+S}$ that can have very large magnitude fluctuations in this situation. To mitigate this problem, it is helpful to include regularization terms in the loss function $\mathcal{L}$. 
These terms may bias the exact loss minimum away from the true optimum, but allow closer convergence to that optimum given finite-statistics estimates of $\mathcal{L}$. The strength of these terms is controlled by a small parameter $\epsilon$. We discuss two possible terms here. First, an L2 regularizer Krogh and Hertz (1991) may be used, which simply ensures the parameters controlling the deformation all remain close to zero. Generically labeling those parameters as $\lambda_{i}$, this loss term can be written $\mathcal{L}_{\text{L2}}\equiv\epsilon\sum_{i}\left|\lambda_{i}\right|^{2}.$ (95) In the limit of $\epsilon\rightarrow\infty$, the parameters $\lambda_{i}$ are forced to zero and the optimization procedure must remain at the original manifold. A smaller choice of $\epsilon$ mildly biases the optimization procedure towards the original manifold, such that the loss function and gradients remain feasible to estimate with finite statistics. An alternative approach is to directly penalize the distance between $\operatorname{Re}{\tilde{S}}$ and $S$ using a regularization term, $\mathcal{L}_{\text{act}}\equiv\epsilon\frac{1}{Z}\int dx\,e^{-S(x)}\left|S(x)-\operatorname{Re}{\tilde{S}(x)}\right|.$ (96) This term is minimized when $S=\operatorname{Re}{\tilde{S}}$, providing a bias towards remaining close to the original manifold. Though written as a path integral, this quantity can be estimated using the original samples, much like the main loss function and gradients. Both of these regularizer terms were explored; however, no severe overlap problem was observed during training when deformation parameters were restricted to those contained within the target Wilson loop observables. Final results are based on training without either term. ## References * Aoki _et al._ (2020) S. Aoki _et al._ (Flavour Lattice Averaging Group), Eur. Phys. J. C 80, 113 (2020), arXiv:1902.08191 [hep-lat] . * Detmold _et al._ (2019) W. Detmold, R. G. Edwards, J. J. Dudek, M. Engelhardt, H.-W.
Lin, S. Meinel, K. Orginos, and P. Shanahan (USQCD), Eur. Phys. J. A55, 193 (2019), arXiv:1904.09512 [hep-lat] . * Lehner _et al._ (2019) C. Lehner _et al._ (USQCD), Eur. Phys. J. A 55, 195 (2019), arXiv:1904.09479 [hep-lat] . * Cirigliano _et al._ (2019) V. Cirigliano, Z. Davoudi, T. Bhattacharya, T. Izubuchi, P. E. Shanahan, S. Syritsyn, and M. L. Wagman (USQCD), Eur. Phys. J. A55, 197 (2019), arXiv:1904.09704 [hep-lat] . * Aoyama _et al._ (2020) T. Aoyama _et al._ , Phys. Rept. 887, 1 (2020), arXiv:2006.04822 [hep-ph] . * Wagman and Savage (2017) M. L. Wagman and M. J. Savage, Phys. Rev. D96, 114508 (2017), arXiv:1611.07643 [hep-lat] . * Wagman (2017) M. L. Wagman, _Statistical Angles on the Lattice QCD Signal-to-Noise Problem_ , Ph.D. thesis, U. Washington, Seattle (main) (2017), arXiv:1711.00062 [hep-lat] . * Parisi (1984) G. Parisi, _Common trends in particle and condensed matter physics: Proceedings of Les Houches Winter Advanced Study Institute, February 1980_ , Phys. Rept. 103, 203 (1984). * Lepage (1989) G. P. Lepage, in _Boulder TASI 1989:97-120_ (1989) pp. 97–120. * Beane _et al._ (2009) S. R. Beane, W. Detmold, T. C. Luu, K. Orginos, A. Parreño, M. J. Savage, A. Torok, and A. Walker-Loud, Phys. Rev. D79, 114502 (2009), arXiv:0903.2990 [hep-lat] . * Beane _et al._ (2015) S. R. Beane, W. Detmold, K. Orginos, and M. J. Savage, J. Phys. G 42, 034022 (2015), arXiv:1410.2937 [nucl-th] . * Davoudi _et al._ (2020) Z. Davoudi, W. Detmold, K. Orginos, A. Parreño, M. J. Savage, P. Shanahan, and M. L. Wagman, (2020), arXiv:2008.11160 [hep-lat] . * Gibbs (1986) P. E. Gibbs, Phys. Lett. B182, 369 (1986). * Cohen (2003a) T. D. Cohen, Phys. Rev. Lett. 91, 222001 (2003a), arXiv:hep-ph/0307089 [hep-ph] . * Cohen (2003b) T. D. Cohen, Phys. Rev. Lett. 91, 032002 (2003b), arXiv:hep-ph/0304024 . * Splittorff and Verbaarschot (2007a) K. Splittorff and J. J. M. Verbaarschot, Phys. Rev. Lett. 98, 031601 (2007a), arXiv:hep-lat/0609076 [hep-lat] . 
* Splittorff and Verbaarschot (2007b) K. Splittorff and J. J. M. Verbaarschot, Phys. Rev. D75, 116003 (2007b), arXiv:hep-lat/0702011 [HEP-LAT] . * de Forcrand (2009) P. de Forcrand, _Proceedings, 27th International Symposium on Lattice field theory (Lattice 2009): Beijing, P.R. China, July 26-31, 2009_ , PoS LAT2009, 010 (2009), arXiv:1005.0539 [hep-lat] . * Alexandru _et al._ (2015) A. Alexandru, C. Gattringer, H. P. Schadler, K. Splittorff, and J. J. M. Verbaarschot, Phys. Rev. D91, 074501 (2015), arXiv:1411.4143 [hep-lat] . * Cristoforetti _et al._ (2012) M. Cristoforetti, F. Di Renzo, and L. Scorzato (AuroraScience), Phys. Rev. D86, 074506 (2012), arXiv:1205.3996 [hep-lat] . * Aarts (2013) G. Aarts, Phys. Rev. D88, 094501 (2013), arXiv:1308.4811 [hep-lat] . * Cristoforetti _et al._ (2013) M. Cristoforetti, F. Di Renzo, A. Mukherjee, and L. Scorzato, Phys. Rev. D88, 051501 (2013), arXiv:1303.7204 [hep-lat] . * Mukherjee _et al._ (2013) A. Mukherjee, M. Cristoforetti, and L. Scorzato, Phys. Rev. D88, 051502 (2013), arXiv:1308.0233 [physics.comp-ph] . * Aarts _et al._ (2014) G. Aarts, L. Bongiovanni, E. Seiler, and D. Sexty, JHEP 10, 159 (2014), arXiv:1407.2090 [hep-lat] . * Cristoforetti _et al._ (2014) M. Cristoforetti, F. Di Renzo, G. Eruzzi, A. Mukherjee, C. Schmidt, L. Scorzato, and C. Torrero, Phys. Rev. D89, 114505 (2014), arXiv:1403.5637 [hep-lat] . * Alexandru _et al._ (2016a) A. Alexandru, G. Başar, and P. Bedaque, Phys. Rev. D93, 014504 (2016a), arXiv:1510.03258 [hep-lat] . * Alexandru _et al._ (2016b) A. Alexandru, G. Başar, P. F. Bedaque, G. W. Ridgway, and N. C. Warrington, JHEP 05, 053 (2016b), arXiv:1512.08764 [hep-lat] . * Alexandru _et al._ (2016c) A. Alexandru, G. Başar, P. F. Bedaque, S. Vartak, and N. C. Warrington, Phys. Rev. Lett. 117, 081602 (2016c), arXiv:1605.08040 [hep-lat] . * Fujii _et al._ (2015) H. Fujii, S. Kamata, and Y. Kikukawa, JHEP 12, 125 (2015), [Erratum: JHEP09,172(2016)], arXiv:1509.09141 [hep-lat] . 
* Tanizaki _et al._ (2016) Y. Tanizaki, Y. Hidaka, and T. Hayata, New J. Phys. 18, 033002 (2016), arXiv:1509.07146 [hep-th] . * Alexandru _et al._ (2017a) A. Alexandru, P. F. Bedaque, H. Lamm, and S. Lawrence, Phys. Rev. D96, 094505 (2017a), arXiv:1709.01971 [hep-lat] . * Alexandru _et al._ (2017b) A. Alexandru, G. Başar, P. F. Bedaque, and G. W. Ridgway, Phys. Rev. D95, 114501 (2017b), arXiv:1704.06404 [hep-lat] . * Mori _et al._ (2018) Y. Mori, K. Kashiwa, and A. Ohnishi, PTEP 2018, 023B04 (2018), arXiv:1709.03208 [hep-lat] . * Tanizaki _et al._ (2017) Y. Tanizaki, H. Nishimura, and J. J. M. Verbaarschot, JHEP 10, 100 (2017), arXiv:1706.03822 [hep-lat] . * Alexandru _et al._ (2018a) A. Alexandru, P. F. Bedaque, and N. C. Warrington, Phys. Rev. D98, 054514 (2018a), arXiv:1805.00125 [hep-lat] . * Alexandru _et al._ (2018b) A. Alexandru, G. Başar, P. F. Bedaque, H. Lamm, and S. Lawrence, Phys. Rev. D98, 034506 (2018b), arXiv:1807.02027 [hep-lat] . * Alexandru _et al._ (2018c) A. Alexandru, P. F. Bedaque, H. Lamm, and S. Lawrence, Phys. Rev. D97, 094510 (2018c), arXiv:1804.00697 [hep-lat] . * Alexandru _et al._ (2018d) A. Alexandru, P. F. Bedaque, H. Lamm, S. Lawrence, and N. C. Warrington, Phys. Rev. Lett. 121, 191602 (2018d), arXiv:1808.09799 [hep-lat] . * Kashiwa _et al._ (2019a) K. Kashiwa, Y. Mori, and A. Ohnishi, Phys. Rev. D99, 014033 (2019a), arXiv:1805.08940 [hep-ph] . * Fukuma _et al._ (2019a) M. Fukuma, N. Matsumoto, and N. Umeda, (2019a), arXiv:1912.13303 [hep-lat] . * Fukuma _et al._ (2019b) M. Fukuma, N. Matsumoto, and N. Umeda, Phys. Rev. D100, 114510 (2019b), arXiv:1906.04243 [cond-mat.str-el] . * Kashiwa _et al._ (2019b) K. Kashiwa, Y. Mori, and A. Ohnishi, Phys. Rev. D99, 114005 (2019b), arXiv:1903.03679 [hep-lat] . * Mou _et al._ (2019) Z.-G. Mou, P. M. Saffin, and A. Tranberg, JHEP 11, 135 (2019), arXiv:1909.02488 [hep-th] . * Ulybyshev _et al._ (2020) M. Ulybyshev, C. Winterowd, and S. Zafeiropoulos, Phys. Rev. 
D101, 014508 (2020), arXiv:1906.07678 [cond-mat.str-el] . * Lawrence (2020) S. Lawrence, “Sign problems in quantum field theory: Classical and quantum approaches,” (2020), arXiv:2006.03683 [hep-lat] . * Lawrence and Yamauchi (2021) S. Lawrence and Y. Yamauchi, (2021), arXiv:2101.05755 [hep-lat] . * Alexandru _et al._ (2020) A. Alexandru, G. Basar, P. F. Bedaque, and N. C. Warrington, “Complex paths around the sign problem,” (2020), arXiv:2007.05436 [hep-lat] . * Detmold _et al._ (2020) W. Detmold, G. Kanwar, M. L. Wagman, and N. C. Warrington, Phys. Rev. D 102, 014514 (2020), arXiv:2003.05914 [hep-lat] . * Di Renzo and Eruzzi (2018) F. Di Renzo and G. Eruzzi, Physical Review D 97 (2018), 10.1103/physrevd.97.014503. * Schmidt and Ziesché (2017) C. Schmidt and F. Ziesché, PoS LATTICE2016, 076 (2017), arXiv:1701.08959 [hep-lat] . * Ohnishi _et al._ (2019) A. Ohnishi, Y. Mori, and K. Kashiwa, JPS Conf. Proc. 26, 024011 (2019). * Zambello and Renzo (2018) K. Zambello and F. D. Renzo, “Towards lefschetz thimbles regularization of heavy-dense qcd,” (2018), arXiv:1811.03605 [hep-lat] . * Lüscher and Weisz (2001) M. Lüscher and P. Weisz, JHEP 09, 010 (2001), arXiv:hep-lat/0108014 [hep-lat] . * Range (2013) R. M. Range, _Holomorphic functions and integral representations in several complex variables_ , Vol. 108 (Springer Science & Business Media, 2013). * Bronzan (1988) J. Bronzan, Phys. Rev. D 38, 1994 (1988). * Alexandru _et al._ (2018e) A. Alexandru, P. F. Bedaque, H. Lamm, and S. Lawrence, Phys. Rev. D 97, 094510 (2018e). * Kashiwa and Mori (2020) K. Kashiwa and Y. Mori, Phys. Rev. D 102, 054519 (2020), arXiv:2007.04167 [hep-lat] . * Alexandru _et al._ (2017c) A. Alexandru, G. Basar, P. F. Bedaque, G. W. Ridgway, and N. C. Warrington, Phys. Rev. D 95, 014502 (2017c), arXiv:1609.01730 [hep-lat] . * Bump (2013) D. Bump, “Complexification,” in _Lie Groups_ (Springer New York, New York, NY, 2013) pp. 205–211. * Aarts and Stamatescu (2008) G. Aarts and I.-O. 
Stamatescu, JHEP 09, 018 (2008), arXiv:0807.1597 [hep-lat] . * Sexty (2014) D. Sexty, _Proceedings, 24th International Conference on Ultra-Relativistic Nucleus-Nucleus Collisions (Quark Matter 2014): Darmstadt, Germany, May 19-24, 2014_ , Nucl. Phys. A931, 856 (2014), arXiv:1408.6767 [hep-lat] . * Seiler (2018) E. Seiler, _Proceedings, 35th International Symposium on Lattice Field Theory (Lattice 2017): Granada, Spain, June 18-24, 2017_ , EPJ Web Conf. 175, 01019 (2018), arXiv:1708.08254 [hep-lat] . * Alexandru _et al._ (2018f) A. Alexandru, P. F. Bedaque, H. Lamm, S. Lawrence, and N. C. Warrington, Phys. Rev. Lett. 121, 191602 (2018f). * Papamakarios _et al._ (2019) G. Papamakarios, E. Nalisnick, D. J. Rezende, S. Mohamed, and B. Lakshminarayanan, “Normalizing Flows for Probabilistic Modeling and Inference,” (2019), arXiv:1912.02762 [stat.ML] . * Robbins and Monro (1951) H. Robbins and S. Monro, Ann. Math. Statist. 22, 400 (1951). * Borkar (2009) V. S. Borkar, _Stochastic approximation: a dynamical systems viewpoint_ , Vol. 48 (Springer, 2009) Chap. 2. * Chee and Toulis (2018) J. Chee and P. Toulis, “Convergence diagnostics for stochastic gradient descent with constant step size,” (2018), arXiv:1710.06382 [stat.ML] . * Bottou _et al._ (2018) L. Bottou, F. E. Curtis, and J. Nocedal, “Optimization methods for large-scale machine learning,” (2018), arXiv:1606.04838 [stat.ML] . * Wilson (1974) K. G. Wilson, Phys. Rev. D10, 2445 (1974). * Wadia (2012) S. R. Wadia, (2012), arXiv:1212.2906 [hep-th] . * Gross and Witten (1980) D. Gross and E. Witten, Phys. Rev. D 21, 446 (1980). * Drouffe and Zuber (1983) J.-M. Drouffe and J.-B. Zuber, Phys. Rept. 102, 1 (1983). * Bradbury _et al._ (2018) J. Bradbury, R. Frostig, P. Hawkins, M. J. Johnson, C. Leary, D. Maclaurin, G. Necula, A. Paszke, J. VanderPlas, S. Wanderman-Milne, and Q. Zhang, “JAX: composable transformations of Python+NumPy programs,” (2018). * Kingma and Ba (2017) D. P. Kingma and J. 
Ba, “Adam: A method for stochastic optimization,” (2017), arXiv:1412.6980 [cs.LG] . * Hardt _et al._ (2016) M. Hardt, B. Recht, and Y. Singer, “Train faster, generalize better: Stability of stochastic gradient descent,” (2016), arXiv:1509.01240 [cs.LG] . * Lin _et al._ (2016) J. Lin, R. Camoriano, and L. Rosasco, in _Proceedings of The 33rd International Conference on Machine Learning_, Proceedings of Machine Learning Research, Vol. 48, edited by M. F. Balcan and K. Q. Weinberger (PMLR, New York, New York, USA, 2016) pp. 2340–2348. * Zhang _et al._ (2018) C. Zhang, O. Vinyals, R. Munos, and S. Bengio, “A study on overfitting in deep reinforcement learning,” (2018), arXiv:1804.06893 [cs.LG] . * Raschka (2020) S. Raschka, “Model evaluation, model selection, and algorithm selection in machine learning,” (2020), arXiv:1811.12808 [cs.LG] . * Duane _et al._ (1987) S. Duane, A. Kennedy, B. Pendleton, and D. Roweth, Phys. Lett. B 195, 216 (1987). * Creutz (1980) M. Creutz, Phys. Rev. D 21, 2308 (1980). * Pietarinen (1981) E. Pietarinen, Nucl. Phys. B190, 270 (1981). * de Forcrand and Jahn (2005) P. de Forcrand and O. Jahn, in _3rd International Workshop on Numerical Analysis and Lattice QCD_ (2005) pp. 67–73, arXiv:hep-lat/0503041 . * Kobyzev _et al._ (2020) I. Kobyzev, S. Prince, and M. Brubaker, IEEE Transactions on Pattern Analysis and Machine Intelligence , 1–1 (2020). * Creutz (1978) M. Creutz, J. Math. Phys. 19, 2043 (1978). * Krogh and Hertz (1991) A. Krogh and J. A. Hertz, in _Proceedings of the 4th International Conference on Neural Information Processing Systems_ , NIPS’91 (Morgan Kaufmann Publishers Inc., San Francisco, CA, USA, 1991) p. 950–957.
# Radiative Poincare type eon and its follower Paweł Nurowski Centrum Fizyki Teoretycznej, Polska Akademia Nauk, Al. Lotników 32/46, 02-668 Warszawa, Poland<EMAIL_ADDRESS> ###### Abstract. We consider two consecutive eons $\hat{M}$ and $\check{M}$ from Penrose’s Conformal Cyclic Cosmology and study how the matter content of the past eon ($\hat{M}$) determines the matter content of the present eon ($\check{M}$) by means of the reciprocity hypothesis. We assume that the only matter content in the final stages of the past eon is a spherical wave described by Einstein’s equations with the pure radiation energy-momentum tensor $\hat{T}^{ij}=\hat{\Phi}K^{i}K^{j},\quad\hat{g}_{ij}K^{i}K^{j}=0,$ and with cosmological constant $\hat{\Lambda}$. We solve these Einstein equations, associating to $\hat{M}$ the metric $\hat{g}=t^{-2}\big{(}-{\rm d}t^{2}+h(t)\big{)}$, which is a Lorentzian analog of the Poincaré-Einstein metric known from the theory of conformal invariants. The solution is obtained under the assumption that the 3-dimensional conformal structure $[h]$ on the $\mathscr{I}^{+}$ of $\hat{M}$ is flat, that the metric $\hat{g}$ admits a power series expansion in the time variable $t$, and that $h(0)\in[h]$. Such a solution depends on one arbitrary real function of the radial variable $r$. Applying the reciprocity hypothesis, $\hat{g}\to\check{g}=t^{4}\hat{g}$, we show that the new eon $(\check{M},\check{g})$, created from the one containing a single spherical wave, is filled in its initial state with three types of radiation: (i) the damped spherical wave which continues its life from the previous eon, (ii) the in-going spherical wave obtained as a result of a collision of the wave from the past eon with the Bang hypersurface, and (iii) randomly scattered waves that could be interpreted as perfect fluid with the energy density $\check{\rho}$ and the isotropic pressure $\check{p}$ such that $\check{p}=\tfrac{1}{3}\check{\rho}$.
The metric $\check{g}$ solves Einstein’s equations without a cosmological constant and with the energy-momentum tensor $\check{T}^{ij}=\check{\Phi}K^{i}K^{j}+\check{\Psi}L^{i}L^{j}+(\check{\rho}+\check{p})\check{u}^{i}\check{u}^{j}+\check{p}\check{g}^{ij},$ in which $\check{u}^{i}\check{u}^{j}\check{g}_{ij}=-1$, $\check{g}_{ij}L^{i}L^{j}=0$ and $L^{i}K^{j}\check{g}_{ij}=-2$. The research was funded by the Norwegian Financial Mechanism 2014-2021 with project registration number 2019/34/H/ST1/00636. ## 1\. The setting In this short note we present a model of the bandage region between two consecutive eons from _Penrose’s Conformal Cyclic Cosmology_ (CCC) [2], which has the following properties (we closely follow Paul Tod’s setting, notation, and terminology as presented in [3]): * • The common three-surface $\Sigma$ of $\mathscr{I}^{+}$ of the past eon and the Big Bang of the present eon is equipped with a conformal class $[h_{0}]$ of signature $(+,+,+)$ which _has vanishing Cotton tensor_, i.e.
$[h_{0}]$ is _conformally flat_; in the following we will choose $h_{0}$ to be the _flat_ representative of the conformal class $[h_{0}]$; * • The Poincaré-type extension $(\hat{M},\hat{g})$, with $\hat{M}=]0,\epsilon[\times\Sigma$, of the conformal three-manifold $(\Sigma,[h_{0}])$ has the Lorentzian metric [1]: (1.1) $\hat{g}=t^{-2}(-{\rm d}t^{2}+h_{t});$ Here $t\in]0,\epsilon[$ is a coordinate along the extension $\mathbb{R}_{+}$ of $\Sigma$ in $\hat{M}$, and $h_{t}=h(t,x)$ is a $t$-dependent 1-parameter family of metrics on $\Sigma$ such that $h(0,x)=h_{0}\in[h_{0}]$; here $x$ is a point in $\Sigma$; * • The Poincaré-type metric $\hat{g}$ in $\hat{M}$ satisfies the _pure radiation Einstein’s equations with cosmological constant_ $\hat{\Lambda}$: (1.2) $\hat{R}^{ij}=\hat{\Lambda}\hat{g}^{ij}+\hat{\Phi}K^{i}K^{j};$ Here $K^{i}$ is an _expanding null_ vector field _without shear and without twist_ on $\hat{M}$; in particular we have $\hat{g}_{ij}K^{i}K^{j}=0$; * • The Lorentzian four-metric $g=-{\rm d}t^{2}+h_{t}$ conformal to the Poincaré-type metric $\hat{g}$ in $\hat{M}$ is naturally extended to $M=\hat{M}\cup\Sigma\cup\check{M}$, which is a bundle $\Sigma\to M\stackrel{{\scriptstyle\pi}}{{\to}}I$ over the interval $I=]-\epsilon,\epsilon[\subset\mathbb{R}$ parameterized by $t$, with the following preimages of $]-\epsilon,\epsilon[$: $\pi^{-1}(t>0)=\hat{M}$, $\pi^{-1}(t=0)=\Sigma$, and $\pi^{-1}(t<0)=\check{M}$; * • The metric $g$ is used to define a Lorentzian metric $\check{g}$ in $\check{M}$, which is $\check{g}=t^{2}(-{\rm d}t^{2}+h_{t}),\quad\quad\mathrm{for}\quad t<0;$ * • Note that for $t>0$ we have $\hat{g}=\hat{\Omega}^{2}g$ and that for $t<0$ we have $\check{g}=\check{\Omega}^{2}g$ with $\check{\Omega}=-\hat{\Omega}^{-1}=t$.
One of the aims of this note is to identify the above four-manifold $M$, equipped with the three Lorentzian metrics $\hat{g}$, $g$ and $\check{g}$, with the _bandage region_ [3] of _Penrose’s cyclic Universe_ [2] in which the _past eon ends_ filled _with only one spherical gravitational wave_ propagating along the null vector $K^{i}$. Forcing the Poincaré-type extension metric $\hat{g}$ to satisfy Einstein’s equations (1.2) was the first step to achieve this aim. Another aim is to see how the gravitational wave contained in the past eon will change into the matter content at the beginning of the present eon, by means of _Penrose’s reciprocal hypothesis_, stating that the three metrics $\hat{g}$, $g$ and $\check{g}$ in the bandage region should be related via $\hat{g}=\Omega^{-2}g$ and $\check{g}=\Omega^{2}g$. As we show below, under further simplifying assumptions, the explicit form of the Poincaré-type metric $\hat{g}$ can be easily found up to an arbitrarily prescribed accuracy, and as a byproduct one gets _remarkably pleasant_ consequences of the so-obtained $\hat{g}$ for $\check{g}$, and in particular for the _matter content of the spacetime_ $(\check{M},\check{g})$, which is interpreted as the beginning of the _present eon_. ## 2\. The ansatz and the model for the past eon In the _theory of conformal invariants_ as presented by Fefferman and Graham in [1], given a conformal class $[h]$ on $\Sigma$, one obtains the system of conformal invariants of $[h]$ in terms of the (pseudo)Riemannian invariants of a certain (pseudo)Riemannian metric $\hat{g}$. This metric is naturally associated with the conformal class $[h]$ via $\hat{g}=t^{-2}(\varepsilon{\rm d}t^{2}+h_{t})$, where $h_{t}$ is a 1-parameter family of metrics on $\Sigma$, such that $h_{0}=h$ and $h$ is a representative of $[h]$. The metric $\hat{g}$ is defined for $t>0$ and the value $\varepsilon=1$ is chosen.
To encode the conformal properties of $[h]$, this metric is demanded to be _unique_. This is done by the requirements that $\hat{g}$ is _Einstein_, $\hat{Ric}(\hat{g})=\hat{\Lambda}\hat{g}$, and that $h_{t}$ is real analytic and symmetric in $t$. We present here a _milder version of this construction_, applied to the conformally flat Riemannian structure $(\Sigma,[h])$ on a three-dimensional $\Sigma$, to obtain the metric $\hat{g}$ of the past eon with desirable physical properties. In our ‘milder version’ we do as follows: * (a) We replace $\varepsilon=1$ by $\varepsilon=-1$ - to have the Lorentzian signature of $\hat{g}$; * (b) We replace the _Einstein condition_ $\hat{Ric}(\hat{g})=\hat{\Lambda}\hat{g}$ by the Einstein equation (1.2) - to have the past eon filled by a spherical gravitational wave; * (c) And we drop the condition that $h_{t}$ is symmetric in the variable $t$ - to have more flexibility on the matter content of the present eon $\check{M}$. Since in our milder-than-Fefferman-Graham setting we do not have uniqueness theorems as in [1], the resulting past-eon metric $\hat{g}$ is not rigidly constrained. Thus, instead of working with the most general form of the 1-parameter family of metrics $h_{t}$ _we make a physically motivated ansatz_ for them, hoping that it is compatible with the Einstein equations (1.2).
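Before introducing the wave, it is worth recording the simplest instance of the modified construction (a check worked here for illustration, not taken from [1]): with $h_{t}\equiv h_{0}$ flat and $\varepsilon=-1$, the substitution $t=e^{-\tau}$ gives

```latex
\hat{g} \;=\; t^{-2}\big(-{\rm d}t^{2}+h_{0}\big)
        \;=\; -{\rm d}\tau^{2}+e^{2\tau}\,h_{0},
% the flat slicing of de Sitter space, which satisfies
\hat{R}_{ij} \;=\; 3\,\hat{g}_{ij},
% i.e. equations (1.2) with \hat{\Phi}=0 and \hat{\Lambda}=3.
```

so the trivial, wave-free member of the family is de Sitter space; this is consistent with the value $\hat{\Lambda}\simeq 3$ obtained in Theorem 1 below.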
We start with a conformal class $[h_{0}]$ represented by the flat 3-dimensional metric $h_{0}=\frac{2r^{2}{\rm d}z{\rm d}\bar{z}}{(1+\frac{z\bar{z}}{2})^{2}}+{\rm d}r^{2}.$ Then as $h_{t}$ we take the _spherically symmetric_ 1-parameter family $h_{t}=\frac{2r^{2}\big{(}1+\nu(t,r)\big{)}{\rm d}z{\rm d}\bar{z}}{(1+\frac{z\bar{z}}{2})^{2}}+\big{(}1+\mu(t,r)\big{)}{\rm d}r^{2},$ where the unknown functions $\nu=\nu(t,r)$ and $\mu=\mu(t,r)$ are both _real analytic_ in the variable $t$ and such that $\nu(0,r)=0\quad\mathrm{and}\quad\mu(0,r)=0.$ This obviously satisfies $h_{t=0}=h_{0}$, and because of the analyticity assumption we have $\nu(t,r)=\sum_{i=1}^{\infty}a_{i}(r)t^{i}\quad\mathrm{and}\quad\mu(t,r)=\sum_{i=1}^{\infty}b_{i}(r)t^{i},$ with a set of differentiable functions $a_{i}=a_{i}(r)$ and $b_{i}=b_{i}(r)$ depending on the $r$ variable only. This leads to the following ansatz for the pre-Poincaré-type metric $\hat{g}$ in $\hat{M}$: (2.1) $\hat{g}=t^{-2}\,\Big{(}\,-{\rm d}t^{2}\,+\,\frac{2r^{2}\big{(}\,1+\sum_{i=1}^{\infty}a_{i}(r)t^{i}\,\big{)}{\rm d}z{\rm d}\bar{z}}{(1+\frac{z\bar{z}}{2})^{2}}\,+\,\big{(}\,1+\sum_{i=1}^{\infty}b_{i}(r)t^{i}\,\big{)}{\rm d}r^{2}\,\Big{)}.$ Our (pre)past eon manifold $\hat{M}$ is parameterized by $t>0$, $r>0$ and $z\in\mathbb{C}\cup\\{\infty\\}$. We now consider the following null vector field $K$ on $\hat{M}$: $K=\partial_{t}+\Big{(}\,1+\sum_{i=1}^{\infty}b_{i}(r)t^{i}\,\Big{)}^{-\tfrac{1}{2}}\partial_{r}.$ It is tangent to a congruence of null geodesics without shear and twist, which represents light rays emanating from the source at the surface $r=0$. We require that the Poincaré-type metric (2.1) satisfies the Einstein equations (1.2) with this null vector field $K$ and some functions $\hat{\Phi}$ and $\hat{\Lambda}$. We have the following theorem/conjecture. Theorem 1.
If the metric (2.2) $\displaystyle\hat{g}=$ $\displaystyle t^{-2}(-{\rm d}t^{2}+h_{t})=$ $\displaystyle t^{-2}\,\Big{(}\,-{\rm d}t^{2}\,+\,\frac{2r^{2}\big{(}\,1+\nu(t,r)\,\big{)}{\rm d}z{\rm d}\bar{z}}{(1+\frac{z\bar{z}}{2})^{2}}\,+\,\big{(}\,1+\mu(t,r)\,\big{)}{\rm d}r^{2}\,\Big{)}=$ $\displaystyle t^{-2}\,\Big{(}\,-{\rm d}t^{2}\,+\,\frac{2r^{2}\big{(}\,1+\sum_{i=1}^{\infty}a_{i}(r)t^{i}\,\big{)}{\rm d}z{\rm d}\bar{z}}{(1+\frac{z\bar{z}}{2})^{2}}\,+\,\big{(}\,1+\sum_{i=1}^{\infty}b_{i}(r)t^{i}\,\big{)}{\rm d}r^{2}\,\Big{)}$ satisfies Einstein’s equations (2.3) $\hat{E}{}_{ij}:=\hat{R}{}_{ij}-\hat{\Lambda}\hat{g}{}_{ij}-\hat{\Phi}\hat{K}{}_{i}\hat{K}{}_{j}=0$ with (2.4) $K=K^{i}\partial_{i}=\partial_{t}+\Big{(}\,1+\sum_{i=1}^{\infty}b_{i}(r)t^{i}\,\Big{)}^{-\tfrac{1}{2}}\partial_{r},\quad\quad\hat{K}_{i}=\hat{g}_{ij}K^{j},$ then we have: * • The coefficients $a_{1}(r)$, $a_{2}(r)$, $b_{1}(r)$ and $b_{2}(r)$ identically vanish, $a_{1}(r)=a_{2}(r)=b_{1}(r)=b_{2}(r)=0$, and the power series expansion of $h_{t}-h_{0}$ starts at the $t^{3}$ terms, $h_{t}=h_{0}+t^{3}\chi(r)+\mathcal{O}(t^{4})$. * • The metric $\hat{g}$, or what is the same, the power series expansions $\nu(t,r)=\sum_{i=1}^{\infty}a_{i}(r)t^{i}$ and $\mu(t,r)=\sum_{i=1}^{\infty}b_{i}(r)t^{i}$, are totally determined up to infinite order by an arbitrary differentiable function $f=f(r)$. * • More precisely, the Einstein equations $\hat{E}{}_{ij}=\mathcal{O}(t^{k+1})$ solved up to an order $k$, together with an arbitrary differentiable function $f=f(r)$, uniquely determine $\nu(t,r)$ and $\mu(t,r)$ up to an order $(k+2)$.
* • In the lowest order the solution reads: $\nu=\frac{f}{r^{3}}t^{3}+\mathcal{O}(t^{4})\quad\mathrm{and}\quad\mu=-\frac{2f}{r^{3}}t^{3}+\mathcal{O}(t^{4});$ The energy function $\hat{\Phi}$ and the cosmological constant $\hat{\Lambda}$ are: $\hat{\Phi}=3\frac{f^{\prime}}{r^{3}}t^{6}+\mathcal{O}(t^{7})\quad\mathrm{and}\quad\hat{\Lambda}=3+\mathcal{O}(t^{k+3});$ the Weyl tensor of the solution is $\hat{W}^{i}{}_{jkl}=\mathcal{O}(t).$ In particular, the Weyl tensor $\hat{W}^{i}{}_{jkl}$ vanishes at $t=0$ and $\hat{\Lambda}>0$ there. With the use of computers we calculated this solution up to the order $k=10$, finding explicitly $\nu=\sum_{k=3}^{10}a_{k}t^{k}$ and $\mu=\sum_{k=3}^{10}b_{k}t^{k}$. The formulas are compact enough up to $k=8$ and up to the order $k=8$ they read: $\displaystyle\nu(t,r)=$ $\displaystyle f\tfrac{t^{3}}{r^{3}}\,-\,\tfrac{3}{4}f^{\prime}\tfrac{t^{4}}{r^{4}}+\tfrac{1}{10}\big{(}-2rf^{\prime}+3r^{2}f^{\prime\prime}\big{)}\tfrac{t^{5}}{r^{5}}+$ $\displaystyle\tfrac{1}{24}\big{(}3f^{2}-3rf^{\prime}+3r^{2}f^{\prime\prime}-2r^{3}f^{(3)}\big{)}\tfrac{t^{6}}{r^{6}}+$ $\displaystyle\tfrac{r}{280}\big{(}-24f^{\prime}-105ff^{\prime}+24rf^{\prime\prime}-12r^{2}f^{(3)}+5r^{3}f^{(4)}\big{)}\tfrac{t^{7}}{r^{7}}-$ $\displaystyle\tfrac{r}{960}\big{(}60f^{\prime}+288ff^{\prime}-150rf^{\prime}{}^{2}-60rf^{\prime\prime}-216rff^{\prime\prime}+30r^{2}f^{(3)}-10r^{3}f^{(4)}+3r^{4}f^{(5)}\big{)}\tfrac{t^{8}}{r^{8}}+$ $\displaystyle\mathcal{O}(\big{(}\tfrac{t}{r}\big{)}^{9})$ $\displaystyle\mu(t,r)=$ $\displaystyle-2f\tfrac{t^{3}}{r^{3}}\,+\,\tfrac{3}{4}f^{\prime}\tfrac{t^{4}}{r^{4}}-\tfrac{1}{5}f^{\prime\prime}\tfrac{t^{5}}{r^{5}}\,+\,\tfrac{1}{24}\big{(}39f^{2}+r^{3}f^{(3)}\big{)}\tfrac{t^{6}}{r^{6}}\,-\,\tfrac{r}{280}\big{(}390ff^{\prime}+2r^{3}f^{(4)}\big{)}\tfrac{t^{7}}{r^{7}}+$ $\displaystyle\tfrac{r}{960}\big{(}-18ff^{\prime}+300rf^{\prime}{}^{2}+378rff^{\prime\prime}+r^{4}f^{(5)}\big{)}\tfrac{t^{8}}{r^{8}}+\mathcal{O}(\big{(}\tfrac{t}{r}\big{)}^{9}).$ For a 
solution up to this order we find that: $\displaystyle\hat{\Phi}\,=$ $\displaystyle 3r^{3}f^{\prime}\tfrac{t^{6}}{r^{6}}\,+\,3r^{3}\big{(}f^{\prime}-rf^{\prime\prime}\big{)}\tfrac{t^{7}}{r^{7}}\,+\,\tfrac{3r^{3}}{2}\big{(}2f^{\prime}-2rf^{\prime\prime}+r^{2}f^{(3)}\big{)}\tfrac{t^{8}}{r^{8}}\,+$ $\displaystyle\tfrac{r^{3}}{2}\big{(}6f^{\prime}+6ff^{\prime}-6rf^{\prime\prime}+3r^{2}f^{(3)}-r^{3}f^{(4)}\big{)}\tfrac{t^{9}}{r^{9}}+$ $\displaystyle\tfrac{r^{3}}{8}\big{(}24f^{\prime}+66ff^{\prime}-12rf^{\prime}{}^{2}-24rf^{\prime\prime}-30rff^{\prime\prime}+12r^{2}f^{(3)}-4r^{3}f^{(4)}+r^{4}f^{(5)}\big{)}\tfrac{t^{10}}{r^{10}}+$ $\displaystyle\tfrac{r^{3}}{40}\big{(}120f^{\prime}+522ff^{\prime}-177rf^{\prime}{}^{2}-120rf^{\prime\prime}-378rff^{\prime\prime}+93r^{2}f^{\prime}f^{\prime\prime}+60r^{2}f^{(3)}+90r^{2}ff^{(3)}-20r^{3}f^{(4)}+5r^{4}f^{(5)}-r^{5}f^{(6)}\big{)}\tfrac{t^{11}}{r^{11}}+$ $\displaystyle\mathcal{O}(\Big{(}\tfrac{t}{r}\Big{)}^{12}),$ $\hat{\Lambda}=3+\mathcal{O}(t^{9}).$ I have no patience to type the Weyl tensor components up to high order. It is enough to say that, up to the 4th order in $t$, modulo a nonzero constant tensor $C^{i}{}_{jkl}$, it is equal to: $\hat{W}^{i}{}_{jkl}=\Big{(}\frac{f}{r^{2}}\frac{t}{r}-\frac{f^{\prime}}{r}\frac{t^{2}}{r^{2}}+\frac{f^{\prime\prime}}{2}\frac{t^{3}}{r^{3}}\Big{)}C^{i}{}_{jkl}+\mathcal{O}(\Big{(}\tfrac{t}{r}\Big{)}^{4}).$ Of course, for the positivity of the energy density $\hat{\Phi}$ close to the surface $\mathscr{I}^{+}$ of $\hat{M}$ we need $f^{\prime}>0.$ ###### Corollary 2.1. The Poincaré-type metric (2.2) can be interpreted as the ending stage of the evolution of the past eon in Penrose’s CCC. The eon has a positive cosmological constant $\hat{\Lambda}\simeq 3$ and is filled with spherically symmetric pure radiation moving along the null congruence generated by the vector field $K$. ## 3\.
Using reciprocity for the model of the present eon Now, following the Penrose-Tod reciprocal hypothesis procedure, we summarize the properties of the spacetime $\check{M}$ equipped with the metric $\check{g}$ obtained from $\hat{g}$ as in Theorem 1, by the reciprocal change $\check{\Omega}\to-\hat{\Omega}^{-1}=t$. In other words, we are now interested in the properties of the metric $\check{g}=t^{4}\hat{g}$. We have the following theorem. Theorem 2. Assume that the metric $\hat{g}$ as in (2.2) satisfies the Einstein equations (2.3)-(2.4), $\hat{E}{}_{ij}=0$. Then, the reciprocal metric $\displaystyle\check{g}=$ $\displaystyle t^{2}\,\Big{(}\,-{\rm d}t^{2}\,+\,\frac{2r^{2}\big{(}\,1+\nu(t,r)\,\big{)}{\rm d}z{\rm d}\bar{z}}{(1+\frac{z\bar{z}}{2})^{2}}\,+\,\big{(}\,1+\mu(t,r)\,\big{)}{\rm d}r^{2}\,\Big{)}=$ $\displaystyle t^{2}\,\Big{(}\,-{\rm d}t^{2}\,+\,\frac{2r^{2}\big{(}\,1+\sum_{i=1}^{\infty}a_{i}(r)t^{i}\,\big{)}{\rm d}z{\rm d}\bar{z}}{(1+\frac{z\bar{z}}{2})^{2}}\,+\,\big{(}\,1+\sum_{i=1}^{\infty}b_{i}(r)t^{i}\,\big{)}{\rm d}r^{2}\,\Big{)}$ satisfies the Einstein equations (3.1) $\check{E}{}_{ij}=\check{R}_{ij}-\check{\Phi}\check{K}_{i}\check{K}_{j}-\check{\Psi}\check{L}_{i}\check{L}_{j}-(\check{\rho}+\check{p})\check{u}_{i}\check{u}_{j}-\tfrac{1}{2}(\check{\rho}-\check{p})\check{g}_{ij}=0.$ Here $\check{K}_{i}$ and $\check{L}_{i}$ are the null 1-forms corresponding to the pair of outgoing-ingoing null vector fields $K=K^{i}\partial_{i}=\partial_{t}+\Big{(}\,1+\sum_{i=1}^{\infty}b_{i}(r)t^{i}\,\Big{)}^{-\tfrac{1}{2}}\partial_{r}\quad\mathrm{and}\quad L=L^{i}\partial_{i}=\partial_{t}-\Big{(}\,1+\sum_{i=1}^{\infty}b_{i}(r)t^{i}\,\Big{)}^{-\tfrac{1}{2}}\partial_{r},$ via $\check{K}_{i}=\check{g}_{ij}K^{j}$ and $\check{L}_{i}=\check{g}_{ij}L^{j}$, and the 1-form $\check{u}_{i}$ corresponds to the future oriented (note that now $t<0$!)
timelike unit vector field $\check{u}=\check{u}^{i}\partial_{i}=-t^{-1}\partial_{t},$ via $\check{u}_{i}=\check{g}_{ij}\check{u}^{j}$. Before giving the explicit formulas for the power expansions of the functions $\check{\Phi}$, $\check{\Psi}$, $\check{\rho}$ and $\check{p}$ appearing in this theorem, we make the following remark. Remark. * • The Einstein equations (3.1) are equations with an energy-momentum tensor consisting of gravitational radiation propagating with spherical fronts outward (along $K$) and inward (along $L$); it also consists of a perfect fluid moving along the present eon’s cosmological time $T=-\int t\,{\rm d}t$. Each front of the spherical wave present in the past eon that reaches the $t=0$ surface produces, in the present eon, (i) a _spherical outward wave_, going along $K$ out of this sphere with energy density $\check{\Phi}$, (ii) a _spherical inward wave_, going along $L$ to the center of this sphere with energy density $\check{\Psi}$, and (iii) a bit of a _perfect fluid_ with energy density $\check{\rho}$ and isotropic pressure $\check{p}$.
For the solutions $\nu(t,r)$, $\mu(t,r)$ of the past eon’s Einstein equations (2.3)-(2.4), which were given in terms of the power series expansions as $\nu(t,r)=\sum_{i=3}^{k+2}a_{i}(r)t^{i}+\mathcal{O}(t^{k+3})$ and $\mu(t,r)=\sum_{i=3}^{k+2}b_{i}(r)t^{i}+\mathcal{O}(t^{k+3})$ in Theorem 1, the formulae for the power series expansions of the energy densities $\check{\Phi}$, $\check{\Psi}$, $\check{\rho}$ and the pressure $\check{p}$ are as follows: $\displaystyle\check{\Phi}=$ $\displaystyle-\frac{9f}{r^{3}}t^{-3}\,+\,\frac{9f^{\prime}}{r^{3}}t^{-2}\,+\,\frac{1}{2r^{4}}\big{(}8f^{\prime}-11rf^{\prime\prime}\big{)}t^{-1}\,+\,\frac{3}{4r^{5}}\big{(}5f^{\prime}-5rf^{\prime\prime}+3r^{2}f^{(3)}\big{)}\,+$ $\displaystyle\frac{9}{40r^{6}}\big{(}16f^{\prime}+5ff^{\prime}-16rf^{\prime\prime}+8r^{2}f^{(3)}-3r^{3}f^{(4)}\big{)}t\,+$ $\displaystyle\frac{1}{120r^{7}}\big{(}420f^{\prime}+1068ff^{\prime}-30rf^{\prime}{}^{2}-420rf^{\prime\prime}-384rff^{\prime\prime}+210r^{2}f^{(3)}-70r^{3}f^{(4)}+19r^{4}f^{(5)}\big{)}t^{2}\,+$ $\displaystyle\dots+\mathcal{O}\big{(}t^{k-3}\big{)},$ $\displaystyle\check{\Psi}=$ $\displaystyle-\frac{9f}{r^{3}}t^{-3}\,+\,\frac{6f^{\prime}}{r^{3}}t^{-2}\,+\,\frac{1}{2r^{4}}\big{(}2f^{\prime}-5rf^{\prime\prime}\big{)}t^{-1}\,+\,\frac{3}{4r^{5}}\big{(}f^{\prime}-rf^{\prime\prime}+r^{2}f^{(3)}\big{)}\,+$ $\displaystyle\frac{1}{40r^{6}}\big{(}24f^{\prime}-75ff^{\prime}-24rf^{\prime\prime}+12r^{2}f^{(3)}-7r^{3}f^{(4)}\big{)}t\,+$ $\displaystyle\frac{1}{60r^{7}}\big{(}30f^{\prime}+39ff^{\prime}+75rf^{\prime}{}^{2}-30rf^{\prime\prime}+33rff^{\prime\prime}+15r^{2}f^{(3)}-5r^{3}f^{(4)}+2r^{4}f^{(5)}\big{)}t^{2}\,+$ $\displaystyle\dots+\mathcal{O}\big{(}t^{k-3}\big{)},$ $\displaystyle\check{\rho}=$ $\displaystyle 3t^{-4}+\frac{18f}{r^{3}}t^{-1}\,-\,\frac{18f^{\prime}}{r^{3}}\,+\,\frac{-6f^{\prime}+9rf^{\prime\prime}}{r^{4}}t-\,\frac{3}{4r^{6}}\big{(}9f^{2}+3rf^{\prime}-3r^{2}f^{\prime\prime}+2r^{3}f^{(3)}\big{)}t^{2}\,+$
$\displaystyle\frac{3}{20r^{6}}\big{(}-24f^{\prime}+105ff^{\prime}+24rf^{\prime\prime}-12r^{2}f^{(3)}+5r^{3}f^{(4)}\big{)}t^{3}\,-$ $\displaystyle\frac{1}{20r^{7}}\big{(}60f^{\prime}+96ff^{\prime}+120rf^{\prime}{}^{2}-60rf^{\prime\prime}+72rff^{\prime\prime}+30r^{2}f^{(3)}-10r^{3}f^{(4)}+3r^{4}f^{(5)}\big{)}t^{4}\,+$ $\displaystyle\dots+\mathcal{O}\big{(}t^{k-1}\big{)},$ $\displaystyle\check{p}=$ $\displaystyle t^{-4}+\frac{6f}{r^{3}}t^{-1}\,+\,\frac{1}{r^{4}}\big{(}2f^{\prime}-rf^{\prime\prime}\big{)}t+\,\frac{1}{2r^{6}}\big{(}18f^{2}+3rf^{\prime}-3r^{2}f^{\prime\prime}+r^{3}f^{(3)}\big{)}t^{2}\,-$ $\displaystyle\frac{3}{20r^{6}}\big{(}-8f^{\prime}+45ff^{\prime}+8rf^{\prime\prime}-4r^{2}f^{(3)}+r^{3}f^{(4)}\big{)}t^{3}\,+$ $\displaystyle\frac{1}{30r^{7}}\big{(}30f^{\prime}+57ff^{\prime}+45rf^{\prime}{}^{2}-30rf^{\prime\prime}+39rff^{\prime\prime}+15r^{2}f^{(3)}-5r^{3}f^{(4)}+r^{4}f^{(5)}\big{)}t^{4}\,+$ $\displaystyle\dots+\mathcal{O}\big{(}t^{k-1}\big{)}.$ In these formulas all the _dotted_ terms are explicitly determined in terms of $f$ and its derivatives (I was lazy, and typed only the terms adapted to the choice $k=6$ in Theorem 1). The following remarks are in order: Remarks. * • Note that since in $\check{M}$ the time $t<0$, the requirement that the energy densities are positive near the Big Bang hypersurface $t=0$ implies that $f>0$ in addition to $f^{\prime}>0$, the requirement we got from the past eon. Indeed, the leading terms in $\check{\Phi}$ and $\check{\Psi}$ are $\check{\Phi}=\check{\Psi}=-\frac{9f}{r^{3}}t^{-3}$, hence $\check{\Phi}$ and $\check{\Psi}$ are both positive in the regime $t\to 0^{-}$ provided that $f>0$. Note also that $f>0$ and $f^{\prime}>0$ are the only conditions needed for the positivity of the energy densities, as the leading term in $\check{\rho}$ is $\check{\rho}\simeq 3t^{-4}$, and is positive regardless of the sign of $t$. * • Remarkably, the leading terms in $\check{\rho}$ and $\check{p}$, i.e.
the terms with negative powers in $t$, are proportional to each other with the numerical factor _three_. We have $\check{p}=\tfrac{1}{3}\check{\rho}+\mathcal{O}(t^{0}).$ This means that immediately after the Bang, apart from the matter content of two spherical ingoing and outgoing waves in the new eon, there is also scattered _radiation_ there, described by the perfect fluid with $\check{p}=\tfrac{1}{3}\check{\rho}$. So what the _Penrose-Tod scenario does to the new eon out of a single spherical wave in the past eon_ is to split this wave into _three portions of radiation: the two spherical waves_, one a damped continuation from the previous eon, the other focusing, _and in addition a lump of scattered radiation described by statistical physics_. ## References * [1] Fefferman C. and Graham C. R., (2012), ‘The ambient metric’, _Annals of Mathematics Studies_, 178, Princeton University Press, Princeton, NJ. * [2] Penrose R., (2010), ‘Cycles of Time: An Extraordinary New View of the Universe’, Bodley Head. * [3] Tod K. P., (2015), ‘The equations of Conformal Cyclic Cosmology’, _Gen. Rel. Grav._ 47, https://doi.org/10.1007/s10714-015-1859-7.
# Covering a compact space by fixed-radius or growing random balls David J. Aldous Department of Statistics, 367 Evans Hall # 3860, U.C. Berkeley CA 94720<EMAIL_ADDRESS>www.stat.berkeley.edu/users/aldous. ###### Abstract Simple random coverage models, well studied in Euclidean space, can also be defined on a general compact metric space. By analogy with the geometric models, with the discrete coupon collector’s problem, and with cover times for finite Markov chains, one expects a “weak concentration” bound for the distribution of the cover time to hold under minimal assumptions. We give two such results, one for random fixed-radius balls and the other for sequentially arriving randomly-centered and deterministically growing balls. Each is in fact a simple application of a different, more general bound, the former concerning coverage by i.i.d. random sets with arbitrary distribution, and the latter concerning hitting times for Markov chains with a strong monotonicity property. The growth model seems generally more tractable, and we record some basic results and open problems for that model. ## 1 Introduction Analogs of the classical coupon collector’s problem have been extensively studied in several different contexts. One context is geometric: covering by (for instance) random balls in Euclidean space [14, 19]. Another context involves the time for an irreducible finite-state Markov chain to visit every state. Systematic study of that cover time $C_{MC}$, particularly for the case of random walks on graphs, started in the 1980s [1]. In any context, study of the expectation of the cover time (or more refined study of exact limit rescaled distributions) necessarily depends on the specifics of a model, and has been carried out via explicit calculations for many models.
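As a numerical warm-up (our own illustration, not taken from the works cited here; all function names are ours), the weak-concentration phenomenon s.d.$(T_{n})/\mathbb{E}T_{n}\to 0$ for the classical coupon collector time is easy to observe by direct simulation:

```python
import random

def coupon_collector_time(n, rng):
    """Number of uniform draws from {0, ..., n-1} needed to see every coupon."""
    seen, draws = set(), 0
    while len(seen) < n:
        seen.add(rng.randrange(n))
        draws += 1
    return draws

def ratio_sd_over_mean(n, reps=2000, seed=0):
    """Monte Carlo estimate of s.d.(T_n) / E[T_n]."""
    rng = random.Random(seed)
    ts = [coupon_collector_time(n, rng) for _ in range(reps)]
    mean = sum(ts) / reps
    var = sum((t - mean) ** 2 for t in ts) / reps
    return var ** 0.5 / mean
```

The ratio visibly decreases in $n$; classically it behaves like a constant over $\log n$.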
However, one expects that the “weak concentration” property of the coupon collector time $T_{n}$ (that s.d.$(T_{n})/\mathbb{E}T_{n}\to 0$ as $n\to\infty$) should extend quite generally to other cover time contexts, and should hold under minimal assumptions even when one cannot calculate the expectation explicitly. Indeed this is known to be true in the Markov chain context (see section 6). The purpose of this article is to study one analog of geometric covering, in which the Euclidean space is replaced by a metric space. Another part of our purpose is to spotlight two different general methods (known, but apparently not well known) for showing weak concentration in general settings without calculating the expectation of the cover time (other than its order of magnitude). In each of sections 2 and 3 we specify a model (fixed-radius or growing random balls), recall the relevant general method, and show that a concentration bound is obtained very easily using that method. The growth model seems worthy of further study: we give some more basic results in section 4 and pose some challenging open problems. The special case of the circle is outlined in section 5. Further discussion of models and methodology is deferred to section 6. ## 2 Covering with fixed-radius random balls Here we indicate how a concentration result for covering, obtainable on Euclidean space in sharp form by explicit calculation [14], can be extended to weak bounds in a very general setting. Take a compact metric space $(S,\rho)$. Let $\mu$ be a probability measure on $S$ with full support, and for $r>0$ define $\eta(r):=\inf_{s}\mu(\mathrm{ball}(s,r))>0$ where $\mathrm{ball}(s,r)=\\{s^{\prime}:\rho(s,s^{\prime})\leq r\\}$. Write $\sigma_{1},\sigma_{2},\ldots$ for i.i.d. random points of $S$ from distribution $\mu$. For fixed $r_{0}>0$ consider the random subset $\mathcal{R}_{n}=\mathcal{R}_{n}^{(r_{0})}:=\cup_{1\leq i\leq n}\mathrm{ball}(\sigma_{i},r_{0}).$ We call this the fixed-radius model.
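For concreteness, the fixed-radius model on a circle of circumference $L$ with uniform $\mu$ is easy to simulate (our own illustrative sketch; names are ours): the union of arcs of radius $r_{0}$ covers the circle exactly when every gap between cyclically adjacent centers is at most $2r_{0}$, so the number of balls needed to cover can be found directly.

```python
import random

def circle_cover_time(L, r0, rng):
    """Fixed-radius model on a circle of circumference L with uniform mu.
    Centers arrive one at a time; the union of the arcs of radius r0
    covers the circle exactly when every gap between cyclically adjacent
    centers is at most 2*r0.  Returns the number of balls used."""
    centers = []
    while True:
        centers.append(rng.uniform(0.0, L))
        centers.sort()
        gaps = [b - a for a, b in zip(centers, centers[1:])]
        gaps.append(L - centers[-1] + centers[0])  # wrap-around gap
        if max(gaps) <= 2.0 * r0:
            return len(centers)

rng = random.Random(1)
samples = [circle_cover_time(L=10.0, r0=0.5, rng=rng) for _ in range(500)]
```

Since at least $L/2r_{0}$ arcs are needed, every sample here is at least $10$.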
Consider the cover time $C=C^{(r_{0})}:=\min\\{n:\mathcal{R}_{n}=S\\}$ (1) for which compactness easily implies $\mathbb{E}C<\infty$. The probability that a given point $s$ is in $\mathrm{ball}(\sigma_{i},r_{0})$ equals $\mu(\mathrm{ball}(s,r_{0}))$, and so the mean time until point $s$ is covered equals $1/\mu(\mathrm{ball}(s,r_{0}))$, which is at most $1/\eta(r_{0})$. So to obtain a concentration result for $C$ a natural assumption is that $\mathbb{E}C\gg 1/\eta(r_{0})$, in other words that $\eta(r_{0})\mathbb{E}C$ is large. Our result below is of that general form, but also involves the dimension-related quantity $d(r)$ defined as the smallest integer such that each ball of radius $r$ can be covered by $d(r)$ balls of radius $r/2$. (2) ###### Proposition 1 In the fixed-radius model, for the cover time $C$ at (1), $\mathrm{var}\left(\frac{C}{\mathbb{E}C}\right)\leq\kappa\,\frac{d(r_{0})}{\eta(r_{0}/2)\mathbb{E}C}$ for the absolute constant $\kappa$ stated in Proposition 2 below. We will derive Proposition 1 from a known general result, discussed as Proposition 2 below. ### 2.1 The random subset cover bound Here we copy the setup and result directly from [5]. Let $S_{0}$ be a finite set. Let $\mathcal{Y}$ be a random subset of $S_{0}$, whose distribution is arbitrary subject to the requirement ${\mathbb{P}}(s\in\mathcal{Y})>0$ for each $s\in S_{0}$. (3) Let $\mathcal{Y}_{1},\mathcal{Y}_{2},\ldots$ be independent random subsets distributed as $\mathcal{Y}$. Let $\mathcal{R}_{n}$ be the range of this process: $\mathcal{R}_{n}=\cup_{i\leq n}\mathcal{Y}_{i}$ and let $C_{set}$ be the cover time $C_{set}:=\min\\{n:\mathcal{R}_{n}=S_{0}\\}.$ Note $\mathbb{E}C_{set}<\infty$ by (3) and finiteness of $S_{0}$. 
For any non-random subset $B\subset S_{0}$ let $c(B)$ be the mean cover time of $B$: $c(B):=\mathbb{E}C(B);\quad C(B):=\min\\{n:\mathcal{R}_{n}\supseteq B\\}.$ Our bound involves the terminal set $\mathcal{T}:=S_{0}\setminus\mathcal{R}_{C_{set}-1}$ that is the last uncovered portion of $S_{0}$. ###### Proposition 2 ([5] Theorem 1) $\mathrm{var}\left(\frac{C_{set}}{\mathbb{E}C_{set}}\right)\leq\kappa\,\frac{\mathbb{E}c(\mathcal{T})}{\mathbb{E}C_{set}}$ for an absolute constant $\kappa$. Though stated in [5] for a finite state space $S_{0}$, Proposition 2 extends to continuous space, in particular our compact metric space $S$, with unchanged proof, except that now we need to replace assumption (3) by the assumption $\mathbb{E}C_{set}<\infty$. Of course it may be difficult to analyze $\mathcal{T}$, and so one does not expect to obtain sharp bounds on specific models in this way. But Proposition 2 may be useful in obtaining order of magnitude bounds in general settings. In particular if there is some geometric or metric structure on the set and if the random subsets $\mathcal{Y}$ are small in diameter, then $\mathcal{T}$ must be small in diameter, so one needs only to bound $c(B)$ as a function of the diameter of $B$. The next section gives a simple illustration of that method. ### 2.2 Proof of Proposition 1 In the notation of Proposition 2, the terminal set $\mathcal{T}$ is such that $\mathcal{T}\subset\mathrm{ball}(s,r_{0})$ for some $s\in S$, so $c(\mathcal{T})\leq\sup_{s}\mathbb{E}C(\mathrm{ball}(s,r_{0})).$ The mean time until one of the random centers $\sigma$ falls in a given ball of radius $r_{0}/2$ is at most $1/\eta(r_{0}/2)$. Note that a ball of radius $r_{0}/2$ is covered by any ball of radius $r_{0}$ whose center is in the former ball.
So from the definition of dimension $d$, for each $s$ there are $d$ points $s_{1},\ldots,s_{d}$ such that $\mathrm{ball}(s,r_{0})$ is covered whenever each of $(\mathrm{ball}(s_{i},r_{0}/2),1\leq i\leq d)$ contains at least one of the random centers $\sigma$, and so $\sup_{s}\mathbb{E}C(\mathrm{ball}(s,r_{0}))\leq d/\eta(r_{0}/2).$ The result follows from Proposition 2. ## 3 The growth model Consider as before a compact metric space $(S,\rho)$, a probability measure $\mu$ on $S$, but now introduce two rates $0<\lambda<\infty$ and $0<v<\infty$. Write $0<\tau_{1}<\tau_{2}<\ldots$ for the times of a rate-$\lambda$ Poisson process, and write $\sigma_{1},\sigma_{2},\ldots$ for i.i.d. random points of $S$ from distribution $\mu$. The verbal description > seeds arrive at times of a Poisson process at i.i.d. random positions, and > then create balls whose radius grows at rate $v$ is formalized as the set-valued growth process $\mathcal{X}(t):=\cup_{i:\tau_{i}\leq t}\ \mathrm{ball}\,(\sigma_{i},v(t-\tau_{i})).$ (4) We study the cover time $C:=\min\\{t:\ \mathcal{X}(t)=S\\}$ which is finite because $\mathbb{E}\tau_{1}=1/\lambda$ and so $1/\lambda\leq\mathbb{E}C\leq 1/\lambda+\Delta/v$ (5) where $\Delta$ is the diameter of $S$. To obtain a concentration bound it is natural to require that $\mathbb{E}C$ is large relative to the maximum expected time to cover any given single point, that is relative to $c^{*}:=\max_{s\in S}\mathbb{E}C(s);\quad C(s):=\min\\{t:\ s\in\mathcal{X}(t)\\}.$ It turns out this is the only requirement. ###### Proposition 3 In the growth model (4), $\mathrm{var}\left(\frac{C}{\mathbb{E}C}\right)\leq\frac{c^{*}}{\mathbb{E}C}$. We will derive Proposition 3 from a known general result, discussed as Proposition 4 below. 
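Before turning to the proofs, the growth model (4) is also easy to simulate on the circle of circumference $L$ (our own illustrative sketch, not part of the paper; the names and the grid approximation of the supremum over $s$ are ours). A grid point $s$ is covered at time $\min_{i}(\tau_{i}+\rho(s,\sigma_{i})/v)$, and once the next Poisson arrival time exceeds the current maximum over the grid, no later seed can lower that maximum, so it equals the cover time up to the grid resolution.

```python
import random

def growth_cover_time(L, lam, v, rng, grid=400):
    """Growth model (4) on a circle of circumference L: seeds arrive at the
    times of a rate-lam Poisson process at uniform positions and then grow
    at speed v.  Grid point s is covered at time min_i(tau_i + d(s,sigma_i)/v).
    A seed arriving after the current maximum cover time contributes only
    larger values, so the loop can stop at that point."""
    pts = [L * k / grid for k in range(grid)]
    t_cov = [float("inf")] * grid
    next_tau = rng.expovariate(lam)
    while True:
        tau, sigma = next_tau, rng.uniform(0.0, L)
        for k, s in enumerate(pts):
            d = abs(s - sigma)
            d = min(d, L - d)  # circle metric
            t_cov[k] = min(t_cov[k], tau + d / v)
        current = max(t_cov)
        next_tau = tau + rng.expovariate(lam)
        if next_tau >= current:
            return current
```

For instance, in the standardized case $\lambda=v=1$ with $L=10$ the simulated cover times are a few time units, of order $\sqrt{L\log L}$ as the circle analysis of section 5 suggests.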
Note that the expectation of the number of balls covering a given point $s$ at time $t$ equals $\int_{0}^{t}\mu(\mathrm{ball}(s,vu))\ \lambda du$ and so from the Poisson property ${\mathbb{P}}(C(s)>t)=\exp\left(-\int_{0}^{t}\mu(\mathrm{ball}(s,vu))\ \lambda du\right)$ (6) from which we can in principle obtain a formula for $\mathbb{E}C(s)$. ### 3.1 A monotonicity bound for Markov chains Here we copy the setup and result directly from [7]. The setting there is a continuous-time Markov chain $(X_{t})$ on a finite state space $\Sigma$, where we study the hitting time $T:=\inf\\{t:\ X_{t}\in\Sigma_{0}\\}$ (7) for a fixed subset $\Sigma_{0}\subset\Sigma$. Assume $h(x):=\mathbb{E}_{x}T<\infty\mbox{ for each }x\in\Sigma$ (8) which holds in the finite case under the natural “reachability” condition. Assume also a rather strong “monotonicity” condition: $h(x^{\prime})\leq h(x)\mbox{ whenever $x\to x^{\prime}$ is a possible transition}.$ (9) ###### Proposition 4 ([7]) Under conditions (8, 9), for any initial state, $\frac{\mathrm{var}\ T}{\mathbb{E}T}\leq\max\\{h(x)-h(x^{\prime}):\ x\to x^{\prime}\mbox{ a possible transition}\\}.$ Though stated in [7] for a finite state space $\Sigma$, Proposition 4 extends to continuous space with essentially unchanged proof. ### 3.2 Proof of Proposition 3 The cover time $C$ for our growth model $\mathcal{X}(t)$ at (4) is of the form in Proposition 4; the state space is the space of compact subsets $x$ of the compact metric space $S$. The only discontinuities of $h(\mathcal{X}(t))$ are at a time $\tau$ when a new seed arrives at a point $\sigma$, at which time there is a transition $x\to x\cup\\{\sigma\\}$ of $\mathcal{X}(t)$.
To apply Proposition 4 to prove Proposition 3 it is enough to show that, for each pair $(x,\sigma)$, $h(x)-h(x\cup\\{\sigma\\})\leq\mathbb{E}C(\sigma).$ (10) But this holds by considering the natural coupling $(\mathcal{X}(t),\mathcal{X}^{\prime}(t)=\mathcal{X}(t)\cup\mathrm{ball}(\sigma,vt),t\geq 0)$ of the growth processes with $\mathcal{X}(0)=x,\mathcal{X}^{\prime}(0)=x\cup\\{\sigma\\}$. In this coupling, for the time $C^{*}(\sigma)$ at which $\sigma$ is reached by a ball of $\mathcal{X}(\cdot)$ whose seed arrived after time $0$, we have (by the triangle inequality on $S$) that $\mathcal{X}(C^{*}(\sigma)+t)\supseteq\mathcal{X}^{\prime}(t)$, and so the cover times for these two processes differ by at most $C^{*}(\sigma)$. But this $C^{*}(\sigma)$ is distributed as $C(\sigma)$ for the growth process started at the empty set, establishing (10). ## 4 Further analysis of the general growth model Comparing the statements of Propositions 1 and 3 suggests that the growth model is more tractable for the study of covering. Intuitively this is because the behavior of the growth model is “smoother” in that it does not rely on the detailed geometry of the space $(S,\rho)$ at the given distance $r_{0}$. In this section we record some simple observations and then pose some open problems. We can “standardize” the growth model by choosing time and distance units to make $\lambda=v=1$. With this standardization we have a relationship between the diameter $\Delta$ and $\mathbb{E}C$. ###### Proposition 5 In the standardized growth model on a space $(S,\rho)$, (a) $\mathbb{E}C\leq 1+\Delta$. (b) If $S$ is connected then $\Delta\leq\kappa_{1}(\mathbb{E}C)^{2}$ for an absolute constant $\kappa_{1}$. Proof. Part (a) is (5). 
For (b), at time $t$ the sum of diameters of balls is at most $D(t):=2\sum_{i}(t-\tau_{i})^{+}.$ By connectedness we must have $\Delta\leq D(C).$ We can rewrite $D(t)$ in terms of the Poisson counting process $(N(t))$ as $D(t)=2\int_{0}^{t}N(u)du$ and then $\Delta\leq\mathbb{E}D(C)=2\int_{0}^{\infty}\mathbb{E}[N(t)1_{(t\leq C)}]\ dt.$ Using the Cauchy-Schwarz inequality $\Delta\leq 2\int_{0}^{\infty}(t^{2}+t)^{1/2}\ \sqrt{{\mathbb{P}}(C\geq t)}\ dt.$ (11) Now the obvious submultiplicative property of the cover time $C$, that is ${\mathbb{P}}(C\geq t_{1}+t_{2})\leq{\mathbb{P}}(C\geq t_{1})\ {\mathbb{P}}(C\geq t_{2})$ combined with Markov’s inequality ${\mathbb{P}}(C\geq e\mathbb{E}C)\leq e^{-1}$ implies an exponential tail bound ${\mathbb{P}}(C\geq t)\leq\exp(1-{\textstyle\frac{t}{e\mathbb{E}C}})$ (12) and the result follows from (11) and straightforward calculus bounds (note $\mathbb{E}C\geq 1$ from (5)). Continuing with this standardization, consider a sequence of connected compact metric spaces $S=S^{(n)}$ and probability distributions $\mu=\mu^{(n)}$. Proposition 3 implies that as $n\to\infty$ $\mbox{ if }{\textstyle\frac{c^{*}}{\mathbb{E}C}}\to 0\mbox{ then }{\textstyle\frac{C}{\mathbb{E}C}}\to 1\mbox{ in }L^{2}.$ (13) Can we relate the hypothesis $c^{*}/\mathbb{E}C\to 0$ to other aspects of the spaces? Recall that $c^{*}$ is in principle directly calculable from (6), whereas determining whether $\mathbb{E}C$ is of the same order, or larger order, than $c^{*}$ requires some more detailed knowledge of the space $S$. If the diameters $\Delta^{(n)}$ are bounded (as $n$ increases) then by Proposition 5 the mean cover times $\mathbb{E}C^{(n)}$ are bounded; because ${\mathbb{P}}(C^{(n)}>t)\geq\exp(-t)$ the conclusion (and hence the assumption) of (13) is false. So we need study only the case $\Delta^{(n)}\to\infty$. Here is a simple example to show that the conclusion of (13) is not always true. Example. 
Take $S^{(n)}$ to be the real line segment $[0,n]$ and $\mu^{(n)}(0)=1-1/n$ and $\mu^{(n)}(n)=1/n$. One easily sees that $n^{-1}C^{(n)}\to_{d}\min(1,{\textstyle\frac{1}{2}}(1+\xi))$ where $\xi$ has Exponential(1) distribution. In an opposite direction, we note a simple upper bound on $\mathbb{E}C/c^{*}$, that is a lower bound on $c^{*}/\mathbb{E}C$, in terms of the covering number $\mathrm{cov}(r):=\mbox{ minimum number of radius $r$ balls that cover $S$ }.$ (14) ###### Proposition 6 In the standardized growth model, $\frac{\mathbb{E}C}{c^{*}}\leq\min_{a>0}[a+e(e+\log\mathrm{cov}(ac^{*}))].$ Proof. As at (12) the submultiplicative property of $C(s)$ implies ${\mathbb{P}}(C(s)\geq t)\leq\exp(1-{\textstyle\frac{t}{e\mathbb{E}C(s)}})$. Applying this to the centers $(s_{i})$ of $\mathrm{cov}(r)$ covering radius $r$ balls, ${\mathbb{P}}(\max_{i}C(s_{i})\geq t)\leq e\ \mathrm{cov}(r)\exp(-{\textstyle\frac{t}{ec^{*}}}).$ Setting $t_{0}:=ec^{*}\log\mathrm{cov}(r)$, $\mathbb{E}[\max_{i}C(s_{i})]=\int_{0}^{\infty}{\mathbb{P}}(\max_{i}C(s_{i})\geq t)\;dt\leq t_{0}+e\cdot ec^{*}.$ Because $C\leq r+\max_{i}C(s_{i})$ we have $\mathbb{E}C\leq r+ec^{*}(e+\log\mathrm{cov}(r)).$ Setting $r=ac^{*}$ gives the stated bound. ### 4.1 The minimizing seed distribution For the standardized growth model on connected compact $(S,\rho)$, take two points $s_{1},s_{2}$ which are diametrically opposite, that is $\rho(s_{1},s_{2})=\Delta$. Then the maximum of $\mathbb{E}_{\mu}C$ over $\mu$ equals $1+\Delta$, attained by the measure $\mu$ degenerate at $s_{1}$. But what can we say about the minimum of $\mathbb{E}_{\mu}C$ over $\mu$? Intuitively this should be related to the covering numbers $\mathrm{cov}(r)$ at (14). And indeed there is a simple upper bound in terms of the covering numbers. Given $r$, consider $\mu$ uniform on the centers $(s_{i},1\leq i\leq\mathrm{cov}(r))$ of the covering radius-$r$ balls. 
Then $C\leq r+\tau_{\mathrm{cov}(r)}$ where $\tau_{n}$ is the elementary coupon collector time with $\mathbb{E}\tau_{n}=n(1+1/2+\ldots+1/n)\leq(1+\log n)n$. So we have established ###### Proposition 7 In the standardized growth model, $\min_{\mu}\mathbb{E}_{\mu}C\leq\min_{r>0}[r+\mathrm{cov}(r)(1+\log\mathrm{cov}(r))].$ For a bound in the opposite direction, observe first that for the Poisson counting process $(N(t),0\leq t<\infty)$ of seed arrival times, ###### Lemma 8 If $t_{0}$ and $c_{0}$ are such that ${\mathbb{P}}(C>c_{0})+{\mathbb{P}}(N(c_{0})>t_{0})<1$ then $\mathrm{cov}(c_{0})\leq t_{0}$. Proof. The assumption implies that the event $\\{C\leq c_{0},N(c_{0})\leq t_{0}\\}$ has non-zero probability; on that event we have $\mathrm{cov}(c_{0})\leq N(C)\leq N(c_{0})\leq t_{0}.$ Applying Lemma 8 with $c_{0}=3\mathbb{E}C$ and $t_{0}=3c_{0}$ gives $\mathrm{cov}(3\mathbb{E}C)\leq 9\mathbb{E}C$. This is true for any $\mu$ and so ###### Proposition 9 In the standardized growth model, $\min_{\mu}\mathbb{E}_{\mu}C\geq\min\\{r:\mathrm{cov}(3r)\leq 9r\\}.$ Roughly speaking, if the space is $d$-dimensional in the sense that $\mathrm{cov}(r)\asymp(A/r)^{d}$ for fixed large $A$, then Propositions 7 and 9 imply that $\min_{\mu}\mathbb{E}_{\mu}C$ is between orders $A^{\frac{d}{d+1}}$ and $A^{\frac{d}{d+1}}\log A$. ### 4.2 Open problems for the general growth model * • As mentioned above, can we find easily checkable conditions to ensure that $c^{*}/\mathbb{E}C\to 0$? * • Can one improve the upper and lower bounds on $\min_{\mu}\mathbb{E}_{\mu}C$ above? In particular, can $\min_{\mu}\mathbb{E}_{\mu}C$ be more sharply related to some measure of entropy of the metric space (see e.g. [18] for possible notions of entropy)? * • For $\mu$ attaining the minimum $\min_{\mu}\mathbb{E}_{\mu}C$, do we always have weak concentration? 
That is, is there a function $\psi(\Delta)\downarrow 0$ as $\Delta\uparrow\infty$ such that on every connected compact metric space, for the standardized growth model, $\mathrm{var}_{\mu}\left(\frac{C}{\mathbb{E}_{\mu}C}\right)\leq\psi(\Delta)$ for the minimizing $\mu$? * • Is there an effective algorithmic procedure for finding a minimizing $\mu$? This seems loosely similar to the well-studied k-median problem [8]. * • If $S$ is a compact group, with a metric invariant under the group action, then is the uniform (Haar) measure the minimizing measure? Regarding the final problem above, it can be shown that, on the circle of integer circumference $L$, for the fixed-radius model with $r=1/2$, the mean cover time for seed distribution $\mu$ uniform on $L$ evenly-spaced points is smaller than that for $\mu$ uniform on the circle (the discrete analog is noted in [12] Example 4.1). We do not know if this type of example is a counter-example in the growth model; if so, replace by an asymptotic ($\Delta\to\infty$) conjecture. ## 5 The growth process on the circle Here we consider the standardized growth model on the circle $S$ of circumference $L$, with uniform distribution $\mu$. What is the $L\to\infty$ limit distribution of the cover time $C(L)$? We will treat this as another example where the Poisson clumping heuristic (PCH) [2] gives a recipe for calculating explicitly the limit distribution; the method is heuristic in the sense of not justifying the approximations, but would provide a template for making a rigorous proof. ### 5.1 The calculation Consider an interval $A\subset S$ of length $a\ll L$. The number $N_{A}(t)$ of balls intersecting $A$ at time $t$ has Poisson distribution with $\mathbb{E}N_{A}(t)=\int_{0}^{t}\min(a+2u,L)\ L^{-1}du.$ We will use this only when $a$ and $t$ are order $L^{1/2+o(1)}$ and so we can ignore the truncation. 
This gives (the equalities below are really approximations) $\mathbb{E}N_{A}(t)=(at+t^{2})/L$ ${\mathbb{P}}(N_{A}(t)=0)=\exp(-(at+t^{2})/L).$ (15) The probability $p(t)$ that a specified point $s\in S$ is not covered at time $t$ is the case $a=0$, so $p(t)=\exp(-t^{2}/L).$ (16) As $t$ approaches $C(L)$ the uncovered region is a union of intervals of lengths small relative to $L$. The PCH asserts that, as a good approximation which gives correct asymptotics, one can assume these intervals have i.i.d. lengths (with some distribution $\Lambda(t)$) and their centers are as a Poisson process (of some rate $\lambda(t)$ per unit length); these quantities are related by $p(t)=\lambda(t)\mathbb{E}\Lambda(t).$ The number of uncovered intervals therefore has Poisson distribution with mean $L\lambda(t)$ and so ${\mathbb{P}}(C(L)\leq t)=\exp(-L\lambda(t)).$ In this example it is easy to ascertain the distribution of $\Lambda(t)$. Given that a point $s$ is uncovered, the conditional probability that the interval $[s-a_{1},s+a_{2}]$ is uncovered (that is, is a subset of the uncovered interval containing $s$) equals, by (15), $\exp(-(a_{1}+a_{2})t/L)$, and therefore the whole uncovered interval $[s-A_{1},s+A_{2}]$ is such that $A_{1}$ and $A_{2}$ are independent with Exponential($t/L$) distribution. This length $A_{1}+A_{2}$ is the size-biased distribution of $\Lambda(t)$, so its un-size-biased distribution is just Exponential($t/L$), with expectation $\mathbb{E}\Lambda(t)=L/t.$ Combining the displayed equations above gives us ${\mathbb{P}}(C(L)\leq t)\approx\exp(-te^{-t^{2}/L})$ (17) where we are now acknowledging that this is an approximation (for large $L$, and $t$ not in the tails), expected to lead to the correct asymptotics.
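Although (17) is only a heuristic approximation, it is explicit enough to evaluate numerically. The sketch below (our own illustration; names are ours) inverts the right-hand side of (17) by bisection, for instance to locate the time at which the approximate coverage probability reaches a given level. Note that $t\,e^{-t^{2}/L}$ is decreasing only for $t\geq\sqrt{L/2}$, which is where the covering times of interest lie, so bisection is restricted to that branch.

```python
import math

def approx_cdf(t, L):
    """Right-hand side of (17): the PCH approximation to P(C(L) <= t)."""
    return math.exp(-t * math.exp(-t * t / L))

def approx_quantile(p, L, iters=80):
    """Invert (17) by bisection on t >= sqrt(L/2), where t*exp(-t^2/L)
    is decreasing and hence approx_cdf is increasing in t."""
    lo = math.sqrt(L / 2.0)
    hi = 10.0 * math.sqrt(L) * (1.0 + math.log(L))
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if approx_cdf(mid, L) < p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For $L=100$, the level-$e^{-1}$ time computed this way is roughly $16.8$, consistent with $t_{0}(L)=L^{1/2}G(L^{-1/2})$ of section 5.2.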
### 5.2 Asymptotics And indeed (17) corresponds, as one expects from general extreme value theory [20], to a limit result of the form $(C(L)-t_{0}(L))/\sigma(L)\to_{d}\zeta,\quad{\mathbb{P}}(\zeta\leq x)=\exp(-e^{-x}),-\infty<x<\infty.$ (18) To make this explicit, define $G(y)$ to be the inverse function of $y=x\exp(-x^{2})$ for large $x$ and small $y$, and then define $t_{0}(L):=L^{1/2}G(L^{-1/2})$ (19) so that (17) becomes ${\mathbb{P}}(C(L)\leq t_{0}(L))\approx\exp(-1).$ One can now calculate from (17) that for fixed $x$ ${\mathbb{P}}\left(C(L)\leq t_{0}+{\textstyle\frac{xL^{1/2}}{2G(L^{-1/2})}}\right)\approx\exp(-e^{-x})$ corresponding to (18) with $t_{0}(L)$ defined by (19) and $\sigma(L)$ defined by $\sigma(L):={\textstyle\frac{L^{1/2}}{2G(L^{-1/2})}}.$ The function $G(L^{-1/2})$ is slowly varying, roughly as $\sqrt{\log L}$. For comparison with the general result of Proposition 3, note that from (16) $c^{*}=\mathbb{E}C(s)=\int_{0}^{\infty}\exp(-t^{2}/L)\ dt={\textstyle\frac{1}{2}}\pi^{1/2}L^{1/2}$ and so $\mathrm{var}\left(\frac{C}{\mathbb{E}C}\right)\asymp\frac{1}{G^{4}(L^{-1/2})},\quad\frac{c^{*}}{\mathbb{E}C}\asymp\frac{1}{G(L^{-1/2})}.$ The right side is the upper bound (Proposition 3) and the left side is the correct order of magnitude. ## 6 Discussion ### 6.1 Comments on the two models The fixed-radius model is a natural generalization of covering Euclidean space with balls, though this generalization to metric spaces apparently has not been studied before. The growth model in our simple form has also apparently not been studied, though it can be regarded as an extremely basic model for the spread of information or the spread of an epidemic, a field with a huge literature studying models on graphs or Euclidean space [11, 16, 21]. A related growth model in two dimensions, where seeds arrive (instead of as a constant-rate process) as a Poisson process whose rate is the current occupied area, is studied in [6, 9].
### 6.2 The growth model on other spaces In addition to the open problems in section 4.2, there is much scope for further study of the growth model on specific spaces. As well as other classical compact spaces familiar from analysis, one can consider a finite graph with edge lengths, with the metric of shortest route length. Moreover there are random metric spaces of contemporary interest in probability, such as the “mean-field model of distance” [4], the Brownian CRT [13], or the Brownian map [17]. ### 6.3 Other uses of the two general bounds We have used two general methods – the random subset cover bound (Proposition 2) and the monotonicity bound (Proposition 4) – which are in principle applicable in very general covering-like contexts to establish weak concentration bounds in general settings without calculating the expectation of the covering time. We provide some history of these methods below, and speculate that there may be other applications not yet explored. The random subset cover bound, Proposition 2, for general i.i.d. random subsets of a set, was given in [5] as part of the proof of a weak concentration bound for the Markov chain cover time $C_{MC}$. In the Markov chain context, the i.i.d. subsets arise as excursions from a given state. In the result, the essential condition is that the maximum mean hitting time to any single state is $o(\mathbb{E}C_{MC})$. In that sense the bound is closely analogous to the bounds in this article. In the 30 years since [5], study of random walk cover times has entered a more sophisticated phase based on the discovery [10] of its connection with Gaussian free fields and Talagrand’s theory of majorizing measures. In contrast, the program of using general results for i.i.d. random subsets as part of analysis of specific contexts within covering seems not to have been developed until the recent work [12]. 
That paper discusses known results in combinatorial settings, develops new general results and applies them to several topics: connectivity in random graphs; covering a square with random discs; covering the edges of a graph by spanning trees, and matroids by bases; and random $k$-SAT. The monotonicity bound, Proposition 4, was given in [7] as a tool for establishing weak concentration for first passage percolation times on general graphs. It was also used [3] for weak concentration of the time of emergence of the giant component in bond percolation on general graphs. Both contexts involve the hitting time of an increasing set-valued Markov process, as does our application in section 3.2.

## References

* [1] David Aldous. An introduction to covering problems for random walks on graphs. J. Theoret. Probab., 2(1):87–89, 1989.
* [2] David Aldous. Probability approximations via the Poisson clumping heuristic, volume 77 of Applied Mathematical Sciences. Springer-Verlag, New York, 1989.
* [3] David Aldous. The incipient giant component in bond percolation on general finite weighted graphs. Electron. Commun. Probab., 21:Paper No. 68, 9, 2016.
* [4] David Aldous and J. Michael Steele. The objective method: probabilistic combinatorial optimization and local weak convergence. In Probability on discrete structures, volume 110 of Encyclopaedia Math. Sci., pages 1–72. Springer, Berlin, 2004.
* [5] David J. Aldous. Threshold limits for cover times. J. Theoret. Probab., 4(1):197–211, 1991.
* [6] David J. Aldous. When knowing early matters: gossip, percolation and Nash equilibria. In Prokhorov and contemporary probability theory, volume 33 of Springer Proc. Math. Stat., pages 3–27. Springer, Heidelberg, 2013.
* [7] David J. Aldous. Weak concentration for first passage percolation times on graphs and general increasing set-valued processes. ALEA Lat. Am. J. Probab. Math. Stat., 13(2):925–940, 2016.
* [8] Moses Charikar, Sudipto Guha, Éva Tardos, and David B. Shmoys.
A constant-factor approximation algorithm for the $k$-median problem. J. Comput. System Sci., 65(1):129–149, 2002. Special issue on STOC 1999 (Atlanta, GA).
* [9] Shirshendu Chatterjee and Rick Durrett. Asymptotic behavior of Aldous’ gossip process. Ann. Appl. Probab., 21(6):2447–2482, 2011.
* [10] Jian Ding, James R. Lee, and Yuval Peres. Cover times, blanket times, and majorizing measures. Ann. of Math. (2), 175(3):1409–1471, 2012.
* [11] Moez Draief and Laurent Massoulié. Epidemics and rumours in complex networks, volume 369 of London Mathematical Society Lecture Note Series. Cambridge University Press, Cambridge, 2010.
* [12] Victor Falgas-Ravry, Joel Larsson, and Klas Markström. Speed and concentration of the covering time for structured coupon collectors. Adv. in Appl. Probab., 52(2):433–462, 2020.
* [13] Christina Goldschmidt. Scaling limits of random trees and random graphs. In Random graphs, phase transitions, and the Gaussian free field, volume 304 of Springer Proc. Math. Stat., pages 1–33. Springer, Cham, 2020.
* [14] Peter Hall. Introduction to the theory of coverage processes. Wiley Series in Probability and Mathematical Statistics. John Wiley & Sons, Inc., New York, 1988.
* [15] Svante Janson. One, two and three times $\log n/n$ for paths in a complete graph with random weights. Combin. Probab. Comput., 8(4):347–361, 1999. Random graphs and combinatorial structures (Oberwolfach, 1997).
* [16] István Z. Kiss, Joel C. Miller, and Péter L. Simon. Mathematics of epidemics on networks: from exact to approximate models, volume 46 of Interdisciplinary Applied Mathematics. Springer, Cham, 2017.
* [17] Jean-François Le Gall. Uniqueness and universality of the Brownian map. Ann. Probab., 41(4):2880–2960, 2013.
* [18] Tom Leinster and Emily Roff. The maximum entropy of a metric space. arXiv:1908.11184v3, 2020.
* [19] Mathew D. Penrose. Random Euclidean coverage from within. arXiv:2101.06306, 2021.
* [20] Sidney I. Resnick.
Extreme values, regular variation, and point processes, volume 4 of Applied Probability. A Series of the Applied Probability Trust. Springer-Verlag, New York, 1987.
* [21] Steven Riley, Ken Eames, Valerie Isham, Denis Mollison, and Pieter Trapman. Five challenges for spatial epidemic models. Epidemics, 10:68–71, 2015.
# Proximity effects and tunneling in an Ising superconductor-magnetic insulator heterostructure

Darshana Wickramaratne Center for Computational Materials Science, U.S. Naval Research Laboratory, Washington, DC 20375, USA <EMAIL_ADDRESS>Menashe Haim The Racah Institute of Physics, The Hebrew University of Jerusalem, Jerusalem 9190401, Israel Maxim Khodas The Racah Institute of Physics, The Hebrew University of Jerusalem, Jerusalem 9190401, Israel I.I. Mazin Department of Physics and Astronomy, George Mason University, Fairfax, VA 22030, USA Quantum Science and Engineering Center, George Mason University, Fairfax, VA 22030, USA

###### Abstract

Hybrid Ising superconductor-ferromagnetic insulator heterostructures provide a unique opportunity to explore the interplay between proximity-induced magnetism, spin-orbit coupling and superconductivity. Here we use a combination of first-principles calculations of NbSe2/CrBr3 heterostructures and an analytical theory of Ising superconductivity in the presence of defects, and propose that a number of nontrivial effects are at play in such junctions. In particular, we address spin and momentum filtering in such junctions and show that tunneling of the states at the K-point is suppressed and that the often overlooked states near the $\Gamma$-point in monolayer NbSe2 contribute to tunneling. We then show that the interplay of the proximity-induced exchange splitting and scattering off paramagnetic defects leads to the following nontrivial effects: an increase in the magnitude of the superconducting gap, broadening of the tunneling peaks, and $C_{2}$ symmetry breaking. Our results provide a unifying microscopic description of tunneling spectroscopy studies performed on these two-dimensional Josephson junction heterostructures, which form the basis for novel spintronic and superconducting devices.
## I Introduction

The advent of two-dimensional (2D) superconductors such as monolayers of the transition metal dichalcogenides (TMD), NbSe2, TaS2, and TaSe2 [1, 2, 3, 4, 5], and of 2D magnetic insulators offers a route to design novel devices such as 2D Josephson junctions [6]. Monolayers of the superconducting TMDs are Ising superconductors [2, 3, 4, 5], and some of these materials have been suggested to be on the verge of a magnetic instability [3, 7, 8]. If the insulating material of the junction is ferromagnetic, this offers the unique opportunity to couple superconductivity with magnetism via the proximity effect [9, 10, 11]. Superconductor/ferromagnetic insulator junctions have been used to elucidate the fundamental properties of the superconducting contacts and are also pursued for potential applications in spintronics [12] or for hosting topologically protected states [13, 14]. The prospects for such devices may soon be realized given the recent discovery of ferromagnetism in single monolayers of the chromium trihalides [15, 16] and preliminary reports of tunneling microscopy measurements performed on bilayer and trilayer heterostructures comprised of NbSe2 and these magnetic insulators [17, 18, 19, 20, 21, 22, 23]. We highlight the observations of one such experiment [22]. In the study by Kang et al. [22], tunneling conductance measurements were performed on a vertical heterostructure comprised of CrBr3 sandwiched between NbSe2 layers, which serve as superconducting contacts. An in-plane magnetic field (lower than the superconducting critical field of NbSe2) rotates the CrBr3 spins from the magnetic easy axis along $\hat{z}$ to being in-plane. This leads to a modest increase in the conductance gap, $\Delta$, by $\sim$2% and a concomitant increase in the broadening of the tunneling peaks by $\sim$50%. This is counterintuitive: one may expect the width of the tunneling peaks to decrease when $\Delta$ increases.
The same experiments find a puzzling hysteresis in the tunneling conductance that has an onset $\sim$2 K below $T_{c}$. A separate tunneling study on NbSe2/CrBr3 [19] also identified evidence of a two-fold rotation symmetry of the superconducting state, which is unexpected given the six-fold symmetry of the hexagonal lattice of NbSe2. These experimental observations carry considerable interpretative power and form a three-pronged puzzle into which we provide microscopic insight in this study, using a combination of first-principles calculations and analytical calculations based on a theory of Ising superconductivity. Obtaining microscopic insight into each of these phenomena is predicated on understanding the subtle interplay between Ising superconductivity, magnetism, the atomic and electronic structure of the heterostructure, the proximity-induced magnetic field in the superconductor, and the impact of all of this on tunneling processes. We show that the modest change in $\Delta$ and the large increase in the broadening of the tunneling peaks are related to the combination of Ising spin-orbit coupling [3] and spin-conserving scattering due to paramagnetic point defects (which we refer to generically as defects henceforth) [24, 25, 26, 27, 28]. Based on this theoretical framework we present a physically intuitive explanation for the two-fold rotational symmetry [29, 19] and the hysteresis observed in tunneling spectroscopy by considering the role of extended defects [25] that order in a manner that breaks the six-fold rotational symmetry of the hexagonal lattice and thus generates anisotropic pair-breaking scattering.

## II First-principles calculations

We begin by examining the electronic structure of the NbSe2/CrBr3/NbSe2 trilayer heterostructure using first-principles calculations (see Methods and Supplementary Material). The atomic structure of the heterostructure with the lowest energy is illustrated in Fig. 1(a).
Figure 1: First-principles calculation of the electronic structure, band alignments and interlayer coupling of the NbSe2-CrBr3-NbSe2 heterostructure. (a) Schematic illustration of the trilayer heterostructure showing the side view and top view. The $x,y$ and $z$ axes denote the Cartesian axes. (b) Alignment of the energy levels of NbSe2 with respect to monolayer CrBr3 at the K-point. (c) Spin-polarized band structure of the trilayer heterostructure around the K point. The purple bands denote contributions from CrBr3 and green bands denote contributions from NbSe2. (d) Interlayer coupling, 2t⟂, of the NbSe2/CrBr3/NbSe2 trilayer heterostructure as a function of momentum. $\Lambda$ corresponds to the midpoint along the $\Gamma$-K path.

We schematically illustrate the alignment of the NbSe2 states at the Fermi level at K with respect to the CrBr3 spin-up and spin-down states, obtained with respect to the vacuum level from our first-principles calculations, in Fig. 1(b). The spin-polarized band structure of the trilayer heterostructure is shown in Fig. 1(c). The NbSe2 states reside within the spin-up gap of CrBr3, close to the spin-up conduction band states of CrBr3. One striking change in the electronic structure of the heterostructure is the large exchange splitting, $\lambda_{\mathrm{ex}}$, of the NbSe2-derived states (the bias field $\mu_{B}B=\lambda_{\mathrm{ex}}/2$). For the heterostructure in Fig. 1(a), $\lambda_{\mathrm{ex}}$ is 121 meV between the spin-up and spin-down states. The origin of this splitting can be understood as follows. In bilayer NbSe2 [30], there are two pairs of spin-degenerate bands: one pair corresponds to states from the top monolayer while the second pair is contributed by the bottom monolayer. Both pairs of bands are split due to interlayer coupling, $t_{\perp}$.
In the heterostructure calculations we find the Nb atoms in the top and bottom monolayers acquire a magnetic moment, $m_{\rm Nb}$, of $\sim$0.10 to 0.13 $\mu_{B}$ due to proximity-induced coupling between the Cr and Nb states. This manifests in a proximity-induced exchange splitting, $\lambda_{\rm ex}$, illustrated in Fig. 1(c), that breaks the spin degeneracy of these bands. The magnitude of $\lambda_{\mathrm{ex}}$ reflects the magnitude of orbital overlap between the Nb and Cr $d$-electrons. Hence, it crucially depends on the overlap between the least overlapping orbitals in the magnetic exchange path, namely the Se and Br $p$-states, which in turn is sensitive to the vertical separation distance between the Nb and Cr atoms, $d_{\mathrm{Cr-Nb}}$. For the different stacking configurations we explored, $d_{\mathrm{Cr-Nb}}$ varies due to the steric interaction between the Br and Se atoms at the interface [30]. To form a commensurate trilayer heterostructure, we assume the CrBr3 layer is under biaxial tensile strain with respect to NbSe2. In reality, the strain due to lattice mismatch will likely be relieved by a rotation of one layer with respect to another, which leads to spatially varying stacking of NbSe2 with respect to CrBr3. Hence, the effective overlap and $\lambda_{\mathrm{ex}}$ should be averaged over all possible mutual orientations between the two layers, while the equilibrium distance corresponds (even ideally, in the absence of any defects in the van der Waals gap) to the sterically least favorable geometry, i.e., when Br and Se ions are aligned vertically. The effect of this averaging [30] is that $\lambda_{\mathrm{ex}}$ at $d_{\mathrm{Cr-Nb}}\approx 6.88$ Å, which according to our calculations is the maximal possible separation distance between NbSe2 and CrBr3, becomes $\left\langle\lambda_{\mathrm{ex}}\right\rangle\approx 0.04$ meV, which is equivalent to a magnetic exchange field of $B\approx 0.7$ T.
Inserting a single monolayer of CrBr3 increases the interlayer separation between the NbSe2 layers, which changes the interlayer coupling, $t_{\perp}$. We determine the magnitude of 2$t_{\perp}$ (see Methods) along $\Gamma$-M and along $\Gamma-\Lambda$ (where $\Lambda$ is the midpoint along the $\Gamma$-K path). These results are illustrated in Fig. 1(d). Similar to the case of NbSe2 monolayers separated by vacuum [30], $t_{\perp}^{\Gamma}$ $\gg$ $t_{\perp}^{\mathrm{K}}$. We find $t_{\perp}^{\Gamma}$ is 31.2 meV for the spin-up states and 18.1 meV for the spin-down states at $\Gamma$, while $t_{\perp}^{\mathrm{K}}$ is 1.8 meV for the spin-up states and 1.1 meV for the spin-down states at K. With two monolayers or more of CrBr3 (as used by Kang et al. [22]), $t_{\perp}$ at K is suppressed significantly compared to $t_{\perp}$ at $\Gamma$ [30]. Moving away from $\Gamma$, our calculations in Fig. 1(d) show that 2$t_{\perp}$ along the diagonal path ($\Gamma$-K) is lower compared to 2$t_{\perp}$ along the $\Gamma$-M path. Note that the magnitude of the spin-orbit coupling is finite along $\Gamma$-K and reaches a maximum at $\Lambda$, while it is zero along $\Gamma$-M [3]. Hence, the momenta that contribute the least to $2t_{\perp}$ have the largest $\Delta_{\rm SOC}$. This has crucial implications for interpreting tunneling measurements in these heterostructures, which we discuss next.

## III Role of magnetic exchange field and defects on the tunneling conductance

Armed with this quantitative understanding of $\lambda_{\mathrm{ex}}$, $t_{\perp}^{\Gamma}$, and $t_{\perp}^{\mathrm{K}}$, we proceed to perform model Hamiltonian calculations to describe tunneling across an Ising superconductor - ferromagnetic insulator - Ising superconductor junction [30]. We first consider, at a heuristic level, the impact of a magnetic exchange field that is out-of-plane (parallel to $\hat{z}$ in Fig. 1(a)) and in-plane (parallel to $\hat{x}$ in Fig.
1(a)), on the position of the conductance peak and the broadening (full-width at half maximum) of the conductance peak. For a magnetic exchange field, $B$, that is out-of-plane ($B\parallel c$), the spin-orbit coupling (SOC) field, $\Delta_{\mathrm{SOC}}$, and $B$ polarize the electron spins along $\hat{z}$ regardless of the in-plane momentum. Hence, the magnetic exchange interaction reduces the energy of the singlet Cooper pairs, while SOC plays no role. This leads to familiar Pauli-limited superconductivity, where the critical magnetic field is of the order of the gap, $\Delta$ [31]. Furthermore, the Cooper pairs retain their singlet identity and are immune to disorder scattering in accordance with the Anderson theorem [32]. In contrast, when $B\perp c$, the impact on the order parameter is weak due to the strong $\Delta_{\mathrm{SOC}}$. However, the broadening of the conductance peak is sensitive to $B\perp c$ and grows with the magnitude of $B$. This is due to the fact that when $B\perp c$, the spins at $\mathbf{k}$ and $\mathbf{k}^{\prime}$ acquire a finite in-plane component and hence a finite probability for spin-flip scattering processes. The degree of the tilt angle is determined by the strength of $\Delta_{\mathrm{SOC}}$. Hence, the paramagnetic defects behave as magnetic defects due to a finite in-plane $B$ [26, 28].

Figure 2: Differential conductance $dI/dV$ as a function of the bias voltage, $|e|V$. Results are shown for (a) an out-of-plane magnetic field along $\hat{z}$ ($B\parallel c$) and (b) an in-plane magnetic field along $\hat{x}$ ($B\perp c$). We use four values of the magnetic exchange field, $B$, corresponding to $B$ = 0 $T_{c}$ (blue), 0.225 $T_{c}$ (red), 0.45 $T_{c}$ (green), and 0.67 $T_{c}$ (grey), to determine the change in differential conductance as a function of $B$. We use T=0.5$T_{c}$, $\Delta_{\rm SOC}$=20$T_{c}$ and a scattering rate $\eta$=$T_{c}$ for all of our differential conductance calculations.
The inset in each panel (a) and (b) shows the suppression of the order parameter $\Delta_{o}$ as a function of $B$. $\Delta_{o}$ in panel (a) is calculated with T=0.5$T_{c}$ and in panel (b) is computed using T=0.74$T_{c}$. (c) Change in the peak position of the differential conductance when $B\parallel c$ and $B\perp c$, illustrated on the left vertical axis, and change in the FWHM of the differential conductance as a function of $B\perp c$, illustrated on the right vertical axis.

These qualitative considerations are summarized in Table 1.

Table 1: Parameters that control the position and broadening of the conductance peak as a function of the magnetic exchange field that is out-of-plane, $B\parallel c$ (for moderately low temperatures), and in-plane, $B\perp c$. $B$ denotes the magnitude of the exchange field, $\Delta_{o}$ is the order parameter, $\Delta_{\mathrm{SOC}}$ is the magnitude of spin-orbit coupling, and $\eta$ is the disorder scattering rate, which is lower than $\Delta_{\mathrm{SOC}}$.

| | $B\parallel c$ | $B\perp c$ |
|---|---|---|
| peak shift | $(B/\Delta_{o})^{2}$ | $(B/\Delta_{\mathrm{SOC}})^{2}$ |
| peak broadening | $0$ | $\eta B^{2}/(B^{2}+\Delta_{\mathrm{SOC}}^{2})$ |

To put these qualitative considerations on a firm theoretical footing, we use a model band dispersion, $\xi(\mathbf{k})$, to describe the $\Gamma$-valley of monolayer NbSe2, calculated with respect to $E_{F}$. We use this to calculate the order parameter for $B\parallel c$ and $B\perp c$ and combine this with the information from our first-principles calculations to determine the spin-dependent tunneling conductance (see Methods and Supplementary Material). The calculated $dI/dV$ for an out-of-plane $B$ is shown in Fig. 2(a). Note that for each value of $B\parallel c$, we only find one $dI/dV$ peak, at $|e|V=2\Delta_{o}$, which is not split by Zeeman coupling.
This is due to the fact that the top and bottom NbSe2 layers undergo the same amount of proximity-induced exchange splitting, $\lambda_{\mathrm{ex}}$, because the magnitude of $m_{\rm Nb}$ is the same in the top and bottom NbSe2 layers (Sec. II). Hence, the superconducting density of states of NbSe2 is split by the same amount, and spin is conserved during tunneling, which leads to the single peak [33]. From Figure 2(a) it is evident that as the magnitude of $B$ increases, the position of the $dI/dV$ peak decreases. This is due to the suppression of the order parameter, $\Delta_{o}$, which is proportional to $B^{2}$. This is in contrast to the Zeeman-split peaks in the density of states, which shift linearly with the magnitude of the exchange field. We also find that the full-width at half maximum (FWHM) of the conductance peaks remains unchanged and is insensitive to the amount of disorder that we consider. In Fig. 2(b) we illustrate the calculated $dI/dV$ when $B\perp c$. We find a number of striking changes compared to Figure 2(a). First, the peak position of the $dI/dV$ decreases and is weakly dependent on the magnitude of $B$. Second, the FWHM of the $dI/dV$ increases as the magnitude of the in-plane $B$ increases. This is consistent with the spin-flip scattering rate increasing quadratically as $\eta B^{2}/2\Delta_{\mathrm{SOC}}^{2}$, where $\eta$ is the scattering rate due to paramagnetic defects [26, 28]. In Figure 2(c) we summarize our calculations of how the peak position and FWHM in a tunneling measurement are impacted by the magnitude and direction of $B$. These results, which are consistent with our qualitative considerations in Table 1, provide a physically intuitive explanation for the modest increase in $\Delta$ and the broadening of the differential conductance that has been observed in tunneling measurements [22].
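The scalings in Table 1 can be evaluated directly for the parameter values quoted for Fig. 2 (a toy sketch in Python; overall prefactors are omitted, so only the relative trends are meaningful):

```python
# Toy evaluation of the Table 1 scalings (prefactors omitted; trends only).
# Parameter values mirror those quoted for Fig. 2, in units of T_c.
Delta_o = 1.76   # BCS zero-temperature gap, Delta_o(T=0) = 1.76 T_c
D_SOC   = 20.0   # Ising SOC scale used in the dI/dV calculations
eta     = 1.0    # paramagnetic-defect scattering rate

for B in (0.0, 0.225, 0.45, 0.67):
    shift_par  = (B / Delta_o) ** 2              # peak shift for B parallel to c
    shift_perp = (B / D_SOC) ** 2                # peak shift for B perpendicular to c
    broad_perp = eta * B**2 / (B**2 + D_SOC**2)  # extra broadening for B perp to c
    print(f"B = {B:5.3f} T_c: shift(par) ~ {shift_par:.4f}, "
          f"shift(perp) ~ {shift_perp:.1e}, broadening(perp) ~ {broad_perp:.1e}")
```

Even at the largest field, the peak shift for $B\perp c$ is two orders of magnitude smaller than for $B\parallel c$, while the broadening grows quadratically with $B$ only in the in-plane case, mirroring the trends in Fig. 2.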
In addition to paramagnetic point defects, as-grown NbSe2 is known to exhibit grain boundaries [25], which break the $C_{6}$ rotational symmetry in a manner determined by the orientation of the grain boundary with respect to the basal plane. Some tunneling studies have found a change in symmetry from $C_{6}$ to $C_{2}$ with respect to the direction of the applied in-plane magnetic field [29, 19]. The question we would like to ask is: can such symmetry-breaking extended defects manifest in a tunneling conductance that has $C_{2}$ symmetry with respect to the direction of the external magnetic field without impacting the symmetry of the superconducting order parameter? Within our theoretical framework, the defect-induced broadening of the tunneling conductance peaks will depend on the angle between the direction of the applied magnetic field and the orientation of the extended defects within the basal plane of NbSe2. This will broaden the superconducting density of states and break the $C_{6}$ rotational symmetry near the conductance peak, but not well above or below it, in agreement with the existing experimental observations [29, 19]. Within this interpretation, the symmetry of the superconducting state is not affected. In contrast, the prevailing interpretations invoke the emergence of a symmetry-broken superconducting phase that is distinct from the $s$-wave Ising superconducting phase [29]. However, this would suggest that this lower-symmetry state should become more prevalent as the temperature is lowered, instead of being maximal near $T_{c}$, which is inconsistent with what is seen in experiments. The final aspect of this three-pronged puzzle is the hysteresis in the tunneling conductance and why it occurs at $T\lesssim(T_{c}-2\,\mathrm{K})$ [22]. Recall that NbSe2 is close to a magnetic instability [3, 7]. Hence, it is likely that defects in NbSe2 may have a finite magnetic moment with a spin axis along a given orientation.
To determine the ramifications of this, we calculated the scattering rate off anisotropic magnetic impurities within the Born approximation. If the magnetic moment of the defect lies along the basal plane of NbSe2, we expect it to lead to some anisotropy in the in-plane critical field [27]. In Fig. 3(a) we illustrate the critical field as a function of the orientation of $B\perp c$.

Figure 3: Magnetic defects and their impact on scattering in NbSe2. (a) The in-plane critical field, $B_{c}$, as a function of the field orientation, specified by the angle $\phi_{B}$ formed by the magnetic field with respect to the $x$-direction. We consider the magnetic easy axis of the defect spin along $\hat{x}$ (green), $\hat{y}$ (red) and $\hat{z}$ (grey), with spin-flip scattering rates $\eta_{1}$, $\eta_{2}$ and $\eta_{3}$ equal to 0.25 $T_{c}$. We set T = 0.2$T_{c}$ and $\Delta_{\mathrm{SOC}}$ = 20 $T_{c}$. (b) Spin density of a single selenium vacancy within a 10$\times$10$\times$1 supercell of monolayer NbSe2. The different colors correspond to different signs of the magnetization. The net magnetization is $\sim$0.6 $\mu_{B}$. The position of the missing selenium atom is denoted with the black dotted circle.

From Fig. 3(a) it is clear that the suppression of the critical magnetic field is maximal when the easy axis of the defect spin is orthogonal to the applied magnetic field. When the defect spin is along $\hat{z}$, the easy axis is perpendicular to the applied in-plane magnetic field for all orientations of the field. Although such defects are a source of efficient pair breaking, their effect is independent of the orientation of the applied magnetic field. However, if the magnetic easy axis of the defect is along the basal plane of NbSe2, this gives rise to a two-fold oscillation of the critical field. Within this configuration, the spin-flip scattering is a pair-breaking process that adds to the pair breaking by the applied magnetic field.
Moreover, for point defects with a perpendicular in-plane easy axis, the critical field oscillates out of phase. One candidate defect that can give rise to this phenomenon is the selenium vacancy ($V_{\rm Se}$), which has been identified to be magnetic and present in as-grown NbSe2 [24]. The spin density of $V_{\rm Se}$ from our first-principles calculations is illustrated in Fig. 3(b). We find a sizeable magnetization ($\approx 0.6\ \mu_{B}$ within our 300-atom supercell). Furthermore, the induced magnetization has a finite length scale and is commensurate with the size of our large supercell. This places a lower bound on the spatial extent of the magnetization, $R_{d}$, which in our calculations is $\sim$15 Å. Now consider that as the temperature is lowered below $T_{c}$, the superconducting coherence length, $\xi$, decreases and at some point may become lower than $R_{d}$. When $\xi<R_{d}$, scattering would occur in the unitary limit, which would result in superconductivity being suppressed at a length scale $R_{d}$ from the site of the vacancy. This suppression only occurs when the magnetic moment of the defect is oriented along $\hat{z}$. When the pairing energy of the resulting “puddle” of finite magnetization, which is $\sim\Delta^{2}N(0)R_{d}^{2}$, becomes larger than the magnetic anisotropy energy (typically on the order of $\mu$eV for point defects), the magnetic moment of the point defect would flop to be in-plane. We expect this behavior to be hysteretic, as associated with any magnetic transition. Hence, we find that both within the Born limit and the unitary limit there is a reduction in the superconducting energy, which leads to pair breaking, albeit for different reasons. This pair-breaking effect can be avoided if the magnetization of the point defect is flipped to be in-plane.
This change in the direction of the magnetization of the point defect, which is likely to be hysteretic as illustrated by the two-fold oscillation of the critical field (Fig. 3(a)), provides a natural explanation for the hysteresis that has been observed in the tunneling conductance at a critical temperature below $T_{c}$, which we surmise is when the coherence length of the Cooper pairs becomes commensurate with or lower than $R_{d}$ [22].

## IV Conclusions

We have presented a detailed analysis of tunneling in NbSe2/CrBr3/NbSe2 heterostructures. We find CrBr3 leads to a proximity-induced magnetic moment in NbSe2 that manifests in exchange splitting of the NbSe2 states. We show that the states at the $\Gamma$-valley contribute the most to the tunneling conductance, as opposed to the states centered around the K-point that are often considered in such analyses. The states at $\Gamma$ that tunnel across the heterostructure scatter off paramagnetic point defects, which leads to a pronounced broadening of the tunneling peaks and a modest enhancement of the superconducting gap when the magnetic exchange field is in-plane. This nontrivial result offers a transparent means to interpret tunneling spectra of Ising superconductors that are interfaced with magnetic insulators. Our theoretical framework offers a unifying and natural explanation for several experimental observations. This includes the $C_{2}$ “nematic” symmetry breaking near $T_{c}$, which can be understood if one considers the interaction of extended defects with paramagnetic point defects and tunneling processes. This explanation does not require one to invoke exotic symmetry-breaking superconducting states. Finally, we show that the proximity of NbSe2 to a magnetic instability leads to defects that have a finite spin polarization spanning a finite length scale, which can manifest in hysteresis in the conductance, as has been observed in tunneling measurements at a critical temperature below $T_{c}$.
## V Methods

### V.1 First-principles calculations methodology

Our first-principles calculations are based on density functional theory within the projector-augmented wave method [34] as implemented in the VASP code [35, 36], using the generalized gradient approximation defined by the Perdew-Burke-Ernzerhof (PBE) functional [37]. We use our DFT calculations to determine the equilibrium atomic structure of the NbSe2/CrBr3 heterostructure, the magnitude of the proximity-induced exchange splitting, and the interlayer coupling for different stacking configurations. For NbSe2 we found it is essential that the Nb $5s^{1},4s^{2},4p^{6},4d^{4}$ electrons and Se $4s^{2},4p^{4}$ electrons are treated as valence. For CrBr3, the Cr $4s^{1},3p^{6},3d^{5}$ and Br $4s^{2},4p^{5}$ electrons are treated as valence. For the heterostructure calculations we use a ($9\times 9\times 1$) $\Gamma$-centered $k$-point grid when performing structural optimization and a ($16\times 16\times 1$) $\Gamma$-centered $k$-point grid with the tetrahedron method to obtain converged total energies and magnetic moments on Nb. All of the calculations of the trilayer structures, monolayer CrBr3 and bilayer NbSe2 use a vacuum spacing of 20 Å along the $c$-axis to avoid spurious interactions with periodically repeated surfaces. We calculated the value of the interlayer coupling, $t_{\perp}$, for our trilayer heterostructure by examining the magnitude of the splitting of the NbSe2 states with the same spin character on different monolayers (i.e., the bonding-antibonding splitting $2t_{\perp}$). To ensure that our calculations of the interlayer coupling are converged, we use a planewave energy cutoff of 600 eV, which is significantly larger than the maximum recommended plane wave cutoff for each of the elements in our heterostructure. The atomic positions of the heterostructure were optimized with the Grimme-D3 van der Waals correction [38] using a force convergence criterion of 5 meV/Å.
We also investigated the effect of different van der Waals correction schemes such as Grimme-D2 [39], Grimme-D3 with damping [40] and the vdW-DF functional [41], and found they lead to minor differences in $d_{\mathrm{Nb-Nb}}$ on the order of $\sim$0.04 Å. We performed collinear DFT calculations of a selenium vacancy ($V_{\rm Se}$) starting from a 10$\times$10$\times$1 supercell (300 atoms) of monolayer NbSe2. We use the $\Gamma$-point to relax the atomic coordinates. We verified that the results are converged using a (2$\times$2$\times$1) $k$-point grid.

### V.2 Analytical model of Ising superconductivity and tunneling

To describe the dispersion of NbSe2 we use the following model Hamiltonian:
$H_{0}(\mathbf{k})=\xi(\mathbf{k})\sigma_{0}+\left[\boldsymbol{\gamma}(\mathbf{k})-\mathbf{B}\right]\cdot\boldsymbol{\sigma}$ (1)
The SOC, $\Delta_{\mathrm{SOC}}(\mathbf{k})$, and the proximity-induced magnetic exchange field, $B$, both couple to the spin, which is parametrized by the set of Pauli matrices $\boldsymbol{\sigma}=(\sigma_{x},\sigma_{y},\sigma_{z})$; $\sigma_{0}$ stands for the unit matrix in spin space. The effect of SOC at the $\Gamma$-valley is described using $\boldsymbol{\gamma}(\mathbf{k})=\hat{z}\sqrt{2}\Delta_{\mathrm{SOC}}\cos(3\phi_{\mathbf{k}})$, where $\phi_{\mathbf{k}}=\arctan(k_{y}/k_{x})$ [3]. While the maximum value of $\Delta_{\mathrm{SOC}}$ for the $\Gamma$-valley is $\sim$70 meV, occurring at $\Lambda$ [3], our calculations in Fig. 1(d) show that $t_{\perp}$ is suppressed at this point. To account for the fact that $t_{\perp}$ is larger along $\Gamma$-M while $\Delta_{\rm SOC}$ is suppressed along this path, we use a lower value of $\Delta_{\mathrm{SOC}}$ of 20$T_{c}$ for all of our calculations.
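Equation (1) with the $\Gamma$-valley form of $\boldsymbol{\gamma}(\mathbf{k})$ can be sketched in a few lines of Python (illustrative values in units of $T_{c}$; this is not the actual code used for the calculations):

```python
import numpy as np

# Pauli matrices and the 2x2 identity
s0 = np.eye(2)
sx = np.array([[0.0, 1.0], [1.0, 0.0]])
sz = np.diag([1.0, -1.0])

D_SOC = 20.0  # Ising SOC scale in units of T_c, as used in the text

def gamma_z(phi):
    """Ising SOC field at the Gamma valley: sqrt(2)*D_SOC*cos(3*phi), along z."""
    return np.sqrt(2.0) * D_SOC * np.cos(3.0 * phi)

def H0(xi, phi, Bx=0.0, Bz=0.0):
    """Eq. (1): xi*sigma_0 + (gamma(k) - B) . sigma, with B = (Bx, 0, Bz)."""
    return xi * s0 + (gamma_z(phi) - Bz) * sz - Bx * sx

# SOC vanishes along Gamma-M (phi = pi/6) and is maximal along Gamma-K (phi = 0):
print(gamma_z(np.pi / 6.0))   # ~ 0
print(gamma_z(0.0))           # sqrt(2) * D_SOC
```

On the Fermi surface ($\xi=0$) the spin splitting of the eigenvalues of $H_{0}$ is $2|\gamma_{z}(\phi)-B_{z}|$ for $B\parallel c$, which makes explicit why an out-of-plane field simply adds to the SOC field, whereas an in-plane $B_{x}$ only tilts the spins by an angle set by $B_{x}/\Delta_{\mathrm{SOC}}$.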
The spin-dependent density of states is expressed via the disorder-averaged Green function, $\hat{G}$, in the standard way, $N_{s}(\omega)=-\frac{1}{\pi}\sum_{\mathbf{k}}\mathrm{Im}\hat{G}_{ss}(\mathbf{k},\omega)\,.$ (2) The Green function, $\hat{G}(\mathbf{k},\omega)$, and the anomalous Green function, $\hat{F}(\mathbf{k},\omega)$, define the Nambu Green function, $\bar{G}(\mathbf{k},\omega)=\begin{bmatrix}\hat{G}(\mathbf{k},\omega)&\hat{F}(\mathbf{k},\omega)\\\ -\hat{F}^{\ast}(-\mathbf{k},\omega)&-\hat{G}^{\ast}(-\mathbf{k},\omega)\end{bmatrix}$ (3) which is a direct product of the particle-hole and spin spaces. It satisfies the Gor’kov equation, $\left[(\omega+i\delta)\bar{1}-\bar{H}_{\mathrm{BdG}}(\mathbf{k})-\bar{\Sigma}(\mathbf{k})\right]\bar{G}(\mathbf{k},\omega)=\bar{1}\,,$ (4) where $\bar{1}$ is the unit matrix in Nambu space, $\bar{H}_{\mathrm{BdG}}(\mathbf{k})$ is the Bogoliubov-de Gennes (BdG) Hamiltonian, $\bar{\Sigma}(\mathbf{k})$ is the self-energy accounting for the effect of disorder, and $\delta$ is a positive infinitesimal. For isotropic scattering this self-energy is defined within the self-consistent Born approximation using Fermi’s golden rule as: $\bar{\Sigma}(\mathbf{k})=\eta\int\frac{d\xi_{\mathbf{k}^{\prime}}}{\pi}\\!\\!\int\frac{d\phi_{\mathbf{k}^{\prime}}}{2\pi}\bar{\sigma}_{z}\bar{G}(\mathbf{k}^{\prime},\omega)\bar{\sigma}_{z}\,,$ (5) where $\bar{\sigma}_{z}=\mathrm{diag}(\sigma_{0},-\sigma_{0})$ and $2\eta$ is the scattering rate due to disorder. In practice we set $\delta$ to a small but finite number for regularization.
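As a minimal numerical illustration of Eq. (2) and of the role of the finite $\delta$ (dropping disorder, SOC and the exchange field, so that the clean isotropic singlet case is recovered), the normalized quasiparticle DOS reduces to the familiar BCS form:

```python
import numpy as np

def bcs_dos(omega, delta_o, delta=1e-3):
    """Normalized DOS N(w)/N0 = |Re[(w + i*delta)/sqrt((w + i*delta)^2 - delta_o^2)]|.
    The small but finite delta regularizes the square-root singularity at |w| = delta_o."""
    w = omega + 1j * delta
    return np.abs(np.real(w / np.sqrt(w * w - delta_o**2)))

# The DOS vanishes inside the gap and tends to 1 far outside it.
inside, outside = bcs_dos(0.5, 1.0), bcs_dos(2.0, 1.0)
```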
The BdG Hamiltonian takes the form, $\bar{H}_{\mathrm{BdG}}(\mathbf{k})=\begin{bmatrix}H_{0}(\mathbf{k})&\Delta_{o}(\mathbf{k})\\\ \Delta_{o}^{\dagger}(\mathbf{k})&-H_{0}^{\ast}(-\mathbf{k})\end{bmatrix}\ $ (6) where the normal-state Hamiltonian, $H_{0}(\mathbf{k})=\xi(\mathbf{k})\sigma_{0}+\left[\boldsymbol{\gamma}(\mathbf{k})-\mathbf{B}\right]\cdot\boldsymbol{\sigma}$ (7) includes the dispersion $\xi(\mathbf{k})$ calculated with respect to $E_{F}$, together with the spin-orbit coupling (SOC) field $\boldsymbol{\gamma}(\mathbf{k})$ and the exchange field $\mathbf{B}$ defined below Eq. (1). For the order parameter $\Delta_{o}(\mathbf{k})$ within the BdG formulation in Eq. 6, we consider only spin-singlet pairing and assume the order parameter to be isotropic, $\Delta_{o}(\mathbf{k})=(i\sigma_{y})_{ss^{\prime}}\Delta_{o}\,.$ (8) The derivation of the singlet order parameter for an out-of-plane magnetic field follows Ref. [31]. For an in-plane field, the order parameter is computed along the lines of Ref. [28] and is elaborated on further in the Supplementary Material [30]. We use different approaches to determine the order parameter for these two configurations of $B$ in the presence of disorder. When $B\parallel c$, SOC does not affect the order parameter $\Delta_{o}$: in this case the SOC field commutes with $B$ (since they are both along $\hat{z}$), and the energy penalty of the singlet Cooper pairs due to the Zeeman field is the same as without SOC. This allows us to compute $\Delta_{o}$ using the low-temperature expansion of the self-consistency equations in terms of Bessel functions [31]. The zero-temperature, zero-field order parameter used in the expansion takes the BCS value, $\Delta_{\rm o}(T=0)=1.76T_{c}$.
For the range of temperatures investigated here, a small number of terms is sufficient to achieve converged results. At low temperatures, $T\ll\Delta_{\rm o}(T=0)$, we find that $\Delta_{o}$ has a rather weak field dependence, which progressively becomes more pronounced when $T\lesssim 0.5T_{c}$. When $B\perp c$, we use a Landau expansion of the free energy near $T_{c}$ to determine $\Delta_{o}$ [30]. To ensure that $\Delta_{o}$ has the same zero-field value for both magnetic field orientations, we use $T=0.74T_{c}$ for the in-plane exchange field calculations. We compute the conductance using the tunneling Hamiltonian approach and retain only the quasiparticle contribution to the tunnel current, $I$. Since $t_{\perp}^{\Gamma}\gg t_{\perp}^{\mathrm{K}}$, we limit our calculations to the $\Gamma$ valley, where the spin-orbit splitting is small and angle-dependent. The magnitude of $dI/dV$ due to tunneling across the CrBr3 band gap is set by the tunneling matrix element $\hat{t}_{\mathbf{k}s,\mathbf{k}^{\prime}s^{\prime}}$, which depends on the momenta and spins of the initial state, $\mathbf{k}s$, and the final state, $\mathbf{k}^{\prime}s^{\prime}$. Since the dispersion around $\Gamma$ is approximately isotropic, we neglect the momentum dependence of the tunneling matrix element and define $\hat{t}_{\mathbf{k}s,\mathbf{k}^{\prime}s^{\prime}}=t_{\perp}^{\Gamma}\delta_{ss^{\prime}}$.
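A rough numerical sketch of the resulting quasiparticle tunneling current, expressed as the overlap of two densities of states weighted by the difference of Fermi factors, with constant prefactors and the spin sum dropped for brevity:

```python
import numpy as np

def fermi(w, T):
    """Fermi-Dirac function, clipped to avoid overflow in exp."""
    return 1.0 / (np.exp(np.clip(w / T, -500.0, 500.0)) + 1.0)

def current(V, w, N1, N2, T):
    """I(V) ~ integral dw N1(w) N2(w + eV) [nF(w) - nF(w + eV)],
    evaluated on a uniform grid w (e = 1, prefactors and spin sum dropped)."""
    N2_shift = np.interp(w + V, w, N2)   # N2 evaluated at shifted energies
    integrand = N1 * N2_shift * (fermi(w, T) - fermi(w + V, T))
    return np.sum(integrand) * (w[1] - w[0])

def didv(V, w, N1, N2, T, dV=1e-3):
    """Differential conductance by central finite differences."""
    return (current(V + dV, w, N1, N2, T) - current(V - dV, w, N1, N2, T)) / (2.0 * dV)
```

For two flat (normal-state) densities of states this reduces to an ohmic $I(V)\propto V$, a quick sanity check before inserting the spin-split superconducting DOS obtained from the Gor’kov equation.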
A voltage, $V$, applied along $\hat{z}$ gives rise to an out-of-plane tunneling current, $I$, which we define as: $\begin{split}I(V)=e|t_{\perp}^{\Gamma}|^{2}\sum_{s}\int\frac{d\omega}{2\pi}&\left[N_{s}(\omega)N_{s}(\omega+eV)\right]\\\ &\times[n_{F}(\omega)-n_{F}(\omega+eV)]\ \end{split}$ (9) where $e$ is the electron charge, $N_{s}(\omega)$ is the density of states for quasiparticles with spin $s$, and $n_{F}(\omega)$ is the Fermi-Dirac function. We include the effect of paramagnetic defects through the disorder self-energy, $\bar{\Sigma}(\mathbf{k})$, within the self-consistent Born approximation. We calculate $dI/dV$ as follows. At a given temperature $T$, we calculate the order parameter $\Delta_{o}$ for $B=0$, $B=0.225T_{c}$, $B=0.45T_{c}$ and $B=0.675T_{c}$, where $T_{c}$ is the critical temperature at zero field [30]. For a given $\Delta_{o}$ we then solve the Gor’kov equation and find the density of states, which allows us to compute $dI/dV$ using Eq. 9. We perform these calculations for both $B\parallel c$ and $B\perp c$.

## Data availability

The data that support the findings of this study are available from the corresponding author upon reasonable request.

## Competing interests

The authors declare no competing interests.

###### Acknowledgements. We thank David Möckli, Kin Fai Mak and Jie Shan for helpful discussions. D.W. was supported by the Office of Naval Research through the Naval Research Laboratory’s Basic Research Program. I.I.M. was supported by ONR through grant N00014-20-1-2345. M.H. and M.K. acknowledge the financial support from the Israel Science Foundation, Grant No. 2665/20. Calculations by D.W. and I.I.M. were performed at the DoD Major Shared Resource Center at AFRL.

## References

* Xi _et al._ [2016] X. Xi, Z. Wang, W. Zhao, J.-H. Park, K. T. Law, H. Berger, L. Forró, J. Shan, and K. F. Mak, Nat. Phys. 12, 139 (2016). * Sergio _et al._ [2018] C. Sergio, M. R. Sinko, D. P. Gopalan, N. Sivadas, K. L. Seyler, K. Watanabe, T. Taniguchi, A.
W. Tsen, X. Xu, D. Xiao, and B. Hunt, Nat. Comm. 9, 1427 (2018). * Wickramaratne _et al._ [2020] D. Wickramaratne, S. Khmelevskyi, D. F. Agterberg, and I. I. Mazin, Phys. Rev. X 10, 041003 (2020). * Shaffer _et al._ [2020] D. Shaffer, J. Kang, F. Burnell, and R. M. Fernandes, Phys. Rev. B 101, 224503 (2020). * Möckli and Khodas [2020] D. Möckli and M. Khodas, Phys. Rev. B 101, 014510 (2020). * Dvir _et al._ [2018] T. Dvir, F. Massee, L. Attias, M. Khodas, M. Aprili, C. H. Quay, and H. Steinberg, Nature communications 9, 1 (2018). * Divilov _et al._ [2020] S. Divilov, W. Wan, P. Dreher, M. M. Ugeda, and F. Ynduráin, arXiv preprint arXiv:2005.06210 (2020). * [8] W. Wan, P. Dreher, R. Harsh, F. Guinea, and M. M. Ugeda, arXiv preprint arXiv:2101.04050 . * Tokuyasu _et al._ [1988] T. Tokuyasu, J. A. Sauls, and D. Rainer, Phys. Rev. B 38, 8823 (1988). * Tedrow _et al._ [1986] P. Tedrow, J. Tkaczyk, and A. Kumar, Phys. Rev. Lett. 56, 1746 (1986). * Heikkilä _et al._ [2019] T. T. Heikkilä, M. Silaev, P. Virtanen, and F. S. Bergeret, Progress in Surface Science 94, 100540 (2019). * Eschrig [2015] M. Eschrig, Reports on Progress in Physics 78, 104501 (2015). * Sau _et al._ [2012] J. D. Sau, S. Tewari, and S. D. Sarma, Phys. Rev. B 85, 064512 (2012). * Głodzik and Ojanen [2020] S. Głodzik and T. Ojanen, New Journal of Physics 22, 013022 (2020). * McGuire [2017] M. A. McGuire, Crystals 7, 121 (2017). * Kim _et al._ [2019a] H. H. Kim, B. Yang, S. Li, S. Jiang, C. Jin, Z. Tao, G. Nichols, F. Sfigakis, S. Zhong, C. Li, S. Tian, D. Cory, G.-X. Miao, J. Shan, K. F. Mak, H. Lei, K. Sun, L. Zhao, and A. Tsen, Proc. Natl. Acad. Sci. U.S.A 116, 11131 (2019a). * Kezilebieke _et al._ [2020a] S. Kezilebieke, M. N. Huda, V. Vaňo, M. Aapro, S. C. Ganguli, O. J. Silveira, S. Głodzik, A. S. Foster, T. Ojanen, and P. Liljeroth, Nature 588, 424 (2020a). * Kezilebieke _et al._ [2020b] S. Kezilebieke, O. J. Silveira, M. N. Huda, V. Vaňo, M. Aapro, S. C. Ganguli, J. Lahtinen, R. Mansell, S. 
van Dijken, A. S. Foster, _et al._ , arXiv preprint arXiv:2009.13465 (2020b). * Hamill _et al._ [2020] A. Hamill, B. Heischmidt, E. Sohn, D. Shaffer, K.-T. Tsai, X. Zhang, X. Xi, A. Suslov, H. Berger, L. Forró, F. Burnell, J. Shan, K. F. Mak, R. Fernandes, K. Wang, and V. Pribiag, arXiv preprint arXiv:2004.02999 (2020). * Kim _et al._ [2019b] H. H. Kim, B. Yang, S. Tian, C. Li, G.-X. Miao, H. Lei, and A. W. Tsen, Nano Lett. 19, 5739 (2019b). * Idzuchi _et al._ [2020] H. Idzuchi, F. Pientka, K.-F. Huang, K. Harada, Ö. Gül, Y. Shin, L. Nguyen, N. Jo, D. Shindo, R. Cava, _et al._ , arXiv preprint arXiv:2012.14969 (2020). * Kang _et al._ [2021] K. Kang, S. Jiang, H. Berger, K. Watanabe, T. Taniguchi, L. Forró, J. Shan, and K. F. Mak, Giant anisotropic magnetoresistance in ising superconductor-magnetic insulator tunnel junctions (2021), arXiv:2101.01327 [cond-mat.supr-con] . * [23] L. Ai, E. Zhang, C. Huang, X. Xie, Y. Yang, Z. Jia, Y. Zhang, S. Liu, Z. Li, P. Leng, _et al._ , arXiv preprint arXiv:2101.04323 . * Nguyen _et al._ [2017] L. Nguyen, H.-P. Komsa, E. Khestanova, R. J. Kashtiban, J. J. Peters, S. Lawlor, A. M. Sanchez, J. Sloan, R. V. Gorbachev, I. V. Grigorieva, _et al._ , ACS Nano 11, 2894 (2017). * Wang _et al._ [2017] H. Wang, X. Huang, J. Lin, J. Cui, Y. Chen, C. Zhu, F. Liu, Q. Zeng, J. Zhou, P. Yu, _et al._ , Nat. Comm. 8, 1 (2017). * Sosenko _et al._ [2017] E. Sosenko, J. Zhang, and V. Aji, Phys. Rev. B 95, 144508 (2017). * Möckli _et al._ [2020] D. Möckli, M. Haim, and M. Khodas, Journal of Applied Physics 128, 053903 (2020). * Haim _et al._ [2020] M. Haim, D. Möckli, and M. Khodas, Physical Review B 102, 214513 (2020). * woo Cho _et al._ [2020] C. woo Cho, J. Lyu, T. Han, C. Y. Ng, Y. Gao, G. Li, M. Huang, N. Wang, J. Schmalian, and R. Lortz, arXiv preprint arXiv:2003.12467 (2020). 
* [30] See Supplemental Material at … for additional electronic structure calculations of the trilayer heterostructure and the derivation of the free energy of the order parameter for a magnetic exchange field that is applied parallel to the plane of the heterostructure. * Maki and Tsuneto [1964] K. Maki and T. Tsuneto, Progress of Theoretical Physics 31, 945 (1964). * Anderson [1959] P. W. Anderson, Journal of Physics and Chemistry of Solids 11, 26 (1959). * Meservey and Tedrow [1994] R. Meservey and P. Tedrow, Physics reports 238, 173 (1994). * Blöchl [1994] P. E. Blöchl, Phys. Rev. B 50, 17953 (1994). * Kresse and Hafner [1993] G. Kresse and J. Hafner, Phys. Rev. B 47, 558 (1993). * Kresse and Furthmüller [1996] G. Kresse and J. Furthmüller, Phys. Rev. B 54, 11169 (1996). * Perdew _et al._ [1996] J. P. Perdew, K. Burke, and M. Ernzerhof, Phys. Rev. Lett. 77, 3865 (1996). * Grimme _et al._ [2010] S. Grimme, J. Antony, S. Ehrlich, and H. Krieg, J. Chem. Phys. 132, 154104 (2010). * Grimme [2006] S. Grimme, J. Comput. Chem. 27, 1787 (2006). * Grimme _et al._ [2011] S. Grimme, S. Ehrlich, and L. Goerigk, J. Comput. Chem. 32, 1456 (2011). * Klimeš _et al._ [2011] J. Klimeš, D. R. Bowler, and A. Michaelides, Phys. Rev. B 83, 195131 (2011).
© 2021 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.

# Leveraging Domain Labels for Object Detection from UAVs

###### Abstract Object detection from Unmanned Aerial Vehicles (UAVs) is of great importance in many aerial vision-based applications. Despite the great success of generic object detection methods, a large performance drop is observed when they are applied to images captured by UAVs. This is due to large variations in imaging conditions, such as varying altitudes, dynamically changing viewing angles, and different capture times. We demonstrate that domain knowledge is a valuable source of information and thus propose domain-aware object detectors that use freely accessible sensor data. By splitting the model into cross-domain and domain-specific parts, substantial performance improvements are achieved on multiple datasets across multiple models and metrics. In particular, we achieve a new state-of-the-art performance on UAVDT for real-time detectors. Furthermore, we create a new airborne image dataset by annotating 13 713 objects in 2 900 images featuring precise altitude and viewing angle annotations (dataset and code will be made available at this URL).

Index Terms— Object detection, UAV, Domain, DNN

## 1 Introduction

Deep learning-based object detection from Unmanned Aerial Vehicles (UAVs) has developed into an important line of research due to its impact on many application areas, such as traffic surveillance, smart cities, and search and rescue [1]. While generic object detection has improved drastically lately [2], object detection on images captured from UAVs still poses great challenges [3].
Among these, the variation across domains is particularly challenging. For example, an object detector encounters images taken from varying altitudes, angles of view, and at different times. More precisely, varying flying altitudes result in images containing differently sized objects with different resolutions, while different viewing angles yield a multitude of different visual appearances. This problem becomes more severe when the interplay between domains is considered (see Fig. 1). Fig. 1: Predictions of the angle experts on two images of the dataset POG, showing the very same scenery taken from different perspectives (top: 10 m, 10∘; bottom: 100 m, 90∘). Another important factor of variation is the change in illumination and appearance at different times. While domain information is implicitly encoded in the captured images, it is also explicitly available from the UAVs’ sensors: the altitude of the aircraft can be retrieved from the onboard GPS or barometer, the viewing angle from the gimbal pitch angle of the camera, and the time from an onboard clock. Therefore, we propose so-called expert models, which are composed of shared layers and domain-specific layers. While the shared layers learn domain-invariant representations, the domain-specific layers preserve the domain-specific information, yielding robust performance in multi-domain settings. Our contributions are threefold: (i) We are the first to cast object detection from UAVs as a multi-domain learning [4] problem and construct domain-robust models by dividing them into shared and domain-specific layers, dubbed expert models. (ii) We demonstrate that the expert models consistently outperform multiple domain-agnostic models without sacrificing speed on three benchmarks, UAVDT [5], Visdrone [1] and PeopleOnGrass (POG). (iii) We captured and annotated POG with bounding boxes and precise domain labels, and will provide it to the community.
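The sensor-to-domain mapping just described can be sketched as follows; the bin thresholds are illustrative assumptions for this sketch, not the splits used in any particular dataset:

```python
def domain_from_metadata(altitude_m, gimbal_pitch_deg, hour):
    """Derive discrete domain labels from freely available UAV sensor data:
    altitude (GPS/barometer), viewing angle (gimbal pitch) and capture time.
    Thresholds below are made-up illustrative values."""
    altitude = "L" if altitude_m < 30 else ("M" if altitude_m < 70 else "H")
    angle = "B" if gimbal_pitch_deg >= 60 else "A"   # bird view vs. acute angle
    time = "D" if 6 <= hour < 18 else "N"            # day vs. night
    return altitude, angle, time
```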
## 2 Related Work

With the release of the UAVDT [5] and VisDrone [1] datasets, several works developed models specifically aimed at object detection from UAVs [6, 7, 8]. With [9], the concept of domains entered the field of object detection from UAVs: a Siamese-GAN is introduced to learn invariant feature representations for labeled and unlabeled aerial images from two different domains. However, the focus of such a domain adaptation method is to adapt the model from a fixed source domain to a fixed target domain. Fine-grained domains are utilized by [10], where adversarial losses are employed to disentangle domain-specific nuisances. However, the training is slow and unstable, and domain labels are ignored at test time. Expert models are proposed in [11] to account for objects with particular shapes (horizontally/vertically elongated, square-like). Since no domain labels are used in that work, the experts form a model ensemble too expensive to employ across multiple domains. A multi-domain learning approach for object detection is investigated in [12], where the focus is on learning multiple distinct datasets. Transfer learning [13] differs in that it generally aims to learn invariant representations, whereas multi-domain learning preserves the domain-specific representations.

## 3 MULTI-DOMAIN LEARNING APPROACH

In multi-domain learning, image samples $x$ with corresponding bounding box annotations $y$ are accompanied by a discrete domain indicator $d\in D=\\{D_{1},\dots,D_{m}\\}$, which is also available at test time, such that a training sample is $(x,d,y)$ and a test sample is $(x,d)$. In particular, this means we can leverage the domain information $d$ at test time, which is the key to our expert models. Motivated by [14] and [15], given an object detector model, we share lower layers across all domains and leave higher layers domain-specific.
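The split into shared and domain-specific layers can be sketched with plain callables standing in for network stages (toy arithmetic instead of feature maps; the stage functions and domain names are made up for illustration):

```python
class ExpertDetector:
    """Shared lower stages for all domains; one expert branch of higher
    stages per domain, selected by the domain indicator d at test time."""

    def __init__(self, shared_stages, experts_by_domain):
        self.shared = shared_stages        # list of callables
        self.experts = experts_by_domain   # {domain: list of callables}

    def forward(self, x, d):
        for stage in self.shared:          # every sample passes here
            x = stage(x)
        for stage in self.experts[d]:      # only the chosen expert runs
            x = stage(x)
        return x

model = ExpertDetector(
    shared_stages=[lambda x: x + 1, lambda x: x * 2],
    experts_by_domain={"day": [lambda x: x + 10], "night": [lambda x: x - 10]},
)
```

Only one expert branch executes per image, so inference cost matches the domain-agnostic model even though the parameter count grows with the number of domains.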
This approach follows the conventional wisdom that lower layers extract lower-level features, which are present across all domains, while higher layers extract higher-level features, which may differ substantially between domains (such as the people in Fig. 1). Empirically, this is backed up by [12], which shows that activations in higher layers differ vastly. From preliminary experiments, we found that it is best to split models not based on individual layers, but on groups of layers, known as stages [12]. We denote the resulting model according to the domain dimension that is split and the stage up to which it is shared, such that the model in Fig. 2 is called time@3. The branch for a particular domain is called an expert for that domain. We explore empirically which stages are to be shared in section 4. While the number of parameters scales linearly with the number of domains, the inference speed stays constant, as only a single expert is evaluated at a time. Therefore, the experts effectively increase the model’s capacity without hampering the inference speed. Furthermore, the experts’ sizes are still small enough that they all fit even in embedded GPUs’ memory, as will be seen in section 4. Lastly, similar to what is done in transfer learning [13], we freeze the shared stages after pretraining them on all domains in order to transfer knowledge between domains and so that the weights will not be biased towards the over-represented domains [16]. This is particularly beneficial for datasets with great domain imbalances, as is the case for UAVDT [5] and VisDrone (see Table 1). Fig. 2: Illustration of a time@3 model with day and night experts. The time dimension is split into two domains, day (red) versus night (blue), where green outputs represent the shared stages (first, second, third). Every image is passed through the shared green stages.
Then it is checked whether it is a day or night image, and it is consequently passed through the red or blue stages, respectively.

| | Total | B | A |
|---|---|---|---|
| Total | 6 471 / 343 205 | 1 293 / 54 008 | 5 178 / 289 197 |
| H | 1 423 / 71 073 | D: 505 / 26 755, N: 166 / 4 120 | D: 752 / 40 198, N: 148 / 6 219 |
| M | 2 645 / 130 327 | D: 400 / 16 357, N: 116 / 3 424 | D: 1 781 / 96 899, N: 348 / 13 647 |
| L | 2 403 / 141 805 | D: 81 / 2 671, N: 25 / 681 | D: 1 786 / 114 504, N: 511 / 23 949 |

Table 1: Number of images / objects in the respective domain in the VisDrone train set. The domains are bird view (B), acute viewing angle (A), high (H), medium (M), low (L), day (D) and night (N).

## 4 EXPERIMENTAL RESULTS AND ABLATIONS

In the first two sets of experiments, we show how leveraging domain labels on UAVDT and VisDrone improves multiple model architectures’ performance. Furthermore, we investigate the effect of different splitting strategies and ablations. Lastly, we show that finer domain splitting is possible in the case of the dataset POG. We evaluate our models using the official evaluation protocols, i.e. AP70 for UAVDT and mAP and mAP50 for VisDrone, respectively. Furthermore, similar to [12], we report results on individual domains and their respective averages AP${}_{70}^{\text{avg}}$ and mAP${}_{50}^{\text{avg}}$ over all respective domains to measure the universal cross-domain performance. These metrics weigh each domain equally and therefore mitigate the influence of domain imbalances in the test set. They favor models that perform acceptably on all domains instead of just a few, possibly over-represented domains.

### 4.1 VisDrone

The object detection track of VisDrone consists of around 10k images with 10 categories. All frames are annotated with domain labels regarding altitude (low (L), medium (M), high (H)), viewing angle (front, side, bird (B)) and light condition (day (D), night (N)) [10].
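The domain-balanced averages used throughout section 4 weigh each domain equally; a minimal sketch, recomputing the DE-FPN altitude row of Table 2 as a check:

```python
def domain_averaged(metric_by_domain):
    """Average a per-domain metric with equal weight per domain,
    mitigating domain imbalance in the test set (cf. mAP50avg, AP70avg)."""
    return sum(metric_by_domain.values()) / len(metric_by_domain)

# DE-FPN mAP50 per altitude domain from Table 2; rounds to the reported 44.9.
map50_avg = domain_averaged({"L": 49.1, "M": 49.7, "H": 36.0})
```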
Note that we fuse the two domains “front” and “side” into a single domain “acute angle (A)”, as, at test time, we can only distinguish between bird view and non-bird view based on the camera angle. We reimplement the best-performing single model (no ensemble) from the workshop report, DE-FPN [1], i.e. a Faster R-CNN with a ResNeXt-101 64-4d [17] backbone (removing P6), which was trained using color jitter and random image cropping, achieving 48.7% mAP50 on the test set. To compare with [10], we evaluate our models on the unseen validation set, on which our implementation yielded 48.6% mAP50. From Table 2, we can make three observations: First, the altitude experts improve over the baseline DE-FPN on the whole validation set and on all domains individually if more than the second stage is shared. The performance drop of Altitude@0 and Altitude@1 is likely caused by overfitting on the small domain H, where the drop amounts to -0.5 mAP50. Second, there seems to be an upward trend in performance, peaking at the fourth stage and dropping at the fifth stage. Third, improvements are seen for all experts: +1.3, +0.8 and +0.4 mAP50 for the altitude, angle and time experts, respectively. Furthermore, the performance improvements are seen in the domain-sensitive metric mAP${}_{50}^{\text{avg}}$, yielding +1.2, +1.2 and +0.6 points for the respective experts.
| | L | M | H | mAP50 | mAP | mAP${}_{50}^{\text{avg}}$ |
|---|---|---|---|---|---|---|
| DE-FPN [1] | 49.1 | 49.7 | 36.0 | 48.6 | 26.1 | 44.9 |
| Altitude@0 | 49.4 | 49.6 | 35.5 | 48.3 | 25.9 | 44.8 |
| Altitude@1 | 49.5 | 49.7 | 35.7 | 48.5 | 25.9 | 45.0 |
| Altitude@2 | 49.5 | 49.9 | 36.1 | 48.7 | 26.1 | 45.2 |
| Altitude@3 | 50.2 | 50.2 | 36.8 | 49.2 | 26.6 | 45.7 |
| Altitude@4 | 50.7 | 50.2 | 37.5 | 49.9 | 27.4 | 46.1 |
| Altitude@5 | 50.5 | 50.0 | 37.5 | 49.7 | 27.0 | 46.0 |

| | B | A | mAP50 | mAP | mAP${}_{50}^{\text{avg}}$ |
|---|---|---|---|---|---|
| DE-FPN [1] | 38.0 | 49.0 | 48.6 | 26.1 | 43.5 |
| Angle@4 | 39.6 | 49.8 | 49.4 | 27.0 | 44.7 |

| | D | N | mAP50 | mAP | mAP${}_{50}^{\text{avg}}$ |
|---|---|---|---|---|---|
| DE-FPN [1] | 48.5 | 52.0 | 48.6 | 26.1 | 50.2 |
| Time@4 | 49.0 | 52.6 | 49.0 | 26.6 | 50.8 |

Table 2: Expert results for various sharing strategies on VisDrone. Table 3 shows that sharing along two and three domain dimensions is advantageous. The altitude-angle@4 experts and the altitude-angle-time@4 experts improve DE-FPN on all domains individually and overall. In particular, we obtain a +1.8 and a +2.0 mAP${}_{50}^{\text{avg}}$ increase, respectively. The standard metrics mAP and mAP50 show an improvement as well, albeit a smaller one, which we attribute to these metrics’ failure to capture domain imbalances in the validation set. This contrast is further reflected in the under-represented domains improving the most. For example, the altitude-angle-time@4 experts improve the performance on the domain M+A+N, which only contains 348 images (see Table 1), from 51.6 mAP50 to 56.5 mAP50. Similar observations can be made from Table 4, where the altitude-time@4 and angle-time@4 experts both improve by +1.8 mAP${}_{50}^{\text{avg}}$.
| | $\downarrow$ + $\rightarrow$ | L | M | H | mAP50 | mAP | mAP${}_{50}^{\text{avg}}$ |
|---|---|---|---|---|---|---|---|
| DE-FPN [1] | B | 84.6 | 42.5 | 35.6 | 48.6 | 26.1 | 50.5 |
| | A | 49.1 | 50.0 | 41.2 | | | |
| Altitude-angle@4 | B | 87.4 | 44.8 | 39.6 | 49.0 | 26.3 | 52.3 |
| | A | 49.7 | 50.1 | 42.2 | | | |
| DE-FPN [1] | B+D | 84.6 | 42.5 | 35.6 | 48.6 | 26.1 | 50.9 |
| | A+D | 49.0 | 50.2 | 41.2 | | | |
| | A+N | 52.8 | 51.6 | – | | | |
| Altitude-angle-time@4 | B+D | 87.5 | 44.8 | 39.6 | 49.6 | 26.8 | 52.9 |
| | A+D | 50.1 | 50.6 | 42.2 | | | |
| | A+N | 54.4 | 56.5 | – | | | |

Table 3: Results on specific domains for multi-dimension experts on VisDrone. For example, the altitude-angle-time@4 expert achieves 54.4 mAP50 on the domain A+N+L (acute viewing angle, at night and low altitude). Note that there are no validation images in the domains B+N and A+N+H.

| | mAP50 | mAP | mAP${}_{50}^{\text{avg}}$ |
|---|---|---|---|
| DE-FPN [1] | 48.6 | 26.1 | 49.7 |
| Altitude-time@4 | 49.1 | 26.3 | 51.5 |
| DE-FPN [1] | 48.6 | 26.1 | 50.1 |
| Angle-time@4 | 49.2 | 26.4 | 51.9 |

Table 4: Multi-dimension experts on the VisDrone validation set.

| | B | A | mAP50 | mAP${}_{50}^{\text{avg}}$ |
|---|---|---|---|---|
| D0 | 21.5 | 24.9 | 26.3 | 23.2 |
| Angle | 22.1 | 26.2 | 27.6 | 24.2 |

Table 5: EfficientDet D0 angle experts on the VisDrone validation set.

To further test our approach in real-time scenarios, we choose the currently best model family on COCO test-dev according to [18], i.e. EfficientDet [19], and take the smallest model, D0, as our baseline. We test it on the NVIDIA Jetson AGX Xavier, an embedded computer with an integrated GPU suitable for on-board processing. For that, we convert the trained model to half precision using JetPack and TensorRT and set the performance mode to MAX-N. The inference speed is reported in frames per second (fps) averaged over the validation set. Similar to [20], the fps figures do not include the non-maximum suppression stage, as TensorRT does not support it.
Keeping the image ratio, the longer image side is 1408 px for training and testing. As shown in Table 5, sharing the backbone yields an improvement of 1.3 points mAP50 for the angle experts. Both models run at 21.8 fps, suitable for live on-board processing. With all pre- and post-processing steps, we obtain a wall-clock rate of 18.1 fps.

### 4.2 UAVDT

UAVDT contains around 41k frames with annotated cars, buses and trucks. Similar to [10], we fuse all classes into a single vehicle class. All frames are domain-annotated like VisDrone. To compare our experts, we trained a Faster R-CNN with a ResNet-101-FPN backbone like [10], who report 49.1 AP70; our implementation obtained 49.4 AP70 on the test set, serving as our baseline. As Table 6 shows, the angle@2 and time@2 experts improve performance over the baseline on both metrics. In particular, the angle@2 expert improves the baseline by +3 AP${}_{70}^{\text{avg}}$.

| | L | M | H | AP70 | AP${}_{70}^{\text{avg}}$ |
|---|---|---|---|---|---|
| ResNet-101-FPN [10] | 61.9 | 58.1 | 24.1 | 49.4 | 48.0 |
| Altitude@2 | 62.5 | 60.5 | 24.1 | 49.4 | 49.0 |

| | B | A | AP70 | AP${}_{70}^{\text{avg}}$ |
|---|---|---|---|---|
| ResNet-101-FPN [10] | 28.9 | 59.1 | 49.4 | 44.0 |
| Angle@2 | 33.6 | 60.4 | 50.4 | 47.0 |

| | D | N | AP70 | AP${}_{70}^{\text{avg}}$ |
|---|---|---|---|---|
| ResNet-101-FPN [10] | 51.4 | 50.6 | 49.4 | 51.0 |
| Time@2 | 53.4 | 54.1 | 50.1 | 53.8 |

Table 6: Domain experts on the UAVDT test set.

We also demonstrate that the performance gain from expert models does not vanish when switching to another backbone, e.g. ResNet-101. As shown in Table 7, the angle experts yield an increase of +3 AP70 and +5 AP${}_{70}^{\text{avg}}$ and even outperform NDFT, an approach using adversarial losses on domain labels.
| | B | A | AP70 | AP${}_{70}^{\text{avg}}$ |
|---|---|---|---|---|
| ResNet-101 [10] | 27.1 | 54.4 | 45.6 | 40.1 |
| NDFT [10] | 28.8 | 56.0 | 47.9 | 43.4 |
| Angle@2 | 31.6 | 58.6 | 48.6 | 45.1 |

Table 7: Results for the ResNet-101 backbone on UAVDT.

As for VisDrone, Table 8 shows how the altitude experts with a shared backbone can regain precision that was sacrificed for the high speed of the D0 model. The large improvement of +18.1 AP70 is likely caused by the heavy domain imbalance of UAVDT [5], which the experts successfully mitigate. In particular, we set a new state-of-the-art performance for real-time detectors on embedded hardware by improving upon [20] by +9.0 AP70. Note that they tested on different embedded hardware.

| | AP70 | FPS |
|---|---|---|
| D0 | 17.1 | 21.8 |
| UAV-Net [20] | 26.2 | 18.3 |
| Altitude | 35.2 | 21.8 |

Table 8: Altitude experts results on the UAVDT test set.

### 4.3 POG: baseline and expert results

Lastly, we would like to note that there are no publicly available datasets for object detection from UAVs that include precise domain labels regarding altitude and viewing angle. We argue that this is a major impediment to the development of domain-aware models, since these two factors contribute majorly to appearance changes. For that reason, we record the experimental dataset PeopleOnGrass (POG) containing 2 900 images (3840x2160 px), showing people from various angles and altitudes varying from 0∘ (horizontally facing) to 90∘ (top-down) and from 5 m to 100 m, respectively, each image labeled with the precise altitude and angle it was captured at. Further metadata, such as GPS location, UAV speed, and timestamps, are also included. We annotate 13 713 people and balance the dataset with respect to the domain dimensions angle and altitude. This dataset will be released with the paper and will hopefully benefit the development of domain-aware models.
For future reference, we establish an EfficientDet D0 baseline that can run in real time on embedded hardware such as the Xavier board. Finally, we employ altitude experts with a shared backbone to showcase the effectiveness of a multi-domain learning approach on finer domains. We split the altitude dimension (0 m to 100 m) into three and six equidistant domains and denote the experts 3xAltitude and 6xAltitude, respectively. Table 9 shows that the baseline achieves 82.0 AP50, which the experts improve by +4.2 and +5.9 AP50, respectively, showing that experts further benefit from finer domain splits (6xAltitude +1.7 AP50 compared to 3xAltitude).

| | AP50 | AP | AP${}_{50}^{\text{avg}}$ |
|---|---|---|---|
| D0 | 82.0 | 36.4 | 82.9 |
| 3xAltitude | 86.2 | 40.3 | 86.0 |
| 6xAltitude | 87.9 | 40.8 | 88.1 |

Table 9: (Finer) altitude experts results on the POG test set.

## 5 Conclusion

We are the first to successfully apply a multi-domain learning method to object detection from UAVs. We propose and analyze expert models leveraging metadata at test time. Although these expert models are conceptually simple, they achieve domain awareness and consistently improve several heavily optimized state-of-the-art models on multiple datasets and metrics. In particular, our D0 expert yields 35.2% AP70 on UAVDT, making it the new state-of-the-art real-time detector on embedded hardware. Lastly, due to the lack of datasets with precise meta labels, we introduce a new dataset that may help further studies in the field of multi-domain learning in object detection.

## References

* [1] Pengfei Zhu, Longyin Wen, Dawei Du, Xiao Bian, Haibin Ling, Qinghua Hu, Qinqin Nie, Hao Cheng, Chenfeng Liu, Xiaoyu Liu, et al., “VisDrone-DET2018: The vision meets drone object detection in image challenge results,” in Proceedings of the European Conference on Computer Vision (ECCV), 2018, pp. 0–0.
* [2] Zhong-Qiu Zhao, Peng Zheng, Shou-tao Xu, and Xindong Wu, “Object detection with deep learning: A review,” IEEE transactions on neural networks and learning systems, vol. 30, no. 11, pp. 3212–3232, 2019. * [3] Pengfei Zhu, Longyin Wen, Dawei Du, Xiao Bian, Qinghua Hu, and Haibin Ling, “Vision meets drones: Past, present and future,” arXiv preprint arXiv:2001.06303, 2020. * [4] Mahesh Joshi, Mark Dredze, William Cohen, and Carolyn Rose, “Multi-domain learning: when do domains matter?,” in Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, 2012, pp. 1302–1312. * [5] Dawei Du, Yuankai Qi, Hongyang Yu, Yifan Yang, Kaiwen Duan, Guorong Li, Weigang Zhang, Qingming Huang, and Qi Tian, “The unmanned aerial vehicle benchmark: Object detection and tracking,” in Proceedings of the European Conference on Computer Vision (ECCV), 2018, pp. 370–386. * [6] Igor Ševo and Aleksej Avramović, “Convolutional neural network based automatic object detection on aerial images,” IEEE geoscience and remote sensing letters, vol. 13, no. 5, pp. 740–744, 2016. * [7] Lars Wilko Sommer, Tobias Schuchert, and Jürgen Beyerer, “Fast deep vehicle detection in aerial images,” in 2017 IEEE Winter Conference on Applications of Computer Vision (WACV). IEEE, 2017, pp. 311–319. * [8] Jian Ding, Nan Xue, Yang Long, Gui-Song Xia, and Qikai Lu, “Learning roi transformer for detecting oriented objects in aerial images,” arXiv preprint arXiv:1812.00155, 2018. * [9] Laila Bashmal, Yakoub Bazi, Haikel AlHichri, Mohamad M AlRahhal, Nassim Ammour, and Naif Alajlan, “Siamese-gan: Learning invariant representations for aerial vehicle image categorization,” Remote Sensing, vol. 10, no. 2, pp. 351, 2018. 
* [10] Zhenyu Wu, Karthik Suresh, Priya Narayanan, Hongyu Xu, Heesung Kwon, and Zhangyang Wang, “Delving into robust object detection from unmanned aerial vehicles: A deep nuisance disentanglement approach,” in Proceedings of the IEEE International Conference on Computer Vision, 2019, pp. 1201–1210. * [11] Hyungtae Lee, Sungmin Eum, and Heesung Kwon, “ME r-cnn: Multi-expert r-cnn for object detection,” IEEE Transactions on Image Processing, vol. 29, pp. 1030–1044, 2019. * [12] Xudong Wang, Zhaowei Cai, Dashan Gao, and Nuno Vasconcelos, “Towards universal object detection by domain attention,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2019, pp. 7289–7298. * [13] Fuzhen Zhuang, Zhiyuan Qi, Keyu Duan, Dongbo Xi, Yongchun Zhu, Hengshu Zhu, Hui Xiong, and Qing He, “A comprehensive survey on transfer learning,” Proceedings of the IEEE, vol. 109, no. 1, pp. 43–76, 2020. * [14] Rich Caruana, “Multitask learning,” Machine Learning, vol. 28, no. 1, pp. 41–75, 1997. * [15] Sebastian Ruder, “An overview of multi-task learning in deep neural networks,” arXiv preprint arXiv:1706.05098, 2017. * [16] Kemal Oksuz, Baris Can Cam, Sinan Kalkan, and Emre Akbas, “Imbalance problems in object detection: A review,” IEEE Transactions on Pattern Analysis and Machine Intelligence, 2020. * [17] Saining Xie, Ross Girshick, Piotr Dollár, Zhuowen Tu, and Kaiming He, “Aggregated residual transformations for deep neural networks,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017, pp. 1492–1500. * [18] “Object Detection on COCO test-dev,” https://paperswithcode.com/sota/object-detection-on-coco, Accessed: 2021-01-11. * [19] Mingxing Tan, Ruoming Pang, and Quoc V Le, “Efficientdet: Scalable and efficient object detection,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2020, pp. 10781–10790.
* [20] Tobias Ringwald, Lars Sommer, Arne Schumann, Jurgen Beyerer, and Rainer Stiefelhagen, “UAV-net: A fast aerial vehicle detector for mobile platforms,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, 2019, pp. 0–0.
# Rigidity of Complete Gradient Steady Ricci Solitons with Harmonic Weyl Curvature Fengjiang Li School of Mathematical Sciences East China Normal University Shanghai, 200241 <EMAIL_ADDRESS> ###### Abstract. Our main aim in this paper is to investigate the rigidity of complete noncompact gradient steady Ricci solitons with harmonic Weyl tensor. More precisely, we prove that an $n$-dimensional ($n\geq 5$) complete noncompact gradient steady Ricci soliton with harmonic Weyl tensor is either Ricci flat or isometric to the Bryant soliton up to scaling. We also derive a classification result for complete noncompact gradient expanding Ricci solitons with harmonic Weyl tensor. Meanwhile, for $n\geq 5$, we provide a local structure theorem for $n$-dimensional connected (not necessarily complete) gradient Ricci solitons with harmonic Weyl curvature, thus extending the work of Kim [31] for $n=4$. Furthermore, a similar method can be applied to treat vacuum static spaces and CPE metrics with harmonic curvature [32, 11], as well as quasi-Einstein manifolds with harmonic Weyl curvature [12]. ###### Key words and phrases: Gradient Ricci solitons, harmonic Weyl curvature, Codazzi tensor ###### 2020 Mathematics Subject Classification: Primary 53C21; Secondary 53C25, 53E20 ## 1. Introduction An $n$-dimensional Riemannian manifold $(M^{n},g)$ is called a gradient Ricci soliton if there exists a smooth function $f$ on $M$ such that the Ricci tensor satisfies the following equation (1.1) $Ric+Hess(f)=\rho g$ for some constant $\rho$, where $Ric$ is the Ricci tensor of $g$ and $Hess(f)$ denotes the Hessian of the potential function $f$. The Ricci soliton is said to be shrinking, steady, or expanding according as $\rho$ is positive, zero, or negative. When the potential function $f$ is constant, the gradient Ricci soliton is simply an Einstein manifold and is said to be trivial.
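As a concrete instance of (1.1), one may check the Gaussian soliton on Euclidean space, a standard example which also appears as a factor in the product solitons discussed below:

```latex
% Gaussian soliton: flat Euclidean space with a quadratic potential.
(M,g)=\bigl(\mathbb{R}^{n},\,\delta\bigr),\qquad f(x)=\frac{\rho}{2}|x|^{2}.
% Since the metric is flat, Ric\equiv 0, while
% Hess(f)_{ij}=\partial_{i}\partial_{j}f=\rho\,\delta_{ij}, so
Ric+Hess(f)=\rho\,g.
% Thus (1.1) holds for every \rho: shrinking for \rho>0, expanding for \rho<0.
```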
Ricci solitons generate self-similar solutions of the Ricci flow, and they play a fundamental role in the formation of singularities of the flow (see [5] for a nice overview). Hence the classification of gradient Ricci solitons has been a very interesting problem. In the shrinking case, classification results for gradient Ricci solitons have been obtained by many authors under various curvature conditions on the Weyl tensor, e.g., [30, 36, 24, 34, 35, 10, 39, 14, 42, 25, 33, 9, 20, 41] etc. In particular, Fernández-López and García-Río [25] together with Munteanu and Sesum [33] proved that any $n$-dimensional complete gradient shrinker with harmonic Weyl tensor is rigid, i.e., it is isometric to a (finite) quotient of $N\times\mathbb{R}^{k}$, the product soliton of an Einstein manifold $N$ of positive scalar curvature with the Gaussian soliton $\mathbb{R}^{k}$. On the other hand, it is well-known that compact gradient steady solitons are necessarily Ricci flat. In dimension $n=2$, the only complete noncompact gradient steady Ricci soliton with positive curvature is Hamilton’s cigar soliton $\Sigma^{2}$, see Hamilton [27]. In dimension three, known examples are given by $\mathbb{R}^{3}$, $\Sigma^{2}\times\mathbb{R}$, and the rotationally symmetric Bryant soliton [4]. In [3], Brendle showed that the Bryant soliton is the only complete noncompact, nonflat, $\kappa$-noncollapsed, gradient steady Ricci soliton, proving a conjecture by Perelman [36]. For $n\geq 4$, such a uniqueness result is not expected to hold, and it is desirable to find geometrically interesting conditions under which the uniqueness would hold. Indeed, in the Kähler case, Cao [6] constructed a complete gradient steady Kähler-Ricci soliton on $\mathbb{C}^{m}$, for $m\geq 2$, with positive sectional curvature and $U(m)$ symmetry. 
In [8], for $n\geq 3$, Cao-Chen showed that a complete noncompact $n$-dimensional locally conformally flat gradient steady Ricci soliton is either flat or isometric to the Bryant soliton up to scaling; see also [15] for an independent proof for $n\geq 4$. Moreover, an important covariant $3$-tensor $D$ for gradient Ricci solitons was introduced in [8, 9] (see also Section 2 for the definition of $D$-tensor). Classification results have been obtained in [7] for Bach flat steady solitons in dimension $n\geq 4$ under some conditions. In particular, it follows that Bach flatness implies local conformal flatness under positive Ricci curvature assumption. However, not much was known about the rigidity of complete gradient steady Ricci solitons with harmonic curvature. In fact, it is quite natural to ask the following: Main Question. _Is it true that any $n$-dimensional $(n\geq 4)$ steady gradient Ricci soliton $(M,g,f)$ with harmonic Weyl curvature is either Ricci flat or isometric to the Bryant soliton up to scaling?_ Recently, Kim [31] has provided a positive answer to the above question for $n=4$. In fact, Kim produced a very nice local description of such Ricci soliton metrics and their potential functions. His method of proof was motivated by the work of Cao-Chen [8, 9], and also based on Derdziński’s study on Codazzi tensors [23] (the harmonicity of the Weyl tensor is equivalent to the Schouten tensor being Codazzi). Combining with the gradient Ricci soliton condition, Kim managed to analyze in detail the situation when the Ricci tensor has two and three distinct eigenvalues. However, difficulties arise in the higher dimensional case. Indeed, as the dimension increases, so do the numbers and multiplicities of distinct Ricci-eigenvalues and the situation becomes more subtle. In this paper, motivated by Kim’s work [31], we study $n$-dimensional ($n\geq 5$) complete gradient Ricci solitons with harmonic Weyl curvature. 
Our first main result is the following general classification result. ###### Theorem 1.1. Let $(M^{n},g,f)$, $n\geq 5$, be a complete $n$-dimensional gradient Ricci soliton with harmonic Weyl curvature. Then it is one of the following types: (i) $(M^{n},g)$ is isometric to a quotient of $\mathbb{R}^{r}\times N^{n-r}$ ($2\leq r\leq n-2$), where $N^{n-r}$ is an $(n-r)$-dimensional Einstein manifold with the Einstein constant $\rho\neq 0$. Also the potential function is given by $f=\frac{\rho}{2}|x|^{2}$ modulo a constant on the Euclidean factor. (ii) $(M^{n},g)$ has the vanishing covariant $3$-tensor $D$ of Cao-Chen [8, 9]. Note that, as mentioned before, the shrinking case of Theorem 1.1 was already known [25, 33]. In addition, Theorem 1.1 (ii) includes the case when $(M^{n},g)$ is Einstein: if $f$ is a constant function then the vanishing of $D$-tensor follows from its definition, and the case of nonconstant $f$ follows from Theorem 6.1 or from, e.g., the work of Cheeger-Colding [17] on warped products and Hessians. Meanwhile, Cao-Chen [9] showed that any $n$-dimensional ($n\geq 4$) gradient Ricci soliton with vanishing $D$-tensor has harmonic Weyl curvature. Therefore, together with Theorem 1.1, an $n$-dimensional ($n\geq 5$) gradient steady Ricci soliton has harmonic Weyl curvature if and only if the $D$-tensor vanishes. In particular, as a consequence of Theorem 1.1 and the very recent work of Cao-Yu [13], we have the following positive answer to the Main Question for all $n\geq 5$. ###### Theorem 1.2. Let $(M^{n},g,f)$, $n\geq 5$, be a complete noncompact gradient steady Ricci soliton with harmonic Weyl curvature. Then it is either Ricci flat or isometric to the Bryant soliton up to scaling. We remark that for the shrinking soliton case, Theorem 1.1 together with Proposition 3.2 of Cao-Yu [13] provides a different proof (which is pointwise) of the rigidity result of [25, 33]. 
On the other hand, the expanding solitons are less rigid, and various works have been done recently; see, e.g., [38, 19, 21] and the references therein. As another consequence of Theorem 1.1 and the work of Cao-Yu [13] for complete $D$-flat expanding solitons, we have the following classification result. ###### Theorem 1.3. Let $(M,g,f)$, $n\geq 5$, be a complete expanding gradient Ricci soliton with harmonic Weyl curvature. Then it is one of the following types. (i) $g$ is an Einstein metric with a constant potential function $f$. (ii) $(M^{n},g)$ is isometric to a quotient of $\mathbb{R}^{r}\times{N}^{n-r}$, where $2\leq r\leq n-2$, $\mathbb{R}^{r}$ is the Gaussian expanding soliton, and ${N}^{n-r}$ is an $(n-r)$-dimensional Einstein manifold with the Einstein constant $\rho<0$. (iii) $(M^{n},g)$ is rotationally symmetric and a quotient of an expanding soliton of the form $\left([0,\,\infty),\,ds^{2}\right)\times\,_{h}\left(\mathbb{S}^{n-1},\bar{g}_{0}\right)$ where $\bar{g}_{0}$ is the round metric on $\mathbb{S}^{n-1}$. (iv) $(M^{n},g)$ is a quotient of some warped product expanding Ricci soliton of the form $\left(\mathbb{R},\,ds^{2}\right)\times\,_{h}\left(N^{n-1},\bar{g}\right)$ where $\left(N^{n-1},\bar{g}\right)$ is an Einstein manifold of negative scalar curvature. Inspired by the work of Cao-Chen [8, 9] and the work of Kim [31], we in fact derived a local description of Ricci soliton metrics and potential functions under the assumption of harmonic Weyl curvature. By using the method of exterior differential and moving frames, we first obtain the integrability condition of harmonicity and give a local structure of the soliton metric, which is necessarily a multiply warped product. Then we divide the discussion into three cases, according to the numbers and multiplicities of distinct Ricci-eigenvalues. As a result, we obtain the following local classification. ###### Theorem 1.4. 
Let $(M^{n},g,f)$, $(n\geq 5)$, be an $n$-dimensional (not necessarily complete) connected gradient Ricci soliton with harmonic Weyl curvature. Then it is one of the following four types. (i) $(M^{n},g)$ is Einstein, and $f$ is a constant function. (ii) For each point $p\in M$, there exists a neighborhood $U$ of $p$ such that $(U,g)$ is isometric to a domain in the Riemannian product $\mathbb{R}^{r}\times N^{n-r}$ with $g=ds^{2}+s^{2}d\mathbb{S}^{2}_{r-1}+\tilde{g}$, where $2\leq r\leq n-2$ and $\left(N^{n-r},\tilde{g}\right)$ is Einstein with the Einstein constant $\rho\neq 0$. The potential function is given by $f=\frac{\rho}{2}|x|^{2}$ modulo a constant on the Euclidean factor. (iii) For each point $p\in M$, there exists a neighborhood $U$ of $p$ such that $(U,g)$ is isometric to a domain in $\mathbb{R^{+}}\times\mathbb{R}\times N^{n-2}$ with $g=ds^{2}+s^{\frac{2(n-3)}{n-1}}dt^{2}+s^{\frac{4}{n-1}}\tilde{g}$, where $\left(N^{n-2},\tilde{g}\right)$ is Ricci flat. Also, $n\neq 5$, $\rho=0$ and $f=\frac{2(n-3)}{n-1}\log(s)$ modulo a constant. (iv) For each point $p\in M$, there exists a neighborhood $U$ of $p$ such that $(U,g)$ is isometric to a domain in $\mathbb{R}\times N^{n-1}$ with $g=ds^{2}+h^{2}(s)\tilde{g},$ where $\tilde{g}$ is an Einstein metric on some $(n-1)$-manifold $N^{n-1}$. Moreover, the covariant $3$-tensor $D$ of Cao-Chen [8, 9] vanishes. We point out that the incomplete steady gradient soliton in Theorem 1.4 (iii) has negative scalar curvature, which is in contrast to the fact that complete steady gradient solitons should have nonnegative scalar curvature [18]. Thus, Theorem 1.1 follows immediately from Theorem 1.4. ###### Remark 1.5. The proof of Theorem 1.4 has been further extended to treat vacuum static spaces and CPE metrics with harmonic curvature [32, 11], as well as quasi- Einstein manifolds with harmonic Weyl curvature [12]. Next, we discuss some applications. 
In [16], the authors obtained some results on gradient Ricci solitons with certain vanishing conditions on the Weyl tensor. More precisely, assuming ${\rm div}^{4}(W)=0$ and under certain Ricci curvature assumptions, they showed that the Ricci soliton has harmonic Weyl curvature. Combining this fact with Theorem 1.2, Theorem 1.3 and the work of [31] for $n=4$, we have the following two corollaries. ###### Corollary 1.6. Let $(M^{n},g,f)$, $n\geq 4$, be a complete gradient steady Ricci soliton with positive Ricci curvature such that the scalar curvature attains its maximum at some point $p_{0}\in M$. If in addition $\operatorname{div}^{4}(W)=0$, then $M$ is either Ricci flat or isometric to the Bryant soliton. ###### Corollary 1.7. Let $(M^{n},g)$, $n\geq 4$, be a complete gradient expanding Ricci soliton with nonnegative Ricci curvature. If ${\rm div}^{4}(W)=0$, then $M$ is one of the following: (i) $g$ is an Einstein metric with $f$ a constant function. (ii) $(M^{n},g)$ is isometric to a quotient of $\mathbb{R}^{r}\times{N}^{n-r}$, where $2\leq r\leq n-2$, $\mathbb{R}^{r}$ is the Gaussian expanding soliton, and ${N}^{n-r}$ is an $(n-r)$-dimensional Einstein manifold with the Einstein constant $\rho<0$. (iii) $(M^{n},g)$ is rotationally symmetric and a quotient of an expanding soliton of the form $\left([0,\,\infty),\,ds^{2}\right)\times\,_{h}\left(\mathbb{S}^{n-1},\bar{g}_{0}\right)$ where $\bar{g}_{0}$ is the round metric on $\mathbb{S}^{n-1}$. (iv) $(M^{n},g)$ is a quotient of some warped product expanding Ricci soliton of the form $\left(\mathbb{R},\,ds^{2}\right)\times\,_{h}\left(N^{n-1},\bar{g}\right)$ where $\left(N^{n-1},\bar{g}\right)$ is an Einstein manifold of negative scalar curvature. Finally, as another application of Theorem 1.4, we will obtain the local classification of gradient Ricci soliton with harmonic curvature. 
As a consequence, it immediately follows that complete gradient Ricci solitons with harmonic curvature are rigid, which was first proved by Petersen and Wylie in [38]. ###### Corollary 1.8. Let $(M^{n},g,f)$, $n\geq 5$, be a (not necessarily complete) $n$-dimensional gradient Ricci soliton with harmonic curvature. Then it is locally one of the two types below. (i) $(M,g)$ is an Einstein manifold and $f$ is constant. (ii) For each point $p$, there exists a neighborhood $U$ of $p$, such that $(U,g)$ is isometric to a domain in $\mathbb{R}^{r}\times N^{n-r}$, where $\mathbb{R}^{r}$ has the Euclidean metric, $N^{n-r}$ is an $(n-r)$-dimensional Einstein manifold with the Einstein constant $\rho\neq 0$, $1\leq r\leq n$ and $r\neq n-1$. Also $f=\frac{\rho}{2}|x|^{2}$ modulo a constant on the Euclidean factor. This paper is organized as follows. In Section 2, we give some formulas and notations for Riemannian manifolds and Ricci solitons by using the method of moving frames. In Section 3, we derive the integrability conditions (ODEs) for a gradient Ricci soliton with harmonic Weyl tensor and show that, locally, the soliton metric is a multiply warped product. In Sections 4-6, we divide our discussion into three cases according to the numbers and multiplicities of distinct Ricci-eigenvalues $\lambda_{i}$, $i=1,2,\cdots,n$. Here, we denote $\lambda_{1}$ as the Ricci-eigenvalue with respect to the gradient vector $\nabla f$ of the potential function. Concretely, in Section 4, we study the case that there are at least three mutually distinct values among the eigenvalues $\lambda_{2},\cdots,\lambda_{n}$, but it turns out that this case cannot occur. In Section 5, we analyze the case that there are exactly two distinct values among the eigenvalues $\lambda_{2},\cdots,\lambda_{n}$. This case splits into two subcases according to whether one of the two distinct Ricci-eigenfunctions is of single multiplicity or not. Types (ii) and (iii) of Theorem 1.4 come from this part.
In Section 6, we treat the remaining case that all $\lambda_{2},~{}\lambda_{3},~{}\cdots,~{}\lambda_{n}$ are equal, for which the $D$-tensor must vanish. In the last section, we summarize and prove the stated theorems. ## 2. Preliminaries In this section, we first recall some formulae and notations for Riemannian manifolds by using the method of moving frames. Then we give some fundamental formulae of Ricci solitons. ### 2.1. Some notations for Riemannian manifolds. Let $M^{n}$ $(n\geq 3)$ be an $n$-dimensional Riemannian manifold, let $E_{1},\cdots,E_{n}$ be a local orthonormal frame field on $M^{n}$, and let $\omega_{1},\cdots,\omega_{n}$ be their dual 1-forms. In this paper we make the following conventions on the range of indices: $1\leq i,j,k,\cdots\leq n$ and agree that repeated indices are summed over the respective ranges. Then we can write the structure equations of $M^{n}$ as follows: (2.1) $d\omega_{i}=\omega_{j}\wedge\omega_{ji}\quad{\rm and}\quad\omega_{ij}+\omega_{ji}=0;$ (2.2) $-\frac{1}{2}R_{ijkl}\omega_{k}\wedge\omega_{l}=d\omega_{ij}-\omega_{ik}\wedge\omega_{kj}\quad{\rm and}\quad R_{ijkl}=-R_{jikl},$ where $d$ is the exterior differential operator on $M$, $\omega_{ij}$ is the Levi-Civita connection form and $R_{ijkl}$ is the Riemannian curvature tensor of $M$.
It is known that the Riemannian curvature tensor satisfies the following identities: (2.3) $R_{ijkl}=-R_{ijlk},\quad R_{ijkl}=R_{klij}\quad{\rm and}\quad R_{ijkl}+R_{iklj}+R_{iljk}=0.$ The Ricci tensor $R_{ij}$ and scalar curvature $R$ are defined respectively by (2.4) $R_{ij}:=\sum\limits_{k}R_{ikjk}\quad{\rm and}\quad R=\sum\limits_{i}R_{ii}.$ Let $f$ be a smooth function on $M^{n}$, we define the covariant derivatives $f_{i}$, $f_{i,j}$ and $f_{i,jk}$ as follows: (2.5) $f_{i}\omega_{i}:=df,\quad f_{i,j}\omega_{j}:=df_{i}+f_{j}\omega_{ji},$ and (2.6) $f_{i,jk}\omega_{k}:=df_{i,j}+f_{k,j}\omega_{ki}+f_{i,k}\omega_{kj}.$ We know that (2.7) $f_{i,j}=f_{j,i}\quad{\rm and}\quad f_{i,jk}-f_{i,kj}=f_{l}R_{lijk}.$ The gradient, Hessian and Laplacian of $f$ are defined by the following formulae: (2.8) $\nabla f:=f_{i}E_{i},\quad Hess(f):=f_{i,j}\omega_{i}\otimes\omega_{j}\quad{\rm and}\quad\Delta f:=\sum\limits_{i}f_{i,i}.$ The covariant derivatives of tensors $R_{ij}$ and $R_{ijkl}$ are defined by the following formulae: (2.9) $R_{ij,k}\omega_{k}:=dR_{ij}+R_{kj}\omega_{ki}+R_{ik}\omega_{kj}$ and (2.10) $R_{ijkl,m}\omega_{m}:=dR_{ijkl}+R_{mjkl}\omega_{mi}+R_{imkl}\omega_{mj}+R_{ijml}\omega_{mk}+R_{ijkm}\omega_{ml}.$ By exterior differentiation of (2.2), one can get the second Bianchi identity (2.11) $R_{ijkl,m}+R_{ijlm,k}+R_{ijmk,l}=0.$ From (2.4), (2.10) and (2.11), we have (2.12) $R_{ij,k}-R_{ik,j}=-\sum\limits_{l}R_{lijk,l},$ and so (2.13) $\sum\limits_{j}R_{ji,j}=\frac{1}{2}R_{i}.$ We define the Schouten tensor as $A=A_{ij}\omega_{i}\otimes\omega_{j},$ where (2.14) $A_{ij}:=R_{ij}-\frac{1}{2(n-1)}R\delta_{ij},$ then $A_{ij}=A_{ji}$. The tensor (2.15) $W_{ijkl}:=R_{ijkl}-\frac{1}{n-2}(A_{ik}\delta_{jl}+A_{jl}\delta_{ik}-A_{il}\delta_{jk}-A_{jk}\delta_{il})$ is called the Weyl conformal curvature tensor which does not change under the conformal transformation of the metric. Moreover, as it can be easily seen by the formula above, $W$ is totally trace-free. 
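For instance, the trace-free property can be verified by contracting (2.15) over the first and third indices, using $\sum_{i}R_{ijil}=R_{jl}$ and $\sum_{i}A_{ii}=\frac{n-2}{2(n-1)}R$:

```latex
\sum_{i}W_{ijil}
 = R_{jl}-\frac{1}{n-2}\Bigl(\sum_{i}A_{ii}\,\delta_{jl}+(n-2)A_{jl}\Bigr)
 = R_{jl}-\frac{R}{2(n-1)}\delta_{jl}-A_{jl}=0,
% where the last step uses the definition (2.14) of the Schouten tensor,
% A_{jl}=R_{jl}-\frac{R}{2(n-1)}\delta_{jl}.
```

The remaining traces vanish in the same way, by the symmetries (2.3) of the curvature tensor.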
In dimension three, $W$ is identically zero on every Riemannian manifold, whereas, when $n\geq 4$, the vanishing of the Weyl tensor is equivalent to the locally conformal flatness of $(M^{n},g)$. We also recall that in dimension $n=3$, $(M,g)$ is locally conformally flat if and only if the Cotton tensor $C$, defined as follows, vanishes: (2.16) $C_{ijk}:=A_{ij,k}-A_{ik,j}.$ We recall that, for $n\geq 4$, using the second Bianchi identity, the Cotton tensor can also be defined as one of the possible divergences of the Weyl tensor: (2.17) $-\frac{n-2}{n-3}\sum\limits_{l}W_{lijk,l}=C_{ijk}.$ On any $n$-dimensional manifold $(M,g)$ $(n\geq 4)$, a relevant role will be played in what follows by the Bach tensor, first introduced in general relativity by Bach [1] in the early 1920s. By definition, we have (2.18) $B_{ij}:=\frac{1}{n-3}W_{ikjl,kl}+\frac{1}{n-2}R_{kl}W_{ikjl}$ and, by equation (2.17), we have an equivalent expression of the Bach tensor: (2.19) $B_{ij}=\frac{1}{n-2}\left(C_{ijk,k}+R_{kl}W_{ikjl}\right).$ ### 2.2. Some basic facts for gradient Ricci solitons. Now let $(M^{n},g,f)$ be a gradient Ricci soliton; then equation (1.1) can be written as (2.20) $R_{ij}+f_{i,j}=\rho\delta_{ij}.$ We will recall some well-known facts of gradient Ricci solitons. ###### Lemma 2.1. (Hamilton [27]) Suppose that $(M^{n},g,f)$ is a gradient Ricci soliton satisfying (2.20); then the following formulae hold: (2.21) $\nabla R=2Ric(\nabla f,\cdot),$ (2.22) $R+\left|\nabla f\right|^{2}-2\rho f=C_{0}$ and (2.23) $\Delta R=\langle\nabla R,\nabla f\rangle+2\rho R-2|Ric|^{2},$ where $C_{0}$ is a constant and $|Ric|^{2}=\sum\limits_{i,j}R_{ij}^{2}.$ The covariant 3-tensor $D$, introduced by H.-D. Cao and Q. Chen in [8], turns out to be a fundamental tool in the study of the geometry of gradient Ricci solitons (and, more generally, of gradient Einstein-type manifolds).
In components it is defined as (2.24) $D_{ijk}=\frac{1}{n-2}(A_{ij}f_{k}-A_{ik}f_{j})+\frac{1}{(n-1)(n-2)}(\delta_{ij}E_{kl}-\delta_{ik}E_{jl})f_{l},$ where $E_{ij}=R_{ij}-\frac{R}{2}\delta_{ij}$ is the Einstein tensor. This 3-tensor $D_{ijk}$ is closely tied to the Cotton tensor and played a significant role in [8] and [9] in classifying locally conformally flat gradient steady solitons and Bach-flat shrinking Ricci solitons. ###### Lemma 2.2. (Cao-Chen [8, 9]) Let $(M^{n},g,f)$, $n\geq 3$, be a complete gradient Ricci soliton satisfying (2.20). Then $D_{ijk}$ is closely related to the Cotton tensor and the Weyl tensor by (2.25) $D_{ijk}=C_{ijk}+f_{l}W_{lijk}.$ Moreover, the Bach tensor $B_{ij}$ can be expressed in terms of $D_{ijk}$ and the Cotton tensor $C_{ijk}$ as (2.26) $B_{ij}=\frac{1}{n-2}\left(\sum_{k}D_{ijk,k}+\frac{n-3}{n-2}f_{k}C_{jik}\right).$ Finally, by using equations (2.7), (2.20), (2.14), (2.16) and (2.23), we immediately have the following lemma (e.g., see Kim [31]). ###### Lemma 2.3. Let $(M^{n},g,f)$ be a gradient Ricci soliton with harmonic Weyl tensor, i.e., $\delta W=\sum_{l}W_{lijk,l}=0$. Then the Cotton tensor $C$ vanishes and the Schouten tensor $A$ is Codazzi. Moreover, the following formula holds: (2.27) $f_{l}R_{lijk}=R_{ik,j}-R_{ij,k}=\frac{1}{2(n-1)}\left(R_{j}\delta_{ik}-R_{k}\delta_{ij}\right)=\frac{1}{n-1}f_{l}\left(R_{lj}\delta_{ik}-R_{lk}\delta_{ij}\right).$ ## 3. The basic local structure for $n$-dimensional gradient Ricci solitons with harmonic Weyl curvature In this section, for any gradient Ricci soliton with harmonic Weyl tensor, we first recall by the arguments of [9, 31] that $\frac{\nabla f}{|\nabla f|}$ is a Ricci-eigenvector field with eigenvalue $\lambda_{1}$. There is a local function $s$ with $\nabla s=\frac{\nabla f}{|\nabla f|}$, such that $\lambda_{1}$ and $R$ are functions of $s$ only.
Secondly, we show in Lemma 3.3 that the Ricci-eigenvalues $\lambda_{i}$, $i=1,2,\cdots,n$, locally depend only on the variable $s$. Finally, we derive the integrability conditions and show that the metric is locally a multiply warped product (see Theorem 3.5). First of all, we have the next lemma (see also Lemma 2.5 in [31]). ###### Lemma 3.1. In some neighborhood $U$ of each point in $\\{\nabla f\neq 0\\}$, we choose an orthonormal frame field $\\{E_{1}=\frac{\nabla f}{|\nabla f|},E_{2},\cdots,E_{n}\\}$ with dual frame field $\\{\omega_{1}=\frac{df}{|\nabla f|},\omega_{2},\cdots,\omega_{n}\\}$. The following properties hold. (i) $E_{1}=\frac{\nabla f}{|\nabla f|}$ is an eigenvector field of the Ricci tensor. (ii) The 1-form $\omega_{1}=\frac{df}{|\nabla f|}$ is closed. So the distribution $V={\rm Span}\\{E_{2},\cdots,E_{n}\\}$ is integrable by the Frobenius theorem. We denote by $L$ and $N$ the integral curve of the vector field $E_{1}$ and the integral submanifold of $V$, respectively. Then, locally, $M=L\times N$ holds, and there exist local coordinates $(s,x_{2},\cdots,x_{n})$ of $M$ such that $ds=\frac{df}{|\nabla f|}$, $E_{1}=\nabla s$, $V={\rm Span}\\{\frac{\partial}{\partial x_{2}},\cdots,\frac{\partial}{\partial x_{n}}\\}$ and $g=ds^{2}+\sum_{a}\omega_{a}^{2}$. (iii) $R$ and $R_{11}=Ric(E_{1},E_{1})$ can be considered as functions of the variable $s$ only, and we write the derivative in $s$ by a prime: $f^{{}^{\prime}}=\frac{df}{ds}$, etc. ###### Proof. Noting that $f_{1}=|\nabla f|\neq 0$, $f_{a}=0$ for $2\leq a\leq n$, (2.27) gives us $0=f_{1}R_{111a}=f_{l}R_{l11a}=\frac{1}{n-1}f_{l}\left(R_{l1}\delta_{1a}-R_{la}\delta_{11}\right)=-\frac{1}{n-1}f_{1}R_{1a}.$ Then $R_{1a}=0$, which implies $E_{1}=\frac{\nabla f}{|\nabla f|}$ is an eigenvector field of the Ricci curvature. We proved (i).
Making use of (2.21), we get (3.1) $R_{a}=2R_{aj}f_{j}=2R_{a1}f_{1}+2\sum_{b\geq 2}R_{ab}f_{b}=0,$ together with (2.22), which shows $\left(\left|\nabla f\right|^{2}\right)_{a}=-R_{a}+2\rho f_{a}=0.$ Therefore $d\omega_{1}=d\left(\frac{df}{|\nabla f|}\right)=-\frac{1}{2|\nabla f|^{3}}d\left(|\nabla f|^{2}\right)\wedge df=0$ and (ii) is proved. Since $R_{1}=2R_{1j}f_{j}=2R_{11}f_{1}$ implies that $R_{11}=\frac{1}{2|\nabla f|}R_{1},$ by combining with (3.1), one can immediately get (iii). ∎ Let $(M^{n},g,f)$ be a gradient Ricci soliton with harmonic Weyl curvature. The harmonic Weyl curvature condition is equivalent to the Schouten tensor being a Codazzi tensor. Derdziński [23] described the following: for a Codazzi tensor $A$ and a point $x$ in $M$, let $E_{A}(x)$ be the number of distinct eigenvalues of $A_{x}$, and set $M_{A}=\\{x\in M\ |E_{A}{\rm\ is\ constant\ in\ a\ neighborhood\ of\ }x\\}.$ Then $M_{A}$ is an open dense subset of $M$; in each connected component of $M_{A}$, the eigenvalues are well-defined and differentiable functions; eigenspaces of $A$ form mutually orthogonal differentiable distributions. On the other hand, as a gradient Ricci soliton, $(M,g,f)$ is real analytic in harmonic coordinates; see [29, 28]. Then if $f$ is not constant, $\\{\nabla f\neq 0\\}$ is open and dense in $M$. Hence for each point $p\in M_{A}\cap\\{\nabla f\neq 0\\}$, there exists a neighborhood $U$ of $p$, such that the number of distinct eigenvalues of the Ricci tensor is constant on $U$, and we assume that the multiplicities of the $m$ distinct Ricci eigenvalues are $r_{1},r_{2},\cdots,r_{m}$, respectively, except for the one which is the eigenvalue with respect to the eigenvector $\nabla f$, where $1+r_{1}+r_{2}+\cdots+r_{m}=n$.
Combining with Lemma 3.1, we can choose a local orthonormal frame field $\\{E_{1}=\frac{\nabla f}{|\nabla f|},E_{2},\cdots,E_{n}\\}$, with the dual $\\{\omega_{1}=\frac{df}{|\nabla f|},\omega_{2},\cdots,\omega_{n}\\}$ such that (3.2) $R_{ij}=\lambda_{i}\delta_{ij}.$ Without loss of generality, we assume that $\lambda_{2}=\cdots=\lambda_{r_{1}+1},\quad\lambda_{r_{1}+2}=\cdots=\lambda_{r_{1}+r_{2}+1},\quad\cdots\quad\lambda_{r_{1}+r_{2}+\cdots+r_{m-1}+2}=\cdots=\lambda_{n},$ and $\lambda_{2},\,\lambda_{r_{1}+2},\,\cdots,\,\lambda_{r_{1}+r_{2}+\cdots+r_{m-1}+2}$ are distinct. Next, our first goal is to prove that all Ricci-eigenfunctions $\lambda_{i}$ $(i=1,\cdots,n)$ depend only on the local variable $s$. ###### Lemma 3.2. Let $(M,g,f)$ be an $n$-dimensional gradient Ricci soliton with harmonic Weyl curvature. For the above local frame field $\\{E_{i}\\}$ in $M_{A}\cap\\{\nabla f\neq 0\\}$, we have that (3.3) $f^{\prime\prime}+\lambda_{1}=\rho,$ (3.4) $\omega_{1a}=\xi_{a}\omega_{a}$ and (3.5) $R_{aa,1}=\left(\lambda_{1}-\lambda_{a}\right)\xi_{a}+\frac{R_{1}}{2(n-1)},$ where (3.6) $\xi_{a}:=\frac{1}{|\nabla f|}(\rho-\lambda_{a}).$ ###### Proof. From (2.5) and (2.6), it follows that for $2\leq a\leq n$, $f_{1}=|\nabla f|=f^{\prime}$, $f_{a}=0$, $f_{1,1}=f^{\prime\prime}\quad{\rm and}\quad f^{\prime}\omega_{1a}=f_{a,j}\omega_{j}.$ Putting $f_{i,j}=\left(\rho-\lambda_{i}\right)\delta_{ij}$ into the above equations, one can immediately get (3.3) and (3.4). Covariant derivatives of $R_{ij}$ can be computed from (2.9): $R_{11,1}=\lambda_{1}^{\prime}\quad{\rm and}\quad R_{1a,a}=\left(\lambda_{1}-\lambda_{a}\right)\xi_{a}.$ Combining with the harmonic condition $R_{aa,1}=R_{1a,a}+\frac{R_{1}}{2(n-1)}$ yields (3.5), and we have completed the proof of this lemma. ∎ Based on the above lemmas, we present the key lemma, which was proved by Kim [31] for the four-dimensional case. Here we include a proof similar to Shin's (see also Lemma 3 in [40]). ###### Lemma 3.3.
Let $(M,g,f)$ be an $n$-dimensional gradient Ricci soliton with harmonic Weyl curvature. For the above local frame field $\\{E_{i}\\}$ in $M_{A}\cap\\{\nabla f\neq 0\\}$, the Ricci eigenfunctions $\lambda_{i}$ $(i=1,\cdots,n)$ depend only on the local variable $s$, and so do the functions $\xi_{a}$ for $2\leq a\leq n$. ###### Proof. Firstly, it follows from Lemma 3.1 that $R={\rm tr}(Ric)$ and $\lambda_{1}=R_{11}$ depend on $s$ only, and then we consider the Laplacian of the scalar curvature. Calculating the covariant derivatives of $R$ gives us $R_{1,1}=R^{\prime\prime}$ and $R_{a,b}=R^{\prime}\xi_{a}\delta_{ab}$. Thus $\Delta R=R_{1,1}+\sum\limits_{a}R_{a,a}=R^{\prime\prime}+R^{\prime}\frac{(n-1)\rho-R+\lambda_{1}}{f^{\prime}}$ depends on $s$ only. Then from (2.23), ${\rm tr}(Ric^{2})=|Ric|^{2}$ also depends on $s$ only. Motivated by this, we denote $(Ric^{k})_{ij}=R_{ii_{1}}R_{i_{1}i_{2}}\cdots R_{i_{k-1}j}$ with its trace ${\rm tr}(Ric^{k})=\sum_{i=1}^{n}(\lambda_{i})^{k}$, and then our goal is to show that the functions ${\rm tr}(Ric^{k})$ for $k=1,2,\cdots,n-1$ depend on $s$ only, which implies that $\lambda_{i}$ for $i=1,\cdots,n$ also depend only on $s$. Next, we apply mathematical induction to prove the desired results. Assume ${\rm tr}(Ric^{l})$ depends on $s$ only for $1\leq l\leq k$; we then show the same for ${\rm tr}(Ric^{k+1})$.
In fact, $\displaystyle\left(\sum_{i=1}^{n}(R_{ii}^{k})\right)_{1}=$ $\displaystyle\sum_{i=1}^{n}kR_{ii}^{k-1}(R_{ii})_{1}$ $\displaystyle=$ $\displaystyle k\left(R_{11}^{k-1}R_{11,1}+\sum_{a=2}^{n}R_{aa}^{k-1}R_{aa,1}\right).$ Putting (3.2), (3.5) and (3.6) into the above equation, we have $\displaystyle\left(\sum_{i=1}^{n}(R_{ii}^{k})\right)_{1}=$ $\displaystyle k\lambda^{k-1}_{1}\lambda_{1}^{\prime}+k\frac{1}{f^{\prime}}\left[(\sum_{i}\lambda^{k+1}_{i}-\lambda^{k+1}_{1})-(\lambda_{1}+\rho)(\sum_{i}\lambda^{k}_{i}-\lambda^{k}_{1})\right]$ $\displaystyle+k\left(\rho\frac{1}{f^{\prime}}\lambda_{1}+\frac{R^{\prime}}{2(n-1)}\right)\left(\sum_{i}\lambda^{k-1}_{i}-\lambda^{k-1}_{1}\right),$ By assumption, every term except for $\sum_{i}\lambda_{i}^{k+1}$ in the above equation depends only on $s$. Thus ${\rm tr}(Ric^{k+1})=\sum_{i=1}^{n}\lambda_{i}^{k+1}$ is also a function of $s$ only, and we have completed the proof of this lemma. ∎ We proceed to obtain the local structure of the metric for $n$-dimensional gradient Ricci solitons with harmonic Weyl curvature. First, we denote $[a]=\\{b|\lambda_{b}=\lambda_{a}~{}{\rm and}~{}b\neq 1\\}$ for $2\leq a\leq n$ and make the following conventions on the range of indices: $2\leq a,b,c\cdots\leq n\quad{\rm and}\quad 2\leq\alpha,\beta,\cdots\leq n$ where $[a]=[b]$, $[\alpha]=[\beta]$ and $[a]\neq[\alpha]$. ###### Lemma 3.4. Let $(M,g,f)$ be an $n$-dimensional gradient Ricci soliton with harmonic Weyl curvature. For the above local frame field $\\{E_{i}\\}$ in $M_{A}\cap\\{\nabla f\neq 0\\}$, we have that (3.7) $\omega_{a\alpha}=0,$ (3.8) $R_{1a1b}=-\left(\xi^{\prime}_{a}+\xi^{2}_{a}\right)\delta_{ab}$ and (3.9) $R_{a\alpha b\beta}=-\xi_{a}\xi_{\alpha}\delta_{ab}\delta_{\alpha\beta}.$ ###### Proof. We continue to compute the covariant derivatives of $R_{ij}$. 
For $a\neq b$, $R_{aa,1}=\lambda^{\prime}_{a},\quad R_{aa,b}=R_{aa,\alpha}=0\quad{\rm and}\quad R_{ab,i}=0;$ while for the different range of indices, $\left(\lambda_{a}-\lambda_{\alpha}\right)\omega_{a\alpha}=\sum_{k=1}^{n}R_{a\alpha,k}\omega_{k}=R_{a\alpha,1}\omega_{1}+R_{a\alpha,a}\omega_{a}+R_{a\alpha,\alpha}\omega_{\alpha}+\sum_{i\neq 1,a,\alpha}R_{a\alpha,i}\omega_{i}.$ Since $R_{a\alpha,1}=R_{a\alpha,a}=R_{a\alpha,\alpha}=0$, we get $\omega_{a\alpha}(E_{1})=\omega_{a\alpha}(E_{a})=\omega_{a\alpha}(E_{\alpha})=0$. Next we compute the curvature. From (2.2) and (3.4), $-\frac{1}{2}R_{1aij}\omega_{i}\wedge\omega_{j}=\left(\xi^{\prime}_{a}+\xi^{2}_{a}\right)\omega_{1}\wedge\omega_{a}+\sum_{\alpha\geq 2}\left(\xi_{a}-\xi_{\alpha}\right)\omega_{a\alpha}\wedge\omega_{\alpha},$ which shows (3.8) since $\omega_{a\alpha}(E_{1})=0$. On the other hand, by (2.27), $R_{1ijk}=0$ when $i,~{}j~{}{\rm and}~{}k$ are mutually different. Thus, $\omega_{a\alpha}(E_{i})=0$ since $\lambda_{a}\neq\lambda_{\alpha}$ implies $\xi_{a}\neq\xi_{\alpha}$, and then (3.7) holds. Therefore, the structure equation gives $-\frac{1}{2}R_{a\alpha ij}\omega_{i}\wedge\omega_{j}=\xi_{a}\xi_{\alpha}\omega_{a}\wedge\omega_{\alpha},$ which implies (3.9). We have completed the proof of this lemma. ∎ Finally, we are ready to derive the integrability conditions and prove the local structure of the metric as a multiply warped product. For the multiplicities of distinct Ricci eigenvalues, without loss of generality, we assume that $r_{1}=r_{2}=\cdots=r_{l}=1$ and $r_{l+1},~{}r_{l+2},~{}\cdots,~{}r_{m}\geq 2$. ###### Theorem 3.5. Let $(M^{n},g,f)$, $(n\geq 4)$, be an $n$-dimensional gradient Ricci soliton with harmonic Weyl curvature.
For each point $p\in M_{A}\cap\\{\nabla f\neq 0\\}$, there exists a neighborhood $U$ of $p$ such that $U=L\times\,_{h_{1}}L_{1}\times\cdots\times\,_{h_{l}}L_{l}\times\,_{h_{l+1}}N_{l+1}\times\cdots\times\,_{h_{m}}N_{m}$ is a multiply warped product furnished with the metric (3.10) $g=ds^{2}+h^{2}_{1}(s)dt_{1}^{2}+\cdots+h^{2}_{l}(s)dt_{l}^{2}+h^{2}_{l+1}(s)\tilde{g}_{l+1}+\cdots+h^{2}_{m}(s)\tilde{g}_{m},$ where $h_{j}(s)$ are smooth positive functions for $1\leq j\leq m$, ${\rm dim}~{}L_{\nu}=1$ for $1\leq\nu\leq l$, and $(N_{\mu},\tilde{g}_{\mu})$ is an $r_{\mu}$-dimensional Einstein manifold with the Einstein constant $(r_{\mu}-1)k_{\mu}$ for $l+1\leq\mu\leq m$. Moreover, let $\lambda_{i}$ $(i=1,\cdots,n)$ be the Ricci-eigenvalues, $\lambda_{1}$ being the Ricci-eigenvalue with respect to the gradient vector $\nabla f$, and let the functions $\xi_{a}$ be given by (3.6). Then the following integrability conditions hold: (3.11) $\xi^{\prime}_{a}+\xi^{2}_{a}=-\frac{R^{\prime}}{2(n-1)f^{\prime}},$ (3.12) $\lambda^{\prime}_{a}-\left(\lambda_{1}-\lambda_{a}\right)\xi_{a}=\frac{R^{\prime}}{2(n-1)},$ (3.13) $\lambda_{1}=-f^{\prime\prime}+\rho=-(n-1)\left(\xi^{\prime}_{a}+\xi^{2}_{a}\right)$ and (3.14) $\lambda_{a}=-f^{\prime}\xi_{a}+\rho=-\xi^{\prime}_{a}-\xi_{a}\sum^{n}_{i=2}\xi_{i}+(r-1)\frac{k}{h^{2}}.$ Here $2\leq a\leq n$ and $\xi_{a}=h^{\prime}/h$; in (3.14), when $a=2,\dots,l+1$, then $r=1$ and $h=h_{a-1}$; when $a\in\left[l+r_{l+1}+\cdots+r_{\mu-1}+2\right]$, then $r=r_{\mu}$, $h=h_{\mu}$ and $k=k_{\mu}$. ###### Proof. First, making use of (2.27) again, we see $R_{1a1a}=\frac{R^{\prime}}{2(n-1)f^{\prime}},$ which shows that all $R_{1a1a}$ are equal for $a=2,\cdots,n$, and (3.11) holds by (3.8). Then in combination with (3.3), we immediately get (3.13). Putting $R_{aa,1}=\lambda^{\prime}_{a}$ into (3.5) implies the harmonic condition (3.12).
Moreover, by equations (2.1), (3.4) and (3.7), $d\omega_{\alpha}=\sum_{\beta}\left(\xi_{\alpha}\omega_{1}\delta_{\alpha\beta}+\omega_{\alpha\beta}\right)\wedge\omega_{\beta}\equiv 0~{}({\rm mod}~{}\omega_{\beta}).$ By $d\omega_{1}=0$ and the Frobenius theorem, the distribution $V_{a}={\rm Span}\\{E_{b}:b\in[a]\\}$ is integrable. It is worth noting that $\omega_{a\alpha}=0$ yields more information than the mere fact that the Ricci-eigenspaces form mutually orthogonal differentiable distributions. Denoting by $N_{a}$ the integral submanifold of $V_{a}$, we consider the $r$-dimensional isometrically immersed submanifold $N_{a}$, with the metric $\bar{g}=\sum_{b\in[a]}\omega^{2}_{b}$, of the manifold $(M,g)$. When the multiplicity $r$ of the Ricci-eigenvalue $\lambda_{a}$ is not less than two, it follows from (3.8) and (3.9) that (3.15) $\displaystyle R_{ab}=$ $\displaystyle R_{a1b1}+\sum_{\alpha}R_{a\alpha b\alpha}+\sum_{c}R_{acbc}$ $\displaystyle=$ $\displaystyle\left[-\left(\xi^{\prime}_{a}+\xi^{2}_{a}\right)-\xi_{a}\sum_{\alpha}\xi_{\alpha}\right]\delta_{ab}+\sum_{c}R_{acbc}.$ Making use of equations (3.4) and (3.7) again, the Gauss equations imply (3.16) $\bar{R}_{abcd}={R}_{abcd}+\xi^{2}_{a}\left(\delta_{ac}\delta_{bd}-\delta_{ad}\delta_{bc}\right),$ where $\bar{R}_{abcd}$ is the curvature tensor of $\left(N_{a},\bar{g}\right)$. Actually, equation (3.7) means that the submanifold $(N_{a},\bar{g})$ of the Riemannian manifold $(N,~{}\sum_{i\neq 1}\omega^{2}_{i})$ is totally geodesic, where $N$ is the integral submanifold generated by the distribution $V={\rm Span}\\{E_{2},\cdots,E_{n}\\}$, see Lemma 3.1. Taking the trace in (3.16) and then plugging (3.15) into it, one can get $\displaystyle\bar{R}_{ac}=$ $\displaystyle\sum_{b}R_{abcb}+\left(r-1\right)\xi^{2}_{a}\delta_{ac}$ $\displaystyle=$ $\displaystyle\left[\lambda_{a}+\left(\xi^{\prime}_{a}+\xi^{2}_{a}\right)+\xi_{a}\sum_{\alpha\notin[a]}\xi_{\alpha}+(r-1)\xi^{2}_{a}\right]\delta_{ac},$ which depends on $s$ only.
Hence, the metric $\bar{g}$ is Einstein. Then we can assume that $\bar{g}=h^{2}(s)\tilde{g}$, where $h(s)$ is a positive function and $\tilde{g}$ is Einstein with the Einstein constant $(r-1)k$. So $\bar{R}_{ac}=\frac{(r-1)k}{h^{2}}\delta_{ac}$ and (3.17) $\lambda_{a}=-\xi^{\prime}_{a}-\xi_{a}\sum^{n}_{i=2}\xi_{i}+(r-1)\frac{k}{h^{2}}.$ Also, it follows from the structure equation that $\xi_{a}=h^{\prime}/h$. When the Ricci eigenvalue $\lambda_{a}$ has multiplicity one, by using equations (2.1), (3.4) and (3.7), we calculate the exterior differential of the 1-form $\omega_{a}$: $d\omega_{a}=\omega_{i}\wedge\omega_{ia}=\xi_{a}\omega_{1}\wedge\omega_{a}.$ Choosing a positive function $h(s)$ such that $\xi_{a}=h^{\prime}/h$, we see that the 1-form $\frac{1}{h}\omega_{a}$ is closed. Then there is a local function $t$ satisfying $dt=\frac{1}{h(s)}\omega_{a}$ and (3.18) $\lambda_{a}=R_{1a1a}+\sum_{\alpha\notin[a]}R_{a\alpha a\alpha}=-\xi^{\prime}_{a}-\xi_{a}\sum^{n}_{i=2}\xi_{i}.$ Consequently, we obtain that the metric is a multiply warped product as seen in (3.10). Furthermore, it follows from equations (2.20), (3.17) and (3.18) that (3.14) holds. In fact, comparing with (3.17), the term $(r-1)\frac{k}{h^{2}}$ does not appear in (3.18). However, in the case $r=1$ this term vanishes for any constant $k$, so the Ricci-eigenfunction $\lambda_{a}$ also satisfies a formula of the form (3.17). Hence, for convenience, we sometimes include this term. We have completed the proof of this theorem. ∎ ## 4\. The local structure of the case with more than two distinct Ricci-eigenfunctions In this section, and Sections 5-6, we will give the local classification of $n$-dimensional gradient Ricci solitons with harmonic Weyl curvature, according to the number and multiplicities of the distinct Ricci-eigenvalues, written as $\lambda_{a}$, $a=2,\cdots,n$.
To avoid repetition, unless stated otherwise, the Ricci-eigenvalues mentioned in the following discussion do not include $\lambda_{1}$, which is the eigenvalue with respect to the gradient vector of the potential function. In this section, we shall study the case when $\lambda_{2},~{}\lambda_{3},~{}\cdots,~{}\lambda_{n}$ take at least three mutually distinct values, but it turns out that this case cannot occur. First, we recall that as a gradient Ricci soliton, $(M,g,f)$ is real analytic in harmonic coordinates [29], i.e., $g$ and $f$ are real analytic in harmonic coordinates. To exploit the real analyticity, we shall use the following simple facts: (i) If an analytic function $P$ is not constant, then $\\{\nabla P\neq 0\\}$ is open and dense in $M$. (ii) If $P\cdot Q$ vanishes identically on an open connected set $U$ for two real analytic functions $P$ and $Q$, then either $P$ vanishes identically on $U$ or $Q$ vanishes identically on $U$. Proceeding, we analyze the integrability conditions in Theorem 3.5.
Assume that $\lambda_{a}$ and $\lambda_{\alpha}$ are mutually different, with multiplicities $r_{1}$ and $r_{2}$, i.e., $\lambda_{a}:=\lambda_{2}=\cdots=\lambda_{r_{1}+1}\quad{\rm and}\quad\lambda_{\alpha}:=\lambda_{r_{1}+2}=\cdots=\lambda_{r_{1}+r_{2}+1}.$ Here we make the following conventions on the range of indices: $2\leq a,b,\cdots\leq({r_{1}+1});\quad({r_{1}+2})\leq\alpha,\beta,\cdots\leq({r_{1}+r_{2}+1}).$ Denote $\xi_{a}:=X$ and $\xi_{\alpha}:=Y$; from Section 3, they satisfy the following integrability conditions: (4.1) $X^{\prime}+X^{2}=Y^{\prime}+Y^{2}=\xi^{\prime}_{i}+\xi^{2}_{i},$ (4.2) $\lambda_{1}=-f^{\prime\prime}+\rho=-(n-1)\left(X^{\prime}+X^{2}\right),$ (4.3) $\lambda_{a}=-f^{\prime}X+\rho=-(X^{\prime}+X^{2})+X^{2}+(r_{1}-1)\frac{k_{1}}{h^{2}_{1}}-X\sum^{n}_{i=2}\xi_{i},$ (4.4) $\lambda_{\alpha}=-f^{\prime}Y+\rho=-(Y^{\prime}+Y^{2})+Y^{2}+(r_{2}-1)\frac{k_{2}}{h^{2}_{2}}-Y\sum^{n}_{i=2}\xi_{i}$ and (4.5) $\lambda^{\prime}_{a}-\left(\lambda_{1}-\lambda_{a}\right)X=\lambda^{\prime}_{\alpha}-\left(\lambda_{1}-\lambda_{\alpha}\right)Y.$ By using the above basic facts, it is easy to get the following equations: ###### Lemma 4.1. Let $(M^{n},g,f)$, $(n\geq 4)$, be an $n$-dimensional gradient Ricci soliton with harmonic Weyl curvature. In some neighborhood $U$ of $p\in M_{A}\cap\\{\nabla f\neq 0\\}$, suppose that $\lambda_{a}$ and $\lambda_{\alpha}$ are mutually different Ricci eigenvalues with multiplicities $r_{1}$ and $r_{2}$.
The following identities hold: (4.6) $(r_{1}-1)\frac{k_{1}}{h^{2}_{1}}-(r_{2}-1)\frac{k_{2}}{h^{2}_{2}}=(X-Y)\left[\sum^{n}_{i=2}\xi_{i}-(X+Y)-f^{\prime}\right],$ (4.7) $(r_{1}-1)\frac{k_{1}}{h^{2}_{1}}+(r_{2}-1)\frac{k_{2}}{h^{2}_{2}}=2(X^{\prime}+X^{2})+\rho+\sum^{n}_{i=2}\xi^{2}_{i}-(X^{2}+Y^{2}),$ (4.8) $\displaystyle(r_{1}-1)\frac{k_{1}}{h^{2}_{1}}X-(r_{2}-1)\frac{k_{2}}{h^{2}_{2}}Y$ $\displaystyle=$ $\displaystyle(X-Y)\left\\{(X^{\prime}+X^{2})+\rho+(X+Y)\left[\sum^{n}_{i=2}\xi_{i}-(X+Y)-f^{\prime}\right]+XY\right\\},$ (4.9) $-(r_{1}-1)\frac{k_{1}}{h^{2}_{1}}Y+(r_{2}-1)\frac{k_{2}}{h^{2}_{2}}X=(X-Y)[(X^{\prime}+X^{2})+\rho+XY],$ (4.10) $\displaystyle(r_{1}-1)\frac{k_{1}}{h^{2}_{1}}X-(r_{2}-1)\frac{k_{2}}{h^{2}_{2}}Y$ $\displaystyle=$ $\displaystyle(X-Y)\left[(X^{\prime}+X^{2})+\sum^{n}_{i=2}\xi^{2}_{i}-(X^{2}+Y^{2})-XY\right],$ (4.11) $\displaystyle\left[(r_{1}-1)\frac{k_{1}}{h^{2}_{1}}-(r_{2}-1)\frac{k_{2}}{h^{2}_{2}}\right](X+Y)$ $\displaystyle=$ $\displaystyle(X-Y)\left[\sum^{n}_{i=2}\xi^{2}_{i}-(X^{2}+Y^{2})-2XY-\rho\right]$ and (4.12) $\sum^{n}_{i=2}\xi^{2}_{i}-\rho=(X+Y)\left(\sum^{n}_{i=2}\xi_{i}-f^{\prime}\right).$ ###### Proof. First, subtracting (4.3) from (4.4) gives us (4.13) $\displaystyle\lambda_{\alpha}-\lambda_{a}=$ $\displaystyle f^{\prime}(X-Y)$ $\displaystyle=$ $\displaystyle(X-Y)\left[\sum^{n}_{i=2}\xi_{i}-(X+Y)\right]-\left[(r_{1}-1)\frac{k_{1}}{h^{2}_{1}}-(r_{2}-1)\frac{k_{2}}{h^{2}_{2}}\right],$ which implies (4.6). 
Noting that $h^{\prime}_{1}/h_{1}=X$ and $h^{\prime}_{2}/h_{2}=Y$, differentiating (4.13) leads to (4.14) $\displaystyle(\lambda_{\alpha}-\lambda_{a})^{\prime}=\left[f^{\prime}(X-Y)\right]^{\prime}$ $\displaystyle=$ $\displaystyle(X-Y)\Bigg{\\{}(n-3)(X^{\prime}+X^{2})-(X+Y)\left[\sum^{n}_{i=2}\xi_{i}-(X+Y)\right]$ $\displaystyle-\left[\sum^{n}_{i=2}\xi^{2}_{i}-(X^{2}+Y^{2})\right]\Bigg{\\}}+2\left[(r_{1}-1)\frac{k_{1}}{h^{2}_{1}}X-(r_{2}-1)\frac{k_{2}}{h^{2}_{2}}Y\right].$ On the other hand, applying (4.1), (4.2) and (4.13), we see (4.15) $\displaystyle\left[f^{\prime}(X-Y)\right]^{\prime}=f^{\prime\prime}(X-Y)-f^{\prime}(X-Y)(X+Y)$ $\displaystyle=$ $\displaystyle(X-Y)\left\\{[(n-1)(X^{\prime}+X^{2})+\rho]+(X+Y)\left[\sum^{n}_{i=2}\xi_{i}-(X+Y)\right]\right\\}$ $\displaystyle+(X+Y)\left[(r_{1}-1)\frac{k_{1}}{h^{2}_{1}}-(r_{2}-1)\frac{k_{2}}{h^{2}_{2}}\right].$ Comparing equations (4.14) and (4.15) gives (4.7), where the fact $X\neq Y$ was used. Next, we consider the harmonic condition. It follows from (4.2), (4.3) and (4.4) that (4.16) $\displaystyle(\lambda_{1}-\lambda_{\alpha})Y-(\lambda_{1}-\lambda_{a})X$ $\displaystyle=$ $\displaystyle(X-Y)\left\\{(n-2)(X^{\prime}+X^{2})-(X+Y)\left[\sum^{n}_{i=2}\xi_{i}-(X+Y)\right]-XY\right\\}$ $\displaystyle+\left[(r_{1}-1)\frac{2k_{1}}{h^{2}_{1}}X-(r_{2}-1)\frac{2k_{2}}{h^{2}_{2}}Y\right].$ Since $\lambda^{\prime}_{\alpha}-\lambda^{\prime}_{a}=\left(\lambda_{1}-\lambda_{\alpha}\right)Y-\left(\lambda_{1}-\lambda_{a}\right)X$ from the harmonic condition (4.5), by comparing (4.15) and (4.16) we obtain (4.8). Putting $f^{\prime}(X-Y)=(X-Y)\left[\sum^{n}_{i=2}\xi_{i}-(X+Y)\right]-\left[(r_{1}-1)\frac{k_{1}}{h^{2}_{1}}-(r_{2}-1)\frac{k_{2}}{h^{2}_{2}}\right]$ into (4.8) implies (4.9). On the other hand, we can obtain (4.10) by combining (4.14) and (4.16). Meanwhile, subtracting (4.10) from (4.9) shows (4.11), while (4.12) is proved by subtracting (4.8) from (4.10), and we have completed the proof of this lemma.
∎ Proceeding, we will apply equations (4.1), (4.2) and (4.12) to obtain the following desired result. ###### Theorem 4.2. Let $(M^{n},g,f)$, $(n\geq 4)$, be an $n$-dimensional gradient Ricci soliton with harmonic Weyl curvature. Then in some neighborhood $U$ of $p\in M_{A}\cap\\{\nabla f\neq 0\\}$, the Ricci eigenvalues $\lambda_{2},~{}\lambda_{3},~{}\cdots,~{}\lambda_{n}$ cannot take more than two distinct values. ###### Proof. If not, then $\lambda_{2},~{}\lambda_{3},~{}\cdots,~{}\lambda_{n}$ take at least three mutually distinct values; denote three of them by $\lambda_{a}$, $\lambda_{\alpha}$ and $\lambda_{p}$, with multiplicities $r_{1}$, $r_{2}$ and $r_{3}$, respectively. For convenience, we also denote $\xi_{a}:=X$, $\xi_{\alpha}:=Y$ and $\xi_{p}:=Z$. By (4.12), with the assumption of Lemma 4.1, we see that $\sum^{n}_{i=2}\xi^{2}_{i}-\rho=(X+Y)\left(\sum^{n}_{i=2}\xi_{i}-f^{\prime}\right)=(X+Z)\left(\sum^{n}_{i=2}\xi_{i}-f^{\prime}\right)=(Y+Z)\left(\sum^{n}_{i=2}\xi_{i}-f^{\prime}\right).$ Therefore, it follows that (4.17) $\sum^{n}_{i=2}\xi_{i}=f^{\prime}$ since $X$, $Y$ and $Z$ are distinct, and then we have (4.18) $\sum^{n}_{i=2}\xi^{2}_{i}=\rho.$ On the other hand, by (4.1), (4.2) and differentiating (4.17), we obtain $-\sum^{n}_{i=2}\xi^{2}_{i}=\rho,$ which, combined with (4.18), yields $\sum^{n}_{i=2}\xi^{2}_{i}=0$. Then every $\xi_{i}$ vanishes, a contradiction since $X,~{}Y,~{}Z$ are distinct. We have completed the proof of this theorem. ∎ ## 5\. The local structure of the case with two distinct Ricci-eigenfunctions In this section we begin to study the case when there are exactly two distinct values among the eigenvalues $\lambda_{2},\cdots,\lambda_{n}$. Types (ii) and (iii) of Theorem 1.4 come from this section. First of all, we need the following lemmas to prepare for the local structure of the case. ###### Lemma 5.1. Let $(M^{n},g,f)$, $(n\geq 4)$, be an $n$-dimensional gradient Ricci soliton with harmonic Weyl curvature.
Assume there are exactly two distinct values in the Ricci eigenvalues $\lambda_{2},\cdots,\lambda_{n}$, denoted by $\lambda_{a}$ and $\lambda_{\alpha}$ of multiplicities $r_{1}$ and $r_{2}:=n-r_{1}-1$ in some neighborhood $U$ of $p\in M_{A}\cap\\{\nabla f\neq 0\\}$. Then the functions $X$ and $Y$ satisfy the following equations: (5.1) $(n-1)XY+\rho=f^{\prime}(X+Y),$ (5.2) $X^{\prime}+X^{2}+XY=0$ and (5.3) $XY[(r_{1}-1)X^{2}+(r_{2}-1)Y^{2}-2XY-\rho]=0.$ ###### Proof. In this case, equations (4.6)-(4.12) in Lemma 4.1 become the following basic identities: (5.4) $(r_{1}-1)\frac{k_{1}}{h^{2}_{1}}-(r_{2}-1)\frac{k_{2}}{h^{2}_{2}}=(X-Y)[(r_{1}-1)X+(r_{2}-1)Y-f^{\prime}],$ (5.5) $(r_{1}-1)\frac{k_{1}}{h^{2}_{1}}+(r_{2}-1)\frac{k_{2}}{h^{2}_{2}}=2(X^{\prime}+X^{2})+\rho+[(r_{1}-1)X^{2}+(r_{2}-1)Y^{2}],$ (5.6) $\displaystyle(r_{1}-1)\frac{k_{1}}{h^{2}_{1}}X-(r_{2}-1)\frac{k_{2}}{h^{2}_{2}}Y$ $\displaystyle=$ $\displaystyle(X-Y)\\{(X^{\prime}+X^{2})+\rho+(X+Y)[(r_{1}-1)X+(r_{2}-1)Y-f^{\prime}]+XY\\},$ (5.7) $-(r_{1}-1)\frac{k_{1}}{h^{2}_{1}}Y+(r_{2}-1)\frac{k_{2}}{h^{2}_{2}}X=(X-Y)[(X^{\prime}+X^{2})+\rho+XY],$ (5.8) $\displaystyle(r_{1}-1)\frac{k_{1}}{h^{2}_{1}}X-(r_{2}-1)\frac{k_{2}}{h^{2}_{2}}Y$ $\displaystyle=$ $\displaystyle(X-Y)\left[(X^{\prime}+X^{2})+(r_{1}-1)X^{2}+(r_{2}-1)Y^{2}-XY\right],$ (5.9) $\displaystyle\left[(r_{1}-1)\frac{k_{1}}{h^{2}_{1}}-(r_{2}-1)\frac{k_{2}}{h^{2}_{2}}\right](X+Y)$ $\displaystyle=$ $\displaystyle(X-Y)\left[(r_{1}-1)X^{2}+(r_{2}-1)Y^{2}-2XY-\rho\right]$ and (5.10) $r_{1}X^{2}+r_{2}Y^{2}-\rho=(X+Y)\left(r_{1}X+r_{2}Y-f^{\prime}\right).$ We can immediately simplify (5.10) to get (5.1). By (4.1), (4.2) and differentiating (5.1), we obtain (5.2). 
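The differentiation step leading from (5.1) to (5.2) can be checked symbolically. The following sketch (not part of the original argument; written in Python with sympy, and the names $X$, $Y$, $c$, ${\tt fp}$ are ours, with $c:=X^{\prime}+X^{2}$) differentiates (5.1) under the substitutions encoded by (4.1) and (4.2), eliminates $f^{\prime}$ via (5.1), and recovers $c=-XY$, i.e., $X^{\prime}+X^{2}+XY=0$; the solver divides by $(n-1)XY+\rho$, whose nonvanishing is exactly (5.13) below.

```python
import sympy as sp

s, rho, n, c = sp.symbols('s rho n c')   # c stands for the common value X' + X^2 = Y' + Y^2
X = sp.Function('X')(s)
Y = sp.Function('Y')(s)
fp = sp.Function('fp')(s)                # fp plays the role of f'

# (5.1):  (n-1) X Y + rho - f'(X + Y) = 0
E = (n - 1)*X*Y + rho - fp*(X + Y)

# encode (4.1) and (4.2): X' = c - X^2, Y' = c - Y^2, f'' = rho + (n-1) c
rules = {sp.Derivative(X, s): c - X**2,
         sp.Derivative(Y, s): c - Y**2,
         sp.Derivative(fp, s): rho + (n - 1)*c}

dE = sp.diff(E, s).subs(rules)
dE = dE.subs(fp, ((n - 1)*X*Y + rho)/(X + Y))   # eliminate f' via (5.1)

sols = sp.solve(sp.simplify(dE), c)
print(sols)   # the forced value of c = X' + X^2
```

The computation confirms that, generically, differentiating (5.1) forces $X^{\prime}+X^{2}=-XY$, which is (5.2).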
Next, by (4.1), (5.2) and differentiating (5.5), we see (5.11) $\displaystyle(r_{1}-1)\frac{k_{1}}{h^{2}_{1}}X+(r_{2}-1)\frac{k_{2}}{h^{2}_{2}}Y$ $\displaystyle=$ $\displaystyle-2XY(X+Y)+[(r_{1}-1)X^{2}(X+Y)+(r_{2}-1)Y^{2}(X+Y)]$ $\displaystyle=$ $\displaystyle(X+Y)[(r_{1}-1)X^{2}+(r_{2}-1)Y^{2}-2XY].$ Equations (5.9) and (5.5) yield that $(r_{1}-1)\frac{k_{1}}{h^{2}_{1}}(X+Y)=[(r_{1}-1)X^{2}+(r_{2}-1)Y^{2}-2XY]X+\rho Y$ and $(r_{2}-1)\frac{k_{2}}{h^{2}_{2}}(X+Y)=[(r_{1}-1)X^{2}+(r_{2}-1)Y^{2}-2XY]Y+\rho X.$ Multiplying both sides of (5.11) by $(X+Y)$ and then putting the above two equations into it, we obtain that $2XY[(r_{1}-1)X^{2}+(r_{2}-1)Y^{2}-2XY-\rho]=0.$ Therefore, the proof of this lemma is completed. ∎ ###### Lemma 5.2. Let $(M^{n},g,f)$, $(n\geq 4)$, be an $n$-dimensional gradient Ricci soliton with harmonic Weyl curvature which satisfies the hypothesis of Lemma 5.1. Then the functions $X$ and $Y$ also satisfy the following inequalities: (5.12) $X+Y\neq 0$ and (5.13) $(n-1)XY+\rho\neq 0.$ ###### Proof. First, if $X+Y=0$, then combining with $X^{\prime}+X^{2}=Y^{\prime}+Y^{2}$ gives $X^{\prime}=Y^{\prime}=0$. Then $X$ and $Y$ are nonzero constants since they are distinct. Plugging them into (5.1) gives $(n-1)X^{2}-\rho=0.$ From (4.2), we see $f^{\prime\prime}=2\rho$. It follows from (4.3) that $-f^{\prime}X+\rho=-(r_{1}-r_{2})X^{2}+(r_{1}-1)\frac{k_{1}}{h^{2}_{1}}$ and then differentiating it, one can easily get $f^{\prime\prime}=2(r_{1}-1)\frac{k_{1}}{h^{2}_{1}}$. Therefore $(r_{1}-1)\frac{k_{1}}{h^{2}_{1}}=\rho.$ Similarly, it holds that $(r_{2}-1)\frac{k_{2}}{h^{2}_{2}}=\rho$. Putting them into (5.4), we see that $f^{\prime}=(r_{1}-r_{2})X$ is constant, which implies that $f^{\prime\prime}=0$. Hence $(r_{1}+r_{2})X^{2}=\rho=0$ and then $X=0$, which is a contradiction. Thus (5.12) is proved, and (5.13) immediately follows from (5.1). The proof of this lemma is completed. ∎ ###### Lemma 5.3.
Let $(M^{n},g,f)$, $(n\geq 4)$, be an $n$-dimensional gradient Ricci soliton with harmonic Weyl curvature which satisfies the hypothesis of Lemma 5.1. Then we have the following properties: (i) The equality $(r_{1}-1)\frac{k_{1}}{h^{2}_{1}}=(r_{2}-1)\frac{k_{2}}{h^{2}_{2}}$ holds if and only if (5.14) $(r_{1}-1)X^{2}+(r_{2}-1)Y^{2}-2XY-\rho=0.$ (ii) If $(r_{1}-1)\frac{k_{1}}{h^{2}_{1}}=(r_{2}-1)\frac{k_{2}}{h^{2}_{2}}$, then the Ricci soliton is steady (i.e., $\rho=0$) and (5.15) $(r_{1}-1)X^{2}+(r_{2}-1)Y^{2}-2XY=0.$ ###### Proof. First, noting (5.12), (i) immediately holds by (5.9). Next, if $(r_{1}-1)\frac{k_{1}}{h^{2}_{1}}=(r_{2}-1)\frac{k_{2}}{h^{2}_{2}}$, from (5.4), we have $(r_{1}-1)X+(r_{2}-1)Y=f^{\prime}.$ Then differentiating this equation gives $(r_{1}-1)X^{2}+(r_{2}-1)Y^{2}-2XY+\rho=0,$ where (4.1), (4.2) and (5.2) were used. Combining this with equation (5.14), we immediately obtain (ii). The proof of this lemma is completed. ∎ Proceeding, we discuss two cases according to whether one of the two Ricci-eigenfunctions has multiplicity one. ### 5.1. The multiplicities of two Ricci-eigenfunctions $\lambda_{a}$ and $\lambda_{\alpha}$ are more than one In this subsection, we will study the case when the multiplicities of the two Ricci eigenfunctions $\lambda_{a}$ and $\lambda_{\alpha}$ are both more than one in some neighborhood $U$ of a point $p$ in $M_{A}\cap\\{\nabla f\neq 0\\}$. We have the following local classification result, which forms type (ii) of Theorem 1.4 for $3\leq r\leq n-2$. ###### Theorem 5.4. Let $(M^{n},g,f)$, $(n\geq 4)$, be an $n$-dimensional gradient Ricci soliton with harmonic Weyl curvature. Assume that in some neighborhood $U$ of $p$ in $M_{A}\cap\\{\nabla f\neq 0\\}$, there are exactly two distinct values among the Ricci eigenvalues $\lambda_{2},\cdots,\lambda_{n}$, denoted by $\lambda_{a}$ and $\lambda_{\alpha}$, where both of the multiplicities are bigger than one. Then $(U,g,f)$ cannot be steady (i.e., $\rho\neq 0$).
The two distinct eigenvalues are exactly 0 and $\rho$ with the multiplicities $(r+1)$ and $(n-r-1)$, respectively, where $2\leq r\leq n-3$. Also, $\nabla f$ is a null Ricci-eigenvector. Moreover, $(U,g)$ is locally isometric to a domain in $\mathbb{R}^{r+1}\times N^{n-r-1}$ with $g=ds^{2}+s^{2}d\mathbb{S}^{2}_{r}+\tilde{g}$, where $\left(N^{n-r-1},\tilde{g}\right)$ is an $(n-r-1)$-dimensional Einstein manifold with the Einstein constant $\rho\neq 0$. The potential function is given by $f=\frac{\rho}{2}s^{2}$ modulo a constant. ###### Proof. In this case, we denote the two distinct Ricci eigenvalues by $\lambda_{a}$ and $\lambda_{\alpha}$ with multiplicities $r_{1}=r$ and $r_{2}=(n-r-1)$, where $r_{i}\geq 2,\ i=1,\ 2$. Then it follows from Theorem 3.5 that $(U,g)$ is isometric to a domain in $\mathbb{R}\times N^{r}_{1}\times N^{n-r-1}_{2}$ with $g=ds^{2}+h^{2}_{1}(s)\tilde{g}_{1}+h^{2}_{2}(s)\tilde{g}_{2},$ where $(N_{i},\tilde{g}_{i})$ is an $r_{i}$-dimensional Einstein manifold with the Einstein constant $(r_{i}-1)k_{i}$ for $i=1,\ 2$. Based on what we have discussed, we first claim that one of the functions $X$ and $Y$ vanishes, where $X:=\xi_{a}$ and $Y:=\xi_{\alpha}$ are the functions corresponding to the Ricci eigenfunctions $\lambda_{a}$ and $\lambda_{\alpha}$, respectively. In fact, if not, then neither $X$ nor $Y$ is zero. It follows from (5.3) that $(r_{1}-1)X^{2}+(r_{2}-1)Y^{2}-2XY-\rho=0.$ Therefore, $\rho=(r_{1}-1)X^{2}+(r_{2}-1)Y^{2}-2XY=(r_{1}-2)X^{2}+(r_{2}-2)Y^{2}+(X-Y)^{2}$ is positive since $r_{1},~{}r_{2}\geq 2$ and $X\neq Y$. On the other hand, from Lemma 5.3, we see $\rho=0$, which is a contradiction. Next, without loss of generality, we assume that $X\neq 0$ and $Y=0$. $Y=0$ implies that $h_{2}$ is constant and $X^{\prime}+X^{2}=0$. Thus $X=\frac{1}{s-c_{1}}$ and, after integrating $X=\frac{h^{\prime}_{1}}{h_{1}}$, $h_{1}=c_{h_{1}}(s-c_{1})$ for constants $c_{1}$ and $c_{h_{1}}$.
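As a quick sanity check of this integration step (a symbolic sketch in Python with sympy, not part of the original proof; the symbol names are ours), one can verify that $X=\frac{1}{s-c_{1}}$ solves $X^{\prime}+X^{2}=0$ and that $h_{1}=c_{h_{1}}(s-c_{1})$ satisfies $h^{\prime}_{1}/h_{1}=X$:

```python
import sympy as sp

s, c1, ch1 = sp.symbols('s c_1 c_h1')
X = 1/(s - c1)            # candidate solution of X' + X^2 = 0
h1 = ch1*(s - c1)         # candidate obtained by integrating h1'/h1 = X

ode_residual = sp.simplify(sp.diff(X, s) + X**2)    # X' + X^2
log_residual = sp.simplify(sp.diff(h1, s)/h1 - X)   # h1'/h1 - X
print(ode_residual, log_residual)                   # both vanish
```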
By (4.2), we have $\lambda_{1}=-f^{\prime\prime}+\rho=0,$ which yields that $f^{\prime\prime}=\rho$ and $f(s)=\frac{1}{2}\rho(s-c_{1})^{2}+C_{1}.$ Putting $Y=0$ into (4.3) and (4.4) respectively shows (5.16) $\lambda_{a}=-f^{\prime}X+\rho=-(r-1)(X^{2}-\frac{k_{1}}{h^{2}_{1}})$ and $\lambda_{\alpha}=\rho=(n-r-2)\frac{k_{2}}{h^{2}_{2}}.$ Noting that $-f^{\prime}X+\rho=0$, by (5.16), we see $\lambda_{a}=0$ and $\lambda_{\alpha}=\rho\neq 0$. Therefore, the Ricci curvature components and the scalar curvature are as follows: $R_{11}=R_{aa}=0$, $R_{\alpha\alpha}=\rho$, $R_{ij}=0$ $(i\neq j)$ and $R=(n-r-1)\rho$. Furthermore, since $r\geq 2$, (5.16) shows $X^{2}-\frac{k_{1}}{h^{2}_{1}}=0$, and then $\frac{k_{1}}{c^{2}_{h_{1}}}=1$. Hence, $(U,g)$ is isometric to a domain in $\mathbb{R}^{1}\times N^{r}_{1}\times N^{n-r-1}_{2}$ with $g=ds^{2}+s^{2}\tilde{g}_{1}+\tilde{g}_{2}$, where $\left(N^{r}_{1},\tilde{g}_{1}\right)$ and $\left(N^{n-r-1}_{2},\tilde{g}_{2}\right)$ are Einstein manifolds with the Einstein constants $(r-1)$ and $\rho\neq 0$, respectively. Finally, we consider the manifold $\mathbb{R}^{1}\times N^{r}_{1}$ with $\bar{g}=ds^{2}+s^{2}\tilde{g}_{1}$. Since $f_{i,j}=\left(\rho-\lambda_{i}\right)\delta_{ij}$, we have $f_{1,1}=f_{a,a}=\rho\neq 0$ and $f_{a,b}=f_{1,a}=0$ $(a\neq b)$, i.e., $Hess_{\bar{g}}(f)=\rho\bar{g}.$ This shows that $f$ is a proper strictly convex function. We also see that $f=\frac{\rho}{2}s^{2}$, where $s$ is, up to a scaling, a distance function from the unique minimum of $f$. It is easy to see that the radial curvatures vanish, and hence the space $\left(\mathbb{R}^{1}\times N^{r}_{1},\bar{g},f\right)$ is a Gaussian soliton. This completes the proof of this theorem. ∎ ### 5.2. One of the two Ricci-eigenfunctions $\lambda_{a}$ and $\lambda_{\alpha}$ has multiplicity one In this subsection, we will study the case that one of the two Ricci-eigenfunctions $\lambda_{a}$ and $\lambda_{\alpha}$ has multiplicity one in some neighborhood $U$ of $p$ in $M_{A}\cap\\{\nabla f\neq 0\\}$.
Without loss of generality, we assume that the multiplicity of the Ricci-eigenfunction $\lambda_{a}$ is one, which means $r_{1}=1,~{}r_{2}=n-2\geq 2$. Then we shall give the local structure of a gradient Ricci soliton $\left(M^{n},g,f\right)$ with harmonic Weyl curvature in this case according to the integrability condition. Types (iii) and (ii) for $r=2$ in Theorem 1.4 come from this subsection. First of all, it follows from Theorem 3.5 that $(U,g)$ is isometric to a domain in $L\times L_{1}\times N^{n-2}_{2}$ with $g=ds^{2}+h^{2}_{1}(s)dt^{2}+h^{2}_{2}(s)\tilde{g}_{2},$ where $(N_{2},\tilde{g}_{2})$ is an $(n-2)$-dimensional Einstein manifold with the Einstein constant $(n-3)k_{2}$. ###### Lemma 5.5. Let $(M^{n},g,f)$, $(n\geq 4)$, be an $n$-dimensional gradient Ricci soliton with harmonic Weyl curvature. In some neighborhood $U$ of $p$ in $M_{A}\cap\\{\nabla f\neq 0\\}$, assume that $\lambda_{a}$ and $\lambda_{\alpha}$ are mutually different Ricci eigenvalues with multiplicities $1$ and $n-2$. Then $X:=\xi_{a}\neq 0.$ ###### Proof. If $X=0$, then $Y\neq 0$ since $X$ and $Y$ are distinct. $X=0$ implies that $h_{1}$ is constant and $Y^{\prime}+Y^{2}=0$. Thus $Y=\frac{1}{s-c_{1}}$ and, after integrating $Y=\frac{h^{\prime}_{2}}{h_{2}}$, $h_{2}=c_{h_{2}}(s-c_{1})$. Putting $X=0$ into (4.3) yields $\rho=0$. Then it follows from (5.1) that $f^{\prime}=0$, which is impossible. The proof of this lemma is completed. ∎ Next, we will discuss two cases according to whether the constant $k_{2}$ vanishes, which is equivalent to the $(n-2)$-dimensional Einstein manifold $(N_{2},\tilde{g}_{2})$ being Ricci flat. Then the local structure of both cases will be shown according to the integrability condition. Subcase I. $k_{2}=0$ For this subcase, the following result shows that the Ricci soliton is steady and it can be classified. This will give rise to type (iii) of Theorem 1.4. ###### Theorem 5.6.
For a gradient Ricci soliton $\left(M^{n},g,f\right)$ with harmonic Weyl curvature, assume that in some neighborhood $U$ of $p$ in $M_{A}\cap\\{\nabla f\neq 0\\}$, $(M^{n},g)$ is locally isometric to a domain in $L\times L_{1}\times N^{n-2}_{2}$ with $g=ds^{2}+h^{2}_{1}(s)dt^{2}+h^{2}_{2}(s)\tilde{g}_{2},$ where the $(n-2)$-dimensional Einstein manifold $(N_{2},\tilde{g}_{2})$ is Ricci flat. Then $n\neq 5$, the gradient Ricci soliton is steady (i.e., $\rho=0$) and $g$ is locally isometric to the metric $ds^{2}+s^{\frac{2(n-3)}{n-1}}dt^{2}+s^{\frac{4}{n-1}}\tilde{g}$ on a domain of $\mathbb{R}^{+}\times\mathbb{R}\times N^{n-2}_{2}$, where $\tilde{g}$ is Ricci flat. Also, the potential function is given by $f=\frac{2(n-3)}{n-1}\log(s)$ modulo a constant. Furthermore, the Ricci curvature components and the scalar curvature are as follows: $R_{11}=\frac{2(n-3)}{(n-1)s^{2}}$, $R_{22}=-\frac{2(n-3)^{2}}{(n-1)^{2}s^{2}}$, $R_{\alpha\alpha}=-\frac{4(n-3)}{(n-1)^{2}s^{2}}$, $R_{ij}=0$ $(i\neq j)$ and $R=-\frac{4(n-3)^{2}}{(n-1)^{2}s^{2}}$. Hence the scalar curvature is negative and non-constant. ###### Proof. Let $X:=\xi_{a}$ and $Y:=\xi_{\alpha}$ be the functions corresponding to the Ricci eigenfunctions $\lambda_{a}$ and $\lambda_{\alpha}$, respectively. Claim. (i) $Y\neq 0$. (ii) $\rho=0$. (iii) $(n-3)Y-2X=0$. In fact, in this case, we note that $r_{1}=1$ and $k_{2}=0$. Then (5.4) gives us $(n-3)Y-f^{\prime}=0$, which shows $Y\neq 0$. Meanwhile, from (ii) of Lemma 5.3, it follows that $\rho=0$ and $[(n-3)Y-2X]Y=0,$ which implies (iii) since $Y\neq 0$. It is easy to see from (iii) of the Claim that $n\neq 5$, because $X$ and $Y$ are distinct. So $X^{\prime}+X^{2}=Y^{\prime}+Y^{2}=\frac{2}{n-3}X^{\prime}+\frac{4}{(n-3)^{2}}X^{2},$ that is, $X^{\prime}+\frac{n-1}{n-3}X^{2}=0.$ Solving the above equation, $X=\frac{1}{qs-c_{1}}$ for some constant $c_{1}$, where $q=\frac{n-1}{n-3}$.
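These relations are easy to verify symbolically. The following sketch (Python with sympy, not part of the original proof; we normalize $c_{1}=0$, so $X=\frac{1}{qs}$ and, by claim (iii), $Y=\frac{2X}{n-3}$) checks that $X$ solves the ODE, that the compatibility condition (4.1) holds for this pair $(X,Y)$, and that $-（n-1)(X^{\prime}+X^{2})$, which equals $R_{11}$ by (4.2) with $\rho=0$, agrees with the component $R_{11}=\frac{2(n-3)}{(n-1)s^{2}}$ stated in the theorem:

```python
import sympy as sp

s, n = sp.symbols('s n', positive=True)
q = (n - 1)/(n - 3)

X = 1/(q*s)               # the solution above, normalized so that c_1 = 0
Y = 2*X/(n - 3)           # claim (iii): (n-3)Y = 2X

ode_residual = sp.simplify(sp.diff(X, s) + q*X**2)              # X' + qX^2
eq41_residual = sp.simplify((sp.diff(X, s) + X**2)
                            - (sp.diff(Y, s) + Y**2))           # (4.1)
# with rho = 0, (4.2) gives R_11 = -(n-1)(X' + X^2)
R11 = sp.simplify(-(n - 1)*(sp.diff(X, s) + X**2))
R11_residual = sp.simplify(R11 - 2*(n - 3)/((n - 1)*s**2))
print(ode_residual, eq41_residual, R11_residual)                # all vanish
```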
Using (4.3), $f^{\prime}=2X$; by integrating this equation, the potential function can be expressed as $f=\frac{2}{q}\log(s)$ modulo a constant. From $\frac{h^{\prime}_{1}}{h_{1}}=X$ and $\frac{h^{\prime}_{2}}{h_{2}}=Y$, we can get functions $h_{1}$ and $h_{2}$. Putting them into (4.2), (4.3) and (4.4), one can easily get the Ricci curvature components and the scalar curvature of $g$. The proof of this theorem is completed. ∎ Subcase II. $k_{2}\neq 0$ For this subcase, we have the following classification result, which forms type (ii) of Theorem 1.4 for $r=2$. ###### Theorem 5.7. For a gradient Ricci soliton $\left(M^{n},g,f\right)$ with harmonic Weyl curvature, assume in some neighborhood $U$ of $p$ in $M_{A}\cap\\{\nabla f\neq 0\\}$, $(M^{n},g)$ is locally isometric to a domain in $\mathbb{R}^{1}\times\mathbb{R}^{1}\times N^{n-2}_{2}$ with $g=ds^{2}+h^{2}_{1}(s)dt^{2}+h^{2}_{2}(s)\tilde{g}_{2},$ where the $(n-2)$-dimensional Einstein manifold $(N_{2},\tilde{g}_{2})$ is not Ricci flat. Then it is not steady (i.e., $\rho\neq 0$). The two distinct eigenvalues are exactly 0 and $\rho$ with multiplicities $2$ and $(n-2)$. Also, $\nabla f$ is a null Ricci-eigenvector. Moreover, $(U,g)$ is locally isometric to a domain in $\mathbb{R}^{2}\times N^{n-2}$ with $g=ds^{2}+s^{2}dt^{2}+\tilde{g}$, where $\left(N^{n-2},\tilde{g}\right)$ is an $(n-2)$-dimensional Einstein manifold with the Einstein constant $\rho\neq 0$. The potential function is given by $f=\frac{\rho}{2}s^{2}$ modulo a constant. ###### Proof. First of all, we claim that $Y=0.$ In fact, in this case, we note that $r_{1}=1$ and $k_{2}\neq 0$, from (i) of Lemma 5.3, and it follows that $(r_{1}-1)X^{2}+(r_{2}-1)Y^{2}-2XY-\rho\neq 0.$ Combining with (5.3) implies $Y=0$ since $X\neq 0$. Next, the similar method of the proof of Theorem 5.4 can be used to treat this subcase. $Y=0$ implies that $h_{2}$ is constant and $X^{{}^{\prime}}+X^{2}=0$. 
Thus $X=\frac{1}{s-c_{1}}$ and, by integrating $X=\frac{h^{\prime}_{1}}{h_{1}}$, $h_{1}=c_{h_{1}}(s-c_{1})$ for constants $c_{1}$ and $c_{h_{1}}$. By (4.2), we have $\lambda_{1}=-f^{\prime\prime}+\rho=0,$ which yields $f^{\prime\prime}=\rho$ and then $f(s)=\frac{1}{2}\rho(s-c_{1})^{2}+C_{1}.$ From (4.3) and (4.4), we have $\lambda_{a}=-f^{\prime}X+\rho=0\quad{\rm and}\quad\lambda_{\alpha}=\rho=(n-3)\frac{k_{1}}{h^{2}_{2}}\neq 0.$ Therefore, $(M,g)$ is locally isometric to a domain in $\mathbb{R}^{2}\times N^{n-2}$ with $g=ds^{2}+s^{2}dt^{2}+\tilde{g}_{2}$, where $\left(N^{n-2},\tilde{g}_{2}\right)$ is an $(n-2)$-dimensional Einstein manifold with the Einstein constant $\rho\neq 0$. ∎ ## 6\. The local structure of the case with the same Ricci-eigenfunctions In this section we treat the case in which all Ricci-eigenfunctions are equal, except for the first one. We set $\lambda_{\alpha}:=\lambda_{2}=\cdots=\lambda_{n}.$ Type (iv) of Theorem 1.4 comes from this section. For convenience, here we make the following convention on the range of indices: $2\leq\alpha,\beta,\gamma\cdots\leq n$ and denote $\xi_{\alpha}:=X=\frac{h^{\prime}}{h}$. From Section 3, we have (6.1) $\frac{h^{\prime\prime}}{h}=X^{\prime}+X^{2}=-\frac{R^{\prime}}{2(n-1)f^{\prime}},$ (6.2) $\lambda_{1}=-f^{\prime\prime}+\rho=-(n-1)\left(X^{\prime}+X^{2}\right)$ and (6.3) $\lambda_{\alpha}=-f^{\prime}X+\rho=-\left(X^{\prime}+X^{2}\right)-(n-2)\left(X^{2}-\frac{k}{h^{2}}\right).$ Consequently, we have the following theorem. ###### Theorem 6.1. Let $(M^{n},g,f)$ be an $n$-dimensional gradient Ricci soliton with harmonic Weyl curvature. Suppose that in some neighborhood $U$ of $p$ in $M_{A}\cap\\{\nabla f\neq 0\\}$, all Ricci-eigenfunctions are equal except for the one with respect to the gradient vector of the potential function. Then $g$ is a warped product: (6.4) $g=ds^{2}+h^{2}(s)\tilde{g},$ for a positive function $h$, where the Riemannian metric $\tilde{g}$ is Einstein. 
Furthermore, the $D$ tensor vanishes, and so does the Bach tensor. ###### Proof. The first part of this theorem was obtained in Theorem 3.5 of Section 3. In particular, if all Ricci-eigenfunctions are equal, then the metric is Einstein. If, in addition, $f$ is not constant, then the conclusion of Theorem 6.1 still holds. In fact, Cheeger and Colding [17] presented a characterization of warped product structures on a Riemannian manifold $M$ in terms of solutions to the more general equation $Hess(f)=\mu g$ for some smooth function $\mu$ on $M$. More precisely, the Einstein metric $g$ becomes locally of the form $g=ds^{2}+(f^{\prime}(s))^{2}\tilde{g}$ where $\tilde{g}$ is an Einstein metric. Next we only need to verify the second part. It is easy to check that the gradient Ricci soliton is $D$-flat and Bach-flat. In fact, from Section 3, its Riemannian curvatures are expressed as $R_{1\alpha 1\beta}=-\left(X^{\prime}+X^{2}\right)\delta_{\alpha\beta}=\frac{R^{\prime}}{2(n-1)f^{\prime}}\delta_{\alpha\beta}$ and $R_{\alpha\beta\gamma\delta}=\tilde{R}_{\alpha\beta\gamma\delta}-X^{2}\left(\delta_{\alpha\gamma}\delta_{\beta\delta}-\delta_{\alpha\delta}\delta_{\beta\gamma}\right).$ By equations (6.2) and (6.3), the scalar curvature and the Schouten tensor are as follows: $R=-2(n-1)\left(X^{\prime}+X^{2}\right)-(n-1)(n-2)\left(X^{2}-\frac{k}{h^{2}}\right),$ $A_{11}=-(n-2)\left(X^{\prime}+X^{2}\right)+\frac{n-2}{2}\left(X^{2}-\frac{k}{h^{2}}\right)$ and $A_{\alpha\beta}=-\frac{n-2}{2}\left(X^{2}-\frac{k}{h^{2}}\right)\delta_{\alpha\beta}.$ Putting them into (2.15), the Weyl tensor is given by $W_{1\alpha 1\beta}=0,\quad W_{1\alpha\beta\gamma}=0$ and $W_{\alpha\beta\gamma\delta}=\frac{1}{h^{2}}\tilde{R}_{\alpha\beta\gamma\delta}-\frac{k}{h^{2}}\left(\delta_{\alpha\gamma}\delta_{\beta\delta}-\delta_{\alpha\delta}\delta_{\beta\gamma}\right).$ Finally, the harmonicity of the Weyl tensor implies that the Cotton tensor $C_{ijk}=0$. 
With the relationships (2.25) and (2.26), it follows that the $D$ tensor vanishes and the gradient Ricci soliton is Bach-flat. ∎ ## 7\. Classification of gradient Ricci solitons with harmonic Weyl curvature In this section, we summarize and prove the theorems stated in the introduction. Now we are going to combine Theorems 5.4, 5.6, 5.7 and 6.1 to prove Theorem 1.4, in a similar way to the $4$-dimensional case of Kim [31]. Proof of Theorem 1.4. If the real analytic potential function $f$ is constant on some non-empty open subset, then it is constant on $M$ because of the connectedness of $M$. So, if the soliton is of type (i) on some non-empty open subset, it will be so on $M$. If the soliton is of type (iv) on some non-empty open subset with nonconstant $f$, then $D=0$ on $U$ and the real analytic function $\left|D\right|=0$ everywhere on $M$. Hence the soliton is of type (iv) on $M$ and $f$ is nonconstant on $M$. The Ricci tensor of a gradient Ricci soliton with vanishing $D$-tensor either has a unique eigenvalue, or has two distinct eigenvalues of multiplicities 1 and $(n-1)$, respectively. Hence types (ii) and (iii) do not satisfy $D=0$. For type (ii), the scalar curvature $R=(n-r-1)\rho$ is a nonzero constant on some non-empty open subset $U$, and by real analyticity, $R=(n-r-1)\rho$ on $M$. However, for type (iii), the scalar curvature $R=-\frac{4(n-3)^{2}}{(n-1)^{2}s^{2}}$ is not constant. Therefore, among types (i)-(iv) in Theorem 1.4, each type is different from the other three types. This completes the proof of Theorem 1.4. $\Box$ Finally, we complete the proof of the (local) classification of gradient Ricci solitons with harmonic curvature. Proof of Corollary 1.8. By the second Bianchi identity, a gradient Ricci soliton with harmonic curvature is of constant scalar curvature. 
On the other hand, the soliton metric $ds^{2}+s^{\frac{2(n-3)}{n-1}}dt^{2}+s^{\frac{4}{n-1}}\tilde{g}$ in Theorem 1.4 (iii) does not have constant scalar curvature, and hence does not have harmonic curvature. Note that when $2\leq r\leq n-2$, type (ii) of Corollary 1.8 should come from Theorem 1.4 (ii). We need to consider the cases $r=1$ and $r=n$, which come from Theorem 1.4 (iv) with the metric $g=ds^{2}+h^{2}(s)\tilde{g}$, where $\tilde{g}$ is an Einstein metric. Lemma 2.1 gives $R+|\nabla f|^{2}-2\rho f={\rm constant}$. We differentiate both sides with respect to the local variable $s$ where $|\nabla f|\neq 0$, and get $2f^{\prime}f^{\prime\prime}=2\rho f^{\prime}$ since $R$ is constant. So, $f^{\prime\prime}=\rho$. From (6.2) and (6.1), $h^{\prime\prime}=0$. Either $h=c$ or $h=cs$ for some constant $c\neq 0$ after shifting $s$ by a constant. When $h=c$, from (6.3) we get $\frac{k}{c^{2}}=\frac{\rho}{n-2}$. We have $g=ds^{2}+\tilde{g}$, where $\tilde{g}$ is an Einstein metric with the Einstein constant $\rho$. We set $f=\frac{\rho}{2}s^{2}+C$ by shifting $s$. As $f$ is not constant, $\rho\neq 0$, which forms type (ii) for $r=1$. When $h=cs$, using (6.2) and $f^{\prime\prime}=\rho$ we obtain that $f^{\prime}=\rho s+c_{1}$ and $k=c^{2}$. Then $f=\frac{1}{2}\rho s^{2}$ and $\rho\neq 0$. Thus, $g=ds^{2}+s^{2}\tilde{g},$ where $\tilde{g}$ is an Einstein metric with the Einstein constant $(n-2)$. Moreover, from (3.5), $f_{1,1}=f_{\alpha,\alpha}=\rho\neq 0$ and $f_{\alpha,\beta}=f_{1,\alpha}=0$. We have $g=ds^{2}+s^{2}d\mathbb{S}^{2}_{n-1}$ from the proof of Theorem 5.4, which yields the Gaussian soliton. We have completed the proof of Corollary 1.8. $\Box$ Acknowledgments This work was completed while the author was visiting Lehigh University from August 2019 to August 2020. 
She would like to thank her advisor Professor Huai-Dong Cao for suggesting the research project, and for his invaluable guidance, constant encouragement and support. She is grateful to her advisors Professor Yu Zheng and Professor Zhen Guo for their constant encouragement and support. She also would like to thank Junming Xie, Jiangtao Yu, and other members of the geometry group at Lehigh for their interest, helpful discussions, and suggestions during the preparation of this paper. She also would like to thank the China Scholarship Council (No: 201906140158) and the Program for Graduate Students of East China Normal University (No: YBNLTS 2020-044) for the financial support, and the Department of Mathematics at Lehigh University for hospitality and for providing a great environment for research. ## References * [1] R. Bach, Zur Weylschen Relativitätstheorie und der Weylschen Erweiterung des Krümmungstensorbegriffs, Math. Z. 9 (1921), 110–135. * [2] A. Besse, Einstein manifolds. Ergebnisse der Mathematik, 3 Folge, Band 10, Springer-Verlag, 1987. * [3] S. Brendle, Rotational symmetry of self-similar solutions to the Ricci flow, Invent. Math. 194(3) (2013), 731-764. * [4] R. Bryant, Ricci flow solitons in dimension three with $SO(3)$–symmetries, preprint https://services.math.duke.edu/$\sim$bryant/3DRotSymRicciSolitons.pdf * [5] H.-D. Cao, Recent progress on Ricci solitons, Recent advances in geometric analysis, 1-38, Adv. Lect. Math. (ALM), 11, Int. Press, Somerville, MA, 2010. * [6] H.-D. Cao, Existence of gradient Kähler-Ricci solitons, Elliptic and parabolic methods in geometry (Minneapolis, MN, 1994), A K Peters, Wellesley, MA, 1996, 1-16. * [7] H.-D. Cao, G. Catino, Q. Chen, C. Mantegazza, and L. Mazzieri, Bach-flat gradient steady Ricci solitons, Calc. Var. Partial Differential Equations 49 no. 1-2 (2014), 125-138. * [8] H.-D. Cao and Q. Chen, On locally conformally flat gradient steady Ricci solitons, Trans. Amer. Math. Soc. 364 (2012), 2377-2391. * [9] H.-D. 
Cao and Q. Chen, On Bach-flat gradient shrinking Ricci solitons, Duke Math. J. 162 (2013), 1003-1204. * [10] H.-D. Cao, B.-L. Chen and X.-P. Zhu, Recent developments on Hamilton’s Ricci flow, Surveys in differential geometry. Vol. XII. Geometric flows, 47-112, Surv. Differ. Geom., 12, Int. Press, Somerville, MA, 2008. * [11] H.-D. Cao and F.J. Li, Besse conjecture and critical spaces with harmonic curvature, preprint (2020). * [12] H.-D. Cao, F.J. Li and J. Siene, Quasi-Einstein manifolds with harmonic Weyl curvature, in preparation. * [13] H.-D. Cao and J.T. Yu, On complete gradient steady Ricci solitons with vanishing $D$-tensor, to appear in Proc. Amer. Math. Soc. * [14] X. Cao, B. Wang and Z. Zhang, On locally conformally flat gradient shrinking Ricci solitons, Commun. Contemp. Math. 13 (2) (2011), 269-282. * [15] G. Catino and C. Mantegazza, The evolution of the Weyl tensor under the Ricci flow, Ann. Inst. Fourier (Grenoble) 61(4) (2011), 1407-1435. * [16] G. Catino, P. Mastrolia and D. Monticelli, Gradient Ricci solitons with vanishing conditions on Weyl, J. Math. Pures Appl. 108(1) (2017), 1–13. * [17] J. Cheeger and T.H. Colding, Lower Bounds on Ricci Curvature and the Almost Rigidity of Warped Products, Ann. of Math. 144(1) (1996), 189-237. * [18] B.-L. Chen, Strong uniqueness of the Ricci flow, J. Differential Geom. 82(2) (2009), 362-382. * [19] C.-W. Chen and A. Deruelle, Structure at infinity of expanding gradient Ricci soliton, Asian J. Math. 19(5) (2015), 933–950. * [20] X.X. Chen and Y. Wang, On four-dimensional anti-self-dual gradient Ricci solitons, J. Geom. Anal. 25(2) (2015), 1335-1343. * [21] O. Chodosh, Expanding Ricci solitons asymptotic to cones, Calc. Var. Partial Differential Equations, 51(1-2) (2014), 1-15. * [22] B. Chow and L.-F. Wu, The Ricci flow on compact 2-orbifolds with curvature negative somewhere, Comm. Pure Appl. Math. 44 (1991), 275-286. * [23] A. 
Derdziński, Classification of Certain Compact Riemannian Manifolds with Harmonic Curvature and Non-parallel Ricci Tensor, Math. Z. 172 (1980), 273-280. * [24] M. Eminenti, G. La Nave and C. Mantegazza, Ricci solitons: the equation point of view, Manuscripta Math. 127(3) (2008), 345-367. * [25] M. Fernández-López and E. García-Río, Rigidity of shrinking Ricci solitons, Math. Z. 269 (2011), 461-466. * [26] R.S. Hamilton, Three-manifolds with positive Ricci curvature, J. Differential. Geom. 17 (1982), 255–306. * [27] R.S. Hamilton, The formation of singularities in the Ricci flow, Surveys in Differential Geometry (Cambridge, MA, 1993), 2, 7-136, International Press, Cambridge, MA, 1995. * [28] C. He, P. Petersen and W. Wylie, On the classification of warped product Einstein metrics, Comm. Anal. Geom. 20(2) (2012), 271-311. * [29] T. Ivey, Local existence of Ricci solitons, Manuscripta Math. 91 (1996), 151-162. * [30] T. Ivey, Ricci solitons on compact three-manifolds, Differential Geom. Appl. 3(4) (1993), 301-307. * [31] J. Kim, On a classification of 4-d gradient Ricci solitons with harmonic Weyl curvature, J. Geom. Anal. 27(2) (2017), 986–1012. * [32] F.J. Li, Vacuum static spaces with harmonic curvature, preprint (2020). * [33] O. Munteanu and N. Sesum, On gradient Ricci solitons, J. Geom. Anal. 23 (2013), 539-561. * [34] A Naber, Noncompact shrinking four solitons with nonnegative curvature. J. Reine Angew. Math. 645, 125–153 (2010). * [35] L. Ni and N. Wallach, On a classification of the gradient shrinking solitons, Math. Res. Lett, 15 (5) (2008), 941-955. * [36] G. Perelman, The entropy formula for the Ricci flow and its geometric applications, http://arxiv.org/pdf/math/0211159v1.pdf (2002). * [37] P. Petersen, Riemannian Geometry, Grad. texts in Math. 171, Springer-Verlag, 1998. * [38] P. Petersen and W. Wylie, Rigidity of gradient Ricci solitons. Pacific J. Math. 241 (2009), 329-345. * [39] P. Petersen and W. 
Wylie, On the classification of gradient Ricci solitons, Geom. Topol. 14(4) (2010), 2277-2300. * [40] J. Shin, On the classification of 4-dimensional $(m,\rho)$-quasi-Einstein manifolds with harmonic Weyl curvature, Ann. Global Anal. Geom. 51(4) (2017), 379–399. * [41] J.-Y. Wu, P. Wu and W. Wylie, Gradient shrinking Ricci solitons of half harmonic Weyl curvature, Calc. Var. Partial Differential Equations 57 (2018), no. 5, Paper No. 141, 15 pp. * [42] Z.H. Zhang, Gradient shrinking solitons with vanishing Weyl tensor, Pacific J. Math. 242 (1) (2009), 189-200.
# Inductive Synthesis for Probabilistic Programs Reaches New Horizons††thanks: This work has been partially supported by the Czech Science Foundation grant GJ20-02328Y and the ERC AdG Grant 787914 FRAPPANT, the NSF grants 1545126 (VeHICaL) and 1646208, by the DARPA Assured Autonomy program, by Berkeley Deep Drive, and by Toyota under the iCyPhy center.

Roman Andriushchenko¹, Milan Češka¹ (🖂), Sebastian Junges², Joost-Pieter Katoen³

¹Brno University of Technology, Brno, Czech Republic (<EMAIL_ADDRESS>) ²University of California, Berkeley, USA ³RWTH Aachen University, Aachen, Germany

###### Abstract This paper presents a novel method for the automated synthesis of probabilistic programs. The starting point is a program sketch representing a finite family of finite-state Markov chains with related but distinct topologies, and a reachability specification. The method builds on a novel inductive oracle that greedily generates counterexamples (CEs) for violating programs and uses them to prune the family. These CEs leverage the semantics of the family in the form of bounds on its best- and worst-case behaviour provided by a deductive oracle using an MDP abstraction. The method further monitors the performance of the synthesis and adaptively switches between inductive and deductive reasoning. Our experiments demonstrate that the novel CE construction provides a significantly faster and more effective pruning strategy, leading to an accelerated synthesis process on a wide range of benchmarks. For challenging problems, such as the synthesis of decentralized partially-observable controllers, we reduce the run-time from a day to minutes. ## 1 Introduction #### Background and motivation. Controller synthesis for Markov decision processes (MDPs [35]) and temporal logic constraints is a well-understood and tractable problem, with a plethora of mature tools providing efficient solving capabilities. 
However, the applicability of these controllers to a variety of systems is limited: Systems may be decentralized, controllers may not be able to observe the complete system state, cost constraints may apply, and so forth. Adequate operational models for these systems exist in the form of _decentralized partially- observable MDPs_ (DEC-POMDPs [33]). The controller synthesis problem for these models is undecidable [30], and tool support (for verification tasks) is scarce. This paper takes a different approach: the controller together with the environment can be modelled as probabilistic program sketches where “holes” in the probabilistic program model choices that the controller may make. Conceptually, the controllers of the DEC-POMDP are described by a user-defined finite family $\mathcal{M}$ of Markov chains. _The synthesis problem that we consider is to find a Markov chain $M$ (i.e., a probabilistic program) in the family $\mathcal{M}$, such that $M\models\varphi$, where $\varphi$ is the specification._ To allow efficient algorithms, the family must have some structure. In particular, in our setting, the family is parameterized by a set of discrete _parameters_ $K$; an assignment $K\rightarrow V$ of these parameters with concrete values $V$ from its associated domain yields a family member, i.e., a Markov chain (MC). Such a parameterization is naturally obtained from the probabilistic program sketch, where some constants (or program parts) can be left open. The search for a family member can thus be considered as the search for a hole-assignment. This approach fits within the realm of syntax-guided synthesis [2]. _Motivating example._ _Herman’s protocol_ [24] is a well-studied randomized distributed algorithm aimed to obtain fast stabilization on average. In [26], a family $\mathcal{M}$ of MCs is used to model different protocol instances. They considered each instance separately, and found which of the controllers for Herman’s protocol performs best. 
Let us consider the protocol in a bit more detail: It considers self-stabilization of a unidirectional ring of network stations where all stations have to behave similarly—an anonymous network. Each station stores a single bit, and can read the internal bit of one (say left) neighbour. To achieve stabilization, a station for which the two legible bits coincide updates its own bit based on the outcome of a coin flip. The challenge is to select a controller that flips this coin with an optimal bias, i.e., minimizing the expected time until stabilization. In a setting where the probabilities range over $0.1,0.2,\ldots,0.9$, this results in analyzing nine different MCs. Does the expected time until stabilization reduce if the controllers are additionally allowed to have a single bit of memory? In every step, there are $9{\cdot}9$ combinations for selecting the coin flip and for each memory cell and coin flip outcome, the memory can now be updated, yielding $2{\cdot}2{\cdot}2$ possibilities. This one-bit extension thus results in a family of $648$ models. If, in addition, one allows stations to make decisions depending on the token-bits, both the coin flips and the memory updates are multiplied by a factor $4$, yielding $10,368$ models. Eventually, analyzing all individual MCs is infeasible. _Oracle-guided synthesis._ To tackle the synthesis problem, we introduce an _oracle-guided inductive synthesis_ approach [25, 39]. A learner selects a family member and passes it to the oracle. The oracle answers whether the family member satisfies $\varphi$, and crucially, gives additional information in case this is not the case. Inspired by [9], if the family member violates the specification $\varphi$, our oracle returns a set $K^{\prime}$ of parameters such that all family members obtained by changing only the values assigned to $K^{\prime}$ violate $\varphi$. 
We argue that such an oracle must (1) induce little overhead in providing $K^{\prime}$, (2) be aware of the existence of parameters in the family, and (3) have (resemblance of) awareness about the semantics of the parameters and their values. _Oracles._ With these requirements in mind, we construct a counterexample (CE)-based oracle from scratch. We do so by carefully exploiting existing methods. We construct critical subsystems as CEs [1]. Critical subsystems are parts of the MC that suffice to refute the specification. If a hole is absent in a CE, its value is irrelevant. To avoid the cost of finding optimal CEs—an NP-hard problem [19]—we consider greedy CEs that are similar to [9]. However, our greedy CEs are aware of the parameters, and try to limit the occurrence of parameters in the CE. Finally, to provide awareness of the semantics of parameter values, we provide lower and upper bounds on all states: Their difference indicates how much varying the value at a hole may change the overall reachability probability. These bounds are efficiently computed by another oracle. This oracle analyses a quotient MDP obtained by employing an abstraction method that is part of the abstraction-refinement loop in [10]. _A hybrid variant._ The two oracles are significantly different. Abstraction refinement is _deductive_ : it argues about single family members by considering (an aggregation of) all family members. The critical subsystem oracle is _inductive_ : by examining a single family member, it infers statements about other family members. This suggests a middle ground: a _hybrid strategy_ monitors the performance of the two oracles during the synthesis and suggests their best usage. More precisely, the hybrid strategy integrates the counterexample-based oracle into the abstraction-refinement loop. _Major results._ We present a novel and dedicated oracle deployed in an efficacious synthesis loop. We use model-checking results on an abstraction to tailor smaller CEs. 
Our greedy and family-aware CE construction is substantially faster than the use of optimal CEs. Together, these two improvements yield CEs that are on par with optimal CEs, but are found much faster. The integration of multiple abstraction-refinement steps yields a superior performance: we compare our performance with the abstraction-refinement loop from [10] using benchmarks from [10]. Benchmarks can be classified along two dimensions: ($A$) Benchmarks with a structure good for CE-generation. ($B$) Benchmarks with a structure good for abstraction-refinement. $A$-benchmarks are a natural strength of our novel oracle. Our simple, efficient hybrid strategy significantly outperforms the state-of-the-art on $A$-benchmarks, while it only yields limited overhead for $B$-benchmarks. Most importantly, the novel hybrid strategy can solve benchmarks that are out of reach for pure abstraction-refinement or pure CE-based reasoning. In particular, our hybrid method is able to synthesize the optimal Herman protocol with memory—the synthesis time on a design space with 3.1 million candidate programs reduces from a day to minutes. ### 1.0.1 Related work The synthesis problems for parametric probabilistic systems can be divided into the following two categories. _Topology synthesis,_ akin to the problem considered in this paper, assumes a finite set of parameters affecting the MC topology. Finding an instantiation satisfying a reachability property is NP-complete in the number of parameters [12], and can naively be solved by analyzing all individual family members. An alternative is to model the MC family by an MDP and resort to standard MDP model-checking algorithms. Tools such as ProFeat [13] or QFLan [40] take this approach to quantitatively analyze alternative designs of software product lines [21, 28]. These methods are limited to small families. 
This motivated (1) _abstraction-refinement_ over the MDP representation [10], and (2) _counterexample-guided inductive synthesis_ (CEGIS) for MCs [9], mentioned earlier. The alternative problem of sketching for probabilistic programs that fit given data is studied, e.g., in [32, 38]. _Parameter synthesis_ considers models with uncertain parameters associated with transition probabilities, and analyses how the system behaviour depends on the parameter values. The most promising techniques are based on _parameter lifting_ that treats identical parameters in different transitions independently [8, 36] and has been implemented in the state-of-the-art probabilistic model checkers Storm [18] and PRISM [27]. An alternative approach based on building rational functions for the satisfaction probability has been proposed in [15] and further improved in [22, 17, 4]. This approach has also been applied to different problems such as model repair [5, 34, 11]. Both synthesis problems can also be attacked by _search-based techniques_ that do not ensure an exhaustive exploration of the parameter space. These include evolutionary techniques [23, 31] and genetic algorithms [20]. Combinations with parameter synthesis have been used [7] to synthesize robust systems. ## 2 Problem Statement We formalize the essential ingredients and the problem statement. See [3] for more material. #### Sets of Markov chains. A _(discrete) distribution_ over a finite set $X$ is a function $\mu\colon X\rightarrow[0,1]$ s.t. $\sum_{x\in X}\mu(x)=1$. The set $Distr(X)$ contains all distributions over $X$. The _support_ of $\mu\in Distr(X)$ is $\mathrm{supp}(\mu)=\\{x\in X\mid\mu(x)>0\\}$. ###### Definition 1 (MC) A _Markov chain (MC)_ is a tuple $D=(S,s_{0},\boldsymbol{P})$, where $S$ is a finite set of _states_, $s_{0}\in S$ is an _initial state_, and $\boldsymbol{P}\colon S\rightarrow Distr(S)$ is a _transition probability function_. We write $\boldsymbol{P}(s,t)$ to denote $\boldsymbol{P}(s)(t)$. 
The state $s$ is _absorbing_ if $\boldsymbol{P}(s,s)=1$. Let $K$ denote a finite set of discrete parameters with finite domains $V_{k}$. For brevity, we often assume that all domains are the same, and omit the subscript $k$. A _realization_ $r$ maps parameters to values in their domain, i.e., $r\colon K\rightarrow V$. Let $\mathcal{R}^{\mathcal{D}}$ denote the set of all realizations of a set $\mathcal{D}$ of MCs. A $K$-parameterized set of MCs $\mathcal{D}(K)$ contains the MCs $\mathcal{D}_{r}$, for every $r\in\mathcal{R}^{\mathcal{D}}$. In Sect. 3.0.1, we give an operational model for such sets. In particular, realizations will fix the targets of transitions. In our experiments, we describe these sets using the PRISM modelling language where parameters are described by undefined integer values. #### Properties and specifications. For simplicity, we consider (unbounded) _reachability_ properties (our implementation also supports expected reachability rewards). For a set $T\subseteq S$ of _target states_, let $\mathbb{P}[D,s\models\Diamond T]$ denote the probability in MC $D$ to eventually reach some state in $T$ when starting in the state $s\in S$. A property $\varphi\equiv\mathbb{P}_{\bowtie\lambda}[\Diamond T]$ with $\lambda\in[0,1]$ and $\bowtie\,\in\\{\leq,\geq\\}$ expresses that the probability to reach $T$ relates to $\lambda$ according to $\bowtie$. If $\bowtie\,={\leq}$, then $\varphi$ is a _safety_ property; otherwise, it is a _liveness_ property. Formally, state $s$ in MC $D$ satisfies $\varphi$ if ${\mathbb{P}[D,s\models\Diamond T]\bowtie\lambda}$. The MC $D$ satisfies $\varphi$ if the above holds for its initial state. A _specification_ is a set of properties $\Phi=\\{\varphi_{i}\\}_{i\in I}$, and $D\models\Phi$ if $\forall i\in I:D\models\varphi_{i}$. #### Problem statement. 
The key problem statement in this paper is _feasibility_: Given a parameterized set of Markov chains $\mathcal{D}(K)$ over parameters $K$ and a specification $\Phi$, find a realization $r\colon K\rightarrow V$ such that $\mathcal{D}_{r}\models\Phi$. When $\mathcal{D}$ is clear from the context, we often write $r\models\Phi$ to denote $\mathcal{D}_{r}\models\Phi$. We additionally consider the optimizing variant of the synthesis problem. The _maximal synthesis_ problem asks: given a maximizing property $\varphi_{\max}\equiv\mathbb{P}_{\bowtie\lambda}[\Diamond T]$, identify $r^{*}\in\operatorname*{arg\,max}_{r\in\mathcal{R}^{\mathcal{D}}}\left\\{\mathbb{P}[\mathcal{D}_{r}\models\Diamond T]\mid\mathcal{D}_{r}\models\Phi\right\\}$ provided it exists. The _minimal synthesis_ problem is defined analogously. As the state space $S$, the set $K$ of parameters, and their domains are all finite, the above synthesis problems are decidable. One possible solution, called the _one-by-one approach_ [14], considers each realization $r\in\mathcal{R}^{\mathcal{D}}$. The state-space and parameter-space explosion renders this approach unusable for large problems, necessitating the use of advanced techniques that exploit the family structure. ## 3 Counterexample-Guided Inductive Synthesis In this section, we recap a baseline for a counterexample-guided inductive synthesis (CEGIS) loop, as put forward in [9]. In particular, we first instantiate an oracle-guided synthesis method, discuss an operational model for families, giving structure to the parameterized set of Markov chains, and finally detail the usage of CEs to create an oracle. Figure 1: Oracle-guided synthesis (the learner passes a realization $r\in\mathcal{R}$ to the oracle, which either confirms $r\models\Phi$ or returns a set $\mathcal{R}^{\prime}\subseteq\mathcal{R}$ with $r\in\mathcal{R}^{\prime}$ whose members all violate $\Phi$). Consider Fig. 1. 
A learner takes a set $\mathcal{R}$ of realizations, and has to find a realization $\mathcal{D}_{r}$ satisfying the specification $\Phi$. The learner maintains (a symbolic representation of) a set $Q\subseteq\mathcal{R}$ of realizations that need to be checked. It iteratively asks the oracle whether a particular $r\in Q$ is a solution. If it is a solution, the oracle reports success. Otherwise, the oracle returns a set $\mathcal{R}^{\prime}$ containing $r$ and potentially more realizations all violating $\Phi$. The learner then prunes $\mathcal{R}^{\prime}$ from $Q$. In Section 4, we focus on creating an efficient oracle that computes a set $\mathcal{R}^{\prime}$ (with $r\in\mathcal{R}^{\prime}$) of realizations that are all violating $\Phi$. In Section 5, we provide a more advanced framework that extends this method. The remainder of this section lays the groundwork for these sections. ### 3.0.1 Families of Markov chains To avoid the need to iterate over all realizations, an efficient oracle exploits some structure of the family. In this paper, we focus on sets of Markov chains having different topologies. We explain our concepts using the operational model of families given in [10]. Our implementation supports (more expressive) PRISM programs with undefined integer constants. ###### Definition 2 (Family of MCs) A _family of MCs_ is a tuple $\mathcal{D}=(S,s_{0},K,\mathcal{B})$ with $S$ and $s_{0}$ as before, $K$ is a finite set of parameters with domains $V_{k}\subseteq S$ for each $k\in K$, and $\mathcal{B}:S\rightarrow Distr(K)$ is a family of transition probability functions. Function $\mathcal{B}$ of a family $\mathcal{D}$ of MCs maps each state to a distribution over parameters $K$. In the context of the synthesis of probabilistic models, these parameters represent unknown options or features of a system under design. Realizations are now defined as follows. 
###### Definition 3 (Realization) A _realization_ of a family $\mathcal{D}=(S,s_{0},K,\mathcal{B})$ of MCs is a function $r:K\rightarrow S$ s.t. $r(k)\in V_{k}$, for all $k\in K$. We say that realization $r$ induces MC $\mathcal{D}_{r}=(S,s_{0},\mathcal{B}_{r})$ iff $\mathcal{B}_{r}(s,s^{\prime})=\sum_{k\in K,r(k)=s^{\prime}}\mathcal{B}(s)(k)$ for any pair of states $s,s^{\prime}\in S$. The set of all realizations of $\mathcal{D}$ is denoted as $\mathcal{R}^{\mathcal{D}}$. The set $\mathcal{R}^{\mathcal{D}}=\prod_{k\in K}V_{k}$ of all possible realizations is exponential in $\lvert K\rvert$. ### 3.0.2 Counterexample-guided oracles We first consider the feasibility synthesis for a single-property specification and later, cf. Remark 1, generalize this to multiple properties and to optimal synthesis. The notion of counterexamples is at the heart of the oracle from [9] and Sect. 4. If an MC $D\not\models\varphi$, a _counterexample_ (CE) based on a critical subsystem can serve as diagnostic information about the source of the failure. We consider the following CE, motivated by the notion of critical subsystem in [37]. ###### Definition 4 (Counterexample) Let $D=(S,s_{0},\boldsymbol{P})$ be an MC with $s_{\bot}\not\in S$. The _sub- MC_ of $D$ induced by $C\subseteq S$ is the MC $D{\downarrow}C=(S\cup\\{s_{\bot}\\},s_{0},\boldsymbol{P}^{\prime})$, where the transition probability function $\boldsymbol{P}^{\prime}$ is defined by: $\displaystyle\boldsymbol{P}^{\prime}(s)=\begin{cases}\boldsymbol{P}(s)&\text{ if }s\in C,\\\ [s_{\bot}\mapsto 1]&\text{ otherwise}.\end{cases}$ The set $C$ and the sub-MC $D{\downarrow}C$ are called a _counterexample_ (CE) for the property $\mathbb{P}_{\leq\lambda}[\Diamond T]$ on MC $D$, if $D{\downarrow}C\not\models\mathbb{P}_{\leq\lambda}[\Diamond(T\cap(C\cup\\{s_{0}\\}))]$. Let $\mathcal{D}_{r}$ be an MC violating the specification $\varphi$. 
To compute other realizations violating $\varphi$, the oracle computes a critical subsystem $\mathcal{D}_{r}{\downarrow}C$, which is then used to deduce a so-called _conflict_ for $\mathcal{D}_{r}$ and $\varphi$.

###### Definition 5 (Conflict)

For a family of MCs $\mathcal{D}=(S,s_{0},K,\mathcal{B})$ and $C\subseteq S$, the set $K_{C}$ of _relevant parameters_ (called a _conflict_) is given by $\bigcup_{s\in C}\mathrm{supp}(\mathcal{B}(s))$.

It is straightforward to compute a set of violating realizations from a conflict. A _generalization_ of realization $r$ induced by the set $K_{C}\subseteq K$ of relevant parameters is the set $r{\uparrow}K_{C}=\\{r^{\prime}\in\mathcal{R}\mid\forall k\in K_{C}:$ $r(k)=r^{\prime}(k)\\}$. We often use the term _conflict_ to refer to its generalization. The size of a conflict, i.e., the number $|K_{C}|$ of relevant parameters, is crucial. Small conflicts potentially generalize $r$ to larger subfamilies $r{\uparrow}K_{C}$. It is thus important that the CEs contain as few parameterized transitions as possible; the size of a CE in terms of the number of states is not of interest. Furthermore, the overhead of providing CEs should not exceed the payoff: finding a large generalization may take some time, but small generalizations should be returned quickly. The CE-based oracle in [9] uses an off-the-shelf CE procedure [16, 41] and typically does not provide small conflicts.

## 4 A Smart Oracle with Counterexamples and Abstraction

This section develops an oracle based on CEs, tailored for use in the oracle-guided inductive synthesis loop described in Sect. 3. Its main features are:

* • a fast greedy approach to compute CEs that provide small conflicts: we achieve this by taking into account the position of the parameters;
* • awareness of the semantics of parameters, by using model-checking results from an abstraction of the family.

Before going into details, we provide some illustrative examples.
### 4.0.1 A motivating example Figure 2: Counterexamples for smaller conflicts. First, we illustrate what it means to take CEs that lead to small conflicts. Consider Fig. 2, with a family member $\mathcal{D}_{r}$ (left), where the superscript of a state identifier $s_{i}$ denotes parameters relevant to $s_{i}$. Consider the safety property $\varphi\equiv\mathbb{P}_{\leq 0.4}[\Diamond\\{t\\}]$. Clearly, $\mathcal{D}_{r}\not\models\varphi$, and we can construct two CEs: $C_{1}=\\{s_{0},s_{3},t\\}$ (center) and $C_{2}=\\{s_{0},s_{1},s_{2},t\\}$ (right) with conflicts $K_{C_{1}}=\\{X,Y\\}$ and $K_{C_{2}}=\\{X\\}$, respectively. It illustrates that a smaller CE does not necessarily induce a smaller conflict. We now illustrate awareness of the semantics of parameters. Consider the family $\mathcal{D}=(S,s_{0},K^{\prime},\mathcal{B})$, where $S=\\{s_{0},s_{1},s_{2},t,f\\}$, the parameters are $K^{\prime}=\\{X,Y,T^{\prime},F^{\prime}\\}$ with domains $V_{X}=\\{s_{1},s_{2}\\}$, $V_{Y}=\\{t,f\\}$, $V_{T^{\prime}}=\\{t\\}$, $V_{F^{\prime}}=\\{f\\}$, and a family $\mathcal{B}$ of transition probability functions defined in Fig. 3 (left). $\displaystyle\mathcal{B}(s_{0})$ $\displaystyle=[X\mapsto 1],$ $\displaystyle\mathcal{B}(s_{1})$ $\displaystyle=[T^{\prime}\mapsto 0.6,Y\mapsto 0.2,F^{\prime}\mapsto 0.2],$ $\displaystyle\mathcal{B}(s_{2})$ $\displaystyle=[T^{\prime}\mapsto 0.2,Y\mapsto 0.2,F^{\prime}\mapsto 0.6],$ $\displaystyle\mathcal{B}(t)$ $\displaystyle=[T^{\prime}\mapsto 1],$ $\displaystyle\mathcal{B}(f)$ $\displaystyle=[F^{\prime}\mapsto 1]$ Figure 3: A family $\mathcal{D}$ of four Markov chains (unreachable states are grayed out). As the parameters $T^{\prime}$ and $F^{\prime}$ each can take only one value, we consider $K=\\{X,Y\\}$ as the set of parameters. There are $\lvert V_{X}\rvert\times\lvert V_{Y}\rvert=4$ family members, depicted in Fig. 3(right). For conciseness, we omit some of the transition probabilities (recall that transition probabilities sum to one). 
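The family of Fig. 3 is small enough to analyse by brute force. The sketch below is our own illustration (plain value iteration stands in for a model checker; `Tp`/`Fp` denote $T^{\prime}$/$F^{\prime}$); it computes $\mathbb{P}[\mathcal{D}_{r},s_{0}\models\Diamond\\{t\\}]$ for all four members:

```python
from itertools import product

# The family of Fig. 3: B maps each state to a distribution over parameters.
B = {
    "s0": {"X": 1.0},
    "s1": {"Tp": 0.6, "Y": 0.2, "Fp": 0.2},
    "s2": {"Tp": 0.2, "Y": 0.2, "Fp": 0.6},
    "t":  {"Tp": 1.0},
    "f":  {"Fp": 1.0},
}
domains = {"X": ["s1", "s2"], "Y": ["t", "f"], "Tp": ["t"], "Fp": ["f"]}

def reach_t(r):
    """P[D_r, s0 |= eventually t], via the induced MC and value iteration."""
    P = {s: {} for s in B}
    for s, dist in B.items():                 # induced MC (Definition 3)
        for k, p in dist.items():
            P[s][r[k]] = P[s].get(r[k], 0.0) + p
    x = {s: 0.0 for s in B}
    for _ in range(200):                      # t is the target, f an absorbing sink
        x = {s: 1.0 if s == "t" else 0.0 if s == "f" else
             sum(p * x[v] for v, p in P[s].items()) for s in B}
    return round(x["s0"], 6)

results = {}
for vals in product(*domains.values()):
    r = dict(zip(domains, vals))
    results[(r["X"], r["Y"])] = reach_t(r)

# Per Definition 5, both members agreeing with r0 on X = s1 violate the
# threshold 0.3, so the conflict {X} would prune them together.
violating_with_X_s1 = [k for k, v in results.items() if k[0] == "s1" and v > 0.3]
```

Only the member with $X=s_{2}$, $Y=f$ stays below the threshold $0.3$, matching the discussion that follows.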
Only realization $r_{3}$ satisfies the safety property $\varphi\equiv\mathbb{P}_{\leq 0.3}[\Diamond\\{t\\}]$.

_CEGIS [9] illustrated_: Consider running CEGIS, and assume the oracle gets realization $r_{0}$ first. A model checker reveals $\mathbb{P}[{\mathcal{D}_{r_{0}}},s_{0}\models\Diamond T]=0.8>0.3$. The CE for $\mathcal{D}_{r_{0}}$ and $\varphi$ contains the (only) path to the target: $s_{0}{\rightarrow}s_{1}{\rightarrow}\,t$ having probability $0.8>0.3$. The corresponding CE $C=\\{s_{0},s_{1},t\\}$ induces the conflict $K_{C}=\\{X,Y\\}$. None of the parameters is generalized. The same argument applies to any subsequent realization: the constructed CEs do not allow for generalization, the oracle returns only the passed realization, and the learner keeps iterating until it happens to guess $r_{3}$.

_Can we do better?_ To answer this, consider CE generation as a game: the Pruner creates a critical subsystem $C$. The Adversary wins if it finds an MC satisfying $\varphi$ containing $C$, thus refuting that $C$ is a counterexample. In our setting, we change the game: the Adversary must select a family member rather than an arbitrary MC. Analogously, off-the-shelf CE generators construct a critical subsystem $C$ such that every possible extension of $C$ is a CE. These are _CEs without context_. In our game, the Adversary may not extend the MC arbitrarily, but must choose a family member. These are _CEs modulo a family_.

_Back to the example:_ Observe that for a CE for $\mathcal{D}_{r_{0}}$, we could omit states $t$ and $s_{1}$ from the set $C$ of critical states: we know for sure that, once $\mathcal{D}_{r_{0}}$ takes transition $(s_{0},s_{1})$, it will reach target state $t$ with probability at least $0.6$. This exceeds the threshold $0.3$, regardless of the value of the parameter $Y$. Hence, for family $\mathcal{D}$, the set $C^{\prime}=\\{s_{0}\\}$ is a critical subsystem.
The immediate advantage is that this set induces conflict $K_{C^{\prime}}=\\{X\\}$ (parameter $Y$ has been generalized). This enables us to reject all realizations from the set $r_{0}{\uparrow}K_{C^{\prime}}=\\{r_{0},r_{1}\\}$. _It is ‘easier’ to construct a CE for a (sub)family than for arbitrary MCs_. More generally, a successful oracle needs to have access to useful bounds, and effectively integrate them in the CE generation. ### 4.0.2 Counterexample construction We develop an algorithm using bounds on reachability probabilities, similar to the bounds used above. Let us assume that for some set of realizations $\mathcal{R}$ and for every state $s$, we have bounds $\textsl{lb}^{\mathcal{R}}(s),\textsl{ub}^{\mathcal{R}}(s)$, such that for every $r\in\mathcal{R}$ we have $\textsl{lb}^{\mathcal{R}}(s)\leq\mathbb{P}[\mathcal{D}_{r},s\models\Diamond T]\leq\textsl{ub}^{\mathcal{R}}(s)$. Such bounds always exist (take $0$ and $1$). We see later how we compute these bounds. In what follows, we fix $r$ and denote $\mathcal{D}_{r}=(S,s_{0},\boldsymbol{P})$. Let us assume $\mathcal{D}_{r}$ violates a safety property $\varphi\equiv\mathbb{P}_{\leq\lambda}[\Diamond T]$. The following definition is central: ###### Definition 6 (Rerouting) Let MC $D=(S,s_{0},\boldsymbol{P})$ with $s_{\top},s_{\bot}\not\in S$, $C\subseteq S$ a set of _expanded states_ and $\boldsymbol{\gamma}\colon S\setminus C\rightarrow[0,1]$ a _rerouting vector_. The _rerouting_ of MC $D$ w.r.t. 
$C$ and $\boldsymbol{\gamma}$ is the MC $D{\downarrow}C[\boldsymbol{\gamma}]=(S\cup\\{s_{\bot},s_{\top}\\},s_{0},\boldsymbol{P}^{C}_{\boldsymbol{\gamma}})$ with:

$\displaystyle\boldsymbol{P}^{C}_{\boldsymbol{\gamma}}(s)=\begin{cases}\boldsymbol{P}(s)&\text{ if }s\in C,\\\ [s_{\top}\mapsto\boldsymbol{\gamma}(s),s_{\bot}\mapsto(1{-}\boldsymbol{\gamma}(s))]&\text{ if }s\in S{\setminus}C,\\\ [s\mapsto 1]&\text{ if }s\in\\{s_{\top},s_{\bot}\\}.\\\ \end{cases}$

Essentially, $D{\downarrow}C[\boldsymbol{\gamma}]$ extends the MC $D$ with additional _sink states_ $s_{\top}$ and $s_{\bot}$ and replaces all outgoing transitions of any non-expanded state $s\in S{\setminus}C$ by a transition leading to $s_{\top}$ (with probability $\boldsymbol{\gamma}(s)$) and a complementary one to $s_{\bot}$. We consider $s_{\top}$ to be the new target and let $\varphi^{\prime}$ denote the updated property. The transition $s\xrightarrow{\boldsymbol{\gamma}(s)}s_{\top}$ may be considered a ‘shortcut’ that bypasses the successors of $s$ and leads straight to target $s_{\top}$ with probability $\boldsymbol{\gamma}(s)$. To ensure that $D{\downarrow}C[\boldsymbol{\gamma}]$ is a CE, the value $\boldsymbol{\gamma}(s)$ must be a lower bound on the reachability probability from $s$ in $D$. When constructing a CE for a single MC, we pick $\boldsymbol{\gamma}=\boldsymbol{0}$, whereas when this MC is induced by a realization $r\in\mathcal{R}$, we can safely pick $\boldsymbol{\gamma}=\textsl{lb}^{\mathcal{R}}$. The CE will then be valid for every $r^{\prime}\in\mathcal{R}$: it is a CE-modulo-$\mathcal{R}$. Algorithmically, we employ a state-exploration approach and therefore start with $C^{(0)}=\emptyset$, i.e., all states are initially rerouted. If this is a CE, we are done. Otherwise, the rerouting $D{\downarrow}C^{(0)}[\boldsymbol{\gamma}]$ satisfies $\varphi^{\prime}$, and we ‘expand’ some states to obtain a CE. Naturally, we must expand reachable states to change the satisfaction of $\varphi$.
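Definition 6 and the gradual expansion can be condensed into a short sketch. The code below is our illustration (not the authors' implementation); it hard-codes $\mathcal{D}_{r_{0}}$ from Fig. 3, the nontrivial parameters of each state, and family-wide lower bounds `LB` for reaching $t$ (assumed valid for every member; the same values are derived in Example 1), and expands horizon states with the fewest irrelevant parameters until the rerouted MC violates the bound:

```python
# Sketch of Def. 6 plus gradual expansion (our illustration, not the paper's tool).
P_R0 = {"s0": {"s1": 1.0}, "s1": {"t": 0.8, "f": 0.2},
        "s2": {"t": 0.4, "f": 0.6}, "t": {"t": 1.0}, "f": {"f": 1.0}}
PARAMS = {"s0": {"X"}, "s1": {"Y"}, "s2": {"Y"}, "t": set(), "f": set()}
LB = {"s0": 0.2, "s1": 0.6, "s2": 0.2, "t": 1.0, "f": 0.0}

def reroute(P, C, gamma):
    """D down C [gamma]: keep expanded states C, reroute the rest to s_top/s_bot."""
    Q = {s: dict(P[s]) if s in C else
         {"s_top": gamma[s], "s_bot": 1.0 - gamma[s]} for s in P}
    Q["s_top"], Q["s_bot"] = {"s_top": 1.0}, {"s_bot": 1.0}
    return Q

def reach(P, targets, init="s0", n=200):
    """Probability of eventually hitting `targets`, by plain value iteration."""
    x = {s: 0.0 for s in P}
    for _ in range(n):
        x = {s: 1.0 if s in targets else
             sum(p * x[v] for v, p in P[s].items()) for s in P}
    return x[init]

def expand_until_ce(P, gamma, lam):
    """Expand horizon states (fewest irrelevant parameters first) until the
    rerouted MC exceeds the threshold; assumes the full MC violates the bound."""
    C, K = set(), set()
    while reach(reroute(P, C, gamma), {"t", "s_top"}) <= lam:
        horizon = [s for s in P if s not in C and
                   (s == "s0" or any(s in P[c] for c in C))]
        s_bar = min(horizon, key=lambda s: len(PARAMS[s] - K))
        C.add(s_bar)
        K |= PARAMS[s_bar]
    return C, K
```

On this toy input, `expand_until_ce(P_R0, LB, 0.3)` expands only $s_{0}$ and yields the conflict $\\{X\\}$; with trivial bounds $\boldsymbol{\gamma}=\boldsymbol{0}$ it must also expand $s_{1}$, producing the larger conflict $\\{X,Y\\}$.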
By expanding some state $s\in S$, we abandon the abstraction associated with the shortcut $s\xrightarrow{\boldsymbol{\gamma}(s)}s_{\top}$ and replace it with the concrete behavior that was inherent to state $s$ in MC $D$. Expanding a state cannot decrease the induced reachability probability, as $\textsl{lb}^{\mathcal{R}}$ is a valid lower bound. This gradual expansion of the reachable state space continues until, for some $C\subseteq S$, the corresponding rerouting $D{\downarrow}C[\boldsymbol{\gamma}]$ violates $\varphi^{\prime}$. The gradual expansion process terminates, as $D{\downarrow}S[\boldsymbol{\gamma}]\equiv D$ and our assumption is $D\not\models\varphi$. We show this process on an example.

Figure 4: Finding a CE to $\mathcal{D}_{r_{0}}$ and $\varphi$ from Fig. 3 using the rerouting vector $\boldsymbol{\gamma}=\textsl{lb}^{\mathcal{R}}$.

###### Example 1

Reconsider $\mathcal{D}$ in Fig. 3 with $\varphi\equiv\mathbb{P}_{\leq 0.3}[\Diamond\\{t\\}]$. Using the method outlined below, we get: $\textsl{lb}^{\mathcal{R}}=[s_{0}\mapsto 0.2,s_{1}\mapsto 0.6,s_{2}\mapsto 0.2,t\mapsto 1,f\mapsto 0]$. In the absence of any bounds, the CE is $\\{s_{0},s_{1},t\\}$. Consider the gradual rerouting approach: we set $\boldsymbol{\gamma}=\textsl{lb}^{\mathcal{R}}$, $C^{(0)}=\emptyset$ and have $D^{(0)}\coloneqq\mathcal{D}_{r_{0}}{\downarrow}C^{(0)}[\boldsymbol{\gamma}]$, see Fig. 4(a). Verifying this MC against $\varphi^{\prime}=\mathbb{P}_{\leq 0.3}[\Diamond(T\cup\\{s_{\top}\\})]$ yields $\mathbb{P}[D^{(0)},s_{0}\models\Diamond(T\cup\\{s_{\top}\\})]=\boldsymbol{\gamma}(s_{0})=0.2\leq 0.3$, i.e., the set $C^{(0)}$ is not a CE. We now expand the initial state, i.e., $C^{(1)}=\\{s_{0}\\}$, and let $D^{(1)}\coloneqq\mathcal{D}_{r_{0}}{\downarrow}C^{(1)}[\boldsymbol{\gamma}]$, see Fig. 4(b). Verifying $D^{(1)}$ yields $\mathbb{P}[D^{(1)},s_{0}\models\Diamond(T\cup\\{s_{\top}\\})]=1\cdot\boldsymbol{\gamma}(s_{1})=0.6>0.3$.
Thus, the set $C^{(1)}$ is critical, and the corresponding conflict is $K_{C^{(1)}}=\mathrm{supp}(\mathcal{B}(s_{0}))=\\{X\\}$. This is smaller than the naively computed conflict $\\{X,Y\\}$.

### 4.0.3 Greedy state expansion strategy

Recall from Fig. 2 that for an MC $\mathcal{D}_{r}$ with $\mathcal{D}_{r}\not\models\varphi$, multiple CEs may exist, inducing different conflicts. An efficient expansion strategy should yield a CE that induces a small number of relevant parameters (to prune more family members), and this CE should preferably be obtained with a small number of model-checking queries. The method presented in Alg. 1 meets these criteria.

Input: an MC $\mathcal{D}_{r}$, a property $\varphi\equiv\mathbb{P}_{\bowtie\lambda}[\Diamond T]$ s.t. $\mathcal{D}_{r}\not\models\varphi$, a rerouting vector $\boldsymbol{\gamma}$.
Output: a conflict $K$ for $\mathcal{D}_{r}$ and $\varphi$.

1. $i\leftarrow 0$, $K^{(i)}\leftarrow\emptyset$
2. while true do
3. $\quad C^{(i)},H^{(i)}\leftarrow\mathrm{reachableViaHoles}(\mathcal{D}_{r},K^{(i)})$
4. $\quad D^{(i)}\leftarrow\mathcal{D}_{r}{\downarrow}C^{(i)}[\boldsymbol{\gamma}]$
5. $\quad$ if $\mathbb{P}[D^{(i)}\models\Diamond(T\cup\\{s_{\top}\\})]\not\bowtie\lambda$ then return $K^{(i)}$
6. $\quad\overline{s}\leftarrow\mathrm{chooseToExpand}(H^{(i)},K^{(i)})$
7. $\quad K^{(i+1)}\leftarrow K^{(i)}\cup\mathrm{supp}(\mathcal{B}(\overline{s}))$
8. $\quad i\leftarrow i+1$
9. end while

Algorithm 1: Counterexample construction based on rerouting.

The algorithm expands multiple states between subsequent model checks, while expanding only states that are associated with relevant parameters. In particular, in each iteration, we keep track of the set $K^{(i)}$ of relevant parameters, optimistically starting with $K^{(0)}=\emptyset$. We compute (line 3) the set $C^{(i)}$ of states that are reachable from the initial state via states which are associated only with relevant parameters in $K^{(i)}$, i.e., via states for which $\mathrm{supp}(\mathcal{B}(s))\subseteq K^{(i)}$.
Here, $H^{(i)}$ represents a state exploration ‘horizon’: the set of states reachable from $C^{(i)}$ but containing some (still) irrelevant parameters. We then construct the corresponding rerouting $D{\downarrow}C^{(i)}[\boldsymbol{\gamma}]$ and check whether it is a CE. If it is not, we greedily choose a state $\overline{s}$ from the horizon $H^{(i)}$ containing the least number of irrelevant parameters and add these parameters to our conflict (see line 7). The resulting conflict may not be minimal, but it is computed fast. Our algorithm also applies to probabilistic liveness properties, using $\boldsymbol{\gamma}=\textsl{ub}^{\mathcal{R}}$ (some care is required regarding loops, see [9]).

### 4.0.4 Computing bounds

We compute $\textsl{lb}^{\mathcal{R}}$ and $\textsl{ub}^{\mathcal{R}}$ using an abstraction [10]. The method considers some set $\mathcal{R}$ of realizations and computes the corresponding _quotient Markov decision process (MDP)_ that over-approximates the behavior of all MCs in the family $\mathcal{R}$. Model checking this MDP yields an upper and a lower bound of the induced probabilities for all states over all realizations in $\mathcal{R}$. That is, $\textsl{Bound}(\mathcal{D},\mathcal{R})$ computes $\textsl{lb}^{\mathcal{R}}\in\mathbb{R}^{S}$ and $\textsl{ub}^{\mathcal{R}}\in\mathbb{R}^{S}$ such that for each $s\in S$:

$\textsl{lb}^{\mathcal{R}}(s)\ \leq\ \min_{r\in\mathcal{R}}\mathbb{P}[\mathcal{D}_{r},s\models\Diamond T]\ \leq\ \max_{r\in\mathcal{R}}\mathbb{P}[\mathcal{D}_{r},s\models\Diamond T]\ \leq\ \textsl{ub}^{\mathcal{R}}(s).$

To allow for refinement, two properties are crucial (with point-wise inequalities):

$\mbox{1. }\textsl{lb}^{\mathcal{R}}\leq\textsl{lb}^{\mathcal{R}^{\prime}}\wedge\textsl{ub}^{\mathcal{R}}\geq\textsl{ub}^{\mathcal{R}^{\prime}}\mbox{ for }\mathcal{R}^{\prime}\subseteq\mathcal{R}\quad\mbox{and}\quad\mbox{2. }\textsl{lb}^{\\{r\\}}=\textsl{ub}^{\\{r\\}}\mbox{ for }r\in\mathcal{R}.$

In [10], the abstraction and refinement together define an abstraction-refinement loop (AR) that addresses the feasibility problem. In the worst case, this loop analyses $2\cdot|\mathcal{R}|$ quotient MDPs, which (as of now) may be arbitrarily larger than the number of family members they represent.

## 5 Hybrid Dual-Oracle Synthesis

We introduce an extended synthesis loop in which the abstraction-based reasoning is used both to prune the family $\mathcal{R}$ and to accelerate the CE-based oracle from Sect. 4. The intuitive idea is outlined in Fig. 5. Note that if the CE-based oracle is not exploited, we emulate AR (cf. Sect. 4.0.4), whereas if the abstraction oracle is not used, we emulate CEGIS (with the novel oracle).

Figure 5: Conceptual hybrid (dual-oracle) synthesis: the learner exchanges candidate realizations, probability bounds, and pruned subfamilies with the CE-oracle and the abstraction oracle.

Let us motivate combining these oracles in a flexible way. The naive version outlined in the previous section assumes a single abstraction step and invokes CEGIS with the bounds obtained from that step. Evidently, the better (tighter) the bounds $\boldsymbol{\gamma}$, the better the CEs. However, the abstraction-based bounds for $\mathcal{R}$ may be very loose. These bounds can be improved by splitting the set $\mathcal{R}$ and using the bounds on the two sub-families. The idea is to run a limited number of AR steps and then invoke CEGIS. Our experiments reveal that it can be crucial to be adaptive, i.e., the integrated method must be able to detect at run time when to switch.

Input : A family $\mathcal{D}$, a reachability property $\varphi$.
Output : Either a member $r$ of $\mathcal{D}$ with $r\models\varphi$, or a report that no such $r$ exists in $\mathcal{D}$.

1. $\overline{\mathcal{R}}\leftarrow\\{\mathcal{R}^{\mathcal{D}}\\}$ // each analysed (sub-)family also holds bounds
2. $\delta_{CEGIS}\leftarrow 1$ // time allocation factor for CEGIS
3. while true do
4. $\quad result,\overline{\mathcal{R}}^{\prime},\sigma_{AR},t_{AR}\leftarrow\mathrm{AR.run}(\overline{\mathcal{R}},\varphi)$
5. $\quad$ if $result.\mathrm{decided}()$ then return $result$
6. $\quad$ CEGIS.setTimeout($t_{AR}\cdot\delta_{CEGIS}$)
7. $\quad result,\sigma_{CEGIS},\overline{\mathcal{R}}^{\prime\prime}\leftarrow\mathrm{CEGIS.run}(\overline{\mathcal{R}}^{\prime},\varphi)$
8. $\quad$ if $result.\mathrm{decided}()$ then return $result$
9. $\quad\delta_{CEGIS}\leftarrow\sigma_{CEGIS}/\sigma_{AR}$
10. $\quad\overline{\mathcal{R}}\leftarrow\overline{\mathcal{R}}^{\prime\prime}$
11. end while

Algorithm 2: Hybrid (dual-oracle) synthesis.

The proposed hybrid method switches between AR and CEGIS, where we allow for refining during the AR phase and use the obtained refined bounds during CEGIS. Additionally, we estimate the efficiency $\sigma$ (e.g., the number of pruned MCs per time unit) of the two methods and allocate more time $t$ to the method with superior performance. That is, if we detect that CEGIS prunes sub-families twice as fast as AR, we double the time in the next round for CEGIS. The resulting algorithm is summarized in Alg. 2. Recall that AR (line 4 of Alg. 2) takes one family from $\overline{\mathcal{R}}$, either solves it or splits it, and returns the set of undecided families $\overline{\mathcal{R}}^{\prime}$. In contrast, CEGIS processes multiple families from $\overline{\mathcal{R}}^{\prime}$ until the timeout and then returns the set of undecided families $\overline{\mathcal{R}}^{\prime\prime}$. This workflow is motivated by the fact that one iteration of AR (i.e., the involved MDP model checking) is typically significantly slower than one CEGIS iteration.
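Stripped of all model checking, the control flow of Alg. 2 fits in a few lines. In this sketch, `ar_step` and `cegis_run` are hypothetical stand-ins for the two oracles (the real ones live inside the synthesis tool); only the time-allocation rule $\delta_{CEGIS}\leftarrow\sigma_{CEGIS}/\sigma_{AR}$ mirrors the algorithm:

```python
def hybrid(families, ar_step, cegis_run):
    """Skeleton of Alg. 2: alternate AR and CEGIS, allocating CEGIS time
    proportionally to its relative pruning efficiency (sigma_CEGIS / sigma_AR)."""
    delta = 1.0  # time-allocation factor for CEGIS
    while families:
        result, rest, sigma_ar, t_ar = ar_step(families)
        if result is not None:
            return result
        result, sigma_cegis, rest = cegis_run(rest, timeout=t_ar * delta)
        if result is not None:
            return result
        delta = sigma_cegis / sigma_ar  # CEGIS twice as fast => twice the time
        families = rest
    return "no satisfying realization"

# Dummy oracles just to exercise the control flow: AR removes one family per
# call, CEGIS decides nothing; the loop terminates with a negative answer.
def dummy_ar(fams):
    return None, fams[1:], 1.0, 1.0

def dummy_cegis(fams, timeout):
    return None, 2.0, fams
```

With `dummy_cegis` reporting twice the efficiency of `dummy_ar`, the next CEGIS round is granted twice the AR time, exactly the adaptive behavior described above.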
###### Remark 1

Although the developed framework for integrated synthesis has been discussed in the context of feasibility with respect to a single property $\varphi$, it can easily be generalized to handle _multiple_-property specifications as well as to treat _optimal_ synthesis. Regarding multiple properties, the idea remains the same: analyzing the quotient MDP with respect to multiple properties yields multiple probability bounds. After initiating a CEGIS loop and obtaining a violating realization, we can construct a separate conflict for each unsatisfied property, while using the corresponding probability bound to enhance the CE generation process. Optimal synthesis is handled similarly to feasibility, but, after obtaining a satisfying solution, we update the optimizing property to exclude this solution: e.g., for maximal synthesis this translates to increasing the threshold of the maximizing property. Having exhausted the search space of family members, the last obtained solution is declared to be the optimal one.

## 6 Experimental evaluation

#### Implementation.

We implemented the hybrid oracle on top of the probabilistic model checker Storm [18]. While the high-performance parts were implemented in C++, we used a Python API to flexibly construct the overall synthesis loop. For SMT solving, we used Z3 [29]. The tool chain takes a PRISM [27] or JANI [6] sketch and a set of temporal properties, and returns a satisfying realization, if one exists, or reports that no such realization exists. The implementation in the form of an artefact is available at https://zenodo.org/record/4422543.

#### Set-up.

We compare the adaptive oracle-guided synthesis with two state-of-the-art synthesis methods: program-level CEGIS [9] using MaxSat-based CE generation [16, 41], and AR [10]. All methods use the same architecture and data structures from Storm.
All experiments are run on an Ubuntu 19.04 machine with an Intel i5-8300H CPU (4 cores at 2.3 GHz) and up to 8 GB RAM; all algorithms are executed on a single thread. The benchmarks consist of five different models, see Table 1, from various domains that were used in [9, 10]. As opposed to the benchmarks considered in [9, 10], we use larger variants of _Grid_ and _Herman_ to better demonstrate differences in the performance of the individual methods. To investigate the scalability of the methods, we consider a new variant of the _Herman_ model that allows us to scale the number of randomization strategies and thus the family size. In particular, we will compare performance on two instances of different sizes: _small Herman*_ (5k members) and _large Herman*_ (3.1M members; further statistics are reported in Table 1).

| model | $\lvert K\rvert$ | $\lvert\mathcal{R}^{\mathcal{D}}\rvert$ | MDP size | avg. MC size |
|---|---|---|---|---|
| _Grid_ | 8 | 65k | 11.5k | 1.2k |
| _Maze_ | 20 | 1M | 9k | 5.4k |
| _DPM_ | 16 | 43M | 9.5k | 2.2k |
| _Pole_ | 17 | 1.3M | 6.6k | 5.6k |
| _Herman_ | 8 | 0.5k | 48k | 5.2k |
| _Herman*_ | 7 | 3.1M | 6k | 1k |

Table 1: Summary of the benchmarks and their statistics.

To reason about the pruning efficiency of different synthesis methods, we want to avoid feasible synthesis problems, where the order of family exploration can lead to inconsistent performance. Instead, we primarily focus on non-feasible problems, where all realizations need to be explored in order to prove unsatisfiability. The experimental evaluation is presented in three parts. (1) We evaluate the novel CE construction method and compare it with the MaxSat-based oracle from [9]. (2) We compare the hybrid synthesis loop with the two baselines AR and CEGIS. (3) We consider novel hard synthesis instances (multi-property synthesis, finding optimal programs) on instances of the model _Herman*_.
### 6.0.1 Comparing CE construction methods

We consider _the quality of the CEs_ and _their generation time_. In particular, we want to investigate (1) whether using CEs-modulo-families yields better CEs, (2) how the quality of CEs from the smart oracle compares to that of the MaxSat-based oracle, and (3) how their generation times compare. As a measure of the quality of a CE, we take the average, over the constructed CEs, of the ratio of relevant parameters to the total number of parameters; smaller ratios imply better CEs. To measure the influence of using CEs-modulo-families, two types of bounds are used: (i) trivial bounds (i.e., $\boldsymbol{\gamma}=\boldsymbol{0}$ for safety and $\boldsymbol{\gamma}=\boldsymbol{1}$ for liveness properties), and (ii) non-trivial bounds corresponding to the entire family $\mathcal{R}^{\mathcal{D}}$, representing the most conservative estimate. The results are reported in (the left part of) Table 2. In the next subsection, we investigate the same benchmarks from the point of view of the performance of the synthesis methods, which also shows the immediate effect of the new CE generation strategy.
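The quality measure can be stated in two lines; this helper is our own, not part of the evaluation scripts:

```python
def ce_quality(conflicts, total_params):
    """Average ratio of relevant parameters per conflict: smaller is better."""
    return sum(len(c) for c in conflicts) / (len(conflicts) * total_params)
```

For instance, two conflicts of sizes 1 and 2 over $\lvert K\rvert=2$ parameters give a quality of $0.75$.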
| model | thr. | MaxSat [16] | expansion, trivial | expansion, non-trivial | CEGIS [9] iters | time | AR [10] iters | time | Hybrid (new) iters | time |
|---|---|---|---|---|---|---|---|---|---|---|
| _Grid_ | | 0.59 (0.025) | 0.50 (0.001) | 0.50 | 613 | 30 | 5325 | 486 | (285, 11) | 6 |
| _Grid_ | $*$ | 0.74 (0.026) | 0.65 (0.001) | 0.65 | 1801 | 93 | 6139 | 540 | (2100, 127) | 33 |
| _Maze_ | | 0.21 (0.247) | 0.55 (0.009) | 0.38 | 290 | 5449 | 49 | 17 | (105, 13) | 7 |
| _Maze_ | $*$ | 0.24 (2.595) | 0.63 (0.012) | 0.46 | 301 | 6069 | 63 | 26 | (146, 17) | 9 |
| _DPM_ | | 0.32 (0.447) | 0.61 (0.007) | 0.53 | 2906 | 2488 | 299 | 25 | (631, 143) | 23 |
| _DPM_ | $*$ | 0.33 (0.525) | 0.49 (0.006) | 0.42 | 3172 | 2782 | 1215 | 81 | (2374, 545) | 76 |
| _Pole_ | | - | 0.87 (0.062) | 0.16 | - | - | 309 | 12 | (3, 5) | 1 |
| _Pole_ | $*$ | - | 0.54 (0.041) | 0.29 | - | - | 615 | 23 | (80, 61) | 6 |
| _Herman_ | | - | 0.91 (0.011) | 0.50 | - | - | 171 | 86 | (24, 1) | 9 |
| _Herman_ | $*$ | - | 0.88 (0.016) | 0.87 | - | - | 643 | 269 | (485, 13) | 29 |

Table 2: CE quality (left) and performance of three synthesis methods (right). For each model/property, we report results for two different thresholds, where the symbol ‘$*$’ marks the one closer to the feasibility threshold, representing the more difficult synthesis problem. The symbol ‘-’ marks a two-hour timeout. CE quality: the presented numbers give the CE quality (the smaller, the better); the numbers in parentheses represent the average run-time of constructing one CE in seconds (run-times for constructing CEs using non-trivial bounds are similar to those for trivial bounds and are thus not reported). Performance: for each method, we report the number of iterations (for the hybrid method, the reported values are the iterations of the CEGIS and AR oracles, respectively) and the run-time in seconds.

The first observation is that using non-trivial bounds (as opposed to trivial ones) for the state expansion approach can drastically decrease the conflict size.
It turns out that the conflicts obtained using the greedy approach are mostly larger than those obtained with the MaxSat method. However (see _Grid_), even for trivial bounds, we may obtain smaller conflicts than with MaxSat: a minimal-command CE does not necessarily induce an optimal conflict. On the other hand, comparing the run-times in the parentheses, one can see that computing CEs via the greedy state expansion is orders of magnitude faster than computing command-optimal ones using MaxSat. It is good to realize that the greedy method makes at most $|K|$ model-checking queries to compute a CE, while the MaxSat method may make exponentially many such queries. Overall, the greedy method using the non-trivial bounds obtains CEs of comparable quality to the MaxSat method, while being orders of magnitude faster.

### 6.0.2 Performance comparison with AR/CEGIS

We compare the hybrid synthesis loop from Sect. 5 with two state-of-the-art baselines: CEGIS and AR. The results are displayed in (the right half of) Table 2. _In all 10 cases, the hybrid method outperforms the baselines; it is up to an order of magnitude faster._ Let us discuss the performance of the hybrid method. We classify benchmarks along two dimensions: (1) the performance of CEGIS and (2) the performance of AR. Based on the empirical performance, we classify _Grid_ as good-for-CEGIS (and not for AR), _Maze_, _Pole_ and _DPM_ as good-for-AR (and not for CEGIS), and _Herman_ as hard (for both). Roughly, AR works well when the quotient MDP does not blow up and its analysis is precise due to consistent schedulers, i.e., when the parameter dependencies are not crucial for a precise analysis. CEGIS performs well when the CEs are small and fast to compute. On the other hand, synthesis problems for which neither pure CEGIS nor pure AR is able to effectively reason about non-trivial subfamilies inherently profit from a hybrid method.
The main point we want to discuss is _how the hybrid method reinforces the strengths of both methods, rather than their weaknesses_. In the hybrid method, two factors determine the efficiency: (i) _how fast do we get bounds on the reachability probability that are tight enough to enable the construction of good counterexamples?_ and (ii) _how good are the constructed counterexamples?_ The former factor is addressed by the proposed adaptive scheme (see Alg. 2), where the method prefers AR-like analysis and continues refinement until the computed bounds allow the construction of small counterexamples. The latter factor was discussed above. Let us now discuss how these two aspects are reflected in the benchmarks. In good-for-CEGIS benchmarks like _Grid_, after analyzing a quotient MDP for the whole family, the hybrid method profits mostly from the obtained bounds, which yield better CEs, thus outperforming CEGIS. Indeed, the CEs are then found so fast that their generation is no longer the bottleneck; this also explains why the speedup in CE construction does not immediately translate into an equal speedup of the overall synthesis loop. In the good-for-AR benchmark _DPM_, the hybrid method provides only a minor improvement, as it has to perform a large number of AR iterations before the novel CE-based pruning can be used effectively. This can be considered the worst-case scenario for the hybrid method. On other good-for-AR benchmarks like _Maze_ and _Pole_, the good performance of AR allows us to quickly obtain tight bounds, which can then be exploited by CEGIS. Finally, in hard models like _Herman_, abstraction-refinement is very expensive, but even the first round yields bounds that, as opposed to the trivial bounds, enable good CEs: CEGIS can keep using these bounds to quickly prune the search space.

### 6.0.3 More complicated synthesis problems

Our new approach can push the limits of synthesis benchmarks significantly.
We illustrate this by considering a new variant of the _Herman_ model, _Herman*_, and a property imposing an upper bound on the expected number of rounds until stabilization. We put this bound just below the optimal (i.e., the minimal) value, yielding a hard non-feasible problem. The synthesis results are summarized in Table 3. As CEGIS performs poorly on _Herman_, it is excluded here. First, we investigate on _small Herman*_ how the methods handle synthesis for multi-property specifications. We add one feasible property to the (still non-feasible) specification (row ‘two properties’). While including more properties typically slows down the AR computation, the performance of the hybrid method is not affected, as the corresponding overhead is mitigated by additional pruning opportunities. Second, we consider optimal synthesis for the property as used in the feasibility synthesis. The hybrid method requires only a minor overhead to find an optimal solution compared to checking feasibility; this overhead is significantly larger for AR. Next, we consider the _large Herman*_ model, which has significantly more randomization strategies (3.1M members), including solutions that lead to considerably faster stabilization. This model is out of reach for existing synthesis approaches: one-by-one enumeration takes more than 27 hours, and AR performs even worse: solving the feasibility and optimality problems requires 47 and 55 hours, respectively. On the other hand, the proposed hybrid method is able to solve these problems within minutes. Finally, we consider a relaxed variant of optimal synthesis ($5\%$-optimality) guaranteeing that the found solution is at most $5\%$ worse than the optimum. Relaxing the optimality criterion speeds up the hybrid synthesis method by about a factor of three.
These experiments clearly demonstrate that scaling up the synthesis problem by several orders of magnitude renders the existing synthesis methods infeasible: they need tens of hours to solve the synthesis problems. Meanwhile, the hybrid method tackles these difficult synthesis problems without a significant penalty and is capable of producing a solution within minutes.

| synthesis problem | AR iters | AR time | Hybrid iters | Hybrid time |
|---|---|---|---|---|
| feasibility | 81 | 30s | (274, 1) | 7s |
| two properties | 97 | 38s | (274, 1) | 8s |
| optimality | 531 | 150s | (571, 7) | 12s |

| synthesis problem | AR iters | AR time | Hybrid iters | Hybrid time |
|---|---|---|---|---|
| feasibility | 69k | 47h | (14280, 2) | 13.4m |
| optimality | 83k | 55h | (16197, 3) | 16.8m |
| $5\%$-optimality | 60k | 42h | (6421, 7) | 5.1m |

Table 3: The impact of scaling the family size (of the _Herman*_ model) and handling more complex synthesis problems. The upper table shows the results for the smaller variant (5k members), the lower table for the larger one (3.1M members).

## 7 Conclusion

We present a novel method for the automated synthesis of probabilistic programs. Pairing counterexample-guided inductive synthesis with a deductive oracle based on an MDP abstraction, we develop a synthesis technique enabling faster construction of smaller counterexamples. Evaluating the method on case studies from different domains, we demonstrate that the novel CE construction and the adaptive strategy lead to a significant acceleration of the synthesis process. The proposed method is able to reduce the run-time for challenging problems from days to minutes. In our future work, we plan to investigate counterexamples on the quotient MDPs and improve the abstraction-refinement strategy.

## References

* [1] Ábrahám, E., Becker, B., Dehnert, C., Jansen, N., Katoen, J.P., Wimmer, R.: Counterexample generation for discrete-time Markov models: An introductory survey. In: SFM. LNCS, vol. 8483, pp. 65–121.
Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made. The images or other third party material in this chapter are included in the chapter’s Creative Commons license, unless indicated otherwise in a credit line to the material.
If material is not included in the chapter’s Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.
Affiliations: 1. Aalto University Metsähovi Radio Observatory, Metsähovintie 114, 02540 Kylmälä, Finland; e-mail: <EMAIL_ADDRESS> 2. Aalto University Department of Electronics and Nanoengineering, PL15500, FI-00076 Aalto, Finland 3. Deutsches GeoForschungsZentrum (GFZ), Potsdam, Telegrafenberg, 14473 Potsdam, Germany 4. Institute of Geodesy and Geoinformation Science, Technische Universität Berlin, Straße des 17. Juni 135, 10623, Berlin, Germany 5. Finnish Geospatial Research Institute, Geodeetinrinne 2, FIN-02430 Masala, Finland

# Evidence of the $Gaia$–VLBI position differences being related to radio source structure

Ming H. Xu (1,2,4), Susanne Lunz (3), James M. Anderson (4), Tuomas Savolainen (1,2), Nataliya Zubko (6), Harald Schuh (4,3)

(Received ***; accepted ***)

###### Abstract

Context. We report the relationship between the $Gaia$–VLBI position differences and the magnitudes of source structure effects in VLBI observations.

Aims. Because the $Gaia$–VLBI position differences are statistically significant for a considerable number of common sources, we attempt to discuss and explain these position differences based on VLBI observations and available source images at cm-wavelengths.

Methods. Based on the derived closure amplitude root-mean-square (CARMS), which quantifies the magnitudes of source structure effects in the VLBI observations used for building the third realization of the International Celestial Reference Frame, the arc lengths and normalized arc lengths of the position differences are examined in detail. The radio jet directions and the directions of the $Gaia$–VLBI position differences are investigated for a small sample of sources.

Results. Both the arc lengths and normalized arc lengths of the $Gaia$ and VLBI positions are found to increase with the CARMS values. The majority of the sources with statistically significant position differences are associated with the sources having extended structure.
Radio source structure is one of the major factors contributing to these position differences, and it can be the dominant factor for a number of sources. The finding that the vectors of the $Gaia$–VLBI position differences tend to be parallel to the radio-jet directions is confirmed with stronger evidence.

###### Key Words.: galaxies: active / galaxies: jets / astrometry / reference systems / radio continuum: galaxies

## 1 Introduction

The International Celestial Reference Frame (ICRF) was adopted as the Fundamental Celestial Reference Frame for astronomy in January 1998 by the International Astronomical Union (IAU) (Ma et al., 1998). The ICRF is realized by the positions of distant radio sources, mostly active galactic nuclei (AGNs), based on the astrometric/geodetic very-long-baseline interferometry (VLBI) observations coordinated by the International VLBI Service for Geodesy and Astrometry (IVS; Schuh & Behrend, 2012; Nothnagel et al., 2017; please also refer to the IVS website, https://ivscc.gsfc.nasa.gov/index.html), and it relies on the VLBI technique for its maintenance and improvement. As officially adopted by the IAU in January 2019, the third realization of the ICRF (ICRF3; Charlot et al., 2020) is established based on 40 years of VLBI observations and, for the first time, at three different radio frequencies independently. The radio source positions in the ICRF3 have accuracies at the sub-milliarcsecond level. The European Space Agency mission $Gaia$ (https://sci.esa.int/web/gaia; Gaia Collaboration et al., 2016) has released position estimates and other astrometric parameters for celestial objects with optical $G$ magnitudes < 21 mag based on the observations made during the 22 months since July 2014 (DR2; Gaia Collaboration et al., 2018a). A color-dependent calibration is possible based on the $Gaia$ DR2 and leads to improvements in the astrometric solution thereafter (Lindegren et al., 2020).
$Gaia$ Early Data Release 3 (EDR3; Gaia Collaboration et al., 2020) has made the data available based on the first 34 months of its observations. A good overall agreement between radio and optical positions was achieved for the cross-matched common objects (Mignard et al., 2016; Gaia Collaboration et al., 2018b); the median arc length between the source positions from $Gaia$ and VLBI is $\sim$0.5 milliarcsecond (mas) based on the $Gaia$ DR2 and the ICRF3. However, the distribution of the arc lengths between radio and optical positions normalized by their uncertainties, called the normalized arc length hereafter, deviates from the expected Rayleigh distribution with $\sigma$ = 1. The most obvious deviations in that distribution are the long tail spreading to very large normalized arc lengths and the significant deficit of values in the bins around the expected peak. In the $Gaia$ DR1, there were only a few percent of sources with normalized arc lengths > 3 (Mignard et al., 2016; Petrov & Kovalev, 2017b). In the $Gaia$ DR2, the number of such sources even increases to more than 10 percent (Petrov & Kovalev, 2017a; Gaia Collaboration et al., 2018b; Petrov et al., 2019; Makarov et al., 2019). By deselecting objects mostly based on their optical properties, Makarov et al. (2019) still found 20 percent of sources having normalized arc lengths > 3. The factors causing these position differences between optical and radio are still unclear, even though there are a variety of possible astrophysical explanations (Makarov et al., 2019; Plavin et al., 2019a; Kovalev et al., 2020). For instance, Kovalev et al. (2017) and Petrov et al. (2019) suggested that the main cause of the position differences is optical structure, that is, optical jets at the mas scales.
Understanding these position differences is very important because (1) it will lead to a better selection of the common sources for aligning the optical frame to the radio frame; (2) the number of sources with statistically significant position differences can continue to increase in future $Gaia$ data releases, which would allow more and more small position differences to be detected at the 3$\sigma$ confidence level; and (3) the position differences may tell us something important about the astrophysics of the AGNs. We examine the position differences between $Gaia$ and VLBI from the radio side. As demonstrated in the imaging surveys of radio sources (Charlot, 1990a; Fey & Charlot, 1997), the celestial reference frame (CRF) sources commonly have angular structure at the mas scales at cm-wavelengths (refer to the images of CRF sources at http://www.physics.purdue.edu/astro/MOJAVE/). Source structure is time and frequency dependent, and it is not modeled in the data analysis for building the ICRF3. The position estimates and their uncertainties in the ICRF3 are based on global least-squares fitting (LSQ) and are thus not able to characterize the impacts of the systematic position variations over the 40 years due to source structure. For example, the position uncertainties from LSQ are likely underestimated in the presence of systematic errors. Our previous study used the same VLBI observations as for the creation of the ICRF3 to quantify the magnitude of the effects of source structure on VLBI observables for each individual source (Xu et al., 2019). In this paper, we apply those results to investigate the relationship between the $Gaia$–VLBI position differences and source structure at cm-wavelengths. We then attempt to explain and discuss these position differences based on the radio images from the Monitoring Of Jets in Active galactic nuclei with VLBA Experiments (MOJAVE; Lister et al., 2018). The paper is structured as follows. We introduce in Sect.
2 how the arc lengths of the position differences, the normalized arc lengths, and the quantitative values measuring structure effects are derived. We describe in Sect. 3 the results from the examination of arc lengths, normalized arc lengths, optical $G$ magnitudes, and redshifts with respect to source structure. In Sect. 4, the following topics are discussed: (1) source structure and its quantification; (2) the impact of the frequency dependence of source structure; (3) the large position differences that are statistically significant; (4) the magnitudes of the position differences; and (5) the directions of the position differences. We draw our conclusions in Sect. 5.

## 2 Data

### 2.1 Source positions from $Gaia$ and VLBI

We used the right ascension and declination estimates, their uncertainties, and the correlations between these two coordinates in the ICRF3 (http://hpiers.obspm.fr/icrs-pc/newwww/icrf/icrf3sx.txt), which contains 4536 sources observed by astrometric/geodetic VLBI at S/X band. The median uncertainties of right ascension and declination reported in the ICRF3 are 0.155 mas and 0.217 mas, respectively. We used the $Gaia$ DR2 and EDR3 (https://gea.esac.esa.int/archive/) to get the five astrometric parameters (source position, proper motion, and parallax), their uncertainties, the correlations between them, and the optical magnitude. Even though the cross match between radio and optical catalogs basically relies on position coincidence, other criteria are needed to reduce the risk of false matches. Lindegren et al. (2018) applied constraints on the other three astrometric parameters and the number of observations, and masked out the region near the Galactic plane, as shown in Eq. (13) of that publication.
Petrov & Kovalev (2017b) used the concept of the probability of false association as a function of the $Gaia$ source density on a regular grid and the possible area defined by the positions and the uncertainties at radio and optical wavelengths for each potential match. We combined these two methods to identify the common objects between the ICRF3 and the $Gaia$ DR2, which gives 2970 sources (Lunz et al., 2019; please refer to the poster at http://www.oan.es/evga2019/EVGA2019_PDF/P310_EVGA2019_Lunz.pdf). Based on the $Gaia$ EDR3 and the ICRF3, we identified 3142 common sources, the same number of matched sources as found by the $Gaia$ team in their ongoing analysis (François Mignard, private communication). For each common source, we calculated the arc length between the $Gaia$ and VLBI positions, $\rho$, by

$\rho=\sqrt{(\Delta_{\alpha}\cos\delta)^{2}+\Delta_{\delta}^{2}},$ (1)

where $\Delta_{\alpha}$ and $\Delta_{\delta}$ are the differences of right ascension and declination in the $Gaia$ data and the ICRF3, respectively, and $\delta$ is the declination. The normalized arc length, $X_{\rho}$, is defined and calculated by

$X_{\rho}=\rho/\sigma_{\rho},$ (2)

where $\sigma_{\rho}$ is the uncertainty of $\rho$ based on the full 2$\times$2 covariance matrix, as described in Eqs. (4) and (5) of Mignard et al. (2016). To characterize the position uncertainty with a single value, the semi-major axis of the error ellipse, $\sigma_{\texttt{pos,max}}$, was computed for both $Gaia$ and VLBI by

$\begin{split}\sigma_{\texttt{pos,max}}^{2}=&\frac{1}{2}\Bigg{[}\ (\sigma_{\alpha}\cos\delta)^{2}+\sigma_{\delta}^{2}\\\ &+\sqrt{\left((\sigma_{\alpha}\cos\delta)^{2}-\sigma_{\delta}^{2}\right)^{2}+(2\text{C}_{\alpha\delta}\sigma_{\alpha}\cos\delta\sigma_{\delta})^{2}}\ \Bigg{]},\end{split}$ (3)

where $\sigma_{\alpha}$ and $\sigma_{\delta}$ are the uncertainties of right ascension and declination, respectively, and $\text{C}_{\alpha\delta}$ is the correlation coefficient of the two coordinates.
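As a concrete illustration, Eqs. (1)–(3) can be implemented in a few lines. This is a minimal sketch rather than the paper's actual code; the function and variable names are our own, and $\sigma_{\rho}$ is obtained here by projecting the combined $Gaia$+VLBI covariance matrix onto the offset direction, which is our reading of the approach of Mignard et al. (2016).

```python
import numpy as np

def arc_length(d_ra_cosdec, d_dec):
    """Eq. (1): arc length rho between the two positions (inputs in mas)."""
    return np.hypot(d_ra_cosdec, d_dec)

def normalized_arc_length(d_ra_cosdec, d_dec, cov):
    """Eq. (2): X_rho = rho / sigma_rho.

    `cov` is the combined 2x2 covariance matrix of the position difference
    (Gaia + VLBI, in mas^2); sigma_rho is its projection onto the direction
    of the offset (cf. Eqs. (4)-(5) of Mignard et al. 2016).
    """
    rho = arc_length(d_ra_cosdec, d_dec)
    u = np.array([d_ra_cosdec, d_dec]) / rho  # unit vector along the offset
    return rho / np.sqrt(u @ cov @ u)

def sigma_pos_max(sig_ra_cosdec, sig_dec, corr):
    """Eq. (3): semi-major axis of the position error ellipse (mas)."""
    a2, b2 = sig_ra_cosdec ** 2, sig_dec ** 2
    cross = 2.0 * corr * sig_ra_cosdec * sig_dec
    return np.sqrt(0.5 * (a2 + b2 + np.sqrt((a2 - b2) ** 2 + cross ** 2)))
```

For example, an offset of (0.3, 0.4) mas with uncorrelated 0.2 mas uncertainties in each coordinate gives $\rho$ = 0.5 mas and $X_{\rho}$ = 2.5.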
Based on the $Gaia$ DR2 and the ICRF3, there are 732 sources with $X_{\rho}$ > 3.0; for the $Gaia$ EDR3, 804 sources have $X_{\rho}$ > 3.0.

### 2.2 Closure amplitude root-mean-square (CARMS)

We adopted the _log_ closure amplitude root-mean-square (CARMS) values from Table 2 in Xu et al. (2019) to quantify the magnitude of the source structure effects for each individual source (the complete table is available through the CANFAR data DOI at https://www.canfar.net/citation/landing?doi=20.0010). Because calibration data were often missing and the amplitudes are insensitive to the parameters of geodetic concern, visibility amplitudes from interferometry were not used for most geodetic VLBI observations. However, they carry valuable information about source angular structure, which causes structure effects in group delays of up to hundreds of picoseconds (Charlot, 1990b; Xu et al., 2016). By forming quadrangles with four baselines, one can build a ratio of the four amplitude observables that exactly cancels the station-based errors; this ratio is called the closure amplitude and provides information about the intrinsic source structure. For an ideal point-like source, all the baselines will observe the same amplitude within the thermal noise, giving closure amplitudes close to unity; for an extended source, the closure amplitudes deviate from unity. The CARMS value of a source is defined as the root-mean-square (rms) of the _log_ closure amplitudes at X band based on the basic weighting scheme (see Eqs. (2)–(4) and (6)–(8) in Xu et al. (2019)). In addition to the study in Xu et al. (2019), please also refer to its supporting material (https://www.canfar.net/citation/landing?doi=19.0007), where the closure phase and closure amplitude plots are available for tens of sources to demonstrate the source structure effects and compare with their CARMS values.
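The cancellation of station-based gain errors in a closure amplitude can be sketched as follows. This is an illustrative toy, not the paper's pipeline (which includes the weighting schemes of Xu et al. 2019); all names are our own, and the CARMS here omits any weighting.

```python
import numpy as np

def closure_amplitude(A, i, j, k, l):
    """Closure amplitude for the quadrangle (i, j, k, l):
    |V_ij| |V_kl| / (|V_ik| |V_jl|). Station-based gain factors cancel,
    since each station index appears once in the numerator and once in
    the denominator."""
    return (A[i, j] * A[k, l]) / (A[i, k] * A[j, l])

def carms(log_closure_amps):
    """Unweighted root-mean-square of log closure amplitudes."""
    x = np.asarray(log_closure_amps)
    return np.sqrt(np.mean(x ** 2))

# A point-like source observed with arbitrary station gains g_i:
# the measured amplitude on baseline (i, j) is A_ij = g_i * g_j * |V|,
# so every closure amplitude is exactly 1 and CARMS ~ 0.
rng = np.random.default_rng(1)
g = rng.uniform(0.5, 2.0, size=4)   # corrupted station gains
A = np.outer(g, g) * 1.0            # |V| = 1 for a point source
ca = closure_amplitude(A, 0, 1, 2, 3)
```

An extended source, whose $|V|$ varies from baseline to baseline, would yield closure amplitudes away from unity and hence a nonzero CARMS, which is the statistic used in the text to grade structure.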
The CARMS values are available for 3417 radio sources in the ICRF3 and were derived from the astrometric/geodetic VLBI observations from 1979 to 2018, the same dataset used to establish the ICRF3. They are in the range 0.03–1.63, and the mean and median values are 0.31 and 0.24, respectively. The CARMS values generally classify the CRF sources into three categories:

1. CARMS < 0.2 indicates minimum structure;
2. CARMS > 0.3 indicates significant structure;
3. CARMS > 0.4 indicates very extended structure.

The CARMS values were validated by the different source categories in the ICRF catalogs. For instance, the 39 special handling sources in the second realization of the ICRF (Fey et al., 2015), which have variations in the time series of VLBI position estimates at the mas or sub-mas levels, have a median CARMS of 0.60, while the median value for the ICRF3 defining sources, used for defining the axis directions of the ICRF3, is 0.25. Recently, these CARMS values were used to select radio sources with minimum structure to assess the quality of group delays in the broadband VLBI system (Xu et al., 2020a). For the 3142 common sources from the $Gaia$ EDR3, the CARMS values are available for 2460 sources (78 percent); the mean and median CARMS values are 0.30 and 0.24, respectively, which are at the same level as those of the 3417 sources. We examined the source position estimates based on both the $Gaia$ DR2 and EDR3, but we will focus on the results from the EDR3 in our study.

### 2.3 Redshift

We used the Optical Characteristics of Astrometric Radio Sources catalog (OCARS; Malkin, 2018) to search for the redshifts. The OCARS conveniently provides the redshifts for radio sources by collecting them from the literature. Among the 2460 sources, we got the redshifts for 2198 sources ($\sim$89 percent). They are in the range 0.01–5.06 with mean and median values of 1.28 and 1.18, respectively.
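The CARMS classification used in Sect. 2.2 can be encoded directly. A trivial sketch with our own naming; note that the 0.2–0.3 range is not assigned a label in the text, so it is left unclassified here.

```python
def classify_carms(carms_value):
    """Source category by CARMS, following the thresholds stated in the text."""
    if carms_value < 0.2:
        return "minimum structure"
    if carms_value > 0.4:
        return "very extended structure"
    if carms_value > 0.3:
        return "significant structure"
    return "unclassified (0.2-0.3)"
```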
## 3 Results

### 3.1 Arc length $\rho$

We examined the arc lengths $\rho$ between the VLBI and $Gaia$ source position estimates with respect to the CARMS values. Table 1 shows the mean and median values of $\rho$ and $\sigma_{\rho}$ for different ranges of CARMS values. The median $\rho$ steadily increases from $\sim$0.4 mas to $\sim$1.3 mas as the CARMS values increase. The mean $\rho$ increases more significantly, from $\sim$0.7 mas to $\sim$3.7 mas. The median $\rho$ begins to rise when CARMS $\simeq$ 0.6; the mean $\rho$ rises significantly, above 1.0 mas, when CARMS $\simeq$ 0.3. We note that in general a smaller CARMS value of a source indicates that it causes smaller structure effects. When CARMS < 0.3, the arc lengths $\rho$ have mean values of $\sim$0.7 mas and median values of $\sim$0.4 mas, and their uncertainties have mean values of $\sim$0.4 mas and median values of $\sim$0.3 mas. It is reasonable to expect that these arc lengths will decrease with better uncertainties from $Gaia$ in the near future, as happened from the DR2 to the EDR3. However, when CARMS > 0.6, the arc lengths are larger and statistically significant, and they have even smaller uncertainties than the sources with CARMS < 0.3. It is obvious that the sources with extended structure have larger position differences between VLBI and $Gaia$, which are statistically very significant, whereas the sources with minimum structure tend to have smaller position differences, which are statistically insignificant. The mean $\rho$ is always larger than the median due to a small fraction of sources having considerably larger $\rho$ than the rest of the sources in each group. The differences between the mean and median $\rho$ increase with the CARMS values.
Table 1: Arc lengths $\rho$ and normalized arc lengths $X_{\rho}$ with respect to CARMS

CARMS | $N_{\texttt{src}}$ | $\rho$ [mas] Mean | $\rho$ [mas] Median | $X_{\rho}$ Mean | $X_{\rho}$ Median | $\sigma_{\rho}$ [mas] Mean | $\sigma_{\rho}$ [mas] Median
---|---|---|---|---|---|---|---
< 0.10 | 207 | 0.717 | 0.459 | 1.825 | 1.492 | 0.382 | 0.304
[0.10 – 0.20) | 724 | 0.710 | 0.448 | 2.135 | 1.600 | 0.350 | 0.269
[0.20 – 0.30) | 617 | 0.772 | 0.418 | 2.845 | 1.751 | 0.297 | 0.229
[0.30 – 0.40) | 334 | 1.080 | 0.411 | 3.665 | 1.876 | 0.287 | 0.218
[0.40 – 0.50) | 220 | 1.347 | 0.452 | 5.425 | 2.543 | 0.258 | 0.189
[0.50 – 0.60) | 128 | 1.238 | 0.483 | 5.286 | 2.396 | 0.268 | 0.215
[0.60 – 0.70) | 87 | 1.634 | 0.845 | 6.273 | 3.747 | 0.324 | 0.252
[0.70 – 0.80) | 59 | 1.536 | 0.769 | 7.918 | 3.842 | 0.268 | 0.173
[0.80 – 0.90) | 35 | 3.516 | 1.081 | 12.247 | 5.686 | 0.327 | 0.245
$\geq$ 0.90 | 49 | 3.660 | 1.334 | 14.502 | 6.894 | 0.276 | 0.204
all | 2460 | 1.012 | 0.459 | 3.628 | 1.843 | 0.314 | 0.240

### 3.2 Normalized arc length $X_{\rho}$

We examined the normalized arc lengths $X_{\rho}$ with respect to the CARMS values. The statistics of $X_{\rho}$ are shown in Table 1. A dependence of $X_{\rho}$ on the CARMS values is revealed. Figure 1 shows the three distributions of $X_{\rho}$ for the 2460 common sources (top), the sources with CARMS < 0.10 (middle), and the sources with CARMS > 0.40 (bottom). About 26 percent of the 2460 sources have $X_{\rho}$ > 3. For the 207 radio sources with little structure, the distribution of $X_{\rho}$ shown in the middle panel is close to the expected Rayleigh distribution; however, for the 556 radio sources with CARMS > 0.40, the distribution of $X_{\rho}$ clearly deviates from the Rayleigh distribution — half of the sources have $X_{\rho}$ > 3 and one sixth even have $X_{\rho}$ > 10.
The probability of having statistically significant position differences is doubled for the radio sources with extended source structure (CARMS > 0.4).

Figure 1: Histogram of $X_{\rho}$ for the 2460 sources (top), the sources with CARMS < 0.10 (middle), and the sources with CARMS > 0.40 (bottom). The sources with $X_{\rho}$ > 10 are counted in the last bin. The blue curves show the Rayleigh distributions with unit standard deviation. The total number of sources in each of the three samples is shown in black on the top-right of each panel, and the number of the sources with $X_{\rho}$ > 3.0 in red. The straight red lines correspond to $X_{\rho}$ = 3.

The remarkable differences in the distributions of $X_{\rho}$ between these three groups of sources are the numbers of sources in the last bins, $X_{\rho}$ > 9.8. Figure 2 shows the distributions of the CARMS values for all 2460 sources (gray filled bins) and the 147 sources with $X_{\rho}$ > 10 (red open bins). The mean and median CARMS values for all 2460 sources are 0.30 and 0.24, respectively; for the 147 sources with $X_{\rho}$ > 10 these values are 0.52 and 0.48. About 60 percent of the sources with $X_{\rho}$ > 10 have CARMS > 0.40. Given that only 23 percent of the 2460 sources have CARMS > 0.40, a high correlation is also found between the sources with statistically significant position differences and the sources with extended structure. The differences of the CARMS values for the sources with various ranges of $\rho$ and $X_{\rho}$ are shown in Table 2. For different magnitudes of $\rho$, the mean and median CARMS values for the sources with $X_{\rho}$ > 4 are the largest among the three categories based on $X_{\rho}$; these values for the sources with $X_{\rho}$ in the range of 3 to 4 are larger than for the sources with $X_{\rho}$ < 3. On average, the difference in the CARMS values is $\sim$0.2 between the sources with and without statistically significant position differences.
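To put the observed excess of large $X_{\rho}$ values in context, the expected tail fraction of a Rayleigh distribution with $\sigma$ = 1 is easy to compute. This is a back-of-the-envelope sketch of ours, not a calculation from the paper.

```python
import math

def rayleigh_sf(x, sigma=1.0):
    """Survival function P(X > x) = exp(-x^2 / (2 sigma^2)) of a Rayleigh
    distribution: the expected distribution of normalized arc lengths if
    the offsets were pure noise with correctly estimated uncertainties."""
    return math.exp(-x * x / (2.0 * sigma * sigma))

# Under Rayleigh(1), only ~1.1% of sources would exceed X_rho = 3,
# far below the ~26% reported above for the full sample.
tail_3 = rayleigh_sf(3.0)
```

The contrast between the ~1.1 percent expected under pure noise and the observed 26 percent is what motivates looking for a physical cause such as source structure.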
There is only a slight increase in the mean and median CARMS values as $\rho$ increases for $X_{\rho}$ > 4. One should be cautious when interpreting the results in Table 2, because they will change with the better uncertainties of source positions available in future $Gaia$ data releases. With the significant improvement in position uncertainties expected from the $Gaia$ observations, the sources with current $X_{\rho}\leq$ 3 can have $X_{\rho}>$ 3, as happened for the $Gaia$ DR2 compared to the $Gaia$ DR1 and for the $Gaia$ EDR3 compared to the $Gaia$ DR2. Meanwhile, the arc lengths $\rho$ for the 1813 sources with $X_{\rho}\leq$ 3 will generally decrease, as demonstrated by the $Gaia$ DR2 and EDR3. The arc lengths of these 1813 sources are all smaller than 4.0 mas; the number of the sources with $\rho$ $\geq$ 4.0 mas should not change dramatically, unless a few new common sources between $Gaia$ and the ICRF3 are identified from the future $Gaia$ data releases. Since the ICRF3 sources were systematically included in the $Gaia$ quasar list, those sources missing from the $Gaia$ EDR3 are probably too faint at optical wavelengths, and it is unlikely that significantly more matches will be obtained from $Gaia$. As shown in Table 1, the uncertainties of $\rho$ have mean and median values of about 0.3 mas and 0.2 mas, which allow large $\rho$, for instance larger than 4.7 mas, to be confidently detected but are not able to fully identify the sources with $\rho$ < 1.0 mas. Therefore, when the final $Gaia$ data release is available to identify more sources with small $\rho$ and large $X_{\rho}$, the mean and median CARMS values will decrease for the sources with $\rho$ < 1.0 mas and $X_{\rho}$ > 4. We would expect the CARMS values to increase steadily with respect to $\rho$ in the future $Gaia$ data releases, just as $\rho$ increases with CARMS in Table 1.
In the following investigation, we set the limit at $X_{\rho}$ = 4.0, corresponding to the 99.994 $\%$ confidence level, to identify the sources with statistically significant position differences.

Table 2: CARMS values with respect to $\rho$ and $X_{\rho}$

$\rho$ [mas] | $X_{\rho}$ > 4: $N_{\texttt{src}}$ | Mean | Median | 3 < $X_{\rho}$ $\leq$ 4: $N_{\texttt{src}}$ | Mean | Median | $X_{\rho}$ $\leq$ 3: $N_{\texttt{src}}$ | Mean | Median
---|---|---|---|---|---|---|---|---|---
< 0.4 | 34 | 0.43 | 0.44 | 33 | 0.35 | 0.30 | 1011 | 0.26 | 0.23
[0.4 – 0.7) | 56 | 0.43 | 0.36 | 62 | 0.33 | 0.28 | 425 | 0.25 | 0.21
[0.7 – 1.0) | 55 | 0.47 | 0.45 | 48 | 0.25 | 0.21 | 179 | 0.27 | 0.21
[1.0 – 2.0) | 114 | 0.50 | 0.43 | 52 | 0.30 | 0.19 | 164 | 0.25 | 0.20
[2.0 – 4.0) | 98 | 0.40 | 0.35 | 16 | 0.50 | 0.43 | 34 | 0.26 | 0.20
[4.0 – 7.0) | 44 | 0.46 | 0.44 | 4 | 0.36 | 0.33 | 0 | $\ldots$ | $\ldots$
$\geq$ 7.0 | 31 | 0.51 | 0.47 | 0 | $\ldots$ | $\ldots$ | 0 | $\ldots$ | $\ldots$
all | 432 | 0.46 | 0.42 | 215 | 0.32 | 0.27 | 1813 | 0.26 | 0.22

Figure 2: Histogram of the CARMS values for the 2460 radio sources (filled gray bars) and for the 147 radio sources with $X_{\rho}$ > 10.0 (open red bars). About 3/5 of these 147 sources have CARMS values larger than 0.40, while less than one quarter of the 2460 sources have CARMS values larger than 0.40.

### 3.3 Optical $G$ magnitude and redshift

We examined the optical $G$ magnitude and the redshift $z$ with respect to the CARMS values to investigate whether there is any potential correlation between the CARMS values and the optical properties. Table 3 shows the statistics of $G$ and $z$. Both the mean and median magnitudes generally decrease with respect to the CARMS values; the difference in $G$ between the sources with CARMS < 0.1 and those with CARMS > 0.9 is about 0.9 mag. Given the high correlation between the radio and optical luminosities shown in Arshakian et al.
(2010), the sources with higher luminosity at optical wavelengths will have higher radio flux densities. One can also expect a correlation between radio luminosity and extended structure, both of which are driven by jet power: larger power means higher radio luminosity and a more extended structure in linear scale, because a more powerful jet can drill a longer path. Radio sources with higher flux densities tend to have more extended structure and consequently larger CARMS values, as shown for the 30 most frequently observed sources in geodetic VLBI by Xu et al. (2019). Since the CRF sources are flux-limited, sources at large redshifts must have high luminosity, and consequently their powers and extents are larger than those at low redshift. This should partly explain the correlation between $z$ and CARMS in the table. The correlations of CARMS with both $G$ and $z$ appear to be significant.

Table 3: Optical $G$ magnitude and redshift $z$

| CARMS | $N_{\texttt{src}}$ | Mean $G$ [mag] | Median $G$ [mag] | $N_{z}$ | Mean $z$ | Median $z$ |
|---|---|---|---|---|---|---|
| < 0.1 | 207 | 19.283 | 19.486 | 181 | 1.270 | 1.062 |
| [0.1 – 0.2) | 724 | 18.965 | 19.112 | 624 | 1.188 | 1.072 |
| [0.2 – 0.3) | 617 | 18.665 | 18.759 | 553 | 1.214 | 1.139 |
| [0.3 – 0.4) | 334 | 18.712 | 18.841 | 305 | 1.355 | 1.292 |
| [0.4 – 0.5) | 220 | 18.460 | 18.498 | 204 | 1.350 | 1.256 |
| [0.5 – 0.6) | 128 | 18.489 | 18.581 | 116 | 1.413 | 1.312 |
| [0.6 – 0.7) | 87 | 18.381 | 18.474 | 78 | 1.268 | 1.203 |
| [0.7 – 0.8) | 59 | 18.442 | 18.446 | 53 | 1.622 | 1.400 |
| [0.8 – 0.9) | 35 | 18.678 | 18.740 | 33 | 1.506 | 1.460 |
| ≥ 0.9 | 49 | 18.359 | 18.597 | 47 | 1.434 | 1.351 |
| all | 2460 | 18.763 | 18.910 | 2198 | 1.275 | 1.182 |

We further examined $G$ and $z$ in more detail. This investigation can be biased, because the uncertainties of the $Gaia$ positions depend on $G$, as shown in Gaia Collaboration et al. (2018b).
The statistics of arc lengths and normalized arc lengths with respect to $G$ may change dramatically when new position estimates with improved uncertainties become available from $Gaia$ in the near future. We nevertheless attempt to address them based on the $Gaia$ EDR3. Table 4 shows the statistics of the arc lengths, the major axes of the error ellipses of the $Gaia$ and VLBI positions, the CARMS values, and $z$ at different optical $G$ magnitudes for the 2028 sources with $X_{\rho}\leq$ 4. As expected, $G$ and $z$ are positively correlated for these sources: the farther away an object is, the dimmer it appears. The differences in the mean CARMS values between the various ranges of $G$ are no larger than 0.06, and those in the median values are no larger than 0.08. There is a small decrease in the CARMS values as $G$ increases, which suggests that the scale of the structure may decrease for sources located farther away. The magnitudes of $\rho$ gradually increase with $G$; however, the position uncertainties of both $Gaia$ and VLBI increase strongly as well. Since the ratio of the arc lengths to their uncertainties remains at the same level across the different ranges of $G$, no dependence of $\rho$ on $G$ can be concluded from this result. Table 4: Statistics of the 2028 sources with $X_{\rho}\leq$ 4.
| $G$ [mag] | $N_{\texttt{src}}$ | Mean $\rho$ [mas] | Median $\rho$ [mas] | $\sigma_{\texttt{pos,max}}$ $Gaia$ [mas] | $\sigma_{\texttt{pos,max}}$ VLBI [mas] | Mean CARMS | Median CARMS | $N_{z}$ | Mean $z$ | Median $z$ |
|---|---|---|---|---|---|---|---|---|---|---|
| < 15.0 | 7 | 0.304 | 0.256 | 0.020 | 0.218 | 0.25 | 0.21 | 7 | 0.304 | 0.200 |
| [15.0 – 16.0) | 35 | 0.258 | 0.175 | 0.030 | 0.175 | 0.30 | 0.27 | 35 | 0.459 | 0.310 |
| [16.0 – 16.5) | 29 | 0.316 | 0.244 | 0.043 | 0.199 | 0.29 | 0.26 | 29 | 0.711 | 0.557 |
| [16.5 – 17.0) | 62 | 0.283 | 0.218 | 0.051 | 0.177 | 0.28 | 0.24 | 59 | 1.014 | 1.003 |
| [17.0 – 17.5) | 125 | 0.337 | 0.271 | 0.072 | 0.199 | 0.29 | 0.25 | 120 | 1.029 | 0.954 |
| [17.5 – 18.0) | 176 | 0.312 | 0.215 | 0.094 | 0.186 | 0.29 | 0.25 | 172 | 1.211 | 1.093 |
| [18.0 – 18.5) | 265 | 0.347 | 0.288 | 0.126 | 0.210 | 0.28 | 0.24 | 251 | 1.261 | 1.200 |
| [18.5 – 19.0) | 312 | 0.392 | 0.331 | 0.179 | 0.207 | 0.27 | 0.23 | 292 | 1.452 | 1.384 |
| [19.0 – 19.5) | 331 | 0.492 | 0.399 | 0.247 | 0.230 | 0.25 | 0.21 | 301 | 1.423 | 1.375 |
| [19.5 – 20.0) | 315 | 0.611 | 0.491 | 0.370 | 0.223 | 0.24 | 0.19 | 259 | 1.428 | 1.300 |
| [20.0 – 20.5) | 275 | 0.917 | 0.790 | 0.623 | 0.228 | 0.24 | 0.19 | 202 | 1.502 | 1.375 |
| ≥ 20.5 | 96 | 1.687 | 1.434 | 1.079 | 0.272 | 0.27 | 0.20 | 64 | 1.274 | 0.980 |
| all | 2028 | 0.633 | 0.454 | 0.293 | 0.216 | 0.26 | 0.22 | 1791 | 1.314 | 1.219 |

Note: the values in the fifth and sixth columns are the mean $\sigma_{\texttt{pos,max}}$ of the $Gaia$ and VLBI position estimates, respectively.

Table 5 shows the statistics of the same quantities as Table 4, but for the 432 sources with $X_{\rho}$ > 4. The arc lengths increase by a factor of $\sim$10 from $G$ < 15 mag to $G$ $\geq$ 20 mag. This apparent dependence of $\rho$ on $G$, however, is mainly due to the high correlation between the $Gaia$ position uncertainties and $G$, as shown in the table.
Because the $Gaia$ position uncertainties degrade dramatically as $G$ becomes fainter, a uniform threshold on $X_{\rho}$, which is 4 in this study, means that at the fainter optical magnitudes only the sources with sufficiently large arc lengths are selected. As discussed before, these statistics will change with new position estimates from future $Gaia$ data releases.

Table 5: Statistics of the 432 sources with $X_{\rho}$ > 4.

| $G$ [mag] | $N_{\texttt{src}}$ | Mean $\rho$ [mas] | Median $\rho$ [mas] | $\sigma_{\texttt{pos,max}}$ $Gaia$ [mas] | $\sigma_{\texttt{pos,max}}$ VLBI [mas] | Mean CARMS | Median CARMS | $N_{z}$ | Mean $z$ | Median $z$ |
|---|---|---|---|---|---|---|---|---|---|---|
| < 15.0 | 9 | 0.369 | 0.269 | 0.016 | 0.053 | 0.42 | 0.30 | 9 | 0.228 | 0.160 |
| [15.0 – 16.0) | 17 | 0.978 | 0.521 | 0.032 | 0.074 | 0.59 | 0.63 | 17 | 0.394 | 0.302 |
| [16.0 – 16.5) | 19 | 1.083 | 0.839 | 0.040 | 0.121 | 0.56 | 0.60 | 18 | 1.182 | 1.258 |
| [16.5 – 17.0) | 34 | 3.820 | 0.867 | 0.062 | 0.155 | 0.47 | 0.43 | 34 | 1.332 | 1.140 |
| [17.0 – 17.5) | 43 | 1.810 | 0.868 | 0.088 | 0.125 | 0.48 | 0.43 | 43 | 1.093 | 0.994 |
| [17.5 – 18.0) | 59 | 1.581 | 0.951 | 0.098 | 0.157 | 0.47 | 0.44 | 57 | 1.283 | 1.285 |
| [18.0 – 18.5) | 68 | 2.483 | 1.505 | 0.146 | 0.167 | 0.44 | 0.39 | 68 | 1.228 | 1.208 |
| [18.5 – 19.0) | 52 | 4.258 | 1.535 | 0.184 | 0.229 | 0.44 | 0.41 | 48 | 1.065 | 0.949 |
| [19.0 – 19.5) | 53 | 4.777 | 2.289 | 0.250 | 0.222 | 0.46 | 0.41 | 44 | 1.061 | 0.726 |
| [19.5 – 20.0) | 35 | 4.953 | 2.987 | 0.405 | 0.241 | 0.44 | 0.35 | 31 | 1.054 | 0.667 |
| [20.0 – 20.5) | 28 | 5.344 | 3.623 | 0.622 | 0.255 | 0.31 | 0.30 | 26 | 0.940 | 0.770 |
| ≥ 20.5 | 15 | 4.176 | 3.151 | 0.805 | 0.292 | 0.49 | 0.46 | 12 | 1.254 | 1.037 |
| all | 432 | 3.173 | 1.510 | 0.207 | 0.183 | 0.46 | 0.42 | 407 | 1.103 | 0.901 |

Note:
The values in the fifth and sixth columns are the mean $\sigma_{\texttt{pos,max}}$ of the $Gaia$ and VLBI position estimates, respectively. Comparing the results in Tables 4 and 5, the major differences between these two groups of sources are found in CARMS and $z$. The CARMS values of the sources with $X_{\rho}$ > 4 are larger by $\sim$0.2 than those of the sources with $X_{\rho}$ $\leq$ 4; the mean and median $z$ values are smaller by 0.21 and 0.32, respectively. The relationship between $G$ and $z$ for these two groups of sources is shown in Fig. 3. For the sources with $X_{\rho}\leq$ 4, $z$ increases steadily with $G$, while for the sources with $X_{\rho}$ > 4, $z$ even decreases slightly for $G$ > 16.5 mag. Figure 3: Mean redshift values with respect to the optical $G$ magnitudes for the 2198 sources with redshifts available. The blue curve is for the sources with $X_{\rho}$ $\leq$ 4, and the red curve for the sources with $X_{\rho}$ > 4. The error bars show the estimated uncertainties of the mean values. The bin windows of $G$ are given in the first column of Table 4. The sources with $X_{\rho}$ > 4 have substantially lower $z$ at $G$ > 18.5 mag but higher $z$ at $G\simeq 16.5$ mag than the sources with $X_{\rho}\leq$ 4. The statistics are shown in Tables 4 and 5. We argue that the statistically significant position differences may also be associated with, for instance, some weak but nearby (small $z$) optical objects.

## 4 Discussion

### 4.1 Radio source structure

The CRF sources have radio emission extended over angular scales at the mas level, called source structure. It causes structure delays of up to hundreds of picoseconds, as shown in modeling by Charlot (1990b) and in actual observations by Xu et al. (2016). Based on the CONT14 observations (https://ivscc.gsfc.nasa.gov/program/cont14/), Anderson & Xu (2018) suggested that source structure is the major contributor to errors in astrometric/geodetic VLBI.
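The hundreds-of-picoseconds scale of structure delays can be checked with a small-angle estimate: a structure-induced position offset $\Delta\theta$ seen on a baseline of length $B$ corresponds to a geometric delay of roughly $B\,\Delta\theta/c$. A sketch (the 8000-km baseline length is our assumption, typical of intercontinental geodetic baselines):

```python
import math

C_M_PER_S = 299_792_458.0
MAS_TO_RAD = math.radians(1.0 / 3.6e6)  # one milliarcsecond in radians

def geometric_delay_ps(baseline_km, offset_mas):
    """Geometric delay (ps) corresponding to an angular offset seen on
    a baseline of the given length: tau = B * dtheta / c."""
    return baseline_km * 1e3 * offset_mas * MAS_TO_RAD / C_M_PER_S * 1e12

# A 1-mas structure offset on an 8000-km baseline maps to ~130 ps,
# so mas-scale structure indeed produces delays of up to hundreds of ps.
print(f"{geometric_delay_ps(8000.0, 1.0):.0f} ps")
```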
Since these effects on VLBI group delays have not been modeled in the VLBI data analysis on which the ICRFs were built and maintained, the source positions from VLBI change over time due to both the varying observing geometry between antennas and sources and the varying structure. For a large fraction of CRF sources, the structure effects can change their positions at the level of 0.5 mas, as shown in the position time series of 39 well-observed sources (Ma et al., 2009; see the plots in IERS Technical Note 35, https://www.iers.org/SharedDocs/Publikationen/EN/IERS/Publications/tn/TechnNote35/tn35_017.pdf?__blob=publicationFile&v=1). The number of sources affected by structure effects will increase dramatically when we consider position differences between $Gaia$ and VLBI down to levels of $\sim$0.3 mas. Based on the CARMS values, 40 percent of the CRF sources have significant structure. CARMS quantifies the structure effects in the amplitude observables. For a source with CARMS = 0.1, the ratios of the amplitude observables over the various quadrangle combinations have an rms of 1.1. Those ratios have an rms of 1.5 for CARMS = 0.4, and 1.8 for CARMS = 0.6. A source with a small CARMS value is thus close to point-like, whereas a source with a large CARMS value has extended structure. In Fig. 4, we show the images from MOJAVE for four sources: 0048$-$097, 0059$+$581, 1803$+$784, and 1928$+$738. Since the VLBI observations used to derive these images were made at different frequencies, with different antenna arrays, and during different time periods than the observations underlying the ICRF3 and the CARMS values, we cannot expect an exact proportional relation between the CARMS values and the scales in the MOJAVE images. They are nevertheless very helpful for demonstrating the differences between CARMS values smaller and larger than 0.3.
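The mapping from a CARMS value to the quoted amplitude-ratio scatters can be reproduced directly: CARMS is the rms of log closure amplitudes, so exponentiating it gives the typical multiplicative scatter of the closure amplitudes. A sketch, assuming natural logarithms (consistent with the figures quoted above; the function names are ours):

```python
import numpy as np

def log_closure_amplitude(v12, v34, v13, v24):
    """Log closure amplitude for a station quadrangle (1,2,3,4);
    station-based gain errors cancel in this combination, so it is
    zero for an ideal point source."""
    return np.log(np.abs(v12) * np.abs(v34) / (np.abs(v13) * np.abs(v24)))

def ratio_scatter(carms):
    """Convert a CARMS value (rms of log closure amplitudes) into the
    typical multiplicative scatter of the amplitude ratios."""
    return np.exp(carms)

for c in (0.1, 0.4, 0.6):
    print(f"CARMS = {c}: ratio rms ~ {ratio_scatter(c):.1f}")
# prints 1.1, 1.5, and 1.8, matching the values quoted in the text
```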
The two sources 0048$-$097 (CARMS = 0.11) and 0059$+$581 (CARMS = 0.27) have virtually only compact cores, whereas the other two sources, 1803$+$784 (CARMS = 0.35) and 1928$+$738 (CARMS = 0.88), have significant emission from jets at mas scales. The relative positions between $Gaia$ and VLBI are also shown in the plots. It is obvious from the plots that the $Gaia$-VLBI position differences are typically parallel to the jet directions, which has already been reported by Kovalev et al. (2017) and Petrov et al. (2019) and will be discussed in Sec. 4.5. Figure 4: MOJAVE images of four sources: 0048$-$097 (CARMS = 0.11, upper left), 0059+581 (CARMS = 0.27, upper right), 1803+784 (CARMS = 0.35, bottom left), and 1928+738 (CARMS = 0.88, bottom right). These images were made from VLBA observations at 15.35 GHz by the MOJAVE project. The peaks of flux are selected as the origins. The VLBI positions are formally assumed to be at the origins, as shown by the red dots. Since the MOJAVE images and the ICRF3 are derived from observations at different frequencies, this assumption may introduce systematic errors. Based on the position differences between the $Gaia$ EDR3 and the ICRF3, the $Gaia$ positions are then located at the blue dots. The error bars are the 3$\sigma$ uncertainties in right ascension and declination from $Gaia$ and VLBI. It is also conspicuous that, for $X_{\rho}$ > 4, the VLBI-to-$Gaia$ position vectors favor the directions along and opposite to the jets, as shown by Kovalev et al. (2017) and Petrov et al. (2019). The $\rho$ and $X_{\rho}$ values are given in the upper-left corner of each plot. These four plots demonstrate how the scale of the structure varies with the CARMS value. Nevertheless, we should mention that the CARMS values and the ICRF3 are based on VLBI observations in the frequency band around 8.4 GHz over 40 years, while these images were made from observations at 15.35 GHz during the short periods shown at the top of each plot.
The jet components always become more prominent at the lower frequency bands. The images were convolved with a circular beam of 0.3 mas, as indicated by the black circle in the bottom-left corner, about 40$\%$ of the typical MOJAVE beam size. Overlay contours are shown at ten levels of the peak percentage specified at the bottom of each plot. There are four remarks concerning CARMS. First, it was calculated from actual VLBI observations rather than from maps of the radio sources. If the CARMS value is large, the source should have extended structure; but if a source has extended structure, it does not necessarily have a large CARMS value, because the observations may have insufficient $(u,v)$ coverage to capture the structure. The great advantage of using actual VLBI observations, however, is that they quantify the magnitude of the structure effects over the whole 40-year time period. Second, CARMS is based on (log) closure amplitudes, which are not sensitive to the absolute source position. Therefore, only the relative structure, i.e., the relative positions and relative fluxes of the multiple components, is captured by CARMS; if a source with compact structure changes its position on the sky, the CARMS value cannot reveal that change. Third, since no attempt was made to weight the different quadrangle sizes properly or to select an independent set of closure amplitudes for each individual source when deriving the CARMS values, it is difficult to tell to what extent a source with a medium CARMS value, 0.25–0.30, has structure. Fourth, CARMS was derived from the X-band observations only, while the ICRFs are based on the ionosphere-free delays formed through the linear combination of the group delays at the S and X bands. The structure effects in the S-band observations are thus ignored in this study.
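The degree to which S-band structure effects are suppressed follows from the standard ionosphere-free linear combination of S/X group delays. A quick check (a sketch; the exact factor depends on the effective observing frequencies, and the nominal band centers used here are our assumption):

```python
def sband_downscale(f_x_ghz, f_s_ghz):
    """Ionosphere-free combination: tau = (f_x^2*tau_x - f_s^2*tau_s) / (f_x^2 - f_s^2).
    An S-band structure delay therefore enters scaled by f_s^2 / (f_x^2 - f_s^2);
    this returns the reciprocal, i.e. the down-scaling factor."""
    return (f_x_ghz**2 - f_s_ghz**2) / f_s_ghz**2

# Nominal band centers of 8.4 GHz (X) and 2.2 GHz (S) give a factor of ~13.6,
# close to the ~13.8 obtained with the effective frequencies.
print(f"{sband_downscale(8.4, 2.2):.1f}")
```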
Even though the contribution of the structure effects at the S band is scaled down by a factor of $\sim$13.8 in that linear-combination process, it can be significant for some radio sources. These points should partly explain why there are sources with CARMS < 0.10 but $X_{\rho}$ > 3.0, as shown in the middle panel of Fig. 1. Modeling structure effects is still missing in astrometric/geodetic VLBI data analysis, even though it has been discussed for several decades. The practical problem is that images must be made continuously for hundreds of sources, and many times per source if the structure changes. The main challenge is that the images for modeling structure effects have to be registered over time for each source in order to maintain a stable CRF at high accuracy. The next generation of geodetic VLBI, known as VGOS (Niell et al., 2007; Petrachenko et al., 2009), requires registering the images of each source at four different bands in the range 3.0–14.0 GHz (Xu et al., 2020b). Otherwise, only the relative structure effects can be reduced, and the misalignment of the images at different epochs or at different frequency bands due to core shift, discussed in the next section, inevitably leads to source position variations. Owing to the limited imaging resolution, identifying the reference points in the structure/images is difficult at accuracy levels better than 0.1 mas. Therefore, aligning the images and investigating core shift are crucial for mitigating these systematic effects.

### 4.2 Core shift

Source structure is frequency-dependent due to two factors: (1) the steep spectrum of the extended jet, which causes sources to have larger scales at lower frequencies; and (2) synchrotron self-absorption, which causes changes in the optical depth along the jet. The latter factor causes the position of the core, where the optical depth is unity, to depend on the observing frequency.
This effect, the so-called core shift, was predicted by Blandford & Königl (1979). As the observing frequency increases, the position of the core moves towards the jet base. Core shift was first measured for the source 1038$+$528A, with a magnitude of $\sim$0.7 mas between 2.3 GHz and 8.4 GHz, by referring to its nearby source 1038$+$528B (Marcaide et al., 1985). Since then, it has been measured for 29 sources with a median value of 0.44 mas between 2.3 GHz and 8.4 GHz by Kovalev et al. (2008), for 20 sources with median values of 1.21 mas between 1.4 GHz and 15.4 GHz and 0.24 mas between 5.0 GHz and 15.4 GHz by Sokolovsky et al. (2011), for 163 sources with a median value of 0.128 mas between 8.4 GHz and 15 GHz by Pushkarev et al. (2012), and for 40 sources with a typical value of 0.5 mas between 2.3 GHz and 8.4 GHz by Plavin et al. (2019b). The frequency dependence of the core position can be parameterized as $k\nu^{-\beta}$, where $k$ is a source-dependent core shift parameter (which can vary over time according to the study of Plavin et al. 2019b), $\nu$ is the observing frequency, and $\beta$ is an astrophysical parameter. So far, $\beta$ has been measured to be close to 1 (Lobanov, 1998; Sokolovsky et al., 2011), which agrees with the prediction under the condition of equipartition between the jet particle and magnetic field energy densities (Blandford & Königl, 1979). The impact of core shift on astrometric positions measured by VLBI was discussed by Porcas (2009) using a simple model of a point-source core. Based on the median core shift of 0.44 mas between 2.3 GHz and 8.4 GHz from Kovalev et al. (2008), the core position is shifted by 0.166 mas at the frequency of 8.4 GHz and varies by 0.014 mas over the frequency band of 8.2–8.9 GHz used in most geodetic VLBI observations.
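These two numbers follow directly from the $k\nu^{-\beta}$ parameterization with $\beta=1$; a sketch of the arithmetic:

```python
def core_offset_mas(k, freq_ghz, beta=1.0):
    """Angular offset of the core from the jet base, r = k * nu**(-beta)."""
    return k * freq_ghz**(-beta)

# Solve k from the median measured core shift between 2.3 and 8.4 GHz (0.44 mas)
k = 0.44 / (1.0 / 2.3 - 1.0 / 8.4)

shift_at_x = core_offset_mas(k, 8.4)
band_variation = core_offset_mas(k, 8.2) - core_offset_mas(k, 8.9)
print(f"core offset at 8.4 GHz: {shift_at_x:.3f} mas")          # 0.166 mas
print(f"variation over 8.2-8.9 GHz: {band_variation:.3f} mas")  # ~0.013 mas
```

The band variation evaluates to about 0.013 mas, consistent with the 0.014 mas quoted; the small difference presumably reflects the exact band edges assumed.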
The position shift of 0.166 mas can cause visibility phase variations of several degrees over the band, which are canceled exactly by the additional phase variations due to the position shift of 0.014 mas across the band. It was shown that, given $\beta=1$, group delays of observations of a point-like source refer to a fixed point at the jet base at any frequency and at any time, regardless of whether $k$ varies over time. It is therefore believed that core shift will not contribute to the position differences between $Gaia$ and VLBI, given that $\beta\simeq 1$. Our special concern about core shift is not only the robust validation of $\beta\simeq 1$ for the CRF sources, but also the simple source model used in Porcas (2009). Core shift has two effects on source structure: (1) it moves the absolute position of the core towards the jet base when the frequency increases; and (2) it changes the relative positions between the core and the jet components in the structure. Apparently, the discussion of Porcas (2009) investigated the first effect only. The reality, again, is that almost all CRF sources have structure at mas scales, which changes over time. In the scenario discussed above, the relative positions between the core and the jets are also changed, by about 0.014 mas over the band, in the direction opposite to the absolute position shift of the core. The cancellation of the across-band phase variations in the point-source case breaks down for extended sources. Therefore, core shift can influence the position estimates determined from VLBI group delays. In this context, even though there may be no real connection between the magnitude of core shift and the scale of source structure, the impact of core shift will correlate with structure effects: extended sources with large structure effects tend to show a larger core-shift effect in the radio-optical position differences than sources with minimal structure.
Further studies are needed to verify this assumption.

### 4.3 Sources with $\rho$ > 4.0 mas and $X_{\rho}$ > 4

There are 75 sources with $\rho$ > 4.0 mas and $X_{\rho}$ > 4. Among them, 53 sources have CARMS > 0.3 and 41 sources have CARMS > 0.4. Of the 22 sources with CARMS values $\leq$ 0.3, 20 have their $z$ available, and 15 have $z$ < 0.7. The median $z$ of these 20 sources is 0.25, only one fifth of the median $z$ of the 2198 sources with known $z$. A small fraction of these sources thus seem to be weak but nearby optical objects. It is important to investigate this further.

### 4.4 Magnitudes of the position differences

With the improvement of the $Gaia$ position estimates in the near future, the number of sources with $X_{\rho}$ > 4 may continue to increase. However, there should be no significant increase in the number of sources with extremely large differences, for instance $\rho$ > 4.0 mas; currently, there are 79 such sources, less than 4 percent. As shown in Tables 4 and 5, there are 615 sources with $G$ < 18 mag, and the mean semi-major axis of the error ellipses of the $Gaia$ positions for these 615 sources is already smaller than 0.1 mas. In this sample of 615 sources, 181 sources, about 2/5, have $X_{\rho}$ > 4. About 74 percent of these 181 sources have $\rho$ < 1.5 mas; the median $\rho$ is $\sim$0.8 mas. Therefore, the magnitude of $\rho$ for the majority of the sources with $X_{\rho}$ > 4 is expected to be at the same level as the source structure effects and core shift. For the 434 sources with $X_{\rho}$ $\leq$ 4, the median $\rho$ is $\sim$0.24 mas, which is at the same level as their uncertainties, dominated by VLBI. This may provide insight into the final agreement of source positions between $Gaia$ and VLBI for the whole ensemble of common sources.
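For reference, the arc length $\rho$ and the normalized arc length $X_{\rho}$ used throughout can be computed from catalog positions as sketched below. This is a tangent-plane approximation; the exact error propagation for $\sigma_{\rho}$ (including correlations) used in the ICRF3-$Gaia$ comparison is not reproduced here.

```python
import math

def arc_length_mas(ra1_deg, dec1_deg, ra2_deg, dec2_deg):
    """Tangent-plane approximation of the angular separation in mas
    between two catalog positions given in degrees; adequate for the
    sub-arcsecond Gaia-VLBI offsets discussed here."""
    dec_mid = math.radians(0.5 * (dec1_deg + dec2_deg))
    d_ra = (ra1_deg - ra2_deg) * math.cos(dec_mid)
    d_dec = dec1_deg - dec2_deg
    return math.hypot(d_ra, d_dec) * 3.6e6  # degrees -> mas

def normalized_arc_length(rho_mas, sigma_rho_mas):
    """X_rho = rho / sigma_rho, the significance measure used in the text."""
    return rho_mas / sigma_rho_mas

# A 0.1-arcsec offset purely in declination
rho = arc_length_mas(150.0, 20.0, 150.0, 20.0 + 0.1 / 3600.0)
print(f"{rho:.1f} mas")  # 100.0 mas
```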
If we assume that the median uncertainty of the $Gaia$ source positions at the fainter optical magnitudes is $\sim$0.1 mas, which is better than the predicted end-of-mission accuracies but still possible (Perryman et al., 2001; de Bruijne et al., 2014), the $Gaia$ and VLBI positions will agree with each other within their uncertainties for $\sim$3/5 of the sources, with a median $\rho$ at the level of 0.24 mas. The remaining $\sim$2/5 of the sources will have statistically significant position differences, with a median $\rho$ of 0.8 mas. Based on about 2000 evenly distributed sources over the sky with position differences of $\sim$0.24 mas, the orientation stability of the $Gaia$ frame with respect to the ICRF3 may be achieved at the level of ten microarcseconds ($\mu$as); this is sufficient to detect systematic position differences between $Gaia$ and VLBI at the level of hundreds of $\mu$as. Several hundred sources with well-detected position differences at such levels will provide invaluable information for investigating the physical properties of radio sources.

### 4.5 Directions of the position differences

Source structure and core shift are expected to shift the source positions derived from VLBI towards the jets. If the VLBI-to-$Gaia$ position vectors are opposite to the directions of the radio jets, as shown for the source 1928$+$738 in the bottom-right panel of Fig. 4, the position differences can be explained by source structure effects or core shift. However, it seems difficult to explain position vectors along the jets, as shown for the source 1803$+$784 in the bottom-left panel, by the effects of radio source structure and core shift. Recent studies have demonstrated that the VLBI-to-$Gaia$ position vectors favor the directions both along and opposite to the jets (Kovalev et al., 2017; Petrov et al., 2019), with more sources having these position vectors along the jets than opposite to them.
The presence of parsec-scale optical jet structure along the directions of the radio jets was proposed in these studies to explain this phenomenon. We compared the directions of the VLBI-to-$Gaia$ position vectors with those of the radio jets based on the MOJAVE data. The jet directions were calculated as the median values of the jet position angles over the multiple jet components of each individual source in the MOJAVE project. These jet position angles were robustly determined from multiple-epoch measurements by MOJAVE (Lister et al., 2018). Figure 5 shows, as gray dots, the 208 sources for which the uncertainties of both the jet position angles and the VLBI-to-$Gaia$ position directions are smaller than 30 degrees, and, as red dots, the 81 sources for which those uncertainties are smaller than 12 degrees. About 88 percent of these 81 sources have VLBI-to-$Gaia$ position vectors parallel to the jet directions within 25 degrees, and 96 percent within 45 degrees. This reinforces the known results of Kovalev et al. (2017) and Petrov et al. (2019) with stronger evidence. The majority of the sources have position vectors along the jets, and a significant fraction have them opposite to the jets, as confirmed by this small sample of well-determined jet position angles. Figure 5: Angles of the VLBI-to-$Gaia$ position vectors with respect to the jet directions, as a function of the jet position angles, based on the MOJAVE data. The error bars shown are the combined uncertainties from the formal errors of the two directions. There are 327 sources with robust multi-epoch, multi-jet position angles, cross-matched from the 3142 sources. Of these, 208 sources have uncertainties of both the VLBI-to-$Gaia$ position directions and the median jet directions smaller than 30 degrees and are shown as gray dots. The 81 sources with those uncertainties smaller than 12 degrees are shown as red dots.
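The along/opposite comparison used here amounts to computing the smallest angle between two position angles modulo 360 degrees. A minimal sketch (the 25-degree tolerance follows the text; the function names are ours):

```python
def angle_between_deg(pa_jet, pa_offset):
    """Smallest angle (0-180 deg) between a jet position angle and the
    direction of the VLBI-to-Gaia offset, with 360-degree wraparound."""
    d = abs(pa_jet - pa_offset) % 360.0
    return min(d, 360.0 - d)

def classify_offset(pa_jet, pa_offset, tol=25.0):
    """Label an offset as 'along', 'opposite', or 'other' relative to the jet."""
    a = angle_between_deg(pa_jet, pa_offset)
    if a <= tol:
        return "along"
    if a >= 180.0 - tol:
        return "opposite"
    return "other"

print(classify_offset(350.0, 10.0))  # along (20 deg apart across the wrap)
print(classify_offset(30.0, 200.0))  # opposite (170 deg apart)
```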
For these 81 sources, the median $\rho$ is 0.93 mas, and the $X_{\rho}$ values are larger than 3.3. Among them, 54 sources have the directions of the position differences along the jet directions within 25 degrees, with $\rho$ in the range 0.2–28.0 mas; 17 sources have the directions of the position differences opposite to the jet directions, with $\rho$ in the range 0.2–39.1 mas. However, we note several cases in which the jet position angle may have been determined in the opposite direction. Figure 6 shows the images of the source 0743$-$006, one of the ICRF3 defining sources but with a CARMS value of 0.64, at two different epochs. It has two compact components separated by $\sim$1 mas and a fuzzy emission region extending in the north-east direction. The peak of flux changed between the two components from 2010 to 2020. According to its jet motions from model fitting, which are relatively small and weak for this particular source, MOJAVE suggested the core to be the peak of flux in the image from 2010, and consequently the source to have two-sided jets. It seems, however, that the south-west component could be the core, in which case the source actually has a one-sided jet. The jet position angle could then have been determined with an offset of 180 degrees. The source has $\rho$ = 1.1 mas and $X_{\rho}$ = 16.5. If the south-west component is the core, the difference between its $Gaia$ and VLBI positions can be explained by its radio source structure. As can be seen in the right-hand plot, if we move the VLBI position to the next component to the upper left, the $Gaia$ position fits the core very well. Figure 6: MOJAVE images of the source 0743$-$006 (CARMS = 0.64) on 15 Oct. 2010 (left) and 13 Jun. 2020 (right). See the caption of Fig. 4 for the plot design. The peak of flux changed between the two components from 2010 to 2020, as indicated by the red dots.
Based on its jet motions from model fitting, it was suggested in the MOJAVE project that the source has two-sided jets and that the core is located close to the component marked by the red dot in the left plot. It seems possible, however, that the south-west component is the core, meaning that the source has a one-sided jet. We further discuss two cases of extremely large and statistically significant position differences between VLBI and $Gaia$, the sources 0923$+$392 and 0429$+$415, which can be explained by their radio structure. Their MOJAVE images are shown in Fig. 7, with the relative positions between VLBI and $Gaia$ illustrated. The figure demonstrates that the source positions from geodetic VLBI are dominated by the positions of the peak fluxes, whereas the optical positions from $Gaia$ are located close to the cores. The separations between the cores and the jets of the CRF sources are typically at the mas level, as demonstrated in Figs. 4 and 6, and can reach tens of mas, as shown in Fig. 7. We should emphasize that for a significant number of sources the VLBI position seems to be that of a jet component rather than of the core. Without absolute position information in the MOJAVE images, however, we do not know where the VLBI position really is. Since the separation between the VLBI position and the core in the MOJAVE images is large when the VLBI position falls on a different jet component, as for these two sources, phase-referencing observations can determine the positions of the jet components with sufficient accuracy to locate the VLBI position within the image. This will eventually help us understand where the $Gaia$ position lies. One should also note from Fig. 7 that, since the cores of these two sources are not the brightest components, they are difficult to identify from radio images without spectral-index images, which can lead to a 180-degree error in determining jet position angles.
Figure 7: Explanation of the large $Gaia$-VLBI position differences for two sources, 0923+392 with $\rho$ = 2.7 mas (4C39.25, CARMS = 0.80, left) and 0429+415 with $\rho$ = 39.1 mas (CARMS = 0.83, right), based on the MOJAVE images. See the caption of Fig. 4 for the plot design. According to the spectral-index images from the MOJAVE project, the cores are not the brightest components in the images. The core of the source 0923+392 is the western, weak component, and the core of the source 0429+415 is the north-east component. Their $Gaia$ positions are located close to the cores, given that the VLBI positions are located at the peaks of flux. These two sources strongly demonstrate the effects of source structure on the position differences between VLBI and $Gaia$: the source positions from geodetic VLBI are dominated by the positions of the peak fluxes, whereas the optical positions from $Gaia$ are located close to the cores. To conclude, our study suggests that radio source structure is one of the major factors causing the position differences, and that the optical jet structure tends to be strong as well for the sources with extended structure at cm wavelengths.

## 5 Conclusion

We draw the following conclusions based on the position differences between the $Gaia$ EDR3 and the ICRF3:

1. The arc lengths $\rho$ of the $Gaia$ and VLBI position differences increase with the CARMS values.

2. The majority of the sources with statistically significant arc lengths, $X_{\rho}$ > 4, are associated with extended sources. For instance, the median CARMS of the 432 sources with $X_{\rho}$ > 4 is 0.42, while that of the remaining 2028 sources is only 0.22.

3. Of the sources with $\rho$ > 4.0 mas and $X_{\rho}$ > 4, the majority, 70 percent, have extended structure. The source 0429+415 has been used as an example to demonstrate this, based on the MOJAVE image shown in Fig. 7.

4.
Distinct relations between the optical magnitudes and the redshifts are found for the sources with and without statistically significant position differences. The sources with $X_{\rho}$ > 4 have substantially smaller redshifts, by $\sim$0.3. Our study suggests that a small fraction of these sources may be associated with weak but nearby (small-redshift) optical objects.

5. We argue that core shift can contribute to the position differences if the source has extended structure.

6. The $Gaia$ and VLBI position differences can be well explained through the radio images for several example sources. The vectors of the $Gaia$ and VLBI position differences are parallel to the radio-jet directions, which is confirmed here with stronger evidence.

###### Acknowledgements. We would like to thank the reviewer François Mignard for his helpful comments. This research has made use of data from the MOJAVE database, which is maintained by the MOJAVE team (Lister et al., 2018). All components of the International VLBI Service for Geodesy and Astrometry are deeply appreciated for providing the VLBI observations. This research was supported by the Academy of Finland project No. 315721 and the National Natural Science Foundation of China No. 11973023. SL is supported by the DFG grant No. HE59372-2.

## References

* Anderson & Xu (2018) Anderson, J. M. & Xu, M. H. 2018, Journal of Geophysical Research (Solid Earth), 123, 10,162 * Arshakian et al. (2010) Arshakian, T. G., Torrealba, J., Chavushyan, V. H., et al. 2010, A&A, 520, A62 * Blandford & Königl (1979) Blandford, R. D. & Königl, A. 1979, ApJ, 232, 34 * Charlot (1990a) Charlot, P. 1990a, A&A, 229, 51 * Charlot (1990b) Charlot, P. 1990b, AJ, 99, 1309 * Charlot et al. (2020) Charlot, P., Jacobs, C. S., Gordon, D., et al. 2020, arXiv e-prints, arXiv:2010.13625 * de Bruijne et al. (2014) de Bruijne, J. H. J., Rygl, K. L. J., & Antoja, T. 2014, in EAS Publications Series, Vol.
67-68, EAS Publications Series, 23–29
* Fey & Charlot (1997) Fey, A. L. & Charlot, P. 1997, ApJS, 111, 95
* Fey et al. (2015) Fey, A. L., Gordon, D., Jacobs, C. S., et al. 2015, AJ, 150, 58
* Gaia Collaboration et al. (2018a) Gaia Collaboration, Brown, A. G. A., Vallenari, A., et al. 2018a, A&A, 616, A1
* Gaia Collaboration et al. (2020) Gaia Collaboration, Brown, A. G. A., Vallenari, A., et al. 2020, arXiv e-prints, arXiv:2012.01533
* Gaia Collaboration et al. (2016) Gaia Collaboration, Brown, A. G. A., Vallenari, A., et al. 2016, A&A, 595, A2
* Gaia Collaboration et al. (2018b) Gaia Collaboration, Mignard, F., Klioner, S. A., et al. 2018b, A&A, 616, A14
* Kovalev et al. (2008) Kovalev, Y. Y., Lobanov, A. P., Pushkarev, A. B., & Zensus, J. A. 2008, A&A, 483, 759
* Kovalev et al. (2017) Kovalev, Y. Y., Petrov, L., & Plavin, A. V. 2017, A&A, 598, L1
* Kovalev et al. (2020) Kovalev, Y. Y., Zobnina, D. I., Plavin, A. V., & Blinov, D. 2020, MNRAS, 493, L54
* Lindegren et al. (2018) Lindegren, L., Hernández, J., Bombrun, A., et al. 2018, A&A, 616, A2
* Lindegren et al. (2020) Lindegren, L., Klioner, S. A., Hernández, J., et al. 2020, arXiv e-prints, arXiv:2012.03380
* Lister et al. (2018) Lister, M. L., Aller, M. F., Aller, H. D., et al. 2018, ApJS, 234, 12
* Lobanov (1998) Lobanov, A. P. 1998, A&A, 330, 79
* Lunz et al. (2019) Lunz, S., Anderson, J., Heinkelmann, R., Xu, M. H., & Schuh, H. 2019, in Poster of the 24th European VLBI Group for Geodesy and Astrometry Working Meeting, ed. R. Haas, S. Garcia-Espada, & J. A. López Fernández, 0
* Ma et al. (2009) Ma, C., Arias, E. F., Bianco, G., et al. 2009, IERS Technical Note, 35, 1
* Ma et al. (1998) Ma, C., Arias, E. F., Eubanks, T. M., et al. 1998, AJ, 116, 516
* Makarov et al. (2019) Makarov, V. V., Berghea, C. T., Frouard, J., Fey, A., & Schmitt, H. R. 2019, ApJ, 873, 132
* Malkin (2018) Malkin, Z. 2018, ApJS, 239, 20
* Marcaide et al. (1985) Marcaide, J. M., Shapiro, I. I., Corey, B. E., et al.
1985, A&A, 142, 71
* Mignard et al. (2016) Mignard, F., Klioner, S., Lindegren, L., et al. 2016, A&A, 595, A5
* Niell et al. (2007) Niell, A., Whitney, A., Petrachenko, W., et al. 2007, VLBI2010: a Vision for Future Geodetic VLBI, ed. P. Tregoning & C. Rizos, 757
* Nothnagel et al. (2017) Nothnagel, A., Artz, T., Behrend, D., & Malkin, Z. 2017, Journal of Geodesy, 91, 711
* Perryman et al. (2001) Perryman, M. A. C., de Boer, K. S., Gilmore, G., et al. 2001, A&A, 369, 339
* Petrachenko et al. (2009) Petrachenko, B., Niell, A., Behrend, D., et al. 2009, Design Aspects of the VLBI2010 System. Progress Report of the IVS VLBI2010 Committee, June 2009, Tech. rep.
* Petrov & Kovalev (2017a) Petrov, L. & Kovalev, Y. Y. 2017a, MNRAS, 471, 3775
* Petrov & Kovalev (2017b) Petrov, L. & Kovalev, Y. Y. 2017b, MNRAS, 467, L71
* Petrov et al. (2019) Petrov, L., Kovalev, Y. Y., & Plavin, A. V. 2019, MNRAS, 482, 3023
* Plavin et al. (2019a) Plavin, A. V., Kovalev, Y. Y., & Petrov, L. Y. 2019a, ApJ, 871, 143
* Plavin et al. (2019b) Plavin, A. V., Kovalev, Y. Y., Pushkarev, A. B., & Lobanov, A. P. 2019b, MNRAS, 485, 1822
* Porcas (2009) Porcas, R. W. 2009, A&A, 505, L1
* Pushkarev et al. (2012) Pushkarev, A. B., Hovatta, T., Kovalev, Y. Y., et al. 2012, A&A, 545, A113
* Schuh & Behrend (2012) Schuh, H. & Behrend, D. 2012, Journal of Geodynamics, 61, 68
* Sokolovsky et al. (2011) Sokolovsky, K. V., Kovalev, Y. Y., Pushkarev, A. B., & Lobanov, A. P. 2011, A&A, 532, A38
* Xu et al. (2019) Xu, M. H., Anderson, J. M., Heinkelmann, R., et al. 2019, ApJS, 242, 5
* Xu et al. (2020a) Xu, M. H., Anderson, J. M., Heinkelmann, R., et al. 2020a, Submitted to Journal of Geodesy
* Xu et al. (2016) Xu, M. H., Heinkelmann, R., Anderson, J. M., et al. 2016, AJ, 152, 151
* Xu et al. (2020b) Xu, M. H., Savolainen, T., Zubko, N., et al. 2020b, Earth and Space Science Open Archive, 42
# How many data clusters are in the Galaxy data set? Bayesian cluster analysis in action

Bettina Grün, Gertraud Malsiner-Walli and Sylvia Frühwirth-Schnatter

WU Vienna University of Economics and Business

In model-based clustering, the Galaxy data set is often used as a benchmark data set to study the performance of different modeling approaches. Aitkin (2001) compares maximum likelihood and Bayesian analyses of the Galaxy data set and expresses reservations about the Bayesian approach due to the fact that the prior assumptions imposed remain rather obscure while playing a major role in the results obtained and conclusions drawn. The aim of the paper is to address Aitkin’s concerns about the Bayesian approach by shedding light on how the specified priors impact the number of estimated clusters. We perform a sensitivity analysis of different prior specifications for the mixture of finite mixture model, i.e., the mixture model where a prior on the number of components is included. We use an extensive set of different prior specifications in a full factorial design and assess their impact on the estimated number of clusters for the Galaxy data set. Results highlight the interaction effects of the prior specifications and provide insights into which prior specifications are recommended to obtain a sparse clustering solution. A clear understanding of the impact of the prior specifications removes reservations about using Bayesian methods that arise from the complexity of selecting suitable priors. Also, the regularizing properties of the priors may be intentionally exploited to obtain a suitable clustering solution meeting prior expectations and needs of the application. Keywords. Bayes, cluster analysis, Galaxy data set, mixture model, prior specification.
## 1 Introduction This paper investigates the impact of different prior specifications on the results obtained in Bayesian cluster analysis based on mixture models. Mixture models may be used either to approximate arbitrary densities in a semi-parametric way or, in a model-based clustering context, to identify groups in the data. We will focus on the latter application, where each component is assumed to potentially represent a data cluster and the cluster distribution is not approximated by several mixture components. Hennig and Liao (2013) claim that “there are no unique ‘true’ or ‘best’ clusters in a data set” but that the prototypical shape of a cluster needs to be specified before this question can be answered. For clustering methods using mixture models, the prototypical shape of a cluster is in general specified by selecting the component-specific distributions. For the fitted mixture model, a one-to-one relationship between components and clusters is then assumed. For example, in the case of multivariate metric data one can specify isotropic Gaussian distributions as component distributions, where the variance is comparable across components, or Gaussian distributions with arbitrary variance-covariance matrices, which are allowed to vary considerably across components (see, for example, Fraley and Raftery, 2002). The Bayesian framework provides a principled approach to specify the prototypical shape of the clusters. By specifying priors on the model parameters, both the mean prototypical shape and the variability around this prototypical shape are included, i.e., what the shape is on average as well as how much the component distributions vary from one another across components. In this sense, compared to other clustering methods, the Bayesian approach provides more flexibility to incorporate the prototypical shape of a cluster into the analysis and hence to arrive at a cluster solution suitable for the specific analysis undertaken.
In addition, the Bayesian framework allows one to specify a prior on the component weights, thus influencing whether the clusters are a-priori assumed to be rather balanced in size or to include both very small and very large clusters. By contrast, for example, $k$-means clustering assumes that the clusters have an isotropic shape with similar cluster size and volume (see, for example, Grün, 2019). However, the additional flexibility provided by the Bayesian approach might also be perceived as overwhelming, in particular, if the influence of different prior specifications on results obtained remains rather opaque. Aitkin (2001) compares maximum likelihood and Bayesian analyses of mixture models and expresses reservations about the Bayesian approach due to the fact that the prior assumptions imposed remain rather obscure while playing a major role in the results obtained and conclusions drawn. Certainly having sufficient insight into the influence of prior specifications on the clustering results is crucial in order to leverage the advantages of the Bayesian approach, where the priors may be used to regularize the problem and also guide the analysis to focus on the clustering results of interest. In the following we consider the mixture of finite mixture model (MFM), a name coined by Miller and Harrison (2018) following Richardson and Green (1997). The MFM is a hierarchical finite mixture model where a prior on the number of components $K$ is included. We focus on the MFM because the Bayesian analysis of the MFM results in an a-posteriori distribution of the number of data clusters $K_{+}$ as well as an a-posteriori distribution of partitions $\mathcal{C}$. These are both core components of a Bayesian cluster analysis to address the questions of how many data clusters there are in the data set and how the observations should be grouped into them.
Note that in our analyses of the MFM, we make a crucial distinction between $K$, the number of components in the mixture distribution, and $K_{+}$, the number of filled components, to which observations are actually assigned given the partition which groups the observations. However, only a filled component corresponds to a data cluster. This implies that, when estimating the number of clusters in the data, the posterior of $K_{+}$ is of interest, rather than the posterior of $K$. We will thus not only investigate the prior on $K$, but also explicitly inspect the prior on $K_{+}$, which is induced by the prior on $K$ and the prior on the mixture weights. In the analysis of the results focus is given to the posterior of $K_{+}$ (rather than $K$), determining in particular the mode of this distribution and its entropy. We illustrate the impact of different prior specifications using a MFM of univariate Gaussian distributions for the (in-)famous Galaxy data set originally introduced to the statistical literature by Roeder (1990). Several results obtained for this data set using either maximum likelihood estimation or Bayesian analysis methods were compared and discussed in Aitkin (2001). Aitkin (2001) concluded that the maximum likelihood analysis, while having complications of its own, would be rather straightforward to implement and be well understood. By contrast Aitkin (2001) formulated a call for action with respect to the Bayesian analysis, asking for a careful analysis of the role of the priors. This paper aims at responding to this call for action. ## 2 Model specification In our specification of the MFM model with Gaussian component distributions, the following data generation process is assumed for a univariate data set of size $n$ given by $\bm{y}=(y_{1},\ldots,y_{n})$ (see also Richardson and Green, 1997). One assumes that the number of components $K$ of the mixture model is sampled from the prior $p(K)$. 
Given $K$, the component weights $\bm{\eta}=(\eta_{1},\ldots,\eta_{K})$ are sampled from a symmetric $K$-dimensional Dirichlet distribution with parameter $\gamma_{K}$. For each observation $i$, component assignments $S_{i}$ are drawn from a multinomial distribution with parameter $\bm{\eta}$. Regarding the Gaussian component distributions, the component means $\mu_{k}$ and the component variances $\sigma^{2}_{k}$, $k=1,\ldots,K$, are independently drawn from the same prior distributions to ensure exchangeability. The component means $\mu_{k}$ are drawn from the normal distribution with mean $b_{0}$ and variance $B_{0}$, while the component precisions $\sigma^{-2}_{k}$, i.e., the inverse variances, are assumed to follow a Gamma distribution with parameters $c_{0}$ and $C_{0}$ (and expectation $c_{0}/C_{0}$). Note that for the component distributions the independence prior is employed rather than the conjugate prior for the normal distribution with unknown mean and variance. If instead the conjugate prior had been used, the component-specific variances would influence the prior variability of the component means. This would imply that components which have less variability also have their mean closer to the prior mean $b_{0}$. This implication is in general not appealing in the mixture context and hence the independence prior is used. For a more detailed discussion of the priors for the component distributions see Frühwirth-Schnatter (2006, Chapter 6).
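This generative process can be sketched in a few lines of code. The sketch below is our own illustration: the hyperparameter values are placeholders, and a shifted geometric prior on $K$ stands in for a generic $p(K)$ (the priors actually compared are discussed in Section 4):

```python
import numpy as np

def sample_mfm_prior(n, rng, gamma=1.0, b0=20.0, B0=100.0, c0=2.0, C0=1.0):
    """Draw one data set of size n from the MFM generative process.
    The shifted geometric prior on K is a placeholder for p(K)."""
    K = int(rng.geometric(0.1))                 # K - 1 ~ Geom(0.1), so K >= 1
    eta = rng.dirichlet(np.full(K, gamma))      # weights ~ Dirichlet_K(gamma)
    S = rng.choice(K, size=n, p=eta)            # component assignments
    mu = rng.normal(b0, np.sqrt(B0), size=K)    # means ~ N(b0, B0)
    # precisions ~ G(c0, C0) with rate C0, i.e. scale 1/C0 in NumPy
    sigma2 = 1.0 / rng.gamma(c0, 1.0 / C0, size=K)
    y = rng.normal(mu[S], np.sqrt(sigma2[S]))   # observations
    return y, S, K

y, S, K = sample_mfm_prior(82, np.random.default_rng(1))
```

Note the rate/scale convention: NumPy's `gamma` is parameterized by shape and scale, so a $\mathcal{G}(c_{0},C_{0})$ prior with rate $C_{0}$ uses scale $1/C_{0}$, giving the stated prior expectation $c_{0}/C_{0}$ for the precision.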
Summarizing, this specification results in the following Bayesian hierarchical MFM model: $\displaystyle K$ $\displaystyle\sim p(K),$ $\displaystyle\bm{\eta}|K$ $\displaystyle\sim\mathcal{D}_{K}(\gamma_{K}),$ $\displaystyle S_{i}|\bm{\eta}$ $\displaystyle\sim\mathcal{M}(\bm{\eta}),\quad i=1,\ldots,n,$ (2.1) $\displaystyle\mu_{k}|b_{0},B_{0}$ $\displaystyle\sim\mathcal{N}(b_{0},B_{0}),\quad k=1,\ldots K,$ $\displaystyle\sigma^{-2}_{k}|c_{0},C_{0}$ $\displaystyle\sim\mathcal{G}(c_{0},C_{0}),\quad k=1,\ldots K,$ $\displaystyle y_{i}|\bm{\mu},\bm{\sigma}^{2},S_{i}=k$ $\displaystyle\sim\mathcal{N}(\mu_{k},\sigma^{2}_{k}),\quad i=1,\ldots,n,$ where $\bm{\mu}=(\mu_{k})_{k=1,\ldots,K}$ and $\bm{\sigma}^{2}=(\sigma_{k}^{2})_{k=1,\ldots,K}$. Additionally, hyperpriors might be specified. For example, Richardson and Green (1997) suggest to specify a hyperprior on $C_{0}$ and Malsiner-Walli et al. (2016) add an additional layer for the prior on the component means which corresponds to a shrinkage prior allowing for variable selection. However, in the following we do not consider adding further hyperpriors in order to better assess the influence of different specifications for these priors and their parameters on the clustering results. Thus in this paper we focus on the specification of the following priors and parameters: * • the prior $p(K)$ of the number of components $K$, * • the value $\gamma_{K}$ used for the Dirichlet prior, * • the prior parameters $b_{0}$ and $B_{0}$ for the component means, * • the prior parameters $c_{0}$ and $C_{0}$ for the component variances. ## 3 The Galaxy data set in statistics The Galaxy data set was originally published in astronomy by Postman et al. (1986) and consists of univariate measurements representing velocities of galaxies, moving away from our galaxy. In this original publication 83 observations are listed. 
Roeder (1990) introduced the data set to the statistics literature, but omitted the smallest observation such that in the following in the statistics literature only 82 observations were considered. Unfortunately, Roeder (1990) also introduced a typo, i.e., one observation has a different value than in Table 1 in Postman et al. (1986). A further influential statistics publication using the Galaxy data set was Richardson and Green (1997), who also considered only the 82 observations, but corrected the typo and scaled the values by 1000. The data set was used in statistics by a number of authors to demonstrate density estimation methods and investigate mixture model methods. They used either the version presented by Roeder (1990) or by Richardson and Green (1997). A number of textbooks on applied statistics also use the data set to demonstrate different statistical methods (see, e.g., Lunn et al., 2012; Hothorn and Everitt, 2014). In the following we will use the Galaxy data set as used by Richardson and Green (1997). A histogram of the data set is given in Figure 3. This version of the data set was also used by Aitkin (2001) when comparing maximum likelihood and Bayesian analysis methods for estimating mixture models, focusing in particular on the question of the number of data clusters in the data set. Within the maximum likelihood framework, Aitkin (2001) considered mixtures of univariate Gaussian distributions with equal as well as unequal variances. The mixture models were fitted using the EM algorithm (Dempster et al., 1977) and for each class of component distributions, the number of components was selected based on the results of a bootstrap likelihood ratio test (Aitkin et al., 1981; McLachlan, 1987). This maximum likelihood analysis may easily be replicated using the R package mclust (Scrucca et al., 2016).
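The EM fit for the unequal-variance case is straightforward to reproduce; the sketch below is a minimal NumPy implementation of our own (not the mclust code), omitting the bootstrap likelihood ratio test:

```python
import numpy as np

def em_gmm_1d(y, K, iters=300, seed=0):
    """Plain EM for a univariate K-component Gaussian mixture with
    component-specific (unequal) variances; returns means, variances,
    weights and the final log-likelihood."""
    rng = np.random.default_rng(seed)
    mu = rng.choice(y, size=K, replace=False).astype(float)  # init at data points
    var = np.full(K, y.var())
    w = np.full(K, 1.0 / K)

    def comp_dens():  # n x K matrix of weighted component densities
        return w * np.exp(-0.5 * (y[:, None] - mu) ** 2 / var) \
                 / np.sqrt(2.0 * np.pi * var)

    for _ in range(iters):
        r = comp_dens()
        r /= r.sum(axis=1, keepdims=True)          # E-step: responsibilities
        Nk = r.sum(axis=0)                         # M-step: weighted moments
        w = Nk / len(y)
        mu = (r * y[:, None]).sum(axis=0) / Nk
        var = (r * (y[:, None] - mu) ** 2).sum(axis=0) / Nk
    loglik = np.log(comp_dens().sum(axis=1)).sum()
    return mu, var, w, loglik
```

For the equal-variance model the M-step would instead pool the weighted squared deviations over all components into a single variance estimate; in practice one would also restart from several initializations and keep the best log-likelihood.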
Based on the maximum likelihood results, Aitkin (2001) concludes that “there is convincing evidence of three equal variance components, or four unequal variance components, but no convincing evidence of more than these numbers, in the velocity data” (p. 296). In addition, Aitkin (2001) reviews the Bayesian analysis of the Galaxy data set presented in Escobar and West (1995), Carlin and Chib (1995), Phillips and Smith (1996), Roeder and Wasserman (1997) and Richardson and Green (1997). Table 3 in Aitkin (2001), according to its caption, summarizes the posterior distributions of $K$. However, in fact for the Dirichlet process mixture fitted by Escobar and West (1995), the posterior distribution of $K_{+}$ is given. The Bayesian approaches compared differ considerably with respect to the prior on $K$ and the prior on the component-specific variances and lead to rather diverse results. Aitkin (2001) concludes that some analyses result in overwhelming posterior evidence for three groups, while other posterior distributions are either relatively diffuse over 4–9 with a mode around 6–7 or are concentrated on the range 7–9. Overall the cluster solutions for the Galaxy data set are interpreted either as being sparse, with up to four clusters, or as containing many, i.e., more than four, clusters. ## 4 Prior specifications In this section, we discuss possible specifications and previous suggestions in the literature for each of the prior distributions and their parameters, taking into account in particular those considered in the Bayesian analyses reviewed in Aitkin (2001). This informs the choice of specific prior settings to be used for a systematic sensitivity analysis of the influence of different prior specifications on the clustering results for the Galaxy data set. We also discuss our expectation regarding the effect of these prior specifications on the cluster solutions obtained, focusing in particular on the estimated number of data clusters.
### 4.1 Prior on $K$ Frühwirth-Schnatter et al. (2020) provide an overview of previously used priors on $K$ including the uniform distribution (Richardson and Green, 1997), the truncated Poisson distribution (Phillips and Smith, 1996; Nobile, 2004) and the shifted geometric distribution (Miller and Harrison, 2018). They also propose the shifted beta-negative-binomial (BNB) distribution as a suitable alternative which represents a generalization of the Poisson and the geometric distribution. Based on this overview, we consider the following priors on $K$: * • the uniform distribution $\text{U}(1,30)$ with prior mean $\mathbb{E}[K]=15.5$ and prior variance $\mathbb{V}[K]=74.9$ (Richardson and Green, 1997), * • the truncated Poisson distribution $\text{trPois}(3)$ with prior mean $\mathbb{E}[K]=3.2$ and prior variance $\mathbb{V}[K]=2.7$ (Phillips and Smith, 1996), * • the shifted geometric distribution $K-1\sim\text{Geom}(0.1)$ with prior mean $\mathbb{E}[K]=10$ and prior variance $\mathbb{V}[K]=90$ (Miller and Harrison, 2018), * • the shifted BNB distribution $K-1\sim\text{BNB}(1,4,3)$ with prior mean $\mathbb{E}[K]=2$ and prior variance $\mathbb{V}[K]=4$ (Frühwirth-Schnatter et al., 2020). These priors essentially cover all Bayesian MFM analyses reviewed and compared by Aitkin (2001). The only exceptions are Carlin and Chib (1995) who perform model selection to decide between a 3- and a 4-component solution and Roeder and Wasserman (1997) who use a uniform distribution with support $[1,10]$. The proposed priors for $K$ differ considerably in the prior means and variances induced. The shifted $\text{BNB}(1,4,3)$ has the smallest prior mean; the truncated Poisson distribution has the smallest prior variance, with only a slightly higher prior mean. We expect the two prior distributions $\text{trPois}(3)$ and the shifted $\text{BNB}(1,4,3)$, which have comparable, small means, to induce cluster solutions with fewer data clusters compared to the other two priors.
We expect this behavior to be most pronounced for the truncated Poisson distribution, because it has the smallest variance and thus puts only very little mass on large values of $K$; e.g., the probability of $K>10$ is less than 0.001. ### 4.2 Prior parameter $\gamma_{K}$ for the component weights All Bayesian MFM analyses considered in Aitkin (2001) are based on an MFM with $\gamma_{K}\equiv 1$. However, as will be demonstrated in Section 4.3, the Dirichlet parameter $\gamma_{K}$ crucially affects the prior on $K_{+}$, since it determines how closely the prior on $K_{+}$ follows the prior on $K$. A more detailed discussion on the specification of $\gamma_{K}$ for the MFM is given in Frühwirth-Schnatter et al. (2020). Frühwirth-Schnatter et al. (2020) suggest using an arbitrary sequence for the Dirichlet parameter $\gamma_{K}$ which might depend on the number of components $K$. They distinguish two special cases: the static MFM where $\gamma_{K}\equiv\gamma$ and the dynamic MFM where $\gamma_{K}=\alpha/K$. McCullagh and Yang (2008) already discussed these two special cases, indicating that they are structurally different. While previous applications of the MFM focused on the static case, the Dirichlet process mixture model is included in the dynamic case. In the following we will consider the static as well as the dynamic MFM using in the static case $\gamma\in\\{0.01,1,10\\}$ and in the dynamic case $\alpha\in\\{0.01,1,10\\}$. Thus, in addition to the popular choice $\gamma\equiv 1$, we consider also a much smaller value of $\gamma$ and $\alpha$ as well as a much larger value. The much smaller values are expected to induce a sparse cluster solution with only very few data clusters and thus also achieve a certain independence from the specification of the prior on $K$.
The much larger values are expected to induce cluster solutions with rather equally sized data clusters and also a stronger link between the number of data clusters and the number of components, which implies a larger influence of the prior on $K$ in this setting. We expect that the dynamic MFM leads to sparser solutions than the static MFM. ### 4.3 Induced prior of the number of data clusters $K_{+}$ As the posterior of $K_{+}$, the number of filled components, is the aim of the analysis, it is illuminating to study the prior on $K_{+}$. The prior on $K_{+}$ is implicitly induced through the specification of the prior on $K$ and the prior parameter $\gamma_{K}$. Frühwirth-Schnatter et al. (2020) and Greve et al. (2020) present formulas to derive this implicit prior in a computationally efficient way. We also investigate the prior on $K_{+}$ induced by the prior specifications on $K$ and $\gamma_{K}$ considered for the Galaxy data set to further gauge our prior expectations of the influence of these prior specifications on the cluster solutions obtained. Figure 1: The prior probabilities of $K$ (in blue) and $K_{+}$ (in red) for the static MFM for different priors on $K$ and values for $\gamma$ with $n=82$. Figure 2: The prior probabilities of $K$ (in blue) and $K_{+}$ (in red) for the dynamic MFM for different priors on $K$ and values for $\alpha$ with $n=82$. Using $n=82$ – the sample size of the Galaxy data set – the priors on $K$ (in blue) and on $K_{+}$ (in red) are visualized by bar plots in Figure 1 for the static MFM and in Figure 2 for the dynamic MFM. The different priors on $K$ are in the columns and the values $\gamma\in\\{0.01,1,10\\}$ and $\alpha\in\\{0.01,1,10\\}$ are in the rows. The priors on $K$ are ordered by their prior mean of $K^{2}$, i.e., the squared prior mean of $K$ plus the prior variance. In total there are 12 combinations of specifications on $(K,\gamma_{K})$ inducing different priors on the data clusters $K_{+}$.
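The induced prior on $K_{+}$ can also be checked by forward simulation instead of the exact formulas of Frühwirth-Schnatter et al. (2020) and Greve et al. (2020). The Monte Carlo sketch below is our own illustration, shown for the $\text{U}(1,30)$ prior on $K$ only, and reproduces the qualitative pattern discussed for Figures 1 and 2:

```python
import numpy as np

def prior_Kplus_mc(conc, dynamic, n=82, draws=5000, seed=0):
    """Monte Carlo draws from the prior on K+ (number of filled components)
    induced by K ~ U(1, 30) and a symmetric Dirichlet on the weights."""
    rng = np.random.default_rng(seed)
    kplus = np.empty(draws, dtype=int)
    for t in range(draws):
        K = int(rng.integers(1, 31))                  # K ~ U(1, 30)
        g = conc / K if dynamic else conc             # dynamic: gamma_K = alpha/K
        # sample Dirichlet via Gammas; guard against underflow for tiny g
        raw = np.maximum(rng.gamma(g, 1.0, size=K), 1e-300)
        S = rng.choice(K, size=n, p=raw / raw.sum())  # component assignments
        kplus[t] = np.unique(S).size                  # filled components
    return kplus

# Small concentrations force a sparse prior on K+;
# large ones tie the prior on K+ closely to the prior on K.
for g in (0.01, 1.0, 10.0):
    print(f"static gamma={g}: E[K+] = {prior_Kplus_mc(g, False).mean():.1f}")
```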
Comparing Figure 1 with Figure 2 indicates that in general the dynamic MFM leads to priors on $K_{+}$ inducing stochastically smaller values. Figure 1 clearly indicates that in the static case only $\gamma=0.01$ leads to a sparse prior on the number of data clusters $K_{+}$ and that the impact of the prior on $K$ increases with increasing $\gamma$. For $\gamma=10$, the two priors $p(K)$ and $p(K_{+})$ are essentially the same. For the dynamic case shown in Figure 2, the prior on the number of data clusters $K_{+}$ induces a very sparse solution for $\alpha=0.01$ regardless of the prior on $K$. For $\alpha=1$, the prior on $K_{+}$ is sparser than the prior on $K$ but the induced prior still varies considerably depending on the selected prior on $K$. For $\alpha=10$ a close link between the priors on $K$ and $K_{+}$ is discernible if the prior on $K$ puts essentially all mass on small values of $K$, while still considerable differences between these two priors are visible for the shifted geometric prior and the uniform prior on $K$ which assign substantial mass to values $K>10$. In summary, if a sparse cluster solution is of interest, a sparse prior on $K_{+}$ should also be specified. This can be achieved by specifying a sparse prior on $K$ and/or small values for $\gamma/\alpha$. In contrast, a flat prior on $K$ (e.g., $\text{U}(1,30)$) and large values of $\gamma/\alpha$ will a-priori support large values of $K_{+}$ (i.e., larger than $4$). ### 4.4 Prior parameters $b_{0}$ and $B_{0}$ for the component means Richardson and Green (1997) proposed to use empirical Bayes estimates for $b_{0}$ and $B_{0}$ which correspond to the midpoint of the observed data range for $b_{0}$ and the squared length of the observed data range $R^{2}$ for $B_{0}$. This choice makes the prior invariant to the scaling of the data, i.e., invariant to the units of the data used or standardization of the data.
Richardson and Green (1997) argue that this is a sensible weakly informative prior which does not constrain the component means and does not encourage mixtures with close component means. They perform a sensitivity analysis for this prior by considering values from $R^{2}/10^{2}$ to $R^{2}$ for $B_{0}$, indicating for the Acidity data set that the number of components is inverse U-shaped as a function of $B_{0}$, first increasing with increasing values of $B_{0}$ and then decreasing again. In the following we also use the midpoint of the data for $b_{0}$. For $B_{0}$ we vary the values to assess the impact on the number of data clusters by considering the values $B_{0}\in\\{6.3,20,100,630\\}$. The extreme values correspond to the limits $R^{2}/10^{2}$ and $R^{2}$ considered by Richardson and Green (1997), 20 corresponds to the empirical variance of the data and Phillips and Smith (1996) used 100 in their analysis. Figure 3: The prior distributions for the component means $\mu_{k}\sim N(b_{0},B_{0})$ with $b_{0}$ equal to the data midpoint and $B_{0}\in\\{6.3,20,100,630\\}$ together with a histogram of the Galaxy data set. Figure 3 visualizes the prior distributions for the component means considered together with a histogram of the Galaxy data set. $B_{0}=R^{2}=630$ induces a weakly informative prior as suggested by Richardson and Green (1997) with approximately the same prior density values assigned to all data values observed. $B_{0}=R^{2}/100=6.3$ induces the tightest prior for the component means and assigns very low prior density values to the extreme data values, thus shrinking the prior component means to $b_{0}$. The smallest value for $B_{0}$ seems problematic as hardly any weight is assigned to values above 30, where, however, the histogram would suggest that the center of a small data cluster is located. We consider this rather extreme range of $B_{0}$ values to also assess for the Galaxy data set whether the inverse U-shape for the estimated number of data clusters is observed.
### 4.5 Prior parameters $c_{0}$ and $C_{0}$ for the component variances Richardson and Green (1997) propose to use $\sigma^{-2}_{k}\sim\mathcal{G}(c_{0},C_{0})$ with a hierarchical prior on $C_{0}$, but also assess differences in results for a fixed and a random $C_{0}$. As we are interested in assessing the impact of different prior specifications, we only consider the case of fixed values for $C_{0}$. Following Escobar and West (1995), Phillips and Smith (1996) and Richardson and Green (1997), we use $c_{0}=2$. We consider $C_{0}\in\\{0.5,1,5,12.5\\}$, where $C_{0}=0.5$ is used in Phillips and Smith (1996), $C_{0}=1$ in Escobar and West (1995), and $C_{0}=12.5$ corresponds to the mean value considered for the random $C_{0}$ in Richardson and Green (1997). Figure 4: The prior distributions for $4\sigma_{k}$ induced by the prior on the component precisions $\sigma_{k}^{-2}\sim\mathcal{G}(c_{0},C_{0})$ with $c_{0}=2$ and $C_{0}\in\\{0.5,1,5,12.5\\}$ together with a histogram of the Galaxy data set. Figure 4 visualizes the prior distributions for the component variances considered together with a histogram of the Galaxy data set. The priors induced for $4\sigma_{k}$ are visualized. These values correspond to the length of the 95% prediction interval for a single component and might thus be seen as representing the volume considered for the components and hence reflect the prototypical shape imposed for the clusters. Clearly $C_{0}=0.5$ or $C_{0}=1$ induce prior standard deviations which allow components to capture the extreme observations in data clusters of their own, whereas $C_{0}=12.5$ suggests approximating the data with overlapping component distributions. Smaller $C_{0}$ values induce a more fine-grained density approximation, whereas larger $C_{0}$ values lead to a coarser density approximation and hence we expect the number of estimated data clusters to decrease for increasing $C_{0}$.
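The prior on the component "volume" $4\sigma_{k}$ shown in Figure 4 is easy to tabulate by simulation; the sketch below (our own illustration) shows how the implied component scale grows with $C_{0}$:

```python
import numpy as np

# sigma_k^{-2} ~ G(c0, C0) with rate C0; larger C0 inflates sigma_k
# and hence the 95% prediction-interval length 4*sigma_k.
rng = np.random.default_rng(0)
c0 = 2.0
quartiles = {}
for C0 in (0.5, 1.0, 5.0, 12.5):
    sigma = 1.0 / np.sqrt(rng.gamma(c0, 1.0 / C0, size=100_000))
    quartiles[C0] = np.quantile(4.0 * sigma, [0.25, 0.5, 0.75])
    print(f"C0={C0:5.1f}: 4*sigma quartiles = {np.round(quartiles[C0], 2)}")
```

On the velocity scale of the Galaxy data (range roughly 9 to 35), a median $4\sigma_{k}$ of about 2 for $C_{0}=0.5$ leaves room for tight clusters around extreme observations, whereas the value of about 11 for $C_{0}=12.5$ already spans a large part of the data range.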
## 5 Posterior inference In order to sample the complete parameter vector, which consists of $K$ and, conditional on $K$, of $\bm{\eta}=(\eta_{k})_{k=1,\ldots,K}$, $\bm{\mu}=(\mu_{k})_{k=1,\ldots,K}$, and $\bm{\sigma}^{2}=(\sigma_{k}^{2})_{k=1,\ldots,K}$, from the posterior distribution, a transdimensional sampler is required which is able to sample parameter vectors of varying dimension. We use the telescoping sampler proposed by Frühwirth-Schnatter et al. (2020). This MCMC sampling scheme includes a sampling step where $K$ is explicitly sampled as an unknown parameter, but otherwise requires only sampling steps used for finite mixtures. The posterior inference uses data augmentation and also samples the component assignments $\bm{S}=(S_{i})_{i=1,\ldots,n}$. These component assignments induce partitions, so that the sampling scheme also directly yields the posterior distribution of the partitions $\mathcal{C}=\\{\mathcal{C}_{1},\ldots,\mathcal{C}_{K_{+}}\\}$ of the data and the induced number of data clusters $K_{+}$, with $\mathcal{C}_{k}$ being the index set of observations assigned to the $k$th group of the partition $\mathcal{C}$. To illustrate the connection between the component assignments $\bm{S}$ and the partitions, assume that $K=3$ and $\bm{S}=(2,1,1,2,1,2,1,1,1,1)$ for $n=10$ observations. Then $K_{+}=2$, since no observations are assigned to the third group, and the induced partition is given by $\mathcal{C}=\\{\mathcal{C}_{1},\mathcal{C}_{2}\\}$ with $\mathcal{C}_{1}=\\{2,3,5,7,8,9,10\\}$ and $\mathcal{C}_{2}=\\{1,4,6\\}$. Following Frühwirth-Schnatter et al. (2020), the sampling steps consist of: 1. Update the partition $\mathcal{C}$ by sampling $\bm{S}$ from $p(\bm{S}|\bm{\eta},\bm{\mu},\bm{\sigma}^{2},\bm{y})$ given by $P(S_{i}=k|\bm{\eta},\bm{\mu},\bm{\sigma}^{2},y_{i})\propto\eta_{k}f_{N}(y_{i}|\mu_{k},\sigma^{2}_{k})$. 2. Conditional on $\mathcal{C}$, update the parameters of the non-empty components for $k=1,\ldots,K_{+}$:
(a) Draw the component-specific precisions from the posterior: $\displaystyle\sigma_{k}^{-2}|\mu_{k},\mathcal{C},\bm{y}$ $\displaystyle\sim\mathcal{G}(c_{k},C_{k}),$ with $\displaystyle c_{k}$ $\displaystyle=c_{0}+\frac{1}{2}N_{k},$ $\displaystyle C_{k}$ $\displaystyle=C_{0}+\frac{1}{2}\sum_{i\in\mathcal{C}_{k}}(y_{i}-\mu_{k})^{2},$ where $N_{k}$ is the number of observations assigned to $\mathcal{C}_{k}$, the $k$th group in the partition $\mathcal{C}$. (b) Draw the component-specific means from the posterior: $\displaystyle\mu_{k}|\sigma^{-2}_{k},\mathcal{C},\bm{y}$ $\displaystyle\sim\mathcal{N}(b_{k},B_{k}),$ with $\displaystyle b_{k}$ $\displaystyle=B_{k}(B_{0}^{-1}b_{0}+\sigma_{k}^{-2}N_{k}\bar{y}_{k}),$ $\displaystyle B_{k}$ $\displaystyle=(B_{0}^{-1}+N_{k}\sigma_{k}^{-2})^{-1},$ where $\bar{y}_{k}$ is the sample mean of the observations assigned to $\mathcal{C}_{k}$. 3. Conditional on $\mathcal{C}$, draw a new value of $K$ using $p(K|\mathcal{C})\propto p(\mathcal{C}|K)p(K)$. 4. Add $K-K_{+}$ empty components with component-specific parameters drawn from the priors: $\displaystyle\mu_{k}$ $\displaystyle\sim\mathcal{N}(b_{0},B_{0}),$ $\displaystyle\sigma_{k}^{-2}$ $\displaystyle\sim\mathcal{G}(c_{0},C_{0}),$ for $k=K_{+}+1,\ldots,K$. 5. Conditional on $\bm{N}=(N_{1},\ldots,N_{K_{+}},\bm{0}_{K-K_{+}})$, with $\bm{0}_{K-K_{+}}$ being a vector of $K-K_{+}$ zeros, draw a new value of $\bm{\eta}$: $\displaystyle\bm{\eta}|\bm{N}$ $\displaystyle\sim\mathcal{D}_{K}(\bm{\gamma}),$ with $\bm{\gamma}=(\gamma_{k})_{k=1,\ldots,K}$ and $\displaystyle\gamma_{k}$ $\displaystyle=\gamma_{K}+N_{k}.$ Inspecting the details of the sampling scheme provides insights into how the prior specifications influence the conditional posterior distributions used. The prior specifications of the component-specific parameters influence Steps 2 and 4.
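The sampling steps above can be sketched for univariate Gaussian components as follows. This is a simplified illustration, not the reference implementation: Step 3 (resampling $K$ from $p(K|\mathcal{C})$) is omitted and $K$ is kept fixed, and all function and variable names are ours.

```python
import numpy as np

def telescoping_sweep(y, mu, sigma2, eta, K, rng,
                      b0=0.0, B0=10.0, c0=2.0, C0=1.0, gamma_K=1.0):
    """One sweep of Steps 1, 2, 4, and 5 (K held fixed for simplicity)."""
    n = len(y)
    # Step 1: sample assignments S_i with P(S_i = k) prop. to eta_k N(y_i | mu_k, sigma2_k)
    logp = (np.log(eta)[None, :]
            - 0.5 * np.log(2 * np.pi * sigma2)[None, :]
            - 0.5 * (y[:, None] - mu[None, :]) ** 2 / sigma2[None, :])
    p = np.exp(logp - logp.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    S = np.array([rng.choice(K, p=p[i]) for i in range(n)])

    N = np.bincount(S, minlength=K)
    for k in range(K):
        if N[k] == 0:
            # Step 4: empty component, draw parameters from the priors
            mu[k] = rng.normal(b0, np.sqrt(B0))
            sigma2[k] = 1.0 / rng.gamma(c0, 1.0 / C0)
            continue
        yk = y[S == k]
        # Step 2(a): precision sigma_k^{-2} ~ G(c_k, C_k)
        ck = c0 + 0.5 * N[k]
        Ck = C0 + 0.5 * np.sum((yk - mu[k]) ** 2)
        sigma2[k] = 1.0 / rng.gamma(ck, 1.0 / Ck)
        # Step 2(b): mean mu_k ~ N(b_k, B_k)
        Bk = 1.0 / (1.0 / B0 + N[k] / sigma2[k])
        bk = Bk * (b0 / B0 + N[k] * yk.mean() / sigma2[k])
        mu[k] = rng.normal(bk, np.sqrt(Bk))

    # Step 5: weights from the Dirichlet posterior with gamma_k = gamma_K + N_k
    eta = rng.dirichlet(gamma_K + N)
    return S, mu, sigma2, eta
```

A full telescoping sampler would additionally resample $K$ between Steps 2 and 4, changing the lengths of `mu`, `sigma2`, and `eta` accordingly.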
In Step 2, the updates for $c_{k}$ indicate that $2c_{0}$ might be interpreted as a prior sample size and $C_{0}/c_{0}$ corresponds to the variance assumed for these prior observations. The choice of $c_{0}=2$ thus corresponds to adding 4 observations a-priori to each component with a variance of $C_{0}/2$. If $C_{0}/2$ is larger than the empirical within-cluster variance, then $C_{k}$ is increased, leading to the sampling of inflated $\sigma^{2}_{k}$ values. This in turn induces more overlap across the component densities and thus potentially leads to a sparser solution with fewer clusters estimated. The updates for $b_{k}$ indicate that $b_{k}$ is a weighted mean of the prior value $b_{0}$ and the mean of the observations currently assigned to the cluster. According to the formula for $B_{k}$, the influence of $B_{0}$ decreases for data clusters containing many observations, as the second summand increases with $N_{k}$. There is also an interaction with the estimate of the component-specific variance: larger variances allow the component-specific means to vary more in the posterior updates. For the largest values of $B_{0}$ considered, we would expect the prior influence to be negligible, so that the posterior updates are only influenced by the data points currently assigned to the cluster. Step 3 is influenced by the choice of prior on $K$ and $\gamma_{K}$. More details on this step are given in Frühwirth-Schnatter et al. (2020). The new $K$ is sampled from a distribution with support $K\geq K_{+}$. This distribution is more spread out the more mass the prior on $K$ puts on large values of $K$ and the smaller $\gamma_{K}$ is. In addition, the distribution depends on $K_{+}$ and the cluster sizes $(N_{1},\ldots,N_{K_{+}})$. This step allows for the birth and death of empty components. In Step 4 the parameters of the component-specific distributions of the new empty components are drawn from the priors.
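The weighted-mean behaviour of $b_{k}$ described above can be checked numerically. The sketch below uses the $B_{0}$ values from the factorial design; the prior mean, cluster mean, and cluster size are illustrative choices of ours, not values from the paper.

```python
import numpy as np

def posterior_mean_params(b0, B0, sigma2_k, N_k, ybar_k):
    """Posterior parameters b_k, B_k of mu_k in Step 2(b)."""
    B_k = 1.0 / (1.0 / B0 + N_k / sigma2_k)
    b_k = B_k * (b0 / B0 + N_k * ybar_k / sigma2_k)
    return b_k, B_k

# Prior mean b0 = 21, cluster mean 10 with N_k = 7 observations:
# the larger B0, the weaker the shrinkage towards b0.
for B0 in (6.3, 20.0, 100.0, 630.0):
    b_k, B_k = posterior_mean_params(21.0, B0, sigma2_k=1.0, N_k=7, ybar_k=10.0)
    print(B0, round(b_k, 3))
```

Even for the smallest $B_{0}$, the posterior mean stays close to the data mean here because $N_{k}\sigma_{k}^{-2}$ dominates $B_{0}^{-1}$; the shrinkage towards $b_{0}$ becomes visible only for small clusters or large variances.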
“Unattractive” empty components result in particular when $B_{0}$ is large and $C_{0}$ is small. In this case the sampled $\mu_{k}$ can be located far away from the data and the probability that observations are assigned to this empty component in the following Step 1 is extremely small. Thus, the “attractiveness” of the empty components influences whether new empty components are filled, i.e., whether the number of filled components increases. Step 5 is influenced by the choice of $\gamma_{K}$. In particular for empty components, the value of the Dirichlet parameter only depends on this prior value, influencing the value $\eta_{k}$ drawn for these components and hence also the probability of such an empty component having observations assigned in Step 1. The smaller $\gamma_{K}$, the smaller the sampled $\eta_{k}$ and thus the smaller the probability that an observation will be assigned to this component in Step 1. Furthermore, it can be seen that the prior sample size is equal to $K\gamma_{K}$. Thus, in the dynamic MFM the prior sample size is constant over mixtures with different numbers of components, whereas for the static MFM the prior sample size increases linearly with the number of components. ## 6 Assessing the impact of different prior specifications After discussing in detail how the prior specifications might affect the posterior of the number of data clusters, the following analysis investigates whether these theoretical considerations can be verified empirically. The MFM model is fitted to the Galaxy data set with 384 different prior settings, combining, in a full factorial design, four different specifications of the prior on $K$, the static or the dynamic MFM, three different values for the Dirichlet parameter, and four values each for $B_{0}$ and $C_{0}$.
### 6.1 MCMC estimation For each prior setting, posterior inference is performed based on 200,000 iterations after 10,000 burn-in iterations, with every fourth draw being recorded (i.e., a thinning of 4). Initially 10 components are filled. The MCMC algorithm is initialized by specifying values for the component weights and the component-specific parameters. Equal component weights are specified and all component-specific variances $\sigma^{2}_{k}$, $k=1,\ldots,10$, are set equal to $C_{0}/2$. The component-specific means $\mu_{k}$ are set equal to the centroids obtained when applying the $k$-means algorithm with 10 clusters to the data set. The MCMC iterations start with Step 1 by assigning observations to the 10 components according to the a-posteriori probabilities. Partitions are invariant to label switching. Hence the number of data clusters or filled components is also a label-invariant quantity, and it is not necessary to resolve the label switching problem (Redner and Walker, 1984) for the following analysis of the results. ### 6.2 Analysis of results The analysis of the results focuses on the impact of the prior specifications on the posterior $p(K_{+}|\bm{y})$ of the number of data clusters. The mode of $p(K_{+}|\bm{y})$ is used as point estimator. In addition, the entropy of the posterior of $K_{+}$ is determined to indicate how informative this posterior is for a point estimate of $K_{+}$. The entropy of a discrete random variable $X$ with possible outcomes $x_{1},\ldots,x_{I}$ is given by $-\sum_{i=1}^{I}P(X=x_{i})\log(P(X=x_{i}))$. Thus, a high entropy value for the posterior of $K_{+}$ indicates rather equal posterior probabilities for the different values of $K_{+}$, while a low entropy value results if the posterior is concentrated on a few values. The marginal impact of each of the prior specifications on the estimated number of data clusters $K_{+}$, based on the posterior mode, is assessed by averaging the results across all other prior settings.
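The two posterior summaries used here, the mode of $K_{+}$ and the entropy of $p(K_{+}|\bm{y})$, are straightforward to compute from the recorded draws; a minimal sketch (function and variable names are ours):

```python
import numpy as np

def kplus_summaries(kplus_draws):
    """Mode and entropy of the posterior of K_+ estimated from MCMC draws."""
    values, counts = np.unique(kplus_draws, return_counts=True)
    p = counts / counts.sum()
    mode = values[np.argmax(counts)]
    entropy = -np.sum(p * np.log(p))
    return mode, entropy

# A posterior concentrated on few values has a low entropy ...
mode, ent = kplus_summaries([3, 3, 3, 3, 4, 3, 3, 2, 3, 3])
# ... while draws spread uniformly over 30 values attain the maximum log(30).
uniform_ent = kplus_summaries(np.arange(1, 31))[1]
```

In practice `kplus_draws` would contain, for each recorded sweep, the number of unique values in the sampled assignment vector $\bm{S}$.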
Table 1 shows that on average the estimated number of data clusters $K_{+}$ (a) is higher for the static than the dynamic MFM, (b) increases for increasing values of the Dirichlet parameter, and (c) is lowest for the truncated Poisson prior, followed by the $\text{BNB}(1,4,3)$ prior and then, after a substantial gap, by the $\text{Geom}(0.1)$ and finally the uniform $\text{U}(1,30)$ prior. For the priors on the component-specific parameters, a non-monotonic influence is indicated for $B_{0}$. While the average number of estimated data clusters $K_{+}$ is highest for $B_{0}=20$, comparable results are obtained for $B_{0}=6.3$ and $B_{0}=100$, and a substantially lower average number of data clusters $K_{+}$ is estimated for $B_{0}=630$. The influence of $C_{0}$ on the average number of data clusters estimated is monotonic, and the number substantially decreases for increasing values of $C_{0}$. The marginal effects observed in Table 1 are in line with our prior expectations based on theoretical considerations and previous results.

MFM | $\hat{K}_{+}$ | $\gamma$ / $\alpha$ | $\hat{K}_{+}$ | $p(K)$ | $\hat{K}_{+}$ | $B_{0}$ | $\hat{K}_{+}$ | $C_{0}$ | $\hat{K}_{+}$
---|---|---|---|---|---|---|---|---|---
static | 5.89 | 0.01 | 2.98 | trPois(3) | 3.99 | 6.3 | 5.39 | 0.5 | 6.93
dynamic | 4.70 | 1 | 5.56 | $\text{BNB}(1,4,3)$ | 4.35 | 20 | 6.69 | 1 | 6.21
 | | 10 | 7.33 | Geom(0.1) | 6.00 | 100 | 5.20 | 5 | 4.53
 | | | | U(1, 30) | 6.82 | 630 | 3.90 | 12.5 | 3.50

Table 1: Average number of data clusters $K_{+}$ estimated based on the mode, marginally for each of the different prior specifications.

Figure 5 visualizes the results obtained for the 384 different settings in more detail. This allows one not only to assess marginal effects, but also to gain insights into the interactions between the prior specifications. For each prior setting, the number of data clusters $K_{+}$ estimated based on the posterior mode is indicated by a dot, with the value shown on the $y$-axis.
The results are split into six panels, where the top panels contain the results for the static MFM and the bottom panels those for the dynamic MFM. The columns represent the different values selected for the Dirichlet parameter, $\alpha$ for the dynamic MFM and $\gamma$ for the static MFM, with values 0.01, 1, and 10 (from left to right). Within each of the panels the results are grouped on the $x$-axis by the prior $p(K)$. The priors $p(K)$ are ordered by their prior mean of $K^{2}$. Colors and point characters are used to indicate the different settings used for the component-specific parameters. Small values of $B_{0}$ are in red, whereas large values of $B_{0}$ are in blue. The highly saturated colors indicate the extreme values of $B_{0}$, and lighter colors are used for the middle values of $B_{0}$. The filled shapes represent large values of $C_{0}$, whereas empty shapes are used for small values of $C_{0}$. Figure 5: Galaxy data set. Estimated number of data clusters $K_{+}$ for different prior specifications. In the rows, the results for the static and dynamic MFM are reported; in the columns, those for $\gamma/\alpha\in\\{0.01,1,10\\}$, respectively. Focusing on the dynamic MFM with $\alpha=0.01$ (in the bottom left panel), one can clearly see that for nearly all settings the number of data clusters $K_{+}$ is estimated to be equal to 3. Only in some cases an even smaller number of data clusters, $K_{+}=2$, is estimated. This only occurs for settings where $B_{0}$ is small and $C_{0}$ is large. This suggests that in this panel, where the dynamic MFM with a sparsity-inducing parameter $\alpha$ is fitted, a sparse clustering solution is obtained regardless of the prior on $K$ and is also quite unaffected by the specification of the component-specific parameters. The results for the static MFM with $\gamma=0.01$ are shown above this panel (in the top left panel).
Clearly the sparsity-inducing prior on $K_{+}$ leads to most of the estimated numbers of data clusters being equal to 3. Only for very few settings is a lower or a higher number of data clusters than 3 (i.e., 2, 4, or 5) estimated. Again, a lower number of data clusters is only observed in the case where $B_{0}$ is small and $C_{0}$ is large. The higher numbers of data clusters are only observed for small values of $C_{0}$ and middle values of $B_{0}$. Overall, the results for $\alpha=0.01$ for the dynamic MFM and $\gamma=0.01$ for the static MFM indicate that the prior on $K$ is not very influential: regardless of the choice of the prior on $K$, a sparsity-inducing prior on $K_{+}$ is imposed, under which a rather large gap between $K$ and $K_{+}$ is a-priori likely. The results are also quite insensitive to the selection of the parameters of the component-specific distributions. This implies that if a sparse clustering solution is desired, one clearly needs to fix the Dirichlet parameter to a small value; the results are then rather insensitive to the specification of the other priors. If the cluster analysis aims at determining the minimum number of data clusters necessary to approximate the data distribution reasonably well, such a sparsity-inducing prior is warranted. In this case the question of how many data clusters are in the Galaxy data set would also be rather unambiguously answered by three. Increasing $\alpha$ and $\gamma$ to 1 increases the influence of the other prior specifications on the estimated number of data clusters (middle panels). The dynamic MFM tends to estimate fewer data clusters than the static MFM. The difference from the static MFM becomes more pronounced if the prior on $K$ puts more mass on the tails.
For the dynamic MFM, all estimated numbers of data clusters are at most 7, with higher numbers being more likely for the uniform and the geometric prior, followed by the BNB prior and the truncated Poisson prior. Under the static MFM, extremely large values are obtained for the uniform and the geometric prior, with estimates as large as 20. These large values are obtained if small values are used in the prior specifications for $B_{0}$ and $C_{0}$. For the dynamic MFM, a higher number of data clusters $K_{+}$ is estimated for $\alpha=10$ compared to $\alpha=1$, while for the static MFM rather similar results are obtained for $\gamma=1$ and $\gamma=10$ (panels on the right). For the uniform and geometric priors on $K$ the estimated number of data clusters varies most, regardless of whether a static or dynamic MFM is fitted. The prior on $K$ is not particularly sparsity-inducing, and thus the prior on the component-specific parameters influences which approximation of the data density is selected. Small values of $B_{0}$ induce the most extreme values for the estimated number of data clusters, with large values of $C_{0}$ leading to small numbers and small values of $C_{0}$ encouraging large numbers of data clusters. Figure 6: Galaxy data set. Entropy of the posterior on $K_{+}$ for different prior specifications. In the rows, the results for the static and dynamic MFM are reported; in the columns, those for $\gamma/\alpha\in\\{0.01,1,10\\}$, respectively. Figure 6 also visualizes the results obtained for the 384 settings in detail. Instead of the estimated number of data clusters $K_{+}$, the entropy of the posterior of $K_{+}$ is shown. If the entropy is 0, then all mass is assigned to a single value (which then also corresponds to the mode shown in Figure 5). For a fixed support, the uniform distribution has the maximum entropy.
For $\text{U}(1,30)$, the entropy is $\log(30)\approx 3.40$, which corresponds to the case where clustering solutions with 1 up to 30 data clusters are equally likely. Figure 6 shows that the entropy values are smallest for the dynamic MFM with $\alpha=0.01$, with slightly larger values for the static MFM with $\gamma=0.01$. For the dynamic MFM, the entropy increases with increasing $\alpha$. For the static MFM, the entropy values also increase from $\gamma=0.01$ to $\gamma=1$, but are rather comparable for $\gamma=1$ and $\gamma=10$. Regarding the prior on $K$, smaller entropy values are observed for the truncated Poisson prior compared to the other priors, which have rather comparable entropy values for a given $\gamma_{K}$ setting. This indicates that the smaller prior variance of this prior on $K$ has a substantial impact on the entropy. Regarding $B_{0}$, a general pattern of the red points lying above the blue points is discernible. This implies that the posterior of $K_{+}$ is particularly spread out for small values of $B_{0}$, i.e., where the component-specific mean values are shrunk towards the midpoint. We conjecture that in this setting posterior mass is also assigned to small values of $K_{+}$, as the shrinkage also generates posterior support for solutions with few data clusters. In the Galaxy data set, for example, the observations with large values, which seem to form a small data cluster of their own, might be merged with observations from the middle bulk of the observations due to shrinkage, inducing a large component-specific variance and thus a coarse density approximation. Regarding $C_{0}$, the general pattern is that the filled shapes lie below the empty shapes, indicating that the entropy increases with decreasing values of $C_{0}$. This implies that aiming at a fine-grained approximation, using a rather small volume as prototypical shape for the clusters, leads to the posterior mass being more spread out.
In particular, if the aim is semi-parametric density estimation and a small volume is imposed, no clear estimate of a single $K_{+}$ value is expected; rather, a combination of mixtures with different values of $K_{+}$ is desired to obtain a good approximation. ## 7 Discussion and conclusions In this paper, we respond to the call for action made by Aitkin (2001) regarding the need to provide more insights into the influence of different prior specifications when fitting Bayesian mixture models. Based on recent developments in the context of MFMs, we use the model specification of a MFM, considering the static as well as the dynamic case. The Galaxy data set is used to illustrate the prior impact on the estimated number of data clusters $K_{+}$, using the mode, as well as on the entropy of the posterior of $K_{+}$. The results confirm the postulated marginal effects, but interesting interaction effects are also discerned. Aiming at a sparse clustering solution using a dynamic MFM with $\alpha=0.01$ gives stable results regardless of the other prior choices (the prior on $K$ and the component-specific parameters). Such an estimate can be interpreted as the “minimum number of data clusters” present in the data, where the sparsity-inducing property of the $(K,\gamma_{K})$ specification leads to insensitivity regarding misspecification of the component-specific parameters. Such a setting leads to an unambiguous estimate of three data clusters based on the mode for the Galaxy data set, with the posterior distribution also being rather concentrated on very few values. This is in line with the conclusion drawn in Aitkin (2001) for the maximum likelihood framework using equal variance components in the mixture model. We suggest using the dynamic MFM with a small $\alpha$ value and reasonable component-specific distributions in a Bayesian model-based clustering application where a minimum number of data clusters is to be identified.
For the component-specific distributions, shrinking the prior mean is not recommended, whereas for the component-specific variances using reasonable values is important to guard against too fine-grained or too coarse approximations. In the univariate case the visualization of the induced volume (see Figure 4) is useful to determine a suitable value for $C_{0}$. A generalization of such a visual tool to the multivariate case or to other component-specific distributions would be of interest. Further analysis is also required to gain insights into the prior impact on Bayesian cluster analysis results for larger data sets, containing more observations but also more variables, and with other component-specific distributions. Acknowledgements. The authors gratefully acknowledge support from the Austrian Science Fund (FWF): P28740, and through the WU Projects grant scheme: IA-27001574. ## References * Aitkin (2001) Aitkin M (2001) Likelihood and Bayesian analysis of mixtures. Statistical Modelling 1(4):287–304, DOI 10.1177/1471082x0100100404 * Aitkin et al. (1981) Aitkin M, Anderson D, Hinde J (1981) Statistical modelling of data on teaching styles. Journal of the Royal Statistical Society A 144(4):419–461, DOI 10.2307/2981826 * Carlin and Chib (1995) Carlin BP, Chib S (1995) Bayesian model choice via Markov chain Monte Carlo methods. Journal of the Royal Statistical Society B 57:473–484, DOI 10.1111/j.2517-6161.1995.tb02042.x * Dempster et al. (1977) Dempster AP, Laird NM, Rubin DB (1977) Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society B 39(1):1–38, DOI 10.1111/j.2517-6161.1977.tb01600.x * Escobar and West (1995) Escobar MD, West M (1995) Bayesian density estimation and inference using mixtures. Journal of the American Statistical Association 90(430):577–588, DOI 10.1080/01621459.1995.10476550 * Fraley and Raftery (2002) Fraley C, Raftery AE (2002) Model-based clustering, discriminant analysis and density estimation.
Journal of the American Statistical Association 97(458):611–631, DOI 10.1198/016214502760047131 * Frühwirth-Schnatter (2006) Frühwirth-Schnatter S (2006) Finite Mixture and Markov Switching Models. Springer, New York * Frühwirth-Schnatter et al. (2020) Frühwirth-Schnatter S, Malsiner-Walli G, Grün B (2020) Generalized mixtures of finite mixtures and telescoping sampling, URL https://arxiv.org/abs/2005.09918, arXiv:2005.09918 [stat.ME] * Greve et al. (2020) Greve J, Grün B, Malsiner-Walli G, Frühwirth-Schnatter S (2020) Spying on the prior of the number of data clusters and the partition distribution in Bayesian cluster analysis, URL https://arxiv.org/abs/2012.12337, arXiv:2012.12337 [stat.ME] * Grün (2019) Grün B (2019) Model-based clustering. In: Frühwirth-Schnatter S, Celeux G, Robert CP (eds) Handbook of Mixture Analysis, Chapman and Hall/CRC, pp 157–192 * Hennig and Liao (2013) Hennig C, Liao TF (2013) How to find an appropriate clustering for mixed-type variables with application to socio-economic stratification. Journal of the Royal Statistical Society C 62(3):309–369, DOI 10.1111/j.1467-9876.2012.01066.x * Hothorn and Everitt (2014) Hothorn T, Everitt BS (2014) A Handbook of Statistical Analyses using R, 3rd edn. Chapman and Hall/CRC * Lunn et al. (2012) Lunn D, Jackson C, Best N, Thomas A, Spiegelhalter D (2012) The BUGS Book: A Practical Introduction to Bayesian Analysis. Chapman and Hall/CRC * Malsiner-Walli et al. (2016) Malsiner-Walli G, Frühwirth-Schnatter S, Grün B (2016) Model-based clustering based on sparse finite Gaussian mixtures. Statistics and Computing 26(1):303–324, DOI 10.1007/s11222-014-9500-2 * McCullagh and Yang (2008) McCullagh P, Yang J (2008) How many clusters? Bayesian Analysis 3(1):101–120 * McLachlan (1987) McLachlan GJ (1987) On bootstrapping the likelihood ratio test statistic for the number of components in a normal mixture. 
Journal of the Royal Statistical Society C 36(3):318–324 * Miller and Harrison (2018) Miller JW, Harrison MT (2018) Mixture models with a prior on the number of components. Journal of the American Statistical Association 113(521):340–356, DOI 10.1080/01621459.2016.1255636 * Nobile (2004) Nobile A (2004) On the posterior distribution of the number of components in a finite mixture. The Annals of Statistics 32(5):2044–2073, DOI 10.1214/009053604000000788 * Phillips and Smith (1996) Phillips DB, Smith AFM (1996) Bayesian model comparison via jump diffusions. In: Gilks W, Richardson S, Spiegelhalter DJ (eds) Markov Chain Monte Carlo in Practice, Chapman & Hall, London, pp 215–239 * Postman et al. (1986) Postman M, Huchra JP, Geller MJ (1986) Probes of large-scale structure in the Corona Borealis region. The Astronomical Journal 92(6):1238–1247, DOI 10.1086/114257 * Redner and Walker (1984) Redner RA, Walker HF (1984) Mixture densities, maximum likelihood and the EM algorithm. SIAM Review 26(2):195–239 * Richardson and Green (1997) Richardson S, Green PJ (1997) On Bayesian analysis of mixtures with an unknown number of components. Journal of the Royal Statistical Society B 59(4):731–792, DOI 10.1111/1467-9868.00095 * Roeder (1990) Roeder K (1990) Density estimation with confidence sets exemplified by superclusters and voids in galaxies. Journal of the American Statistical Association 85(411):617–624, DOI 10.1080/01621459.1990.10474918 * Roeder and Wasserman (1997) Roeder K, Wasserman L (1997) Practical Bayesian density estimation using mixtures of normals. Journal of the American Statistical Association 92(439):894–902, DOI 10.1080/01621459.1997.10474044 * Scrucca et al. (2016) Scrucca L, Fop M, Murphy TB, Raftery AE (2016) mclust 5: clustering, classification and density estimation using Gaussian finite mixture models. The R Journal 8(1):289–317, DOI 10.32614/RJ-2016-021
# Moduli of elliptic $K3$ surfaces: monodromy and Shimada root lattice strata Klaus Hulek Institut für Algebraische Geometrie, Leibniz Universität Hannover, 30060 Hannover, Germany<EMAIL_ADDRESS>and Michael Lönne Mathematisches Institut der Universität Bayreuth, Universitätsstr. 30, 95447 Bayreuth, Germany<EMAIL_ADDRESS> ###### Abstract. In this paper we investigate two stratifications of the moduli space of elliptically fibred $K3$ surfaces. The first comes from Shimada’s classification of connected components of the moduli of elliptically fibred $K3$ surfaces and is closely related to the root lattices of the fibration. The second is the monodromy stratification defined by Bogomolov, Petrov and Tschinkel. The main result of the paper is a classification of all positive-dimensional ambi-typical strata, that is, strata which are both Shimada root strata and monodromy strata. We also discuss the relationship with moduli spaces of lattice-polarised $K3$ surfaces. The appendix by M. Kirschmer contains computational results about the $1$-dimensional ambi-typical strata. ## 1\. Introduction Elliptically fibred $K3$ surfaces have been studied over a long period from many different angles. We refer the reader in particular to the book [ScSh], where $K3$ surfaces are especially treated in Sections 11 and 12. Due to the work of Miranda [Mi90] it is well known that the moduli space $\mathcal{F}$ of elliptically fibred $K3$ surfaces with a section, also known as Jacobian fibrations, can be described as a GIT quotient of an open subset $V$ in the weighted projective space $\mathbb{P}_{8,12}(9,13)$ by the group $\operatorname{SL}(2,\mathbb{C})$. Alternatively, the moduli space $\mathcal{F}$ can also be constructed as the moduli space of lattice polarised $K3$ surfaces, more precisely $U$-quasi-polarised $K3$ surfaces (where $U$ is the hyperbolic plane spanned by the classes of the section and a general fibre).
The moduli space $\mathcal{F}$ itself is a rational variety by a result of Lejarraga [Le]. Geometric properties of elliptically fibred $K3$ surfaces provide this space with many interesting geometric features. The starting point of our work is given by two stratifications of the moduli space $\mathcal{F}$. The first of these was introduced by Bogomolov, Petrov and Tschinkel in [BPT]. The strata of this decomposition are defined by the property that they are maximal locally closed irreducible subvarieties with constant monodromy group $\Gamma$, where $\Gamma$ is a fixed subgroup of $\operatorname{SL}(2,\mathbb{Z})$ modulo conjugation. Bogomolov, Petrov and Tschinkel prove the remarkable fact that all of these monodromy strata are themselves rational varieties. The other stratification is due to the work of Shimada [Shi1], [Shi2], [Shi3], in which he classifies all connected components of the moduli of elliptic $K3$ surfaces with fixed combinatorial type. This means the following: Given an elliptically fibred $K3$ surface $f:S\to\mathbb{P}^{1}$, the components of the singular fibres not meeting the $0$-section define a root lattice $R$ which is the direct sum of some $ADE$-lattices. This need not be saturated in the Néron-Severi group $\operatorname{NS}(S)$. Its saturation $L$ corresponds, by lattice theory, to an isotropic subgroup $G$ of the discriminant group of $R$. Shimada’s work provides a complete classification of all connected families of elliptically fibred $K3$ surfaces with given $(R,G)$. This leads to a total of 3932 families and in this way one obtains a second stratification of the moduli space $\mathcal{F}$. We refer to the strata in this stratification as Shimada root strata or, shorter, simply as Shimada strata. A priori the stratifications given by monodromy strata and Shimada strata are not related and neither is a refinement of the other.
However, both stratifications are refined by a third stratification, namely the one given by a fixed configuration of the singular fibres. We call these the configuration strata. Their properties were investigated by Klosterman in [Kl]. The starting point of our work is the observation that there are some strata which appear in both the monodromy stratification and Shimada’s stratification. We call these ambi-typical strata and the main purpose of this paper is to understand these special strata. Our main result (Theorem 4.2) is ###### Main Theorem. There are exactly 50 positive-dimensional ambi-typical strata. These are listed in Table 11. We then prove in Theorem 4.3 that the ambi-typical Shimada strata are, with the exception of two strata, already completely determined by the root lattice $R$ itself. In Theorem 4.4 and Table 12 we further characterise the ambi-typical strata in terms of local monodromy around the singular fibres and the branching behaviour of the $j$-invariant. Clearly, there is a connection with moduli spaces of lattice polarised $K3$ surfaces. Indeed, every ambi-typical stratum gives rise to a priori several moduli spaces of lattice-polarised $K3$ surfaces. It turns out that the total number of possible components of moduli spaces associated to a given ambi-typical stratum is either 1, 2, or 4, as follows from Corollary 11.4 and Proposition 11.7. For each such component $\mathcal{N}$ associated to an ambi-typical stratum $\mathcal{M}$ there is a natural finite dominant map $\mathcal{N}\to\mathcal{M}$ and we compute its degree. This is the content of Proposition 12.1, Theorem 12.8 and Corollary 12.9. We will now discuss the contents of the paper in some more detail: In Section 2 we recall basic facts about elliptically fibred $K3$ surfaces and Miranda’s construction of the moduli space $\mathcal{F}$. The monodromy and Shimada stratifications are introduced in Section 3, where we also recall the principal results of Shimada’s theory.
In Section 4 we formulate the main results of our paper. Sections 5 to 10 are dedicated to the proof of the Main Theorem. We start in Section 5 by recalling the configuration strata studied by Klosterman [Kl]. It is sufficient to look for ambi-typical configuration strata, and this will lead us to the severe restriction that the generic element of an ambi-typical stratum cannot have singular fibres of type $II,III,IV,II^{*}$ or $III^{*}$ (Proposition 5.7). In Section 6 we recall a natural factorisation of the $j$-invariant $j(\mathcal{E})=j_{\bar{\Gamma}}\circ j_{\mathcal{E}}$ via the modular curve $X(\bar{\Gamma})$ defined by the modular monodromy group $\bar{\Gamma}\subset\operatorname{PSL}(2,\mathbb{Z})$. The restrictions on fibre configurations then translate into branching properties of $j_{\bar{\Gamma}}$ and $j_{\mathcal{E}}$ over the points $0,1,\infty$. In Section 7 we use an Euler number consideration to find a list of possible candidates for the modular monodromy group $\bar{\Gamma}$. In Proposition 7.1 we prove that the index of the modular group $\bar{\Gamma}$ in $\operatorname{PSL}(2,\mathbb{Z})$ is bounded by 18. For the rest of the proof we shall distinguish between low index $\leq 6$ and high index $>6$. The next step in the proof is that we provide Weierstraß data for all possible ambi-typical strata with modular monodromy group $\bar{\Gamma}$ of low index in Proposition 8.1. We also obtain some results about the Mordell-Weil lattices and the monodromy of the 6 families listed in this proposition. In Sections 9 and 10 we finally complete the proof of the classification in the low and high index cases respectively. Sections 11 and 12 are devoted to relating the ambi-typical strata which we have found to moduli spaces of lattice polarised $K3$ surfaces. The appendix by M.
Kirschmer contains explicit calculations concerning the $1$-dimensional ambi-typical strata, in particular the genus of the moduli spaces of lattice polarised $K3$ surfaces covering these ambi-typical strata and the degree of the covering map. We find it remarkable that although the ambi-typical $1$-dimensional strata all have genus $0$, in accordance with the rationality result of Bogomolov, Petrov and Tschinkel, the genus of the moduli space of lattice-polarised $K3$ surfaces can be as high as $13$. ### Acknowledgements We thank Ichiro Shimada for extensive discussions and for sharing his insight and calculations with us. The first author thanks Simon Brandhorst for discussions on lattice genera and for supplying the proof of Proposition 11.3. He also thanks Eduard Looijenga for an exchange of e-mails. The first author is further grateful to DFG for partial support under grant Hu 337/7-1. The second author acknowledges the support of the ERC 2013 Advanced Research Grant 340258-TADMICAMT. ## 2\. Elliptically fibred $K3$ surfaces A $K3$ surface $S$ is elliptically fibred if there exists a surjective morphism $f:S\to\mathbb{P}^{1}$ whose generic geometric fibre is a genus $1$ curve. We say that $f:S\to\mathbb{P}^{1}$ is a Jacobian fibration if it also has a section $s$. We denote the Néron-Severi group by $\operatorname{NS}(S)$ and its rank, also called the Picard rank, by $\rho(S)$. A general fibre $f$ and the $0$-section $s$ define a hyperbolic sublattice $U\subset\operatorname{NS}(S)$ and conversely, every such hyperbolic sublattice defines an elliptic fibration. We will typically denote $K3$ surfaces with a Jacobian fibration by $f:\mathcal{E}\to\mathbb{P}^{1}$ or simply by $\mathcal{E}$, where we think of $\mathcal{E}$ as an elliptic curve over the function field $\mathbb{C}(\mathbb{P}^{1})\cong\mathbb{C}(t)$. 
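To see the hyperbolic sublattice $U\subset\operatorname{NS}(S)$ explicitly, one can write out the intersection numbers of the fibre and section classes; the following is a standard computation, included here for convenience and not taken verbatim from the text:

```latex
% The fibre class f and the 0-section class s on a K3 surface satisfy
\[
  f^{2}=0,\qquad s^{2}=-2,\qquad f\cdot s=1 .
\]
% Passing to the basis (f, s+f) gives
\[
  (s+f)^{2}=s^{2}+2\,f\cdot s=0,\qquad f\cdot(s+f)=1,
\]
% so f and s span a sublattice with Gram matrix
\[
  \begin{pmatrix}0&1\\1&0\end{pmatrix}\;\cong\;U .
\]
```

Here $s^{2}=-2$ is forced by adjunction, since the section is a smooth rational curve on a $K3$ surface.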
The sections of $\mathcal{E}$ form a finitely generated abelian group, the Mordell-Weil group of $\mathcal{E}$, which we denote by $\operatorname{MW}(\mathcal{E})$. The components of all singular fibres of $\mathcal{E}$ which do not meet the section generate a sublattice $R(\mathcal{E})$ of $\operatorname{NS}(\mathcal{E})$ which is a root lattice, more precisely an orthogonal sum of lattices of $ADE$ type. The trivial part of the Néron-Severi group $\operatorname{NS}(\mathcal{E})$ is $\operatorname{NS}_{\operatorname{tr}}(\mathcal{E})=U+R(\mathcal{E})$ where $U$ is the hyperbolic plane defined by the Jacobian fibration. To simplify notation we write here and in the sequel simply $+$ for a direct orthogonal sum. We will denote the rank of the trivial part by $\rho_{\operatorname{tr}}(\mathcal{E})$. In general $R(\mathcal{E})$ is not saturated in $\operatorname{NS}(\mathcal{E})$ and we will denote its saturation by $L(\mathcal{E})$. It is well known, see [ScSh, Corollary 6.20], that $L(\mathcal{E})/R(\mathcal{E})\cong\operatorname{MW}_{\operatorname{tors}}(\mathcal{E}).$ We recall that a Jacobian fibration $\mathcal{E}$ is called extremal if $\rho_{\operatorname{tr}}(\mathcal{E})=20$. Jacobian fibrations can be classified via their Weierstraß models and this is the approach taken by Miranda. For the basic theory of Weierstraß equations we refer the reader to Miranda’s paper [Mi81]. Any Jacobian fibration $f:\mathcal{E}\to\mathbb{P}^{1}$, where $\mathcal{E}$ is a $K3$ surface, is birational to a minimal Weierstraß model (2.1) $y^{2}z=4x^{3}-g_{2}xz^{2}-g_{3}z^{3}\,{\mbox{ where}}\,g_{2}\in H^{0}(\mathbb{P}^{1},\mathcal{O}_{\mathbb{P}^{1}}(8)),g_{3}\in H^{0}(\mathbb{P}^{1},\mathcal{O}_{\mathbb{P}^{1}}(12))$ and where the following (open) conditions hold: * (1) $\Delta=g_{2}^{3}-27g_{3}^{2}\not\equiv 0$. 
* (2) For every point $q\in\mathbb{P}^{1}$ the inequality $\min\\{3\nu_{q}(g_{2}),2\nu_{q}(g_{3})\\}<12$ holds, where $\nu_{q}(g)$ is the vanishing order of a polynomial $g$ at $q$. The latter condition ensures that the Weierstraß equation is minimal in the sense that we cannot write $g_{2}=h^{4}\overline{g}_{2}$ and $g_{3}=h^{6}{\overline{g}}_{3}$ with ${\overline{g}}_{2}\in H^{0}(\mathbb{P}^{1},\mathcal{O}_{\mathbb{P}^{1}}(4))$ and ${\overline{g}}_{3}\in H^{0}(\mathbb{P}^{1},\mathcal{O}_{\mathbb{P}^{1}}(6))$. Conversely, an equation as above defines a surface $\mathcal{E}^{\prime}$ in the projective bundle $\mathbb{P}(\mathcal{O}_{\mathbb{P}^{1}}(4)\oplus\mathcal{O}_{\mathbb{P}^{1}}(6)\oplus\mathcal{O}_{\mathbb{P}^{1}})$ with at most rational double points. The minimal resolution $\mathcal{E}$ of $\mathcal{E}^{\prime}$ is then a $K3$ surface and the projection onto the base of the projective bundle gives a Jacobian fibration $f:\mathcal{E}\to\mathbb{P}^{1}$. Weierstraß equations with coefficients $g_{2},g_{3}$ and $g^{\prime}_{2},g^{\prime}_{3}$ respectively, define isomorphic Jacobian fibrations if and only if there exists a coordinate transformation in $\operatorname{GL}(2,\mathbb{C})$ which maps the pair $(g_{2},g_{3})$ to $(g^{\prime}_{2},g^{\prime}_{3})$. This allows us to describe the moduli of Jacobian fibrations in terms of a GIT quotient. In order to do this we consider the weighted projective space $\mathbb{P}_{8,12}(9,13)$ associated to $H^{0}(\mathbb{P}^{1},\mathcal{O}_{\mathbb{P}^{1}}(8))\oplus H^{0}(\mathbb{P}^{1},\mathcal{O}_{\mathbb{P}^{1}}(12))$. The open conditions described above define an open subset $V\subset\mathbb{P}_{8,12}(9,13)$ and the group $\operatorname{SL}(2,\mathbb{C})$ acts on $\mathbb{P}_{8,12}(9,13)$ as well as on the open subset $V$. By [Mi81, Proposition 5.1] all points of $V$ are stable with respect to the action of $\operatorname{SL}(2,\mathbb{C})$. 
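Before passing to the quotient, a quick dimension count (our own consistency check, not part of the original argument) shows what to expect:

```latex
\[
  \dim \mathbb{P}_{8,12}(9,13)=(9+13)-1=21,\qquad
  \dim \operatorname{SL}(2,\mathbb{C})=3,
\]
% V is open in the weighted projective space and all its points are stable,
% so the GIT quotient has dimension
\[
  21-3=18 .
\]
```

This agrees with formula (3.4) below, which gives $18-\operatorname{rank}(R)=18$ for the generic fibration with $R=0$.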
Hence we can form the quotient $\mathcal{F}:=V/\\!/\operatorname{SL}(2,\mathbb{C})$ which is a quasi-projective variety. Its geometric points correspond bijectively to isomorphism classes of Jacobian fibrations $f:\mathcal{E}\to\mathbb{P}^{1}$ where $\mathcal{E}$ is a $K3$ surface. In Section 11 we will discuss the theory of moduli spaces of lattice (quasi-)polarised $K3$ surfaces. Taking the hyperbolic plane $U$ (which one should think of as being spanned by the class of the section and of a general fibre) one obtains an alternative construction of the moduli space $\mathcal{F}$ as a quotient of a homogeneous domain of type IV by an arithmetic group. As we shall see, both points of view are relevant for our purposes. An interesting result of Odaka and Oshima [OO, Theorem 7.9] even says that the GIT compactification and the Baily-Borel compactification of this space coincide, i.e. $\mathcal{F}^{\operatorname{GIT}}\cong\mathcal{F}^{\operatorname{BB}}$. ## 3\. Monodromy and Shimada strata In this section we will introduce the main protagonists of this paper, namely monodromy strata and Shimada strata. ### Monodromy strata Monodromy strata for Jacobian fibrations were first introduced by Bogomolov, Petrov and Tschinkel in [BPT]. Here we recall some of their results, restricting ourselves to the case in hand, namely Jacobian fibrations of $K3$ surfaces. The first step is to consider the open subset $V^{\prime}\subset V$ corresponding to non-isotrivial fibrations. It is naturally obtained by replacing condition (1) above by the slightly stronger condition * (1’) $\Delta=g_{2}^{3}-27g_{3}^{2}$ and $g_{2}^{3}$ are not proportional (which in particular entails $g_{2}\not\equiv 0$). Since the $j$-invariant is the quotient of these two expressions, this translates directly into several equivalent, more geometric, characterisations of points $v\in V^{\prime}$: 1. (1) The $j$-invariant of the Jacobian elliptic fibration $\mathcal{E}_{v}$ is non-constant. 2. 
(2) The Jacobian elliptic fibration $\mathcal{E}_{v}$ is not isotrivial. 3. (3) The number of singular fibres of multiplicative type $I_{\nu}$ or additive type $I_{\nu}^{*}$, $\nu>0$, is positive. Clearly $V^{\prime}$ is invariant (as a set) under the action of $\operatorname{SL}(2,\mathbb{C})$ and we denote the quotient by $\mathcal{F}^{\prime}\subset\mathcal{F}$. This is the open subset parameterising all non-isotrivial Jacobian fibrations of $K3$ surfaces. To decompose $V^{\prime}$ (and thus $\mathcal{F}^{\prime}$) in a geometrically meaningful way, [BPT] exploit the _monodromy group_ of elliptic fibrations: the complement $\mathcal{E}^{\prime}$ of the union of singular fibres of a Jacobian fibration $\mathcal{E}$ is topologically equivalent to a torus bundle over the base punctured at the critical values of the fibration. The image of the associated monodromy representation in the automorphisms of the first homology of a fibre is orientation preserving. Upon the choice of a basis, it becomes the _monodromy group_ $\Gamma(\mathcal{E})$, a subgroup of $\operatorname{SL}(2,\mathbb{Z})$, well-defined up to conjugacy, see [Mi89, VI,3]. Note that according to this definition we will consider monodromy groups as subgroups of $\operatorname{SL}(2,\mathbb{Z})$, and keep in mind that it is the conjugacy class which is the invariant. The group $\Gamma(\mathcal{E})$ determines a group $\bar{\Gamma}(\mathcal{E})\subset\operatorname{PSL}(2,\mathbb{Z})$ which we will call the modular monodromy group. The initial observation in [BPT] is the following semi-continuity property with respect to the Zariski topology on the quasi-projective variety $\mathcal{F}^{\prime}$: the monodromy group can only change if the configuration of singular fibres changes, i.e. if some fibres split or come together. The latter case only occurs on closed subsets of the base of a family and the monodromy group will be a subgroup after the process. 
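The collision behaviour can be illustrated on local monodromies (cf. Table 1 below): when two $I_{1}$-fibres with the same vanishing cycle come together, the product of their local monodromies is the local monodromy of an $I_{2}$-fibre. A minimal sketch with plain integer matrices (our own illustration, not code from the paper):

```python
def mat_mul(a, b):
    """Multiply two 2x2 integer matrices given as ((a,b),(c,d))."""
    return (
        (a[0][0]*b[0][0] + a[0][1]*b[1][0], a[0][0]*b[0][1] + a[0][1]*b[1][1]),
        (a[1][0]*b[0][0] + a[1][1]*b[1][0], a[1][0]*b[0][1] + a[1][1]*b[1][1]),
    )

I1 = ((1, 1), (0, 1))   # local monodromy of an I_1-fibre (Table 1)
I2 = ((1, 2), (0, 1))   # local monodromy of an I_2-fibre

# Two colliding I_1-fibres with the same vanishing cycle give an I_2-fibre:
assert mat_mul(I1, I1) == I2
```

After the collision the monodromy group is generated by fewer (products of) matrices, which is why it can only become a subgroup.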
Therefore, the property that the monodromy group is a subgroup of a fixed subgroup of $\operatorname{SL}(2,\mathbb{Z})$ (up to conjugacy) defines a closed subset. This can be rephrased as follows. We consider the set $P$ of all conjugacy classes of subgroups of $\operatorname{SL}(2,\mathbb{Z})$ together with the partial ordering induced by inclusion. On this we define the Alexandroff topology for which a set $U$ is open if and only if $p\in U,p\leq q$ implies $q\in U$. The semicontinuity property observed by Bogomolov, Petrov and Tschinkel then says that the map $V^{\prime}\to P$ is continuous with respect to the Alexandroff topology on $P$ and the Zariski topology on $V^{\prime}$. It implies that the fibres $V^{\prime}_{\Gamma}$ of this map are locally closed for a fixed (conjugacy class of a) subgroup $\Gamma\subset\operatorname{SL}(2,\mathbb{Z})$. We can decompose each such set into its irreducible components $V^{\prime}_{\Gamma}=\cup_{i}V^{\prime}_{\Gamma,i}$. Bogomolov, Petrov and Tschinkel showed that the number of possible monodromy groups is finite and in this way obtained the ###### Lemma 3.1 (cf. [BPT], Lemma 3.1). The variety $V^{\prime}$ is a finite union of locally closed irreducible subvarieties $V^{\prime}_{\Gamma,i}$, each preserved under the action of $\operatorname{SL}(2,\mathbb{C})$ such that for every $v\in V^{\prime}_{\Gamma,i}$ one has $\Gamma(\mathcal{E}_{v})\sim\Gamma.$ We denote $\mathcal{F}^{\prime}_{\Gamma,i}=V^{\prime}_{\Gamma,i}/\\!\\!/\operatorname{SL}(2,\mathbb{C})$ and in this way one obtains a finite decomposition (3.1) $\mathcal{F}^{\prime}=\cup_{i}\mathcal{F}^{\prime}_{\Gamma,i}$ into irreducible locally closed subvarieties. ###### Definition 3.2. We refer to (3.1) as the monodromy stratification of the moduli space of $K3$ surfaces with a non-isotrivial Jacobian fibration, and call the subvarieties $\mathcal{F}^{\prime}_{\Gamma,i}$ the monodromy strata of $\mathcal{F}^{\prime}$. 
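The role of the Alexandroff topology can be made concrete on a toy poset: its open sets are exactly the up-sets, and these are automatically closed under arbitrary unions and intersections. The following sketch (our own illustration on a hypothetical chain of three subgroups) checks this:

```python
from itertools import combinations

# Toy poset: three "conjugacy classes" with G1 <= G2 <= G3.
elements = {"G1", "G2", "G3"}
leq = {("G1", "G1"), ("G2", "G2"), ("G3", "G3"),
       ("G1", "G2"), ("G2", "G3"), ("G1", "G3")}

def is_open(U):
    """U is Alexandroff-open iff p in U and p <= q imply q in U (an up-set)."""
    return all(q in U for p in U for q in elements if (p, q) in leq)

opens = [frozenset(U) for r in range(4)
         for U in combinations(sorted(elements), r)
         if is_open(frozenset(U))]

# The up-sets {}, {G3}, {G2,G3}, {G1,G2,G3} form a topology:
assert len(opens) == 4
assert all(frozenset(a | b) in opens for a in opens for b in opens)
assert all(frozenset(a & b) in opens for a in opens for b in opens)
```

Continuity of $V^{\prime}\to P$ for this topology is precisely the statement that the locus where the monodromy group is contained in a fixed subgroup is Zariski-closed.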
While the objective of [BPT] is the rationality of the monodromy strata $\mathcal{F}^{\prime}_{\Gamma,i}$ in the moduli space, for our arguments it is important to note the following maximality condition (3.2) $v\in\overline{V^{\prime}_{\Gamma,i}}\,\setminus V^{\prime}_{\Gamma,i}\quad\implies\quad\Gamma(\mathcal{E}_{v})\not\sim\Gamma.$ ### Shimada strata We have already seen the root lattice $R(\mathcal{E})$ associated to a Jacobian fibration $f:\mathcal{E}\to\mathbb{P}^{1}$ of a $K3$ surface. In general $R(\mathcal{E})$ is not saturated in $\operatorname{NS}(\mathcal{E})$ and we denoted its saturation by $L(\mathcal{E})$. The overlattice $L(\mathcal{E})$ of $R(\mathcal{E})$ is determined by, and can be reconstructed from, the finite group $G(\mathcal{E})=L(\mathcal{E})/R(\mathcal{E})\subset D(R(\mathcal{E}))$. Here we use standard notation and standard facts from lattice theory: if $L$ is an even definite lattice we denote by $L^{\vee}$ the dual lattice which can be defined as $L^{\vee}=\\{x\in L_{\mathbb{Q}}\mid(x,y)\in\mathbb{Z}{\mbox{ for all }}y\in L\\}$. The discriminant of $L$ is the quotient $D(L)=L^{\vee}/L$. This is a finite group and it is equipped with a quadratic form with values in $\mathbb{Q}/2\mathbb{Z}$. The importance of the discriminant group in lattice theory was fully developed in Nikulin’s seminal paper [Nik]. Overlattices of $L$ correspond to isotropic subgroups $G\subset D(L)$, see [Nik, Proposition 1.4.1]. In his paper [Shi1] Shimada investigated the question which pairs $(R,G)$, where $R$ is an $ADE$ root lattice and $G\subset D(R)$ is a finite (isotropic) subgroup of the discriminant group, can occur as $(R(\mathcal{E}),G(\mathcal{E}))$ and how many irreducible families these Jacobian fibrations form. At this point it is important to note that in general a pair $(R,G)$ does not necessarily determine a unique family. There are several reasons for this. 
Firstly, the abstract group $G$ does not necessarily define a unique overlattice $L$ of $R$; more precisely, one must choose a specific isotropic subgroup $G$ in $D(R)$ and it can happen that $G$ can be embedded in several ways as an isotropic subgroup in $D(R)$. Secondly, given $L$ one must also specify a primitive embedding of $M=U\oplus L$ into the $K3$ lattice $L_{K3}=3U+2E_{8}$ where $E_{8}$ is the unique even unimodular negative definite lattice of rank $8$ and the lattice $M$ may have several such embeddings (modulo the orthogonal group $\operatorname{O}(L_{K3})$). Given such an embedding one can consider the moduli space of lattice polarised $K3$ surfaces with lattice polarisation $M$. Note, however, that a lattice polarisation possibly contains more information than the pair $(R,G)$, as different lattice polarisations can give rise to the same elliptic $K3$ surface with a given configuration of singular fibres. We shall discuss this relationship in detail in Sections 11 and 12. The Shimada strata correspond to finite quotients of (open subsets of) moduli spaces of lattice polarised $K3$ surfaces. We also note that the moduli spaces can have $1$ or $2$ components and this, thirdly, can also increase the number of strata. If there are $2$ such components then they are complex conjugate to each other, i.e. surfaces of one component are complex conjugate to those of the other component, cf. [FM, p.206]. Shimada’s classification [Shi2] gives the following result. ###### Theorem 3.3 (Shimada). The following holds: * (1) There are $3278$ different root lattices which occur as $R(\mathcal{E})$ for a Jacobian fibration $\mathcal{E}$ of a $K3$ surface. Of these $2953$ belong to non-extremal fibrations. * (2) This decomposes the set of all Jacobian fibrations of $K3$ surfaces into $3932$ connected families, of which $3469$ belong to non-extremal fibrations. 
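To make the overlattice mechanism behind the pairs $(R,G)$ concrete, here is the smallest standard instance (our own illustration): $D(A_{1})\cong\mathbb{Z}/2\mathbb{Z}$ with $q(1)=-\tfrac12\bmod 2\mathbb{Z}$, so a single copy carries no isotropic element, but in $D(4A_{1})$ the all-ones vector is isotropic; the resulting index-$2$ overlattice of $4A_{1}$ is the root lattice $D_{4}$. The arithmetic can be checked directly:

```python
from fractions import Fraction

def q_nA1(v):
    """Discriminant quadratic form of n copies of A_1 (negative definite
    convention): each nonzero coordinate in Z/2Z contributes -1/2 mod 2Z.
    The value is returned as a representative in [0, 2)."""
    total = sum(Fraction(-1, 2) for x in v if x % 2 == 1)
    return total % 2

assert q_nA1((1,)) == Fraction(3, 2)   # -1/2 mod 2Z: not isotropic
assert q_nA1((1, 1)) == 1              # -1  mod 2Z: not isotropic
assert q_nA1((1, 1, 1, 1)) == 0        # -2  mod 2Z: isotropic glue vector
# The overlattice generated by 4A_1 and the half-sum of the four roots
# is isometric to D_4 (a standard fact; disc(4A_1)/disc(D_4) = 16/4 = 2^2).
```

Already in this example the isotropic subgroup sits inside $D(4A_{1})\cong(\mathbb{Z}/2\mathbb{Z})^{4}$ in several coordinate-permuted ways, illustrating the first ambiguity above.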
Shimada’s result defines a stratification of the moduli space $\mathcal{F}^{\prime}$ into maximal (with respect to inclusion) locally closed irreducible subvarieties $\mathcal{M}^{\prime}_{R,G,j}$ and this defines a finite decomposition (3.3) $\mathcal{F}^{\prime}=\cup_{j}\mathcal{M}^{\prime}_{R,G,j}.$ ###### Definition 3.4. We call the decomposition (3.3) the Shimada stratification of $\mathcal{F}^{\prime}$ and the subvarieties $\mathcal{M}^{\prime}_{R,G,j}$ the Shimada strata of $\mathcal{F}^{\prime}$. For future use we also note the following remark which is simply obtained by counting the conditions imposed by the rank of the Néron-Severi group. ###### Remark 3.5. The dimension of a component $\mathcal{M}^{\prime}_{R,G,j}$ is given by (3.4) $\dim\mathcal{M}^{\prime}_{R,G,j}=18-\operatorname{rank}(R).$ We notice that this only depends on $R$ and not on $G$ nor the specific component we are considering. Note that all components of $\mathcal{M}^{\prime}_{R}$ have the same dimension, which we will also refer to as the dimension of $\mathcal{M}^{\prime}_{R}$. We note that the decomposition in Shimada strata (3.3) is a topological poset stratification in the sense of [YaYo]. ## 4\. The main classification result The primary goal of this paper is to compare the two stratifications of the moduli space $\mathcal{F}^{\prime}$ of non-isotrivial Jacobian fibrations on $K3$ surfaces which we have defined above, namely the monodromy stratification and the Shimada stratification. More precisely, we want to determine all positive dimensional monodromy and Shimada strata whose generic points coincide. In other words, we want to determine all pairs $\mathcal{F}^{\prime}_{\Gamma,i}$ and $\mathcal{M}^{\prime}_{R,G,j}$ such that their intersection is non-empty and open (and hence dense) in both $\mathcal{F}^{\prime}_{{\Gamma,i}}$ and $\mathcal{M}^{\prime}_{R,G,j}$. This leads us to the definition ###### Definition 4.1. 
A positive dimensional irreducible closed subset $\mathcal{A}\subset\mathcal{F}^{\prime}$ is called an ambi-typical stratum if there is a monodromy stratum $\mathcal{F}^{\prime}_{\Gamma,i}$ and a Shimada stratum $\mathcal{M}^{\prime}_{R,G,j}$ such that (4.1) $\mathcal{A}=\overline{\mathcal{F}^{\prime}_{\Gamma,i}}=\overline{\mathcal{M}^{\prime}_{R,G,j}}.$ (The monodromy stratum and the Shimada stratum are then uniquely defined.) Our main result is ###### Theorem 4.2. There are $50$ ambi-typical strata in $\mathcal{F}^{\prime}$. As we have discussed, a Shimada stratum determines a pair $(R,G)$, but not necessarily the other way round. There are, however, cases where the root lattice defines a unique Shimada stratum. Indeed, this is the case for most of the ambi-typical strata. Before we make this more precise, we recall that $D(A_{1})\cong\mathbb{Z}/2\mathbb{Z}$ and $D(A_{3})\cong\mathbb{Z}/4\mathbb{Z}$ and both have a unique element of order $2$, denoted by $1$ and $2$ respectively. ###### Theorem 4.3. In all but $3$ cases the root lattice $R$ of an ambi-typical stratum $\mathcal{A}$ determines a unique Shimada stratum. The exceptions are: * (1) The root lattice $D_{4}+2A_{6}+A_{1}$ in 38/39 determines two Shimada strata; these are complex conjugate to each other. * (2) The root lattice $2A_{3}+8A_{1}$ in 6 determines two non-conjugate Shimada strata. Only one of these occurs as an ambi-typical stratum. (We will say more on this in Remark 9.6.) By definition a monodromy stratum determines a monodromy group $\Gamma$, but the converse will in general not be true. In fact, as we will see in the rest of the paper, monodromy strata are determined by some fairly involved data. This includes the modular monodromy group $\bar{\Gamma}\subset\operatorname{PSL}(2,\mathbb{Z})$, but also information about the $j$-invariant $j(\mathcal{E}):\mathbb{P}^{1}\to\mathbb{P}^{1}$. 
We shall see in Section 6 that this can be factored in a unique way as $j(\mathcal{E})=j_{\bar{\Gamma}}\circ j_{\mathcal{E}}$ where $j_{\bar{\Gamma}}$ is a Belyi map. The additional data which determine a monodromy stratum are given by a stratum of rational maps $j_{\mathcal{E}}:\mathbb{P}^{1}\to\mathbb{P}^{1}$ and a rational family of divisors on $\mathbb{P}^{1}$ corresponding to the $*$-fibres. However, in the case of ambi-typical strata these data are fully determined by much more accessible ones: ###### Theorem 4.4. The $50$ ambi-typical monodromy strata determine the list of invariants given in Table 12, consisting of 1. (1) the local monodromies of $j_{\bar{\Gamma}}$ at $0,1,\infty$, 2. (2) the branching of $j_{\mathcal{E}}$ for a generic element $\mathcal{E}$ at the non-critical pre-images of $0,1$ and at poles of $j_{\bar{\Gamma}}$, 3. (3) the number of $*$-fibres. Conversely, the data are pairwise distinct and each determines a unique monodromy stratum with the following exception: the strata 38/39 correspond to two distinct maps $j_{\bar{\Gamma}}$ having the same combinatorial type but with monodromy factorisations in distinct conjugacy classes. The proof of these theorems is quite involved and takes up most of the remaining paper. Here we shall give a rough outline of our strategy: * • In Section 5 we study configurations of singular fibres where we use the work of Kloosterman [Kl, sec.4]. This leads to a semi-continuous invariant which induces a stratification refining both the stratification by monodromy and the one by root lattice. Hence it suffices to look for ambi-typical fibre configuration strata. We find that being ambi-typical imposes severe restrictions on the possible singular fibres. * • In Section 6 we recall the factorisation $j(\mathcal{E})=j_{\bar{\Gamma}}\circ j_{\mathcal{E}}$ of the $j$-invariant. 
The restrictions on fibre configurations then translate into branching properties of $j_{\bar{\Gamma}}$ and $j_{\mathcal{E}}$ over the points $0,1,\infty$. * • In Section 7 we use an Euler number restriction to find a list of candidates for the modular monodromy group $\bar{\Gamma}$ and give an upper bound for the dimension of the corresponding strata. * • In Section 8 we parameterise closed subsets of $V^{\prime}$ which give Weierstraß data for all possible ambi-typical strata with modular monodromy group $\bar{\Gamma}$ of low index, i.e. at most $6$. * • In Section 9 we determine the ambi-typical strata among these families of Weierstraß data and their invariants. * • In Section 10 we address the classification of ambi-typical strata with modular monodromy group $\bar{\Gamma}$ of high index, i.e. at least $7$. In the first step we determine the possible corresponding root lattices and the topological types of the $j_{\mathcal{E}}$ factor of the $j$-function. In the second step we determine the topology of the maps $j_{\bar{\Gamma}}$ and the corresponding monodromy groups. This program will finally allow us to complete the proofs of our theorems. ## 5\. Singular fibre configurations and generic invariants In this section we consider the configuration of singular fibres of elliptic surfaces. The possible singular fibre types have been classified by Kodaira and are listed, together with some of their invariants, in Table 1; see [BHPV, Table V.6]. The relation between singular fibre types and Weierstraß data is given in Table 2. This contains all information necessary to apply the _Tate algorithm_ over the complex numbers, i.e. to determine the fibre types from the vanishing orders $\nu_{2}=\nu(g_{2}),\nu_{3}=\nu(g_{3})$ and $\nu_{\Delta}=\nu(\Delta)$ of $g_{2},g_{3}$ and $\Delta$ respectively. 
fibre type | $ADE$-type | Euler number | local monodromy | local $j$-expansion
---|---|---|---|---
$I_{0}$ | – | $0$ | $(\begin{smallmatrix}1&0\\\ 0&1\end{smallmatrix})$ | $j=s^{3k}$, $j=1+s^{2k}$ or $j\neq 0,1$
$I_{1}$ | – | $1$ | $(\begin{smallmatrix}1&1\\\ 0&1\end{smallmatrix})$ | pole of order $1$
$I_{b}\>\>(b\geq 2)$ | $A_{b-1}$ | $b$ | $(\begin{smallmatrix}1&b\\\ 0&1\end{smallmatrix})$ | pole of order $b$
$I^{*}_{0}$ | $D_{4}$ | $6$ | $(\begin{smallmatrix}-1&0\\\ 0&-1\end{smallmatrix})$ | same as in first case
$I^{*}_{b}\>\>(b\geq 1)$ | $D_{4+b}$ | $6+b$ | $(\begin{smallmatrix}-1&-b\\\ 0&-1\end{smallmatrix})$ | pole of order $b$
$II$ | – | $2$ | $(\begin{smallmatrix}1&1\\\ -1&0\end{smallmatrix})$ | $j=s^{3k+1}$
$III$ | $A_{1}$ | $3$ | $(\begin{smallmatrix}0&1\\\ -1&0\end{smallmatrix})$ | $j=1+s^{2k+1}$
$IV$ | $A_{2}$ | $4$ | $(\begin{smallmatrix}0&1\\\ -1&-1\end{smallmatrix})$ | $j=s^{3k+2}$
$IV^{*}$ | $E_{6}$ | $8$ | $(\begin{smallmatrix}-1&-1\\\ 1&0\end{smallmatrix})$ | $j=s^{3k+1}$
$III^{*}$ | $E_{7}$ | $9$ | $(\begin{smallmatrix}0&-1\\\ 1&0\end{smallmatrix})$ | $j=1+s^{2k+1}$
$II^{*}$ | $E_{8}$ | $10$ | $(\begin{smallmatrix}0&-1\\\ 1&1\end{smallmatrix})$ | $j=s^{3k+2}$
Table 1. 
Fibre types of elliptic surfaces $\begin{array}[]{c|c@{\hspace*{2mm}}c@{\hspace*{1mm}}c|c|c@{\hspace*{2mm}}c@{\hspace*{1mm}}c|c|c|c|c|c|c|c}\text{type}&\hfil\hskip 5.69054pt&I_{0}\hfil\hskip 2.84526pt&&I_{k}&\hfil\hskip 5.69054pt&I_{0}^{*}\hfil\hskip 2.84526pt&&I_{k}^{*}&II&III&IV&IV^{*}&III^{*}&II^{*}\\\ \hline\cr j&0\hfil\hskip 5.69054pt&1\hfil\hskip 2.84526pt&gen.&\,\infty&0\hfil\hskip 5.69054pt&1\hfil\hskip 2.84526pt&gen.&\infty&0&1&0&0&1&0\\\ \nu_{2}&>\\!0\hfil\hskip 5.69054pt&0\hfil\hskip 2.84526pt&0&0&>\\!2\hfil\hskip 5.69054pt&2\hfil\hskip 2.84526pt&2&2&>\\!0&1&>\\!1&>\\!2&3&>\\!3\\\ \nu_{3}&0\hfil\hskip 5.69054pt&>\\!0\hfil\hskip 2.84526pt&0&0&3\hfil\hskip 5.69054pt&>\\!3\hfil\hskip 2.84526pt&3&3&1&>\\!1&2&4&>\\!4&5\\\ \nu_{\Delta}&0\hfil\hskip 5.69054pt&0\hfil\hskip 2.84526pt&0&k&6\hfil\hskip 5.69054pt&6\hfil\hskip 2.84526pt&6&k\\!+\\!6&2&3&4&8&9&10\\\\[5.69054pt] \end{array}$ Table 2. Tate algorithm (cf. [Mi89, Table IV.3.1]) In the ensuing discussion we will use notation as introduced in Kloosterman’s paper [Kl]. For a Jacobian fibration $f:S\to\mathbb{P}^{1}$ let $C(f)$ denote the configuration of its singular fibres. Since we are restricting ourselves to non-isotrivial fibrations, we can assume that $C(f)$ contains at least one fibre of type $I_{\nu}$ or $I_{\nu}^{*}$ with $\nu>0$. For an abstract configuration $C$ of singular fibres we define (5.1) $L(C):=\\{[f:S\rightarrow\mathbb{P}^{1}]\in\mathcal{F}^{\prime}\mid C(f)=C\\}.$ ###### Remark 5.1. We recall from [Kl] that $L(C)\subset\mathcal{F}^{\prime}$ is constructible. Fibre configurations are partially ordered by degeneration of several singular fibres into fewer more complicated ones. Degeneration occurs on closed subsets, thus semi-continuity holds and $\mathcal{F}^{\prime}$ is topologically stratified by the components of the sets $L(C)$. ###### Definition 5.2. We call $L(C)$ a configuration locus and the components of the sets $L(C)$ the configuration strata. 
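The Tate data of Table 2 amount to a finite lookup; the following is our own encoding of the table (assuming minimal Weierstraß data over $\mathbb{C}$), not code from the paper:

```python
def kodaira_type(v2, v3, vd):
    """Kodaira fibre type from the vanishing orders (v2, v3, vd) of
    g2, g3 and Delta at a point, following Table 2."""
    if vd == 0:
        return "I0"                    # smooth fibre
    if v2 == 0 and v3 == 0:
        return "I%d" % vd              # multiplicative fibres I_b, pole order b
    if v3 == 1:
        return "II"
    if v2 == 1:
        return "III"
    if v3 == 2:
        return "IV"
    if v2 >= 2 and v3 >= 3:
        if vd == 6:
            return "I0*"
        if v2 == 2 and v3 == 3:
            return "I%d*" % (vd - 6)   # I_b^* with b = vd - 6
        if v3 == 4:
            return "IV*"
        if v2 == 3:
            return "III*"
        if v3 == 5:
            return "II*"
    return "non-minimal"

# A few entries of Table 2, re-derived:
assert kodaira_type(0, 0, 3) == "I3"
assert kodaira_type(2, 3, 8) == "I2*"
assert kodaira_type(4, 5, 10) == "II*"
```

Note that the order of the tests matters: the case $\nu_{\Delta}=6$ must be checked before the starred additive types, since for example $(\nu_{2},\nu_{3},\nu_{\Delta})=(2,4,6)$ is an $I_{0}^{*}$-fibre, not a $IV^{*}$-fibre.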
By construction, each irreducible component of $L(C)$ corresponds to Jacobian elliptic fibrations with topologically equivalent complements of singular fibres. In particular all elements in an irreducible component of $L(C)$ have the same monodromy group and each irreducible component $\mathcal{F}^{\prime}_{\Gamma,i}$ is a finite union of irreducible components of certain $L(C)$. The unique one of these which is open thus corresponds to the configuration of a generic member of the monodromy stratum. We call this configuration $C(\mathcal{F}^{\prime}_{\Gamma,i})$ and all components of $L(C(\mathcal{F}^{\prime}_{\Gamma,i}))$ have the same dimension as $\mathcal{F}^{\prime}_{\Gamma,i}$. As we will need the dimension of the components of $L(C)$ we recall ###### Lemma 5.3. [Kl, Lemma 4.6, As. 4.3] Assume that $C$ is a configuration of singular fibres arising from a non-isotrivial Jacobian fibration. Then all components of $L(C)$ have the dimension $\dim L(C)=\\#\\{\mbox{singular fibres}\\}+\\#\\{\mbox{fibres of type }II^{*},III^{*},IV^{*},I_{\nu}^{*}\\}-6.$ By definition all Jacobian fibrations $\mathcal{E}\in L(C)$ have the same root configuration and hence the same rank $\rho_{\operatorname{tr}}(\mathcal{E})$ of the trivial lattice. From this one easily obtains ###### Lemma 5.4. [Kl, Prop. 4.7] Let $C$ be a configuration of singular fibres, containing at least one $I_{\nu}$- or $I_{\nu}^{*}$-fibre ($\nu>0$) with $L(C)\neq\emptyset$ and let $\mathcal{E}$ be any element in $L(C)$. Then $\dim L(C)=20-\rho_{\operatorname{tr}}(\mathcal{E})-\\#\\{\mbox{fibres of type }II,III\mbox{ or }IV\\}.$ ###### Proof. 
We use the above lemma and apply the fact that the rank of the trivial lattice $\rho_{\operatorname{tr}}(\mathcal{E})$ is given by $\displaystyle\rho_{\operatorname{tr}}(\mathcal{E})\quad=$ $\displaystyle 2+\sum_{\mbox{$F$ multiplicative}}(e(F)-1)+\sum_{\mbox{$F$ additive}}(e(F)-2)$ $\displaystyle=$ $\displaystyle 26-\\#\\{\mbox{multiplicative fibres}\\}-2\\#\\{\mbox{additive fibres}\\}.$ Here $e(F)$ denotes the Euler number of a fibre $F$. ∎ Any component of a stratum $L(C)$ defines an embedding $U+R\hookrightarrow L_{K3}$ (up to isometries in $\operatorname{O}(L_{K3})$) and thus an embedding into a root stratum. Hence the stratification given by the configuration strata refines both the monodromy stratification and the root lattice stratification. In particular, every Shimada stratum $\mathcal{M}^{\prime}_{R,G,j}$ and every monodromy stratum $\mathcal{F}^{\prime}_{\Gamma,i}$ contains a unique configuration stratum which is open and dense in this stratum. This also allows us to talk about the properties of a generic element of a Shimada stratum or a monodromy stratum. In particular, the following data, in addition to the configuration $C(\mathcal{E})$, are invariant for all members $\mathcal{E}$ of a configuration stratum: the monodromy group $\Gamma(\mathcal{E})$ (as we have already pointed out), the root lattice $R(\mathcal{E})$, the trivial lattice $\operatorname{NS}_{\operatorname{tr}}(\mathcal{E})$, the saturation $L(\mathcal{E})$ of $R(\mathcal{E})$ in $\operatorname{NS}(\mathcal{E})$ and the torsion of the Mordell-Weil lattice $\operatorname{MW}_{\operatorname{tors}}(\mathcal{E})\cong L(\mathcal{E})/R(\mathcal{E})$. We are now ready to start the classification of ambi-typical strata. For this we first collect necessary conditions which a generic element of such a stratum must fulfil. Indeed, the first restrictions on the possible configurations can be derived easily. ###### Proposition 5.5. 
Suppose $\Gamma$ is a proper subgroup of $\operatorname{SL}(2,\mathbb{Z})$ of finite index and $\mathcal{E}\in\mathcal{F}^{\prime}_{\Gamma,i}$ a generic element of a monodromy stratum with monodromy group $\Gamma$. Then the following properties are equivalent: 1. (1) The Jacobian fibration $\mathcal{E}$ has no fibres of type $II,III\mbox{ or }IV$. 2. (2) The dimensions of the Shimada stratum $\mathcal{M}^{\prime}_{R(\mathcal{E}),G(\mathcal{E}),j}$ which contains $\mathcal{E}$ and of $\mathcal{F}^{\prime}_{\Gamma,i}$ coincide. In particular, the generic element of an ambi-typical stratum does not have any fibres of type $II,III$ or $IV$. ###### Proof. Since $\mathcal{E}$ is a generic element of $\mathcal{F}^{\prime}_{\Gamma,i}$ it lies in a unique open and dense configuration stratum, namely a component of $L(C(\mathcal{E}))$. Hence $\dim\mathcal{F}^{\prime}_{\Gamma,i}=\dim L(C(\mathcal{E}))=20-\rho_{tr}(\mathcal{E})-\\#\\{\mbox{fibres of type }II,III\mbox{ or }IV\\}$ where the last equality follows from Lemma 5.4. Since the Shimada stratum has dimension equal to $20-\rho_{tr}(\mathcal{E})$ the claim follows immediately. ∎ ###### Remark 5.6. The above lemma shows: if the generic element of a monodromy stratum $\mathcal{F}^{\prime}_{\Gamma,i}$ has fibres of type $II,III$ or $IV$, then this monodromy stratum is contained in a Shimada stratum of strictly bigger dimension. We can find further severe restrictions on ambi-typical strata by studying how the monodromy behaves under certain fibre degenerations. ###### Proposition 5.7. Assume that $\mathcal{E}$ is a generic element of an ambi-typical stratum. Then $\mathcal{E}$ has no singular fibres of type $II,III,IV,II^{*}$ or $III^{*}$. If $-\operatorname{id}\in\Gamma(\mathcal{E})$ then $\mathcal{E}$ also has no singular fibres of type $I_{>0}^{*},IV^{*}$. ###### Proof. We have already seen in Proposition 5.5 that fibres of type $II,III$ or $IV$ cannot exist. 
Our strategy is the following: suppose that $\mathcal{E}$ has a singular fibre of type $II^{*},III^{*},I_{>0}^{*}$ or $IV^{*}$, where in the latter two cases we assume that $-\operatorname{id}\in\Gamma(\mathcal{E})$. We will then construct a family containing $\mathcal{E}$ where the monodromy remains constant, but where the rank of the trivial Néron-Severi group drops. This contradicts the assumption that $\mathcal{E}$ is a generic element of a Shimada stratum. To do this, pick a Weierstraß datum $g_{2},g_{3}$ for $\mathcal{E}$. By inspection of the Tate Table 2, the presence of a $*$-fibre implies that $g_{2},g_{3}$ have a common zero at this fibre. More precisely, we can factor $g_{2}=\check{g}_{2}x^{2},g_{3}=\check{g}_{3}x^{3}$ where $x$ is a linear form vanishing at this fibre. We then consider the family of Weierstraß data $\check{g}_{2}(x-t)^{2},\check{g}_{3}(x-t)^{3}$ where $t$ varies. It has the same $j$-invariant as $\mathcal{E}$, hence the projective monodromy group $\bar{\Gamma}(\mathcal{E}_{t})$ is constant, see [BHPV, p.211]. If in addition $-\operatorname{id}\in\Gamma(\mathcal{E})$, then also the monodromy group $\Gamma(\mathcal{E}_{t})$ is constant, since it can only get smaller on a closed subset. By Table 1, for fibres of type $II^{*},III^{*}$ a power of the local monodromy is $-\operatorname{id}$, so this assumption is fulfilled. For fibres of type $I_{>0}^{*},IV^{*}$ this is part of our assumptions. Hence the monodromy group remains constant if we vary $t$; however, for $t\neq 0$ the fibre of $*$-type is replaced by an $I_{0}^{*}$-fibre together with the fibre type without $*$ whose local monodromy agrees with the original one up to $-\operatorname{id}$, see Table 1. But then, again by Table 1, this implies that the rank of the trivial Néron-Severi group $\operatorname{NS}_{\operatorname{tr}}(\mathcal{E}_{t})$ drops for $t\neq 0$, giving the desired contradiction. ∎ ###### Remark 5.8. 
A typical example for the situation discussed is when a type $II^{*}$-fibre splits into a type $IV$- and an $I_{0}^{*}$-fibre. Here the rank of the trivial Néron-Severi group drops by $2$. This gives examples where a Shimada stratum is contained as a proper subset in a bigger monodromy stratum. ## 6\. Factorisations of the $j$-invariant An essential tool for our classification result is the $j$-function associated to a Jacobian fibration $f\colon\mathcal{E}\to\mathbb{P}^{1}$. As usual we denote the upper half plane by $\mathbb{H}_{1}$ and set $\overline{\mathbb{H}}_{1}=\mathbb{H}_{1}\cup\mathbb{Q}\cup\\{\infty\\}.$ The quotient $X(1):=\overline{\mathbb{H}}_{1}/\operatorname{PSL}(2,\mathbb{Z})\cong\mathbb{P}^{1}$ is the modular curve of level $1$, namely the compactification of the $j$-line. If the Jacobian fibration $f:\mathcal{E}\to\mathbb{P}^{1}$ has the monodromy group $\Gamma\subset\operatorname{SL}(2,\mathbb{Z})$ then we denote its image in $\operatorname{PSL}(2,\mathbb{Z})$ by $\bar{\Gamma}$. This defines the modular curve $X({\bar{\Gamma}}):=\overline{\mathbb{H}}_{1}/\bar{\Gamma}$ which in our case is again isomorphic to $\mathbb{P}^{1}$. The $j$-function $j(\mathcal{E})\colon\mathbb{P}^{1}\to X(1)\cong\mathbb{P}^{1}$ has a unique factorisation (6.1) $j(\mathcal{E})=j_{\bar{\Gamma}}\circ j_{\mathcal{E}}$ up to deck transformations of $j_{\bar{\Gamma}}$, where $j_{\bar{\Gamma}}:X(\bar{\Gamma})\cong\mathbb{P}^{1}\to X(1)\cong\mathbb{P}^{1}$ is the natural quotient map. We refer the reader also to [BPT, p. 1107]. In fact, the group $\bar{\Gamma}$, and thus the factorisation (6.1), can be characterised without recourse to the monodromy of the elliptic surface $\mathcal{E}$: while $\Gamma$ is determined by Kodaira’s homological invariant, the image $\bar{\Gamma}$ is determined by the functional invariant $j(\mathcal{E})$ alone according to the discussion in [BHPV, p.211]. ###### Remark 6.1.
In the sequel it will be very helpful to investigate subgroups of $\operatorname{PSL}(2,\mathbb{Z})$ by means of the following three sets, where $S_{d}$ denotes the symmetric group of $d$ elements: 1. (1) subgroups $\bar{\Gamma}$ in $\operatorname{PSL}(2,\mathbb{Z})$ of index $d$ up to conjugacy in $\operatorname{PSL}(2,\mathbb{Z})$, 2. (2) (connected) branched covers $j:C\to X(1)$ of degree $d$ up to equivalence of covers, branched only over $0,1,\infty$ with multiplicities $1,3$ over $0$ and $1,2$ over $1$, 3. (3) homomorphisms $\mu:\pi_{1}(\mathbb{C}\setminus\\{0,1\\})\to S_{d}$ up to conjugacy in $S_{d}$, such that simple loops around $0$, resp. $1$ map to elements of order $1$ or $3$, resp. $1$ or $2$, and the image acts transitively. Since $\pi_{1}(\mathbb{C}\setminus\\{0,1\\})$ is freely generated by simple loops around $0$ and $1$, while $\operatorname{PSL}(2,\mathbb{Z})$ as the free product $\mathbb{Z}/2\ast\mathbb{Z}/3$ is generated by $(\begin{smallmatrix}0&1\\\ -1&0\end{smallmatrix})$ and $(\begin{smallmatrix}1&1\\\ -1&0\end{smallmatrix})$, these sets are in bijective correspondence with each other via the following maps: (1)$\to$(2): Given $\bar{\Gamma}$, associate the branched cover $j_{\bar{\Gamma}}:X(\bar{\Gamma})\to X(1)$. (1)$\to$(3): Given $\bar{\Gamma}$, associate left multiplication $\operatorname{PSL}(2,\mathbb{Z})\to S(\operatorname{PSL}(2,\mathbb{Z})/\bar{\Gamma})\cong S_{d}$ on cosets and compose with $\pi_{1}(\mathbb{C}\setminus\\{0,1\\})\to\operatorname{PSL}(2,\mathbb{Z})$ to get $\mu$. (2)$\to$(3): Given $j:C\to\mathbb{P}^{1}$, restrict to $\mathbb{C}\setminus\\{0,1\\}$, which is a topological cover of degree $d$ and associate the representation $\mu:\pi_{1}(\mathbb{C}\setminus\\{0,1\\})\to S_{d}$ by permutations of a fibre. 
(3)$\to$(1): Given $\mu$, note that by our assumptions this factors through a homomorphism $\bar{\mu}:\operatorname{PSL}(2,\mathbb{Z})\to S_{d}$ and associate the stabiliser subgroup $\bar{\Gamma}\subset\operatorname{PSL}(2,\mathbb{Z})$ of $1$. (3)$\to$(2): Given $\mu$, associate the connected topological cover over $\mathbb{C}\setminus\\{0,1\\}$ of degree $d$ that extends to a branched cover $j:C\to\mathbb{P}^{1}$ by Riemann existence. The following is a useful result on factorisations [BT, Lemma 2.3]: ###### Lemma 6.2. Let $j:\mathbb{P}^{1}\to X(1)$ be any surjective holomorphic map. Then there exists a unique subgroup $\bar{\Gamma}\subset\operatorname{PSL}(2,\mathbb{Z})$ of finite index such that (6.2) $\displaystyle j\text{ factors through }j_{\bar{\Gamma}},$ (6.3) $\displaystyle\text{if $j$ factors through $j_{\bar{\Gamma}^{\prime}}$ then }\bar{\Gamma}\subset\bar{\Gamma}^{\prime}\text{ up to conjugation}.$ An equivalent statement can be obtained using Remark 6.1 above: ###### Lemma 6.3. Given the holomorphic map $j(\mathcal{E}):\mathbb{P}^{1}\to X(1)$, a factorisation $j_{2}\circ j_{1}$ is equivalent to $j_{\bar{\Gamma}}\circ j_{\mathcal{E}}$ if and only if (6.4) $\displaystyle j_{2}$ is branched only over $0,1,\infty$ with multiplicities $1,3$ over $0$ and $1,2$ over $1$, (6.5) $\displaystyle j_{1}$ has no proper left factor $j^{\prime}$ such that $j_{2}\circ j^{\prime}$ has the property above. In the topological analysis of more general (connected) branched covers of $\mathbb{P}^{1}$ we first fix a base point not in the branch locus. We then use the monodromy homomorphism taking homotopy classes of closed paths inside the complement of the branch locus based at this base point to the permutation group of the fibre over the base point. This defines a local monodromy which associates to a branch point the conjugacy class of the monodromy of the positively oriented boundary of a sufficiently small disc centred at the branch point.
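For small index $d$ the correspondence between the sets (1) and (3) of Remark 6.1 can be made completely explicit by machine: conjugacy classes of index-$d$ subgroups of $\operatorname{PSL}(2,\mathbb{Z})\cong\mathbb{Z}/2\ast\mathbb{Z}/3$ correspond to pairs of permutations $(s,t)$ in $S_{d}$ with $s^{2}=t^{3}=\operatorname{id}$ generating a transitive group, taken up to simultaneous conjugation. A brute-force sketch (function names are ours, not from the text):

```python
from itertools import permutations

def compose(p, q):
    # (p o q)(i) = p[q[i]]; permutations are tuples on {0, ..., d-1}
    return tuple(p[q[i]] for i in range(len(p)))

def inverse(p):
    inv = [0] * len(p)
    for i, pi in enumerate(p):
        inv[pi] = i
    return tuple(inv)

def power_is_id(p, n):
    q = tuple(range(len(p)))
    for _ in range(n):
        q = compose(p, q)
    return q == tuple(range(len(p)))

def transitive(gens, d):
    seen, frontier = {0}, [0]
    while frontier:
        i = frontier.pop()
        for g in gens:
            if g[i] not in seen:
                seen.add(g[i])
                frontier.append(g[i])
    return len(seen) == d

def count_classes(d):
    """Conjugacy classes of index-d subgroups of Z/2 * Z/3,
    via pairs (s, t) with s^2 = t^3 = id and <s, t> transitive."""
    sym = list(permutations(range(d)))
    pairs = {(s, t) for s in sym if power_is_id(s, 2)
             for t in sym if power_is_id(t, 3)
             if transitive([s, t], d)}
    classes = 0
    while pairs:
        s, t = next(iter(pairs))
        for g in sym:  # remove the whole simultaneous-conjugation orbit
            gi = inverse(g)
            pairs.discard((compose(compose(g, s), gi),
                           compose(compose(g, t), gi)))
        classes += 1
    return classes
```

For instance, this finds a single class in index $2$ and two classes in index $3$, in line with the groups of these indices appearing later in Section 7.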
While the monodromy depends on the choice of a fibre, the conjugacy class does not, and it is thus an invariant, which we may as well give by the cycle type of the permutation or the corresponding partition of the fibre cardinality. The sizes of the parts of that partition are exactly the multiplicities of the pre-images of the branch point. Accordingly, we can associate to $j_{\bar{\Gamma}}$ the three conjugacy classes $C_{0},C_{1},C_{\infty}$ corresponding to the branch points $0,1,\infty$. In practice we will give the three conjugacy classes as a $3$-tuple of partitions of $\deg j_{\bar{\Gamma}}$. The degree of $j_{\bar{\Gamma}}$ and the numbers $e_{3}$ and $e_{2}$ of fixed points of elements in $C_{0}$ and $C_{1}$ respectively, are related by the following congruences (6.6) $\deg j_{\bar{\Gamma}}\equiv_{2}e_{2},\qquad\deg j_{\bar{\Gamma}}\equiv_{3}e_{3}.$ This follows since the difference of the two numbers in question is the sum of cycle lengths of transpositions and $3$-cycles respectively. Note that for the subgroup $\bar{\Gamma}$ of $\operatorname{PSL}(2,\mathbb{Z})$ corresponding to $j_{\bar{\Gamma}}$, the numbers $e_{2},e_{3}$ count the conjugacy classes into which the conjugacy classes of $(\begin{smallmatrix}0&1\\\ -1&0\end{smallmatrix})$ and $(\begin{smallmatrix}1&1\\\ -1&0\end{smallmatrix})$ decompose. For the next lemma, we only require a small part of the branching datum of $j_{\mathcal{E}}$. We need the partitions corresponding to local monodromy around each non-ramified pre-image of $0$ under $j_{\bar{\Gamma}}$, which we will call _3-torsion_ points. Similarly we exploit the partitions corresponding to local monodromy around each non-ramified pre-image of $1$ called _2-torsion_ points. ###### Lemma 6.4. The following properties hold for the invariants associated to the factors $j_{\bar{\Gamma}}$ and $j_{\mathcal{E}}$ of a generic element $\mathcal{E}$ in an ambi-typical monodromy stratum: 1. 
(1) partitions associated to $j_{\mathcal{E}}$ at 2-torsion points have only even size parts, 2. (2) the parts of the partitions associated to $j_{\mathcal{E}}$ at 3-torsion points all have size divisible by $3$ except for a total of $\\#IV^{*}$ parts of size congruent to $1\\!\\!\pmod{3}$, 3. (3) $e_{2}<2$, 4. (4) $e_{3}<2$, if $\\#IV^{*}=0$, 5. (5) $e_{3}<3$, if $\\#IV^{*}\leq 2$, 6. (6) $\deg j_{\mathcal{E}}\equiv_{3}1$ and $\\#IV^{*}=2$, if $e_{3}=2$ and $\\#IV^{*}<3$. ###### Proof of 1). Let $s$ be a local coordinate at a point corresponding to an odd part. Then the $j$-invariant takes value $1$ and has odd multiplicity at $s=0$. Hence the local expansion is $j=1+s^{2k+1}$. But then the corresponding fibre of $\mathcal{E}$ would be, according to Table 1, of type $III$ or $III^{*}$. This we have already excluded in Proposition 5.7. _of 2)_ If $s$ is a local coordinate at a point corresponding to a part of size $\ell$, then the $j$-invariant has value $0$ and multiplicity equal to $\ell$. From the local expansion $j=s^{\ell}$ we can determine the corresponding fibre type again from Table 1. It is $IV$ or $II^{*}$ in case $\ell\equiv_{3}2$, but these are excluded by Proposition 5.7. It is $II$ or $IV^{*}$ in case $\ell\equiv_{3}1$, but the former is excluded again for the same reason and hence the number of $IV^{*}$ fibres is equal to the number of parts of size $\ell\equiv_{3}1$. _of 3)_ Suppose $e_{2}\geq 2$, then there are at least two partitions with only even size parts. By Proposition 6.6 below, $j_{\mathcal{E}}$ then has a proper factor $j^{\prime}$ which violates condition (6.5) on the factorisation of the $j$-invariant. _of 4)_ Suppose $e_{3}\geq 2$ and $\\#IV^{*}=0$. Then there are at least two partitions with parts of size divisible by $3$. Again by Proposition 6.6 below, $j_{\mathcal{E}}$ then has a proper factor $j^{\prime}$ violating condition (6.5).
_of 5)_ The claim follows since the number of parts of size $\ell\equiv_{3}1$ is bounded below by $e_{3}$ if $\deg j_{\mathcal{E}}$ is not divisible by $3$; otherwise the number of parts of size $\ell\equiv_{3}1$ is at least $3$, or _4)_ applies. _of 6)_ By _4)_ it is not possible to have $\\#IV^{*}=0$. In case $\deg j_{\mathcal{E}}\equiv_{3}0$ or $2$, the number of parts of size $\ell\equiv_{3}1$ is therefore at least $3$, respectively $4$. So $\deg j_{\mathcal{E}}\equiv_{3}1$ and the number of parts of size $\ell\equiv_{3}1$ is $2$ and hence $\\#IV^{*}=2$. ∎ In order to prove the factorisation result used above, we now study branched coverings of the Riemann sphere more systematically from a topological point of view. Consider a finite branched covering $\mathbb{P}^{1}\to\mathbb{P}^{1}$ of degree $d$ and let $r$ denote the number of branch points. The fundamental group of the complement with respect to a base point is generated by elements associated to $r$ closed paths around these points subject only to the relation that their product is trivial. These elements act by permutations on the set $I=\\{1,\dots,d\\}$ in bijection to the elements of the fibre over the base point, giving rise to the monodromy elements $\sigma_{1},\dots,\sigma_{r}\in S(I)$ where $\sigma_{i}$ is given by monodromy along the path around the $i$-th branch point. Of course the choices of paths form an orbit under the action of the Hurwitz braid group [Hur]. The induced _Hurwitz action_ on $r$-tuples of monodromy elements is generated by transformations on adjacent pairs $(\sigma_{1},\dots,\sigma_{i},\sigma_{i+1},\dots,\sigma_{r})\mapsto(\sigma_{1},\dots,\sigma_{i+1},\sigma_{i+1}^{-1}\sigma_{i}{\sigma_{i+1}},\dots,\sigma_{r}).$ ###### Lemma 6.5. Suppose that the covering $g:\mathbb{P}^{1}\to\mathbb{P}^{1}$ has degree $kh$ with $k>1$, and $I$ has a partition into parts $I_{1},\dots,I_{k}$ each of cardinality $h$, such that 1. (1) all $\sigma_{i},i>2$ preserve all parts and 2.
(2) $\sigma_{1},\sigma_{2}$ permute the parts. Then 1. (1) the covering map $g$ is the composition of two factors $g=g_{2}\circ g_{1}$ where 2. (2) the second factor $g_{2}$ is a cyclic branched cover of degree $k$ branched at the two branch points corresponding to $\sigma_{1},\sigma_{2}$. ###### Proof. Let us consider the topological cover over the complement of the branch points and an element in $I_{1}$. Its stabiliser in the fundamental group determines the covering. The setwise stabiliser of $I_{1}$ determines an intermediate cover $g_{2}$ which is of degree $k>1$ over the base. By assumption each $\sigma_{i},i>2$, stabilises $I_{1}$, so $g_{2}$ is cyclically branched of degree $k$ over the two points corresponding to $\sigma_{1},\sigma_{2}$. ∎ We now prove the factorisation of $j_{\mathcal{E}}$ into two factors, which we used in the proof of Lemma 6.4 in a more abstract setting. ###### Proposition 6.6. Suppose $P_{1},\dots,P_{r}$ are the partitions associated to the branch points of a branched covering $g:\mathbb{P}^{1}\to\mathbb{P}^{1}$ of degree $hk$ with $k>1$. If all parts of $P_{1}$ and $P_{2}$ have length divisible by $k$ then 1. (1) the covering map $g$ is the composition of two factors $g=g_{2}\circ g_{1}$ and 2. (2) the second factor $g_{2}$ is a cyclic branched cover of degree $k$ branched at the two branch points corresponding to $P_{1},P_{2}$. ###### Proof. We choose a base point and $r$ closed paths around the $r$ points making up the branch locus. Let $\sigma_{1},\dots,\sigma_{r}$ be the permutations associated to these paths. We recall that the conjugacy class of $\sigma_{i}$ is determined by $P_{i}$ and does not depend on the chosen paths. The product of the $\sigma_{i}$ is the identity. It suffices to show that there is a decomposition of the fibre $I$ over the base point into $k$ subsets of cardinality $h$, the _blocks_, such that the hypothesis of Lemma 6.5 is met.
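The elementary Hurwitz transformation introduced above, which is used repeatedly in the remainder of the proof, can be checked mechanically: it changes neither the product of the tuple nor the subgroup generated. A minimal sketch in our own notation, with permutations as tuples:

```python
def compose(p, q):
    # (p o q)(i) = p[q[i]]
    return tuple(p[q[i]] for i in range(len(p)))

def inverse(p):
    inv = [0] * len(p)
    for i, pi in enumerate(p):
        inv[pi] = i
    return tuple(inv)

def product(tup):
    out = tuple(range(len(tup[0])))
    for p in tup:
        out = compose(out, p)
    return out

def hurwitz_move(tup, i):
    """(.., s_i, s_{i+1}, ..) -> (.., s_{i+1}, s_{i+1}^{-1} s_i s_{i+1}, ..);
    the product s_{i+1} (s_{i+1}^{-1} s_i s_{i+1}) = s_i s_{i+1} is unchanged."""
    t = list(tup)
    a, b = t[i], t[i + 1]
    t[i], t[i + 1] = b, compose(compose(inverse(b), a), b)
    return tuple(t)
```

Applied to a tuple of monodromy elements with trivial product, any sequence of such moves again yields a tuple with trivial product and conjugate entries, which is what permits the reorderings of the $\tau_{i}$ below.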
Without loss of generality we may assume $\sigma_{1},\sigma_{2}$ to have $h$ cycles of length $k$ each, and all other $\sigma_{i},i>2$ to be transpositions. This case we call the generic case. In fact, in any other case there are fewer monodromy elements. We can factor each of the first two elements into a permutation with cycles of length $k$ only and a minimal number of transpositions. Any other element can be factored into a minimal number of transpositions. Then the sequence of factors, after a suitable reordering using Hurwitz transformations, is just a sequence of permutations as in the generic case. Thus it suffices to prove the claim in the generic case, since the monodromy elements meet the hypotheses of Lemma 6.5, if the factors do. Recall that the cycles of $\sigma_{i}$ correspond bijectively to the points in the $i$-th fibre. The Riemann-Hurwitz formula, which relates the Euler number of the domain to the Euler number of the target, the degree and the fibre defects, reads $2=2hk-2(hk-h)-(r-2)\quad\implies\quad r=2h.$ Let us write our monodromy elements $\sigma_{1},\sigma_{2},\tau_{3},\dots,\tau_{2h}$ where the $\tau_{i}$ are transpositions. We will now rely on the following elementary observation: If $\sigma$ is a permutation in $S_{n}$ and $\tau$ the transposition of elements $a,b$, then either (1): $a,b$ belong to a cycle $c$ of $\sigma$ of length $\ell$, and 1. (a): there exists a minimal $\ell_{1}>0$ such that $\sigma^{\ell_{1}}(a)=b$, 2. (b): $\sigma$ and $\sigma\tau$ have all cycles of $\sigma$ except $c$ in common, 3. (c): the elements of $c$ belong to two cycles of $\sigma\tau$ which are of lengths $\ell_{1}$ and $\ell_{2}=\ell-\ell_{1}$. or (2): $a,b$ belong to distinct cycles $c_{1},c_{2}$ of $\sigma$ of lengths $\ell_{1}$, $\ell_{2}$ respectively, and 1. (a): $\sigma$ and $\sigma\tau$ have all cycles of $\sigma$ except $c_{1},c_{2}$ in common, 2.
(b): the elements of $c_{1}\cup c_{2}$ belong to one cycle of $\sigma\tau$ which has length $\ell=\ell_{1}+\ell_{2}$. Next we exploit transitivity of the group generated, for which the elements $\sigma_{2},\tau_{i},i>2$ already suffice, since the product equals the identity. The element $\sigma_{2}$ has $h$ orbits, so it generates a transitive group only in case $h=1$. For $h>1$ there must be a sequence $\tau_{i_{1}},\dots,\tau_{i_{h-1}}$ with $i_{1}<\dots<i_{h-1}$ such that $\rho:=\sigma_{2}\tau_{i_{1}}\dots\tau_{i_{h-1}}$ is an $hk$-cycle. Using Hurwitz transformations we may assume without loss of generality that $i_{1}=3,\dots,i_{h-1}=h+1$. The element $\rho^{k}$ has order $h$, hence $I$ decomposes into $k$ orbits of length $h$. It remains to show that this is the decomposition into blocks we need for Lemma 6.5. Assume to the contrary that $\tau_{i}$ for some $2<i\leq h+1$ does not preserve the blocks. Using Hurwitz transformations we may write $\sigma_{2}=\rho\tau_{h+1}\dots\tau_{3}=\rho\tau_{i}\tau^{\prime}_{h}\dots\tau^{\prime}_{3}.$ Then $\tau_{i}$ transposes two elements of the cycle of $\rho$. By observations $(1.a)$ and $(1.c)$ the permutation $\rho\tau_{i}$ has a cycle of length $\ell_{1}$ which $k$ does not divide, since $\tau_{i}$ is assumed _not_ to preserve the blocks. We remain in case $(1)$ for all the following $\tau^{\prime}$, so also $\sigma_{2}$ has a cycle of length not divisible by $k$, contrary to the hypothesis of the proposition. Since blocks are permuted by the element $\rho$ and preserved by the $\tau_{i}$, $2<i\leq h+1$, the element $\sigma_{2}$ permutes the blocks. We argue analogously with $\tau_{i}$, $i>h+1$ and $\sigma_{1}^{-1}=\rho\tau_{h+2}\dots\tau_{2h}=\rho\tau_{i}\tau^{\prime}_{h+3}\dots\tau^{\prime}_{2h}.$ So also these $\tau_{i}$ preserve the blocks and they are permuted by $\sigma_{1}$. Therefore we are in a position to apply Lemma 6.5 and this concludes the proof. ∎ ## 7\.
Restrictions on possible modular monodromy groups The results collected so far are sufficient to derive a first characterization of the modular monodromy groups $\bar{\Gamma}$ which can occur for the elliptic surfaces $\mathcal{E}$ we consider. Since the singular fibres of a generic element are all of the form $I_{k}$, $I_{k}^{*}$ and $IV^{*}$ we can, using Table 1, write the Euler number formula in the following form: (7.1) $24\quad=\quad\sum_{I_{k},I^{*}_{k}}k+\sum_{I^{*}_{k}}6+8\\#IV^{*}=\quad\deg j_{\bar{\Gamma}}\cdot\deg j_{\mathcal{E}}+6\\#I^{*}+8\\#IV^{*}.$ We will consider all numerically possible combinations of the four integers $\deg j_{\mathcal{E}}>0$, $\\#I^{*}\geq 0$, $\\#IV^{*}\geq 0$ and $\deg j_{\bar{\Gamma}}>0$ and we discard the trivial case $\deg j_{\bar{\Gamma}}=1$ corresponding to $\bar{\Gamma}=\operatorname{PSL}(2,\mathbb{Z})$ and hence to $\Gamma=\operatorname{SL}(2,\mathbb{Z})$. We set up the corresponding table of combinations in two parts, namely low ($\leq 6$) and high ($>6$) index $[\operatorname{PSL}(2,\mathbb{Z}):\bar{\Gamma}]$, and we add two rows giving $e_{2}$ and $e_{3}$. Since $\\#IV^{*}\leq 2$ in every column, they are determined by 1. (1) $e_{2}<2$ and $e_{2}\equiv_{2}\deg j_{\bar{\Gamma}}$, according to (6.6) and Lemma 6.4.(3). 2. (2) $e_{3}<3$ and $e_{3}\equiv_{3}\deg j_{\bar{\Gamma}}$, according to (6.6) and Lemma 6.4.(5). In a last row we mark columns, which we _discard_ from further consideration according to one of the following arguments: 1. (3) $e_{3}=2$ implies $\\#IV^{*}=2$, according to Lemma 6.4.(6). 2. (4) If $\deg j_{\mathcal{E}}=1$ then the $j$-invariant of $\mathcal{E}$ is rigid. Thus $\mathcal{E}$ is rigid, except when there are $I_{0}^{*}$ fibres, which is obviously excluded for $\\#I^{*}=0$. 
But this is also excluded for $\\#IV^{*}>0$ since the presence of an $I_{0}^{*}$-fibre implies that $-\operatorname{id}$ is in the monodromy group, which in turn forbids the existence of a $IV^{*}$ fibre in $\mathcal{E}$ by Proposition 5.7.
$\begin{array}[]{c|ccccccccccccccccccccccc}
\deg j_{\bar{\Gamma}}&2&2&2&2&2&2&2&2&2&3&3&3&3&4&4&4&4&4&5&6&6&6&6\\
\deg j_{\mathcal{E}}&1&2&3&4&5&6&8&9&12&2&4&6&8&1&2&3&4&6&2&1&2&3&4\\
\\#I^{*}&1&2&3&0&1&2&0&1&0&3&2&1&0&2&0&2&0&0&1&3&2&1&0\\
\\#IV^{*}&2&1&0&2&1&0&1&0&0&0&0&0&0&1&2&0&1&0&1&0&0&0&0\\
\hline
e_{2}&0&0&0&0&0&0&0&0&0&1&1&1&1&0&0&0&0&0&1&0&0&0&0\\
e_{3}&2&2&2&2&2&2&2&2&2&0&0&0&0&1&1&1&1&1&2&0&0&0&0\\
\hline
\emph{discard}&{\color{red}4}&{\color{red}3}&{\color{red}3}&&{\color{red}3}&{\color{red}3}&{\color{red}3}&{\color{red}3}&{\color{red}3}&&&&&{\color{red}4}&&&&&{\color{red}3}&&&&
\end{array}$
Table 3.
Combinations of numerical invariants (low index)
$\begin{array}[]{c|cccccccccc}
\deg j_{\bar{\Gamma}}&8&8&8&9&10&12&12&16&18&24\\
\deg j_{\mathcal{E}}&1&2&3&2&1&1&2&1&1&1\\
\\#I^{*}&0&0&0&1&1&2&0&0&1&0\\
\\#IV^{*}&2&1&0&0&1&0&0&1&0&0\\
\hline
e_{2}&0&0&0&1&0&0&0&0&0&0\\
e_{3}&2&2&2&0&1&0&0&1&0&0\\
\hline
\emph{discard}&{\color{red}4}&{\color{red}3}&{\color{red}3}&&{\color{red}4}&&&{\color{red}4}&&{\color{red}4}
\end{array}$
Table 4. Combinations of numerical invariants (high index) Note that $e_{2}=e_{3}=0$ is equivalent to $\bar{\Gamma}$ being torsion free, and that $\deg j_{\bar{\Gamma}}$ is equal to the index of $\bar{\Gamma}$ in $\operatorname{PSL}(2,\mathbb{Z})$. Furthermore, subgroups of $\operatorname{PSL}(2,\mathbb{Z})$ of index at most $6$ are congruence subgroups [Wo, Thm.5]. Accordingly, our groups of low index occur in [CuPa, Table 2] with genus $0$, the genus of the domain of $j_{\bar{\Gamma}}$, and with $(I=\deg j_{\bar{\Gamma}},e_{2},e_{3})$ as in one of the columns in our Table 3. We find the entries $2A^{0},2B^{0},3B^{0},2C^{0}$ and $4B^{0}$. By [CuPa, Table 4] they usually go by the standard names $\bar{\Gamma}^{2},\bar{\Gamma}_{1}(2),\bar{\Gamma}_{1}(3),\bar{\Gamma}(2)$ and $\bar{\Gamma}_{1}(4)$, which we recall after the proposition, and we get: ###### Proposition 7.1. Let $\bar{\Gamma}$ be a subgroup of $\operatorname{PSL}(2,\mathbb{Z})$ which appears in one of the columns of Table 3 or Table 4 which have not been discarded. Then it belongs to one of the following, mutually exclusive cases: 1.
(1) $\bar{\Gamma}$ has index $18$ in $\operatorname{PSL}(2,\mathbb{Z})$ and is torsion free, 2. (2) $\bar{\Gamma}$ has index $12$ in $\operatorname{PSL}(2,\mathbb{Z})$ and is torsion free, 3. (3) $\bar{\Gamma}$ has index $9$ in $\operatorname{PSL}(2,\mathbb{Z})$ with $e_{2}=1,e_{3}=0$, 4. (4) $\bar{\Gamma}$ is either $\bar{\Gamma}(2)$ or $\bar{\Gamma}_{1}(4)$, has index $6$ in $\operatorname{PSL}(2,\mathbb{Z})$ and is torsion free, 5. (5) $\bar{\Gamma}_{1}(3)$ which has index $4$ with $e_{2}=0,e_{3}=1$, 6. (6) $\bar{\Gamma}_{1}(2)$ which has index $3$ with $e_{2}=1,e_{3}=0$, 7. (7) $\bar{\Gamma}^{2}$ which has index $2$ with $e_{2}=0,e_{3}=2$. Here we recall the standard notation for certain congruence subgroups of $\operatorname{SL}(2,\mathbb{Z})$ which are of importance for us. The principal congruence subgroup of level $n$ is denoted by $\Gamma(n)$ and defined as $\Gamma(n)=\left\\{\begin{pmatrix}a&b\\\ c&d\end{pmatrix}\Big{|}\quad a\equiv d\equiv 1,b\equiv c\equiv 0\mod n\right\\}.$ Its geometric relevance is the fact that the modular curve $X^{0}(n)=\mathbb{H}_{1}/\Gamma(n)$ parameterises elliptic curves with a level $n$ structure, i.e. a symplectic basis with respect to the Weil form, of the group $E[n]$ of $n$-torsion points. Next we recall the group $\Gamma_{1}(n)=\left\\{\begin{pmatrix}a&b\\\ c&d\end{pmatrix}\Big{|}\quad a\equiv d\equiv 1,c\equiv 0\mod n\right\\}$ whose meaning is that the modular curve $X_{1}^{0}(n)=\mathbb{H}_{1}/\Gamma_{1}(n)$ parameterises elliptic curves with a fixed point of order $n$. We will also use the group $\Gamma_{0}(n)=\left\\{\begin{pmatrix}a&b\\\ c&d\end{pmatrix}\Big{|}\quad c\equiv 0\mod n\right\\}.$ Its significance is that the modular curve $X_{0}^{0}(n)=\mathbb{H}_{1}/\Gamma_{0}(n)$ is the moduli space of elliptic curves with a distinguished subgroup $\mathbb{Z}/n\mathbb{Z}\subset E[n]$ of $n$-torsion points. 
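The columns of Tables 3 and 4 above are exactly the nonnegative integer solutions of the Euler number identity (7.1) with $\deg j_{\bar{\Gamma}}\geq 2$; this is easily confirmed by brute force. A sketch (variable names are ours):

```python
def euler_combinations():
    """All (deg j_Gamma, deg j_E, #I*, #IV*) with
    24 = deg_gamma * deg_e + 6 * n_istar + 8 * n_ivstar,
    where deg_gamma >= 2 (the trivial case deg = 1 is discarded)."""
    out = []
    for deg_gamma in range(2, 25):
        for deg_e in range(1, 24 // deg_gamma + 1):
            rest = 24 - deg_gamma * deg_e
            for n_istar in range(rest // 6 + 1):
                if (rest - 6 * n_istar) % 8 == 0:
                    out.append((deg_gamma, deg_e, n_istar,
                                (rest - 6 * n_istar) // 8))
    return out

combos = euler_combinations()
low = [c for c in combos if c[0] <= 6]    # columns of Table 3
high = [c for c in combos if c[0] > 6]    # columns of Table 4
```

One finds $23$ low-index and $10$ high-index combinations, matching the two tables column for column; in particular the observation $\\#IV^{*}\leq 2$ holds automatically in every solution.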
By $\bar{\Gamma}(n)$, $\bar{\Gamma}_{1}(n)$ and $\bar{\Gamma}_{0}(n)$ we denote the images of these groups in $\operatorname{PSL}(2,\mathbb{Z})$. Finally we recall that $\bar{\Gamma}^{2}$ is the subgroup of $\operatorname{PSL}(2,\mathbb{Z})$ which is generated by all squares. ###### Remark 7.2. We note here that $\bar{\Gamma}_{0}(2)=\bar{\Gamma}_{1}(2)$ and $\bar{\Gamma}_{0}(3)=\bar{\Gamma}_{1}(3)$. This will be relevant for Proposition 9.3. The data for the groups $\bar{\Gamma}$ in Proposition 7.1 can be further exploited to obtain upper bounds for the dimension of any corresponding monodromy stratum. For this purpose we make the following ###### Definition 7.3. Let $\Gamma$ be a subgroup of finite index of $\operatorname{SL}(2,\mathbb{Z})$. We define the maximal dimension of a monodromy stratum associated to $\Gamma$ by: $m(\Gamma)=\begin{cases}\operatorname{max}\\{\dim\mathcal{F}_{\Gamma,i}\mid\mathcal{F}_{\Gamma,i}\mbox{ is a monodromy stratum associated to }\Gamma\\}\\
-\infty\mbox{ if there exists no monodromy stratum with monodromy group }\Gamma.\end{cases}$ Together with the explicit bounds in the upcoming Lemma 7.4 this will be used later in the following way: The closure of a Shimada stratum of dimension $d$ is ambi-typical if $d\geq m(\Gamma)$ for its generic monodromy $\Gamma$. ###### Lemma 7.4. Let $\bar{\Gamma}$ be a subgroup of $\operatorname{PSL}(2,\mathbb{Z})$ which appears in one of the columns of Table 3 or Table 4 which have not been discarded and let $\Gamma$ be any lift of $\bar{\Gamma}$ in $\operatorname{SL}(2,\mathbb{Z})$. The invariants are listed in Table 5.
$\begin{array}[]{c|ccccccc}
\deg j_{\bar{\Gamma}}&2&3&4&9&6&12&18\\
e_{2}&0&1&0&1&0&0&0\\
e_{3}&2&0&1&0&0&0&0\\
\\#\operatorname{poles}\operatorname{of}j_{\bar{\Gamma}}&1&2&2&3&3&4&5\\
m(\Gamma)&\leq 6&\leq 10&\leq 6&\leq 2&\leq 6&\leq 2&\leq 1
\end{array}$
Table 5. Maximal dimension of $\Gamma$-strata ###### Proof.
The first row in the table simply lists the possible degrees for $j_{\bar{\Gamma}}$ which occur in Table 3 or Table 4. The second and third rows are also copied from these tables. The number of poles of $j_{\bar{\Gamma}}$ can be computed via Riemann-Hurwitz and equals $2+\deg j_{\bar{\Gamma}}/6-2e_{3}/3-e_{2}/2$. Thus it remains to prove the upper bound for the dimension of the monodromy strata. The dimension of a stratum is equal to the dimension of the open dense irreducible subset of elliptic surfaces sharing the generic configuration of singular fibres. In Lemma 5.3 the dimension augmented by $6$ is given as the sum of the number of singular fibres and the number of $*$-fibres. Thus (7.3) $6+\dim\quad\leq\quad s_{\infty}+s_{0}+s_{1}+2s^{*}$ where $s_{\infty}$, $s_{0},s_{1}$ are the numbers of singular fibres with $j=\infty,0,1$ resp. and $s^{*}$ is the number of $*$-fibres, which is also an upper bound for the number of singular fibres with $j\not\in\\{0,1,\infty\\}$. On the other hand the Euler sum gives the bound (7.4) $24\geq\deg j_{\mathcal{E}}\deg j_{\bar{\Gamma}}+2s_{0}+3s_{1}+6s^{*}$ as follows immediately from Table 1. Indeed, there are further restrictions on $s_{\infty},s_{0},s_{1}$ due to the map $j_{\bar{\Gamma}}$: 1. (1) $s_{\infty}$ is the number of poles of the $j$-invariant and thus bounded above by $\deg j_{\mathcal{E}}$ times the number of poles of $j_{\bar{\Gamma}}$. 2. (2) $s_{0}$ is the number of points with local $j$-expansion $s^{k}$, $k$ not a multiple of $3$, so $s_{0}=0$ if $e_{3}=0$. 3. (3) $s_{1}$ is the number of points with local $j$-expansion $1+s^{k}$, $k$ odd, so $s_{1}=0$ if $e_{2}=0$. We shall now give the proof in the exemplary case of index $9$. The other cases can be argued similarly.
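Before turning to the index-$9$ computation, we note that all bounds in Table 5 can be recovered mechanically by maximising $s_{\infty}+s_{0}+s_{1}+2s^{*}-6$ over the constraints (7.3), (7.4) and (1)-(3). A brute-force sketch, assuming exactly the stated constraints (names are ours):

```python
from fractions import Fraction

def poles(deg, e2, e3):
    # number of poles of j_Gamma via Riemann-Hurwitz:
    # 2 + deg/6 - 2*e3/3 - e2/2
    n = 2 + Fraction(deg, 6) - Fraction(2 * e3, 3) - Fraction(e2, 2)
    assert n.denominator == 1
    return int(n)

def dim_bound(deg, e2, e3):
    """Maximise s_inf + s0 + s1 + 2*s_star - 6 subject to
    24 >= deg * deg_e + 2*s0 + 3*s1 + 6*s_star,
    s_inf <= deg_e * poles(deg, e2, e3),
    s0 = 0 if e3 = 0, and s1 = 0 if e2 = 0."""
    n_poles = poles(deg, e2, e3)
    best = -1
    for deg_e in range(1, 24 // deg + 1):
        budget = 24 - deg * deg_e
        for s_star in range(budget // 6 + 1):
            left = budget - 6 * s_star
            for s1 in ([0] if e2 == 0 else range(left // 3 + 1)):
                s0 = 0 if e3 == 0 else (left - 3 * s1) // 2
                best = max(best, deg_e * n_poles + s0 + s1 + 2 * s_star - 6)
    return best

# rows of Table 5: deg -> (e2, e3, expected #poles, expected bound m)
TABLE5 = {2: (0, 2, 1, 6), 3: (1, 0, 2, 10), 4: (0, 1, 2, 6),
          9: (1, 0, 3, 2), 6: (0, 0, 3, 6), 12: (0, 0, 4, 2),
          18: (0, 0, 5, 1)}
```

For every row the maximum of this relaxation coincides with the bound listed in Table 5; the index-$9$ case reproduces $m(\Gamma)\leq 2$ obtained by hand below.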
From (7.4) and using $s_{0}\geq 0$ we find $24\geq\deg j_{\bar{\Gamma}}\deg j_{\mathcal{E}}+3s_{1}+6s^{*}.$ Since $j_{\bar{\Gamma}}$ has degree $9$ and $3$ poles, this implies $24\geq 9\frac{s_{\infty}}{3}+3s_{1}+6s^{*}$ or equivalently $8\geq s_{\infty}+s_{1}+2s^{*}.$ Since $e_{3}=0$ in this case, and hence $s_{0}=0$, it then follows immediately from (7.3) that $m(\Gamma)\leq 2$. ∎ In the case of low index we have ample information on the two factors in the factorisation $j(\mathcal{E})=j_{\bar{\Gamma}}\circ j_{\mathcal{E}}$. Firstly, since we know the groups $\bar{\Gamma}$ explicitly we can in fact also write down the maps $j_{\bar{\Gamma}}$ explicitly, as listed in Table 6.
$\begin{array}[]{rclrl}
\bar{\Gamma}(2)&:&j_{\bar{\Gamma}}=\frac{(z^{2}+3)^{3}}{z^{2}(z^{2}-9)^{2}}&=&1+\frac{27(z^{2}-1)^{2}}{z^{2}(z^{2}-9)^{2}}\\
\bar{\Gamma}_{1}(4)&:&j_{\bar{\Gamma}}=\frac{4(z^{2}-4z+1)^{3}}{27z(z-4)}&=&1+\frac{(z-2)^{2}(2z^{2}-8z-1)^{2}}{27z(z-4)}\\
\bar{\Gamma}_{1}(3)&:&j_{\bar{\Gamma}}=\frac{z(z+8)^{3}}{64(z-1)^{3}}&=&1+\frac{(z^{2}-20z-8)^{2}}{64(z-1)^{3}}\\
\bar{\Gamma}_{1}(2)&:&j_{\bar{\Gamma}}=\frac{(z+3)^{3}}{27(z-1)^{2}}&=&1+\frac{z(z-9)^{2}}{27(z-1)^{2}}\\
\bar{\Gamma}^{2}&:&j_{\bar{\Gamma}}=\frac{4z}{(z+1)^{2}}&=&1-\frac{(z-1)^{2}}{(z+1)^{2}}
\end{array}$
Table 6. $j$-functions There are various different explicit formulae in the literature, e.g. [FK, MS], since coordinates on the domain can be chosen arbitrarily. Indeed, to check our formulae it suffices to check their degrees and their branching over $0,1$ and $\infty$ which are determined by the multiplicity sequences of the two numerators and the denominator respectively.
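The five rows of Table 6 are identities of rational functions whose numerators and denominators have degree at most $6$, so checking them at sufficiently many rational points proves them. A sketch with exact arithmetic (the sample points are our choice, avoiding all poles):

```python
from fractions import Fraction as F

# rows of Table 6: (j as a function of z, the claimed expression for j - 1)
TABLE6 = [
    (lambda z: (z**2 + 3)**3 / (z**2 * (z**2 - 9)**2),               # Gamma(2)
     lambda z: 27 * (z**2 - 1)**2 / (z**2 * (z**2 - 9)**2)),
    (lambda z: 4 * (z**2 - 4*z + 1)**3 / (27 * z * (z - 4)),         # Gamma_1(4)
     lambda z: (z - 2)**2 * (2*z**2 - 8*z - 1)**2 / (27 * z * (z - 4))),
    (lambda z: z * (z + 8)**3 / (64 * (z - 1)**3),                   # Gamma_1(3)
     lambda z: (z**2 - 20*z - 8)**2 / (64 * (z - 1)**3)),
    (lambda z: (z + 3)**3 / (27 * (z - 1)**2),                       # Gamma_1(2)
     lambda z: z * (z - 9)**2 / (27 * (z - 1)**2)),
    (lambda z: 4 * z / (z + 1)**2,                                   # Gamma^2
     lambda z: -((z - 1)**2) / (z + 1)**2),
]

# both sides of a row share their denominator, so their difference has a
# numerator of degree <= 6; nine sample points are more than enough
SAMPLES = [F(k) for k in (2, 5, 6, 7, 10, 11, 13, 15, 17)]

def row_holds(j, j_minus_1):
    return all(j(z) - 1 == j_minus_1(z) for z in SAMPLES)
```

Agreement at more than six points forces the polynomial difference to vanish identically, so this exact spot-check proves each identity.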
With our choice of $z$-coordinate we note that:
(7.5) row 1: $z=0,\pm 3$ are the poles of $j_{\bar{\Gamma}}$, of pole order $2$ each
(7.6) row 2: $z=0,4,\infty$ are the poles of $j_{\bar{\Gamma}}$, of pole order $1,1,4$ respectively
(7.7) row 3: $e_{2}=0$ and $z=0$ is the only $3$-torsion point (non-critical point of value $0$)
(7.8) row 4: $e_{3}=0$ and $z=0$ is the only $2$-torsion point (non-critical point of value $1$)
(7.9) row 5: $e_{2}=0$ and $z=0,\infty$ are the $3$-torsion points mapping to $0$.
Secondly, information from Table 3 also allows us to obtain information about the map $j_{\mathcal{E}}$. In particular, we can list the possible degrees of this map as well as the branching behaviour over _special points_, namely the $3$-torsion and $2$-torsion points, see Lemma 6.4. Using the notation of [BPT] we will denote these by $A$ and $B$ respectively. So we can derive the number of pre-images of special points and their multiplicities. They are included in Table 7 as the tuple of multiplicities with index the point they map to. The last column of this table gives the most general polynomial expression for $j_{\mathcal{E}}$ fitting the branching data. Here $\alpha_{i},\beta_{i},\gamma_{i},\delta_{i}$ denote coprime homogeneous bivariate polynomials of degree $i$.
$\begin{array}[]{rcccc}
\bar{\Gamma}&\deg j_{\mathcal{E}}&\text{special point(s)}&\text{branch data}&j_{\mathcal{E}}\\
\bar{\Gamma}(2),\bar{\Gamma}_{1}(4)&4&-&-&\alpha_{4}:\beta_{4}\\
&3&-&-&\alpha_{3}:\beta_{3}\\
&2&-&-&\alpha_{2}:\beta_{2}\\
&1&-&-&\alpha_{1}:\beta_{1}\\
\bar{\Gamma}_{1}(3)&6&A=0&(3,3)_{A}&\alpha_{2}^{3}:\beta_{6}\\
&4&A=0&(3,1)_{A}&\alpha_{1}^{3}\gamma_{1}:\beta_{4}\\
&3&A=0&(3)_{A}&\alpha_{1}^{3}:\beta_{3}\\
&2&A=0&(1,1)_{A}&\gamma_{2}:\beta_{2}\\
\bar{\Gamma}_{1}(2)&8&B=0&(2,2,2,2)_{B}&\alpha_{4}^{2}:\beta_{8}\\
&6&B=0&(2,2,2)_{B}&\alpha_{3}^{2}:\beta_{6}\\
&4&B=0&(2,2)_{B}&\alpha_{2}^{2}:\beta_{4}\\
&2&B=0&(2)_{B}&\alpha_{1}^{2}:\beta_{2}\\
\bar{\Gamma}^{2}&4&A_{1}=0,A_{2}=\infty&(3,1)_{A_{1}},(3,1)_{A_{2}}&\alpha_{1}^{3}\gamma_{1}:\beta_{1}^{3}\delta_{1}
\end{array}$
Table 7. $j_{\mathcal{E}}$-functions for low index ## 8\. Families of Weierstraß data for low index In this section we will analyse the Weierstraß data of the Jacobian fibrations with modular monodromy of low index. We will also discuss the Mordell-Weil group in these cases. We start with the following table of Weierstraß data, whose relevance is that this includes all the Weierstraß data needed to describe the families from Section 7 of low monodromy index. Note that in this table we still assume the polynomials to have the degree given by their index, but _no_ assumption of coprimality is imposed.
$\begin{array}[]{l|c|l|l|l}\\#&\bar{\Gamma}&g_{2}&g_{3}&\Delta=g_{2}^{3}-27g_{3}^{2}\\\ &&&(\text{ up to constant })\\\ \hline\cr i)\parbox[b][14.22636pt][b]{0.0pt}{}&\bar{\Gamma}(2)&\alpha_{4}^{2}+3\beta_{4}^{2}&\beta_{4}(\alpha_{4}^{2}-\beta_{4}^{2})&\alpha_{4}^{2}(\alpha_{4}^{2}-9\beta_{4}^{2})^{2}\\\ ii)\parbox[b][14.22636pt][b]{0.0pt}{}&\bar{\Gamma}_{1}(4)&12(\alpha_{4}^{2}-4\alpha_{4}\beta_{4}+\beta_{4}^{2})&4(\alpha_{4}-2\beta_{4})(2\alpha_{4}^{2}-8\alpha_{4}\beta_{4}-\beta_{4}^{2})&\alpha_{4}(\alpha_{4}-4\beta_{4})\beta_{4}^{4}\\\ iii)\parbox[b][14.22636pt][b]{0.0pt}{}&\bar{\Gamma}_{1}(3)&3\alpha_{2}(\alpha_{2}^{3}+8\beta_{6})&\alpha_{2}^{6}-20\alpha_{2}^{3}\beta_{6}-8\beta_{6}^{2}&(\alpha_{2}^{3}-\beta_{6})^{3}\beta_{6}\\\ iv)\parbox[b][14.22636pt][b]{0.0pt}{}&\bar{\Gamma}_{1}(3)&3\alpha_{1}\gamma_{2}^{2}(\alpha_{1}^{3}+8\beta_{3})&\gamma_{2}^{3}(\alpha_{1}^{6}-20\alpha_{1}^{3}\beta_{3}-8\beta_{3}^{2})&\gamma_{2}^{6}(\alpha_{1}^{3}-\beta_{3})^{3}\beta_{3}\\\ v)\parbox[b][14.22636pt][b]{0.0pt}{}&\bar{\Gamma}_{1}(2)&3\alpha_{4}^{2}+9\beta_{8}&\alpha_{4}(\alpha_{4}^{2}-9\beta_{8})&(\alpha_{4}^{2}-\beta_{8})^{2}\beta_{8}\\\ vi)\parbox[b][14.22636pt][b]{0.0pt}{}&\bar{\Gamma}^{2}&-12\alpha_{1}\beta_{1}\gamma_{1}^{3}\delta_{1}^{3}&4\gamma_{1}^{4}\delta_{1}^{4}(\alpha_{1}^{3}\gamma_{1}-\beta_{1}^{3}\delta_{1})&(\alpha_{1}^{3}\gamma_{1}+\beta_{1}^{3}\delta_{1})^{2}\gamma_{1}^{8}\delta_{1}^{8}\end{array}$ Table 8. Weierstraß families ###### Proposition 8.1. Every elliptic surface with data as in Table 7 has Weierstraß datum in one of the families in Table 8. The correspondence between the monodromy group and the Weierstraß data is given by: $\begin{array}[]{rclrclrcl}i)&:&\bar{\Gamma}(2)&iii)&:&\bar{\Gamma}_{1}(3),\deg j_{\mathcal{E}}\equiv_{2}0&v)&:&\bar{\Gamma}_{1}(2)\\\\[5.69054pt] ii)&:&\bar{\Gamma}_{1}(4)&iv)&:&\bar{\Gamma}_{1}(3),\deg j_{\mathcal{E}}=3&vi)&:&\bar{\Gamma}^{2}.\end{array}$ ###### Proof. 
We have to show that for each row of Table 7 any elliptic surface $\mathcal{E}$ with the data provided by the row can be obtained by some suitable choice of Weierstraß datum from Table 8. Here we shall give the proof for $\mathcal{E}$ with modular monodromy $\bar{\Gamma}_{1}(3)$ which is the most subtle case. The other groups can be treated in an analogous way. We begin with the first corresponding row, so $\deg j_{\mathcal{E}}=6$. Composing the expressions for $j_{\bar{\Gamma}}$ and $j_{\mathcal{E}}$ from Table 6 and Table 7 respectively, we get $j(\mathcal{E})\quad=\quad j_{\bar{\Gamma}}\circ j_{\mathcal{E}}\quad=\quad\frac{\alpha_{2}^{3}(\alpha_{2}^{3}+8\beta_{6})^{3}}{64(\alpha_{2}^{3}-\beta_{6})^{3}\beta_{6}}\quad=\quad 1+\frac{(\alpha_{2}^{6}-20\alpha_{2}^{3}\beta_{6}-8\beta_{6}^{2})^{2}}{64(\alpha_{2}^{3}-\beta_{6})^{3}\beta_{6}}.$ We look at the general expression of the $j$-function in terms of Weierstraß data $j\quad=\quad\frac{g_{2}^{3}}{g_{2}^{3}-27g_{3}^{2}}\quad=\quad\frac{(g_{2}/3)^{3}}{(g_{2}/3)^{3}-g_{3}^{2}}\quad=\quad 1+\frac{g_{3}^{2}}{(g_{2}/3)^{3}-g_{3}^{2}}.$ If we plug in the same $\alpha_{2}$ and $\beta_{6}$ we get the identical expression for the $j$-invariant as above. Moreover the analysis of common factors of $g_{2},g_{3}$ and their multiplicities according to the Tate algorithm, see Table 2, gives no singular fibres except of type $I_{\nu}$ since $\alpha_{2}$ and $\beta_{6}$ are coprime by assumption $\deg j_{\mathcal{E}}=6$. Therefore $\mathcal{E}$ and the elliptic surface given by the Weierstraß datum share the functional and the homological invariant and hence are isomorphic. 
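The polynomial identity underlying this computation can be checked with a computer algebra system. The following sympy sketch (an illustration only, not part of the proof) verifies the discriminant of family $iii)$ from Table 8 and the two displayed expressions for the $j$-function:

```python
import sympy as sp

alpha2, beta6 = sp.symbols('alpha2 beta6')
a, b = alpha2**3, beta6

# Weierstrass datum of family iii) from Table 8
g2 = 3*alpha2*(a + 8*b)
g3 = a**2 - 20*a*b - 8*b**2

# discriminant factors as stated in Table 8, with explicit constant 1728 = 27*64
Delta = sp.expand(g2**3 - 27*g3**2)
assert sp.expand(Delta - 1728*(a - b)**3*b) == 0

# the j-function of the datum equals the composed map j_Gammabar o j_E ...
j = sp.cancel(g2**3 / Delta)
j_comp = sp.cancel(a*(a + 8*b)**3 / (64*(a - b)**3*b))
assert sp.simplify(j - j_comp) == 0

# ... and agrees with the equivalent "1 + ..." form used in the proof
assert sp.simplify(j_comp - 1 - g3**2/(64*(a - b)**3*b)) == 0
```

In effect this checks the single identity $\alpha_{2}^{3}(\alpha_{2}^{3}+8\beta_{6})^{3}-(\alpha_{2}^{6}-20\alpha_{2}^{3}\beta_{6}-8\beta_{6}^{2})^{2}=64\beta_{6}(\alpha_{2}^{3}-\beta_{6})^{3}$, from which both displayed formulas follow.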
Still in case $\bar{\Gamma}_{1}(3)$ but with $\deg j_{\mathcal{E}}=4$ we obtain from Table 6 and Table 7 that $j(\mathcal{E})\quad=\quad j_{\bar{\Gamma}}\circ j_{\mathcal{E}}\quad=\quad\frac{\alpha_{1}^{3}\gamma_{1}(\alpha_{1}^{3}\gamma_{1}+8\beta_{4})^{3}}{64(\alpha_{1}^{3}\gamma_{1}-\beta_{4})^{3}\beta_{4}}\quad=\quad 1+\frac{(\alpha_{1}^{6}\gamma_{1}^{2}-20\alpha_{1}^{3}\gamma_{1}\beta_{4}-8\beta_{4}^{2})^{2}}{64(\alpha_{1}^{3}\gamma_{1}-\beta_{4})^{3}\beta_{4}}$ This expression can not be obtained as easily. Indeed, we have to recall that we are allowed to plug in polynomials into the families which are _not_ necessarily coprime. Doing this with $\beta_{6}=\gamma_{1}^{2}\beta_{4}$ and $\alpha_{2}=\alpha_{1}\gamma_{1}$ in family $iii)$ we get $j\quad=\quad\frac{\alpha_{1}^{3}\gamma_{1}^{3}(\alpha_{1}^{3}\gamma_{1}^{3}+8\beta_{4}\gamma_{1}^{2})^{3}}{64(\alpha_{1}^{3}\gamma_{1}^{3}-\beta_{4}\gamma_{1}^{2})^{3}\beta_{4}\gamma_{1}^{2}}\quad=\quad 1+\frac{(\alpha_{1}^{6}\gamma_{1}^{6}-20\alpha_{1}^{3}\gamma_{1}^{5}\beta_{4}-8\gamma_{1}^{4}\beta_{4}^{2})^{2}}{64(\alpha_{1}^{3}\gamma_{1}^{3}-\beta_{4}\gamma_{1}^{2})^{3}\beta_{4}\gamma_{1}^{2}}.$ which is exactly the expression for $j(\mathcal{E})$ expanded by $\gamma_{1}^{8}$. The Weierstraß datum is thus $g_{2}=3\alpha_{1}\gamma_{1}^{3}(\alpha_{1}^{3}\gamma_{1}+8\beta_{4}),\quad g_{3}=\gamma_{1}^{4}(\alpha_{1}^{6}\gamma_{1}^{2}-20\alpha_{1}^{3}\gamma_{1}\beta_{4}-8\beta_{4}^{2}).$ The analysis with the Tate algorithm shows the existence of one $IV^{*}$ fibre and otherwise only singular fibres of type $I_{\nu}$ since $\alpha_{1}\gamma_{1}$ and $\beta_{4}$ are coprime and we may conclude again that $\mathcal{E}$ is isomorphic to a surface given by Weierstraß datum from family $iii)$. The case with $\bar{\Gamma}_{1}(3)$ and $\deg j_{\mathcal{E}}=2$ is very similar but with $\beta_{6}=\gamma_{2}^{2}\beta_{2}$ and $\alpha_{2}=\gamma_{2}$ sharing even a factor of degree two. 
The Weierstraß datum $g_{2}=3\gamma_{2}^{3}(\gamma_{2}+8\beta_{2}),\quad g_{3}=\gamma_{2}^{4}(\gamma_{2}^{2}-20\gamma_{2}\beta_{2}-8\beta_{2}^{2})$ defines then an elliptic surface sharing the functional and homological invariant with $\mathcal{E}$ again. This leaves us with $\bar{\Gamma}_{1}(3)$ and $\deg j_{\mathcal{E}}=3$. From Table 6 and Table 7 we get $j(\mathcal{E})\quad=\quad j_{\bar{\Gamma}}\circ j_{\mathcal{E}}\quad=\quad\frac{\alpha_{1}^{3}(\alpha_{1}^{3}+8\beta_{3})^{3}}{64(\alpha_{1}^{3}-\beta_{3})^{3}\beta_{3}}\quad=\quad 1+\frac{(\alpha_{1}^{6}-20\alpha_{1}^{3}\beta_{3}-8\beta_{3}^{2})^{2}}{64(\alpha_{1}^{3}-\beta_{3})^{3}\beta_{3}}.$ This time we choose to plug the coprime $\alpha_{1}$ and $\beta_{3}$ from an expression for $j(\mathcal{E})$ together with a $\gamma_{2}$ still to be determined into the family $iv)$ and obtain the $j$-function of this Weierstraß datum to be $j\quad=\quad\frac{\alpha_{1}^{3}\gamma_{2}^{6}(\alpha_{1}^{3}+8\beta_{3})^{3}}{64(\alpha_{1}^{3}-\beta_{3})^{3}\gamma_{2}^{6}\beta_{3}}\quad=\quad 1+\frac{\gamma_{2}^{6}(\alpha_{1}^{6}-20\alpha_{1}^{3}\beta_{3}-8\beta_{3}^{2})^{2}}{64(\alpha_{1}^{3}-\beta_{3})^{3}\gamma_{2}^{6}\beta_{3}}$ which is exactly the expression for $j(\mathcal{E})$ expanded by $\gamma_{2}^{6}$. Thus we get the $j$-invariant $j(\mathcal{E})$ with the Weierstraß datum $g_{2}=3\alpha_{1}\gamma_{2}^{2}(\alpha_{1}^{3}+8\beta_{3}),\quad g_{3}=\gamma_{2}^{3}(\alpha_{1}^{6}-20\alpha_{1}^{3}\beta_{3}-8\beta_{3}^{2}).$ Finally we choose $\gamma_{2}$ to vanish at the two $I_{0}^{*}$ fibres of $\mathcal{E}$. Then $\gamma_{2}$ is coprime to $\beta_{3}(\alpha_{1}^{3}-\beta_{3})$ since the $j$-invariant of an $I_{0}^{*}$ fibre is finite. The analysis of this datum with the Tate algorithm shows the existence of $I_{0}^{*}$ fibres precisely at the zeros of $\gamma_{2}$. Again we may conclude, since functional and homological invariant are shown to coincide. 
∎ We next determine for the generic members of each family whether $-\operatorname{id}\in\Gamma$ or not. The following lemma will be used as a tool to determine the Mordell Weil torsion from the monodromy group. In Section 7 we already introduced the principal congruence subgroup $\Gamma(n)$. As we said there, the modular curve $X^{0}(n)=\mathbb{H}_{1}/\Gamma(n)$ is the classifying space of elliptic curves with a level-$n$ structure. This carries a universal family if $n\geq 3$. If $n=2$, we no longer have a universal family of elliptic curves, but a universal Kummer family still exists. We denote by $X(n)$ the compactification of $X^{0}(n)$ which is obtained by adding the cusps, i.e. $X(n)=\overline{\mathbb{H}}_{1}/\Gamma(n)$. The universal family over $X^{0}(n)$ can be extended to $X(n)$; the extension is known as the Shioda modular surface. This has $n^{2}$ sections which restrict to the $n$-torsion points on the smooth fibres of the universal family. We had also introduced the group $\Gamma_{1}(n)$ and the curve $X_{1}^{0}(n)=\Gamma_{1}(n)\backslash\mathbb{H}$. This is the classifying space of elliptic curves with a fixed $n$-torsion point. As above we can compactify the curve $X^{0}_{1}(n)$ by adding the cusps and obtain a curve $X_{1}(n)$. If $m$ divides $n$ we consider the group $\Gamma_{m}(n):=\Gamma(m)\cap\Gamma_{1}(n).$ Obviously, $X^{0}_{m}(n):=\Gamma_{m}(n)\backslash\mathbb{H}$ parameterises elliptic curves with a level-$m$ structure and additionally an $n$-torsion point. Again by adding the cusps we obtain the compactification $X_{m}(n)$. The universal family over $X^{0}_{m}(n)$ extends to $X_{m}(n)$ and in addition to the sections giving the $m$-torsion points we have a distinguished section of order $n$ which restricts to the distinguished $n$-torsion point on the smooth fibres. ###### Lemma 8.2. Let $\mathcal{E}$ be a Jacobian fibration with monodromy group $\Gamma$ and assume that $m$ and $n$ are positive integers with $m\mid n$. 
Then the following are equivalent: 1. (1) $\Gamma$ is contained in $\Gamma_{m}(n)$ 2. (2) The Mordell-Weil group $\operatorname{MW}(\mathcal{E})$ contains a subgroup $\mathbb{Z}/n\mathbb{Z}\times\mathbb{Z}/m\mathbb{Z}$. ###### Proof. We shall first prove the implication from (1) to (2). For this let $U$ be the subset of the base $\mathbb{P}^{1}$ of $\mathcal{E}$ which is given by removing the points $j(\mathcal{E})^{-1}\\{0,1,\infty\\}$ and the base points of the singular fibres. We want to construct sections which form a subgroup $\mathbb{Z}/n\mathbb{Z}\times\mathbb{Z}/m\mathbb{Z}$ of $\operatorname{MW}(\mathcal{E})$. It is enough to do this over $U$ as the sections can then be extended to $\mathbb{P}^{1}$ (since the base has dimension $1$). We choose a base point $x_{0}\in U$. For each point $x\in U$ we can choose a small disc $U(x)$ such that $\mathcal{E}|_{U(x)}\cong\mathbb{C}\times U(x)/(\mathbb{Z}+\mathbb{Z}\tau_{x})$ where $\tau_{x}:U(x)\to\mathbb{H}_{1}$ is a local lift of the $j$-function $j(\mathcal{E})$ and $\mathbb{Z}+\mathbb{Z}\tau_{x}$ acts fibrewise on $\mathbb{C}\times U(x)$ by translation on $\mathbb{C}\times\\{t\\}$ with the lattice $\mathbb{Z}+\mathbb{Z}\tau_{x}(t)$. In particular, the fibre $\mathcal{E}_{x_{0}}\cong\mathbb{C}/(\mathbb{Z}+\mathbb{Z}\tau_{x_{0}}(x_{0}))$ where we have chosen a fixed lift $\tau_{x_{0}}$. Let $a,b\in\mathbb{Z}$ and $z_{0}=(a+b\tau_{x_{0}}(x_{0}))/\ell$ be an $\ell$-torsion point in $\mathcal{E}_{x_{0}}$ (where $\ell$ will become either $m$ or $n$). Using the above local uniformisation of $\mathcal{E}$ we can extend this to a local $\ell$-torsion section of $\mathcal{E}|_{U(x_{0})}$. We want to extend such a section to $U$. Given any point $y\in U$ we can choose a path $s:[0,1]\to U$ connecting $x_{0}$ with $y$. We can cover this path with finitely many open sets $U(x_{i}),i=0,\ldots,N$ with $x_{N}=y$ and choose local lifts $\tau_{{x_{i}}}$ with $\tau_{x_{i}}|_{U_{i}\cap U_{i+1}}=\tau_{x_{i+1}}|_{U_{i}\cap U_{i+1}}$, writing $U_{i}=U(x_{i})$. 
Then we can move the point $z_{0}$ along the path $s$ to an $\ell$-torsion point $z_{0}(y)\in\mathcal{E}_{y}$. Clearly, this will a priori depend on the chosen path $s$. The point $z_{0}(y)$ will be independent of this choice if all elements in the monodromy group $\Gamma(\mathcal{E})$ fix the class of $(a,b)$ in $(\mathbb{Z}/\ell\mathbb{Z})^{2}$ (where we have chosen a fixed representation $\Gamma(\mathcal{E})\to\operatorname{SL}(2,\mathbb{Z})$ and hence consider $\Gamma(\mathcal{E})$ as a subgroup of $\operatorname{SL}(2,\mathbb{Z})$). Given this observation, the claim now follows immediately from the definition of the group $\Gamma_{m}(n)$. The converse implication (2) to (1) follows by the same argument. ∎ As a consequence of the above discussion we can now obtain some first results on the torsion Mordell Weil group. ###### Lemma 8.3. Let $\mathcal{E}$ be a member of the family $iii)$. Then the following holds: * (1) $-\operatorname{id}\notin\Gamma(\mathcal{E})$, * (2) $\mathbb{Z}/3\mathbb{Z}\subset\operatorname{MW}(\mathcal{E})$. Moreover, for a generic element $\mathcal{E}$ the equality $\operatorname{MW}(\mathcal{E})=\mathbb{Z}/3\mathbb{Z}$ holds. ###### Proof. Consider the following Weierstraß datum of a rational elliptic surface $\bar{\mathcal{E}}$ in homogeneous coordinates $z_{1},z_{0}$ $\bar{g}_{2}=3z_{1}^{3}(z_{1}+8z_{0}),\quad\bar{g}_{3}=z_{1}^{4}(z_{1}^{2}-20z_{1}z_{0}-8z_{0}^{2}).$ A generic member $\mathcal{E}$ of family $iii)$ is given by coprime $\beta_{6}$ and $\alpha_{2}$. To compute the datum of the pull-back of $\bar{\mathcal{E}}$ along $j_{\mathcal{E}}$ we plug in $\alpha_{2}^{3}$ and $\beta_{6}$ into $z_{1}$ and $z_{0}$ resp. 
resulting in $g^{\prime}_{2}=3\alpha_{2}^{9}(\alpha_{2}^{3}+8\beta_{6}),\quad g^{\prime}_{3}=\alpha_{2}^{12}(\alpha_{2}^{6}-20\alpha_{2}^{3}\beta_{6}-8\beta_{6}^{2}).$ The proper Weierstraß datum of the normalised and possibly blown down pull-back is then $g_{2}=3\alpha_{2}(\alpha_{2}^{3}+8\beta_{6}),\quad g_{3}=\alpha_{2}^{6}-20\alpha_{2}^{3}\beta_{6}-8\beta_{6}^{2}$ which is exactly that of $\mathcal{E}$. By the Tate algorithm we find that the fibre configuration on the rational elliptic surface $\bar{\mathcal{E}}$ is $IV^{*},I_{1},I_{3}$. Due to the singular fibres the trivial lattice is $E_{6}+A_{2}$, which implies that the surface is no.69 in the list of Oguiso-Shioda [OS] and thus has Mordell Weil torsion $\mathbb{Z}/3$. In particular $-\operatorname{id}$ does not belong to the monodromy. Since torsion sections pull back to torsion sections, the torsion Mordell Weil group of $\mathcal{E}$ also contains $\mathbb{Z}/3$. Moreover, the monodromy group of $\mathcal{E}$ is generated by the monodromy elements of the rational fibration associated to loops liftable along $j_{\mathcal{E}}$. So the monodromy of $\mathcal{E}$ is contained in that of the rational fibration and hence does not contain $-\operatorname{id}$. Conversely, the generic $\mathcal{E}$ does not have torsion Mordell Weil properly containing $\mathbb{Z}/3$. Otherwise $j(\mathcal{E})$ would factor through $j_{\bar{\Gamma}}$ for some $\bar{\Gamma}$ properly contained in $\bar{\Gamma}_{1}(3)$. This contradicts $j_{\bar{\Gamma}}\circ j_{\mathcal{E}}$ being the canonical factorisation. In the above arguments we have used that $\mathcal{E}$ is a generic element of the family. It thus remains to prove that $-\operatorname{id}$ is never contained in the monodromy group and that $\mathbb{Z}/3$ is always contained in the torsion Mordell Weil group. But both follow by semicontinuity, since the monodromy group can only get smaller under specialization and the torsion Mordell Weil group can only get bigger. ∎ ###### Lemma 8.4. 
Let $\mathcal{E}$ be a member of family $ii)$. Then the following holds: * (1) If $\alpha_{4}$ or $\beta_{4}$ is a perfect square, then $-\operatorname{id}\notin\Gamma(\mathcal{E})$. * (2) In both cases the generic element has $\operatorname{MW}(\mathcal{E})=\mathbb{Z}/2\mathbb{Z}$ and $\operatorname{MW}(\mathcal{E})=\mathbb{Z}/4\mathbb{Z}$ respectively. ###### Proof. In both cases the elliptic fibration is a pull-back along $j_{\mathcal{E}}$ of degree four of a rational elliptic fibration whose fibre configuration is one of the following $I_{4}^{*}+2\ I_{1},\quad I_{1}^{*}+I_{4}+I_{1}$ and with two simple ramification points over the $*$-fibre. Again, it suffices to investigate the rational elliptic fibrations. The monodromy is freely generated by two elements, so $-\operatorname{id}$ does not belong to the monodromy, since it is torsion. The surfaces occur as no.s 64,72 in the list of Oguiso-Shioda [OS] and have Mordell Weil torsion $\mathbb{Z}/2$ and $\mathbb{Z}/4$ respectively. So via the pull-back these groups are contained in the Mordell Weil torsion. Equality holds for all $j_{\mathcal{E}}$ such that $j_{\bar{\Gamma}}\circ j_{\mathcal{E}}$ is the canonical factorisation, hence generically. ∎ ###### Lemma 8.5. If $\mathcal{E}$ is a generic member of either family $i)$ or $ii)$, then $-\operatorname{id}\in\Gamma(\mathcal{E})$. ###### Proof. Here we use the semicontinuity argument for the monodromy in the other direction. Since the group can only get smaller under specialisation, the generic monodromy for the family contains $-\operatorname{id}$, if it does so for some member. If $\alpha_{4}=\alpha_{3}\gamma_{1}$ and $\beta_{4}=\beta_{3}\gamma_{1}$ with $\alpha_{3},\beta_{3},\gamma_{1}$ pairwise coprime, then the Tate algorithm shows that for either family we get an $I_{0}^{*}$ fibre corresponding to $\gamma_{1}$. Hence $-\operatorname{id}$ is in the monodromy of such a special member. ∎ ## 9\. 
Classification for $j_{\bar{\Gamma}}$ of low degree We are now ready to classify the ambi-typical strata with $\bar{\Gamma}$ of index at most $6$. At the same time we determine the corresponding root lattices and Mordell Weil torsion. ###### Proposition 9.1. There is a unique ambi-typical stratum with modular monodromy $\bar{\Gamma}_{1}(2)$. Its monodromy group is $\Gamma_{1}(2)$. A generic element of this stratum has the following root lattice and Mordell Weil torsion: $8A_{1},\,\mathbb{Z}/2\mathbb{Z}.$ ###### Proof. Since $-\operatorname{id}$ is the only $2$-torsion element in $\operatorname{SL}(2,\mathbb{Z})$, there is no splitting map to $\operatorname{SL}(2,\mathbb{Z})$ from any subgroup of $\operatorname{PSL}(2,\mathbb{Z})$ containing $2$-torsion such as $\bar{\Gamma}_{1}(2)$. In particular, every $\bar{\Gamma}_{1}(2)$ monodromy stratum is actually a $\Gamma_{1}(2)$ monodromy stratum. With our explicit knowledge of $j_{\bar{\Gamma}}$ and $j_{\mathcal{E}}$ for the family $v)$ we can use the Tate algorithm and find that the generic fibre configuration is $8I_{1}+8I_{2}$. By Lemma 5.3 we find that the dimension of the configuration locus is $\dim L(8I_{1}+8I_{2})=10$. The corresponding generic root lattice is $8A_{1}$. The Shimada stratum with this root lattice and the saturation given by the generic element of family $v)$ likewise has dimension $10$. This stratum and family $v)$ therefore determine the same irreducible closed subset of $\mathcal{F}^{\prime}$. It is ambi-typical by the argument of (LABEL:dim2), since $\Gamma_{1}(2)$ belongs to the second column in Table 5 and thus $m(\Gamma_{1}(2))\leq 10$ by Lemma 7.4. Any other Shimada stratum with $\bar{\Gamma}_{1}(2)$ monodromy corresponds by Proposition 8.1 to a stratum in family $v)$ and must be of dimension less than $10$. 
But its monodromy group must still be $\Gamma_{1}(2)$, so it belongs to the monodromy stratum above, see (3.2), and cannot be a monodromy stratum of its own. By Lemma 8.2 the Mordell Weil torsion always contains $\mathbb{Z}/2\mathbb{Z}$ and for a generic element this is equal to $\mathbb{Z}/2\mathbb{Z}$. The proof is analogous to that in the proof of Lemma 8.3. ∎ ###### Proposition 9.2. There is a unique ambi-typical stratum with modular monodromy $\bar{\Gamma}(2)$ and monodromy group $\Gamma(2)$. A generic element of this stratum has the following root lattice and Mordell Weil torsion: $12A_{1},\,\mathbb{Z}/2\mathbb{Z}\times\mathbb{Z}/2\mathbb{Z}.$ ###### Proof. $\Gamma(2)$ is the pre-image of $\bar{\Gamma}(2)$ in $\operatorname{SL}(2,\mathbb{Z})$ and contains $-\operatorname{id}$, thus by Lemma 8.5 it is the monodromy group of the generic surface in family $i)$. We can now argue very much as in the proof of the previous proposition. The generic fibre configuration is $12I_{2}$ which gives a generic root lattice $12A_{1}$, so both the corresponding strata are of dimension $6$. On the other hand $\Gamma(2)$ belongs to Column 5 in Table 5. Hence $m(\Gamma(2))\leq 6$ by Lemma 7.4, and the irreducible closed subset of $\mathcal{F}^{\prime}$ corresponding to family $i)$ is ambi-typical by (LABEL:dim2). Any other Shimada stratum with $\Gamma(2)$ monodromy corresponds to a stratum in family $i)$ by Proposition 8.1 and must be of dimension less than $6$. Due to (3.2) it belongs to the monodromy stratum above, and can not be a monodromy stratum of its own. The claim about the Mordell Weil torsion again follows as in the previous proof. ∎ ###### Proposition 9.3. There is a unique ambi-typical stratum with modular monodromy $\bar{\Gamma}_{1}(4)$ and monodromy group $\Gamma_{0}(4)$. A generic element of this stratum has the following root lattice and Mordell Weil torsion: $4A_{3},\,\mathbb{Z}/2\mathbb{Z}.$ ###### Proof. 
$\Gamma_{0}(4)$ is the pre-image in $\operatorname{SL}(2,\mathbb{Z})$ of $\bar{\Gamma}_{1}(4)=\bar{\Gamma}_{0}(4)$ (see Remark 7.2), thus contains $-\operatorname{id}$ and – by Lemma 8.5 – is the monodromy group of the generic surface in family $ii)$. The argument then is as above, only applied to the family $ii)$. ∎ To complete the classification of monodromy strata for the two torsion-free subgroups of $\operatorname{PSL}(2,\mathbb{Z})$, we have to exploit the fact that for fibrations with smaller monodromy than the one containing $-\operatorname{id}$ the quotient map $\operatorname{SL}(2,\mathbb{Z})\to\operatorname{PSL}(2,\mathbb{Z})$ defines an isomorphism $\Gamma\cong\bar{\Gamma}$. ###### Proposition 9.4. There is a unique ambi-typical stratum with modular monodromy $\bar{\Gamma}(2)$ and monodromy group $\Gamma$ such that $\Gamma\to\bar{\Gamma}(2)$ is an isomorphism. The generic root lattice and Mordell Weil torsion are $2A_{3}+8A_{1},\,\mathbb{Z}/2\mathbb{Z}\times\mathbb{Z}/2\mathbb{Z}.$ ###### Remark 9.5. The group $\Gamma$ in the proposition can be described as $\Gamma=\left\\{\begin{pmatrix}a&b\\\ c&d\end{pmatrix}\Big{|}\quad b,c\equiv 0\mod 2,\quad a,d\equiv 1\mod 4\right\\}.$ ###### Proof. By Proposition 8.1 the surfaces of any stratum with constant $\Gamma$ monodromy are contained in family $i)$, where $j_{\bar{\Gamma}}$ by (7.5) of Table 6 has three poles at $0$ and $\pm 3$. The splitting isomorphism gives a map from $\pi_{1}^{orb}(\mathbb{H}_{1}/\bar{\Gamma})\cong\bar{\Gamma}$ to $\Gamma\subset\operatorname{SL}(2,\mathbb{Z})$ and $\mathbb{H}_{1}/\bar{\Gamma}$ is identified with the complement in $\mathbb{P}^{1}$ of these poles. 
The conjugacy class of $\pm(\begin{smallmatrix}1&2\\\ 0&1\end{smallmatrix})$ is associated, through $(j_{\bar{\Gamma}})_{*}$, to the positive loop at each puncture and corresponds to $2\in\mathbb{Z}/6\mathbb{Z}\cong\operatorname{PSL}(2,\mathbb{Z})_{ab}$ under the isomorphism which maps the class of $\pm(\begin{smallmatrix}1&1\\\ 0&1\end{smallmatrix})$ to $1$. Hence the splitting isomorphism associates to each loop the conjugacy class of $(\begin{smallmatrix}1&2\\\ 0&1\end{smallmatrix})$ or $(\begin{smallmatrix}-1&-2\\\ 0&-1\end{smallmatrix})$ corresponding to $2$ resp. $8$ in $\mathbb{Z}/{12\mathbb{Z}}=\operatorname{SL}(2,\mathbb{Z})_{ab}$. Now the sum of the three elements in $\operatorname{SL}(2,\mathbb{Z})_{ab}$ must be zero, since the sum in homology of the three loops is zero. Under the assumption of a splitting isomorphism we therefore can assume that the conjugacy class associated to the loop around $0$ contains $(\begin{smallmatrix}-1&-2\\\ 0&-1\end{smallmatrix})$ – possibly after applying a deck transformation. We now consider the map $j_{\mathcal{E}}=(\alpha_{4}:\beta_{4})$ and try to understand the possible fibres over pre-images of $0$. Such a pre-image corresponds to a linear factor $\gamma_{1}$ of $\alpha_{4}$ which may occur with multiplicity $1\leq k\leq 4$. It may occur in $\beta_{4}$ with multiplicity $l=0$ or $l=1$, as long as $k>l$. Indeed, in case $k\leq l$ the image would not be $0$, in case $k>l>1$ the Weierstraß coefficients $g_{2},g_{3}$ would share a common factor of $\gamma_{1}^{12}$ which is not allowed. In all possible combinations we can compute the local fibre type at the zero of $\gamma_{1}$ from the local monodromy. Recall that we made an assumption on the matrix associated to the loop around $0$ in the codomain of $j_{\mathcal{E}}$. 
Since the multiplicity of $j_{\mathcal{E}}$ is $k-l$ at the zero of $\gamma_{1}$ in the domain of $j_{\mathcal{E}}$, the matrix associated to the loop around this zero must be conjugate to $(\begin{smallmatrix}-1&-2\\\ 0&-1\end{smallmatrix})^{k-l}$. These monodromy matrices determine the local fibre type by Table 1 in the way recorded in the last row of Table 9. The other row gives the local fibre types determined by the Tate algorithm: The pole order of $j(\mathcal{E})$ is the product of the multiplicities of $j_{\mathcal{E}}$ at the zero of $\gamma_{1}$ and of $j_{\bar{\Gamma}}$ at $0$, hence $2(k-l)$. The vanishing orders of $g_{2},g_{3}$ at the zero of $\gamma_{1}$ are determined by row $i)$ in Table 8, so for $l=0$ they do not vanish simultaneously, and $\nu_{2}=2,\nu_{3}=3$ for $l=1$. $\begin{array}[]{c|c|c|c|c|c|c|c|}(k,l)&1,0&2,0&3,0&4,0&2,1&3,1&4,1\\\ \hline\cr\text{Tate}&I_{2}&I_{4}&I_{6}&I_{8}&I_{2}^{*}&I_{4}^{*}&I_{6}^{*}\\\ \text{cover}&I_{2}^{*}&I_{4}&I_{6}^{*}&I_{8}&I_{2}^{*}&I_{4}&I_{6}^{*}\end{array}$ Table 9. Comparisons of fibre type calculations Since local fibre types are uniquely determined, our assumption on the splitting implies that $k$ is even. Hence we may conclude, that $\alpha_{4}=\gamma_{2}^{2}$. This condition generically corresponds to a map $j_{\mathcal{E}}$ branched outside the other poles of $j_{\bar{\Gamma}}$. Therefore $j_{\mathcal{E}}$ induces a surjection on orbifold fundamental groups and thus the monodromy group is $\Gamma$. Generically there are $I_{4}$ fibres at the zeroes of $j_{\mathcal{E}}$ and an $I_{2}$ fibre at each of the four unramified pre-images of each of the other two poles $\pm 3$ of $j_{\bar{\Gamma}}$. This yields generic fibre type $2I_{4}+8I_{2}$ and generic root lattice $2A_{3}+8A_{1}$ with corresponding strata of dimension $4$. 
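This vanishing-order bookkeeping can be double-checked symbolically. The sketch below (assuming sympy is available; it is an illustration, not part of the proof) verifies the discriminant of family $i)$ from Table 8 and that, after specialising $\alpha_{4}=\gamma_{2}^{2}$, the discriminant vanishes to order exactly $4$ at a zero of $\gamma_{2}$ while $g_{2}$ and $g_{3}$ do not vanish there, which gives an $I_{4}$ fibre by the Tate algorithm:

```python
import sympy as sp

gamma2, beta4 = sp.symbols('gamma2 beta4')

# family i) datum from Table 8, specialised to alpha4 = gamma2^2
alpha4 = gamma2**2
g2 = alpha4**2 + 3*beta4**2
g3 = beta4*(alpha4**2 - beta4**2)

# discriminant as in Table 8 (up to constant): alpha4^2 (alpha4^2 - 9 beta4^2)^2
Delta = sp.expand(g2**3 - 27*g3**2)
assert sp.expand(Delta - alpha4**2*(alpha4**2 - 9*beta4**2)**2) == 0

# at gamma2 = 0 the Weierstrass coefficients stay nonzero ...
assert g2.subs(gamma2, 0) == 3*beta4**2
assert g3.subs(gamma2, 0) == -beta4**3

# ... while Delta vanishes to order exactly 4 there: an I_4 fibre
quotient = sp.cancel(Delta / gamma2**4)
assert quotient.subs(gamma2, 0) == 81*beta4**4
```

For generic $\beta_{4}$ the remaining factor $(\gamma_{2}^{4}-9\beta_{4}^{2})^{2}$ contributes the eight $I_{2}$ fibres.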
They determine the ambi-typical stratum of the claim: every stratum of root lattice type with constant $\Gamma$ monodromy was shown to belong to the closed set in family $i)$ defined by $\alpha_{4}=\gamma_{2}^{2}$, so by (3.2) this is the unique ambi-typical $\Gamma$ stratum. The assertion about the Mordell Weil torsion follows from Lemma 8.2 since the modular monodromy is $\bar{\Gamma}(2)$ and only contained in $\bar{\Gamma}_{m}(n)$ if $m,n$ divide $2$. ∎ ###### Remark 9.6. Note that the generic root lattice and the Mordell Weil torsion do not necessarily determine a unique Shimada stratum. In fact, Entry 95 in Table 3 of Shimada [Shi2] shows that there are two families with root lattice $2A_{3}+8A_{1}$ and Mordell Weil torsion $\mathbb{Z}/2\mathbb{Z}\times\mathbb{Z}/2\mathbb{Z}$, which can be distinguished _algebraically_. That – in the terminology of Shimada – means that the saturations of the root lattices are non-isomorphic for the two families. Shimada also gives a recipe for analysing such a situation in Section 6.1, and his Example 6.4, corresponding to number 91 in his table, is similar to our case in many aspects. By lattice theory the two possible saturations in case 95 correspond to two isotropic subgroups of the discriminant group of $2A_{3}+8A_{1}$ up to isometry. Shimada computes these and provides his result on his home page [Shi3]. The subgroup belonging to our family consists of the trivial element and the isotropic elements in $\operatorname{discr}(2A_{3}+8A_{1})=(\mathbb{Z}/4\mathbb{Z})^{2}\times(\mathbb{Z}/2\mathbb{Z})^{8}$ given by $(0,0,1,1,1,1,1,1,1,1),\quad(2,2,0,0,0,0,1,1,1,1),\quad(2,2,1,1,1,1,0,0,0,0),$ where a basis for the discriminant group is chosen in the obvious way. 
To prove that we are in that case requires a detailed analysis of the torsion sections and their intersections with fibre components. We shall not give the details here. ###### Proposition 9.7. There are two ambi-typical strata with modular monodromy $\bar{\Gamma}_{1}(4)$ and monodromy group $\Gamma$ such that $\Gamma\to\bar{\Gamma}_{1}(4)$ is an isomorphism. Their generic root lattice and Mordell Weil torsion are respectively * (1) $4A_{3}+2A_{1},\,\mathbb{Z}/4\mathbb{Z}$, * (2) $2A_{7},\,\mathbb{Z}/2\mathbb{Z}$. The two corresponding subgroups in $\operatorname{SL}(2,\mathbb{Z})$ are not conjugate. ###### Remark 9.8. The group is $\Gamma_{1}(4)$ in case $(1)$ and in case $(2)$ can be given by $\Gamma=\Big{\\{}\quad M\quad\Big{|}\quad M\equiv\begin{pmatrix}-1&1\\\ 0&-1\end{pmatrix}^{k}\mod 4,\quad k=0,1,2,3\Big{\\}}.$ ###### Proof. We argue as in the last proof, but with family $ii)$ and the associated map $j_{\bar{\Gamma}}$ which by (7.6) has poles at $0$, $4$ and $\infty$. The analysis of the splitting isomorphism again provides information about the monodromy at these poles. However, this time only the poles at $0$ and $4$ are equivalent under deck transformation, so we have to consider two cases instead of one 1. (1) the conjugacy class associated to the pole at $0$ contains $(\begin{smallmatrix}-1&-1\\\ 0&-1\end{smallmatrix})$, 2. (2) the conjugacy class associated to the pole at $\infty$ contains $(\begin{smallmatrix}-1&-4\\\ 0&-1\end{smallmatrix})$. In the first case, we apply the Tate algorithm to $j_{\mathcal{E}}=(\alpha_{4}:\beta_{4})$ and compare local monodromies at its zeroes. We infer $\alpha_{4}=\alpha_{2}^{2}$, then compute the generic fibre configuration $2I_{2}+4I_{1}+4I_{4}$, the generic root lattice $4A_{3}+2A_{1}$ and the dimension $4$ of the corresponding strata. In the second case we look at the poles instead and get $2I_{8}+8I_{1}$, $2A_{7}$ and again dimension 4. 
We can argue as before that these yield the ambi-typical strata of the claim and that there are no more. Lemma 8.4 provides the Mordell Weil torsion groups, showing in particular that the two subgroups are not conjugate. ∎ ###### Proposition 9.9. There is a unique ambi-typical stratum with modular monodromy $\bar{\Gamma}_{1}(3)$. Its monodromy group is $\Gamma_{1}(3)$. Its generic root lattice and Mordell Weil torsion are $6A_{2},\,\mathbb{Z}/3.$ ###### Proof. Any ambi-typical stratum with $\bar{\Gamma}_{1}(3)$ modular monodromy belongs to family $iii)$ or $iv)$. If it belongs to family $iii)$ then by Lemma 8.3, the Mordell Weil torsion contains $\mathbb{Z}/3$ and $-\operatorname{id}\not\in\Gamma$. By the former property and Lemma 8.2 family $iii)$ has monodromy contained in $\Gamma_{1}(3)$. Since $-\operatorname{id}\not\in\Gamma_{1}(3)$, no proper subgroup surjects onto $\bar{\Gamma}_{1}(3)$, thus $\Gamma_{1}(3)$ is the monodromy group for the generic surface in family $iii)$. We follow the proofs of Propositions 9.1 and 9.2 and get generic fibre configuration $6I_{3}+6I_{1}$, generic root lattice $6A_{2}$, corresponding strata dimension 6 and $m(\Gamma_{1}(3))\leq 6$ by Lemma 7.4. Using (LABEL:dim2) and (3.2) we conclude that family $iii)$ constitutes a unique ambi-typical stratum, with the invariants given in the claim. To address possible strata in family $iv)$ we look at the additional family with Weierstraß datum $g_{2}=3\alpha_{1}\alpha_{2}(\alpha_{1}^{3}\alpha_{2}+8\beta_{5}),\quad g_{3}=\alpha_{2}(\alpha_{1}^{6}\alpha_{2}^{2}-20\alpha_{1}^{3}\alpha_{2}\beta_{5}-8\beta_{5}^{2}),$ that has associated $j$-invariant $j\quad=\quad\frac{(g_{2}/3)^{3}}{(g_{2}/3)^{3}-g_{3}^{2}}\quad=\quad\frac{\alpha_{1}^{3}\alpha_{2}(\alpha_{1}^{3}\alpha_{2}+8\beta_{5})^{3}}{64(\alpha_{1}^{3}\alpha_{2}-\beta_{5})^{3}\beta_{5}^{2}}$ which is the composition of $j_{\bar{\Gamma}}$ for $\bar{\Gamma}_{1}(3)$ with $j_{\mathcal{E}}=\alpha_{1}^{3}\alpha_{2}/\beta_{5}$. 
It contains the family $iv)$ as a proper closed subset determined by the specialization to $\alpha_{2}=\gamma_{2},\beta_{5}=\gamma_{2}\beta_{3}$. A zero of $\gamma_{2}$ generically corresponds to a fibre of type $I_{0}^{*}$ so both have generic monodromy $\Gamma_{0}(3)$, the pre-image in $\operatorname{SL}(2,\mathbb{Z})$ of $\bar{\Gamma}_{1}(3)=\bar{\Gamma}_{0}(3)$. Repeating the argument above using (3.2) in particular, we deduce that no stratum of monodromy $\Gamma_{0}(3)$ is ambi-typical, except possibly that corresponding to the generic fibre configuration $2II+5I_{3}+5I_{1}$ in the additional family. However, by Proposition 5.5 the fibre $II$ may not occur in generic surfaces of an ambi-typical stratum. We are left to look for an ambi-typical stratum in family $iv)$ with $\Gamma\to\bar{\Gamma}_{1}(3)$ an isomorphism. The splitting isomorphism gives a map from $\pi_{1}^{orb}(\mathbb{H}/\Gamma)\cong\bar{\Gamma}$ to $\Gamma\subset\operatorname{SL}(2,\mathbb{Z})$. The positive loops around the $3$-torsion point and the poles of $j_{\bar{\Gamma}}$ of order $1$ and $3$ are mapped to $2$, $1$ and $3$ in $\mathbb{Z}/6=\operatorname{PSL}(2,\mathbb{Z})_{ab}$, so the splitting isomorphism associates to these loops the elements $2$ or $8$, $1$ or $7$, and $3$ or $9$ respectively in $\mathbb{Z}/{12}=\operatorname{SL}(2,\mathbb{Z})_{ab}$. In fact, we can be more precise. On the one hand $2$ can not be associated to a $3$-torsion point, since then the corresponding monodromy is conjugate to $(\begin{smallmatrix}1&1\\\ -1&0\end{smallmatrix})$ contradicting $-\operatorname{id}\not\in\Gamma$. On the other hand $1$ and $3$ can not be associated to the two poles. Otherwise the corresponding elements of $\Gamma$ are conjugate to powers of $(\begin{smallmatrix}1&1\\\ 0&1\end{smallmatrix})$ and so are all monodromies at poles of $j(\mathcal{E})$. Then the $I^{*}$ fibres – present according to Table 3 – are $I_{0}^{*}$ fibres contradicting again $-\operatorname{id}\not\in\Gamma$. 
We again use the fact that the sum of the three elements in $\operatorname{SL}(2,\mathbb{Z})_{ab}$ must be zero, since the sum in homology of the three loops is zero. Thus the elements must be $8,7$ and $9$, and $\mathcal{E}$ is a pullback (up to normalization and blow down) of an elliptic surface with singular fibre configuration $I_{1}^{*}+I_{3}^{*}+IV^{*}$ along $j_{\mathcal{E}}$. By Table 7 the map $j_{\mathcal{E}}$ has degree $3$ and ramification datum $(3)$ at the $3$-torsion point. By Table 3 the surface $\mathcal{E}$ has two $I^{*}$ fibres, so the ramification datum of $j_{\mathcal{E}}$ at the two poles must be $(2,1)$. But then $j_{\mathcal{E}}$ is unramified outside these points and therefore violates the maximality condition (6.5). ∎ ###### Proposition 9.10. There is no ambi-typical stratum with $\bar{\Gamma}=\bar{\Gamma}^{2}$. ###### Proof. We first assume that there is such a stratum with $\Gamma\to\bar{\Gamma}^{2}$ an isomorphism. By Lemma 8.1 we know that there is a corresponding generic surface $\mathcal{E}$ in the family $vi)$. The splitting isomorphism gives a map from $\pi_{1}^{orb}(\mathbb{H}/\Gamma)\cong\bar{\Gamma}$ to $\Gamma\subset\operatorname{SL}(2,\mathbb{Z})$. The positive loops around both $3$-torsion points and the pole of $j_{\bar{\Gamma}}$ are mapped to $2\in\mathbb{Z}/6=\operatorname{PSL}(2,\mathbb{Z})_{ab}$, so the splitting isomorphism associates to each loop either $2$ or $8$ in $\mathbb{Z}/{12}=\operatorname{SL}(2,\mathbb{Z})_{ab}$. However, $2$ cannot be associated to a $3$-torsion point, since then the corresponding monodromy is conjugate to $(\begin{smallmatrix}1&1\\\ -1&0\end{smallmatrix})$, contradicting $-\operatorname{id}\not\in\Gamma$. Again we use the fact that the sum of the three elements in $\operatorname{SL}(2,\mathbb{Z})_{ab}$ must be zero.
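The exclusion of the value $2$ rests on an elementary matrix computation: the candidate monodromy $(\begin{smallmatrix}1&1\\ -1&0\end{smallmatrix})$ has order $6$ in $\operatorname{SL}(2,\mathbb{Z})$ and its cube is $-\operatorname{id}$, so any subgroup containing a conjugate of it contains $-\operatorname{id}$. A quick numerical check (a sketch, using NumPy only for integer matrix powers):

```python
import numpy as np

M = np.array([[1, 1], [-1, 0]])
I = np.eye(2, dtype=int)

# M^3 = -id, so M has order 6; a subgroup containing a conjugate
# of M therefore contains -id
assert (np.linalg.matrix_power(M, 3) == -I).all()
assert (np.linalg.matrix_power(M, 6) == I).all()
```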
Thus all elements must be $8$ and $\mathcal{E}$ is a pullback (up to normalization and blow down) of an elliptic surface with singular fibre configuration $I_{2}^{*}+2IV^{*}$ along $j_{\mathcal{E}}$. By Table 7 the map $j_{\mathcal{E}}$ has degree $4$ and ramification datum $(3,1)$ at the two $3$-torsion points. By Table 3 the surface $\mathcal{E}$ has no $I^{*}$ fibre, so the ramification datum of $j_{\mathcal{E}}$ at the pole must be $(2,2)$. But then $j_{\mathcal{E}}$ is unramified outside these points and therefore violates the maximality condition (6.5). To show that there is also no stratum with $-\operatorname{id}\in\Gamma$, we look at the new family with Weierstraß datum given by $g_{2}=-12\alpha_{1}\alpha_{3}\beta_{1}\beta_{3},\quad g_{3}=4\alpha_{3}\beta_{3}(\alpha_{1}^{3}\alpha_{3}-\beta_{1}^{3}\beta_{3})$ that has associated $j$-invariant $j\quad=\quad\frac{(g_{2}/3)^{3}}{(g_{2}/3)^{3}-g_{3}^{2}}\quad=\quad\frac{4\alpha_{3}\beta_{3}(\alpha_{1}\beta_{1})^{3}}{(\alpha_{1}^{3}\alpha_{3}+\beta_{1}^{3}\beta_{3})^{2}}$ which is the composition of $j_{\bar{\Gamma}}$ for $\bar{\Gamma}^{2}$ with $j_{\mathcal{E}}=(\alpha_{1}^{3}\alpha_{3}:\beta_{1}^{3}\beta_{3})$. It contains the family $vi)$ as a proper closed subset determined by the specialization to $\alpha_{3}=\gamma_{1}^{2}\delta_{1},\beta_{3}=\gamma_{1}\delta_{1}^{2}$, so both have the same generic monodromy $\Gamma$. Repeating the argument above, using in particular (3.2), we deduce that no stratum of that monodromy is ambi-typical, except possibly that corresponding to the generic fibre configuration $6II+6I_{2}$ of the new family. However, again by Proposition 5.5, the fibre $II$ may not occur in generic surfaces of an ambi-typical stratum. ∎ ## 10\. Classification in the high index cases In this section we will complete the classification of ambi-typical strata of high index.
More precisely, we will classify the subgroups $\bar{\Gamma}$ of $\operatorname{PSL}(2,\mathbb{Z})$ of index at least $9$ for which an ambi-typical stratum exists and give a description of the possible monodromy groups $\Gamma$ in $\operatorname{SL}(2,\mathbb{Z})$ which cover $\bar{\Gamma}$. ### 10.1. Uniqueness of strata Given a group $\bar{\Gamma}$ we shall first determine the possible groups $\Gamma$ and prove the uniqueness of the $\Gamma$ ambi-typical strata. Moreover, we determine the root lattices, $j_{\mathcal{E}}$ branch data and the number of $*$-fibres of their generic members in terms of the ramification data of $j_{\bar{\Gamma}}$. In fact $j_{\mathcal{E}}$ will be of degree $2$ with branching in a $2$-torsion point and a non-torsion point, $(2)_{B},2$, of degree $2$ with branching in two non-torsion points, $2^{2}$, or of degree $1$ with no branching, $1$. The actual list of these ramification data, and thus the list of possible groups $\bar{\Gamma}$, is postponed to the next subsection, as is the computation of the corresponding Mordell-Weil torsion. ###### Proposition 10.1. Suppose $\bar{\Gamma}$ has index $9$ in $\operatorname{PSL}(2,\mathbb{Z})$. Any such subgroup defines at most one ambi-typical stratum and in this case the monodromy group $\Gamma$ is the pre-image of $\bar{\Gamma}$ in $\operatorname{SL}(2,\mathbb{Z})$. The generic invariants, root lattice, $j_{\mathcal{E}}$ ramification and number of $*$-fibres are $D_{4}+2A_{i_{1}}+\dots+2A_{i_{k}},\qquad(2)_{B},2,\qquad 1,$ with $k$ the number of poles of $j_{\bar{\Gamma}}$ of order at least two, $i_{1}+1,\dots,i_{k}+1$ their orders and root lattice rank $4+2i_{1}+\dots+2i_{k}=16$. ###### Proof. By Column $4$ of Table 4 the map $j_{\mathcal{E}}$ is a double cover and we have $e_{2}=1$ and $e_{3}=0$. So $\bar{\Gamma}$ contains $2$-torsion and has no splitting map to $\operatorname{SL}(2,\mathbb{Z})$, since $-\operatorname{id}$ is the only $2$-torsion element in $\operatorname{SL}(2,\mathbb{Z})$.
In particular, every $\bar{\Gamma}$ monodromy stratum is actually a $\Gamma$ monodromy stratum with $\Gamma$ the pre-image. The map $j_{\mathcal{E}}$ is branched over the point corresponding to the conjugacy class of $2$-torsion elements according to Lemma 6.4(1). The family of such coverings is $1$-dimensional and irreducible. Moreover, it follows from Proposition 5.7 that the $I^{*}$-fibre in the generic member of the stratum must be $I_{0}^{*}$. Therefore we get a $2$-dimensional irreducible family of elliptic surfaces with monodromy $\Gamma$, where the second parameter is the position of the $I_{0}^{*}$-fibre. If we have an ambi-typical stratum the rank of the root lattice must be $16$. The generic fibre type consists of the $I_{0}^{*}$-fibre and the $I_{\nu}$-fibres corresponding to the pole orders of the generic $j$-map $j_{\bar{\Gamma}}\circ j_{\mathcal{E}}$, which coincide with the lengths of the ramification partition at infinity. Since $j_{\bar{\Gamma}}\circ j_{\mathcal{E}}$ is the canonical factorisation, the map $j_{\mathcal{E}}$ is a double cover not branched over $j_{\bar{\Gamma}}^{-1}(\infty)$, and the ramification partition is twice that of $j_{\bar{\Gamma}}$. We can now use the argument in (LABEL:dim2) to conclude that $\Gamma$ defines a monodromy stratum. This is the case since our family has varying moduli and is of maximal dimension by Column 4 of Table 5. ∎ ###### Proposition 10.2. Suppose $\bar{\Gamma}$ is of index $12$ in $\operatorname{PSL}(2,\mathbb{Z})$. Let $k$ be the number of poles of $j_{\bar{\Gamma}}$ of order at least two and $i_{1}+1,\dots,i_{k}+1$ their orders. If $\bar{\Gamma}$ gives rise to an ambi-typical stratum then it is torsion free and there are two possibilities: 1. (1) there is a unique ambi-typical $\Gamma$ stratum with $\Gamma\to\bar{\Gamma}$ an isomorphism. The generic invariants, root lattice of rank $16$, $j_{\mathcal{E}}$ ramification and number of $*$-fibres are $2A_{i_{1}}+\dots+2A_{i_{k}},\qquad 2^{2},\qquad 0,$ 2.
(2) there is a unique ambi-typical $\Gamma$ stratum with $\Gamma\subset\operatorname{SL}(2,\mathbb{Z})$ the pre-image of $\bar{\Gamma}$. The generic invariants, root lattice of rank $16$, $j_{\mathcal{E}}$ ramification and number of $*$-fibres are $2D_{4}+A_{i_{1}}+\dots+A_{i_{k}},\qquad 1,\qquad 2.$ ###### Proof. By the hypothesis on $\bar{\Gamma}$ we are in the case of either Column $6$ or Column $7$ of Table 4, and we already know that $\bar{\Gamma}$ is torsion free by Proposition 7.1(2). In the case of Column $6$ of Table 4 the map $j_{\mathcal{E}}$ is an isomorphism. Moreover the two $I^{*}$-fibres in the general member of the stratum must be of type $I_{0}^{*}$ according to Proposition 5.7, since otherwise the corresponding surface would be rigid. Therefore we get a $2$-dimensional irreducible family of elliptic surfaces with monodromy $\Gamma$ containing $-\operatorname{id}$, where the parameters are the positions of the $I^{*}$-fibres, given by a reduced divisor of degree two on $\mathbb{P}^{1}$. The generic fibre type consists of the $I_{0}^{*}$-fibres and the $I_{\nu}$-fibres corresponding to the poles of the generic $j$-map $j_{\bar{\Gamma}}\circ j_{\mathcal{E}}$. Since $j_{\mathcal{E}}$ is an isomorphism, the pole orders are those of $j_{\bar{\Gamma}}$. In the case of Column $7$ of Table 4 the map $j_{\mathcal{E}}$ is a double cover. It is branched generically at two points, since Lemma 6.4 imposes no additional conditions. The family of such coverings is $2$-dimensional and irreducible. The generic fibre type consists of the $I_{\nu}$-fibres corresponding to the pole orders of the generic $j$-map $j_{\bar{\Gamma}}\circ j_{\mathcal{E}}$, which coincide with the lengths of the ramification partition at infinity. Since $j_{\bar{\Gamma}}\circ j_{\mathcal{E}}$ is the canonical factorisation, the map $j_{\mathcal{E}}$ is a double cover not branched over $j_{\bar{\Gamma}}^{-1}(\infty)$, and the ramification partition is twice that of $j_{\bar{\Gamma}}$.
By Column 6 of Table 5 our family is not contained in a monodromy stratum of larger dimension. Since the dimension of both families is two, the rank of the root lattice must be $16$. ∎ ###### Proposition 10.3. Suppose $\bar{\Gamma}$ is of index $18$ in $\operatorname{PSL}(2,\mathbb{Z})$. If $\bar{\Gamma}$ is associated to an ambi-typical stratum, then it is torsion free and the monodromy group $\Gamma$ is the pre-image of $\bar{\Gamma}$ in $\operatorname{SL}(2,\mathbb{Z})$. The generic invariants, root lattice of rank $17$, $j_{\mathcal{E}}$ ramification and number of $*$-fibres are $D_{4}+A_{i_{1}}+\dots+A_{i_{k}},\qquad 1,\qquad 1,$ with $k$ the number of poles of $j_{\bar{\Gamma}}$ of order at least two, and $i_{1}+1,\dots,i_{k}+1$ their orders. ###### Proof. By Proposition 7.1(1) we know that $\bar{\Gamma}$ is torsion free. We also note that the $I^{*}$-fibre in the general member of the stratum must be of type $I_{0}^{*}$, since $j_{\mathcal{E}}$ is of degree $1$ by Column $9$ of Table 4; thus the positive dimensionality of the stratum must be due to the moduli of a movable $I_{0}^{*}$-fibre. Accordingly $-\operatorname{id}\in\Gamma$ and $\Gamma$ is the pre-image in $\operatorname{SL}(2,\mathbb{Z})$ of $\bar{\Gamma}$. Therefore we get a $1$-dimensional irreducible family of elliptic surfaces with monodromy $\Gamma$, where the parameter is the position of the $I^{*}_{0}$-fibre; consequently the rank of the root lattice must be $17$. The generic fibre type consists of the $I_{0}^{*}$-fibre and the $I_{\nu}$-fibres corresponding to the poles of the generic $j$-map $j_{\bar{\Gamma}}\circ j_{\mathcal{E}}$. Since $j_{\mathcal{E}}$ is an isomorphism, the pole orders are those of $j_{\bar{\Gamma}}$. The property of being a monodromy stratum follows as before by (LABEL:dim2) and Table 5. ∎ ### 10.2. Classification of relevant subgroups in the high index cases In the previous subsection we have related the monodromy strata uniquely, resp.
in a two-to-one way, with subgroups of $\operatorname{PSL}(2,\mathbb{Z})$ in the high index case. So we have to classify these subgroups next. To this end we note that subgroups $\bar{\Gamma}$ are in bijective correspondence with maps $j_{\bar{\Gamma}}$, and such maps are in turn determined by triples of permutations $\mu_{0},\mu_{1},\mu_{\infty}$ whose product is the identity and which generate a group acting transitively on a set of $9$, $12$ or $18$ elements respectively. Since Condition (6.3) and the values of $e_{2},e_{3}$ impose restrictions on $\mu_{0}$ and $\mu_{1}$, we deduce: 1. (1) In the index $9$ case, $\mu_{0}$ has only $3$-cycles and $\mu_{1}$ only $2$-cycles except for one fixed point. 2. (2) In the torsion free cases, where the index is $12$ or $18$, $\mu_{0}$ has only $3$-cycles and $\mu_{1}$ only $2$-cycles. Moreover, concerning the fibres of type $I_{\nu},\nu>0$: 1. (3) The number of parts of $\mu_{\infty}$ is the number of poles, and the pole orders are the lengths of the parts and determine the corresponding fibre. ###### Proposition 10.4. In the index $9$ subcase there are $4$ relevant subgroups $\bar{\Gamma}$ of $\operatorname{PSL}(2,\mathbb{Z})$. These are in bijection with the following $4$ partitions of $\mu_{\infty}$ of the corresponding map $j_{\bar{\Gamma}}$: $(7,1,1),(6,2,1),(5,3,1),(4,3,2).$ This translates into the following root lattices: $D_{4}+2A_{6},D_{4}+2A_{5}+2A_{1},D_{4}+2A_{4}+2A_{2},D_{4}+2A_{3}+2A_{2}+2A_{1},$ with corresponding Mordell-Weil torsion $trivial,\quad\mathbb{Z}/2,\quad trivial,\quad\mathbb{Z}/2.$ ###### Proof. Since each relevant subgroup gives an irreducible stratum, their number is bounded above by the number of Shimada components which have root lattice as described in Proposition 10.1 above.
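Constraints (1)–(3) make checking a candidate triple mechanical. As a sketch (the composition is read so that $\mu_{\infty}\circ\mu_{1}\circ\mu_{0}=\operatorname{id}$, one of the two standard conventions; the permutation data is transcribed from (10.1) in the proof of Proposition 10.4 below):

```python
def perm(cycles, n=9):
    """Permutation on {1,...,n} from disjoint cycles, as a dict."""
    p = {i: i for i in range(1, n + 1)}
    for c in cycles:
        for i, x in enumerate(c):
            p[x] = c[(i + 1) % len(c)]
    return p

def cycle_type(p):
    """Cycle lengths of p, sorted decreasingly."""
    seen, ctype = set(), []
    for x in p:
        if x not in seen:
            length, y = 0, x
            while y not in seen:
                seen.add(y); y = p[y]; length += 1
            ctype.append(length)
    return sorted(ctype, reverse=True)

mu0 = perm([(1, 2, 3), (4, 5, 6), (7, 8, 9)])
mu1 = perm([(1, 4), (2, 7), (5, 6), (8, 9)])
mu_inf = perm([(1, 6, 4, 3, 2, 9, 7)])

# product relation: mu_inf o mu1 o mu0 = id
assert all(mu_inf[mu1[mu0[x]]] == x for x in range(1, 10))

# (1): mu0 has only 3-cycles, mu1 only 2-cycles plus one fixed point
assert cycle_type(mu0) == [3, 3, 3]
assert cycle_type(mu1) == [2, 2, 2, 2, 1]

# (3): the cycle type of mu_inf gives the pole orders, here (7,1,1)
assert cycle_type(mu_inf) == [7, 1, 1]

# transitivity of <mu0, mu1> on {1,...,9}
orbit, todo = {1}, [1]
while todo:
    x = todo.pop()
    for y in (mu0[x], mu1[x]):
        if y not in orbit:
            orbit.add(y); todo.append(y)
assert orbit == set(range(1, 10))
```

The same checks, run over all admissible pairs $(\mu_{0},\mu_{1})$, would carry out the exhaustive search mentioned in Remark 10.5.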
The list of Shimada published in the arXiv version [Shi1, http://arxiv.org/pdf/math/0505140.pdf] shows that there are four such root lattices, entries $2171,2179,2190$ and $2198$, which uniquely determine the Mordell-Weil torsion as claimed. They do not appear in the list of root lattices corresponding to multiple components, see Shimada [Shi2, Table II, p.38], therefore each of these contributes one component. To see that these Shimada strata are indeed ambi-typical it suffices to exhibit the four groups, which we do in terms of four tuples of permutations (10.1) $(123)(456)(789),\quad(14)(27)(56)(89),\quad(1643297)(5)(8)$ (10.2) $(123)(456)(789),\quad(14)(26)(57)(89),\quad(16)(259743)(8)$ (10.3) $(123)(456)(789),\quad(14)(25)(67)(89),\quad(16975)(243)(8)$ (10.4) $(123)(456)(789),\quad(14)(27)(59)(68),\quad(167)(2943)(58)$ where we use the correspondence from $(3)$ to $(1)$ in Remark 6.1. ∎ ###### Remark 10.5. One can give an alternative proof which avoids the use of Shimada’s list by performing an exhaustive search for all homomorphisms as in $(3)$ of Remark 6.1. This is straightforward though cumbersome, and we omit the details. In the index $12$ subcase we get, by Proposition 10.2, two monodromy strata for each relevant subgroup of $\operatorname{PSL}(2,\mathbb{Z})$. As it turns out, Beauville [Be] has classified exactly these torsion free subgroups in his study of semi-stable rational elliptic fibrations. Note that Beauville uses the notation $\Gamma_{0}^{0}(n)$ for our groups $\Gamma_{1}(n)$. ###### Proposition 10.6.
In the index $12$ subcase there are $6$ relevant subgroups of $\operatorname{PSL}(2,\mathbb{Z})$ with $j_{\bar{\Gamma}}$ in bijection with the partition corresponding to $\mu_{\infty}$ being one of $(9,1,1,1),(8,2,1,1),(6,3,2,1),(5,5,1,1),(4,4,2,2),(3,3,3,3).$ This claim is proved in [Be, p.658f], which is also used in the following proof. ###### Proposition 10.7. If $j_{\mathcal{E}}$ is generic of degree $2$, so that $\mathcal{E}$ is the pull-back along $j_{\mathcal{E}}$ of a rational modular elliptic surface without *-fibre, then the monodromy group $\Gamma$ is one of the following $\Gamma_{0}(9)\cap\Gamma_{1}(3),\quad\Gamma_{0}(8)\cap\Gamma_{1}(4),\quad\Gamma_{1}(6),\quad\Gamma_{1}(5),\quad\Gamma_{1}(4)\cap\Gamma(2),\quad\Gamma(3)$ with corresponding Mordell-Weil torsion $\mathbb{Z}/3,\quad\mathbb{Z}/4,\quad\mathbb{Z}/6,\quad\mathbb{Z}/5,\quad\mathbb{Z}/4\times\mathbb{Z}/2,\quad\mathbb{Z}/3\times\mathbb{Z}/3.$ ###### Proof. The list for the monodromy groups of the rational modular elliptic surfaces is taken from [Be]. Indeed, they do not change under generic pull-back, since that induces a surjection on fundamental groups of the complements of singular values. The claim on the Mordell-Weil torsion is then obtained using Lemma 8.2. It can also be verified by a check of the corresponding lines 2242, 2262, 2322, 2345, 2368, 2373 in [Shi1]. ∎ The other families of surfaces with torsion free modular monodromy of index $12$ are obtained from the Beauville elliptic surfaces, which are rational and rigid for deformations preserving the monodromy group, by replacing two smooth fibres by fibres of type $I_{0}^{*}$. This _generic quadratic twisting_ corresponds to multiplication of the Weierstraß datum by $\gamma_{2}^{2}$, respectively $\gamma_{2}^{3}$, where the zeroes of $\gamma_{2}$ avoid the singular fibres. Geometrically this means the following. We first take the double cover branched at the smooth fibres over the zeroes of $\gamma_{2}$.
This double cover is acted on by the deck transformation and the involution which is $-\operatorname{id}$ on each fibre. The quotient by the product of these two involutions gives the twisted surface after resolution of its singularities. The choice of $\gamma_{2}$, or equivalently the position of the $I_{0}^{*}$ fibres, gives the two moduli of these families. ###### Proposition 10.8. If $\mathcal{E}$ is a rational modular elliptic surface without *-fibre _twisted_ at two smooth fibres, then the monodromy group $\Gamma$ is one of the following $\Gamma_{0}(9),\quad\Gamma_{0}(8),\quad\Gamma_{0}(6),\quad\Gamma_{1}(5)\\{\pm\operatorname{id}\\},\quad\Gamma_{0}(4)\cap\Gamma(2),\quad\Gamma(3)\\{\pm\operatorname{id}\\}$ with corresponding Mordell-Weil torsion $trivial,\quad\mathbb{Z}/2,\quad\mathbb{Z}/2,\quad trivial,\quad\mathbb{Z}/2\times\mathbb{Z}/2,\quad trivial.$ ###### Proof. The monodromy groups are the groups $\Gamma\\{\pm\operatorname{id}\\}$ generated by the groups from [Be] and $-\operatorname{id}$, due to the presence of $I_{0}^{*}$ fibres. The claim on the Mordell-Weil torsion is then obtained with Lemma 8.2. It is also immediate by a check of the corresponding lines $2148-2153$ in [Shi1]. ∎ ###### Proposition 10.9. In the index $18$ subcase there are $26$ relevant subgroups of $\operatorname{PSL}(2,\mathbb{Z})$ corresponding to root lattices and Mordell-Weil torsion in the list of Shimada given in lines $2762-2786.$ The lattice appearing in line $2776$ gives rise to two different ambi-typical strata. ###### Remark 10.10. We remark without proof that the two components corresponding to line $2776$ are complex conjugate, as are the maps $j_{\bar{\Gamma}}$ for the corresponding modular monodromy groups. More precisely, this means that the corresponding maps $\mu:\pi_{1}(\mathbb{C}\setminus\\{0,1\\})\to S_{18}$ modulo conjugation in $S_{18}$ differ only by the automorphism of $\pi_{1}(\mathbb{C}\setminus\\{0,1\\})$ induced by complex conjugation. ###### Proof.
Since each relevant subgroup gives an irreducible stratum, their number is bounded above by the number of Shimada strata which have root lattice of rank $17$ of the form $D_{4}+A_{i_{1}}+\dots+A_{i_{k}}$ as in Proposition 10.3 above. The list of Shimada [Shi1] shows that there are $25$ such root lattices, and the component count of Shimada [Shi2] shows that each of these contributes one component, except for that of line 2776 where two components exist. To see that all these components are related to torsion free subgroups we provide the corresponding list of $26$ triples of monodromy permutations, together with a proof that the two triples corresponding to line $2776$ are not equal under conjugation. We give representatives for all orbits, with $\mu_{1}=(12)(34)(56)(78)(9\,10)(11\,12)(13\,14)(15\,16)(17\,18)$ and $\mu_{0},\mu_{\infty}$ in the following Table 10.

${\scriptsize\begin{array}[]{rlll}
1&(123)(567)(9\,10\,11)(13\,14\,15)(4\,8\,17)(12\,16\,18)&(1)(5)(9)(13)(2\,3\,17\,16\,14\,15\,12\,10\,11\,18\,8\,6\,7\,4)&14,1,1,1,1\\
2&(123)(567)(9\,10\,11)(4\,8\,13)(12\,15\,17)(14\,18\,16)&(1)(5)(9)(15\,18)(2\,3\,13\,16\,12\,10\,11\,17\,14\,8\,6\,7\,4)&13,2,1,1,1\\
3&(123)(567)(9\,10\,11)(4\,13\,18)(8\,15\,14)(12\,17\,16)&(1)(5)(9)(13\,15\,17)(2\,3\,18\,12\,10\,11\,16\,8\,6\,7\,14\,4)&12,3,1,1,1\\
4&(123)(458)(697)(10\,11\,14)(12\,15\,13)(16\,17\,18)&(1)(17)(5\,7)(11\,13)(2\,3\,8\,9\,14\,15\,18\,16\,12\,10\,6\,4)&12,2,2,1,1\\
5&(123)(567)(489)(10\,11\,13)(12\,17\,15)(14\,16\,18)&(1)(5)(16\,17)(11\,15\,14)(2\,3\,9\,13\,18\,12\,10\,8\,6\,7\,4)&11,3,2,1,1\\
6&(123)(567)(9\,10\,11)(4\,8\,14)(13\,15\,17)(12\,16\,18)&(1)(5)(9)(10\,11\,18\,15\,12)(2\,3\,14\,17\,16\,13\,8\,6\,7\,4)&10,5,1,1,1\\
7&(123)(567)(4\,9\,11)(8\,13\,10)(12\,15\,17)(14\,18\,16)&(1)(5)(15\,18)(9\,13\,16\,12)(2\,3\,11\,17\,14\,8\,6\,7\,10\,4)&10,4,2,1,1\\
8&(123)(567)(4\,9\,11)(8\,13\,15)(10\,16\,17)(12\,18\,14)&(1)(5)(9\,17\,12)(13\,18\,16)(2\,3\,11\,14\,8\,6\,7\,15\,10\,4)&10,3,3,1,1\\
9&(123)(457)(698)(10\,11\,13)(12\,17\,15)(14\,16\,18)&(1)(5\,8)(16\,17)(11\,15\,14)(2\,3\,7\,9\,13\,18\,12\,10\,6\,4)&10,3,2,2,1\\
10&(123)(567)(9\,10\,11)(4\,13\,18)(8\,14\,15)(12\,17\,16)&(1)(5)(9)(6\,7\,15\,17\,13\,8)(2\,3\,18\,12\,10\,11\,16\,14\,4)&9,6,1,1,1\\
11&(123)(567)(4\,9\,11)(10\,13\,12)(14\,15\,17)(8\,16\,18)&(1)(5)(9\,12)(6\,7\,18\,15\,8)(2\,3\,11\,13\,17\,16\,14\,10\,4)&9,5,2,1,1\\
12&(123)(457)(6\,9\,11)(8\,13\,15)(10\,17\,12)(14\,18\,16)&(1)(9\,12)(13\,16)(5\,11\,17\,14\,8)(2\,3\,7\,15\,18\,10\,6\,4)&8,5,2,2,1\\
13&(123)(457)(6\,9\,11)(8\,12\,13)(10\,15\,17)(14\,18\,16)&(1)(15\,18)(5\,11\,8)(9\,17\,14\,12)(2\,3\,7\,13\,16\,10\,6\,4)&8,4,3,2,1\\
14&(135)(274)(6\,8\,10)(9\,11\,13)(14\,15\,17)(12\,18\,16)&(1\,4)(15\,18)(3\,7\,6)(11\,16\,14)(2\,5\,10\,13\,17\,12\,9\,8)&8,3,3,2,2\\
15&(123)(567)(4\,9\,11)(8\,10\,14)(12\,16\,18)(13\,17\,15)&(1)(5)(16\,17)(2\,3\,11\,18\,13\,10\,4)(6\,7\,14\,15\,12\,9\,8)&7,7,2,1,1\\
16&(123)(567)(4\,11\,9)(8\,14\,10)(12\,16\,18)(13\,17\,15)&(1)(5)(16\,17)(2\,3\,9\,14\,15\,12\,4)(6\,7\,10\,11\,18\,13\,8)&7,7,2,1,1\\
17&(123)(567)(4\,9\,11)(8\,13\,15)(10\,14\,17)(12\,18\,16)&(1)(5)(9\,17\,12)(6\,7\,15\,18\,14\,8)(2\,3\,11\,16\,13\,10\,4)&7,6,3,1,1\\
18&(123)(457)(689)(10\,11\,13)(12\,15\,17)(14\,18\,16)&(1)(15\,18)(11\,17\,14)(2\,3\,7\,6\,4)(5\,9\,13\,16\,12\,10\,8)&7,5,3,2,1\\
19&(123)(457)(6\,9\,11)(8\,13\,15)(12\,17\,14)(10\,16\,18)&(1)(9\,18\,12)(13\,17\,16)(5\,11\,14\,8)(2\,3\,7\,15\,10\,6\,4)&7,4,3,3,1\\
20&(123)(567)(4\,9\,11)(8\,13\,15)(10\,17\,16)(12\,14\,18)&(1)(5)(9\,16\,13\,12)(2\,3\,11\,18\,10\,4)(6\,7\,15\,17\,14\,8)&6,6,4,1,1\\
21&(135)(486)(7\,9\,11)(10\,14\,12)(13\,15\,17)(2\,18\,16)&(3\,6)(9\,12)(15\,18)(1\,16\,13\,10\,7\,4)(2\,5\,8\,11\,14\,17)&6,6,2,2,2\\
22&(123)(567)(4\,9\,11)(8\,13\,15)(10\,12\,17)(14\,16\,18)&(1)(5)(2\,3\,11\,10\,4)(6\,7\,15\,14\,8)(9\,17\,16\,13\,18\,12)&6,5,5,1,1\\
23&(123)(457)(6\,9\,11)(8\,13\,10)(12\,15\,17)(14\,18\,16)&(1)(15\,18)(9\,13\,16\,12)(5\,11\,17\,14\,8)(2\,3\,7\,10\,6\,4)&6,5,4,2,1\\
24&(135)(2\,8\,10)(7\,11\,9)(4\,12\,14)(13\,15\,17)(6\,18\,16)&(8\,9)(15\,18)(1\,10\,11\,4)(3\,14\,17\,6)(2\,5\,16\,13\,12\,7)&6,4,4,2,2\\
25&(135)(274)(6\,9\,11)(8\,13\,15)(10\,16\,17)(12\,18\,14)&(1\,4)(9\,17\,12)(13\,18\,16)(2\,5\,11\,14\,8)(3\,7\,15\,10\,6)&5,5,3,3,2\\
26&(135)(279)(4\,11\,13)(6\,14\,15)(8\,16\,17)(10\,18\,12)&(3\,13\,6)(7\,17\,10)(1\,9\,12\,4)(2\,5\,15\,8)(11\,18\,16\,14)&4,4,4,3,3
\end{array}}$

Table 10. Monodromy data of $j_{\bar{\Gamma}}$

Supposing the factorizations in lines 15 and 16 are conjugate, there is a permutation $\sigma$ which commutes with $\mu_{1}$ and which conjugates $\mu_{0},\mu_{\infty}$ of line 15 to those of line 16, which for this proof we denote by $\rho_{0}$, $\rho_{\infty}$.
From this we deduce the conditions $\sigma(\mu^{k}_{*}(n))\,=\,\rho^{k}_{*}(\sigma(n))\qquad\text{for all }n,k\text{ and for }*=0,1,\infty.$ In particular we get for $1=\mu_{\infty}(1)$ $\sigma(1)=\sigma(\mu_{\infty}(1))\implies\sigma(1)=\rho_{\infty}(\sigma(1)).$ Thus $\sigma(1)$ is a fixed point of $\rho_{\infty}$ and hence either $1$ or $5$. Then for $2=\mu_{1}(1)$ we get $\sigma(2)=\rho_{1}(\sigma(1))\in\\{\rho_{1}(1),\rho_{1}(5)\\}=\\{2,6\\}.$ On the other hand the $2$-cycle $(16\,17)$ is unique in both $\mu_{\infty},\rho_{\infty}$, so $(\sigma(16),\sigma(17))=(16,17)\implies\sigma(17)\in\\{16,17\\}.$ Consequently we get for $18=\mu_{1}(17)$ that $\sigma(18)=\rho_{1}(\sigma(17))\in\\{\rho_{1}(16),\rho_{1}(17)\\}=\\{15,18\\}.$ But in this way we arrive at a contradiction, since $\sigma(\mu^{4}_{\infty}(2))=\sigma(18)\in\\{15,18\\}$ while $\rho^{4}_{\infty}(\sigma(2))\in\\{\rho^{4}_{\infty}(2),\rho^{4}_{\infty}(6)\\}=\\{14,11\\}.$ ∎ This finally concludes the proof of Theorems 4.2, 4.3 and 4.4. ### 10.3. Conclusion We obtain a list of 48 root lattices with uniquely associated torsion Mordell-Weil groups. The latter appear in Table 11 using the notation of Shimada [Shi1] with $[n]$ for $\mathbb{Z}/n$ and $[n,m]$ for $\mathbb{Z}/n\times\mathbb{Z}/m$. In addition, Table 11 gives the dimensions of the $50$ ambi-typical strata, the index of $\bar{\Gamma}$ in $\operatorname{PSL}(2,\mathbb{Z})$, the cardinality of the kernel of $\Gamma\to\bar{\Gamma}$ and the corresponding number in the list of Shimada. Although it is not needed in this paper, we also state for completeness which of the monodromy groups are congruence subgroups. This turns out to be the case if and only if the index is not divisible by $9$. For index $\leq 6$ this follows from [Wo, Theorem 5]. For index $9$ this follows from [CuPa]; alternatively one can give an independent argument using the amplitudes of the cusps.
Finally, for indices $12$ and $18$ the claim can be deduced from Sebbar’s classification [Se]. In Table 12 we list the branch behaviour of the maps $j_{\bar{\Gamma}}$ and $j_{\mathcal{E}}$.

# | root lattice | MW | $\dim$ | ind | $|\ker|$ | [Shi] # | Remarks
---|---|---|---|---|---|---|---
0 | | $[1]$ | 18 | 1 | 2 | | $\Gamma=\operatorname{SL}(2,\mathbb{Z})$
1 | $8A_{1}$ | $[2]$ | 10 | 3 | 2 | 99 | $\Gamma=\Gamma_{1}(2)$
2 | $6A_{2}$ | $[3]$ | 6 | 4 | 1 | 559 | $\Gamma=\Gamma_{1}(3)$
3 | $4A_{3}$ | $[2]$ | 6 | 6 | 2 | 547 | $\Gamma=\Gamma_{0}(4)$
4 | $12A_{1}$ | $[2,2]$ | 6 | 6 | 2 | 565 | $\Gamma=\Gamma(2)$
5 | $2A_{7}$ | $[2]$ | 4 | 6 | 1 | 1134 | $\bar{\Gamma}=\bar{\Gamma}_{0}(4)$, $-\operatorname{id}\not\in\Gamma$
6 | $2A_{3}+8A_{1}$ | $[2,2]$ | 4 | 6 | 1 | 1223 | $\bar{\Gamma}=\bar{\Gamma}(2)$, $-\operatorname{id}\not\in\Gamma$
7 | $4A_{3}+2A_{1}$ | $[4]$ | 4 | 6 | 1 | 1215 | $\Gamma=\Gamma_{1}(4)$
8 | $D_{4}+2A_{6}$ | $[1]$ | 2 | $9$ | 2 | 2171 | non-congruence
9 | $D_{4}+2A_{5}+2A_{1}$ | $[2]$ | 2 | $9$ | 2 | 2179 | ”
10 | $D_{4}+2A_{4}+2A_{2}$ | $[1]$ | 2 | $9$ | 2 | 2190 | ”
11 | $D_{4}+2A_{3}+2A_{2}+2A_{1}$ | $[2]$ | 2 | $9$ | 2 | 2198 | ”
12 | $2D_{4}+A_{8}$ | $[1]$ | 2 | $12$ | 2 | 2148 | twisted Beauville
13 | $2D_{4}+A_{7}+A_{1}$ | $[2]$ | 2 | $12$ | 2 | 2149 | ”
14 | $2D_{4}+A_{5}+A_{2}+A_{1}$ | $[2]$ | 2 | $12$ | 2 | 2150 | ”
15 | $2D_{4}+2A_{4}$ | $[1]$ | 2 | $12$ | 2 | 2151 | ”
16 | $2D_{4}+2A_{3}+2A_{1}$ | $[2,2]$ | 2 | $12$ | 2 | 2152 | ”
17 | $2D_{4}+4A_{2}$ | $[1]$ | 2 | $12$ | 2 | 2153 | ”
18 | $2A_{8}$ | $[3]$ | 2 | $12$ | 1 | 2242 | 2:1 Beauville
19 | $2A_{7}+2A_{1}$ | $[4]$ | 2 | $12$ | 1 | 2262 | ”
20 | $2A_{5}+2A_{2}+2A_{1}$ | $[6]$ | 2 | $12$ | 1 | 2322 | ”
21 | $4A_{4}$ | $[5]$ | 2 | $12$ | 1 | 2345 | ”
22 | $4A_{3}+4A_{1}$ | $[4,2]$ | 2 | $12$ | 1 | 2368 | ”
23 | $8A_{2}$ | $[3,3]$ | 2 | $12$ | 1 | 2373 | ”
24 | $D_{4}+A_{13}$ | $[1]$ | 1 | $18$ | 2 | 2762 | non-congruence
25 | $D_{4}+A_{12}+A_{1}$ | $[1]$ | 1 | $18$ | 2 | 2763 | ”
26 | $D_{4}+A_{11}+A_{2}$ | $[2]$ | 1 | $18$ | 2 | 2764 | ”
27 | $D_{4}+A_{11}+2A_{1}$ | $[2]$ | 1 | $18$ | 2 | 2765 | ”
28 | $D_{4}+A_{10}+A_{2}+A_{1}$ | $[1]$ | 1 | $18$ | 2 | 2766 | ”
29 | $D_{4}+A_{9}+A_{4}$ | $[1]$ | 1 | $18$ | 2 | 2767 | ”
30 | $D_{4}+A_{9}+A_{3}+A_{1}$ | $[2]$ | 1 | $18$ | 2 | 2768 | ”
31 | $D_{4}+A_{9}+2A_{2}$ | $[1]$ | 1 | $18$ | 2 | 2769 | ”
32 | $D_{4}+A_{9}+A_{2}+2A_{1}$ | $[2]$ | 1 | $18$ | 2 | 2770 | ”
33 | $D_{4}+A_{8}+A_{5}$ | $[1]$ | 1 | $18$ | 2 | 2771 | ”
34 | $D_{4}+A_{8}+A_{4}+A_{1}$ | $[1]$ | 1 | $18$ | 2 | 2772 | ”
35 | $D_{4}+A_{7}+A_{4}+2A_{1}$ | $[2]$ | 1 | $18$ | 2 | 2773 | ”
36 | $D_{4}+A_{7}+A_{3}+A_{2}+A_{1}$ | $[2]$ | 1 | $18$ | 2 | 2774 | ”
37 | $D_{4}+A_{7}+2A_{2}+2A_{1}$ | $[2]$ | 1 | $18$ | 2 | 2775 | ”
38 | $D_{4}+2A_{6}+A_{1}$ | $[1]$ | 1 | $18$ | 2 | 2776 | ”
39 | $D_{4}+2A_{6}+A_{1}$ | $[1]$ | 1 | $18$ | 2 | 2776 | ”
40 | $D_{4}+A_{6}+A_{5}+A_{2}$ | $[1]$ | 1 | $18$ | 2 | 2777 | ”
41 | $D_{4}+A_{6}+A_{4}+A_{2}+A_{1}$ | $[1]$ | 1 | $18$ | 2 | 2778 | ”
42 | $D_{4}+A_{6}+A_{3}+2A_{2}$ | $[1]$ | 1 | $18$ | 2 | 2779 | ”
43 | $D_{4}+2A_{5}+A_{3}$ | $[2]$ | 1 | $18$ | 2 | 2780 | ”
44 | $D_{4}+2A_{5}+3A_{1}$ | $[2,2]$ | 1 | $18$ | 2 | 2781 | ”
45 | $D_{4}+A_{5}+2A_{4}$ | $[1]$ | 1 | $18$ | 2 | 2782 | ”
46 | $D_{4}+A_{5}+A_{4}+A_{3}+A_{1}$ | $[2]$ | 1 | $18$ | 2 | 2783 | ”
47 | $D_{4}+A_{5}+2A_{3}+2A_{1}$ | $[2,2]$ | 1 | $18$ | 2 | 2784 | ”
48 | $D_{4}+2A_{4}+2A_{2}+A_{1}$ | $[1]$ | 1 | $18$ | 2 | 2785 | ”
49 | $D_{4}+3A_{3}+2A_{2}$ | $[2]$ | 1 | $18$ | 2 | 2786 | ”

Table 11. Data of ambi-typical strata

# | $j_{\bar{\Gamma}}$ | $\bar{\Gamma}$ | $j_{\mathcal{E}}$ | $I_{0}^{*}$ fibres
---|---|---|---|---
0 | $1$ | $\operatorname{PSL}(2,\mathbb{Z})$ | $(3^{8})_{A}$, $(2^{12})_{B}$, $2^{18}$ | -
1 | $(3),(2,1),(2,1)$ | $\bar{\Gamma}_{1}(2)$ | $(2,2,2,2)_{B}$, $2^{10}$ | -
2 | $(3,1),(2,2),(3,1)$ | $\bar{\Gamma}_{1}(3)$ | $(3,3)_{A}$, $2^{6}$ | -
3 | $(3,3),(2,2,2),(4,1,1)$ | $\bar{\Gamma}_{1}(4)$ | $2^{6}$ | -
4 | $(3,3),(2,2,2),(2,2,2)$ | $\bar{\Gamma}(2)$ | $2^{6}$ | -
5 | $(3,3),(2,2,2),(4,1,1)$ | $\bar{\Gamma}_{1}(4)$ | $(2,2)_{4_{\infty}}$, $2^{4}$ | -
6 | $(3,3),(2,2,2),(2,2,2)$ | $\bar{\Gamma}(2)$ | $(2,2)_{2_{\infty}}$, $2^{6}$ | -
7 | $(3,3),(2,2,2),(4,1,1)$ | $\bar{\Gamma}_{1}(4)$ | $(2,2)_{1_{\infty}}$, $2^{6}$ | -
8 | $(3,3,3),(2,2,2,2,1),(7,1,1)$ | | $(2)_{B}$, $2$ | 1
9 | $(3,3,3),(2,2,2,2,1),(6,2,1)$ | | $(2)_{B}$, $2$ | 1
10 | $(3,3,3),(2,2,2,2,1),(5,3,1)$ | | $(2)_{B}$, $2$ | 1
11 | $(3,3,3),(2,2,2,2,1),(4,3,2)$ | | $(2)_{B}$, $2$ | 1
12 | $(3,3,3,3),(2,2,2,2,2,2),(9,1,1,1)$ | | $1$ | 2
13 | $(3,3,3,3),(2,2,2,2,2,2),(8,2,1,1)$ | | $1$ | 2
14 | $(3,3,3,3),(2,2,2,2,2,2),(6,3,2,1)$ | | $1$ | 2
15 | $(3,3,3,3),(2,2,2,2,2,2),(5,5,1,1)$ | | $1$ | 2
16 | $(3,3,3,3),(2,2,2,2,2,2),(4,4,2,2)$ | | $1$ | 2
17 | $(3,3,3,3),(2,2,2,2,2,2),(3,3,3,3)$ | | $1$ | 2
18 | $(3,3,3,3),(2,2,2,2,2,2),(9,1,1,1)$ | | $2^{2}$ | -
19 | $(3,3,3,3),(2,2,2,2,2,2),(8,2,1,1)$ | | $2^{2}$ | -
20 | $(3,3,3,3),(2,2,2,2,2,2),(6,3,2,1)$ | | $2^{2}$ | -
21 | $(3,3,3,3),(2,2,2,2,2,2),(5,5,1,1)$ | | $2^{2}$ | -
22 | $(3,3,3,3),(2,2,2,2,2,2),(4,4,2,2)$ | | $2^{2}$ | -
23 | $(3,3,3,3),(2,2,2,2,2,2),(3,3,3,3)$ | | $2^{2}$ | -
24 | $(3,3,3,3,3,3),(2,2,2,2,2,2,2,2,2),(14,1,1,1,1)$ | | $1$ | 1
25 | $(3,3,3,3,3,3),(2,2,2,2,2,2,2,2,2),(13,2,1,1,1)$ | | $1$ | 1
26 | $(3,3,3,3,3,3),(2,2,2,2,2,2,2,2,2),(12,3,1,1,1)$ | | $1$ | 1
27 | $(3,3,3,3,3,3),(2,2,2,2,2,2,2,2,2),(12,2,2,1,1)$ | | $1$ | 1
28 | $(3,3,3,3,3,3),(2,2,2,2,2,2,2,2,2),(11,3,2,1,1)$ | | $1$ | 1
29 | $(3,3,3,3,3,3),(2,2,2,2,2,2,2,2,2),(10,5,1,1,1)$ | | $1$ | 1
30 | $(3,3,3,3,3,3),(2,2,2,2,2,2,2,2,2),(10,4,2,1,1)$ | | $1$ | 1
31 | $(3,3,3,3,3,3),(2,2,2,2,2,2,2,2,2),(10,3,3,1,1)$ | | $1$ | 1
32 | $(3,3,3,3,3,3),(2,2,2,2,2,2,2,2,2),(10,3,2,2,1)$ | | $1$ | 1
33 | $(3,3,3,3,3,3),(2,2,2,2,2,2,2,2,2),(9,6,1,1,1)$ | | $1$ | 1
34 | $(3,3,3,3,3,3),(2,2,2,2,2,2,2,2,2),(9,5,2,1,1)$ | | $1$ | 1
35 | $(3,3,3,3,3,3),(2,2,2,2,2,2,2,2,2),(8,5,2,2,1)$ | | $1$ | 1
36 | $(3,3,3,3,3,3),(2,2,2,2,2,2,2,2,2),(8,4,3,2,1)$ | | $1$ | 1
37 | $(3,3,3,3,3,3),(2,2,2,2,2,2,2,2,2),(8,3,3,2,2)$ | | $1$ | 1
38 | $(3,3,3,3,3,3),(2,2,2,2,2,2,2,2,2),(7,7,2,1,1)$ | | $1$ | 1
39 | $(3,3,3,3,3,3),(2,2,2,2,2,2,2,2,2),(7,7,2,1,1)$ | | $1$ | 1
40 | $(3,3,3,3,3,3),(2,2,2,2,2,2,2,2,2),(7,6,3,1,1)$ | | $1$ | 1
41 | $(3,3,3,3,3,3),(2,2,2,2,2,2,2,2,2),(7,5,3,2,1)$ | | $1$ | 1
42 | $(3,3,3,3,3,3),(2,2,2,2,2,2,2,2,2),(7,4,3,3,1)$ | | $1$ | 1
43 | $(3,3,3,3,3,3),(2,2,2,2,2,2,2,2,2),(6,6,4,1,1)$ | | $1$ | 1
44 | $(3,3,3,3,3,3),(2,2,2,2,2,2,2,2,2),(6,6,2,2,2)$ | | $1$ | 1
45 | $(3,3,3,3,3,3),(2,2,2,2,2,2,2,2,2),(6,5,5,1,1)$ | | $1$ | 1
46 | $(3,3,3,3,3,3),(2,2,2,2,2,2,2,2,2),(6,5,4,2,1)$ | | $1$ | 1
47 | $(3,3,3,3,3,3),(2,2,2,2,2,2,2,2,2),(6,4,4,2,2)$ | | $1$ | 1
48 | $(3,3,3,3,3,3),(2,2,2,2,2,2,2,2,2),(5,5,3,3,2)$ | | $1$ | 1
49 | $(3,3,3,3,3,3),(2,2,2,2,2,2,2,2,2),(4,4,4,3,3)$ | | $1$ | 1

Table 12. $j$-factorisation data of ambi-typical strata

## 11\. Moduli spaces of lattice polarised $K3$ surfaces and Shimada strata

In this section we want to start our discussion of the precise relationship between moduli spaces of lattice polarised $K3$ surfaces and ambi-typical strata. For this we first recall some basic facts about lattice polarised $K3$ surfaces and their moduli spaces.
Our aim is to understand how many moduli spaces of lattice polarised $K3$ surfaces dominate a given ambi-typical stratum and to compute the degree of the finite map from a component of such a moduli space to the ambi-typical stratum it dominates. In Section 2 we encountered the following situation: we started with an elliptically fibred $K3$ surface $f:\mathcal{E}\to\mathbb{P}^{1}$ and a root lattice $R(\mathcal{E})$ whose saturation in the $K3$ lattice was denoted by $L(\mathcal{E})$. Then $M(\mathcal{E})=U+L(\mathcal{E})$ is a hyperbolic lattice contained in the Néron-Severi group of $\mathcal{E}$. This gives rise to a lattice polarization. To explain this in more detail, let $M$ be an even lattice of signature $(1,t)$ which admits a primitive embedding $\iota:M\to L_{K3}$ into the $K3$ lattice $L_{K3}$. By $T=T(\iota)=\iota(M)^{\perp}_{L_{K3}}\subset L_{K3}$ we denote the orthogonal complement of the image of $\iota$. The lattice $T$ has signature $(2,19-t)$. For the rest of this section we shall assume that the rank of $T$ is at least $3$ (which is the case in our situation). Then $T$ defines a type IV homogeneous domain $\Omega_{T}=\\{[x]\in\mathbb{P}(T\otimes\mathbb{C})\mid(x,x)=0,(x,\bar{x})>0\\}=\mathcal{D}_{T}\cup\mathcal{D}^{\prime}_{T}$ of dimension $19-t$ consisting of two connected components, namely $\mathcal{D}_{T}$ and $\mathcal{D}^{\prime}_{T}$. We also consider the real cone $C_{M}=\\{x\in M_{\mathbb{R}}\mid(x,x)>0\\}=C_{M}^{+}\cup C_{M}^{-}$ which again consists of two connected components, of which we choose one, say $C_{M}^{+}$. Removing from $C_{M}^{+}$ all hyperplanes orthogonal to the roots $\Delta_{M}=\\{d\in M\mid d^{2}=-2\\}$ subdivides $C_{M}^{+}$ into different connected components, the so-called Weyl chambers, of which we choose one and call it $C_{M}^{\operatorname{pol}}$.
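The chamber decomposition of $C_{M}^{+}$ can be made concrete in small examples: two vectors of the positive cone lie in the same Weyl chamber exactly when no root hyperplane separates them, i.e. when they pair with the same sign against every root. A hypothetical sketch for the rank-$3$ lattice $U+A_{1}$ (the Gram matrix and the finite search box for roots are our choices for illustration, not data from the paper):

```python
import itertools

# Gram matrix of M = U + A_1: a hyperbolic plane plus one (-2)-vector.
G = [[0, 1, 0],
     [1, 0, 0],
     [0, 0, -2]]

def form(x, y):
    """Bilinear form of M evaluated on integer vectors x, y."""
    return sum(G[i][j] * x[i] * y[j] for i in range(3) for j in range(3))

# Roots: vectors with (d, d) = -2, searched in a small coordinate box.
roots = [d for d in itertools.product(range(-2, 3), repeat=3)
         if form(d, d) == -2]

def chamber_signature(x):
    """Signs of (x, d) over all roots d.  Two vectors of the positive
    cone (away from the walls) lie in the same Weyl chamber iff their
    signatures agree: no root hyperplane separates them."""
    return tuple(1 if form(x, d) > 0 else -1 for d in roots)
```

For instance $(4,3,1)$ and $(5,4,1)$ share a chamber, while $(4,3,1)$ and $(4,3,-1)$ are separated by the wall of the root $(0,0,1)$; for an honest computation one would of course need all roots, not just those in a box, but in this example the box already contains every root that can separate the test vectors.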
An $(M,\iota)$-polarised $K3$ surface is a pair $(S,\tilde{\iota})$ where $\tilde{\iota}:M\to\operatorname{NS}(S)\subset H^{2}(S,\mathbb{Z})$ is a primitive embedding which is isomorphic to $\iota$ with respect to a suitable marking $\varphi:H^{2}(S,\mathbb{Z})\to L_{K3}$ and such that $\tilde{\iota}(C_{M}^{\operatorname{pol}})$ contains an ample class. In this case we call $\varphi:H^{2}(S,\mathbb{Z})\to L_{K3}$ an $M$-polarised marking. We call two $M$-polarised $K3$ surfaces $(S_{1},\tilde{\iota}_{1})$ and $(S_{2},\tilde{\iota}_{2})$ isomorphic if there is an isomorphism $f:S_{1}\to S_{2}$ with $f^{*}\circ\tilde{\iota}_{2}=\tilde{\iota}_{1}$. In order to describe the moduli space of $M$-polarised $K3$ surfaces we consider the group $\operatorname{O}(L_{K3},M,\iota):=\\{g\in O(L_{K3})\mid g|_{\iota(M)}=id_{\iota(M)}\\}$ and recall the definition of the stable orthogonal group $\operatorname{\widetilde{O}}(T):=\\{g\in O(T)\mid g|_{D(T)}=id_{D(T)}\\}$ consisting of all orthogonal transformations of $T$ which act trivially on the discriminant $D(T)$. It is well known, see [Nik, Corollary 1.5.2], that there is an isomorphism $\operatorname{O}(L_{K3},M,\iota)\cong\operatorname{\widetilde{O}}(T).$ The group $O(T)$, and hence also $\operatorname{\widetilde{O}}(T)$, acts properly discontinuously on $\Omega_{T}$ as well as on the open subset $\Omega_{T}^{\operatorname{pol}}=\Omega_{T}\setminus\bigcup_{d\in T,d^{2}=-2}(H_{d}\cap\Omega_{T})$ where $H_{d}=\langle d\rangle^{\perp}\subset\mathbb{P}(T\otimes{\mathbb{C}})$ is the hyperplane orthogonal to $d$. ###### Proposition 11.1. The quotient $\mathcal{N}^{a}_{M,\iota}=\Omega_{T}^{\operatorname{pol}}/\operatorname{\widetilde{O}}(T)$ is the moduli space of $(M,\iota)$-polarised $K3$ surfaces. ###### Proof. See [Do, Section 3] or [BHPV, p. 360]. ∎ It is sometimes also useful to consider a weakening of $(M,\iota)$-polarizations. Recall that a line bundle $\mathcal{L}$ is said to be a quasi-polarization if it is nef and big.
We say that $(S,\tilde{\iota})$ is an $(M,\iota)$-quasi-polarised $K3$ surface if $\tilde{\iota}(C_{M}^{\operatorname{pol}})$ contains a big and nef class. By [Do, Section 3] the quotient $\mathcal{N}_{M,\iota}=\Omega_{T}/\operatorname{\widetilde{O}}(T)$ is in $1:1$ correspondence with the set of isomorphism classes of $M$-quasi-polarised $K3$ surfaces. It can be viewed as the moduli space of $M$-polarized $K3$ surfaces with ADE singularities. This contains $\mathcal{N}^{a}_{M,\iota}$ as an open subset. At this point some remarks are in order. In the literature it is often tacitly assumed that the lattice $M$ has a unique primitive embedding into the $K3$ lattice. This assumption then justifies talking about the moduli space of $M$-polarised $K3$ surfaces. For us it will be important to also allow the possibility that $M$ possesses different primitive embeddings into the $K3$ lattice (the number of such embeddings modulo $\operatorname{O}(L_{K3})$ is always finite). We will then consider the union $\mathcal{N}_{M}=\cup_{\iota}\mathcal{N}_{M,\iota}$ where $\iota$ runs over all classes of different primitive embeddings of $M$ into $L_{K3}$. Moreover, Dolgachev has formulated arithmetic conditions which lead to the notion of $m$-admissible lattices. This means in particular that the dual lattice $T$ splits off a summand $U(m)$, i.e. a multiple of a hyperbolic plane. This is relevant for mirror symmetry and the discussion of the Yukawa coupling, but it plays no role for our purposes since the construction of the moduli space of lattice (quasi-)polarised $K3$ surfaces does not require this condition. In fact, a number of the lattices which we consider, in particular when $M$ has rank $19$, do not fulfill this condition, see the Appendix. In these cases the lattice $T$ does not split off a summand $U$ over $\mathbb{Q}$, resulting in compact moduli spaces of $M$-polarised $K3$ surfaces.
Finally, we recall that the case $M=U$ gives us another construction for the moduli space $\mathcal{F}$ of elliptically fibred $K3$ surfaces with a section, that is, of Jacobian fibrations. Since $\operatorname{O}(T)$ acts properly discontinuously on $\Omega_{T}$, the quotient $\mathcal{N}_{M,\iota}$ has at most finite quotient singularities. By a well-known result of Baily-Borel this is a quasi-projective variety. We also note that for small rank of the transcendental lattice $T$ there is a relation with Siegel spaces: if $t=18$ then $\mathcal{D}_{T}\cong\mathbb{H}_{1}$ is the upper half plane, if $t=17$, then $\mathcal{D}_{T}\cong\mathbb{H}_{1}\times\mathbb{H}_{1}$, and if $t=16$, then $\mathcal{D}_{T}\cong\mathbb{H}_{2}$, the Siegel upper half plane of genus $2$. We will discuss this in more detail in the case of $t=18$ in the Appendix. The quotient $\Omega_{T}/O(T)$ can have one or two components. If it has two components then these are complex conjugate to each other. An element $g\in\operatorname{O}(T)$ can either fix the two components $\mathcal{D}_{T}$ and $\mathcal{D}^{\prime}_{T}$ or interchange them, depending on its spinor norm. We recall that the real spinor norm is a homomorphism $\operatorname{sn}_{\mathbb{R}}:\operatorname{O}(T)\to\mathbb{R}^{*}/(\mathbb{R}^{*})^{2}=\\{\pm 1\\}.$ For a precise definition we refer the reader to [GHS1, Section 1]. Our normalization of the spinor norm is such that in the case of signature $(2,n)$, which is the one at hand, the transformation $g$ fixes the two components of $\Omega_{T}$ if and only if $\operatorname{sn}_{\mathbb{R}}(g)=1$ and interchanges them if and only if $\operatorname{sn}_{\mathbb{R}}(g)=-1$.
We define the groups $\operatorname{O}^{+}(T)=\\{g\in\operatorname{O}(T)\mid\operatorname{sn}_{\mathbb{R}}(g)=1\\}$ and $\operatorname{\widetilde{O}}^{+}(T)=\operatorname{O}^{+}(T)\cap\operatorname{\widetilde{O}}(T).$ The quotient $\Omega_{T}/\operatorname{\widetilde{O}}(T)$ then has two components if and only if $\operatorname{\widetilde{O}}(T)=\operatorname{\widetilde{O}}^{+}(T)$. We first want to discuss the number of moduli spaces of $M$-polarised $K3$ surfaces and the number of connected components. Clearly, there is only one such moduli space if there is a unique primitive embedding $\iota:M\to L_{K3}$ (up to $\operatorname{O}(L_{K3})$). In general, however, such an embedding need not be unique. Assume that there is at least one primitive embedding $\iota:M\to L_{K3}$ and let $T(\iota)=\iota(M)^{\perp}_{L_{K3}}$. Then $T(\iota)$ may depend on $\iota$, but its genus does not. It is determined by $\operatorname{sign}(T(\iota))=(2,20-\operatorname{rank}(M))\mbox{ and }(D(T(\iota)),q_{T(\iota)})\cong(D(M),-q_{M}).$ We call this the genus orthogonal to $M$ and denote it by $\mathcal{G}_{T}$. ###### Proposition 11.2. Let $T\in\mathcal{G}_{T}$ and let $\overline{{\operatorname{O}}}(T)$ be the image of $\operatorname{O}(T)$ in $\operatorname{O}(D(T))$. Then the index $[\operatorname{O}(D(T)):\overline{\operatorname{O}}(T)]$ depends only on the genus $\mathcal{G}_{T}$ of $T$ and not on $T$ itself. ###### Proof. This follows from the theory of Miranda and Morrison, in particular [MM, Chapter VIII, Proposition 6.1 (2)]. Here we recall that we are always in the situation that $T$ is indefinite since we assume in this section that it has rank at least $3$. ∎ It follows from the theory of Miranda-Morrison [MM, Theorem VIII.7.2], see also [Shi2, p.
514], that there is an exact sequence, which is completely determined by $M$, of the form (11.1) $0\to\operatorname{coker}(\operatorname{O}(T)\to\operatorname{O}(D(T)))\to\mathcal{M}_{T}\to\mathcal{G}_{T}\to 0$ where $\mathcal{M}_{T}$ is a finite group which is in $1:1$ correspondence with the primitive embeddings $\iota:M\to L_{K3}$ and thus with the moduli spaces of lattice polarised $K3$ surfaces with lattice polarization $M$. The following result is far less obvious. We will not need it for the lattices we are concerned with, as in our cases the genus always consists of one element only, but we state it to complete the picture. The proof was communicated to us by Simon Brandhorst; here we only give a sketch. ###### Proposition 11.3. The index $[\operatorname{\widetilde{O}}(T):\operatorname{\widetilde{O}}^{+}(T)]$ depends only on the genus $\mathcal{G}_{T}$ of $T$ and not on $T$ itself. ###### Proof. (Sketch) This is a consequence of the strong approximation theorem. It can be deduced from [MM, Proposition VIII.6.1 (2)] with extra bookkeeping of the real spinor norm, using the fact that (in the terminology of [MM]) the objects $\Gamma_{S}$, $\Sigma(T)$ and $\Sigma^{\sharp}(T)$ all depend only on the genus $\mathcal{G}_{T}$ and not on the lattice $T$ itself. ∎ As a corollary of the theory of Miranda and Morrison, in particular Sequence (11.1), together with Proposition 11.3 we thus obtain ###### Corollary 11.4. Let $M$ be a hyperbolic lattice which admits a primitive embedding into the $K3$-lattice $L_{K3}$ and let $\mathcal{G}_{T}$ be the genus of one, and hence any, orthogonal complement of such an embedding.
* (1) The number of moduli spaces of $M$-polarised $K3$ surfaces is given by $|\mathcal{M}_{T}|=[\operatorname{O}(D(T)):\overline{\operatorname{O}}(T)]\cdot|\mathcal{G}_{T}|.$ * (2) The number of connected components of these moduli spaces is given by $|\mathcal{M}_{T}|^{c}=[\operatorname{O}(D(T)):\overline{\operatorname{O}}(T)]\cdot|\mathcal{G}_{T}|\cdot\frac{2}{[\operatorname{\widetilde{O}}(T):\operatorname{\widetilde{O}}^{+}(T)]}.$ We shall now discuss which values these numbers can take in our cases. The relevant data for the ambi-typical strata are collected in Table 11. The lattice data consist first of a root lattice $R$ and an isotropic subgroup $G\subset D(R)$ in the discriminant group of $R$. This determines an overlattice $L$ of $R$ and the hyperbolic lattice $M=U+L$. We shall first prove that in our situation the genus $\mathcal{G}_{T}$ always consists of a single element. ###### Proposition 11.5. Let $M$ be a hyperbolic lattice associated to one of the families listed in Table 11. Then the genus given by the signature $(2,20-\operatorname{rank}(M))$ and discriminant group $(D(M),-q_{M})$ consists of one element only. ###### Proof. The claim for the entries (1), (2) and (3) of Table 11 follows immediately from the numerical conditions of Nikulin’s theorem [Nik, Theorem 1.14.2]. For the other lattices we shall use [CS, Chapter 15]. Since we are dealing with indefinite lattices, it suffices, according to [CS, Section 15.9.7], to show that there are no non-tractable primes (for a definition of non-tractable primes see [CS, Chapter 15.9.6]). Let $d$ be the discriminant of $M$ and let $n=22-\operatorname{rank}(M)$ be the rank of the orthogonal complement. According to [CS, Theorem 15.20] a necessary condition for an odd prime $p$ to be non-tractable is that $d\mbox{ is divisible by }p^{\binom{n}{2}}.$ Note that $n=8$ in the case of entry (4), i.e. Shimada’s case 565, and $n=4$ or $n=3$ in all other cases.
One can now check by hand that this condition is never fulfilled for the lattices coming from Table 11. In most cases this follows already from the discriminants of the root lattices $R$ in the fourth column of this table. However, in some cases one has to be more careful. An example is entry (23) which is Shimada’s family 2373. Here $n=4$ and we must not have divisors of $d$ of the form $p^{6}$. Now the lattice $8A_{2}$ has discriminant $3^{8}$. However, the lattice $L$ is an overlattice of $8A_{2}(-1)+U$ with torsion group $[3,3]$. But this means that the order of the discriminant group of $M$ is $3^{8}/3^{4}=3^{4}$ and hence $3$ is not a non-tractable prime. The other cases can be treated in the same way. Thus the only possible non-tractable prime is $p=2$. By [CS, Theorem 15.20] this implies that $4^{[\frac{n}{2}]}d\mbox{ is divisible by }8^{\binom{n}{2}}.$ Again, this can be checked by hand. For example in case 2784 one has $4^{[\frac{n}{2}]}d(D_{4}+A_{5}+2A_{3}+2A_{1}+U)=2^{11}\cdot 3$. However, taking the torsion into account we obtain that $4^{[\frac{n}{2}]}d=2^{7}\cdot 3$ which is not divisible by $8^{3}=2^{9}$. The other cases are similar. ∎ ###### Corollary 11.6. Let $M$ be a hyperbolic lattice associated to one of the families listed in Table 11. Then the orthogonal complement of a primitive embedding $\iota:M\to L_{K3}$ depends only on $M$ and not on the chosen embedding $\iota$. This corollary allows us to speak of the orthogonal complement $T$ of the lattice $M$ in the $K3$ lattice $L_{K3}$, even if the primitive embedding $\iota:M\to L_{K3}$ is not uniquely defined (which can occur). ###### Proposition 11.7. Let $M$ be a hyperbolic lattice associated to one of the families listed in Table 11 and let $T$ be the unique element in the genus orthogonal to $M$. 
Then the map $\operatorname{O}(T)\to\operatorname{O}(D(T))$ is always surjective with the exception of the following five root lattices $R$: * (17) $2D_{4}+4A_{2}$, * (21) $4A_{4}$, * (37) $D_{4}+A_{7}+2A_{2}+2A_{1}$, * (48) $D_{4}+2A_{4}+2A_{2}+A_{1}$, * (49) $D_{4}+3A_{3}+2A_{2}$. In these cases the index $[\operatorname{O}(D(T)):\overline{\operatorname{O}}(T)]=2$; in particular, there are exactly two non-isomorphic primitive embeddings of $M$ into $L_{K3}$. ###### Proof. For the cases (1), (2) and (3) from Table 11 the surjectivity of $\operatorname{O}(T)\to\operatorname{O}(D(T))$ can be seen directly by Nikulin’s criterion [Nik, Theorem 1.16.10]. In general this follows from computations of Shimada, which are available from his website, see [Shi4]. For the rank $17$ cases this follows independently from Kirschmer’s computations, see Table 14, Column 5 in the Appendix. ∎ It now follows that the number of connected components of the moduli spaces of $M$-polarised $K3$ surfaces, where $M$ is a lattice corresponding to an ambi-typical stratum, is either $1$, $2$ or $4$. It is at least $2$ in the cases listed in Proposition 11.7. To determine the exact number one has to compute the index $[\operatorname{\widetilde{O}}(T):\operatorname{\widetilde{O}}^{+}(T)]$. Indeed, all three possible cases occur, as the following example shows. ###### Example 11.8. Assume that the root lattice $R$ has rank $17$. Then the following holds: out of the $25$ rank $17$ lattices we have $|\mathcal{M}_{T}|^{c}=1$ in $18$ cases. In the four cases $D_{4}+A_{7}+A_{4}+2A_{1}$, $D_{4}+2A_{6}+A_{1}$, $D_{4}+A_{6}+A_{3}+2A_{2}$ and $D_{4}+2A_{5}+3A_{1}$ we have $|\mathcal{M}_{T}|=1$ and $[\operatorname{\widetilde{O}}(T):\operatorname{\widetilde{O}}^{+}(T)]=1$ and thus $|\mathcal{M}_{T}|^{c}=2$.
Finally, in the three cases $D_{4}+A_{7}+2A_{2}+2A_{1}$, $D_{4}+2A_{4}+2A_{2}+A_{1}$ and $D_{4}+3A_{3}+2A_{2}$ we have $|\mathcal{M}_{T}|=2$ and $[\operatorname{\widetilde{O}}(T):\operatorname{\widetilde{O}}^{+}(T)]=1$ and hence $|\mathcal{M}_{T}|^{c}=4$. This follows from Kirschmer’s computations, see Table 14 in the Appendix. Note that we have $2$ components if exactly one of the conditions $[\operatorname{\widetilde{O}}(T):\operatorname{\widetilde{O}}^{+}(T)]=1$ and $[\operatorname{O}(D(T)):\overline{\operatorname{O}}(T)]=2$ holds, and $4$ components if both hold. In all other cases we have $1$ component. ###### Remark 11.9. It is interesting to compare this with Shimada’s list of non-connected moduli of elliptic $K3$ surfaces, see [Shi2, Corollary 1.5 and Table II]. We first observe that the lattices which appear in Proposition 11.7 do not appear in Shimada’s list. The reason is that different moduli spaces of lattice-polarised $K3$ surfaces can lead to the same Shimada stratum. This is due to symmetries of the root lattice $R$, and we shall discuss this in the next section. On the other hand, there are three root lattices which appear in both Table 11 of our paper and Table 3 in [Shi2, p. 555-557]. These are $D_{4}+2A_{6}+A_{1}$, which gives our cases (38/39), as well as $2A_{3}+8A_{1}$ and $4A_{3}+2A_{1}$, which are our cases (6) and (7) respectively. In the case of $D_{4}+2A_{6}+A_{1}$ we have two complex conjugate components. The lattice $4A_{3}+2A_{1}$ appears in [Shi2, Table 3] in connection with the Mordell-Weil torsion $\mathbb{Z}/2\mathbb{Z}$. In this case one has two components. However, in our situation we have Mordell-Weil torsion $\mathbb{Z}/4\mathbb{Z}$ and this leads to one component only. Finally, in the case of $2A_{3}+8A_{1}$ the two components in Shimada’s list come from inequivalent isotropic subgroups $\mathbb{Z}/2\mathbb{Z}\times\mathbb{Z}/2\mathbb{Z}$ which correspond to different intersection behaviour of the torsion sections.
Only one of these cases leads to an ambi-typical stratum; see the discussion in Remark 9.6. ## 12\. The moduli maps The aim of this section is to investigate the precise relationship between moduli spaces of lattice-polarised $K3$ surfaces and ambi-typical strata. In particular we show that there is a finite map from certain moduli spaces of lattice polarised $K3$ surfaces to ambi-typical strata and compute the degree of this map. For this we consider an ambi-typical stratum (12.1) $\mathcal{A}=\overline{\mathcal{F}^{\prime}_{\tilde{\Gamma},i}}=\overline{\mathcal{M}^{\prime}_{R,G,{\iota}}}.$ As before we denote by $L$ the lattice defined by the isotropic group $G\subset D(R)$ and set $M=U+L$. Our first goal is to associate to such a given ambi-typical stratum certain moduli spaces of lattice-polarised $K3$ surfaces. First of all, given any surface $S$ in such a stratum, we identify the sublattice spanned by the section and the fibre with a fixed summand $U$ of the $K3$ lattice $L_{K3}$. More precisely, we first choose the standard basis $e,f$ of $U$ with $e^{2}=f^{2}=0$ and $e.f=1$ and identify the fibre class with $f$ and the section $s$ with $e-f$. This can be done once and for all, simultaneously for all surfaces in this stratum. Now choose a generic surface $S$ in $\overline{\mathcal{M}^{\prime}_{R,G,{\iota}}}$ and a marking $\varphi:H^{2}(S,\mathbb{Z})\to L_{K3}$ where the sublattice spanned by the fibre and the section is identified with the given summand $U$ as described above. The components of the singular fibres not meeting the section define a sublattice isomorphic to $R$ whose saturation in the $K3$ lattice is isomorphic to $L$. We note that the isomorphism with $R$ is not canonical, but depends on an identification of these components with a set of simple roots of $R$. There are finitely many ways of choosing such an identification.
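The finitely many identifications just mentioned are governed by the automorphisms of the Dynkin diagram of $R$: isomorphic components may be permuted among themselves, and each component contributes its own diagram symmetries ($2$ for $A_{n}$ with $n\geq 2$, trivial for $A_{1}$, $6$ for $D_{4}$, $2$ for $D_{n}$ with $n>4$ and for $E_{6}$). A hedged sketch of this standard multiplicity count (the encoding of ADE components as pairs is our own convention):

```python
from collections import Counter
from math import factorial

def single_aut(t, n):
    """Order of the diagram-automorphism group of one ADE component."""
    if t == 'A':
        return 1 if n == 1 else 2
    if t == 'D':
        return 6 if n == 4 else 2          # D_4 has triality (S_3)
    if t == 'E':
        return 2 if n == 6 else 1          # E_7, E_8 are asymmetric
    raise ValueError(f"unknown type {t!r}")

def diagram_aut_order(components):
    """|S_R| for R a sum of ADE lattices, given as (type, rank) pairs:
    the product over isomorphism classes of aut^k * k!, where k is the
    multiplicity of the class (equal copies may be permuted)."""
    total = 1
    for (t, n), k in Counter(components).items():
        total *= single_aut(t, n) ** k * factorial(k)
    return total
```

For example, $R=4A_{3}+2A_{1}$ from Table 11 gives $2^{4}\cdot 4!\cdot 2!=768$ diagram automorphisms, and $R=D_{4}+2A_{6}+A_{1}$ gives $6\cdot 2^{2}\cdot 2!=48$.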
Adding the sublattice spanned by the fibre and the section one obtains an embedding $\iota:M\to\operatorname{NS}(S)\subset H^{2}(S;\mathbb{Z})\cong L_{K3}$ and thus an element $(S,\iota)$ in a connected component $\mathcal{N}_{M,\ell}$ of the moduli space $\mathcal{N}_{M,\iota}$ of lattice polarised $K3$ surfaces. We shall call such a component, respectively the moduli space $\mathcal{N}_{M,\iota}$, associated to $\mathcal{A}$. Note that we do not claim that $\mathcal{N}_{M,\ell}$ or $\mathcal{N}_{M,\iota}$ are uniquely determined by the ambi-typical stratum, since there is no canonical way of identifying the fibre components with a root system. Nor do we claim that we obtain a morphism from (an open part of) the stratum $\mathcal{A}$ to such a moduli space $\mathcal{N}_{M,\iota}$. Indeed, we always have (at least) the ambiguity given by the symmetries $S_{R}$ of the Dynkin diagram associated to $R$. Changing the isomorphism by such an element can have two effects. One is that it defines different points in the same moduli space of $M$-lattice polarised $K3$ surfaces. The other is that it leads to different moduli spaces of lattice polarised $K3$ surfaces. Note, however, that two such moduli spaces of $M$-polarised $K3$ surfaces define the same Shimada stratum, as the combinatorial type of the singular fibres is the same. As we shall see, both cases occur. We shall now discuss this in detail. Before doing this we establish the following convention. Given a moduli space $\mathcal{N}_{M,\iota}$ of lattice polarised $K3$ surfaces and a $K3$ surface $S$ which appears in this moduli space, we always have a distinguished copy of $U$ in $\operatorname{NS}(S)$. This determines a Jacobian fibration $S\to\mathbb{P}^{1}$. Unless stated otherwise we will always work with this elliptic fibration. ###### Proposition 12.1.
The following holds: * (1) Given an ambi-typical stratum $\mathcal{A}$ there are only finitely many components of moduli spaces of lattice-polarised $K3$ surfaces associated to $\mathcal{A}$. * (2) Let $\mathcal{N}_{M,\ell}$ be a component of a moduli space $\mathcal{N}_{M,\iota}$ of lattice polarised $K3$ surfaces associated to $\mathcal{A}$. Let further $\mathcal{N}_{M,\ell}^{0}$ be the open subset of $\mathcal{N}_{M,\ell}$ where the configuration of the singular fibres is generic. Then there is a natural finite-to-one dominant morphism $\mathcal{N}^{0}_{M,\ell}\to\mathcal{A}$. ###### Proof. Claim $(1)$ follows from Nikulin’s theory since there are only finitely many inequivalent primitive embeddings $\iota:L\to L_{K3}$. To prove (2) we recall that we have for every element in $\mathcal{N}_{M,\iota}$ a well-defined Jacobian fibration $S\to\mathbb{P}^{1}$ which can be written in Weierstraß form. The fact that $\mathcal{F}$ is a coarse moduli space of $K3$ surfaces with a Jacobian fibration defines a morphism $\mathcal{N}_{M,\ell}\to\mathcal{F}$. On the open subset $\mathcal{N}_{M,\ell}^{0}$ the monodromy is constant with monodromy group $\tilde{\Gamma}$ and hence we obtain a morphism $\mathcal{N}_{M,\ell}^{0}\to\mathcal{A}$. This map has finite fibres since there are only finitely many ways of identifying the components of the singular fibres not intersecting the $0$-section with a basis of the root lattice $R$. The map is dominant by the definition of an ambi-typical stratum. ∎ We now want to understand these maps better. The next two statements will be useful for this. ###### Lemma 12.2. Let $(R,G)$ be a pair consisting of a root lattice and an isotropic subgroup $G\subset D(R)$ with associated overlattice $L$, arising from one of the families listed in Table 11. Then the roots of $R$ and $L$ coincide. ###### Proof. A proof can be found in [Shi2, Proposition 3.2]. Indeed, this is a simple geometric argument.
We can use the fact that $M=U+L$ is isomorphic to the Néron-Severi group $\operatorname{NS}(S)$ of some (sufficiently general) $K3$ surface $S$ and that $R$ is the subgroup of $\operatorname{NS}(S)$ generated by all fibre components which do not meet the $0$-section. Then the claim is geometrically clear: assume that $r$ is a root of $L$ which is not a root of $R$. Then $\pm r$ is effective and meets neither the $0$-section nor a general fibre, since it is orthogonal to the summand $U$ which contains the classes of the $0$-section and a general fibre. Hence $\pm r$ defines a union of rational curves on the associated elliptic $K3$ surface consisting of components of singular fibres which do not intersect the $0$-section. But these roots are already contained in $R$. ∎ ###### Remark 12.3. Here we use the specific geometric situation. In general such a statement is false. Indeed, let $R=4A_{1}$ and let $H$ be the subgroup generated by the diagonal $(1,1,1,1)$ of $D(R)=(\mathbb{Z}/2\mathbb{Z})^{4}$. Then the overlattice is $D_{4}$ and one obtains a new root, namely $r=\frac{1}{2}\sum_{i=1}^{4}r_{i}$. Using the above lemma we can now prove ###### Proposition 12.4. Let $(R,G)$ be a pair of a root lattice and an isotropic subgroup arising from one of the families listed in Table 11 with associated overlattice $L$. Then $\operatorname{O}(L)\cong\\{g\in\operatorname{O}(R)\mid\overline{g}(G)=G\\}.$ ###### Proof. Clearly, the elements of $\operatorname{O}(R)$ which leave $G$ invariant (as a subgroup) define isometries of $L$, by construction of this lattice. Conversely, let $g\in\operatorname{O}(L)$. Then $g$ maps roots of $L$ to roots of $L$. By the above Lemma 12.2 these are exactly the roots of $R$ and hence $R$ is mapped to itself. Hence $g$ is an isometry of the lattice inclusion $R\subset L$ and thus $\overline{g}$ maps the isotropic subgroup $G$ to itself. ∎ Let $R$ be a root lattice.
We recall that the Weyl group $W_{R}\subset\operatorname{O}(R)$ is the group generated by the reflections with respect to the roots $r\in R$. We denote by $S_{R}$ the subgroup of $\operatorname{O}(R)$ which is induced by symmetries of the Dynkin diagram. We also recall from [Hum, Theorem 12.2] that the isometry group of $R$ is the semi-direct product of the Weyl group $W_{R}$ and the group $S_{R}$: $\operatorname{O}(R)=W_{R}\rtimes S_{R}.$ Elements in the Weyl group $W_{R}$ act trivially on the discriminant $D(R)$ and hence, in our situation, lift to isometries of $L$. In particular, we can consider $W_{R}\subset\operatorname{O}(L)$. We further denote the subgroup of $S_{R}$ which leaves $G$ invariant (as a subgroup) by $S_{R}^{G}$. It now follows from Proposition 12.4 that $\operatorname{O}(L)$ is generated by the Weyl group $W_{R}$ together with the group $S_{R}^{G}$ of diagram isometries which fix the subgroup $G$, i.e. (12.2) $\operatorname{O}(L)=W_{R}\rtimes S_{R}^{G}.$ We can extend all elements in $\operatorname{O}(L)$ to isometries of $M=U+L$ by taking the identity on the first factor $U$. By doing this we can consider $S_{R}^{G}$ as a subgroup of $\operatorname{O}(M)$ and to simplify the notation we shall denote the image of $S_{R}^{G}$ in $\operatorname{O}(M)$ by $S_{M}$. This notation is unambiguous in our situation as there is no lattice $R$ in Table 11 which comes with more than one subgroup $G\subset D(R)$. The lattice $M$ has signature $(1,\operatorname{rank}(M)-1)=(1,\operatorname{rank}(R)+1)$. As before we fix a connected component $C^{+}(M)$ of the positive cone in $M_{\mathbb{R}}$. We have already mentioned that the hyperplanes orthogonal to the roots $r\in M$ subdivide $C^{+}(M)$ into connected components, the Weyl chambers of $M$. It is well known (see [Huy, Proposition 8.2.6]) that the Weyl group $W_{M}$ of $M$ acts simply transitively on the Weyl chambers in $C^{+}(M)$.
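The counterexample in Remark 12.3 can be checked by direct computation. The following Python sketch (a toy verification, not part of the Magma code used for this paper) works with the Gram matrix of $R=4A_{1}$ in the negative definite $(-2)$-convention natural for Néron-Severi lattices and confirms that the glue vector $r=\frac{1}{2}\sum_{i=1}^{4}r_{i}$ pairs integrally with $R$ and has square $-2$, i.e. is a root of the overlattice $D_{4}$:

```python
from fractions import Fraction

# Gram matrix of R = 4A_1 in the negative definite (-2)-convention
G = [[-2 if i == j else 0 for j in range(4)] for i in range(4)]

def pair(u, v):
    """Bilinear form of R evaluated on rational vectors u, v."""
    return sum(u[i] * G[i][j] * v[j] for i in range(4) for j in range(4))

basis = [[Fraction(int(i == j)) for j in range(4)] for i in range(4)]
r = [Fraction(1, 2)] * 4  # the glue vector (r_1 + r_2 + r_3 + r_4)/2

assert pair(r, r) == -2                                 # r is a (-2)-class
assert all(pair(r, e).denominator == 1 for e in basis)  # r lies in the dual of R
```

Since $2r\in R$, the class of $r$ generates the diagonal subgroup $H\cong\mathbb{Z}/2\mathbb{Z}$ of $D(R)$, so the overlattice $R+\mathbb{Z}r\cong D_{4}$ indeed acquires roots not contained in $R$.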
By Lemma 12.2 we have (12.3) $W_{R}=W_{L}\subset W_{M}\subset\operatorname{\widetilde{O}}^{+}(M).$ In particular, $W_{R}$ also acts faithfully on the set of Weyl chambers of $M$. For our applications it will be essential that the group $S_{M}$, by contrast, maps Weyl chambers to themselves: ###### Proposition 12.5. Let $(R,G)$ be a pair of a root lattice and an isotropic subgroup arising from one of the families listed in Table 11. The elements of $S_{M}$, i.e. the symmetries of the Dynkin diagram of $R$ fixing $G$ as a group, map all Weyl chambers of $C^{+}(M)$ to themselves. ###### Proof. Here we again make use of the special situation, namely the fact that there exists an $M$-polarised $K3$ surface $S$ with $\operatorname{NS}(S)\cong M$ and such that the components of the singular fibres which do not intersect the $0$-section generate the root lattice $R$. In fact these define a set of simple roots which gives rise to the Dynkin diagram associated to $R$ and $S_{R}$ acts on these. More precisely, we can choose a primitive embedding $\iota:M\to L_{K3}$ and a marking $\varphi:H^{2}(S,\mathbb{Z})\to L_{K3}$ such that $\varphi(\operatorname{NS}(S))=\iota(M)$. Let $\Delta^{+}$ be the set of positive roots (i.e. effective $(-2)$-classes in $\operatorname{NS}(S)$). By [Huy, Section 8.2.3] it is enough to show that every element of $S_{M}$ maps positive roots to positive roots. Let $g\in S_{M}$ and let $f$ be the class of a fibre. Then $g(f)=f$. If $s$ is a positive root with $(f,s)\neq 0$, then $(f,s)>0$. Since $(f,g(s))=(g(f),g(s))=(f,s)>0$ it follows that $g(s)$ is again positive. By definition of the group $S_{M}$ we have $g(s_{0})=s_{0}$ where $s_{0}$ is the class of the $0$-section. Then the same argument also shows that $g(s)$ is positive for every positive root $s$ with $(s,s_{0})\neq 0$.
It remains to consider the positive roots which are orthogonal to $f$ and $s_{0}$, but these are exactly the positive roots of $R$, which are given by non-negative combinations of components of singular fibres which do not intersect the $0$-section. Since $g$ permutes these, the claim follows. ∎ We shall now discuss the action of the group of symmetries of the Dynkin diagram on the moduli spaces of $M$-lattice polarised $K3$ surfaces in more detail. Let $(S,\tilde{\iota})$ be a general element in $\mathcal{N}_{M,\iota}$, more precisely an element in $\mathcal{N}_{M,\ell}^{0}$, where $\mathcal{N}_{M,\ell}$ is a connected component of $\mathcal{N}_{M,\iota}$ and $\mathcal{N}_{M,\ell}^{0}$ denotes the open set where the fibre configuration is constant. Then $\tilde{\iota}:M\to\operatorname{NS}(S)\subset H^{2}(S,\mathbb{Z})$ is a primitive embedding and there exists a marking $\varphi:H^{2}(S,\mathbb{Z})\to L_{K3}$ such that $\tilde{\iota}=\varphi^{-1}\circ\iota$ and such that $\tilde{\iota}(C_{M}^{\operatorname{pol}})$ contains an ample class on $S$ where $C_{M}^{\operatorname{pol}}$ is the fixed Weyl chamber in $C^{+}(M)$ which we have chosen once and for all. Every element $g_{M}\in S_{M}\subset\operatorname{O}(M)$ has, by Proposition 12.5, the property that it fixes the Weyl chambers in $C^{+}(M)$. Hence $\tilde{\iota}\circ g_{M}:M\to\operatorname{NS}(S)$ defines again an $M$-polarization on $S$. Now two cases can occur. The first is that $\tilde{\iota}$ and $\tilde{\iota}\circ g_{M}$ define isomorphic embeddings of $M$ into the $K3$ lattice $L_{K3}$. In this case $(S,\tilde{\iota})$ and $(S,\tilde{\iota}\circ g_{M})$ define elements in the same moduli space $\mathcal{N}_{M,\iota}$ and $g_{M}$ induces a map from $\mathcal{N}_{M,\iota}$ to itself identifying $(S,\tilde{\iota})$ and $(S,\tilde{\iota}\circ g_{M})$. Note that the moduli space $\mathcal{N}_{M,\iota}$ can have one or two components. 
If it has two components, then $g_{M}:\mathcal{N}_{M,\iota}\to\mathcal{N}_{M,\iota}$ can either fix the components or interchange them. By the discussion in Remark 11.8 all of these possibilities actually occur in our situation. In the second case $\iota$ and $\iota\circ g_{M}$ define different embeddings and $g_{M}$ induces an isomorphism of moduli spaces $\mathcal{N}_{M,\iota}\to\mathcal{N}_{M,\iota\circ g_{M}}$. The lattices where more than one embedding exists are listed in Proposition 11.7. In these cases we have two different embeddings, but there exist symmetries of the Dynkin diagram which define isomorphisms $\mathcal{N}_{M,\iota}\cong\mathcal{N}_{M,\iota\circ g_{M}}$ (by Remark 11.9). Again, note that $\mathcal{N}_{M,\iota}$ can have one or two components, but the latter does not occur among our cases. We can also formulate the above discussion in more group theoretic terms. The choice of a primitive embedding $\iota:M\to L_{K3}$ with orthogonal complement $T$ (up to isomorphism of embeddings) is equivalent to the choice of an isomorphism $\alpha_{\iota}:(D(M),q_{M})\cong(D(T),-q_{T})$ (modulo $\operatorname{\overline{O}}(T)$). Recall the definitions of the groups $S^{G}_{R}$ and $S_{M}$ from (12.2) and the subsequent paragraph. An element $g_{M}\in S_{M}$ defines an isometry $\overline{g}_{M}\in\operatorname{O}(D(M))$ and, via $\alpha_{\iota}$, an isometry $\alpha_{\iota}(\overline{g}_{M})\in\operatorname{O}(D(T))$. The morphism which maps $(S,\tilde{\iota})$ to $(S,\tilde{\iota}\circ g_{M})$ maps $\mathcal{N}_{M,\iota}$ to itself if and only if $\alpha_{\iota}(\overline{g}_{M})\in\operatorname{\overline{O}}(T)$. If $g_{M}$ induces a morphism from $\mathcal{N}_{M,\iota}$ to itself, then we can describe this map explicitly. In this case $\alpha_{\iota}(\overline{g}_{M})\in\operatorname{\overline{O}}(T)$ and we can lift this to an element $g_{T}\in\operatorname{O}(T)$ such that the pair $(g_{M},g_{T})$ extends to an isometry of $L_{K3}$.
The lift $g_{T}$ is uniquely determined up to $\operatorname{\widetilde{O}}(T)$. The action of $g_{T}$ on $\mathcal{N}_{M,\iota}=\Omega_{T}/\operatorname{\widetilde{O}}(T)$ then induces the map on $\mathcal{N}_{M,\iota}$ in question. Now $\mathcal{N}_{M,\iota}$ has two components if and only if $\operatorname{\widetilde{O}}^{+}(T)=\operatorname{\widetilde{O}}(T)$ and in this case $g_{T}$ interchanges the two components if and only if $g_{T}$ has real spinor norm $-1$, i.e. if and only if $g_{T}\notin\operatorname{\widetilde{O}}^{+}(T)$ (which in this case is independent of the chosen lift). Let $\pi_{M}:\operatorname{O}(M)\to\operatorname{O}(D(M))$ and $\pi_{T}:\operatorname{O}(T)\to\operatorname{O}(D(T))$ be the canonical projections. We define (12.4) $\overline{S}_{M}=\pi_{M}(S_{M})$ which we will, via $\alpha_{\iota}:\operatorname{O}(D(M))\to\operatorname{O}(D(T))$, also consider as a subgroup $\overline{S}_{M}\subset\operatorname{O}(D(T))$. As a subgroup this depends on the embedding $\iota$ as it is defined via the isomorphism $\alpha_{\iota}$. This becomes important when we define (12.5) $\overline{S}_{M,\iota}=\overline{S}_{M}\cap\operatorname{\overline{O}}(T),\,\overline{S}^{+}_{M,\iota}=\overline{S}_{M}\cap\operatorname{\overline{O}}^{+}(T)$ and their pre-images (12.6) $\Gamma_{M,\iota}=\pi_{T}^{-1}(\overline{S}_{M,\iota})\subset\operatorname{O}(T),\,\Gamma^{+}_{M,\iota}=\pi_{T}^{-1}(\overline{S}^{+}_{M,\iota})\subset\operatorname{O}^{+}(T).$ The elements in $\Gamma_{M,\iota}$ are those isometries of $T$ which can be extended to the overlattice $L_{K3}$ of $T\oplus\iota(M)$ (where we do not ask that these isometries act trivially on $\iota(M)$). The group $\Gamma_{M,\iota}$ acts on the period domain $\Omega_{T}$ and induces an action on the moduli space $\mathcal{N}_{M,\iota}$. An element in $\Gamma_{M,\iota}$ fixes the components of $\mathcal{N}_{M,\iota}$ if and only if it is in $\Gamma^{+}_{M,\iota}$.
By the above discussion the group $\overline{S}_{M}$ operates on $\operatorname{O}(D(T))/\operatorname{\overline{O}}(T)$ via $h\mapsto h\circ\alpha_{\iota}(\overline{g}_{M})$ for $\overline{g}_{M}\in\overline{S}_{M}$. It will be important for us to know whether this action is transitive. The following is essentially a reformulation of Remark 11.9. ###### Proposition 12.6. If $M$ is a hyperbolic lattice arising from one of the families listed in Table 11, then $\overline{S}_{M}$ acts transitively on $\operatorname{O}(D(T))/\operatorname{\overline{O}}(T)$ unless we are in the case where $R=2A_{3}+8A_{1}$. ###### Proof. As we have said before (see Remark 11.9) comparing the lists in [Shi2, Corollary 1.5] and [Shi2, Table 3] with our Table 11 we find three lattices, namely $4A_{3}+2A_{1}$, $2A_{3}+8A_{1}$ and $D_{4}+2A_{6}+A_{1}$. The first lattice is irrelevant for us as it appears with Mordell-Weil torsion $\mathbb{Z}/2\mathbb{Z}$ in Shimada’s lists, whereas we have torsion $\mathbb{Z}/4\mathbb{Z}$. The reason that $D_{4}+2A_{6}+A_{1}$ appears in Shimada’s lists is that there are two connected components, but they belong to the same primitive lattice embedding. Finally, for $2A_{3}+8A_{1}$ there exist two combinatorially different components of the moduli space coming from different lattice embeddings (but only one of them appears in our classification). ∎ Our previous discussion can now be summarised as follows. Let $\mathcal{A}$ be an ambi-typical stratum, resp. let $\mathcal{A}\cup\overline{\mathcal{A}}$ be the union of the two complex-conjugate components 38/39. Then there are finitely many components of moduli spaces $\mathcal{N}_{M,\iota}$ of lattice-polarised $K3$ surfaces which are associated to $\mathcal{A}$ or $\overline{\mathcal{A}}$. The group $\overline{S}_{M}$ acts transitively on the set of all moduli spaces $\mathcal{N}_{M,\iota}$ whose components are associated to $\mathcal{A}$ or $\mathcal{A}\cup\overline{\mathcal{A}}$ respectively.
The groups $\Gamma_{M,\iota}$ act on the moduli spaces $\mathcal{N}_{M,\iota}$ (and their elements may interchange connected components of these moduli spaces). ###### Remark 12.7. By inspection one sees that for all our lattices $-1\in\overline{S}_{M,\iota}$ and hence also $-1\in\Gamma_{M,\iota}$, so that $\Gamma_{M,\iota}/(\pm\operatorname{\widetilde{O}}(T))\cong\overline{S}_{M,\iota}/(\pm 1)$. ###### Theorem 12.8. Let $\mathcal{A}$ be an ambi-typical stratum and let $\mathcal{N}_{M,\iota}$ be a moduli space of lattice polarised $K3$ surfaces associated to $\mathcal{A}$. Then the following holds: * (1) If $\mathcal{A}$ is one of the ambi-typical strata different from 38/39, then the dominant map $\mathcal{N}_{M,\iota}^{0}\to\mathcal{A}$ is given by the action of the finite group $\Gamma_{M,\iota}/(\pm\operatorname{\widetilde{O}}(T))$ which acts faithfully. * (2) Let $\mathcal{A}\cup\overline{\mathcal{A}}$ be the union of the two complex-conjugate strata 38/39. Then $\mathcal{N}_{M,\iota}$ has two connected components $\mathcal{N}_{M,\ell}$ and $\overline{\mathcal{N}}_{M,\ell}$ and the dominant map $\mathcal{N}_{M,\ell}^{0}\cup\overline{\mathcal{N}}_{M,\ell}^{0}\to\mathcal{A}\cup\overline{\mathcal{A}}$ is given by the group $\Gamma_{M,\iota}/(\pm\operatorname{\widetilde{O}}(T))$ which acts faithfully. ###### Proof. We start with a surface $S\in\mathcal{A}$ (or $S\in\overline{\mathcal{A}}$). As we have explained before, identifying the components of the singular fibres not meeting the $0$-section with simple roots of $R$ and the lattice spanned by the fibre and section with a copy of $U$ we obtain a lattice polarization $\tilde{\iota}:M\to\operatorname{NS}(S)\subset H^{2}(S,\mathbb{Z})$. Let $\varphi:H^{2}(S,\mathbb{Z})\to L_{K3}$ be a marking and set $\iota=\varphi\circ\tilde{\iota}$.
We have to determine when two lattice-polarised $K3$ surfaces in $\mathcal{N}_{M,\iota}$ are mapped to the same point in $\mathcal{A}$ or $\overline{\mathcal{A}}$ respectively. Assume that two surfaces $(S_{1},\tilde{\iota}_{1})$ and $(S_{2},\tilde{\iota}_{2})$ in $\mathcal{N}_{M,\iota}$ define the same point in $\mathcal{A}$ or $\mathcal{A}\cup\overline{\mathcal{A}}$ respectively. Then there is an isomorphism $f:S_{2}\to S_{1}$ which respects the elliptic fibration. This induces a map $f^{*}:\operatorname{NS}(S_{1})\to\operatorname{NS}(S_{2})$. Let $\varphi_{i}:H^{2}(S_{i},\mathbb{Z})\to L_{K3}$ be markings with $\iota=\varphi_{i}\circ\tilde{\iota}_{i}$ for $i=1,2$. Then $\varphi_{2}\circ f^{*}\circ\varphi_{1}^{-1}$ defines an isometry of $M=U+L$. This is the identity on $U$ as the $0$-section and the general fibre are mapped to the $0$-section and the general fibre respectively. By restriction this then defines an isometry of $L$. Recall from (12.2) that $\operatorname{O}(L)=W_{R}\rtimes S_{R}^{G}$ and from (12.3) that $W_{R}=W_{L}\subset W_{M}\subset\operatorname{\widetilde{O}}^{+}(M)$. Since the isometry $\varphi_{2}\circ f^{*}\circ\varphi_{1}^{-1}|_{M}$ maps $C_{M}^{\operatorname{pol}}$ to itself and since the Weyl group acts faithfully on the Weyl chambers it follows that $\varphi_{2}\circ f^{*}\circ\varphi_{1}^{-1}|_{M}$ defines an element in $\Gamma_{M,\iota}$. Conversely, if two lattice polarised $K3$ surfaces in $\mathcal{N}_{M,\iota}^{0}$ are conjugate under $\Gamma_{M,\iota}$ the underlying elliptic $K3$ surfaces are isomorphic and hence define the same point in $\mathcal{A}$ or $\mathcal{A}\cup\overline{\mathcal{A}}$ respectively. Finally, since $\pm\operatorname{id}_{T}$ are the only elements in $\operatorname{O}(T)$ which act trivially on $\Omega_{T}$ it follows that the group $\Gamma_{M,\iota}/(\pm\operatorname{\widetilde{O}}(T))$ acts faithfully on $\mathcal{N}_{M,\iota}$. ∎ ###### Corollary 12.9.
The following holds: * (1) If $\mathcal{A}$ is an ambi-typical stratum different from 38/39, then the degree of the dominant map $\mathcal{N}_{M,\iota}^{0}\to\mathcal{A}$ is given by $|\overline{S}_{M}/(\pm 1)|/|\mathcal{M}^{c}_{T}|=|\overline{S}^{+}_{M,\iota}/(\pm 1)|.$ * (2) If $\mathcal{A}$ (or $\overline{\mathcal{A}}$ respectively) is one of the strata 38/39, then the degree of the dominant map $\mathcal{N}_{M,\ell}^{0}\cup\overline{\mathcal{N}}_{M,\ell}^{0}\to\mathcal{A}\cup\overline{\mathcal{A}}$ is given by $2|\overline{S}_{M}/(\pm 1)|/|\mathcal{M}^{c}_{T}|=|\overline{S}^{+}_{M,\iota}/(\pm 1)|=24.$ ###### Proof. The first equality follows from the fact that $\overline{S}_{M}$ acts transitively on the connected components of $M$-polarised $K3$ surfaces with the exception of the cases 38/39 where we have two orbits. The second equality follows from the definition of the group $\overline{S}^{+}_{M,\iota}$ and the construction of the covering map. ∎ In the case of $1$-dimensional strata one can compute all the data of the covering map from the moduli space of lattice polarised $K3$ surfaces to the ambi-typical stratum. In Table 13 we give these data for all strata of dimension $1$ with more than one connected component. We list the case number, the root lattice, the number of connected components, the order of the group $\overline{S}_{M}$ coming from the symmetries of the Dynkin diagram, the degree of the covering map, the genus of the modular curve parameterising the lattice polarised $K3$ surfaces and finally the genus $g_{\operatorname{BPT}}$ of the ambi-typical stratum. The latter is always $0$ in accordance with [BPT, Theorem] which says that all monodromy strata are rational. We find it remarkable that in contrast the genus of the components of the associated moduli spaces of lattice-polarised $K3$ surfaces can be as high as $13$. Note that the group $\overline{S}_{M}$ is a subgroup of the symmetry group of the Dynkin diagram of the root lattice.
If the isotropic group $G$ is trivial, then the two groups coincide. Otherwise the condition that $G$ must be fixed can impose nontrivial extra conditions. This is the case for numbers $37,48,49$ where $\overline{S}_{M,\iota}$ is a proper subgroup of index $2$ of $\overline{S}_{M}$. Furthermore, in these cases $\overline{S}^{+}_{M,\iota}$ is also an index $2$ subgroup of $\overline{S}_{M,\iota}$. Altogether $\overline{S}^{+}_{M,\iota}$ can have index $1$, $2$ or $4$ in $\overline{S}_{M}$. The relevant computations, in particular of the genera $g$, were performed by Markus Kirschmer and are presented in the appendix. $\begin{array}[]{c|c|c|c|c|c|c}\text{Number}&\text{root lattice}&|\mathcal{M}^{c}_{T}|&|\overline{S}_{M}/(\pm 1)|&d=|\overline{S}^{+}_{M,\iota}/(\pm 1)|&g&g_{\operatorname{BPT}}\\\ \hline\cr 35&D_{4}+A_{7}+A_{4}+2A_{1}&2&8&4&0&0\\\ 37&D_{4}+A_{7}+2A_{2}+2A_{1}&4&32&8&0&0\\\ 38/39&D_{4}+2A_{6}+A_{1}&2&24&24&1&0\\\ 42&D_{4}+A_{6}+A_{3}+2A_{2}&2&96&48&13&0\\\ 44&D_{4}+2A_{5}+3A_{1}&2&24&12&0&0\\\ 48&D_{4}+2A_{4}+2A_{2}+A_{1}&4&192&48&1&0\\\ 49&D_{4}+3A_{3}+2A_{2}&4&384&96&5&0\\\ \end{array}$ Table 13. Covering data for all $1$-dimensional ambi-typical strata with more than one component ## Appendix. Numerical calculations Markus Kirschmer In this appendix we summarise some explicit computations concerning the 25 rank 17 lattices in Table 11. All computations were done in Magma [Magma] and the complete code is available from www.math.rwth-aachen.de/~Markus.Kirschmer/magma/K3.html. Let $L$ be one of the rank 17 lattices in Table 11 and set $M:=U\oplus L$. Further let $\iota\colon M\hookrightarrow L_{K3}$ be a primitive embedding. ### Constructing $T$ We first need to construct an integral lattice $T$ isometric to $\iota(M)^{\perp}$. This can be done without constructing an embedding $\iota$ as follows: Let $(V,q)$ be the ambient quadratic space of $T$.
The fact that $M\oplus T$ and $L_{K3}$ lie in isometric quadratic spaces shows that $(V,q)$ has signature $(2,1)$ and determinant $\det(M)$. It also uniquely determines the Hasse-Witt-invariants of $(V,q)$. Using [Kir, Alg. 3.4.3] we can construct a rational quadratic space isometric to $(V,q)$. Let $X$ be a maximal even lattice in $(V,q)$, i.e. $q(X)\subseteq 2\mathbb{Z}$ and no lattice properly containing $X$ has that property, cf. [Kir, Alg. 3.5.5]. By [OM, Theorem 91:2], the genus of $X$ is unique, thus we may assume that $T\subseteq X$. For any prime $p$ dividing $\\#D(L)$, let $S_{p}$ be the $p$-Sylow subgroup of $D(L)$. Then $\\{Y\subseteq X\mid D(Y)\cong S_{p}\mbox{ and }\\#q_{Y}^{-1}(\\{a\\})=\\#q_{L}^{-1}(\\{-a\\})\mbox{ for all }a\in\mathbb{Q}/2\mathbb{Z}\\}$ consists of a single genus. Let $Y^{(p)}$ be any representative. By Proposition 11.5, the lattice $T:=\cap_{p}Y^{(p)}$ is isometric to $\iota(M)^{\perp}$. ### Computing $\operatorname{O}(T)$ and its subgroups A finite generating set of $\operatorname{O}(T)$ can be constructed using a variation of Voronoi’s algorithm by M. H. Mertens [Me]. A slight modification of Mertens’ algorithm also yields a finite presentation of $\operatorname{O}(T)$ using Bass-Serre theory [BCNS]. This modification was provided to us by S. Schönnenbeck. The group $\operatorname{\widetilde{O}}(T)$ is the kernel of the homomorphism $\pi_{T}\colon\operatorname{O}(T)\to\operatorname{O}(D(T),q_{T})$. Since the group $\operatorname{O}(D(T),q_{T})$ is finite, we can construct a finite generating set of $\operatorname{\widetilde{O}}(T)$ using the standard orbit stabiliser algorithm. Similarly, the spinor norm map $\textnormal{sp}_{\mathbb{R}}\colon\operatorname{O}(T)\to\\{\pm 1\\}$ yields finite generating sets for $\operatorname{O}^{+}(T)$ and $\operatorname{\widetilde{O}}^{+}(T)$. Note that spinor norms can be computed using Zassenhaus’ trick [Za].
But one has to keep in mind that our normalization of the spinor norm on $(T,q)$ corresponds to Zassenhaus’ spinor norm on the lattice $(T,-q)$. Next we want to compute generators for the group $\Gamma_{M,\iota}$, cf. equation (12.6). The group $D(M)$ is finite and so is its automorphism group $\operatorname{Aut}(D(M))$. Thus the subgroup $\operatorname{O}(D(M),q_{M})=\\{f\in\operatorname{Aut}(D(M))\mid q_{M}(f(x))=q_{M}(x)\mbox{ for all }x\in D(M)\\}$ can be enumerated by brute force. Similarly, we enumerate the subgroup $\overline{S}_{M}\subseteq\operatorname{O}(D(M),q_{M})$ induced by the automorphism group of the Dynkin diagram of the root lattice $R$. If $\overline{S}_{M}=\operatorname{O}(D(M),q_{M})$, then $\Gamma_{M,\iota}=\operatorname{O}(T)$. Suppose now $\overline{S}_{M}\subsetneq\operatorname{O}(D(M),q_{M})$. In these cases, it just happens that $\pi_{T}\colon\operatorname{O}(T)\to\operatorname{O}(D(T),q_{T})$ is onto. We start by computing any isometry $\alpha\colon(D(M),q_{M})\to(D(T),-q_{T})$ using a backtrack approach. This gives us an isomorphism $\operatorname{O}(D(T))\cong\operatorname{O}(D(M))$. The fact that $\operatorname{O}(T)\to\operatorname{O}(D(T),q_{T})$ is onto shows that there exists some $f\in\operatorname{O}(T)$ such that $\pi_{T}(f)\circ\alpha=\alpha_{\iota}$. In particular, the pre-image of $\overline{S}_{M,\iota}$ under $\operatorname{O}(T)\to\operatorname{O}(D(T))\cong\operatorname{O}(D(M))$ must be conjugate to $\Gamma_{M,\iota}$ and we find generators for this group using the orbit stabiliser algorithm. ### Fuchsian groups Let $\operatorname{SO}^{+}(T)=\\{\varphi\in\operatorname{O}^{+}(T)\mid\det(\varphi)=1\\}$. We denote by $\operatorname{SO}^{+}(2,1)$ the connected component of the special orthogonal group of a real quadratic space of signature $(2,1)$.
The sporadic isomorphism between $\operatorname{SO}^{+}(2,1)$ and $\operatorname{PSL}(2,\mathbb{R})$ implies that the type IV homogeneous domain associated to $T$ is isomorphic to the upper half plane $\mathbb{H}_{1}$. Moreover, it induces an injection $\operatorname{SO}^{+}(T)\hookrightarrow\operatorname{PSL}(2,\mathbb{R})$ and thus an action of $\operatorname{SO}^{+}(T)$ on the upper half plane $\mathbb{H}_{1}$. This action is properly discontinuous. Hence any finite index subgroup $G$ of $\operatorname{SO}^{+}(T)$ is a Fuchsian group, cf. [Ka, Theorem 2.2.6]. Suppose $G$ is a finite index subgroup of $\operatorname{SO}^{+}(T)$ and let $g:=g(\mathbb{H}_{1}/G)$ be the genus of the (compactified) curve $\mathbb{H}_{1}/G$. By [Ka, Section 4.3], the group $G$ admits a presentation (A.1) $G\cong\left\langle a_{1},b_{1},\ldots,a_{g},b_{g},x_{1},\ldots,x_{d},y_{1},\ldots,y_{t}\middle|\begin{array}[]{@{}l@{}}x_{1}^{m_{1}}=\ldots=x_{d}^{m_{d}}=1\mbox{ and }\\\ x_{1}\cdots x_{d}y_{1}\cdots y_{t}[a_{1},b_{1}]\cdots[a_{g},b_{g}]=1\end{array}\right\rangle$ with $t=0$ if and only if $\mathbb{H}_{1}/G$ is compact. Since the index of $G$ in $\operatorname{O}(T)$ is finite, we can obtain a finite presentation of $G$ from the presentation of $\operatorname{O}(T)$ using the Reidemeister-Schreier method [MKS, Section 2.3]. Even though this presentation might not be in the form of eq. (A.1), it is good enough to determine the genus of $\mathbb{H}_{1}/G$: ###### Lemma A.1. Let $L$ be one of the rank $17$ lattices of Table 11 and let $\iota\colon L\oplus U\to L_{K3}$ be a primitive embedding. Further, let $T:=\iota(L\oplus U)^{\perp}$ and let $G$ be a finite index subgroup of $\operatorname{SO}^{+}(T)$. * (1) The space $\mathbb{H}_{1}/G$ is compact. * (2) The torsion-free part of $G/G^{\prime}$ has rank $2g(\mathbb{H}_{1}/G)$. ###### Proof. It suffices to prove the first statement for $G=\operatorname{SO}^{+}(T)$.
An explicit computation shows that the abelian group $\operatorname{SO}^{+}(T)/\operatorname{SO}^{+}(T)^{\prime}\cong(\mathbb{Z}/2\mathbb{Z})^{r}$ is an elementary abelian $2$-group. Suppose $\mathbb{H}_{1}/\operatorname{SO}^{+}(T)$ is not compact, i.e. the parameter $t$ in eq. (A.1) is non-zero. The isomorphism type of $\operatorname{SO}^{+}(T)/\operatorname{SO}^{+}(T)^{\prime}$ implies that $g(\mathbb{H}_{1}/\operatorname{SO}^{+}(T))=0$ and $\operatorname{SO}^{+}(T)\cong\mathbb{Z}/2\mathbb{Z}*\ldots*\mathbb{Z}/2\mathbb{Z}$ is a free product of $r$ copies of $\mathbb{Z}/2\mathbb{Z}$. An explicit computation shows that for all lattices $T$, the groups $\operatorname{SO}^{+}(T)$ and $\mathbb{Z}/2\mathbb{Z}*\ldots*\mathbb{Z}/2\mathbb{Z}$ have different numbers of subgroups of small index. This proves the first assertion. The second assertion follows immediately from the fact that the parameter $t$ in eq. (A.1) is zero as $\mathbb{H}_{1}/G$ is compact. ∎ Suppose now $G$ is a subgroup of $\operatorname{O}^{+}(T)$. We denote by $\mathbb{H}_{1}/G$ the space $\mathbb{H}_{1}/(\pm G\cap\operatorname{SO}^{+}(T))$ where $\pm G$ is the subgroup of $\operatorname{O}(T)$ generated by $G$ and $-I_{3}$. ### Results For each rank 17 lattice in Table 11, we constructed a lattice $T$ as well as generating sets for $\operatorname{O}(T)$, $\operatorname{O}^{+}(T)$, $\operatorname{\widetilde{O}}(T)$ and $\operatorname{\widetilde{O}}^{+}(T)$. It turns out that in all cases $[\operatorname{O}(T):\operatorname{O}^{+}(T)]=2$ and $g(\mathbb{H}_{1}/\operatorname{O}^{+}(T))=0$. Table 14 lists the indices $I_{1}:=[\operatorname{O}^{+}(T):\operatorname{\widetilde{O}}^{+}(T)]$, $I_{2}:=[\operatorname{\widetilde{O}}(T):\operatorname{\widetilde{O}}^{+}(T)]$ and $I_{3}:=[\operatorname{O}(D(T)):\overline{\operatorname{O}(T)}]$, the order of $\overline{S}_{M}/(\pm 1)$ and the genus of the moduli space of lattice polarised $K3$ surfaces.
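Before turning to the results, note that part (2) of Lemma A.1 reduces the genus computation to linear algebra: once a finite presentation is known, the torsion-free rank of the abelianisation $G/G^{\prime}$ is the number of generators minus the rank of the exponent-sum matrix of the relators. The following Python sketch illustrates only this elementary step (the actual computations were performed in Magma); the sample presentations are standard textbook examples, not lattices from Table 11:

```python
from fractions import Fraction

def free_rank(num_gens, relators):
    """Torsion-free rank of G/G' for G = <x_1,...,x_n | relators>,
    where each relator is given by its vector of exponent sums."""
    rows = [[Fraction(e) for e in rel] for rel in relators]
    rank = 0
    # fraction-exact Gaussian elimination to find the rank over Q
    for col in range(num_gens):
        piv = next((i for i in range(rank, len(rows)) if rows[i][col]), None)
        if piv is None:
            continue
        rows[rank], rows[piv] = rows[piv], rows[rank]
        for i in range(len(rows)):
            if i != rank and rows[i][col]:
                f = rows[i][col] / rows[rank][col]
                rows[i] = [a - f * b for a, b in zip(rows[i], rows[rank])]
        rank += 1
    return num_gens - rank

# Genus-2 surface group <a1,b1,a2,b2 | [a1,b1][a2,b2]>: the commutator
# relator has exponent sum 0 in every generator, so the free rank is 4 = 2g.
assert free_rank(4, [[0, 0, 0, 0]]) == 4

# Z/2 * Z/2 * Z/2 given as <x1,x2,x3 | x1^2, x2^2, x3^2>: free rank 0,
# matching g = 0 in the dichotomy used in the proof of Lemma A.1.
assert free_rank(3, [[2, 0, 0], [0, 2, 0], [0, 0, 2]]) == 0
```

In the compact case ($t=0$) the presentation (A.1) abelianises to $\mathbb{Z}^{2g}\oplus(\text{torsion})$, which is exactly what the second assertion of Lemma A.1 exploits.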
We note that in all but three cases $\overline{S}_{M}/(\pm 1)=\overline{S}_{M,\iota}/(\pm 1)$. The only cases where this is not the case are 37, 48 and 49 where $\overline{S}_{M,\iota}/(\pm 1)$ has index $2$ in $\overline{S}_{M}/(\pm 1)$. Furthermore, in these cases, as well as in 35, 42 and 44, the group $\overline{S}^{+}_{M,\iota}/(\pm 1)$ has index $2$ in $\overline{S}_{M,\iota}/(\pm 1)$. $\begin{array}[]{c|c|c|c|c|c|c}\\#&\text{root lattice}&I_{1}&I_{2}&I_{3}&|\overline{S}_{M}/(\pm 1)|&g(\mathbb{H}_{1}/\operatorname{\widetilde{O}}^{+}(T))\\\ \hline\cr 24&D_{4}+A_{13}&12&2&1&6&1\\\ 25&D_{4}+A_{12}+A_{1}&12&2&1&6&1\\\ 26&D_{4}+A_{11}+A_{2}&16&2&1&4&0\\\ 27&D_{4}+A_{11}+2A_{1}&8&2&1&4&0\\\ 28&D_{4}+A_{10}+A_{2}+A_{1}&24&2&1&12&1\\\ 29&D_{4}+A_{9}+A_{4}&72&2&1&12&4\\\ 30&D_{4}+A_{9}+A_{3}+A_{1}&8&2&1&4&1\\\ 31&D_{4}+A_{9}+2A_{2}&96&2&1&48&7\\\ 32&D_{4}+A_{9}+A_{2}+2A_{1}&8&2&1&4&0\\\ 33&D_{4}+A_{8}+A_{5}&72&2&1&12&4\\\ 34&D_{4}+A_{8}+A_{4}+A_{1}&24&2&1&12&1\\\ 35&D_{4}+A_{7}+A_{4}+2A_{1}&8&1&1&8&0\\\ 36&D_{4}+A_{7}+A_{3}+A_{2}+A_{1}&16&2&1&8&1\\\ 37&D_{4}+A_{7}+2A_{2}+2A_{1}&32&1&2&32&0\\\ 38/39&D_{4}+2A_{6}+A_{1}&48&1&1&24&1\\\ 40&D_{4}+A_{6}+A_{5}+A_{2}&48&2&1&24&7\\\ 41&D_{4}+A_{6}+A_{4}+A_{2}+A_{1}&48&2&1&24&1\\\ 42&D_{4}+A_{6}+A_{3}+2A_{2}&96&1&1&96&13\\\ 43&D_{4}+2A_{5}+A_{3}&32&2&1&16&3\\\ 44&D_{4}+2A_{5}+3A_{1}&24&1&1&24&0\\\ 45&D_{4}+A_{5}+2A_{4}&96&2&1&48&7\\\ 46&D_{4}+A_{5}+A_{4}+A_{3}+A_{1}&16&2&1&8&1\\\ 47&D_{4}+A_{5}+2A_{3}+2A_{1}&16&2&1&8&0\\\ 48&D_{4}+2A_{4}+2A_{2}+A_{1}&192&1&2&192&1\\\ 49&D_{4}+3A_{3}+2A_{2}&384&1&2&384&5\\\ \end{array}$ Table 14. Indices $I_{1}=[\operatorname{O}^{+}(T):\operatorname{\widetilde{O}}^{+}(T)]$, $I_{2}=[\operatorname{\widetilde{O}}(T):\operatorname{\widetilde{O}}^{+}(T)]$, $I_{3}=[\operatorname{O}(D(T)):\overline{\operatorname{O}(T)}]$, order of $\overline{S}_{M}/(\pm 1)$ and genus of the moduli space of lattice polarised $K3$ surfaces.
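The brute-force enumeration of $\operatorname{O}(D(M),q_{M})$ described above can be illustrated on the toy example $R=4A_{1}$ from Remark 12.3. The Python sketch below (an illustration under our own encoding of the discriminant form, not the Magma code referenced in this appendix) enumerates all $\mathbb{F}_{2}$-linear isometries of $D(4A_{1})=(\mathbb{Z}/2\mathbb{Z})^{4}$ and finds exactly the $S_{4}$ of Dynkin-diagram symmetries permuting the four $A_{1}$ summands:

```python
from itertools import product

# Discriminant form of R = 4A_1: D(R) = (Z/2Z)^4 with q(x) = -wt(x)/2 mod 2Z.
# Since q is determined by the Hamming weight mod 4, we encode it that way.
vecs = list(product((0, 1), repeat=4))
qval = lambda x: sum(x) % 4  # stands in for q(x) = -wt(x)/2 in (1/2)Z / 2Z

def image(cols, x):
    """Image of x under the F_2-linear map sending e_j to cols[j]."""
    return tuple(sum(c[i] * xj for c, xj in zip(cols, x)) % 2 for i in range(4))

# a q-isometry must send each basis vector e_j (with q-value 1 in our
# encoding) to a vector of the same q-value, so only those are candidates
cand = [v for v in vecs if qval(v) == 1]

isoms = []
for cols in product(cand, repeat=4):
    bijective = len({image(cols, x) for x in vecs}) == 16
    if bijective and all(qval(image(cols, x)) == qval(x) for x in vecs):
        isoms.append(cols)

print(len(isoms))  # 24 = |S_4|: the permutations of the four A_1 summands
```

Here $\operatorname{O}(D(R),q_{R})$ coincides with the full Dynkin-diagram symmetry group $S_{4}$; for the rank 17 lattices of Table 11 the analogous enumeration produces the groups whose orders enter Table 14.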
There are only four lattices $T$ for which $\Gamma_{M,\iota}\neq\operatorname{O}(T)$. In these cases, the genus of $\hat{C}_{T}:=\mathbb{H}_{1}/\Gamma^{+}_{M,\iota}$, which is birational to the ambi-typical stratum, is again $0$, once more in concordance with [BPT, Theorem]. Table 15 lists the indices $[\operatorname{O}^{+}(T):\Gamma^{+}_{M,\iota}]$ and $[\Gamma_{M,\iota}:\Gamma^{+}_{M,\iota}]$ in these cases. $\begin{array}[]{c|c|c|c}\\#&\text{root lattice}&[\operatorname{O}^{+}(T):\Gamma^{+}_{M,\iota}]&[\Gamma_{M,\iota}:\Gamma^{+}_{M,\iota}]\\\ \hline\cr 26&D_{4}+A_{11}+A_{2}&2&2\\\ 29&D_{4}+A_{9}+A_{4}&3&2\\\ 33&D_{4}+A_{8}+A_{5}&3&2\\\ 38/39&D_{4}+2A_{6}+A_{1}&1&1\end{array}$ Table 15. Indices of $\Gamma_{M,\iota}^{+}$ in $\operatorname{O}^{+}(T)$ and $\Gamma_{M,\iota}$. ## References * [BHPV] W. Barth, K. Hulek, C. Peters, A. Van de Ven, Compact complex surfaces. 2nd Enlarged Edition, Ergebnisse der Mathematik 3. Folge, 4, Springer Verlag 2004. * [Be] A. Beauville, Les familles stables de courbes elliptiques sur $\mathbb{P}^{1}$ admettant quatre fibres singulières. C. R. Acad. Sci. Paris Ser. I Math. 294, 657–660 (1982). * [BPT] F. Bogomolov, T. Petrov, Y. Tschinkel, Rationality of moduli of elliptic fibrations with fixed monodromy. Geom. Funct. Anal. 12:6, 1105–1160 (2002). * [BT] F. Bogomolov, Y. Tschinkel, Monodromy of elliptic surfaces. In: Galois groups and fundamental groups, Math. Sci. Res. Inst. Publ., 41, Cambridge Univ. Press, Cambridge, 2003, 167–181. * [BCNS] O. Braun, R. Coulangeon, G. Nebe, S. Schönnenbeck, Computing in arithmetic groups with Voronoi’s algorithm. Journal of Algebra, 435, 263–285 (2015). * [CS] J. H. Conway, N. J. A. Sloane, Sphere packings, lattices and groups. 3rd edition, Grundlehren der Mathematischen Wissenschaften 290, Springer Verlag 1999. * [CoPa] D.A. Cox, W.R. Parry, Torsion in elliptic curves over $k(t)$. Comp. Math. 41:3, 337–354 (1980). * [CuPa] C.J. Cummins, S.
Pauli, Congruence subgroups of $\operatorname{PSL}(2,\mathbb{Z})$ of genus less than or equal to 24. Experiment. Math. 12:2, 243–255 (2003). * [Do] I. Dolgachev, Mirror symmetry for lattice polarized $K3$ surfaces. J. Math. Sci. 81:3, 2599–2630 (1996). * [FK] Kh. Filom, A. Kamalinejad, Dessins on Modular Curves. http://arxiv.org/pdf/math/1603.01693.pdf (2016). * [FM] R. Friedman, J. Morgan, Smooth four-manifolds and complex surfaces. Ergebnisse der Mathematik, 3. Folge, 27, Springer Verlag (1994). * [GHS1] V. Gritsenko, K. Hulek, G. K. Sankaran, Abelianisation of orthogonal groups and the fundamental group of modular varieties. J. Algebra, 322:2, 463–478 (2009). * [GHS2] V. Gritsenko, K. Hulek, G. K. Sankaran, Moduli of $K3$ surfaces and irreducible symplectic manifolds. Handbook of moduli. Volume I, 459–526, International Press (2015). * [He] J. Hempel, Existence conditions for a class of modular subgroups of genus zero. Bull. Austral. Math. Soc. 66:3, 517–525 (2002). * [Hum] J. Humphreys, Reflection groups and Coxeter groups. Cambridge University Press, Camb. Stud. Adv. Math. 29 (1992). * [Hur] A. Hurwitz, Ueber Riemann’sche Flächen mit gegebenen Verzweigungspunkten. Math. Ann. 39, 1–61 (1891). * [Huy] D. Huybrechts, Lectures on $K3$ surfaces. Cambridge University Press, Camb. Stud. Adv. Math. 158 (2016). * [Ka] S. Katok, Fuchsian groups, Chicago Lectures in Mathematics, University of Chicago Press, Chicago (1992). * [Kir] M. Kirschmer, Definite quadratic and hermitian forms with small class number. Habilitation, RWTH Aachen University, 2016. * [Kl] R. Kloosterman, Higher Noether-Lefschetz loci of elliptic surfaces. J. Diff. Geom. 76:2, 293–316 (2007). * [Le] P. Lejarraga, The moduli of Weierstrass fibrations over $\mathbb{P}^{1}$: Rationality. Rocky Mt. J. Math., 23:2, 649–650 (1993). * [Magma] W. Bosma, J. Cannon, C. Playoust, The Magma algebra system. I. The user language. J. Symbolic Comput., 24:3-4, 235–265 (1997). * [MS] J. McKay, A. Sebbar.
J-invariants of arithmetic semistable elliptic surfaces and graphs. Proceedings on Moonshine and related topics, Montréal, QC, 119–130, 1999. CRM Proceedings and Lecture Notes 30, American Mathematical Society, Providence, RI, 2001. * [OM] O.T. O’Meara, Introduction to Quadratic Forms. Grundlehren der Mathematischen Wissenschaften, 117, Springer Verlag (1973). * [Me] M. H. Mertens, Automorphism groups of hyperbolic lattices. J. Algebra, 408, 147–165 (2014). * [Mi81] R. Miranda, The moduli of Weierstrass fibrations over $\mathbb{P}^{1}$. Math. Ann. 255:3, 379–394 (1981). * [Mi89] R. Miranda, The basic theory of elliptic surfaces. Dottorato di Ricerca in Matematica, ETS Editrice, Pisa, 1989. * [Mi90] R. Miranda, Persson’s list of singular fibres for a rational elliptic surface. Math. Z. 205, 191–211 (1990). * [MM] R. Miranda, D. Morrison, Embeddings of integral quadratic forms (electronic). http://www.math.ucsb.edu/drm/manuscripts/eiqf.pdf (2009). * [MKS] W. Magnus, A. Karrass, D. Solitar, Combinatorial group theory. Presentations of groups in terms of generators and relations. 2nd Edition, Dover Publications, Inc., New York (1976). * [Nik] V. Nikulin, Integral symmetric bilinear forms and some of their applications. Izv. Akad. Nauk SSSR Ser. Mat. 43:1, 111–177, 238 (1979). * [OO] Y. Odaka, Y. Oshima, Collapsing $K3$ surfaces, tropical geometry and moduli compactifications of Satake, Morgan-Shalen type. arXiv:1810.07685 (2018). * [OS] K. Oguiso, T. Shioda, The Mordell-Weil lattice of a rational elliptic surface. Comment. Math. Univ. St. Paul. 40, 83–99 (1991). * [ScSh] M. Schütt, T. Shioda, Mordell-Weil lattices. Springer Verlag, Ergebnisse der Mathematik 3. Folge, 70 (2019). * [Se] A. Sebbar, Classification of torsion-free genus zero congruence groups. Proc. Amer. Math. Soc. 129:9 (2001). * [Shi1] I. Shimada, On elliptic K3 surfaces. Michigan Math. J. 47:3, 423–446 (2000).
extended version including table 1 published as http://arxiv.org/pdf/math/0505140.pdf (2005). * [Shi2] I. Shimada, Connected Components of the Moduli of Elliptic $K3$ Surfaces. Michigan Math. J. 67:3, 511–559 (2018). * [Shi3] I. Shimada, Connected components of the moduli of elliptic K3 surfaces: computational data. http://www.math.sci.hiroshima-u.ac.jp/~shimada/K3.html (2016). * [Shi4] I. Shimada, A note on Miranda-Morrison theory. http://www.math.sci.hiroshima-u.ac.jp/shimada/preprints/ConnEllK3/NoteMM.pdf (2016). * [Shio] T. Shioda, On the Mordell-Weil lattices. Comment. Math. Univ. St. Paul. 39, 211–240 (1990). * [Wo] K. Wohlfahrt, An Extension of F. Klein’s Level Concept. Illinois J. Math. 8, 529–535 (1964). * [YaYo] T. Yamaguchi, S. Yokura, Poset-stratified space structures of homotopy sets. Homology Homotopy Appl. 21:2, 1–22 (2019). * [Za] H. Zassenhaus, On the spinor norm. Arch. Math., 13, 434–451 (1962).
# Soliton resolution for the complex short pulse equation with weighted Sobolev initial data 111Corresponding author. _E-mail addresses_<EMAIL_ADDRESS><EMAIL_ADDRESS>(S. F. Tian) Zhi-Qiang Li, Shou-Fu Tian∗ and Jin-Jie Yang School of Mathematics, China University of Mining and Technology, Xuzhou 221116, People’s Republic of China ###### Abstract We employ the $\bar{\partial}$-steepest descent method to investigate the Cauchy problem of the complex short pulse (CSP) equation with initial conditions in the weighted Sobolev space $H^{1,1}(\mathbb{R})=\\{f\in L^{2}(\mathbb{R}):f^{\prime},xf\in L^{2}(\mathbb{R})\\}$. The long time asymptotic behavior of the solution $u(x,t)$ is derived in a fixed space-time cone $S(y_{1},y_{2},v_{1},v_{2})=\\{(y,t)\in\mathbb{R}^{2}:y=y_{0}+vt,~{}y_{0}\in[y_{1},y_{2}],~{}v\in[v_{1},v_{2}]\\}$. Based on the resulting asymptotic behavior, we prove the soliton resolution conjecture for the CSP equation: the solution decomposes into a soliton term, given by the $N(I)$ solitons associated with the discrete spectrum, plus a $t^{-\frac{1}{2}}$ order term arising from the continuous spectrum, with residual error up to $O(t^{-1})$. ###### keywords: Integrable system , The complex short pulse equation , Riemann-Hilbert problem , $\bar{\partial}$-steepest descent method , Soliton resolution. ††journal: Journal of LaTeX Templates ###### Contents 1. 1 Introduction 2. 2 The spectral analysis of CSP equation 1. 2.1 The case of singularity at z=0 2. 2.2 The case of singularity at z=$\infty$ 3. 2.3 The scattering matrix 4. 2.4 The connection between $\mu_{\pm}(x,t;z)$ and $\mu^{0}_{\pm}(x,t;z)$ 3. 3 The formulation of a RHP 4. 4 Conjugation 5. 5 Continuous extension to a mixed $\bar{\partial}$-RH problem 6. 6 Decomposition of the mixed $\bar{\partial}$-RH problem 7. 7 The pure RH problem 1. 7.1 Outer model RH problem: $M^{(out)}$ 1. 7.1.1 Renormalization of the RHP for reflectionless case 2. 7.1.2 Long-time behavior of soliton solutions 2.
7.2 Local solvable model near phase point $z=\pm z_{0}$ 3. 7.3 The small-norm RHP for $E(z)$ 8. 8 Pure $\bar{\partial}$-RH problem 9. 9 Soliton resolution for the CSP equation 10. 10 Appendix A: The parabolic cylinder model problem 11. 11 Appendix B: Detailed calculations for the pure $\bar{\partial}$-Problem ## 1 Introduction In nonlinear optics, the well-known nonlinear Schrödinger (NLS) equation can be used to model pulse propagation in optical fibers [1]. The NLS equation effectively approximates Maxwell’s equations [2] when the amplitude changes slowly. Therefore, much attention has been paid to the research of NLS-type equations [3]-[6]. However, when the pulse becomes shorter, i.e., when the width of the optical pulse is on the order of femtoseconds ($10^{-15}\,\mathrm{s}$), the NLS equation is no longer suitable for describing optical pulse propagation [7]. In 2004, Schäfer and Wayne proposed the short pulse (SP) equation [8] $\displaystyle q_{xt}(x,t)=q(x,t)+\frac{1}{6}(q^{3}(x,t))_{xx},$ (1.1) which can be used to describe ultra-short optical pulses and approximates the corresponding solution of Maxwell’s equations more effectively. More importantly, the SP equation (1.1) can be viewed as the short-wave limit of the modified Camassa-Holm (CH) equation [9]-[10] $\displaystyle m_{t}+\left((u^{2}-u^{2}_{x})m\right)_{x}+2u_{x}=0.$ (1.2) That is, the SP equation can be obtained from the modified CH equation via a suitable transformation. Since the CH equation and the modified CH equation have rich mathematical structures and properties [11]-[15], it is meaningful to study the SP equation (1.1). However, $q(x,t)$ is a real-valued function in Eq.(1.1), which implies that the one-soliton solution of the SP equation (1.1) possesses no physical interpretation, although the SP equation (1.1) is derived from a physical background [16, 17].
In order to study solutions of the SP equation in an actual physical context, Feng proposed in 2015 the so-called complex short pulse (CSP) equation [18] $\displaystyle u_{xt}+u+\frac{1}{2}(|u|^{2}u_{x})_{x}=0,$ (1.3) where $u(x,t)$ is a complex-valued function. It is worth noting that both the amplitude and the phase can be described by a complex-valued function. Thus, it is more effective to use the CSP equation to describe ultra-short optical pulse propagation in optical fibers. Moreover, like the SP equation [19], the CSP equation (1.3) also admits a Wadati-Konno-Ichikawa (WKI)-type Lax pair [18, 20]. Since then, much work on the CSP equation (1.3) has been done. For example, via the Hirota method and the Darboux transformation method, soliton solutions, multi-breathers and higher-order rogue wave solutions of the CSP equation (1.3) have been reported [18, 21]. Moreover, the conservation laws of the CSP equation (1.3) have been studied in [22]. From the Lax pair representation (2.1), the following formula can be obtained by employing the transformation $\Gamma=\psi_{2}\psi_{1}^{-1}$, i.e., $\displaystyle 2zu_{x}\Gamma=zu_{x}u_{x}^{*}-u_{x}(u^{-1}_{x}\cdot u_{x}\Gamma)_{x}-z(u_{x}\Gamma)^{2}.$ (1.4) Expanding $u_{x}\Gamma$ as follows $\displaystyle u_{x}\Gamma=\sum_{n=0}^{\infty}F_{n}z^{-n},$ and substituting it into Eq.(1.4), it is easy to derive that $F_{n}$ satisfies the following recurrence relation $\displaystyle 2F_{n}=u_{x}u_{x}^{*}\delta_{n,0}-u_{x}(u_{x}^{-1}F_{n-1})_{x}-\sum_{\ell=0}^{n}F_{\ell}F_{n-\ell},$ with the convention $F_{-1}:=0$. The conserved densities turn out to be $\displaystyle F_{0}$ $\displaystyle=-1+\sqrt{1+|u_{x}|^{2}},~{}~{}F_{1}=\frac{u_{x}^{-1}u_{xx}(-1+\sqrt{1+|u_{x}|^{2}})}{2\sqrt{1+|u_{x}|^{2}}}-\frac{u^{*}_{x}u_{xx}+u_{x}u^{*}_{xx}}{4(1+|u_{x}|^{2})},$ $\displaystyle F_{2}$ $\displaystyle=\frac{u_{x}^{-1}u_{xx}F_{1}-F_{1,x}-F^{2}_{1}}{2\sqrt{1+|u_{x}|^{2}}},\ldots.$ Then the conserved quantities can be expressed as $\displaystyle I_{0}$ $\displaystyle=\int_{-\infty}^{+\infty}\left(-1+\sqrt{1+|u_{x}(x,t)|^{2}}\right)dx,$ $\displaystyle I_{1}$ $\displaystyle=\int_{-\infty}^{+\infty}\left(\frac{u_{x}^{-1}u_{xx}(-1+\sqrt{1+|u_{x}|^{2}})}{2\sqrt{1+|u_{x}|^{2}}}-\frac{u^{*}_{x}u_{xx}+u_{x}u^{*}_{xx}}{4(1+|u_{x}|^{2})}\right)dx,$ $\displaystyle I_{2}$ $\displaystyle=\int_{-\infty}^{+\infty}\left(\frac{u_{x}^{-1}u_{xx}F_{1}-F_{1,x}-F^{2}_{1}}{2\sqrt{1+|u_{x}|^{2}}}\right)dx,\ldots.$ In addition, applying the nonlinear steepest descent method of Deift and Zhou, Xu and Fan [23] derived the long time asymptotic behavior of the CSP equation (1.3) with residual error up to $O(\frac{\log t}{t})$. In this work, we employ the $\bar{\partial}$-steepest descent method to investigate the soliton resolution for the CSP equation with the initial value condition $\displaystyle u(x,0)=u_{0}(x)\in H^{1,1}(\mathbb{R}),$ (1.5) where $\displaystyle H^{1,1}(\mathbb{R})=\\{f\in L^{2}(\mathbb{R}):f^{\prime},xf\in L^{2}(\mathbb{R})\\}.$ (1.6) Compared with the result reported in [23], our work has a clear advantage in the study of the long time asymptotic behavior of the CSP equation (1.3): the accuracy of our asymptotic result reaches $O(t^{-1})$, which was not achieved in the previous work [23]. Since Manakov first paid attention to the long time asymptotic behavior of nonlinear evolution equations [24], this topic has attracted wide attention. In 1976, Zakharov and Manakov derived the long time asymptotic solutions of the NLS equation with decaying initial value [25]. In 1993, Deift and Zhou developed a nonlinear steepest descent method which can be used to systematically study the long time asymptotic behavior of nonlinear evolution equations [26]. After years of unremitting research, the nonlinear steepest descent method has been improved considerably. For example, when the initial value is smooth and decays fast enough, the error term is $O(\frac{\log t}{t})$, as shown in [27, 28].
The work [29] shows that the error term is $O(t^{-(\frac{1}{2}+\iota)})$ for any $0<\iota<\frac{1}{4}$ when the initial value belongs to the weighted Sobolev space (1.6). In recent years, combining the steepest descent method with a $\bar{\partial}$-problem, McLaughlin and Miller [30, 31] developed a $\bar{\partial}$-steepest descent method to study the asymptotics of orthogonal polynomials. Then, this method was successfully used to investigate the defocusing NLS equation with finite mass initial data [32] and with finite density initial data [33]. It should be pointed out that, different from the nonlinear steepest descent method, the delicate estimates involving $L^{p}$ estimates of Cauchy projection operators can be avoided by using the $\bar{\partial}$-steepest descent method. Also, the work in [32] shows that the error term is $O(t^{-\frac{3}{4}})$ when the initial value belongs to the weighted Sobolev space (1.6). Therefore, a series of works has been carried out by applying the $\bar{\partial}$-steepest descent method [34]-[39]. In [37], Yang and Fan gave the long time asymptotic behavior of the solution $q(x,t)$ of the SP equation (1.1) via the $\bar{\partial}$-steepest descent method. In this work, we extend the above results to derive the long time asymptotic behavior of the solution $u(x,t)$ of the CSP equation (1.3). It is worth noting that there are some differences from the case of the SP equation (1.1), as shown in the following four aspects. 1. (I) When we construct the Riemann-Hilbert problem (RHP) corresponding to the initial value problem for the CSP equation (1.3), an improved transformation needs to be introduced to guarantee that the eigenfunctions tend to the identity matrix as the spectral parameter $z\rightarrow\infty$. An obvious consequence is that there exists an exponential term in the solution $u(x,t)$, as shown in (3.18). 2.
(II) Compared with the case of the SP equation, the symmetry condition, i.e., $\displaystyle M(x,t,-z)=\sigma_{2}M(x,t,z)\sigma_{2},$ (1.7) does not exist when we construct the RHP corresponding to the CSP equation (1.3). 3. (III) Since the symmetry condition (1.7) does not exist, it is necessary to analyze the local model problems around the phase points $z=\pm z_{0}$ separately, see subsection 7.2. Also for this reason, the final results we obtain in this work are essentially different from the case of the SP equation. 4. (IV) Due to the difference between the Lax pairs of the CSP equation and the SP equation, $\theta(z)$, which is defined in section 4, is different from the case of the SP equation, which influences the analysis of the $\bar{\partial}$-RH problem for $M^{(3)}(z)$ defined in (6.1). We need to adopt some different scaling techniques to investigate the estimates of $M^{(3)}$, see section 8. Our main result and a remark on the soliton resolution conjecture for the CSP equation (1.3) are given as follows. ###### Theorem 1.1. Suppose that the initial value $u_{0}(x)$ satisfies Assumption 3.6 and $u_{0}(x)\in H^{1,1}(\mathbb{R})$. Let $u(x,t)$ be the solution of the CSP equation (1.3). The scattering data is denoted as $\\{r,\\{z_{k},c_{k}\\}_{k=1}^{N}\\}$, which is generated from the initial value $u_{0}(x)$.
For fixed $y_{1},y_{2},v_{1},v_{2}\in\mathbb{R}$ with $y_{1}<y_{2}$, $v_{1}<v_{2}\in\mathbb{R}^{-}$, and $I=\\{z:-\frac{1}{4v_{1}}<|z|^{2}<-\frac{1}{4v_{2}}\\}$, $z^{2}_{0}=\frac{t}{4y}$, then as $t\rightarrow\infty$ and $(y,t)\in S(y_{1},y_{2},v_{1},v_{2})$ which is defined in (7.15), the solution $u(x,t)$ can be expressed as $\displaystyle\begin{split}u(x,t)e^{-2d}&=u(y(x,t),t)e^{-2d}\\\ &=u_{sol}(y(x,t),t;\sigma_{d}(I))T^{2}(0)(1+T_{1})-it^{-\frac{1}{2}}f^{\pm}_{12}+O(t^{-1}),\\\ y(x,t)=x-&c_{+}(x,t,\sigma_{d}(I))-iT_{1}^{-1}-it^{-\frac{1}{2}}f^{\pm}_{11}+O(t^{-1}).\end{split}$ (1.8) Here, $u_{sol}(x,t;\hat{\sigma}_{d}(I))$ is the $N(I)$ soliton solution, $T(z)$ is defined in (4.6), and $\displaystyle f^{\pm}_{12}=\frac{1}{i\sqrt{z_{0}}}$ $\displaystyle[M^{(out)}(0)^{-1}(M^{(out)}(z_{0})^{-1}M_{1}^{pc,\pm}(z_{0})M^{(out)}(z_{0})$ $\displaystyle+M^{(out)}(-z_{0})^{-1}M_{1}^{(pc),\pm}(-z_{0})M^{(out)}(-z_{0}))M^{(out)}(0)]_{12},$ $\displaystyle f^{\pm}_{11}=\frac{1}{i\sqrt{z_{0}}}$ $\displaystyle[M^{(out)}(0)^{-1}(M^{(out)}(z_{0})^{-1}M_{1}^{pc,\pm}(z_{0})M^{(out)}(z_{0})$ $\displaystyle+M^{(out)}(-z_{0})^{-1}M_{1}^{(pc),\pm}(-z_{0})M^{(out)}(-z_{0}))M^{(out)}(0)]_{11}.$ ###### Remark 1.2. Theorem 1.1 needs the condition $u_{0}(x)\in H^{1,1}(\mathbb{R})$ so that the inverse scattering transform possesses well-defined mapping properties. Also, the condition $u_{0}(x)\in H^{1,1}(\mathbb{R})$ guarantees that there exists no discrete spectrum on the real axis. It is noted that the asymptotic results only depend on the $H^{1}(\mathbb{R})$ norm of $r$; therefore, for any $u_{0}(x)\in H^{1,1}(\mathbb{R})$ satisfying Assumption 3.6, the process of the large-time analysis and the calculations shown in this work are unchanged. Organization of the rest of the work. In section 2, based on the Lax pair of the CSP equation, we introduce two kinds of eigenfunctions to deal with the spectral singularities. Also, their analyticity, symmetry and asymptotic properties are analyzed.
In section 3, using similar ideas to [23], the RHP for $M(z)$ is constructed for the initial value problem of the CSP equation. In section 4, in order to obtain a new RHP for $M^{(1)}(z)$ whose jump matrix can be decomposed into two triangular matrices near the phase points $z=\pm z_{0}$, we introduce the matrix function $T(z)$ to define the new RHP. In section 5, we make the continuous extension of the jump matrix off the real axis by introducing a matrix function $R^{(2)}(z)$ and obtain a mixed $\bar{\partial}$-Riemann-Hilbert (RH) problem. In section 6, we decompose the mixed $\bar{\partial}$-RH problem into two parts, namely a model RH problem with $\bar{\partial}R^{(2)}=0$ and a pure $\bar{\partial}$-RH problem with $\bar{\partial}R^{(2)}\neq 0$, i.e., $M^{(2)}_{RHP}$ and $M^{(3)}$. In section 7, we solve the model RH problem $M^{(2)}_{RHP}$ via an outer model $M^{(out)}(z)$ for the soliton part and inner models $M^{(\pm z_{0})}$ near the phase points $\pm z_{0}$, which can be solved by matching with the parabolic cylinder model problem. Also, the error function $E(z)$ with a small-norm RH problem is obtained. In section 8, the pure $\bar{\partial}$-RH problem for $M^{(3)}$ is studied. Finally, in section 9, we obtain the soliton resolution and the long time asymptotic behavior of the CSP equation. ## 2 The spectral analysis of CSP equation In order to study the soliton resolution of the initial value problem (IVP) for the CSP equation via the $\bar{\partial}$-steepest descent method, we first construct a RHP based on the Lax pair of the CSP equation.
The WKI-type Lax pair of the CSP equation reads $\displaystyle\psi_{x}(x,t,z)=U(x,t,z)\psi(x,t,z),~{}~{}\psi_{t}(x,t,z)=V(x,t,z)\psi(x,t,z),$ (2.1) where $\displaystyle U(x,t,z)=izU_{1}=iz(\sigma_{3}+U_{0x}),$ $\displaystyle V(x,t,z)=-\frac{iz}{2}|u|^{2}U_{1}-\frac{1}{4iz}\sigma_{3}+\frac{1}{2}V_{0},$ with $\displaystyle U_{0}=\left(\begin{array}[]{cc}0&u\\\ u^{*}&0\\\ \end{array}\right),~{}~{}\sigma_{3}=\left(\begin{array}[]{cc}1&0\\\ 0&-1\\\ \end{array}\right),~{}~{}V_{0}=\left(\begin{array}[]{cc}0&u\\\ -u^{*}&0\\\ \end{array}\right).$ Here, $u^{*}$ denotes the complex conjugate of the potential function $u$. Generally, when we deal with the IVP of integrable equations, we only employ the $x$-part of the Lax pair, based on the inverse scattering transform method; the $t$-part of the Lax pair is used to control the time evolution of the scattering data. However, the Lax pair (2.1) of the CSP equation possesses two singularities, i.e., $z=0$ and $z=\infty$. Consequently, in order to recover the potential function $u(x,t)$, both the $t$-part of the Lax pair and the expansion of the eigenfunctions as the spectral parameter $z\rightarrow 0$ are required. Therefore, we deal with the two singularities at $z=0$ and $z=\infty$ by applying two different transformations in the following analysis. ### 2.1 The case of singularity at z=0 We first introduce a transformation $\displaystyle\psi(x,t;z)=\mu^{0}(x,t;z)e^{i(zx+\frac{1}{4z}t)\sigma_{3}},$ (2.2) then an equivalent Lax pair can be derived as $\displaystyle\begin{split}\mu^{0}_{x}&-iz[\sigma_{3},\mu^{0}]=U_{2}\mu^{0},\\\ \mu^{0}_{t}&-\frac{i}{4z}[\sigma_{3},\mu^{0}]=V_{2}\mu^{0},\end{split}$ (2.3) where $\displaystyle U_{2}=izU_{0x},~{}~{}V_{2}=-\frac{iz}{2}|u|^{2}U_{1}+\frac{1}{2}V_{0},$ and $\mu^{0}=\mu^{0}(x,t;z)$. Additionally, $[A,B]$ means $AB-BA$, where $A$ and $B$ are $2\times 2$ matrices.
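As a quick consistency check (an addition of ours, not part of the original derivation), one can verify symbolically that the zero-curvature condition $U_{t}-V_{x}+[U,V]=0$ of the Lax pair (2.1) reproduces the CSP equation (1.3) and its conjugate; below, the conjugate $u^{*}$ is treated as an independent function $v$:

```python
import sympy as sp

x, t, z = sp.symbols('x t z')
u = sp.Function('u')(x, t)   # potential u(x,t)
v = sp.Function('v')(x, t)   # plays the role of the conjugate u^*

sig3 = sp.Matrix([[1, 0], [0, -1]])
U0x = sp.Matrix([[0, sp.diff(u, x)], [sp.diff(v, x), 0]])
U1 = sig3 + U0x
V0 = sp.Matrix([[0, u], [-v, 0]])
m2 = u*v                     # |u|^2 with v = u^*

U = sp.I*z*U1
V = -sp.I*z/2*m2*U1 - sig3/(4*sp.I*z) + V0/2

# zero-curvature condition U_t - V_x + [U, V]
ZC = (sp.diff(U, t) - sp.diff(V, x) + U*V - V*U).expand()

# off-diagonal entries reproduce (1.3) and its conjugate; diagonal vanishes
csp_u = sp.diff(u, x, t) + u + sp.diff(m2*sp.diff(u, x), x)/2
csp_v = sp.diff(v, x, t) + v + sp.diff(m2*sp.diff(v, x), x)/2
assert ZC[0, 0].expand() == 0 and ZC[1, 1].expand() == 0
assert (ZC[0, 1] - sp.I*z*csp_u).expand() == 0
assert (ZC[1, 0] - sp.I*z*csp_v).expand() == 0
```

In other words, $\psi_{xt}=\psi_{tx}$ holds for all $z$ exactly when $u$ solves (1.3), which is the sense in which (2.1) is a Lax pair for the CSP equation.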
The Lax pair (2.3) can be written in full derivative form $\displaystyle d(e^{-i(zx+\frac{1}{4z}t)\hat{\sigma}_{3}}\mu^{0})=e^{-i(zx+\frac{1}{4z}t)\hat{\sigma}_{3}}(U_{2}dx+V_{2}dt)\mu^{0},$ (2.4) where $e^{\hat{\sigma}_{3}}A=e^{\sigma_{3}}Ae^{-\sigma_{3}}$. By selecting two special integration paths, i.e., $(-\infty,t)\rightarrow(x,t)$ and $(+\infty,t)\rightarrow(x,t)$, in Eq.(2.4), we define two eigenfunctions $\mu^{0}_{\pm}(x,t;z)$ which can be derived as the following Volterra type integrals $\displaystyle\begin{matrix}\mu^{0}_{-}(x,t;z)=\mathbb{I}+\int_{x}^{-\infty}e^{iz(x-y)\hat{\sigma}_{3}}U_{2}(y,t;z)\mu^{0}_{-}(y,t;z)dy,\\\ \mu^{0}_{+}(x,t;z)=\mathbb{I}-\int_{x}^{+\infty}e^{iz(x-y)\hat{\sigma}_{3}}U_{2}(y,t;z)\mu^{0}_{+}(y,t;z)dy.\end{matrix}$ (2.5) Then we can derive the analytic and asymptotic properties of $\mu^{0}_{\pm}(x,t;z)$. ###### Proposition 2.3. The properties of $\mu^{0}_{\pm}(x,t;z)$: * 1. (Analytic property) It is assumed that $u(x)-u_{0}\in H^{1,1}(\mathbb{R})$. Then, $\mu^{0}_{-,1},\mu^{0}_{+,2}$ are analytic in $\mathbb{C}^{-}$ and $\mu^{0}_{-,2},\mu^{0}_{+,1}$ are analytic in $\mathbb{C}^{+}$. Here $\mu^{0}_{\pm,j}~{}(j=1,2)$ denote the $j$-th column of $\mu^{0}_{\pm}$. * 2. (Asymptotic property) The functions $\mu^{0}_{\pm}(x,t;z)$ admit the following asymptotic expansions as $z\rightarrow 0$, $\displaystyle\mu^{0}_{\pm}(x,t;z)=\mathbb{I}+\left(\begin{array}[]{cc}0&iu(x,t)\\\ iu^{*}(x,t)&0\\\ \end{array}\right)z+O(z^{2}).$ (2.8) ### 2.2 The case of singularity at z=$\infty$ Considering the singularity at $z=\infty$, we need to control the asymptotic behavior of the eigenfunctions as $z\rightarrow\infty$.
Thus, following the idea in [23], we introduce the transformation $\displaystyle\psi(x,t;z)=G(x,t)\phi e^{izp(x,t;z)\sigma_{3}},$ (2.9) where $\displaystyle G(x,t)=\sqrt{\frac{\sqrt{m(x,t)}+1}{2\sqrt{m(x,t)}}}\left(\begin{array}[]{cc}1&-\frac{\sqrt{m(x,t)}-1}{u^{*}_{x}(x,t)}\\\ \frac{\sqrt{m(x,t)}-1}{u_{x}(x,t)}&1\\\ \end{array}\right),$ $\displaystyle m(x,t)=1+|u_{x}|^{2},~{}~{}p(x,t;z)=x-\int_{x}^{\infty}(\sqrt{m(s,t)}-1)ds+\frac{t}{4z^{2}}.$ Consequently, the CSP equation (1.3) can be written in conservation law form $\displaystyle\left(\sqrt{m(x,t)}\right)_{t}=-\frac{1}{2}\left(|u(x,t)|^{2}\sqrt{m(x,t)}\right)_{x},$ and the derivative of function $p(x,t;z)$ with respect to $x$ and $t$ can be derived as $\displaystyle p_{x}(x,t;z)=\sqrt{m(x,t)},~{}~{}p_{t}(x,t;z)=-\frac{1}{2}|u(x,t)|^{2}\sqrt{m(x,t)}+\frac{1}{4z^{2}}.$ (2.10) Also the equivalent Lax pair of $\psi(x,t;z)$ (2.1) is transformed into $\displaystyle\begin{split}\phi_{x}-izp_{x}[\sigma_{3},\phi]=U_{3}\phi,\\\ \phi_{t}-izp_{t}[\sigma_{3},\phi]=V_{3}\phi,\end{split}$ (2.11) where $\displaystyle U_{3}=$ $\displaystyle-\left(\begin{array}[]{cc}\frac{u_{x}u^{*}_{xx}-u_{xx}u^{*}_{x}}{4\sqrt{m}(\sqrt{m}+1)}&\frac{(\sqrt{m}-1)u_{x}u^{*}_{xx}-(\sqrt{m}+1)u_{xx}u^{*}_{x}}{4mu_{x}^{*}}\\\ \frac{(\sqrt{m}+1)u_{x}u^{*}_{xx}-(\sqrt{m}-1)u_{xx}u^{*}_{x}}{4mu_{x}^{*}}&-\frac{u_{x}u^{*}_{xx}-u_{xx}u^{*}_{x}}{4\sqrt{m}(\sqrt{m}+1)}\\\ \end{array}\right),$ $\displaystyle V_{3}=$ $\displaystyle-\frac{1}{4iz}\frac{1}{\sqrt{m}}\sigma_{3}+\frac{1}{4iz}\frac{1}{\sqrt{m}}\left(\begin{array}[]{cc}0&u_{x}\\\ u_{x}^{*}&0\\\ \end{array}\right)+\frac{1}{4iz}\sigma_{3}$ $\displaystyle-\frac{1}{4\sqrt{m}}\left(\begin{array}[]{cc}u^{*}u_{x}-uu_{x}^{*}&-\frac{(\sqrt{m}+1)uu^{*}_{x}+(\sqrt{m}-1)u_{x}u^{*}}{u_{x}^{*}}\\\ \frac{(\sqrt{m}-1)uu^{*}_{x}+(\sqrt{m}+1)u_{x}u^{*}}{u_{x}}&-u^{*}u_{x}+uu_{x}^{*}\\\ \end{array}\right)$ 
$\displaystyle-\left(\begin{array}[]{cc}\frac{u_{x}u^{*}_{xt}-u_{xt}u^{*}_{x}}{4\sqrt{m}(\sqrt{m}+1)}&\frac{(\sqrt{m}-1)u_{x}u^{*}_{xt}-(\sqrt{m}+1)u_{xt}u^{*}_{x}}{4mu_{x}^{*}}\\\ \frac{(\sqrt{m}+1)u_{x}u^{*}_{xt}-(\sqrt{m}-1)u_{xt}u^{*}_{x}}{4mu_{x}^{*}}&-\frac{u_{x}u^{*}_{xt}-u_{xt}u^{*}_{x}}{4\sqrt{m}(\sqrt{m}+1)}\\\ \end{array}\right).$ Based on the Lax pair (2.11), it is not hard to verify that the solutions of the spectral problem do not approach the identity matrix as $z\rightarrow\infty$, which will cause difficulties in constructing the RHP. Therefore, we need to introduce an improved transformation $\displaystyle\psi(x,t;z)=G(x,t)e^{d_{-}\hat{\sigma}_{3}}\mu(x,t;z)e^{-d_{+}\sigma_{3}}e^{izp(x,t;z)\sigma_{3}},$ (2.12) where $\displaystyle d_{-}=\int_{-\infty}^{x}\frac{u_{xx}u^{*}_{x}-u_{x}u^{*}_{xx}}{4\sqrt{m}(\sqrt{m}+1)}(s,t)ds,~{}~{}d_{+}=\int^{+\infty}_{x}\frac{u_{xx}u^{*}_{x}-u_{x}u^{*}_{xx}}{4\sqrt{m}(\sqrt{m}+1)}(s,t)ds,$ $\displaystyle d=d_{+}+d_{-}=\int_{-\infty}^{+\infty}\frac{u_{xx}u^{*}_{x}-u_{x}u^{*}_{xx}}{4\sqrt{m}(\sqrt{m}+1)}(s,t)ds.$ Then, the equivalent Lax pair of $\psi(x,t;z)$ (2.1) can be written as $\displaystyle\begin{split}\mu_{x}-izp_{x}[\sigma_{3},\mu]=e^{-d_{-}\hat{\sigma}_{3}}U_{4}\mu,\\\ \mu_{t}-izp_{t}[\sigma_{3},\mu]=e^{-d_{-}\hat{\sigma}_{3}}V_{4}\mu,\end{split}$ (2.13) where $\displaystyle U_{4}=$ $\displaystyle-\left(\begin{array}[]{cc}0&\frac{(\sqrt{m}-1)u_{x}u^{*}_{xx}-(\sqrt{m}+1)u_{xx}u^{*}_{x}}{4mu_{x}^{*}}\\\ \frac{(\sqrt{m}+1)u_{x}u^{*}_{xx}-(\sqrt{m}-1)u_{xx}u^{*}_{x}}{4mu_{x}^{*}}&0\\\ \end{array}\right),$ $\displaystyle V_{4}=$ $\displaystyle-\frac{1}{4iz}(\frac{1}{\sqrt{m}}-1)\sigma_{3}+\frac{1}{4iz}\frac{1}{\sqrt{m}}\left(\begin{array}[]{cc}0&u_{x}\\\ u_{x}^{*}&0\\\ \end{array}\right)$ $\displaystyle-\frac{1}{4\sqrt{m}}\left(\begin{array}[]{cc}0&-\frac{(\sqrt{m}+1)uu^{*}_{x}+(\sqrt{m}-1)u_{x}u^{*}}{u_{x}^{*}}\\\ \frac{(\sqrt{m}-1)uu^{*}_{x}+(\sqrt{m}+1)u_{x}u^{*}}{u_{x}}&0\\\ \end{array}\right)$
$\displaystyle-\left(\begin{array}[]{cc}0&\frac{(\sqrt{m}-1)u_{x}u^{*}_{xt}-(\sqrt{m}+1)u_{xt}u^{*}_{x}}{4mu_{x}^{*}}\\\ \frac{(\sqrt{m}+1)u_{x}u^{*}_{xt}-(\sqrt{m}-1)u_{xt}u^{*}_{x}}{4mu_{x}^{*}}&0\\\ \end{array}\right).$ Furthermore, Eq.(2.13) can be written in full derivative form $\displaystyle d(e^{-izp(x,t;z)\hat{\sigma}_{3}}\mu)=e^{-izp(x,t;z)\hat{\sigma}_{3}}e^{-d_{-}\hat{\sigma}_{3}}(U_{4}dx+V_{4}dt)\mu,$ (2.14) from which we can derive two Volterra type integrals $\displaystyle\begin{matrix}\mu_{-}(x,t;z)=\mathbb{I}+\int_{x}^{-\infty}e^{iz[p(x,t;z)-p(s,t;z)]\hat{\sigma}_{3}}e^{-d_{-}\hat{\sigma}_{3}}U_{4}(s,t;z)\mu_{-}(s,t;z)ds,\\\ \mu_{+}(x,t;z)=\mathbb{I}-\int_{x}^{+\infty}e^{iz[p(x,t;z)-p(s,t;z)]\hat{\sigma}_{3}}e^{-d_{-}\hat{\sigma}_{3}}U_{4}(s,t;z)\mu_{+}(s,t;z)ds.\end{matrix}$ (2.15) Based on the definition of $\mu(x,t;z)$ and the above integrals (2.15), we can derive the properties of $\mu(x,t;z)$, including its analyticity, symmetry and asymptotic behavior. ###### Proposition 2.4. The properties of $\mu(x,t;z)$: * 1. (Analytic property) It is assumed that $u(x)-u_{0}\in H^{1,1}(\mathbb{R})$. Then, $\mu_{-,1},\mu_{+,2}$ are analytic in $\mathbb{C}^{-}$ and $\mu_{-,2},\mu_{+,1}$ are analytic in $\mathbb{C}^{+}$. Here $\mu_{\pm,j}~{}(j=1,2)$ denote the $j$-th column of $\mu_{\pm}$. * 2. (Symmetry property) The symmetry of the eigenfunctions $\mu_{\pm}(x,t;z)$ can be shown as $\displaystyle\mu^{*}_{\pm}(x,t;z^{*})=\sigma_{2}\mu_{\pm}(x,t;z)\sigma_{2},$ (2.16) where $\sigma_{2}=\left(\begin{array}[]{cc}0&-i\\\ i&0\\\ \end{array}\right)$. * 3.
(Asymptotic property for $z\rightarrow\infty$) The functions $\mu_{\pm}(x,t;z)$ admit the following asymptotic expansion as $z\rightarrow\infty$, $\displaystyle\mu_{\pm}(x,t;z)=\mathbb{I}+O(z^{-1}).$ (2.17) ### 2.3 The scattering matrix Considering the fact that the eigenfunctions $\mu_{\pm}(x,t;z)$ are two fundamental matrix solutions of Eq.(2.13) for $z\in\mathbb{R}$, there exists a matrix $S(z)$ such that $\displaystyle\mu_{-}(x,t;z)=\mu_{+}(x,t;z)e^{izp(x,t;z)\hat{\sigma}_{3}}S(z),$ (2.18) where $S(z)=(s_{ij}(z))~{}(i,j=1,2)$ is independent of the variables $x$ and $t$. Based on Abel’s theorem and the properties of $\mu_{\pm}(x,t;z)$ shown in Proposition 2.4, the properties of $S(z)$ can be derived. ###### Proposition 2.5. The properties of $S(z)$: * 1. (Analytic property) $s_{11}$ is analytic in $\mathbb{C}^{-}$, and $s_{22}$ is analytic in $\mathbb{C}^{+}$. * 2. (Symmetry property) The symmetry of the elements of the scattering matrix $S(z)$ can be shown as $\displaystyle s_{11}(z)=s^{*}_{22}(z^{*}),~{}~{}s_{12}(z)=-s^{*}_{21}(z^{*}).$ (2.19) * 3. (Asymptotic property for $z\rightarrow 0$) The element $s_{22}(z)$ admits the following asymptotic expansion as $z\rightarrow 0$, $\displaystyle s_{22}(z)=e^{d}\left(1+izc-\frac{c^{2}}{2}z^{2}+O(z^{3})\right).$ (2.20) ### 2.4 The connection between $\mu_{\pm}(x,t;z)$ and $\mu^{0}_{\pm}(x,t;z)$ In the following analysis, we will use the eigenfunctions $\mu_{\pm}(x,t;z)$ to construct the matrix $M(x,t;z)$ and further formulate a RHP. It is worth noting that the asymptotic behavior of $\mu_{\pm}(x,t;z)$ as $z\rightarrow 0$ plays an important role in constructing the solution $u(x,t)$. Thus, the connection between $\mu_{\pm}(x,t;z)$ and $\mu^{0}_{\pm}(x,t;z)$ is necessary.
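As an independent sanity check (our addition), the conservation-law form $\left(\sqrt{m(x,t)}\right)_{t}=-\frac{1}{2}\left(|u(x,t)|^{2}\sqrt{m(x,t)}\right)_{x}$ recorded in Section 2.2 can be verified symbolically by substituting the CSP equation (1.3) for the mixed derivatives; as before, $v$ plays the role of $u^{*}$:

```python
import sympy as sp

x, t = sp.symbols('x t')
u = sp.Function('u')(x, t)
v = sp.Function('v')(x, t)            # plays the role of u^*
m = 1 + sp.diff(u, x)*sp.diff(v, x)   # m = 1 + |u_x|^2

# residual of the conservation law (sqrt m)_t + (1/2)(|u|^2 sqrt m)_x
res = sp.diff(sp.sqrt(m), t) + sp.diff(u*v*sp.sqrt(m), x)/2

# substitute u_{xt} = -u - (1/2)(|u|^2 u_x)_x, i.e. the CSP equation,
# together with its conjugate counterpart
uxt = -u - sp.diff(u*v*sp.diff(u, x), x)/2
vxt = -v - sp.diff(u*v*sp.diff(v, x), x)/2
res = res.subs({sp.diff(u, x, t): uxt, sp.diff(v, x, t): vxt})
assert sp.simplify(res) == 0
```

The residual vanishes identically, confirming that $\sqrt{m}$ is a conserved density whenever $u$ solves (1.3); this is the identity behind the conserved quantity $c=\int_{-\infty}^{+\infty}(\sqrt{m(x,t)}-1)dx$ below.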
From Eq.(2.2) and Eq.(2.12), we can derive that $\displaystyle\mu_{\pm}(x,t;z)=e^{-d_{-}\sigma_{3}}G^{-1}\mu^{0}_{\pm}(x,t;z)e^{i(zx+\frac{1}{4z}t)\sigma_{3}}C_{\pm}(z)e^{-izp(x,t;z)\sigma_{3}}e^{d\sigma_{3}},$ (2.21) where $C_{\pm}(z)$ are independent of $x$ and $t$. Letting $x\rightarrow\pm\infty$ in Eq.(2.21), $C_{\pm}(z)$ can be solved as $\displaystyle C_{+}(z)=\mathbb{I},~{}~{}C_{-}(z)=e^{-d\sigma_{3}}e^{-izc\sigma_{3}},$ where $c=\int^{+\infty}_{-\infty}(\sqrt{m(x,t)}-1)dx$ is a quantity conserved under the dynamics governed by Eq.(1.3). Then, the connection between $\mu_{\pm}(x,t;z)$ and $\mu^{0}_{\pm}(x,t;z)$ can be obtained as $\displaystyle\begin{split}\mu_{-}(x,t;z)=e^{-d_{-}\sigma_{3}}G^{-1}(x,t)\mu^{0}_{-}(x,t;z)e^{-iz\int^{x}_{-\infty}(\sqrt{m(s,t)}-1)ds\sigma_{3}},\\\ \mu_{+}(x,t;z)=e^{-d_{-}\sigma_{3}}G^{-1}(x,t)\mu^{0}_{+}(x,t;z)e^{iz\int^{+\infty}_{x}(\sqrt{m(s,t)}-1)ds\sigma_{3}}e^{d\sigma_{3}}.\end{split}$ (2.22) ## 3 The formulation of a RHP ###### Assumption 3.6. In the following analysis, we make the following assumptions to avoid possible pathologies: * 1. For $z\in\mathbb{R}$, no spectral singularities exist, i.e., $s_{22}(z)\neq 0$; * 2. Suppose that $s_{22}(z)$ possesses $N$ zero points, denoted as $\mathcal{Z}=\\{z_{j}:Im~{}z_{j}>0\\}_{j=1}^{N}$. * 3. The discrete spectrum is simple, i.e., if $z_{0}$ is a zero of $s_{22}(z)$, then $s^{\prime}_{22}(z_{0})\neq 0$.
Now, we introduce a sectionally meromorphic matrix $\displaystyle\tilde{M}(x,t;z)=\left\\{\begin{aligned} &\tilde{M}^{+}(x,t;z)=\left(\mu_{+,1}(x,t;z),\frac{\mu_{-,2}(x,t;z)}{s_{22}(z)}\right),\quad z\in\mathbb{C}^{+},\\\ &\tilde{M}^{-}(x,t;z)=\left(\frac{\mu_{-,1}(x,t;z)}{s_{11}(z)},\mu_{+,2}(x,t;z)\right),\quad z\in\mathbb{C}^{-},\end{aligned}\right.$ (3.1) where $\tilde{M}^{\pm}(x,t;z)=\lim\limits_{\varepsilon\rightarrow 0^{+}}\tilde{M}(x,t;z\pm i\varepsilon),~{}\varepsilon\in\mathbb{R}$, and the reflection coefficients $\displaystyle r(z)=\frac{s_{12}(z)}{s_{22}(z)},~{}~{}\frac{s_{21}(z)}{s_{11}(z)}=-\frac{s^{*}_{12}(z^{*})}{s^{*}_{22}(z^{*})}=-r^{*}(z^{*})=-r^{*}(z),~{}~{}z\in\mathbb{R}.$ (3.2) Based on the above analysis, the matrix function $\tilde{M}(x,t;z)$ satisfies the following matrix RHP. ###### Riemann-Hilbert Problem 3.7. Find a matrix-valued function $\tilde{M}(x,t;z)$ with the following properties: * 1. $\tilde{M}(x,t;z)$ is meromorphic in $\mathbb{C}\setminus\mathbb{R}$; * 2. $\tilde{M}^{*}(x,t;z^{*})=\sigma_{2}\tilde{M}(x,t;z)\sigma_{2}$; * 3. $\tilde{M}^{+}(x,t;z)=\tilde{M}^{-}(x,t;z)V(x,t;z)$, $z\in\mathbb{R}$, where $\displaystyle V(x,t;z)=\left(\begin{array}[]{cc}1&r(z)e^{2izp}\\\ r^{*}(z)e^{-2izp}&1+|r(z)|^{2}\end{array}\right);$ (3.5) * 4. $\tilde{M}(x,t;z)=\mathbb{I}+O(z^{-1})$ as $z\rightarrow\infty$; ###### Remark 3.8. By Zhou’s vanishing lemma, the existence of solutions of RHP 3.7 for $(x,t)\in\mathbb{R}^{2}$ is guaranteed. As a consequence of Liouville’s theorem, if a solution exists, it is unique.
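Note that the jump matrix $V(x,t;z)$ in (3.5) has unit determinant, $\det V=(1+|r|^{2})-|r|^{2}=1$, a fact implicit in the solvability statement of Remark 3.8. A small numerical illustration (ours, with arbitrary stand-in sample values, since neither $r(z)$ nor the phase is computed here):

```python
import numpy as np

rng = np.random.default_rng(0)
for _ in range(100):
    zr = rng.normal()                    # real spectral parameter z
    p = rng.normal()                     # stand-in value for the phase p(x,t;z)
    r = rng.normal() + 1j*rng.normal()   # stand-in value for r(z)
    V = np.array([[1.0, r*np.exp(2j*zr*p)],
                  [np.conj(r)*np.exp(-2j*zr*p), 1.0 + abs(r)**2]])
    assert abs(np.linalg.det(V) - 1.0) < 1e-10
```

The determinant is exactly $1$ regardless of the values of $r$ and the phase, since the exponential factors cancel in the off-diagonal product.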
Next, in order to reconstruct the solution $u(x,t)$, the asymptotic behavior of $\tilde{M}(x,t;z)$ as $z\rightarrow 0$ needs to be taken into account, i.e., $\displaystyle\tilde{M}(x,t;z)=e^{-d_{-}\sigma_{3}}G^{-1}(x,t)\left[\mathbb{I}+z\left(ic_{+}\sigma_{3}+i\left(\begin{array}[]{cc}0&u\\\ u^{*}&0\\\ \end{array}\right)\right)+O(z^{2})\right]e^{d\sigma_{3}},~{}~{}z\rightarrow 0,$ (3.8) where $c_{+}(x,t)=\int^{+\infty}_{x}(\sqrt{m(s,t)}-1)ds$. Since $p(x,t;z)$, which appears in the jump matrix (3.5), is not explicitly known, it is still hard to obtain the solution $u(x,t)$. Thus, introducing the transformation $\displaystyle y(x,t)=x-\int^{+\infty}_{x}(\sqrt{m(s,t)}-1)ds=x-c_{+}(x,t),$ (3.9) the jump matrix can be expressed explicitly. However, we can then obtain the solution $u(x,t)$ only in implicit form: the solution will be given in terms of functions in the new scale $y$, and the original variable $x$ will also be given in terms of functions in the new scale. We further define $\displaystyle\tilde{M}(x,t;z)=M(y(x,t),t;z),$ then $M(y(x,t),t;z)$ satisfies the following matrix RHP. ###### Riemann-Hilbert Problem 3.9. Find a matrix-valued function $M(y,t;z)$ with the following properties: * 1. $M(y,t;z)$ is meromorphic in $\mathbb{C}\setminus\mathbb{R}$; * 2. $M^{*}(y,t;z^{*})=\sigma_{2}M(y,t;z)\sigma_{2}$; * 3. $M^{+}(y,t;z)=M^{-}(y,t;z)V(y,t;z)$, $z\in\mathbb{R}$, where $\displaystyle V(y,t;z)=e^{i\left(zy+\frac{t}{4z}\right)\hat{\sigma}_{3}}\left(\begin{array}[]{cc}1&r(z)\\\ r^{*}(z)&1+|r(z)|^{2}\end{array}\right);$ (3.12) * 4.
$M(y,t;z)=\mathbb{I}+O(z^{-1})$ as $z\rightarrow\infty$. Based on Assumption 3.6, Eq.(2.21) and Proposition 2.4, there exist norming constants $b_{j}$ such that $\mu_{-,2}(z_{j})=b_{j}e^{2i(z_{j}y+\frac{t}{4z_{j}})}\mu_{+,1}(z_{j});~{}\mu_{-,1}(z^{*}_{j})=-b^{*}_{j}e^{-2i(z^{*}_{j}y+\frac{t}{4z^{*}_{j}})}\mu_{+,2}(z^{*}_{j}).$ Then, the residue conditions of $M(y,t;z)$ can be written as $\displaystyle\mathop{Res}_{z=z_{j}}M=\lim_{z\rightarrow z_{j}}M\left(\begin{array}[]{cc}0&c_{j}e^{2i(z_{j}y+\frac{t}{4z_{j}})}\\\ 0&0\end{array}\right),~{}~{}\mathop{Res}_{z=z^{*}_{j}}M=\lim_{z\rightarrow z^{*}_{j}}M\left(\begin{array}[]{cc}0&0\\\ -c^{*}_{j}e^{-2i(z^{*}_{j}y+\frac{t}{4z^{*}_{j}})}&0\end{array}\right),$ (3.17) where $c_{j}=\frac{b_{j}}{s^{\prime}_{22}(z_{j})}$. In terms of the solution of RHP 3.9, Proposition 2.4 and Eq.(3.8), the solution $u(x,t)$ can be derived as $u(x,t)=u(y(x,t),t)$, where $\displaystyle\begin{split}e^{-2d}u(y,t)=\lim_{z\rightarrow 0}\frac{\left(M^{-1}(y,t;0)M(y,t;z)\right)_{12}}{iz},\\\ x(y,t)=y+\lim_{z\rightarrow 0}\frac{\left(M^{-1}(y,t;0)M(y,t;z)\right)_{11}-1}{iz}.\end{split}$ (3.18) ## 4 Conjugation In this section, our main purpose is to renormalize the Riemann-Hilbert problem (3.9). Therefore, we will establish a transformation $M\mapsto M^{(1)}$ by introducing a function. In the jump matrix (3.12), the oscillatory term is $e^{2i(zy+\frac{t}{4z})}$, which we write as $\displaystyle e^{2i(zy+\frac{t}{4z})}=e^{2it\theta(z)},~{}~{}\theta(z)=\frac{zy}{t}+\frac{1}{4z}.$ (4.1) Next, the stationary phase points of $\theta(z)$ are $\pm z_{0}$, where $z_{0}=\sqrt{\frac{t}{4y}}$. For the case $\frac{t}{4y}<0$, the solution $u(x,t)$ of the initial problem (1.3) and (1.5) decays rapidly to $0$ as $t\rightarrow\infty$ [23]. Thus, we mainly pay attention to the case $\frac{t}{4y}>0$.
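The location of the stationary phase points follows from a direct computation with the $\theta(z)$ defined in (4.1) (a routine check):

```latex
\theta'(z)=\frac{y}{t}-\frac{1}{4z^{2}}=0
\quad\Longrightarrow\quad
z^{2}=\frac{t}{4y},
\quad\text{i.e.}\quad
z=\pm z_{0},\qquad z_{0}=\sqrt{\frac{t}{4y}} .
```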
Furthermore, $\theta(z)$ can be written as $\displaystyle\theta(z)=\frac{z}{4}\left(\frac{1}{z_{0}^{2}}+\frac{1}{z^{2}}\right),$ (4.2) from which we can derive that $\displaystyle Re(2it\theta(z))=-2tIm~{}z\frac{|z|^{2}-z_{0}^{2}}{4z_{0}^{2}|z|^{2}}.$ (4.3) Then, we obtain the decaying domains of the oscillatory term, shown in Figure 1. Figure 1. Exponential decaying domains of $e^{2it\theta(z)}$ as $t\rightarrow\pm\infty$. To make the following analysis more convenient, we introduce some notation: $\displaystyle\begin{aligned} &\triangle^{+}_{z_{0},1}=\triangle^{-}_{z_{0},-1}=\\{k\in\\{1,\cdots,N\\}:|z_{k}|<z_{0}\\},\\\ &\triangle^{-}_{z_{0},1}=\triangle^{+}_{z_{0},-1}=\\{k\in\\{1,\cdots,N\\}:|z_{k}|>z_{0}\\},\end{aligned}$ (4.4) where the subscript $\eta=\pm 1$ is defined by $\eta=sgn(t)$, and $\displaystyle I_{+}=(-\infty,-z_{0})\cup(z_{0},+\infty),~{}~{}I_{-}=[-z_{0},z_{0}].$ (4.5) In the following analysis, we mainly pay attention to the case $t\rightarrow+\infty$; the case $t\rightarrow-\infty$ can be analyzed in a similar way. In order to renormalize the Riemann-Hilbert problem (3.9), we first introduce the function $\displaystyle\delta(z)=\exp\left[i\int_{-z_{0}}^{z_{0}}\frac{\nu(s)}{s-z}ds\right],~{}~{}\nu(s)=-\frac{1}{2\pi}\log(1+|r(s)|^{2}),$ and $\displaystyle T(z)=\prod_{k\in\triangle_{z_{0},1}^{+}}\frac{z-z^{*}_{k}}{z-z_{k}}\delta(z),$ (4.6) which has the following properties. ###### Proposition 4.10.
$T(z)$ satisfies the following properties: ($a$) $T$ is meromorphic in $\mathbb{C}\setminus I_{-}$; ($b$) For $z\in\mathbb{C}\setminus I_{-}$, $T^{*}(z^{*})=\frac{1}{T(z)}$; ($c$) For $z\in I_{-}$, the boundary values $T_{\pm}$ satisfy $\displaystyle T_{+}(z)/T_{-}(z)=1+|r(z)|^{2},~{}~{}z\in I_{-};$ (4.7) ($d$) As $|z|\rightarrow\infty$ with $|arg(z)|\leq c<\pi$, $\displaystyle T(z)=1+\frac{i}{z}\left(2\sum_{k\in\triangle_{z_{0},1}^{+}}Im~{}z_{k}-\int_{-z_{0}}^{z_{0}}\nu(s)ds\right)+O(z^{-2});$ (4.8) ($e$) As $z\rightarrow\pm z_{0}$ along any ray $\pm z_{0}+e^{i\phi}\mathbb{R}_{+}$ with $|\phi|\leq c<\pi$, $\displaystyle|T(z,z_{0})-T_{0}(\pm z_{0})(z\mp z_{0})^{i\nu(\pm z_{0})}|\leq c\parallel r\parallel_{H^{1}(\mathbb{R})}|z\mp z_{0}|^{\frac{1}{2}},$ (4.9) where $T_{0}(\pm z_{0})$ is the complex unit $\displaystyle\begin{split}&T_{0}(\pm z_{0})=\prod_{k\in\triangle_{z_{0},1}^{+}}\left(\frac{\pm z_{0}-z^{*}_{k}}{\pm z_{0}-z_{k}}\right)e^{i\beta^{\pm}(z_{0},\pm z_{0})},\\\ &\beta^{\pm}(z,\pm z_{0})=-\nu(\pm z_{0})\log(z\mp z_{0}+1)+\int_{-z_{0}}^{z_{0}}\frac{\nu(s)-\chi_{\pm}(s)\nu(\pm z_{0})}{s-z}ds.\end{split}$ (4.10) Here $\chi_{+}$ and $\chi_{-}$ are the characteristic functions of the intervals $(z_{0}-1,z_{0})$ and $(-z_{0},-z_{0}+1)$, respectively. ($f$) As $z\rightarrow 0$, $T(z)$ can be expressed as $\displaystyle T(z)=T(0)(1+zT_{1})+O(z^{2}),$ (4.11) where $T_{1}=2\sum_{k\in\triangle_{z_{0},1}^{+}}\frac{Im~{}z_{k}}{z_{k}}-\int_{-z_{0}}^{z_{0}}\frac{\nu(s)}{s^{2}}ds$. ###### Proof. The properties of $T(z)$ can be proved by direct calculation; for details, see [37, 40]. ∎ Then, by applying the function $T(z)$, we introduce the transformation $\displaystyle M^{(1)}(y,t;z)=M(y,t;z)T(z)^{\sigma_{3}},$ (4.12) which satisfies the following matrix RHP. ###### Riemann-Hilbert Problem 4.11. Find a matrix-valued function $M^{(1)}$ with the following properties: * 1. $M^{(1)}$ is meromorphic in $\mathbb{C}\setminus\mathbb{R}$; * 2. $[M^{(1)}(y,t;z^{*})]^{*}=\sigma_{2}M^{(1)}(y,t;z)\sigma_{2}$; * 3.
$M^{(1)}(z)=\mathbb{I}+O(z^{-1})$ as $z\rightarrow\infty$; * 4. $M^{(1)}_{\pm}(z)$ satisfy the jump relation $M^{(1)}_{+}(z)=M^{(1)}_{-}(z)V^{(1)}(z)$, where $\displaystyle V^{(1)}=\left\\{\begin{aligned} \left(\begin{array}[]{cc}1&0\\\ r^{*}(z)T(z)^{2}e^{-2it\theta}&1\\\ \end{array}\right)\left(\begin{array}[]{cc}1&r(z)T(z)^{-2}e^{2it\theta}\\\ 0&1\\\ \end{array}\right),z\in\mathbb{R}\setminus I_{-},\\\ \left(\begin{array}[]{cc}1&\frac{r(z)T_{-}(z)^{-2}}{1+|r(z)|^{2}}e^{2it\theta}\\\ 0&1\\\ \end{array}\right)\left(\begin{array}[]{cc}1&0\\\ \frac{r^{*}(z)T_{+}(z)^{2}}{1+|r(z)|^{2}}e^{-2it\theta}&1\\\ \end{array}\right),z\in I_{-}\setminus\\{\pm z_{0}\\}.\end{aligned}\right.$ (4.13) * 5. $M^{(1)}(z)$ has simple poles at each $z_{k}\in\mathcal{Z}$ and $z^{*}_{k}\in\mathcal{Z}^{*}$ at which $\displaystyle\begin{split}&\mathop{Res}\limits_{z=z_{k}}M^{(1)}=\left\\{\begin{aligned} &\lim_{z\rightarrow z_{k}}M^{(1)}\left(\begin{array}[]{cc}0&0\\\ c_{k}^{-1}\left((\frac{1}{T})^{\prime}(z_{k})\right)^{-2}e^{-2it\theta}&0\\\ \end{array}\right),k\in\triangle_{z_{0},1}^{+}\\\ &\lim_{z\rightarrow z_{k}}M^{(1)}\left(\begin{array}[]{cc}0&c_{k}T^{-2}(z_{k})e^{2it\theta}\\\ 0&0\\\ \end{array}\right),k\in\triangle_{z_{0},1}^{-}\end{aligned}\right.\\\ &\mathop{Res}\limits_{z=z^{*}_{k}}M^{(1)}=\left\\{\begin{aligned} &\lim_{z\rightarrow z^{*}_{k}}M^{(1)}\left(\begin{array}[]{cc}0&0\\\ -(c^{*}_{k})^{-1}(T^{\prime}(z^{*}_{k}))^{-2}e^{-2it\theta}&0\\\ \end{array}\right),k\in\triangle_{z_{0},1}^{-}\\\ &\lim_{z\rightarrow z^{*}_{k}}M^{(1)}\left(\begin{array}[]{cc}0&-c^{*}_{k}(T(z^{*}_{k}))^{2}e^{2it\theta}\\\ 0&0\\\ \end{array}\right),k\in\triangle_{z_{0},1}^{+}\end{aligned}\right.\end{split}$ (4.14) ###### Proof. Based on the above analysis, it is easy to prove the analyticity, jump condition, asymptotic behavior and residue conditions; for details, see [37, 40].
∎ ## 5 Continuous extension to a mixed $\bar{\partial}$-RH problem In this section, our purpose is to extend the jump matrix off the real axis. We only require that the extension be continuous and that the oscillatory terms decay along the new contours. First, we introduce the contours $\displaystyle\begin{split}&\Sigma_{j}=z_{0}+e^{i(2j-1)\pi/4}R_{+},~{}~{}j=1,4;\\\ &\Sigma_{j}=z_{0}+e^{i(2j-1)\pi/4}h,~{}~{}h\in\left(0,\frac{\sqrt{2}}{2}z_{0}\right),~{}~{}j=2,3;\\\ &\Sigma_{j}=-z_{0}+e^{i(2j-1)\pi/4}h,~{}~{}h\in\left(0,\frac{\sqrt{2}}{2}z_{0}\right),~{}~{}j=5,8;\\\ &\Sigma_{j}=-z_{0}+e^{i(2j-1)\pi/4}R_{+},~{}~{}j=6,7;\\\ &\Sigma_{j}=e^{i(2j-1)\pi/4}h,~{}~{}h\in\left(0,\frac{\sqrt{2}}{2}z_{0}\right),~{}~{}j=9,10,11,12;\\\ &\Sigma^{(2)}=\cup_{j=1}^{12}\Sigma_{j}.\end{split}$ (5.1) Then, the complex plane $\mathbb{C}$ is separated into ten sectors, denoted by $\Omega_{j}$ $(j=1,2,\ldots,10)$ and shown in Figure 2. Figure 2. Definition of $R^{(2)}$ in different domains (the case $sgn(t)=1$, i.e., $t\rightarrow+\infty$).
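As a consistency check (our computation, using (4.3)), the oscillatory factor indeed decays along the new contours; for instance, on $\Sigma_{1}$:

```latex
% On \Sigma_1 write z = z_0 + u e^{i\pi/4}, u > 0. Then
\operatorname{Im}z=\frac{u}{\sqrt{2}},\qquad
|z|^{2}-z_{0}^{2}=\sqrt{2}\,z_{0}u+u^{2}>0,
% so that, by (4.3),
\operatorname{Re}\bigl(2it\theta(z)\bigr)
=-2t\,\frac{u}{\sqrt{2}}\cdot\frac{\sqrt{2}\,z_{0}u+u^{2}}{4z_{0}^{2}|z|^{2}}<0
\quad\text{for }t>0,
% hence |e^{2it\theta(z)}| decays exponentially as t -> +infinity.
```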
Moreover, define $\displaystyle\rho=\frac{1}{2}\min_{(z_{a}\neq z_{b})\in\mathcal{Z}\cup\mathcal{Z}^{*}}\\{|z_{a}-z_{b}|\\},$ (5.2) and $\chi_{Z}\in C_{0}^{\infty}(\mathbb{C},[0,1])$ which is supported near the discrete spectrum $\mathcal{Z}\cup\mathcal{Z}^{*}$ such that $\displaystyle\chi_{Z}(z)=\left\\{\begin{aligned} &1,~{}~{}dist(z,\mathcal{Z}\cup\mathcal{Z}^{*})<\rho/3,\\\ &0,~{}~{}dist(z,\mathcal{Z}\cup\mathcal{Z}^{*})>2\rho/3.\end{aligned}\right.$ (5.3) Also, we can verify that $dist(\mathcal{Z}\cup\mathcal{Z}^{*},\mathbb{R})>\rho.$ Next, in order to extend the jump matrix onto the new contours, along which the oscillatory terms are decaying, we introduce the transformation $\displaystyle M^{(2)}=M^{(1)}R^{(2)},$ (5.4) where $R^{(2)}$ is subject to the following restrictions. * 1. The aim of the transformation is to extend the jump matrix onto the new contours $\Sigma^{(2)}$, so $M^{(2)}$ must have no jump on the real axis. * 2. To guarantee that the $\bar{\partial}$-contribution has little impact on the large-time asymptotic solution of $u(x,t)$, the norm of $R^{(2)}$ needs to be controlled. * 3. The transformation must not affect the residue conditions. Then, we define $R^{(2)}$ as $\displaystyle R^{(2)}=\left\\{\begin{aligned} &\left(\begin{array}[]{cc}1&(-1)^{m_{j}}R_{j}e^{2it\theta}\\\ 0&1\\\ \end{array}\right),~{}~{}&z\in\Omega_{j},~{}~{}j=1,4,7,9,\\\ &\left(\begin{array}[]{cc}1&0\\\ (-1)^{m_{j}}R_{j}e^{-2it\theta}&1\\\ \end{array}\right),~{}~{}&z\in\Omega_{j},~{}~{}j=3,6,8,10,\\\ &\left(\begin{array}[]{cc}1&0\\\ 0&1\\\ \end{array}\right),~{}~{}&z\in\Omega_{2}\cup\Omega_{5},\end{aligned}\right.$ (5.5) where $m_{j}=1$ $(j=1,3,7,8)$, $m_{j}=0$ $(j=4,6,9,10)$, and the $R_{j}(z)$ are defined in the following proposition. ###### Proposition 5.12.
There exist functions $R_{j}:\Omega_{j}\rightarrow\mathbb{C},~{}j=1,3,4,6,7,8,9,10,$ such that $\displaystyle R_{1}(z)=\left\\{\begin{aligned} &r(z)T^{-2}(z),~{}~{}~{}~{}z\in(z_{0},\infty),\\\ &f_{1}=r(z_{0})T_{0}^{-2}(z_{0})(z-z_{0})^{-2i\nu(z_{0})}(1-\chi_{Z}(z)),z\in\Sigma_{1},\end{aligned}\right.$ $\displaystyle R_{3}(z)=\left\\{\begin{aligned} &\frac{r^{*}(z)}{1+|r(z)|^{2}}T_{+}^{2}(z),~{}~{}~{}~{}z\in(0,z_{0}),\\\ &f_{3}=\frac{r^{*}(z_{0})}{1+|r(z_{0})|^{2}}T_{0}^{2}(z_{0})(z-z_{0})^{2i\nu(z_{0})}(1-\chi_{Z}(z)),z\in\Sigma_{2},\end{aligned}\right.$ $\displaystyle R_{4}(z)=\left\\{\begin{aligned} &\frac{r(z)}{1+|r(z)|^{2}}T_{-}^{-2}(z),~{}~{}~{}~{}z\in(0,z_{0}),\\\ &f_{4}=\frac{r(z_{0})}{1+|r(z_{0})|^{2}}T_{0}^{-2}(z_{0})(z-z_{0})^{-2i\nu(z_{0})}(1-\chi_{Z}(z)),z\in\Sigma_{3},\end{aligned}\right.$ $\displaystyle R_{6}(z)=\left\\{\begin{aligned} &r^{*}(z)T^{2}(z),~{}~{}~{}~{}z\in(z_{0},\infty),\\\ &f_{6}=r^{*}(z_{0})T_{0}^{2}(z_{0})(z-z_{0})^{2i\nu(z_{0})}(1-\chi_{Z}(z)),z\in\Sigma_{4},\end{aligned}\right.$ $\displaystyle R_{7}(z)=\left\\{\begin{aligned} &r(z)T^{-2}(z),~{}~{}~{}~{}z\in(-\infty,-z_{0}),\\\ &f_{7}=r(-z_{0})T_{0}^{-2}(-z_{0})(z+z_{0})^{2i\nu(-z_{0})}(1-\chi_{Z}(z)),z\in\Sigma_{6},\end{aligned}\right.$ $\displaystyle R_{8}(z)=\left\\{\begin{aligned} &\frac{r^{*}(z)}{1+|r(z)|^{2}}T_{+}^{2}(z),~{}~{}~{}~{}z\in(-z_{0},0),\\\ &f_{8}=\frac{r^{*}(-z_{0})}{1+|r(-z_{0})|^{2}}T_{0}^{2}(-z_{0})(z+z_{0})^{-2i\nu(-z_{0})}(1-\chi_{Z}(z)),z\in\Sigma_{5},\end{aligned}\right.$ $\displaystyle R_{9}(z)=\left\\{\begin{aligned} &\frac{r(z)}{1+|r(z)|^{2}}T_{-}^{-2}(z),~{}~{}~{}~{}z\in(-z_{0},0),\\\ &f_{9}=\frac{r(-z_{0})}{1+|r(-z_{0})|^{2}}T_{0}^{-2}(-z_{0})(z+z_{0})^{2i\nu(-z_{0})}(1-\chi_{Z}(z)),z\in\Sigma_{8},\end{aligned}\right.$ $\displaystyle R_{10}(z)=\left\\{\begin{aligned} &r^{*}(z)T^{2}(z),~{}~{}~{}~{}z\in(-\infty,-z_{0}),\\\ &f_{10}=r^{*}(-z_{0})T_{0}^{2}(-z_{0})(z+z_{0})^{-2i\nu(-z_{0})}(1-\chi_{Z}(z)),z\in\Sigma_{7}.\end{aligned}\right.$ Moreover, the $R_{j}$ satisfy the estimates
$\displaystyle j=1,3,4,6~{}~{}~{}~{}\left\\{\begin{aligned} &|R_{j}(z)|\leq c_{1}\sin^{2}(\arg(z-z_{0}))+c_{2}\left<Rez\right>^{-1/2},\\\ &|\bar{\partial}R_{j}(z)|\leq c_{1}|\bar{\partial}\chi_{Z}(z)|+c_{2}|z-z_{0}|^{-1/2}+c_{3}|p^{\prime}_{j}(Rez)|,\end{aligned}\right.$ (5.6) $\displaystyle j=7,8,9,10~{}~{}~{}~{}\left\\{\begin{aligned} &|R_{j}(z)|\leq c_{1}\sin^{2}(\arg(z+z_{0}))+c_{2}\left<Rez\right>^{-1/2},\\\ &|\bar{\partial}R_{j}(z)|\leq c_{1}|\bar{\partial}\chi_{Z}(z)|+c_{2}|z+z_{0}|^{-1/2}+c_{3}|p^{\prime}_{j}(Rez)|,\end{aligned}\right.$ (5.7) $\displaystyle\bar{\partial}R_{j}(z)=0,~{}~{}z\in\Omega_{2}\cup\Omega_{5},~{}or~{}dist(z,\mathcal{Z}\cup\mathcal{Z}^{*})\leq\rho/3,$ (5.8) where $\displaystyle\left<Rez\right>=\sqrt{1+(Rez)^{2}},$ $\displaystyle p_{1}=p_{7}=r(z),~{}~{}p_{3}=p_{8}=\frac{r(z)}{1+|r(z)|^{2}},$ $\displaystyle p_{6}=p_{10}=r^{*}(z),~{}~{}p_{4}=p_{9}=\frac{r^{*}(z)}{1+|r(z)|^{2}}.$ The proof of Proposition 5.12 is similar to that in [34, 40]. Then, based on the $R^{(2)}$ given in Proposition 5.12 and applying the transformation (5.4), we obtain $M^{(2)}$, which satisfies the following mixed $\bar{\partial}$-RH problem. ###### Riemann-Hilbert Problem 5.13. Find a matrix-valued function $M^{(2)}$ with the following properties: * 1. $M^{(2)}(x,t,z)$ is continuous in $\mathbb{C}\setminus(\Sigma^{(2)}\cup\mathcal{Z}\cup\mathcal{Z}^{*})$. * 2. $[M^{(2)}(y,t;z^{*})]^{*}=\sigma_{2}M^{(2)}(y,t;z)\sigma_{2}$. * 3.
$M_{+}^{(2)}(x,t,z)=M_{-}^{(2)}(x,t,z)V^{(2)}(x,t,z),$ $z\in\Sigma^{(2)}$, where the jump matrix $V^{(2)}(x,t,z)$ satisfies $\displaystyle V^{(2)}=\left\\{\begin{aligned} &\left(\begin{array}[]{cc}1&R_{1}e^{2it\theta}\\\ 0&1\\\ \end{array}\right),~{}~{}&z\in\Sigma_{1},\\\ &\left(\begin{array}[]{cc}1&0\\\ R_{3}e^{-2it\theta}&1\\\ \end{array}\right),~{}~{}&z\in\Sigma_{2}\cup\Sigma_{9},\\\ &\left(\begin{array}[]{cc}1&R_{4}e^{2it\theta}\\\ 0&1\\\ \end{array}\right),~{}~{}&z\in\Sigma_{3}\cup\Sigma_{12},\\\ &\left(\begin{array}[]{cc}1&0\\\ R_{6}e^{-2it\theta}&1\\\ \end{array}\right),~{}~{}&z\in\Sigma_{4},\\\ &\left(\begin{array}[]{cc}1&0\\\ R_{8}e^{-2it\theta}&1\\\ \end{array}\right),~{}~{}&z\in\Sigma_{5}\cup\Sigma_{10},\\\ &\left(\begin{array}[]{cc}1&R_{7}e^{2it\theta}\\\ 0&1\\\ \end{array}\right),~{}~{}&z\in\Sigma_{6},\\\ &\left(\begin{array}[]{cc}1&0\\\ R_{10}e^{-2it\theta}&1\\\ \end{array}\right),~{}~{}&z\in\Sigma_{7},\\\ &\left(\begin{array}[]{cc}1&R_{9}e^{2it\theta}\\\ 0&1\\\ \end{array}\right),~{}~{}&z\in\Sigma_{8}\cup\Sigma_{11};\\\ \end{aligned}\right.$ (5.9) * 4. $M^{(2)}(x,t,z)\rightarrow\mathbb{I},$ $z\rightarrow\infty$. * 5. For $z\in\mathbb{C}\setminus(\Sigma^{(2)}\cup\mathcal{Z}\cup\mathcal{Z}^{*})$, $\bar{\partial}M^{(2)}=M^{(2)}\bar{\partial}R^{(2)}(z),$ where $\displaystyle\bar{\partial}R^{(2)}=\left\\{\begin{aligned} &\left(\begin{array}[]{cc}0&(-1)^{m_{j}}\bar{\partial}R_{j}e^{2it\theta}\\\ 0&0\\\ \end{array}\right),~{}~{}&z\in\Omega_{j},~{}~{}j=1,4,7,9,\\\ &\left(\begin{array}[]{cc}0&0\\\ (-1)^{m_{j}}\bar{\partial}R_{j}e^{-2it\theta}&0\\\ \end{array}\right),~{}~{}&z\in\Omega_{j},~{}~{}j=3,6,8,10,\\\ &\left(\begin{array}[]{cc}0&0\\\ 0&0\\\ \end{array}\right),~{}~{}&z\in\Omega_{2}\cup\Omega_{5},\end{aligned}\right.$ (5.10) * 6.
$M^{(2)}$ admits the residue conditions at the poles $z_{k}\in\mathcal{Z}$ and $z^{*}_{k}\in\mathcal{Z}^{*}$, i.e., $\displaystyle\begin{split}&\mathop{Res}\limits_{z=z_{k}}M^{(2)}=\left\\{\begin{aligned} &\lim_{z\rightarrow z_{k}}M^{(2)}\left(\begin{array}[]{cc}0&0\\\ c_{k}^{-1}\left((\frac{1}{T})^{\prime}(z_{k})\right)^{-2}e^{-2it\theta}&0\\\ \end{array}\right),k\in\triangle_{z_{0},1}^{+}\\\ &\lim_{z\rightarrow z_{k}}M^{(2)}\left(\begin{array}[]{cc}0&c_{k}T^{-2}(z_{k})e^{2it\theta}\\\ 0&0\\\ \end{array}\right),k\in\triangle_{z_{0},1}^{-}\end{aligned}\right.\\\ &\mathop{Res}\limits_{z=z^{*}_{k}}M^{(2)}=\left\\{\begin{aligned} &\lim_{z\rightarrow z^{*}_{k}}M^{(2)}\left(\begin{array}[]{cc}0&0\\\ -(c^{*}_{k})^{-1}(T^{\prime}(z^{*}_{k}))^{-2}e^{-2it\theta}&0\\\ \end{array}\right),k\in\triangle_{z_{0},1}^{-}\\\ &\lim_{z\rightarrow z^{*}_{k}}M^{(2)}\left(\begin{array}[]{cc}0&-c^{*}_{k}(T(z^{*}_{k}))^{2}e^{2it\theta}\\\ 0&0\\\ \end{array}\right),k\in\triangle_{z_{0},1}^{+}\end{aligned}\right.\end{split}$ (5.11) ## 6 Decomposition of the mixed $\bar{\partial}$-RH problem The purpose of this section is to decompose the mixed $\bar{\partial}$-RH problem into two parts: a model RH problem with $\bar{\partial}R^{(2)}=0$ and a pure $\bar{\partial}$-RH problem with $\bar{\partial}R^{(2)}\neq 0$. We denote by $M^{(2)}_{RHP}$ the solution of the model RH problem, and first construct an RH problem for $M^{(2)}_{RHP}$. ###### Riemann-Hilbert Problem 6.14. Find a matrix-valued function $M^{(2)}_{RHP}$ with the following properties: * 1. $M^{(2)}_{RHP}$ is analytic in $\mathbb{C}\backslash(\Sigma^{(2)}\cup\mathcal{Z}\cup\mathcal{Z}^{*})$; * 2. $[M^{(2)}_{RHP}(y,t;z^{*})]^{*}=\sigma_{2}M^{(2)}_{RHP}(y,t;z)\sigma_{2}$; * 3. $M^{(2)}_{RHP,+}(x,t,z)=M^{(2)}_{RHP,-}(x,t,z)V^{(2)}(x,t,z),$ $z\in\Sigma^{(2)}$, where $V^{(2)}(x,t,z)$ is the same as the jump matrix appearing in RHP 5.13; * 4. As $z\rightarrow\infty$, $M^{(2)}_{RHP}(x,t,z)=\mathbb{I}+o(z^{-1})$; * 5.
$M^{(2)}_{RHP}$ possesses the same residue conditions as $M^{(2)}$. Then, if we can prove the existence of the solution $M^{(2)}_{RHP}$, RHP 5.13 can be reduced to a pure $\bar{\partial}$-RH problem. The existence of the solution $M^{(2)}_{RHP}$ will be proved in Section 7. Now, supposing that the solution $M^{(2)}_{RHP}$ exists and constructing the transformation $\displaystyle M^{(3)}(z)=M^{(2)}(z)M^{(2)}_{RHP}(z)^{-1},$ (6.1) we obtain the following pure $\bar{\partial}$-RH problem. ###### Riemann-Hilbert Problem 6.15. Find a matrix-valued function $M^{(3)}$ with the following properties: * 1. $M^{(3)}$ is continuous with sectionally continuous first partial derivatives in $\mathbb{C}\backslash(\Sigma^{(2)}\cup\mathcal{Z}\cup\mathcal{Z}^{*})$; * 2. $[M^{(3)}(y,t;z^{*})]^{*}=\sigma_{2}M^{(3)}(y,t;z)\sigma_{2}$; * 3. For $z\in\mathbb{C}$, we have $\bar{\partial}M^{(3)}(z)=M^{(3)}(z)W^{(3)}(z)$, where $\displaystyle W^{(3)}=M_{RHP}^{(2)}(z)\bar{\partial}R^{(2)}M_{RHP}^{(2)}(z)^{-1};$ (6.2) * 4. As $z\rightarrow\infty$, $M^{(3)}(z)=\mathbb{I}+o(z^{-1})$. ###### Proof. According to the properties of $M^{(2)}_{RHP}$ and $M^{(2)}$ in RHP 6.14 and RHP 5.13, the analyticity and asymptotic properties of $M^{(3)}$ can be derived easily. Noting the fact that $M^{(2)}_{RHP}$ possesses the same jump matrix as $M^{(2)}$, we obtain that $\displaystyle M^{(3)}_{-}(z)^{-1}M^{(3)}_{+}(z)$ $\displaystyle=M^{(2)}_{RHP,-}(z)M^{(2)}_{-}(z)^{-1}M^{(2)}_{+}(z)M^{(2)}_{RHP,+}(z)^{-1}$ $\displaystyle=M^{(2)}_{RHP,-}(z)V^{(2)}(z)(M^{(2)}_{RHP,-}(z)V^{(2)}(z))^{-1}=\mathbb{I},$ which implies that $M^{(3)}$ has no jump. Also, a simple analysis shows that $M^{(3)}$ has no poles; for details, see [34, 37, 40]. ∎ ## 7 The pure RH problem In this section, we construct the solution $M^{(2)}_{RHP}$ of RHP 6.14.
Define $\displaystyle\mathcal{U}_{\pm z_{0}}=\\{z:|z\mp z_{0}|<\min\\{\frac{z_{0}}{2},\rho/3\\}\\}.$ Then, we decompose $M^{(2)}_{RHP}$ into two parts: $\displaystyle M^{(2)}_{RHP}(z)=\left\\{\begin{aligned} &E(z)M^{(out)}(z),&&z\in\mathbb{C}\setminus\mathcal{U}_{\pm z_{0}},\\\ &E(z)M^{(\pm z_{0})}(z),&&z\in\mathcal{U}_{\pm z_{0}},\end{aligned}\right.$ (7.1) where $M^{(\pm z_{0})}(z)$ possesses no poles in $\mathcal{U}_{\pm z_{0}}$. Here, $M^{(out)}$ solves an outer model RHP, $M^{(\pm z_{0})}$ can be approximated by a known parabolic cylinder model in $\mathcal{U}_{\pm z_{0}}$, and $E(z)$ is an error function which solves a small-norm Riemann-Hilbert problem. Additionally, we estimate the jump matrix $V^{(2)}$: $\displaystyle||V^{(2)}-\mathbb{I}||_{L^{\infty}(\Sigma_{\pm}^{(2)}\setminus\mathcal{U}_{\pm z_{0}})}=O\left(e^{-\frac{\sqrt{2}}{16}t|z\mp z_{0}|^{2}}\right),$ (7.2) $\displaystyle||V^{(2)}-\mathbb{I}||_{L^{\infty}(\Sigma_{0}^{(2)})}=O\left(e^{-\frac{t}{4z_{0}}}\right),$ (7.3) where $\Sigma_{\pm}^{(2)}$ and $\Sigma_{0}^{(2)}$ are defined as $\displaystyle\Sigma_{+}^{(2)}=\cup_{j=1}^{4}\Sigma_{j},~{}~{}\Sigma_{-}^{(2)}=\cup_{j=5}^{8}\Sigma_{j},~{}~{}\Sigma_{0}^{(2)}=\cup_{j=9}^{12}\Sigma_{j}.$ According to the above estimates of the jump matrix $V^{(2)}$, if we omit the jump condition of $M^{(2)}_{RHP}(z)$, only an exponentially small error in $t$ is introduced outside $\mathcal{U}_{+z_{0}}\cup\mathcal{U}_{-z_{0}}$. In addition, noting that $V^{(2)}\rightarrow\mathbb{I}$ as $z\rightarrow 0$, it is not necessary to study the neighborhood of $z=0$ separately. ### 7.1 Outer model RH problem: $M^{(out)}$ In this section, we establish a model RH problem and prove that its solution can be approximated by a finite sum of soliton solutions. ###### Riemann-Hilbert Problem 7.16. Find a matrix-valued function $M^{(out)}(y,t;z)$ with the following properties: * 1.
$M^{(out)}(y,t;z)$ is analytic in $\mathbb{C}\setminus(\Sigma^{(2)}\cup\mathcal{Z}\cup\mathcal{Z}^{*})$; * 2. $[M^{(out)}(y,t;z^{*})]^{*}=\sigma_{2}M^{(out)}(y,t;z)\sigma_{2}$; * 3. As $z\rightarrow\infty$, $\displaystyle M^{(out)}(y,t;z)=\mathbb{I}+o(z^{-1});$ (7.4) * 4. $M^{(out)}(y,t;z)$ has simple poles at each point in $\mathcal{Z}\cup\mathcal{Z}^{*}$, satisfying the same residue conditions as in RHP 5.13 with $M^{(out)}(y,t;z)$ replacing $M^{(2)}(y,t;z)$. Before we investigate the solution $M^{(out)}(y,t;z)$ of RHP 7.16, we first study RHP 3.9 in the reflectionless case. Under this condition, $M(y,t;z)$ has no jump, and we obtain the following Riemann-Hilbert problem from RHP 3.9. ###### Riemann-Hilbert Problem 7.17. Find a matrix-valued function $M(y,t;z|\sigma_{d})$ with the following properties: * 1. $M(y,t;z|\sigma_{d})$ is analytic in $\mathbb{C}\setminus(\mathcal{Z}\cup\mathcal{Z}^{*})$; * 2. $M^{*}(y,t;z^{*}|\sigma_{d})=\sigma_{2}M(y,t;z|\sigma_{d})\sigma_{2}$; * 3. $M(y,t;z|\sigma_{d})=\mathbb{I}+O(z^{-1})$, $z\rightarrow\infty$; * 4. $M(y,t;z|\sigma_{d})$ satisfies the following residue conditions at the simple poles $z_{k}\in\mathcal{Z}$ and $z_{k}^{*}\in\mathcal{Z}^{*}$ $\displaystyle\begin{aligned} &\mathop{Res}_{z=z_{k}}M(y,t;z|\sigma_{d})=\mathop{lim}_{z\rightarrow z_{k}}M(y,t;z|\sigma_{d})N_{k},\\\ &\mathop{Res}_{z=z_{k}^{*}}M(y,t;z|\sigma_{d})=\mathop{lim}_{z\rightarrow z_{k}^{*}}M(y,t;z|\sigma_{d})\sigma_{2}N^{*}_{k}\sigma_{2},\end{aligned}$ (7.5) where $\sigma_{d}=\\{(z_{k},c_{k}),z_{k}\in\mathcal{Z}\\}^{N}_{k=1}$, with $z_{k}\neq z_{j}$ for $k\neq j$, are the scattering data, and $\displaystyle N_{k}=\left(\begin{aligned} \begin{array}[]{cc}0&\gamma_{k}(y,t)\\\ 0&0\end{array}\end{aligned}\right),~{}\gamma_{k}(y,t)=c_{k}e^{2it\theta(z_{k})},$ $\displaystyle\theta(z_{k})=\frac{z_{k}}{4}\left(\frac{1}{z_{0}^{2}}+\frac{1}{z_{k}^{2}}\right).$ ###### Proposition 7.18. RHP 7.17 has a unique solution.
Moreover, the solution satisfies $\displaystyle\|M(y,t;z|\sigma_{d})\|_{L^{\infty}(\mathbb{C}\setminus(\mathcal{Z}\cup\mathcal{Z}^{*}))}\lesssim 1.$ (7.6) ###### Proof. By Liouville's theorem, the uniqueness of the solution is obvious. The existence of a solution of RHP 7.17 and Eq.(7.6) can be proved by a calculation similar to that in [34, 37]. ∎ #### 7.1.1 Renormalization of the RHP for the reflectionless case Under the reflectionless condition, recall that $\displaystyle s_{22}(z)=\prod_{k=1}^{N}\left(\frac{z-z_{k}}{z-z^{*}_{k}}\right).$ (7.7) Taking $\triangle\subseteq\\{1,2,\cdots,N\\}$, $\triangledown=\\{1,2,\cdots,N\\}\setminus\triangle$, and defining $\displaystyle s_{22}^{\triangle}=\prod_{k\in\triangle}\frac{z-z_{k}}{z-z^{*}_{k}},\quad s_{22}^{\triangledown}=\frac{s_{22}}{s_{22}^{\triangle}}=\prod_{k\in\triangledown}\frac{z-z_{k}}{z-z^{*}_{k}},$ (7.8) we introduce the normalization transformation $\displaystyle M^{\triangle}(y,t;z|\sigma_{d}^{\triangle})=M(y,t;z|\sigma_{d})s_{22}^{\triangle}(z)^{-\sigma_{3}},$ (7.9) which splits the poles between the columns of $M(y,t;z|\sigma_{d})$ by selecting different $\triangle$. The scattering data $\sigma_{d}^{\triangle}$ are defined by $\sigma_{d}^{\triangle}=\\{(z_{k},c_{k}s_{22}^{\triangle}(z_{k})^{2}),z_{k}\in\mathcal{Z}\\}^{N}_{k=1}$. Then, we obtain the following modified Riemann-Hilbert problem. ###### Riemann-Hilbert Problem 7.19. Given scattering data $\sigma^{\triangle}_{d}$ and $\triangle\subseteq\\{1,2,\cdots,N\\}$, find a matrix-valued function $M^{\triangle}$ with the following properties: * 1. $M^{\triangle}(y,t;z|\sigma^{\triangle}_{d})$ is analytic in $\mathbb{C}\setminus(\mathcal{Z}\bigcup\mathcal{Z}^{*})$; * 2. $[M^{\triangle}(y,t;z^{*}|\sigma_{d}^{\triangle})]^{*}=\sigma_{2}M^{\triangle}(y,t;z|\sigma^{\triangle}_{d})\sigma_{2}$; * 3. $M^{\triangle}(y,t;z|\sigma^{\triangle}_{d})=\mathbb{I}+O(z^{-1})$, $z\rightarrow\infty$; * 4.
$M^{\triangle}(y,t;z|\sigma^{\triangle}_{d})$ satisfies the following residue conditions at the simple poles $z_{k}\in\mathcal{Z}$ and $z_{k}^{*}\in\mathcal{Z}^{*}$ $\displaystyle\begin{aligned} &\mathop{Res}_{z=z_{k}}M^{\triangle}(y,t;z|\sigma^{\triangle}_{d})=\mathop{lim}_{z\rightarrow z_{k}}M^{\triangle}(y,t;z|\sigma^{\triangle}_{d})N^{\triangle}_{k},\\\ &\mathop{Res}_{z=z_{k}^{*}}M^{\triangle}(y,t;z|\sigma^{\triangle}_{d})=\mathop{lim}_{z\rightarrow z_{k}^{*}}M^{\triangle}(y,t;z|\sigma^{\triangle}_{d})\sigma_{2}(N^{\triangle}_{k})^{*}\sigma_{2},\end{aligned}$ (7.10) where $\displaystyle N_{k}^{\triangle}=\left\\{\begin{aligned} \left(\begin{array}[]{cc}0&\gamma_{k}^{\triangle}\\\ 0&0\\\ \end{array}\right),\quad k\notin\triangle,\\\ \left(\begin{array}[]{cc}0&0\\\ \gamma_{k}^{\triangle}&0\\\ \end{array}\right),\quad k\in\triangle,\end{aligned}\right.~{}~{}\gamma_{k}^{\triangle}=\left\\{\begin{aligned} &c_{k}(s_{22}^{\triangle}(z_{k}))^{2}e^{2it\theta(z_{k})},\quad k\notin\triangle,\\\ &c_{k}^{-1}((s_{22}^{\triangle})^{\prime}(z_{k}))^{-2}e^{-2it\theta(z_{k})},\quad k\in\triangle.\end{aligned}\right.$ (7.11) Then, taking $\triangle=\triangle_{z_{0},1}^{+}$ and using $\sigma^{out}_{d}=\\{(z_{k},c_{k}\delta(z_{k})^{2}),z_{k}\in\mathcal{Z}\\}^{N}_{k=1}$ instead of the scattering data $\sigma^{\triangle}_{d}$, we obtain that $\displaystyle M^{(out)}(z)=M^{\triangle_{z_{0},1}^{+}}(z)\delta(z)^{\sigma_{3}}=M^{\triangle_{z_{0},1}^{+}}(z|\sigma_{d}^{out}).$ (7.12) From the above analysis, we note that $M^{\triangle}(y,t;z|\sigma^{out}_{d})$ is obtained from $M(y,t;z|\sigma_{d})$ by an explicit invertible transformation, so it also has a unique solution.
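To illustrate the reflectionless construction, consider the simplest case $N=1$ (the explicit ansatz below is ours; it is a standard computation consistent with the residue conditions of RHP 7.17). The symmetry $M^{*}(y,t;z^{*}|\sigma_{d})=\sigma_{2}M(y,t;z|\sigma_{d})\sigma_{2}$ suggests

```latex
M(y,t;z|\sigma_{d})
=\mathbb{I}
+\frac{1}{z-z_{1}}\begin{pmatrix}0&\alpha\\ 0&\beta\end{pmatrix}
+\frac{1}{z-z_{1}^{*}}\begin{pmatrix}\beta^{*}&0\\ -\alpha^{*}&0\end{pmatrix},
% and the residue condition Res_{z=z_1} M = lim_{z->z_1} M N_1, with N_1 as in RHP 7.17, gives
\alpha=\gamma_{1}\Bigl(1+\frac{\beta^{*}}{z_{1}-z_{1}^{*}}\Bigr),
\qquad
\beta=-\gamma_{1}\,\frac{\alpha^{*}}{z_{1}-z_{1}^{*}},
```

a closed linear system for $(\alpha,\beta)$ whose solution, inserted into (7.13), yields the one-soliton solution.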
For given scattering data $\sigma^{\triangle}_{d}$, the unique $N$-soliton solution of RHP 7.17 can be expressed as $\displaystyle u_{sol}(y,t;\sigma^{\triangle}_{d})=\mathop{lim}_{z\rightarrow 0}\frac{\left(M(y,t;0|\sigma^{\triangle}_{d})^{-1}M(y,t;z|\sigma^{\triangle}_{d})\right)_{12}}{iz}.$ (7.13) This indicates that each normalization encodes $u_{sol}(y,t)$ in the same way. By selecting an appropriate $\triangle$, the asymptotic limits in which $t\rightarrow\infty$ with $\frac{y}{t}$ bounded are under better asymptotic control. Next, we study the asymptotic behavior of the soliton solutions. #### 7.1.2 Long-time behavior of soliton solutions We first define some notation: $\displaystyle I=\left\\{z:-\frac{1}{4v_{1}}<|z|^{2}<-\frac{1}{4v_{2}}\right\\},~{}~{}\mathcal{Z}(I)=\\{z_{k}\in\mathcal{Z}:z_{k}\in I\\},~{}~{}N(I)=|\mathcal{Z}(I)|,$ $\displaystyle\mathcal{Z}^{-}(I)=\left\\{z_{k}\in\mathcal{Z}:|z_{k}|^{2}>-\frac{1}{4v_{2}}\right\\},~{}~{}\mathcal{Z}^{+}(I)=\left\\{z_{k}\in\mathcal{Z}:|z_{k}|^{2}<-\frac{1}{4v_{1}}\right\\},$ $\displaystyle c_{k}(I)=c_{k}\prod_{Rez_{j}\in I_{-}\setminus I}\left(\frac{z_{k}-z_{j}}{z_{k}-z^{*}_{j}}\right)^{2},$ where $v_{1}\leq v_{2}\in\mathbb{R}^{-}$ are given velocities. Then we define a distance $\displaystyle\mu(I)=\min_{z_{k}\in\mathcal{Z}\setminus\mathcal{Z}(I)}\left\\{Im(z_{k})\frac{-v_{2}}{|z_{k}|^{2}}\left(|z_{k}|+\frac{1}{2\sqrt{-v_{1}}}\right)dist(z_{k},I)\right\\},$ (7.14) and a space-time cone with given points $y_{1}\leq y_{2}\in\mathbb{R}$ $\displaystyle S(y_{1},y_{2},v_{1},v_{2})=\\{(y,t)\in\mathbb{R}^{2}:y=y_{0}+vt~{}with~{}y_{0}\in[y_{1},y_{2}],v\in[v_{1},v_{2}]\\}.$ (7.15) Figure 3.
$(a)$ An example in which the original data have nine pairs of zero points of the discrete spectrum, but only four pairs lie inside the cone $S$, with $\mathcal{Z}(I)=\\{z_{1},z_{2},z_{5},z_{7}\\}$; $(b)$ the space-time cone $S(y_{1},y_{2},v_{1},v_{2})$. ###### Proposition 7.20. For given scattering data $\sigma_{d}^{\triangle}=\\{(z_{k},\hat{c}_{k})\\}$, $t\rightarrow\infty$ and $(y,t)\in S(y_{1},y_{2},v_{1},v_{2})$, we have $\displaystyle M^{\triangle_{z_{0},1}^{+}}(z|\sigma_{d}^{\triangle})=\left(\mathbb{I}+O(e^{-2\mu(I)t})\right)M^{\triangle_{z_{0},1}^{+}(I)}(z|\sigma_{d}(I)),$ (7.16) where $\displaystyle\sigma_{d}(I)=\\{(z_{k},c_{k}(I)s_{22}^{\triangle}(z_{k})^{2}),z_{k}\in\mathcal{Z}(I)\\}.$ (7.17) ###### Proof. The results of this proposition follow by a method similar to that in [34, 37]. ∎ Now, we can derive the asymptotic unique solution $M^{(out)}$ of RHP 7.16. ###### Corollary 7.21. There exists a unique solution $M^{(out)}$ of RHP 7.16. In particular, $\displaystyle M^{(out)}(z)$ $\displaystyle=M^{\triangle_{z_{0},1}^{+}}(z)\delta(z)^{-\sigma_{3}}=M^{\triangle_{z_{0},1}^{+}}(z|\sigma_{d}^{out})$ (7.18) $\displaystyle=M^{\triangle_{z_{0},1}^{+}}(z|\sigma_{d}(I))\prod_{Rez_{k}\in I_{-}\setminus I}\left(\frac{z-z_{k}}{z-z^{*}_{k}}\right)^{2}\delta^{-\sigma_{3}}+O(e^{-\mu(I)t}),$ (7.19) where $M^{\triangle_{z_{0},1}^{+}}(z)$ is the solution of RHP 7.19 with $\triangle=\triangle_{z_{0},1}^{+}$ and $\sigma_{d}^{out}=\\{(z_{k},\widetilde{c}_{k}(z_{0}))\\}_{k=1}^{N}$, where $\displaystyle\widetilde{c}_{k}(z_{0})=c_{k}e^{\frac{i}{\pi}\int_{-z_{0}}^{z_{0}}\frac{\log(1+|r(s)|^{2})}{s-z_{k}}ds}.$ (7.20) Substituting Eq.(7.18) into Eq.(7.6), we obtain $\displaystyle\|M^{(out)}(z)\|_{L^{\infty}(\mathbb{C}\setminus(\mathcal{Z}\cup\mathcal{Z}^{*}))}\lesssim 1.$ (7.21) In addition, $\displaystyle\begin{split}u_{sol}(y,t;\sigma_{d}^{out})&=\mathop{lim}_{z\rightarrow 0}\frac{\left(M^{(out)}(0)^{-1}M^{(out)}(z)\right)_{12}}{iz},\\\ 
&=u_{sol}(y,t;\sigma_{d}(I))+O(e^{-\mu(I)t}),\end{split}$ (7.22) where $u_{sol}(y,t;\sigma_{d}^{out})$ is the $N$-soliton solution of Eq.(1.3) corresponding to the scattering data $\sigma_{d}^{out}$. ### 7.2 Local solvable model near the phase points $z=\pm z_{0}$ Based on (7.2) and (7.3), it is easy to see that $V^{(2)}-\mathbb{I}$ does not admit a uniform estimate for large time near the phase points $z=\pm z_{0}$. Therefore, we construct a local solvable model so that the error function $E(z)$ has a uniformly small jump. Recalling that $\rho=\frac{1}{2}\min_{(z_{a}\neq z_{b})\in\mathcal{Z}\cup\mathcal{Z}^{*}}\\{|z_{a}-z_{b}|\\}$ and $dist(\mathcal{Z}\cup\mathcal{Z}^{*},\mathbb{R})>\rho$, we find that there is no discrete spectrum in $\mathcal{U}_{\pm z_{0}}$. Consequently, we have $T(z)=\delta(z)$, and RHP 6.14 can be reduced to the following model for the CSP equation [23]. ###### Riemann-Hilbert Problem 7.22. Find a matrix-valued function $M^{sp,+}$ with the following properties: * 1. $M^{sp,+}(y,t;z)$ is continuous in $\mathbb{C}\setminus\Sigma^{(2)}$. * 2.
$M_{+}^{sp,+}(y,t;z)=M_{-}^{sp,+}(y,t;z)V^{sp}(y,t;z),$ $z\in\Sigma^{(2)}$, where the jump matrix $V^{sp}(y,t;z)$ satisfies $\displaystyle V^{sp}=\left\\{\begin{aligned} &\left(\begin{array}[]{cc}1&r(z_{0})\delta^{-2}(z_{0})(z-z_{0})^{-2i\nu(z_{0})}e^{2it\theta}\\\ 0&1\\\ \end{array}\right),~{}~{}&z\in\Sigma_{1},\\\ &\left(\begin{array}[]{cc}1&0\\\ \frac{r^{*}(z_{0})}{1+|r(z_{0})|^{2}}\delta^{2}(z_{0})(z-z_{0})^{2i\nu(z_{0})}e^{-2it\theta}&1\\\ \end{array}\right),~{}~{}&z\in\Sigma_{2}\cup\Sigma_{9},\\\ &\left(\begin{array}[]{cc}1&\frac{r(z_{0})}{1+|r(z_{0})|^{2}}\delta^{-2}(z_{0})(z-z_{0})^{-2i\nu(z_{0})}e^{2it\theta}\\\ 0&1\\\ \end{array}\right),~{}~{}&z\in\Sigma_{3}\cup\Sigma_{12},\\\ &\left(\begin{array}[]{cc}1&0\\\ r^{*}(z_{0})\delta^{2}(z_{0})(z-z_{0})^{2i\nu(z_{0})}e^{-2it\theta}&1\\\ \end{array}\right),~{}~{}&z\in\Sigma_{4},\\\ &\left(\begin{array}[]{cc}1&0\\\ \frac{r^{*}(-z_{0})}{1+|r(-z_{0})|^{2}}\delta^{2}(-z_{0})(z+z_{0})^{-2i\nu(-z_{0})}e^{-2it\theta}&1\\\ \end{array}\right),~{}~{}&z\in\Sigma_{5}\cup\Sigma_{10},\\\ &\left(\begin{array}[]{cc}1&r(-z_{0})\delta^{-2}(-z_{0})(z+z_{0})^{2i\nu(-z_{0})}e^{2it\theta}\\\ 0&1\\\ \end{array}\right),~{}~{}&z\in\Sigma_{6},\\\ &\left(\begin{array}[]{cc}1&0\\\ r^{*}(-z_{0})\delta^{2}(-z_{0})(z+z_{0})^{-2i\nu(-z_{0})}e^{-2it\theta}&1\\\ \end{array}\right),~{}~{}&z\in\Sigma_{7},\\\ &\left(\begin{array}[]{cc}1&\frac{r(-z_{0})}{1+|r(-z_{0})|^{2}}\delta^{-2}(-z_{0})(z+z_{0})^{2i\nu(-z_{0})}e^{2it\theta}\\\ 0&1\\\ \end{array}\right),~{}~{}&z\in\Sigma_{8}\cup\Sigma_{11};\\\ \end{aligned}\right.$ (7.23) * 3. $M^{sp,+}(y,t;z)\rightarrow\mathbb{I},$ $z\rightarrow\infty$. Next, we apply the parabolic cylinder (PC) model to solve this problem near the phase points $z=\pm z_{0}$. Unlike the case of the short pulse equation, $M^{sp,+}(y,t;z)$ does not possess the symmetry $M^{sp,+}(z;\eta=1)=\sigma_{2}M^{sp,+}(-z;\eta=-1)\sigma_{2}$.
Therefore, we have to use the PC model to solve the problem near the phase points $z=\pm z_{0}$ separately. Figure 4. The jump contour for the local model near the phase points $z=\pm z_{0}$. We first study this model problem near the phase point $z_{0}$. Recall that $\displaystyle\delta(z)=\exp\left[i\int_{-z_{0}}^{z_{0}}\frac{\nu(s)}{s-z}ds\right]=\frac{(z-z_{0})^{i\nu(z_{0})}}{(z+z_{0})^{i\nu(-z_{0})}}e^{\omega(z)},$ (7.24) where $\omega(z)=-\frac{1}{2\pi i}\int_{-z_{0}}^{z_{0}}\log(z-s)d\log(1+|r(s)|^{2})$. As $z\rightarrow z_{0}$, $\displaystyle\theta(z)=\frac{1}{2z_{0}}+\frac{1}{4z^{3}_{0}}(z-z_{0})^{2}-\frac{1}{4\xi^{4}}(z-z_{0})^{3},$ (7.25) where $\xi$ is a number between $z$ and $z_{0}$. We apply the following scaling transformation $\displaystyle(N_{z_{0}}f)(z)=f\left(z_{0}+\sqrt{z_{0}^{3}t^{-1}}z\right),$ (7.26) then, we can derive that $\displaystyle(N_{z_{0}}\delta e^{-it\theta(z)})(z)=\delta_{(z_{0})}^{(0)}\delta_{(z_{0})}^{(1)}(z),$ (7.27) where $\displaystyle\delta_{(z_{0})}^{(0)}=(z_{0}^{3}t^{-1})^{\frac{i\nu(z_{0})}{2}}(2z_{0})^{-i\nu(-z_{0})}e^{\omega(z_{0})}e^{-\frac{it}{2z_{0}}},$ $\displaystyle\delta_{(z_{0})}^{(1)}(z)=z^{i\nu(z_{0})}\left(\frac{2z_{0}+\sqrt{z_{0}^{3}t^{-1}}z}{2z_{0}}\right)^{-i\nu(-z_{0})}e^{\omega\left(z_{0}+\sqrt{z_{0}^{3}t^{-1}}z\right)-\omega(z_{0})}e^{-\frac{iz^{2}}{4}}.$ From the expression of $\delta_{(z_{0})}^{(1)}(z)$, we can easily conclude that for $\zeta\in\\{\zeta=uz_{0}e^{\pm\frac{i\pi}{4}},-\frac{\rho}{3}<u<\frac{\rho}{3}\\}$, $\displaystyle\delta_{(z_{0})}^{(1)}(\zeta)\thicksim\zeta^{i\nu(z_{0})}e^{-\frac{i\zeta^{2}}{4}},~{}~{}as~{}~{}t\rightarrow+\infty,$ (7.28) so that the contribution of the cubic term can be neglected. Thus, for large $t$, the solution of the Riemann-Hilbert problem for $M^{sp}(y,t;z)$, which is formulated on the cross centered at $z=z_{0}$, can be approximated by the PC model; see Appendix A. 
We introduce the transformation $\displaystyle\begin{split}\lambda&=\lambda(z_{0})=\sqrt{\frac{t}{z_{0}^{3}}}(z-z_{0}),\\\ r_{0}=r_{0}^{z_{0}}&=r(z_{0})\delta(z_{0})^{-2}e^{2i\left(\nu(z_{0})\log(\frac{t}{(z_{0})^{3}})\right)}e^{\frac{it}{z_{0}^{2}}},\end{split}$ (7.29) then, the solution $M^{sp,+}(y,t;z)$ formulated on the cross centered at $z=z_{0}$ can be obtained by applying the solution $M^{pc,+}(\lambda)=\sigma M^{(pc),+}(\lambda)\sigma$, shown in Appendix $A$, where $\sigma=\left(\begin{array}[]{cc}0&1\\\ 1&0\\\ \end{array}\right)$. Then, the solution of $M^{sp,+}(y,t;z)$ at $z=z_{0}$ can be expressed as $\displaystyle M^{pc,+}(r_{0}^{z_{0}},\lambda)=\mathbb{I}+\frac{M_{1}^{pc,+}(z_{0})}{i\lambda}+O(\lambda^{-2}),$ (7.30) where $\displaystyle M_{1}^{pc,+}=\begin{pmatrix}0&-\beta_{21}^{z_{0}}(r_{0}^{z_{0}})\\\ \beta^{z_{0}}_{12}(r_{0}^{z_{0}})&0\end{pmatrix},$ with $\displaystyle\beta^{z_{0}}_{12}=\beta_{12}(r_{0}^{z_{0}})=\frac{\sqrt{2\pi}e^{i\pi/4}e^{-\pi\nu/2}}{r_{0}^{z_{0}}\Gamma(-i\nu)},\quad\beta_{21}^{z_{0}}=\beta_{21}(r_{0}^{z_{0}})=\frac{-\sqrt{2\pi}e^{-i\pi/4}e^{-\pi\nu/2}}{(r_{0}^{z_{0}})^{*}\Gamma(i\nu)}=\frac{\nu}{\beta^{z_{0}}_{12}}.$ By using (7.29), we obtain $\displaystyle\beta^{z_{0}}_{12}=\arg\tau(z_{0},+)e^{-4iy-i\nu(z_{0})\log(\frac{t^{2}}{(z_{0})^{6}})},$ (7.31) where $|\tau(z_{0},+)|^{2}=|\nu(z_{0})^{2}|$ and $\displaystyle\arg\tau(z_{0},+)=\frac{\pi}{4}+\arg\Gamma(i\nu(z_{0}))-\arg r(z_{0})-2\int^{z_{0}}_{-z_{0}}\log|s-z_{0}|\mathrm{d}\nu(s).$ Furthermore, we consider the model problem near the phase point $-z_{0}$. 
For $z\rightarrow-z_{0}$, we consider the scaling transformation $\displaystyle(N_{-z_{0}}f)(z)=f\left(-z_{0}+\sqrt{z_{0}^{3}t^{-1}}z\right),$ (7.32) then, we obtain $\displaystyle(N_{-z_{0}}\delta e^{-it\theta(z)})(z)=\delta_{(-z_{0})}^{(0)}\delta_{(-z_{0})}^{(1)}(z),$ (7.33) where $\displaystyle\delta_{(-z_{0})}^{(0)}=(z_{0}^{3}t^{-1})^{-\frac{i\nu(-z_{0})}{2}}(2z_{0})^{i\nu(z_{0})}e^{\tilde{\omega}(-z_{0})}e^{\frac{it}{2z_{0}}},$ $\displaystyle\delta_{(-z_{0})}^{(1)}(z)=(-z)^{-i\nu(-z_{0})}\left(\frac{2z_{0}-\sqrt{z_{0}^{3}t^{-1}}z}{2z_{0}}\right)^{i\nu(z_{0})}e^{\tilde{\omega}\left(-z_{0}+\sqrt{z_{0}^{3}t^{-1}}z\right)-\tilde{\omega}(-z_{0})}e^{\frac{iz^{2}}{4}},$ with $\displaystyle\tilde{\omega}(z)=-\frac{1}{2\pi i}\int_{-z_{0}}^{z_{0}}\log(s-z)d\log(1+|r(s)|^{2}).$ From the expression of $\delta_{(-z_{0})}^{(1)}(z)$, we can easily conclude that for $\zeta\in\\{\zeta=-uz_{0}e^{\pm\frac{i\pi}{4}},-\frac{\rho}{3}<u<\frac{\rho}{3}\\}$, $\displaystyle\delta_{(-z_{0})}^{(1)}(\zeta)\thicksim(-\zeta)^{-i\nu(-z_{0})}e^{\frac{i\zeta^{2}}{4}},~{}~{}as~{}~{}t\rightarrow+\infty,$ (7.34) so that the contribution of the cubic term can be neglected. Thus, for large $t$, the solution of the Riemann-Hilbert problem for $M^{sp}(y,t;z)$, which is formulated on the cross centered at $z=-z_{0}$, can be approximated by the PC model. 
We introduce the transformation $\displaystyle\begin{split}\lambda&=\lambda(-z_{0})=\sqrt{\frac{t}{z_{0}^{3}}}(z+z_{0}),\\\ r_{0}=r_{0}^{-z_{0}}&=\frac{r^{*}(-z_{0})}{1+|r(-z_{0})|^{2}}\delta(-z_{0})^{2}e^{2i\left(\nu(-z_{0})\log(\frac{t}{(z_{0})^{3}})\right)}e^{\frac{it}{z_{0}^{2}}},\end{split}$ (7.35) then, the solution $M^{sp,+}(y,t;z)$ formulated on the cross centered at $z=-z_{0}$ can be obtained by applying the solution $M^{(pc),+}(\lambda)$ shown in Appendix $A$, i.e., $\displaystyle M^{(pc),+}(r_{0}^{-z_{0}},\lambda)=I+\frac{M_{1}^{(pc),+}(-z_{0})}{i\lambda}+O(\lambda^{-2}),$ (7.36) where $\displaystyle M_{1}^{(pc),+}(-z_{0})=\begin{pmatrix}0&\beta_{12}^{-z_{0}}(r_{0}^{-z_{0}})\\\ -\beta^{-z_{0}}_{21}(r_{0}^{-z_{0}})&0\end{pmatrix},$ with $\displaystyle\beta^{-z_{0}}_{12}=\beta_{12}(r_{0}^{-z_{0}})=\frac{\sqrt{2\pi}e^{i\pi/4}e^{-\pi\nu/2}}{r_{0}^{-z_{0}}\Gamma(-i\nu)},\quad\beta_{21}^{-z_{0}}=\beta_{21}(r_{0}^{-z_{0}})=\frac{-\sqrt{2\pi}e^{-i\pi/4}e^{-\pi\nu/2}}{(r_{0}^{-z_{0}})^{*}\Gamma(i\nu)}=\frac{\nu}{\beta^{-z_{0}}_{12}}.$ By using (7.35), we obtain $\displaystyle\beta^{-z_{0}}_{12}=\arg\tau(-z_{0},+)e^{-4iy-i\nu(z_{0})\log\left(\frac{t^{2}}{(z_{0})^{6}}\right)},$ (7.37) where $|\tau(-z_{0},+)|^{2}=|\nu(-z_{0})^{2}|$ and $\displaystyle\arg\tau(-z_{0},+)=\frac{\pi}{4}+\arg\Gamma(i\nu(z_{0}))-\arg\left(\frac{r^{*}(-z_{0})}{1+|r(-z_{0})|^{2}}\right)-2\int^{z_{0}}_{-z_{0}}\log|s+z_{0}|\mathrm{d}\nu(s).$ Noting that the origin is the reference point from which the rays emanate in the model problem, we still use the notation $\lambda$ in the following analysis. 
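Both coefficient pairs above satisfy the same relation $\beta_{21}=\nu/\beta_{12}$, which follows from the reflection identity $|\Gamma(i\nu)|^{2}=\pi/(\nu\sinh\pi\nu)$ together with $\nu=-\frac{1}{2\pi}\log(1+|r_{0}|^{2})$. This can be checked numerically as follows (a sketch; the value of $r_{0}$ is an arbitrary illustrative sample, not taken from the analysis):

```python
import cmath
import math

from scipy.special import gamma  # Gamma function with complex-argument support

# Hypothetical sample value of the rescaled reflection coefficient r0.
r0 = 0.3 + 0.4j
nu = -math.log(1.0 + abs(r0) ** 2) / (2.0 * math.pi)

# beta_12 and beta_21 exactly as defined in the text.
pref = math.sqrt(2.0 * math.pi) * math.exp(-math.pi * nu / 2.0)
beta12 = pref * cmath.exp(1j * math.pi / 4.0) / (r0 * gamma(-1j * nu))
beta21 = -pref * cmath.exp(-1j * math.pi / 4.0) / (r0.conjugate() * gamma(1j * nu))

# The product collapses to nu, i.e. beta_21 = nu / beta_12.
assert abs(beta12 * beta21 - nu) < 1e-12
```

The cancellation uses only $\Gamma(i\nu)\Gamma(-i\nu)=|\Gamma(i\nu)|^{2}$ for real $\nu$ and $e^{-2\pi\nu}=1+|r_{0}|^{2}$, so it holds for any admissible $r_{0}$.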
Considering that $M^{sp,+}$ admits the asymptotic property $\displaystyle M^{sp,+}=\mathbb{I}+\frac{M_{1}^{pc,+}(z_{0})}{i\lambda}+\frac{M_{1}^{(pc),+}(-z_{0})}{i\lambda}+O(\lambda^{-2}),$ (7.38) we then substitute the first formula of (7.29) and (7.35) into (7.38), and obtain $\displaystyle M^{sp,+}=\mathbb{I}+\frac{\sqrt{z_{0}^{3}}}{i\sqrt{t}}\frac{M_{1}^{pc,+}(z_{0})}{z-z_{0}}+\frac{\sqrt{z_{0}^{3}}}{i\sqrt{t}}\frac{M_{1}^{(pc),+}(-z_{0})}{z+z_{0}}+O(\lambda^{-2}).$ (7.39) In the local domain $\mathcal{U}_{\pm z_{0}}$, we can obtain the result that $\displaystyle|M^{sp,+}-\mathbb{I}|\lesssim O(t^{-\frac{1}{2}}),~{}~{}as~{}~{}t\rightarrow+\infty,$ (7.40) which implies that $\displaystyle\|M^{sp,+}(z)\|_{\infty}\lesssim 1.$ (7.41) Since RHP 7.22 and 5.13 possess the same jump conditions in $\mathcal{U}_{\pm z_{0}}$, we apply $M^{sp,+}(z)$ to define a local model on the two disks $\mathcal{U}_{\pm z_{0}}$: $\displaystyle M^{(\pm z_{0})}=M^{(out)}(z)M^{sp,+}(z),$ (7.42) which is a bounded function in $\mathcal{U}_{\pm z_{0}}$ and has the same jump matrix as $M^{(2)}_{RHP}(z)$. ### 7.3 The small-norm RHP for $E(z)$ According to the transformation (7.1), we have $\displaystyle E(z)=\left\\{\begin{aligned} &M^{(2)}_{RHP}(z)M^{(out)}(z)^{-1},&&z\in\mathbb{C}\setminus\mathcal{U}_{\pm z_{0}},\\\ &M^{(2)}_{RHP}(z)M^{sp,+}(z)^{-1}M^{(out)}(z)^{-1},&&z\in\mathcal{U}_{\pm z_{0}},\end{aligned}\right.$ (7.43) which is analytic in $\mathbb{C}\setminus\Sigma^{(E)}$ where $\Sigma^{(E)}=\partial\mathcal{U}_{\pm z_{0}}\bigcup(\Sigma^{(2)}\setminus\mathcal{U}_{\pm z_{0}})$. Figure 5. The jump contour $\Sigma^{(E)}=\partial\mathcal{U}_{\pm z_{0}}\bigcup(\Sigma^{(2)}\setminus\mathcal{U}_{\pm z_{0}})$ for the error function $E(z)$. Then it is easy to verify that $E(z)$ satisfies the following Riemann-Hilbert problem. ###### Riemann-Hilbert Problem 7.23. Find a matrix-valued function $E(z)$ such that * 1. 
$E$ is analytic in $\mathbb{C}\setminus\Sigma^{(E)}$; * 2. $E^{*}(z^{*})=\sigma_{2}E(z)\sigma_{2}$; * 3. $E(z)=\mathbb{I}+O(z^{-1})$, $z\rightarrow\infty$; * 4. $E_{+}(z)=E_{-}(z)V^{(E)}(z)$, $z\in\Sigma^{(E)}$, where $\displaystyle V^{(E)}(z)=\left\\{\begin{aligned} &M^{(out)}(z)V^{(2)}(z)M^{(out)}(z)^{-1},&&z\in\Sigma^{(2)}\setminus\mathcal{U}_{\pm z_{0}},\\\ &M^{(out)}(z)M^{sp,+}(z)M^{(out)}(z)^{-1},&&z\in\partial\mathcal{U}_{\pm z_{0}}.\end{aligned}\right.$ (7.44) By applying Eq.(7.2), Eq.(7.3) and Eq.(7.21), it is easy to obtain that as $t\rightarrow+\infty$, $\displaystyle|V^{(E)}(z)-\mathbb{I}|=\left\\{\begin{aligned} &O\left(e^{-t\frac{\sqrt{2}}{16z_{0}^{2}}|z\mp z_{0}|^{2}}\right)&&z\in\Sigma_{\pm}^{(2)}\setminus\mathcal{U}_{\pm z_{0}},\\\ &O\left(e^{-\frac{t}{4z_{0}}}\right)&&z\in\Sigma_{0}^{(2)}.\end{aligned}\right.$ (7.45) Meanwhile, for $z\in\partial\mathcal{U}_{\pm z_{0}}$, using Eq.(7.21) and (7.40), we obtain that $\displaystyle|V^{(E)}(z)-\mathbb{I}|=|M^{(out)}(z)(M^{sp,+}(z)-\mathbb{I})M^{(out)}(z)^{-1}|=O(t^{-1/2}),~{}~{}as~{}~{}t\rightarrow+\infty.$ (7.46) Then, the existence and uniqueness of the solution of RHP 7.23 follow from the theory of small-norm Riemann-Hilbert problems. Moreover, we obtain that $\displaystyle E(z)=\mathbb{I}+\frac{1}{2\pi i}\int_{\Sigma^{(E)}}\frac{(\mathbb{I}+\mu_{E}(s))(V^{(E)}(s)-\mathbb{I})}{s-z}ds$ (7.47) where $\mu_{E}\in L^{2}(\Sigma^{(E)})$ and admits $\displaystyle(1-C_{\omega_{E}})\mu_{E}=\mathbb{I},$ (7.48) where $C_{\omega_{E}}$ is an integral operator which is defined by $\displaystyle C_{\omega_{E}}f=C_{-}\left(f(V^{(E)}-\mathbb{I})\right),$ $\displaystyle C_{-}f(z)=\lim_{z^{\prime}\rightarrow z\in\Sigma^{(E)}_{-}}\frac{1}{2\pi i}\int_{\Sigma^{(E)}}\frac{f(s)}{s-z^{\prime}}ds,$ where $C_{-}$ is the Cauchy projection operator. 
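The solvability of (7.48) rests on the standard small-norm argument: once $\|C_{\omega_{E}}\|<1$, the operator $1-C_{\omega_{E}}$ is inverted by the convergent Neumann series $\sum_{k\geq 0}C_{\omega_{E}}^{k}$. A finite-dimensional analogue of this resolvent construction can be sketched as follows (a generic linear-algebra illustration with a random matrix, not the actual Cauchy operator):

```python
import numpy as np

rng = np.random.default_rng(0)

# A random operator rescaled to small norm, mimicking ||C_{omega_E}|| = O(t^{-1/2}) << 1.
A = rng.standard_normal((5, 5))
A *= 0.1 / np.linalg.norm(A, 2)  # the spectral norm of A is now 0.1 < 1

# Truncated Neumann series for (I - A)^{-1} = sum_k A^k.
inv_series = sum(np.linalg.matrix_power(A, k) for k in range(60))
inv_exact = np.linalg.inv(np.eye(5) - A)

assert np.allclose(inv_series, inv_exact)
```

The truncation error is controlled by $\|A\|^{K}/(1-\|A\|)$, which is exactly how the operator norm estimate below yields existence and uniqueness of $\mu_{E}$.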
Then, based on the properties of the Cauchy projection operator $C_{-}$, and the estimate (7.46), we obtain that $\displaystyle\|C_{\omega_{E}}\|_{L^{2}(\Sigma^{(E)})}\lesssim\|C_{-}\|_{L^{2}(\Sigma^{(E)})}\|V^{(E)}-\mathbb{I}\|_{L^{\infty}(\Sigma^{(E)})}\lesssim O(t^{-1/2}),$ (7.49) which implies that $1-C_{\omega_{E}}$ is invertible and guarantees the existence and uniqueness of $\mu_{E}$; hence the existence and uniqueness of $E(z)$ follow. This also justifies the definition of $M^{(2)}_{RHP}$. Furthermore, to reconstruct the solutions of $u(y,t)$, the asymptotic behavior of $E(z)$ as $z\rightarrow 0$ and the large-time asymptotic behavior of $E(0)$ are needed. By comparing the estimate (7.45) with (7.46), we find that for $t\rightarrow+\infty$, we only need to consider the contribution from $\partial\mathcal{U}_{\pm z_{0}}$ because $V^{(E)}-\mathbb{I}$ decays to zero exponentially on the remaining contours. Then, as $z\rightarrow 0$, we can obtain that $\displaystyle E(z)=E(0)+E_{1}z+O(z^{2}),$ (7.50) where $\displaystyle E(0)=\mathbb{I}+\frac{1}{2\pi i}\int_{\Sigma^{(E)}}\frac{(\mathbb{I}+\mu_{E}(s))(V^{(E)}(s)-I)}{s}ds,$ (7.51) $\displaystyle E_{1}=-\frac{1}{2\pi i}\int_{\Sigma^{(E)}}\frac{(\mathbb{I}+\mu_{E}(s))(V^{(E)}(s)-I)}{s^{2}}ds.$ (7.52) Then, the large-time, i.e., $t\rightarrow+\infty$, asymptotic behavior of $E(0)$ and $E_{1}$ can be derived as $\displaystyle E(0)=$ $\displaystyle\mathbb{I}+\frac{1}{2i\pi}\int_{\partial\mathcal{U}_{\pm z_{0}}}(V^{(E)}(s)-I)ds+o(t^{-1})$ $\displaystyle=$ $\displaystyle\mathbb{I}+\frac{\sqrt{z_{0}}}{i\sqrt{t}}M^{(out)}(z_{0})^{-1}M_{1}^{pc,+}(z_{0})M^{(out)}(z_{0})$ $\displaystyle-\frac{\sqrt{z_{0}}}{i\sqrt{t}}M^{(out)}(-z_{0})^{-1}M_{1}^{(pc),+}(-z_{0})M^{(out)}(-z_{0})+\mathcal{O}(t^{-1}),$ (7.53) $\displaystyle E_{1}=$ $\displaystyle\frac{1}{i\sqrt{z_{0}t}}M^{(out)}(z_{0})^{-1}M_{1}^{pc,+}(z_{0})M^{(out)}(z_{0})$ 
$\displaystyle+\frac{1}{i\sqrt{z_{0}t}}M^{(out)}(-z_{0})^{-1}M_{1}^{(pc),+}(-z_{0})M^{(out)}(-z_{0})+\mathcal{O}(t^{-1}).$ (7.54) From (7.53), we can derive that $\displaystyle E(0)^{-1}=\mathbb{I}+O(t^{-1/2}).$ (7.55) ## 8 Pure $\bar{\partial}$-RH problem In this section, we study the remaining $\bar{\partial}$-RH problem. The $\bar{\partial}$-RH problem 6.15 for $M^{(3)}(z)$ is equivalent to the following integral equation $\displaystyle M^{(3)}(z)=\mathbb{I}-\frac{1}{\pi}\int_{\mathbb{C}}\frac{M^{(3)}W^{(3)}}{s-z}\mathrm{d}A(s),$ (8.1) where $\mathrm{d}A(s)$ is the two-dimensional Lebesgue measure. Further, equation (8.1) can be written in the operator form $\displaystyle(\mathbb{I}-\mathrm{S})M^{(3)}(z)=\mathbb{I},$ (8.2) where $\mathrm{S}$ is the Cauchy-type integral operator $\displaystyle\mathrm{S}[f](z)=-\frac{1}{\pi}\iint_{\mathbb{C}}\frac{f(s)W^{(3)}(s)}{s-z}\mathrm{d}A(s).$ (8.3) We need to show that the operator $\mathrm{I}-\mathrm{S}$ is invertible, so that the solution $M^{(3)}(z)$ exists. ###### Lemma 8.24. For $t\rightarrow+\infty$, the operator (8.3) satisfies $\displaystyle||\mathrm{S}||_{L^{\infty}\rightarrow L^{\infty}}\leq ct^{-1/6},$ (8.4) where $c$ is a constant. ###### Proof. We prove the case in which the matrix function is supported in the region $\Omega_{1}$; the other cases can be proved similarly. Let $f\in L^{\infty}(\Omega_{1})$, $s=u+iv$ and $z=x+iy$. Then based on (5.10) and (6.2), we can derive that $\displaystyle|S[f](z)|$ $\displaystyle\leq\frac{1}{\pi}\big{|}f\ \big{|}_{L^{\infty}(\Omega_{1})}\iint_{\Omega_{1}}\frac{|M^{(2)}_{RHP}(s)\bar{\partial}R_{1}(s)M^{(2)}_{RHP}(s)^{-1}|}{|s-z|}dA(s)$ $\displaystyle\leq c\iint_{\Omega_{1}}\frac{|\bar{\partial}R_{1}(s)||e^{-tv\frac{u^{2}+v^{2}-z_{0}^{2}}{2(u^{2}+v^{2})z_{0}^{2}}}|}{|s-z|}dudv,$ (8.5) where $c$ is a constant. 
Based on (5.6) and the estimates shown in Appendix $B$, from (8.5), we obtain that $\displaystyle||\mathrm{S}||_{L^{\infty}\rightarrow L^{\infty}}\leq c(I_{1}+I_{2}+I_{3})\leq ct^{-1/6},$ (8.6) where $\displaystyle I_{1}=\iint_{\Omega_{1}}\frac{|\bar{\partial}\chi_{\mathcal{Z}}(s)|e^{-tv\frac{u^{2}+v^{2}-z_{0}^{2}}{2(u^{2}+v^{2})z_{0}^{2}}}}{|s-z|}dA(s),~{}~{}I_{2}=\iint_{\Omega_{1}}\frac{|r^{\prime}(u)|e^{-tv\frac{u^{2}+v^{2}-z_{0}^{2}}{2(u^{2}+v^{2})z_{0}^{2}}}}{|s-z|}dA(s),$ (8.7) and $\displaystyle I_{3}=\iint_{\Omega_{1}}\frac{|s-z_{0}|^{-\frac{1}{2}}e^{-tv\frac{u^{2}+v^{2}-z_{0}^{2}}{2(u^{2}+v^{2})z_{0}^{2}}}}{|s-z|}dA(s).$ (8.8) ∎ Next, our purpose is to reconstruct the large-time asymptotic behavior of $u(x,t)$. According to (3.18), we need the large-time asymptotic behaviors of $M^{(3)}(0)$ and $M_{1}^{(3)}(y,t)$ which are defined as $\displaystyle M^{(3)}(z)=M^{(3)}(0)+M_{1}^{(3)}(y,t)z+O(z^{2}),~{}~{}z\rightarrow 0,$ where $\displaystyle M^{(3)}(0)=\mathbb{I}-\frac{1}{\pi}\iint_{\mathbb{C}}\frac{M^{(3)}(s)W^{(3)}(s)}{s}\mathrm{d}A(s),$ $\displaystyle M^{(3)}_{1}(y,t)=\frac{1}{\pi}\iint_{\mathbb{C}}\frac{M^{(3)}(s)W^{(3)}(s)}{s^{2}}\mathrm{d}A(s).$ Then $M^{(3)}(0)$ and $M^{(3)}_{1}(y,t)$ satisfy the following lemma. ###### Lemma 8.25. For $t\rightarrow+\infty$, $M^{(3)}(0)$ and $M^{(3)}_{1}(y,t)$ admit the following inequalities $\displaystyle\|M^{(3)}(0)-\mathbb{I}\|_{L^{\infty}}\lesssim t^{-1},$ (8.9) $\displaystyle M^{(3)}_{1}(y,t)\lesssim t^{-1}.$ (8.10) The proof of this lemma is similar to that shown in Appendix $B$. ## 9 Soliton resolution for the CSP equation Now, we are going to construct the long-time asymptotics of the CSP equation (1.3). 
Recall a series of transformations including (4.12), (5.4), (6.1) and (7.1), i.e., $\displaystyle M(z)\leftrightarrows M^{(1)}(z)\leftrightarrows M^{(2)}(z)\leftrightarrows M^{(3)}(z)\leftrightarrows E(z),$ we then obtain $\displaystyle M(z)=M^{(3)}(z)E(z)M^{(out)}(z)R^{(2)^{-1}}(z)T^{-\sigma_{3}}(z),~{}~{}z\in\mathbb{C}\setminus\mathcal{U}_{\pm z_{0}}.$ In order to recover the solution $u(x,t)$, we take $z\rightarrow 0$ along the imaginary axis, which implies $z\in\Omega_{2}$ or $z\in\Omega_{5}$, thus $R^{(2)}(z)=I$. Then, we obtain $\displaystyle M(0)=M^{(3)}(0)E(0)M^{(out)}(0)T^{-\sigma_{3}}(0),$ $\displaystyle M=\left(M^{(3)}(0)+M^{(3)}_{1}z+\cdots\right)\left(E(0)+E_{1}z+\cdots\right)\left(M^{(out)}(z)\right)\left(T^{-\sigma_{3}}(0)+\tilde{T}_{1}^{-\sigma_{3}}z+\cdots\right).$ Based on the above analysis, we can derive that $\displaystyle M(0)^{-1}M(z)=$ $\displaystyle T^{\sigma_{3}}(0)M^{(out)}(0)^{-1}M^{(out)}(z)T^{-\sigma_{3}}(0)z$ $\displaystyle+T^{\sigma_{3}}(0)M^{(out)}(0)^{-1}E_{1}M^{(out)}(z)T^{-\sigma_{3}}(0)z$ $\displaystyle+T^{\sigma_{3}}(0)M^{(out)}(0)^{-1}M^{(out)}(z)T^{-\sigma_{3}}(0)z+O(t^{-1}).$ Then, according to the reconstruction formula (3.18), (7.22) and (7.54), as $t\rightarrow+\infty$, we obtain that $\displaystyle u(x,t)e^{-2d}$ $\displaystyle=u(y(x,t),t)e^{-2d}$ $\displaystyle=u_{sol}(y(x,t),t;\sigma_{d}(I))T^{2}(0)(1+T_{1})-it^{-\frac{1}{2}}f^{+}_{12}+O(t^{-1}),$ (9.1) where $\displaystyle y(x,t)=x-$ $\displaystyle c_{+}(x,t,\sigma_{d}(I))-iT_{1}^{-1}-it^{-\frac{1}{2}}f^{+}_{11}+O(t^{-1}),$ $\displaystyle f^{+}_{12}=\frac{1}{i\sqrt{z_{0}}}$ $\displaystyle[M^{(out)}(0)^{-1}(M^{(out)}(z_{0})^{-1}M_{1}^{pc,+}(z_{0})M^{(out)}(z_{0})$ $\displaystyle+M^{(out)}(-z_{0})^{-1}M_{1}^{(pc),+}(-z_{0})M^{(out)}(-z_{0}))M^{(out)}(0)]_{12},$ $\displaystyle f^{+}_{11}=\frac{1}{i\sqrt{z_{0}}}$ $\displaystyle[M^{(out)}(0)^{-1}(M^{(out)}(z_{0})^{-1}M_{1}^{pc,+}(z_{0})M^{(out)}(z_{0})$ 
$\displaystyle+M^{(out)}(-z_{0})^{-1}M_{1}^{(pc),+}(-z_{0})M^{(out)}(-z_{0}))M^{(out)}(0)]_{11}.$ The long-time asymptotic behavior (9.1) gives the soliton resolution for the initial value problem of the CSP equation, which consists of the soliton term determined by the $N(I)$-soliton solution on the discrete spectrum and a $t^{-\frac{1}{2}}$-order term on the continuous spectrum, with a residual error of order $O(t^{-1})$. ###### Remark 9.26. The steps in the steepest descent analysis of RHP 3.9 for $t\rightarrow-\infty$ are similar to those for the case $t\rightarrow+\infty$ presented in Sections $4$-$8$. When we consider $t\rightarrow-\infty$, the main difference can be traced back to the fact that the regions of growth and decay of the exponential factors $e^{2it\theta}$ are reversed, see Fig. 1. Here, we leave the detailed calculations to the interested reader. Finally, we can give the results shown in Theorem 1.1. ## Acknowledgements This work was supported by the National Natural Science Foundation of China under Grant No. 11975306, the Natural Science Foundation of Jiangsu Province under Grant No. BK20181351, the Six Talent Peaks Project in Jiangsu Province under Grant No. JY-059, and the Fundamental Research Fund for the Central Universities under the Grant Nos. 2019ZDPY07 and 2019QNA35. ## 10 Appendix A: The parabolic cylinder model problem Here, we describe the solution of the parabolic cylinder model problem [41, 42]. Define the contour $\Sigma^{pc}=\cup_{j=1}^{4}\Sigma_{j}^{pc}$ where $\displaystyle\Sigma_{j}^{pc}=\left\\{\lambda\in\mathbb{C}|\arg\lambda=\frac{2j-1}{4}\pi\right\\}.$ (A.1) For $r_{0}\in\mathbb{C}$, let $\nu(r_{0})=-\frac{1}{2\pi}\log(1+|r_{0}|^{2})$; we consider the following parabolic cylinder model Riemann-Hilbert problem. ###### Riemann-Hilbert Problem 10.27. 
Find a matrix-valued function $M^{(pc)}(\lambda)$ such that $\displaystyle\bullet\quad M^{(pc)}(\lambda)~{}\text{is analytic in}~{}\mathbb{C}\setminus\Sigma^{pc},$ (A.2) $\displaystyle\bullet\quad M_{+}^{(pc)}(\lambda)=M_{-}^{(pc)}(\lambda)V^{(pc)}(\lambda),\quad\lambda\in\Sigma^{pc},$ (A.3) $\displaystyle\bullet\quad M^{(pc)}(\lambda)=\mathbb{I}+\frac{M_{1}}{\lambda}+O(\lambda^{-2}),\quad\lambda\rightarrow\infty,$ (A.4) where $\displaystyle V^{(pc)}(\lambda)=\left\\{\begin{aligned} \lambda^{i\nu\hat{\sigma}_{3}}e^{-\frac{i\lambda^{2}}{4}\hat{\sigma}_{3}}\left(\begin{array}[]{cc}1&0\\\ r_{0}&1\\\ \end{array}\right),\quad\lambda\in\Sigma_{1}^{pc},\\\ \lambda^{i\nu\hat{\sigma}_{3}}e^{-\frac{i\lambda^{2}}{4}\hat{\sigma}_{3}}\left(\begin{array}[]{cc}1&\frac{r^{*}_{0}}{1+|r_{0}|^{2}}\\\ 0&1\\\ \end{array}\right),\quad\lambda\in\Sigma_{2}^{pc},\\\ \lambda^{i\nu\hat{\sigma}_{3}}e^{-\frac{i\lambda^{2}}{4}\hat{\sigma}_{3}}\left(\begin{array}[]{cc}1&0\\\ \frac{r_{0}}{1+|r_{0}|^{2}}&1\\\ \end{array}\right),\quad\lambda\in\Sigma_{3}^{pc},\\\ \lambda^{i\nu\hat{\sigma}_{3}}e^{-\frac{i\lambda^{2}}{4}\hat{\sigma}_{3}}\left(\begin{array}[]{cc}1&r^{*}_{0}\\\ 0&1\\\ \end{array}\right),\quad\lambda\in\Sigma_{4}^{pc},\end{aligned}\right.$ (A.5) Figure 6. 
Jump matrix $V^{(pc)}$. We know that the parabolic cylinder equation can be expressed as [43] $\displaystyle\left(\frac{\partial^{2}}{\partial z^{2}}+\left(\frac{1}{2}-\frac{z^{2}}{4}+a\right)\right)D_{a}=0.$ As shown in the literature [26, 44], we obtain the explicit solution $M^{(pc)}(\lambda,r_{0})$: $\displaystyle M^{(pc)}(\lambda,r_{0})=\Phi(\lambda,r_{0})\mathcal{P}(\lambda,r_{0})e^{\frac{i}{4}\lambda^{2}\sigma_{3}}\lambda^{-i\nu\sigma_{3}},$ where $\displaystyle\mathcal{P}(\lambda,r_{0})=\left\\{\begin{aligned} &\left(\begin{array}[]{cc}1&0\\\ -r_{0}&1\\\ \end{array}\right),\quad&\lambda\in\Omega_{1},\\\ &\left(\begin{array}[]{cc}1&-\frac{r^{*}_{0}}{1+|r_{0}|^{2}}\\\ 0&1\\\ \end{array}\right),\quad&\lambda\in\Omega_{3},\\\ &\left(\begin{array}[]{cc}1&0\\\ \frac{r_{0}}{1+|r_{0}|^{2}}&1\\\ \end{array}\right),\quad&\lambda\in\Omega_{4},\\\ &\left(\begin{array}[]{cc}1&r^{*}_{0}\\\ 0&1\\\ \end{array}\right),\quad&\lambda\in\Omega_{6},\\\ &~{}~{}~{}\mathbb{I},\quad&\lambda\in\Omega_{2}\cup\Omega_{5},\end{aligned}\right.$ and $\displaystyle\Phi(\lambda,r_{0})=\left\\{\begin{aligned} \left(\begin{array}[]{cc}e^{-\frac{3\pi\nu}{4}}D_{i\nu}\left(e^{-\frac{3i\pi}{4}}\lambda\right)&-i\beta_{12}e^{-\frac{\pi}{4}(\nu-i)}D_{-i\nu-1}\left(e^{-\frac{i\pi}{4}}\lambda\right)\\\ i\beta_{21}e^{-\frac{3\pi(\nu+i)}{4}}D_{i\nu-1}\left(e^{-\frac{3i\pi}{4}}\lambda\right)&e^{\frac{\pi\nu}{4}}D_{-i\nu}\left(e^{-\frac{i\pi}{4}}\lambda\right)\\\ \end{array}\right),\quad\lambda\in\mathbb{C}^{+},\\\ \left(\begin{array}[]{cc}e^{\frac{\pi\nu}{4}}D_{i\nu}\left(e^{\frac{i\pi}{4}}\lambda\right)&-i\beta_{12}e^{-\frac{3\pi(\nu-i)}{4}}D_{-i\nu-1}\left(e^{\frac{3i\pi}{4}}\lambda\right)\\\ i\beta_{21}e^{\frac{\pi}{4}(\nu+i)}D_{i\nu-1}\left(e^{\frac{i\pi}{4}}\lambda\right)&e^{-\frac{3\pi\nu}{4}}D_{-i\nu}\left(e^{\frac{3i\pi}{4}}\lambda\right)\\\ \end{array}\right),\quad\lambda\in\mathbb{C}^{-},\end{aligned}\right.$ with 
$\displaystyle\beta_{12}=\frac{\sqrt{2\pi}e^{i\pi/4}e^{-\pi\nu/2}}{r_{0}\Gamma(-i\nu)},\quad\beta_{21}=\frac{-\sqrt{2\pi}e^{-i\pi/4}e^{-\pi\nu/2}}{r_{0}^{*}\Gamma(i\nu)}=\frac{\nu}{\beta_{12}}.$ Then, it is not hard to obtain the asymptotic behavior of the solution by using the well-known asymptotic behavior of $D_{a}(z)$, $\displaystyle M^{(pc)}(r_{0},\lambda)=I+\frac{M_{1}^{(pc)}}{i\lambda}+O(\lambda^{-2}),$ (A.6) where $\displaystyle M_{1}^{(pc)}=\begin{pmatrix}0&\beta_{12}\\\ -\beta_{21}&0\end{pmatrix}.$ ## 11 Appendix B: Detailed calculations for the pure $\bar{\partial}$-Problem ###### Proposition 11.28. For $t>0$ and $z\in\Omega_{1}$, there exist constants $c_{j}~(j=1,2,3)$ such that $I_{j}~(j=1,2,3)$ defined in (8.7) and (8.8) satisfy the following estimates $\displaystyle I_{j}\leq c_{j}t^{-\frac{1}{6}},~{}~{}j=1,2,3.$ (B.1) ###### Proof. Let $s=u+iv$ and $z=x+iy$. For $s\in\Omega_{1}$, we know that $\frac{u^{2}+v^{2}-z_{0}^{2}}{(u^{2}+v^{2})z_{0}^{2}}>\frac{v^{2}}{(u^{2}+v^{2})z_{0}^{2}}>0$. Therefore, we assume that there exists an arbitrarily small constant $\varepsilon$ such that $\frac{u^{2}+v^{2}-z_{0}^{2}}{(u^{2}+v^{2})z_{0}^{2}}\geqslant\varepsilon>0$. 
Then, using the fact that $\displaystyle\Big{|}\Big{|}\frac{1}{s-z}\Big{|}\Big{|}_{L^{2}(v+z_{0},+\infty)}=\left(\int_{v+z_{0}}^{\infty}\frac{1}{|s-z|^{2}}du\right)^{\frac{1}{2}}\leq\left(\frac{\pi}{|v-y|}\right)^{\frac{1}{2}},$ we can derive that $\displaystyle\begin{split}|I_{1}|&\leq\int_{0}^{+\infty}\int_{v+z_{0}}^{+\infty}\frac{|\bar{\partial}\chi_{\mathcal{Z}}(s)|e^{-tv\frac{u^{2}+v^{2}-z_{0}^{2}}{2(u^{2}+v^{2})z_{0}^{2}}}}{|s-z|}dudv\\\ &\leq\int_{0}^{+\infty}e^{-tv\frac{\varepsilon}{2}}\big{|}\big{|}\bar{\partial}\chi_{\mathcal{Z}}(s)\big{|}\big{|}_{L^{2}(v+z_{0},+\infty)}\Big{|}\Big{|}\frac{1}{s-z}\Big{|}\Big{|}_{L^{2}(v+z_{0},+\infty)}dv\\\ &\lesssim\int_{0}^{y}e^{-tv\frac{\varepsilon}{2}}\frac{1}{\sqrt{y-v}}dv+\int_{y}^{+\infty}e^{-tv\frac{\varepsilon}{2}}\frac{1}{\sqrt{v-y}}dv.\end{split}$ (B.2) Then, using the fact that $e^{-z}\leq z^{-1/6}$, a direct calculation shows that $\displaystyle\int_{0}^{y}e^{-tv\frac{\varepsilon}{2}}\frac{1}{\sqrt{y-v}}dv\lesssim t^{-\frac{1}{6}},$ $\displaystyle\int_{y}^{+\infty}e^{-tv\frac{\varepsilon}{2}}\frac{1}{\sqrt{v-y}}dv\lesssim t^{-\frac{1}{2}}.$ Then, we have $I_{1}\lesssim t^{-\frac{1}{6}}$. 
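The elementary inequality $e^{-z}\leq z^{-1/6}$ invoked above holds for all $z>0$ because $\sup_{z>0}z^{1/6}e^{-z}=(1/6)^{1/6}e^{-1/6}<1$. Both this bound and the boundedness of $t^{1/6}\int_{0}^{y}e^{-tv\varepsilon/2}(y-v)^{-1/2}dv$ can be checked numerically (a sketch with arbitrary illustrative sample values $y=1$, $\varepsilon=1$):

```python
import math

# sup_{z>0} z^{1/6} e^{-z} is attained at z = 1/6 and is strictly below 1,
# which yields e^{-z} <= z^{-1/6} for every z > 0.
sup_val = (1.0 / 6.0) ** (1.0 / 6.0) * math.exp(-1.0 / 6.0)
assert sup_val < 1.0

def J(t, y=1.0, eps=1.0, n=20000):
    """Midpoint rule for int_0^y e^{-t v eps/2} (y - v)^{-1/2} dv.

    The midpoint rule avoids evaluating the integrable singularity at v = y.
    """
    h = y / n
    return h * sum(
        math.exp(-t * (i + 0.5) * h * eps / 2.0) / math.sqrt(y - (i + 0.5) * h)
        for i in range(n)
    )

# t^{1/6} J(t) stays bounded (in fact it decays), consistent with J(t) = O(t^{-1/6}).
ratios = [J(t) * t ** (1.0 / 6.0) for t in (10.0, 100.0, 1000.0)]
assert all(r < 3.0 for r in ratios)
```

Since the exponential concentrates the mass near $v=0$ where $(y-v)^{-1/2}$ is bounded, the integral in fact decays faster than $t^{-1/6}$, so the stated upper bound is comfortably satisfied.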
Similarly, considering that $r\in H^{1,1}(\mathbb{R})$, we obtain the estimate $\displaystyle|I_{2}|\leq\int_{0}^{+\infty}\int_{v+z_{0}}^{+\infty}\frac{|r^{\prime}(u)|e^{-tv\frac{\varepsilon}{2}}}{|s-z|}dudv\lesssim t^{-\frac{1}{6}}.$ (B.3) To obtain the estimate of $I_{3}$, we consider the following $L^{k}~(k>2)$ norm $\displaystyle\bigg{|}\bigg{|}\frac{1}{\sqrt{|s-z_{0}|}}\bigg{|}\bigg{|}_{L^{k}}\leq\left(\int_{v+z_{0}}^{+\infty}\frac{1}{|u-z_{0}+iv|^{\frac{k}{2}}}du\right)^{\frac{1}{k}}\leq cv^{\frac{1}{k}-\frac{1}{2}}.$ (B.4) Similarly, we can derive that $\displaystyle\bigg{|}\bigg{|}\frac{1}{|s-z|}\bigg{|}\bigg{|}_{L^{k}}\leq c|v-y|^{\frac{1}{k}-1}.$ (B.5) By applying (B.4) and (B.5), it is not hard to check that $\displaystyle\begin{split}|I_{3}|&\leq\int_{0}^{+\infty}\int_{v+z_{0}}^{+\infty}\frac{|s-z_{0}|^{-\frac{1}{2}}e^{-tv\frac{\varepsilon}{2}}}{|s-z|}dudv\\\ &\leq\int_{0}^{+\infty}e^{-tv\frac{\varepsilon}{2}}\bigg{|}\bigg{|}\frac{1}{\sqrt{|s-z_{0}|}}\bigg{|}\bigg{|}_{L^{k}}\bigg{|}\bigg{|}\frac{1}{|s-z|}\bigg{|}\bigg{|}_{L^{k}}dv\lesssim t^{-\frac{1}{2}}.\end{split}$ (B.6) Now, we obtain that $I_{1}+I_{2}+I_{3}\lesssim t^{-\frac{1}{6}}$ as $t\rightarrow+\infty$. ∎ ## References * [1] G. P. Agrawal, Nonlinear Fiber Optics. Academic Press, Boston, 1989. * [2] A. Hasegawa, Y. Kodama, Solitons in Optical Communications, Oxford University Press, 1995. * [3] S.F. Tian, T.T. Zhang, Long-time asymptotic behavior for the Gerdjikov-Ivanov type of derivative nonlinear Schrödinger equation with time-periodic boundary condition, Proc. Am. Math. Soc. 146 (2018) 1713-1729. * [4] S.F. Tian, Initial-boundary value problems for the general coupled nonlinear Schrödinger equation on the interval via the Fokas method, J. Differential Equations, 262 (2017) 506-558. * [5] S.F. Tian, The mixed coupled nonlinear Schrödinger equation on the half-line via the Fokas method, Proc. R. Soc. Lond. A 472(2195) (2016) 20160588. * [6] D.S. Wang, B. Guo, X. 
Wang, Long-time asymptotics of the focusing Kundu-Eckhaus equation with nonzero boundary conditions, J. Differential Equations, 266(9) (2019) 5209-5253. * [7] J.E. Rothenberg, Space-time focusing: breakdown of the slowly varying envelope approximation in the self-focusing of femtosecond pulses, Opt. Lett. 17 (1992) 1340-1342. * [8] T. Schäfer, C.E. Wayne, Propagation of ultra-short optical pulses in cubic nonlinear media, Phys. D, 196 (2004) 90-105. * [9] B. Fuchssteiner and A. S. Fokas, Symplectic structures, their Bäcklund transformations and hereditary symmetries, Phys. D, 4 (1981) 47-66. * [10] P. J. Olver and P. Rosenau, Tri-Hamiltonian duality between solitons and solitary–wave solutions having compact support, Phys. Rev. E., 53(1996) 1900-1906. * [11] A. Constantin and J. Escher, Wave breaking for nonlinear nonlocal shallow water equations, Acta Math., 181(2) (1998) 229-243. * [12] A. Constantin, Existence of permanent and breaking waves for a shallow water equation: A geometric approach, Ann. Inst. Fourier, 50(2) (2000) 321-362. * [13] A. Constantin, On the scattering problem for the Camassa-Holm equation, Proc. R. Soc. London, Ser. A, 457(2008) (2001) 953-970. * [14] Z. Qiao, A new integrable equation with cuspons and $W/M$-shape-peaks solitons, J. Math. Phys., 47 (2006) 112701. * [15] A.S. Fokas, On a class of physically important integrable equations, Phys. D, 87(1-4) (1995) 145-150. * [16] A. Sakovich, S. Sakovich, Solitary wave solutions of the short pulse equation, J. Phys. A: Math. Gen. 39 (2006) 361-367. * [17] Y. Matsuno, Multiloop solutions and multibreather solutions of the short pulse model equation, J. Phys. Soc. Jpn., 76 (2007) 084003. * [18] B.F. Feng, Complex short pulse and coupled complex short pulse equations, Phys. D, 297 (2015) 62-75. * [19] J. Xu, Long-time asymptotics for the short pulse equation, J. Differential Equations, 265 (2018) 3439-3532. * [20] A. Sakovich, S. Sakovich, The short pulse equation is integrable, J. Phys. Soc. 
Jpn., 74 (2005) 239-241. * [21] L. Ling, B.-F. Feng, Z. Zhu, Multi-soliton, multi-breather and higher order rogue wave solutions to the complex short pulse equation, Phys. D, 327 (2016) 13-29. * [22] B.F. Feng, Complex short pulse and coupled complex short pulse equations, Phys. D, 297 (2015) 085202. * [23] J. Xu, E.G. Fan, Long-time asymptotic behavior for the complex short pulse equation, J. Differential Equations, 269 (2020) 10322-10349. * [24] S.V. Manakov, Nonlinear Fraunhofer diffraction, Sov. Phys. JETP, 38 (1974) 693-696. * [25] V.E. Zakharov, S. V. Manakov, Asymptotic behavior of nonlinear wave systems integrated by the inverse scattering method, Sov. Phys. JETP, 44 (1976) 106-112. * [26] P. Deift, X. Zhou, A steepest descent method for oscillatory Riemann-Hilbert problems. Asymptotics for the MKdV equation. Ann. Math. 137(2) (1993) 295-368. * [27] P. Deift, X. Zhou, Long-time asymptotics for integrable systems. Higher order theory, Comment. Phys. Math., 165(1) (1994) 175-191. * [28] P. Deift, X. Zhou, Long-Time Behavior of the Non-Focusing Nonlinear Schrödinger Equation, a Case Study, Lectures in Mathematical Sciences, Graduate School of Mathematical Sciences, University of Tokyo, 1994. * [29] P. Deift, X. Zhou, Long-time asymptotics for solutions of the NLS equation with initial data in a weighted Sobolev space, Commun. Pure Appl. Math. 56(8) (2003) 1029-1077. * [30] K. T. R. McLaughlin, P. D. Miller, The $\bar{\partial}$ steepest descent method and the asymptotic behavior of polynomials orthogonal on the unit circle with fixed and exponentially varying non-analytic weights, Int. Math. Res. Not. (2006), Art. ID 48673. * [31] K. T. R. McLaughlin, P. D. Miller, The $\bar{\partial}$ steepest descent method for orthogonal polynomials on the real line with varying weights, Int. Math. Res. Not., IMRN (2008), Art. ID 075. * [32] M. Dieng, K. D. T. McLaughlin, Long-time Asymptotics for the NLS equation via dbar methods, arXiv: 0805.2807. * [33] S. Cuccagna, R. 
Jenkins, On asymptotic stability of $N$-solitons of the defocusing nonlinear Schrödinger equation, Comm. Math. Phys. 343 (2016) 921-969. * [34] M. Borghese, R. Jenkins, K. T. R. McLaughlin, Long-time asymptotic behavior of the focusing nonlinear Schrödinger equation, Ann. I. H. Poincaré Anal, 35 (2018) 887-920. * [35] R. Jenkins, J. Liu, P. Perry, C. Sulem, Soliton Resolution for the derivative nonlinear Schrödinger equation, Commun. Math. Phys. 363 (2018) 1003-1049. * [36] R. Jenkins, J. Liu, P. Perry, C. Sulem, Global well-posedness for the derivative nonlinear Schrödinger equation, Commun. Part. Diff. Equ. 43(8) (2018) 1151-1195. * [37] Y.L. Yang, E.G. Fan, Soliton Resolution for the Short-pulse Equation, arXiv:2005.12208. * [38] Q.Y. Cheng, E.G. Fan, Soliton resolution for the focusing Fokas-Lenells equation with weighted Sobolev initial data, arXiv:2010.08714. * [39] R.H. Ma, E.G. Fan, Long time asymptotic behavior of the focusing nonlinear Kundu-Eckhaus equation, arXiv:1912.01425. * [40] Z.Q. Li, S.F. Tian, J.J. Yang, Soliton resolution for a coupled generalized nonlinear Schrödinger equations with weighted Sobolev initial data, arXiv:2012.11928. * [41] A. Its, Asymptotic behavior of the solutions to the nonlinear Schrödinger equation, and isomonodromic deformations of systems of linear differential equations, Dokl. Akad. Nauk SSSR, 261(1) (1981) 14-18. * [42] J. Liu, P. Perry, C. Sulem, Long-time behavior of solutions to the derivative nonlinear Schrödinger equation for soliton-free initial data, Ann. I. H. Poincaré, Anal. Non Linéaire, 35 (2018) 217-265. * [43] F.W.J. Olver, A.B. Olde Daalhuis, D.W. Lozier, B.I. Schneider, R.F. Boisvert, C.W. Clark, B.R. Miller, B.V. Saunders, NIST Digital Library of Mathematical Functions, (2016). http://dlmf.nist.gov/. * [44] R. Jenkins, K. McLaughlin, Semiclassical limit of focusing NLS for a family of square barrier initial data, Commun. Pure Appl. Math. 67(2) (2014) 246-320.
# Layer-Peeled Model: Toward Understanding Well-Trained Deep Neural Networks

Cong Fang<EMAIL_ADDRESS>Hangfeng He<EMAIL_ADDRESS>Qi Long <EMAIL_ADDRESS>Weijie J. Su<EMAIL_ADDRESS>

###### Abstract

In this paper, we introduce the Layer-Peeled Model, a nonconvex yet analytically tractable optimization program, in a quest to better understand deep neural networks that are trained for a sufficiently long time. As the name suggests, this new model is derived by isolating the topmost layer from the remainder of the neural network, followed by imposing certain constraints separately on the two parts. We demonstrate that the Layer-Peeled Model, albeit simple, inherits many characteristics of well-trained neural networks, thereby offering an effective tool for explaining and predicting common empirical patterns of deep learning training. First, when working on class-balanced datasets, we prove that any solution to this model forms a simplex equiangular tight frame, which in part explains the recently discovered phenomenon of neural collapse in deep learning training [PHD20]. Moreover, when moving to the imbalanced case, our analysis of the Layer-Peeled Model reveals a hitherto unknown phenomenon that we term Minority Collapse, which fundamentally limits the performance of deep learning models on the minority classes. In addition, we use the Layer-Peeled Model to gain insights into how to mitigate Minority Collapse. Interestingly, this phenomenon is first predicted by the Layer-Peeled Model before its confirmation by our computational experiments.

University of Pennsylvania

January 26, 2021

###### Contents

1. 1 Introduction
    1. 1.1 Two Applications
    2. 1.2 Related Work
2. 2 Derivation
3. 3 Layer-Peeled Model for Explaining Neural Collapse
    1. 3.1 Cross-Entropy Loss
    2. 3.2 Extensions to Other Loss Functions
4. 4 Layer-Peeled Model for Predicting Minority Collapse
    1. 4.1 Technique: Convex Relaxation
    2. 4.2 Minority Collapse
    3. 4.3 Experiments
5. 5 How to Mitigate Minority Collapse?
6. 6 Discussion
7. A Proofs
    1. A.1 Balanced Case
        1. A.1.1 Proofs of Theorem 1 and Proposition 2
        2. A.1.2 Proofs of Theorems 3 and 4
    2. A.2 Imbalanced Case
        1. A.2.1 Proofs of Lemma 1 and Proposition 1
        2. A.2.2 Proof of Theorem 5
8. B Additional Results

## 1 Introduction

In the past decade, deep learning has achieved remarkable performance across a range of scientific and engineering domains [KSH17, LBH15, SHM+16]. Interestingly, these impressive accomplishments were mostly achieved by empirical intuition and various maneuvers, though often plausible, without much principled guidance from a theoretical perspective. On the flip side, however, this reality also suggests the great potential a theory could have for advancing the development of deep learning methodologies in the coming decade.

Unfortunately, it is not easy to develop a theoretical foundation for deep learning. Perhaps the most difficult hurdle lies in the nonconvexity of the optimization problem for training neural networks, which, loosely speaking, stems from the interaction between different layers of neural networks. To be more precise, consider a neural network for $K$-class classification as a function, which in its simplest form reads $\bm{f}(\mathbf{x};\bm{W}_{\textnormal{full}})=\mathbf{W}_{L}\sigma(\mathbf{W}_{L-1}\sigma(\cdots\sigma(\mathbf{W}_{1}\mathbf{x}))).$ Here, $\bm{W}_{\textnormal{full}}:=\\{\mathbf{W}_{1},\mathbf{W}_{2},\ldots,\mathbf{W}_{L}\\}$ denotes the partition of the weights in a matrix form according to layers and $\sigma(\cdot)$ is a nonlinear activation function such as the ReLU (here the function only outputs logits in $\mathbb{R}^{K}$, and we omit the softmax step). The last-layer weights, $\mathbf{W}_{L}$, consist of $K$ vectors that correspond to the $K$ classes. For simplicity, we omit the bias term and other operations such as max-pooling.
Owing to the complex and nonlinear interaction between the $L$ layers, when applying stochastic gradient descent to the optimization problem $\min_{\bm{W}_{\textnormal{full}}}~{}\frac{1}{N}\sum_{k=1}^{K}\sum_{i=1}^{n_{k}}\mathcal{L}(\bm{f}(\mathbf{x}_{k,i};\bm{W}_{\textnormal{full}}),\mathbf{y}_{k})+\frac{\lambda}{2}\|\bm{W}_{\textnormal{full}}\|^{2}$ (1) with a loss function $\mathcal{L}$ for training the neural network, it becomes very difficult to pinpoint how a given layer influences the output $\bm{f}$ (above, $\\{\mathbf{x}_{k,i}\\}_{i=1}^{n_{k}}$ denotes the training examples in the $k$-th class, with label $\mathbf{y}_{k}$, often encoded as a $K$-dimensional one-hot vector with 1 in the $k$-th entry; $N=n_{1}+\cdots+n_{K}$ is the total number of training examples, $\lambda>0$ is the weight decay parameter, and $\|\cdot\|$ throughout the paper is the $\ell_{2}$ norm). Worse, this difficulty in analyzing deep learning models is compounded by an ever growing number of layers.

Figure 1: Illustration of Layer-Peeled Models: (a) 1-Layer-Peeled Model; (b) 2-Layer-Peeled Model. The right panel represents the 2-Layer-Peeled Model, which is discussed in Section 6. For each panel, we maintain the details of the white (top) box, whereas the gray (bottom) box is modeled by a simple decision variable for every training example.

Therefore, an attempt to develop a tractable and comprehensive theory for demystifying deep learning would presumably first need to simplify the interaction between a large number of layers.
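To make the regularized objective (1) concrete, here is a minimal sketch (not the paper's code; the network sizes, random weights, and data below are all illustrative assumptions) that evaluates it for a tiny 3-layer ReLU network:

```python
import numpy as np

# Sketch of the training objective in Program (1) for a toy network.
rng = np.random.default_rng(0)
K, d, p = 4, 6, 5                      # classes, input dim, feature dim
W1 = rng.normal(size=(p, d)) * 0.3     # W_full = {W1, W2, W3}
W2 = rng.normal(size=(p, p)) * 0.3
W3 = rng.normal(size=(K, p)) * 0.3     # last-layer classifiers W_L

def relu(z):
    return np.maximum(z, 0.0)

def logits(x):
    """f(x; W_full) = W3 sigma(W2 sigma(W1 x))  (softmax omitted)."""
    return W3 @ relu(W2 @ relu(W1 @ x))

def cross_entropy(z, k):
    z = z - z.max()                    # numerically stable softmax
    return -np.log(np.exp(z[k]) / np.exp(z).sum())

# n_k = 3 examples per class; lam plays the role of the weight decay lambda.
X = [[rng.normal(size=d) for _ in range(3)] for _ in range(K)]
lam, N = 5e-4, 3 * K
objective = sum(cross_entropy(logits(x), k)
                for k in range(K) for x in X[k]) / N
objective += 0.5 * lam * sum(np.sum(W * W) for W in (W1, W2, W3))
print(objective)
```

The nonconvexity discussed in the text comes from the composition inside `logits`: perturbing `W1` changes the objective only through two further layers of matrix products and ReLUs.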
Following this intuition, in this paper we introduce the following optimization program as a surrogate model for Program (1) for unveiling quantitative patterns of deep neural networks: $\displaystyle\min_{\mathbf{W}_{L},\mathbf{H}}$ $\displaystyle\frac{1}{N}\sum_{k=1}^{K}\sum_{i=1}^{n_{k}}\mathcal{L}(\mathbf{W}_{L}\bm{h}_{k,i},\mathbf{y}_{k})$ (2) subject to $\displaystyle\frac{1}{K}\sum_{k=1}^{K}\left\|\mathbf{w}_{k}\right\|^{2}\leq E_{W},$ $\displaystyle\frac{1}{K}\sum_{k=1}^{K}\frac{1}{n_{k}}\sum_{i=1}^{n_{k}}\left\|\bm{h}_{k,i}\right\|^{2}\leq E_{H},$ where the decision variable $\mathbf{W}_{L}=\left[\mathbf{w}_{1},\ldots,\mathbf{w}_{K}\right]^{\top}\in\mathbb{R}^{K\times p}$ is, as in Program (1), comprised of $K$ linear classifiers in the last layer, $\bm{H}=[\mathbf{h}_{k,i}:1\leq k\leq K,1\leq i\leq n_{k}]\in\mathbb{R}^{p\times N}$ corresponds to the $p$-dimensional last-layer activations/features of all $N$ training examples (strictly speaking, $\bm{H}$ is used to model the activations from the $(L-1)$-th layer; note that the dimension of the vector $\mathbf{w}_{k}$ is also $p$), and $E_{H}$ and $E_{W}$ are two positive scalars. Although still nonconvex, this new optimization program is presumably much more amenable for analysis than the old one (1) as the interaction now is only between two variables. In relating Program (2) to the optimization problem (1), a first simple observation is that $\bm{f}(\mathbf{x}_{k,i};\bm{W}_{\textnormal{full}})=\mathbf{W}_{L}\sigma(\mathbf{W}_{L-1}\sigma(\cdots\sigma(\mathbf{W}_{1}\mathbf{x}_{k,i})))$ in (1) is replaced by $\mathbf{W}_{L}\mathbf{h}_{k,i}$ in (2). Put differently, the black-box nature of the last-layer features, namely $\sigma(\mathbf{W}_{L-1}\sigma(\cdots\sigma(\mathbf{W}_{1}\mathbf{x}_{k,i})))$, is now modeled by a simple decision variable $\mathbf{h}_{k,i}$ with a constraint on its $\ell_{2}$ norm. Intuitively speaking, this simplification is done via peeling off the topmost layer from the neural network.
Thus, we call the optimization program (2) the 1-Layer-Peeled Model, or simply the Layer-Peeled Model. At a high level, the Layer-Peeled Model takes a top-down approach to the analysis of deep neural networks. As illustrated in Figure 1, the essence of the modeling strategy is to break down the neural network from top to bottom, specifically singling out the topmost layer and modeling all bottom layers collectively as a single variable. In fact, the top-down perspective that we took in the development of the Layer-Peeled Model was inspired by a recent breakthrough made by Papyan, Han, and Donoho [PHD20], who discovered a mathematically elegant and pervasive phenomenon, termed neural collapse, through massive deep learning experiments on datasets with balanced classes. Roughly speaking, neural collapse refers to the emergence of certain geometric patterns of the last-layer features $\sigma(\mathbf{W}_{L-1}\sigma(\cdots\sigma(\mathbf{W}_{1}\mathbf{x}_{k,i})))$ and the last-layer classifiers $\mathbf{W}_{L}$, when the neural network is well-trained in the sense that it is trained toward not only zero misclassification error but also negligible cross-entropy loss (in general, any global minimizer of Program (1) does not yield exactly zero cross-entropy loss due to the penalty term). This top-down approach was also taken in [WL90, SHN+18, OS20, YCY+20, Sha20] to investigate various aspects of deep learning models.

### 1.1 Two Applications

Despite its plausibility, the ultimate test of the Layer-Peeled Model lies in its ability to faithfully approximate deep learning models through explaining empirical observations and even predicting new phenomena. In what follows, we provide convincing evidence that the Layer-Peeled Model is up to this task by presenting two findings. To be concrete, we remark that the results below are concerned with well-trained deep learning models, which correspond to, in rough terms, (near) optimal solutions of Program (1).

##### Balanced Data.
When the dataset has the same number of training examples in each class, [PHD20] experimentally observed that neural collapse emerges in well-trained deep learning models (1) with the cross-entropy loss: the last-layer features from the same class tend to be very close to their class mean; these $K$ class means centered at the global mean have the same length and form the maximally possible equal-sized angles between any pair; moreover, the last-layer classifiers become dual to the class means in the sense that they are equal to each other for each class up to a scaling factor. While it seems hopeless at the moment to rigorously prove neural collapse for multiple-layer neural networks (1), we instead seek to show that this phenomenon emerges in the surrogate model (2). More precisely, when the size of each class $n_{k}=n$ for all $k$, is it true that any global minimizer $\mathbf{W}_{L}^{\star}=\left[\mathbf{w}_{1}^{\star},\ldots,\mathbf{w}_{K}^{\star}\right]^{\top},\bm{H}^{\star}=[\mathbf{h}_{k,i}^{\star}:1\leq k\leq K,1\leq i\leq n]$ of Program (2) exhibits neural collapse (see its formal definition in Section 1.2 and Theorem 1)? The following result answers this question in the affirmative:

###### Finding 1.

Neural collapse occurs in the Layer-Peeled Model.

A formal statement of this result and a detailed discussion are given in Section 3. This result applies to a family of loss functions $\mathcal{L}$, particularly including the cross-entropy loss and the contrastive loss (see, e.g., [CKNH20]). As an immediate implication, this result provides evidence of the Layer-Peeled Model’s ability to characterize well-trained deep learning models.

Figure 2: Minority Collapse predicted by the Layer-Peeled Model (LPM, in dotted lines) and empirically observed in deep learning (DL, in solid lines) on imbalanced datasets with $K_{A}=7$ and $K_{B}=3$.
The $y$-axis denotes the average cosine of the angles between any pair of the minority classifiers $\mathbf{w}_{K_{A}+1}^{\star},\ldots,\mathbf{w}_{K}^{\star}$ for both LPM and DL. The datasets we use are subsets of the CIFAR10 dataset [Kri09] and the size of the majority classes is fixed to $5000$. The experiments use VGG13 [SZ14] as the deep learning architecture, with weight decay (wd) $\lambda=5\times 10^{-3},5\times 10^{-4}$. The prediction is especially accurate in capturing the phase transition point where the cosine becomes $1$ or, equivalently, the minority classifiers become identical to each other. More details can be found in Section 4.3.

##### Imbalanced Data.

While a surrogate model would be satisfactory if it explains already observed phenomena, we set a high standard for the model, asking whether it can predict a new common empirical pattern. Encouragingly, the Layer-Peeled Model happens to meet this standard. Specifically, we consider training deep learning models on imbalanced datasets, where some classes contain many more training examples than others. Despite the pervasiveness of imbalanced classification in many practical applications [JK19], the literature remains scarce on its impact on the trained neural networks from a theoretical standpoint. Here we provide mathematical insights into this problem by using the Layer-Peeled Model. In the following result, we consider optimal solutions to the Layer-Peeled Model on a dataset with two different class sizes: the first $K_{A}$ majority classes each contain $n_{A}$ training examples ($n_{1}=n_{2}=\dots=n_{K_{A}}=n_{A}$), and the remaining $K_{B}:=K-K_{A}$ minority classes each contain $n_{B}$ examples ($n_{K_{A}+1}=n_{K_{A}+2}=\dots=n_{K}=n_{B}$). We call $R:=n_{A}/n_{B}>1$ the imbalance ratio.

###### Finding 2.
In the Layer-Peeled Model, the last-layer classifiers corresponding to the minority classes, namely $\mathbf{w}^{\star}_{K_{A}+1},\mathbf{w}^{\star}_{K_{A}+2},\ldots,\mathbf{w}^{\star}_{K}$, collapse to a single vector when $R$ is sufficiently large.

This result is elaborated on in Section 4. The derivation involves some novel elements to tackle the nonconvexity of the Layer-Peeled Model (2) and the asymmetry due to the imbalance in class sizes. In slightly more detail, we identify a phase transition as the imbalance ratio $R$ increases: when $R$ is below a threshold, the minority classes are distinguishable in terms of their classifiers; when $R$ is above the threshold, they become indistinguishable. While this phenomenon is merely predicted by the simple Layer-Peeled Model (2), it appears in our computational experiments on deep neural networks. More surprisingly, our prediction of the phase transition point is in excellent agreement with the experiments, as shown in Figure 2.

This phenomenon, which we refer to as Minority Collapse, reveals the fundamental difficulty in using deep learning for classification when the dataset is widely imbalanced, even in terms of optimization, not to mention generalization. This is not a priori evident given that neural networks have a large approximation capacity (see, e.g., [Yar17]). Importantly, Minority Collapse emerges at a finite value of the imbalance ratio rather than at infinity. Moreover, even below the phase transition point of this ratio, we find that the angles between any pair of the minority classifiers are already smaller than those of the majority classes, both theoretically and empirically.

### 1.2 Related Work

There is a venerable line of work attempting to gain insights into deep learning from a theoretical point of view [JGH18, DLL+19, AZLS19, ZCZG18, COB19, EMW19, BFT17, HS20, PBL20, MMN18, SS19, RVE18, FLYZ20, KWL+19, SSJ20]. See also the reviews [FDZ21, HT20, FMZ19, Sun19] and references therein.
The work of neural collapse by [PHD20] in this body of work is particularly noticeable with its mathematically elegant and convincing insights. In brief, [PHD20] observed the following four properties of the last-layer features and classifiers in deep learning training (see the mathematical description of neural collapse in Theorem 1):

* (NC1) Variability collapse: the within-class variation of the last-layer features becomes $0$, which means that these features collapse to their class means.
* (NC2) The class means centered at their global mean collapse to the vertices of a simplex equiangular tight frame (ETF) up to scaling.
* (NC3) Up to scaling, the last-layer classifiers each collapse to the corresponding class means.
* (NC4) The classifier’s decision collapses to simply choosing the class with the closest Euclidean distance between its class mean and the activations of the test example.

Now we give the formal definition of ETF [SH03, PHD20].

###### Definition 1.

A $K$-simplex ETF is a collection of points in $\mathbb{R}^{p}$ specified by the columns of the matrix ${\mathbf{M}^{\star}}=\sqrt{\frac{K}{K-1}}\mathbf{P}\left(\mathbf{I}_{K}-\frac{1}{K}\mathbf{1}_{K}\mathbf{1}_{K}^{\top}\right),$ where $\mathbf{I}_{K}\in\mathbb{R}^{K\times K}$ is the identity matrix, $\mathbf{1}_{K}$ is the all-ones vector, and $\mathbf{P}\in\mathbb{R}^{p\times K}$ ($p\geq K$; to be complete, we only require $p\geq K-1$, and when $p=K-1$ we can choose $\mathbf{P}$ such that $\left[\mathbf{P}^{\top},\mathbf{1}_{K}\right]$ is an orthogonal matrix) is a partial orthogonal matrix such that $\mathbf{P}^{\top}\mathbf{P}=\mathbf{I}_{K}$.

These four properties emerge in massive experiments on popular network architectures during the terminal phase of training (when the trained model interpolates the in-sample training data), and a shared setting of these experiments is the use of balanced datasets and the cross-entropy loss with $\ell_{2}$ regularization.
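Definition 1 is easy to instantiate numerically. The sketch below (with the illustrative choices $p=K$ and $\mathbf{P}=\mathbf{I}_{K}$) builds the ETF matrix, checks that its columns have unit norm and pairwise cosines of exactly $-1/(K-1)$, and also verifies the (NC4)-style equivalence: since the vertices share a common norm, the class with the largest inner product is the class with the nearest mean.

```python
import numpy as np

# Instantiate the K-simplex ETF of Definition 1 with p = K and P = I_K.
K = 5
P = np.eye(K)                                  # partial orthogonal, P^T P = I_K
M = np.sqrt(K / (K - 1)) * P @ (np.eye(K) - np.ones((K, K)) / K)

norms = np.linalg.norm(M, axis=0)              # norms of the columns m_1, ..., m_K
G = M.T @ M                                    # pairwise inner products

# (NC4)-style check: with equal-norm class means, argmax of the inner
# product coincides with argmin of the Euclidean distance.
rng = np.random.default_rng(0)
h = rng.normal(size=K)                         # an arbitrary feature vector
by_inner = np.argmax(M.T @ h)
by_dist = np.argmin(np.linalg.norm(M - h[:, None], axis=0))
print(norms, G[0, 1], by_inner == by_dist)
```

The equal-norm property is what makes the equivalence exact: $\|\mathbf{m}_{k}-\mathbf{h}\|^{2}=\|\mathbf{m}_{k}\|^{2}-2\,\mathbf{m}_{k}\cdot\mathbf{h}+\|\mathbf{h}\|^{2}$, and the first and last terms do not depend on $k$.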
Using convincing arguments and numerical evidence, [PHD20] demonstrated that the symmetry and stability of neural collapse improve deep learning training in terms of generalization, robustness, and interpretability. Notably, these improvements occur with the benign overfitting phenomenon in deep neural networks [MBB18, BHMM19, LR20, BLLT20, LSS20]. As an aside, while we were preparing the manuscript, we became aware of [MPP20, EW20, LS20], which produced neural collapse using different models.

## 2 Derivation

In this section, we intuitively derive the Layer-Peeled Model as an analytical surrogate for well-trained neural networks. Although our derivation lacks rigor, the priority is to reduce the complexity of the optimization problem (1) while roughly maintaining its structure. Notably, the penalty $\frac{\lambda}{2}\|\bm{W}_{\textnormal{full}}\|^{2}$ corresponds to weight decay used in training deep learning models, which is necessary for preventing this optimization program from attaining its minimum at infinity when $\mathcal{L}$ is the cross-entropy loss. Taking a top-down standpoint, our modeling strategy starts by singling out the weights $\mathbf{W}_{L}$ of the topmost layer and rewriting (1) as $\min_{\mathbf{W}_{L},\mathbf{W}_{-L}}~{}\frac{1}{N}\sum_{k=1}^{K}\sum_{i=1}^{n_{k}}\mathcal{L}(\mathbf{W}_{L}\mathbf{h}(\mathbf{x}_{k,i};\mathbf{W}_{-L}),\mathbf{y}_{k})+\frac{\lambda}{2}\|\mathbf{W}_{L}\|^{2}+\frac{\lambda}{2}\|\mathbf{W}_{-L}\|^{2},$ (3) where $\mathbf{W}_{-L}$ denotes the weights from all layers except for the last layer (for simplicity, we do not assume any bias terms).
From the Lagrangian dual viewpoint, a minimum of the optimization program above is also an optimal solution to $\displaystyle\min_{\mathbf{W}_{L},\mathbf{W}_{-L}}$ $\displaystyle\frac{1}{N}\sum_{k=1}^{K}\sum_{i=1}^{n_{k}}\mathcal{L}(\mathbf{W}_{L}\mathbf{h}(\mathbf{x}_{k,i};\mathbf{W}_{-L}),\mathbf{y}_{k})$ (4) $\displaystyle\mathrm{s.t.}$ $\displaystyle\|\mathbf{W}_{L}\|^{2}\leq C_{1},$ $\displaystyle\|\mathbf{W}_{-L}\|^{2}\leq C_{2},$ for some positive numbers $C_{1}$ and $C_{2}$ (denoting by $(\mathbf{W}_{L}^{\star},\mathbf{W}_{-L}^{\star})$ an optimal solution to (3), we can take $C_{1}=\|\mathbf{W}_{L}^{\star}\|^{2}$ and $C_{2}=\|\mathbf{W}_{-L}^{\star}\|^{2}$; to clear up any confusion, note that due to its nonconvexity, (3) may admit multiple global minima, each of which in general corresponds to different values of $C_{1},C_{2}$). Next, we can equivalently write (4) as $\displaystyle\min_{\mathbf{W}_{L},\bm{H}}$ $\displaystyle\frac{1}{N}\sum_{k=1}^{K}\sum_{i=1}^{n_{k}}\mathcal{L}(\mathbf{W}_{L}\mathbf{h}_{k,i},\mathbf{y}_{k})$ (5) $\displaystyle\mathrm{s.t.}$ $\displaystyle\|\mathbf{W}_{L}\|^{2}\leq C_{1},$ $\displaystyle\bm{H}\in\left\\{\bm{H}(\mathbf{W}_{-L}):\|\mathbf{W}_{-L}\|^{2}\leq C_{2}\right\\},$ where $\bm{H}=[\mathbf{h}_{k,i}:1\leq k\leq K,1\leq i\leq n_{k}]$ denotes the decision variable and the function $\bm{H}(\mathbf{W}_{-L})$ is defined as $\bm{H}(\mathbf{W}_{-L}):=\left[\mathbf{h}(\mathbf{x}_{k,i};\mathbf{W}_{-L}):1\leq k\leq K,1\leq i\leq n_{k}\right]$ for any $\mathbf{W}_{-L}$. To simplify (5), we make the ansatz that the range of $\mathbf{h}(\mathbf{x}_{k,i};\mathbf{W}_{-L})$ under the constraint $\|\mathbf{W}_{-L}\|^{2}\leq C_{2}$ is approximately an ellipse in the sense that $\left\\{\bm{H}(\mathbf{W}_{-L}):\|\mathbf{W}_{-L}\|^{2}\leq C_{2}\right\\}\approx\left\\{\bm{H}:\sum_{k=1}^{K}\frac{1}{n_{k}}\sum_{i=1}^{n_{k}}\|\mathbf{h}_{k,i}\|^{2}\leq C_{2}^{\prime}\right\\}$ (6) for some $C_{2}^{\prime}>0$.
Loosely speaking, this ansatz asserts that $\bm{H}$ should be regarded as a variable in an $\ell_{2}$ space. To shed light on this point, note that $\mathbf{h}_{k,i}$ intuitively lives in the dual space of $\mathbf{W}$ in view of the appearance of the product $\mathbf{W}\mathbf{h}_{k,i}$ in the objective. Furthermore, $\mathbf{W}$ is in an $\ell_{2}$ space due to the $\ell_{2}$ constraint on it. Hence, the rationale behind the ansatz follows from the self-duality of $\ell_{2}$ spaces. Inserting this approximation into (5), we obtain the following optimization program, which we call the Layer-Peeled Model: $\displaystyle\min_{\mathbf{W},\mathbf{H}}$ $\displaystyle\frac{1}{N}\sum_{k=1}^{K}\sum_{i=1}^{n_{k}}\mathcal{L}(\mathbf{W}\bm{h}_{k,i},\mathbf{y}_{k})$ (7) $\displaystyle\mathrm{s.t.}$ $\displaystyle\frac{1}{K}\sum_{k=1}^{K}\left\|\mathbf{w}_{k}\right\|^{2}\leq E_{W},$ $\displaystyle\frac{1}{K}\sum_{k=1}^{K}\frac{1}{n_{k}}\sum_{i=1}^{n_{k}}\left\|\bm{h}_{k,i}\right\|^{2}\leq E_{H},$ where, for simplicity, we henceforth write $\mathbf{W}:=\mathbf{W}_{L}\equiv[\mathbf{w}_{1},\ldots,\mathbf{w}_{K}]^{\top}$ for the last-layer classifiers/weights and the thresholds $E_{W}=C_{1}/K$ and $E_{H}=C_{2}^{\prime}/K$. This optimization program is nonconvex but, as we will show soon, is generally mathematically tractable for analysis. On the surface, the Layer-Peeled Model has no dependence on the data $\\{\mathbf{x}_{k,i}\\}$, which, however, is not the full picture since the dependence has been implicitly incorporated into the threshold $E_{H}$. In passing, we remark that neural collapse does not emerge if the second constraint of (7) uses the $\ell_{q}$ norm for any $q>1$ with $q\neq 2$, in place of the $\ell_{2}$ norm. This fact in turn justifies in part the ansatz (6). This result is formally stated in Proposition 2 in Section 6.
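The model just derived can also be explored numerically. The sketch below is our own illustrative solver, not from the paper: projected gradient descent on (7) with the cross-entropy loss and balanced classes, where the problem sizes, learning rate, and iteration budget are all assumptions. The loss should decrease and the class-mean features should drift toward mutually negative cosines, as the neural collapse results of the next section predict.

```python
import numpy as np

# Projected gradient descent on the Layer-Peeled Model (7), balanced case.
rng = np.random.default_rng(1)
K, n, p, E_W, E_H, lr = 3, 4, 8, 1.0, 1.0, 0.5
W = rng.normal(size=(K, p)) * 0.1          # last-layer classifiers w_k
H = rng.normal(size=(K, n, p)) * 0.1       # last-layer features h_{k,i}

def softmax(Z):
    Z = Z - Z.max(axis=-1, keepdims=True)
    E = np.exp(Z)
    return E / E.sum(axis=-1, keepdims=True)

def ce_loss(W, H):
    Pr = softmax(H @ W.T)                  # Pr[k, i, c] = prob of class c
    return -np.mean(np.log(Pr[np.arange(K), :, np.arange(K)]))

def project(A, E):
    """Rescale A so that its average squared norm is at most E."""
    m = np.mean(np.sum(A * A, axis=-1))
    return A if m <= E else A * np.sqrt(E / m)

loss0 = ce_loss(W, H)
for _ in range(3000):
    G = softmax(H @ W.T)
    G[np.arange(K), :, np.arange(K)] -= 1.0          # d(loss)/d(logits)
    W = project(W - lr * np.einsum('knc,knp->cp', G, H) / (K * n), E_W)
    H = project(H - lr * G @ W / (K * n), E_H)

means = H.mean(axis=1)                     # class-mean features
means /= np.linalg.norm(means, axis=1, keepdims=True)
cos = means @ means.T                      # pairwise class-mean cosines
print(ce_loss(W, H), cos[0, 1], cos[0, 2], cos[1, 2])
```

At a true minimizer, Theorem 1 below gives pairwise cosines of exactly $-1/(K-1)$; this crude solver only approaches that value.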
## 3 Layer-Peeled Model for Explaining Neural Collapse

In this section, we consider training deep neural networks on a balanced dataset, meaning $n_{k}=n$ for all classes $1\leq k\leq K$, and our main finding is that the Layer-Peeled Model displays the neural collapse phenomenon, just as in deep learning training [PHD20]. The proofs are all deferred to Appendix A.1. Throughout this section, we assume $p\geq K-1$ unless otherwise specified. This condition is satisfied by many popular architectures, where $p$ is usually tens or hundreds of times $K$.

### 3.1 Cross-Entropy Loss

The cross-entropy loss is perhaps the most popular loss used in training deep learning models for classification tasks. This loss function takes the form $\mathcal{L}(\mathbf{z},\mathbf{y}_{k})=-\log\left(\frac{\exp(\mathbf{z}(k))}{\sum_{{k^{\prime}}=1}^{K}\exp(\mathbf{z}({k^{\prime}}))}\right),$ where $\mathbf{z}(k^{\prime})$ denotes the $k^{\prime}$-th entry of $\mathbf{z}$. Recall that $\mathbf{y}_{k}$ is the label of the $k$-th class and the feature $\mathbf{z}$ is set to $\mathbf{W}\mathbf{h}_{k,i}$ in the Layer-Peeled Model (7). In contrast to the complex deep neural networks, which are often considered a black box, the Layer-Peeled Model is much more amenable to analysis. As an exemplary use case, the following result shows that any minimizer of the Layer-Peeled Model (7) with the cross-entropy loss admits an almost closed-form expression.

###### Theorem 1.
In the balanced case, any global minimizer $\mathbf{W}^{\star}\equiv\left[\mathbf{w}_{1}^{\star},\ldots,\mathbf{w}_{K}^{\star}\right]^{\top},\bm{H}^{\star}\equiv[\mathbf{h}_{k,i}^{\star}:1\leq k\leq K,1\leq i\leq n]$ of (7) with the cross-entropy loss obeys $\bm{h}_{k,i}^{\star}=C\mathbf{w}_{k}^{\star}=C^{\prime}\mathbf{m}_{k}^{\star}$ (8) for all $1\leq i\leq n,1\leq k\leq K$, where the constants $C=\sqrt{E_{H}/E_{W}},C^{\prime}=\sqrt{E_{H}}$, and the matrix $[\mathbf{m}_{1}^{\star},\ldots,\mathbf{m}_{K}^{\star}]$ forms a $K$-simplex ETF specified in Definition 1.

###### Remark 2.

Note that the minimizers $(\mathbf{W}^{\star},\bm{H}^{\star})$’s are equivalent to each other up to rotation because of the rotational invariance of simplex ETFs (see the rotation $\mathbf{P}$ in Definition 1).

This theorem demonstrates the highly symmetric geometry of the last-layer features and weights of the Layer-Peeled Model, which is precisely the phenomenon of neural collapse. Explicitly, (8) says that all within-class (last-layer) features are the same: $\mathbf{h}_{k,i}^{\star}=\mathbf{h}_{k,i^{\prime}}^{\star}$ for all $1\leq i,i^{\prime}\leq n$; next, the $K$ class-mean features $\mathbf{h}_{k}^{\star}:=\mathbf{h}_{k,i}^{\star}$ together exhibit a $K$-simplex ETF up to scaling, from which we immediately conclude that $\cos\measuredangle(\mathbf{h}_{k}^{\star},\mathbf{h}_{{k^{\prime}}}^{\star})=-\frac{1}{K-1}$ (9) for any $k\neq k^{\prime}$ by Definition 1 (note that the cosine value $-\frac{1}{K-1}$ corresponds to the largest possible angle for any $K$ points that have an equal $\ell_{2}$ norm and equal-sized angles between any pair; as pointed out in [PHD20], the largest angle implies a large-margin solution [SHN+18]); in addition, (8) also displays the precise duality between the last-layer classifiers and features. Taken together, these facts indicate that the minimizer $\left(\mathbf{W}^{\star},\bm{H}^{\star}\right)$ satisfies exactly (NC1)–(NC3).
Last, Property (NC4) is also satisfied by recognizing that, for any given last-layer features $\mathbf{h}$, the predicted class is $\operatorname*{arg\,max}_{k}\mathbf{w}_{k}^{\star}\cdot\mathbf{h}$, where $\bm{a}\cdot\bm{b}$ denotes the inner product of the two vectors. Note that the predicted class satisfies $\operatorname*{arg\,max}_{k}\mathbf{w}_{k}^{\star}\cdot\mathbf{h}=\operatorname*{arg\,max}_{k}\mathbf{h}_{k}^{\star}\cdot\mathbf{h}=\operatorname*{arg\,min}_{k}\|\mathbf{h}_{k}^{\star}-\mathbf{h}\|^{2}.$

Conversely, the presence of neural collapse in the Layer-Peeled Model offers evidence of the effectiveness of our model as a tool for analyzing neural networks. To be complete, we remark that other models were very recently proposed to justify the neural collapse phenomenon [MPP20, EW20, LS20] (see also [PL20]). For example, [EW20, LS20] considered models that impose a norm constraint for each individual class, rather than an overall constraint as employed in the Layer-Peeled Model.

### 3.2 Extensions to Other Loss Functions

In the modern practice of deep learning, various loss functions are employed to take into account the problem characteristics. Here we show that the Layer-Peeled Model continues to exhibit the phenomenon of neural collapse for some popular loss functions.

##### Contrastive Loss.
Contrastive losses have been extensively used recently in both supervised and unsupervised deep learning [PSM14, AKK+19, CKNH20, BZMA20]. These losses pull similar training examples together in their embedding space while pushing apart dissimilar examples. Here we consider the supervised contrastive loss [KTW+20], which (in the balanced case) is defined through the last-layer features as $\mathcal{L}_{c}(\mathbf{h}_{k,i},\mathbf{y}_{k})=\frac{1}{n}\sum_{j=1}^{n}-\log\left(\frac{\exp(\bm{h}_{k,i}\cdot\bm{h}_{k,j}/\tau)}{\sum_{{k^{\prime}}=1}^{K}\sum_{\ell=1}^{n}\exp(\bm{h}_{k,i}\cdot\bm{h}_{{k^{\prime}},\ell}/\tau)}\right),$ (10) where $\tau>0$ is a parameter. As the loss does not involve the last-layer classifiers explicitly, the Layer-Peeled Model in the case of the supervised contrastive loss takes the form (in (10), $\mathbf{h}_{k,i}\equiv\mathbf{h}(\mathbf{x}_{k,i};\mathbf{W}_{-L})$ depends on the data, whereas in (11) the $\mathbf{h}_{k,i}$’s form the decision variable $\bm{H}$) $\displaystyle\min_{\mathbf{H}}$ $\displaystyle\frac{1}{N}\sum_{k=1}^{K}\sum_{i=1}^{n}\mathcal{L}_{c}(\mathbf{h}_{k,i},\mathbf{y}_{k})$ (11) $\displaystyle\mathrm{s.t.}$ $\displaystyle\frac{1}{K}\sum_{k=1}^{K}\frac{1}{n}\sum_{i=1}^{n}\left\|\bm{h}_{k,i}\right\|^{2}\leq E_{H}.$ We show that this Layer-Peeled Model also exhibits neural collapse in its last-layer features, even though the label information is not explicitly explored in the loss.

###### Theorem 3.

Any global minimizer of (11) satisfies $\mathbf{h}_{k,i}^{\star}=\sqrt{E_{H}}\mathbf{m}_{k}^{\star}$ (12) for all $1\leq k\leq K$ and $1\leq i\leq n$, where $[\mathbf{m}_{1}^{\star},\ldots,\mathbf{m}_{K}^{\star}]$ forms a $K$-simplex ETF.
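To make (10) concrete, the sketch below (the values $\tau=0.5$, $K=3$, $n=2$, $p=K$, and $E_{H}=1$ are assumed for illustration) evaluates the supervised contrastive loss at the ETF features of Theorem 3 and at a random feature configuration with the same average squared norm; consistent with the theorem, the ETF features should incur the smaller loss.

```python
import numpy as np

# Evaluate the supervised contrastive loss (10) at two feature configs.
K, n, tau = 3, 2, 0.5
rng = np.random.default_rng(3)

def contrastive_loss(H):
    """H has shape (K, n, p); returns the average of L_c over all (k, i)."""
    F = H.reshape(K * n, -1)
    S = np.exp(F @ F.T / tau)               # pairwise similarity kernel
    total = 0.0
    for k in range(K):
        for i in range(n):
            a = k * n + i
            same = [k * n + j for j in range(n)]
            total += np.mean(-np.log(S[a, same] / S[a].sum()))
    return total / (K * n)

# ETF features of Theorem 3 (p = K, P = I_K) vs. a random configuration
# rescaled to the same average squared norm E_H = 1.
M = np.sqrt(K / (K - 1)) * (np.eye(K) - np.ones((K, K)) / K)
H_etf = np.repeat(M.T[:, None, :], n, axis=1)      # h_{k,i} = sqrt(E_H) m_k
H_rnd = rng.normal(size=(K, n, K))
H_rnd /= np.sqrt(np.mean(np.sum(H_rnd**2, axis=-1)))
l_etf, l_rnd = contrastive_loss(H_etf), contrastive_loss(H_rnd)
print(l_etf, l_rnd)
```

Since Theorem 3 characterizes the ETF configuration as the global minimizer under the norm constraint, any other feasible configuration can only do worse.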
Theorem 3 shows that the contrastive loss in the associated Layer-Peeled Model does a perfect job in pulling together training examples from the same class. Moreover, as seen from the denominator in (10), minimizing this loss would intuitively render the between-class inner products of last-layer features as small as possible, thereby pushing the features to form the vertices of a $K$-simplex ETF up to scaling.

##### Softmax-Based Loss.

The cross-entropy loss can be thought of as a softmax-based loss. To see this, define the softmax transform as $\mathbf{S}(\mathbf{z})=\left[\frac{\exp(\mathbf{z}(1))}{\sum_{k=1}^{K}\exp(\mathbf{z}(k))},\ldots,\frac{\exp(\mathbf{z}(K))}{\sum_{k=1}^{K}\exp(\mathbf{z}(k))}\right]^{\top}$ for $\mathbf{z}\in\mathbb{R}^{K}$. Let $g_{1}$ be any nonincreasing convex function and $g_{2}$ be any nondecreasing function, both defined on $(0,1)$. We consider a softmax-based loss function that takes the form $\mathcal{L}(\mathbf{z},\mathbf{y}_{k})=g_{1}\left(\mathbf{S}(\mathbf{z})(k)\right)+\sum_{{k^{\prime}}=1,~{}{k^{\prime}}\neq k}^{K}g_{2}\left(\mathbf{S}(\mathbf{z})({k^{\prime}})\right).$ (13) Here, $\mathbf{S}(\mathbf{z})(k)$ denotes the $k$-th element of $\mathbf{S}(\mathbf{z})$. Taking $g_{1}(x)=-\log x$ and $g_{2}\equiv 0$, we recover the cross-entropy loss. Another example is to take $g_{1}(x)=(1-x)^{q}$ and $g_{2}(x)=x^{q}$ for $q>1$, which can be implemented in most deep learning libraries such as PyTorch [PGM+19]. We have the following theorem regarding the softmax-based loss functions in the balanced case.

###### Theorem 4.

Assume $\sqrt{E_{H}E_{W}}>\frac{K-1}{K}\log\left(K^{2}\sqrt{E_{H}E_{W}}+(2K-1)(K-1)\right)$. For any loss function defined in (13), $(\mathbf{W}^{\star},\bm{H}^{\star})$ given by (8) is a global minimizer of Program (7). Moreover, if $g_{2}$ is strictly convex and at least one of $g_{1},g_{2}$ is strictly monotone, then any global minimizer must be given by (8).
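The loss family (13) can be sketched generically; the check below confirms that plugging in $g_{1}(x)=-\log x$ and $g_{2}\equiv 0$ recovers the cross-entropy loss, and also evaluates the second admissible choice $g_{1}(x)=(1-x)^{q}$, $g_{2}(x)=x^{q}$ with the assumed value $q=2$:

```python
import numpy as np

# Generic softmax-based loss (13) for a logit vector z and true class k.
def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def softmax_based_loss(z, k, g1, g2):
    s = softmax(z)
    return g1(s[k]) + sum(g2(s[c]) for c in range(len(z)) if c != k)

def cross_entropy(z, k):
    return -np.log(softmax(z)[k])

rng = np.random.default_rng(4)
z, k = rng.normal(size=5), 2
# g1(x) = -log x, g2 = 0 recovers the cross-entropy loss.
ce_via_family = softmax_based_loss(z, k, g1=lambda x: -np.log(x),
                                   g2=lambda x: 0.0)
# g1(x) = (1-x)^q, g2(x) = x^q with q = 2 (q > 1 as required in the text).
q_loss = softmax_based_loss(z, k, g1=lambda x: (1 - x)**2,
                            g2=lambda x: x**2)
print(ce_via_family, cross_entropy(z, k), q_loss)
```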
In other words, neural collapse continues to emerge with softmax-based losses under mild regularity conditions. The first part of this theorem does not preclude the possibility that the Layer-Peeled Model admits solutions other than (8). When applied to the cross-entropy loss, it is worth pointing out that this theorem is a weak version of Theorem 1, albeit more general. Regarding the first assumption in Theorem 4, note that $E_{H}$ and $E_{W}$ would be arbitrarily large if the weight decay $\lambda$ in (1) is sufficiently small, thereby meeting the assumption concerning $\sqrt{E_{H}E_{W}}$ in this theorem. We remark that Theorem 4 does not require the convexity of the loss $\mathcal{L}$. To circumvent the hurdle of nonconvexity, our proof in Appendix A.1 presents several novel elements. In passing, we leave the experimental confirmation of neural collapse with these loss functions for future work.

## 4 Layer-Peeled Model for Predicting Minority Collapse

Deep learning models are often trained on datasets where there is a disproportionate ratio of observations in each class [WLW+16, HLLT16, MR17]. For example, in the Places2 challenge dataset [ZKL+16], the number of images in its majority scene categories is about eight times that in its minority classes. Another example is the Ontonotes dataset for part-of-speech tagging [HMP+06], where the number of words in its majority classes can be more than one hundred times that in its minority classes. While empirically the imbalance in class sizes often leads to inferior model performance of deep learning (see, e.g., [JK19]), there remains a lack of a solid theoretical footing for understanding its effect, perhaps due to the complex details of deep learning training. In this section, we use the Layer-Peeled Model to seek a fine-grained characterization of how class imbalance impacts neural networks that are trained for a sufficiently long time.
In short, our analysis predicts a phenomenon we term Minority Collapse, which fundamentally limits the performance of deep learning especially on the minority classes, both theoretically and empirically. All omitted proofs are relegated to Appendix A.2. ### 4.1 Technique: Convex Relaxation When it comes to imbalanced datasets, the Layer-Peeled Model no longer admits a simple expression for its minimizers as in the balanced case, due to the lack of symmetry between classes. Among other things, this makes it numerically harder to compute the solutions of the Layer-Peeled Model. To overcome this difficulty, we introduce a convex optimization program as a relaxation of the nonconvex Layer-Peeled Model (7), relying on the well-known result for relaxing a quadratically constrained quadratic program as a semidefinite program (see, e.g., [SZ03]). To begin with, defining $\mathbf{h}_{k}$ as the feature mean of the $k$-th class (i.e., $\mathbf{h}_{k}:=\frac{1}{n_{k}}\sum_{i=1}^{n_{k}}\mathbf{h}_{k,i}$), we introduce a new decision variable $\mathbf{X}:=\left[\bm{h}_{1},\bm{h}_{2},\dots,\bm{h}_{K},\mathbf{W}^{\top}\right]^{\top}\left[\bm{h}_{1},\bm{h}_{2},\dots,\bm{h}_{K},\mathbf{W}^{\top}\right]\in\mathbb{R}^{2K\times 2K}$. By definition, $\mathbf{X}$ is positive semidefinite and satisfies $\frac{1}{K}\sum_{k=1}^{K}\mathbf{X}(k,k)=\frac{1}{K}\sum_{k=1}^{K}\|\mathbf{h}_{k}\|^{2}\overset{a}{\leq}\frac{1}{K}\sum_{k=1}^{K}\frac{1}{n_{k}}\sum_{i=1}^{n_{k}}\left\|\bm{h}_{k,i}\right\|^{2}\leq E_{H}$ and $\frac{1}{K}\sum_{k=K+1}^{2K}\mathbf{X}(k,k)=\frac{1}{K}\sum_{k=1}^{K}\|\mathbf{w}_{k}\|^{2}\leq E_{W},$ where $\overset{a}{\leq}$ follows from the Cauchy–Schwarz inequality. Thus, we consider the following semidefinite programming problem (although Program (14) involves a semidefinite constraint, it is not a semidefinite program in the strict sense because a semidefinite program uses a linear objective function):
$\displaystyle\min_{\mathbf{X}\in\mathbb{R}^{2K\times 2K}}$ $\displaystyle\sum_{k=1}^{K}\frac{n_{k}}{N}\mathcal{L}(\mathbf{z}_{k},\mathbf{y}_{k})$ (14) $\displaystyle\mathrm{s.t.}$ $\displaystyle\mathbf{z}_{k}=\left[\mathbf{X}(k,K+1),\mathbf{X}(k,K+2),\dots,\mathbf{X}(k,2K)~{}\right]^{\top},~{}\text{ for all }1\leq k\leq K,$ $\displaystyle\frac{1}{K}\sum_{k=1}^{K}\mathbf{X}(k,k)\leq E_{H},\quad\frac{1}{K}\sum_{k=K+1}^{2K}\mathbf{X}(k,k)\leq E_{W},$ $\displaystyle\mathbf{X}\succeq 0.$ Lemma 1 below relates the solutions of (14) to those of (7). ###### Lemma 1. Assume $p\geq 2K$ and the loss function $\mathcal{L}$ is convex in its first argument. Let $\mathbf{X}^{\star}$ be a minimizer of the convex program (14). Define $\left(\mathbf{H}^{\star},\mathbf{W}^{\star}\right)$ as $\displaystyle\left[\bm{h}_{1}^{\star},\bm{h}_{2}^{\star},\dots,\bm{h}_{K}^{\star},~{}(\mathbf{W}^{\star})^{\top}\right]=\mathbf{P}(\mathbf{X}^{\star})^{1/2},$ (15) $\displaystyle\bm{h}_{k,i}^{\star}=\bm{h}_{k}^{\star},~{}\text{ for all }1\leq i\leq n_{k},1\leq k\leq K,$ where $(\mathbf{X}^{\star})^{1/2}$ denotes the positive square root of $\mathbf{X}^{\star}$ and $\mathbf{P}\in\mathbb{R}^{p\times 2K}$ is any partial orthogonal matrix such that $\mathbf{P}^{\top}\mathbf{P}=\mathbf{I}_{2K}$. Then $(\mathbf{H}^{\star},\mathbf{W}^{\star})$ is a minimizer of (7). Moreover, if all $\mathbf{X}^{\star}$’s satisfy $\frac{1}{K}\sum_{k=1}^{K}\mathbf{X}^{\star}(k,k)=E_{H}$, then all the solutions of (7) are in the form of (15). This lemma in effect says that the relaxation does not lead to any loss of information when we study the Layer-Peeled Model through a convex program, thereby offering a computationally efficient tool for gaining insights into the terminal phase of training deep neural networks on imbalanced datasets. An appealing feature is that the size of the program (14) is independent of the number of training examples.
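As an illustration of this computational convenience, the numpy sketch below minimizes (14) with the cross-entropy loss by projected gradient descent; in practice one could equally hand (14) to an off-the-shelf convex solver. The problem sizes, step settings, and the feasibility-preserving rescaling step are our own illustrative choices, not the authors' implementation.

```python
import numpy as np

def solve_relaxed_lpm(n, E_H, E_W, steps=2000, lr=0.05, seed=0):
    """Projected gradient descent on the convex relaxation (14) with the
    cross-entropy loss.  X is a (2K x 2K) PSD matrix; rows 0..K-1 index
    the class-mean features h_k, rows K..2K-1 the classifiers w_k, and
    X[k, K+j] = <h_k, w_j> is the j-th logit of class k."""
    K = len(n)
    w = np.asarray(n, dtype=float) / np.sum(n)
    rng = np.random.default_rng(seed)
    A = 0.1 * rng.standard_normal((2 * K, 2 * K))
    X = A @ A.T                                   # PSD initial point
    for _ in range(steps):
        G = np.zeros_like(X)
        for k in range(K):
            z = X[k, K:]                          # logits of class k
            p = np.exp(z - z.max())
            p /= p.sum()
            g = w[k] * (p - np.eye(K)[k])         # grad of CE w.r.t. z
            G[k, K:] += 0.5 * g                   # keep the update symmetric
            G[K:, k] += 0.5 * g
        X = project(X - lr * G, K, E_H, E_W)
    return X

def project(X, K, E_H, E_W):
    """Clip to the PSD cone, then rescale the diagonal blocks so the two
    average-trace constraints of (14) hold.  The rescaling X -> DXD with
    diagonal D preserves positive semidefiniteness; it is a feasibility-
    preserving heuristic rather than an exact Euclidean projection."""
    vals, vecs = np.linalg.eigh((X + X.T) / 2)
    X = (vecs * np.clip(vals, 0.0, None)) @ vecs.T
    sH = min(1.0, E_H / max(np.trace(X[:K, :K]) / K, 1e-12)) ** 0.5
    sW = min(1.0, E_W / max(np.trace(X[K:, K:]) / K, 1e-12)) ** 0.5
    D = np.concatenate([np.full(K, sH), np.full(K, sW)])
    return X * np.outer(D, D)
```

Since each $\mathbf{z}_{k}$ is a linear function of $\mathbf{X}$, the cross-entropy term is convex in $\mathbf{X}$ and the loop above stays within the convex feasible set of (14), e.g. via `solve_relaxed_lpm([100, 100, 10, 10], E_H=5.0, E_W=1.0)` for a small imbalanced instance.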
Besides, this lemma predicts that even in the imbalanced case the last-layer features collapse to their class means under mild conditions. Therefore, Property (NC1) is satisfied (see more discussion about the condition in Section B). The assumption that $\mathcal{L}$ is convex in its first argument is satisfied by a large class of loss functions, such as the cross-entropy loss. We also remark that (14) is not the only possible convex relaxation. An alternative is to relax (7) via a nuclear norm-constrained convex program [BMP08, HV19] (see more details in Section B). ### 4.2 Minority Collapse With the technique of convex relaxation in place, we now numerically solve the Layer-Peeled Model on imbalanced datasets, with the goal of identifying nontrivial patterns in this regime. As a worthwhile starting point, we consider a dataset that has $K_{A}$ majority classes each containing $n_{A}$ training examples and $K_{B}$ minority classes each containing $n_{B}$ training examples. That is, assume $n_{1}=n_{2}=\dots=n_{K_{A}}=n_{A}$ and $n_{K_{A}+1}=n_{K_{A}+2}=\dots=n_{K}=n_{B}$. For convenience, call $R:=n_{A}/n_{B}>1$ the imbalance ratio. Note that the case $R=1$ reduces to the balanced setting. (a) $E_{W}=1$, $E_{H}=5$ (b) $E_{W}=1$, $E_{H}=10$ Figure 3: The average cosine of the angles between any pair of the minority classifiers obtained by solving the Layer-Peeled Model. The average cosine reaches $1$ once $R$ is above some threshold. The total number of classes $K_{A}+K_{B}$ is fixed at $10$. The gray dash-dotted line indicates the value of $-\frac{1}{K-1}$, which is given by (9). The between-majority-class angles can be large even when Minority Collapse emerges. For example, in the case $K_{A}=5$ in Plot (a), the average between-majority-class cosine is $-0.17$, which corresponds to $100$°, when Minority Collapse first occurs. Notably, our simulation suggests that the minority classifiers exhibit an equiangular frame, and so do the majority classifiers.
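The diagnostic plotted in Figure 3 is straightforward to compute from a classifier matrix. A small sketch of ours: it returns the average cosine over all pairs of rows, which equals $-1/(K-1)$ for a $K$-simplex ETF, matching (9), and $1$ when all classifiers coincide.

```python
import numpy as np
from itertools import combinations

def avg_pairwise_cosine(W):
    """Average cosine of the angles between all pairs of rows of W,
    e.g. the minority classifiers w_{K_A+1}, ..., w_K."""
    cos = [
        float(W[i] @ W[j] / (np.linalg.norm(W[i]) * np.linalg.norm(W[j])))
        for i, j in combinations(range(len(W)), 2)
    ]
    return sum(cos) / len(cos)

# The rows of I - (1/K) 11^T point to the vertices of a K-simplex ETF
# (up to scaling), so their average pairwise cosine is -1/(K-1).
K = 5
etf_rows = np.eye(K) - np.ones((K, K)) / K
```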
An important question is to understand how the $K_{B}$ last-layer minority classifiers behave as the imbalance ratio $R$ increases, as this is directly related to the model performance on the minority classes. To address this question, we plot in Figure 3 the average cosine of the angles between any pair of the $K_{B}$ minority classifiers, computed by solving the simple convex program (14). This figure reveals a two-phase behavior of the minority classifiers $\mathbf{w}^{\star}_{K_{A}+1},\mathbf{w}^{\star}_{K_{A}+2},\ldots,\mathbf{w}^{\star}_{K}$ as $R$ increases: 1. (1) When $R<R_{0}$ for some $R_{0}>0$, the average between-minority-class angle becomes smaller as $R$ increases. 2. (2) Once $R\geq R_{0}$, the average between-minority-class angle becomes zero, implying that all the minority classifiers collapse to a single vector. Above, the phase transition point $R_{0}$ depends on the imbalance configuration $K_{A},K_{B}$ and the thresholds $E_{H},E_{W}$. We refer to the phenomenon that appears in the second phase as Minority Collapse. While it can be expected that the minority classifiers get closer to each other as the level of imbalance increases, surprisingly, these classifiers become completely indistinguishable once $R$ hits a finite value. Once Minority Collapse takes place, the neural network predicts equal probabilities for all the minority classes regardless of the input. As such, conditioned on the minority classes, its predictive ability is no better than a coin toss, in terms of both optimization and generalization, and the situation would only get worse in the presence of adversarial perturbations. This phenomenon is especially detrimental when the minority classes are more frequent in the application domains than in the training data. From an optimization point of view, the emergence of Minority Collapse prevents the model from achieving zero training error since its prediction is simply uniform over the minority classes.
While it seems to contradict conventional wisdom on the approximation power of deep learning, a careful examination indicates that this occurrence can be attributed to the two constraints in the Layer-Peeled Model or the $\ell_{2}$ penalty in (1). However, this issue does not disappear by simply setting a small penalty coefficient $\lambda$ in deep learning because the imbalance ratio can be arbitrarily large. Even outside the regime of Minority Collapse, the classification might still be unreliable if the imbalance ratio is large since the softmax predictions for the minority classes can be close to each other. To put the observations in Figure 3 on a firm footing, we prove that Minority Collapse indeed emerges in the Layer-Peeled Model as $R$ tends to infinity. ###### Theorem 5. Assume $p\geq K$ and $n_{A}/n_{B}\to\infty$, and fix $K_{A}$ and $K_{B}$. Let $\left(\mathbf{H}^{\star},\mathbf{W}^{\star}\right)$ be any global minimizer of the Layer-Peeled Model (7) with the cross-entropy loss. As $R\equiv n_{A}/n_{B}\to\infty$, we have $\lim\mathbf{w}^{\star}_{k}-\mathbf{w}^{\star}_{{k^{\prime}}}=\bm{0}_{p},~{}\text{ for all }K_{A}<k<{k^{\prime}}\leq K.$ To intuitively see why Minority Collapse occurs, first note that the majority classes become the predominant part of the risk function as the level of imbalance increases. The minimization of the objective therefore places too much emphasis on the majority classifiers, encouraging the between-majority-class angles to grow while shrinking the between-minority-class angles to zero. As an aside, an interesting question for future work is to prove that $\mathbf{w}^{\star}_{k}$ and $\mathbf{w}^{\star}_{{k^{\prime}}}$ are exactly equal for sufficiently large $R$. ### 4.3 Experiments At the moment, Minority Collapse is merely a prediction of the Layer-Peeled Model. An immediate question thus is: does this phenomenon really occur in real-world neural networks?
At first glance, it does not necessarily have to be the case since the Layer-Peeled Model is a dramatic simplification of deep neural networks. To this end, we resort to computational experiments; our code is publicly available at https://github.com/HornHehhf/LPM. Explicitly, we consider training two network architectures, VGG and ResNet [HZRS16], on the FashionMNIST [XRV17] and CIFAR10 datasets, and in particular, replace the dropout layers in VGG with batch normalization [IS15]. As both datasets have 10 classes, we use three combinations of $(K_{A},K_{B})=(3,7),(5,5),(7,3)$ to split the data into majority classes and minority classes. In the case of FashionMNIST (CIFAR10), we let the $K_{A}$ majority classes each contain all the $n_{A}=6000$ ($n_{A}=5000$) training examples from the corresponding class of FashionMNIST (CIFAR10), and the $K_{B}$ minority classes each have $n_{B}=6000/R$ ($n_{B}=5000/R$) examples randomly sampled from the corresponding class. The rest of the experimental setup is basically the same as in [PHD20]. In detail, we use the cross-entropy loss and stochastic gradient descent with momentum $0.9$ and weight decay $\lambda=5\times 10^{-4}$. The networks are trained for $350$ epochs with a batch size of $128$. The initial learning rate is annealed by a factor of $10$ at $1/3$ and $2/3$ of the $350$ epochs. The only difference from [PHD20] is that we simply set the learning rate to $0.1$ instead of sweeping over $25$ learning rates between $0.0001$ and $0.25$. This is because the test performance of our trained models is already comparable with their best reported test accuracy. (a) VGG11 on FashionMNIST (b) VGG13 on CIFAR10 (c) ResNet18 on FashionMNIST (d) ResNet18 on CIFAR10 Figure 4: The occurrence of Minority Collapse in deep neural networks. Each curve denotes the average between-minority-class cosine. We fix $K_{A}+K_{B}=10$.
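The imbalanced splits described above amount to per-class subsampling of training indices. The following is a minimal sketch; the function name and numpy-based indexing are ours, not from the released code.

```python
import numpy as np

def imbalanced_indices(labels, K_A, R, seed=0):
    """Keep every example of the first K_A (majority) classes and a
    random 1/R fraction of each remaining (minority) class."""
    rng = np.random.default_rng(seed)
    keep = []
    for c in np.unique(labels):
        idx = np.flatnonzero(labels == c)
        if c < K_A:
            keep.append(idx)                              # all n_A examples
        else:
            # minority class: sample n_B = n_A / R examples without replacement
            keep.append(rng.choice(idx, size=len(idx) // R, replace=False))
    return np.concatenate(keep)

# FashionMNIST-style labels: 10 classes with 6000 examples each;
# K_A = 5 majority classes and imbalance ratio R = 100 give n_B = 60.
labels = np.repeat(np.arange(10), 6000)
subset = imbalanced_indices(labels, K_A=5, R=100)
```

The resulting index set can then be wrapped in a Subset-style dataset object by the training framework.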
In particular, Figure 4(b) shares the same setting as Figure 2 in Section 1, where the LPM-based predictions are given by $(E_{W},E_{H})$ such that the two constraints in the Layer-Peeled Model become active for the weights of the trained networks. The results of the experiments above are displayed in Figure 4. This figure clearly indicates that the angles between the minority classifiers collapse to zero as soon as $R$ is large enough. Moreover, the numerical examination in Table 1 shows that the norm of the classifier is constant across the minority classes. Taken together, these two observations clearly give evidence for the emergence of Minority Collapse in these neural networks, thereby further demonstrating the effectiveness of our Layer-Peeled Model. Besides, Figure 4 also shows that the issue of Minority Collapse is compounded when there are more majority classes, which is consistent with Figure 3. For completeness, we remark that, as with neural collapse, Minority Collapse only occurs during the terminal phase of training with a non-diminishing weight decay parameter. (a) VGG11 on FashionMNIST (b) VGG13 on CIFAR10 (c) ResNet18 on FashionMNIST (d) ResNet18 on CIFAR10 Figure 5: Comparison of the test accuracy on minority classes between $R=1$ and $R=1000$. We fix $K_{A}+K_{B}=10$. Note that when $R=1000$, the test accuracy on the minority classes can be lower than $10\%$ because the trained neural networks misclassify many examples in the minority classes as the majority classes. In order to get a handle on how Minority Collapse impacts the test accuracy, we plot the results of another numerical study in Figure 5. The setting is the same as in Figure 4, except that now we randomly sample $6$ or $5$ examples per class for the minority classes depending on whether the dataset is FashionMNIST or CIFAR10.
The results show that the performance of the trained model deteriorates in the test data when the imbalance ratio $R=1000$, at which point Minority Collapse has occurred or is about to occur. This is by no means intuitive a priori, as the test performance is only restricted to the minority classes and a large value of $R$ only leads to more training data in the majority classes without affecting that in the minority classes. Dataset | FashionMNIST ---|--- Network architecture | VGG11 | ResNet18 No. of majority classes | $K_{A}=3$ | $K_{A}=5$ | $K_{A}=7$ | $K_{A}=3$ | $K_{A}=5$ | $K_{A}=7$ Norm variation | $2.7\times 10^{-5}$ | $4.4\times 10^{-8}$ | $6.0\times 10^{-8}$ | $1.4\times 10^{-5}$ | $5.0\times 10^{-8}$ | $6.3\times 10^{-8}$ Dataset | CIFAR10 Network architecture | VGG13 | ResNet18 No. of majority classes | $K_{A}=3$ | $K_{A}=5$ | $K_{A}=7$ | $K_{A}=3$ | $K_{A}=5$ | $K_{A}=7$ Norm variation | $1.4\times 10^{-4}$ | $9.0\times 10^{-7}$ | $5.2\times 10^{-8}$ | $5.4\times 10^{-5}$ | $3.5\times 10^{-7}$ | $5.4\times 10^{-8}$ Table 1: Variability of the lengths of the minority classifiers when $R=\infty$. Each number in the row of “norm variation” is $\mathrm{Std}(\|\mathbf{w}_{B}^{\star}\|)/\mathrm{Avg}(\|\mathbf{w}_{B}^{\star}\|)$, where $\mathrm{Std}(\|\mathbf{w}_{B}^{\star}\|)$ denotes the standard deviation of the lengths of the $K_{B}$ classifiers and the denominator denotes the average. The results indicate that the classifiers of the minority classes have almost the same length. ## 5 How to Mitigate Minority Collapse? In this section, we further exploit the use of the Layer-Peeled Model in an attempt to lessen the detrimental effect of Minority Collapse. Instead of aiming to develop a full set of methodologies to overcome this issue, which is beyond the scope of the paper, our focus is on the evaluation of some simple techniques used for imbalanced datasets.
Among many approaches to handling class imbalance in deep learning (see the review [JK19]), perhaps the most popular one is to oversample training examples from the minority classes [BMM18, SXY+19, CJL+19, CWG+19]. In its simplest form, this sampling scheme retains all majority training examples while duplicating each training example from the minority classes $w_{r}$ times, where the oversampling rate $w_{r}$ is a positive integer. Oversampling in effect amounts to minimizing an adjusted optimization problem that is derived by replacing the risk in the optimization program (1) with $\frac{1}{n_{A}K_{A}+w_{r}n_{B}K_{B}}\left[\sum_{k=1}^{K_{A}}\sum_{i=1}^{n_{A}}\mathcal{L}(\bm{f}(\mathbf{x}_{k,i};\bm{W}_{\textnormal{full}}),\mathbf{y}_{k})+w_{r}\sum_{k=K_{A}+1}^{K}\sum_{i=1}^{n_{B}}\mathcal{L}(\bm{f}(\mathbf{x}_{k,i};\bm{W}_{\textnormal{full}}),\mathbf{y}_{k})\right]$ (16) while keeping the penalty term $\frac{\lambda}{2}\|\bm{W}_{\textnormal{full}}\|^{2}$. Note that oversampling is closely related to weight adjusting (see more discussion in Section B). A close look at (16) suggests that the neural network obtained by minimizing this new program might behave as if it were trained on a (larger) dataset with $n_{A}$ and $w_{r}n_{B}$ examples in each majority class and minority class, respectively.
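In its simplest form, the duplication scheme above takes only a few lines. The sketch below is ours (in a framework such as PyTorch, the same effect is typically achieved with a weighted sampler); it produces an index list whose empirical risk coincides with the adjusted objective (16).

```python
import numpy as np

def oversample_indices(labels, K_A, w_r):
    """Keep all majority indices and repeat each minority index w_r
    times, mirroring the reweighted risk in (16)."""
    idx = np.arange(len(labels))
    minority = labels >= K_A            # classes K_A, ..., K-1 are minority
    return np.concatenate([idx[~minority], np.repeat(idx[minority], w_r)])
```

Averaging a loss over these indices divides by $n_{A}K_{A}+w_{r}n_{B}K_{B}$ and weights each minority term by $w_{r}$, exactly as in (16).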
To formalize this intuition, as earlier, we start by considering the Layer-Peeled Model in the case of oversampling: $\displaystyle\min_{\mathbf{H},\mathbf{W}}$ $\displaystyle\frac{1}{n_{A}K_{A}+w_{r}n_{B}K_{B}}\left[\sum_{k=1}^{K_{A}}\sum_{i=1}^{n_{A}}\mathcal{L}(\mathbf{W}\bm{h}_{k,i},\mathbf{y}_{k})+w_{r}\sum_{k=K_{A}+1}^{K}\sum_{i=1}^{n_{B}}\mathcal{L}(\mathbf{W}\bm{h}_{k,i},\mathbf{y}_{k})\right]$ (17) $\displaystyle\mathrm{s.t.}$ $\displaystyle\frac{1}{K}\sum_{k=1}^{K}\left\|\mathbf{w}_{k}\right\|^{2}\leq E_{W},$ $\displaystyle\frac{1}{K}\sum_{k=1}^{K_{A}}\frac{1}{n_{A}}\sum_{i=1}^{n_{A}}\left\|\bm{h}_{k,i}\right\|^{2}+\frac{1}{K}\sum_{k=K_{A}+1}^{K}\frac{1}{n_{B}}\sum_{i=1}^{n_{B}}\left\|\bm{h}_{k,i}\right\|^{2}\leq E_{H}.$ (a) VGG11 on FashionMNIST (b) VGG13 on CIFAR10 (c) ResNet18 on FashionMNIST (d) ResNet18 on CIFAR10 Figure 6: Effect of oversampling when the imbalance ratio is $R=1000$. Each plot shows the average cosine of the between-minority-class angles. The results indicate that increasing the oversampling rate would enlarge the between-minority-class angles. The following result confirms our intuition that oversampling indeed boosts the size of the minority classes for the Layer-Peeled Model. ###### Proposition 1. Assume $p\geq 2K$ and the loss function $\mathcal{L}$ is convex in the first argument. Let $\mathbf{X}^{\star}$ be any minimizer of the convex program (14) with $n_{1}=n_{2}=\dots=n_{K_{A}}=n_{A}$ and $n_{K_{A}+1}=n_{K_{A}+2}=\dots=n_{K}=w_{r}n_{B}$. 
Define $\left(\mathbf{H}^{\star},\mathbf{W}^{\star}\right)$ as $\displaystyle\left[\bm{h}_{1}^{\star},\bm{h}_{2}^{\star},\dots,\bm{h}_{K}^{\star},(\mathbf{W}^{\star})^{\top}\right]=\mathbf{P}(\mathbf{X}^{\star})^{1/2},$ (18) $\displaystyle~{}~{}\bm{h}_{k,i}^{\star}=\bm{h}_{k}^{\star},~{}\text{ for all }1\leq i\leq n_{A},1\leq k\leq K_{A},$ $\displaystyle~{}~{}\bm{h}_{k,i}^{\star}=\bm{h}_{k}^{\star},~{}\text{ for all }1\leq i\leq n_{B},K_{A}<k\leq K,$ where $\mathbf{P}\in\mathbb{R}^{p\times 2K}$ is any partial orthogonal matrix such that $\mathbf{P}^{\top}\mathbf{P}=\mathbf{I}_{2K}$. Then $(\mathbf{H}^{\star},\mathbf{W}^{\star})$ is a global minimizer of the oversampling-adjusted Layer-Peeled Model (17). Moreover, if all $\mathbf{X}^{\star}$’s satisfy $\frac{1}{K}\sum_{k=1}^{K}\mathbf{X}^{\star}(k,k)=E_{H}$, then all the solutions of (17) are in the form of (18). Together with Lemma 1, Proposition 1 shows that the number of training examples in each minority class is now in effect $w_{r}n_{B}$, instead of $n_{B}$. We turn to Figure 6 for an illustration of the effects of oversampling on real-world deep learning models, using the same experimental setup as in Figure 5. From Figure 6, we see that the angles between pairs of the minority classifiers become larger as the oversampling rate $w_{r}$ increases. Consequently, the issue of Minority Collapse becomes less detrimental in terms of training accuracy as $w_{r}$ increases. This again corroborates the predictive ability of the Layer-Peeled Model. Network architecture | VGG11 | ResNet18 ---|---|--- No. of majority classes | $K_{A}=3$ | $K_{A}=5$ | $K_{A}=7$ | $K_{A}=3$ | $K_{A}=5$ | $K_{A}=7$ Original (minority) | 15.29 | 20.30 | 17.00 | 30.66 | 34.26 | 5.53 Oversampling (minority) | 41.13 | 57.22 | 30.50 | 37.86 | 53.46 | 8.13 Improvement (minority) | 25.84 | 36.92 | 13.50 | 7.20 | 19.20 | 2.60 Original (overall) | 40.10 | 57.61 | 69.09 | 50.88 | 64.89 | 66.13 Oversampling (overall) | 58.25 | 76.17 | 73.37 | 55.91 | 74.56 | 67.10 Improvement (overall) | 18.15 | 18.56 | 4.28 | 5.03 | 9.67 | 0.97 Table 2: Test accuracy (%) on FashionMNIST when $R=1000$. “Original (minority)” means that the test accuracy is evaluated only on the minority classes and oversampling is not used. When oversampling is used, we report the best test accuracy among four oversampling rates: $1$, $10$, $100$, and $1000$. The best test accuracy is never achieved at $w_{r}=1000$, indicating that oversampling with a large $w_{r}$ would impair the test performance. Next, we refer to Table 2 for the effect on test performance. The results clearly demonstrate the improvement in test accuracy brought by oversampling for certain choices of the oversampling rates. The improvement is noticeable both on the minority classes and on all classes. A closer look at the results of Table 2, however, reveals that issues remain when addressing Minority Collapse by oversampling. Perhaps the most critical one is that although oversampling with a very large value of $w_{r}$ can mitigate Minority Collapse on the training set, it comes at the cost of degrading test accuracy. This raises the question: how can we efficiently select an oversampling rate for optimal test performance? More broadly, Minority Collapse does not seem likely to be fully resolved by sampling-based approaches alone, and the door is wide open for future investigation. ## 6 Discussion In this paper, we have developed the Layer-Peeled Model as a simple yet effective modeling strategy toward understanding well-trained deep neural networks.
The derivation of this model follows a top-down strategy by isolating the last layer from the remaining layers. Owing to the analytical and numerical tractability of the Layer-Peeled Model, we provide some explanation of a recently observed phenomenon called neural collapse in deep neural networks trained on balanced datasets [PHD20]. Moving to imbalanced datasets, an analysis of this model suggests that the last-layer classifiers corresponding to the minority classes would collapse to a single point once the imbalance level is above a certain threshold. This new phenomenon, which we refer to as Minority Collapse, occurs consistently in our computational experiments. The efficacy of the Layer-Peeled Model in analyzing well-trained deep learning models implies that the ansatz (6)—a crucial step in the derivation of this model—is at least a useful approximation. Moreover, this ansatz can be justified further, albeit indirectly, by the following result: together with Theorem 1, it shows that the $\ell_{2}$ norm suggested by the ansatz happens to be the only choice among all the $\ell_{q}$ norms that is consistent with empirical observations. Its proof is given in Appendix A.1. ###### Proposition 2. Assume $p\geq K$. For any $q\in(1,2)\cup(2,\infty)$, consider the optimization problem $\displaystyle\min_{\mathbf{W},\bm{H}}$ $\displaystyle\frac{1}{N}\sum_{k=1}^{K}\sum_{i=1}^{n}\mathcal{L}(\mathbf{W}\bm{h}_{k,i},\mathbf{y}_{k})$ $\displaystyle\mathrm{s.t.}$ $\displaystyle\frac{1}{K}\sum_{k=1}^{K}\left\|\mathbf{w}_{k}\right\|^{2}\leq E_{W},$ $\displaystyle\frac{1}{K}\sum_{k=1}^{K}\frac{1}{n}\sum_{i=1}^{n}\left\|\bm{h}_{k,i}\right\|^{q}_{q}\leq E_{H},$ where $\mathcal{L}$ is the cross-entropy loss. Then, any global minimizer of this program does not satisfy (8) for any positive numbers $C$ and $C^{\prime}$. That is, neural collapse does not emerge in this model.
While the paper has demonstrated its noticeable effectiveness, the Layer-Peeled Model requires future investigation for consolidation and extension. First, an important question is to better justify the ansatz (6) used in the development of this model, or equivalently, the second constraint of (7). For example, is the permutation invariance of the weights within the same layer useful for the justification? Moreover, an analysis of the gap between the Layer-Peeled Model and well-trained deep learning models would be a welcome advance. For example, how does the gap depend on the neural network architectures? From a different angle, a possible extension is to retain multiple layers following the top-down viewpoint. Explicitly, letting $1\leq m<L$ be the number of the top layers we wish to retain in the model, we can represent the prediction of the neural network as $\bm{f}(\mathbf{x};\bm{W}_{\textnormal{full}})=\bm{f}(\mathbf{h}(\mathbf{x};\mathbf{W}_{1:(L-m)}),\mathbf{W}_{(L-m+1):L})$ by denoting by $\mathbf{W}_{1:(L-m)}$ and $\mathbf{W}_{(L-m+1):L}$ the first $L-m$ layers and the last $m$ layers, respectively. Consider the $m$-Layer-Peeled Model: $\displaystyle\min_{\mathbf{W},\mathbf{H}}$ $\displaystyle\frac{1}{N}\sum_{k=1}^{K}\sum_{i=1}^{n_{k}}\mathcal{L}(\bm{f}(\mathbf{h}_{k,i},\mathbf{W}_{(L-m+1):L}),\mathbf{y}_{k})$ $\displaystyle\mathrm{s.t.}$ $\displaystyle\frac{1}{K}\|\mathbf{W}_{(L-m+1):L}\|^{2}\leq E_{W},$ $\displaystyle\frac{1}{K}\sum_{k=1}^{K}\frac{1}{n_{k}}\sum_{i=1}^{n_{k}}\left\|\bm{h}_{k,i}\right\|^{2}\leq E_{H}.$ The two constraints might be modified to take into account the network architectures. An immediate question is whether this model with $m=2$ is capable of capturing new patterns of deep learning training.
From a practical standpoint, the Layer-Peeled Model together with its convex relaxation (14) offers an analytical and computationally efficient technique to identify and mitigate bias induced by class imbalance when training deep learning models. First, an interesting question is to extend Minority Collapse from the case of two-valued class sizes to general imbalanced datasets. Second, as suggested by our findings in Section 5, how should we choose loss functions in order to mitigate Minority Collapse [CWG+19]? Last, a possible use case of the Layer-Peeled Model is to design more efficient sampling schemes to take into account fairness considerations [BG18, ZS18, MMS+19]. Broadly speaking, insights can be gained not only from the Layer-Peeled Model but also from its modeling strategy. The details of empirical deep learning models, though formidable, can often be simplified by rendering part of the network modular. When the interest is about the top few layers, for example, this paper clearly demonstrates the benefits of taking a top-down strategy for modeling neural networks, especially in consolidating our understanding of previous results and in discovering new patterns. Owing to its mathematical convenience, the Layer-Peeled Model shall open the door for future research extending these benefits. ### Acknowledgments We are grateful to X.Y. Han for helpful discussions about some results of [PHD20]. This work was supported in part by NIH through RF1AG063481, NSF through CAREER DMS-1847415 and CCF-1934876, an Alfred Sloan Research Fellowship, and the Wharton Dean’s Research Fund. ## References * [AKK+19] Sanjeev Arora, Hrishikesh Khandeparkar, Mikhail Khodak, Orestis Plevrakis, and Nikunj Saunshi. A theoretical analysis of contrastive unsupervised representation learning. arXiv preprint arXiv:1902.09229, 2019. * [AZLS19] Zeyuan Allen-Zhu, Yuanzhi Li, and Zhao Song. A convergence theory for deep learning via over-parameterization.
In International Conference on Machine Learning, pages 2388–2464, 2019. * [BCN18] Léon Bottou, Frank E Curtis, and Jorge Nocedal. Optimization methods for large-scale machine learning. Siam Review, 60(2):223–311, 2018. * [BFT17] Peter Bartlett, Dylan Foster, and Matus Telgarsky. Spectrally-normalized margin bounds for neural networks. Advances in Neural Information Processing Systems, 30:6241–6250, 2017. * [BG18] Joy Buolamwini and Timnit Gebru. Gender shades: Intersectional accuracy disparities in commercial gender classification. In Conference on fairness, accountability and transparency, pages 77–91, 2018. * [BHMM19] Mikhail Belkin, Daniel Hsu, Siyuan Ma, and Soumik Mandal. Reconciling modern machine-learning practice and the classical bias–variance trade-off. Proceedings of the National Academy of Sciences, 116(32):15849–15854, 2019. * [BLLT20] Peter L Bartlett, Philip M Long, Gábor Lugosi, and Alexander Tsigler. Benign overfitting in linear regression. Proceedings of the National Academy of Sciences, 2020. * [BMM18] Mateusz Buda, Atsuto Maki, and Maciej A Mazurowski. A systematic study of the class imbalance problem in convolutional neural networks. Neural Networks, 106:249–259, 2018. * [BMP08] Francis Bach, Julien Mairal, and Jean Ponce. Convex sparse matrix factorizations. arXiv preprint arXiv:0812.1869, 2008. * [BZMA20] Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, and Michael Auli. wav2vec 2.0: A framework for self-supervised learning of speech representations. arXiv preprint arXiv:2006.11477, 2020. * [CJL+19] Yin Cui, Menglin Jia, Tsung-Yi Lin, Yang Song, and Serge Belongie. Class-balanced loss based on effective number of samples. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 9268–9277, 2019. * [CKNH20] Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton. A simple framework for contrastive learning of visual representations. arXiv preprint arXiv:2002.05709, 2020. 
* [COB19] Lenaic Chizat, Edouard Oyallon, and Francis Bach. On lazy training in differentiable programming. In Advances in Neural Information Processing Systems, 2019. * [CWG+19] Kaidi Cao, Colin Wei, Adrien Gaidon, Nikos Arechiga, and Tengyu Ma. Learning imbalanced datasets with label-distribution-aware margin loss. In Advances in Neural Information Processing Systems, volume 32, pages 1567–1578, 2019. * [DLL+19] Simon S Du, Jason D Lee, Haochuan Li, Liwei Wang, and Xiyu Zhai. Gradient descent finds global minima of deep neural networks. In International Conference on Machine Learning, 2019. * [EMW19] Weinan E, Chao Ma, and Lei Wu. A comparative analysis of the optimization and generalization property of two-layer neural network and random feature models under gradient descent dynamics. arXiv preprint arXiv:1904.04326, 2019. * [EW20] Weinan E and Stephan Wojtowytsch. On the emergence of tetrahedral symmetry in the final and penultimate layers of neural network classifiers. arXiv preprint arXiv:2012.05420, 2020. * [FDZ21] Cong Fang, Han Dong, and Tong Zhang. Mathematical models of overparameterized neural networks. Proceedings of the IEEE, pages 1–21, 2021. * [FLLZ18] Cong Fang, Chris Junchi Li, Zhouchen Lin, and Tong Zhang. Spider: Near-optimal non-convex optimization via stochastic path-integrated differential estimator. In Advances in Neural Information Processing Systems, pages 689–699, 2018. * [FLYZ20] Cong Fang, Jason D Lee, Pengkun Yang, and Tong Zhang. Modeling from features: a mean-field framework for over-parameterized deep neural networks. arXiv preprint arXiv:2007.01452, 2020. * [FLZ19] Cong Fang, Zhouchen Lin, and Tong Zhang. Sharp analysis for nonconvex SGD escaping from saddle points. In Annual Conference on Learning Theory, pages 1192–1234, 2019. * [FMZ19] Jianqing Fan, Cong Ma, and Yiqiao Zhong. A selective overview of deep learning. arXiv preprint arXiv:1904.05526, 2019. * [HLLT16] Chen Huang, Yining Li, Chen Change Loy, and Xiaoou Tang.
Learning deep representation for imbalanced classification. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 5375–5384, 2016. * [HMP+06] Eduard Hovy, Mitch Marcus, Martha Palmer, Lance Ramshaw, and Ralph Weischedel. Ontonotes: the 90% solution. In Proceedings of the human language technology conference of the NAACL, Companion Volume: Short Papers, pages 57–60, 2006. * [HS20] Hangfeng He and Weijie J Su. The local elasticity of neural networks. In International Conference on Learning Representations, 2020. * [HT20] Fengxiang He and Dacheng Tao. Recent advances in deep learning theory. arXiv preprint arXiv:2012.10931, 2020. * [HV19] Benjamin D Haeffele and René Vidal. Structured low-rank matrix factorization: Global optimality, algorithms, and applications. IEEE transactions on pattern analysis and machine intelligence, 42(6):1468–1482, 2019. * [HZRS16] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770–778, 2016. * [IS15] Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In International Conference on Machine Learning, pages 448–456, 2015. * [JGH18] Arthur Jacot, Franck Gabriel, and Clément Hongler. Neural tangent kernel: Convergence and generalization in neural networks. In Advances in Neural Information Processing Systems, 2018. * [JK19] Justin M Johnson and Taghi M Khoshgoftaar. Survey on deep learning with class imbalance. Journal of Big Data, 6(1):27, 2019. * [Kri09] A Krizhevsky. Learning multiple layers of features from tiny images. Master’s thesis, University of Toronto, 2009. * [KSH17] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural networks. Communications of the ACM, 60(6):84–90, 2017.
* [KTW+20] Prannay Khosla, Piotr Teterwak, Chen Wang, Aaron Sarna, Yonglong Tian, Phillip Isola, Aaron Maschinot, Ce Liu, and Dilip Krishnan. Supervised contrastive learning. arXiv preprint arXiv:2004.11362, 2020. * [KWL+19] Rohith Kuditipudi, Xiang Wang, Holden Lee, Yi Zhang, Zhiyuan Li, Wei Hu, Sanjeev Arora, and Rong Ge. Explaining landscape connectivity of low-cost solutions for multilayer nets. In Advances in Neural Information Processing Systems, pages 14601–14610, 2019. * [LBH15] Yann LeCun, Yoshua Bengio, and Geoffrey Hinton. Deep learning. Nature, 521(7553):436–444, 2015. * [LR20] Tengyuan Liang and Alexander Rakhlin. Just interpolate: Kernel “ridgeless” regression can generalize. Annals of Statistics, 48(3):1329–1347, 2020. * [LS20] Jianfeng Lu and Stefan Steinerberger. Neural collapse with cross-entropy loss. arXiv preprint arXiv:2012.08465, 2020. * [LSS20] Zhu Li, Weijie Su, and Dino Sejdinovic. Benign overfitting and noisy features. arXiv preprint arXiv:2008.02901, 2020. * [MBB18] Siyuan Ma, Raef Bassily, and Mikhail Belkin. The power of interpolation: Understanding the effectiveness of sgd in modern over-parametrized learning. In International Conference on Machine Learning, pages 3325–3334. PMLR, 2018. * [MMN18] Song Mei, Andrea Montanari, and Phan-Minh Nguyen. A mean field view of the landscape of two-layer neural networks. Proceedings of the National Academy of Sciences, 115(33):E7665–E7671, 2018. * [MMS+19] Ninareh Mehrabi, Fred Morstatter, Nripsuta Saxena, Kristina Lerman, and Aram Galstyan. A survey on bias and fairness in machine learning. arXiv preprint arXiv:1908.09635, 2019. * [MPP20] Dustin G Mixon, Hans Parshall, and Jianzong Pi. Neural collapse with unconstrained features. arXiv preprint arXiv:2011.11619, 2020. * [MR17] K Madasamy and M Ramaswami. Data imbalance and classifiers: impact and solutions from a big data perspective. International Journal of Computational Intelligence Research, 13(9):2267–2281, 2017. 
* [OS20] Samet Oymak and Mahdi Soltanolkotabi. Towards moderate overparameterization: global convergence guarantees for training shallow neural networks. IEEE Journal on Selected Areas in Information Theory, 2020. * [PBL20] Tomaso Poggio, Andrzej Banburski, and Qianli Liao. Theoretical issues in deep networks. Proceedings of the National Academy of Sciences, 2020. * [PGM+19] Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, et al. Pytorch: An imperative style, high-performance deep learning library. In Advances in neural information processing systems, pages 8026–8037, 2019. * [PHD20] Vardan Papyan, XY Han, and David L Donoho. Prevalence of neural collapse during the terminal phase of deep learning training. Proceedings of the National Academy of Sciences, 117(40):24652–24663, 2020. * [PL20] Tomaso Poggio and Qianli Liao. Explicit regularization and implicit bias in deep network classifiers trained with the square loss. arXiv preprint arXiv:2101.00072, 2020. * [PSM14] Jeffrey Pennington, Richard Socher, and Christopher D Manning. Glove: Global vectors for word representation. In Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP), pages 1532–1543, 2014. * [RVE18] Grant M Rotskoff and Eric Vanden-Eijnden. Neural networks as interacting particle systems: Asymptotic convexity of the loss landscape and universal scaling of the approximation error. In Advances in Neural Information Processing Systems, 2018. * [SH03] Thomas Strohmer and Robert W. Heath. Grassmannian frames with applications to coding and communication. Applied and Computational Harmonic Analysis, 14(3):257–275, 2003\. * [Sha20] Ohad Shamir. Gradient methods never overfit on separable data. arXiv preprint arXiv:2007.00028, 2020. 
* [SHM+16] David Silver, Aja Huang, Chris J Maddison, Arthur Guez, Laurent Sifre, George Van Den Driessche, Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, et al. Mastering the game of go with deep neural networks and tree search. Nature, 529(7587):484–489, 2016. * [SHN+18] Daniel Soudry, Elad Hoffer, Mor Shpigel Nacson, Suriya Gunasekar, and Nathan Srebro. The implicit bias of gradient descent on separable data. The Journal of Machine Learning Research, 19(1):2822–2878, 2018\. * [SS19] Justin Sirignano and Konstantinos Spiliopoulos. Mean field analysis of neural networks: A central limit theorem. Stochastic Processes and their Applications, 2019. * [SSJ20] Bin Shi, Weijie J Su, and Michael I Jordan. On learning rates and Schrödinger operators. arXiv preprint arXiv:2004.06977, 2020. * [Sun19] Ruoyu Sun. Optimization for deep learning: theory and algorithms. arXiv preprint arXiv:1912.08957, 2019. * [SXY+19] Jun Shu, Qi Xie, Lixuan Yi, Qian Zhao, Sanping Zhou, Zongben Xu, and Deyu Meng. Meta-weight-net: Learning an explicit mapping for sample weighting. arXiv preprint arXiv:1902.07379, 2019. * [SZ03] Jos F Sturm and Shuzhong Zhang. On cones of nonnegative quadratic functions. Mathematics of Operations Research, 28(2):246–267, 2003. * [SZ14] Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014. * [WL90] Andrew R Webb and David Lowe. The optimised internal representation of multilayer classifier networks performs nonlinear discriminant analysis. Neural Networks, 3(4):367–375, 1990. * [WLW+16] Shoujin Wang, Wei Liu, Jia Wu, Longbing Cao, Qinxue Meng, and Paul J Kennedy. Training deep neural networks on imbalanced data sets. In 2016 international joint conference on neural networks (IJCNN), pages 4368–4374. IEEE, 2016. * [XRV17] Han Xiao, Kashif Rasul, and Roland Vollgraf. Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms. 
## Appendix A Proofs

For simplicity, in this appendix we define $[m_{1}:m_{2}]:=\\{m_{1},m_{1}+1,\dots,m_{2}\\}$ for $m_{1},m_{2}\in\mathbb{N}$ with $m_{1}\leq m_{2}$ and $[m_{2}]:=[1:m_{2}]$ for $m_{2}\geq 1$.

### A.1 Balanced Case

#### A.1.1 Proofs of Theorem 1 and Proposition 2

Because the objective function contains products of the optimization variables, Program (7) is nonconvex. Thus the KKT condition is not sufficient for optimality. To prove Theorem 1, we directly determine the global minimum of (7). During this procedure, one key step is to show that program (7) is equivalent to minimizing a symmetric quadratic function: $\sum_{i=1}^{n}\left[\left(\sum_{k=1}^{K}\bm{h}_{k,i}\right)^{\top}\left(\sum_{k=1}^{K}\mathbf{w}_{k}\right)-K\sum_{k=1}^{K}\bm{h}_{k,i}^{\top}\mathbf{w}_{k}\right]$ under the same constraints with suitable conditions. Finally, by checking all the conditions required to attain the minimum, we obtain the minimizer of (7). The details are given below.

###### Proof of Theorem 1.
By the concavity of $\log(\cdot)$, for any $\mathbf{z}\in\mathbb{R}^{K}$ with positive entries, $k\in[K]$, constants $C_{a},C_{b}>0$, letting $C_{c}=\frac{C_{b}}{(C_{a}+C_{b})(K-1)}$, we have $\displaystyle-\log\left(\frac{\mathbf{z}(k)}{\sum_{{k^{\prime}}=1}^{K}\mathbf{z}({k^{\prime}})}\right)$ $\displaystyle=$ $\displaystyle-\log(\mathbf{z}(k))+\log\left(\frac{C_{a}}{C_{a}+C_{b}}\left(\frac{(C_{a}+C_{b})~{}\mathbf{z}(k)}{C_{a}}\right)+C_{c}\sum_{{k^{\prime}}=1,{k^{\prime}}\neq k}^{K}\frac{\mathbf{z}({k^{\prime}})}{C_{c}}\right)$ $\displaystyle\overset{a}{\geq}$ $\displaystyle-\log(\mathbf{z}(k))+\frac{C_{a}}{C_{a}+C_{b}}\log\left(\frac{(C_{a}+C_{b})~{}\mathbf{z}(k)}{C_{a}}\right)+C_{c}\sum_{{k^{\prime}}=1,{k^{\prime}}\neq k}^{K}\log\left(\frac{\mathbf{z}({k^{\prime}})}{C_{c}}\right)$ $\displaystyle\overset{b}{=}$ $\displaystyle-\frac{C_{b}}{C_{a}+C_{b}}\left[\log(\mathbf{z}(k))-\frac{1}{K-1}\sum_{{k^{\prime}}=1,{k^{\prime}}\neq k}^{K}\log(\mathbf{z}({k^{\prime}}))\right]+C_{d},$ (19) where $\overset{a}{\geq}$ applies the concavity of $\log(\cdot)$ and in $\overset{b}{=}$, we define $C_{d}:=\frac{C_{a}}{C_{a}+C_{b}}\log(\frac{C_{a}+C_{b}}{C_{a}})+\frac{C_{b}}{C_{a}+C_{b}}\log(1/C_{c})$. Note that in (19), $C_{a}$ and $C_{b}$ can be any positive numbers. To prove Theorem 1, we set $C_{a}:=\exp\left(\sqrt{E_{H}E_{W}}\right)$ and $C_{b}:=\exp\left(-\sqrt{E_{H}E_{W}}/(K-1)\right)$, which leads to the tightest lower bound for the objective of (7).
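Since (19) underpins the whole lower-bound argument, it can be sanity-checked numerically. The sketch below (with illustrative choices of $K$, $E_{H}$, $E_{W}$; it is not part of the proof) samples random positive vectors $\mathbf{z}$ and verifies the inequality:

```python
import math
import random

# Numerical sanity check (not part of the proof) of the concavity bound (19):
# for any positive vector z and constants C_a, C_b > 0,
#   -log(z_k / sum(z)) >= -(C_b/(C_a+C_b)) * [log z_k - (1/(K-1)) * sum_{k' != k} log z_{k'}] + C_d.
# K, E_H, E_W are illustrative choices.
random.seed(0)
K = 5
E_H = E_W = 1.0
C_a = math.exp(math.sqrt(E_H * E_W))
C_b = math.exp(-math.sqrt(E_H * E_W) / (K - 1))
C_c = C_b / ((C_a + C_b) * (K - 1))
C_d = (C_a / (C_a + C_b)) * math.log((C_a + C_b) / C_a) \
    + (C_b / (C_a + C_b)) * math.log(1.0 / C_c)

violations = 0
for _ in range(1000):
    z = [math.exp(random.gauss(0, 1)) for _ in range(K)]  # positive entries, as with z = exp(W h)
    k = random.randrange(K)
    lhs = -math.log(z[k] / sum(z))
    other_logs = sum(math.log(z[j]) for j in range(K) if j != k)
    rhs = -(C_b / (C_a + C_b)) * (math.log(z[k]) - other_logs / (K - 1)) + C_d
    if lhs < rhs - 1e-10:
        violations += 1
print(violations)  # 0
```

The same check passes for any other positive constants $C_{a}$, $C_{b}$, consistent with the remark above.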
Applying (19) to the objective, we have $\displaystyle\frac{1}{N}\sum_{k=1}^{K}\sum_{i=1}^{n}\mathcal{L}(\mathbf{W}\bm{h}_{k,i},\mathbf{y}_{k})$ (20) $\displaystyle\geq$ $\displaystyle\frac{C_{b}}{(C_{a}+C_{b})N(K-1)}\sum_{i=1}^{n}\left[\left(\sum_{k=1}^{K}\bm{h}_{k,i}\right)^{\top}\left(\sum_{k=1}^{K}\mathbf{w}_{k}\right)-K\sum_{k=1}^{K}\bm{h}_{k,i}^{\top}\mathbf{w}_{k}\right]+C_{d}.$ Defining $\bar{\bm{h}}_{i}:=\frac{1}{K}\sum_{k=1}^{K}\bm{h}_{k,i}$ for $i\in[n]$, it follows by Young’s inequality that $\displaystyle\sum_{i=1}^{n}\left[\left(\sum_{k=1}^{K}\bm{h}_{k,i}\right)^{\top}\left(\sum_{k=1}^{K}\mathbf{w}_{k}\right)-K\sum_{k=1}^{K}\bm{h}_{k,i}^{\top}\mathbf{w}_{k}\right]$ $\displaystyle=$ $\displaystyle K\sum_{i=1}^{n}\sum_{k=1}^{K}(\bar{\bm{h}}_{i}-\bm{h}_{k,i})^{\top}\mathbf{w}_{k}$ $\displaystyle\geq$ $\displaystyle-\frac{K}{2}\sum_{k=1}^{K}\sum_{i=1}^{n}\|\bar{\bm{h}}_{i}-\bm{h}_{k,i}\|^{2}/C_{e}-\frac{C_{e}N}{2}\sum_{k=1}^{K}\|\mathbf{w}_{k}\|^{2},$ (21) where we pick $C_{e}:=\sqrt{E_{H}/E_{W}}$. The two terms on the right-hand side of (21) can be bounded via the constraints of (7). In particular, we have $\frac{C_{e}N}{2}\sum_{k=1}^{K}\|\mathbf{w}_{k}\|^{2}\leq\frac{KN\sqrt{E_{H}E_{W}}}{2},$ (22) and $\displaystyle\frac{K}{2}\sum_{k=1}^{K}\sum_{i=1}^{n}\|\bar{\bm{h}}_{i}-\bm{h}_{k,i}\|^{2}/C_{e}$ $\displaystyle\overset{a}{=}\frac{K^{2}}{2C_{e}}\sum_{i=1}^{n}\left(\frac{1}{K}\sum_{k=1}^{K}\|\bm{h}_{k,i}\|^{2}-\|\bar{\bm{h}}_{i}\|^{2}\right)$ $\displaystyle\leq\frac{K}{2C_{e}}\sum_{k=1}^{K}\sum_{i=1}^{n}\|\bm{h}_{k,i}\|^{2}\leq\frac{KN\sqrt{E_{H}E_{W}}}{2},$ (23) where $\overset{a}{=}$ uses the fact that $\mathbb{E}\|\mathbf{a}-\mathbb{E}[\mathbf{a}]\|^{2}=\mathbb{E}\|\mathbf{a}\|^{2}-\|\mathbb{E}[\mathbf{a}]\|^{2}$.
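Combining (21)–(23), the quadratic form above is bounded below by $-KN\sqrt{E_{H}E_{W}}$ on the feasible set. This combined bound can be checked numerically; the sketch below (illustrative dimensions and energies, not part of the argument) rescales random $(\mathbf{H},\mathbf{W})$ so that the energy constraints of (7) hold with equality:

```python
import math
import random

# Numerical check of the combined bound (21)-(23): for any (H, W) feasible for (7), i.e.
# (1/(K*n)) * sum ||h_{k,i}||^2 <= E_H and (1/K) * sum ||w_k||^2 <= E_W,
#   Q := sum_i [ (sum_k h_{k,i}) . (sum_k w_k) - K * sum_k h_{k,i} . w_k ]  >=  -K*N*sqrt(E_H*E_W).
# Dimensions and energies below are illustrative.
random.seed(1)
K, n, p = 4, 6, 8
N = K * n
E_H, E_W = 2.0, 3.0

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

ok = True
for _ in range(200):
    H = [[[random.gauss(0, 1) for _ in range(p)] for _ in range(n)] for _ in range(K)]
    W = [[random.gauss(0, 1) for _ in range(p)] for _ in range(K)]
    # rescale so the energy constraints hold with equality
    sH = math.sqrt(E_H * N / sum(x * x for k in H for h in k for x in h))
    sW = math.sqrt(E_W * K / sum(x * x for w in W for x in w))
    H = [[[x * sH for x in h] for h in k] for k in H]
    W = [[x * sW for x in w] for w in W]
    sum_w = [sum(W[k][d] for k in range(K)) for d in range(p)]
    Q = 0.0
    for i in range(n):
        sum_h = [sum(H[k][i][d] for k in range(K)) for d in range(p)]
        Q += dot(sum_h, sum_w) - K * sum(dot(H[k][i], W[k]) for k in range(K))
    ok = ok and Q >= -K * N * math.sqrt(E_H * E_W) - 1e-8
print(ok)  # True
```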
Thus, plugging (21), (22), and (23) into (20), we have $\frac{1}{N}\sum_{k=1}^{K}\sum_{i=1}^{n}\mathcal{L}(\mathbf{W}\bm{h}_{k,i},\mathbf{y}_{k})\geq\frac{C_{b}}{C_{a}+C_{b}}\frac{K\sqrt{E_{H}E_{W}}}{K-1}+C_{d}:=L_{0}.$ (24) Now we check the conditions under which the equality in (24) holds. By the strict concavity of $\log(\cdot)$, the equality in (20) holds if and only if $\bm{h}_{k,i}\cdot\mathbf{w}_{k}=\bm{h}_{k,i}\cdot\mathbf{w}_{{k^{\prime}}}+C,$ for a constant $C$ that does not depend on the triple, for all $(k,i,{k^{\prime}})\in\\{(k,i,{k^{\prime}}):k\in[K],{k^{\prime}}\in[K],{k^{\prime}}\neq k,i\in[n]\\}$. The equality in (21) holds if and only if $\bar{\bm{h}}_{i}-\bm{h}_{k,i}=-C_{e}\mathbf{w}_{k},\quad k\in[K],~{}i\in[n].$ The equalities in (22) and (23) hold if and only if $\frac{1}{K}\sum_{k=1}^{K}\frac{1}{n}\sum_{i=1}^{n}\left\|\bm{h}_{k,i}\right\|^{2}=E_{H},\quad\frac{1}{K}\sum_{k=1}^{K}\left\|\mathbf{w}_{k}\right\|^{2}=E_{W},\quad\bar{\bm{h}}_{i}=\mathbf{0}_{p},~{}i\in[n].$ Applying Lemma 2, stated at the end of this section, we conclude that $\left(\mathbf{H},\mathbf{W}\right)$ satisfies (8). Conversely, it is easy to verify that the equality in (24) is attained when $(\mathbf{H},\mathbf{W})$ admits (8). So $L_{0}$ is the global minimum of (7) and $(\mathbf{H},\mathbf{W})$ in (8) is the unique form of the minimizers. We complete the proof of Theorem 1. ∎

###### Proof of Proposition 2.
We introduce the set $\mathcal{S}_{R}$ as $\mathcal{S}_{R}:=\left\\{\left(\mathbf{H},\mathbf{W}\right):\begin{matrix}[\bm{h}_{1},\ldots,\bm{h}_{K}]=B_{1}b\mathbf{P}\left[(a+1)\mathbf{I}_{K}-\mathbf{1}_{K}\mathbf{1}_{K}^{\top}\right],\\\ \mathbf{W}=B_{2}B_{3}b\left[(a+1)\mathbf{I}_{K}-\mathbf{1}_{K}\mathbf{1}_{K}^{\top}\right]^{\top}\mathbf{P}^{\top},\\\ \bm{h}_{k,i}=\bm{h}_{k},\quad k\in[K],~{}i\in[n],\\\ b\geq 0,~{}a\geq 0,~{}b^{q}[a^{q}+(K-1)]=1,\\\ |B_{1}|\leq\sqrt{E_{H}},~{}|B_{2}|\leq\sqrt{E_{W}},~{}B_{3}\geq 0,~{}B_{3}^{2}b^{2}[a^{2}+(K-1)]=1,\\\ \mathbf{P}\in\mathbb{R}^{p\times K},~{}\mathbf{P}^{\top}\mathbf{P}=\mathbf{I}_{K}.\end{matrix}\right\\}$ One can verify that $\mathcal{S}_{R}$ admits the constraints of (7), so any $\left(\mathbf{H},\mathbf{W}\right)\in\mathcal{S}_{R}$ is a feasible solution. Moreover, one can observe that this feasible solution has a special symmetric structure: for each $k\in[K]$, the features in class $k$ collapse to their mean $\bm{h}_{k}$, i.e., (NC1), and $\mathbf{w}_{k}$ is parallel to $\bm{h}_{k}$, i.e., (NC3). However, the weights do not form the vertices of an ETF unless $a=K-1$. Therefore, it suffices to show that the minimizer of $\frac{1}{N}\sum_{k=1}^{K}\sum_{i=1}^{n}\mathcal{L}(\mathbf{W}\bm{h}_{k,i},\mathbf{y}_{k})$ over the set $\mathcal{S}_{R}$ does not satisfy $a=K-1$. In fact, for any $\left(\mathbf{H},\mathbf{W}\right)\in\mathcal{S}_{R}$, the objective function value can be written as a function of $B_{1}$, $B_{2}$, $B_{3}$, $a$, and $b$.
We have $\displaystyle\frac{1}{N}\sum_{k=1}^{K}\sum_{i=1}^{n}\mathcal{L}(\mathbf{W}\bm{h}_{k,i},\mathbf{y}_{k})$ $\displaystyle=$ $\displaystyle-\log\left(\frac{\exp(B_{1}B_{2}B_{3}b^{2}[a^{2}+(K-1)])}{\exp(B_{1}B_{2}B_{3}b^{2}[a^{2}+K-1])+(K-1)\exp(B_{1}B_{2}B_{3}b^{2}[K-2-2a])}\right)$ $\displaystyle=$ $\displaystyle-\log\left(\frac{1}{1+(K-1)\exp(-B_{1}B_{2}B_{3}b^{2}(a+1)^{2})}\right).$ It thus suffices to maximize $B_{1}B_{2}B_{3}b^{2}(a+1)^{2}$, or equivalently $\left[B_{1}B_{2}B_{3}b^{2}(a+1)^{2}\right]^{2}$. By $B_{3}^{2}b^{2}[a^{2}+(K-1)]=1$ and $b^{q}[a^{q}+(K-1)]=1$, we have $\displaystyle\left[B_{1}B_{2}B_{3}b^{2}(a+1)^{2}\right]^{2}$ $\displaystyle\overset{a}{\leq}E_{H}E_{W}\left[B_{3}^{2}b^{2}(a+1)^{2}\right]\left[b^{2}(a+1)^{2}\right]$ $\displaystyle=E_{H}E_{W}\left[\frac{(a+1)^{2}}{a^{2}+(K-1)}\right]\left[\frac{(a+1)^{q}}{a^{q}+K-1}\right]^{2/q},$ (25) where $\overset{a}{\leq}$ uses $|B_{1}|\leq\sqrt{E_{H}}$ and $|B_{2}|\leq\sqrt{E_{W}}$. Let us consider the function $g:[0,+\infty)\to\mathbb{R}:g(x)=\left[\frac{(x+1)^{2}}{x^{2}+(K-1)}\right]\left[\frac{(x+1)^{q}}{x^{q}+K-1}\right]^{2/q}$. Note that, by first-order optimality, if $g^{\prime}(K-1)\neq 0$, then (25) cannot achieve its maximum at $a=K-1$, which is our desired result. Indeed, we have $g^{\prime}(K-1)=\frac{2K^{4}}{\left[(K-1)^{2}+(K-1)\right]\left[(K-1)^{q}+K-1\right]^{2/q+1}}\left[(K-1)-(K-1)^{q-1}\right].$ So $a=K-1$ will not be the maximizer of (25) unless $q=2$. We complete the proof. ∎

###### Lemma 2.

Suppose $\left(\mathbf{H},\mathbf{W}\right)$ satisfies $\bar{\bm{h}}_{i}-\bm{h}_{k,i}=-\sqrt{\frac{E_{H}}{E_{W}}}\mathbf{w}_{k},\quad k\in[K],\quad i\in[n],$ (26) and $\frac{1}{K}\sum_{k=1}^{K}\frac{1}{n}\sum_{i=1}^{n}\left\|\bm{h}_{k,i}\right\|^{2}=E_{H},\quad\frac{1}{K}\sum_{k=1}^{K}\left\|\mathbf{w}_{k}\right\|^{2}=E_{W},\quad\bar{\bm{h}}_{i}=\mathbf{0}_{p},~{}i\in[n],$ (27) where $\bar{\bm{h}}_{i}:=\frac{1}{K}\sum_{k=1}^{K}\bm{h}_{k,i}$ with $i\in[n]$.
Moreover, suppose there exists a constant $C$ such that for all $(k,i,{k^{\prime}})\in\\{(k,i,{k^{\prime}}):k\in[K],{k^{\prime}}\in[K],{k^{\prime}}\neq k,i\in[n]\\}$, we have $\bm{h}_{k,i}\cdot\mathbf{w}_{k}=\bm{h}_{k,i}\cdot\mathbf{w}_{{k^{\prime}}}+C.$ (28) Then $\left(\mathbf{H},\mathbf{W}\right)$ satisfies (8).

###### Proof.

Combining (26) with the last equality in (27), we have $\mathbf{W}=\sqrt{\frac{E_{W}}{E_{H}}}~{}\bigg{[}\bm{h}_{1},\ldots,\bm{h}_{K}\bigg{]}^{\top},\quad\quad\bm{h}_{k,i}=\bm{h}_{k},~{}k\in[K],~{}i\in[n].$ Thus it remains to show $\displaystyle\mathbf{W}=\sqrt{E_{W}}~{}\left({\mathbf{M}^{\star}}\right)^{\top},$ (29) where ${\mathbf{M}^{\star}}$ is a $K$-simplex ETF. Plugging $\bm{h}_{k}=\bm{h}_{k,i}=\sqrt{\frac{E_{H}}{E_{W}}}\mathbf{w}_{k}$ into (28), we have, for all $(k,{k^{\prime}})\in\\{(k,{k^{\prime}}):k\in[K],{k^{\prime}}\in[K],{k^{\prime}}\neq k\\}$, $\sqrt{\frac{E_{H}}{E_{W}}}\|\mathbf{w}_{k}\|^{2}=\bm{h}_{k}\cdot\mathbf{w}_{k}=\bm{h}_{k}\cdot\mathbf{w}_{{k^{\prime}}}+C=\sqrt{\frac{E_{H}}{E_{W}}}~{}\mathbf{w}_{k}\cdot\mathbf{w}_{{k^{\prime}}}+C,$ and, by symmetry, $\sqrt{\frac{E_{H}}{E_{W}}}\|\mathbf{w}_{{k^{\prime}}}\|^{2}=\bm{h}_{{k^{\prime}}}\cdot\mathbf{w}_{{k^{\prime}}}=\bm{h}_{{k^{\prime}}}\cdot\mathbf{w}_{k}+C=\sqrt{\frac{E_{H}}{E_{W}}}~{}\mathbf{w}_{{k^{\prime}}}\cdot\mathbf{w}_{k}+C.$ Subtracting the two identities yields $\|\mathbf{w}_{k}\|=\|\mathbf{w}_{{k^{\prime}}}\|$ for all $k\neq{k^{\prime}}$. Therefore, from $\frac{1}{K}\sum_{k=1}^{K}\left\|\mathbf{w}_{k}\right\|^{2}=E_{W}$, we have $\|\mathbf{w}_{k}\|=\sqrt{E_{W}}$, and hence $\bm{h}_{k}\cdot\mathbf{w}_{k}=\sqrt{E_{H}E_{W}}$ and $\bm{h}_{k}\cdot\mathbf{w}_{{k^{\prime}}}=C^{\prime}:=\sqrt{E_{H}E_{W}}-C$. On the other hand, recalling that $\bar{\bm{h}}_{i}=\mathbf{0}_{p}$ for $i\in[n]$, we have $\sum_{k=1}^{K}\bm{h}_{k}=\mathbf{0}_{p}$, which further yields $\sum_{k=1}^{K}\bm{h}_{k}\cdot\mathbf{w}_{k^{\prime}}=0$ for ${k^{\prime}}\in[K]$. Then it follows from $\bm{h}_{k}\cdot\mathbf{w}_{{k^{\prime}}}=C^{\prime}$ and $\bm{h}_{k}\cdot\mathbf{w}_{k}=\sqrt{E_{H}E_{W}}$ that $\bm{h}_{k}\cdot\mathbf{w}_{{k^{\prime}}}=-\sqrt{E_{H}E_{W}}/(K-1)$.
Thus we obtain $\mathbf{W}\mathbf{W}^{\top}=\sqrt{\frac{E_{W}}{E_{H}}}\mathbf{W}[\bm{h}_{1},\ldots,\bm{h}_{K}]=E_{W}\left[\frac{K}{K-1}\left(\mathbf{I}_{K}-\frac{1}{K}\mathbf{1}_{K}\mathbf{1}_{K}^{\top}\right)\right],$ which implies (29). We complete the proof. ∎

#### A.1.2 Proofs of Theorems 3 and 4

The proofs of Theorems 3 and 4 follow arguments similar to that of Theorem 1.

###### Proof of Theorem 3.

For $k\in[K]$, $i\in[n]$, and ${k^{\prime}}\in[K]$, define $E_{k,i,{k^{\prime}}}:=\frac{1}{n}\sum_{j=1}^{n}\exp(\bm{h}_{k,i}\cdot\bm{h}_{{k^{\prime}},j}/\tau).$ For constants $C_{a}:=\exp\left(\sqrt{E_{H}E_{W}}\right)$ and $C_{b}:=\exp\left(-\sqrt{E_{H}E_{W}}/(K-1)\right)$, let $C_{c}:=\frac{C_{b}}{(C_{a}+C_{b})(K-1)}$. Using a similar argument as in (19), we have for $j\in[n]$, $\displaystyle-\log\left(\frac{\exp(\bm{h}_{k,i}\cdot\bm{h}_{k,j}/\tau)}{\sum_{{k^{\prime}}=1}^{K}E_{k,i,{k^{\prime}}}}\right)$ (30) $\displaystyle=$ $\displaystyle-\bm{h}_{k,i}\cdot\bm{h}_{k,j}/\tau+\log\left(\frac{C_{a}}{C_{a}+C_{b}}\left(\frac{(C_{a}+C_{b})~{}E_{k,i,k}}{C_{a}}\right)+C_{c}\sum_{{k^{\prime}}=1,~{}{k^{\prime}}\neq k}^{K}\frac{E_{k,i,{k^{\prime}}}}{C_{c}}\right)$ $\displaystyle\overset{a}{\geq}$ $\displaystyle-\bm{h}_{k,i}\cdot\bm{h}_{k,j}/\tau+\frac{C_{a}}{C_{a}+C_{b}}\log\left(\frac{(C_{a}+C_{b})~{}E_{k,i,k}}{C_{a}}\right)+C_{c}\sum_{{k^{\prime}}=1,~{}{k^{\prime}}\neq k}^{K}\log\left(\frac{E_{k,i,{k^{\prime}}}}{C_{c}}\right)$ $\displaystyle\overset{b}{=}$ $\displaystyle-\bm{h}_{k,i}\cdot\bm{h}_{k,j}/\tau+\frac{C_{a}}{C_{a}+C_{b}}\log\left(E_{k,i,k}\right)+C_{c}\sum_{{k^{\prime}}=1,~{}{k^{\prime}}\neq k}^{K}\log\left(E_{k,i,{k^{\prime}}}\right)+C_{d}$ $\displaystyle\overset{c}{\geq}$ $\displaystyle-\bm{h}_{k,i}\cdot\bm{h}_{k,j}/\tau+\frac{C_{a}}{(C_{a}+C_{b})n}\sum_{\ell=1}^{n}\bm{h}_{k,i}\cdot\bm{h}_{k,\ell}/\tau+\frac{C_{c}}{n}\sum_{{k^{\prime}}=1,~{}{k^{\prime}}\neq k}^{K}\sum_{\ell=1}^{n}\bm{h}_{k,i}\cdot\bm{h}_{{k^{\prime}},\ell}/\tau+C_{d},$ where $\overset{a}{\geq}$ and
$\overset{c}{\geq}$ apply the concavity of $\log(\cdot)$, and in $\overset{b}{=}$ we define $C_{d}:=\frac{C_{a}}{C_{a}+C_{b}}\log(\frac{C_{a}+C_{b}}{C_{a}})+\frac{C_{b}}{C_{a}+C_{b}}\log(1/C_{c})$. Then plugging (30) into the objective function, we have $\displaystyle\frac{1}{N}\sum_{k=1}^{K}\sum_{i=1}^{n}\frac{1}{n}\sum_{j=1}^{n}-\log\left(\frac{\exp(\bm{h}_{k,i}\cdot\bm{h}_{k,j}/\tau)}{\sum_{{k^{\prime}}=1}^{K}\sum_{\ell=1}^{n}\exp(\bm{h}_{k,i}\cdot\bm{h}_{{k^{\prime}},\ell}/\tau)}\right)$ (31) $\displaystyle=$ $\displaystyle\frac{1}{N}\sum_{k=1}^{K}\sum_{i=1}^{n}\frac{1}{n}\sum_{j=1}^{n}-\log\left(\frac{\exp(\bm{h}_{k,i}\cdot\bm{h}_{k,j}/\tau)}{\sum_{{k^{\prime}}=1}^{K}E_{k,i,{k^{\prime}}}}\right)+\log(n)$ $\displaystyle\overset{(30)}{\geq}$ $\displaystyle\frac{C_{b}K}{(C_{a}+C_{b})N(K-1)\tau}\sum_{k=1}^{K}\sum_{i=1}^{n}\left(-\frac{1}{n}\sum_{j=1}^{n}\left(\bm{h}_{k,i}\cdot\bm{h}_{k,j}-\frac{1}{K}\sum_{{k^{\prime}}=1}^{K}\bm{h}_{k,i}\cdot\bm{h}_{{k^{\prime}},j}\right)\right)+C_{d}+\log(n).$ Now defining $\bar{\bm{h}}_{i}:=\frac{1}{K}\sum_{k=1}^{K}\bm{h}_{k,i}$ for $i\in[n]$, arguments similar to (21) and (23) give that $\displaystyle\sum_{k=1}^{K}\sum_{i=1}^{n}\left(-\frac{1}{n}\sum_{j=1}^{n}\left(\bm{h}_{k,i}\cdot\bm{h}_{k,j}-\frac{1}{K}\sum_{{k^{\prime}}=1}^{K}\bm{h}_{k,i}\cdot\bm{h}_{{k^{\prime}},j}\right)\right)$ $\displaystyle=$ $\displaystyle\sum_{k=1}^{K}\sum_{i=1}^{n}\left(-\frac{1}{n}\sum_{j=1}^{n}\bm{h}_{k,i}\cdot(\bm{h}_{k,j}-\bar{\bm{h}}_{j})\right)$ $\displaystyle\overset{a}{\geq}$ $\displaystyle-\frac{1}{2}\sum_{k=1}^{K}\sum_{i=1}^{n}\left\|\bm{h}_{k,i}\right\|^{2}-\frac{1}{2}\sum_{k=1}^{K}\sum_{i=1}^{n}\left\|\bm{h}_{k,i}-\bar{\bm{h}}_{i}\right\|^{2}$ $\displaystyle\overset{b}{\geq}$ $\displaystyle-\frac{1}{2}\sum_{k=1}^{K}\sum_{i=1}^{n}\left\|\bm{h}_{k,i}\right\|^{2}-\frac{K}{2}\sum_{i=1}^{n}\left(\frac{1}{K}\sum_{k=1}^{K}\left\|\bm{h}_{k,i}\right\|^{2}-\left\|\bar{\bm{h}}_{i}\right\|^{2}\right)$ $\displaystyle\geq$
$\displaystyle-\sum_{k=1}^{K}\sum_{i=1}^{n}\left\|\bm{h}_{k,i}\right\|^{2}\overset{c}{\geq}-NE_{H},$ (32) where $\overset{a}{\geq}$ follows from Young’s inequality, $\overset{b}{\geq}$ follows from $\mathbb{E}\|\mathbf{a}-\mathbb{E}[\mathbf{a}]\|^{2}=\mathbb{E}\|\mathbf{a}\|^{2}-\|\mathbb{E}[\mathbf{a}]\|^{2}$, and $\overset{c}{\geq}$ uses the constraint of (10). Therefore, plugging (32) into (31) yields that $\displaystyle\frac{1}{N}\sum_{k=1}^{K}\sum_{i=1}^{n}\frac{1}{n}\sum_{j=1}^{n}-\log\left(\frac{\exp(\bm{h}_{k,i}\cdot\bm{h}_{k,j}/\tau)}{\sum_{{k^{\prime}}=1}^{K}\sum_{\ell=1}^{n}\exp(\bm{h}_{k,i}\cdot\bm{h}_{{k^{\prime}},\ell}/\tau)}\right)$ $\displaystyle\geq$ $\displaystyle-\frac{C_{b}KE_{H}}{(C_{a}+C_{b})(K-1)\tau}+C_{d}+\log(n).$ (33) Now we check the conditions under which the equality in (33) holds. By the strict concavity of $\log(\cdot)$, the equality in (30) holds only if for all $(k,i,{k^{\prime}})\in\\{(k,i,{k^{\prime}}):k\in[K],{k^{\prime}}\in[K],{k^{\prime}}\neq k,i\in[n]\\}$, $\frac{E_{k,i,k}}{C_{a}}=\frac{E_{k,i,{k^{\prime}}}}{C_{b}}.$ (34) The equality in (32) holds if and only if: $\bm{h}_{k,i}=\bm{h}_{k},~{}i\in[n],~{}k\in[K],\quad\frac{1}{K}\sum_{k=1}^{K}\left\|\bm{h}_{k}\right\|^{2}=E_{H},\quad\sum_{k=1}^{K}\bm{h}_{k}=\mathbf{0}_{p}.$ (35) Plugging $\bm{h}_{k,i}=\bm{h}_{k}$ into (34), we have for $(k,{k^{\prime}})\in\\{(k,{k^{\prime}}):k\in[K],{k^{\prime}}\in[K],{k^{\prime}}\neq k\\}$, $\frac{\exp(\|\bm{h}_{k}\|^{2}/\tau)}{C_{a}}=\frac{\exp(\bm{h}_{k}\cdot\bm{h}_{{k^{\prime}}}/\tau)}{C_{b}}=\frac{\exp(\|\bm{h}_{k^{\prime}}\|^{2}/\tau)}{C_{a}}.$ Then it follows from $\frac{1}{K}\sum_{k=1}^{K}\left\|\bm{h}_{k}\right\|^{2}=E_{H}$ that $\|\bm{h}_{k}\|^{2}=E_{H}$ for $k\in[K]$. On the other hand, since $\sum_{k=1}^{K}\bm{h}_{k}=\mathbf{0}_{p}$, we obtain $\bm{h}_{k}\cdot\bm{h}_{{k^{\prime}}}=-\frac{E_{H}}{K-1}$ for $(k,{k^{\prime}})\in\\{(k,{k^{\prime}}):k\in[K],{k^{\prime}}\in[K],{k^{\prime}}\neq k\\}$.
Therefore, $[\bm{h}_{1},\ldots,\bm{h}_{K}]^{\top}[\bm{h}_{1},\ldots,\bm{h}_{K}]=E_{H}\left[\frac{K}{K-1}\left(\mathbf{I}_{K}-\frac{1}{K}\mathbf{1}_{K}\mathbf{1}_{K}^{\top}\right)\right],$ which implies (12). Conversely, it is easy to verify that the equality in (33) is attained when $\mathbf{H}$ admits (12). We complete the proof of Theorem 3. ∎

###### Proof of Theorem 4.

We first determine the minimum value of (7). For simplicity of exposition, we introduce $\mathbf{z}_{k,i}:=\mathbf{W}\bm{h}_{k,i}$ for $k\in[K]$ and $i\in[n]$. By the convexity of $g_{2}$, for any $k\in[K]$ and $i\in[n]$, we have $\displaystyle\sum_{{k^{\prime}}=1,~{}{k^{\prime}}\neq k}^{K}g_{2}\left(\mathbf{S}(\mathbf{z}_{k,i})({k^{\prime}})\right)$ $\displaystyle\geq(K-1)g_{2}\left(\frac{1}{K-1}\sum_{{k^{\prime}}=1,~{}{k^{\prime}}\neq k}^{K}\mathbf{S}(\mathbf{z}_{k,i})({k^{\prime}})\right)$ $\displaystyle\overset{a}{=}(K-1)g_{2}\left(\frac{1-\mathbf{S}(\mathbf{z}_{k,i})(k)}{K-1}\right),$ (36) where $\overset{a}{=}$ uses $\sum_{k=1}^{K}\mathbf{S}(\mathbf{a})(k)=1$ for any $\mathbf{a}\in\mathbb{R}^{K}$.
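The Jensen step in (36) can be checked numerically for a concrete convex $g_{2}$. The sketch below (with $g_{2}(x)=x^{2}$ as an arbitrary convex choice; not part of the proof) verifies the inequality on random softmax vectors:

```python
import math
import random

# Quick check of the Jensen step in (36): for a convex g2 and softmax probabilities
# p = S(z), we have sum_{k' != k} g2(p_{k'}) >= (K-1) * g2(mean of those p_{k'}),
# where the mean equals (1 - p_k)/(K-1) because softmax coordinates sum to 1.
# g2(x) = x**2 is an arbitrary convex choice for illustration.
random.seed(2)
K = 6
g2 = lambda x: x * x

ok = True
for _ in range(1000):
    z = [random.gauss(0, 1) for _ in range(K)]
    m = max(z)
    e = [math.exp(v - m) for v in z]          # numerically stable softmax
    p = [v / sum(e) for v in e]
    k = random.randrange(K)
    others = [p[j] for j in range(K) if j != k]
    lhs = sum(g2(x) for x in others)
    rhs = (K - 1) * g2(sum(others) / (K - 1))
    ok = ok and lhs >= rhs - 1e-12
print(ok)  # True
```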
Then it follows by the convexity of $g_{1}$ and $g_{2}$ that $\displaystyle\frac{1}{N}\sum_{k=1}^{K}\sum_{i=1}^{n}\mathcal{L}(\mathbf{W}\bm{h}_{k,i},\mathbf{y}_{k})$ (37) $\displaystyle=$ $\displaystyle\frac{1}{N}\sum_{i=1}^{n}\sum_{k=1}^{K}\left[g_{1}\left(\mathbf{S}(\mathbf{z}_{k,i})(k)\right)+\sum_{{k^{\prime}}=1,~{}{k^{\prime}}\neq k}^{K}g_{2}\left(\mathbf{S}(\mathbf{z}_{k,i})({k^{\prime}})\right)\right]$ $\displaystyle\overset{(36)}{\geq}$ $\displaystyle\frac{1}{N}\sum_{i=1}^{n}\sum_{k=1}^{K}\left[g_{1}\left(\mathbf{S}(\mathbf{z}_{k,i})(k)\right)+(K-1)g_{2}\left(\frac{1-\mathbf{S}(\mathbf{z}_{k,i})(k)}{K-1}\right)\right]$ $\displaystyle\geq$ $\displaystyle g_{1}\left(\frac{1}{N}\sum_{i=1}^{n}\sum_{k=1}^{K}\mathbf{S}(\mathbf{z}_{k,i})(k)\right)+(K-1)g_{2}\left(\frac{1}{K-1}\left(1-\frac{1}{N}\sum_{i=1}^{n}\sum_{k=1}^{K}\mathbf{S}(\mathbf{z}_{k,i})(k)\right)\right).$ Because $g_{1}(x)+(K-1)g_{2}\left(\frac{1-x}{K-1}\right)$ is monotonically decreasing in $x$, it suffices to maximize $\frac{1}{N}\sum_{i=1}^{n}\sum_{k=1}^{K}\mathbf{S}(\mathbf{z}_{k,i})(k).$ To begin with, for any $\mathbf{z}_{k,i}$ with $k\in[K]$ and $i\in[n]$, by the convexity of the exponential function and the monotonicity of $x\mapsto\frac{a}{a+x}$ for $x>0$ with $a>0$, we have $\displaystyle\mathbf{S}(\mathbf{z}_{k,i})(k)$ $\displaystyle=\frac{\exp(\mathbf{z}_{k,i}(k))}{\sum_{{k^{\prime}}=1}^{K}\exp(\mathbf{z}_{k,i}({k^{\prime}}))}$ $\displaystyle\leq\frac{\exp(\mathbf{z}_{k,i}(k))}{\exp(\mathbf{z}_{k,i}(k))+(K-1)\exp\left(\frac{1}{K-1}\sum_{{k^{\prime}}=1,~{}{k^{\prime}}\neq k}^{K}\mathbf{z}_{k,i}({k^{\prime}})\right)}$ $\displaystyle=\frac{1}{1+(K-1)\exp\left(\frac{1}{K-1}\sum_{{k^{\prime}}=1,~{}{k^{\prime}}\neq k}^{K}\mathbf{z}_{k,i}({k^{\prime}})-\mathbf{z}_{k,i}(k)\right)}.$ (38) Consider the function $g_{0}:\mathbb{R}\to\mathbb{R}$ given by $g_{0}(x)=\frac{1}{1+C\exp(x)}$ with $C:=(K-1)\geq 1$.
We have $g_{0}^{\prime\prime}(x)=-\frac{C\exp(x)\left(1-C\exp(x)\right)}{(1+C\exp(x))^{3}}.$ (39) For any feasible solution $\left(\mathbf{H},\mathbf{W}\right)$ of (7), we divide the index set $[n]$ into two subsets $\mathcal{S}_{1}$ and $\mathcal{S}_{2}$ defined below: (A) $i\in\mathcal{S}_{1}$ if there exists at least one $k\in[K]$ such that $\frac{1}{K-1}\sum_{{k^{\prime}}=1,~{}{k^{\prime}}\neq k}^{K}\mathbf{z}_{k,i}({k^{\prime}})-\mathbf{z}_{k,i}(k)\geq\log\left(\frac{1}{K-1}\right);$ (B) $i\in\mathcal{S}_{2}$ if for all $k\in[K]$, $\frac{1}{K-1}\sum_{{k^{\prime}}=1,~{}{k^{\prime}}\neq k}^{K}\mathbf{z}_{k,i}({k^{\prime}})-\mathbf{z}_{k,i}(k)<\log\left(\frac{1}{K-1}\right).$ Clearly, $\mathcal{S}_{1}\cap\mathcal{S}_{2}=\varnothing$. Let $|\mathcal{S}_{1}|=t$; then $|\mathcal{S}_{2}|=n-t$. Define the function $L:[0:n]\to\mathbb{R}$ as $L(t):=\begin{cases}N-\left(\frac{1}{2}t+\frac{K(n-t)}{1+\exp\left(\frac{K}{K-1}\sqrt{n/(n-t)}\sqrt{E_{H}E_{W}}-\log(K-1)\right)}\right),&\quad t\in[0:n-1],\\\ N-\frac{n}{2},&\quad t=n.\end{cases}$ (40) We show in Lemma 3 (see the end of the proof) that $\frac{1}{N}\sum_{i=1}^{n}\sum_{k=1}^{K}\mathbf{S}(\mathbf{z}_{k,i})(k)\leq\frac{1}{N}L(0).$ (41) Plugging (41) into (37), the objective function can be lower bounded as $\frac{1}{N}\sum_{k=1}^{K}\sum_{i=1}^{n}\mathcal{L}(\mathbf{W}\bm{h}_{k,i},\mathbf{y}_{k})\geq g_{1}\left(\frac{1}{N}L(0)\right)+(K-1)g_{2}\left(\frac{1}{K-1}\left(1-\frac{1}{N}L(0)\right)\right):=L_{0}.$ (42) On the other hand, one can directly verify that the equality in (42) is attained when $(\mathbf{H},\mathbf{W})$ satisfies (8). So $L_{0}$ is the global minimum of (7) and (8) is a minimizer of (7). Now we show that all the solutions are of the form (8) under the assumption that $g_{2}$ is strictly convex and $g_{1}$ (or $g_{2}$) is strictly monotone.
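The softmax upper bound (38), which drives the definitions of $\mathcal{S}_{1}$, $\mathcal{S}_{2}$, and $L$, can be verified numerically. The following sketch (illustrative $K$; not part of the proof) checks it on random logit vectors:

```python
import math
import random

# Numerical check of the softmax upper bound (38):
#   S(z)(k) <= 1 / (1 + (K-1) * exp( mean_{k' != k} z(k') - z(k) )),
# which follows from the convexity of exp applied to the denominator of the softmax.
random.seed(3)
K = 7

ok = True
for _ in range(1000):
    z = [random.gauss(0, 1) for _ in range(K)]
    k = random.randrange(K)
    s_k = math.exp(z[k]) / sum(math.exp(v) for v in z)
    mean_rest = sum(z[j] for j in range(K) if j != k) / (K - 1)
    bound = 1.0 / (1.0 + (K - 1) * math.exp(mean_rest - z[k]))
    ok = ok and s_k <= bound + 1e-12
print(ok)  # True
```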
By the strict convexity of $g_{2}$, the equality in (36) holds if and only if, for any $k\in[K]$, $i\in[n]$, and ${k^{\prime}},{k^{\prime\prime}}\in[K]$ with ${k^{\prime}}\neq k$ and ${k^{\prime\prime}}\neq k$, we have $\mathbf{S}(\mathbf{z}_{k,i})({k^{\prime}})=\mathbf{S}(\mathbf{z}_{k,i})({k^{\prime\prime}}),$ which indicates that $\bm{h}_{k,i}\cdot\mathbf{w}_{{k^{\prime}}}=\bm{h}_{k,i}\cdot\mathbf{w}_{{k^{\prime\prime}}}.$ (43) Again, by the strict convexity of $g_{2}$, the equality in (37) holds if and only if, for all $k\in[K]$, $i\in[n]$, and a suitable number $C^{\prime}\in(0,1)$, we have $\mathbf{S}(\mathbf{z}_{k,i})(k)=C^{\prime}.$ (44) Combining (43) with (44), we have for all $(k,i,{k^{\prime}})\in\\{(k,i,{k^{\prime}}):k\in[K],{k^{\prime}}\in[K],{k^{\prime}}\neq k,i\in[n]\\}$, $\frac{\exp(\bm{h}_{k,i}\cdot\mathbf{w}_{k})}{\exp(\bm{h}_{k,i}\cdot\mathbf{w}_{{k^{\prime}}})}=\frac{C^{\prime}(K-1)}{1-C^{\prime}},$ which implies that $\bm{h}_{k,i}\cdot\mathbf{w}_{k}=\bm{h}_{k,i}\cdot\mathbf{w}_{{k^{\prime}}}+\log\left(\frac{C^{\prime}(K-1)}{1-C^{\prime}}\right).$ On the other hand, by the strict monotonicity of $g_{1}(x)+(K-1)g_{2}\left(\frac{1-x}{K-1}\right)$, the equality in (42) holds if and only if $\frac{1}{N}\sum_{i=1}^{n}\sum_{k=1}^{K}\mathbf{S}(\mathbf{z}_{k,i})(k)=\frac{1}{N}L(0)$. Thus Lemma 3 gives $\bar{\bm{h}}_{i}-\bm{h}_{k,i}=-\sqrt{\frac{E_{H}}{E_{W}}}\mathbf{w}_{k},\quad k\in[K],\quad i\in[n],$ and $\frac{1}{K}\sum_{k=1}^{K}\frac{1}{n}\sum_{i=1}^{n}\left\|\bm{h}_{k,i}\right\|^{2}=E_{H},\quad\frac{1}{K}\sum_{k=1}^{K}\left\|\mathbf{w}_{k}\right\|^{2}=E_{W},\quad\bar{\bm{h}}_{i}=\mathbf{0}_{p},~{}i\in[n],$ where $\bar{\bm{h}}_{i}:=\frac{1}{K}\sum_{k=1}^{K}\bm{h}_{k,i}$ with $i\in[n]$. Altogether, by Lemma 2, $\left(\mathbf{H},\mathbf{W}\right)$ satisfies (8), which establishes the uniqueness claim. We complete the proof of Theorem 4. ∎

###### Lemma 3.
For any feasible solution $\left(\mathbf{H},\mathbf{W}\right)$, we have $\sum_{i=1}^{n}\sum_{k=1}^{K}\mathbf{S}(\mathbf{W}\bm{h}_{k,i})(k)\leq L(0),$ (45) with $L$ defined in (40). Moreover, recalling the definition of $\mathcal{S}_{1}$ and $\mathcal{S}_{2}$ in (A) and (B), the equality in (45) holds if and only if $|\mathcal{S}_{1}|=0$, $\bar{\bm{h}}_{i}-\bm{h}_{k,i}=-\sqrt{\frac{E_{H}}{E_{W}}}\mathbf{w}_{k},\quad k\in[K],\quad i\in[n],$ and $\frac{1}{K}\sum_{k=1}^{K}\frac{1}{n}\sum_{i=1}^{n}\left\|\bm{h}_{k,i}\right\|^{2}=E_{H},\quad\frac{1}{K}\sum_{k=1}^{K}\left\|\mathbf{w}_{k}\right\|^{2}=E_{W},\quad\bar{\bm{h}}_{i}=\mathbf{0}_{p},~{}i\in[n],$ where $\bar{\bm{h}}_{i}:=\frac{1}{K}\sum_{k=1}^{K}\bm{h}_{k,i}$ with $i\in[n]$.

###### Proof of Lemma 3.

For any feasible solution $\left(\mathbf{H},\mathbf{W}\right)$, we separately consider $\mathcal{S}_{1}$ and $\mathcal{S}_{2}$ defined in (A) and (B), respectively. Let $t:=|\mathcal{S}_{1}|$. * For $i\in\mathcal{S}_{1}$, let $k\in[K]$ be any index such that $\frac{1}{K-1}\sum_{{k^{\prime}}\neq k}\mathbf{z}_{k,i}({k^{\prime}})-\mathbf{z}_{k,i}(k)\geq\log\left(\frac{1}{K-1}\right)$, where $\mathbf{z}_{k,i}:=\mathbf{W}\bm{h}_{k,i}$. By the monotonicity of $g_{0}(x)$, it follows from (38) that $\mathbf{S}(\mathbf{z}_{k,i})(k)\leq 1/2$.
Furthermore, for any other index ${k^{\prime}}\in[K]$ with ${k^{\prime}}\neq k$, using that $\frac{\exp(\mathbf{z}_{{k^{\prime}},i}({k^{\prime}}))}{\sum_{{k^{\prime\prime}}=1}^{K}\exp(\mathbf{z}_{{k^{\prime}},i}({k^{\prime\prime}}))}\leq 1$, we have $\sum_{i\in\mathcal{S}_{1}}\sum_{k=1}^{K}\mathbf{S}(\mathbf{z}_{k,i})(k)\leq t(1/2+K-1).$ (46) * For $i\in\mathcal{S}_{2}$, by the concavity of $g_{0}(x)$ when $x<\log\left(\frac{1}{K-1}\right)$, which follows from (39), we have, for $\mathcal{S}_{2}\neq\varnothing$, $\displaystyle\sum_{i\in\mathcal{S}_{2}}\sum_{k=1}^{K}\mathbf{S}(\mathbf{z}_{k,i})(k)$ (47) $\displaystyle\overset{(38)}{\leq}$ $\displaystyle\sum_{i\in\mathcal{S}_{2}}\sum_{k=1}^{K}\frac{1}{1+(K-1)\exp\left(\frac{1}{K-1}\sum_{{k^{\prime}}=1,~{}{k^{\prime}}\neq k}^{K}\mathbf{z}_{k,i}({k^{\prime}})-\mathbf{z}_{k,i}(k)\right)}$ $\displaystyle\leq$ $\displaystyle\frac{(n-t)K}{1+(K-1)\exp\left(\frac{1}{(n-t)K}\sum_{i\in\mathcal{S}_{2}}\sum_{k=1}^{K}\left(\frac{1}{K-1}\sum_{{k^{\prime}}=1,~{}{k^{\prime}}\neq k}^{K}\mathbf{z}_{k,i}({k^{\prime}})-\mathbf{z}_{k,i}(k)\right)\right)}.$ We can bound $\sum_{i\in\mathcal{S}_{2}}\sum_{k=1}^{K}\left(\frac{1}{K-1}\sum_{{k^{\prime}}=1,~{}{k^{\prime}}\neq k}^{K}\mathbf{z}_{k,i}({k^{\prime}})-\mathbf{z}_{k,i}(k)\right)$ using arguments similar to (21) and (23).
Specifically, recalling $\bar{\bm{h}}_{i}=\frac{1}{K}\sum_{k=1}^{K}\bm{h}_{k,i}$ for $i\in[n]$, we have $\displaystyle\sum_{i\in\mathcal{S}_{2}}\sum_{k=1}^{K}\left(\frac{1}{K-1}\sum_{{k^{\prime}}=1,~{}{k^{\prime}}\neq k}^{K}\mathbf{z}_{k,i}({k^{\prime}})-\mathbf{z}_{k,i}(k)\right)$ (48) $\displaystyle=$ $\displaystyle\frac{1}{K-1}\sum_{i\in\mathcal{S}_{2}}\left[\left(\sum_{k=1}^{K}\bm{h}_{k,i}\right)^{\top}\left(\sum_{k=1}^{K}\mathbf{w}_{k}\right)-K\sum_{k=1}^{K}\bm{h}_{k,i}^{\top}\mathbf{w}_{k}\right]$ $\displaystyle\overset{\eqref{eq:padd}}{\geq}$ $\displaystyle-\frac{K}{2(K-1)}\sum_{k=1}^{K}\sum_{i\in\mathcal{S}_{2}}\|\bar{\bm{h}}_{i}-\bm{h}_{k,i}\|^{2}/C^{\prime\prime}-\frac{C^{\prime\prime}K(n-t)}{2(K-1)}\sum_{k=1}^{K}\|\mathbf{w}_{k}\|^{2}$ $\displaystyle\overset{\eqref{eq:boundtheta}}{\geq}$ $\displaystyle-\frac{K}{2(K-1)}\sum_{k=1}^{K}\sum_{i\in\mathcal{S}_{2}}\|\bm{h}_{k,i}\|^{2}/C^{\prime\prime}-\frac{C^{\prime\prime}K(n-t)}{2(K-1)}\sum_{k=1}^{K}\|\mathbf{w}_{k}\|^{2}$ $\displaystyle\geq$ $\displaystyle-\frac{K}{2(K-1)}\sum_{k=1}^{K}\sum_{i=1}^{n}\|\bm{h}_{k,i}\|^{2}/C^{\prime\prime}-\frac{C^{\prime\prime}K(n-t)}{2(K-1)}\sum_{k=1}^{K}\|\mathbf{w}_{k}\|^{2}$ $\displaystyle\geq$ $\displaystyle-\frac{K^{2}}{(K-1)}\sqrt{E_{H}E_{W}(n-t)n},$ where the last inequality follows from the constraints of (7) and the choice $C^{\prime\prime}:=\sqrt{\frac{nE_{H}}{(n-t)E_{W}}}$. We combine the above two cases. When $t\in[0,n-1]$, by plugging (48) into (47), using the monotonicity of $g_{0}(x)$, and adding (46), we have $\displaystyle\sum_{k=1}^{K}\sum_{i=1}^{n}\mathbf{S}(\mathbf{z}_{k,i})(k)$ $\displaystyle\leq N-\left(\frac{1}{2}t+\frac{K}{1+\exp\left(\frac{K}{K-1}\sqrt{n/(n-t)}\sqrt{E_{H}E_{W}}-\log(K-1)\right)}(n-t)\right)$ $\displaystyle=L(t).$ (49) When $t=n$, it directly follows from (46) that $\sum_{k=1}^{K}\sum_{i=1}^{n}\mathbf{S}(\mathbf{z}_{k,i})(k)\leq N-\frac{n}{2}=L(n).$ Therefore, it suffices to show $L(t)\leq L(0)$ for all $t\in[0:n]$. 
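The claim $L(t)\leq L(0)$ can also be checked numerically. The sketch below uses hypothetical parameters ($K=4$, $n=10$, $E_{H}=E_{W}=5$, chosen so that the assumption $\sqrt{E_{H}E_{W}}>\frac{K-1}{K}\log\left(K^{2}\sqrt{E_{H}E_{W}}+(2K-1)(K-1)\right)$ holds) and transcribes $L(t)$ from (49):

```python
import math

# Hypothetical parameters satisfying the paper's assumption on sqrt(E_H E_W).
K, n = 4, 10
E_H = E_W = 5.0
s = math.sqrt(E_H * E_W)
assert s > (K - 1) / K * math.log(K**2 * s + (2 * K - 1) * (K - 1))

N = K * n

def L(t):
    # Transcription of L(t) from (49); L(n) = N - n/2 is the t = n endpoint.
    if t == n:
        return N - n / 2
    arg = K / (K - 1) * math.sqrt(n / (n - t)) * s - math.log(K - 1)
    return N - (t / 2 + (n - t) * K / (1 + math.exp(arg)))

vals = [L(t) for t in range(n + 1)]
assert all(vals[t] > vals[t + 1] for t in range(n))  # L is strictly decreasing
assert vals[0] == max(vals)                          # so t = 0 attains the maximum
```

This is only a sanity check for one parameter choice, not a replacement for the monotonicity argument that follows.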
We first consider the case when $t\in[0:n-1]$. We show that $L(t)$ is monotonically decreasing. Indeed, define $q(t):=\frac{K}{1+\exp\left(\frac{K}{K-1}\sqrt{n/(n-t)}\sqrt{E_{H}E_{W}}-\log(K-1)\right)}.$ We have $\displaystyle q^{\prime}(t)$ $\displaystyle=\frac{-\frac{1}{2}K\exp\left(\frac{K}{K-1}\sqrt{n/(n-t)}\sqrt{E_{H}E_{W}}-\log(K-1)\right)\frac{K}{K-1}\sqrt{E_{H}E_{W}n}(n-t)^{-3/2}}{\left[1+\exp\left(\frac{K}{K-1}\sqrt{n/(n-t)}\sqrt{E_{H}E_{W}}-\log(K-1)\right)\right]^{2}}$ $\displaystyle\geq\frac{-\frac{1}{2}\frac{K^{2}}{K-1}\sqrt{E_{H}E_{W}n}(n-t)^{-3/2}}{1+\exp\left(\frac{K}{K-1}\sqrt{n/(n-t)}\sqrt{E_{H}E_{W}}-\log(K-1)\right)},$ which implies that $\displaystyle L^{\prime}(t)=-\left[\frac{1}{2}-q(t)+q^{\prime}(t)(n-t)\right]$ $\displaystyle\leq$ $\displaystyle\frac{\frac{1}{2}\frac{K^{2}}{K-1}\sqrt{E_{H}E_{W}n}(n-t)^{-1/2}+K}{1+\exp\left(\frac{K}{K-1}\sqrt{n/(n-t)}\sqrt{E_{H}E_{W}}-\log(K-1)\right)}-\frac{1}{2}$ $\displaystyle=$ $\displaystyle\frac{K\left(\frac{K}{K-1}\sqrt{n/(n-t)}\sqrt{E_{H}E_{W}}\right)+2K-1-\exp\left(\frac{K}{K-1}\sqrt{n/(n-t)}\sqrt{E_{H}E_{W}}-\log(K-1)\right)}{2\left[1+\exp\left(\frac{K}{K-1}\sqrt{n/(n-t)}\sqrt{E_{H}E_{W}}-\log(K-1)\right)\right]}.$ Consider the function $f:\left[\frac{K}{K-1}\sqrt{E_{H}E_{W}},\frac{K}{K-1}\sqrt{E_{H}E_{W}n}\right]\to\mathbb{R}$ defined as: $f(x)=Kx+2K-1-\exp(x-\log(K-1)).$ We have $f^{\prime}(x)=K-\exp(x)/(K-1)<0$ when $x\in\left[\frac{K}{K-1}\sqrt{E_{H}E_{W}},\frac{K}{K-1}\sqrt{E_{H}E_{W}n}\right]$, where we use the assumption that $\sqrt{E_{H}E_{W}}>\frac{K-1}{K}\log\left(K^{2}\sqrt{E_{H}E_{W}}+(2K-1)(K-1)\right)\geq\frac{K-1}{K}\log\left(K(K-1)\right).$ Therefore, for all $x\in\left[\frac{K}{K-1}\sqrt{E_{H}E_{W}},\frac{K}{K-1}\sqrt{E_{H}E_{W}n}\right]$, we have $f(x)\leq f\left(\frac{K}{K-1}\sqrt{E_{H}E_{W}}\right)=\frac{K^{2}}{K-1}\sqrt{E_{H}E_{W}}+2K-1-\frac{1}{K-1}\exp\left(\frac{K}{K-1}\sqrt{E_{H}E_{W}}\right)\overset{a}{<}0,$ where $\overset{a}{<}$ uses our assumption again. 
We obtain $L^{\prime}(t)<0$ for all $t\in[0:n-1]$. So $L(t)$ reaches the maximum if and only if $t=0$ when $t\in[0:n-1]$. Moreover, under our assumption, one can verify that $L(n)<L(0)$. We obtain (45) from (49) with $t=0$. When $t=0$, the equality in the first inequality of (48) holds if and only if: $\bar{\bm{h}}_{i}-\bm{h}_{k,i}=-\sqrt{\frac{E_{H}}{E_{W}}}\mathbf{w}_{k},\quad k\in[K],\quad i\in[n].$ The equality in the second and third inequalities of (48) holds if and only if: $\frac{1}{K}\sum_{k=1}^{K}\frac{1}{n}\sum_{i=1}^{n}\left\|\bm{h}_{k,i}\right\|^{2}=E_{H},\quad\frac{1}{K}\sum_{k=1}^{K}\left\|\mathbf{w}_{k}\right\|^{2}=E_{W},\quad\bar{\bm{h}}_{i}=\mathbf{0}_{p},~{}i\in[n].$ We obtain Lemma 3. ∎ ### A.2 Imbalanced Case #### A.2.1 Proofs of Lemma 1 and Proposition 1 ###### Proof of Lemma 1. For any feasible solution $\left(\bm{H},\mathbf{W}\right)$ for the original program (7), we define $\mathbf{h}_{k}:=\frac{1}{n_{k}}\sum_{i=1}^{n_{k}}\mathbf{h}_{k,i},~{}k\in[K],\quad\text{and}\quad\mathbf{X}:=\left[\bm{h}_{1},\bm{h}_{2},\dots,\bm{h}_{K},\mathbf{W}^{\top}\right]^{\top}\left[\bm{h}_{1},\bm{h}_{2},\dots,\bm{h}_{K},\mathbf{W}^{\top}\right].$ Clearly, $\mathbf{X}\succeq 0$. For the other two constraints of (14), we have $\frac{1}{K}\sum_{k=1}^{K}\mathbf{X}(k,k)\\\ =\frac{1}{K}\sum_{k=1}^{K}\|\mathbf{h}_{k}\|^{2}\overset{a}{\leq}\frac{1}{K}\sum_{k=1}^{K}\frac{1}{n_{k}}\sum_{i=1}^{n_{k}}\left\|\bm{h}_{k,i}\right\|^{2}\\\ \overset{b}{\leq}E_{H},$ and $\frac{1}{K}\sum_{k=K+1}^{2K}\mathbf{X}(k,k)=\frac{1}{K}\sum_{k=1}^{K}\|\mathbf{w}_{k}\|^{2}\overset{c}{\leq}E_{W},$ where $\overset{a}{\leq}$ applies Jensen’s inequality and $\overset{b}{\leq}$ and $\overset{c}{\leq}$ use that $\left(\bm{H},\mathbf{W}\right)$ is a feasible solution. So $\mathbf{X}$ is a feasible solution for the convex program (14). 
Letting $L_{0}$ be the global minimum of (14), for any feasible solution $\left(\bm{H},\mathbf{W}\right)$, we obtain $\displaystyle\frac{1}{N}\sum_{k=1}^{K}\sum_{i=1}^{n_{k}}\mathcal{L}(\mathbf{W}\bm{h}_{k,i},\mathbf{y}_{k})$ $\displaystyle=\sum_{k=1}^{K}\frac{n_{k}}{N}\left[\frac{1}{n_{k}}\sum_{i=1}^{n_{k}}\mathcal{L}(\mathbf{W}\bm{h}_{k,i},\mathbf{y}_{k})\right]$ $\displaystyle\overset{a}{\geq}\sum_{k=1}^{K}\frac{n_{k}}{N}\mathcal{L}(\mathbf{W}\bm{h}_{k},\mathbf{y}_{k})=\sum_{k=1}^{K}\frac{n_{k}}{N}\mathcal{L}(\mathbf{z}_{k},\mathbf{y}_{k})\geq L_{0},$ (50) where $\overset{a}{\geq}$ uses the fact that $\mathcal{L}$ is convex in its first argument, so that $\mathcal{L}(\mathbf{W}\mathbf{h},\mathbf{y}_{k})$ is convex in $\mathbf{h}$ given $\mathbf{W}$ and $k\in[K]$. On the other hand, considering the solution $\left(\bm{H}^{\star},\mathbf{W}^{\star}\right)$ defined in (15) with $\mathbf{X}^{\star}$ being a minimizer of (14), we have $\left[\bm{h}_{1}^{\star},\bm{h}_{2}^{\star},\dots,\bm{h}_{K}^{\star},(\mathbf{W}^{\star})^{\top}\right]^{\top}\left[\bm{h}_{1}^{\star},\bm{h}_{2}^{\star},\dots,\bm{h}_{K}^{\star},(\mathbf{W}^{\star})^{\top}\right]=\mathbf{X}^{\star}$ ($p\geq 2K$ guarantees the existence of $\left[\bm{h}_{1}^{\star},\bm{h}_{2}^{\star},\dots,\bm{h}_{K}^{\star},(\mathbf{W}^{\star})^{\top}\right]$). We can verify that $\left(\bm{H}^{\star},\mathbf{W}^{\star}\right)$ is a feasible solution for (7) and have $\frac{1}{N}\sum_{k=1}^{K}\sum_{i=1}^{n_{k}}\mathcal{L}(\mathbf{W}^{\star}\bm{h}_{k,i}^{\star},\mathbf{y}_{k})=\sum_{k=1}^{K}\frac{n_{k}}{N}\mathcal{L}(\mathbf{z}_{k}^{\star},\mathbf{y}_{k})=L_{0},$ (51) where $\mathbf{z}_{k}^{\star}=\left[\mathbf{X}^{\star}(k,1+K),\mathbf{X}^{\star}(k,2+K),\dots,\mathbf{X}^{\star}(k,2K)~{}\right]^{\top}$ for $k\in[K]$. Combining (50) and (51), we conclude that $L_{0}$ is the global minimum of (7) and $(\mathbf{H}^{\star},\mathbf{W}^{\star})$ is a minimizer. 
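The Jensen step $\overset{a}{\geq}$ in (50) can be illustrated numerically. The sketch below (with hypothetical random features, weights, and class sizes) verifies that the cross-entropy loss evaluated at each class-mean feature never exceeds the class-average loss:

```python
import numpy as np

rng = np.random.default_rng(0)
K, p = 4, 6
n_k = [8, 8, 3, 3]  # hypothetical imbalanced class sizes

W = rng.normal(size=(K, p))
H = [rng.normal(size=(nk, p)) for nk in n_k]  # random per-class features

def ce(z, k):
    # Cross-entropy -log softmax(z)[k], computed stably.
    z = z - z.max()
    return -(z[k] - np.log(np.exp(z).sum()))

# For each class: loss at the mean feature <= mean of the per-sample losses.
gaps = []
for k in range(K):
    avg_loss = np.mean([ce(W @ h, k) for h in H[k]])
    loss_at_mean = ce(W @ H[k].mean(axis=0), k)
    gaps.append(avg_loss - loss_at_mean)

assert min(gaps) >= -1e-12
```

This holds because the loss is convex in the logits and the logits are linear in $\mathbf{h}$ for fixed $\mathbf{W}$.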
Suppose there is a minimizer $\left(\bm{H}^{\prime},\mathbf{W}^{\prime}\right)$ that cannot be written as (15). Let $\mathbf{h}_{k}^{\prime}=\frac{1}{n_{k}}\sum_{i=1}^{n_{k}}\mathbf{h}_{k,i}^{\prime},~{}k\in[K],\quad\text{and}\quad\mathbf{X}^{\prime}=\left[\bm{h}_{1}^{\prime},\bm{h}_{2}^{\prime},\dots,\bm{h}_{K}^{\prime},(\mathbf{W}^{\prime})^{\top}\right]^{\top}\left[\bm{h}_{1}^{\prime},\bm{h}_{2}^{\prime},\dots,\bm{h}_{K}^{\prime},(\mathbf{W}^{\prime})^{\top}\right].$ Then (50) implies that $\mathbf{X}^{\prime}$ is a minimizer of (14). Since $\left(\bm{H}^{\prime},\mathbf{W}^{\prime}\right)$ cannot be written as (15) with $\mathbf{X}^{\star}=\mathbf{X}^{\prime}$, there exist ${k^{\prime}}\in[K]$ and $i,j\in[n_{k^{\prime}}]$ with $i\neq j$ such that $\mathbf{h}_{{k^{\prime}},i}^{\prime}\neq\mathbf{h}_{{k^{\prime}},j}^{\prime}$. We have $\displaystyle\frac{1}{K}\sum_{k=1}^{K}\mathbf{X}^{\prime}(k,k)=\frac{1}{K}\sum_{k=1}^{K}\|\mathbf{h}_{k}^{\prime}\|^{2}$ $\displaystyle=$ $\displaystyle\frac{1}{K}\sum_{k=1}^{K}\frac{1}{n_{k}}\sum_{i=1}^{n_{k}}\left\|\bm{h}_{k,i}^{\prime}\right\|^{2}-\frac{1}{K}\sum_{k=1}^{K}\frac{1}{n_{k}}\sum_{i=1}^{n_{k}}\|\mathbf{h}_{k,i}^{\prime}-\mathbf{h}_{k}^{\prime}\|^{2}$ $\displaystyle\leq$ $\displaystyle\frac{1}{K}\sum_{k=1}^{K}\frac{1}{n_{k}}\sum_{i=1}^{n_{k}}\left\|\bm{h}_{k,i}^{\prime}\right\|^{2}-\frac{1}{K}\frac{1}{n_{k^{\prime}}}(\|\mathbf{h}_{{k^{\prime}},i}^{\prime}-\mathbf{h}_{{k^{\prime}}}^{\prime}\|^{2}+\|\mathbf{h}_{{k^{\prime}},j}^{\prime}-\mathbf{h}_{{k^{\prime}}}^{\prime}\|^{2})$ $\displaystyle\leq$ $\displaystyle\frac{1}{K}\sum_{k=1}^{K}\frac{1}{n_{k}}\sum_{i=1}^{n_{k}}\left\|\bm{h}_{k,i}^{\prime}\right\|^{2}-\frac{1}{K}\frac{1}{2n_{k^{\prime}}}\|\mathbf{h}_{{k^{\prime}},i}^{\prime}-\mathbf{h}_{{k^{\prime}},j}^{\prime}\|^{2}$ $\displaystyle<$ $\displaystyle E_{H}.$ By contraposition, if all $\mathbf{X}^{\star}$ satisfy that $\frac{1}{K}\sum_{k=1}^{K}\mathbf{X}^{\star}(k,k)=E_{H}$, then all the solutions of (7) are in the form of (15). We complete the proof. 
∎ Proposition 1 can be obtained by the same argument as Lemma 1. We omit the proof here. #### A.2.2 Proof of Theorem 5 To prove Theorem 5, we first study a limiting case in which classification is learned only for a subset of the classes. Specifically, we solve the optimization program: $\displaystyle\min_{\mathbf{H},\mathbf{W}}$ $\displaystyle\frac{1}{K_{A}n_{A}}\sum_{k=1}^{K_{A}}\sum_{i=1}^{n_{A}}\mathcal{L}(\mathbf{W}\bm{h}_{k,i},\mathbf{y}_{k})$ (52) $\displaystyle\mathrm{s.t.}$ $\displaystyle\frac{1}{K}\sum_{k=1}^{K}\frac{1}{n_{k}}\sum_{i=1}^{n_{k}}\left\|\bm{h}_{k,i}\right\|^{2}\leq E_{H},$ $\displaystyle\frac{1}{K}\sum_{k=1}^{K}\left\|\mathbf{w}_{k}\right\|^{2}\leq E_{W},$ where $n_{1}=n_{2}=\dots=n_{K_{A}}=n_{A}$ and $n_{K_{A}+1}=n_{K_{A}+2}=\dots=n_{K}=n_{B}$. Lemma 4 characterizes useful properties of the minimizer of (52). ###### Lemma 4. Let $(\bm{H},\mathbf{W})$ be a minimizer of (52). We have $\mathbf{h}_{k,i}=\mathbf{0}_{p}$ for all $k\in[K_{A}+1:K]$ and $i\in[n_{B}]$. Define $L_{0}$ as the global minimum of (52), i.e., $L_{0}:=\frac{1}{K_{A}n_{A}}\sum_{k=1}^{K_{A}}\sum_{i=1}^{n_{A}}\mathcal{L}(\mathbf{W}\bm{h}_{k,i},\mathbf{y}_{k}).$ Then $L_{0}$ only depends on $K_{A}$, $K_{B}$, $E_{H}$, and $E_{W}$. Moreover, for any feasible solution $\left(\bm{H}^{\prime},\mathbf{W}^{\prime}\right)$, if there exist $k,{k^{\prime}}\in[K_{A}+1:K]$ such that $\left\|\mathbf{w}_{k}-\mathbf{w}_{k^{\prime}}\right\|=\varepsilon>0$, we have $\frac{1}{K_{A}n_{A}}\sum_{k=1}^{K_{A}}\sum_{i=1}^{n_{A}}\mathcal{L}\left(\mathbf{W}^{\prime}\bm{h}_{k,i}^{\prime},\mathbf{y}_{k}\right)\geq L_{0}+\varepsilon^{\prime},$ where $\varepsilon^{\prime}>0$ depends on $\varepsilon$, $K_{A}$, $K_{B}$, $E_{H}$, and $E_{W}$. Now we are ready to prove Theorem 5. The proof is by contradiction. ###### Proof of Theorem 5. Consider sequences $n_{A}^{\ell}$ and $n_{B}^{\ell}$ with $R^{\ell}:=n_{A}^{\ell}/n^{\ell}_{B}$ for $\ell=1,2,\dots$. By assumption, $R^{\ell}\to\infty$. 
For each optimization program indexed by $\ell\in\mathbb{N}_{+}$, we define $(\bm{H}^{\ell,\star},\mathbf{W}^{\ell,\star})$ as a minimizer and separate the objective function into two parts. That is, we introduce $\mathcal{L}^{\ell}\left(\mathbf{H}^{\ell},\mathbf{W}^{\ell}\right):=\frac{K_{A}n_{A}^{\ell}}{K_{A}n_{A}^{\ell}+K_{B}n_{B}^{\ell}}\mathcal{L}^{\ell}_{A}\left(\mathbf{H}^{\ell},\mathbf{W}^{\ell}\right)+\frac{K_{B}n_{B}^{\ell}}{K_{A}n_{A}^{\ell}+K_{B}n_{B}^{\ell}}\mathcal{L}^{\ell}_{B}\left(\mathbf{H}^{\ell},\mathbf{W}^{\ell}\right),$ with $\mathcal{L}^{\ell}_{A}\left(\mathbf{H}^{\ell},\mathbf{W}^{\ell}\right):=\frac{1}{K_{A}n_{A}^{\ell}}\sum_{k=1}^{K_{A}}\sum_{i=1}^{n_{A}^{\ell}}\mathcal{L}\left(\mathbf{W}^{\ell}\bm{h}_{k,i}^{\ell},\mathbf{y}_{k}\right)$ and $\mathcal{L}^{\ell}_{B}\left(\mathbf{H}^{\ell},\mathbf{W}^{\ell}\right):=\frac{1}{K_{B}n_{B}^{\ell}}\sum_{k=K_{A}+1}^{K}\sum_{i=1}^{n_{B}^{\ell}}\mathcal{L}\left(\mathbf{W}^{\ell}\bm{h}_{k,i}^{\ell},\mathbf{y}_{k}\right).$ We define $\left(\bm{H}^{\ell,A},\mathbf{W}^{\ell,A}\right)$ as a minimizer of the optimization program: $\displaystyle\min_{\bm{H}^{\ell},\mathbf{W}^{\ell}}$ $\displaystyle\mathcal{L}^{\ell}_{A}\left(\mathbf{H}^{\ell},\mathbf{W}^{\ell}\right)$ (53) $\displaystyle\mathrm{s.t.}$ $\displaystyle\frac{1}{K}\sum_{k=1}^{K}\left\|\mathbf{w}_{k}^{\ell}\right\|^{2}\leq E_{W},$ $\displaystyle\frac{1}{K}\sum_{k=1}^{K_{A}}\frac{1}{n_{A}^{\ell}}\sum_{i=1}^{n_{A}^{\ell}}\left\|\bm{h}_{k,i}\right\|^{2}+\frac{1}{K}\sum_{k=K_{A}+1}^{K}\frac{1}{n_{B}^{\ell}}\sum_{i=1}^{n_{B}^{\ell}}\left\|\bm{h}_{k,i}\right\|^{2}\leq E_{H},$ and $\left(\bm{H}^{\ell,B},\mathbf{W}^{\ell,B}\right)$ as a minimizer of the optimization program: $\displaystyle\min_{\bm{H}^{\ell},\mathbf{W}^{\ell}}$ $\displaystyle\mathcal{L}^{\ell}_{B}\left(\mathbf{H}^{\ell},\mathbf{W}^{\ell}\right)$ (54) $\displaystyle\mathrm{s.t.}$ $\displaystyle\frac{1}{K}\sum_{k=1}^{K}\left\|\mathbf{w}_{k}^{\ell}\right\|^{2}\leq E_{W},$ 
$\displaystyle\frac{1}{K}\sum_{k=1}^{K_{A}}\frac{1}{n_{A}^{\ell}}\sum_{i=1}^{n_{A}^{\ell}}\left\|\bm{h}_{k,i}\right\|^{2}+\frac{1}{K}\sum_{k=K_{A}+1}^{K}\frac{1}{n_{B}^{\ell}}\sum_{i=1}^{n_{B}^{\ell}}\left\|\bm{h}_{k,i}\right\|^{2}\leq E_{H}.$ Note that Programs (53) and (54) and their minimizers have been studied in Lemma 4. We define: $L_{A}:=\mathcal{L}^{\ell}_{A}\left(\mathbf{H}^{\ell,A},\mathbf{W}^{\ell,A}\right)\quad\text{and}\quad L_{B}:=\mathcal{L}^{\ell}_{B}\left(\mathbf{H}^{\ell,B},\mathbf{W}^{\ell,B}\right).$ Then Lemma 4 implies that $L_{A}$ and $L_{B}$ only depend on $K_{A}$, $K_{B}$, $E_{H}$, and $E_{W}$, and are independent of $\ell$. Moreover, since $\mathbf{h}_{k,i}^{\ell,A}=\mathbf{0}_{p}$ for all $k\in[K_{A}+1:K]$ and $i\in[n_{B}]$, we have $\mathcal{L}^{\ell}_{B}\left(\mathbf{H}^{\ell,A},\mathbf{W}^{\ell,A}\right)=\log(K).$ (55) Now we prove Theorem 5 by contradiction. Suppose there exists a pair $(k,{k^{\prime}})$ such that $\lim_{\ell\to\infty}\mathbf{w}^{\ell,\star}_{k}-\mathbf{w}^{\ell,\star}_{k^{\prime}}\neq\mathbf{0}_{p}$. Then there exist $\varepsilon>0$, a subsequence $\left\\{\left(\mathbf{H}^{a_{\ell},\star},\mathbf{W}^{a_{\ell},\star}\right)\right\\}_{\ell=1}^{\infty}$, and an index $\ell_{0}$ such that, for all $\ell\geq\ell_{0}$, we have $\left\|\mathbf{w}^{a_{\ell},\star}_{k}-\mathbf{w}^{a_{\ell},\star}_{k^{\prime}}\right\|\geq\varepsilon$. We now derive a contradiction by estimating the objective function value at $\left(\mathbf{H}^{a_{\ell},\star},\mathbf{W}^{a_{\ell},\star}\right)$. 
In fact, because $\left(\mathbf{H}^{a_{\ell},\star},\mathbf{W}^{a_{\ell},\star}\right)$ is a minimizer of $\mathcal{L}^{a_{\ell}}$, we have $\displaystyle\mathcal{L}^{a_{\ell}}\left(\mathbf{H}^{a_{\ell},\star},\mathbf{W}^{a_{\ell},\star}\right)\leq\mathcal{L}^{a_{\ell}}\left(\mathbf{H}^{a_{\ell},A},\mathbf{W}^{a_{\ell},A}\right)$ $\displaystyle\overset{\eqref{eq:lblog}}{=}\frac{K_{A}n_{A}^{a_{\ell}}}{K_{A}n_{A}^{a_{\ell}}+K_{B}n_{B}^{a_{\ell}}}L_{A}+\frac{K_{B}n_{B}^{a_{\ell}}}{K_{A}n_{A}^{a_{\ell}}+K_{B}n_{B}^{a_{\ell}}}\log(K)$ $\displaystyle=L_{A}+\frac{1}{K_{R}R^{a_{\ell}}+1}\left(\log(K)-L_{A}\right)\overset{\ell\to\infty}{\to}L_{A},$ (56) where we define $K_{R}:=K_{A}/K_{B}$ and use $R^{\ell}=n_{A}^{\ell}/n_{B}^{\ell}$. However, when $\ell>\ell_{0}$, because $\left\|\mathbf{w}^{a_{\ell},\star}_{k}-\mathbf{w}^{a_{\ell},\star}_{k^{\prime}}\right\|\geq\varepsilon>0$, Lemma 4 implies that $\mathcal{L}^{a_{\ell}}_{A}\left(\mathbf{H}^{a_{\ell},\star},\mathbf{W}^{a_{\ell},\star}\right)\geq L_{A}+\varepsilon_{2},$ where $\varepsilon_{2}>0$ only depends on $\varepsilon$, $K_{A}$, $K_{B}$, $E_{H}$, and $E_{W}$, and is independent of $\ell$. 
We obtain $\displaystyle\mathcal{L}^{a_{\ell}}\left(\mathbf{H}^{a_{\ell},\star},\mathbf{W}^{a_{\ell},\star}\right)$ $\displaystyle=\frac{K_{A}n_{A}^{a_{\ell}}}{K_{A}n_{A}^{a_{\ell}}+K_{B}n_{B}^{a_{\ell}}}\mathcal{L}^{a_{\ell}}_{A}\left(\mathbf{H}^{a_{\ell},\star},\mathbf{W}^{a_{\ell},\star}\right)+\frac{K_{B}n_{B}^{a_{\ell}}}{K_{A}n_{A}^{a_{\ell}}+K_{B}n_{B}^{a_{\ell}}}\mathcal{L}^{a_{\ell}}_{B}\left(\mathbf{H}^{a_{\ell},\star},\mathbf{W}^{a_{\ell},\star}\right)$ $\displaystyle\overset{a}{\geq}\frac{K_{A}n_{A}^{a_{\ell}}}{K_{A}n_{A}^{a_{\ell}}+K_{B}n_{B}^{a_{\ell}}}\mathcal{L}^{a_{\ell}}_{A}\left(\mathbf{H}^{a_{\ell},\star},\mathbf{W}^{a_{\ell},\star}\right)+\frac{K_{B}n_{B}^{a_{\ell}}}{K_{A}n_{A}^{a_{\ell}}+K_{B}n_{B}^{a_{\ell}}}\mathcal{L}^{a_{\ell}}_{B}\left(\mathbf{H}^{a_{\ell},B},\mathbf{W}^{a_{\ell},B}\right)$ $\displaystyle\geq\frac{K_{A}n_{A}^{a_{\ell}}}{K_{A}n_{A}^{a_{\ell}}+K_{B}n_{B}^{a_{\ell}}}(L_{A}+\varepsilon_{2})+\frac{K_{B}n_{B}^{a_{\ell}}}{K_{A}n_{A}^{a_{\ell}}+K_{B}n_{B}^{a_{\ell}}}L_{B}$ $\displaystyle=L_{A}+\varepsilon_{2}+\frac{1}{K_{R}R^{a_{\ell}}+1}(L_{B}-L_{A}-\varepsilon_{2})\overset{\ell\to\infty}{\to}L_{A}+\varepsilon_{2},$ (57) where $\overset{a}{\geq}$ uses that $\left(\mathbf{H}^{a_{\ell},B},\mathbf{W}^{a_{\ell},B}\right)$ is the minimizer of (54), and the second inequality uses $\mathcal{L}^{a_{\ell}}_{A}\left(\mathbf{H}^{a_{\ell},\star},\mathbf{W}^{a_{\ell},\star}\right)\geq L_{A}+\varepsilon_{2}$ for $\ell>\ell_{0}$ together with $\mathcal{L}^{a_{\ell}}_{B}\left(\mathbf{H}^{a_{\ell},B},\mathbf{W}^{a_{\ell},B}\right)=L_{B}$. Comparing (56) with (57) yields a contradiction, which proves Theorem 5. ∎ ###### Proof of Lemma 4. For any constants $C_{a}>0$, $C_{b}>0$, and $C_{c}>0$, define $C_{a}^{\prime}:=\frac{C_{a}}{C_{a}+(K_{A}-1)C_{b}+K_{B}C_{c}}\in(0,1)$, $C_{b}^{\prime}:=\frac{C_{b}}{C_{a}+(K_{A}-1)C_{b}+K_{B}C_{c}}\in(0,1)$, and $C_{c}^{\prime}:=\frac{C_{c}}{C_{a}+(K_{A}-1)C_{b}+K_{B}C_{c}}\in(0,1)$, $C_{d}:=-C_{a}^{\prime}\log(C_{a}^{\prime})-C_{b}^{\prime}(K_{A}-1)\log(C_{b}^{\prime})-K_{B}C_{c}^{\prime}\log(C_{c}^{\prime})$, $C_{e}:=\frac{K_{A}C_{b}}{K_{A}C_{b}+K_{B}C_{c}}\in(0,1)$, $C_{f}:=\frac{K_{B}C_{c}}{K_{A}C_{b}+K_{B}C_{c}}\in(0,1)$, and $C_{g}:=\frac{K_{A}C_{b}+K_{B}C_{c}}{C_{a}+(K_{A}-1)C_{b}+K_{B}C_{c}}>0$. 
Using a similar argument to that of Theorem 1, we show in Lemma 5 (stated at the end of this proof) that, for any feasible solution $(\bm{H},\mathbf{W})$ of (52), the objective value can be bounded from below by: $\displaystyle\frac{1}{K_{A}n_{A}}\sum_{k=1}^{K_{A}}\sum_{i=1}^{n_{A}}\mathcal{L}(\mathbf{W}\bm{h}_{k,i},\mathbf{y}_{k})$ (58) $\displaystyle\overset{a}{\geq}$ $\displaystyle-\frac{C_{g}}{K_{A}}\sqrt{KE_{H}}\sqrt{\sum_{k=1}^{K_{A}}\left\|C_{e}\mathbf{w}_{A}+C_{f}\mathbf{w}_{B}-\mathbf{w}_{k}\right\|^{2}}+C_{d}$ $\displaystyle\overset{b}{\geq}$ $\displaystyle-\frac{C_{g}}{K_{A}}\sqrt{KE_{H}}\sqrt{KE_{W}-K_{A}\left(1/K_{R}-C_{f}^{2}-\frac{C_{f}^{4}}{C_{e}(2-C_{e})}\right)\|\mathbf{w}_{B}\|^{2}-\sum_{k=K_{A}+1}^{K}\left\|\mathbf{w}_{k}-\mathbf{w}_{B}\right\|^{2}}+C_{d},$ where $\mathbf{w}_{A}:=\frac{1}{K_{A}}\sum_{k=1}^{K_{A}}\mathbf{w}_{k}$, $\mathbf{w}_{B}:=\frac{1}{K_{B}}\sum_{k=K_{A}+1}^{K}\mathbf{w}_{k}$, and $K_{R}:=\frac{K_{A}}{K_{B}}$. Moreover, the equality in $\overset{a}{\geq}$ holds only if $\mathbf{h}_{k,i}=\mathbf{0}_{p}$ for all $k\in[K_{A}+1:K]$ and $i\in[n_{B}]$. Although $C_{a}$, $C_{b}$, and $C_{c}$ can be arbitrary positive numbers, we must pick them carefully to exactly reach the global minimum of (52). In the following, we separately consider three cases according to the values of $K_{A}$, $K_{B}$, and $E_{H}E_{W}$. 1. (i) Consider the case when $K_{A}=1$. We pick $C_{a}:=\exp\left(\sqrt{K_{B}(1+K_{B})E_{H}E_{W}}\right)$, $C_{b}:=1$, and $C_{c}:=\exp\left(-\sqrt{(1+K_{B})E_{H}E_{W}/K_{B}}\right)$. 
Then from $\overset{a}{\geq}$ in (58), we have $\displaystyle\frac{1}{K_{A}n_{A}}\sum_{k=1}^{K_{A}}\sum_{i=1}^{n_{A}}\mathcal{L}(\mathbf{W}\bm{h}_{k,i},\mathbf{y}_{k})$ $\displaystyle\overset{a}{\geq}$ $\displaystyle- C_{g}C_{f}\sqrt{KE_{H}}\sqrt{\left\|\mathbf{w}_{1}-\mathbf{w}_{B}\right\|^{2}}+C_{d}$ $\displaystyle=$ $\displaystyle- C_{g}C_{f}\sqrt{KE_{H}}\sqrt{\|\mathbf{w}_{1}\|^{2}-2\mathbf{w}_{1}^{\top}\mathbf{w}_{B}+\|\mathbf{w}_{B}\|^{2}}+C_{d}$ $\displaystyle\overset{b}{\geq}$ $\displaystyle- C_{g}C_{f}\sqrt{KE_{H}}\sqrt{(1+1/K_{B})(\|\mathbf{w}_{1}\|^{2}+K_{B}\|\mathbf{w}_{B}\|^{2})}+C_{d}$ $\displaystyle\overset{c}{\geq}$ $\displaystyle- C_{g}C_{f}\sqrt{KE_{H}}\sqrt{(1+1/K_{B})\left(KE_{W}-\sum_{k=2}^{K}\|\mathbf{w}_{k}-\mathbf{w}_{B}\|^{2}\right)}+C_{d}$ $\displaystyle\geq$ $\displaystyle- C_{g}C_{f}\sqrt{KE_{H}}\sqrt{(1+1/K_{B})KE_{W}}+C_{d}:=L_{1},$ (59) where $\overset{a}{\geq}$ uses $C_{e}+C_{f}=1$, $\overset{b}{\geq}$ follows from Young’s inequality, i.e., $-2\mathbf{w}_{1}^{\top}\mathbf{w}_{B}\leq(1/K_{B})\|\mathbf{w}_{1}\|^{2}+K_{B}\|\mathbf{w}_{B}\|^{2}$, and $\overset{c}{\geq}$ follows from $\sum_{k=2}^{K}\|\mathbf{w}_{k}\|^{2}=K_{B}\|\mathbf{w}_{B}\|^{2}+\sum_{k=2}^{K}\|\mathbf{w}_{k}-\mathbf{w}_{B}\|^{2}$ and the constraint that $\sum_{k=1}^{K}\|\mathbf{w}_{k}\|^{2}\leq KE_{W}$. On the other hand, when $(\mathbf{H},\mathbf{W})$ satisfies that $\displaystyle\begin{aligned} \mathbf{w}_{1}&=\sqrt{K_{B}E_{W}}\mathbf{u},\quad\mathbf{w}_{k}=-\sqrt{E_{W}/K_{B}}\mathbf{u},~{}k\in[2:K],\\\ \bm{h}_{1,i}=&\sqrt{(1+K_{B})E_{H}}\mathbf{u},~{}i\in[n_{A}],\quad\quad\bm{h}_{k,i}=\mathbf{0}_{p},~{}k\in[2:K],~{}i\in[n_{B}],\\\ \end{aligned}$ where $\mathbf{u}$ is any unit vector, $(\mathbf{H},\mathbf{W})$ attains equality in (59). So $L_{1}$ is the global minimum of (52). Moreover, $L_{1}$ is achieved only if the equality in $\overset{a}{\geq}$ in (58) holds. 
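The case-(i) construction can be verified numerically. The sketch below (hypothetical sizes $K_{B}=3$, $E_{H}=E_{W}=1$; the vector magnitudes follow the stated minimizer, with the energy constraints active) confirms that its cross-entropy loss equals $L_{1}$ computed from the constants:

```python
import numpy as np

# Hypothetical sizes and energies for the K_A = 1 case.
K_B = 3
K = 1 + K_B
E_H = E_W = 1.0

# Constants from the proof, with C_a, C_b, C_c as picked in case (i).
C_a = np.exp(np.sqrt(K_B * (1 + K_B) * E_H * E_W))
C_b = 1.0
C_c = np.exp(-np.sqrt((1 + K_B) * E_H * E_W / K_B))
D = C_a + K_B * C_c  # the (K_A - 1) C_b term vanishes since K_A = 1
C_ap, C_cp = C_a / D, C_c / D
C_d = -C_ap * np.log(C_ap) - K_B * C_cp * np.log(C_cp)
C_f = K_B * C_c / (C_b + K_B * C_c)
C_g = (C_b + K_B * C_c) / D
L_1 = -C_g * C_f * np.sqrt(K * E_H) * np.sqrt((1 + 1 / K_B) * K * E_W) + C_d

# The claimed minimizer puts everything on one unit direction u.
p = 5
u = np.zeros(p)
u[0] = 1.0
w1 = np.sqrt(K_B * E_W) * u              # magnitudes chosen so the energy
W = np.vstack([w1] + [-np.sqrt(E_W / K_B) * u] * K_B)  # constraints are tight
h = np.sqrt((1 + K_B) * E_H) * u

z = W @ h
m = z.max()
loss = -(z[0] - m - np.log(np.exp(z - m).sum()))  # all class-1 samples coincide

assert abs(loss - L_1) < 1e-10
```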
From Lemma 5, any minimizer satisfies $\mathbf{h}_{k,i}=\mathbf{0}_{p}$ for all $k\in[K_{A}+1:K]$ and $i\in[n_{B}]$. Finally, for any feasible solution $\left(\bm{H}^{\prime},\mathbf{W}^{\prime}\right)$, if there exist $k,{k^{\prime}}\in[K_{A}+1:K]$ such that $\left\|\mathbf{w}_{k}-\mathbf{w}_{k^{\prime}}\right\|=\varepsilon>0$, we have $\sum_{k=K_{A}+1}^{K}\|\mathbf{w}_{k}-\mathbf{w}_{B}\|^{2}\geq\|\mathbf{w}_{k}-\mathbf{w}_{B}\|^{2}+\|\mathbf{w}_{k^{\prime}}-\mathbf{w}_{B}\|^{2}\geq\frac{\|\mathbf{w}_{k}-\mathbf{w}_{k^{\prime}}\|^{2}}{2}=\varepsilon^{2}/2.$ (60) It follows from $\overset{c}{\geq}$ in (59) that $\frac{1}{K_{A}n_{A}}\sum_{k=1}^{K_{A}}\sum_{i=1}^{n_{A}}\mathcal{L}(\mathbf{W}\bm{h}_{k,i},\mathbf{y}_{k})\geq- C_{g}C_{f}\sqrt{KE_{H}}\sqrt{(1+1/K_{B})\left(KE_{W}-\varepsilon^{2}/2\right)}+C_{d}:=L_{1}+\varepsilon_{1}$ with $\varepsilon_{1}>0$ depending on $\varepsilon$, $K_{A}$, $K_{B}$, $E_{H}$, and $E_{W}$. 2. (ii) Consider the case when $K_{A}>1$ and $\exp\left((1+1/K_{R})\sqrt{E_{H}E_{W}}/(K_{A}-1)\right)<\sqrt{1+K_{R}}+1$. Let us pick $C_{a}:=\exp\left((1+1/K_{R})\sqrt{E_{H}E_{W}}\right)$, $C_{b}:=\exp\left(-\frac{1}{K_{A}-1}(1+1/K_{R})\sqrt{E_{H}E_{W}}\right)$, and $C_{c}:=1$. 
It follows from $\overset{b}{\geq}$ in (58) that if $1/K_{R}-C_{f}^{2}-\frac{C_{f}^{4}}{C_{e}(2-C_{e})}>0$, then $\displaystyle\frac{1}{K_{A}n_{A}}\sum_{k=1}^{K_{A}}\sum_{i=1}^{n_{A}}\mathcal{L}(\mathbf{W}\bm{h}_{k,i},\mathbf{y}_{k})\geq- C_{g}(1+1/K_{R})\sqrt{E_{H}E_{W}}+C_{d}:=L_{2}.$ (61) In fact, we do have $1/K_{R}-C_{f}^{2}-\frac{C_{f}^{4}}{C_{e}(2-C_{e})}>0$ because $\displaystyle\begin{aligned} \quad&1/K_{R}>C_{f}^{2}+\frac{C_{f}^{4}}{C_{e}(2-C_{e})}\quad\quad\quad\left(\text{by~{}}C_{e}+C_{f}=1\right)\\\ \iff\quad&C_{f}<\sqrt{\frac{1}{1+K_{R}}}\quad\quad\quad\left(\text{by~{}}C_{f}=\frac{K_{B}C_{c}}{K_{A}C_{b}+K_{B}C_{c}}\right)\\\ \iff\quad&\frac{C_{b}}{C_{c}}>\frac{1}{\sqrt{1+K_{R}}+1}\\\ \iff\quad&\exp\left((1+1/K_{R})\sqrt{E_{H}E_{W}}/(K_{A}-1)\right)<\sqrt{1+K_{R}}+1.\end{aligned}$ On the other hand, when $(\mathbf{H},\mathbf{W})$ satisfies that $\displaystyle\begin{aligned} \left[\mathbf{w}_{1},\mathbf{w}_{2},\ldots,\mathbf{w}_{K_{A}}\right]=&\sqrt{\frac{E_{W}}{E_{H}}}~{}\bigg{[}\bm{h}_{1},\ldots,\bm{h}_{K_{A}}\bigg{]}^{\top}=\sqrt{(1+1/K_{R})E_{W}}~{}(\mathbf{M}_{A}^{\star})^{\top},\\\ \bm{h}_{k,i}=&\bm{h}_{k},\quad k\in[K_{A}],~{}i\in[n_{A}]\\\ \bm{h}_{k,i}=&\mathbf{w}_{k}=\mathbf{0}_{p},\quad k\in[K_{A}+1:K],~{}i\in[n_{B}],\\\ \end{aligned}$ where $\mathbf{M}_{A}^{\star}$ is a $K_{A}$-simplex ETF, $(\mathbf{H},\mathbf{W})$ can achieve the equality in (61). So $L_{2}$ is the global minimum of (52). Moreover, $L_{2}$ is achieved only if the equality in $\overset{a}{\geq}$ of (58) holds. From Lemma 5, any minimizer satisfies $\mathbf{h}_{k,i}=\mathbf{0}_{p}$ for all $k\in[K_{A}+1:K]$ and $i\in[n_{B}]$. 
Finally, for any feasible solution $\left(\bm{H}^{\prime},\mathbf{W}^{\prime}\right)$, if there exist $k,{k^{\prime}}\in[K_{A}+1:K]$ such that $\left\|\mathbf{w}_{k}-\mathbf{w}_{k^{\prime}}\right\|=\varepsilon>0$, plugging (60) into (58), we have $\frac{1}{K_{A}n_{A}}\sum_{k=1}^{K_{A}}\sum_{i=1}^{n_{A}}\mathcal{L}(\mathbf{W}\bm{h}_{k,i},\mathbf{y}_{k})\geq-\frac{C_{g}}{K_{A}}\sqrt{KE_{H}}\sqrt{KE_{W}-\varepsilon^{2}/2}+C_{d}:=L_{2}+\varepsilon_{2},$ (62) with $\varepsilon_{2}>0$ depending on $\varepsilon$, $K_{A}$, $K_{B}$, $E_{H}$, and $E_{W}$. 3. (iii) Consider the case when $K_{A}>1$ and $\exp((1+1/K_{R})\sqrt{E_{H}E_{W}}/(K_{A}-1))\geq\sqrt{1+K_{R}}+1$. Let $C_{f}^{\prime}:=\frac{1}{\sqrt{K_{R}+1}}$ and $C_{e}^{\prime}:=1-C_{f}^{\prime}$. For $x\in[0,1]$, we define: $\displaystyle\begin{aligned} g_{N}(x):&=\sqrt{\frac{(1+K_{R})E_{W}}{K_{R}x^{2}+(K_{R}+K_{R}^{2})(1-x)^{2}}},\\\ g_{a}(x):&=\exp\left(\frac{g_{N}(x)\sqrt{(1+K_{R})E_{H}/K_{R}}}{\sqrt{x^{2}+\left(1+\frac{C_{e}^{\prime}}{C_{f}^{\prime}}\right)^{2}(1-x)^{2}}}\left[x^{2}+\left(1+\frac{C_{e}^{\prime}}{C_{f}^{\prime}}\right)(1-x)^{2}\right]\right),\\\ g_{b}(x):&=\exp\left(\frac{g_{N}(x)\sqrt{(1+K_{R})E_{H}/K_{R}}}{\sqrt{x^{2}+\left(1+\frac{C_{e}^{\prime}}{C_{f}^{\prime}}\right)^{2}(1-x)^{2}}}\left[-\frac{1}{K_{A}-1}x^{2}+\left(1+\frac{C_{e}^{\prime}}{C_{f}^{\prime}}\right)(1-x)^{2}\right]\right),\\\ g_{c}(x):&=\exp\left(\frac{g_{N}(x)\sqrt{(1+K_{R})E_{H}/K_{R}}}{\sqrt{x^{2}+\left(1+\frac{C_{e}^{\prime}}{C_{f}^{\prime}}\right)^{2}(1-x)^{2}}}\left[-\left(1+\frac{C_{e}^{\prime}}{C_{f}^{\prime}}\right)K_{R}(1-x)^{2}\right]\right).\end{aligned}$ Let $x_{0}\in[0,1]$ be a root of the equation $g_{b}(x)/g_{c}(x)=\frac{1/C_{f}^{\prime}-1}{K_{R}}.$ We first show that such a root $x_{0}$ exists. First, one can directly verify that $g_{b}(x)/g_{c}(x)$ is continuous on $[0,1]$. 
It suffices to prove that (1) $g_{b}(0)/g_{c}(0)\geq\frac{1/C_{f}^{\prime}-1}{K_{R}}$ and (2) $g_{b}(1)/g_{c}(1)\leq\frac{1/C_{f}^{\prime}-1}{K_{R}}$. 1. (1) When $x=0$, we have $g_{b}(x)/g_{c}(x)\geq\exp(0)=1$. At the same time, $\frac{1/C_{f}^{\prime}-1}{K_{R}}=\frac{\sqrt{K_{R}+1}-1}{K_{R}}=\frac{1}{\sqrt{K_{R}+1}+1}\leq 1$. Thus (1) is achieved. 2. (2) When $x=1$, we have $g_{N}(1)=\sqrt{(1+1/K_{R})E_{W}}$, so $\displaystyle\begin{aligned} g_{b}(1)/g_{c}(1)=\exp\left(-(1+1/K_{R})\sqrt{E_{H}E_{W}}/(K_{A}-1)\right)\overset{a}{\leq}\frac{1}{\sqrt{K_{R}+1}+1}=\frac{1/C_{f}^{\prime}-1}{K_{R}}.\end{aligned}$ where $\overset{a}{\leq}$ is obtained by the condition that $\exp\left((1+1/K_{R})\sqrt{E_{H}E_{W}}/(K_{A}-1)\right)\geq\sqrt{1+K_{R}}+1.$ Now we pick $C_{a}:=g_{a}(x_{0})$, $C_{b}:=g_{b}(x_{0})$, and $C_{c}:=g_{c}(x_{0})$. Because $\frac{C_{b}}{C_{c}}=\frac{1/C_{f}^{\prime}-1}{K_{R}}$, we have $C_{e}=C_{e}^{\prime}$ and $C_{f}=C_{f}^{\prime}$ and $1/K_{R}=C_{f}^{2}+\frac{C_{f}^{4}}{C_{e}(2-C_{e})}$. 
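The root $x_{0}$ can also be located numerically. The sketch below (hypothetical parameters $K_{A}=3$, $K_{B}=2$, $E_{H}=E_{W}=2$ satisfying the case-(iii) condition) checks the bracketing established in (1) and (2) and runs a bisection on $\log(g_{b}(x)/g_{c}(x))$:

```python
import math

# Hypothetical parameters for case (iii).
K_A, K_B = 3, 2
K_R = K_A / K_B
E_H = E_W = 2.0
s = math.sqrt(E_H * E_W)
assert math.exp((1 + 1 / K_R) * s / (K_A - 1)) >= math.sqrt(1 + K_R) + 1

C_fp = 1 / math.sqrt(K_R + 1)
C_ep = 1 - C_fp
r = 1 + C_ep / C_fp
target = math.log((1 / C_fp - 1) / K_R)

def log_ratio(x):
    # log(g_b(x) / g_c(x)), written out from the definitions of g_b and g_c.
    g_N = math.sqrt((1 + K_R) * E_W / (K_R * x**2 + (K_R + K_R**2) * (1 - x)**2))
    coef = g_N * math.sqrt((1 + K_R) * E_H / K_R) / math.sqrt(x**2 + r**2 * (1 - x)**2)
    exp_b = -x**2 / (K_A - 1) + r * (1 - x)**2
    exp_c = -r * K_R * (1 - x)**2
    return coef * (exp_b - exp_c)

# The endpoints bracket the target, as items (1) and (2) show.
assert log_ratio(0.0) >= target >= log_ratio(1.0)

lo, hi = 0.0, 1.0
for _ in range(80):  # bisection
    mid = (lo + hi) / 2
    if log_ratio(mid) > target:
        lo = mid
    else:
        hi = mid
x0 = (lo + hi) / 2
assert abs(log_ratio(x0) - target) < 1e-9
```

Working in log space avoids overflow in $g_{b}$ and $g_{c}$ themselves.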
Then it follows from $\overset{b}{\geq}$ in (58) that $\displaystyle\frac{1}{K_{A}n_{A}}\sum_{k=1}^{K_{A}}\sum_{i=1}^{n_{A}}\mathcal{L}(\mathbf{W}\bm{h}_{k,i},\mathbf{y}_{k})\geq- C_{g}(1+1/K_{R})\sqrt{E_{H}E_{W}}+C_{d}=L_{2}.$ (63) On the other hand, consider the solution $(\mathbf{H},\mathbf{W})$ that satisfies $\displaystyle\begin{aligned} &\mathbf{w}_{k}=g_{N}(x_{0})\mathbf{P}_{A}\left[\frac{x_{0}}{\sqrt{(K_{A}-1)K_{A}}}(K_{A}\mathbf{y}_{k}-\mathbf{1}_{K_{A}})+\frac{1-x_{0}}{\sqrt{K_{A}}}\mathbf{1}_{K_{A}}\right],\quad k\in[K_{A}],\\\ &\mathbf{w}_{k}=-\frac{C_{e}(2-C_{e})}{C_{f}^{2}K_{A}}\sum_{{k^{\prime}}=1}^{K_{A}}\mathbf{w}_{k^{\prime}},\quad k\in[K_{A}+1:K],\\\ &\bm{h}_{k,i}=\frac{\sqrt{(1+1/K_{R})E_{H}}}{\|\mathbf{w}_{k}+\frac{C_{e}}{C_{f}K_{A}}\sum_{{k^{\prime}}=1}^{K_{A}}\mathbf{w}_{k^{\prime}}\|}\left[\mathbf{w}_{k}+\frac{C_{e}}{C_{f}K_{A}}\sum_{{k^{\prime}}=1}^{K_{A}}\mathbf{w}_{k^{\prime}}\right],\quad k\in[K_{A}],~{}i\in[n_{A}],\\\ &\bm{h}_{k,i}=\mathbf{0}_{p},\quad k\in[K_{A}+1:K],~{}i\in[n_{B}],\end{aligned}$ where $\mathbf{y}_{k}\in\mathbb{R}^{K}$ is the vector containing one in the $k$-th entry and zero elsewhere and $\mathbf{P}_{A}\in\mathbb{R}^{p\times K_{A}}$ is a partial orthogonal matrix such that $\mathbf{P}^{\top}_{A}\mathbf{P}_{A}=\mathbf{I}_{K_{A}}$. We have $\exp\left(\bm{h}_{k,i}^{\top}\mathbf{w}_{k}\right)=g_{a}(x_{0})$ for $i\in[n_{A}]$ and $k\in[K_{A}]$, $\exp\left(\bm{h}_{k,i}^{\top}\mathbf{w}_{k^{\prime}}\right)=g_{b}(x_{0})$ for $i\in[n_{A}]$ and $k,{k^{\prime}}\in[K_{A}]$ such that $k\neq{k^{\prime}}$, and $\exp\left(\bm{h}_{k,i}^{\top}\mathbf{w}_{k^{\prime}}\right)=g_{c}(x_{0})$ for $i\in[n_{A}]$, $k\in[K_{A}]$, and ${k^{\prime}}\in[K_{A}+1:K]$. Moreover, $(\mathbf{H},\mathbf{W})$ can achieve the equality in (63). 
Finally, following the same argument as in Case (ii), we have that (1) $L_{2}$ is the global minimum of (52); (2) any minimizer satisfies that $\mathbf{h}_{k,i}=\mathbf{0}_{p}$ for all $k\in[K_{A}+1:K]$ and $i\in[n_{B}]$; (3) for any feasible solution $\left(\bm{H}^{\prime},\mathbf{W}^{\prime}\right)$, if there exist $k,{k^{\prime}}\in[K_{A}+1:K]$ such that $\left\|\mathbf{w}_{k}-\mathbf{w}_{k^{\prime}}\right\|=\varepsilon>0$, then (62) holds. Combining the three cases, we obtain Lemma 4, completing the proof. ∎ ###### Lemma 5. For any constants $C_{a}>0$, $C_{b}>0$, and $C_{c}>0$, define $C_{a}^{\prime}:=\frac{C_{a}}{C_{a}+(K_{A}-1)C_{b}+K_{B}C_{c}}\in(0,1)$, $C_{b}^{\prime}:=\frac{C_{b}}{C_{a}+(K_{A}-1)C_{b}+K_{B}C_{c}}\in(0,1)$, and $C_{c}^{\prime}:=\frac{C_{c}}{C_{a}+(K_{A}-1)C_{b}+K_{B}C_{c}}\in(0,1)$, $C_{d}:=-C_{a}^{\prime}\log(C_{a}^{\prime})-C_{b}^{\prime}(K_{A}-1)\log(C_{b}^{\prime})-K_{B}C_{c}^{\prime}\log(C_{c}^{\prime})$, $C_{e}:=\frac{K_{A}C_{b}}{K_{A}C_{b}+K_{B}C_{c}}\in(0,1)$, $C_{f}:=\frac{K_{B}C_{c}}{K_{A}C_{b}+K_{B}C_{c}}\in(0,1)$, and $C_{g}:=\frac{K_{A}C_{b}+K_{B}C_{c}}{C_{a}+(K_{A}-1)C_{b}+K_{B}C_{c}}>0$. 
For any feasible solution $(\bm{H},\mathbf{W})$ of (52), the objective value of (52) can be bounded from below by: $\displaystyle\frac{1}{K_{A}n_{A}}\sum_{k=1}^{K_{A}}\sum_{i=1}^{n_{A}}\mathcal{L}(\mathbf{W}\bm{h}_{k,i},\mathbf{y}_{k})$ (64) $\displaystyle\overset{a}{\geq}$ $\displaystyle-\frac{C_{g}}{K_{A}}\sqrt{KE_{H}}\sqrt{\sum_{k=1}^{K_{A}}\left\|C_{e}\mathbf{w}_{A}+C_{f}\mathbf{w}_{B}-\mathbf{w}_{k}\right\|^{2}}+C_{d}$ $\displaystyle\overset{b}{\geq}$ $\displaystyle-\frac{C_{g}}{K_{A}}\sqrt{KE_{H}}\sqrt{KE_{W}\\!-K_{A}\left(1/K_{R}-C_{f}^{2}-\frac{C_{f}^{4}}{C_{e}(2-C_{e})}\right)\|\mathbf{w}_{B}\|^{2}-\\!\sum_{k=K_{A}+1}^{K}\left\|\mathbf{w}_{k}-\mathbf{w}_{B}\right\|^{2}}+C_{d},$ where $\mathbf{w}_{A}:=\frac{1}{K_{A}}\sum_{k=1}^{K_{A}}\mathbf{w}_{k}$, $\mathbf{w}_{B}:=\frac{1}{K_{B}}\sum_{k=K_{A}+1}^{K}\mathbf{w}_{k}$, and $K_{R}:=\frac{K_{A}}{K_{B}}$. Moreover, the equality in $\overset{a}{\geq}$ holds only if $\mathbf{h}_{k,i}=\mathbf{0}_{p}$ for all $k\in[K_{A}+1:K]$ and $i\in[n_{B}]$. ###### Proof of Lemma 5. For $k\in[K_{A}]$ and $i\in[n_{k}]$, we introduce $\mathbf{z}_{k,i}=\mathbf{W}\bm{h}_{k,i}$. 
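The weighted Jensen bound used in the proof can be verified numerically. The sketch below (arbitrary positive constants $C_{a}$, $C_{b}$, $C_{c}$ and random logits, all hypothetical) checks that $-\log\mathbf{S}(\mathbf{z})(k)\geq C_{g}C_{e}(\bar{\mathbf{z}}_{A}-\mathbf{z}(k))+C_{g}C_{f}(\bar{\mathbf{z}}_{B}-\mathbf{z}(k))+C_{d}$ for $k\in[K_{A}]$, where $\bar{\mathbf{z}}_{A}$ and $\bar{\mathbf{z}}_{B}$ average the first $K_{A}$ and the last $K_{B}$ logits:

```python
import numpy as np

rng = np.random.default_rng(1)
K_A, K_B = 3, 2
K = K_A + K_B

# Any positive constants give a valid bound; these are arbitrary.
C_a, C_b, C_c = 2.0, 0.7, 0.3
D = C_a + (K_A - 1) * C_b + K_B * C_c
C_ap, C_bp, C_cp = C_a / D, C_b / D, C_c / D
C_d = (-C_ap * np.log(C_ap) - (K_A - 1) * C_bp * np.log(C_bp)
       - K_B * C_cp * np.log(C_cp))
C_e = K_A * C_b / (K_A * C_b + K_B * C_c)
C_f = K_B * C_c / (K_A * C_b + K_B * C_c)
C_g = (K_A * C_b + K_B * C_c) / D

slacks = []
for _ in range(1000):
    z = rng.normal(scale=3.0, size=K)
    k = int(rng.integers(K_A))  # a label among the first K_A classes
    m = z.max()
    lhs = -(z[k] - m - np.log(np.exp(z - m).sum()))  # -log S(z)(k)
    rhs = (C_g * C_e * (z[:K_A].mean() - z[k])
           + C_g * C_f * (z[K_A:].mean() - z[k]) + C_d)
    slacks.append(lhs - rhs)

assert min(slacks) >= -1e-10  # the linear lower bound never exceeds the loss
```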
Because $C_{a}^{\prime}+(K_{A}-1)C_{b}^{\prime}+K_{B}C_{c}^{\prime}=1$, $C_{a}^{\prime}>0$, $C_{b}^{\prime}>0$, and $C_{c}^{\prime}>0$, by the concavity of $\log(\cdot)$, we have $\displaystyle\begin{aligned} &-\log\left(\frac{\exp(\mathbf{z}_{k,i}(k))}{\sum_{{k^{\prime}}=1}^{K}\exp(\mathbf{z}_{k,i}({k^{\prime}}))}\right)\\\ =&-\mathbf{z}_{k,i}(k)+\log\left(C_{a}^{\prime}\left(\frac{\exp(\mathbf{z}_{k,i}(k))}{C_{a}^{\prime}}\right)+\sum_{{k^{\prime}}=1,~{}{k^{\prime}}\neq k}^{K_{A}}C_{b}^{\prime}\left(\frac{\exp(\mathbf{z}_{k,i}({k^{\prime}}))}{C_{b}^{\prime}}\right)+\sum_{{k^{\prime}}=K_{A}+1}^{K}C_{c}^{\prime}\left(\frac{\exp(\mathbf{z}_{k,i}({k^{\prime}}))}{C_{c}^{\prime}}\right)\right)\\\ \geq&-\mathbf{z}_{k,i}(k)+C_{a}^{\prime}\mathbf{z}_{k,i}(k)+C_{b}^{\prime}\sum_{{k^{\prime}}=1,~{}{k^{\prime}}\neq k}^{K_{A}}\mathbf{z}_{k,i}({k^{\prime}})+C_{c}^{\prime}\sum_{{k^{\prime}}=K_{A}+1}^{K}\mathbf{z}_{k,i}({k^{\prime}})+C_{d}\\\ =&C_{g}C_{e}\left(\frac{1}{K_{A}}\sum_{{k^{\prime}}=1}^{K_{A}}\mathbf{z}_{k,i}({k^{\prime}})-\mathbf{z}_{k,i}(k)\right)+C_{g}C_{f}\left(\frac{1}{K_{B}}\sum_{{k^{\prime}}=K_{A}+1}^{K}\mathbf{z}_{k,i}({k^{\prime}})-\mathbf{z}_{k,i}(k)\right)+C_{d}.\end{aligned}$ Therefore, summing this bound over $k\in[K_{A}]$ and $i\in[n_{A}]$, recalling that $\mathbf{w}_{A}=\frac{1}{K_{A}}\sum_{k=1}^{K_{A}}\mathbf{w}_{k}$ and $\mathbf{w}_{B}=\frac{1}{K_{B}}\sum_{k=K_{A}+1}^{K}\mathbf{w}_{k}$, we have $\displaystyle\frac{1}{K_{A}n_{A}}\sum_{k=1}^{K_{A}}\sum_{i=1}^{n_{A}}\mathcal{L}(\mathbf{W}\bm{h}_{k,i},\mathbf{y}_{k})$ (65) $\displaystyle\geq$ $\displaystyle\frac{1}{K_{A}n_{A}}\sum_{k=1}^{K_{A}}\sum_{i=1}^{n_{A}}C_{g}\left[C_{e}(\bm{h}_{k,i}^{\top}\mathbf{w}_{A}-\bm{h}_{k,i}^{\top}\mathbf{w}_{k})+C_{f}(\bm{h}_{k,i}^{\top}\mathbf{w}_{B}-\bm{h}_{k,i}^{\top}\mathbf{w}_{k})\right]+C_{d}$ $\displaystyle\overset{a}{=}$ $\displaystyle\frac{C_{g}}{K_{A}}\sum_{k=1}^{K_{A}}\bm{h}_{k}^{\top}(C_{e}\mathbf{w}_{A}+C_{f}\mathbf{w}_{B}-\mathbf{w}_{k})+C_{d},$ where in $\overset{a}{=}$, we introduce 
$\bm{h}_{k}:=\frac{1}{n_{k}}\sum_{i=1}^{n_{k}}\bm{h}_{k,i}$ for $k\in[K]$, and use $C_{e}+C_{f}=1$. Then it is sufficient to bound $\sum_{k=1}^{K_{A}}\bm{h}_{k}^{\top}(C_{e}\mathbf{w}_{A}+C_{f}\mathbf{w}_{B}-\mathbf{w}_{k})$. By the Cauchy–Schwarz inequality, we have $\displaystyle\sum_{k=1}^{K_{A}}\bm{h}_{k}^{\top}(C_{e}\mathbf{w}_{A}+C_{f}\mathbf{w}_{B}-\mathbf{w}_{k})\geq$ $\displaystyle-\sqrt{\sum_{k=1}^{K_{A}}\|\bm{h}_{k}\|^{2}}\sqrt{\sum_{k=1}^{K_{A}}\left\|C_{e}\mathbf{w}_{A}+C_{f}\mathbf{w}_{B}-\mathbf{w}_{k}\right\|^{2}}$ $\displaystyle\overset{a}{\geq}$ $\displaystyle-\sqrt{\sum_{k=1}^{K_{A}}\frac{1}{n_{k}}\sum_{i=1}^{n_{k}}\|\bm{h}_{k,i}\|^{2}}\sqrt{\sum_{k=1}^{K_{A}}\left\|C_{e}\mathbf{w}_{A}+C_{f}\mathbf{w}_{B}-\mathbf{w}_{k}\right\|^{2}}$ $\displaystyle\overset{b}{\geq}$ $\displaystyle-\sqrt{KE_{H}}\sqrt{\sum_{k=1}^{K_{A}}\left\|C_{e}\mathbf{w}_{A}+C_{f}\mathbf{w}_{B}-\mathbf{w}_{k}\right\|^{2}},$ (66) where $\overset{a}{\geq}$ follows from Jensen's inequality $\frac{1}{n_{k}}\sum_{i=1}^{n_{k}}\|\bm{h}_{k,i}\|^{2}\geq\|\bm{h}_{k}\|^{2}$ for $k\in[K_{A}]$ and $\overset{b}{\geq}$ uses the constraint that $\frac{1}{K}\sum_{k=1}^{K}\frac{1}{n_{k}}\sum_{i=1}^{n_{k}}\left\|\bm{h}_{k,i}\right\|^{2}\leq E_{H}$. Moreover, we have $\sum_{k=1}^{K_{A}}\frac{1}{n_{k}}\sum_{i=1}^{n_{k}}\left\|\bm{h}_{k,i}\right\|^{2}=KE_{H}$ only if $\mathbf{h}_{k,i}=\mathbf{0}_{p}$ for all $k\in[K_{A}+1:K]$. Plugging (66) into (65), we obtain $\overset{a}{\geq}$ in (64). We then bound $\sum_{k=1}^{K_{A}}\left\|C_{e}\mathbf{w}_{A}+C_{f}\mathbf{w}_{B}-\mathbf{w}_{k}\right\|^{2}$.
We have $\displaystyle\frac{1}{K_{A}}\sum_{k=1}^{K_{A}}\left\|C_{e}\mathbf{w}_{A}+C_{f}\mathbf{w}_{B}-\mathbf{w}_{k}\right\|^{2}$ $\displaystyle=$ $\displaystyle\frac{1}{K_{A}}\sum_{k=1}^{K_{A}}\|\mathbf{w}_{k}\|^{2}-2\frac{1}{K_{A}}\sum_{k=1}^{K_{A}}\mathbf{w}_{k}\cdot(C_{e}\mathbf{w}_{A}+C_{f}\mathbf{w}_{B})+\|C_{e}\mathbf{w}_{A}+C_{f}\mathbf{w}_{B}\|^{2}$ $\displaystyle\overset{a}{=}$ $\displaystyle\frac{1}{K_{A}}\sum_{k=1}^{K_{A}}\|\mathbf{w}_{k}\|^{2}-2C_{f}^{2}\mathbf{w}_{A}^{\top}\mathbf{w}_{B}-C_{e}(2-C_{e})\|\mathbf{w}_{A}\|^{2}+C_{f}^{2}\|\mathbf{w}_{B}\|^{2}.$ (67) where $\overset{a}{=}$ uses $\sum_{k=1}^{K_{A}}\mathbf{w}_{k}=K_{A}\mathbf{w}_{A}$. Then using the constraint that $\sum_{k=1}^{K}\|\mathbf{w}_{k}\|^{2}\leq KE_{W}$ yields that $\displaystyle\frac{1}{K_{A}}\sum_{k=1}^{K_{A}}\|\mathbf{w}_{k}\|^{2}-2C_{f}^{2}\mathbf{w}_{A}^{\top}\mathbf{w}_{B}-C_{e}(2-C_{e})\|\mathbf{w}_{A}\|^{2}+C_{f}^{2}\|\mathbf{w}_{B}\|^{2}$ (68) $\displaystyle\leq$ $\displaystyle\frac{K}{K_{A}}E_{W}-\frac{1}{K_{A}}\sum_{k=K_{A}+1}^{K}\\!\|\mathbf{w}_{k}\|^{2}-C_{e}(2-C_{e})\left\|\mathbf{w}_{A}+\frac{C_{f}^{2}}{C_{e}(2-C_{e})}\mathbf{w}_{B}\right\|^{2}\\!\\!+\\!\left(C_{f}^{2}+\frac{C_{f}^{4}}{C_{e}(2-C_{e})}\right)\|\mathbf{w}_{B}\|^{2}$ $\displaystyle\overset{a}{\leq}$ $\displaystyle\frac{K}{K_{A}}E_{W}-\left(1/K_{R}-C_{f}^{2}-\frac{C_{f}^{4}}{C_{e}(2-C_{e})}\right)\|\mathbf{w}_{B}\|^{2}-\frac{1}{K_{A}}\sum_{k=K_{A}+1}^{K}\\!\left\|\mathbf{w}_{k}-\mathbf{w}_{B}\right\|^{2},$ where $\overset{a}{\leq}$ applies $\sum_{k=K_{A}+1}^{K}\|\mathbf{w}_{k}\|^{2}=K_{B}\|\mathbf{w}_{B}\|^{2}+\sum_{k=K_{A}+1}^{K}\left\|\mathbf{w}_{k}-\mathbf{w}_{B}\right\|^{2}$ and drops the nonpositive term $-C_{e}(2-C_{e})\left\|\mathbf{w}_{A}+\frac{C_{f}^{2}}{C_{e}(2-C_{e})}\mathbf{w}_{B}\right\|^{2}$. Plugging (67) and (68) into $\overset{a}{\geq}$ in (64), we obtain $\overset{b}{\geq}$ in (64), completing the proof. ∎ ## Appendix B Additional Results ##### Comparison of Oversampling and Weight Adjusting. Oversampling and weight adjusting are two commonly used tricks in deep learning [JK19].
Both of them actually consider the same objective as (16), but apply different optimization algorithms to minimize it. It has been observed that oversampling is more stable than weight adjusting in optimization. As a by-product of this work, we compare the two algorithms below and show that the variance of the updates for oversampling is potentially much smaller than that of weight adjusting. It is well known in the stochastic optimization field that the variance of the updates determines the convergence of an optimization algorithm (see, e.g., [BCN18, FLLZ18, FLZ19]). Thus we offer a reasonable justification for the stability of the oversampling technique. For simplicity, we consider sampling the training data with replacement, which slightly differs from the deep learning training methods used in practice. Besides, we only consider sampling a single data point in each update; the analysis can be directly extended to the mini-batch setting. We first introduce the two methods. In each update, the weight adjusting algorithm randomly samples a training data point and updates the parameters $\bm{W}_{\textnormal{full}}$ by Stochastic Gradient Descent as $\displaystyle\bm{W}_{\textnormal{full}}^{t+1}=\bm{W}_{\textnormal{full}}^{t}-\eta_{w}\mathbf{v}_{w}^{t},\quad t=0,1,2,\dots,$ (69) where $\bm{W}_{\textnormal{full}}^{t}$ denotes the parameters at iteration step $t$, $\eta_{w}$ is a positive step size, and the stochastic gradient $\mathbf{v}_{w}^{t}$ satisfies $\mathbf{v}_{w}^{t}=\begin{cases}\nabla_{\bm{W}_{\textnormal{full}}}\mathcal{L}(f(\mathbf{x}_{k,i};\bm{W}_{\textnormal{full}}^{t}),\mathbf{y}_{k}),&k\in[K_{A}],i\in[n_{A}],\text{~{}with probability~{}}\frac{1}{K_{A}n_{A}+K_{B}n_{B}},\\\ w_{r}\nabla_{\bm{W}_{\textnormal{full}}}\mathcal{L}(f(\mathbf{x}_{k,i};\bm{W}_{\textnormal{full}}^{t}),\mathbf{y}_{k}),&k\in[K_{A}+1:K],i\in[n_{B}],\text{~{}with probability~{}}\frac{1}{K_{A}n_{A}+K_{B}n_{B}}.\end{cases}$ We have
$\displaystyle\mathbb{E}\left[\mathbf{v}_{w}^{t}\mid\bm{W}_{\textnormal{full}}^{t}\right]$ (70) $\displaystyle=$ $\displaystyle\frac{1}{n_{A}K_{A}+n_{B}K_{B}}\left[\sum_{k=1}^{K_{A}}\sum_{i=1}^{n_{A}}\nabla_{\bm{W}_{\textnormal{full}}}\mathcal{L}(f(\mathbf{x}_{k,i};\bm{W}_{\textnormal{full}}^{t}),\mathbf{y}_{k})+w_{r}\\!\\!\sum_{k=K_{A}+1}^{K}\\!\sum_{i=1}^{n_{B}}\nabla_{\bm{W}_{\textnormal{full}}}\mathcal{L}(f(\mathbf{x}_{k,i};\bm{W}_{\textnormal{full}}^{t}),\mathbf{y}_{k})\right],$ and $\displaystyle\mathbb{E}\left[\|\mathbf{v}_{w}^{t}\|^{2}\mid\bm{W}_{\textnormal{full}}^{t}\right]=$ $\displaystyle\frac{1}{n_{A}K_{A}+n_{B}K_{B}}\sum_{k=1}^{K_{A}}\sum_{i=1}^{n_{A}}\left\|\nabla_{\bm{W}_{\textnormal{full}}}\mathcal{L}(f(\mathbf{x}_{k,i};\bm{W}_{\textnormal{full}}^{t}),\mathbf{y}_{k})\right\|^{2}$ $\displaystyle+\frac{w_{r}^{2}}{n_{A}K_{A}+n_{B}K_{B}}\sum_{k=K_{A}+1}^{K}\sum_{i=1}^{n_{B}}\left\|\nabla_{\bm{W}_{\textnormal{full}}}\mathcal{L}(f(\mathbf{x}_{k,i};\bm{W}_{\textnormal{full}}^{t}),\mathbf{y}_{k})\right\|^{2}.$ (71) The oversampling method, in effect, duplicates each minority-class sample $w_{r}$ times and runs Stochastic Gradient Descent on the augmented data.
Therefore, the update goes as $\displaystyle\bm{W}_{\textnormal{full}}^{t+1}=\bm{W}_{\textnormal{full}}^{t}-\eta_{s}\mathbf{v}_{s}^{t},\quad t=0,1,2,\dots,$ (72) where $\mathbf{v}_{s}^{t}$ satisfies $\mathbf{v}_{s}^{t}=\begin{cases}\nabla_{\bm{W}_{\textnormal{full}}}\mathcal{L}(f(\mathbf{x}_{k,i};\bm{W}_{\textnormal{full}}^{t}),\mathbf{y}_{k}),&k\in[K_{A}],i\in[n_{A}],\text{~{}with probability~{}}\frac{1}{K_{A}n_{A}+K_{B}w_{r}n_{B}},\\\ \nabla_{\bm{W}_{\textnormal{full}}}\mathcal{L}(f(\mathbf{x}_{k,i};\bm{W}_{\textnormal{full}}^{t}),\mathbf{y}_{k}),&k\in[K_{A}+1:K],i\in[n_{B}],\text{~{}with probability~{}}\frac{w_{r}}{K_{A}n_{A}+K_{B}w_{r}n_{B}}.\end{cases}$ We obtain $\displaystyle\mathbb{E}\left[\mathbf{v}_{s}^{t}\mid\bm{W}_{\textnormal{full}}^{t}\right]=$ $\displaystyle\frac{1}{n_{A}K_{A}+w_{r}n_{B}K_{B}}\sum_{k=1}^{K_{A}}\sum_{i=1}^{n_{A}}\nabla_{\bm{W}_{\textnormal{full}}}\mathcal{L}(f(\mathbf{x}_{k,i};\bm{W}_{\textnormal{full}}^{t}),\mathbf{y}_{k})$ $\displaystyle+\frac{w_{r}}{n_{A}K_{A}+w_{r}n_{B}K_{B}}\sum_{k=K_{A}+1}^{K}\sum_{i=1}^{n_{B}}\nabla_{\bm{W}_{\textnormal{full}}}\mathcal{L}(f(\mathbf{x}_{k,i};\bm{W}_{\textnormal{full}}^{t}),\mathbf{y}_{k}),$ and $\displaystyle\mathbb{E}\left[\|\mathbf{v}_{s}^{t}\|^{2}\mid\bm{W}_{\textnormal{full}}^{t}\right]=$ $\displaystyle\frac{1}{n_{A}K_{A}+w_{r}n_{B}K_{B}}\sum_{k=1}^{K_{A}}\sum_{i=1}^{n_{A}}\left\|\nabla_{\bm{W}_{\textnormal{full}}}\mathcal{L}(f(\mathbf{x}_{k,i};\bm{W}_{\textnormal{full}}^{t}),\mathbf{y}_{k})\right\|^{2}$ $\displaystyle+\frac{w_{r}}{n_{A}K_{A}+w_{r}n_{B}K_{B}}\sum_{k=K_{A}+1}^{K}\sum_{i=1}^{n_{B}}\left\|\nabla_{\bm{W}_{\textnormal{full}}}\mathcal{L}(f(\mathbf{x}_{k,i};\bm{W}_{\textnormal{full}}^{t}),\mathbf{y}_{k})\right\|^{2}.$ (73) We suppose the two updates have the same scale in expectation; that is, we assume $\eta_{w}=\frac{n_{A}K_{A}+n_{B}K_{B}}{n_{A}K_{A}+w_{r}n_{B}K_{B}}\eta_{s}$.
Then $\eta_{w}\mathbb{E}\left[\mathbf{v}_{w}^{t}\mid\bm{W}_{\textnormal{full}}^{t}\right]=\eta_{s}\mathbb{E}\left[\mathbf{v}_{s}^{t}\mid\bm{W}_{\textnormal{full}}^{t}\right]$. In fact, if $K_{A}\asymp 1$, $K_{B}\asymp 1$, $n_{A}\gg n_{B}$, and $1\ll w_{r}\lesssim\left(n_{A}/n_{B}\right)$, we have $\frac{n_{A}K_{A}+n_{B}K_{B}}{n_{A}K_{A}+w_{r}n_{B}K_{B}}\asymp 1$ and so $\eta_{w}\asymp\eta_{s}$. Now, by comparing (71) with (73), we see that the second moment of $\eta_{s}\mathbf{v}_{s}^{t}$ is potentially much smaller than that of $\eta_{w}\mathbf{v}_{w}^{t}$, since the order of $w_{r}$ in the latter is larger by $1$. For example, let us assume that all the norms of the gradients are of the same order, i.e., $\left\|\nabla_{\bm{W}_{\textnormal{full}}}\mathcal{L}(f(\mathbf{x}_{k,i};\bm{W}_{\textnormal{full}}^{t}),\mathbf{y}_{k})\right\|\asymp a$ for all $k$ and $i$, where $a>0$. Then (73) implies that $\mathbb{E}\left[\|\eta_{s}\mathbf{v}_{s}^{t}\|^{2}\mid\bm{W}_{\textnormal{full}}^{t}\right]\asymp\eta_{s}^{2}a^{2}$, whereas (71) yields that $\mathbb{E}\left[\|\eta_{w}\mathbf{v}_{w}^{t}\|^{2}\mid\bm{W}_{\textnormal{full}}^{t}\right]\asymp\eta_{s}^{2}\frac{n_{A}K_{A}+w_{r}^{2}n_{B}K_{B}}{n_{A}K_{A}+w_{r}n_{B}K_{B}}a^{2}$. Furthermore, if we set $w_{r}\asymp n_{A}/n_{B}$, then $\mathbb{E}\left[\|\eta_{w}\mathbf{v}_{w}^{t}\|^{2}\mid\bm{W}_{\textnormal{full}}^{t}\right]\asymp\eta_{s}^{2}w_{r}a^{2}$. Thus the second moment of $\eta_{w}\mathbf{v}_{w}^{t}$ is around $w_{r}$ times that of $\eta_{s}\mathbf{v}_{s}^{t}$. The same holds for the variance, because $\left\|\eta_{s}\mathbb{E}\left[\mathbf{v}_{s}^{t}\mid\bm{W}_{\textnormal{full}}^{t}\right]\right\|\asymp\eta_{s}a$ and $\mathbb{E}\|\mathbf{x}-\mathbb{E}[\mathbf{x}]\|^{2}=\mathbb{E}\|\mathbf{x}\|^{2}-\|\mathbb{E}[\mathbf{x}]\|^{2}$ for any random vector $\mathbf{x}$. Therefore, we can conclude that the variance of the updates for oversampling is potentially much smaller than that of weight adjusting.
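The variance gap described above can be checked numerically. The sketch below is a minimal stand-in (all class sizes and the random per-example "gradients" are hypothetical, chosen only to have comparable norms): it builds both estimators, matches their step sizes so the expected updates coincide, and compares second moments.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical imbalanced setting: one majority and one minority class
# (K_A = K_B = 1), with per-example "gradients" of comparable norm.
n_A, n_B = 1000, 10
w_r = n_A / n_B
grads = rng.normal(size=(n_A + n_B, 5))

# Weight adjusting: sample uniformly, scale minority gradients by w_r.
p_w = np.full(n_A + n_B, 1.0 / (n_A + n_B))
scale_w = np.concatenate([np.ones(n_A), np.full(n_B, w_r)])

# Oversampling: minority examples sampled w_r times more often, no scaling.
p_s = np.concatenate([np.ones(n_A), np.full(n_B, w_r)])
p_s /= p_s.sum()
scale_s = np.ones(n_A + n_B)

def expected_update(p, scale):
    return (p * scale) @ grads                              # E[v]

def second_moment(p, scale):
    return float((p * scale**2) @ np.sum(grads**2, axis=1))  # E||v||^2

# Match the expected updates: eta_w * E[v_w] = eta_s * E[v_s].
eta_s = 1.0
eta_w = eta_s * (n_A + n_B) / (n_A + w_r * n_B)
assert np.allclose(eta_w * expected_update(p_w, scale_w),
                   eta_s * expected_update(p_s, scale_s))

m_w = eta_w**2 * second_moment(p_w, scale_w)   # weight adjusting
m_s = eta_s**2 * second_moment(p_s, scale_s)   # oversampling
assert m_w > 5 * m_s                           # gap is of order w_r
```

With these (hypothetical) sizes, the weight-adjusting second moment exceeds the oversampling one by an order-$w_{r}$ factor, matching the analysis.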
##### More Discussions on Convex Relaxation and Cross-Entropy Loss. We show Program (7) can also be relaxed as a nuclear norm-constrained convex optimization problem. The result relies heavily on known results on matrix factorization, e.g., [BMP08, HV19]. We will use the equality (see, e.g., [BMP08, Section 2]) that for any matrix $\mathbf{Z}$ and $a>0$, $\|\mathbf{Z}\|_{*}=\inf_{r\in\mathbb{N}_{+}}\inf_{\mathbf{U},\mathbf{V}:\mathbf{U}\mathbf{V}^{\top}=\mathbf{Z}}\frac{a}{2}\|\mathbf{U}\|^{2}+\frac{1}{2a}\|\mathbf{V}\|^{2},$ (74) where $r$ is the number of columns of $\mathbf{U}$ and $\|\cdot\|_{*}$ denotes the nuclear norm. For any feasible solution $\left(\bm{H},\mathbf{W}\right)$ of the original program (7), we define $\mathbf{h}_{k}=\frac{1}{n_{k}}\sum_{i=1}^{n_{k}}\mathbf{h}_{k,i},~{}k\in[K],\quad\tilde{\bm{H}}=[\mathbf{h}_{1},\mathbf{h}_{2},\dots,\mathbf{h}_{K}]\in\mathbb{R}^{p\times K},~{}~{}\text{and}~{}~{}\mathbf{Z}=\mathbf{W}\tilde{\bm{H}}\in\mathbb{R}^{K\times K}.$ (75) We consider the convex program: $\displaystyle\min_{\mathbf{Z}\in\mathbb{R}^{K\times K}}$ $\displaystyle\sum_{k=1}^{K}\frac{n_{k}}{N}\mathcal{L}(\mathbf{Z}_{k},\mathbf{y}_{k})$ (76) $\displaystyle\mathrm{s.t.}$ $\displaystyle\|\mathbf{Z}\|_{*}\leq K\sqrt{E_{H}E_{W}},$ where $\mathbf{Z}_{k}$ denotes the $k$-th column of $\mathbf{Z}$ for $k\in[K]$. ###### Lemma 6. Assume $p\geq K$ and the loss function $\mathcal{L}$ is convex in its first argument. Let $\mathbf{Z}^{\star}$ be a minimizer of the convex program (76). Let $r$ be the rank of $\mathbf{Z}^{\star}$ and consider the thin Singular Value Decomposition (SVD) of $\mathbf{Z}^{\star}$ as $\mathbf{Z}^{\star}=\mathbf{U}^{\star}\mathbf{\Sigma}^{\star}\mathbf{V}^{\star}$.
Introduce two diagonal matrices $\mathbf{\Sigma}_{1}^{\star}$ and $\mathbf{\Sigma}_{2}^{\star}$ with the entries defined as $\mathbf{\Sigma}_{1}^{\star}(i,i)=\left(\frac{E_{W}}{E_{H}}\right)^{1/4}\sqrt{\mathbf{\Sigma}^{\star}(i,i)}$ and $\mathbf{\Sigma}_{2}^{\star}(i,i)=\left(\frac{E_{H}}{E_{W}}\right)^{1/4}\sqrt{\mathbf{\Sigma}^{\star}(i,i)}$ for $i\in[r]$, respectively. Let $\left(\mathbf{H}^{\star},\mathbf{W}^{\star}\right)$ be defined by $\displaystyle\mathbf{W}^{\star}=\mathbf{U}^{\star}\mathbf{\Sigma}_{1}^{\star}\mathbf{P}^{\top},\quad\left[\bm{h}_{1}^{\star},\bm{h}_{2}^{\star},\dots,\bm{h}_{K}^{\star}\right]=\mathbf{P}\mathbf{\Sigma}_{2}^{\star}\mathbf{V}^{\star},$ (77) $\displaystyle\bm{h}_{k,i}^{\star}=\bm{h}_{k}^{\star},\quad k\in[K],~{}i\in[n_{k}],$ where $\mathbf{P}\in\mathbb{R}^{p\times r}$ is any partial orthogonal matrix such that $\mathbf{P}^{\top}\mathbf{P}=\mathbf{I}_{r}$. Then $(\mathbf{H}^{\star},\mathbf{W}^{\star})$ is a minimizer of (7). ###### Proof of Lemma 6. For any feasible solution $\left(\bm{H},\mathbf{W}\right)$ of the original program (7), define $\mathbf{h}_{k}$ for $k\in[K]$, $\tilde{\bm{H}}$, and $\mathbf{Z}$ by (75). We show $\mathbf{Z}$ is a feasible solution of the convex program (76).
In fact, by (74) with $r=p$ and $a=\sqrt{E_{H}/E_{W}}$, we have $\displaystyle\left\|\mathbf{Z}\right\|_{*}$ $\displaystyle\leq\frac{\sqrt{E_{H}/E_{W}}}{2}\left\|\mathbf{W}\right\|^{2}+\frac{\sqrt{E_{W}/E_{H}}}{2}\left\|\tilde{\bm{H}}\right\|^{2}$ $\displaystyle\overset{a}{\leq}\frac{\sqrt{E_{H}/E_{W}}}{2}\sum_{k=1}^{K}\|\mathbf{w}_{k}\|^{2}+\frac{\sqrt{E_{W}/E_{H}}}{2}\sum_{k=1}^{K}\frac{1}{n_{k}}\sum_{i=1}^{n_{k}}\left\|\bm{h}_{k,i}\right\|^{2}$ $\displaystyle\leq K\sqrt{E_{H}E_{W}},$ (78) where $\overset{a}{\leq}$ applies Jensen's inequality as: $\left\|\tilde{\bm{H}}\right\|^{2}=\sum_{k=1}^{K}\|\mathbf{h}_{k}\|^{2}\leq\sum_{k=1}^{K}\frac{1}{n_{k}}\sum_{i=1}^{n_{k}}\left\|\bm{h}_{k,i}\right\|^{2}.$ Let $L_{0}$ be the global minimum of the convex program (76). Since $\mathcal{L}$ is convex in its first argument, by the same argument as in (A.2.1), we obtain, for any feasible solution $\left(\bm{H},\mathbf{W}\right)$, $\displaystyle\frac{1}{N}\sum_{k=1}^{K}\sum_{i=1}^{n_{k}}\mathcal{L}(\mathbf{W}\bm{h}_{k,i},\mathbf{y}_{k})$ $\displaystyle=\sum_{k=1}^{K}\frac{n_{k}}{N}\left[\frac{1}{n_{k}}\sum_{i=1}^{n_{k}}\mathcal{L}(\mathbf{W}\bm{h}_{k,i},\mathbf{y}_{k})\right]$ $\displaystyle\geq\sum_{k=1}^{K}\frac{n_{k}}{N}\mathcal{L}(\mathbf{W}\bm{h}_{k},\mathbf{y}_{k})=\sum_{k=1}^{K}\frac{n_{k}}{N}\mathcal{L}(\mathbf{Z}_{k},\mathbf{y}_{k})\geq L_{0}.$ (79) On the other hand, for the solution $\left(\bm{H}^{\star},\mathbf{W}^{\star}\right)$ defined in (77) with $\mathbf{Z}^{\star}$, we can verify that $\left(\bm{H}^{\star},\mathbf{W}^{\star}\right)$ is a feasible solution for (7) and $\frac{1}{N}\sum_{k=1}^{K}\sum_{i=1}^{n_{k}}\mathcal{L}(\mathbf{W}^{\star}\bm{h}_{k,i}^{\star},\mathbf{y}_{k})=\sum_{k=1}^{K}\frac{n_{k}}{N}\mathcal{L}(\mathbf{Z}_{k}^{\star},\mathbf{y}_{k})=L_{0}.$ (80) Combining (79) and (80), we have that $L_{0}$ is the global minimum of (7) and $(\mathbf{H}^{\star},\mathbf{W}^{\star})$ is a minimizer. ∎ ###### Property 1.
For the cross-entropy loss, we have the following properties. 1. (A) Any minimizer $\mathbf{Z}^{\star}$ of (76) satisfies $\|\mathbf{Z}^{\star}\|_{*}=K\sqrt{E_{H}E_{W}}$. 2. (B) Any minimizer $(\bm{H}^{\star},\mathbf{W}^{\star})$ of (7) satisfies $\frac{1}{K}\sum_{k=1}^{K}\frac{1}{n}\sum_{i=1}^{n}\left\|\bm{h}_{k,i}^{\star}\right\|^{2}=E_{H},\quad\text{and}\quad\quad\frac{1}{K}\sum_{k=1}^{K}\left\|\mathbf{w}_{k}^{\star}\right\|^{2}=E_{W}.$ 3. (C) Any minimizer $\mathbf{X}^{\star}$ of (14) satisfies that $\frac{1}{K}\sum_{k=1}^{K}\mathbf{X}^{\star}(k,k)=E_{H},\quad\text{and}\quad\quad\frac{1}{K}\sum_{k=K+1}^{2K}\mathbf{X}^{\star}(k,k)=E_{W}.$ ###### Proof of Property 1. We first prove (A). Let $\mathbf{Z}^{\star}$ be any minimizer of (76). Then by the Karush–Kuhn–Tucker conditions, there is a pair $(\lambda,\mathbf{\xi})$ with $\lambda\geq 0$ and $\mathbf{\xi}\in\partial\|\mathbf{Z}^{\star}\|_{*}$ such that $\nabla_{\mathbf{Z}}\left[\sum_{k=1}^{K}\frac{n_{k}}{N}\mathcal{L}(\mathbf{Z}_{k}^{\star},\mathbf{y}_{k})\right]+\lambda\mathbf{\xi}=\mathbf{0}^{K\times K},$ where $\partial\|\mathbf{Z}\|_{*}$ denotes the subdifferential of $\|\mathbf{Z}\|_{*}$. For the cross-entropy loss, one can verify that $\nabla_{\mathbf{Z}}\left[\sum_{k=1}^{K}\frac{n_{k}}{N}\mathcal{L}(\mathbf{Z}_{k},\mathbf{y}_{k})\right]\neq\mathbf{0}^{K\times K}$ for all $\mathbf{Z}$. So $\lambda\neq 0$. By the complementary slackness condition, $\mathbf{Z}^{\star}$ must then lie on the boundary of the constraint set, which proves (A). For (B), suppose there is a minimizer $(\bm{H}^{\star},\mathbf{W}^{\star})$ of (7) such that $\frac{1}{K}\sum_{k=1}^{K}\frac{1}{n}\sum_{i=1}^{n}\left\|\bm{h}_{k,i}^{\star}\right\|^{2}<E_{H}$ or $\frac{1}{K}\sum_{k=1}^{K}\left\|\mathbf{w}_{k}^{\star}\right\|^{2}<E_{W}$. Letting $\mathbf{Z}^{\star}$ be defined as in (75), it follows from (79) and (80) that $\mathbf{Z}^{\star}$ is a minimizer of (76). However, by (78), we have $\|\mathbf{Z}^{\star}\|_{*}<K\sqrt{E_{H}E_{W}}$, which contradicts (A).
We obtain (B). For (C), suppose there is a minimizer $\mathbf{X}^{\star}$ of (14) such that $\frac{1}{K}\sum_{k=1}^{K}\mathbf{X}^{\star}(k,k)<E_{H}$ or $\frac{1}{K}\sum_{k=K+1}^{2K}\mathbf{X}^{\star}(k,k)<E_{W}$. Then, letting $(\bm{H}^{\star},\mathbf{W}^{\star})$ be defined as in (15), $(\bm{H}^{\star},\mathbf{W}^{\star})$ is a minimizer of (7) by Theorem 1. However, we have $\frac{1}{K}\sum_{k=1}^{K}\frac{1}{n}\sum_{i=1}^{n}\left\|\bm{h}_{k,i}^{\star}\right\|^{2}<E_{H}$ or $\frac{1}{K}\sum_{k=1}^{K}\left\|\mathbf{w}_{k}^{\star}\right\|^{2}<E_{W}$, which contradicts (B). This completes the proof. ∎
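The SVD-based construction of Lemma 6 can be sanity-checked numerically. In the sketch below, the sizes, the energy levels, and the quarter-power split of the singular values between the two factors are illustrative assumptions; the asserts verify that the recovered pair reproduces the matrix and meets the energy constraints of the original program.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical sizes and feasibility levels (stand-ins for K, p, E_H, E_W).
K, p = 4, 6
E_H, E_W = 1.0, 2.0

# A stand-in for a minimizer Z* with nuclear norm exactly K*sqrt(E_H*E_W).
Z = rng.normal(size=(K, K))
Z *= K * np.sqrt(E_H * E_W) / np.linalg.norm(Z, ord="nuc")

U, s, Vt = np.linalg.svd(Z, full_matrices=False)  # thin SVD: Z = U @ diag(s) @ Vt
r = int(np.sum(s > 1e-12))
U, s, Vt = U[:, :r], s[:r], Vt[:r, :]

# Split each singular value between W and H with quarter-power weights.
S1 = (E_W / E_H) ** 0.25 * np.sqrt(s)             # classifier part
S2 = (E_H / E_W) ** 0.25 * np.sqrt(s)             # feature part

P = np.eye(p)[:, :r]                              # any partial orthogonal p x r matrix
W = (U * S1) @ P.T                                # K x p, rows are the w_k
H = P @ (S2[:, None] * Vt)                        # p x K, columns are the h_k

assert np.allclose(W @ H, Z)                         # factorization reproduces Z*
assert np.mean(np.sum(W**2, axis=1)) <= E_W + 1e-9   # (1/K) sum_k ||w_k||^2 <= E_W
assert np.mean(np.sum(H**2, axis=0)) <= E_H + 1e-9   # (1/K) sum_k ||h_k||^2 <= E_H
```

With the nuclear norm saturated, both energy constraints hold with equality, consistent with Property 1.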
# Time-reparametrization invariances, multithermalization and the Parisi scheme Jorge Kurchan Laboratoire de Physique de l’Ecole normale supérieure, ENS, Université PSL, CNRS, Sorbonne Université, Université de Paris, F-75005 Paris, France ###### Abstract The Parisi scheme for equilibrium and the corresponding slow dynamics with multithermalization – same temperature common to all observables, different temperatures only possible at widely separated timescales – imply one another. Consistency requires that two systems brought into infinitesimal coupling be able to rearrange their timescales in order that all their temperatures match: this time reorganisation is only possible because the systems have a set of time-reparametrization invariances, which are thus seen to be an essential component of the scenario. ###### Contents 1. I Introduction 1. I.1 Equilibrium 2. I.2 Dynamics 2. II The framework 1. II.1 Factoring out time - general kinematic constraints. 2. II.2 Three examples 3. II.3 Time-reparametrizations 4. II.4 Dynamic multithermalization properties 3. III Connections between dynamic and Parisi scheme 1. III.1 A first, formal bridge between dynamic and static (replica) calculations 4. IV Properties derived from stochastic stability 1. IV.1 Same temperatures for all observables implies separation of timescales 2. IV.2 Relation between statics and dynamics in finite dimensions 5. V The role of reparametrization invariance(s) 1. V.1 How do the families of reparametrization invariances come about 2. V.2 Two glasses and a wormhole 6. VI Conclusion ## I Introduction A finite dimensional system whose equilibrium solution follows the Parisi scheme Mézard _et al._ (1987) will take an infinite time to reach this equilibrium starting from a random configuration. It may also be driven into an out-of-equilibrium steady state by an infinitesimal drive, such as shear Thalmann (2001); Berthier _et al._ (2000), or a time-dependence of the disorder Horner (1992).
If the relaxation times are long, or, in a steady state, if the drive is weak, the dynamics are slow: this is the regime we are interested in. The idea of this paper is composed of two parts: $\bullet$ The out-of-equilibrium dynamics under these circumstances is a very specific one Sompolinsky and Zippelius (1982, 1981); Horner (1992); Cugliandolo and Kurchan (1993, 1994); Franz and Mézard (1994). At given times one may define a temperature with a thermodynamic meaning Cugliandolo _et al._ (1997); it is the same for all observables. Different temperatures are possible, but in different ‘scales’, a notion one has to define. We refer to this situation as ‘multithermalization’ Contucci _et al._ (2019, 2020). Rather unexpectedly, the temperatures involved in the slow dynamics coincide with a series of parameters computed for equilibrium in the Parisi scheme. This may be argued on the basis of a strategy devised by Franz, Mézard, Parisi and Peliti (FMPP) Franz _et al._ (1998, 1999) years ago, in a remarkable work to which we will refer throughout this paper. $\bullet$ Consider two such systems brought into weak contact from the beginning, for example two different lattice models, coupled locally so as to obtain a single model with two sublattices. Assume that at the same times the separate systems have non-coincident temperatures. Does this mean that the coupled system, for which there is no distinction between observables of one or the other system, violates the multithermalization scenario – and, a fortiori, the Parisi scheme? If this were so, both would be fragile to the point of irrelevance. The answer is surprising: the timescales of the systems rearrange so that different temperatures happen at different scales; thus, the combined system conforms to the scenario.
The fact that this needs to happen for infinitesimal coupling means that the system needs to be ‘soft’ with respect to time-rearrangements of each temperature separately: in other words, it has to have independent time-reparametrization invariances in the slow dynamics limit. Such invariances were first described by Sompolinsky and Zippelius Sompolinsky and Zippelius (1982, 1981) some forty years ago, and have recently played a crucial role in the interpretation of the SYK model Sachdev and Ye (1993) as a toy model of holography Kitaev (2015); Maldacena and Stanford (2016). Now, it is quite natural to assume that time-reparametrization invariances, an independent one for each temperature, will only be possible if such temperatures happen at widely separated timescales, because then one may ‘move each timescale around’ without changing their mutual interaction: overlapping timescales would make this invariance unlikely. This hierarchy in times, already found in mean-field problems, then seems like a general necessity for having a unique temperature for all observables at given times, and ultimately for the correspondence between dynamics and the Parisi scheme. The reader who is convinced by this heuristic argument may skip sections III and IV. In those sections, we extend the procedure of FMPP to confirm, within their framework, that separation of timescales is indeed necessary for the agreement between dynamics and the Parisi scheme. Time reparametrizations and the unambiguous definition of timescales, when we are dealing with observables that depend on two or more times, require some clarifying definitions, a large part of which have already been discussed in the past.
Most importantly, it is convenient to separate those quantities that are reparametrization-invariant from the reparametrizations themselves, a procedure that may even be implemented experimentally: see Castillo _et al._ (2002, 2003); Chamon _et al._ (2004, 2002); Chamon and Cugliandolo (2007); Chamon _et al._ (2011). ### I.1 Equilibrium The Parisi construction Mézard _et al._ (1987) involves the computation of the Boltzmann-Gibbs distribution, averaged over quenched disorder. The measure is given by an infinite set of pure states, each state $\alpha$ being a set of configurations – just like the positive and negative magnetization distributions in a ferromagnet – inside which a variable $s_{i}$ has expectation value $\langle s_{i}\rangle_{\alpha}$ (e.g. in a ferromagnet, $\langle s_{i}\rangle_{\pm}=\pm m$). The overlap between two states is, for example for a spin system, $q^{J}_{\alpha\beta}=\frac{1}{N}\sum_{i}\langle s_{i}\rangle_{\alpha}\langle s_{i}\rangle_{\beta}$, where the superscript $J$ signifies that we have not yet averaged over disorder. Once we do, we obtain $q_{\alpha\beta}=\overline{q^{J}_{\alpha\beta}}$. A histogram of the $q^{J}_{\alpha\beta}$ for a given disorder is mostly dominated by a few spikes, while the averaged histogram of the $q_{\alpha\beta}$ is the Parisi function $P(q)$, a direct product of the formalism. The same information is contained in the primitive $x(q)$, such that $\frac{dx}{dq}=P(q)$. The other hallmark of the Parisi ansatz is the ‘ultrametricity’ property: for any three states at mutual overlaps $q_{12},q_{23},q_{31}$, the two smallest overlaps are equal (all triangles are isosceles): $q_{13}=\min(q_{12},q_{23})$ whenever $q_{12}\neq q_{23}$. In fact, the ultrametric solution may be proven Parisi and Ricci-Tersenghi (2000) from two hypotheses: i) stochastic stability (see Ref.
Aizenman and Contucci (1998)): the solution keeps its form under small random perturbations, and ii) overlap equivalence Parisi and Ricci-Tersenghi (2000); Contucci _et al._ (2006): all the mutual information about a pair of equilibrium configurations is encoded in their mutual distance or overlap. In other words, we may always write the correlation of an observable in two states as a function of that of another observable in the same states: $\bar{q}_{ab}=g(q_{ab})$, where $g$ is a smooth function. In what follows, when we refer to the ‘Parisi scheme’, we assume these two properties. ### I.2 Dynamics In the dynamic approach we have an evolving system: $-m_{i}{\ddot{s}}_{i}-\frac{\partial V({\bf s})}{\partial s_{i}}=\underbrace{\Gamma_{0}{\dot{s}}_{i}-\eta_{i}}_{bath}$ (1) where $\eta_{i}$ are uncorrelated Gaussian white noises with variance $2\Gamma_{0}T$ and $\Gamma_{0}$ is the strength of the coupling to the ‘white’ bath. This dynamics is guaranteed to eventually reach equilibrium, although, in the systems that concern us, in times that may diverge with $N$. We are interested in various correlation functions $C_{AB}(t,t^{\prime})$ and response functions $R_{AB}(t,t^{\prime})$ (here, and in what follows, always $t\geq t^{\prime}$), the latter being the average response of $A$ at time $t$ to a kick of $B$ at time $t^{\prime}$. For example: $C_{ij}(t,t^{\prime})=\langle s_{i}(t)s_{j}(t^{\prime})\rangle\qquad;\qquad R_{ij}(t,t^{\prime})=\left\langle\frac{\delta s_{i}(t)}{\delta h_{j}(t^{\prime})}\right\rangle$ (2) where $h_{i}$ is a field conjugate to $s_{i}$.
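For an exactly solvable reference case, the equilibrium relation between these correlation and response functions, the fluctuation-dissipation theorem $T\,R(t,t^{\prime})=\partial C(t,t^{\prime})/\partial t^{\prime}$, can be checked numerically on an overdamped Ornstein-Uhlenbeck toy model (a hypothetical stand-in for Eq. (1), with unit friction and an assumed spring constant $k$):

```python
import numpy as np

# Equilibrium correlation and response of an overdamped Ornstein-Uhlenbeck
# particle (a stand-in for Eq. (1)):
#   C(t, t') = (T/k) exp(-k (t - t')),   R(t, t') = exp(-k (t - t'))  for t > t'.
T, k = 0.7, 2.0          # assumed bath temperature and spring constant
t = 5.0
tp = np.linspace(0.0, 4.9, 400)          # grid of earlier times t'

C = (T / k) * np.exp(-k * (t - tp))
R = np.exp(-k * (t - tp))
dC_dtp = np.gradient(C, tp)              # numerical dC(t, t')/dt'

X = T * R / dC_dtp                       # fluctuation-dissipation ratio
assert np.allclose(X[1:-1], 1.0, rtol=1e-3)   # equilibrium: X = 1, i.e. T_AB = T
```

In the glassy regime discussed below this ratio departs from one, which is precisely what the effective temperatures quantify.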
We shall often use: $C(t,t^{\prime})=\frac{1}{N}\sum_{i}C_{ii}(t,t^{\prime})\qquad;\qquad R(t,t^{\prime})=\frac{1}{N}\sum_{i}R_{ii}(t,t^{\prime})$ (3) $\chi(t,t^{\prime})=\theta(t-t^{\prime})\int_{-\infty}^{t^{\prime}}dt^{\prime\prime}\;R(t,t^{\prime\prime})$ (4) (note that the definition with these limits of integration is rather unusual) and the symmetrized version $\chi_{s}(t,t^{\prime})=\chi(t,t^{\prime})+\chi(t^{\prime},t)$ (5) In the spirit of the fluctuation-dissipation theorem, we will define effective temperatures Cugliandolo _et al._ (1997) as: $T_{AB}(t,t^{\prime})R_{AB}(t,t^{\prime})=\frac{\partial C_{AB}(t,t^{\prime})}{\partial t^{\prime}}\qquad\qquad;\qquad\qquad T_{AB}(t,t^{\prime})=\frac{T}{X_{AB}(t,t^{\prime})}$ (6) In equilibrium $X=1$ and $T_{AB}(t,t^{\prime})=T$, the bath’s temperature. When there is time-translational invariance (TTI), $\chi(t-t^{\prime})=\chi(\tau)=\int_{\tau}^{\infty}d\tau^{\prime}\;R(\tau^{\prime})\qquad;\qquad R(\tau)=-\chi^{\prime}(\tau)\;\;for\;\;(\tau>0)$ (7) and a short calculation gives for the Fourier transforms: $i\omega\hat{\chi}_{s}(\omega)=[\hat{R}(\omega)-\hat{R}^{*}(\omega)]$ (8) We may consider many different settings for the dynamics, but here we shall only be concerned with the limit of slow dynamics, which may be achieved in at least three ways: * • Aging Castellani and Cavagna (2005): We quench the system from a high to a low temperature, at which the equilibration time is infinite. The system ‘ages’: it evolves slower and slower as the time since the quench elapses. The two-point functions never fully become functions of time-differences. The large parameter is the smallest ‘waiting’ time since the quench, $t^{\prime}=t_{w}$, which modulates the decay in $\tau=t-t^{\prime}$. A typical example is $C(t,t^{\prime})=C\left(\frac{\tau}{t_{w}}\right)$.
* • Driven system Thalmann (2001); Berthier _et al._ (2000): When the system is subjected to forces not deriving from a potential – shear, for example – it is an experimental fact that aging is interrupted, in the sense that all functions become time-translational invariant, but slow. The timescale of the decay of correlations is then controlled by the driving rate $\sigma$: the weaker the drive, the slower the decay: $C(t-t^{\prime})=C\left(\tau\sigma\right)$ * • Time-dependent disorder Horner (1992): Another way to make a system with disorder time-translational invariant is to change the disorder slowly: the small parameter is the timescale $\tau_{0}$ of the change of the disorder: $C(t,t^{\prime})=C\left(\frac{\tau}{\tau_{0}}\right)$. The reason is simple: the system optimises against a constantly changing target. In the case of mean-field glasses, we know that the three situations above correspond, in the limit of slow dynamics, to different time-reparametrizations of the same solution. We shall discuss below the condition for this being the case in finite dimensions. In what follows, we will refer briefly to the limit of either long waiting times, small shear strains, or slow variation of parameters, always taken after the thermodynamic limit, as ‘asymptotic’. ## II The framework ### II.1 Factoring out time - general kinematic constraints. Although one may ask about the time-dependence of any quantity, it turns out that there is a particularly significant sub-ensemble of dynamic quantities: those where time is factored out Cugliandolo and Kurchan (1994), and which are thus invariant under reparametrizations $t\rightarrow h(t)$. This is achieved, as we shall see, by using a single correlation as a ‘clock’: * • Given any dynamic quantity $X(t,t^{\prime})$, define for large times $X(t,t^{\prime})\rightarrow X[C(t,t^{\prime})]=\lim_{t^{\prime}\rightarrow\infty}X[C(t,t^{\prime}),t^{\prime}]$. We shall focus on cases in which this limit is non-trivial.
This also implies that the integrated response becomes a function of the correlation: $\chi(t,t^{\prime})\rightarrow\chi[C](t,t^{\prime})$. * • Given three long, successive times $t_{1}<t_{2}<t_{3}$, and the corresponding correlations $C_{21},C_{32},C_{31}$, define for large times $C_{31}=f(C_{21};C_{32})=\lim_{t_{1}\rightarrow\infty}f(C_{21};C_{32},t_{1})$, a ‘triangle relation’. It is easy to show that $f(a,b)$ is an associative function of $a$ and $b$ (see the construction in Fig. 1). Similarly for the remanent magnetizations: $\chi(C_{31})=\tilde{f}[\chi(C_{21});\chi(C_{32})]$, i.e. the triangle relations $\tilde{f}$ and $f$ are isomorphic. * • Given any two correlations of the system, $\bar{C}(t,t^{\prime})$ and $C(t,t^{\prime})$, we write, again in the large-times limit, $\bar{C}\rightarrow g(C)$ for some $g$. Figure 1: The proof that $C_{41}=f[C_{43},f(C_{32},C_{21})]=f[f(C_{43},C_{32}),C_{21}]$: the function $f$ is associative. The function $f$ is associative and may be classified as such: it is a purely ‘kinematic’ construction, independent of the dynamics. It is shown in Cugliandolo and Kurchan (1994) that there are ‘skeleton values’ $q_{r}$ of $C$ which delimit correlation scales ${\cal{S}}_{C}$, such that: * • If $C_{21}$ and $C_{32}$ are both in the same scale, then $f(C_{21},C_{32})=g^{-1}[g(C_{21})+g(C_{32})]$, i.e. $f$ is isomorphic to the sum (or the product). The function $g$ is different for each interval. * • If $C_{21}$ and $C_{32}$ are in different scales, then $f(C_{21},C_{32})=\min[g(C_{21}),g(C_{32})]$. This means that the relaxations in different scales take place on very different timescales, so that the time for relaxing within one scale is negligible with respect to the other. * • From this it follows that there is always a time-reparametrization that makes the correlation within a scale time-translational invariant, that is: $C_{21}=C(t_{2}-t_{1})$.
For example, if a correlation is of the form $C\left(\frac{t^{\prime}}{t}\right)$, then $h(t)\rightarrow\ln t$ is such a mapping. Note that if there is more than one scale, the times are reparametrized differently for the correlations in each scale (see the examples below). ### II.2 Three examples #### Two scales This is the most usual case. An example is when the correlation satisfies $0<C\leq 1$ and there is a value $q$ such that for the interval $q\leq C\leq 1$ the correlation is much faster than for the interval $0\leq C<q$. We have, for example: * • For a stationary case $C(t-t^{\prime})=(1-q)\;\bar{A}(t-t^{\prime})+q\;\bar{B}\left(\frac{t-t^{\prime}}{H(\tau_{0})}\right)$, where $H$ is a growing function of $\tau_{0}$. * • For an aging case $t>t^{\prime}$: $C(t,t^{\prime})=(1-q)\;A(t-t^{\prime})+q\;B\left(\frac{L(t^{\prime})}{L(t)}\right)=(1-q)\;A(t-t^{\prime})+q\;B\left(e^{h(t)-h(t^{\prime})}\right)$ where $(A,B,\bar{A},\bar{B})$ are functions decreasing from one to zero as their argument goes from zero to infinity. We have put $h(t)=\ln L(t)$ to emphasize that the form may be brought into a time-translational invariant form via a reparametrization. The stationary case has separated timescales as $\tau_{0}\rightarrow\infty$, and the aging one at long times $t$. The aging form is in particular the one of domain growth, where the fast part $A(t-t^{\prime})$ is the relaxation within domains, and the slow part is a function of the domain length $L(t)$. It is easy to see that in this limit $f$ is isomorphic to the addition within each of the scales $(0,q)$ and $(q,1)$, and is the function $\min$ for correlations in different scales.
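The skeleton forms of the triangle relation, and the two-scale aging example above, can be checked numerically. A minimal sketch; the specific choices of $g$, $A$, $B$ and $q$ below are illustrative assumptions, not taken from the text:

```python
import math
import itertools

def f_sum(a, b, g=lambda c: -math.log(c), ginv=lambda x: math.exp(-x)):
    """Within one scale: f(a,b) = g^{-1}[g(a)+g(b)] (here: the product)."""
    return ginv(g(a) + g(b))

def f_min(a, b):
    """Across scales: f reduces to a min (taking g = identity for simplicity)."""
    return min(a, b)

# Both skeleton forms are associative, as the construction of Fig. 1 requires.
for f in (f_sum, f_min):
    for a, b, c in itertools.product([0.1, 0.3, 0.5, 0.9], repeat=3):
        assert abs(f(f(a, b), c) - f(a, f(b, c))) < 1e-12

# Two-scale aging correlation C(t,t') = (1-q) A(t-t') + q B(t'/t),
# with the assumed choices A(x) = exp(-x), B(u) = u, q = 1/2.
q = 0.5
def C(t2, t1):
    return (1 - q) * math.exp(-(t2 - t1)) + q * (t1 / t2)

# Both time pairs deep in the slow scale: f is isomorphic to the product
# via g(C) = ln(C/q), i.e. C31/q = (C21/q)*(C32/q).
t1, t2, t3 = 1e6, 2e6, 4e6
C21, C32, C31 = C(t2, t1), C(t3, t2), C(t3, t1)
assert abs(C31 / q - (C21 / q) * (C32 / q)) < 1e-9

# One pair in the fast scale (t2 - t1 = O(1)), the other in the slow one:
# the triangle relation reduces to min, dominated by the slow-scale value.
t1 = 1e6; t2 = t1 + 1.0; t3 = 2 * t2
C21, C32, C31 = C(t2, t1), C(t3, t2), C(t3, t1)
assert abs(C31 - min(C21, C32)) < 1e-6
```

Within the slow scale the composition is multiplicative, while a fast–slow pair is controlled by the smaller, slow-scale correlation.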
#### Three scales Again, the correlation satisfies $0<C\leq 1$ and there are two values $q_{0}$ and $q_{1}$ such that for the interval $q_{1}\leq C\leq 1$ the correlation is much faster than for the interval $q_{0}\leq C<q_{1}$, itself much faster than $0\leq C<q_{0}$. We have, for example, for the stationary state: * • $C(t-t^{\prime})=(1-q_{1})\;\bar{A}(t-t^{\prime})+(q_{1}-q_{0})\;\bar{B}\left(\frac{t-t^{\prime}}{H(\tau_{0})}\right)+q_{0}\;\bar{\bar{B}}\left(\frac{t-t^{\prime}}{\bar{H}({\tau_{0}})}\right)$ where $\bar{\bar{B}}$ also decreases from one to zero as its argument goes from zero to infinity. The timescales are nested as $\tau_{0}\rightarrow\infty$: $\bar{H}(\tau_{0})\gg H(\tau_{0})\gg 1$. The function $f$ is the function $\min$ for correlations in any two different scales. #### A continuum of scales An important case is when there is a dense set of correlation values for which $f(C_{21},C_{32})=\min[g(C_{21}),g(C_{32})]$ (9) holds. An example is: $C(t,t^{\prime})={\cal{C}}\left(\frac{\ln(t-t^{\prime}+t_{0})}{\ln\tau_{0}}\right)$ (10) where $t_{0}$ is a constant. This form satisfies (9) when $\tau_{0}\rightarrow\infty$ (note however that, confusingly, $C(t,t^{\prime})={\cal{C}}\left(\frac{\ln t^{\prime}}{\ln t}\right)$ is only one scale!). It may be viewed as an infinite superposition of scales, e.g.: $C(\tau)=\int d\nu\;[\tau_{0}]^{\nu}\;e^{-\tau_{0}^{-\nu}(\tau+t_{0})}\;{\cal{C}}(-\nu)\propto{\cal{C}}\left(\frac{\ln(\tau+t_{0})}{\ln\tau_{0}}\right)$ (11) where we have evaluated the integral by saddle-point over $\nu$. ### II.3 Time-reparametrizations In the sections above, we have written everything using one particular correlation as a ‘clock’: time-dependencies are mediated by that correlation. Note that this is also possible with higher-order correlations. We should now define clearly which time-reparametrizations we shall consider. The answer is simple: those that preserve the triangle relations.
For example, if the system has two scales, it is easy to see that a possibility is: $\\{t,t^{\prime}\\}\rightarrow\\{t,t^{\prime}\\}\;\;{\mbox{for}}\;\;q\leq C\leq 1\;\;{\mbox{and}}\;\;\\{t,t^{\prime}\\}\rightarrow\\{{h}(t),h(t^{\prime})\\}\;\;{\mbox{for}}\;\;0\leq C\leq q$ (12) Note that i) we have two different reparametrizations for the two scales, and ii) we do not reparametrize the fast (ultraviolet) scale, because it is the one for which reparametrization invariance of the action does not hold – time-derivatives are not negligible there. For two or more scales, this is easily generalizable to $\\{t,t^{\prime}\\}\rightarrow\\{h_{\cal{S}}(t),h_{\cal{S}}(t^{\prime})\\}$ (13) where the reparametrization depends on the scale ${\cal{S}}$ to which $C(t,t^{\prime})$ belongs. Let us note that this may allow more freedom than the reparametrizations found in models such as SYK, because each scale is reparametrized separately. This is most clearly seen in the case in which there is a continuum of scales, for example Eq (10). In that case, $f$ is invariant with respect to reparametrization of time-differences $(t-t^{\prime})\rightarrow H(t-t^{\prime})$ for any smooth $H$, something that does not happen for discrete scales.
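Both the continuum-of-scales form (10) and the per-scale reparametrization (12)-(13) can be illustrated numerically; in the sketch below the scaling functions ${\cal C}(x)=e^{-x}$ and $B(u)=u$ are illustrative assumptions:

```python
import math

# Eq (10): C(t,t') = Ccal(ln(t-t'+t0)/ln(tau0)), with Ccal(x) = exp(-x)
# as an assumed decreasing scaling function.
t0 = 1.0
def C(t, tp, tau0):
    return math.exp(-math.log(t - tp + t0) / math.log(tau0))

# As tau0 grows, the triangle relation approaches f = min, Eq (9).
errs = []
for tau0 in (1e3, 1e6, 1e12):
    t1 = 0.0
    t2 = t1 + tau0 ** 0.3              # a short time difference
    t3 = t2 + tau0 ** 0.8              # a much longer one
    C21, C32, C31 = C(t2, t1, tau0), C(t3, t2, tau0), C(t3, t1, tau0)
    errs.append(abs(C31 - min(C21, C32)))
assert errs[0] > errs[1] > errs[2]     # min is approached as tau0 -> infinity

# Reparametrization within a scale, Eqs (12)-(13): the slow part of the
# aging form, C_slow(t,t') = q*B(t'/t) with B(u) = u (an assumption),
# depends only on h(t)-h(t') under the mapping h(t) = ln t.
q = 0.5
vals = [q * tp / t for t, tp in [(10.0, 5.0), (1e4, 5e3), (1e8, 5e7)]]
assert max(vals) - min(vals) < 1e-12   # all pairs have h(t)-h(t') = ln 2
```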
#### An example of factoring times away: For a triangle of correlations $[C(t_{int},t_{min}),C(t_{max},t_{int}),C(t_{max},t_{min})]$, we define, asymptotically Cugliandolo and Kurchan (1994): $\displaystyle C(t_{max},t_{min})$ $\displaystyle\rightarrow$ $\displaystyle f[C(t_{int},t_{min}),C(t_{max},t_{int})]$ (14) $\displaystyle C(t_{max},t_{int})$ $\displaystyle\rightarrow$ $\displaystyle\bar{f}[C(t_{int},t_{min}),C(t_{max},t_{min})]\geq C(t_{max},t_{min})$ (15) $\displaystyle C(t_{int},t_{min})$ $\displaystyle\rightarrow$ $\displaystyle\bar{\bar{f}}[C(t_{max},t_{int}),C(t_{max},t_{min})]\geq C(t_{max},t_{min})$ (16) When $f=\min$ we have: $\displaystyle C(t_{max},t_{int})$ $\displaystyle\rightarrow$ $\displaystyle C(t_{max},t_{min})\qquad{\mbox{if}}\qquad C(t_{max},t_{int})\leq C(t_{int},t_{min})$ (17) $\displaystyle C(t_{int},t_{min})$ $\displaystyle\rightarrow$ $\displaystyle C(t_{max},t_{min})\qquad{\mbox{if}}\qquad C(t_{int},t_{min})\leq C(t_{max},t_{int})$ (18) Let us see how we use these in an example. In computing the dynamic diagrams we will meet later, we shall need to calculate convolutions such as: $\displaystyle I(t,t^{\prime})$ $\displaystyle=$ $\displaystyle\int_{-\infty}^{t^{\prime}}C(t,t^{\prime\prime})R(t^{\prime},t^{\prime\prime})\;dt^{\prime\prime}$ (19) Introducing the definition of $X$, namely $R(t^{\prime},t^{\prime\prime})=X(t^{\prime},t^{\prime\prime})\frac{\partial C}{\partial t^{\prime\prime}}(t^{\prime},t^{\prime\prime})$, this becomes: $\displaystyle\int_{-\infty}^{t^{\prime}}C(t,t^{\prime\prime})X(t^{\prime},t^{\prime\prime})\frac{\partial C}{\partial t^{\prime\prime}}(t^{\prime},t^{\prime\prime})\;dt^{\prime\prime}$ (20) Now we may factor times away: $\displaystyle\bar{I}(C)$ $\displaystyle=$ $\displaystyle\int_{0}^{C}C^{\prime}\;X[\bar{f}(C^{\prime},C)]\frac{\partial\bar{f}(C^{\prime},C)}{\partial C^{\prime}}\;dC^{\prime}\;$ (21) It turns out that all integrals coming from diagrammatic expansions may be treated this way. ### II.4 Dynamic multithermalization properties The properties above do not really use any property of the dynamics, except that it should have a slow regime.
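As a sanity check of the factoring of the convolution into a correlation-space integral, consider the simplest single-scale equilibrium case — an assumption of this sketch: $C(\tau)=e^{-\tau}$ with constant $X=\beta$, for which the asymptotic value of $C(t^{\prime},t^{\prime\prime})$ at fixed $C^{\prime}=C(t,t^{\prime\prime})$ and $C=C(t,t^{\prime})$ is simply $C^{\prime}/C$. Both forms of the integral then equal $\beta C/2$:

```python
import numpy as np

def trapz(y, x):
    # simple trapezoidal rule (avoids relying on a specific NumPy version)
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

beta = 2.0
t, tp = 3.0, 1.0                       # t > t'
Cttp = np.exp(-(t - tp))               # C = C(t,t')

# Direct convolution: I = ∫_{-inf}^{t'} C(t,t'') R(t',t'') dt'',
# with R = X dC/dt'' and constant X = beta.
tpp = np.linspace(tp - 40.0, tp, 400001)
I_direct = trapz(np.exp(-(t - tpp)) * beta * np.exp(-(tp - tpp)), tpp)

# Time-factored form: Ibar(C) = ∫_0^C C' * beta * d(C'/C) = beta*C/2
Cp = np.linspace(0.0, Cttp, 400001)
I_factored = trapz(Cp * beta / Cttp, Cp)

assert abs(I_direct - beta * Cttp / 2) < 1e-6
assert abs(I_factored - beta * Cttp / 2) < 1e-9
```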
The one we discuss here instead implies a definite assumption on the dynamics. It is inspired by the mean-field solution. #### Thermalization as a residual symmetry Let us construct the path-integral generator Martin _et al._ (1973); Janssen (1976); De Dominicis (1976) associated to the equation of motion (1). Introducing a Fourier variable $\hat{s}_{i}$ and integrating over noise, we get: $Z=\int D[s]D[\hat{s}]\;\exp\left\\{\int dt\sum_{i}\hat{s}_{i}\left(-m_{i}{\ddot{s}}_{i}-\frac{\partial V({\bf s})}{\partial s_{i}}-\Gamma_{0}({\dot{s}}_{i}+T\hat{s}_{i})\right)\right\\}$ (22) in the Ito convention, which means the determinant term is absent. From here, we read the correlations and response functions: $C_{ij}(t,t^{\prime})=\langle s_{i}(t)s_{j}(t^{\prime})\rangle\qquad;\qquad R_{ij}(t,t^{\prime})=\left\langle s_{i}(t)\hat{s}_{j}(t^{\prime})\right\rangle=\frac{\delta\langle s_{i}(t)\rangle}{\delta h_{j}(t^{\prime})}$ (23) where $h_{j}$ is a field conjugate to $s_{j}$. Detailed balance implies that the time-reversal symmetry $s_{i}(t)\rightarrow s_{i}(-t)\qquad\qquad;\qquad\qquad\hat{s}_{i}(t)\rightarrow\hat{s}_{i}(-t)+\beta\dot{s}_{i}(-t)$ (24) leaves the integral invariant up to a boundary term in time. This is the time-reversal detailed-balance property. In particular, if the symmetry is unbroken this implies that all functions depend on time-differences and: $C_{AB}(t-t^{\prime})=C_{BA}(t-t^{\prime})\qquad;\qquad\chi_{AB}(t-t^{\prime})=\beta C_{AB}(t-t^{\prime})-\chi_{AB}(t^{\prime}-t)$ (25) If the bath is absent, $\Gamma=0$ and this symmetry holds for any $\beta$ corresponding to the energy of the initial condition. The presence of the bath breaks the larger symmetry to a subgroup Cugliandolo and Kurchan (1999a, b), given by the $\beta$ of the bath. As we shall see below, this may happen spontaneously in each timescale, and with a different $\beta$: we know for sure that this scenario is valid within mean-field, and we shall discuss below the implications of its holding for finite-dimensional systems.
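When the time-reversal symmetry is unbroken, a consequence is the familiar fluctuation-dissipation relation for the integrated response, $\chi(\tau)=\beta[C(0)-C(\tau)]$. A minimal sketch on an overdamped Ornstein-Uhlenbeck particle; the harmonic potential $V=ks^{2}/2$, $\Gamma_{0}=1$, and all parameter values are assumptions of this sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
k, T, h = 1.0, 0.5, 1.0                # stiffness, bath temperature, probe field
dt, nstep, nsamp = 0.005, 600, 20000   # t_max = 3

# Equilibrated initial condition, then switch on a constant field h at t = 0
# and measure the integrated response chi(t) = <s(t)>_h / h.
s = rng.normal(0.0, np.sqrt(T / k), nsamp)
chi_sim = []
for _ in range(nstep):
    s += (-k * s + h) * dt + np.sqrt(2 * T * dt) * rng.normal(size=nsamp)
    chi_sim.append(s.mean() / h)

# FDT prediction: C(tau) = (T/k) exp(-k tau), so
# chi(tau) = beta*(C(0) - C(tau)) = (1/k)*(1 - exp(-k tau)).
tau = dt * np.arange(1, nstep + 1)
chi_fdt = (1.0 / k) * (1.0 - np.exp(-k * tau))
assert np.max(np.abs(np.array(chi_sim) - chi_fdt)) < 0.05
```

For this linear system the response is exact at any field strength, so a sizeable $h$ can be used to suppress sampling noise.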
#### Multi-thermalization as a symmetry-breaking scheme We have seen above that within a correlation scale, the correlation and response functions are such that the ‘triangle relation’ is smooth, and hence isomorphic to the sum (or the product) $g(C_{31})=g(C_{32})+g(C_{21})\qquad;\qquad\chi(t,t^{\prime})\rightarrow\chi[C(t,t^{\prime})]$ (26) This implies that they may be written as: $C(t,t^{\prime})\rightarrow{\cal{C}}[h(t)-h(t^{\prime})]={\cal{C}}[h-h^{\prime}]\qquad;\qquad\chi(t,t^{\prime})\rightarrow{\cal{K}}[h(t)-h(t^{\prime})]={\cal{K}}[h-h^{\prime}]$ (27) and similarly for all correlations and response functions of any number of times. For example, in an aging situation, $h(t)\sim\ln t$ yields a $\left(\frac{t^{\prime}}{t}\right)$-dependence, an ansatz often made. If a timescale is isolated, we get to a point at which the ‘kinetic’ terms may be neglected. Having assumed that within a scale all functions depend on differences of $h$’s, in terms of these we have a corresponding ‘time’ reversal symmetry $h\rightarrow-h$ associated to some $\tilde{\beta}$. This implies: ${\cal{C}}_{AB}(h-h^{\prime})={\cal{C}}_{AB}(h^{\prime}-h)\qquad;\qquad{\cal{K}}_{AB}(h-h^{\prime})=\tilde{\beta}{\cal{C}}_{AB}(h-h^{\prime})-{\cal{K}}_{AB}(h^{\prime}-h)$ (28) All in all, we have these symmetries parametrized by $\beta_{\cal{S}}$ in each timescale ${\cal{S}}$, and: $T(C)\neq T(C^{\prime})\qquad\Rightarrow\qquad f(C,C^{\prime})=\min(C,C^{\prime})$ (29) In particular, when there is a continuum of timescales, there is in general a continuum of temperatures. There is a well-defined temperature for all observables within each scale, plus Onsager reciprocity. This is then a scheme of symmetry breaking to a smaller group Contucci _et al._ (2019, 2020), labeled by the temperatures of each scale: as such it is consistent, but of course it need not be the correct solution of a given problem.
The Parisi scheme for statics is also a symmetry-breaking scheme into subgroups Mézard _et al._ (1987) (of the permutation group of a non-integer number of elements). One might suspect that there is a correspondence between the two schemes. Both are known to apply to mean-field statics and dynamics. In what follows we shall argue, within the assumption of stochastic stability Aizenman and Contucci (1998) (w.r.t. long-range perturbations), that this correspondence is a necessity in finite dimensions. ## III Connections between dynamic and Parisi scheme ### III.1 A first, formal bridge between dynamic and static (replica) calculations This formal bridge has been known for a long time J. Kurchan (1992), and sometimes used for calculations. As is well known, a path integral like (22) may be written in a compact form in terms of the ‘superspace’ variables: ${\bf s}_{i}(1)=s_{i}(t)+\theta\bar{\eta}+\bar{\theta}\eta+\bar{\theta}\theta\hat{s}_{i}(t)$ (30) where $\theta_{a}$, $\bar{\theta}_{a}$ are Grassmann variables, and we denote the full set of coordinates in a compact form as $1=t_{1}\theta_{1}\overline{\theta}_{1}$, $d1=dt_{1}d\theta_{1}d\overline{\theta}_{1}$, etc. $\bar{\eta},\eta$ are fermion variables that play no role here, and will hence be omitted. This notation brings the replica and dynamic treatment into formally very close contact, with one-to-one (topological) correspondence between diagrams. We write Eq (22) as: $\int D{\bf s}\exp\left\\{\int d1[K({\bf s})-V({\bf s})]\right\\}$ (31) where $K({\bf s})=\sum_{i}\frac{\partial{\bf s}_{i}}{\partial\theta}\left(\frac{\partial{\bf s}_{i}}{\partial\bar{\theta}}-\theta\frac{\partial{\bf s}_{i}}{\partial t}\right)-\frac{\partial^{2}{\bf s}_{i}}{\partial t^{2}}$ (32) is a ‘kinetic’ term which contains the time-derivatives, which will be neglected in the slow-dynamics regimes. We encode the correlations and (causal) responses in the ‘superspace’ order parameter (see J.
Kurchan (1992)): $Q_{ij}(1,2)=C_{ij}(t_{1},t_{2})+(\bar{\theta}_{2}-\bar{\theta}_{1})\;\left[\theta_{2}\,\;R_{ij}(t_{1},t_{2})-\theta_{1}\,\;R_{ji}(t_{2},t_{1})\right]\;.$ (33) and similarly $Q(1,2)=C(t_{1},t_{2})+(\bar{\theta}_{2}-\bar{\theta}_{1})\;\left[\theta_{2}\,\;R(t_{1},t_{2})-\theta_{1}\,\;R(t_{2},t_{1})\right]\;.$ (34) which corresponds to the matrix $\displaystyle Q(t,t^{\prime})=\begin{bmatrix}R(t,t^{\prime})&C(t,t^{\prime})\\\ 0&R(t^{\prime},t)\end{bmatrix}$ (35) As we shall see below, we will be led, in this notation, to topologically equal diagrams for replicas and dynamics, with the identifications $\sum_{\alpha=1}^{n}\rightarrow\int d1$ (36) Our diagrams will have vertices at supertimes $1=(t,\bar{\theta},\theta)$ (replica $\alpha$, respectively) and lines given by $Q(1,2)$ (respectively $Q_{\alpha\beta}$). One of the lines will be integrated with a generating variable $d1\rightarrow d1\;j(1)$ (respectively $\sum_{\alpha}\rightarrow\sum_{\alpha}j_{\alpha}$), where the $j$ are the arguments of the generating functions: $j(\alpha)=1+j\delta_{1\alpha}\rightarrow j(1)=1+j\delta(t_{1}-t_{0})\bar{\theta}_{1}\theta_{1}$ (37) We shall see a few examples of this below. The fact that the diagrams have the same form does not automatically mean that their actual values are the same. It has long been known J. Kurchan (1992) that, in the case in which there is a single temperature per timescale, the results of dynamic diagrams and Parisi-ansatz replica ones are indeed the same, diagram by diagram. The question that we shall address in what follows is whether timescale separation is also necessary for this to happen. ## IV Properties derived from stochastic stability We shall assume that the properties of the system are unchanged when perturbed by random, weak but long-range interactions: this is stochastic stability. Under this assumption we shall show that the multithermalization and Parisi schemes imply one another, for finite-dimensional systems.
### IV.1 Same temperatures for all observables implies separation of timescales Let us show first that the only way such a system can have the same $T(t,t^{\prime})$ for all observables at the same $(t,t^{\prime})$ is that there is only one temperature for all $(t,t^{\prime})$ associated to a correlation scale. In other words, a non-constant temperature within a timescale implies that different observables have different temperatures at the same times. Later on, we will see that this implies that there is no overlap equivalence at the static level. Let us first consider a lattice system, which we divide into four sublattices, whose components we shall call $s^{(1)}_{i}$, $s^{(2)}_{i}$, $s^{(3)}_{i}$ and $s^{(4)}_{i}$. We add to the energy a term $\displaystyle S$ $\displaystyle=$ $\displaystyle\frac{1}{2}\sum_{ij}\left(h^{(1)}_{ij}\right)^{2}+\frac{1}{2}\sum_{ij}\left(h^{(2)}_{ij}\right)^{2}+\frac{1}{2}\sum_{ij}\left(h^{(3)}_{ij}\right)^{2}+\frac{1}{2}\sum_{ij}\left(h^{(4)}_{ij}\right)^{2}$ (38) $\displaystyle+$ $\displaystyle\frac{\gamma}{N}\sum_{ijk}\left(h^{(1)}_{ij}h^{(2)}_{jk}s^{(1)}_{k}s^{(2)}_{i}+h^{(2)}_{ij}h^{(3)}_{jk}s^{(2)}_{i}s^{(3)}_{k}+h^{(3)}_{ij}{h}^{(4)}_{jk}s^{(3)}_{i}s^{(4)}_{k}\right)$ with the $h^{(\ell)}_{ij}$ random Gaussian variables.
We wish to compute the following correlation and its associated response: $C^{(4)}(t,t^{\prime})=\frac{1}{N^{2}}\sum_{ijk}h^{(1)}_{ij}h^{(4)}_{jk}\left\langle s^{(1)}_{i}(t)s^{(3)}_{k}(t^{\prime})\right\rangle_{\gamma}\qquad;\qquad R^{(4)}(t,t^{\prime})=\frac{1}{N^{2}}\sum_{ijk}h^{(1)}_{ij}h^{(4)}_{jk}\left\langle\frac{\delta s^{(1)}_{i}(t)}{\delta s^{(4)}_{k}(t^{\prime})}\right\rangle_{\gamma}$ (39) These may be encoded in a superspace order parameter $Q^{(4)}(1,2)$, or in its matrix version: $\displaystyle Q^{(4)}(t,t^{\prime})=\begin{bmatrix}R^{(4)}(t,t^{\prime})&C^{(4)}(t,t^{\prime})\\\ 0&R^{(4)}(t^{\prime},t)\end{bmatrix}=\gamma^{3}[Q\otimes Q\otimes Q\otimes Q](t,t^{\prime})$ (40) where $\otimes$ stands for convolution and matrix product. Or, equivalently, in superspace notation: $Q^{(4)}(1,2)=\gamma^{3}[Q]^{4}(1,2)$ (41) Note that this is a convolution power. As we have seen in Section II, we may always assume, by reparametrizing times within a timescale, that the functions are time-translational invariant, and we may use Fourier transforms: $\displaystyle Q(t-t^{\prime})=\begin{bmatrix}R(t-t^{\prime})&C(t-t^{\prime})\\\ 0&R(t^{\prime}-t)\end{bmatrix}\rightarrow\hat{Q}(\omega)=\begin{bmatrix}\hat{R}(\omega)&\hat{C}(\omega)\\\ 0&\hat{R}^{*}(\omega)\end{bmatrix}$ (42) The generalization to $n$ sublattices is obvious: $\displaystyle\hat{Q}^{(n)}(\omega)=\begin{bmatrix}\hat{R}^{(n)}(\omega)&\hat{C}^{(n)}(\omega)\\\ 0&\hat{R}^{(n)*}(\omega)\end{bmatrix}$ (43) A short calculation gives $\hat{R}^{(n)}=\hat{R}^{n}\qquad;\qquad\hat{C}^{(n)}(\omega)=\frac{\hat{R}^{(n)}(\omega)-\hat{R}^{(n)*}(\omega)}{\hat{R}(\omega)-\hat{R}^{*}(\omega)}\;\hat{C}(\omega)$ (44) This may also be written as: $\frac{\hat{C}^{(n)}(\omega)}{\hat{\chi}_{s}^{(n)}(\omega)}=\frac{\hat{C}(\omega)}{\hat{\chi}_{s}(\omega)}$ (45) If we consider a large value of $n$, then $\hat{C}^{(n)}(\omega)$ and ${\hat{\chi}_{s}^{(n)}(\omega)}$ will be peaked around zero, and we may write
$\frac{\hat{C}^{(n)}(\omega)}{\hat{\chi}_{s}^{(n)}(\omega)}\sim\frac{\hat{C}(0)}{\hat{\chi}_{s}(0)}=\bar{T}$ (46) from which we immediately see that $R^{(n)}(\tau),C^{(n)}(\tau)$ satisfy fluctuation-dissipation with the average temperature. Now, if $R(\tau),C(\tau)$ do not have a single temperature, then at equal times both pairs of observables have different temperatures (see Figure 2). Figure 2: $C^{(n)}$ and $\chi^{(n)}$ become broader and broader with $n$, and $X^{(n)}$ essentially flat. Within that range, if $X(C)$ is not a constant, then $X^{(n)}(t,t^{\prime})\neq X(t,t^{\prime})$. ### IV.2 Relation between statics and dynamics in finite dimensions In FMPP (Franz, Mézard, Parisi and Peliti) it is shown that, under certain assumptions, the dynamic $X(C)$ and the equilibrium counterpart $x(q)$ coincide for finite-dimensional systems. This is at first sight very strange, since it concerns a relation between two different kinds of objects that are relevant in completely different time regimes (in and out of equilibrium, respectively), and happen in different regions of phase-space. This section is mostly a review of their results. The Parisi scheme gives us the Parisi order parameter $P(q)$, the probability of overlaps of states, averaged over disorder. We shall sometimes need to distinguish the values of $q$ where $P(q)$ is nonzero: we shall for brevity call them ‘skeleton’ values. This distinction becomes important when we consider the next defining feature of the Parisi construction: the structure of triangles determined by three states $(q_{13};q_{12};q_{23})$. By its very definition, this may only concern skeleton values of the $q$. The natural next question is what becomes of the ultrametricity property of statics: is there any relation between the dynamic triangle relation $C_{13}\rightarrow f(C_{12};C_{23})$ and ultrametricity of equilibrium states $q_{13}\rightarrow\min(q_{12};q_{23})$?
Clearly, the dynamic $f$ concerns all values of $C$, while the static relation concerns only the skeleton values, so if there is a correspondence it has to be for the skeleton values only. Dynamically, if we had $C_{13}\rightarrow\min(C_{12};C_{23})$ it would mean that we have hierarchically organized timescales. For example, if the system is TTI then one of the two time differences $t_{2}-t_{1}$ and $t_{3}-t_{2}$ is negligible compared to the other, so correlations make their steps of decay on widely separated times. Franz _et al._ asked whether the Parisi scheme implied the existence of widely separated timescales. Their conclusion was that timescale separation is sufficient for having a Parisi scheme, but, though considering it plausible, they left open the question as to whether it was also a necessary one. In this paper it is shown that their same scheme also implies that this is indeed so: widely separated timescales are indeed implied by the Parisi scheme, at least such as we know it (i.e. with overlap equivalence and stochastic stability). This closes the circle: for finite-dimensional systems the Parisi scheme is included in the dynamic multithermalization one, and it allows one to compute some of its dynamic relations for which time has been factored away. This connection between widely separated timescales and the Parisi scheme will lead us to the main point of this paper, the question of time-reparametrization invariances: we shall show that these are crucial for the consistency of the scheme, since they allow two systems brought into contact to ‘adjust their timescales’ so that different effective temperatures match at each scale.
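The algebra of Section IV.1, Eqs (44)-(46), can be checked directly. In the sketch below, the two-mode correlation and the zero-frequency normalization $\hat{\chi}_{s}(0)=2\sum_{i}c_{i}\tau_{i}/T_{i}$ implied by per-mode FDT are assumptions made for illustration:

```python
import numpy as np

# (i) The 'short calculation' behind Eq (44): in Fourier space Q(omega) is
# upper triangular, so the n-fold convolution power is an ordinary matrix
# power, and its off-diagonal element is C*(R^n - R*^n)/(R - R*).
rng = np.random.default_rng(1)
for n in (2, 3, 4, 5):
    R, Rs, C = rng.normal(size=3) + 1j * rng.normal(size=3)
    Qn = np.linalg.matrix_power(np.array([[R, C], [0.0, Rs]]), n)
    assert abs(Qn[0, 0] - R**n) < 1e-8
    assert abs(Qn[0, 1] - C * (R**n - Rs**n) / (R - Rs)) < 1e-8

# (ii) The average temperature of Eq (46) for an assumed two-mode correlation
#   C(tau) = c1*exp(-tau/tau1) + c2*exp(-tau/tau2),
# each mode obeying FDT at its own temperature, so that
#   Chat(0) = 2*(c1*tau1 + c2*tau2),  chihat_s(0) = 2*(c1*tau1/T1 + c2*tau2/T2).
c1, tau1, T1 = 0.6, 1.0, 1.0       # fast mode at the bath temperature
c2, tau2, T2 = 0.4, 100.0, 3.0     # slow mode at a higher effective temperature
Tbar = (c1 * tau1 + c2 * tau2) / (c1 * tau1 / T1 + c2 * tau2 / T2)
assert min(T1, T2) < Tbar < max(T1, T2)
print(f"Tbar = {Tbar:.3f}")        # dominated by the slow, long-lived mode
```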
#### The basic argument The idea in FMPP is to compute the generalized susceptibilities defined as follows Cugliandolo and Kurchan (1993): one adds a perturbation of the form $H\rightarrow H+\left\\{\frac{\gamma}{N^{(p-1)/2}}\sum h_{i_{1},...,i_{p}}s_{i_{1}}...s_{i_{p}}+\sum h_{i_{1},...,i_{p}}^{2}\right\\}$ (47) where the $h_{i_{1},...,i_{p}}$ are Gaussian i.i.d. random numbers, and computes the susceptibility $I^{(p)}=\frac{\gamma}{N^{p}}\frac{\partial}{\partial{\gamma}}\left\langle\sum h_{i_{1},...,i_{p}}s_{i_{1}}...s_{i_{p}}\right\rangle$ (48) In equilibrium, and asymptotically for dynamics, we have $I^{(p)}_{equil}=\int x(q)dq^{p}\qquad\qquad;\qquad\qquad I^{(p)}_{dyn}=\int X(C)dC^{p}$ (49) These correspond, in the notation introduced above, to: $I^{(p)}_{equil}=\left.\frac{d}{dj}\sum_{\alpha\beta}Q^{\bullet p}_{\alpha\beta}\;j(\alpha)\right|_{j=0}\qquad\qquad;\qquad\qquad I^{(p)}_{dyn}=\left.\frac{d}{dj}\int d1\;d2\;Q^{\bullet p}(1,2)j(1)\right|_{j=0}$ (50) where $A^{\bullet p}$ is the matrix each of whose elements is the $p$-th power of that of the matrix $A$, a Hadamard (element-by-element) power. This corresponds to the left diagram in Fig. 3. Now, if one can argue that these susceptibilities are (to leading order in $N$ and long times taken after $N\rightarrow\infty$) equal for all $p$, then one concludes that $x(q)$ and $X(C)$ are the same functions. The argument to show this is that, since these susceptibilities may be obtained from a second derivative of a free-energy with respect to sources, equality of energy densities between dynamics and equilibrium (again, to leading order in $N$ and long times, and for all small perturbations) implies equality of susceptibilities. Franz _et al._ used the standard nucleation reasoning forbidding stable states with higher free-energy density, valid in finite dimensions, to argue this.
In order to complete the argument, they need to get around an obstacle: a term like $h_{i_{1},...,i_{p}}s_{i_{1}}...s_{i_{p}}$ is long-range, and the nucleation argument would not, in principle, apply. Their clever trick is to consider the $d$-dimensional lattice and mentally fold it $p-1$ times, so as to make $(i_{1},...,i_{p})$ contiguous. The resulting $(p-1)$-layer system is then short-range, and we may apply the nucleation argument. This applies for a single term, and one should then argue that it also does for the sum of them all. For this step one needs that the limit of small perturbation and the thermodynamic limit commute, and this is where some form of stochastic stability is required. It is easy to see that the whole argument of FMPP extends naturally to the case in which disorder changes slowly, because by making the timescale long enough all nucleations may take place. The same is true for the weak-shear limit. Now, to prove the correspondence of FMPP in a compact form, we write: $Z_{h}(\gamma)=\Sigma_{\alpha,i}\;e^{S_{0}+S(h,s_{i}^{\alpha})}$ (51) where $\displaystyle S$ $\displaystyle=$ $\displaystyle\left\\{\frac{\gamma}{N^{(p-1)/2}}\sum_{i_{1},...,i_{p}}h_{i_{1},...,i_{p}}\sum_{a}\;j(a)s_{i_{1}}^{a}...s_{i_{p}}^{a}+\frac{1}{2}\sum_{i_{1},...,i_{p}}h_{i_{1},...,i_{p}}^{2}\right\\}$ (52) $\displaystyle=$ $\displaystyle\left\\{\frac{\gamma}{N^{(p-1)/2}}\sum_{i_{1},...,i_{p}}h_{i_{1},...,i_{p}}{\bf t}_{i_{1},...,i_{p}}+\frac{1}{2}\sum_{i_{1},...,i_{p}}h_{i_{1},...,i_{p}}^{2}\right\\}$ where here ${\bf t}_{i_{1},...,i_{p}}\equiv\sum_{a}\;j(a)s_{i_{1}}^{a}...s_{i_{p}}^{a}$ and $j(a)=1+j\delta_{1a}$.
Similarly, dynamically we have: $Z_{h}(\gamma)=\Sigma_{\alpha,i}\;e^{S_{0}+S(h,{\bf s}_{i})}$ (53) where $\displaystyle S$ $\displaystyle=$ $\displaystyle\left\\{\frac{\gamma}{N^{(p-1)/2}}\sum_{i_{1},...,i_{p}}h_{i_{1},...,i_{p}}\int d1\;j(1){\bf s}_{i_{1}}...{\bf s}_{i_{p}}(1)+\frac{1}{2}\sum_{i_{1},...,i_{p}}h_{i_{1},...,i_{p}}^{2}\right\\}$ (54) $\displaystyle=$ $\displaystyle\left\\{\frac{\gamma}{N^{(p-1)/2}}\sum_{i_{1},...,i_{p}}h_{i_{1},...,i_{p}}{\bf t}_{i_{1},...,i_{p}}+\frac{1}{2}\sum_{i_{1},...,i_{p}}h_{i_{1},...,i_{p}}^{2}\right\\}$ where ${\bf t}_{i_{1},...,i_{p}}\equiv\int d1\;j(1){\bf s}_{i_{1}}...{\bf s}_{i_{p}}(1)$ and $j(1)=1+j\delta(t_{1}-t_{0})\bar{\theta}_{1}\theta_{1}$. Integrating over the $h$’s we get the diagram on the left of Fig. 3. #### Generator function Lego It is natural to extend this to more general $H\rightarrow H+\left\\{\frac{\gamma}{N^{(p-1)/2}}\sum h_{i_{1},...,i_{p}}s_{i_{1}}...s_{i_{p}}+\sum h_{i_{1},...,i_{p}}^{2}+\mu V(h)\right\\}$ (55) and to treat this perturbatively in $\mu$.
$\displaystyle S$ $\displaystyle=$ $\displaystyle\left\\{\frac{\gamma}{N^{(p-1)/2}}\sum_{i_{1},...,i_{p}}h_{i_{1},...,i_{p}}\int d1\;j(1){\bf s}_{i_{1}}...{\bf s}_{i_{p}}(1)+\frac{1}{2}\sum_{i_{1},...,i_{p}}h_{i_{1},...,i_{p}}^{2}+\mu V(h)\right\\}$ (56) $\displaystyle=$ $\displaystyle\left\\{\frac{\gamma^{2}}{N^{(p-1)}}\sum_{i_{1},...,i_{p}}{\bf t}_{i_{1},...,i_{p}}{\bf t}_{i_{1},...,i_{p}}+\frac{1}{2}\sum_{i_{1},...,i_{p}}(h_{i_{1},...,i_{p}}+{\bf t}_{i_{1},...,i_{p}})^{2}+\mu V(h)\right\\}$ $\displaystyle=$ $\displaystyle\left\\{{\gamma^{2}}{N}{\bf I}^{p}+\frac{1}{2}\sum_{i_{1},...,i_{p}}(h_{i_{1},...,i_{p}}+{\bf t}_{i_{1},...,i_{p}})^{2}+\mu V(h)\right\\}$ $\displaystyle=$ $\displaystyle\left\\{{\gamma^{2}}{N}{\bf I}^{p}+\frac{1}{2}\sum_{i_{1},...,i_{p}}(\tilde{h}_{i_{1},...,i_{p}})^{2}+\mu V(\tilde{h}_{i_{1},...,i_{p}}-{\bf t}_{i_{1},...,i_{p}})\right\\}$ Expanding the exponential of $V$, we obtain diagrams with contractions of $\tilde{h}_{i_{1},...,i_{p}}^{2}$, and also lines with products of $\sum{\bf t}_{i_{1},...,i_{p}}$, that may be expressed in terms of $Q(1,2)$’s. The same procedure, applied with replicas, yields the same diagrams, this time in terms of $Q_{ab}$. As mentioned above, the values of corresponding dynamic and replica diagrams coincide – $\left.\frac{d}{dj}\left\\{{\mbox{diagram}}\right\\}\right|_{j=0}$ gives the same (a reparametrization-invariant fact) – if the dynamics has widely separated timescales, with one temperature $T(t,t^{\prime})=T/X(C)$ per timescale, and $X(C)=x(q)$. Is the situation with timescales associated to different temperatures widely separated (a.k.a. time-ultrametricity) the only possibility for the coincidence of statics and dynamics for diagrams? Our answer will be positive. #### Equality of temperatures and overlap In this section we shall review the argument for two coupled finite-dimensional spin systems, but it is valid for any two sets of variables.
We will study the correlations of each system within two states of the global system, statically: $q^{ss}_{ab}=\frac{1}{N}\sum_{i}\langle s_{i}\rangle_{a}\langle s_{i}\rangle_{b}\qquad;\qquad q^{\sigma\sigma}_{ab}=\frac{1}{N}\sum_{i}\langle\sigma_{i}\rangle_{a}\langle\sigma_{i}\rangle_{b}\qquad;\qquad q^{\sigma s}_{ab}=\frac{1}{N}\sum_{i}\langle\sigma_{i}\rangle_{a}\langle s_{i}\rangle_{b}$ (57) and dynamically: $\displaystyle R^{ss}(t,t^{\prime})$ $\displaystyle=$ $\displaystyle\frac{1}{T^{ss}(t,t^{\prime})}\frac{\partial}{\partial t^{\prime}}C^{ss}(t,t^{\prime})=\frac{X^{ss}(t,t^{\prime})}{T}\frac{\partial}{\partial t^{\prime}}C^{ss}(t,t^{\prime})$ $\displaystyle R^{\sigma\sigma}(t,t^{\prime})$ $\displaystyle=$ $\displaystyle\frac{1}{T^{\sigma\sigma}(t,t^{\prime})}\frac{\partial}{\partial t^{\prime}}C^{\sigma\sigma}(t,t^{\prime})=\frac{X^{\sigma\sigma}(t,t^{\prime})}{T}\frac{\partial}{\partial t^{\prime}}C^{\sigma\sigma}(t,t^{\prime})$ $\displaystyle R^{\sigma s}(t,t^{\prime})$ $\displaystyle=$ $\displaystyle\frac{1}{T^{\sigma s}(t,t^{\prime})}\frac{\partial}{\partial t^{\prime}}C^{\sigma s}(t,t^{\prime})=\frac{X^{\sigma s}(t,t^{\prime})}{T}\frac{\partial}{\partial t^{\prime}}C^{\sigma s}(t,t^{\prime})$ (58) We now follow the same steps and apply the perturbations $S=h_{i_{1},...,i_{p}}j(1)[as_{i_{1}}...s_{i_{p}}+b\sigma_{i_{1}}...\sigma_{i_{p}}]+h_{i_{1},...,i_{p}}^{2}$ (59) for all $a,b$ and compute separately the corresponding generalized susceptibilities of each set of variables $I^{(p)}_{ss}=\int X^{ss}(C^{ss})[C^{ss}]^{(p-1)}dC^{ss}$, $I^{(p)}_{\sigma\sigma}=\int X^{\sigma\sigma}(C^{\sigma\sigma})[C^{\sigma\sigma}]^{(p-1)}dC^{\sigma\sigma}$ and $I^{(p)}_{\sigma s}=\int X^{\sigma s}(C^{\sigma s})[C^{\sigma s}]^{(p-1)}dC^{\sigma s}$. 
Equality of all of these makes us conclude that there is a correspondence between statics and dynamics at this partial level: $x^{ss}(q^{ss})\leftrightarrow X^{ss}(C^{ss})\qquad;\qquad x^{\sigma\sigma}(q^{\sigma\sigma})\leftrightarrow X^{\sigma\sigma}(C^{\sigma\sigma})\qquad;\qquad x^{\sigma s}(q^{\sigma s})\leftrightarrow X^{\sigma s}(C^{\sigma s})$ (60) #### Thermalization and overlap equivalence We now show that if $C^{ss}(t,t^{\prime})\rightarrow g[C^{\sigma\sigma}(t,t^{\prime})]$ then $q^{ss}=g[q^{\sigma\sigma}]$ with the same function $g$, restricted to skeleton values of correlation associated with the Parisi ansatz. For non-skeleton values there can be no correspondence, since the corresponding values are absent from the static solution. Construct now the susceptibilities via $h_{i_{1},...,i_{p};i_{p+1}}s_{i_{1}}...s_{i_{p}}(a\sigma_{i_{p+1}}+bs_{i_{p+1}})$ ($h_{i_{1},...,i_{p};i_{p+1}}$ is not symmetrized with respect to the last index). First, for $b=0$ we get the middle diagram of Figure 3: $I^{(p)}=\int dC^{\sigma\sigma}\;X^{\sigma\sigma}(C^{\sigma\sigma})[C^{ss}]^{p}+\int X^{ss}(C^{ss})C^{\sigma\sigma}d[C^{ss}]^{p}$ (61) Now, $x(q^{ss}),x(q^{\sigma\sigma})$ are given by the statics too. They are the same if there is overlap equivalence.
If so, $\displaystyle I^{(p)}$ $\displaystyle=$ $\displaystyle\int\left\\{dC^{\sigma\sigma}[C^{ss}]^{p}+d[C^{ss}]^{p}C^{\sigma\sigma}\right\\}X^{ss}(C^{ss})$ (62) $\displaystyle=$ $\displaystyle\left.X^{ss}(C^{ss})C^{\sigma\sigma}[C^{ss}]^{p}\right|^{1}_{0}-\int dX^{ss}\left\\{[C^{ss}]^{p}C^{\sigma\sigma}\right\\}$ This is equal to the equilibrium expression $\displaystyle I^{(p)}$ $\displaystyle=$ $\displaystyle\int\left\\{dq^{\sigma\sigma}[q^{ss}]^{p}+d[q^{ss}]^{p}q^{\sigma\sigma}\right\\}x^{ss}(q^{ss})=\left.x^{ss}(q^{ss})q^{\sigma\sigma}[q^{ss}]^{p}\right|^{1}_{0}-\int dx^{ss}\left\\{[q^{ss}]^{p}q^{\sigma\sigma}\right\\}$ (63) $\displaystyle=$ $\displaystyle\left.x^{ss}(q^{ss})q^{\sigma\sigma}[q^{ss}]^{p}\right|^{1}_{0}-\int dq^{ss}\;\frac{dx^{ss}}{dq^{ss}}\left\\{[q^{ss}]^{p}q^{\sigma\sigma}\right\\}$ The equality of all moments proves the equality of the functions $C^{\sigma\sigma}=g[C^{ss}]$ and $q^{\sigma\sigma}=g[q^{ss}]$ but only for the skeleton values at which $\frac{dx^{ss}}{dq^{ss}}\neq 0$. Putting now $b\neq 0$ the linear term in $b$ implies: $\displaystyle I_{b}^{(p)}$ $\displaystyle=$ $\displaystyle\int\left\\{dC^{\sigma s}[C^{ss}]^{p}X^{\sigma s}(C^{\sigma s})+d[C^{ss}]^{p}X^{ss}(C^{ss})C^{\sigma s}\right\\}$ (64) $\displaystyle=$ $\displaystyle\left.X^{ss}(C^{ss})C^{\sigma s}[C^{ss}]^{p}\right|^{1}_{0}-\int dX^{ss}\left\\{[C^{ss}]^{p}C^{\sigma s}\right\\}$ $\displaystyle=$ $\displaystyle\left.x^{ss}(q^{ss})q^{\sigma s}[q^{ss}]^{p}\right|^{1}_{0}-\int dq^{ss}\;\frac{dx^{ss}}{dq^{ss}}\left\\{[q^{ss}]^{p}q^{\sigma s}\right\\}$ Again, the equality of all moments proves the equality of the functions $C^{\sigma s}=\hat{g}[C^{ss}]$ and $q^{\sigma s}=\hat{g}[q^{ss}]$ but only for the skeleton values at which $\frac{dx^{ss}}{dq^{ss}}\neq 0$. Figure 3: Three diagrams in terms of the superspace/replica order parameters. 
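Two of the manipulations above can be verified numerically: the moment integrals of Eq (49) for an illustrative one-step-RSB order function, and the integration by parts used in Eqs (62)-(64) along an assumed parametrized path. Both the order function and the path below are assumptions of this sketch:

```python
import numpy as np

def trapz(y, x):
    # simple trapezoidal rule
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

# (i) Eq (49) with the one-step-RSB order function
#   x(q) = m for q < q1,  x(q) = 1 for q >= q1,
# whose closed form is I^(p) = 1 - (1-m)*q1**p; knowing I^(p) for all p
# determines x(q).
m, q1 = 0.3, 0.7
q = np.linspace(0.0, 1.0, 2000001)
x = np.where(q < q1, m, 1.0)
for p in (1, 2, 3, 5):
    Ip = trapz(x * p * q**(p - 1), q)      # ∫ x(q) d(q^p)
    assert abs(Ip - (1.0 - (1.0 - m) * q1**p)) < 1e-5

# (ii) The integration by parts of Eqs (62)-(64), on the assumed path
# u -> (C1, C2, X) = (u, u^2, 1 - u/2):
#   ∫ X d(C2*C1^p) = [X*C2*C1^p]_0^1 - ∫ dX C1^p C2
u = np.linspace(0.0, 1.0, 200001)
C1, C2, X = u, u**2, 1.0 - 0.5 * u
p = 2
lhs = trapz(X * (p + 2) * u**(p + 1), u)   # d(C2*C1^p)/du = (p+2) u^(p+1)
rhs = X[-1] * C2[-1] * C1[-1]**p - trapz(-0.5 * C1**p * C2, u)
assert abs(lhs - rhs) < 1e-6
```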
#### Three configurations This calculation was hinted at in FMPP; the only thing missing was a generating function for the diagram involved. We consider a slightly more complicated perturbation: $\displaystyle S$ $\displaystyle=$ $\displaystyle\frac{1}{2}\sum_{ij}\left(h^{(1)}_{ij}\right)^{2}+\frac{1}{2}\sum_{ij}\left(h^{(2)}_{ij}\right)^{2}+\frac{1}{2}\sum_{ij}\left(h^{(3)}_{ij}\right)^{2}$ (65) $\displaystyle+$ $\displaystyle\frac{\gamma}{N}\sum_{ijk}\left(h^{(1)}_{ij}h^{(2)}_{jk}s_{k}s_{i}+h^{(1)}_{ij}h^{(3)}_{jk}s_{i}s_{k}+h^{(2)}_{ij}{h}^{(3)}_{jk}s_{i}s_{k}\right)$ leading to: $\displaystyle S$ $\displaystyle=$ $\displaystyle\frac{1}{2}\sum_{ij}\left(h^{(1)}_{ij}\right)^{2}+\frac{1}{2}\sum_{ij}\left(h^{(2)}_{ij}\right)^{2}+\frac{1}{2}\sum_{ij}\left(h^{(3)}_{ij}\right)^{2}$ (66) $\displaystyle+$ $\displaystyle\frac{\gamma}{N}\sum_{ijk}\left(h^{(1)}_{ij}h^{(2)}_{jk}{\bf t}_{ki}+h^{(1)}_{ij}{\bf t}_{jk}h^{(3)}_{ki}+{\bf t}_{ij}h^{(2)}_{jk}{h}^{(3)}_{ki}\right)$ Integrating away the $h$’s, we get: $e^{...+{\gamma}^{2}N\int d1d2\;Q(1,2)j(1)j(2)+2N{\gamma}^{3}\int d1d2d3\;j(1)j(2)j(3)Q(1,2)Q(2,3)Q(3,1)+O(\gamma^{4})}$ (67) and similarly for replicas.
More generally, denoting $I=i_{1},...,i_{p}$, $J=j_{1},...,j_{r}$, $K=k_{1},...,k_{s}$, ${\bf t}_{I}=\int j(1)s_{i_{1}}...s_{i_{p}}\;d1$, and so on, we have, applying the corresponding perturbation: $\displaystyle S$ $\displaystyle=$ $\displaystyle\frac{1}{2}\sum_{IJ}\left(h^{(1)}_{IJ}\right)^{2}+\frac{1}{2}\sum_{IJ}\left(h^{(2)}_{IJ}\right)^{2}+\frac{1}{2}\sum_{JK}\left(h^{(3)}_{JK}\right)^{2}$ (68) $\displaystyle+$ $\displaystyle\frac{\gamma}{N^{(3p-1)/2}}\sum_{IJK}\left(h^{(1)}_{IJ}h^{(2)}_{JK}{\bf t}_{KI}+h^{(1)}_{IJ}{\bf t}_{JK}h^{(3)}_{KI}+{\bf t}_{IJ}h^{(2)}_{JK}{h}^{(3)}_{KI}\right)$ $\displaystyle S={\gamma^{2}N}\left(\int d1d2\;Q^{\bullet p}(1,2)j(1)j(2)+\int d1d2\;Q^{\bullet r}(1,2)j(1)j(2)+\int d1d2\;Q^{\bullet s}(1,2)j(1)j(2)\right)$ (69) $\displaystyle+$ $\displaystyle N\gamma^{3}\int d1d2d3\;j(1)j(2)j(3)\left(Q^{\bullet p}(1,2)Q(2,3)^{\bullet s}Q(3,1)^{\bullet r}+Q^{\bullet p}(1,2)Q(2,3)^{\bullet r}Q(3,1)^{\bullet s}\right)$ $\displaystyle+$ $\displaystyle O(\gamma^{4})$ And similarly for the replica calculation. This is the diagram to the right of Figure 3. Now, the Onsager property within a timescale implies that the operators $Q^{\bullet r}$ and $Q^{\bullet s}$ commute, just as the Parisi matrices do. The result of the diagram is reported in FMPP and is proportional to: $\int dq_{12}dq_{23}dq_{31}\;P({\mbox{triangle}})\;(q_{12})^{p}(dq_{23})^{r}(dq_{31})^{s}$ (70) Equality of these for all $p,r,s$ implies the equality of the probability of triangles constituted by skeleton values of overlaps (we note again that the Parisi scheme has nothing to say about triangles formed by intermediate values of correlations ‘within a scale’).
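The constraint that ultrametricity places on triangles of overlaps is easiest to visualize on a toy ensemble (our own construction, not taken from FMPP): place states on the leaves of a binary tree and let the overlap of two states be set by the depth of their deepest common ancestor. Every triangle of skeleton overlaps then has its two smallest values equal:

```python
from itertools import combinations

def overlap(i, j, depth):
    """Overlap of leaves i, j of a binary tree with `depth` levels:
    proportional to the depth of their deepest common ancestor."""
    if i == j:
        return 1.0
    # the highest differing bit marks where the two branches split
    common = depth - (i ^ j).bit_length()
    return common / depth

depth = 4
states = range(2 ** depth)          # 16 'states' on the leaves of the tree

for a, b, c in combinations(states, 3):
    q = sorted([overlap(a, b, depth), overlap(b, c, depth), overlap(a, c, depth)])
    assert q[0] == q[1]             # ultrametricity: two smallest overlaps coincide
```

Any $P(\mbox{triangle})$ built from such a hierarchy is supported on isoceles triangles with the two smaller sides equal, which is the structure that the moment identities above compare between statics and dynamics.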
## V The role of reparametrization invariance(s) As mentioned in the introduction, connecting two systems with a small local interaction $\sum_{i}s_{i}\sigma_{i}$ puts the system (and us) in a dilemma: if the systems had different effective temperatures in the same timescale, the combined system would violate the scenario we have been describing, including the Parisi scheme, because it would have different temperatures for different observables in the same timescale. Thus, the new coupled system will immediately fall outside the scheme. Something clearly is amiss, because this would happen even for mean-field models, for which we know that the scenario holds. One possibility is that all imaginable systems that satisfy a multithermalized/RSB scheme have the same timescales at the same values of $T$ for the same $X$. Thus, any two systems evolving at the same temperatures would automatically have all the effective temperatures at the same scales. There is a well-understood counterexample to this possibility: ferromagnetic domain growth has a fast timescale (relaxation within domains) and a slow timescale, corresponding to the displacement of domain walls. The effective temperature for the slow motion is infinite Berthier _et al._ (1999). Now, the slow timescale is not universal: it may be $C=C\left(\frac{t^{\prime}}{t}\right)$ for a pure ferromagnet, or $C=C\left(\frac{\ln t^{\prime}}{\ln t}\right)$ for a ‘dirty’ one. The other possibility, already hinted at years ago Cugliandolo _et al._ (1997); Cugliandolo and Kurchan (1999a, b), is as follows: when the interaction is strong enough, the two systems change their temperatures so that they become equal. This requires a certain critical coupling strength. When the coupling is weaker than that, something stranger should happen: the timescales associated with the two temperatures “push each other apart”, so the combined system has one more timescale, and the scenario is recovered.
That this should happen even with very weak interaction is only possible because the systems develop independent reparametrization invariances, so that the coupling is always relevant. This very surprising phenomenon ‘saves the scenario’. In Refs. Contucci _et al._ (2019, 2020) we have studied in detail a slightly different context in which this may happen: instead of coupling a system to another one, we couple it to a ‘multibath’. ### V.1 How do the families of reparametrization invariances come about In mean-field models (in fact, the only systems for which we know that the scenario holds), the reparametrization-invariant families come about as follows. One arrives at the usual dynamic equations for correlation and response functions either by summing ladder diagrams or by taking the saddle point of the dynamic path integral averaged over disorder. One then verifies that each scale may be treated separately, with the faster scales acting as if they were instantaneous and the slower ones as being frozen. The first step, shared by SYK (which has only two scales), involves separating out the fastest scale, the only one in which time-derivatives are relevant. Then, one considers the infrared scale, which will have a reparametrization invariance in the slow-dynamics limit in which the time-derivatives may be neglected. In some systems the procedure stops here. However, in systems such as the Sherrington-Kirkpatrick model, there is a ‘more infrared’ scale, which is infinitely slower than the previous one, and may be separated likewise, and then another, and another. Each scale may be time-reparametrized freely provided it remains separated from the previous one. ### V.2 Two glasses and a wormhole Coupling systems with reparametrization invariances is generally interesting, because the coupling will almost surely be relevant, since relative reparametrizations between systems are soft.
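The softness of relative reparametrizations can be seen on a toy aging correlator (our own illustration, not the solution of any particular model): take $C(t,t^{\prime})=f\left(h(t^{\prime})/h(t)\right)$ with $f(x)=x^{a}$. Relations between correlations at three times depend only on $f$, so any monotone change of clock $h$ costs nothing:

```python
import math

def C(t, tp, h, a=0.5):
    """Toy aging correlator C(t, t') = f(h(t')/h(t)) with f(x) = x**a, t >= t'."""
    return (h(tp) / h(t)) ** a

# three different 'clocks' h: power-law aging, activated (log) aging, t**3
clocks = [lambda t: t, lambda t: math.log(1.0 + t), lambda t: t ** 3]

t1, t2, t3 = 100.0, 40.0, 10.0
for h in clocks:
    # the triangle relation C(t1,t3) = C(t1,t2) C(t2,t3) holds for every clock:
    # it depends only on f, not on the reparametrization h
    assert abs(C(t1, t3, h) - C(t1, t2, h) * C(t2, t3, h)) < 1e-12
```

Since any $h$ gives the same triangle relations, a weak coupling that prefers a particular *relative* clock between two such systems faces no restoring force from either one, which is why it is always relevant.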
In a series of papers Maldacena _et al._ (2017); Maldacena and Qi (2018), two SYK models – toy versions of Black Holes – have been coupled, and the result is a system with a combined first-order transition line with hysteresis in the temperature-coupling plane, terminating in a triple point. Let us briefly show how essentially the same transition is expected to happen when coupling two glasses, for the same reasons. A direct interpretation, not using reparametrization invariance, is available in the glassy case. Let us recall a connection between stochastic and quantum dynamics that has already been used several times in the past in statistical physics, condensed matter and quantum field theory Rokhsar and Kivelson (1988); Parisi (1988); Kurchan (2010), and which we have exploited to lay a bridge between glasses and quantum systems with large $T=0$ entropies Facoetti _et al._ (2019). Just as above, we consider two systems of $N$ coupled degrees of freedom each, $s_{i}(t),\sigma_{i}(t)$, evolving by stochastic Langevin dynamics $\displaystyle\dot{s_{i}}(t)$ $\displaystyle=$ $\displaystyle-\frac{\partial V}{\partial s_{i}}+\eta_{i}(t)~{},$ $\displaystyle\dot{\sigma_{i}}(t)$ $\displaystyle=$ $\displaystyle-\frac{\partial V}{\partial\sigma_{i}}+\tilde{\eta}_{i}(t)~{},$ (71) and $V$ is the interaction potential, which we shall take to be: $V=\sum_{ijk}J_{ijk}\;(s_{i}s_{j}s_{k}+\sigma_{i}\sigma_{j}\sigma_{k})+z\left(\sum\sigma_{i}^{2}-N\right)+\tilde{z}\left(\sum s_{i}^{2}-N\right)$ (72) where the $J$ are random and fully-connected and the terms proportional to $z$ impose a spherical constraint $\sum_{i}\sigma_{i}^{2}=\sum_{i}s_{i}^{2}=N$. This is the simplest and best-understood mean-field glass, but there are plenty of other examples in the literature, with and without disorder.
Here $T_{s}$ is the (classical) temperature of the thermal bath to which the system is coupled, and $\eta_{i}(t),\tilde{\eta}_{i}(t)$ are Gaussian white noises with covariance $\langle\eta_{i}(t)\eta_{i}(t^{\prime})\rangle=2T_{s}\delta(t-t^{\prime})$. The evolution of the probability density is generated by the Fokker–Planck operator $H_{\textrm{FP}}$, $\partial_{t}P_{t}(\mathbf{q})=\sum_{i}\frac{\partial}{\partial q_{i}}\left[T_{s}\frac{\partial}{\partial q_{i}}+\frac{\partial V}{\partial q_{i}}\right]P_{t}(\mathbf{q})\equiv-H_{\textrm{FP}}P_{t}(\mathbf{q}).$ (73) where $q_{i}=\\{s,\sigma\\}$. Detailed balance allows us to write this in an explicitly Hermitian form Zinn-Justin (2002); Kurchan (2010). Rescaling time, one can define the operator $H=\frac{T_{s}}{2}e^{V/2T_{s}}H_{\textrm{FP}}e^{-V/2T_{s}}=\sum_{i}\left[-\frac{T_{s}^{2}}{2}\frac{\partial^{2}}{\partial q_{i}^{2}}+\frac{1}{8}\left(\frac{\partial V}{\partial q_{i}}\right)^{2}-\frac{T_{s}}{4}\frac{\partial^{2}V}{\partial q_{i}^{2}}\right]\ .$ (74) $H$ has the form of a Schrödinger operator with $T_{s}$ playing the role of $\hbar$, unit mass and potential $V_{\textrm{eff}}=\frac{1}{8}\left(\frac{\partial V}{\partial q_{i}}\right)^{2}-\frac{T_{s}}{4}\frac{\partial^{2}V}{\partial q_{i}^{2}}~{}.$ (75) The spectrum of eigenvalues $\lambda_{i}$ and eigenvectors $\psi_{i}$ of $H$ (or $H_{\textrm{FP}}$) has a direct relation to the metastable states of the original diffusive dynamics (Gaveau and Schulman (1998); Bovier _et al._ (2004), see also Biroli and Kurchan (2001)): * • The equilibrium state has $\lambda_{0}=0$ and the corresponding right eigenvector of $H_{\textrm{FP}}$ is the Boltzmann distribution associated with the energy function $V$. * • Given a timescale $t^{*}$, the number of eigenvectors with $\lambda_{i}<\frac{1}{t^{*}}$ is the number of metastable states of the diffusive model with lifetime larger than $t^{*}$.
In particular, the eigenvalues $\lambda_{i}\to 0$ in the thermodynamic limit correspond to metastable states whose lifetime diverges with $N$. * • Hence, the resulting object $\mathcal{N}(\beta_{q})=\operatorname{Tr}e^{-\beta_{q}H_{\textrm{FP}}}=Z(\beta_{q})$ counts the number of states of the system that are stable up to a time $\beta_{q}$ or longer Biroli and Kurchan (2001). We thus have introduced a “quantum” Hamiltonian $H$, which is associated with a quantum temperature $T_{q}=1/\beta_{q}$. The original temperature $T_{s}$ now plays the role of the quantum parameter, $\hbar$. Our “quantum energy” is associated with the eigenvalues of $H$, which are a measure of the lifetimes of the original classical diffusive system. We may analyze this ‘quantum system’ in terms of the underlying glassy model. The extensive ‘zero temperature entropy’ is nothing but the log of the number of metastable states; the ‘glassy’ reparametrization invariance is now quite analogous to that of SYK Facoetti _et al._ (2019). We now couple the two systems through a term: $H_{\mu}=H-\mu\sum_{i}\sigma_{i}s_{i}$ (76) which no longer corresponds to a purely stochastic evolution, but rather to the dynamic large deviation of $\int dt\;\sum_{i}\sigma_{i}s_{i}$, a generating function. The system will thus have a ground state with eigenvalue larger than zero, the value being precisely the large-deviation function for each $\mu$. Had we coupled the systems at the level of $V$, the joint system would have a zero-energy quantum ground state: we know this because the system so obtained is still a glass.
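The similarity transformation in Eq. (74) is easy to check for a single degree of freedom. The sketch below (the double-well potential and test function are arbitrary choices of ours, and derivatives are taken by finite differences) verifies that $\frac{T_{s}}{2}e^{V/2T_{s}}H_{\textrm{FP}}e^{-V/2T_{s}}$ acts as the Schrödinger operator with the potential of Eq. (75):

```python
import math

# Example parameters: T plays the role of T_s (and hence of hbar); the
# double-well potential and the test function below are our own choices.
T = 0.7
V   = lambda q: q**4 - q**2
Vp  = lambda q: 4*q**3 - 2*q      # V'
Vpp = lambda q: 12*q**2 - 2       # V''

f = lambda q: math.exp(-q**2) * (1 + 0.3*q)     # smooth test function
g = lambda q: math.exp(-V(q) / (2*T)) * f(q)    # f conjugated by e^{-V/2T_s}

def H_FP(func, q, dq=1e-4):
    """Fokker-Planck operator of Eq. (73), -d/dq [T d/dq + V'(q)],
    acting on func, evaluated by central finite differences."""
    flux = lambda x: T * (func(x + dq) - func(x - dq)) / (2*dq) + Vp(x) * func(x)
    return -(flux(q + dq) - flux(q - dq)) / (2*dq)

def H_schrodinger(q, dq=1e-4):
    """Right-hand side of Eq. (74): -T^2/2 f'' + V_eff f, V_eff as in Eq. (75)."""
    fpp = (f(q + dq) - 2*f(q) + f(q - dq)) / dq**2
    return -T**2 / 2 * fpp + (Vp(q)**2 / 8 - T / 4 * Vpp(q)) * f(q)

def residual(q):
    """(T/2) e^{V/2T} H_FP[e^{-V/2T} f] minus the Schrodinger form; should vanish
    up to finite-difference error."""
    return T/2 * math.exp(V(q) / (2*T)) * H_FP(g, q) - H_schrodinger(q)
```

The residual vanishes to finite-difference accuracy at any point, confirming the operator identity for this example.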
The partition function of the Hamiltonian $H_{\mu}$ reads $Z(\mu,\beta_{q})=\operatorname{Tr}e^{-\beta_{q}H_{\mu}}=\operatorname{Tr}e^{-\frac{1}{2}\beta_{q}[T_{s}H_{\textrm{FP}}-\mu\sum_{i}\sigma_{i}s_{i}]}=\int dQe^{-\mu NQ}\;{\cal{N}}(Q,\beta_{q})$ (77) where ${\cal{N}}(Q,\beta_{q})$ measures the number of pairs of metastable states at mutual distance $N\beta_{q}Q=\sum_{i}\int_{0}^{\beta_{q}}dt^{\prime}\,s_{i}(t^{\prime})\sigma_{i}(t^{\prime})$. For the two coupled systems two phenomena compete: there is an exponential number of metastable states in each system, all of them (for large $N$) marginal in the sense of having gapless vibration spectra. The metastable states of the combined system are pairs of states, one in each system, and their statistics is overwhelmingly dominated by pairs of different states in the two subsystems – these pairs will almost all have small mutual overlap, for entropic reasons. An attractive interaction between the configurations of the subsystems, on the contrary, privileges choosing the same state in both. Bearing in mind that an energetic term dominates the entropic term at lower temperatures, we get a first-order mechanism, see Figure 4. Figure 4: The Franz-Parisi potential Franz and Parisi (1995); Kurchan _et al._ (1993): complexity (the log number of metastable states at a given overlap) and interaction energy, both plotted in terms of the overlap. At the point in which the overlap coincides with the state size, the entropy is the one of a single system, and the point is marginal. At larger overlaps, i.e., within a state, the complexity becomes negative. The dashed line represents the correction coming from counting states with finite lifetime, as one must at finite “quantum temperature”. The sum of the attraction and entropic effects yields a first-order transition mechanism (inset). Let us conclude this section with a remark.
When we construct a ‘quantum’ Hamiltonian à la Rokhsar-Kivelson, the usual imaginary-time partition function corresponds, as we have mentioned, to counting the number of metastable states; from the point of view of the stochastic system, to counting periodic stochastic trajectories. The real-time evolution (with an ‘$i$’) does not have any clear meaning from the stochastic point of view. Finally, the ‘aging’ solution corresponds to the following construction: for a general Hamiltonian $H$, given its ground-state $|0\rangle$ with eigenvalue $\lambda_{0}$, and a random initial state $|{\mbox{init}}\rangle$, one computes correlations with the propagation $\langle A\rangle_{aging}(t)=e^{\lambda_{0}t}\langle 0|Ae^{-tH}|{\mbox{init}}\rangle/\langle 0|{\mbox{init}}\rangle\qquad;\qquad C_{aging}(t,t^{\prime})=e^{\lambda_{0}t}\langle 0|Ae^{-(t-t^{\prime})H}Ae^{-t^{\prime}H}|{\mbox{init}}\rangle/\langle 0|{\mbox{init}}\rangle$ In a quantum system with ground-state entropy this process only becomes stationary in times $t^{\prime}$ that diverge with $N$. ## VI Conclusion In this paper we discuss the essential role of time-reparametrization quasi-invariances in solutions of glassy dynamics and equilibrium. As an intermediate step, we have needed to complete the program of Franz et al. (FMPP) in establishing a direct link between the Parisi scheme and dynamic ‘multithermalization’, valid for finite-dimensional systems under the assumption of stochastic stability with respect to random, long-range interactions. For this we had to show that static and dynamic ultrametricities imply one another. In view of the results in mathematical physics Panchenko (2013); Contucci _et al._ (2013), systems that are stochastically stable with respect to long-range random interactions should have static and dynamic properties corresponding to the scenario discussed here.
If a system still has a glassy phase, but does not correspond to this scenario, then it seems one would have to conclude that symmetries are more broken (smaller residual groups), in an at present unknown way. An intriguing possibility concerns the quantum SYK-like systems. These have a single infrared timescale, which diverges as the inverse temperature. The analogy with spin-glasses suggests that variants with nested divergent timescales should also be possible. Acknowledgments I wish to thank PL Contucci, F. Corberi, S Franz and E Mingione for clarifying conversations. This work is supported by the Simons Foundation Grant No 454943. ## References * Mézard _et al._ (1987) M. Mézard, G. Parisi, and M. A. Virasoro, _Spin Glass Theory and Beyond_ (World Scientific, Singapore, 1987). * Thalmann (2001) F. Thalmann, The European Physical Journal B-Condensed Matter and Complex Systems 19, 65 (2001). * Berthier _et al._ (2000) L. Berthier, J.-L. Barrat, and J. Kurchan, Physical Review E 61, 5464 (2000). * Horner (1992) H. Horner, Zeitschrift für Physik B Condensed Matter 86, 291 (1992). * Sompolinsky and Zippelius (1982) H. Sompolinsky and A. Zippelius, Physical Review B 25, 6860 (1982). * Sompolinsky and Zippelius (1981) H. Sompolinsky and A. Zippelius, Physical Review Letters 47, 359 (1981). * Cugliandolo and Kurchan (1993) L. F. Cugliandolo and J. Kurchan, Phys. Rev. Lett. 71, 173 (1993). * Cugliandolo and Kurchan (1994) L. F. Cugliandolo and J. Kurchan, Journal of Physics A: Mathematical and General 27, 5749 (1994). * Franz and Mézard (1994) S. Franz and M. Mézard, EPL (Europhysics Letters) 26, 209 (1994). * Cugliandolo _et al._ (1997) L. F. Cugliandolo, J. Kurchan, and L. Peliti, Phys. Rev. E 55, 3898 (1997). * Contucci _et al._ (2019) P. Contucci, J. Kurchan, and E. Mingione, Journal of Physics A: Mathematical and Theoretical 52, 324001 (2019). * Contucci _et al._ (2020) P. Contucci, F. Corberi, J. Kurchan, and E. Mingione, arXiv preprint arXiv:2012.03922 (2020). 
* Franz _et al._ (1998) S. Franz, M. Mézard, G. Parisi, and L. Peliti, Physical Review Letters 81, 1758 (1998). * Franz _et al._ (1999) S. Franz, M. Mezard, G. Parisi, and L. Peliti, Journal of statistical physics 97, 459 (1999). * Sachdev and Ye (1993) S. Sachdev and J. Ye, Phys. Rev. Lett. 70, 3339 (1993). * Kitaev (2015) A. Kitaev, “A simple model of quantum holography,” (2015), _A simple model of quantum holography_ , http://online.kitp.ucsb.edu/online/entangled15/kitaev/, http://online.kitp.ucsb.edu/online/entangled15/kitaev2/. * Maldacena and Stanford (2016) J. Maldacena and D. Stanford, Phys. Rev. D 94, 106002 (2016). * Castillo _et al._ (2002) H. E. Castillo, C. Chamon, L. F. Cugliandolo, and M. P. Kennett, Phys. Rev. Lett. 88, 237201 (2002). * Castillo _et al._ (2003) H. E. Castillo, C. Chamon, L. F. Cugliandolo, J. L. Iguain, and M. P. Kennett, Phys. Rev. B 68, 134442 (2003). * Chamon _et al._ (2004) C. Chamon, P. Charbonneau, L. F. Cugliandolo, D. R. Reichman, and M. Sellitto, J. Chem. Phys. 121, 10120 (2004). * Chamon _et al._ (2002) C. Chamon, M. P. Kennett, H. E. Castillo, and L. F. Cugliandolo, Phys. Rev. Lett. 89, 217201 (2002). * Chamon and Cugliandolo (2007) C. Chamon and L. F. Cugliandolo, J. Stat. Mech. 2007, P07022 (2007). * Chamon _et al._ (2011) C. Chamon, F. Corberi, and L. F. Cugliandolo, J. Stat. Mech. 2011, P08015 (2011). * Parisi and Ricci-Tersenghi (2000) G. Parisi and F. Ricci-Tersenghi, Journal of Physics A: Mathematical and General 33, 113 (2000). * Aizenman and Contucci (1998) M. Aizenman and P. Contucci, Journal of statistical physics 92, 765 (1998). * Contucci _et al._ (2006) P. Contucci, C. Giardina, C. Giberti, and C. Vernia, Physical review letters 96, 217204 (2006). * Castellani and Cavagna (2005) T. Castellani and A. Cavagna, J. Stat. Mech. 2005, P05012 (2005). * Martin _et al._ (1973) P. C. Martin, E. D. Siggia, and H. A. Rose, Phys. Rev. A 8, 423 (1973). * Janssen (1976) H.-K. Janssen, Z. Phys. B 23, 377 (1976). 
* De Dominicis (1976) C. De Dominicis, J. Phys. (Paris), Colloq. 37, 247 (1976). * Cugliandolo and Kurchan (1999a) L. Cugliandolo and J. Kurchan, Physica A 263, 242 (1999a). * Cugliandolo and Kurchan (1999b) L. F. Cugliandolo and J. Kurchan, arXiv preprint cond-mat/9911086 (1999b). * J. Kurchan (1992) J. Kurchan, J. Phys. I France 2, 1333 (1992). * Berthier _et al._ (1999) L. Berthier, J.-L. Barrat, and J. Kurchan, The European Physical Journal B-Condensed Matter and Complex Systems 11, 635 (1999). * Maldacena _et al._ (2017) J. Maldacena, D. Stanford, and Z. Yang, Fortschritte der Physik 65, 1700034 (2017). * Maldacena and Qi (2018) J. Maldacena and X.-L. Qi, arXiv preprint arXiv:1804.00491 (2018). * Rokhsar and Kivelson (1988) D. S. Rokhsar and S. A. Kivelson, Phys. Rev. Lett. 61, 2376 (1988). * Parisi (1988) G. Parisi, _Statistical Field Theory_ (Addison-Wesley, Reading, MA, 1988). * Kurchan (2010) J. Kurchan, _Six out of equilibrium lectures_ , Lecture Notes of the Les Houches Summer School, Vol. 90, Aug 2008 (Oxford University Press, Oxford, 2010) arXiv:0901.1271. * Facoetti _et al._ (2019) D. Facoetti, G. Biroli, J. Kurchan, and D. R. Reichman, Physical Review B 100, 205108 (2019). * Zinn-Justin (2002) J. Zinn-Justin, _Quantum Field Theory and Critical Phenomena_ (Oxford University Press, Oxford, 2002). * Gaveau and Schulman (1998) B. Gaveau and L. S. Schulman, J. Math. Phys. 39, 1517 (1998). * Bovier _et al._ (2004) A. Bovier, M. Eckhoff, V. Gayrard, and M. Klein, (2004). * Biroli and Kurchan (2001) G. Biroli and J. Kurchan, Phys. Rev. E 64, 016101 (2001). * Franz and Parisi (1995) S. Franz and G. Parisi, Journal de Physique I 5, 1401 (1995). * Kurchan _et al._ (1993) J. Kurchan, G. Parisi, and M. A. Virasoro, Journal de Physique I 3, 1819 (1993). * Panchenko (2013) D. Panchenko, Annals of Mathematics , 383 (2013). * Contucci _et al._ (2013) P. Contucci, E. Mingione, and S. Starr, Journal of Statistical Physics 151, 809 (2013).
# Information Causality without concatenation Nikolai Miklin International Centre for Theory of Quantum Technologies (ICTQT), University of Gdansk, 80-308 Gdańsk, Poland Marcin Pawłowski International Centre for Theory of Quantum Technologies (ICTQT), University of Gdansk, 80-308 Gdańsk, Poland ###### Abstract Information Causality is a physical principle which states that the amount of randomly accessible data over a classical communication channel cannot exceed its capacity, even if the sender and the receiver have access to a source of nonlocal correlations. This principle can be used to bound the nonlocality of quantum mechanics without resorting to its full formalism, with a notable example of reproducing the Tsirelson bound of the Clauser-Horne-Shimony-Holt inequality. Despite being promising, the latter result found little generalization to other Bell inequalities because of the limitations imposed by the process of concatenation, in which several nonsignaling resources are put together to produce tighter bounds. In this work, we show that concatenation can be successfully replaced by limits on the communication channel capacity. It allows us to re-derive and, in some cases, significantly improve all the previously known results in a simpler manner and apply the Information Causality principle to previously unapproachable Bell scenarios. ## I Introduction Information Causality (IC) is a physical principle proposed to bound nonlocality of correlations without resorting to the full formalism of quantum mechanics Pawłowski et al. (2009); Pawłowski and Scarani (2016). Instead, the bounds are derived only from the axioms of information theory. In a nutshell, the principle states that if one party has a single use of a communication channel with a capacity $C$ to send the other party some information, then the amount of information potentially available to the receiver cannot exceed $C$ even if the parties share some nonlocal resources.
In Ref. Pawłowski et al. (2009) it was shown that both classical and quantum information theories, and every generalization of them adhering to some of their intuitive properties, obey IC. At the same time, the principle of IC is strong enough to partially recover the boundary of the set of quantum nonlocal correlations Allcock et al. (2009a). Most notably, as shown in Ref. Pawłowski et al. (2009), IC can be used to re-derive the Tsirelson bound of the Clauser-Horne-Shimony-Holt (CHSH) inequality, answering the long-standing question of Popescu and Rohrlich on the reason for its value Cirel’son (1980); Clauser et al. (1969); Popescu and Rohrlich (1994). Moreover, IC was shown to rule out stronger-than-quantum correlations, which could not be detected by other bipartite principles, such as Macroscopic Locality Navascués and Wunderlich (2010); Brassard et al. (2006); Cavalcanti et al. (2010). Finally, the principle of IC is the only known candidate with the potential to recover the exact boundary of the set of bipartite nonlocal quantum correlations Navascués et al. (2015). The problem that one faces when deriving bounds on the strength of nonlocal correlations from IC is that one has to find a suitable communication protocol that makes use of those correlations. Until now, it was believed that to obtain the strongest results, one must use protocols that allow for concatenation, the process of combining the outcomes of several copies of the nonlocal source in a way that increases the amount of potentially available information in stronger-than-quantum theories. This requirement significantly limits the types of Bell scenarios for which the bounds from IC can be derived Pawłowski et al. (2009); Pawłowski and Żukowski (2010); Cavalcanti et al. (2010). In this work, we argue that the procedure of concatenation is not required and is often suboptimal in proofs utilizing IC.
More precisely, we show that by considering a non-identity communication channel with suitably chosen capacity and a single copy of a nonlocal resource, one can: (a) re-derive all the results from Refs. Pawłowski et al. (2009); Pawłowski and Żukowski (2010); (b) tighten the bounds found in Ref. Cavalcanti et al. (2010); (c) apply IC to Bell scenarios for which no concatenation procedure is known. We expect that with the modified construction, IC is likely to become a staple tool for finding bounds on Bell nonlocality in situations when traditionally applied numerical methods are computationally demanding Navascués et al. (2007). ## II IC and concatenation We start by restating the formulation of the IC principle from Ref. Pawłowski and Scarani (2016). Consider a communication scenario in which the sender has $N$ real-valued random variables $a_{0},...,a_{N-1}$ and a single use of a channel with classical communication capacity $C$. Then $\sum_{i=0}^{N-1}I(a_{i};b_{i})\leq C,$ (1) where $b_{i}$ is a random variable denoting the receiver’s guess of the value of $a_{i}$ if the receiver chooses to guess it, and $I(a_{i};b_{i})$ is the Shannon mutual information. In the original formulation of IC, as given by Ref. Pawłowski et al. (2009), the bound on the right-hand side of Eq. (1) is defined as the size of a message that the sender communicates in each round. The latter, somewhat vague statement was later clarified in the authors’ subsequent work, Ref. Pawłowski and Scarani (2016): “Notice that it does not matter how this information is encoded: when we refer to ‘sending the $M$ bit message’, it should be understood as a single use of a channel with classical communication capacity $M$.” In order to apply IC to quantum correlations, in Ref. Pawłowski et al.
(2009) the authors start by considering the following protocol van Dam (2013): The parties share a pair of devices characterized by a probability distribution $P(a,b|x,y)$ with all variables $a,b,x,y$ binary, $x$ and $a$ being the input and output of the sender, and $y$ and $b$ – the input and output of the receiver (here and later in the text, by $P(a,b|x,y)$ we mean all the probabilities $P(a=i,b=j|x=k,y=l)$, $\forall i,j,k,l$). Let $N=2$ and $P(a,b|x,y)$ be such that $b=a\oplus x\cdot y$ with probability $p$ for both $y\in\\{0,1\\}$ ($\oplus$ denotes summation modulo $2$). The sender chooses $x=a_{0}\oplus a_{1}$ and transmits a message $m=a_{0}\oplus a$ to the receiver. In order to learn the $i$-th bit $a_{i}$, the receiver chooses $y=i$ and computes $b_{i}=m\oplus b$. It is straightforward to confirm that $a_{i}=b_{i}$ with probability $p$. If the values of $a_{0}$ and $a_{1}$ are distributed uniformly, this protocol yields $I(a_{i};b_{i})=1-h(p)$, where $h(\cdot)$ is the binary Shannon entropy. If $m$ is sent over a perfect channel, then $C=1$ since $m$ is just a bit. The bound resulting from Eq. (1) is $2(1-h(p))\leq 1$, which implies $p\leq 0.890$. It is easy to see that the CHSH expression for $P(a,b|x,y)$ is also equal to $p$ Clauser et al. (1969). Hence, we have derived a nontrivial bound on the CHSH inequality from IC. However, the maximum quantum value of $p$, known as the Tsirelson bound, is $p_{Q}=\frac{1}{2}\left(1+\frac{1}{\sqrt{2}}\right)\approx 0.854$ Cirel’son (1980), which is significantly lower than what we have just derived. To obtain a tighter bound on $p$ from IC, the authors of Ref. Pawłowski et al. (2009) propose to increase $N$, the number of bits given to the sender at each round, and use concatenation. Concatenation is essentially a process of locally combining inputs and outputs of many copies of the same pair of devices. It is, however, different from the simple “wiring” of devices Allcock et al.
(2009b), as in the case of concatenation the input of each device of the sender also depends on the bits $a_{i}$. Here, we do not give the explicit description of concatenation and refer the reader to Ref. Pawłowski et al. (2009). Instead, we only state some details of it as facts. This procedure works for $N=2^{k}$ for some positive integer $k$. On the sender’s side, the devices are placed in layers, with the $k$-th layer containing $2^{k-1}$ devices. Each of the devices produces a “message” just like in the protocol above, and the resulting $2^{k-1}$ “messages” are taken as inputs for the devices in layer $k-1$. Each layer of concatenation introduces an error diluting the information about each of the bits $a_{i}$. If at some layer $k-1$ the probability of successfully decoding $a_{i}$ is $p_{k-1}$, at the next level $k$ it will be $p_{k}=p_{k-1}p+(1-p_{k-1})(1-p)=\frac{1+e_{k-1}e}{2},$ (2) where $e_{k-1}$ and $e$ are the biases of $p_{k-1}$ and $p$, i.e., $p_{k-1}=\frac{1+e_{k-1}}{2}$ and $p=\frac{1+e}{2}$. The condition in Eq. (1) for a protocol with $k$ levels of concatenation becomes $2^{k}\left(1-h\left(\frac{1+e^{k}}{2}\right)\right)\leq 1.$ (3) For every $k$ it puts an upper bound on $e$, and for $k\to\infty$ the bound on $e$ reaches $\frac{1}{\sqrt{2}}$ while the bound on $p$ converges to our aim, $p_{Q}$. ## III Replacing concatenation with a noisy channel There is, however, an easier way to obtain the bound $p_{Q}$ from IC without the need for concatenation. Let us start by considering the same protocol as before for a single pair of devices, but change the communication channel between the sender and the receiver to a binary symmetric one. Such a channel transmits an unchanged input bit with probability $p_{c}$, and with probability $1-p_{c}$ it returns the flipped bit. The capacity of this channel is $1-h(p_{c})$. The probability for $b_{i}$ to be equal to $a_{i}$ is obtained with the same formula in Eq.
(2), where instead of $p_{k-1}$ we use $p_{c}$, yielding $\frac{1+e_{c}e}{2}$. The condition in Eq. (1), expressed in the terms of biases becomes $2\left(1-h\left(\frac{1+e_{c}e}{2}\right)\right)\leq 1-h\left(\frac{1+e_{c}}{2}\right).$ (4) The bound implied by the above condition becomes stronger as $p_{c}$ and the channel capacity decrease as shown in Fig. 1. For $p_{c}$ approaching $\frac{1}{2}$ the bound on $p$ approaches $p_{Q}$. As we show below, this is a consequence of a more general result. Figure 1: Bound on $p$ as a function of $p_{c}$ characterizing binary symmetric channel. $p\to p_{Q}$ as $p_{c}\to\frac{1}{2}$. ###### Result 1. Any bound on nonlocality in the case of unbiased errors obtained with concatenation procedure can also be obtained by a protocol involving a single pair of devices and a suitably chosen discrete memoryless channel. Before we move to the proof, we need to clarify what we mean by “unbiased errors case”. Let us consider a single pair of the devices producing probability distribution $P(a,b|x,y)$, where $a,b\in\\{0,1,\dots,d-1\\}$, and $y\in\\{0,1,\dots,n-1\\}$. Defining the range of $x$ is not necessary for this argument. Let us assume a protocol involving the communication of one of $d$ possible messages over a classical identity channel such that with probability $p$, the receiver makes the right guess $b_{i}=a_{i}$. The “unbiased errors” assumption means that $p$ does not depend on $i\in\\{0,1,\dots,n-1\\}$, every term in $p$ corresponding to each value of $a_{i}$ is equal, and that all the other “error” cases $b_{i}\neq a_{i}$ are equally probable and also uniformly distributed with respect to $a_{i}$. Arguably, the case of unbiased errors is a special one, but it is general enough to encompass all currently known results Pawłowski et al. (2009); Pawłowski and Żukowski (2010); Cavalcanti et al. (2010). Let us now clarify what we mean by finding “bound on nonlocality”. 
Given $P(a,b|x,y)$, one may ask whether this nonlocal behavior complies with IC’s statement, giving a “yes/no” answer. However, one may instead ask the quantitative question of how much noise needs to be added to $P(a,b|x,y)$ in order for it to satisfy IC. It is standard to consider white noise, and the guessing probability $p$ in IC is proportional to the amount of white noise required. Hence, $p$ quantifies the nonlocality of $P(a,b|x,y)$. In some cases, as for CHSH, the bound on $p$ also implies a bound on the Bell inequality. ###### Proof. Following Ref. Cavalcanti et al. (2010), we generalize the definition of the bias $e$ of a probability $p$ as $p=\frac{1+(d-1)e}{d}.$ (5) If we choose to concatenate the protocol $k$ times, as is done in Refs. Pawłowski and Żukowski (2010); Cavalcanti et al. (2010), the success probability of $b_{i}=a_{i}$ will be $p_{k}=\frac{1+(d-1)e^{k}}{d}$. From the symmetry of the protocol and the unbiasedness of errors, we conclude that all the mutual information terms in the IC expression are equal and given by $I(a_{i};b_{i})=I_{d}(e^{k})$, where $\begin{split}I_{d}(e)&=\log d-h\left(\frac{1+(d-1)e}{d}\right)\\\ &-\frac{(d-1)(1-e)}{d}\log(d-1).\end{split}$ (6) The above expression is the right-hand side of Fano’s inequality Fano (1961), which is an equality in this case (see Appendix A for a short proof). Let us assume that $k$ is such that $k$ levels of concatenation are not enough to demonstrate that $p$ violates IC, but $k+1$ are. In other words: $\begin{split}n^{k}I_{d}(e^{k})\leq\log d\\\ n^{k+1}I_{d}(e^{k+1})>\log d,\end{split}$ (7) which implies $nI_{d}(e^{k+1})>I_{d}(e^{k}).$ (8) Let us now take a single pair of devices and let the parties communicate over a discrete memoryless channel with uniform errors, i.e., with probability $p_{c}$ the message is unchanged and with probability $1-p_{c}$ it is changed to one of the other $d-1$ messages in accordance with the uniform distribution.
Let $p_{c}=\frac{1+(d-1)e^{k}}{d}$; then the capacity of the channel is $C=I_{d}(e^{k})$ (see Appendix A for a short proof). The probability for $a_{i}=b_{i}$ is $p_{c}p+\frac{(1-p_{c})(1-p)}{d-1}=\frac{1+(d-1)e^{k+1}}{d}$ and $I(a_{i};b_{i})=I_{d}(e^{k+1})$, for all $i\in\\{0,1,\dots,n-1\\}$. Therefore, in this case the IC condition in Eq. (1) reads $nI_{d}(e^{k+1})\leq C=I_{d}(e^{k}).$ (9) If the probability $p$ is such that Eq. (8) holds, the above inequality is violated, which means that $p$ can also be detected by IC with a single pair of devices and a discrete memoryless channel with capacity $C$. ∎ ## IV New and tighter bounds In this section, we demonstrate that the bounds on nonlocality obtained with the new approach are strictly better in some cases. We also discuss cases in which this approach can provide bounds while the concatenation procedure is not applicable. In the proof of Result 1, instead of choosing $p_{c}$ to be the success probability corresponding to the $k$-th level of concatenation, we can take $p_{c}=\frac{1+(d-1)e_{c}}{d}$ and optimize over $e_{c}$. The condition that we use to bound nonlocality is then $nI_{d}(e_{c}e)\leq I_{d}(e_{c}).$ (10) Even though it is cumbersome to write the solution for the optimal $e_{c}$ explicitly, the optimization over a single real parameter can be carried out to arbitrary numerical precision. In Table 1 we compare the bounds on the bias $e$ of the success probability $p$ implied by Eq. (10) with the bounds from Ref. Cavalcanti et al. (2010), calculated using the concatenation procedure, for $n=2$.

 | Optimal $e_{c}$ | $e$ | $e^{\prime}$
---|---|---|---
d=3 | 0.295 | 0.702 | 0.708
d=4 | 0.389 | 0.696 | 0.705
d=5 | 0.436 | 0.690 | 0.700
d=20 | 0.531 | 0.648 | 0.659

Table 1: Optimal $e_{c}$ is the value for which Eq. (10) puts the tightest bound on $e$; $e$ is the improved bound, while $e^{\prime}$ is the bound from Ref. Cavalcanti et al. (2010).
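The optimization in Eq. (10) is easy to reproduce numerically. The sketch below (our own code; the function names are illustrative, not from the paper) finds, for each trial bias $e_{c}$, the largest $e$ compatible with Eq. (10) by bisection, and then scans over $e_{c}$ for the tightest bound; it reproduces the $d=3$ row of Table 1, and for $d=2$ with $e_{c}$ close to $0$ it recovers the limit $1/\sqrt{2}$ discussed above for the binary symmetric channel.

```python
import math

def h(p):
    """Binary entropy in bits."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def I_d(d, e):
    """Mutual information I_d(e) from Eq. (6), as a function of the bias e."""
    p = (1 + (d - 1) * e) / d
    return math.log2(d) - h(p) - (d - 1) * (1 - e) / d * math.log2(d - 1)

def bound_on_e(d, n, ec, iters=60):
    """Largest bias e with n * I_d(ec * e) <= I_d(ec) (Eq. (10)), by bisection."""
    cap = I_d(d, ec)
    lo, hi = 0.0, 1.0
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if n * I_d(d, ec * mid) <= cap:
            lo = mid
        else:
            hi = mid
    return lo

def tightest_bound(d, n, steps=1000):
    """Scan ec on a grid; return (tightest bound on e, corresponding ec)."""
    return min((bound_on_e(d, n, i / steps), i / steps) for i in range(1, steps))

e3, ec3 = tightest_bound(3, 2)
print(f"d=3: optimal e_c ~ {ec3:.3f}, bound e ~ {e3:.3f}")    # ~0.295, ~0.702
print(f"d=2, e_c=0.01: bound ~ {bound_on_e(2, 2, 0.01):.4f}")  # ~1/sqrt(2)
```

The bisection is valid because $I_{d}$ is increasing in the bias on $[0,1]$, so the feasibility condition in Eq. (10) is monotone in $e$.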
The major challenge in finding bounds on nonlocality with the concatenation procedure is calculating the success probabilities at each level. This calculation is easy only if one assumes the unbiasedness of errors, as described in Result 1. This assumption puts a significant constraint on the choice of the protocol and on the correlations $P(a,b|x,y)$ that can be easily bounded using IC. On the contrary, the method suggested in this paper can be applied to any protocol. Below we give an example of bounding nonlocality in a Bell scenario with $3$ settings per party and binary outcomes (often referred to as the $3322$ scenario). Let us consider a non-signaling distribution $P_{NS}(a,b|x,y)$ ($x,y\in\\{0,1,2\\}$) given by the following relation $a\oplus b=\begin{cases}1,&\text{if }(x,y)\in\\{(1,2),(2,1),(2,2)\\}\\\ 0,&\text{otherwise},\end{cases}$ (11) with all marginal probability distributions being uniform, i.e., $\sum_{i}P(a=i,b=j|x,y)=\sum_{j}P(a=i,b=j|x,y)=\frac{1}{2},\forall i,j$. This distribution gives the maximal violation, equal to $1$, of the $I_{3322}$ inequality Collins and Gisin (2004). Let us now consider a pair of devices producing correlations $P_{e}(a,b|x,y)=eP_{NS}(a,b|x,y)+(1-e)P_{L}(a,b|x,y)$, where $P_{L}(a,b|x,y)$ is the white noise distribution for which $P_{L}(a=i,b=j|x,y)=\frac{1}{4},\forall i,j$ and all values of $x,y$. The value of the $I_{3322}$ inequality for the considered mixture is $2e-1$. We ask what is the maximal degree of nonlocality, specified by $e$, that is allowed by IC. We can find a protocol, which we specify in Appendix B, that is optimal for the nonlocal correlations $P_{NS}(a,b|x,y)$ from Eq. (11). Using the symmetric channel, parameterized by $e_{c}$, in the limit of $e_{c}\rightarrow 0$ we obtain a bound $e\rightarrow\frac{2}{3}$, which is not far from the quantum bound of $\frac{3}{5}$, as can be confirmed (up to numerical precision) by the hierarchy of semidefinite programs of Ref. Navascués et al. (2007).
To compare, the bound that can be derived from IC with a channel of capacity 1 is about $0.7445$, and there is no clear way to construct a concatenation procedure for this protocol or to calculate the corresponding guessing probabilities. ## V Discussion We have shown that concatenation can be successfully replaced by considering different classical communication channels in the protocols used to bound nonlocality with IC. Apart from showing that all the results obtained so far can be re-derived with the new approach, we also showed that some of them can be improved. Additionally, we gave an example of a scenario that would be very challenging to approach with the concatenation procedure. Perhaps the most important goal of this paper is to show that IC’s scope of applicability can be significantly widened. Therefore, our paper opens many possible directions for future work. We list some of the open questions which we believe deserve a separate study. In the current paper, we limited ourselves to a particular type of discrete memoryless channels, namely symmetric channels. These channels seem very well suited to analyzing Bell inequalities which correspond to unique games, that is, games in which for every combination of the parties’ inputs and one party’s output, there is a unique output for the other party that wins the game. However, for other Bell inequalities, different channels can yield stronger results, and finding optimal ones is an open question. Additionally, considering random data in the IC scenario with different alphabets could be used to bound nonlocality in distributions with a bias towards one of the outcomes. Finally, the statement of IC can be read in the opposite direction. Namely, given a value of the guessing probability, one can obtain a lower bound on the minimal communication required for such correlations, which can be used for randomness certification.
## VI Acknowledgements We acknowledge support by the Foundation for Polish Science (IRAP project, ICTQT, contract no. 2018/MAB/5, co-financed by EU within Smart Growth Operational Programme); M.P. acknowledges the support of NCN through grant SHENG (2018/30/Q/ST2/00625). This research was made possible by funding from QuantERA, an ERA-Net cofund in Quantum Technologies (www.quantera.eu), under the project eDICT. ## Appendix A Technical details of the proof of Result 1 Fano’s inequality for two random variables $a$ and $b$ taking values in $\\{0,1,\dots,d-1\\}$ can be written as follows $I(a;b)\geq\log d-h\left(p\right)-(1-p)\log(d-1),$ (12) where $p=\mathrm{P}(a=b)$. In Eq. (6) we took $p=\frac{1+(d-1)e}{d}$ and denoted the function on the right-hand side of Eq. (12) by $I_{d}(e)$. Let $P(a=i,b=j)=r(j|i)P(a=i)$ be the decomposition of the joint probability distribution of $a$ and $b$ with a response function (conditional distribution) $r(j|i)$. For the probabilities appearing in the derivation of Result 1, the form of $r(j|i)$ is the following: $r(i|i)=p,\forall i$, and $r(j|i)=\frac{1-p}{d-1},\forall j\neq i$. The mutual information $I(a;b)$ is by definition equal to $I(a;b)=\sum_{i=0}^{d-1}\sum_{j=0}^{d-1}P(a=i)r(j|i)\log\frac{r(j|i)}{\sum_{l=0}^{d-1}r(j|l)P(a=l)}.$ (13) Since the input $a$ is always taken to be uniformly distributed, we have that $P(a=i)=\frac{1}{d}$, and hence $\sum_{l=0}^{d-1}r(j|l)P(a=l)=\frac{1}{d},\forall j$. From here we can deduce that $\begin{split}I(a;b)&=\log(d)+\frac{1}{d}\sum_{i=0}^{d-1}\sum_{j=0}^{d-1}r(j|i)\log(r(j|i))\\\ &=\log(d)+p\log(p)+(1-p)\log\left(\frac{1-p}{d-1}\right),\end{split}$ (14) which is exactly equal to the right-hand side of Eq. (12).
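The equality just derived is straightforward to check numerically. The following sketch (our own code, with illustrative names) evaluates the double sum in Eq. (13) directly for the uniform input and the symmetric response function, and compares it with the right-hand side of Eq. (12):

```python
import math

def h(p):
    """Binary entropy in bits."""
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p) if 0 < p < 1 else 0.0

def mutual_information(d, p):
    """Direct evaluation of Eq. (13): uniform P(a) and the symmetric r(j|i)."""
    total = 0.0
    for i in range(d):
        for j in range(d):
            r = p if i == j else (1 - p) / (d - 1)
            if r > 0:
                # P(b=j) = 1/d for the uniform input, so the log ratio is r*d
                total += (1.0 / d) * r * math.log2(r * d)
    return total

def fano_rhs(d, p):
    """Right-hand side of Fano's inequality, Eq. (12)."""
    return math.log2(d) - h(p) - (1 - p) * math.log2(d - 1)

for d, e in [(2, 0.5), (3, 0.2), (5, 0.7)]:
    p = (1 + (d - 1) * e) / d
    assert abs(mutual_information(d, p) - fano_rhs(d, p)) < 1e-12
print("Eq. (13) matches the RHS of Eq. (12) in all tested cases")
```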
Now, we give a short proof that the capacity of the discrete memoryless channel which transmits a $d$-valued message unchanged with probability $p_{c}=\frac{1+(d-1)e_{c}}{d}$ and with probability $1-p_{c}$ changes it to one of the $d-1$ other values according to the uniform distribution is equal to $I_{d}(e_{c})$ (given by the right-hand side of Eq. (12)). By definition, the capacity of a discrete memoryless channel is $C=\max_{P(a)}I(a;b),$ (15) where $b$ is a guess of $a$, and $P(a)$ is the distribution of $a$. The proof is similar to the one above, since the response function $r(j|i)$ is exactly the same for the considered channel (where we substitute $p$ with $p_{c}$). The only part which requires a proof is that the optimal distribution $P(a)$ is the uniform one. This can be easily seen from the fact that entropy is a concave function and that the form of $r(j|i)$ is symmetric. ## Appendix B Protocol for the 3322 scenario Here, we give the specification of a protocol for the $3322$ scenario. In this protocol, the sender has access to three bits $a_{0},a_{1},a_{2}$, which determine the choice of measurement $x\in\\{0,1,2\\}$. The bit message $m$ is a function of $a_{0},a_{1},a_{2}$ and the sender’s outcome $a$. The decoding function determines the guess $b_{i}$ of $a_{i}$, depending on $i\in\\{0,1,2\\}$. The receiver chooses the measurement according to $i$ in the most obvious way: $y=i$. The decoding functions are $b_{0}=m\oplus b\oplus 1$, $b_{1}=m\oplus b$, and $b_{2}=m\oplus b\oplus 1$. The message is $m=a_{0}\oplus a\oplus 1$. Below we specify the function for $x$ by a truth table.

$a_{0}$ | $a_{1}$ | $a_{2}$ | $x$
---|---|---|---
$0$ | $0$ | $0$ | $0$
$0$ | $0$ | $1$ | $2$
$0$ | $1$ | $0$ | $0$
$0$ | $1$ | $1$ | $1$
$1$ | $0$ | $0$ | $1$
$1$ | $0$ | $1$ | $0$
$1$ | $1$ | $0$ | $2$
$1$ | $1$ | $1$ | $0$ (16)

The above protocol was obtained using simulated annealing Khachaturyan et al. (1979). ## References * Pawłowski et al. (2009) M.
Pawłowski, T. Paterek, D. Kaszlikowski, V. Scarani, A. Winter, and M. Żukowski, Nature 461, 1101 (2009), ISSN 1476-4687, URL https://doi.org/10.1038/nature08400. * Pawłowski and Scarani (2016) M. Pawłowski and V. Scarani, _Information Causality_ (Springer Netherlands, Dordrecht, 2016), pp. 423–438, ISBN 978-94-017-7303-4, URL https://doi.org/10.1007/978-94-017-7303-4_12. * Allcock et al. (2009a) J. Allcock, N. Brunner, M. Pawlowski, and V. Scarani, Phys. Rev. A 80, 040103 (2009a), URL https://link.aps.org/doi/10.1103/PhysRevA.80.040103. * Cirel’son (1980) B. S. Cirel’son, Letters in Mathematical Physics 4, 93 (1980), ISSN 1573-0530, URL https://doi.org/10.1007/BF00417500. * Clauser et al. (1969) J. F. Clauser, M. A. Horne, A. Shimony, and R. A. Holt, Phys. Rev. Lett. 23, 880 (1969), URL https://link.aps.org/doi/10.1103/PhysRevLett.23.880. * Popescu and Rohrlich (1994) S. Popescu and D. Rohrlich, Foundations of Physics 24, 379 (1994), ISSN 1572-9516, URL https://doi.org/10.1007/BF02058098. * Navascués and Wunderlich (2010) M. Navascués and H. Wunderlich, Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences 466, 881 (2010). * Brassard et al. (2006) G. Brassard, H. Buhrman, N. Linden, A. A. Méthot, A. Tapp, and F. Unger, Phys. Rev. Lett. 96, 250401 (2006), URL https://link.aps.org/doi/10.1103/PhysRevLett.96.250401. * Cavalcanti et al. (2010) D. Cavalcanti, A. Salles, and V. Scarani, Nature Communications 1, 136 (2010), ISSN 2041-1723, URL https://doi.org/10.1038/ncomms1138. * Navascués et al. (2015) M. Navascués, Y. Guryanova, M. J. Hoban, and A. Acín, Nature Communications 6, 6288 (2015), ISSN 2041-1723, URL https://doi.org/10.1038/ncomms7288. * Pawłowski and Żukowski (2010) M. Pawłowski and M. Żukowski, Phys. Rev. A 81, 042326 (2010), URL https://link.aps.org/doi/10.1103/PhysRevA.81.042326. * Navascués et al. (2007) M. Navascués, S. Pironio, and A. Acín, Phys. Rev. Lett. 
98, 010401 (2007), URL https://link.aps.org/doi/10.1103/PhysRevLett.98.010401. * van Dam (2013) W. van Dam, Natural Computing 12, 9 (2013), ISSN 1572-9796, URL https://doi.org/10.1007/s11047-012-9353-6. * Allcock et al. (2009b) J. Allcock, N. Brunner, N. Linden, S. Popescu, P. Skrzypczyk, and T. Vértesi, Phys. Rev. A 80, 062107 (2009b), URL https://link.aps.org/doi/10.1103/PhysRevA.80.062107. * Fano (1961) R. M. Fano, American Journal of Physics 29, 793 (1961). * Collins and Gisin (2004) D. Collins and N. Gisin, Journal of Physics A: Mathematical and General 37, 1775 (2004), URL https://doi.org/10.1088/0305-4470/37/5/021. * Khachaturyan et al. (1979) A. Khachaturyan, S. Semenovskaya, and B. Vainstein, Sov. Phys. Crystallography 24, 519 (1979).
# The isoperimetric problem on Riemannian manifolds via Gromov–Hausdorff asymptotic analysis Gioacchino <EMAIL_ADDRESS> Scuola Normale Superiore, Piazza dei Cavalieri, 7, 56126 Pisa, Italy. Mattia <EMAIL_ADDRESS> Centro di Ricerca Matematica Ennio De Giorgi, Scuola Normale Superiore, Piazza dei Cavalieri, 3, 56126 Pisa, Italy. Marco <EMAIL_ADDRESS> Dipartimento di Matematica e Applicazioni, Università di Napoli Federico II, Via Cintia, Monte S. Angelo, 80126 Napoli, Italy. ###### Abstract In this paper we prove the existence of isoperimetric regions of any volume in Riemannian manifolds with Ricci curvature bounded below and with a mild assumption at infinity, that is, Gromov–Hausdorff asymptoticity to simply connected models of constant sectional curvature. The previous result is a consequence of a general structure theorem for perimeter-minimizing sequences of sets of fixed volume on noncollapsed Riemannian manifolds with a lower bound on the Ricci curvature. We show that, without assuming any further hypotheses on the asymptotic geometry, all the mass and the perimeter lost at infinity, if any, are recovered by at most countably many isoperimetric regions sitting in some Gromov–Hausdorff limits at infinity. The Gromov–Hausdorff asymptotic analysis conducted here allows us to provide, in low dimensions, a result of nonexistence of isoperimetric regions in Cartan–Hadamard manifolds that are Gromov–Hausdorff asymptotic to the Euclidean space. While studying the isoperimetric problem in the smooth setting, the nonsmooth geometry naturally emerges, and thus our treatment combines techniques from both theories. ###### Contents 1. 1 Introduction 2. 2 Definitions and preliminary results 1. 2.1 $\mathsf{RCD}$ spaces 2. 2.2 Finite perimeter sets and GH-convergence 3. 3 Asymptotic geometry and isoperimetric profile 4. 4 Asymptotic mass decomposition of minimizing sequences 1. 4.1 Concentration lemmas 2. 4.2 Asymptotic mass decomposition 5. 5 Existence and rigidity 1.
5.1 Existence theorems 2. 5.2 Rigidity properties of the minimizing sequences 6. 6 Applications and examples 7. A Comparison results in Riemannian Geometry 8. B Boundedness of isoperimetric regions ## 1 Introduction The classical isoperimetric problem can be formulated on every ambient space possessing notions of _volume_ and _perimeter_ on (some subclass of) its subsets. Among sets having assigned positive volume, the problem deals with finding those having least perimeter. Among the most basic questions in the context of the isoperimetric problem, one would naturally ask whether perimeter-minimizing sets exist, but also what goes wrong in the minimization process if such minimizers do not exist. We are interested here in the isoperimetric problem set on smooth Riemannian manifolds and in giving a good description of the minimization process under general assumptions. We denote by $(M^{n},g)$ a Riemannian manifold of dimension $n$ with metric tensor $g$. _We will always assume, unless specified differently, that $n\geq 2$_. The symbols $\mathsf{d},\operatorname{vol},P$ denote the geodesic distance, the volume measure, and the perimeter functional naturally induced by $g$. In such a framework, the isoperimetric problem consists in the minimization problem $\min\left\\{P(E)\ :\ \operatorname{vol}(E)=V\right\\},$ for fixed $V\in(0,\operatorname{vol}(M^{n}))$, where the competitors $E\subset M^{n}$ are finite perimeter sets on $(M^{n},g)$. The infimum of the perimeter $P(E)$ among such competitors of given volume $V$ is called _isoperimetric profile of $M^{n}$ at $V$_ and it is commonly denoted by $I_{(M^{n},g)}(V)$. If $\operatorname{vol}(E)=V$ and $P(E)=I_{(M^{n},g)}(V)$, that is, if $E$ solves the isoperimetric problem for its own volume, we will say that $E$ is an _isoperimetric region_ (or an isoperimetric set). Unless otherwise stated, we will also always assume that $(M^{n},g)$ is complete, noncompact, and has infinite volume.
(1.1) In fact, if $(M^{n},g)$ is compact, an easy application of direct methods in the Calculus of Variations provides the existence of isoperimetric regions for any volume in $(0,\operatorname{vol}(M^{n}))$; also, when $(M^{n},g)$ is complete and noncompact but with finite volume, the existence of isoperimetric regions for any volume is ensured by the application of [66, Theorem 2.1 and Remark 2.3]. A very classical way of studying the isoperimetric problem at a given volume $V>0$ is to argue by means of direct methods in the Calculus of Variations. So, for a given Riemannian manifold $(M^{n},g)$ and a volume $V>0$, one considers a sequence of finite perimeter sets $\Omega_{i}\subset M^{n}$ with $\operatorname{vol}(\Omega_{i})=V$ and $P(\Omega_{i})\to I_{(M^{n},g)}(V)$. It is well known from the theory of finite perimeter sets that, up to a subsequence, $\Omega_{i}$ converges in $L^{1}_{\rm loc}$ to a set $\Omega$; the perimeter is lower semicontinuous under this convergence, but the volume of $\Omega$ might be strictly less than $V$. It is then common to try to understand the consequences of having lost part of the mass of the minimizing sequence at infinity. Indeed, under suitable assumptions on the geometry of $(M^{n},g)$, one can try to infer that a potential leak of mass would be unfavorable, thus obtaining existence results. By the same approach, one can also grasp new information about the isoperimetric profile $I_{(M^{n},g)}$. Since the possible leak of mass of a minimizing sequence is due to the fact that the ambient $M^{n}$ is not compact, it has been natural in the literature to make a priori asymptotic assumptions on the manifold $(M^{n},g)$.
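As a concrete benchmark (our own illustration, not part of the paper's argument): on $\mathbb{R}^{n}$ the isoperimetric regions are round balls, so the profile has the closed form $I_{\mathbb{R}^{n}}(V)=n\,\omega_{n}^{1/n}V^{(n-1)/n}$, where $\omega_{n}$ is the volume of the unit ball. A short numerical sketch:

```python
import math

def euclidean_profile(n, V):
    """Isoperimetric profile of R^n: I(V) = n * omega_n^(1/n) * V^((n-1)/n)."""
    omega_n = math.pi ** (n / 2) / math.gamma(n / 2 + 1)  # volume of the unit n-ball
    return n * omega_n ** (1 / n) * V ** ((n - 1) / n)

# consistency check against an explicit ball in R^3:
# vol = (4/3) pi r^3 and perimeter (surface area) = 4 pi r^2
r = 2.0
V = 4.0 / 3.0 * math.pi * r ** 3
print(abs(euclidean_profile(3, V) - 4.0 * math.pi * r ** 2) < 1e-9)  # True
```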
In [59], Nardulli assumed that $(M^{n},g)$ is noncollapsed, that its Ricci curvature is bounded from below, and that for any sequence of points $p_{i}\in M^{n}$ there exists a pointed Riemannian manifold $(M^{n}_{\infty},\mathsf{d}_{\infty},p_{\infty})$ such that $(M^{n},\mathsf{d},p_{i})\to(M^{n}_{\infty},\mathsf{d}_{\infty},p_{\infty})$ in a suitable pointed $C^{1,\alpha}$-sense. We recall that a Riemannian manifold $(M^{n},g)$ is said to be _noncollapsed_ if there is $v_{0}>0$ such that $\operatorname{vol}(B_{1}(p))\geq v_{0}$ for all $p\in M^{n}$. Here with $B_{r}(p)$ we denote the open ball of radius $r$ and center $p\in M^{n}$ according to the distance $\mathsf{d}$. What he proved in [59, Theorem 2] is that, under the latter asymptotic condition, a description of the mass lost at infinity in the previous minimization process can be given, more precisely showing that it is recovered by finitely many isoperimetric regions, each of them contained in one of the limit manifolds $(M^{n}_{\infty},\mathsf{d}_{\infty},p_{\infty})$. On the other hand, it turns out, see Remark 2.12, that the class of pointed uniformly noncollapsed manifolds of a given dimension having a uniform lower bound on the Ricci tensor is precompact with respect to pointed measure Gromov–Hausdorff (pmGH for short) convergence, see 2.7 for such a notion. Actually, the precompactness property holds at the level of $\mathsf{RCD}$ spaces, which are metric measure spaces with a synthetic lower bound on the Ricci tensor (see Section 2.1), with uniform bounds on the measure of unit balls. This means that given any sequence of points $p_{i}$ on a noncollapsed manifold $(M^{n},g)$ with Ricci curvature bounded below, the sequence of pointed metric measure spaces $(M^{n},\mathsf{d},\operatorname{vol},p_{i})$ converges in the pmGH sense to a pointed $\mathsf{RCD}$ space, up to a subsequence.
Let us point out that, as a consequence of the celebrated volume convergence theorem in [28], the measure on such a limit is the Hausdorff measure of dimension $n$, and thus the limit is in particular an $\mathsf{ncRCD}$ space, see 2.9. One may then hope for a description analogous to the one mentioned above, coming from [59], without further assumptions on $(M^{n},g)$ other than noncollapsedness and a lower bound on the Ricci tensor, exploiting the pmGH precompactness in order to give a description of the lost mass. In fact, the first of our main results is the following theorem, which states precisely that minimizing sequences $\Omega_{i}$ of a given volume $V$ split into a “converging” part $\Omega_{i}^{c}$ and at most countably many “diverging” parts $\Omega_{i,j}^{d}$ that converge in a suitable sense to isoperimetric regions in pmGH limit $\mathsf{ncRCD}$ spaces. Moreover, the limits of $\Omega_{i}^{c}$ and of each $\Omega_{i,j}^{d}$ recover the assigned volume $V$ and the isoperimetric profile of $(M^{n},g)$ at $V$ (in the sense of (1.2) below). All in all, the forthcoming result gives a description of the asymptotic behavior of the diverging mass of minimizing sequences. We stress that the identification of the “converging” part $\Omega_{i}^{c}$ of a minimizing sequence, which is the starting point of our arguments, is a classical result due to Ritoré–Rosales [66, Theorem 2.1]. ###### Theorem 1.1 (Asymptotic mass decomposition). Let $(M^{n},g)$ be a noncollapsed manifold as in (1.1), such that $\mathrm{Ric}\geq(n-1)k$ for some $k\in(-\infty,0]$, and let $V>0$.
For every minimizing (for the perimeter) sequence $\Omega_{i}\subset M^{n}$ of volume $V$, with $\Omega_{i}$ bounded for any $i$, up to passing to a subsequence, there exist an increasing sequence $\\{N_{i}\\}_{i\in\mathbb{N}}\subseteq\mathbb{N}$, disjoint finite perimeter sets $\Omega_{i}^{c},\Omega_{i,j}^{d}\subset\Omega_{i}$, and points $p_{i,j}$, with $1\leq j\leq N_{i}$ for any $i$, such that

* • $\lim_{i}\mathsf{d}(p_{i,j},p_{i,\ell})=\lim_{i}\mathsf{d}(p_{i,j},o)=+\infty$, for any $j\neq\ell<\overline{N}+1$ and any $o\in M^{n}$, where $\overline{N}:=\lim_{i}N_{i}\in\mathbb{N}\cup\\{+\infty\\}$;

* • $\Omega_{i}^{c}$ converges to $\Omega\subset M^{n}$ in the sense of finite perimeter sets (2.1), with $\operatorname{vol}(\Omega_{i}^{c})\to_{i}\operatorname{vol}(\Omega)$ and $P(\Omega_{i}^{c})\to_{i}P(\Omega)$. Moreover, $\Omega$ is a bounded isoperimetric region;

* • for every $j<\overline{N}+1$, $(M^{n},\mathsf{d},\operatorname{vol},p_{i,j})$ converges in the pmGH sense to a pointed $\mathsf{ncRCD}(k,n)$ space $(X_{j},\mathsf{d}_{j},\mathfrak{m}_{j},p_{j})$. Moreover, there are isoperimetric regions $Z_{j}\subset X_{j}$ such that $\Omega^{d}_{i,j}\to_{i}Z_{j}$ in $L^{1}$-strong (2.17) and $P(\Omega^{d}_{i,j})\to_{i}P_{X_{j}}(Z_{j})$;

* • it holds that $I_{(M^{n},g)}(V)=P(\Omega)+\sum_{j=1}^{\overline{N}}P_{X_{j}}(Z_{j}),\qquad\qquad V=\operatorname{vol}(\Omega)+\sum_{j=1}^{\overline{N}}\mathfrak{m}_{j}(Z_{j}).$ (1.2)

Some comments about the above statement are in order. First of all, the fact that the sets of the minimizing sequence are assumed to be bounded does not undermine the generality, because sets in minimizing sequences for the isoperimetric problem can always be taken to be bounded by the approximation result recalled in Remark 2.3.
Also, in the above statement the measures $\mathfrak{m}_{j}$ actually coincide with the $n$-dimensional Hausdorff measures of the metric spaces $(X_{j},\mathsf{d}_{j})$ by definition of $\mathsf{ncRCD}$ space, and the perimeter $P_{X_{j}}$ is defined accordingly (see 2.13). Moreover, we point out that the convergence in the $L^{1}$-strong sense in particular implies the convergence of the volumes of the sets, i.e., $\operatorname{vol}(\Omega_{i,j}^{d})\to_{i}\mathfrak{m}_{j}(Z_{j})$. The above theorem is actually a simplification of a more detailed result, whose technical statement can be found in 4.6. The main advantage of that complete formulation is the detailed construction of the sets $\Omega_{i,j}^{d}$ from $\Omega_{i}^{d}\vcentcolon=\Omega_{i}\setminus\Omega_{i}^{c}$, that is, the diverging part of the minimizing sequence. We wish to emphasize that the above result is most likely to hold also in the nonsmooth ambient of an $\mathsf{ncRCD}$ space with a uniform lower bound on the volume of unit balls, in place of a smooth noncollapsed Riemannian manifold with Ricci curvature bounded from below. However, one of our motivations was to show how the nonsmooth theory becomes natural in studying the behavior of the runaway portions of minimizing sequences already on classical smooth Riemannian manifolds. Let us also point out that another class of spaces where a similar asymptotic decomposition of the minimizing sequences has been performed is that of the unbounded convex bodies in Euclidean spaces, treated in [44]. We also mention that a decomposition result like 1.1 holds also without the assumption that the sequence is minimizing; more precisely, one can prove that an arbitrary sequence of sets with uniformly bounded volume and perimeter splits, up to a subsequence, into subsets converging in $L^{1}_{\rm loc}$ to limit sets sitting either in $M^{n}$ or in some GH-limits at infinity. This yields a result of generalized compactness analogous to [57].
In view of 1.1, it then becomes natural to affirm that the more one knows about the GH-asymptotic structure of the manifold, the more information one gets about the minimizing sequence, and in turn about the isoperimetric problem. We propose the following notion of GH-asymptoticity. ###### Definition 1.2 (GH-asymptoticity). Let $(M^{n},g)$ be a noncompact Riemannian manifold with distance $\mathsf{d}$ and volume measure $\operatorname{vol}$. We say that $(M^{n},g)$ is _Gromov–Hausdorff asymptotic, $\mathrm{GH}$-asymptotic for short, to a metric space_ $(X,\mathsf{d}_{X})$ if for any diverging sequence of points $q_{i}\in M^{n}$, i.e., such that $\mathsf{d}(q,q_{i})\to+\infty$ for any $q\in M^{n}$, there is $x_{0}\in X$ such that $(M^{n},\mathsf{d},q_{i})\xrightarrow[i\to+\infty]{}(X,\mathsf{d}_{X},x_{0}),$ in the $\mathrm{pGH}$-sense (see 2.7). We say that $(M^{n},g)$ is _measure Gromov–Hausdorff asymptotic, $\mathrm{mGH}$-asymptotic for short, to a metric measure space_ $(X,\mathsf{d}_{X},\mathfrak{m}_{X})$ if for any diverging sequence of points $q_{i}\in M^{n}$, there is $x_{0}\in X$ such that $(M^{n},\mathsf{d},\operatorname{vol},q_{i})\xrightarrow[i\to+\infty]{}(X,\mathsf{d}_{X},\mathfrak{m}_{X},x_{0}),$ in the $\mathrm{pmGH}$-sense (see 2.7). In the above definition, if $(X,\mathsf{d}_{X},\mathfrak{m}_{X})$ is such that for every $x_{1},x_{2}\in X$ there is an isometry $\varphi:X\to X$ such that $\varphi(x_{1})=x_{2}$ and $\varphi_{\sharp}\mathfrak{m}_{X}=\mathfrak{m}_{X}$, then $(M^{n},g)$ is mGH-asymptotic to $(X,\mathsf{d}_{X},\mathfrak{m}_{X})$ if for any diverging sequence of points $q_{i}\in M^{n}$, it occurs that $(M^{n},\mathsf{d},\operatorname{vol},q_{i})\to(X,\mathsf{d}_{X},\mathfrak{m}_{X},x)$ for any $x\in X$. Loosely speaking, in such a case the point at which the limit space is pointed does not matter. We remark that the simply connected Riemannian manifolds of constant sectional curvature satisfy the property above.
The following 1.3 enables us to provide a full generalization of the existence result by Mondino–Nardulli [51, Theorem 1.2], where the $C^{0}$-asymptoticity assumption therein is weakened to a GH-asymptoticity hypothesis. For the next statement see 5.2 and 5.7. ###### Theorem 1.3. Let $k\in(-\infty,0]$ and let $(M^{n},g)$ be as in (1.1) such that $\mathrm{Ric}\geq(n-1)k$ on $M^{n}\setminus\mathcal{C}$, where $\mathcal{C}$ is compact. Suppose that $(M^{n},g)$ is GH-asymptotic to the simply connected model of constant sectional curvature $k$ and dimension $n$. Then for any $V>0$ there exists an isoperimetric region of volume $V$ on $(M^{n},g)$. Moreover, in the special case when $M^{n}$ is as in (1.1), $\mathrm{Ric}\geq 0$ everywhere on $M^{n}$, and $M^{n}$ is GH-asymptotic to $\mathbb{R}^{n}$, the isoperimetric regions are indecomposable. We remark that in the first part of the above statement the curvature assumption is imposed only on the ends of $M^{n}$. We recall that an isoperimetric region $\Omega$ is said to be indecomposable if whenever $E,F$ are finite perimeter sets such that $\operatorname{vol}(\Omega\Delta(E\cup F))=0$ and $P(\Omega)=P(E)+P(F)$, then $\operatorname{vol}(E)=0$ or $\operatorname{vol}(F)=0$. Let us quickly describe some examples that satisfy the hypotheses of 1.3. We will notice in 6.6 that, as a consequence of standard comparison theorems, a complete Riemannian manifold $(M^{n},g)$ for which the injectivity radius diverges to $+\infty$ and the sectional curvature converges to $k\in\mathbb{R}$ at infinity is GH-asymptotic (actually even $C^{0}$-asymptotic, see Remark 6.3) to the simply connected model of constant sectional curvature $k$ and dimension $n$. Hence, if on a manifold satisfying the latter assumption we have a lower bound $\mathrm{Ric}\geq(n-1)k$ outside a compact set, 1.3 applies and we get the existence of isoperimetric regions for any volume.
This is the case, for example, of ALE gravitational instantons and of the class of warped products described in Remark 6.14, which contains, for example, Bryant-type solitons (see [27, Chapter 4, Section 6] and references therein). Moreover, the combination with a fundamental estimate on the injectivity radius [24] enables us to show that nonnegatively Ricci curved manifolds with asymptotically vanishing sectional curvature (6.4) and Euclidean volume growth, that is, $\mathrm{vol}(B_{r}(p))\geq Cr^{n}$ for some positive constant $C$ uniform in $p$, possess isoperimetric regions for any volume (see 6.12). Such a class of manifolds is quite rich, as it contains, for example, Perelman’s examples of manifolds with non-unique asymptotic cones at infinity, see [61] and [21, Section 8]. Also, this class of manifolds naturally encompasses the case of manifolds with nonnegative Ricci curvature that are $C^{2,\alpha}$-asymptotically conical, for which the existence and the description of isoperimetric regions for large volumes were investigated in [26], see Remark 6.13. Since, by 1.3, it suffices for the Ricci curvature to be nonnegative just outside a compact set, compact perturbations of the metrics described above still enjoy existence of isoperimetric sets for any volume. Let us mention that the classes of manifolds just described are also $C^{0}$-asymptotic to a model space (see Remark 6.3), a strictly stronger condition than that of 1.2. In particular, one could also refer to the asymptotic mass decomposition by Nardulli [59], as done in [51], in order to derive existence for the isoperimetric problem. It could be interesting to produce explicit examples of complete Riemannian manifolds that satisfy the hypotheses of 1.3 but whose metric is not $C^{0}$-asymptotic to that of the reference model.
The precise analysis of the diverging mass of minimizing sequences obtained in 1.1 (and especially in 4.6) and the proof of 1.3 lead to rigidity results that relate the behavior of minimizing sequences to the curvature of the manifold. Roughly speaking, it can be proved that, under the hypotheses of 1.3, if a minimizing sequence does lose part of its mass at infinity, then outside of any compact set of $M^{n}$ one can find a domain with constant sectional curvature $k$, see 5.9. Such a conclusion can be strengthened when $\mathrm{Ric}\geq 0$ on the entire $M^{n}$: in this case, $(M^{n},g)$ must be flat, see 5.10. In analogy with [51], the main tool we are going to employ in addition to 1.1 to prove the above existence result in 1.3 is a comparison argument, introduced in [55], which follows from the classical Bishop–Gromov monotonicity theorem recalled in A.1. The coupling of a suitable asymptotic study of a minimizing sequence with a monotonicity formula, aimed at excluding the drifting of mass at infinity, seems to be a powerful and general strategy to infer the existence of isoperimetric sets on Riemannian manifolds. Indeed, in addition to the case where the Ricci tensor is bounded from below, which we are discussing here, it is worth mentioning the recent existence result for isoperimetric sets on asymptotically flat Riemannian manifolds with nonnegative scalar curvature, which is the content of [17, Proposition K.1]. In fact, such a result mostly builds on a much simpler asymptotic mass decomposition, originating in [32, Proposition 4.2], together with an isoperimetric inequality of Shi [68] proved through the celebrated Hawking mass monotonicity along the Inverse Mean Curvature Flow [40]. Apart from the already mentioned contributions, there are many other important results in the literature about the existence and description of isoperimetric sets in Riemannian manifolds.
Limiting ourselves to the contributions that in some way inspired our investigations, we recall [56, 66], in which the authors studied the isoperimetric problem in abstract cones and in Euclidean cones respectively; [60], where the isoperimetric problem is solved on cylinders; the isoperimetric existence theorem on Riemannian manifolds $(M^{n},g)$ with compact quotient $M^{n}/\mathrm{Iso}(M^{n})$, which has been pointed out by Morgan [53, Chapter 3], building also on [2]; and the existence result for nonnegatively curved $2$-dimensional surfaces [65]. For the existence and description of isoperimetric sets for large volumes, we mention the papers [32, 33], [25], [26] and [12], where foliations by isoperimetric sets (for large volumes) have been discovered on asymptotically Schwarzschildian, hyperbolic, conical, and cuspidal manifolds respectively. The isoperimetric problem has also been, and currently is, studied in the sub-Riemannian setting: in Carnot groups, the existence of isoperimetric sets of any volume has been established in [43]. We conclude this introduction by pointing out some other results and applications, part of which are technical and needed for proving 1.1 and 1.3. Carrying out the asymptotic analysis on Riemannian manifolds in the context of the Gromov–Hausdorff convergence allows us to derive useful comparison results between the isoperimetric profile of the manifold and the one of any pmGH limit along sequences of diverging points on the manifold. This leads to 3.2, which essentially estimates from above the isoperimetric profile of a manifold $(M^{n},g)$ by the one of any pmGH limit along sequences of points on $M^{n}$. 3.2 has some interesting consequences for Cartan–Hadamard manifolds. We will prove that the isoperimetric profile of Cartan–Hadamard manifolds with Ricci bounded below and GH-asymptotic to $\mathbb{R}^{n}$ for $2\leq n\leq 4$ equals the one of the Euclidean space (3.4).
Also, if in addition the sectional curvatures are strictly negative, the rigidity statement of 3.4 implies the nonexistence of isoperimetric regions, see Example 3.6. Plan of the paper. In Section 2 we recall definitions and results, and we prove a preliminary lemma (see Lemma 2.19) that we will need, also in the context of the theory of finite perimeter sets on metric measure spaces. In Section 3 we investigate the above mentioned relations between the isoperimetric profile of a manifold and the isoperimetric profiles of its pmGH limits. Section 4 is devoted to the analysis of the asymptotic behavior of the mass of minimizing sequences; here we prove 1.1 in its more detailed version, that is 4.6. In Section 5 we prove the above mentioned existence and rigidity results for the isoperimetric problem, and we prove 1.3. In Section 6 we discuss the applications and the examples anticipated above. For the convenience of the reader, in Appendix A we recall two useful well-known comparison results in Riemannian geometry, and in Appendix B we give a self-contained proof of the fact that suitable assumptions on a manifold $(M^{n},g)$ imply that isoperimetric regions are bounded. Acknowledgments. The authors would like to thank Lorenzo Mazzieri for a useful conversation about Example 3.6, Andrea Mondino for discussions related to Section 6, and Daniele Semola for having pointed out the argument in Remark 2.15. They are also grateful to Elia Bruè, Gian Paolo Leonardi, Stefano Nardulli and Vincenzo Scattaglia for inspiring conversations about the subject. G.A. was also partially supported by the European Research Council (ERC Starting Grant 713998 GeoMeG ‘ _Geometry of Metric Groups_ ’).

## 2 Definitions and preliminary results

For the notions of BV and Sobolev spaces on Riemannian manifolds we refer the reader to [50, Section 1]. For every finite perimeter set $E$ in $\Omega$ we denote by $P(E,\Omega)$ the perimeter of $E$ inside $\Omega$.
When $\Omega=M^{n}$ we simply write $P(E)$. We denote by $\mathcal{H}^{n-1}$ the $(n-1)$-dimensional Hausdorff measure on $M^{n}$ relative to the distance induced by $g$. We recall that for every finite perimeter set $E$ one has $P(E)=\mathcal{H}^{n-1}(\partial^{*}E)$ and the characteristic function $\chi_{E}$ belongs to $BV_{\rm loc}(M^{n},\operatorname{vol})$ with generalized gradient $D\chi_{E}=\nu\mathcal{H}^{n-1}\llcorner\partial^{*}E$ for a function $\nu:M^{n}\to TM^{n}$ with $|\nu|=1$ at $|D\chi_{E}|$-a.e. point, where $\partial^{*}E$ is the essential boundary of $E$. We recall the following terminology.

###### Definition 2.1 (Convergence of finite perimeter sets).

Let $(M^{n},g)$ be a Riemannian manifold. We say that a sequence of measurable (with respect to the volume measure) sets $E_{i}$ _locally converges_ to a measurable set $E$ if the characteristic functions $\chi_{E_{i}}$ converge to $\chi_{E}$ in $L^{1}_{\rm loc}(M^{n},g)$. In such a case we simply write that $E_{i}\to E$ locally on $M^{n}$. If the sets $E_{i}$ also have locally finite perimeter, that is, $P(E_{i},\Omega)<+\infty$ for any $i\in\mathbb{N}$ and any bounded open set $\Omega$, we say that $E_{i}\to E$ _in the sense of finite perimeter sets_ if $E_{i}\to E$ locally on $M^{n}$ and the sequence of measures $D\chi_{E_{i}}$ locally weakly* converges as measures, that is, with respect to the duality with compactly supported continuous functions. In such a case, $E$ has locally finite perimeter and the weak* limit of $D\chi_{E_{i}}$ is $D\chi_{E}$.

###### Definition 2.2 (Isoperimetric profile).

Let $(M^{n},g)$ be a Riemannian manifold.
We define the isoperimetric profile function $I:[0,\operatorname{vol}(M^{n}))\to[0,+\infty)$ as follows: $I_{(M^{n},g)}(V):=\inf\\{P(\Omega):\text{$\Omega$ is a finite perimeter set in $M^{n}$ such that $\operatorname{vol}(\Omega)=V$}\\}.$ We also occasionally write $I(V)$ when the ambient manifold $M^{n}$ is understood.

###### Remark 2.3 (Approximation of finite perimeter sets with smooth sets).

It can be proved, see [58, Lemma 2.3], that when $M^{n}$ is a complete Riemannian manifold every finite perimeter set $\Omega$ with $0<\operatorname{vol}(\Omega)<+\infty$ and $\operatorname{vol}(\Omega^{c})>0$ is approximated by relatively compact sets $\Omega_{i}$ in $M^{n}$ with smooth boundary such that $\operatorname{vol}(\Omega_{i})=\operatorname{vol}(\Omega)$ for every $i\in\mathbb{N}$, $\operatorname{vol}(\Omega_{i}\Delta\Omega)\to 0$ when $i\to+\infty$, and $P(\Omega_{i})\to P(\Omega)$ when $i\to+\infty$. Thus, by approximation, one can deduce that $I(V)=\inf\\{\mathcal{H}^{n-1}(\partial\Omega):\text{$\Omega\Subset M^{n}$ has smooth boundary, $\operatorname{vol}(\Omega)=V$}\\},$ see [58, Theorem 1.1].

###### Definition 2.4 (Isoperimetric region).

Given a Riemannian manifold $(M^{n},g)$, a set $E$ is an isoperimetric region in $M^{n}$ if $0<\operatorname{vol}(E)<+\infty$ and for every finite perimeter set $\Omega\subset M^{n}$ such that $\operatorname{vol}(\Omega)=\operatorname{vol}(E)$ one has $P(E)\leq P(\Omega)$. The above definition of isoperimetry can of course be rephrased in terms of the isoperimetric profile $I$ by saying that a subset $E\subset M^{n}$ of finite perimeter is isoperimetric for the volume $V$ if $\mathrm{vol}{(E)}=V$ and $I(V)=P(E)=\mathcal{H}^{n-1}{(\partial^{*}E)}$. We also need to recall the definition of the simply connected radial models with constant sectional curvature.

###### Definition 2.5 (Models of constant sectional curvature, cf. [62, Example 1.4.6]).
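For orientation, and as a standard sanity check not taken from the paper, the two definitions above can be made explicit in the Euclidean space, where balls of the prescribed volume are isoperimetric regions:

```latex
% A ball of volume V in \mathbb{R}^n has radius r=(V/\omega_n)^{1/n}
% and perimeter n\omega_n r^{n-1}, hence
\[
  I_{\mathbb{R}^{n}}(V)
  \;=\; n\,\omega_{n}\Big(\tfrac{V}{\omega_{n}}\Big)^{\frac{n-1}{n}}
  \;=\; n\,\omega_{n}^{1/n}\,V^{\frac{n-1}{n}},
  \qquad V>0,
\]
% where \omega_n is the volume of the unit ball in \mathbb{R}^n.
```

This explicit profile is the benchmark against which the profiles of the manifolds considered here are compared, e.g. in the Cartan–Hadamard statement 3.4 mentioned in the introduction.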
Let us define $\operatorname{sn}_{k}(r):=\begin{cases}(-k)^{-\frac{1}{2}}\sinh((-k)^{\frac{1}{2}}r)&k<0,\\\ r&k=0,\\\ k^{-\frac{1}{2}}\sin(k^{\frac{1}{2}}r)&k>0.\end{cases}$ If $k>0$, then $((0,\pi/\sqrt{k}]\times\mathbb{S}^{n-1},\mathrm{d}r^{2}+\mathrm{sn}_{k}^{2}(r)g_{1})$, where $g_{1}$ is the canonical metric on $\mathbb{S}^{n-1}$, is the radial model of dimension $n$ and constant sectional curvature $k$. The metric can be smoothly extended at $r=0$, and thus we shall write that the metric is defined on the ball $\mathbb{B}^{n}_{\pi/\sqrt{k}}\subset\mathbb{R}^{n}$. The Riemannian manifold $(\mathbb{B}^{n}_{\pi/\sqrt{k}},g_{k}\vcentcolon=\mathrm{d}r^{2}+\mathrm{sn}_{k}^{2}(r)g_{1})$ is the unique (up to isometry) simply connected Riemannian manifold of dimension $n$ and constant sectional curvature $k>0$. If instead $k\leq 0$, then $((0,+\infty)\times\mathbb{S}^{n-1},\mathrm{d}r^{2}+\mathrm{sn}_{k}^{2}(r)g_{1})$ is the radial model of dimension $n$ and constant sectional curvature $k$. Extending the metric at $r=0$ analogously yields the unique (up to isometry) simply connected Riemannian manifold of dimension $n$ and constant sectional curvature $k\leq 0$, in this case denoted by $(\mathbb{R}^{n},g_{k})$. We denote by $v(n,k,r)$ the volume of the ball of radius $r$ in the (unique) simply connected Riemannian manifold of sectional curvature $k$ and dimension $n$, and by $s(n,k,r)$ the volume of the boundary of such a ball. In particular $s(n,k,r)=n\omega_{n}\mathrm{sn}_{k}^{n-1}(r)$ and $v(n,k,r)=\int_{0}^{r}n\omega_{n}\mathrm{sn}_{k}^{n-1}(t)\,\mathrm{d}t$, where $\omega_{n}$ is the volume of the unit ball in $\mathbb{R}^{n}$. Moreover, for given $n$, we denote by $\mathsf{d}_{k},\operatorname{vol}_{k},P_{k}$ the geodesic distance, the volume measure, and the perimeter functional on the simply connected Riemannian manifold of sectional curvature $k$ (and dimension $n$), respectively.
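As a quick numerical sanity check of Definition 2.5 (outside the paper's formal development), the sketch below evaluates $\operatorname{sn}_{k}$, $s(n,k,r)$ and $v(n,k,r)$ and verifies, for instance, that $v(3,0,1)=\omega_{3}=4\pi/3$ and that $\operatorname{sn}_{k}(r)\to r$ as $k\to 0^{-}$; all function names are ad hoc.

```python
import math

def sn_k(k: float, r: float) -> float:
    # sn_k from Definition 2.5 (three cases: k < 0, k = 0, k > 0)
    if k < 0:
        return math.sinh(math.sqrt(-k) * r) / math.sqrt(-k)
    if k == 0:
        return r
    return math.sin(math.sqrt(k) * r) / math.sqrt(k)

def unit_ball_volume(n: int) -> float:
    # omega_n, the volume of the unit ball in R^n
    return math.pi ** (n / 2) / math.gamma(n / 2 + 1)

def s_vol(n: int, k: float, r: float) -> float:
    # s(n,k,r) = n * omega_n * sn_k(r)^(n-1): boundary volume of the model ball
    return n * unit_ball_volume(n) * sn_k(k, r) ** (n - 1)

def v_vol(n: int, k: float, r: float, steps: int = 20000) -> float:
    # v(n,k,r) = \int_0^r s(n,k,t) dt, via the composite midpoint rule
    h = r / steps
    return h * sum(s_vol(n, k, (i + 0.5) * h) for i in range(steps))
```

For $k<0$ one has $\operatorname{sn}_{k}(r)\geq r$, so `v_vol(n, k, r)` exceeds $\omega_{n}r^{n}$, consistent with the Bishop–Gromov comparison recalled in A.1.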
Let us also recall a classical definition for the convenience of the reader.

###### Definition 2.6 ($\mathrm{AVR}$ and Euclidean volume growth).

Let $(M^{n},g)$ be a complete noncompact Riemannian manifold with $\mathrm{Ric}\geq 0$. Thus, from Bishop–Gromov comparison in A.1 we know that the function $(0,+\infty)\ni r\mapsto\frac{\operatorname{vol}(B_{r}(p))}{\omega_{n}r^{n}}$ is nonincreasing and tends to $1$ as $r\to 0^{+}$. For any $p\in M^{n}$, we define $\mathrm{AVR}(M^{n},g):=\lim_{r\to+\infty}\frac{\operatorname{vol}(B_{r}(p))}{\omega_{n}r^{n}},$ the asymptotic volume ratio of $(M^{n},g)$. The previous definition is independent of the choice of $p\in M^{n}$. Notice that, by Bishop–Gromov comparison, we have $0\leq\mathrm{AVR}(M^{n},g)\leq 1$, and $\operatorname{vol}(B_{r}(p))\geq\mathrm{AVR}(M^{n},g)\omega_{n}r^{n}$ for every $r>0$, and every $p\in M^{n}$. If $\mathrm{AVR}(M^{n},g)>0$ we say that $(M^{n},g)$ has Euclidean volume growth. Let us now briefly recall the main concepts we will need from the theory of metric measure spaces. We recall that a metric measure space, $\mathrm{m.m.s.}$ for short, $(X,\mathsf{d}_{X},\mathfrak{m}_{X})$ is a triple where $(X,\mathsf{d}_{X})$ is a locally compact separable metric space and $\mathfrak{m}_{X}$ is a Borel measure bounded on bounded sets. A pointed metric measure space is a quadruple $(X,\mathsf{d}_{X},\mathfrak{m}_{X},x)$ where $(X,\mathsf{d}_{X},\mathfrak{m}_{X})$ is a metric measure space and $x\in X$ is a point. For simplicity, and since it will always be our case, we will always assume that, given a m.m.s. $(X,\mathsf{d}_{X},\mathfrak{m}_{X})$, the support ${\rm spt}\,\mathfrak{m}_{X}$ of the measure $\mathfrak{m}_{X}$ is the whole $X$. We assume the reader to be familiar with the notion of pointed measured Gromov–Hausdorff convergence, referring to [72, Chapter 27] and to [16, Chapters 7 and 8] for an overview on the subject.
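As an illustration of the definition of $\mathrm{AVR}$ (a standard example, not part of the paper's development), consider the metric cone $\mathrm{d}r^{2}+a^{2}r^{2}g_{1}$ on $(0,+\infty)\times\mathbb{S}^{n-1}$ with aperture parameter $0<a\leq 1$; computing volumes of balls centered at the tip gives

```latex
\[
  \operatorname{vol}(B_{r})
  = \int_{0}^{r} n\,\omega_{n}\,a^{\,n-1}\,t^{\,n-1}\,\mathrm{d}t
  = a^{\,n-1}\,\omega_{n}\,r^{\,n},
  \qquad\text{hence}\qquad
  \mathrm{AVR} = a^{\,n-1}\in(0,1].
\]
```

So such a cone has Euclidean volume growth for every $a\in(0,1]$, with $\mathrm{AVR}=1$ exactly in the Euclidean case $a=1$; a smooth manifold with $\mathrm{Ric}\geq 0$ asymptotic to such a cone has the same asymptotic volume ratio.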
In the following treatment we introduce the pmGH-convergence directly via a realization, even though this is not the most general definition. Nevertheless, the (simplified) definition of Gromov–Hausdorff convergence via a realization is equivalent to the standard definition of pmGH convergence in our setting, because in the applications we will always deal with locally uniformly doubling measures, see [37, Theorem 3.15 and Section 3.5]. The following definition is actually taken from the introductory exposition of [5].

###### Definition 2.7 (pGH and pmGH convergence).

A sequence $\\{(X_{i},\mathsf{d}_{i},x_{i})\\}_{i\in\mathbb{N}}$ of pointed metric spaces is said to converge in the _pointed Gromov–Hausdorff topology, in the $\mathrm{pGH}$ sense for short,_ to a pointed metric space $(Y,\mathsf{d}_{Y},y)$ if there exist a complete separable metric space $(Z,\mathsf{d}_{Z})$ and isometric embeddings $\begin{split}&\Psi_{i}:(X_{i},\mathsf{d}_{i})\to(Z,\mathsf{d}_{Z}),\qquad\forall\,i\in\mathbb{N},\\\ &\Psi:(Y,\mathsf{d}_{Y})\to(Z,\mathsf{d}_{Z}),\end{split}$ such that for any $\varepsilon,R>0$ there is $i_{0}(\varepsilon,R)\in\mathbb{N}$ such that $\Psi_{i}(B_{R}^{X_{i}}(x_{i}))\subset\left[\Psi(B_{R}^{Y}(y))\right]_{\varepsilon},\qquad\Psi(B_{R}^{Y}(y))\subset\left[\Psi_{i}(B_{R}^{X_{i}}(x_{i}))\right]_{\varepsilon},$ for any $i\geq i_{0}$, where $[A]_{\varepsilon}\vcentcolon=\\{z\in Z\ :\ \mathsf{d}_{Z}(z,A)\leq\varepsilon\\}$ for any $A\subset Z$. Let $\mathfrak{m}_{i}$ and $\mu$ be given in such a way that $(X_{i},\mathsf{d}_{i},\mathfrak{m}_{i},x_{i})$ and $(Y,\mathsf{d}_{Y},\mu,y)$ are m.m.s. If in addition to the previous requirements we also have $(\Psi_{i})_{\sharp}\mathfrak{m}_{i}\rightharpoonup\Psi_{\sharp}\mu$ with respect to duality with continuous bounded functions on $Z$ with bounded support, then the convergence is said to hold in the _pointed measure Gromov–Hausdorff topology, or in the $\mathrm{pmGH}$ sense for short_.
### 2.1 $\mathsf{RCD}$ spaces Let us briefly introduce the so-called $\mathsf{RCD}$ condition for m.m.s. Since we will use part of the $\mathsf{RCD}$ theory just as an instrument for our purposes and since we will never use in the paper the specific definition of $\mathsf{RCD}$ space, we just outline the main references on the subject and we refer the interested reader to the survey of Ambrosio [3] and the references therein. After the introduction, in the independent works [69, 70] and [46], of the curvature dimension condition $\mathsf{CD}(k,n)$ encoding in a synthetic way the notion of Ricci curvature bounded from below by $k$ and dimension bounded above by $n$, the definition of $\mathsf{RCD}(k,n)$ m.m.s.​ was proposed and extensively studied in [36, 34, 10], see also [19] for the equivalence between the $\mathsf{RCD}^{*}(k,n)$ and the $\mathsf{RCD}(k,n)$ condition. The infinite dimensional counterpart of this notion had been previously investigated in [8], see also [7] for the case of $\sigma$-finite reference measures. ###### Remark 2.8 (pmGH limit of $\mathsf{RCD}$ spaces). We recall that, whenever it exists, a pmGH limit of a sequence $\\{(X_{i},\mathsf{d}_{i},\mathfrak{m}_{i},x_{i})\\}_{i\in\mathbb{N}}$ of (pointed) $\mathsf{RCD}(k,n)$ spaces is still an $\mathsf{RCD}(k,n)$ metric measure space. In particular, due to the compatibility of the $\mathsf{RCD}$ condition with the smooth case of Riemannian manifolds with Ricci curvature bounded from below and to its stability with respect to pointed measured Gromov–Hausdorff convergence, limits of smooth Riemannian manifolds with Ricci curvature uniformly bounded from below by $k$ and dimension uniformly bounded from above by $n$ are $\mathsf{RCD}(k,n)$ spaces. Then the class of $\mathsf{RCD}$ spaces includes the class of Ricci limit spaces, i.e., limits of sequences of Riemannian manifolds with the same dimension and with Ricci curvature uniformly bounded from below. 
The study of Ricci limits was initiated by Cheeger and Colding in the nineties in the series of papers [20, 21, 22, 23] and has seen remarkable developments in more recent years. Since the above mentioned pioneering works, it has been known that the regularity theory for Ricci limits improves when one adds to the lower curvature bound a uniform lower bound on the volume of unit balls along the converging sequence of Riemannian manifolds: this gives rise to the so-called notion of noncollapsed Ricci limits. In particular, as a consequence of the volume convergence theorem proved in [28], it is known that in the noncollapsed case the limit measure of the volume measures is the Hausdorff measure on the limit metric space, while this might not be the case for a general Ricci limit space. The notion of noncollapsed Ricci limit has also been extended to the synthetic setting. Hence, let us recall the definition of noncollapsed $\mathsf{RCD}(k,n)$ m.m.s., as introduced in [31] (see also [41], where Kitabeppu first investigated this class, and [11]).

###### Definition 2.9 ($\mathsf{ncRCD}$ spaces).

An $\mathsf{RCD}(k,n)$ metric measure space $(X,\mathsf{d},\mathfrak{m})$ is said to be noncollapsed, $\mathsf{ncRCD}(k,n)$ for short, if $\mathfrak{m}=\mathcal{H}^{n}$, where $\mathcal{H}^{n}$ is the $n$-dimensional Hausdorff measure on $(X,\mathsf{d})$. We remark that if $(X,\mathsf{d},\mathcal{H}^{n})$ is an $\mathsf{ncRCD}(k,n)$ space then $n$ is an integer. Now we are ready to state the volume convergence theorems obtained by Gigli and De Philippis in [31, Theorem 1.2 and Theorem 1.3], which are the synthetic version of the celebrated volume convergence of Colding [28].

###### Theorem 2.10.

Let $\\{(X_{i},\mathsf{d}_{i},\mathcal{H}^{n},x_{i})\\}_{i\in\mathbb{N}}$ be a sequence of pointed $\mathsf{ncRCD}(k,n)$ m.m.s. with $k\in\mathbb{R}$ and $n\in[1,+\infty)$. Assume that $(X_{i},\mathsf{d}_{i},x_{i})$ converges in the pGH topology to $(X,\mathsf{d},x)$.
Then precisely one of the following happens:

* (a) $\limsup_{i\to\infty}\mathcal{H}^{n}\left(B_{1}(x_{i})\right)>0$. Then the $\limsup$ is a limit and $(X_{i},\mathsf{d}_{i},\mathcal{H}^{n},x_{i})$ converges in the pmGH topology to $(X,\mathsf{d},\mathcal{H}^{n},x)$. Hence $(X,\mathsf{d},\mathcal{H}^{n})$ is an $\mathsf{ncRCD}(k,n)$ m.m.s.;
* (b) $\lim_{i\to\infty}\mathcal{H}^{n}(B_{1}(x_{i}))=0$. In this case we have $\dim_{H}(X,\mathsf{d})\leq n-1$, where $\dim_{H}(X,\mathsf{d})$ is the Hausdorff dimension of $(X,\mathsf{d})$.

Moreover, for $k\in\mathbb{R}$ and $n\in[1,+\infty)$, let $\mathbb{B}_{k,n,R}$ be the collection of all equivalence classes up to isometry of closed balls of radius $R$ in $\mathsf{RCD}(k,n)$ spaces, equipped with the Gromov–Hausdorff distance. Then the map $\mathbb{B}_{k,n,R}\ni Z\to\mathcal{H}^{n}(Z)$ is real-valued and continuous.

###### Remark 2.11 ($\mathsf{ncRCD}$ and noncollapsed Ricci limit spaces).

We recall that a noncollapsed Ricci limit space is a pointed Gromov–Hausdorff limit of a sequence of complete Riemannian manifolds $\\{(M_{i}^{n},\mathsf{d}_{i},p_{i})\\}_{i\in\mathbb{N}}$ for which there exist $k\leq 0$ and $v>0$ such that $\mathrm{Ric}_{M_{i}}\geq(n-1)k$, and $\operatorname{vol}(B_{1}(p_{i}))\geq v$ for every $i\in\mathbb{N}$. The latter limits have been first introduced and studied in [21]. As a consequence of item (a) of 2.10 we easily see that any noncollapsed Ricci limit space as before is an $\mathsf{ncRCD}(k,n)$ space. As of today it is known that there is a gap between the class of noncollapsed Ricci limit spaces and $\mathsf{ncRCD}(k,n)$ spaces: see [71, Remark 4] for an example of an $\mathsf{ncRCD}(0,3)$ space that is not a noncollapsed Ricci limit space.

###### Remark 2.12 (Gromov precompactness theorem for $\mathsf{RCD}$ spaces).

Here we recall the synthetic variant of Gromov’s precompactness theorem for $\mathsf{RCD}$ spaces, see [31, Equation (2.1)].
Let $\\{(X_{i},\mathsf{d}_{i},\mathfrak{m}_{i},x_{i})\\}_{i\in\mathbb{N}}$ be a sequence of $\mathsf{RCD}(k_{i},n)$ spaces with $n\in[1,+\infty)$, $\operatorname{spt}(\mathfrak{m}_{i})=X_{i}$ for every $i\in\mathbb{N}$, $\mathfrak{m}_{i}(B_{1}(x_{i}))\in[v,v^{-1}]$ for some $v\in(0,1)$ and for every $i\in\mathbb{N}$, and $k_{i}\to k\in\mathbb{R}$. Then there exists a subsequence pmGH-converging to some $\mathsf{RCD}(k,n)$ space $(X,\mathsf{d},\mathfrak{m},x)$ with $\operatorname{spt}(\mathfrak{m})=X$. We conclude this part by recalling a few basic definitions and results concerning the perimeter functional in the setting of metric measure spaces (see [4, 49, 6]).

###### Definition 2.13 ($BV$ functions and perimeter on m.m.s.).

Let $(X,\mathsf{d},\mathfrak{m})$ be a metric measure space. A function $f\in L^{1}(X,\mathfrak{m})$ is said to belong to the space of _bounded variation functions_ $BV(X,\mathsf{d},\mathfrak{m})$ if there is a sequence $f_{i}\in{\rm Lip}_{\mathrm{loc}}(X)$ such that $f_{i}\to f$ in $L^{1}(X,\mathfrak{m})$ and $\limsup_{i}\int_{X}{\rm lip\,}f_{i}\,\mathrm{d}\mathfrak{m}<+\infty$, where ${\rm lip\,}u(x)\vcentcolon=\limsup_{y\to x}\frac{|u(y)-u(x)|}{\mathsf{d}(x,y)}$ is the _slope_ of $u$ at $x$, for any accumulation point $x\in X$, and ${\rm lip\,}u(x):=0$ if $x\in X$ is isolated. In such a case we define $|Df|(A)\vcentcolon=\inf\left\\{\liminf_{i}\int_{A}{\rm lip\,}f_{i}\,\mathrm{d}\mathfrak{m}\ :\ \text{$f_{i}\in{\rm Lip}_{\rm loc}(A),f_{i}\to f$ in $L^{1}(A,\mathfrak{m})$}\right\\},$ for any open set $A\subset X$. If $E\subset X$ is a Borel set and $A\subset X$ is open, we define the _perimeter $P(E,A)$ of $E$ in $A$_ by $P(E,A)\vcentcolon=\inf\left\\{\liminf_{i}\int_{A}{\rm lip\,}u_{i}\,\mathrm{d}\mathfrak{m}\ :\ \text{$u_{i}\in{\rm Lip}_{\rm loc}(A),u_{i}\to\chi_{E}$ in $L^{1}_{\rm loc}(A,\mathfrak{m})$}\right\\}.$ We say that $E$ has _finite perimeter_ if $P(E,X)<+\infty$, and we set $P(E)\vcentcolon=P(E,X)$.
Let us remark that the set functions $|Df|,P(E,\cdot)$ above are restrictions to open sets of Borel measures that we denote by $|Df|,|D\chi_{E}|$ respectively, see [6], and [49]. The _isoperimetric profile of $(X,\mathsf{d},\mathfrak{m})$_ is then $I_{X}(V)\vcentcolon=\inf\left\\{P(E)\ :\ \text{$E\subset X$ Borel, $\mathfrak{m}(E)=V$}\right\\},$ for any $V\in[0,\mathfrak{m}(X))$. If $E\subset X$ is Borel with $\mathfrak{m}(E)=V$ and $P(E)=I_{X}(V)$, then we say that $E$ is an _isoperimetric region_. It follows from classical approximation results (cf. Remark 2.3) that the above definition yields the usual notion of perimeter on any Riemannian manifold $(M^{n},g)$ recalled at the beginning of this section.

###### Remark 2.14 (Coarea formula on metric measure spaces).

Let $(X,\mathsf{d},\mathfrak{m})$ be a metric measure space. Let us observe that, by the definitions given above, a Borel set $E$ with finite measure has finite perimeter if and only if the characteristic function $\chi_{E}$ belongs to $BV(X,\mathsf{d},\mathfrak{m})$. If $f\in BV(X,\mathsf{d},\mathfrak{m})$, then $\\{f>\alpha\\}$ has finite perimeter for a.e. $\alpha\in\mathbb{R}$ and the _coarea formula_ holds $\int_{X}u\,\mathrm{d}|Df|=\int_{-\infty}^{+\infty}\left(\int_{X}u\,\mathrm{d}\lvert D\chi_{\\{f>\alpha\\}}\rvert\right)\,\mathrm{d}\alpha,$ for any Borel function $u:X\to[0,+\infty]$, see [49, Proposition 4.2]. If $f$ is also continuous and nonnegative, then $|Df|(\\{f=\alpha\\})=0$ for _every_ $\alpha\in[0,+\infty)$ and the _localized coarea formula_ holds $\int_{\\{a<f<b\\}}u\,\mathrm{d}|Df|=\int_{a}^{b}\left(\int_{X}u\,\mathrm{d}\lvert D\chi_{\\{f>\alpha\\}}\rvert\right)\,\mathrm{d}\alpha,$ for every Borel function $u:X\to[0,+\infty]$ and every $0\leq a<b<+\infty$, see [5, Corollary 1.9].
Applying the above coarea formulas to the distance function $f(y)=\mathsf{d}(y,x)$ from a fixed point $x\in X$, one deduces that the balls $B_{r}(x)$ have finite perimeter for almost every radius $r>0$, the function $r\mapsto\mathfrak{m}(B_{r}(x))$ is continuous, $\mathfrak{m}(\partial B_{r}(x))=0$ for every $r>0$, and $\tfrac{d}{dr}\mathfrak{m}(B_{r}(x))=P(B_{r}(x))$ for a.e. $r>0$.

###### Remark 2.15 (Bishop–Gromov comparison theorem on $\mathsf{RCD}$ spaces).

Let us recall that for an $\mathsf{RCD}((n-1)k,n)$ space $(X,\mathsf{d},\mathfrak{m})$ the classical Bishop–Gromov volume comparison (cf. A.1) still holds; more precisely, for a fixed $x\in X$, the function $\mathfrak{m}(B_{r}(x))/v(n,k,r)$ is nonincreasing in $r$ and the function $P(B_{r}(x))/s(n,k,r)$ is essentially nonincreasing in $r$, i.e., $P(B_{R}(x))/s(n,k,R)\leq P(B_{r}(x))/s(n,k,r)$ for almost every pair of radii $R\geq r$, see [72, Theorem 18.8 and Equation (18.8)]. Moreover, if $(X,\mathsf{d},\mathfrak{m})$ is an $\mathsf{ncRCD}((n-1)k,n)$ space, so that $\mathfrak{m}=\mathcal{H}^{n}$, one can conclude that $\mathcal{H}^{n}$-almost every point has a unique measure Gromov–Hausdorff tangent isometric to $\mathbb{R}^{n}$ ([31, Theorem 1.12]), and thus, from the volume convergence in 2.10, we get $\lim_{r\to 0}\frac{\mathcal{H}^{n}(B_{r}(x))}{v(n,k,r)}=\lim_{r\to 0}\frac{\mathcal{H}^{n}(B_{r}(x))}{\omega_{n}r^{n}}=1,\qquad\text{for $\mathcal{H}^{n}$-almost every $x$},$ (2.1) where $\omega_{n}$ is the volume of the unit ball in $\mathbb{R}^{n}$. Hence, from the monotonicity at the beginning of the remark we deduce that, if $X$ is an $\mathsf{ncRCD}((n-1)k,n)$ space, then for $\mathcal{H}^{n}$-almost every $x\in X$ we have $\mathcal{H}^{n}(B_{r}(x))\leq v(n,k,r)$ for every $r>0$. We can show the analogous estimate for the perimeter.
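For later reference, the derivative identity for the measure of balls can be recorded in integrated form (an equivalent restatement, under the additional assumption, satisfied in the $\mathsf{RCD}$ setting used below, that $r\mapsto\mathfrak{m}(B_{r}(x))$ is locally absolutely continuous): for a fixed $x\in X$ and $0\leq a<b$,

```latex
\[
  \mathfrak{m}\big(B_{b}(x)\big)-\mathfrak{m}\big(B_{a}(x)\big)
  \;=\;\int_{a}^{b} P\big(B_{\alpha}(x)\big)\,\mathrm{d}\alpha .
\]
```

This is the form in which the coarea formula is used in Remark 2.15, where a pointwise lower bound on the perimeter of balls is integrated to produce a lower bound on their measure.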
Since $(X,\mathsf{d},\mathcal{H}^{n})$ is an $\mathsf{ncRCD}((n-1)k,n)$ space, we already observed above that $(X,\mathsf{d}_{r}:=r^{-1}\mathsf{d},\mathcal{H}^{n}_{\mathsf{d}_{r}}:=r^{-n}\mathcal{H}^{n},x)\xrightarrow{pmGH}(\mathbb{R}^{n},\mathsf{d}_{\mathrm{eu}},\mathscr{L}^{n},0),\qquad\text{for $\mathcal{H}^{n}$-almost every $x\in X$,}$ as $r\to 0$. Let us fix $x\in X$ as in the above line. Recall that for such an $x$, (2.1) holds. Let us denote by $P,P_{r}$ the perimeter functionals on $(X,\mathsf{d},\mathcal{H}^{n})$ and $(X,\mathsf{d}_{r},\mathcal{H}^{n}_{\mathsf{d}_{r}})$, respectively. Now fix $\rho>0$. Let $J\subset(0,{\rm diam}\,X)$, which tacitly depends on $\rho$, be a set of full measure such that for every $r\in J$ the perimeter $P(B_{\rho r}(x))$ is finite and the Bishop–Gromov ratio $\tfrac{P(B_{\rho r}(x))}{s(n,k,\rho r)}$ is nonincreasing on $J$. We want to show that $\limsup_{J\ni\,r\to 0}P_{r}(B_{\rho}^{\mathsf{d}_{r}}(x))\leq n\omega_{n}\rho^{n-1}.$ (2.2) Suppose by contradiction that $\limsup_{J\ni\,r\to 0}P_{r}(B_{\rho}^{\mathsf{d}_{r}}(x))>n\omega_{n}\rho^{n-1}+\varepsilon$ for some $\varepsilon>0$. Hence, from the fact that $P_{r}=r^{1-n}P$ (this follows from the very definition of perimeter on m.m.s. and the fact that $\mathfrak{m}=\mathcal{H}^{n}$ here) we would have $\limsup_{J\ni\,r\to 0}\frac{P(B_{\rho r}(x))}{s(n,k,\rho r)}=\limsup_{J\ni\,r\to 0}\frac{P(B_{\rho r}(x))}{n\omega_{n}(\rho r)^{n-1}}>1+\varepsilon/(n\omega_{n}\rho^{n-1}).$ Hence, from the above inequality and the Bishop–Gromov comparison theorem for the perimeter at the beginning of this remark, we get that there exists $\overline{r}>0$ such that for almost every $r\in(0,\overline{r})$ we have $P(B_{\rho r}(x))>s(n,k,\rho r)(1+\varepsilon/(2n\omega_{n}\rho^{n-1}))$.
Thus, integrating the latter inequality from $0$ to $\overline{r}$, changing variables, and using the coarea formula we get $\mathcal{H}^{n}(B_{\rho\overline{r}}(x))>v(n,k,\rho\overline{r})(1+\varepsilon/(2n\omega_{n}\rho^{n-1})),$ which contradicts (2.1) and the monotonicity of the ratios of the volumes given by Bishop–Gromov comparison. Hence (2.2) holds. Let us now prove that the $\limsup$ in (2.2) is a limit and that we also have equality. Indeed, from the fact that for every radius $\rho>0$ we have $\chi_{B_{\rho}^{\mathsf{d}_{r}}(x)}\to\chi_{B_{\rho}^{\mathrm{d}_{\mathrm{eu}}}(0)}$ in $L^{1}$-strong as $r\to 0$, see [5, Remark 3.2], and from the semicontinuity of the perimeter in [5, Proposition 3.6], we conclude that $n\omega_{n}\rho^{n-1}\leq\liminf_{J\ni\,r\to 0}P_{r}(B_{\rho}^{\mathsf{d}_{r}}(x))$, and thus we get that for every $x$ fixed as above and every $\rho>0$ we have $\lim_{J\ni\,r\to 0}P_{r}(B_{\rho}^{\mathsf{d}_{r}}(x))=n\omega_{n}\rho^{n-1}.$ Evaluating the equality above at $\rho=1$ and using again that $P_{r}=r^{1-n}P$, we finally get that $\lim_{J\ni\,r\to 0}\frac{P(B_{r}(x))}{s(n,k,r)}=\lim_{J\ni\,r\to 0}\frac{P(B_{r}(x))}{n\omega_{n}r^{n-1}}=1,$ and thus, from the monotonicity of the ratios of the perimeters at the beginning of this remark, we first get that $P(B_{r}(x))\leq s(n,k,r)$ for almost every $r>0$. Finally, approximating a ball $B_{r}(x)$ from the outside with balls $B_{r_{j}}(x)$ with $r_{j}\to r^{+}$ and $r_{j}$ such that $P(B_{r_{j}}(x))\leq s(n,k,r_{j})$, the lower semicontinuity of the perimeter implies that $P(B_{r}(x))\leq\liminf_{j}P(B_{r_{j}}(x))\leq s(n,k,r)$ for every $r>0$. All in all, we have proved that if $X$ is an arbitrary $\mathsf{ncRCD}((n-1)k,n)$ space, then for $\mathcal{H}^{n}$-almost every $x\in X$ we have that $P(B_{r}(x))\leq s(n,k,r)$ _for every_ $r>0$.

###### Remark 2.16 (Representation of the perimeter on $\mathsf{ncRCD}$ spaces).
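The scaling identity $P_{r}=r^{1-n}P$ used twice in the argument above can be verified directly from Definition 2.13 (a routine check, spelled out here for convenience): writing $\mathsf{d}_{r}=r^{-1}\mathsf{d}$ and $\mathcal{H}^{n}_{\mathsf{d}_{r}}=r^{-n}\mathcal{H}^{n}$, slopes and measures rescale as

```latex
\[
  \operatorname{lip}_{\mathsf{d}_{r}}u(x)
  = \limsup_{y\to x}\frac{|u(y)-u(x)|}{r^{-1}\mathsf{d}(x,y)}
  = r\,\operatorname{lip}_{\mathsf{d}}u(x),
  \qquad
  \int_{X}\operatorname{lip}_{\mathsf{d}_{r}}u\,\mathrm{d}\mathcal{H}^{n}_{\mathsf{d}_{r}}
  = r^{1-n}\int_{X}\operatorname{lip}_{\mathsf{d}}u\,\mathrm{d}\mathcal{H}^{n},
\]
```

and taking the infimum over sequences $u_{i}\to\chi_{E}$ as in Definition 2.13 yields $P_{r}(E)=r^{1-n}P(E)$ for every Borel set $E$.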
Let us fix an $\mathsf{RCD}((n-1)k,n)$ space $(X,\mathsf{d},\mathfrak{m})$. Hence, from Bishop–Gromov comparison in Remark 2.15, for any fixed $x\in X$, $\limsup_{r\to 0}\mathfrak{m}(B_{2r}(x))/\mathfrak{m}(B_{r}(x))\leq\limsup_{r\to 0}v(n,k,2r)/v(n,k,r)<+\infty,$ i.e., $\mathfrak{m}$ is _asymptotically doubling_, and therefore the Lebesgue Differentiation Theorem holds true, see [39, Theorem 3.4.3] and [39, Lebesgue Differentiation Theorem, p. 77]. So it makes sense to identify any Borel set $E$ with the set $E^{1}$ of points of density $1$, where, in general, $E^{t}\vcentcolon=\left\\{x\in X\ :\ \lim_{r\searrow 0}\frac{\mathfrak{m}(E\cap B_{r}(x))}{\mathfrak{m}(B_{r}(x))}=t\right\\},$ for any $t\in[0,1]$. The _essential boundary_ of $E$ is then classically defined by $\partial^{*}E\vcentcolon=X\setminus(E^{0}\cup E^{1})$. As in the case of Riemannian manifolds, if $(X,\mathsf{d},\mathfrak{m})$ is also an $\mathsf{ncRCD}((n-1)k,n)$ space, and thus $\mathfrak{m}=\mathcal{H}^{n}$, the perimeter measure can be represented by $|D\chi_{E}|=\mathcal{H}^{n-1}\llcorner\partial^{*}E,$ (2.3) for any finite perimeter set $E$. In fact, this follows by putting together the representation given in [4, Theorem 5.3] and the recent one contained in [15, Corollary 4.2]. It easily follows from such a representation formula that if $E\subset X$ has finite perimeter and $x\in X$, then for a.e. radius $r>0$ the intersection $B_{r}(x)\cap E$ has finite perimeter and $|D\chi_{B_{r}(x)\cap E}|=\mathcal{H}^{n-1}\llcorner(\partial^{*}E\cap B_{r}(x))+\mathcal{H}^{n-1}\llcorner(E\cap\partial^{*}B_{r}(x)).$ (2.4) Indeed for a.e.
$r>0$ the ball $B_{r}(x)$ has finite perimeter and $|D\chi_{E}|(\partial B_{r}(x))=0$; so (2.4) follows from (2.3) by noticing that for such an $r$ it holds that $\partial^{*}(B_{r}(x)\cap E)=(\partial^{*}E\cap B_{r}(x))\cup(E\cap\partial^{*}B_{r}(x))$ up to $\mathcal{H}^{n-1}$-negligible sets. ### 2.2 Finite perimeter sets and GH-convergence We need to recall a generalized $L^{1}$-notion of convergence for sets defined on a sequence of metric measure spaces converging in the pmGH sense. Such a definition is given in [5, Definition 3.1], and it is investigated in [5] capitalizing on the results in [9]. ###### Definition 2.17 ($L^{1}$-strong and $L^{1}_{\mathrm{loc}}$ convergence). Let $\\{(X_{i},\mathsf{d}_{i},\mathfrak{m}_{i},x_{i})\\}_{i\in\mathbb{N}}$ be a sequence of pointed metric measure spaces converging in the pmGH sense to a pointed metric measure space $(Y,\mathsf{d}_{Y},\mu,y)$ and let $(Z,\mathsf{d}_{Z})$ be a realization as in 2.7. We say that a sequence of Borel sets $E_{i}\subset X_{i}$ such that $\mathfrak{m}_{i}(E_{i})<+\infty$ for any $i\in\mathbb{N}$ converges _in the $L^{1}$-strong sense_ to a Borel set $F\subset Y$ with $\mu(F)<+\infty$ if $\mathfrak{m}_{i}(E_{i})\to\mu(F)$ and $\chi_{E_{i}}\mathfrak{m}_{i}\rightharpoonup\chi_{F}\mu$ with respect to the duality with continuous bounded functions with bounded support on $Z$. We say that a sequence of Borel sets $E_{i}\subset X_{i}$ converges _in the $L^{1}_{\mathrm{loc}}$-sense_ to a Borel set $F\subset Y$ if $E_{i}\cap B_{R}(x_{i})$ converges to $F\cap B_{R}(y)$ in $L^{1}$-strong for every $R>0$. 
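To illustrate 2.17 with a minimal example of our own (not taken from [5]): consider the constant sequence $(X_{i},\mathsf{d}_{i},\mathfrak{m}_{i},x_{i})=(\mathbb{R}^{n},\mathsf{d}_{\mathrm{eu}},\mathcal{L}^{n},0)$, whose pmGH limit is the same space realized in $Z=\mathbb{R}^{n}$, and take $E_{i}=B_{1}(v_{i})$ with $v_{i}\to v$. Then

```latex
% Mass convergence: all the sets are unit balls of the same volume.
\[
  \mathcal{L}^{n}(E_{i}) \;=\; \omega_{n} \;=\; \mathcal{L}^{n}(B_{1}(v))
  \qquad \text{for every } i,
\]
% Weak convergence of the measures \chi_{E_i} L^n: for every bounded
% continuous \varphi with bounded support, dominated convergence gives
\[
  \int_{B_{1}(v_{i})} \varphi \,\mathrm{d}\mathcal{L}^{n}
  \;\xrightarrow[i\to+\infty]{}\;
  \int_{B_{1}(v)} \varphi \,\mathrm{d}\mathcal{L}^{n},
\]
```

so $E_{i}\to B_{1}(v)$ in $L^{1}$-strong; since all the sets lie in a fixed ball, $L^{1}_{\mathrm{loc}}$ convergence holds as well.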
Observe that in the above definition it makes sense to speak about the convergence $\chi_{E_{i}}\mathfrak{m}_{i}\rightharpoonup\chi_{F}\mu$ with respect to the duality with continuous bounded functions with bounded support on $Z$ as $(X_{i},\mathsf{d}_{i}),(Y,\mathsf{d}_{Y})$ can be assumed to be topological subspaces of $(Z,\mathsf{d}_{Z})$ by means of the isometries $\Psi_{i},\Psi$ of 2.7, and the measures $\mathfrak{m}_{i},\mu$ can be then identified with the push-forwards $(\Psi_{i})_{\sharp}\mathfrak{m}_{i},\Psi_{\sharp}\mu$ respectively. The following result is taken from [5] and will be of crucial importance in the proof of 4.6. ###### Proposition 2.18 ([5, Proposition 3.3, Corollary 3.4, Proposition 3.6, Proposition 3.8]). Let $k\in\mathbb{R}$, $n\geq 1$, and $\\{(X_{i},\mathsf{d}_{i},\mathfrak{m}_{i},x_{i})\\}_{i\in\mathbb{N}}$ be a sequence of $\mathsf{RCD}(k,n)$ m.m.s.​ converging in the pmGH sense to $(Y,\mathsf{d}_{Y},\mu,y)$. Then, * (a) For any $r>0$ and for any sequence of finite perimeter sets $E_{i}\subset\overline{B}_{r}(x_{i})$ satisfying $\sup_{i\in\mathbb{N}}|D\chi_{E_{i}}|(X_{i})<+\infty,$ there exists a subsequence $i_{k}$ and a finite perimeter set $F\subset\overline{B}_{r}(y)$ such that $E_{i_{k}}\to F$ in $L^{1}$-strong as $k\to+\infty$. Moreover $|D\chi_{F}|(Y)\leq\liminf_{k\to+\infty}|D\chi_{E_{i_{k}}}|(X_{i_{k}}).$ * (b) For any sequence of Borel sets $E_{i}\subset X_{i}$ with $\sup_{i\in\mathbb{N}}|D\chi_{E_{i}}|(B_{R}(x_{i}))<+\infty,\qquad\forall\,R>0,$ there exists a subsequence $i_{k}$ and a Borel set $F\subset Y$ such that $E_{i_{k}}\to F$ in $L^{1}_{\mathrm{loc}}$. * (c) Let $F\subset Y$ be a bounded set of finite perimeter. Then there exist a subsequence $i_{k}$, and uniformly bounded finite perimeter sets $E_{i_{k}}\subset X_{i_{k}}$ such that $E_{i_{k}}\to F$ in $L^{1}$-strong and $|D\chi_{E_{i_{k}}}|(X_{i_{k}})\to|D\chi_{F}|(Y)$ as $k\to+\infty$. 
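Before turning to the next lemma, we record for the reader's convenience the explicit form of the model comparison functions entering Remark 2.15 (standard normalization, stated here only as an aid; we restrict to $k\leq 0$, which is the case of interest):

```latex
% Perimeter and volume of balls of radius r in the simply connected model
% space of constant sectional curvature k <= 0:
\[
  s(n,k,r) \;=\; n\,\omega_{n}\left(\frac{\sinh(\sqrt{-k}\,r)}{\sqrt{-k}}\right)^{n-1},
  \qquad
  v(n,k,r) \;=\; \int_{0}^{r} s(n,k,t)\,\mathrm{d}t,
\]
% which for k = 0 reduce to the Euclidean quantities
\[
  s(n,0,r) \;=\; n\,\omega_{n}\,r^{n-1},
  \qquad
  v(n,0,r) \;=\; \omega_{n}\,r^{n}.
\]
```

In particular $v(n,k,r)/(\omega_{n}r^{n})\to 1$ and $s(n,k,r)/(n\omega_{n}r^{n-1})\to 1$ as $r\to 0$, consistently with the small-radius computations of Remark 2.15.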
With the help of the previous result we can now prove the following lemma which will be used in the forthcoming section. ###### Lemma 2.19. Let $(X,\mathsf{d},\mathcal{H}^{n})$ be an $\mathsf{ncRCD}((n-1)k,n)$ space with $\mathcal{H}^{n}(X)=+\infty$. If, for some $v_{0}>0$, $\mathcal{H}^{n}(B_{1}(x))\geq v_{0}$ for any $x\in X$, then the isoperimetric profile $I_{X}$ of $X$ can be rewritten as $I_{X}(V)=\inf\left\\{P(E)\ :\ \text{$E\subset X$ Borel, $\mathcal{H}^{n}(E)=V$, $E$ bounded}\right\\}\qquad\forall\,V\in(0,+\infty).$ (2.5) ###### Proof. Let us observe first that if $E\subset X$ is a finite perimeter set with finite measure $\mathcal{H}^{n}(E)<+\infty$, then for any point $o\in X$ there exists a sequence of radii $R_{i}\to+\infty$ such that * • $\mathcal{H}^{n}(E\cap B_{R_{i}}(o))\geq\mathcal{H}^{n}(E)-1/i$ for any $i$; * • $P(E\cap B_{R_{i}}(o))=P(E,B_{R_{i}}(o))+\mathcal{H}^{n-1}(E\cap\partial^{*}B_{R_{i}}(o))$ for any $i$; * • $\mathcal{H}^{n-1}(E\cap\partial^{*}B_{R_{i}}(o))\leq 1/i$ for any $i$. Indeed, by the results in Remark 2.14, and Remark 2.16, we know that $\mathcal{H}^{n}(E\cap B_{r}(o))=\int_{0}^{r}\mathcal{H}^{n-1}(E\cap\partial^{*}B_{t}(o))\,\mathrm{d}t\xrightarrow[r\to+\infty]{}\mathcal{H}^{n}(E)<+\infty.$ Recalling also (2.4) in order to justify the second item above, the sought claim follows. Now let $V\in(0,+\infty)$ and consider $E_{j}\subset X$ with $\mathcal{H}^{n}(E_{j})=V$ such that $P_{X}(E_{j})\leq I_{X}(V)+1/j$. Fix $o\in X$ and let $R_{i}^{j}$ be given by the first part of the proof applied to $E_{j}$. For any $i,j$ let $B_{\rho_{i,j}}(p_{i,j})\Subset X\setminus B_{R_{i}^{j}}(o)$ be such that $\mathcal{H}^{n}(B_{\rho_{i,j}}(p_{i,j}))=V-\mathcal{H}^{n}(E_{j}\cap B_{R_{i}^{j}}(o))\leq\frac{1}{i}\,,$ and moreover $p_{i,j}$ is chosen such that the comparison inequalities discussed in Remark 2.15 hold. Such balls exist since $\mathcal{H}^{n}(X)=+\infty$. 
Since balls of radius $1$ have volume $\geq v_{0}$, we can also assume that $\rho_{i,j}<1$. Then the volume comparison (see Remark 2.15) implies $\mathcal{H}^{n}(B_{\rho_{i,j}}(p_{i,j}))\geq v(n,k,\rho_{i,j})v_{0}/v(n,k,1)$. Hence $\lim_{i}\rho_{i,j}=0$ for any $j$. By the perimeter comparison (see Remark 2.15), we then get $\lim_{i}P(B_{\rho_{i,j}}(p_{i,j}))\leq\lim_{i}s(n,k,\rho_{i,j})=0,$ for any $j$. Hence $P([E_{j}\cap B_{R_{i}^{j}}(o)]\cup B_{\rho_{i,j}}(p_{i,j}))\leq P(E_{j})+\frac{1}{i}+s(n,k,\rho_{i,j})\leq I_{X}(V)+\frac{1}{j}+\frac{1}{i}+s(n,k,\rho_{i,j}).$ Taking $i_{j}\geq j$ sufficiently large for any fixed $j$ so that $s(n,k,\rho_{i_{j},j})\leq 1/j$ yields that $P_{X}([E_{j}\cap B_{R_{i_{j}}^{j}}(o)]\cup B_{\rho_{i_{j},j}}(p_{i_{j},j}))\leq I_{X}(V)+\frac{3}{j}.$ Hence $[E_{j}\cap B_{R_{i_{j}}^{j}}(o)]\cup B_{\rho_{i_{j},j}}(p_{i_{j},j})$ is a minimizing (for the perimeter) sequence of bounded sets of volume $V$. So this implies (2.5). ∎ ## 3 Asymptotic geometry and isoperimetric profile In this section we prove some inequalities regarding the isoperimetric profile in some special classes of Riemannian manifolds. We first need the following useful result. ###### Lemma 3.1. Let $(X,\mathsf{d},\mathcal{H}^{n})$ be an $\mathsf{ncRCD}((n-1)k,n)$ space. Then its isoperimetric profile $I_{X}:(0,\mathcal{H}^{n}(X))\to[0,+\infty)$ is upper semicontinuous. ###### Proof. Fix $V\in(0,\mathcal{H}^{n}(X))$ and let $\eta>0$. Then take $E\subset X$ Borel such that $\mathcal{H}^{n}(E)=V$ and $P_{X}(E)\leq I_{X}(V)+\eta$. Since the measure $\mathcal{H}^{n}$ is asymptotically doubling, see Remark 2.16, we can identify $E$ with the set of density one points $E^{1}$, and we denote by $E^{0}$ the set of density zero points of $E$. Let us fix $x\in E^{1}=E$ and $y\in E^{0}$ such that the comparison inequalities discussed in Remark 2.15 hold. 
There is $\overline{\rho}$ such that $\begin{split}\mathcal{H}^{n}(E\cap B_{\rho}(x))>\frac{3}{4}\mathcal{H}^{n}(B_{\rho}(x))&\qquad\forall\,\rho\in(0,\overline{\rho}),\\\ \mathcal{H}^{n}(E\cap B_{\rho}(y))<\frac{1}{4}\mathcal{H}^{n}(B_{\rho}(y))&\qquad\forall\,\rho\in(0,\overline{\rho}),\end{split}$ and $\mathsf{d}(x,y)>3\overline{\rho}$. We claim that there are $\delta\in(0,V/2)$ and $\omega:(V-\delta,V+\delta)\to\mathbb{R}$ such that for any $v\in(V-\delta,V+\delta)$ there are $\rho_{x}=\rho_{x}(v),\rho_{y}=\rho_{y}(v)\in[0,\overline{\rho})$ such that $\begin{split}\mathcal{H}^{n}((E\cup B_{\rho_{y}}(y))\setminus B_{\rho_{x}}(x))&=v,\\\ P_{X}((E\cup B_{\rho_{y}}(y))\setminus B_{\rho_{x}}(x))&\leq P_{X}(E)+\omega(v),\\\ \lim_{v\to V}\omega(v)&=0.\end{split}$ (3.1) We observe that such a claim implies the statement of the lemma. Indeed, if $v_{j}\to V$ is any sequence, then the claim yields a sequence of sets $E_{j}\vcentcolon=(E\cup B_{\rho_{y,j}}(y))\setminus B_{\rho_{x,j}}(x)$ with $\mathcal{H}^{n}(E_{j})=v_{j}$ that satisfy $I_{X}(v_{j})\leq P_{X}(E_{j})\leq P_{X}(E)+\omega(v_{j})\leq I_{X}(V)+\eta+\omega(v_{j}).$ Passing to the $\limsup$ as $j\to+\infty$ in the above inequality, using that $\omega(v_{j})\to 0$ and that the sequence $v_{j}\to V$ is arbitrary, and then letting $\eta\to 0$, readily implies that $I_{X}$ is upper semicontinuous at $V$; since $V$ is arbitrary, the lemma follows. So we are left to prove the claim. 
Take $0<\delta<\min\left\\{\mathcal{H}^{n}(B_{\overline{\rho}}(y)\setminus E),\mathcal{H}^{n}(B_{\overline{\rho}}(x)\cap E),V/2\right\\}.$ Observe that the function $\begin{split}[0,\overline{\rho})^{2}\ni(\rho_{1},\rho_{2})\mapsto\,&\mathcal{H}^{n}((E\cup B_{\rho_{2}}(y))\setminus B_{\rho_{1}}(x))\\\ &=\mathcal{H}^{n}(E\cup B_{\rho_{2}}(y))-\mathcal{H}^{n}(E\cap B_{\rho_{1}}(x))\\\ &=\mathcal{H}^{n}(E)-\mathcal{H}^{n}(E\cap B_{\rho_{2}}(y))+\mathcal{H}^{n}(B_{\rho_{2}}(y))-\mathcal{H}^{n}(E\cap B_{\rho_{1}}(x)),\end{split}$ (3.2) is continuous; indeed by the coarea formula (Remark 2.14) we know that $\mathcal{H}^{n}(E\cap B_{\rho}(z))=\int_{0}^{\rho}\int_{X}\chi_{E}\,\mathrm{d}|D\chi_{B_{t}(z)}|\,\mathrm{d}t$ for any $\rho>0$ and $z\in X$. We are ready to prove (3.1). Let $v\in(V-\delta,V+\delta)$; we need to define $\omega(v)$, $\rho_{x}(v)$, and $\rho_{y}(v)$. If $v=V$, then $\omega(V)=\rho_{x}(V)=\rho_{y}(V)=0$ works. So let us assume $v>V$, the case $v<V$ being completely analogous. By the choice of $\delta$ there is $\rho_{v}\in(0,\overline{\rho})$ such that $\mathcal{H}^{n}(E\cup B_{\rho_{v}}(y))=v,\qquad\mathcal{H}^{n}(E\cup B_{\rho}(y))>v\quad\forall\,\rho\in(\rho_{v},\overline{\rho}).$ By continuity of the map in (3.2) there is $\widetilde{\rho}_{v}\in(\rho_{v},\overline{\rho})$ such that $\forall\,\rho\in(\rho_{v},\widetilde{\rho}_{v})\,\,\exists\,\sigma\in(0,\overline{\rho})\ :\ \mathcal{H}^{n}((E\cup B_{\rho}(y))\setminus B_{\sigma}(x))=v.$ Hence there exist $\rho_{x}\in(\rho_{v},\widetilde{\rho}_{v})$ and $\rho_{y}\in(0,\overline{\rho})$ such that $\mathcal{H}^{n}((E\cup B_{\rho_{y}}(y))\setminus B_{\rho_{x}}(x))=v,$ (3.3) and in addition $P_{X}(B_{\rho_{y}}(y))\leq s(n,k,\rho_{y})$, $P_{X}(B_{\rho_{x}}(x))\leq s(n,k,\rho_{x})$ (see the comparison of the perimeter in Remark 2.15). 
Therefore $\begin{split}P_{X}((E\cup B_{\rho_{y}}(y))\setminus B_{\rho_{x}}(x))&\leq P_{X}(E)+P_{X}(B_{\rho_{y}}(y))+P_{X}(B_{\rho_{x}}(x))\\\ &\leq P_{X}(E)+s(n,k,\rho_{y})+s(n,k,\rho_{x}).\end{split}$ (3.4) Moreover, we can clearly choose $\rho_{x},\rho_{y}\to 0$ if $v\to V^{+}$. Hence defining $\omega(v)\vcentcolon=s(n,k,\rho_{y})+s(n,k,\rho_{x})$, (3.3) and (3.4) imply the claimed (3.1). ∎ We now prove a proposition that roughly says that the isoperimetric profile of a manifold is less than or equal to the isoperimetric profile of every pmGH limit at infinity. The following proposition should be read as a generalization of [59, Lemma 2.7]. ###### Proposition 3.2. Let $(M^{n},g)$ be a complete noncompact noncollapsed Riemannian manifold such that $\mathrm{Ric}\geq(n-1)k$ for some $k\in(-\infty,0]$. Let $p_{i}\in M^{n}$ be a diverging sequence of points on $M^{n}$. Then, up to a subsequence, there exists $(X_{\infty},\mathsf{d}_{\infty},\mathfrak{m}_{\infty},p_{\infty})$ a pointed noncollapsed Ricci limit space, and thus an $\mathsf{ncRCD}((n-1)k,n)$ space (see Remark 2.11), such that $(M^{n},\mathsf{d},\operatorname{vol},p_{i})\xrightarrow[i\to+\infty]{pmGH}(X_{\infty},\mathsf{d}_{\infty},\mathfrak{m}_{\infty},p_{\infty}).$ (3.5) Moreover, whenever a diverging sequence of points $p_{i}\in M^{n}$ and a pointed noncollapsed Ricci limit space $(X_{\infty},\mathsf{d}_{\infty},\mathfrak{m}_{\infty},p_{\infty})$ satisfy (3.5), then $I_{(M^{n},g)}(V)\leq I_{(M^{n},g)}(V_{1})+I_{X_{\infty}}(V_{2})\qquad\forall\,V=V_{1}+V_{2},$ (3.6) with $V,V_{1},V_{2}\geq 0$. 
In particular $I_{(M^{n},g)}(V)\leq I_{X_{\infty}}(V)\qquad\forall\,V>0,$ (3.7) and if, for any $j\geq 1$, $\\{p_{i,j}\,|\,i\in\mathbb{N}\\}$ is a diverging sequence of points on $M^{n}$ such that $(M^{n},\mathsf{d},\operatorname{vol},p_{i,j})\to(X_{j},\mathsf{d}_{j},\mathfrak{m}_{j},p_{j})$ in the pmGH sense as $i\to+\infty$, then $I_{(M^{n},g)}(V)\leq I_{(M^{n},g)}(V_{0})+\sum_{j=1}^{+\infty}I_{X_{j}}(V_{j}),$ (3.8) whenever $V=\sum_{j=0}^{+\infty}V_{j}$ with $V,V_{j}\geq 0$ for any $j$. ###### Proof. First, we observe that since $M^{n}$ is noncompact and noncollapsed, it has infinite volume; indeed, there exist countably many disjoint balls of radius $1$ contained in $M^{n}$. The convergence in (3.5) is just a consequence of Remark 2.11 and Remark 2.12. So we are left to prove (3.6). Let $V>0$ be fixed, and let $V_{1},V_{2}\geq 0$ with $V_{1}+V_{2}=V$; without loss of generality we can assume $V_{2}>0$. Consider $V^{j}:=V_{2}-1/j>0$ for $j$ large enough. Let $\Omega\subset M^{n}$ be a bounded set such that $\operatorname{vol}(\Omega)=V_{1}$ and $P(\Omega)\leq I_{(M^{n},g)}(V_{1})+\eta$ for a fixed $\eta>0$. By the fact that $M^{n}$ is noncollapsed and by 2.10 we know that for some $v_{0}>0$ we have $\mathfrak{m}_{\infty}(B_{1}(x))\geq v_{0}$ for any $x\in X_{\infty}$: indeed, for every $x\in X_{\infty}$ there exists a sequence $\widetilde{p}_{i}$ such that $(M^{n},\mathsf{d},\widetilde{p}_{i})\to(X_{\infty},\mathsf{d}_{\infty},x)$ in the pGH sense, and then we can apply the second part of 2.10, together with the fact that $M^{n}$ is noncollapsed, to deduce the sought bound. As $X_{\infty}$ is noncompact, it also follows that $\mathfrak{m}_{\infty}(X_{\infty})=+\infty$. Then by (2.5) there exist bounded sets $E_{j}\subset X_{\infty}$ with $\mathfrak{m}_{\infty}(E_{j})=V^{j}$ and $P_{X_{\infty}}(E_{j})\leq I_{X_{\infty}}(V^{j})+1/j$. 
By item (c) in 2.18, up to subsequences in $i$, for any $j$ there are $R_{j}>0$ and a sequence $F_{i}^{j}\subset B_{R_{j}}(p_{i})\subset M^{n}$ such that $F_{i}^{j}\to E_{j}$ in $L^{1}$-strong as $i\to+\infty$ and $\lim_{i}P(F_{i}^{j})=P_{X_{\infty}}(E_{j})$. Therefore, if $o\in M^{n}$ is fixed, there is a ball $B_{S}(o)$ such that $\Omega\Subset B_{S}(o)$, and, since $p_{i}$ diverges at infinity, there are balls $B_{\rho_{i,j}}(o^{\prime})\subset M^{n}$ for some $o^{\prime}\in M^{n}$ such that $B_{\rho_{i,j}}(o^{\prime})\Subset M^{n}\setminus(B_{R_{j}}(p_{i})\cup B_{S}(o)),\qquad\operatorname{vol}(B_{\rho_{i,j}}(o^{\prime}))=V_{2}-\operatorname{vol}(F^{j}_{i}),$ for any $i,j$, up to subsequences. For any $j$ there is $i_{j}$ such that $F^{j}_{i_{j}}\Subset M\setminus B_{S}(o)$, $P(F_{i_{j}}^{j})\leq P_{X_{\infty}}(E_{j})+1/j$, and $\operatorname{vol}(F_{i_{j}}^{j})\geq V_{2}-2/j$. Moreover, since $\lim_{j}\operatorname{vol}(B_{\rho_{i_{j},j}}(o^{\prime}))=0$, then $\lim_{j}P(B_{\rho_{i_{j},j}}(o^{\prime}))=0$. Hence, since $F_{i_{j}}^{j}$, $B_{\rho_{i_{j},j}}(o^{\prime})$ and $\Omega$ are mutually disjoint, we have, also by exploiting the previous inequalities, $\begin{split}I_{(M^{n},g)}(V)&\leq P(F_{i_{j}}^{j}\cup B_{\rho_{i_{j},j}}(o^{\prime})\cup\Omega)=P(F_{i_{j}}^{j})+P(B_{\rho_{i_{j},j}}(o^{\prime}))+P(\Omega)\\\ &\leq P_{X_{\infty}}(E_{j})+\frac{1}{j}+P(B_{\rho_{i_{j},j}}(o^{\prime}))+I_{(M^{n},g)}(V_{1})+\eta\\\ &\leq I_{X_{\infty}}\left(V_{2}-\frac{1}{j}\right)+\frac{2}{j}+P(B_{\rho_{i_{j},j}}(o^{\prime}))+I_{(M^{n},g)}(V_{1})+\eta.\end{split}$ Passing to the $\limsup$ in the previous estimate and using that $I_{X_{\infty}}$ is upper semicontinuous by Lemma 3.1 jointly with the fact that $\eta$ is arbitrary, finally implies (3.6). Now (3.7) clearly follows from (3.6) with $V_{1}=0$. 
Finally, in the notation and assumptions of (3.8), we can iteratively apply (3.6) to get $\begin{split}I_{(M^{n},g)}(V)&\leq I_{X_{1}}(V_{1})+I_{(M^{n},g)}\left(V_{0}+\sum_{j=2}^{+\infty}V_{j}\right)\leq I_{(M^{n},g)}\left(V_{0}+\sum_{j=k}^{+\infty}V_{j}\right)+\sum_{j=1}^{k-1}I_{X_{j}}(V_{j}),\end{split}$ for any $k\geq 2$. Letting $k\to+\infty$, since $\left(V_{0}+\sum_{j=k}^{+\infty}V_{j}\right)\to V_{0}$, passing to the limsup in the above estimate and using Lemma 3.1 imply (3.8). ∎ ###### Remark 3.3 (About the hypotheses in 3.2). We remark that with the same proof of 3.2 we can prove a more general statement substituting $M^{n}$ with an arbitrary $\mathsf{ncRCD}(k,n)$ space $X$ that satisfies $\mathcal{H}^{n}(B_{1}(x))>v_{0}$ for every $x\in X$ and for some $v_{0}>0$. Let us recall that a manifold $(M^{n},g)$ is Cartan–Hadamard if it is complete, $\mathrm{Sect}\leq 0$ and $M^{n}$ is simply connected. Recall that if $(M^{n},g)$ is Cartan–Hadamard, then $M^{n}$ is diffeomorphic to $\mathbb{R}^{n}$. ###### Corollary 3.4. Let $(M^{n},g)$ be a Cartan–Hadamard manifold of dimension $2\leq n\leq 4$ such that there exists $k\in(-\infty,0)$ for which $\mathrm{Ric}\geq(n-1)k$ on $M^{n}$, and such that there exists a diverging sequence $p_{j}\in M^{n}$ for which $(M^{n},\mathsf{d},p_{j})\to(\mathbb{R}^{n},\mathsf{d}_{\mathrm{eu}},0)$ in the pGH sense as $j\to+\infty$. Then $I_{(M^{n},g)}(V)=I_{(\mathbb{R}^{n},g_{\mathrm{eu}})}(V),$ for any $V\geq 0$. Moreover, if there exists an isoperimetric region $\Omega$, then $(\Omega,g)$ is isometric to a Euclidean ball of the same volume. ###### Proof. Let us recall the following result, which is due to Croke, see [30, Proposition 14]. Let $(M^{n},g)$ be a complete Riemannian manifold. 
Then there exists $C=C(n)>0$ such that $\operatorname{vol}(B_{r}(p))\geq Cr^{n},\qquad\text{for all $p\in M^{n}$, and for all $0<r<\operatorname{inj}(p)/2$.}$ Since $M^{n}$ is Cartan–Hadamard, for every $p\in M^{n}$ we have $\operatorname{inj}(p)=+\infty$. Hence we deduce that $M^{n}$ is noncollapsed. Thus, as a consequence of the volume convergence in 2.10, we get that the pGH limit in the statement is actually a pmGH limit. Hence, from 3.2 we directly get that $I_{(M^{n},g)}\leq I_{(\mathbb{R}^{n},g_{\mathrm{eu}})}$. In case $2\leq n\leq 4$ a sharp isoperimetric inequality, i.e., with the Euclidean constant, is available for Cartan–Hadamard manifolds; see [73, p. 1] for the case $n=2$, [42] for the case $n=3$, and [29] for $n=4$. In particular in all the latter cases, denoting by $\omega_{n}$ the volume of the unit ball in $\mathbb{R}^{n}$, one has that $P(\Omega)\geq n\omega_{n}^{1/n}(\operatorname{vol}\Omega)^{(n-1)/n}$ for every finite perimeter set $\Omega\subset M^{n}$, and thus $I_{(M^{n},g)}\geq I_{(\mathbb{R}^{n},g_{\mathrm{eu}})}$ when $2\leq n\leq 4$. As a result $I_{(M^{n},g)}=I_{(\mathbb{R}^{n},g_{\mathrm{eu}})}$ when $2\leq n\leq 4$. If $\Omega$ is an isoperimetric region, since $2\leq n\leq 4$, we conclude that $\Omega$ is smooth (see [66, Proposition 2.4], or [54]). Thus, the rigidity for the isoperimetric inequalities proved in [73, p. 1], [42], and [29], implies that every isoperimetric region $\Omega$ is isometric to a Euclidean ball of the same volume, thus completing the proof of the corollary. ∎ ###### Remark 3.5 (On the Cartan–Hadamard conjecture). We observe that the inequality $I_{(M^{n},g)}\geq I_{(\mathbb{R}^{n},g_{\mathrm{eu}})}$ in the proof of 3.4 is a consequence of the sharp isoperimetric inequality for Cartan–Hadamard manifolds. Instead, as it is clear from the proof, the inequality $I_{(M^{n},g)}\leq I_{(\mathbb{R}^{n},g_{\mathrm{eu}})}$ holds true in every dimension under the hypotheses of 3.4. 
For dimensions $n>4$ it has long been conjectured that the isoperimetric inequality with the Euclidean constant, together with a rigidity statement, holds for Cartan–Hadamard manifolds of any dimension. This conjecture is known as the Cartan–Hadamard conjecture. If this conjecture were settled, then as a by-product of our arguments the equality of 3.4 would hold true in any dimension. Consequently, the counterexamples to existence provided below could be generalized to any dimension. The argument that follows, providing examples of nonexistence of isoperimetric sets, is inspired by the parallel situation described in [52, Example 5.6 and Example 5.7] given by the isoperimetric-isodiametric problem. ###### Example 3.6 (Nonexistence of isoperimetric sets). An example of a $2$-dimensional manifold satisfying the hypotheses of 3.4 is the helicoid. Indeed, the helicoid is simply connected, $\mathrm{Sect}\leq 0$, being a minimal surface, and it can be readily checked that its sectional curvature tends to zero as the distance from the rotation axis increases. Taking into account the periodicity of the helicoid along its rotation axis, it follows that $\mathrm{Ric}\geq k$ for some $k\in(-\infty,0)$. An easy application of Lemma A.2 shows that a sequence of points $p_{j}$ whose distance from the rotation axis diverges satisfies the hypotheses of 3.4. Since, moreover, $\mathrm{Sect}\neq 0$ at every point, no isoperimetric regions exist on the helicoid. Moreover, if $(\Sigma,g)$ is a Cartan–Hadamard surface with induced distance $\mathsf{d}$ and with asymptotically vanishing sectional curvature (see 6.4 below for the precise definition), then 3.4 allows us to conclude that $I_{(\Sigma,g)}=I_{(\mathbb{R}^{2},g_{\mathrm{eu}})}$. 
Indeed, since the sectional curvature is asymptotically vanishing, $(\Sigma,g)$ clearly satisfies a uniform lower bound on the Ricci tensor; moreover, since the sectional curvature is asymptotically vanishing and $\operatorname{inj}(p)=+\infty$ for every $p\in\Sigma$, the result in 6.6 below allows us to conclude that for every diverging sequence $p_{j}\in\Sigma$ we have $(\Sigma,\mathsf{d},p_{j})\xrightarrow{j\to+\infty}(\mathbb{R}^{2},\mathsf{d}_{\mathrm{eu}},0),$ in the pGH topology. Thus all the hypotheses of 3.4 are satisfied and we get the sought equality. Moreover, if in addition $\mathrm{Sect}=0$ at most at isolated points of $\Sigma$, we conclude from the rigidity part of 3.4 that no isoperimetric regions of any volume can exist on $\Sigma$. An example satisfying the previous conditions is the saddle, i.e., the surface of equation $z=x^{2}-y^{2}$ in $\mathbb{R}^{3}$. In order to construct examples that satisfy the hypotheses of 3.4 in dimension $n>2$, one can take $(\Sigma,g)$ an arbitrary Cartan–Hadamard surface with Ricci uniformly bounded below satisfying the pGH-limit hypothesis in 3.4 (e.g., the previously discussed helicoid and saddle), and consider $\Sigma\times\mathbb{R}$ and $\Sigma\times\mathbb{R}^{2}$. Moreover, if one chooses $\Sigma$ such that $\mathrm{Sect}=0$ at most at isolated points of $\Sigma$, then $\Sigma\times\mathbb{R}$ and $\Sigma\times\mathbb{R}^{2}$ cannot have isoperimetric regions of any volume, since rigidity in 3.4 holds. Let us mention another related class of Riemannian manifolds such that no isoperimetric regions exist, in addition to the Cartan–Hadamard manifolds above. These are studied in [64] and consist of particular radial metrics of the form $\mathrm{d}t^{2}+f(t)^{2}\mathrm{d}\theta^{2}$, where $\mathrm{d}\theta^{2}$ is the metric of the unit circle in $\mathbb{R}^{2}$. 
In such a case $\mathrm{Sect}$ is a function of $t$, and if $t\mapsto\mathrm{Sect}(t)$ is increasing and $\sup\mathrm{Sect}$ is never achieved on $M^{n}$, then no isoperimetric regions exist [64, Theorem 2.16]. ## 4 Asymptotic mass decomposition of minimizing sequences This section is devoted to the proof of the main result of this work, which yields an asymptotic description of the behavior of minimizing sequences (for the perimeter) that possibly lose part of their mass at infinity, culminating in 4.6, which constitutes a more detailed version of 1.1. The starting point is a classical result due to Ritoré–Rosales, which can be found in [66, Theorem 2.1] and is meaningful for noncompact Riemannian manifolds of infinite volume. ###### Theorem 4.1. Let $(M^{n},g)$ be a complete noncompact Riemannian manifold, and fix $V>0$ and $o\in M^{n}$. Let $\\{\Omega_{i}\\}_{i\in\mathbb{N}}$ be a minimizing (for the perimeter) sequence of finite perimeter sets of volume $V$. Then there exists a diverging sequence $\\{r_{i}\\}_{i\in\mathbb{N}}$ such that * (i) $\Omega_{i}^{c}:=\Omega_{i}\cap B_{r_{i}}(o)$ and $\Omega_{i}^{d}:=\Omega_{i}\setminus B_{r_{i}}(o)$ are sets of finite perimeter with $\lim_{i\to+\infty}\left(P(\Omega_{i}^{d})+P(\Omega_{i}^{c})\right)=I(V).$ * (ii) There exists a finite perimeter set $\Omega$ with $\operatorname{vol}(\Omega)\leq V$ such that $\lim_{i\to+\infty}\operatorname{vol}(\Omega_{i}^{c})=\operatorname{vol}(\Omega),\qquad\lim_{i\to+\infty}P(\Omega_{i}^{c})=P(\Omega).$ Moreover $\Omega^{c}_{i}\to\Omega$ in $L^{1}_{\rm loc}(M^{n},g)$. * (iii) $\Omega$ is an isoperimetric region for its own volume. We will need another classical and fundamental property of isoperimetric regions. In B.1 we prove that the validity of an isoperimetric inequality for small volumes implies that isoperimetric regions are bounded. Interestingly, noncollapsed manifolds with Ricci curvature bounded from below satisfy such an isoperimetric inequality. 
This follows from [38, Lemma 3.2]. Thus, such manifolds have bounded isoperimetric regions and we can state the following result. ###### Corollary 4.2. Let $(M^{n},g)$ be a complete noncollapsed Riemannian manifold with $\mathrm{Ric}\geq(n-1)k$ for some $k\in(-\infty,0]$. Then the isoperimetric regions of $(M^{n},g)$ are bounded. ### 4.1 Concentration lemmas The following lemma contains a so-called concentration-compactness result that will play a key role in the study of the decomposition of the diverging mass of minimizing sequences. The result is rather classical and could be stated at the level of measure theory; however, we include here a brief proof specializing the concentration-compactness principle to a sequence of sets in the form in which we will apply it. The following result is inspired by [45, Lemma I.1]. ###### Lemma 4.3 (Concentration-compactness). Let $(M^{n},g)$ be a complete noncompact Riemannian manifold and let $E_{i}$ be a sequence of bounded measurable sets such that $\lim_{i}\operatorname{vol}(E_{i})=W\in(0,+\infty)$. Then, up to passing to a subsequence, exactly one of the following alternatives occurs. 1. 1. For any $R>0$ it holds $\lim_{i}\sup_{p\in M}\operatorname{vol}(E_{i}\cap B_{R}(p))=0.$ 2. 2. There exists a sequence of points $p_{i}\in M^{n}$ such that for any $\varepsilon\in(0,W/2)$ there exist $R\geq 1$, $i_{\varepsilon}\in\mathbb{N}$ such that $\operatorname{vol}(E_{i}\cap B_{R}(p_{i}))\geq W-\varepsilon$ for any $i\geq i_{\varepsilon}$. Moreover, there is $I\in\mathbb{N},r\geq 1$ such that $\operatorname{vol}(E_{i}\cap B_{r}(p_{i}))\geq\operatorname{vol}(E_{i}\cap B_{r}(q))$ for any $q\in M^{n}$ and $\operatorname{vol}(E_{i}\cap B_{r}(p_{i}))>W/2$ for any $i\geq I$. 3. 3. 
There exists $w\in(0,W)$ such that for any $\varepsilon\in(0,w/2)$ there exist $R\geq 1$, $i_{\varepsilon}\in\mathbb{N}$, a sequence of points $p_{i}\in M^{n}$, and a sequence of open sets $U_{i}$ such that $U_{i}=M^{n}\setminus B_{R_{i}}(p_{i})\quad\text{for some $R_{i}\to+\infty$},\,\,\text{and then}\,\,\mathsf{d}(p_{i},U_{i})\xrightarrow{i\to+\infty}+\infty,$ and moreover $\begin{split}|\operatorname{vol}(E_{i}\cap B_{R}(p_{i}))-w|<\varepsilon,&\\\ |\operatorname{vol}(E_{i}\cap U_{i})-(W-w)|<\varepsilon,&\\\ \operatorname{vol}(E_{i}\cap B_{R}(p_{i}))\geq\operatorname{vol}(E_{i}\cap B_{R}(q))&\qquad\forall\,q\in M,\end{split}$ for every $i\geq i_{\varepsilon}$. ###### Proof. Define $Q_{i}(\rho)\vcentcolon=\sup_{p\in M}\operatorname{vol}(E_{i}\cap B_{\rho}(p))$. The functions $Q_{i}:(0,+\infty)\to\mathbb{R}$ are nondecreasing and uniformly bounded, since $\operatorname{vol}(E_{i})\to W$. Hence the sequence $Q_{i}$ is uniformly bounded in $BV_{\rm loc}(0,+\infty)$ and then, up to subsequence, there exists a nondecreasing function $Q\in BV_{\rm loc}(0,+\infty)$ such that $Q_{i}\to Q$ in $BV_{\rm loc}$ and pointwise almost everywhere. Also, let us pointwise define $Q(\rho)\vcentcolon=\lim_{\eta\to 0^{+}}{\rm ess}\inf_{(\rho-\eta,\rho)}Q$, so that $Q$ is defined at every $\rho\in(0,+\infty)$. Moreover, observe that $Q(\rho)\leq W$ for any $\rho>0$. Now three disjoint cases can occur, distinguishing the cases enumerated in the statement. 1. 1. We have that $\lim_{\rho\to+\infty}Q(\rho)=0$, and hence $Q\equiv 0$ since it is nondecreasing. Then item 1 of the statement clearly holds. 2. 2. We have that $\lim_{\rho\to+\infty}Q(\rho)=W$. Then there is $r\geq 1$ such that $\exists\lim_{i}\sup_{p}\operatorname{vol}(E_{i}\cap B_{r}(p))=Q(r)\geq\tfrac{3}{4}W$. Since $E_{i}$ is bounded for any $i$, let $p_{i}\in M^{n}$ such that $\sup_{p}\operatorname{vol}(E_{i}\cap B_{r}(p))=\operatorname{vol}(E_{i}\cap B_{r}(p_{i}))$ for any $i$. 
We claim that the sequence $p_{i}$ satisfies the property in item 2. Indeed, let $\varepsilon\in(0,W/2)$ be given. Arguing as above, since $\lim_{\rho\to+\infty}Q(\rho)=W$, there is a radius $r^{\prime}>0$ and a sequence $p_{i}^{\prime}\in M^{n}$ such that $\operatorname{vol}(E_{i}\cap B_{r^{\prime}}(p_{i}^{\prime}))\geq W-\varepsilon$ for any $i\geq i_{\varepsilon}$. Then $\mathsf{d}(p_{i},p_{i}^{\prime})<r+r^{\prime}$, for otherwise $W\xleftarrow{}\operatorname{vol}(E_{i})\geq\operatorname{vol}(E_{i}\cap B_{r}(p_{i}))+\operatorname{vol}(E_{i}\cap B_{r^{\prime}}(p_{i}^{\prime})),$ and the right hand side is $>W$ for $i$ large enough. Hence taking $R=r+2r^{\prime}$ we conclude that $\operatorname{vol}(E_{i}\cap B_{R}(p_{i}))\geq W-\varepsilon$ for $i\geq i_{\varepsilon}$ as claimed. 3. 3. We have that $\lim_{\rho\to+\infty}Q(\rho)=w\in(0,W)$. Then for given $\varepsilon\in(0,w/2)$ there is $R\geq 1$ such that $w-\frac{\varepsilon}{8}\leq Q(R)=\lim_{i}\sup_{p}\operatorname{vol}(E_{i}\cap B_{R}(p))=\lim_{i}\operatorname{vol}(E_{i}\cap B_{R}(p_{i})),$ for some $p_{i}\in M^{n}$, where in the last equality we used that $\sup_{p}\operatorname{vol}(E_{i}\cap B_{R}(p))=\operatorname{vol}(E_{i}\cap B_{R}(p_{i})),$ for some $p_{i}$ since $E_{i}$ is bounded. This implies that $\operatorname{vol}(E_{i}\cap B_{R}(p_{i}))\geq\operatorname{vol}(E_{i}\cap B_{R}(q))$ for any $i$ and any $q\in M^{n}$, and there is $i_{\varepsilon}$ such that $|\operatorname{vol}(E_{i}\cap B_{R}(p_{i}))-w|<\varepsilon/4$ for $i\geq i_{\varepsilon}$. 
For $i\geq i_{\varepsilon}$, there is an increasing sequence $\rho_{j}\to+\infty$ such that $Q(\rho_{j})=\lim_{i}Q_{i}(\rho_{j})$ and we have $\begin{split}w&=\lim_{j\to+\infty}Q(\rho_{j})=\lim_{j}\lim_{i}\sup_{p}\operatorname{vol}(E_{i}\cap B_{\rho_{j}}(p))\geq\limsup_{j}\limsup_{i}\operatorname{vol}(E_{i}\cap B_{\rho_{j}}(p_{i}))\\\ &=\limsup_{j}\limsup_{i}\left(\operatorname{vol}(E_{i}\cap B_{R}(p_{i}))+\operatorname{vol}(E_{i}\cap B_{\rho_{j}}(p_{i})\setminus B_{R}(p_{i}))\right)\\\ &\geq w-\frac{\varepsilon}{4}+\limsup_{j}\limsup_{i}\operatorname{vol}(E_{i}\cap B_{\rho_{j}}(p_{i})\setminus B_{R}(p_{i})).\end{split}$ Then there is $j_{0}$ such that for any $j\geq j_{0}$ we have that $\rho_{j}>R$ and there is $i_{j}$, with $i_{j}$ increasing to $+\infty$ as $j\to+\infty$, that satisfies $\operatorname{vol}(E_{i}\cap B_{\rho_{j}}(p_{i})\setminus B_{R}(p_{i}))<\frac{\varepsilon}{2}\qquad\forall\,i\geq\max\\{i_{\varepsilon},i_{j}\\}.$ (4.1) Hence define $R_{i}\vcentcolon=\rho_{\max\\{j\ :\ i\geq i_{j}\\}}.$ In this way $\operatorname{vol}(E_{i}\cap B_{R_{i}}(p_{i})\setminus B_{R}(p_{i}))<\varepsilon/2$ for any $i\geq\max\\{i_{\varepsilon},i_{j_{0}}\\}$ by (4.1). Defining $U_{i}\vcentcolon=M^{n}\setminus B_{R_{i}}(p_{i})$ we finally get that $\mathsf{d}(p_{i},U_{i})=R_{i}\to+\infty$ and $\begin{split}W\xleftarrow{}\operatorname{vol}(E_{i})&=\operatorname{vol}(E_{i}\cap B_{R}(p_{i}))+\operatorname{vol}(E_{i}\cap B_{R_{i}}(p_{i})\setminus B_{R}(p_{i}))+\operatorname{vol}(E_{i}\cap U_{i})\\\ &\leq w+\frac{3}{4}\varepsilon+\operatorname{vol}(E_{i}\cap U_{i}),\end{split}$ for $i\geq\max\\{i_{\varepsilon},i_{j_{0}}\\}$. By the first line in the above identity, recalling that $|\operatorname{vol}(E_{i}\cap B_{R}(p_{i}))-w|<\varepsilon/4$, we also see that $\limsup_{i}\operatorname{vol}(E_{i}\cap U_{i})\leq W-w+\varepsilon/4$. 
Hence the proof of item 3 is completed by renaming $\max\\{i_{\varepsilon},i_{j_{0}}\\}$ as $i_{\varepsilon}$ and, if necessary, by taking a slightly larger $i_{\varepsilon}$ in order to ensure the validity of the second inequality of item 3. ∎ Let us briefly recall here a useful covering lemma for complete Riemannian manifolds with Ricci curvature bounded from below, cf. [38, Lemma 1.1]. ###### Lemma 4.4. Let $k\in\mathbb{R}$ and let $(M^{n},g)$ be a complete Riemannian manifold such that $\mathrm{Ric}\geq(n-1)k$. Let $0<\rho<T_{k}$, where $T_{k}:=\pi/\sqrt{k}$ if $k>0$, or $T_{k}:=+\infty$ otherwise. Then, there exists a countable family $\\{B_{\rho}(x_{i})\\}_{i\in\mathbb{N}}$ of open balls such that * (i) $\cup_{i\in\mathbb{N}}B_{\rho}(x_{i})=M^{n}$, * (ii) $B_{\rho/2}(x_{i})\cap B_{\rho/2}(x_{j})=\emptyset$ for every $i,j\in\mathbb{N}$ with $i\neq j$, * (iii) for every $y\in M^{n}$ it holds $\sharp\\{i:y\in B_{\rho}(x_{i})\\}\leq\sharp\\{i:y\in B_{2\rho}(x_{i})\\}\leq\frac{v(n,k,6\rho)}{v(n,k,\rho/2)}.$ ###### Proof. Let $\mathscr{F}$ be the collection of the countable families of pairwise disjoint balls $\\{B_{\rho/2}(x_{i}):x_{i}\in M\\}_{i\in\mathbb{N}}$ ordered by the inclusion relation $\subset$. By Zorn's Lemma it is immediate to deduce the existence of a maximal, with respect to $\subset$, family $\mathscr{G}:=\\{B_{\rho/2}(x_{i}):x_{i}\in M\\}_{i\in\mathbb{N}}$ in $\mathscr{F}$. We want to show that $\mathscr{G}$ satisfies the claims. Item (ii) for the family $\mathscr{G}$ holds by definition. Suppose by contradiction that item (i) fails. Then there exists $x\in M$ such that for every $i\in\mathbb{N}$ we have $d(x,x_{i})\geq\rho$. Then, by the triangle inequality, we get that $B_{\rho/2}(x)\cap B_{\rho/2}(x_{i})=\emptyset$ for all $i\in\mathbb{N}$. Thus $\mathscr{G}\cup\\{B_{\rho/2}(x)\\}$ is an element of $\mathscr{F}$ that strictly contains $\mathscr{G}$, contradicting the maximality of $\mathscr{G}$ with respect to $\subset$.
In order to prove item (iii) for the family $\mathscr{G}$ let us first prove that the number $N$ of disjoint balls $B_{\rho/2}(\widetilde{x}_{1}),\dots,B_{\rho/2}(\widetilde{x}_{N})$ that are contained in $B_{3\rho}(x)$, where $x,\widetilde{x}_{1},\dots,\widetilde{x}_{N}\in M^{n}$, is bounded above by $v(n,k,6\rho)/v(n,k,\rho/2)$; here we denote the number of balls by $N$ to avoid any confusion with the dimension $n$. Indeed, calling $B_{\rho/2}(\widetilde{x}_{i_{0}})$ one of the balls with the minimum volume among $B_{\rho/2}(\widetilde{x}_{1}),\dots,B_{\rho/2}(\widetilde{x}_{N})$, we can estimate $N\leq\frac{\operatorname{vol}(B_{3\rho}(x))}{\operatorname{vol}(B_{\rho/2}(\widetilde{x}_{i_{0}}))}\leq\frac{\operatorname{vol}(B_{6\rho}(\widetilde{x}_{i_{0}}))}{\operatorname{vol}(B_{\rho/2}(\widetilde{x}_{i_{0}}))}\leq\frac{v(n,k,6\rho)}{v(n,k,\rho/2)},$ where in the first inequality we are using that $B_{\rho/2}(\widetilde{x}_{1}),\dots,B_{\rho/2}(\widetilde{x}_{N})$ are disjoint and contained in $B_{3\rho}(x)$, and $B_{\rho/2}(\widetilde{x}_{i_{0}})$ is one of the balls with the minimum volume among them; in the second inequality we are using $B_{3\rho}(x)\subset B_{6\rho}(\widetilde{x}_{i_{0}})$ by the triangle inequality; and in the third inequality we are using Bishop–Gromov volume comparison (see A.1). Thus the claim is proved. In order to conclude the proof of item (iii), let us assume that $y\in M^{n}$ is an element of $N$ balls $B_{2\rho}(x_{1})$, $\dots$, $B_{2\rho}(x_{N})$ of the family $\mathscr{G}$ constructed above. Then, by the triangle inequality, $B_{\rho/2}(x_{i})\subset B_{3\rho}(y)$ for every $1\leq i\leq N$. Since $B_{\rho/2}(x_{1}),\dots,B_{\rho/2}(x_{N})$ are disjoint and contained in $B_{3\rho}(y)$ and since $y,x_{1},\dots,x_{N}\in M^{n}$, the previous discussion implies that $N\leq v(n,k,6\rho)/v(n,k,\rho/2)$. As also $\\{i:y\in B_{\rho}(x_{i})\\}\subset\\{i:y\in B_{2\rho}(x_{i})\\}$, the proof of item (iii) is concluded. ∎ We can now deduce a lower bound on the concentration of the mass of a finite perimeter set.
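Before doing so, it may help to record the size of the multiplicity constant of Lemma 4.4 in a model case; the following computation is only an illustration and is not needed in the sequel. When $k=0$ the comparison volume is Euclidean, $v(n,0,r)=\omega_{n}r^{n}$ with $\omega_{n}$ the volume of the unit ball of $\mathbb{R}^{n}$, so the bound in item (iii) evaluates to $\frac{v(n,0,6\rho)}{v(n,0,\rho/2)}=\frac{\omega_{n}(6\rho)^{n}}{\omega_{n}(\rho/2)^{n}}=12^{n},$ independently of $\rho$. For $k<0$ the ratio $v(n,k,6\rho)/v(n,k,\rho/2)$ does depend on $\rho$ and grows as $\rho$ increases; in the proof of the next lemma the covering is applied with $\rho=1$, which produces the constant $v(n,k,6)/v(n,k,1/2)$.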
The following result is a simpler version of [59, Lemma 2.5]. ###### Lemma 4.5 (Local mass lower bound). Let $k\in\mathbb{R}$ and let $(M^{n},g)$ be a complete Riemannian manifold such that $\mathrm{Ric}\geq(n-1)k$. Assume that $(M^{n},g)$ is noncollapsed with $\operatorname{vol}(B_{1}(q))\geq v_{0}>0$ for any $q\in M^{n}$. Then there exists a constant $C_{n,k,v_{0}}>0$ such that for any nonempty bounded finite perimeter set $E$ there exists $p_{0}\in M^{n}$ such that $\operatorname{vol}(E\cap B_{1}(p_{0}))\geq\min\left\\{C_{n,k,v_{0}}\frac{\operatorname{vol}(E)^{n}}{P(E)^{n}},\frac{v_{0}}{2}\right\\}.$ ###### Proof. Without loss of generality we can assume that $k\leq 0$. We distinguish two possible cases. If there is $p_{0}\in M^{n}$ such that $\operatorname{vol}(E\cap B_{1}(p_{0}))\geq\tfrac{1}{2}\operatorname{vol}(B_{1}(p_{0}))$, then clearly $\operatorname{vol}(E\cap B_{1}(p_{0}))\geq v_{0}/2$ and we already have a lower bound. So suppose instead that $\operatorname{vol}(E\cap B_{1}(p))<\frac{1}{2}\operatorname{vol}(B_{1}(p))\qquad\forall\,p\in M.$ (4.2) We apply Lemma 4.4 with $\rho=1$, which yields a covering $\\{B_{1}(x_{i})\\}_{i\in\mathbb{N}}$. Since $E$ is bounded, there is $i_{0}$ such that $L\vcentcolon=\sup_{i\in\mathbb{N}}\operatorname{vol}(E\cap B_{1}(x_{i}))^{\frac{1}{n}}=\operatorname{vol}(E\cap B_{1}(x_{i_{0}}))^{\frac{1}{n}}.$ By (4.2) we can apply the relative isoperimetric inequality in balls contained in [48, Corollaire 1.2]. This immediately gives that $\operatorname{vol}(E\cap B_{1}(p))^{\frac{n-1}{n}}\leq c\,\mathcal{H}^{n-1}(\partial^{*}E\cap B_{1}(p))\qquad\forall\,p\in M,$ (4.3) where $c=c(n,k,v_{0})$. 
Therefore, using (4.3) and item (iii) in Lemma 4.4 we can estimate $\begin{split}\operatorname{vol}(E)&\leq\sum_{i}\operatorname{vol}(E\cap B_{1}(x_{i}))=\sum_{i}\operatorname{vol}(E\cap B_{1}(x_{i}))^{\frac{1}{n}}\operatorname{vol}(E\cap B_{1}(x_{i}))^{\frac{n-1}{n}}\\\ &\leq L\sum_{i}c\,\mathcal{H}^{n-1}(\partial^{*}E\cap B_{1}(x_{i}))\leq Lc\,\frac{v(n,k,6)}{v(n,k,1/2)}P(E),\end{split}$ that is $\operatorname{vol}(E\cap B_{1}(x_{i_{0}}))=L^{n}\geq C_{n,k,v_{0}}\frac{\operatorname{vol}(E)^{n}}{P(E)^{n}},$ where $C_{n,k,v_{0}}\vcentcolon=\left(v(n,k,1/2)/(c\,v(n,k,6))\right)^{n}$. ∎ ### 4.2 Asymptotic mass decomposition We are now ready to prove the following key result, which has to be read as a generalization of [59, Theorem 2]. Indeed, roughly speaking, we are going to prove that whenever a complete noncompact noncollapsed Riemannian manifold with a lower bound on $\mathrm{Ric}$ is given, then the diverging part $\Omega_{i}^{d}$ of any perimeter-minimizing sequence, see 4.1, can be split into different sets that converge, in volume and perimeter, to isoperimetric regions in some pmGH limits at infinity. We thus recover, in the weaker setting of Gromov–Hausdorff convergence, the statement of [59, Theorem 2], except for the precise bound on the number of regions that go to infinity contained in [59, item (X) of Theorem 2], without any a priori assumption on the geometry at infinity of the manifold. For the proof we are inspired by the strategies of [59], even though our reasoning is somewhat different as it heavily exploits the results from the nonsmooth theory discussed in Section 2.2. ###### Theorem 4.6 (Asymptotic mass decomposition). Let $k\in\mathbb{R}$ and let $(M^{n},g)$ be a complete noncompact Riemannian manifold such that $\mathrm{Ric}\geq(n-1)k$. Assume that $(M^{n},g)$ is noncollapsed with $\operatorname{vol}(B_{1}(q))\geq v_{0}>0$ for any $q\in M^{n}$.
Let $\\{\Omega_{i}\\}_{i\in\mathbb{N}}$ be a minimizing (for the perimeter) sequence of finite perimeter sets of volume $V>0$, assume that $\Omega_{i}$ is bounded for any $i$, and let $\Omega_{i}^{c},\Omega_{i}^{d}$ be as in 4.1. If $\lim_{i}\operatorname{vol}(\Omega_{i}^{d})=W>0$, then, up to subsequence, there exist an increasing sequence of natural numbers $N_{i}\geq 1$, a sequence of points $p_{i,j}\in M^{n}$ for $j=1,\ldots,N_{i}$, and a sequence of radii $T_{i,j}\geq 1$ for $j=1,\ldots,N_{i}$ satisfying the following properties. * (i) Letting $\overline{N}\vcentcolon=\lim_{i}N_{i}\in\mathbb{N}\cup\\{+\infty\\}$, we have $\begin{split}\lim_{i}\mathsf{d}(p_{i,j},q)=+\infty&\qquad\forall\,q\in M,\forall\,j<\overline{N}+1,\\\ \lim_{i}\mathsf{d}(p_{i,j},p_{i,k})=+\infty&\qquad\forall\,j\neq k<\overline{N}+1,\\\ B_{T_{i,j}}(p_{i,j})\cap B_{T_{i,k}}(p_{i,k})=\emptyset&\qquad\forall\,i\in\mathbb{N},\forall\,j\neq k\leq N_{i},\\\ &\qquad j\neq\overline{N},k\neq\overline{N},\\\ \lim_{i}T_{i,j}=T_{j}<+\infty&\qquad\forall\,j<\overline{N},\\\ \text{if also $\overline{N}<+\infty$, then $\lim_{i}T_{i,\overline{N}}=+\infty$ and }&\\\ \partial B_{T_{i,\overline{N}}}(p_{i,\overline{N}})\cap\partial B_{T_{i,j}}(p_{i,j})=\emptyset&\qquad\forall\,i\ :\ N_{i}=\overline{N},\forall\,j<\overline{N}.\end{split}$ (4.4) * (ii) Denoting $G_{i}\vcentcolon=B_{T_{i,\overline{N}}}(p_{i,\overline{N}})\cap\Omega_{i}^{d}\setminus\bigcup_{j=1}^{\overline{N}-1}B_{T_{i,j}}(p_{i,j})$ if $\overline{N}<+\infty$ and $i$ is such that $N_{i}=\overline{N}$, it holds that $\lim_{i}P(\Omega_{i}^{d})=\begin{cases}\lim_{i}\left(P(G_{i})+\sum_{j=1}^{\overline{N}-1}P(\Omega_{i}^{d}\cap B_{T_{i,j}}(p_{i,j}))\right)&\text{ if }\overline{N}<+\infty,\\\ \lim_{i}\sum_{j=1}^{N_{i}}P(\Omega_{i}^{d}\cap B_{T_{i,j}}(p_{i,j}))&\text{ if }\overline{N}=+\infty,\end{cases}$ * (iii) For any $j<\overline{N}+1$ there exist an $\mathsf{ncRCD}((n-1)k,n)$ space $(X_{j},\mathsf{d}_{j},\mathfrak{m}_{j})$, points $p_{j}\in X_{j}$ and Borel sets $Z_{j}\subset X_{j}$ such that
$\begin{split}(M^{n},\mathsf{d},\operatorname{vol},p_{i,j})\xrightarrow[i]{}(X_{j},\mathsf{d}_{j},\mathfrak{m}_{j},p_{j})&\qquad\text{in the $pmGH$ sense for any $j$},\\\ \Omega_{i}^{d}\cap B_{T_{i,j}}(p_{i,j})\xrightarrow[i]{}Z_{j}\subset X_{j}&\qquad\text{in the $L^{1}$-strong sense for any $j<\overline{N}$},\\\ \operatorname{vol}(\Omega_{i}^{d}\cap B_{T_{i,j}}(p_{i,j}))\xrightarrow[i]{}\mathfrak{m}_{j}(Z_{j})&\qquad\forall\,j<\overline{N}\\\ \lim_{i}P(\Omega_{i}^{d}\cap B_{T_{i,j}}(p_{i,j}))=P_{X_{j}}(Z_{j})&\qquad\forall\,j<\overline{N},\end{split}$ (4.5) and if $\overline{N}<+\infty$ then $\begin{split}G_{i}\xrightarrow[i]{}Z_{\overline{N}}\subset X_{\overline{N}}&\qquad\text{in the $L^{1}$-strong sense},\\\ \operatorname{vol}(G_{i})\xrightarrow[i]{}\mathfrak{m}_{\overline{N}}(Z_{\overline{N}}),&\\\ \lim_{i}P(G_{i})=P_{X_{\overline{N}}}(Z_{\overline{N}}),\end{split}$ (4.6) where $P_{X_{j}}$ is the perimeter functional on $(X_{j},\mathsf{d}_{j},\mathfrak{m}_{j})$, and $Z_{j}$ is an isoperimetric region in $X_{j}$ for any $j<\overline{N}+1$. * (iv) It holds that $I(V)=P(\Omega)+\sum_{j=1}^{\overline{N}}P_{X_{j}}(Z_{j}),\qquad\qquad V=\operatorname{vol}(\Omega)+\sum_{j=1}^{\overline{N}}\mathfrak{m}_{j}(Z_{j}),$ (4.7) where $\Omega=\lim_{i}\Omega_{i}^{c}$ is as in 4.1. In particular $\begin{split}&\lim_{i}P(\Omega_{i}^{d})=\sum_{j=1}^{\overline{N}}P_{X_{j}}(Z_{j}),\qquad\qquad W=\sum_{j=1}^{\overline{N}}\mathfrak{m}_{j}(Z_{j}).\end{split}$ (4.8) ###### Proof. We divide the proof in several steps. * Step 1. 
Up to passing to a subsequence in $i$, we claim that for any $i$ there exist an increasing sequence of natural numbers $N_{i}\geq 1$ with limit $\overline{N}\vcentcolon=\lim_{i}N_{i}\in\mathbb{N}\cup\\{+\infty\\}$, points $p_{i,1},\ldots,p_{i,N_{i}}\in M^{n}$ for any $i$, radii $R_{j}\geq 1$ and numbers $\eta_{j}\in(0,1]$ defined for $j<\overline{N}$, and, if $\overline{N}<+\infty$, also a sequence of radii $R_{i,\overline{N}}\geq 1$, such that $\begin{split}\lim_{i}\mathsf{d}(p_{i,j},q)=+\infty&\qquad\forall\,q\in M,\forall\,j<\overline{N}+1,\\\ \lim_{i}\mathsf{d}(p_{i,j},p_{i,k})=+\infty&\qquad\forall\,j\neq k<\overline{N}+1,\\\ \mathsf{d}(p_{i,j},p_{i,k})\geq R_{j}+R_{k}+2&\qquad\forall\,i\in\mathbb{N},\forall\,j\neq k\leq N_{i},\\\ &\qquad j\neq\overline{N},k\neq\overline{N},\\\ \exists\,\lim_{i}\operatorname{vol}(\Omega_{i}^{d}\cap B_{R_{j}}(p_{i,j}))=w_{j}^{\prime}>0&\qquad\forall\,j<\overline{N},\\\ \mathcal{H}^{n-1}(\partial^{*}\Omega_{i}^{d}\cap\partial B_{R_{j}}(p_{i,j}))=0&\qquad\forall\,i,\forall\,j\leq N_{i},j\neq\overline{N},\\\ \mathcal{H}^{n-1}(\Omega_{i}^{d}\cap\partial B_{R_{j}}(p_{i,j}))\leq\frac{\eta_{j}}{2^{j}}&\qquad\forall\,i,\forall\,j\leq N_{i},j\neq\overline{N},\\\ \text{if also $\overline{N}=+\infty$, then }W\geq\lim_{i}\sum_{j=1}^{N_{i}}\operatorname{vol}(\Omega_{i}^{d}\cap B_{R_{j}}(p_{i,j}))&\qquad\forall\,i,\\\ \text{if instead $\overline{N}<+\infty$, then $\lim_{i}R_{i,\overline{N}}=+\infty$,}&\\\ \mathcal{H}^{n-1}(\partial^{*}\Omega_{i}^{d}\cap\partial B_{R_{i,\overline{N}}}(p_{i,\overline{N}}))=0&\qquad\forall\,i\ :\ N_{i}=\overline{N},\\\ \mathcal{H}^{n-1}(\Omega_{i}^{d}\cap\partial B_{R_{i,\overline{N}}}(p_{i,\overline{N}}))\leq\frac{1}{2^{\overline{N}}}&\qquad\forall\,i\ :\ N_{i}=\overline{N},\\\ \mathsf{d}\left(\partial B_{R_{i,\overline{N}}}(p_{i,\overline{N}}),\partial B_{R_{j}}(p_{i,j})\right)>2&\qquad\forall\,i\ :\ N_{i}=\overline{N},\forall\,j\neq\overline{N},\\\ 
W=\lim_{i}\operatorname{vol}\left(B_{R_{i,\overline{N}}}(p_{i,\overline{N}})\cap\left(\Omega_{i}^{d}\setminus\bigcup_{j=1}^{\overline{N}-1}B_{R_{j}}(p_{i,j})\right)\right)&+\sum_{j=1}^{\overline{N}-1}\operatorname{vol}(\Omega_{i}^{d}\cap B_{R_{j}}(p_{i,j})).\\\ \end{split}$ (4.9) Let us briefly explain how the proof of this step proceeds. We are going to produce the claimed points and radii by induction, with respect to $j$, applying Lemma 4.3. We will prove that each time we apply Lemma 4.3 on some set during the proof of this step, we will never end up in Item 1. As a first step, we shall apply Lemma 4.3 on $E_{i}=\Omega_{i}^{d}$. If Item 2 occurs, then we will show that $\overline{N}=1=N_{i}$ for any $i$; indeed, Item 2 yields a sequence of points $p_{i,1}$ and a diverging sequence of radii $R_{i,1}$ such that all the mass $W$ eventually concentrates in the sequence of balls $B_{R_{i,1}}(p_{i,1})$. Moreover $p_{i,1}$ diverges at infinity (as $\Omega_{i}^{d}$ does), we do not construct other sequences of points, and all the identities in (4.9) can be realized by appropriately choosing $R_{i,1}$. If instead Item 3 occurs, then Item 3 yields the first sequence of points $p_{i,1}$ and a radius $R_{1}$ such that a certain amount $w_{1}^{\prime}>0$ of mass eventually concentrates in the balls $B_{R_{1}}(p_{i,1})$. Moreover points $p_{i,1}$ diverge at infinity. Now, in this case, we will iterate the construction by applying Lemma 4.3 to the sequence $\Omega_{i}^{d}\setminus B_{R_{1}}(p_{i,1})$. As anticipated, we will see that Item 1 does not occur. If Item 2 occurs, then _for large_ $i$ we find a second sequence of points $p_{i,2}$ and diverging radii $R_{i,2}$ such that all the remaining mass eventually concentrates in the sequence of balls $B_{R_{i,2}}(p_{i,2})$. In this case $\overline{N}=2=N_{i}$ for those $i$ such that $p_{i,2}$ are defined. Moreover, the accurate choice of the radii eventually realizes the relations in (4.9). 
If instead Item 3 occurs again, then Item 3 yields a second sequence of points $p_{i,2}$ defined _for large_ $i$ and a radius $R_{2}$ such that a new amount $w_{2}^{\prime}>0$ of mass eventually concentrates in the balls $B_{R_{2}}(p_{i,2})$. Also the radius $R_{2}$ can be chosen so that, in relation to the already constructed balls $B_{R_{1}}(p_{i,1})$, the identities in (4.9) will be eventually satisfied. In this latter case we iterate the construction again by applying Lemma 4.3 on $\Omega_{i}^{d}\setminus\cup_{j=1}^{2}B_{R_{j}}(p_{i,j})$ and so on. Each time we apply Lemma 4.3, we make sure that the newly constructed sequences of points and radii satisfy the relations prescribed in (4.9) in relation to the already defined sequences of balls. As described above, if Item 2 occurs at some iteration, then the construction stops and $\overline{N}$ equals the number of times the iterations occurred. If instead each application of Lemma 4.3 leads to Item 3, then $\overline{N}=+\infty$, the construction is iterated infinitely many times, and we end up with countably many sequences of points $p_{i,j}$ and radii $R_{j}$. These sequences will eventually satisfy (4.9) because at any iteration the new sequences of points and the new radii are “coherent” with the previously constructed ones, i.e., they satisfy the relations in (4.9) in relation to the already constructed balls. Observe that, for given $j\in\mathbb{N}$, the first index $i$ such that $p_{i,j}$ is defined (if it exists) depends on the choice of a sufficiently large index provided by Lemma 4.3 for a chosen threshold $\varepsilon$; therefore the sequence $N_{i}$ is inductively constructed together with the appearance of the sequences $p_{i,j},R_{j}$. Now, let us move to the proof. As the first step ($j=1$), let us apply Lemma 4.3 on $E_{i}=\Omega_{i}^{d}$. Since $W>0$ and, by item (i) of 4.1, there exists a constant $C_{1}>0$ such that $P(\Omega_{i}^{d})\leq C_{1}$, Lemma 4.5 implies that Item 1 in Lemma 4.3 does not occur.
Indeed by Lemma 4.5 we find a sequence $q_{i}\in M^{n}$ such that $\operatorname{vol}(\Omega_{i}^{d}\cap B_{1}(q_{i}))\geq\min\left\\{\frac{C_{n,k,v_{0}}}{C_{1}^{n}}\left(\frac{W}{2}\right)^{n},\frac{v_{0}}{2}\right\\},$ for any large $i$ such that $\operatorname{vol}(\Omega_{i}^{d})\geq W/2$. Such an estimate would contradict the occurrence of Item 1. So suppose that Item 3 in Lemma 4.3 occurs. Denote by $w_{1}:=w\in(0,W)$ the number given by Item 3, and take $\alpha_{1}\vcentcolon=\min\left\\{\frac{C_{n,k,v_{0}}}{\overline{C}^{n}}\left(\frac{W-w_{1}}{2}\right)^{n},\frac{v_{0}}{2}\right\\},\qquad\varepsilon_{1}<\frac{1}{3}\frac{\eta_{1}}{2^{2}}\vcentcolon=\frac{1}{3}\frac{1}{2^{2}}\min\left\\{1,\alpha_{1},\frac{w_{1}}{2}\right\\},$ where $C_{n,k,v_{0}}$ is as in Lemma 4.5 and $\overline{C}\vcentcolon=C_{1}+2\geq P(\Omega_{i}^{d})+2$ for any $i$. Hence let $p_{i,1},R^{*}_{1}\geq 1$ be given by Item 3 applied with $\varepsilon=\varepsilon_{1}$. Take $R_{1}\geq R^{*}_{1}$ such that $\partial B_{R_{1}}(p_{i,1})$ is Lipschitz and $\mathcal{H}^{n-1}(\partial^{*}\Omega_{i}^{d}\cap\partial B_{R_{1}}(p_{i,1}))=0$ for any $i$. Moreover, up to subsequence, we have that there exists $\lim_{i}\operatorname{vol}(\Omega_{i}^{d}\cap B_{R_{1}}(p_{i,1}))=:w_{1}^{\prime}\in(0,W)$. Also, since $\Omega_{i}^{d}\cap\mathcal{C}=\emptyset$ eventually for any compact set $\mathcal{C}$, we have $\mathsf{d}(p_{i,1},q)\to+\infty$ for any fixed $q\in M^{n}$. Finally, by Item 3 and since $\Omega_{i}^{d}$ is bounded, there is a sequence of open sets $V_{i}^{1}$ such that $\mathsf{d}(B_{R_{1}}(p_{i,1}),V_{i}^{1})\to+\infty$ and $\operatorname{vol}(\Omega_{i}^{d})-\operatorname{vol}(\Omega_{i}^{d}\cap B_{R_{1}}(p_{i,1}))-\operatorname{vol}(\Omega_{i}^{d}\cap V_{i}^{1})<3\varepsilon_{1}<\eta_{1}/2^{2},$ (4.10) for $i$ sufficiently large.
So, for large $i$, by the coarea formula we can estimate $\begin{split}\frac{\eta_{1}}{2^{2}}>\int_{R_{1}}^{\mathsf{d}({p_{i,1}},V_{i}^{1})}\mathcal{H}^{n-1}(\Omega_{i}^{d}\cap\partial B_{t}(p_{i,1}))\,\mathrm{d}t>\int_{R_{1}}^{R_{1}+1}\mathcal{H}^{n-1}(\Omega_{i}^{d}\cap\partial B_{t}(p_{i,1}))\,\mathrm{d}t.\end{split}$ Therefore, up to taking a new radius in $(R_{1},R_{1}+1)$, still denoted by $R_{1}$, we can further ensure that $\mathcal{H}^{n-1}(\Omega_{i}^{d}\cap\partial B_{R_{1}}(p_{i,1}))\leq\frac{\eta_{1}}{2}.$ If instead Item 2 in Lemma 4.3 occurs, then we take $\overline{N}=1=N_{i}$ for any $i$ as Item 2 yields sequences $p_{i,1},R_{i,1}$ such that $\operatorname{vol}(\Omega_{i}^{d}\cap B_{R_{i,1}}(p_{i,1}))\geq W-1/i$, up to subsequence. Indeed, arguing as above, also in this case we can ensure all the remaining properties in (4.9) and we can also take $R_{i,1}\to+\infty$ as $i\to+\infty$. So we have seen that, if for $j=1$ the alternative in Item 3 occurs, the construction must be iterated. We now show the inductive construction only for the step $j=2$, the passage $j\Rightarrow j+1$ being completely analogous. For $j=2$ we now apply Lemma 4.3 on $E_{i}=\Omega_{i}^{d}\setminus B_{R_{1}}(p_{i,1})$. Again, since $\operatorname{vol}(\Omega_{i}^{d}\setminus B_{R_{1}}(p_{i,1}))\to W-w_{1}^{\prime}>0$, Item 1 in Lemma 4.3 does not occur because of Lemma 4.5. Indeed, just as for $j=1$, a positive lower bound on the volume and the finite upper bound on the perimeter given by $P(\Omega_{i}^{d}\setminus B_{R_{1}}(p_{i,1}))\leq P(\Omega_{i}^{d})+\mathcal{H}^{n-1}(\Omega_{i}^{d}\cap\partial B_{R_{1}}(p_{i,1}))\leq P(\Omega_{i}^{d})+\eta_{1}/2\leq\overline{C}$ imply that Item 1 would contradict Lemma 4.5. So if Item 3 in Lemma 4.3 occurs, then denote by $w_{2}:=w\in(0,W-w_{1}^{\prime})$ the number given by Item 3.
In this case we take $\begin{split}\alpha_{2}\vcentcolon=\min\left\\{\frac{C_{n,k,v_{0}}}{\overline{C}^{n}}\left(\frac{W-w_{1}^{\prime}-w_{2}}{2}\right)^{n},\frac{v_{0}}{2}\right\\},\qquad\varepsilon_{2}<\frac{1}{3}\frac{\eta_{2}}{2^{3}}\vcentcolon=\frac{1}{3}\frac{1}{2^{3}}\min\left\\{1,\alpha_{2},\frac{w_{2}}{2}\right\\}.\end{split}$ Hence Item 3 gives a sequence of points $p_{i,2}$ and a radius $R^{*}_{2}\geq 1$. As before, let $R_{2}\geq R^{*}_{2}$ be such that, up to passing to a subsequence, we have that $\partial B_{R_{2}}(p_{i,2})$ is Lipschitz, $\mathcal{H}^{n-1}(\partial^{*}\Omega_{i}^{d}\cap\partial B_{R_{2}}(p_{i,2}))=0$ for any $i$, there exists $\lim_{i}\operatorname{vol}(\left(\Omega_{i}^{d}\setminus B_{R_{1}}(p_{i,1})\right)\cap B_{R_{2}}(p_{i,2}))=w_{2}^{\prime}>0$, and we have that $\mathsf{d}(p_{i,2},q)\to+\infty$ for any $q\in M^{n}$. Moreover, the use of the coarea formula as done above now yields $\mathcal{H}^{n-1}(\left(\Omega_{i}^{d}\setminus B_{R_{1}}(p_{i,1})\right)\cap\partial B_{R_{2}}(p_{i,2}))\leq\frac{\eta_{2}}{2^{2}}.$ We now show that $\mathsf{d}(p_{i,1},p_{i,2})\to+\infty$, and then, up to subsequence, we can also assume that $\mathsf{d}(p_{i,1},p_{i,2})\geq R_{1}+R_{2}+2$ for any $i$ such that $p_{i,2}$ is defined. Indeed, if $\limsup_{i}\mathsf{d}(p_{i,1},p_{i,2})$ is finite, then $\operatorname{vol}(\left(\Omega_{i}^{d}\setminus B_{R_{1}}(p_{i,1})\right)\cap B_{R_{2}}(p_{i,2}))\leq\operatorname{vol}(\Omega_{i}^{d}\setminus\left(B_{R_{1}}(p_{i,1})\cup V_{i}^{1}\right))\leq\frac{\eta_{1}}{2^{2}},$ for large $i$. On the other hand we know that $\operatorname{vol}(\Omega_{i}^{d}\setminus B_{R_{1}}(p_{i,1}))\geq W-w_{1}^{\prime}+o(1)\geq(W-w_{1})/2$ and $P(\Omega_{i}^{d}\setminus B_{R_{1}}(p_{i,1}))\leq\overline{C}$, for large $i$.
Therefore, using the characterization of $p_{i,2}$ in Item 3 and applying Lemma 4.5 on $\Omega_{i}^{d}\setminus B_{R_{1}}(p_{i,1})$, we get for some $q_{i}\in M^{n}$ that $\begin{split}\frac{\eta_{1}}{2^{2}}&\geq\operatorname{vol}(\left(\Omega_{i}^{d}\setminus B_{R_{1}}(p_{i,1})\right)\cap B_{R_{2}}(p_{i,2}))\geq\operatorname{vol}(\left(\Omega_{i}^{d}\setminus B_{R_{1}}(p_{i,1})\right)\cap B_{R^{*}_{2}}(p_{i,2}))\\\ &\geq\operatorname{vol}(\left(\Omega_{i}^{d}\setminus B_{R_{1}}(p_{i,1})\right)\cap B_{R^{*}_{2}}(q_{i}))\geq\operatorname{vol}(\left(\Omega_{i}^{d}\setminus B_{R_{1}}(p_{i,1})\right)\cap B_{1}(q_{i}))\\\ &\geq\min\left\\{C_{n,k,v_{0}}\frac{\operatorname{vol}(\Omega_{i}^{d}\setminus B_{R_{1}}(p_{i,1}))^{n}}{P(\Omega_{i}^{d}\setminus B_{R_{1}}(p_{i,1}))^{n}},\frac{v_{0}}{2}\right\\}\geq\alpha_{1},\end{split}$ for large $i$. But since $\eta_{1}\leq\alpha_{1}$, the above inequality yields a contradiction. Now since $\mathsf{d}(p_{i,1},p_{i,2})\to_{i}+\infty$, the above identities simplify into $w_{2}^{\prime}=\lim_{i}\operatorname{vol}(\Omega_{i}\cap B_{R_{2}}(p_{i,2})),\qquad\mathcal{H}^{n-1}(\Omega_{i}^{d}\cap\partial B_{R_{2}}(p_{i,2}))\leq\frac{\eta_{2}}{2^{2}},$ up to passing to a subsequence; also, by Item 3, analogously as in (4.10) we obtain $\begin{split}\operatorname{vol}&(\Omega_{i}^{d})-\operatorname{vol}(\Omega_{i}^{d}\cap B_{R_{1}}(p_{i,1}))-\operatorname{vol}(\Omega_{i}^{d}\cap B_{R_{2}}(p_{i,2}))-\operatorname{vol}(\left(\Omega_{i}^{d}\setminus B_{R_{1}}(p_{i,1})\right)\cap V^{2}_{i})\\\ &=\operatorname{vol}(\Omega_{i}^{d}\setminus B_{R_{1}}(p_{i,1}))-\operatorname{vol}((\Omega_{i}^{d}\setminus B_{R_{1}}(p_{i,1}))\cap B_{R_{2}}(p_{i,2}))-\operatorname{vol}(\left(\Omega_{i}^{d}\setminus B_{R_{1}}(p_{i,1})\right)\cap V^{2}_{i})\\\ &\leq 3\varepsilon_{2}=\frac{\eta_{2}}{2^{3}},\end{split}$ for any large $i$ such that $p_{i,2}$ is defined, for a sequence of bounded open sets $V^{2}_{i}$ such that $\mathsf{d}(p_{i,2},V^{2}_{i})\to+\infty$. 
At this point, the new sequence $p_{i,2}$ and the radius $R_{2}$ satisfy the conditions prescribed in (4.9) in relation to the already constructed sequence of balls $B_{R_{1}}(p_{i,1})$. Finally, if instead Item 2 in Lemma 4.3 occurs for $j=2$, then Item 2 yields sequences $p_{i,2},R_{i,2}\geq 1$ such that $\operatorname{vol}(B_{R_{i,2}}(p_{i,2})\cap\left(\Omega_{i}^{d}\setminus B_{R_{1}}(p_{i,1})\right))\geq W-w_{1}^{\prime}-1/i$, up to subsequence for large $i$. Since Item 2 also gives $r\geq 1$ such that $\operatorname{vol}(B_{r}(p_{i,2})\cap\left(\Omega_{i}^{d}\setminus B_{R_{1}}(p_{i,1})\right))\geq\operatorname{vol}(B_{r}(q)\cap\left(\Omega_{i}^{d}\setminus B_{R_{1}}(p_{i,1})\right))$ for any $q\in M^{n}$, arguing as above one easily gets that $\mathsf{d}(p_{i,1},p_{i,2})\to+\infty$. Hence $\overline{N}=2=N_{i}$ for large $i$. Moreover, arguing as in the step $j=1$ above, also in this case we can ensure all the remaining properties in (4.9), we can also take $R_{i,2}\to+\infty$ as $i\to+\infty$, and assume that $\mathsf{d}(\partial B_{R_{i,2}}(p_{i,2}),\partial B_{R_{1}}(p_{i,1}))>2$ for large $i$. If instead for $j=2$ Item 3 occurs, one needs to continue the construction for $j=3$, applying Lemma 4.3 on $E_{i}=\Omega_{i}^{d}\setminus(B_{R_{1}}(p_{i,1})\cup B_{R_{2}}(p_{i,2}))$. Once again Item 1 cannot occur, because of Lemma 4.5 and since $\operatorname{vol}(\Omega_{i}^{d}\setminus(B_{R_{1}}(p_{i,1})\cup B_{R_{2}}(p_{i,2})))\to W-w_{1}^{\prime}-w_{2}^{\prime}>0$. Then it can be checked that the construction inductively proceeds depending on whether Item 2 or Item 3 occurs for $j=3$ as discussed above for $j=2$. Eventually one gets the desired sequences $N_{i},p_{i,j},R_{j},\eta_{j}$ as claimed in (4.9). * Step 2.
We claim that if $\overline{N}=+\infty$ then $W=\lim_{i}\sum_{j=1}^{N_{i}}\operatorname{vol}(\Omega_{i}^{d}\cap B_{R_{j}}(p_{i,j})).$ (4.11) Moreover, we claim that, up to passing to a subsequence in $i$, there exist sequences of radii $\\{T_{i,j}\\}_{i\in\mathbb{N}}$, with $T_{i,j}\in(R_{j},R_{j}+1)$ for any $j<\overline{N}$, and $T_{i,\overline{N}}\in(R_{i,\overline{N}},R_{i,\overline{N}}+1)$ if $\overline{N}<+\infty$, such that (4.4) holds and $\lim_{i}\sum_{j=1}^{N_{i}}\mathcal{H}^{n-1}(\Omega_{i}^{d}\cap\partial B_{T_{i,j}}(p_{i,j}))=0.$ (4.12) Assume first that $\overline{N}=+\infty$. We observe that, up to passing to a subsequence in $i$, we have $W\geq\lim_{i}\sum_{j=1}^{N_{i}}\operatorname{vol}(\Omega_{i}^{d}\cap B_{R_{j}}(p_{i,j}))\geq\sum_{j=1}^{M}w^{\prime}_{j},$ for any $M\in\mathbb{N}$, and then $W\geq\sum_{j=1}^{+\infty}w^{\prime}_{j}$. Suppose by contradiction that $W>\lim_{i}\sum_{j=1}^{N_{i}}\operatorname{vol}(\Omega_{i}^{d}\cap B_{R_{j}}(p_{i,j}))$, and define $\widetilde{\Omega}_{i}^{v}\vcentcolon=\Omega_{i}^{d}\setminus\bigcup_{j=1}^{N_{i}}B_{R_{j}}(p_{i,j}).$ By the contradiction hypothesis, up to passing to a subsequence, we have that $\lim_{i}\operatorname{vol}(\widetilde{\Omega}_{i}^{v})=\omega>0$ and, by (4.9), we estimate $\begin{split}P(\widetilde{\Omega}_{i}^{v})&\leq P(\Omega_{i}^{d})+\sum_{j=1}^{N_{i}}\mathcal{H}^{n-1}(\Omega_{i}^{d}\cap\partial B_{R_{j}}(p_{i,j}))\leq\overline{C}.\end{split}$ On the other hand, applying Lemma 4.5 on $\widetilde{\Omega}_{i}^{v}$ yields $\operatorname{vol}(\widetilde{\Omega}_{i}^{v}\cap B_{1}(q_{i}))\geq\min\left\\{C_{n,k,v_{0}}\frac{\operatorname{vol}(\widetilde{\Omega}_{i}^{v})^{n}}{P(\widetilde{\Omega}_{i}^{v})^{n}},\frac{v_{0}}{2}\right\\}\geq\min\left\\{C_{n,k,v_{0}}\frac{(\omega/2)^{n}}{\overline{C}^{n}},\frac{v_{0}}{2}\right\\}\eqqcolon C_{\omega},$ for some $q_{i}\in M^{n}$, for large $i$.
Hence for large $i$ and for any fixed $j_{0}\leq N_{i}$ we have $\begin{split}C_{\omega}&\leq\operatorname{vol}(\widetilde{\Omega}_{i}^{v}\cap B_{1}(q_{i}))\leq\operatorname{vol}\left(B_{1}(q_{i})\cap\,\Omega_{i}^{d}\setminus\bigcup_{j=1}^{j_{0}-1}B_{R_{j}}(p_{i,j})\right)\\\ &\leq\operatorname{vol}\left(B_{R^{*}_{j_{0}}}(q_{i})\cap\,\Omega_{i}^{d}\setminus\bigcup_{j=1}^{j_{0}-1}B_{R_{j}}(p_{i,j})\right)\\\ &\leq\operatorname{vol}\left(B_{R^{*}_{j_{0}}}(p_{i,j_{0}})\cap\,\Omega_{i}^{d}\setminus\bigcup_{j=1}^{j_{0}-1}B_{R_{j}}(p_{i,j})\right)\\\ &\leq\operatorname{vol}(\Omega_{i}^{d}\cap B_{R_{j_{0}}}(p_{i,j_{0}})),\end{split}$ where $R^{*}_{j_{0}}\leq R_{j_{0}}$ was determined by the application of Item 3 in Step 1. Since $N_{i}\to+\infty$, from the estimate above we would get $+\infty=\sum_{j=1}^{+\infty}w^{\prime}_{j}\leq W$, which gives a contradiction. Hence (4.11) is proved. Now we prove (4.12). Assume first that $\overline{N}=+\infty$. Then in the above notation, using (4.9), in particular the fact that $\mathsf{d}(p_{i,j},p_{i,k})\geq R_{j}+R_{k}+2$, and the coarea formula, we estimate $\begin{split}\operatorname{vol}(\widetilde{\Omega}_{i}^{v})&\geq\sum_{j=1}^{N_{i}}\int_{R_{j}}^{R_{j}+1}\mathcal{H}^{n-1}(\Omega_{i}^{d}\cap\partial B_{t}(p_{i,j}))\,\mathrm{d}t\geq\frac{1}{2}\sum_{j=1}^{N_{i}}\mathcal{H}^{n-1}(\Omega_{i}^{d}\cap\partial B_{T_{i,j}}(p_{i,j})),\end{split}$ (4.13) for some $T_{i,j}\in(R_{j},R_{j}+1)$ for any $j$. Up to subsequence (in $i$) we have that $T_{i,j}\to T_{j}$ for any $j$, and since $\operatorname{vol}(\widetilde{\Omega}_{i}^{v})\to 0$ by (4.11), (4.12) follows together with the properties stated in (4.4).
If instead $\overline{N}<+\infty$, since $\mathsf{d}\left(\partial B_{R_{i,\overline{N}}}(p_{i,\overline{N}}),\partial B_{R_{j}}(p_{i,j})\right)>2\qquad\forall\,i\ :\ N_{i}=\overline{N},\forall\,j<\overline{N},$ by (4.9), letting now $\widehat{\Omega}_{i}^{v}\vcentcolon=\Omega_{i}^{d}\setminus\left(\bigcup_{j=1}^{\overline{N}-1}B_{R_{j}}(p_{i,j})\cup B_{R_{i,\overline{N}}}(p_{i,\overline{N}})\right)$, as $\operatorname{vol}(\widehat{\Omega}_{i}^{v})\to 0$ by the last line in (4.9), we can perform an estimate analogous to (4.13), therefore obtaining the desired $T_{i,j}$ for any $j\leq\overline{N}$ satisfying (4.12). Hence (4.4) holds also in this case. * Step 3. We claim that, letting $\Omega_{i}^{v}\vcentcolon=\Omega_{i}^{d}\setminus\bigcup_{j=1}^{N_{i}}\left(\Omega_{i}^{d}\cap B_{T_{i,j}}(p_{i,j})\right)$, we have $\lim_{i}\operatorname{vol}(\Omega_{i}^{v})=0,$ (4.14) and that, if $\overline{N}=+\infty$, then $W=\lim_{i}\sum_{j=1}^{N_{i}}\operatorname{vol}(\Omega_{i}^{d}\cap B_{T_{i,j}}(p_{i,j}))=\sum_{j=1}^{+\infty}\lim_{i}\operatorname{vol}(\Omega_{i}^{d}\cap B_{T_{i,j}}(p_{i,j})).$ (4.15) Since $T_{i,j}\geq R_{j}$ for any $j<\overline{N}$, we have that, if $\overline{N}=+\infty$, then $\Omega_{i}^{v}\subset\widetilde{\Omega}_{i}^{v}$, while if $\overline{N}<+\infty$, then analogously $\Omega_{i}^{v}\subset\widehat{\Omega}_{i}^{v}$. Hence in any case (4.14) follows from (4.11), if $\overline{N}=+\infty$, or from the last line in (4.9), if $\overline{N}<+\infty$. Now suppose that $\overline{N}=+\infty$. Since $T_{i,j}\geq R_{j}$ for any $j$, by (4.11) we see that $W=\lim_{i}\sum_{j=1}^{N_{i}}\operatorname{vol}(\Omega_{i}^{d}\cap B_{R_{j}}(p_{i,j}))\leq\lim_{i}\sum_{j=1}^{N_{i}}\operatorname{vol}(\Omega_{i}^{d}\cap B_{T_{i,j}}(p_{i,j}))\leq W.$ Up to subsequence, denote $\omega_{j}\vcentcolon=\lim_{i}\operatorname{vol}(\Omega_{i}^{d}\cap B_{T_{i,j}}(p_{i,j}))$ for any $j$.
By the above identity, we see that $W\geq\sum_{j=1}^{+\infty}\omega_{j}$, and then $\lim_{j}\omega_{j}=0$. In order to prove the second part of (4.15), suppose by contradiction that $\sum_{j=1}^{+\infty}\omega_{j}=Y<W$. We argue as before considering $C^{*}\vcentcolon=\min\left\\{\frac{C_{n,k,v_{0}}}{\overline{C}^{n}}\left(\frac{W-Y}{2}\right)^{n},\frac{v_{0}}{2}\right\\}.$ Let $j^{*}$ be such that $\omega_{j}<C^{*}$ for any $j\geq j^{*}$. From now on consider $j>j^{*}$. We clearly have $\operatorname{vol}\left(\Omega_{i}^{d}\setminus\bigcup_{k=1}^{j-1}B_{R_{k}}(p_{i,k})\right)\geq\frac{W-Y}{2},$ for any large $i$. Moreover $P\left(\Omega_{i}^{d}\setminus\bigcup_{k=1}^{j-1}B_{R_{k}}(p_{i,k})\right)\leq P(\Omega_{i}^{d})+\sum_{k=1}^{j-1}\mathcal{H}^{n-1}(\Omega_{i}^{d}\cap\partial B_{R_{k}}(p_{i,k}))\leq\overline{C},$ by (4.9). On the other hand, applying Lemma 4.5 on $\Omega_{i}^{d}\setminus\bigcup_{k=1}^{j-1}B_{R_{k}}(p_{i,k})$ yields the existence of $q_{i}\in M^{n}$ such that $\operatorname{vol}\left(B_{1}(q_{i})\cap\,\Omega_{i}^{d}\setminus\bigcup_{k=1}^{j-1}B_{R_{k}}(p_{i,k})\right)\geq C^{*},$ for any large $i$. As $p_{i,j}$ is obtained by applying Item 3 on $\Omega_{i}^{d}\setminus\bigcup_{k=1}^{j-1}B_{R_{k}}(p_{i,k})$ and all the produced balls are disjoint, this implies that $\operatorname{vol}(\Omega_{i}^{d}\cap B_{R_{j}}(p_{i,j}))\geq C^{*},$ for any $j>j^{*}$ and any $i$ large. Hence $\omega_{j}\geq C^{*}$ for any $j>j^{*}$, and $\sum_{j=1}^{+\infty}\omega_{j}=+\infty$, yielding a contradiction. * Step 4. 
We claim that $\lim_{i}P(\Omega_{i}^{v})=0,$ (4.16) and, denoting $G_{i}\vcentcolon=B_{T_{i,\overline{N}}}(p_{i,\overline{N}})\cap\Omega_{i}^{d}\setminus\bigcup_{j=1}^{\overline{N}-1}B_{T_{i,j}}(p_{i,j})$ if $\overline{N}<+\infty$, that $\lim_{i}P(\Omega_{i}^{d})=\begin{cases}\lim_{i}\left(P(G_{i})+\sum_{j=1}^{\overline{N}-1}P(\Omega_{i}^{d}\cap B_{T_{i,j}}(p_{i,j}))\right)&\overline{N}<+\infty,\\\ \lim_{i}\sum_{j=1}^{N_{i}}P(\Omega_{i}^{d}\cap B_{T_{i,j}}(p_{i,j}))&\overline{N}=+\infty,\end{cases}$ (4.17) We also claim that item (iii) of the statement holds. In order to prove (4.16), we assume without loss of generality that $\operatorname{vol}(\Omega_{i}^{v})>0$. Let us assume first that $\overline{N}=+\infty$. By (4.12) we have that $\lim_{i}P(\Omega_{i}^{d})=\lim_{i}\left(P(\Omega_{i}^{v})+\sum_{j=1}^{N_{i}}P(\Omega_{i}^{d}\cap B_{T_{i,j}}(p_{i,j}))\right).$ (4.18) If, by contradiction, $\lim_{i}P(\Omega_{i}^{v})>0$, then we consider the new sequence $F_{i}=\Omega_{i}^{c}\cup B_{\rho_{i}}(q_{i})\cup\bigcup_{j=1}^{N_{i}}\Omega_{i}^{d}\cap B_{T_{i,j}}(p_{i,j}),$ where $B_{\rho_{i}}(q_{i})$ is a ball such that $\operatorname{vol}(B_{\rho_{i}}(q_{i}))=\operatorname{vol}(\Omega_{i}^{v})$ and $B_{\rho_{i}}(q_{i})\cap\Omega_{i}=\emptyset$. Observe that such a ball exists since $\Omega_{i}$ is bounded and $\operatorname{vol}(\Omega_{i}^{v})\to 0$ by (4.14), hence $\rho_{i}<1$ for large $i$. Actually $\rho_{i}\to 0$, indeed A.1 implies that $\operatorname{vol}(B_{r}(q))\geq v(n,k,r)\frac{\operatorname{vol}(B_{1}(q))}{v(n,k,1)}\geq\frac{v_{0}}{v(n,k,1)}v(n,k,r),$ for any $r\in(0,1)$. Hence $v(n,k,\rho_{i})\to 0$ and hence $\rho_{i}\to 0$. 
Moreover by A.1 (together with Remark 2.15) we have $P(B_{\rho_{i}}(q_{i}))\leq s(n,k,\rho_{i})\xrightarrow[i]{}0.$ (4.19) Now observe that by 4.1 we have that $\lim_{i}P(\Omega_{i})=\lim_{i}\left(P(\Omega_{i}^{c})+P(\Omega_{i}^{d})\right)=\lim_{i}\left(P(\Omega_{i})+2\mathcal{H}^{n-1}(\partial B_{r_{i}}(o)\cap\Omega_{i})\right),$ and thus $\lim_{i}\mathcal{H}^{n-1}(\partial B_{r_{i}}(o)\cap\Omega_{i})=0$. Hence by definition of $F_{i}$ we can write $P(F_{i})=\mathcal{H}^{n-1}(\Sigma_{i})+P(\Omega_{i}^{c})+P(B_{\rho_{i}}(q_{i}))+\sum_{j=1}^{N_{i}}P(\Omega_{i}^{d}\cap B_{T_{i,j}}(p_{i,j})),$ where $\Sigma_{i}\subset\partial B_{r_{i}}(o)\cap\Omega_{i}$, and thus $\lim_{i}\mathcal{H}^{n-1}(\Sigma_{i})=0$. Therefore, by (4.18), (4.19), and since $\operatorname{vol}(F_{i})=V$, the absurd hypothesis implies $I(V)=\lim_{i}\left(P(\Omega_{i}^{c})+P(\Omega_{i}^{d})\right)>\lim_{i}P(F_{i})\geq I(V),$ which is a contradiction. The same argument shows that (4.16) holds also in case $\overline{N}<+\infty$. Indeed (4.18) still holds, $\Omega_{i}^{d}$ is bounded by assumption, and then the suitable new definition of $F_{i}$ leads to the same conclusion. So if $\overline{N}<+\infty$, we see that (4.16) and (4.12) imply the first line in (4.17). If instead $\overline{N}=+\infty$, then (4.16) and (4.18) imply the second line in (4.17). It remains to prove the claims in item (iii). By Remark 2.12, up to passing to a subsequence in $i$ and by a diagonal argument, we immediately have that for any $j<\overline{N}+1$ there exists a noncollapsed Ricci limit space $(X_{j},\mathsf{d}_{j},\mathfrak{m}_{j})$, which is thus an $\mathsf{ncRCD}((n-1)k,n)$ space (see Remark 2.11), and points $p_{j}\in X_{j}$ such that $(M^{n},\mathsf{d},\operatorname{vol},p_{i,j})\xrightarrow[i]{}(X_{j},\mathsf{d}_{j},\mathfrak{m}_{j},p_{j})\qquad\text{in the pmGH sense for any $j<\overline{N}+1$}.$ Let us deal with the case $\overline{N}=+\infty$ first.
Recalling for example from (4.17) that $P(\Omega_{i}^{d}\cap B_{T_{i,j}}(p_{i,j}))$ is uniformly bounded with respect to $i$ for any $j<\overline{N}$, we can directly apply item (a) of 2.18 to get the convergence of $\Omega_{i}^{d}\cap B_{T_{i,j}}(p_{i,j})$ to some $Z_{j}\subset X_{j}$ in the $L^{1}$-strong sense for any $j<\overline{N}$. Moreover, again from item (a) of 2.18, we get that $\liminf_{i}P(\Omega_{i}^{d}\cap B_{T_{i,j}}(p_{i,j}))\geq P_{X_{j}}(Z_{j})$ for every $j<\overline{N}$. We now check that $Z_{j}$ is isoperimetric for its own volume $\mathfrak{m}_{j}(Z_{j})$ in $X_{j}$ for every $j<\overline{N}$, and that $\begin{split}\lim_{i}P(\Omega_{i}^{d}\cap B_{T_{i,j}}(p_{i,j}))=P_{X_{j}}(Z_{j}),\end{split}$ (4.20) for every $j<\overline{N}$. Since $M^{n}$ is noncollapsed, by 2.10 one has that, for some $v_{0}>0$, $\mathfrak{m}_{j}(B_{1}(x))\geq v_{0}>0$ for any $j$ and $x\in X_{j}$ (see the argument at the beginning of the proof of 3.2). So, by Lemma 2.19, if by contradiction for some $j<\overline{N}$ either $Z_{j}$ is not isoperimetric or $\limsup_{i}P(\Omega_{i}^{d}\cap B_{T_{i,j}}(p_{i,j}))>P_{X_{j}}(Z_{j})$, then there exists a bounded finite perimeter set $W_{j}\subset X_{j}$ such that $\mathfrak{m}_{j}(W_{j})=\mathfrak{m}_{j}(Z_{j})$ and, possibly passing to subsequences in $i$, $\lim_{i}P(\Omega_{i}^{d}\cap B_{T_{i,j}}(p_{i,j}))\geq P_{X_{j}}(W_{j})+\eta,$ (4.21) for some $\eta>0$. By [58, Theorem 2] it is known that $I$ is continuous, and thus there is $\varepsilon_{0}>0$ such that $|I(V)-I(V-\varepsilon)|<\frac{\eta}{2},$ (4.22) whenever $|\varepsilon|<\varepsilon_{0}$.
Now by item (c) in 2.18, up to subsequence, there exists a sequence of sets $E_{i,j}$ contained in $B_{L}(p_{i,j})$ for some $L>0$ such that $E_{i,j}$ converges in $L^{1}$-strong to $W_{j}$ and $\lim_{i}P(E_{i,j})=P_{X_{j}}(W_{j}).$ Moreover by 4.1 we know that $\Omega_{i}^{c}\to\Omega$ with $P(\Omega_{i}^{c})\to P(\Omega)$, and $\Omega$ is an isoperimetric region on $(M^{n},g)$. Hence $\Omega$ is bounded by 4.2. So for large $i$ there is $S>0$ such that $\Omega\Subset B_{S}(o)\Subset B_{r_{i}}(o)$, where $r_{i}$ is the sequence in 4.1, and defining $\widetilde{\Omega}_{i}^{c}\vcentcolon=\Omega_{i}^{c}\cap B_{S}(o)$ we have $\operatorname{vol}(\widetilde{\Omega}_{i}^{c})\to\operatorname{vol}(\Omega),\qquad P(\widetilde{\Omega}_{i}^{c})\to P(\Omega).$ Therefore we can define a new sequence $H_{i}\vcentcolon=\widetilde{\Omega}_{i}^{c}\cup E_{i,j}\cup\bigcup_{\stackrel{{\scriptstyle\ell=1}}{{\ell\neq j}}}^{K}\Omega_{i}^{d}\cap B_{T_{i,\ell}}(p_{i,\ell}),$ where $K>j$ is such that, by taking into account (4.15) and the fact that $E_{i,j}$ converge in $L^{1}$-strong to $W_{j}$ that satisfies $\mathfrak{m}_{j}(W_{j})=\mathfrak{m}_{j}(Z_{j})=\lim_{i}\operatorname{vol}(\Omega_{i}^{d}\cap B_{T_{i,j}}(p_{i,j}))$, we have that $\lim_{i}\left(\operatorname{vol}(\widetilde{\Omega}_{i}^{c})+\operatorname{vol}(E_{i,j})+\sum_{\stackrel{{\scriptstyle\ell=1}}{{\ell\neq j}}}^{K}\operatorname{vol}(\Omega_{i}^{d}\cap B_{T_{i,\ell}}(p_{i,\ell}))\right)=V-\varepsilon,$ for some $\varepsilon\in[0,\varepsilon_{0})$. 
Now since $K$ is finite, the sets whose union defines $H_{i}$ have diverging mutual distance, and thus $\lim_{i}\operatorname{vol}(H_{i})=V-\varepsilon$ and $\begin{split}\lim_{i}P(H_{i})&=P(\Omega)+P_{X_{j}}(W_{j})+\lim_{i}\sum_{\stackrel{{\scriptstyle\ell=1}}{{\ell\neq j}}}^{K}P(\Omega_{i}^{d}\cap B_{T_{i,\ell}}(p_{i,\ell}))\\\ &\leq P(\Omega)+P_{X_{j}}(W_{j})+\lim_{i}\sum_{\stackrel{{\scriptstyle\ell=1}}{{\ell\neq j}}}^{N_{i}}P(\Omega_{i}^{d}\cap B_{T_{i,\ell}}(p_{i,\ell}))\\\ &=P(\Omega)+P_{X_{j}}(W_{j})+\lim_{i}\left(P(\Omega_{i}^{d})-P(\Omega_{i}^{d}\cap B_{T_{i,j}}(p_{i,j}))\right)\\\ &\leq I(V)-\eta,\end{split}$ where in the last two lines we used (4.17), 4.1, and (4.21). On the other hand $\lim_{i}P(H_{i})\geq\liminf_{i}I(V-\varepsilon_{i})$ for some sequence $\varepsilon_{i}\to\varepsilon\in[0,\varepsilon_{0})$. Hence $I(V)-\eta\geq\liminf_{i}I(V-\varepsilon_{i})=I(V-\varepsilon)\geq I(V)-\frac{\eta}{2},$ by continuity of $I$ and the choice of $\varepsilon_{0}$ in (4.22), which yields a contradiction. Hence if $\overline{N}=+\infty$, we have completed the proof of item (iii). Finally, in case $\overline{N}<+\infty$ and for indices $j<\overline{N}$, the proof of (4.5) can be performed in an entirely analogous way, exploiting the continuity of $I$. More precisely, also in this case the absurd hypothesis consists in (4.21) and we can define $W_{j}$, $E_{i,j}$, and $\widetilde{\Omega}_{i}^{c}$ as before. Moreover, as the $\overline{N}$-th generation of points $p_{i,\overline{N}}$ is determined in Step 1 by the application of Item 2 in Lemma 4.3, for any $\bar{\varepsilon}>0$ we find $L^{\prime}>0$ such that the newly defined sequence $\widehat{H}_{i}\vcentcolon=\widetilde{\Omega}_{i}^{c}\cup E_{i,j}\cup\,[\Omega_{i}^{d}\cap B_{L^{\prime}}(p_{i,\overline{N}})]\,\cup\bigcup_{\stackrel{{\scriptstyle\ell=1}}{{\ell\neq j}}}^{\overline{N}-1}\Omega_{i}^{d}\cap B_{T_{i,\ell}}(p_{i,\ell})$ satisfies $\operatorname{vol}(\widehat{H}_{i})\to V-\bar{\varepsilon}$.
Up to choosing a larger finite $L^{\prime}$, the previous calculations can be still carried out, leading to the desired contradiction. It only remains to prove (4.6) and that the resulting $Z_{\overline{N}}$ is an isoperimetric region. Similarly as above, since for example from (4.17) we know that $P(G_{i})$ is uniformly bounded, by item (b) of 2.18, we have that, up to subsequence, $G_{i}$ converges to a finite perimeter set $Z_{\overline{N}}\subset X_{\overline{N}}$ in $L^{1}_{\rm loc}$, that means that for every $r>0$ it occurs that $G_{i}\cap B_{r}(p_{i,\overline{N}})\to Z_{\overline{N}}\cap B_{r}(p_{\overline{N}})$ in $L^{1}$-strong as $i\to+\infty$. Now since $T_{i,\overline{N}}\geq R_{i,\overline{N}}$ and $p_{i,\overline{N}},R_{i,\overline{N}}$ are produced by Item 2 in Lemma 4.3, then for any $\delta>0$ there is $r>0$ such that $\operatorname{vol}(G_{i}\setminus B_{r}(p_{i,\overline{N}}))<\delta$ for any $i$. Hence it is immediate to deduce that $\operatorname{vol}(G_{i})\to\mathfrak{m}_{\overline{N}}(Z_{\overline{N}})$, and thus $G_{i}\to Z_{\overline{N}}$ in $L^{1}$-strong. So now one can argue exactly as we did above for $j<\overline{N}$, and one shows that $Z_{\overline{N}}$ is an isoperimetric region in $X_{\overline{N}}$ and $\lim_{i}P(G_{i})=P_{X_{\overline{N}}}(Z_{\overline{N}})$. * Step 5. We claim that item (iv) holds. 
Indeed we already know from (4.15) (and the last condition in (4.9) when $\overline{N}<+\infty$), and 4.1 that $W=\sum_{j=1}^{\overline{N}}\mathfrak{m}_{j}(Z_{j}),\qquad V=\operatorname{vol}(\Omega)+W.$ Moreover from 4.1, (4.17), and item (iii) we also deduce $I(V)=\lim_{i}\left(P(\Omega_{i}^{c})+P(\Omega_{i}^{d})\right)\geq P(\Omega)+\sum_{j=1}^{\overline{N}}P_{X_{j}}(Z_{j})=I(\operatorname{vol}(\Omega))+\sum_{j=1}^{\overline{N}}I_{X_{j}}(\mathfrak{m}_{j}(Z_{j})).$ On the other hand, we are exactly in the hypotheses for applying (3.8), which yields $I(V)\leq I(\operatorname{vol}(\Omega))+\sum_{j=1}^{\overline{N}}I_{X_{j}}(\mathfrak{m}_{j}(Z_{j})).$ Hence equality holds, and this completes the proof of (4.7). ∎ ## 5 Existence and rigidity The aim of this section is to exploit the previous result about the asymptotic mass decomposition to obtain existence and some properties of isoperimetric regions for Riemannian manifolds with some GH-prescriptions at infinity, completing the proof of 1.3. We also derive some rigidity properties for minimizing (for the perimeter) sequences of finite perimeter sets with fixed volume. ### 5.1 Existence theorems The following result, due to Morgan–Johnson [55, Theorem 3.5], constitutes the key to the existence, since it asserts that a geodesic ball lying on a manifold with $\mathrm{Ric}\geq(n-1)k$ is isoperimetrically more convenient than the ball in the model of curvature $k$ (having the same volume). The centrality of this result in this context was already pointed out in [51]. ###### Theorem 5.1. Let $(M^{n},g)$ be a complete Riemannian manifold such that $\mathrm{Ric}\geq(n-1)k$ on some open set $\Omega\subset M^{n}$, for $k\in\mathbb{R}$. Then $P(B)\leq P_{k}(\mathbb{B}_{k}(\operatorname{vol}(B))),\quad\text{for every geodesic ball $B\subset\Omega$},$ where $\mathbb{B}_{k}(\operatorname{vol}(B))$ is a geodesic ball on the simply connected model of sectional curvature $k$ and dimension $n$ having volume equal to $\operatorname{vol}(B)$.
Moreover, equality holds if and only if $(B,g)$ is isometric to $(\mathbb{B}_{k}(\operatorname{vol}(B)),g_{k})$, where $g_{k}$ is the metric on the simply connected model of sectional curvature $k$ and dimension $n$. We can now state and prove our main existence result. ###### Theorem 5.2. Let $k\in(-\infty,0]$ and let $(M^{n},g)$ be a complete noncompact Riemannian manifold such that $\mathrm{Ric}\geq(n-1)k$ on $M\setminus\mathcal{C}$, where $\mathcal{C}$ is compact. Suppose that $(M^{n},g)$ is GH-asymptotic to the simply connected model of constant sectional curvature $k$ and dimension $n$. Then for any $V>0$ there exists an isoperimetric region of volume $V$ on $(M^{n},g)$. ###### Proof. Since $(M^{n},g)$ is GH-asymptotic to the simply connected model of constant sectional curvature $k$ and dimension $n$, then $(M^{n},g)$ is noncollapsed. Indeed, if there is a sequence of balls $B_{1}(y_{i})$ with $\lim_{i}\operatorname{vol}(B_{1}(y_{i}))=0$, then $y_{i}$ must diverge to infinity, hence by assumption $(M^{n},\mathsf{d},y_{i})$ converges in the pGH- sense to the simply connected model of constant sectional curvature $k$ and dimension $n$, which we denote here by $\mathbb{M}^{n}_{k}$, with its own geodesic distance and volume measure, and pointed at some fixed $o\in\mathbb{M}^{n}_{k}$. Hence item (b) in 2.10 occurs, and thus $n={\rm dim}_{H}\,\mathbb{M}^{n}_{k}\leq n-1$, which is impossible. By Remark 2.3, let $\Omega_{i}\subset M^{n}$ be a minimizing sequence (for the perimeter) of volume $V>0$ such that $\Omega_{i}$ is bounded and smooth for any $i$. Let $\Omega_{i}^{c},\Omega_{i}^{d}$ be as in 4.1. If $\operatorname{vol}(\Omega_{i}^{d})\to 0$, then the set $\Omega$ given by 4.1 is an isoperimetric region of the volume $V$ and the proof ends. So suppose instead that $\lim_{i}\operatorname{vol}(\Omega_{i}^{d})=W>0$. Then we can apply 4.6. We employ the notation of 4.6. 
By assumption and from 2.10, for any $j<\overline{N}+1$ the pmGH limit space $(X_{j},\mathsf{d}_{j},\mathfrak{m}_{j},p_{j})$ is $\mathbb{M}^{n}_{k}$ with its own geodesic distance and volume measure, and pointed at some fixed $o\in\mathbb{M}^{n}_{k}$. Moreover, since in $\mathbb{M}_{k}^{n}$ balls are isoperimetric regions for their own volume, we have that, for any $j<\overline{N}+1$, $P_{k}(Z_{j})\geq P_{k}(\mathbb{B}_{k}(\operatorname{vol}_{k}(Z_{j}))),$ (5.1) where $\mathbb{B}_{k}(\operatorname{vol}_{k}(Z_{j}))$ is a geodesic ball in $\mathbb{M}_{k}^{n}$ having volume equal to $\operatorname{vol}_{k}(Z_{j})$, while $P_{k}$ is the perimeter functional on $\mathbb{M}^{n}_{k}$. Now observe that for any compact set $\mathcal{K}\subset M^{n}$, we have that $\sup\left\\{\operatorname{vol}(B_{r}(p))\ :\ r>0\,\text{ and }\,B_{r}(p)\Subset M\setminus\mathcal{K}\right\\}=+\infty.$ (5.2) Indeed, suppose by contradiction the above supremum is bounded by a constant $S<+\infty$. Take $R>0$ such that $\operatorname{vol}_{k}(B^{\mathbb{M}_{k}^{n}}_{R}(o))>10S$, where $B^{\mathbb{M}_{k}^{n}}_{R}(o)$ is a ball of radius $R$ and center $o$ in $\mathbb{M}_{k}^{n}$. Consider a sequence of balls $B_{R}(x_{i})\subset M^{n}$ of radius $R$ with $\mathsf{d}(x_{i},\mathcal{K})\to+\infty$. Then, up to passing to a subsequence, 2.10 and the absurd hypothesis imply that $S\geq\lim_{i}\operatorname{vol}(B_{R}(x_{i}))>10S,$ that is impossible. Hence (5.2), together with the continuity of the volume with respect to the radius of balls, imply that, for any compact set $\mathcal{K}\subset M^{n}$ and any assigned finite volume $v$, we can find a ball of volume $v$ compactly contained in the end $M\setminus\mathcal{K}$. So let $v_{j}\vcentcolon=\operatorname{vol}_{k}(Z_{j})$ for $j<\overline{N}+1$. Since $\Omega=\lim_{i}\Omega_{i}^{c}$ is bounded by 4.2, there is a first compact set $\mathcal{K}_{1}$ such that $\Omega\cup\mathcal{C}\subset\mathcal{K}_{1}$. 
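Spelled out, the continuity argument behind this last claim is a standard intermediate value step (the radius $r^{*}$ below is our notation; this is a sketch, not part of the original text): given a compact set $\mathcal{K}\subset M^{n}$ and a volume $v>0$, by (5.2) we may pick a ball $B_{R}(p)\Subset M\setminus\mathcal{K}$ with $\operatorname{vol}(B_{R}(p))>v$; then

```latex
r \longmapsto \operatorname{vol}(B_{r}(p))
\ \text{is continuous and nondecreasing on } (0,R],
\qquad
\lim_{r\to 0^{+}} \operatorname{vol}(B_{r}(p)) = 0,
\qquad
\operatorname{vol}(B_{R}(p)) > v,
```

so the intermediate value theorem gives $r^{*}\in(0,R)$ with $\operatorname{vol}(B_{r^{*}}(p))=v$ and $B_{r^{*}}(p)\subset B_{R}(p)\Subset M\setminus\mathcal{K}$.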
Then by (5.2) there is a ball $B_{r_{1}}(q_{1})\Subset M^{n}\setminus\mathcal{K}_{1}$ such that $\operatorname{vol}(B_{r_{1}}(q_{1}))=v_{1}$. Inductively, for any $2\leq j<\overline{N}+1$ we find a compact set $\mathcal{K}_{j}$ such that $\mathcal{K}_{j}\Supset B_{r_{j-1}}(q_{j-1})\cup\mathcal{K}_{j-1},$ and balls $B_{r_{j}}(q_{j})\Subset M^{n}\setminus\mathcal{K}_{j}$ having volume $\operatorname{vol}(B_{r_{j}}(q_{j}))=v_{j}$. Hence the balls $\\{B_{r_{j}}(q_{j})\ :\ j<\overline{N}+1\\}$ are pairwise located at positive distance and we can define the set $\widetilde{\Omega}\vcentcolon=\Omega\cup\bigcup_{j=1}^{\overline{N}}B_{r_{j}}(q_{j}).$ By (4.7) we have that $\operatorname{vol}(\widetilde{\Omega})=\operatorname{vol}(\Omega)+\sum_{j=1}^{\overline{N}}\operatorname{vol}(B_{r_{j}}(q_{j}))=\operatorname{vol}(\Omega)+\sum_{j=1}^{\overline{N}}v_{j}=\operatorname{vol}(\Omega)+W=V.$ Moreover, combining (4.7), (5.1), and 5.1, since all the constructed balls $B_{r_{j}}(q_{j})$ are contained in an open set of $M^{n}$ on which $\mathrm{Ric}\geq(n-1)k$, we obtain $\begin{split}I(V)&=P(\Omega)+\sum_{j=1}^{\overline{N}}P_{k}(Z_{j})\geq P(\Omega)+\sum_{j=1}^{\overline{N}}P_{k}(\mathbb{B}_{k}(\operatorname{vol}_{k}(Z_{j})))\\\ &\geq P(\Omega)+\sum_{j=1}^{\overline{N}}P(B_{r_{j}}(q_{j}))=P(\widetilde{\Omega}).\end{split}$ (5.3) Therefore $\widetilde{\Omega}$ is an isoperimetric region for the volume $V$. ∎ ###### Remark 5.3. We observe that a posteriori $\overline{N}$ is a finite natural number in $\mathbb{N}$ in the proof of 5.2. Indeed, if $\overline{N}=+\infty$, then the countably many constructed balls $B_{r_{j}}(q_{j})$ can be easily taken so that the resulting $\widetilde{\Omega}$ is unbounded. But as $\widetilde{\Omega}$ turns out to be an isoperimetric region, it must be bounded by 4.2. ###### Remark 5.4 (About the GH-asymptoticity hypothesis in 5.2). 
Assuming only that a complete manifold $(M^{n},g)$ has a uniform lower bound on the Ricci tensor and that it is noncollapsed is not sufficient to get the existence of isoperimetric regions (cf. 5.2). In fact, in Example 3.6 we gave some examples of complete noncollapsed manifolds $(M^{n},g)$ that even have asymptotically vanishing sectional curvature (i.e., for any $\varepsilon>0$ there is a compact $\mathcal{C}\subset M^{n}$ such that $|\mathrm{Sect}|<\varepsilon$ on $M\setminus\mathcal{C}$) and on which no isoperimetric regions of any assigned volume exist. Some of these examples were already known in the literature, see the introduction of [59]. On the other hand, the above mentioned examples do not have nonnegative Ricci curvature. It will be a target of future investigations to understand whether complete manifolds $(M^{n},g)$ with $\mathrm{Ric}\geq 0$ and $\mathrm{AVR}(M^{n},g)>0$ have isoperimetric regions. This could also be related to the geometry of the asymptotic cones to $(M^{n},g)$. The existence of isoperimetric regions in nonnegatively Ricci curved manifolds is interestingly linked to the concavity of the isoperimetric profile. This relation relies on the study contained in [13]. As in [51], the concavity of the isoperimetric profile is then related to the indecomposability of the isoperimetric regions as well as to rigidity statements on the behavior of the minimizing sequences, which will be studied in the next section. We conclude this section by mentioning these anticipated results about the concavity of the profile and the indecomposability of isoperimetric sets.
Then the function $I_{(M^{n},g)}^{n/(n-1)}$ is concave, and thus the function $I_{(M^{n},g)}$ is strictly concave. ###### Proof. Let $J$ be an open interval of $\mathbb{R}$. For an arbitrary function $f:J\to\mathbb{R}$ let us denote $\overline{D}^{2}f(x):=\limsup_{u\to 0^{+}}\frac{f(x+u)+f(x-u)-2f(x)}{u^{2}}.$ From [13, Theorem 2.1] it follows that whenever $(M^{n},g)$ is a compact Riemannian manifold with $\mathrm{Ric}\geq 0$ in the sense of quadratic forms, then $\overline{D}^{2}I_{(M^{n},g)}^{n/(n-1)}(x)\leq 0$ for every $x\in(0,\operatorname{vol}(M^{n}))$. As noticed by Bayle in [13, Remark 2.4], the same result holds for an arbitrary complete Riemannian manifold $(M^{n},g)$ with $\mathrm{Ric}\geq 0$ such that for every $V>0$ there exists an isoperimetric region of volume $V$. Indeed, the computations in [13] only rely on the existence of an isoperimetric region for any volume $V>0$ and the regularity result in [66, Proposition 2.4] (see also [54]). It is now an elementary consequence (see [14, Proposition B.2.1]) that, since $I_{(M^{n},g)}$ is continuous and satisfies $\overline{D}^{2}I_{(M^{n},g)}^{n/(n-1)}\leq 0$, then $I_{(M^{n},g)}^{n/(n-1)}$ is concave, and thus $I_{(M^{n},g)}$ is strictly concave. ∎ ###### Proposition 5.6. Let $(M^{n},g)$ be a complete Riemannian manifold such that its isoperimetric profile is a strictly concave function. Then, any nonempty isoperimetric region $\Omega$ is indecomposable, i.e., if $E,F$ are finite perimeter sets such that $\operatorname{vol}(\Omega\Delta(E\cup F))=0$ and $P(\Omega)=P(E)+P(F)$, then $\operatorname{vol}(E)=0$ or $\operatorname{vol}(F)=0$. ###### Proof. It is sufficient to notice that since $I_{(M^{n},g)}$ is strictly concave and $I_{(M^{n},g)}(0)=0$, then $I_{(M^{n},g)}$ is strictly subadditive. Hence one can argue exactly as at the end of [51, Corollary 3.4] to prove the statement. ∎ ###### Corollary 5.7.
Let $(M^{n},g)$ be a complete noncompact Riemannian manifold with $\mathrm{Ric}\geq 0$ and such that it is GH-asymptotic to $\mathbb{R}^{n}$. Then the isoperimetric profile $I_{(M^{n},g)}$ is strictly concave and the isoperimetric regions are indecomposable. ###### Proof. By the reasoning at the beginning of the proof of 5.2 we get that $M^{n}$ is noncollapsed. Hence $I_{(M^{n},g)}$ is continuous, thanks to [58, Theorem 2], since $M^{n}$ satisfies $\mathrm{Ric}\geq 0$ and it is noncollapsed. Then the conclusion is reached by joining together 5.5 and 5.6. ∎ The statement of 1.3 is just the combination of 5.2 and 5.7. Hence, it is proved. ### 5.2 Rigidity properties of the minimizing sequences In the argument leading to the proof of the main isoperimetric existence result (5.2), we essentially showed that whenever a minimizing sequence does not provide an isoperimetric set of the desired volume, then the union of a smaller isoperimetric set with a suitable union of geodesic balls recovers the desired volume while maintaining the optimality in perimeter. This happens when part of the minimizing sequence diverges at infinity. The results below roughly assert that, when this happens, the ambient manifold has, outside every compact set, some open set with constant sectional curvature. We adopt the following terminology. ###### Definition 5.8 (Finite perimeter sets diverging at infinity). Let $(M^{n},g)$ be a complete Riemannian manifold. Let $\Omega_{i}\subset M^{n}$ be a sequence of finite perimeter sets such that $0<c_{1}\leq\operatorname{vol}(\Omega_{i})\leq c_{2}<+\infty$ for any $i$. * • We say that $\Omega_{i}$ _diverges at infinity_ if $\limsup_{i\to+\infty}\operatorname{vol}(\Omega_{i}\cap\mathcal{C})=0$ for any compact set $\mathcal{C}\subset M^{n}$. * • Suppose that $\Omega_{i}\to\Omega$ in the sense of finite perimeter sets.
Then we say that $\Omega_{i}$ _loses mass at infinity_ if $\operatorname{vol}(\Omega)<\liminf_{i\to+\infty}\operatorname{vol}(\Omega_{i})$. We are now ready to prove the results about rigidity of perimeter-minimizing sequences. ###### Theorem 5.9. Let $(M^{n},g)$ be a complete Riemannian manifold satisfying the hypotheses of 5.2, with $k\in(-\infty,0]$. Then, for every $V>0$, every minimizing (for the perimeter) sequence of finite perimeter sets $\\{\Omega_{i}\\}_{i\in\mathbb{N}}$ of volume $V$ * (i) either does not lose mass at infinity, * (ii) or loses mass at infinity and in this case for every compact set $\mathcal{K}\subset M^{n}$ there exists a geodesic ball in $M^{n}\setminus\mathcal{K}$ that has constant sectional curvature $k$. ###### Proof. Let $V,\Omega_{i}$ be as in the statement. Let $\Omega$ be the limit of $\Omega_{i}^{c}$ in $L^{1}_{\rm loc}(M^{n},g)$, where $\Omega_{i}^{c}$ is as in 4.1. Assume that $\\{\Omega_{i}\\}_{i\in\mathbb{N}}$ loses mass at infinity, i.e. $V-\operatorname{vol}(\Omega)>0$, and let $\mathcal{K}$ be a fixed compact set. From the argument in the proof of 5.2, the balls recovering the lost volume $V-\operatorname{vol}(\Omega)$ can be taken outside $\mathcal{K}$. Hence there exists a finite number $\overline{N}$ (cf. Remark 5.3) of balls such that equality holds in (5.3), because there we also obviously have $P(\widetilde{\Omega})\geq I(V)$. Then, employing the notation of the proof of 5.2, we deduce that for every $j<\overline{N}+1$, $P(B_{r_{j}}(q_{j}))=P_{k}(\mathbb{B}_{k}(\operatorname{vol}_{k}(Z_{j})))$. Hence, for every $j<\overline{N}+1$, $(B_{r_{j}}(q_{j}),g)$ is isometric to $(\mathbb{B}_{k}(\operatorname{vol}_{k}(Z_{j})),g_{k})$ by the rigidity part of 5.1. Thus, outside every compact set $\mathcal{K}$ there exists at least one geodesic ball on which the sectional curvature is constantly equal to $k$.
∎ When, in the hypotheses of 5.2, we have $\mathrm{Ric}\geq 0$ everywhere, we can use the strict concavity of the isoperimetric profile to deduce the following stronger rigidity result. ###### Theorem 5.10. Let $(M^{n},g)$ be a complete noncompact Riemannian manifold such that $\mathrm{Ric}\geq 0$ on $M^{n}$. Suppose that $(M^{n},g)$ is GH-asymptotic to $\mathbb{R}^{n}$. Then, for every $V>0$, every minimizing (for the perimeter) sequence of finite perimeter sets $\\{\Omega_{i}\\}_{i\in\mathbb{N}}$ of volume $V$ * (i) either does not lose mass at infinity, * (ii) or diverges at infinity and in this case $\mathrm{Sect}\equiv 0$ on $M^{n}$, and thus $(M^{n},g)$ is isometric to a quotient of $\mathbb{R}^{n}$ with respect to a proper discrete subgroup of the Euclidean isometries. ###### Proof. Let $V,\Omega_{i}$ be as in the statement. Let $\Omega$ be the limit of $\Omega_{i}^{c}$ in $L^{1}_{\rm loc}(M^{n},g)$, where $\Omega_{i}^{c}$ is as in 4.1. Let $\Omega_{i}^{d}=\Omega_{i}\setminus\Omega_{i}^{c}$ be as in 4.1. Since $M^{n}$ is noncollapsed (see the first part of the proof of 5.2) and satisfies $\mathrm{Ric}\geq 0$ everywhere, an application of [58, Theorem 2] tells us that the isoperimetric profile $I_{(M^{n},g)}$ is continuous. Then, since we also have the existence of isoperimetric regions of any volume from 5.2, we can use 5.5 and conclude that $I_{(M^{n},g)}$, in what follows simply denoted by $I$, is continuous and strictly subadditive. Assume that $\\{\Omega_{i}\\}_{i\in\mathbb{N}}$ loses mass at infinity, in other words that $\operatorname{vol}(\Omega)<V$. Then, $W:=\lim_{i}\operatorname{vol}(\Omega^{d}_{i})>0$. We first prove that $\operatorname{vol}(\Omega)=0$.
Indeed, supposing by contradiction that $\operatorname{vol}(\Omega)>0$, we get, using also the identities in 4.1 and the strict subadditivity of the isoperimetric profile, $\begin{split}I(V)&=\lim_{i}\left(P(\Omega_{i}^{c})+P(\Omega_{i}^{d})\right)=P(\Omega)+\lim_{i}P(\Omega_{i}^{d})\\\ &\geq I(\operatorname{vol}(\Omega))+\limsup_{i}I(\operatorname{vol}(\Omega_{i}^{d}))=I(\operatorname{vol}(\Omega))+I(W)\\\ &>I(\operatorname{vol}(\Omega)+W)=I(V),\end{split}$ which is impossible. We can then assume that $W>0$ and $\Omega=\emptyset$, and so, since $\Omega_{i}^{c}=\Omega_{i}\cap B_{r_{i}}(o)$ for some diverging sequence $r_{i}>0$, for any compact set $\mathcal{K}$ we have $\limsup_{i}\operatorname{vol}(\Omega_{i}\cap\mathcal{K})=\limsup_{i}\operatorname{vol}(\Omega_{i}^{c}\cap\mathcal{K})=\operatorname{vol}(\Omega\cap\mathcal{K})=0.$ Hence $\Omega_{i}$ diverges at infinity. Since we also have $\mathrm{Ric}\geq 0$ everywhere, the finite number $\overline{N}$ (see Remark 5.3) of balls $B_{r_{j}}(q_{j})$ with volume $v_{j}$ in the proof of 5.2 can be taken anywhere on the manifold. Thus, for example, for every point $q\in M^{n}$ we can choose $B_{r_{1}}(q_{1})$ in the proof of 5.2 to be some ball $B_{r_{1}}(q)$ with the radius $r_{1}$ chosen in such a way that the volume of $B_{r_{1}}(q)$ is $v_{1}$. Since equality holds in (5.3), this implies that $P(B_{r_{1}}(q))=P_{\rm eu}(\mathbb{B}^{n}(v_{1}))$, and thus the rigidity part of 5.1 implies that $(B_{r_{1}}(q),g)$ is isometric to $(\mathbb{B}^{n}(v_{1}),g_{\rm eu})$, so the sectional curvature is constantly equal to zero on $B_{r_{1}}(q)$. Exploiting the arbitrariness of $q$ we conclude that $\mathrm{Sect}\equiv 0$ on $M^{n}$, and the proof is concluded. ∎ ###### Remark 5.11 (A remark on the proof of 5.10).
It is useful to notice that in the proof of 5.10 we actually proved the following result: if $(M^{n},g)$ is a complete Riemannian manifold such that the isoperimetric profile $I_{(M^{n},g)}$ is continuous and strictly subadditive, then a minimizing (for the perimeter) sequence of finite perimeter sets $\\{\Omega_{i}\\}_{i\in\mathbb{N}}$ of volume $V>0$ either diverges at infinity, or converges to a finite perimeter set $\Omega$ (in the sense of finite perimeter sets) without losing mass at infinity. In the latter case, $\Omega$ is an isoperimetric region of volume $V$ and $P(\Omega_{i})\to P(\Omega)$. Indeed, such a statement follows by repeating the arguments in the proof of 5.10 employing 4.1. In fact, this result is completely analogous to [51, Corollary 3.5], whose proof indeed relies only on the assumptions that the isoperimetric profile is continuous and strictly subadditive. ## 6 Applications and examples In this section we give effective conditions that imply the hypotheses of 5.2. We start by recalling some definitions about convergence of manifolds. The following definition is taken from [62, Section 11.3.2]. ###### Definition 6.1 ($C^{0}$-convergence of manifolds). Given a pointed Riemannian manifold $(M^{n},g,p)$ and a sequence of pointed Riemannian manifolds $\\{(M_{i}^{n},g_{i},p_{i})\\}_{i\in\mathbb{N}}$, we say that $(M_{i}^{n},g_{i},p_{i})$ converge to $(M^{n},g,p)$ in the $C^{0}$-sense if for every $R>0$ there exists a domain $\Omega\subset M^{n}$ containing $B_{R}(p)$ and, for large $i$, embeddings $F_{i}:\Omega\to M_{i}^{n}$ such that $F_{i}(p)=p_{i}$, $F_{i}(\Omega)$ contains $B_{R}(p_{i})$, and the pull-back metrics $F_{i}^{*}g_{i}$ converge to $g$ in the $C^{0}$-sense on $\Omega$, i.e., all the components of the metric tensors converge in the $C^{0}$ norm in a finite covering of coordinate patches on $\Omega$.
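A minimal illustrative example of this notion of convergence (ours, not taken from [62]) is a conformal perturbation of the Euclidean metric that decays in $i$. Take

```latex
M_{i}^{n}=\mathbb{R}^{n},\qquad
g_{i}=\Big(1+\tfrac{1}{i}\,e^{-|x|^{2}}\Big)\,g_{\mathrm{eu}},\qquad
p_{i}=0,\qquad
F_{i}=\mathrm{id}_{\mathbb{R}^{n}};
\qquad
F_{i}^{*}g_{i}-g_{\mathrm{eu}}
=\tfrac{1}{i}\,e^{-|x|^{2}}\,g_{\mathrm{eu}}
\xrightarrow[i]{C^{0}}0,
```

so $(\mathbb{R}^{n},g_{i},0)\to(\mathbb{R}^{n},g_{\mathrm{eu}},0)$ in the $C^{0}$-sense, with $\Omega=\mathbb{R}^{n}$ working for every $R>0$.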
By using the previous notion of convergence, we can define what it means for a Riemannian manifold to be $C^{0}$-asymptotic to the simply connected model $\mathbb{M}^{n}_{k}$ of dimension $n\in\mathbb{N}$ and constant sectional curvature $k\in\mathbb{R}$. The forthcoming notion has been investigated in [51], see in particular [51, Theorem 1.2]. ###### Definition 6.2 ($C^{0}$-local asymptoticity). We say that a Riemannian manifold $(M^{n},g)$ is $C^{0}$-locally asymptotic to the simply connected model $\mathbb{M}^{n}_{k}$ of dimension $n\in\mathbb{N}$ and constant sectional curvature $k\in\mathbb{R}$ if for every diverging sequence of points $p_{i}$ in $M^{n}$ we have that $(M^{n},g,p_{i})$ converge to $(\mathbb{M}^{n}_{k},g_{k},o)$ in the $C^{0}$-sense, where $g_{k}$ is the Riemannian metric on $\mathbb{M}^{n}_{k}$ and $o$ is a fixed origin. ###### Remark 6.3 (GH-asymptoticity and $C^{0}$-local asymptoticity). We remark that our 5.2 implies one of the main theorems in [51], namely [51, Theorem 1.2]. Indeed, the notion of being $C^{0}$-locally asymptotic to the simply connected model $\mathbb{M}^{n}_{k}$ of constant sectional curvature $k\in\mathbb{R}$ and dimension $n\in\mathbb{N}$, see [51, Definition 2.2, Definition 2.4], is readily seen to be stronger than being GH-asymptotic to $\mathbb{M}^{n}_{k}$, cf. [62, Section 11.3.2]. As a consequence, all the examples in [51, Remark 1.1], namely the ALE gravitational instantons, the asymptotically hyperbolic Einstein manifolds, and the Bryant type solitons, satisfy the hypotheses of 5.2. It would be interesting to show an example of a Riemannian manifold that is GH-asymptotic to the model $\mathbb{M}^{n}_{k}$ but that is not $C^{0}$-locally asymptotic to it. As an easy consequence of Lemma A.2 we get a criterion to check that a Riemannian manifold is $C^{0}$-locally asymptotic, and hence GH-asymptotic (see Remark 6.3), to the simply connected model of constant sectional curvature $k\in\mathbb{R}$ and dimension $n\in\mathbb{N}$.
Let us introduce our notions of _sectional curvature asymptotically equal to $k$_ and of _asymptotically diverging injectivity radius_. ###### Definition 6.4 (Sectional curvature asymptotically equal to $k$). Given $k\in(-\infty,0]$, we say that a noncompact Riemannian manifold $(M^{n},g)$ has sectional curvature asymptotically equal to $k$ if there exists $o\in M^{n}$ such that for every $0<\varepsilon<1$ there exists $R_{\varepsilon}>0$ for which $\begin{split}\lvert\mathrm{Sect}_{x}(\pi)-k\rvert\leq\varepsilon&\qquad\text{for all}\,x\in M\setminus\overline{B}_{R_{\varepsilon}}(o),\,\,\text{for all 2-planes $\pi$ in $T_{x}M^{n}$}.\end{split}$ (6.1) If $k=0$ we say that $(M^{n},g)$ has asymptotically vanishing sectional curvature. We stress that, from now on, when we write $\lvert\mathrm{Sect}\rvert\leq c$ everywhere on some set $\Omega$, we mean that $\lvert\mathrm{Sect}_{x}(\pi)\rvert\leq c$ for every $x\in\Omega$ and every 2-plane $\pi$ in $T_{x}M^{n}$. ###### Definition 6.5 (Asymptotically diverging injectivity radius). We say that a noncompact Riemannian manifold $(M^{n},g)$ has asymptotically diverging injectivity radius if there exists $o\in M^{n}$ such that for every $S>1$ there exists $R_{S}>0$ for which $\begin{split}\mathrm{inj}(x)\geq S&\qquad\text{for all}\,\,x\in M\setminus\overline{B}_{R_{S}}(o).\end{split}$ (6.2) In the following statement, which is a consequence of Lemma A.2, we record how the coupling of the two conditions above suffices to infer GH-asymptoticity to space forms. ###### Proposition 6.6. Let $(M^{n},g)$ be a complete noncompact Riemannian manifold with _sectional curvature asymptotically equal to $k$_, for some $k\in(-\infty,0]$, and with _asymptotically diverging injectivity radius_. Then, $(M^{n},g)$ is $C^{0}$-locally asymptotic, and hence GH-asymptotic, to the simply connected model $\mathbb{M}^{n}_{k}$ of dimension $n\in\mathbb{N}$ and constant sectional curvature $k\in(-\infty,0]$. 
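For the reader's convenience, we sketch how Lemma A.2 (proved in Appendix A) yields 6.6 via a sandwich of metrics; the display below is our paraphrase of the standard argument, not a verbatim excerpt of the proof.

```latex
% Sketch of the sandwich argument behind Proposition 6.6.
% Fix R > 0 and a diverging sequence p_i. For i large, |Sect - k| <= eps and
% inj >= S > R hold near p_i, so Lemma A.2, applied in normal coordinates
% centered at p_i, gives, in the sense of quadratic forms on B_R(p_i),
\[
  \mathrm{d}r^{2}+\operatorname{sn}_{k+\varepsilon}(r)^{2}\,g_{1}
  \;\leq\; g
  \;\leq\; \mathrm{d}r^{2}+\operatorname{sn}_{k-\varepsilon}(r)^{2}\,g_{1},
\]
% since Sect <= k + eps implies g >= g_{k+eps}, and Sect >= k - eps implies
% g <= g_{k-eps}. Letting eps -> 0, both bounds converge uniformly on bounded
% r-intervals to dr^2 + sn_k(r)^2 g_1, i.e., to the model metric g_k in normal
% coordinates, which gives the C^0-convergence of (M^n, g, p_i) to
% (M^n_k, g_k, o) required by Definition 6.1.
```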
By means of a classical compactness theorem, we can actually prove more than the $C^{0}$-local convergence above, as explained in the following Remark. ###### Remark 6.7 (A stronger thesis in 6.6). We remark that under the same hypotheses of 6.6 we can prove that, for every $0<\alpha<1$, $(M^{n},g)$ is $C^{1,\alpha}$-asymptotic to the simply connected model of dimension $n\in\mathbb{N}$ and constant sectional curvature $k\in(-\infty,0]$, see [62, Section 11.3.1 and Section 11.3.2] for the main definitions about $C^{m,\alpha}$ convergence, which we take for granted here. Indeed, under the hypotheses of 6.6, it is readily seen that there exist constants $K,R>0$ such that $\lvert\mathrm{Sect}\rvert\leq K$ on $M^{n}$, and $\operatorname{inj}\geq R$ on $M^{n}$. Hence, from the compactness result in [62, Theorem 11.4.7] we get that for every diverging sequence of points $p_{i}\in M^{n}$ we have that $(M^{n},g,p_{i})$ converge in the $C^{1,\alpha}$-pointed topology, up to subsequences, to a Riemannian manifold $(M_{\infty}^{n},g_{\infty},p_{\infty})$. But since the $C^{0}$-limit of $(M^{n},g,p_{i})$ is $\mathbb{M}^{n}_{k}$, we get that every subsequence of $(M^{n},g,p_{i})$ must converge, up to subsequences, to $\mathbb{M}^{n}_{k}$ in the $C^{1,\alpha}$-pointed topology. Hence, the entire sequence $(M^{n},g,p_{i})$ converges in the $C^{1,\alpha}$-pointed topology to $\mathbb{M}^{n}_{k}$. We now link the two notions defined above of asymptotically constant sectional curvature and asymptotically diverging injectivity radius, in order to present some effective and basic conditions on a complete noncompact Riemannian manifold that ultimately imply the existence of isoperimetric regions of any volume. The following is essentially a consequence of a fundamental injectivity radius estimate in [24]. ###### Lemma 6.8. Let $(M^{n},g)$ be a complete Riemannian manifold with asymptotically vanishing sectional curvature. 
Let us assume there exists a compact set $\mathcal{C}\subset M^{n}$, a real number $\alpha<1$, and a constant $C>0$ such that $\mathrm{vol}{(B_{r}(p))}\geq Cr^{n-\alpha},$ (6.3) for any ball $B_{r}(p)\subset M^{n}\setminus\mathcal{C}$ with $r>1$. Then $(M^{n},g)$ has asymptotically diverging injectivity radius. ###### Proof. Let $0<\varepsilon<1/100$, and fix $o\in M^{n}$. By the asymptotic vanishing of the sectional curvature, there exists a radius $R_{\varepsilon}$ such that $\lvert\mathrm{Sect}\rvert<\varepsilon$ on $M^{n}\setminus\overline{B}_{R_{\varepsilon}}(o)$ and $\mathcal{C}\subset B_{R_{\varepsilon}}(o)$. Let then $p\in M^{n}\setminus\overline{B}_{R_{\varepsilon}+\pi/\sqrt{\varepsilon}}(o)$, and observe that $B_{\pi/\sqrt{\varepsilon}}(p)\Subset(M^{n}\setminus\overline{B}_{R_{\varepsilon}}(o))$. Assume that $\mathrm{inj}(p)<\pi/\sqrt{\varepsilon}$. Then, there exists $q\in\mathrm{Cut}(p)$ such that $d(p,q)=\mathrm{inj}(p)$; in particular, $q$ still belongs to $M^{n}\setminus\overline{B}_{R_{\varepsilon}}(o)$. Hence, by [18, Proposition 2.12, Chapter 13], either there exists a geodesic $\gamma$ joining $p$ and $q$ such that $q$ is conjugate to $p$ along $\gamma$, or there exists a geodesic loop $\sigma$ based at $p$ passing through $q$ with length equal to $2\operatorname{inj}(p)$, so that in particular $\sigma$ is still contained in $M^{n}\setminus\overline{B}_{R_{\varepsilon}}(o)$. In the first case, a straightforward application of Rauch’s Comparison Theorem [18, Proposition 2.4, Chapter 10] implies that $d(p,q)\geq\pi/\sqrt{\varepsilon}$, a contradiction with the assumption above. 
In the second case, we have $2\,\mathrm{inj}(p)=\ell:=\mathrm{length}\,(\sigma)$, and we can thus estimate $\mathrm{inj}(p)$ in terms of volumes of balls by means of [24, Theorem 4.3], which yields $\mathrm{inj}(p)\geq\frac{\pi}{8\sqrt{\varepsilon}}\left(1+\frac{v(n,-\varepsilon,\frac{7\pi}{16\sqrt{\varepsilon}})}{\mathrm{vol}\left(B_{\frac{3\pi}{16\sqrt{\varepsilon}}}\left(p\right)\right)}\right)^{-1}\geq\frac{\pi}{8\sqrt{\varepsilon}}\left(1+\frac{v(n,-\varepsilon,\pi/\sqrt{\varepsilon})}{\mathrm{vol}\left(B_{\frac{3\pi}{16\sqrt{\varepsilon}}}\left(p\right)\right)}\right)^{-1},$ (6.4) where we applied [24, Theorem 4.3] with $r=\pi/\sqrt{\varepsilon}$, $r_{0}=r/4$, and $s=\tfrac{3}{16}r$ in the notation therein. We have, for a dimensional constant $C(n)$, the following equality $v(n,-\varepsilon,\pi/\sqrt{\varepsilon})=\int\limits_{0}^{\pi/\sqrt{\varepsilon}}\left(\frac{\sinh(s\sqrt{\varepsilon})}{\sqrt{\varepsilon}}\right)^{n-1}\,\mathrm{d}s=\frac{1}{(\sqrt{\varepsilon})^{n}}\int\limits_{0}^{\pi}\sinh(t)^{n-1}\,\mathrm{d}t=C(n)\frac{1}{(\sqrt{\varepsilon})^{n}}.$ (6.5) Plugging (6.3) and (6.5) into (6.4) then yields $\mathrm{inj}(p)\geq\frac{\pi}{8\sqrt{\varepsilon}}\left(1+\overline{C}\varepsilon^{-\frac{\alpha}{2}}\right)^{-1},$ where $\overline{C}=\overline{C}(C(n),C)$. All in all, we proved $\mathrm{inj}(p)\geq\mathrm{min}\left\\{\frac{\pi}{\sqrt{\varepsilon}},\frac{\pi}{8\sqrt{\varepsilon}}\left(1+\overline{C}\varepsilon^{-\frac{\alpha}{2}}\right)^{-1}\right\\}.$ Since $\alpha<1$, the right-hand side of the previous inequality behaves like $\frac{\pi}{8\overline{C}}\varepsilon^{\frac{\alpha-1}{2}}$ and thus diverges as $\varepsilon\to 0$, implying that $(M^{n},g)$ has asymptotically diverging injectivity radius. ∎ ###### Remark 6.9 (A weakened hypothesis in Lemma 6.8). 
From the proof of Lemma 6.8 it is readily noticed that the condition (6.3) can be slightly weakened into the following $\liminf_{r\to+\infty}\liminf_{d(o,p)\to+\infty}\frac{\operatorname{vol}(B_{r}(p))}{r^{n-\alpha}}>0,$ (6.6) where $o\in M^{n}$ is some fixed origin. As a consequence of 6.6, Lemma 6.8, and 5.2 we get the following isoperimetric existence result under curvature and volume conditions. ###### Corollary 6.10. Let $(M^{n},g)$ be a complete Riemannian manifold with asymptotically vanishing sectional curvature. Moreover, assume that there exists a compact set $\mathcal{C}$ such that $\mathrm{Ric}\geq 0$ on $M\setminus\mathcal{C}$, and that there exist $\alpha<1$ and $C>0$ such that $\operatorname{vol}(B_{r}(p))\geq Cr^{n-\alpha}$ for any ball $B_{r}(p)\Subset M\setminus\mathcal{C}$ with $r>1$. Then, for every $V>0$ there exists an isoperimetric region of volume $V$. ###### Remark 6.11 (A weakened hypothesis in 6.10). As already discussed in Remark 6.9, we can slightly weaken the hypothesis on the bound of the volumes in the previous 6.10 with (6.6). The volume condition in the statement of 6.10 is automatically satisfied on manifolds $(M^{n},g)$ with nonnegative Ricci curvature, asymptotically vanishing sectional curvature, and Euclidean volume growth, that is, $\mathrm{AVR}(M^{n},g)>0$. Indeed, in such a case, by Bishop–Gromov we have $\mathrm{AVR}(M^{n},g)\omega_{n}r^{n}\leq\mathrm{vol}(B_{r}(p))\leq\omega_{n}r^{n}$ for any $p\in M^{n}$ and any $r>0$. We observe also that, since such a condition on the volume of balls needs to hold only outside some compact set, we actually get the existence of isoperimetric regions of any volume on any compact perturbation of a complete Riemannian manifold with asymptotically vanishing sectional curvature, $\mathrm{Ric}\geq 0$, and Euclidean volume growth. ###### Corollary 6.12. 
Let $(M^{n},\tilde{g})$ be a complete Riemannian manifold with nonnegative Ricci curvature, asymptotically vanishing sectional curvature, and Euclidean volume growth. Let $(M^{n},g)$ be a compact perturbation of $(M^{n},\tilde{g})$, that is, there exists a compact set $\mathcal{C}$ such that $\tilde{g}=g$ on $M^{n}\setminus\mathcal{C}$. Then, for every $V>0$ there exists an isoperimetric region of volume $V$. It is interesting to observe that 6.12 applies to Perelman’s example constructed in [61] (see also [21, Section 8]), that is a complete Riemannian manifold $(M^{n},g)$ with nonnegative Ricci curvature, Euclidean volume growth, asymptotically vanishing sectional curvature (it satisfies a quadratic decay), and admitting non-isometric asymptotic cones. We recall that an asymptotic cone to a manifold $(M^{n},g)$ at some $x\in M^{n}$ is the pGH limit of the sequence of metric spaces $(M^{n},r_{i}^{-1}\mathsf{d},x)$ for some diverging sequence $r_{i}\to+\infty$. In particular, we deduce that the non-uniqueness of asymptotic cones is not an obstruction to the existence of isoperimetric regions, even in the case of $\mathrm{Ric}\geq 0$ and Euclidean volume growth. ###### Remark 6.13 (Asymptotically Euclidean and conical manifolds). A direct application of 6.12 implies that every compact perturbation of an ALE manifold with $\mathrm{Ric}\geq 0$, see e.g. [1, Definition 4.13], has isoperimetric regions for any volume. Indeed, it is immediately checked that an ALE manifold with $\mathrm{Ric}\geq 0$ has Euclidean volume growth and asymptotically vanishing sectional curvature. An application of 6.12 implies also that every compact perturbation of a $C^{2}$-asymptotically conical manifold (in the sense of [26]) with $\mathrm{Ric}\geq 0$ has isoperimetric regions for every volume. Indeed, every $C^{2}$-asymptotically conical manifold has asymptotically vanishing sectional curvature and Euclidean volume growth. 
We remark that in [26, Theorem 3] the authors prove that a $C^{1,\alpha}$-asymptotically conical manifold (without further bounds on $\mathrm{Ric}$) has isoperimetric regions for large volumes, and they describe the structure of isoperimetric regions with large volumes for $C^{2,\alpha}$-asymptotically conical manifolds. Our results allow us to generalize the applications on asymptotically conical manifolds considered in Remark 6.13 to manifolds which are suitably asymptotic to warped products with $\mathrm{Ric}\geq 0$ and asymptotically vanishing sectional curvature. We discuss this observation in the next remark. ###### Remark 6.14 (Warped products with $\mathrm{Ric}\geq 0$ and asymptotically vanishing sectional curvature). Let $(W,\widetilde{g})$ be an arbitrary warped product defined by $W:=(0,+\infty)\times L,\qquad\widetilde{g}:=\,\mathrm{d}r^{2}+f(r)^{2}g_{L},$ where $(L,g_{L})$ is a compact Riemannian manifold and $f:(0,+\infty)\to(0,+\infty)$ is a smooth function. Let $(M^{n},g)$ be a complete Riemannian manifold for which there exists a compact set $\mathcal{K}\subset M^{n}$ such that $(M^{n}\setminus\mathcal{K},g)$ is isometric to $((a,+\infty)\times L,\widetilde{g})$ for some $a\geq 0$. We want to show here that if $\lim_{r\to+\infty}f(r)=+\infty,\qquad\text{and $W$ has asymptotically vanishing sectional curvature},$ (6.7) then $(M^{n},g)$ has asymptotically diverging injectivity radius. Observe that, by a direct computation of the Riemann tensor, the asymptotic vanishing of the sectional curvature is ensured every time $f^{\prime}=o(f)$ and $f^{\prime\prime}=o(f)$ as $r\to+\infty$. Let us now prove the former claim, namely that (6.7) implies that the injectivity radius is asymptotically diverging. 
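As an aside, we record the classical sectional-curvature formulas for the warped metric $\widetilde{g}=\mathrm{d}r^{2}+f(r)^{2}g_{L}$ behind the direct computation mentioned above; these are standard facts, stated here only as an aid to the reader.

```latex
% Sectional curvatures of the warped product dr^2 + f(r)^2 g_L,
% for g_L-orthonormal vectors X, Y tangent to the fiber L:
\[
  \mathrm{Sect}(\partial_{r}\wedge X)=-\frac{f''(r)}{f(r)},
  \qquad
  \mathrm{Sect}(X\wedge Y)=\frac{\mathrm{Sect}^{L}(X\wedge Y)-f'(r)^{2}}{f(r)^{2}}.
\]
% If f' = o(f) and f'' = o(f) as r -> +infty, the radial curvature tends to 0;
% since L is compact, Sect^L is bounded, so the tangential curvature tends to 0
% as well once f(r) -> +infty, as required in (6.7).
```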
Indeed, since the sectional curvature is asymptotically vanishing, arguing as in the proof of Lemma 6.8, for a given $\varepsilon\in(0,1)$ it suffices to estimate from below the length $\ell$ of a geodesic loop based at $p\in M\setminus\mathcal{K}_{\varepsilon}$ by $\ell\geq C/\varepsilon$ for a constant $C$ independent of $\varepsilon$, and for some compact set $\mathcal{K}_{\varepsilon}\supset\mathcal{K}$. We identify $M\setminus\mathcal{K}$ with $((a,+\infty)\times L,\widetilde{g})$. Take $\widetilde{\mathcal{K}}_{\varepsilon}$ such that $|\mathrm{Sect}|<\varepsilon$ on $M\setminus\widetilde{\mathcal{K}}_{\varepsilon}$. As in Lemma 6.8, we can consider $\mathcal{K}_{\varepsilon}\supset\widetilde{\mathcal{K}}_{\varepsilon}$ such that for every $p\in M\setminus\mathcal{K}_{\varepsilon}$ we have $\mathsf{d}(p,\widetilde{\mathcal{K}}_{\varepsilon})>\pi/\sqrt{\varepsilon}$. Also, without loss of generality, up to possibly enlarging $\mathcal{K}_{\varepsilon}$, it suffices to estimate a geodesic loop $\gamma$ based at $p$ with $\gamma:[0,1]\to((a_{\varepsilon},+\infty)\times L,\widetilde{g})$, where $a_{\varepsilon}$ is such that $f(r)\geq 1/\varepsilon$ for $r\geq a_{\varepsilon}$. We have $\gamma=(\gamma_{1},\gamma_{2})$, and $\gamma_{2}^{\prime}(0)\neq 0$, for otherwise $\gamma$ would be tangent to $(a_{\varepsilon},+\infty)$ and $\gamma$ would not be closed. Then $\gamma_{2}$ is a nonconstant continuous curve in $L$. For $S\subset L$, it can be shown by a direct computation of the second fundamental form of the isometric embedding $(a^{\prime},+\infty)\times S\hookrightarrow((a^{\prime},+\infty)\times L,\widetilde{g})$ that $S$ is a totally geodesic submanifold of $(L,g_{L})$ if and only if so is $(a^{\prime},+\infty)\times S$ in $((a^{\prime},+\infty)\times L,\widetilde{g})$, for any $a^{\prime}$. This implies that $\gamma_{2}$ is a geodesic loop in $(L,g_{L})$, up to reparametrization. 
Hence the length of $\gamma$ is estimated from below by $\begin{split}\ell&=\int_{0}^{1}\left(|\gamma_{1}^{\prime}(t)|^{2}+f^{2}(\gamma_{1}(t))g_{L}(\gamma_{2}^{\prime}(t),\gamma_{2}^{\prime}(t))\right)^{1/2}\,\mathrm{d}t\\\ &\geq\frac{\sqrt{2}}{2}\left(\int_{0}^{1}|\gamma_{1}^{\prime}(t)|\,\mathrm{d}t+L(\gamma_{2})\min_{[0,1]}f(\gamma_{1}(t))\right)\\\ &\geq\frac{\sqrt{2}}{2}\left(\int_{0}^{1}|\gamma_{1}^{\prime}(t)|\,\mathrm{d}t+{\rm syst}(L)\,\min_{[0,1]}f(\gamma_{1}(t))\right),\end{split}$ where ${\rm syst}(L)>0$ denotes the systole of $(L,g_{L})$, that is, the length of the shortest geodesic loop in $(L,g_{L})$. By construction, the above estimate implies that $\ell\geq\frac{\sqrt{2}}{2}{\rm syst}(L)\frac{1}{\varepsilon}.$ Hence we conclude that the injectivity radius is asymptotically diverging. Therefore, if, in addition to (6.7), we have that $\mathrm{Ric}\geq 0$ outside a compact set of $M^{n}$, then 6.6 applies, $(M^{n},g)$ is GH-asymptotic to the Euclidean space $\mathbb{R}^{n}$, and by 5.2 there exist isoperimetric regions of any volume. It is clear that the very same conclusion holds true if we assumed that $(M^{n},g)$ satisfies $\mathrm{Ric}\geq 0$ outside a compact set and that $M^{n}$ is just $C^{2}$-asymptotic to $(W,\widetilde{g})$. We observe that, for example, the Bryant type solitons mentioned in Remark 6.3 fit in this setting of warped products. ## Appendix A Comparison results in Riemannian Geometry We write down a complete statement of a rather classical comparison result in Geometric Analysis, i.e., the Bishop–Gromov–Günther volume and area comparison under Ricci curvature lower bounds and sectional curvature upper bounds. The conclusions (A.1), (A.2), (A.4), (A.5), and the rigidity part of A.1 are consequences, e.g., of [35, Theorem 3.101], [67, Theorem 1.2 and Theorem 1.3], and the arguments within their proofs. 
We stress that in the case of a Ricci lower bound, the balls are not required to stay within the cut-locus as first realized by Gromov, see, e.g., [27, Theorem 1.132]. Finally, the conclusion (A.3) follows from [63, Corollary 2.22, item (i)] and the coarea formula, while (A.6) follows verbatim from the proof of [63, Corollary 2.22, item (i)] by using (A.4) and concluding again with the coarea formula. We also stress that in the forthcoming A.1 we do not actually assume the curvature bounds on all $M^{n}$, but just on an open subset $\Omega\subset M^{n}$. Consequently, the conclusions hold for balls contained inside $\Omega$: indeed, the proofs of the classical geometric comparison theorems leading to A.1 can be localized, see, e.g., [63, Remark 2.6]. For a comparison result assuming more general Ricci lower bounds, we also refer the reader to [63, Theorem 2.14]. ###### Theorem A.1 (Volume and perimeter comparison). Let $(M^{n},g)$ be a complete Riemannian manifold, and let $\Omega\subset M^{n}$ be an open subset such that $\mathrm{Ric}\geq(n-1)k$ on $\Omega$ in the sense of quadratic forms for some $k\in\mathbb{R}$. Let us set $T_{k}:=+\infty$ if $k\leq 0$, and $T_{k}:=\pi/\sqrt{k}$ if $k>0$. 
Then, for every $p\in\Omega$ and for $r\leq T_{k}$ such that $B_{r}(p)\Subset\Omega$ the following hold $\displaystyle\frac{\operatorname{vol}(B_{r}(p))}{v(n,k,r)}\to 1\,\text{as $r\to 0$ and it is nonincreasing},$ (A.1) $\displaystyle\frac{P(B_{r}(p))}{s(n,k,r)}\to 1\,\text{as $r\to 0$ and it is almost everywhere nonincreasing},$ (A.2) $\displaystyle\frac{P(B_{r}(p))}{s(n,k,r)}\leq\frac{\operatorname{vol}(B_{r}(p))}{v(n,k,r)}\,\text{almost everywhere}.$ (A.3) Moreover, if one has $\operatorname{vol}(B_{\overline{r}}(p))=v(n,k,\overline{r})$ for some $\overline{r}\leq T_{k}$ such that $B_{\overline{r}}(p)\Subset\Omega$, then $B_{\overline{r}}(p)$ is isometric to the ball of radius $\overline{r}$ in the simply connected model of constant sectional curvature $k$ and dimension $n$. Conversely, let $(M^{n},g)$ be a complete Riemannian manifold, and let $\Omega\subset M^{n}$ be an open subset such that $\mathrm{Sect}\leq k$ on $\Omega$, for some $k\in\mathbb{R}$. Then, for every $p\in\Omega$ and for $r\leq\min\\{T_{k},\operatorname{inj}(p)\\}$ such that $B_{r}(p)\Subset\Omega$ the following hold $\displaystyle\frac{\operatorname{vol}(B_{r}(p))}{v(n,k,r)}\to 1\,\text{as $r\to 0$ and it is nondecreasing},$ (A.4) $\displaystyle\frac{P(B_{r}(p))}{s(n,k,r)}\to 1\,\text{as $r\to 0$ and it is nondecreasing},$ (A.5) $\displaystyle\frac{P(B_{r}(p))}{s(n,k,r)}\geq\frac{\operatorname{vol}(B_{r}(p))}{v(n,k,r)}.$ (A.6) Moreover, if one has $\operatorname{vol}(B_{\overline{r}}(p))=v(n,k,\overline{r})$ for some $\overline{r}\leq\min\\{T_{k},\operatorname{inj}(p)\\}$ such that $B_{\overline{r}}(p)\Subset\Omega$, then $B_{\overline{r}}(p)$ is isometric to the ball of radius $\overline{r}$ in the simply connected model of constant sectional curvature $k$ and dimension $n$. In the case of sectional curvature bounds, the above result can be strengthened to a comparison between metric tensors. 
Again, this is well known, but since we were not able to find a complete proof in the literature, we provide one. ###### Lemma A.2 (Comparison of metrics). Let $(M^{n},g)$ be a complete Riemannian manifold and fix $p\in M^{n}$. For every $k\in\mathbb{R}$, let $T_{k}$ be as in A.1. Denote by $r$ the distance from $p$, let $R=\operatorname{inj}(p)$, and let $\\{x^{i}\\}_{i=1}^{n}$ be geodesic normal coordinates at the point $p$. Through the latter coordinates, let us identify $B_{R}(p)$ with the Euclidean ball $\mathbb{B}^{n}_{R}$, and let us denote by $g_{1}$ the canonical metric on $\mathbb{S}^{n-1}$. Then the following statements hold true * (i) if $\mathrm{Sect}(\nabla r\wedge X)\leq k$ for any $X\perp\nabla r$ with $g(X,X)=1$, then the inequality $g\geq g_{k}:=\mathrm{d}r^{2}+\operatorname{sn}_{k}(r)^{2}g_{1}$ holds in the sense of quadratic forms on $\mathbb{B}^{n}_{\rho}$, where $\rho:=\min\\{R,T_{k}\\}$, * (ii) if $\mathrm{Sect}(\nabla r\wedge X)\geq k$ for any $X\perp\nabla r$ with $g(X,X)=1$, then the inequality $g\leq g_{k}:=\mathrm{d}r^{2}+\operatorname{sn}_{k}(r)^{2}g_{1}$ holds in the sense of quadratic forms on $\mathbb{B}^{n}_{\rho}$, where $\rho:=\min\\{R,T_{k}\\}$. ###### Proof. We prove the first item, the second one being completely analogous. On $\mathbb{B}^{n}_{\rho}\cap\\{x_{1}\neq 0\\}$ we consider the polar frame given by $\partial_{r}:=(x^{i}/r)\partial_{i},v_{2},\ldots,v_{n}$, where $r=\sqrt{\sum_{j}(x^{j})^{2}}$ and $v_{j}\vcentcolon=x^{1}\partial_{j}-x^{j}\partial_{1}$ for $j=2,\ldots,n$. We also consider on $\mathbb{B}^{n}_{\rho}\setminus S\simeq(0,\rho)\times\mathbb{S}^{n-1}\setminus\Sigma$ polar coordinates $(r,\theta)$, which are defined out of a negligible set $S$ which is the cone over some $\Sigma\subset\mathbb{S}^{n-1}$ (that is negligible in $\mathbb{S}^{n-1}$) in $\mathbb{R}^{n}$. 
When considering polar coordinates, we shall write that the following computations hold on $\mathbb{B}^{n}_{\rho}$, meaning that they are well defined on $\mathbb{B}^{n}_{\rho}\setminus S$ and the resulting conclusions extend to $S$ by continuity. So, since $\mathrm{d}r(v_{i})=g(\nabla r,v_{i})=0$ for all $i=2,\dots,n$ (see the proof of [62, Lemma 5.5.5]) in the latter frame we can rewrite the metric as $g=\mathrm{d}r^{2}+h_{ij}(r,\theta)w^{i}\otimes w^{j}$, where $\\{w^{i}\\}$ is the dual coframe corresponding to $\\{v_{i}\\}$. Moreover, since $\mathrm{d}r(v_{i})=g(\nabla r,v_{i})=0$ for any $i=2,\ldots,n$, we can also rewrite $g_{k}=\mathrm{d}r^{2}+\operatorname{sn}_{k}(r)^{2}\overline{h}_{ij}(\theta)w^{i}\otimes w^{j}$. We remark that $\overline{h}_{ij}=g_{1}(v_{i},v_{j})$ does not depend on $r$ as each $v_{i}$ is a tangent vector along $\mathbb{S}^{n-1}$ independent of $r$: this is readily checked by writing $v_{i}$ in polar coordinates. We claim that $h_{ij}(r,\theta)w^{i}\otimes w^{j}\geq\operatorname{sn}_{k}(r)^{2}\overline{h}_{ij}(\theta)w^{i}\otimes w^{j}$ in the sense of quadratic forms on $\mathbb{B}^{n}_{\rho}\cap\\{x_{1}\neq 0\\}$, which yields the lemma on $\mathbb{B}^{n}_{\rho}\cap\\{x_{1}\neq 0\\}$. Running the analogous argument with each $x^{k}$ in place of $x^{1}$, the assertion follows on $\mathbb{B}^{n}_{\rho}\setminus\\{0\\}$, and then on $\mathbb{B}^{n}_{\rho}$ by continuity. One can easily check that $[\partial_{r},v_{i}]=0$ for any $i=2,\ldots,n$, and $\partial_{r}=\nabla r$ on $\mathbb{B}^{n}_{\rho}\setminus\\{0\\}$, see the proof of [62, Lemma 5.5.5]. Thus we have $\partial_{r}(g(v_{i},v_{j}))=g(\nabla_{\partial_{r}}v_{i},v_{j})+g(\nabla_{\partial_{r}}v_{j},v_{i})=g(\nabla_{v_{i}}\nabla r,v_{j})+g(\nabla_{v_{j}}\nabla r,v_{i})=2\nabla^{2}r(v_{i},v_{j}),$ for any $i,j=2,\ldots,n$. 
Hence, as $\mathrm{d}r(v_{i})=0$ for any $i=2,\ldots,n$, the Hessian comparison theorem [62, Theorem 6.4.3] implies $\partial_{r}h_{ij}=\partial_{r}(g(v_{i},v_{j}))\geq 2\frac{\operatorname{sn}_{k}^{\prime}(r)}{\operatorname{sn}_{k}(r)}g(v_{i},v_{j})=2\frac{\operatorname{sn}_{k}^{\prime}(r)}{\operatorname{sn}_{k}(r)}h_{ij},$ in the sense of quadratic forms on $\mathbb{B}^{n}_{\rho}\cap\\{x_{1}\neq 0\\}$, so that $\partial_{r}\left(\frac{h_{ij}}{\operatorname{sn}_{k}(r)^{2}}\right)\geq 0$ in the sense of quadratic forms on $\mathbb{B}^{n}_{\rho}\cap\\{x_{1}\neq 0\\}$. For every $\theta$ and every $i,j=2,\dots,n$ we have $\lim_{r\to 0^{+}}\frac{g(v_{i},v_{j})}{\operatorname{sn}_{k}(r)^{2}}=\lim_{r\to 0^{+}}\frac{h_{ij}(r,\theta)}{\operatorname{sn}_{k}(r)^{2}}=\overline{h}_{ij}(\theta).$ Indeed, for $\theta$ fixed, $g(v_{i},v_{j})=r^{2}\overline{h}_{ij}(\theta)(1+O(r^{2}))$ as $r\to 0^{+}$, since in normal coordinates $g_{ij}=\delta_{ij}(1+O(r^{2}))$ as $r\to 0^{+}$. Hence classical ODE comparison implies that $h_{ij}(r,\theta)=g(v_{i},v_{j})\geq\operatorname{sn}_{k}(r)^{2}\overline{h}_{ij}(\theta)$ in the sense of quadratic forms, thus giving the conclusion. ∎ ## Appendix B Boundedness of isoperimetric regions In this part we prove that having at our disposal a Euclidean-like isoperimetric inequality for merely _small_ volumes suffices to imply that isoperimetric regions on a complete Riemannian manifold are bounded. This is a technical fact that we will employ several times. The proof is based on a rather classical argument already appearing in [66, Proposition 3.7] and in [53, Lemma 13.6] in the Euclidean setting, and in [59, Theorem 3] on Riemannian manifolds. However, we present here a rather self-contained proof for the convenience of the reader, pointing out that the weak assumption of a Euclidean-like isoperimetric inequality for small volumes is sufficient for the assertion. ###### Theorem B.1. Let $(M^{n},g)$ be a complete Riemannian manifold. 
Assume that there is $v_{0}>0$ such that the isoperimetric inequality $c_{0}\operatorname{vol}(\Omega)^{(n-1)/n}\leq P(\Omega),$ holds true with some $c_{0}>0$ for any finite perimeter set $\Omega\subset M^{n}$ with $\operatorname{vol}(\Omega)<v_{0}$. Then the isoperimetric regions of $(M^{n},g)$ are bounded. ###### Proof. Let $E$ be an isoperimetric region and fix a point $p_{0}\in M^{n}$. Let, for every $r>0$, $V(r)\vcentcolon=\operatorname{vol}(E\setminus B_{r}(p_{0})),\qquad A(r)\vcentcolon=P(E,M\setminus B_{r}(p_{0})).$ By hypothesis there exists $r_{0}>0$ such that for any $r\geq r_{0}$ the volume $V(r)$ is sufficiently small to apply the isoperimetric inequality. In particular, for almost every $r\geq r_{0}$ we can write $|V^{\prime}(r)|+A(r)=\mathcal{H}^{n-1}(\partial B_{r}(p_{0})\cap E)+P(E,M\setminus B_{r}(p_{0}))=P(E\setminus B_{r}(p_{0}))\geq c_{0}V(r)^{\frac{n-1}{n}}.$ We want to prove that $A(r)\leq|V^{\prime}(r)|+CV(r),$ (B.1) for some constant $C$, and for almost every $r$ sufficiently large. Combining this with the previous inequality we would get $c_{0}V(r)^{\frac{n-1}{n}}\leq CV(r)+2|V^{\prime}(r)|\leq\frac{c_{0}}{2}V(r)^{\frac{n-1}{n}}-2V^{\prime}(r),$ because $|V^{\prime}(r)|=-V^{\prime}(r)$ and $CV(r)\leq\tfrac{c_{0}}{2}V(r)^{\frac{n-1}{n}}$ for almost every sufficiently large radius. The latter inequality gives $V^{\prime}(r)\leq-\tfrac{c_{0}}{4}V(r)^{\frac{n-1}{n}}$, that is $\left(V(r)^{1/n}\right)^{\prime}\leq-\tfrac{c_{0}}{4n}$ for almost every sufficiently large $r$; hence ODE comparison implies that $V(r)$ vanishes at some $r=\overline{r}<+\infty$, i.e., $E$ is bounded as a set of finite perimeter. So we are left to prove (B.1). Let $R>0$ be fixed such that $P(E,B_{R}(p_{0}))>0$. 
There exist $\varepsilon_{0}=\varepsilon_{0}(R,E)>0$ and $C=C(R,E)>0$ such that for any $\varepsilon\in(-\varepsilon_{0},\varepsilon_{0})$ there is a finite perimeter set $F$ with $F\Delta E\Subset B_{R}(p_{0}),\qquad\operatorname{vol}(F)=\operatorname{vol}(E)+\varepsilon,\qquad P(F,B_{R}(p_{0}))\leq P(E,B_{R}(p_{0}))+C|\varepsilon|.$ (B.2) Indeed, the gradient of the characteristic function $\chi_{E}$ is represented by a measure $\nu|D\chi_{E}|$, where $\nu:M\to TM^{n}$ satisfies $|\nu|=1$ at $|D\chi_{E}|$-a.e. point, and $P(E,\Omega)=\sup\left\\{\int\langle X,\nu\rangle\,\mathrm{d}|D\chi_{E}|\ :\ X\in\mathfrak{X}(\Omega),\operatorname{spt}X\Subset\Omega,|X|\leq 1\right\\}$ for any open set $\Omega$; hence we can take a field $X$ with $|X|\leq 1$ and compact support in $B_{R}(p_{0})$ such that $\int\langle X,\nu\rangle\,\mathrm{d}|D\chi_{E}|\geq\tfrac{1}{2}P(E,B_{R}(p_{0}))>0$. Then, for small $t$, there is a smooth family of diffeomorphisms $\phi_{t}$ such that $\phi_{0}={\rm id}$ and $\partial_{t}\phi_{t}|_{0}=X$. So the sets $F_{t}\vcentcolon=\phi_{t}(E)$ verify the expansions $\begin{split}\operatorname{vol}(F_{t})&=\operatorname{vol}(E)+t\int\langle X,\nu\rangle\,\mathrm{d}|D\chi_{E}|+O(t^{2}),\\\ P(F_{t},B_{R}(p_{0}))&=P(E,B_{R}(p_{0}))+t\int(\operatorname{div}X-\langle\nabla_{\nu}X,\nu\rangle)\,\mathrm{d}|D\chi_{E}|+O(t^{2}).\end{split}$ The above formulas for the variations of volume and perimeter are easily checked to hold on $M^{n}$ by the same computations carried out in the Euclidean space in [47, Theorem 17.5 & Proposition 17.8]. Since $\int\langle X,\nu\rangle\,\mathrm{d}|D\chi_{E}|>0$ and $\left|\int(\operatorname{div}X-\langle\nabla_{\nu}X,\nu\rangle)\,\mathrm{d}|D\chi_{E}|\right|\leq C(R,E)$, (B.2) follows by taking $\varepsilon_{0}(R,E)$ sufficiently small and then $F=F_{t_{\varepsilon}}$ for a suitable $t_{\varepsilon}$. Now consider $r>R$ such that $V(r)<\varepsilon_{0}$, and set $\varepsilon=V(r)$. 
Then there is $F$ satisfying (B.2). Define also $\widetilde{F}=F\cap B_{r}(p_{0})$, so that $\operatorname{vol}(\widetilde{F})=\operatorname{vol}(F)-\operatorname{vol}(F\setminus B_{r}(p_{0}))=\operatorname{vol}(F)-\operatorname{vol}(E\setminus B_{r}(p_{0}))=\operatorname{vol}(E)+\varepsilon-\varepsilon=\operatorname{vol}(E).$ Moreover, for almost every such $r$ we can additionally require that $P(F,\partial B_{r}(p_{0}))\vcentcolon=\int_{\partial B_{r}(p_{0})}\,\mathrm{d}|D\chi_{F}|=0$, as $|D\chi_{F}|$ is a finite Radon measure, see [47, Proposition 2.16]. In this way (see [47, Theorem 16.3]) we have $\begin{split}P(\widetilde{F})&=P(F,B_{r}(p_{0}))+\mathcal{H}^{n-1}(\partial B_{r}(p_{0})\cap F)\\\ &=P(F)-P(F,M\setminus B_{r}(p_{0}))+\mathcal{H}^{n-1}(\partial B_{r}(p_{0})\cap F).\end{split}$ Since $E$ is an isoperimetric set we estimate $\begin{split}P(E)&\leq P(\widetilde{F})=P(F)-P(E,M\setminus B_{r}(p_{0}))+\mathcal{H}^{n-1}(\partial B_{r}(p_{0})\cap E)\\\ &\leq P(E)+C\varepsilon-A(r)+|V^{\prime}(r)|,\end{split}$ that is $A(r)\leq|V^{\prime}(r)|+CV(r)$. Hence we see that (B.1) holds for almost every $r>R$ such that $V(r)<\varepsilon_{0}$, and the proof is completed. ∎ ## References * [1] V. Agostiniani, M. Fogagnolo and L. Mazzieri “Sharp geometric inequalities for closed hypersurfaces in manifolds with nonnegative Ricci curvature” In _Invent. Math._ 222.3, 2020, pp. 1033–1101 DOI: 10.1007/s00222-020-00985-4 * [2] F.. Almgren “Existence and regularity almost everywhere of solutions to elliptic variational problems with constraints” In _Mem. Amer. Math. Soc._ 4.165, 1976, pp. viii+199 DOI: 10.1090/memo/0165 * [3] L. Ambrosio “Calculus, heat flow and curvature-dimension bounds in metric measure spaces” In _Proceedings of the International Congress of Mathematicians—Rio de Janeiro 2018. Vol. I. Plenary lectures_ , pp. 301–340 * [4] L. 
Ambrosio “Fine properties of sets of finite perimeter in doubling metric measure spaces” Calculus of variations, nonsmooth analysis and related topics In _Set-Valued Anal._ 10.2-3, 2002, pp. 111–128 DOI: 10.1023/A:1016548402502 * [5] L. Ambrosio, E. Bruè and D. Semola “Rigidity of the 1-Bakry-Émery inequality and sets of finite perimeter in RCD spaces” In _Geom. Funct. Anal._ 29.4, 2019, pp. 949–1001 DOI: 10.1007/s00039-019-00504-5 * [6] L. Ambrosio and S. Di Marino “Equivalent definitions of $BV$ space and of total variation on metric measure spaces” In _J. Funct. Anal._ 266.7, 2014, pp. 4150–4188 DOI: 10.1016/j.jfa.2014.02.002 * [7] L. Ambrosio, N. Gigli, A. Mondino and T. Rajala “Riemannian Ricci curvature lower bounds in metric measure spaces with $\sigma$-finite measure” In _Trans. Amer. Math. Soc._ 367.7, 2015, pp. 4661–4701 DOI: 10.1090/S0002-9947-2015-06111-X * [8] L. Ambrosio, N. Gigli and G. Savaré “Metric measure spaces with Riemannian Ricci curvature bounded from below” In _Duke Math. J._ 163.7, 2014, pp. 1405–1490 DOI: 10.1215/00127094-2681605 * [9] L. Ambrosio and S. Honda “New stability results for sequences of metric measure spaces with uniform Ricci bounds from below” In _Measure theory in non-smooth spaces_ , Partial Differ. Equ. Meas. Theory, pp. 1–51 * [10] L. Ambrosio, A. Mondino and G. Savaré “Nonlinear diffusion equations and curvature conditions in metric measure spaces” In _Mem. Amer. Math. Soc._ 262, 2019 DOI: 10.1090/memo/1270 * [11] G. Antonelli, E. Bruè and D. Semola “Volume bounds for the quantitative singular strata of non collapsed RCD metric measure spaces” In _Anal. Geom. Metr. Spaces_ 7.1, 2019, pp. 158–178 DOI: https://doi.org/10.1515/agms-2019-0008 * [12] C. Arezzo and K. Corrales “Existence of CMC-foliations in asymptotically cuspidal manifolds” Preprint arXiv:1811.12054 * [13] V. Bayle “A differential inequality for the isoperimetric profile” In _Int. Math. Res. Not._ , 2004, pp. 311–342 * [14] V. 
Bayle “Propriétés de concavité du profil isopérimétrique et applications” https://tel.archives-ouvertes.fr/tel-00004317v1/document, PhD Thesis Institut Fourier, 2003 * [15] E. Bruè, E. Pasqualetto and D. Semola “Rectifiability of the reduced boundary for sets of finite perimeter over $\mathsf{RCD}(k,n)$ spaces” Preprint arXiv:1909.00381 * [16] D. Burago, Y. Burago and S. Ivanov “A course in metric geometry” 33, Graduate Studies in Mathematics American Mathematical Society, Providence, RI, 2001, pp. xiv+415 DOI: 10.1090/gsm/033 * [17] A. Carlotto, O. Chodosh and M. Eichmair “Effective versions of the positive mass theorem” In _Inventiones mathematicae_ 206.3, 2016, pp. 975–1016 DOI: 10.1007/s00222-016-0667-3 * [18] M. P. do Carmo “Riemannian Geometry”, Mathematics (Boston, Mass.) Birkhäuser, 1992 * [19] F. Cavalletti and E. Milman “The Globalization theorem for the Curvature Dimension condition” Preprint arXiv:1612.07623, 2016 * [20] J. Cheeger and T. H. Colding “Lower bounds on Ricci curvature and the almost rigidity of warped products” In _Ann. of Math. (2)_ 144.1, 1996, pp. 189–237 DOI: 10.2307/2118589 * [21] J. Cheeger and T. H. Colding “On the structure of spaces with Ricci curvature bounded below. I” In _J. Differential Geom._ 46.3, 1997, pp. 406–480 URL: http://projecteuclid.org/euclid.jdg/1214459974 * [22] J. Cheeger and T. H. Colding “On the structure of spaces with Ricci curvature bounded below. II” In _J. Differential Geom._ 54.1, 2000, pp. 13–35 URL: http://projecteuclid.org/euclid.jdg/1214342145 * [23] J. Cheeger and T. H. Colding “On the structure of spaces with Ricci curvature bounded below. III” In _J. Differential Geom._ 54.1, 2000, pp. 37–74 URL: http://projecteuclid.org/euclid.jdg/1214342146 * [24] J. Cheeger, M. Gromov and M. Taylor “Finite propagation speed, kernel estimates for functions of the Laplace operator, and the geometry of complete Riemannian manifolds” In _J. Differential Geometry_ 17.1, 1982, pp. 15–53 * [25] O.
Chodosh “Large Isoperimetric Regions in Asymptotically Hyperbolic Manifolds” In _Communications in Mathematical Physics_ 343.2, 2016, pp. 393–443 DOI: 10.1007/s00220-015-2457-y * [26] O. Chodosh, M. Eichmair and A. Volkmann “Isoperimetric structure of asymptotically conical manifolds” In _J. Differential Geom._ 105.1, 2017, pp. 1–19 URL: http://projecteuclid.org/euclid.jdg/1483655857 * [27] B. Chow, P. Lu and L. Ni “Hamilton’s Ricci flow” 77, Graduate Studies in Mathematics Providence, RI: American Mathematical Society, 2006, pp. xxxvi+608 * [28] T. H. Colding “Ricci curvature and volume convergence” In _Ann. of Math. (2)_ 145.3, 1997, pp. 477–501 DOI: 10.2307/2951841 * [29] C. B. Croke “A sharp four-dimensional isoperimetric inequality” In _Comment. Math. Helv._ 59.2, 1984, pp. 187–192 DOI: 10.1007/BF02566344 * [30] C. B. Croke “Some isoperimetric inequalities and eigenvalue estimates” In _Ann. Sci. École Norm. Sup. (4)_ 13.4, 1980, pp. 419–435 * [31] G. De Philippis and N. Gigli “Non-collapsed spaces with Ricci curvature bounded from below” In _J. Éc. polytech. Math._ 5, 2018, pp. 613–650 DOI: 10.5802/jep.80 * [32] M. Eichmair and J. Metzger “Large isoperimetric surfaces in initial data sets” In _J. Differential Geom._ 94.1 Lehigh University, 2013, pp. 159–186 DOI: 10.4310/jdg/1361889064 * [33] M. Eichmair and J. Metzger “Unique isoperimetric foliations of asymptotically flat manifolds in all dimensions” In _Inventiones mathematicae_ 194.3, 2013, pp. 591–630 DOI: 10.1007/s00222-013-0452-5 * [34] M. Erbar, K. Kuwada and K.-T. Sturm “On the equivalence of the entropic curvature-dimension condition and Bochner’s inequality on metric measure spaces” In _Invent. Math._ 201.3, 2015, pp. 993–1071 URL: https://doi.org/10.1007/s00222-014-0563-7 * [35] S. Gallot, D. Hulin and J. Lafontaine “Riemannian Geometry”, Universitext Springer Berlin Heidelberg, 2004 * [36] N. Gigli “The splitting theorem in non smooth context” Preprint arXiv:1302.5555, 2013 * [37] N. Gigli, A.
Mondino and G. Savaré “Convergence of pointed non-compact metric measure spaces and stability of Ricci curvature bounds and heat flows” In _Proc. Lond. Math. Soc. (3)_ 111.5, 2015, pp. 1071–1129 DOI: 10.1112/plms/pdv047 * [38] E. Hebey “Nonlinear analysis on manifolds: Sobolev spaces and inequalities” 5, Courant Lecture Notes in Mathematics New York University, Courant Institute of Mathematical Sciences, New York; American Mathematical Society, Providence, RI, 1999, pp. x+309 * [39] J. Heinonen, P. Koskela, N. Shanmugalingam and J. T. Tyson “Sobolev spaces on metric measure spaces” An approach based on upper gradients 27, New Mathematical Monographs Cambridge University Press, Cambridge, 2015, pp. xii+434 DOI: 10.1017/CBO9781316135914 * [40] G. Huisken and T. Ilmanen “The Inverse Mean Curvature Flow and the Riemannian Penrose Inequality” In _J. Differential Geom._ 59.3 Lehigh University, 2001, pp. 353–437 * [41] Y. Kitabeppu “A Bishop-type inequality on metric measure spaces with Ricci curvature bounded below” In _Proc. Amer. Math. Soc._ 145.7, 2017, pp. 3137–3151 DOI: 10.1090/proc/13517 * [42] B. Kleiner “An isoperimetric comparison theorem” In _Invent. Math._ 108.1, 1992, pp. 37–47 DOI: 10.1007/BF02100598 * [43] G. P. Leonardi, S. Rigot and D. Vittone “Isodiametric sets in the Heisenberg group” In _Rev. Mat. Iberoam._ 28.4, 2012, pp. 999–1024 DOI: 10.4171/RMI/700 * [44] G. P. Leonardi, M. Ritoré and E. Vernadakis “Isoperimetric inequalities in unbounded convex bodies” Preprint arXiv:1606.03906. To appear in _Memoirs of the AMS_ * [45] P.-L. Lions “The concentration-compactness principle in the calculus of variations. The locally compact case. I” In _Ann. Inst. H. Poincaré Anal. Non Linéaire_ 1.2, 1984, pp. 109–145 * [46] J. Lott and C. Villani “Ricci curvature for metric-measure spaces via optimal transport” In _Ann. of Math. (2)_ 169.3, 2009, pp. 903–991 DOI: 10.4007/annals.2009.169.903 * [47] F.
Maggi “Sets of finite perimeter and geometric variational problems” An introduction to geometric measure theory 135, Cambridge Studies in Advanced Mathematics Cambridge University Press, Cambridge, 2012, pp. xx+454 DOI: 10.1017/CBO9781139108133 * [48] P. Maheux and L. Saloff-Coste “Analyse sur les boules d’un opérateur sous-elliptique” In _Math. Ann._ 303, 1995, pp. 713–740 DOI: https://doi.org/10.1007/BF01461013 * [49] M. Miranda “Functions of bounded variation on “good” metric spaces” In _J. Math. Pures Appl. (9)_ 82.8, 2003, pp. 975–1004 DOI: 10.1016/S0021-7824(03)00036-9 * [50] M. Miranda Jr., D. Pallara, F. Paronetto and M. Preunkert “Heat semigroup and functions of bounded variation on Riemannian manifolds” In _J. Reine Angew. Math._ 613, 2007, pp. 99–119 DOI: 10.1515/CRELLE.2007.093 * [51] A. Mondino and S. Nardulli “Existence of isoperimetric regions in non-compact Riemannian manifolds under Ricci or scalar curvature conditions” In _Comm. Anal. Geom._ 24.1, 2016, pp. 115–138 * [52] A. Mondino and E. Spadaro “On an isoperimetric-isodiametric inequality” In _Anal. PDE_ 10.1 MSP, 2017, pp. 95–126 DOI: 10.2140/apde.2017.10.95 * [53] F. Morgan “Geometric measure theory: a beginner’s guide” Academic Press, 2000 URL: http://gen.lib.rus.ec/book/index.php?md5=81ddadb6673c1d450b7af71d34cb0ac0 * [54] F. Morgan “Regularity of isoperimetric hypersurfaces in Riemannian manifolds” In _Transactions of the American Mathematical Society_ 355.12, 2003, pp. 5041–5052 * [55] F. Morgan and D. L. Johnson “Some sharp isoperimetric theorems for Riemannian manifolds” In _Indiana Univ. Math. J._ 49.3, 2000, pp. 1017–1041 * [56] F. Morgan and M. Ritoré “Isoperimetric regions in cones” In _Trans. Amer. Math. Soc._ 354.6, 2002, pp. 2327–2339 DOI: 10.1090/S0002-9947-02-02983-5 * [57] A. E. Muñoz Flores and S.
Nardulli “Generalized Compactness for Finite Perimeter Sets and Applications to the Isoperimetric Problem” In _Journal of Dynamical and Control Systems_ , 2020 URL: https://doi.org/10.1007/s10883-020-09517-y * [58] A. E. Muñoz Flores and S. Nardulli “Local Hölder continuity of the isoperimetric profile in complete noncompact Riemannian manifolds with bounded geometry” In _Geom. Dedicata_ 201, 2019, pp. 1–12 DOI: 10.1007/s10711-018-0416-4 * [59] S. Nardulli “Generalized existence of isoperimetric regions in non-compact Riemannian manifolds and applications to the isoperimetric profile” In _Asian J. Math._ 18.1, 2014, pp. 1–28 * [60] R. H. L. Pedrosa “The Isoperimetric Problem in Spherical Cylinders” In _Annals of Global Analysis and Geometry_ 26.4, 2004, pp. 333–354 DOI: 10.1023/B:AGAG.0000047528.20962.e2 * [61] G. Perelman “A complete Riemannian manifold of positive Ricci curvature with Euclidean volume growth and nonunique asymptotic cone” In _Comparison geometry (Berkeley, CA, 1993–94)_ 30, Math. Sci. Res. Inst. Publ. Cambridge Univ. Press, Cambridge, 1997, pp. 165–166 * [62] P. Petersen “Riemannian geometry. Third edition.” 171, Graduate Texts in Mathematics Springer, Cham, 2016, pp. xviii+499 * [63] S. Pigola, M. Rigoli and A. G. Setti “Vanishing and finiteness results in geometric analysis” A generalization of the Bochner technique 266, Progress in Mathematics Birkhäuser Verlag, Basel, 2008, pp. xiv+282 * [64] M. Ritoré “Constant geodesic curvature curves and isoperimetric domains in rotationally symmetric surfaces” In _Comm. Anal. Geom._ 9.5, 2001, pp. 1093–1138 DOI: 10.4310/CAG.2001.v9.n5.a5 * [65] M. Ritoré “The isoperimetric problem in complete surfaces of nonnegative curvature” In _J. Geom. Anal._ 11.3, 2001, pp. 509–517 DOI: 10.1007/BF02922017 * [66] M. Ritoré and C. Rosales “Existence and characterization of regions minimizing perimeter under a volume constraint inside Euclidean cones” In _Trans. Amer. Math. Soc._ 356.11, 2004, pp. 4601–4622 * [67] R.
Schoen and S.-T. Yau “Lectures on differential geometry”, Conference Proceedings and Lecture Notes in Geometry and Topology, I International Press, Cambridge, MA, 1994, pp. v+235 * [68] Y. Shi “The Isoperimetric Inequality on Asymptotically Flat Manifolds with Nonnegative Scalar Curvature” In _International Mathematics Research Notices_ 2016.22, 2016, pp. 7038–7050 DOI: 10.1093/imrn/rnv395 * [69] K.-T. Sturm “On the geometry of metric measure spaces. I” In _Acta Math._ 196.1, 2006, pp. 65–131 DOI: 10.1007/s11511-006-0002-8 * [70] K.-T. Sturm “On the geometry of metric measure spaces. II” In _Acta Math._ 196.1, 2006, pp. 133–177 DOI: 10.1007/s11511-006-0003-7 * [71] P. Topping “Ricci flow and Ricci Limit Spaces” Preprint arXiv:1904.11375, 2019 * [72] C. Villani “Optimal transport” Old and new 338, Grundlehren der Mathematischen Wissenschaften [Fundamental Principles of Mathematical Sciences] Springer-Verlag, Berlin, 2009, pp. xxii+973 DOI: 10.1007/978-3-540-71050-9 * [73] A. Weil ““Sur les surfaces à courbure négative”, in Oeuvres scientifiques/Collected papers. I. 1926–1951”, Springer Collected Works in Mathematics Springer, Heidelberg, 1979, pp. xviii+574
# Causality theory of spacetimes with continuous Lorentzian metrics revisited Leonardo García-Heveling Department of Mathematics, Radboud University, Nijmegen, The Netherlands. <EMAIL_ADDRESS> Acknowledgements: I am very grateful to Annegret Burtscher for discussions and detailed comments on the draft. ###### Abstract We revisit the causal structures $J^{+}$ and $K^{+}$ on a spacetime, and introduce a new one, called $k^{+}$. The $k^{+}$-relation can be used to characterize causal curves, and for smooth Lorentzian metrics, it yields the same result as the standard $J^{+}$-relation. If, on the other hand, the metric is merely continuous, then the different causal structures become inequivalent. We compare them by investigating three properties, namely the validity of the push-up lemma, the openness of chronological futures, and the existence of limit causal curves. Depending on the definition of causal structure chosen, we show that at most two of these three properties hold for continuous metrics. In particular, by using the new relation $k^{+}$, the push-up lemma holds even when the metric is continuous, while it generally does not for the standard $J^{+}$-relation. Finally, we argue that, in general, no reasonable notion of causal structure can have all three properties. Key words: low regularity, causality theory, push-up lemma, causal bubbles. ## 1 Introduction The study of spacetimes with metrics of low regularity is a topic of rising importance in Lorentzian geometry. The main motivation stems from the strong cosmic censorship conjecture [7, 21] and the occurrence of weak solutions to Einstein’s equations coupled to certain matter models [4, 10]. It has hence become an important research question to establish which properties of the usual, smooth spacetimes are more “robust” or “fundamental”, in the sense that they continue to hold in lower regularity, and which, on the other hand, depend sensitively on the smoothness assumption.
In trying to answer this question, the need arises to axiomatize the notion of spacetime, and in particular, to treat the causal structure in an order-theoretic way. This, in turn, connects well with ideas in quantum gravity, such as causal set theory [3]. To make matters more concrete, in the present paper we shall study spacetimes $(M,g)$ where $g$ is a continuous Lorentzian metric. However, since our approach is indeed of the order-theoretic type, it can easily be adapted to other settings. Let us start by recalling the case of a classical spacetime $(M,g)$ where $g$ is smooth. The chronological and causal relations $I^{+}$ and $J^{+}$ can then be defined using the notions of timelike and non-spacelike curve respectively. The following three facts are well-known: 1. The push-up lemma: if $p\in I^{+}(q)$ and $q\in J^{+}(r)$ then $p\in I^{+}(r)$. 2. The limit curve theorem: the uniform limit of a converging sequence of causal curves is a causal curve. 3. Openness of chronological pasts and futures: the sets $I^{\pm}(p)$ are open, for any $p\in M$. With these three results at hand, one can develop a large portion of causality theory, such as the causal ladder and the characterization of time functions, without ever again mentioning the metric $g$ or the manifold structure on $M$ explicitly. This is confirmed by the “Lorentzian length spaces” approach of Kunzinger and Sämann [14] and follow-up work [1, 9]. In order to do causality theory on a spacetime with a $\mathcal{C}^{0}$-metric, the first question is how the causal structure should even be defined. The obvious answer is to define $I^{+}$ and $J^{+}$ through timelike and non-spacelike curves, just as in the smooth case. However, there are two potential problems: A. Points where the metric is not $\mathcal{C}^{2}$ do not admit normal neighborhoods. B. The regularity class where we define timelike curves becomes important.
Chruściel and Grant [6] showed that because of A, the push-up lemma fails, while the limit curve theorems are unaffected (points 1 and 2 above). As a consequence, spacetimes with continuous metric exhibit so-called “causal bubbles”, open regions contained in $J^{+}$ but not in $I^{+}$. Regarding B, when the metric is at least $\mathcal{C}^{2}$, it was shown by Chruściel [5] that one obtains the same chronological relation $I^{+}$ regardless of whether timelike curves are required to be Lipschitz or piecewise-differentiable. In the case of continuous metrics, however, it was shown by Grant et al. [11] that this choice makes an important difference. In particular, they showed that the chronological futures and pasts are open when using piecewise-differentiable curves, but not when using Lipschitz curves (point 3 above). A radically different, and in fact earlier, approach is that of Sorkin and Woolgar [22]. They propose to keep the definition of $I^{+}$ by piecewise-differentiable timelike curves, and then introduce another relation $K^{+}$ as the smallest transitive, topologically closed relation containing $I^{+}$. The relation $K^{+}$ can then be used to replace $J^{+}$. Even for smooth metrics, the two relations $K^{+}$ and $J^{+}$ do not coincide. Nonetheless, it is possible to define the usual causal curves (and hence $J^{+}$) in terms of $K^{+}$, without referring to the metric directly. The $K^{+}$-relation has since found a variety of applications, most notably Minguzzi’s works on stable causality [19] and time functions [18]. However, there is no push-up lemma for the $K^{+}$-relation. Following a similar philosophy, we propose a new causal relation $k^{+}$. We define $k^{+}$ as the largest relation such that the push-up lemma holds true. Just as Sorkin and Woolgar did with $K^{+}$, we propose a definition of causal curve based on $k^{+}$.
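The order-theoretic flavor of the Sorkin–Woolgar construction can be illustrated on a finite toy model (a sketch, not from the paper; the point set and function names are invented for illustration). On a finite set with the discrete topology every relation is topologically closed, so $K^{+}$, the smallest transitive closed relation containing $I^{+}$, reduces to the plain transitive closure:

```python
from itertools import product

def transitive_closure(rel):
    """Smallest transitive relation containing `rel` (naive Warshall-style loop)."""
    closure = set(rel)
    changed = True
    while changed:
        changed = False
        for (a, b), (c, d) in product(list(closure), repeat=2):
            if b == c and (a, d) not in closure:
                closure.add((a, d))
                changed = True
    return closure

# Toy chronology on four events: a << b << c, with d unrelated to everything.
I_plus = {("a", "b"), ("b", "c")}
K_plus = transitive_closure(I_plus)
```

In the continuum setting the closedness condition is of course nontrivial; the toy model only captures the transitivity part of the definition.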
When the metric is smooth, causal curves defined through $k^{+}$ coincide with those defined by the metric $g$, but when the metric is merely continuous, they do not. As a consequence, we show that while our new causal relation satisfies the push-up lemma, the limit curve theorems cease to hold. We then argue that, essentially because we chose $k^{+}$ to be maximal, there in fact do not exist any causal relations that can satisfy both properties at the same time. That is, at least, if we want to keep the usual definition of (piecewise continuously differentiable) timelike curve, the only one that guarantees open futures. We also explore the possibility of alternative chronological relations, but we conclude that one runs into the same problems. The goal of this paper is thus two-fold. On the one hand, we show that spacetimes with continuous metrics are unavoidably pathological. This strengthens the view, already present in the literature, that one should focus on a special class of continuous metrics, the so-called causally plain ones. These are the ones where the usual push-up lemma holds, and include, for example, the class of Lipschitz metrics [6, Cor. 1.17]. On the other hand, we believe that the methods developed here, mainly the $k^{+}$-relation, can find further applications in (low regularity) Lorentzian geometry; for example, in the study of spacetimes with degenerate metrics [8], or with metrics that are not even continuous [10]. Outline. In Section 2 we provide more background, define the new causal relation $k^{+}$ and study its properties. In Section 3 we define causal curves in terms of $k^{+}$, and show that these are just the usual causal curves when the metric is smooth. In Section 4, we discuss other possible choices of causal structure. In Section 5, we summarize and discuss our results.
## 2 The $k^{+}$-relation ### 2.1 Basic notions in causality theory Let $M$ denote a Hausdorff, paracompact $\mathcal{C}^{1}$-manifold, and $g$ a $\mathcal{C}^{0}$-Lorentzian metric. Assume that $(M,g)$ admits a $\mathcal{C}^{0}$-vector field $X$ such that $g(X,X)<0$, called a time orientation. The pair $(M,g)$ together with a choice of time orientation is called a _$\mathcal{C}^{0}$-spacetime_. Whenever we say that $g$ is $\mathcal{C}^{2}$ (or smooth), or that $(M,g)$ is a $\mathcal{C}^{2}$ (or smooth) spacetime, we mean that $M$ admits a $\mathcal{C}^{3}$ (resp. smooth) subatlas such that $g$ is $\mathcal{C}^{2}$ (resp. smooth) in this subatlas. By _relation_ on $M$ we will mean a subset of $M\times M$. The closure of a relation $R$, denoted by $\overline{R}$, is the topological closure in the product topology on $M\times M$. Likewise, we say that $R$ is open if it is an open subset of $M\times M$. There exist two different definitions for the notion of _timelike curve_, and thus two different notions of _chronological relation_. For $\mathcal{C}^{2}$-metrics the two notions are equivalent [5, Cor. 2.4.11], but not for $\mathcal{C}^{0}$-metrics [11]. One is based on the class $\mathcal{L}$ of locally Lipschitz curves, and the other one on the class $\mathcal{C}_{\mathrm{pw}}^{1}$ of piecewise continuously differentiable curves: $\displaystyle{I}^{+}:=\\{(p,q)\in M\times M\mid\ $ there exists an $\mathcal{L}$-curve $\gamma:[a,b]\to M$ such that $\gamma(a)=p$, $\gamma(b)=q$, $g(\dot{\gamma},\dot{\gamma})<0$ $\displaystyle\text{and $g(\dot{\gamma},X)<0$ almost everywhere}\\},$ $\displaystyle\check{I}^{+}:=\\{(p,q)\in M\times M\mid\ $ there exists a $\mathcal{C}_{\mathrm{pw}}^{1}$-curve $\gamma:[a,b]\to M$ such that $\gamma(a)=p$, $\gamma(b)=q$ and $g(\dot{\gamma},\dot{\gamma})<0$ $\displaystyle\text{and $g(\dot{\gamma},X)<0$ everywhere}\\}.$ Recall that an $\mathcal{L}$-curve is differentiable almost everywhere by Rademacher’s theorem.
In the second case, when $\gamma$ is $\mathcal{C}_{\mathrm{pw}}^{1}$, the condition $g(\dot{\gamma},\dot{\gamma})<0$ is meant to hold for both one-sided derivatives (which may differ from each other at break points). It was established by Grant et al. that $\check{I}^{+}$ is open, but not necessarily ${I}^{+}$ [11]. The standard _$g$-causal relation_ is defined as $\displaystyle J^{+}:=\\{(p,q)\in M\times M\mid\ $ there exists an $\mathcal{L}$-curve $\gamma:[a,b]\to M$ such that $\gamma(a)=p$, $\gamma(b)=q$ and $g(\dot{\gamma},\dot{\gamma})\leq 0$ $\displaystyle\text{and $g(\dot{\gamma},X)<0$ almost everywhere}\\},$ where we say that $\gamma$ is a _$g$-causal curve_. We can analogously define the past relations $I^{-}$, $\check{I}^{-}$ and $J^{-}$ by requiring $g(\dot{\gamma},X)>0$, but since they are simply given by reversing the factors, there is no need to treat them separately. Therefore we shall also not specify every time that our timelike and causal curves are always future-directed. We do, however, sometimes use the notations $q\in J^{+}(p)$, and $p\in J^{-}(q)$, both meaning the same, namely $(p,q)\in J^{+}$. We finish this subsection with a short digression about limit curve theorems. In the literature, there exist multiple statements with this name; a detailed review can be found in [16] for smooth spacetimes. In [6, Thm. 1.6] it is shown how the limit curve theorems from the smooth case also carry over to continuous spacetimes (see also [20, Thm. 1.5]). Roughly speaking, a limit curve theorem is the combination of the following two statements: 1. Under certain assumptions, a sequence of causal curves has a convergent (in some appropriate sense) subsequence. 2. The limit of said subsequence is itself a causal curve. Here causal usually means $g$-causal, but we will also discuss alternative notions of causal curve. Regarding part 1, there exist many versions tailored to different applications.
A common variation is to require the curves to be Lipschitz (as we did in our definition of $J^{+}$) and add some compactness assumptions in order to apply the Arzelà–Ascoli theorem. Part 2 is where the causal structure becomes important; we discuss it in the context of our new $k^{+}$-relation in Remark 3.4. ### 2.2 Definition of $k^{+}$ We first introduce some nomenclature, the underlying concepts being fairly standard. By $(M,g)$ we continue to denote a $\mathcal{C}^{0}$-spacetime, although some of the ideas make sense even if $M$ is just a set. The following definition gives a compatibility condition between the chronological and causal relations, in this case denoted abstractly by $R$ and $S$ respectively. ###### Definition 2.1. Let $R,S\subseteq M\times M$ be two relations. We say that $S$ satisfies _push-up relative to $R$_ if the following two properties hold: (i) $(x,y)\in S,(y,z)\in R\implies(x,z)\in R$, (ii) $(x,y)\in R,(y,z)\in S\implies(x,z)\in R$. Let $R,S,S^{\prime}\subseteq M\times M$ be relations. Then it is easy to see that: (a) If $R$ is transitive, then $R$ satisfies push-up relative to itself. (b) If $S$ satisfies push-up relative to $R$, and $S^{\prime}\subseteq S$, then also $S^{\prime}$ satisfies push-up relative to $R$. (c) If $S$ and $S^{\prime}$ each satisfy push-up relative to $R$, then so does $S\cup S^{\prime}$. If $(M,g)$ is $\mathcal{C}^{2}$, then $J^{+}$ satisfies push-up relative to ${I}^{+}$ (and equivalently, $\check{I}^{+}$). This fact is known as the push-up lemma [5, Lem. 2.4.14]. Those $\mathcal{C}^{0}$-spacetimes where $J^{+}$ satisfies push-up relative to $\check{I}^{+}$ are called _causally plain_. The term was coined by Chruściel and Grant [6, Def. 1.16], but beware that they used ${I}^{+}$ in place of $\check{I}^{+}$. In any case, not all $\mathcal{C}^{0}$-spacetimes are causally plain [6, Ex. 1.11].
The failure of the push-up lemma on arbitrary $\mathcal{C}^{0}$-spacetimes motivates our next, central definition. ###### Definition 2.2. The _$k^{+}$-relation_ is the largest relation that satisfies push-up relative to $\check{I}^{+}$. ###### Proposition 2.3. There exists a unique relation $k^{+}$ satisfying Definition 2.2. Moreover, $k^{+}$ is transitive and reflexive. ###### Proof. Consider the set $\mathcal{S}\subseteq\mathcal{P}(M\times M)$ of all relations satisfying push-up relative to $\check{I}^{+}$. We construct the maximal such relation by $k^{+}:=\bigcup_{S\in\mathcal{S}}S,$ which itself satisfies push-up relative to $\check{I}^{+}$ because property (c) above clearly extends to arbitrary unions; this proves existence. The diagonal relation $\Delta:=\\{(p,p)\mid p\in M\\}$ is contained in $\mathcal{S}$, since in fact it satisfies push-up with respect to any relation. Hence $\Delta\subseteq k^{+}$, meaning that $k^{+}$ is reflexive. To show transitivity, let $\displaystyle S^{\prime}:=\\{(x,y)\in M\times M\mid\ $ $\displaystyle\exists N\in\mathbb{N},\exists p_{i}\in M,i=1,...,N\text{ such that }$ $\displaystyle p_{1}=x,p_{N}=y\text{ and }(p_{i},p_{i+1})\in k^{+}\text{ for all }i=1,...,N-1\\}.$ Then $k^{+}\subseteq S^{\prime}$. It can be shown inductively that $S^{\prime}$ satisfies push-up relative to $\check{I}^{+}$, hence also $S^{\prime}\subseteq k^{+}$. Therefore $k^{+}=S^{\prime}$, and $S^{\prime}$ is clearly transitive. ∎ The next lemma gives another important property of $k^{+}$. ###### Lemma 2.4. It holds that $k^{+}\subseteq\overline{\check{I}^{+}}$. ###### Proof. Suppose $(p,q)\in k^{+}$. Let $\gamma:[0,1)\to M$ be any timelike curve with $\gamma(0)=q$. Then for all $t\in(0,1)$, $\left(q,\gamma(t)\right)\in\check{I}^{+}$. Because $(p,q)\in k^{+}$, the push-up property implies $\left(p,\gamma(t)\right)\in\check{I}^{+}$. Since $\gamma$ is continuous, $(p,\gamma(t))\to(p,q)$ as $t\to 0$, hence $(p,q)\in\overline{\check{I}^{+}}$. ∎ We end this subsection by showing how the $k^{+}$-relation fits into the current literature.
A $\mathcal{C}^{0}$-spacetime $(M,g)$ is said to be _weakly distinguishing_ whenever, for all $p,q\in M$, $\check{I}^{+}(p)=\check{I}^{+}(q)$ and $\check{I}^{-}(p)=\check{I}^{-}(q)$ together imply $p=q$ [17, Def. 4.47]. Given two relations $R,S$ on $(M,g)$ (or any set, for that matter), we say that the pair $(R,S)$ is a _causal structure_ in the sense of Kronheimer and Penrose [13, Def. 1.2] if: (i) $S$ is transitive, reflexive and antisymmetric, (ii) $R$ is contained in $S$ and irreflexive, (iii) $S$ satisfies push-up relative to $R$. ###### Proposition 2.5. Let $(M,g)$ be a $\mathcal{C}^{0}$-spacetime. Then $(\check{I}^{+},k^{+})$ is a causal structure in the sense of Kronheimer and Penrose if and only if $(M,g)$ is weakly distinguishing. ###### Proof. Point (iii) is satisfied by the very definition of $k^{+}$. To see point (ii), recall that if $(M,g)$ is weakly distinguishing, then $(M,g)$ is chronological, meaning precisely that $\check{I}^{+}$ is irreflexive. That $\check{I}^{+}$ is contained in $k^{+}$ is clear because $\check{I}^{+}$, being transitive, must satisfy push-up with respect to itself (see property (a) right after Definition 2.1). Since $k^{+}$ is always transitive and reflexive by Proposition 2.3, it only remains to show that $k^{+}$ is antisymmetric. Note that $(p,q)\in k^{+}$ if and only if $\check{I}^{+}(q)\subseteq\check{I}^{+}(p)$ and $\check{I}^{-}(p)\subseteq\check{I}^{-}(q)$: push-up gives both inclusions, and conversely the set of all pairs satisfying these inclusions is itself a relation satisfying push-up relative to $\check{I}^{+}$, hence contained in $k^{+}$ by maximality. Hence, $(p,q)\in k^{+}$ and $(q,p)\in k^{+}$ if and only if $\check{I}^{+}(p)=\check{I}^{+}(q)$ and $\check{I}^{-}(p)=\check{I}^{-}(q)$. Thus $p=q$ for all such pairs $(p,q)$ if and only if $(M,g)$ is weakly distinguishing. ∎ Another way of phrasing the last result in the usual language of causality theory is to say that “$(M,g)$ is $k^{+}$-causal if and only if it is weakly distinguishing”.
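The characterization used in the proof of Proposition 2.5, namely $(p,q)\in k^{+}$ if and only if $\check{I}^{+}(q)\subseteq\check{I}^{+}(p)$ and $\check{I}^{-}(p)\subseteq\check{I}^{-}(q)$, makes $k^{+}$ easy to compute on a finite toy model. The sketch below (not from the paper; the four-point "spacetime" and all names are invented for illustration) checks Definition 2.1 directly and exhibits the failure of antisymmetry on a model that is not weakly distinguishing:

```python
def satisfies_pushup(S, R):
    """Definition 2.1: S satisfies push-up relative to R."""
    for (x, y) in S:
        for (a, b) in R:
            if a == y and (x, b) not in R:   # (i): (x,y) in S, (y,b) in R => (x,b) in R
                return False
            if b == x and (a, y) not in R:   # (ii): (a,x) in R, (x,y) in S => (a,y) in R
                return False
    return True

def k_plus(points, I):
    """Largest relation satisfying push-up relative to I, via the inclusion criterion."""
    fut = {p: {b for (a, b) in I if a == p} for p in points}
    past = {p: {a for (a, b) in I if b == p} for p in points}
    return {(p, q) for p in points for q in points
            if fut[q] <= fut[p] and past[p] <= past[q]}

points = {"r", "p", "q", "s"}
# Chronology with I+(p) = I+(q) and I-(p) = I-(q): not weakly distinguishing.
I = {("r", "p"), ("r", "q"), ("r", "s"), ("p", "s"), ("q", "s")}
k = k_plus(points, I)
```

Consistently with Proposition 2.5, $k$ here contains both $(p,q)$ and $(q,p)$, so antisymmetry fails exactly because the model is not weakly distinguishing; moreover, adding any further pair to $k$ destroys the push-up property, illustrating maximality.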
### 2.3 The local structure of $k^{+}$ Given a neighborhood $U\subseteq M$, we can define the localized relations $\check{I}^{+}_{U}$, $J^{+}_{U}$ and $k^{+}_{U}$ by applying the usual definitions to the spacetime $(U,g|_{U})$. It is easy to see that $\check{I}^{+}_{U}\subseteq\check{I}^{+}$ and $J^{+}_{U}\subseteq J^{+}$. ###### Lemma 2.6. Let $U\subseteq M$ be an open neighborhood. Then $k^{+}_{U}\subseteq k^{+}$. ###### Proof. By definition, $k^{+}_{U}$ satisfies push-up relative to $\check{I}^{+}_{U}$. We need to show that $k^{+}_{U}$ also satisfies push-up relative to $\check{I}^{+}$, and then the claim follows from maximality of $k^{+}$. Suppose that $(x,y)\in k^{+}_{U}$ and $(y,z)\in\check{I}^{+}$. Then there exists a timelike curve $\gamma:[0,1]\to M$ from $y$ to $z$. Since, by assumption, $y\in U$, we must have that for $\epsilon>0$ small enough, $\gamma([0,\epsilon])\subseteq U$. Thus we have $(y,\gamma(\epsilon))\in\check{I}^{+}_{U}$, which implies $(x,\gamma(\epsilon))\in\check{I}^{+}_{U}\subseteq\check{I}^{+}$ by definition of $k^{+}_{U}$. Since also $(\gamma(\epsilon),z)\in\check{I}^{+}$, transitivity of $\check{I}^{+}$ implies $(x,z)\in\check{I}^{+}$. Part (ii) of Definition 2.1 can be shown analogously. ∎ We want to investigate whether, for $U$ small enough, $k^{+}_{U}$ is closed. The motivation lies in the limit curve theorems (see Remark 3.4 for the details). By Lemma 2.4, $k^{+}_{U}\subseteq\overline{\check{I}^{+}_{U}}$. Since also $\check{I}^{+}_{U}\subseteq k^{+}_{U}$, we conclude that $k^{+}_{U}$ is closed if and only if $k^{+}_{U}=\overline{\check{I}^{+}_{U}}$. By Definition 2.2, $k^{+}_{U}=\overline{\check{I}^{+}_{U}}$ if and only if $\overline{\check{I}^{+}_{U}}$ satisfies push-up relative to $\check{I}^{+}_{U}$. Unfortunately, the next example [11, Ex. 3.1] shows that the latter is not necessarily the case. ###### Example 2.7.
Let $M=\mathbb{R}^{2}$ with metric given by $g_{\alpha}:=-\sin 2\theta(x)dt^{2}-2\cos 2\theta(x)dxdt+\sin 2\theta(x)dx^{2}$ where $\theta(x):=\begin{cases}0,&x<-1\\\ \arccos|x|^{\alpha},&-1\leq x\leq 0\\\ \frac{\pi}{2},&x>0,\end{cases}$ and $0<\alpha<1$ arbitrary. The metric $g_{\alpha}$ is $\alpha$-Hölder continuous, and in fact smooth outside of $\\{x=-1\\}\cup\\{x=0\\}$. This example was introduced by Grant et al. [11, Ex. 3.1], who showed that $\check{I}^{+}\subsetneq{I}^{+}$. Let $p=(0,0)$ and $U\subseteq M$ any open neighborhood of $p$. Then the following hold: (i) ${I}^{+}_{U}$ is not open, (ii) $k^{+}_{U}$ is not closed. Point (i) is shown in [11, Ex. 3.1] (they in fact show that ${I}^{+}$ is not open, but their argument is also valid on neighborhoods). In order to show point (ii), first note that the past $\check{I}^{-}(p)$ (blue region in Figure 1) is contained in $\\{x>0\\}$. This is so because a timelike $\mathcal{C}_{\mathrm{pw}}^{1}$-curve must have timelike tangent vector everywhere, which implies having a positive $x$-component when in $\\{x\geq 0\\}$. When using $\mathcal{L}$-curves, the past set $I^{-}(p)$ is in fact bigger [11, Ex. 3.1], but we will not discuss this further. Consider the curve $\gamma:(-\epsilon,0)\to M,s\mapsto(t(s),x(s))$ given by $\displaystyle t(s):=\frac{1}{1-\alpha}A^{1-\alpha}s,$ $\displaystyle x(s):=-A|s|^{\frac{1}{1-\alpha}},$ where $A>0$ is arbitrary and $\epsilon>0$ is small. It is shown in [11, Ex. 3.1] that $\gamma$ is a timelike curve in the $\mathcal{C}_{\mathrm{pw}}^{1}$-sense (but its extension to the endpoint $s=0$ is not). Since $\gamma(s)\to p$ as $s\to 0$, we conclude that $(\gamma(-\epsilon^{\prime}),p)\in\overline{\check{I}^{+}}$ for all $\epsilon^{\prime}<\epsilon$.
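The claim that $\gamma$ is timelike away from $s=0$ can be sanity-checked numerically (a sketch, not from [11]; the sample values $\alpha=A=1/2$ and the function names are arbitrary choices). One evaluates $g_{\alpha}(\dot{\gamma},\dot{\gamma})$ along the curve for $s<0$ and checks that it is negative, shrinking to zero as $s\to 0^{-}$, consistent with the extension to the endpoint failing to be timelike:

```python
import math

alpha, A = 0.5, 0.5   # sample parameters; any 0 < alpha < 1 and A > 0 would do

def g_of_tangent(s):
    """g_alpha(gamma', gamma') along gamma(s) = (t(s), x(s)), for -1 < s < 0."""
    x = -A * (-s) ** (1 / (1 - alpha))                        # x(s), lies in (-1, 0)
    tdot = A ** (1 - alpha) / (1 - alpha)                     # t'(s)
    xdot = (A / (1 - alpha)) * (-s) ** (alpha / (1 - alpha))  # x'(s)
    theta = math.acos((-x) ** alpha)                          # theta(x) = arccos|x|^alpha
    g_tt, g_tx, g_xx = -math.sin(2 * theta), -math.cos(2 * theta), math.sin(2 * theta)
    return g_tt * tdot ** 2 + 2 * g_tx * tdot * xdot + g_xx * xdot ** 2

values = [g_of_tangent(s) for s in (-1e-1, -1e-2, -1e-3)]
```

All sampled values are negative (timelike tangent) and their magnitudes decrease toward zero as $s\to 0^{-}$.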
$p$$\gamma(-\epsilon^{\prime})$$q$$\check{I}^{-}(p)$$\gamma$$x$$t$ Figure 1: The spacetime of Example 2.7, with the curve $\gamma$ that lies outside of $\check{I}^{-}(p)$, while nonetheless $(\gamma(-\epsilon^{\prime}),p)\in\overline{\check{I}^{+}}$. Next we show that $(\gamma(-\epsilon^{\prime}),p)\not\in k^{+}$. To see this, consider any point of the form $q=(x(-\epsilon^{\prime}),t_{q})$ with $t_{q}<t(-\epsilon^{\prime})$ (see Figure 1). Then $(q,\gamma(-\epsilon^{\prime}))\in\check{I}^{+}$, the connecting vertical segment being an example of timelike $\mathcal{C}_{\mathrm{pw}}^{1}$-curve between $q$ and $\gamma(-\epsilon^{\prime})$. If we assume $(\gamma(-\epsilon^{\prime}),p)\in k^{+}$, then by push-up it follows that $(q,p)\in\check{I}^{+}$. However, $(q,p)\not\in\check{I}^{+}$ because $x(-\epsilon^{\prime})<0$ and $\check{I}^{-}(p)$ is contained in $\\{x>0\\}$. Hence $(\gamma(-\epsilon^{\prime}),p)\not\in k^{+}$. Since we can pick $\epsilon^{\prime}>0$ arbitrarily small, and $t_{q}$ smaller but arbitrarily close to $t(-\epsilon^{\prime})$, the previous discussion applies to any neighborhood $U$ of $p$. Thus $k^{+}_{U}\subsetneq\overline{\check{I}^{+}}_{U}$, no matter how we choose $U$. Grant et al. showed that in the previous example also $\check{I}^{+}\subsetneq{I}^{+}$, and that ${I}^{+}$ is not open (recall that $\check{I}^{+}$ is always open). If we were to define $k^{+}$ by requiring push-up with respect to ${I}^{+}$ instead of $\check{I}^{+}$, it may be that $k^{+}_{U}$ is closed (for small enough $U$). We do not explore this possibility here, and simply note that this would be at the cost of chronological futures not being open. Hence the conclusion is, either way, that one cannot have push-up, open futures and (local) closedness at the same time. That is, at least, if one wants the chronological relation to be given by the usual ${I}^{+}$ or $\check{I}^{+}$. 
We explore alternatives to ${I}^{+}$ and $\check{I}^{+}$ in Section 4, but the conclusion there is also that one of the three properties has to be sacrificed. ## 3 Causal curves in terms of the $k^{+}$-relation Throughout this section, $F$ denotes an interval, meaning any connected subset of $\mathbb{R}$. ###### Definition 3.1. A continuous curve $\gamma:F\to M$ is called _$k^{+}$-causal_ if for every $t\in F$ and every open neighborhood $U\subseteq M$ of $\gamma(t)$, there exists an open neighborhood $V\subseteq F$ of $t$ such that $s_{1}<s_{2}\implies\left(\gamma(s_{1}),\gamma(s_{2})\right)\in k^{+}_{U}\ \text{ for all }\ s_{1},s_{2}\in V.$ ###### Remark 3.2. Similarly to Definition 3.1, one can define $J^{+}$-causal curves, as was done already by Hawking and Ellis in 1973 [12, Chap. 6.2], and $K^{+}$-causal curves [22, Def. 17]. Any $g$-causal curve is automatically also $J^{+}$-causal. However, the converse is not true, since a $J^{+}$-causal curve need not even have a well-defined tangent vector. Nonetheless, if two points $p,q\in M$ can be joined by a $J^{+}$-causal curve $\gamma$, then $(p,q)\in J^{+}$. In particular, there exists a $g$-causal curve $\sigma$, not necessarily equal to $\gamma$, which joins $p$ and $q$. Similarly to the previous remark, by transitivity and Lemma 2.6 it follows that if two points $p,q$ can be joined by a $k^{+}$-causal curve, then $(p,q)\in k^{+}$. The next example motivates why Definition 3.1 has to be formulated in a local way, i.e. why we do not simply require $s_{1}<s_{2}\implies\left(\gamma(s_{1}),\gamma(s_{2})\right)\in k^{+}$ for all $s_{1},s_{2}\in F$. ###### Example 3.3. Let $M=S^{1}\times\mathbb{R}$ with metric $ds^{2}=-dt^{2}+dx^{2}$. This spacetime is totally vicious, so $\check{I}^{+}=M\times M$ and hence also $k^{+}=M\times M$. Therefore, any $\mathcal{C}^{0}$-curve $\gamma:F\to M$ satisfies $\left(\gamma(s_{1}),\gamma(s_{2})\right)\in k^{+}$ for all $s_{1},s_{2}\in F$. 
However, $M$ locally looks like Minkowski spacetime, where $k^{+}=J^{+}$, hence not all curves on $M$ are $k^{+}$-causal in the sense of Definition 3.1. Having defined $k^{+}$-causal curves, we briefly return to Example 2.7 in order to better understand the relationship between closedness of $k^{+}$ and limit curve theorems. ###### Remark 3.4 (On limit curve theorems). Suppose that $(\gamma_{n})_{n}$ is a sequence of $k^{+}$-causal curves converging pointwise to a $\mathcal{C}^{0}$-curve $\gamma_{\infty}:F\to M$. Suppose that for every $t\in F$, there exists a neighborhood $U\subseteq M$ of $\gamma_{\infty}(t)$ such that $k^{+}_{U}$ is closed. Then the curve $\gamma_{\infty}$ is $k^{+}$-causal, because any pair of points on $\gamma_{\infty}$ can be written as a limit of pairs of points on $\gamma_{n}$. In Example 2.7, we showed that the point $p=(0,0)$ does not admit any neighborhood $U$ such that $k^{+}_{U}$ is closed. Let $\gamma$ be as in Example 2.7, and consider the sequence of curves given by $\gamma_{n}=\gamma|_{(-\epsilon,-1/n]}$. The sequence $(\gamma_{n})_{n}$ converges pointwise (even uniformly, after appropriate reparametrization) to a curve $\gamma_{\infty}:(-\epsilon,0]\to M$ which is just $\gamma$ with $p$ added as its endpoint. However, we showed in Example 2.7 that $(\gamma(t),p)\not\in k^{+}$ for all $-\epsilon<t<0$, hence $\gamma_{\infty}$ is not $k^{+}$-causal. Moving on, we use $k^{+}$-causal curves to define a new causal relation on $M$. ###### Definition 3.5. We define the $\tilde{k}^{+}$-relation as follows: $(p,q)\in\tilde{k}^{+}$ if there exists a $k^{+}$-causal curve from $p$ to $q$. By Lemma 2.6 and transitivity of $k^{+}$, $\tilde{k}^{+}\subseteq k^{+}$. In particular, $\tilde{k}^{+}$ satisfies push-up relative to $\check{I}^{+}$. It is also clear that the concatenation of two $k^{+}$-causal curves is again $k^{+}$-causal, hence $\tilde{k}^{+}$ is transitive. Example 3.3 shows that $\tilde{k}^{+}$ can be strictly smaller than $k^{+}$. 
We finish this section with one of the main results of the paper, namely that $\tilde{k}^{+}=J^{+}$ on smooth (and more generally, causally plain) spacetimes. Recall that a $\mathcal{C}^{0}$-spacetime is called causally plain if $J^{+}$ satisfies push-up relative to $\check{I}^{+}$. ###### Lemma 3.6. Let $(M,g)$ be a $\mathcal{C}^{0}$-spacetime, and $\gamma:F\to M$ a $k^{+}$-causal curve. Then $\gamma$ is also $J^{+}$-causal. ###### Proof. Let $t\in F$ be arbitrary. By [6, Proposition 1.10], there exists a neighborhood $U$ of $p:=\gamma(t)$, called a cylindrical neighborhood, such that $\overline{\check{I}^{\pm}_{U}(p)}\subseteq J_{U}^{\pm}(p)$ (here we mean the closure of the set $\check{I}^{\pm}_{U}(p)\subseteq U$). Let $V\subseteq F$ be a neighborhood of $t$ as in Definition 3.1, and $s\in V$. Suppose $t\leq s$, the other case being analogous. Because $\gamma$ is $k^{+}$-causal, we have $\left(p,\gamma(s)\right)\in k^{+}_{U}$. Let $\sigma:[0,\epsilon)\to U$ be any timelike $\mathcal{C}_{\mathrm{pw}}^{1}$-curve with $\sigma(0)=\gamma(s)$. By push-up, $\operatorname{Im}(\sigma)\subseteq\check{I}^{+}\left(p\right)$. By continuity, $\sigma(u)\to\gamma(s)$ as $u\to 0$. Hence, by the previous and our choice of $U$, $\gamma(s)\in\overline{\check{I}^{+}_{U}(p)}\subseteq J_{U}^{+}(p)$. In other words, $(\gamma(t),\gamma(s))\in J_{U}^{+}$. Since $t,s$ are arbitrary (as long as they are close enough), we conclude that $\gamma$ is $J^{+}$-causal. ∎ ###### Theorem 3.7. Let $(M,g)$ be a causally plain $\mathcal{C}^{0}$-spacetime. Then $\tilde{k}^{+}=J^{+}$. ###### Proof. If $(M,g)$ is causally plain, then $J^{+}$ satisfies push-up, hence $J^{+}\subseteq k^{+}$. In particular, on a subset $U\subset M$, we have $J_{U}^{+}\subseteq k^{+}_{U}$. Assume $(p,q)\in J^{+}$. Then there exists a $g$-causal curve $\gamma:[a,b]\to M$ from $p$ to $q$. 
By continuity, for every $t\in[a,b]$ and every neighborhood $U$ of $\gamma(t)$, there exists a neighborhood $V\subseteq[a,b]$ of $t$ small enough such that $\gamma|_{V}$ is contained in $U$. If $s_{1},s_{2}\in V$ and $s_{1}<s_{2}$, then $\left(\gamma(s_{1}),\gamma(s_{2})\right)\in J^{+}_{U}\subseteq k^{+}_{U}$. Thus $\gamma$ is a $k^{+}$-causal curve, and since $\gamma$ is arbitrary, we conclude that $J^{+}\subseteq\tilde{k}^{+}$. The other inclusion follows from Lemma 3.6, by noting that if two points $p,q$ can be joined by a $J^{+}$-causal curve, then $(p,q)\in J^{+}$. ∎ ## 4 Other causal structures It is possible to repeat the procedure of Section 3 for Sorkin and Woolgar’s $K^{+}$, and define a relation $\tilde{K}^{+}$ based on $K^{+}$-causal curves (the latter class of curves is also studied in [22, Section 3]). On a smooth spacetime, every point admits an arbitrarily small neighborhood $U$ (a convex normal neighborhood) such that $J^{+}_{U}=\overline{\check{I}^{+}_{U}}$. Thus we conclude that on smooth spacetimes, $\tilde{K}^{+}=J^{+}$. Unfortunately, Example 2.7 and Remark 3.4 tell us that $\tilde{K}^{+}$ cannot satisfy push-up with respect to $\check{I}^{+}$ on all $\mathcal{C}^{0}$-spacetimes. That is because, if it did, then $\tilde{K}^{+}\subseteq k^{+}$. Recall that $K^{+}$ is closed and contains $\check{I}^{+}$. It follows that if a curve $\gamma$ is the limit of a sequence of timelike $\mathcal{C}_{\mathrm{pw}}^{1}$-curves, then $\gamma$ must be $K^{+}$-causal, and hence its endpoints would be $k^{+}$-related. However, in Remark 3.4, we saw an example of such a $\gamma$ whose endpoints are not $k^{+}$-related to each other, a contradiction. A different approach is to consider $J^{+}$ to be the more fundamental relation, and then find an appropriate notion of chronological order, say $\hat{I}^{+}$. Ideally, we would like all of the following three properties to hold. (i) $\hat{I}^{+}$ is open and contained in $J^{+}$. (ii) $J^{+}$ satisfies push-up relative to $\hat{I}^{+}$. 
(iii) For every point $p\in M$ and every neighborhood $U$ of $p$, $\hat{I}^{\pm}(p)\cap U\neq\emptyset$. This leaves us with only one choice, namely $\hat{I}^{+}=\operatorname{Int}J^{+}$. To see this is the only option, note first that any relation not contained in $\operatorname{Int}J^{+}$ would either not be open or not be contained in $J^{+}$. But if we choose $\hat{I}^{+}$ strictly smaller than $\operatorname{Int}J^{+}$, then $J^{+}$ cannot satisfy push-up relative to it: for if $(p,q)\in\operatorname{Int}J^{+}$, then we can find a point $r\in\hat{I}^{-}(q)$ close enough to $q$ such that $(p,r)\in J^{+}$. Hence, push-up can only be satisfied if $(p,q)\in\hat{I}^{+}$. This being said, the following example shows that $J^{+}$ does _not_ always satisfy push-up relative to $\operatorname{Int}J^{+}$. ###### Example 4.1. This example is adapted from [6, Ex. 1.11] and [15, Sec. 4.1]. Let $M=(-2,2)\times\mathbb{R}$ with metric given by $ds^{2}=-dt^{2}-2(1-|t|^{1/2})\,dt\,dx+|t|^{1/2}(2-|t|^{1/2})\,dx^{2}.$ This metric is smooth everywhere except on the $x$-axis. A null curve starting at the point $p=(-1,0)$ can be parametrized as $x\mapsto\gamma(x)=(t(x),x)$, and then $\dot{t}=|t|^{1/2}\quad\text{or}\quad\dot{t}=|t|^{1/2}-2.$ By solving this equation, we obtain the boundary of $J^{+}(p)$ (light blue region in Figure 2). Consider the first case of the equation, which is when the null curve $\gamma$ moves upwards and to the right. We are interested in finding the value $x_{1}$ such that $t(x)\to 0$ as $x\to x_{1}$. It can easily be computed by separation of variables: $x_{1}=\int^{x_{1}}_{0}dx=\int^{0}_{-1}\frac{dt}{|t|^{1/2}}=2.$ Let $q=(0,0)$; then $(p,q)\in\operatorname{Int}J^{+}$. Consider a third point $r=(0,3)$, as in Figure 2. Now $(q,r)\in J^{+}$, because the curve $x\mapsto(0,x)$ is null. 
However, $(p,r)\not\in\operatorname{Int}J^{+}$, since there are points of the form $(-\epsilon,3)$ arbitrarily close to $r$ which cannot lie in $J^{+}(p)$, because they lie below the $x$-axis and their $x$-coordinate is larger than $2$. Hence $J^{+}$ does not satisfy push-up relative to $\operatorname{Int}J^{+}$ in this example. Figure 2: The points $p,q,r$ in Example 4.1, which satisfy $(p,q)\in\operatorname{Int}J^{+}$ and $(q,r)\in J^{+}$ but $(p,r)\not\in\operatorname{Int}J^{+}$. Finally, we point out yet another option, which is to define chronological futures via the Lorentzian distance. Recall that in the smooth case, the Lorentzian distance $d(p,q)$, given by maximizing the length of $g$-causal curves between $p$ and $q$, satisfies $d(p,q)>0\iff(p,q)\in\check{I}^{+}$ (see [2, Chap. 4] for details). On $\mathcal{C}^{0}$-spacetimes, only the “$\impliedby$” implication continues to hold. To see this, take the points $p,q,r$ in Example 4.1 (depicted in Figure 2). We can connect $p$ and $q$ by a vertical segment, which is timelike (hence causal) and has length equal to $1$. We can connect $q$ and $r$ by a horizontal segment, which is also causal, and has length $0$. Concatenating the two segments, we get a causal curve of length $1$ from $p$ to $r$, despite the fact that $(p,r)\not\in\check{I}^{+}$. Further, we see that $r\in\partial J^{+}(p)$, and since $\{d>0\}\subseteq J^{+}$ by definition, we conclude that $\{d>0\}$ is not open in this example. Nonetheless, another property of the Lorentzian distance, the reverse triangle inequality $d(p,r)\geq d(p,q)+d(q,r)\text{ if }(p,q),(q,r)\in J^{+},$ does hold for all $\mathcal{C}^{0}$-metrics. We deduce from it that $J^{+}$ satisfies push-up relative to $\{d>0\}$. Hence the combination of $J^{+}$ and $\{d>0\}$ gives us push-up and limit curve theorems, but at the price of non-open futures. 
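The two computations in Example 4.1, the null slopes $\dot{t}=|t|^{1/2}$ and $\dot{t}=|t|^{1/2}-2$, and the value $x_{1}=2$, can be double-checked numerically. The following Python sketch solves the null condition for graph curves $x\mapsto(t(x),x)$ as a quadratic in $\dot{t}$ and then integrates the rightward branch from $p=(-1,0)$ (the step size and tolerances are our choices, not from the text):

```python
import math

def null_slopes(t):
    """Roots u = dt/dx of the null condition for a graph curve x -> (t(x), x)
    in the metric of Example 4.1:
    -u^2 - 2(1 - |t|^{1/2}) u + |t|^{1/2}(2 - |t|^{1/2}) = 0."""
    r = math.sqrt(abs(t))
    a, b, c = -1.0, -2.0 * (1.0 - r), r * (2.0 - r)
    disc = math.sqrt(b * b - 4 * a * c)        # discriminant is identically 4
    return sorted(((-b + disc) / (2 * a), (-b - disc) / (2 * a)))

for t in (-1.0, -0.25, -0.04):
    r = math.sqrt(abs(t))
    lo, hi = null_slopes(t)
    assert abs(hi - r) < 1e-12                 # branch dt/dx = |t|^{1/2}
    assert abs(lo - (r - 2.0)) < 1e-12         # branch dt/dx = |t|^{1/2} - 2

# Integrate dt/dx = |t|^{1/2} from p = (-1, 0); the exact solution is
# t(x) = -(1 - x/2)^2, which reaches t = 0 at x_1 = 2, as in the text.
t, x, h = -1.0, 0.0, 1e-5
while t < -1e-6:
    t += h * math.sqrt(abs(t))                 # forward Euler step
    x += h
assert abs(x - 2.0) < 0.01
```

The closed form $t(x)=-(1-x/2)^{2}$ indeed satisfies $\dot{t}=1-x/2=|t|^{1/2}$ on $[0,2]$, consistent with the separation-of-variables integral in the example.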
## 5 Conclusions Table 1 summarizes the properties of different choices of causal and chronological relation on $\mathcal{C}^{0}$-spacetimes. While each of the choices (rows) is distinct from the others for $\mathcal{C}^{0}$-metrics, they all coincide for smooth metrics. In particular, in the smooth case, the standard causal structure ticks all three boxes. For $\mathcal{C}^{0}$-metrics, on the other hand, no combination of chronological and causal order has all of the three properties that we considered. The newly introduced $(\check{I}^{+},\tilde{k}^{+})$ is the only causal structure that has both push-up and an open chronological relation. Moreover, it defines a causal structure in the sense of Kronheimer and Penrose (see Proposition 2.5). Table 1: Comparison of different causal structures on $\mathcal{C}^{0}$-spacetimes by three important properties.

Chronological order | Causal order | Push-up | Open futures | Limit curves
---|---|---|---|---
$\check{I}^{+}$ | $J^{+}$ | ✗ | ✓ | ✓
$\check{I}^{+}$ | $\tilde{K}^{+}$ | ✗ | ✓ | ✓
$\check{I}^{+}$ | $\tilde{k}^{+}$ | ✓ | ✓ | ✗
${I}^{+}$ | $J^{+}$ | ✗ | ✗ | ✓
${I}^{+}$ | $\tilde{K}^{+}$ | ? | ✗ | ✓
${I}^{+}$ | $\tilde{k}^{+}$ | ✓ | ✗ | ?
$\{d>0\}$ | $J^{+}$ | ✓ | ✗ | ✓
$\operatorname{Int}J^{+}$ | $J^{+}$ | ✗ | ✓ | ✓

It is fair to say that we have exhausted all reasonable possibilities. For if we want the chronological relation to be given by timelike $\mathcal{C}_{\mathrm{pw}}^{1}$-curves, then Example 2.7 and Remark 3.4 tell us that no causal relation can satisfy push-up and at the same time admit a limit curve theorem. We do not know if this changes when using timelike $\mathcal{L}$-curves, but even if so, we would lose the openness of chronological futures instead [11]. If, on the other hand, we choose for the causal relation to be given by $g$-causal curves, and try to define a compatible chronological relation, we run into the same problems by the discussion in Section 4. ## References * [1] L. 
Aké Hau, Armando J. Cabrera Pacheco and Didier A. Solis “On the causal hierarchy of Lorentzian length spaces” In _Classical Quantum Gravity_ 37.21, 2020, pp. 215013–215034 DOI: 10.1088/1361-6382/abb25f * [2] John K. Beem, Paul E. Ehrlich and Kevin L. Easley “Global Lorentzian geometry” 202, Monographs and Textbooks in Pure and Applied Mathematics Marcel Dekker, Inc., New York, 1996 * [3] Luca Bombelli, Joohan Lee, David Meyer and Rafael D. Sorkin “Space-time as a causal set” In _Phys. Rev. Lett._ 59.5, 1987, pp. 521–524 DOI: 10.1103/PhysRevLett.59.521 * [4] Annegret Y. Burtscher and Philippe G. LeFloch “The formation of trapped surfaces in spherically-symmetric Einstein–Euler spacetimes with bounded variation” In _Journal de Mathématiques Pures et Appliquées_ 102.6, 2014, pp. 1164–1217 DOI: 10.1016/j.matpur.2014.10.003 * [5] Piotr T. Chruściel “Elements of causality theory”, 2011 arXiv:1110.6706 [gr-qc] * [6] Piotr T. Chruściel and James D. E. Grant “On Lorentzian causality with continuous metrics” In _Classical Quantum Gravity_ 29.14, 2012, pp. 145001–145033 DOI: 10.1088/0264-9381/29/14/145001 * [7] Mihalis Dafermos and Jonathan Luk “The interior of dynamical vacuum black holes I: The $C^{0}$-stability of the Kerr Cauchy horizon”, 2017 arXiv:1710.01722 [gr-qc] * [8] H. F. Dowker, R. S. Garcia and S. Surya “$K$-causality and degenerate spacetimes” In _Classical Quantum Gravity_ 17.21, 2000, pp. 4377–4396 DOI: 10.1088/0264-9381/17/21/303 * [9] L. García-Heveling “Time functions on Lorentzian length spaces” In preparation * [10] Robert P. Geroch and Jennie H. Traschen “Strings and other distributional sources in general relativity” In _Phys. Rev. D_ 36, 1987, pp. 1017–1031 DOI: 10.1103/PhysRevD.36.1017 * [11] James D. E. Grant, Michael Kunzinger, Clemens Sämann and Roland Steinbauer “The future is not always open” In _Lett. Math. Phys._ 110.1, 2020, pp. 83–103 DOI: 10.1007/s11005-019-01213-8 * [12] S. W. 
Hawking and G. F. R. Ellis “The large scale structure of space-time”, Cambridge Monographs on Mathematical Physics Cambridge University Press, 1973 DOI: 10.1017/CBO9780511524646 * [13] E. H. Kronheimer and R. Penrose “On the structure of causal spaces” In _Proc. Cambridge Philos. Soc._ 63, 1967, pp. 481–501 DOI: 10.1017/s030500410004144x * [14] Michael Kunzinger and Clemens Sämann “Lorentzian length spaces” In _Ann. Global Anal. Geom._ 54.3, 2018, pp. 399–447 DOI: 10.1007/s10455-018-9633-1 * [15] Eric Ling “Aspects of $C^{0}$ causal theory” In _Gen. Relativity Gravitation_ 52.6, 2020, Paper No. 57 DOI: 10.1007/s10714-020-02708-9 * [16] E. Minguzzi “Limit curve theorems in Lorentzian geometry” In _J. Math. Phys._ 49, 2008, pp. 092501–092518 DOI: 10.1063/1.2973048 * [17] E. Minguzzi “Lorentzian causality theory” In _Living Rev. Rel._ 22.1, 2019, pp. 3–204 DOI: 10.1007/s41114-019-0019-x * [18] E. Minguzzi “Time functions as utilities” In _Comm. Math. Phys._ 298.3, 2010, pp. 855–868 DOI: 10.1007/s00220-010-1048-1 * [19] Ettore Minguzzi “$K$-causality coincides with stable causality” In _Comm. Math. Phys._ 290.1, 2009, pp. 239–248 DOI: 10.1007/s00220-009-0794-4 * [20] Clemens Sämann “Global hyperbolicity for spacetimes with continuous metrics” In _Ann. Henri Poincaré_ 17.6, 2016, pp. 1429–1455 DOI: 10.1007/s00023-015-0425-x * [21] Jan Sbierski “The $C^{0}$-inextendibility of the Schwarzschild spacetime and the spacelike diameter in Lorentzian geometry” In _J. Diff. Geom._ 108.2, 2018, pp. 319–378 DOI: 10.4310/jdg/1518490820 * [22] R. D. Sorkin and E. Woolgar “A causal order for spacetimes with $C^{0}$ Lorentzian metrics: proof of compactness of the space of causal curves” In _Classical Quantum Gravity_ 13.7, 1996, pp. 1971–1993 DOI: 10.1088/0264-9381/13/7/023